uspol, doubting one's sanity, empiricism, wasting resources?
So yesterday I ended up in a situation where I was in disagreement about what I thought I could clearly hear in a video. Since it sounded perfectly clear to me, and the topic of the related discussion was politically charged, _and_ I have no reason to doubt the other participant's honesty about what they say they are hearing, this is pretty concerning. I see three options:
1. I am so influenced by propaganda my basic senses are broken.
2. The above, but for the other participant.
3. This specific video is an auditory case of blue/black vs white/gold dress.
I think the odds are about 5/80/15. I kind of hope it's 3 though, since it would mean the propaganda is not strong enough to warp the minds of intelligent people that badly. If it is 1, I obviously need to at least make a drastic change in the media I am consuming, and probably re-evaluate a lot of stuff.
This toot is mostly a pre-commitment, so that I follow up on my attempt to settle this. My plan is as follows, mostly in order of effort needed:
0. Look at the auto-generated captions on the YT video. If this confirms what I hear, it would be _extremely weak_ evidence against 1. There might not even be auto-captions enabled for the video, and I am not sure whether manual captions can be distinguished from automatic ones.
1. Extract the crucial part of the sound from the video and re-upload it to YT with no real visuals attached and no suggestive title. Check the auto-captions there. This could be weak to moderate evidence for any of the above.
2. Same, but with systems other than YT. I'll probably pick a couple of options from this page: https://fosspost.org/open-source-speech-recognition/ . Each would be weak to moderate evidence for any of the above; in aggregate they would be strong evidence if they agree.
3. Use Mechanical Turk to ask people about what they hear. **If anyone knows a reasonable non-amazon alternative, let me know.** This would be strong evidence towards something, with the possibility of bias due to people being familiar with the content.
4. Same as above, but cut the audio into separate words to limit bias.
If too many of the steps fail (producing no reasonable output) I can fall back on using the single words to ask friends who are hopefully unfamiliar with the context, but this would be kind of weak. I might skip some later steps if previous steps produce sufficient agreement or if they turn out to be too expensive (I don't really know the rates on mturk...).
Crucially, here are my specific claims about what I clearly hear (which are incompatible with what the other person hears), in order of how confident I am in them:
1. The second word starts with an 'm', not a 'w'.
2. The first word ends with a consonant, most likely an 'ng' sound.
3. The first word starts with 'ha'.
4. The second word starts with a 'my' sound.
This might take a couple of days...
uspol, doubting one's sanity, empiricism, wasting resources?
# Test 0
No captions on the original video. Not a huge disappointment, it wouldn't have been strong evidence anyway.
Before I get to Test 1, I wanted to point out that if it correctly reconstructs the given name present in the chant, this would be _weaker_ evidence for whatever gets recognized, because it might suggest the captioning system recognized the chant and assigned known captions to it (I don't know whether anything like that actually happens). Something like "Hang my pants!" (which is actually what I heard before I corrected for context) would be stronger evidence. Thankfully this won't be an issue in Test 2.
uspol, doubting one's sanity, empiricism, wasting resources?
# Test 1
Let's document this one properly.
## Preparation
Downloaded the video using `youtube-dl`.
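For the record, the command was something along these lines (a reconstruction; the video ID is taken from the filename in the extraction step below, and the default output template produces that filename):
```
# download the original video; youtube-dl names the file <title>-<id>.mp4 by default
youtube-dl "https://www.youtube.com/watch?v=ba0UR7gITrU"
```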
Extracted the relevant part of the sound, from the moment it becomes clear (IMO) to when the video cuts to another part of the crowd.
```
# strip the video and copy the audio track out of the downloaded file, without re-encoding
ffmpeg -i Rioters\ chant\ \'hang\ Mike\ Pence\'\ as\ they\ breach\ Capitol-ba0UR7gITrU.mp4 -vn -acodec copy chant.aac
```
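The command above copies out the whole audio track; the cut down to the relevant fragment was a similar invocation (a sketch, since I didn't note the exact command; START and END are placeholder timestamps and chantCut.aac an illustrative name):
```
# keep only the clearly audible part of the chant, still without re-encoding
ffmpeg -i chant.aac -ss START -to END -acodec copy chantCut.aac
```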
Created a video out of the sound file with an irrelevant name and the least political picture I could find on short notice (a drawing of a mathematical pun in Polish).
```
# loop the still image into a video track, copy the audio as-is, end when the audio ends
ffmpeg -loop 1 -y -i ../kurakLematowskiegoZorna.jpg -i chant.aac -shortest -acodec copy -vcodec libx264 sillyTestVideo.avi
```
Uploaded the result to YT; as of now there are no auto-generated captions present, but the instructions suggest this might take a while.
uspol, doubting one's sanity, empiricism, wasting resources?
On second look, if I'm understanding the UI correctly, it generated captions already, but they are _empty_. There is a warning that it might not generate proper captions if there are multiple people speaking, so maybe that's the problem. That would make the results inconclusive again. Oh well, I can wait just to make sure before declaring that.
uspol, doubting one's sanity, empiricism, wasting resources?
Well, that ended up silly. YT managed to autogenerate captions, but not for the chant: it captioned some barely audible person talking close to the person recording. And all the words it identified were "el bote no". Waiting for Q theories about how this proves these were Mexican antifa who entered the Capitol by ship and had problems escaping.
At least this is a very clearly inconclusive result. I'll continue tomorrow with the other tests, but the odds of me needing to spend actual money on this are rising.
uspol, doubting one's sanity, empiricism, wasting resources?
# Test 2
Apparently speech-to-text is something only professionals usually do, because the tools I managed to find are not especially easy to use. For now I managed to get julius running, following the instructions on its GitHub and substituting the file I wanted for their test file. The audio needed to be converted to 16 kHz mono WAV files (one per channel) as follows:
```
# resample to 16 kHz and split the stereo input into two mono WAV files, one per channel
ffmpeg -i chant.aac -ar 16000 -map_channel 0.0.0 chantL.wav -ar 16000 -map_channel 0.0.1 chantR.wav
```
The two channels were actually indistinguishable, as far as I (and julius, btw) can tell. Unfortunately all it recognized was "details had", which means it probably also picked up some random person talking, treating the chant as background noise.
I'll try cutting the file into smaller bits (one per word, where I think they are clearest), since I will need to do this for the further steps anyway, and check whether this helps; a sketch of the cutting is below.
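For each word this will be something like the following (a sketch; START/END are the per-word timestamps, and word1.wav is an illustrative name):
```
# cut a single word out of the 16 kHz mono file, keeping the format julius already accepts
ffmpeg -i chantL.wav -ss START -to END -acodec copy word1.wav
```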
uspol, doubting one's sanity, empiricism, wasting resources?
tl;dr Did not help.
The first word is recognized as "oh", the second as "five", and the last as "but added". These are so nonsensical (especially the last one) that I believe they provide no evidence one way or another ("five" kinda sounds like "Mike"? pfffft), except for julius being terrible at transcribing chants. _Maaaybe_ this is tiny evidence towards 3., since a chant that's incomprehensible to programs might also be incomprehensible to humans.
I'll try at least one more program of this kind, but at this point I believe mturk will be necessary.
uspol, doubting one's sanity, empiricism, wasting resources!
Next I used this: https://github.com/facebookresearch/flashlight/tree/master/flashlight/app/asr/tutorial
It did not detect any words in the first clip, and detected only the word "one" in both of the other clips. This suggests it was again picking up noise other than the chant. It also didn't detect anything in the full chant.
Finally I tried Vosk. Did not detect anything on any file.
Welp, MTurk it is. But not today.
uspol, doubting one's sanity, empiricism, wasting resources!
There we go, I sent out both the full chant (without repetitions; I just picked the IMO clearest-sounding instance) and the single words (cut from the full chant). I requested 20 answers for every piece of data, which should be enough for reasonable evidence (unless the responses are of atrocious quality). I expect some people will be familiar with the full chant, so answers which correctly identify the given name present there are weaker evidence. With the single words this problem should be somewhat mitigated.
Still hoping I'm not the only one hearing "Hang my pants!" when ignoring the context.
uspol, doubting one's sanity, empiricism, wasting resources!
So let's start with the predictably most disappointing part, the full chant. Four people were clearly familiar with the chant, divided equally between the "standard" interpretations. One more person had an interpretation that was not exactly one of the standard ones, but close enough to make me suspect they were also familiar. Two further people had interpretations that were clearly made through careful listening, somewhat phonetically close to one of the standard interpretations (one each, lol). Ten people did a terrible job, returning nonsense, wild guesses, or just claims that it's unintelligible. Three people had tried, but their interpretations are not close to either of the "standard" ones, and the phonetic similarities are unclear.
This is relatively strong evidence for 3, and against both 1 and 2.
uspol, doubting one's sanity, empiricism, wasting resources!
Before I get to the first word, I need to point out that some people who listened to the full chant were also guessing single words. Among them, only one managed to identify that one of the words was part of the full chant and assigned to it the same word as in their full-chant answer. Because of that, I decided not to remove these people from the results (including the guy who managed to guess).
uspol, doubting one's sanity, empiricism, wasting resources!
@timorl Can you share the raw results before/along with your interpretation please? I think that would be useful for others to "peer review" your analysis.
uspol, doubting one's sanity, empiricism, wasting resources!
@timorl Yeah, once I see the raw data I'll read through your interpretation with that to compare against, and I'll offer any counterpoints or agreements as I see them while I go through it.
uspol, doubting one's sanity, empiricism, wasting resources!
@freemo Yup, I understand. I'll write them down and post the file, so maybe don't read them beforehand if you still can? I'll tag you in the file-toot.