Another example of Community Notes on X being useless:
"This is an AI generated video of Vice President Kamala Harris using audio of clips that were never actually stated by the VP,” read one suggested Community Note. “Videos like this are dangerous to those who can not decipher AI generated content from reality.”
X is now basically Gab. Another reason to organize here.
https://www.nytimes.com/2024/07/27/us/politics/elon-musk-kamala-harris-deepfake.html?smid=url-share
I have not seen the video, but perhaps we on fedi can take a different approach to help people recognise fake videos like this.
Make a video where the original is shown in full, then add a commentary in which each segment is analysed and dissected in a way that helps people understand which bits are fake, how they may have been generated, and so on.
Fake and AI-generated content is not going away; let's help people recognise the signs.
Companies, yes, but what about others: people who make fake videos to push conspiracy theories, deepfakes of celebrities, pornography, and so on? The technology can be used for 'good' but also by people with malicious intent; are they likely to label a video as such?
@zleap Yeah, I'm all for mandatory watermarking, but it's at best a speed bump. What we're really going to need is built-in provenance for media.
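To make "built-in provenance" concrete: the core idea is that the capture device (or publisher) attaches a cryptographic tag to the media, so any later edit is detectable. Here's a minimal, hypothetical sketch using a keyed hash; real provenance systems such as C2PA use public-key signatures and signed manifests rather than a shared secret, so treat this as an illustration of the detection principle only.

```python
import hashlib
import hmac

def sign_media(media_bytes: bytes, key: bytes) -> str:
    """Tag the media's SHA-256 digest with an HMAC (stand-in for a real signature)."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, key: bytes, tag: str) -> bool:
    """True only if the media is byte-for-byte what was originally tagged."""
    return hmac.compare_digest(sign_media(media_bytes, key), tag)

# Hypothetical example: a device keys and tags a frame at capture time.
key = b"camera-secret-key"
original = b"frame data straight from the sensor"
tag = sign_media(original, key)

print(verify_media(original, key, tag))        # untouched media verifies
print(verify_media(b"edited frame", key, tag)) # any edit breaks the tag
```

The point of the sketch: verification fails on *any* modification, which is why provenance is stronger than a watermark that an attacker can crop or re-encode away.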
@aerofreak @tchambers