The ability of AI tools to readily generate highly convincing "deepfake" text, audio, images, and (soon) video is, arguably, one of the greatest near-term concerns about this emerging technology. Fundamental to any proposal to address this issue is the ability to accurately distinguish "deepfake" content from "genuine" content. Broadly speaking, there are two sides to this ability:

* Reducing false positives. That is, reducing the number of times someone mistakes a deepfake for the genuine article. Technologies to do so include watermarking of AI images and digital forensics.

* Reducing false negatives. That is, reducing the number of times one mistakes genuinely authentic content for a deepfake. There are cryptographic protocols to help achieve this, such as digital signatures and other provenance authentication technology (see the sketch after this list).
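A minimal sketch of the signature half of this idea follows (Python, using the third-party `cryptography` package; the keys and workflow are illustrative and not tied to any particular provenance standard): the publisher signs the content bytes once, and anyone holding the matching public key can later check that the content is unaltered and came from that key holder.

```python
# Minimal sketch: a publisher signs content at creation time, and a viewer
# later verifies the signature against the publisher's public key.
# Uses the third-party "cryptography" package; the workflow is illustrative
# and not tied to any particular provenance standard.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a keypair and sign the content bytes.
signing_key = Ed25519PrivateKey.generate()
public_key = signing_key.public_key()

content = b"raw bytes of the photo or article"
signature = signing_key.sign(content)

# Viewer side: verification succeeds only if the content is byte-for-byte
# unmodified and was signed by the holder of the matching private key.
try:
    public_key.verify(signature, content)
    print("Signature valid: content is as the key holder published it.")
except InvalidSignature:
    print("Signature invalid: content altered or not from this key holder.")
```

Note what this does and does not establish: a valid signature reduces false negatives (genuine content carries a checkable attestation), while unsigned content is merely unverified rather than proven fake.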

Much of the current debate about deepfakes has focused on the first aim (reducing false positives), where the technology is quite weak (AI, by design, is very good at training itself to pass any given metric of inauthenticity, as per Goodhart's law). However, the second aim is at least as important, and arguably much more technically and socially feasible, with the adoption of cryptographically secure provenance standards. One promising such standard is the C2PA standard (c2pa.org/), which has already been adopted by several major media and technology companies (though, crucially, social media companies will also need to buy into such a standard and implement it by default for users for it to be truly effective).

@tao The cryptography infrastructure would be broken in no time and then the courts would have to face "cryptographically secure" fakes.

Knowing ahem.. state actors, the thing would be backdoored through and through so the services could do their thing whenever they need.

@dpwiz Badly designed cryptosystems can be broken in a number of ways, but well designed ones, particularly ones with a transparent implementation and selection process, are orders of magnitude more secure. Breaking SHA-2 for instance - which the C2PA protocol uses currently - would not simply require state-level computational resources, but a genuine mathematical breakthrough in cryptography.
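To make concrete what "breaking SHA-2" would entail, here is the hash itself via Python's standard hashlib (a sketch; the input strings are just placeholders): the breakthrough in question would be finding a different input that produces the same digest, for which nothing meaningfully faster than brute force (roughly 2^256 work for a preimage, 2^128 for a collision in SHA-256) is publicly known.

```python
# SHA-256 (a member of the SHA-2 family) via the standard library.
import hashlib

original = b"claim: this photo was taken by camera X at time T"
tampered = b"claim: this photo was taken by camera Y at time T"

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(tampered).hexdigest())
# Any change to the input yields an unrelated digest.  "Breaking" SHA-2 would
# mean finding a different input with the SAME digest; no publicly known
# method is meaningfully faster than brute force (~2^256 for a preimage,
# ~2^128 for a collision).
```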

Perhaps ironically, reaching the conclusion "all cryptosystems can be easily broken" from historical examples of weak cryptosystems falling to attacks is itself an example of eliminating false positives (trusting a cryptosystem that is weak) at the expense of increasing false negatives (distrusting a cryptosystem that is strong).


@tao It's in their published threat model / security assumptions:

> Attackers do not have access to private keys referenced within the C2PA ecosystem (e.g., claim signing private keys, Time-stamping Authority private keys, etc.). They may, however, attempt to access these keys via exploitation techniques...

And later, in the spoofing section.

Proper key handling is notoriously difficult. And with incentives like these, attackers would be motivated to hit it even harder than some DRM system.

And anyway, no need for a breakthrough if you can walk in with a gag order and do what you need.
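For concreteness, a toy sketch of the threat being described (Python again, continuing the illustrative Ed25519 example rather than the actual C2PA key hierarchy): a signature only ever proves possession of the private key, so a leaked or coerced claim-signing key produces fakes that verify cleanly.

```python
# Sketch of the threat: a signature proves possession of the private key,
# not the truth of the content.  If the key leaks (or its use is compelled),
# fabricated content verifies just as cleanly as genuine content.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

stolen_key = Ed25519PrivateKey.generate()     # stands in for an exfiltrated claim-signing key
trusted_public_key = stolen_key.public_key()  # the key the ecosystem already trusts

fake_content = b"fabricated image bytes"
signature = stolen_key.sign(fake_content)

# Verification passes: the cryptography is intact, the trust assumption is not.
trusted_public_key.verify(signature, fake_content)
print("Verified -- which only ever meant: signed by the holder of this key.")
```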
