Deepfake Fund Highlights the Fallaciousness of Online Content

On September 26, 2019, Google joined efforts by Facebook, Microsoft, and a consortium of universities to develop the research needed to identify and eventually purge the Internet of “deepfakes.” As the Economist summarized, “events captured in realistic-looking or -sounding video and audio recordings need never have happened. They can instead be generated automatically, by powerful computers and machine-learning software. The catch-all term for these computational productions is ‘deepfakes.’”

Deepfake audio and video already exist across the Internet, superimposing female celebrities into pornographic video clips, creating satirical monologues for politicians, allowing users to insert their own faces into TV clips, and enabling commercial fraud. ZDNet, for example, has reported that an AI-generated voice was successfully used to trick an employee into transferring €220,000 to a nonexistent Hungarian supplier.

News reports suggest that deepfakes remain relatively easy to spot today, but the technology continues to improve rapidly. At the most sophisticated end, feature films like Gemini Man have used CGI and other techniques to create a full-length performance by a decades-younger version of Will Smith. (I find the reliance on the Fresh Prince of Bel-Air a bit off-putting, but that’s just me.)

The potential harm from deepfakes is immeasurable. Teachers and professors are often suspended pending investigation, and creating a deepfake that appears to show misconduct or moral turpitude is now rather easy. Well-timed false information about politicians’ private conversations can easily be used to smear candidates. Links to false content can be quietly shared with employers by disgruntled colleagues to sabotage careers, without the victim ever knowing of the slander.

The practical solution is for media sites to adopt the same verification model that Amazon.com has long used to verify product reviews. The system isn’t perfect, but verified content goes a long way toward discouraging fraudulent content and creating real consequences for misconduct.

Of course, Facebook and Google do not want to lose the billions in advertising revenue associated with free, user-generated content. They have no incentive to actually stop the deepfakes. After all, the amusing deepfakes generate a lot of user traffic.

Instead, these companies are seeking a technological solution to the problem they fueled. By spending roughly $10 million, a consortium hopes to create software that can detect deepfakes. Google is the latest to join Facebook, the Partnership on AI, Microsoft, and academics from Cornell Tech, MIT, the University of Oxford, UC Berkeley, the University of Maryland, College Park, and the University at Albany-SUNY, all of whom are coming together to build the Deepfake Detection Challenge.

Ironically, the funding and competition to detect deepfakes will also serve to improve their overall quality and likely spread the underlying technology. Google and its sister company Jigsaw have at least helped by releasing a dataset made using 28 actors, whose performances were used to generate 3,000 deepfake videos against which detection software can be benchmarked and improved. The dataset and related resources are hosted on GitHub as part of the FaceForensics benchmark.
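To make the benchmarking idea concrete, here is a minimal Python sketch of how a detector might be scored against such a labeled collection of clips. The directory layout, the labels.json file, the 0.5 decision threshold, and the placeholder predict_fake_probability function are all assumptions of mine for illustration, not part of the actual challenge.

    # Illustrative only: score a deepfake detector against a labeled video set.
    # Assumes a folder of clips plus a labels.json file mapping each filename
    # to "real" or "fake"; the detector below is a placeholder to be replaced.
    import json
    from pathlib import Path

    def predict_fake_probability(video_path: Path) -> float:
        # Placeholder: a real entry would run its model over the clip's frames.
        return 0.5

    def benchmark(video_dir: str, labels_file: str, threshold: float = 0.5) -> float:
        labels = json.loads(Path(labels_file).read_text())  # e.g. {"clip_001.mp4": "fake"}
        correct = 0
        for name, label in labels.items():
            prob_fake = predict_fake_probability(Path(video_dir) / name)
            predicted = "fake" if prob_fake >= threshold else "real"
            correct += (predicted == label)
        return correct / len(labels)  # plain accuracy; real benchmarks report more metrics

    # Usage, assuming the files exist: print(benchmark("videos/", "labels.json"))

However a real scoring harness slices the metrics, the structure is the same: run the detector over each clip, compare its verdict with the ground-truth label, and aggregate.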

For those wishing to learn more about these issues from a technical standpoint, a recent paper on the subject was updated on August 26th. See Andreas Rössler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Nießner, FaceForensics++: Learning to Detect Manipulated Facial Images. Here is the paper’s abstract:

The rapid progress in synthetic image generation and manipulation has now come to a point where it raises significant concerns for the implications towards society. At best, this leads to a loss of trust in digital content, but could potentially cause further harm by spreading false information or fake news. This paper examines the realism of state-of-the-art image manipulations, and how difficult it is to detect them, either automatically or by humans. To standardize the evaluation of detection methods, we propose an automated benchmark for facial manipulation detection. In particular, the benchmark is based on DeepFakes, Face2Face, FaceSwap and NeuralTextures as prominent representatives for facial manipulations at random compression level and size. The benchmark is publicly available and contains a hidden test set as well as a database of over 1.8 million manipulated images. This dataset is over an order of magnitude larger than comparable, publicly available, forgery datasets. Based on this data, we performed a thorough analysis of data-driven forgery detectors. We show that the use of additional domain specific knowledge improves forgery detection to unprecedented accuracy, even in the presence of strong compression, and clearly outperforms human observers.

One telling comparison between the paper and the new Deepfake Detection Challenge is the size of the datasets involved. The academic paper draws on a database of over 1.8 million manipulated images. That alone suggests the scope of the problem is already vastly larger than the advertising giants are letting on.
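The paper’s pipeline, roughly, is to locate the face in each frame, crop it, classify the crop with a trained convolutional network (the authors report their best results with an XceptionNet backbone), and aggregate the per-frame scores into a verdict for the whole video. The Python sketch below shows only that crop-then-classify-then-average structure; the Haar-cascade face detector, the toy untrained classifier, and the simple averaging are stand-ins of mine for illustration, not the authors’ code.

    # Sketch of a per-frame detection pipeline in the spirit of FaceForensics++:
    # find the face, crop it, classify the crop, and average the frame scores.
    import cv2
    import numpy as np
    import torch
    import torch.nn as nn

    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    # Toy stand-in; the paper trains an XceptionNet on the FaceForensics++ data.
    classifier = nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 2))  # two logits: real vs. fake

    def video_fake_score(path: str, max_frames: int = 30) -> float:
        # Average per-frame probability that the video has been manipulated.
        cap = cv2.VideoCapture(path)
        scores = []
        while len(scores) < max_frames:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_detector.detectMultiScale(gray, 1.3, 5)
            if len(faces) == 0:
                continue  # no face found in this frame; skip it
            x, y, w, h = faces[0]  # use the first detected face
            crop = cv2.resize(frame[y:y + h, x:x + w], (128, 128))
            tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            with torch.no_grad():
                prob_fake = torch.softmax(classifier(tensor), dim=1)[0, 1].item()
            scores.append(prob_fake)
        cap.release()
        return float(np.mean(scores)) if scores else 0.0

The interesting finding reported in the abstract is not the pipeline itself but that adding this kind of face-specific domain knowledge pushes detection accuracy well past human observers, even under strong compression.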

While the automated detection software proposed by the paper is an important step in the right direction, it is only a small step. Platforms that host user-generated content without running such software should be held accountable for the libels and fraud perpetrated using those platforms, returning the Internet to the common law approach to republication of libel and to a duty of care in commercial transactions.

Second, any limitation on liability should be available only after a platform takes reasonable steps to verify the identity of the person publishing the content. Such laws already exist for pornography (although they are not well enforced) as well as for other sectors of the marketplace. Anonymity and pseudonymity can still be preserved for speakers through a third-party verification system.
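As a purely hypothetical illustration of how such a system could preserve pseudonymity, the Python sketch below has a verification service check a speaker’s identity once, keep that identity in its own records, and hand the platform nothing but a signed token for a pseudonym. The service, the token format, and the HMAC signing are all assumptions made for the sketch; a real deployment would use public-key signatures so the platform could check tokens without being able to mint them.

    # Hypothetical sketch: third-party identity verification that preserves
    # pseudonymity. The verifier keeps the real identity; the platform sees
    # only a pseudonym plus a signature it can check.
    import hashlib
    import hmac
    import json

    VERIFIER_SECRET = b"held-by-the-verification-service"  # illustrative key only
    VERIFIER_RECORDS: dict = {}  # pseudonym -> real identity, held only by the verifier

    def issue_token(real_identity: str, pseudonym: str) -> dict:
        # Run by the verification service after it checks identity documents.
        # The real identity stays in the verifier's records and never travels.
        VERIFIER_RECORDS[pseudonym] = real_identity
        payload = json.dumps({"pseudonym": pseudonym}, sort_keys=True).encode()
        signature = hmac.new(VERIFIER_SECRET, payload, hashlib.sha256).hexdigest()
        return {"pseudonym": pseudonym, "signature": signature}

    def platform_accepts(token: dict) -> bool:
        # Run by the platform before letting the pseudonymous account publish.
        payload = json.dumps({"pseudonym": token["pseudonym"]}, sort_keys=True).encode()
        expected = hmac.new(VERIFIER_SECRET, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, token["signature"])

    token = issue_token("Jane Q. Public", "concerned_citizen_42")
    print(platform_accepts(token))  # True: a verified speaker the platform cannot unmask

The design point is simply that accountability and pseudonymity are not mutually exclusive: the platform can confirm that a real, verified person stands behind the content without ever learning who that person is.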

Changes such as these would do far more than a $10 million fund to detect deepfakes. But the competition is a start, and it helps highlight the pernicious risk posed by false information when the fraud is hard to detect.

In the old days, it was lies, damned lies, and statistics. Today it has become lies, social media, and videos. What will be left for tomorrow?
