
As concern grows over fake news created by fake AI-generated video, the Defense Department is readying for battle against the imagery known as deepfakes.



The U.S. Defense Department is already preparing for the fight against deepfakes: fake audio and video created by artificial intelligence, which burst into the mainstream last year thanks to sites like Reddit.

According to MIT Technology Review, the development of tech to catch deepfakes is currently underway. Through the Media Forensics program run by the U.S. Defense Advanced Research Projects Agency (DARPA), researchers have already built some of the tools to expose these fake AI creations. The Media Forensics program was originally set up to automate existing forensic tools, but its mission shifted amid growing concern over the rise of deepfakes. The program's deepfake mission was announced earlier this year.

In 2017, Reddit users began using what amounts to extremely convincing face-swap technology to insert actor Nicolas Cage into movies he never appeared in. The same technology was also used to insert female Hollywood celebrities into pornographic video clips. After deepfakes found their way into the daily news cycle and outrage grew online, several websites banned deepfakes from their platforms.

However, deepfake creators kept refining the technology, making the fake AI-generated imagery ever more realistic. Earlier this year, an app called FakeApp was released, making the creation of deepfakes even easier. Concern over the tech quickly turned to its possible use in domestic abuse cases, such as generating sham revenge porn, and in creating fake news. In April, BuzzFeed created an Obama deepfake with Jordan Peele, showcasing just how realistic these fake videos were becoming.

Fast forward to today, where the Defense Department and others are developing tools to combat deepfakes. One such tool comes from Professor Siwei Lyu of SUNY Albany and his students. Because the AI behind these face swaps is trained largely on still images, which almost never show people with their eyes closed, Lyu noticed that the faces in deepfake videos rarely blink. That opens an avenue of detection, at least for now.
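Lyu's actual detector is a trained neural network, but the underlying signal can be sketched with a much simpler heuristic: track how open the eyes are in each frame and flag clips where the subject almost never blinks. The sketch below is illustrative only, not DARPA's or Lyu's tool; the eye-aspect-ratio formula is a standard landmark-based measure, and the thresholds and function names are assumptions. Extracting the eye landmarks from video frames (e.g., with a facial-landmark library) is assumed to happen upstream.

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six (x, y) landmarks around one eye.

    The ratio of vertical to horizontal eye openness drops sharply
    when the eye closes, so dips in EAR mark blinks.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # two vertical lid distances over the horizontal eye width
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def blink_rate(ear_series, fps, closed_threshold=0.2):
    """Blinks per minute: count each dip of the EAR below the threshold."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= closed_threshold:
            closed = False
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes if minutes else 0.0

def looks_suspicious(ear_series, fps, min_blinks_per_minute=5.0):
    """Flag clips whose subject blinks far less than a typical person.

    Real people blink roughly 15-20 times per minute; the cutoff here
    is an arbitrary illustrative value.
    """
    return blink_rate(ear_series, fps) < min_blinks_per_minute
```

As the article notes, a heuristic this simple is easy to defeat (a creator can splice in blinks), which is why the real detection work relies on more sophisticated models.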

Additional tools are being developed under the DARPA program to catch other deepfake tells, such as unnatural head and body movements. And while Lyu admits that an experienced deepfake creator or video editor can get around a check like eye-blink analysis, more sophisticated detection techniques are in the works.

With artificial intelligence becoming more and more advanced in general, it’s clear the deepfake battle will be an arms race between the fake video makers and those looking to unmask the face-swapped truth.
