Deepfakes is a trend that has popularized online pornography involving AI-generated face swaps.
Starting out on Reddit, deepfakes are porn clips that use AI to superimpose animated celebrity faces onto porn performers' bodies. Creators trained the software by feeding it footage of Hollywood actors or other famous figures, so the AI could then generate clips limited only by the imagination.
While platforms including Twitter, PornHub and even Reddit are banning deepfake videos to halt their spread, Facebook sees things differently.
The social giant sees a future in the underlying technology, though it isn't interested in porn, of course.
Natalia Neverova and Iasonas Kokkinos of Facebook's AI Research (FAIR), together with INRIA researcher Rıza Alp Güler, revealed the details of a neural network that maps the pixels of humans in 2D video frames onto a surface model of the body. In effect, the team taught the AI how to add "skins" to people in videos, all in real time.
The AI was built by first creating a human-annotated data set and then training a "teacher" network. Human annotators scrutinized 50,000 images of people, marking more than 5 million data points on body parts; these provided the training data for the network.
Once the "teacher" understood how to see people the way humans see other humans, it was used to train a "learner" network to do the same.
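The teacher-to-learner step can be sketched in a few lines. This is a toy illustration, not FAIR's code: the `teacher_predict` stand-in simply hashes pixel colors to a body-part id, where the real teacher is a trained network that in-paints dense labels from sparse human annotations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (hypothetical stand-in): a "teacher" model fit on sparse,
# human-annotated points. Here it just hashes each pixel's color
# to one of 3 body-part ids.
def teacher_predict(image):
    return (image.sum(axis=-1) * 3).astype(int) % 3

# Stage 2: the teacher labels *every* pixel of unannotated images,
# turning sparse human annotations into the dense supervision a
# "learner" network is then trained on.
images = [rng.random((4, 4, 3)) for _ in range(2)]
dense_labels = [teacher_predict(img) for img in images]

print(len(dense_labels), dense_labels[0].shape)  # -> 2 (4, 4)
```

The point of the detour through a teacher is that humans only ever annotate a few hundred points per image, while the learner needs a label for every pixel.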
The end result is an AI that takes a 2D RGB image as input and applies it, as a texture, to any number of humans in a video.
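Applying a 2D image as a "skin" amounts to a texture lookup: for each pixel the network says belongs to a person, it also predicts where on the body surface that pixel sits, and the new image is sampled at that location. A minimal numpy sketch, assuming the network's outputs are a boolean body mask plus per-pixel (u, v) surface coordinates (both hypothetical here, filled with random values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-pixel outputs of the network for one frame:
# a body mask plus (u, v) surface coordinates in [0, 1].
h, w = 6, 8
mask = np.zeros((h, w), dtype=bool)
mask[1:5, 2:6] = True                # 16 pixels belong to the person
uv = rng.random((h, w, 2))           # dense surface coordinates

# Any 2D RGB image can serve as the new "skin" (texture atlas).
texture = rng.random((16, 16, 3))

# Re-texture: for every body pixel, look up the texture at its (u, v).
frame = np.zeros((h, w, 3))
tu = (uv[..., 0] * (texture.shape[0] - 1)).astype(int)
tv = (uv[..., 1] * (texture.shape[1] - 1)).astype(int)
frame[mask] = texture[tu[mask], tv[mask]]

print(frame[mask].shape)  # -> (16, 3): every body pixel got a new color
```

Because the lookup is per-pixel, the same texture can be applied to any number of people in the frame at once.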
So instead of swapping a celebrity's face onto someone else's body, the AI makes it possible to change how anyone in a video looks.
But of course, the applications of this technology go far beyond that.
Because the AI can isolate multiple humans in a video by targeting 2D image maps, the technology could be useful to law enforcement, for example: it could potentially be adapted to extract every person from footage of a crowd and create an index searchable by body language or suspicious movement patterns.
According to the team's white paper, the AI could also prove useful in the gaming industry.

Currently the system runs at 20-26 frames per second on a 240 × 320 image, or 4-5 frames per second on an 800 × 1100 image. With further tweaks and optimization, it could help developers with character modeling in video games, for example by creating realistic augmented-reality characters that people can interact with in real time.