Apple Granted Patent for Deepfakes Based on Reference Images
According to patent documents first spotted by Patently Apple, Apple’s technology uses machine learning to create synthetic images of human faces based on a reference image provided by the user. Once the tech has generated a synthetic face, it can manipulate that face to create changes in expression. Given a reference image or “target shape” depicting a whole person (not just a face), the image generator can also create synthetic images in which the reference person is posed differently.
The generator’s neural network is trained under constraints so that the synthetic image convincingly resembles the reference person, rather than an entirely new, or merely “inspired,” creation. These constraints are imposed using a generative adversarial network (GAN): the generator produces a batch of synthetic images, after which a discriminator attempts to determine whether each image is real or synthetic. The discriminator’s verdicts are then used to further train both the generator and the discriminator.
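To make the adversarial training loop concrete, here is a minimal sketch in NumPy. This is a generic toy GAN, not Apple’s patented system: the “data” is a one-dimensional Gaussian standing in for real reference images, the generator is a single affine transform of noise, and the discriminator is logistic regression. All names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 0.5) stand in for reference images.
def real_samples(n):
    return rng.normal(4.0, 0.5, size=(n, 1))

# Generator: an affine map of Gaussian noise (parameters g_w, g_b).
g_w, g_b = rng.normal(size=(1,)), np.zeros(1)
# Discriminator: logistic regression (parameters d_w, d_b).
d_w, d_b = rng.normal(size=(1,)), np.zeros(1)

lr = 0.05
for step in range(2000):
    n = 32
    z = rng.normal(size=(n, 1))      # noise input to the generator
    fake = z * g_w + g_b             # synthetic samples
    real = real_samples(n)

    # Discriminator update: push scores on real data toward 1, fake toward 0
    # (gradients of binary cross-entropy w.r.t. d_w and d_b).
    p_real = sigmoid(real * d_w + d_b)
    p_fake = sigmoid(fake * d_w + d_b)
    d_w -= lr * (((p_real - 1) * real).mean() + (p_fake * fake).mean())
    d_b -= lr * ((p_real - 1).mean() + p_fake.mean())

    # Generator update: use the discriminator's verdicts to push the
    # fake-sample scores toward 1 (non-saturating generator loss),
    # back-propagated through the discriminator into g_w and g_b.
    p_fake = sigmoid(fake * d_w + d_b)
    g_grad = (p_fake - 1) * d_w      # gradient w.r.t. each fake sample
    g_w -= lr * (g_grad * z).mean()
    g_b -= lr * g_grad.mean()

# After training, generated samples should cluster near the real mean.
final_mean = float((rng.normal(size=(1000, 1)) * g_w + g_b).mean())
```

The same two-step loop (score real vs. synthetic, then update both networks from those verdicts) is what the patent describes, just scaled up to deep convolutional networks over face images.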
Apple is well aware that its technology is plainly associated with deepfakes (though maybe not as comical as the one above). “The generated image is a simulated image that appears to depict the subject of the reference image, but it is not actually a real image,” the patent reads. So much for working together to get rid of photos and videos of people doing things they haven’t actually done.
Some believe Apple’s motivation for the patent comes from a desire to squash facial motion capture and related technologies—not out of the goodness of its heart, of course, but to ensure it doesn’t have any competitors when it comes to things like Memoji. (Apple supposedly spent years buying up Faceshift, PrimeSense, Perceptio, and Metaio for this exact reason.) If that’s the case, Apple may never use the patent; after all, the company patents new technology all the time without ever actually using it.
Others think Apple could be working towards an app or feature that puts a “fun” or “convenient” twist on deepfakes. If that ends up happening, at what point does the Biden Administration’s proposed AI Bill of Rights get involved?