As the US presidential election nears, the company’s new tech should also help assure people that an image or video is authentic
Microsoft has announced a new tool that’s designed to identify deepfakes and help combat the proliferation of doctored media on the internet. Dubbed Microsoft Video Authenticator, the new technology can analyze both photos and videos, looking for signs that the media was artificially manipulated.
For context, deepfakes are synthetic media created using artificial intelligence to superimpose the likeness of a person onto an existing image or video. Whether produced from scratch or from a template, the doctored result can be practically indistinguishable from the real thing, making people appear to say things they never said or to be in places they have never been.
Microsoft Video Authenticator, which the company hopes will also be useful in the run-up to the US presidential election, provides a percentage chance, or confidence score, estimating whether a photo or video has been artificially manipulated.
“In the case of a video, it can provide this percentage in real-time on each frame as the video plays. It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye,” explains Microsoft’s blog.
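Microsoft hasn't published Video Authenticator's code or API, so the sketch below only illustrates the kind of per-frame reporting the blog describes. The `score_frame` function is a hypothetical stand-in for the trained detector, and OpenCV is used purely to step through frames; the file name and function names are illustrative, not part of Microsoft's tool.

```python
# Minimal sketch (not Microsoft's code): report a per-frame manipulation
# confidence score as a video plays, in the spirit of Video Authenticator.
import cv2  # pip install opencv-python


def score_frame(frame) -> float:
    """Placeholder: return an estimated probability (0-1) that this frame
    was artificially manipulated. A real detector would run inference here,
    looking for blending boundaries and subtle fading/greyscale artifacts."""
    return 0.0


def analyze_video(path: str) -> None:
    cap = cv2.VideoCapture(path)
    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        confidence = score_frame(frame)
        print(f"frame {frame_index}: {confidence:.1%} chance of manipulation")
        frame_index += 1
    cap.release()


if __name__ == "__main__":
    analyze_video("example.mp4")  # hypothetical input file
```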
The tool was created using the public FaceForensics++ dataset, and it was tested on the Deepfake Detection Challenge Dataset; both are regarded as leading resources for training and testing deepfake detection technologies.
However, the Redmond giant expects deepfake and similar technologies to evolve and become more sophisticated. “As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media,” said Microsoft.
Per the company, there aren’t many tools to help readers verify that the media they’re consuming comes from a trusted source and hasn’t been altered. That’s why Microsoft is also launching a second piece of technology that aims to both detect manipulated or doctored content and assure readers that the media they’re seeing is genuine. The tech is made up of two parts.
The first component is integrated into Microsoft Azure and lets content creators add digital hashes and certificates to their content, which travel with it as metadata wherever it spreads across the internet. The second component is a reader that checks the certificates and matches the hashes to confirm that the content is authentic.
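Microsoft hasn't detailed the Azure tooling or the reader, but the general hash-plus-certificate idea can be sketched as follows: the producer hashes the content and signs the hash, and the reader recomputes the hash and verifies the signature. The `publish` and `verify` functions below are illustrative names, not a real API, and Ed25519 signatures stand in for whatever certificate scheme Microsoft actually uses.

```python
# Minimal sketch (not Microsoft's Azure tooling) of hash-plus-certificate
# provenance: sign a content hash at publication, verify it on the reader side.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def publish(content: bytes, private_key: Ed25519PrivateKey) -> dict:
    """Producer side: attach a hash and a signature as metadata."""
    digest = hashlib.sha256(content).digest()
    return {"sha256": digest, "signature": private_key.sign(digest)}


def verify(content: bytes, metadata: dict, public_key) -> bool:
    """Reader side: recompute the hash and check the signature."""
    digest = hashlib.sha256(content).digest()
    if digest != metadata["sha256"]:
        return False  # content was altered after publication
    try:
        public_key.verify(metadata["signature"], digest)
        return True
    except InvalidSignature:
        return False  # metadata was not signed by the claimed source


key = Ed25519PrivateKey.generate()
media = b"original video bytes"
meta = publish(media, key)
print(verify(media, meta, key.public_key()))            # True: untouched
print(verify(b"tampered bytes", meta, key.public_key()))  # False: altered
```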
The tech titan also partnered with the University of Washington, Sensity, and USA Today to launch an interactive quiz that educates people about synthetic media and how to spot deepfakes. Although the quiz is aimed at people in the United States in the run-up to the presidential election, anyone can test themselves.