SAN FRANCISCO — Several months ago, Google hired dozens of actors to sit at a table, stand in a hallway and walk down a street while talking into a video camera.
Then the company’s researchers, using a new kind of artificial intelligence software, swapped the faces of the actors. People who had been walking were suddenly at a table. The actors who had been in a hallway looked like they were on a street. Men’s faces were put on women’s bodies. Women’s faces were put on men’s bodies. In time, the researchers had created hundreds of so-called deepfake videos.
By creating these digitally manipulated videos, Google’s scientists believe they are learning how to spot deepfakes, which researchers and lawmakers worry could become a new, insidious method for spreading disinformation in the lead-up to the 2020 presidential election.
For internet companies like Google, finding the tools to spot deepfakes has gained urgency. If someone wants to spread a fake video far and wide, Google’s YouTube or Facebook’s social media platforms would be great places to do it.
Imagine a fake Senator Elizabeth Warren, virtually indistinguishable from the real thing, getting into a fistfight in a doctored video. Or a fake President Trump doing the same. The technology capable of that trickery is edging closer to reality.
“Even with current technology, it is hard for some people to tell what is real and what is not,” said Subbarao Kambhampati, a professor of computer science at Arizona State University who is among the academics partnering with Facebook on its deepfake research.
Deepfakes — a term that generally describes videos doctored with cutting-edge artificial intelligence — have already challenged our assumptions about what is real and what is not.
In recent months, video evidence was at the center of prominent incidents in Brazil, Gabon in Central Africa and China. Each was colored by the same question: Is the video real? The Gabonese president, for example, was out of the country for medical care and his government released a so-called proof-of-life video. Opponents claimed it had been faked. Experts call that confusion “the liar’s dividend.”
“You can already see a material effect that deepfakes have had,” said Nick Dufour, one of the Google engineers overseeing the company’s deepfake research. “They have allowed people to claim that video evidence that would otherwise be very convincing is a fake.”
For decades, computer software has allowed people to manipulate photos and videos or create fake images from scratch. But it has been a slow, painstaking process usually reserved for experts trained in the vagaries of software like Adobe Photoshop or After Effects.
Now, artificial intelligence technologies are streamlining the process, reducing the cost, time and skill needed to doctor digital images. These A.I. systems learn on their own how to build fake images by analyzing thousands of real images. That means they can handle a portion of the workload that once fell to trained technicians. And that means people can create far more fake stuff than they used to.
The technology used to create deepfakes is still fairly new, and the results are often easy to spot. But it is evolving. While the tools used to detect these bogus videos are also evolving, some researchers worry that they won’t be able to keep pace.
Google recently said that any academic or corporate researcher could download its collection of synthetic videos and use them to build tools for identifying deepfakes. The video collection is essentially a syllabus of digital trickery for computers. By analyzing all of those images, A.I. systems learn how to watch for fakes. Facebook recently did something similar, using actors to build fake videos and then releasing them to outside researchers.
Engineers at a Canadian company called Dessa, which specializes in artificial intelligence, recently tested a deepfake detector that was built using Google’s synthetic videos. It could identify the Google videos with almost perfect accuracy. But when they tested their detector on deepfake videos plucked from across the internet, it failed more than 40 percent of the time.
They eventually fixed the problem, but only after rebuilding their detector with help from videos found “in the wild,” not created with paid actors — proving that a detector is only as good as the data used to train it.
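The failure Dessa hit is a classic distribution-shift problem: a detector trained only on one lab’s synthetic videos learns the quirks of that lab’s generator, not of fakes in general. The toy sketch below illustrates the effect with a made-up, one-number “artifact score” per video (everything here is hypothetical, not Dessa’s or Google’s actual system): a threshold detector tuned on “studio” fakes with a strong artifact scores well on that set but falls apart on “wild” fakes whose artifact is subtler.

```python
import random

random.seed(0)

# Toy stand-in for a deepfake detector: each "video" is reduced to a single
# artifact score; fakes from a given generator tend to score higher than reals.
def make_videos(n, fake_mean, real_mean=0.0, spread=1.0):
    data = []
    for _ in range(n):
        data.append((random.gauss(real_mean, spread), 0))  # real video
        data.append((random.gauss(fake_mean, spread), 1))  # fake video
    return data

def accuracy(data, threshold):
    # A video is called "fake" when its score exceeds the threshold.
    return sum((score > threshold) == bool(label)
               for score, label in data) / len(data)

def train_threshold(train_data):
    # Pick the cutoff that best separates real from fake on the training set.
    candidates = [i / 10 for i in range(-30, 60)]
    return max(candidates, key=lambda t: accuracy(train_data, t))

# "Studio" fakes (like a paid-actor dataset) leave a strong, consistent artifact.
studio = make_videos(500, fake_mean=3.0)
# "Wild" fakes come from different tools, with a subtler artifact signature.
wild = make_videos(500, fake_mean=0.8)

t = train_threshold(studio)
print(f"studio accuracy: {accuracy(studio, t):.2f}")  # high
print(f"wild accuracy:   {accuracy(wild, t):.2f}")    # much lower
```

The fix mirrors the article: fold wild examples into the training data so the learned decision boundary reflects the fakes actually circulating, not just the ones produced in a controlled setting.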
Their tests showed that the fight against deepfakes and other forms of online disinformation will require nearly constant reinvention. Several hundred synthetic videos are not enough to solve the problem, because they don’t necessarily share the characteristics of fake videos being distributed today, much less in the years to come.
“Unlike other problems, this one is constantly changing,” said Ragavan Thurairatnam, Dessa’s founder and head of machine learning.
In December 2017, someone calling themselves “deepfakes” started using A.I. technologies to graft the heads of celebrities onto nude bodies in pornographic videos. As the practice spread across services like Twitter, Reddit and PornHub, the term deepfake entered the popular lexicon. Soon, it was synonymous with any fake video posted to the internet.
The technology has improved at a rate that surprises A.I. experts, and there is little reason to believe it will slow. Deepfakes should benefit from one of the few tech industry axioms that have held up over the years: Computers always get more powerful and there is always more data. That makes the so-called machine-learning software that helps create deepfakes more effective.
“It is getting easier, and it will continue to get easier. There is no doubt about it,” said Matthias Niessner, a professor of computer science at the Technical University of Munich who is working with Google on its deepfake research. “That trend will continue for years.”
The question is: Which side will improve more quickly?
Researchers like Dr. Niessner are working to build systems that can automatically identify and remove deepfakes. This is the other side of the same coin. Like deepfake creators, deepfake detectors learn their skills by analyzing images.
Detectors can also improve by leaps and bounds. But that requires a constant stream of new data representing the latest deepfake techniques used around the internet, Dr. Niessner and other researchers said. Collecting and sharing the right data can be difficult. Relevant examples are scarce, and for privacy and copyright reasons, companies cannot always share data with outside researchers.
Though activists and artists occasionally release deepfakes as a way of showing how these videos could shift the political discourse online, these techniques are not widely used to spread disinformation. They are mostly used to spread humor or fake pornography, according to Facebook, Google and others who track the progress of deepfakes.
Right now, deepfake videos have subtle imperfections that can be readily detected by automated systems, if not by the naked eye. But some researchers argue that the improved technology will be powerful enough to create fake images without these tiny defects. Companies like Google and Facebook hope they will have reliable detectors in place before that happens.
“In the short term, detection will be reasonably effective,” said Mr. Kambhampati, the Arizona State professor. “In the longer term, I think it will be impossible to distinguish between the real pictures and the fake pictures.”