Is Anything Still True? On the Internet, No One Knows Anymore

New tools can create fake videos and clone the voices of those closest to us. ‘This is how authoritarianism arises.’


By Christopher Mims



Creating and disseminating convincing propaganda used to require the resources of a state. Now all it takes is a smartphone.


Generative artificial intelligence can now create fake pictures, clone our voices, and even produce videos depicting and distorting world events. The result: From our personal circles to the political circus, everyone must now question whether what they see and hear is true.


We’ve long been warned about social media’s power to distort our view of the world, and now more false and misleading information can spread there than ever before. Just as importantly, exposure to AI-generated fakes can make us question the authenticity of everything we see. Real images and real recordings can be dismissed as fake.


“When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says David Rand, a professor at MIT Sloan who studies the creation, spread and impact of misinformation.


This problem, which has grown more acute in the age of generative AI, is known as the “liar’s dividend,” says Renee DiResta, a researcher at the Stanford Internet Observatory.


The combination of easily generated fake content and the suspicion that anything might be fake allows people to choose what they want to believe, adds DiResta, leading to what she calls “bespoke realities.”


Examples of misleading content created by generative AI are not hard to come by, especially on social media. One widely circulated and fake image of Israelis lining the streets in support of their country has many of the hallmarks of being AI-generated—including telltale oddities that are apparent if you look closely, such as distorted bodies and limbs. For the same reasons, a widely shared image that purports to show fans at a soccer match in Spain displaying a Palestinian flag doesn’t stand up to scrutiny.


The signs that an image is AI-generated are easy to miss for a user simply scrolling past, who has an instant to decide whether to like or boost a post on social media. And as generative AI continues to improve, it’s likely that such signs will be harder to spot in the future.


“What our work suggests is that most regular people do not want to share false things—the problem is they are not paying attention,” says Rand. People’s attention is already limited, and the way social media works—encouraging us to gorge on content, while quickly deciding whether or not to share it—leaves us precious little capacity to determine whether or not something is true, he adds.


With increasingly hard-to-spot fake content proliferating, it’s no surprise people are now using its existence as a pretext to dismiss accurate information. Earlier this year, for example, in the course of a lawsuit over the death of a man using Tesla’s “full self-driving” system, Elon Musk’s lawyers responded to video evidence of Musk making claims about this software by suggesting that the proliferation of “deepfakes” of Musk was grounds to dismiss such evidence. They advanced that argument even though the clip of Musk was verifiably real. The judge in the case said this line of argument was “deeply troubling.”


More recently, many have claimed, on TikTok and Twitter, that a grisly image of a victim of the attack on Israel by Hamas, tweeted by Israeli Prime Minister Benjamin Netanyahu, was created by AI. Experts say there is no evidence the image was altered or generated by AI.


If the crisis of authenticity were limited to social media, we might be able to take solace in communication with those closest to us. But even those interactions are now potentially rife with AI-generated fakes. The U.S. Federal Trade Commission now warns that what sounds like a call from a grandchild requesting bail money may be scammers who have scraped recordings of the grandchild’s voice from social media to dupe a grandparent into sending money.


Similarly, teens in New Jersey were recently caught sharing fake nude images of their classmates, made with AI tools.


And while these are malicious examples, companies like Alphabet, the parent company of Google, are trying to spin the altering of personal images as a good thing. 


With its latest Pixel phone, the company unveiled a suite of new and upgraded tools that can automatically replace a person’s face in one image with their face from another, or quickly remove someone from a photo entirely.


Making pictures perfect is nifty, but it also hastens the end of authentic personal memories, with their spontaneous quirks and unplanned moments. Joseph Stalin, who was fond of erasing people he didn’t like from official photos, would have loved this technology.


In Google’s defense, it attaches a record of whether an image was altered to the metadata of the photo. But that record is accessible only in the original file and some copies, and it is easy enough to strip out.
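For illustration only, here is a minimal sketch of how little it takes to discard such a record, assuming Python’s Pillow imaging library and a hypothetical file name (neither comes from Google’s announcement). Re-saving a JPEG with Pillow writes out the pixels without the metadata block unless that metadata is explicitly passed along:

    # A sketch, not Google's tooling: Pillow's JPEG writer omits
    # EXIF/metadata unless it is supplied via the exif= argument.
    from PIL import Image

    img = Image.open("edited_photo.jpg")   # hypothetical edited photo
    img.save("stripped_copy.jpg")          # saved copy carries no edit record

Social platforms routinely re-encode uploaded images in much the same way, which is one reason provenance metadata rarely survives sharing.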


The rapid adoption of many different AI tools means that we are now forced to question everything we are exposed to in any medium, from our immediate communities to the geopolitical stage, says Hany Farid, a professor at the University of California, Berkeley, who specializes in digital forensics and image analysis.


To put our current moment in historical context, he notes that the PC revolution made information easy to store and replicate, the internet made it easy to publish, the mobile revolution made it easier than ever to access and spread, and the rise of AI has made misinformation a cinch to create. And each revolution arrived faster than the one before it.


Not everyone agrees that arming the public with easy access to AI will exacerbate our current difficulties with misinformation. The primary counterargument from these skeptics is that there is already vastly more misinformation on the internet than a person can consume, so throwing more into the mix won’t make things worse.


Even if that’s true, it’s not exactly reassuring, especially given that trust in institutions is already at one of the lowest points in the past 70 years, according to the nonpartisan Pew Research Center, and polarization—a measure of how much we distrust one another—is at a high point.


“What happens when we have eroded trust in media, government, and experts?” says Farid. “If you don’t trust me and I don’t trust you, how do we respond to pandemics, or climate change, or have fair and open elections? This is how authoritarianism arises—when you erode trust in institutions.”







