This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
The Iowa caucuses on January 15 officially kicked off the 2024 presidential election. I’ve said it before and I’ll say it again—the biggest story of this year will be elections in the US and all around the globe. Over 40 national contests are scheduled, making 2024 one of the most consequential electoral years in history.
While tech has played a major role in campaigns and political discourse over the past 15 years or so, and candidates and political parties have long tried to make use of big data to learn about and target voters, the past offers limited insight into where we are now. The ground is shifting incredibly quickly at technology’s intersection with business, information, and media.
So this week I want to run down three of the most important technology trends in the election space that you should stay on top of. Here we go!
Generative AI
Perhaps unsurprisingly, generative AI takes the top spot on our list. Without a doubt, AI that generates text or images will turbocharge political misinformation.
We can’t yet be sure just how this will manifest; as I wrote in a story about a recent report from Freedom House, “Venezuelan state media outlets, for example, spread pro-government messages through AI-generated videos of news anchors from a nonexistent international English-language channel; they were produced by Synthesia, a company that produces custom deepfakes. And in the United States, AI-manipulated videos and images of political leaders have made the rounds on social media.”
This includes incidents like a video that was manipulated to show President Biden making transphobic comments and a fake image of Donald Trump hugging Anthony Fauci. It’s not hard to imagine how this kind of thing could change a voter’s choice or discourage people from voting at all. Just look at how presidential candidates in Argentina used AI during the 2023 campaign.
Generative AI won’t just spread disinformation in election campaigns; we might also see the tech used in unexpected ways, such as hyperrealistic robocall programs. Last month Shamaine Daniels, a Democratic congressional candidate from Pennsylvania, announced that her campaign would use Ashley, an artificial-intelligence campaign volunteer, to reach more voters one on one. And just this week, a new super PAC launched Dean.Bot, an AI chatbot emulating Dean Phillips, a Democrat challenging Biden.
Political micro-influencers
Micro-influencers—meaning people with small but highly engaged followings, who are likely influential at a local level—are an emerging feature of political campaigns.
The use of influencers in political messaging is not itself new. Michael Bloomberg’s short-lived presidential campaign played around with getting major influencers to create memes on his behalf, the city of Minneapolis planned to pay local influencers to encourage peace during protests, and the Biden administration has used influencers to advocate for covid-19 vaccination.
But researchers I’ve spoken with over the past few months say the 2024 US presidential election will be the first with widespread use of micro-influencers who don’t typically post about politics and have built small, specific, highly engaged audiences, often composed primarily of one particular demographic. In Wisconsin, for example, such a micro-influencer campaign may have contributed to record voter turnout for the state supreme court election last year. This strategy allows campaigns to plug into a specific group of people via a messenger they already trust. In addition to posting for cash, influencers also help campaigns understand their audience and platforms.
This new messaging strategy seems to operate in a bit of a legal gray area. Currently, there aren’t clear rules on how influencers need to disclose paid posts and indirect promotional material (like, say, if an influencer posts about going to a campaign event but the post itself isn’t sponsored). The Federal Election Commission has drafted guidance, which several groups have urged it to adopt.
While most of the sources I’ve spoken with have talked about the growth of this trend in the US, it’s also happening in other countries. Wired wrote a great story back in November about the impact of influencers on India’s election.
Digital censorship
Crackdowns on speech by political actors are of course not new, but this activity is on the rise, and its increased precision and frequency are the result of technology-enabled surveillance, online targeting, and state control of online domains. The latest internet freedom report from Freedom House showed that generative AI is now aiding censorship, and authoritarian governments are increasing their control of internet infrastructure. Internet blackouts, too, are on the rise.
In just one example, recent reporting by the Financial Times shows that the current Turkish government is tightening internet censorship ahead of elections in March by directing internet service providers to limit access to virtual private networks (VPNs).
More broadly, digital censorship is going to be a critical human rights issue and a core weapon in the wars of the future. Take, for example, Iran’s extreme censorship during protests in 2022, or the ongoing partial internet blackout in Ethiopia.
I’d urge you to keep a close eye on these three technological forces throughout the new year, and I’ll be doing the same—albeit from afar!
On a personal note, this is my last Technocrat at MIT Technology Review, as I’ll be leaving to pursue opportunities outside of journalism. I’ve loved having a home in your inboxes over the past year and am humbled by the trust you’ve given me to cover stories of immense importance, like how police are surveilling Black Lives Matter protesters, the ways technology is changing beauty standards for young girls, and why government technology is so hard to get right.
Stories about how technology is changing our countries and our communities have never been more important, so please keep reading my colleagues at MIT Technology Review, who will continue to cover these topics with expertise, balance, and rigor. I’d also encourage you to sign up for our other newsletters: The Algorithm on AI, The Spark on climate, The Checkup on biotech, and China Report on all things tech and China.
What I am reading this week
OpenAI has removed its ban on military use of its AI tools, according to this great report by Hayden Field at CNBC. The move comes as the company begins work with the Department of Defense on AI.
Many of the world’s biggest and brightest are in Davos this week at the World Economic Forum, and Cat Zakrzewski says the talk of the town is AI safety. I really enjoyed her insider look in The Washington Post at the tech policy concerns that are top of mind.
Researchers from Indiana University Bloomington have found that large language models from OpenAI and others power some malicious websites and services, such as tools that generate malware and phishing emails. I found this write-up from Prithvi Iyer in Tech Policy Press really insightful!
What I learned this week
Google’s DeepMind has created an AI system that is very good at geometry, a historically hard field for artificial intelligence. My colleague June Kim wrote that the new system, called AlphaGeometry, “combines a language model with a type of AI called a symbolic engine, which uses symbols and logical rules to make deductions.” She says the system is “a significant step toward machines with more human-like reasoning skills.”