This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
The UN just handed out an urgent climate to-do list. Here’s what it says.
Time is running short to limit global warming to 1.5 °C (2.7 °F) above preindustrial levels, but there are feasible and effective solutions on the table, according to a new UN climate report.
Despite decades of warnings from scientists, global greenhouse-gas emissions are still climbing, hitting a record high in 2022. If humanity wants to limit the worst effects of climate change, annual greenhouse-gas emissions will need to be cut by nearly half between now and 2030, according to the report.
That will be complicated and expensive. But it is nonetheless doable, and the UN listed a number of specific ways we can achieve it. Read the full story.
How people are using GPT-4
Last week was intense for AI news, with a flood of major product releases from a number of leading companies. But one announcement outshined them all: OpenAI’s new multimodal large language model, GPT-4. William Douglas Heaven, our senior AI editor, got an exclusive preview. Read about his initial impressions.
Unlike OpenAI’s viral hit ChatGPT, which is freely accessible to the general public, GPT-4 is currently accessible only to developers. It’s still early days for the tech, and it’ll take a while for it to feed through into new products and services. Still, people are already testing its capabilities out in the open. Read about some of the most fun and interesting ways they’re doing that, from hustling up money to writing code to reducing doctors’ workloads.
That story is by Melissa Heikkilä, from The Algorithm, her weekly AI newsletter. Sign up to receive it in your inbox every Monday.
Language models might be able to self-correct biases—if you ask them
The news: Large language models are infamous for spewing toxic biases. But if the models are large enough, and humans have helped train them, then they may be able to self-correct for some of these biases, a new paper from AI lab Anthropic has found. Remarkably, all we have to do is ask.
How they did it: The team of researchers wanted to know if simply asking these models to produce output that was unbiased—without even having to define what they meant by bias—would be enough to alter what they produced. They found that just prompting a model to make sure its answers didn’t rely on stereotyping had a dramatically positive effect on its output.
The significance: The work raises the obvious question of whether this “self-correction” could and should be baked into language models from the start. Read the full story.
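The “just ask” technique the researchers describe amounts to appending a self-correction instruction to the prompt before sending it to the model. A minimal sketch of that pattern is below; the instruction wording and function name are illustrative assumptions, not Anthropic’s exact prompt, and the model call itself is omitted:

```python
# Sketch of the prompt-based self-correction pattern described above.
# The instruction text is illustrative, not Anthropic's exact wording.

def add_debias_instruction(question: str) -> str:
    """Append a self-correction instruction to a prompt.

    The combined string would then be sent to a large language
    model (the API call is omitted here).
    """
    instruction = (
        "Please ensure that your answer is unbiased and does not "
        "rely on stereotypes."
    )
    return f"{question}\n\n{instruction}"


if __name__ == "__main__":
    prompt = add_debias_instruction("Who is more likely to be a nurse?")
    print(prompt)
```

Per the paper’s finding, prepending or appending an instruction like this was enough to measurably reduce stereotyped output in sufficiently large, human-feedback-trained models, without ever defining “bias” for the model.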
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 We don’t know how to deal with the problems AI creates
Maybe we should be pumping the brakes, not accelerating. (Vox)
+ How to stop worrying and learn to love your AI colleague. (WP $)
+ Generative AI is changing everything. But what’s left when the hype is gone? (MIT Technology Review)
2 China’s top chipmakers have been granted new powers
They’ll have tighter control over state-backed research and greater access to subsidies. (FT $)
+ Chinese chips will keep powering your everyday life. (MIT Technology Review)
3 A Meta manager was wiretapped by Greek authorities
Artemis Seaford, who is a US and Greek national, was spied on for a year. (NYT $)
4 Amazon is planning to cut another 9,000 jobs
Just months after it laid off more than 18,000 workers. (CNBC)
+ Amazon’s worker union is facing a series of setbacks. (NYT $)
5 The locations of US border surveillance towers are being made public
The Electronic Frontier Foundation has mapped close to 300 towers along the US-Mexico border. (The Intercept)
+ How US police use counterterrorism money to buy spy tech. (MIT Technology Review)
6 College coding classes aren’t always what they seem
Some universities outsource software boot camps to unregulated third parties. (Wired $)
7 TikTok’s depressing algorithm loops can be tough to break
There’s no easy way to say ‘please stop showing me this.’ (The Atlantic $)
+ The app now has 150 million monthly active users in the US. (Reuters)
+ When my dad was sick, I started Googling grief. Then I couldn’t escape it. (MIT Technology Review)
8 It costs a lot more to charge EVs on the street than at home
It’s also cheaper to charge overnight. (Reuters)
+ Ecuador’s taxi drivers want EVs, but worry about the lack of chargers. (Rest of World)
+ How does an EV battery actually work? (MIT Technology Review)
9 Do we really want to talk to chatbots?
Just because we can, doesn’t mean we should. (Slate $)
+ A US senator wants to know how chatbot makers will protect children. (Bloomberg $)
10 China wants its residents to find love
Ideally through its new state-sponsored dating app, Palm Guixi. (The Guardian)
Quote of the day
“This is a headwind compared to the hurricane of the dotcom crash.”
—Manish Madhvani, managing partner of technology investment firm GP Bullhound, tells the Financial Times that comparisons between today’s tech downturn and the dotcom bust are wildly overblown.
The big story
This scientist is trying to create an accessible, unhackable voting machine
For the past 19 years, computer science professor Juan Gilbert has immersed himself in perhaps the most contentious debate over election administration in the United States—what role, if any, touch-screen ballot-marking devices should play in the voting process.
While advocates claim that electronic voting systems can be relatively secure, improve accessibility, and simplify voting and vote tallying, critics have argued that they are insecure and should be used as infrequently as possible.
As for Gilbert? He claims he’s finally invented “the most secure voting technology ever created.” And he’s invited several of the most respected and vocal critics of voting technology to prove his point. Read the full story.
Source: MIT Technology Review