Sasha Luccioni
So I've been an AI researcher for over a decade.
And a couple of months ago, I got the weirdest email of my career.
A random stranger wrote to me saying that my work in AI is going to end humanity.
Now, I get it.
AI is so hot right now.
It's in the headlines pretty much every day, sometimes because of really cool things like discovering new molecules for medicine or that dope pope in the white puffer coat.
But other times, the headlines have been really dark, like that chatbot telling that guy that he should divorce his wife, or that AI meal planner app proposing a crowd-pleasing recipe featuring chlorine gas.
And in the background, we've heard a lot of talk about doomsday scenarios, existential risk and the singularity, with letters being written and events being organized to make sure that doesn't happen.
Now, I'm a researcher who studies AI's impacts on society, and I don't know what's going to happen in 10 or 20 years, and nobody really does.
But what I do know is that there's some pretty nasty things going on right now, because AI doesn't exist in a vacuum.
It is part of society, and it has impacts on people and the planet.
AI models can contribute to climate change.
Their training data uses art and books created by artists and authors without their consent, and their deployment can discriminate against entire communities.
But we need to start tracking its impacts.
We need to start being transparent and disclosing them and creating tools so that people understand AI better, so that hopefully future generations of AI models are going to be more trustworthy, sustainable, maybe less likely to kill us, if that's what you're into.
But let's start with sustainability, because that cloud that AI models live on is actually made out of metal, plastic and powered by vast amounts of energy.
And each time you query an AI model, it comes with a cost to the planet.
Last year, I was part of the BigScience initiative, which brought together a thousand researchers from all over the world to create BLOOM, the first open large language model, like ChatGPT, but with an emphasis on ethics, transparency and consent.
And the study I led that looked at BLOOM's environmental impacts found that just training it used as much energy as 30 homes in a whole year and emitted 25 tons of carbon dioxide, which is like driving your car five times around the planet, just so somebody can use this model to tell a knock-knock joke.
And this might not seem like a lot, but other similar large language models like GPT-3 emit 20 times more carbon.
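For readers who want to sanity-check those equivalences, here is a minimal back-of-the-envelope sketch. The per-kilometre car emission factor and the Earth's circumference are assumptions on my part, not figures from the talk; only the 25 tons and the "20 times more" multiplier come from the text above.

```python
# Rough sanity check of the figures quoted above (a sketch, not the study's method).

BLOOM_TRAINING_CO2_TONNES = 25       # figure quoted in the talk
CAR_EMISSIONS_KG_PER_KM = 0.125      # assumed average passenger car (~125 g CO2/km)
EARTH_CIRCUMFERENCE_KM = 40_075      # equatorial circumference, assumed

# How far would a car have to drive to emit the same 25 tonnes of CO2,
# and how many trips around the planet is that?
km_equivalent = BLOOM_TRAINING_CO2_TONNES * 1000 / CAR_EMISSIONS_KG_PER_KM
trips_around_planet = km_equivalent / EARTH_CIRCUMFERENCE_KM
print(f"~{trips_around_planet:.1f} trips around the planet")  # roughly 5

# "20 times more carbon" for a model like GPT-3 would put it around:
gpt3_estimate_tonnes = BLOOM_TRAINING_CO2_TONNES * 20
print(f"~{gpt3_estimate_tonnes} tonnes of CO2")               # roughly 500
```

With those assumed values, 25 tonnes works out to about 200,000 km of driving, which is indeed roughly five laps of the planet, and the 20x multiplier puts a GPT-3-scale training run in the neighbourhood of 500 tonnes.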