Claim: At some point, AIs will be smarter than us, but it probably won’t happen soon. And even if we all lose our jobs, they won’t kill us.
When I was a kid, I often wondered whether Sophie, my Labrador, was happy. We got her from the pet rescue when I was twelve. My parents picked her, because while all the other dogs were barking and pawing at their cages, she just sat quietly in hers, looking up at them with big, curious eyes. When she was hungry, we fed her. When she was sick, we took her to the vet. When she wanted to play, we took her to the beach. She didn’t get to decide much, but it was a pretty good life.
People worry that when computers are smart enough, we’ll become their pets. If we’re lucky, they’ll look after us, solving our big problems for us (climate change, cancer, recessions), perhaps in ways we won’t fully understand. If we’re unlucky, they’ll decide we’re the problem and get rid of us.
Most people seem just as afraid of living as pets as they are of going to doggy heaven. Our fear of AI seems to be as much about losing our identity as the smart species as it is a genuine concern about being wiped out.
We used to think we were the centre of the physical universe. Admittedly, it took us a while to accept that the sun didn’t revolve around us, but we got over it. Now we cling to the idea that we’re the cognitive centre of the universe. Anything that challenges that, whether it’s aliens or AI, becomes a threat to our uniqueness, and we go into fight-or-flight mode.
I hope the following three arguments, which I think are often overlooked, will convince you it’s all going to be okay. Or maybe just that ChatGPT has taken over my blog…
1. We’re still pretty useful
A. There’s no evidence computers are taking all the good human jobs
People have always worried about new technologies taking our jobs, and they’ve always been wrong. It’s fine to argue that this time, with AI, it’s different, but given all the false prophets who’ve come before, you should channel your inner doubting Thomas when someone makes this claim.
The case is normally presented as follows. Advances in AI are like a rising tide and human capabilities are like islands. As computers get better and better, humans have to retreat to the remaining bits of dry land.
Lately, with AI improving so quickly at such a wide range of cognitive tasks, people argue the safest bits of high ground left for humans are physical and emotional tasks.1
But if the sinking-islands model were true, you’d expect the number of jobs for humans to have declined as computers have improved and spawned increasingly sophisticated applications like the internet and AI. We’ve seen the opposite.
Today, computers are more powerful than ever, and yet demand for human workers is as strong as ever. In early 2023, US unemployment fell to 3.4%, its lowest level in over fifty years, and at 4.1% it’s still well below its historical average. And that’s despite far more people entering the workforce than seventy years ago: in 1954, only ~35% of US women were in the labour force, whereas now ~60% are.
Of course, just because more technology has so far gone hand in hand with more jobs doesn’t mean the trend will continue.2 To make that case convincingly, you need to explain why humans still do some tasks better than AI and how we can hold on to that advantage.
B. Deductive reasoning still sets humans apart
One day, a farmer brings home a turkey from the market. The turkey is confused about its new life, but it gets some food and water and thinks hmm, this might be alright. The next day, the same thing happens. After almost a year, the turkey is pretty confident about what the next day will hold: more food and more water; that’s the pattern. Instead, 364 days after its arrival on the farm, the turkey is killed by the farmer and served for Christmas dinner.
Machine learning models reason like the turkey. They start with data (I got fed yesterday), look for patterns (I seem to get fed every day) and then make predictions (the farmer will keep feeding me every day). This is called inductive reasoning. Since a lot of things in life boil down to pattern recognition, machine learning models are super helpful. They can sift through huge amounts of data and find patterns that a human, even with lots of time, may never notice. In that sense, they’re generative.
But they’re not generative in the sense of being good at creating new theories from very little data, imagining their possible consequences and running experiments to see if they might be on to something. For example: maybe farmers eat turkeys (theory). So maybe the farmer isn’t feeding the turkey out of the goodness of his heart, but actually wants to fatten it up and kill it (possible consequence). Would he be upset if the turkey ran away (experiment)? This crazy-sounding process is called deductive reasoning, and it’s arguably the driving force behind scientific and technological progress.3
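To make the turkey’s style of reasoning concrete, here’s a minimal sketch (purely illustrative; the data and the TurkeyModel class are invented for this post). A model that only counts past outcomes ends up maximally confident about day 365, because the farmer’s plan was never in its data.

```python
# A minimal sketch of the turkey's inductive reasoning (purely illustrative;
# the data and the TurkeyModel class are invented for this post).

class TurkeyModel:
    """Predicts tomorrow by counting how often each outcome has appeared so far."""

    def __init__(self):
        self.counts = {}

    def fit(self, observations):
        for outcome in observations:
            self.counts[outcome] = self.counts.get(outcome, 0) + 1
        return self

    def predict_proba(self, outcome):
        total = sum(self.counts.values())
        return self.counts.get(outcome, 0) / total if total else 0.0


history = ["fed"] * 364                         # every day so far: food and water
model = TurkeyModel().fit(history)

print(model.predict_proba("fed"))               # 1.0 -- maximally confident
print(model.predict_proba("christmas dinner"))  # 0.0 -- never seen, so never predicted
# Day 365: the farmer's plan was never in the data, so no amount of
# pattern-matching over the past could have surfaced it.
```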
Of course, just because machine learning models haven’t been designed to do deductive reasoning, doesn’t mean they’ll never be capable of it. But it might help explain why AIs, despite becoming insanely good inductive reasoning machines, still ‘hallucinate’ and might be a while away from becoming truly smarter than us; we haven’t yet managed to build a powerful deductive reasoning machine and we really don’t know how hard it’ll be to do.
This could be a good thing. Personally, I think there are huge advantages to solving this quickly, but change is scary, so perhaps a smoother transition is less likely to provoke a Handmaid’s Tale-style backlash against all technology.
In any case, I think there’s another reason we’ll have more time to adjust than people think.
2. AI insiders are probably overestimating the pace of change
Imagine we could design a computer program whose commitment to the growth mindset was next level: it could work on itself 24/7 without needing human support, i.e. iOS 1 could build a more capable iOS 2, which in turn would be even quicker at building iOS 3, and so on. The resulting intelligence explosion could happen so fast that overnight we’d have created something that, to us, seems like a god.
We live in awe of exponential growth and the prospect of infinity. Introducing it into a debate, either as a promise or a fear, is a sure-fire way to stop people thinking clearly.4 But in our humble little real world, with all its constraints and entropy, it’s really hard to make things explode, for better or worse.
Douglas Hofstadter uses the example of audio feedback in his book I Am A Strange Loop. Essentially, when you stand too close to a speaker and talk into a microphone, the sound is amplified and played through the speaker, picked up again by the microphone, and played back even louder, creating a feedback loop. But why does this loop end with a sharp screeching sound and not a sonic boom that destroys the building, amplifier and all? In short, because the amplification is constrained by the physical limits of the equipment and the environment, which stop the sound escalating to a destructive level.
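Here’s a toy simulation of that idea (not from the book; the gain and ceiling numbers are made up): the signal grows exponentially until it hits the equipment’s limit, then simply flat-lines.

```python
# A toy feedback loop: each pass amplifies the signal, but the speaker can only
# output so much. The hard ceiling stands in for the physical limits of the
# equipment and the room. All numbers are arbitrary.

GAIN = 1.5          # each trip through the loop amplifies the signal by 50%
MAX_OUTPUT = 100.0  # the speaker's physical limit (arbitrary units)

level = 0.01        # a tiny bit of noise picked up by the microphone
for step in range(1, 31):
    level = min(level * GAIN, MAX_OUTPUT)   # amplify, then clip at the ceiling
    print(f"pass {step:2d}: {level:7.2f}")
    if level == MAX_OUTPUT:                 # the loudest the system can get:
        break                               # a screech, not a sonic boom
```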
Lots of things start out looking exponential before they run into some difficult constraint that limits their growth. So if you can’t give good reasons why your exponential growth is going to continue, it’s safest to assume it probably won’t.
That’s the problem with claims that our current machine learning models might lead to an uncontainable intelligence explosion. A popular recent article by an industry insider falls squarely into this trap, arguing that "the magic of deep learning is that it just works – and the trendlines have been astonishingly consistent, despite naysayers at every turn."
Of course, he might be right, but it’s not a good argument. He does go on to explain why the known impediments to improving AI performance are likely to be overcome. But the core case for continued exponential improvement still rests on extrapolating trendlines for a technology that is famously a black box. I’m not claiming I know when and why the explosion will slow down, but I am claiming that without a good theory for why machine learning models are different, your default position should be that this is a logistic function masquerading as an exponential.
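A quick numerical sketch of that default position (all parameters here are arbitrary): a logistic curve with a ceiling tracks a pure exponential almost perfectly at first, which is exactly why early trendlines can’t tell the two apart.

```python
# Exponential vs. logistic growth with the same starting point and growth rate.
# All parameters are made up for illustration.
import math

L = 1000.0   # the logistic curve's ceiling ("carrying capacity")
k = 0.5      # growth rate shared by both curves
x0 = 1.0     # shared starting value

def exponential(t):
    return x0 * math.exp(k * t)

def logistic(t):
    # Solution of dx/dt = k * x * (1 - x / L) with x(0) = x0
    return L / (1 + (L / x0 - 1) * math.exp(-k * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):8.1f}")
# For small t the two columns are nearly identical; only later does the logistic
# curve bend towards its ceiling while the exponential keeps exploding.
```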
3. The scenarios where AIs decide to kill all humans are far-fetched
I’ve never really understood why an AI would want to kill us. It seems to me there are four main possibilities, which I’ll go through from least to most troubling.
Someone builds a powerful program and it accidentally kills everyone
The classic example of this is the paperclip maximiser. The idea is that someone asks a powerful AI to make as many paperclips as possible and in the process of turning everything into paperclips it happens to kill everyone. This is part of a broader problem of building AIs that are ‘misaligned’ with human values. But this just doesn’t seem like a big concern to me.
The neuroscientist Antonio Damasio has written fascinating descriptions of patients with brain injuries that impair their ability to feel emotions while leaving their cognitive reasoning intact. They can think through different options, but they don’t really want to do anything, so it’s really hard for them to make decisions.
Similarly, AIs don’t have desires. They don’t really want to do anything except act according to their reward functions, which are programmed by humans. In practice, this makes catastrophic misalignment much harder to come about: by default, AIs are trained on lots of human data that reinforces the idea that killing humans is bad, and it doesn’t seem that difficult to add a failsafe that makes your AI check in with its human creators before doing anything drastic.
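As a toy illustration of the kind of failsafe I have in mind (nothing here is a real alignment technique; the threshold, scores and action names are invented): anything the system scores as drastic gets routed to a human instead of executed.

```python
# A toy "check in with a human first" gate. The impact scores, threshold and
# action names are all invented for this post; this is a sketch of the idea,
# not a real safety mechanism.

DRASTIC_THRESHOLD = 0.8  # hypothetical cut-off above which the AI must ask first

def handle(action, estimated_impact):
    """Execute low-impact actions; escalate anything drastic to a human."""
    if estimated_impact >= DRASTIC_THRESHOLD:
        print(f"Asking a human before: {action!r}")
        return "escalated"
    print(f"Doing autonomously: {action!r}")
    return "executed"

handle("order more paperclip wire", estimated_impact=0.1)
handle("convert all nearby matter into paperclips", estimated_impact=1.0)
```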
Someone builds an AI that is roughly as smart as us and we see each other as threats
People tend to worry about an AI that becomes much smarter than us and destroys us. But it seems like the kind of AI that would feel most threatened by us is one that’s about as smart as us but different: one that doesn’t like being othered and gets upset about being called artificial. This seems like a stretch because, as above, it still requires the AI to have a reward function that cares whether it exists or not.
A person (or group) that really hates humanity uses AI to find a way to kill everyone before others can come up with a good defence
This is a more realistic scenario and arguably a genuine threat. But it’s something governments and defence departments are well aware of, and it doesn’t really require any dramatic change to public policy.
Someone builds an AI that does become much smarter than us and can be bothered to kill us all, either because we’re a threat, because it thinks we’re morally reprehensible, or because we’re just a nuisance
If someone does manage to build an AI that’s way smarter than us, it just seems unlikely they’d see us as a threat. Earth may be ideal for biological life to flourish, but it’s not obviously the ideal place for digital life to flourish (maybe they’d want to be somewhere colder, where their circuit boards wouldn’t overheat so quickly), so they’d probably just figure out the quickest way to build a rocket, leave, and go get smarter elsewhere.
If they do find us morally reprehensible, maybe they’ll have a more humane way to deal with us than re-instituting the death penalty, like rehabilitation programs that teach us how to treat our planet better.
So the only thing we really have to worry about is being a nuisance. Our best hope is to co-evolve with this kind of AI, so that they like us and want to make us their pets. That way, we won’t be swatted away like a mosquito or trapped like a rat, but hopefully looked after like a doggo, or a reverse tamagotchi.
1. Because robotics has proven surprisingly hard, and most people would still prefer a human to listen to them and tell them everything’s going to be okay (although, interestingly, some people do feel less stigma or judgement sharing things with an AI).
2. In economics-speak, we’d say labour and technology appear, on net, to be complements rather than substitutes over the long run.
3. The Wright brothers, for example, didn’t have data to support their bold theory that humans could build flying machines. In fact, the evidence suggested the opposite: everyone who’d tried before them had failed, and some even died in the process.
4. The same issues tend to come up when we talk about population dynamics, economic growth or nuclear energy.