On AI and the end of the world

30 Mar 2024

[Image: Rishi Sunak and Elon Musk]
This post first appeared on my newsletter, Future Proof. Please subscribe to that to support my writing and get early access.
Honestly, it feels like human civilisation has enough to worry about at present, without a sudden focus on the existential threat of extinction.
But that’s the conversation that we’re having, thanks in part to the AI summit that’s been taking place at Bletchley Park. Bletchley Park, remember, was the codebreaking centre of British intelligence during the Second World War. It’s where the Nazis’ famous Enigma machine — an apparently uncrackable cipher — was finally broken by a team led by Alan Turing. Turing, in turn, became the father of computer science and artificial intelligence, and devised the famous Turing Test, a thought experiment about the point at which AI becomes indistinguishable from just old-fashioned I.
The cryptanalysis machine used by the Hut 8 codebreakers — the bombe, as it was known — is unbelievably rudimentary by the standards of modern computing. It is more akin to a safecracker with a stethoscope than to the 21st-century peril posed by emergent quantum computing. But its development at Bletchley Park has ensured the Buckinghamshire site has remained a place of pilgrimage for technologists. It is here that modern computing was, in some ways, born, and here that it reached its moral apotheosis: defeating the Nazis.
The fact that, after the war ended in 1945, computer science has had a much more chequered ethical run is not the subject of conversation this week. Instead, we are dealing with a moving target: the rise and rise of Artificial Intelligence. What began, in recent years, as little more than a curiosity with the launch of generative text services like ChatGPT has become, alongside geopolitical instability and climate change, one of the defining political challenges of our era. The UK (which no longer really has a dog in the fight of technological manufacturing, though it’s still a key hub for software development) is doing its part. The inaugural Artificial Intelligence Safety Summit was an act of regulatory colonisation: the UK, in front of an audience of 28 countries, staking its claim to be an arbiter of the future pathways of AI.
“We’ve learned, over the years, that having a referee is a good thing,” said Elon Musk, in a strange fireside Q&A conducted by British Prime Minister Rishi Sunak. Sunak, for those outside the UK, is something of a lame duck; his Conservative party trail the Labour party by almost 20 points. There will (almost certainly) be an election next year. But Sunak is a City slicker who married into the Murty dynasty — the Indian billionaires who own Infosys, one of the world’s largest IT companies — and fancies himself a potential referee in this space. Maybe his new gig, outside of the cushy world of Downing Street, will be heading up the Global AI Agency (GAIA, for sure).
Musk, meanwhile, swapped a rambling appearance on The Joe Rogan Experience earlier in the week for a line-up of political and policy dignitaries in Bletchley. His comments on the need for more regulation in the AI space are not universally shared amongst his peers — nor are they entirely consistent with Musk’s personal history. Like the vast majority of Silicon Valley pioneers, he has raged against red tape, whether that’s safety specifications for Tesla, Apple’s data privacy rules or NASA’s issuance of private space contracts. If he truly thinks that having a referee is “a good thing”, then I suspect that his imagined referee looks a lot like Elon Musk.
But in the case of AI he has been consistent since co-founding OpenAI with Sam Altman and others in 2015: there are risks to this technology. Musk jumped on one of his bugbears during his chat with Joe Rogan — a bugbear that I see with increasing frequency within the world of climate scepticism — claiming that “extinctionists” don’t worry about the erasure of the human race. Declining population sizes, he argues, are evidence of this trend towards fatalism.
It’s been interesting to me (as someone who doesn’t massively care about whether populations are increasing or decreasing) to see a shift from a Malthusian paranoia about over-population to a preoccupation with falling birth rates, largely played out by the same agitated personalities. The very people who were having Thanos-inspired nightmares about urban crush a decade or so ago are now arguing for everyone to have a half-dozen kids. Musk ascribes this trend, as with many things, to the “woke mind virus”, but he’s tapping into another discourse of AI: the threat of extinction.
Geoffrey Hinton, the British-born AI pioneer, has been a key voice in this conversation. Frequently labelled the “Godfather of AI” (though I’m not sure how the family dynamics work, if Turing is the father), Hinton reiterated, via the cerebral medium of X, his concerns about AI this week. “I left Google so that I could speak freely about the existential threat,” he tweeted. In his capacity as an elder statesman of the industry, Hinton attended the November conference at Bletchley Park, and indeed was one of the first experts cited in the Musk x Sunak chat.
Now, obviously, questions don’t get much bigger than survival. And this newsletter isn’t about Artificial Intelligence on that sort of scale. If AI eradicates humanity, it will have started writing this blog several days before, I’m sure. But when the world is fixated on AI it’s worth throwing a few thoughts, ideas and questions into the mixer.


Amongst my peers, I’m definitely someone who rates Elon Musk’s basic strategic intelligence. I don’t share many social or political ideas with him, but I think he runs a good electric car company and a good space tech company, and I think he’s generally a good barometer of where the tides of industry are headed. And I certainly agree with him on the issues of AI: there is far less to lose in being too cautious than in being too rash.
For Musk, who calls himself a “Cassandra” in this space, after the prophetess who foretold the fall of Troy but was not believed, there are two strands to this: an ideological objection and a commercial objection. Cynic that I am, I tend to think that the commercial objection is the driving force (and supporting this, Musk announced, shortly after his Sunak powwow, that xAI, his AI company, would shortly release its first model). The notion of a fully automated world, where jobs are optional luxuries, is, necessarily, not a very capitalistic one. And Musk is an arch-capitalist. But that’s a utopia. More pressingly, the journey to get there doesn’t look good for Musk’s business interests. OpenAI (and Microsoft) and DeepMind (and Google) could rapidly overtake Musk’s holdings in value, not to mention the impact that automation could have on the commuter workforce (the bedrock of Tesla’s customers). And if computers can do rocket science at the press of a button, will NASA need to subcontract to SpaceX?
So yes, of course he’s going to push for regulation. The pressure he’s exerting there is akin to the battles between SBF (he of recent FTX conviction fame) and Binance’s CZ. Binance was always perceived as embracing the Wild West element of crypto, deploring government interference in the sector. Bankman-Fried, meanwhile, courted the regulators, attempting to use light-touch regulation as a way of offsetting his own liabilities (ha!) and gaining credibility for his product. But as much as anything, SBF was trying to slow down Binance, which always had a head start on FTX. His push for regulation in the sector would, according to many analysts’ interpretations of the FTX debacle, be his undoing.
AI needs some sort of regulation. It needs peer or governmental review of the safety of products before they reach consumers. That much is clear. But it’s fraught with difficulty. I produced an interview last week with the UK’s AI Minister, Viscount Camrose (yes, the UK’s AI minister is a hereditary peer), ahead of the Bletchley Park summit. He reiterated i) the need for regulation, but ii) the high chance that government regulation would not be sufficiently nimble to meet the demands of a rapidly changing world.
Because AI is moving fast and is only going to move faster, such is the expert opinion. It reminds me of the old maths riddle: if I put an amoeba in a jar at 9am, and it doubles in size every minute until the jar is full at 10am, when was the jar half-full? The answer, of course, is 9:59am. At the moment, AI is the amoeba in the jar and we have gone from 9am to 9:01am. The real challenge will be going from 9:59am to 10am.
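For the numerically inclined, here is a minimal sketch of the riddle in Python (purely illustrative; the jar sizes are the riddle's own numbers, not a forecast), showing why the final minute is the whole game:

```python
# Toy illustration of the doubling-jar riddle: the amoeba doubles
# every minute from 9am, and the jar is full at 10am (minute 60).
JAR_FULL_MINUTE = 60
jar_capacity = 2 ** JAR_FULL_MINUTE

for minute in (30, 50, 55, 59, 60):
    label = "10:00am" if minute == 60 else f"9:{minute:02d}am"
    fraction = (2 ** minute) / jar_capacity
    print(f"{label}: jar is {fraction:.5%} full")

# Output:
# 9:30am: 0.00000% full   (effectively empty)
# 9:50am: 0.09766% full
# 9:55am: 3.12500% full
# 9:59am: 50.00000% full
# 10:00am: 100.00000% full
```

Half an hour in, the jar still looks empty; ten minutes from the end, it is less than a tenth of a percent full. That is the shape of the curve regulators are being asked to track.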
Regulating something moving that fast is going to be impossible. It’s also quite clear that AI has the potential to exit the control of either Big Tech or governments sooner rather than later. Trying to hit a moving target is every archer’s nightmare. Better, then, that we try and regulate the outcome. What do we want to use AI for? What do we not want to use AI for? What would be a good consequence of AI development, and what would be a bad one?
Too much ink is spilled on speculating about timeframes and capacities. There is a tendency to blindly accept the axiom that once an AI can do something, it will do that thing. But the reality is that humans and technology frequently interface in ways that don’t simply push the capabilities of the tech. Remote work and learning are a good example. When covid-19 struck, it became clear almost immediately that we had the capability to do remote work and education at a grand scale. No new technologies were needed; we had already developed what we needed. But as lockdown restrictions eased, most sectors returned to, at the very least, a hybrid model. We shelved that capability, just as it had been shelved for the decade prior to 2020.
“I think that at times there may be too much optimism about technology,” Musk told Sunak, a line that should have drawn cheers. AI, he thinks, will probably be used for good — but we shouldn’t take that for granted. And if your greatest fear is the creep of an extinctionist ideology, is AI really going to help with that?
Musk attributes declining birth rates to a tendency to see the planet as unsavable. But it’s quite clear that birth rates are much more closely correlated with income pressures and housing instability. South Korea — a very expensive place to live — ranks bottom of the global fertility list, alongside Singapore. Rich western countries like the UK, Canada, Germany, Italy, Spain and Switzerland are all in the bottom 50. The US is 142 of 203. The top 14 countries are all in Africa. In the short term, AI is unlikely to relieve income insecurity in countries with a high cost of living — in fact it might increase it. It is also unlikely to do anything about the cost of housing in these countries, other than potentially driving down wages. And it’s certainly not going to do anything to help with childcare (I wouldn’t trust ChatGPT to keep a baby’s fingers out of the plug socket). And so, from the perspective of Musk and the libertarian anti-extinctionists, there is a real need to address Planet Earth issues before Planet AI.
These are the problems Musk sees for society, and they explain his lack of optimism about technology as a cure. For my own part, I worry less about society and more about civilisation. The legacy of humans on earth is the point where society and culture meet and create something tangible, lasting, distinct. It is hard to see what contribution AI could make to civilisation, other than its erasure. We must ask, therefore, what we humans — the over-evolved apes who have been given stewardship of this planet — want to do with the time left to us on the blue dot we call home, and utilise abstract technologies only where they complement that ambition.
Follow me on Twitter, and subscribe to my newsletter.
