ALCON ENTERTAINMENT VIA ALAMY
Around the same time, Tegmark founded the Future of Life Institute, with a mission to study and promote the safety of artificial intelligence. Depp’s co-star in the film, Morgan Freeman, was on the institute’s board, and Elon Musk, who had a cameo in the film, donated $10 million in its first year. For Cave and Dihal, Transcendence is a perfect example of the multiple entanglements between popular culture, academic research, industrial production, and “the billionaire-funded battle to shape the future.”
On the London leg of his world tour last year, Altman was asked what he meant when he tweeted: “Artificial intelligence is the technology people always wanted.” Standing in the back of the room that day, behind an audience of hundreds, I heard him tell his own origin story: “I was, like, a really nervous kid. I read a lot of sci-fi. I spent many Friday nights at home, playing on the computer. But I’ve always been really interested in artificial intelligence and I thought it would be really cool.” He went to college, got rich, and watched neural networks get better and better. “This can be very good and also very bad. What are we going to do about it?” he recalled thinking in 2015. “I ended up starting OpenAI.”
![](https://wp.technologyreview.com/wp-content/uploads/2024/07/Chapter4.png)
Why should you care that a bunch of nerds are fighting over AI?
OK, you get it: no one can agree on what artificial intelligence is. But what everyone does seem to agree on is that the current debate around artificial intelligence has moved far beyond the academic and the scientific. There are political and moral dimensions in play, which isn’t helped by the fact that everyone believes everyone else is wrong.
Untangling all this is difficult. It can be hard to keep a clear head when sweeping moral views are projected onto the entire future of humanity and anchored to a technology that no one can precisely define.
But we can’t just throw up our hands and walk away. Because whatever this technology is, it’s coming, and unless you live under a rock, you’re going to use it in one form or another. And the form that technology takes, along with the problems it solves and creates, will be shaped by the thinking and motivations of people like the ones you just read about. In particular, by the people with the most power, the most cash, and the biggest loudspeakers.
Which brings me to the TESCREAlists. Wait, come back! I realize it’s unfair to introduce another new concept so late in the game. But to understand how people in power can shape the technologies they build and how they explain them to the world’s regulators and lawmakers, you have to really understand their mindset.
![Timnit Gebru](https://wp.technologyreview.com/wp-content/uploads/2024/07/Heads_TimnitGebru.png?w=867)
WIKIMEDIA
Gebru, who founded the Distributed AI Research Institute after leaving Google, and Émile Torres, a philosopher and historian at Case Western Reserve University, have traced the influence of several techno-utopian belief systems in Silicon Valley. The pair argue that to understand what’s happening with artificial intelligence right now—both why companies like Google DeepMind and OpenAI are racing to build AGI and why die-hards like Tegmark and Hinton are warning of coming catastrophe—the field must be viewed through the lens of what Torres has dubbed the TESCREAL framework.
The clunky acronym (pronounced tes-cree-all) replaces an even more unwieldy list of labels: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism. Much has been (and will be) written about each of these worldviews, so I’ll spare you here. (There are rabbit holes within rabbit holes for anyone who wants to dive deeper. Choose your forum and get your gear ready.)