The conference featured several robots (including one that dispenses wine), but what I liked most of all was how it managed to bring together people working in artificial intelligence from all over the world, with speakers from China, the Middle East and Africa, such as Pelonomi Moiloa, CEO of Lelapa AI, a startup creating AI for African languages. AI can be very US-centric and male-dominated, and any effort to make the conversation more global and diverse is laudable.
But frankly, I didn’t leave the conference feeling confident that AI would play a significant role in advancing any of the UN’s goals. In fact, the most interesting talks were about how AI does the opposite. Climate activist Sage Lenier spoke about how we must not let artificial intelligence accelerate environmental destruction. Tristan Harris, the co-founder of the Center for Humane Technology, gave a fascinating talk connecting the dots between our addiction to social media, the financial incentives of the tech sector, and our failure to learn from past technological booms. And there are still deep-rooted gender biases in technology, Mia Shah-Dand, the founder of Women in AI Ethics, reminded us.
So while the conference itself was about using AI for “good,” I’d like to see more discussion about how increased transparency, accountability, and inclusion could make AI itself good, from development to deployment.
We now know that generating one image with generative AI consumes as much energy as charging a smartphone. I’d like to see more honest discussions about how to make the technology itself more sustainable in order to meet climate goals. And it felt jarring to hear discussions about how AI can be used to help reduce inequalities when we know that so many of the AI systems we rely on are built on the backs of human content moderators in the Global South, who sift through traumatic content while being paid peanuts.
Advocating for the “enormous benefit” of AI was OpenAI CEO Sam Altman, the summit’s keynote speaker. Altman was interviewed remotely by Nicholas Thompson, the CEO of The Atlantic, which, incidentally, just announced a deal to let OpenAI use its content to train new AI models. OpenAI is the company that has fueled the current AI boom, and the interview would have been a great opportunity to ask Altman about all these issues. Instead, the two had a relatively vague, high-level discussion about safety, leaving the audience none the wiser about what exactly OpenAI is doing to make its systems safer. They seemingly just had to take Altman’s word for it.
Altman’s talk came around a week after Helen Toner, a researcher at the Georgetown Center for Security and Emerging Technology and a former OpenAI board member, said in an interview that the board had found out about ChatGPT’s launch through Twitter, and that Altman had on several occasions given the board inaccurate information about the company’s formal safety processes. She also argued that it is a bad idea to let AI companies govern themselves, because enormous profit incentives will always win. (Altman said he “disagree[s] with her recollection of events.”)
When Thompson asked Altman what the first good thing to come out of generative AI would be, Altman mentioned productivity, citing examples such as software developers being able to use AI tools to do their jobs much faster. “We’re going to see different industries become much more productive than they used to be because they can use these tools. And that will have a positive impact on everything,” he said. I think the jury is still out on that one.
Deeper Learning
Why Google’s AI Overviews Get It Wrong