This story originally appeared in The Algorithm, our weekly AI newsletter. To get stories like this in your inbox first, sign up here.
Knock, knock.
Who’s there?
An AI with generic jokes. Researchers at Google DeepMind asked 20 professional comedians to write jokes and comedy routines using popular AI language models. The results were mixed.
The comedians said the tools were useful for producing an initial “vomit draft” they could iterate on, and for structuring their routines. But the AI was unable to produce anything original, stimulating, or, above all, funny. My colleague Rhiannon Williams has the full story.
As Tuhin Chakrabarty, a computer science researcher at Columbia University specializing in artificial intelligence and creativity, told Rhiannon, humor often relies on being surprising and incongruous. Creative writing requires its creator to deviate from the norm, while LLMs can only emulate it.
This becomes quite clear in how artists approach AI today. I have just returned from Hamburg, which hosted one of Europe’s biggest events for creatives, and the message I got from the people I spoke to was that AI is too flawed and unreliable to fully replace humans, and is best used as a tool to enhance human creativity.
Right now, we’re in a moment where we’re deciding how much creative power we’re comfortable handing over to AI companies and their tools. After the boom that began in 2022 when DALL-E 2 and Stable Diffusion first appeared, many artists raised concerns that AI companies were using their copyrighted work without consent or compensation. Tech companies argue that anything on the public internet falls under fair use, a legal doctrine that allows copyrighted material to be reused in certain circumstances. Artists, writers, image companies, and the New York Times have filed lawsuits against these companies, and it will likely be years before we have a clear answer as to who is right.
Meanwhile, the court of public opinion has shifted a lot in the past two years. Artists I’ve interviewed recently say they were harassed and ridiculed for protesting AI companies’ data-scraping practices two years ago. Now the general public is more aware of the harms associated with AI. In just two years, audiences have gone from mocking critics of AI-generated images to sharing viral social media posts about how to opt out of AI scraping — a concept that was foreign to most ordinary people until very recently. Companies have benefited from this shift too. Adobe has successfully promoted its AI offerings as an “ethical” way to use the technology without having to worry about copyright infringement.
There are also several grassroots efforts to shift AI’s power structures and give artists more control over their data. I have written about Nightshade, a tool created by researchers at the University of Chicago that lets users add an invisible poison attack to their images so that they break AI models when scraped. The same team is behind Glaze, a tool that lets artists mask their personal style from AI copycats. Glaze has been integrated into Cara, a buzzy new art portfolio site and social media platform that has seen a surge of interest from artists. Cara presents itself as a platform for art made by people and filters out AI-generated content. It gained almost a million new users in a matter of days.
All of this should be reassuring news for any creative person worried about losing their job to a computer program. And the DeepMind study is a great example of how AI can actually be useful for creatives: it can take over some of the boring, mundane, formulaic parts of the creative process, but it can’t replace the magic and originality that humans bring. AI models are limited by their training data and will forever reflect only the zeitgeist at the time they were trained. That gets old pretty quickly.
Now read the rest of The Algorithm
Deeper Learning
Apple is promising personalized AI in a private cloud. Here’s how that will work.
Last week, Apple unveiled its vision for supercharging its product lineup with artificial intelligence. The key feature, which will run across nearly its entire product line, is Apple Intelligence, a suite of AI-based capabilities that promises to deliver personalized AI services while keeping sensitive data secure.
Why this matters: Apple says its privacy-focused system will first attempt to fulfill AI tasks locally on the device itself. If any data is exchanged with cloud services, it will be encrypted and then deleted afterward. It’s a stance that offers an implicit contrast with companies like Alphabet, Amazon, and Meta, which collect and store vast amounts of personal data. Read more from James O’Donnell here.
Bits and Bytes
How to opt out of Meta’s AI training
If you post or interact with chatbots on Facebook, Instagram, Threads, or WhatsApp, Meta may use your data to train its generative AI models. Even if you don’t use any of Meta’s platforms, it may still scrape data such as your photos if someone else posts them. Here’s our quick guide on how to opt out. (MIT Technology Review)
Microsoft’s Satya Nadella is building an AI empire
Nadella is going all in on AI. His $13 billion investment in OpenAI was just the beginning. Microsoft has become “the world’s most aggressive amasser of AI talent, tools, and technology” and has begun building an in-house competitor to OpenAI. (The Wall Street Journal)
OpenAI has hired an army of lobbyists
As countries around the world weigh AI legislation, OpenAI is on a lobbyist hiring spree to protect its interests. The AI company has expanded its global affairs team from three lobbyists in early 2023 to 35, and plans to have as many as 50 by the end of this year. (Financial Times)
The UK has Amazon-powered emotion-recognition AI cameras on trains
People traveling through some of the UK’s biggest train stations may have had their faces scanned by Amazon software without knowing it during an AI trial. London stations such as Euston and Waterloo have tested AI-powered CCTV cameras to reduce crime and detect people’s emotions. Emotion recognition technology is highly controversial; experts say it’s unreliable and simply doesn’t work. (Wired)
Clearview AI used your face. Now you may own a stake in the company.
The facial recognition company, which has come under fire for scraping images of people’s faces from the web and social media without their permission, has agreed to an unusual settlement in a class action lawsuit against it. Instead of paying cash, it is offering a 23% stake in the company to Americans whose faces are in its data sets. (The New York Times)
Elephants call each other by name
This is so cool! Researchers used AI to analyze the calls of two herds of African savanna elephants in Kenya. They found that elephants use individual-specific calls and recognize when other elephants are addressing them. (The Guardian)