Since then, ChatGPT’s competitor chatbot Bard (which, you may remember, temporarily wiped $100 billion off Google’s stock price when it made a factual error during its demo) has been replaced by the more advanced Gemini. But, to me, the AI revolution hasn’t felt like a revolution at all. Instead, there has been a slow slide toward marginal efficiency gains. I’m seeing more autocomplete features in my email and word processing apps, and Google Docs now offers more ready-made templates. They’re not groundbreaking features, but they’re also reassuringly unfussy.
Google is holding its I/O conference tomorrow, May 14, and we expect it to announce a whole new set of AI features, further integrating the technology into everything it does. The company is tight-lipped about its announcements, but we can make educated guesses. There has been much speculation that it will upgrade its crown jewel, Search, with generative AI capabilities that could, for example, be offered behind a paywall. Perhaps we’ll also see Google’s version of AI agents, a buzzword that basically means smarter, more capable assistants able to handle complex tasks, such as booking flights and hotels the way a travel agent would.
Though it owns around 90% of the online search market, Google is in a defensive position this year. Competitors like Perplexity AI have launched their own versions of AI-powered search to rave reviews, Microsoft’s AI-powered Bing has managed to increase its market share slightly, and OpenAI is working on its own AI-powered online search function and is also reportedly in talks with Apple to integrate ChatGPT into smartphones.
There are some hints about what these new AI-powered search features might look like. Felix Simon, a researcher at the Reuters Institute for the Study of Journalism, participated in Google’s Search Generative Experience trial, the company’s way of testing new products on a small selection of real users.
Last month, Simon noticed that the links and short excerpts from online sources that normally make up his Google search results had been replaced by more detailed, neatly packaged summaries generated by artificial intelligence. He got these results for questions about nature and health, such as “Do snakes have ears?” Most of the information he was offered was correct, which was surprising, as AI language models have a tendency to “hallucinate” (meaning make things up) and have been criticized for being an unreliable source of information.
To Simon’s surprise, he enjoyed the new feature. “It’s convenient to ask [the AI] to present something just for you,” he says.
Simon then started using Google’s new AI-powered feature to search for news rather than scientific information.
For most of these questions, such as what happened yesterday in the UK or Ukraine, he was simply offered links to news sources such as the BBC and Al Jazeera. However, he managed to get the search engine to generate an overview of recent news from Germany, in the form of a bulleted list of headlines from the previous day. The first entry concerned Franziska Giffey, a Berlin politician who was attacked in a library. The AI summary got the date of the attack wrong. But it was close enough to the truth that Simon didn’t think twice about its accuracy.