In the future, an AI agent could not only suggest things to do and places to stay on my honeymoon. It would also go a step further than ChatGPT and book flights for me. It would remember my hotel preferences and budget and recommend only accommodations that match my criteria. It could also remember what I liked to do on previous trips and suggest very specific activities tailored to those tastes. It could even request restaurant reservations on my behalf.
Unfortunately for honeymooners, today’s AI systems lack the kind of reasoning, planning, and memory they would need. It is still early days for these systems, and there are many unsolved research questions. But who knows—maybe for our 10th anniversary trip?
Deeper Learning
A way to let robots learn by listening will make them more useful
Most AI robots today use cameras to understand their environment and learn new tasks, but it’s becoming easier to train robots with sound as well, helping them adapt to tasks and environments where visibility is limited.
Sound on: Researchers at Stanford University have looked at how much more successful a robot can be if it is able to ‘listen’. They chose four tasks: turning a bun in a pan, erasing a blackboard, joining two Velcro strips, and rolling dice from a cup. In each task, the sounds provided cues that cameras or touch sensors struggle with, such as knowing whether the eraser is making proper contact with the board or whether the cup contains dice. When using vision alone in the final test, the robot could tell 27% of the time whether there were dice in the cup, but this rose to 94% when sound was included. Read more from James O’Donnell.
Bits and Bytes
AI lie detectors are better than humans at spotting lies
Researchers at the University of Würzburg in Germany found that an artificial intelligence system was significantly better at detecting fabricated statements than humans. Humans usually get it right about half the time, but the AI could detect whether a statement was true or false 67% of the time. However, lie detection is a controversial and unreliable technology, and it is debatable whether we should be using it in the first place. (MIT Technology Review)
A hacker stole secrets from OpenAI
A hacker managed to gain access to OpenAI’s internal messaging systems and steal information about its AI technology. The company believes the hacker was a private individual, but the incident raised fears among OpenAI officials that China could also steal the company’s technology. (The New York Times)
Artificial intelligence has greatly increased Google’s emissions over the past five years
Google said its greenhouse gas emissions totaled 14.3 million metric tons of carbon dioxide equivalent in 2023. That’s 48 percent higher than in 2019, the company said. This is mainly due to Google’s huge push into artificial intelligence, which will likely make it difficult to achieve its goal of eliminating carbon emissions by 2030. It is a thoroughly disappointing example of how our societies prioritize profits over the climate emergency we are in. (Bloomberg)
Why a $14 billion startup is hiring PhDs to train AI systems from their living rooms
An interesting read on the shift happening in AI and data work. Scale AI has previously hired low-wage data workers in countries like India and the Philippines to annotate the data used to train AI. However, the huge boom in language models has prompted Scale to hire highly skilled contractors in the US with the expertise needed to help train these models. This highlights how important data work really is to AI. (The Information)
A new “ethical” AI music generator can’t write a half-decent song
Copyright is one of the thorniest issues facing AI today. Just last week I wrote about how AI companies are being forced to pay up for high-quality training data to build powerful AI. This story about an “ethical” AI music generator, which used only a limited dataset of licensed music, shows why that matters. Without high-quality data, it’s unable to produce anything even close to decent. (Wired)