The artificial intelligence (AI) industry kicked off 2023 with a bang as schools and universities grappled with students using OpenAI’s ChatGPT to help them with homework and essay writing.
Less than a week into the year, New York City’s public schools banned ChatGPT — released weeks earlier to huge fanfare — a move that would set the stage for much of the debate surrounding generative artificial intelligence in 2023.
As the buzz grew around Microsoft-backed ChatGPT and competitors like Google’s Bard AI, Baidu’s Ernie Bot and Meta’s LLaMA, so did questions about how to handle a powerful new technology that had become publicly accessible overnight.
While AI-generated images, music, videos and computer code created with platforms such as Stability AI’s Stable Diffusion or OpenAI’s DALL-E have opened up exciting new possibilities, they have also fueled concerns about disinformation, targeted harassment and copyright infringement.
In March, a group of more than 1,000 signatories, including Apple co-founder Steve Wozniak and billionaire tech entrepreneur Elon Musk, called for a halt to the development of more advanced artificial intelligence in light of the “profound risks to society and humanity.”
Although there has been no pause, governments and regulators have begun to issue new laws and regulations to put in place guardrails for the development and use of artificial intelligence.
While many questions about artificial intelligence remain unresolved ahead of the new year, 2023 is likely to be remembered as a major milestone in the history of the field.
Drama in OpenAI
After ChatGPT amassed more than 100 million users in 2023, its developer OpenAI hit the headlines again in November when the board abruptly fired CEO Sam Altman — claiming he had not been “consistently candid in his communications with the board.”
Although the Silicon Valley startup did not specify the reasons for Altman’s firing, his ouster was widely attributed to an ideological struggle within the company between safety and commercial concerns.
Altman’s removal sparked five days of very public drama that saw OpenAI staff threaten to quit en masse and Altman briefly hired by Microsoft, before he was reinstated and the board replaced.
While OpenAI has tried to move on from the drama, the questions raised during the upheaval remain real for the industry at large — including how to balance the drive for profit and new product launches against fears that AI could become too powerful too quickly or fall into the wrong hands.
In a survey of 305 developers, policymakers and academics conducted by the Pew Research Center in July, 79 percent of respondents said they were either more worried than excited about the future of artificial intelligence, or as worried as they were excited.
Despite AI’s potential to transform fields from medicine to education and mass communications, respondents expressed concern about risks such as mass surveillance, government and police harassment, job displacement and social isolation.
Sean McGregor, the founder of the Responsible AI Collaborative, said 2023 highlighted the hopes and fears surrounding generative AI, as well as deep philosophical divisions in the field.
“More hopeful is the light now shining on the societal decisions that technologists make, although it’s troubling that many of my tech peers seem to view this attention negatively,” McGregor told Al Jazeera, adding that artificial intelligence should be shaped by the needs of the people most affected.
“I still feel largely positive, but it’s going to be a challenging few decades as we realize that the AI safety discourse is a fancy technological version of old societal challenges,” he said.
Legislating the future
In December, European Union policymakers agreed on sweeping legislation to regulate the future of artificial intelligence, capping a year of efforts by national governments and international bodies such as the United Nations and the G7.
Key concerns include the sources of information used to train AI algorithms, much of it scraped from the internet without regard for privacy, bias, accuracy or copyright.
The draft EU legislation requires developers to disclose their training data and demonstrate compliance with the bloc’s laws, restricts certain types of use and gives users a route to lodge complaints.
Similar legislative efforts are underway in the US, where President Joe Biden issued an executive order on artificial intelligence standards in October, and in the UK, which hosted the AI Safety Summit in November, attended by 27 countries and industry stakeholders.
China has also taken steps to regulate artificial intelligence, publishing interim rules requiring developers to undergo a “security assessment” before releasing products to the public.
The guidelines also restrict AI training data and ban content deemed to “support terrorism”, “undermine social stability”, “subvert the socialist system” or “harm the country’s image”.
Globally, 2023 also saw 20 countries — including the United States, the United Kingdom, Germany, Italy, Poland, Estonia, the Czech Republic, Singapore, Nigeria, Israel and Chile — sign the first detailed international agreement on AI security.
AI and the future of work
Questions about the future of artificial intelligence are also rampant in the private sector, where its use has already led to class-action lawsuits in the US by writers, artists and news outlets alleging copyright infringement.
Fears that artificial intelligence would replace jobs were a driving factor behind months-long strikes in Hollywood by the Screen Actors Guild and the Writers Guild of America.
In March, Goldman Sachs predicted that generative artificial intelligence could expose 300 million jobs to automation and affect two-thirds of current jobs in Europe and the US in at least some way, making work more productive but also more automated.
Others tried to temper the more dire predictions.
In August, the International Labour Organization, the U.N.’s labor agency, said generative artificial intelligence is more likely to augment most jobs than replace them, with clerical work cited as the category most at risk.
Year of the ‘deepfake’?
The year 2024 will be a major test for generative AI, as new applications come to market and new legislation comes into force amid global political upheaval.
Over the next 12 months, more than two billion people are set to vote in elections in a record 40 countries, including geopolitical powerhouses such as the US, India, Indonesia, Pakistan, Venezuela, South Sudan and Taiwan.
While online disinformation campaigns are already a regular part of many election cycles, AI-generated content is expected to make matters worse as false information becomes harder to distinguish from real and easier to replicate at scale.
AI-generated content, including “deepfake” images, has already been used to sow anger and confusion in conflict zones such as Ukraine and Gaza, and has featured in hotly contested races such as the US presidential election.
Meta last month told advertisers it would ban political ads on Facebook and Instagram created with generative AI, while YouTube announced it would require creators to flag realistic-looking AI-generated content.