The Biden-Harris administration has announced that it has secured a second round of voluntary security commitments from eight prominent AI companies.
Representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI and Stability AI attended the White House for the announcement. These eight companies have committed to playing a central role in promoting the development of safe, secure and reliable artificial intelligence.
The Biden-Harris administration is actively working on an Executive Order and pursuing bipartisan legislation to ensure that the US leads the responsible development of AI that unlocks its potential while managing its risks.
The commitments made by these companies revolve around three fundamental principles: safety, security and trust. They are committed to:
- Make sure products are safe before introducing them to the public:
Companies commit to rigorous internal and external security testing of their AI systems before releasing them to the public. This includes assessments by independent experts, helping to protect against significant AI risks such as biosecurity, cybersecurity and wider societal impacts.
They will also actively share information on AI risk management with governments, civil society, academia and across industry. This collaborative approach will include the sharing of security best practices, information on attempts to circumvent safeguards, and technical cooperation.
- Build systems with security as a top priority:
The companies are committed to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Recognizing the critical importance of these model weights in AI systems, they are committed to releasing them only when intended and when security risks are adequately addressed.
Additionally, companies will make it easier for third parties to discover and report vulnerabilities in their AI systems. This proactive approach ensures that issues can be identified and resolved quickly, even after an AI system has been deployed.
- Gain public trust:
To enhance transparency and accountability, companies will develop robust technical mechanisms – such as watermarking systems – to indicate when content is generated by AI. This step aims to let creativity with AI flourish while reducing the risks of fraud and deception.
They will also publicly state the capabilities, limitations, and areas of appropriate and inappropriate use of their AI systems, covering both security and societal risks, including fairness and bias. In addition, these companies are committed to prioritizing research into the societal risks of AI systems, including addressing harmful bias and discrimination.
These leading AI companies will also develop and deploy advanced AI systems to address major societal challenges, from cancer prevention to climate change mitigation, contributing to prosperity, equity and security for all.
The Biden-Harris administration’s work on these commitments extends beyond the US, with consultations involving many international partners and allies. The commitments complement global initiatives including the UK’s AI Safety Summit, Japan’s leadership of the G7 Hiroshima Process and India’s leadership as Chair of the Global Partnership on Artificial Intelligence.
The announcement marks a major milestone in the journey towards responsible AI development, with industry leaders and government coming together to ensure that AI technology benefits society while mitigating its inherent risks.
(Photo by Tabrez Syed on Unsplash)
See also: UK AI ecosystem to reach £2.4T by 2027, third in global race
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California and London. The comprehensive event is co-located with Digital Transformation Week.
Explore other upcoming corporate tech events and webinars powered by TechForge here.