Top Technology Companies Agree to Standardize Artificial Intelligence Security

Coalition for Secure AI (CoSAI) Leading the Way in AI Security and Development Standards

The largest and most influential artificial intelligence (AI) companies are coming together to prioritize security in the development and use of generative AI. The Coalition for Secure AI, or CoSAI, is a collaborative effort aimed at creating standardized guardrails, security technologies, and tools to mitigate the risks associated with AI.

Founding members of CoSAI include Google, OpenAI, and Anthropic, which develop some of the most widely used large language models (LLMs). Other members include tech giants such as Microsoft, IBM, Intel, Nvidia, and PayPal. The coalition’s goal is to create a secure framework around access to and use of AI models, protecting them from cyberattacks.

Google’s vice president of security engineering, Heather Adkins, and Google Cloud’s chief information security officer, Phil Venables, emphasized the importance of AI security in a statement. They highlighted the need for a framework that meets the current challenges and opportunities presented by AI.

AI safety has become a top priority amid growing concerns about cybersecurity risks, especially since the launch of ChatGPT in late 2022. Security firms such as Trend Micro and CrowdStrike are now leveraging AI to help companies detect and prevent threats. Gartner analyst Avivah Litan has stressed the importance of AI safety, trust, and transparency in preventing harmful actions and decisions.

US President Joe Biden has called on the private sector to prioritize AI safety and ethics, citing concerns about inequity and national security risks. In July 2023, his administration secured voluntary commitments from major AI companies to develop safety standards and prevent AI misuse, followed later that year by an executive order on AI.

CoSAI will collaborate with organizations like the Frontier Model Forum, Partnership on AI, OpenSSF, and MLCommons to develop common standards and best practices. MLCommons is set to release an AI safety benchmarking suite this fall, which will assess LLMs on their responses to hate speech, exploitation, and other sensitive topics.

Managed by OASIS Open, CoSAI aims to ensure that AI development is carried out securely and responsibly. OASIS Open is known for its work on open-source projects and standards, making it a fitting organization to oversee CoSAI’s efforts to enhance AI security. Stay tuned for more updates on CoSAI’s progress in the coming months.
