Otto Williams
Jul 19, 2024
Exciting times in the world of AI! Google, OpenAI, Microsoft, and other tech giants have come together to form the Coalition for Secure AI, aiming to advance comprehensive security measures for AI. Stay ahead with Spectro Agency. Join us at spectroagency.com
Every technological advancement of the past two years has been dwarfed by AI going mainstream. OpenAI's ChatGPT was clearly the catalyst of the AI arms race, forcing giants like Google, Microsoft, Apple, Meta, Samsung, and many others to play catch-up.
The rapid development of AI is a cause for concern. Last year, several public figures and AI researchers penned an open letter urging AI labs worldwide to pause the development of large-scale AI systems, citing "profound risks to society and humanity." That letter didn't really go anywhere.
Recognizing the critical need for robust measures around AI and its development, Google introduced the Secure AI Framework (SAIF) last year. Building on it, the tech giant is now launching a new coalition with all the big shots in tow. At the Aspen Security Forum on Thursday, July 18, Google made its Coalition for Secure AI (CoSAI) official, saying it has spent the past year pulling the team together in an effort to "advance comprehensive security measures for addressing the unique risks that come with AI," both short-term risks (those that arise in real time) and long-term ones (those looming on the horizon).
CoSAI's stacked lineup of founding members includes Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, IBM, Intel, Microsoft, NVIDIA, OpenAI, PayPal, and Wiz. Google detailed the coalition and its plans in a blog post, and there's also a dedicated website for CoSAI.
CoSAI's Initial Plans
In its blog post, Google offers plenty of technical jargon, but the gist is that CoSAI will focus on areas like "Software Supply Chain Security for AI systems," which essentially means ensuring that AI code is built from safe and reliable software, tracking how AI software is built, and identifying problems early on.
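To make the supply-chain idea a bit more concrete, here is a minimal Python sketch of the kind of check such a workstream might encourage: verifying a model artifact against a recorded digest before loading it. The file name, digest, and provenance record here are purely hypothetical, not anything CoSAI has published.

```python
import hashlib
from pathlib import Path

# Hypothetical provenance record: artifact name -> SHA-256 digest published by
# the build pipeline. In practice this would come from a signed manifest, not
# a hard-coded dict.
EXPECTED_DIGESTS = {
    "model-weights.bin": "9f2c8a41d6e7b3c05512aa90f4e8d77c1b6d3a2e5f0c9b8a7d6e5f4c3b2a1908",
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path) -> bool:
    """Return True only if the artifact matches its recorded provenance digest."""
    expected = EXPECTED_DIGESTS.get(path.name)
    if expected is None:
        print(f"No provenance record for {path.name}; refusing to load.")
        return False
    if sha256_of(path) != expected:
        print(f"Digest mismatch for {path.name}; possible tampering.")
        return False
    return True

if __name__ == "__main__":
    artifact = Path("model-weights.bin")
    if artifact.exists() and verify_artifact(artifact):
        print("Artifact verified; safe to load.")
```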
The coalition also aims to create safeguards in the form of a "defender’s framework," with tools to identify and fight AI security threats as they come up. "This workstream will develop a defender’s framework to help defenders identify investments and mitigation techniques to address the security impact of AI use. The framework will scale mitigation strategies with the emergence of offensive cybersecurity advancements in AI models," reads the blog post.
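Speaking loosely, such a framework could boil down to mapping known AI threats to candidate mitigations so defenders know where to invest. The sketch below uses invented threat and mitigation names; CoSAI has not released its actual taxonomy.

```python
# Hypothetical defender's lookup table: both the threat categories and the
# mitigations listed here are illustrative placeholders, not CoSAI's framework.
MITIGATIONS = {
    "prompt_injection": ["input sanitization", "output filtering", "least-privilege tool access"],
    "training_data_poisoning": ["dataset provenance checks", "anomaly detection on training data"],
    "model_theft": ["rate limiting", "watermarking", "access logging"],
}

def plan_defenses(observed_threats: list[str]) -> dict[str, list[str]]:
    """Map each observed threat to candidate mitigations (empty list if unknown)."""
    return {threat: MITIGATIONS.get(threat, []) for threat in observed_threats}

if __name__ == "__main__":
    for threat, fixes in plan_defenses(["prompt_injection", "model_theft"]).items():
        print(f"{threat}: {', '.join(fixes) or 'no known mitigations'}")
```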
Lastly, CoSAI intends to create a rulebook that defines how to develop AI and ensure its safe use, with checklists and scorecards to "guide practitioners in readiness assessments."
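As a rough illustration of what a readiness checklist and scorecard might look like in practice, here is a small sketch with invented check items; the actual CoSAI checklists and scorecards have yet to be published.

```python
# Purely illustrative readiness checklist: the check names are placeholders.
READINESS_CHECKS = [
    ("Model artifacts are signed and verified before deployment", True),
    ("Training data sources are documented", True),
    ("Incident response plan covers AI-specific threats", False),
    ("Access to model weights is logged and audited", True),
]

def readiness_score(checks: list[tuple[str, bool]]) -> float:
    """Return the fraction of passing checks as a 0-100 score."""
    passed = sum(1 for _, ok in checks if ok)
    return 100.0 * passed / len(checks)

if __name__ == "__main__":
    for name, ok in READINESS_CHECKS:
        print(f"[{'x' if ok else ' '}] {name}")
    print(f"Readiness score: {readiness_score(READINESS_CHECKS):.0f}%")
```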
CoSAI's efforts seem to be focused mainly on the security aspect of AI. While the initiative is commendable and timely, it might prove redundant: previously formed organizations like the Frontier Model Forum and the Partnership on AI already have remits that overlap with CoSAI's plans.
Further, CoSAI has all the big players on board, which might be a double-edged sword. A coalition of all the big shots does help the cause, since there shouldn't be any lack of resources needed to act on its plans. However, it might also raise questions about bias, with CoSAI favoring its members and protecting their AI interests. The coalition can minimize those questions by being transparent about its decisions, though we'll have to wait and see how things play out.
---
At Spectro Agency, we understand the importance of staying ahead in the ever-evolving tech landscape. Our expertise in high-end digital marketing, app creation, AI-powered solutions, chatbots, software creation, and website development can help your business navigate these advancements securely and effectively. Learn more at spectroagency.com
Source: [Android Police](https://www.androidpolice.com/google-openai-microsoft-and-more-form-new-coalition-for-secure-ai/)