12 June 2023

Exclusive: Google lays out its vision for securing AI


Sam Sabin

Google has a new plan to help organizations apply basic security controls to their artificial intelligence systems and protect them from a new wave of cyber threats.

Why it matters: The new conceptual framework, first shared with Axios, could help companies quickly secure their AI systems against hackers trying to manipulate AI models or steal the data the models were trained on.

The big picture: Often, when a new tech trend takes hold, cybersecurity and data privacy are an afterthought for businesses and consumers.
One example is social media, where users were so eager to connect with one another on new platforms that they gave little scrutiny to how their data was collected, shared, or protected.
Google worries the same thing is happening with AI systems, as companies quickly build and integrate these models into their workflows.

What they're saying: "We want people to remember that many of the risks of AI can be managed by some of these basic elements," Phil Venables, CISO at Google Cloud, told Axios.
"Even while people are searching for the more advanced approaches, people should really remember that you've got to have the basics right as well."

Details: Google's Secure AI Framework pushes organizations to implement six ideas:

Assess what existing security controls can be easily extended to new AI systems, such as data encryption (see the sketch after this list);

Expand existing threat intelligence research to also include specific threats targeting AI systems;

Adopt automation into the company's cyber defenses to quickly respond to any anomalous activity targeting AI systems;

Conduct regular reviews of the security measures in place around AI models;

Constantly test the security of these AI systems through so-called penetration tests and make changes based on those findings;

And, lastly, build a team that understands AI-related risks to help figure out where AI risk should sit in an organization's overall strategy to mitigate business risks.
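To make the first of those ideas concrete, here is a minimal sketch, assuming a Python pipeline and the open-source cryptography package, of what extending an existing control like data encryption to an AI system's training data could look like. The file paths, function name, and key handling are illustrative, not part of Google's framework:

```python
# Illustrative sketch only: encrypt a training-data file at rest with a
# symmetric key, the kind of existing control the framework suggests
# extending to AI systems. Paths are hypothetical; in practice the key
# would live in a managed key service, not be returned to the caller.
from cryptography.fernet import Fernet

def encrypt_training_data(plaintext_path: str, encrypted_path: str) -> bytes:
    """Encrypt the dataset file and return the key for safekeeping."""
    key = Fernet.generate_key()  # 32-byte, URL-safe base64-encoded key
    fernet = Fernet(key)
    with open(plaintext_path, "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    with open(encrypted_path, "wb") as f:
        f.write(ciphertext)
    return key

if __name__ == "__main__":
    key = encrypt_training_data("train.csv", "train.csv.enc")
    print("Dataset encrypted; store the key in a secrets manager.")
```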

Between the lines: Many of these security practices are ones that mature organizations are already employing across their other departments, Venables said.
"We realized fairly quickly that most of the ways that you manage the security around the use and development of AI is quite resonant with how you think about managing data access," he added.

The intrigue: To incentivize the adoption of these principles, Google is working with its own customers and governments to figure out how to apply the concepts.
The company is also expanding its bug bounty program to accept new findings uncovering security flaws related to AI safety and security, according to a blog post.

What's next: Google plans to seek out feedback on its framework from industry partners and government bodies, Venables said.
"We think we're pretty advanced on these topics in our history, but we're not so arrogant to assume that people can't give us suggestions for improvements," he said.
