
2 March 2019

Troubling Trends Towards Artificial Intelligence Governance

Jayshree Pandya

Introduction

This is an age of artificial intelligence (AI)-driven automation and autonomous machines. The increasing ubiquity and rapidly expanding potential of self-improving, self-replicating, autonomous intelligent machines has spurred a massive automation-driven transformation of human ecosystems in cyberspace, geospace and space (CGS). Across nations, there is already a growing trend towards entrusting complex decision processes to these rapidly evolving AI systems. From granting parole to diagnosing diseases, from college admissions to job interviews, from managing trades to granting credit, and from autonomous vehicles to autonomous weapons, AI systems are increasingly being adopted by individuals and entities across nations: their governments, industries, organizations and academia (NGIOA).


Individually and collectively, the promise and perils of these evolving AI systems are raising serious concerns about the accuracy, fairness, transparency, trust, ethics, privacy and security of the future of humanity, prompting calls for the regulation of artificial intelligence design, development and deployment.

Fear of a disruptive technology, of technological transformation and of the changes they bring has always given rise to calls for governments to regulate new technologies in a responsible manner; that is nothing new. Regulating a technology like artificial intelligence, however, is an entirely different kind of challenge. While AI can be transparent, transformative, democratized and easily distributed, it also touches every sector of the global economy and can even put the security of the entire future of humanity at risk. There is no doubt that artificial intelligence has the potential to be misused, or that it can behave in unpredictable and harmful ways towards humanity, so much so that human civilization as a whole could be at risk.

While there has been some much-needed focus on the role of ethics, privacy and morals in this debate, security, which is equally significant, is often completely ignored. That brings us to an important question: are ethics and privacy guidelines enough to regulate AI? We need not only to make AI transparent, accountable and fair, but also to focus on its security risks.

As seen across nations, security risks are largely ignored in the AI regulation debate. It needs to be understood that any AI system, be it a robot, a program running on a single computer, a program running on networked computers, or any other set of components that hosts an AI, carries security risks with it.

So, what are these security risks and vulnerabilities? They start with the initial design and development. If the initial design and development allows or encourages the AI to alter its objectives based on its exposure and learning, those alterations will likely occur in accordance with the dictates of the initial design. But an AI that one day becomes self-improving will also start changing its own code, may at some point change its hardware as well, and could self-replicate. When we evaluate these scenarios, at some point humans will likely lose control of the code and of any instructions embedded in it. That brings us to an important question: how will we regulate AI when humans will likely lose control of its development and deployment cycle?

When we evaluate the security risks that have originated from disruptive and dangerous technologies over the years, each such technology required substantial infrastructure investments. That made the regulatory process fairly simple and easy: just follow the large investments to know who is building what. The information age and technologies like artificial intelligence, however, have fundamentally shaken that foundation of regulatory principles and control. This is mainly because determining the who, where and what of artificial intelligence security risks is now impossible: anyone, from anywhere, with a reasonably current personal computer (or even a smartphone or other smart device) and an internet connection can contribute to artificial intelligence projects and initiatives. Moreover, the security vulnerabilities of cyberspace translate directly to any AI system, as both its software and hardware are vulnerable to security breaches.

Moreover, the sheer number of individuals and entities across nations that may participate in the design, development and deployment of any AI system's components will make it difficult to assign responsibility and accountability for the entire system if anything goes wrong.

Now, with many artificial intelligence development projects going open source, and with the rise in the number of open-source machine learning libraries, anyone from anywhere can modify those libraries or their code, and there is no timely way to know who made the changes or what their security impact will be. So the question is: when individuals and entities can participate in a collaborative AI project from anywhere in the world, how can security risks be identified and proactively managed from a regulatory perspective?
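To make this gap concrete, consider a minimal, purely illustrative Python sketch of the kind of integrity check a team might run on an open-source machine learning dependency; the file name and pinned digest below are hypothetical placeholders, not values from any real project. Even such a check only reveals that an artifact has changed; it says nothing about who changed it, why, or what the security impact of the change is, which is precisely the blind spot regulators face.

    # Illustrative sketch only: verify a pinned SHA-256 digest for a downloaded
    # machine learning library artifact. File name and digest are hypothetical.
    import hashlib
    from pathlib import Path

    # Digest recorded when the dependency was last reviewed (placeholder value).
    PINNED_SHA256 = "0" * 64

    def sha256_of(path: Path) -> str:
        """Compute the SHA-256 digest of a local artifact, reading it in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    artifact = Path("ml_library-1.0.0.tar.gz")  # hypothetical downloaded package
    if sha256_of(artifact) != PINNED_SHA256:
        # The mismatch only tells us that the artifact changed, not who changed
        # it or whether the change is malicious.
        raise RuntimeError("Dependency artifact does not match the pinned digest")

In other words, even where such technical controls exist, they verify artifacts, not intent or impact, and they do nothing to identify the contributor behind a change.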

There is also a common belief that developing AI systems powerful enough to pose existential threats to humanity would require so much computational power that such efforts would be easy to track. However, with the rise of neuromorphic chips, computational power will soon be a non-issue, taking away the ability to track such projects through their heavy use of computing power.

There is also the question of who evaluates these security risks. Irrespective of the stage of design, development or deployment of artificial intelligence, do the researchers, designers and developers have the necessary expertise to make broad security risk assessments? That brings us to an important question: what kind of expertise is required to evaluate the security risks of algorithms or of any AI system? Would someone qualify to evaluate these risks purely on the basis of a background in computer science, cyber-security or hardware, or do we need someone with an entirely different kind of skill set?

Acknowledging this emerging reality, Risk Group initiated the much-needed discussion on Regulating Artificial Intelligence with Dr. Subhajit Basu on Risk Roundup.
