31 May 2023

When Artificial Intelligence Goes Wrong

BOB GOURLEY

This special report is a guide to some of the darker sides of AI deployments.

When fielded incorrectly, AI can do more harm to your business than good. Rogue AI can damage brands, drive bad decisions, and cause your organization to unintentionally fall out of compliance with regulations or potentially even break the law. It is important to remember that whenever we enter a period of advanced innovation, adversaries also seek to exploit those innovations. In the case of AI, adversaries are already using this technology for nefarious ends.

This guide is part of a series of OODA Loop member reports. Also see the OODA Loop Guide to AI and our research report on AI for Business Advantage.

When AI Goes Wrong

AI has already proven itself as a hugely valuable technology. That is why it is here to stay and will keep evolving in service to just about every sector of the economy. We remain absolute believers in the importance of AI to our future.

Unfortunately, this acceleration of AI solutions causes many to overlook critically important cybersecurity and business risk considerations. And the very nature of these solutions is bringing new risks.

Here are key concerns:

Self-Corruption: Algorithms that can teach themselves can corrupt themselves. Code that learns and modifies itself has been shown to introduce the risk of brand damage. It can also cause harm to employees and introduce other risks to the firm.

Hallucination: This term captures an issue primarily with large language models like GPT, which can be so creative that they flat-out fabricate information. Care must be taken to ensure decisions are not being based on hallucinated results.

Inscrutability: Many machine learning algorithms, especially deep learning solutions that leverage backpropagation, can add so many new variables to their own models that no human can understand what they are doing. You can't, for example, look at the deep learning algorithm that Google uses to identify cat photos on the internet and find the variables that detected the cats. We simply don't know how the model was able to perform this work. This is not a big deal when it comes to identifying cats in pictures. But what about models that will be shaping your business strategy or helping diagnose patients? When a solution is inscrutable like this, it is hard to trust the results, and that lack of trust leads to sub-optimal solutions.

Deceivability: Since most AI solutions assume the presence of trusted training data, they can be easily deceived. No AI today can recognize mistruths in its data; doing so would require a large quantity of known-clean example data plus programs dedicated to vetting new data, which would itself require its own expert system.

New Attack Vectors: Bad actors can learn to manipulate AI in surprising new ways, including attacking training data, modifying inputs (especially external data dependencies), and even changing the algorithms themselves. A minimal defense against training-data tampering is sketched after this list.

Data Protection: The large collections of data required for advanced AI solutions are frequently stored with little to no security controls, resulting in greater risk to firms.

Bias: There have been numerous examples of bias being coded into AI systems. Additionally, the previously mentioned inscrutability of AI, and the ability of some AI solutions to modify themselves, means this problem can creep into a solution over time with no warning. The term “algorithmic bias” describes outcomes of some machine learning algorithms that put some groups at a disadvantage. The creators of these algorithms may not have intended for this bias or discrimination to be there, but companies need to seek it out in results and take action when it is uncovered. The challenge is not a new one; bias has been around for years, but AI accelerates the problem. And with newer deep learning approaches, sometimes the problem is not detectable until the model has run for a while.
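
To make the training-data attack surface concrete, here is a minimal sketch of an integrity check that refuses to retrain when training files have changed since they were last approved. The manifest filename and its JSON format (filename mapped to SHA-256 hash) are illustrative assumptions, not part of any particular product.

```python
# Minimal sketch: verify training data against an approved hash manifest
# before retraining. File names and manifest format are assumptions.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MB chunks so large datasets don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data(manifest_path: str) -> list:
    """Return the files whose hashes no longer match the approved manifest."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [name for name, expected in manifest.items()
            if sha256_of(Path(name)) != expected]

if __name__ == "__main__":
    tampered = verify_training_data("training_manifest.json")  # hypothetical file
    if tampered:
        raise SystemExit(f"Refusing to retrain; modified files: {tampered}")
```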

Most of these challenges can be mitigated, and those that cannot might be better understood and dealt with in other creative ways. Mitigating the problems starts with a clear understanding of how the technology has been malfunctioning or underperforming.
Examples of AI Gone Wrong

Here are more specific examples of AI going wrong:

ChatGPT Hallucinations: Almost every user we run into has a story about hallucinations from ChatGPT. This example of AI gone wrong has been seen by millions.
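
One cheap, imperfect mitigation is a self-consistency check: ask the model the same question several times and treat disagreement as a signal for human review. The sketch below is a minimal illustration; the `ask` callable is a placeholder for whatever LLM client you use, and exact string comparison of answers is deliberately naive.

```python
# Minimal sketch: flag possible hallucinations via self-consistency.
# `ask` is a placeholder for your LLM client; nothing here is a real API.
from collections import Counter
from typing import Callable, Optional

def consistent_answer(ask: Callable[[str], str], question: str,
                      samples: int = 3) -> Optional[str]:
    """Ask the same question several times and return the majority answer,
    or None if no majority exists. Self-disagreement is a cheap signal
    that the output may be hallucinated and deserves human review."""
    answers = Counter(ask(question) for _ in range(samples))
    answer, count = answers.most_common(1)[0]
    return answer if count > samples // 2 else None

# Hypothetical usage:
#   answer = consistent_answer(my_llm_client, "When was the firm founded?")
#   if answer is None:
#       route_to_human_review()
```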

Facebook: Facebook’s many data woes are not caused by AI. Neither were the actions by Russian and other actors who used Facebook to sow discord before the 2016 election. The example of AI going wrong here relates to how Facebook has tried for years to leverage AI to mitigate these threats, and so far there is no evidence its AI savvy has helped mitigate these problems at all. Catching bad actors, malicious posts, or malicious ad buys might be aided by automated tools, but the approach has been to hire large teams of smart humans to read, review, correlate, and take action against bad actors on the Facebook platform.

Smart Speakers: Research shows that many popular smart speakers are 30 percent less likely to understand non-native U.S. accents.

Facial Recognition: Several facial recognition systems have been shown to perform much worse on African American faces.

Sentencing Guidelines: One of the most frequently cited examples of algorithmic bias is the system known as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). This system, examined in detail in a ProPublica report, is used for generating sentencing guidelines. It uses a basic form of machine learning: it is trained on data to deliver a scoring system that is then applied to new data. The system attempts to predict the likelihood of recidivism and is one factor that can be used in recommending sentences for convicted offenders. It was discovered to predict a higher risk of recidivism for black defendants than was warranted, and a lower risk for white defendants than was warranted. The results of this study have been debated and argued, but one thing is clear: since there is no transparency into the algorithms, everyone has reason to doubt what the firm that runs the code says.
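
The kind of audit ProPublica performed can be approximated by comparing error rates across groups: if one group’s false positive rate is much higher than another’s, the scoring system is treating comparable people differently. The sketch below uses made-up illustrative data, not actual COMPAS data.

```python
# Minimal sketch: compare false positive / false negative rates by group.
# The sample data is invented for illustration only.

def error_rates(rows):
    """rows: (group, predicted_high_risk, actually_reoffended) triples.
    Returns per-group false positive and false negative rates."""
    stats = {}
    for group, predicted, actual in rows:
        g = stats.setdefault(group, {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
        if actual:
            g["pos"] += 1
            if not predicted:
                g["fn"] += 1      # predicted low risk, but reoffended
        else:
            g["neg"] += 1
            if predicted:
                g["fp"] += 1      # predicted high risk, but did not reoffend
    return {group: {
                "false_positive_rate": g["fp"] / g["neg"] if g["neg"] else 0.0,
                "false_negative_rate": g["fn"] / g["pos"] if g["pos"] else 0.0}
            for group, g in stats.items()}

if __name__ == "__main__":
    sample = [("A", True, False), ("A", True, True), ("A", False, False),
              ("B", False, True), ("B", True, True), ("B", False, False)]
    for group, rates in error_rates(sample).items():
        print(group, rates)   # asymmetric error rates are the warning sign
```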

Amazon Resume System: Other examples of algorithmic bias involve the potential for unfair treatment in how companies assess and hire job candidates. Many firms now use algorithms to score job applicants in ways that may well be biased; even when the bias is unintentional, it is still harmful. One of the most famous cases involved a resume-screening tool put in place by Amazon in 2017. Over time, the machine learning system became biased against women, and the program had to be terminated.
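
A first-pass check for this kind of screening bias is to compare selection rates across groups, for example against the four-fifths rule of thumb from US employment guidance. The sketch below uses hypothetical data and a hard-coded threshold; a real audit would be far more thorough.

```python
# Minimal sketch: per-group selection rates and a disparate-impact ratio.
# Data and the 0.8 threshold are illustrative, not a legal determination.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(records):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values()), rates

if __name__ == "__main__":
    outcomes = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
    ratio, rates = disparate_impact(outcomes)
    print(rates)
    if ratio < 0.8:   # four-fifths rule of thumb
        print(f"Possible adverse impact: ratio {ratio:.2f}")
```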

Financial Rating: Algorithmic bias has also been found in financial programs and has caused problems with credit scoring. Complex neural networks are now being leveraged in credit scoring; if you get turned down for a loan, it may be that no one can explain why.
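
One way to reduce unexplainable denials is to favor interpretable models whose per-feature contributions can be read off directly and turned into reason codes. The sketch below assumes scikit-learn is installed; the feature names and toy data are invented for illustration, and real adverse-action notices involve far more rigor than this.

```python
# Minimal sketch: an interpretable credit model with simple reason codes.
# Features and training data are toy assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income", "debt_ratio", "late_payments", "credit_age_years"]

# Toy training data: rows match FEATURES; label 1 = loan repaid.
X = np.array([[55, 0.20, 0, 8], [30, 0.60, 3, 2],
              [70, 0.30, 1, 12], [25, 0.70, 4, 1],
              [60, 0.25, 0, 10], [28, 0.65, 2, 3]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

def reason_codes(applicant: np.ndarray, top_n: int = 2) -> list:
    """Return the features that pushed this score hardest toward denial."""
    contributions = model.coef_[0] * applicant  # per-feature effect on score
    worst = np.argsort(contributions)[:top_n]   # most negative contributions
    return [FEATURES[i] for i in worst]

applicant = np.array([29, 0.68, 3, 2], dtype=float)
if model.predict([applicant])[0] == 0:
    print("Declined; main factors:", reason_codes(applicant))
```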

Media Bias: Many major news organizations, search providers, and major social media sites have been accused of “burying” news that does not fit their worldview. This is always denied by the organizations accused, but the fact that inscrutable algorithms are at play is cause for concern.

Google, Bing and Every Other Search System: It is a sad fact that AI has not kept up with human needs for search and discovery, forcing people to work hard at crafting creative, complex search strategies to find the right information. We consider this a failure of current AI, especially for missions and business functions that matter.

A key lesson from these and many other major issues is that businesses should understand the new risks that come with AI solutions. Planning in advance can mitigate the occurrence of problems, and, when they occur, can speed the response to the incident.

AI Has Compliance Issues

Real-world experience with AI solutions has brought to light the many risks articulated above. And business leaders have seen enough that many companies are slowing down their AI initiatives until they can come to grips with the risks. A chief concern is the inability to ensure security and compliance, which is complicated by the lack of explainability in complex AI. The issues of fairness and ethics also invite management scrutiny. Mistakes here can do more than damage brands; they can hurt society. They can also cause firms to run afoul of the regulatory environment.

Regulations impacting AI deployments include:

The Civil Rights Acts of 1964 and 1991
The Americans with Disabilities Act
The Genetic Information Nondiscrimination Act
The Health Insurance Portability and Accountability Act
The Federal Financial Institutions Examination Council (FFIEC)
The Family Educational Rights and Privacy Act (FERPA)
The Fair Housing Act
Federal Reserve SR 11-7
The EU General Data Protection Regulation (GDPR)
New York Cybersecurity Regulations
Workplace AI recruitment selection regulations in New York, Illinois and Maryland

Even if you could get your AI into compliance with these many regulations, that still does not mean you are safe from the AI making poor decisions. And these regulations do little to nothing to prevent a hostile adversary from attacking your AI solutions, nor will they prepare you to respond to an AI security event.

This means AI solutions require security that goes far beyond traditional IT needs. This is about more than protecting data, and about more than compliance.

Fear of AI May Be Unfounded, But Fear Can Impact Decisions

Automation has long had an impact on employment. Since the dawn of the industrial age there has been fear that increasing automation would cause job displacement, but history has shown that consistent innovation around technology has produced more jobs than the automation of work displaced. Optimists will always point to this history as they project the future impact of AI on the workforce. However, AI is different from other technologies. It may change the need for routine work so fast that job creation in the economy cannot keep up. Our advice is for business leaders and employees alike to prepare for more rapid job displacement through automation. Assume that there will be less demand for routine jobs and more demand for positions that require things AI cannot do: human empathy, creativity, and the rapid, comprehensive understanding we call intuition.

Employers should consider helping retrain workers now so that their greatest talents can be tapped into as the routine parts of their work are automated. This will help all to embrace the power of AI tools with less fear of the future.

The Age of Geopolitical AI Is Upon Us

We’ve seen adoption of AI in business processes, but we will start seeing the emergence of AI in geopolitics and also greater geopolitical influence on AI. Increasingly, machine learning is being used to support geopolitical analysis, as sources of data have grown to the point where human analysis at scale is no longer achievable. Given the controversy surrounding Google’s departure from USG programs like Project Maven, this is an area that is ripe for disruption and innovation from emerging companies. Hopefully, government acquisition programs can keep up.

We are also entering an era of increased geopolitical influence on the algorithms that drive business processes, especially those in the investment sector. Don’t think geopolitics can influence algorithms? Take a close look at how the stock market reacted to the arrest of Huawei Technologies Co Ltd’s chief financial officer, Meng Wanzhou, or to the economic impact of trade and IP enforcement issues between China and the US.

Conclusions and Recommendations

We review things that go wrong to inform your business strategy. We capture more insights on how to do that in our follow-on report titled Artificial Intelligence for Business Advantage. That is a good next step in ensuring your business is approaching AI with risk mitigation in mind.

Looking for a primer on what executives need to know about real AI and ML? See A Decision-Maker’s Guide to Artificial Intelligence.
