26 October 2018

Intentional Bias Is Another Way Artificial Intelligence Could Hurt Us

by Douglas Yeung
The conversation about unconscious bias in artificial intelligence often focuses on algorithms that unintentionally cause disproportionate harm to entire swaths of society—those that wrongly predict black defendants will commit future crimes, for example, or facial-recognition technologies developed mainly by using photos of white men that do a poor job of identifying women and people with darker skin. But the problem could run much deeper than that. Society should be on guard for another twist: the possibility that nefarious actors could seek to attack artificial intelligence systems by deliberately introducing bias into them, smuggled inside the data that helps those systems learn. This could introduce a worrisome new dimension to cyberattacks, disinformation campaigns or the proliferation of fake news.

According to a U.S. government study on big data and privacy (PDF), biased algorithms could make it easier to mask discriminatory lending, hiring or other unsavory business practices. Algorithms could be designed to take advantage of seemingly innocuous factors that can be discriminatory. Employing existing techniques, but with biased data or algorithms, could make it easier to hide nefarious intent. Commercial data brokers collect and hold onto all kinds of information, such as online browsing or shopping habits, that could be used in this way.

Biased data could also serve as bait. Corporations could release biased data in the hope that competitors would use it to train artificial intelligence algorithms, degrading the quality of competitors' products and consumer confidence in them.

Algorithmic bias attacks could also be used to more easily advance ideological agendas. If hate groups or political advocacy organizations want to target or exclude people on the basis of race, gender, religion or other characteristics, biased algorithms could give them either the justification or more advanced means to do so directly. Biased data also could come into play in redistricting efforts that entrench racial segregation (“gerrymandering”) or restrict voting rights.

Finally, as a national security threat, foreign actors could use deliberate bias attacks to destabilize societies by undermining government legitimacy or sharpening public polarization. This would fit naturally with tactics that reportedly seek to exploit ideological divides by creating social media posts and buying online ads designed to inflame racial tensions.

Injecting deliberate bias into algorithmic decisionmaking could be devastatingly simple and effective. This might involve replicating or accelerating pre-existing factors that produce bias. Many algorithms are already fed biased data. Attackers could continue to use such data sets to train algorithms, with foreknowledge of the bias they contained. The plausible deniability this would enable is what makes these attacks so insidious and potentially effective. Attackers would surf the waves of attention trained on bias in the tech industry, exacerbating polarization around issues of diversity and inclusion.
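To make that mechanism concrete, here is a minimal, hypothetical sketch (not from the original commentary) of how flipping labels for one group in a training set can skew a simple model's decisions. The synthetic data, the "group" attribute, the flip rate and the model choice are all illustrative assumptions, not a description of any real system or attack.

```python
# Toy illustration: label-flip "poisoning" of training data for one group.
# All data here is synthetic; the scenario is purely hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: one informative score plus a binary group attribute.
n = 2000
score = rng.normal(size=n)                 # e.g., a creditworthiness signal
group = rng.integers(0, 2, size=n)         # 0 or 1, a protected attribute
X = np.column_stack([score, group])
y = (score > 0).astype(int)                # "true" approval label, group-blind

# Poisoned copy: flip half of group 1's approvals to denials in training data.
y_poisoned = y.copy()
flip = (group == 1) & (y == 1) & (rng.random(n) < 0.5)
y_poisoned[flip] = 0

clean_model = LogisticRegression().fit(X, y)
poisoned_model = LogisticRegression().fit(X, y_poisoned)

# Compare approval rates for group-1 applicants with identical score distributions.
test = np.column_stack([rng.normal(size=1000), np.ones(1000)])
print("clean approval rate:   ", clean_model.predict(test).mean())
print("poisoned approval rate:", poisoned_model.predict(test).mean())
```

In this toy setup, the model trained on the tampered labels learns to penalize the group attribute itself, approving group-1 applicants at a lower rate even though their scores are unchanged, while the training data still looks superficially plausible. That plausibility is the point of the preceding paragraph: a biased data set is hard to distinguish from a deliberately poisoned one.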

The idea of “poisoning” algorithms by tampering with training data is not wholly novel. Top U.S. intelligence officials have warned (PDF) that cyber attackers may stealthily access and then alter data to compromise its integrity. Proving malicious intent would be difficult, which would make such attacks a significant challenge to address and therefore to deter.

But motivation may be beside the point. Any bias is a concern, a structural flaw in the integrity of society's infrastructure. Governments, corporations and individuals are increasingly collecting and using data in diverse ways that may introduce bias.

What this suggests is that bias is a systemic challenge—one requiring holistic solutions. Proposed fixes to unintentional bias in artificial intelligence seek to advance workforce diversity, expand access to diversified training data, and build in algorithmic transparency (the ability to see how algorithms produce results).

There has been some movement to implement these ideas. Academics and industry observers have called for legislative oversight that addresses technological bias. Tech companies have pledged to combat unconscious bias in their products by diversifying their workforces and providing unconscious bias training.

As with technological advances throughout history, we must continue to examine how we implement algorithms in society and what outcomes they produce. Identifying and addressing bias in those who develop algorithms, and the data used to train them, will go a long way to ensuring that artificial intelligence systems benefit us all, not just those who would exploit them.

Douglas Yeung is a behavioral scientist at the nonprofit, nonpartisan RAND Corporation and on the faculty of the Pardee RAND Graduate School.

This commentary originally appeared on Scientific American on October 19, 2018. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.
