
27 February 2017

Battle of the Trolls II — A.I. botnets


Puru Naidu


Politically sanctioned A.I. botnets that micro-target voters with political messaging pose a threat to our democracy.

Last week, I blogged that deploying politically sanctioned troll armies on social media to manage public perception and opinion is a weaponized tactic that has become the new status quo of political campaigning. But it seems I described only the mild version of the new reality. Paid troll armies are just the surface of it. The advanced version uses big data analytics and behavioral science to build psychographic profiles of users for political messaging. "This isn't anything new! It's a basic advertising tactic that has been used for years" is probably what you are thinking right now. No, not at the level of accuracy, speed, and collective influence an intelligent botnet is capable of.

This is what Cambridge Analytica, a data analytics company, did during the Brexit and Trump campaigns. It used big data to create personality profiles and deployed A.I. botnets to prey on users with manipulative political messaging that ultimately changed their behavior. The company started by collecting massive amounts of data from data brokers and social media companies. It used that data to develop a personality profile, a.k.a. a psychographic profile, for each individual user. It then used A.I. botnets running automated scripts to target each of those users with A/B testing tactics, probing their responses to different news articles, fake news, advertisements, and dark posts. Each response feeds back into the user's psychographic profile, and the manipulation continues.
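To make the loop concrete, here is a minimal sketch of the probe-and-update cycle described above, written in Python. Everything in it is hypothetical and illustrative: the message variants, the epsilon-greedy selection, and the per-user engagement counts are my own assumptions, not Cambridge Analytica's actual system.

```python
import random
from collections import defaultdict

# Hypothetical message variants a bot account might push at a user.
MESSAGE_VARIANTS = ["article_a", "article_b", "fake_story", "dark_post"]

# profiles[user][variant] -> [times shown, times engaged]
profiles = defaultdict(lambda: {v: [0, 0] for v in MESSAGE_VARIANTS})

def pick_variant(user, epsilon=0.2):
    """Epsilon-greedy A/B selection: mostly exploit the variant this user
    has responded to best, occasionally probe with a random one."""
    stats = profiles[user]
    if random.random() < epsilon:
        return random.choice(MESSAGE_VARIANTS)

    def engagement_rate(variant):
        shown, engaged = stats[variant]
        return engaged / shown if shown else 0.5  # favor untested variants a little

    return max(MESSAGE_VARIANTS, key=engagement_rate)

def record_response(user, variant, engaged):
    """Fold the observed response back into the user's profile."""
    stats = profiles[user][variant]
    stats[0] += 1
    stats[1] += int(engaged)

# Simulated probing of one user, with random stand-in responses.
for _ in range(50):
    variant = pick_variant("user_123")
    record_response("user_123", variant, engaged=random.random() < 0.3)

print(profiles["user_123"])
```

Run thousands of bot accounts against millions of profiles in parallel and the "A/B testing tactic" above becomes continuous, automated probing of an entire electorate.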

For example, an African American swing voter who has read two or three articles about Clinton's negligence with her email server is bombarded on election day with fake news and dark posts about Clinton's alleged links to sex trafficking and her "super predators" remark about black people. This is highly likely to push the voter to either switch candidates or not vote at all.

“This new wave has brought the world something exponentially more insidious — personalized, adaptive, and ultimately addictive propaganda.” — Professor Jonathan Albright.

To keep it simple: every like you click on Facebook or Twitter, and every link you visit through a social media site, gives the bot more information about you. It learns your personality, your weak points, what triggers you, and, most importantly, how to use that information to manipulate you. It knows more about you than your mother or your best friend does. With that information, in a matter of milliseconds, you are exposed to dark posts, ads, and fake news tailored specifically to manipulate you personally, which will likely change your opinion and influence your voting behavior.
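Here is a second small sketch of that feedback loop, again hypothetical: the signal names, trait weights, and ad categories are invented for illustration, but they show how cheap it is to fold each like or click into a trait profile and serve the "matching" ad in milliseconds.

```python
from collections import Counter

# Assumed mapping from observed signals (likes, link clicks) to trait scores.
SIGNAL_TO_TRAITS = {
    "liked_gun_rights_page":     {"conscientiousness": 1, "openness": -1},
    "clicked_immigration_story": {"neuroticism": 1},
    "liked_meditation_page":     {"openness": 1, "agreeableness": 1},
}

# Assumed ad variants keyed by the trait they are designed to exploit.
ADS_BY_TRAIT = {
    "neuroticism": "fear-framed crime ad",
    "openness": "change-framed policy ad",
    "conscientiousness": "duty-framed turnout ad",
    "agreeableness": "community-framed ad",
}

def update_profile(profile: Counter, signal: str) -> None:
    """Fold one engagement signal into the running trait profile."""
    for trait, weight in SIGNAL_TO_TRAITS.get(signal, {}).items():
        profile[trait] += weight

def pick_ad(profile: Counter) -> str:
    """Serve the ad targeting the user's strongest trait."""
    if not profile:
        return "generic ad"
    trait, _ = profile.most_common(1)[0]
    return ADS_BY_TRAIT.get(trait, "generic ad")

profile = Counter()
for signal in ["liked_gun_rights_page", "clicked_immigration_story",
               "clicked_immigration_story"]:
    update_profile(profile, signal)

print(pick_ad(profile))  # -> "fear-framed crime ad"
```

A real system would use far richer features and models, but the point stands: each interaction sharpens the profile, and the profile immediately shapes what you see next.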

This puts our democracy at stake. Using A.I. propaganda to influence opinion and behavior is detrimental to any democracy. The human mind, no matter how educated, is no match for micro-targeting by these massive networks of artificially intelligent bots. This is a major threat to our society and sanity. It will further polarize our society along political ideologies, causing friction and communal discord. It's not a matter of whether our politicians will copy-paste Western tactics, but only of who has the better algorithm to manipulate people and who can afford to sanction these botnets. And, of course, politicians aren't short on cash these days. Hence, watch for the battle of the trolls in the upcoming elections.
