As the deadline for my project fast approaches, I find myself scrambling to get in “one last article” for fear of missing out on a key idea or insight. The ethical implications of Artificial Intelligence (AI) are far-reaching. In my opinion, it may be the largest technological shift our society has undergone since the advent of the Internet or the industrialization of manufacturing. My intention this week is to start synthesizing the information I’ve collected and to identify key themes.
But there is something still missing.
Since teachers in my province have found themselves thrust into the spotlight (due to contract negotiations), I started to wonder how AI will impact the political process. Everyone is well aware of the foreign election interference in both Canada and the United States. Influence peddling is nothing new. Foreign actors will always try to sway things in their favour (I am sure that we and our security partners have run similar influence campaigns for years), but AI’s ability to work with truly massive data sets presents new opportunities to undermine our institutions. To this end, I read an article from the Yale Journal of Law & Technology entitled Artificial Intelligence: Risks to Privacy and Democracy by Manheim and Kaplan. Below are some of the insights it provided.
The biggest threat may not be from foreign actors
I have long believed that my decisions are my own. Evaluating information, making choices, and living with the consequences are at my sole discretion. I do the things I do because I have chosen to, and no one else. According to Manheim and Kaplan, this may not be the case. As they point out, our daily existence creates an enormous amount of information, facilitated by the Internet of Things (IoT): the sum total of all the devices, gadgets, and trinkets that are connected to the Internet. This data is the “lifeblood” of AI. By its very nature, AI correlates, connects, and analyzes these disparate bits of information on behalf of corporations and produces detailed user profiles used to “micro-target” information and advertising. As they put it:
“Sophisticated manipulation technologies have progressed to the point where individuals perceive that decisions they make are their own, but are instead often ‘guided’ by algorithm. A robust example is ‘big nudging,’ a form of ‘persuasive computing’ ‘that allows one to govern the masses efficiently, without having to involve citizens in democratic processes.’ Discouraged political participation is one of the aims of those who abuse AI to manipulate and control us.”
This means my wants, desires, and opinions may not be fully under my control. My opinion is being silently guided by AI-powered technologies telling me how to vote, what to think, and, most importantly, what to buy. This not only disturbs me; it offends my sensibilities at a fundamental level. Intellectually, I think everyone is aware that they are susceptible to marketing. If it did not work, it would not be a multi-billion-dollar industry employing millions of people. On the other hand, I believed that practicing a degree of media literacy inoculated one against advertising’s most egregious effects. This may not be the case. As per our class discussions, the individual cannot be solely responsible for protecting themselves from AI-enabled influence campaigns. Governments need to abandon their laissez-faire attitude toward AI regulation and get serious about enacting new laws to rein in this “invisible hand,” sooner rather than later.
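To make the mechanics a little more concrete, here is a minimal, hypothetical sketch (in Python) of the kind of data fusion the authors describe: disparate signals are folded into a single profile, and the “strongest interest” then decides which message a person sees. Nothing here comes from the article itself; every name, weight, and message is invented, and real systems are vastly more sophisticated.

```python
# A deliberately simplified, hypothetical sketch of how disparate signals
# might be fused into a single profile used for micro-targeting. Every name,
# weight, and message here is invented for illustration only.

from collections import Counter
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    interests: Counter = field(default_factory=Counter)

    def ingest(self, signal: str, weight: float = 1.0) -> None:
        """Fold one observed signal (a search, a purchase, a smart-device
        event) into the profile, weighted by how telling it is."""
        self.interests[signal] += weight

def pick_message(profile: UserProfile, variants: dict[str, str]) -> str:
    """Serve the message variant keyed to the user's strongest interest."""
    top_interest, _ = profile.interests.most_common(1)[0]
    return variants.get(top_interest, variants["default"])

profile = UserProfile("user-123")
profile.ingest("news_politics", weight=2.0)  # smart-TV viewing history
profile.ingest("health", weight=1.0)         # fitness-band activity data
profile.ingest("news_politics", weight=3.0)  # search queries

print(pick_message(profile, {
    "news_politics": "Candidate X will fight for YOUR community.",
    "health": "Candidate X will protect public health care.",
    "default": "Vote Candidate X.",
}))
```

The unsettling part is not any single line of this sketch; it is that each of us generates thousands of such signals a day without ever seeing the profile they feed.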
Democracy itself is threatened by the unchecked use of AI
As Manheim and Kaplan warn, voters are being targeted with disinformation, psychometric profiling, and other techniques with the end goal of undermining the fundamental tenets of democracy. As they point out, these methods are nothing new, but AI’s ability to rapidly and accurately disseminate and target them is. In their words,
“…data fusion and analytics reveal deeply personal and granular detail about each ‘data subject,’ which is then used to micro-target and emotionally influence what should be a deliberative, private, and thoughtful choice [voting]. This process of psychometric profiling uses quantitative instruments to manipulate behaviors. Free will is the obstacle here, which AI can help overcome.”
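Again, to illustrate rather than to document: the passage above describes “quantitative instruments” that score individuals and tailor emotional framing to the score. A toy version might look like the following sketch; the trait, the threshold, and the messaging are all made up, and no real psychometric model is this crude.

```python
# A toy, hypothetical illustration of psychometric micro-targeting: score a
# voter on one personality trait, then match the emotional framing of the
# message to that score. All markers, thresholds, and copy are invented.

def anxiety_score(liked_pages: set[str]) -> float:
    """Crude stand-in for a psychometric model: the fraction of a user's
    likes that fall in a (made-up) list of anxiety-correlated topics."""
    anxiety_markers = {"crime_watch", "border_security", "disaster_prep"}
    if not liked_pages:
        return 0.0
    return len(liked_pages & anxiety_markers) / len(liked_pages)

def frame_message(score: float) -> str:
    """Pick fear-based framing for high scorers, hopeful framing otherwise."""
    if score > 0.5:
        return "They are coming for your way of life. Only we can stop them."
    return "Together we can build a safer, fairer country."

voter_likes = {"crime_watch", "border_security", "gardening"}
print(frame_message(anxiety_score(voter_likes)))  # selects fear-based framing
```

What makes this dangerous at scale is automation: the same two functions, run across millions of voters, quietly hand each person the framing they are most vulnerable to.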
They point out that there is a fundamental conflict of interest for the tech companies that serve as vectors for spreading and collecting this information (i.e., the social media conglomerates). While Facebook and YouTube make superficial efforts to rein in “fake news” and election misinformation, they make enormous profits from those who view it. What incentive do these companies have to clean up their act when the most sensational and salacious content gets the most viewership? Compound this with the decline of traditional media and journalistic practices, and it seems that AI is not making us more free or democratic.
However, it would be remiss of me not to note that these tools can be turned on their masters as well. As Manheim and Kaplan note, the same AI-based information warfare that has been turned against democracy might be the key to fighting back. I don’t know how comfortable that makes me feel: elections hanging in the balance, decided not by the will of the voters but by whoever has the strongest algorithm.