When Lawyers Attack – AI Insights from our Friends in the Legal Profession

Image: Scale of justice 2.svg by DTR, Public Domain

I have to admit that when I started researching the ethical considerations of Artificial Intelligence (AI) and Machine Learning (ML), my first instinct was to stick with education databases.  After all, I am an educator; teaching, learning, and curriculum are in my wheelhouse.  However, the more I thought about the ethical issues surrounding the development and use of AI, the more apparent it became that I needed to expand my search.  So I started searching law libraries (and their associated online collections) and was hit with a figurative landslide of material addressing a myriad of issues.  Lawyers across the world have been grappling with the ramifications of these new technologies, and they have sent me down a rabbit hole exploring bias, government policy, and intellectual property.

1. Sources of Bias in AI – Bad data and Following the Money

AI is trained on huge data sets created by people just like you and me.  As the authors of this article point out, these data sets often omit marginalized groups while overrepresenting others.  This results in AI models reaching bias-laden conclusions.  Why is this important?  AI is becoming pervasive across medicine, court systems, commerce, and banking.  For example, an AI-assisted system that determines creditworthiness might deny a person of color a bank loan based solely on the area where they live (the authors point out that many North American cities and neighborhoods are still stratified along racial lines).  As they note, algorithms on popular job websites have forced Black and Latino American applicants to limit racially identifiable information on their resumes (to increase their chances of interviewing for jobs).
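The proxy-variable mechanism described above can be illustrated with a toy simulation (everything here is hypothetical and invented for illustration, not taken from the article): a "model" trained on historically biased lending decisions reproduces that bias through zip code alone, even though race never appears as a feature.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical synthetic training data: historical loan decisions where
# zip code acted as a proxy for race (standing in for the racially
# stratified neighborhoods the article describes).
# Zip codes 0-4: historically redlined areas; 5-9: affluent areas.
def make_applicant():
    zip_code = random.randint(0, 9)
    income = random.gauss(50_000, 15_000)
    # Biased historical label: low zip codes were denied regardless of income.
    approved = income > 40_000 and zip_code >= 5
    return zip_code, income, approved

data = [make_applicant() for _ in range(10_000)]

# A naive "model" that memorizes the approval rate per zip code --
# a stand-in for any learner fit to this biased history.
counts = defaultdict(lambda: [0, 0])  # zip -> [approvals, total]
for z, _, ok in data:
    counts[z][0] += ok
    counts[z][1] += 1

def predict(zip_code):
    approvals, total = counts[zip_code]
    return approvals / total > 0.5  # approve if most past neighbors were approved

# Two applicants who are identical in every respect the bank cares about
# get different outcomes based purely on where they live:
print(predict(2), predict(7))  # prints: False True
```

The point of the sketch is that nothing in the code mentions race; the bias enters entirely through the historical labels, and any model trained on them inherits it.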

How did we get to this point?  The authors suggest that we follow the money.  The companies currently developing AI technologies in Canada and the United States are overwhelmingly white and male.  The venture capitalists who fund this research are (unsurprisingly) also white males.  When investors look for companies to invest in, they favor those run by people with similar backgrounds, training, and experience.  Furthermore, they often select companies that they, or their peers, already have connections to.  This means money flows in a circular pattern through the same professional and social circles (which are mostly white).  Thus, venture capital bakes in bias by selecting non-diverse companies to invest in.  There is no incentive to create more inclusive approaches to AI because the people bankrolling the operation are part of a privileged majority and don't demand it.

2. Threats to Privacy & Government Responses

In their article, Gaon and Stedman point out that AI poses a particular challenge when it comes to protecting privacy rights in Canada.  First, as they note, AI-based algorithms are scraping data from an enormous number of sources, which poses a serious threat to our personal information (Canadians do not have a say in what bots collect and use as the basis for their decision-making processes).  Second, Gaon and Stedman caution that AI-based technologies employed in the interests of public safety (facial recognition software, software used to predict violent altercations based on body language, etc.) may violate our fundamental rights to privacy without us knowing it.

3. Who Owns the Things that AI Creates?

As George and Walsh argue, our current system of laws around intellectual property and patents is insufficient to address the oncoming tsunami of AI-supported (and AI-derived) innovations on the horizon.  Simply put, AI algorithms are not people, and only people have their intellectual property rights protected under our current legal system.  If AI comes up with an impressive new technology, who owns it?  Can the company that directed the AI to search for solutions lay sole claim to it?  The answers are not entirely clear, and the authors of this article have urged governments to create AI-specific intellectual property laws and frameworks rather than trying to apply existing precedents to situations that weren't even remotely feasible when the laws were written.  As they point out, the countries that lay the groundwork now are most likely to reap the benefits of the investments made by companies actively developing these technologies.

Overall, this is a lot to think about – although the more I read, the more certain I am that my project will further pursue the ethical dimensions of AI as they pertain to digital citizenship and media literacy.

