The Sword of Damocles – Scrambling to the Finish Line

The theme of this week has been the integration of ideas.  I have roughly 40 pages of single-spaced quotes, notes, and musings on the ethical implications of AI, drawn from roughly 20 peer-reviewed papers, blogs, and websites.  What needed to happen this week was the consolidation of this information into a usable form.  My process is unusually antiquated given the context of my presentation, but I prefer different coloured highlighters, sticky notes, and tape when it comes to identifying themes in data.  I print out my research and lay it out on the ground.  I enjoy the physical act of moving from page to page, highlighting similar points and identifying key arguments.  It looks a bit like a suspect board from a 1990s police procedural (minus the wool string and pushpins connecting ideas – I am allergic to wool), but it works for me.

The Big Themes of My Presentation

In terms of the ethical implications of AI, I will be focusing on seven broad areas:

  1. Introducing AI – setting the context of the conversation
  2. How bias enters & is reproduced in AI systems
  3. Accountability – Algorithmic black boxes
  4. Ownership – Is AI glorified theft (and the remix counter-argument)
  5. Creativity & Authenticity (Can AI actually “create” anything?)
  6. Privacy
  7. Threats to Democratic Principles

General Outline of Topics 2-4

After outlining my points, I have been writing a script for my presentation.  Given my 20-minute target length and my reading speed, this means I have roughly 3,500–4,000 words to play with (which in itself is a bit terrifying – some great points are being edited out due to time constraints).  Below you will find the script outline for the first three major issues (the actual script is completed for these sections, but you will have to wait to hear it).

How Bias enters and is reproduced in AI systems

If the data has bias, then the AI that trains on it will have bias

  • Data is human-generated and reflects our strengths and weaknesses
  • Non-representative samples (over-representation or under-representation)
  • Programmers have biases & blind spots and introduce them during data labelling (subjective and arbitrary – who is “unworthy” of credit? where is the threshold?)
  • Investment in AI research is backed by mostly white males
  • Widespread use can amplify biases

Algorithms can develop bias over time

  • No moral compass

Examples of harm

  • Employment
  • Personal credit/banking/loans
  • Sentencing/Parole/Predictive analytics (likelihood to reoffend)
  • People with disabilities (non-recognition by systems, denied participation in education, employment)

How do we mitigate bias in an AI system?

  • Difficulties


A lack of transparency

  • Algorithms are opaque
    • Users and regulators cannot examine the inner workings of most algorithms
    • Technology companies treat algorithms as “trade secrets” under current intellectual property law
  • Programmers and the systems themselves cannot explain their actions
  • Regulation is difficult
    • Companies have been left to their own devices
    • In Canada there is a lack of experts in this field

Why is this important?

  • We are using systems that we don’t understand and cannot explain. This is compounded by an inability to examine or understand the base data from which they draw their conclusions

Ownership & Plagiarism 

Creators argue that AI systems steal their work without attribution or compensation

  • Artists’ Perspective
    • They are not being compensated for their work
    • AI produces volumes of derivative work that threatens their livelihood
    • AI does not create original works
    • AI allows their work to be misused (created in the style of…)
  • Rebuttal
    • Many forms of art reuse materials without attribution
    • Re-mixing is a fundamental part of creativity (but does this apply to AI?)
    • Art is not created in a vacuum – it is fundamentally impossible for an artist to credit all their influences – essentially no one is creating “original” work
    • Remixed work can exist alongside original works
    • Corporations (which are not people) hold copyrights and have used them to suppress creativity
    • The law should regulate use – not the copies themselves
    • Digital technologies have lowered the barrier to entry which is threatening to artists and creators

AI tools prevent people from learning fundamental skills and enable plagiarism

  • Poses a threat to learning basic information and skills
  • It is best to think about how to integrate the tools to enable critical thinking and interrogation of ideas.
  • Expectations and process are key
