Deep Fakes, Fake News, Biases, and Bubbles

Note: I wrote my first draft, then ran it through ChatGPT to proofread. ChatGPT replied: “Your text is already engaging and well-organized. I’ve made some suggestions for clarity, grammar, and flow. Here’s a revised version.” I went back through it to make sure that my direct quotes were not changed and to put the links back in. I also changed some of the words and statements that just didn’t sound like me.

It feels like every week brings information overload (in the best way possible), and this week was no exception. From deepfakes to fake news, media literacy, biases of all kinds, and the filter bubble, there was plenty of discussion and reflection. So, let’s dive right in.

What is fake news, anyway? And what is the difference between misinformation and disinformation? Chris opened up his content catalyst video by explaining this distinction. He included a helpful infographic and emphasized that the difference comes down to intent:

  • Misinformation is false information spread without the intent to cause harm—people share it because they believe it to be true.
  • Disinformation is false information deliberately spread to mislead and achieve a specific goal.

https://www.maltego.com/blog/infographic-misinformation-and-disinformation/

Note to self: When teaching this to our students, it’s a great opportunity to discuss morphology and the meanings of the morphemes mis- (incorrect, as in “mistake”) versus dis- (apart, as in “distant”).

Chris also shared an insightful article from the Stanford Social Innovation Review on combating misinformation. The authors argue that we need to “radically expand and accelerate our counterattacks” (Lord & Vogt), suggesting that occasional digital citizenship courses are not enough.

Having fallen prey to disinformation myself, I can attest that combating it requires people to consistently question the content they consume. It takes practice and frequent reminders. As educators, we need to integrate this mindset into as many lessons and discussions as possible. Reminding students that it’s okay to use the CRAAP test (thanks to Kathleen for sharing) helps them adopt a more critical lens when evaluating information. Here’s the CRAAP test infographic.

Lisa led a fantastic discussion on cognitive bias and the filter bubble. In the article she shared, author Vangie Beal notes that confirmation bias is the most prominent type of bias when it comes to fake news (misinformation and disinformation). Beal defines confirmation bias as “the tendency to process information by looking for, or interpreting, information consistent with one’s own beliefs” (Britannica).

It makes sense, right? When searching for information online, we tend to agree with facts that align with our existing beliefs instead of challenging them. This is a major issue because it facilitates the spread of misinformation—something we all experienced during the COVID-19 pandemic.

What complicates this further is the filter bubble. In the same article, Vangie Beal describes filter bubbles as “intellectual isolation that can occur when websites make use of algorithms to selectively assume the information a user would want to see, and then give information to the user according to this assumption. Websites make these assumptions based on the information related to the user, such as former click behaviour, search history and location.” This creates a vicious cycle of misinformation. I can only imagine how much this has exacerbated issues that were already polarizing enough. While it’s understandable why filter bubbles exist—it seems logical for search engines to prioritize what users want to see to keep them engaged—perhaps this wasn’t fully considered during development, or the companies involved simply don’t care.
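To make Beal’s description a little more concrete, here is a minimal, purely hypothetical sketch (in Python) of how a personalization algorithm of the kind she describes might work: it ranks new articles by how much their topics overlap with what the user has already clicked, so the feed keeps drifting toward familiar viewpoints. The function and data names are my own illustration, not any real platform’s code.

```python
from collections import Counter

def personalize_feed(candidate_articles, click_history, top_n=5):
    """Rank articles by overlap with topics the user already clicked.

    candidate_articles: list of dicts like {"title": str, "topics": set of str}
    click_history: list of topic labels from articles the user clicked before
    """
    # Count how often each topic appears in the user's past clicks
    topic_counts = Counter(click_history)

    def score(article):
        # Higher score = more similar to what the user already consumes
        return sum(topic_counts[t] for t in article["topics"])

    # Surface the most "engaging" (i.e. most familiar) items first
    ranked = sorted(candidate_articles, key=score, reverse=True)
    return ranked[:top_n]

# Illustrative example: a user whose clicks lean toward one viewpoint
history = ["vaccine-skeptic", "vaccine-skeptic", "local-news"]
articles = [
    {"title": "New study supports vaccine safety", "topics": {"vaccine-mainstream"}},
    {"title": "Doctor questions vaccine data", "topics": {"vaccine-skeptic"}},
    {"title": "City council budget update", "topics": {"local-news"}},
]
for a in personalize_feed(articles, history, top_n=2):
    print(a["title"])
# The feed shows more of what the user already agrees with,
# which is the "intellectual isolation" Beal is pointing to.
```

Even in this toy version you can see the loop: whatever the user clicks feeds back into the counts, so dissenting material keeps slipping further down the ranking.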


Anna explored fake news through the lens of deepfakes, which are images or recordings convincingly altered to misrepresent someone as saying or doing something they never actually did. (Merriam-Webster)

Reading this article that Anna shared initially scared me. It made me question how we can teach students to differentiate between real footage and deepfakes, especially as the technology improves and becomes harder to detect.

After watching Anna’s content catalyst video, I realized it all comes back to teaching critical thinking and encouraging a skeptical, analytical approach. It’s ironic that while technology contributes to these issues, it also provides tools to verify information. I also believe it’s crucial to teach students about the risks of using deepfake technology. In my school division, we have an Administrative Procedure on the use of AI, which states that creating deepfakes is not permitted, even for educational purposes. As educators, we can foresee the potential dangers, but our students might not yet understand these implications.

My hope for the future is that our students will continue to think critically, questioning the content they consume online and from other sources. While there was initial fear that increased technology use would make young people less intelligent, I believe it’s pushing us to think in new and different ways.

