EC&I 832

Exploring the Ethical Maze of AI and Social Media in Education: Prepping for Tomorrow’s Insights

Tomorrow, we’re set to dive into some thought-provoking discussions with Jennifer Casa-Todd, a digital media expert and author, and Taylor Zerr, who will be sharing insights on social media, AI, and digital ethics in education. As technology becomes more embedded in the classroom, these topics have never been more relevant.

Before we meet our speakers, let’s explore some of the key content and questions around AI, social media, and the ethical considerations they bring into education.

The Ethical Dilemma of AI: Friend, Foe, or Something In-Between?

Marzia A. Coltri’s article, “The Ethical Dilemma with OpenAI ChatGPT: Is it Right or Wrong to Prohibit it?” gives us a deep dive into the complex ethical landscape of using AI tools like ChatGPT in education. On the one hand, AI can be an invaluable asset. It offers personalized learning support, helps non-native speakers, and provides easy access to knowledge. But as Coltri highlights, AI also brings some major challenges.

These challenges include concerns over critical thinking, privacy, and dependency on AI for learning. The article explores different ethical perspectives—such as utilitarianism, which weighs the risks and benefits of AI, and deontological ethics, which emphasizes moral responsibility. Coltri points out that AI can be helpful, but it might also lead to a depersonalized educational experience, where students rely on quick AI answers rather than engaging deeply with material.

Coltri’s article is a reminder that while AI is powerful, it requires a balanced and thoughtful approach.

Questions for Taylor and Jennifer:

  • Taylor, based on your understanding, what are the most important ethical guidelines educators should follow when using AI tools like ChatGPT in the classroom?
  • Jennifer, how can we encourage students to rely on their own critical thinking instead of defaulting to AI-generated answers?

Social Media: A Tool for Connection or a Source of Stress?

Social media has become a natural part of students’ lives, and Jennifer Casa-Todd’s work focuses on both the positive and negative impacts it can have. Platforms like Instagram, Snapchat, and TikTok are not just distractions; they shape students’ identities, values, and views of the world. Casa-Todd’s research emphasizes the importance of digital citizenship and teaching students to approach social media as responsible, respectful contributors—not just passive consumers.

But social media isn’t all sunshine and Instagram filters. Casa-Todd highlights that social media can lead to stress, misinformation, and issues with social comparison and self-esteem. Students need guidance to understand both the potential and the pitfalls of these platforms. Social media literacy isn’t just about using the tools—it’s about knowing when and why to use them and developing a critical eye for the content they consume.

Questions for Jennifer and Taylor:

  • Jennifer, what practical strategies do you recommend for helping students develop digital citizenship skills in a way that feels natural and engaging?
  • Taylor, do you think there are specific age groups that are more vulnerable to social media’s negative effects? If so, how can we best support them?

Ethical AI and Social Media Use: Building a Responsible Digital Culture

As we blend AI and social media into our classrooms, the need for an ethical approach is more important than ever. Coltri’s article underscores the importance of ethical guidelines and clear boundaries, while Casa-Todd’s work highlights the need for students to develop digital literacy skills that allow them to interact responsibly with both AI and social media.

Here are a few principles to keep in mind as we build toward a balanced digital culture in our schools:

  1. Use AI as a Learning Aid, Not a Replacement
    Coltri’s article reminds us that AI can be a helpful tool, but it should be used to supplement learning, not replace it. Educators can encourage students to use AI thoughtfully, seeing it as a resource rather than the main source of information.

  2. Teach Digital Citizenship as an Essential Skill
    Casa-Todd’s research emphasizes that social media literacy should be woven into the curriculum. Rather than simply discouraging social media use, educators can guide students to engage responsibly, encouraging them to think critically about the content they create and consume.

  3. Encourage Critical Thinking and Open Dialogue
    Coltri draws on Socratic principles, suggesting that encouraging students to question, debate, and discuss technology’s role in their lives can foster a more thoughtful digital experience. Casa-Todd’s focus on respectful and ethical engagement online aligns perfectly with this approach, emphasizing that students should learn to think before they click.

Additional Questions for Discussion:

  • Taylor, how can we make AI tools like ChatGPT accessible without making students overly reliant on them?
  • Jennifer, how can we encourage students to question the content they see on social media and avoid falling into echo chambers?

Looking Forward: Preparing for Tomorrow’s Insights

As we prepare to hear from Jennifer Casa-Todd and Taylor Zerr, it’s clear that the intersection of technology, ethics, and education is full of both opportunities and challenges. AI and social media are here to stay, but by guiding students to use these tools responsibly, we can help them develop a thoughtful, ethical, and balanced approach to the digital world.

Tomorrow, we’ll get a chance to dig deeper into these issues and explore ways to make our classrooms spaces where students not only learn with technology but learn about technology in ways that support critical thinking, ethical behaviour, and personal growth. Let’s keep these questions in mind as we prepare for an exciting and insightful discussion!

AI usage for this blog post: I used ChatGPT to summarize the article Taylor sent for us to read this week. After watching Taylor’s videos, I input my written notes from the videos as well. I also used ChatGPT to summarize Jennifer’s website, but I made sure to read over the resulting text to ensure it aligned with my personal views on this topic. In addition, I used ChatGPT to create headings and organize my material this week. The photos and videos were all supplemental material that I sought out to make my blog post stronger.

Hi there, I am a full-time high school mathematics teacher in Saskatchewan. I am also a single mom to my daughter, Ardann. I am currently taking my Master’s of Adult Education and Human Resources through the University of Regina. My long-term goal is to teach a mathematics course at the University of Regina. I keep busy playing volleyball in a competitive women's league twice a week, coaching the senior girls' volleyball team at my school, and doing activities with my daughter. I have a love for art as well, which I practice by drawing intricate chalk art outside for my daughter or sketching her favourite characters on my iPad.

6 Comments

  • chris brennan

    Hi Allysia, I enjoyed the format of your blog and was interested to see how you used ChatGPT to organize your post into subheadings. I like that! And now I want to learn how to do that myself.
    I liked this topic this week, and I am reminded of Chesterton’s Fence: “Do not remove a fence until you know why it was put up in the first place.” The principle resonates in many ways with how we prepare students to integrate AI. One of the trickiest parts I think about is getting teenagers to ask why the fence may be there when their brains are so focused on getting over it or tearing it down. I wonder how we as teachers can make this information relevant and practical for students without them having to learn from a painful mistake, which might be the reality.
    Thanks for posting, and I look forward to the discussions this week in class to uncover more depth in this conversation.

    • Allysia Doratti

      Thank you so much for your thoughtful comment! I’m glad you found the blog format helpful—ChatGPT has been a great tool for organizing my thoughts, and I’m sure you’ll find it just as useful when you dive in. It’s perfect for breaking down big ideas into manageable pieces.

      Your mention of Chesterton’s Fence is such a great analogy for discussing AI with students! Helping them understand why the “fence” is there before they try to tear it down is definitely one of the biggest challenges, especially when they’re naturally inclined to push boundaries. Making this relevant to them might involve real-world scenarios or simulations that let them experience the “why” in a controlled, low-risk way. For example, exploring case studies of AI misuse or unintended consequences could help them see the importance of understanding the “fences” before they act.

      I’m looking forward to discussing this more in class too—I think we’ll uncover some great strategies together! Thanks again for sharing this—it adds such a rich perspective to the conversation.

  • Miranda Wenc

    There is a lot to learn from this thorough blog post, Allysia! I quite enjoyed that you composed questions to guide your learning for the upcoming presentation. Setting intentions is a powerful tool for deep learning.

    Your quote, “the intersection of technology, ethics, and education is full of both opportunities and challenges,” is a great summary statement for this entire course! I’ve enjoyed, and had anxiety about, these opportunities and challenges and their potential impact on our future. Social polarization: not so great. Accessibility opportunities: pretty great! I think often about Mo Gawdat’s assertion (https://www.youtube.com/watch?v=bk-nQ7HF6k4) that parenting AI is important. How many versions of AI are there, and what do they know? I would imagine that we will see both inspiring and devastating effects as a result.

    • Allysia Doratti

      Thank you for your thoughtful comment! I’m glad you found the questions helpful—it’s been a great way for me to stay focused and intentional in navigating this complex topic. I couldn’t agree more about the mix of excitement and anxiety when reflecting on the opportunities and challenges AI brings. Your mention of Mo Gawdat’s assertion about “parenting AI” is so poignant—it emphasizes just how critical it is to shape AI development with care and responsibility.

      The idea of multiple “versions” of AI and their varying levels of knowledge is such an interesting point. It makes me wonder: how do we ensure that these systems are both ethical and transparent in their evolution? I also share your concern about social polarization versus accessibility. It’s a delicate balance, but conversations like this give me hope that we’re collectively moving in the right direction. Thank you for sharing the Mo Gawdat video—definitely adding it to my watchlist! Looking forward to continuing this conversation.

  • Taylor Zerr

    Hey Allysia! I love the format of your post. It makes it so easy to read and understand. Thank you, also, for posing some specific questions for me. I decided to try to answer one of them in my response to your post.

    To make ChatGPT accessible without students relying on it too much, balance is key. Teach them what AI can and can’t do. As I mentioned in my presentation, AI is great for expediting work that takes too much time, but it can also cut learning short or cut it off completely. It’s great for ideas and summaries, but its output needs fact-checking and critical thinking. I think we should teach kids to use it as a starting point, like drafting outlines, but encourage them to refine and personalize their work. If teachers can mix in some hands-on activities, like debates or projects, creativity and collaboration continue to live in the learning environment.

    I’ve noticed this balance in my teaching this year: I had students create an outline for an essay, and one student used ChatGPT for the entire thing. Great learning opportunity.

    • Allysia Doratti

      Thank you for your thoughtful response and for tackling one of the questions—I love how you highlighted balance as the key to integrating ChatGPT effectively! Your point about teaching students what AI can and can’t do is so important, especially in helping them develop critical thinking and fact-checking skills. Using AI as a starting point, like for outlines or brainstorming, while encouraging refinement and personalization, is such a practical approach.

      I really liked your example of the student using ChatGPT for their essay outline—it’s such a perfect “teachable moment.” It shows how AI can expedite processes but also how students need to engage with the content to truly learn. Your idea of incorporating hands-on activities like debates or projects is brilliant for keeping creativity and collaboration alive in classrooms. Thanks for sharing your experiences—it’s great to hear how you’re already finding that balance! Looking forward to seeing how we continue to navigate this in our teaching.
