
The Pro-AI Team’s Argument
The pro-AI team, presented by Teagan and Sheila, helped me develop a very rosy picture of what the classroom could look like if AI is integrated into education. I really enjoyed Asyia Kazmi’s (2024) article for the Gates Foundation, in which she clearly articulates three global education challenges: lack of access to individualized learning materials, lack of training on how to use technology in the classroom, and lack of time for educators to create high-quality teaching content. She offers AI solutions to all three and forcefully makes the case that responsible AI use is “no longer optional but necessary” given how much technology is already in our lives.
The Principal’s Handbook podcast (n.d.), in which Priten Shah shares his perspective on the use of AI in the classroom, was also worthwhile. Priten argues that AI can reduce teacher burnout, ease administrative burdens, and help teachers practice differentiation. He even offers practical advice for schools on the small, intentional steps they can take toward responsible AI adoption. I appreciated that his message was optimistic yet not overwhelming for educators who feel bombarded by constant change.
The peer-reviewed article Lessons Learned for AI Education with Elementary Students and Teachers was likewise supportive of using AI to help build the competencies the next generation will need. The authors make the case for AI literacy, which I believe in 100%. They also identify the lack of teacher training as the single biggest barrier to AI use in education. I can relate to that: as a teacher, if I am unsure how to do something, I do not feel comfortable bringing it into the classroom.
The Anti-AI Team’s Argument by Jessalyn and Daegan
The anti-AI team raised a number of concerns, some of which I will need to consider before using AI in my classroom. Thompson’s (2025) Opinion: AI is Harming Education makes several strong points, especially about the intellectual laziness that AI enables, the harm to teacher-student relationships, and the ethical concerns around privacy and bias. His message resonates with me: just because we can do something doesn’t always mean we should.
Finally, the peer-reviewed article by Zhai et al. (2024) raises some important concerns. The authors’ systematic review of the literature found that overreliance on dialogue systems like ChatGPT can erode students’ critical thinking, creativity, and problem-solving skills. The question that comes to my mind after reading this study is whether we are empowering students to become independent thinkers or teaching them how to avoid thinking.
Reflective Questions & My Answers
1. How can I, as an educator, model the critical and ethical use of AI for my students?
2. What policies and supports would I expect to see in place at school or at a government level before embracing AI tools?
At the school level, I would expect to see policies around data privacy, student protection, online safety, ethical use of AI, and equitable access to AI tools. I would also expect schools to provide ongoing professional development so that teachers can use AI in the classroom responsibly.
3. How do we strike the right balance between embracing innovation and preserving the human aspects of teaching and learning that are so important for K12 students?
We can strike that balance by remembering that AI tools are just that: tools that can take over some of the more tedious and repetitive tasks. At the core of education, however, is humanity, and those are qualities we cannot teach computers: empathy, encouragement, one-on-one attention, creativity, and mentorship.
Final Thoughts
This debate has allowed me to approach AI as neither an angel nor a demon. It is just a tool, and its value depends on the user. I still believe that, if used thoughtfully, ethically, and with well-considered policies and teacher support, AI can help the education system adjust to the world around us. Without sufficient forethought, however, AI can harm the very aspects of education that we care about most.
I agree with you that just like other technologies in the classroom, AI itself is neutral. Its impact depends entirely on how it is implemented. With thoughtful policies, proper guidance, and careful integration, AI can elevate education to new heights.
Chi
I appreciate your recap, and I am intrigued by the third question: How do we strike the right balance between embracing innovation and preserving the human aspects of teaching and learning that are so important for K12 students?
As I listened to the debate, two words came to my mind: apathy and empathy. Overuse and misuse of AI can drive our students to become more apathetic than they already are, and this, I believe, will only lead to a reduction in our empathy. Empathy is a vital part of humanity, and AI is devoid of it. By avoiding AI through bans, we are failing to prepare our students. By ignoring AI and not teaching it, we are failing to prepare our students. As teachers, we need to embrace the challenges that AI will bring and support our students by teaching them how, when, and why to use AI as a tool, not a toy. Thanks for your thoughtful questions.
Hi Nofisat!
I enjoyed reading your response; it was very research-based, and I could tell that you put a lot of thought and consideration into your writing. Something I haven’t seen in other posts yet is posing and then answering your own critical questions! Your second question, “What policies and supports would I expect to see in place at school or at a government level before embracing AI tools?” had me thinking about the current curriculum we have in Saskatchewan. I teach Grade 4, and part of the Grade 4 Health curriculum is teaching cyber safety and proper internet etiquette. Obviously, this was not always part of the curriculum; it was added as the online world evolved. I would not be surprised if the curriculum were further adapted in the future to address the positives of AI usage as well as its ethical concerns. What are your thoughts on this? Do you think the policy changes you speak of will come about quickly, or do you think we are still years away from them?
Hi Nofisat,
Thank you for sharing your thoughts on the role of AI in education. I really appreciated the three questions you posed. They helped me reflect on where I stand with AI in my own teaching practice.
First, your point about modeling ethical and critical AI use is so important! Students need to see us questioning AI-generated content, verifying sources, and using these tools as a starting point, not the final product. That kind of modeling helps build the digital literacy skills our students need in today’s world.
This brings me to your second question about the supports needed to implement AI effectively in classrooms. For me, the answer is clear: I need support, more support, and then even more support. AI has entered the educational landscape so quickly, and since it’s our responsibility to prepare students with the digital literacy skills they need for the real world, I feel like I still need to learn more about the capabilities and concerns around AI. Does anyone else feel this way?
Finally, a key concern for many teachers when new technology is introduced is the fear of being replaced. But your statement that AI should support learning, not replace it, is a powerful reminder that each new advancement doesn’t take away from what we do; it changes how we do it. Technology cannot replace human connection. Building relationships is at the heart of teaching, and no tool can replicate that. While AI can help with time-consuming tasks, it will never replace the value of student-teacher relationships.
Great post, Nofisat!