Cheaters Cheaters Pumpkin Eaters

The current hot topic in the world of online classes is cheating. More specifically, cheating using AI. Seeing as how this week we were discussing the ethical implications of incorporating technology into the classroom I felt this was the perfect opportunity to talk more about this topic and really just get my thoughts out there. I tend to stay pretty quiet during team meetings, so I’m going to use this as my personal space to word vomit a little bit of what I’ve been thinking during these discussions. Sorry in advance if it at any point becomes incoherent.

A hard truth for some to accept is that if a student wants to cheat, they're going to find a way to cheat. There is no stopping that. You might say, "What if I make the punishment for cheating really severe?" I don't think that really matters. The possible punishment doesn't deter a student from cheating. At most, it gives them a better understanding of how hard they're going to have to work to ensure they don't get caught. There are students who will spend more time coming up with a plan for how to cheat than it would've taken them to just do their assignment or study for their test.

The common refrain about cheaters is that they're robbing themselves of the opportunity to learn. Their goal is just to finish the task. Paraphrasing Erik Winerö's TED Talk, the goal isn't where the learning takes place; it's the journey towards the goal where we learn. These cheaters may be robbing themselves of the chance to learn, but if they don't have an interest in the topic, then they're bound to be indifferent to the fact that they're not learning. As Dean said at the end of this week's class, and I'm paraphrasing again, sometimes students aren't doing a task to learn, they're just doing it out of obligation.

To cheat or not to cheat is each individual student's own ethical dilemma. It's been here long before AI and will be here long after. Some students are going to side with the thought that utilizing AI to complete their work is wrong, and others are going to see no issue with it. Plug any form of cheating into that last sentence and the point still applies. So, with that said, students are going to approach AI the same way they would any other form of cheating. With that in mind, the focus shouldn't be on how students might use it to cheat, but instead on what we consider to be cheating with AI.

Let's go back to a comparison I have undoubtedly made before: the calculator. From what I recall of my brief time in schools almost a decade ago, the younger students scarcely used the calculator. Learning the DMAS portion of BEDMAS seems pointless if a calculator is being used to solve it in seconds. At this point the calculator would be considered cheating because it prevents the actual learning from taking place. Then we start getting into algebra and plotting graphs. At this point students are expected to have a solid understanding of math's basic foundations, and here's where the rule changes. A basic calculator is allowed because it saves students from wasting time on concepts they already know, thus giving them more time to practice the concepts they don't. At the same time, a graphing calculator is not allowed because it would once again prevent true learning from taking place. The blueprint is right there, so let's take it and apply it to AI.

Covering each AI tool and all the potential school subjects would take far too long (and might cut into my final project), so let's narrow it a little. Let's focus on learning writing skills and the use of ChatGPT. When a student is first learning writing skills, they learn things like basic sentence structure, basic grammar, the hamburger method and so on. This is providing them with the knowledge of not only how to write, but how to read. At this point it's cut and dried: using AI to hand in an assignment that already has these things solved would prevent learning from occurring. Side note: thanks to the discussion I had with my breakout group, I cannot shake the mental image of a third grader submitting an assignment far beyond their knowledge of a topic because they didn't specify to ChatGPT what grade level the submission was for.

Now let's jump ahead. Say a high school student must write an essay on a moment in history. Just like with the graphing calculator, they could reach the goal with just a few clicks, robbing themselves of learning and qualifying as cheating. But what if they use it to eliminate wasted time on some of those basic portions of writing? The student could spend their time researching that moment in history and learning about what took place, create a rough draft of an essay with all the ideas and points they want to get across, and then use AI to fix any of the errors they made along the way. Some may still consider this cheating, but comparing it to the calculator scenario, instead of wasting time going over the basics, this student is using AI to save time and focus more on the new skills they're being asked to learn. In my mind that's not cheating, that's being productive.

Some of you may disagree with me, and that's okay. I would love to hear your perspectives or examples of what qualifies as cheating. My takeaway, having thrown all my thoughts onto the page, is that it's important to be open to allowing AI into the classroom. Trying to ban it altogether is doomed to fail. Students will inevitably find a way to use it, likely in a way where no learning occurs. To combat this, we have to find ways to not only allow it in the classroom but encourage students to use it. This way we can attempt to gain some benefit from it, rather than foolishly assuming that ignoring it will make the problem go away. It should come as a shock to no one that AI will be taking the Jordan Belfort stance: it's not leaving. Let's find a way to focus on the positives.

