Educate.AI: A New Era in Learning
*Title created by ChatGPT.
When AI first came to public awareness, there was a general fear among educators that student assessment might be compromised. It was immediately recognized that students could use it to complete assignments, or at least to generate much of their content. What was not immediately clear was how AI could help on the instructional side. While these are separate topics, both require a great deal of attention.
My readings have revealed that educators and universities should embrace AI, as it can assist with tutoring, task automation, course creation, and more. These discussions revolve around how to leverage the technology for course improvement and time savings. The larger discussion concerns the use of AI by students, particularly around assessments.
The webinar suggested by @Lauren presented a speaker panel, including a fourth-year student, that offered a variety of opinions about AI in higher education. You can see the slides here. I will share quite a bit from this webinar as I found it very relevant to this discussion.
In terms of educators using AI, Dr. F. Alex Feltus, a professor at Clemson University, is using AI to an extreme. He created an AI assistant for his courses, called Pria, that has been so successful he now lists Pria in his course syllabus as a Teaching Assistant. Pria can be integrated into Canvas and other LMS platforms.
I previously shared a short article on Discord written by Michael Mace, another panelist who spoke about AI and Accessibility. This article creates an exciting picture for the future.
Dr. Nirmala Shenoy, another panelist, admitted that she has not used AI, and provided some thoughts about student use of AI.
I took this screenshot of her slide as I thought it provided an interesting insight into the concerns educators have with AI use in the classroom. Despite the slide using words such as advancements and empowerment, her thoughts were not about a future of hope but rather about concern. She believes that report writing is an important skill that may be lost, and that “garbage in = garbage out” applies here. Although she did maintain that AI is here to stay, she concluded that we need to figure out “how to best use the tools”. I agree, and I am particularly interested in the question she posed: “Can AI harm the students’ ability for critical thinking and problem-solving or enhance it?” If we figure out how to best use AI, then it should enhance these skills.
The last panelist I will mention is the student panelist, Josh Garner. As expected, he embraces AI, and his message was that students need to develop AI literacy to succeed beyond university. He stresses that allowing it in the classroom levels the playing field: students are going to use it anyway, and those who do gain an unfair advantage over those who don’t.
While I do agree with his points, I disagree that it levels the playing field. From what I understand, AI can be expensive to use, and perhaps not everyone will have the skill set to engage it to the same level. However, I suppose that can be said of any new technology. I am dating myself here, but I remember in my UBC days my roommate had a computer in her room (which was amazing in itself) running Microsoft Windows 3.0 (but no internet yet!). I guess it can be said she had an unfair advantage. But boy, did we enjoy playing Minesweeper!
Let’s agree AI cannot be banned for student use (instructors, however, will love using it).
As an educator, what I am most interested in are the questions posed by Dr. Shenoy.
- Can AI harm the students’ ability for critical thinking and problem-solving or enhance it?
- How do we best use the tools?
Some answers, or perhaps only opinions on the matter, can be found in our readings and in Dr. Couros’ lecture. Ben Talsma did a much better job than I did of highlighting examples of new technology that were once considered ‘cheating’ in his Chalkbeat article. Calculators and spell check are great examples, and I lived through these ‘cultural evolutions’. The attitudes toward those technologies within the educational space feel similar to what is being thrown at AI now within the same space.
Ben makes the same point that Dr. Shenoy made: students need to be prepared for the world. He provides an example of how to incorporate this into pedagogy: give students a sample of AI-generated work and have them work to improve it. I feel this simple exercise alone addresses Dr. Shenoy’s concern about critical thinking and problem-solving. A second example he provided was to have students fact-check ChatGPT’s writing. If this is done in an engaging way, what a lesson in critical thinking!
In addition, I think it provides a lesson in using AI to create content: it should not be relied on to produce reliable content. Another “feature” of AI that should be pointed out to students is that it produces biased results, which is very well highlighted in Dr. Couros’ AI-created images. This bias will not always be so overt, and it is up to the AI user to critically assess the content through an EDI lens.
I found the Generative AI guidelines for faculty and instructors at the U of Regina an interesting read. The document offers guidelines to help instructors prevent the misuse of AI. I think the guidelines are open-minded about the use of AI; what isn’t offered are any guidelines for categorizing the “misuse” of AI. Except for the section on indicating AI-generated content, this is left to the discretion of the instructor. Perhaps over time these definitions will be refined as the various ways AI can be used become clearer. Dr. Couros’ lecture highlights different ways AI can be used in the example of the student in the TikTok video (0:45:00). Dr. Couros notes that she doesn’t even feel like she is cheating. What do you think? What is cheating? Does this live up to your definition of Academic Integrity?
Is using a calculator in class cheating?