Artificial intelligence technologies, particularly generative AI, are significantly changing numerous aspects of human life, including education and content creation. In the digital age, the integration of generative AI tools into academic writing and content generation is fundamentally changing the way students learn, teachers educate, and researchers approach their work (Kim et al., 2024; Wang, 2024). While these tools offer numerous benefits, it is crucial to understand how they are being used in education. This post therefore explores students' perspectives on the use of generative AI for academic purposes and content creation.
This blog post is my EC&I 832 final research project, which explores students' perspectives on the use of GenAI for academic purposes and reflects on its implications for ethical digital citizenship. Using a qualitative approach embedded within the interpretive paradigm, I conducted one-on-one semi-structured interviews with a convenience sample of two high school students and four undergraduate students. The data generated were coded, categorized, and analyzed using content analysis.
Participants' perceptions of the use of generative AI for academic purposes and content creation are discussed through the lens of Ribble's digital citizenship framework (Ribble, 2023) and the Technology Acceptance Model (Davis, 1989). The Technology Acceptance Model posits that specific beliefs about technology use, such as perceived ease of use and perceived usefulness, influence users' attitudes toward a technology and their behavioural intention to use it.
Perceived usefulness (PU) refers to the degree to which a person believes that using a particular system would enhance their job performance (Or, 2024; Akbarini, 2024), whereas perceived ease of use (PEU) refers to how easy a technology is to use (Or, 2024). Research suggests that PEU significantly influences actual technology use, sometimes even bypassing intention (Or, 2024; Ampo et al., 2024). Using TAM in this study helps shed light on the processes underpinning students' acceptance of generative AI for academic writing and content creation. Heick (2021), in turn, described digital citizenship as 'self-monitored habits that sustain and improve the digital communities we enjoy or depend on'. Ribble's (2023) digital citizenship framework outlines nine essential elements that guide individuals in their digital interactions, promoting ethical and informed use of technology. Notably, many of the participants' responses overlap across several of these elements when interpreted through the lens of the framework.
Results of the study revealed that participants perceived generative AI to be both useful and easy to use. In terms of usefulness, participants noted that generative AI tools help streamline their academic work, assist in brainstorming, and improve writing clarity. For instance, Judah, an undergraduate student, indicated that generative AI provides answers to questions with detailed explanations and assists with test preparation by generating practice questions. Other participants likewise recognized the usefulness of AI in enhancing learning and productivity, as shown in the excerpts below.
“I use generative AI tools to explain schoolwork… I basically use ChatGPT to research information and sometimes break down ideas, and then I use Google to confirm if that information is correct” (Isaiah, Grade 9). This indicates that he finds AI useful for understanding and completing schoolwork.
“Generative AI is capable of generating new images, texts, and videos based on information or data that is given or entered into the tool/system” (Gaalda, undergraduate). This highlights the perceived usefulness of AI in creating diverse content for academic purposes.
“AI is a technology that can be used to answer most problems” (Mariah, undergraduate). This reflects her belief in the usefulness of AI for problem-solving.
These responses indicate a positive perception of the usefulness of AI in enhancing education. This suggests that educators can focus on demonstrating the practical applications and benefits of these tools in academic settings, thereby strengthening students' perceived usefulness of generative AI. Regarding ease of use, some participants mentioned that AI tools made tasks easier and faster, suggesting that they find these technologies user-friendly. However, they also raised concerns about over-reliance on AI, which points to a need for better training or support so that students can use these tools effectively without compromising their learning. For instance, Vashti, a Grade 11 student, indicated that AI makes her work easier because she can simply give it instructions. She went on to explain, “… I just provided what I wanted my poster to look like, and the app generated a poster for me in school, making the whole work faster” (Vashti). Supporting Vashti's response, Judah indicated, “I personally use AI tools to facilitate my study, like generating questions for me to work on and their solutions. I also find it easy to use Meta AI for making quick references and clarifications on ideas.” This suggests that he finds AI easy to use and accessible. However, he noted that one sometimes needs to provide detailed input to achieve the desired results: “Like when I want to generate images, I have to provide every detail of what and how I want it; if not, I will not get the result I desire.”
Probing further into students’ perceptions regarding the use of generative AI for academic purposes revealed that participants highlighted the importance of using AI responsibly and ethically, emphasizing the need for students to engage with AI tools in a way that respects academic integrity and originality. This reflects the element of digital etiquette, which encourages respectful and responsible behavior online. Participants also expressed concerns about the ethical implications of using AI, including issues of dependency and authenticity. The following are excerpts from participants:
“My concern is majorly the fact that Gen AI can’t be trusted blindly as it can make mistakes… students tend to be lazy when they know there is software that can help them do what they’re supposed to do” (Isaiah, Grade 9 student). This reflects a concern about over-reliance on AI and its impact on critical thinking.
“Using AI at a younger age is not good because you become reliant on it, and such people might end up not knowing anything” (Belshaba, undergraduate student). This highlights the potential negative consequences of depending on AI.
“I think people could become too dependent on these tools, which can lead to plagiarism; it does not help students think on their own” (Gaalda, undergraduate student). This emphasizes the ethical concern regarding academic integrity and the importance of maintaining critical thinking skills.
Gaalda's response also speaks to the need for digital literacy: students need to learn how to critically evaluate AI-generated content and understand its limitations. The issue of digital fluency and literacy also appears in Isaiah's response, where he said:
“Sometimes you have to enter detailed information when asking AI input questions. For example, when I want to generate images, I have to provide every detail of what I want and how I want it; if not, I will not get the desired result. ChatGPT makes a lot of mistakes. Sometimes, when I use it to do math, it does give correct answers, but on some occasions, I figure out the answers are wrong, and that is a big challenge. As a result, I sometimes don’t trust what ChatGPT generates because it’s not fully correct.”
When asked about privacy concerns around the use of GenAI, one of the participants said, “ChatGPT tells its users not to input sensitive or private information. Gen AI has done its part concerning privacy. It’s left to its users” (Judah, undergraduate). Furthermore, participants emphasized the importance of maintaining academic integrity and properly citing AI-generated content. This relates to the rights and responsibilities element of digital citizenship, where students must understand their obligations when using digital tools. This was noted when Gaalda indicated that students can use these tools responsibly by limiting how they use AI and not allowing AI to do all their work for them. Mariah echoed this when she said, “WhatsApp and Facebook have Meta AI, which I sometimes use to generate images, but like ChatGPT, I do not use it because most of my schoolwork does not support students to use AI. Nevertheless, I sometimes make use of the Meta AI to help me understand some things.” These statements reflect an understanding of digital responsibility.
The analysis of student perspectives on AI reveals a complex interplay of perceived usefulness, ease of use, ethical considerations, and responsible digital citizenship. While students recognize the potential benefits of AI in enhancing their learning experiences, they also express valid concerns about dependency, authenticity, and the ethical implications of its use. As AI continues to evolve, it is essential for educators and policymakers to consider these perspectives to foster an environment that encourages responsible and effective use of emerging technologies in education. By addressing these concerns, we can ensure that AI serves as a valuable tool for learning rather than a crutch that undermines critical thinking and creativity.
It is therefore important for educators, students, and technology developers to engage in ongoing discussions about the role of AI in education. Sharing experiences, concerns, and best practices can help create a more informed and responsible approach to integrating AI into learning environments. All school stakeholders need to work together to harness the power of AI while promoting its ethical and responsible use in the education system.