My Design Extended Explanation
Problem / Opportunity
Generative AI tools such as ChatGPT have quickly become part of everyday academic life for university students, especially first-year undergraduates who are still forming their learning habits and academic identities. Students now routinely use generative AI to summarize readings, check understanding, brainstorm ideas, and draft assignments. However, recent work on AI literacy suggests that many learners still lack confidence and strategies for critically evaluating AI-generated outputs (Long & Magerko, 2020; Ng et al., 2021). They often cannot determine whether a response is accurate, when it requires verification, how to recognize hallucinations, or how to identify biased or incomplete information. In higher education, where independent reasoning and responsible knowledge use are central goals, this difficulty is particularly concerning.
Emotional and motivational factors make this gap even more consequential. Many first-year students describe AI responses as confident and authoritative, even when they are unsure about their correctness. This can lead to over-reliance on AI or to passive acceptance of answers that are not fully understood (Kasneci et al., 2023). At the same time, students worry about whether their AI use might count as cheating and whether heavy reliance on AI undermines their own learning. They may experience a tension between convenience and growth, using AI to cope with workload while feeling that they are not truly developing their skills. These overlapping concerns about credibility, integrity, and overuse point to the need for learning supports that combine critical evaluation practice with emotional reassurance and that help students reposition AI as a tool requiring judgment rather than as an unquestioned expert.
Together, these conditions create a clear opportunity for an informal learning tool that fits into students’ existing routines. Such a tool should support microlearning, so that learners can practice AI literacy skills in short, frequent sessions rather than through a single intensive event. The core challenge is not only to explain risks and guidelines, but to help students build everyday habits of evaluation, reflection, and responsible use whenever they work with generative AI in their courses.
Audience
AIPAWS is designed for registered undergraduate students, with a particular focus on those in their first year of study. This group stands at the intersection of high generative AI use and emerging academic identity. New university students often turn to efficiency-oriented tools while they are still learning institutional expectations about academic integrity, citation practices, and independent thought. Many have only partial or fragmented understandings of how generative AI works or how its outputs should be evaluated in academic contexts (Ng et al., 2021).
These students also face time pressure and cognitive overload as they adjust to new workloads and new forms of assessment. It is therefore unrealistic to expect them to seek out long, optional workshops on AI ethics or literacy. They are more likely to engage with short, accessible learning experiences that can be fit into everyday routines. Research on self-directed learning suggests that learners benefit when they have opportunities to exercise autonomy, monitor their own understanding, and reflect on their choices in flexible ways (Knowles, 1975; Garrison, 1997).
AIPAWS therefore targets students who already use generative AI for coursework, feel uncertain about the validity and appropriateness of AI outputs, have limited access to systematic AI literacy instruction in their daily practice, and would benefit from low-stakes, frequent opportunities for reflection and practice outside formal class time. Instructors and academic support staff are secondary audiences in the sense that they may recommend the app, but the primary users are students working independently.
How AIPAWS Addresses the Problem
Design overview:
AIPAWS is a quiz-based microlearning application that supports the development of generative AI literacy through short, playful interactions. At the center of the design is a virtual pet named Lumo, whose visual design was generated by AI (OpenAI, 2025). The pet’s growth reflects the learner’s engagement and progress over time. Students complete daily tasks composed of multiple-choice and scenario-based questions focused on generative AI use in higher education. Each activity takes only a few minutes and is followed by immediate feedback and a brief reflection tip that invites learners to consider why a particular choice is responsible, risky, or incomplete.
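The question–feedback–reflection loop described above can be sketched as a small data structure. This is a minimal illustration, not the app's actual schema; all field and function names here are assumptions introduced for clarity.

```python
from dataclasses import dataclass

@dataclass
class QuizItem:
    """One multiple-choice AIPAWS task (illustrative schema)."""
    prompt: str
    options: list[str]
    correct_index: int
    reflection_tip: str  # shown after answering, regardless of correctness

def answer(item: QuizItem, choice: int) -> tuple[bool, str]:
    """Return (was_correct, feedback) for an answered item."""
    correct = choice == item.correct_index
    feedback = "Correct!" if correct else "Not quite: compare the options again."
    # The reflection tip always follows the feedback, inviting the learner
    # to consider *why* a choice is responsible, risky, or incomplete.
    return correct, feedback + " " + item.reflection_tip
```

Pairing the tip with every answer, rather than only with wrong ones, reflects the design goal of prompting reflection even when the learner chooses well.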
The pedagogical content is grounded in existing AI literacy frameworks (Long & Magerko, 2020; Ng et al., 2021), which conceptualize AI literacy as an integration of conceptual knowledge, practical skills, critical evaluation, and ethical reasoning. AIPAWS translates these dimensions into short questions that align with the time constraints and attention patterns of undergraduate life.
Visual identity and learning journey:
The visual identity of AIPAWS also supports its pedagogical aims. The name AIPAWS was chosen to symbolize step-by-step learning, like leaving small pawprints that gradually accumulate into meaningful progress in AI literacy. The colorful letters and pawprint imagery create a light and approachable atmosphere that contrasts with the way AI is often perceived as complex or intimidating. At the center of the experience is the virtual cat Lumo, whose growth mirrors the learner’s progress. As students complete quizzes and engage with reflection prompts, the cat becomes livelier and displays new expressions. This visual progression reinforces the idea that learning AI concepts can be incremental and enjoyable. It also builds an emotional connection that encourages students to return regularly and to treat AI literacy as a playful, low-pressure journey rather than a purely technical challenge.
Learning modules:
The content is organized into four modules:

(a) AI Basics introduces simple explanations of how generative models are trained, what hallucinations are, and how these systems differ from human reasoning.

(b) Ethics and Safety focuses on data privacy, academic integrity, appropriate disclosure of AI use, and recognition of biased or harmful outputs.

(c) AI in Daily Life presents realistic study situations, such as how to use AI to outline a paper or explain a difficult concept, and asks learners to judge when to accept, revise, or verify AI responses.

(d) Creative AI addresses responsible co-creation with generative AI by exploring prompt construction, hybrid human-AI workflows, and authorship boundaries, with attention to preserving students’ own voices.

Learners earn progress by completing tasks within each module. As they advance, they unlock higher-level units with more complex scenarios and questions. Progression through the modules exposes students to AI literacy concepts in gradually deepening ways without requiring long, continuous study sessions.
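One way to realize the unlock mechanic described above is a simple threshold rule. The rule and the threshold value below are illustrative assumptions, not the shipped algorithm.

```python
# Illustrative unlock rule (assumption): a unit opens once the learner has
# completed a threshold of tasks in the preceding unit.
TASKS_TO_UNLOCK = 5  # hypothetical threshold

def unlocked_units(completed_per_unit: list[int]) -> int:
    """Return how many units are accessible given tasks completed per unit.

    Unit 0 is always open; each later unit opens only after the previous
    unit has at least TASKS_TO_UNLOCK completed tasks.
    """
    open_units = 1
    for done in completed_per_unit:
        if done >= TASKS_TO_UNLOCK:
            open_units += 1
        else:
            break
    return open_units
```

Gating each unit on the previous one keeps scenarios gradually deepening, matching the module progression described above.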
Virtual pet, learning approach, and data:
The virtual pet is intended to provide affective support and encourage persistence. Studies of virtual companions and Tamagotchi-like mechanics show that such elements can motivate students to return regularly, especially when an agent’s well-being is linked to the learner’s actions (Ahn et al., 2015; Hellings et al., 2019). In AIPAWS, correct answers and thoughtful engagement make Lumo (OpenAI, 2025) more lively and expressive, whereas hurried and inattentive choices lead to neutral animations. The goal is not to punish mistakes, but to gently signal the value of slowing down and reflecting. Daily reminders from the pet invite learners back into the app in a friendly way.
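The liveliness mechanic could take many forms; one minimal sketch, under the assumption that response time proxies for attentiveness, is below. The threshold and gain values are hypothetical.

```python
def update_liveliness(liveliness: float, correct: bool, seconds_spent: float) -> float:
    """Illustrative update rule (assumption, not the app's actual algorithm).

    Thoughtful engagement raises Lumo's liveliness; rushed answers leave it
    flat rather than penalizing the learner, mirroring the design goal of
    signaling the value of slowing down without punishing mistakes.
    """
    RUSH_THRESHOLD = 5.0  # hypothetical: answers under 5 s count as hurried
    if seconds_spent < RUSH_THRESHOLD:
        return liveliness             # neutral: no gain, but no penalty
    gain = 0.10 if correct else 0.05  # reflection is rewarded even when wrong
    return min(1.0, liveliness + gain)
```

Granting a smaller gain for careful-but-wrong answers operationalizes the stated goal of valuing reflection over mere correctness.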
AIPAWS is framed as an informal, self-directed learning environment. Students can enter and exit at any time, choose modules according to their interests and needs, and decide how often to respond to daily prompts. Each session is short, supporting spaced practice instead of massed learning. Reflection tips after each question encourage students to connect scenarios in the app with their own AI use in coursework. The system records basic usage data, such as completed tasks, time stamps, and unlocked units, which can later support design-based research on engagement patterns and shifts in learners’ confidence and reported AI use.

At the current stage, AIPAWS functions as an interactive prototype that demonstrates core mechanics and representative content. Future iterations may extend the design with an integrated conversational AI agent so that students can practice writing prompts and critiquing AI responses, but the present version focuses on structured questions, daily nudges, pet-based feedback, and reflective tips that help higher education students build generative AI literacy in small, manageable steps.
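The usage records mentioned above could be captured with a schema like the following. The field names and the per-day aggregation are illustrative assumptions, shown only to make the logged data concrete.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UsageEvent:
    """One logged interaction (illustrative schema for the usage data
    described above: completed tasks, time stamps, unlocked units)."""
    task_id: str
    module: str
    correct: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def sessions_per_day(events: list[UsageEvent]) -> dict[str, int]:
    """Count events per calendar day: a simple signal of spaced practice."""
    counts: dict[str, int] = {}
    for e in events:
        day = e.timestamp[:10]  # 'YYYY-MM-DD' prefix of the ISO timestamp
        counts[day] = counts.get(day, 0) + 1
    return counts
```

A per-day count like this is the kind of engagement-pattern measure that design-based research on spaced versus massed practice could draw on.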
Acknowledgments:
The textual analysis and research synthesis presented in this design document are my original work. The visual design of the virtual pet “Lumo” referenced herein was generated using AI (OpenAI, 2025).
References

Ahn, S. J. (Grace), Johnsen, K., Robertson, T., Moore, J., Brown, S., Marable, A., & Basu, A. (2015). Using virtual pets to promote physical activity in children: An application of the youth physical activity promotion model. Journal of Health Communication, 20(7), 807–815. https://doi.org/10.1080/10810730.2015.1018597

Garrison, D. R. (1997). Self-directed learning: Toward a comprehensive model. Adult Education Quarterly, 48(1), 18–33. https://doi.org/10.1177/074171369704800103

Hellings, P., Leek, P., & Bredeweg, B. (2019). StudyGotchi: Tamagotchi-like game mechanics to motivate students during a programming course. In U. Cress, C. D. Kloos, & J. Kravcik (Eds.), European Conference on Technology Enhanced Learning (pp. 622–625). Springer. https://doi.org/10.1007/978-3-030-29736-7_53

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., & Spiegel, S. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274

Knowles, M. S. (1975). Self-directed learning: A guide for learners and teachers. Association Press.

Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–16). Association for Computing Machinery. https://doi.org/10.1145/3313831.3376727

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041

OpenAI. (2025). ChatGPT (GPT-4 version with integrated image generation) [Large language model]. https://chat.openai.com
AIPAWS Design Story: “Maya’s Choices”
Maya, a first-semester freshman, sits on her dorm bed after another long day of classes. Her friends talk excitedly about how they use AI to finish assignments more quickly. Maya listens quietly. She is curious, but she often feels unsure about whether she is interpreting AI-generated answers correctly. Sometimes the responses look confident but do not make sense, and she is never sure how to judge them.
That evening, she opens an app her professor mentioned in class, AIPAWS, a learning game where a small digital pet grows as she practices AI literacy. A round creature named Lumo appears and greets her: “Hi Maya. I grow when you grow. Ready for your first question?”
A question pops up:
When an AI gives an answer you do not fully understand, what is the best first step?
A. Accept it because AI is usually right.
B. Search for more information and compare sources.
C. Ask someone else to decide for you.
Maya hesitates, then chooses B. Lumo’s eyes brighten. “Great thinking,” it says. “Why do you think comparing sources matters?” A small reflection tip appears, and she pauses before answering the next question.
Over the next week, Maya returns to AIPAWS for a few minutes each night. Some questions are simple, like choosing which AI-generated image shows less bias. Others are harder, such as deciding who holds responsibility when AI gives incorrect academic information. When she rushes, Lumo looks tired; when she slows down and thinks more carefully, the pet glows, and its world becomes more colorful.
Gradually, Maya notices a shift. Before relying on AI for homework, she now asks herself: “Do I understand this?” “Should I verify it?” “What extra questions should I ask?” She rewrites prompts, compares multiple sources, and looks more critically at AI responses.
One evening, she opens the app and sees Lumo glowing warmly next to the message: “Growth through curiosity.” She smiles, realizing that the pet has become a reflection of her own developing judgment and that she is learning not just to use AI, but to understand it.
Acknowledgments:
This story, including the character of Maya and her journey, is my original narrative work. The companion character “Lumo,” who appears above, was visually conceived with the assistance of AI (OpenAI, 2025). During the writing process, I used OpenAI’s ChatGPT and DeepSeek’s DeepSeek Chat for editorial assistance in proofreading and language refinement.
References

DeepSeek. (2025). DeepSeek Chat [Large language model]. https://www.deepseek.com

OpenAI. (2025). ChatGPT (GPT-4 version with integrated image generation) [Large language model]. https://chat.openai.com