
Reasoning Behind Design

Theoretical framing

As scholars in AI literacy have noted, learners increasingly need the ability to evaluate and monitor AI-generated information rather than rely on it uncritically (Long & Magerko, 2020; Ng et al., 2021). These competencies include recognizing uncertainty, questioning correctness, and understanding the limits of AI systems. This emphasis aligns with foundational work in self-directed learning (SDL), which frames learning as an active process in which students set goals, evaluate resources, and regulate their own progress (Garrison, 1997; Knowles, 1975). As university students integrate generative AI into everyday coursework, these tools can support learning by offering explanations and feedback, yet they may also encourage overreliance or reduce opportunities for judgment and reflection. AIPAWS responds to this challenge by translating principles from AI literacy and SDL into short, playful learning interactions that invite students to practice evaluation, reflection, and regulation in a manageable daily routine.

Conjecture 1: Daily microlearning supports evaluative judgment

Evaluating the reliability of AI outputs is a central component of AI literacy, but many students, especially first-year undergraduates, struggle with this task in real academic contexts (Long & Magerko, 2020; Ng et al., 2021). With limited experience assessing information quality independently, they may accept AI responses simply because they appear fluent or confident. Research on self-directed learning suggests that judgment develops through repeated cycles of checking, monitoring, and adjusting understanding over time rather than through a single instructional event (Garrison, 1997).

AIPAWS incorporates this principle by organizing learning into short daily micro-activities. Each session presents a brief scenario or question prompting learners to evaluate the accuracy, clarity, or appropriateness of an AI response. The low-stakes format makes it easier for first-year students to participate without feeling overwhelmed, while the daily rhythm supports spaced practice. Through repeated exposure to similar decision points, students strengthen habits of questioning AI outputs, recognizing uncertainty, and verifying information before relying on AI responses in their coursework.

Conjecture 2: Reflective prompts support metacognitive monitoring

AI literacy and SDL research emphasize that learners must not only evaluate information but also monitor their own understanding, especially when interacting with confident-sounding AI systems that may be mistaken (Garrison, 1997; Long & Magerko, 2020). First-year students, who are still adapting to the demands of academic judgment, may be particularly likely to accept AI output quickly to complete tasks.

AIPAWS addresses this through short reflection prompts that appear immediately after each quiz item. Instead of only revealing the correct answer, the app asks students to consider why a decision is responsible, how they might verify an uncertain AI response, or what risks a particular action might create in an academic setting. These brief, structured pauses encourage students to articulate their reasoning rather than simply clicking through. Over time, these small but frequent moments of reflection help learners build a habit of monitoring their understanding and questioning AI outputs more deliberately.

Conjecture 3: A virtual pet system supports motivation by strengthening autonomy

Self-Determination Theory highlights autonomy, competence, and relatedness as fundamental psychological needs underlying sustained motivation (Deci & Ryan, 2000). For many first-year students who feel uncertain about using AI responsibly, low-pressure emotional support can reduce anxiety and encourage continued practice. Studies on virtual companions show that expressive agents can increase engagement by making learning feel guided without being controlling (Laureano-Cruces & Rodríguez-García, 2012; Tan et al., 2020).

AIPAWS applies these insights through Lumo, a virtual pet that grows as students complete daily tasks. Lumo gently responds to student progress, allowing learners to choose when and how much they want to engage. This sense of voluntary participation strengthens autonomy, while Lumo’s expressive feedback lightly reinforces competence. By making practice feel self-directed and emotionally supportive, the pet system helps students return regularly to the app and sustain motivation for improving their AI literacy.

Conjecture 4: A progressive module structure supports conceptual development in AI literacy

AI literacy is often described as a layered set of competencies that develops over time, progressing from basic conceptual understanding to more complex evaluation and responsible use (Long & Magerko, 2020; Ng et al., 2021). Introducing these competencies in a gradual sequence can reduce cognitive load and help novice learners integrate new ideas more coherently. This is especially relevant for first-year undergraduates, who often feel overwhelmed by unfamiliar technical concepts and uncertain about appropriate academic uses of AI.

AIPAWS incorporates this developmental perspective by organizing learning into short, structured modules that build on one another. Rather than presenting all aspects of AI literacy at once, the staged progression helps students connect foundational ideas to more complex evaluative and ethical considerations. This structure complements the app’s microlearning, reflection, and motivational elements, creating a coherent pathway for students to develop more thoughtful, confident engagement with generative AI.

References

  • Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268. https://doi.org/10.1207/S15327965PLI1104_01

  • Garrison, D. R. (1997). Self-directed learning: Toward a comprehensive model. Adult Education Quarterly, 48(1), 18–33. https://doi.org/10.1177/074171369704800103

  • Knowles, M. S. (1975). Self-directed learning: A guide for learners and teachers. Association Press.

  • Laureano-Cruces, A. L., & Rodríguez-García, A. (2012). Design and implementation of an educational virtual pet using the OCC theory. Journal of Ambient Intelligence and Humanized Computing, 3(1), 61–71. https://doi.org/10.1007/s12652-011-0089-4

  • Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–16). Association for Computing Machinery. https://doi.org/10.1145/3313831.3376727

  • Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041

  • Tan, S.-M., Liew, T. W., & Gan, C. L. (2020). Motivational virtual agent in e-learning: The roles of regulatory focus and message framing. Information and Learning Sciences, 121(1–2), 37–51. https://doi.org/10.1108/ILS-09-2019-0088

© 2025 by Tianxing Feng. Powered and secured by Wix
