Tentative Research Plan

Introduction

As generative AI tools become part of everyday academic work, many undergraduate students rely on them without fully understanding how to evaluate their outputs or how to use them responsibly. Early findings in AI literacy research show that students often struggle to judge accuracy, identify hallucinations, and recognize when verification is needed (Long & Magerko, 2020; Ng et al., 2021). AIPAWS was designed to provide short, low-stakes opportunities for students to practise these skills. The aim of this research plan is to explore how students interact with the prototype and how these interactions may relate to changes in their confidence and judgment when working with generative AI.

Research Question

  • How does using AIPAWS influence undergraduate students’ ability to evaluate and explain the reliability of generative AI outputs during academic tasks?

Methods

This exploratory study will use a small group of undergraduate volunteers who will interact with the AIPAWS prototype for one to two weeks. Students will be asked to complete brief daily learning activities in the app, which consist of multiple-choice questions and short reflection prompts. Usage data, such as the number of completed tasks, time spent, and unlocked modules, will be collected to identify engagement patterns.

To understand changes in students’ thinking, two forms of qualitative data will be collected. First, a short post-activity survey will ask students to rate their confidence in evaluating AI outputs and their perceptions of the app’s usefulness. Second, semi-structured interviews will invite students to describe how they currently use AI, how they decide whether to trust an AI response, and whether the app influenced their habits or strategies. Interview prompts will be tied to example items from the app to encourage concrete reflection and to align with learner-centered approaches to self-directed learning in adult and higher education contexts (Knowles, 1975; Garrison, 1997).

The study is not intended to measure learning outcomes in a formal or experimental sense. Instead, it aims to generate early insights into how students experience the AIPAWS design, which elements they find most helpful, and how the prototype might be refined. These findings will guide the next iteration of the design and support future design-based research on microlearning approaches to AI literacy (Sandoval, 2014).

References

  • Garrison, D. R. (1997). Self-directed learning: Toward a comprehensive model. Adult Education Quarterly, 48(1), 18–33. https://doi.org/10.1177/074171369704800103

  • Knowles, M. S. (1975). Self-directed learning: A guide for learners and teachers. Association Press.

  • Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–16). Association for Computing Machinery. https://doi.org/10.1145/3313831.3376727

  • Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041

  • Sandoval, W. (2014). Conjecture mapping: An approach to systematic educational design research. Journal of the Learning Sciences, 23(1), 18–36. https://doi.org/10.1080/10508406.2013.778204

“AI literacy is the set of competencies that enable people to critically evaluate and make informed decisions when interacting with AI systems.”
— Long & Magerko, 2020
