Abstract

The rapid adoption of Generative AI (Gen-AI) tools across educational and professional sectors has outpaced the development of standardized measures for evaluating user competency. Without validated assessment tools, organizations face significant hurdles in promoting responsible and effective AI use. This systematic literature review analyzes 17 primary studies to assess how current research measures Gen-AI literacy. Using a 12-competency framework and a three-level coding scheme, we conducted a competency-coverage analysis. The results reveal severe gaps: no single study evaluates Gen-AI literacy comprehensively. While technical prompting skills are frequently assessed, foundational knowledge and critical governance-related skills (such as ethical evaluation and bias detection) are largely ignored. This review establishes a crucial baseline and calls for the immediate development of holistic, validated Gen-AI literacy assessments.

Author: Anshuman Rangaraj