
Teaching in the AI Era

As the conversation about generative artificial intelligence in higher education continues to evolve, two dominant frameworks have emerged: the AI fluency framework and the AI literacy framework. While the two overlap to some degree—fluency, or using AI tools well, requires literacy, or understanding how generative AI works in the first place—the fluency framework prioritizes the application of AI tools in academic and professional work, while the literacy framework prioritizes understanding of the mechanisms and impacts of generative AI in academia and society at large.
One of the most discussed aspects of AI fluency is “prompt engineering” (a process that might more accurately be referred to as prompt writing or prompt composition). Prompting AI tools to produce high-quality results is not as straightforward as students might imagine. Successful prompts require a precise understanding of the task at hand (including its goals, component steps, restrictions, context, target audience, level of sophistication, etc.) as well as a precise understanding of what AI algorithms can and cannot deliver. Outputs produced by initial prompts often require further refinement via iterative prompting, a process that will look different for different AI tools (ChatGPT, Copilot, Claude, Perplexity AI, DALL-E, Grammarly, QuillBot, Gemini, etc.) and in different contexts. Gaining AI fluency requires direct engagement and practice with specific tools—an engagement some instructors might allow, encourage, or even require if appropriate for their discipline, course, and learning goals.
In contrast to AI fluency, building AI literacy does not necessarily require students to use AI tools—just to think about them. Ideally, all course-level AI policies would include some component of AI literacy as part of the “why” of AI recommendations, restrictions, or prohibitions. (For recommendations on writing course-level AI policies, see our previous AI Policy Teaching Tip.) Under the broader umbrella of AI literacy, the narrower critical AI literacy framework examines the social, environmental, and psychological impacts of generative AI tools, serving as a much-needed corrective to the (largely commercially driven) phenomenon of “AI hype” and to the pervasive techno-determinism found in media coverage of generative AI.
Instructors interested in exploring the emergent field of critical AI literacy with their students can start with Leon Furze’s infographic, which offers a quick overview of ethical concerns in AI development and use. The Student Guide to AI Literacy featured in a previous TLC Teaching Tip is another excellent, and more robust, starting point. For a student-friendly overview of bias in AI outputs, this PBS video on algorithmic bias and this analysis and visualization of bias in AI image generation tools from Bloomberg.com provide a wealth of hands-on examples. For a deeper dive, the Critical AI journal, published by Duke University Press and hosted by Rutgers University, provides a platform for scholarly conversations about generative AI’s ethical complexities and impacts. Finally, the Guide to AI Refusal (from the field of writing studies) exemplifies the “AI Refusal” movement among educators whose pedagogical and ethical goals might be undermined by the proliferation of generative AI tools.
The decision whether to focus on AI fluency, AI literacy, or critical AI literacy (or a combination of the three) rests with each instructor. Ideally, every student working with AI tools in an academic context would have a solid critical literacy foundation in order to understand the broader contexts and impacts of their work. Whatever the approach, the fluency and literacy frameworks can help instructors (and students) conceptualize their engagement with an evolving technology that continues to transform our fields of work and study.