The Opposite of Cheating: How Can We Get Over the Moral Panic and Create Thriving Learning Environments in the Age of AI
By Magdalena Mączyńska
In their 2025 book, The Opposite of Cheating: Teaching for Integrity in the Age of AI, authors Tricia Bertram Gallant and David A. Rettinger address the question many academic instructors have been grappling with since the fall 2022 release of ChatGPT: how can we create learning environments where students appreciate the importance of academic integrity and resist the temptation to use GenAI to circumvent the hard and messy process of learning?
Gallant and Rettinger resist the moral panic surrounding cheating with AI by pointing out that (a) cheating is a common human behavior that most students (and professors) will engage in occasionally, given the right conditions, and (b) educators have the power to create learning environments that support academic integrity. The latter can be accomplished by making intentional pedagogical choices about assessment design, communication, and class culture that produce “scenarios for learning” rather than “scenarios for cheating” (p. 3).
Drawing on an extensive body of research, Gallant and Rettinger argue that, while cheating behaviors evolve with changing technology, the underlying reasons why students cheat remain relatively constant:
- Lack of motivation (perceiving the task as not worth the effort)
- Low self-efficacy (not believing they can complete a task unaided)
- Lack of understanding (not having a clear sense of what academic integrity means in a specific context, especially given the pervasiveness of GenAI tool integrations across digital platforms)
- Peer effects (observing their friends engage in cheating)
- Instructor effects (experiencing the class as disorganized, unenjoyable, or not conducive to forming meaningful relationships with the instructor and peers)
- Lack of time management skills (not having sufficient time to complete the task unaided)
Gallant and Rettinger advocate for a learning-centered approach that focuses on tackling these root causes rather than policing GenAI use. Their approach is based on an assumption of trust (most students who use GenAI do not do so to dupe or insult us) and an unapologetic growth mindset that extends beyond academics (students can grow not only their intellectual capacities but also their capacity for ethical decision-making). The book’s key recommendations include:
- Clearly communicating academic integrity expectations
Our students’ ideas about what constitutes "cheating" can differ vastly from our own—a discrepancy exacerbated by the growing integration of GenAI tools into digital platforms used by students, intentionally and unintentionally, for their academic work. It’s our job to clearly communicate our academic integrity expectations (including the rationale behind them) on the syllabus, and to reinforce them throughout the course in assignment prompts, informal communications, and integrity nudges. For help with crafting your GenAI syllabus language, see TLC Teaching Tip on Developing an AI Course Policy.
- Supporting a culture of academic integrity
Peer norms play a significant role in the ethical choices made by college students, including the choice to use GenAI tools in ways that interfere with learning. Creating spaces for structured reflection and peer-to-peer discussions can help students appreciate the ethical complexities of AI usage and support them in developing community norms (either informal or formalized) aligned with the specific learning goals of the course. For help with initiating conversations about GenAI ethics, see our 2026 Dragon's Guide to GenAI.
- Designing scaffolded and flexible assessment structures
Moving away from high-stakes, all-or-nothing assignments towards more distributed, low-stakes practice opportunities not only disincentivizes cheating but also deepens learning. The problem of last-minute ‘panic cheating’ can be alleviated by providing flexible deadlines, offering revision opportunities, and replacing zero-tolerance lateness policies with modest penalties. Finally, shifting focus from grades to the “so what” of assignments can build intrinsic motivation and help students see the point of tedious or difficult academic tasks. (Alternate approaches like mastery grading, specifications grading, and ungrading provide useful models for deemphasizing grades in academic teaching.)
- Fostering self-efficacy and metacognitive skills
Students, like all humans, are much more likely to cheat if they don’t feel up to the task (lack of self-efficacy) or don’t have a clear understanding of how to go about completing it (lack of metacognitive skills). By helping students become more self-aware and self-regulated learners, we build their capacity for recognizing and avoiding the dangers of AI shortcuts. For information about academic support services provided by Drexel’s academic coaches and learning specialists, see the Academic Resource Center (ARC) and Center for Learning and Academic Success Service (CLASS) websites.
Gallant and Rettinger argue that current higher education curricula don’t place enough emphasis on the development of ethical reasoning skills: “we teach our students the skills of an academic discipline without teaching them how to ethically practice that discipline” (p. 202). This ethics gap has become even more painfully visible in the age of AI. In response, the authors argue, we should all help students make ethical choices about AI use in academic and professional contexts: “ethical reasoning and acting must be integrated into the educational curriculum if we aim to prepare graduates for a life in which ethical conduct is expected and necessary” (p. 203).