Generative Artificial Intelligence (GAI) and Teaching

Transparent Course Guidelines

Whether we choose to disallow, allow, or encourage the use of AI tools in our classes, we need to clearly articulate the reasoning behind our guidelines. Crafting course-level and assignment-level AI guidelines provides an opportunity to take a fresh look at our learning goals, refine the alignment between goals and assignments, and consider how AI might amplify/diminish the relevance of our current goals. Including students in this conversation or inviting them to co-create course/assignment AI guidelines opens a much-needed dialogue on the ethics of AI.

An AI syllabus statement might include:

  • Course-level and assignment-level limits (if any) on AI use
  • Detailed rationale for above limits (or lack thereof), aligned with course learning goals, pedagogical approach, and community norms
  • Guidelines for documenting allowed uses of AI (if applicable)
  • Reminder that disallowed uses of AI violate academic honesty policy

In addition, instructors may choose to include:

  • Basics of AI literacy relevant to the course

  • Invitation to reflect on the ethics and/or efficacy of AI use in the course and field

  • Invitation to examine, expand, or amend the course policy, and/or customize it for individual units and assignments

As with any syllabus policy, a non-punitive, student-centered tone can go a long way in establishing a positive class climate and building student trust and motivation.

Generative AI Literacy

Generative AI will continue to affect our students' academic and professional lives, so it is vital that students understand its affordances, limitations, and dangers. Elements of AI literacy (and especially critical AI literacy) can be introduced in small doses throughout any course, from syllabus language to the framing of assignments to informal in-class mentions. In the context of higher education, students should be particularly alert to:

  • The temptation to anthropomorphize AI bots: while Large Language Model (LLM) chatbots create the illusion of human conversation, and while their outputs may successfully mimic human writing, these outputs are the product of predictive mathematical algorithms rather than critical thinking, rhetorical awareness, or ethical discernment.

  • The distinction between traditional search engine results and AI outputs: apart from the problem of inaccuracies, AI-generated responses to queries do not give users the option of selecting or vetting sources. While some chatbots can be prompted to produce lists of sources, their outputs are not the result of "research" in the sense of collating, evaluating, or analyzing source material.

  • The danger of epistemological bias: AI outputs reflect biases present in their training data sets (data gaps, representational disparities, biased cultural standards, harmful stereotypes, etc.), as well as the design of their algorithms (lack of context-awareness, amplification/generalization from one dataset to another, bias in weighing data points, etc.). As a result, in spite of guardrails and retroactive scrubbing, AI outputs run the danger of replicating existing patterns of under-representation, over-representation, and mis-representation.

  • The continuing importance of expertise: recognizing AI-generated inaccuracies is easier for an expert than for a novice. Students need to fact-check AI outputs and to appreciate the role of disciplinary expertise in assessing them.

AI and Academic Integrity

For many academic instructors, the most pressing concern raised by generative AI remains academic dishonesty. Initial hopes that AI detectors would help identify algorithm-generated language were dampened by reports of inconsistencies, false positives, and discriminatory results. Approaches to addressing AI-related threats to academic integrity might include:

  • Crafting comprehensive AI policies

  • Building trust-based classroom communities

  • Including more in-class activities to assess student learning in real time

  • Encouraging metacognitive reflection and open conversations about AI ethics

  • Re-tooling assignments to foreground process and integrate AI tools/critical analysis

Assignment Makeovers

As generative AI tools continue to evolve and proliferate, instructors might consider re-designing assignments to incorporate AI use and/or critical analysis:

  • Fact checking: AI outputs require ongoing vigilance. Students can gain disciplinary knowledge (as well as AI tool assessment skills) by evaluating outputs, revising them to meet academic standards of accuracy, and critically examining the illusion of neutrality built into AI responses. Example: ask students to assess the quality of an AI-generated response to a relevant question in your field. Which elements of the output are accurate/inaccurate/biased/incomplete?

  • Prompting: Students can learn a lot about their field by designing well-structured prompts and/or refining AI outputs via iteration. Example: ask students to prompt AI to produce a plausible field-specific "forgery" and analyze outputs to identify successful prompt features (action verbs, contextual information, well-defined point of view, step-by-step instructions, clearly delineated constraints, etc.). Note: you can prompt AI to generate prompts!

  • Metacognition: by learning how to prompt AI tools, students can refine their understanding of the conceptual moves required in academic work (synthesis vs. analysis, application vs. evaluation, etc.). Example: ask students to identify steps of an academic task and predict which of the sub-tasks might benefit from AI assistance. Then, have students test their predictions and reflect on the results.

  • Brainstorming: AI can generate a large number of ideas in a very short time. Students can use these prolific outputs to jumpstart and/or refine their own brainstorming process. Example: ask students to use AI to expand/merge/iterate on existing ideas, generate new ones, or offer counterarguments to help refine their brainstorming.

  • Reflection: as students discover new AI tools and learn how to incorporate them into academic work, reflection provides a powerful tool for navigating gray areas, registering changing social protocols, and engaging in reciprocal learning. Consider adding reflection tasks to AI-based assignments to help students build their AI literacy (technical and ethical) and to facilitate community conversation. For instance, ask students to document their use of AI tools and to complete a questionnaire in which they reflect on how AI affected their sense of agency. (See Professor Paul Fyfe's "How to Cheat on Your Final Paper: Assigning AI for student writing" for an example.)

  • Collaboration: any of the above uses can be incorporated into in-class or out-of-class group activities, with AI incorporated into the workflow as an auxiliary "member" of a student group. Example: expand the think-pair-share formula by having students consider a question on their own, then in a small group, then with the help of AI, before sharing results with the class as a whole.

  • Critical analysis: for instructors who do not want to use AI tools but want students to gain a critical understanding of AI ethics, exercises like annotating terms of use or conducting technoethical audits can open a larger conversation about AI's wide-ranging influence. Example: ask students to consider the controversy around using AI-generated text in a statement from Vanderbilt University's Peabody College of Education after the 2023 mass shooting at Michigan State. Why was the use of AI controversial? What social contracts might have been violated? What values were at stake? Note: you can also prompt AI to build hypothetical case studies for class discussion.

For instructors considering AI assignment makeovers, this checklist by Derek Bruff can help with thinking through the redesign process.
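The prompt features named under "Prompting" above (a well-defined point of view, contextual information, action verbs, step-by-step instructions, clearly delineated constraints) can be modeled for students as a simple template. The sketch below is illustrative only: the `build_prompt` helper and its fields are hypothetical conventions, not part of any particular AI tool.

```python
# Hypothetical sketch: assembling common prompt features
# (role/point of view, context, task verb, steps, constraints)
# into one well-structured prompt string.

def build_prompt(role, context, task, steps=None, constraints=None):
    """Combine named prompt components into a single structured prompt."""
    parts = [f"You are {role}.", f"Context: {context}", f"Task: {task}"]
    if steps:
        parts.append("Follow these steps:")
        parts.extend(f"{i}. {s}" for i, s in enumerate(steps, start=1))
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    return "\n".join(parts)

prompt = build_prompt(
    role="a peer reviewer in an introductory biology course",
    context="a student draft explaining natural selection",
    task="identify one factual error and one unsupported claim",
    steps=["Summarize the draft", "List errors", "Suggest revisions"],
    constraints=["cite no outside sources", "stay under 200 words"],
)
print(prompt)
```

Walking through a template like this in class can help students see which features (context, steps, constraints) most change the quality of an AI output when added or removed.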

Individualized Learning

Multiple tech companies are currently building bots on top of foundation models to support individualized learning (e.g., Khan Academy's Khanmigo). Individual instructors are also using AI to create custom course-level GPTs or GPT assistants for a number of applications:

  • Learning assistance: explaining complex concepts; offering examples and analogies; assessing prior knowledge; correcting misconceptions; role-play

  • Reading assistance: paraphrasing/summarizing; translation; vocabulary assistance

  • Research assistance: brainstorming; locating sources; pattern analysis

  • Writing assistance: brainstorming; outlining; text generation; editing; revision

  • Exam preparation: retrieval practice; problem solving practice; role-play
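A custom course assistant of this kind is typically configured with a system message that constrains the bot's behavior before students interact with it. Below is a minimal, hypothetical sketch of that configuration using the role-based message format common to chat-style LLM APIs; the guardrail wording and the `course_assistant_messages` helper are illustrative assumptions, not any vendor's actual product or API.

```python
# Hypothetical sketch: composing the system+user message list used by most
# chat-style LLM APIs to configure a course-specific assistant.
# The guardrail text and helper name are illustrative assumptions.

COURSE_GUARDRAILS = (
    "You are a study assistant for an introductory statistics course. "
    "Explain concepts, offer examples and analogies, and quiz the student "
    "with retrieval-practice questions. Do not write graded work for the "
    "student; instead, guide them toward their own answer."
)

def course_assistant_messages(student_question):
    """Return a message list in the common role-based chat format."""
    return [
        {"role": "system", "content": COURSE_GUARDRAILS},
        {"role": "user", "content": student_question},
    ]

msgs = course_assistant_messages("Can you explain a confidence interval?")
# This list would then be passed to a chat-completion endpoint.
```

The pedagogical choices live almost entirely in the system message: an instructor who wants a Socratic tutor rather than an answer engine encodes that distinction there, which is also a useful artifact to discuss with students.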

While working with bot assistants, students and instructors should always remember that they are interacting with an algorithm and not a human mind (linguistic fluency and cute chatbot names notwithstanding) and maintain appropriate critical distance.
