STEP ONE: Understand "GAI"
What is the "generative" in "Generative Artificial Intelligence"?
Generative Artificial Intelligence (GAI) is a type of artificial intelligence that produces (generates) new content (such as text, images, music, video, or code) by analyzing patterns in large datasets and using them to predict the most likely answer to a prompt. GAI models don't simply copy what they have seen: they generate original outputs that reflect the patterns and structures found in the data they were trained on. Their outputs are produced through mathematical calculations and do not involve human capacities such as ethical judgement, emotional engagement, cultural awareness, creative imagination, or understanding. One important kind of GAI is the Large Language Model (LLM) used in popular tools like ChatGPT, Claude, Microsoft Copilot, Google Gemini, Perplexity, etc. Users of LLM technology need to understand the following points:
- Large Language Models are prediction tools: LLMs are trained on massive collections of text scraped from websites (e.g., Reddit archives), books, and articles. LLMs analyze statistical patterns in how language is used, which allows them to answer questions, write essays, summarize information, and carry on "conversations" by predicting the next most likely item in a string of text (see the sketch after this list). LLMs are designed to produce fluent, human-sounding language, but they are not engaging in thinking, writing, or communicating in the human sense.
- GAI models are a "black box": The process by which GAI models generate their outputs is often described as a "black box" because it is not transparent. Even their creators don't know exactly how a model arrives at a particular output.
- Algorithms are not human: It's easy to think of GAI tools as "partners" or "collaborators" (terms often used by companies that market GAI tools), but GAI tools are mathematical algorithms that do not reason or create the way people do. For example, they have no ability to "know" whether the answers they generate are correct (let alone ethical). Human users need to keep these limitations in mind and maintain intellectual and creative control over their work at all times.
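To make the "prediction tool" idea concrete, here is a deliberately tiny sketch in Python. Real LLMs use neural networks trained on billions of subword tokens rather than a word-count table, but the core task is the same: given the text so far, output the statistically most likely next item. The toy corpus and function names below are illustrative, not any vendor's actual code.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text real LLMs are trained on.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows each word: a bigram table, the simplest
# possible version of "predict the next most likely item".
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str | None:
    """Return the statistically most likely next word. Nothing here
    "understands" the words; the prediction is pure counting."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # -> 'on'  ("on" follows "sat" twice in the corpus)
print(predict_next("the"))  # -> 'cat' (a four-way tie; the first word seen wins)
```

Notice that the model answers just as confidently when the data gives it no good reason to prefer one word over another; scaled up, this is one source of the fluent-but-unreliable outputs described above.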
Ask Yourself:
- Do I understand that GAI tools generate outputs by performing mathematical calculations without reasoning, judgement, or understanding?
- Do I understand that GAI outputs reflect their training data sets, including inaccuracies and biases?
- Am I giving too much creative or intellectual control to a tool that cannot reason or feel?
How is asking a chatbot different from using a traditional search engine like Google?
Many students use chatbots to find information quickly and conveniently, but research tools like search engines and professional library resources have several important advantages over GAI chatbots. When deciding which tool to use, consider the strengths and weaknesses of each approach in the following scenarios:
- When you want to verify your sources: Search engines like Google retrieve, rank, and link existing sources from the web. Users can assess each linked page by asking questions about its origins (Who posted this information?), reliability (Can I trust this author/organization?), and goals (Does this site have an agenda? Is it trying to sell me something?). Chatbot-generated answers do not offer this opportunity for critical assessment.
- When you don't have the expertise to fact-check the answer: Chatbots generate the most statistically probable answer to a query. This does not mean they are giving you the best (or even a correct) answer. Some GAI systems combine language generation with search capabilities by retrieving information from external sources and using it to augment prompts and outputs, a technique known as retrieval-augmented generation (see the sketch after this list). This process can improve the reliability of outputs, but the answers and citations (which can be fabricated) still need to be carefully assessed for accuracy and bias.
- When you're asking about a well-known topic: Chatbots can be a convenient choice for quick explanations of well-established concepts.
- When you're building an argument: Chatbots can be helpful for brainstorming additional claims or coming up with counter-arguments to strengthen your case. When using chatbots for brainstorming, remember that GAI tools are trained on material available on the internet, so any information or perspectives not strongly represented on English-language websites might not be present in the output.
- When you need to process a lot of material quickly: Chatbots can perform a quick preliminary synthesis of large amounts of text; search-enhanced chatbots can help map out a field of knowledge or find patterns across multiple sources. Custom-trained models can perform complex pattern-recognition tasks, like processing medical data or deciphering ancient cuneiform tablets. In all of these cases, making the best use of GAI outputs requires careful analysis by human experts.
- When your inquiry is complicated: To understand a complex issue, explore diverse or unexpected perspectives, or verify claims by tracing them to their original sources, you might want to use a traditional search engine, take advantage of Drexel library's professional research tools (including curated databases), or, best of all, consult a human librarian.
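The search-enhanced approach mentioned above, often called retrieval-augmented generation (RAG), can be sketched in a few lines. This is a hypothetical outline, not any product's implementation: `search_library` and `ask_model` are stand-in names for a real search index and a real LLM service.

```python
# Hypothetical sketch of the retrieve-then-generate (RAG) flow.

def search_library(query: str) -> list[str]:
    # Stand-in retriever: a real system would query a search engine
    # or curated database and return relevant passages here.
    return ["[Passage A relevant to the query]",
            "[Passage B relevant to the query]"]

def ask_model(prompt: str) -> str:
    # Stand-in for an LLM call: a real system would send the prompt
    # to a model API and receive generated text back.
    return "[A fluent answer that quotes the passages in the prompt]"

def answer_with_sources(question: str) -> str:
    passages = search_library(question)                  # 1. retrieve
    context = "\n".join(passages)
    prompt = (f"Using only these sources:\n{context}\n"  # 2. augment the prompt
              f"Answer this question: {question}")
    return ask_model(prompt)                             # 3. generate

print(answer_with_sources("What causes tides?"))
```

Even with retrieval in the loop, the final step is still statistical generation: the model can misread, misquote, or fabricate citations from the retrieved passages, which is why RAG outputs still require human verification.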
Ask Yourself:
- Is this a task for a chatbot or will a different tool be better?
- What am I missing by not being able to trace information to its original source?
- Am I able to assess the AI-generated answer for accuracy and bias?
Who creates GAI tools and for what purposes?
GAI tools are developed by major tech companies like OpenAI (owners of ChatGPT), Google (owners of Gemini), Microsoft (owners of Copilot), Anthropic (owners of Claude), Perplexity AI (owners of Perplexity), and others, which charge users for subscriptions, licensing, and cloud services. (See https://drexel.edu/provost/ai/tools for GAI tools available to you.) The growing GAI industry is largely unregulated, with little to no public oversight. To make informed decisions about the ethics of using GAI tools for your purposes, consider the following concerns about intellectual property rights and fair labor practices:
- Copyright issues: GAI models are trained on vast amounts of web-based content, including books, articles, images, and code. This original training data, some of it under copyright, may have been used without permission or fair compensation for its creators.
- Labor issues: The functioning of GAI tools depends on the intellectual labor of humans who created the content used to train AI models (often without the authors' consent) as well as the undercompensated and psychologically painful labor of humans who refine models by tagging offensive content. This ongoing dependence on human intelligence and labor is regularly underplayed in the marketing of GAI applications.
- Goal (in)compatibility: The goals of for-profit companies might not always align with your goals as a student. For example, a chatbot might agree with false beliefs to make users feel good (and increase ratings). While many companies are building GAI tools for the field of education (e.g., Khan Academy's Khanmigo), the foundational models underpinning those tools have not been created for educational purposes. Always consider whether the tech tools you select are compatible with your own goals and values.
Ask Yourself:
- Who made the GAI tool I'm using, and for what purpose?
- How might the creators' commercial interests and goals shape the tool's design or outputs?
- What labor and copyright issues should I consider when using GAI?
Go to Step Two: Think critically about how GAI affects your learning