AI Guidance for Staff

Safe and Responsible Use of AI for Staff in Higher Education

Drexel University's policies on Artificial Intelligence (AI) apply to all members of the university community, including professional staff. These guidelines are designed to ensure the ethical, secure, and effective use of AI tools in academic, administrative, and operational contexts.

AI can be a valuable assistant for drafting, analysis, and planning, but it does not replace professional judgment or institutional responsibility.


Understand Capabilities and Limitations

Generative AI can help staff:

  • Draft professional communications
  • Edit documents for clarity and accessibility
  • Adapt content for different audiences

Example Prompts:

  • "Draft a professional email to faculty announcing the upcoming AI workshop."
  • "Edit this policy document for clarity and conciseness."
  • "Rewrite the following paragraph to be more accessible for students."

Generative AI can also support:

  • Summarizing publicly available research
  • Identifying trends in published studies
  • Compiling background information

Example Prompts:

  • "Summarize recent research articles on AI ethics in higher education."
  • "Find key trends in student engagement from published studies."
  • "Generate a list of reputable sources on FERPA compliance and AI."

Generative AI can support analysis when data is appropriate for use. When working with institutional information, use only de‑identified data or Drexel‑approved AI tools.

Example Prompts:

  • "Analyze this de‑identified dataset and summarize key patterns."
  • "Create a chart showing year‑over‑year enrollment trends from aggregated data."
  • "Identify outliers in this anonymized survey data."
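Before pasting institutional records into any AI tool, direct identifiers should be stripped and the data aggregated. As an illustration only, here is a minimal Python sketch of that step; the field names ("name", "email", "student_id", "year") are hypothetical, and your department's data will differ.

```python
# Sketch: remove direct identifiers from records and aggregate to
# year-level counts before sharing data with an approved AI tool.
# All field names below are hypothetical examples.

from collections import Counter

DIRECT_IDENTIFIERS = {"name", "email", "student_id"}

def deidentify(rows):
    """Return copies of rows with direct identifier fields removed."""
    return [
        {k: v for k, v in row.items() if k not in DIRECT_IDENTIFIERS}
        for row in rows
    ]

def enrollment_by_year(rows):
    """Aggregate de-identified rows into year-over-year counts."""
    return dict(Counter(row["year"] for row in rows))

records = [
    {"name": "A. Example", "email": "a@example.edu", "student_id": "123", "year": 2023},
    {"name": "B. Example", "email": "b@example.edu", "student_id": "456", "year": 2024},
    {"name": "C. Example", "email": "c@example.edu", "student_id": "789", "year": 2024},
]

safe_rows = deidentify(records)
print(safe_rows[0])                      # only the "year" field remains
print(enrollment_by_year(safe_rows))     # aggregated counts per year
```

Note that removing obvious identifiers is only a first step; quasi-identifiers (e.g., a rare combination of major and graduation year) can still re-identify individuals, so when in doubt use a Drexel‑approved tool and consult your data steward.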

While professional judgment and final responsibility remain with the staff member, generative AI can assist with:

  • Drafting general feedback language or reusable comment templates
  • Refining tone to be clearer, more supportive, or more professional
  • Generating examples of constructive feedback aligned with defined criteria or rubrics
  • Adapting feedback language for different audiences or communication contexts

Example Prompts:

  • "Draft sample feedback language for a common writing issue students encounter."
  • "Rewrite this feedback to be clearer and more supportive while maintaining a professional tone."
  • "Create a feedback template aligned with a rubric for project‑based work."
  • "Suggest ways for staff to improve their use of AI tools based on recent feedback."
  • "Give tailored recommendations for professional development based on my role as a project manager."

Remember: Avoid entering identifiable student, employee, or confidential information into AI tools. Always review and personalize AI‑generated feedback before sharing it.

Generative AI can help staff:

  • Condense long reports
  • Translate technical language for non‑experts
  • Prepare executive summaries

Example Prompts:

  • "Summarize the main points of Drexel's AI policy for staff."
  • "Condense this 10-page report on digital transformation into a one-page executive summary."
  • "Explain the implications of General Data Protection Regulation (GDPR) for university operations in simple terms."

Generative AI can help streamline routine work:

  • Drafting agendas or templates
  • Categorizing requests
  • Preparing status updates

Example Prompts:

  • "Generate meeting agendas for all upcoming department meetings."
  • "Automatically categorize incoming support tickets by topic."
  • "Prepare weekly status update emails using data from our project management system."

It's important to remember:

  • AI does not replace human judgment
  • Outputs may contain biases or inaccuracies
  • Always review and verify before sharing AI-generated content

Here's why:

As a member of the Drexel community, you are responsible for the content you submit or share, even if generated by AI. You are also expected to model responsible AI use for students and peers, and report any inappropriate or offensive AI-generated content to the relevant university office.

Remember to follow your department's guidelines and university policies when it comes to student-facing or external communications. 

Pro-Tip: Context Matters

  • The more specific you are, the better the results. Providing detailed context—sometimes even a paragraph or two—helps the model understand exactly what you're looking for.
  • If you're submitting attachments for analysis and want the model to focus solely on those, be sure to clearly instruct it to ignore any external sources or documentation beyond what you've provided.

Ethical Use of GenAI in the Workplace

Protect Privacy, Data, & University Intellectual Property (IP)

Information entered into AI tools may not be confidential and may be processed by third parties. Avoid submitting any private or sensitive data — such as credit card numbers, identification details, or addresses — to these platforms.

To comply with privacy laws (e.g., HIPAA, FERPA, GDPR):

  • Do not input personally identifiable information (PII) into AI tools unless explicitly permitted by university policy.
  • Avoid sharing sensitive institutional data.
  • Avoid uploading copyrighted, licensed, or proprietary University or third‑party materials unless you are authorized to do so.
  • Use institution-approved platforms with secure data handling, especially when working with regulated or internal information.

Stay Informed and Collaborate

Key Questions to Ask Yourself

  • Am I using GenAI as a tool to deepen my understanding, spark new ideas, and enhance my problem-solving skills?
  • Does it encourage curiosity and exploration, or am I relying on it passively?
  • Does it streamline my workflow, improve productivity, or support better decision-making?
  • Am I still applying my professional judgment and expertise, or deferring too much to the tool?
  • Have I fact-checked the output and ensured it aligns with trusted sources?
  • Is the content free from harmful biases, stereotypes, or misinformation?
  • Am I transparent about the use of GenAI in my work?
  • Do I critically evaluate GenAI-generated content before sharing or integrating it?
  • Am I using GenAI in ways that promote fairness, inclusivity, and positive social impact?
  • Does my use of GenAI respect privacy, intellectual property, and ethical boundaries?