⚖️ Module 06

Ethical Considerations

Navigate when to use AI, when not to, and how to maintain academic integrity

Why Ethics Matter More Than Ever

AI makes it easier than ever to take shortcuts. The question isn't just “can I use AI for this?” but “should I?” and “what are the consequences?”

As educators, we model the behaviors we want to see in students. Our thoughtful, ethical use of AI teaches them how to navigate these tools responsibly.

When NOT to Use AI

Not every task should involve AI. Here are situations where human-only work is essential:

Sensitive Student Communications

Messages about safeguarding concerns, disciplinary matters, bereavement, or other sensitive issues require your authentic voice and professional judgment. AI-generated content here would be inappropriate.

Relationship-Building Moments

Personal notes to students, genuine praise, and messages that build trust must come from you. Students can often sense inauthenticity.

Professional Judgment Calls

Decisions about student welfare, special educational needs assessments, or safeguarding concerns must be made by qualified professionals, not AI.

Skill Development Tasks

Tasks that develop your professional skills—like reflecting on a lesson, analysing student work, or planning from scratch—shouldn't always be delegated. Some cognitive work keeps you sharp.

When Authenticity Is Required

References, testimonials, and personal recommendations should reflect your genuine knowledge of and relationship with the person.

Academic Integrity

For Your Own Work

When using AI to help with professional writing, reports, or research:

  • Be transparent about AI assistance where required by your institution
  • Ensure the final product reflects your professional judgment
  • Don't present AI-generated content as entirely your own where that would be misleading
  • Verify all facts, citations, and claims—you're responsible for accuracy

For Student Work

When students use AI:

  • Be clear about what AI use is permitted for each assignment
  • Distinguish between using AI as a tool versus having AI do the work
  • Teach students the augmentation approach—AI assists, doesn't replace thinking
  • Help students understand why doing their own thinking matters

The Augmentation Standard

A useful test: “Has AI helped me think better, or has it done my thinking for me?” If the human could explain, defend, and extend the work independently, AI was used appropriately. If they couldn't, there's an integrity problem.

Data and Privacy Considerations

Critical Warning

Information you share with AI systems may be stored, processed, and potentially used for training. Never share confidential or identifiable student information unless you're certain the platform is GDPR-compliant and approved by your institution.

Never Share:

  • Student names or identifiable information
  • Sensitive medical, behavioural, or family information
  • SEND information or confidential assessments
  • Safeguarding concerns or disclosures
  • Staff personnel information

Safe Practices:

  • Use “a student” instead of names
  • Anonymise any student work you share
  • Use general descriptors (“Year 5 class”) rather than specific identifiers
  • Check your institution's AI policy before using any new tool
  • When in doubt, don't share
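The anonymisation step above can be sketched as a simple pre-processing routine. Here is a minimal illustration in Python, assuming a hypothetical list of names you maintain yourself; real anonymisation needs more care (surnames, nicknames, other identifiers) and must still comply with your institution's policy:

```python
import re

def anonymise(text, names):
    """Replace each known student name with a generic placeholder
    before the text is shared with an AI tool.

    `names` is an illustrative, manually maintained list; this sketch
    does not detect names automatically."""
    for name in names:
        # \b restricts matching to whole words; IGNORECASE catches
        # lowercase or capitalised variants of the name.
        text = re.sub(rf"\b{re.escape(name)}\b", "a student",
                      text, flags=re.IGNORECASE)
    return text

note = "Ava struggled with fractions today. Ava asked for extra practice."
print(anonymise(note, ["Ava"]))
# → a student struggled with fractions today. a student asked for extra practice.
```

Even with a helper like this, review the text yourself before sharing: context (a unique medical condition, a family situation) can identify a student even when the name is removed.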

The EU AI Act: Core Principles

The EU AI Act is the world's first comprehensive AI regulation. While not the focus of this course, it's worth being aware of its core principles:

Human Oversight

AI systems must allow for human intervention and control

Transparency

Users must know when they're interacting with AI

Risk-Based Approach

Higher-risk AI applications face stricter requirements

Data Governance

Training data must be relevant, representative, and error-free

Accuracy & Robustness

AI systems must be reliable and secure

Non-Discrimination

AI must not create or reinforce unfair bias

Accountability

Clear responsibility for AI system outcomes

These principles align well with the augmentation approach—keeping humans in control and thinking critically about AI use.

Bias and Fairness

AI systems reflect biases present in their training data. Be alert to:

Cultural Assumptions

AI may default to American spellings, cultural references, or educational terminology that doesn't match your context. It may overlook diverse perspectives or assume Western norms.

Representation Gaps

AI-generated examples, stories, or images may underrepresent certain groups or reinforce stereotypes. Review outputs for diverse representation.

Socioeconomic Assumptions

Suggestions may assume resources, technology access, or family support that not all students have. Check recommendations against your students' realities.

Your Responsibility

You are the filter. Review AI outputs for bias before using them. Add perspectives that are missing. Challenge assumptions that don't fit your diverse classroom.

Teaching Students About AI Ethics

Your students need to develop their own ethical framework for AI use. Key concepts to teach:

Augmentation vs Automation

Use AI to think better, not to avoid thinking

Verification Responsibility

Always check AI outputs—you're responsible for accuracy

Transparent Use

Be honest about when and how you've used AI

Skill Development

Don't let AI do the work that builds your abilities

Critical Thinking

Question AI outputs; don't accept them blindly

Privacy Awareness

Be careful about what information you share with AI

Before Using AI: The Ethics Checklist

Ask yourself these questions before engaging AI:

Is AI use appropriate for this task, or does it require authentic human work?
Am I being transparent about AI assistance where required?
Have I protected student privacy and confidential information?
Will I critically evaluate and verify the output?
Am I using AI to augment my thinking, not replace it?
Does this use align with my institution's policies?
Would I be comfortable if students used AI in this same way?
Am I maintaining the professional skills this task develops?

Key Takeaways

  • Not every task should involve AI—know when human-only work is essential
  • Protect student privacy: never share identifiable information with AI
  • Be transparent about AI use where your institution requires it
  • Review all AI outputs for bias and cultural assumptions
  • Model ethical AI use for your students—they're watching how you navigate this
  • The augmentation approach is itself an ethical stance: AI assists, humans decide

Interactive Lab

AI Ethics for Students

Practice deciding when AI use is appropriate for your coursework


You've Completed the Course!

You now have the foundation for using AI as a tool for augmentation—enhancing your expertise rather than replacing it. Remember: think first, prompt wisely, verify always, and keep learning.
