KS3

AI Safety: Understanding Artificial Intelligence

A lesson helping secondary students understand how AI works, its limitations, and how to use it responsibly and safely.

55 minutes | Ages: 11-14

Overview

This lesson introduces students to artificial intelligence in a practical, grounded way. Rather than focusing on futuristic scenarios, it explores the AI tools students are already using — chatbots, image generators, recommendation algorithms — and helps them think critically about accuracy, privacy, and responsible use. Students develop a personal AI use policy by the end of the lesson.

Learning Objectives

  • Understand what AI is and how common AI tools work at a basic level
  • Recognise the limitations of AI, including hallucination and bias
  • Evaluate the privacy implications of sharing personal information with AI tools
  • Develop a personal policy for responsible AI use

Activities

AI myth busters

10 minutes

Students vote on whether common statements about AI are true or false (e.g. 'ChatGPT understands what it writes', 'AI can always tell fact from fiction'). Discuss the correct answers and challenge misconceptions.

Test the chatbot

15 minutes

In pairs, students ask an AI chatbot a question they already know the answer to and evaluate the response for accuracy. They then ask it a question about themselves and discuss what the AI 'knows' versus what it invents.

Privacy audit

15 minutes

Students examine the privacy policies of two popular AI tools and identify what data is collected, how long it is stored, and whether it is used for training. Groups present their findings as a one-minute brief.

My AI use policy

15 minutes

Each student drafts a personal AI use policy covering: when they will and will not use AI for schoolwork, what personal information they will never share with AI, and how they will verify AI-generated information.

Discussion Points

  • If an AI chatbot sounds confident, does that mean it is correct?
  • Should you use AI to help with homework? Where is the line between help and cheating?
  • What would happen if you told a chatbot your real name, school, and problems — who might see that?
  • How might AI bias affect the information you receive?

Key Takeaways

  • AI tools can sound confident while being wrong — always verify their output against a trusted source
  • Anything you type into an AI tool may be stored and used to train future models
  • Using AI responsibly means understanding its limitations and setting your own boundaries

This content is designed to support professionals in their safeguarding role. It does not replace your organisation's safeguarding policies or training requirements.


Last reviewed: 2026-03-29
