Academic Integrity (AI Context)
🟡 AI in Education
The ethical standards that govern how students and educators use AI tools in academic work — ensuring that AI assists learning without replacing original thought, honest authorship, or intellectual ownership.
The rules for using AI fairly in school: you can use it to help you think, but you can't pretend the AI's work is your own original creation.
★ NEW
Adaptive Learning
🟡 AI in Education
An AI-powered educational approach that continuously adjusts the difficulty, pace, and content of instruction based on each individual student's performance, learning style, and progress data.
An AI tutor that pays attention to how each student is doing and automatically makes lessons easier or harder based on their progress — like a teacher who adjusts in real time for every student.
★ NEW
Agent (AI Agent)
🟣 AI Tools
An AI system that can perceive its environment, make decisions, plan multi-step actions, and execute tasks autonomously to achieve a specified goal — operating with minimal human intervention.
An AI that doesn't just answer questions — it can actually go do tasks for you, like researching a topic, organizing files, booking appointments, or writing and sending emails on your behalf.
Algorithm
🔵 Core AI
A set of step-by-step rules or instructions that tells a computer how to solve a problem or make a decision. Machine learning algorithms learn to improve their performance through experience with data.
A recipe a computer follows step-by-step to solve a problem — like how a recipe tells you exactly what to do to bake a cake.
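A minimal sketch of what "step-by-step rules" look like in practice. This tiny Python function is itself an algorithm; the grading rules are invented for illustration, not taken from any real system.

```python
def letter_grade(score):
    """An algorithm: fixed, explicit rules the computer follows in order."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"

letter_grade(84)  # follows the rules step by step and lands on "B"
```

Machine learning differs in that rules like these are learned from examples rather than written out by hand.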
Artificial General Intelligence
AGI
🔵 Core AI
A hypothetical type of AI that would possess human-level intelligence across virtually all cognitive tasks — capable of learning, reasoning, and applying knowledge across domains the same way a human can. AGI does not yet exist.
The idea of an AI as smart as a human at everything — not just one task like chess or answering questions, but truly thinking, learning, and solving any problem the way a person could. We don't have this yet.
★ NEW
Artificial Intelligence
AI
🔵 Core AI
The simulation of human intelligence processes by machines, especially computer systems — including learning, reasoning, problem-solving, perception, and language understanding.
Computers doing things that normally require human thinking, like learning, problem-solving, recognizing faces, or having a conversation.
Automation
🔵 Core AI
The use of technology — including AI — to perform tasks with minimal or no human intervention, enabling repetitive processes to be completed faster, more consistently, and at scale.
Having machines do repetitive work automatically so humans don't have to — like a robot that grades multiple-choice tests or an AI that sorts emails into folders.
Benchmark
🩵 Technical
A standardized test or evaluation used to measure an AI model's performance across specific tasks, allowing researchers and users to compare models against each other and against human performance levels.
A standardized test for AI models — like the SAT for artificial intelligence — that shows how well a model performs compared to other models or to humans.
★ NEW
Bias in AI
🔴 Ethics & Safety
Systematic and repeatable errors in AI outputs that create unfair, inaccurate, or discriminatory outcomes — typically resulting from biased training data, flawed model design, or unrepresentative datasets.
When an AI makes unfair decisions because the data it learned from was prejudiced or incomplete — like a face recognition system that works well for some races and poorly for others.
Chatbot
🟣 AI Tools
A software application designed to simulate conversation with human users — ranging from simple rule-based systems to sophisticated AI models capable of nuanced, context-aware dialogue.
A computer program you can text or talk to. Some are simple (only answering set questions), while others use AI to have real conversations about almost anything.
Computer Vision
🟢 How AI Works
A field of AI that enables computers to derive meaningful information from digital images, videos, and other visual inputs — allowing machines to identify objects, people, scenes, and actions.
Giving computers the ability to "see" and understand what's in a picture or video — like recognizing faces, reading text in photos, or detecting a stop sign while driving.
Context Window
🩵 Technical
The maximum amount of text (measured in tokens) that an AI language model can process and "remember" in a single interaction. Information outside this window is not accessible to the model.
The AI's working memory for one conversation. Like a notebook that holds everything you've said — but when it fills up, the AI starts forgetting what was at the beginning of your chat.
★ NEW
Copyright (AI Context)
🔴 Ethics & Safety
The complex and evolving legal questions around ownership and intellectual property rights for AI-generated content — including who owns AI outputs, whether AI training on copyrighted data is legal, and how educators should handle AI-generated materials.
The unresolved question of "who owns what AI creates?" — the law is still catching up, so educators need to be careful about using AI-generated content without understanding the ownership and attribution implications.
★ NEW
Curriculum Personalization
🟡 AI in Education
The use of AI to tailor educational content, pace, and resources to individual student needs, learning styles, background knowledge, and performance data — moving away from one-size-fits-all instruction.
Using AI to create a custom learning path for each student — so a struggling reader gets different support than an advanced one, even in the same classroom.
★ NEW
Custom GPT
🟣 AI Tools
A personalized version of ChatGPT configured with specific instructions, knowledge, and behaviors for a defined purpose — allowing educators to create subject-specific AI tutors, classroom assistants, or specialized tools without coding.
A ChatGPT you've customized and set up for a specific job — like creating a "5th Grade Math Tutor" that only answers math questions in student-friendly language and always encourages effort.
★ NEW
Data Privacy (AI Context)
🔴 Ethics & Safety
The legal and ethical obligation to protect personally identifiable information (PII) when using AI tools — particularly relevant for educators under FERPA, which prohibits sharing student data with unauthorized third parties including AI platforms.
Never type real student names, ID numbers, or personal information into AI tools like ChatGPT — it's a legal requirement (FERPA) and an ethical responsibility. Use role descriptions instead: "a 7th grader who reads at a 4th-grade level."
★ NEW
Deepfake
🔴 Ethics & Safety
AI-generated synthetic media — typically video, audio, or images — that realistically depicts a real person saying or doing something they never actually said or did, created using deep learning techniques.
A fake video or audio clip made by AI that looks so real it can fool you — like a video that makes it seem like a famous person said something they never said. A growing concern for media literacy education.
★ NEW
Deep Learning
🟢 How AI Works
A subset of machine learning that uses artificial neural networks with many layers (hence "deep") to learn complex patterns from large datasets — enabling breakthroughs in image recognition, language processing, and generative AI.
A more complex version of machine learning that can handle very complicated tasks, like recognizing faces in photos or generating realistic images — because it processes information through many layers, like a very deep stack of filters.
Diffusion Model
🟢 How AI Works
A type of generative AI model that creates new images or data by learning to gradually remove "noise" from random pixels — the technology behind many AI image generators like DALL·E, Midjourney, and Stable Diffusion.
The technology behind AI image generators — it starts with random static (like TV snow) and gradually "clears it up" into a clear image based on your text description. It's why Midjourney and DALL·E can create pictures from words.
★ NEW
Embeddings
🩵 Technical
Numerical representations of words, sentences, images, or other data in a high-dimensional mathematical space — allowing AI models to capture meaning and relationships, so that similar concepts are represented by nearby numbers.
A way AI converts words and ideas into numbers so it can understand how similar or different they are — like mapping words on a map where "king" and "queen" are close together because their meanings are related.
★ NEW
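A toy illustration of "nearby numbers" using cosine similarity, the standard way to compare embeddings. The three-number vectors here are invented for the example; real models represent each word with hundreds or thousands of numbers.

```python
import math

def cosine_similarity(a, b):
    """How aligned two vectors are: near 1.0 = similar meaning, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

king = [0.9, 0.8, 0.1]   # toy "embeddings", not real model outputs
queen = [0.9, 0.7, 0.2]
pizza = [0.1, 0.2, 0.9]

cosine_similarity(king, queen)  # high: related concepts sit close together
cosine_similarity(king, pizza)  # low: unrelated concepts sit far apart
```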
Ethical AI / Responsible AI
🔴 Ethics & Safety
The principles, frameworks, and practices that guide the development and use of AI in ways that are fair, transparent, accountable, safe, and beneficial to individuals and society — including in educational settings.
The rules and values for using AI the right way — making sure it's fair to everyone, that it doesn't harm people, that it's honest about what it is, and that humans stay in control of important decisions.
★ NEW
Explainability (AI)
🔴 Ethics & Safety
The degree to which humans can understand and trace how an AI model arrived at a specific output or decision — addressing the "black box" problem where AI reasoning is opaque even to its creators.
The ability to understand WHY an AI gave a certain answer — many AI systems make decisions in ways that even their creators can't fully explain, which is a problem when those decisions affect students or grading.
★ NEW
Few-Shot Learning
🩵 Technical
A prompting technique where you provide an AI model with a small number of examples (typically 2-5) within the prompt itself to demonstrate the format, style, or task you want — improving output quality without retraining the model.
Showing the AI 2-3 examples of what you want before asking it to do the same thing — like showing a student three worked math problems before asking them to solve a new one.
★ NEW
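What a few-shot prompt looks like in practice. The feedback-rewriting task below is an invented classroom example; the pattern itself (worked examples first, new case last) is the technique.

```python
# Two worked examples teach the AI the pattern; the final line is the new
# case it is expected to complete in the same style.
few_shot_prompt = """Rewrite each comment as encouraging feedback.

Comment: Messy handwriting.
Feedback: Your ideas are strong; let's work on making them easier to read.

Comment: Didn't show work.
Feedback: Great answer! Next time, show your steps so I can follow your thinking.

Comment: Late again.
Feedback:"""
```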
Fine-Tuning
🟢 How AI Works
The process of taking a pre-trained AI foundation model and further training it on a smaller, domain-specific dataset — adapting it to perform better on particular tasks, styles, or subject areas.
Giving an already smart AI some specialized training so it becomes an expert in one specific area — like taking a general-knowledge tutor and training them specifically in advanced calculus.
Foundation Model
🔵 Core AI
A large-scale AI model trained on broad, diverse datasets that serves as the base from which many specialized applications are built — including GPT-4 (ChatGPT), Gemini, Claude, and other AI tools educators use.
The massive, general-purpose AI brain that sits underneath tools like ChatGPT or Gemini — like the engine under a car that makes many different car models possible.
★ NEW
Generative AI
🔵 Core AI
AI systems capable of generating new, original content — including text, images, audio, video, and code — based on patterns learned from training data, in response to user prompts.
AI that creates brand new things (a picture, a poem, a lesson plan, a song) instead of just analyzing or organizing existing data. ChatGPT, Midjourney, DALL·E, and Gemini are all examples.
Guardrails (AI)
🔴 Ethics & Safety
Safety constraints and content filters built into AI systems to prevent them from generating harmful, dangerous, or inappropriate outputs — the mechanisms that cause AI to decline certain requests.
The built-in rules that stop AI from saying harmful, dangerous, or inappropriate things — like the filters that prevent ChatGPT from writing content that would be unsafe for students.
★ NEW
Hallucination
🔴 Ethics & Safety
When an AI model generates false, nonsensical, or unverified information but presents it as fact with the same confident tone as accurate information — a structural artifact of how language models predict text.
When the AI confidently makes something up because it doesn't actually know the answer — like a student who fills in answers on a test without actually knowing them, but sounds sure of themselves.
Human-in-the-Loop
🔴 Ethics & Safety
An AI design approach where human judgment, review, or approval is required at key decision points — ensuring that critical decisions are not made autonomously by AI without human oversight.
Keeping a human in charge of important decisions — like a teacher who must review and approve every AI-generated lesson plan before it reaches students. The AI helps, but the human has final say.
★ NEW
Inference
🩵 Technical
The process of using a trained AI model to generate outputs — predictions, responses, or content — based on new inputs. When you type a question into ChatGPT and receive an answer, you're triggering inference.
What happens every time you ask an AI a question and it answers — the AI is "running inference," using everything it learned to figure out the best response to your specific input.
★ NEW
Instructional AI
🟡 AI in Education
AI tools and systems specifically designed or adapted for educational contexts — including AI tutors, automated feedback systems, content generators, and assessment tools built for or used by educators.
AI that's been set up specifically for teaching and learning — like MagicSchool.ai or Khanmigo, which are built for classrooms rather than general use, with educators' needs in mind.
★ NEW
Iteration (AI Prompting)
🟡 AI in Education
The professional practice of refining AI outputs through multiple rounds of follow-up prompts — progressively improving the quality, accuracy, and specificity of AI-generated content rather than accepting first-generation outputs.
Going back and forth with the AI to make its answers better and better — like drafting and revising an essay, except you're telling the AI what to fix each round: "Now make it simpler. Now add examples. Now adjust the reading level."
★ NEW
Large Language Model
LLM
🔵 Core AI
A type of AI model trained on vast amounts of text data to understand and generate human-like language — the technology underlying ChatGPT, Claude, Gemini, and most conversational AI tools.
A super-smart autocomplete that has read almost the entire internet and can write essays, answer questions, generate lesson plans, or chat with you — because it's learned the patterns of human language at massive scale.
Learning Management System (AI-Enhanced)
LMS
🟡 AI in Education
Digital platforms (such as Google Classroom, Canvas, or Schoology) that increasingly incorporate AI features to automate grading, personalize learning pathways, flag at-risk students, and generate insights from student performance data.
The online classroom platform (like Google Classroom or Canvas) that's now getting AI features built in — so it can automatically notice when a student is struggling, suggest resources, and give teachers real-time data insights.
★ NEW
Machine Learning
ML
🔵 Core AI
A subset of AI that enables systems to learn and improve from experience without being explicitly programmed — using statistical patterns in data to make predictions and decisions.
Teaching a computer to recognize patterns by showing it lots of examples, rather than giving it strict rules. Like how a spam filter learns to recognize junk mail by seeing thousands of examples.
Media Literacy (AI Context)
🟡 AI in Education
The ability to critically evaluate, analyze, and understand AI-generated content — including recognizing synthetic media, identifying AI-authored text, questioning AI accuracy, and understanding how AI systems can reflect bias or misinformation.
The essential skill of looking at any content — a news article, a video, a photo — and being able to ask: "Was this made by AI? Could it be fake? Who created it and why? Is it accurate?" A life skill students need for the digital age.
★ NEW
Multimodal AI
🔵 Core AI
AI systems capable of processing and generating multiple types of data — combining text, images, audio, and video in a single model. GPT-4o and Gemini Ultra are examples of multimodal AI.
An AI that can see, read, and listen — not just process text. You can show it a photo of a math problem and it can solve it, or speak to it and it responds. Multiple "modes" of communication in one AI.
★ NEW
Natural Language Processing
NLP
🟢 How AI Works
A branch of AI that enables computers to understand, interpret, and generate human language — the foundational technology behind translation services, voice assistants, chatbots, and language models.
The technology that lets computers understand what we say or type — what makes Siri, Alexa, and Google understand your words rather than just seeing them as random letters.
Neural Network
🟢 How AI Works
A computing system inspired by biological neural networks in animal brains — consisting of interconnected nodes (neurons) that process information in layers, enabling pattern recognition and complex learning.
A computer system designed to work a bit like a human brain — with connected "neurons" that pass information to each other, getting better at recognizing patterns the more they practice.
Open Source vs. Closed Source AI
🩵 Technical
Open source AI models (like Meta's Llama) make their code and weights publicly available, allowing anyone to inspect, modify, or deploy them. Closed source models (like GPT-4 or Claude) are proprietary, accessible only through APIs or products.
Open source AI is like a recipe anyone can see and use. Closed source AI is like a restaurant's secret recipe — you can enjoy the food (use the AI), but you can't see exactly how it was made or modify it yourself.
★ NEW
Output
🔵 Core AI
The text, image, audio, video, or other content that an AI model generates in response to a prompt or input — the end product of an AI inference process.
Whatever the AI produces in response to your request — the essay it wrote, the image it created, the lesson plan it generated. The output is what you get back after you send the AI your prompt.
★ NEW
Parameters
🩵 Technical
The internal numerical variables that an AI model learns and adjusts during training — determining how it processes input and generates output. GPT-4 has an estimated 1.8 trillion parameters.
The millions, billions, or even trillions of little numerical settings inside the AI's "brain" that it adjusts while learning, to get better at its job. More parameters generally means a more capable model.
Personalized Feedback (AI)
🟡 AI in Education
The use of AI to generate individualized, specific, and actionable feedback on student work — tailored to each student's level, learning goals, and specific errors rather than generic comments.
AI that gives each student their own specific feedback — not just "good job" or "needs improvement," but detailed, targeted comments that tell each student exactly what they did well and what to fix next.
★ NEW
Prompt
🔵 Core AI
The input, instruction, or question given to an AI system to direct its output — ranging from simple one-line questions to complex structured instructions with role context, parameters, and format specifications.
The question or command you type into an AI to tell it what you want it to do. The quality of your prompt directly determines the quality of what the AI produces — better input, better output.
Prompt Engineering
🔵 Core AI
The practice of designing, structuring, and refining prompts to consistently elicit high-quality, accurate, and useful outputs from AI models — a professional skill involving context-setting, parameter specification, and format direction.
Figuring out the exact right words, structure, and context to use so the AI gives you exactly what you're looking for — it's a professional skill, like learning to ask the right interview questions to get the best answers.
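One widely used structure (role, task, constraints, format), sketched here as a Python string. The quiz scenario is invented for illustration; the labeled parts are what do the work.

```python
# A structured prompt: each labeled part narrows what the AI can return.
prompt = (
    "Role: You are an experienced 8th-grade science teacher.\n"
    "Task: Write a 5-question quiz on photosynthesis.\n"
    "Constraints: Multiple choice, one correct answer each, "
    "6th-grade reading level.\n"
    "Format: Number the questions; list the answer key at the end."
)
```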
Personally Identifiable Information
PII
🔴 Ethics & Safety
Any data that can be used to identify a specific individual — including names, student ID numbers, addresses, dates of birth, or academic records. Federal law (FERPA) prohibits sharing student PII with unauthorized parties, including AI tools without proper data agreements.
Any information that could reveal who someone is — like a student's name, ID, address, or grades. Never put this into AI tools. Instead, describe the person's situation without using identifying details.
★ NEW
Retrieval-Augmented Generation
RAG
🩵 Technical
A technique that combines an AI language model with a real-time search or retrieval system — allowing the model to pull in current, specific information from a knowledge base before generating a response, reducing hallucination.
An AI that looks things up before answering — instead of relying only on what it memorized during training, it searches a database for relevant facts first, making its answers more accurate and current. Gemini's live web search is a form of this.
★ NEW
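A toy sketch of the retrieve-then-generate pattern. The `knowledge_base` dictionary stands in for a real vector database, and the returned string would be sent on to a language model; all names here are invented for this illustration.

```python
# Step 1: look up relevant facts. Step 2: put them in the prompt.
knowledge_base = {
    "field trip": "The science museum trip is on May 14.",
    "homework": "Homework is due Fridays at 3 pm.",
}

def retrieve(question):
    """Return the stored fact whose topic appears in the question."""
    for topic, fact in knowledge_base.items():
        if topic in question.lower():
            return fact
    return ""

def build_rag_prompt(question):
    """Ground the model in retrieved facts instead of memory alone."""
    facts = retrieve(question)
    return f"Use this information: {facts}\n\nQuestion: {question}"

build_rag_prompt("When is the field trip?")  # prompt now contains "May 14"
```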
Reinforcement Learning from Human Feedback
RLHF
🟢 How AI Works
A training technique used to align AI models with human preferences — human raters evaluate AI outputs and the model is trained to produce responses humans prefer, making it more helpful, honest, and safe.
How AI companies teach their models to behave like good AI — real humans read thousands of AI responses, rate which ones are most helpful and safe, and the AI learns to produce more of the good ones and fewer of the bad ones.
★ NEW
Representation (AI & Equity)
🔴 Ethics & Safety
The extent to which AI-generated content, imagery, and examples reflect the diversity of all communities — including race, ethnicity, culture, gender, ability, and lived experience. AI often defaults to dominant cultural representations if not explicitly directed otherwise.
Whether the AI's output actually looks like and speaks to YOUR students — AI will default to the most "common" people in its training data if you don't tell it otherwise. Educators must explicitly prompt for diverse, culturally responsive representation.
★ NEW
Synthetic Data
🩵 Technical
Artificially generated data created by AI models that mimics the statistical patterns of real-world data — used to train AI models, fill data gaps, or protect privacy when real data is unavailable or sensitive.
Fake data that an AI creates to look exactly like real data — used when real data is too private to share or when there isn't enough of it. Like making up realistic practice test scores to train an AI grading system.
★ NEW
System Prompt
🩵 Technical
Hidden background instructions given to an AI model before the user interaction begins — used to set the AI's persona, behavioral rules, and constraints. System prompts are how Custom GPTs and specialized AI tools are configured.
The invisible instructions that tell an AI how to behave before you start talking to it — like a job briefing given to a new employee before their first customer interaction. Custom GPTs use system prompts to create specialized AI tools.
★ NEW
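What this looks like under the hood, using the OpenAI-style chat message format (field names vary by provider). The tutor instructions are an invented example of the kind of hidden briefing a Custom GPT carries.

```python
# The "system" message is the hidden briefing; the user never sees it.
messages = [
    {
        "role": "system",
        "content": (
            "You are a 5th-grade math tutor. Only answer math questions. "
            "Use simple language and always encourage effort."
        ),
    },
    {"role": "user", "content": "What is 3/4 + 1/8?"},
]
```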
AI Safety
🔴 Ethics & Safety
The interdisciplinary field focused on ensuring that AI systems behave safely, reliably, and in alignment with human values — including preventing unintended harmful behaviors, misuse, and catastrophic failures as AI systems become more powerful.
The ongoing work of making sure AI systems do what we want and don't cause harm — like making sure a very powerful tool comes with safety features, clear instructions, and isn't misused. A growing field as AI becomes more capable.
★ NEW
Speech Recognition
🟣 AI Tools
AI technology that converts spoken language into written text — powering voice assistants, dictation software, real-time captioning, and accessibility tools that support students with disabilities and language learners.
The AI technology that turns spoken words into written text — what makes voice-to-text, Siri, Alexa, and automatic captions on videos work. Highly valuable for ELL students and students with learning differences.
★ NEW
Temperature (AI Setting)
🩵 Technical
A parameter that controls the randomness and creativity of AI outputs — lower temperature produces more predictable, focused responses while higher temperature produces more varied, creative, and unexpected outputs.
A dial that controls how creative vs. predictable the AI is. Turn it down low and the AI gives safe, expected answers. Turn it up and the AI gets more creative and surprising — useful to know when you want consistent or creative outputs.
★ NEW
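The "dial" is applied to the model's raw word scores before it picks the next word. This self-contained sketch shows the underlying math (a softmax with temperature); the three scores are invented for the example.

```python
import math

def softmax_with_temperature(scores, temperature):
    """Turn raw model scores into word-choice probabilities.
    Low temperature sharpens toward the top choice; high flattens the spread."""
    scaled = [s / temperature for s in scores]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]               # raw preference for three next words
softmax_with_temperature(scores, 0.2)  # top word chosen almost every time
softmax_with_temperature(scores, 2.0)  # choices spread out, more "creative"
```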
Text-to-Image Generation
🟣 AI Tools
AI systems that generate images from text descriptions (prompts) — including Midjourney, DALL·E, Adobe Firefly, and Stable Diffusion. Used by educators to create custom classroom illustrations, visual aids, and instructional materials.
AI that turns your words into pictures — type "a diverse group of students conducting a science experiment in a Bronx classroom" and get a custom illustration created specifically for your class. No stock photos, no licensing fees.
★ NEW
Token
🩵 Technical
The basic unit of text processed by a language model — tokens can be a whole word, part of a word, punctuation, or a single character. An AI model's context window and pricing are typically measured in tokens.
How AI breaks down your text to read and understand it — roughly one token per short word, with longer words split into pieces. A typical conversation uses thousands of tokens. This is why very long conversations can cause the AI to "forget" earlier parts.
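A rough token estimator based on the common rule of thumb that English averages about four characters per token. Real tokenizers split text more precisely, so treat this as a ballpark figure only.

```python
def estimate_tokens(text):
    """Rule-of-thumb estimate: ~4 characters of English per token."""
    return max(1, len(text) // 4)

estimate_tokens("The quick brown fox jumps over the lazy dog.")  # about 11 tokens
```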
Training Data
🟢 How AI Works
The large dataset used to train an AI model — teaching it patterns, facts, language, and relationships. The quality, diversity, and size of training data fundamentally determines an AI model's capabilities and limitations.
The textbooks, websites, books, and examples an AI studied to learn its job. If the training data is biased or incomplete, the AI's outputs will be too — garbage in, garbage out.
Transformer (Architecture)
🟢 How AI Works
The neural network architecture introduced in 2017 that revolutionized AI — using "attention mechanisms" to process all words in a text simultaneously and understand context across long distances. The "T" in ChatGPT stands for Transformer.
The breakthrough design that made ChatGPT, Gemini, and modern AI possible — instead of reading text one word at a time, it looks at the whole sentence at once and figures out which words are most important for understanding the meaning.
★ NEW
AI Watermarking
🔴 Ethics & Safety
Techniques for embedding invisible or visible markers in AI-generated content to identify it as synthetic — enabling detection tools to distinguish AI-authored text, images, and video from human-created content.
A hidden label embedded in AI-generated content that lets detection tools identify it as AI-made — like an invisible "Made by AI" stamp on a piece of writing or image. Still being developed, but increasingly important for academic integrity.
★ NEW