AI Safety Terms

Key vocabulary for understanding AI safety, alignment, and responsible AI deployment.

Hallucination
AI hallucination occurs when an AI model generates information that is factually incorrect, fabricated, or unsupported by its training data or provided sources, yet presents it with confidence.
Alignment
AI alignment is the challenge of ensuring that AI systems pursue goals and behave in ways that are consistent with human values and intentions.
Guardrails
Guardrails are rules, filters, classifiers, or constraints built into an AI system to prevent it from producing harmful, off-topic, or policy-violating outputs.
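The filter-style guardrail described above can be sketched in a few lines. This is a minimal illustration, not a production moderation system: the blocklist phrases and the length limit are invented for the example, and real deployments typically combine such checks with trained classifiers.

```python
# Minimal sketch of an output guardrail: a phrase blocklist plus a
# length check, run before an AI response is shown to the user.
# The blocklist contents and limit here are illustrative, not a real policy.

BLOCKLIST = {"how to build a bomb", "steal credit card numbers"}

def passes_guardrails(response: str, max_chars: int = 2000) -> bool:
    """Return True only if the response clears every guardrail check."""
    lowered = response.lower()
    if any(phrase in lowered for phrase in BLOCKLIST):
        return False  # blocked: matches a disallowed phrase
    if len(response) > max_chars:
        return False  # blocked: exceeds the allowed output length
    return True

print(passes_guardrails("Paris is the capital of France."))  # True
```

In practice each check would be one layer among several (input filters, output classifiers, system-prompt constraints), but the pattern is the same: every response must pass all checks before it reaches the user.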
Bias
AI bias refers to systematic errors or unfair patterns in AI outputs that reflect prejudices present in the training data or in design choices made during development.
Jailbreak
A jailbreak is a prompt, technique, or sequence of inputs designed to bypass an AI model's safety guardrails and elicit restricted or harmful behavior.
Constitutional AI
Constitutional AI is a training technique developed by Anthropic in which an AI model critiques and revises its own outputs according to a written set of principles (a "constitution"), reducing reliance on human feedback for training harmless behavior.
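The critique-and-revise loop at the heart of the technique can be sketched as follows. Everything here is a toy stand-in: `generate` is a hypothetical placeholder for a real language-model call, and the two principles are invented for illustration.

```python
# Hedged sketch of Constitutional AI's critique-and-revise loop.
# `generate` is a hypothetical stand-in for a real language-model API call;
# here it just echoes its prompt so the control flow is visible.

PRINCIPLES = [
    "Avoid giving harmful instructions.",
    "Be honest about uncertainty.",
]

def generate(prompt: str) -> str:
    # Placeholder model call (illustrative only).
    return f"[model response to: {prompt}]"

def critique_and_revise(question: str) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    draft = generate(question)
    for principle in PRINCIPLES:
        critique = generate(f"Critique this answer against '{principle}': {draft}")
        draft = generate(f"Revise the answer using this critique: {critique}")
    return draft
```

In the actual training method, the revised answers produced by loops like this become training data, so the final model internalizes the principles rather than running the loop at inference time.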

Learn these terms with spaced repetition

AI Terminology Scrambler uses daily challenges and spaced repetition to help you build AI vocabulary that actually sticks — in just 5 minutes a day.
