Core · beginner

What is Inference?

A plain-English explanation of inference — what it means, why it matters, and how it is used in AI.

Inference
Inference is the process of running a trained AI model to generate predictions or outputs from new input data. It is the "using" phase of AI as opposed to the "training" phase.
"When you type a question into an AI chatbot and it responds, that response is generated through inference."

Also known as: Model inference, forward pass, prediction

Why does Inference matter?

Inference is the computational step that runs every time an AI model is used, so its cost and latency directly affect whether an AI product is practical to deploy at scale.
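The training/inference split can be sketched in a few lines. This is a minimal, hypothetical example: the weights and bias below stand in for parameters learned during an earlier training phase, and the forward pass simply applies them to new input without updating anything.

```python
# Hypothetical "trained" parameters -- in practice these come from
# a prior training phase and are fixed at inference time.
weights = [0.8, -0.3]
bias = 0.5

def predict(features):
    """Forward pass: combine new inputs with the learned weights.

    No learning happens here; this is the "using" phase of AI.
    """
    return sum(w * x for w, x in zip(weights, features)) + bias

# Inference on new, unseen input: 0.8*2.0 + (-0.3)*1.0 + 0.5 = 1.8
result = predict([2.0, 1.0])
```

Real systems do the same thing at a much larger scale — a chatbot's response is the output of a forward pass through billions of such parameters — which is why per-request inference cost and speed matter.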

Practice this term

The best way to remember Inference is to practice unscrambling it. AI Terminology Scrambler uses spaced repetition to help you learn and retain AI vocabulary in just a few minutes a day.

Practice Inference now →
