
What is LoRA?

A plain-English explanation of LoRA (Low-Rank Adaptation) — what it means, why it matters, and how it is used in AI.

LoRA (Low-Rank Adaptation) is a parameter-efficient fine-tuning technique that adapts very large language models without updating all their parameters: the pretrained weights are frozen, and only small low-rank matrices injected into selected layers are trained.
"Using LoRA (often combined with 4-bit quantization, as in QLoRA), a researcher can fine-tune a model with tens of billions of parameters on a single consumer GPU."
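The core idea can be shown in a few lines. Instead of updating a frozen weight matrix W, LoRA learns a correction expressed as the product of two small matrices B and A with inner rank r, scaled by a hyperparameter alpha. The dimensions below are illustrative, not tied to any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 1024, 8            # hidden size and low-rank bottleneck (illustrative)
alpha = 16                # LoRA scaling hyperparameter

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable, initialised small
B = np.zeros((d, r))                     # trainable, initialised to zero so
                                         # the adapter starts as a no-op

x = rng.standard_normal(d)

# LoRA forward pass: the frozen path plus a scaled low-rank correction.
y = W @ x + (alpha / r) * (B @ (A @ x))

# Only A and B are trained: 2*r*d parameters instead of d*d.
trainable = A.size + B.size
print(trainable, d * d)   # 16384 vs 1048576, about 1.6% of the full matrix
```

Because B starts at zero, the adapted model is initially identical to the base model, and training only moves it away from that starting point through the low-rank path.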

Also known as: Low-Rank Adaptation. Related term: parameter-efficient fine-tuning (PEFT), the broader family of techniques to which LoRA belongs.

Why does LoRA matter?

LoRA makes fine-tuning large models accessible to researchers and companies without massive GPU clusters. In the original paper, LoRA reduced the number of trainable parameters for GPT-3 175B by roughly 10,000× and GPU memory requirements during training by about 3×, while matching full fine-tuning quality. Because the frozen base weights are shared, many task-specific LoRA adapters can also be stored and swapped cheaply.
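Some rough arithmetic makes the savings concrete. The model shape below is a hypothetical example (a 4096-wide, 32-layer transformer, roughly the scale of a 7B model) with LoRA applied only to the four attention projection matrices:

```python
# Illustrative arithmetic: trainable parameters for full fine-tuning versus
# LoRA on the attention projections of a hypothetical transformer.
hidden = 4096      # assumed hidden size
layers = 32        # assumed number of transformer layers
r = 8              # LoRA rank

# Four attention projection matrices (q, k, v, o) per layer.
full_per_layer = 4 * hidden * hidden        # train the full matrices
lora_per_layer = 4 * 2 * r * hidden         # train only A (r x d) and B (d x r)

full_total = layers * full_per_layer
lora_total = layers * lora_per_layer

print(f"full fine-tuning of attention: {full_total:,} params")
print(f"LoRA (r={r}):                  {lora_total:,} params")
print(f"reduction: {full_total // lora_total}x")
```

With these assumed numbers the attention weights alone would need over two billion trainable parameters under full fine-tuning, versus a few million with LoRA, a 256× reduction that fits comfortably in a single GPU's optimizer state.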

