A plain-English explanation of LoRA (Low-Rank Adaptation) — what it means, why it matters, and how it is used in AI.
Also known as: LoRA, Low-Rank Adaptation; a widely used form of parameter-efficient fine-tuning (PEFT)
LoRA fine-tunes a large model by freezing its pretrained weights and training only small low-rank matrices that are added to them, which cuts the number of trainable parameters by orders of magnitude. This makes fine-tuning large models accessible to researchers and companies without access to massive GPU clusters.
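A minimal sketch of the idea in NumPy, for a single linear layer. The frozen weight W is augmented by a trainable low-rank update BA, scaled by alpha/r; the names W, A, B, alpha, and r follow the common LoRA convention, but this toy function is illustrative, not a real library API.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Forward pass of a LoRA-adapted linear layer (illustrative sketch).

    W: frozen pretrained weight, shape (d_out, d_in) -- never updated
    A: trainable down-projection, shape (r, d_in), small random init
    B: trainable up-projection, shape (d_out, r), zero init so the
       adapter starts as a no-op
    """
    r = A.shape[0]
    scale = alpha / r
    # Effective weight is W + scale * (B @ A); computing the two paths
    # separately avoids materializing the full-rank sum.
    return x @ W.T + scale * (x @ A.T @ B.T)

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 4
W = rng.normal(size=(d_out, d_in))            # frozen
A = rng.normal(scale=0.01, size=(r, d_in))    # trainable
B = np.zeros((d_out, r))                      # trainable, zero init

x = rng.normal(size=(1, d_in))
# With B at zero, the adapter contributes nothing: output matches the
# frozen model exactly, so training starts from the pretrained behavior.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Because only A and B are trained, the adapter adds r * (d_in + d_out) parameters instead of d_in * d_out, and the learned update can be merged back into W after training for zero inference overhead.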