LoRA

Terms from Artificial Intelligence: humans at the heart of algorithms


LoRA (low-rank adaptation of large language models) adapts a pretrained large language model by freezing its weights and learning low-rank update matrices for selected inner layers of the deep neural network, greatly reducing the number of parameters that must be trained and stored and thus the cost of retraining. This has economic and environmental advantages.
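To make the mechanism concrete, here is a minimal sketch in Python (assuming the PyTorch library; the class name LoRALinear and the hyperparameters r and alpha are illustrative, not from the original text). It wraps a frozen pretrained linear layer and adds a trainable rank-r update formed from two small factor matrices:

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Wraps a pretrained linear layer: the original weight matrix is
        # frozen, and only the low-rank factors A and B are trained, so the
        # layer computes base(x) + (alpha / r) * x A^T B^T.
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # pretrained weights stay fixed
            # B starts at zero so fine-tuning begins exactly at the
            # pretrained behaviour; A gets a small random initialisation.
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))
            self.scale = alpha / r

        def forward(self, x):
            # Frozen pretrained path plus the trainable low-rank correction.
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    # Example: adapt a 768-wide layer with rank-8 factors.
    layer = LoRALinear(nn.Linear(768, 768), r=8)

For a 768 x 768 layer with r = 8, the trainable factors hold 2 x 768 x 8 = 12,288 parameters versus 589,824 in the frozen weight matrix, which is where the retraining savings come from.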

Used in Chap. 23

Also known as low-rank adaptation of large language models

Adding trainable low-rank update matrices to inner layers to reduce retraining costs