LoRA
Also known as: Low-Rank Adaptation
A parameter-efficient fine-tuning method that freezes the pretrained weights and trains small low-rank adapter matrices added alongside them, lowering GPU memory and storage needs for domain adaptation.
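A minimal sketch of the idea in PyTorch, assuming a single linear projection is being adapted; the class and parameter names (LoRALinear, rank, alpha) are illustrative, not from any specific library:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # Only these small factors are trained: (rank x in) and (out x rank).
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output = frozen base projection + scaled low-rank update (zero at init).
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

# Usage: wrap an existing projection, then optimize only the adapter parameters.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = [p for p in layer.parameters() if p.requires_grad]  # lora_a, lora_b only
```

Because only the two small factors are stored per task, a fine-tuned variant can ship as a few megabytes of adapter weights on top of the shared base model.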
Contact us if you need a term added for a security or procurement review.