Tenyx Announces rsLoRA, An Advance in Efficient Fine-Tuning of LLMs

Tenyx has released a new parameter-efficient method for fine-tuning Large Language Models (LLMs). As its name implies, "Rank-Stabilized LoRA" (rsLoRA) improves on the popular LoRA (Low-Rank Adaptation) method. The method was recently featured on Hugging Face's community blog.

LoRA uses auxiliary matrices (called "adapters") of varying rank (roughly, low rank = low complexity) to reduce the number of parameters modified during fine-tuning. One of the important findings of the original LoRA work was that fine-tuning with very low adapter rank (e.g., 4 to 32) could perform well, and that this performance did not improve further with increasing rank.
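
For intuition, here is a minimal sketch of a LoRA-style adapter wrapped around a frozen linear layer, using the conventional α/r scaling. The class and parameter names are illustrative only, not Tenyx's or Hugging Face's code:

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank adapter."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # The pretrained weights stay frozen; only the adapter is trained.
        for p in self.base.parameters():
            p.requires_grad_(False)
        # Two low-rank factors: A (rank x in_features) and B (out_features x rank).
        # The number of trainable parameters scales with the rank, not the layer size.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank  # conventional LoRA scaling factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output = frozen W x + scaled low-rank update (B A) x
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```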

However, it turns out that this saturation of performance at very low rank is not primarily due to a very low intrinsic dimensionality of the learning manifold; rather, it stems from LoRA's stunted learning at anything beyond very low adapter ranks.
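
Concretely, the correction rsLoRA introduces is to the adapter's scaling factor: conventional LoRA scales the low-rank update by α/r, which shrinks the update's contribution (and its gradients) as the rank r grows, while rsLoRA scales by α/√r, keeping learning stable across ranks. Below is a hedged sketch of the difference, along with the flag that recent versions of Hugging Face's peft library expose for it (assuming a peft release new enough to support rsLoRA):

```python
import math

from peft import LoraConfig

rank, alpha = 256, 16.0

lora_scaling = alpha / rank               # conventional LoRA: update vanishes as rank grows
rslora_scaling = alpha / math.sqrt(rank)  # rsLoRA: rank-stabilized scaling

print(f"LoRA scaling at r={rank}:   {lora_scaling:.4f}")
print(f"rsLoRA scaling at r={rank}: {rslora_scaling:.4f}")

# In peft, switching to the rank-stabilized scaling is a single config flag
# (assumed available in your installed version; it was added in a recent release):
config = LoraConfig(r=rank, lora_alpha=int(alpha), use_rslora=True)
```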

Tenyx's rsLoRA research demonstrates this limitation of LoRA both theoretically and empirically, and corrects for it with the rsLoRA method. To find out more, you can view the full blog post here and the paper about the method here.
