Analyzing LLMs Fine-Tuned with LoRA using WeightWatcher
Evaluating Large Language Models (LLMs) can be challenging, especially when you don’t have much test data to work with. In a previous blog post, we discussed how to evaluate fine-tuned LLMs using the weightwatcher tool. Specifically, we looked at models after the ‘deltas’, or fine-tuning updates, had been merged into the base model.
In this blog post, we will focus on LLMs fine-tuned using Parameter-Efficient Fine-Tuning (PEFT), and specifically the Low-Rank Adaptation (LoRA) method. LoRA updates each selected weight matrix of the LLM with a low-rank update, making fine-tuning far more efficient in terms of storage and computation.
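To see why the low-rank update is so efficient, here is a minimal numpy sketch. The layer dimensions and rank are hypothetical; LoRA replaces a full d×d weight update with two thin factors B and A, so the trainable parameter count drops from d_out·d_in to r·(d_out + d_in).

```python
import numpy as np

d_out, d_in, r = 4096, 4096, 16  # hypothetical layer dims and LoRA rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))     # frozen base weight
B = np.zeros((d_out, r))                   # LoRA 'B' factor, initialized to zero
A = rng.standard_normal((r, d_in)) * 0.01  # LoRA 'A' factor

scale = 1.0  # stands in for the lora_alpha / r scaling used by PEFT
W_adapted = W + scale * (B @ A)            # effective weight after the update

full_params = d_out * d_in                 # a full-rank update: 16,777,216
lora_params = r * (d_out + d_in)           # the LoRA update:       131,072
print(lora_params / full_params)           # → 0.0078125
```

Since B starts at zero, the adapted weight initially equals the base weight, and only the small A and B factors are trained.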
To analyze LoRA fine-tuned models, the update or delta must either be loaded in memory or stored in a directory/folder in the appropriate format. In addition, the LoRA rank should be greater than 10 (so there are enough eigenvalues for a reliable fit), and the layer names for the A and B update matrices should include the tokens ‘lora_A’ and/or ‘lora_B’, the naming convention used by the HuggingFace peft library. Finally, the weightwatcher version should be 0.7.4.3 or higher to analyze LoRA models accurately.
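The naming requirement is easy to check yourself. Below is a small, hypothetical helper (not part of weightwatcher) that tests whether a layer name carries the ‘lora_A’/‘lora_B’ tokens, using example names in the style produced by the peft library:

```python
# Hypothetical helper: check whether an adapter layer name follows the
# 'lora_A' / 'lora_B' naming convention that weightwatcher looks for.
def is_lora_layer(name: str) -> bool:
    return "lora_A" in name or "lora_B" in name

# Example layer names in the style produced by the HuggingFace peft library
names = [
    "base_model.model.layers.0.self_attn.q_proj.lora_A.weight",
    "base_model.model.layers.0.self_attn.q_proj.lora_B.weight",
    "base_model.model.layers.0.self_attn.q_proj.weight",  # frozen base weight
]
print([is_lora_layer(n) for n in names])  # → [True, True, False]
```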
By loading the adapter model files directly into weightwatcher and using the peft=True option, you can analyze the LoRA updates separately from the base model. The tool reports layer quality metrics such as alpha, which can help you evaluate the effectiveness of the fine-tuning process.
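A minimal usage sketch follows. The base model name and adapter path are hypothetical placeholders, and the exact columns in the returned details DataFrame depend on your weightwatcher version (0.7.4.3 or higher is assumed here):

```python
# Sketch: analyze only the LoRA A/B updates of a PEFT-adapted model.
# "base-model-name" and "./my-lora-adapter" are hypothetical placeholders.
import weightwatcher as ww
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("base-model-name")
model = PeftModel.from_pretrained(base, "./my-lora-adapter")  # load the adapter

watcher = ww.WeightWatcher(model=model)
details = watcher.analyze(peft=True)  # analyze the LoRA updates, not the base
print(details)                        # per-layer quality metrics, incl. alpha
```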
One interesting observation is that in some LoRA fine-tuned models, the layer alphas are less than 2, indicating that the layers may be over-regularized or overfitting the training data. Comparing the LoRA layer alphas to the corresponding layers in the base model can provide insights into the fine-tuning process and help optimize the training parameters.
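For intuition about what alpha measures: it is the exponent of a power-law fit to the layer’s eigenvalue spectral density. Weightwatcher performs this fit internally; the sketch below uses a simplified continuous maximum-likelihood estimator on synthetic data drawn from a known power law, just to illustrate the quantity being fit:

```python
import numpy as np

def powerlaw_alpha_mle(eigs: np.ndarray, xmin: float) -> float:
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(x / xmin))."""
    tail = eigs[eigs >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

# Synthetic 'eigenvalues' drawn from a power law with known exponent 2.5
rng = np.random.default_rng(0)
eigs = 1.0 + rng.pareto(1.5, size=100_000)  # pdf ~ x^(-2.5) for x >= 1

alpha = powerlaw_alpha_mle(eigs, xmin=1.0)
print(alpha)  # close to the true exponent, 2.5

# Reading the metric: alpha roughly in [2, 6] is typical of well-trained
# layers, while alpha < 2 flags layers that may have over-fit the data.
```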
Overall, analyzing LLMs fine-tuned with the LoRA technique can provide valuable insights into the model’s performance and guide further optimization strategies. By leveraging tools like weightwatcher and experimenting with different fine-tuning approaches, researchers and developers can enhance the efficiency and effectiveness of large language models.