QLoRA is all you need (Fast and lightweight model fine-tuning)

Published Sep 15, 2023

Learning and sharing my process for QLoRA (quantized low-rank adaptation) fine-tuning. In this case I use a custom-made Reddit dataset, but you can use anything you want.
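
For reference, here is a minimal sketch of what the QLoRA setup looks like in code: a 4-bit NF4 base model via bitsandbytes, LoRA adapters via PEFT, and a standard Hugging Face Trainer on top. This is not the exact qlora.py flow from the repo linked below; the base model id, dataset file, and hyperparameters are placeholders.

# Minimal QLoRA sketch (placeholders for model id, data, and hyperparameters)
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-hf"   # placeholder base model id

# 4-bit NF4 quantization with double quantization, as described in the QLoRA paper
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections; only these weights are trained
lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05, bias="none",
    task_type="CAUSAL_LM", target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# Any text dataset with a "text" column works; swap in your own data here
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    train_dataset=dataset,
    args=TrainingArguments(
        output_dir="qlora-out", per_device_train_batch_size=1,
        gradient_accumulation_steps=16, num_train_epochs=1,
        learning_rate=2e-4, logging_steps=10, bf16=True,
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("qlora-out/adapter")  # saves only the LoRA adapter weights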

I referenced a LOT of stuff in this video. I'll do my best to link everything, but let me know if I forgot anything.

Resources:
WSB-GPT-7B Model: https://huggingface.co/Sentdex/WSB-GP...
WSB-GPT-13B Model: https://huggingface.co/Sentdex/WSB-GP...
WSB Training data: https://huggingface.co/datasets/Sentd...

Code:
QLoRA Repo: https://github.com/artidoro/qlora
qlora.py: https://github.com/artidoro/qlora/blo...
Simple qlora training notebook: https://colab.research.google.com/dri...
qlora merging/dequantizing code: https://gist.github.com/ChrisHayduk/1...
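
The merging/dequantizing gist linked above is the careful way to do this step. As a rough illustration of what it accomplishes, here is a sketch that reloads the base model in fp16 and folds the adapter in with PEFT's merge_and_unload; this is not the gist's exact approach, and the model id and paths are placeholders.

# Rough sketch of merging a trained LoRA adapter back into the base model
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "meta-llama/Llama-2-7b-hf"   # placeholder base model id
adapter_dir = "qlora-out/adapter"          # placeholder adapter path

model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_dir)
model = model.merge_and_unload()           # folds the LoRA deltas into the base weights

tokenizer = AutoTokenizer.from_pretrained(base_model)
model.save_pretrained("merged-model")      # standalone model, loads without peft
tokenizer.save_pretrained("merged-model")
# model.push_to_hub("your-username/your-model")  # optional: share on the Hub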

Referenced Research Papers:
Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning: https://arxiv.org/abs/2012.13255
LoRA: Low-Rank Adaptation of Large Language Models: https://arxiv.org/abs/2106.09685
QLoRA: Efficient Finetuning of Quantized LLMs: https://arxiv.org/abs/2305.14314

Yannic's GPT-4chan model: https://huggingface.co/ykilcher/gpt-4...
Condemnation letter: https://docs.google.com/forms/d/e/1FA...
Yannic's video: GPT-4chan: This is the worst AI ever

Contents:

0:00 - Why QLoRA?
0:55 - LoRA/QLoRA Research
4:13 - Fine-tuning dataset
11:10 - QLoRA Training Process
15:02 - QLoRA Adapters
17:10 - Merging, Dequantizing, and Sharing
19:34 - WSB QLoRA fine-tuned model examples
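
To try the finished model (the examples in the last chapter), here is a minimal generation sketch. The repo id and prompt format are assumptions based on the truncated links above, so check the Hugging Face model page for the exact values.

# Minimal inference sketch for the fine-tuned model
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sentdex/WSB-GPT-7B"  # assumed repo id; confirm on the model page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Example prompt; the actual training prompt format is an assumption here
prompt = "What do you think of NVDA earnings?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True,
                         temperature=0.8, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))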

Neural Networks from Scratch book: https://nnfs.io
Channel membership: / @sentdex
Discord: / discord
Reddit: / sentdex
Support the content: https://pythonprogramming.net/support...
Twitter: / sentdex
Instagram: / sentdex
Facebook: / pythonprogramming.net
Twitch: / sentdex
