Comparing Full Fine-Tuning and LoRA on a News Generation Task
Datasets: AG News + CNN/DailyMail
Data format: Instruction-formatted news text
Base model: OPT-125M
Methods: Full fine-tuning vs LoRA (~1% of parameters trainable; see the sketch after this list)
Comparison: Quality vs efficiency
Takeaway: Real-world PEFT trade-offs
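The "~1%" figure refers to the fraction of parameters that LoRA leaves trainable. A minimal sketch of how that fraction can be checked with Hugging Face `transformers` and `peft`; the rank and target modules below are assumptions chosen so the count lands near 1% on OPT-125M, not necessarily the demo's actual hyperparameters:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the small OPT base model used in this demo (~125M parameters).
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Assumed LoRA hyperparameters: rank-16 adapters on all four attention
# projections give ~1.2M trainable parameters, roughly 1% of the model.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
# Prints trainable vs. total parameter counts and the trainable percentage.
model.print_trainable_parameters()
```

With these settings the printed trainable fraction comes out just under 1%; lowering the rank or targeting fewer projections shrinks it further.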
Large language models are expensive to train and deploy. Parameter-efficient fine-tuning (PEFT) methods such as LoRA promise large reductions in trainable parameters, optimizer state, and checkpoint size, but they can come with performance trade-offs. This demo makes those trade-offs visible and measurable.
This system is trained exclusively on news-style datasets (AG News, CNN/DailyMail). It is designed to demonstrate fine-tuning trade-offs, not factual question answering. Outputs may be stylistically fluent but are not guaranteed to be correct or up-to-date.
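For concreteness, here is a sketch of what "instruction-formatted news text" might look like for a CNN/DailyMail summarization example. The template string is hypothetical, since this section does not specify the demo's actual prompt format; the `article` and `highlights` field names match the Hugging Face `cnn_dailymail` dataset.

```python
# Hypothetical instruction template; the demo's real prompt format is not
# given in this section.
def format_cnn_dm(example: dict) -> str:
    """Turn one CNN/DailyMail record into an instruction-style training string."""
    return (
        "### Instruction:\n"
        "Summarize the following news article.\n\n"
        f"### Article:\n{example['article']}\n\n"
        f"### Response:\n{example['highlights']}"
    )

sample = {
    "article": "Stocks rallied on Friday after the jobs report beat forecasts...",
    "highlights": "Markets rose sharply following stronger-than-expected jobs data.",
}
print(format_cnn_dm(sample))
```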