LLM Fine-Tuning Trade-offs

Comparing Full Fine-Tuning and LoRA on a News Generation Task


Project Overview

Dataset: AG News + CNN/DailyMail, instruction-formatted news text

Models: OPT-125M, full fine-tuning vs LoRA (~1% trainable parameters)

Goal: quality vs efficiency, real-world PEFT trade-offs

Why This Matters

Large language models are expensive to fine-tune and deploy. Parameter-efficient fine-tuning methods like LoRA update only a small fraction of the weights, cutting training cost and GPU memory, but they can trade away some output quality. This demo makes those trade-offs visible and measurable on a single news generation task.
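
To make the efficiency side concrete, here is a minimal sketch of how the two training setups differ, assuming the Hugging Face transformers and peft libraries. The LoRA rank, scaling, and target modules shown are illustrative placeholders, not the exact configuration used in this demo.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Full fine-tuning baseline: every parameter of OPT-125M receives gradients.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
full_params = sum(p.numel() for p in base.parameters())

# LoRA setup: freeze the base model and inject small low-rank adapters.
# r, lora_alpha, and target_modules are illustrative defaults.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # OPT attention projections
)
lora_model = get_peft_model(base, lora_config)

trainable = sum(p.numel() for p in lora_model.parameters() if p.requires_grad)
print(f"Full fine-tuning: {full_params:,} trainable parameters")
print(f"LoRA: {trainable:,} trainable parameters "
      f"({100 * trainable / full_params:.2f}% of the model)")
```

With a rank-8 adapter on the attention projections, only a few hundred thousand of OPT-125M's roughly 125 million weights are updated; the exact percentage depends on the rank and which modules are targeted.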

⚠️ Demo Notice

This system is trained exclusively on news-style datasets (AG News, CNN/DailyMail). It is designed to demonstrate fine-tuning trade-offs, not factual question answering. Outputs may be stylistically fluent but are not guaranteed to be correct or up-to-date.

Try It Yourself

Full Fine-Tuned Model

LoRA Model (~1% trainable params)
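
The two demo endpoints above can be reproduced offline with a short generation script. The sketch below assumes the same transformers/peft stack and uses hypothetical checkpoint paths ("full-ft-opt125m" and "lora-opt125m-adapter"); substitute the directories written by your own training runs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
prompt = "Write a short news headline about renewable energy:"
inputs = tokenizer(prompt, return_tensors="pt")

# Fully fine-tuned model: a standalone checkpoint where all weights were updated.
full_model = AutoModelForCausalLM.from_pretrained("full-ft-opt125m")

# LoRA model: load the frozen base, then attach the small adapter weights.
lora_model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained("facebook/opt-125m"),
    "lora-opt125m-adapter",
)

for name, model in [("Full FT", full_model), ("LoRA", lora_model)]:
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=40, do_sample=False)
    print(f"[{name}] {tokenizer.decode(out[0], skip_special_tokens=True)}")
```

Note the deployment difference: the fully fine-tuned endpoint stores a complete copy of the model, while the LoRA endpoint stores only the small adapter and re-attaches it to the frozen base at load time.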