Fine-tuning · Free · Open Source
LLAMA FACTORY
Fine-tune and post-train open models through a unified CLI or web UI
Apache-2.0
ABOUT
Fine-tuning open models usually means stitching together separate scripts for dataset prep, LoRA adapters, preference optimization, evaluation, and serving. LLaMA Factory puts those workflows in one toolkit so teams can adapt language and vision-language models locally with less boilerplate, less infrastructure glue code, and lower GPU memory requirements.
INSTALL
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e .
INTEGRATION GUIDE
1. Supervised fine-tune an open LLM on domain-specific instruction data
2. Run LoRA or QLoRA training on limited GPU memory for smaller teams
3. Post-train chat models with DPO, PPO, KTO, or ORPO alignment workflows
4. Adapt vision-language models for multimodal assistants and document tasks
5. Let non-experts experiment locally through the built-in web UI and dashboards
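The workflows above are driven by YAML config files passed to the `llamafactory-cli` tool. Below is a minimal sketch of a LoRA supervised fine-tuning config; the field names follow the example configs shipped in the repository's `examples/` directory, and the model, dataset, and hyperparameter values are illustrative placeholders to adapt to your own setup:

```yaml
# Hypothetical LoRA SFT config — values are placeholders, adjust to your setup
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft                      # supervised fine-tuning (use dpo/kto/etc. for alignment)
do_train: true
finetuning_type: lora           # train low-rank adapters instead of full weights
lora_target: all
dataset: identity               # a sample dataset bundled with the repo
template: llama3                # chat template matching the base model
cutoff_len: 1024
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
gradient_accumulation_steps: 8  # effective batch size of 8 on one GPU
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true
```

Launch training with `llamafactory-cli train path/to/config.yaml`, or start the browser-based workflow from step 5 with `llamafactory-cli webui`.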
TAGS
python · fine-tuning · llm · vlm · lora · qlora · rlhf · web-ui