Fine-tuning · Free · Open Source
TORCHTUNE
PyTorch-native library for LLM post-training and fine-tuning
BSD-3-Clause
ABOUT
Adapting open language models usually requires custom training code, experiment scaffolding, and careful memory optimization before teams can run practical fine-tuning jobs. torchtune packages common post-training workflows into reusable PyTorch recipes, YAML configs, and a CLI so developers can fine-tune, distill, optimize, and evaluate models with less glue code.
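Each recipe is driven by a YAML config file. As a rough sketch of what such a config looks like (field names here are illustrative, not torchtune's exact schema; the shipped configs, browsable via `tune ls` and copyable via `tune cp`, are the authoritative reference):

```yaml
# Illustrative recipe config fragment; keys are a sketch of the style,
# not the exact schema -- consult the configs bundled with torchtune.
model:
  _component_: torchtune.models.llama3_2.lora_llama3_2_1b
  lora_rank: 8
optimizer:
  _component_: torch.optim.AdamW
  lr: 3e-4
batch_size: 2
epochs: 1
```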
INSTALL
pip install torchtune
INTEGRATION GUIDE
1. Run LoRA or QLoRA fine-tuning for Llama, Qwen, Gemma, or Mistral models
2. Launch supervised fine-tuning jobs from reusable YAML training recipes
3. Distill larger teacher models into smaller student models for deployment
4. Experiment with preference optimization workflows such as DPO, PPO, or GRPO
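The YAML-driven jobs above can also be tweaked from the command line: torchtune's CLI accepts `key=value` overrides that are merged into the recipe config. A minimal, self-contained sketch of that dotted-override pattern in plain Python (illustrative only, not torchtune's actual implementation):

```python
# Sketch of merging CLI-style "dotted.key=value" overrides into a nested
# recipe config dict, similar in spirit to how YAML-driven trainers apply
# command-line overrides. Not torchtune's actual code.

def apply_overrides(config: dict, overrides: list[str]) -> dict:
    """Apply overrides like 'optimizer.lr=2e-5' to a nested config in place."""
    for item in overrides:
        dotted_key, _, raw_value = item.partition("=")
        keys = dotted_key.split(".")
        node = config
        for key in keys[:-1]:
            # Create intermediate sections as needed.
            node = node.setdefault(key, {})
        # Best-effort literal parsing: ints and floats stay numeric,
        # everything else is kept as a string.
        try:
            value: object = int(raw_value)
        except ValueError:
            try:
                value = float(raw_value)
            except ValueError:
                value = raw_value
        node[keys[-1]] = value
    return config

# Hypothetical recipe config for demonstration.
recipe = {
    "model": {"name": "llama3_2_1b", "lora_rank": 8},
    "optimizer": {"lr": 3e-4},
    "epochs": 1,
}
apply_overrides(recipe, ["optimizer.lr=2e-5", "model.lora_rank=16", "epochs=3"])
```

After the call, `recipe["optimizer"]["lr"]` is `2e-5` and `recipe["epochs"]` is `3`, leaving every other field untouched.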
TAGS
pytorch · fine-tuning · lora · qlora · distillation · post-training · llm