Fine-tuning, Freemium, Open Source
UNSLOTH
Train and run LLMs locally 30x faster with 70% less memory
Apache-2.0
ABOUT
Traditional LLM fine-tuning is slow, memory-intensive, and requires complex setup. Unsloth solves this by providing optimized custom kernels that make training up to 30x faster with 70-90% less memory, enabling developers to fine-tune large models on consumer hardware without sacrificing accuracy.
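A large share of the memory savings described above comes from LoRA-style adapters, which Unsloth supports (see the lora tag below): the base weights stay frozen and only two thin low-rank matrices per adapted layer are trained. The numbers in this back-of-envelope sketch (hidden size, layer count, matrix count, rank) are illustrative assumptions, not Unsloth's own measurements:

```python
# Back-of-envelope: why LoRA-style fine-tuning needs far less
# trainable state than full fine-tuning. All sizes are assumed,
# roughly matching a ~7B-parameter transformer.

hidden = 4096   # hidden size
layers = 32     # transformer layers
mats = 7        # adapted weight matrices per layer (attention + MLP)
rank = 16       # LoRA rank

# Full fine-tuning updates every weight of each ~hidden x hidden matrix.
full_params = layers * mats * hidden * hidden

# LoRA trains two thin matrices (hidden x rank and rank x hidden)
# per adapted matrix instead, leaving the originals frozen.
lora_params = layers * mats * 2 * hidden * rank

print(f"trainable params, full: {full_params:,}")
print(f"trainable params, LoRA: {lora_params:,}")
print(f"reduction: {100 * (1 - lora_params / full_params):.1f}%")
```

With these assumptions the trainable-parameter count drops by over 99%; optimizer state (the dominant memory cost during training) shrinks proportionally, which is what makes consumer-GPU fine-tuning feasible.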
INSTALL
pip install unsloth
INTEGRATION GUIDE
1. Fine-tuning LLMs like Mistral, Gemma, Llama, and Qwen for custom tasks
2. Running models locally with an OpenAI-compatible API and tool-calling support
3. Creating custom training datasets from PDFs, CSVs, and JSON files
4. Fine-tuning vision, audio, and embedding models efficiently
5. Exporting fine-tuned models to GGUF, Safetensors, Ollama, or vLLM formats
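Point 3 above (building training datasets from CSVs and JSON files) can be sketched with the standard library alone. The CSV content, its column names, and the instruction/output record shape are assumptions for illustration, not a format Unsloth mandates:

```python
import csv
import io
import json

# Hypothetical raw CSV of question/answer pairs
# (in practice this would be read from a file).
raw_csv = """question,answer
What is LoRA?,A low-rank adapter method for fine-tuning.
What is GGUF?,A binary format for running models locally.
"""

def csv_to_instruction_jsonl(csv_text: str) -> str:
    """Convert a question/answer CSV into JSONL records in a common
    instruction-tuning shape: {"instruction": ..., "output": ...}."""
    rows = csv.DictReader(io.StringIO(csv_text))
    lines = [
        json.dumps({"instruction": row["question"], "output": row["answer"]})
        for row in rows
    ]
    return "\n".join(lines)

print(csv_to_instruction_jsonl(raw_csv))
```

Each output line is one self-contained JSON record, which most fine-tuning data loaders (including Hugging Face `datasets`, which Unsloth builds on) can ingest directly.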
TAGS
llm, fine-tuning, inference, local-ai, open-source, lora, gguf, training, ai, machine-learning