LLAMAFILE
Distribute and run LLMs with a single file.
Apache-2.0
ABOUT
Running and distributing LLMs usually requires complex dependency management, multiple files, and platform-specific installations. llamafile solves this by packaging everything into a single executable that runs locally across macOS, Windows, Linux, FreeBSD, OpenBSD, and NetBSD on both AMD64 and ARM64 with zero installation, making open LLMs accessible to developers and end users alike.
INSTALL
curl -LO https://huggingface.co/mozilla-ai/llamafile_0.10.0/resolve/main/llamafile_0.10.0 && chmod +x llamafile_0.10.0
INTEGRATION GUIDE
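Once downloaded, the executable can be run directly against a local GGUF model. A minimal sketch, assuming a model file named `Llama-3.2-1B-Instruct.Q6_K.gguf` in the current directory (the model filename is illustrative; any GGUF model works):

```shell
# One-shot prompt against a local GGUF model (model filename is illustrative)
./llamafile_0.10.0 -m Llama-3.2-1B-Instruct.Q6_K.gguf -p "Explain GGUF in one sentence."

# Start the built-in web UI and OpenAI-compatible API server on port 8080
./llamafile_0.10.0 -m Llama-3.2-1B-Instruct.Q6_K.gguf --server --host 0.0.0.0 --port 8080
```

llamafile inherits most of its flags from llama.cpp, so familiar options such as `-m`, `-p`, and `--port` apply here as well.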
1. Local AI chatbot and assistant deployment
2. Offline speech-to-text transcription and translation
3. Distributing LLMs as portable single-file executables
4. Running vision-language models on local hardware
5. Hosting OpenAI-compatible local API servers
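For use case 5, a running llamafile server exposes an OpenAI-compatible endpoint (by default `http://localhost:8080/v1/chat/completions`). A minimal sketch of building a request body for it; the `model` value is a placeholder, as the server serves whichever model it was launched with:

```python
import json

def build_chat_request(prompt, model="LLaMA_CPP", temperature=0.7):
    """Build a JSON body for an OpenAI-style /v1/chat/completions call.

    The "model" field is a placeholder here; a llamafile server answers
    with the model it was started with regardless of this value.
    """
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

body = build_chat_request("Summarize what a llamafile is.")
print(json.dumps(body, indent=2))
```

With the server running, POSTing this body to `http://localhost:8080/v1/chat/completions` (or pointing any OpenAI client library at that base URL) returns a standard chat-completion response, so existing OpenAI-based tooling works against the local model unchanged.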
TAGS
llm, local-ai, single-file, executable, open-source, mozilla, llama.cpp, cosmopolitan, cpu, gpu, speech-to-text, whisper, portable, gguf