RAG · Freemium · Open Source

ANYTHINGLLM

The all-in-one AI productivity accelerator.

MIT

ABOUT

Teams that want private AI over their own documents usually have to stitch together ingestion, embeddings, vector storage, chat UI, permissions, and model hosting by hand. AnythingLLM removes that setup burden by packaging document chat, retrieval, agents, and workspace management into one deployable app that can run locally, offline, or in a shared self-hosted environment.
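The pipeline this paragraph describes (ingest documents, embed them, store vectors, retrieve context for chat) can be sketched in a few lines. This is a generic illustration of the RAG pattern, not AnythingLLM's code: the toy word-count "embeddings" and in-memory store stand in for a real embedding model and vector database.

```python
# Minimal sketch of the RAG loop that AnythingLLM packages for you:
# embed documents, store the vectors, retrieve the best match, build a prompt.
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts (a real model returns dense vectors)."""
    return Counter(re.findall(r"\w+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# "Vector store": a list of (document, embedding) pairs held in memory.
docs = [
    "Vacation policy: employees accrue 1.5 days of leave per month.",
    "Expense policy: receipts are required for purchases over 50 dollars.",
]
store = [(d, embed(d)) for d in docs]


def retrieve(question: str) -> str:
    """Return the stored document most similar to the question."""
    q = embed(question)
    return max(store, key=lambda pair: cosine(q, pair[1]))[0]


question = "How many vacation days do I accrue?"
context = retrieve(question)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

AnythingLLM's value is that this loop, plus the chat UI, permissions, and model hosting around it, ships preassembled instead of being hand-built per team.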

INSTALL
export STORAGE_LOCATION=$HOME/anythingllm
mkdir -p $STORAGE_LOCATION
touch "$STORAGE_LOCATION/.env"
docker run -d -p 3001:3001 \
  --cap-add SYS_ADMIN \
  -v ${STORAGE_LOCATION}:/app/server/storage \
  -v ${STORAGE_LOCATION}/.env:/app/server/.env \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm:latest

INTEGRATION GUIDE

1. Chat with internal documents and knowledge bases using retrieval-augmented generation
2. Run private desktop AI workflows with local models and offline access
3. Host a multi-user browser workspace with admin controls for team knowledge chat
4. Publish embeddable support or knowledge chat widgets backed by your documents

TAGS

rag, document-chat, ai-agents, local-llm, self-hosted, privacy, multi-user