Monitoring · Freemium · Open Source

ARIZE PHOENIX

LLM tracing, evaluation, and experimentation platform

Elastic License 2.0

ABOUT

LLM and agent systems fail in ways that are hard to reproduce from logs alone. Phoenix makes prompts, spans, retrieval context, responses, feedback, and evaluation results visible so teams can debug quality issues, compare experiments, and improve production AI behavior.

INSTALL
pip install arize-phoenix

INTEGRATION GUIDE

1. Trace and debug LLM agent workflows with OpenTelemetry instrumentation
2. Evaluate RAG retrieval quality, hallucination risk, and response quality
3. Compare prompts, models, parameters, datasets, and experiment results
4. Collect human feedback and annotations for production AI outputs

TAGS

llm-observability · evaluation · tracing · opentelemetry · agents · rag · prompt-management