LLM · Free · Open Source
GUARDRAILS AI
Open-source guardrails for safer, more reliable LLM apps
Apache-2.0
ABOUT
Guardrails AI helps teams detect and mitigate LLM risks such as PII leakage, hallucinations, jailbreaks, unsafe content, policy violations, and malformed structured outputs before they reach users.
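For example, a guard can screen a model response before it reaches the user. A minimal sketch, assuming the DetectPII validator has already been installed from the Guardrails Hub (guardrails hub install hub://guardrails/detect_pii):

from guardrails import Guard
from guardrails.hub import DetectPII  # assumes prior: guardrails hub install hub://guardrails/detect_pii

# Build a guard that redacts common PII entities in LLM output.
guard = Guard().use(
    DetectPII,
    pii_entities=["EMAIL_ADDRESS", "PHONE_NUMBER"],
    on_fail="fix",  # redact offending spans instead of raising
)

# Validate a model response before returning it to the user.
result = guard.validate("Sure! You can reach Alice at alice@example.com.")
print(result.validated_output)  # PII redacted, e.g. "<EMAIL_ADDRESS>" in place of the address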
INSTALL
pip install guardrails-ai
INTEGRATION GUIDE
1. Validate LLM inputs and outputs for safety and policy compliance
2. Detect PII leakage, hallucinations, jailbreaks, and unsafe content
3. Generate and enforce structured outputs from large language models (first sketch below)
4. Add runtime guardrails to chatbots, RAG pipelines, and agent workflows (second sketch below)
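A sketch of step 3, enforcing structured output against a Pydantic schema. The raw JSON string stands in for a real model response, and depending on your Guardrails version the classmethod may be named from_pydantic rather than for_pydantic:

from pydantic import BaseModel, Field
from guardrails import Guard

class Ticket(BaseModel):
    category: str = Field(description="Support ticket category")
    urgent: bool = Field(description="Whether the issue needs immediate attention")

# Guard that validates LLM output against the Ticket schema.
guard = Guard.for_pydantic(Ticket)

# Validate a raw model response (stand-in string here).
outcome = guard.parse('{"category": "billing", "urgent": true}')
print(outcome.validation_passed, outcome.validated_output)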
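And a sketch of step 4, wiring guards into a chat or RAG flow at runtime. Here call_llm is a hypothetical stand-in for whatever completion client your app uses, and the ToxicLanguage validator is assumed installed from the Hub:

from guardrails import Guard
from guardrails.hub import ToxicLanguage  # assumes prior: guardrails hub install hub://guardrails/toxic_language

# Raise on unsafe content rather than silently fixing it.
guard = Guard().use(ToxicLanguage, threshold=0.5, on_fail="exception")

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your chat/RAG completion client."""
    return "..."

def answer(user_prompt: str) -> str:
    guard.validate(user_prompt)                    # screen the incoming prompt
    reply = call_llm(user_prompt)                  # generate as usual
    return guard.validate(reply).validated_output  # screen the reply before returning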
TAGS
guardrails · llm-safety · validation · policy-enforcement · structured-outputs · hallucination-detection · pii-detection