HomeToolsMCPHow It WorksStoriesPhilosophyCommunityArchitectureStar on GitHub
All Tools
N
OtherFreeOpen Source

NVIDIA NEMO GUARDRAILS

Programmable guardrails for safer LLM and agent apps

Apache-2.0

ABOUT

LLM and agent systems can go off-topic, leak sensitive data, follow malicious instructions, or answer without proper grounding. NVIDIA NeMo Guardrails helps teams define and enforce input, output, retrieval, and dialogue policies so AI applications behave more safely and predictably in production.

INSTALL
pip install nemoguardrails

INTEGRATION GUIDE

1. Block jailbreak, prompt-injection, and unsafe content attempts 2. Enforce domain and policy boundaries for support or enterprise assistants 3. Detect and handle PII in user prompts and model responses 4. Ground RAG answers against approved knowledge sources before responding 5. Add configurable conversation flows and refusal behaviors to agent apps

TAGS

guardrailsai-safetyprompt-injectionpiipolicy-enforcementllmagentsrag
NVIDIA NeMo Guardrails — AI Tool | Agentic AI For Good