Direct Answer (TL;DR)
Brilo AI Sandbox Mode lets you test knowledge changes, training data, and call flows in an isolated test environment before those changes go live. It uses a separate knowledge set and test routing, so the Brilo AI voice agent answers test calls, runs validation checks, and logs results without affecting production calls or live customers. Use Sandbox Mode to run dry runs, confirm intent mapping, and validate webhooks or integrations before promoting changes to production. Sandbox Mode also supports versioning and rollback workflows, so you can compare outcomes between the sandbox and production voice agents.
Can I test knowledge updates in a sandbox?
Yes — Brilo AI Sandbox Mode runs your knowledge updates against a non-production voice agent so you can validate behavior before deployment.
How do I run test calls for knowledge changes?
Run test calls from the Brilo AI dashboard or use a test webhook to exercise the sandboxed knowledge set; review transcripts and intent confidence scores to validate results.
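As a concrete illustration, the sketch below assembles a request body for triggering a sandboxed test call via webhook. The field names and values here are hypothetical placeholders, not Brilo AI's documented API schema; consult your dashboard or API reference for the real contract.

```python
import json

# Hypothetical payload for triggering a sandbox test call.
# Field names are illustrative assumptions, not Brilo AI's actual schema.
def build_test_call_payload(agent_id, phone_number, scenario):
    """Assemble a request body for a sandboxed test call."""
    return {
        "agent_id": agent_id,
        "environment": "sandbox",   # keep the call off production routing
        "to": phone_number,
        "scenario": scenario,       # e.g. "appointment_reschedule"
        "record_transcript": True,  # needed to review intent confidence later
    }

payload = build_test_call_payload("agent-demo", "+15550100", "appointment_reschedule")
print(json.dumps(payload, indent=2))
```

In practice you would POST this JSON to your sandbox endpoint with your API credentials, then pull the transcript and intent confidence scores from the resulting test log.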
Can I validate integrations and webhooks in sandbox?
Yes — when configured, Sandbox Mode will call your webhook endpoint and simulate integration responses without touching production routing.
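One practical safeguard on your side is to have the test endpoint check an environment marker on each incoming event before acting on it, so a misconfigured route can never trigger production behavior. The `environment` field and handler below are assumptions for illustration, not a documented Brilo AI contract.

```python
# Guard a test webhook handler so it only processes sandbox events.
# The "environment" field is an assumed marker, not a documented Brilo AI field.
def handle_webhook_event(event):
    """Process a sandbox event; refuse anything that looks like production."""
    if event.get("environment") != "sandbox":
        return {"status": "rejected", "reason": "non-sandbox event"}
    # Simulate the integration response the real system would return.
    return {
        "status": "ok",
        "intent": event.get("intent", "unknown"),
        "simulated": True,  # flag so downstream tooling knows this was a dry run
    }

print(handle_webhook_event({"environment": "sandbox", "intent": "check_balance"}))
print(handle_webhook_event({"environment": "production", "intent": "check_balance"}))
```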
Can I revert a knowledge change after testing?
Yes — Sandbox Mode includes versioning so you can keep the previous production knowledge set intact and roll back if testing surfaces issues.
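The versioning-and-rollback idea can be pictured with a minimal version store: each promotion snapshots the previous production set, so a rollback is a single step. This is a conceptual sketch only, not Brilo AI's internal mechanism, which is managed through its dashboard.

```python
# Minimal sketch of a versioned knowledge store with one-step rollback.
# Conceptual only; Brilo AI's actual versioning is handled in its dashboard.
class KnowledgeVersions:
    def __init__(self, initial):
        self.history = [initial]  # history[-1] is always the live set

    def promote(self, new_set):
        """Push a validated sandbox set live, keeping the old one for rollback."""
        self.history.append(new_set)

    def rollback(self):
        """Restore the previous production set if testing surfaced issues."""
        if len(self.history) > 1:
            self.history.pop()
        return self.history[-1]

    @property
    def live(self):
        return self.history[-1]

store = KnowledgeVersions({"faq": "v1 answers"})
store.promote({"faq": "v2 answers"})  # sandbox-validated update goes live
store.rollback()                      # issue found: revert in one step
print(store.live)
```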
Why This Question Comes Up (problem context)
Buyers ask about Sandbox Mode because changing knowledge or retraining can unintentionally degrade live call quality. Enterprises in healthcare, banking, and insurance need a safe way to validate updates without risking customer experience or regulatory exposure. Testing knowledge changes, intent updates, and new dialogue paths in an isolated environment reduces change-related incidents and supports controlled releases and compliance reviews.
How It Works (High-Level)
Brilo AI Sandbox Mode creates an isolated copy of your voice agent’s knowledge base, dialog flows, and routing rules for testing. When enabled, the Brilo AI voice agent will route calls or simulated sessions to the sandboxed environment instead of production, letting you confirm responses, intent classification, and webhook behavior. Typical sandbox activities include running representative test calls, validating answer quality, and checking fallback and escalation conditions.
A sandbox knowledge set is a copy of your live knowledge base used only for testing and not served to real customers. Consider running sample test calls and accent/locale variations to ensure voice and ASR performance before promoting changes.
Guardrails & Boundaries
Brilo AI Sandbox Mode is intentionally limited to testing. It should not be used for production traffic, customer-facing updates, or any activity that requires audit-level production logging unless explicitly configured. Sandbox sessions may not include the same data retention, monitoring integrations, or third-party connectors as production by default. Before using sandbox test data for regulatory purposes, confirm your organization’s compliance requirements and log policies.
In Brilo AI, a training run (dry run) is a non-production execution of updated training data that collects metrics and example transcripts without modifying the production model. Do not assume sandbox results will exactly match production performance; differences can occur because production monitoring, scaling, or certain integrations may behave differently.
For fallback and uncertain answers, see Brilo AI’s guidance on how the system behaves when unsure: Brilo AI — What happens when the AI is unsure?
Applied Examples
Healthcare: A hospital tests updates to appointment scheduling prompts in Sandbox Mode to confirm the Brilo AI voice agent correctly captures procedure codes and preferred appointment windows during simulated patient calls, without exposing real patient records.
Banking: A retail bank validates revised knowledge for account-authentication prompts in Sandbox Mode, running test calls to verify intent recognition and confirming that the Brilo AI voice agent triggers the correct verification workflows.
Insurance: An insurer uses Sandbox Mode to trial new claims triage questions and to confirm escalation triggers when a customer indicates high severity, ensuring human handoff workflows activate only when intended.
Human Handoff & Escalation
When a sandbox test reveals an escalation condition, Brilo AI can simulate the human handoff path without connecting to live agents. Configure sandbox routing so that escalations either record a simulated transfer or send a non-production webhook to your test endpoint. In production, the Brilo AI voice agent uses the same escalation rules but routes to live queues or contact centers; Sandbox Mode keeps those handoffs contained for validation and training. Use the test logs to confirm that handoff prompts, context passed to agents, and metadata (for example, intent and confidence) are correct before enabling them in production.
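Checking handoff metadata in the test logs can be partly automated with a small validator: confirm that each simulated-transfer record carries the intent, confidence, and context fields you expect live agents to receive. The field names here are assumptions for illustration, not a fixed Brilo AI log schema.

```python
# Validate simulated handoff log entries before enabling escalation in production.
# Field names ("intent", "confidence", "context") are illustrative assumptions.
REQUIRED_FIELDS = ("intent", "confidence", "context")

def validate_handoff(entry):
    """Return a list of problems found in one simulated-transfer log entry."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in entry]
    conf = entry.get("confidence")
    if isinstance(conf, (int, float)) and not 0.0 <= conf <= 1.0:
        problems.append("confidence out of range")
    return problems

good = {"intent": "claim_high_severity", "confidence": 0.91, "context": {"caller": "test"}}
bad = {"intent": "claim_high_severity", "confidence": 1.7}
print(validate_handoff(good))  # expect no problems
print(validate_handoff(bad))
```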
Setup Requirements
1. Prepare a test knowledge set by exporting or cloning your live knowledge base into a sandbox copy.
2. Create a sandbox voice agent instance in the Brilo AI dashboard and assign the sandbox knowledge set to it.
3. Configure test routing so inbound test numbers or simulator sessions target the sandbox instance instead of production.
4. Point any test webhooks or integrations at a non-production endpoint set up to accept validation calls.
5. Run a suite of test calls (including accent and locale variants) and review transcripts, intent confidence scores, and handoff logs to validate behavior.
For more on running representative test calls and adjusting language/locale, see: Brilo AI — How does the AI handle accents and speech variations?
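The steps above can be sketched as a small harness that runs one scripted call across locale variants and flags any result whose intent confidence falls below a review threshold. The call runner here is a stub; in a real setup it would invoke your sandbox instance and parse its actual response.

```python
# Sketch of a sandbox test suite across accent/locale variants.
# run_test_call is a stub; in practice it would hit the sandbox instance.
def run_test_call(script, locale):
    """Stand-in for a sandbox call; returns a fake transcript and confidence."""
    return {"locale": locale, "transcript": f"[{locale}] {script}", "confidence": 0.85}

def run_suite(script, locales, min_confidence=0.8):
    """Run one script across locales and collect any results needing review."""
    flagged = []
    for locale in locales:
        result = run_test_call(script, locale)
        if result["confidence"] < min_confidence:
            flagged.append(result)
    return flagged

flagged = run_suite("I need to reschedule my appointment", ["en-US", "en-GB", "es-MX"])
print(f"{len(flagged)} call(s) flagged for review")
```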
Business Outcomes
Using Brilo AI Sandbox Mode reduces deployment risk by catching regressions before they reach customers. Buyers can expect fewer live-call incidents, clearer validation workflows for compliance reviews, and faster, safer iteration on knowledge content. Sandbox testing supports staged rollouts and builds stakeholder confidence in production changes, though it does not by itself guarantee specific uptime or compliance outcomes.
FAQs
Can Sandbox Mode use real customer data?
No. Sandbox Mode should not be used with live customer data unless you have explicit legal and compliance approval. Use synthetic or anonymized test data for safe validation.
How long does a sandbox copy persist?
Sandbox retention varies by your Brilo AI plan and account settings. Consult your Brilo AI admin settings to confirm sandbox lifecycle policies before creating long-running test environments.
Will sandbox tests affect production analytics?
Sandbox tests are isolated and do not feed into production analytics unless you explicitly configure a shared reporting endpoint. Keep reporting separated to avoid polluting production metrics.
Can I promote a sandbox knowledge set to production?
Yes — after validation, you can promote tested knowledge changes to production using the Brilo AI promotion workflow or by exporting and applying the validated knowledge set to the production voice agent.
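Before promoting, it can help to diff the validated sandbox set against production so reviewers see exactly what will change. The sketch below assumes each knowledge set exports as a flat map of entry ID to answer text, which is an assumption; real export formats may differ.

```python
# Diff two knowledge sets before promotion so reviewers see what changes.
# Assumes each set exports as a flat dict of entry-id -> answer text.
def diff_knowledge(production, sandbox):
    added = sorted(set(sandbox) - set(production))
    removed = sorted(set(production) - set(sandbox))
    changed = sorted(k for k in set(production) & set(sandbox)
                     if production[k] != sandbox[k])
    return {"added": added, "removed": removed, "changed": changed}

prod = {"hours": "9-5", "refunds": "30 days"}
sand = {"hours": "9-6", "refunds": "30 days", "holidays": "closed Dec 25"}
print(diff_knowledge(prod, sand))
```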
Do sandbox tests include webhook and CRM integrations?
Sandbox Mode can call your test webhook endpoint and simulate CRM updates when configured, but it will not modify your production CRM unless you point it to production integration endpoints.
Next Step
Contact your Brilo AI account team to enable a sandbox instance and to receive recommended test scripts for healthcare or financial services scenarios.