
    AI Transparency Statement

    Last updated: April 29, 2026 · Version: 2.1

    Section 01: What Diagaxis Actually Is

    Diagaxis installs an AI-operated business infrastructure into service businesses. It is not a chatbot layer on top of a human agency. It is not a workflow tool you configure once and forget. It is an operating system — the R6 Customer Journey Operating System — where artificial intelligence handles acquisition, qualification, conversion, onboarding, and ongoing support autonomously.

    The goal of the system is sovereignty: business revenue that flows independently of any individual's daily presence — including ours.

    Voice AI Agents: Ana · Maya · Pax · Dax
    AI Architecture: R6 Customer Journey OS
    Operational Model: Autonomous close up to the $2,000 / month tier
    Human Involvement: Triggered at the $3,500+ Expansion tier only

    Section 02: The Voice AI Agents — What They Do & What They Don't

    Diagaxis operates four Voice AI agents across the customer journey. None of them are human, and none are presented as human. Every prospect or client who interacts with these agents is engaging with an autonomous AI system. This is stated clearly in our Terms of Use and disclosed in the interaction itself.

    Action | Who handles it | Tier
    Lead qualification & diagnostic | Ana — AI | All tiers
    Diagnostic deep-dive (when triggered) | Dax — AI | Triggered via Digital Front Desk + diagnostic surfaces
    Pre-core close (Digital Front Desk / Word of Mouth Loop) | Ana — AI | $297 / $497 per month
    Core tier close (Revenue Root System / Flywheel Response) | Ana — AI | $997 / month
    Continuity System close | Ana — AI | $2,000 / month
    Onboarding intake interview | Maya — AI | All paid tiers
    Ongoing client support | Pax — AI | All paid tiers
    Relationship Continuum close (multi-stakeholder, high-touch) | Founder — Human | $3,500+ trigger
    Follow-up, reactivation, retention sequences | Automated system | All tiers
    Onboarding configuration & delivery | Automated system | All tiers

    Voice AI agents operate on a 60-day recalibration cycle: confidence thresholds are reviewed and adjusted against real conversation and outcome data. This is a live system, not a static script.
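    The tier-based handoff summarized in the table above can be sketched as a simple threshold check. This is an illustrative sketch only: the agent names and dollar thresholds come from this page, but the function, its name, and its structure are hypothetical and are not Diagaxis's actual routing implementation.

```python
# Illustrative sketch of close routing by monthly tier, as described on this
# page. Agent names (Ana, Founder) and the $3,500 human trigger come from the
# table above; the function itself is hypothetical, not the real system.

HUMAN_TRIGGER = 3500  # monthly tier at which a human (the Founder) takes over

def route_close(monthly_tier: int) -> str:
    """Return who handles the close for a given monthly tier (USD)."""
    if monthly_tier >= HUMAN_TRIGGER:
        return "Founder (human)"  # Relationship Continuum: human-led by design
    return "Ana (AI)"             # Autonomous close up to the $2,000/month tier

print(route_close(997))   # Ana (AI)
print(route_close(3500))  # Founder (human)
```

    The point of the sketch is the architecture claim, not the code: the human handoff is a hard threshold, not a fallback the AI decides to invoke.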


    Section 03: What We Don't Do

    We want to be explicit about what this system is not, because AI transparency is often undermined by what companies omit rather than what they say.

    • We do not present AI agents as human. Their AI nature is part of how we introduce them — not a hidden footnote.
    • We do not make guarantees about conversion outcomes. The system increases the probability of a qualified close — it does not guarantee one.
    • We do not use AI to manufacture urgency, exploit emotional states, or push past a clear refusal. Our qualification system routes away from pressure.
    • We do not scrape, resell, or share client or prospect data with third parties for marketing purposes.
    • We do not automate grief or high-sensitivity contexts. Certain signals trigger human routing or full conversation exit — by design.

    Section 04: Aware of the Impact

    We build AI infrastructure that removes human dependency from sales and operations. We are conscious that this is a consequential thing to build — and we think about it seriously.

    We don't resolve that tension with a slogan. We hold it by making deliberate choices about where the system stops and where a human must be present.

    ⚖️ Decision Boundaries

    AI agents are not the last word on high-stakes decisions. Hard thresholds route to humans — not as a fallback, but as architecture.

    🔒 Data Stewardship

    Lead and client data processed through our system is used only to operate and improve the service — never profiled, sold, or used for unrelated targeting.

    🔁 Recalibration

    AI thresholds are reviewed on a 60-day cycle. When data shows miscalibration, the system is updated — not rationalized.

    We believe that an AI system built on dishonesty — about what it is, what it does, or what it can guarantee — eventually fails the people it was supposed to serve. Transparency is not a compliance exercise. It is part of what makes the system work.


    Section 05: How We Build

    Every component of the Diagaxis system is specified in writing before it is built. Architecture decisions are locked and versioned. AI behavior — including agent routing logic, qualification signals, and confidence thresholds — is documented and auditable.

    This is not because we expect regulators to ask. It is because a system that cannot be explained clearly should not be deployed.

    • Specification before configuration. Nothing is built in a live environment before the logic is written and validated.
    • Gate-based progression. Each phase of the system must pass defined criteria before expanding. We don't scale broken logic.
    • Niche-specific constraints. Healthcare niches (MedSpa, Chiro, Vet) carry additional routing rules that prevent automation in inappropriate contexts — grief, emergency, clinical decision-making.
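    The niche-specific constraints above can be illustrated with a short sketch. The signal names and function are hypothetical; the page states only that grief, emergency, and clinical-decision contexts trigger human routing or a full conversation exit rather than automation.

```python
# Illustrative sketch of sensitive-signal routing in healthcare niches
# (MedSpa, Chiro, Vet). Signal names and structure are hypothetical; the
# page states only that these contexts are never handled autonomously.

EXIT_SIGNALS = {"emergency"}                    # end the AI conversation entirely
HUMAN_SIGNALS = {"grief", "clinical_decision"}  # hand the conversation to a human

def handle_signal(signal: str) -> str:
    """Map a detected conversation signal to a routing action."""
    if signal in EXIT_SIGNALS:
        return "exit_conversation"
    if signal in HUMAN_SIGNALS:
        return "route_to_human"
    return "continue_ai"  # no sensitive signal: AI agent proceeds

print(handle_signal("grief"))      # route_to_human
print(handle_signal("emergency"))  # exit_conversation
```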

    Section 06: Questions & Concerns

    If you experience something inside our system that feels misaligned with what we describe here — an interaction that felt manipulative, a disclosure that was missing, a response that crossed a line — we want to know.

    We will read it, take it seriously, and respond.

    Contact us directly for AI-related concerns, ethical questions, and data requests:

    contact@diagaxis.com

    Social media: Facebook · X · LinkedIn · YouTube (@diagaxis)
