
Responsible AI by Design

Pillars, golden rules, and the data sensitivity matrix that governs every approved tool.

Our 4 AI Pillars

Human in the Loop

Every customer-facing AI output is reviewed by a qualified human before it ships.

Data Minimisation

We never send PII, financial records, or proprietary code to public models.

Transparent Use

If AI helped produce it, we say so — internally and to customers.

Continuous Learning

Every employee dedicates 1 hour per week to AI literacy. Tracked, not policed.

Golden Rules

  1. Never paste customer PII or financial data into a public model (ChatGPT, Claude, Gemini consumer).
  2. Always disclose AI use in client-facing deliverables.
  3. Verify every fact, statistic, and citation a model generates.
  4. Tools handling Tier 3+ data must be on the approved registry.
  5. Report shadow AI use to your team lead — not to HR. We learn from it.
  6. Generative AI cannot make hiring, firing, or compensation decisions.

Data Sensitivity Matrix

Tier                  | Examples                                            | Allowed tools
Tier 0 · Public       | Marketing copy, public blog drafts, press releases  | Any approved tool
Tier 1 · Internal     | Internal docs, meeting notes, project plans         | Enterprise ChatGPT, Copilot, Claude Enterprise
Tier 2 · Confidential | Strategy docs, financials, unreleased products      | Self-hosted only (Acme GPT, on-prem Llama)
Tier 3 · Restricted   | Customer PII, contracts, source code, M&A           | Self-hosted + DPO approval required

EU AI Act risk classification is mapped automatically per tool. Tier 3 changes require DPO sign-off.
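The matrix above can be read as a simple gating policy: a tool may touch data at a given tier only if it appears in that tier's registry, and Tier 3 additionally requires DPO sign-off. A minimal sketch of that check, in Python, is below. The tool identifiers and the registry dictionary are illustrative stand-ins, not the real approved registry.

```python
# Hypothetical sketch of the tier-gating check described in the matrix.
# Tool IDs and the registry contents are illustrative assumptions.

ALLOWED_TOOLS = {
    1: {"chatgpt-enterprise", "copilot", "claude-enterprise"},  # Tier 1 · Internal
    2: {"acme-gpt", "onprem-llama"},   # Tier 2 · Confidential: self-hosted only
    3: {"acme-gpt", "onprem-llama"},   # Tier 3 · Restricted: self-hosted + DPO approval
}

def is_tool_allowed(tool: str, tier: int, dpo_approved: bool = False) -> bool:
    """Return True if `tool` may process data at `tier` under the matrix."""
    if tier == 0:
        return True                    # Tier 0 · Public: any approved tool
    if tool not in ALLOWED_TOOLS.get(tier, set()):
        return False                   # not on the registry for this tier
    if tier >= 3 and not dpo_approved:
        return False                   # Tier 3 changes require DPO sign-off
    return True

print(is_tool_allowed("copilot", 1))                       # True
print(is_tool_allowed("copilot", 2))                       # False: not self-hosted
print(is_tool_allowed("acme-gpt", 3))                      # False: no DPO approval
print(is_tool_allowed("acme-gpt", 3, dpo_approved=True))   # True
```

In practice the registry and the DPO-approval flag would come from the approved-tool system rather than a hard-coded dictionary; the sketch only shows the shape of the rule.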