This guide will help you deploy a GLACIS sidecar and start generating compliance attestations in under 10 minutes.
Before you begin, ensure you have:

- A GLACIS Dashboard account (you will create an organization and API key below)
- An API key for your AI provider (this guide uses OpenAI)
- Node.js with npm, pnpm, or yarn installed, or Docker if you prefer containerized deployment
Create a GLACIS Organization
Log into the GLACIS Dashboard and create a new organization. Note your:
- Organization ID (org_...)
- API Key (glc_...)

Install the Sidecar
```bash
# npm
npm install @glacis/sidecar

# pnpm
pnpm add @glacis/sidecar

# yarn
yarn add @glacis/sidecar
```

Configure Environment Variables
Create a .env file in your project root:
```bash
# GLACIS Configuration
GLACIS_ORG_ID=org_your_org_id
GLACIS_API_KEY=glc_your_api_key

# AI Provider (example: OpenAI)
OPENAI_API_KEY=sk-your-openai-key
```

Create the Sidecar Configuration
Create glacis.config.ts:
```typescript
import { defineConfig } from '@glacis/sidecar';

export default defineConfig({
  orgId: process.env.GLACIS_ORG_ID!,
  apiKey: process.env.GLACIS_API_KEY!,

  provider: {
    type: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
  },

  sampling: {
    rate: 100, // 1 in 100 requests get L2 attestation
    policies: ['toxicity', 'pii'],
  },
});
```

Update Your AI Client
Replace your direct AI provider calls with the GLACIS-wrapped client:
```typescript
// Before: Direct OpenAI call
import OpenAI from 'openai';

const client = new OpenAI();
```

```typescript
// After: GLACIS-wrapped client
import { createGlacisClient } from '@glacis/sidecar';

const client = createGlacisClient({
  provider: 'openai',
});

// Usage remains the same
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

Verify Attestations
Make a test request and check the GLACIS dashboard:
```typescript
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'What is GLACIS?' }],
});

console.log('Response:', response.choices[0].message.content);
console.log('Attestation ID:', response._glacis?.attestationId);
```

After making your first request, navigate to the GLACIS Dashboard to confirm the attestation appears.
If you prefer containerized deployment:
```bash
# Pull the sidecar image
docker pull ghcr.io/glacis-io/sidecar:latest

# Run with environment variables
docker run -d \
  -e GLACIS_ORG_ID=org_your_org_id \
  -e GLACIS_API_KEY=glc_your_api_key \
  -e OPENAI_API_KEY=sk-your-openai-key \
  -p 8080:8080 \
  ghcr.io/glacis-io/sidecar:latest
```

Then configure your application to use http://localhost:8080 as the AI provider endpoint.
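For example, if your application uses the OpenAI Node SDK, you can point it at the sidecar by overriding the client's base URL. This is a minimal sketch under the assumption that the sidecar exposes an OpenAI-compatible API on port 8080; the exact base path (such as a /v1 suffix) may differ in your deployment.

```typescript
import OpenAI from 'openai';

// Route requests through the local GLACIS sidecar instead of api.openai.com.
// Assumption: the sidecar proxies an OpenAI-compatible API; adjust the path
// if your deployment expects a different prefix.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'http://localhost:8080/v1',
});

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello via the sidecar!' }],
});

console.log(response.choices[0].message.content);
```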
When you make an AI request through the GLACIS sidecar:
```
┌──────────────────────────────────────────────────────────┐
│ 1. Request received by sidecar                            │
├──────────────────────────────────────────────────────────┤
│ 2. Sidecar obtains bearer token from witness service      │
│    (establishes epoch binding)                            │
├──────────────────────────────────────────────────────────┤
│ 3. Request forwarded to AI provider (OpenAI, Anthropic)   │
├──────────────────────────────────────────────────────────┤
│ 4. Response received from AI provider                     │
├──────────────────────────────────────────────────────────┤
│ 5. Sidecar generates attestation:                         │
│    • L0 (always): Request metadata + commitment           │
│    • L2 (if sampled): Full evidence + policy scores       │
├──────────────────────────────────────────────────────────┤
│ 6. Attestation sent to receipt service                    │
│    (Merkle tree inclusion proof returned)                 │
├──────────────────────────────────────────────────────────┤
│ 7. Response returned to your application                  │
│    (with optional attestation metadata)                   │
└──────────────────────────────────────────────────────────┘
```

Once attestations flow into GLACIS, they automatically map to ISO 42001 controls:
| Control | Evidence Type | Auto-Mapped GLACIS Evidence |
|---|---|---|
| A.6.2.6 | AI system monitoring | Request/response attestations |
| A.6.2.8 | Performance tracking | Latency and error metrics |
| A.7.5 | Data quality | Input validation scores |
| A.9.4 | User monitoring | Usage pattern analysis |
Check the Controls page in the dashboard to see evidence automatically linked to compliance requirements.
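To make the L0/L2 distinction in the request flow above more concrete, here is a hypothetical TypeScript sketch of what an attestation record could contain. The field names are illustrative assumptions, not the actual GLACIS schema; only the underlying concepts (attestation ID, request metadata, commitment, Merkle inclusion proof, policy scores) come from this guide.

```typescript
// Illustrative only: hypothetical field names, not the official GLACIS schema.
interface L0Attestation {
  attestationId: string;      // surfaced to clients as response._glacis?.attestationId
  level: 'L0';
  requestMetadata: {          // request metadata only; no prompt or completion text
    model: string;
    timestamp: string;
  };
  commitment: string;         // cryptographic commitment to the request/response pair
  inclusionProof?: string[];  // Merkle tree inclusion proof from the receipt service
}

interface L2Attestation extends Omit<L0Attestation, 'level'> {
  level: 'L2';
  evidence: {                 // full evidence captured only for sampled requests
    policyScores: Record<'toxicity' | 'pii', number>;
  };
}
```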
Run the Certification Wizard
Complete the AI-powered interview to bootstrap your full ISO 42001 compliance program.
Configure Sampling
Learn how to tune L2 sampling rates and enable additional policy checks.
Deploy to Production
Choose a production deployment: Cloudflare Workers, Cloud Run, Lambda, or Kubernetes.
Understand Attestations
Deep dive into the cryptographic details of L0 and L2 attestations.
Receipt service: https://receipts.glacis.io

GLACIS applies rate limits per organization.
Contact support to increase limits for production workloads.