Cloudflare Workers is the recommended deployment platform for GLACIS sidecars. With 300+ edge locations, sub-millisecond cold starts, and native integration with GLACIS services, it provides the best performance for global deployments.
Global Edge
Deploy to 300+ locations worldwide. Requests are processed at the edge location closest to your users.
Zero Cold Start
Sub-millisecond cold starts mean no latency penalty, even for infrequent requests.
Native Integration
GLACIS witness and receipt services run on Cloudflare, minimizing network hops.
Cost Effective
Pay per request with generous free tier. No idle compute costs.
Before starting, ensure you have:

- A Cloudflare account
- Node.js and npm installed
- Your GLACIS API key and organization ID
- An API key for your AI provider (for example, OpenAI)
1. Create a new Worker project:

```bash
npm create cloudflare@latest glacis-sidecar -- --template https://github.com/glacis-io/sidecar-cf-template
cd glacis-sidecar
```

2. Install dependencies:

```bash
npm install
```

3. Configure secrets:

```bash
# GLACIS credentials
npx wrangler secret put GLACIS_API_KEY
# Enter your API key when prompted

npx wrangler secret put GLACIS_ORG_ID
# Enter your organization ID

# AI provider credentials
npx wrangler secret put OPENAI_API_KEY
# Enter your OpenAI API key
```

4. Update wrangler.toml:

```toml
name = "glacis-sidecar"
main = "src/index.ts"
compatibility_date = "2024-01-01"

[vars]
GLACIS_RECEIPT_URL = "https://receipts.glacis.io"
GLACIS_WITNESS_URL = "https://witness.glacis.io"
SAMPLING_RATE = "100"
```

5. Deploy:
```bash
npx wrangler deploy
```

For reference, a complete wrangler.toml with all options:

```toml
name = "glacis-sidecar"
main = "src/index.ts"
compatibility_date = "2024-01-01"

# Account configuration
account_id = "your-account-id"

# Environment variables
[vars]
GLACIS_RECEIPT_URL = "https://receipts.glacis.io"
GLACIS_WITNESS_URL = "https://witness.glacis.io"
SAMPLING_RATE = "100"
POLICIES = "toxicity,pii"

# Routes (optional - for custom domains)
routes = [
  { pattern = "ai.yourdomain.com/*", zone_name = "yourdomain.com" }
]

# Durable Objects (for state management)
[[durable_objects.bindings]]
name = "TOKEN_CACHE"
class_name = "TokenCache"

[[migrations]]
tag = "v1"
new_classes = ["TokenCache"]
```

The Worker entry point (src/index.ts) wires the configuration together:
```typescript
import { GlacisSidecar, type SidecarConfig, type PolicyType } from '@glacis/sidecar-cf';

export interface Env {
  GLACIS_API_KEY: string;
  GLACIS_ORG_ID: string;
  OPENAI_API_KEY: string;
  GLACIS_RECEIPT_URL: string;
  GLACIS_WITNESS_URL: string;
  SAMPLING_RATE: string;
  POLICIES: string;
  TOKEN_CACHE: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const config: SidecarConfig = {
      orgId: env.GLACIS_ORG_ID,
      apiKey: env.GLACIS_API_KEY,
      provider: {
        type: 'openai',
        apiKey: env.OPENAI_API_KEY,
      },
      sampling: {
        rate: parseInt(env.SAMPLING_RATE, 10),
        policies: env.POLICIES.split(',') as PolicyType[],
      },
      services: {
        receipt: env.GLACIS_RECEIPT_URL,
        witness: env.GLACIS_WITNESS_URL,
      },
    };

    const sidecar = new GlacisSidecar(config, {
      tokenCache: env.TOKEN_CACHE,
      waitUntil: ctx.waitUntil.bind(ctx),
    });

    return sidecar.handleRequest(request);
  },
};

// Durable Object for token caching
export { TokenCache } from '@glacis/sidecar-cf';
```
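Once deployed, you can smoke-test the sidecar end to end. The snippet below is a sketch; the workers.dev hostname is an assumption, so substitute your own subdomain or the custom route from wrangler.toml.

```typescript
// Smoke test: call the deployed sidecar exactly as you would call the provider.
// The hostname is an assumption - replace it with your workers.dev subdomain
// or custom route.
const res = await fetch(
  'https://glacis-sidecar.your-subdomain.workers.dev/v1/chat/completions',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'gpt-4',
      messages: [{ role: 'user', content: 'Hello' }],
    }),
  },
);
console.log(res.status, await res.json());
```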
Route specific paths to different AI providers:

```typescript
const sidecar = new GlacisSidecar(config, {
  routes: [
    {
      path: '/v1/chat/completions',
      provider: 'openai',
      model: 'gpt-4',
    },
    {
      path: '/v1/messages',
      provider: 'anthropic',
      model: 'claude-3-opus',
    },
  ],
});
```
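With these routes, which upstream a request reaches is decided by its path alone, so one sidecar hostname can serve both providers. A sketch of a client doing this fan-out, assuming the custom domain from wrangler.toml and the providers' usual request shapes:

```typescript
// Both calls go through the same sidecar; the path picks the provider.
const viaOpenAI = await fetch('https://ai.yourdomain.com/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello' }],
  }),
});

const viaAnthropic = await fetch('https://ai.yourdomain.com/v1/messages', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'claude-3-opus',
    max_tokens: 256, // required by the Anthropic Messages API
    messages: [{ role: 'user', content: 'Hello' }],
  }),
});
```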
Configure rules-based sampling:

```typescript
const config: SidecarConfig = {
  // ... other config
  sampling: {
    rate: 100, // Default: 1 in 100
    rules: [
      {
        // Sample all GPT-4 requests
        condition: { field: 'model', operator: 'eq', value: 'gpt-4' },
        rate: 1,
      },
      {
        // Sample more for long prompts
        condition: { field: 'content_length', operator: 'gt', value: 1000 },
        rate: 10,
      },
    ],
  },
};
```
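How rules are resolved is not spelled out above, so the sketch below assumes first-match-wins semantics with the default rate as a fallback, where a rate of N means roughly one request in N is sampled:

```typescript
// Hypothetical illustration of rule resolution (assumed semantics, not
// confirmed by the library): the first matching rule wins, otherwise the
// default rate applies.
type Condition = { field: string; operator: 'eq' | 'gt'; value: string | number };
type SamplingRule = { condition: Condition; rate: number };

function matches(c: Condition, fields: Record<string, string | number>): boolean {
  const v = fields[c.field];
  return c.operator === 'eq' ? v === c.value : Number(v) > Number(c.value);
}

function resolveRate(
  defaultRate: number,
  rules: SamplingRule[],
  fields: Record<string, string | number>,
): number {
  return rules.find(r => matches(r.condition, fields))?.rate ?? defaultRate;
}

const rules: SamplingRule[] = [
  { condition: { field: 'model', operator: 'eq', value: 'gpt-4' }, rate: 1 },
  { condition: { field: 'content_length', operator: 'gt', value: 1000 }, rate: 10 },
];

resolveRate(100, rules, { model: 'gpt-4', content_length: 500 });    // → 1
resolveRate(100, rules, { model: 'gpt-3.5', content_length: 2000 }); // → 10
resolveRate(100, rules, { model: 'gpt-3.5', content_length: 100 });  // → 100
```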
Add organization-specific policy evaluations:

```typescript
const config: SidecarConfig = {
  // ... other config
  policies: {
    builtin: ['toxicity', 'pii', 'bias'],
    custom: [
      {
        name: 'compliance_keywords',
        evaluate: async (request, response) => {
          const keywords = ['confidential', 'restricted'];
          const hasKeyword = keywords.some(k =>
            response.content.toLowerCase().includes(k)
          );
          return {
            score: hasKeyword ? 1.0 : 0.0,
            flagged: hasKeyword,
            metadata: { keywords: hasKeyword ? keywords : [] },
          };
        },
      },
    ],
  },
};
```
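Following the same evaluate signature, a hypothetical second policy (the name and pattern are illustrative, not part of the GLACIS API) could flag responses containing email-like strings:

```typescript
// Hypothetical custom policy reusing the evaluate() shape shown above.
const emailLeakPolicy = {
  name: 'email_leak',
  evaluate: async (_request: unknown, response: { content: string }) => {
    // Simple illustrative pattern; a production policy would be more careful.
    const emails = response.content.match(/[\w.+-]+@[\w-]+\.[\w.]+/g) ?? [];
    return {
      score: emails.length > 0 ? 1.0 : 0.0,
      flagged: emails.length > 0,
      metadata: { matches: emails.length },
    };
  },
};
```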
View real-time metrics for the Worker in the Cloudflare dashboard. Attestations flow to your GLACIS dashboard, where each record carries source: cloudflare-worker.

Enable structured logging for debugging:
```typescript
const config: SidecarConfig = {
  // ... other config
  logging: {
    level: 'info', // 'debug' | 'info' | 'warn' | 'error'
    format: 'json',
    includeRequestId: true,
  },
};
```

View logs in Cloudflare:
```bash
npx wrangler tail
```

Use Durable Objects to cache bearer tokens:
```typescript
// Token is refreshed 5 minutes before expiration
// Cached across all Worker instances in the same region
const tokenCache = env.TOKEN_CACHE.get(
  env.TOKEN_CACHE.idFromName('default')
);
```
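Purely as an illustration of the pattern (the library ships its own TokenCache, so none of this is its actual implementation), a Durable Object token cache might look like:

```typescript
// Illustrative sketch only: one named Durable Object instance holds the token
// and refreshes it shortly before expiry, so every Worker isolate shares a
// single refresh instead of each fetching its own token.
interface CachedToken { value: string; expiresAt: number }

export class SketchTokenCache {
  private token: CachedToken | null = null;

  async fetch(_request: Request): Promise<Response> {
    const marginMs = 5 * 60 * 1000; // refresh 5 minutes before expiration
    if (!this.token || Date.now() >= this.token.expiresAt - marginMs) {
      this.token = await this.obtainToken();
    }
    return Response.json({ token: this.token.value });
  }

  // Hypothetical helper: exchanging credentials for a bearer token is the
  // library's job; its endpoint and protocol are not documented here.
  private async obtainToken(): Promise<CachedToken> {
    throw new Error('sketch only - see @glacis/sidecar-cf for the real logic');
  }
}
```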
For high-volume deployments, batch attestations (a sketch of plausible flush semantics follows the list below):

```typescript
const config: SidecarConfig = {
  // ... other config
  batching: {
    enabled: true,
    maxSize: 100,
    maxWaitMs: 1000,
  },
};
```

The sidecar automatically reuses connections to:

- the configured AI provider
- the GLACIS receipt service
- the GLACIS witness service
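For intuition about the batching settings above, here is a minimal sketch of one plausible flush policy; the semantics are an assumption, not confirmed library behavior:

```typescript
// Assumed policy: flush when maxSize items accumulate, or when maxWaitMs has
// elapsed since the first buffered item, whichever comes first.
class AttestationBatcher<T> {
  private buffer: T[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private maxSize: number,
    private maxWaitMs: number,
    private flushFn: (items: T[]) => Promise<void>,
  ) {}

  add(item: T): void {
    this.buffer.push(item);
    if (this.buffer.length >= this.maxSize) {
      void this.flush(); // size threshold reached: flush immediately
    } else if (this.timer === null) {
      // first item in an empty buffer: start the wait clock
      this.timer = setTimeout(() => void this.flush(), this.maxWaitMs);
    }
  }

  private async flush(): Promise<void> {
    if (this.timer !== null) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    const items = this.buffer.splice(0, this.buffer.length);
    if (items.length > 0) await this.flushFn(items);
  }
}
```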
Never expose secrets in code or config:
```bash
# Add secrets via Wrangler CLI
npx wrangler secret put GLACIS_API_KEY
npx wrangler secret put OPENAI_API_KEY

# Or via dashboard:
# Workers & Pages → Your Worker → Settings → Variables
```

Restrict Worker access by IP:
```typescript
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Assumes ALLOWED_IPS is added to Env as a comma-separated string.
    const clientIP = request.headers.get('CF-Connecting-IP');

    // Split before matching so "10.0.0.1" does not match "10.0.0.11".
    if (!clientIP || !env.ALLOWED_IPS.split(',').includes(clientIP)) {
      return new Response('Forbidden', { status: 403 });
    }

    // ... continue with sidecar
  },
};
```

Use Cloudflare’s built-in rate limiting:
```toml
[[rules]]
type = "RateLimit"
expression = "true"
action = "block"
characteristics = ["cf.colo.id", "ip.src"]
period = 60
requests_per_period = 1000
```

“Invalid API key” error
```bash
# Verify secret is set
npx wrangler secret list

# Re-set if needed
npx wrangler secret put GLACIS_API_KEY
```

High latency
Missing attestations

Check the Worker logs:

```bash
npx wrangler tail
```

Enable verbose logging:
```bash
# Deploy with debug logging
LOGGING_LEVEL=debug npx wrangler deploy
```

Migrating an existing client is a one-line change: point it at the sidecar instead of the provider.

```typescript
// Before: Direct OpenAI call
const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello' }],
  }),
});
```
```typescript
// After: GLACIS sidecar (deployed at ai.yourdomain.com)
const response = await fetch('https://ai.yourdomain.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    // No API key needed - sidecar handles authentication
  },
  body: JSON.stringify({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello' }],
  }),
});
```
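If your clients use the official openai SDK rather than raw fetch, the same migration is a baseURL override. This sketch assumes the sidecar ignores the client-supplied key, since it injects the real provider key server-side:

```typescript
import OpenAI from 'openai';

// Point the SDK at the sidecar instead of api.openai.com.
// The apiKey value is a placeholder under the assumption above.
const client = new OpenAI({
  apiKey: 'placeholder',
  baseURL: 'https://ai.yourdomain.com/v1',
});

const completion = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello' }],
});
console.log(completion.choices[0].message.content);
```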
Next steps:

Configure Sampling
Tune L2 sampling rates and enable additional policies.
View Attestations
Monitor attestations in the GLACIS dashboard.