# Google Cloud Run Deployment
Deploy GLACIS sidecars on Google Cloud Run for container-native, auto-scaling attestation infrastructure.
## Why Cloud Run?
- **Container-native**: Deploy any Docker container
- **Auto-scaling**: Scales to zero when idle, scales up under load
- **Regional deployment**: Choose your GCP region
- **Managed infrastructure**: No servers to manage
## Prerequisites
- Google Cloud SDK installed
- GCP project with billing enabled
- GLACIS organization and API key
## Quick Start

1. **Clone the sidecar template**

   ```sh
   git clone https://github.com/glacis-io/sidecar-cloudrun-template
   cd sidecar-cloudrun-template
   ```

2. **Configure secrets**

   ```sh
   # Create secrets in Secret Manager
   echo -n "glc_your_api_key" | gcloud secrets create glacis-api-key --data-file=-
   echo -n "sk-your-openai-key" | gcloud secrets create openai-api-key --data-file=-
   ```

3. **Deploy to Cloud Run**

   Note that repeated `--set-secrets` flags override each other in `gcloud`, so both secrets must be passed as one comma-separated list:

   ```sh
   gcloud run deploy glacis-sidecar \
     --source . \
     --region us-central1 \
     --set-env-vars GLACIS_ORG_ID=org_your_org_id \
     --set-secrets GLACIS_API_KEY=glacis-api-key:latest,OPENAI_API_KEY=openai-api-key:latest \
     --allow-unauthenticated
   ```

4. **Get the service URL**

   ```sh
   gcloud run services describe glacis-sidecar --region us-central1 --format 'value(status.url)'
   ```
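Once deployed, clients can send requests through the sidecar at the service URL. A minimal client sketch is below; it assumes the sidecar proxies an OpenAI-compatible `/v1/chat/completions` endpoint — the path, env var name (`GLACIS_SIDECAR_URL`), and model name are assumptions for illustration, so check the template's README for the actual interface.

```typescript
// Hypothetical shape of a chat request the sidecar would proxy.
interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
}

// Build fetch options separately so they can be inspected or reused.
function buildChatRequest(body: ChatRequest): RequestInit {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  };
}

// Send a prompt through the deployed sidecar (endpoint path is assumed).
async function chat(prompt: string): Promise<Response> {
  const serviceUrl = process.env.GLACIS_SIDECAR_URL ?? "";
  return fetch(`${serviceUrl}/v1/chat/completions`, buildChatRequest({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
  }));
}
```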
## Dockerfile

```dockerfile
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
ENV PORT=8080
EXPOSE 8080
CMD ["node", "dist/index.js"]
```

## Configuration
```ts
export const config = {
  orgId: process.env.GLACIS_ORG_ID!,
  apiKey: process.env.GLACIS_API_KEY!,
  provider: {
    type: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
  },
  sampling: {
    rate: parseInt(process.env.SAMPLING_RATE || '100'),
  },
};
```
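The `sampling.rate` value is a percentage: `100` attests every request, `0` attests none. A minimal sketch of how such a rate might be applied per request — illustrative only, as the sidecar's actual sampling logic may differ:

```typescript
// Decide whether to attest a request given a percentage rate (0–100).
// The `roll` parameter is injectable for deterministic testing; in
// production it defaults to a uniform random draw in [0, 100).
function shouldAttest(rate: number, roll: number = Math.random() * 100): boolean {
  return roll < rate;
}
```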
## Performance

| Metric | Value |
|---|---|
| Cold start | ~200 ms |
| Request overhead | ~20 ms |
| Memory | 256 MB (default) |
| Concurrency | 80 requests/instance |
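The concurrency figure lets you estimate instance counts with Little's law: instances ≈ ⌈RPS × average latency ÷ 80⌉. A small helper, using illustrative numbers (your real latency depends on the upstream provider):

```typescript
// Rough instance estimate: concurrent requests in flight (rps * latency)
// divided by the per-instance concurrency limit, rounded up.
function instancesNeeded(rps: number, avgLatencySec: number, concurrency = 80): number {
  return Math.ceil((rps * avgLatencySec) / concurrency);
}

// e.g. 400 req/s at 1 s average latency needs about 5 instances
```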
## Next Steps
- Configuration Reference - Full configuration options
- Attestation Overview - Understand attestation flow