Google Cloud Run Deployment

Deploy GLACIS sidecars on Google Cloud Run for container-native, auto-scaling attestation infrastructure.

Why Cloud Run?

  • Container-native: Deploy any Docker container
  • Auto-scaling: Scale to zero when idle, scale up under load
  • Regional deployment: Choose your GCP region
  • Managed infrastructure: No servers to manage

Prerequisites

  • Google Cloud SDK installed
  • GCP project with billing enabled (the sketch below shows enabling the required APIs)
  • GLACIS organization and API key
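
If the project has not used Cloud Run before, the required APIs should be enabled first. A minimal sketch, using a placeholder project ID; Cloud Build and Artifact Registry are included because gcloud run deploy --source builds the container for you:

# Authenticate and point gcloud at the target project (placeholder ID)
gcloud auth login
gcloud config set project your-project-id

# Enable the services this guide relies on
gcloud services enable \
  run.googleapis.com \
  secretmanager.googleapis.com \
  cloudbuild.googleapis.com \
  artifactregistry.googleapis.com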

Quick Start

  1. Clone the sidecar template

    git clone https://github.com/glacis-io/sidecar-cloudrun-template
    cd sidecar-cloudrun-template
  2. Configure secrets

    # Create secrets in Secret Manager
    echo -n "glc_your_api_key" | gcloud secrets create glacis-api-key --data-file=-
    echo -n "sk-your-openai-key" | gcloud secrets create openai-api-key --data-file=-
  3. Deploy to Cloud Run

    gcloud run deploy glacis-sidecar \
      --source . \
      --region us-central1 \
      --set-env-vars GLACIS_ORG_ID=org_your_org_id \
      --set-secrets GLACIS_API_KEY=glacis-api-key:latest,OPENAI_API_KEY=openai-api-key:latest \
      --allow-unauthenticated
  4. Get service URL

    gcloud run services describe glacis-sidecar --region us-central1 --format 'value(status.url)'
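
The --set-secrets flags in step 3 only take effect if the service's runtime service account is allowed to read the secrets created in step 2. A sketch of granting that access, assuming the service runs as the default Compute Engine service account; PROJECT_NUMBER is a placeholder, and a dedicated service account should be substituted if you use one:

# Allow the runtime service account to read each secret
for secret in glacis-api-key openai-api-key; do
  gcloud secrets add-iam-policy-binding "$secret" \
    --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"
done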

Dockerfile

FROM node:20-slim
WORKDIR /app
COPY package*.json ./
# Install production dependencies only
RUN npm ci --omit=dev
COPY . .
# Cloud Run injects PORT at runtime; 8080 is its default
ENV PORT=8080
EXPOSE 8080
CMD ["node", "dist/index.js"]

Configuration

src/config.ts
export const config = {
  orgId: process.env.GLACIS_ORG_ID!,
  apiKey: process.env.GLACIS_API_KEY!,
  provider: {
    type: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
  },
  sampling: {
    rate: parseInt(process.env.SAMPLING_RATE || '100', 10),
  },
};
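
Environment-driven settings such as the sampling rate can be changed without redeploying the container. A sketch, assuming SAMPLING_RATE is interpreted as a percentage (the config above defaults it to 100):

# Lower the sampling rate on the running service; this creates a new revision
gcloud run services update glacis-sidecar \
  --region us-central1 \
  --update-env-vars SAMPLING_RATE=25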

Performance

Metric                Value
Cold start            ~200ms
Request overhead      ~20ms
Memory                256MB default
Concurrency           80 requests/instance
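
Memory and concurrency can be set explicitly, and a minimum instance count avoids the cold start entirely. A sketch that pins the defaults from the table and keeps one warm instance; the numbers are starting points, not recommendations:

# Match the defaults above and keep one instance warm to avoid cold starts
gcloud run services update glacis-sidecar \
  --region us-central1 \
  --memory 256Mi \
  --concurrency 80 \
  --min-instances 1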

Next Steps