
# Configuration

Glacis uses a `glacis.yaml` configuration file (v1.3 format) to define policy metadata, input/output controls, sampling rates, judge thresholds, attestation mode, and evidence storage. The SDK auto-loads `./glacis.yaml` from your working directory when used with integrations.

```yaml
version: "1.3"

# --- Policy metadata (included in attestations) ---
policy:
  id: "hipaa-safe-harbor"
  version: "1.0"
  environment: "production"
  tags: ["healthcare", "hipaa"]

# --- Input/output controls ---
controls:
  output_block_action: "block"   # "block" or "forward"
  input:
    pii_phi:
      enabled: true
      model: "presidio"          # Detection engine
      mode: "fast"               # "fast" (regex) or "full" (regex + NER)
      entities: ["US_SSN", "EMAIL_ADDRESS"]  # Empty = all HIPAA entities
      if_detected: "flag"        # "forward", "flag", or "block"
    word_filter:
      enabled: true
      entities: ["confidential", "proprietary"]
      if_detected: "flag"
    jailbreak:
      enabled: true
      model: "prompt_guard_22m"  # or "prompt_guard_86m"
      threshold: 0.5             # Classification threshold (0-1)
      if_detected: "block"
  output:
    pii_phi:
      enabled: true
      model: "presidio"
      mode: "fast"
      entities: []
      if_detected: "flag"
    word_filter:
      enabled: true
      entities: ["system prompt", "secret"]
      if_detected: "flag"
    jailbreak:
      enabled: false

# --- Sampling tiers ---
sampling:
  l1_rate: 1.0  # Probability of L1 evidence collection (0.0-1.0)
  l2_rate: 0.0  # Probability of L2 deep inspection (0.0-1.0, must be ≤ l1_rate)

# --- Judge pipeline thresholds ---
judges:
  max_score: 3.0
  consensus_threshold: 1.0
  uphold_threshold: 2.0
  borderline_threshold: 1.0
  score_precision: 4

# --- Attestation settings ---
attestation:
  offline: true  # true = offline (local signing), false = online (server-witnessed)
  service_id: "my-service"

# --- Evidence storage ---
evidence_storage:
  backend: "sqlite"            # "sqlite" or "json"
  path: "~/.glacis/glacis.db"  # For sqlite: full .db file path; for json: directory for .jsonl files
```

## policy

Policy metadata is included in every attestation for audit traceability.

| Field | Type | Default | Description |
|---|---|---|---|
| `id` | `str` | `"default"` | Policy identifier (e.g., `"hipaa-safe-harbor"`) |
| `version` | `str` | `"1.0"` | Policy version |
| `environment` | `str` | `"development"` | Environment name (e.g., `"production"`, `"staging"`) |
| `tags` | `list[str]` | `[]` | Custom tags for filtering and grouping |

## controls

Controls run on input text (before the LLM call) and output text (after the LLM call). Each control can be independently enabled and configured per stage.

The `output_block_action` setting determines what the caller sees when an output-stage control blocks:

| Value | Behavior |
|---|---|
| `"block"` | Raises `GlacisBlockedError`; the LLM response is withheld from the caller |
| `"forward"` | Returns the LLM response but marks the determination as `"blocked"` in the attestation |
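The distinction can be sketched in plain Python. This is an illustrative model only — the exception class name comes from the table above, but the determination shape and `deliver` helper are simplified stand-ins, not the SDK's internals:

```python
class GlacisBlockedError(Exception):
    """Raised when output_block_action is "block" and a control fires."""

def deliver(response: str, blocked: bool, output_block_action: str):
    """Illustrative dispatch showing what the caller sees for each setting."""
    determination = "blocked" if blocked else "ok"
    if blocked and output_block_action == "block":
        # "block": the response never reaches the caller
        raise GlacisBlockedError(f"determination={determination}")
    # "forward": the response is returned, but the attestation still
    # records the "blocked" determination for auditors
    return response, determination

text, det = deliver("hello", blocked=True, output_block_action="forward")
assert (text, det) == ("hello", "blocked")
```

Either way, the attestation records what the controls decided; the setting only changes whether the caller receives the response.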

Both stages support the same three controls: `pii_phi`, `word_filter`, and `jailbreak`.

### pii_phi — PII/PHI Detection

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | `bool` | `false` | Enable PII/PHI scanning |
| `model` | `str` | `"presidio"` | Detection engine identifier |
| `mode` | `"fast"` \| `"full"` | `"fast"` | `"fast"` = regex only, `"full"` = regex + NER model |
| `entities` | `list[str]` | `[]` | Entity types to scan for (e.g., `"US_SSN"`, `"EMAIL_ADDRESS"`). Empty = all HIPAA entities |
| `if_detected` | `"forward"` \| `"flag"` \| `"block"` | `"flag"` | Action when PII/PHI is detected |
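To make the `"fast"` / `entities` semantics concrete, here is a toy regex-only scanner in the spirit of fast mode. The real engine is Presidio; these two patterns are illustrative, not the production detectors:

```python
import re

# Toy regex detectors for two of the entity types above
# (illustrative only; Presidio's real recognizers are far richer).
PATTERNS = {
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL_ADDRESS": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text: str, entities: list[str]) -> list[str]:
    """Return the entity types detected in text.

    An empty entities list means "scan for everything", mirroring the
    config semantics (empty = all HIPAA entities).
    """
    targets = entities or list(PATTERNS)
    return [e for e in targets if PATTERNS[e].search(text)]

hits = scan("Reach me at jane@example.com, SSN 123-45-6789",
            ["US_SSN", "EMAIL_ADDRESS"])
# hits == ["US_SSN", "EMAIL_ADDRESS"]
```

`"full"` mode additionally runs an NER model over the text, which catches entities (names, addresses) that regexes alone cannot.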

### word_filter — Keyword Matching

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | `bool` | `false` | Enable word filter |
| `entities` | `list[str]` | `[]` | Literal terms to match (case-insensitive) |
| `if_detected` | `"forward"` \| `"flag"` \| `"block"` | `"flag"` | Action when a term is matched |
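Case-insensitive literal matching amounts to the following sketch (the SDK's actual implementation may differ, e.g. in how it reports match positions):

```python
def word_filter(text: str, entities: list[str]) -> list[str]:
    """Return the configured terms found in text, ignoring case."""
    lowered = text.lower()
    return [term for term in entities if term.lower() in lowered]

matched = word_filter("This document is CONFIDENTIAL.",
                      ["confidential", "proprietary"])
# matched == ["confidential"]
```

Note that matching is substring-based on literal terms, so multi-word entries like `"system prompt"` work as expected.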

### jailbreak — Prompt Injection Detection

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | `bool` | `false` | Enable jailbreak detection |
| `model` | `str` | `"prompt_guard_22m"` | Detection model: `"prompt_guard_22m"` or `"prompt_guard_86m"` |
| `threshold` | `float` | `0.5` | Classification threshold (0.0 to 1.0) |
| `if_detected` | `"forward"` \| `"flag"` \| `"block"` | `"flag"` | Action when jailbreak is detected |

## sampling

Controls the probability of promoting attestations to higher tiers. Sampling is deterministic and auditor-reproducible via HMAC-SHA256.

| Field | Type | Default | Constraint | Description |
|---|---|---|---|---|
| `l1_rate` | `float` | `1.0` | 0.0 – 1.0 | Probability of L1 sampling (evidence collection). `1.0` = collect all |
| `l2_rate` | `float` | `0.0` | 0.0 – 1.0, must be ≤ `l1_rate` | Probability of L2 sampling (deep inspection). `0.0` = disabled |

The three tiers:

- **L0**: Control plane results only (always collected)
- **L1**: Evidence collection — input/output payloads retained locally
- **L2**: Deep inspection — flagged for judge evaluation (implies L1). Judges must be run separately via `JudgeRunner`
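Deterministic HMAC-based sampling can be sketched as follows. The key handling and message derivation here are assumptions for illustration — the doc only states that HMAC-SHA256 is used — but the core idea holds: the same key and request identifier always yield the same decision, so an auditor holding the key can re-derive every sampling outcome:

```python
import hashlib
import hmac

def sample_decision(key: bytes, request_id: str, rate: float) -> bool:
    """Deterministic coin flip via HMAC-SHA256 (illustrative sketch).

    Maps the HMAC digest to a uniform value in [0, 1) and compares it
    to the configured rate. Identical inputs always give the same
    answer, which is what makes the decision auditor-reproducible.
    """
    digest = hmac.new(key, request_id.encode(), hashlib.sha256).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return u < rate

key = b"audit-key"
assert sample_decision(key, "req-42", 1.0)   # l1_rate=1.0 samples everything
assert not sample_decision(key, "req-42", 0.0)  # rate 0.0 samples nothing
# Reproducible: same inputs, same decision, every time
assert sample_decision(key, "req-42", 0.5) == sample_decision(key, "req-42", 0.5)
```

Comparing the same uniform value against both rates also explains the `l2_rate ≤ l1_rate` constraint: any request promoted to L2 is necessarily already in the L1 sample.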

## judges

Thresholds for the judge pipeline that evaluates sampled attestations. The thresholds work with any scored evaluation scale.

| Field | Type | Default | Description |
|---|---|---|---|
| `max_score` | `float` | `3.0` | Maximum score on the rubric scale |
| `consensus_threshold` | `float` | `1.0` | Maximum score spread between judges before flagging disagreement |
| `uphold_threshold` | `float` | `2.0` | Minimum average score for an `"uphold"` recommendation |
| `borderline_threshold` | `float` | `1.0` | Minimum average score for `"borderline"` (below this = `"escalate"`) |
| `score_precision` | `int` | `4` | Decimal places for rounding the final score |
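Putting the threshold semantics together, a judge run maps scores to a recommendation roughly like this. This is an illustrative reading of the table above, not the SDK's actual implementation:

```python
def recommend(scores: list[float], uphold: float = 2.0,
              borderline: float = 1.0, consensus: float = 1.0,
              precision: int = 4):
    """Illustrative mapping from judge scores to a recommendation.

    Uses the default thresholds from the table above; returns the
    recommendation, the rounded average, and a disagreement flag.
    """
    avg = round(sum(scores) / len(scores), precision)
    # Spread beyond consensus_threshold flags judge disagreement
    disagreement = (max(scores) - min(scores)) > consensus
    if avg >= uphold:
        rec = "uphold"
    elif avg >= borderline:
        rec = "borderline"
    else:
        rec = "escalate"
    return rec, avg, disagreement

rec, avg, flagged = recommend([2.5, 1.0])
# avg = 1.75 -> "borderline"; spread 1.5 > 1.0 -> disagreement flagged
assert (rec, avg, flagged) == ("borderline", 1.75, True)
```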
## attestation

| Field | Type | Default | Description |
|---|---|---|---|
| `offline` | `bool` | `true` | `true` = offline mode (local Ed25519 signing), `false` = online mode (server-witnessed) |
| `service_id` | `str` | `"openai"` | Default service identifier for attestations |
## evidence_storage

| Field | Type | Default | Description |
|---|---|---|---|
| `backend` | `"sqlite"` \| `"json"` | `"sqlite"` | Storage backend. `"sqlite"` = SQLite database, `"json"` = JSONL append-only log |
| `path` | `str \| null` | `null` (`~/.glacis/glacis.db` for SQLite, `~/.glacis` for JSON) | For SQLite: full `.db` file path. For JSON: directory containing `.jsonl` files |

## Loading the configuration

Use `load_config()` from `glacis.config` to load and parse the configuration file:

```python
from glacis.config import load_config

# Auto-load from ./glacis.yaml
config = load_config()  # Returns glacis.config.GlacisConfig

# Or specify an explicit path
config = load_config("path/to/glacis.yaml")

# Access any section
print(config.policy.id)                    # "hipaa-safe-harbor"
print(config.controls.input.pii_phi.enabled)  # True
print(config.sampling.l1_rate)             # 1.0
print(config.judges.uphold_threshold)      # 2.0
print(config.attestation.offline)          # True
print(config.evidence_storage.backend)     # "sqlite"
```

The returned `glacis.config.GlacisConfig` object is a Pydantic model, so you get full type safety and validation.

## Using the config with integrations

Provider integrations (OpenAI, Anthropic, Gemini) accept a `config` parameter to load a `glacis.yaml` file:

```python
from glacis.integrations.openai import attested_openai

# Pass the path to your config file; controls, sampling, and
# attestation settings are all applied automatically
client = attested_openai(config="./glacis.yaml")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

## Defaults

If no `glacis.yaml` is found, `load_config()` returns a `glacis.config.GlacisConfig` with all default values:

| Section | Default behavior |
|---|---|
| `policy` | `id="default"`, `version="1.0"`, `environment="development"`, no tags |
| `controls` | All controls disabled, `output_block_action="block"` |
| `sampling` | `l1_rate=1.0` (review all), `l2_rate=0.0` (no deep inspection) |
| `judges` | `max_score=3.0`, `uphold_threshold=2.0`, `borderline_threshold=1.0` |
| `attestation` | `offline=true`, `service_id="openai"` |
| `evidence_storage` | `backend="sqlite"`, `path=null` (defaults to `~/.glacis/glacis.db`) |