
LiteLLM Integration

The Glacis LiteLLM integration wraps LiteLLM’s unified completion interface to automatically create cryptographic attestations for every LLM call across 100+ providers. Your data is hashed locally and never leaves your environment — only hashes and metadata are sent to the Glacis transparency log.

Since LiteLLM exposes module-level functions (not a client object), this integration returns a thin wrapper object with .completion() and .acompletion() methods that mirror LiteLLM’s interface.
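This thin-wrapper pattern can be sketched in a few lines. The code below is illustrative only, not the actual Glacis implementation: it shows how a client-like object can forward to a module-level completion function while running a hook around each call.

```python
# Illustrative sketch of the thin-wrapper pattern (NOT the actual Glacis
# implementation): the object forwards to a module-level function and runs
# a hook around each call, where attestation logic would live.
from types import SimpleNamespace


def make_attested_wrapper(completion_fn, on_call):
    """Wrap a module-level completion function in a client-like object."""

    def completion(**kwargs):
        response = completion_fn(**kwargs)
        on_call(kwargs, response)  # e.g. hash request/response, emit receipt
        return response

    return SimpleNamespace(completion=completion)


# Toy usage with a stand-in for litellm.completion:
calls = []
wrapper = make_attested_wrapper(
    completion_fn=lambda **kw: {"echo": kw["model"]},
    on_call=lambda req, resp: calls.append((req, resp)),
)
result = wrapper.completion(model="gpt-4", messages=[])
```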

```sh
pip install glacis[litellm]
```
```python
from glacis.integrations.litellm import attested_litellm, get_last_receipt

client = attested_litellm(glacis_api_key="glsk_live_...")

# Make a normal LiteLLM call — attestation happens automatically
response = client.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

# Get the attestation receipt
receipt = get_last_receipt()
print(f"Attested: {receipt.id}")
print(f"Status: {receipt.witness_status}")
```

For each completion, Glacis captures:

| Field | Treatment | Details |
| --- | --- | --- |
| Request messages | Hashed | SHA-256, never sent to Glacis |
| Response content | Hashed | SHA-256, never sent to Glacis |
| System prompt | Hashed | SHA-256 hash included in control plane record |
| Model name | Metadata | Sent as-is (e.g., `gpt-4`, `claude-3-sonnet`, `mistral/mistral-large`) |
| Temperature | Metadata | Included in control plane record |
| Token counts | Metadata | Prompt, completion, and total tokens |
| Provider | Metadata | Always `"litellm"` |
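The hashed fields can be illustrated with a short sketch. The canonicalization below (sorted-key JSON plus SHA-256) is an assumption for illustration; the exact scheme Glacis uses is not documented here. The point it demonstrates is that only the digest ever needs to leave the process:

```python
# Sketch of client-side hashing, assuming JSON canonicalization + SHA-256.
# The exact canonicalization Glacis uses may differ; this only illustrates
# that plaintext stays local and only the digest is transmitted.
import hashlib
import json


def hash_messages(messages):
    canonical = json.dumps(messages, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


digest = hash_messages([{"role": "user", "content": "Hello!"}])
print(digest)  # 64 hex characters; the content itself is never sent
```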

LiteLLM supports 100+ providers through a unified interface. The `model` parameter determines which provider is used:

```python
from glacis.integrations.litellm import attested_litellm

client = attested_litellm(glacis_api_key="glsk_live_...")

# OpenAI
response = client.completion(model="gpt-4", messages=[...])

# Anthropic
response = client.completion(model="claude-3-sonnet-20240229", messages=[...])

# Azure OpenAI
response = client.completion(model="azure/my-deployment", messages=[...])

# AWS Bedrock
response = client.completion(model="bedrock/anthropic.claude-v2", messages=[...])

# Ollama (local)
response = client.completion(model="ollama/llama3", messages=[...])
```

Each call is attested with the actual model name used, giving you a unified attestation trail across all your LLM providers.

```python
from glacis.integrations.litellm import attested_litellm

# LiteLLM reads provider API keys from env vars automatically
# (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.)
client = attested_litellm(glacis_api_key="glsk_live_...")
```

Use `get_last_receipt()` to retrieve the attestation from the most recent API call. Receipts are stored in thread-local storage, so each thread maintains its own last receipt independently.

```python
from glacis.integrations.litellm import get_last_receipt

receipt = get_last_receipt()
if receipt:
    print(f"ID: {receipt.id}")
    print(f"Evidence hash: {receipt.evidence_hash}")
    print(f"Status: {receipt.witness_status}")  # "WITNESSED" or "UNVERIFIED"
    print(f"Service: {receipt.service_id}")
```
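The thread-local behavior can be sketched with the standard library. The functions below are hypothetical stand-ins for the real storage, showing why each thread sees only its own most recent receipt:

```python
# Sketch of the thread-local "last receipt" pattern using stdlib
# threading.local. These helpers are illustrative stand-ins, not the
# actual Glacis internals.
import threading

_local = threading.local()


def _store_receipt(receipt):
    _local.receipt = receipt


def get_last_receipt_sketch():
    # Each thread sees only what it stored itself; unset means None.
    return getattr(_local, "receipt", None)


results = {}


def worker(name):
    _store_receipt(f"receipt-for-{name}")
    results[name] = get_last_receipt_sketch()


threads = [threading.Thread(target=worker, args=(n,)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Each worker saw its own receipt; the main thread stored none.
```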

Offline mode creates locally-signed attestations without connecting to the Glacis server. This is useful for development, air-gapped environments, or when you want to defer attestation submission.

```python
import os

from glacis.integrations.litellm import attested_litellm, get_last_receipt

client = attested_litellm(
    offline=True,
    signing_seed=os.urandom(32),
)
response = client.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
receipt = get_last_receipt()
print(f"Status: {receipt.witness_status}")  # "UNVERIFIED"
```

Controls let you scan inputs and outputs for PII, jailbreak attempts, banned words, and more. Configure them via a `glacis.yaml` file:

```python
from glacis.integrations.litellm import attested_litellm, GlacisBlockedError

client = attested_litellm(config="glacis.yaml")

try:
    response = client.completion(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
except GlacisBlockedError as e:
    print(f"Blocked by {e.control_type} (score={e.score})")
```

Attach business-specific metadata to every attestation created by the client:

```python
from glacis.integrations.litellm import attested_litellm

client = attested_litellm(
    glacis_api_key="glsk_live_...",
    metadata={"department": "legal", "use_case": "contract-review"},
)
response = client.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize this contract."}],
)
# Metadata is included in every attestation:
# {"provider": "litellm", "model": "gpt-4", "department": "legal", "use_case": "contract-review"}
```
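The merge-with-reserved-keys behavior can be sketched as follows. The function and key set below are illustrative assumptions, not the actual Glacis implementation; they only show one way custom metadata could be combined with provider defaults while protecting reserved keys:

```python
# Illustrative sketch (NOT the actual Glacis implementation) of merging
# custom metadata with provider defaults while rejecting reserved keys.
RESERVED_KEYS = {"provider", "model"}


def merge_metadata(defaults, custom):
    clash = RESERVED_KEYS & custom.keys()
    if clash:
        raise ValueError(f"cannot override reserved keys: {sorted(clash)}")
    return {**defaults, **custom}


merged = merge_metadata(
    {"provider": "litellm", "model": "gpt-4"},
    {"department": "legal", "use_case": "contract-review"},
)
```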

Evidence includes the full input, output, and control plane results that were attested. Evidence is stored locally and never sent to Glacis servers.

```python
from glacis.integrations.litellm import get_last_receipt, get_evidence

receipt = get_last_receipt()
if receipt:
    evidence = get_evidence(receipt.id)
    if evidence:
        print(evidence["input"])   # Original request (model, messages)
        print(evidence["output"])  # Full response (choices, usage)
```

The wrapper also provides `acompletion()` for async workflows:

```python
import asyncio

from glacis.integrations.litellm import attested_litellm, get_last_receipt


async def main():
    client = attested_litellm(glacis_api_key="glsk_live_...")
    response = await client.acompletion(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)

    receipt = get_last_receipt()
    print(f"Attested: {receipt.id}")


asyncio.run(main())
```
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `glacis_api_key` | `Optional[str]` | `None` | Glacis API key. Required for online mode. |
| `glacis_base_url` | `str` | `"https://api.glacis.io"` | Glacis API base URL. |
| `service_id` | `str` | `"litellm"` | Service identifier for attestations. |
| `debug` | `bool` | `False` | Enable debug logging. |
| `offline` | `Optional[bool]` | `None` | Enable offline mode. If `None`, inferred from config or presence of `glacis_api_key`. |
| `signing_seed` | `Optional[bytes]` | `None` | 32-byte Ed25519 signing seed. Required when `offline=True`. |
| `policy_key` | `Optional[bytes]` | `None` | 32-byte HMAC key for sampling decisions. Falls back to `signing_seed` if not provided. |
| `config` | `Optional[str]` | `None` | Path to `glacis.yaml` config file for controls, sampling, and policy settings. |
| `input_controls` | `Optional[list[BaseControl]]` | `None` | Custom controls to run on input text before the LLM call. |
| `output_controls` | `Optional[list[BaseControl]]` | `None` | Custom controls to run on output text after the LLM call. |
| `metadata` | `Optional[dict[str, str]]` | `None` | Custom metadata to include in every attestation. Merged with provider defaults (`provider`, `model`). Cannot override reserved keys. |

Returns: An `AttestedLiteLLM` wrapper with `.completion()` and `.acompletion()` methods.

Raises: `GlacisBlockedError` if a control blocks the request.
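The `policy_key` parameter suggests HMAC-keyed sampling decisions. The sketch below is one plausible construction, not the documented Glacis scheme: keying a hash of some per-request identifier makes the sampling decision deterministic and reproducible by anyone holding the key.

```python
# Sketch of deterministic HMAC-based sampling. The actual Glacis scheme is
# not documented here; this only illustrates why a policy_key makes sampling
# decisions reproducible.
import hashlib
import hmac


def should_sample(policy_key: bytes, request_id: str, rate: float) -> bool:
    digest = hmac.new(policy_key, request_id.encode(), hashlib.sha256).digest()
    # Map the first 8 bytes of the MAC to [0, 1) and compare to the rate.
    value = int.from_bytes(digest[:8], "big") / 2**64
    return value < rate


key = b"\x01" * 32
decision = should_sample(key, "req-123", rate=0.25)
# Deterministic: the same key and request id always yield the same decision.
assert decision == should_sample(key, "req-123", rate=0.25)
```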