
Documentation Index

Fetch the complete documentation index at: https://onecli.sh/docs/llms.txt

Use this file to discover all available pages before exploring further.

Overview

Store your LLM API keys in OneCLI and the gateway injects them into requests automatically. Agents call model APIs without ever seeing the raw key.
[Image: LLMs tab in the OneCLI Connections dashboard]
| Provider  | Target host       | Auth method            |
| --------- | ----------------- | ---------------------- |
| Anthropic | api.anthropic.com | API key or OAuth token |
| OpenAI    | api.openai.com    | API key                |
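The auth methods in the table follow each provider's documented convention: Anthropic expects the key in an `x-api-key` header (OAuth tokens go in an `Authorization` bearer header), while OpenAI expects `Authorization: Bearer <key>`. A minimal sketch of that host-to-header mapping, assuming a hypothetical `auth_headers` helper (the OneCLI-side structure is illustrative, not its actual internals):

```python
# Map each supported provider host to a function that builds its auth headers.
# The header names follow the providers' public API conventions; everything
# else here is a hypothetical illustration.
PROVIDERS = {
    "api.anthropic.com": lambda key: {"x-api-key": key},
    "api.openai.com": lambda key: {"Authorization": f"Bearer {key}"},
}

def auth_headers(host: str, key: str) -> dict:
    """Return the auth header(s) a request to `host` would need."""
    try:
        return PROVIDERS[host](key)
    except KeyError:
        raise ValueError(f"no credential configured for host {host!r}")
```

A table-driven mapping like this keeps adding a new provider to a one-line change: register the host and how its credential is presented.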

How it works

  1. You add an API key in the OneCLI dashboard under Connections > LLMs
  2. OneCLI encrypts and stores the key (AES-256-GCM at rest)
  3. When an agent sends a request to a matching host (e.g. api.anthropic.com), the gateway injects the key into the appropriate header
  4. The request is forwarded to the provider

If you rotate a key, update it in the dashboard and all agents pick up the new key automatically.
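The four steps above can be sketched as a tiny proxy loop. This is a hedged illustration, not OneCLI's implementation: the `Gateway`, `Request`, and keystore names are hypothetical, and the injected header is shown as Anthropic-style `x-api-key` for brevity.

```python
# Illustrative sketch of the injection flow; all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Request:
    host: str
    path: str
    headers: dict = field(default_factory=dict)

class Gateway:
    def __init__(self, keystore: dict):
        # keystore maps host -> decrypted key; at rest the keys are stored
        # encrypted (AES-256-GCM per step 2), so agents never see them
        self.keystore = keystore

    def forward(self, req: Request) -> Request:
        key = self.keystore.get(req.host)
        if key is not None:
            # step 3: inject the credential for a matching host
            req.headers["x-api-key"] = key
        # step 4: hand the request off to the provider (elided here)
        return req
```

Because agents only ever hold a reference to the gateway, rotating a key in the dashboard changes what the keystore returns without touching any agent.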

Controlling access with rules

Use OneCLI’s rules engine to control how agents use your LLM keys. For example, you can rate-limit requests, restrict agents to specific models by blocking certain paths, or flag high-cost operations for manual approval. Rules are evaluated before credential injection, so a blocked request never reaches the provider.
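A sketch of "rules run before injection", assuming hypothetical names throughout (the path-blocking rule mirrors the model-restriction example above; OneCLI's actual rule syntax is not shown here):

```python
# Illustrative rule evaluation; a denied request never gets a key injected.
# All names are hypothetical.

def evaluate_rules(req: dict, rules: list):
    """Return (allowed, reason); the first rule to object wins."""
    for rule in rules:
        verdict = rule(req)
        if verdict is not None:
            return False, verdict
    return True, None

def block_path(prefix: str):
    # Restrict agents to specific models by blocking certain paths.
    def rule(req):
        if req["path"].startswith(prefix):
            return f"blocked path {prefix}"
    return rule

rules = [block_path("/v1/completions")]
allowed, reason = evaluate_rules({"path": "/v1/completions"}, rules)
# allowed is False: the request is rejected before credential injection,
# so it never reaches the provider.
```

Evaluating rules ahead of injection is the key ordering property: a blocked request fails closed, without a usable credential ever being attached.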