You can query data from the past 3 years
Today
Today's Quota Status
Syncing today's request limits, token limits, and card usage status.
Card Usage Progress
Total Cost
$0.00
Total Requests
0
Web Searches
0
Images
0
Total Tokens
0
Input Tokens
0
Output Tokens
0
Cache R/W
0
Model Cost Distribution
Model Request Distribution
Model Usage Details
| Model | Requests | Searches | Input Tokens | Output Tokens | Cache R/W | Images | Cost | Percentage |
|---|---|---|---|---|---|---|---|---|
Daily Usage Statistics
| Date | Requests | Cost | Tokens | Images | Primary Model |
|---|---|---|---|---|---|
My Account
Profile
Update the alias shown in the top-right corner, your login email, and billing notifications
Supported format: XAI API Keys starting with sk-Xvs...
Query Results
Supported AI Service Providers
The XAI platform is compatible with virtually all major AI providers and model ecosystems, supporting unified integration and flexible switching.
How It Works
One entrypoint handles auth, routing, and normalization before reaching model providers.
Single entrypoint
Use one XAI API key and a unified base_url.
Smart routing
Policies, model mapping, rate limits, and observability live in the router.
Provider fan-out
Requests fan out to OpenAI, Claude, Gemini, and more; responses are normalized on the way back.
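The model-mapping step can be pictured as a prefix lookup. The prefixes and provider names below are illustrative assumptions, not XAI Router internals:

```python
# Minimal sketch of prefix-based model routing (illustrative only; the
# prefix table and provider names are assumptions, not router internals).
PROVIDER_BY_PREFIX = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "google",
}

def route(model: str) -> str:
    """Return the upstream provider inferred from the model name prefix."""
    for prefix, provider in PROVIDER_BY_PREFIX.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"no provider registered for model {model!r}")

print(route("gpt-5"))              # openai
print(route("claude-sonnet-4-6"))  # anthropic
```

In practice this is why one key and one base_url are enough: the client only changes the `model` string, and the router picks the upstream.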
SDK Examples
OpenAI SDK Example
import os
from openai import OpenAI
XAI_API_KEY = os.getenv("XAI_API_KEY")
client = OpenAI(
api_key=XAI_API_KEY,
base_url="",
)
completion = client.chat.completions.create(
model="gpt-5",
messages=[
{"role": "system", "content": "You are AI"},
{"role": "user", "content": "What is the meaning of life, the universe, and everything?"},
],
)
print(completion.choices[0].message)
Anthropic SDK Example
import os
from anthropic import Anthropic
XAI_API_KEY = os.getenv("XAI_API_KEY")
client = Anthropic(
api_key=XAI_API_KEY,
base_url="",
)
message = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=128,
system="You are AI.",
messages=[
{
"role": "user",
"content": "What is the meaning of life, the universe, and everything?",
},
],
)
print(message.content)
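Note that `message.content` is a list of content blocks, so the print above shows block objects rather than plain text. A minimal sketch of flattening the text blocks, using a stand-in class (the real SDK returns typed objects with `.type` and `.text` attributes):

```python
from dataclasses import dataclass

# Stand-in for the SDK's text content block (illustrative; the Anthropic
# SDK returns typed block objects exposing .type and .text).
@dataclass
class TextBlock:
    type: str
    text: str

def flatten_text(blocks) -> str:
    """Join the text of all text-type blocks into one string."""
    return "".join(b.text for b in blocks if b.type == "text")

print(flatten_text([TextBlock("text", "42.")]))  # 42.
```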
cURL Examples
OpenAI /responses
curl /responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $XAI_API_KEY" \
-d '{
"model": "gpt-5.5",
"input": "Explain what the Responses API does in one sentence."
}'
OpenAI /chat/completions
curl /chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $XAI_API_KEY" \
-d '{
"model": "gpt-5.2",
"messages": [
{
"role": "developer",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
Anthropic /messages
curl /v1/messages \
-H 'Content-Type: application/json' \
-H 'anthropic-version: 2023-06-01' \
-H "X-Api-Key: $XAI_API_KEY" \
-d '{
"max_tokens": 1024,
"messages": [
{
"content": "Hello, world",
"role": "user"
}
],
"model": "claude-sonnet-4-6"
}'
Create New Sub-account
API Call Examples
cURL
Python
JavaScript
Fund/Deduct Sub-account
API Call Examples
cURL
Python
JavaScript
Recharge History
No recharge records
Service Orders
No service orders
Configuration Guide
Unified setup notes for Codex CLI / Codex App / Claude Code / OpenCode / OpenClaw / Hermes.
Unified prerequisite: all examples use the XAI Router gateway.
The page fills the compatible endpoint and current login key for each tool, so
users do not need to choose between OpenAI and Anthropic endpoints manually. Codex
writes to ~/.codex/auth.json, Hermes writes to
~/.hermes/config.yaml, Claude Code uses
ANTHROPIC_AUTH_TOKEN, and OpenCode / OpenClaw use
XAI_API_KEY.
Codex CLI / Codex App
Codex now supports only wire_api = "responses".
On Linux / macOS, copy the one-shot command below first: the page inserts the
current XAI API Key automatically, so the user only needs to paste it into a terminal.
Manual setup only needs two files: ~/.codex/config.toml and
~/.codex/auth.json. On Windows, use
%USERPROFILE%\.codex\config.toml and
%USERPROFILE%\.codex\auth.json.
Recommended: Linux / macOS one-shot setup
mkdir -p ~/.codex
cat > ~/.codex/config.toml <<'EOF'
model_provider = "xai"
model = "gpt-5.5"
model_reasoning_effort = "xhigh"
plan_mode_reasoning_effort = "xhigh"
model_reasoning_summary = "none"
model_context_window = 1050000
model_auto_compact_token_limit = 945000
stream_idle_timeout_ms = 900000
approval_policy = "never"
sandbox_mode = "danger-full-access"
suppress_unstable_features_warning = true
[model_providers.xai]
name = "OpenAI"
base_url = ""
wire_api = "responses"
experimental_bearer_token = "sk-Xvs..."
requires_openai_auth = true
supports_websockets = true
[features]
responses_websockets_v2 = true
goals = true
remote_connections = true
EOF
cat > ~/.codex/auth.json <<'EOF'
{
"OPENAI_API_KEY": "sk-Xvs..."
}
EOF
chmod 600 ~/.codex/auth.json
codex
When copying the command above, the page automatically inserts the current
XAI API Key into both the config.toml
experimental_bearer_token field and auth.json. If
auth.json is later overwritten by Codex login, model requests still go
through XAI Router.
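A quick local check that the key actually landed in auth.json; this helper and its name are illustrative, not part of Codex:

```python
import json
from pathlib import Path

def auth_has_key(auth_path: Path) -> bool:
    """Return True if auth.json holds an OPENAI_API_KEY with the sk- prefix."""
    try:
        data = json.loads(auth_path.read_text())
    except (OSError, json.JSONDecodeError):
        return False
    return data.get("OPENAI_API_KEY", "").startswith("sk-")

# Example usage:
# auth_has_key(Path.home() / ".codex" / "auth.json")
```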
Fallback: copy ~/.codex/config.toml only
model_provider = "xai"
model = "gpt-5.5"
model_reasoning_effort = "xhigh"
plan_mode_reasoning_effort = "xhigh"
model_reasoning_summary = "none"
model_context_window = 1050000
model_auto_compact_token_limit = 945000
stream_idle_timeout_ms = 900000
approval_policy = "never"
sandbox_mode = "danger-full-access"
suppress_unstable_features_warning = true
[model_providers.xai]
name = "OpenAI"
base_url = ""
wire_api = "responses"
experimental_bearer_token = "sk-Xvs..."
requires_openai_auth = true
supports_websockets = true
[features]
responses_websockets_v2 = true
goals = true
remote_connections = true
Fallback: copy ~/.codex/auth.json only
{
"OPENAI_API_KEY": "sk-Xvs..."
}
Windows CMD (create the directory and open files)
if not exist "%USERPROFILE%\.codex" mkdir "%USERPROFILE%\.codex"
notepad "%USERPROFILE%\.codex\config.toml"
notepad "%USERPROFILE%\.codex\auth.json"
codex
Windows PowerShell (create the directory and open files)
New-Item -ItemType Directory -Force "$env:USERPROFILE\.codex" | Out-Null
notepad "$env:USERPROFILE\.codex\config.toml"
notepad "$env:USERPROFILE\.codex\auth.json"
codex
On Windows, paste the TOML and JSON blocks above into the two files Notepad opens,
save them, then run codex.
Claude Code (gpt-5.5)
Claude Code integration is primarily environment-variable based.
The following examples map Claude Code's default model families to
gpt-5.5 via the ANTHROPIC_* environment variables.
Order: copy the environment variables for your shell first, then
run claude.
Environment variables (Linux / macOS)
export ANTHROPIC_AUTH_TOKEN="sk-Xvs..."
export ANTHROPIC_BASE_URL=""
# Optional: custom default model mapping for Claude families (not required)
export ANTHROPIC_DEFAULT_OPUS_MODEL="gpt-5.5"
export ANTHROPIC_DEFAULT_SONNET_MODEL="gpt-5.4"
export ANTHROPIC_DEFAULT_HAIKU_MODEL="gpt-5.4-mini"
Environment variables (Windows CMD)
set ANTHROPIC_AUTH_TOKEN=sk-Xvs...
set ANTHROPIC_BASE_URL=
:: Optional: custom default model mapping for Claude families (not required)
set ANTHROPIC_DEFAULT_OPUS_MODEL=gpt-5.5
set ANTHROPIC_DEFAULT_SONNET_MODEL=gpt-5.4
set ANTHROPIC_DEFAULT_HAIKU_MODEL=gpt-5.4-mini
Environment variables (Windows PowerShell)
$env:ANTHROPIC_AUTH_TOKEN="sk-Xvs..."
$env:ANTHROPIC_BASE_URL=""
# Optional: custom default model mapping for Claude families (not required)
$env:ANTHROPIC_DEFAULT_OPUS_MODEL="gpt-5.5"
$env:ANTHROPIC_DEFAULT_SONNET_MODEL="gpt-5.4"
$env:ANTHROPIC_DEFAULT_HAIKU_MODEL="gpt-5.4-mini"
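The optional ANTHROPIC_DEFAULT_*_MODEL variables act as per-family overrides. A sketch of that lookup (the fallback names here are illustrative placeholders, not Claude Code's real defaults):

```python
import os

# Fallback model names when no override is set (illustrative values only).
FAMILY_FALLBACKS = {
    "opus": "claude-opus",
    "sonnet": "claude-sonnet",
    "haiku": "claude-haiku",
}

def resolve_family_model(family: str) -> str:
    """Apply the ANTHROPIC_DEFAULT_<FAMILY>_MODEL override if present."""
    override = os.environ.get(f"ANTHROPIC_DEFAULT_{family.upper()}_MODEL")
    return override or FAMILY_FALLBACKS[family]

os.environ["ANTHROPIC_DEFAULT_OPUS_MODEL"] = "gpt-5.5"
print(resolve_family_model("opus"))  # gpt-5.5
```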
Launch and verify
claude
Verify with: claude
OpenCode (Responses: gpt-5.5)
OpenCode should use the global config file
~/.config/opencode/opencode.jsonc (Windows:
%USERPROFILE%\.config\opencode\opencode.jsonc).
Write the JSONC below into the config file, then set
XAI_API_KEY in your shell and run the
verification command.
Order: copy the Responses API config first, then copy the shell command for your OS.
Profile A: put this in opencode.jsonc (Responses API)
{
"$schema": "https://opencode.ai/config.json",
"model": "openai/gpt-5.5",
"small_model": "openai/gpt-5.5",
"provider": {
"openai": {
"options": {
"baseURL": "",
"apiKey": "{env:XAI_API_KEY}"
},
"models": {
"gpt-5.5": {
"headers": {
"originator": "opencode"
}
}
}
}
},
"agent": {
"title": {
"options": {
"reasoningEffort": "none"
}
},
"build": {
"variant": "xhigh",
"options": {
"reasoningSummary": "detailed",
"textVerbosity": "high"
}
},
"plan": {
"variant": "xhigh",
"options": {
"reasoningSummary": "detailed",
"textVerbosity": "high"
}
}
},
"permission": {
"*": "allow",
"external_directory": "allow",
"doom_loop": "allow"
}
}
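The `"apiKey": "{env:XAI_API_KEY}"` line uses OpenCode's environment-placeholder syntax. A minimal sketch of how such a placeholder substitution works (illustrative, not OpenCode's actual implementation):

```python
import os
import re

def expand_env_placeholders(value: str) -> str:
    """Replace {env:NAME} with the value of environment variable NAME."""
    return re.sub(
        r"\{env:([A-Za-z_][A-Za-z0-9_]*)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )

os.environ["XAI_API_KEY"] = "sk-Xvs-demo"
print(expand_env_placeholders("{env:XAI_API_KEY}"))  # sk-Xvs-demo
```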
Linux / macOS (set key and verify)
export XAI_API_KEY="sk-Xvs..."
opencode debug config
opencode run "hello"
Windows CMD (set key and verify)
set XAI_API_KEY=sk-Xvs...
opencode debug config
opencode run "hello"
Windows PowerShell (set key and verify)
$env:XAI_API_KEY="sk-Xvs..."
opencode debug config
opencode run "hello"
Verify with: opencode debug config (config) and opencode run "hello" (request)
OpenClaw
OpenClaw can connect to the OpenAI API and the Claude API, and can also be
extended to the OpenAI Responses API. XAI Router supports the OpenAI
API and the Claude API by default; the recommended setup is
api = "openai-responses". Config path:
~/.openclaw/openclaw.json on Linux / macOS, and
%USERPROFILE%\.openclaw\openclaw.json on Windows.
Order: write one of the JSON configs below to the config file,
then set XAI_API_KEY for your shell, then run the
verification command.
Mode 1: OpenAI Responses API compatible (recommended, api = "openai-responses")
{
"agents": {
"defaults": {
"model": { "primary": "xairouter/gpt-5.4" }
}
},
"models": {
"mode": "merge",
"providers": {
"xairouter": {
"baseUrl": "",
"apiKey": "${XAI_API_KEY}",
"api": "openai-responses",
"models": [{ "id": "gpt-5.4", "name": "gpt-5.4" }]
}
}
}
}
Mode 2: Claude API compatible (api = "anthropic-messages")
{
"agents": {
"defaults": {
"model": { "primary": "xairouter/claude-sonnet-4-6" }
}
},
"models": {
"mode": "merge",
"providers": {
"xairouter": {
"baseUrl": "",
"apiKey": "${XAI_API_KEY}",
"api": "anthropic-messages",
"models": [{ "id": "claude-sonnet-4-6", "name": "claude-sonnet-4-6" }]
}
}
}
}
Mode 3: OpenAI Chat API compatible (api = "openai-completions")
{
"agents": {
"defaults": {
"model": { "primary": "xairouter/MiniMax-M2.5" }
}
},
"models": {
"mode": "merge",
"providers": {
"xairouter": {
"baseUrl": "",
"apiKey": "${XAI_API_KEY}",
"api": "openai-completions",
"models": [{ "id": "MiniMax-M2.5", "name": "MiniMax-M2.5" }]
}
}
}
}
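The three modes differ only in the provider's api field and the default model. A small sketch that pulls those two values out of a config, assuming the JSON shape shown above (the helper name is illustrative):

```python
import json

def summarize_openclaw_config(config_text: str) -> tuple[str, str]:
    """Return (api_kind, primary_model) from an OpenClaw-style config."""
    cfg = json.loads(config_text)
    provider = cfg["models"]["providers"]["xairouter"]
    primary = cfg["agents"]["defaults"]["model"]["primary"]
    return provider["api"], primary

demo = """
{
  "agents": {"defaults": {"model": {"primary": "xairouter/claude-sonnet-4-6"}}},
  "models": {"mode": "merge", "providers": {"xairouter": {
    "baseUrl": "", "apiKey": "${XAI_API_KEY}",
    "api": "anthropic-messages",
    "models": [{"id": "claude-sonnet-4-6", "name": "claude-sonnet-4-6"}]}}}
}
"""
print(summarize_openclaw_config(demo))
# ('anthropic-messages', 'xairouter/claude-sonnet-4-6')
```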
Linux / macOS (set key)
export XAI_API_KEY="sk-Xvs..."
Windows CMD (set key)
set XAI_API_KEY=sk-Xvs...
Windows PowerShell (set key)
$env:XAI_API_KEY="sk-Xvs..."
Verify command
openclaw models status
Verify with: openclaw models status
Hermes (gpt-5.4)
Hermes uses ~/.hermes/config.yaml. This guide configures the GPT model
path only: type: openai, base_url points to the
XAI Router gateway, and the default model is
gpt-5.4.
Order: install Hermes, then copy the one-shot setup command. The page inserts the current XAI API Key automatically.
Install Hermes
pipx install hermes-agent
Recommended: Linux / macOS write config and launch
mkdir -p ~/.hermes
cat > ~/.hermes/config.yaml <<'EOF'
models:
default: xairouter/gpt-5.4
providers:
xairouter:
type: openai
base_url:
api_key: "sk-Xvs..."
default_model: gpt-5.4
EOF
chmod 600 ~/.hermes/config.yaml
hermes
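The chmod 600 step matters because config.yaml contains the API key. If you script the write yourself, you can create the file owner-only from the start; a stdlib-only sketch (the function name is illustrative):

```python
import os

def write_private(path: str, text: str) -> None:
    """Create or overwrite a file readable and writable only by the owner."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(text)
    # os.open's mode only applies on creation, so enforce it either way.
    os.chmod(path, 0o600)
```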
Fallback: copy ~/.hermes/config.yaml only
models:
default: xairouter/gpt-5.4
providers:
xairouter:
type: openai
base_url:
api_key: "sk-Xvs..."
default_model: gpt-5.4
Windows CMD (create config and launch)
if not exist "%USERPROFILE%\.hermes" mkdir "%USERPROFILE%\.hermes"
notepad "%USERPROFILE%\.hermes\config.yaml"
hermes
Windows PowerShell (create config and launch)
New-Item -ItemType Directory -Force "$env:USERPROFILE\.hermes" | Out-Null
notepad "$env:USERPROFILE\.hermes\config.yaml"
hermes
Fallback interactive model setup
hermes model
Verify with: hermes
View Sub-account Information
API Call Examples
cURL
Python
JavaScript
Update Sub-account Information
API Call Examples
cURL
Python
JavaScript
Sub-account List
0 sub-accounts
No sub-accounts yet
This account has no sub-accounts
Delete Sub-account
This action cannot be undone. The sub-account Key will be immediately invalidated.
API Call Examples
cURL
Python
JavaScript
Sub-account Insights
Loading billing data...
Failed to load billing data
Please verify network access and API permissions, then retry
Total Cost
$0.00
Total Requests
0
Web Searches
0
Images
0
Total Tokens
0
Input Tokens
0
Output Tokens
0
Cache R/W
0
Model Cost Distribution
Model Request Distribution
Cost Trend Analysis
Daily Trend
Sub-account Overview
Compare sub-account request and spend distribution to flag anomalies quickly.
| ID | User | Requests | Searches | Input Tokens | Output Tokens | Cache R/W | Images | Highest-Cost Model | Most Requested Model | Spend | Spend Share | Request Share |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
Model Spend & Requests
Summarize request share and cost structure to pinpoint primary cost drivers.
| Model | Requests | Searches | Input | Output | Spend | Request Share | Spend Share |
|---|---|---|---|---|---|---|---|
Daily Timeline
Review daily totals, model share, and cache activity across the selected range.
Activity Logs
| Time | Action | Target | Details | IP |
|---|---|---|---|---|
Loading logs...
No activity logs available