API Documentation

Complete API reference for OpenAI-compatible, Claude-compatible, and native endpoints.

What is AI Badgr?

AI Badgr is an API gateway that:

  • Acts as an OpenAI/Claude proxy (bring your own provider key), or
  • Runs managed models (use Badgr API keys for OSS models and hosted GPUs)

Receipts are generated for every request, providing execution records for cost tracking and debugging.

What This Mode Is

  • OpenAI-compatible proxy: Same API shape, same SDKs, same request format
  • Bring your own OpenAI API key: Use your existing OpenAI key - no new keys needed
  • No Badgr account required: Works immediately with your OpenAI credentials
  • Same models, streaming, tools: Everything works exactly as before

What Changes (Only This)

Before (OpenAI)

api.openai.com/v1

After (AI Badgr)

aibadgr.com/v1

What does NOT change:

  • Your OpenAI API key
  • Model names (gpt-3.5-turbo, gpt-4, etc.)
  • Streaming behavior
  • Tools / function calling
  • RAG / embeddings

Chat Completions

Non-Streaming

Python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",  # Your existing OpenAI key
    base_url="https://aibadgr.com/v1"  # Only change: swap base URL
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
    max_tokens=200
)

print(response.choices[0].message.content)
base_url — Change this from the default OpenAI URL (https://api.openai.com/v1); everything else in the call stays the same

Streaming

Python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://aibadgr.com/v1"
)

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

cURL
curl -i https://aibadgr.com/v1/chat/completions \
  -H "Authorization: Bearer sk-your-openai-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 200
  }'
https://aibadgr.com/v1 — Change this from https://api.openai.com/v1
Use the -i flag to see response headers (including the receipt ID)

Receipts

What a Receipt Is

  • One receipt per request (immutable execution record)
  • Exists for both success and failure
  • Contains: tokens, cost, timing, status, failure stage (if failed)
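
The field names below are purely illustrative (the actual receipt schema is not documented here); the sketch only shows the kind of record the list above describes:

```json
{
  "id": "abc-123",
  "status": "success",
  "model": "gpt-3.5-turbo",
  "tokens": { "prompt": 12, "completion": 48, "total": 60 },
  "cost_usd": 0.00009,
  "duration_ms": 850,
  "failure_stage": null
}
```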

Where It Appears

Receipt identifiers are returned in response headers, not in the JSON body.

Response headers:

  • X-Badgr-Receipt-Id - Unique receipt identifier
  • X-AIBADGR-Receipt-URL - Relative path to receipt
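
HTTP header names are case-insensitive, so a small helper can pull the two receipt fields out of any headers mapping. A minimal sketch (the helper name is illustrative; the header names are the two documented above):

```python
def extract_receipt(headers):
    """Pull the Badgr receipt ID and URL out of a response-headers mapping.

    HTTP header names are case-insensitive, so normalize keys before lookup.
    Returns (receipt_id, receipt_url); either may be None if absent.
    """
    lowered = {k.lower(): v for k, v in headers.items()}
    return (
        lowered.get("x-badgr-receipt-id"),
        lowered.get("x-aibadgr-receipt-url"),
    )

# Example with headers as a plain dict:
rid, rurl = extract_receipt({
    "Content-Type": "application/json",
    "X-Badgr-Receipt-Id": "abc-123",
    "X-AIBADGR-Receipt-URL": "/v1/receipts/abc-123",
})
# rid == "abc-123", rurl == "/v1/receipts/abc-123"
```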

How to Fetch It

# 1. Make request with -i to see headers
curl -i https://aibadgr.com/v1/chat/completions \
  -H "Authorization: Bearer sk-your-openai-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [...]}'

# 2. Copy X-AIBADGR-Receipt-URL from headers (e.g., /v1/receipts/abc-123)

# 3. Fetch receipt (use same OpenAI key)
curl https://aibadgr.com/v1/receipts/abc-123 \
  -H "Authorization: Bearer sk-your-openai-key"
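
Since X-AIBADGR-Receipt-URL is a relative path, it has to be joined onto the gateway's origin before fetching. A minimal Python sketch (the function name is illustrative):

```python
from urllib.parse import urljoin

def receipt_endpoint(base, receipt_path):
    """Join the relative path from X-AIBADGR-Receipt-URL onto the gateway origin."""
    return urljoin(base, receipt_path)

url = receipt_endpoint("https://aibadgr.com", "/v1/receipts/abc-123")
# url == "https://aibadgr.com/v1/receipts/abc-123"

# Then fetch it with the same OpenAI key used for the original request, e.g.:
#   import requests
#   r = requests.get(url, headers={"Authorization": "Bearer sk-your-openai-key"})
#   print(r.json())
```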

❓ Why does the receipt URL 401 in my browser?

Receipt URLs are API endpoints, not browser pages; pasting one into a browser returns 401. This is expected behavior, not a bug. Use curl, Postman, or your HTTP client with an Authorization: Bearer header and your OpenAI key.

Streaming + Receipts

  • Streaming works the same as OpenAI
  • Receipt headers sent at stream start (capture them even while streaming tokens)
  • Receipt finalizes when stream completes (success or failure)
  • Failed streams still produce receipts (status shows failure)
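
One way to capture the receipt headers before consuming the stream is the OpenAI Python SDK's with_raw_response accessor, which exposes the HTTP headers and then yields the stream via parse(). A sketch under that assumption (not verified against AI Badgr):

```python
def stream_with_receipt(api_key, base_url="https://aibadgr.com/v1"):
    """Start a streaming chat completion and capture the receipt ID first.

    Headers are sent at stream start, so they can be read before any tokens.
    """
    from openai import OpenAI  # imported here so the sketch stays self-contained

    client = OpenAI(api_key=api_key, base_url=base_url)
    raw = client.chat.completions.with_raw_response.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello!"}],
        stream=True,
    )
    receipt_id = raw.headers.get("X-Badgr-Receipt-Id")  # capture before streaming
    for chunk in raw.parse():  # parse() yields the chunk stream
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")
    return receipt_id

# receipt_id = stream_with_receipt("sk-your-openai-key")
```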

Phase-1 Scope

Receipts supported for:

  • /v1/chat/completions (including streaming)

Other endpoints execute normally. Receipt coverage is expanding.