Conway

Inference

OpenAI-compatible chat completions API, billed from Conway credits.

Chat Completions

POST /v1/chat/completions

OpenAI-compatible chat completions endpoint. Requests are proxied to OpenAI and billed from your Conway credits with a 1.3x markup on token cost.

Supports streaming via Server-Sent Events (SSE).
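Because the endpoint is OpenAI-compatible, existing OpenAI clients can typically be pointed at it by overriding the base URL. A minimal sketch with the official OpenAI Python SDK, assuming the base URL and key format shown in the curl examples below:

from openai import OpenAI

# Point the standard OpenAI client at Conway's proxy; the base URL and
# key format are taken from the curl examples further down this page.
client = OpenAI(
    base_url="https://api.conway.tech/v1",
    api_key="cnwy_k_your-api-key",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)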

Prerequisites

  • Authenticated with API key or JWT
  • Minimum credit balance of 10 cents

Request Body

Parameter     Type     Required  Description
model         string   Yes       OpenAI model name (e.g. gpt-4o, gpt-4o-mini, o3-mini)
messages      array    Yes       Array of message objects ({ role, content })
stream        boolean  No        Enable SSE streaming (default: false)
temperature   number   No        Sampling temperature
max_tokens    number   No        Maximum tokens to generate

All other OpenAI-compatible parameters are forwarded as-is.
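As a sketch of how these parameters fit together in a request body, here is a call made with Python's requests library, including the optional fields and top_p as one example of a forwarded OpenAI parameter (the values are illustrative):

import requests

# Required fields plus the two documented optional fields; top_p is an
# example of an OpenAI parameter that is forwarded unchanged.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Summarize SSE in one sentence."}],
    "temperature": 0.7,
    "max_tokens": 200,
    "top_p": 0.9,
}

resp = requests.post(
    "https://api.conway.tech/v1/chat/completions",
    headers={"Authorization": "Bearer cnwy_k_your-api-key"},
    json=payload,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])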

Example (Non-Streaming)

curl -X POST https://api.conway.tech/v1/chat/completions \
  -H "Authorization: Bearer cnwy_k_your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      { "role": "user", "content": "Hello" }
    ]
  }'

Response:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4o-mini-2024-07-18",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! How can I help?" },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 8,
    "completion_tokens": 7,
    "total_tokens": 15
  }
}

Example (Streaming)

curl -N -X POST https://api.conway.tech/v1/chat/completions \
  -H "Authorization: Bearer cnwy_k_your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      { "role": "user", "content": "Hello" }
    ],
    "stream": true
  }'

Returns a stream of data: lines in SSE format, ending with data: [DONE].
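A sketch of consuming the stream with Python's requests library. It assumes the chunks follow OpenAI's standard chat.completion.chunk shape (a delta object per choice), which follows from requests being proxied to OpenAI:

import json
import requests

resp = requests.post(
    "https://api.conway.tech/v1/chat/completions",
    headers={"Authorization": "Bearer cnwy_k_your-api-key"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,  # keep the connection open and read chunks as they arrive
)

for line in resp.iter_lines():
    if not line:
        continue  # SSE events are separated by blank lines
    line = line.decode("utf-8")
    if not line.startswith("data: "):
        continue
    data = line[len("data: "):]
    if data == "[DONE]":
        break  # end-of-stream sentinel
    chunk = json.loads(data)
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)
print()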

Billing

Each request is billed based on token usage:

charged_cents = ceil(token_cost_usd * 100 * 1.3)
  • Token cost is computed from OpenAI's per-model pricing (input + output tokens)
  • A 1.3x markup is applied
  • Credits are deducted after the response completes
  • Transactions appear in your credit history as type inference
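As a worked example of the formula, a rough Python sketch using illustrative per-million-token prices (the real rates are OpenAI's per-model prices and change over time):

import math

# Hypothetical per-million-token prices for illustration only; the actual
# values come from OpenAI's per-model pricing.
INPUT_USD_PER_MTOK = 0.15
OUTPUT_USD_PER_MTOK = 0.60

def charged_cents(prompt_tokens: int, completion_tokens: int) -> int:
    token_cost_usd = (
        prompt_tokens * INPUT_USD_PER_MTOK / 1_000_000
        + completion_tokens * OUTPUT_USD_PER_MTOK / 1_000_000
    )
    # 1.3x markup, rounded up to the next whole cent
    return math.ceil(token_cost_usd * 100 * 1.3)

# Usage from the non-streaming example above (8 prompt + 7 completion tokens):
print(charged_cents(8, 7))  # -> 1, since sub-cent costs round up per the formula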

Errors

Status  Description
400     Missing model or messages
401     Invalid or missing authentication
402     Insufficient credits (minimum 10 cents required)
503     Inference proxy not configured
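A sketch of handling these statuses client-side with Python's requests; the shape of the error body is not documented here, so only the status code is checked:

import requests

resp = requests.post(
    "https://api.conway.tech/v1/chat/completions",
    headers={"Authorization": "Bearer cnwy_k_your-api-key"},
    json={"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]},
)

if resp.status_code == 402:
    # Balance is below the 10-cent minimum: top up credits before retrying.
    raise RuntimeError("Insufficient Conway credits")
elif resp.status_code == 401:
    raise RuntimeError("Invalid or missing API key / JWT")
elif resp.status_code == 503:
    raise RuntimeError("Inference proxy not configured on the server")
resp.raise_for_status()  # surface any other non-2xx status
print(resp.json()["choices"][0]["message"]["content"])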

Supported Models

Any model available through OpenAI's API, including:

  • gpt-4o, gpt-4o-mini
  • gpt-4.1, gpt-4.1-mini, gpt-4.1-nano
  • o1, o1-mini, o1-pro
  • o3, o3-mini, o4-mini
  • gpt-4-turbo, gpt-3.5-turbo