The GeneralCompute API mirrors the OpenAI platform while adding routing, observability, and ultra-low-latency infrastructure. All SDKs (OpenAI, Vercel AI SDK, LangChain, LlamaIndex, or the official GeneralCompute clients) interact with the same schema documented here.

Documentation Index
Fetch the complete documentation index at: https://docs.generalcompute.com/llms.txt
Use this file to discover all available pages before exploring further.
OpenAPI specification
This reference uses the live OpenAPI document exported from the router. It includes every available path, schema, and security definition: Download openapi.json
View the exact schema that powers the API playground.
| Path | Description |
|---|---|
| POST /v1/chat/completions | Create chat completions with optional streaming |
| POST /v1/models/list | List models available to your organization |
| GET /v1/public/models | Fetch the curated list of public models shown in docs |
POST /v1/chat/completions
Create chat completions with optional streaming. The body mirrors OpenAI’s schema, including tool calling and JSON mode fields. Set "stream": true to receive chat.completion.chunk events over text/event-stream.
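As a sketch of consuming the streamed response, the following stdlib-only parser extracts chat.completion.chunk payloads from a text/event-stream body. The sample stream below is illustrative, mirroring OpenAI's chunk shape rather than a captured GeneralCompute response.

```python
import json

def parse_sse_chunks(raw: str):
    """Extract chat.completion.chunk payloads from a text/event-stream body."""
    chunks = []
    for line in raw.splitlines():
        if not line.startswith("data: "):
            continue  # skip comments, blank keep-alives, and event fields
        data = line[len("data: "):]
        if data == "[DONE]":  # stream terminator sentinel
            break
        chunks.append(json.loads(data))
    return chunks

# A sample two-chunk stream (shape illustrative, mirroring OpenAI's format).
sample = (
    'data: {"object": "chat.completion.chunk", "choices": [{"delta": {"content": "Hel"}}]}\n'
    'data: {"object": "chat.completion.chunk", "choices": [{"delta": {"content": "lo"}}]}\n'
    'data: [DONE]\n'
)

text = "".join(c["choices"][0]["delta"]["content"] for c in parse_sse_chunks(sample))
print(text)  # Hello
```

In a real client you would feed the response body to this parser incrementally; SDKs such as the OpenAI client handle this buffering for you.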
POST /v1/models/list
Returns the models enabled for your org (including private checkpoints). Use this instead of OpenAI’s GET /v1/models. The response includes each model’s id, created, and ownership metadata so you can dynamically populate dropdowns or perform health checks.
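A minimal sketch of consuming that metadata, assuming an OpenAI-style list envelope (the response shape and model ids below are illustrative, not taken from a live response):

```python
import json

# Sample response body (illustrative) from POST /v1/models/list.
response_body = json.dumps({
    "object": "list",
    "data": [
        {"id": "org/private-checkpoint", "object": "model",
         "created": 1700000000, "owned_by": "your-org"},
        {"id": "public-model", "object": "model",
         "created": 1690000000, "owned_by": "generalcompute"},
    ],
})

models = json.loads(response_body)["data"]
# Populate a dropdown with model ids, newest first.
dropdown = [m["id"] for m in sorted(models, key=lambda m: m["created"], reverse=True)]
print(dropdown)
```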
GET /v1/public/models
Unauthenticated endpoint that powers the Models & Pricing page.

Authentication
All requests require a bearer token generated in the dashboard. Set the Authorization header to Bearer <GENERALCOMPUTE_API_KEY> and target the correct base URL for your environment. See API Keys & Base URLs for details.
The schema exposes the ApiKeyAuth security scheme, so Mintlify’s API Explorer will automatically prompt you for a key.
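Constructing the header by hand is straightforward; this stdlib-only sketch builds an authenticated request without sending it (the base URL is a placeholder, not the documented production host; substitute the value from API Keys & Base URLs):

```python
import urllib.request

API_KEY = "<GENERALCOMPUTE_API_KEY>"  # generated in the dashboard
BASE_URL = "https://api.generalcompute.example"  # placeholder; see API Keys & Base URLs

# Build (but do not send) a POST to /v1/models/list with the bearer token set.
req = urllib.request.Request(
    f"{BASE_URL}/v1/models/list",
    data=b"{}",
    method="POST",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
print(req.get_header("Authorization"))
```

Sending it is then a single `urllib.request.urlopen(req)` call, though in practice most integrations simply point an existing OpenAI-compatible SDK at the base URL.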
