Build a complete streaming chat application using Next.js and the Vercel AI SDK, powered by General Compute’s fast inference.
Clone the full example project:
git clone https://github.com/generalcompute/docs
cd docs/examples/vercel-ai-chat
npm install
Or build it from scratch:
1. Install dependencies
npx create-next-app@latest my-chat-app
cd my-chat-app
npm install ai @ai-sdk/openai-compatible @ai-sdk/react
2. Create the API route
The Vercel AI SDK's @ai-sdk/openai-compatible provider works with any OpenAI-compatible API, including General Compute. Create the route at app/api/chat/route.ts, the default endpoint that useChat posts to:
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { streamText } from "ai";
const generalcompute = createOpenAICompatible({
name: "generalcompute",
baseURL: "https://api.generalcompute.com/v1",
headers: {
Authorization: `Bearer ${process.env.GENERALCOMPUTE_API_KEY}`,
},
});
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: generalcompute("minimax-m2.7"),
// Convert UI messages (parts format) to model messages (content format)
messages: messages.map((m: { role: string; parts?: { type: string; text: string }[]; content?: string }) => ({
role: m.role,
content: m.parts
? m.parts.filter((p) => p.type === "text").map((p) => p.text).join("")
: m.content,
})),
});
return result.toUIMessageStreamResponse();
}
We’re using minimax-m2.7 here — our best general-purpose model with 160k context at an unbeatable price. Swap it for deepseek-v3.2 if you need stronger reasoning. See all models on the Models & Pricing page.
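If you want to compare the two models before wiring up the UI, a small standalone script works. This is just a sketch: the file name and the npx tsx runner are suggestions, and it assumes GENERALCOMPUTE_API_KEY is exported in your shell.
// compare-models.ts — quick sanity check of the provider outside Next.js.
// Run with: npx tsx compare-models.ts (tsx is one option; any TS runner works).
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { generateText } from "ai";

const generalcompute = createOpenAICompatible({
  name: "generalcompute",
  baseURL: "https://api.generalcompute.com/v1",
  headers: { Authorization: `Bearer ${process.env.GENERALCOMPUTE_API_KEY}` },
});

async function main() {
  // Try both models mentioned above and print their answers side by side.
  for (const modelId of ["minimax-m2.7", "deepseek-v3.2"]) {
    const { text } = await generateText({
      model: generalcompute(modelId),
      prompt: "In one sentence, explain what a context window is.",
    });
    console.log(`${modelId}: ${text}`);
  }
}

main().catch(console.error);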
3. Create the chat UI
The useChat hook from @ai-sdk/react handles message state and streaming; you manage the input state yourself and call sendMessage() to send messages. Replace the contents of app/page.tsx with the following component:
"use client";
import { useChat } from "@ai-sdk/react";
import { useState } from "react";
export default function Chat() {
const { messages, sendMessage, status } = useChat();
const [input, setInput] = useState("");
const isLoading = status === "streaming" || status === "submitted";
return (
<div style={{ maxWidth: 600, margin: "0 auto", padding: 20 }}>
<h1>Chat with General Compute</h1>
<div style={{ marginBottom: 20 }}>
{messages.map((m) => (
<div key={m.id} style={{ marginBottom: 10 }}>
<strong>{m.role === "user" ? "You" : "AI"}:</strong>
<p>
{m.parts
?.filter((p) => p.type === "text")
.map((p) => p.text)
.join("")}
</p>
</div>
))}
</div>
<form
onSubmit={(e) => {
e.preventDefault();
if (!input.trim()) return;
sendMessage({ text: input });
setInput("");
}}
style={{ display: "flex", gap: 8 }}
>
<input
value={input}
onChange={(e) => setInput(e.target.value)}
placeholder="Say something..."
style={{ flex: 1, padding: 8 }}
/>
<button type="submit" disabled={isLoading}>
Send
</button>
</form>
</div>
);
}
4. Add your API key
Create a .env.local file in your project root:
GENERALCOMPUTE_API_KEY=gc_your_api_key_here
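If the variable is missing, requests will typically fail with an authentication error from the API. An optional guard at the top of the route surfaces the problem sooner with a clearer message (a minimal sketch):
// Optional: at the top of app/api/chat/route.ts, fail fast with a readable
// message if the key was never added to .env.local.
if (!process.env.GENERALCOMPUTE_API_KEY) {
  throw new Error(
    "GENERALCOMPUTE_API_KEY is not set. Add it to .env.local and restart the dev server."
  );
}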
5. Run it
npm run dev
Open http://localhost:3000 and start chatting. Responses stream in token by token.
Next steps
- Swap minimax-m2.7 for deepseek-v3.2 to try reasoning capabilities
- Add system prompts by prepending to the messages array in the API route (see the sketch after this list)
- Deploy to Vercel with vercel deploy; just add GENERALCOMPUTE_API_KEY to your environment variables
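For the system prompt idea, one way is to prepend a system message to the converted messages inside the POST handler from step 2 (a minimal sketch; the prompt text is only a placeholder):
// Inside the POST handler: convert the UI messages as before, then prepend
// a system message before handing everything to streamText.
const modelMessages = messages.map((m: { role: string; parts?: { type: string; text: string }[]; content?: string }) => ({
  role: m.role,
  content: m.parts
    ? m.parts.filter((p) => p.type === "text").map((p) => p.text).join("")
    : m.content,
}));

const result = streamText({
  model: generalcompute("minimax-m2.7"),
  messages: [
    { role: "system", content: "You are a concise, friendly assistant." },
    ...modelMessages,
  ],
});
streamText also accepts a separate system option if you prefer to keep the prompt out of the messages array.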