Qhaigc exposes a native Claude Messages API endpoint at /v1/messages. You can use it exactly as you would the Anthropic API — including the anthropic-version header, the system field, and the content[] response format. Existing Anthropic SDK code works without modification.

Endpoint: POST https://api.qhaigc.net/v1/messages
You can also use style: "claude" in the body of POST /v1/chat/completions to receive a Claude-format response from the unified endpoint. The dedicated /v1/messages path shown here is the recommended approach for Anthropic SDK compatibility.
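As a sketch of the difference between the two routes, the request payloads can be compared side by side (paths and the style field are as described above; the model name and message content are illustrative):

```python
import json

# Native Claude Messages endpoint: POST https://api.qhaigc.net/v1/messages
messages_payload = {
    "model": "claude-opus-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello"}],
}

# Unified endpoint: POST https://api.qhaigc.net/v1/chat/completions
# with style: "claude" to receive a Claude-format response.
chat_payload = {
    "model": "claude-opus-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello"}],
    "style": "claude",
}

print(json.dumps(chat_payload, indent=2))
```

The bodies are otherwise identical; only the path and the style field differ.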

Request Headers

Header | Required | Description
Authorization: Bearer <YOUR_API_KEY> | Yes | Your Qhaigc API key
anthropic-version: 2023-06-01 | Yes | Claude Messages API version identifier
Content-Type: application/json | Yes | JSON request body
You can also authenticate with x-api-key: <YOUR_API_KEY> instead of the Authorization: Bearer header, matching the Anthropic SDK’s default behavior.
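A minimal sketch of the two interchangeable header sets (the key value is a placeholder):

```python
API_KEY = "sk-your-api-key-here"  # placeholder key

# Option 1: Bearer token in the Authorization header
bearer_headers = {
    "Authorization": f"Bearer {API_KEY}",
    "anthropic-version": "2023-06-01",
    "Content-Type": "application/json",
}

# Option 2: x-api-key header, matching the Anthropic SDK's default behavior
x_api_key_headers = {
    "x-api-key": API_KEY,
    "anthropic-version": "2023-06-01",
    "Content-Type": "application/json",
}
```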

Request Parameters

model
string
Required
The Claude model to use, e.g. claude-opus-4-20250514 or claude-sonnet-4-5.
messages
array
Required
Array of conversation turns. Each object must have role ("user" or "assistant") and content (a string or array of content blocks).
max_tokens
integer
Required
Maximum number of tokens to generate in the response. This field is required by the Claude Messages API.
system
string
A system prompt that sets context and instructions for the model before the conversation begins.
temperature
number
Sampling temperature between 0 and 1. Lower values are more deterministic; higher values are more varied.
stream
boolean
Set to true to receive the response as a stream of server-sent events in Claude’s streaming format.
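Putting the parameters above together, a complete request body might look like this (all values are illustrative):

```python
import json

body = {
    "model": "claude-opus-4-20250514",   # required
    "max_tokens": 1024,                  # required by the Messages API
    "system": "You are a concise technical assistant.",
    "temperature": 0.3,
    "stream": False,
    "messages": [                        # required
        {"role": "user", "content": "Hello, Claude."},
    ],
}

print(json.dumps(body, indent=2))
```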

Response Fields

The Claude Messages API returns a different structure from the OpenAI format. The content is in a content[] array of typed blocks rather than choices[].message.content.
id
string
Unique message identifier, prefixed with msg_.
type
string
Always "message".
role
string
Always "assistant" for the model’s reply.
model
string
The Claude model that generated the response.
content
array
Array of content blocks. Each block has a type field. For standard text responses, you receive {"type": "text", "text": "..."}.
stop_reason
string
Why generation ended. Common values: "end_turn" (natural completion), "max_tokens" (hit the limit), "stop_sequence" (stop sequence matched).
usage
object
Token usage for this request.
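Given the structure above, a response is typically handled by concatenating the text blocks in content[] and checking stop_reason. A sketch using an abbreviated sample payload:

```python
import json

# Abbreviated sample response in the Claude Messages format.
raw = '''{
  "id": "msg_01ABCDEF",
  "type": "message",
  "role": "assistant",
  "model": "claude-opus-4-20250514",
  "content": [{"type": "text", "text": "Hello!"}],
  "stop_reason": "end_turn",
  "usage": {"input_tokens": 10, "output_tokens": 3}
}'''

msg = json.loads(raw)

# Join only the text blocks; other block types are skipped.
text = "".join(b["text"] for b in msg["content"] if b["type"] == "text")

if msg["stop_reason"] == "max_tokens":
    print("Warning: response was truncated at the token limit.")

print(text)  # → Hello!
```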

Code Examples

import anthropic

client = anthropic.Anthropic(
    api_key="sk-your-api-key-here",
    base_url="https://api.qhaigc.net"
)

message = client.messages.create(
    model="claude-opus-4-20250514",
    max_tokens=1024,
    system="You are a concise technical assistant.",
    messages=[
        {"role": "user", "content": "Summarize the core value of quantum computing in 3 points."}
    ],
    temperature=0.3
)

print(message.content[0].text)

Example Response

{
  "id": "msg_01ABCDEF",
  "type": "message",
  "role": "assistant",
  "model": "claude-opus-4-20250514",
  "content": [
    {
      "type": "text",
      "text": "1) Exponential speedup for certain problems...\n2) Enables new cryptographic paradigms...\n3) Unlocks simulation of quantum systems..."
    }
  ],
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 33,
    "output_tokens": 86
  }
}

Streaming Example

import anthropic

client = anthropic.Anthropic(
    api_key="sk-your-api-key-here",
    base_url="https://api.qhaigc.net"
)

with client.messages.stream(
    model="claude-opus-4-20250514",
    max_tokens=512,
    messages=[{"role": "user", "content": "Write a haiku about the ocean."}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
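If you are not using the SDK, the stream arrives as server-sent events in Claude's streaming format, where text chunks are carried in content_block_delta events with a text_delta payload. A minimal parser sketch over a hand-written sample stream (event names follow Claude's published streaming format; the sample data is illustrative, not captured output):

```python
import json

# Illustrative excerpt of an SSE stream in Claude's streaming format.
sample_sse = """event: message_start
data: {"type": "message_start"}

event: content_block_delta
data: {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "Wave on "}}

event: content_block_delta
data: {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "wave..."}}

event: message_stop
data: {"type": "message_stop"}
"""

chunks = []
for line in sample_sse.splitlines():
    if not line.startswith("data: "):
        continue
    event = json.loads(line[len("data: "):])
    if event.get("type") == "content_block_delta":
        delta = event.get("delta", {})
        if delta.get("type") == "text_delta":
            chunks.append(delta["text"])

print("".join(chunks))  # → Wave on wave...
```

The SDK's stream.text_stream shown above performs this same filtering for you.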