Use this endpoint to send a conversation history and receive an AI-generated reply in the standard OpenAI Chat Completions format. Because Qhaigc is fully OpenAI-compatible, you can use the official OpenAI SDK without any code changes: just swap the base_url.

Endpoint: POST https://api.qhaigc.net/v1/chat/completions

Request Parameters

model
string
Required
The model to use for completion. Examples: gpt-4o, gpt-4o-mini, gpt-5-mini. Call GET /v1/models to retrieve the full list of available models.
messages
array
Required
The conversation history as an array of message objects. Each object must contain role and content.
stream
boolean
Default: false
When true, the response is delivered as a stream of server-sent events (SSE). Each event contains a partial delta until the [DONE] sentinel is sent.
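On the wire, each SSE event carries a JSON chunk with object "chat.completion.chunk" and a partial delta, followed by the [DONE] sentinel. An illustrative (not verbatim) event sequence:

```text
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hel"}}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"lo"}}]}

data: [DONE]
```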
stream_options
object
Additional options for streaming responses. Pass {"include_usage": true} to receive token usage data in the final stream chunk.
max_tokens
integer
The maximum number of tokens to generate. Set this to control costs and prevent unexpectedly long responses.
temperature
number
Sampling temperature between 0 and 2. Higher values produce more varied output; lower values make responses more focused and deterministic. Defaults to 1.

Response Fields

id
string
A unique identifier for this completion, prefixed with chatcmpl-.
object
string
Always "chat.completion" for non-streaming responses.
created
integer
Unix timestamp (seconds) of when the completion was generated.
model
string
The model that produced the completion.
choices
array
Array of completion choices. For most requests this contains a single element.
usage
object
Token counts for this request.
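The fields above combine into a response body like the following. The sketch below parses an illustrative (not real) payload to show the shape of each field:

```python
import json

# Illustrative non-streaming response body; values are made up for the example.
sample = """
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1719876543,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello!"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 9, "completion_tokens": 2, "total_tokens": 11}
}
"""

resp = json.loads(sample)
print(resp["choices"][0]["message"]["content"])  # Hello!
print(resp["usage"]["total_tokens"])             # 11
```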

Code Examples

Basic Example

from openai import OpenAI

client = OpenAI(
    api_key="sk-your-api-key-here",
    base_url="https://api.qhaigc.net/v1"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain RAG in three sentences."}
    ],
    max_tokens=256,
    temperature=0.7
)

print(response.choices[0].message.content)

Streaming Example

from openai import OpenAI

client = OpenAI(
    api_key="sk-your-api-key-here",
    base_url="https://api.qhaigc.net/v1"
)

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a short poem about the sea."}],
    stream=True,
    stream_options={"include_usage": True}
)

for chunk in stream:
    delta = chunk.choices[0].delta if chunk.choices else None
    if delta and delta.content:
        print(delta.content, end="", flush=True)

print()  # newline after stream ends
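With include_usage set, the final chunk typically arrives with an empty choices list and a populated usage field, which is why the loop above guards on chunk.choices. The accumulation logic can be sketched with stand-in chunk objects rather than a live stream:

```python
from types import SimpleNamespace as NS

# Stand-in chunks mimicking the SDK's stream objects (illustrative only):
# two content deltas, then a final usage-only chunk with empty choices.
chunks = [
    NS(choices=[NS(delta=NS(content="The sea"))], usage=None),
    NS(choices=[NS(delta=NS(content=" is vast."))], usage=None),
    NS(choices=[], usage=NS(prompt_tokens=12, completion_tokens=5, total_tokens=17)),
]

text, usage = [], None
for chunk in chunks:
    if chunk.choices and chunk.choices[0].delta.content:
        text.append(chunk.choices[0].delta.content)
    if chunk.usage is not None:  # final chunk when include_usage is set
        usage = chunk.usage

print("".join(text))        # The sea is vast.
print(usage.total_tokens)   # 17
```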

Multimodal Input

To send an image alongside your text, replace the content string with an array of content parts. Each part is either {"type": "text", "text": "..."} or {"type": "image_url", "image_url": {"url": "https://..."}}. Only models that support vision (such as gpt-4o) will process the image.
# Reuses the client configured in the earlier examples.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}}
            ]
        }
    ]
)

print(response.choices[0].message.content)
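For a local image without a public URL, the image_url part can instead carry a base64 data URL, which the OpenAI-compatible content-part format also accepts. A minimal sketch, using stand-in bytes in place of a real file read:

```python
import base64

# Stand-in for the contents of a real JPEG file
# (in practice: open("photo.jpg", "rb").read()).
image_bytes = b"\xff\xd8\xff\xe0fake-jpeg-bytes"
b64 = base64.b64encode(image_bytes).decode("ascii")

# Content part embedding the image inline as a data URL.
image_part = {
    "type": "image_url",
    "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
}

print(image_part["image_url"]["url"][:22])  # data:image/jpeg;base64
```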