LangBot manages AI models through its Configure Models (or Models) page in the admin interface. You add each Qhaigc model individually by specifying the model name, provider, request URL, and API key — then assign the model to the pipeline or bot that should use it.

Prerequisites

  • LangBot deployed and its admin interface accessible
  • A Qhaigc API key — get yours from the API Tokens page

Configure LangBot

1. Open the model configuration page

Log in to the LangBot admin interface and open Configure Models (or Models).
2. Add a new LLM model

Click the button to add a new model, then fill in the following fields:

  • Model Name: the model ID you want to use (e.g. gpt-4o)
  • Model Provider: select the OpenAI-compatible option
  • Request URL: https://api.qhaigc.net/v1
  • API Key: your Qhaigc API key (starts with sk-)

Save the model.
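Because the provider is OpenAI-compatible, the three values above map directly onto a standard chat-completions request. The sketch below shows the request that this configuration produces; the key is a placeholder and gpt-4o is just an illustrative model name:

```python
import json

# Values entered on the Configure Models page (the key is a placeholder).
BASE_URL = "https://api.qhaigc.net/v1"
API_KEY = "sk-your-qhaigc-key"
MODEL = "gpt-4o"  # any model ID available on your account

def build_chat_request(prompt: str) -> tuple[str, dict, bytes]:
    """Return the URL, headers, and JSON body for an OpenAI-style chat call."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return url, headers, body
```

Sending this payload with any HTTP client is a quick way to confirm the credentials work before wiring them into LangBot.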
3. Assign the model to a pipeline or bot

In the pipeline or bot configuration, select the model you just added as the active model, then save and reload the configuration.
4. Configure an embedding model (if using a knowledge base)

If you plan to use LangBot’s knowledge base features, add a separate embedding model entry using the same Request URL and API Key, and specify an embedding model name (e.g. bge-m3 or text-embedding-3-large).
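The embedding entry reuses the same credentials but targets the standard embeddings route. A minimal sketch, assuming bge-m3 as the model name and a placeholder key:

```python
import json

BASE_URL = "https://api.qhaigc.net/v1"  # same Request URL as the LLM entry
API_KEY = "sk-your-qhaigc-key"          # same API key (placeholder shown)
EMBEDDING_MODEL = "bge-m3"              # or text-embedding-3-large

def build_embedding_request(texts: list[str]) -> tuple[str, dict, bytes]:
    """Return the URL, headers, and JSON body for an OpenAI-style embeddings call."""
    url = f"{BASE_URL}/embeddings"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": EMBEDDING_MODEL,
        "input": texts,
    }).encode("utf-8")
    return url, headers, body
```

Note that only the endpoint path and model name differ from the chat configuration; the URL and key are identical.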

Verify the Connection

  • Send a message through the platform your bot is connected to (for example, a messaging app).
  • A successful reply confirms that the model is active and reachable.
  • If knowledge base queries return unexpected results, confirm that the embedding model is separately configured.
A plain LLM configuration is sufficient for general conversation. Knowledge base features require an additional embedding model entry, configured with the same Request URL and API key but its own model name.

Troubleshooting

Connection fails: check the Request URL, API key, and model name for typos. All three must be correct for the model to work.

Knowledge base queries fail: verify that an embedding model has been configured separately from the LLM model.
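The connection checks above can be run before saving the entry. The helper below is hypothetical (not part of LangBot); it only encodes the facts used in this guide, namely that the Request URL ends in /v1 and Qhaigc keys start with sk-:

```python
def check_model_config(request_url: str, api_key: str, model_name: str) -> list[str]:
    """Return a list of likely misconfigurations (empty means no obvious issue)."""
    problems = []
    if not request_url.startswith("https://"):
        problems.append("Request URL should start with https://")
    if not request_url.rstrip("/").endswith("/v1"):
        problems.append("Request URL should end with /v1 (e.g. https://api.qhaigc.net/v1)")
    if not api_key.startswith("sk-"):
        problems.append("Qhaigc API keys start with sk-")
    if not model_name.strip():
        problems.append("Model Name must not be empty")
    return problems
```

Run it against each field you entered; an empty result rules out the most common typos, though it cannot detect an invalid (as opposed to malformed) key or model name.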