Prerequisites
- LangBot deployed and its admin interface accessible
- A Qhaigc API key, available from the API Tokens page
Configure LangBot
Open the model configuration page
Log in to the LangBot admin interface and open Configure Models (or Models).
Add a new LLM model
Click the button to add a new model, then fill in the following fields:

| Field | Value |
|---|---|
| Model Name | The model ID you want to use (e.g. `gpt-4o`) |
| Model Provider | Select the OpenAI-compatible option |
| Request URL | `https://api.qhaigc.net/v1` |
| API Key | Your Qhaigc API key (starts with `sk-`) |

Save the model.
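It can help to see what these fields turn into on the wire. The sketch below assembles the same OpenAI-compatible chat-completions request that LangBot sends using the values above; the model name and API key are placeholders, and `build_chat_request` is an illustrative helper, not part of LangBot.

```python
# Sketch of the OpenAI-compatible request built from the fields above.
# BASE_URL is the Request URL field; API_KEY is a placeholder value.
BASE_URL = "https://api.qhaigc.net/v1"
API_KEY = "sk-your-key-here"  # placeholder; substitute your real Qhaigc key

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble the endpoint URL, auth header, and JSON body for a chat call."""
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_chat_request("gpt-4o", "Hello")
print(req["url"])  # https://api.qhaigc.net/v1/chat/completions
```

If a request shaped like this succeeds with your key (for example via `curl`), any connection failure inside LangBot points at the model entry rather than the endpoint.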
Assign the model to a pipeline or bot
In the pipeline or bot configuration, select the newly added model as the active model, then save and reload the configuration.
Verify the Connection
- Send a message through the platform your bot is connected to (for example, a messaging app).
- A successful reply confirms that the model is active and reachable.
- If knowledge base queries return unexpected results, confirm that the embedding model is separately configured.
A plain LLM configuration is sufficient for general conversation. Knowledge base features require an additional embedding model entry with its own Request URL and API key.
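To illustrate why a separate entry is needed: both entries share the same base Request URL, but chat and knowledge-base retrieval resolve to different OpenAI-compatible paths. The paths below follow the standard OpenAI-compatible convention; which embedding models Qhaigc actually serves is an assumption to confirm against its documentation.

```python
# Hedged sketch: LLM and embedding entries share one base URL but hit
# different OpenAI-compatible endpoint paths.
BASE_URL = "https://api.qhaigc.net/v1"

def endpoint_for(kind: str) -> str:
    """Map a model entry's role to its OpenAI-compatible endpoint path."""
    paths = {
        "llm": "/chat/completions",  # ordinary conversation
        "embedding": "/embeddings",  # knowledge base vectorization/retrieval
    }
    return BASE_URL + paths[kind]

print(endpoint_for("llm"))        # https://api.qhaigc.net/v1/chat/completions
print(endpoint_for("embedding"))  # https://api.qhaigc.net/v1/embeddings
```

This is why an LLM-only setup works for chat but leaves knowledge base queries unanswered: nothing is configured to call the embeddings path.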