## Prerequisites
- Access to an AnythingLLM desktop app or self-hosted instance
- A Qhaigc API key — get yours from the API Tokens page
## Step 1: Configure the LLM Provider

### Fill in the LLM fields

In AnythingLLM, open the LLM preference settings and select the generic OpenAI-compatible provider. Enter the following values:

| Field | Value |
|---|---|
| Base URL | `https://api.qhaigc.net/v1` |
| API Key | Your Qhaigc API key (starts with `sk-`) |
| Chat Model Name | The model you want to use (e.g. `gpt-4o`) |
| Model context window | The context window size for your chosen model |
| Max Tokens | The maximum number of output tokens per response |

Save the configuration.
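Before relying on the values inside AnythingLLM, you can sanity-check them directly against the endpoint. A minimal sketch using only the Python standard library (the helper names are ours; the request shape is the standard OpenAI-compatible one that AnythingLLM sends):

```python
import json
import urllib.request


def build_chat_request(base_url, api_key, model, prompt, max_tokens=512):
    """Build an OpenAI-style chat completion request as (url, headers, body)."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return url, headers, body


def send_chat(base_url, api_key, model, prompt):
    """Send the request and return the assistant's reply text."""
    url, headers, body = build_chat_request(base_url, api_key, model, prompt)
    req = urllib.request.Request(
        url, data=json.dumps(body).encode("utf-8"), headers=headers
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Calling `send_chat("https://api.qhaigc.net/v1", "sk-...", "gpt-4o", "Say hello.")` with a real key should return a short greeting. An HTTP 401 points to a bad key; a connection error points to a wrong Base URL.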
## Step 2: Configure the Embedding Provider

Repeat the process for embeddings: in the embedding preference settings, select the OpenAI-compatible embedder, enter the same Base URL and API key, and specify the embedding model you want to use.
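The embedding configuration mirrors the LLM one, except that requests go to the `/embeddings` route. A small sketch of the request shape, assuming the endpoint follows the standard OpenAI-compatible embeddings API (the helper name and model name are placeholders):

```python
def build_embedding_request(base_url, api_key, model, texts):
    """Build an OpenAI-style embeddings request as (url, headers, body).

    `texts` is a list of strings to embed, e.g. document chunks.
    """
    url = f"{base_url.rstrip('/')}/embeddings"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "input": texts}
    return url, headers, body
```

The response contains one embedding vector per input string, which AnythingLLM stores in its vector database for retrieval.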
## Step 3: Test the Setup

### Run a connection test

Use the built-in connection test (if available) to confirm that both configurations were saved correctly.
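If the UI offers no test button, the same check can be run from a script. A sketch assuming the endpoint exposes the standard OpenAI-compatible `/models` listing route (the helper names are ours):

```python
import json
import urllib.request


def models_url(base_url):
    """URL of the OpenAI-compatible model listing endpoint."""
    return f"{base_url.rstrip('/')}/models"


def list_models(base_url, api_key):
    """Return the model ids the endpoint offers.

    Raises urllib.error.HTTPError (e.g. 401) on a bad key, and
    urllib.error.URLError if the Base URL is unreachable.
    """
    req = urllib.request.Request(
        models_url(base_url),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return [m["id"] for m in json.load(resp)["data"]]
```

If `list_models("https://api.qhaigc.net/v1", "sk-...")` returns a non-empty list with a real key, both the Base URL and the key are working.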
### Verify the Connection

Confirm that:
- The LLM configuration saves without error.
- The embedding configuration saves without error.
- You can upload a document to a workspace and receive accurate, document-grounded answers in chat.
Note: AnythingLLM separates Model context window (the model's total context length, covering prompt and output together) from Max Tokens (the cap on output tokens per response). These are distinct fields; do not enter the same value in both.
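The distinction matters because the output reservation must fit inside the window: whatever Max Tokens reserves for the response is no longer available for the prompt and retrieved context. A small sketch of that budget arithmetic (the numbers are illustrative, not limits of any specific model):

```python
def prompt_budget(context_window, max_tokens):
    """Tokens left for prompt + retrieved context after reserving output.

    The context window covers input and output together, while Max Tokens
    caps output alone, so the reservation must be strictly smaller.
    """
    if max_tokens >= context_window:
        raise ValueError("Max Tokens must be smaller than the context window")
    return context_window - max_tokens


# Illustrative: a 128,000-token window with a 4,096-token output cap
# leaves 123,904 tokens for the prompt and retrieved document chunks.
```

Setting both fields to the same value would leave no room for the prompt at all, which is why the configuration treats them separately.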