This tutorial covers connecting Qhaigc to Flowise for both chat and embedding use cases, using the ChatOpenAI and ChatOpenAI Custom nodes.
Prerequisites
- A running Flowise instance (local, Docker, or self-hosted)
- A Qhaigc API key — get one from the API Tokens page
Configuration Steps
Add a ChatOpenAI node
From the component library, drag a ChatOpenAI node onto the canvas. This serves as the primary chat model node.
Create or connect a credential
In the node panel, create a new OpenAI credential or select an existing one. Enter your Qhaigc API key (starts with sk-) as the API key value.
Set the Base Path
In the node’s Additional Parameters section, find the Base Path field and enter your Qhaigc API base URL. Include the /v1 suffix — Flowise’s ChatOpenAI node expects the full versioned endpoint.
Switch to ChatOpenAI Custom for unlisted models
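The "/v1 must be included" rule can be checked programmatically. Below is a minimal sketch (the helper name and the example host are hypothetical, not part of Flowise or Qhaigc) that normalizes a base URL so it always ends with the versioned path the ChatOpenAI node expects:

```python
def ensure_v1_suffix(base_url: str) -> str:
    """Return base_url with a single trailing /v1 segment and no trailing slash."""
    url = base_url.rstrip("/")
    if not url.endswith("/v1"):
        url += "/v1"
    return url

# The Base Path field should hold the versioned endpoint, e.g.:
print(ensure_v1_suffix("https://api.example.com"))     # hypothetical host
print(ensure_v1_suffix("https://api.example.com/v1/")) # already versioned: unchanged
```

Both calls above yield `https://api.example.com/v1` — a bare host gains the suffix, and an already-versioned URL is left alone.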
If the model you want to use does not appear in the default dropdown list, replace the ChatOpenAI node with a ChatOpenAI Custom node. This lets you type any model ID directly into the Model Name field.
Add an embeddings node for RAG flows
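Conceptually, ChatOpenAI Custom just passes whatever you type as the `model` field of an OpenAI-compatible chat completion request, with no dropdown restriction. A stdlib-only sketch of the request shape (the host, key placeholder, and model ID below are illustrative assumptions):

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, user_message: str) -> dict:
    """Assemble an OpenAI-compatible chat completion request (not sent anywhere)."""
    return {
        "url": f"{base_url.rstrip('/')}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,  # any model ID is accepted here, exactly like the Model Name field
            "messages": [{"role": "user", "content": user_message}],
        }),
    }

req = build_chat_request("https://api.example.com/v1", "sk-...", "my-custom-model", "Hello")
print(req["url"])  # https://api.example.com/v1/chat/completions
```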
If your chatflow includes a knowledge base or vector retrieval, you also need to configure an embedding component separately. Add an OpenAI Embeddings or OpenAI Embeddings Custom node and configure it with the same base URL and your API key.
Recommended embedding models:
- bge-m3
- text-embedding-3-large
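The embeddings node reuses the same base URL and key as the chat node but targets a different endpoint path. A hedged sketch of the resulting request (host and key are placeholders; the endpoint path follows the OpenAI-compatible convention):

```python
import json

def build_embedding_request(base_url: str, api_key: str, model: str, texts: list) -> dict:
    """Assemble an OpenAI-compatible embeddings request reusing the chat node's
    base URL and API key (not sent anywhere)."""
    return {
        "url": f"{base_url.rstrip('/')}/embeddings",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "body": json.dumps({"model": model, "input": texts}),
    }

req = build_embedding_request("https://api.example.com/v1", "sk-...", "bge-m3", ["hello world"])
print(req["url"])  # https://api.example.com/v1/embeddings
```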
Verifying the Connection
Your setup is working when:
- The ChatOpenAI node saves without connection errors
- The chatflow executes and returns a reply
- If you configured embeddings, the vectorization and retrieval steps also complete successfully
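When verification fails, the HTTP status returned by the endpoint usually points at the cause. The mapping below follows general OpenAI-compatible API conventions (an assumption, not Qhaigc-specific documentation):

```python
def diagnose_status(status: int) -> str:
    """Map common HTTP statuses to likely causes when verifying the connection.
    Meanings follow typical OpenAI-compatible API behavior (an assumption)."""
    if status == 200:
        return "OK: base path and API key are working"
    if status == 401:
        return "Unauthorized: check that the credential holds a valid sk- key"
    if status == 404:
        return "Not found: the Base Path may be missing its /v1 suffix"
    return f"Unexpected status {status}: inspect the Flowise logs"

print(diagnose_status(404))
```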
Frequently Asked Questions
I can’t find a Base Path field in the credential form — where is it?
The Base Path setting is in the node’s Additional Parameters section, not in the credential. Open the node panel and look for Additional Parameters to find it.
The default model list doesn’t include Qhaigc models — what do I do?
Switch from the ChatOpenAI node to the ChatOpenAI Custom node, which lets you enter any model ID manually.
Do I need to configure embeddings separately?
Yes. For RAG and knowledge base flows, you must configure an embedding node independently. The chat model node does not provide embeddings automatically.