Dialogue Large Models Integration
AI dialogue large models are deep learning models with extremely large parameter counts that can process and generate natural language and handle a wide range of tasks. Trained on large-scale data, they possess powerful understanding and generation capabilities and can complete complex tasks such as text dialogue and article creation.
Overview
This document explains in detail how to integrate mainstream AI dialogue large models into a website, covering platform selection, account registration, API Key acquisition and configuration, backend API invocation, unified multi-model encapsulation, and finally the complete process of quickly building a website that supports dialogue large models.
Getting Started
By selecting the **Dialogue Large Model Integration Template** `ai-chatbot-starter` provided by the EdgeOne Pages platform, you can build a **multi-model AI conversation** website. It mainly involves the following three **core components**:
- Register with a mainstream AI large model platform and obtain an API Key to authorize access to model capabilities and enable secure access.
- Debug and integrate the AI conversation large model APIs to complete intelligent backend calls and data interaction.
- Deeply integrate AI capabilities with the website pages to efficiently implement intelligent applications.
Register AI Conversation API Key
To implement the AI dialogue function, first sign up and obtain an API Key, which authorizes your requests to call the AI platform. For example, after registering with DeepSeek, you can visit https://platform.deepseek.com/api_keys to get an API Key.
Integrating AI Conversation Model API
1. Download Code
The previous section mentioned EdgeOne's ai-chatbot-starter template, which this document uses as the base for the walkthrough; all subsequent operations and integration flows are based on this project. First, execute `git clone https://github.com/tomcomtang/ai-chatbot-starter.git` to clone the project code locally.
2. AI Conversation Model API Integration
After applying for and configuring the API Keys for the large models, the next step is to get familiar with how the AI conversation model APIs are called. The following lists API call examples for some of the models; understanding them provides a basis for the subsequent unified integration and adaptation.
Standard API request example for DeepSeek:
```bash
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <DeepSeek API Key>" \
  -d '{
        "model": "deepseek-chat",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Hello!"}
        ],
        "stream": false
      }'
```
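For reference, the same request can also be issued from JavaScript. Below is a minimal sketch using `fetch` that mirrors the curl example above; the response shape follows DeepSeek's OpenAI-compatible format:

```javascript
// Minimal sketch: the same DeepSeek request issued with fetch (run in an async context).
// Never ship the API Key to the browser; this belongs in server-side code.
const response = await fetch('https://api.deepseek.com/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer <DeepSeek API Key>',
  },
  body: JSON.stringify({
    model: 'deepseek-chat',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Hello!' },
    ],
    stream: false,
  }),
});
const data = await response.json();
console.log(data.choices[0].message.content); // the assistant's reply
```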
Whichever AI models you need to integrate, their API calls must be encapsulated accordingly. The EdgeOne template of course provides such an implementation: it already supports mainstream models such as DeepSeek, OpenAI, Gemini, and Claude. The template's integration implementation mainly includes the following points:
- Edge function API encapsulation
- Use of edge functions to provide a lightweight API service
- Config injection via global variables
To call the AI conversation model API directly from the web, an API Key is required for authentication. Because the API Key must not be exposed in plaintext, an API service layer is needed to encapsulate the calls; a function service is the preferred choice for this layer, given its lightweight nature and extremely low cost. The template contains an edge function file for API encapsulation at `functions/api/ai/index.js`, with core code as follows:

```javascript
const { model, messages } = await request.json();
if (!model || !messages) {
  return new Response(JSON.stringify({ error: 'Missing model or messages' }), {
    status: 400,
    headers: { 'Content-Type': 'application/json' },
  });
}
if (model === 'deepseek-chat' || model === 'deepseek-reasoner') {
  return proxyDeepSeek(messages, model, env);
} else if (model === 'gpt-4o-mini') {
  return proxyOpenAI(messages, env);
} else if (model === 'gemini-flash') {
  return proxyGemini(messages, env);
} else if (model === 'claude') {
  return proxyClaude(messages, env);
} else if (model === 'gemini-flash-lite') {
  return proxyGeminiFlashLite(messages, env);
} else if (model === 'gemini-2-5-flash-lite') {
  return proxyGemini25FlashLite(messages, env);
} else {
  return new Response(JSON.stringify({ error: 'Unknown model' }), {
    status: 400,
    headers: { 'Content-Type': 'application/json' },
  });
}
```
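The per-model proxy helpers such as `proxyDeepSeek` are referenced above but not shown. As an illustration, such a helper could look roughly like the following; this is a hypothetical sketch, not the template's exact code:

```javascript
// Hypothetical sketch of a proxy helper; the template's actual implementation
// may differ (e.g. in how it handles streaming and errors).
async function proxyDeepSeek(messages, model, env) {
  const apiKey = env.DEEPSEEK_API_KEY;
  if (!apiKey) {
    return new Response(
      JSON.stringify({ error: 'DEEPSEEK_API_KEY not set in environment' }),
      { status: 500, headers: { 'Content-Type': 'application/json' } }
    );
  }
  // Forward the conversation to DeepSeek and stream the response back.
  const upstream = await fetch('https://api.deepseek.com/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ model, messages, stream: true }),
  });
  return new Response(upstream.body, {
    status: upstream.status,
    headers: { 'Content-Type': 'text/event-stream' },
  });
}
```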
In the function file, `onRequest` serves as the unified entry point. It reads the `model` and `messages` parameters from the request: `model` identifies which AI model is being called, and `messages` carries the dialogue messages. The web client accesses the edge function API service via `/api/ai`. If support for more AI conversation models is needed, developers must add the corresponding branch logic themselves.
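For context, the dispatch snippet above runs inside this entry point. Based on its use of `request` and `env`, the handler shape is roughly the following sketch (EdgeOne Pages functions receive a context object carrying the incoming request and injected environment):

```javascript
// Sketch of the entry-point shape implied by the dispatch code above.
export async function onRequest({ request, env }) {
  const { model, messages } = await request.json();
  // ...dispatch to the per-model proxy functions as shown above...
}
```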
The API Key must not be configured in plaintext in the `functions/api/ai/index.js` file. Instead, configure it via global variable injection: create a local `.env` file with the following content:

```
DEEPSEEK_API_KEY=YOUR_API_KEY
OPENAI_API_KEY=YOUR_API_KEY
GEMINI_API_KEY=YOUR_API_KEY
CLAUDE_API_KEY=YOUR_API_KEY
```
The edge function `functions/api/ai/index.js` reads the API Key via a global variable. Sample code:

```javascript
const apiKey = env.DEEPSEEK_API_KEY;
if (!apiKey) {
  return new Response(
    JSON.stringify({ error: 'DEEPSEEK_API_KEY not set in environment' }),
    { status: 500, headers: { 'Content-Type': 'application/json' } }
  );
}
```
Injecting the API Key through `env.DEEPSEEK_API_KEY` keeps it out of the source code and avoids API Key leakage.
Next, verify the function API. EdgeOne supports running it locally: in the template directory, execute the following commands:

```bash
npm install -g edgeone
edgeone pages init
edgeone pages link
edgeone pages dev
```
After the commands complete, the API starts locally on port 8088; you can test against this port to verify that the large model interface is working properly.
Execute the following command in the terminal for testing:

```bash
curl 'http://localhost:8088/api/ai' \
  -H 'Content-Type: application/json' \
  --data-raw '{"model":"deepseek-chat","messages":[{"role":"user","content":"Hello"}]}'
```
If the API Key is configured correctly, executing the above command returns the API response as streamed content.
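The streamed body can also be consumed incrementally from JavaScript. Below is a minimal sketch; the `/api/ai` path and request body shape follow the dispatch code shown earlier:

```javascript
// Sketch: call the local edge function and read the streamed response chunk by chunk.
const res = await fetch('http://localhost:8088/api/ai', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'deepseek-chat',
    messages: [{ role: 'user', content: 'Hello' }],
  }),
});
const reader = res.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(decoder.decode(value, { stream: true })); // print chunks as they arrive
}
```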
3. Static Site Integration with AI Interface
The template also provides a default UI, which likewise depends on the injected global variables: it builds the AI model list from them. The implementation requests the edge function API at `functions/api/models/index.js` to get the model list configuration. The core code that returns the model list based on the global variables is as follows:

```javascript
if (env.DEEPSEEK_API_KEY) {
  models.push(
    { value: "deepseek-chat", label: "DeepSeek-V3" },
    { value: "deepseek-reasoner", label: "DeepSeek-R1" }
  );
}
```
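On the page side, the model list can then be fetched from the corresponding route. A sketch follows; the `/api/models` path is inferred from the functions directory layout, and the response shape from the `models.push` call above:

```javascript
// Sketch: fetch the available model list from the edge function.
async function loadModels() {
  const res = await fetch('/api/models');
  if (!res.ok) throw new Error(`Failed to load models: ${res.status}`);
  return res.json(); // e.g. [{ value: "deepseek-chat", label: "DeepSeek-V3" }, ...]
}
```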
For the template's UI styling, you can start it manually and view it via the command line; if you have customization needs, you can also modify the template's UI code. To preview the UI locally, start it with `npm run dev`; after startup, open http://localhost:3000/ to view the web page UI. As shown in the figure below:

At this point, the main integration of the AI conversation model is complete. Next, you only need to push the project to GitHub and deploy it via the EdgeOne console.
Deploying to EdgeOne Pages
1. Publishing Code to Git
For both AI conversation and AI text-to-image generation, the EdgeOne Pages deployment process is the same. The first step is to publish the local code to GitHub, which can be done by logging in to GitHub via git and pushing the code directly.
2. Importing a Project to Pages
After submission, if you are already an EdgeOne Pages user and have linked your GitHub account, open the Console to deploy the submitted project. On the deployment page, you need to configure the API Key for each AI model: click "Environment Variables" to start configuring them. Configuring API Keys here is equivalent to setting the global variables in the local .env file during development; after the environment variables are configured in the console, they are injected into the cloud environment when the project is deployed. The configuration interface is as follows:

3. Publishing to Pages
Once configured, click the "Start Deployment" button and wait for the deployment to complete. A deployment success screen will then be displayed, and the entire solution deployment process is complete.