
Dialogue Large Models Integration

This document explains how to integrate large AI models and quickly build an AI conversation application. It covers API Key configuration, a walkthrough of the code, and local development and debugging, and describes two technical approaches: native API calls and AI SDK encapsulated invocation. The former suits deep customization; the latter suits quick integration.


Getting Started

EdgeOne Pages offers multiple AI conversation templates, supporting two access methods: native API calls and AI SDK encapsulated invocation. For example:

Native API call template: ai-chatbot-starter
AI SDK Encapsulated Invocation Template: ai-sdk-chatbot-starter

This document uses these two templates as examples to walk through both integration paths, native API calls and AI SDK encapsulated invocation, in detail.


Pre-Deployment Instructions

To implement the dialogue feature, you must first apply for an API Key. The following are the API Key application addresses of mainstream AI providers:



One-Click Deployment

Start by deploying one of the templates introduced above; deployment generates a copy of the template in your GitHub account. The integration details are then explained step by step based on that copy.

Take ai-sdk-chatbot-starter as an example. Click the "Deploy" button on the template details page to enter the EdgeOne Pages console. On the project deployment page, an environment variable configuration list appears; these variables hold the API Keys of the AI conversation models to be used. Different templates display different configuration lists, but at least one valid API Key must be configured. An example of the configuration interface is shown below:


Once configured, click "Start Deployment" to initiate the deployment.


Integration Details

Download Code

After a successful deployment, a project identical to the template is generated in your GitHub account. Use the clone command to download the code locally. Again taking the ai-sdk-chatbot-starter template as an example, run the following commands in the terminal:
git clone https://github.com/[your_github_account]/vercel-ai-sdk-chatbot.git
cd vercel-ai-sdk-chatbot


Native API Call

If the cloned project is the ai-chatbot-starter template, it integrates via native API calls. The flow is: the frontend sends a request → the edge function processes it → the AI model API is called → the response is returned → the frontend renders the message list. This flow implements the conversation; each step is explained below.

1. Sending a Request From the Frontend
Start with the frontend request. In the project, the core code that initiates the request is located in the app/apiApp.js file; the frontend sends a request to the /api/ai endpoint. The core code is as follows:
const res = await fetch("/api/ai", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ model, messages }),
  signal: controller.signal,
});

Here, messages is the message history between the user and the AI, and model is the name of the AI model to call.

2. Edge Function Processing
The processing logic of the edge function is implemented in the functions/api/ai/index.js file. This file handles requests sent from the frontend and calls the corresponding AI model API based on the model parameter. Core code is as follows:
const { model, messages } = await request.json();

if (!model || !messages) {
  return new Response(JSON.stringify({ error: "Missing model or messages" }), {
    status: 400,
    headers: { "Content-Type": "application/json" },
  });
}

if (model === "deepseek-chat" || model === "deepseek-reasoner") {
  return proxyDeepSeek(messages, model, env);
}

Here DeepSeek is used as an example of the processing logic; the template also implements handlers for mainstream models such as Claude, OpenAI, and Google. The env parameter contains the configured environment variable values. In the subsequent AI API calls, the API Keys are read from env rather than being hard-coded or exposed to the frontend, which keeps the keys secure.
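As a minimal illustration (the variable name DEEPSEEK_API_KEY is an assumption and may differ from the template's actual configuration), reading a key from env inside the edge function might look like this:

// Hypothetical example: read the DeepSeek key configured as an environment
// variable in the EdgeOne Pages console, and fail fast if it is missing.
const apiKey = env.DEEPSEEK_API_KEY;
if (!apiKey) {
  return new Response(JSON.stringify({ error: "Missing API key" }), {
    status: 500,
    headers: { "Content-Type": "application/json" },
  });
}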

3. Calling AI Model API
Next is the API call logic. The edge function calls the corresponding AI model API based on the model parameter. Similarly, taking DeepSeek as an example, the core code is as follows:
const res = await fetch("https://api.deepseek.com/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${apiKey}`,
  },
  body: JSON.stringify(requestBody),
});

The call to the AI model is made by placing messages (the chat history) and apiKey in the appropriate positions of the request: messages is sent as part of the request body, and apiKey is passed in the Authorization header for authentication.

4. Returning a Response
After the AI model finishes processing, the edge function returns the response to the frontend. Note that the data is returned as a stream, so the frontend can render content incrementally as it arrives, which shortens the perceived wait time and improves the experience.

return new Response(res.body, {
  status: res.status,
  headers: {
    "Content-Type":
      res.headers.get("Content-Type") || "application/octet-stream",
    "Cache-Control": "no-store",
  },
});

5. Processing Messages in Frontend
After receiving the response from the AI model, the frontend renders the returned data in components/MessageItem.jsx. This component renders chat messages, splitting the message list into two types (AI and user) that are distinguished by UI style.
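As a minimal sketch (illustrative code rather than the template's implementation; appendToLastAIMessage is a hypothetical helper that updates the UI state), the frontend could consume the streamed response like this:

// Illustrative only: read the streamed /api/ai response chunk by chunk and
// append each decoded chunk to the AI message currently being rendered.
const reader = res.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  appendToLastAIMessage(decoder.decode(value, { stream: true }));
}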

This completes the native API call integration: download the template, configure environment variables, send requests from the frontend, process them in the edge function, call the AI model API, return the response, and display messages on the frontend, forming the full implementation chain of the AI conversation feature.


AI SDK Encapsulated Invocation

If the cloned project is the ai-sdk-chatbot-starter template, it uses the AI SDK encapsulated invocation method. The core idea is to use the AI SDK together with the AI SDK UI to implement the conversation flow: the AI SDK provides a unified API for calling models from different vendors, while the AI SDK UI offers ready-made React hooks to manage the message list in the chat interface. The template's flow is: create a message manager → update the message queue → AI SDK encapsulated invocation → message manager refresh.

1. Creating a Message Manager
The AI SDK UI provides the concept of a message manager, used to manage the state and interactions of the chat history. The message manager handles user input, AI responses, and message history, and exposes a unified data format for operating on the message queue.

In the project, the core code for creating a message manager is located at hooks/useChatLogic.ts:
import { useChat } from "ai/react";

const {
  messages,
  sendMessage,
  status,
  error,
  regenerate,
  setMessages,
  clearError: clearChatError,
} = useChat({
  onFinish: (message) => {
    // ...
  },
  onError: (error) => {
    // ...
  },
});

The useChat hook provides the core message-management features:

messages: the message list containing all chat records
sendMessage: sends a message to the AI model
setMessages: sets the message list, used to manually update or reset the message queue
status: the current status (idle, loading, error)
error: error information
regenerate: regenerates the last AI response

2. Updating the Message Queue
When the user enters a message in the chat UI and clicks send, the send flow is triggered. The send button on the page executes the following example code:
sendMessage(
  { text },
  {
    headers: {
      "X-Model": selectedModel,
    },
  }
);

This receives the user's input text and sends the message to the API via the sendMessage function. By default, sendMessage requests the /api/chat endpoint. The X-Model header specifies which AI model to use, enabling model switching.

3. AI SDK Encapsulated Invocation
The processing logic of the AI SDK is implemented in the app/api/chat/route.js file. This file receives requests sent from the frontend and calls the corresponding AI model API based on the model parameter.
When calling the AI model, the received messages are first converted into a unified format. The code is as follows:
// Convert messages to UI format
const uiMessages: UIMessage[] = body.messages.map((msg: any) => ({
  id: msg.id || Math.random().toString(36).substr(2, 9),
  role: msg.role,
  parts: msg.parts || [{ type: "text", text: msg.content || msg.text || "" }],
}));

Based on the model name selected by the user, the function automatically picks the corresponding AI SDK provider via providerConfig.provider(selectedModel). API Keys are passed the same way as in the native API approach, via env.
The function already supports the following model providers:
import { deepseek } from "@ai-sdk/deepseek";
import { anthropic } from "@ai-sdk/anthropic";
import { google } from "@ai-sdk/google";
import { openai } from "@ai-sdk/openai";
import { xai } from "@ai-sdk/xai";

If you need to support more models, you can add them manually; a rough sketch of what the model-to-provider mapping might look like is shown below.
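As a sketch only (the object shape and model names are assumptions, not necessarily the template's actual code), the mapping from model name to provider might look like this:

// Hypothetical provider mapping: resolves the model name selected in the UI
// to the corresponding AI SDK provider and a display name.
const PROVIDER_CONFIGS = {
  "deepseek-chat": { name: "DeepSeek", provider: deepseek },
  "gpt-4o-mini": { name: "OpenAI", provider: openai },
  "claude-3-5-sonnet-latest": { name: "Anthropic", provider: anthropic },
  // Add new models here, for example:
  // "gemini-1.5-pro": { name: "Google", provider: google },
};

const providerConfig = PROVIDER_CONFIGS[selectedModel];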
The model invocation then goes through the AI SDK; the core code is as follows:
const result = streamText({
  model: providerConfig.provider(selectedModel),
  system:
    "You are an intelligent AI assistant dedicated to helping users. Please follow these principles:\n1. Provide accurate, useful, and concise answers\n2. Maintain a friendly and professional tone\n3. Be honest when uncertain about answers\n4. Support both Chinese and English communication\n5. Provide practical advice and solutions",
  messages: convertToModelMessages(uiMessages),
  maxOutputTokens: 1000,
  temperature: 0.7,
  onError: (error) => console.error("AI API Error:", error),
  onFinish: (result) =>
    console.log("AI Response finished:", {
      provider: providerConfig.name,
      model: selectedModel,
      usage: result.usage,
      finishReason: result.finishReason,
    }),
});

streamText is the function provided by the AI SDK for running the AI conversation. After streamText executes, its result needs to be converted into the standard data format expected by the AI SDK UI, which is done through the AI SDK's toUIMessageStreamResponse. The code is as follows:
return result.toUIMessageStreamResponse();

At this point, the AI SDK call process is completed. Subsequently, you only need to process the API response content in the frontend code.

4. Processing Messages in Frontend
After receiving the response from the AI model, the frontend renders the returned data in components/MessageList.jsx. This component listens for changes to the messages array provided by hooks/useChatLogic.ts; when new AI response content arrives, the message manager updates the message list automatically.
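As a minimal sketch (the component and prop names are assumptions rather than the template's exact code), rendering the messages array from useChat might look like this:

// Hypothetical rendering of the message list managed by useChat: each message
// is styled by role, and its text parts are rendered in order.
export function MessageList({ messages }) {
  return (
    <div>
      {messages.map((message) => (
        <div
          key={message.id}
          className={message.role === "user" ? "user-message" : "ai-message"}
        >
          {message.parts
            .filter((part) => part.type === "text")
            .map((part, index) => (
              <p key={index}>{part.text}</p>
            ))}
        </div>
      ))}
    </div>
  );
}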

The above is the complete integration process of the AI SDK encapsulated invocation method: from downloading the template and configuring environment variables to invoking the model through the AI SDK and rendering messages in the frontend, it forms the full implementation chain of the dialogue feature.


Local Debugging

After downloading the project locally and reviewing the implementation details, you may want to develop, debug, or preview it locally. Local debugging also requires the environment variables, which can be tedious to configure by hand. The EdgeOne CLI can synchronize the deployment configuration described above from EdgeOne Pages to your local machine, and it can also deploy the project directly from your local environment. Using the EdgeOne CLI requires installation and login; for details, refer to the EdgeOne CLI documentation.

After installation and login, execute the following command in the local project to associate it with the corresponding project in the EdgeOne Pages console.
edgeone pages link

After executing edgeone pages link, you will be prompted for the EdgeOne Pages project name, i.e., the name of the template project deployed earlier. Once the project name is entered, the environment variables configured for that project in the EdgeOne Pages console are synchronized locally: an .env file containing the previously configured variables is generated in the root directory of the local project.
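The generated file might look roughly like this (the variable names below are only an illustration; the actual contents match what you configured in the console):

# Hypothetical example of a synchronized .env file
DEEPSEEK_API_KEY=sk-xxxxxxxxxxxxxxxx
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxx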

EdgeOne CLI also supports enabling DEV mode locally. The command is as follows:
edgeone pages dev
After DEV mode is enabled, the project can be accessed at localhost:8088. Taking ai-sdk-chatbot-starter as an example, after a successful local startup the project interface appears as follows when you visit localhost:8088:


If you need to customize the code, you can push your changes to GitHub via git; a typical commit-and-push sequence is shown below. EdgeOne Pages automatically detects new GitHub commits and redeploys the project.
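For example, assuming the default branch is main, the sequence might be:

git add .
git commit -m "Customize chat UI"
git push origin main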

After pushing, go to the console and open the project details page. When the deployment finishes, the "Deploying" status shows the interface in the figure below; at this point, click the access address to verify the updated content on the public network.

