Large Models for Images Integration

This document describes how to integrate AI image generation into an application, from obtaining an API key to deploying online. Two approaches are available to suit different development requirements: the native API, which offers fine-grained control and high flexibility, and the AI SDK, which simplifies development and supports switching between vendors.


Getting Started

EdgeOne Pages offers multiple AI image generation templates covering two access methods: native API calls and AI SDK-encapsulated invocation. For example:

Native API call Template: ai-image-generator-starter

AI SDK Encapsulated Invocation Template: ai-sdk-image-generator-starter

This document uses these two templates as samples to walk through both technology paths in detail: native API calls and AI SDK-encapsulated invocation.


Pre-Deployment Instructions

To implement the AI image generation feature, you must first apply for an API Key. Below are the pages where you can obtain an API Key from mainstream AI image generation providers:
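Once obtained, the keys are supplied to the project through environment variables. For reference, the variable names below are the ones the templates' token validation reads; the values are placeholders you replace with your own keys:

```shell
# Placeholder values — replace with the API Keys obtained from each provider.
# Configure at least one; the templates read these exact variable names.
NEBIUS_TOKEN=your-nebius-token
HF_TOKEN=your-huggingface-token
REPLICATE_TOKEN=your-replicate-token
OPENAI_API_KEY=your-openai-api-key
FAL_KEY=your-fal-key
```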



One-Click Deployment

Start by deploying the template above and synchronizing the project code to your GitHub repository; the following sections then walk through the integration in detail.

Take the ai-sdk-image-generator-starter template as an example. On the template detail page, click the "Deploy" button to open the EdgeOne Pages console. The deployment page shows environment variable configuration options, which correspond to the API Keys of the different AI image generation services. Each template presents its own list of configuration items, but make sure at least one API Key is configured correctly and usable. An example configuration page is shown below:


Once configured, click the "Start Deployment" button to begin deploying the project.


Integration Details

Downloading Code

After a successful deployment, a project identical to the template is generated under your GitHub account. First, clone the code from your GitHub account to your local machine. Again taking the ai-sdk-image-generator-starter template as an example, run the following commands in a terminal:

git clone https://github.com/${your-github-account}/ai-sdk-image-generator-starter.git
cd ai-sdk-image-generator-starter


Native API Calls

If you chose a native API call template, the image generation project's logic flow is: image parameter selection → edge function calls the AI → frontend display. The key steps, image parameter selection and calling the AI from the edge function, are described in detail below.

1. Image Parameter Selection
EdgeOne's AI image generation template has a built-in rendering flow for the frontend pages; you only need to configure the available model list in the frontend parameters, with no extra development required. The request logic for image generation lives in the src/pages/index.tsx file. Core code example:

const res = await fetch("/v1/generate", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    image: `${prompt} (${modelInfo.name} style)`,
    platform: platform.id,
    model: modelInfo.value || selectedModel,
  }),
});

2. Calling the AI from a Function
The edge function's processing logic is implemented in the functions/v1/generate/index.js file. Its flow is: first receive the parameters (prompt, platform, model) from the frontend, then check whether the environment variables for the corresponding platform are configured correctly. Sample code for checking environment variables:
// Token validation for different platforms
const validateToken = (platform) => {
  const tokens = {
    nebius: env.NEBIUS_TOKEN,
    huggingface: env.HF_TOKEN,
    replicate: env.REPLICATE_TOKEN,
    openai: env.OPENAI_API_KEY,
    fal: env.FAL_KEY,
  };

  if (!tokens[platform]) {
    throw new Error(
      `${platform} API token is not configured. Please check your environment variables.`
    );
  }
};

Accessing environment variables via env keeps API keys out of the code and improves application security: sensitive information lives in environment variables rather than being hard-coded in the source.

After the environment variables are verified, the function directly requests the image generation model API of the corresponding platform. For example, the core code for a standard Hugging Face API request is as follows:

'nerijs/pixel-art-xl': () => {
  validateToken('huggingface');
  return fal_query({
    prompt,
  }, 'https://router.huggingface.co/fal-ai/fal-ai/fast-sdxl');
}

const response = await PROVIDERS.fetch(url, {
  headers: {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  },
  method: "POST",
  body: JSON.stringify(data),
});

To integrate additional AI models, study their API calling protocols and then encapsulate them in functions. EdgeOne's AI image generation templates already support models from providers such as Hugging Face, OpenAI, Replicate, Fal, and Nebius.
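As a rough sketch of what such an encapsulation can look like — the model name, endpoint, and request shape below are illustrative, not taken from the template — each entry validates its token and builds the provider-specific request:

```javascript
// Hypothetical provider registry in the spirit of the template's PROVIDERS
// map; model name, endpoint, and body shape are illustrative.
const IMAGE_PROVIDERS = {
  "stabilityai/sdxl-turbo": (prompt, env) => {
    if (!env.HF_TOKEN) {
      throw new Error("huggingface API token is not configured.");
    }
    return {
      url: "https://router.huggingface.co/hf-inference/models/stabilityai/sdxl-turbo",
      token: env.HF_TOKEN,
      body: { inputs: prompt }, // provider-specific request shape
    };
  },
};

// Dispatch a model name to its handler, failing fast on unknown models.
function buildImageRequest(model, prompt, env) {
  const handler = IMAGE_PROVIDERS[model];
  if (!handler) throw new Error(`Unsupported model: ${model}`);
  return handler(prompt, env);
}
```

The returned descriptor can then be fed to a single shared fetch call, keeping per-provider differences isolated in the registry.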

After the image is generated, it is returned to the frontend. The template project has built-in logic for image display, with UI interactions before, during, and after the image request.


AI SDK Encapsulated Invocation

The AI SDK-encapsulated approach uses the unified API provided by the AI SDK to call AI image models from different vendors, simplifying development through SDK encapsulation. The logic flow of the ai-sdk-image-generator-starter template is similar to that of the native API call template, with only slight differences in how the AI image models are invoked.

1. Image Parameter Selection
Start with the request sent from the frontend. The core code that initiates the request is located in the src/pages.tsx file. The frontend sends the request to the /api/generate API. The core code is as follows:
const response = await fetch(apiUrl, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    prompt,
    model,
    size,
  }),
});

Here, prompt is the user's input for image generation; model is the requested model name, specifying which AI image generation model to call; size is the image dimensions parameter; and quality is the image quality parameter.

Notably, the size parameter must be set in advance because different models support different specifications. For example, DALL-E 3 supports sizes such as "1024x1024", "1024x1792", and "1792x1024", while Stable Diffusion may support "512x512" and "768x768". Configuring sizes in advance ensures the correct parameter list is offered when switching models.

EdgeOne Pages' AI SDK image generation template already organizes the size lists for supported models; the configuration lives in components/modelSizeMapping.ts. Developers can use these preconfigured size mappings directly without handling model compatibility manually.
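A minimal sketch of what such a mapping can look like — the structure here is illustrative, not the actual contents of components/modelSizeMapping.ts — using the example sizes mentioned above:

```javascript
// Hypothetical model → supported sizes mapping; entries are illustrative.
const MODEL_SIZES = {
  "dall-e-3": ["1024x1024", "1024x1792", "1792x1024"],
  "stable-diffusion": ["512x512", "768x768"],
};

// Return the size list for a model, falling back to a common default
// so the frontend always has something valid to offer.
function sizesForModel(model) {
  return MODEL_SIZES[model] ?? ["1024x1024"];
}
```

The frontend can call this whenever the user switches models, so the size dropdown only ever shows values the selected model accepts.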

2. Calling the AI from a Function
Like the native approach above, the AI SDK-encapsulated approach avoids the risk of key leakage, but the explicit environment variable check is skipped: when calling an AI image model, the function uses the experimental_generateImage function exposed by the AI SDK to unify image generation, and key retrieval is handled internally by the SDK. You only need to configure the keys in the .env.local file as described earlier. The core sample code for generating an image with experimental_generateImage is as follows:
import { experimental_generateImage } from "ai";

const imageResult = await experimental_generateImage({
  model: imageModel,
  prompt: prompt,
  size: size, // Use frontend-provided size
});

After calling experimental_generateImage, you only need to read the function's return value, which is in a standard format. The following example reads the base64 content of the image:
const imageUrl = `data:image/png;base64,${imageResult.image.base64}`;
return new Response(
  JSON.stringify({
    images: [
      {
        url: imageUrl,
        base64: imageResult.image.base64,
      },
    ],
  })
);

After obtaining the generated image data, return it to the frontend for display. The display details are not covered here; if you are interested in the UI interactions, see the code.
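On the frontend side, parsing that response is simple; a hypothetical helper (not from the template) that extracts the displayable URL from the { images: [{ url, base64 }] } body might look like:

```javascript
// Hypothetical helper: read the first image URL out of the
// { images: [{ url, base64 }] } response body built above.
function firstImageUrl(data) {
  if (!Array.isArray(data.images) || data.images.length === 0) {
    throw new Error("No images in response");
  }
  // url is a data: URL, usable directly as an <img> src
  return data.images[0].url;
}
```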


Local Debugging

After downloading the project and reviewing its implementation, you may want to develop, debug, or preview locally. Local debugging also requires configuring environment variables, which can be tedious. The EdgeOne CLI can synchronize the deployment configuration from EdgeOne Pages to your local machine and can also deploy the project locally. Using the EdgeOne CLI requires installation and login; for details, refer to the EdgeOne CLI documentation.

After installing and logging in, execute the following command in the local project to associate it with the project in the EdgeOne Pages console:
edgeone pages link

When you run edgeone pages link, you will be prompted for the EdgeOne Pages project name, i.e., the name of the deployed template project mentioned above. After you enter it, the environment variables of the deployed project in the EdgeOne Pages console are synchronized locally. Upon successful association, a .env file containing the environment variables is generated in the project root directory.

After association, execute the following command to proceed with local deployment:
edgeone pages dev

After it starts, you can access the site at localhost:8088. Taking ai-sdk-image-generator-starter as an example, the preview looks like this:


If you customize the code, you can push the project to GitHub via git. EdgeOne Pages detects the GitHub commits and automatically redeploys. After deployment, verify the result in the console. An example of the interface after deployment is shown below:


