POST /v1/chat/completions
curl --request POST \
  --url https://api.apiyi.com/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "gpt-image-2-vip",
  "messages": [
    {
      "role": "user",
      "content": "Landscape 16:9 cinematic, old lighthouse at sunset, photorealistic"
    }
  ]
}
'
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1702855400,
  "model": "gpt-image-2-vip",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "![image](https://r2cdn.copilotbase.com/r2cdn2/xxxxx.png)"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 150,
    "total_tokens": 175
  }
}

Documentation Index

Fetch the complete documentation index at: https://docs.apiyi.com/llms.txt

Use this file to discover all available pages before exploring further.

Chat endpoint highlights: one endpoint handles both text-to-image and reference-image editing, accepts online image URLs (CDN links or base64 data URLs) directly, and supports natural multi-turn iteration. Enter your API Key in the Playground on the right and pick an example from the dropdown (text-to-image / reference editing / multi-turn). If you want one codebase that works against both the official and reverse channels, prefer /v1/images/generations and /v1/images/edits (the OpenAI Images API standard format).
Mode selection:
  • Text-only messages → text-to-image
  • Add image_url (URL or base64 data URL) → reference-image edit
  • Keep prior assistant messages and continue → multi-turn iteration
Difference vs gpt-image-2-all: identical call format. Just swap model to gpt-image-2-vip and describe your target dimensions in the prompt.
⚠️ The chat endpoint has no separate size field; dimensions are conveyed via the prompt (same as -all). For strict size locking, use the text-to-image endpoint with the size field.
🖥️ Browser Playground limitation (when the response contains base64): this endpoint usually returns images as markdown URLs (![image](https://...)), which the Playground renders fine. If the model returns a base64 data URL embedded in message.content, or if you pass a large base64 input image via image_url, the response string can be several megabytes, and the browser Playground may show "An error occurred during the request: unable to complete request". The request actually succeeded; the browser just can't display such a long string. Recommended workflow: when you see this error, copy the code samples below to your local machine; your code can extract the image link or base64 data from content cleanly.

Code Examples

Python (text-to-image)

import requests

API_KEY = "sk-your-api-key"

response = requests.post(
    "https://api.apiyi.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json={
        "model": "gpt-image-2-vip",
        "messages": [
            {"role": "user", "content": "Cinematic landscape 16:9, old lighthouse by the sea at dusk, photorealistic"}
        ]
    },
    timeout=300
).json()

print(response["choices"][0]["message"]["content"])

Python (reference-image edit)

import requests
import base64

API_KEY = "sk-your-api-key"

# HTTPS URL or base64 data URL — both work
with open("photo.png", "rb") as f:
    data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

response = requests.post(
    "https://api.apiyi.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json={
        "model": "gpt-image-2-vip",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Convert this image into watercolor style"},
                    {"type": "image_url", "image_url": {"url": data_url}}
                ]
            }
        ]
    },
    timeout=300
).json()

print(response["choices"][0]["message"]["content"])

cURL (text-to-image)

curl -X POST "https://api.apiyi.com/v1/chat/completions" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-image-2-vip",
    "messages": [
      {"role": "user", "content": "Cyberpunk rainy night street, 16:9, neon sign reading Hello World"}
    ]
  }'

cURL (reference-image edit)

curl -X POST "https://api.apiyi.com/v1/chat/completions" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-image-2-vip",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Convert this image into watercolor style" },
          { "type": "image_url", "image_url": { "url": "https://example.com/photo.png" } }
        ]
      }
    ]
  }'

Node.js (text-to-image)

const API_KEY = "sk-your-api-key";

const response = await fetch("https://api.apiyi.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    model: "gpt-image-2-vip",
    messages: [
      { role: "user", content: "1024x1024 square logo, minimalist cat line art" }
    ]
  })
});

const data = await response.json();
console.log(data.choices[0].message.content);

Python (OpenAI SDK)

from openai import OpenAI

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.apiyi.com/v1"
)

resp = client.chat.completions.create(
    model="gpt-image-2-vip",
    messages=[{
        "role": "user",
        "content": "Generate a 16:9 ink wash landscape painting in traditional Chinese style"
    }]
)
print(resp.choices[0].message.content)
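
Python (multi-turn iteration)

Multi-turn refinement just means resending the full history with the previous assistant reply included, as described under Mode selection above. A minimal sketch (the helper names chat and with_followup are illustrative, not part of the API):

```python
import requests

API_KEY = "sk-your-api-key"
URL = "https://api.apiyi.com/v1/chat/completions"

def chat(messages):
    """POST the running conversation; return the assistant's content string."""
    resp = requests.post(
        URL,
        headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
        json={"model": "gpt-image-2-vip", "messages": messages},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def with_followup(messages, assistant_reply, next_prompt):
    """Return a new history: prior turns + last assistant reply + next request."""
    return messages + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": next_prompt},
    ]

# Usage:
# history = [{"role": "user", "content": "A 16:9 watercolor of a mountain lake at dawn"}]
# first = chat(history)                                  # turn 1: generate
# history = with_followup(history, first, "Same scene, but a winter night with aurora")
# print(chat(history))                                   # turn 2: refine
```

Keeping the assistant reply (the markdown image link) in the history is what lets the model treat the follow-up as a revision rather than a fresh generation.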

Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Fixed at gpt-image-2-vip |
| messages | array | Yes | Conversation array; supports system / user / assistant roles |
| messages[].content | string \| array | Yes | Plain string (text-to-image) or multimodal array (with reference image) |
| stream | boolean | No | Streaming. This model is one-shot; keep false. |
Multimodal content fragments (when content is an array):
| Field | Type | Required | Description |
|---|---|---|---|
| type | enum | Yes | text or image_url |
| text | string | Conditional | Required when type=text |
| image_url.url | string | Conditional | Required when type=image_url. Accepts https://... or data:image/png;base64,... |
See the right-side Playground for full field details. The “Example” dropdown switches between text-to-image / reference editing / multi-turn iteration.

Response Format

The chat endpoint returns the standard OpenAI chat.completion shape. The generated image appears as a URL or data URL inside choices[0].message.content:
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1702855400,
  "model": "gpt-image-2-vip",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "![image](https://r2cdn.copilotbase.com/r2cdn2/xxxxx.png)"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 150,
    "total_tokens": 175
  }
}
Parsing tip: extract image links from content using a regex like https?://[^\s)]+\.(png|jpg|jpeg|webp) or data:image/[^\s)]+.
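
For example, a small helper built from those regexes (the function name extract_images is illustrative):

```python
import re

def extract_images(content: str) -> list[str]:
    """Return all image URLs or data URLs found in an assistant message."""
    pattern = r"https?://[^\s)]+\.(?:png|jpg|jpeg|webp)|data:image/[^\s)]+"
    return re.findall(pattern, content)

# extract_images("![image](https://cdn.example.com/a.png)")
# -> ["https://cdn.example.com/a.png"]
```

The `[^\s)]` character class stops the match at whitespace or a closing parenthesis, so links wrapped in markdown image syntax come out clean.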

Why the Chat Endpoint

One endpoint, two abilities

No need to switch between generations / edits — all flows hit the same endpoint.

Online URL inputs

image_url accepts CDN URLs or base64 data URLs directly — no multipart upload needed.

Native multi-turn

Keep prior assistant messages to continue refining — same logic as ChatGPT.

Best SDK coverage

Works with the official OpenAI SDK, LangChain, and most Chat frontends out of the box.
For strict size locking: the chat endpoint has no separate size field. To get pixel-exact outputs from the 30-size set, use the text-to-image endpoint (/v1/images/generations).
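
A sketch of that alternative, assuming the endpoint follows the OpenAI Images API standard shape as stated above (the exact size strings supported come from the model's 30-size table; the helper names are illustrative):

```python
import requests

API_KEY = "sk-your-api-key"

def build_payload(prompt: str, size: str = "1024x1024") -> dict:
    """Images-API request body; `size` pins the exact output dimensions."""
    return {"model": "gpt-image-2-vip", "prompt": prompt, "size": size}

def generate_locked(prompt: str, size: str = "1024x1024") -> dict:
    """Call /v1/images/generations and return the first generated image entry."""
    resp = requests.post(
        "https://api.apiyi.com/v1/images/generations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=build_payload(prompt, size),
        timeout=300,
    )
    resp.raise_for_status()
    # Standard Images-API response shape: {"data": [{"url": "..."}]} (or b64_json)
    return resp.json()["data"][0]

# Usage:
# print(generate_locked("Minimalist cat line-art logo", size="1024x1024"))
```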

Model Overview (full size table)

Complete 30-size table, pricing, technical specs

Text-to-Image API (/v1/images/generations)

OpenAI Images API compatible endpoint, pass size to lock dimensions

Image Editing API (/v1/images/edits)

multipart/form-data upload with reference images

Sister model gpt-image-2-all

Same call format when you don’t need locked size — faster output

Authorizations

Authorization
string
header
required

API Key from the API易 Console

Body

application/json
model
enum<string>
default:gpt-image-2-vip
required

Model name, fixed to gpt-image-2-vip

Available options:
gpt-image-2-vip
messages
object[]
required

Conversation messages. Supports multi-turn and multimodal content.

stream
boolean
default:false

Whether to stream the response. This model returns one-shot — keep false. Playground does not support streaming preview.

temperature
number
default:1

Sampling temperature (minor effect on image generation — default is fine)

Required range: 0 <= x <= 2

Response

Image generated (image URL or data URL appears in choices[0].message.content)

id
string
Example:

"chatcmpl-abc123"

object
string
Example:

"chat.completion"

created
integer

Unix timestamp (seconds)

Example:

1702855400

model
string
Example:

"gpt-image-2-vip"

choices
object[]
usage
object