FLUX image editing API reference and live debugger — upload up to 8 reference images plus an instruction for single-image edits or multi-reference fusion. Works for FLUX.2 and FLUX.1 Kontext.

Example request:

```bash
curl --request POST \
  --url https://api.apiyi.com/v1/images/generations \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "flux-2-pro",
    "prompt": "Naturally blend these two images",
    "input_image": "https://static.apiyi.com/apiyi-logo.png"
  }'
```

Example response:

```json
{
  "created": 1776832476,
  "data": [
    {
      "url": "https://delivery-eu.bfl.ai/results/xxx/sample.jpeg?signature=..."
    }
  ]
}
```
Documentation Index
Fetch the complete documentation index at: https://docs.apiyi.com/llms.txt
Use this file to discover all available pages before exploring further.
Playground: authenticate with your API key (`Bearer sk-xxx`). Paste the public URL of reference image 1 into `input_image`; for multi-reference, fill the URLs of additional images into `input_image_2` … `input_image_8`. Then fill `prompt` / `model` and send. The Playground only accepts URLs; for base64 data URL inputs, copy the code samples below and run them locally.

FLUX uses `/v1/images/generations` for both text-to-image and editing (unlike OpenAI gpt-image, FLUX has no separate `/edits` path): sending `input_image` triggers edit mode; without it the call is plain text-to-image. The request is `application/json` and every reference image field is a string (URL or base64 data URL). For pure text-to-image, see the Text-to-Image endpoint.

- Endpoint: `/v1/images/generations` (shared with text-to-image; not `/v1/images/edits`)
- Content-Type: `application/json` (the apiyi FLUX channel requires JSON, not multipart)
- Reference images: `input_image` / `input_image_2` … `input_image_8` accept a public URL (recommended) or a `data:image/...;base64,xxx` data URL
- Supported formats: png / jpg / webp
- Result: `data[0].url` must be downloaded immediately
- If `aspect_ratio` is omitted, output dimensions match the first input image

The order `input_image` / `input_image_2` / `input_image_3` … is exactly the index used by "image 1 / image 2 / image 3" in your prompt: "Place the person from image 1 into the scene from image 2, applying the color palette of image 3." Each value must be a publicly reachable URL (≤ 20MB recommended) or a `data:image/png;base64,xxx` data URL.

Two-image fusion:

```bash
curl -X POST "https://api.apiyi.com/v1/images/generations" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "flux-2-pro",
    "prompt": "Naturally blend these two images",
    "input_image": "https://static.apiyi.com/apiyi-logo.png",
    "input_image_2": "https://images.unsplash.com/photo-1762138012600-2ab523f8b35a",
    "seed": 42,
    "output_format": "jpeg"
  }'
```
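The "image 1 / image 2" prompt indexing maps one-to-one onto the numbered `input_image` fields, which is easy to automate. A minimal sketch; the helper name `build_edit_payload` is my own, not part of the API:

```python
def build_edit_payload(model: str, prompt: str, images: list[str], **extra) -> dict:
    """Map an ordered list of reference images (URLs or data URLs) onto
    input_image, input_image_2, ... input_image_8 — the same order that
    "image 1" / "image 2" in the prompt refers to."""
    if not 1 <= len(images) <= 8:
        raise ValueError("FLUX.2 [pro/max/flex] accepts 1-8 reference images")
    payload = {"model": model, "prompt": prompt, "input_image": images[0], **extra}
    for i, url in enumerate(images[1:], start=2):
        payload[f"input_image_{i}"] = url
    return payload

payload = build_edit_payload(
    "flux-2-pro",
    "Place the person from image 1 into the scene from image 2",
    ["https://example.com/person.jpg", "https://example.com/scene.jpg"],
    seed=42,
)
# payload["input_image_2"] == "https://example.com/scene.jpg"
```

The resulting dict can be passed directly as the JSON body of any of the requests below.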
Three-reference composition:

```bash
curl -X POST "https://api.apiyi.com/v1/images/generations" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "flux-2-pro",
    "prompt": "The person from image 1 is petting the cat from image 2, the bird from image 3 is next to them",
    "input_image": "https://example.com/person.jpg",
    "input_image_2": "https://example.com/cat.jpg",
    "input_image_3": "https://example.com/bird.jpg",
    "seed": 42,
    "output_format": "jpeg"
  }'
```
Single-image style edit with FLUX.1 Kontext:

```bash
curl -X POST "https://api.apiyi.com/v1/images/generations" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "flux-kontext-pro",
    "prompt": "Convert this architectural photo into a pencil sketch style, preserve all structural details",
    "input_image": "https://your-oss.example.com/architecture.jpg"
  }'
```
Local image as a base64 data URL (requires `jq`):

```bash
# Encode local image as base64 data URL (macOS / Linux)
B64=$(base64 -w0 < person.png 2>/dev/null || base64 < person.png | tr -d '\n')
curl -X POST "https://api.apiyi.com/v1/images/generations" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d "$(jq -nc --arg img "data:image/png;base64,$B64" '{
    model: "flux-2-pro",
    prompt: "Stylize image 1 as an oil painting",
    input_image: $img
  }')"
```
```python
import requests

resp = requests.post(
    "https://api.apiyi.com/v1/images/generations",
    headers={
        "Authorization": "Bearer sk-your-api-key",
        "Content-Type": "application/json",
    },
    json={
        "model": "flux-2-pro",
        "prompt": "Naturally blend these two images",
        "input_image": "https://static.apiyi.com/apiyi-logo.png",
        "input_image_2": "https://images.unsplash.com/photo-1762138012600-2ab523f8b35a",
        "seed": 42,
        "output_format": "jpeg",
    },
    timeout=120,
)
image_url = resp.json()["data"][0]["url"]

# data[0].url is valid for only 10 minutes — download immediately
with open("fused.jpg", "wb") as f:
    f.write(requests.get(image_url, timeout=30).content)
```
```python
import base64, requests, mimetypes

def to_data_url(path: str) -> str:
    mime = mimetypes.guess_type(path)[0] or "image/png"
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    return f"data:{mime};base64,{b64}"

resp = requests.post(
    "https://api.apiyi.com/v1/images/generations",
    headers={
        "Authorization": "Bearer sk-your-api-key",
        "Content-Type": "application/json",
    },
    json={
        "model": "flux-2-pro",
        "prompt": "Place the person from image 1 into the scene from image 2",
        "input_image": to_data_url("person.png"),
        "input_image_2": "https://your-oss.example.com/scene.jpg",
    },
    timeout=120,
)
print(resp.json()["data"][0]["url"])
```
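Since each reference image should stay under the recommended 20MB, it can be worth validating a local file before encoding it. A sketch under that assumption; the `MAX_BYTES` constant and function name are my own:

```python
import base64, mimetypes, os

MAX_BYTES = 20 * 1024 * 1024  # ≤ 20MB recommended per reference image

def to_data_url_checked(path: str) -> str:
    # Refuse files over the recommended limit; base64 also inflates size
    # by ~33%, so large files are better uploaded somewhere public and
    # passed as plain URLs instead.
    size = os.path.getsize(path)
    if size > MAX_BYTES:
        raise ValueError(f"{path} is {size} bytes; upload it and pass a URL instead")
    mime = mimetypes.guess_type(path)[0] or "image/png"
    with open(path, "rb") as f:
        return f"data:{mime};base64,{base64.b64encode(f.read()).decode()}"
```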
```python
from openai import OpenAI
import requests

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.apiyi.com/v1",
)

# OpenAI SDK images.generate() targets /v1/images/generations with JSON;
# BFL-native fields are added straight into the body via extra_body.
resp = client.images.generate(
    model="flux-2-pro",
    prompt="Naturally blend these two images",
    extra_body={
        "input_image": "https://static.apiyi.com/apiyi-logo.png",
        "input_image_2": "https://images.unsplash.com/photo-1762138012600-2ab523f8b35a",
        "seed": 42,
        "output_format": "jpeg",
    },
)
image_url = resp.data[0].url
with open("fused.jpg", "wb") as f:
    f.write(requests.get(image_url, timeout=30).content)
```
```javascript
const resp = await fetch('https://api.apiyi.com/v1/images/generations', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer sk-your-api-key',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'flux-2-pro',
    prompt: 'Naturally blend these two images',
    input_image: 'https://static.apiyi.com/apiyi-logo.png',
    input_image_2: 'https://images.unsplash.com/photo-1762138012600-2ab523f8b35a',
    seed: 42,
    output_format: 'jpeg',
  }),
});

const { data } = await resp.json();
const img = await fetch(data[0].url);
const fs = await import('node:fs');
fs.writeFileSync('fused.jpg', Buffer.from(await img.arrayBuffer()));
```
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | Yes | — | FLUX model ID. For multi-reference fusion prefer flux-2-pro / flux-2-max; for single-image edits also flux-kontext-max / flux-kontext-pro |
| prompt | string | Yes | — | Edit / fusion instruction, up to 32K tokens. Use "image 1 / image 2 / image 3" to reference the input_image / input_image_2 / input_image_3 ordering |
| input_image | string | Yes | — | Reference image 1. Public URL (recommended) or data:image/...;base64,xxx data URL |
| input_image_2 … input_image_8 | string | No | — | Reference images 2–8, URL or data URL. FLUX.2 [pro/max/flex] up to 8, [klein] up to 4; Kontext does not support extras |
| aspect_ratio | string | No | matches first input | E.g. 1:1 / 16:9 / 9:16 / 4:3 / 3:2 |
| seed | integer | No | random | Fix for reproducibility |
| safety_tolerance | integer | No | 2 | 0 (strictest) – 6 (most permissive) |
| output_format | string | No | jpeg | jpeg / png |
| prompt_upsampling | boolean | No | false | Auto-upsample the prompt |
| steps | integer | No | 50 | Only flux-2-flex, max 50 |
| guidance | number | No | 4.5 | Only flux-2-flex, 1.5–10 |
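The flex-only knobs (steps, guidance) combine with the shared fields in an ordinary request body. A sketch within the table's constraints; the prompt and image URL are illustrative placeholders, not tested values:

```python
# Request body for FLUX.2 [flex]; steps and guidance only apply to this model.
payload = {
    "model": "flux-2-flex",
    "prompt": "Turn image 1 into a watercolor illustration",
    "input_image": "https://example.com/photo.jpg",
    "aspect_ratio": "3:2",
    "steps": 30,      # fewer steps trade quality for speed, max 50
    "guidance": 6.0,  # stronger prompt adherence, range 1.5-10
    "output_format": "png",
}
# Send it exactly like the other examples:
# requests.post("https://api.apiyi.com/v1/images/generations",
#               headers={"Authorization": "Bearer sk-your-api-key"},
#               json=payload, timeout=120)
```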
- Character Consistency (up to 8 images): "Eight consistent characters from the reference images, in a fashion editorial set on a Tokyo rooftop at golden hour"
- Style Transfer: "Using the style of image 2, render the subject from image 1"
- Object Composition: "The person from image 1 is petting the cat from image 2, the bird from image 3 is next to them"
- Outfit / Product Swap: "Replace the top of the person in image 1 with the one from image 2, keep the pose and background unchanged"
Iterative refinement: take `data[0].url` from the response, feed it back as `input_image` in the next call with a new instruction, and refine progressively. Each round bills as one image.

Response:

```json
{
  "created": 1776832476,
  "data": [
    {
      "url": "https://delivery-eu.bfl.ai/results/xxx/sample.jpeg?signature=..."
    }
  ]
}
```
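The feed-back loop can be sketched in Python. `refine()` is my own helper, not part of any SDK; the `post` parameter is injectable so the loop can be exercised without network access:

```python
def refine(image_url: str, instructions: list[str], *, api_key: str, post=None) -> str:
    """Apply instructions one round at a time, feeding each result back
    as input_image. Each round bills as one image, and each returned
    data[0].url expires in ~10 minutes, so either chain calls promptly
    or download intermediates."""
    if post is None:  # default to requests; injectable for testing
        import requests
        post = requests.post
    for instruction in instructions:
        resp = post(
            "https://api.apiyi.com/v1/images/generations",
            headers={"Authorization": f"Bearer {api_key}"},
            json={
                "model": "flux-2-pro",
                "prompt": instruction,
                "input_image": image_url,
            },
            timeout=120,
        )
        image_url = resp.json()["data"][0]["url"]
    return image_url
```

Usage would look like `refine("https://example.com/draft.jpg", ["Remove the background clutter", "Warm up the color grading"], api_key="sk-your-api-key")`.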
Notes on the result:

- `data[0].url` is valid for only 10 minutes
- Results are served from delivery-eu.bfl.ai / delivery-us.bfl.ai; once the signature expires after 10 min, fetch is blocked
- The response does not return `b64_json` — only `url`

Authorization: API Key from the APIYI Console.
- model (string, required): FLUX model ID. For multi-reference fusion prefer flux-2-pro / flux-2-max; for single-image edits also flux-kontext-max / flux-kontext-pro.
  Options: flux-2-pro, flux-2-max, flux-2-flex, flux-2-klein-9b, flux-2-klein-4b, flux-kontext-max, flux-kontext-pro
- prompt (string, required): Edit / fusion instruction. In multi-reference scenarios, refer to images by index: "image 1" / "image 2" / "image 3" map to input_image / input_image_2 / input_image_3.
  Example: "Naturally blend these two images"
- input_image (string, required): Public URL for reference image 1. Use plain URLs in the Playground; for local code you can also pass a data:image/png;base64,xxx data URL.
  Example: "https://static.apiyi.com/apiyi-logo.png"
- input_image_2 (string, optional): Public URL for reference image 2
- input_image_3 (string, optional): Public URL for reference image 3
- input_image_4 (string, optional): Public URL for reference image 4
- input_image_5 (string, optional): Public URL for reference image 5
- input_image_6 (string, optional): Public URL for reference image 6
- input_image_7 (string, optional): Public URL for reference image 7
- input_image_8 (string, optional): Public URL for reference image 8 (only FLUX.2 [pro/max/flex] supports up to 8)
- aspect_ratio (string, optional): Aspect ratio, e.g. 1:1 / 16:9 / 9:16 / 4:3 / 3:4. Defaults to the first input image.
- seed (integer, optional): Fix for reproducibility.
- safety_tolerance (integer, optional, 0 ≤ x ≤ 6, default 2): Moderation level. 0 = strictest, 6 = most permissive.
- output_format (string, optional, default jpeg): jpeg or png.
- prompt_upsampling (boolean, optional, default false): Auto-upsample the prompt.
- steps (integer, optional, 1 ≤ x ≤ 50, default 50): Only flux-2-flex. Inference steps.
- guidance (number, optional, 1.5 ≤ x ≤ 10, default 4.5): Only flux-2-flex. Guidance scale.