POST /v1/images/generations
Edit or fuse one or more reference images by instruction
curl --request POST \
  --url https://api.apiyi.com/v1/images/generations \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "flux-2-pro",
  "prompt": "Naturally blend these two images",
  "input_image": "https://static.apiyi.com/apiyi-logo.png"
}
'
{
  "created": 1776832476,
  "data": [
    {
      "url": "https://delivery-eu.bfl.ai/results/xxx/sample.jpeg?signature=..."
    }
  ]
}


Playground usage: enter your API Key in Authorization (format Bearer sk-xxx). Paste the public URL of reference image 1 into input_image; for multi-reference, fill the URLs of additional images into input_image_2 through input_image_8. Then fill in prompt / model and send. The Playground only accepts URLs; for base64 data URL inputs, copy the code samples below and run them locally.
Use this page for “edit or fuse one or more reference images”. FLUX shares one endpoint /v1/images/generations for both text-to-image and editing (unlike OpenAI gpt-image, FLUX has no separate /edits path): sending input_image triggers edit mode; without it the call is plain text-to-image. The request is application/json and every reference image field is a string (URL or base64 data URL). For pure text-to-image, see the Text-to-Image endpoint.
⚠️ Key differences / notes
  • Endpoint path: /v1/images/generations (shared with text-to-image; not /v1/images/edits)
  • Content-Type: application/json (the apiyi FLUX channel requires JSON, not multipart)
  • Every reference image field is a string: input_image / input_image_2 through input_image_8 accept a public URL (recommended) or a data:image/...;base64,xxx data URL
  • Reference image cap varies by model: FLUX.2 [pro/max/flex] up to 8, FLUX.2 [klein] up to 4, FLUX.1 Kontext supports 1 natively
  • Each image ≤ 20MB or 20MP, formats png / jpg / webp
  • Input resolution: min 64×64, max 4MP; dimensions must be multiples of 16
  • Result URL is valid for only 10 minutes; data[0].url must be downloaded immediately
  • If aspect_ratio is omitted, output dimensions match the first input image
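The resolution rules above (min 64×64, max 4MP, dimensions in multiples of 16) can be checked or applied client-side before upload. A minimal pure-Python sketch; fit_dimensions is a local convenience, and the snap-down rounding is an assumption about how you might pre-fit images, not the server's exact behavior:

```python
def fit_dimensions(w: int, h: int,
                   max_pixels: int = 4_000_000,
                   multiple: int = 16,
                   minimum: int = 64) -> tuple[int, int]:
    """Fit (w, h) to the stated input limits: max 4MP total,
    both sides multiples of 16, and at least 64 px per side.
    Aspect ratio is preserved as closely as the snap allows."""
    # Scale down proportionally if over the megapixel budget
    scale = min(1.0, (max_pixels / (w * h)) ** 0.5)
    w, h = int(w * scale), int(h * scale)
    # Snap down to multiples of 16, never below the 64 px floor
    # (a side below 64 would need upscaling before upload)
    w = max(minimum, (w // multiple) * multiple)
    h = max(minimum, (h // multiple) * multiple)
    return w, h
```

Resize the image to the returned dimensions (e.g. with Pillow) before encoding it as a data URL or uploading it to your own storage.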
📎 Multi-reference order matters. The numbering of input_image / input_image_2 / input_image_3 is exactly the index used by “image 1 / image 2 / image 3” in your prompt:
Place the person from image 1 into the scene from image 2, applying the color palette of image 3.
Each value must be a publicly reachable URL (≤ 20MB recommended) or a data:image/png;base64,xxx data URL.
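To keep the prompt's "image 1 / image 2 / ..." indices in sync with the field names, it can help to build the payload from an ordered list. A hypothetical helper (build_edit_payload is a local convenience, not part of the API):

```python
def build_edit_payload(model: str, prompt: str,
                       images: list[str], **extra) -> dict:
    """Map an ordered list of reference images (URLs or data URLs)
    to FLUX field names: images[0] -> input_image,
    images[1] -> input_image_2, and so on. The list order is
    exactly the 'image 1 / image 2 / ...' order in the prompt."""
    if not 1 <= len(images) <= 8:
        raise ValueError("FLUX.2 [pro/max/flex] accepts 1-8 reference images")
    payload = {"model": model, "prompt": prompt, "input_image": images[0]}
    for i, img in enumerate(images[1:], start=2):
        payload[f"input_image_{i}"] = img
    payload.update(extra)  # seed, output_format, aspect_ratio, ...
    return payload
```

Pass the result straight to requests.post(..., json=payload) as in the examples below.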

Code Examples

cURL (two-image fusion · URL)

curl -X POST "https://api.apiyi.com/v1/images/generations" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "flux-2-pro",
    "prompt": "Naturally blend these two images",
    "input_image": "https://static.apiyi.com/apiyi-logo.png",
    "input_image_2": "https://images.unsplash.com/photo-1762138012600-2ab523f8b35a",
    "seed": 42,
    "output_format": "jpeg"
  }'

cURL (three-image fusion · URL)

curl -X POST "https://api.apiyi.com/v1/images/generations" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "flux-2-pro",
    "prompt": "The person from image 1 is petting the cat from image 2, the bird from image 3 is next to them",
    "input_image": "https://example.com/person.jpg",
    "input_image_2": "https://example.com/cat.jpg",
    "input_image_3": "https://example.com/bird.jpg",
    "seed": 42,
    "output_format": "jpeg"
  }'

cURL (single-image edit · Kontext)

curl -X POST "https://api.apiyi.com/v1/images/generations" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "flux-kontext-pro",
    "prompt": "Convert this architectural photo into a pencil sketch style, preserve all structural details",
    "input_image": "https://your-oss.example.com/architecture.jpg"
  }'

cURL (local file · base64 data URL)

# Encode local image as base64 data URL (macOS / Linux)
B64=$(base64 -w0 < person.png 2>/dev/null || base64 < person.png | tr -d '\n')

curl -X POST "https://api.apiyi.com/v1/images/generations" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d "$(jq -nc --arg img "data:image/png;base64,$B64" '{
    model: "flux-2-pro",
    prompt: "Stylize image 1 as an oil painting",
    input_image: $img
  }')"

Python (requests · two-image fusion)

import requests

resp = requests.post(
    "https://api.apiyi.com/v1/images/generations",
    headers={
        "Authorization": "Bearer sk-your-api-key",
        "Content-Type": "application/json",
    },
    json={
        "model": "flux-2-pro",
        "prompt": "Naturally blend these two images",
        "input_image": "https://static.apiyi.com/apiyi-logo.png",
        "input_image_2": "https://images.unsplash.com/photo-1762138012600-2ab523f8b35a",
        "seed": 42,
        "output_format": "jpeg",
    },
    timeout=120,
)
image_url = resp.json()["data"][0]["url"]

# data[0].url is valid for only 10 minutes — download immediately
with open("fused.jpg", "wb") as f:
    f.write(requests.get(image_url, timeout=30).content)

Python (requests · local file as base64)

import base64, requests, mimetypes

def to_data_url(path: str) -> str:
    mime = mimetypes.guess_type(path)[0] or "image/png"
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    return f"data:{mime};base64,{b64}"

resp = requests.post(
    "https://api.apiyi.com/v1/images/generations",
    headers={
        "Authorization": "Bearer sk-your-api-key",
        "Content-Type": "application/json",
    },
    json={
        "model": "flux-2-pro",
        "prompt": "Place the person from image 1 into the scene from image 2",
        "input_image": to_data_url("person.png"),
        "input_image_2": "https://your-oss.example.com/scene.jpg",
    },
    timeout=120,
)
print(resp.json()["data"][0]["url"])

Python (OpenAI SDK · pass input_image via extra_body)

from openai import OpenAI
import requests

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.apiyi.com/v1"
)

# OpenAI SDK images.generate() targets /v1/images/generations with JSON;
# BFL-native fields are added straight into the body via extra_body.
resp = client.images.generate(
    model="flux-2-pro",
    prompt="Naturally blend these two images",
    extra_body={
        "input_image": "https://static.apiyi.com/apiyi-logo.png",
        "input_image_2": "https://images.unsplash.com/photo-1762138012600-2ab523f8b35a",
        "seed": 42,
        "output_format": "jpeg",
    },
)
image_url = resp.data[0].url
with open("fused.jpg", "wb") as f:
    f.write(requests.get(image_url, timeout=30).content)

Node.js (fetch · multi-reference fusion)

const resp = await fetch('https://api.apiyi.com/v1/images/generations', {
    method: 'POST',
    headers: {
        'Authorization': 'Bearer sk-your-api-key',
        'Content-Type': 'application/json',
    },
    body: JSON.stringify({
        model: 'flux-2-pro',
        prompt: 'Naturally blend these two images',
        input_image: 'https://static.apiyi.com/apiyi-logo.png',
        input_image_2: 'https://images.unsplash.com/photo-1762138012600-2ab523f8b35a',
        seed: 42,
        output_format: 'jpeg',
    }),
});

const { data } = await resp.json();
const img = await fetch(data[0].url);
const fs = await import('node:fs');
fs.writeFileSync('fused.jpg', Buffer.from(await img.arrayBuffer()));

Parameter Reference

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | Yes | flux-2-pro | FLUX model ID. For multi-reference fusion prefer flux-2-pro / flux-2-max; for single-image edits also flux-kontext-max / flux-kontext-pro |
| prompt | string | Yes | — | Edit / fusion instruction, up to 32K tokens. Use “image 1 / image 2 / image 3” to reference input_image / input_image_2 / input_image_3 ordering |
| input_image | string | Yes | — | Reference image 1. Public URL (recommended) or data:image/...;base64,xxx data URL |
| input_image_2 through input_image_8 | string | No | — | Reference images 2–8, URL or data URL. FLUX.2 [pro/max/flex] up to 8, [klein] up to 4; Kontext does not support extras |
| aspect_ratio | string | No | matches first input | E.g. 1:1 / 16:9 / 9:16 / 4:3 / 3:2 |
| seed | integer | No | random | Fix the seed for reproducible output |
| safety_tolerance | integer | No | 2 | 0 (strictest) to 6 (most permissive) |
| output_format | string | No | jpeg | jpeg / png |
| prompt_upsampling | boolean | No | false | Auto-upsample the prompt |
| steps | integer | No | 50 | Only flux-2-flex, max 50 |
| guidance | number | No | 4.5 | Only flux-2-flex, 1.5–10 |

Multi-Reference Strategies

Character consistency: upload multiple shots of the same character as references — the model preserves identity features automatically. Great for ad campaigns, comic panels, fashion editorials.
Eight consistent characters from the reference images,
in a fashion editorial set on a Tokyo rooftop at golden hour
Style transfer: one content image + one style image, with explicit reference in the prompt:
Using the style of image 2, render the subject from image 1
Scene composition: combine objects from multiple images into one new scene:
The person from image 1 is petting the cat from image 2,
the bird from image 3 is next to them
Outfit swap: swap an outfit from one image to another subject:
Replace the top of the person in image 1 with the one from image 2,
keep the pose and background unchanged
Iterative editing: download data[0].url, feed it back as input_image in the next call with a new instruction, and refine progressively. Each round bills as one image.
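The iterative-editing workflow can be sketched as a loop. Here call_api and iterative_edit are illustrative names, and the transport is injected so you can plug in requests or httpx; in production, remember the result URL expires after 10 minutes, so download and re-host between rounds rather than passing the raw URL forward:

```python
def iterative_edit(call_api, model: str, first_image: str,
                   instructions: list[str]) -> str:
    """Refine an image over several rounds: each round sends the
    previous result back as input_image with the next instruction.
    `call_api(payload) -> result_url` is injected so this sketch
    stays transport-agnostic. Returns the final result URL."""
    current = first_image
    for instruction in instructions:
        current = call_api({
            "model": model,
            "prompt": instruction,
            "input_image": current,  # previous round's output
        })
    return current
```

With requests, call_api would POST the payload to /v1/images/generations and return resp.json()["data"][0]["url"], downloading and re-uploading the image to your own storage between rounds.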

Response Format

{
    "created": 1776832476,
    "data": [
        {
            "url": "https://delivery-eu.bfl.ai/results/xxx/sample.jpeg?signature=..."
        }
    ]
}
⚠️ data[0].url is valid for only 10 minutes
  • URL hosted on delivery-eu.bfl.ai / delivery-us.bfl.ai, signature expires after 10 min
  • CORS is disabled — browser fetch is blocked
  • Production must server-side download to your own OSS / CDN
  • The FLUX edit endpoint does not return b64_json — only url
Edit requests cost the same as text-to-image (per image, not per token). Multi-reference does not charge extra for additional images (unlike OpenAI gpt-image-2 editing).

Authorizations

Authorization
string
header
required

API Key from the APIYI Console

Body

application/json
model
enum<string>
default:flux-2-pro
required

FLUX model ID. For multi-reference fusion prefer flux-2-pro / flux-2-max; for single-image edits also flux-kontext-max / flux-kontext-pro.

Available options:
flux-2-pro,
flux-2-max,
flux-2-flex,
flux-2-klein-9b,
flux-2-klein-4b,
flux-kontext-max,
flux-kontext-pro
prompt
string
required

Edit / fusion instruction. In multi-reference scenarios, refer to images by index: 'image 1' / 'image 2' / 'image 3' map to input_image / input_image_2 / input_image_3.

Example:

"Naturally blend these two images"

input_image
string
required

Public URL for reference image 1 (required). Use plain URLs in the Playground; for local code you can also pass a data:image/png;base64,xxx data URL.

Example:

"https://static.apiyi.com/apiyi-logo.png"

input_image_2
string

Public URL for reference image 2 (optional)

input_image_3
string

Public URL for reference image 3 (optional)

input_image_4
string

Public URL for reference image 4 (optional)

input_image_5
string

Public URL for reference image 5 (optional)

input_image_6
string

Public URL for reference image 6 (optional)

input_image_7
string

Public URL for reference image 7 (optional)

input_image_8
string

Public URL for reference image 8 (optional, only FLUX.2 [pro/max/flex] supports up to 8)

aspect_ratio
string

Aspect ratio, e.g. 1:1 / 16:9 / 9:16 / 4:3 / 3:4. Defaults to first input image.

seed
integer

Fix the seed for reproducible output.

safety_tolerance
integer

Moderation level. 0 = strictest, 6 = most permissive. Default 2.

Required range: 0 <= x <= 6
output_format
enum<string>

Output format. Default jpeg.

Available options:
jpeg,
png
prompt_upsampling
boolean

Auto-upsample the prompt. Default false.

steps
integer

Only flux-2-flex. Inference steps. Default 50.

Required range: 1 <= x <= 50
guidance
number

Only flux-2-flex. Guidance scale. Default 4.5.

Required range: 1.5 <= x <= 10

Response

Image generated

created
integer
Example:

1776832476

data
object[]

Result array (single image per call)