POST /v1/images/edits
Edit or fuse one or more reference images by instruction
curl --request POST \
  --url https://api.apiyi.com/v1/images/edits \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: multipart/form-data' \
  --form model=flux-kontext-max \
  --form 'prompt=Place the person from image 1 into the scene from image 2, color palette from image 3' \
  --form 'image[]=@example-file' \
  --form size=1024x1024 \
  --form width=1056 \
  --form height=1056 \
  --form seed=123 \
  --form safety_tolerance=2 \
  --form output_format=jpeg \
  --form steps=50 \
  --form guidance=4.5
{
  "created": 1776832476,
  "data": [
    {
      "url": "https://delivery-eu.bfl.ai/results/xxx/sample.jpeg?signature=..."
    }
  ]
}
The interactive Playground on the right supports direct file upload. Fill in your API Key in the Authorization header (format: Bearer sk-xxx), select image file(s), enter a prompt and a model, then send.
Use this page for “edit or fuse one or more reference images”. Requests are multipart/form-data. For pure text-to-image, see the Text-to-Image endpoint.
⚠️ Key differences / notes
  • Reference image cap varies by model: FLUX.2 [pro/max/flex] up to 8, FLUX.2 [klein] up to 4, FLUX.1 Kontext supports 1 natively
  • Each image ≤ 20MB or 20MP, formats png / jpg / webp
  • Input resolution: min 64×64, max 4MP; dimensions must be multiples of 16
  • The result URL (data[0].url) is valid for only 10 minutes and must be downloaded immediately
  • If width / height is omitted, output dimensions match the first input image
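Given the multiples-of-16 constraint on dimensions, a minimal helper sketch (the function name is illustrative, not part of the API) for snapping a requested width or height to a valid value before sending the request:

```python
def snap_dimension(value: int, minimum: int = 64, maximum: int = 2048) -> int:
    """Clamp a requested dimension to the allowed range, then round down
    to the nearest multiple of 16 as the endpoint requires."""
    value = max(minimum, min(maximum, value))
    return (value // 16) * 16


print(snap_dimension(1056))  # 1056 — already valid
print(snap_dimension(1000))  # 992 — rounded down to a multiple of 16
print(snap_dimension(30))    # 64 — clamped up to the minimum
```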
📎 Multi-reference order matters: the image[] field accepts multiple files in order. The upload order becomes the index reference for “image 1 / image 2 / image 3” in your prompt:
Place the person from image 1 into the scene from image 2, applying the color palette of image 3.
Recommended ≤ 20MB per image, formats png / jpg / webp.

Code Examples

Python (OpenAI SDK · single-image edit)

from openai import OpenAI
import requests

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.apiyi.com/v1"
)

resp = client.images.edit(
    model="flux-kontext-max",
    image=open("photo.png", "rb"),
    prompt="Replace 'joy' with 'BFL' on the sign, keep all other text and layout unchanged",
    size="1024x1024"
)

# data[0].url is valid for only 10 minutes — download immediately
image_url = resp.data[0].url
with open("edited.jpg", "wb") as f:
    f.write(requests.get(image_url, timeout=30).content)

Python (OpenAI SDK · multi-reference fusion)

resp = client.images.edit(
    model="flux-2-pro",
    image=[
        open("person.png", "rb"),
        open("scene.png", "rb"),
        open("style.png", "rb"),
    ],
    prompt="Place the person from image 1 into the scene from image 2, applying the color palette of image 3, keep lighting consistent",
    size="1536x1024"
)

image_url = resp.data[0].url
with open("fused.jpg", "wb") as f:
    f.write(requests.get(image_url, timeout=30).content)

cURL (multi-reference fusion)

curl -X POST "https://api.apiyi.com/v1/images/edits" \
  -H "Authorization: Bearer sk-your-api-key" \
  -F "model=flux-2-pro" \
  -F "prompt=Place the person from image 1 into the scene from image 2, color palette from image 3" \
  -F "size=1536x1024" \
  -F "image[][email protected]" \
  -F "image[][email protected]" \
  -F "image[][email protected]"

cURL (single-image edit · Kontext)

curl -X POST "https://api.apiyi.com/v1/images/edits" \
  -H "Authorization: Bearer sk-your-api-key" \
  -F "model=flux-kontext-pro" \
  -F "prompt=Convert this architectural photo into a pencil sketch style, preserve all structural details" \
  -F "image[][email protected]"

Node.js (native fetch + FormData · multi-reference)

import fs from 'node:fs';

const form = new FormData();
form.append('model', 'flux-2-pro');
form.append('prompt', 'Replace the top of the person from image 1 with the one from image 2');
form.append('size', '1024x1024');
form.append('image[]', new Blob([fs.readFileSync('./person.png')]), 'person.png');
form.append('image[]', new Blob([fs.readFileSync('./outfit.png')]), 'outfit.png');

const resp = await fetch('https://api.apiyi.com/v1/images/edits', {
    method: 'POST',
    headers: { 'Authorization': 'Bearer sk-your-api-key' },
    body: form
});

const { data } = await resp.json();
const img = await fetch(data[0].url);
fs.writeFileSync('fused.jpg', Buffer.from(await img.arrayBuffer()));

Parameter Reference

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| model | text | Yes | flux-kontext-max | FLUX model ID. For editing, prefer flux-kontext-max / flux-kontext-pro (single-image) or flux-2-pro / flux-2-max (multi-image) |
| prompt | text | Yes | (none) | Edit / fusion instruction, up to 32K tokens. Use “image 1 / image 2 / image 3” to reference upload order |
| image[] | file | Yes | (none) | Reference image(s), repeatable. FLUX.2 [pro/max/flex] up to 8, [klein] 4, Kontext 1 |
| size | text | No | matches first input | OpenAI-style size string, e.g., 1024x1024 |
| width | integer | No | matches first input | BFL-native, must be a multiple of 16 |
| height | integer | No | matches first input | BFL-native, must be a multiple of 16 |
| seed | integer | No | random | Fix for reproducibility |
| safety_tolerance | integer | No | 2 | 0 (strictest) – 6 (most permissive) |
| output_format | text | No | jpeg | jpeg / png |
| steps | integer | No | 50 | Only flux-2-flex, max 50 |
| guidance | number | No | 4.5 | Only flux-2-flex, 1.5–10 |

Multi-Reference Strategies

  • Character consistency: upload multiple shots of the same character as references; the model preserves identity features automatically. Great for ad campaigns, comic panels, fashion editorials.
    Eight consistent characters from the reference images,
    in a fashion editorial set on a Tokyo rooftop at golden hour
  • Style transfer: one content image + one style image, with an explicit reference in the prompt:
    Using the style of image 2, render the subject from image 1
  • Object combination: combine objects from multiple images into one new scene:
    The person from image 1 is petting the cat from image 2,
    the bird from image 3 is next to them
  • Outfit swap: swap an outfit from one image onto another subject:
    Replace the top of the person in image 1 with the one from image 2,
    keep the pose and background unchanged
  • Iterative editing: download data[0].url, feed it back as image[] in the next call with a new instruction, and refine progressively. Each round bills as one image.
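The iterative workflow can be sketched as a loop that feeds each downloaded result back in as the next round's reference. A minimal sketch assuming the OpenAI SDK client from the earlier examples; the helper name and file names are illustrative:

```python
import requests


def refine(client, model: str, start_image: str, instructions: list[str]) -> str:
    """Apply each edit instruction in turn, using the previous round's
    downloaded result as the next round's reference image."""
    current = start_image
    for i, instruction in enumerate(instructions):
        with open(current, "rb") as img:
            resp = client.images.edit(model=model, image=img, prompt=instruction)
        # data[0].url expires after 10 minutes -- download right away
        current = f"round_{i}.jpg"
        with open(current, "wb") as f:
            f.write(requests.get(resp.data[0].url, timeout=30).content)
    return current


if __name__ == "__main__":
    from openai import OpenAI  # same client setup as the earlier examples

    client = OpenAI(api_key="sk-your-api-key", base_url="https://api.apiyi.com/v1")
    final = refine(client, "flux-kontext-max", "photo.png", [
        "Convert the photo to a watercolor style",
        "Make the sky a warm sunset orange",
    ])
    print("final image:", final)
```

Remember that each round through the loop bills as one image.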

Response Format

{
    "created": 1776832476,
    "data": [
        {
            "url": "https://delivery-eu.bfl.ai/results/xxx/sample.jpeg?signature=..."
        }
    ]
}
⚠️ data[0].url is valid for only 10 minutes
  • URL hosted on delivery-eu.bfl.ai / delivery-us.bfl.ai, signature expires after 10 min
  • CORS is disabled — browser fetch is blocked
  • In production, download results server-side and re-host them on your own OSS / CDN
  • The FLUX edit endpoint does not return b64_json — only url
Edit requests cost the same as text-to-image (per image, not per token). Multi-reference does not charge extra for additional images (unlike OpenAI gpt-image-2 editing).

Authorizations

Authorization
string
header
required

API Key from the APIYI Console

Body

multipart/form-data
model
enum<string>
default:flux-kontext-max
required

FLUX model ID. For editing prefer flux-kontext-max / flux-kontext-pro (single image) or flux-2-pro / flux-2-max (multi-reference).

Available options:
flux-2-max,
flux-2-pro,
flux-2-flex,
flux-2-klein-9b,
flux-2-klein-4b,
flux-kontext-max,
flux-kontext-pro
prompt
string
required

Edit / fusion instruction, up to 32K tokens. Reference images by index ('image 1 / image 2') matching image[] upload order.

Example:

"Place the person from image 1 into the scene from image 2, color palette from image 3"

image[]
file[]
required

Reference image(s), repeatable. Caps by model:

  • FLUX.2 [pro/max/flex]: up to 8
  • FLUX.2 [klein]: up to 4
  • FLUX.1 Kontext: 1
Each image ≤ 20MB / 20MP, formats png/jpg/webp.
Maximum array length: 8
size
string

OpenAI-style size string. Defaults to first input image dimensions.

Example:

"1024x1024"

width
integer

BFL-native syntax. Must be a multiple of 16.

Required range: 64 <= x <= 2048
height
integer

BFL-native syntax. Must be a multiple of 16.

Required range: 64 <= x <= 2048
seed
integer

Fix for reproducibility.

safety_tolerance
integer
default:2

Moderation level. 0 = strictest, 6 = most permissive.

Required range: 0 <= x <= 6
output_format
enum<string>
default:jpeg

Output format.

Available options:
jpeg,
png
steps
integer
default:50

Only flux-2-flex. Inference steps.

Required range: 1 <= x <= 50
guidance
number
default:4.5

Only flux-2-flex. Guidance scale.

Required range: 1.5 <= x <= 10

Response

Image generated

created
integer
Example:

1776832476

data
object[]

Result array (single image per call)