FLUX image editing API reference and live debugger — upload up to 8 reference images plus an instruction for single-image edits or multi-reference fusion. Works with FLUX.2 and FLUX.1 Kontext.
POST /v1/images/edits
Edit or fuse one or more reference images by instruction
```shell
curl --request POST \
  --url https://api.apiyi.com/v1/images/edits \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: multipart/form-data' \
  --form model=flux-kontext-max \
  --form 'prompt=Place the person from image 1 into the scene from image 2, color palette from image 3' \
  --form 'image[]=@example-file' \
  --form size=1024x1024 \
  --form width=1056 \
  --form height=1056 \
  --form seed=123 \
  --form safety_tolerance=2 \
  --form output_format=jpeg \
  --form steps=50 \
  --form guidance=4.5
```
The interactive Playground on the right supports direct file upload. Fill in your API Key in the Authorization header (format: Bearer sk-xxx), select image file(s), enter prompt and model, then send.
Use this page for “edit or fuse one or more reference images”. Requests are multipart/form-data. For pure text-to-image, see the Text-to-Image endpoint.
⚠️ Key differences / notes
Reference image cap varies by model: FLUX.2 [pro/max/flex] up to 8, FLUX.2 [klein] up to 4, FLUX.1 Kontext supports 1 natively
Each image ≤ 20MB or 20MP, formats png / jpg / webp
Input resolution: min 64×64, max 4MP; dimensions must be multiples of 16
Result URL is valid for only 10 minutes — data[0].url must be downloaded immediately
If width / height is omitted, output dimensions match the first input image
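The input constraints above can be pre-checked before uploading. A minimal sketch (the helper name and the reading of "4MP" as 4,000,000 pixels are our assumptions, not part of the API):

```python
def validate_dimensions(width: int, height: int) -> list[str]:
    """Check one reference image against the documented input limits.
    "4MP" is read here as 4,000,000 pixels -- an assumption, not an API value."""
    problems = []
    if width < 64 or height < 64:
        problems.append("below the 64x64 minimum")
    if width * height > 4_000_000:
        problems.append("exceeds the 4MP limit")
    if width % 16 or height % 16:
        problems.append("dimensions are not multiples of 16")
    return problems

print(validate_dimensions(1056, 1056))  # [] -- the 1056x1056 default passes
print(validate_dimensions(1000, 1000))  # ['dimensions are not multiples of 16']
```

An empty list means the image satisfies all three documented constraints; otherwise resize or pad before uploading.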
📎 Multi-reference order matters
The image[] field accepts multiple files in order. The upload order becomes the index reference for "image 1 / image 2 / image 3" in your prompt:
Place the person from image 1 into the scene from image 2, applying the color palette of image 3.
```python
from openai import OpenAI
import requests

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.apiyi.com/v1"
)

resp = client.images.edit(
    model="flux-kontext-max",
    image=open("photo.png", "rb"),
    prompt="Replace 'joy' with 'BFL' on the sign, keep all other text and layout unchanged",
    size="1024x1024"
)

# data[0].url is valid for only 10 minutes — download immediately
image_url = resp.data[0].url
with open("edited.jpg", "wb") as f:
    f.write(requests.get(image_url, timeout=30).content)
```
```python
resp = client.images.edit(
    model="flux-2-pro",
    image=[
        open("person.png", "rb"),
        open("scene.png", "rb"),
        open("style.png", "rb"),
    ],
    prompt="Place the person from image 1 into the scene from image 2, applying the color palette of image 3, keep lighting consistent",
    size="1536x1024"
)

image_url = resp.data[0].url
with open("fused.jpg", "wb") as f:
    f.write(requests.get(image_url, timeout=30).content)
```
```shell
curl -X POST "https://api.apiyi.com/v1/images/edits" \
  -H "Authorization: Bearer sk-your-api-key" \
  -F "model=flux-2-pro" \
  -F "prompt=Place the person from image 1 into the scene from image 2, color palette from image 3" \
  -F "size=1536x1024" \
  -F "image[]=@person.png" \
  -F "image[]=@scene.png" \
  -F "image[]=@style.png"
```
```javascript
import fs from 'node:fs';

const form = new FormData();
form.append('model', 'flux-2-pro');
form.append('prompt', 'Replace the top of the person from image 1 with the one from image 2');
form.append('size', '1024x1024');
form.append('image[]', new Blob([fs.readFileSync('./person.png')]), 'person.png');
form.append('image[]', new Blob([fs.readFileSync('./outfit.png')]), 'outfit.png');

const resp = await fetch('https://api.apiyi.com/v1/images/edits', {
  method: 'POST',
  headers: { 'Authorization': 'Bearer sk-your-api-key' },
  body: form
});

const { data } = await resp.json();
const img = await fetch(data[0].url);
fs.writeFileSync('fused.jpg', Buffer.from(await img.arrayBuffer()));
```
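If no multipart-capable HTTP client is at hand, the body can also be assembled by hand. A stdlib-only sketch (function and variable names are ours): the point to note is that the repeated image[] parts keep their list order, which is exactly what the prompt's "image 1 / image 2" indices refer to.

```python
import uuid

def build_multipart(fields: dict[str, str],
                    images: list[tuple[str, bytes]]) -> tuple[bytes, str]:
    """Assemble a multipart/form-data body with repeated image[] parts.
    Part order follows list order, matching the prompt's image indices."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(
            (f'--{boundary}\r\n'
             f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
             f'{value}\r\n').encode()
        )
    for filename, data in images:
        parts.append(
            (f'--{boundary}\r\n'
             f'Content-Disposition: form-data; name="image[]"; filename="{filename}"\r\n'
             f'Content-Type: application/octet-stream\r\n\r\n').encode()
            + data + b'\r\n'
        )
    parts.append(f'--{boundary}--\r\n'.encode())
    return b''.join(parts), f'multipart/form-data; boundary={boundary}'

body, content_type = build_multipart(
    {"model": "flux-2-pro",
     "prompt": "Place the person from image 1 into the scene from image 2"},
    [("person.png", b"..."), ("scene.png", b"...")],
)
print(body.count(b'name="image[]"'))  # 2 -- one part per reference image
```

POST the returned body with the returned string as the Content-Type header; in practice a client library such as requests or the official SDK does this for you.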
Character Consistency
Upload multiple shots of the same character as references — the model preserves identity features automatically. Great for ad campaigns, comic panels, fashion editorials.
Eight consistent characters from the reference images, in a fashion editorial set on a Tokyo rooftop at golden hour
Style Transfer
One content image + one style image, with explicit reference in the prompt:
Using the style of image 2, render the subject from image 1
Object Composition
Combine objects from multiple images into one new scene:
The person from image 1 is petting the cat from image 2, the bird from image 3 is next to them
Outfit / Product Swap
Swap an outfit from one image to another subject:
Replace the top of the person in image 1 with the one from image 2, keep the pose and background unchanged
Iterative editing: download data[0].url, feed it back as image[] in the next call with a new instruction, and refine progressively. Each round bills as one image.
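The iterative loop described above is a plain fold over a list of instructions. A sketch of the data flow; edit_once is a hypothetical wrapper of yours around one POST /v1/images/edits call plus the immediate download of data[0].url:

```python
from typing import Callable

def iterative_edit(first: bytes, steps: list[str],
                   edit_once: Callable[[bytes, str], bytes]) -> bytes:
    """Run a chain of edits: each round's output bytes become the next
    round's image[] input. Each round bills as one image; remember that
    every intermediate signed URL expires 10 minutes after issue."""
    current = first
    for instruction in steps:
        current = edit_once(current, instruction)
    return current

# toy stand-in for the API call, just to show the feedback loop
result = iterative_edit(b"img", ["step1", "step2"],
                        lambda img, text: img + b"|" + text.encode())
print(result)  # b'img|step1|step2'
```

In real use, edit_once would upload `current` as the sole image[] part and return the freshly downloaded result bytes.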
URL hosted on delivery-eu.bfl.ai / delivery-us.bfl.ai, signature expires after 10 min
CORS is disabled — browser fetch is blocked
In production, download the image server-side and re-host it on your own OSS / CDN
The FLUX edit endpoint does not return b64_json — only url
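A minimal server-side persistence sketch under those constraints (function names are ours; re-uploading to your own storage is left to your OSS / CDN SDK):

```python
import shutil
import urllib.request

SIGNED_URL_TTL = 600  # delivery-eu.bfl.ai / delivery-us.bfl.ai links expire after 10 minutes

def deadline(issued_at: float) -> float:
    """Latest safe moment (epoch seconds) to start downloading a signed URL."""
    return issued_at + SIGNED_URL_TTL

def persist(url: str, dest: str) -> None:
    """Download data[0].url server-side to local disk. CORS is disabled on
    the delivery hosts, so this cannot run in a browser; fetch immediately
    and re-upload to your own OSS / CDN afterwards."""
    with urllib.request.urlopen(url, timeout=30) as resp, open(dest, "wb") as out:
        shutil.copyfileobj(resp, out)
```

Since the endpoint never returns b64_json, this URL download is the only way to obtain the image bytes.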
Edit requests cost the same as text-to-image (per image, not per token). Multi-reference does not charge extra for additional images (unlike OpenAI gpt-image-2 editing).