Overview
gpt-image-2-all is a reverse-engineered GPT image generation model available on the API易 platform. At a highly competitive $0.03 per image per call, it generates an image in about 30 seconds and supports text-to-image, single-image editing, multi-image fusion, and natural-language editing, with high text-rendering fidelity, fewer content restrictions, and native support for Chinese prompts.

- Text-to-Image API
- Image Editing API
Core Features

- Highly competitive pricing
- High text-rendering fidelity
- Chinese-prompt friendly
- Multi-image fusion
- Fewer content restrictions
- R2 CDN acceleration
- Natural-language editing
- Triple endpoint support: /images/generations, /images/edits, and /chat/completions

Pricing

| Model | Billing | Price | Output |
|---|---|---|---|
| gpt-image-2-all | Per-call | $0.03 / image | 1 image per call |
- Flat pricing; no tiers by resolution, quality, or prompt length
- Failed requests are not charged (auth failures, parameter validation errors)
- For N images, call the API N times in parallel
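The call-N-times pattern can be parallelized on the client. A minimal sketch using only the Python standard library, assuming the gateway base URL from the Endpoints section; the token value is a placeholder:

```python
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

API_BASE = "https://api.apiyi.com/v1"  # API易 gateway
API_KEY = "sk-your-token"              # placeholder: your API易 token

def build_request(prompt: str) -> urllib.request.Request:
    # Each call returns exactly one image, so N images means N requests.
    body = json.dumps({"model": "gpt-image-2-all", "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/images/generations",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

def generate_one(prompt: str) -> dict:
    # Typical generation takes ~30s; use a generous 120s timeout.
    with urllib.request.urlopen(build_request(prompt), timeout=120) as resp:
        return json.loads(resp.read())

def generate_batch(prompts: list[str]) -> list[dict]:
    # Fire the N calls in parallel rather than sequentially.
    with ThreadPoolExecutor(max_workers=min(8, len(prompts))) as pool:
        return list(pool.map(generate_one, prompts))
```

For four variations of one image, `generate_batch(["a red fox"] * 4)` issues four concurrent calls and returns four response payloads.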
Technical Specs
| Attribute | Value |
|---|---|
| Model name | gpt-image-2-all |
| Channel type | Reverse-engineered from the official service |
| Pricing | $0.03 / image, per-call |
| Generation time | ~30 seconds |
| Output resolution | No explicit size parameter; adaptive (describe in prompt) |
| Default response format | url (R2 CDN accelerated link) |
| Alternative format | b64_json (already prefixed with data:image/png;base64,) |
| Chinese prompts | ✅ Natively supported |
| Capabilities | Text-to-image, single-image editing, multi-image fusion, natural-language editing |
Endpoints
| Endpoint | Purpose | Content-Type |
|---|---|---|
| POST /v1/images/generations | Text-to-image | application/json |
| POST /v1/images/edits | Image editing (single/multi) | multipart/form-data |
| POST /v1/chat/completions | Chat-based (multi-turn + reference images) | application/json |
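The chat endpoint takes an OpenAI-style messages payload. A sketch of a payload builder, assuming the standard chat-vision message shape (text parts plus `image_url` parts) for passing reference images; verify the exact shape against the gateway:

```python
def chat_image_payload(prompt: str, image_urls: tuple[str, ...] = ()) -> dict:
    """Build a /v1/chat/completions payload (assumed OpenAI-style vision format)."""
    content = [{"type": "text", "text": prompt}]
    for url in image_urls:
        # Reference images ride along as image_url content parts.
        content.append({"type": "image_url", "image_url": {"url": url}})
    return {
        "model": "gpt-image-2-all",
        "messages": [{"role": "user", "content": content}],
    }
```

Multi-turn editing then works by appending the model's previous reply and a follow-up user message to `messages`.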
Size and Aspect Ratio Control
This model has no `size` parameter — size is described in the prompt. Proven-stable phrasings:
| Need | Suggested Phrasing |
|---|---|
| Square | 1024×1024 square / 1:1 square composition |
| Landscape | Landscape 16:9 / Widescreen 16:9 cinematic |
| Portrait | Portrait 9:16 / Phone poster 9:16 |
| Ultra-wide | Banner 21:9 ultra-widescreen |
| Classic print | 4:3 standard / 3:2 classic |
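Since size lives in the prompt, it can be prepended programmatically. A hypothetical helper using the phrasings from the table above (the dictionary keys are labels of my own choosing):

```python
# Phrasings taken from the table; keys are arbitrary labels.
SIZE_PHRASES = {
    "square": "1024×1024 square, 1:1 square composition",
    "landscape": "Landscape 16:9, widescreen cinematic",
    "portrait": "Portrait 9:16, phone poster",
    "ultrawide": "Banner 21:9 ultra-widescreen",
    "print": "4:3 standard",
}

def with_size(prompt: str, kind: str) -> str:
    # Size cues are most reliable at the start of the prompt.
    return f"{SIZE_PHRASES[kind]}. {prompt}"
```

For example, `with_size("a lighthouse at dusk", "portrait")` yields a prompt that opens with "Portrait 9:16, phone poster."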
Best Practices
- Put size at the start of the prompt
- Use text elements confidently
- Annotate multi-image order: image[] order is meaningful; reference images explicitly as “image1/image2/image3” in the prompt.
- Choose response format by need: b64_json for direct web rendering; url for server-side storage/forwarding.
- Use a ≥120s client timeout
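Because this model's b64_json already carries the data-URL prefix, legacy "prepend the prefix" code will double it. A small guard helper, as a sketch:

```python
def ensure_data_url(b64_json: str) -> str:
    """Return a usable data URL whether or not the prefix is already present."""
    # gpt-image-2-all already returns the data:image/png;base64, prefix;
    # only prepend it when handed bare base64 (e.g. from another model).
    if b64_json.startswith("data:"):
        return b64_json
    return "data:image/png;base64," + b64_json
```

The result can be assigned directly to an `<img src>` or decoded and written to a file.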
Error Codes and Retries
| Status | Meaning | Suggestion |
|---|---|---|
| 401 | Invalid token | Check Bearer Token |
| 429 | Rate limit / quota exhausted | Exponential backoff retry |
| 5xx | Transient gateway/backend error | Retry 1–2 times |
| Timeout | Occasional during peak hours | Use ≥120s client timeout |
- Set the client request timeout to at least 120 seconds (generation typically takes ~30s, leaving a 4× buffer for tail latency)
- Use exponential backoff for 5xx and timeouts (2–3 retries recommended)
- Log the request-id response header for debugging
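The retry guidance above can be wrapped in a small helper. A sketch that retries 429, 5xx, and timeouts with exponential backoff, re-raising permanent 4xx errors immediately:

```python
import time
import urllib.error

def call_with_retry(fn, retries: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying transient failures with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except urllib.error.HTTPError as e:
            # 429 and 5xx are retryable; 401 and other 4xx are permanent.
            retryable = e.code == 429 or e.code >= 500
            if not retryable or attempt == retries:
                raise
        except (TimeoutError, urllib.error.URLError):
            # Timeouts and connection errors: retry until attempts run out.
            if attempt == retries:
                raise
        time.sleep(base_delay * 2 ** attempt)
```

Usage is `call_with_retry(lambda: generate("..."))` around whatever request function you already have.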
FAQ
Can I generate multiple images at once?

No. Each call returns one image; for N images, make N parallel calls.
Do I need to add the data:image/png;base64, prefix to b64_json?

No. The b64_json field already includes the prefix; you can use it directly as an <img src> or write it to a file. If your code follows the old “prepend the prefix” pattern, you’ll produce a broken data URL — add a startsWith('data:') check first.
Why am I getting a different size even though the prompt says 1024x1024?

Output size is adaptive; there is no explicit size parameter. Reinforce the intended aspect ratio with composition keywords (cinematic, phone poster, square composition).
What's the max reference image size and supported formats?

Supported formats are png / jpg / webp. Overly large images may hit gateway limits, and each image in a multi-image fusion must meet the limit.
Does it support streaming?
Can I use the official OpenAI SDK?
Yes. Point base_url to https://api.apiyi.com/v1 and set api_key to your API易 token. However, client.images.generate() sends size/n by default, so we recommend either:

- using client.chat.completions.create() instead; or
- making raw HTTP calls with requests/fetch.
Are Chinese and English prompts meaningfully different?

Chinese prompts are natively supported, so there is no need to translate to English.
Related Documentation
- Text-to-Image Playground - Interactive online testing
- Image Editing Playground - Multi-image fusion and editing
- GPT-Image Series Overview - Official GPT-Image comparison
- API Manual - General calling conventions