TL;DR
| If you need | Pick |
|---|---|
| Precise size / quality control (incl. 4K), must match OpenAI official exactly | gpt-image-2 (Official) |
| Predictable flat pricing ($0.03/image), faster generation, minimal params | gpt-image-2-all (Reverse) |
What the -all suffix means
It distinguishes the reverse-engineered model from the official one. On APIYI, the -all suffix = reverse-engineered model; no suffix = official direct connection.
Full Comparison Table
| Dimension | gpt-image-2-all (Reverse, cost-effective) | gpt-image-2 (Official) |
|---|---|---|
| Model name | gpt-image-2-all | gpt-image-2 |
| Channel nature | Reverse-engineered (parity with ChatGPT web image gen) | Official direct (OpenAI Images API) |
| Pricing | Per-call: flat $0.03/call | Token-metered: matches official; ~85% of list price after APIYI deposit bonuses |
| Typical cost/image | $0.03 (regardless of size / quality) | Measured $0.03 – $0.2 (correlates with prompt length, size, quality) |
| Token group | Default | Default |
| Token type | Per-call or Token-priority both work | Token-priority only (this model is token-billed; per-call tokens will be rejected) |
| Recommended endpoint | ⭐ /v1/chat/completions (chat-style, primary) | /v1/images/generations + /v1/images/edits |
| Alt endpoints | /v1/images/generations, /v1/images/edits | (only the two official ones) |
| Upload format | base64 or https URL (chat endpoint) | multipart file (edit endpoint) |
| Output format | b64_json (includes prefix) or url (R2 CDN) | b64_json (raw base64, no prefix) |
| Reference image count | Multiple (chat-mode upper bound is high) | Max 5 (image[]) |
| Mask inpainting | ❌ Not supported | ✅ Supported (alpha channel required) |
| Prompt adherence | Good | Excellent |
| Generation speed | ~60 seconds | ~100-120 seconds; complex prompts or 4K can reach 3-5 minutes |
| Resolution control | Prompt-only, output between 1K-2K | size parameter, 1K / 2K / 3840×2160 4K |
| Common output sizes | 16:9 → 1672×941, 9:16 → 941×1672, 1:1 → 1254×1254 | 8 presets + any valid custom size |
| Quality parameter | ❌ No quality | ✅ low / medium / high / auto |
| Transparent background | — | ❌ Not supported (background: transparent errors) |
| Chinese prompts | ✅ Native | ✅ Native |
| Text rendering | High fidelity | High fidelity (strongest at high tier) |
| Content restrictions | Looser | Stricter (OpenAI official policy) |
| API docs | GPT-Image-2-All Overview | GPT-Image-2 Overview |
🔑 Create or manage API tokens: https://api.apiyi.com/token
When creating a token in the console, choose a group (Default is fine) and a token type (Per-call / Token-priority). Calling gpt-image-2 (official) requires a “Token-priority” token; per-call tokens will be rejected due to billing-mode mismatch.
When to Pick Each
Pick gpt-image-2-all (Reverse) when
💰 Predictable cost
Stable $0.03/image with no size/quality tier. Ideal for batch production with hard cost ceilings (infographics, marketing assets, e-commerce thumbnails).
⚡ Speed-first
~60s generation, almost 2× faster than the official version — better for real-time UX.
🗨️ Chat-style workflows
Primary endpoint is /v1/chat/completions — multi-turn iterative editing, text-to-image, and reference editing all share one endpoint. Simplest integration.
🌏 Chinese + marketing text
Native Chinese prompt support, excellent text rendering for signage / posters / infographics — great for Chinese-audience content production.
Pick gpt-image-2 (Official) when
🖼️ Precise size control
size accepts any valid resolution (incl. 3840×2160 4K). Required for movie posters, desktop wallpapers, and video covers where exact aspect ratio / resolution matters.
🎚️ Quality tiers
quality supports low/medium/high/auto. Use low for drafts to save cost; high for print-grade finals.
🎯 Mask inpainting
Alpha-channel mask supported — precisely modify a region while preserving the rest.
🔌 Same as OpenAI Official
Goes through the official Images API — fields and behavior identical to OpenAI official. Existing OpenAI-SDK-based code / systems migrate with zero changes and stay stable long-term.
Key Differences in Detail
1. b64_json format gotcha (migration trap!)
gpt-image-2-all’s b64_json value includes a data-URL prefix (data:image/...;base64,), while gpt-image-2 (official) returns raw base64 with no prefix. Code written against one will silently break against the other.
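A small sketch of a decoder that tolerates both shapes — per the output-format rows in the tables above, the reverse model’s b64_json carries a data-URL prefix while the official model’s is raw base64 (the helper name is illustrative):

```python
import base64

def decode_b64_image(b64_field: str) -> bytes:
    """Decode a b64_json value from either model.

    gpt-image-2-all returns a data URL ("data:image/png;base64,....."),
    while gpt-image-2 (official) returns raw base64 with no prefix.
    Stripping the prefix when present lets one decoder handle both.
    """
    if b64_field.startswith("data:"):
        # drop "data:image/png;base64," and keep only the payload
        b64_field = b64_field.split(",", 1)[1]
    return base64.b64decode(b64_field)

raw = base64.b64encode(b"\x89PNG...").decode()
# same bytes whether or not the prefix is present
assert decode_b64_image(raw) == decode_b64_image("data:image/png;base64," + raw)
```

Normalizing at the decode boundary like this is what makes a later model switch (or failover) painless.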
2. Resolution control
gpt-image-2-all: describe the desired aspect ratio in the prompt; output lands between 1K and 2K. gpt-image-2: use the size parameter (strict), up to 3840×2160 4K.
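A sketch of the two resolution-control styles side by side. The official field names follow the OpenAI Images API (model, prompt, size, quality); the reverse model has no size field, so the aspect hint goes into the prompt text (helper names are illustrative):

```python
def reverse_request(prompt: str, aspect_hint: str) -> dict:
    """gpt-image-2-all: resolution is steered only via the prompt text."""
    return {
        "model": "gpt-image-2-all",
        "messages": [{"role": "user",
                      "content": f"Generate image ({aspect_hint}): {prompt}"}],
    }

def official_request(prompt: str, size: str, quality: str = "auto") -> dict:
    """gpt-image-2 (official): size and quality are explicit, strict parameters."""
    return {"model": "gpt-image-2", "prompt": prompt,
            "size": size, "quality": quality}

print(reverse_request("city skyline at dusk", "16:9"))
print(official_request("city skyline at dusk", size="3840x2160", quality="high"))
```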
3. Upload / output format differences
| Operation | gpt-image-2-all | gpt-image-2 |
|---|---|---|
| Upload reference | base64 data URL or https URL (in chat messages’ image_url) | multipart image[] file field |
| Download output | Default url (R2 CDN, 24h validity), can switch to b64_json (with prefix) | b64_json (raw base64, requires decode) |
| Multi-image fusion | Multiple image_url blocks in chat | image[] array, max 5 |
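The upload-format split in the table can be sketched as two small builders — one producing requests-style multipart tuples for the official edit endpoint, one inlining an image as a data URL for the chat endpoint (helper names are illustrative):

```python
import base64

def official_edit_parts(prompt: str, image_paths: list[str]) -> tuple[dict, list]:
    """Build (form_data, files) for a multipart POST to /v1/images/edits."""
    if len(image_paths) > 5:
        raise ValueError("gpt-image-2 accepts at most 5 reference images")
    data = {"model": "gpt-image-2", "prompt": prompt}
    # each entry becomes one image[] file field in the multipart body
    files = [("image[]", open(p, "rb")) for p in image_paths]
    return data, files

def chat_image_block(image_bytes: bytes, mime: str = "image/png") -> dict:
    """Inline one reference image as a data URL for the chat endpoint."""
    b64 = base64.b64encode(image_bytes).decode()
    return {"type": "image_url",
            "image_url": {"url": f"data:{mime};base64,{b64}"}}

print(chat_image_block(b"\x89PNG")["image_url"]["url"][:30])
```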
4. Cost ballpark
| Scenario | gpt-image-2-all | gpt-image-2 |
|---|---|---|
| 1024×1024 draft | $0.03 | ~$0.006 (low) |
| 1024×1024 medium quality | $0.03 | ~$0.053 (medium) |
| 1024×1024 high quality | $0.03 | ~$0.211 (high) |
| 2048×1152 high quality | $0.03 | ~$0.20+ (token-metered) |
| 3840×2160 4K high quality | ❌ Not supported | Token-metered, significantly higher than 1K |
| Edit / multi-image fusion | $0.03 | Input tokens rise sharply, single call can hit $0.1+ |
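The crossover point in the table above can be captured in a tiny chooser. The per-tier figures are the measured 1024×1024 ballparks from this table — illustrative numbers, not an official price list:

```python
# Measured 1024x1024 ballparks from the cost table (illustrative only)
OFFICIAL_1K = {"low": 0.006, "medium": 0.053, "high": 0.211}
REVERSE_FLAT = 0.03  # gpt-image-2-all: flat per call, any size/quality

def cheaper_model(quality: str) -> str:
    """Pick the lower-cost model for a 1024x1024 image at a given tier."""
    return "gpt-image-2" if OFFICIAL_1K[quality] < REVERSE_FLAT else "gpt-image-2-all"

assert cheaper_model("low") == "gpt-image-2"       # 1K drafts: official wins
assert cheaper_model("high") == "gpt-image-2-all"  # flat $0.03 wins
```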
Bottom line: for batch / low-quality workloads, gpt-image-2-all isn’t always cheaper (1K low is actually less expensive on the official tier). For the mid-to-high quality range without 4K, gpt-image-2-all’s flat $0.03 is more stable and budgetable. For 4K or precise parameter control, gpt-image-2 is the only choice.
Client Settings
| Setting | gpt-image-2-all | gpt-image-2 |
|---|---|---|
| Timeout (conservative) | 300 seconds | 360 seconds (4K high quality realistically reaches 3-5 minutes) |
| Retry strategy | Exponential backoff on 5xx / timeout, max 2 retries | Same |
| Concurrency | chat endpoint is naturally concurrency-friendly | 1 image per call — issue parallel requests for multiple |
| Request ID | request-id response header | x-request-id response header |
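The retry row and the request-ID row can be combined into one small client-side wrapper — a sketch of exponential backoff on 5xx/timeout (max 2 retries, as recommended above) plus a header lookup that works for both models (helper names are illustrative):

```python
import random
import time

def call_with_backoff(send, max_retries: int = 2, base_delay: float = 1.0):
    """Retry on 5xx / timeout with exponential backoff, max 2 retries.

    `send` is any zero-argument callable returning a response-like object
    with .status_code and .headers; it may raise TimeoutError.
    """
    for attempt in range(max_retries + 1):
        try:
            resp = send()
            if resp.status_code < 500:
                return resp
        except TimeoutError:
            pass
        if attempt == max_retries:
            raise RuntimeError("image request failed after retries")
        # back off 1x, 2x, 4x the base delay, with a little jitter
        time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)

def request_id(resp):
    """The two models expose the request id under different header names."""
    return resp.headers.get("x-request-id") or resp.headers.get("request-id")
```

Log the request id on every call: it is the fastest way to get a failed generation investigated.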
FAQ
Can the same API Key call both models?
Yes. Both run on the Default channel — the same API Key calls both with no extra config.
Can the chat endpoint return text instead of an image?
It can. When the image-generation intent isn’t strong enough, gpt-image-2-all’s chat endpoint may return plain text. Workaround: prepend a fixed prefix like “Generate image:” to the user prompt, or constrain output via a system message.
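The two workarounds mentioned above can be applied together in a small message builder — a sketch, with the system-prompt wording as an assumption:

```python
def force_image_intent(user_prompt: str) -> list[dict]:
    """Steer gpt-image-2-all's chat endpoint toward image output.

    A system message plus a fixed "Generate image:" prefix makes the
    image-generation intent explicit, so the model is less likely to
    reply with plain text.
    """
    return [
        {"role": "system",
         "content": "You are an image generator. Always respond with an image."},
        {"role": "user", "content": f"Generate image: {user_prompt}"},
    ]

messages = force_image_intent("a cat wearing sunglasses")
print(messages[1]["content"])
```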
Migrating from 1.5 — which one should I pick?
- Stick with the OpenAI SDK / must match OpenAI official: pick gpt-image-2 (official). Drop input_fidelity, avoid background: transparent, leave the rest unchanged.
- Want to cut cost too: pick gpt-image-2-all (reverse). Flat $0.03/image, simplest migration via the chat endpoint.
Can I deploy both for failover?
Yes. A common pattern: primary gpt-image-2-all (predictable cost, faster), fallback gpt-image-2 (switch to it when 4K or precise control is needed). The two models’ response shapes differ — normalize at the business layer.
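A sketch of that business-layer normalization. The response shapes here are assumptions: the official Images API shape ({"data": [{"b64_json": ...}]}) and a chat-completion shape whose assistant message content contains a URL or data URL — verify both against real responses before relying on this:

```python
def extract_image(resp: dict) -> str:
    """Normalize both (assumed) response shapes to one image reference.

    Returns a URL or data-URL string either way, so downstream code does
    not need to know which model produced the response.
    """
    if "data" in resp:
        # gpt-image-2: official Images API shape, raw base64 in b64_json
        return "data:image/png;base64," + resp["data"][0]["b64_json"]
    # gpt-image-2-all: pull the first URL out of the chat message content
    content = resp["choices"][0]["message"]["content"]
    for token in content.split():
        if token.startswith(("http://", "https://", "data:")):
            return token.strip("()[]")
    raise ValueError("no image found in response")
```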
The R2 CDN image link is slow — what can I do?
Switch the output format from the default url to b64_json (note it carries a data-URL prefix) and decode locally, or download the image once and rehost it on your own storage; the default url links are only valid for 24 hours anyway.
Related Docs
- GPT-Image-2 Overview - Full official integration docs
- GPT-Image-2-All Overview - Full reverse-engineered integration docs
- Deep dive: gpt-image-2 launch - Official version launch
- Deep dive: gpt-image-2-all launch - Reverse-engineered version launch
- Community: Luck GPT-Image 2 ComfyUI Nodes - Dual-node ComfyUI pack covering both models
- Community: APIYI GPT-Image 2 Skills - Dual-skill AI Agent pack covering both models
- Deposit promotions - Recharge bonus policy