APIYI currently does NOT provide an async task-ID query interface for image generation. All image models are synchronous: the request opens a long connection → waits for generation → returns the image directly. We operate as an upstream pass-through and do not store any user business data, so we cannot offer "reconnect with an ID to fetch a previously generated result." We recommend setting a reasonable timeout on the client side, keeping the connection alive, and recording requests/responses in your own backend.
In plain terms: synchronous calls + reasonable timeout + client-side task records = effectively a lightweight async queue you control. The end-user experience is nearly identical.
Our image endpoints mirror the upstream official API's synchronous behavior exactly, with no extra queue layer that could introduce inconsistency or latency.
Privacy & Security First
For user privacy and data security, we do not record any business content (prompts, generated images), so retrieving past results by ID is impossible by design.
Sync Covers Most Cases
With a properly tuned timeout and a kept-alive connection, the vast majority of image generation requests complete successfully within a single call
Use long-lived connections + reasonable timeout on the client
Set your HTTP client timeout to a safe upper bound for the model's generation time (typically 60–300 seconds depending on the model), and enable keep-alive so intermediate network layers don't drop the connection early. Generation time varies significantly across models — contact support for a per-model recommended timeout table.
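A minimal sketch of this client-side setup using `requests` (the endpoint URL and payload shape are illustrative assumptions, not APIYI's documented API — substitute your real endpoint):

```python
import requests

# Illustrative defaults: fail fast on connect, wait long on read.
CONNECT_TIMEOUT = 10   # seconds to establish the TCP connection
READ_TIMEOUT = 300     # upper bound for slow models' generation time

def make_client():
    # requests.Session reuses connections (HTTP keep-alive) by default,
    # making it less likely an intermediate layer drops the long call.
    session = requests.Session()
    session.headers.update({"Connection": "keep-alive"})
    return session

def generate_image(session, url, payload):
    # timeout=(connect, read) splits the two phases: a short connect
    # timeout catches dead hosts quickly, the long read timeout leaves
    # room for generation.
    resp = session.post(url, json=payload,
                        timeout=(CONNECT_TIMEOUT, READ_TIMEOUT))
    resp.raise_for_status()
    return resp.json()
```

The `(connect, read)` tuple form matters: a single number would also cap the connect phase at 300 seconds, hiding connectivity problems behind a long wait.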
Record tasks and responses in your own backend
Since we don’t persist business data, generate a business-side task ID for each request and store the prompt, parameters, and the final result (or error) in your database. Even if the frontend disconnects, your backend still has the full record.
Implement your own async wrapper
If your product must be async (e.g., the frontend can’t wait on a long-running call), add a thin async layer in your backend:
Frontend POSTs a task → backend enqueues → returns a business task ID
A backend worker calls APIYI synchronously → writes the result back to the database
Frontend polls or subscribes via WebSocket using its task ID
This is functionally equivalent to a platform-native async API, and all your data stays under your own control.
```python
# Pseudocode: implement an async shell in your own backend

def submit_image_task(prompt):
    task_id = uuid4()
    db.save(task_id, status="pending", prompt=prompt)
    queue.push({"task_id": task_id, "prompt": prompt})
    return task_id

def worker(job):
    try:
        # Synchronous call to APIYI, timeout sized for the model
        result = apiyi_client.images.generate(
            prompt=job["prompt"],
            timeout=180,
        )
        db.update(job["task_id"], status="done", url=result.url)
    except TimeoutError:
        db.update(job["task_id"], status="failed", error="timeout")

def query_image_task(task_id):
    return db.get(task_id)  # frontend polls your own backend by task_id
```
Key idea: the business task ID is generated by your code and stored in your database. APIYI is only responsible for the “synchronously generate” step.
My synchronous calls keep timing out — what should I do?
Most timeouts come from a too-short client timeout or an intermediate network layer (reverse proxy, gateway, etc.) terminating the long connection early. Troubleshooting order:
Confirm your HTTP client’s read timeout is raised to 60–300 seconds
Confirm intermediate layers (nginx, API gateway, CDN) also have raised timeouts
Enable keep-alive to prevent forced disconnection
Contact support for the recommended timeout for your specific model
The call timed out but the image may have actually been generated — can I recover it?
Unfortunately, no. We are an upstream pass-through and do not persist generation results. If a sync call is interrupted by a timeout, the result is lost and the client must retry. The fix is to set the timeout high enough up front to avoid cutting off a request that was about to succeed.
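Since an interrupted call means a full regeneration, a bounded retry wrapper on the client side is a reasonable pattern. A minimal sketch (retry counts and backoff values are illustrative assumptions; `call` stands in for your actual sync request):

```python
import time

def generate_with_retry(call, max_attempts=3, backoff=5):
    """Retry a synchronous generation call a bounded number of times.

    `call` is any zero-argument function that performs the sync request
    and raises TimeoutError on timeout. Because interrupted results
    cannot be recovered, each retry is a full regeneration.
    """
    last_err = None
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except TimeoutError as err:
            last_err = err
            if attempt < max_attempts:
                time.sleep(backoff * attempt)  # simple linear backoff
    raise last_err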
Will async task-ID endpoints be added in the future?
We’re aware that some upstream platforms are slow and that async is friendlier in those cases. We may add async capabilities in the future, but there is no timeline yet — no promises. Until then, please follow the “client-side async wrapper” approach above.
Are video generation endpoints (Sora / VEO etc.) async?
Yes — video generation is inherently asynchronous (by upstream design). It returns a task_id and the client polls task status to retrieve the final video. This differs from the synchronous image endpoints — please follow the model-specific documentation.
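The client-side poll loop for a video task might look like the sketch below. The status field names and terminal states here are assumptions, and `fetch_status` stands in for whatever status endpoint the model-specific documentation defines:

```python
import time

TERMINAL_STATES = {"succeeded", "failed"}  # assumed status values

def poll_video_task(fetch_status, task_id, interval=10, max_wait=1800):
    """Poll a video task until it reaches a terminal state or times out.

    `fetch_status` is any function(task_id) -> dict with a "status" key,
    e.g. a thin wrapper around the task-status endpoint. Polling every
    ~10 s keeps request volume low for long-running jobs.
    """
    waited = 0
    while waited < max_wait:
        info = fetch_status(task_id)
        if info["status"] in TERMINAL_STATES:
            return info
        time.sleep(interval)
        waited += interval
    raise TimeoutError(f"task {task_id} not finished after {max_wait}s")
```

A WebSocket or webhook push, where available, avoids polling entirely; this loop is the lowest-common-denominator fallback.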
What's the recommended timeout for different image models?
Generation time varies widely across models (some take a few seconds, others 30+ seconds or even 1–2 minutes). Let support know which model you’re using and we’ll provide a per-model recommended timeout table.