

Quick Answer

APIYI currently does NOT provide an async task-ID query interface for image generation. All image models are synchronous: the request opens a long connection → waits for generation → returns the image directly. We operate as an upstream pass-through and do not store any user business data, so we cannot offer “reconnect with an ID to fetch a previously generated result.” We recommend setting a reasonable timeout on the client side, keeping the connection alive, and recording requests/responses in your own backend.
In plain terms: synchronous calls + reasonable timeout + client-side task records = effectively a lightweight async queue you control. The end-user experience is nearly identical.

Why No Task-ID Async Query?

Upstream Pass-Through

Our image endpoints mirror the upstream official API’s synchronous behavior exactly, with no extra queue layer that could introduce inconsistency or latency.

Privacy & Security First

For user privacy and data security, we do not record any business content (prompts, generated images), so retrieving past results by ID is impossible by design.

Sync Covers Most Cases

With a properly tuned timeout and a kept-alive connection, the vast majority of image generation requests complete successfully within a single call.
1. Use long-lived connections + reasonable timeout on the client

Set your HTTP client timeout to a safe upper bound for the model’s generation time (typically 60–300 seconds depending on the model), and enable keep-alive so intermediate network layers don’t drop the connection early. Generation time varies significantly across models — contact support for a per-model recommended timeout table.
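As a minimal sketch, per-model timeout selection plus a keep-alive header might look like this (the endpoint URL, model names, and timeout values below are illustrative assumptions, not official figures — confirm real timeouts with support):

```python
import urllib.request

# Hypothetical per-model timeout table -- ask support for the real values.
MODEL_TIMEOUTS = {"fast-model": 60, "slow-model": 300}  # seconds
DEFAULT_TIMEOUT = 180

def build_image_request(model: str, body: bytes):
    """Build a POST request with keep-alive and a per-model read timeout."""
    timeout = MODEL_TIMEOUTS.get(model, DEFAULT_TIMEOUT)
    req = urllib.request.Request(
        "https://api.example.com/v1/images/generations",  # placeholder URL
        data=body,
        headers={
            "Connection": "keep-alive",  # hint intermediaries to hold the line
            "Content-Type": "application/json",
        },
        method="POST",
    )
    # Later: urllib.request.urlopen(req, timeout=timeout)
    return req, timeout
```

A requests or httpx session with a long read timeout achieves the same thing; the key point is that the read timeout, not just the connect timeout, must cover the full generation time.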
2. Record tasks and responses in your own backend

Since we don’t persist business data, generate a business-side task ID for each request and store the prompt, parameters, and the final result (or error) in your database. Even if the frontend disconnects, your backend still has the full record.
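A minimal sketch of such a record, assuming SQLite as the store (table and column names are illustrative; swap in your real database):

```python
import json
import sqlite3
import uuid

# In-memory DB for illustration; use a persistent database in production.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE image_tasks (
        task_id TEXT PRIMARY KEY,
        status  TEXT NOT NULL,  -- pending / done / failed
        prompt  TEXT NOT NULL,
        params  TEXT,           -- JSON-encoded request parameters
        result  TEXT            -- image URL, or error message on failure
    )
""")

def record_task(prompt: str, params: dict) -> str:
    task_id = str(uuid.uuid4())  # business task ID generated on YOUR side
    conn.execute(
        "INSERT INTO image_tasks (task_id, status, prompt, params) "
        "VALUES (?, 'pending', ?, ?)",
        (task_id, prompt, json.dumps(params)),
    )
    return task_id

def finish_task(task_id: str, result_url: str) -> None:
    conn.execute(
        "UPDATE image_tasks SET status = 'done', result = ? WHERE task_id = ?",
        (result_url, task_id),
    )
```

With this in place, a frontend disconnect loses nothing: the worker still writes the final result against the task ID.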
3. Implement your own async wrapper

If your product must be async (e.g., the frontend can’t wait on a long-running call), add a thin async layer in your backend:
  • Frontend POSTs a task → backend enqueues → returns a business task ID
  • A backend worker calls APIYI synchronously → writes the result back to the database
  • Frontend polls or subscribes via WebSocket using its task ID
This is functionally equivalent to a platform-native async API, and all your data stays under your own control.

Client-Side Async Wrapper (Reference)

# Pseudocode: implement an async shell in your own backend
from uuid import uuid4

def submit_image_task(prompt):
    task_id = str(uuid4())  # business task ID, generated and owned by you
    db.save(task_id, status="pending", prompt=prompt)
    queue.push({"task_id": task_id, "prompt": prompt})
    return task_id

def worker(job):
    try:
        # Synchronous call to APIYI; size the timeout for the model
        result = apiyi_client.images.generate(
            prompt=job["prompt"],
            timeout=180,
        )
        db.update(job["task_id"], status="done", url=result.url)
    except TimeoutError:
        db.update(job["task_id"], status="failed", error="timeout")
    except Exception as exc:  # record other upstream errors too
        db.update(job["task_id"], status="failed", error=str(exc))

def query_image_task(task_id):
    return db.get(task_id)  # frontend polls your own backend by task_id
Key idea: the business task ID is generated by your code and stored in your database. APIYI is only responsible for the “synchronously generate” step.
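To make the pseudocode concrete, here is a self-contained in-memory version: a dict stands in for the database, queue.Queue for the task queue, and a stub function for the synchronous APIYI call (all names here are illustrative):

```python
import queue
import uuid

tasks = {}            # stands in for your database
jobs = queue.Queue()  # stands in for your task queue

def fake_apiyi_generate(prompt: str) -> str:
    # Stand-in for the real synchronous APIYI call.
    return f"https://img.example.com/{hash(prompt) & 0xffff}.png"

def submit(prompt: str) -> str:
    task_id = str(uuid.uuid4())
    tasks[task_id] = {"status": "pending", "prompt": prompt}
    jobs.put(task_id)
    return task_id

def run_worker_once() -> None:
    task_id = jobs.get()
    try:
        url = fake_apiyi_generate(tasks[task_id]["prompt"])
        tasks[task_id].update(status="done", url=url)
    except Exception as exc:
        tasks[task_id].update(status="failed", error=str(exc))

def query(task_id: str) -> dict:
    return tasks[task_id]
```

In production the worker would run in a background thread or a separate process, but the lifecycle is exactly the one sketched above: submit, work, poll.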

FAQ

Why do my image requests keep timing out?

Most timeouts come from a too-short client timeout or an intermediate network layer (reverse proxy, gateway, etc.) terminating the long connection early. Troubleshooting order:
  1. Confirm your HTTP client’s read timeout is raised to 60–300 seconds
  2. Confirm intermediate layers (nginx, API gateway, CDN) also have raised timeouts
  3. Enable keep-alive to prevent forced disconnection
  4. Contact support for the recommended timeout for your specific model
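For step 2 above, a typical nginx reverse-proxy configuration would raise the proxy timeouts along these lines (values are illustrative; tune them to your slowest model):

```nginx
location /v1/images/ {
    proxy_read_timeout    300s;  # wait up to 300s for the upstream response
    proxy_send_timeout    300s;
    proxy_connect_timeout 60s;
    proxy_http_version    1.1;   # needed for keep-alive to the upstream
    proxy_set_header      Connection "";
}
```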
Can I retrieve a result after the connection has timed out?

Unfortunately, no. We are an upstream pass-through and do not persist generation results. If a sync call is interrupted by a timeout, the result is lost and the client must retry. The fix is to set the timeout high enough up front to avoid cutting off a request that was about to succeed.
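Since retrying is the only recovery path after a timeout, a small client-side retry helper can be sketched like this (attempt counts and backoff values are illustrative, and `call` is any zero-argument function that performs the synchronous request):

```python
import time

def generate_with_retry(call, attempts=3, base_delay=2.0):
    """Retry a synchronous generate call on timeout, with exponential backoff.

    `call` performs the actual request; the numbers here are illustrative,
    not official guidance.
    """
    for attempt in range(attempts):
        try:
            return call()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the timeout to the caller
            time.sleep(base_delay * (2 ** attempt))
```

Pair this with a generous per-attempt timeout; retrying with a timeout that is too short just repeats the same failure.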
Will APIYI add a native async interface for image generation?

We’re aware that some upstream platforms are slow and that async is friendlier in those cases. We may add async capabilities in the future, but there is no timeline yet — no promises. Until then, please follow the “client-side async wrapper” approach above.
Does video generation use a task-ID flow?

Yes — video generation is inherently asynchronous (by upstream design). It returns a task_id and the client polls task status to retrieve the final video. This differs from the synchronous image endpoints — please follow the model-specific documentation.

Model Selection Guide

Capabilities and use cases for each image model

API Concurrency & Rate

Concurrency limits, rate limiting, and best practices

Call Logs & Data

Our data retention policy and log controls

Contact Support

Get the per-model timeout table or further consultation