APIYI supports 200+ mainstream AI models. This page provides detailed model information, pricing, and usage instructions.
Enterprise-grade, professional, and stable AI large model API hub
All models are officially sourced and forwarded, aggregated in one place at roughly 20% off (combining top-up bonuses and exchange-rate advantages). No rate limits, no credit expiration, no account-ban risk, pay-as-you-go billing, and long-term reliable service.
🔥 Currently Recommended Models
The following popular models are currently in stable supply. For the complete model list and real-time pricing, visit the APIYI Console Pricing Page.
Model Categories
🤖 OpenAI Series
Reasoning Models
| Model Name | Model ID | Features | Recommended Scenarios |
|---|---|---|---|
| GPT-5 ⭐ | gpt-5 | Latest flagship model, ultra-strong reasoning | Top-tier reasoning, complex tasks |
| GPT-5 Mini | gpt-5-mini | GPT-5 lightweight version, excellent performance | Balance performance and cost |
| GPT-5 Nano | gpt-5-nano | GPT-5 ultra-lightweight version | Large-scale batch processing |
| o3 ⭐ | o3 | Latest reasoning model, significantly price-reduced, extremely cost-effective | Complex reasoning, math, programming |
| o4-mini | o4-mini | Lightweight reasoning model | Top choice for programming tasks |
GPT-5 Series Usage Notes (see the request sketch below):
- `temperature` must be set to 1 (the only supported value)
- Use `max_completion_tokens` instead of `max_tokens`
- Do not pass the `top_p` parameter
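As a concrete illustration of these constraints, here is a minimal request sketch using the official `openai` Python SDK against an OpenAI-compatible endpoint. The `base_url` shown is an assumption; substitute the gateway address and API key from your APIYI console.

```python
# Minimal sketch: calling gpt-5 through an OpenAI-compatible endpoint.
# The base_url is an assumption; use the address shown in your APIYI console.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_APIYI_KEY",             # key created in the APIYI console
    base_url="https://api.apiyi.com/v1",  # assumed gateway address
)

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Explain quicksort in two sentences."}],
    temperature=1,              # GPT-5 series only accepts temperature=1
    max_completion_tokens=512,  # use this instead of max_tokens
    # note: do not pass top_p for the GPT-5 series
)
print(response.choices[0].message.content)
```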
GPT Series
| Model Name | Model ID | Context Length | Features | Recommended Scenarios |
|---|---|---|---|---|
| GPT-5 Chat Latest ⭐ | gpt-5-chat-latest | 128K | Counterpart of the GPT-5 used in the ChatGPT web app | Need latest features |
| GPT-4.1 ⭐ | gpt-4.1 | 128K | Fast speed, one of the main models | General applications |
| GPT-4.1 Mini | gpt-4.1-mini | 128K | Cheaper lightweight version | Cost-sensitive scenarios |
| GPT-4o | gpt-4o | 128K | Balanced comprehensive capabilities, multimodal support | General scenarios |
| GPT-4o Mini | gpt-4o-mini | 128K | Lightweight fast version | Quick response |
Codex Programming Series
| Model Name | Model ID | Billing Mode | Features | Recommended Scenarios |
|---|---|---|---|---|
| GPT-5 Codex High ⭐ | gpt-5-codex-high | Per-token/Per-call | Benchmarked against GPT-5, strongest programming | Complex programming tasks |
| GPT-5 Codex Medium | gpt-5-codex-medium | Per-token/Per-call | Medium performance, moderate price | Regular programming tasks |
| GPT-5 Codex Low | gpt-5-codex-low | Per-token/Per-call | Lightweight version, lowest cost | Simple code generation |
Codex Series Dual Billing Modes:
- Per-token billing: suited to conversational use with small token counts
- Per-call billing: suited to large-context programming scenarios, where it is more cost-effective
Image Generation Models
| Model Name | Model ID | Supported Sizes | Features | Price |
|---|---|---|---|---|
| Nano Banana ⭐ | gemini-2.5-flash-image-preview | Multiple sizes | Google’s strongest image model, fast speed | $0.025/image |
| SeeDream 4.0 ⭐ | seedream-4-0-250828 | 4K HD | BytePlus Volcano partnership, high-quality output | $0.025/image |
| GPT-Image-1 ⭐ | gpt-image-1 | 1024×1024 etc. | Cost-effective image generation | See docs below |
| Sora Image | sora_image | Multiple sizes | Reverse-engineered model, simulates official conversation-based generation | See docs |
| GPT-4o Image | gpt-4o-image | Multiple sizes | Reverse-engineered model, conversation-style generation | See docs |
| DALL·E 3 | dall-e-3 | 1024×1024 etc. | Classic image generation model | Billed by size |
Image Generation Testing Tool
Visit imagen.apiyi.com to try out the various image generation models.
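For the OpenAI-format image models, a request sketch is shown below. It assumes APIYI forwards the standard Images API unchanged; the `base_url` is an assumption, and `gpt-image-1` returns base64-encoded image data rather than a URL.

```python
# Sketch: generating an image with gpt-image-1 via the standard Images API.
# Assumes the gateway forwards the OpenAI Images endpoint; base_url is an assumption.
import base64
from openai import OpenAI

client = OpenAI(api_key="YOUR_APIYI_KEY", base_url="https://api.apiyi.com/v1")

result = client.images.generate(
    model="gpt-image-1",
    prompt="A watercolor painting of a lighthouse at dusk",
    size="1024x1024",
)

image_bytes = base64.b64decode(result.data[0].b64_json)  # gpt-image-1 returns base64 data
with open("lighthouse.png", "wb") as f:
    f.write(image_bytes)
```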
🎭 Claude Series (Anthropic)
Claude 4 Series (Latest)
| Model Name | Model ID | Context Length | Features | Recommended Scenarios |
|---|---|---|---|---|
| Claude 4 Sonnet ⭐ | claude-sonnet-4-20250514 | 200K | Latest model, top choice for programming | Code generation, analysis |
| Claude 4 Sonnet Thinking | claude-sonnet-4-20250514-thinking | 200K | Chain-of-thought mode | Complex reasoning |
| Claude Opus 4.1 ⭐ | claude-opus-4-1-20250805 | 200K | Iterative upgrade, programming-optimized | High-demand programming tasks |
| Claude Opus 4.1 Thinking | claude-opus-4-1-20250805-thinking | 200K | Chain-of-thought mode, reasoning-enhanced | Top-tier reasoning tasks |
Important Note: Opus 4 is no longer recommended. Please migrate to Opus 4.1 for better performance and programming-specific optimizations.
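Claude models are typically called through the same OpenAI-compatible endpoint with the model IDs from the table above. The streaming sketch below makes that assumption; the `base_url` is again a placeholder.

```python
# Sketch: streaming a Claude 4 Sonnet response via the OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(api_key="YOUR_APIYI_KEY", base_url="https://api.apiyi.com/v1")

stream = client.chat.completions.create(
    model="claude-sonnet-4-20250514",
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": "Review this function for edge cases: def div(a, b): return a / b"},
    ],
    max_tokens=1024,
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```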
🌟 Google Gemini Series
| Model Name | Model ID | Context Length | Features | Recommended Scenarios |
|---|---|---|---|---|
| Gemini 2.5 Pro ⭐ | gemini-2.5-pro | 2M | Official release, programming advantage, strong multimodal | Long text, programming, multimodal |
| Gemini 2.5 Pro Preview | gemini-2.5-pro-preview-06-05 | 2M | Preview version | Test new features |
| Gemini 2.5 Flash ⭐ | gemini-2.5-flash | 1M | Fast speed, low cost | Quick response scenarios |
| Gemini 2.5 Flash Lite | gemini-2.5-flash-lite | 1M | Ultra-lightweight, faster and cheaper | Large-scale simple tasks |
🚀 xAI Grok Series
| Model Name | Model ID | Features | Recommended Scenarios |
|---|---|---|---|
| Grok 4 ⭐ | grok-4 | Latest official version | General tasks |
| Grok 3 | grok-3 | Official stable version | Daily use |
| Grok 3 Mini | grok-3-mini | Small model with reasoning | Lightweight tasks |
🔍 DeepSeek Series
| Model Name | Model ID | Context Length | Features | Recommended Scenarios |
|---|---|---|---|---|
| DeepSeek V3.1 ⭐ | deepseek-v3-1-250821 | 128K | Hybrid reasoning with switchable Think/Non-Think modes | Intelligent reasoning, programming |
| DeepSeek R1 | deepseek-r1 | 64K | Reasoning model | Math, reasoning |
| DeepSeek V3 | deepseek-v3 | 128K | Strong comprehensive capabilities | General scenarios |
🐘 Chinese Model Series
Alibaba Qwen
| Model Name | Model ID | Context Length | Features |
|---|---|---|---|
| Qwen Max | qwen-max | 32K | Strongest version |
| Qwen Plus | qwen-plus | 32K | Enhanced version |
| Qwen Turbo | qwen-turbo | 32K | Fast version |
Moonshot Kimi Series
| Model Name | Model ID | Context Length | Features |
|---|---|---|---|
| Kimi K2 Official Release ⭐ | kimi-k2-250711 | 200K | Official Volcano Engine partnership, strong stability |
Billing Methods
- Pay-as-you-go: charged based on actual token usage (see the cost-estimation sketch below)
- No minimum charge: pay only for what you use; balance never expires
- Real-time deduction: fees are deducted from your balance immediately after each call
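Because a `usage` block is returned with every response, you can estimate the cost of each call directly. The per-million-token prices below are placeholders, not APIYI's actual rates; check the pricing page for the model you use.

```python
# Sketch: estimating pay-as-you-go cost from the usage block of a response.
# PRICE_* values are placeholders; look up real rates on the console pricing page.
from openai import OpenAI

PRICE_PER_M_INPUT = 1.0   # USD per 1M prompt tokens (placeholder)
PRICE_PER_M_OUTPUT = 4.0  # USD per 1M completion tokens (placeholder)

client = OpenAI(api_key="YOUR_APIYI_KEY", base_url="https://api.apiyi.com/v1")
response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "Summarize the benefits of caching."}],
)

usage = response.usage
cost = (usage.prompt_tokens * PRICE_PER_M_INPUT
        + usage.completion_tokens * PRICE_PER_M_OUTPUT) / 1_000_000
print(f"prompt={usage.prompt_tokens}, completion={usage.completion_tokens}, "
      f"estimated cost=${cost:.6f}")
```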
Pricing Advantages
- Official source forwarding with slight price advantages
- Bulk users can contact customer service for better pricing
- New users receive a 3-million-token testing credit upon registration
View Real-time Pricing
Visit APIYI Console Pricing Page to view latest pricing for all models.
🛠️ Usage Recommendations
Model Selection Guide
Programming Development
- Top choice: GPT-5 Codex series, Claude 4 Sonnet, Claude Opus 4.1, DeepSeek V3.1, o4-mini, Gemini 2.5 Pro
- Alternatives: DeepSeek V3, Kimi K2 Official Release, GPT-5 (note parameter settings)
Text Creation
- Top choice: GPT-5, Claude 4 Sonnet, GPT-4.1
- Alternatives: GPT-4o, GPT-5 Chat Latest, Kimi K2 Official Release, Qwen Max
Quick Response
- Top choice: GPT-4o Mini, Gemini 2.5 Flash
- Alternatives: Gemini 2.5 Flash Lite, Grok 3 Mini, GPT-4.1 Mini
Image Generation
- Currently popular: Nano Banana, SeeDream 4.0 (both $0.025/image)
- Stable and reliable: GPT-Image-1 (high official pricing, ~20% off on our platform)
- Reverse-engineered, cheapest: sora_image, gpt-4o-image
Long Text Processing
- Top choice: Gemini 2.5 Pro (2M context)
- Alternatives: Claude 4 series (200K context)
Cost Optimization Recommendations
- Tiered Usage: Use cheaper models for simple tasks, advanced models for complex tasks
- Test Optimization: Test with small models first, use large models after determining needs
- Batch Processing: Choose Nano or Mini versions for large volumes of similar tasks
- Cache Reuse: Cache results for repeated queries (a combined routing-and-caching sketch follows this list)
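The sketch below combines the tiered-usage and cache-reuse ideas: short, simple prompts go to a cheaper model, and repeated prompts are served from a local cache. The model choices and the length threshold are illustrative only.

```python
# Sketch: tiered model routing plus a simple result cache.
# Models and the 2000-character threshold are illustrative, not prescriptive.
import hashlib
from openai import OpenAI

client = OpenAI(api_key="YOUR_APIYI_KEY", base_url="https://api.apiyi.com/v1")
_cache: dict[str, str] = {}

def ask(prompt: str, complex_task: bool = False) -> str:
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key in _cache:  # cache reuse: skip the API for repeated queries
        return _cache[key]

    # tiered usage: cheap model for short/simple prompts, stronger model otherwise
    model = "gpt-4.1" if complex_task or len(prompt) > 2000 else "gpt-4o-mini"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    _cache[key] = answer
    return answer

print(ask("What is the capital of France?"))  # served by the cheap tier
print(ask("What is the capital of France?"))  # second call hits the cache
```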
The model list is continuously updated, and newly released models are added promptly. For specific model needs or bulk requirements, please contact customer service.