API易 is an official partner for Claude models, connecting through dual channels: AWS Bedrock and the direct Anthropic API. This ensures day-one access to the latest Anthropic models such as Claude Opus 4.5 and Sonnet 4.5.

Key Advantages

Immediate Availability

AWS Bedrock + Direct API dual channels ensure new models are available on day one

Dual Format Support

Supports both OpenAI-compatible and Anthropic native formats for flexible integration

Claude Code Compatible

Fully supports Claude Code desktop application with simple configuration

Reliable Dual Routing

AWS + Direct API redundancy ensures service stability and availability

Available Models

API易 supports the complete Claude model family, including the latest Opus 4.5:
| Model Series | Model Name | Recommended Use Case | Price (Input/Output per M tokens) |
| --- | --- | --- | --- |
| Opus 4.5 | claude-opus-4-5-20251101 | Complex coding, deep reasoning | $5 / $25 |
| Sonnet 4.5 | claude-sonnet-4-5-20250929 | General intelligence, code generation | $3 / $15 |
| Haiku 4.5 | claude-haiku-4-5-20251001 | Fast responses, high concurrency | $1 / $5 |
| Opus 4.1 | claude-opus-4-1-20250806 | Advanced tasks (legacy) | $15 / $75 |
We recommend using the latest 4.5 series models for better performance at lower prices. Opus 4.5 leads SWE-bench (80.9%) at just 1/3 the price of 4.1.
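The per-million-token prices above translate directly into per-request cost. As a quick sanity check, here is a small illustrative helper (not part of any SDK) that estimates spend from token counts using the table's prices:

```python
# Illustrative cost estimator based on the price table above.
# Prices are USD per 1M tokens: (input, output).
PRICES = {
    "claude-opus-4-5-20251101": (5.00, 25.00),
    "claude-sonnet-4-5-20250929": (3.00, 15.00),
    "claude-haiku-4-5-20251001": (1.00, 5.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Example: a 10K-token prompt with a 2K-token reply on Opus 4.5
# costs 10_000 * 5/1e6 + 2_000 * 25/1e6 = $0.05 + $0.05 = $0.10.
```

This also makes the "1/3 the price" comparison concrete: the same request on Opus 4.1 would cost $0.30.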

OpenAI-Compatible Format

API易 supports standard OpenAI SDK calls to Claude models. Simply modify the base_url and model parameters.

Python Example

from openai import OpenAI

client = OpenAI(
    api_key="your-apiyi-key",
    base_url="https://api.apiyi.com/v1"
)

response = client.chat.completions.create(
    model="claude-opus-4-5-20251101",
    messages=[
        {
            "role": "user",
            "content": "Implement a quicksort algorithm in Python and explain its time complexity."
        }
    ],
    # Optional: Set reasoning depth (Opus 4.5 exclusive)
    extra_body={
        "anthropic_effort": "medium"  # low/medium/high
    }
)

print(response.choices[0].message.content)
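The OpenAI-compatible endpoint can also stream responses. Below is a sketch using the standard `stream=True` flag of the OpenAI SDK (whether API易 streams Claude output through this flag is an assumption worth verifying); the function takes the `client` configured in the example above:

```python
def stream_chat(client, prompt: str, model: str = "claude-sonnet-4-5-20250929") -> str:
    """Stream a completion via the OpenAI-compatible endpoint, printing
    tokens as they arrive and returning the full assembled text.

    `client` is an openai.OpenAI instance configured with
    base_url="https://api.apiyi.com/v1", as in the example above.
    """
    pieces = []
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,  # chunks arrive as they are generated
    )
    for chunk in stream:
        # Some chunks (role announcements, stop events) carry no text.
        if chunk.choices and chunk.choices[0].delta.content:
            text = chunk.choices[0].delta.content
            print(text, end="", flush=True)
            pieces.append(text)
    return "".join(pieces)

# Usage with the client from the example above:
# full_text = stream_chat(client, "Summarize the CAP theorem in three sentences.")
```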

Node.js Example

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-apiyi-key',
  baseURL: 'https://api.apiyi.com/v1'
});

const response = await client.chat.completions.create({
  model: 'claude-sonnet-4-5-20250929',
  messages: [
    {
      role: 'user',
      content: 'Analyze the performance bottlenecks in this code...'
    }
  ]
});

console.log(response.choices[0].message.content);

cURL Example

curl https://api.apiyi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-apiyi-key" \
  -d '{
    "model": "claude-haiku-4-5-20251001",
    "messages": [
      {
        "role": "user",
        "content": "Hello, introduce yourself."
      }
    ]
  }'

Anthropic Native Format

API易 fully supports the official Anthropic SDK with the native /v1/messages endpoint, requiring no conversion.

Python Native SDK

import anthropic

client = anthropic.Anthropic(
    api_key="your-apiyi-key",
    base_url="https://api.apiyi.com"
)

message = client.messages.create(
    model="claude-opus-4-5-20251101",
    max_tokens=8192,
    messages=[
        {
            "role": "user",
            "content": "Design a highly available microservices architecture and explain the core design principles."
        }
    ],
    # Opus 4.5 exclusive: Set reasoning depth
    extra_headers={
        "anthropic-effort": "high"  # low/medium/high
    }
)

print(message.content[0].text)

Streaming Response

import anthropic

client = anthropic.Anthropic(
    api_key="your-apiyi-key",
    base_url="https://api.apiyi.com"
)

with client.messages.stream(
    model="claude-sonnet-4-5-20250929",
    max_tokens=4096,
    messages=[
        {"role": "user", "content": "Write an article about AI development trends..."}
    ]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

Vision Understanding (Multimodal)

Claude supports image input, ideal for UI analysis and chart interpretation:
import anthropic
import base64

client = anthropic.Anthropic(
    api_key="your-apiyi-key",
    base_url="https://api.apiyi.com"
)

# Read and encode image to base64
with open("screenshot.png", "rb") as f:
    image_data = base64.b64encode(f.read()).decode('utf-8')

message = client.messages.create(
    model="claude-opus-4-5-20251101",
    max_tokens=4096,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": image_data
                    }
                },
                {
                    "type": "text",
                    "text": "This is a webpage screenshot. Please implement the same UI using React + Tailwind CSS."
                }
            ]
        }
    ]
)

print(message.content[0].text)
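If you are calling through the OpenAI-compatible endpoint instead, images are conventionally passed as data-URI `image_url` content parts. A sketch of building that payload (assuming API易 translates `image_url` parts into Claude's native image blocks — worth confirming before relying on it):

```python
import base64

def build_image_message(image_path: str, question: str) -> dict:
    """Build an OpenAI-format user message carrying one PNG plus a text prompt."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    return {
        "role": "user",
        "content": [
            # Image first, question second, mirroring the native example above.
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
            {"type": "text", "text": question},
        ],
    }

# Usage with the OpenAI client from earlier:
# response = client.chat.completions.create(
#     model="claude-opus-4-5-20251101",
#     messages=[build_image_message("screenshot.png", "Describe this UI.")],
# )
```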

Claude Code Desktop Configuration

Claude Code is Anthropic’s official AI coding assistant desktop application, and API易 fully supports it.

Configuration Steps

  1. Open Claude Code Settings
    • Click the gear icon ⚙️ in the top right
    • Select “Settings”
  2. Configure API Key
    • Enter your API易 key in the “API Key” field
    • Key format: sk-xxxxxxxxxxxxxxxx
  3. Configure Base URL
    • Enter in the “Custom API Endpoint” field:
      https://api.apiyi.com
      
  4. Select Model
    • Choose from the “Model” dropdown:
      • claude-opus-4-5-20251101 (strongest coding capability)
      • claude-sonnet-4-5-20250929 (balanced performance and cost)
      • claude-haiku-4-5-20251001 (fast response)
  5. Start Using
    • Click “Save” to save the configuration
    • Select code in your editor, right-click to invoke Claude Code
    • Enjoy the top-tier AI coding assistant!

Configuration Example (JSON)

For manual configuration file editing, use this format:
{
  "model": "claude-opus-4-5-20251101",
  "apiKey": "sk-xxxxxxxxxxxxxxxx",
  "baseURL": "https://api.apiyi.com",
  "maxTokens": 8192,
  "temperature": 0.7
}
Note that the Anthropic native format uses the base URL https://api.apiyi.com, as shown above; if you instead route Claude Code through the OpenAI-compatible endpoint, append the version path and use https://api.apiyi.com/v1.
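Alternatively, you can point Claude Code at API易 without editing the config file, assuming your Claude Code version reads the conventional Anthropic environment variables (check the docs for your installed version):

```shell
# Point Claude Code / Anthropic SDKs at API易 via environment variables.
# ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN are the conventional names;
# verify that your Claude Code version honors them.
export ANTHROPIC_BASE_URL="https://api.apiyi.com"
export ANTHROPIC_AUTH_TOKEN="your-apiyi-key"
```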

Reasoning Depth Control (Opus 4.5 Exclusive)

Claude Opus 4.5 introduces the new effort parameter to adjust reasoning depth:
| Mode | Description | Use Case | Token Consumption |
| --- | --- | --- | --- |
| low | Fast response, shallow reasoning | Simple Q&A, quick prototyping | Low |
| medium | Balanced quality and speed (default) | Most coding tasks | Medium (76% less than high) |
| high | Deep reasoning, highest quality | Complex architecture design, difficult analysis | High |

Setting effort in OpenAI Format

response = client.chat.completions.create(
    model="claude-opus-4-5-20251101",
    messages=[{"role": "user", "content": "Design a distributed caching system..."}],
    extra_body={
        "anthropic_effort": "high"  # low/medium/high
    }
)

Setting effort in Anthropic Native Format

message = client.messages.create(
    model="claude-opus-4-5-20251101",
    messages=[{"role": "user", "content": "Optimize this SQL query..."}],
    extra_headers={
        "anthropic-effort": "medium"
    }
)
The effort parameter only works with Claude Opus 4.5. Other models (Sonnet/Haiku/Opus 4.1) do not support this feature.
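Because only Opus 4.5 accepts the effort setting, a small guard keeps multi-model code from sending it to unsupported models. The helper below is purely illustrative; the header name follows the examples above:

```python
# Models that accept the effort setting (currently only Opus 4.5).
EFFORT_CAPABLE = {"claude-opus-4-5-20251101"}

def effort_headers(model: str, effort: str = "medium") -> dict:
    """Return the anthropic-effort header only for models that support it."""
    if effort not in ("low", "medium", "high"):
        raise ValueError(f"invalid effort: {effort!r}")
    return {"anthropic-effort": effort} if model in EFFORT_CAPABLE else {}

# Usage with the Anthropic client from earlier:
# message = client.messages.create(
#     model=model, max_tokens=4096, messages=[...],
#     extra_headers=effort_headers(model, "high"),
# )
```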

Best Practices

1. Model Selection Guide

  • Complex coding tasks: Use claude-opus-4-5-20251101 with high or medium mode
  • Daily code generation: Use claude-sonnet-4-5-20250929 for best value
  • Quick Q&A/High concurrency: Use claude-haiku-4-5-20251001 for speed and low cost
  • Multimodal tasks: Prefer Opus or Sonnet for stronger vision understanding

2. Cost Optimization Tips

  • Prefer medium mode: Opus 4.5’s medium mode delivers quality close to Sonnet while consuming only about 24% of the tokens that high mode uses
  • Deposit bonuses: Get up to 20% bonus credits when depositing on API易, effectively lowering your per-token cost
  • Use caching wisely: Enable prompt caching for repeated prompts or contexts to save costs
  • Downgrade when appropriate: Use Haiku or Sonnet for simple tasks, save Opus for complex ones
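The prompt-caching tip can be made concrete. In the Anthropic native format, caching is requested with a cache_control marker on a content block; the sketch below is a plain /v1/messages request body (whether API易 passes the cache discount through to you is worth confirming):

```python
# A /v1/messages request body that marks a large, reused system prompt as
# cacheable, so repeated calls can reuse the cached prefix instead of
# re-billing it at full input price each time.
LARGE_SYSTEM_PROMPT = "You are a code reviewer for our project. " * 100

request_body = {
    "model": "claude-sonnet-4-5-20250929",
    "max_tokens": 4096,
    "system": [
        {
            "type": "text",
            "text": LARGE_SYSTEM_PROMPT,
            "cache_control": {"type": "ephemeral"},  # cache this prefix
        }
    ],
    "messages": [
        {"role": "user", "content": "Review this diff: ..."}
    ],
}
```

Only the prefix up to and including the marked block is cached, so put the stable context (conventions, codebase description) in the system block and keep the varying question in the user message.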

3. Prompt Optimization

Claude models respond better to clear, structured prompts:
# Recommended: Clear task description
prompt = """
Task: Refactor the following Python code
Requirements:
1. Improve performance (reduce time complexity from O(n²) to O(n log n))
2. Add error handling and type hints
3. Add unit tests

Code:
[paste code]
"""

# Not recommended: Vague instructions
prompt = "Help me fix this code"

4. Long Context Usage

Claude Opus 4.5 supports 200K token context, suitable for codebase-level analysis:
# Example: Analyze entire codebase
files_content = ""
for file in ["main.py", "utils.py", "models.py"]:
    with open(file) as f:
        files_content += f"\n\n# {file}\n{f.read()}"

response = client.chat.completions.create(
    model="claude-opus-4-5-20251101",
    messages=[{
        "role": "user",
        "content": f"Analyze the architecture of this codebase and suggest optimizations:\n{files_content}"
    }]
)
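Before packing a whole codebase into one request, it helps to sanity-check against the 200K-token window. A rough rule of thumb is ~4 characters per token for English text and code; a real tokenizer will give different numbers, so treat this as a coarse pre-flight check only:

```python
CONTEXT_LIMIT = 200_000  # tokens, the Claude 4.5 series context window

def rough_token_count(text: str) -> int:
    """Crude estimate: ~4 characters per token for English text and code."""
    return len(text) // 4

def fits_in_context(text: str, reserve_for_output: int = 8_192) -> bool:
    """Check that the prompt leaves headroom for the reply within the window."""
    return rough_token_count(text) + reserve_for_output <= CONTEXT_LIMIT

# A 1MB concatenation of source files (~250K estimated tokens) will not fit,
# so split it across requests or summarize files first.
```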

Technical Specifications Comparison

| Parameter | Opus 4.5 | Sonnet 4.5 | Haiku 4.5 |
| --- | --- | --- | --- |
| Context Length | 200,000 tokens | 200,000 tokens | 200,000 tokens |
| Max Output | 64,000 tokens | 8,192 tokens | 8,192 tokens |
| Knowledge Cutoff | March 2025 | October 2024 | October 2024 |
| Multimodal | ✅ Image input | ✅ Image input | ✅ Image input |
| Reasoning Control | ✅ effort parameter | — | — |
| SWE-bench | 80.9% | 77.2% | 73.3% |
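One practical consequence of the table: only Opus 4.5 can emit more than 8,192 tokens in a single reply, so long-form generation must route there. A small sketch encoding that rule (helper names are illustrative):

```python
# Max single-reply output per model, from the comparison table above.
MAX_OUTPUT = {
    "claude-opus-4-5-20251101": 64_000,
    "claude-sonnet-4-5-20250929": 8_192,
    "claude-haiku-4-5-20251001": 8_192,
}

def pick_model(preferred: str, needed_output_tokens: int) -> str:
    """Keep the preferred model unless the reply won't fit in its max output,
    then fall back to Opus 4.5 (the only model with a 64K output ceiling)."""
    if needed_output_tokens <= MAX_OUTPUT[preferred]:
        return preferred
    return "claude-opus-4-5-20251101"
```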

FAQ

How do I use API易 with Claude Code?

Simply configure in Claude Code settings:
  • Base URL: https://api.apiyi.com
  • API Key: Your API易 key
  • Model: claude-opus-4-5-20251101 or other Claude models

What’s the difference between OpenAI format and Anthropic native format?

  • OpenAI format: Uses /v1/chat/completions endpoint, compatible with OpenAI SDK
  • Anthropic format: Uses /v1/messages endpoint, supports all Anthropic SDK features
  • Recommendation: Use OpenAI format for Claude Code or OpenAI SDK; use native format for Anthropic-specific features (like tool use)

How does the effort parameter affect performance and cost?

  • high mode: Maximum reasoning depth, highest output quality, but highest token consumption (about 3-4x of medium)
  • medium mode: Balanced quality and cost, output quality close to Sonnet, token consumption only 24% of high
  • low mode: Fast response, suitable for simple tasks, lowest token consumption

What are API易’s dual channels?

API易 connects to Claude models through AWS Bedrock and direct Anthropic API:
  • AWS Bedrock: High stability, low latency, ideal for enterprise applications
  • Direct API: First access to new models, full feature support
  • Auto-switching: System intelligently selects the optimal route to ensure service availability

How do I get an API易 key?

Visit the API易 website (apiyi.com) to register an account. After depositing, you can get your API key from the console. New users receive bonus credits upon registration, and deposits get up to 20% bonus.
API易 offers the latest Anthropic models on day one of release, supports both OpenAI and Anthropic formats, and is fully compatible with Claude Code. Register now to get started!