Short Answer

This is completely normal and does not affect model capabilities. A model’s name is assigned after training is complete; the model itself never “learned” its own identity. Official web apps (like claude.ai and chatgpt.com) answer correctly because they include a built-in System Prompt that tells the model “who it is.” API calls don’t include this information by default, so the model will “guess wrong” about its version.

Real-World Example

Common scenario: when calling Claude Sonnet 4.5 via the API and asking “What model are you?”, it might reply “I am Claude 3.5 Sonnet.” This does not mean you called the wrong model; the model simply doesn’t know its own name. The same thing happens with GPT-4o, Gemini, and every other model: this is a universal characteristic of large language models, not an issue specific to APIYI.

Simple Analogy

The Actor Analogy

Imagine a highly skilled actor:
  • Their skills come from years of training (like a model’s training process)
  • Their character name is told to them by the director before filming (like a System Prompt)
  • If nobody tells them what role they’re playing, they’re still talented but don’t know their own character name
AI models work the same way: their capabilities come from training data, but the identity “I am Claude Sonnet 4.5” needs to be explicitly provided.

Technical Explanation

Model naming happens after training

The development process for a large language model is:
  1. Collect data → Prepare training corpus
  2. Train the model → Learn language understanding and generation
  3. Evaluate and fine-tune → Optimize model performance
  4. Name and release → Assign a name (e.g., “Claude Sonnet 4.5”)
When training completes at step 2, the name from step 4 doesn’t exist yet. The training data may contain names of older model versions (like Claude 3.5 Sonnet), so when asked, the model “guesses” a name it has seen before.

Analogy: just as a person can’t know their name before birth, the name is given after they are born.
The role of built-in System Prompts

When you chat on claude.ai or chatgpt.com, the official web app automatically injects a hidden System Prompt at the beginning of each conversation, something like:
You are Claude, developed by Anthropic. Your model version is Claude Sonnet 4.5...
This prompt is invisible to users, but the model “reads” it and therefore knows who it is. In other words, the model doesn’t inherently know its own name; the official web app “reminds” it every time.
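This injection can be sketched as prepending a system message to the conversation (the prompt text below is illustrative, not Anthropic’s actual wording):

```python
# A hidden system message the web app prepends before the user's turn.
# The exact wording here is an assumption for illustration.
hidden_system_prompt = {
    "role": "system",
    "content": "You are Claude, developed by Anthropic. "
               "Your model version is Claude Sonnet 4.5.",
}
user_messages = [{"role": "user", "content": "What model are you?"}]

# The model receives the full list, hidden system message first:
full_conversation = [hidden_system_prompt] + user_messages
print([m["role"] for m in full_conversation])  # ['system', 'user']
```

The user only ever typed the second message; the first one is added silently by the app.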
API calls don’t include identity information by default

When calling a model via API, you only send:
  • The model parameter (tells the server which model to use)
  • The messages array (your conversation content)
  • An optional system parameter (your custom system prompt)
The model parameter is routing information for the server; the model itself cannot read this field. If you don’t specify the model’s identity in the system prompt, it can only “guess” based on its training data. This applies to all API platforms: whether it’s the official API, APIYI, or any other provider, the behavior is exactly the same.
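A minimal sketch of such a request payload makes the split clear: the server consumes the model field for routing, while only the message content ever reaches the model.

```python
# Sketch of an OpenAI-compatible chat request body.
payload = {
    "model": "claude-sonnet-4-5-20250514",  # read by the server only
    "messages": [
        {"role": "user", "content": "What model are you?"}
    ],
}

# What the model actually "sees" is just the conversation text:
visible_to_model = [m["content"] for m in payload["messages"]]
print(visible_to_model)  # ['What model are you?']
```

With no system message in the list, nothing in what the model sees mentions its own name, so it falls back to names from its training data.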

How to Verify the Model You’re Actually Calling

Check Call Logs

In the APIYI console’s Call Logs, you can see the actual model name used for each request — this is the most accurate verification method.

Check API Response

Every API response JSON includes a model field that clearly identifies the actual model version called.
{
  "model": "claude-sonnet-4-5-20250514",
  "choices": [...]
}
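In code, the check is a one-liner against that field. This sketch parses a raw JSON body mirroring the example above; with the OpenAI SDK the same value is available as `response.model`:

```python
import json

# Raw response body, as in the example above (truncated to the fields we need).
raw_response = '{"model": "claude-sonnet-4-5-20250514", "choices": []}'
data = json.loads(raw_response)

expected = "claude-sonnet-4-5-20250514"
assert data["model"] == expected, f"unexpected model: {data['model']}"
print(f"Verified: request was served by {data['model']}")
```

If the field ever differs from the model you requested, that is the signal worth investigating, not the model’s self-reported name.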

APIYI Service Guarantee

APIYI forwards requests through official channels — model quality is identical to the source.
  • APIYI directly forwards requests to official APIs from OpenAI, Anthropic, Google, etc.
  • Models not knowing their own identity is a universal phenomenon across all API platforms
  • You can verify the actual model called through call logs and the model field in API responses
  • If you have any doubts, feel free to contact our technical team for verification

How to Make Models Correctly Identify Themselves

Simply add a system parameter to your API call to tell the model its identity:
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="https://vip.apiyi.com/v1"
)

response = client.chat.completions.create(
    model="claude-sonnet-4-5-20250514",
    messages=[
        {
            "role": "system",
            "content": "You are Claude Sonnet 4.5, an AI assistant developed by Anthropic."
        },
        {
            "role": "user",
            "content": "What model are you?"
        }
    ]
)

print(response.choices[0].message.content)
# Output: I am Claude Sonnet 4.5, developed by Anthropic.
Tip: This works exactly the same way as the official web apps — telling the model its identity via System Prompt. Once added, the model can correctly answer “who am I.”

Contact Us

WeChat Support

WeChat ID: 8765058
Model verification, technical support

Email