## Overview

Codex CLI is OpenAI's official command-line programming assistant, designed for terminal workflows. Integrating it with API易 takes one thing only: replace OpenAI's endpoint with API易's. API易 is an OpenAI-compatible interface (a transparent proxy); at its core you just swap the Base URL and the Key. No custom `model_provider` and no custom `env_key` are required. If it works with the OpenAI SDK, it works with Codex CLI.
- 🚀 **One-Step Setup**: two environment variables, no complex config files
- ⚡ **Latest Models**: supports gpt-5.5 / gpt-5.4 / gpt-5.3-codex
- 💰 **Same Billing Model**: aligned with OpenAI's billing, better pricing
- 🔧 **Cross-Platform**: Mac / Linux / Windows (PowerShell) all supported
## 1. Prerequisites

### Install Codex CLI

Globally install the official CLI (requires Node.js 18+):

## 2. Core Config: Two Environment Variables
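Before configuring, make sure the CLI from step 1 is installed. A typical global install, assuming the official npm package name `@openai/codex`, looks like this:

```shell
# Install the Codex CLI globally (requires Node.js 18+)
npm install -g @openai/codex

# Verify the binary is on PATH
codex --version
```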
API易 is an OpenAI-compatible interface, so the entire config is two lines:

### Mac / Linux Setup
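A minimal sketch of those two lines for `~/.zshrc` or `~/.bashrc` (this assumes Codex CLI honors the standard `OPENAI_BASE_URL` variable for the endpoint; the key value is a placeholder):

```shell
# Point the OpenAI-compatible client at API易 (note the /v1 suffix)
export OPENAI_BASE_URL="https://api.apiyi.com/v1"

# Your API易 key (placeholder shown; real keys start with sk-)
export OPENAI_API_KEY="sk-your-apiyi-key"
```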
Edit your shell config, then reload it or open a new terminal.

### Windows Setup
Three options:

- ✅ PowerShell (persistent, recommended)
- Temporary (current session only)
- GUI system environment variables
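A sketch of the persistent PowerShell route, using the same values as the Mac / Linux setup (key shown as a placeholder):

```powershell
# Write user-level environment variables (persist across sessions)
setx OPENAI_BASE_URL "https://api.apiyi.com/v1"
setx OPENAI_API_KEY "sk-your-apiyi-key"
```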
Use `setx` to write user-level environment variables; close and reopen the terminal for the change to take effect.

## 3. Launch & Basic Usage
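A minimal first run (the directory name is illustrative):

```shell
# Go to your project and start an interactive Codex session
cd my-project
codex
```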
Enter your project directory and launch the CLI.

## 4. Models (API易 Recommendations)
Common models available on API易 for Codex CLI:

| Model | Strengths | Best For |
|---|---|---|
| gpt-5.5 | Latest flagship | Complex coding, engineering analysis, agent workflows |
| gpt-5.4 | Stable workhorse | Day-to-day coding, debugging, refactoring |
| gpt-5.4-mini | Cheap 5.4 variant | Mid-scale tasks, batch processing |
| gpt-5.4-nano | Ultra-cheap | Simple completions, lightweight tasks |
| gpt-5.3-codex | Codex-tuned | Complex software engineering tasks |
| gpt-4.1 | Classic & cheap | Budget-sensitive daily use |
### 1. Specify model at launch

Use `-m` or `--model`:
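For example:

```shell
# One-off: pick the model for this session
codex -m gpt-5.5

# Long form
codex --model gpt-5.3-codex
```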
### 2. Specify model in non-interactive mode
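A sketch, assuming the `exec` subcommand for non-interactive (scripted) runs:

```shell
# Run a single prompt without entering the interactive UI
codex exec -m gpt-5.4 "explain what this repo does"
```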
### 3. Switch models inside a session

After entering Codex CLI, use the `/model` command in the active session.

### 4. Configure default model
To make a model the default, edit `~/.codex/config.toml`:
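A sketch that writes the default (the `model` key matches the FAQ entry in this guide; the chosen model is just an example):

```shell
# Create the user-level config dir if needed, then append a default model
mkdir -p ~/.codex
cat >> ~/.codex/config.toml <<'EOF'
model = "gpt-5.5"
EOF
```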
Codex's current official config file is `config.toml` (not the older `config.json`). The user-level config lives at `~/.codex/config.toml`; a project-level `.codex/config.toml` is also supported.

## 5. Advanced Configuration
### Custom System Prompts

Edit `~/.codex/instructions.md`:

### Project-Level AGENTS.md
On first entry into a project, run `codex /init` to generate an AGENTS.md recording the project structure and conventions. To make Codex respond in a specific language by default, add an instruction to AGENTS.md:
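For example, a hypothetical AGENTS.md addition (section name and wording are illustrative):

```shell
# Append a default-language instruction to the project's AGENTS.md
cat >> AGENTS.md <<'EOF'

## Language
Always respond in English.
EOF
```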
### Common Flags
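A few commonly used invocations (the flag set here is assumed from current Codex CLI releases; verify against `codex --help` for your installed version):

```shell
codex --help              # list all flags for your installed version
codex -m gpt-5.5          # choose a model for this session
codex exec "summarize"    # non-interactive, single-prompt run
codex --full-auto         # lower-friction, sandboxed approval mode
```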
## 6. Notes

### 1. Key & Variable Names

- Get your API易 Key from the console at api.apiyi.com/token
- The variable name must be `OPENAI_API_KEY` (hardcoded in the CLI; do not rename it)
### 2. Base URL

- The URL must include `/v1`: `https://api.apiyi.com/v1`
- Other gateway domains (e.g. `b.apiyi.com/v1`, `vip.apiyi.com/v1`) work the same way
### 3. Network Issues

If you hit timeouts or connection failures, check in order:

- Local HTTP/HTTPS proxy settings
- DNS resolution
- Whether you're accidentally using a Cloudflare relay domain (not recommended)
### 4. Model Differences

- Codex CLI is biased toward coding tasks; multimodal capabilities (image/audio) are better called via dedicated endpoints
- Different `gpt-5.x` models vary in tool calling, long-context handling, and reasoning depth; pick by task complexity
## 7. FAQ
### Why does the OpenAI Codex CLI work with API易?
Because API易 is fully compatible with the OpenAI API protocol: `https://api.apiyi.com/v1` and `https://api.openai.com/v1` are interchangeable in request/response format. Replacing the Base URL is enough.

### `command not found: codex`
Confirm the install succeeded; if it still fails, check whether npm's global bin directory (`npm bin -g`) is on your PATH.

### Invalid API Key (401 / Invalid Key)
- Make sure you're using an API易 Key (starts with `sk-`), not an OpenAI key
- Verify the env var is actually exported
- After changing env vars, restart the terminal or run `source ~/.zshrc`
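A quick sanity check (the export here is a placeholder so the snippet is self-contained; in practice the key comes from your shell config):

```shell
# Placeholder export; use your real API易 key in practice
export OPENAI_API_KEY="sk-your-apiyi-key"

# Confirm the variable is set and has the expected sk- prefix
echo "$OPENAI_API_KEY" | grep -q '^sk-' && echo "key format OK"
```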
### Connection error / timeout / 404
The most common cause: the Base URL is missing `/v1`. Correct: `https://api.apiyi.com/v1`. Next, check your local proxy and DNS.

### How do I switch / upgrade models?
Three ways:

- One-off: `codex -m gpt-5.5`
- In session: `/model`
- Default: edit `~/.codex/config.toml` and set `model = "gpt-5.5"`
### Which models are supported?
- OpenAI series: ✅ fully supported (recommended: gpt-5.5 / gpt-5.4 / gpt-5.3-codex)
- Other vendors' aggregated models: supported by API易, but Codex CLI itself is biased toward the OpenAI protocol; non-OpenAI models may behave differently around tool use / function calling
- For Claude / Gemini-based coding, use their native CLIs (e.g. Claude Code)
### Suitable for production?
- CLI: best for dev-time productivity
- Production: prefer direct API calls (more control, monitoring, gradual rollout)
### How do I uninstall or disable the API易 setup?
Uninstall the CLI, then disable the API易 config by removing the environment variables and the config file.
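A sketch (the package name is assumed to match the install step; deleting `~/.codex` removes your config and custom instructions):

```shell
# Remove the global CLI (assumes the @openai/codex package name)
npm uninstall -g @openai/codex

# Disable the API易 config for the current session
unset OPENAI_API_KEY OPENAI_BASE_URL

# Then delete the persisted exports from ~/.zshrc / ~/.bashrc, and the config dir
rm -rf ~/.codex
```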
## 8. Summary
The whole integration is one sentence: replace OpenAI's endpoint with API易. Everything else (model selection, instructions.md, AGENTS.md) is just polish on top. The two essential lines are the two environment variables from section 2.
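The two essential lines, concretely (key shown as a placeholder):

```shell
export OPENAI_BASE_URL="https://api.apiyi.com/v1"
export OPENAI_API_KEY="sk-your-apiyi-key"
```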
## Related Resources

- **API易 Console**: manage API keys and view usage
- **Claude Code Integration**: use Claude models for CLI coding
- **Model Comparison**: all available models and pricing
- **API Manual**: general calling conventions