b-ai · Flagship
GPT-5.5
Highest quality general model for advanced reasoning and production workloads.
gpt-5.5
Model catalog
Customers see clean SYC API model names. Your upstream keys stay private in the server environment, and LiteLLM routes each request to the matching upstream provider behind the scenes.
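A minimal sketch of that mapping in litellm/config.yaml, assuming the upstream is an OpenAI-compatible endpoint configured via the .env variables shown at the bottom of this page (the upstream route name is an assumption):

```yaml
# Sketch only: maps a public SYC model name to a private upstream route.
model_list:
  - model_name: gpt-5.5                       # name customers see in the catalog
    litellm_params:
      model: openai/gpt-5.5                   # upstream route (assumed)
      api_base: os.environ/NEWAPI_BASE_URL    # read from .env at startup
      api_key: os.environ/NEWAPI_API_KEY      # never exposed to customers
```

The `os.environ/VAR` syntax is LiteLLM's way of pulling secrets from the environment, so the key never appears in the config file itself.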
b-ai · Flagship
Highest quality general model for advanced reasoning and production workloads.
gpt-5.5
b-ai · Fast flagship
Fast premium model for interactive apps, agents, and chat products.
gpt-5.5-instant
b-ai · Pro
Stronger reasoning tier for coding, analysis, and automation workflows.
gpt-5.4-pro
b-ai · Balanced
Balanced quality and latency for SaaS features and developer tools.
gpt-5.4
b-ai · Balanced
Reliable default model for chat, summarization, and app integrations.
gpt-5.2
b-ai · Efficient
Cost-efficient default for support bots, tools, and high-volume calls.
gpt-5.4-mini
b-ai · Efficient
Lightweight model for fast responses and budget-sensitive workloads.
gpt-5-mini
b-ai · Low cost
Very low-cost model for extraction, classification, and simple automations.
gpt-5.4-nano
b-ai · Low cost
Smallest GPT option for simple routing, tagging, and utility calls.
gpt-5-nano
b-ai · Premium reasoning
Premium Claude-style reasoning for complex writing, code, and analysis.
claude-opus-4.7
b-ai · Premium reasoning
Claude Opus tier for writing, planning, and detailed code review.
claude-opus-4.5
b-ai · Reasoning
Balanced Claude option for coding, writing, and agent workflows.
claude-sonnet-4.6
b-ai · Reasoning
General Claude-style reasoning model for product and developer use cases.
claude-sonnet-4.5
b-ai · Fast
Fast Claude option for lightweight tasks and high-throughput apps.
claude-haiku-4.5
b-ai · Long context
Gemini-style model for long context, research, and multimodal workflows.
gemini-3.1-pro
b-ai · Fast
Fast Gemini option for latency-sensitive applications.
gemini-3-flash
b-ai · Reasoning
DeepSeek pro model for coding, reasoning, and structured output.
deepseek-v4-pro
b-ai · Fast
Fast DeepSeek option for code assistance and automation.
deepseek-v4-flash
b-ai · Balanced
Balanced DeepSeek model for chat, code, and general usage.
deepseek-v3.2
b-ai · General
GLM model for multilingual and general assistant workloads.
glm-5.1
b-ai · General
General GLM model for chat, tools, and automation.
glm-5
b-ai · Long context
Kimi-style model for long context reading and document workflows.
kimi-k2.5
b-ai · General
MiniMax model for chat, content, and application workflows.
minimax-m2.7
b-ai · General
MiniMax model for general purpose AI product features.
minimax-m2.5
Put your internal New API token in .env; litellm/config.yaml references it via environment variables, so LiteLLM reads it at startup. Customers only ever receive virtual SYC API keys.
NEWAPI_BASE_URL=https://sycapi.com/v1
NEWAPI_API_KEY=your-internal-newapi-token
LITELLM_MASTER_KEY=sk-syc-admin-change-me
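From the customer's side, a request looks like any OpenAI-compatible chat call against the public endpoint, authenticated with a virtual key. A minimal sketch using only the standard library; the base URL matches the .env above, and the virtual key value is a placeholder, not a real credential:

```python
# Hypothetical customer-side request to the SYC API (LiteLLM proxy).
# VIRTUAL_KEY is a placeholder for a key issued by the proxy admin.
import json
import urllib.request

BASE_URL = "https://sycapi.com/v1"           # public endpoint from .env
VIRTUAL_KEY = "sk-syc-customer-virtual-key"  # virtual key, not the upstream token

payload = {
    "model": "gpt-5.5",  # public model name from the catalog above
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {VIRTUAL_KEY}",
        "Content-Type": "application/json",
    },
)
# response = urllib.request.urlopen(req)  # uncomment against a live deployment
```

The customer never sees NEWAPI_API_KEY: the proxy swaps the virtual key for the internal token before forwarding the request upstream.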