SYC API Developer

Model catalog

Choose models by use case, not by upstream provider key.

Customers see clean SYC API model names. Your upstream keys stay private in the server environment, and LiteLLM routes requests behind the scenes.
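The routing idea can be sketched as a simple lookup: the customer sends a public catalog name, and the proxy swaps in the private upstream route and key before forwarding. This is a minimal illustration, not the real routing table — the upstream route strings below are assumptions.

```python
import os

# Public catalog name -> assumed private upstream route.
CATALOG = {
    "gpt-5.2": "openai/gpt-5.2",
    "claude-sonnet-4.6": "anthropic/claude-sonnet-4.6",
    "gemini-3-flash": "gemini/gemini-3-flash",
}

def resolve(public_model: str) -> dict:
    """Map a customer-facing model name to upstream request params."""
    if public_model not in CATALOG:
        raise ValueError(f"unknown model: {public_model}")
    return {
        "model": CATALOG[public_model],
        # The upstream key is read server-side and never sent to customers.
        "api_key": os.environ.get("NEWAPI_API_KEY", "<unset>"),
    }
```

The customer only ever supplies a virtual SYC API key and a catalog name; the upstream credential stays on the server.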

b-ai · Flagship

GPT-5.5

Highest quality general model for advanced reasoning and production workloads.

gpt-5.5

b-ai · Fast flagship

GPT-5.5 Instant

Fast premium model for interactive apps, agents, and chat products.

gpt-5.5-instant

b-ai · Pro

GPT-5.4 Pro

Stronger reasoning tier for coding, analysis, and automation workflows.

gpt-5.4-pro

b-ai · Balanced

GPT-5.4

Balanced quality and latency for SaaS features and developer tools.

gpt-5.4

b-ai · Balanced

GPT-5.2

Reliable default model for chat, summarization, and app integrations.

gpt-5.2

b-ai · Efficient

GPT-5.4 Mini

Cost-efficient default for support bots, tools, and high-volume calls.

gpt-5.4-mini

b-ai · Efficient

GPT-5 Mini

Lightweight model for fast responses and budget-sensitive workloads.

gpt-5-mini

b-ai · Low cost

GPT-5.4 Nano

Very low-cost model for extraction, classification, and simple automations.

gpt-5.4-nano

b-ai · Low cost

GPT-5 Nano

Smallest GPT option for simple routing, tagging, and utility calls.

gpt-5-nano

b-ai · Premium reasoning

Claude Opus 4.7

Premium Claude-style reasoning for complex writing, code, and analysis.

claude-opus-4.7

b-ai · Premium reasoning

Claude Opus 4.5

Claude Opus tier for writing, planning, and detailed code review.

claude-opus-4.5

b-ai · Reasoning

Claude Sonnet 4.6

Balanced Claude option for coding, writing, and agent workflows.

claude-sonnet-4.6

b-ai · Reasoning

Claude Sonnet 4.5

General Claude-style reasoning model for product and developer use cases.

claude-sonnet-4.5

b-ai · Fast

Claude Haiku 4.5

Fast Claude option for lightweight tasks and high-throughput apps.

claude-haiku-4.5

b-ai · Long context

Gemini 3.1 Pro

Gemini-style model for long context, research, and multimodal workflows.

gemini-3.1-pro

b-ai · Fast

Gemini 3 Flash

Fast Gemini option for latency-sensitive applications.

gemini-3-flash

b-ai · Reasoning

DeepSeek V4 Pro

DeepSeek pro model for coding, reasoning, and structured output.

deepseek-v4-pro

b-ai · Fast

DeepSeek V4 Flash

Fast DeepSeek option for code assistance and automation.

deepseek-v4-flash

b-ai · Balanced

DeepSeek V3.2

Balanced DeepSeek model for chat, code, and general usage.

deepseek-v3.2

b-ai · General

GLM 5.1

GLM model for multilingual and general assistant workloads.

glm-5.1

b-ai · General

GLM 5

General GLM model for chat, tools, and automation.

glm-5

b-ai · Long context

Kimi K2.5

Kimi-style model for long context reading and document workflows.

kimi-k2.5

b-ai · General

MiniMax M2.7

MiniMax model for chat, content, and application workflows.

minimax-m2.7

b-ai · General

MiniMax M2.5

MiniMax model for general purpose AI product features.

minimax-m2.5

How upstream keys connect

Put your internal New API token in .env; LiteLLM then reads it through environment-variable references in litellm/config.yaml. Customers only ever receive virtual SYC API keys, never the upstream token.

NEWAPI_BASE_URL=https://sycapi.com/v1
NEWAPI_API_KEY=your-internal-newapi-token
LITELLM_MASTER_KEY=sk-syc-admin-change-me
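A minimal litellm/config.yaml sketch wiring one catalog entry to the variables above. The `os.environ/` prefix is LiteLLM's documented way to pull secrets from the environment; the upstream `model` value shown is an assumption about how the New API backend names it.

```yaml
model_list:
  - model_name: gpt-5.2                 # public SYC API name from the catalog
    litellm_params:
      model: openai/gpt-5.2             # upstream route (assumed)
      api_base: os.environ/NEWAPI_BASE_URL
      api_key: os.environ/NEWAPI_API_KEY

general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY
```

Add one `model_list` entry per catalog row; customers then create virtual keys against the proxy and call models by their public names only.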