Claudish: Claude Code. Any Model.
Claude Code is incredible. But you already pay for other AI subscriptions. Why not use your Gemini Advanced, ChatGPT Plus, Grok, or Kimi subscription with Claude Code's interface?
Claudish is the answer. It's a universal proxy that sits between Claude Code and any AI API, translating schemas bidirectionally and routing calls to the model you specify. No patches to Claude Code. No config files to hand-edit. Run `claudish --model` and you're driving any supported model through Claude Code's interface.
The Problem It Solves
Claude Code only supports Anthropic models. If you have a Gemini, ChatGPT, Grok, or Kimi subscription, you can't use it with Claude Code's interface. You're forced to choose between:
- Paying for Claude Max and another provider (expensive)
- Using Claude but missing out on features from other models (suboptimal)
- Using a different agent (switching workflows, losing Claude Code's UX)
Claudish eliminates this choice. You use Claude Code's interface. You pay for whatever subscription you want. It just routes the calls.
How It Works: Native Translation
Claudish doesn't hack Claude Code. It intercepts calls via a local proxy, translates schemas, and forwards to the target API. The translation is bidirectional and happens at runtime.
Step 1: Intercept
Claudish starts a proxy on port 3000. Claude Code sends all API calls to localhost:3000 instead of api.anthropic.com. Claudish captures these calls.
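The interception step amounts to URL rewriting: Claude Code is pointed at the proxy, and the proxy maps each incoming Anthropic-style request to the target provider's endpoint. A minimal TypeScript sketch of that mapping; the base URLs are the providers' public endpoints, but the `upstreamUrl` helper is illustrative, not Claudish's actual internals:

```typescript
// Map an incoming Anthropic-style request to a target provider's endpoint.
// Simplified illustration of the routing a proxy like Claudish must perform.
const PROVIDER_BASES: Record<string, string> = {
  anthropic: "https://api.anthropic.com/v1/messages", // native passthrough
  openai: "https://api.openai.com/v1/chat/completions",
  openrouter: "https://openrouter.ai/api/v1/chat/completions",
};

function upstreamUrl(provider: string, incomingPath: string): string {
  // Claude Code only ever calls the Messages endpoint.
  if (incomingPath !== "/v1/messages") {
    throw new Error(`unexpected path: ${incomingPath}`);
  }
  const base = PROVIDER_BASES[provider];
  if (!base) throw new Error(`unknown provider: ${provider}`);
  return base;
}
```

For Anthropic the proxy forwards unchanged (the "native passthrough" case below); for everyone else the translation step kicks in.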
Step 2: Translate
The schemas are converted in real time:
- Outgoing: Claude's XML `<tool_use>` → the target's JSON `function_call`
- Incoming: the target's JSON `{content: json}` → Claude's XML `<result>`
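To make the outgoing direction concrete, here is a hedged TypeScript sketch of translating an Anthropic-style `tool_use` content block into an OpenAI-style tool call. The field names follow the two public APIs; the function itself is illustrative, not Claudish's code:

```typescript
// Anthropic tool_use block, as found in a message's content array.
interface ToolUseBlock {
  type: "tool_use";
  id: string;
  name: string;
  input: Record<string, unknown>;
}

// OpenAI-style tool call; note arguments are a JSON-encoded string.
interface ToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
}

function toOpenAIToolCall(block: ToolUseBlock): ToolCall {
  return {
    id: block.id,
    type: "function",
    // OpenAI expects arguments serialized as a JSON string, not an object.
    function: { name: block.name, arguments: JSON.stringify(block.input) },
  };
}
```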
Step 3: Execute
The target model executes the request natively. Claudish re-serializes the response to look exactly like Claude 3.5 Sonnet output. Claude Code never knows it's talking to a different model.
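The return trip can be sketched the same way: an OpenAI-style chat completion is re-serialized into the Anthropic message shape Claude Code expects. Again, the field names mirror the two public APIs while the function is only an illustration of the idea:

```typescript
// One choice from an OpenAI-style chat completion response.
interface OpenAIChoice {
  message: { role: "assistant"; content: string | null };
  finish_reason: string;
}

// The Anthropic-shaped message Claude Code expects back.
interface AnthropicMessage {
  role: "assistant";
  content: { type: "text"; text: string }[];
  stop_reason: "end_turn" | "max_tokens";
}

function toAnthropicMessage(choice: OpenAIChoice): AnthropicMessage {
  return {
    role: "assistant",
    content: [{ type: "text", text: choice.message.content ?? "" }],
    // OpenAI's "length" finish reason maps to Anthropic's "max_tokens".
    stop_reason: choice.finish_reason === "length" ? "max_tokens" : "end_turn",
  };
}
```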
The Provider Ecosystem
Claudish supports 15+ direct providers plus 580+ models via OpenRouter. Here's the breakdown:
| Provider | Direct API | Notes |
|---|---|---|
| Anthropic Max | Native passthrough | Use your existing Claude credits or Pro plan |
| Gemini Advanced | @gemini-3.1-pro-preview | Google's latest, 1M context |
| ChatGPT Plus | @gpt-5.4 | OpenAI's flagship |
| Kimi | @kimi-k2.5 / @kimi-for-coding | OAuth authentication |
| GLM / Zhipu | @glm-5 | Chinese LLM, 1M context |
| MiniMax | @MiniMax-M2.7 | Enterprise tier |
| Vertex AI | @gemini-3.1-pro-preview | Google Cloud managed |
| Z.AI | @glm-5 | 10% off with affiliate link |
| OllamaCloud | @qwen3-coder-next | Cloud-hosted local models |
| OpenRouter | @openai/gpt-5.4 | 580+ models, free tier available |
| Ollama (Local) | @llama3.2 | 100% offline, $0 cost |
Multi-Model Orchestration
Claudish isn't just a one-model proxy. You can route different models to different tasks within the same session:
```
claudish \
  --model-opus google/gemini-3.1-pro-preview \
  --model-sonnet openai/gpt-5.4 \
  --model-haiku x-ai/grok-code-fast \
  --model-subagent minimax/minimax-m2
```
This enables a multi-model mesh:
- Gemini 3.1 Pro for complex planning and vision tasks
- GPT-5.4 for main coding logic
- Grok Code Fast for fast context processing
- MiniMax M2 for background worker agents
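A mesh like the one above reduces to a tier-to-model routing table: each Claude model tier that Claude Code requests is resolved to the target configured for it. A minimal sketch, where the tier names mirror the CLI flags and the resolver logic is an assumption, not Claudish's actual implementation:

```typescript
type Tier = "opus" | "sonnet" | "haiku" | "subagent";

// Targets matching the example invocation above.
const ROUTES: Record<Tier, string> = {
  opus: "google/gemini-3.1-pro-preview", // planning and vision tasks
  sonnet: "openai/gpt-5.4",              // main coding logic
  haiku: "x-ai/grok-code-fast",          // fast context processing
  subagent: "minimax/minimax-m2",        // background worker agents
};

// Resolve the model name Claude Code asked for to the configured target.
function resolveModel(requested: string): string {
  const tier = (["opus", "sonnet", "haiku"] as Tier[]).find((t) =>
    requested.includes(t)
  );
  return ROUTES[tier ?? "subagent"];
}
```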
The orchestrator manages compute nodes, latency, and cost in real-time:
- GEMINI-3-PRO: 45ms latency, Google Planner
- GPT-5.1-CODEX: 82ms latency, OpenAI Generator
- GROK-FAST: 12ms latency, x.AI Analyzer
- MINIMAX-M2: 110ms latency, MiniMax Worker
Vision Proxy
Some models (GLM 5, Kimi 2.5) don't support images. Claudish solves this with a Vision Proxy:
When Claude Code sends an image, Claudish intercepts it, calls a vision API to extract text/structure, and forwards a text description to the target model:
Source: Claude Code

```json
{
  "type": "image_url",
  "url": "data:image..."
}
```

Claudish Proxy:

Extracting layout, text, and structure via Vision API...

Destination (Kimi 2.5 / GLM 5):

```json
{
  "type": "text",
  "text": "UI shows a navigation bar with logo and menu items..."
}
```
This lets you use text-only models for tasks that require image understanding, without breaking Claude Code's image workflows.
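The vision fallback is, in essence, a map over the message's content blocks: image blocks are swapped for text produced by a vision model, and everything else passes through. A sketch with a pluggable `describe` callback standing in for the real (asynchronous) vision API call:

```typescript
type Block =
  | { type: "text"; text: string }
  | { type: "image_url"; url: string };

// Replace image blocks with text descriptions so text-only models can "see".
// `describe` stands in for the vision API call and is injected by the caller;
// the real call would be async, which is elided here for clarity.
function stripImages(
  blocks: Block[],
  describe: (url: string) => string
): Block[] {
  return blocks.map((b): Block =>
    b.type === "image_url" ? { type: "text", text: describe(b.url) } : b
  );
}
```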
1M Token Context Remapping
Claude Code assumes a 200K context window. Claudish remaps each target model's actual window onto that expectation:
- Gemini 3.1 Pro: 1,000K → 200K (with chunked retrieval)
- DeepSeek R1: 164K → 200K
- Grok 4.20: 131K → 200K
On models with larger native windows, this remapping exposes the full 1M+ token capacity through Claude Code.
Cost Telemetry & Bypass
Claudish bypasses Claude's default pricing logic and intercepts token usage statistics to show exact API spend:
- Native Anthropic: 0% markup, direct API passthrough
- OpenRouter: Real-time cost per session
- BYOK providers: You pay the provider directly
The orchestrator displays tokens/sec, latency, CPU/MEM usage, and network traffic for each compute node.
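Per-session spend is straightforward arithmetic over the usage statistics each provider returns: token counts times per-million-token prices. A sketch of that computation; the prices below are placeholders for illustration, not real provider rates:

```typescript
interface Usage {
  inputTokens: number;
  outputTokens: number;
}

// Per-million-token prices in USD. Placeholder values, not real rates.
const PRICES: Record<string, { input: number; output: number }> = {
  "openai/gpt-5.4": { input: 2.0, output: 8.0 },
  "ollama/llama3.2": { input: 0, output: 0 }, // local inference is free
};

// Exact spend for one session: tokens consumed times per-token price.
function sessionCost(model: string, usage: Usage): number {
  const p = PRICES[model] ?? { input: 0, output: 0 };
  return (usage.inputTokens * p.input + usage.outputTokens * p.output) / 1_000_000;
}
```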
Installation
```
# Homebrew
brew tap MadAppGang/tap && brew install claudish

# npm (recommended)
npm install -g claudish

# Free tier
claudish --free
```
Claudish runs as a CLI that intercepts Claude Code's calls. It starts a proxy on port 3000 and routes everything through it. No daemon, no systemd service, no manual configuration.
Use Cases
✅ Ideal For:
- Multi-subscription households — Use one Claude Code install across all your paid subscriptions
- Cost optimization — Switch to free models when cost matters (Grok 4.20:free, DeepSeek R1:free)
- Vision tasks — Combine Claude's interface with Gemini's vision capabilities
- 1M context workloads — Use Gemini 3.1 Pro's 1M window for large codebases
- Privacy-conscious devs — Run offline with Ollama, no API calls to external providers
- Experimentation — Try new models without committing to new subscriptions
❌ Less Ideal For:
- Single Claude Max users — Native passthrough is already seamless; Claudish adds no value
- Simple chat apps — Overkill if you're just using Claude Code for casual chat
- Non-Node.js setups — Requires npm; no other installation method
- Enterprise with strict policies — Proxy-based approach may raise security concerns
Alternatives to Consider
For Multi-Model Flexibility
- OpenRouter directly — Call OpenRouter from its own clients and keep Claude Code on native Anthropic models. Simpler, but you lose access to other providers through Claude Code's interface.
- Multi-agent frameworks — Build your own multi-model mesh with separate agents. More flexible, but requires significant engineering work.
For Single-Provider Users
- Claude Code native — If you only use Claude, Claudish adds complexity without benefit.
- Other agent UIs — Cursor, Replit, etc. each has its own model selection, but you can't combine Claude Code's interface with their models.
Claudish's unique value is the unified Claude Code interface across all providers. No other tool combines this level of integration.
The Good: What Works Well
Pros
- Native translation — Zero patches to Claude Code
- 15+ direct providers + 580+ OpenRouter models — Massive ecosystem
- Multi-model orchestration — Different models for different tasks
- Vision proxy — Text-only models can see images
- 1M context remapping — Unlocks large windows
- Cost telemetry — Exact API spend per session
- BYOK support — Bring your own API key
- Offline mode — Ollama/LM Studio local inference
- Free tier — Real models, not trials
- 808 stars, 432 commits, active development
The Bad: Limitations & Pain Points
Cons
- Requires Node.js and npm — Installation friction
- Proxy-based approach — Adds one more service to manage
- Schema translation may miss edge cases
- Not officially supported by Anthropic
- Multi-model orchestration adds complexity
- Requires API keys for all providers
- Latency overhead from proxy routing
Verdict: 4.5/5 — Essential for Multi-Model Users
Claudish is the most practical solution to the multi-model problem we've seen. It doesn't try to be clever — it takes a well-understood approach (proxy + schema translation) and applies it correctly to Claude Code's workflow.
808 stars, 432 commits, active development by MadAppGang (the team behind Magmux). MIT licensed. One npm install and you're live.
Get Started
```
# Install
npm install -g claudish

# Use your Claude Max subscription (native passthrough)
claudish --model claude-sonnet-4-6

# Use OpenRouter's free tier — real top models
claudish --free

# Use a specific provider
claudish --model @gemini-3.1-pro-preview

# Multi-model orchestration
claudish \
  --model-opus google/gemini-3.1-pro-preview \
  --model-sonnet openai/gpt-5.4 \
  --model-haiku x-ai/grok-code-fast
```
Repository: github.com/MadAppGang/claudish
Documentation: claudish.com/docs