Supercompat

Supercompat — Switch AI models without compromises.
Supercompat is a TypeScript library that lets you call any LLM provider through the OpenAI SDK (or the Anthropic SDK). Swap one adapter and the same client.responses.create() call reaches Anthropic, Google, Groq, Mistral, Together, OpenRouter, Perplexity, Ollama, or Azure — with the original SDK types intact.
It runs in-process. No proxy server, no request forwarding, no extra latency. Supercompat installs a custom fetch on the SDK instance and routes calls locally.
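The interception idea can be sketched in isolation: a fetch-compatible function answers requests in-process instead of sending them over the network. This is a conceptual illustration of the pattern, not Supercompat's actual implementation — the response shape and URL here are placeholders.

```typescript
// Minimal illustration of the custom-fetch pattern: a fetch-compatible
// function that answers requests locally instead of hitting the network.
// A real adapter would translate the OpenAI-shaped request body into a
// call against the target provider's SDK at this point.
const localFetch: typeof fetch = async (input, init) => {
  const body = JSON.parse(String(init?.body ?? '{}'))
  return new Response(
    JSON.stringify({ model: body.model, output_text: 'handled locally' }),
    { status: 200, headers: { 'content-type': 'application/json' } },
  )
}

// Any SDK client configured with this fetch never leaves the process.
async function demo(): Promise<string> {
  const res = await localFetch('https://api.openai.com/v1/responses', {
    method: 'POST',
    body: JSON.stringify({ model: 'claude-sonnet-4-6', input: 'hi' }),
  })
  const json = (await res.json()) as { output_text: string }
  return json.output_text
}

demo().then((text) => console.log(text)) // prints "handled locally"
```

Because the override happens at the fetch layer, the SDK's own types, retries, and error handling remain untouched.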

Install

npm install supercompat openai

Quick example

import {
  supercompat,
  anthropicClientAdapter,
  completionsRunAdapter,
  memoryStorageAdapter,
} from 'supercompat/openai'
import Anthropic from '@anthropic-ai/sdk'

const client = supercompat({
  clientAdapter: anthropicClientAdapter({ anthropic: new Anthropic() }),
  storageAdapter: memoryStorageAdapter(),
  runAdapter: completionsRunAdapter(),
})

const response = await client.responses.create({
  model: 'claude-sonnet-4-6',
  input: 'Say hello.',
})

console.log(response.output_text)
client is a real OpenAI instance with the real TypeScript types. Every call made on it — responses, chat.completions, beta.threads — is intercepted by Supercompat and translated into a request against the Anthropic SDK. Switching providers is a change to clientAdapter; everything else stays the same.

Persistent state

memoryStorageAdapter is fine for one-shot scripts but loses everything on restart. For persisted conversations, threads, and runs, swap it for prismaStorageAdapter:
import { PrismaClient } from '@prisma/client'
import {
  supercompat,
  anthropicClientAdapter,
  completionsRunAdapter,
  prismaStorageAdapter,
} from 'supercompat/openai'
import Anthropic from '@anthropic-ai/sdk'

const prisma = new PrismaClient()

const client = supercompat({
  clientAdapter: anthropicClientAdapter({ anthropic: new Anthropic() }),
  storageAdapter: prismaStorageAdapter({ prisma }),
  runAdapter: completionsRunAdapter(),
})

// Continue a conversation across requests with previous_response_id:
const first = await client.responses.create({
  model: 'claude-sonnet-4-6',
  input: 'My name is Alice.',
})

const second = await client.responses.create({
  model: 'claude-sonnet-4-6',
  input: 'What did I just tell you?',
  previous_response_id: first.id,
})
Conversations, responses, assistants, threads, messages, and runs all land in Postgres. See Storage adapters for every option — including OpenAI-managed and Azure-managed state.
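The chaining mechanism can be illustrated without the library: each response is stored under an id, and a follow-up request that references previous_response_id replays the stored history. This is a toy sketch of the concept, not Supercompat's storage implementation — the StoredResponse shape and id format are invented for illustration.

```typescript
// Toy sketch of previous_response_id chaining: every response records the
// conversation so far; a follow-up keyed on a prior id inherits that history.
type StoredResponse = { id: string; history: string[] }

const store = new Map<string, StoredResponse>()
let counter = 0

function createResponse(input: string, previousResponseId?: string): StoredResponse {
  // Look up the prior turn's history (empty when starting fresh).
  const prior = previousResponseId
    ? store.get(previousResponseId)?.history ?? []
    : []
  const response: StoredResponse = {
    id: `resp_${++counter}`,
    history: [...prior, input],
  }
  store.set(response.id, response)
  return response
}

const first = createResponse('My name is Alice.')
const second = createResponse('What did I just tell you?', first.id)
console.log(second.history) // both turns, oldest first
```

With prismaStorageAdapter the same lookup happens against Postgres rather than an in-process Map, which is what lets a conversation survive restarts and span multiple server instances.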

Where to go next