ollamaClientAdapter
Points the OpenAI SDK at a local (or remote) Ollama instance.
Signature
ollamaClientAdapter({
  ollama: OpenAI,
})
Install
npm install supercompat openai
Make sure Ollama is running and you've pulled the model you want:
ollama serve
ollama pull llama3.2
Example
import OpenAI from 'openai'
import {
  supercompat,
  ollamaClientAdapter,
  completionsRunAdapter,
  memoryStorageAdapter,
} from 'supercompat'
const ollama = new OpenAI({
  apiKey: 'ollama', // any non-empty string; Ollama ignores it, but the SDK requires one
  baseURL: 'http://localhost:11434/v1',
})
const client = supercompat({
  clientAdapter: ollamaClientAdapter({ ollama }),
  storageAdapter: memoryStorageAdapter(),
  runAdapter: completionsRunAdapter(),
})
const response = await client.responses.create({
  model: 'llama3.2',
  input: 'Hello.',
})
Remote Ollama
Swap the base URL to reach a different machine:
const ollama = new OpenAI({
  apiKey: 'ollama',
  baseURL: 'http://gpu-box.local:11434/v1',
})
Compatible run adapters