
Show HN: Conduit: One Swift interface for every AI provider, on-device and cloud

ckarani | 2026-02-18 02:21 UTC | source

christopherkarani/Conduit

1 point | 0 comments | original link
I built Conduit because I was tired of writing the same streaming boilerplate five times for five different AI providers, then rewriting it every time a new one became interesting. So I stopped. The core idea: one protocol hierarchy, every provider. Switch from Claude to a local Llama model running on Apple Silicon with a one-line change. No vendor lock-in at the call site.

The interesting decision was going actor-first from day one. Every provider is a Swift actor. You get data-race freedom enforced at compile time, not by convention. Swift 6.2's strict concurrency makes this a hard guarantee, not a README promise. LangChain can't say that.
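
A rough sketch of the shape, with the protocol and provider names simplified for illustration rather than taken from the actual API:

    // Simplified sketch of the actor-per-provider idea; AIProvider and
    // AnthropicProvider are illustrative names, not the exact declarations.
    protocol AIProvider: Sendable {
        func generate(_ prompt: String) async throws -> String
    }

    actor AnthropicProvider: AIProvider {
        private var requestCount = 0   // mutable state, isolated to the actor

        func generate(_ prompt: String) async throws -> String {
            requestCount += 1
            // ...perform the HTTPS call to the provider here...
            return "stub response"
        }
    }

Because every provider is an actor, any mutable state it holds can only be touched through isolated async calls, which is exactly what the compiler checks.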

The part I'm most proud of — @Generable

    @Generable
    struct FlightSearch {
        @Guide(description: "Origin airport code")
        let origin: String

        @Guide(description: "Departure date", .format(.date))
        let date: Date

        @Guide(.range(1...9))
        let passengers: Int
    }

    let result = try await provider.generate(
        "Book me a flight to Tokyo next Friday",
        model: .claude3_5Sonnet,
        returning: FlightSearch.self
    )

The macro expands at compile time (via swift-syntax) to generate JSON Schema, streaming partial types, and all conversion boilerplate. The API is deliberately aligned with Apple's new Foundation Models framework — so the same struct works against on-device Apple models on iOS 26 and against Claude or GPT-4 with zero changes.
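
To make the structured-output side concrete, here is roughly the JSON Schema that FlightSearch implies, written out by hand (illustrative only; the real expansion also emits the partial/streaming types and conversion code):

    // Hand-written approximation of the derived schema, not the literal macro output.
    let flightSearchSchema = """
    {
      "type": "object",
      "properties": {
        "origin":      { "type": "string",  "description": "Origin airport code" },
        "date":        { "type": "string",  "format": "date", "description": "Departure date" },
        "passengers":  { "type": "integer", "minimum": 1, "maximum": 9 }
      },
      "required": ["origin", "date", "passengers"]
    }
    """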

On-device is a first-class citizen, not an afterthought

Most Swift AI SDKs treat cloud as the primary path and shim local models in awkwardly. Conduit treats MLX, llama.cpp, Core ML, and Apple's Foundation Models as fully equal providers. A ChatSession configured with an MLX Llama model and one configured with GPT-4o are indistinguishable at the call site.
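
A sketch of what that interchangeability looks like in practice (the initializers and send method are simplified for illustration, not exact signatures):

    // Local and cloud sessions, interchangeable downstream; the initializers
    // and send(_:) below are illustrative, not the exact API surface.
    let local = ChatSession(provider: MLXProvider(model: .llama3_8B))
    let cloud = ChatSession(provider: OpenAIProvider(model: .gpt4o))

    // Callers never learn which backend they were handed.
    func summarize(_ text: String, with session: ChatSession) async throws -> String {
        try await session.send("Summarize: \(text)")
    }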

Trait-based compilation keeps binary size sane
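
Providers are gated behind SwiftPM package traits, so anything you don't enable never gets compiled into your binary. A sketch of the consumer side, with illustrative trait names and a placeholder version pin:

    // swift-tools-version:6.1
    // Sketch of a consumer manifest; the trait names "MLX" and "Anthropic" and
    // the version requirement are illustrative placeholders.
    import PackageDescription

    let package = Package(
        name: "MyApp",
        dependencies: [
            .package(
                url: "https://github.com/christopherkarani/Conduit",
                from: "1.0.0",
                traits: ["MLX", "Anthropic"]   // only these providers get built
            )
        ],
        targets: [
            .executableTarget(name: "MyApp", dependencies: ["Conduit"])
        ]
    )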

Streaming is AsyncThrowingStream all the way down. Cancellation works via standard Swift task cancellation — no special teardown protocol. Back-pressure is handled naturally by the async iterator.
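
A self-contained sketch of that model, using a stand-in stream rather than a real provider call:

    // Stand-in token stream; a real provider would yield model output instead.
    func tokenStream() -> AsyncThrowingStream<String, Error> {
        AsyncThrowingStream { continuation in
            let producer = Task {
                for word in ["Once", " upon", " a", " time", "..."] {
                    try Task.checkCancellation()
                    continuation.yield(word)
                    try await Task.sleep(for: .milliseconds(50))
                }
                continuation.finish()
            }
            // Tearing down the consumer terminates the stream, which cancels the producer.
            continuation.onTermination = { _ in producer.cancel() }
        }
    }

    let consumer = Task {
        for try await token in tokenStream() {
            print(token, terminator: "")
        }
    }

    // Standard Swift task cancellation; no special teardown protocol.
    consumer.cancel()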

12 providers, one interface

Anthropic, OpenAI, Azure OpenAI, Ollama, OpenRouter, Kimi, MiniMax, HuggingFace Hub, MLX, llama.cpp, Core ML, Foundation Models. The OpenAI-compatible ones share a single OpenAIProvider actor — the named variants are thin configuration wrappers, not code forks.
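
For the OpenAI-compatible group, pointing the shared actor at a different base URL is essentially all a variant amounts to. A sketch (the initializer parameters are simplified, not the exact signature):

    import Foundation

    // OpenRouter as configuration over the shared OpenAI-compatible actor;
    // the parameter names here are illustrative.
    let openRouter = OpenAIProvider(
        baseURL: URL(string: "https://openrouter.ai/api/v1")!,
        apiKey: ProcessInfo.processInfo.environment["OPENROUTER_API_KEY"] ?? ""
    )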

https://github.com/christopherkarani/Conduit

Happy to dig into the actor model approach, the macro expansion strategy, or why wrapping LangChain was never an option.