Provider implementations for local and remote model backends.
Each sub-module is gated behind a Cargo feature flag (e.g. `provider-candle`,
`provider-openai`). Only providers whose features are enabled will be compiled.
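As a rough illustration of that gating (a sketch only; the actual declarations in the crate's source may differ in detail), the sub-module declarations are the natural place for `cfg` attributes keyed on these feature names:

```rust
// Sketch only: how sub-modules of this kind are typically feature-gated.
// The real declarations live in this crate's source.
#[cfg(feature = "provider-candle")]
pub mod candle;

#[cfg(feature = "provider-openai")]
pub mod openai;
```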
§Local providers
| Module | Feature | Engine |
|---|---|---|
| `candle` | `provider-candle` | Candle |
| `fastembed` | `provider-fastembed` | FastEmbed (ONNX Runtime) |
| `mistralrs` | `provider-mistralrs` | mistral.rs |
§Remote providers
| Module | Feature | API |
|---|---|---|
| `openai` | `provider-openai` | OpenAI |
| `gemini` | `provider-gemini` | Google Gemini |
| `vertexai` | `provider-vertexai` | Google Vertex AI |
| `mistral` | `provider-mistral` | Mistral AI |
| `anthropic` | `provider-anthropic` | Anthropic |
| `voyageai` | `provider-voyageai` | Voyage AI |
| `cohere` | `provider-cohere` | Cohere |
| `azure_openai` | `provider-azure-openai` | Azure OpenAI |
§Re-exports
pub use candle::LocalCandleProvider;
pub use openai::RemoteOpenAIProvider;
pub use fastembed::LocalFastEmbedProvider;
pub use gemini::RemoteGeminiProvider;
pub use vertexai::RemoteVertexAIProvider;
pub use mistral::RemoteMistralProvider;
pub use anthropic::RemoteAnthropicProvider;
pub use voyageai::RemoteVoyageAIProvider;
pub use cohere::RemoteCohereProvider;
pub use azure_openai::RemoteAzureOpenAIProvider;
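Because of these re-exports, a provider type can be named at this module's level instead of through its sub-module. A minimal sketch, using `providers` as a hypothetical path for this module (substitute the real crate/module path):

```rust
// Sketch only: thanks to the re-export, the type is reachable at the module root.
// Requires the corresponding feature (here `provider-openai`) to be enabled.
use providers::RemoteOpenAIProvider;
// equivalent to: use providers::openai::RemoteOpenAIProvider;
```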