core.ai.providers package

Submodules

core.ai.providers.base module

Abstract base class for AI chat providers.

class core.ai.providers.base.AIProvider[source]

Bases: ABC

Interface that all AI providers must implement.

abstractmethod chat(messages: List[Dict[str, str]], temperature: float = 0.7, max_tokens: int = 4096, **kwargs) → str[source]

Send messages and return a complete response (blocking).

abstractmethod chat_stream(messages: List[Dict[str, str]], temperature: float = 0.7, max_tokens: int = 4096, **kwargs)[source]

Yield response chunks as they arrive (generator).

abstractmethod is_available() → bool[source]

Return True if the provider is configured and reachable.

abstractmethod model_name() → str[source]

Return the display name of the current model.
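A concrete provider plugs in by implementing all four abstract methods. A minimal sketch, mirroring the documented interface inline rather than importing the package (the EchoProvider class is a toy illustration, not part of this package):

```python
from abc import ABC, abstractmethod
from typing import Dict, Iterator, List


class AIProvider(ABC):
    """Inline mirror of the documented interface, for illustration."""

    @abstractmethod
    def chat(self, messages: List[Dict[str, str]], temperature: float = 0.7,
             max_tokens: int = 4096, **kwargs) -> str: ...

    @abstractmethod
    def chat_stream(self, messages: List[Dict[str, str]], temperature: float = 0.7,
                    max_tokens: int = 4096, **kwargs) -> Iterator[str]: ...

    @abstractmethod
    def is_available(self) -> bool: ...

    @abstractmethod
    def model_name(self) -> str: ...


class EchoProvider(AIProvider):
    """Toy provider that echoes the last user message."""

    def chat(self, messages, temperature=0.7, max_tokens=4096, **kwargs) -> str:
        # Blocking call: return the complete response at once.
        return messages[-1]["content"]

    def chat_stream(self, messages, temperature=0.7, max_tokens=4096, **kwargs):
        # Generator: yield the response in chunks (here, word by word).
        for word in self.chat(messages).split():
            yield word

    def is_available(self) -> bool:
        return True

    def model_name(self) -> str:
        return "echo"


provider = EchoProvider()
reply = provider.chat([{"role": "user", "content": "hello world"}])
chunks = list(provider.chat_stream([{"role": "user", "content": "hello world"}]))
```

Omitting any of the four methods leaves the subclass abstract, so instantiation fails at construction time rather than at first use.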

core.ai.providers.deepseek_provider module

DeepSeek provider via OpenRouter (free tier available).

class core.ai.providers.deepseek_provider.DeepSeekProvider(api_key: str = '', model: str = 'deepseek/deepseek-chat:free', base_url: str = '')[source]

Bases: AIProvider

Provider that talks to DeepSeek models via OpenRouter.

Uses the OpenAI-compatible endpoint at openrouter.ai. Requires an API key from OpenRouter (free tier available).

DEFAULT_BASE_URL = 'https://openrouter.ai/api/v1'
MAX_RETRIES = 3
RETRY_BASE_DELAY = 2.0
chat(messages: List[Dict[str, str]], temperature: float = 0.7, max_tokens: int = 4096, **kwargs) → str[source]

Send messages and return a complete response (blocking).

chat_stream(messages: List[Dict[str, str]], temperature: float = 0.7, max_tokens: int = 4096, **kwargs)[source]

Yield response chunks as they arrive (generator).

is_available() → bool[source]

Return True if the provider is configured and reachable.

model_name() → str[source]

Return the display name of the current model.
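The MAX_RETRIES and RETRY_BASE_DELAY constants suggest exponential backoff between failed attempts. A sketch of the presumed schedule (the doubling formula is an assumption; the exact backoff used internally is not documented here):

```python
MAX_RETRIES = 3        # documented class constant
RETRY_BASE_DELAY = 2.0  # documented class constant, in seconds


def retry_delays(max_retries: int = MAX_RETRIES,
                 base: float = RETRY_BASE_DELAY) -> list:
    # Presumed schedule: base * 2**attempt after each failure,
    # i.e. 2s, 4s, 8s for the defaults above.
    return [base * (2 ** attempt) for attempt in range(max_retries)]


delays = retry_delays()
```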

core.ai.providers.local_provider module

Local LLM provider (Ollama, LM Studio, or any OpenAI-compatible local server).

class core.ai.providers.local_provider.LocalLLMProvider(model: str = 'llama3', base_url: str = 'http://localhost:11434/v1')[source]

Bases: AIProvider

Provider for local LLM servers that expose an OpenAI-compatible API.

Works with:

- Ollama (default: http://localhost:11434/v1)
- LM Studio (default: http://localhost:1234/v1)
- Any OpenAI-compatible local server

chat(messages: List[Dict[str, str]], temperature: float = 0.7, max_tokens: int = 4096, **kwargs) → str[source]

Send messages and return a complete response (blocking).

chat_stream(messages: List[Dict[str, str]], temperature: float = 0.7, max_tokens: int = 4096, **kwargs)[source]

Yield response chunks as they arrive (generator).

is_available() → bool[source]

Return True if the provider is configured and reachable.

model_name() → str[source]

Return the display name of the current model.
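Against a local server the provider speaks the OpenAI Chat Completions wire format. A sketch of the request body it would presumably send (field names follow the OpenAI specification; this helper is illustrative, not the package's internals):

```python
from typing import Dict, List


def build_chat_request(model: str, messages: List[Dict[str, str]],
                       temperature: float = 0.7, max_tokens: int = 4096,
                       stream: bool = False) -> dict:
    # JSON body POSTed to <base_url>/chat/completions on an
    # OpenAI-compatible server (Ollama, LM Studio, vLLM, ...).
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "stream": stream,  # chat_stream() would set this to True
    }


payload = build_chat_request("llama3", [{"role": "user", "content": "hi"}])
```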

core.ai.providers.openai_provider module

OpenAI-compatible API provider (works with OpenAI, Azure, and compatible endpoints).

class core.ai.providers.openai_provider.OpenAIProvider(api_key: str = '', model: str = 'gpt-4o-mini', base_url: str = 'https://api.openai.com/v1')[source]

Bases: AIProvider

Provider that talks to the OpenAI Chat Completions API.

Also works with any OpenAI-compatible endpoint (e.g. Azure, Together, Groq, local vLLM) by changing base_url.

MAX_RETRIES = 3
RETRY_BASE_DELAY = 2.0
chat(messages: List[Dict[str, str]], temperature: float = 0.7, max_tokens: int = 4096, **kwargs) → str[source]

Send messages and return a complete response (blocking).

chat_stream(messages: List[Dict[str, str]], temperature: float = 0.7, max_tokens: int = 4096, **kwargs)[source]

Yield response chunks as they arrive (generator).

is_available() → bool[source]

Return True if the provider is configured and reachable.

model_name() → str[source]

Return the display name of the current model.
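Since only base_url varies between compatible backends, selecting one can be a pure configuration concern. A sketch (the third-party endpoint URLs are illustrative assumptions; only the OpenAI default comes from this page):

```python
# base_url per backend; the non-OpenAI entries are assumed, not documented here.
ENDPOINTS = {
    "openai": "https://api.openai.com/v1",     # documented default
    "vllm": "http://localhost:8000/v1",        # common local vLLM default
}


def base_url_for(backend: str) -> str:
    # Resolve a backend name to the base_url passed to OpenAIProvider.
    try:
        return ENDPOINTS[backend]
    except KeyError:
        raise ValueError(f"unknown backend: {backend!r}")


url = base_url_for("openai")
```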

core.ai.providers.openrouter_provider module

OpenRouter provider — unified access to 300+ models with tool calling and model discovery.

class core.ai.providers.openrouter_provider.OpenRouterProvider(api_key: str = '', model: str = 'deepseek/deepseek-chat:free', base_url: str = '')[source]

Bases: AIProvider

Provider that talks to OpenRouter’s OpenAI-compatible API.

Supports any model available on OpenRouter (DeepSeek, Claude, GPT, Gemini, Llama, Mistral, etc.), including tool/function calling and the special openrouter/auto model selector.

Get a free API key at: https://openrouter.ai/settings/keys

DEFAULT_BASE_URL = 'https://openrouter.ai/api/v1'
MAX_RETRIES = 3
RETRY_BASE_DELAY = 2.0
chat(messages: List[Dict[str, str]], temperature: float = 0.7, max_tokens: int = 4096, tools: List[Dict] | None = None) → str[source]

Send messages and return a complete response (blocking).

chat_stream(messages: List[Dict[str, str]], temperature: float = 0.7, max_tokens: int = 4096, tools: List[Dict] | None = None)[source]

Yield response chunks as they arrive (generator).

chat_with_tools(messages: List[Dict[str, str]], tools: List[Dict], temperature: float = 0.7, max_tokens: int = 4096) → Dict[str, Any][source]

Non-streaming chat that returns structured response including tool calls.

Returns dict with keys: ‘content’, ‘tool_calls’ (list or None), ‘finish_reason’.
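A typical tool-calling loop inspects the returned dict for tool calls before falling back to plain content. A sketch with a hypothetical get_weather tool (the tool schema follows OpenAI's function-calling format; the dispatch helper and the simulated response are illustrative, not package code):

```python
import json

# Tool schema in OpenAI function-calling format, as passed via `tools=`.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}


def dispatch(response: dict) -> str:
    # 'content', 'tool_calls', 'finish_reason' are the documented keys.
    if response.get("tool_calls"):
        call = response["tool_calls"][0]
        args = json.loads(call["function"]["arguments"])
        return f"call {call['function']['name']}({args})"
    return response["content"]


# Simulated value shaped like a chat_with_tools() return (assumed layout
# of each tool-call entry, mirroring the OpenAI wire format).
fake = {
    "content": None,
    "tool_calls": [{"function": {"name": "get_weather",
                                 "arguments": '{"city": "Oslo"}'}}],
    "finish_reason": "tool_calls",
}
```

When finish_reason is "tool_calls", content may be None, so checking tool_calls first avoids treating a tool request as an empty answer.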

fetch_free_models() → List[Dict[str, Any]][source]

Fetch only free models from OpenRouter.

fetch_models() → List[Dict[str, Any]][source]

Fetch available models from OpenRouter API.

Returns a list of model dicts with keys: id, name, description, context_length, pricing, etc.
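fetch_free_models() presumably filters the full catalog on the pricing field of each model dict. A sketch over the documented dict shape (the zero-pricing test is an assumption about how "free" is determined; the sample catalog entries are illustrative):

```python
from typing import Any, Dict, List


def filter_free(models: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    # Assumed rule: a model is free when both prompt and completion
    # prices are zero in its "pricing" dict.
    def is_free(model: Dict[str, Any]) -> bool:
        pricing = model.get("pricing", {})
        return all(float(pricing.get(key, "1")) == 0.0
                   for key in ("prompt", "completion"))

    return [m for m in models if is_free(m)]


# Illustrative entries in the documented shape (id, pricing, ...).
catalog = [
    {"id": "deepseek/deepseek-chat:free",
     "pricing": {"prompt": "0", "completion": "0"}},
    {"id": "example/paid-model",
     "pricing": {"prompt": "0.00000015", "completion": "0.0000006"}},
]

free_ids = [m["id"] for m in filter_free(catalog)]
```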

is_available() → bool[source]

Return True if the provider is configured and reachable.

model_name() → str[source]

Return the display name of the current model.

Module contents