LLM Integration
Endpoint Overview
# Base URL: https://api.openmind.org/
POST /api/core/{provider}/chat/completions # Single-agent
POST /api/core/agent # Multi-agent
DELETE /api/core/agent/memory # Multi-agent memory wipe
GET /api/core/rag # RAG knowledge base
POST /api/core/agent/medical # Healthcare-focused multi-agent system
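The single-agent endpoint can be called directly over HTTP. The sketch below is illustrative only: the "openai" provider value, the Bearer-token auth header, the OM_API_KEY environment variable name, and the OpenAI-style request body are assumptions, not details taken from this page.

import os
import requests

API_KEY = os.environ["OM_API_KEY"]        # assumed environment variable name
BASE_URL = "https://api.openmind.org"

resp = requests.post(
    f"{BASE_URL}/api/core/openai/chat/completions",   # "openai" substituted for {provider}
    headers={"Authorization": f"Bearer {API_KEY}"},    # assumed auth scheme
    json={
        "model": "gpt-4o-mini",                        # illustrative model name
        "messages": [{"role": "user", "content": "Hello from my robot"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])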
Single-Agent LLM Integration

# Send the accumulated conversation plus the new user prompt to the provider
# and have the SDK parse the reply into the configured structured output model.
response = await self._client.beta.chat.completions.parse(
    model=self._config.model,
    messages=[*messages, {"role": "user", "content": prompt}],
    response_format=self._output_model,
    timeout=self._config.timeout,
)

# Validate the returned JSON against the Pydantic output model before returning it.
message_content = response.choices[0].message.content
parsed_response = self._output_model.model_validate_json(message_content)
return parsed_response

Single-Agent LLM Configuration
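The snippet above reads the model name and request timeout from a configuration object (self._config). A minimal sketch of such a configuration is shown below; only the model and timeout fields are grounded in the snippet, while the use of Pydantic and the api_key field are illustrative assumptions.

from typing import Optional
from pydantic import BaseModel

class LLMConfig(BaseModel):
    # Only `model` and `timeout` are referenced by the snippet above;
    # the remaining field is an illustrative assumption.
    model: str = "gpt-4o-mini"
    timeout: float = 30.0
    api_key: Optional[str] = None

config = LLMConfig(model="gpt-4o", timeout=10.0)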
Multi-Agent LLM Integration
Local LLMs
Ollama Integration
Dual LLM Support
How It Works
Agent Architecture
Main API Endpoint
API Debug Response Structure
Supported Models
Memory Management
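Multi-agent memory can be wiped through the DELETE /api/core/agent/memory endpoint listed in the overview. A minimal sketch follows; the Bearer-token auth header and the OM_API_KEY environment variable are assumptions, not details from this page.

import os
import requests

resp = requests.delete(
    "https://api.openmind.org/api/core/agent/memory",
    headers={"Authorization": f"Bearer {os.environ['OM_API_KEY']}"},  # assumed auth scheme
    timeout=30,
)
resp.raise_for_status()  # a 2xx status indicates the agent's memory was cleared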
RAG Integration (Currently Disabled)
Examples
A Smart Dog
Medical Robot
