LLM Integration
The LLM integration layer has two responsibilities: (1) provide fused input information to LLMs, and (2) route LLM responses to various system actions, such as `speak` and `move`. The system provides a standardized interface for communicating with many different LLM endpoints from all the major providers, including Anthropic, Google, DeepSeek, and OpenAI.
The plugins handle authentication, API communication, prompt formatting, response parsing, and conversation history management. LLM plugin examples are located in `src/llm/plugins`.
Each LLM plugin sends requests to the `POST /api/core/{provider}/chat/completions` endpoint. The plugin takes the fused input data (the prompt) and sends it to an LLM. The response is then parsed and provided to `runtime/cortex.py` for distribution to the system actions:
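A minimal sketch of that request/response flow. The URL path comes from this document; the base URL, payload shape, and auth header are assumptions for illustration only:

```python
import json
import urllib.request


def build_chat_request(base_url: str, provider: str, prompt: str, api_key: str):
    """Build a POST request for the chat completions endpoint.

    The path /api/core/{provider}/chat/completions is from the docs;
    the JSON body and Authorization header are invented placeholders.
    """
    url = f"{base_url}/api/core/{provider}/chat/completions"
    payload = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
    return urllib.request.Request(
        url,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


# Example: target the OpenAI-backed plugin (base URL is hypothetical).
req = build_chat_request("https://api.example.com", "openai", "Wave hello", "sk-...")
```

A real plugin would send this request, parse the JSON response, and hand the result to `runtime/cortex.py`.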
A `pydantic` output model is defined in `src/llm/output_model.py`.
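For orientation, here is a sketch of what such an output model could look like. The field names below are invented for illustration and are not the actual contents of `src/llm/output_model.py`:

```python
from pydantic import BaseModel


class Action(BaseModel):
    """One action for the runtime to execute (field names are hypothetical)."""
    type: str   # action name, e.g. "speak" or "move"
    value: str  # action argument, e.g. the text to speak


class CortexOutputModel(BaseModel):
    """Parsed LLM response handed to runtime/cortex.py."""
    actions: list[Action]
```

Validating the parsed LLM response against a model like this catches malformed outputs before they reach the action layer.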
The `/api/core/agent` endpoint utilizes a collaborative system of specialized agents to perform more complex robotics tasks. The multi-agent system runs its agents concurrently via `asyncio.gather()`.
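The concurrency pattern is standard `asyncio`. A dependency-free sketch with two stand-in agents (the agent names and return values are invented; real agents would call an LLM):

```python
import asyncio


async def planner_agent(task: str) -> str:
    # Stand-in for a specialized agent; a real one would query an LLM.
    await asyncio.sleep(0)
    return f"plan for {task}"


async def safety_agent(task: str) -> str:
    await asyncio.sleep(0)
    return f"safety check for {task}"


async def run_agents(task: str) -> list[str]:
    # asyncio.gather() runs the specialized agents concurrently
    # and collects their results in order.
    return await asyncio.gather(planner_agent(task), safety_agent(task))


results = asyncio.run(run_agents("fetch the cup"))
```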
The multi-agent system also provides a memory endpoint, `/api/core/agent/memory`.
The system also offers a RAG endpoint (`/api/core/rag`) to provide retrieval-augmented generation capabilities. To use RAG with your documents:
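The document does not show the RAG request format, so the following is only a sketch under assumed field names (`documents`, `query`) and a hypothetical base URL:

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # hypothetical


def rag_request(path: str, body: dict) -> urllib.request.Request:
    """Build a JSON POST request to the RAG endpoint (payload is assumed)."""
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# 1. Upload documents for indexing (field name is an assumption).
upload = rag_request("/api/core/rag", {"documents": ["robot_manual.txt"]})

# 2. Ask a question answered with retrieved context.
query = rag_request("/api/core/rag", {"query": "How do I calibrate the arm?"})
```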
The behavior of `/api/core/agent` can be customized through the `system_prompt_base` setting. For example:
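As an invented illustration (the actual configuration format is not shown in this document), a configuration might override `system_prompt_base` like this:

```json
{
  "system_prompt_base": "You are a helpful robot assistant. Keep spoken replies short and announce each movement before you make it."
}
```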
A specialized medical agent is available (`/api/core/agent/medical`). This endpoint emphasizes the careful, responsible delivery of general, health-related, non-diagnostic responses. A suitable prompt might be:
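One invented example, not the project's actual prompt:

```text
You are a careful health information assistant. Provide general,
educational information only, never a diagnosis or treatment plan,
and always advise the user to consult a qualified clinician.
```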
Requests and responses otherwise follow the same format as the `/agent` endpoint.