Structure and Architecture
Core architecture and runtime flow
Project Structure
The system is based on a loop that runs at a fixed frequency of self.config.hertz. This loop collects the most recent data from various sources, fuses the data into a prompt, sends that prompt to one or more LLMs, and then sends the LLM responses to virtual agents or physical robots.
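As a minimal sketch of this timing pattern (the helper name and parameters below are illustrative, not OM1's actual API):

```python
import asyncio

async def run_at_fixed_rate(tick, hertz: float) -> None:
    # Await one tick, then sleep so the loop runs about `hertz` times
    # per second; e.g., hertz = 2.0 gives one tick every 0.5 seconds.
    while True:
        await tick()
        await asyncio.sleep(1 / hertz)
```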
Architecture Overview
Specific runtime flow:
- Input plugins collect sensor data (vision, audio, social media, etc.)
- The Fuser combines inputs into a prompt
- The LLM generates commands based on the prompt
- The ActionOrchestrator executes commands through actions
- Connectors map OM1 data/commands to external data buses and data distribution systems such as custom APIs, ROS2, Zenoh, or CycloneDDS (see the connector sketch after this list).
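To make the connector role concrete, here is a hypothetical sketch that forwards a command string onto a Zenoh bus; the ZenohConnector class and its send method are illustrative and not part of OM1, while the session and publisher calls follow the zenoh-python API.

```python
import zenoh

class ZenohConnector:
    """Hypothetical connector that publishes OM1 commands to a Zenoh bus."""

    def __init__(self, key: str = "om1/commands"):
        # Open a Zenoh session with default settings and declare a
        # publisher on the chosen key expression.
        self.session = zenoh.open(zenoh.Config())
        self.publisher = self.session.declare_publisher(key)

    def send(self, command: str) -> None:
        # Publish the command; any subscriber on the same key receives it.
        self.publisher.put(command)
```

A robot-side bridge subscribed to the same key would then translate each message into motor, speech, or display actions.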
Core Runtime System
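The listing below is a minimal reconstruction of that loop based on the walkthrough that follows; the exact method names (flush_promises, fuse, ask, promise) are assumptions drawn from that description rather than a verbatim copy of the source.

```python
import asyncio
import logging

async def _run_cortex_loop(self) -> None:
    # Run forever, ticking at the rate set by self.config.hertz.
    while True:
        await self._tick()
        await asyncio.sleep(1 / self.config.hertz)

async def _tick(self) -> None:
    # Flush completed actions so only pending or new tasks remain.
    finished_promises, _ = await self.action_orchestrator.flush_promises()

    # Fuse the current inputs and completed actions into a single prompt.
    prompt = self.fuser.fuse(self.config.agent_inputs, finished_promises)
    if prompt is None:
        return

    # Send the prompt to the Cortex LLM and log the response for debugging.
    output = await self.config.cortex_llm.ask(prompt)
    logging.debug("LLM output: %s", output)

    # Hand the generated commands to the orchestrator for execution.
    await self.action_orchestrator.promise(output.commands)
```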
Code Explanation
The code above defines an asynchronous event loop for a system running a Cortex AI model. It continuously processes input, generates responses, and executes commands.
_run_cortex_loop() – Main Loop
- Runs indefinitely, executing _tick() at a rate defined by self.config.hertz (loop frequency).
- Uses asyncio.sleep() to maintain a steady execution rate.
_tick() – Processing Each Cycle
Flushes completed tasks from the previous cycle.
- Flushes completed tasks (finished_promises) from action_orchestrator.
- Ensures the system processes only pending or new tasks.
Generates a prompt using current inputs.
- Creates a prompt by combining agent_inputs and finished_promises using the Fuser.
Sends the prompt to the Cortex LLM for processing.
- Sends the fused prompt to the Cortex LLM and awaits a response.
- The model generates an output based on the input prompt.
Receives and logs the output.
- Logs the generated output for debugging.
Executes commands generated by the AI.
- Executes the commands from the LLM response by passing them to action_orchestrator.