Configuration
Agents are configured via JSON5 files in the /config directory. The configuration file defines the LLM system prompt, the agent's inputs, the LLM configuration, and the agent's actions. Here is an example of the configuration file:
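The sketch below illustrates the overall shape of a config file. The values are placeholders, and plugin names such as OpenAILLM and WebSim are assumptions; check the files shipped in /config for the exact names supported by your OM1 release.

```json5
{
  // Config format version (see the version section below).
  "version": "1.0.1",

  // Base tick rate of the agent, in Hz.
  "hertz": 0.5,

  // Unique identifier for the agent.
  "name": "my_agent",

  // API key from the OpenMind Portal (placeholder).
  "api_key": "your_openmind_api_key",

  // Universal Robot ID used to join FABRIC (placeholder).
  "URID": "default",

  // Personality, laws, and example inputs/actions for the LLM.
  "system_prompt_base": "You are a friendly, helpful robot...",
  "system_governance": "Here are the laws that govern your actions...",
  "system_prompt_examples": "If you see a person, wave and say hello...",

  // Input plugins feeding data to the agent.
  "agent_inputs": [
    { "type": "VLM_COCO_Local", "config": { "camera_index": 0 } },
    { "type": "GovernanceEthereum" }
  ],

  // LLM used to turn inputs into decisions (plugin name is an assumption).
  "cortex_llm": {
    "type": "OpenAILLM",
    "config": {
      "base_url": "https://api.openai.com/v1",
      "agent_name": "my_agent",
      "history_length": 10
    }
  },

  // Optional simulation modules (plugin name is an assumption).
  "simulators": [
    { "type": "WebSim" }
  ],

  // Capabilities the LLM can invoke (implementation/connector values are assumptions).
  "agent_actions": [
    {
      "name": "speak",
      "implementation": "passthrough",
      "connector": "ros2"
    }
  ]
}
```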
Common Configuration Elements
- hertz Defines the base tick rate of the agent. Raising this rate lets the agent respond more quickly to changing environments, but reduces the time available for LLMs to finish generating tokens. Note: time-critical tasks such as collision avoidance should be handled through low-level control loops operating in parallel to the LLM-based logic, using event-triggered callbacks through real-time middleware.
- name A unique identifier for the agent.
- api_key The API key for the agent. You can get your API key from the OpenMind Portal.
- URID The Universal Robot ID for the robot. Used to join a decentralized machine-to-machine coordination and communication system (FABRIC).
- system_prompt_base Defines the agent’s personality and behavior.
- system_governance The agent’s laws and constitution.
- system_prompt_examples The agent’s example inputs/actions.
version
The version field specifies the runtime configuration version. It is required for both single-mode and multi-mode configs. This field ensures that configuration files remain compatible as the runtime evolves. When the version in a config does not match what the runtime expects, developers receive clear logs and errors instead of silent failures or unpredictable behavior.
Runtime support
The runtime/version.py module handles:
- retrieving the current runtime version
- checking compatibility between config and runtime
- producing detailed logs and helpful error messages when mismatches occur
Available versions
- v1.0.0: Initial stable configuration version.
- v1.0.1 (latest): Adds support for context-aware mode for full autonomy.
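For example, a config can pin the latest version with the top-level field shown below (in isolation):

```json5
{
  // Must match a version the runtime supports; mismatches produce clear errors at startup.
  "version": "1.0.1"
}
```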
Agent Inputs (agent_inputs)
The agent_inputs section defines the inputs for the agent. Inputs might include a camera, a LiDAR, a microphone, or governance information. OM1 implements the following input types:
- GoogleASRInput
- VLMVila
- VLM_COCO_Local
- RPLidar
- TurtleBot4Batt
- UnitreeG1Basic
- UnitreeGo2Lowstate
- GovernanceEthereum
- more being added continuously…
The config block of each agent_inputs entry is specific to that input type. For example, the VLM_COCO_Local input accepts a camera_index parameter. Here is an example configuration for the agent_inputs section:
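A minimal sketch using input types from the list above; the camera_index value is a placeholder:

```json5
"agent_inputs": [
  {
    // Local COCO-based vision-language model reading from an attached camera.
    "type": "VLM_COCO_Local",
    "config": {
      "camera_index": 0  // which camera device to use (placeholder value)
    }
  },
  {
    // Governance information sourced on-chain.
    "type": "GovernanceEthereum"
  }
]
```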
Cortex LLM (cortex_llm)
The cortex_llm field configures the Large Language Model (LLM) used by the agent. In a typical deployment, data flows to at least three cloud-hosted LLMs that work together to provide actions to your robot.
Robot Control by a Single LLM
Here is an example configuration of the cortex_llm section showing use of a single LLM to generate decisions:
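A minimal sketch, assuming an OpenAI-compatible endpoint and a plugin named OpenAILLM (the plugin name and values are assumptions):

```json5
"cortex_llm": {
  // LLM plugin (assumed name; see the plugins shipped with your OM1 release).
  "type": "OpenAILLM",
  "config": {
    "base_url": "https://api.openai.com/v1", // API endpoint
    "agent_name": "my_agent",                // agent name passed to the LLM
    "history_length": 10                     // number of past interactions kept as context
  }
}
```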
- type: Specifies the LLM plugin.
- config: LLM configuration, including the API endpoint (base_url), agent_name, and history_length.
Set the base_url and the api_key for OpenAI, DeepSeek, or other providers. Possible base_url choices include:
- https://api.openai.com/v1
- https://api.deepseek.com/v1
- http://localhost:11434 (Ollama - local inference, no API key required)
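For example, a local Ollama endpoint could be used as follows (a sketch; the plugin name is an assumption and no provider API key is required):

```json5
"cortex_llm": {
  "type": "OpenAILLM", // assumed plugin name for an OpenAI-compatible client
  "config": {
    // Local inference through Ollama; no provider API key required.
    "base_url": "http://localhost:11434",
    "agent_name": "my_agent",
    "history_length": 10
  }
}
```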
Simulators (simulators)
Lists the simulation modules used by the agent. Here is an example configuration for the simulators section:
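A sketch, assuming a browser-based simulator plugin named WebSim (the plugin name is an assumption):

```json5
"simulators": [
  {
    // Simulation module used to visualize the agent's behavior (plugin name is an assumption).
    "type": "WebSim"
  }
]
```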
Agent Actions (agent_actions)
Defines the agent’s available capabilities, including action names, their implementation, and the connector used to execute them. Here is an example configuration for the agent_actions section:
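A sketch of two actions; the implementation and connector values ("passthrough", "ros2") are assumptions and depend on your hardware and middleware:

```json5
"agent_actions": [
  {
    "name": "move",                  // action name exposed to the LLM
    "implementation": "passthrough", // assumed: forwards the LLM's output unchanged
    "connector": "ros2"              // assumed: executes the action via a ROS2 connector
  },
  {
    "name": "speak",
    "implementation": "passthrough",
    "connector": "ros2"
  }
]
```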