Configuration
Agent behavior is defined by configuration files in the /config directory. A configuration file specifies the LLM system prompt, the agent's inputs, the LLM configuration, the agent's actions, and so forth. Here is an example of a configuration file:
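A minimal sketch of such a file, assuming a JSON5 layout built from the four sections described below; all concrete values, and any type names other than VLM_COCO_Local, are illustrative placeholders rather than defaults:

```json5
{
  // Inputs that feed the agent (camera, LiDAR, microphone, etc.)
  "agent_inputs": [
    {
      "type": "VLM_COCO_Local",         // local vision input (named on this page)
      "config": { "camera_index": 0 }   // input-specific parameters
    }
  ],
  // The LLM that turns inputs into decisions
  "cortex_llm": {
    "config": {
      "base_url": "https://api.openai.com/v1", // provider endpoint (placeholder)
      "api_key": "sk-...",                     // your provider API key
      "agent_name": "my_robot",                // placeholder name
      "history_length": 10                     // placeholder value
    }
  },
  // Optional simulators and the actions available to the agent
  "simulators": [],
  "agent_actions": []
}
```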
agent_inputs
The agent_inputs section defines the inputs for the agent. Inputs might include a camera, a LiDAR, a microphone, or governance information. OM1 implements the following input types:

The config section of each agent_inputs entry is specific to that input type. For example, the VLM_COCO_Local input accepts a camera_index parameter.
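As a sketch of one such entry, assuming a JSON5 list-of-objects layout (the type name VLM_COCO_Local and the camera_index parameter come from this page; everything else is illustrative):

```json5
"agent_inputs": [
  {
    "type": "VLM_COCO_Local",   // local vision-language model input
    "config": {
      "camera_index": 0         // index of the camera device to read from
    }
  }
]
```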
cortex_llm
The cortex_llm field allows you to configure the Large Language Model (LLM) used by the agent. In a typical deployment, data will flow to at least three different LLMs, hosted in the cloud, that work together to provide actions to your robot.

Here is an example cortex_llm configuration showing the (deprecated) use of a single LLM to generate decisions:
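A sketch of such a single-LLM configuration, assuming a JSON5 layout; the parameter names base_url, api_key, agent_name, and history_length come from this page, while all values are illustrative placeholders:

```json5
"cortex_llm": {
  "config": {
    "base_url": "https://api.openai.com/v1", // provider endpoint (placeholder)
    "api_key": "sk-...",                     // your provider API key
    "agent_name": "my_robot",                // name used by the agent (placeholder)
    "history_length": 10                     // turns of history to retain (placeholder)
  }
}
```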
The key parameters are the base_url, agent_name, and history_length. Set the base_url and the api_key for OpenAI, DeepSeek, or other providers. Possible base_url choices include:
You can implement your own LLM endpoints, or use more sophisticated approaches such as multi-LLM robotics-focused endpoints, by following the LLM Guide.
simulators
Here is an example of the simulators section:
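A sketch of the simulators section, assuming the same list-of-objects layout as agent_inputs; the simulator type name below is an illustrative placeholder, not a documented value:

```json5
"simulators": [
  {
    "type": "WebSim"   // placeholder simulator name
  }
]
```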
agent_actions
Here is an example of the agent_actions section:
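A sketch of the agent_actions section, again assuming a JSON5 list-of-objects layout; every field name and value here is an illustrative assumption, since this page does not enumerate the action schema:

```json5
"agent_actions": [
  {
    "name": "move",                  // assumed: action name exposed to the LLM
    "implementation": "passthrough", // assumed: how the action is executed
    "connector": "ros2"              // assumed: connector that delivers it to hardware
  }
]
```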