The config file defines the agent that runs on your machine. It tells OM1 which modules to load, how the robot should behave, and which modes are available. To ensure your configuration is valid, follow the format defined here. OpenMind supports two configuration schemas:

1. Single-Mode Schema

Use this when your robot only needs to run one dedicated mode. For example, a pure conversation agent or a navigation-only setup. The entire agent is optimized around a single use case.

2. Multi-Mode Schema

Use this when you want the robot to switch between multiple modes at runtime based on user choice or context. You can configure any subset of the five available modes, whether just two or all five, depending on your application.

Steps to build a new config file

  1. Start by getting your API key from the OpenMind Portal. Copy and save it; you’ll paste it into the config later.
  2. Create a new config file named config.json5 with the following top-level fields:
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| version | string | Yes | The version of the configuration format. Example: "v1.0.0" |
| hertz | number | Yes | How often (in Hz) the agent runs its update loop. Example: 0.01 |
| name | string | Yes | The name of the agent. Example: "conversation" |
| api_key | string | Yes | API key used to authenticate the agent. Example: "openmind_free" |
| system_prompt_base | string | Yes | Defines the agent’s core personality and behavior. Serves as the primary system prompt for the LLM. |
| system_governance | string | Yes | The laws or constraints that the agent must follow during operation. Modeled similarly to Asimov’s laws. |
| system_prompt_examples | string | No | Example interactions that help guide the model’s behavior. |
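
As a rough sketch, the top-level fields above might look like this in config.json5 (the values are the examples from the table and purely illustrative):

```json5
{
  version: "v1.0.0",        // configuration format version
  hertz: 0.01,              // update loop frequency in Hz
  name: "conversation",     // agent name
  api_key: "openmind_free", // paste the key you saved from the OpenMind Portal

  // system_prompt_base, system_governance, and system_prompt_examples
  // are also required here; they are covered in Step 3 below.
}
```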

Step 3. Customize the system prompts

There are three key prompt fields:
  • system_prompt_base: Defines your agent’s personality and behavior. You can keep the “Spot the dog” behavior or edit it to match your needs. You can also provide context to the LLM here.
  • system_governance: Hard-coded rules the agent must follow (Asimov’s laws).
  • system_prompt_examples: Gives your model examples of how to respond. These help shape its responses. You can add more examples if needed.
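
For illustration, the three prompt fields could be filled in along these lines; the wording below is placeholder text to adapt, not the shipped defaults:

```json5
{
  // ...top-level fields from Step 2...
  system_prompt_base: "You are Spot, a friendly and curious robot dog. You keep your answers short and you are helpful to the people around you.",
  system_governance: "First Law: Do not harm humans or, through inaction, allow humans to come to harm. Second Law: Obey orders from humans unless they conflict with the First Law.",
  system_prompt_examples: "If someone asks 'How are you?', you might answer 'I am doing great, thanks for asking!' and wag your tail.",
}
```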

Step 4. Configure inputs

Inputs provide the sensory capabilities that allow robots to perceive their environment.
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| type | string | Yes | The input type identifier. Example: "AudioInput" |
| config | object | No | Configuration options specific to this input type. Example: GoogleASRInput |
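
A sketch of an input entry is shown below. The top-level key name (agent_inputs) is an assumption based on typical OM1 example configs rather than something stated above, so verify it against a shipped example:

```json5
{
  // ...
  agent_inputs: [          // key name assumed; check the OM1 example configs
    {
      type: "AudioInput",  // input type identifier
      // config: { ... }   // optional, input-specific options
    },
  ],
}
```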

Step 5. Configure the LLM

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| type | string | Yes | The LLM provider name. Example: "OpenAILLM" |
| config | object | No | Configuration options specific to this LLM type. |
| agent_name | string | No | Agent name used in metadata. Example: "Spot" |
| history_length | integer | No | Number of past messages to remember in the conversation. Example: 10 |
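
For example, the LLM section might look like the sketch below. The cortex_llm key name and the placement of agent_name and history_length inside config are assumptions; confirm both against the OM1 example configs:

```json5
{
  // ...
  cortex_llm: {            // key name assumed; check the OM1 example configs
    type: "OpenAILLM",     // LLM provider name
    config: {
      agent_name: "Spot",  // name used in metadata
      history_length: 10,  // number of past messages to remember
    },
  },
}
```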

Step 6. Set up agent actions

Actions define what your agent can do. You can define movement, TTS, or any other actions here.
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Human-readable identifier for the action. Example: "speak" |
| llm_label | string | Yes | Label the model uses to refer to this action. Example: "speak" |
| implementation | string | No | Defines the business logic. If none is defined, defaults to "passthrough". Example: "passthrough" |
| connector | string | Yes | Name of the connector. This is the Python file name defined under actions/action_name/connector. Example: "elevenlabs_tts" |
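
An action entry could then look like this sketch; the field values come from the table above, while the agent_actions key name is an assumption to verify against the OM1 example configs:

```json5
{
  // ...
  agent_actions: [                   // key name assumed; check the OM1 example configs
    {
      name: "speak",                 // human-readable identifier
      llm_label: "speak",            // label the model uses to invoke the action
      implementation: "passthrough", // business logic; defaults to "passthrough"
      connector: "elevenlabs_tts",   // Python file under actions/speak/connector
    },
  ],
}
```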

Step 7. Validate the config

Before using the file, check for JSON errors: make sure commas, quotes, and braces are correct, and confirm that the correct API key is configured.
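
For reference, a complete file assembled from the steps above could look roughly like this. It is an illustrative sketch only; the section key names (agent_inputs, cortex_llm, agent_actions) are assumptions, and the prompt strings are placeholders:

```json5
{
  version: "v1.0.0",
  hertz: 0.01,
  name: "conversation",
  api_key: "openmind_free", // replace with your key from the OpenMind Portal
  system_prompt_base: "You are Spot, a friendly robot dog. Keep your answers short.",
  system_governance: "First Law: Do not harm humans. Second Law: Obey humans unless that conflicts with the First Law.",
  system_prompt_examples: "If asked 'How are you?', answer 'I am doing great!' and wag your tail.",
  agent_inputs: [
    { type: "AudioInput" },
  ],
  cortex_llm: {
    type: "OpenAILLM",
    config: { agent_name: "Spot", history_length: 10 },
  },
  agent_actions: [
    {
      name: "speak",
      llm_label: "speak",
      implementation: "passthrough",
      connector: "elevenlabs_tts",
    },
  ],
}
```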