POST /agent/robotic_team/runs

Example request:

curl --request POST \
  --url https://api.openmind.org/api/core/agent/robotic_team/runs \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <api-key>' \
  --data '{
  "model": "gemini-2.0-flash",
  "message": "Analyze the market trends for renewable energy",
  "response_model": {}
}'

Example response:

{
  "content": {},
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 123,
    "total_tokens": 123
  },
  "model": "<string>"
}

OpenMind provides a multi-agent endpoint that allows you to run multiple agents in parallel. This is useful for tasks that require collaboration between different agents or when you want to leverage the strengths of different agents for a single task.
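
As a minimal sketch, here is one way to call this endpoint from Python with the requests library. The OM_API_KEY environment variable and the printed fields are illustrative assumptions; the URL, headers, and body match the curl example above.

import os
import requests

# Assumes the OpenMind API key is stored in the OM_API_KEY environment variable.
API_KEY = os.environ["OM_API_KEY"]

payload = {
    "model": "gemini-2.0-flash",
    "message": "Analyze the market trends for renewable energy",
}

resp = requests.post(
    "https://api.openmind.org/api/core/agent/robotic_team/runs",
    headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    json=payload,
    timeout=120,
)
resp.raise_for_status()

run = resp.json()
print(run["model"])                    # model used for the run
print(run["usage"]["total_tokens"])    # total tokens consumed
print(run["content"])                  # the agents' response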

Authorizations

x-api-key
string
header
required

Body

application/json
message
string
required

The message to be processed by the agent

Example:

"Analyze the market trends for renewable energy"

model
enum<string>
default:gemini-2.0-flash

The model to use for the agent run

Available options:
gpt-4o,
gpt-4o-mini,
gemini-2.0-flash
Example:

"gemini-2.0-flash"

response_model
object

Optional JSON schema for structuring the agent's response
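
For instance, a request can pass a JSON-Schema-style object here to constrain the output shape. A hedged sketch, continuing the Python example above: the summary and key_trends fields are purely illustrative assumptions, and the exact schema dialect the endpoint accepts is not specified in this reference.

# Illustrative only: field names and schema dialect are assumptions.
structured_payload = {
    "model": "gemini-2.0-flash",
    "message": "Analyze the market trends for renewable energy",
    "response_model": {
        "type": "object",
        "properties": {
            "summary": {"type": "string"},
            "key_trends": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["summary", "key_trends"],
    },
}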

Response

200
application/json
Successful agent run response
content
object
required

The response content from the agent. Its structure depends on the response_model, if one was provided (see the sketch at the end of this section)

usage
object
required

Metrics about the agent run and resources used

model
string
required

The model used for the agent run
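
If a response_model was supplied in the request, content should mirror that schema; otherwise it holds the agents' unstructured output. A short sketch of reading the response from the earlier Python example, where the field names follow the illustrative schema in the Body section and are assumptions rather than documented guarantees:

content = run["content"]

# Structured run, using the illustrative schema from the Body section:
# print(content["summary"])
# for trend in content["key_trends"]:
#     print("-", trend)

# Usage metrics reported for the run:
usage = run["usage"]
print(usage["prompt_tokens"], usage["completion_tokens"], usage["total_tokens"])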