Combining Inputs, an LLM, and Outputs to create a smart, engaging toy.
This example takes two inputs, emotional state and voice, sends them to an LLM, and produces speech outputs and physical movements. The overall behavior of the system is configured in /config/cubly.json5.
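A minimal sketch of that flow is shown below. Every function here is a hypothetical stub standing in for a real plugin or connector that cubly.json5 wires together; none of these names come from the actual codebase.

```python
# Hypothetical stubs: each function stands in for a real plugin or
# connector configured in cubly.json5.

def read_face_emotion() -> str:
    return "happy"  # real input: webcam face/emotion detection

def read_voice_input() -> str:
    return "hello Cubly"  # real input: speech transcribed by ASR

def query_llm(emotion: str, utterance: str) -> tuple[str, str]:
    # real system: both inputs go to an LLM, which picks a reply and a motion
    return f"You seem {emotion}. You said: {utterance}", "wave"

def speak(text: str) -> None:
    print(f"[speak] {text}")  # real output: TTS to the laptop speaker

def move(motion: str) -> None:
    print(f"[move] {motion}")  # real output: serial commands to an Arduino

emotion = read_face_emotion()
utterance = read_voice_input()
reply, motion = query_llm(emotion, utterance)
speak(reply)
move(motion)
```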
Run
You should see your webcam light turn on and Cubly should speak to you from your default laptop speaker. You can see what is happening in the WebSim simulator window.
Face detection uses cv2.CascadeClassifier(haarcascade_frontalface_default). See /inputs/plugins/webcam_to_face_emotion.
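As a rough sketch of what this detection step looks like in plain OpenCV (the actual plugin adds emotion estimation on top; the webcam index and detection parameters below are assumptions):

```python
import cv2

# Load OpenCV's bundled Haar cascade for frontal faces
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)  # 0 = default webcam (assumption)
ok, frame = cap.read()
cap.release()

if ok:
    # Haar cascades operate on grayscale images
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"Detected {len(faces)} face(s)")
```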
Voice inputs are transcribed by the automatic speech recognition (ASR) plugin. See /inputs/plugins/asr.
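A minimal sketch of microphone transcription, using the third-party speech_recognition package as a stand-in; the plugin's actual ASR backend may be entirely different:

```python
import speech_recognition as sr  # assumption: not necessarily the plugin's backend

recognizer = sr.Recognizer()
with sr.Microphone() as source:  # requires PyAudio
    recognizer.adjust_for_ambient_noise(source)
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)  # Google Web Speech API
    print(f"You said: {text}")
except sr.UnknownValueError:
    print("Could not understand audio")
```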
Speech outputs are generated by the text-to-speech (TTS) connector. See /actions/speak/connector/tts.
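For illustration, here is a local TTS sketch using pyttsx3; this is an assumption for the sake of a runnable example, and the real connector may call a cloud TTS service instead:

```python
import pyttsx3  # assumption: the real connector may use a different TTS backend

engine = pyttsx3.init()
engine.setProperty("rate", 160)  # speaking rate in words per minute
engine.say("Hello, I am Cubly!")
engine.runAndWait()  # blocks until speech playback finishes
```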
Physical movement commands are sent over a serial port (for example, COM1 on Windows), flowing to a connected Arduino, which can then generate servo commands. See /actions/move_serial_arduino/connector/serial_arduino. On macOS, you can list candidate serial ports with ls /dev/cu.usb*. If you do not specify your computer's serial port, the example will provide logging data that simulates what it would send.
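The sketch below shows that fallback behavior with pyserial; the function name, baud rate, and command format are assumptions for illustration, not the connector's actual protocol:

```python
import logging
import serial  # pyserial

logging.basicConfig(level=logging.INFO)

def send_servo_command(command: str, port: str | None = None, baud: int = 9600) -> None:
    """Write a command to the Arduino, or log it when no port is configured."""
    if port is None:
        # No serial port specified: log what would have been sent
        logging.info("simulated serial write: %r", command)
        return
    with serial.Serial(port, baud, timeout=1) as ser:
        ser.write(command.encode("ascii"))

# Without a port, this only logs; pass e.g. "COM1" or a /dev/cu.usb* device to send for real
send_servo_command("servo:90\n")
```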