# Interactive Desktop Companion Robot for Stress Relief

TA: Haocheng Bill Yang

Documents: design_document1.pdf, proposal1.pdf
# Team
- Jiajun Gao (jiajung3)
- Yu-Chen Shih (ycshih2)
- Zichao Wang (zichao3)
# Problem
Students and office workers often spend extended periods working at desks, leading to mental fatigue, stress, and reduced focus. While mobile applications, videos, or music can provide temporary relief, they often require users to shift attention away from their primary tasks and lack a sense of physical presence. Static desk toys also fail to maintain long-term engagement because they do not adapt to user behavior or provide meaningful interaction.
There is a need for an interactive, physically present system that can provide short, low-effort interactions to help users relax without becoming a major distraction. Such a system should be compact, safe for desk use, and capable of responding naturally to user input.

# Solution
We propose an interactive desktop companion robot designed to reduce stress and boredom through voice interaction, expressive feedback, and simple physical motion. The robot has a compact, box-shaped form factor suitable for desk environments and can move using a tracked or differential-drive base. An ESP32-based controller coordinates audio processing, networking, control logic, and hardware interfaces.
The robot supports voice wake-up, natural language conversation using a cloud-based language model, and speech synthesis for verbal responses. Visual expressions are displayed using a small screen or LED indicators to reflect internal states such as listening, thinking, or speaking. Spoken commands can also trigger physical actions, such as rotating, moving closer, or changing expressions. By combining audio, visual, and physical interaction, the system creates an engaging yet lightweight companion that fits naturally into a desk workflow.
# Solution Components
## Subsystem 1: Voice Interaction and Audio Processing
This subsystem enables natural voice-based interaction between the user and the robot. It performs wake-word detection locally and streams audio data to a remote server for speech recognition and response generation. The subsystem also handles audio playback and interruption control.
Audio data is captured using a digital microphone, encoded, and transmitted over a network connection. Responses from the server are received as audio streams and played through an onboard speaker. Local wake-word detection ensures responsiveness and reduces unnecessary network usage.
Components:

• ESP32-S3 microcontroller with PSRAM
• ESP32-S3 integrated Wi-Fi module
• I2S digital microphone (INMP441 or equivalent)
• I2S audio amplifier (MAX98357A)
• 4Ω or 8Ω speaker
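To keep the Wi-Fi link quiet until the local wake word fires, the firmware can gate audio streaming behind a small interaction state machine. The sketch below is an illustrative assumption, not a fixed design: the state and event names are ours, and on the ESP32-S3 the transitions would be driven by the wake-word engine, the server connection, and the I2S playback task.

```c
/* Hypothetical interaction state machine gating audio streaming.
 * Microphone data is sent to the server only while LISTENING, so
 * wake-word detection stays local and network usage stays low. */
#include <stdbool.h>

typedef enum { ST_IDLE, ST_LISTENING, ST_THINKING, ST_SPEAKING } session_state_t;

typedef enum {
    EV_WAKE_WORD,      /* local wake-word engine fired */
    EV_UTTERANCE_END,  /* end of user speech detected */
    EV_RESPONSE_READY, /* server response audio has started arriving */
    EV_PLAYBACK_DONE,  /* speaker finished the response */
    EV_USER_INTERRUPT  /* user barges in while the robot is speaking */
} session_event_t;

/* Advance the session by one event; unrelated events keep the state. */
session_state_t session_step(session_state_t s, session_event_t ev)
{
    switch (s) {
    case ST_IDLE:      return (ev == EV_WAKE_WORD)      ? ST_LISTENING : s;
    case ST_LISTENING: return (ev == EV_UTTERANCE_END)  ? ST_THINKING  : s;
    case ST_THINKING:  return (ev == EV_RESPONSE_READY) ? ST_SPEAKING  : s;
    case ST_SPEAKING:
        if (ev == EV_PLAYBACK_DONE)  return ST_IDLE;
        if (ev == EV_USER_INTERRUPT) return ST_LISTENING; /* barge-in */
        return s;
    }
    return s;
}

/* Audio is streamed to the server only in the listening state. */
bool should_stream_audio(session_state_t s) { return s == ST_LISTENING; }
```

The barge-in transition (SPEAKING back to LISTENING) is what implements the "interruption control" mentioned above.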
## Subsystem 2: Visual Expression and User Feedback
This subsystem provides visual feedback that represents the robot’s internal state and interaction context. Visual cues improve usability and convey personality.
Different states such as idle, listening, processing, speaking, and error are represented using animations or color patterns.
Components:

• SPI LCD display (ST7789 or equivalent) or
• RGB LEDs (WS2812B or equivalent)

## Subsystem 3: Motion and Actuation
This subsystem enables controlled movement on a desk surface. The robot performs simple motions such as forward movement, rotation, and stopping based on voice commands and sensor feedback.
Motor control runs in a dedicated task to prevent interference with audio and networking functions.
Components:

• Two DC gear motors
• Dual H-bridge motor driver (TB6612FNG or equivalent)
• Optional wheel encoders
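For a differential-drive base, voice commands like "move forward" or "rotate" reduce to a linear and an angular velocity request that must be mixed into two wheel duties. A minimal sketch of that mixing, under the assumption that commands are expressed as signed percentages:

```c
/* Sketch of differential-drive mixing: a (linear, angular) request in
 * [-100, 100] becomes signed PWM duty percentages for the two motors
 * driven through the TB6612FNG. Ranges and scaling are assumptions. */

typedef struct { int left; int right; } wheel_cmd_t;

static int clamp(int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); }

wheel_cmd_t mix_drive(int linear, int angular)
{
    wheel_cmd_t c = {
        .left  = clamp(linear + angular, -100, 100),  /* positive angular: turn right */
        .right = clamp(linear - angular, -100, 100),
    };
    return c;
}
```

A pure-rotation command would use linear = 0 with a nonzero angular term, giving the wheels equal and opposite duties.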


## Subsystem 4: Power Management and Safety
This subsystem manages power distribution and ensures safe operation. The robot is battery-powered to allow untethered use on a desk. Hardware and software protections limit speed, current, and movement range.
Components:

• Lithium battery with protection circuit
• Battery charging module
• Voltage regulators (5V and 3.3V)
• Physical power switch
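One of the software protections can be a low-battery check with hysteresis, so the robot enters a reduced mode (no motion, dim LEDs) before the pack protection circuit cuts power, and does not oscillate in and out of that mode near the threshold. The millivolt thresholds below are illustrative assumptions for a single-cell lithium pack:

```c
/* Hypothetical low-battery latch with hysteresis. Thresholds are
 * illustrative values for a single-cell Li-ion pack, not measured ones. */
#include <stdbool.h>

#define VBAT_LOW_MV     3300  /* enter low-power mode below this     */
#define VBAT_RECOVER_MV 3500  /* leave it only after rising above this */

/* Returns the new low-battery flag given the current flag and the
 * latest battery voltage reading in millivolts. */
bool update_low_battery(bool currently_low, int vbat_mv)
{
    if (currently_low)
        return vbat_mv < VBAT_RECOVER_MV;  /* stay low until recovered */
    return vbat_mv < VBAT_LOW_MV;
}
```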

## Subsystem 5: Safety Sensing (Desk-Edge Detection + Obstacle Avoidance)

This subsystem prevents the robot from falling off the desk and reduces collisions with nearby objects. It continuously monitors both the surface below the robot and the space in front of the robot. When a desk edge (cliff) or obstacle is detected, this subsystem overrides motion commands and triggers an immediate safe response.

Desk-edge detection (cliff detection):
Two downward-facing distance sensors are mounted near the front-left and front-right corners. They measure the distance from the robot to the desk surface. If either sensor detects a sudden increase in distance beyond a calibrated baseline, the robot immediately stops and performs a short reverse maneuver to move away from the edge.

Obstacle avoidance:
A forward-facing distance sensor detects objects in front of the robot. If an obstacle is within a predefined safety distance, the robot stops. If the obstacle remains, the robot can optionally rotate in place to search for a clear direction before continuing motion.
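Both checks reduce to simple threshold tests on the ToF readings. The sketch below is an assumption about how those tests could look; the margin and safety distances would come from calibration, and a zero or negative reading is treated as a sensor fault and handled conservatively:

```c
/* Hypothetical threshold checks for the safety sensors. All distances
 * are in millimeters; margins and the fail-safe policy are assumptions. */
#include <stdbool.h>

/* A cliff is reported when a downward ToF reading exceeds the calibrated
 * desk baseline by more than a margin. A non-positive reading (timeout
 * or fault) is treated as a cliff so the robot fails safe. */
bool cliff_detected(int reading_mm, int baseline_mm, int margin_mm)
{
    if (reading_mm <= 0) return true;  /* sensor fault: fail safe */
    return reading_mm > baseline_mm + margin_mm;
}

/* An obstacle is reported when the forward sensor sees a valid reading
 * closer than the safety distance. */
bool obstacle_detected(int forward_mm, int safety_mm)
{
    return forward_mm > 0 && forward_mm < safety_mm;
}
```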

Control priority:
Safety sensing has the highest priority in the motion stack:

1. Desk-edge detection (highest priority)
2. Obstacle avoidance
3. User/voice motion commands (lowest priority)

Components:

• 2 × Time-of-Flight distance sensors for downward cliff detection (VL53L0X or equivalent, I2C)
• 1 × Time-of-Flight distance sensor for forward obstacle detection (VL53L0X or equivalent, I2C)
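The priority ordering above can be enforced by a small arbiter that sits between the command sources and the motor task. This is a sketch under our own naming assumptions; the safe responses (reverse on cliff, stop on obstacle) follow the behavior described earlier in this subsystem:

```c
/* Hypothetical motion arbiter implementing the priority stack:
 * cliff > obstacle > voice command. Names and the command set
 * are assumptions for illustration. */
#include <stdbool.h>

typedef enum { CMD_STOP, CMD_REVERSE, CMD_ROTATE, CMD_FORWARD } motion_cmd_t;

motion_cmd_t arbitrate(bool cliff, bool obstacle, motion_cmd_t voice_cmd)
{
    if (cliff)    return CMD_REVERSE;  /* highest priority: back away from the edge */
    if (obstacle) return CMD_STOP;     /* next: stop before contact */
    return voice_cmd;                  /* lowest: user command passes through */
}
```

Because the arbiter is the only writer to the motor task, a safety condition always overrides whatever the voice pipeline requested in the same control cycle.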

# Criterion For Success
The success of this project will be evaluated using the following high-level criteria:
1. The robot connects to a Wi-Fi network and establishes a server connection within 10 seconds of power-on.
2. The system detects a wake word and enters interaction mode within 2 seconds in a quiet environment.
3. The average end-to-end voice interaction latency is less than 5 seconds under normal network conditions.
4. At least five predefined voice commands trigger the correct robot actions with at least 90% accuracy during testing.
5. Visual feedback correctly reflects the system state in all operational modes.
6. The robot operates continuously for at least 30 minutes on battery power during active use.
7. When Wi-Fi is unavailable, the system enters a safe degraded mode without crashing or unsafe motion.
8. During a 10-minute continuous motion demonstration on a desk, the robot does not fall off the desk.
9. In an obstacle test, the robot is commanded to move forward toward a stationary obstacle (for example, a box or book) from multiple start distances for 20 trials. The robot must stop (or stop and turn) before making contact in at least 18/20 trials.
