# 84: AutoServe (Automated Room Service Bot)

TA: Po-Jen Ko
Documents: design_document1.pdf, proposal1.pdf

**Team Members:**
- Nikhil Vishnoi (nikhilv4)
- Ethan Jiang (ethanj4)
- Johan Martinez (jmart454)

**Problem**

In hotels, apartments, and dormitories, guests or residents often request small amenities such as snacks, toiletries, and chargers. Fulfilling these requests usually requires manual labor: a staff member travels long distances across hallways and between floors, which is time-consuming, inefficient, and labor-intensive. While some automated delivery robots exist, commercial solutions are expensive and often impractical for smaller deployments or for retrofitting existing buildings. There is a need for an affordable yet flexible indoor delivery system capable of autonomously transporting small items within multi-floor buildings while operating within existing infrastructure constraints.

**Solution**

We propose a small autonomous indoor delivery robot that transports items between locations in a multi-floor building such as a hotel. The robot will navigate hallways autonomously and use an elevator to travel between floors, allowing it to deliver items from a central base location, such as a lobby snack bar, to a specified destination room. Staff will monitor the robot wirelessly through a remote UI that displays delivery status updates, including when the robot is waiting in the elevator and needs staff to call the elevator from the lobby. Elevator actuation is assumed to be externally triggered by the building, as is common in real hotels, while the robot autonomously handles entering, riding, and exiting the elevator at the correct floor using sensor detection. This design choice reflects realistic constraints of existing building logistics while letting the project focus on autonomous navigation, system integration, and practicality.
ESP32-based controllers on the central unit and the navigation unit will coordinate with each other over their integrated Wi-Fi modules. Routes will be planned over a pre-built graph of the building that is optimized to avoid obstacles, and a proximity sensor will detect dynamic obstacles such as people and trigger appropriate warnings. Items will be transported in a box secured with an RFID lock that can only be opened by the recipient, for example with a hotel keycard. This system would reduce staff workload, improve response time for guests, and demonstrate how embedded robotic platforms can automate common but repetitive manual logistics tasks.
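To make the "graphed routes" idea concrete, here is a minimal sketch of how hardcoded building waypoints could be stored as a weighted graph and searched with Dijkstra's algorithm. The node names, edge distances, and graph layout are illustrative assumptions, not the team's actual map data.

```python
import heapq

# Hypothetical hallway graph: nodes are waypoints (lobby, elevator
# doors, corridor junctions, rooms); edge weights are distances in
# meters. All names and distances here are made up for illustration.
HALLWAY_GRAPH = {
    "lobby":       {"elevator_f1": 12.0},
    "elevator_f1": {"lobby": 12.0, "elevator_f3": 0.5},
    "elevator_f3": {"elevator_f1": 0.5, "hall_f3": 4.0},
    "hall_f3":     {"elevator_f3": 4.0, "room_301": 6.5, "room_302": 9.0},
    "room_301":    {"hall_f3": 6.5},
    "room_302":    {"hall_f3": 9.0},
}

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over the hardcoded waypoint graph."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Walk back through predecessors to recover the route.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]
```

A delivery command from the base unit would then reduce to sending the robot a waypoint list such as `shortest_route(HALLWAY_GRAPH, "lobby", "room_301")`.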


**Subsystem 1: Microcontroller Unit**

- Two ESP32 microcontrollers will be used: one for the Central Base Unit and one for the Robot Navigation Unit.
- The two microcontrollers will communicate with each other over their integrated Wi-Fi modules.

**Subsystem 2: Robot Base Unit**

- Will have USB keyboard input (DS_FT312D) and a display so the user can issue commands to the robot
- The display (NHD-0216KZW-AB5) will show a UI with robot status (charge level, estimated position, connection state)

**Subsystem 3: Robot Unit**

- 2 stepper motors (17ME15-1504S) to move the robot accurately over predetermined distances
- Chassis will be 3D printed or machined in the machine shop
- Motors will be driven by motor drivers (A4988SETTR-T) under MCU control
- Display (NHD-0216KZW-AB5) for robot unit to communicate with nearby people
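Driving "predetermined distances" with steppers comes down to converting a commanded distance into a step-pulse count for the A4988. A minimal sketch of that conversion, assuming a standard 1.8° (200 steps/rev) NEMA 17 motor, 1/16 microstepping, and an 80 mm drive wheel (the microstep setting and wheel size are assumptions, not final design values):

```python
import math

STEPS_PER_REV = 200    # assumes a standard 1.8-degree stepper
MICROSTEPS    = 16     # assumed A4988 microstep configuration (MS1-MS3 pins)
WHEEL_DIAM_M  = 0.08   # assumed 80 mm drive wheel diameter

def steps_for_distance(distance_m):
    """Convert a commanded travel distance into A4988 step pulses."""
    wheel_circumference = math.pi * WHEEL_DIAM_M
    revolutions = distance_m / wheel_circumference
    return round(revolutions * STEPS_PER_REV * MICROSTEPS)
```

One wheel revolution (about 0.25 m of travel here) corresponds to 3200 microstep pulses under these assumptions; the MCU would emit that many pulses on the driver's STEP pin.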

**Subsystem 4: Navigation and Sensing**
- Position tracking sensor (TLV493DA1B6HTSA2) to track the robot's x, y, z motion data. Map and floor data will be hardcoded into the robot and used to verify that the stepper motors are moving the robot as commanded.
- Proximity sensors (TSSP40) let the MCU detect when the robot is blocked by an obstacle; if it is boxed in, it will request help from the Base Unit.
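The "verify the steppers are moving correctly" check can be sketched as a simple cross-check between commanded steps and sensor-measured displacement, flagging wheel slip when they disagree. The steps-per-meter constant and slip tolerance below are placeholder assumptions:

```python
STEPS_PER_METER  = 12732  # assumed microsteps per meter from wheel geometry
SLIP_TOLERANCE_M = 0.05   # assumed allowed mismatch before declaring a fault

def check_motion(commanded_steps, measured_displacement_m):
    """Compare displacement implied by step count against the
    displacement reported by the position sensor; flag slippage."""
    expected_m = commanded_steps / STEPS_PER_METER
    error = abs(expected_m - measured_displacement_m)
    status = "ok" if error <= SLIP_TOLERANCE_M else "slip_detected"
    return status, error
```

On a "slip_detected" result the navigation unit could stop, re-localize against the hardcoded map, or report to the Base Unit for help.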

**Subsystem 5: Robot Charging Station**
- The robot will monitor its battery charge and inform the Central Base Unit when it is low on power.
- When a delivery is completed and the robot is idle, it will dock into a base charging station that recharges the lithium-ion batteries through a charge management controller (MCP73811).

**Subsystem 6: Security Subsystem**
- RFID based lock system for storing delivered items that opens for residents (Either from base station or with smart lock)
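One plausible flow for the lock, sketched below under the assumption that the base station provisions the lock with the recipient's card UID at dispatch time (the UIDs and the `RfidLock` interface are hypothetical):

```python
class RfidLock:
    """Toy model of the delivery-box lock: opens only for the card UID
    provisioned by the base station for the current delivery."""

    def __init__(self):
        self.authorized_uid = None
        self.locked = True

    def provision(self, uid):
        """Called by the base station when a delivery is dispatched."""
        self.authorized_uid = uid
        self.locked = True

    def present_card(self, uid):
        """Unlock if the presented card matches; return unlocked state."""
        if self.authorized_uid is not None and uid == self.authorized_uid:
            self.locked = False
        return not self.locked
```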

**Criteria for Success**
- The central base station can send commands to the navigational robot unit which is able to use predefined data to go to programmed/stored locations accurately.
- The navigational unit is able to identify its location, calculate the route to its next destination, and then move precisely towards it and stop correctly.
- The robot unit can avoid obstacles and report status messages back to the central base station.
- The robot unit can operate through the elevator and can tell when it is at the right floor and when to exit.

# Smart Glasses for the Blind

Siraj Khogeer, Abdul Maaieh, Ahmed Nahas

Featured Project

# Team Members

- Ahmed Nahas (anahas2)

- Siraj Khogeer (khogeer2)

- Abdulrahman Maaieh (amaaieh2)

# Problem:

The underlying motive behind this project is the heart-wrenching fact that, with all the developments in science and technology, the visually impaired have been left with nothing but a simple white cane; a stick among today’s scientific novelties. Our overarching goal is to create a wearable assistive device for the visually impaired by giving them an alternative way of “seeing” through sound. The idea revolves around glasses/headset that allow the user to walk independently by detecting obstacles and notifying the user, creating a sense of vision through spatial awareness.

# Solution:

Our objective is to create smart glasses/headset that allow the visually impaired to ‘see’ through sound. The general idea is to map the user’s surroundings through depth maps and a normal camera, then map both to audio that allows the user to perceive their surroundings.

We’ll use two low-power I2C ToF imagers to build a depth map of the user’s surroundings, as well as an SPI camera for ML features such as object recognition. These cameras/imagers will be connected to our ESP32-S3 WROOM, which downsamples some of the input and offloads it to our phone app/webpage for heavier processing (for object recognition, as well as for the depth-map-to-sound algorithm, which is quite complex and builds on research papers we’ve found).
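The downsampling step mentioned above can be as simple as average-pooling each depth frame before it is sent over Wi-Fi. A minimal sketch (frame layout and pooling factor are assumptions; the actual firmware would operate on raw sensor buffers):

```python
def downsample_depth(frame, factor=2):
    """Average-pool a square depth frame (list of rows of distances)
    by `factor` in each dimension, e.g. 8x8 -> 4x4 when factor=2."""
    n = len(frame)
    out = []
    for r in range(0, n, factor):
        row = []
        for c in range(0, n, factor):
            block = [frame[r + dr][c + dc]
                     for dr in range(factor) for dc in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```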

---

# Subsystems:

## Subsystem 1: Microcontroller Unit

We will use an ESP32 as the MCU, mainly for its Wi-Fi capabilities as well as its processing power, which is sufficient to connect and manage our sensor and audio subsystems.

- ESP32-S3 WROOM : https://www.digikey.com/en/products/detail/espressif-systems/ESP32-S3-WROOM-1-N8/15200089

## Subsystem 2: ToF Depth Imagers/Cameras Subsystem

This subsystem is the main sensor subsystem for getting the depth map data. This data will be transformed into audio signals to allow a visually impaired person to perceive obstacles around them.

There will be two ToF sensors to provide a wide FOV, connected to the ESP32 MCU through two I2C connections. Each sensor provides an 8x8 pixel depth array over a 63° FOV.

- x2 SparkFun Qwiic Mini ToF Imager - VL53L5CX: https://www.sparkfun.com/products/19013
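With the two imagers mounted side by side, the firmware can stitch each pair of 8x8 frames into one wider map before further processing. A sketch under the simplifying assumption that the two fields of view abut with negligible overlap:

```python
def merge_depth_maps(left_frame, right_frame):
    """Concatenate two 8x8 ToF frames row-wise into one 8x16 map.
    Assumes the imagers are mounted side by side with negligible
    FOV overlap; a real build would calibrate the seam."""
    assert len(left_frame) == len(right_frame)
    return [l_row + r_row for l_row, r_row in zip(left_frame, right_frame)]
```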

## Subsystem 3: SPI Camera Subsystem

This subsystem will allow us to capture a color image of the user’s surroundings. A captured image will allow us to implement egocentric computer vision, processed on the app. We will implement one ML feature as a baseline for this project (one of: scene description, object recognition, etc.). This feedback is only given when prompted by a button on the PCB: when the user clicks the button on the glasses/headset, they hear a description of their surroundings. We therefore don’t need real-time object recognition — frame rates as low as 1 fps suffice — unlike the depth maps, which do need lower latency. This is exciting because such an input allows for other ML features/integrations that can scale well beyond this course.

- x1 Mega 3MP SPI Camera Module: https://www.arducam.com/product/presale-mega-3mp-color-rolling-shutter-camera-module-with-solid-camera-case-for-any-microcontroller/

## Subsystem 4: Stereo Audio Circuit

This subsystem is in charge of converting digital audio from the ESP32 and app into stereo output for earphones or speakers. This includes digital-to-analog conversion and voltage clamping/regulation. We may also add adjustable volume through a potentiometer.

- DAC Circuit

- 2*Op-Amp for Stereo Output, TLC27L1ACP:https://www.ti.com/product/TLC27L1A/part-details/TLC27L1ACP

- SJ1-3554NG (AUX)

- Connection to speakers/earphones https://www.digikey.com/en/products/detail/cui-devices/SJ1-3554NG/738709

- Bone conduction Transducer (optional, to be tested)

- Will allow for a bone conduction audio output, easily integrated around the ear in place of earphones, to be tested for effectiveness. Replaced with earphones otherwise. https://www.adafruit.com/product/1674
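The stereo output is what lets the depth-map-to-sound algorithm convey left/right position. One standard way to place a sound at an obstacle's bearing is an equal-power pan law; the sketch below assumes the combined ~126° FOV of the two imagers and is only an illustration of the idea, not the team's algorithm:

```python
import math

def pan_gains(azimuth_deg, fov_deg=126.0):
    """Equal-power pan: map obstacle azimuth (negative = left of user)
    to (left, right) channel gains with constant total power."""
    half_fov = fov_deg / 2
    x = max(-1.0, min(1.0, azimuth_deg / half_fov))  # clamp to [-1, 1]
    theta = (x + 1) * math.pi / 4                    # map to [0, pi/2]
    return math.cos(theta), math.sin(theta)          # (left, right)
```

An obstacle dead ahead gets equal gains; one at the left edge of the FOV plays only in the left ear, and the left²+right² power stays constant so perceived loudness does not change with direction.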

## Subsystem 5: App Subsystem

- React Native App/webpage, connects directly to ESP

- Does the heavy processing for the spatial awareness algorithm as well as object recognition or scene description algorithms (using libraries such as yolo, opencv, tflite)

- Sends audio output back to ESP to be outputted to stereo audio circuit
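Alongside direction, the spatial-awareness algorithm needs a distance cue; mapping nearer obstacles to louder audio is one simple choice. A sketch of such a mapping, where the 0.3 m–4 m working range is an assumed envelope rather than a measured sensor limit:

```python
def loudness_from_distance(distance_m, d_min=0.3, d_max=4.0):
    """Map obstacle distance to an amplitude in [0, 1]: nearer is
    louder. d_min/d_max are an assumed working range, not specs."""
    if distance_m <= d_min:
        return 1.0
    if distance_m >= d_max:
        return 0.0
    return (d_max - distance_m) / (d_max - d_min)
```

Combined with a pan law, each depth-map cell would contribute a tone whose loudness encodes range and whose stereo balance encodes bearing — enough to satisfy the 1 m vs 3 m discrimination criterion below.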

## Subsystem 6: Battery and Power Management

This subsystem is in charge of power delivery, voltage regulation, and battery management for the rest of the circuit and devices. It takes in the unregulated battery voltage and steps it up or down according to each component's needs.

- Main Power Supply

- Lithium Ion Battery Pack

- Voltage Regulators

- Linear, Buck, Boost regulators for the MCU, Sensors, and DAC

- Enclosure and Routing

- Plastic enclosure for the battery pack

---

# Criterion for Success

**Obstacle Detection:**

- Be able to identify the difference between an obstacle that is 1 meter away vs an obstacle that is 3 meters away.

- Be able to differentiate between obstacles on the right vs the left side of the user

- Be able to perceive an object moving from left to right or right to left in front of the user

**MCU:**

- Offload data from the sensor subsystems onto the application through a Wi-Fi connection.

- Control and receive data from sensors (ToF imagers and SPI camera) using SPI and I2C

- Receive audio from application and pass onto DAC for stereo out.

**App/Webpage:**

- Successfully connects to the ESP through Wi-Fi or BLE

- Processes data (ML and depth map algorithms)

- Process image using ML for object recognition

- Transforms depth map into spatial audio

- Sends audio back to ESP for audio output

**Audio:**

- Have working stereo output on the PCB for use in wired earphones or built in speakers

- Have bluetooth working on the app if a user wants to use wireless audio

- Potentially add hardware volume control

**Power:**

- Be able to operate the device using battery power. Safe voltage levels and regulation are needed.

- 5.5V Max
