# Projects
# Project 1: Sound Asleep

Documents: presentation1.pdf
**Team Members:**
- Adam Tsouchlos (adamtt2)
- Ambika Mohapatra (ambikam2)

# Problem

Poor sleep can have serious effects on your health, increasing the chances of conditions like poor mental health, kidney failure, diabetes, and more. Slow wave sleep declines with age, yet it is considered the most restorative stage of sleep: it is important for improving immune function, memory consolidation, and emotional regulation. Recent literature discusses using auditory stimulation during sleep to prolong slow wave sleep for better overall physical and mental health. Other devices use EEG technology, but most provide no auditory stimulation, and the rest are reported to be very uncomfortable.

# Solution

**Sound Asleep**: a non-invasive wearable that transmits EEG data to a companion app, which then interacts with the user's Bluetooth device to deliver precisely timed auditory stimulation. The user can choose their own Bluetooth device for increased comfort during sleep.

# Solution Components

# Subsystem 1 – EEG Acquisition and Wearable Hardware

This subsystem is responsible for acquiring the EEG signals.

- EEG leads optimized for overnight use.
- Wearable headband or soft cap to keep electrodes in place throughout the night.
- Low-noise amplification and filtering circuitry to ensure signals are usable for real-time processing.
- Small rechargeable battery to power sensors and wireless transmission.

# Subsystem 2 – Wireless Transmission and Power

This subsystem ensures EEG data can be reliably sent to the processing unit.

- Bluetooth Low Energy (BLE) or Wi-Fi module for continuous data transfer.
- Onboard microcontroller to digitize EEG signals and handle wireless protocols.
- Battery management system for safe charging and overnight operation.

# Subsystem 3 – Sleep Stage Classification and Signal Processing

This subsystem processes EEG data in real time to detect sleep stages and identify slow wave activity.

- Algorithms for sleep staging (NREM, REM, wake) using EEG features.
- Slow wave detection algorithms trained/tested on pre-labeled EEG datasets.
- Closed-loop timing logic to sync auditory stimulation with ongoing slow waves.
- Possible algorithms to be used:
  - **YASA slow-wave detection:** https://github.com/raphaelvallat/yasa/blob/master/notebooks/05_sw_detection.ipynb
  - **CoSleep GitHub project:** https://github.com/Frederik-D-Weber/cosleep
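As a feel for the YASA option, the sketch below runs its slow-wave detector over a single EEG channel. The data file, sampling rate, and printed columns are illustrative placeholders; the real pipeline would run on streaming data rather than a full-night recording.

```python
import numpy as np
import yasa

sf = 100                          # sampling rate in Hz (assumed)
data = np.loadtxt("eeg_cz.txt")   # hypothetical single-channel recording, in uV

# Detect slow waves with YASA's default amplitude/duration criteria.
sw = yasa.sw_detect(data, sf)
if sw is not None:                # sw_detect returns None if nothing is found
    events = sw.summary()         # DataFrame with per-event timing/amplitude
    print(events[["Start", "Duration", "PTP"]].head())
```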
# Subsystem 4 – Auditory Stimulation Delivery (and App User Interface)

This subsystem delivers pink noise bursts at intervals during SWS.

- Mobile (or desktop) app triggers sound output through the user's paired Bluetooth device (primary option as of now).
- Sound customization features via app for intensity, duration, frequency, and comfort.
- Sleep session dashboard showing nightly summaries (total sleep, time in slow wave sleep, stimulation events delivered).

# Criterion for Success

# Hardware

- Wearing the EEG device is considered comfortable by users.
- EEG device stays attached during a full night of sleep.
- EEG readings are accurately transmitted to the software.

# Software

- EEG readings are correctly detected and processed by the app.
- Slow wave sleep stage is accurately identified.
- Auditory stimulation is transmitted to the user's Bluetooth device.

# Outcomes

- User has increased slow wave sleep duration and amplitude.
- Improvement in memory test after sleeping with the device compared to without it.

# References

- Ngo et al. (2013). Auditory closed-loop stimulation of the sleep slow oscillation enhances memory. https://pubmed.ncbi.nlm.nih.gov/23583623/
- Bo-Lin Su et al. (2015). Detecting slow wave sleep using a single EEG signal channel. https://pubmed.ncbi.nlm.nih.gov/25637866/
# Project 2: Autonomous Car for WiFi Mapping
# Team Members:
- Ben Maydan (bmaydan2)
- Josh Powers (jtp6)
- Avi Winick (awinick2)

# Problem

When moving into a new apartment, house, or office, people often place their Wi-Fi modem or extender in a convenient spot without much thought. Having gone through this just last week, it made us wonder whether there was a better way to maximize Wi-Fi strength across a home. The way most people test their Wi-Fi is to walk into a room, open a speed-test website, run it, and repeat room by room. This takes a lot of time, isn't very accurate, and doesn't reveal the most optimal location. We are solving the problem of automating Wi-Fi strength and speed testing in a given space: gather Wi-Fi data from an autonomous vehicle driving around a room, create a heat map of signal strength to display on a computer, analyze it, and then show the weak spots, dead zones, and so on. Some motivation for why this is a good project: it lets you find the best spot to place a Wi-Fi extender for optimal coverage, so your Zoom meetings never cut out and your Instagram reels/YouTube shorts/TikTok/Angry Birds keep playing with no issue (potentially at the same time).

# Solution

The basic idea is to do a scan of the room. The car will have a LIDAR sensor and an RSSI sensor on top that continuously scan the room for the strongest Wi-Fi signal. The LIDAR sensor ensures the car does not run into anything and lets us run a SLAM algorithm for obstacle avoidance. We use a two-scan approach to keep the Bluetooth connection from interfering with the Wi-Fi measurements. In the first scan, we manually drive the car for a few minutes, gathering LIDAR data and sending it to an onboard Raspberry Pi, which maps out the path the car should take to explore the entire room. This is significantly less driving than the autonomous part (which splits the room into strips to drive back and forth). The onboard Raspberry Pi then streams the path to the car as commands to the motor controllers; in the second scan, the car drives that path and records the Wi-Fi signal at every point in memory. Once the car is done scanning, all of the data is sent via Bluetooth back to the computer, where we visualize it and report the location of the optimal Wi-Fi signal to the user.

This naturally introduces a few subsystems:

- Location tracking subsystem that maps a location (x, y) to a Wi-Fi strength (done using SLAM), plus path generation: mapping the output of SLAM to a path the car can drive to scan the entire floor for the best Wi-Fi signal.
- LIDAR sensing + Bluetooth streaming subsystem.
- Wi-Fi signal capture subsystem that records the Wi-Fi signal strength at the car's current location.
- Building the car with omnidirectional wheels and motor controllers.

# Solution Components

## SLAM subsystem for location tracking + generating a path for the car to drive

While we collect the LIDAR data, we pass it to SLAM using an ESP32-S3 module. This module also has Wi-Fi and Bluetooth capabilities, so it can easily transfer data to the computer for the user to view the heat map. To run the SLAM algorithm, we will offload the computation to a Raspberry Pi 5, since it has a much more capable processor for 2D SLAM.
The ESP32 is responsible for cleaning the LIDAR data and sending commands to the motor controller to move the car. A separate algorithm, run on the Raspberry Pi since it is very compute-heavy, computes the optimal path for the car to drive while gathering Wi-Fi data.

## LIDAR Sensor

The LIDAR subsystem allows the car to navigate through the room. It scans the room to detect walls and furniture, and helps avoid collisions by measuring the distance to obstacles. It supports the SLAM subsystem in mapping the environment so the map can be overlaid with the Wi-Fi signal strength in all areas. This will use the RPLIDAR A1M8, connected to the ESP32 so the LIDAR point cloud can be stored in flash for the SLAM algorithm.

## Wi-Fi Sensor (RSSI)

The Received Signal Strength Indicator (RSSI) is reported by the ESP32 module, which we place on top of the car; it measures the Wi-Fi signal at a location and outputs a strength in dBm (decibel-milliwatts). The ESP32 collects this data and transmits it through its Bluetooth module to the computer, which displays the data and runs a simple linear scan to find the (x, y) location with the strongest Wi-Fi signal. This is a very simple algorithm, but we want the data on the computer so the user can visualize it. RSSI measurement is a built-in capability of the ESP32 chip.
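As a rough illustration of the measurement itself, a MicroPython sketch on the ESP32 might sample RSSI like this. The SSID and sampling interval are placeholders; the production firmware would tag each sample with the SLAM (x, y) estimate before streaming it out.

```python
# MicroPython on ESP32 (sketch): sample RSSI for a target access point.
import network
import time

TARGET_SSID = b"HomeNetwork"   # hypothetical network under test

wlan = network.WLAN(network.STA_IF)
wlan.active(True)

def sample_rssi():
    # scan() returns tuples: (ssid, bssid, channel, RSSI, security, hidden)
    for ssid, _bssid, _chan, rssi, _sec, _hidden in wlan.scan():
        if ssid == TARGET_SSID:
            return rssi            # dBm; more negative = weaker
    return None

while True:
    print(sample_rssi())           # would be paired with (x, y) and streamed
    time.sleep(1)
```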
## Car + Mecanum Wheels

We will 3D print a car and omnidirectional wheels. The car will carry the LIDAR sensor as well as the RSSI and SLAM modules. The ESP32 module is placed on top and sends commands to the motor controllers for turning, forward, and backward motion. There are different levels for the components: the chassis holds the wheels, the next level up holds the PCB and motor controllers, and the topmost level holds the ESP32 and the LIDAR sensor. The ESP32 provides the RSSI measurements, and the LIDAR sensor gathers the range data for SLAM.

# Criterion For Success

Before the car can drive autonomously, each wheel must be controllable by a program individually; each mecanum wheel needs to function properly to enable multidirectional motion. This consists of connecting the ESP32 module to the PCB, which connects to the motor controllers, which connect to the motor of each wheel. We need our 3D-printed car with omnidirectional wheels to be controllable manually from a computer by sending driving commands (using arrow keys) over Bluetooth, and to verify it can move in all directions around a room. After testing the driving functionality, the car should be able to follow a hard-coded path to map out the Wi-Fi of a room.

When the car can be driven autonomously, we need to use the LIDAR sensor to map out a path for the car to follow through the room without a predetermined path. This uses the LIDAR sensor and offloads the heavy SLAM computation to a Raspberry Pi. The pipeline is: gather LIDAR data -> run SLAM on the Raspberry Pi -> use the returned pose and grid-based map to run a path-planning algorithm for the car -> start the path. The criterion for success here is the algorithm we create to generate a path from the grid-based map returned by SLAM.

Use an ESP32 chip (with its RSSI capability) to detect Wi-Fi strength and send it over Bluetooth to a computer, which processes the data and generates a heat map of the given space. Optionally, give the user advice on where to move the modem to maximize either strength in a certain area or best general coverage.

Write an algorithm that takes all the Wi-Fi heat map data and returns an optimal spot to place a Wi-Fi extender. This is a complicated algorithm, since Wi-Fi data can be interpolated (for example, if the strength at x = 1.0 is strong and the strength at x = 0.0 is weak, then the strength at x = 0.5 is likely medium). Given a 2D grid of signal strengths, there are effectively infinite candidate spots for an extender, because you can interpolate anywhere (see the sketch below).

Write an algorithm to find local dead spots using the heat map. This can run on the computer to visualize results for the user, and onboard the ESP32 or Raspberry Pi to avoid mapping sections of the room that are clearly dead spots. Running it onboard could save a lot of time by skipping large sections of the room, but it is computationally intensive, since it is essentially gradient ascent.
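To make the interpolation idea concrete, here is a minimal sketch, assuming samples arrive as (x, y, dBm) tuples, that grids them with SciPy and picks the strongest interpolated cell. The sample array and grid resolution are placeholders.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical samples: (x, y) position in meters and measured RSSI in dBm.
samples = np.array([[0.0, 0.0, -70], [3.0, 0.0, -55],
                    [0.0, 4.0, -62], [3.0, 4.0, -48]])
points, values = samples[:, :2], samples[:, 2]

# Interpolate onto a 50x50 grid covering the sampled area.
xi = np.linspace(points[:, 0].min(), points[:, 0].max(), 50)
yi = np.linspace(points[:, 1].min(), points[:, 1].max(), 50)
gx, gy = np.meshgrid(xi, yi)
heatmap = griddata(points, values, (gx, gy), method="linear")

# Strongest interpolated cell = candidate extender/modem location.
iy, ix = np.unravel_index(np.nanargmax(heatmap), heatmap.shape)
print(f"best spot ~ ({xi[ix]:.2f} m, {yi[iy]:.2f} m), {heatmap[iy, ix]:.1f} dBm")
```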
# Project 3: Follow-Me Cart: App-Controlled Smart Assistant

Team: Alex Huang, Jiaming Gu, Shi Qiao
Here is the continuation of the previous post, split due to word limits:

## Subsystem 3: Mobile App

Purpose: Allow customers to control the cart via app.

Features:
1. BLE (Bluetooth Low Energy) pairing with the Raspberry Pi for secure identification.
2. Enable/disable follow-me mode.
3. Adjust the following distance; receive notifications when the cart is too far from the user.

Components:
1. Customized Android app.
2. BLE/Wi-Fi for control and ID verification.

## Subsystem 4: Drive Subsystem

Purpose: Drive the cart.

Components:
1. 12V DC gear motors.
2. Chassis: 2-wheel drive with caster support for balance.
3. Payload capacity: 5–10 kg (scaled for safety and feasibility).
4. Power system: 12V Li-ion battery pack with buck converters for 5V (Pi) and 3.3V (sensors/ESP32).

# Criterion For Success

1. The cart follows the user within 1–2 m, with >90% accuracy in aisle-like environments.
2. The mobile app connects to the cart within 5 seconds, responds to any user command within 2 seconds, and allows the user to start/stop at any time and adjust parameters accordingly.
3. The cart follows only when both the paired phone and marker/ID are detected, preventing false tracking.
4. The cart stops for obstacles >10 cm wide within 1 m.
5. The cart speeds up when it is far from the user and slows down as it gets near, avoiding obstacles smoothly throughout (see the sketch below).
6. The cart safely carries 5–10 kg without tipping.
7. Max speed capped at ~1.5 m/s (≈3.3 mph).
8. Operates for at least 1 hour per charge at walking speed (0.5–1.5 m/s).
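One simple way to get criterion 5's behavior is a proportional controller on the measured following distance, clamped to the 1.5 m/s cap. The gain and setpoint below are placeholder tuning values.

```python
TARGET_DIST_M = 1.5   # desired following distance (middle of the 1-2 m band)
GAIN = 0.8            # m/s of speed per meter of error (placeholder tuning)
MAX_SPEED = 1.5       # hard cap from criterion 7, m/s

def follow_speed(distance_m, obstacle_within_1m):
    """Speed command from measured user distance; stop for close obstacles."""
    if obstacle_within_1m:
        return 0.0                       # criterion 4: stop for obstacles
    error = distance_m - TARGET_DIST_M   # positive when the user pulls ahead
    return min(max(GAIN * error, 0.0), MAX_SPEED)

# e.g. follow_speed(3.0, False) -> 1.2 m/s; follow_speed(1.2, False) -> 0.0
```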
# Project 4: Champaign MTD Bus Tracker Map
# Team Members:
- Amber Wilt (anwilt2)
- Daniel Vlassov (dvlas2)
- Ziad Aldohaim (ziada2)

# Problem

Champaign has a large and complex bus system run by the MTD. It can be hard for students to know when buses are coming while they are inside buildings such as the ECEB, since bus times are only displayed at the stops. Furthermore, buses can run late or early, causing students to miss their bus or arrive at their destination behind schedule.

# Solution

To fix this, we will design a large display that shows real-time locations of all buses (color-coded using RGB LEDs) in the surrounding campus area. Students in buildings can use it to easily visualize where the bus they want to take is currently located, making it easier to time when to leave a classroom and when to expect their ride. The display will update locations approximately every 30 seconds and will light up every LED along a bus route every few minutes, making it easier to visualize which route to take. The system will also include various light settings (theme/brightness).

# Solution Components

The system consists of three main subsystems: the LED matrix, the controller, and the power supply.

## Subsystem 1 - LED Matrix

The LED matrix will be located on a large PCB or 3D-printed map of the city (cost dependent). This subsystem consists of addressable LEDs and photoresistors to automatically adjust LED intensity, all driven by the microcontroller (to indicate bus positions).

## Subsystem 2 - Microcontroller

The microcontroller will use Wi-Fi to access the MTD API for real-time bus data and will individually address each LED in the matrix. It will also control and communicate with other modules/displays in the system, such as a real-time clock or menu. The microcontroller will be an ESP32.

## Subsystem 3 - Power Supply

The power supply must provide ample power for a large number of LEDs (and the entire system). We will include a buck converter to step the supply voltage down to a level usable by the LEDs.

# Criterion for Success

To demonstrate the success of our project, we will need to prove the accuracy of the data we display (how accurate the bus timings/locations are). Additionally, we will need to show that the display is easy for a user to interpret and genuinely makes the bus system easier to use.
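A sketch of the core data path, assuming the MTD developer API exposes a JSON feed of per-vehicle latitude/longitude (the endpoint URL, key, response schema, and bounding box below are all placeholders to be checked against the real API): poll the feed, then map each coordinate into the LED grid covering campus.

```python
import requests  # on the ESP32 itself this would be MicroPython's urequests

# Placeholder endpoint/key; the real endpoint and response schema should be
# confirmed against the MTD developer documentation.
URL = "https://developer.cumtd.com/api/v2.2/json/getvehicles"
KEY = "YOUR_API_KEY"

# Bounding box of the mapped campus area (placeholder coordinates).
LAT_MIN, LAT_MAX = 40.095, 40.120
LON_MIN, LON_MAX = -88.245, -88.215
GRID_W, GRID_H = 64, 32            # LED matrix dimensions (assumed)

def led_index(lat, lon):
    """Map a vehicle coordinate to a row-major LED index, or None if off-map."""
    if not (LAT_MIN <= lat <= LAT_MAX and LON_MIN <= lon <= LON_MAX):
        return None
    x = int((lon - LON_MIN) / (LON_MAX - LON_MIN) * (GRID_W - 1))
    y = int((LAT_MAX - lat) / (LAT_MAX - LAT_MIN) * (GRID_H - 1))  # north = top
    return y * GRID_W + x

vehicles = requests.get(URL, params={"key": KEY}).json().get("vehicles", [])
for v in vehicles:
    idx = led_index(v["location"]["lat"], v["location"]["lon"])
    # ...set LED idx to the route's color via an addressable-LED driver...
```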
# Project 5: Navigation Vest Suite for People with Eye Disability
Team Members & Experiences:
- Jiwoong Jung (jiwoong3): Experienced in machine learning and some embedded programming. Worked on research and internships requiring expertise in machine learning, software engineering, web development, and app development. Some experience with embedded programming for telemetry.
- Haoming Mei (hmei7): Experienced in embedded programming and PCB design. Worked on projects such as lights, accelerometers, power converters, a high-FET board, and motor control for an RSO, involving electronics, PCB design, and programming with STM32 MCUs.
- Pump Vanichjakvong (nv22): Experienced with cloud, machine learning, and embedded programming. Internships and coursework focused on AI, ML, and cloud. Experience with telemetry and GPS systems from an RSO requiring expertise in SPI, UART, GPIOs, etc., with STM32 MCUs.

# Problem

People with eye disabilities often face significant challenges navigating in their daily lives. Currently available solutions range from white canes and guide dogs to AI-powered smart glasses, many of which are difficult to use and can cost as much as $3,000. Additional problems arise for people with disabilities, especially in crowded urban areas, including injuries from collisions with obstacles or people, or from terrain. According to the U.S. Department of Transportation's 2021 crash report, 75% of pedestrian fatalities occurred at locations that were not intersections. We therefore aim to design a navigation vest suite that helps people with eye disabilities deal with these issues.

https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/813458.pdf

# Solution

We have devised a solution to help visually impaired individuals in daily activities such as walking between two places or navigating a building with multiple obstacles. Our focus is outdoor navigation in urban areas with obstacles, terrain, and pedestrians; if time permits, we will also handle traffic and crosswalks. To achieve this, we will use three main components:

- LIDAR sensors to help the wearer with depth perception tasks
- Vibration motors to aid navigation (turning left/right)
- Magnetometer to enable more accurate GPS coordination

All of the above components feed the sensor fusion algorithm.

# Solution Components

## Subsystem 1

### Microcontroller System

We plan to use an STM32 microcontroller as the main processing unit for sensor data from the LIDAR sensors (magnetometer and GPS if time permits), object detection data from the **machine learning system**, and direction data from the navigation app (our design, on the phone). We will use this information to generate vibration in the direction the wearer should navigate.

### Power Systems

The whole system will be battery-powered by a battery module containing 5V cells. It connects to the **Microcontroller System**, which also supplies the **Machine Learning System**. We will implement the necessary power protection, buck converters, regulators, and boost converters as required per sensor or component.

- Battery module pack
- Buck converter (step-down)
- Boost converter (step-up)
- Voltage regulator
- Reverse polarity protection
- BMS

## Subsystem 2

### Navigation Locator Systems

Our navigation system consists of an app that connects directly to the Google Maps API, paired with our onboard sensors.
We plan to use a magnetometer to indicate the direction the user is facing (north, south, east, west, etc.). To pinpoint the direction the wearer needs to head, our LIDAR sensors will enable SLAM (Simultaneous Localization and Mapping) to build a map of the environment. With these systems in place, we can assist users in navigation. To deal with terrain hazards, we will use the LIDAR sensors to handle elevation changes the user needs to make.

- LIDAR
- Android app (connected to the Google Maps API)
- Magnetometer
- Vibration motors

Extra features (if time permits):
- Audio output (text-to-speech generated on the Raspberry Pi 5, sent to the microcontroller through an AUX cable)

## Subsystem 3

### Machine Learning Systems

We plan to deploy an object detection model on a 16GB Raspberry Pi 5 (already on hand) along with a Pi Camera to detect objects, signs, and people on the road; detections are fed to the microcontroller.

- Raspberry Pi 5
- Pi Camera

a) The image/video model is expected to have fewer than 5 billion parameters with convolutional layers, running on-device on the Raspberry Pi. The processing power of the Raspberry Pi is obviously limited, but we plan to accept the challenge and find ways to improve the model within the hardware constraints.

b) If subtask a) becomes arduous or too time-consuming, we can use API calls or free open-source models to process the image/video in real time when the user enables the feature. The device pairs with the phone via the Raspberry Pi's Wi-Fi chip to enable the API call. The best candidates we can think of are the YOLO family of models, the MMDetection and MMTracking toolkits, or the Detectron2 model developed by Facebook AI Research, which supports real-time camera feeds.
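As a feel for the on-device option, a minimal loop with a pretrained YOLO model from the Ultralytics package might look like the following. The model file, camera index, confidence cutoff, and hazard class list are placeholder choices, and the serial hand-off to the STM32 is only stubbed.

```python
from ultralytics import YOLO
import cv2

model = YOLO("yolov8n.pt")                  # small pretrained model (assumed choice)
HAZARDS = {"person", "stop sign", "bench"}  # placeholder hazard classes

cap = cv2.VideoCapture(0)                   # Pi Camera exposed as /dev/video0 (assumed)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    for box in results.boxes:
        name = model.names[int(box.cls)]
        if name in HAZARDS and float(box.conf) > 0.5:
            # ...send a hazard event to the STM32 over UART (not shown)...
            print("hazard:", name, box.xyxy.tolist())
```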
# Criterion For Success

### Navigational Motor/Haptic Feedback

1) The haptic feedback (left/right vibration) should match the navigation directions received from the app (turn left/right).
2) Detect obstacles, stairs, curbs, and people.
3) Detect intersections ahead and the point of turn from the LIDAR sensor data.
4) Follow the designed haptic feedback patterns (tap front to walk forward, tap right to go right, etc.).

### Object Detection

1) Using the Illinois Rules of the Road and the Federal Manual on Uniform Traffic Control Devices guidelines, we will use a total of 10–30 distinct pedestrian road signs to test object detection. We will use formal ML testing methods such as geometric transformations, photometric transformations, and background clutter. Accuracy will be measured as (number of correctly classified samples) / (total number of samples).
2) The ML model should detect potential environmental hazards including, but not limited to, obstacles, stairs, curbs, and people. We plan to gather multiple hazard scenarios via online research, surveys, and in-person interviews. Based on the collected research, we will build solid test cases to ensure the device reliably identifies potential hazards. We also plan to design strict timing and accuracy metrics.
3) The ML model should detect additional road structures such as curbs, crosswalks, and stairs to provide comprehensive environmental awareness. We will use different crosswalks located on the north quad and apply the accuracy measurement techniques from 1).

### Power and Battery Life

1) The device should support at least 3 hours of battery life.
2) The device should comply with the IEC 62368-1 safety standard, which defines energy-source classes such as ES1, ES2, and ES3 covering electrical and fire (flame) hazards.
# Project 6: E-Bike Crash Detection and Safety
Team Members:
- Ayman Reza (areza6)
- Muhammad Amir (mamir6)
- Adam Arabik (aarabik2)

# Problem

E-bikes are gaining popularity as a sustainable and convenient mode of transportation. The main issue with the growing number of e-bikes is the safety of the rider and those around them. If a rider gets into a crash, there is no automatic shutoff for the electrical systems on an e-bike. The bike's motor can remain on, potentially causing more harm to the rider or the surrounding environment. Current safety systems on electronic devices typically focus only on post-crash communication, such as sending alerts to contacts or calling emergency services. There is currently no system that detects a crash in real time and instantly cuts power to the bike's electrical systems to improve safety.

# Solution

Our group's solution is a crash detection system with a motor shutoff that can integrate with e-bike systems. The device will use its own sensors and electrical measurements to recognize when a crash occurs. Once a crash is detected, the system cuts all power to the motor, ensuring the bike can no longer accelerate even if the throttle is still engaged. To reduce false positives, the system combines data from multiple sensors to more accurately assess whether a cutoff is needed. In addition, the design includes a manual override that allows the rider to turn the motor back on and continue operating the bike normally. The goal of this project is a crash protection system that reacts quickly to its environment to prevent further harm during a crash.

# Solution Components

## Subsystem 1: Crash Detection Sensors

This subsystem detects sudden deceleration, impacts, or abnormal electrical behavior that indicates a crash. The design will use an accelerometer and gyroscope, such as the MPU-6050, to monitor motion and angular velocity. A current sensor such as the ACS712 will detect sudden changes in motor current that occur during impact. An optional vibration or impact sensor may be added to confirm collision events and improve reliability.

## Subsystem 2: Control and Processing Unit

This subsystem processes the sensor inputs, runs the crash-detection algorithm, and issues the motor cutoff command. The system will be built around a microcontroller, such as an STM32 or ESP32, which has the processing capability to fuse sensor data and apply threshold-based decision making. The microcontroller also handles input from the manual reset and override switch so the rider can re-enable the system after a false detection.
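A minimal sketch of the threshold-based decision logic, with placeholder thresholds and hypothetical `read_*`-style inputs standing in for the MPU-6050 and ACS712 drivers: a crash is declared, and the cutoff latched, when impact g-force, tilt, or a current spike crosses its limit.

```python
import math

IMPACT_G = 4.0          # acceleration magnitude threshold (placeholder)
TILT_DEG = 60.0         # lean angle that counts as "tipped over" (placeholder)
CURRENT_SPIKE_A = 25.0  # abnormal motor current (placeholder)

crashed = False         # latched until the rider presses reset

def check_crash(ax, ay, az, tilt_deg, motor_amps):
    """ax..az in g from the IMU; tilt from fused gyro/accel; amps from the ACS712."""
    global crashed
    g_mag = math.sqrt(ax * ax + ay * ay + az * az)
    if g_mag > IMPACT_G or abs(tilt_deg) > TILT_DEG or motor_amps > CURRENT_SPIKE_A:
        crashed = True      # here the MOSFET cutoff (Subsystem 3) would be driven open
    return crashed

def rider_reset():
    """Manual override: re-enable the motor after a false trigger."""
    global crashed
    crashed = False
```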
## Subsystem 3: Motor Cutoff Circuit

This subsystem physically disconnects motor power when a crash is detected. A MOSFET-based switch cuts power from the e-bike motor controller. The cutoff circuit will be designed to handle the motor's current and respond within milliseconds. Once triggered, the motor remains disabled until the rider resets the system.

## Subsystem 4: Testing and Validation Setup

This subsystem verifies the accuracy and timing of the system under controlled and real-world conditions. Initial bench testing will involve tapping the sensor and measuring how quickly the motor cutoff occurs with an oscilloscope. Controlled crash simulations will be performed by stopping the spinning wheel or using drop tests to mimic impacts. Field tests will involve riding the e-bike over curbs, bumps, and rough pavement to ensure the system does not false-trigger during normal use. Once a crash has been detected, the motor can be re-enabled using the reset button.

# Criterion for Success

- The rider must be able to manually cut and restore power to the motor at any time using switches on the electrical system.
- If the bike tips over onto its side, the motor must turn off automatically.
- If the bike comes to an immediate stop that indicates a crash, the motor must turn off automatically.
- The system must work with e-bike motors.
# Project 7: Omnidirectional Drone
Request for Approval

Team Members:
- Dhruv Satish (dsatish2)
- Ivan Ren (iren2)
- Mahir Koseli (mkoseli2)

# Problem

Aerial maneuvering has become an increasingly important consideration in the new age of drone deliveries, drone imaging, and the need for automation in agriculture, construction, surveying, remote monitoring, and more. The current standard of drone technology remains mostly limited to quadcopters, a technology mature enough to allow complex directional motion with extreme speed and stability. However, these vehicles cannot decouple their movement: translational and rotational motion are tied together. In many speed-focused applications this issue is trivial, since most movement systems can compensate to move in 6-DOF space by applying different amounts of power across the motor configuration. But in precision applications, or in situations that require holding a certain orientation, decoupling the rotational and translational degrees of freedom gives the drone unprecedented control. Considering a few simple scenarios: for precise filming, construction, or especially sensitive natural or urban areas, a drone with full control over its movement can hold an angle for a shot, apply paint at any angle while moving around objects through very tight spaces, or survey wildlife and urban areas without disturbing the environment. In any situation not prioritizing speed or power, an omnicopter provides significantly improved flexibility and control.

# Solution

Our solution is inspired by existing omnicopter designs such as the ArduCopter and ETH Zurich's project, but we plan to design, develop, and test our project completely independently. We will use existing resources to design the frame as either a 6- or 8-motor design. Aside from the frame, the other components we plan to build are our own custom BLDC motor controller, a custom flight controller board with telemetry from an IMU, GPS unit, and barometer, and potentially a regenerative braking system.

# Solution Components

- STM32466ZE (MCU)
- RP2040 (BLDC motor controller MCU)
- DRV8300 (gate driver, with external power MOSFETs)
- NEO-M8N (GPS)
- ICM-42670-P (IMU)
- BMP390 (barometer)
- TLV493D (magnetometer)
- Any 2200 KV BLDC motor
- 4S LiPo (battery)

# Subsystem 1 - BLDC Motor Controller

The motor drive system will contain all the electronics required to power and control the motors, including the ESCs, motors, current and voltage sensors, battery management system, and a central microcontroller that interfaces with the ESCs and remote controller. The system will be modular: each ESC-and-motor addition is its own module that can be easily added to the overall electrical schematic, ensuring flexibility in motor configuration depending on power usage during testing. Within the motor drive system, the battery management system and regenerative braking feature will store the extra power produced by the large current and wattage spikes that arise from the motors' inductive nature.

# Subsystem 2 - Frame

The frame of the omnicopter will use either a 6- or 8-motor configuration depending on power draw, stability, and feasibility testing after the electronics have been developed. The design emphasizes easy fabrication using quick prototyping methods like FDM 3D printing, while remaining lightweight and structurally sound.
The goal is for the drone to be easily manufacturable by hobbyists who would like a robust omnidirectional drone with all required functionality and maximum tinkerability. To this end, we have already found research papers documenting optimal motor placements for 6- and 8-motor omnicopter designs, as well as the physics for powering these motors in various orientations.

# Subsystem 3 - Flight Control + Telemetry

The controls and communications side will handle reading and writing data between the drone and the remote controller, as well as converting movement commands into motor power combinations that enable separate translational and rotational movement. To do this conversion, we will write custom firmware that reads data from the gyroscope, IMU, barometer, and motor feedback to set the PWM duty and direction for each individual motor (see the allocation sketch below). The remote controller will be a simple dual-joystick system, with one joystick handling rotational motion and the other translational motion. Depending on time constraints, trajectory planning and more can also be explored using the drone's initial position, motor velocities, and orientation.

# Criterion for Success

The final solution will be a multi-rotor drone capable of separate rotational and translational flight, powered by onboard battery packs and responding to inputs from a remote controller whose two joysticks control rotation and translation independently.
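The movement-to-motor conversion in Subsystem 3 is typically a control-allocation problem: each fixed motor contributes a known force and torque, so the desired 6-DOF wrench maps to motor thrusts through a pseudoinverse. A sketch under an assumed, made-up motor geometry (real designs would use the optimized placements from the cited papers):

```python
import numpy as np

# Per-motor geometry for a hypothetical 8-motor omnicopter:
# unit thrust direction d_i and position r_i in the body frame.
d = np.random.default_rng(0).normal(size=(8, 3))        # stand-in directions
d /= np.linalg.norm(d, axis=1, keepdims=True)
r = np.random.default_rng(1).normal(size=(8, 3)) * 0.2  # stand-in positions (m)

# Allocation matrix A (6 x 8): rows give total force and torque per unit thrust.
A = np.vstack([d.T, np.cross(r, d).T])

def allocate(wrench):
    """wrench = desired [Fx, Fy, Fz, Tx, Ty, Tz]; returns 8 motor thrusts."""
    u = np.linalg.pinv(A) @ wrench     # least-squares thrust split
    return np.clip(u, 0.0, 15.0)       # assumes thrust-only props and a 15 N cap

print(allocate(np.array([0, 0, 9.81 * 2.0, 0, 0, 0])))  # hover for a 2 kg craft
```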
# Project 8: Hybrid Actuation Arm Exoskeleton
**Team**

Alan Lu -- jialin8
Rubin Du -- rd25

**Problem**

Lifting and carrying heavy objects is a common but physically demanding task in both personal and industrial environments. Whether it is a person at home carrying groceries or a logistics worker handling cargo, repetitive lifting stresses the musculoskeletal system and can result in fatigue, reduced productivity, and even long-term injuries. Existing exoskeleton solutions often focus on industrial use, but they suffer from limited backdrivability, high weight, or overly complex designs that prevent practical everyday use. A lightweight, safe, and efficient solution is needed to reduce the physical burden of lifting while maintaining the user's freedom of movement.

**Solution**

Our team proposes a wearable exoskeleton system designed to assist users in lifting objects of up to 10 kilograms with minimal effort. The system employs a hybrid actuation strategy that combines the strengths of a BLDC motor and a servo motor: the BLDC provides the torque required for large-angle lifting motions, while the servo supplies stable holding torque to maintain the lifted position without excess energy drain. The BLDC drives a 64:1 planetary gear set to amplify torque, and the servo motor drives a movable linkage system that creates sufficient mechanical advantage to further reduce the load on the motor. A detachable drivetrain allows the user to disengage the actuation system, enabling free arm movement when lifting support is not required. The skeleton itself is lightweight, manufactured from carbon-fiber-reinforced nylon (PA-CF) for durability and comfort. The modular design starts with elbow actuation and can be scaled to include shoulder actuation, broadening its application.

**Solution Components**

**Subsystem 1: Mechanical Skeleton and Drivetrain**
- Lightweight PA-CF composite structure, under 3 kg excluding the battery.
- Hybrid drivetrain using a BLDC with planetary gearing for motion and a servo motor for holding.
- Drivetrain disengagement mechanism for free arm movement.
- Movable armor integrated with a linkage system on the drivetrain that moves the upper-limb armor to avoid structural interference.

**Subsystem 2: Actuation and Power System**
- Actuated by the BLDC + servo combination for efficiency and safety.
- Powered by a 6S LiPo battery (~200 Wh), providing several hours of continuous assistance.
- Custom PCB with DC-DC buck converters for peripheral loads and power distribution.
- Thermal management through ventilation and optional forced convection.

**Subsystem 3: Control and Signal Processing**
- Joint actuation regulated through PID controllers.
- User intent detected via EMG sensors integrated into the arm.
- Signal conditioning pipeline: Kalman filter → Chebyshev low-pass filter → controller input (a sketch of this stage appears at the end of this post).
- Optional manual override via a simple forearm-mounted control panel.
- Microcontroller and peripheral signals integrated on a customized PCB/FPGA.

**Subsystem 4: Peripherals**
- Ambient lighting integrated into the shell of the skeleton for aesthetics.
- Ventilation port openings controlled by micro-servos to ensure good heat dissipation.
- A manual control panel placed on the lower-limb skeleton for manual operation and emergency switches.
- TPU-based soft pads inside the skeleton for a comfortable user experience.

**Scalability and Modularity**
- The initial prototype targets elbow actuation.
- Design is scalable to include shoulder actuation grounded to chest armor.
- The modular approach ensures a meaningful demonstration even if full-body integration is not achieved.

**Criterion for Success**

The final solution will be a wearable exoskeleton capable of assisting the user in lifting and holding objects up to 10 kg through a dual-actuation BLDC–servo system with a detachable drivetrain for free arm movement, powered by an onboard 6S battery, lightweight (under 3 kg excluding the battery), and controlled via EMG signals or a manual override panel to ensure safe, efficient, and natural operation.
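A minimal sketch of the Chebyshev stage of the conditioning pipeline using SciPy; the sampling rate, filter order, ripple, and cutoff are placeholder values, and the Kalman stage is represented here only by its rectified input.

```python
import numpy as np
from scipy import signal

FS = 1000.0   # EMG sampling rate in Hz (assumed)

# 4th-order Chebyshev type-I low-pass, 0.5 dB ripple, 10 Hz cutoff:
# smooths rectified EMG into an envelope the PID controller can track.
b, a = signal.cheby1(N=4, rp=0.5, Wn=10.0, btype="low", fs=FS)

def emg_envelope(raw):
    """raw: 1-D array of EMG samples; returns a smoothed activation envelope."""
    centered = raw - np.mean(raw)           # remove DC offset
    rectified = np.abs(centered)            # full-wave rectification
    return signal.lfilter(b, a, rectified)  # Chebyshev low-pass stage

# e.g. lift intent when emg_envelope(latest_window)[-1] exceeds a calibrated threshold
```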
# Project 9: Ant Weight 3-D Printed BattleBot
Team Members:
- Yanhao Yang (yanhaoy2)
- Yunhan Tian (yunhant2)
- Mig Umnakkittikul (sirapop3)

# Problem

We will design a 3D-printed BattleBot to enter the competition run by Professor Gruev. To enter, we must meet the following requirements:

- The BattleBot must be under 2 lbs.
- The BattleBot must be made only of these materials: PET, PETG, ABS, or PLA/PLA+.
- The BattleBot must be controlled from a PC via Bluetooth or Wi-Fi.
- The BattleBot must have a custom PCB holding a microprocessor, a Bluetooth or Wi-Fi receiver, and an H-bridge for motor control.
- The BattleBot must have a fighting tool activated by a motor.
- The BattleBot must have an easy manual shutdown and an automatic shutdown when there is no RF link.
- The BattleBot must adhere to the rules on the NRC website.

Our overall goal is to design, code, and build a war robot capable of thriving in the robot battle competition.

# Solution

We will build a 2-lb, 3D-printed BattleBot with a front-hinged lifting wedge (shovel) as the weapon to flip and destabilize other robots. The main structure will be ABS for toughness, PLA for non-critical connectors, and PETG around the power system and microcontroller for heat resistance. Control is from a PC over Wi-Fi or Bluetooth using an ESP32 microcontroller. The bot will have at least three motors: two DC motors driving the wheels for mobility, and one geared lifter motor for the shovel, controlled through H-bridge drivers.

# Solution Components

## Microprocessor

We will use the ESP32-S3-WROOM-1-N16 because it combines built-in Wi-Fi and Bluetooth, eliminating the need for separate modules. Its dual-core processor and ample RAM/flash can handle motor control, PWM generation, weapon actuation, and sensor processing simultaneously. Its weight (6.5 g) is ideal for a 2-lb bot, and it supports many peripherals.

## Attack Mechanism

To attack, destabilize, and flip opposing bots, we will use a front-hinged lifting wedge ("shovel") as our primary weapon. The wedge will be 3D printed in PETG for impact resistance, reinforced at the hinge and linkage points to withstand stress. It will span about 50–70% of the bot's width and feature a low, angled tip to slide under opponents effectively. A small geared lifter motor will actuate the wedge through a lever linkage, which amplifies the motor's torque to lift a 2-lb target.

## Mobility System

We will use four small wheels (2.25"), with the two rear wheels powered by high-torque 600 RPM, 12V DC motors. The smaller wheels lower the bot's ride height, giving it a lower center of gravity, which improves stability during combat and reduces the chance of being flipped, while still providing solid ground traction. The motors strike a good balance between speed and torque, offering sufficient pushing power to maneuver our heavily armored bot effectively.

## Power System

We will use a 4S 14.8V 750 mAh lithium polymer (LiPo) battery, as the higher voltage may be required for the weaponry. LiPo batteries are significantly lighter than NiCd, provide more power, and save space. Additionally, we will integrate a motor current sensor (e.g., INA219 or ACS712) into the motor driver circuits to monitor current draw. The ESP32 will read these values in real time, allowing us to detect stall conditions and trigger manual/automatic shutdown to protect the motors and electronics.
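A sketch of the stall-detection idea, written as MicroPython for the ESP32 with the ACS712 option; the ADC pin, zero-current voltage, sensitivity, and stall threshold are placeholder values that depend on the wiring and sensor variant.

```python
# MicroPython on ESP32 (sketch): cut motor drive when current indicates a stall.
from machine import ADC, Pin
import time

current_adc = ADC(Pin(4))          # hypothetical pin wired to the ACS712 output
current_adc.atten(ADC.ATTN_11DB)   # full 0-3.3 V input range on ESP32

SENSITIVITY = 0.100   # V per A for the 20 A ACS712 variant (assumed)
V_ZERO = 1.65         # sensor output at 0 A after level shifting (assumed)
STALL_AMPS = 8.0      # placeholder stall threshold

motor_enable = Pin(5, Pin.OUT, value=1)  # hypothetical H-bridge enable line

def read_amps():
    v = current_adc.read() / 4095 * 3.3  # 12-bit reading to volts
    return (v - V_ZERO) / SENSITIVITY

while True:
    if abs(read_amps()) > STALL_AMPS:
        motor_enable.value(0)            # latch the cutoff until reset
    time.sleep_ms(10)
```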
## Bot Structure Materials

We will use ABS for the main bot structure, as it offers sufficient strength and a good balance between durability and printability. PLA will be used for general-purpose parts, such as inner connection pieces, where high strength is not required. Finally, PETG will be used around the power system and microprocessor for additional heat resistance.

# Criterion for Success

The project will be considered successful if:

- The BattleBot can be fully controlled remotely from a PC, including movement and wedge activation.
- The wedge lifter and drive motors operate reliably and can destabilize or flip a 2-lb opponent.
- Manual and automatic shutdowns function correctly, independent of wireless communication.
# Project 10: NeuroBand

Team: Arrhan Bhatia, Vansh Vardhan Rana, Vishal Moorjani
# Problem

As LLM-based voice assistants move onto AR glasses, interacting by voice is often impractical in public (noise, privacy, social norms). Existing AR inputs like gaze/head pose can be fatiguing and imprecise for pointer-style tasks, and camera-based hand tracking ties you to specific ecosystems and lighting conditions. We need a device-agnostic, silent, low-latency input method that lets users control AR (and conventional devices) comfortably without relying on voice.

# Solution Overview

We propose a two-band wrist/forearm mouse that connects as a standard Bluetooth HID mouse and operates in virtual trackpad mode:

* A wrist band (Pointing Unit) uses an IMU to estimate pitch/roll relative to a neutral pose and maps that orientation to a bounded 2D plane (absolute cursor control). A clutch gesture freezes/unfreezes the cursor so the user can re-center their wrist naturally.
* A forearm band (Gesture Unit) uses surface EMG electrodes over the forearm muscle belly to detect pinch/squeeze gestures for clicks, drag, right-click, and scroll.
* The wrist band is the host-facing device (Bluetooth HID). The forearm band communicates locally with the wrist band (tether or short-range wireless) for low added latency.
* The initial design focuses on pitch/roll; yaw is not required for trackpad mode.

# Solution Subsystems

## 1 — Wrist Band (Pointing Unit)

* Wrist-mounted inertial sensing to estimate stable pitch/roll relative to a neutral pose.
* Lightweight fusion/filtering for smooth, low-noise orientation signals suitable for absolute cursor mapping (sketched at the end of this post).
* Local state for clutch (engage/hold/release) and pointer acceleration/limits as needed.

## 2 — Forearm Band (Gesture Unit)

* Noninvasive EMG sensing over forearm muscle groups associated with finger pinches.
* Basic signal conditioning and thresholding to convert muscle activity into discrete actions (left click, right click, drag, scroll).
* Brief per-user calibration to set comfortable sensitivity and reduce false triggers.

## 3 — Inter-Band Link & Firmware

* Local link from the forearm band (gesture events) to the wrist band (pointing and HID reports).
* Embedded firmware to read sensors, perform fusion/gesture detection, manage the clutch, and assemble standard Bluetooth HID mouse reports for the host.
* Emphasis on responsiveness (low end-to-end latency) and smoothness (consistent cursor motion).

## 4 — Power, Safety, and Enclosure

* Rechargeable batteries and simple power management sized for day-long use.
* Electrical isolation/protection around electrodes for user safety and comfort.
* Compact, comfortable bands with skin-safe materials; straightforward donning/doffing and repeatable placement.

# Criterion for Success

* Pairs as a standard BLE mouse and controls the on-screen cursor in virtual trackpad mode.
* Supports left click, right click, drag, and scroll via gestures, with a working clutch to hold/release cursor position.
* End-to-end interaction latency low enough to feel immediate (target: under ~60 ms typical; for reference, Apple's Magic Mouse 2 shows ~60 ms before motion is reflected on screen).
* Pointer selection performance on standard pointing tasks comparable to a typical BLE mouse after brief calibration.
* Minimal cursor drift when the wrist is held still with the clutch engaged.
* High true-positive rate (>= 90%) and low false-positive rate for click gestures during normal wrist motion.
* 4 hours of battery life on a single charge.
* Stable wireless operation in typical indoor environments at common usage distances (up to 2 meters).
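A sketch of the absolute pitch/roll-to-cursor mapping with the clutch behavior; the wrist travel range, screen size, and sign conventions are placeholder assumptions.

```python
SCREEN_W, SCREEN_H = 1920, 1080
RANGE_DEG = 25.0   # wrist travel mapped to the full virtual trackpad (assumed)

class Pointer:
    def __init__(self):
        self.neutral_pitch = 0.0
        self.neutral_roll = 0.0
        self.clutched = False
        self.held = (SCREEN_W // 2, SCREEN_H // 2)

    def set_clutch(self, engaged, pitch, roll):
        if self.clutched and not engaged:
            # On release, re-zero the neutral pose so the cursor stays put.
            self.neutral_pitch, self.neutral_roll = pitch, roll
        self.clutched = engaged

    def position(self, pitch, roll):
        """Absolute cursor position from IMU pitch/roll (degrees)."""
        if self.clutched:
            return self.held           # frozen while the user re-centers
        nx = (roll - self.neutral_roll) / RANGE_DEG    # -1..1 across the plane
        ny = (pitch - self.neutral_pitch) / RANGE_DEG
        x = int(min(max((nx + 1) / 2, 0.0), 1.0) * (SCREEN_W - 1))
        y = int(min(max((ny + 1) / 2, 0.0), 1.0) * (SCREEN_H - 1))
        self.held = (x, y)
        return self.held
```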
# Project 11: Glove Controlled Drone
Team Members:
- Aneesh Nagalkar (aneeshn3)
- Zach Greening (zg29)
- Atsi Gupta (atsig2)

# Problem

Controlling drones typically requires handheld remote controllers or smartphones, which may not feel natural and can limit user interaction. A more intuitive way to control drones could increase accessibility, improve user experience, and open possibilities for new applications such as training, entertainment, or assistive technology.

# Solution

Our group proposes building a wearable gesture-control glove that sends commands to a quadcopter. The glove will use motion sensors to detect the user's hand orientation and movements, translating them into drone commands (e.g., tilting the hand forward moves the drone forward). The glove transmits these commands wirelessly to the quadcopter through an ESP32 Wi-Fi module. The drone will be purchased in parts to simplify integration and ensure reliable flight mechanics, while the glove will be custom built. To improve on previous iterations of similar projects, we plan to:

- Use IMU sensors instead of flex sensors for more precise and complex gesture detection.
- Add haptic feedback to communicate status updates to the user (e.g., low battery, weak signal).
- Implement an emergency shutoff mechanism triggered by a specific hand gesture (e.g., closing the hand).
- Potentially integrate a camera on the quadcopter, triggered by a different hand gesture.

The system is also scalable to advanced commands such as speed adjustments based on motion severity.

# Solution Subsystems

**Subsystem 1: Gesture Detection**

- IMU and gyroscope sensors embedded in the glove to detect orientation and movement.
- Sensor fusion algorithms to interpret gestures into defined drone commands (see the sketch at the end of this post).

1. Three-axis gyroscope: MPU-6050
2. IMU: Pololu MinIMU-9 v6

Controls (a clear definition of how the drone will move):

- The drone maintains a constant hover height (handled by the drone's onboard flight controller barometer/altimeter stabilization).
- The glove only controls horizontal motion and yaw (turning).
- Pitch forward (tilt hand down): move forward.
- Pitch backward (tilt hand up): move backward.
- Roll left (tilt hand left): strafe left.
- Roll right (tilt hand right): strafe right.
- Yaw (rotate wrist clockwise/counterclockwise): turn left/right.
- Clenched fist (or another distinct gesture): emergency stop/shutoff.

**Subsystem 2: Communication Module**

- ESP32 microcontroller on the glove acts as the transmitter.
- Wi-Fi connection to the drone for sending control signals.

1. ESP32 microcontroller
2. Integrated ESP32 Wi-Fi chip
3. Voltage regulation

**Subsystem 3: Quadcopter Hardware**

- Drone hardware purchased off the shelf to ensure stable flight.
- Integrated receiver to interpret Wi-Fi commands from the glove.

1. LiteWing – ESP32-based programmable drone

**Subsystem 4: Feedback and Safety Enhancements**

- Haptic motors embedded in the glove to provide vibration-based feedback.
- Emergency shutoff gesture detection for immediate drone power-down.

1. Vibrating actuator: Adafruit 10 mm vibration motor
2. Driver for actuator: Precision Microdrives 310-117
3. Battery: Adafruit 3.7 V 1000 mAh Li-Po
4. Glove that components will be affixed to

# Criterion for Success (minimum 5 of these 7)

- The glove reliably detects and distinguishes between multiple hand movements.
- The drone responds in real time to glove commands with minimal delay.
- Basic directional commands (forward, back, left, right, up, down) work consistently.
- Scaled commands (e.g., varying speed/acceleration) function correctly.
- Haptic feedback provides clear communication of system status to the user.
- The emergency shutoff mechanism works reliably and immediately.
- The system demonstrates smooth, safe, and intuitive user control during a test flight.
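For the gesture-detection subsystem, a common lightweight fusion approach is a complementary filter that blends the gyroscope and accelerometer into stable pitch/roll estimates, which the command table above can then threshold. A sketch with assumed axis conventions and a placeholder blend factor (the MPU-6050 driver reads are not shown):

```python
import math
import time

ALPHA = 0.98          # gyro weight; higher trusts the gyro more (placeholder tuning)
TILT_CMD_DEG = 15.0   # tilt needed before a command fires (placeholder)

pitch = roll = 0.0
_last = time.monotonic()

def update(ax, ay, az, gx, gy):
    """ax..az in g, gx/gy in deg/s from the MPU-6050 (driver reads not shown)."""
    global pitch, roll, _last
    now = time.monotonic()
    dt, _last = now - _last, now
    accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    accel_roll = math.degrees(math.atan2(ay, az))
    pitch = ALPHA * (pitch + gx * dt) + (1 - ALPHA) * accel_pitch
    roll = ALPHA * (roll + gy * dt) + (1 - ALPHA) * accel_roll
    return pitch, roll

def command(pitch_deg, roll_deg):
    """Threshold the fused tilt into the command table above."""
    if pitch_deg > TILT_CMD_DEG:
        return "FORWARD"
    if pitch_deg < -TILT_CMD_DEG:
        return "BACKWARD"
    if roll_deg < -TILT_CMD_DEG:
        return "STRAFE_LEFT"
    if roll_deg > TILT_CMD_DEG:
        return "STRAFE_RIGHT"
    return "HOVER"
```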
# Project 12: New Generation Addiction Control and Recovery Device System with Absolute Safety and Privacy - working with the Drug Addiction Research Team

Documents: other1.docx
Yixuan Li, Bernard Richawn, Adrian Santosh
ECE 445, Professor Arne Fliflet
2 Sep 2025

Problem:

"First you take a drink, then the drink takes a drink, then the drink takes you." – F. Scott Fitzgerald. As Fitzgerald says, unconscious addiction is becoming a more and more common problem in modern society. By "unconscious," I mean habits that sneak in slowly, built by small daily choices and cues, so people don't notice the slide until it is strong.

There are many materials that can cause addiction in individuals' lives, including both substances and behaviors. Some are relatively low-impact, like addiction to high sodium concentrations, addiction to certain neural stimuli like music, and even alcohol in most cases. I don't mean these are harmless; only that for many people the day-to-day harm can be lower. Alcohol addiction can also be severe; I am talking about mild use here. But some are far more frightening, like addiction to smoking (more specifically nicotine), to fentanyl, and to cocaine. These often pull people in fast, with strong cravings, risky choices, and a higher risk of death. All of these substances deal considerable damage to the health of the addict and, to varying degrees, affect the people around the user: heart and lung problems, changes in mood and thinking, money stress, and family strain.

But this should not end here. Naming the harm is not enough; we need a clear plan to stop the slide. "It does not matter how slowly you go as long as you do not stop." – Confucius. As Confucius said, if there is no stop to the addiction, it will eventually roll downhill like a rock, whether faster or slower; a brake is needed for this process. The point is that steady effort matters more than speed, but pushes from stress and daily life keep the rock moving.

Do we have a way to stop addiction? Certainly yes: many addictions are known to be reversible, and even for those that are not, the symptoms can be controlled to be at least less severe. But there are problems with the current methods; two big gaps stand out in real life.

The base logic of addiction recovery is usually step-by-step. The steps are often fixed by a plan, not by how the person feels that day. Usually the addict needs to bring the addiction level down from severe to nearly no impact step by step, but if the process is too quick the result can also be bad. It works like trying to reshape a bouncy ball: push too hard and it bounces back, erasing most of the progress already made; in the worst case the addict may end up worse than at the start because of the bounce. When that happens, the person loses trust and energy. They may even smoke more to calm the crash.

Current methods also rely heavily on the addict's self-regulation: the addict has to lower the addiction level step by step with a highly organized daily routine. This assumes strong willpower every hour, which is not realistic for most people. Life is messy; work shifts, kids, money, and stress can break the routine. This sometimes works well, but in most cases the result is not good enough. There are mainly two problems.

First, the addict's self-regulation is usually not strong enough.
Willpower rises and falls during the day, but cravings and desire don't wait. There are addicts with a strong will to recover, but in most cases the addict only knows that the current phase is unhealthy and has a relatively weak will to recover; such an addict may give up the process when feeling horrible. This drop-off point is the key failure we must design for.

Second, the self-control model is usually not scientific or optimal. Most plans don't measure anything in real time. Even if addicts try their best to regulate themselves, the human body is a hugely chaotic system, and there is no way for an addict to have a perfect internal view of what the correct dose should be at a specific moment. Nicotine fades fast, so strong desire can spike many times a day. People end up guessing dose and timing, which causes under-dosing (white-knuckle cravings) or over-dosing (nausea, dizziness).

What we are aiming for, then, is an optimal systemic solution for recovery from smoking (nicotine) addiction: in simple words, a system that senses, decides, and helps right when the urge hits. It should be easy to use, low cost, and private. We aim to solve the problem of smokers not being able to self-regulate accurately, using a precise, scientific, responsive system. Today's tools are one-size-fits-all, slow to react, and too hard to follow under stress. We need a responsive loop: notice the trigger, give the right aid, check the effect, then adjust and repeat in time.

Team Members: Adrian Santosh, Bernard Richawn, Yixuan Li

Solution:

We need a solution that performs at least four functions. First, the system should detect the addict's nicotine level and some other stats, for example blood pressure and blood sugar, to provide sufficient information for analysis. Second, the system should analyze the data and produce the best solution for the current status. Third, the system should control the release of the material used for addiction recovery. Fourth, and most importantly, the system should provide multiple levels of safety checks so that even in the worst case it produces no risk.

Solution Components:

Privacy: We do not plan to connect any external link into the system, so the user's privacy should not be exposed to any external source. We also do not plan to use any long-term storage, so even if the system is hacked it should not yield any readable information. If possible, we will also add encryption for the data and a hardware/software self-destruct function if the system is forced open in an unwanted way. The user's privacy is crucial and we want to protect it as much as we can.

Sensor System: The sensors should be non-invasive, and the system will not connect to any external link.

First Sensor System: This sensor system tests the fluid in the oral cavity region, for example saliva. It should detect at least the existing nicotine concentration in the mouth region.

Second Sensor System: This sensor system serves as a backup for the First Sensor System, detecting the nicotine content in exhaled air.
Safety Measuring System:

Emergency detection and warning: Watch states like blood pressure, heart rate, symptoms, and dose history. If any sign crosses a safe line, block dosing and show a clear message. Directly alarm the user in unusual cases.

Sensor adjusting and comparing: This part works with the microcontroller. It compares the two sensor readings; if one goes completely out of range, it follows the one in the normal range. If both sensors fail, it uses user input and default data as signals to the microcontroller unit, and warnings are sent to tell the user about the abnormality in the device's sensors. (A sketch of this arbitration plus the dose cap appears below.)

Backup user-input port: The user can use the control port to provide supplementary information for the treatment plan, via a simple slider for urge (1-10, just as a reference for treatment). In the worst case, when all sensors fail, the user can rely solely on this subjective input to request the next dose, while the device still strictly respects the treatment limits.

Microcontroller Unit Ports: A safe power port and covered plugs for sensors/maintenance, with keyed shapes so parts cannot be plugged in the wrong way. There should also be water-resistant covers, since liquids are frequently involved.

Microprocessor: Includes a timing system and does most of the logic. When a signal arrives from the Sensor System, it performs the logic and gives the suggested dose to the output system.

Safety Check Software System: Implemented in software; when a dose is significantly wrong compared to previous data, the system produces a warning and uses a different value than the mistaken suggestion.

Safety Check Hardware System: A chip that guards the output commands in case the software fails. Irregular commands are switched out, and failing commands shut down the whole system and inform the user that the device is failing.
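A minimal sketch of the sensor arbitration and the hard dose cap described above; all numeric limits are placeholders, and a real device would implement the final cap in hardware as described, not only in software.

```python
import time

HARD_CAP_MG = 2.0         # absolute per-dose ceiling (placeholder value)
LOCKOUT_S = 30 * 60       # minimum spacing between doses (placeholder)
PLAUSIBLE = (0.0, 500.0)  # sane sensor range (placeholder units)

last_dose_time = -LOCKOUT_S

def arbitrate(saliva, breath, user_input):
    """Follow whichever sensor is in range; fall back to user input if both fail."""
    ok = [v for v in (saliva, breath) if PLAUSIBLE[0] <= v <= PLAUSIBLE[1]]
    if len(ok) == 2:
        return sum(ok) / 2
    if len(ok) == 1:
        return ok[0]       # warn the user about the failed sensor (not shown)
    return user_input      # both sensors failed: subjective input only

def approve_dose(requested_mg, now=None):
    """Last-guard logic: clamp to the hard cap and enforce the lockout."""
    global last_dose_time
    now = time.time() if now is None else now
    if now - last_dose_time < LOCKOUT_S:
        return 0.0                        # still locked out, no dose
    last_dose_time = now
    return min(requested_mg, HARD_CAP_MG)
```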
- Leak detection: a sensor detects if the pod is leaking.
- Leak management: at least two container walls so a leak cannot harm the rest of the device or escape to the outside world.
- Safety detection: if an unreasonable command is received, safety measures are enforced.

Noxious Vapor Pod:
- Volume: a specific volume giving sufficient capacity without wasting space.
- Storage material: chosen to best resist corrosion and erosion by the vapor.
- Leak detection: a sensor detects if the pod is leaking.
- Leak management: at least two container walls so a leak cannot harm the rest of the device or escape to the outside world; management must be stricter here because the noxious vapor is more dangerous.
- Safety detection: if an unreasonable command is received, safety measures are enforced.

Mixing Pod:
- Volume: a specific volume giving sufficient capacity without wasting space.
- Storage material: chosen to best resist corrosion and erosion by the vapor.
- Leak detection: a sensor detects if the pod is leaking.
- Leak management: at least two container walls so a leak cannot harm the rest of the device or escape to the outside world.
- Safety detection: if an unreasonable command is received, safety measures are enforced.
- Mixing device: a mixing rod, or a more capable mixer, to ensure thorough mixing.

Mixing System: Pulls small amounts from the needed containers and blends them to the set dose, using short, clean paths and a quick rinse step to avoid leftover mixture.

Safety System: The last guard against abnormal commands. If a dose exceeds a set amount, this simple check directly shuts down the whole system and warns the user of the failure. A clear hard cap must exist in this component so the last safety guard works correctly.

Powering System: Not decided yet; it could be a rechargeable or a non-rechargeable battery. It should support a normal mode and a power-saving mode when the system is not activated.

Criteria for Success

Sensing: The device reports a saliva or breath nicotine status within 60 seconds and matches a lab reference check in more than 80% of trials; BP and heart-rate readings match a reference meter within a small allowed error in bench tests.

Analysis: From any valid sensor or user input, the device gives a clear "status" and "next step" within 10 seconds and always stays within the set dose caps and lockouts across a 10-case test set.

Delivery: The system delivers the commanded micro-dose volume (using a safe test liquid) within ±10% of target as measured on a scale, and enforces a minimum lockout time between doses every time.

Safety: If a red-flag condition appears (very high BP, sensor fault, low battery), dosing is blocked within 1 second and a clear alert is shown.
Privacy: The device works with no external network, and the user can erase all local data.

Notes: All parts are still highly abstract; if, during design and implementation, some parts prove impossible to implement or incompatible with the rest of the system, we reserve the right to revise the project while preserving the best functionality we can. |
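To make the sensor-fallback and dose-gating behavior above concrete, here is a minimal Python sketch. All function names are our own illustration, not an existing library, and every threshold and the dose curve are placeholders, not medically validated values.

```python
import time

DOSE_CAP_MG = 2.0          # hard cap per dose (placeholder value)
LOCKOUT_S = 30 * 60        # minimum time between doses (placeholder)
last_dose_time = None

def plausible(reading, lo=0.0, hi=500.0):
    """Reject readings outside a sane physical range."""
    return reading is not None and lo <= reading <= hi

def choose_nicotine_level(saliva, breath, user_input, default=100.0):
    """Prefer agreeing sensors; fall back to the sane one, then to user/default."""
    ok_s, ok_b = plausible(saliva), plausible(breath)
    if ok_s and ok_b:
        return (saliva + breath) / 2
    if ok_s:
        return saliva
    if ok_b:
        return breath
    return user_input if user_input is not None else default

def decide_dose(level, vitals, now):
    """Return a dose in mg, or 0.0 with a reason when any guard trips."""
    global last_dose_time
    if vitals.get("bp_systolic", 0) > 180:          # red-flag: block and alert
        return 0.0, "blocked: blood pressure out of safe range"
    if last_dose_time and now - last_dose_time < LOCKOUT_S:
        return 0.0, "blocked: lockout window active"
    dose = min(max(0.0, (150.0 - level) * 0.01), DOSE_CAP_MG)  # toy dose curve
    if dose > 0:
        last_dose_time = now
    return dose, "ok"

if __name__ == "__main__":
    lvl = choose_nicotine_level(120.0, 900.0, None)  # breath reading implausible
    print(decide_dose(lvl, {"bp_systolic": 120}, time.time()))
```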
||||||
13 | Sun Tracking Umbrella |
Dora Stavenger Megan Cubiss Sarah Wilson |
||||
Team Members
- Dora Stavenger (doraas2)
- Sarah Wilson (sarahw7)
- Megan Cubiss (mcubiss2)

Problem

When sitting outside in urban third spaces, it is often too hot or bright to stay for long. Even at low temperatures, exposure to direct sun quickly becomes uncomfortable and potentially unhealthy. Many outdoor spaces do have stationary umbrellas, but once set they only help for part of the day, which can lead to discomfort from excessive heat or brightness. This can be avoided by adjusting the umbrella throughout the day, but umbrellas are often quite heavy and hard to maneuver.

Solution Overview

To solve this problem, we propose an umbrella that tracks the position of the sun using solar panels along with other sensors and adjusts its tilt to provide UV protection and comfort for the user. To prove out this concept we will build a smaller model of an umbrella, using resources from the machine shop as well as doing some design ourselves. We will also do the math to show that our design could be scaled up and withstand the extra load of a heavier, full-size umbrella.

Solution Subsystems

## Subsystem 1: Model Umbrella
This subsystem is the mechanical basis for the project. The canopy would be scaled to about that of a personal rain umbrella. The umbrella would attach to an elbow joint allowing tilting motion, and the base would attach to a stable plate and a bearing allowing circular motion.

## Subsystem 2: Solar Cells / Brightness Sensors
This subsystem powers the umbrella and provides data on light intensity: a ring of solar cells toward the widest portion of the umbrella plus solar cells toward the top. The solar cells power the moving mechanisms, provide backup power through battery storage, and double as light-intensity sensors used to determine optimal positioning.

## Subsystem 3: Motor for Solar Angle Tracking
This subsystem tilts the canopy of the umbrella. A stepper motor would be used for this low-speed, high-torque application, with a physical stop built in for added safety so the canopy cannot fall. Motor control is done using an H-bridge (see the tracking sketch at the end of this entry).

## Subsystem 4: Motor for Solar Position Tracking
This subsystem rotates the entire umbrella. A stepper motor would be used to keep the design consistent, controlled by a separate H-bridge from Subsystem 3.

## Subsystem 5: Wi-Fi/Bluetooth Communication
This subsystem handles communication between the physical device on the umbrella and a user's phone/application. Using an ESP32, a web server can be established and connected to a laptop or display via the built-in Wi-Fi. This allows two-way data communication: data can be viewed in a simple web browser, and a user interface can push commands back to the microcontroller. Users on the same network can also access the page and interact with the device.

Criterion for Success

Outcomes:
- A scaled version of the working product with proof that it is scalable to a full-size version.
- The umbrella tilts based on differences in intensity detected by the solar cells.
- The umbrella is structurally sound and does not fall over during any motion.
- Data from the solar cells is displayed and user input is possible.

Hardware:
- The device does not get in the way of the user experience.
- Solar cells send accurate data to software components.
- Motors respond correctly to change the umbrella's position.

Software:
- Data from the solar cells is accurately received and processed by software.
- Software determines how the umbrella should move for optimal coverage.
- Motor-movement commands are dispatched accurately. |
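As a rough illustration of the tracking logic spanning Subsystems 2-4, the sketch below compares readings from opposing solar-cell segments and steps the tilt and rotation motors toward the brighter side. `read_cell` and `step_motor` are hypothetical stand-ins for the real ADC reads and H-bridge driver code, and the deadband value is a placeholder chosen to avoid hunting.

```python
import random

DEADBAND = 0.05  # ignore brightness differences below 5% to avoid oscillation

def read_cell(name: str) -> float:
    """Stand-in for an ADC read of one solar-cell segment, normalized 0.0-1.0."""
    return random.uniform(0.0, 1.0)

def step_motor(axis: str, direction: int, steps: int = 1) -> None:
    """Stand-in for pulsing a stepper through its H-bridge."""
    print(f"{axis}: {'+' if direction > 0 else '-'}{steps} step(s)")

def track_once() -> None:
    # Tilt: compare ring cells on the sun-facing vs. shaded side.
    east, west = read_cell("ring_east"), read_cell("ring_west")
    if abs(east - west) > DEADBAND:
        step_motor("tilt", +1 if east > west else -1)
    # Rotation: compare two top cells to swing the whole umbrella.
    left, right = read_cell("top_left"), read_cell("top_right")
    if abs(left - right) > DEADBAND:
        step_motor("rotate", +1 if right > left else -1)

if __name__ == "__main__":
    track_once()
```

In the real firmware this loop would run periodically on the ESP32, with the web interface able to override it.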
||||||
14 | Enhanced Golf Rangefinder |
Peter Maestranzi Emma DiBiase Jacob Hindenburg |
||||
**Team Members:**
Peter Maestranzi (petervm2)
Jake Hindenburg (jacobh6)
Emma DiBiase (emmamd2)

**Problem:**
Golf is an extremely difficult game that requires a great deal of precision. A multitude of factors can affect a single golf shot, such as distance, weather conditions, and club choice. Modern rangefinders gauge distance well, and some can even show yardage adjustments for changes in elevation. However, rangefinders still lack many features that could help average or new golfers improve quickly.

**Solution:**
Our solution is an enhanced rangefinder that adds several new features. Distance is measured with a time-of-flight sensor, a component commonly used in rangefinders. To make our project unique, we integrate additional sensors measuring factors such as wind speed, humidity, and temperature to compute a more precise effective distance. The adjusted distance due to these factors is updated on the rangefinder and shown on an LCD display. Another component is a Bluetooth user interface: based on the rangefinder's readings, an app on the user's phone can present all the necessary information for that specific shot and provide a club recommendation. Using a microprocessor with Bluetooth capabilities makes this subsystem achievable and crucial to making our device unique. All of our device's components will be secured within a 3D-printed enclosure that is both safe and easy to handle.

**Subsystem 1: Microprocessor**
For our microprocessor, we will use an ESP32-S3-WROOM-1-N16, as it supports Wi-Fi and Bluetooth. It leaves room for additional UI features, GPIOs, and programming capability, with plenty of extra power.

**Subsystem 2: Distance Tracking System**
The main component of the Distance Tracking System is a time-of-flight (ToF) sensor such as the JRT D09C Laser Distance Sensor, which measures the distance to whatever object the golfer points at. These sensors are very common in ordinary rangefinders, so the crucial part of this system for our project is the interfacing with the other systems that adjust the distance based on environmental measurements.

**Subsystem 3: Environment System**
The Environment System detects ambient conditions that directly affect the golf shot. It includes a hot-wire wind sensor with analog output for wind speed (Modern Device Wind Sensor Rev. C), as well as the Bosch BME280 for humidity and temperature, since these directly correlate with gained or lost yardage. This subsystem is essential because it provides the additional assistance and feedback golfers need to improve, giving us the "enhanced" rangefinder (see the distance-adjustment sketch at the end of this entry).

**Subsystem 4: Power System**
A LiPo battery such as the EEMB 3.7V 500mAh should be sufficient to power each component.

**Subsystem 5: User Interface + Bluetooth Application**
A physical LCD display, triggered by a push button on the mechanical enclosure, will show distance measurements and wind speed. Using Bluetooth, an application on a phone or PC will give users more information on club selection based on the conditions read.

**Subsystem 6: Mechanical Enclosure**
The enclosure is an important component of our project because it must safely contain all our systems while remaining user-friendly.
The enclosure would be 3D-printed and would properly mount all sensors and displays.

**Criterion for Success**
This project will be successful if we meet the following criteria:
- The rangefinder measures the correct distance from the user to the flag pin.
- Environmental sensors provide proper feedback to the user regarding wind, humidity, and temperature conditions.
- The UI recommends a suitable club based on the distance to the pin and the environmental conditions. |
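To show how the environmental readings could feed a club recommendation, here is a toy Python sketch. The adjustment coefficients and the club-carry table are made-up illustrative numbers, not real ballistics; the team would replace them with empirically tuned values.

```python
def adjusted_distance(measured_yd: float, wind_mps: float,
                      temp_c: float, humidity_pct: float) -> float:
    """Toy "plays-like" distance model with made-up coefficients."""
    # Headwind positive: adds yards the shot must carry; tailwind subtracts.
    wind_adj = measured_yd * 0.01 * wind_mps
    # Warm air is less dense, so the ball flies farther; 20 C is the baseline.
    temp_adj = -measured_yd * 0.001 * (temp_c - 20.0)
    # Humid air is slightly less dense; the effect is small.
    humid_adj = -measured_yd * 0.0001 * (humidity_pct - 50.0)
    return measured_yd + wind_adj + temp_adj + humid_adj

# Hypothetical typical carry distances in yards, shortest club first.
CLUBS = [("PW", 110), ("9i", 125), ("8i", 140), ("7i", 155),
         ("6i", 170), ("5i", 185), ("3w", 215), ("Driver", 240)]

def recommend_club(plays_like_yd: float) -> str:
    """Pick the shortest club whose typical carry covers the distance."""
    for name, carry in CLUBS:
        if carry >= plays_like_yd:
            return name
    return "Driver"

if __name__ == "__main__":
    d = adjusted_distance(150, wind_mps=3.0, temp_c=10.0, humidity_pct=40.0)
    print(f"plays like {d:.0f} yd -> {recommend_club(d)}")
```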
||||||
15 | Auto adjusted lighting system for room |
Howard Li Jihyun Seo Kevin Chen |
||||
**TITLE**
Auto-Adjusted Smart Lighting System for Healthy Indoor Environments

**TEAM MEMBERS:**
Howard Li [zl114]
Jihyun Seo [jihyun4]
Kevin Chen [kdchen2]

**PROBLEM**
Most people do not give much thought to the lighting conditions in the rooms where they spend hours working, studying, or relaxing. As a result, lighting and brightness levels are often unsuitable for eye health and comfort. Poor or inconsistent lighting can lead to eye strain, headaches, fatigue, and reduced productivity. While modern devices like phones and laptops already include adaptive brightness features, room lighting has largely remained static, requiring manual adjustment if it can be adjusted at all. Sudden changes in light intensity can also be jarring, creating discomfort instead of solving the problem. We aim to create an automatic, health-conscious lighting system for indoor environments that adjusts brightness in real time based on sensed conditions, and does so gradually to protect users' eyes.

**SOLUTION**
Our solution is to build a system of multiple wireless sensors placed around a room to continuously measure light levels at different points. These sensors connect to a central control unit, which processes the readings and determines the optimal lighting adjustments for the space. The system then controls the room's artificial lights, increasing or decreasing brightness to achieve a consistent, eye-healthy level across the room. Importantly, these adjustments are gradual, mimicking the smooth transitions of a phone screen's auto-brightness, so that users never experience sudden, distracting changes in illumination. This approach introduces several subsystems:
- Wireless sensing subsystem: distributed light sensors communicate readings to the main controller.
- Central control subsystem: interprets sensor data and computes adjustments.
- Lighting control subsystem: modifies the brightness and potentially the color temperature of the lights.
- User comfort subsystem: ensures that changes are gradual and within recommended ranges for eye comfort.

In addition to improving eye comfort, our system will also focus on energy efficiency. By actively monitoring natural daylight through the sensors, the system can reduce or even turn off artificial lighting when sunlight provides sufficient brightness. This ensures that lights are only used when necessary, lowering energy consumption and utility costs while promoting sustainability.

**SOLUTION COMPONENTS**

SENSORS
We will use ambient light sensors to measure lux levels at multiple locations in the room. Placing sensors in different spots ensures accurate feedback even when natural light is unevenly distributed. These sensors will transmit data wirelessly to the central controller.

WIRELESS NETWORK & CENTRAL CONTROLLER
A controller will collect all sensor data, run algorithms to determine the target lighting level, and send control signals to smart drivers or dimmers. The wireless design allows easy deployment without additional wiring.

LIGHTING CONTROL
We will integrate dimmable LED lights or connect to existing lighting fixtures via smart dimmers. The control logic will avoid rapid brightness jumps by gradually adjusting output intensity (see the ramp sketch below). We may also explore adaptive color temperature to better mimic natural daylight cycles.

USER INTERFACE (OPTIONAL)
A physical controller or companion app could allow users to set preferences, such as "focus mode," "relax mode," or "sleep preparation mode," which would adjust the target brightness levels and transition speeds.
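A minimal sketch of the gradual-adjustment logic described above, assuming averaged lux readings from the distributed sensors and a hypothetical `set_light_output` dimmer command; the step size, update period, and 400 lux target are placeholders.

```python
import time

def set_light_output(percent: float) -> None:
    """Stand-in for the command sent to the smart dimmer/driver."""
    print(f"light output -> {percent:.1f}%")

def compute_target(sensor_lux: list[float], desired_lux: float = 400.0) -> float:
    """Raise artificial light only by the shortfall after daylight is counted."""
    ambient = sum(sensor_lux) / len(sensor_lux)        # average over room sensors
    shortfall = max(0.0, desired_lux - ambient)
    return min(100.0, shortfall / desired_lux * 100.0)  # percent output

def ramp_to_target(current: float, target: float,
                   max_step: float = 2.0, period_s: float = 0.05) -> float:
    """Move brightness toward target in small steps so changes stay unnoticeable."""
    while abs(target - current) > max_step:
        current += max_step if target > current else -max_step
        set_light_output(current)
        time.sleep(period_s)
    set_light_output(target)
    return target

if __name__ == "__main__":
    level = 0.0
    level = ramp_to_target(level, compute_target([120.0, 90.0, 150.0]))
```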
CRITERION FOR SUCCESS
- The system detects ambient lighting conditions in multiple parts of the room and wirelessly sends the data to the central unit.
- The lights respond automatically to sensor data without user intervention.
- Brightness adjustments are gradual, with no sudden jumps noticeable to the human eye.
- The lighting remains within healthy ranges recommended for eye comfort (e.g., 300–500 lux for reading, 100–200 lux for relaxation).
- Optional success criterion: the user interface allows customization of lighting preferences. |
||||||
16 | Antweight Battlebot - Blade Blade |
Jack Tipping Patrick Mugg Sam Paone |
presentation1.pptx |
|||
# Ant-weight Battlebot - Blade Blade

Team Members:
- Jack Tipping (jacket2)
- Samuel Paone (spaone2)
- Patrick Mugg (pmugg2)

# Problem
We don't have a problem, but other teams will when they see our lightweight battle bot. However, we must keep in mind certain design limitations to be eligible for competition, such as the mechanism remaining under 2 pounds. The battle bot must balance being indestructible, lightweight, offensive, and long-lasting in terms of robot "cardio" (motors).

# Solution
Our design consists of a sturdy body with a circular saw that can not only spin but also lift vertically. This lets us damage our opponent and exploit weaknesses in their design. An initial component list: a 3D-printed chassis, an ESP32 microcontroller, two wheels with two associated motors, and two motors for the weapon, a saw in the front that rotates and lifts, driven over GPIO.

# Solution Components

## MCU
We will use an ESP32 microcontroller. Its primary benefit is integrated Wi-Fi and Bluetooth, which lets us stream custom telemetry to a laptop and control the bot: setting motor speed, raising the weapon to flip the opponent's bot, or cutting power as a fail-safe. The ESP32 has plenty of peripheral support: many PWM outputs, so we can directly drive multiple actuators, and ADC inputs that make it easy to read battery voltage or any additional sensors. It provides everything mentioned while being compact and low-power.

## The Chassis
For the project, we have access to 3D printing with five different plastics: PET, PETG, ABS, PLA, and PLA+. After researching the tradeoffs, we will opt for PETG and ABS. PETG tends to be lighter and stronger than PET and is also easier to build with and more flexible than ABS, making it optimal for the chassis. The saw itself will be printed in ABS, since for that part being lightweight and strong matters more than avoiding manufacturing defects.

## Power Unit and Motors
We plan to use a 12 V brushed DC gear motor with a 37 mm gearbox and an RPM of 45. This obviously limits our battery to 12 V unless additional circuitry is involved. Evaluating 12 V options, we will use a 3S LiPo, with capacity (~500 mAh) to be determined by the final weight of the battle bot. We may opt for a higher-RPM motor for the saw; for now we are focusing on torque.

## Drive Unit
A dual H-bridge with the motors listed above. Only two driven wheels, so we get forward/reverse while saving weight (versus more wheels with the same number of motors and less maneuverability).

## Saw Spin Unit
Our weapon is a tombstone design with a saw instead of an inanimate object that randomly rotates. It will be driven by the high-RPM motor (Adafruit DC motor); for the lift, we are debating between a fourth motor and an air-compressed actuator. The lift is an optional feature.

## Additional Sensor
We will also have a heat sensor to monitor whether the motors are being overworked; if they are, we can avoid "engine failure" (and losing the competition) by temporarily immobilizing, as sketched below.
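A minimal sketch of the thermal fail-safe described under Additional Sensor, with stand-in sensor reads and driver writes and placeholder temperatures; the hysteresis keeps the bot from rapidly toggling its motors around the cutoff.

```python
import random
import time

OVERHEAT_C = 70.0   # placeholder cutoff temperature
RESUME_C = 55.0     # hysteresis: resume only after cooling down

def read_motor_temp_c() -> float:
    """Stand-in for the heat-sensor ADC read on the weapon/drive motors."""
    return random.uniform(40.0, 80.0)

def set_motor_enable(enabled: bool) -> None:
    """Stand-in for gating the H-bridge enable pins from the ESP32."""
    print("motors", "ENABLED" if enabled else "DISABLED")

def thermal_guard_loop(poll_s: float = 0.5, iterations: int = 10) -> None:
    enabled = True
    for _ in range(iterations):
        t = read_motor_temp_c()
        if enabled and t >= OVERHEAT_C:
            enabled = False                 # immobilize before motors fail
            set_motor_enable(False)
        elif not enabled and t <= RESUME_C:
            enabled = True                  # cooled off, back in the fight
            set_motor_enable(True)
        time.sleep(poll_s)

if __name__ == "__main__":
    thermal_guard_loop()
```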
# Criterion For Success
Our high-level goal is to complete this class with an antweight battle bot that:
- maneuvers well on two wheels,
- has a robust chassis,
- has a saw-like weapon that rotates using a motor and can also flip opponents,
- can be controlled over Bluetooth/Wi-Fi,
- and, ideally, does well in the competition. |
||||||
17 | LED Persistence of Vision Globe |
Gavi Campbell Melvin Alpizar Arrieta Owen Bowers |
||||
# Team: Globetrotters (WIP)

# Team Members:
- Owen Bowers (obowers2)
- Melvin Alpizar Arrieta (malpi2)
- Gavriel Campbell (gcampb7)

# Problem
LabEscape at UIUC is a popular attraction during events such as Engineering Open House and is constantly looking for ways to improve its exhibit. One improvement it is looking to make is an LED globe capable of displaying messages and images via persistence of vision. However, many issues can arise when constructing a functional system around this phenomenon, including mechanical, timing, and electrical constraints. Examples of the problems that may be encountered include:
- Difficulty building an electrical system that functions within a rapidly spinning environment.
- Difficulty acquiring accurate live measurements of the system's spin rate.
- Difficulty translating spin rate into signals at the proper time intervals across the entire LED strip on the arch.
- Difficulty ensuring sufficient resolution for crisp imaging.
- Difficulty ensuring structural stability given the weights of the parts, a problem emphasized in spinning assemblies.

In addition to the crucial issues above, there are aesthetic issues to address: the device should be as quiet as possible, and its color range as wide as possible.

# Solution
To address the many problems in building a system of this kind, we plan to take the following measures. We will implement systems that acquire the correct spin rate of the device, combining information from accelerometers, optical sensors, and the assumed spin rate of components. We will include enough LEDs to provide clear and crisp images across the entire spin radius. We will manage external wiring and keep all relevant wiring contained to the PCB so that wires cannot tangle the device and cause catastrophic failure. To solve balancing issues we intend to use a tri-pylon approach, with three identical arches spaced around the structure to maintain balance; additionally, we will space the PCBs to distribute weight evenly. This design could be expanded with an RGB coloring system to allow multicolored display.

## Subsystem 1 - Power Unit
A 5 volt power unit allows safe operation of our LEDs, avoiding the risk of burnout. A wired power source (12 V DC) with conversion to a lower voltage for when the device should run for extended periods, plus a mobile battery pack for when mobility is desired.

## Subsystem 2 - Motor
A DC motor capable of rotating at least 600 RPM should be more than satisfactory for the goal of this project. It will be able to rotate the mass of our globe for extended periods without wearing out.

## Subsystem 3 - Microprocessor
Room for additional features should we expand the scope of the project (such as adding a speaker); the capability to route all necessary components with ease and to accommodate additional power if needed. Our microprocessor will provide Wi-Fi and Bluetooth connectivity.
## Subsystem 4 - Accelerometer/Rotational Sensors
An accelerometer gathers experimental data on the globe's current rotational speed. An optical sensor with a reference point verifies the rotational speed; alternatively, a hall-effect sensor can magnetically detect rotations and adjust light timing accordingly (see the timing sketch at the end of this entry).

## Subsystem 5 - Multi-Colored LED Band(s)
Balanced LED spacing around the PCB core ensures smooth rotation of the globe and avoids turbulence. Reliable, fast-acting LEDs not prone to burnout under continuous activation. Bands of interconnected LEDs capable of one or multiple colors.

## Subsystem 6 - Data Input
An SD card reader or similar device that accepts physical media and displays its contents in sequential order. Support for wireless data transfer so displays can be updated without stopping and reloading the device. Support for an approachable user interface in which displays can be freely edited and changed wirelessly.

## Subsystem 7 - Web Application
Provides a user-friendly method to control the LED globe. Allows users to upload media files (images, videos, GIFs) directly from their device to the globe. The web interface connects to the globe via onboard Wi-Fi/Bluetooth for seamless control. Password protection or local hosting will restrict access so only authorized users can make changes.

# Criterion For Success
This project will be successful if we meet the following criteria:
- High-resolution displayable text and imaging.
- Continuous correct functioning for 12 hours on battery power.
- Wireless customizable graphics. |
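For the timing logic in Subsystem 4, the sketch below shows the arithmetic we expect to run on each rotation pulse: derive RPM from the pulse gap, divide the revolution into image columns, and map elapsed time to the column to display. The column count is a placeholder resolution.

```python
def rpm_from_pulse_gap(gap_s: float) -> float:
    """One sensor pulse per revolution: RPM from the gap between pulses."""
    return 60.0 / gap_s

def column_interval_s(rev_period_s: float, columns: int = 120) -> float:
    """How long each LED column stays lit so `columns` slices fill a revolution."""
    return rev_period_s / columns

def current_column(t_since_pulse_s: float, rev_period_s: float,
                   columns: int = 120) -> int:
    """Map elapsed time within the revolution to the image column to display."""
    return int((t_since_pulse_s % rev_period_s) / rev_period_s * columns)

if __name__ == "__main__":
    gap = 0.1   # 0.1 s between hall-effect pulses -> 600 RPM
    print(f"{rpm_from_pulse_gap(gap):.0f} RPM, "
          f"column interval {column_interval_s(gap) * 1e6:.0f} us, "
          f"column now: {current_column(0.025, gap)}")
```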
||||||
18 | RFID Poker Board |
Darren Liao KB Bolor-Erdene Satyam Singh |
||||
# Team Members:
- Satyam Singh (satyams2)
- Darren Liao (darrenl4)
- Khuselbayar Bolor-Erdene (kb40)

# Poker
Traditional poker tables rely on the dealer and players to track cards, bets, and pots. This process can be slow and error-prone, especially in casual games where the dealer is inexperienced. Players may misread cards, deal incorrectly, or lose track of the state of the game. Live poker also lacks many of the conveniences of online poker, such as automatic hand evaluation, instant game-state updates, and precise tracking of actions. Online platforms further enhance the player experience with detailed statistics and hand histories, while live games rely entirely on player knowledge.

# Solution
An RFID-enabled poker table with tagged cards helps bridge this gap by bringing digital intelligence into the live poker experience. By embedding RFID readers in the table, the system can automatically recognize cards, display the game state in real time, and evaluate hands without error. Game-state management features such as LED indicators can track dealer position, blinds, and turn order, giving players visual cues that keep the game running smoothly. A dedicated companion app serves as the primary user interface, providing players with immediate feedback; it can also highlight blind positions and display whose action it is.

At a high level, we will stick 13.56 MHz HF RFID sticker tags onto poker cards (and possibly chips later), place small antenna pads under the "seat" zones in front of each player and a larger one in the middle for the community cards, and build a main PCB with an ESP32, a single HF reader IC, and an RF MUX switch so the microcontroller unit (MCU) can scan all pads sequentially. The MCU will resolve tag UIDs into chip denominations or card identities, then send compact state updates to a small UI over Wi-Fi in near real time.

# Solution Components

## Subsystem 1: RFID Cards and Antenna Network
Each card and chip will have a 13.56 MHz HF RFID NFC sticker (ISO 15693) attached. Antenna pads will be embedded under each player's seat zone, with a larger pad for the community cards. All pads will be routed through an RF multiplexer (e.g., a 16:1 analog switch like the HMC7992) into a single HF RFID reader IC (such as the PN532 or MFRC522). The microcontroller will sequentially energize each pad, cycling through them at a fast interval to collect tag UIDs, filter duplicates, and reliably detect card positions in near real time.

## Subsystem 2: Central Microcontroller
The system will use an ESP32-S3 (dual-core with Wi-Fi) as the central controller. It will interface with the RFID reader via SPI or I2C and control the RF multiplexer using GPIO select lines. The microcontroller will maintain an internal mapping of card and chip UIDs to their identities (rank/suit or chip denomination) and update the game state. Once compiled, the game state will be serialized into JSON and transmitted to the visualization app over HTTP for low-latency communication.

## Subsystem 3: Game Visualization App
The visualization layer will be a cross-platform application (built with Python + Flask) that receives JSON packets from the ESP32. It will display each player's hole cards and the community cards, highlight blinds and active turns, and compute win probabilities for each player using either Monte Carlo simulation or a precomputed odds lookup (see the equity sketch at the end of this entry).
As a stretch goal, the app will also store hand histories and send LED or LCD commands back to the ESP32 to synchronize the physical table indicators with the digital state.

# Criterion For Success
- 100% accuracy in tracking the cards currently in play through 5 rounds of gameplay
- Game state is accurately updated on the app within 2-5 seconds of a change
- Board can correctly differentiate between folds and players accidentally moving their cards away from the antennas

# Stretch Goals
If we have the time, we would also like to enhance the player experience by adding small LED indicators for the game state (big/small blinds, betting rounds, and an LCD screen showing the pot size) so each player can follow the game without relying strictly on the app. Tracking chips is more challenging, since reading stacked chips over RFID can be difficult; however, we would love to implement this so we can build on the idea above and display the total pot size directly on the board as well as in the app. Additionally, we could use algorithms and machine learning in the app to help players make the best decisions given the current game state. |
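A sketch of the Monte Carlo equity computation mentioned for the visualization app. It assumes the third-party `treys` hand evaluator (pip install treys; any 7-card evaluator would serve the same role), and the trial count and tie handling are our own choices.

```python
import random
from treys import Card, Evaluator  # assumed third-party evaluator

RANKS = "23456789TJQKA"
SUITS = "shdc"
FULL_DECK = [r + s for r in RANKS for s in SUITS]

def equity(hole: list[str], board: list[str], n_opponents: int = 1,
           trials: int = 5000) -> float:
    """Estimate P(win) for `hole` by dealing random run-outs and opponents."""
    ev = Evaluator()
    seen = set(hole + board)
    deck = [c for c in FULL_DECK if c not in seen]
    wins = 0.0
    for _ in range(trials):
        draw = random.sample(deck, 2 * n_opponents + (5 - len(board)))
        opp_holes = [draw[2 * i:2 * i + 2] for i in range(n_opponents)]
        full_board = board + draw[2 * n_opponents:]
        b = [Card.new(c) for c in full_board]
        my_score = ev.evaluate(b, [Card.new(c) for c in hole])
        opp_scores = [ev.evaluate(b, [Card.new(c) for c in h]) for h in opp_holes]
        best_opp = min(opp_scores)          # lower treys score = stronger hand
        if my_score < best_opp:
            wins += 1.0
        elif my_score == best_opp:
            wins += 0.5                     # count ties as half a win
    return wins / trials

if __name__ == "__main__":
    print(f"AhAd vs one random hand: {equity(['Ah', 'Ad'], []):.1%}")
```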
||||||
19 | Suction Sense - Pitch Project |
Hugh Palin Jeremy Lee Suleymaan Ahmad |
||||
Team Members:
Hugh Palin (hpalin2)
Jeremy Lee (jeremy10)
Suleymaan Ahmad (sahma20)

**Problem**
Currently, suction is unnecessarily left on for approximately 35% of the runtime in operating rooms. This results in wasted energy, increased maintenance costs, reduced equipment lifespan, and unnecessary electricity consumption. At present, there is no mechanism to detect or alert staff when suction is left running unnecessarily (such as overnight when no surgeries are in progress).

**Solution**
We propose a system composed of two hardware components and one software component. The first hardware module will attach to the medical gas shut-off valves, where it will monitor suction pressure and wirelessly transmit the data. A second hardware component will receive and store this data. On the software side, we will develop an application that takes in suction usage data and cross-references it with the hospital's operating room schedule (retrieved from the Epic system). The application will display a UI showing which operating rooms are currently in use and whether suction is active, with color coding to clearly indicate when suction has been left on unnecessarily.

**Solution Components**

Microprocessor
For this project, we plan to use an ESP32-WROOM-32 module as the microcontroller. We chose this module for its small form factor and Wi-Fi and Bluetooth capability, which gives us flexibility in how we transmit the suction data. It is also extremely inexpensive, which matters because hospitals operate on limited budgets and our module must be deployed in each operating room. Finally, the ESP32 features extensive open-source libraries, documentation, and community support, which will significantly simplify development.

Pressure Transducer
The Transducers Direct TDH31 pressure transducer will monitor suction in real time by converting vacuum pressure into an electrical signal readable by the ESP32. We chose this module for its compatibility with medical suction ranges, compact design for easy integration, and reliability in continuous-use environments. The sensor's analog output provides a simple and accurate way to track suction status with minimal additional circuitry.

BLE Shield
The HM-19 BLE module will relay suction data from the ESP32 to the Raspberry Pi. It supports Bluetooth Low Energy 4.2, providing reliable short-range communication while consuming minimal power, which is critical for continuous monitoring in hospital settings. Its compact footprint and simple UART interface make it easy to integrate with the ESP32 without adding unnecessary complexity.

Raspberry Pi Display Module
The Raspberry Pi 4 Model B paired with the Raspberry Pi 7″ Touchscreen Display will serve as the central monitoring and alert system. The Raspberry Pi was chosen for its quad-core processing power, extensive I/O support, and strong software ecosystem, making it well suited to run the suction monitoring application and integrate with the Epic scheduling system. The 7″ touchscreen allows the module to be mounted in the hallway, providing an interface where staff can quickly view operating room suction status through clear color-coded indicators and alerts. This combination also enables both visual and audio notifications when suction is unnecessarily left on, ensuring staff can respond promptly.

Software Application
The application will run on the Raspberry Pi and serve as the central hub for data processing and visualization.
It will collect suction pressure readings from the ESP32 via the HM-19 BLE module and compare this data against the hospital's operating room schedule retrieved through the Epic system. A color-coded interface on the Raspberry Pi touchscreen will clearly show which operating rooms are in use, whether suction is active, and where suction has been unnecessarily left on (a sketch of this status logic appears at the end of this entry).

**Criteria for Success:**
Our system must remain cost-effective, with a total component cost under $1,200 per unit to align with hospital budgets. The hardware module must securely attach to suction shut-off valves, remain compact, and accurately detect suction levels using the pressure transducer. Data must be reliably transmitted to a Raspberry Pi 4 Model B, which will also pull operating room schedules from the Epic system. Finally, the touchscreen application must clearly display suction status with color coding and issue real-time alerts when suction is left on unnecessarily. |
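A small Python sketch of the Pi-side status logic, with the Epic schedule stubbed out as a list of (start, end) windows and a placeholder vacuum threshold; the real application would pull the schedule from Epic and drive the touchscreen UI and alerts.

```python
from datetime import datetime, time as dtime

SUCTION_ON_KPA = -20.0   # placeholder threshold: more vacuum than this = "on"

def suction_is_on(pressure_kpa: float) -> bool:
    """Treat vacuum readings beyond the threshold as active suction."""
    return pressure_kpa <= SUCTION_ON_KPA

def room_scheduled(now: datetime, schedule: list[tuple[dtime, dtime]]) -> bool:
    """True if `now` falls inside any scheduled case for this OR.
    A real deployment would query the Epic schedule instead."""
    t = now.time()
    return any(start <= t <= end for start, end in schedule)

def status_color(pressure_kpa: float, now: datetime,
                 schedule: list[tuple[dtime, dtime]]) -> str:
    """Color code for the touchscreen UI: red = wasteful suction."""
    on = suction_is_on(pressure_kpa)
    if on and not room_scheduled(now, schedule):
        return "RED"      # suction left on with no case in progress -> alert
    if on:
        return "GREEN"    # suction on during a scheduled case
    return "GRAY"         # suction off

if __name__ == "__main__":
    sched = [(dtime(8, 0), dtime(12, 0))]
    print(status_color(-35.0, datetime(2025, 1, 6, 22, 30), sched))  # RED
```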
||||||
20 | Glove controlled mouse with haptic feedback |
Khushi Kalra Vallabh Nadgir Vihaansh Majithia |
||||
# Problem
For digital artists, traditional mousepads and trackpads are constraining and limit natural hand motion, making writing or drawing on a laptop cumbersome. Existing gesture-based input devices are often expensive, camera-dependent, or occupy significant desktop space. There is a need for a low-cost, wearable, intuitive interface that enables free-form cursor control and natural gesture-based clicking.

# Solution
We propose a wearable glove system that allows users to control a computer cursor using hand movements and perform mouse clicks with natural finger pinches. The system consists of four main subsystems:
1) Hand Motion Tracking Subsystem – captures hand orientation and motion to move the cursor.
2) Finger Gesture Detection Subsystem – detects index and middle finger pinches for left/right clicks.
3) Haptic Feedback Subsystem – provides real-time vibration feedback for click confirmation.
4) Software Subsystem – processes sensor data, maps gestures to mouse actions, and communicates with the computer.

# Components

## Subsystem 1: Hand Motion Tracking
Purpose: Detects hand orientation and movement to control the 2D cursor position.
Components: IMU sensor (accelerometer + gyroscope + magnetometer) for 3D motion tracking. Microcontroller (ESP32 or Arduino Nano 33 BLE) for sensor data processing. Custom PCB to host the IMU, microcontroller, and wiring to the glove sensors. A lightweight LiPo battery.
Description: The IMU measures acceleration and rotation of the hand. Firmware filters and converts these readings into cursor velocity and direction, providing smooth, real-time hand-to-cursor mapping (targeting under 50 ms from hand motion to cursor movement or click).

## Subsystem 2: Finger Gesture Detection
Purpose: Detects finger pinches to generate left/right mouse clicks and optional extra gestures.
Components: Flex/bend sensors on the index and middle fingers for left/right clicks. Optional thumb flex sensor for gestures like scrolling or drag. Optional capacitive/touch sensor for hover or special gestures. Pull-down resistors and conductive wiring embedded in the glove.
Description: Flex sensors detect finger bending; bending past a threshold triggers clicks. Firmware includes debouncing to prevent multiple clicks from one gesture. Optional thumb and touch sensors provide extended functionality.

## Subsystem 3: Haptic Feedback
Purpose: Provides tactile confirmation for detected gestures.
Components: Small vibration motor (coin or pager type). Driver circuitry on the PCB to control vibration intensity.
Description: The microcontroller activates a brief vibration when a click gesture is recognized, enhancing the user experience with immediate feedback that needs no visual confirmation.

## Subsystem 4: Software Subsystem
Purpose: Maps sensor data to cursor movement and gestures, and communicates with the computer.
Components: Microcontroller firmware for sensor data acquisition, filtering, and gesture detection. Optional PC-side calibration GUI (Python or C++) for sensitivity adjustment and for mapping hand motion to the screen resolution.
Description: Processes raw sensor data, converting IMU readings into cursor deltas (Δx, Δy) and flex/touch inputs into click commands (a sketch of this mapping appears at the end of this entry).
Supports USB HID or Bluetooth HID communication to the computer. Optional software smooths cursor motion, calibrates sensors, and visualizes hand gestures for testing (stretch goal).

# Criterion for Success
1) Resolution (equivalent DPI): variable, in the range 400-1000 DPI.
2) Max tracking speed: ≥50 IPS (so quick flicks don't drop).
3) Acceleration tolerance: ≥5 g without loss of tracking (users move hands fast).
4) Polling rate: ≥100 Hz (every 10 ms or better).
5) End-to-end latency: ≤20 ms (ideally closer to 10 ms).
6) Click accuracy: ≥95% reliable detection of intended clicks, false positives ≤1%.
7) Software functionality: firmware correctly processes sensor data; optional PC software handles calibration and visualization.
8) Haptic feedback response time: <40 ms after click detection.
9) Cursor control accuracy: hand movements map to cursor position within ±2% of the intended location.
10) Wearability: glove and PCB fit comfortably on the hand without restricting motion. |
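A minimal sketch of the Subsystem 4 mapping, with placeholder gain, filter, and debounce values; real firmware would feed the deltas into a USB/Bluetooth HID report instead of printing them.

```python
class GloveMapper:
    """Turn IMU rates into cursor deltas and flex readings into click events."""

    def __init__(self, gain: float = 8.0, flex_threshold: float = 0.6,
                 debounce_samples: int = 5, alpha: float = 0.3):
        self.gain = gain                      # deg/s -> pixels per sample (placeholder)
        self.flex_threshold = flex_threshold  # normalized bend that counts as a pinch
        self.debounce = debounce_samples      # samples the bend must persist
        self.alpha = alpha                    # low-pass filter coefficient
        self._fx = self._fy = 0.0
        self._held = 0

    def cursor_delta(self, gyro_yaw_dps: float, gyro_pitch_dps: float):
        """Low-pass filter the gyro rates, then scale to (dx, dy) pixels."""
        self._fx = self.alpha * gyro_yaw_dps + (1 - self.alpha) * self._fx
        self._fy = self.alpha * gyro_pitch_dps + (1 - self.alpha) * self._fy
        return int(self.gain * self._fx), int(-self.gain * self._fy)

    def click_event(self, flex_normalized: float) -> bool:
        """Fire one click only after the bend persists for `debounce` samples."""
        if flex_normalized > self.flex_threshold:
            self._held += 1
            return self._held == self.debounce   # rising edge, fires exactly once
        self._held = 0
        return False

if __name__ == "__main__":
    m = GloveMapper()
    print(m.cursor_delta(12.0, -4.0))
    print([m.click_event(0.8) for _ in range(7)])  # one True at the 5th sample
```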
||||||
21 | MULTI-SENSOR MOTION DETECTOR FOR RELIABLE LIGHTING CONTROL |
Joseph Paxhia Lukas Ping Sid Boinpally |
||||
Team Members:
- Joseph Paxhia (jpaxhia2)
- Siddarth Boinpally (sb72)
- Lukas Ping (lukasp2)

**PROBLEM:**
In offices, classrooms, and lecture halls worldwide, motion sensors are commonly used to automate lighting control. While convenient, these systems share a critical flaw: lights often switch off while people remain in the room but are relatively still, such as when typing, reading, or watching a presentation. This leads to frustration, disrupts productivity, and creates an inefficient work environment. The root of the issue is the reliance on passive infrared (PIR) sensors, which detect the infrared radiation emitted by warm bodies. Although effective for detecting large movements, PIR sensors struggle with micromotions, are prone to false triggers, and rely on fixed timeout settings. As a result, they fail to consistently recognize human presence.

**SOLUTION:**
Our approach introduces a multi-stage verification system that improves reliability while preserving the strengths of current technology. PIR sensors remain useful for their fast response to initial entry and larger movements, so we retain them for triggering lights when someone walks into a room. To overcome their limitations, we integrate a millimeter-wave (mmWave) radar sensor, which excels at detecting fine micromotions such as breathing or subtle hand movements. This introduces the following subsystems:
- Control and Processor
- Sensing System
- Lighting Interface
- Power

**Subsystem #1: Control and Processor**
Primary responsibilities:
- Take in the sensor data from the PIR and mmWave sensors.
- Process this data and decide whether the lights should stay on, gradually turn on, dim, or stay off.
- Send this decision out.

The control and processor subsystem takes in the PIR and mmWave sensor data, determines whether the lights should be off, on, gradually illuminating, or dimming, and outputs this decision as a PWM signal (setting the brightness) for the lighting subsystem to drive the lights accurately. Combining the two sensors' signals with decision logic reduces false positives and false negatives from the surrounding environment. An STM32 microprocessor will be used, as it can process these signals and is well suited to the filtering and dimming tasks.

**Subsystem #2: Sensing System**
Primary responsibilities:
- PIR: instant "walk-in" detection, coarse motion, low-power standby.
- mmWave: micromotion detection (breathing, typing), presence confirmation, and false-trigger suppression.

We use the PIR for fast wake and coarse motion and the mmWave for verification/hold and micromotion detection. Using both avoids PIR false-offs while preserving near-instant illumination.

Basic state machine / functionality (a code sketch follows below):
1. Idle / Vacant: PIR = low, mmWave = no presence → lights off, system in low-power monitoring.
2. Wake / Entrance: PIR triggers → gradual illumination; start the hold timer and an mmWave high-sensitivity window.
3. Occupied (confirmed): mmWave confirms presence (micromotion or a persistent reflection pattern) OR PIR continues to detect motion → remain ON; reset hold timers on detections.
4. Low activity: PIR goes quiet → enter the mmWave verification window. If mmWave detects micromotion within the window, remain Occupied; if mmWave sees nothing for N_verify seconds, move to Vacant.
5. mmWave and PIR quiet → lights off; enter low-power scans at a low duty cycle.
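The five states above translate almost directly into code. Below is a minimal Python sketch of that state machine; the verification window is a placeholder for N_verify, and the boolean inputs stand in for the debounced PIR and mmWave outputs.

```python
import time

IDLE, WAKE, OCCUPIED, VERIFY = "IDLE", "WAKE", "OCCUPIED", "VERIFY"
VERIFY_WINDOW_S = 30.0   # placeholder for the N_verify window

class OccupancyFSM:
    """PIR wakes the lights; mmWave verifies presence before shutting off."""

    def __init__(self):
        self.state = IDLE
        self._verify_start = None

    def update(self, pir: bool, mmwave: bool, now: float) -> str:
        if self.state == IDLE and pir:
            self.state = WAKE                    # walk-in: begin gradual turn-on
        elif self.state in (WAKE, OCCUPIED):
            if pir or mmwave:
                self.state = OCCUPIED            # motion or micromotion: stay on
            else:
                self.state = VERIFY              # PIR quiet: open verify window
                self._verify_start = now
        elif self.state == VERIFY:
            if mmwave or pir:
                self.state = OCCUPIED            # micromotion found: still occupied
            elif now - self._verify_start > VERIFY_WINDOW_S:
                self.state = IDLE                # nothing seen: dim and shut off
        return self.state

if __name__ == "__main__":
    fsm = OccupancyFSM()
    t = time.time()
    for step, (pir, mm) in enumerate([(True, False), (False, True),
                                      (False, False), (False, False)]):
        print(step, fsm.update(pir, mm, t + step * 20.0))
```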
**Subsystem #3: Lighting Interface**
Primary responsibilities:
- Gradually turn lights on and off
- Keep lights on

Our gradual illumination employs a 0-10 V analog dimmer, which is essentially a subcircuit block. This is a widely used and accepted lighting-control interface that reads a DC voltage to set LED brightness; the driver itself still runs on AC mains power. The subcircuit comprises these components:
- Microcontroller - generates a high-frequency PWM (pulse-width modulation) signal proportional to the desired brightness
- Filter - transforms the PWM into a DC voltage
- Op-amp buffer/amplifier - since our STM32 microcontroller outputs up to 3.3 V and we need to generate up to 10 V DC
- Any protection needed - resistors and diodes used as needed
- Output to the LED

This 0-10 V analog dimmer can also hold the lights on by having the microcontroller generate a constant voltage above ~1.0 V. Once people leave the room and the controller detects no one, the inverse ramp (10 → 0 V) gradually turns the lights off.

Note: We will have to do some math to find suitable slew rates for the brightening and dimming. We are considering three rates (sketched below):
1. Gradual brightening - around 0.5-1 s from off to the desired brightness.
2. First dimming - around 10 s, starting when the sensors first detect no people in the room.
3. Final shutoff - around 2 s to fade fully off, performed after the first dimming completes and the sensors still detect no activity in the room.

**Subsystem #4: Power**
1. Take power from the fixture's AC mains (120/230 VAC).
2. Use a dedicated isolated SMPS / LED-driver tap or internal LED-driver rails to create regulated DC rails for the electronics (3.3 V, 5 V, and optionally 1.8 V).
3. Keep the high-power LED driver path electrically separate from the low-voltage sensing electronics, with good isolation and filtering between them.

The system is powered from AC mains, which feeds the LED driver for constant-current illumination and also supports mains sensing and surge-protection components such as fuses and MOVs. All low-voltage electronics, including the MCU, mmWave radar module, PIR sensor, and any communication modules (Wi-Fi/BLE), operate on DC, typically 3.3 V, with some modules optionally requiring 1.8 V or 5 V. The MCU manages these peripherals and interfaces with the sensors using logic-level signals, ensuring safe and reliable operation of the sensing and control system.

**Criterion for Success**
The light should begin turning on gradually, without much delay, when somebody enters the room. While it is on and people are still present, the light should not start to dim. When the room becomes empty, the light should start to dim (after a sufficient wait time) and turn off. In addition, the system should be able to detect motion within 10-15 m of the sensor. |
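A small sketch of how the three slew rates could be generated in firmware: a linear ramp of the 0-10 V control voltage, expressed as PWM duty before the filter and op-amp stage. The timer resolution and `set_pwm_duty` are stand-ins for the STM32 registers.

```python
import time

PWM_MAX_DUTY = 1023        # placeholder resolution of the MCU timer
VOLTS_FULL_SCALE = 10.0    # 0-10 V dimming interface after the amplifier

def set_pwm_duty(duty: int) -> None:
    """Stand-in for writing the STM32 timer compare register."""
    print(f"duty={duty}")

def duty_for_volts(v: float) -> int:
    """PWM duty that, after filtering and 3.3V->10V amplification, yields v."""
    v = min(max(v, 0.0), VOLTS_FULL_SCALE)
    return round(v / VOLTS_FULL_SCALE * PWM_MAX_DUTY)

def ramp(v_from: float, v_to: float, duration_s: float, steps: int = 50) -> None:
    """Slew the dimming voltage linearly over duration_s (e.g., the 10 s dim)."""
    for i in range(steps + 1):
        v = v_from + (v_to - v_from) * i / steps
        set_pwm_duty(duty_for_volts(v))
        time.sleep(duration_s / steps)

if __name__ == "__main__":
    ramp(0.0, 10.0, duration_s=0.75)   # gradual brightening, within ~0.5-1 s
```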
||||||
22 | Adherascent |
Dhiraj Dayal Bijinepally Hardhik Tarigonda Jonathan Liu |
||||
TITLE
ADHERASCENT

Team Members:
Jonathan Liu (jliu268)
Hardhik Tarigonda (htarig2)
Dhiraj Bijinepally (ddb3)

PROBLEM
Approximately 66% of adults in the United States take prescription medication, ranging from painkillers after surgery to essential life-saving drugs. Common to all of these medications is the importance of taking them on time and on a schedule to maximize effectiveness. Adherascent is a program/device that helps individuals remember to take their medications. It is aimed primarily at older adults, but anyone who needs it can use it.

SOLUTION
Adherascent is a system composed of three subsystems: a wearable scent device, a mobile application, and a smart pillbox. The app provides the initial notification. If the notification is not addressed, the wearable escalates reminders using scent cues. The pillbox provides clear, per-compartment visual cues to indicate which medication should be taken, and it allows the user to confirm intake.

SOLUTION COMPONENTS
Adherascent consists of two main components: the phone application, which interacts with the wearable device, and the scent-releasing mechanism attached to the wearable.

SUBSYSTEM 1
The wearable device acts as a second reminder to take medication. Instead of relying solely on a single cue such as audio or visual, Adherascent uses the sense of smell to prompt action. At first, the app reminds the individual to take their medication. If the person dismisses the notification and takes their medication, the wearable device will not activate. However, if the notification is left unaddressed for over 5 minutes, the device activates. The Adherascent wearable emits a scent with increasing intensity to escalate urgency. The working idea is to implement this using clock cycles:
- 1000 cycles: scent is initially released into the air.
- 2000 cycles: scent increases in intensity.
- 3000 cycles: scent reaches maximum intensity to strongly notify the user.

This approach makes reminders multi-sensory and persistent, reducing the chance of a missed dose. We plan to emit the scent using technology similar to electronic air fresheners. The acceptable time before ramping the scent intensity depends on the nature of the individual's condition: if the medicine is urgent, the device could skip the ramp and immediately emit at maximum intensity from the start. We may also add a function in the app to adjust the time between reminders and the scent intensity (a sketch of the escalation logic appears at the end of this entry).

SUBSYSTEM 2
The mobile app manages medication schedules and reminders. It sends a notification at the correct time and provides the first opportunity for the user to act. If the user dismisses the notification, the reminder is considered addressed and no further action is taken. If the notification is ignored, the app sends a signal via Bluetooth to both the wearable device and the smart pillbox to activate. This central coordination ensures all subsystems work together and escalate reminders only when necessary.

SUBSYSTEM 3
The smart pillbox provides a direct, physical reminder by lighting up the specific compartment corresponding to the medication due at that day and time. This not only alerts the user but also guides them to the correct pill, reducing confusion and mistakes. The pillbox also includes a confirmation method (such as a button or touch input) that allows the user to acknowledge that they have taken their medication.
Once confirmation is received, the pillbox sends the acknowledgment to the app, ensuring the wearable device does not continue escalating. If no confirmation is received, the system proceeds with wearable activation, maintaining redundancy in the reminders. We are working with Professor Steven Walter Hill, Gaurav Nigam, Venkat Eswara Tummala, and Brian Mehdian. |
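A minimal sketch of the escalation timing across the three subsystems, assuming a `confirmed` callback wired to the pillbox button and an atomizer driver behind `set_scent_level`; the demo delays are seconds, where a real deployment would use the app-configured minutes.

```python
import time

GRACE_S = 5 * 60                 # app-notification grace period (5 minutes)
STEP_DELAYS_S = [0.0, 2.0, 2.0]  # demo delays between intensity steps; a real
                                 # device would space these minutes apart

def set_scent_level(level: int) -> None:
    """Stand-in for driving the atomizer at increasing intensity."""
    print(f"scent intensity -> {level}")

def escalate(dose_time_s: float, confirmed) -> None:
    """After the grace period, ramp scent intensity until intake is confirmed."""
    time.sleep(max(0.0, dose_time_s + GRACE_S - time.time()))
    for level, delay in enumerate(STEP_DELAYS_S, start=1):
        if confirmed():
            return                       # pillbox confirmation: stand down
        time.sleep(delay)
        set_scent_level(level)

if __name__ == "__main__":
    # Pretend the dose was due GRACE_S ago and is never confirmed.
    escalate(time.time() - GRACE_S, confirmed=lambda: False)
```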
||||||
23 | Drink Dispensing Robot |
Andrew Jung Ethan Cao Megan Cheng |
||||
# Title - Drink Dispensing Robot

Team Members:
- Andrew Jung (crjung2)
- Ethan Cao (ecao4)
- Megan Cheng (mxcheng2)

# Problem
Too often, we're tired or distracted and put off getting a drink of water. Those small delays add up, leaving us dehydrated and drained without even realizing it. Dehydration impacts focus, energy, and overall well-being, yet it happens so easily in our daily lives.

# Solution
Our design solves the problem by creating an ecosystem in which a delivery robot carries the user's cup to a drink-dispenser hub and then returns it to the user. It includes a Robot that can detect where the hub and the cup are, a Hub that can dispense two different liquids, and a Callback system that lets the user summon the robot to pick up the cup.

# Solution Components

## Shared Circuits

### Brushed motor driver
We will use an H-bridge driver IC, such as the DRV8841, to drive two sensored brushed motors. We may want a small filtering circuit to reduce motor EMI.

### Battery
We will run off of 9 V batteries.

## Subsystem 1 - Robot

### Chassis: mechanical part
The robot will be a 3D-printed two-wheel-drive chassis with one directly driven wheel on each side and a caster wheel.

### Microcontroller + networking
Computes the robot's next action and communicates the system state to the other components: ESP32S3.

### Drive
We will use the same brushed motor driver as the rest of the system.

### Locating the callback system
The robot will use an IR receiver to track the cup's position.

### Locating the hub
The robot will use an IR sensor to align with the hub, performing a ~45 degree sweep in the direction of the hub and aligning with the point of peak IR brightness (see the sweep sketch at the end of this entry).

### Cliff detection
A distance sensor (QRE1113) detects that there is a solid surface in front of the wheels.

## Subsystem 2 - Drink Dispenser Hub
The goal of this subsystem is to detect when the robot is underneath the nozzle. It does this by emitting a general UWB signal, then using IR to line the robot up precisely under the drink system. Once that occurs, it reads the information from the callback system and dispenses the two liquids using a pump system.
[Pump System](https://www.amazon.com/Diitao-Peristaltic-Connector-Aquarium-Analytic/dp/B0BB7KFRKJ?crid=13GKYIT7C5X5X&dib=eyJ2IjoiMSJ9.s470H6yLkoemv4HuZ9lGLwZZQg-a_7MaLY_vcFxsowj5uvlEs7obqDJx-53UV37HiWg79Pkrj576_HDzT428oA_280tYnCqexlKI3pO3ZotaWuM075bNYiPOxzd1x4JhxHi5UM6kKTd4PKZEg51rS35Ewz0Rh-Crd9eG6nivk_F9K1JjMHt19liiAfUJT_apBCN1mF6IYqDccJk0CRmCsa__1T9RhZ5zQPhhA30Hvfc.mMJwUXJHLF-ti6JAseRkx5ba7VYwh5UWYCvrFA506iY&dib_tag=se&keywords=peristaltic%2Bpump&qid=1756915929&sprefix=perstatic%2Caps%2C100&sr=8-14&th=1)

Emit UWB: [ESP32S3](https://www.digikey.com/en/products/detail/espressif-systems/ESP32-S3-WROOM-1-N4R8/16163965)

Precision docking (IR): [IR Transmitter](https://www.sparkfun.com/infrared-emitter.html), [IR Receiver](https://www.sparkfun.com/ir-receiver-diode-tsop38238.html)

Robot-arrived check: bumper with a button on it; space bar on a keyswitch (we already have both of these).

## Subsystem 3 - Callback System (Cup)
- [ESP32S3 to communicate](https://www.digikey.com/en/products/detail/espressif-systems/ESP32-S3-WROOM-1-N4R8/16163965)
- Precision docking (IR)
- User input (drink ratio): button pressed → ready to send; a dial (water & mixer, RV4NAYSD101A) gives the ratio of water to mixer.

# Criterion For Success
- The robot shall be able to locate the cup and retrieve it from the user on demand.
- The robot shall detect that the user has placed the cup on top of the robot and return to the dispenser hub.
- The dispenser will refill the cup with a mixture of two liquids at a ratio defined by the user.
- The robot will return to where it retrieved the cup. This is done on a flat tabletop with no obstructions, but the robot will avoid falling off the table. |
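A small sketch of the ~45 degree IR sweep-and-align routine the robot could use to dock with the hub. `read_ir` and `rotate` are stand-ins for the receiver ADC and motor-driver code; the demo fakes a beacon to show the peak search.

```python
import math
import random

def sweep_and_align(read_ir, rotate, steps: int = 45) -> int:
    """Sweep ~45 degrees, record IR brightness per step, return to the peak.

    read_ir and rotate are stand-ins for the IR receiver read and a
    one-degree chassis rotation through the motor driver."""
    readings = []
    for _ in range(steps):
        readings.append(read_ir())
        rotate(+1)
    peak = max(range(steps), key=lambda i: readings[i])
    rotate(-(steps - peak))          # back up from the sweep end to the peak
    return peak

if __name__ == "__main__":
    heading = {"deg": 0}
    beacon = 17  # hypothetical bearing of the hub's IR emitter within the sweep

    def read_ir() -> float:
        # Toy signal: brightest when pointed at the beacon, plus a little noise.
        return math.exp(-((heading["deg"] - beacon) ** 2) / 50) \
            + random.uniform(0.0, 0.05)

    def rotate(d: int) -> None:
        heading["deg"] += d

    print("peak at step", sweep_and_align(read_ir, rotate))
```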