Projects
# | Title | Team Members | TA | Professor | Documents | Sponsor |
---|---|---|---|---|---|---|
1 | Sound Asleep |
Adam Tsouchlos Ambika Mohapatra Shub Pereira |
Weiman Yan | Rakesh Kumar | presentation1.pdf proposal1.pdf |
|
# **Sound Asleep** **Team Members:** - Adam Tsouchlos (adamtt2) - Ambika Mohapatra (ambikam2) # **Problem** Poor sleep can have serious effects on your health, increasing the risk of conditions such as poor mental health, kidney failure, and diabetes. Slow wave sleep declines with age and is considered the most restorative stage of sleep: it is important for immune function, memory consolidation, and emotional regulation. Recent literature discusses using auditory stimulation during sleep to prolong slow wave sleep for better overall physical and mental health. Other devices use EEG technology, but most provide no auditory stimulation, and those that do are reported to be very uncomfortable. # Solution **Sound Asleep**: a non-invasive wearable that transmits EEG data to a companion app, which in turn drives the user's own Bluetooth device to deliver precisely timed auditory stimulation. Letting users choose their own Bluetooth device improves comfort during sleep. # Solution Components # Subsystem 1 – EEG Acquisition and Wearable Hardware This subsystem is responsible for acquiring the EEG signals. - EEG leads optimized for overnight use. - Wearable headband or soft cap to keep electrodes in place throughout the night. - Low-noise amplification and filtering circuitry to ensure signals are usable for real-time processing. - Small rechargeable battery to power sensors and wireless transmission. # Subsystem 2 – Wireless Transmission and Power This subsystem ensures EEG data can be reliably sent to the processing unit. - Bluetooth Low Energy (BLE) or Wi-Fi module for continuous data transfer. - Onboard microcontroller to digitize EEG signals and handle wireless protocols. - Battery management system for safe charging and overnight operation. # Subsystem 3 – Sleep Stage Classification and Signal Processing This subsystem processes EEG data in real time to detect sleep stages and identify slow wave activity. - Algorithms for sleep staging (NREM, REM, wake) using EEG features. - Slow wave detection algorithms trained/tested on pre-labeled EEG datasets. - Closed-loop timing logic to sync auditory stimulation with ongoing slow waves. - Possible algorithms to be used: **YASA slow-wave detection.** https://github.com/raphaelvallat/yasa/blob/master/notebooks/05_sw_detection.ipynb **CoSleep GitHub project.** https://github.com/Frederik-D-Weber/cosleep # Subsystem 4 – Auditory Stimulation Delivery (and App User Interface) This subsystem delivers pink noise bursts at intervals during slow wave sleep (SWS). - Mobile (or desktop) app triggers sound output through the user's paired Bluetooth device (primary option as of now). - Sound customization features via app for intensity, duration, frequency, and comfort. - Sleep session dashboard showing nightly summaries (total sleep, time in slow wave sleep, stimulation events delivered). # Criterion for Success # **Hardware** - Wearing the EEG device is considered comfortable by users. - EEG device stays attached during a full night of sleep. - EEG readings are accurately transmitted to the software. # Software - EEG readings are correctly detected and processed by the app. - Slow wave sleep stage is accurately identified. - Auditory stimulation is transmitted to the user's Bluetooth device. # Outcomes - User shows increased slow wave sleep duration and amplitude. - Improved performance on a memory test after sleeping with the device compared to without it. # References - Ngo et al. (2013). Auditory closed-loop stimulation of the sleep slow oscillation enhances memory. https://pubmed.ncbi.nlm.nih.gov/23583623/ - Bo-Lin Su et al. (2015). Detecting slow wave sleep using a single EEG signal channel. https://pubmed.ncbi.nlm.nih.gov/25637866/ |
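For the YASA option above, a minimal offline sketch of how the detector could be exercised on a pre-recorded night (the real-time closed-loop port is separate firmware work; the sampling rate and input file below are placeholders):

```python
import numpy as np
import yasa  # pip install yasa

sf = 100  # Hz; placeholder sampling rate
data = np.load("overnight_c3_uv.npy")  # hypothetical single-channel EEG, in uV

# Detect slow waves across the whole recording
sw = yasa.sw_detect(data, sf)
events = sw.summary()  # DataFrame: Start, NegPeak, PosPeak, Duration, ...

# Closed-loop idea: schedule each pink-noise burst near the predicted
# up-state, here approximated as a fixed delay after the negative peak.
for t_neg in events["NegPeak"]:
    print(f"stimulus near t = {t_neg + 0.5:.2f} s")
```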
||||||
2 | Autonomous Car for WiFi Mapping |
Avi Winick Ben Maydan Josh Powers |
Jason Jung | Arne Fliflet | proposal1.pdf |
|
# Title Autonomous Car for WiFi Mapping # Team Members: - Ben Maydan (bmaydan2) - Josh Powers (jtp6) - Avi Winick (awinick2) # Problem When moving into a new apartment, house, or office, people often place their wifi modem or wifi extender in a convenient spot without much thought. Having gone through this just last week, it made us wonder whether there was a better way to approach this and maximize the wifi strength across a home. The way most people test their wifi is by walking into a room, going to a speed-test website, running it, and repeating this over and over. This takes a lot of time, isn't very accurate, and doesn't reveal the most optimal location. We are solving the problem of automating wifi strength and speed testing in a given space. We would gather wifi data from an autonomous vehicle driving around a room, create a heat map of the signal strength to display on a computer, analyze it, and finally show the weak spots, dead zones, etc. Some motivation for why this is a good project idea: it lets you find the best spot to place a wifi extender for optimal wifi, so your Zoom meetings never cut out and your Instagram Reels/YouTube Shorts/TikTok/Angry Birds keep playing with no issue (potentially at the same time). # Solution The basic idea is to do a scan of the room. The car will have a lidar sensor and an RSSI measurement on top, continuously scanning the room for the strongest wifi signal. The lidar sensor ensures that the car will not run into anything and lets us run a SLAM algorithm for obstacle avoidance. We plan a two-scan approach so the Bluetooth connection does not interfere with the wifi measurements. In the first scan, we manually drive the car for a few minutes, gathering lidar data and sending it to an onboard Raspberry Pi, which maps out the path the car should take to explore the entire room. This is significantly less driving than the autonomous part (which splits the room into strips to drive back and forth). The onboard Raspberry Pi then streams the path back to the car as commands sent to the motor controllers; in the second scan, the car drives that path over the room and captures the wifi signal at every point in memory. Once the car is done scanning, all of the data is sent via Bluetooth back to the computer, where we can visualize it nicely and return the location of the optimal wifi signal to the user. This naturally introduces a few subsystems: (1) a location tracking subsystem that maps a location (x, y) to a wifi strength (done using SLAM), plus path generation, i.e., mapping the output of SLAM to a path the car can drive to scan the entire floor; (2) a lidar sensing + Bluetooth streaming subsystem; (3) a wifi signal measurement subsystem that records the signal strength at the car's current location; and (4) the car itself, with omnidirectional wheels and motor controllers. # Solution Components ## SLAM subsystem for location tracking + generating a path for the car to drive As we collect the lidar data, we can pass it to the SLAM algorithm using an ESP32-S3 module. This module also has wifi and Bluetooth capabilities to easily transfer the data to the computer so the user can view the heatmap. To run the SLAM algorithm, we will offload the computation to a Raspberry Pi 5, since it has a much more capable processor for running 2D SLAM.
The ESP32 in this case will be responsible for cleaning the lidar data and sending commands to the motor controller to move the car. A separate algorithm (very compute-heavy, so it runs on the Raspberry Pi) will compute the optimal path for the car to drive while gathering wifi data. ## Lidar Sensor The lidar subsystem allows the car to navigate through the room. It scans the room to detect walls and furniture, helps avoid collisions by measuring the distance to obstacles, and supports the SLAM subsystem in mapping the environment so the map can be overlaid with the wifi signal strength in all areas. This will use the RPLIDAR A1M8, connected to the ESP32 so the lidar point cloud can be stored in flash for the SLAM algorithm. ## Wifi Sensor (RSSI) Received Signal Strength Indicator (RSSI) is a measurement provided by the ESP32's wifi radio; with the module placed on top of our car, it detects the wifi signal at a location and outputs a strength measurement in dBm (decibel-milliwatts). The ESP32 grabs this data and transmits it through its Bluetooth module to the computer, which displays the data and runs a very simple linear scan to find the (x, y) location with the strongest wifi signal. This is a very simple algorithm, but we want the data transmitted to the computer so the user can visualize it. ## Car + mecanum wheels We would 3D print a car and omnidirectional wheels; the top of the car would hold the lidar sensor as well as the RSSI and SLAM modules. The ESP32 module would be placed on top and send data over Bluetooth to the motor controllers for which direction to turn, move forward, or backward. There would be different levels for the components: the chassis holding the wheels, the next level up holding the PCB and motor controllers, and the topmost level holding the ESP32 and the lidar sensor. The ESP32 provides the RSSI, and the lidar sensor gathers range data for SLAM. # Criterion For Success Before the car is able to drive, we need each of the car's wheels to be controllable by a program separately. Each mecanum wheel needs to function properly to enable multidirectional control. This consists of connecting the ESP32 module to the PCB, which is connected to the motor controllers, which are connected to the motor of each wheel. We would need our 3D printed car with omnidirectional wheels to be controllable manually from a computer by sending driving commands (using arrow keys) over Bluetooth, and to verify that it can move in all directions around a room. After testing the driving functionality, the car should be able to drive a hard-coded path to map out the wifi of the room. Once the car can be driven autonomously, we need the ability to use the lidar sensor to map out a path for the car to follow through the room without a predetermined path. This is done using the lidar sensor and offloading the difficult SLAM computation to a Raspberry Pi. This involves the following: gather lidar data -> run SLAM on the Raspberry Pi -> use the generated location and grid-based map returned by SLAM to run a path planning algorithm for the car -> start the path algorithm.
The criterion for success here is the algorithm we create to generate a path from the grid-based map returned by SLAM. Use an ESP32 chip (reading RSSI) to detect wifi strength and send it over Bluetooth to a computer, which processes the data and generates a heat map of the given space, optionally giving the user advice on where to move the modem to maximize either strength in a certain area or best general coverage. Write an algorithm that takes all the wifi heatmap data and returns an optimal spot to place the wifi extender. This is a complicated algorithm, since wifi data can be interpolated (for example, if the strength at x = 1.0 is strong and the strength at x = 0.0 is weak, then the strength at x = 0.5 is roughly medium). This means that given a 2D grid of wifi signal strength data, there are potentially infinite candidate spots for a wifi extender, since you can interpolate anywhere. Write an algorithm to find local dead spots using the heatmap. This can run on the computer to visualize results for the user, and onboard the ESP32 or the Raspberry Pi on the car to avoid mapping out sections of the room that are clearly dead spots. If it runs onboard the car, it could save a lot of time by skipping large sections of the room, but it is again computationally intensive, since it is essentially gradient ascent. |
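As a rough offline sketch of the interpolation-and-search idea above, assuming the car has already logged (x, y) positions with RSSI readings (all coordinates and values are placeholders):

```python
import numpy as np
from scipy.interpolate import griddata

# Placeholder samples: (x, y) positions in meters and measured RSSI in dBm
pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0], [3.0, 4.0], [1.5, 2.0]])
rssi = np.array([-70.0, -55.0, -62.0, -48.0, -58.0])

# Interpolate onto a fine grid -- this is the "infinite candidate spots" idea
gx, gy = np.mgrid[0:3:60j, 0:4:80j]
heat = griddata(pts, rssi, (gx, gy), method="cubic")

# Strongest interpolated signal = candidate extender spot; weakest = dead spot
best = np.unravel_index(np.nanargmax(heat), heat.shape)
worst = np.unravel_index(np.nanargmin(heat), heat.shape)
print("best spot (m):", gx[best], gy[best])
print("dead spot (m):", gx[worst], gy[worst])
```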
||||||
3 | Follow-Me Cart: App controlled smart assistant |
Alex Huang Jiaming Gu Shi Qiao |
Shengkun Cui | Arne Fliflet | other1.pdf proposal1.pdf |
|
This continues the previous post, which hit the word limit: ## Subsystem 3: Mobile App Purpose: Allow customers to control the cart via the app. Features: 1. BLE (Bluetooth Low Energy) pairing with the Raspberry Pi for secure identification. 2. Enable/disable follow-me mode. 3. Adjust the following distance and receive notifications when the cart is too far from the user. Components: 1. Customized Android app 2. BLE/Wi-Fi for control and ID verification. ## Subsystem 4: Drive Subsystem Purpose: Drive the cart. Components: 1. 12V DC gear motors. 2. Chassis: 2-wheel drive with caster support for balance. 3. Payload capacity: 5–10 kg (scaled for safety and feasibility). 4. Power system: 12V Li-ion battery pack with buck converters for 5V (Pi) and 3.3V (sensors/ESP32). # Criterion For Success 1. The cart follows the user within 1–2 m, with >90% accuracy in aisle-like environments. 2. Our mobile app should connect to the cart within 5 seconds, respond to any commands sent by users via the app within 2 seconds, and allow the user to start/stop at any time and adjust the parameters accordingly. 3. The cart follows only when both the paired phone and marker/ID are detected, preventing false tracking. 4. The cart stops for obstacles >10 cm wide within 1 m. 5. The cart should speed up when it is far from the user and slow down as it gets near, avoiding all possible obstacles smoothly throughout (see the sketch after this list). 6. The cart safely carries 5–10 kg without tipping. 7. Max speed capped at ~1.5 m/s (≈3.3 mph). 8. Operates for at least 1 hour per charge at walking speed (0.5–1.5 m/s). |
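A minimal sketch of the distance-proportional speed rule in criterion 5, using the 1–2 m band and 1.5 m/s cap from the list above; the gain is an assumed placeholder:

```python
def cart_speed(distance_m: float,
               target_m: float = 1.5,  # midpoint of the 1-2 m follow band
               gain: float = 1.0,      # assumed proportional gain
               v_max: float = 1.5) -> float:
    """Proportional speed command: faster when far, stopped inside the band."""
    error = distance_m - target_m
    if error <= 0.0:
        return 0.0                     # at or inside the follow distance: hold
    return min(gain * error, v_max)    # capped at the ~1.5 m/s safety limit

# Example: 4 m away -> full speed; 1.8 m away -> gentle approach
print(cart_speed(4.0), cart_speed(1.8))
```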
||||||
4 | Champaign MTD Bus Tracker Map |
Amber Wilt Daniel Vlassov Ziad AlDohaim |
Wesley Pang | Arne Fliflet | other1.pdf proposal1.pdf |
|
# Champaign MTD Bus Tracker Map # Team Members: - Amber Wilt (anwilt2) - Daniel Vlassov (dvlas2) - Ziad Aldohaim (ziada2) # Problem Champaign has a very large and complex bus system through the MTD. It can be hard for students to know when buses are coming while they are in buildings such as the ECEB, since bus times are only displayed at the stops. Furthermore, these buses can be late or early, causing students to miss their bus or not arrive at their destination on time. # Solution To fix this, we will design a large display that shows real-time locations of all buses (color-coded using RGB) in the surrounding campus area. Students in buildings can use it to easily visualize where the bus they want to take is currently located, making it easier to time when to leave classrooms and when to expect their ride. The display will update the locations approximately every 30 seconds and will light up every LED along a bus route every few minutes to make it easier for students to see which route they need to take. Furthermore, the system will include various light settings (theme/brightness). # Solution Components This system will mainly include the subsystems of the LED matrix, the controller, and the power supply. ## Subsystem 1 - LED Matrix The LED matrix will be located on a large PCB or 3D printed map of the city (cost dependent). This subsystem will be made of addressable LEDs and photoresistors to automatically adjust LED intensity, and will be controlled by the microcontroller (to indicate positions). ## Subsystem 2 - Microcontroller The microcontroller will use Wi-Fi to access the MTD API to gather real-time bus data, and will individually address each LED within the matrix. Furthermore, it will control/communicate with other modules/displays in the system, such as a real-time clock or menu. The microcontroller will be an ESP32. ## Subsystem 3 - Power Supply The power supply will provide ample power to a large number of LEDs (and the entire system). We will need to include a buck converter to step the supply voltage down to a level usable by the LEDs. # Criterion for Success To demonstrate the success of our project, we will need to prove the accuracy of the data we are displaying (how accurate the shown bus timings/locations are). Additionally, we will need to show that the data is easy to interpret for a user and makes the bus system easier to use. |
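A minimal sketch of the microcontroller's data path, written in Python for clarity (the ESP32 firmware itself would be C or MicroPython); the endpoint URL, JSON field names, and map calibration constants are assumptions to verify against MTD's developer documentation:

```python
import requests

API_KEY = "YOUR_MTD_KEY"  # placeholder; keys come from the MTD developer site
URL = "https://developer.cumtd.com/api/v2.2/json/getvehicles"  # assumed endpoint

def fetch_positions():
    """Yield (route_id, lat, lon) for every active bus (field names assumed)."""
    resp = requests.get(URL, params={"key": API_KEY}, timeout=5)
    resp.raise_for_status()
    for v in resp.json().get("vehicles", []):
        yield v["trip"]["route_id"], v["location"]["lat"], v["location"]["lon"]

def latlon_to_led(lat, lon, lat0=40.120, lon0=-88.245, scale=1500, cols=64):
    """Linear map from GPS coordinates to an LED index (placeholder calibration)."""
    row, col = int((lat0 - lat) * scale), int((lon - lon0) * scale)
    return row * cols + col
```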
||||||
5 | Navigation Vest Suite For People With Eye Disability |
Haoming Mei Jiwoong Jung Pump Vanichjakvong |
Rishik Sathua | Cunjiang Yu | proposal1.pdf |
|
# Navigation Vest Suite For People With Eye Disability Team Members & Experiences: - Jiwoong Jung (jiwoong3): Experienced in machine learning and some embedded programming. Has worked on research projects and internships requiring expertise in machine learning, software engineering, web development, and app development, and has some experience with embedded programming for telemetry. - Haoming Mei (hmei7): Experienced in embedded programming and PCB design. Has worked on projects such as lighting, accelerometers, power converters, a high-FET board, and motor control for an RSO, all involving an understanding of electronics, PCB design, and programming STM32 MCUs. - Pump Vanichjakvong (nv22): Experienced with cloud, machine learning, and embedded programming. Has completed various internships and classes focused on AI, ML, and cloud, and has experience with telemetry and GPS systems from an RSO requiring expertise in SPI, UART, GPIOs, etc., with STM32 MCUs. # Problem People with eye disabilities often face significant challenges navigating in their daily lives. Currently available solutions range from white canes and guide dogs to AI-powered smart glasses, many of which are difficult to use and can cost as much as $3,000. Additional problems arise, especially in crowded urban areas, including injuries from collisions with obstacles or people, or from hazardous terrain. According to the U.S. Department of Transportation's 2021 crash report, 75% of pedestrian fatalities occurred at locations that were not intersections. We therefore aim to design a navigation vest suite to help people with eye disabilities deal with these issues. https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/813458.pdf # Solution We have devised a solution to help visually impaired individuals with daily activities such as walking between two places or navigating a building with multiple obstacles. Our focus will be outdoor navigation in urban areas, where obstacles, terrain, and pedestrians are the main hazards; if time permits, we will also handle traffic and crosswalks. To achieve this, we will utilize 3 main components: - Lidar sensors to help the wearer with depth perception tasks - Vibration motors to aid navigation (turning left/right) - Magnetometer to enable more accurate GPS coordination All the above components will feed into the sensor fusion algorithm. # Solution Components ## Subsystem 1 ### Microcontroller System We plan to use an STM32 microcontroller as the main processing unit for sensor data from the lidar sensors (magnetometer and GPS if time permits), object detection data from the **machine learning system**, and direction data from the navigation app (our design, running on a phone). We will use this information to generate vibration in the direction the wearer should navigate. ### Power Systems The whole system will be powered by a battery module containing 5V battery cells. It will be connected to the **Microcontroller System**, which will also supply the **Machine Learning System**. We will implement the necessary power protection, buck converters, regulators, and boost converters as required per sensor or component. - Battery Module Pack - Buck Converter (Step-Down) - Boost Converter (Step-Up) - Voltage Regulator - Reverse Polarity Protection - BMS ## Subsystem 2 ### Navigation Locator Systems Our navigation system will consist of an app that connects directly to the Google Maps API, paired with our existing sensors.
We plan to utilize a magnetometer sensor, which will indicate the direction the user is facing (north, south, east, west, etc.). To pinpoint which direction the wearer needs to head, our LiDAR sensors will enable us to run SLAM (Simultaneous Localization and Mapping) and build a map of the environment. With these systems in place, we can assist users in navigation. To deal with terrain hazards, we will use the LiDAR sensors to detect elevation changes the user needs to negotiate. - LiDAR - Android App (connected to the Google API) - Magnetometer - Vibration Motors Extra features (if time permits): - Audio output (text-to-speech generated on the Raspberry Pi 5 and sent to the microcontroller through an AUX cable) ## Subsystem 3 ### Machine Learning Systems - We plan to deploy an object detection model on a 16 GB Raspberry Pi 5 (already acquired) along with a Pi camera to detect objects, signs, and people on the road; detections will be fed to the microcontroller - Raspberry Pi 5 - Pi Camera a) The image/video model is expected to have fewer than 5 billion parameters with convolutional layers, running on-device on the Raspberry Pi. The Pi's processing power is obviously limited, but we plan to accept the challenge and find ways to improve the model within the limited hardware capabilities. b) If subtask a) becomes arduous or time-consuming, we can utilize API calls or free open-source models to process the image/video in real time if the user enables the feature. The device pairs with the phone via the Wi-Fi chip on the Raspberry Pi to enable the API call. Some of the best candidates we can think of are the YOLO family of models, the MMDetection and MMTracking toolkits, or Detectron2, developed by Facebook AI Research, which supports real-time camera feeds; a sketch using one of these follows at the end of this proposal. # Criterion For Success ### Navigational Motor/Haptic Feedback 1) The haptic feedback (left/right vibration) should match the navigation directions received from the app (turn left/right). 2) The system should detect obstacles, stairs, curbs, and people. 3) The system should detect intersections ahead, and the point of turn, from the lidar data. 4) The user should be able to follow the designed haptic feedback patterns (tap front for walking forward, tap right to go right, etc.). ### Object Detection 1) Using the Illinois Rules of the Road and the Federal Manual on Uniform Traffic Control Devices guidelines, we will use a total of 10–30 distinct pedestrian road signs to test object detection capability. We will use formal ML testing methods like geometric transformations, photometric transformations, and background clutter. Accuracy will be measured by the general equation (number of correctly classified samples)/(total number of samples). 2) The ML model should be able to detect potential environmental hazards including, but not limited to, obstacles, stairs, curbs, and people. We plan to gather multiple hazard scenarios via online research, surveys, and in-person interviews. Based on the collected research, we will build solid test cases to ensure that our device can reliably identify potential hazards. More importantly, we plan to design strict timing and accuracy metrics. 3) The ML model should be able to detect additional road structures such as curbs, crosswalks, and stairs to provide comprehensive environmental awareness.
We will test at different crosswalks located on the north quad, using the accuracy measurement techniques mentioned in 1). ### Power and Battery Life 1) The device should support at least 3 hours of battery life. 2) The device should comply with the IEC 62368-1 safety standard, which lays out classes such as ES1, ES2, and ES3 that consider electrical and flame hazards. |
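As referenced in Subsystem 3 b), a minimal detection-loop sketch using one of the candidate model families (Ultralytics YOLO); the model file, camera index, and hazard label set are assumptions:

```python
import cv2
from ultralytics import YOLO  # one of the candidate families named above

model = YOLO("yolov8n.pt")    # nano variant, a plausible fit for a Pi 5
cap = cv2.VideoCapture(0)     # assumes the Pi camera shows up as /dev/video0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    for box in result.boxes:
        label = model.names[int(box.cls)]
        if label in {"person", "stop sign"}:  # hazards to forward to the MCU
            print(label, round(float(box.conf), 2))
```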
||||||
6 | E-Bike Crash Detection and Safety |
Adam Arabik Ayman Reza Muhammad Daniyal Amir |
Eric Tang | Arne Fliflet | proposal1.pdf |
|
# Title E-Bike Crash Detection and Safety # Team Members: - Ayman Reza (areza6) - Muhammad Amir (mamir6) - Adam Arabik (aarabik2) # Problem E-bikes are gaining popularity as a sustainable and convenient mode of transportation. The main issue with the growing number of e-bikes is the safety of the rider and those around them. If a rider gets into a crash, there is no automatic shutoff for the electrical systems on an e-bike. This means that the bike's motor can remain on, potentially causing more harm to the rider or the surrounding environment. Current safety systems installed on electronic devices typically focus only on post-crash communication, such as sending alerts to contacts or calling emergency services. There is currently no system that can detect a crash in real time and instantly cut power to the bike's electrical systems to improve safety. # Solution Our group's solution is a crash detection system with a motor shutoff that can integrate with e-bike systems. This device will use its own sensors and electrical measurements to recognize when a crash occurs. Once a crash is detected, the system will cut all power to the motor, ensuring that the bike can no longer accelerate even if the throttle is still engaged. To reduce false positives, the system will use a module that combines data from multiple sensors to provide a more accurate assessment of whether a cutoff is needed (a sketch of this fusion logic follows this proposal). In addition, the design will include a manual override that allows the rider to turn the motor back on and continue operating the bike normally. The goal of this project is to create a crash protection system that reacts quickly to its environment to prevent further harm during a crash. # Solution Components ## Subsystem 1: Crash Detection Sensors This subsystem is responsible for detecting sudden deceleration, impacts, or abnormal electrical behavior that indicates a crash. The design will use an accelerometer and gyroscope, such as the MPU-6050, to monitor motion and angular velocity. A current sensor such as the ACS712 will be used to detect sudden changes in motor current that occur during impact. An optional vibration or impact sensor may be added to confirm collision events and improve reliability. ## Subsystem 2: Control and Processing Unit This subsystem will process the inputs from the sensors, run the crash-detection algorithm, and issue the motor cutoff command. The system will be built around a microcontroller, such as an STM32 or ESP32, which has the processing capability to fuse sensor data and apply threshold-based decision making. The microcontroller will also handle input from the manual reset and override switch to allow the rider to re-enable the system if a false detection occurs. ## Subsystem 3: Motor Cutoff Circuit This subsystem physically disconnects the motor power when a crash is detected. A MOSFET-based switch will be used to cut power from the e-bike motor controller. The cutoff circuit will be designed to handle the motor's current and respond within milliseconds. Once triggered, the motor will remain disabled until the system is reset by the rider. ## Subsystem 4: Testing and Validation Setup This subsystem is focused on verifying the accuracy and timing of the system under controlled and real-world conditions. Initial bench testing will involve tapping the sensor and measuring how quickly the motor cutoff occurs using an oscilloscope. Controlled crash simulation will be performed by stopping the spinning wheel or using drop tests to mimic an impact.
Field tests will involve riding the e-bike over curbs, bumps, and rough pavement to ensure the system doesn't false-trigger during normal use. Once a crash has been detected, the motor can be re-enabled using the reset button. # Criterion for Success The rider must be able to manually cut and enable power to the motor at any time using switches on the electrical systems. If the bike tips over onto its side, the motor must turn off automatically. If the bike comes to an immediate stop that indicates a crash, the motor must turn off automatically. The system needs to be able to work with e-bike motors. |
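A minimal sketch of the two-of-three sensor fusion idea from the Solution section; all thresholds are placeholders to be set during bench and field testing:

```python
import math

# Assumed thresholds; real values would come from the bench and field tests
ACCEL_CRASH_G   = 4.0   # sudden deceleration / impact, in g
TILT_CRASH_DEG  = 60.0  # bike tipped onto its side
CURRENT_SPIKE_A = 25.0  # abnormal step in motor current

def crash_detected(ax_g, ay_g, az_g, roll_deg, motor_a, prev_motor_a):
    """Fuse MPU-6050 motion data and ACS712 current into one cutoff decision."""
    impact = math.sqrt(ax_g**2 + ay_g**2 + az_g**2) > ACCEL_CRASH_G
    tipped = abs(roll_deg) > TILT_CRASH_DEG
    spike  = abs(motor_a - prev_motor_a) > CURRENT_SPIKE_A
    # Require two of three signals to suppress false triggers on rough roads
    return (impact + tipped + spike) >= 2
```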
||||||
7 | Omnidirectional Drone |
Dhruv Satish Ivan Ren Mahir Koseli |
Jason Zhang | Arne Fliflet | proposal1.pdf |
|
# Omnidirectional Drone Request for Approval Team Members - Dhruv Satish (dsatish2) - Ivan Ren (iren2) - Mahir Koseli (mkoseli2) # Problem The issue of aerial maneuvering has become an increasingly important consideration in the new age of drone deliveries, drone imaging, and the need for automation in the fields of agriculture, construction, surveying, remote monitoring, and more. The current standard of drone technology remains limited mostly to quadcopters, a technology that has matured enough to allow complex directional motion and extreme speed and stability. However, these vehicles have a notable lack of movement decoupling, with translational and rotational motion tied together. In many speed-focused applications this issue is trivial, as most movement systems can compensate to move in 6-DOF space by applying different amounts of power to different motor configurations. But in precision applications, or in situations that require a certain orientation to be held, decoupling the rotational and translational degrees of freedom gives the drone unprecedented control. Considering a few simple scenarios: for precise filming, construction, or especially sensitive natural or urban areas, full control over movement means the ability to hold an angle for a shot, to apply paint at any angle while moving around objects through very tight spaces, or to survey wildlife or urban areas without interfering with the natural environment. In any situation not prioritizing speed or power, an omnicopter would provide significantly improved flexibility and control. # Solution Our solution is inspired by existing omnicopter designs such as the ArduCopter and ETH Zurich's project, but we plan to design, develop, and test our project completely independently. We will use existing resources to design the frame of the drone as either a 6- or 8-motor design. Aside from the frame, other components we plan to use are our own custom BLDC motor controller, a custom flight controller board with telemetry from an IMU, GPS unit, and barometer, and potentially a regenerative braking system. # Solution Components - STM32466ZE (MCU) - RP2040 (BLDC Motor Controller MCU) - DRV8300 (Gate Driver) - Neo M8N (GPS) - ICM-42670-P (IMU) - BMP390 (Barometer) - TLV493D (Magnetometer) - Any 2200kV BLDC Motor - 4S LiPo (Battery) # Subsystem 1 - BLDC Motor Controller The motor drive system will contain all required electronics to power and control the motors, including the ESCs, motors, current and voltage sensors, battery management system, and a central microcontroller that interfaces with the ESCs and remote controller. The system will be built to be modular, with each ESC-and-motor addition being its own module that is easily added to the overall electrical schematic, ensuring flexibility in motor configuration depending on power usage during testing. Within the motor drive system, the battery management system and regenerative braking feature will store away extra power produced by the large current and wattage spikes that arise from the motors' inductive nature. # Subsystem 2 - Frame The frame of the omnicopter will take the form of either a 6- or 8-motor configuration depending on power draw, stability, and feasibility testing after the electronics have been developed. The design will place an emphasis on easy fabrication using quick prototyping methods like FDM 3D printers, while also remaining lightweight and structurally sound.
The goal here is for the drone to be easily manufacturable by hobbyists who would like a robust omnidirectional drone with all required functionality and maximum tinkerability. To this end, we've already found research papers that document optimal motor placements for 6- and 8-motor omnicopter designs, as well as the physics for powering these motors in various orientations. # Subsystem 3 - Flight Control + Telemetry The controls and communications side will handle reading and writing data between the drone and the remote controller, as well as converting movement signals into motor power combinations to enable separate translational and rotational movement (see the allocation sketch after this proposal). To do this conversion, we will write our own custom firmware that reads data from the gyroscope, IMU, barometer, and motor feedback to dictate the PWM and direction for each individual motor. The remote controller will be a simple dual-joystick system, with each joystick handling either rotational or translational motion. Depending on time constraints, trajectory planning and more can also be explored on this side of the project using the drone's initial position, motor velocities, and orientation. # Criterion for Success The final solution will consist of a multi-rotor drone capable of separate rotational and translational flight, powered by onboard battery packs, and responding to inputs from a remote controller with two joysticks controlling rotation and translation independently. |
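As a sketch of the conversion step named in Subsystem 3, mapping a decoupled 6-DOF command to per-motor thrusts, under an assumed stand-in geometry (a real frame would use the motor placements from the cited papers):

```python
import numpy as np

# Stand-in geometry: unit thrust directions d_i and mounting positions r_i
# for six motors (placeholder values, not a real frame layout).
rng = np.random.default_rng(0)
d = rng.normal(size=(6, 3)); d /= np.linalg.norm(d, axis=1, keepdims=True)
r = rng.normal(size=(6, 3))

# Rows 0-2 map motor thrusts to net force, rows 3-5 to net torque
A = np.vstack([d.T, np.cross(r, d).T])  # 6x6 allocation matrix

def motor_thrusts(force_xyz, torque_xyz):
    """Decoupled 6-DOF command -> per-motor thrusts via least squares."""
    wrench = np.concatenate([force_xyz, torque_xyz])
    return np.linalg.lstsq(A, wrench, rcond=None)[0]

# Example: pure hover force, zero torque -> translation without rotation
print(motor_thrusts(np.array([0.0, 0.0, 9.81]), np.zeros(3)))
```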
||||||
8 | Hybrid Actuation Arm Exoskeleton |
Alan Lu Rubin Du |
Haocheng Bill Yang | Cunjiang Yu | proposal1.pdf |
|
**Team** Alan Lu -- jialin8 Rubin Du -- rd25 **Problem** Lifting and carrying heavy objects is a common but physically demanding task faced in both personal and industrial environments. Whether it is a person at home carrying groceries or a logistics worker handling cargo, repetitive lifting puts stress on the musculoskeletal system and can result in fatigue, reduced productivity, and even long-term injuries. Existing exoskeleton solutions often focus on industrial use, but they suffer from limited backdrivability, high weight, or overly complex designs that prevent practical everyday use. A lightweight, safe, and efficient solution is needed to reduce the physical burden of lifting while maintaining user freedom of movement. **Solution** Our team proposes the development of a wearable exoskeleton system designed to assist users in lifting objects of up to 10 kilograms with minimal effort. The system employs a hybrid actuation strategy that combines the strengths of both a BLDC motor and a servo motor: the BLDC provides the torque required for large-angle lifting motions, while the servo supplies stable holding torque to maintain the lifted position without excess energy drain. The BLDC drives a 64:1 planetary gear set to amplify torque, and the servo motor drives a movable linkage system to create sufficient mechanical advantage to further reduce the load on the motor. A detachable drivetrain allows the user to disengage the actuation system, enabling free arm movement when lifting support is not required. The skeleton itself is lightweight, manufactured using carbon-fiber-reinforced nylon (PA-CF), ensuring durability and comfort. This modular design starts with elbow actuation and can be scaled to include shoulder actuation, broadening its application. **Solution Components** **Subsystem 1: Mechanical Skeleton and Drivetrain** - Lightweight PA-CF composite structure, under 3 kg excluding the battery. - Hybrid drivetrain using a BLDC with planetary gearing for motion and a servo motor for holding. - Drivetrain disengagement mechanism for free arm movement. - Movable armor integrated with a linkage system on the drivetrain that moves the upper-limb armor to avoid structural interference. **Subsystem 2: Actuation and Power System** - Actuated by a BLDC + servo combination for efficiency and safety. - Powered by a 6S LiPo battery (~200 Wh), providing several hours of continuous assistance. - Custom PCB with DC-DC buck converters for peripheral loads and power distribution. - Thermal management through ventilation and optional forced convection. **Subsystem 3: Control and Signal Processing** - Joint actuation regulated through PID controllers. - User intent detected via EMG sensors integrated into the arm. - Signal conditioning pipeline: Kalman filter → Chebyshev low-pass filter → controller input (a filter-and-PID sketch follows this proposal). - Optional manual override via a simple forearm-mounted control panel. - Microcontroller and peripheral signals integrated on a customized PCB/FPGA. **Subsystem 4: Peripherals** - Ambient armor lighting will be integrated into the shell of the skeleton for aesthetics. - Ventilation port openings will be controlled by microservos to ensure good heat dissipation. - A manual control panel will be placed on the lower-limb skeleton for manual operation and emergency switches. - TPU-based soft pads inside the skeleton to provide a comfortable experience for the user. **Scalability and Modularity** - The initial prototype targets elbow actuation.
- Design is scalable to include shoulder actuation grounded to chest armor. - The modular approach ensures a meaningful demonstration even if full-body integration is not achieved. **Criterion for Success** The final solution will be a wearable exoskeleton capable of assisting the user in lifting and holding objects up to 10 kg through a dual-actuation BLDC–servo system with a detachable drivetrain for free arm movement, powered by an onboard 6S battery, lightweight (under 3 kg excluding the battery), and controlled via EMG signals or a manual override panel to ensure safe, efficient, and natural operation. |
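A minimal sketch of the Chebyshev low-pass stage and PID controller from Subsystem 3 (the upstream Kalman stage is omitted); the sample rate, filter order/cutoff, and gains are placeholders:

```python
import numpy as np
from scipy.signal import cheby1, lfilter

FS = 1000.0  # Hz; assumed EMG sample rate
# Chebyshev type-I low-pass (order 4, 1 dB ripple, 5 Hz cutoff) to extract a
# smooth envelope from rectified EMG
b, a = cheby1(4, 1, 5.0 / (FS / 2), btype="low")

def emg_envelope(raw: np.ndarray) -> np.ndarray:
    return lfilter(b, a, np.abs(raw))

class PID:
    """Textbook PID for the joint controller; gains are placeholders."""
    def __init__(self, kp, ki, kd, dt=1.0 / FS):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0
    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```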
||||||
9 | Ant Weight 3-D Printed BattleBot |
John Tian Mig Umnakkittikul Yanhao Yang |
Gayatri Chandran | Rakesh Kumar | proposal1.pdf |
|
# Ant Weight 3D Printed BattleBot Competition Team Members Yanhao Yang (yanhaoy2) Yunhan Tian (yunhant2) Mig Umnakkittikul (sirapop3) # Problem We will design a 3D printed BattleBot to enter the competition organized by Professor Gruev. To enter the competition, we must meet the following requirements: - BattleBot must be under 2 lbs. - BattleBot must be made only of these materials: PET, PETG, ABS, or PLA/PLA+. - BattleBot must be controlled by PC via Bluetooth or Wi-Fi. - BattleBot must have a custom PCB that holds a microprocessor, Bluetooth or Wi-Fi receiver, and H-bridge for motor control. - BattleBot must have a fighting tool activated by a motor. - BattleBot must have an easy manual shutdown and automatic shutdown with no RF link. - BattleBot will adhere to the rules on the NRC website. Our overall goal is to design, code, and build a war robot capable of thriving in the robot battle competition. # Solution We will build a 2-lb, 3D printed BattleBot with a front-hinged lifting wedge (shovel) as the weapon to flip and destabilize other robots. The main structure will be ABS for toughness, PLA for non-critical connectors, and PETG around the power system and microcontroller for heat resistance. Control is via PC over Wi-Fi or Bluetooth using an ESP32 microcontroller. The bot will have at least three motors: two DC motors driving the robot's wheels for mobility, and one geared lifter motor for the shovel, controlled through H-bridge drivers. # Solution Components ## Microprocessor We will use the ESP32-S3-WROOM-1-N16 for our BattleBot because it combines built-in Wi-Fi and Bluetooth, eliminating the need for separate modules. Its dual-core processor and ample RAM/flash provide sufficient power to handle motor control, PWM generation, weapon actuation, and sensor processing simultaneously. Its weight (6.5 g) is ideal for a 2-lb bot, and it supports many peripherals. ## Attack Mechanism To attack, destabilize, and flip opponent bots, we will use a front-hinged lifting wedge ("shovel") as our primary weapon. The wedge will be 3D printed in PETG for impact resistance, reinforced at hinge and linkage points to withstand stress. It will span about 50–70% of the bot's width and feature a low, angled tip to slide under opponents effectively. A small, geared lifter motor will actuate the wedge through a lever linkage, which amplifies the motor's torque enough to lift a 2-lb target. ## Mobility System We will use four small wheels (2.25''), with the two rear wheels powered by high-torque 600 RPM, 12V DC motors. The smaller wheels lower the ride height of the bot, giving it a lower center of gravity, which improves stability during combat and reduces the chance of being flipped, while still providing solid ground traction. The motors strike a good balance between speed and torque, offering sufficient pushing power to maneuver our heavily armored bot effectively. ## Power System We will use Lithium Polymer (LiPo) batteries, 4S 14.8V 750 mAh, as the higher voltage may be required for the weaponry. LiPo batteries are significantly lighter than NiCd, provide more power, and save space. Additionally, we will integrate a motor current sensor (e.g., INA219 or ACS712) into the motor driver circuits to monitor current draw. The ESP32 will read these values in real time, allowing us to detect stalling conditions and activate manual/automatic shutdown to protect motors and electronics (see the stall-detection sketch after this proposal).
## Bot Structure Materials We will use ABS for the main bot structure, as it offers sufficient strength and a good balance between durability and printability. PLA will be used for general-purpose parts, such as inner connection pieces, where high strength is not required. Finally, PETG will be used around the power system and microprocessor to provide additional heat resistance. # Criterion for Success The project will be considered successful if: - The BattleBot can be fully controlled remotely by PC, including movement and wedge activation. - The wedge lifter and drive motors operate reliably, capable of destabilizing or flipping a 2-lb opponent. - Manual and automatic shutdowns function correctly, independent of wireless communication. |
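The stall-detection logic mentioned in the Power System section reduces to a small loop; a Python sketch of the decision rule (the actual ESP32 firmware would be C, and both thresholds are placeholders):

```python
STALL_AMPS = 8.0  # assumed drive-motor stall threshold
STALL_MS   = 300  # sustained overcurrent window before cutoff

def stall_monitor(current_samples_ma, period_ms=10):
    """Yield True once current stays above threshold long enough to cut power."""
    over_ms = 0
    for ma in current_samples_ma:
        over_ms = over_ms + period_ms if ma / 1000.0 > STALL_AMPS else 0
        yield over_ms >= STALL_MS

# Example: 20 normal samples, then a sustained 9 A stall trips at sample 49
readings = [2000] * 20 + [9000] * 40
print([i for i, trip in enumerate(stall_monitor(readings)) if trip][0])
```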
||||||
10 | NeuroBand |
Arrhan Bhatia Vansh Vardhan Rana Vishal Moorjani |
Wenjing Song | Cunjiang Yu | proposal1.pdf |
|
# Problem As LLM-based voice assistants move onto AR glasses, interacting by voice is often impractical in public (noise, privacy, social norms). Existing AR inputs like gaze/head pose can be fatiguing and imprecise for pointer-style tasks, and camera-based hand-tracking ties you to specific ecosystems and lighting conditions. We need a device-agnostic, silent, low-latency input method that lets users control AR (and conventional devices) comfortably without relying on voice. # Solution Overview We propose a two-band wrist/forearm mouse that connects as a standard Bluetooth HID mouse and operates in virtual trackpad mode: * A wrist band (Pointing Unit) uses an IMU to estimate pitch/roll relative to a neutral pose and maps that orientation to a bounded 2D plane (absolute cursor control). A clutch gesture freezes/unfreezes the cursor so the user can re-center their wrist naturally. * A forearm band (Gesture Unit) uses surface EMG electrodes over the forearm muscle belly to detect pinch/squeeze gestures for clicks, drag, right-click, and scroll. * The wrist band is the host-facing device (Bluetooth HID). The forearm band communicates locally to the wrist band (tether or short-range wireless) for low added latency. * Initial design focuses on pitch/roll; yaw is not required for trackpad mode. # Solution Subsystems ## 1 — Wrist Band (Pointing Unit) * Wrist-mounted inertial sensing to estimate stable pitch/roll relative to a neutral pose. * Lightweight fusion/filtering for smooth, low-noise orientation signals suitable for absolute cursor mapping. * Local state for clutch (engage/hold/release) and pointer acceleration/limits as needed (see the mapping sketch after this proposal). ## 2 — Forearm Band (Gesture Unit) * Noninvasive EMG sensing over forearm muscle groups associated with finger pinches. * Basic signal conditioning and thresholding to convert muscle activity into discrete actions (left click, right click, drag, scroll). * Brief per-user calibration to set comfortable sensitivity and reduce false triggers. ## 3 — Inter-Band Link & Firmware * Local link from the forearm band (gesture events) to the wrist band (pointing and HID reports). * Embedded firmware to read sensors, perform fusion/gesture detection, manage clutch, and assemble standard Bluetooth HID mouse reports to the host. * Emphasis on responsiveness (low end-to-end latency) and smoothness (consistent cursor motion). ## 4 — Power, Safety, and Enclosure * Rechargeable batteries and simple power management sized for day-long use. * Electrical isolation/protection around electrodes for user safety and comfort. * Compact, comfortable bands with skin-safe materials; straightforward donning/doffing and repeatable placement. # Criterion for Success * Pairs as a standard BLE mouse and controls the on-screen cursor in virtual trackpad mode. * Supports left click, right click, drag, and scroll via gestures, with a working clutch to hold/release cursor position. * End-to-end interaction latency low enough to feel immediate (target: sub-~60 ms typical; Apple's Magic Mouse 2 has a latency of ~60 ms before motion is reflected on screen). * Pointer selection performance on standard pointing tasks comparable to a typical BLE mouse after brief calibration. * Minimal cursor drift when the wrist is held still with the clutch engaged. * High true-positive rate (>= 90%) and low false-positive rate for click gestures during normal wrist motion. * 4 hours of battery life on a single charge. * Stable wireless operation in typical indoor environments at common usage distances (up to 2 meters). |
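A minimal sketch of the absolute pitch/roll-to-cursor mapping with the clutch behavior described above; the wrist range and screen size are placeholders:

```python
RANGE_DEG = 30.0   # assumed wrist travel that spans the full virtual trackpad
W, H = 1920, 1080  # placeholder screen size

def to_cursor(pitch_deg, roll_deg, clutch_held, last_xy):
    """Absolute mapping of wrist orientation to a bounded 2D plane."""
    if clutch_held:                    # clutch engaged: freeze the cursor so
        return last_xy                 # the user can re-center the wrist
    nx = max(-1.0, min(1.0, roll_deg / RANGE_DEG))   # left/right
    ny = max(-1.0, min(1.0, pitch_deg / RANGE_DEG))  # up/down
    return (int((nx + 1) / 2 * (W - 1)), int((ny + 1) / 2 * (H - 1)))
```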
||||||
11 | Glove Controlled Drone |
Aneesh Nagalkar Atsi Gupta Zach Greening |
Wenjing Song | Cunjiang Yu | proposal1.pdf |
|
# Glove Controlled Drone Team Members - Aneesh Nagalkar (aneeshn3) - Zach Greening (zg29) - Atsi Gupta (atsig2) # Problem Controlling drones typically requires handheld remote controllers or smartphones, which may not feel natural and can limit user interaction. A more intuitive way to control drones could increase accessibility, improve user experience, and open possibilities for new applications such as training, entertainment, or assistive technology. # Solution Our group proposes building a wearable gesture-control glove that sends commands to a quadcopter. The glove will use motion sensors to detect the user's hand orientation and movements, translating them into drone commands (e.g., tilting forward moves the drone forward). The glove will transmit these commands wirelessly to the quadcopter through an ESP32 Wi-Fi module. The drone will be purchased in parts to simplify integration and ensure reliable flight mechanics, while the glove will be custom-built. To improve on previous iterations of similar projects, we plan to: - Use IMU sensors instead of flex sensors for more precise and complex gesture detection. - Add haptic feedback to communicate status updates to the user (e.g., low battery, weak signal). - Implement an emergency shutoff mechanism triggered by a specific hand gesture (e.g., closing the hand). - Potentially integrate a camera onto the quadcopter, signaled by a different hand gesture. The system is also scalable to include advanced commands such as speed adjustments based on motion severity. # Solution Subsystems **Subsystem 1: Gesture Detection** - IMU and gyroscope sensors embedded in the glove to detect orientation and movement. - Sensor fusion algorithms to interpret gestures into defined drone commands (see the mapping sketch after this proposal). 1. Three-axis gyroscope: MPU-6050 2. IMU: Pololu MinIMU-9 v6 Controls: Here is a clear definition of how the drone will move - Drone maintains a constant hover height (handled by the drone's onboard flight controller barometer/altimeter stabilization) - The glove only controls horizontal motion and yaw (turning) - Pitch forward (tilt hand down): Move forward - Pitch backward (tilt hand up): Move backward - Roll left (tilt hand left): Strafe left - Roll right (tilt hand right): Strafe right - Yaw (rotate wrist clockwise/counter-clockwise): Turn left/right - Clenched fist (or another distinct gesture): Emergency stop / shutoff **Subsystem 2: Communication Module** - ESP32 microcontroller on the glove acts as the transmitter. - Wi-Fi connection to the drone for sending control signals. 1. ESP32 microcontroller 2. Integrated ESP32 Wi-Fi chip 3. Voltage regulation **Subsystem 3: Quadcopter Hardware** - Drone hardware purchased off-the-shelf to ensure stable flight. - Integrated with a receiver to interpret Wi-Fi commands from the glove 1. LiteWing – ESP32-Based Programmable Drone **Subsystem 4: Feedback and Safety Enhancements** - Haptic motors embedded in the glove to provide vibration-based feedback. - Emergency shutoff gesture detection for immediate drone power-down. 1. Vibrating Actuator: Adafruit 10 mm Vibration Motor 2. Driver for actuator: Precision Microdrives 310-117 3. Battery: Adafruit 3.7 V 1000 mAh Li-Po 4. Glove that components will be affixed to # Criterion for Success (minimum 5 of the 7 below) - The glove reliably detects and distinguishes between multiple hand movements. - The drone responds in real time to glove commands with minimal delay. - Basic directional commands (forward, back, left, right, up, down) work consistently.
- Scaled commands (e.g., varying speed/acceleration) function correctly. - Haptic feedback provides clear communication of system status to the user. - The emergency shutoff mechanism works reliably and immediately. - The system demonstrates smooth, safe, and intuitive user control during a test flight. |
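A minimal sketch of the gesture-to-command table defined in Subsystem 1; pitch/roll/yaw inputs are assumed to come from the fused MPU-6050/MinIMU data, and all thresholds are placeholders for flight testing:

```python
TILT_DEG, YAW_DPS = 20.0, 30.0  # assumed gesture thresholds

def glove_command(pitch_deg, roll_deg, yaw_rate_dps, fist):
    """Map hand orientation to the drone commands listed in Subsystem 1."""
    if fist:
        return "EMERGENCY_STOP"
    if pitch_deg < -TILT_DEG: return "FORWARD"   # hand tilted down
    if pitch_deg >  TILT_DEG: return "BACKWARD"  # hand tilted up
    if roll_deg  < -TILT_DEG: return "STRAFE_LEFT"
    if roll_deg  >  TILT_DEG: return "STRAFE_RIGHT"
    if abs(yaw_rate_dps) > YAW_DPS:
        return "TURN_RIGHT" if yaw_rate_dps > 0 else "TURN_LEFT"
    return "HOVER"
```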
||||||
12 | New Generation Addiction Control and Recovery Device System with Absolute Safety and Privacy - working with the Drug Addiction Research Team |
Adrian Santosh Leo Li Richawn Bernard |
Shengyan Liu | Cunjiang Yu | proposal1.pdf |
|
Yixuan Li, Bernard Richawn, Adrian Santosh ECE 445 Professor Arne Fliflet 2 Sep 2025 New Generation Addiction Control and Recovery Device System with Absolute Safety and Privacy - working with the Drug Addiction Research Team Problem: "First you take a drink, then the drink takes a drink, then the drink takes you." – F. Scott Fitzgerald. As Fitzgerald says, unconscious addiction is becoming an increasingly common problem in modern society. By "unconscious," I mean habits that sneak in slowly, built by small daily choices and cues, so people don't notice the slide until it is strong. There are many substances and behaviors that can cause addiction in individuals' lives. Some are relatively low-impact, like addiction to high sodium concentrations, to certain neural stimuli like music, or even, in most cases, to alcohol. I don't mean these are harmless; only that for many people the day-to-day harm can be lower. Alcohol addiction can also be severe; I'm talking about mild use here. But some are much more horrifying, like addiction to smoking (more specifically nicotine), to fentanyl, and to cocaine. These often pull people in fast, with strong cravings, risky choices, and higher risk of death. All these substances deal considerable damage to the addict's health and, to varying degrees, affect the people around the user: think heart and lung problems, changes in mood and thinking, money stress, and family strain. But this should not end here. Naming the harm is not enough; we need a clear plan to stop the slide. "It does not matter how slowly you go as long as you do not stop." – Confucius. As Confucius said, if nothing stops the addiction, it will eventually roll downhill like a rock, whether faster or slower; a brake is needed for this process. The point is that steady effort matters more than speed, but pushes from stress and daily life keep the rock moving. Do we have a way to stop addiction? Certainly yes: many addictions are known to be reversible, and even for the non-reversible ones, the symptoms can be controlled to be at least less severe. But there are problems with the current methods. Two big gaps stand out in real life. The base logic of addiction recovery is usually step-by-step: the addict needs to bring the addiction level down from severe to nearly no impact one step at a time, but the steps are often fixed by a plan, not by how the person feels that day, and if the process is too quick the result can also be bad. It works like trying to reshape a bouncy ball: push too hard and it bounces back, erasing most of the distance already covered; in the worst case the addict can end up worse than at the start because of the bounce. When that happens, the person loses trust and energy. They may even smoke more to calm the crash. All current methods rely heavily on the addict's self-regulation: the addict has to lower the addiction level step by step with a highly organized daily routine. This assumes strong willpower every hour, which is not realistic for most people. Life is messy; work shifts, kids, money, and stress can break the routine. This sometimes works well, but in most cases the result isn't optimal. There are mainly two problems. First, the addict's self-regulation ability is usually not strong enough.
Willpower rises and falls during the day, but cravings and desire don't wait. Sure, there are addicts with a strong will to recover, but in most cases the addict only knows the current phase is not healthy and has a relatively weak will to recover; such an addict may give up the process when feeling horrible. This drop-off point is the key failure we must design for. Second, the self-control model is usually not optimal or scientific enough. Most plans don't measure anything in real time. Even if addicts try their best to regulate themselves, the human body is a huge chaotic system, and there is no way an addict has a perfect internal view of what the correct dose should be at a specific moment. Nicotine fades fast, so strong desire can spike many times a day. People end up guessing dose and timing, which causes under-dosing (white-knuckle cravings) or over-dosing (nausea, dizziness). What we are aiming for, then, is an optimal systematic solution for recovery from smoking (nicotine) addiction. In simple words: a system that senses, decides, and helps right when the urge hits. It should be easy to use, low cost, and private. We aim to solve the problem of smokers being unable to self-regulate accurately, using a precise, scientific, responsive system. Today's tools are one-size-fits-all, slow to react, and too hard to follow under stress. We need a responsive loop: notice the trigger, give the right aid, check the effect, then adjust and repeat in time. Team Members: Adrian Santosh, Bernard Richawn, Yixuan Li Solution: We need a solution that can perform at least four functions. First, the system should detect the addict's nicotine level and some other stats, for example blood pressure and blood sugar, to provide sufficient information for analysis. Second, the system should analyze the input data and produce the optimal plan for the current status. Third, the system should control the release of the material used for addiction recovery. Fourth, and most importantly, the system should provide multiple levels of safety checks so that even in the worst case it poses no risk. Solution Components Privacy: We do not plan to connect the system to any external link, so the user's privacy should not be exposed to any external source. We also do not plan to use any long-term storage, so even if the system is hacked it should not yield any readable information. If possible, we will also add encryption for the data and a hardware/software self-destruct function if the system is forced open in an unwanted way. The user's privacy is crucial and we want to protect it as much as we can. Sensor System: The sensors should be non-invasive, and, as noted, the system will not connect to any external link. First Sensor System: This sensor system tests the fluid in the oral cavity region, for example saliva. It should detect at least the existing nicotine concentration in the mouth region. Second Sensor System: This sensor system serves as a backup for the First Sensor System, detecting the nicotine fraction in exhaled air.
Safety Measuring System: Emergency detection and warning: watch states like blood pressure, heart rate, symptoms, and dose history. If any sign crosses a safe line, block dosing and show a clear message; directly alarm the user in unusual cases. Sensor adjusting and comparing: this part may have to work with the microcontroller. It should compare the readings from the two sensors; if one returns clearly implausible values, it should follow the one in the normal range. If both sensors fail entirely, it will use user input and default data as the signals sent to the microcontroller unit, and warnings will inform the user about the abnormality in the device's sensor section (a code sketch at the end of this proposal illustrates this fallback logic). Backup user-input port: the user can use the control port to provide supplementary information for the treatment plan, such as a simple urge slider (1–10, for reference in the treatment only). In the worst case, when all sensors fail, the user can rely solely on this subjective input to request the next dose of treatment, while the treatment limits remain strictly enforced. Microcontroller Unit Ports: safe power port and covered plugs for sensors and maintenance, with keyed shapes so parts cannot be plugged in the wrong way. Water-resistant covers are also needed, considering liquids are frequently handled nearby. Microprocessor: includes a timing system and performs most of the logic operations. When a signal arrives from the Sensor System, it computes and passes the suggested dose to the output system. Safety Check, Software Part: when a suggested dose deviates significantly from previous data, the system should produce a warning and substitute a safe value for the mistaken suggestion. Safety Check, Hardware Part: a chip should guard the output messages in case the software fails. Irregular commands should be overridden, and failing commands should shut down the whole system and inform the user that the device is failing. Tank overview (three storage containers plus a mixing pod): Nicotine Tank: Volume: we need to choose a specific volume that gives sufficient capacity without wasting space in the design. Storage Material: we must choose a tank material that best resists corrosion and erosion from the liquid. Temperature Detection: ideally, temperature sensing confirms the liquid is in the correct state and ensures safety at least with respect to temperature; a warning is sent to the microcontroller if the temperature crosses the safety line. Leaking Detection: a sensor should detect if the tank is leaking. Leaking Management: at least two wall layers ensure a leak cannot harm the rest of the device, and also block leakage from reaching the outside world. Safety Detection: if an unreasonable command is sent, safety measures are enforced. Liquid Vapor Pod: Volume: we need to choose a specific volume that gives sufficient capacity without wasting space. Storage Material: we must choose a pod material that best resists corrosion and erosion from the liquid vapor.
Leaking Detection: a sensor should detect if the pod is leaking. Leaking Management: at least two wall layers ensure a leak cannot harm the rest of the device, and also block leakage from reaching the outside world. Safety Detection: if an unreasonable command is sent, safety measures are enforced. Noxious Vapor Pod: Volume: we need to choose a specific volume that gives sufficient capacity without wasting space. Storage Material: we must choose a pod material that best resists corrosion and erosion from the vapor. Leaking Detection: a sensor should detect if the pod is leaking. Leaking Management: at least two wall layers, with stricter requirements here because the noxious vapor is more dangerous; the walls also block leakage from reaching the outside world. Safety Detection: if an unreasonable command is sent, safety measures are enforced. Mixing Pod: Volume: we need to choose a specific volume that gives sufficient capacity without wasting space. Storage Material: we must choose a pod material that best resists corrosion and erosion from the liquid vapor. Leaking Detection: a sensor should detect if the pod is leaking. Leaking Management: at least two wall layers so a leak cannot harm the rest of the device or escape to the outside. Safety Detection: if an unreasonable command is sent, safety measures are enforced. Mixing Device: a mixing rod or a more capable mixer is needed for sufficient mixing. Mixing System: pulls small amounts from the needed containers and blends them to the set dose, using short, clean paths and a quick rinse step to avoid leftover mix. Safety System: the last guard against abnormal commands. If the dose exceeds a set amount, this simple safety check directly shuts down the whole system and warns the user that the system is failing. A clear hard cap must exist in this component so the last safety guard works correctly. Powering System: not decided yet; it can be a rechargeable or non-rechargeable battery. It should support a normal mode and a power-saving mode when the system is not activated. Criteria for Success Sensing: the device reports a saliva or breath nicotine status within 60 seconds and matches a lab reference check in more than 80% of trials; blood pressure and heart rate readings match a reference meter within a small allowed error in bench tests. Analysis: from any valid sensor or user input, the device gives a clear “status” and “next step” within 10 seconds and always stays within the set dose caps and lockouts across a 10-case test set. Delivery: the system delivers the commanded micro-dose volume (using a safe test liquid) within ±10% of target as measured on a scale, and enforces a minimum lockout time between doses every time. Safety: if a red-flag condition appears (very high blood pressure, sensor fault, low battery), dosing blocks within 1 second and a clear alert is shown.
Privacy: the device works with no external network, and the user can erase all local data. Notes: all parts are still highly abstract, so during organization and implementation, if some parts prove impossible to implement or incompatible with the rest of the system, we reserve the right to revise the project while keeping the best achievable functionality. |
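As a rough illustration of the sensor-fallback and hard-cap behavior described above, here is a minimal C++ sketch. The plausibility window, cap value, and function names are our own placeholders, not values specified in the proposal.

```cpp
#include <cstdio>

// Illustrative plausibility window for nicotine readings (units arbitrary)
// and an illustrative hard dose cap for the last-guard safety system.
constexpr float kMinPlausible = 0.0f;
constexpr float kMaxPlausible = 50.0f;
constexpr float kHardDoseCap  = 2.0f;

bool plausible(float v) { return v >= kMinPlausible && v <= kMaxPlausible; }

// Fallback order from the proposal: agreeing sensors, then the one in range,
// then the backup user input; warn whenever a sensor is abnormal.
float selectReading(float oral, float breath, float userInput, bool& warn) {
    if (plausible(oral) && plausible(breath)) return (oral + breath) / 2.0f;
    warn = true;
    if (plausible(oral))   return oral;
    if (plausible(breath)) return breath;
    return userInput;
}

// Last-guard hard cap: an over-cap command blocks dosing entirely.
float capDose(float suggestedDose) {
    return (suggestedDose > kHardDoseCap) ? 0.0f : suggestedDose;
}

int main() {
    bool warn = false;
    float r = selectReading(12.0f, 99.0f, 10.0f, warn);  // breath sensor out of range
    std::printf("reading=%.1f warn=%d dose=%.2f\n", r, warn, capDose(1.5f));
}
```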
||||||
13 | Sun Tracking Umbrella |
Dora Stavenger Megan Cubiss Sarah Wilson |
Wesley Pang | Arne Fliflet | proposal1.pdf |
|
Team Members - Dora Stavenger (doraas2) - Sarah Wilson (sarahw7) - Megan Cubiss (mcubiss2) Problem When sitting outside in urban third spaces, it is often too hot or bright to stay for long. Even at low temperatures, direct sun exposure quickly becomes uncomfortable and/or unhealthy. Many outdoor spaces do have stationary umbrellas, but once set, they only provide shade for part of the day as the sun moves, which can lead to discomfort from excessive heat and brightness. This can be avoided by adjusting the umbrella throughout the day, but umbrellas are often quite heavy and hard to maneuver. Solution Overview To solve this problem, we propose an umbrella that tracks the position of the sun using solar panels in addition to other sensors and adjusts its tilt to provide UV protection for the user and ensure comfort. To prove out this concept, we propose building a smaller model umbrella, using resources from the machine shop as well as doing some design ourselves. We will also do the math to show that our design could be scaled up and withstand the extra load of a heavier, full-size umbrella. Solution Subsystems ## Subsystem 1: Model Umbrella This subsystem is the mechanical basis for the project: The canopy would be scaled to about that of a personal rain umbrella. The umbrella would attach to an elbow joint allowing tilting motion. The base would attach to a stable plate and a bearing allowing circular motion. ## Subsystem 2: Solar Cells / Brightness Sensors This subsystem would be responsible for powering the umbrella as well as providing data on light intensity. A ring of solar cells sits toward the widest portion of the umbrella, with additional cells toward the top. The solar cells power the moving mechanisms and provide backup power through battery storage. Light intensity is measured using these same cells to determine optimal positioning (see the tracking sketch at the end of this proposal). ## Subsystem 3: Motor for Solar Angle Tracking This subsystem would be responsible for tilting the canopy of the umbrella: A stepper motor would be used due to the low-speed, high-torque application. A physical stop is built in for added safety so the canopy does not fall. Motor control is done using an H-bridge. ## Subsystem 4: Motor for Solar Position Tracking This subsystem would be responsible for rotating the entire umbrella: A stepper motor would be used to keep the design consistent. Motor control is done using a separate H-bridge from Subsystem 3. ## Subsystem 5: WiFi/Bluetooth/Communication This subsystem is responsible for communication between the physical device on the umbrella and a user’s phone/application. Using an ESP32, a web server can be established and connected to a laptop/display via the ESP32's existing WiFi capability. This allows two-way data communication: data can be viewed in a simple web browser with a user interface that pushes commands back to the microcontroller, and users on the same network can access the page and interact with the device. Criterion for Success Outcomes: A scaled version of the working product with proof that it is scalable to a full-sized version. The umbrella tilts based on differences in intensity detected by the solar cells. The umbrella is structurally sound and does not fall over during any motion. Data from the solar cells is displayed and user input is possible. Hardware: The device does not get in the way of the user experience. Solar cells send accurate data to software components.
Motors respond accordingly to change umbrella positioning. Software: Data from the solar cells is accurately received and processed by software. Software determines how the umbrella should move for optimal coverage. Commands for motor movement are dispatched accurately. |
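To make the intensity-difference tilting behavior concrete, here is a minimal C++ sketch of one control cycle. The deadband, step size, and function name are illustrative assumptions, not values from the proposal.

```cpp
#include <cstdlib>

// Differential sun tracking: step the tilt motor toward the brighter side.
// Threshold and step size below are illustrative placeholders.
constexpr int kDeadband = 40;   // ADC counts; ignore small imbalances to avoid hunting
constexpr int kStepSize = 1;    // stepper steps issued per control cycle

// Takes readings from opposing solar cells (e.g. 0-4095 ADC counts) and
// returns a signed step command for the tilt stepper.
int tiltCommand(int sideA, int sideB) {
    int diff = sideA - sideB;
    if (std::abs(diff) < kDeadband) return 0;        // balanced: hold position
    return (diff > 0) ? +kStepSize : -kStepSize;     // step toward the brighter side
}
```

The same comparison, applied to cells spaced around the ring, could drive the rotation motor in Subsystem 4.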
||||||
14 | Enhanced Golf Rangefinder |
Peter Maestranzi Emma DiBiase Jacob Hindenburg |
Eric Tang | Arne Fliflet | presentation1.pdf proposal1.pdf |
|
**Team Members:** Peter Maestranzi (petervm2) Jake Hindenburg (jacobh6) Emma DiBiase (emmamd2) **Problem:** Golf is an extremely difficult game that requires a great deal of precision. A multitude of factors can affect a single golf shot, such as distance, weather conditions, and club choice. Modern rangefinders gauge distance well, and some can even show yardage adjustments for changes in elevation. However, rangefinders still lack many features that could help average or new golfers improve quickly. **Solution:** Our solution is an enhanced rangefinder that adds several new features. Distance would be measured with a time-of-flight sensor, a component commonly used in rangefinders. To make our project unique, we would integrate several other components to compute a more precise adjusted distance, with additional sensors measuring factors such as wind speed, humidity, and temperature. The adjusted distance due to these factors would be updated on the rangefinder and shown on an LCD display (a sketch of one possible adjustment model follows this proposal). Another component of our device would be a Bluetooth user interface. Based on the readings from the rangefinder, a Bluetooth-connected app on the user’s phone can supply all the necessary information for that specific shot and provide a club recommendation. Using a microprocessor with Bluetooth capabilities, this subsystem would be achievable and crucial to making our device unique. All of our device's components would be secured within a 3D-printed enclosure that is both safe and easy to handle. **Subsystem 1: Microprocessor** For our microprocessor, we will use an ESP32-S3-WROOM-1-N16, as it supports Wi-Fi and Bluetooth. We will have added room for any additional UI features, GPIOs, and programming capabilities, with plenty of power to spare. **Subsystem 2: Distance Tracking System** The main component of the Distance Tracking System is a time-of-flight (ToF) sensor such as the JRT D09C Laser Distance Sensor. The ToF sensor measures the distance to whatever object the golfer points at. These are very common in normal rangefinders, so the crucial part of this system for our project is the interfacing with the other systems that provide an adjusted distance based on environmental measurements. **Subsystem 3: Environment System** The environment system detects ambient conditions that directly affect the golf shot. It includes a hot-wire wind sensor with analog output for wind speed (Modern Device Wind Sensor Rev. C), as well as the Bosch BME280 to detect humidity and temperature, as these directly correlate with added or subtracted yards. This subsystem is essential because it provides the additional assistance and feedback that golfers need to improve, giving us the “enhanced” rangefinder. **Subsystem 4: Power System** A LiPo battery such as the EEMB 3.7V 500mAh LiPo should be sufficient to power each component. **Subsystem 5: User Interface + Bluetooth Application** A physical LCD display will show distance measurements and wind speed, triggered by a push button on the mechanical enclosure. Using Bluetooth, an application on a phone or PC will give users more information on club selection based on the conditions read. **Subsystem 6: Mechanical Enclosure** The enclosure is an important component of our project because it needs to safely contain all of our systems while also being user-friendly.
The enclosure would be 3D-printed and would properly mount all sensors and displays. **Criterion for Success** This project will be successful if we meet the following criteria: - The rangefinder measures the correct distance from the user to the flag pin. - Environmental sensors provide proper feedback to the user regarding wind, humidity, and temperature conditions. - The UI recommends a suitable club based on the distance to the pin and the environmental conditions. |
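As a rough illustration of how the environmental readings could fold into an adjusted "plays-like" distance, here is a first-order C++ sketch. The coefficients are uncalibrated placeholders, and the struct and function names are ours, not from the proposal.

```cpp
// Inputs come from the subsystems above: ToF distance, hot-wire wind sensor,
// and BME280 temperature/humidity. All coefficients are illustrative
// placeholders that would need calibration against real shot data.
struct Conditions {
    float windParallelMps;  // + = headwind, - = tailwind
    float temperatureC;
    float humidityPct;
};

float playsLikeYards(float measuredYards, const Conditions& c) {
    float adj = measuredYards;
    adj += c.windParallelMps * 1.5f;                 // headwind: shot plays longer
    adj -= (c.temperatureC - 21.0f) * 0.05f          // warm, thin air: plays shorter
           * (measuredYards / 100.0f);
    adj -= (c.humidityPct - 50.0f) * 0.01f;          // humid air is slightly less dense
    return adj;
}
```

The club recommendation in Subsystem 5 could then be a simple lookup from the adjusted distance into the user's typical carry distances.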
||||||
15 | Auto adjusted lighting system for room |
Howard Li Jihyun Seo Kevin Chen |
Zhuoer Zhang | Arne Fliflet | proposal1.pdf |
|
**TITLE** Auto-Adjusted Smart Lighting System for Healthy Indoor Environments **TEAM MEMBERS:** Howard Li [zl114] Jihyun Seo [jihyun4] Kevin Chen [kdchen2] **PROBLEM** Most people do not give much thought to the lighting conditions in the rooms where they spend hours working, studying, or relaxing. As a result, lighting and brightness levels are often unsuitable for eye health and comfort. Poor or inconsistent lighting can lead to eye strain, headaches, fatigue, and reduced productivity. While modern devices like phones and laptops already include adaptive brightness features, room lighting has largely remained static, requiring manual adjustment if it can be adjusted at all. Sudden changes in light intensity can also be jarring, creating discomfort instead of solving the problem. We aim to create an automatic, health-conscious lighting system for indoor environments that adjusts brightness in real time based on sensed conditions and does so gradually to protect users’ eyes. **SOLUTION** Our solution is to build a system of multiple wireless sensors placed around a room to continuously measure light levels at different points. These sensors will connect to a central control unit, which processes the readings and determines the optimal lighting adjustments for the space. The system will then control the room’s artificial lights, increasing or decreasing brightness to achieve a consistent, eye-healthy level across the room. Importantly, these adjustments will be gradual—mimicking the smooth transitions of a phone screen’s auto-brightness—so that users never experience sudden, distracting changes in illumination. This approach introduces several subsystems: Wireless sensing subsystem: distributed light sensors communicate readings to the main controller. Central control subsystem: interprets sensor data and computes adjustments. Lighting control subsystem: modifies the brightness and potentially the color temperature of the lights. User comfort subsystem: ensures that changes are gradual and within recommended ranges for eye comfort. In addition to improving eye comfort, our system will also focus on energy efficiency. By actively monitoring natural daylight levels through the sensors, the system can reduce or even turn off artificial lighting when sunlight provides sufficient brightness. This ensures that lights are only used when necessary, lowering energy consumption and utility costs while promoting sustainability. **SOLUTION COMPONENTS** SENSORS We will use ambient light sensors to measure lux levels at multiple locations in the room. Placing sensors in different spots ensures accurate feedback even if natural light is unevenly distributed. These sensors will transmit data wirelessly to the central controller. WIRELESS NETWORK & CENTRAL CONTROLLER A controller will collect all sensor data, run algorithms to determine the target lighting level, and send control signals to smart drivers or dimmers. The wireless system allows easy deployment without additional wiring. LIGHTING CONTROL We will integrate dimmable LED lights or connect to existing lighting fixtures via smart dimmers. The control logic will avoid rapid brightness jumps by gradually adjusting output intensity (a sketch of one such control loop follows below). We may also explore adaptive color temperature to better mimic natural daylight cycles. USER INTERFACE (OPTIONAL) A controller or app could allow users to set preferences, such as “focus mode,” “relax mode,” or “sleep preparation mode,” which would adjust the target brightness levels and transition speeds.
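To illustrate the gradual, daylight-aware control described above, here is a minimal C++ sketch of one control tick. The setpoint, smoothing factor, and names are illustrative assumptions rather than values from the proposal.

```cpp
#include <algorithm>

// One tick of the lighting loop: artificial light tops up whatever daylight
// does not provide, and the output eases toward that target so there are no
// sudden jumps. Setpoint and smoothing factor are illustrative placeholders.
constexpr float kDesiredLux = 400.0f;  // e.g. within the 300-500 lux reading range
constexpr float kAlpha      = 0.05f;   // fraction of the remaining gap closed per tick

float artificialLux = 0.0f;            // currently commanded output

float lightingTick(float measuredDaylightLux) {
    float target = std::max(0.0f, kDesiredLux - measuredDaylightLux);
    artificialLux += kAlpha * (target - artificialLux);   // gradual transition
    return artificialLux;              // mapped elsewhere to the dimmer's level
}
```

Running such a tick at, say, 20 Hz yields transitions on the order of seconds, and the daylight subtraction is what turns the comfort feature into the energy-saving feature.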
CRITERION FOR SUCCESS The system must be able to detect ambient lighting conditions in multiple parts of the room and wirelessly send the data to the central unit. The lights should respond automatically to sensor data without user intervention. Brightness adjustments should be gradual, with no sudden jumps noticeable to the human eye. The lighting should remain within healthy ranges recommended for eye comfort (e.g., 300–500 lux for reading, 100–200 lux for relaxation). Optional success criteria: the user interface allows customization of lighting preferences. |
||||||
16 | Antweight Battlebot - Blade Blade |
Jack Tipping Patrick Mugg Sam Paone |
Gayatri Chandran | Rakesh Kumar | other1.pdf presentation1.pptx proposal1.pdf proposal2.pdf |
|
# Ant-weight Battlebot - Blade Blade Team Members: - Jack Tipping jacket2 - Samuel Paone spaone2 - Patrick Mugg pmugg2 # Problem We don’t have a problem, but other teams will when they see our lightweight battlebot. However, we must keep in mind certain design limitations to be eligible for competition, such as the machine remaining under 2 pounds. The battlebot must balance being indestructible, lightweight, offensive, and long-lasting in terms of robot “cardio” (motors). # Solution Our design consists of a sturdy body with a circular saw that can not only spin but also lift vertically. This lets us damage our opponent and exploit weaknesses in their design, depending on its flaws. An initial component list: a 3D-printed chassis, an ESP32 microcontroller, two wheels with two associated drive motors, and two motors for the weapon, a front saw that rotates and lifts, all connected over GPIO. # Solution Components ## MCU We will use an ESP32 microcontroller. The primary benefit is its integrated WiFi and Bluetooth, which lets us add custom telemetry to a laptop to control the bot: setting motor speed, raising the saw to flip the opponent's bot, or cutting our power as a fail-safe (a sketch of the thermal fail-safe appears before the success criteria). The ESP32 has plenty of peripheral support: many PWM outputs, so we can directly drive multiple devices, and ADC inputs that make it easy to read battery voltage or any other sensors we may add. It provides everything we need and is also very compact and power-efficient. ## The Chassis For the project, we have access to 3D printing with 5 different plastics: PET, PETG, ABS, PLA, and PLA+. After some research and evaluating tradeoffs, we are going to opt for PETG and ABS. PETG tends to be lighter and stronger than PET and is also easier to build with and more flexible than ABS, which makes it optimal for the chassis. The saw itself will be built with ABS, since for that part being lightweight and strong matters more than avoiding manufacturing defects. ## Power Unit and Motors We plan to use a 12V brushed DC gear motor with a 37mm gearbox and 45 RPM. This limits our battery to 12V unless additional conversion circuitry is involved. Evaluating 12V options, we find we should use a 3S LiPo, with capacity to be determined by the final weight of the battlebot (~500 mAh). We may opt for a higher-RPM motor for the saw; we are focusing on torque for now. ## Drive Unit A dual H-bridge drives the motors listed above. We use only two wheels so we can have forward/reverse while saving weight (versus more wheels with the same number of motors and less maneuverability). ## Saw Spin Unit Our weapon is a tombstone-style design with a saw blade rather than a plain object that simply rotates. It will be driven by the high-RPM motor (Adafruit DC motor); for the lift, we are debating between a fourth motor and a pneumatic part. The lift is an optional feature. ## Additional Sensor We will also have a temperature sensor to monitor whether the motors are being overworked; if they are, we can avoid “engine failure” (and losing the competition) by temporarily immobilizing.
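A minimal C++ sketch of the thermal fail-safe just described, with hysteresis so the bot does not chatter on and off near the limit. The temperature thresholds are illustrative placeholders.

```cpp
// Thermal governor: immobilize when the motor temperature sensor reads too
// hot, resume once it has cooled. Thresholds are illustrative placeholders.
constexpr float kOverheatC = 80.0f;   // stop above this
constexpr float kResumeC   = 65.0f;   // hysteresis: resume only below this

bool immobilized = false;

// Returns the duty-cycle scale applied to the drive/weapon motors (0 = stop).
float thermalGovernor(float motorTempC) {
    if (motorTempC >= kOverheatC)      immobilized = true;
    else if (motorTempC <= kResumeC)   immobilized = false;
    return immobilized ? 0.0f : 1.0f;
}
```

The same gate could double as the remote kill switch: forcing `immobilized = true` over WiFi/Bluetooth would implement the power-cut fail-safe mentioned in the MCU section.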
# Criterion For Success Our high-level goals: complete this class with an antweight battlebot that maneuvers well on two wheels, has a robust chassis, and carries a motor-driven saw weapon that can also flip opponents; the robot should be controllable over Bluetooth/WiFi; and, ideally, we do well in the competition. |
||||||
17 | LED Persistence of Vision Globe |
Gavi Campbell Melvin Alpizar Arrieta Owen Bowers |
Gayatri Chandran | Arne Fliflet | other1.pdf |
|
# Team: Globetrotters (WIP) # Team Members: - Owen Bowers (obowers2) - Melvin Alpizar Arrieta (malpi2) - Gavriel Campbell (gcampb7) # Problem LabEscape at UIUC is a popular attraction during events such as Engineering Open House and as such is constantly looking for ways to improve its exhibit. One such improvement is an LED globe capable of displaying messages and images by exploiting persistence of vision. However, many issues can arise when constructing a functional system around this phenomenon, including mechanical, timing, and electrical restrictions. Examples of the problems that may be encountered: - Difficulty creating an electrical system that functions within a rapidly spinning environment. - Difficulty acquiring accurate live measurements of the system's spin rate. - Difficulty translating spin rate into signals at the proper time intervals for the entire LED strip across the arch. - Difficulty ensuring sufficient resolution for crisp imaging. - Difficulty ensuring stability of the structure given the weights of the parts, a problem emphasized for spinning objects. In addition to the crucial issues above, a number of aesthetic issues should be addressed: the device should be as quiet as possible and its color range as wide as possible. # Solution To address the many problems encountered when building a system of this kind, we plan to take the following measures. We will implement systems capable of acquiring the correct spin rate of the device, taking into account information from accelerometers, optical sensors, and the assumed spin rate of components. We will include enough LEDs to provide clear, crisp images across the entire spin radius. We will manage external wiring carefully and keep all relevant wiring contained on the PCB, so that wires cannot tangle the device and cause catastrophic failure. To solve balancing issues, we intend to use a tri-pylon approach with three identical arches spaced around the structure so that balance is maintained. Additionally, we will space the PCBs to distribute weight evenly. This design could be expanded with an RGB coloring system to allow multicolored display. ## Subsystem 1 - Power Unit A 5-volt power unit will allow safe operation of our LEDs, avoiding the risk of burnout. A wired power source (DC 12V) with conversion to a lower voltage for when the device should run for extended periods. A mobile battery pack that can be used when mobility is desired. ## Subsystem 2 - Motor A DC motor capable of rotating at least 600 RPM should be more than satisfactory for the goals of this project. It must be able to rotate the mass of our globe for extended periods without wearing out. ## Subsystem 3 - Microprocessor Room for additional features should we expand the scope of the project (such as the addition of a speaker). The capability to route all necessary components with ease, and the ability to accommodate additional power if needed. Our microprocessor will provide WiFi and Bluetooth connectivity.
## Subsystem 4 - Accelerometer/Rotational Sensors An accelerometer to gather experimental data on the current rotational speed of the LED globe. An optical sensor used with a reference point to verify the correct rotational speed of the globe. Alternatively, a hall-effect sensor can magnetically detect rotations and adjust light timing accordingly (see the timing sketch below). ## Subsystem 5 - Multi-Colored LED Band(s) Balanced LED spacing around the PCB core to ensure smooth rotation of the globe and avoid turbulence. Reliable, fast-acting LEDs not prone to burnout under continuous active switching. Bands of interconnected LEDs capable of single or multiple colors. ## Subsystem 6 - Data Input An SD card reader or similar component that can accept physical media and display its contents in sequential order. Support for wireless data transfer, so displays can be updated without stopping the device to load it. Support for an approachable user interface in which displays can be freely edited and changed wirelessly. ## Subsystem 7 - Web Application Will provide a user-friendly method to control the LED globe. Will allow users to upload media files (images, videos, GIFs) directly from their device to the globe. The web interface will connect to the globe via onboard WiFi/Bluetooth for seamless control. Password protection or local hosting will restrict access so only authorized users can make changes. # Criterion For Success This project will be successful if we meet the following criteria: High-resolution displayable text and imaging. Continuous correct functioning for 12 hours on battery power. Wireless customizable graphics. |
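The core of the timing problem in Subsystem 4 is dividing each measured revolution evenly among image columns. A minimal C++ sketch, with an illustrative column count:

```cpp
#include <cstdint>

// Persistence-of-vision column timing: split the measured rotation period
// evenly among image columns. The column count is an illustrative choice.
constexpr uint32_t kColumns = 180;   // horizontal resolution per revolution

// Called once per revolution with the period from the hall-effect or optical
// sensor (in microseconds); returns the display time per column.
uint32_t columnIntervalUs(uint32_t revolutionPeriodUs) {
    return revolutionPeriodUs / kColumns;
}

// Example: at 600 RPM a revolution takes 100,000 us, so each of the 180
// columns is lit for roughly 555 us before the next column is latched.
```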
||||||
18 | RFID Poker Board |
Darren Liao KB Bolor-Erdene Satyam Singh |
Eric Tang | Rakesh Kumar | proposal1.pdf |
|
# Team Members: - Satyam Singh (satyams2) - Darren Liao (darrenl4) - Khuselbayar Bolor-Erdene (kb40) # Poker Traditional poker tables rely on the dealer and players to track cards, bets, and pots. This process can be slow and error-prone, especially in casual games where the dealer is inexperienced. Players may misread cards, deal incorrectly, or lose track of the state of the game. Live poker also lacks many of the conveniences of online poker, such as automatic hand evaluation, instant game-state updates, and precise tracking of actions. Online platforms further enhance the player experience with detailed statistics and hand histories, while live games rely entirely on player knowledge. # Solution An RFID-enabled poker table with tagged cards helps bridge this gap by bringing digital intelligence into the live poker experience. By embedding RFID readers in the table, the system can automatically recognize cards, display the game state in real time, and evaluate hands without error. Game-state management features such as LED indicators can track dealer position, blinds, and turn order, giving players visual cues that keep the game running smoothly. A dedicated companion app would serve as the primary user interface, providing players with immediate feedback; it can also highlight blind positions and display whose action it is. At a high level, we will stick 13.56 MHz HF RFID sticker tags onto poker cards (and possibly chips later), place small antenna pads under the “seat” zones in front of each player, and place a larger pad in the middle for the community cards. We will build a main PCB with an ESP32, a single HF reader IC, and an RF mux switch so the microcontroller unit (MCU) can scan all pads sequentially. The MCU will resolve tag UIDs into chip denominations or card identities, then send compact state updates to a small UI over Wi-Fi in near real time. # Solution Components ## Subsystem 1: RFID Cards and Antenna Network Each card and chip will have a 13.56 MHz HF RFID NFC sticker (ISO 15693) attached. Antenna pads will be embedded under each player’s seat zone, and a larger pad will be used for the community cards. All pads will be routed through an RF multiplexer (e.g., an analog RF switch such as the HMC7992) into a single HF RFID reader IC (such as the PN532 or MFRC522). The microcontroller will sequentially energize each pad, cycling through them at a fast interval per pad to collect tag UIDs, filter duplicates, and reliably detect card positions in near real time (a scanning sketch follows this proposal). ## Subsystem 2: Central Microcontroller The system will use an ESP32-S3 (dual-core with Wi-Fi) as the central controller. It will interface with the RFID reader via SPI or I2C and control the RF multiplexer using GPIO select lines. The microcontroller will maintain an internal mapping of card and chip UIDs to their identities (rank/suit or chip denomination) and update the game state. Once the game state is compiled, it will be serialized into JSON and transmitted to the visualization app over HTTP for low-latency communication. ## Subsystem 3: Game Visualization App The visualization layer will be a cross-platform application (built with Python + Flask) that receives JSON packets from the ESP32. It will display each player’s hole cards and the community cards, highlight blinds and active turns, and compute win probabilities for each player using either Monte Carlo simulation or a precomputed odds lookup.
As a stretch goal, the app will also store hand histories and send LED or LCD commands back to the ESP32 to synchronize the physical table indicators with the digital state. # Criterion For Success - 100% accuracy in tracking the cards currently in play through 5 rounds of gameplay - Game state is accurately updated on the app within 2-5 seconds of a change at the table - The board correctly differentiates between folds and players accidentally moving their cards away from the antennas # Stretch Goals If we have the time, we would also like to enhance the player experience by adding small LED indicators for the game state (big/small blinds, betting rounds, and an LCD screen showing the pot size) to help each player follow the game without relying strictly on the app. Tracking chips is more challenging, since stacked chips are difficult to read with RFID. However, we would love to implement this so we can build on the ideas above and display the total pot size directly on the board as well as in the app. Additionally, we could use algorithms and machine learning in the app to help players make the best decisions given the current game state. |
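A minimal C++ sketch of the sequential pad scan and UID-to-card resolution described in Subsystems 1 and 2. The driver calls are commented-out stand-ins, and the pad count and types are our own illustration.

```cpp
#include <cstdint>
#include <map>
#include <vector>

// UID-to-identity mapping maintained by the MCU; filled once when each
// tagged card is enrolled. Types and names here are illustrative.
struct Card { uint8_t rank; uint8_t suit; };   // rank 2-14, suit 0-3
std::map<uint64_t, Card> uidToCard;

constexpr int kNumPads = 10;                   // e.g. 9 seats + community pad

// One scan of a single pad: select it through the RF mux, poll the reader
// for tag UIDs, and resolve known UIDs into card identities.
std::vector<Card> scanPad(int pad) {
    // selectPad(pad);                         // drive the mux GPIO select lines
    std::vector<uint64_t> uids;                // uids = readUids(); poll reader IC
    std::vector<Card> found;
    for (uint64_t uid : uids) {
        auto it = uidToCard.find(uid);
        if (it != uidToCard.end()) found.push_back(it->second);
    }
    return found;
}

// The main loop would call scanPad(0..kNumPads-1) round-robin, diff the result
// against the previous game state, and push JSON updates over HTTP on change.
```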
||||||
19 | Suction Sense - Pitch Project |
Hugh Palin Jeremy Lee Suleymaan Ahmad |
Lukas Dumasius | Cunjiang Yu | other2.pdf presentation1.pdf proposal1.pdf |
|
Team Members: Hugh Palin (hpalin2) Jeremy Lee (jeremy10) Suleymaan Ahmad (sahma20) **Problem** Currently, suction is unnecessarily left on for approximately 35% of the runtime in operating rooms. This results in wasted energy, increased maintenance costs, reduced equipment lifespan, and unnecessary electricity consumption. At present, there is no mechanism to detect or alert staff when suction is left running unnecessarily (such as overnight when no surgeries are in progress). **Solution** We propose a system composed of two hardware components and one software component. The first hardware module will attach to the medical gas shut-off valves, where it will monitor suction pressure and wirelessly transmit the data. A second hardware component will receive and store this data. On the software side, we will develop an application that takes in suction usage data and cross-references it with the hospital’s operating room schedule (retrieved from the Epic system). The application will display a UI showing which operating rooms are currently in use and whether suction is active; color coding will clearly indicate if suction has been left on unnecessarily (a sketch of this classification rule follows this proposal). **Solution Components** Microprocessor For this project, we plan to use an ESP32-WROOM-32 module as the microcontroller. We chose this module for its small form factor and Wi-Fi and Bluetooth capability, which gives us flexibility in how we transmit the suction data. It is also extremely inexpensive, which is important considering hospitals operate on limited budgets and our module needs to be deployed in every operating room. Finally, the ESP32 features extensive open-source libraries, documentation, and community support, which will significantly simplify development. Pressure Transducer The Transducers Direct TDH31 pressure transducer will be used to monitor suction in real time. It converts vacuum pressure into an electrical signal readable by the ESP32. We chose this module for its compatibility with medical suction ranges, its compact design for easy integration, and its reliability in continuous-use environments. The sensor’s analog output provides a simple and accurate way to track suction status with minimal additional circuitry. BLE Shield The HM-19 BLE module will relay suction data from the ESP32 to the Raspberry Pi. This module supports Bluetooth Low Energy 4.2, providing reliable short-range communication while consuming minimal power, which is critical for continuous monitoring in hospital settings. Its compact footprint and simple UART interface make it easy to integrate with the ESP32 without adding unnecessary complexity. Raspberry Pi Display Module The Raspberry Pi 4 Model B paired with the Raspberry Pi 7″ Touchscreen Display will serve as the central monitoring and alert system. The Raspberry Pi was chosen for its quad-core processing power, extensive I/O support, and strong software ecosystem, making it well suited to run the suction monitoring application and integrate with the Epic scheduling system. The 7″ touchscreen will allow the module to be mounted in the hallway, providing an interface that lets staff quickly view operating room suction status, with clear color-coded indicators and alerts. This combination also enables both visual and audio notifications when suction is unnecessarily left on, ensuring staff can respond promptly. Software Application The application will run on the Raspberry Pi and serve as the central hub for data processing and visualization.
It will collect suction pressure readings from the ESP32 via the HM-19 BLE module and compare this data against the hospital’s operating room schedule retrieved through the Epic system. A color-coded interface on the Raspberry Pi touchscreen will clearly show which operating rooms are in use, whether suction is active, and where suction has been unnecessarily left on. **Criteria for Success:** Our system must remain cost-effective, with a total component cost under $1,200 per unit to align with hospital budgets. The hardware module must securely attach to suction shut-off valves, remain compact, and accurately detect suction levels using the pressure transducer. Data must be reliably transmitted to a Raspberry Pi 4 Model B, which will also pull operating room schedules from the Epic system. Finally, the touchscreen application must clearly display suction status with color coding and issue real-time alerts when suction is left on unnecessarily. |
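The core alerting rule (suction running while no case is scheduled) reduces to a small classification, sketched below in C++. The threshold and inputs are illustrative stand-ins for the TDH31 reading and the Epic schedule lookup.

```cpp
// Classify one operating room from its vacuum reading and schedule status.
// The threshold is an illustrative placeholder, not a calibrated value.
constexpr float kSuctionOnThresholdKpa = 5.0f;

enum class RoomStatus { Off, InUseOn, UnnecessaryOn };

RoomStatus classify(float vacuumKpa, bool caseScheduledNow) {
    bool suctionOn = vacuumKpa > kSuctionOnThresholdKpa;
    if (!suctionOn) return RoomStatus::Off;
    return caseScheduledNow ? RoomStatus::InUseOn        // normal use: green
                            : RoomStatus::UnnecessaryOn; // red: alert staff
}
```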
||||||
20 | Glove controlled mouse with haptic feedback |
Khushi Kalra Vallabh Nadgir Vihaansh Majithia |
Frey Zhao | Rakesh Kumar | proposal1.pdf |
|
# Problem For digital artists, traditional mousepads and trackpads are constrained and limit natural hand motion, making writing or drawing on a laptop cumbersome. Existing gesture-based input devices are often expensive, camera-dependent, or occupy significant desktop space. There is a need for a low-cost, wearable, intuitive interface that enables free-form cursor control and natural gesture-based clicking. # Solution We propose a wearable glove system that allows users to control a computer cursor using hand movements and perform mouse clicks with natural finger pinches. The system consists of four main subsystems: 1) Hand Motion Tracking Subsystem – captures hand orientation and motion to move the cursor. 2) Finger Gesture Detection Subsystem – detects index and middle finger pinches for left/right clicks. 3) Haptic Feedback Subsystem – provides real-time vibration feedback for click confirmation. 4) Software Subsystem – processes sensor data, maps gestures to mouse actions, and communicates with the computer. # Components ## Subsystem 1: Hand Motion Tracking Purpose: Detects hand orientation and movement to control the 2D cursor position. Components: IMU sensor (accelerometer + gyroscope + magnetometer) for 3D motion tracking. Microcontroller (ESP32 or Arduino Nano 33 BLE) for sensor data processing. Custom PCB to host the IMU, microcontroller, and wiring to glove sensors. A lightweight LiPo battery. Description: The IMU measures acceleration and rotation of the hand. Firmware filters and converts these readings into cursor velocity and direction, providing smooth, real-time hand-to-cursor mapping, targeting <50 ms from hand movement to cursor movement or click. ## Subsystem 2: Finger Gesture Detection Purpose: Detects finger pinches to generate left/right mouse clicks and optional extra gestures. Components: Flex/bend sensors on index and middle fingers for left/right clicks. Optional thumb flex sensor for gestures like scrolling or drag. Optional capacitive/touch sensor for hover or special gestures. Pull-down resistors and conductive wiring embedded in the glove. Description: Flex sensors detect finger bending; bending past a threshold triggers clicks. Firmware includes debouncing to prevent multiple clicks from one gesture (see the sketch after the success criteria). Optional thumb and touch sensors provide extended functionality. ## Subsystem 3: Haptic Feedback Purpose: Provides tactile confirmation for detected gestures. Components: Small vibration motor (coin or pager type). Driver circuitry on the PCB to control vibration intensity. Description: The microcontroller activates a brief vibration when a click gesture is recognized, enhancing the user experience with immediate feedback that needs no visual confirmation. ## Subsystem 4: Software Subsystem Purpose: Maps sensor data to cursor movement and gestures, and communicates with the computer. Components: Microcontroller firmware for sensor data acquisition, filtering, and gesture detection. Optional PC-side calibration GUI (Python or C++) for sensitivity adjustment and mapping hand motion to screen resolution. Description: Processes raw sensor data, converting IMU readings into cursor deltas (Δx, Δy) and flex/touch inputs into click commands.
Supports USB HID or Bluetooth HID communication to the computer. Optional software smooths cursor motion, calibrates sensors, and visualizes hand gestures for testing (stretch). # Criterion for Success 1) Resolution (Equivalent DPI): variable DPI (range: 400-1000 DPI). 2) Max Tracking Speed: ≥50 IPS (so quick flicks don’t drop). 3) Acceleration Tolerance: ≥5 g without loss of tracking (users move hands fast). 4) Polling Rate: ≥100 Hz (every 10 ms or better). 5) End-to-End Latency: ≤20 ms (ideally closer to 10 ms). 6) Click Accuracy: ≥95% reliable detection of intended clicks, false positives ≤1%. 7) Software Functionality: firmware correctly processes sensors; optional PC software handles calibration and visualization. 8) Haptic Feedback Response Time: <40 ms after click detection. 9) Cursor Control Accuracy: hand movements map to cursor position within ±2% of the intended location. 10) Wearability: glove and PCB fit comfortably on the hand without restricting motion. |
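A minimal C++ sketch of the flex-sensor click debounce mentioned in Subsystem 2: a click fires once when the bend reading stays past the threshold for a hold time, and cannot fire again until the finger releases. Threshold and timing values are illustrative.

```cpp
#include <cstdint>

// Debounced pinch detection for one finger. Values are illustrative.
constexpr int      kBendThreshold = 600;  // ADC counts from the flex divider
constexpr uint32_t kHoldMs        = 30;   // must stay bent this long to count

struct ClickState {
    bool     latched   = false;  // true while the current pinch is held
    bool     timing    = false;  // true while waiting out the hold time
    uint32_t crossedAt = 0;      // when the reading first crossed the threshold
};

// Call every firmware tick; returns true exactly once per pinch.
bool clickEvent(ClickState& s, int bendAdc, uint32_t nowMs) {
    if (bendAdc > kBendThreshold) {
        if (!s.latched) {
            if (!s.timing) { s.timing = true; s.crossedAt = nowMs; }
            else if (nowMs - s.crossedAt >= kHoldMs) { s.latched = true; return true; }
        }
    } else {
        s.latched = false;
        s.timing  = false;
    }
    return false;
}
```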
||||||
21 | MULTI-SENSOR MOTION DETECTOR FOR RELIABLE LIGHTING CONTROL |
Joseph Paxhia Lukas Ping Sid Boinpally |
Shiyuan Duan | Cunjiang Yu | proposal1.pdf |
|
Team Members: - Joseph Paxhia (jpaxhia2) - Siddarth Boinpally (sb72) - Lukas Ping (lukasp2) **PROBLEM:** In offices, classrooms, and lecture halls worldwide, motion sensors are commonly used to automate lighting control. While convenient, these systems share a critical flaw: lights often switch off when people remain in the room but are relatively still—such as when typing, reading, or watching a presentation. This leads to frustration, disrupts productivity, and creates an inefficient work environment. The root of the issue lies in the reliance on Passive Infrared (PIR) sensors, which detect the infrared radiation emitted by warm bodies. Although effective for detecting large movements, PIR sensors struggle with micromotions, are prone to false triggers, and rely on fixed timeout settings. As a result, they fail to consistently recognize human presence. **SOLUTION:** Our approach introduces a multi-stage verification system to improve reliability while preserving the strengths of current technology. PIR sensors remain useful for their fast response to initial entry and larger movements, so we retain them for triggering lights when someone walks into a room. To overcome their limitations, we integrate a millimeter-wave (mmWave) radar sensor, which excels at detecting fine micromotions such as breathing or subtle hand movements. This introduces the following subsystems: - Control and Processor - Sensing System - Lighting Interface - Power **Subsystem #1: Control and Processor** Primary responsibilities: - Take in the sensor data from the PIR and mmWave sensors. - Process this data and decide whether the lights should stay on, gradually turn on, dim, or stay off. - Send this decision out. The control and processor subsystem takes in the PIR and mmWave sensor data, determines whether the lights should be off, on, gradually illuminating, or dimming, and outputs this decision as a PWM signal (setting the brightness of the lights) from the microprocessor, which the lighting system uses to drive the lights accurately. Fusing the two sensors reduces false positives and negatives from the surrounding environment, since simple logic can cross-check the data sent in from both. An STM32 microprocessor will be used, as it is capable of processing these signals and is well suited to the filtering and dimming tasks. **Subsystem #2: Sensing System** Primary responsibilities: PIR: instant “walk-in” detection, coarse motion, low-power standby. mmWave: micromotion detection (breathing, typing), presence confirmation, and false-trigger suppression. We use the PIR for fast wake and coarse motion, and the mmWave for verification/hold and micromotion detection. Using both avoids PIR false-offs while keeping illumination near-instant. Basic state machine / functionality (a code sketch follows this list): 1. Idle / Vacant: PIR = low, mmWave = no presence → lights off, system in low-power monitoring. 2. Wake / Entrance: PIR triggers → gradual illumination, start hold timer and a high-sensitivity mmWave window. 3. Occupied (confirmed): mmWave confirms presence (micromotion or a persistent reflection pattern) OR PIR continues to detect motion → remain ON; reset hold timers on detections. 4. Low activity (PIR no longer seeing motion): PIR goes quiet → enter the mmWave verification window; if mmWave detects micromotion within the window, remain Occupied. If mmWave sees nothing for N_verify seconds → move to Vacant. 5. mmWave and PIR quiet → lights off; enter low-power scans at a low duty cycle.
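A minimal C++ transcription of the state machine above, simplified to its transitions (the gradual-illumination side effects are left as comments). kVerifyWindowMs stands in for N_verify; its value here is an illustrative placeholder.

```cpp
#include <cstdint>

// Simplified occupancy state machine for the PIR + mmWave fusion logic.
enum class State { Vacant, Wake, Occupied, Verify };
constexpr uint32_t kVerifyWindowMs = 30000;   // placeholder for N_verify

State step(State s, bool pir, bool mmwave, uint32_t& timerMs, uint32_t dtMs) {
    switch (s) {
        case State::Vacant:                        // 1. lights off, low power
            return pir ? State::Wake : State::Vacant;
        case State::Wake:                          // 2. ramp lights up, open mmWave window
            return (mmwave || pir) ? State::Occupied : State::Wake;
        case State::Occupied:                      // 3. presence confirmed, stay on
            if (pir || mmwave) return State::Occupied;
            timerMs = 0;                           // 4. PIR quiet: start verification
            return State::Verify;
        case State::Verify:
            if (pir || mmwave) return State::Occupied;      // micromotion found
            timerMs += dtMs;
            return (timerMs >= kVerifyWindowMs) ? State::Vacant   // 5. lights off
                                                : State::Verify;
    }
    return s;
}
```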
**Subsystem #3: Lighting Interface** Primary responsibilities: - Gradually turn lights on and off - Keep lights on Our gradual illumination will employ a 0-10V analog dimmer, which is essentially a subcircuit block. This is a widely used and accepted lighting control interface that reads a DC voltage to control brightness on an LED; the driver itself still runs on AC mains power. The subcircuit comprises these components: - Microcontroller - to generate a high-frequency PWM (Pulse Width Modulation) signal proportional to the desired brightness - Filter - to transform the PWM into a DC voltage - Op-amp buffer / amplifier - since our STM32 microcontroller outputs up to 3.3 V and we need to generate up to 10 V DC - Any protection needed - resistors and diodes used as needed - Output to LED This 0-10V analog dimmer can also keep the lights on by having the microcontroller generate a constant voltage above ~1.0 V. Once people leave the room and the controller no longer detects anyone, the inverse can be done to gradually turn the lights off (10 V down to 0 V). Note: we will have to do some math to find suitable slew rates for brightening and dimming. We are thinking of three different rates: 1. Gradual brightening - around 0.5-1 second for the lights to go from off to the desired brightness. 2. First dimming - around 10 seconds, starting when the sensor first detects no people in the room. 3. Final shutoff - around 2 seconds to fade fully off; this runs after the first dimming completes and the sensor still detects no activity in the room. **Subsystem #4: Power** 1. Take power from the fixture’s AC mains (120/230 VAC). 2. Use a dedicated isolated SMPS / LED-driver tap or internal LED-driver rails to create regulated DC rails for the electronics (3.3 V, 5 V, and optionally 1.8 V). 3. Keep the LED power path (the high-power LED driver) electrically separate from the low-voltage sensing electronics; provide good isolation and filtering between them. The system is powered from AC mains, which feeds the LED driver to provide constant-current illumination and also supports mains sensing and surge-protection components such as fuses and MOVs. All low-voltage electronics—including the MCU, mmWave radar module, PIR sensor, and any communications modules (Wi-Fi/BLE)—operate on DC, typically 3.3 V, with some modules optionally requiring 1.8 V or 5 V. The MCU manages these peripherals and interfaces with sensors using logic-level signals, ensuring safe and reliable operation of the sensing and control system. **Criterion for Success** The light should gradually turn on when somebody enters a room, starting the turn-on process with minimal wait time. While it is on and people are still present in the room, the light should not start to dim. When the room becomes empty, the light should start to dim (after a sufficient wait time) and turn off. In addition, the system should be able to detect motion within 10-15 m of the sensor. |
||||||
22 | Adherascent |
Dhiraj Dayal Bijinepally Hardhik Tarigonda Jonathan Liu |
Shiyuan Duan | Cunjiang Yu | proposal1.pdf |
|
TITLE ADHERASCENT Team Members: Jonathan Liu (jliu268) Hardhik Tarigonda (htarig2) Dhiraj Bijinepally (ddb3) PROBLEM Approximately 66% of adults in the United States take prescription medication, ranging from painkillers after surgery to essential life-saving drugs. Common to all of these medications is the importance of taking them on time and on a schedule to maximize effectiveness. Adherascent is a program/device that helps individuals remember to take their medications. It is aimed primarily at older adults, though anyone who needs it can use the device. SOLUTION Adherascent is a system composed of three subsystems: a wearable scent device, a mobile application, and a smart pillbox. The app provides the initial notification. If the notification is not addressed, the wearable escalates reminders using scent cues. The pillbox provides clear, per-compartment visual cues to indicate which medication should be taken, and it allows the user to confirm intake. SOLUTION COMPONENTS Adherascent's central components are the phone application, which coordinates the other devices, and the scent-releasing mechanism attached to the wearable. SUBSYSTEM 1 The wearable device acts as a second reminder to take medication. Instead of relying solely on a single cue such as audio or visual, Adherascent utilizes the sense of smell to prompt action. At first, the app reminds the individual to take their medication. If the person dismisses the notification and takes their medication, the wearable device will not activate. However, if the notification is left unaddressed for over 5 minutes, the device activates. The Adherascent wearable emits a scent with escalating intensity to convey urgency. The working idea is to implement this using clock cycles: at 1000 cycles, the scent is initially released into the air; at 2000 cycles, the scent increases in intensity; at 3000 cycles, the scent reaches maximum intensity to strongly notify the user (a sketch of this escalation follows this proposal). This approach ensures reminders are multi-sensory and persistent, reducing the chance of a missed dose. We plan on utilizing technology similar to electronic air fresheners to emit the scent. The acceptable time before ramping the scent intensity depends on the nature of the individual's condition: if the medicine is urgent, the device could skip the ramping process and emit at maximum intensity from the start. We may also add a function in the app to adjust the time between reminders and the scent intensity. SUBSYSTEM 2 The mobile app manages medication schedules and reminders. It sends a notification at the correct time and provides the first opportunity for the user to act. If the user dismisses the notification, the reminder is considered addressed, and no further action is taken. If the notification is ignored, the app sends a signal via Bluetooth to both the wearable device and the smart pillbox to activate. This central coordination ensures all subsystems work together to escalate reminders only when necessary. SUBSYSTEM 3 The smart pillbox provides a direct, physical reminder by lighting up the specific compartment corresponding to the medication due at that day and time. This not only alerts the user but also guides them to the correct pill, reducing confusion or mistakes. The pillbox also includes a confirmation method (such as a button or touch input) that allows the user to acknowledge that they have taken their medication.
Once confirmation is received, the pillbox sends the acknowledgment to the app, ensuring the wearable device does not continue escalating. If no confirmation is received, the system proceeds with wearable activation, maintaining redundancy in the reminders. We are working with Professor Steven Walter Hill, Gaurav Nigam, Venkat Eswara Tummala, and Brian Mehdian. |
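A minimal C++ sketch of the staged scent escalation described in Subsystem 1, written against wall-clock time rather than raw clock cycles. Stage timings and intensity levels are illustrative placeholders.

```cpp
#include <cstdint>

// Staged escalation: initial release, stronger, then maximum intensity.
// Stage start times and intensity levels are illustrative placeholders.
constexpr uint32_t kStageStartMs[3] = {0, 60000, 120000};
constexpr uint8_t  kIntensity[3]    = {85, 170, 255};   // e.g. emitter PWM duty

// Given the time since the notification was ignored, return the emitter
// intensity. Urgent medications jump straight to the final stage.
uint8_t scentIntensity(uint32_t sinceIgnoredMs, bool urgent) {
    if (urgent) return kIntensity[2];
    int stage = 0;
    for (int i = 0; i < 3; ++i)
        if (sinceIgnoredMs >= kStageStartMs[i]) stage = i;
    return kIntensity[stage];
}
```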
||||||
23 | Drink Dispensing Robot |
Andrew Jung Ethan Cao Megan Cheng |
Frey Zhao | Rakesh Kumar | proposal1.png proposal2.png proposal3.pdf |
|
# Drink Dispenser Robot - Team 23 ## Ethan Cao, Megan Cheng, Andrew Jung ### Problem Statement: Too often, we’re tired or distracted and put off getting a drink of water. Those small delays add up, leaving us dehydrated and drained without even realizing it. Additionally, many users may get tired of drinking plain water and would prefer flavored drinks. Dehydration impacts focus, energy, and overall well-being, yet it happens easily in our daily lives. ### Solution: Our solution is a drink delivery ecosystem that seamlessly connects a mobile robot, a drink dispenser hub, and a callback system for user interaction. The robot navigates the environment, locates both the dispenser hub and the user’s cup, and safely transports beverages. The hub acts as the liquid source, dispensing drinks in various ratios so the user can choose a mixed drink. The callback system lets the user request service without needing to approach the dispenser. Together, these subsystems ensure a smooth workflow: the robot docks at the hub, the hub dispenses the desired beverage into the cup, and the robot delivers the drink back to the user. Specifically, the robot integrates multiple subsystems. Its detection subsystem includes a bumper with left/right detection switches, cliff-detection sensors capable of recognizing drops greater than 1 inch, and IR detectors for locating both the dispenser and the coasters; these components connect to the robot’s microcontroller (ESP32) over I2C. On the hub side, the pump subsystem controls two liquid channels via servo-driven pumps, with encoders ensuring accurate dispensing. A precision docking subsystem with IR transmitters and sensors aligns the robot under the dispenser nozzle to prevent spillage. Like the robot, the hub uses an ESP32S3 microcontroller, which receives the drink request and the desired ratio (a sketch of this request message appears at the end of this proposal). Then, using its IR detector, the robot locates the user and travels with the cup. This ecosystem makes hydration convenient, safe, and customizable with minimal user effort. ### Visual Aid (File is submitted below) ### High-level requirements list: - The robot can retrieve the drink within 90 seconds - No significant amount of liquid will spill from the machine onto any surface - The robot will not get stuck on the dispensing station/coaster or fall off of the table. ## Design ### Block Diagram (File is submitted below) ### Subsystem Overview: #### Robot ##### Detection Subsystem The detection subsystem contains all of the sensors required to navigate the environment effectively. The sensors will be housed on a separate PCB and communicate with the microcontroller over I2C. The bumper will use a bar and 2 switches to determine if the robot is contacting any object. The cliff detection system will use downward-facing distance sensors to determine if the robot is approaching the edge of the table. The IR detectors will help the robot find the location of the dispenser and coasters.
- The bumper system shall be able to detect a collision anywhere on the front surface of the robot.
- The bumper system shall be able to detect whether the collision occurred on the left or right half of the front.
- The cliff detection system shall detect any drop greater than 1 inch.
- The IR detector shall determine the brightness of the IR light received.

##### Drive Subsystem
The drive subsystem will use an H-bridge IC to drive 2 brushed motors with integrated encoders. It will communicate wheel positions back to the microcontroller using a quadrature signal and receive commands from the microcontroller as PWM signals.
- The drive subsystem shall not cause overcurrent events.
- The drive subsystem shall drive at over 1 foot/second unloaded and 0.5 feet/second loaded with a cup.

##### Power Subsystem
The power subsystem will power the robot using a 9V battery and regulate the battery's voltage down to 3.3 volts for our logic devices.
- The power system shall provide up to 250mA at 3.3 volts and protect against overcurrent events.
- The power subsystem shall protect the battery from overcurrent through a fuse.
- The battery shall be accessible and replaceable with basic hand tools.

##### Microcontroller (Seeed Studio ESP32S3)
The microcontroller will send signals to all of the other subsystems on the robot and communicate with the dispenser and coaster over the ESP-NOW protocol.
- The microcontroller shall not exceed 80 degrees Celsius.

#### Drink Dispenser System

##### Pump Subsystem
This system contains an H-bridge IC that has control over two servos. These servos will then be connected to a pump to dispense the liquids. The servos will also be connected to encoders, which will communicate the servo positions to allow accurate dispensing of the two liquids. The pump will ensure that liquid is dispensed quickly and only when the robot is under the nozzle, to meet the goal of not spilling any significant amount of liquid onto the robot or the drink dispenser hub. A sketch of the encoder-based dispensing math follows this overview.
- The pump system should be accurate to the nearest tablespoon.

##### Power Subsystem
The power subsystem will be used to power the servos and the microcontroller.
- Similarly to the robot, the power subsystem will protect against overcurrent events and protect the battery through a fuse.

##### Precision Docking Subsystem
This contains the IR transmitter as well as any sensors or components that the robot might use to dock at the drink dispenser hub.

##### Microcontroller (ESP32S3)
This will be the same microcontroller as on the robot, with the same goal of transmitting and receiving signals from the other systems to operate the pumps.

##### Callback Subsystem
- The user interface of the system.
- Allows the user to call and send the robot with custom drink ratios.
- Uses an ESP32 to communicate with the robot given the inputs from the User Subsystem.

##### User Subsystem
- Has call and send buttons that will call the robot if it is currently not at the user, and send the robot to the hub if the robot is at the user.
- A ratio dial will customize the ratio between the water and extract (electrolytes, flavor, etc.).
- An IR transmitter will allow the robot to detect the user by emitting IR.
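As a quick feasibility check on the tablespoon requirement above, the sketch below converts accumulated encoder counts into dispensed volume. This is a minimal illustration; the displacement-per-revolution and encoder-resolution figures are placeholders, not values from our pump's datasheet.

```python
# Hypothetical encoder-to-volume math for the pump subsystem.
ML_PER_REV = 2.0       # assumed pump displacement per revolution (mL)
COUNTS_PER_REV = 48    # assumed encoder counts per revolution
TBSP_ML = 14.79        # one US tablespoon in mL

def dispensed_ml(encoder_counts: int) -> float:
    """Convert accumulated encoder counts to dispensed volume in mL."""
    return (encoder_counts / COUNTS_PER_REV) * ML_PER_REV

def counts_for_target(target_ml: float) -> int:
    """Encoder count at which the controller should stop the pump."""
    return round(target_ml / ML_PER_REV * COUNTS_PER_REV)

# Worst-case quantization error is half a count:
# ML_PER_REV / (2 * COUNTS_PER_REV) ~= 0.02 mL here, far inside the
# +/- half-tablespoon (~7.4 mL) tolerance implied by the requirement.
if __name__ == "__main__":
    target = 2 * TBSP_ML                 # e.g., two tablespoons of extract
    print(counts_for_target(target), dispensed_ml(counts_for_target(target)))
```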
### Tolerance Analysis:

#### Accurately locating the dispenser and the cup using IR
We can create a testing jig with an infrared LED and an infrared sensor on a pivot to determine how accurately we can find the infrared LED.

#### Accurately dispensing liquid
We can calculate the accuracy and rate of liquid dispensing using the pump's datasheet.

#### Pathing efficiently and aligning with the dispensing station
We can test pathing using robotics simulation software.

### Ethics and Safety
The primary ethical concern of the project is to avoid harm to the end user according to ACM general ethical principles sections 1.2: Avoid Harm and 1.3: Be Honest. Due to the nature of our project, users will likely ingest liquids which have interacted with our project. Therefore, we need to ensure our components are food safe and that we warn our users of any potential contamination. We will need to provide functions for the user to clean the dispenser and flush any potentially harmful substances out. We will also need to prevent misuse by labeling what liquids are allowed in the dispenser station. |
||||||
24 | Autonomous Cylindrical Root Camera |
Aidan Veldman Nathaniel McGough Zach Perkins |
Rishik Sathua | Rakesh Kumar | proposal1.pdf |
|
# Autonomous Cylindrical Root Camera

Team Members:
- Zach Perkins (zjp4)
- Aidan Veldman (aidankv2)
- Nathaniel McGough (nm47)

# Problem
The problem we intend to solve is the one outlined by John Hart and Jeremy Ruther in their presentation on a new hemispherical root camera model for their research. They want a new model that addresses the usability and cost-effectiveness issues of the current one. For the research, they need high-quality photos of the development of root systems. Every season, clear tubes that are about 5 ft long and 5 in in diameter are planted for root systems to grow into, and at the end of the year photos are taken by their current scanner system to assess the success of that plant. The problem is that the current printer-scanner-based model needs to be lowered and rotated manually, is vulnerable to wear and tear from use in the fields, and is vulnerable to water damage from moisture in the tubes.

# Solution
*Note that we have not been able to formally meet with the pitchers of the project, so some of the exact desired specifications are still vague/unknown.*

The new design is a cylindrical device that uses a 360-degree mirrored orthographic camera to capture its pictures, LEDs for light, a servo motor to winch the device up and down the tube, a button for starting the task, and small exterior wheels for smooth traversal. The exposed electronics (button, servo motor, LEDs, and camera) will be waterproof models and sealed properly within the device casing. Operating it would require reaching through the winch cap mount and pressing a start button. The device would turn on its LEDs and lower itself into the tube as it takes pictures, then reverse and ascend to its starting position, ready to be moved to the next tube. Obtaining the photos would be done via local storage ready to be downloaded later (likely a flash drive/SD card).

# Solution Components

## Subsystem 1: Camera and LEDs
To save costs and camera interface complexity, we will use a standard camera with a custom lens for obtaining ring-shaped cross-sections of the tube. The camera is centered on the bottom face of the device and faces directly down. Ahead of it, the vision of the camera is focused out into a telecentric lens. A few centimeters in front of the lens is a semi-spherical mirror. Both the lens and the mirror are roughly equal in diameter so that the camera obtains clear orthographic pictures of the mirror's contents. From its shape, the mirror displays a ring featuring a slice of the desired root system. The camera is connected to the main PCB and communicates a stream of photos as it descends. A ring of small LEDs surrounds the lens to bring symmetrical lighting into the dark tube. They are also connected to the PCB.

Camera DigiKey part number: 1738-FIT0701-ND

## Subsystem 2: PCB and Power
The single PCB controls the device. It features an ESP32-S3-WROOM-1 microcontroller chip for image processing and any extra storage needed for multiple large photos. A battery power pack connects to the PCB to run the device. The main task of the microcontroller is to implement an algorithm to slice the known ring of data from each photo, reconstruct it horizontally, and stitch each slice into a final rendered picture which it saves to the local storage.
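To make the slicing-and-reconstruction step concrete, here is a minimal sketch of unwrapping the mirror's ring into a horizontal strip, assuming the ring's center and inner/outer radii in pixels are already known. The function and parameter names are our own illustration, not part of an existing implementation.

```python
import numpy as np

def unwrap_ring(img, cx, cy, r_in, r_out, n_theta=1024):
    """Sample the ring between r_in and r_out around (cx, cy) into a
    rectangular strip of shape (r_out - r_in, n_theta) using
    nearest-neighbor lookup."""
    radii = np.arange(r_in, r_out)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    xs = np.clip((cx + rr * np.cos(tt)).round().astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + rr * np.sin(tt)).round().astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]

# Strips from successive depths would then be stacked (e.g., np.vstack)
# to build the final rendered picture saved to local storage.
```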
## Subsystem 3: User Interface
The user interface subsystem consists of a waterproof button, accessible from the top of the device, that connects to the PCB. It directs the process to begin. There is also an external USB port for extracting the pictures. It may need an accompanying LED to indicate a successful pairing between the flash drive and microcontroller.

*Note: if it is viable, the task could also be achieved with a Bluetooth connection to the microcontroller.*

USB port DigiKey part number: 1528-6073-ND

Waterproof button: https://www.wickedwarnings.com/product/waterproof-push-button-on-off-switch/?srsltid=AfmBOoqiUecr6UN8rvu42ILdTlRO9r2S_dkrkDFNViV1Ggn5V13krY0z

## Subsystem 4: Winch
The winch operates by a servo motor driving a spool at the top of the device. A strong, weather-resistant cord outputting from the spool is hung from a cap that attaches to the top of the tube. A rotational sensor attached to the axis of the spool tracks the number of rotations so the microcontroller is aware of the device's depth.

# Criterion For Success
- Cost-effective.
- Drop resistant, able to be subject to repeated use.
- Waterproof, able to resist high moisture content within the tubes.
- Winch system consistently completes motion cycles without getting stuck.
- Obtains clear, large, and consistent pictures of the desired root systems on par with the currently used model.
- User interface is simple and easy to use.
- Power system keeps the device powered consistently and the battery lasts for the time the device is required in the field. |
||||||
25 | Auto-Guitar Tuner |
Daniel Cho Ritvik Patnala Timothy Park |
Eric Tang | Rakesh Kumar | proposal1.pdf |
|
**Handheld Automatic Guitar Tuner**

**Team Members:**
- Timothy Park (twpark2)
- Daniel Cho (hc55)
- Ritvik Patnala

**Problem:**
When playing guitar, being in tune is essential. When strings are not properly tuned to their correct pitches, the notes played can clash with each other, causing what listeners perceive as being "off" or "out of tune." Accurately tuning a guitar is a challenge for both beginners and experienced players. Traditional tuners require the musician to manually turn tuning pegs while reading pitch information, which can be inconsistent and time-consuming. An automatic solution that can both detect pitch and physically adjust the tuning peg would reduce errors, speed up tuning, and improve usability in practice and performance settings.

**Solution:**
We propose a handheld automatic guitar tuner that integrates pitch detection and motorized peg adjustment into one device. The system will capture string vibrations, process them using a microcontroller to identify the current pitch, and automatically rotate the tuning peg with a small motor until the string is in tune. Since the handheld device tunes one string at a time, it can be used on different guitars without needing to worry about different spacing between pegs and strings. A compact OLED screen will display the detected pitch and tuning status, while four buttons (Power, String Select, Mode, Start) provide simple user control. The String Select button allows the user to cycle through the six guitar strings. Each press moves the selection to the next string in order: low E, A, D, G, B, high E, then back to low E again. This circular navigation lets users easily choose which string to tune without confusion or the need for multiple buttons. The Mode button lets users toggle between preset tuning standards (Standard, Drop D, Open G, etc.) to accommodate various playing styles and preferences. The design will run on a rechargeable battery, with all subsystems integrated into a custom PCB for portability and reliability.

**Solution Components:**

Subsystem 1: Audio Sensing and Pitch Detection
Purpose: Capture the sound/vibration of the guitar string and convert it to a clean, digitizable signal.
Components: A transducer (piezo or electret mic) is used to convert the string vibration to an electrical signal. If a mic is used, filtering algorithms will be implemented to remove unwanted ambient noise. A low-noise op-amp or preamplifier boosts the tiny sensor signal to a usable voltage for the ADC. An anti-aliasing filter removes any high-frequency noise or harmonics above Nyquist so that the sampled waveform represents real string motion; this prevents false pitch estimates. The MCU ADC input samples the signal at a steady rate. Clean samples are the raw inputs to the pitch detection algorithm.

Subsystem 2: Microcontroller
Purpose: Run pitch detection algorithms and control the motor.
Components: The microcontroller is used for multiple tasks, including streaming the audio in via the ADC. It also runs the pitch detection algorithm and determines the motor speed/direction to achieve optimal tuning. Apart from this, it updates the data on the screen and takes in inputs from the buttons. GPIO inputs (buttons) read the user's intent, including power, mode and string selection, and start/stop; debouncing will ensure one clean press. An I2C bus serves the OLED display and fuel gauge: the two-wire link the MCU uses to communicate with the display to show the note, offset, and battery.
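To make the detection step concrete, here is a minimal sketch of autocorrelation-based pitch estimation and the cents-offset calculation that would steer the motor. The frequency table reflects standard tuning; the search band and function names are our own illustration, not the final firmware.

```python
import numpy as np

STANDARD = {"E2": 82.41, "A2": 110.00, "D3": 146.83,
            "G3": 196.00, "B3": 246.94, "E4": 329.63}

def detect_pitch(x, fs, fmin=70.0, fmax=400.0):
    """Estimate the fundamental by picking the autocorrelation peak
    within the plausible lag range for guitar strings."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)  # window must exceed hi lags
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

def cents_off(f, f_target):
    """Signed tuning error in cents; positive means the string is sharp."""
    return 1200.0 * np.log2(f / f_target)

# Control-loop idea: tighten the peg while cents_off < -5, loosen while
# cents_off > +5, and beep once within the +/-5 cent window.
```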
Subsystem 3: Motor
Purpose: Physically adjust the guitar tuning peg to reach the correct pitch.
Components: A DC gear motor provides the mechanism to rotate a guitar's tuning peg. It needs to trade speed for torque so that the peg can be turned smoothly and precisely, preventing the string from snapping. A removable socket attachment allows the tuner to attach to different peg shapes; a quick swap lets you tune various guitars without redesigning the whole tool.

Subsystem 4: Power Unit
Purpose: Provide stable power to both logic and motor subsystems.
Components: A Li-ion battery will be the primary power source, due to its energy density and rechargeability. It typically specs at 3.7 volts with the potential to allow multiple hours of operation. A charger IC with USB-C input can be utilized to allow safe and reliable charging of the Li-ion battery. A buck regulator will be used to step down to the voltage levels required by different components (usually 3.3V or 5V). A fuel gauge will report the battery's state of charge.

Subsystem 5: User Interface
Purpose: Provide real-time pitch feedback and allow user input.
Components: An OLED display is used to show the note/string you want to tune and a cents bar/needle so the user can see whether the string needs to be tightened or loosened. It also shows the battery and the chosen mode. Four push buttons (Power, Mode, String Select, Start) provide control. A piezo buzzer generates beep sounds to signal successful tuning; this would be driven by the microcontroller's GPIO for tone generation.

**Criterion for Success:**
- The device tunes all six guitar strings to within ±12 cents of the target pitch, which is the threshold where most people can perceive a note as out of tune. Possibly ±5 cents for further accuracy.
- Visual indication (such as an LED) signals when each string reaches its correct pitch.
- The tuner should function reliably on both acoustic and electric guitars without causing any damage to the instrument or strings.
- Each string should be tuned within a reasonable time frame (under 20 seconds per string).
- The OLED display refreshes pitch feedback at least 5 times per second. |
||||||
26 | Orion Med |
Wenhao Zhang Xiangyi Kong Yuxin Zhang |
Zhuoer Zhang | Rakesh Kumar | proposal1.pdf |
|
# ORION MED

Team Members:
- Xiangyi Kong (xkong13)
- Yuxin Zhang (yuxinz11)
- Wenhao Zhang (wenhaoz5)

# Problem
As the global population continues to age, the demand for elder care is rising faster than the number of available care workers. Care workers often spend much of their time on routine but necessary tasks, such as fetching medicine or preparing basic tools. These simple tasks leave them with less time to focus on what really matters: providing personal attention, comfort, and medical care to the elderly. This imbalance not only increases stress and workload for care workers but also makes it harder to ensure that the elderly receive the level of care they deserve.

# Solution
We propose to design a line-following autonomous medicine cart that can navigate between a nurse station (HOME) and five fixed pharmacy locations along a predefined track. The nurse will input a target pharmacy number (1–5) and a specific medicine type through a GUI. The cart will follow the track, detect the correct station using ground markers, and stop to wait. Once a medicine package is placed on the tray (detected by onboard sensors), the cart will first verify whether the correct pill bottle has been selected. If so, it will immediately return to the HOME position. The system is divided into the following subsystems:
1. Locomotion & Navigation
2. Station Recognition
3. Load Detection
4. Medicine Verification
5. Control & Communication
6. Power Supply & Safety

# Solution Components

## Subsystem 1: Locomotion & Navigation
- Purpose: Drive the cart along the predefined track and keep it centered on the black line.
- Components:
  - 2 × DC gear motors with encoders
  - Motor driver: TB6612FNG (or L298N as alternative)
  - QTR-8A IR reflectance sensor array for line tracking
- Functionality: Uses PID control with encoder feedback to follow the black line smoothly and reliably.

## Subsystem 2: Station Recognition
- Purpose: Detect when the cart has arrived at one of the five fixed pharmacy stations or the HOME position.
- Components:
  - Ground marker system (unique tape patterns or RFID tags)
- Functionality: Each station has a unique marker or tag; the sensor detects it and signals arrival to the controller.

## Subsystem 3: Load Detection
- Purpose: Detect whether an object (medicine package) has been placed on the tray.
- Components:
  - HX711 load cell amplifier + load cell sensor
- Functionality: Confirms stable load placement before triggering the RETURN sequence.

## Subsystem 4: Medicine Verification
- Purpose: Confirm that the medicine placed matches the nurse's request before returning to HOME.
- Components:
  - Color sensor module (e.g., TCS34725 RGB sensor)
- Functionality:
  - The nurse specifies a medicine type (e.g., Red, Green, Blue pill).
  - After load detection, the color sensor scans the deposited item.
  - If the detected color matches the requested medicine, the RETURN sequence is triggered. If not, the cart remains at the station, and an error/status is sent to the GUI. A sketch of this matching logic follows below.
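As an illustration of the matching logic referenced in Subsystem 4, the sketch below classifies a sensed RGB reading by nearest reference color. The reference values are placeholders that would be calibrated against the actual pill bottles with the TCS34725.

```python
import math

# Placeholder calibration readings for each medicine color.
REFERENCE = {"Red": (200, 40, 40), "Green": (50, 180, 60), "Blue": (40, 60, 190)}

def classify(rgb):
    """Return the reference medicine color nearest to the sensed RGB value."""
    return min(REFERENCE, key=lambda name: math.dist(rgb, REFERENCE[name]))

def verify(requested: str, sensed_rgb) -> bool:
    """True triggers the RETURN sequence; False keeps the cart waiting
    at the station and reports an error to the GUI."""
    return classify(sensed_rgb) == requested
```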
## Subsystem 5: Control & Communication
- Purpose: Serve as the "brain" of the system, executing navigation logic and communicating with the user interface.
- Components:
  - ESP32 microcontroller (Wi-Fi + control)
  - Python Tkinter GUI or ESP32-hosted web interface
- Functionality:
  - Receives target station input from the GUI
  - Executes a finite state machine: IDLE → TO_STATION → WAIT → RETURN → HOME
  - Sends status updates (Idle, Moving, Waiting, Returning, Done) back to the GUI

## Subsystem 6: Power Supply & Safety
- Purpose: Provide stable power to motors, sensors, and controller while ensuring user safety.
- Components:
  - Lithium-ion battery pack
  - Step-down voltage regulators (5V for motors/sensors, 3.3V for ESP32)
  - Ultrasonic distance sensor (HC-SR04 or VL53L0X) for obstacle avoidance
  - Emergency stop button with hardware cutoff
- Functionality: Supplies regulated voltages, ensures safe shutdown in emergencies, and prevents collisions.

# Criterion For Success
1. Navigation:
   - The cart can travel from HOME to any of the five stations with high reliability.
   - The cart stays centered on the line with little deviation.
2. Station Recognition:
   - Correctly identify each of the five stations and HOME.
3. Load Detection & Return:
   - Correctly detect object placement.
   - Only allow RETURN if the correct medicine is placed.
   - Trigger the return-to-home sequence correctly after placement.
4. Task Completion:
   - Accept user input, reach the target station, wait, detect the load, and return to HOME.
5. Safety:
   - Stop within 20 cm of unexpected obstacles.
   - Stable and safe operation with no exposed wires or hazards. |
||||||
27 | Team Heart Restart |
Brian Chiang Ethan Moraleda Will Mendez |
Frey Zhao | Arne Fliflet | proposal1.pdf |
|
Team Heart Restart

Team Members:
- William Mendez (wmendez2)
- Ethan Moraleda (ethannm2)
- Brian Chiang (brianc11)

Problem:
Research has found that defibrillators delivering a single shock have a lower survival rate (13.3%) compared to Double Sequential External Defibrillation (DSED), which achieves a survival rate of 30.4%. To deliver a double shock, two separate defibrillators are required. Since ambulances typically carry only one defibrillator/cardiac monitor, DSED is currently not feasible in the field. Current defibrillators also do not take impedance readings, which limits how well they adapt to different body types.

Solution:
Our solution is to create a single device that can deliver two sequential shocks. Since we now need a total of four pads to administer two consecutive shocks, we are able to read the impedance of the patient, allowing us to calculate more accurate timing and power for the shocks to increase survivability.

Our first subsystem will be our custom PCB. This board will contain 3 main elements: the electrocardiogram (EKG), the impedance sensor, and the power supply. The EKG will be used to read the electrical signals within the heart from the anterior-posterior (AP) and the anterior-lateral (AL) positions. This will utilize 4 hospital-grade electrode tabs as the sensors. These electrical signals will allow us to understand how the heart is functioning and when we would initiate the sequential shocks. The impedance sensor will measure the body impedance of the patient. This measurement is essential, as it is required to calculate how much power is needed behind each shock and the time between each shock. Different body types require different levels of power to reset their hearts. Lastly, the power supply will be used to supply power to the PCB and our other subsystems.

Our second subsystem will be an external microcontroller board. This microcontroller will be in charge of our inputs and outputs. Our three inputs are the EKG reading, the impedance reading, and the start/stop button. Our output will be an HDMI display, which will display the heart rate and impedance in real time with high accuracy.

For safety and to keep the scope of the project realistic, we will be implementing only the EKG and impedance sensor. A future senior design project can integrate our project into a full defibrillator device that can execute sequential shocks. We will be documenting our work to hand it off appropriately.

Solution Components

Subsystem 1 - Main board

Subsystem 1.1 - ECG (Amplifiers and Filters)
The electrocardiogram will comprise multiple filters, which can be built using breadboards and off-the-shelf small electronic components. This filter will be placed on a PCB, which will be connected to the microcontroller. The PCB will most likely have a differential amplifier, with a low-pass filter and a notch filter. This will eliminate a lot of noise and disregard the higher frequencies that do not occur in the human body.

Subsystem 1.2 - Impedance sensor
High-pass filter: Based on previous research, higher frequencies are used to find the human body's impedance, which means we will need a high-pass filter to filter out the lower frequencies.
Amplifier: The currents traveling through the body will be very sensitive and small. To combat this and make the readings readable, an amplifier will be needed.
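To show what a digital complement of this filtering chain might look like, here is a minimal SciPy sketch: a 60 Hz notch for mains interference followed by a bandpass that keeps physiological ECG content. The sampling rate and cutoff choices are our assumptions, not final design values.

```python
import numpy as np
from scipy import signal

FS = 500.0  # assumed sampling rate in Hz

def clean_ecg(raw: np.ndarray) -> np.ndarray:
    """Remove mains hum, then keep the 0.5-40 Hz band typical of ECG."""
    b_notch, a_notch = signal.iirnotch(60.0, Q=30.0, fs=FS)
    x = signal.filtfilt(b_notch, a_notch, raw)
    b_bp, a_bp = signal.butter(2, [0.5, 40.0], btype="bandpass", fs=FS)
    return signal.filtfilt(b_bp, a_bp, x)
```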
Subsystem 1.3 - Power Supply
Power Supply: The power supply will take the output from a battery and step it down to the voltages needed to supply the electrocardiogram, impedance sensor, and microcontroller. This will likely use LDOs and/or buck converters.

Subsystem 2 - Microcontroller board
This board will take in the outputs from the ECG, impedance sensor, and power supply. The ECG and impedance sensor readings will then be processed and converted for display on a separate screen.

Criterion For Success
- Goal 1: Display heart rate via a graph in real time.
- Goal 2: Display impedance readings via a graph in real time.
- Goal 3: Design circuitry for EKG and impedance sensing and implement it via PCB.
- Goal 4: Design a board that can step down power from a battery for the EKG and impedance circuitry.
- Goal 5: Utilize a microcontroller to process readings.
- Goal 6: Work with medical students/mentors.
- Goal 7: Document how to extend this project for future expansion. |
||||||
28 | Real-time EEG Drowsiness Detection Device |
Nikhil Talwalkar Senturran Elangovan |
Zhuoer Zhang | Arne Fliflet | proposal1.pdf |
|
**Real-time EEG Drowsiness Detection Device**

Team members:
- Nikhil Talwalkar (nikhilt4)
- Senturran Elangovan (se10)

**Problem**
Many people unintentionally doze off while studying, working, or in situations that demand constant focus, such as driving or monitoring critical systems. Current consumer sleep trackers, such as smartwatches, are primarily designed to analyze and record sleep patterns after the fact. They cannot provide real-time interventions to prevent drowsiness-related lapses. In high-risk scenarios like long-distance or nighttime driving, even a few seconds of microsleep can result in serious accidents. Therefore, there is a need for a portable, proactive system that can detect drowsiness in real time and alert users before loss of focus occurs.

**Solution**
Our project proposes an implementation of a real-time drowsiness detection device. The system uses a lightweight EEG headband to continuously monitor the user's brain activity. By analyzing frequency changes in EEG signals associated with early stages of drowsiness, the device can detect when the user is at risk of falling asleep. When drowsiness is detected, the system triggers an audible or tactile alarm to immediately alert the user, helping prevent microsleep-related accidents or lapses in attention. Compared to computer vision–based systems, which rely on slower external cues such as eyelid closure, yawning, or head movement, and which often perform poorly in nighttime conditions, our device provides earlier and more reliable detection by directly monitoring EEG signals. To ensure usability and a practical aesthetic, the electrodes will be put into a cap-style wearable that requires no special alignment or positioning by the user. The device will be powered by a lithium polymer battery with a projected life of 8 to 10 hours. In terms of performance, most false positives and false negatives arise from interference in EEG signals, such as when the eyes are closed during meditation or half-open. Since these states are not relevant to driving scenarios, they will still trigger an alert. We expect a 1–2% false positive rate during normal focus and a less than 10% false negative rate when the user is drowsy.

**Subsystems:**

**Subsystem 1: EEG Headband Hardware**
- Lightweight, dry electrodes attached to the Fp1, Fp2, and Fpz regions of the head, wired neatly into a cap-style wearable to capture brain activity.
- Ideally dry, reusable Ag electrodes, subject to the budget. If allowed, higher-end electrodes can be integrated for future modifications.

**Subsystem 2: Signal Processing Unit**
- Analog noise filtering using a Butterworth filter, aiming for a bandpass between 0.5 and 30 Hz.
- Includes a high-CMRR operational amplifier to amplify the signal from the 10^-3 V range to around 1 V.
- An analog-to-digital converter to allow the signal to be filtered digitally for flexibility and data collection; f_s around 250 Hz for a good signal.

**Subsystem 3: Detection Algorithm**
- Software running locally to identify characteristic frequency changes in EEG that correspond to drowsiness.
- Uses open libraries such as MNE, YASA, and others for the EEG signal processing.
- ML (random forest, Naive Bayes) algorithms to determine if the user is at the brink of stage 1 sleep. A sketch of the kind of feature these models would consume follows below.

**Subsystem 4: Alert Mechanism**
- Audible (buzzer) or tactile (vibration motor) alerts to immediately notify the user when drowsiness is detected.
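As a sketch of the kind of feature the Subsystem 3 classifiers would consume, the code below computes a theta/alpha band-power ratio, a commonly used drowsiness indicator. The alert threshold is a placeholder that would be tuned on labeled data.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate from Subsystem 2

def band_power(x, lo, hi):
    """Integrated power spectral density in the [lo, hi) band."""
    f, pxx = welch(x, fs=FS, nperseg=FS * 2)
    mask = (f >= lo) & (f < hi)
    return np.trapz(pxx[mask], f[mask])

def drowsiness_ratio(eeg_window):
    """Rising theta (4-8 Hz) relative to alpha (8-13 Hz) is a common
    early marker of drowsiness."""
    return band_power(eeg_window, 4, 8) / band_power(eeg_window, 8, 13)

THRESHOLD = 1.5  # placeholder; the RF / Naive Bayes stage would learn this

def should_alert(eeg_window) -> bool:
    return drowsiness_ratio(eeg_window) > THRESHOLD
```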
**Subsystem 5: Power System**
- Lithium-polymer battery providing 8–12 hours of continuous operation for portability and reliability.
- Power electronics to ensure the battery doesn't overcharge or over-discharge and to maintain a limited current draw, plus, if time permits, temperature monitoring.
- Exploring AAA battery alternatives if they can be integrated without making the device too bulky.

**Criterion of Success:**
- EEG acquisition – the EEG captures reliable and accurate brain signals. This can be checked by blinking, which induces a relatively significant voltage spike in the EEG signal.
- Real-time sleep detection – the control system can detect when the user has microsleeps or is drowsy. We will feed open-source sleep and drowsiness data into the system and check its outputs.
- Prompt alerting – the buzzer triggers the alerting noise in a timely manner, with acceptable detection-to-alert latency. We will measure the time delay from the input to the output signal to ensure the latency is acceptable.
- Safety and comfort – the device is wearable for long hours and safe. Since this is user-based, we will have randomly selected volunteers wear it for a day and report any discomfort, quantified with a numeric survey. The TA can be included too, if they volunteer.

**Resources:**
- https://github.com/SuperBruceJia/EEG-DL
- https://github.com/lcsig/Sleep-Stages-Classification-by-EEG-Signals
- https://www.sciencedirect.com/science/article/pii/S2090447922002064
- Bohao Li et al., 2021, J. Phys.: Conf. Ser. 1907 012045 |
||||||
29 | Modular Wafer Track for Semiconductor Fabrication |
Hayden Kunas Jack Schnepel Nathan Pitsenberger |
Shengyan Liu | Rakesh Kumar | proposal1.pdf |
|
Modular Wafer Track for Semiconductor Fabrication

Team Members:
- Jack Schnepel (jackks2)
- Nathan Pitsenberger (nmp5)
- Hayden Kunas (hkunas2)

# Problem
In today's world, where semiconductors drive nearly every aspect of technological innovation, little room is left for small-scale fabrication and experimentation. Commercial wafer processing equipment ranges from tens of thousands to hundreds of millions of dollars, putting it far out of reach for hobbyists, educational laboratories, and early-stage researchers. Existing systems are not only cost-prohibitive but also lack the flexibility and modularity needed for experimentation on a smaller scale. As a result, innovation outside of large industrial fabs is limited, leaving students, independent researchers, and small labs without access to tools that enable exploration of semiconductor device fabrication.

# Solution
Our team's solution to this problem is to design, build, and demonstrate a modular, cost-effective wafer track system that lowers the barrier to entry for small-scale semiconductor processing. The idea is to create a track that will: transport wafers between the interchangeable processing modules; execute repeatable fabrication recipes that ensure process consistency; and communicate standardized instructions to each module through a defined packet interface, enabling true modularity and user-created modules. The system architecture will be layered: a Raspberry Pi will serve as the front-end controller, providing recipe management, a user interface, and real-time monitoring. An ESP32 microcontroller will delegate low-level instructions to each module and control the stepper motors for wafer transport. Individual modules (demonstrated through a wafer alignment station that reorients a wafer's major flat at the start of each recipe) will showcase the modular framework and mechanical precision of the track. By defining a standardized track-module interface and releasing the system as open source, our design will empower hobbyists, students, and small research labs to reproduce, extend, and customize the platform. This solution not only addresses cost barriers but also promotes accessibility, flexibility, and innovation in semiconductor fabrication education and prototyping.

# Solution Components

User Interface: This will be the subsystem that the user interacts with to create a series of steps, or recipes, that will be sent to the ESP32 for execution. This system will be based around a Raspberry Pi 4B with an Anyuse 15.6" portable monitor built into the system for the user to interact with.

Main mover: This will be the primary subsystem responsible for moving the wafer to the various modules. Components include two linear actuators and a rotational axis to transport the wafer to the modules, limit switches for the linear actuators, and proximity sensors (APDS-9930) to detect when the wafer has reached a certain location. Included with this is a power distribution PCB, which will be used to step down and rectify the wall voltage into the necessary DC voltages required for all of the motors and other components.

Wafer Aligner: This module will have a small vacuum to hold the wafer down to a disc while it is spun with a motor. A proximity sensor (APDS-9930) will be able to detect the flat edge, which can then be aligned to a certain area. A linear actuator will be used here as well to raise and lower the wafer onto this platform.

"Black Box": This is the subsystem that will act as a stand-in for potential future modules that could be added, such as spin coater or hot plate modules. In our project, this idea will be executed with an Arduino R4. The "black box" should not be considered part of the project, but only a showcase of the track's functions and capabilities.
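Since the standardized packet interface is still to be defined, the following is a minimal sketch of what a track-to-module command packet could look like. The field layout, opcodes, and checksum scheme are entirely hypothetical.

```python
import struct

# Hypothetical layout: magic byte, module address, opcode,
# 16-bit parameter, 8-bit additive checksum (little-endian).
FMT = "<BBBHB"
OP_IDENTIFY, OP_LOAD_WAFER, OP_RUN_STEP, OP_STATUS = range(4)

def make_packet(addr: int, opcode: int, param: int = 0) -> bytes:
    body = struct.pack("<BBBH", 0xA5, addr, opcode, param)
    return body + bytes([sum(body) & 0xFF])

def parse_packet(pkt: bytes):
    magic, addr, opcode, param, checksum = struct.unpack(FMT, pkt)
    if magic != 0xA5 or checksum != sum(pkt[:-1]) & 0xFF:
        raise ValueError("corrupt packet")
    return addr, opcode, param

# e.g., tell the aligner module at address 2 to run its alignment step:
# make_packet(2, OP_RUN_STEP)
```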
# Criterion For Success
This project will be labeled a success if the track can:
- Recognize and adapt to new modules being loaded,
- Accept user recipes and execute them systematically,
- Rotate wafers to the correct orientation, and
- Automatically transport wafers to the correct module slot depending on module position. |
||||||
30 | Transverse String Organ |
Ash Huang Eddy Perez Kellen Sakaitani |
Shengyan Liu | Rakesh Kumar | proposal1.pdf |
|
# Title: Transverse String Organ

Team Members:
- Eddy Perez (ekperez2)
- Ash Huang (akhuang3)
- Kellen Sakaitani (kellens4)

# Problem
Electric guitar feedback is traditionally produced by amplifying the signal from the instrument loud enough that the energy stored as sound can induce a sustained feedback loop in the guitar string. Products such as the EBow take this concept and remove the inefficiency of energy transmission through sound by instead sending the amplified signal through magnetic driver coils (think of speaker drivers) directly into the string. Products such as this implement harmonic controls through analog filters in the signal chain, causing the string to resonate in higher octaves. Techniques such as this create a unique timbre from this instrument which can be finely controlled by the player and the electronics of the instrument. This unique timbre is restricted to a small number of notes (1-6 strings) at any given time and can only be utilized by musicians who are trained on guitar.

# Solution
Our team would like to bypass these restrictions by making a harp- or organ-like instrument with one feedback system per string. This instrument would ideally consist of twelve strings representing the chromatic scale in either the first or second musical octave. Our instrument would be controlled over a MIDI interface, allowing it to generalize to a broad range of musical controllers for those with backgrounds in various instruments. The instrument would be comprised of two main systems: the Master Board and the individual DSP Feedback Systems. The Master Board will act as the host of the system; it will listen to a MIDI signal through the UART peripheral of an STM microcontroller and translate specific MIDI commands to an I2C bus, where this system would act as the master. This board will also include a 3.3V DC/DC regulator to power the MCUs of the other boards of the system. On the slave side of the I2C bus will be several (1 per string) low-power DSP microcontrollers. These microcontrollers will implement the filtering that traditional sustain systems typically do, using DSP rather than analog filters. This will allow us to perform extended functionality such as the automatic muting of notes and more controlled harmonic filtering. These DSPs will be paired with an electromagnetic pickup (similar to that of an electric guitar) to sample a signal from the string as it vibrates, and an electromagnetic driver which will receive an amplified and filtered version of the original signal in order to induce feedback into the string. Each electromagnetic driver will be powered by a discrete class AB amplifier. We would like to use an off-the-shelf 12V DC power supply to power the entire system.

# Solution Components

## Subsystem 1
The Master Board will be comprised of three parts: an optocoupler, the master MCU, and an integrated 3.3V DC/DC converter. The optocoupler will serve the purpose of reference isolation for the MIDI controller port. This is standard circuit design for a MIDI receiver and will require minor peripheral circuitry to perform tasks such as ESD protection and signal biasing. The master MCU will be a low-power MCU capable of basic communications, such as an STM L0 / L4. Since the bandwidth requirement of this MCU is actually less than that of the DSP boards, we will likely use the same MCU as the DSP boards for cost optimization. In particular, we were considering the STM32 L431CB, the reasoning for which will be explained in the DSP section.
The DC/DC regulator will likely be a TI TPS62903. We've decided to use an integrated regulator as the functional design of this type of circuit is not core to the working concept of this project. This IC has an input range of up to 17 volts, which aligns with the off-the-shelf 12-volt power supply that we are hoping to use. This specific regulator has a sustained maximum current output of 3A, which is significantly greater than the maximum current draw (140mA) for 13 of the MCUs above, although it is also important to note that these MCUs are not expected to draw nearly that much current as they will not be powering any peripherals. This IC is a QFN package which will require an SMD stencil and reflow soldering; however, members of our team (Eddy Perez) have experience with BGA design and reflow soldering from prior classes.

## Subsystem 2
The DSP boards will be comprised of four parts.

The MCU that we are hoping to use for both these boards and the Master Board is the STM32 L431CB. This MCU is part of STM's low-power series of microcontrollers, and is likewise cheap and accessible, which is important for scalability in a project that uses several of them. This MCU comes in a QFP package and therefore will be hand-solderable. Additionally, this specific MCU has an internal factory-trimmed 16MHz oscillator, which is key to reducing the overhead circuitry needed for DSP. In the same vein, this MCU has a built-in DAC which will allow us to directly drive amplifier circuitry rather than using PCM output and smoothing circuitry. While it is likely possible to process multiple concurrent channels of audio using this MCU, we would like to use 1 DSP per string to avoid any potential bandwidth restrictions or architectural complications when executing this design. Our choices regarding cost and scalability reflect this decision.

The DSP boards will make use of two electromagnetic coils. The signal for each string will start at a pickup (similar to that of an electric guitar) localized to each specific string. These pickups will have an output impedance of roughly 10 kOhms, which will enable them to directly drive the ADC pins of the DSP MCU. Again, this is a decision that was made to optimize the cost of the project. The output of these pickups can be attenuated passively using potentiometers for level matching. Potentially higher performance for this system could be achieved by using lower-impedance pickups and op-amp buffers before sampling. This is a discussion that we would like to have with a TA / professor before completely finalizing the design, as it may make coil assembly far easier (using fewer winds).

The second electromagnetic coil will be the output driver for the string. This coil functions identically to the voice coil of a speaker: a power-amplified signal is passed through a low-impedance coil (~8 Ohms) to move a magnet. In our case, a magnet will be positioned next to the string, thus magnetizing the string and allowing it to capture power from the Driver Coil. The same concept applies in reverse for the pickup mentioned above.
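As a sketch of the harmonic filtering the DSP boards would implement, the code below derives a standard constant-peak-gain bandpass biquad (per the RBJ audio-EQ cookbook) and applies it sample by sample, much as the STM32 firmware would in fixed point. The center frequency, Q, and sampling rate here are illustrative.

```python
import math

def bandpass_biquad(f0, q, fs):
    """Constant 0 dB peak-gain bandpass coefficients (RBJ cookbook)."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [alpha / a0, 0.0, -alpha / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def process(x, b, a):
    """Direct form I difference equation, one sample at a time."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1, y2, y1 = x1, s, y1, out
        y.append(out)
    return y

# Isolating the 2nd harmonic of the A string (110 Hz fundamental):
# b, a = bandpass_biquad(220.0, q=20.0, fs=48000.0)
```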
The final component of this subsystem, and of our design in general, is the discrete class AB amplifier that will be attached to each DSP board. Each of these amplifiers will be connected to the 12-volt power supply that powers the entire system, allowing for greater power output than could be supplied by an MCU or battery. These amplifiers will be designed for a maximum output current of approximately 600mA, as this is the upper range of what off-the-shelf guitar sustain devices typically draw. Although we would like to use an off-the-shelf power supply that can power all twelve sustain devices at max power concurrently, we will implement digital controls, such as restricting how many notes may be turned on concurrently, to ensure that we stay below power limits.

## Additional Components
Our project will require the following additional components:
- Off-the-shelf 12V power supply (ideally > 60 W, however this isn't strictly necessary).
- Instrument body fabricated out of plywood/MDF. The design of this will include minor drilling and joinery and will incorporate a bridge for the strings to ensure proper acoustics and sustain.
- Guitar-style tuners.
- Electric guitar strings.
- Misc. standoffs for PCB mounting & driver alignment.

# Criterion For Success
- All twelve strings can continuously ring out (individually) as long as power is supplied.
- Small chords can be made: a minimum of 3 strings ringing out concurrently.
- Harmonic control of each string is possible: the instrument can isolate strings at their fundamental frequency, and at the 2nd & 4th harmonics (octave & 2 octaves).
- Damping of each string is possible: the system can use negative feedback to mute strings rather than letting them ring out after a note is turned off.

Prospective block diagram: |
||||||
31 | NeuroGuard |
Aidan Moran Alexander Krejca Stephen Simberg |
Shiyuan Duan | Cunjiang Yu | other1.pdf proposal1.pdf |
|
**Problem**
Modern cauterization tools are blunt instruments that cannot distinguish the tissue that must be cauterized from nerves that could be damaged in the process.

**Solution**
NeuroGuard aims to combine low-frequency nerve stimulation for nerve identification with high-frequency cautery to create a safer surgical tool.

**Subsystems**
- High voltage step-down
- PWM
- Input Remote Sensing and Digital Signal Processing

**High Voltage System**
The aim of this project is to integrate into current hospital electronics. The input for the power system will be 2000V DC, which must be stepped down to 5V DC on the output. The output must then feed a large capacitor that feeds into a power MOSFET, which will drive the current pulses for nerve stimulation.

**PWM**
The PWM driver circuitry must take inputs from the remote sensing and drive the power MOSFET correctly. This project will receive support from medical students as well as ECE faculty. I have attached the abstract to this post that better outlines the aims of the electronics for this project.

**Team Members (Seeking More)**
- afmoran2@illinois.edu

**NeuroGuard Abstract**
[Expanding the Scope of Intraoperative Neuromonitoring with Nerve-Specific Stimulatory Waveform Design](https://drive.google.com/file/d/1GFjlQcn7_o1pGsj8yHZlhmiguU0Gc3VG/view?usp=sharing) |
||||||
32 | Insight: Cardiovascular Screening Device |
Ethan Pereira Jay Nathan Rishab Iyer |
Weiman Yan | Cunjiang Yu | proposal1.pdf |
|
After discussing with both Professor Rakesh Kumar and Professor Arne Fliflet, we have addressed some limitations and concerns regarding our initial project proposal. Here is our revised proposal.

# Insight: Cardiovascular Screening Device

## Team Members:
- Jay Nathan (jayrn2)
- Rishab Iyer (riyer20)
- Ethan Pereira (ethanrp2)

# Problem
Cardiovascular disease (CVD) is the leading cause of death worldwide, responsible for nearly 20 million deaths annually, about one in three deaths overall. A significant share of these fatalities occur without prior diagnosis: approximately 45% of sudden cardiac deaths happen in individuals with no previously recognized heart disease, while nearly 20% of adults with hypertension and up to 23% of those with atrial fibrillation remain undiagnosed. These silent conditions, such as hypertension, arrhythmias, and sinus bradycardia, drive the majority of preventable CVD deaths. Current solutions remain fragmented, while comprehensive screening still requires multiple expensive clinical visits, such as blood pressure measurement, lipid panels, ECGs, and rhythm monitoring, creating barriers for uninsured or underserved populations. The impact is most severe in rural communities, where mortality rates are 20% higher than in urban areas due to limited access to screening. Yet the challenge extends to cities as well, where preventive tests are often costly, not covered by insurance, and therefore underutilized. Consumer devices like blood pressure cuffs, smartwatches, and single-lead ECGs are disjointed, expensive, and difficult to interpret. Critically, there is no affordable, comprehensive, and user-friendly screening solution that can detect CVD risks.

# Solution
We propose a low-cost cardiovascular screener to detect the leading drivers of CVD: hypertension, atrial fibrillation (AFib), and sinus bradycardia (SB). The device combines electrocardiography (ECG), photoplethysmography (PPG), and accelerometer-based noise correction for accurate measurements. Mathematical regression-based models will then analyze the digital signals and generate comprehensible readings. Technically, the system consists of a single custom PCB with built-in ECG and PPG sensors plus an accelerometer to capture physiological signals simultaneously. Both the ECG and PPG sensors will be placed on the board to focus on immediate automated screening. A data acquisition board with an ATmega microcontroller synchronizes and packages the data for transmission via USB. A connected PC runs lightweight mathematical regression models for hypertension, AFib, and SB detection. Results are displayed through a simple web dashboard for easy interpretation.

# Solution Components

## Subsystem 1: Data Acquisition System (DAS)
Function: The system captures physiological data from both an ECG and a PPG sensor (both on board). An accelerometer is also connected to support motion-based denoising of the signal. The microcontroller processes the data we're reading and sends it into the ML pipeline on the PC through a USB port.
Components:
- ECG Sensor: AD8232 Analog Front End for ECG
- PPG Sensor: MAX30102
- Accelerometer: Adafruit LIS3DH
- Microcontroller: ATmega328P MCU
- Comm Port: USB

## Subsystem 2: Firmware & Communication
Function: Firmware handles ADC sampling, calibration routines, and packaging data for transmission. It supports reading data from both sensors. USB-based serial communication happens between the ATmega on the DAS board and the host PC. The firmware will send ECG/PPG data as formatted CSV files to be processed by the models.
Components:
- Microcontroller: ATmega328P MCU
- Sensor Buses: SPI/I²C
- Comm Port: USB-C or micro-USB connector on the DAS PCB
- Host Software: an application on the PC to receive and display data on the web app
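As a sketch of the host side of this link, the code below reads the CSV stream over USB serial with pyserial. The port name, baud rate, and row format ("ecg,ppg,ax,ay,az") are placeholders for whatever the firmware ends up emitting.

```python
import serial  # pyserial

def stream_samples(port="/dev/ttyUSB0", baud=115200):
    """Yield (ecg, ppg, accel) tuples parsed from the DAS serial stream."""
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            line = ser.readline().decode(errors="ignore").strip()
            if not line:
                continue
            try:
                ecg, ppg, ax, ay, az = map(float, line.split(","))
            except ValueError:
                continue  # skip malformed rows
            yield ecg, ppg, (ax, ay, az)
```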
## Subsystem 3: Backend Processing and ML Inference
Function 1: Use ECG/PPG-derived HRV (heart rate variability) and irregularity features to classify AFib vs. normal rhythm using a lightweight classifier on RR (beat-to-beat interval) sequences.
Function 2: Compute mean HR and rhythm regularity from ECG/PPG to flag sinus bradycardia when < 60 bpm using simple threshold logic (no training needed).
Function 3: Combine PTT (ECG→PPG latency) and PPG features to classify Normal/Elevated/Hypertensive risk levels using a classifier trained with cuff labels.
Components:
- Compute: PC (laptop) for computation.
- ML Libraries: PyTorch, scikit-learn, XGBoost, MLflow
- Preprocessing module: filtering, motion gating, feature extraction (RR, PP, PTT, HRV, PPG morphology).
- Models: AFib classifier, Brady threshold logic, BP-risk classifier.
- Calibration & Decision Layer: produces JSON outputs with labels, confidence, and window duration.

# Criterion For Success

## Functional Affordable Prototype
A working hardware prototype integrating both ECG and PPG on the same board, collecting, processing, and displaying data in real time. Target total cost remains < $70.

## Signal Processing with Baseline Accuracy
The system will perform noise filtering and feature extraction (heart rate and rhythm patterns) on baseline ECG and PPG datasets from a sampled group of individuals. Success criteria include:
- Accuracy Benchmarks: Heart rate estimated within ±5 bpm of reference values in ≥80% of trials, and rhythm classification achieving ≥75% agreement with reference annotations.
- Literature-based Targets: Based on literature, we should aim for AFib detection AUC ≥ 0.95 (high ability to differentiate AFib from normal rhythm), sinus bradycardia detection F1 ≥ 0.95 (recall for slow, regular rhythms), and hypertension classification Macro-F1 ≥ 0.80 with one calibration reading (accuracy across Normal/Elevated/Hypertensive categories when compared to a real reference).
- Dataset Training & Validation: We will validate performance through repeated training and testing, comparing our outputs against certified standards for consistency. Evaluation will be conducted using publicly available medical datasets such as MIT-BIH (ECG) and PPG-DaLiA (PPG) to test reliability across diverse conditions.
- Noise Mitigation: Integration of accelerometer inputs for motion-based denoising.

## ML Interpretation
Backend ML models must correctly interpret ECG/PPG readings to achieve at least 85% classification accuracy on validation datasets for AFib detection and hypertension risk classification, showing that the models can reliably separate normal from abnormal conditions when tested against certified public medical datasets.

## Repeatability & Sensor Placement
The same individual tested five times in a row should yield similar results, with differences within a 10% margin for all key metrics (heart rate, rhythm classification, BP risk). For standard ECG lead and sensor placement, we will use standard placement (as shown in LITFL ECG Lead Positioning) to reduce variability caused by electrode misplacement.
# Resources:
- ECG lead positioning: https://litfl.com/ecg-lead-positioning/
- Atrial fibrillation: https://pmc.ncbi.nlm.nih.gov/articles/PMC11262392/
- Sinus bradycardia: https://www.researchgate.net/publication/377965437_Detecting_Sinus_Bradycardia_From_ECG_Signals_Using_Signal_Processing_And_Machine_Learning
- Hypertension: https://pmc.ncbi.nlm.nih.gov/articles/PMC11904724/#S10 |
||||||
33 | Budget Clip-On Posture Checker |
Ashit Anandkumar Destiny Jefferson Edward Ruan |
Wenjing Song | Cunjiang Yu | proposal1.pdf |
|
Title: Budget Clip-On Posture Checker

Team Members:
- Ashit Anandkumar (aa97)
- Edward Ruan (eruan3)
- Destiny Jefferson (djeff4)

# Problem
Today, people work long hours at desks, either using their computers or mobile devices. This leads to poor posture, whether through rounded shoulders, slouching, or tilting the head forward. These poor habits can lead to chronic neck, back, and shoulder pain, fatigue, and possibly spinal and musculoskeletal issues. Most of the time people subconsciously fall into a position of poor posture and don't notice its negative effects until they experience discomfort. Current solutions include braces, which are restrictive and expensive; camera-based applications, which require users to sit in front of a camera and are tedious and impractical; and timed reminders that fire without measuring actual poor posture, which people tend to ignore. There needs to be a discreet solution that can accurately monitor posture in real time, provide immediate feedback, and remain portable. There is currently a product on Amazon that does this, but it is expensive, and no one should be emptying their wallet for a simple but useful posture checker.

# Solution
The Clip-On Posture Checker will be an affordable, small wearable device that is clipped onto the user's upper shirt or upper body. This device will continuously monitor the body's orientation and its deviation from proper posture. Everyone's proper posture is different, which is why the device has a calibration button the user can press when sitting/standing in their proper posture; after a set time the device will be calibrated. A deviation outside of this calibrated range will trigger immediate feedback. When the user slouches or leans forward significantly, the device will immediately provide haptic feedback, prompting the user to correct their posture.

# Solution Components

## Subsystem 1 - Sensory
This sensory subsystem will detect the user's orientation and motion. The component(s) required will be something like an ICM-20948, which contains an accelerometer, gyroscope, and magnetometer to properly detect the user's posture deviation from their calibrated proper posture.

## Subsystem 2 - Processing/MCU
For the processing subsystem, we will use the Arduino Nano 33 BLE or an ESP32 to handle all the sensor data collection, filtering, and feedback control. Both of these microcontrollers have a compact size that will fit this wearable project. The microcontroller will continuously read the orientation and acceleration sensors and calculate whether the posture is correct or not. Additionally, there will be a filtering stage to calculate the tilt/change in posture from the calibrated position; this filtering will also detect whether a reading is just a slight movement by the user or a true posture change. Lastly, the microcontroller will be in charge of sending feedback to the user to indicate that there is a change in posture. A sketch of this tilt-and-threshold logic follows below.

## Subsystem 3 - Feedback
This feedback subsystem serves to notify the user in real time when they have poor posture. It will be a simple vibration motor for haptic feedback; an ERM motor will suffice. Optionally, LEDs or a buzzer can also be included.
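As a sketch of the tilt-and-threshold logic mentioned in Subsystem 2, the code below derives a forward-lean angle from the accelerometer's gravity components and debounces brief movements. The threshold and hold-time values are placeholders echoing the success criteria below.

```python
import math

def pitch_deg(ax, ay, az):
    """Forward-lean angle from the gravity vector, in degrees."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

calibrated = None      # set to pitch_deg(...) when the user calibrates
THRESHOLD_DEG = 15.0   # deviation that counts as slouching
HOLD_SAMPLES = 50      # ~1 s at 50 Hz, filters out brief movements
bad_count = 0

def update(ax, ay, az) -> bool:
    """Return True when the vibration motor should fire."""
    global bad_count
    if calibrated is None:
        return False
    if abs(pitch_deg(ax, ay, az) - calibrated) > THRESHOLD_DEG:
        bad_count += 1
    else:
        bad_count = 0
    return bad_count >= HOLD_SAMPLES
```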
## Subsystem 4 - Power
This power subsystem will provide stable power and lasting operation to ensure proper posture-checking behavior. The components required would be a small rechargeable 3.7V LiPo battery at 200-500 mAh, a voltage regulator for the MCU, and a battery charging circuit.

## Subsystem 5 - Enclosure
This mechanical subsystem serves to enclose the entire device and its components. The components could simply be a plastic shell to hold all the components and a metal clip so the user can clip the device onto their body.

# Criterion For Success
For the device to be successful:
- The device shall detect the user's torso tilt angle within ±5° accuracy relative to the calibrated upright posture.
- The device shall provide real-time feedback (vibration or LED) within 1 second when posture deviation exceeds a threshold angle (e.g., 15° forward lean).
- The device shall operate continuously for at least 8 hours on a single battery charge.
- The device shall log posture data with a time resolution of at least 1 minute and store or transmit a minimum of 24 hours of usage history. |