Project 97: Facial Matching Display Mirror w/ Motion Sensor

TA: Argyrios Gerogiannis
Documents: proposal1.pdf
# Facial Matching Smart Mirror w/ Motion Sensor

Team Members:
- Keenan Peris (peris2)
- Krish Sahni (krish3)
- Connor Tan (cctan2)

# Problem
STEM outreach spaces often rely on static posters or non-interactive exhibits to showcase different career paths one can take within these industries. These “displays” fail to actively engage or create a personal connection with visitors, especially for students who may not initially see themselves represented in STEM fields. Thus, there is a need for an interactive, technology-driven exhibit that captures attention, responds to visitor presence, and presents STEM role models in a way that feels personal, modern, and engaging.

# Solution
We propose an interactive, mirror-like display that appears inactive and reflective until a person stands in front of it. When the system detects a person, it turns on and prompts the user to choose, from a list of options, which "quantum" career they would like to explore. The system then uses a camera and basic image processing to identify characteristics of the person standing in front of it (ethnicity, sex, etc.). This data is then used to find a matched scientist or engineer, who is presented via a short video with their name, role, and a brief quote about their interest in science or engineering.


# Solution Components

## Subsystem 1: Presence Detection
This subsystem utilizes the Xbox Kinect’s Infrared (IR) depth stream to create a digital "tripwire" for the system. The Raspberry Pi processes the Kinect’s raw depth map. By analyzing the depth frames for skeletal silhouettes, the system can distinguish between a human walking toward the mirror and background movement (such as a door swinging), ensuring the interactive UI only triggers when a user is positioned within the optimal 1.5- to 3.5-meter interaction zone.

Components:
- Raspberry Pi 4
- Microsoft Xbox One And 360 Kinect V2 Model 1520 Motion Sensor Camera
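As a rough illustration, the depth-based "tripwire" described above could be sketched as follows. The pixel threshold and frame size are assumptions; a real implementation would read depth frames from a Kinect driver such as libfreenect2 and add the skeletal-silhouette check.

```python
# Sketch (assumptions: millimeter depth values in a NumPy array,
# human-sized blob approximated by a raw pixel count).
import numpy as np

NEAR_MM, FAR_MM = 1500, 3500   # 1.5-3.5 m interaction zone
MIN_PIXELS = 5000              # rough threshold for a human-sized silhouette

def person_in_zone(depth_frame: np.ndarray) -> bool:
    """Return True when enough pixels fall inside the interaction zone."""
    in_zone = (depth_frame > NEAR_MM) & (depth_frame < FAR_MM)
    return bool(in_zone.sum() >= MIN_PIXELS)
```

A door swinging at the edge of the frame would typically occupy too few in-zone pixels to cross the threshold, which is the intuition behind rejecting background movement.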

## Subsystem 2: Mirror Display

This subsystem is responsible for presenting the interactive user interface and short-form video content of selected STEM role models. A partially reflective (two-way) mirror is placed in front of an LED TV, which functions as an active backlight positioned directly behind the mirror film. When the system is inactive, the display outputs a dark (near-black) image, so minimal light is transmitted through the mirror and a reflective, mirror-like appearance is preserved. When activated, the TV increases brightness and displays high-contrast video and UI elements, allowing light to pass through the mirror film and making the content visible to the user. This contrast-based control enables seamless transitions between the inactive mirror state and the active display state without mechanical shutters or moving parts.

Components:

- 18”x30” Glass or Acrylic Panel
- 18”x30” 60%R/40%T Mirror film
- INSIGNIA 32" Class F20 Series LED HD Smart Fire TV


## Subsystem 3: Camera & Video/Image Processing
This subsystem captures real-time visual data of the area in front of the display using a Logitech C920 webcam and processes it using the Raspberry Pi to detect the presence and position of a person. It provides the raw image data needed for person and face detection. The processing performed in this subsystem prepares image regions of interest for face detection and confidence evaluation. This ensures reliable and efficient visual analysis.

Components:
- USB camera - Logitech C920
- Camera mounting bracket
- Raspberry Pi 4
- Image processing software
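A minimal sketch of the preprocessing step described above, assuming RGB frames from the C920; the luminance weights and centered region-of-interest convention are our own choices for illustration.

```python
# Sketch: grayscale an RGB frame and crop a centered region of interest
# to hand off to face detection. roi_fraction is an assumed tuning knob.
import numpy as np

def prepare_roi(frame_rgb: np.ndarray, roi_fraction: float = 0.6) -> np.ndarray:
    """Grayscale the frame and return the centered ROI as uint8."""
    gray = frame_rgb.astype(float) @ np.array([0.299, 0.587, 0.114])
    H, W = gray.shape
    rh, rw = int(H * roi_fraction), int(W * roi_fraction)
    y0, x0 = (H - rh) // 2, (W - rw) // 2
    return gray[y0:y0 + rh, x0:x0 + rw].astype(np.uint8)
```

Cropping to the center cuts the pixel count handed to the detector roughly in half, which matters on a Raspberry Pi.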

## Subsystem 4: Interactive UI
The Interactive UI serves as the bridge between user input and the career-matching database, leveraging Xbox Kinect skeleton tracking for a touchless experience. The interface display shows career options that users can select by moving their hands to "hover" over digital buttons. The display is controlled by an STM32 Microcontroller.

Components:
- STM32 Microcontroller
- Display
- Microsoft Xbox One And 360 Kinect V2 Model 1520 Motion Sensor Camera
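The hover-to-select behavior could be approximated with a dwell timer like the following sketch; `HoverButton` and the 1.5 s dwell time are hypothetical, and the hand coordinates are assumed to come from the Kinect skeleton stream mapped to screen pixels.

```python
# Sketch: a button "clicks" once the tracked hand stays over it long enough.
import time

DWELL_SECONDS = 1.5   # assumed dwell time before a hover counts as a select

class HoverButton:
    def __init__(self, x: int, y: int, w: int, h: int):
        self.rect = (x, y, w, h)
        self._enter_time = None

    def update(self, hand_x: float, hand_y: float, now: float = None) -> bool:
        """Return True once the hand has hovered long enough to select."""
        now = time.monotonic() if now is None else now
        x, y, w, h = self.rect
        if x <= hand_x <= x + w and y <= hand_y <= y + h:
            if self._enter_time is None:
                self._enter_time = now
            elif now - self._enter_time >= DWELL_SECONDS:
                self._enter_time = None
                return True
        else:
            self._enter_time = None
        return False
```

Dwell selection avoids accidental clicks from a hand merely passing over a button, which is why it is common in touchless kiosks.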

## Subsystem 5: Face Detection Confidence Determination
This subsystem evaluates whether a face is present in the captured image and determines how confident the system is in that detection. It focuses solely on detection quality, such as face size, position, and clarity. The resulting confidence score is then used to decide when the system should proceed with selecting and displaying a profile, helping prevent false triggers and unreliable matches.

Components:
- Face detection software module - OpenCV
- Raspberry Pi
- Image quality evaluation logic

## Subsystem 6: Data-based search
This subsystem selects an appropriate/matching scientist or engineer profile from a local database once sufficient confidence has been established. Using predefined metadata and selection rules, the subsystem matches user context to available profiles and outputs the selected content to the UI.

Components:
- Local profile database - JSON or SQLite
- Raspberry Pi
- Local storage - microSD card
- Profile selection and matching logic in software
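The selection rules might look like the following sketch; the profile schema (the `career` and `tags` fields) is an assumption for illustration, and in practice the list would be loaded from the JSON or SQLite database on the microSD card.

```python
# Sketch: pick the profile in the chosen career with the most tag overlap
# against the user context produced by the image-processing subsystems.
def select_profile(profiles, career, user_tags):
    """Return the best-matching profile dict, or None if the career is empty."""
    candidates = [p for p in profiles if p["career"] == career]
    if not candidates:
        return None
    return max(candidates,
               key=lambda p: len(set(p["tags"]) & set(user_tags)))
```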

## Subsystem 7: Power Management
This subsystem is responsible for safely distributing power to all low-voltage electronic components in the system, including the Raspberry Pi and the microcontroller-based control hardware. A regulated 5 V DC supply will power the Raspberry Pi, with additional 3.3 V regulated rails for any local microcontroller logic.
This subsystem operates independently of the Kinect sensor and LED TV, which each use their own dedicated power supplies.
Components:
- Buck Converter LM2596
- MCP1700 / MCP1703
- Schottky Diode
- KABCON Kinect Adapter for Xbox One S, Xbox One X, PC Windows 10 8.1 8, Xbox Kinect Power Supply for Xbox 1S, 1X Kinect 2.0 Sensor
- INSIGNIA 32" Class F20 Series LED HD Smart Fire TV AC power adapter (comes with TV)


# Criterion for Success
Our final product should demonstrate the following capabilities in order to be considered successful:
## Core Functionality
The mirror detects a person standing in front of it, with detection being automatic (no button press needed).
The system consistently activates when someone enters range.

## Visual Transformation
The display clearly switches from a mirror-like idle state to digital content.
The overlay of image/video and text is visible, well aligned, and uncluttered.

## Correct Content Triggered
The system shows the intended person (scientist, engineer, etc.) along with the correct associated text and/or short video.
No random or incorrect activations occur.
Facial matching is correct more than 80% of the time.

## Responsiveness
Delay from detection to the display change is short enough to feel natural.

# PROJECT TITLE: Bracelet Aid for deaf people/hard of hearing

# TEAM MEMBERS:

- Aarushi Biswas (abiswas7)

- Anit Kapoor (anityak3)

- Yash Gupta (yashg3)

# PROBLEM

We are constantly hearing sounds around us that notify us of events occurring, such as doorbells, fire alarms, phone calls, alarms, or vehicle horns. These sounds are not enough to catch the attention of a d/Deaf person and sometimes can be serious (emergency/fire alarms) and would require the instant attention of the person. In addition, there are several other small sounds produced by devices in our everyday lives such as washing machines, stoves, microwaves, ovens, etc. that cannot be identified by d/Deaf people unless they are observing these machines constantly.

Many people in the d/Deaf community address some of these problems, such as the doorbell, by installing devices that cause the light in a room to flicker. However, these devices are generally not installed in all rooms and obviously cannot notify people who are asleep. Another common solution is purchasing devices like smartwatches that interact with mobile phones to provide notifications; however, these smartwatches are usually expensive, do not fulfill all of these needs, and require nightly charging cycles that diminish their usefulness in the face of the aforementioned issues.

# SOLUTION

A low-cost bracelet aid with the ability to convert sounds into haptic feedback in the form of vibrations will be able to give d/Deaf people the independence of recognizing notification sounds around them. The bracelet will recognize some of these sounds and create different vibration patterns to catch the attention of the wearer as well as inform them of the cause of the notification. Additionally, there will be a visual component to the bracelet in the form of an OLED display which will provide visual cues in the form of emojis. The bracelet will also have buttons for the purpose of stopping the vibration and showing the battery on the OLED.

For instance, when the doorbell rings, the bracelet will pick up the doorbell sound after filtering out any other unnecessary background noise. On recognizing the doorbell sound, the bracelet will vibrate with the pattern associated with the sound in question which might be something like alternating between strong vibrations and pauses. The OLED display will also additionally show a house emoji to denote that the house doorbell is ringing.

# SOLUTION COMPONENTS

Based on this solution we have identified that we need the following components:

- INMP441 (Microphone Component)

- Brushed ERM (Vibration Motor)

- Powerboost 1000 (Power subsystem)

- 1000 mAh LiPo battery x 2 (hot swappable)

- SSD1306 (OLED display)

## SUBSYSTEM 1 → SOUND DETECTION SUBSYSTEM

This subsystem will consist of a microphone and will be responsible for picking up sounds from the environment and conducting a real-time FFT on them. After this, we will filter out lower frequencies and use a frequency-matching algorithm to infer if a pre-programmed sound was picked up by the microphone. This inference will be outputted to the main control unit in real-time.
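A sketch of this pipeline under the assumption of 16 kHz mono samples and single-tone alert signatures; the sample rate, high-pass cutoff, and tone table are illustrative values, not measured ones.

```python
# Sketch: windowed FFT, high-pass filter, then match the dominant
# frequency against a table of pre-programmed alert tones.
import numpy as np

SAMPLE_RATE = 16000
HIGH_PASS_HZ = 300                                    # drop low-frequency rumble
KNOWN_SOUNDS = {"doorbell": 960.0, "smoke_alarm": 3200.0}  # assumed tones

def classify_window(samples: np.ndarray, tolerance_hz: float = 50.0):
    """Return the alert name whose tone matches the dominant frequency."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    spectrum[freqs < HIGH_PASS_HZ] = 0.0              # high-pass: zero low bins
    peak = freqs[int(np.argmax(spectrum))]
    for name, tone in KNOWN_SOUNDS.items():
        if abs(peak - tone) <= tolerance_hz:
            return name
    return None
```

Real alarms and doorbells are not pure tones, so a production version would likely match several harmonics or a spectral template rather than a single peak.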

## SUBSYSTEM 2 → VIBRATION SUBSYSTEM

This subsystem will be responsible for vibrating the bracelet on the wearer’s wrist. Using the vibration motor mentioned above, we should have a frequency range of 30 Hz to 500 Hz, which should allow for the generation of a variety of distinguishable patterns. This subsystem will be responsible for generating the patterns and controlling the motor, as well as prompting the Display subsystem to visualize the type of notification detected.
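One way to encode the patterns is as sequences of (duty cycle, duration) steps, as in this sketch; the pattern table and function names are hypothetical, and on hardware `set_motor` would drive a PWM pin for the ERM motor.

```python
# Sketch: each alert maps to a sequence of (duty_cycle, seconds) steps.
# The hardware hooks (set_motor, sleep) are injected so the logic is testable.
PATTERNS = {
    "doorbell":    [(1.0, 0.3), (0.0, 0.3)] * 3,   # strong pulse / pause
    "smoke_alarm": [(1.0, 1.0), (0.0, 0.2)] * 5,   # long urgent bursts
    "microwave":   [(0.5, 0.2), (0.0, 0.2)] * 2,   # short gentle taps
}

def play_pattern(name, set_motor, sleep):
    """Drive the motor through a named pattern, then ensure it is off."""
    for duty, seconds in PATTERNS[name]:
        set_motor(duty)
        sleep(seconds)
    set_motor(0.0)
```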

## SUBSYSTEM 3 → DISPLAY SUBSYSTEM

The Display subsystem will act as a set of visual cues in addition to the vibrations, as well as a visual feedback system for user interactions. This system should not draw a lot of power as it will be active only when prompted by user interaction or by a recognized sound. Both of these scenarios are relatively uncommon over the course of a day, which means that the average power draw for our device should still remain low.

## SUBSYSTEM 4 → USER INTERACTION SUBSYSTEM

This subsystem is responsible for the interaction of the user with the bracelet. This subsystem will include a set of buttons for tasks such as checking the charge left on the battery or turning off a notification. Checking the charge will also display the charge on the OLED display thus interacting and controlling the display subsystem as well.

## SUBSYSTEM 5 → POWER SUBSYSTEM

This subsystem is responsible for powering the device. One of our success criteria is long battery life and low downtime. To achieve this, we will use a power boost circuit in conjunction with two rechargeable 1000 mAh LiPo batteries: while one is charging, the other can be used, so the user goes without the device for no more than a few seconds at a time. We expect the device to draw 20-50 mA, giving an estimated 20-50 hours of use per charge. The power boost circuit and the LiPo battery’s JST connector also allow secure and quick battery swaps.
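The battery-life estimate above follows from simple arithmetic on the stated capacity and draw:

```python
# Runtime estimate: capacity (mAh) divided by average draw (mA) gives hours.
CAPACITY_MAH = 1000

def runtime_hours(draw_ma: float) -> float:
    return CAPACITY_MAH / draw_ma

# 20 mA -> 50 h per battery; 50 mA -> 20 h per battery.
```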

# CRITERION FOR SUCCESS

- The bracelet should accurately identify only the crucial sounds in the wearer’s environment, with each type of sound having a fixed, unique vibration + OLED display pattern associated with it

- The vibration patterns should be distinctly recognizable by the wearer

- Should be relatively low cost

- Should have prolonged battery life (so the power should focus on only the use case of converting sound to vibration)

- Should have a small profile and a sleek form factor
