Project 97: Facial Matching Display Mirror w/ Motion Sensor

Team Members: Connor Tan, Keenan Peris, Krish Sahni
TA: Argyrios Gerogiannis
Documents: design_document1.pdf, proposal1.pdf
# Facial Matching Smart Mirror w/ Motion Sensor

Team Members:
- Keenan Peris (peris2)
- Krish Sahni (krish3)
- Connor Tan (cctan2)

# Problem
STEM outreach spaces often rely on static posters or non-interactive exhibits to showcase the career paths available within STEM industries. These “displays” fail to actively engage or create a personal connection with visitors, especially students who may not initially see themselves represented in STEM fields. Thus, there is a need for an interactive, technology-driven exhibit that captures attention, responds to visitor presence, and presents STEM role models in a way that feels personal, modern, and engaging.

# Solution
We propose an interactive, mirror-like display that appears inactive or reflective until a person stands in front of it. When the system detects a person, it turns on and prompts the user to select, from a list of “quantum” career options, the future career they would like to explore. The system then uses a camera and basic image processing to identify characteristics of the person standing in front of it (ethnicity, sex, etc.). This data is then used to find a matched scientist or engineer, who is presented via a short video with their name, role, and a brief quote about their interest in science or engineering.


# Solution Components

## Subsystem 1: Presence Detection
This subsystem utilizes the Xbox Kinect’s Infrared (IR) depth stream to create a digital "tripwire" for the system. The Raspberry Pi processes the Kinect’s raw depth map. By analyzing the depth frames for skeletal silhouettes, the system can distinguish between a human walking toward the mirror and background movement (such as a door swinging), ensuring the interactive UI only triggers when a user is positioned within the optimal 1.5- to 3.5-meter interaction zone.

Components:
- Raspberry Pi 4
- Microsoft Xbox One And 360 Kinect V2 Model 1520 Motion Sensor Camera
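As a rough sketch of the depth "tripwire" described above, the Pi could count how many depth pixels fall inside the 1.5- to 3.5-meter zone before waking the UI. The thresholds and minimum pixel count here are illustrative assumptions, and the tiny frame stands in for the Kinect v2's actual 512x424 depth map:

```python
# Assumed interaction-zone bounds and trigger threshold (tuned in practice).
ZONE_NEAR_MM = 1500   # 1.5 m
ZONE_FAR_MM = 3500    # 3.5 m
MIN_PIXELS = 4        # minimum in-zone pixels to count as a person

def person_in_zone(depth_frame):
    """Return True if enough depth pixels fall inside the interaction zone."""
    in_zone = sum(
        1
        for row in depth_frame
        for d in row
        if ZONE_NEAR_MM <= d <= ZONE_FAR_MM
    )
    return in_zone >= MIN_PIXELS

# Example: a small "silhouette" at ~2 m against a 5 m background wall.
frame = [
    [5000, 5000, 5000],
    [5000, 2000, 2000],
    [5000, 2000, 2000],
]
print(person_in_zone(frame))  # True
```

A real implementation would additionally check that the in-zone pixels form a contiguous, person-shaped blob, so a door swinging through the zone does not trigger the display.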

## Subsystem 2: Mirror Display

This subsystem is responsible for presenting the interactive user interface and short-form video content of selected STEM role models. The system uses a partially reflective (two-way) mirror placed in front of an LED display, creating a mirror-like appearance when inactive and a dynamic display when content is shown. The LED TV functions as an active backlighting source positioned directly behind the partially reflective mirror. When the system is inactive, the display outputs a dark (near-black) image, causing minimal light transmission through the mirror and preserving a reflective, mirror-like appearance. When activated, the TV increases brightness and displays high-contrast video and UI elements, allowing light to pass through the mirror film and making the content visible to the user. This contrast-based control enables seamless transitions between an inactive mirror state and an active display state without mechanical components.

Components:

- 18”x30” Glass or Acrylic Panel
- 18”x30” 60%R/40%T Mirror film
- INSIGNIA 32" Class F20 Series LED HD Smart Fire TV


## Subsystem 3: Camera & Video/Image Processing
This subsystem captures real-time visual data of the area in front of the display using a Logitech C920 webcam and processes it using the Raspberry Pi to detect the presence and position of a person. It provides the raw image data needed for person and face detection. The processing performed in this subsystem prepares image regions of interest for face detection and confidence evaluation. This ensures reliable and efficient visual analysis.

Components:
- USB camera - Logitech C920
- Camera mounting bracket
- Raspberry Pi 4
- Image processing software
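One concrete piece of the region-of-interest preparation mentioned above could be expanding a detection box by a margin and clamping it to the frame, so the downstream face detector sees some surrounding context. The margin value and frame sizes below are illustrative assumptions:

```python
def crop_roi(frame_w, frame_h, box, margin=0.2):
    """Expand a detection box by `margin` on each side and clamp it to the
    frame bounds. `box` is (x, y, w, h) in pixels."""
    x, y, w, h = box
    dx, dy = int(w * margin), int(h * margin)
    x0 = max(0, x - dx)
    y0 = max(0, y - dy)
    x1 = min(frame_w, x + w + dx)
    y1 = min(frame_h, y + h + dy)
    return (x0, y0, x1 - x0, y1 - y0)

# A 200x200 detection in a 1920x1080 frame, padded by 20%:
print(crop_roi(1920, 1080, (100, 100, 200, 200)))  # (60, 60, 280, 280)
```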

## Subsystem 4: Interactive UI
The Interactive UI serves as the bridge between user input and the career-matching database, leveraging Xbox Kinect skeleton tracking for a touchless experience. The interface display shows career options that users can select by moving their hands to "hover" over digital buttons. The display is controlled by an STM32 Microcontroller.

Components:
- STM32 Microcontroller
- Display
- Microsoft Xbox One And 360 Kinect V2 Model 1520 Motion Sensor Camera
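The touchless hover-to-select interaction could be implemented as a dwell timer: a "click" fires once the tracked hand position stays inside a button's bounds long enough. The dwell time below is an assumed value that would be tuned during testing:

```python
DWELL_SECONDS = 1.5  # assumed dwell time before a hover counts as a click

class HoverButton:
    def __init__(self, x, y, w, h):
        self.rect = (x, y, w, h)
        self.hover_start = None  # timestamp when the hand entered the button

    def update(self, hand_x, hand_y, now):
        """Feed the Kinect-tracked hand position each frame.
        Returns True when the dwell threshold is reached ('click')."""
        x, y, w, h = self.rect
        inside = x <= hand_x <= x + w and y <= hand_y <= y + h
        if not inside:
            self.hover_start = None
            return False
        if self.hover_start is None:
            self.hover_start = now
        return now - self.hover_start >= DWELL_SECONDS

btn = HoverButton(100, 100, 200, 80)
print(btn.update(150, 120, 0.0))  # False (hand just entered)
print(btn.update(150, 130, 2.0))  # True (hand dwelled long enough)
```

In practice the UI would also draw a fill-up animation on the button while the timer runs, so users understand why hovering selects.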

## Subsystem 5: Face Detection Confidence Determination
This subsystem evaluates whether a face is present in the captured image and determines how confident the system is in that detection. It focuses solely on detection quality, such as face size, position, and clarity. The resulting confidence score is then used to decide when the system should proceed with selecting and displaying a profile, helping prevent false triggers and unreliable matches.

Components:
- Face detection software module - OpenCV
- Raspberry Pi
- Image quality evaluation logic
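A minimal sketch of the confidence score, combining face size and centering as the subsystem describes (a clarity/sharpness term would be added in practice). The bounding box would come from the OpenCV detector; the scoring weights and threshold here are assumptions to be tuned:

```python
def detection_confidence(box, frame_w, frame_h, min_face_frac=0.1):
    """Score a face detection in [0, 1] from its size and centering.
    `box` is (x, y, w, h) as returned by a face detector."""
    x, y, w, h = box
    # Faces narrower than min_face_frac of the frame score proportionally lower.
    size_score = min(1.0, (w / frame_w) / min_face_frac)
    # Penalize faces far from the frame center.
    cx = (x + w / 2) / frame_w
    cy = (y + h / 2) / frame_h
    center_score = 1.0 - (abs(cx - 0.5) + abs(cy - 0.5))
    return max(0.0, size_score * center_score)

CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff before a profile is selected

# Large, near-centered face in a 1280x720 frame:
score = detection_confidence((560, 340, 160, 160), 1280, 720)
print(score > CONFIDENCE_THRESHOLD)  # True
```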

## Subsystem 6: Data-based search
This subsystem selects an appropriate/matching scientist or engineer profile from a local database once sufficient confidence has been established. Using predefined metadata and selection rules, the subsystem matches user context to available profiles and outputs the selected content to the UI.

Components:
- Local profile database - JSON or SQLite
- Raspberry Pi
- Local storage - microSD card
- Profile selection and matching logic in software
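The selection rules could be as simple as filtering profiles by the chosen career and ranking by tag overlap with the user context. The profile records and tag names below are hypothetical placeholders for the real database contents:

```python
import random

# Hypothetical records; the real database would live in a JSON file or
# SQLite table on the microSD card.
PROFILES = [
    {"name": "Dr. A", "career": "quantum computing", "tags": {"physics"}},
    {"name": "Dr. B", "career": "quantum sensing", "tags": {"engineering"}},
    {"name": "Dr. C", "career": "quantum computing", "tags": {"cs"}},
]

def select_profile(career, user_tags, rng=random):
    """Pick the profile whose career matches the user's selection and whose
    tags overlap most with the user context; break ties at random."""
    candidates = [p for p in PROFILES if p["career"] == career]
    if not candidates:
        return None
    best = max(len(p["tags"] & user_tags) for p in candidates)
    top = [p for p in candidates if len(p["tags"] & user_tags) == best]
    return rng.choice(top)

chosen = select_profile("quantum computing", {"cs"})
print(chosen["name"])  # Dr. C (only matching candidate with a tag overlap)
```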

## Subsystem 7: Power Management
This subsystem is responsible for safely distributing power to all low-voltage electronic components in the system, including the Raspberry Pi and the microcontroller-based control hardware. A regulated 5 V DC supply will power the Raspberry Pi, with additional regulated 3.3 V rails for local microcontroller logic.
This subsystem powers the microcontroller and Raspberry Pi and operates independently of the Kinect sensor and LED TV, which each use their own dedicated power supplies.
Components:
- Buck Converter LM2596
- MCP1700 / MCP1703
- Schottky Diode
- KABCON Kinect Adapter for Xbox One S, Xbox One X, PC Windows 10 8.1 8, Xbox Kinect Power Supply for Xbox 1S, 1X Kinect 2.0 Sensor
- INSIGNIA 32" Class F20 Series LED HD Smart Fire TV AC power adapter (comes with TV)
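A quick budget for the 5 V rail motivates the buck converter sizing. The current draws below are rough assumptions and would need to be checked against datasheets and measured peaks:

```python
# Rough 5 V rail budget (assumed typical currents in mA; verify against
# datasheets -- the Pi 4 in particular can peak well above its typical draw).
LOADS_MA = {
    "raspberry_pi_4": 1200,
    "usb_webcam": 250,
    "mcu_logic_3v3": 100,  # drawn from 5 V through the MCP1700/1703 LDO
}
MARGIN = 1.5  # design margin over the nominal total

total_ma = sum(LOADS_MA.values())
required_ma = total_ma * MARGIN
print(total_ma, required_ma)  # 1550 2325.0
```

Under these assumptions the LM2596 (rated around 3 A) has comfortable headroom.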


# Criterion for Success:
Our final product should demonstrate the following capabilities in order to be considered successful:
## Core Functionality
The mirror detects a person standing in front of it, with detection being automatic (no button press needed).
The system consistently activates when someone enters range.

## Visual Transformation:
The display clearly switches from a mirror-like idle state to digital content.
The overlay of image/video and text is visible, well aligned, and uncluttered.

## Correct Content Triggered:
The system shows the intended character/person (scientist, engineer, etc.) along with the correct associated text and/or short video.
No random or incorrect activations occur.
Facial matching is consistent, with more than 80% of matches correct.

## Responsiveness:
Delay from detection to the display change is short enough to feel natural.

# Featured Project: Iron Man Mouse

Team Members: Jeff Chang, Yayati Pahuja, Zhiyuan Yang

# Problem:

Being ECE students means we are likely to sit in front of a computer for the majority of the day, especially during COVID times. This prolonged sedentary behavior can lead to neck and lower back issues, so it would be beneficial to get up and stretch every now and then. However, exercising may distract us from working or studying, and it can take time to refocus. Controlling the mouse using arm movements or hand gestures would let us stand up and work at the same time, similar to Tony Stark working in the movie Iron Man, but without the hologram.

# Solution Overview:

The device would have a wristband portion that acts as the tracker of the mouse pointer (implemented with an accelerometer and perhaps optical sensors). A set of 3 finger cots with a gyroscope or accelerometer is attached to the wristband. These sensors as a whole would send data via Bluetooth to a black-box device connected to the computer by USB. The box would contain circuits to convert the translational/rotational data into mouse or trackpad movements, with possible custom operations. Alternatively, we could connect the wristband to the PC by Bluetooth directly; in this case, a device driver on the OS is needed for the project to work.

# Solution Components:

Sensors (finger cots and wrist band):

1. 3-axis accelerometer attached to the wrist band portion of the device to collect translational movement (for mouse cursor tracking)

2. gyroscope attached to the 3 finger cots to collect angular motion when the user bends their fingers at different angles (for different clicking/zoom-in/etc. operations)

3. (optional) optical sensors to help with accuracy if the accelerometer is not accurate enough. We could have infrared emitters set up around the screen and optical sensors on the wristband to help pinpoint cursor location.

4. (optional) flex sensors could also be used for finger cots to perform clicks in case the gyroscope proves to be inaccurate.
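The accelerometer-to-cursor mapping from item 1 could be sketched as a simple gain plus dead zone, so small sensor noise does not make the cursor drift. The sensitivity and dead-zone constants below are illustrative assumptions:

```python
SENSITIVITY = 40.0  # assumed pixels of cursor motion per unit of accel reading
DEADZONE = 0.05     # ignore tiny readings so the cursor does not drift

def accel_to_cursor(ax, ay, x, y, screen_w=1920, screen_h=1080):
    """Map wristband accelerometer readings (ax, ay) and the current cursor
    position (x, y) to a new, screen-clamped cursor position."""
    dx = ax if abs(ax) > DEADZONE else 0.0
    dy = ay if abs(ay) > DEADZONE else 0.0
    nx = min(screen_w - 1, max(0, x + dx * SENSITIVITY))
    ny = min(screen_h - 1, max(0, y + dy * SENSITIVITY))
    return nx, ny

# Tilt right and slightly up from screen center:
print(accel_to_cursor(0.5, -0.2, 960, 540))  # (980.0, 532.0)
```

A real implementation would also integrate over time and filter the raw samples, since accelerometer data alone accumulates drift quickly.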

Power:

Lithium-ion battery with USB charging

Transmitter component:

1. A microcontroller to pre-process the data received from the 4 sensors, integrating and synchronizing it before transmission.

2. A Bluetooth chip that transmits the data to either the black box or the PC directly.

Receiver component:

1. Plan A: A box plugged into USB-A on the PC. It has a Bluetooth chip to receive data from the wristband, and a microcontroller to process the data into USB human interface device (HID) signals.

2. Plan B: the wristband is directly connected to the PC and we develop a device driver on the PC to process the data.
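For Plan A, the microcontroller would ultimately emit standard USB HID mouse reports. A minimal sketch of packing a boot-protocol-style report (button bitmap plus signed X/Y/wheel deltas; the exact layout depends on the report descriptor the firmware declares):

```python
def mouse_report(buttons, dx, dy, wheel=0):
    """Pack a 4-byte HID-style mouse report:
    byte 0 = button bitmap (bit 0 left, bit 1 right, bit 2 middle),
    bytes 1-3 = signed 8-bit X, Y, and wheel deltas."""
    def s8(v):
        v = max(-127, min(127, v))  # clamp to the signed 8-bit range
        return v & 0xFF             # two's-complement byte
    return bytes([buttons & 0x07, s8(dx), s8(dy), s8(wheel)])

# Left button held while moving right 10 px and up 5 px:
print(mouse_report(0x01, 10, -5).hex())  # 010afb00
```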

# Criterion for Success:

1. Basic Functionalities supported (left click, right click, scroll, cursor movement)

2. Advanced Functionalities supported (zoom in/out, custom operations, e.g. volume control)

3. Performance (accuracy & response time)

4. Physical qualities (easy to wear, durable, with good battery life)