# Desk Learning Aid Device

Team Members:
- Conan Pan (cpan23)
- Aidan Johnston (aidanyj2)
- Ethan Ge (ethange2)

# Problem
As a result of ongoing technological growth and the COVID-19 pandemic, classrooms have increasingly integrated technology through computers, personal devices, and virtual learning. However, this shift has created new problems, particularly in elementary school classrooms: young children spend more time on screens, socialize less, and are far more disruptive. These trends contribute to a less effective and less healthy learning environment. To foster a more social, engaging, and nurturing environment for young students, we propose the desk learning aid device.

# Solution

The desk learning aid device will function through various buttons connected to a custom PCB. Buttons will correspond to responding to polls/questions, comprehension checks, asking questions, and more. The device will communicate with an application monitored by the teacher, who will receive real-time feedback. The teacher can then better understand students' comprehension levels and tailor the lesson to be as effective as possible. The purpose of this device is to provide a cost-effective solution that can be set up at each student's desk to promote a holistically better learning environment. It differs from other options on the market in ease of setup: other options require the teacher to create a question in order to receive a response, whereas our device also allows many passive inputs, including comprehension signals and other urgent needs. In addition, other portable solutions require each student to buy a device individually, costing hundreds of dollars, while our solution only requires the purchase of a cheap, reusable RFID keycard.

# Solution Components

## Input Subsystem

This subsystem will include response buttons (Panasonic EVQ-P7K01P) for comprehension checks, requests for assistance, feedback, and mental/emotional health check-ins.
These buttons will be labeled accordingly so that student interaction with the device stays simple. The advantage of having a variety of buttons is that teachers need as little interaction with the app as possible.
In addition, this subsystem will include a scroller (Bourns PTL30 Series, PTL30-15O0F-B103) that will let students adjust in real time how they are feeling throughout the day.
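As a sketch of how the scroller input might be interpreted in firmware, assuming the slide potentiometer is sampled through a 12-bit ADC and mapped to a five-level mood scale (both the resolution and the scale are our assumptions, not fixed design choices):

```python
# Sketch: map a raw 12-bit ADC reading from the slide potentiometer
# to a discrete mood level. The 5-level scale and 12-bit range are assumptions.

MOODS = ["very low", "low", "neutral", "good", "great"]
ADC_MAX = 4095  # 12-bit ADC full scale (assumed)

def mood_from_adc(raw: int) -> str:
    """Quantize a raw ADC value into one of five mood buckets."""
    raw = max(0, min(raw, ADC_MAX))          # clamp out-of-range readings
    index = raw * len(MOODS) // (ADC_MAX + 1)
    return MOODS[index]
```

Quantizing on the device keeps the transmitted data small: only the bucket index needs to be sent to the app, not the raw reading.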

## Interface Subsystem

(SSD1306 0.96" OLED Display (I²C))
This subsystem will include a basic interface that serves several purposes:
- Displays the user's name once the user checks in, verifying the check-in.
- Displays a range of emotions that students can select via the scroller.
- Displays the answer choice selected by the user for comprehension checks.

## Microcontroller Subsystem

The ESP32-S3 microcontroller is programmed with firmware to recognize button inputs and process them according to whether a question has been asked. It also transmits student data to the mobile app run by the teacher.
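A minimal sketch of the message the firmware might build for each button event before transmission. The JSON format and field names here are our assumptions, not a fixed protocol; the point is that presses outside an active question are tagged as passive inputs rather than poll answers:

```python
import json
import time

def button_event(student_id: str, button: str, question_active: bool) -> str:
    """Package a button press as a JSON message for the teacher's app.

    Presses made while no question is active are tagged as passive inputs
    (e.g. help requests or comprehension signals) rather than answers.
    """
    event = {
        "student": student_id,
        "button": button,
        "kind": "answer" if question_active else "passive",
        "timestamp": int(time.time()),
    }
    return json.dumps(event)
```

Tagging the event type on the device means the app can route answers into poll tallies and passive inputs into alerts without needing to know the classroom state.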

## Mobile Application

The mobile application subsystem serves as the teacher’s interface to monitor student responses, track participation, and adjust lesson pacing in real time. The app receives data from student devices via Wi-Fi or Bluetooth, displaying responses in a structured and visual manner. The teacher can view class-wide comprehension trends, see which students need help, and manage classroom activities such as quizzes and polls. It would also receive data from the RFID/NFC keycard subsystem and store data for each student’s participation, attendance, and comprehension.

Components:

- Frontend UI:
  - Built using React.
  - Displays real-time responses, feedback, and participation data.
- Backend & Communication:
  - Firebase Realtime Database to handle instant message transmission.
  - Secure BLE/Wi-Fi communication with ESP32 devices.
- Data Processing & Visualization:
  - Aggregates student responses for charts, graphs, and heatmaps.
  - Uses D3.js or Chart.js for real-time visualization of classroom engagement.
- Authentication & Security:
  - Teachers log in with Google OAuth or school credentials.
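To illustrate the data-processing step, here is a small sketch of how per-student responses could be aggregated into the class-wide fractions a chart would display. The input/output structure is our assumption for illustration:

```python
from collections import Counter

def aggregate_responses(responses: dict) -> dict:
    """Turn {student_id: answer} into {answer: fraction of class}.

    The resulting fractions are what a bar chart or heatmap cell would show.
    """
    counts = Counter(responses.values())
    total = len(responses)
    return {answer: count / total for answer, count in counts.items()}
```

Aggregating before rendering keeps individual students anonymous in the class-wide view while still letting the teacher drill into per-student data separately.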

## Power subsystem

The device will be powered by a 103454 LiPo rechargeable battery. Ideally, we would also make the system as efficient as possible so that it does not need frequent recharging. A battery is preferred over wired power because of the installation and cable management that wired power requires. Furthermore, desks are constantly moving in a classroom, whether for rearranging seats or seasonal cleaning, which further highlights the advantage of the battery system.
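A rough back-of-the-envelope runtime check, assuming a 2000 mAh cell (a typical capacity for the 103454 size) and an illustrative average current draw; both numbers are assumptions to be replaced with measurements:

```python
# Rough battery-life estimate for the 103454 LiPo cell.
# Capacity and current draw are placeholder assumptions, not measured values.

CAPACITY_MAH = 2000    # typical 103454 cell capacity (assumed)
AVG_CURRENT_MA = 80    # assumed average draw: MCU sleep/wake duty cycle, OLED, radio bursts
DERATING = 0.8         # usable fraction of rated capacity

runtime_hours = CAPACITY_MAH * DERATING / AVG_CURRENT_MA
print(f"Estimated runtime: {runtime_hours:.1f} h")
```

Under these assumptions the device would comfortably outlast a school day, which lines up with the criterion for success below; the real average current will depend heavily on how aggressively the ESP32 sleeps between events.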

## RFID/NFC (Keycard) subsystem

The RFID/NFC subsystem allows students to log in quickly and anonymously using keycards without the need for manual name entry or personal devices. This ensures a seamless and low-disruption way to track participation, attendance, and response data. By tapping their RFID or NFC card on their desk device, students authenticate themselves before answering questions or engaging in activities. This enables teachers to monitor individual engagement and performance trends without requiring students to use personal logins.

Components:

- RFID/NFC Reader
  - Function: reads the keycard's unique ID
  - Part: RC522 NFC Module (SPI-based)
- RFID Key Cards
  - Function: unique identifier for each student
  - Part: MIFARE 13.56 MHz RFID Cards
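A sketch of the check-in lookup the app could run when a card UID arrives from the RC522. The roster contents, UID strings, and return convention are hypothetical examples:

```python
# Sketch: resolve a MIFARE card UID to a student and record attendance.
# The roster mapping and UID format below are hypothetical examples.

ROSTER = {
    "04:A3:1B:52": "Student 1",
    "04:9F:77:0C": "Student 2",
}

def check_in(uid: str, attendance: set) -> "str | None":
    """Return the student's name and mark them present, or None if unknown."""
    name = ROSTER.get(uid)
    if name is not None:
        attendance.add(name)
    return name
```

Keying everything on the card UID is what makes the login anonymous at the desk: the device only ever sees the UID, and the UID-to-name mapping lives in the teacher's app.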

# Criterion For Success

- The PCB device accurately registers button presses and sends the data to the mobile application.
- The mobile application receives user data from the microcontroller and stores/analyzes trends in classroom comprehension for specific topics.
- The keycard correctly signs the user into the "classroom" that the device belongs to.
- The entire system remains functional throughout an entire school day.
- The button design is clear and simple for students to interact with.
- The desk learning aid device integrates smoothly into the existing classroom format, without adding substantial work for students and especially teachers.

# Smart Glasses for the Blind

# Team Members

- Ahmed Nahas (anahas2)

- Siraj Khogeer (khogeer2)

- Abdulrahman Maaieh (amaaieh2)

# Problem:

The underlying motive behind this project is the heart-wrenching fact that, with all the developments in science and technology, the visually impaired have been left with nothing but a simple white cane; a stick among today’s scientific novelties. Our overarching goal is to create a wearable assistive device for the visually impaired by giving them an alternative way of “seeing” through sound. The idea revolves around glasses/headset that allow the user to walk independently by detecting obstacles and notifying the user, creating a sense of vision through spatial awareness.

# Solution:

Our objective is to create smart glasses/headset that allow the visually impaired to ‘see’ through sound. The general idea is to map the user’s surroundings through depth maps and a normal camera, then map both to audio that allows the user to perceive their surroundings.

We’ll use two low-power I2C ToF imagers to build a depth map of the user’s surroundings, as well as an SPI camera for ML features such as object recognition. These cameras/imagers will be connected to our ESP32-S3 WROOM, which downsamples some of the input and offloads it to our phone app/webpage for heavier processing (for object recognition, as well as for the depth-map-to-sound algorithm, which will be quite complex and builds on research papers we’ve found).
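As a sketch of the depth-map-to-sound idea: for each column of a depth frame, nearer obstacles get louder, and the column's horizontal position sets the left/right pan. The 8x8 frame shape, the linear mappings, and the 4 m range cap are simplifying assumptions for illustration, not the algorithm from the research papers:

```python
def frame_to_stereo_gains(depth_mm, max_range_mm=4000):
    """Map a depth frame (rows of mm distances) to per-column (L, R) gains.

    Nearer obstacles produce louder output; a column's horizontal position
    sets its stereo pan (leftmost column -> fully left).
    """
    cols = len(depth_mm[0])
    gains = []
    for c in range(cols):
        nearest = min(row[c] for row in depth_mm)           # closest obstacle in column
        loudness = max(0.0, 1.0 - nearest / max_range_mm)   # near -> loud
        pan = c / (cols - 1)                                # 0 = left, 1 = right
        gains.append(((1.0 - pan) * loudness, pan * loudness))
    return gains
```

Even this crude mapping already satisfies the left/right and near/far criteria below; the paper-based algorithm would refine it with proper spatialization.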

---

# Subsystems:

## Subsystem 1: Microcontroller Unit

We will use an ESP32 as the MCU, mainly for its Wi-Fi capabilities as well as sufficient processing power to connect and manage our sensor subsystems.

- ESP32-S3 WROOM : https://www.digikey.com/en/products/detail/espressif-systems/ESP32-S3-WROOM-1-N8/15200089

## Subsystem 2: ToF Depth Imagers/Cameras Subsystem

This subsystem is the main sensor subsystem for getting the depth map data. This data will be transformed into audio signals to allow a visually impaired person to perceive obstacles around them.

There will be two ToF sensors to provide a wide FOV, connected to the ESP32 MCU through two I2C connections. Each sensor provides an 8x8 pixel array at a 63-degree FOV.

- x2 SparkFun Qwiic Mini ToF Imager - VL53L5CX: https://www.sparkfun.com/products/19013
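Since each VL53L5CX returns an 8x8 frame, the two sensors' frames must be merged into one wider view before any depth-to-audio mapping runs. A minimal sketch, assuming the sensors are mounted side by side with no FOV overlap (in practice the ~63-degree fields would overlap and need calibration):

```python
def stitch_frames(left, right):
    """Concatenate two 8x8 depth frames row-wise into one 8x16 frame.

    Assumes the imagers are mounted side by side with adjacent,
    non-overlapping fields of view (a simplification).
    """
    return [lrow + rrow for lrow, rrow in zip(left, right)]
```

The stitched 8x16 frame is what the ESP32 would downsample and offload to the app for the depth-map-to-sound processing.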

## Subsystem 3: SPI Camera Subsystem

This subsystem will allow us to capture a color image of the user’s surroundings. A captured image will allow us to implement egocentric computer vision, processed on the app. We will implement one ML feature as a baseline for this project (one of: scene description, object recognition, etc.). This feedback will only be given when prompted by a button on the PCB: when the user clicks the button on the glasses/headset, they will hear a description of their surroundings. Hence, we don’t need real-time object recognition; a frame rate as low as 1 fps is sufficient, as opposed to the depth maps, which do need lower latency and a higher frame rate. This is exciting because such an input allows for other ML features/integrations that can scale drastically beyond this course.

- x1 Mega 3MP SPI Camera Module: https://www.arducam.com/product/presale-mega-3mp-color-rolling-shutter-camera-module-with-solid-camera-case-for-any-microcontroller/

## Subsystem 4: Stereo Audio Circuit

This subsystem is in charge of converting the digital audio from the ESP32 and app into stereo output to be used with earphones or speakers. This includes digital-to-analog conversion and voltage clamping/regulation. We may also add adjustable volume through a potentiometer.

- DAC circuit
- 2x op-amps for stereo output, TLC27L1ACP: https://www.ti.com/product/TLC27L1A/part-details/TLC27L1ACP
- SJ1-3554NG (AUX) connection to speakers/earphones: https://www.digikey.com/en/products/detail/cui-devices/SJ1-3554NG/738709
- Bone conduction transducer (optional, to be tested)
  - Will allow for a bone-conduction audio output, easily integrated around the ear in place of earphones; to be tested for effectiveness and replaced with earphones otherwise. https://www.adafruit.com/product/1674

## Subsystem 5: App Subsystem

- React Native App/webpage, connects directly to ESP

- Does the heavy processing for the spatial awareness algorithm as well as object recognition or scene description algorithms (using libraries such as yolo, opencv, tflite)

- Sends audio output back to ESP to be outputted to stereo audio circuit

## Subsystem 6: Battery and Power Management

This subsystem is in charge of power delivery, voltage regulation, and battery management for the rest of the circuit and devices. It takes in the unregulated battery voltage and steps it up or down according to each component's needs.

- Main power supply
  - Lithium-ion battery pack
- Voltage regulators
  - Linear, buck, and boost regulators for the MCU, sensors, and DAC
- Enclosure and routing
  - Plastic enclosure for the battery pack

---

# Criterion for Success

**Obstacle Detection:**

- Be able to identify the difference between an obstacle that is 1 meter away vs an obstacle that is 3 meters away.

- Be able to differentiate between obstacles on the right vs the left side of the user

- Be able to perceive an object moving from left to right or right to left in front of the user

**MCU:**

- Offload data from sensor subsystems onto the application through a Wi-Fi connection.

- Control and receive data from sensors (ToF imagers and SPI camera) using SPI and I2C

- Receive audio from application and pass onto DAC for stereo out.

**App/Webpage:**

- Successfully connects to the ESP through Wi-Fi or BLE

- Processes data (ML and depth map algorithms)

- Process image using ML for object recognition

- Transforms depth map into spatial audio

- Sends audio back to ESP for audio output

**Audio:**

- Have working stereo output on the PCB for use in wired earphones or built in speakers

- Have bluetooth working on the app if a user wants to use wireless audio

- Potentially add hardware volume control

**Power:**

- Be able to operate the device using battery power. Safe voltage levels and regulation are needed.

- 5.5V Max
