# Project 80: Edge-AI Based Audio Classifier

## Team Members:

- Ahaan Joishy

- Kavin Manivasagam

- Om Dhingra

TA: Gayatri Chandran

Documents: design_document1.pdf, proposal1.pdf, proposal2.pdf
# Problem Overview

Most audio-based embedded systems today collect large amounts of raw sensor data but rely on simple threshold-based logic for classification. These methods are very sensitive to noise, fail to perform accurately across varying conditions, and break down without external computation or cloud services. There is therefore a need for a method that converts raw captured signals into meaningful classifications locally, under tight power and memory constraints.

# Solution Overview

The proposed project is an Edge-AI embedded system that classifies audio signals (e.g., a clap, laugh, snap, stomp, or speech) in real time using a small neural network. The system uses a single sensor (a MEMS I2S digital microphone) to collect audio data, and reports the classification result to the user through LED outputs. The system thus eliminates the need for cloud computation and demonstrates the strength of machine learning even under tight constraints.

# Solution Components

## Sensor Subsystem

A MEMS microphone with an I2S interface will be used to collect raw audio signals (e.g., claps, speech, snaps).

Audio will be sampled at a target rate of 16 kHz, which is sufficient for speech and common environmental sounds and is the industry standard used in voice assistants and speech recognition.

We use a digital microphone because it removes the need for an analog amplifier and external ADC.
## Processing Subsystem

We will use an STM32F411 microcontroller for this project. We chose it because it features 512 kB of Flash and 128 kB of RAM, which is crucial for holding the neural network and its working buffers, and because its built-in DSP instructions make it possible to convert raw audio into MFCC features in real time. Since we are running a neural network, we also need a chip with a floating-point unit (FPU), which this microcontroller provides.
The signal chain (to capture signals) would be as follows:

- The microphone captures audio and sends it digitally over I2S to the microcontroller, which uses DMA to quickly store the data in memory.
- The audio frames are converted into MFCC features.
- These features are then fed into our neural-net model.
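As a sketch of the MFCC step, the host-side Python below mirrors what the firmware's DSP pipeline would compute per frame. The frame size, filter count, and coefficient count are placeholder assumptions, not final design values:

```python
import numpy as np

SAMPLE_RATE = 16000   # target sample rate from the proposal
N_FFT = 512           # assumption: 32 ms frames at 16 kHz
N_MELS = 20           # assumption: number of mel filters
N_MFCC = 10           # assumption: cepstral coefficients kept per frame

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            if center > left:
                fb[i - 1, k] = (k - left) / (center - left)
        for k in range(center, right):
            if right > center:
                fb[i - 1, k] = (right - k) / (right - center)
    return fb

def mfcc_frame(frame):
    # 1. Window the frame to reduce spectral leakage.
    windowed = frame * np.hamming(len(frame))
    # 2. Power spectrum of the real FFT.
    power = np.abs(np.fft.rfft(windowed, N_FFT)) ** 2
    # 3. Mel filterbank energies, then log compression.
    log_e = np.log(mel_filterbank(N_MELS, N_FFT, SAMPLE_RATE) @ power + 1e-10)
    # 4. DCT-II decorrelates the log energies into cepstral coefficients.
    n = np.arange(N_MELS)
    dct = np.cos(np.pi * np.outer(np.arange(N_MFCC), 2 * n + 1) / (2 * N_MELS))
    return dct @ log_e

# Example: one 32 ms frame of a 440 Hz tone.
t = np.arange(N_FFT) / SAMPLE_RATE
coeffs = mfcc_frame(np.sin(2 * np.pi * 440.0 * t))
```

On the STM32 the same steps would run on int16/float32 buffers using the CMSIS-DSP FFT routines rather than NumPy.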
The ML pipeline would be as follows:

- The obtained MFCC features are fed into our small, dense neural network for classification into predefined types.
- TensorFlow Lite Micro will be used to deploy the model on the microcontroller without an OS or internet connection. (We may also try ExecuTorch if time permits.)
- The model will be kept under 20 kB so that it fits in memory alongside the audio buffers and runs in real time.
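As a back-of-the-envelope check of a 20 kB budget, with purely illustrative layer sizes (not the final architecture), even a modest dense network only fits comfortably after int8 quantization:

```python
# Hypothetical dense architecture: (inputs, outputs) per layer.
# 200 input features -> 32 -> 16 -> 5 output classes (all placeholder sizes).
layers = [(200, 32), (32, 16), (16, 5)]

params = sum(n_in * n_out + n_out for n_in, n_out in layers)  # weights + biases
size_int8 = params          # ~1 byte per parameter after int8 quantization
size_float32 = params * 4   # 4 bytes per parameter unquantized

print(f"{params} parameters: {size_int8} B int8 vs {size_float32} B float32")
```

With these sizes the int8 model is roughly 7 kB, within budget, while the float32 version would already exceed 20 kB.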
## Power Subsystem

We will use a 5 V USB input to power the board. This will be stepped down to 3.3 V using an on-board voltage regulator.

Decoupling capacitors and filtering components will be used to reduce electrical noise that could interfere with stable operation.
# Criterion for Success

- Our device can classify at least 3 different sound types with more than 85% accuracy on the recorded test set.
- The target end-to-end latency (from sound to LED output) is less than 100 ms.
- Current draw should be under 60 mA.

Test protocol: our test set will consist of around 50 samples per class, gathered from a variety of noisy and quiet environments. We will aim for the model to correctly classify 3 different sound types, extending to 5 types if time permits.
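The accuracy criterion can be scored with a few lines of Python; the class names and toy predictions below are illustrative, not real results:

```python
from collections import Counter

def accuracy_report(labels, preds):
    """Overall and per-class accuracy on a labeled test set."""
    total = Counter(labels)
    correct = Counter(l for l, p in zip(labels, preds) if l == p)
    per_class = {c: correct[c] / total[c] for c in total}
    overall = sum(correct.values()) / len(labels)
    return overall, per_class

# Toy example with three classes (the real set would have ~50 samples each).
labels = ["clap", "clap", "snap", "snap", "speech", "speech"]
preds  = ["clap", "snap", "snap", "snap", "speech", "speech"]
overall, per_class = accuracy_report(labels, preds)
```

Reporting per-class accuracy alongside the overall number guards against one easy class (e.g., silence vs. speech) hiding a weak class.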
# Alternatives

Many existing sound-classification systems use cloud-based processing or rely on high-power computing platforms such as smartphones and computers; these methods require a continuous internet connection. Other methods use threshold-based audio detection, but these cannot work accurately for different types of sounds in varying environments. Our solution differs by performing audio classification on a low-power embedded device, using a simple neural network, without external computing or complex hardware.

# WHEELED-LEGGED BALANCING ROBOT

## Team Members:

- Gabriel Gao (ngao4)

- Zehao Yuan (zehaoy2)

- Jerry Wang (runxuan6)

# Problem

The motivation for this project arises from the limitations inherent in conventional wheeled delivery robots, which predominantly feature a four-wheel chassis. This design restricts their ability to navigate terrains with obstacles, bumps, and stairs—common features in urban environments. A wheel-legged balancing robot, on the other hand, can effortlessly overcome such challenges, making it a particularly promising solution for delivery services.

# Solution

The primary objective of this phase of the project is to demonstrate that a single leg of the robot can successfully bear weight and function as an electronic suspension system. Achieving this will lay the foundation for the subsequent development of the full robot.

# Solution Components

## Subsystem 1. Hybrid Mobility Module:

Actuated Legs: Four actuator motors (DM-J4310-2EC) power the legged system, enabling the robot to navigate uneven surfaces, obstacles, and stairs. The legs also function as an active electromagnetic suspension system, quickly adjusting damping and stiffness to ensure a stable and level platform.

Wheeled Drive: Two direct-drive BLDC motors (M3508) propel the wheels, enabling efficient travel on flat terrain.

**Note: the 4x DM4310 and 2x M3508 motors can be borrowed from the RSO Illini Robomaster** - [Image of Motors on campus](https://github.com/ngao4/Wheel_Legged_Robot/blob/main/image/motors.jpg)

The DM4310 has a built-in ESC with a CAN bus interface and dual absolute encoders, and can provide 4 N·m of continuous torque. This torque allows the robot or leg system to act as a suspension and carry enough weight for further applications. The M3508 also has an ESC available in the lab, an FOC controller with CAN bus communication, so this project does not focus on motor drivers. The motors communicate with the STM32 over CAN bus at roughly a 1 kHz rate.
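The exact DM4310 CAN frame layout must come from the vendor documentation; as an illustration only, a hypothetical encoding of a torque command into an unsigned 16-bit field (the common pattern for these ESCs) looks like this:

```python
import struct

# Hypothetical encoding for illustration -- the real DM-J4310 frame layout
# should be taken from the vendor datasheet.
T_MAX = 4.0  # N*m, continuous torque limit cited in the proposal

def encode_torque(torque_nm):
    """Clamp a torque command to [-T_MAX, T_MAX] and map it onto 0..0xFFFF."""
    t = max(-T_MAX, min(T_MAX, torque_nm))
    code = round((t + T_MAX) / (2 * T_MAX) * 0xFFFF)
    return struct.pack(">H", code)  # big-endian 16-bit field in the CAN payload

def decode_torque(payload):
    """Inverse mapping, as the ESC firmware would apply it."""
    (code,) = struct.unpack(">H", payload)
    return code / 0xFFFF * 2 * T_MAX - T_MAX
```

The 16-bit field gives a resolution of about 8/65535 ≈ 0.12 mN·m per count, far finer than the control loop needs at 1 kHz.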

## Subsystem 2. Central Control Unit and PCB:

An STM32F103 microcontroller acts as the brain of the robot, processing input from the IMU over SPI and directing the motors over CAN bus. The PCB includes the STM32F103 chip, the BMI088 IMU, the power supply circuitry, and an SBUS remote-control signal inverter.

We may upgrade to an STM32F407 if more processing power is needed.

Attitude Sensing: A 6-axis IMU (BMI088) continuously monitors the robot's orientation and motion, facilitating real-time adjustments to ensure stability and correct navigation. The BMI088 is mounted on the PCB.
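One common way to fuse the BMI088's gyro and accelerometer into a tilt estimate is a complementary filter; a minimal sketch follows, where the filter weight and update rate are assumptions to be tuned on the real hardware:

```python
import math

ALPHA = 0.98  # assumption: complementary-filter weight, tune on hardware
DT = 0.001    # assumption: 1 kHz attitude update rate

def accel_tilt(ax, az):
    """Drift-free but noisy pitch estimate from accelerometer axes (radians)."""
    return math.atan2(ax, az)

def complementary_filter(angle, gyro_rate, ax, az):
    """Fuse fast-but-drifting gyro integration with the accelerometer tilt."""
    return ALPHA * (angle + gyro_rate * DT) + (1 - ALPHA) * accel_tilt(ax, az)

# Demo: starting from a wrong estimate of 0 rad, the filter converges to the
# accelerometer's 0.1 rad tilt when the gyro reads zero.
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, 0.0, math.sin(0.1), math.cos(0.1))
```

The gyro term dominates on short timescales (balancing), while the accelerometer term slowly corrects drift; a Kalman filter is the heavier alternative if this proves too noisy.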

## Subsystem 3. Testing Platform

The leg will be connected to a harness as shown in this [sketch](https://github.com/ngao4/Wheel_Legged_Robot/blob/main/image/sketch.jpg). The harness simplifies the model by restricting the robot’s motion in the Y-axis, while retaining the freedom for the robot to move on the X-axis and jump in the Z-axis. The harness also guarantees safety as it prevents the robot from moving outside its limit.

## Subsystem 4. Payload Compartment (3D-printed):

A designated section to securely hold and transport items, ensuring that they are protected from disturbances during transit. We will add weights to test the maximum payload of the robot.

## Subsystem 5. Remote Controller:

A 2.4 GHz SBUS RC remote controller will be used to control the robot. This hand-held device provides real-time control, making it simple for us to operate the robot at various distances. Safety is ensured by configuring one switch as a kill switch to shut down the robot in emergency conditions.

**Note: the remote controller (model DJI DT7) can be borrowed from the RSO Illini Robomaster**

The remote-controller set comes with a receiver whose output is an SBUS signal, commonly used in RC control. We will add an inverter circuit to the PCB so that the SBUS signal can be read by the STM32.
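SBUS packs 16 channels of 11 bits each into a 25-byte frame (0x0F header, 22 data bytes, a flags byte, and an end byte). A host-side sketch of the unpacking the STM32 firmware would perform after the hardware inverter:

```python
def parse_sbus(frame):
    """Unpack 16 11-bit channels from a 25-byte SBUS frame."""
    assert len(frame) == 25 and frame[0] == 0x0F, "not a valid SBUS frame"
    # The 22 data bytes form one little-endian bit stream, LSB first.
    bits = int.from_bytes(frame[1:23], "little")
    return [(bits >> (11 * i)) & 0x7FF for i in range(16)]

def build_sbus(channels):
    """Inverse of parse_sbus (flags byte zeroed) -- handy for testing
    the parser without hardware in the loop."""
    bits = 0
    for i, ch in enumerate(channels):
        bits |= (ch & 0x7FF) << (11 * i)
    return bytes([0x0F]) + bits.to_bytes(22, "little") + bytes([0x00, 0x00])
```

On the MCU the same bit-shifting runs over the UART receive buffer; the flags byte (failsafe/frame-lost bits) should also be checked before trusting the channels.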

Note: When only demoing the leg function, the RC controller may not be used.

## Subsystem 6. Power System

We are considering a 6S (24 V) lithium battery to power the robot. An alternative is to power the robot from a bench power supply through a pair of long wires.

# Criterion For Success

**Stable Balancing:** The robot (leg) should maintain its balance in a variety of situations, both static (when stationary) and dynamic (when moving).

**Cargo Carriage:** The robot (leg) should be able to carry a specified weight (e.g., 1 lb) without compromising its balance or ability to move.

_________________________________________________________________________

**If we are able to test the leg and have it functioning normally before midterm, we will try to build the whole wheel-legged balancing robot. It would be able to complete the following:**

**Directional Movement:** Via remote control, the robot should move precisely in the desired direction (up and down), showcasing smooth accelerations, decelerations, and turns.

**Platform Leveling:** Even when navigating slopes or uneven terrains, the robot should consistently ensure that its platform remains flat, preserving the integrity of the cargo it carries. Any tilt should be minimized, ideally maintaining a platform angle variation within a range of 10 degrees or less from the horizontal.

**Position Retention:** In the event of disruptions like pushes or kicks, the robot should make efforts to return to its original location or at least resist being moved too far off its original position.

**Safety:** During its operations, the robot should not pose a danger to its surroundings, ensuring controlled movements, especially when correcting its balance or position. The robot should be able to shut down (safety mode) by remote control.
