Project: 82
Title: RFA: EMG Controlled MIDI Effects Controller
Team Members: Joseph Schanne, Karan Sharma, Paul Matthews
TA: Eric Tang
Documents: proposal1.pdf
*This is a repost of https://courses.grainger.illinois.edu/ece445/pace/view-topic.asp?id=77612, since it was in the wrong post category.*

# MyoEffect

Team Members:

- paulgm2 (netid)
- karans4 (netid)
- schanne3 (netid)

# Problem

Most musicians working in digital audio workstations (DAWs) want fine control over the effects they use and the intensity at which those effects operate. To assign effects to controls quickly and fine-tune them, musicians typically use MIDI controllers; however, these controllers are often both clunky to use and expensive to obtain.
# Solution

Our design offers a cheaper and more expressive way to control effects by bypassing a separate controller entirely and using the motion of the musician’s own body. Instead of turning knobs or pressing buttons, a musician can simply close their fist, extend their arm, or twist their leg; our goal is to give the musician complete freedom over which motions control their effects. The design uses electromyography (EMG) sensors to map contractions of the user’s muscles to voltage readings; these readings are then converted into corresponding MIDI parameters. The device can then be plugged into a DAW and operate exactly like a normal MIDI controller.
# Solution Components
# Subsystem 1
## EMG (Electromyography) Sensor System

This system will either be a custom-built sensor based on the MIT-licensed OpenEMG project or use the MyoWare 2.0 sensor produced by SparkFun. The major advantage of the OpenEMG approach is that we can combine the sensor and microcontroller into one board and save money: the parts for the OpenEMG setup (excluding fabrication of a custom PCB) cost about $5, much less than the $40 for a single MyoWare 2.0 sensor. Either way, the system uses two adhesive EMG pads (or, alternatively, metal plates) to measure the amplitude of the electrical signals produced by muscle contraction. For a specific parts list, see the OpenEMG git repo.

- https://www.sparkfun.com/myoware-2-0-muscle-sensor.html
- https://www.mouser.com/pdfDocs/Productoverview-SparkFun-DEV-18977.pdf
- https://github.com/CGrassin/OpenEMG
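As a rough illustration of what the firmware would do with the sensor output, raw EMG is a zero-centered oscillating signal, and a common way to reduce it to a single "muscle effort" value is full-wave rectification followed by a moving average. This is a sketch under our own assumptions (window length, integer samples), not part of the OpenEMG or MyoWare designs:

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical EMG envelope extraction: rectify each sample (absolute
 * value) and average over a short window to get one "effort" reading.
 * The window size and sample format are illustrative assumptions. */
double emg_envelope(const int *raw, size_t n)
{
    if (n == 0)
        return 0.0;
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += abs(raw[i]); /* full-wave rectification */
    return sum / (double)n; /* mean absolute value over the window */
}
```

In practice the MyoWare 2.0 already outputs a rectified, smoothed envelope, so this step would only be needed with a raw OpenEMG-style front end.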
# Subsystem 2
## MCUs to process EMG data

We’re planning on using ESP32-C3 MCUs to collect, digitize, encode, and send signals from the wearable sensor to the main MIDI controller. Initially we will use devboards to set this up, but eventually we’ll design and manufacture our own boards to fit into a watch-like housing that can be strapped to any given large muscle.

The ESP32 offers an on-board Analog to Digital Converter (ADC), allowing us to collect data directly from the EMG sensors and convert it to a digital signal. We will encode this digital signal into an acceptable format and transmit it to the receiver (Subsystem 3).
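The "acceptable format" mentioned above could be as simple as a small fixed-layout packet. The field names and sizes below are our assumptions for illustration, not a spec; ESP-NOW itself just carries an opaque payload of up to 250 bytes:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative wire format for one sensor reading sent from a wearable
 * board to the hub. Layout is a design assumption, not fixed yet. */
typedef struct {
    uint8_t  sensor_id; /* which limb/muscle this board is strapped to */
    uint8_t  seq;       /* rolling sequence number to detect dropped packets */
    uint16_t sample;    /* 12-bit ADC reading, 0..4095 */
} emg_packet_t;

/* Serialize into a byte buffer (little-endian); returns bytes written. */
size_t emg_packet_pack(const emg_packet_t *p, uint8_t *buf)
{
    buf[0] = p->sensor_id;
    buf[1] = p->seq;
    buf[2] = (uint8_t)(p->sample & 0xFF);
    buf[3] = (uint8_t)(p->sample >> 8);
    return 4;
}
```

Packing explicitly (rather than memcpy-ing the struct) avoids padding and endianness surprises between transmitter and receiver firmware.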

In order to communicate with the receiver, we have decided to use the ESP-NOW protocol developed by Espressif, the company behind the ESP32, as we have found it to offer lower latency than Bluetooth. Low latency is essential in music production: even a quarter of a second of delay can put a performer a note or two behind. ESP-NOW’s maximum throughput of 512 kbps and latency as low as 5 ms are ideal for our use case.

Since we are limited to 512 kbps, and we will need to transmit data to the MIDI controller over 100 times per second, only a small amount of data can be encoded and sent for each sensor. The bit resolution of our encoding and our sampling rates are design decisions that remain open for now.

Because ESP-NOW tops out at 512 kbps, we cannot encode a high-quality signal from every sensor; for example, audio-rate sampling at 22,050 Hz with 16 bits per sample would require 352,800 bps on its own. We eventually want the design to handle as many sensors as possible, but we will be limited by the ESP32’s transmission rate. However, all of our boards also support Bluetooth; if we decide to shift our priority away from latency and toward supporting as many sensors as possible, we have the option of switching over to Bluetooth instead.
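The bandwidth budget above can be sanity-checked with simple arithmetic. The 200 Hz update rate and 32-bit packet size below are placeholder assumptions (radio framing overhead is ignored), just to show the shape of the trade-off:

```c
/* Back-of-envelope link budget under the 512 kbps ESP-NOW ceiling. */

/* Raw bit rate of a continuously sampled stream,
 * e.g. 22050 Hz * 16 bits = 352800 bps, most of the link by itself. */
long stream_bps(long sample_hz, long bits_per_sample)
{
    return sample_hz * bits_per_sample;
}

/* How many sensors fit if each sends a fixed-size packet at a fixed
 * rate: link_bps / (update_hz * bits_per_packet), overhead ignored. */
int max_sensors(int link_bps, int update_hz, int bits_per_packet)
{
    return link_bps / (update_hz * bits_per_packet);
}
```

Under these assumptions, 512,000 / (200 * 32) = 80 sensors, which suggests the per-packet protocol overhead, not the payload, will be the real limit.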
# Subsystem 3
## MIDI controller

This part of the design will act as a hub for multiple sensors and will modify certain MIDI parameters depending on the input from each sensor. Each sensor would modulate one effect: for example, a leg sensor could drive a “wah” effect, while hand position or grip strength could correspond to distortion.

We plan to use an ESP32-C3 MCU as the brains of this subsystem in order to maintain compatibility with the ESP-NOW protocol. We will receive the digital signals from the transmitters and convert them into a MIDI-compliant USB signal. The end user plugs a USB cable into their computer, and in their digital audio workstation (DAW) can pick and choose the effects applied to each signal. Since MIDI is a standardized format, this signal should be compatible with any DAW with little to no user configuration required.
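The hub's core mapping step can be sketched concretely. A MIDI Control Change message is three bytes (status `0xB0 | channel`, controller number, 7-bit value), so a 12-bit EMG envelope just needs to be scaled into 0..127. The channel and controller numbers here are placeholders, not fixed by our design:

```c
#include <stdint.h>

/* Sketch of the hub's EMG -> MIDI mapping: scale a 12-bit ADC envelope
 * (0..4095) to MIDI's 7-bit range and build a Control Change message. */
void emg_to_midi_cc(uint16_t adc, uint8_t channel, uint8_t controller,
                    uint8_t msg[3])
{
    uint8_t value = (uint8_t)((adc * 127u) / 4095u); /* 0..4095 -> 0..127 */
    msg[0] = (uint8_t)(0xB0 | (channel & 0x0F)); /* CC status byte */
    msg[1] = controller & 0x7F;                  /* controller number */
    msg[2] = value & 0x7F;                       /* controller value */
}
```

The DAW then maps the chosen controller number to whatever effect parameter the user assigns, which is why no custom host software should be needed.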
# Criterion For Success

To make our criterion specific: if we can control pitch for at least one person by having them extend or contract their arm, we will consider the project a success.

Specific measurements: our user must fully extend their arm as far as it can go while playing a steady note on the guitar, and the pitch of the note must shift down by at least a half step. We will specifically ask the guitarist to play A3 (220.0000 Hz), and consider it a success if, when the arm is fully extended, the note being played is A♭3 (207.6523 Hz) or lower. We are choosing pitch because it is the easiest effect to quantify and measure.

If we can accomplish this, we would move on to adding more body parts and more effects, and to making the device work for multiple people, but these are not necessary for the project. Our sole criterion is whether we can change an effect such as pitch by having one person extend or relax their arm.
