Mock Presentation

Description

Similar to the Design Doc Check and the Mock Demo, the Mock Presentation is an informal, mandatory event designed to better prepare you for your Final Presentation. In these sessions, you will present a few of your slides (about 10-15 minutes), and get feedback from the course staff as well as a few invited Department of Communication TAs. You will also be able to see a few of your peers' Mock Presentations, as there are up to 3 teams per time slot.

Requirements and Grading

The Mock Presentation is meant to be an opportunity for you to get feedback on a subset of your final presentation. It is recommended that you choose some aspect of your project, and present the design, results, and conclusions from that aspect. In order to get relevant feedback on your presentation skills, your Mock Presentation should also have an introduction and conclusion. You will receive feedback on your delivery, the format of your slides, and the organization of your presentation. Your slides should generally include:

  1. Title slide: Names, group #, title.
  2. Introduction slide: What is the project?
  3. Objective slide: What problem does this solve?
  4. Design slides: A few slides on design, requirements, and verification (should include block diagrams, math, graphs, figures, tables).
  5. Conclusion: Wrap things up, future work.

The Mock Presentation is graded credit/no credit based on attendance and apparent effort; showing up completely unprepared will earn no credit.

Submission and Deadlines

Sign-up is handled through PACE. Time slots are 1 hour long, and multiple groups will share a time slot. This will give you an opportunity to give feedback to and receive feedback from your peers. You will be required to stay until all groups have presented and received feedback.

VoxBox Robo-Drummer

Craig Bost, Nicholas Dulin, Drake Proffitt

Featured Project

Our group proposes to create a robot drummer that would respond to human voice "beatboxing" input, captured by a conventional dynamic microphone, and translate that input into the corresponding drum hit performance. For example, if the human user issues a bass-kick voice sound, the robot will recognize it and strike the bass drum, and likewise for the hi-hat/snare and clap. Our design will cover at least 3 different drum hit types (bass hit, snare hit, clap hit) and respond with minimal latency.

This would involve amplifying the analog signal (as dynamic mics output fairly low-level signals), which would be sampled by a dsPIC33F DSP/MCU (or comparable chipset) and processed for trigger event recognition. This entails applying Short-Time Fourier Transform (STFT) analysis to provide spectral content data to our event detection algorithm (i.e., recognizing the "control" signal from the human user). The MCU functionality of the dsPIC33F would be used to relay the trigger commands to the actuator circuits controlling the robot.
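As a rough illustration of this detection stage, the sketch below classifies STFT frames by which spectral band carries the most energy. It is an offline Python/NumPy mock-up rather than the dsPIC33F firmware, and the sample rate, band edges, threshold, and class names are illustrative assumptions, not the team's tuned values.

    # Minimal offline sketch of STFT-based trigger detection, assuming a
    # 16 kHz sample rate and three hand-picked spectral bands. The real
    # system would run an equivalent loop on the dsPIC33F and drive solenoids.
    import numpy as np

    FS = 16000          # assumed sample rate (Hz)
    FRAME = 256         # STFT window length (samples)
    HOP = 128           # hop between successive frames

    # Assumed bands for the three voice "control" sounds (Hz).
    BANDS = {
        "bass":  (40, 200),     # low-frequency energy  -> bass drum
        "snare": (200, 2000),   # mid-band energy       -> hi-hat/snare
        "clap":  (2000, 6000),  # high-band energy      -> clap
    }
    ENERGY_THRESHOLD = 1e-3     # illustrative onset threshold

    def band_energy(spectrum, freqs, lo, hi):
        """Sum of squared magnitudes in the [lo, hi) Hz band."""
        mask = (freqs >= lo) & (freqs < hi)
        return float(np.sum(np.abs(spectrum[mask]) ** 2))

    def detect_events(signal):
        """Yield (frame_index, event_name) whenever the dominant band of a
        windowed STFT frame exceeds the onset threshold."""
        window = np.hanning(FRAME)
        freqs = np.fft.rfftfreq(FRAME, d=1.0 / FS)
        for start in range(0, len(signal) - FRAME, HOP):
            frame = signal[start:start + FRAME] * window
            spectrum = np.fft.rfft(frame)          # one STFT column
            energies = {name: band_energy(spectrum, freqs, lo, hi)
                        for name, (lo, hi) in BANDS.items()}
            name, energy = max(energies.items(), key=lambda kv: kv[1])
            if energy > ENERGY_THRESHOLD:
                yield start // HOP, name           # would fire a solenoid

    if __name__ == "__main__":
        # Synthetic test input: a 0.1 s burst of an 80 Hz tone standing in
        # for a "bass" voice sound.
        t = np.arange(FS) / FS
        test = 0.5 * np.sin(2 * np.pi * 80 * t) * (t < 0.1)
        for frame_idx, event in detect_events(test):
            print(frame_idx, event)

A real-time version would additionally need onset detection or debouncing so that a sustained sound does not retrigger on every frame, and it would issue actuator commands instead of printing.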

The robot in question would be small, about the size of a ventriloquist's dummy. The "drum set" would be scaled accordingly (think pots and pans, like a child would play with). Actuators would likely be based on solenoids, as opposed to motors.

Beyond these minimal capabilities, we would add analog prefiltering of the input audio signal and amplification of the drum hits as bonus features if development and implementation go better than expected.
