# Project

| # | Title | Team Members | TA | Documents | Sponsor |
|---|---|---|---|---|---|
| 43 | Autonomous Featherweight (30lb) Battlebot | Jason Mei, Michael Ko, Qinghuai Yao | Michael Gamota | proposal1.pdf | |
# Autonomous Featherweight (30lb) Battlebot

Team Members:
- Jason Mei (jasonm5)
- Qinghuai Yao (qyao6)
- Michael Ko (ykko2)

# Problem

iRobotics, an RSO on campus, has built multiple battlebots that are entered into competitions across the U.S. One of these robots, "CRACK?", is a 30 lb hammer-axe battlebot. The robot has already been designed and completed; this project is to upgrade it from manual control to autonomous control.

# Solution

The plan is to mount a camera just outside the polycarbonate arena walls to capture a live view of the arena and send it to a computer. The computer applies image transforms to obtain an accurate top-down view of the field, then calculates the robot's next movements, either directly with a pure pursuit algorithm or, potentially, with a machine learning algorithm. Control is then passed to a microcontroller board mounted within the robot, which sends signals to the motors to drive the robot or fire the hammer.

# Solution Components

## Camera Subsystem

The main computer takes in data from a camera (ALPCAM 2MP Varifocus USB Camera) mounted on the outside of the arena. The camera streams frames at a standard resolution (640x480 or 1280x720) to the computer. For every frame, a Python program (using OpenCV) creates a binary image via perspective transforms, color filters, and other processing. It also scans for AprilTags, which will be mounted on specific sides of the robots, allowing the computer to identify both robots' full poses (position and orientation) within the arena.

## Autonomous Control Subsystem

After obtaining both robots' poses, the computer will identify the next actions for the robot to perform. Initially, we will use a standard pure pursuit algorithm, in which the robot simply minimizes the distance between itself and the opponent without regard for orientation.
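The chase behavior described above can be sketched for a differential-drive robot. This is a minimal illustration, not the team's implementation: the function name, the `(x, y, heading)` pose tuple, and the normalized `[-1, 1]` drive commands are all assumptions for the sketch.

```python
import math

def chase_command(robot_pose, opponent_pos, max_speed=1.0):
    """Return (left, right) drive commands in [-1, 1] that steer a
    differential-drive robot toward the opponent at close to full speed,
    minimizing distance without regard for final orientation.

    robot_pose: (x, y, heading) from the tracker; heading in radians.
    opponent_pos: (x, y) of the opponent.
    """
    x, y, heading = robot_pose
    dx = opponent_pos[0] - x
    dy = opponent_pos[1] - y
    target_angle = math.atan2(dy, dx)
    # Smallest signed angle from current heading to the target direction.
    error = math.atan2(math.sin(target_angle - heading),
                       math.cos(target_angle - heading))
    # Proportional turn command, saturated to [-1, 1].
    turn = max(-1.0, min(1.0, 2.0 * error / math.pi))
    left = max(-1.0, min(1.0, max_speed + turn))
    right = max(-1.0, min(1.0, max_speed - turn))
    return left, right
```

A pure pursuit implementation would additionally track a lookahead point along a planned path; the sketch above shows only the "drive straight at the target" degenerate case the paragraph describes.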
Potentially, we may switch to a reinforcement learning approach, training a policy with machine learning in a custom OpenAI Gym environment. The computer will then connect wirelessly to the robot over Bluetooth and send the instructions.

## On-robot Subsystem

The motors on the robot are typically controlled by a receiver, which uses PWM signals (a 1.5 ms pulse on a 50 ms period is a "neutral" signal). We will insert a microcontroller board (ESP32-S3) between the receiver and the motor ESCs (electronic speed controllers) to process the information from both the receiver and the computer. Additionally, to maximize the information available to the user, we will add a voltage divider to monitor battery voltage, as well as an accelerometer (MPU6050) to report the robot's movement.

# Criterion For Success

We would define a successful project by a specific set of goals:
- The system must identify the robot and track its location and pose live (using the AprilTag).
- The system must allow the robot to drive directly to any specific location at close to full speed, similarly to how a human would.
- The system must shut off safely and immediately if there is ever a safety violation.
- The robot will compete at Robobrawl X, an event held on campus this year (April 4th and 5th, 2025).
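The On-robot Subsystem's signal conventions can be sketched numerically. This is an illustrative sketch only: the 1.0–2.0 ms pulse endpoints, the resistor values, and the 12-bit 3.3 V ADC are assumptions, not the team's actual design (the document specifies only the 1.5 ms neutral pulse and a voltage divider).

```python
NEUTRAL_MS = 1.5   # neutral pulse width, per the PWM convention above
RANGE_MS = 0.5     # assumed half-range: 1.0 ms full reverse, 2.0 ms full forward

def command_to_pulse_ms(cmd):
    """Map a normalized drive command in [-1, 1] to an ESC pulse width in ms,
    with 0 mapping to the 1.5 ms neutral signal."""
    cmd = max(-1.0, min(1.0, cmd))
    return NEUTRAL_MS + RANGE_MS * cmd

def battery_voltage(adc_reading, adc_max=4095, vref=3.3,
                    r_top=10_000.0, r_bottom=1_000.0):
    """Recover battery voltage from the voltage-divider ADC reading.
    The divider ratio (10k:1k) and 12-bit ADC range are hypothetical."""
    v_pin = adc_reading / adc_max * vref
    return v_pin * (r_top + r_bottom) / r_bottom
```

On the actual ESP32-S3, the pulse widths would be generated with a hardware PWM peripheral and the divider read through an ADC pin; the math above is the same either way.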