Tyler Duckworth

A "Vision"ary Path

Today was Kickoff. FIRST has now revealed the game for 2019: Destination Deep Space. * Cue Epic Music *

The Skinny

The objective of the game is to fill up your “spaceships” with cargo and cover the openings with discs to hold it in place. In the first 15 seconds, a blind goes over the drivers’ viewport, blocking them from seeing the field directly. This forces them to either run autonomous programs or attach a camera and drive from its feed. Over the course of two and a half minutes, you navigate your half of the field to place the discs on velcro stations and your cargo in the compartments behind the discs. In the last 15 seconds of the match, you must climb a series of steps in your “Hab” (short for habitat) to stay safe from the incoming storm. Once that is complete, the alliance with the most points wins. (For more detail on the game, see the manual.)

What does this mean for me?

The FIRST Team heard the complaints about the lack of vision targets last year and added about twenty-six to the new field. They range from gaffers tape on the floor to retroreflective tape on the scoring targets. The team and I have our work cut out for us.

DISCLAIMER: The rest of this post is mostly me jotting my ideas and thoughts down for how to proceed. If you don’t want to read it, no offense taken.

The vision targets are relatively simple. The floor tape can be used to align the robot with the targets, and the reflective tape can guide cargo disposal. Depending on the direction the team decides to take, we could do either or both. No matter what, this will probably result in the driver being able to toggle some automated correction for disposal. We would have to make that correction fast so it actually saves time rather than costing it. Motor control will probably be an issue: I need to figure out how to write autonomous code for a mecanum bot.
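For the mecanum part, the open-loop math at least is well known: each wheel's speed is a signed mix of the desired forward, strafe, and rotation commands. A minimal sketch in Python; the function name and axis conventions here are my own assumptions, not team code:

```python
def mecanum_drive(forward, strafe, rotate):
    """Mix driver commands into four mecanum wheel speeds.

    forward: +1 = full speed ahead
    strafe:  +1 = full speed to the right
    rotate:  +1 = full clockwise spin
    Returns (front_left, front_right, rear_left, rear_right),
    scaled so no value leaves [-1, 1].
    """
    fl = forward + strafe + rotate
    fr = forward - strafe - rotate
    rl = forward - strafe + rotate
    rr = forward + strafe - rotate

    # Normalize: if any mixed command exceeds full power,
    # scale all four down together to preserve the direction.
    biggest = max(1.0, abs(fl), abs(fr), abs(rl), abs(rr))
    return (fl / biggest, fr / biggest, rl / biggest, rr / biggest)
```

Pure strafing, for example, spins the left and right sides in opposite directions: `mecanum_drive(0, 1, 0)` returns `(1.0, -1.0, -1.0, 1.0)`. Autonomous code would feed this mixer from a path or vision correction instead of the joysticks.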

I would love to find a way to detect the balls for targeting. The workshop I took, led by SoKno Robotics, can help here: I could train a Haar cascade or write an OpenCV app that isolates the ball’s color. ML would be a cool way to start, and I think object tracking could make for a very cool Innovation Award entry, if done right.
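The color-isolation idea mostly boils down to thresholding in HSV space, where a ball's hue stays fairly stable as lighting changes. In an actual OpenCV app this would be a `cv2.cvtColor` plus `cv2.inRange` call; here is the same idea sketched with only the standard library on a toy "image", and the orange hue window is a guess, not a measured value:

```python
import colorsys

def orange_mask(image, hue_lo=0.02, hue_hi=0.12, min_sat=0.5):
    """Return a binary mask marking roughly-orange pixels.

    image is a list of rows of (r, g, b) tuples with channels in 0..255.
    The hue window and saturation floor are placeholder guesses for a
    cargo ball; real code would tune them against camera footage.
    """
    mask = []
    for row in image:
        mask_row = []
        for (r, g, b) in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            mask_row.append(1 if hue_lo <= h <= hue_hi and s >= min_sat else 0)
        mask.append(mask_row)
    return mask

# Toy 2x2 frame: orange, gray, green, and pure red pixels.
frame = [[(255, 120, 0), (128, 128, 128)],
         [(0, 200, 50), (200, 10, 10)]]
```

`orange_mask(frame)` flags only the top-left pixel. In OpenCV, `cv2.moments` on the resulting mask then gives a centroid for the robot to steer toward.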

This all calls for a library to make it simpler, similar to Cade Brown’s VPL. It would cover object tracking based on color, possibly tracking based on an image template, and other cool things you can do with vision. I will need to read up on them first, though.
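As a starting point, the library's surface could be as small as one tracker interface. Everything below, names and API shape included, is my own speculation about what such a library might look like:

```python
class ColorTracker:
    """Finds the centroid of pixels passing a color test.

    `matches` is any callable taking an (r, g, b) tuple and returning
    True or False, so one tracker type can serve balls, floor tape,
    or reflective targets just by swapping the predicate.
    """

    def __init__(self, matches):
        self.matches = matches

    def find(self, image):
        """Return the (row, col) centroid of matching pixels, or None."""
        total = row_sum = col_sum = 0
        for y, row in enumerate(image):
            for x, pixel in enumerate(row):
                if self.matches(pixel):
                    total += 1
                    row_sum += y
                    col_sum += x
        if total == 0:
            return None
        return (row_sum / total, col_sum / total)
```

A driver-station toggle could then just swap which tracker feeds the automated-correction loop.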

In short, the coming weeks will be fun, putting it mildly.

Hello! I was born in Knoxville, TN and currently attend the L&N STEM Academy. I have interests ranging from digital design to programming. I work with the L&N STEMpunks Robotics team to model and program robots.