Once, programming the robot was simple, as long as no extra sensors were needed. But if you wanted to use an encoder (to measure how fast the gears were turning) or set up a drive method other than the default, every team ended up writing essentially the same code. To save that work, WPILib was developed: a library from Worcester Polytechnic Institute that handles the common tasks teams need, such as driving with different control schemes and reading sensors.
WPILib comes in two flavors: Java and C++. We currently program our robot in Java, an object-oriented language that is also the default for the AP Computer Science course offered at many high schools throughout the US.
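To give a sense of what WPILib provides, here is a minimal sketch in the style of the 2012-era WPILib Java API (class names such as SimpleRobot and RobotDrive have since been superseded in newer releases, and the channel and port numbers below are placeholders for your own wiring):

```java
import edu.wpi.first.wpilibj.Encoder;
import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.SimpleRobot;
import edu.wpi.first.wpilibj.Timer;

public class ExampleRobot extends SimpleRobot {
    private final RobotDrive drive = new RobotDrive(1, 2); // left/right motor PWM channels (placeholders)
    private final Joystick stick = new Joystick(1);        // driver station port (placeholder)
    private final Encoder encoder = new Encoder(1, 2);     // quadrature channels A and B (placeholders)

    public void operatorControl() {
        encoder.start(); // begin counting encoder pulses
        while (isOperatorControl() && isEnabled()) {
            drive.arcadeDrive(stick);            // one-stick arcade drive, courtesy of WPILib
            double gearRate = encoder.getRate(); // counts per second -- how fast the gears are moving
            Timer.delay(0.005);                  // yield the CPU between loop iterations
        }
    }
}
```

Each of those member objects (drive, joystick, encoder) is exactly the kind of boilerplate every team used to write for itself.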
In 2012, we grew from a team of 2 to a team of 20 due to an influx of aspiring programmers. We taught them using the Mechanical Simulation Library (MechSim), which one of our programming officers developed to provide a robust Java environment where new members could learn principles and algorithms without a physical robot. We also used the documentation and references provided by FIRST (listed below) to transition to WPILib.
- Getting Started With Java for FRC – Installing NetBeans, WPILib, etc. Do this first.
- WPI Robotics Library User Guide – What it sounds like
- WPILib Robotic Programming Cookbook – Code snippets for frequently occurring problems. Before you write any new code, look here first to see if someone already did your work for you.
The 2012 FRC competition, Rebound Rumble, features vision targets so that scoring can be computer-guided. Aiming at these targets may sound simple, but it requires real computer vision techniques. Vision processing can get complex, though the demands of an FRC match are less exacting than those of researchers and AI competitions. The resources below assume some basic familiarity with image processing, so you may first want to watch an introduction, an example of vision processing, and/or a computer vision contest (all on YouTube).
- Vision Whitepaper – How to track the vision targets for the backboards.
- Vision-Based Behavior Acquisition For A Shooting Robot By Using Reinforcement Learning – A more complex approach. With reinforcement learning, the robot teaches itself the shooting behavior by trial and error, guided by feedback on whether its attempts succeeded, rather than by explicit programming. After a while, it works great. In theory. Sort of the academia approach to this problem.
- Determining Robot Position Relative to Vision Target by Analyzing Camera Image – Programmers are good at naming things, aren’t they? *ahem* It does exactly what it says. A back-of-the-envelope version of the distance part is sketched after this list.
- Perspective Rectangle Detection – Another academic whitepaper; this one shows how to identify rectangles seen from various perspectives (complex angles + translation + zoom).
- Parallelogram Detection Using the Tiled Hough Transform – The Hough Transform is a shape detection algorithm. Basically: partition the image into rectangular regions (aka tiles), run the Hough Transform on each tile, then combine the per-tile results to extract parallelograms from the original image.
- Hough Transform – A tad gentler introduction than the Wikipedia article. Both are good starting points, though. Check out Generalized Hough Transform for a brief overview of how to apply the Hough Transform to arbitrary shapes (i.e., not just lines or circles). A bare-bones line-detecting version is sketched below.
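Since the position paper above boils down to geometry, here is a rough pinhole-camera distance estimate to show the flavor. This is a generic textbook formula, not the paper's exact method, and every constant below (resolution, field of view, target width) is an illustrative assumption you would replace with your own camera's and target's numbers:

```java
// Rough distance-to-target estimate from a single camera image.
// All constants are assumptions for illustration, not measured values.
public class TargetDistance {
    static final double IMAGE_WIDTH_PX  = 320.0; // horizontal camera resolution (assumed)
    static final double HORIZ_FOV_DEG   = 47.0;  // camera's horizontal field of view (assumed)
    static final double TARGET_WIDTH_FT = 2.0;   // known physical width of the vision target (assumed)

    /** Estimate straight-line distance to a target spanning targetPx pixels. */
    static double distanceFeet(double targetPx) {
        // Focal length in pixels, derived from the field of view:
        // f = (imageWidth / 2) / tan(FOV / 2)
        double focalPx = (IMAGE_WIDTH_PX / 2.0) / Math.tan(Math.toRadians(HORIZ_FOV_DEG / 2.0));
        // Similar triangles: realWidth / distance = pixelWidth / focalLength
        return TARGET_WIDTH_FT * focalPx / targetPx;
    }

    public static void main(String[] args) {
        // A target 80 px wide works out to roughly 9 ft with these constants.
        System.out.println(distanceFeet(80));
    }
}
```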
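And since several of the links above lean on the Hough Transform, here is a bare-bones accumulator for the classic line-detecting version, assuming you already have a binary edge image (the one-degree angle steps and one-pixel rho bins are arbitrary resolution choices). Peaks in the returned accumulator correspond to lines x·cos θ + y·sin θ = ρ; the tiled variant simply runs this per tile and merges the peaks:

```java
// Minimal Hough transform for line detection over a binary edge image.
public class HoughLines {
    /** Returns an accumulator indexed by [rho + maxRho][theta in degrees]. */
    public static int[][] accumulate(boolean[][] edges) {
        int h = edges.length, w = edges[0].length;
        int maxRho = (int) Math.ceil(Math.sqrt(w * w + h * h)); // largest possible |rho|
        int[][] acc = new int[2 * maxRho + 1][180];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (!edges[y][x]) continue; // only edge pixels vote
                for (int t = 0; t < 180; t++) {
                    double theta = Math.toRadians(t);
                    // Each edge pixel votes for every line through it:
                    // rho = x*cos(theta) + y*sin(theta)
                    int rho = (int) Math.round(x * Math.cos(theta) + y * Math.sin(theta));
                    acc[rho + maxRho][t]++; // offset keeps the index non-negative
                }
            }
        }
        return acc;
    }
}
```

In practice you would threshold the accumulator and take its local maxima; bins that collect many votes are the lines passing through many edge pixels.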