Software is responsible for developing the programs that run Barracuda. There are two main projects: AVNavControl, which runs on the low-level control board, and EVA, which runs on the high-level BeagleBoard. In addition, software maintains a custom firmware for Barracuda’s IMU.
AVNavControl uses a set of serial lines to monitor the output of the navigation sensors and uses its built-in analog-to-digital converters to monitor the voltages of the pressure sensor and the kill switch. Data from the gyroscopes (high bias but low noise) and from the accelerometers (low bias but high noise) are combined with several Kalman filters. By appearing as a serial port, AVNavControl delivers this information to, and receives control commands from, the mission software running on the main computer. Several Proportional–Integral–Derivative (PID) loops take the control commands as input and calculate adjustments to maintain the desired heading, depth, and pitch of the submarine. The control loops include several refinements, such as derivative filtering (important for sensors that occasionally produce noise spikes) and mechanisms to prevent integral windup. Finally, to drive the motors, AVNavControl sends commands to the serial-to-servo board over another set of serial lines.
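The derivative-filtering and anti-windup refinements can be illustrated with a minimal Python sketch of one PID loop. This is not AVNavControl's actual firmware (which runs in C++ on the mbed); the class name, gains, filter constant, and clamping scheme are all illustrative assumptions.

```python
# Hypothetical sketch of a PID loop with derivative filtering and
# anti-windup, in the spirit of the loops described above. All names
# and constants are illustrative, not taken from AVNavControl.
class PID:
    def __init__(self, kp, ki, kd, i_limit, alpha=0.7):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_limit = i_limit      # clamp on the integral term (anti-windup)
        self.alpha = alpha          # smoothing factor for the derivative filter
        self.integral = 0.0
        self.prev_error = 0.0
        self.d_filtered = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        # Integral term, clamped to prevent windup when actuators saturate
        self.integral += error * dt
        self.integral = max(-self.i_limit, min(self.i_limit, self.integral))
        # Raw derivative, then an exponential low-pass filter so that an
        # occasional noise spike does not slam the derivative term
        d_raw = (error - self.prev_error) / dt
        self.d_filtered = self.alpha * self.d_filtered + (1 - self.alpha) * d_raw
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * self.d_filtered)
```

A heading loop, for example, would call `update()` at a fixed rate with the commanded and measured heading and send the result to the serial-to-servo board.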
Previous iterations of Barracuda used a control board based on Microchip’s dsPIC33FJ128GP804 microcontroller. From a programming perspective, the mbed offers several advantages over the dsPIC33FJ family. First, it has a well-documented API that provides object-oriented abstractions of the esoteric registers and commands needed to access the mbed’s hardware. Second, its development environment is entirely cloud-based, so members can log in from any web browser and begin developing software. Last, the mbed appears as a USB external drive, allowing drag-and-drop flashing of programs onto the ARM Cortex processor. Taken together, these features have made mbed development easier for the advanced programmers on the team and more accessible to beginners.
Extensible Vehicular Automaton (EVA)
The Extensible Vehicular Automaton (EVA) software carries out the high-level management of the sub by reading input from configuration files, the cameras, and the passive sonar board, and by communicating with the control board, which is responsible for low-level tasks. EVA also contains intelligent algorithms for target location, color identification, and edge detection.
This year, the software team switched from AVI to EVA because AVI was not flexible enough to support the planned expansion. Small changes required large refactorings of the code, so the team scrapped AVI’s entire codebase and began from scratch.
To provide modularity, EVA uses several self-contained classes. A single mission controller class encapsulates all the variables that describe Barracuda’s position. Each competition task is also contained in its own class. A task takes control of the sub when it detects its target, so task execution doesn’t have to be linear. A side effect of this highly modular design is that a failed task doesn’t require a mission restart.
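The structure described above might look like the following Python sketch. The class and method names (`MissionController`, `Task`, `detect`, `run`) are assumptions for illustration, not EVA's actual API; the point is that dispatch is driven by detection, so order is not fixed and one failure does not end the mission.

```python
# Illustrative sketch of EVA's modular task design. Names are
# hypothetical; only the structure mirrors the description above.
class MissionController:
    """Encapsulates the vehicle-state variables shared by every task."""
    def __init__(self):
        self.heading = 0.0
        self.depth = 0.0
        self.completed = []

class Task:
    name = "base"
    def detect(self, controller):
        """Return True when this task's target is in view."""
        return False
    def run(self, controller):
        """Take control of the sub; return True on success."""
        return False

def mission_loop(controller, tasks):
    # Non-linear dispatch: whichever task detects its target runs next.
    # A task that fails simply returns control to the loop, so the
    # mission does not have to restart.
    for task in tasks:
        if task.name not in controller.completed and task.detect(controller):
            if task.run(controller):
                controller.completed.append(task.name)
```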
Beginning the Mission
Once the EVA program is run and the kill switch is inserted, the vehicle dives to mission depth and continues straight through the validation gate.
Hitting the Buoys
After a timeout, the optical processing code begins to search for the buoys. Based on the order of colors specified in the configuration file, it checks the forward camera for a circle of the first color within the image. The center of this circle and its distance from the center of the field of view are computed, and any deviation causes a correcting command to be sent to the control board. When a successful collision is detected (signaled by the sudden disappearance of the enlarged buoy), the submarine moves backward and targets the buoy of the second color in the same way.
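The centering step reduces to turning the circle's pixel offset into heading and depth corrections. A minimal sketch, assuming a 640×480 frame and proportional gains chosen purely for illustration:

```python
# Hedged sketch of the buoy-centering logic: map the detected circle
# center's offset from the middle of the field of view to corrective
# commands. Frame size and gains are assumptions, not EVA's values.
def buoy_correction(circle_center, frame_size=(640, 480),
                    yaw_gain=0.05, depth_gain=0.05):
    cx, cy = circle_center
    fx, fy = frame_size[0] / 2.0, frame_size[1] / 2.0
    yaw_cmd = yaw_gain * (cx - fx)      # horizontal deviation -> heading change
    depth_cmd = depth_gain * (cy - fy)  # vertical deviation -> depth change
    return yaw_cmd, depth_cmd
```

A buoy centered in the frame yields zero corrections; a buoy to the right of center yields a positive yaw command.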
Following the Path
EVA switches to the downward-facing camera to view the orange path that outlines the course. As in the buoy algorithm, EVA’s path algorithm first isolates the orange color of the path and then applies a Hough transform. From the transform’s output, EVA calculates the difference between the slope of the path and that of a vertical line, and the vision processing unit adjusts the desired heading accordingly.
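The heading correction is just the angle between the detected line and the image's vertical axis. The sketch below takes a line as two endpoints for simplicity; OpenCV's Hough transform actually returns lines in (rho, theta) form, from which the same angle falls out directly.

```python
import math

# Sketch of the path-heading step: the angle of a detected line segment
# measured from the image's vertical axis, in degrees. Endpoint form is
# an assumption for illustration.
def heading_offset(x1, y1, x2, y2):
    # atan2(horizontal run, vertical run): zero for a vertical line
    return math.degrees(math.atan2(x2 - x1, y2 - y1))
```

A path aligned with the vehicle's direction of travel gives an offset of 0°, so no heading change is commanded.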
The first step in finding the bins is finding the edges. First, to cut down on noise, the image is normalized. Then, EVA uses a Canny filter to find the edges. The results are run through an intensity threshold, which provides a black-and-white image. The binary nature of the image makes it more suitable for the Hough transform and less CPU-intensive to process.
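The normalize-and-threshold stages of that pipeline can be sketched in a few lines of NumPy. The Canny step itself is omitted here (in EVA it is an OpenCV edge detector); the threshold value is an assumption.

```python
import numpy as np

# Minimal sketch of the bin-finding preprocessing described above.
def normalize(img):
    # Stretch intensities to the full 0-255 range to cut down on noise
    # sensitivity before edge detection.
    lo, hi = img.min(), img.max()
    return ((img - lo) * 255.0 / max(hi - lo, 1)).astype(np.uint8)

def to_binary(edges, thresh=128):
    # Intensity threshold on the edge response: the resulting
    # black-and-white image is cheap for the Hough transform to process.
    return (edges >= thresh).astype(np.uint8) * 255
```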
Firing Torpedoes into Caesar
The vehicle resumes processing forward vision once the markers have been dropped. It then searches for an intense red object, which would indicate the presence of the cutouts. Once the cutout is found, the vehicle slows down and orients itself in front of the cutout of a specified color. The vehicle uses the size and distortion of the cutout to determine its relative placement and orientation. Once it is properly aligned, the vehicle will fire its torpedoes.
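One way the cutout's distortion can reveal orientation is foreshortening: viewed off-axis, the cutout's apparent width shrinks relative to its height. The sketch below recovers a viewing angle from that ratio; it is a back-of-envelope illustration of the idea, not the vehicle's actual alignment code, and the true aspect ratio is an assumed parameter.

```python
import math

# Hypothetical sketch: estimate how far off-axis the vehicle is from the
# foreshortening of the cutout's bounding box. Assumes the cutout's true
# width-to-height aspect ratio is known (default 1.0 for illustration).
def cutout_alignment(width_px, height_px, true_aspect=1.0):
    observed = width_px / height_px
    # Off-axis viewing compresses apparent width by cos(angle)
    ratio = min(observed / true_aspect, 1.0)
    return math.degrees(math.acos(ratio))
```

A head-on view gives 0°; the vehicle would strafe and turn until the angle is small before firing.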
Dropping Markers in the Bins
While continuing to travel straight, the vehicle uses the downward-facing camera to search for the orange path and the target bins. When one of the bins is located, the vehicle attempts to station itself directly over the center of the target using a center-of-“mass” image-processing algorithm. Once the vehicle is positioned properly over the center of the bin, it drops both markers and proceeds toward the window.
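The center-of-"mass" computation treats the binary target mask as a mass distribution and takes its first moments. A pure-NumPy sketch (EVA may equally well use OpenCV's image moments):

```python
import numpy as np

# Sketch of the center-of-"mass" step used to station the vehicle over
# the bin: the centroid of the nonzero pixels in a binary target mask.
def center_of_mass(mask):
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # target not in view
    return xs.mean(), ys.mean()
```

The offset between this centroid and the image center drives the station-keeping commands, just as in the buoy task.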
After dropping the markers in the bins, the vehicle travels to the center of the pool. The presence of a yellow object denotes the board, and the orientation of the cylinders is found by measuring the intensity of the color red. A probe is then used to manipulate each cylinder off its holder.
Navigating to the Laurel Wreath
After completing the marker drop, the computer signals the hydrophone board to begin analyzing the audio data. The dspblok polls the hydrophones through the analog-to-digital converter at a constant interval and fills a buffer. Once the buffer is full, it runs algorithms on the buffered data to filter out noise, check for erroneous readings, and finally determine whether a ping was received. If a ping was received, the board sends the arrival-time differences of the ping to the BeagleBoard via RS-232. The BeagleBoard uses these data to calculate a vector to the pinger’s relative location and then navigates to the pinger.
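The core of the bearing calculation can be sketched for a single hydrophone pair under a far-field assumption: a ping arriving with time difference dt across a baseline of length d comes from an angle theta off the baseline with cos(theta) = c·dt/d, where c is the speed of sound in water (roughly 1500 m/s). Barracuda's actual array geometry and solver are not shown here.

```python
import math

# Far-field TDOA sketch for one hydrophone pair. The baseline length and
# speed of sound are illustrative; the real system combines several
# pairs to produce a full 3D direction vector.
def bearing_from_tdoa(dt, baseline, c=1500.0):
    x = c * dt / baseline
    x = max(-1.0, min(1.0, x))  # guard against rounding past the valid range
    return math.degrees(math.acos(x))
```

A ping arriving at both elements simultaneously (dt = 0) lies broadside to the pair, at 90° off the baseline.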
Retrieving the Wreath and Surfacing in the Palace
Once the vehicle is positioned above the pinger, signified by a directional vector with very small X, Y, and Z components, it secures the structure using its passive grabber by pitching forward and then backward. After the structure is secured, the vehicle will then carry it to the other octagon, navigating again using the hydrophones. Once there, it will rise until the pressure sensor detects that the AUV is at the surface, pitch forward to drop the structure, and shut off the motors to complete the mission.
SparkFun 9DOF Razor Firmware
Software also maintains a fork of SparkFun’s 9DOF Razor test firmware that increases the output rate to about 100 Hz by caching data from the magnetometer, which can only be polled at 10 Hz. The team has open-sourced this code on GitHub.
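The caching scheme amounts to polling the slow sensor only when its 10 Hz window has elapsed and reusing the last reading for the intervening 100 Hz output samples. A Python sketch of the idea (the actual firmware is C on the Razor's AVR; names and timings here are illustrative):

```python
# Hypothetical sketch of the magnetometer caching described above: the
# fast output loop reuses the last magnetometer reading between polls.
class MagCache:
    def __init__(self, poll_period=0.1):
        self.poll_period = poll_period   # 10 Hz magnetometer limit
        self.last_poll = -float("inf")
        self.cached = None

    def read(self, now, poll_fn):
        # Poll the real sensor only when the slow window has elapsed;
        # otherwise serve the cached value at the full output rate.
        if now - self.last_poll >= self.poll_period:
            self.cached = poll_fn()
            self.last_poll = now
        return self.cached
```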
To manage complex software projects with many collaborators, we use GitHub, a website that specializes in hosting source-code repositories in the Git format. Distributed version-control systems like Git offer many advantages. First, each developer is given the ability to work asynchronously and without an Internet connection, leading to more flexibility and faster development. Second, the code is backed up on external servers and on each team member’s computer, and Git allows the team to revert the source code to older states that are known to be functional. The combination of these two features encourages team members to experiment freely when adding new features. Last, GitHub’s Pull Request feature encourages discussion of software development, which is then archived. We hope that the records generated by Pull Requests will enable future club members to understand the reasoning behind implementations used in the code.