Monday, October 10, 2011

Robocode, Automated Build Systems, and Automated Testing

 Overview

As an exercise in developing systems that scale, I continued development of a competitive Robocode robot and integrated it with an automated build system that compiles the code, runs the robot simulation, and tests the code using JUnit tests, acceptance tests, behavioral tests, and quality assurance tools. The build system is a collection of Ant scripts; coding is still done in Eclipse. The big difference is that now, with a few simple commands from a shell, I can accomplish all of the tasks outlined above. The real benefit is that the same scripts can be run as the code base grows. New tests and script modifications will be needed, of course, but the cost of maintaining a few scripts and adding tests is negligible compared to the time saved over doing all of the building, running, testing, and quality assurance manually.
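As a sketch of what such an Ant script might look like: the target names, directory paths, and wiring below are placeholders of my own, not the actual scripts, and the JUnit task assumes the JUnit jar is on Ant's classpath.

```xml
<project name="surfer" default="build" basedir=".">
  <!-- Hypothetical layout: sources in src/, classes in bin/. -->
  <target name="compile">
    <mkdir dir="bin"/>
    <javac srcdir="src" destdir="bin" includeantruntime="false"/>
  </target>

  <!-- Run the JUnit tests; fail the build on the first failing test. -->
  <target name="junit" depends="compile">
    <junit haltonfailure="true">
      <classpath path="bin"/>
      <formatter type="plain" usefile="false"/>
      <batchtest>
        <fileset dir="bin" includes="**/Test*.class"/>
      </batchtest>
    </junit>
  </target>

  <target name="build" depends="compile,junit"/>
</project>
```

With a layout like this, `ant build` from a shell compiles and tests in one step.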

The Robocode “Surfer” Robot

Movement: My strategy for moving the Surfer robot is to conserve energy and perhaps fool tracking robots. Surfer follows a counterclockwise circuit of the right half of the battlefield, then reverses its direction and follows a clockwise circuit. Additionally, Surfer conserves energy by not firing often, and varying the bullet strength.
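The reverse-at-the-ends circuit logic can be sketched in plain Java, independent of the Robocode API. The class name and the idea of storing explicit {x, y} waypoints are my own illustration; the real robot derives its positions from the battlefield dimensions.

```java
import java.util.List;

// Sketch of Surfer's movement pattern: walk an ordered list of waypoints
// counterclockwise, then reverse and walk it clockwise, ping-ponging forever.
public class Circuit {
    private final List<double[]> waypoints; // ordered counterclockwise; size >= 2
    private int index = 0;
    private int direction = 1;              // 1 = counterclockwise, -1 = clockwise

    public Circuit(List<double[]> waypoints) {
        this.waypoints = waypoints;
    }

    /** Returns the next {x, y} waypoint, reversing at either end of the circuit. */
    public double[] next() {
        index += direction;
        if (index == waypoints.size() || index == -1) {
            direction = -direction;  // reverse the circuit
            index += 2 * direction;  // step back inside the list
        }
        return waypoints.get(index);
    }
}
```

Starting from waypoint 0 of a four-point circuit, successive calls visit 1, 2, 3, then turn around and visit 2, 1, 0, and so on.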

Targeting:  Surfer stops and rotates its radar to scan for other robots at four places during each clockwise or counterclockwise circuit: top center, top right corner, lower right corner, and bottom center.

Firing: Whenever Surfer scans an enemy robot, it fires with a bullet strength based on both its distance from the enemy and the size of the battlefield. Surfer also fires more bullets (up to three) at closer range: three bullets at 200 pixels or closer, two bullets at 300 pixels or closer, and one bullet when farther away.
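The firing rules might look like the following in plain Java. The burst-size thresholds come straight from the description above; the `bulletPower` formula, however, is invented for illustration, since the post does not give the actual one (Robocode bullet power is legal between 0.1 and 3.0).

```java
public class Gunnery {
    /** Burst size from the thresholds described above. */
    public static int bulletsToFire(double distance) {
        if (distance <= 200) return 3;
        if (distance <= 300) return 2;
        return 1;
    }

    /** Hypothetical strength rule: taper power with distance relative to the
     *  battlefield diagonal, clamped to Robocode's legal range of 0.1 to 3.0.
     *  Surfer's real formula may differ. */
    public static double bulletPower(double distance, double fieldDiagonal) {
        double power = 3.0 * (1.0 - distance / fieldDiagonal);
        return Math.max(0.1, Math.min(3.0, power));
    }
}
```

Keeping these rules in pure static methods, with no Robocode dependency, is what makes them easy to cover with JUnit later.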

Results:  Here are the results of five battles of ten rounds each against some sample robots:

Walls: Surfer wins 0/5.
RamFire: Surfer wins 5/5.
SpinBot: Surfer wins 1/5.
Crazy: Surfer wins 5/5.
Fire: Surfer wins 5/5.
Corners: Surfer wins 3/5.
Tracker: Surfer wins 5/5.
SittingDuck: Surfer wins 5/5.

So Surfer can reliably beat most of these sample bots except Walls, SpinBot, and Corners. Given Surfer’s conservative nature, I think these results are pretty good. To beat Walls, SpinBot, and Corners, I think Surfer needs a more advanced targeting algorithm and perhaps a more advanced strategy for bullet firing.

The first improvements I would make to Surfer are locking the radar onto an enemy once it is scanned, and devising a strategy for collisions with other robots. Beyond that, after becoming more familiar with the game physics and the Robocode API, I could make literally dozens of changes and improvements to Surfer.

Testing

My testing approach begins with JUnit tests for the methods that calculate the number of bullets to fire and the bullet strength. I also run behavioral tests that check that my robot visits the battlefield positions I expect it to, and that the bullets it fires vary in strength. My acceptance tests are implemented by having my robot battle other robots. In addition, I used JaCoCo to get feedback on coverage. This is very useful because it reveals which lines of code are not executed during testing; unexecuted lines may well contain bugs.
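One of those unit tests can be sketched in plain Java. JUnit itself is omitted so the snippet stands alone; the burst-size rule under test is the one described in the firing section, and the class and method names are hypothetical.

```java
// Plain-Java stand-in for one of the JUnit tests of the firing logic.
public class BulletCountTest {
    // Unit under test: the burst-size thresholds from the firing section.
    static int bulletsToFire(double distance) {
        if (distance <= 200) return 3;
        if (distance <= 300) return 2;
        return 1;
    }

    public static void main(String[] args) {
        check(bulletsToFire(150) == 3, "close range fires 3");
        check(bulletsToFire(250) == 2, "mid range fires 2");
        check(bulletsToFire(500) == 1, "long range fires 1");
        check(bulletsToFire(200) == 3, "boundary case at 200");
    }

    static void check(boolean ok, String name) {
        if (!ok) throw new AssertionError("failed: " + name);
    }
}
```

In the real build these assertions live in JUnit test classes, which is what lets the Ant scripts run them all and report failures automatically.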

Quality Assurance

Beyond the JUnit, behavioral, and acceptance tests and JaCoCo, QA was done using Checkstyle, FindBugs, and PMD. Although these three tools overlap to some degree, together they can find many kinds of problems: 'best practice' coding-style violations, unused variables, unreachable code, and so on. Once they are built into the automated build system, they pay dividends that grow as the project and its lines of code grow. Doing these kinds of checks manually would be painstakingly difficult and time-consuming.
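As a sketch of how two of these tools can be wired into the same Ant build: the `taskdef` declarations that load each tool's Ant task are elided, and the config-file paths and rule sets are placeholders, so treat this as an outline to be checked against each tool's own documentation rather than a working fragment.

```xml
<!-- Assumes the Checkstyle and PMD Ant tasks have already been
     loaded via taskdef; paths and rule sets are placeholders. -->
<target name="checkstyle">
  <checkstyle config="checkstyle.xml">
    <fileset dir="src" includes="**/*.java"/>
  </checkstyle>
</target>

<target name="pmd">
  <pmd rulesetfiles="rulesets/java/basic.xml">
    <formatter type="text" toFile="pmd-report.txt"/>
    <fileset dir="src" includes="**/*.java"/>
  </pmd>
</target>
```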

Lessons learned

From this project I learned that automated testing and QA tools can be run in a matter of seconds, and that these tools scale to larger systems where manual techniques do not. Bugs, incorrect behavior, and coding-style errors can be discovered more quickly than by manual inspection, perhaps by an order of magnitude or more. Virtually no large software development effort would be undertaken without tools and techniques similar to the ones I used. Even though the QA tools take some time to configure and understand, and the scripts take some time to develop, in the long run they save an enormous amount of money, time, and effort.
