Autonomous robot navigation is an area that has undergone much research and development, leading to a wide range of robotic platforms. These range from mass-produced robotic vacuums, to mid-sized research platforms like ours, to full-size autonomous automobiles. As sensor and computer technology continues to advance and sensor prices come down, smaller platforms are gaining access to sensors that used to be reserved for larger platforms. Leveraging more sensors and more sophisticated sensors, however, requires more computation. This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks, ranging from computer vision to path planning. For the competition, our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU of our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to the GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

The primary control program for Stark is designed for participation in the Association for Unmanned Vehicle Systems International (AUVSI) Intelligent Ground Vehicle Competition (IGVC). The competition consists of two primary events: the navigation challenge and the autonomous challenge.
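The profiling step described above can be illustrated generically. The sketch below uses Python's standard-library `cProfile` to locate hotspots in a control loop; it is not the authors' actual tooling, and `expensive_vision_step` is a hypothetical stand-in for a per-frame vision routine of the kind that would be a candidate for GPU porting.

```python
import cProfile
import io
import pstats

def expensive_vision_step(frame):
    # Hypothetical stand-in for a computationally heavy,
    # user-written per-frame routine (e.g. image filtering).
    return sum(x * x for x in frame)

def control_loop(iterations=100):
    # Simulate repeated sensor processing inside a control loop.
    frame = list(range(10_000))
    for _ in range(iterations):
        expensive_vision_step(frame)

profiler = cProfile.Profile()
profiler.enable()
control_loop()
profiler.disable()

# Print the five most expensive call sites by cumulative time;
# functions that dominate here are the natural porting candidates.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

In a real ROS system the same idea applies per node: profile each node under competition-like load, then rank routines by cumulative CPU time to choose which modules justify the porting effort.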
The navigation challenge is an outdoor GPS-based navigation problem in which a robot must autonomously navigate to a number of pre-defined GPS waypoints while avoiding positive obstacles such as construction barrels, sawhorses, and mesh fences. The robot must do this while also strictly remaining within a course bounding box that is given indirectly by GPS coordinates. The autonomous challenge is effectively a superset of the navigation challenge, requiring additional computer vision processing and logic to handle white lines, real and simulated potholes, ramps, and navigation flags.
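The two geometric checks implied by the navigation challenge, "have we reached the current waypoint?" and "are we still inside the course bounds?", can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the 1 m arrival tolerance, and the axis-aligned lat/lon bounding box are all assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    earth_radius_m = 6_371_000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def waypoint_reached(pos, waypoint, tol_m=1.0):
    """True when the robot is within tol_m meters of the waypoint."""
    return haversine_m(pos[0], pos[1], waypoint[0], waypoint[1]) <= tol_m

def inside_bounds(pos, southwest, northeast):
    """Axis-aligned lat/lon bounding-box containment check."""
    return (southwest[0] <= pos[0] <= northeast[0]
            and southwest[1] <= pos[1] <= northeast[1])
```

A planner would typically loop over the waypoint list, advancing to the next waypoint once `waypoint_reached` fires, while treating any path that leaves the bounding box as invalid.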