A brief mention in the Spirit route thread of the time visual odometry takes sent me off on digital signal processing tangents--
The optical mouse you're using right now does visual odometry with no trouble (unless you use it on too plain a surface). The basic principle is a tiny camera taking lots of small photos of the surface, wired to a DSP that correlates successive frames to measure displacement.
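Just to make the mouse-sensor idea concrete, here's a toy sketch of that frame-correlation step (the function name and parameters are my own, not any real sensor's firmware): brute-force search over small displacements, scoring each candidate shift by the mean squared difference over the overlapping region of the two frames. Real mouse sensors do something like this in dedicated silicon on tiny frames, thousands of times per second.

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=4):
    """Return (dy, dx) such that prev[y, x] ~= curr[y + dy, x + dx]."""
    h, w = prev.shape
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Crop both frames to the region where the shifted windows overlap.
            p = prev[max(-dy, 0):h - max(dy, 0), max(-dx, 0):w - max(dx, 0)]
            c = curr[max(dy, 0):h - max(-dy, 0), max(dx, 0):w - max(-dx, 0)]
            err = np.mean((p - c) ** 2)  # lower = better alignment
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```

Summing those per-frame shifts over time is the odometry: each (dy, dx) in pixels, times the ground resolution of the camera, gives a displacement since the last frame.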
Have any rover technologies looked at analogous techniques for really precise odometry? (Or maybe navigation is a better term than odometry, since it can handle arbitrary vectors.) Is it too tough to do this with an eyeball farther from the ground, without a light source that can evenly illuminate the larger area? Isn't this easier than doing odometry with cameras pointed in any other direction?
Does the rover's visual odometry use lots of successive photos of the nearest terrain, or does it try to use big objects farther away as waypoints?