
Unmanned Spaceflight.com _ Image Processing Techniques _ Spirit's UnderBelly

Posted by: PDP8E Jun 25 2009, 03:01 AM

Here is a work in progress.
I was working on another thread (processing the envelope challenge for PFK) and thought I could use the same technique to 'see' underneath Spirit's deck with the MI.
While I don't fix the focus, the technique does what amounts to a smart contrast...
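(If you want to experiment with something similar, CLAHE in scikit-image gets most of the way there. A minimal sketch, not my exact code, and the filename is made up:)

CODE
# A minimal sketch of adaptive ("smart") contrast enhancement via CLAHE.
# This is a stand-in, not my exact method; the filename is made up.
import numpy as np
from skimage import exposure, img_as_float, io

image = img_as_float(io.imread('mi_underbelly_frame.png', as_gray=True))

# Equalize histograms in local tiles; the clip limit keeps the speckle
# noise in the dark regions from being amplified too aggressively.
enhanced = exposure.equalize_adapthist(image, kernel_size=64, clip_limit=0.02)

io.imsave('mi_underbelly_enhanced.png', (enhanced * 255).astype(np.uint8))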
If anyone would like to do a mosaic, I would appreciate it (I haven't done them enough to pull them off like you guys do).
I will post the whole underbelly later, but here are the test images:




Cheers and thanks!

Posted by: RoverDriver Jun 25 2009, 04:56 AM

That's pretty impressive! Can you do the other frames as well? I'm particularly interested in the LM wheel and the surrounding area.

Paolo

Posted by: Astro0 Jun 25 2009, 11:34 AM

Nice work PDP8E smile.gif
Here's a mosaic of your images.


Posted by: Astro0 Jun 25 2009, 11:41 AM

Here are the variants that have appeared so far.
A combination of these techniques would produce some interesting results.



More work for us UMSF'ers, I suggest. wink.gif

Posted by: Floyd Jun 25 2009, 06:04 PM

PDP8E, do you think the focus deconvolution could be used after your smart contrast, or have you distorted the image too much for a deconvolution to work? (Not that I am the one to do such a trick.)


Posted by: PDP8E Jun 25 2009, 06:32 PM

Hi Floyd, (...nice weather you and I are having ... )
A focus deconvolution should still work...
However... the adaptive contrast enhancement was very successful in the dark regions of the image.
I think it may have to do with the camera and the darker parts of the scene working together to 'simulate a high f-number' exposure in those areas (which can rescue out-of-focus shots). So the top and bottom of the new images are almost OK, but the sun-drenched background across the middle of the image is still way out of focus. A focus program might fix the middle third and mess up the top and bottom.
But, that's just theory! Who has a nice Richardson-Lucy deconvolver? Let's see!
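(For anyone who wants to try: scikit-image ships one. A minimal sketch, with a Gaussian guess standing in for the unknown MI defocus PSF:)

CODE
# Minimal Richardson-Lucy sketch (scikit-image). The Gaussian PSF is only
# a guess at the MI defocus blur; the real kernel isn't known here.
import numpy as np
from skimage import io, restoration, img_as_float

image = img_as_float(io.imread('mi_frame.png', as_gray=True))

# Crude stand-in PSF: a normalized 2-D Gaussian, sigma picked by eye.
x = np.arange(-7, 8)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2.0 * 3.0**2))
psf /= psf.sum()

deconvolved = restoration.richardson_lucy(image, psf, num_iter=50)
io.imsave('mi_frame_rl.png',
          (np.clip(deconvolved, 0, 1) * 255).astype(np.uint8))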
I can post the rest of the image sequence tonight. (Thanks for the feedback, Paolo, and for the mosaic, Astro0.)

Cheers

Posted by: jekbradbury Jun 26 2009, 12:26 AM

This is something I did a few weeks ago using the DeconBL (Biggs-Lucy Iterative Blind Deconvolution) routine in P.J. Tadrous's excellent Biaram suite of command-line image processing tools (http://www.bialith.com) for 8,192 iterations of alternating PSF and image estimation steps. The animated GIF is composed of the output at N=0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, and 8192 iterations. Unfortunately, after about 2000 iterations, the process began converging to what is essentially just a noisy version of the original image. However, some of the images in between may be useful. The entire process took over 100 CPU hours, but could be repeated more efficiently for the other images by stopping after the 1024th iteration.
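For anyone curious what the alternating scheme looks like in outline, here is a toy numpy version (emphatically not the Biaram code; real implementations add acceleration, regularization, and far more careful bookkeeping):

CODE
# Toy alternating ("blind") Richardson-Lucy: one multiplicative RL update
# for the image with the PSF held fixed, then one for the PSF with the
# image held fixed, repeated. A sketch only, expecting a nonnegative
# float image; it is NOT the Biaram DeconBL routine.
import numpy as np
from scipy.signal import fftconvolve

def rl_image_step(image, psf, observed, eps=1e-12):
    blurred = fftconvolve(image, psf, mode='same')
    ratio = observed / (blurred + eps)
    return image * fftconvolve(ratio, psf[::-1, ::-1], mode='same')

def rl_psf_step(psf, image, observed, eps=1e-12):
    blurred = fftconvolve(image, psf, mode='same')
    ratio = observed / (blurred + eps)
    # Correlate the ratio with the image estimate, crop back down to the
    # PSF support around the center, and renormalize.
    corr = fftconvolve(ratio, image[::-1, ::-1], mode='same')
    cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
    h = psf.shape[0] // 2
    new = psf * corr[cy - h:cy + h + 1, cx - h:cx + h + 1]
    return new / new.sum()

def blind_rl(observed, psf_size=15, n_outer=100):
    image = observed.copy()
    psf = np.ones((psf_size, psf_size)) / psf_size**2  # flat initial guess
    for _ in range(n_outer):
        psf = rl_psf_step(psf, image, observed)
        image = rl_image_step(image, psf, observed)
    return image, psf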


EDIT: whoops I ran out of bandwidth on the first site I was hosting that image at...

ADMIN: Large embedded file replaced with a smaller version of the animation above. The original 10MB version is available at http://www.fileden.com/files/2009/6/26/2489880/Stack.gif.

Posted by: PDP8E Jun 26 2009, 01:50 AM

jekbradbury: wow! that is one groovy blind deconvolution!
I wonder if the JPL guys would share the PSF of the MI camera to make it a little less blind?
Is there any literature about using a microscope camera as a normal camera?
...like, really short exposure times and then auto contrast?
...Maybe a few different exposure times for different distances? and then piece it together?
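(A crude exposure fusion could do the piecing together. A toy sketch that weights each frame by per-pixel well-exposedness, with made-up filenames:)

CODE
# Toy exposure fusion: weight each frame by how well-exposed each pixel
# is and blend. A sketch of the piece-it-together idea, nothing more.
import numpy as np
from skimage import io, img_as_float

frames = [img_as_float(io.imread(f, as_gray=True))
          for f in ('mi_exp_short.png', 'mi_exp_mid.png', 'mi_exp_long.png')]

stack = np.stack(frames)                              # (n, H, W)
# Gaussian weight centered on mid-gray: saturated or black pixels count less.
weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
fused = (weights * stack).sum(axis=0) / weights.sum(axis=0)

io.imsave('mi_fused.png', (fused * 255).astype(np.uint8))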

Anyway, here are the rest (seven) of the images that I processed (the first three were posted previously).
Astro0: if you would be so kind as to use your superior mosaic skills, I would be in your debt.
It looks like a big rock is next to the LM wheel (the rightmost image in the soon-to-be-made 'Astro0 mosaic', or the last image in this sequence).
I am pretty sure (70%?) a deconvolution routine would clean these up....(anyone?)
This is the best I could do for now...




The rest are in the next post (I hit some upload limit... sorry Doug!)


 

Posted by: PDP8E Jun 26 2009, 01:53 AM

the last one...



Cheers

Posted by: Astro0 Jun 26 2009, 03:48 AM

A quick and rough scaled down version of PDP8E's 'smart contrast' images.
Will have to work on something better later. wink.gif


Posted by: Stu Jun 26 2009, 05:39 AM

Absolutely awesome work you two. I would love to see NASA show your images, as an example of the "Citizen Science" they are encouraging and promoting so hard.

Posted by: PDP8E Jun 27 2009, 03:38 AM

Here is an off-the-wall idea for getting better-focused images while peeking under Spirit's deck with the MI.
The rough idea is to increase the speed of the exposure, a well-known technique to increase depth of field <I don't know if that is possible>
Another is to decrease the aperture size, which increases the f-number <I don't know if that is possible>
BUT!
If you can't do either of those, then decrease the amount of light per exposure by timing the shot at dusk or dawn, and by taking multiple shots in the waning (or gathering) light.
By using enhancement tools (and much more limited deconvolution), the JPL team may gain the insight they require for the sandbox simulations. <??>
...Spirit has a ton of watts and time...
<...well, it seemed like a good idea while I was stuck in traffic...>

Posted by: Astro0 Jun 28 2009, 06:50 AM

I was looking back at the MIs and got to thinking that the offset between each image as the IDD/MI moved should be just enough to make an anaglyph possible. I think someone tried it with 'pointy rock' earlier. Here's a version with the LM wheel.



It might be useful if another set of MIs were taken to better cover the area near the LM wheel, which RoverDriver noted was of interest.
Applying all the techniques we are discussing here might produce some good results.
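For reference, the red/cyan combination itself is simple once a pair is aligned. A minimal sketch with hypothetical filenames (a real MI pair would need registering first):

CODE
# Minimal red/cyan anaglyph from two roughly aligned grayscale frames.
# Filenames are hypothetical; real MI pairs need registration first.
import numpy as np
from skimage import io, img_as_float

left = img_as_float(io.imread('mi_left.png', as_gray=True))
right = img_as_float(io.imread('mi_right.png', as_gray=True))

# Left eye drives the red channel; right eye drives green and blue (cyan).
anaglyph = np.dstack([left, right, right])
io.imsave('mi_anaglyph.png', (anaglyph * 255).astype(np.uint8))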

Posted by: RoverDriver Jun 28 2009, 07:04 AM

QUOTE (PDP8E @ Jun 26 2009, 08:38 PM) *
...
The rough idea is to increase the speed of the exposure, a well-known technique to increase depth of field <I don't know if that is possible>


I've never heard of this technique. Do you mean making the exposure time shorter?

QUOTE
...
If you can't do either of those, then decrease the amount of light per exposure by timing the shot at dusk or dawn, and by taking multiple shots in the waning (or gathering) light.
By using enhancement tools (and much more limited deconvolution), the JPL team may gain the insight they require for the sandbox simulations. <??>
...


I'm not sure how decreasing the light and/or the exposure time would yield a better image. Can you point me to something that describes these image processing algorithms?

Paolo

Posted by: dvandorn Jun 28 2009, 05:37 PM

Actually, PDP8E is operating under a misapprehension. I can tell his logical thought process was "If you get greater depth of field by reducing your aperture and thus cutting down on the light entering the camera, maybe you'll get greater depth of field merely by reducing the overall light level." And that's a fallacious logic chain.

You see, in the physics of photography, it's the actual size of the aperture, and not the amount of light reaching the photosensitive surface, that determines your depth of field (i.e., the range of distances from the camera within which objects are in focus). Focus has to do with the collimation (i.e., the parallel-ness) of the rays of light when they hit the film/CCD. The smaller the aperture, the less "spread" you get from a beam of light entering, say, at the upper-right corner of the lens and then projected onto the bottom-left corner of the film plane. The absolute greatest depth of field comes from a pinhole aperture, since there is almost no room for the light from any given point in the field of view to spread across the width of the aperture.
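To put a back-of-the-envelope formula on this: for a subject at distance u well inside the hyperfocal distance, the total depth of field is approximately

\[ \mathrm{DOF} \approx \frac{2\,u^2\,N\,c}{f^2} \]

where N is the f-number, c the acceptable circle of confusion, and f the focal length. The aperture enters through N, right there in the numerator; exposure time appears nowhere in the expression.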

I have a degree in photojournalism -- some things you learn empirically, even if you're not a physicist... smile.gif

-the other Doug

p.s. -- Looking at PDP8E's other point, that reducing exposure time would increase depth of field: again, that's not accurate. The only thing that really increases depth of field is reducing the aperture. If you increase the light on the subject, you can reduce the aperture and thus increase depth of field, and you might then need to reduce your exposure time to get a properly exposed image. That's the only way exposure time ties in with depth of field. In general, photographers use exposure time to control total exposure (i.e., the total amount of light) and aperture to control depth of field. dvd

Posted by: tty Jun 28 2009, 06:07 PM

Is there any way to reduce the aperture from outside the camera? There is unlikely to be any structure with a suitable pinhole to look through, but it might be possible to place the MI so that part of the aperture is blocked by the MER structure. Would this work? Most camera apertures become rather asymmetric when stopped well down, but that does not seem to have much impact on image quality.

Posted by: dvandorn Jun 28 2009, 06:17 PM

Partially closing the MI's lens cap to achieve this has been discussed here but, as far as I know, hasn't been tried.

-the other Doug

Posted by: PDP8E Jun 28 2009, 08:00 PM

Hey Other Doug,

Thanks for keeping me on the straight and narrow!

Disclosure: My aperture/exposure time 'skills' come from an old manual 35mm Minolta that I haven't touched in years.

Cheers




Posted by: RoverDriver Jun 29 2009, 03:12 AM

QUOTE (PDP8E @ Jun 28 2009, 12:00 PM) *
...
Disclosure: My aperture/exposure time 'skills' come from an old manual 35mm Minolta that I haven't touched in years.
...


I thought you meant that but wasn't sure. There is no iris on any of the cameras on board the vehicles; irises are delicate mechanical devices that would be prone to malfunction. We can control the exposure time, the PANCAMs have ND and bandpass filters, and the MI has a lens cover, but that's it. No focus ring, no iris setting.

Paolo (who recently traded his Nikon N6006 for a monopod)

Posted by: PDP8E Jun 30 2009, 02:41 AM

Here is the first peek under Spirit's deck, taken a sol before the 'better' set.

Same technique.

The last image is in the next post.




Posted by: PDP8E Jun 30 2009, 02:43 AM

last image:



Cheers

<Astro0, please do your mosaic magic!>

Posted by: Astro0 Jun 30 2009, 12:26 PM

It's an interesting technique, PDP8E, but I wonder how much more detail it is revealing to us.
Perhaps the 'smart contrast' should happen after the other processes.

Here's the mosaic of your recent images.


Posted by: PDP8E Jun 30 2009, 02:24 PM

Astro, first, thanks for helping out with the mosaic! (I find it amazing that a guy in Boston can post a couple of images before bed and then, in the morning, find that another guy in Australia has added his technical and artistic expertise to them... virtual collaboration!)

This little contrast trick only demonstrates one avenue of study on an image set that has many challenges (seriously out of focus, generally low light levels, very bright spots, extended light-and-dark areas, etc.). As can be seen, adaptive contrast enhancement balances on the hairy edge of introducing and using speckles (or, as some would call it, noise!) as information.

I am attacking the focus problem at the moment (hitting the books), and hope to have something worth sharing in a little while.

I hope the rover team gets little Spirit out of this quagmire soon. Good luck with the sandbox, Paolo!

Cheers




Posted by: jekbradbury Jun 30 2009, 02:59 PM

For those who intend to do serious work on attacking the defocus, here are two documents that may come in useful; they describe the technical and lens specifications for the Microscopic Imager. The toughest part is likely computing the proper PSF for each individual point in the image, as that would require knowing the exact distance to each pixel, which in turn requires photogrammetric calculations that usually need a sharp stereo pair! The anaglyph may help, but an accurate mesh of the surroundings is more of a goal than an available input... one (crazy) option is an iterative bundle adjustment: start with a rough estimate of a photometrically textured mesh model of the scene, then improve the estimate by repeatedly raytracing the model into images using the camera parameters and checking how well they correspond to reality. Yes, I know this would probably require a couple of supercomputers' worth of processing power, but it might be worth a try...

http://starbrite.jpl.nasa.gov/pds/viewInstrumentProfile.jsp?INSTRUMENT_ID=MI&INSTRUMENT_HOST_ID=MER2
http://www.lpi.usra.edu/meetings/sixthmars2003/pdf/3276.pdf
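A much more modest middle ground than full raytracing: deconvolve in tiles, giving each tile its own defocus PSF from an assumed mean distance. A toy sketch; the Gaussian blur model, the distance map, and the best-focus constant are placeholders rather than the real MI optics from the specs above:

CODE
# Toy spatially-varying deconvolution: process tiles, each with a defocus
# PSF from an assumed per-tile distance. The Gaussian blur model, the
# distance map, and the best-focus figure are placeholders only.
import numpy as np
from skimage import restoration

def gaussian_psf(sigma, size=15):
    x = np.arange(size) - size // 2
    xx, yy = np.meshgrid(x, x)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def defocus_sigma(distance_m, focus_m=0.069):
    # Placeholder model: blur grows with distance from best focus
    # (roughly 69 mm for the MI, per the linked specs).
    return 1.0 + 20.0 * abs(distance_m - focus_m)

def deconvolve_by_tiles(image, distance_map, tile=64):
    out = np.zeros_like(image)
    for y in range(0, image.shape[0], tile):
        for x in range(0, image.shape[1], tile):
            patch = image[y:y + tile, x:x + tile]
            d = float(distance_map[y:y + tile, x:x + tile].mean())
            psf = gaussian_psf(defocus_sigma(d))
            out[y:y + tile, x:x + tile] = restoration.richardson_lucy(
                patch, psf, num_iter=30)
    return out

In practice the tiles would need to overlap and be blended, or the seams would give the trick away.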

Posted by: PDP8E Jul 1 2009, 04:27 AM

This is a brief report on a work in progress: focusing the MI images.
The adaptive contrast was just an attempt to see in low light levels... with varying success, depending on the quality of the input images.

I cobbled together a focus program that borrows some aspects of wavelets and the related least-squares methods.
As a worst-case input (but the easiest for me), I used the processed mosaic (thanks, Astro-naught) of JPL's first attempt to see beneath Spirit.

After I finished the coding and debugging, it ran for 2 hours of CPU time (!) and deconvolved into utter ugliness. I ran it again, stopped it at iteration 211 (52 minutes), and here is the result.



<little rock in the SW corner is better (and closer to the camera)...what is going on with the LM wheel?...>
<a distant 'rock' in the NW corner, seen through the (out-of-focus) upside-down 'U' of the hull, might be visible to the NAV/PANCAM cameras for a cross-check?>
<can somebody get on their belly and take a series of pictures at the TEE BEE of the underside of the test rover? ...and post them?>

The overall goal is to iteratively compute a new hi-res image and check it by producing a new low-res version from it, comparing that to the actual low-res image (and then adjusting for the next iteration). I cheated and used two arrays, one each for a 1-D representation of X and Y, rather than the (days'-worth-of-CPU) full matrix computation. (Confession: averaging was used at the intersections of pixels in the output array!)
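(In outline, the loop looks something like this classic iterative back-projection sketch. It is not my actual code; my 1-D row/column shortcut is replaced by plain 2-D operations, and the blur model is a guess:)

CODE
# Sketch of the compare-and-adjust loop: guess a hi-res image, simulate
# the low-res image it would produce, push the residual back up, repeat.
# Classic iterative back-projection on a float image; not my 1-D shortcut.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def back_project(observed_lo, scale=2, n_iter=50, step=1.0, sigma=1.0):
    hi = zoom(observed_lo, scale, order=1)  # initial hi-res guess
    for _ in range(n_iter):
        # Blur + downsample: the low-res frame the current guess implies.
        simulated_lo = zoom(gaussian_filter(hi, sigma), 1.0 / scale, order=1)
        # Compare with the real low-res frame and adjust the guess.
        hi += step * zoom(observed_lo - simulated_lo, scale, order=1)
    return hi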

Since this worst-case image (super-processed by me and then skillfully mosaicked by Astro0) 'looks' like it can be improved (somewhat? ... you tell me...), the next step is to try this method on the original JPL images, then contrast-enhance, then mosaic.

As a note, I can only detect one seam from the mosaic in the deconvolution...I thought I'd see them all.

See you in a few days...?

Cheers

Ustrax: beautiful Tennyson poem...the Crew, Marines, and all of us caught up in this adventure hear you loud and clear...
