
Unmanned Spaceflight.com _ Image Processing Techniques _ Image Reprocessing Methods

Posted by: JRehling Mar 7 2016, 08:22 PM

I recently saw a PBS-produced show which was quite off-topic for this board, generally, but involved image processing. This special, "The Bomb", showed a lot of old film footage of nuclear explosions. Where this is relevant to the board is that they cited new work in image processing that dramatically improved the visual quality of that film. Early on, they showed old, grainy film, and showed the quality of the improvements as a "scan line" moved across the frame, converting the old, grainy imagery to new, greatly improved video. They verbally referred to this image processing work, but I'm not sure how one would find out the details of the work.

The show itself is cited here:

http://www.pbs.org/program/bomb/


Posted by: scalbers Mar 7 2016, 10:15 PM

Maybe some Fourier processing would help with grainy images? Reminds me of a sort of similar type of grain on Titan radar data.

Posted by: HSchirmer Mar 7 2016, 11:16 PM

QUOTE (JRehling @ Mar 7 2016, 08:22 PM) *
They verbally referred to this image processing work, but I'm not sure how one would find out the details of the work.


I think you're describing the US Army's algorithm, http://techlinkcenter.org/summaries/super-resolution-image-reconstruction-srir.

Basically, you can make camera panning or camera jitter work for you with grainy video images.
It also appears to work well for video guided ordinance, and license plate recognition.

 

Posted by: JRehling Mar 8 2016, 12:27 AM

I think a significant problem that they solved was to repair damage in the form of scratches and lines. Those were very obvious in the original film, but seemingly repaired almost perfectly in the final product. I imagine it's useful to think of the film as a three-dimensional space, with time one of the dimensions, and interpolate to repair flaws once they are detected. Even though we don't use film in space anymore, those kinds of flaws can occur due to sensor noise and cosmic ray hits.
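A minimal NumPy sketch of that three-dimensional view of film, with time as the third axis: flag pixels in a frame that disagree with the temporal median of their neighbors, then replace them with that median. The function name and threshold are illustrative, not from the documentary's actual pipeline.

```python
import numpy as np

def temporal_repair(frames, threshold=40):
    """Repair transient flaws (scratches, cosmic-ray hits) in the middle
    frame of a 3-frame stack by interpolating along the time axis.

    frames: array of shape (3, H, W); frames[1] is the frame repaired.
    A pixel is flagged as damaged when it differs from the median of the
    stack by more than `threshold`; flagged pixels take the median value.
    """
    stack = np.asarray(frames, dtype=float)
    median = np.median(stack, axis=0)            # value along the time axis
    damaged = np.abs(stack[1] - median) > threshold
    repaired = stack[1].copy()
    repaired[damaged] = median[damaged]
    return repaired, damaged
```

Because a scratch rarely lands on the same pixel in consecutive frames, the temporal median sees two "good" values out of three and wins the vote.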

Posted by: nprev Mar 8 2016, 02:14 AM

MOD NOTE: Changed topic title to redirect the discussion towards application of techniques used in other fields to spacecraft imagery. Examples from other subjects are fine as long as there is a clear tie-in to robotic spaceflight.

Posted by: machi Mar 8 2016, 12:14 PM

I didn't see the original documentary, but there are many ways to remove scratches from old films.
The best way is to have the original film and use https://en.wikipedia.org/wiki/Infrared_cleaning.
Another way is to compare adjacent movie frames for differences, or to map scratches inherent to the camera.
If you know the positions of dust specks and scratches, then it's possible to remove them via inpainting or by using data from adjacent frames.
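A rough sketch of the adjacent-frame comparison just described, assuming static or slowly-moving content; the helper and its threshold are hypothetical. A defect on a single frame differs from *both* neighboring frames, whereas real motion usually differs from only one of them:

```python
import numpy as np

def detect_and_fill(prev, cur, nxt, threshold=50):
    """Flag pixels in `cur` that disagree with BOTH adjacent frames
    (dust speck or scratch on a single frame) and fill them with the
    average of the neighbors. All inputs are (H, W) grayscale arrays.
    """
    prev, cur, nxt = (np.asarray(a, dtype=float) for a in (prev, cur, nxt))
    # defect: differs from both temporal neighbors
    mask = (np.abs(cur - prev) > threshold) & (np.abs(cur - nxt) > threshold)
    filled = np.where(mask, 0.5 * (prev + nxt), cur)
    return filled, mask
```

For scratches inherent to the camera (fixed position every frame), the mask can instead be accumulated across many frames and applied everywhere.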

Posted by: JohnVV Mar 8 2016, 11:37 PM

PIXAR came up with a way, using their massive render farms, to use the movie film frames before and after to inpaint into the current frame.

This works VERY well for the removal of scratches and noise.

I do not have a 12+ box cluster, so .....


For images, the tool I use mostly is G'MIC. This used to be the old program called "GREYCstoration".

It uses a heat-flow PDE to inpaint missing data and for noise removal.
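The heat-flow idea can be shown in a few lines of NumPy. This is only an isotropic toy version — G'MIC's actual filters use more sophisticated anisotropic PDEs — but it illustrates the principle: diffuse the image repeatedly, updating only the missing pixels, so the known pixels act as fixed boundary conditions.

```python
import numpy as np

def diffusion_inpaint(img, mask, iterations=500, dt=0.2):
    """Toy heat-equation inpainting: iterate u_t = laplacian(u), but only
    update pixels inside `mask` (the missing region). Known pixels stay
    fixed, so values flow inward from the boundary of the hole.
    dt must stay <= 0.25 for the explicit scheme to be stable.
    """
    u = np.asarray(img, dtype=float).copy()
    mask = np.asarray(mask, dtype=bool)
    for _ in range(iterations):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u[mask] += dt * lap[mask]   # heat flow, restricted to the hole
    return u
```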

Posted by: Ian R Mar 9 2016, 01:07 AM

G'mic is a remarkable toolset, John; especially the numerous inpainting resources.

Posted by: JohnVV Mar 9 2016, 01:55 AM

I have been using it for many years. G'MIC has mostly replaced ImageMagick for me (except for MATLAB m-files).

The GIMP plugin is 8-bit (GIMP 2.8), with support for 16-bit, 32-bit, and float images in GIMP 2.9 DEVELOPMENT.

But the terminal version runs on { uchar | char | ushort | short | uint | int | ulong | long | float | double } image data,
so everything from ASCII text and CSV data to 8-, 16-, 32-, and 64-bit data.

An example from Voyager 2 at Neptune:
GIMP 2.9.3 DEVEL, G'MIC, Resynthesizer (and the built-in Heal tool)
http://imgbox.com/yVbSHO7C http://imgbox.com/vRP3jNOi

Posted by: JRehling Mar 9 2016, 07:28 PM

As I'm starting to get into astrophotography, I'm finding that one useful tactic comes from an understanding of whether noise is systematically brighter or darker than the signal.

If noise may cause pixels to end up either darker or lighter than the signal, then algorithms that average values (over space or time) may be the best way to remove glitches.

But if you know that noise is brighter than the signal (for example), averaging may not be as good as combining values (over space or time) with a "minimum" operator. Or, obviously, a "maximum" operator if the noise is systematically dark.

When I take images of, say, the Messier objects, there is a fuzzy boundary between the (theoretically) black sky and the gray object. My camera introduces noise which is almost always brighter than the signal. A naive approach could be to bin pixels 2x2, so the noise is spread out among 4 times as much signal, at the cost of resolution. But I've found it is much better to wiggle the image in 4 copies – perhaps wiggling it by 2 pixels each time. Then, overlay those images and take the minimum value for each resulting pixel. Again, I lose resolution, but now the noise is not just reduced to 25% of what it was, but – in many cases – to zero!

I very much doubt if this is an original insight, but I came by it independently.
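The wiggle-and-minimum idea above can be sketched directly in NumPy (the function name and 12x12 test size are illustrative). Four copies of the image are offset by a couple of pixels and overlaid; since the four copies almost never place the same bright noise pixel on the same output position, the per-pixel minimum drops one-sided bright noise to the background value:

```python
import numpy as np

def min_combine(img, shift=2):
    """Overlay four copies of `img`, each offset by `shift` pixels, and
    keep the per-pixel minimum. Removes noise that is systematically
    BRIGHTER than the signal; swap in np.maximum.reduce if the noise
    is systematically dark. Output loses `shift` pixels per edge.
    """
    img = np.asarray(img, dtype=float)
    copies = [
        img[:-shift, :-shift],   # up-left copy
        img[:-shift, shift:],    # up-right copy
        img[shift:, :-shift],    # down-left copy
        img[shift:, shift:],     # down-right copy
    ]
    return np.minimum.reduce(copies)
```

As the post notes, a bright outlier survives only if it coincides with itself in all four offsets, which a 2-pixel wiggle makes impossible for isolated noise — whereas plain 2x2 binning merely dilutes it to 25%.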

Posted by: JohnVV Mar 9 2016, 08:36 PM

For static such as CCD noise, this simulated noise is similar — the above Neptune image with noise added:

CODE
gmic c1134037.cal1NOISE.png -remove_hotpixels 3,5 -o test.png

The added noise, then the de-noised image:
http://imgbox.com/iiiysLWq http://imgbox.com/q1UUN0up
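For readers without G'MIC, here is a rough NumPy analogue of what `-remove_hotpixels` does: compute a local median in a small window and replace any pixel that deviates too far from it. (G'MIC's actual `3,5` arguments are a mask size and a percentage threshold; this sketch uses an absolute threshold instead.)

```python
import numpy as np

def remove_hotpixels(img, size=3, threshold=30):
    """Replace pixels deviating from the local size x size median by
    more than `threshold` with that median. Edge pixels are handled by
    replicating the border.
    """
    img = np.asarray(img, dtype=float)
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    # stack every window offset, then take the per-pixel median
    windows = np.stack([
        padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(size) for dx in range(size)
    ])
    med = np.median(windows, axis=0)
    return np.where(np.abs(img - med) > threshold, med, img)
```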

Posted by: HSchirmer Mar 10 2016, 04:59 PM

QUOTE (JRehling @ Mar 9 2016, 08:28 PM) *
But I've found it is much better to wiggle the image in 4 copies – perhaps wiggling it by 2 pixels each time.
Then, overlay those images and take the minimum value for each resulting pixel.


Eh, with the right computations, you can overlay multiple images
to synthesize significantly higher resolution than any individual frame.

Basically, taking 16 blurry photos of the same static scene, each shifted by a sub-pixel amount, lets you compute an image with 16x the pixel count (4x the linear resolution).


https://www.researchgate.net/publication/242116561_Sub-pixel_image_registration_with_a_maximum_likelihood_estimator_Application_to_the_first_adaptive_optics_observations_of_Arp_220_in_the_L'_band

http://www.nature.com/nphoton/focus/superresolution/index.html#editorial
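A toy version of the resampling step behind this (names illustrative): given low-resolution frames whose sub-pixel offsets are already known, interleave them onto a finer grid. Real super-resolution pipelines, like those in the links above, must first *estimate* the offsets (registration) and then deconvolve the optical blur; this sketch shows only the shift-and-add recombination.

```python
import numpy as np

def shift_and_add(low_res_images, offsets, factor):
    """Place each low-resolution frame onto a factor-times-finer grid
    at its known sub-pixel offset, averaging where frames overlap.
    offsets: (dy, dx) positions on the fine grid, each in 0..factor-1.
    With factor=4 and 16 frames covering all 16 quarter-pixel offsets,
    16 frames recombine into one image with 4x the linear resolution.
    """
    h, w = low_res_images[0].shape
    hi = np.zeros((h * factor, w * factor))
    count = np.zeros_like(hi)
    for img, (dy, dx) in zip(low_res_images, offsets):
        hi[dy::factor, dx::factor] += img
        count[dy::factor, dx::factor] += 1
    return hi / np.maximum(count, 1)   # average overlapping samples
```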

Posted by: JRehling Mar 10 2016, 06:45 PM

QUOTE (HSchirmer @ Mar 10 2016, 09:59 AM) *
Eh, with the right computations, you can overlay multiple images
to synthesize significantly higher resolution than any individual frame.


Oh, I know! But if you have noise in each image and merely do that, you keep the noise. My suggestion is about eliminating the noise (almost) entirely.

And here's a perspective that is almost startling in contrast to typical planetary imaging, but that often applies when imaging, e.g., nebulae: sometimes you prefer better signal-to-noise at the cost of resolution.
