Hi Nirgal and all, I would like to discuss vignetting and methods of correcting it. I'm mainly interested in mathematical methods that could automatically calculate and adjust the right grey value for each pixel in a single picture.
My current method works with more or less transparent layers over the original picture that can roughly balance the grey values. A perfect layer has to be the exact inverted brightness difference of each picture with these shadow effects. This method is very effective if you get the correct inverted values. These shots of the Mars sky come close to such a perfect mask, but not always. And of course the centers of the pictures sadly lose much of their original brightness/luminance.
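In rough code terms, the layer idea above is something like this (an illustrative Python/NumPy sketch only; the real work here is done with Photoshop layers, and the names are made up):
[code]
# Illustrative sketch of the "inverted brightness difference" layer idea described
# above; the actual workflow uses Photoshop layers, and these names are made up.
import numpy as np

def correction_layer_from_sky(sky_frame):
    """Build an additive correction layer from a (nearly) uniform sky frame."""
    sky = sky_frame.astype(float)
    center = sky[sky.shape[0] // 2, sky.shape[1] // 2]   # centre as the reference brightness
    return center - sky                                  # the inverted brightness difference

def apply_layer(image, layer):
    """Overlaying the layer amounts to adding it to the image."""
    return np.clip(image.astype(float) + layer, 0, 255).astype(np.uint8)
[/code]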
I have in mind a mathematical method that could adjust each grey value in a pic in order to obtain a completely balanced brightness over the entire picture. But I'm not in a position to achieve that. I only know one has to start with the calculation of the grey values in the center of the picture. The center contains, in effect, the reference values for the whole picture, if I'm correct.
Is there a possibility (mathematical method) to get (roughly) the same brightness and luminance as in the center over the whole picture from the MERs?
Greetings, Peter
Hi Tman,
mathematically, we can describe vignetting effects as a very low frequency
change in pixel brightness.
Unfortunately there is no "perfect" algorithm that could always distinguish the
vignetting from other low frequency brightness variations that belong to the
"real" picture content and are therefore not to be canceled out.
So I'm afraid that there will be no better method than to use the
complete reference frames where the camera points to a uniformly bright sky.
So actually I think your method of using the Navcam reference sky-images is
already quite good !
The only problem that I can see is that the sky is probably not uniform and
that, if there is a time span between the reference image and the actual
shot, it may not work well because of intermediate changes in light conditions.
As for my "unstrip" program I had to choose a different approach, because it
is mainly intended to remove the dark "strips" (at the frame seams) in already stitched complete panoramas.
The algorithm works as follows (simplified; a rough code sketch follows the list):
0. convert the image to LAB color space to handle color and luminance independently
1. try to find a *horizontal* strip in the image that is as uniform in brightness as possible.
2. apply a gaussian smoothing filter to each line of the reference area
3. "merge" step: calculate the reference line as an average of all lines in the reference area.
4. take the average brightness of the reference area as the reference luminance.
5. we now have our horizontal "brightness correcting curve", but it may still have
undesired high frequency variations (despite the smoothing)
6. so, as a last step I create a b-spline (or linear) interpolated version of the
correction curve.
7. apply the correction curve to each individual line in the image
(i.e. brightness(pixel(x,y)) = brightness(pixel(x,y)) + brightness(correction-curve(x)))
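A much simplified sketch of the above in Python/NumPy (not the actual unstrip code; steps 0 and 1 - the LAB conversion and the automatic search for the reference strip - are skipped here, and plain Gaussian smoothing stands in for the b-spline step):
[code]
# Much simplified sketch of the algorithm above; NOT the actual unstrip code.
# The reference strip is passed in by hand and Gaussian smoothing replaces
# the b-spline interpolation of step 6.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def unstrip_correct(lum, ref_rows, sigma=25.0):
    """lum: 2D luminance array; ref_rows: (top, bottom) of the reference strip."""
    strip = lum[ref_rows[0]:ref_rows[1], :].astype(float)
    strip = gaussian_filter1d(strip, sigma, axis=1)     # step 2: smooth each line
    curve = strip.mean(axis=0)                          # step 3: merge lines into one reference line
    ref_luminance = curve.mean()                        # step 4: average brightness of the strip
    correction = ref_luminance - curve                  # steps 5/6: brightness correcting curve
    return lum.astype(float) + correction[np.newaxis, :]  # step 7: add the curve to every line
[/code]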
Hi Nirgal, probably you're right and it's impossible with Navcam pics. Btw. don't you fancy dyeing my sol 581 pan again?
Hi Tman,
I tried it using "Fast Fourier Transformation (FFT)". For each row and each column of an image I calculated the FFT and took the lowest frequency to determine the amount of vignetting within that row/column. Then I averaged the results for the rows and the columns, separately. That results in two vectors containing the average vignetting of the rows and the columns.
I used these two vectors to construct a template with the vignetting of the whole image. This template was then normalised, yielding values between zero and one. After that I divided the original image by this template. By doing so, pixels with a value of zero remain zero, and the colour depth is preserved, actually increased. To compensate for that each image has to be transformed to pixel values between 0 and 255 afterwards.
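Just to illustrate the idea (a rough Python/NumPy sketch, not Michael's actual IDL code - here the rows/columns are averaged first and only the lowest Fourier mode is kept, which is a simplification of the method described above):
[code]
# Rough illustration of the FFT idea; NOT the actual anti_vig IDL program.
# Assumes a single-channel (grayscale) image as a 2D float array.
import numpy as np

def lowest_mode(img, axis):
    """Average profile along `axis`, keeping only the DC term and the lowest frequency."""
    profile = img.mean(axis=axis).astype(float)
    spec = np.fft.rfft(profile)
    spec[2:] = 0.0                                   # zero out everything above the lowest mode
    return np.fft.irfft(spec, n=profile.size)

def devignette_fft(img):
    rows = lowest_mode(img, axis=1)                  # smooth brightness profile over y
    cols = lowest_mode(img, axis=0)                  # smooth brightness profile over x
    template = np.outer(rows, cols)
    template = np.clip(template / template.max(), 1e-3, 1.0)  # normalise to 0..1
    out = img / template                             # divide; zero pixels stay zero
    return 255.0 * out / out.max()                   # rescale to 0..255 afterwards
[/code]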
That method gave quite some good results, but, there are some problems as well that I will have to follow up.
Michael
Hi Michael, it would be best if we talked about it in German, but then the other members here would pull their hair out (or so).
I don't know the FFT method so far, but it seems to be fundamental when programming high-quality picture editing software. After a quick search on the web I found very interesting examples - but a.t.m. definitely too difficult for me.
Did you write a program for the FFT calculation that now works automatically when processing the MER pics?
My only hope is that you could program a plug-in for Photoshop with it
When you get satisfactory results, I would be the first customer!
Yeah, sounds really good so far. It would be fantastic if we could process each single picture (roughly) as well as your posted picture above, Michael - before any other processing. I guess (without any experience with this technology) that would really help to finally get nearly the best result in original luminance and details without any vignetting.
Thanks Michael,
hopefully I will find the time to do the coding soon
(I'll never know whether I spent more time with image processing or with coding the tools that in turn help with the image processing)
both are a lot of fun... to see mathematics in action, turning formulas into
living shapes & colors
Nirgal, I finished the commenting of my program to eliminate the vignetting. It is now fully automatic. You, or anybody else who is interested, can download it from http://www.muk.uni-hannover.de/~theusner/mars/vignett_fft.pro.
I hope that you can find your way through the code. If you have any questions, just ask me. Or, if you want to know how some of the IDL-routines work, you can have a look here: http://idlastro.gsfc.nasa.gov/idl_html_help/idl_alph.html
The program can only deal with jpeg images which have equal x- and y-dimensions. Currently, it needs true color images. Basically, it gets along with all MER images.
I also have ready a simple graphical user interface which can be run by everyone after downloading RSI's IDL Virtual Machine, which is free (or without the VM if you already have IDL on your computer).
http://www.rsinc.com/idlvm/
It uses pre-compiled IDL-programs. So, if anybody is interested, you can download the pre-compiled GUI http://www.muk.uni-hannover.de/~theusner/mars/anti_vig.sav.
It looks like this:
Please drop me a line if you use the program or if you find any bugs while using it.
Michael
I have now modified the anti-vignetting program (using IDL) such that it can deal with any kind of jpeg image. Also, it does not use FFT any longer but simply takes a 2D-cosine-shaped mask. That mask is fitted to the chosen image and the amount of vignetting is automatically determined (that can be done RGB-channel specific). It also allows you to save images as 16-bit-tiff so that none of the original information is lost in the output image. You can also do manual adjustments if the amount of vignetting is over- or underestimated by the program (depends on the structure of the image). All that has to be done with each single image and cannot be applied to a finished panorama.
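For illustration, a 2D-cosine-shaped mask of the kind described could look like this (my own Python/NumPy sketch, not the IDL code; how the program actually fits the strength to the image is not shown here):
[code]
# Illustrative 2D-cosine-shaped vignetting mask; NOT the actual IDL code.
# The fitting of `strength` to the image is left out of this sketch.
import numpy as np

def cosine_mask(shape, strength):
    """Mask that is 1.0 in the centre and falls to (1 - strength) in the corners."""
    h, w = shape
    y = np.linspace(-np.pi / 2, np.pi / 2, h)
    x = np.linspace(-np.pi / 2, np.pi / 2, w)
    mask = np.outer(np.cos(y), np.cos(x))
    return 1.0 - strength * (1.0 - mask)

def devignette(channel, strength):
    """Divide one image channel by the mask; strength can differ per RGB channel."""
    return channel.astype(float) / cosine_mask(channel.shape, strength)
[/code]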
Below are two examples (SOL 582).
Without anti-vignetting:
After anti-vignetting:
After anti-vignetting the sky shows a very even color distribution, without these nasty green edges. These are obviously due to different amounts of vignetting in the RGB-channels.
I also tested the program with scanned slides and digital camera images and it works very well. So it does not only work with martian craters but also with terrestrial ones. The image below shows Wolfe Creek meteorite crater in Western Australia. It's got a diameter of 850 m and is about 30-40 meters deep. Like martian craters it is filled with sand and dust (80 meters).
Panorama from three frames.
The only drawback is that I cannot provide a stand-alone version of that program. Though, http://www.muk.uni-hannover.de/~theusner/mars/anti_vig.sav can be run with http://www.rsinc.com/idlvm/index.asp. It's free, but requires registering and is a 115 MB download.
Is there anybody here who's got a license to turn this IDL-program into a stand-alone one?
Michael
WOW!
MichaelT, I think it would be really GREAT to implement your algorithm inside MMB... I would like to know the other Michael's opinion on this item...
MichaelT:
Do you think your anti-vignetting app could be tweaked to sort out the tone variations seen across the Surveyor frames?
Bob Shaw
I'm guessing there's no flatfield or darkfield for the Surveyor stuff (where can you get it all anyway?)
If you took all the Surveyor frames that didn't include the 'sky' and averaged them, you might get an appropriate 'flat field' - which you could then invert and overlay at a few percent opacity on all the other frames.
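In code terms the averaging idea is roughly this (a Python sketch only, not an actual tool from this thread; the Photoshop "invert and overlay at low opacity" trick is replaced here by the calibration-style division by a normalised flat):
[code]
# Rough sketch of building a flat field by averaging frames (not an actual tool
# from this thread). Division by the normalised flat stands in for the Photoshop
# "invert and overlay at a few percent opacity" trick.
import numpy as np
from PIL import Image

def flat_from_average(paths):
    """Average a stack of frames that contain no sky to approximate a flat field."""
    stack = np.stack([np.asarray(Image.open(p).convert("L"), dtype=float) for p in paths])
    flat = stack.mean(axis=0)
    return flat / flat.mean()          # normalise around 1.0

def apply_flat(frame, flat):
    return np.asarray(frame, dtype=float) / flat
[/code]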
Doug
I usually try to make a blank image with my digital camera to record only the vignetting so that it can be subtracted. Here are some pans where it works well. Sometimes it works, sometimes it doesn't. To do this automatically would be great!
Hi,
I put an updated version of the anti-vignetting program online. I corrected some minor errors and it now supports output as 16-bit-TIFF. That way one can do all the adjusting after the anti-vignetting process without losing any information. As usual it is a precompiled IDL-program and can be used with RSI's Virtual Machine (see earlier posts).
You can find the program here: http://www.muk.uni-hannover.de/~theusner/mars/anti_vig.sav
Tman pointed me to the fact that some people are already using that program, and the results really look nice. As I was in the final stages of PhD thesis writing until mid-October, I had not noticed that so far. But now I am done.
Anyway, I would not mind input from the users. Is there something that you would like to be improved?
Bob: Did you scan any images so far?
Michael
Tman,
what I mean to say with those lines is the following:
Anti-vignetting of Navcam images gives around 20% better images, and Pancam images
around 3 or 4% better images (see the percentage after vignetting).
The end result of every image after anti-vignetting with MichaelT's program is much better, as you describe.
Therefore I use the anti-vignetting program for Navcam and Pancam images for every
panoramic view I make.
jvandriel
You're right Jvandriel, it's very helpful for both. If one uses them without automatic brightness/color correction during the stitching process, especially with PTGui's "Photoshop with feather" version, one mostly only has to correct a brightness gradient caused by the sun direction during the shot (besides the correction of exposure). I overcome this remaining gradient with an adjusted mask over each frame.
"Those lines" address my post right here: http://www.unmannedspaceflight.com/index.php?showtopic=1506&view=findpost&p=25066
I don't mind it running in IDL, but I would really like a batch processing tool.
Doug
I just put online Version 2.0, now with a batch processing option ('batch' button). You can do the vignetting settings before choosing the batch option (e.g. with a template image). The settings will then be applied to all the selected images. You can also select vertical parts of the image (e.g. the sky, see 'Help') that will be used to determine the amount of vignetting. That selection will then be used for all batch images.
So far, the images will be generated in the folder where the files are located. That will be changed in a future version. An 'av' is added to the file names to prevent overwriting of the original images.
http://www.muk.uni-hannover.de/~theusner/mars/anti_vig.sav
Let me know if there are any problems.
Michael
Hi Michael,
Tried the batch processing just now with Spirit's latest sol 649 Pancam files, which I used for a pan, and it works very well so far.
Thanks!
Very helpful for remaining the fastest MER-pan-producing forum in the world.
Michael T,
today I used your anti-vignetting program with the batch processing addition (V 2.0) for the first time.
It works great. No problems at all.
jvandriel
I'm starting to use your tool too, looks pretty good !
Thanks
Nico
Hi Michael,
it would indeed be very nice if someone implemented it into MMB. Unfortunately, that won't be me as I do not have experience in Java-programming. I will post the code of my anti-vignetting routine in the next couple of days. So I hope that someone else can transform the code into Java.
Michael
I would be interested in giving it a go. I made a Java attempt at MichaelT's
DD enhancing algorithm before, so I fancy a go at this even if someone else beats me to it
But I think I might need a bit of expansion on parts of the algorithm posted at the start of the thread.
I'm not entirely sure about the different colour spaces.
For example, can you convert directly from greyscale to LAB or must it be in RGB?
I have been looking and have found conversion from RGB to XYZ to LAB but not directly
from RGB to LAB.
>>0. convert the image to LAB color space to handle color and luminance independently
And then
>>6. so, as a last step I create a b-spline (or linear) interpolated version of the
correction curve.
In terms of getting the curve and things I'm a bit lost.
I'll wait for MichaelT to post his routine I think, but I'll give it a go.
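For what it's worth, there is indeed no direct RGB-to-Lab formula; the usual route is RGB -> XYZ -> Lab. A self-contained sketch with the standard sRGB/D65 constants (in Python/NumPy rather than Java, and not taken from anyone's code in this thread):
[code]
# Standard sRGB (D65) -> XYZ -> L*a*b* conversion, as a sketch (not from any code
# in this thread). For a greyscale image R = G = B, so a = b = 0 and only L matters.
import numpy as np

M = np.array([[0.4124, 0.3576, 0.1805],     # sRGB -> XYZ matrix (D65)
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.9505, 1.0, 1.089])      # D65 reference white

def srgb_to_lab(rgb):
    """rgb: array of shape (..., 3) with values in 0..1."""
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)  # undo gamma
    xyz = (lin @ M.T) / WHITE
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
[/code]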
Wow, okay.
Looks like I have my work cut out for me.
Cripes, I'll have to pore over what you have said and your code first before I go
asking more questions. The color space issue is concerning me a little but I'll leave
that until I have a better understanding of what you do.
I have a feeling I will have plenty of questions.
If anyone else out there is thinking of doing this then by all means drop in.
Edit: Just realised the algorithm I was looking at originally wasn't posted by Michael. Muppetesque moment for me. That clears a few things up. Sorry about that.
Don't worry. I have everything under control
Michael, thank you for sharing the code to this application. I've found the program very useful, and look forward to digging into its inner workings a bit deeper.
Jared, since vignetting is a camera effect, I'm pretty sure it would be inappropriate to use this application in Lab space. The L channel is shaped similarly to a cube root function, as it is intended to be a perceptual measure of brightness, and is therefore not linear with photon count like the raw images (well, in a perfect world at least). This means the 'fit' used to determine the amount of vignetting would be even more complicated if you used L instead of the actual pixel brightness through a given filter. The a and b parameters concern themselves only with the ratio between color channels (or in the MERs' case, the raw images), and the only way to know if you have accurately described those would be to use this on the raw images before determining the color at each pixel.
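A quick numerical illustration of that non-linearity (my own snippet, not from the program):
[code]
# Quick illustration of the cube-root non-linearity of L* (my own snippet).
def L_star(Y):
    """CIE 1976 lightness for relative luminance Y in 0..1 (above the linear toe)."""
    return 116.0 * Y ** (1.0 / 3.0) - 16.0

print(L_star(0.5))        # ~76.1
print(0.5 * L_star(1.0))  # 50.0 -- halving the photon count does not halve L*
[/code]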
How long does this program take to run for you guys?
I got it running on my Mac yesterday but it takes 4 - 5 mins on each raw pancam frame! I know it is a complicated procedure and it's running through a virtual machine so it's not going to be instantaneous but my activity monitor says it's only using about 10% of my CPU.
Just wondering...
James
For me it takes about 5 seconds.
My PC system runs a 2.4 GHz Pentium 4.
That's most probably not a calculation speed problem.
Hmm, 5 seconds eh, that's a bit quicker than 5 mins!
Anybody else got this working on a Mac?
no mac, but 3 seconds, although it might well be a bit slower in a batch.
so yes, 5 mins is far too long.
Hope you figure it out.
Nico
I made a minor modification to the program. I don't know whether that will make any difference to the problem that you encountered James. Just try it out and tell me what the result is. Version 2.0.1 can be found here: http://www.muk.uni-hannover.de/~theusner/mars/anti_vig.sav
Michael
Hi, I haven't had much of a chance to do any coding yet.
I just want to clarify something.
MichaelT, about the CONGRID command you use: I have to implement that myself, but
I want to check that all I have to do is take a two-dimensional array and
subsample it down to the smaller size.
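For reference, IDL's CONGRID just resamples an array to a new size - nearest-neighbour by default for 2D arrays, bilinear with /INTERP, as far as I know. A minimal stand-in (a Python/NumPy sketch, not IDL):
[code]
# Minimal stand-in for the default (nearest-neighbour) behaviour of IDL's CONGRID
# on a 2D array; CONGRID can also interpolate (/INTERP), which this sketch skips.
import numpy as np

def congrid_nearest(arr, new_h, new_w):
    h, w = arr.shape
    rows = np.arange(new_h) * h // new_h     # nearest source row for each output row
    cols = np.arange(new_w) * w // new_w     # nearest source column for each output column
    return arr[rows][:, cols]
[/code]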
Hopefully over the weekend I'll get a chance to get something done. After the Ireland v Australia rugby match of course.
Michael,
I just want to say thank you for sharing your software!
As you know, I was already in the process of developing my own anti-vignetting software, implementing a similar algorithm in C,
but with the latest version of your program (with the batch mode!)
there is no need for my own AV-tool any more :-)
The only advantage of a C-based program (that I originally planned to write)
would be speed and the potential to run without an additional virtual machine ... but I have downloaded and installed the IDL and it's fast enough, without hassle, and really no problem at all ... Plus: your program has a fine graphical user interface whereas most of my C-based tools are rather cryptic and mostly command-line driven ... although I would really miss the batch capabilities ...
So I think your latest batch-enabled version of AntiVig will definitely become the "gold standard" of Anti-Vignette processing in the MER imaging community
Thanks again !
MichaelT,
I just downloaded Anti_Vig 2.0.1, but when I try to use it I get the following message:
"Attempt to call undefined procedure/function: Anti_Vig 2.0.1 ".
Version 2.0 works fine.
jvandriel
Hi Jvandriel,
I guess you renamed it to "Anti_Vig 2.0.1", right?
But it's necessary to keep the program's name as "anti_vig.sav".
I tried that already too.
Would you like me to include the version number in the name of the file? Wouldn't be a problem. So you could keep all the old versions in one folder and use them independently.
Michael
Hi Michael, our replies just overlapped.
Regarding the version number in the name, I would like it. Then one can leave the previous version in the same folder - I know it is not necessary, but I always keep the previous one to be safe.
MichaelT and Tman,
thanks for the advice. I did indeed change the name.
Now I have downloaded Version 2.0.2 and it works perfectly. (Without changing the name, of course.)
MichaelT,
is it possible for you to make the following modification to your program?
Currently, in batch processing mode, every image is calculated and saved.
It is not possible to individually change the brightness of an image. That must be done later by hand.
For example,
in batch mode: after calculating an image the batch mode halts, you change the brightness by hand and OK it, the batch mode continues and saves the image, calculates the next image and halts for the next brightness change, and so on until all the images are done.
Sorry to give you more work.
jvandriel
After a long, long time I am currently developing a new version of my anti vignetting tool, version 3. Again, it is based on IDL and can be used with RSI's Virtual Machine. See http://www.unmannedspaceflight.com/index.php?s=&showtopic=1306&view=findpost&p=18469.
It is not complete yet. The "save file" buttons are not fully functional so far (you can't save). But I want to give you a preview of what to expect during the next weeks. You can already play around and test it and tell me about any bugs that you find.
The program is now faster due to a much improved and simpler de-vignetting routine. Also, you can do more adjustments than in the previous versions.
Other improvements:
- You can open as many files as you like and change between them during processing (pull-down-menu).
- A '+' or '-' before each file name indicates the processing state (with or without de-vignetting).
- After processing simply choose "save all" and the program will automatically save all the files in folders designated by you.
- "Save all" will open a window with a list of all files and their current state of processing. It is planned to enable you to set output-folders for each file or copy a certain output-folder-name to a selection of files. After hitting "save", all currently processed images will be saved one after another. This could take some time as the program does not keep the whole file in its memory but just a smaller version. So it will read each file again, do the de-vignetting and save it to the destination folder.
- Like in the previous versions you can choose either jpeg or 16-bit-tiff output (or both). Do you need any other file types (png)?
- You can draw an area in the image window which will be used for the de-vignetting (by clicking with the left mouse button, set up to 64 points). Clicking close to an already present line will insert a new point. These points can be relocated by holding the left mouse button and moving the mouse. You can delete them by double-clicking (except the one in the lower left corner).
- The selected area can be copied from one image to another.
- The de-vignetting of anaglyphs also removes the green/blue - red gradient which often causes dark borders between images even though the vignetting has been removed.
- You can also adjust the brightness of the image (shift the whole color range or with a multiplicative factor) and the amount of vignetting.
- Clicking and holding with the right mouse button will show the unprocessed image if you have already processed it. This click will also select a line (x direction) the brightness values of which will be shown in the left hand window when changing the brightness and de-vignetting settings. That way you can optimally adjust the a-vig values.
- The histogram of the image is also shown after each change. Even though the image is shown with a pixel value cut-off of 255, the actual histogram is shown in the histogram window. Saving the file as 16-bit-Tiff will retain even those pixels which are brighter than 255.
- Moving the mouse cursor over a button will show you its function (help button is still missing).
- You can choose the type of the image (gray-scale, anaglyph, RGB) manually, though it is determined automatically when loading it. Changing the type can have the benefit that you can move the sliders for the de-vignetting and brightness adjustment either independently or together. Before saving you should set it back to the actual type.
Something I also want to implement is output of the de-vignetting or vignetting mask.
There are certainly things that I forgot to mention. Simply try it out or ask me.
So now is the time to tell me your wishes or worries, or both.
The program is located here: http://www.muk.uni-hannover.de/~theusner/mars/anti_vig_n.sav
Like in the previous versions, you must not, under any circumstances, change its name.
Have fun playing around with it!
Michael
Hi Michael,
I have been interested in your anti-vignetting program for some time -- but have resisted the HUGE (118 MB) download of the IDL 6.3 VM.
Well, I just downloaded the VM and your new version -- but before I run the VM install I want to understand the implications.
How does the IDL VM interact with the O/S -- specifically Windows 2000?
Requirements? I am running a Pentium 4 @ 1.5 GHz with 768 MB of memory. I am a little short of disk space on C (down to my last few GB) but have about 100 GB free on D.
Looking forward to producing some "professional looking" panoramas utilizing your tool.
color by horticolor: more real than real.
Doesn't look like Michael is monitoring this thread anymore.
I tried antivig 2.0.2 -- worked great!
Here is my first antivig panorama:
http://www.flickr.com/photos/hortonheardawho/165067263/
tried the new version -- looks like some interesting features -- but the saves -- don't.
Anyway, I will use the tool for future pans -- including the McMurdo pan...
Thanks Michael -- wherever you are.
Very cool work, Michael.
I know in a mathematically ideal camera, vignetting is cosine**4. But I think the geometry of real lenses is a lot more complex. Here is a paper I wrote with some friends on modeling lens effects: http://www.mentallandscape.com/Papers_siggraph95.pdf
Where does cosine to the 4th power come from? You get one cosine from the foreshortening of the lens, one from foreshortening of the element of area on the film, and two more from the 1/r**2 distance effect between the lens and locations on the film.
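Put as a small worked example (my own illustrative snippet, not from any program here): for an ideal thin lens, a point at field angle theta receives cos(theta)^4 of the on-axis illumination.
[code]
# Ideal cos^4 falloff for a thin-lens camera (illustrative snippet only).
import numpy as np

def cos4_falloff(x_mm, y_mm, focal_length_mm):
    """Relative illumination at a point (x, y) on the sensor, measured from the optical axis."""
    theta = np.arctan(np.hypot(x_mm, y_mm) / focal_length_mm)
    return np.cos(theta) ** 4

# e.g. a point 4 mm off-axis with an 8 mm focal length:
print(cos4_falloff(4.0, 0.0, 8.0))   # ~0.64, i.e. about 36% darker than the centre
[/code]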
For stitching panoramas, I've used a lot of different software (Autostitcher, Panorama Factory, etc), including just doing it by hand in Photoshop. The best tool I've found, by a long shot, is the stitcher in Microsoft Digital Image Suite 2006. It's the only function I used that program for; I barely know what else it does. This is software developed by the computer vision researcher Rick Szeliski.
Szeliski's algorithm is very general, and combines images with arbitrary projective transformations. Panning, tilting, zooming, even moving the camera location (modulo visibility changes then). Nothing else seems to handle all of these variables.
If they measured the camera response, then that is better than theoretical models. The cosine**4 model is exactly correct if the lens is a thin disk. But a real camera lens is compound, a cylinder packed with simple lenses. So in addition to the ideal cos**4 effect, there are complex geometrical effects. I would trust the formula they published for MER.
Is this anti-vignetting program specifically for MER photos? I had no idea.
You see, I have an old two-CD-ROM program called "Voyage to the Outer Planets" that includes "Voyager's Greatest Hits," a sample of original IMG files from the Grand Tour. They're the original 800 x 800 complete with reseau marks, and the vignetting is AWFUL. All the images get lighter, not darker, as you get near the edge. It makes homemade stitching look terrible. (Apparently you're supposed to correct them against something called "flatfield" images, but there's nothing like that on the CD. How do you do that?)
I tried the old version on a borrowed XP machine with some Jupiter pictures. It seemed to work OK, except for the extreme corners. Would I really need a corrected version for Voyager images? Or should all the images be temporarily made into negatives, so the edges do go darker instead of lighter?
Thanks.
The Voyager (Mariner, Viking) cameras have other problems besides vignetting that sort of look like vignetting. The detectors were vidicon TUBES... cylindrical electron tubes with a window at one end and a photocathode light-sensitive target just behind the window. In general, image quality was good in the central portions of the target and various artifacts increased toward the edges .... the edges of the CIRCULAR target. The corners of the view were closest to those edges.
I suspect that the artifacts are a combination of manufacturing irregularities of the target increasing toward the outer edge and imperfections in the readout electron beam scanning, focusing, intensity, and other quality measures near the edge of the scannable area. At any rate, there is more artifact structure and contrast in the dark exposure images and the flatfield bright exposure images in the corners of these vidicon images than in the central portions. These can include reverse-vignetting-like corner brightening and whatever.
Sorry for the delay. My website requires me to upload files one at a time.
(Warning: lots of links follow.)
I was trying to hand-stitch two Voyager Jupiter images: C1631753 and C1631755. Here's what it looks like if you just use the uncorrected images:
Another problem is that Voyager images exhibit a large amount of geometric distortion. Correcting this would definitely help with your stitching.
jrdahlman, these websites may come in handy for you:
http://members.optusnet.com.au/serpentis/ - For viewing and processing Voyager images
http://pds-rings.seti.org/catalog/vgriss.html - Nearly complete catalog of Voyager images, including calibration images.
Good luck!
Oh, I know that you can download all the images. I just have my hands full with one CD, focussing on one planet! (That CD's been sitting here for years--I should do something with it.)
I'm really just playing around. I don't expect to find anything "new."
Downloaded yet another program. If it doesn't gasp and die in my little Libretto, I'll try it out.
By the way, HOW is it geometrically distorted? Fisheye-lens curving?
Playing with Voyager 1 wide angle images, I found I could make dark-field images that essentially perfectly canceled out dark-exposure structure and shading in images that had the same parameters.
I found to my distinct horror that there are image sets with apparently identical "relevant" parameters (readout rate, etc) that have different dark-exposure structure and needed separate calibration files.
Playing with Voyager 1 wide-angle frames of Titan, I found that on a distinctly overexposed Titan image (exposed for limb or terminator structure), as you approached the moon's disk, there were image displacement fringes on the reseau marks (black dots) as the electrical charge loss in the area of the overexposed moon's image DEFLECTED the image readout electron beam during image readout!
Decalibrating vidicon images is a Sisyphus-level task.
I would like to point out that the program at um3k's first link above:
http://pds-rings.seti.org/catalog/vgriss.html
is the program for getting rid of the reseau marks and for correcting the geometric distortion. I've played with it for a few days and it works fine even on my little Libretto computer. It has special settings for Voyager or Viking orbiter images, and seems to have almost as many complicated controls as Photoshop! (I imagine.)
The only thing is that it doesn't automatically do the flat-fielding--special flat-field images must be loaded into it. But it has separate settings for "flat-field" and "dark current"--I didn't even know there was a difference.
Dark current images are those taken with zero exposure. They actually are of two types: ones taken with zero exposure, and ones taken with a time exposure of a zero-brightness scene, then read out at slower than the maximum possible speed. As a camera's detector "sits" before readout, it accumulates (or a vidicon loses) charge, and the blank image changes with exposure time and readout settings.
A flat-field image is one taken of a uniformly illuminated scene, to calibrate for camera and lens response variations AFTER a perfectly matched dark image is subtracted. You then divide data images by the flat-field to get hopefully clean decalibrated images.
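In code terms the whole decalibration step is just this (a generic sketch, not any particular mission's pipeline):
[code]
# Generic dark-subtraction and flat-fielding, as described above
# (a sketch, not any particular mission's calibration pipeline).
import numpy as np

def calibrate(raw, dark, flat_raw, flat_dark):
    """All inputs are float arrays taken with matching exposure/readout settings."""
    flat = flat_raw - flat_dark        # dark-correct the flat-field exposure
    flat = flat / flat.mean()          # normalise so the average gain is 1.0
    return (raw - dark) / flat         # subtract the matched dark, divide by the flat
[/code]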