Vignetting, discussion about methods of resolution
Aug 24 2005, 07:37 AM
Post #1
Member Group: Members Posts: 877 Joined: 7-March 05 From: Switzerland Member No.: 186
Hi Nirgal and all, I would like to discuss vignetting and methods of correcting it. I'm mainly interested in mathematical methods that could automatically calculate and adjust the right grey value for each pixel in a single picture.
My current method uses more or less transparent layers over the original picture, which can roughly balance the grey values. A perfect layer would have to be the exact inverse of each picture's brightness falloff. This method is very effective if you get the correct inverted values. Shots of the Mars sky come close to providing such a perfect mask, but not always. And sadly, the center of the pictures loses much of its original brightness/luminance.

I have in mind a mathematical method that could adjust each grey value in a picture to obtain a completely balanced brightness over the entire image, but I'm not in a position to work it out myself. I only know that one has to start with the grey values at the center of the picture; the center holds what are effectively the reference values for the whole image, if I'm correct. Is there a mathematical method to get (roughly) the same brightness and luminance as in the center over the whole of a MER picture?

Greetings, Peter
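Peter's transparent-layer approach is essentially flat-field correction: dividing the image by a reference shot of a uniformly bright target (such as featureless sky), which captures the lens falloff. A minimal sketch in Python/NumPy, assuming such a reference image is available — the helper name and toy numbers are illustrative, not part of any MER pipeline:

```python
import numpy as np

def correct_vignetting(image, flat):
    """Divide out a vignetting pattern using a flat-field reference.

    `image` and `flat` are 2-D float arrays; `flat` is a shot of a
    uniformly bright target that shows only the lens falloff.
    """
    # Normalize the flat so the (brightest) center keeps its value,
    # and guard against division by zero at very dark pixels.
    gain = flat.max() / np.clip(flat, 1e-6, None)
    return np.clip(image * gain, 0.0, 1.0)

# Toy example: a 5x5 scene of uniform brightness 0.5, seen through
# a radial falloff that the flat-field reference also records.
yy, xx = np.mgrid[-2:3, -2:3]
falloff = 1.0 - 0.05 * (xx**2 + yy**2)
image = 0.5 * falloff          # uniform scene darkened toward edges
flat = falloff                 # reference shot of a uniform target
fixed = correct_vignetting(image, flat)
# `fixed` is uniform again: every pixel recovers the value 0.5.
```

The division recovers the center brightness everywhere, which addresses Peter's complaint that his additive masks darken the center.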
Guest_DonPMitchell_* |
Jun 12 2006, 06:07 PM
Post #2
Guests
Very cool work, Michael.
I know that in a mathematically ideal camera, vignetting goes as cosine**4, but the geometry of real lenses is a lot more complex. Here is a paper I wrote with some friends on modeling lens effects: Camera Models.

Where does cosine to the 4th power come from? You get one cosine from the foreshortening of the lens, one from the foreshortening of the element of area on the film, and two more from the 1/r**2 distance effect between the lens and locations on the film.

For stitching panoramas, I've used a lot of different software (Autostitcher, Panorama Factory, etc.), including just doing it by hand in Photoshop. The best tool I've found, by a long shot, is the stitcher in Microsoft Digital Image Suite 2006; it's the only function I use that program for, and I barely know what else it does. This is software developed by the computer vision researcher Rick Szeliski. Szeliski's algorithm is very general and combines images with arbitrary projective transformations: panning, tilting, zooming, even moving the camera location (modulo visibility changes). Nothing else seems to handle all of these variables.
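The ideal cos**4 falloff described above is easy to evaluate per pixel once the focal length is known: the angle theta between the optical axis and the ray to an image point (x, y) satisfies cos(theta) = f / sqrt(x² + y² + f²). A small sketch under that thin-lens assumption (not a model of any specific camera):

```python
import numpy as np

def cos4_falloff(x, y, focal_length):
    """Ideal thin-lens vignetting: relative irradiance cos(theta)**4,
    where theta is the angle between the optical axis and the ray to
    image point (x, y). Coordinates and focal_length share units."""
    r2 = x**2 + y**2
    cos_theta = focal_length / np.sqrt(r2 + focal_length**2)
    return cos_theta**4

on_axis = cos4_falloff(0.0, 0.0, 1.0)   # no falloff on the axis: 1.0
edge = cos4_falloff(1.0, 0.0, 1.0)      # 45-degree ray: about 0.25
```

As Don notes, real compound lenses deviate from this; the formula is only the geometric baseline.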
Jun 14 2006, 05:06 PM
Post #3
Member Group: Members Posts: 156 Joined: 18-March 05 From: Germany Member No.: 211
Thanks DonPMitchell! That's a very interesting publication. When I started this project a couple of months ago, I only had a document which described the MER imaging systems (I can't find it anymore now...). It included the response function (as a plot) of the sensor + optics when imaging a surface of uniform brightness. It looked pretty much like a + b r**2 (with b negative) to me, so I tried it out. It worked very well, and I found only minor/negligible deviations. So obviously, for the MER lens systems (Pancam and Navcam), the shape of cosine**4(theta) can be fitted very well by a function of the form f(r) = a + b r**2. Some time I will try out the formula that you gave in your paper.

The good thing is that the MER imaging systems have a known, fixed f-stop. The sensor size and focal length are known, too. So from the pixel location you could fairly easily compute theta and obtain E(x'), or the reverse. There are some difficulties, though: the histograms of the recent images are always clipped and stretched by unknown amounts by some automatic processes. So in the end you'd have something like E(x') = a + b[L...]. a and b would have to be determined by fitting, am I right?

Michael
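On the fitting question: because the model a + b r**2 is linear in both parameters, a and b can be recovered from sampled brightness values by ordinary linear least squares. A minimal sketch with synthetic samples (the numeric values are illustrative only, not MER calibration data):

```python
import numpy as np

# Synthetic brightness samples of a uniform scene seen through an
# a + b*r**2 falloff (hypothetical coefficients, for illustration).
r = np.linspace(0.0, 1.0, 50)
true_a, true_b = 1.0, -0.3
samples = true_a + true_b * r**2

# Linear least squares: design matrix with columns [1, r**2].
A = np.column_stack([np.ones_like(r), r**2])
(a_fit, b_fit), *_ = np.linalg.lstsq(A, samples, rcond=None)
# a_fit and b_fit recover the generating coefficients.
```

With noisy, clipped, and stretched histograms, the same fit still applies, but clipped pixels should be masked out of `samples` first, since they no longer follow the model.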
Guest_DonPMitchell_* |
Jun 14 2006, 05:42 PM
Post #4
Guests
If they measured the camera response, then that is better than any theoretical model. The cosine**4 model is exactly correct if the lens is a thin disk, but a real camera lens is compound: a cylinder packed with simple lenses. So in addition to the ideal cos**4 effect, there are complex geometrical effects. I would trust the formula they published for MER.