MSL Images & Cameras, technical discussions of images, image processing and cameras
Post #1 - Aug 16 2012, 11:05 PM
Senior Member (Member No. 116), Marble Falls, Texas, USA
I'm still trying to figure out a number of things about the new images we are trying to work with. Assuming others are likewise trying to learn, I thought I would open this thread to create a place for such discussions.

I'd like to start out with a comment about raw image contrast. There have been several postings in the main threads about whether or not the MSL raw images have been stretched like those from the MER missions. I am certainly no expert on this, but it looks to me as if the MSL images have not been stretched at all. I haven't tried to analyze all of the image types, but the hazcams and navcams have pixel brightness histograms that are very different from their MER counterparts. The attached image compares MER and MSL navcams along with their luminosity histograms. The MSL images clearly do not use the entire available range of brightness values, whereas the MER raws do. For this reason, the MSL raw images can usually be nicely enhanced by simply stretching the distribution of brightness across the full 256-value range.
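For anyone who wants to try that stretch themselves, here's a minimal sketch in Python, assuming NumPy and Pillow (my tool choices, nothing the mission uses):

```python
import numpy as np
from PIL import Image

def stretch_full_range(path_in, path_out):
    """Linearly remap an 8-bit grayscale image so its darkest pixel
    maps to 0 and its brightest to 255."""
    img = np.asarray(Image.open(path_in).convert("L"), dtype=np.float32)
    lo, hi = img.min(), img.max()
    if hi > lo:  # guard against a completely flat image
        img = (img - lo) * (255.0 / (hi - lo))
    Image.fromarray(img.astype(np.uint8)).save(path_out)
```

For color products you'd probably want to stretch a luminosity channel rather than each band independently, to avoid shifting the colors.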
--------------------
...Tom
I'm not a Space Fan, I'm a Space Exploration Enthusiast.
Post #2 - Aug 26 2012, 05:17 PM
Senior Member (Member No. 152)
I think what Art's suggesting is adjusting the apparent relief while keeping the viewing size and distance constant. I could imagine doing that, for example by morphing one frame partway toward the other to reduce relief, but that would be hard and would involve some degree of faking for the intermediate viewpoints.

For now, ngunn's approach can at least help reduce exaggerated relief.
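As a rough illustration of that partway morph, here's a minimal sketch in Python. It assumes a per-pixel disparity map is already in hand (measuring one is sketched in the next post); the function name and the simple forward-warp are my own invention, not any existing tool's method:

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Forward-warp `left` by a fraction `alpha` of its per-pixel
    horizontal disparity, approximating a camera position between
    the two original viewpoints (alpha=0 leaves it unchanged)."""
    h, w = left.shape
    out = np.zeros_like(left)
    cols = np.arange(w)
    for row in range(h):
        shift = (alpha * disparity[row]).astype(int)
        new_cols = np.clip(cols - shift, 0, w - 1)
        out[row, new_cols] = left[row, cols]
    return out  # occlusions leave black holes; real software would fill them
```

Rebuilding the anaglyph from one original and this derived frame should reduce the apparent relief roughly in proportion to alpha.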
Post #3 - Aug 26 2012, 08:09 PM
Member (Member No. 2455)
Yes, that's exactly what I was wondering about. It would very much involve faking one of the images based on an analysis of an anaglyph created with the wider separation of views, and then rebuilding the anaglyph with one original and the "faked" image. I guess "derived image" would be a more PC term, much like when smoothing a video shot at a low frame rate and having the computer generate the intermediate frames for a standard video frame rate based on a best guess of how motion and scaling would occur in each frame.

When I've created anaglyphs in the past, the two original images are lined up vertically first, and then the images are aligned horizontally for the most comfortable view. This results in a blue-tinted and a red-tinted image combining both of the originals. When you view it without the glasses, you can see distinct blue and red tinted objects: close-up ones have a lot of horizontal distance between those objects, while far-away ones have very little distance, or they're essentially right on top of one another. When the left and right images are taken, say, hundreds of miles apart, those distances get very exaggerated when viewed by human eyes.

What the program would do is figure out how much each pixel in, say, the right image is offset to the side from its corresponding pixel in the left image, and bring it back closer in the proportion between a standard eye viewing angle and the actual image angles. I'd think that would be fairly easy to do on a long-distance aerial shot but very tough on something close up, because objects could block other ones from left to right.

So I guess this is a challenge to the programmers who have written the wonderful anaglyph software out there, which pretty much assumes the original shots are taken at standard eye separation. They're already doing the processing when they build the final red/blue image; you'd just need one more parameter for the distance between the original images. Instead of simply combining the shots together, the blue portion would be derived and then combined.
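Measuring that per-pixel offset is the classic stereo-matching problem. A minimal sketch using OpenCV's block matcher (my tool choice; the post doesn't name a program) might look like this:

```python
import cv2

# Load an aligned (vertically registered) stereo pair as grayscale
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize is the matching window size
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)

# compute() returns fixed-point values with 4 fractional bits, hence the /16
disparity = matcher.compute(left, right).astype("float32") / 16.0
```

Scaling this disparity map before warping (as in the sketch in the previous post) would be exactly the extra parameter described above.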
One advantage of having this feature would be that you could also intentionally exaggerate the relief by creating the derived image at a much wider separation than it was originally shot, to more readily spot depressions and things jutting up.
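Once a derived frame exists, composing the final red/cyan image is the easy part. A minimal sketch, again assuming NumPy and Pillow: the red channel comes from the left image and the green/blue channels from the right (or derived) image.

```python
import numpy as np
from PIL import Image

def make_anaglyph(left_path, right_path, out_path):
    """Build a red/cyan anaglyph from an aligned grayscale pair:
    red channel from the left image, green and blue from the right."""
    left = np.asarray(Image.open(left_path).convert("L"))
    right = np.asarray(Image.open(right_path).convert("L"))
    rgb = np.dstack([left, right, right])  # R from left, G and B from right
    Image.fromarray(rgb, "RGB").save(out_path)
```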