MSL Images & Cameras
CosmicRocker
I'm still trying to figure out a number of things about the new images we are trying to work with. Assuming others are likewise trying to learn, I thought I would open this thread to create a place for such discussions.

I'd like to start out with a comment about raw image contrast. There have been several postings in the main threads about whether or not the MSL raw images have been stretched like those from the MER missions. I am certainly no expert on this, but it looks to me as if the MSL images have not been stretched at all. I haven't tried to analyze all of the image types, but the hazcams and navcams have pixel brightness histograms that are very different from their MER counterparts.

This attached image compares MER and MSL navcams along with their luminosity histograms.
Click to view attachment

The MSL images clearly are not using the entire available range of brightness values, whereas the MER raws do. For this reason, the MSL raw images can usually be nicely enhanced simply by stretching the distribution of brightness across the full 256-value range.
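For anyone who wants to try this at home, here is a minimal sketch of that kind of linear stretch in Python (numpy plus Pillow are assumed, and the filenames are just placeholders; a percentile-based stretch would be more robust against hot pixels):

CODE
import numpy as np
from PIL import Image

# Load a raw frame as an 8-bit grayscale array (placeholder filename).
img = np.asarray(Image.open("navcam_raw.jpg").convert("L"), dtype=np.float32)

# Linearly rescale the occupied brightness range to the full 0-255 range.
lo, hi = img.min(), img.max()
stretched = (img - lo) * (255.0 / max(hi - lo, 1.0))

Image.fromarray(stretched.astype(np.uint8)).save("navcam_stretched.png")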
fredk
I've noticed the same thing. It means that for some of these images we're effectively getting 7-bit data. But on the other hand, the MSL images don't seem to lose detail in the whites the way MER images do.

I don't know if the MSL images have had any nonlinear pixel value transformations applied, such as a logarithmic lookup table. I am curious what the images would look like with pixel values linearly related to the true scene intensity, and with the zero point correct.
CosmicRocker
I'm curious about something I am seeing in the Mastcam color images (M-34) and hope someone here can explain it. I'm not sure if this is the correct term, but I am seeing what appears to be color banding in most of the color Mastcams. It is more apparent in some than in others, and if you split the color image into its red, green, and blue channels it becomes even more apparent.

In the attached example I have put the red channel greyscale alongside the color image. I have contrast stretched both images to make the banding effect more apparent.
Click to view attachment

My best guess is that this may be caused by the raw images having had their bit depth reduced at some point. Or, could this be caused by some kind of image compression?
um3k
That's the JPEG compression having an aneurysm. smile.gif
Cargo Cult
QUOTE (um3k @ Aug 18 2012, 07:00 AM) *
That's the JPEG compression having an aneurysm. smile.gif

Something I've been wondering about - are the Mastcam, MARDI and MAHLI JPEG images being recompressed on Earth, or is this JPEG data originally produced on Mars?

(I read somewhere on these fine forums that the Navcams use some fancy wavelet compression, but the colour cameras use good old JPEG. I'd love it if some of the raw images were literally that - identical data to that produced on another planet. Being repackaged and recompressed into a more web-friendly form is much more likely, alas...)

Pointlessly, I did check an image for EXIF tags, just in case - unsurprisingly there's nothing exciting.

CODE
Spiral:Desktop afoster$ jhead 0003ML0000124000E1_DXXX.jpg
File name : 0003ML0000124000E1_DXXX.jpg
File size : 132864 bytes
File date : 2012:08:18 15:35:06
Resolution : 1200 x 1200
Comment : NASA/JPL-Caltech/Malin Space Science Systems


Transmitting data like the manufacturer's name, camera serial number, focal length and whatever, over and over again, could be deemed an unnecessary use of interplanetary bandwidth? Shame. I imagine useful metadata takes a different route. wink.gif
um3k
QUOTE (Cargo Cult @ Aug 18 2012, 06:45 PM) *
Something I've been wondering about - are the Mastcam, MARDI and MAHLI JPEG images being recompressed on Earth, or is this JPEG data originally produced on Mars?

At least some of the images are compressed in-rover, but I suspect these glaring artifacts are the result of recompression for the web, especially since the data is almost certainly transmitted as a grayscale image with an intact Bayer pattern. It'd be rather silly to debayer it on Mars and triple the amount of data that needs to be sent.
ugordan
QUOTE (um3k @ Aug 19 2012, 12:50 AM) *
It'd be rather silly to debayer it on Mars and triple the amount of data that needs sent.

... except that's pretty much exactly what the cameras do onboard: the image is de-Bayered and then JPEG-compressed, as that's the lossy compression scheme of choice. Probably every single color image returned so far used this approach. It's not 3x the amount of data because it's compressed in a lossy way and the chrominance channels are subsampled. In fact, this approach of exploiting the visual similarity between what would otherwise be three similar b/w images encoded separately might by itself reduce the bits-per-pixel requirement for a given image quality, even with no chrominance subsampling.
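To get a feel for the effect, here is a rough sketch using Pillow (filenames are placeholders, and the exact sizes will vary a lot with content and quality settings) comparing one chroma-subsampled color JPEG against the three channels compressed separately as grayscale JPEGs:

CODE
import os
from PIL import Image

# Placeholder filename: any de-Bayered color frame will do.
rgb = Image.open("mastcam_color.jpg").convert("RGB")

# One color JPEG with 4:2:0 chroma subsampling (Pillow: subsampling=2).
rgb.save("color_420.jpg", quality=85, subsampling=2)

# The same data as three separate grayscale JPEGs.
total = 0
for i, band in enumerate(rgb.split()):
    name = "band_%d.jpg" % i
    band.save(name, quality=85)
    total += os.path.getsize(name)

print("color 4:2:0 JPEG:", os.path.getsize("color_420.jpg"), "bytes")
print("3 x grayscale JPEG:", total, "bytes")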
um3k
I suppose that sort of makes sense. It makes the photographer in me cringe, and explains the sub-par demosaicing, but I understand the logic behind it. As someone who frequently deals with video encoding, I'm well aware of the relative compression efficiency of redundant data.
mcaplinger
QUOTE (um3k @ Aug 18 2012, 04:16 PM) *
I suppose that sort of makes sense. It makes the photographer in me cringe, and explains the sub-par demosaicing, but I understand the logic behind it.

Gee, thanks for the ringing endorsement. smile.gif ugordan has it all correct. A complete description of MMM compression can be found in the Space Science Reviews paper on MAHLI -- http://rd.springer.com/article/10.1007/s11214-012-9910-4 -- see section 7.5 "Image Compression".

We can return uninterpolated frames if we want to pay the downlink volume penalty.

I haven't verified this but I suspect that the web release images are going through a decompress/recompress cycle.
um3k
I mean no offense, in fact I greatly admire you and the rest of the MSSS team. You've (collectively and individually) accomplished some amazing things. I just have a rather negative gut reaction to lossy compression. tongue.gif

EDIT: Also, about the demosaicing: I'm quite certain you know what you're doing. I'm referring to the anomalous horizontal lines in the MARDI images, which I assume (perhaps mistakenly) have to do with the demosaicing, the quality of which I (again) assume is limited by the limited processing power/time of the rover. I also fully realize that more complex, non-linear interpolation algorithms aren't very conducive to scientific analysis. Perhaps my excessive assumptions are correlated with the fact that the MSSS team has cameras in space, while I have a desk cluttered with scavenged optics and a spectrograph made of LEGO bricks and a mutilated DVD-R. tongue.gif
Winston
The signal to noise ratio and information content of this forum is outstanding. Let me contribute a small bit by providing links to technical documents about some of the MSL cameras, documents I found in the process of researching the MSL's computer system and internal network (about which I found virtually nothing):

The Mars Science Laboratory Engineering Cameras
http://www-robotics.jpl.nasa.gov/publicati...ne/fulltext.pdf

THE MARS SCIENCE LABORATORY (MSL) NAVIGATION CAMERAS (NAVCAMS)
http://www.lpi.usra.edu/meetings/lpsc2011/pdf/2738.pdf

THE MARS SCIENCE LABORATORY (MSL) HAZARD AVOIDANCE CAMERAS (HAZCAMS)
http://www.lpi.usra.edu/meetings/lpsc2012/pdf/2828.pdf

THE MARS SCIENCE LABORATORY (MSL) MARS DESCENT IMAGER (MARDI) FLIGHT INSTRUMENT
http://www.lpi.usra.edu/meetings/lpsc2009/pdf/1199.pdf

THE MARS SCIENCE LABORATORY (MSL) MARS HAND LENS IMAGER (MAHLI) FLIGHT INSTRUMENT
http://www.lpi.usra.edu/meetings/lpsc2009/pdf/1197.pdf

----------
Some interesting heat shield documents:

MEDLI System Design Review Project Overview
http://www.mrc.uidaho.edu/~atkinson/Senior...ct_Overview.pdf

Advances in Thermal Protection System Instrumentation for Atmospheric Entry Missions
http://www.mrc.uidaho.edu/~atkinson/ECE591...ntations/Fu.ppt

A relatively short but very interesting document about the engineering challenges of landing on Mars which discusses the advantages and disadvantages of the various possible methods:
http://www.engineeringchallenges.org/cms/7126/7622.aspx

As I said, I found virtually nothing about the Mars Compute Element (MCE) and the network(s) used within the lander to control MSL hardware (anyone know a good source, the more technical the better?), but I did find a tiny bit within this document starting on p41 where the electronics architecture is discussed. The bus used is redundant 2Mbps RS-422. The SAM uses BASIC keywords for its command language!:

The Sample Analysis at Mars Investigation and Instrument Suite
http://www.springerlink.com/content/p26510...08/fulltext.pdf

Excerpt:
The (SAM) CDH (Command and Data Handling) module (Fig.16) includes a number of functions. The CPU is the Coldfire CF5208 running at 20 MHz. The CDH (module) communicates with the Rover via redundant 2 Mbps high speed RS-422 serial bus along with a discrete interface (NMI).

In the SAM software description starting on p47, I found this interesting tidbit:
SAM’s command system is a radical departure from prior spaceflight instrumentation. SAM is a BASIC interpreter. Its native command language encompasses the complete set of BASIC language constructs—FOR-NEXT, DO-WHILE, IF-ENDIF, GOTO and GOSUB—as well as a large set of unique built-in commands to perform all the specific functions necessary to configure and operate the instrument in all its possible modes.

SAM’s commands, which are BASIC commands with SAM-specific commands built in, are transmitted in readable ASCII text. This eliminates the need for a binary translation layer within the instrument command flow, and makes it possible for operators to directly verify the commands being transmitted. There are more than 200 commands built in to the SAM BASIC script language.


----------
And even though this is just a NASA Press Kit, it is satisfyingly detailed technically on various MSL systems:
Mars Science Laboratory Landing
http://mars.jpl.nasa.gov/msl/news/pdfs/MSLLanding.pdf

And just for the heck of it, here's NASA's Viking Press Kit. How very far we have come, even with press kits:
http://mars.jpl.nasa.gov/msl/newsroom/presskits/viking.pdf
nprev
Welcome, Winston, and thanks for a terrific first post! smile.gif We may archive some of those links in the MSL FAQ thread; much appreciated!
mcaplinger
QUOTE (Holger Isenberg @ Aug 24 2012, 11:48 AM) *
Now the question is: which is the actual transmission curve of the IR cutoff filter? The one (black dotted line) in the 2541.pdf or the one (thick dark green line) on the JPL Mastcam web page?

The plot in 2541.pdf is not a transmission curve, it's a response curve and thus modulated by the detector QE. The JPL web page looks pretty close.
Holger Isenberg
QUOTE (mcaplinger @ Aug 24 2012, 10:02 PM) *
The plot in 2541.pdf is not a transmission curve, it's a response curve and thus modulated by the detector QE. The JPL web page looks pretty close.


Thanks for pointing that out! So the overall effect is like adding a strong UV filter to a normal digital camera, like an LP 420, which actually helps reduce visual haze. However, what then explains the decrease of the blue response maximum at 470 nm to almost 30%, with the green response as the 100% reference, on the black dotted line in 2541.pdf?
mcaplinger
QUOTE (Holger Isenberg @ Aug 24 2012, 01:30 PM) *
However, what then explains the decrease of the blue response maximum at 470 nm to almost 30%, with the green response as the 100% reference, on the black dotted line in 2541.pdf?

I'm not sure what the dashed line is really supposed to mean. The IR-cutoff response is shown in blue, green, and red, but all the filters have been normalized to their peaks, so you can't, for example, compare the R2/L2 narrowband response to the IR-cutoff blue response, or the IR-cutoff blue to the red. This plot was just intended to give an outline of where the bandpasses are.
ugordan
Just for fun, I tried to implement an adaptive correction for the JPEGged (ugh!) raw Bayer images, to get rid of artifacts in image areas that are smooth in appearance. The artifacts come from the JPEG algorithm trashing the Bayer pattern, introducing this kind of pattern:

Click to view attachment

After correcting for that, the smooth areas became smoother, as illustrated by this comparison, although obviously this approach can't ever come close to an image already returned from the spacecraft in de-Bayered, color form:

Click to view attachment

I had to make the algorithm adaptive in picking which DCT blocks to apply this to, because if I applied the correction uniformly across the image, some uniform-color areas that originally looked good ended up with these artifacts introduced afterwards...
Airbag
Here is a poor man's way of de-Bayering Mastcam images using the Gimp (or similar tools) for those wanting to experiment a little - obviously not intended for the experts here! I realize it is not as sophisticated as proper implementations, but in the absence of a Gimp plugin, this way has the advantage of simplicity at the expense of ending up with a half-resolution image. It does not for instance use just the green pixels for luminosity, and it performs the chroma filtering by the simple expedient of scaling the image down by a factor of 2 (thus merging each set of red, blue and two green pixels together).

  1. Open the Mastcam Bayer pattern (1536x1152) image in the Gimp
  2. Change the mode to "RGB"
  3. Drag the attached Bayer pattern color map onto it (forming a new layer)
  4. Change the mode of that new layer to "multiply"
  5. Flatten image
  6. Scale image to 50% of original size
  7. Colors->Auto->Normalize (or use Levels or Curves tool) to brighten the image.

Airbag

Bayer pattern color map image:
Click to view attachment

Sample result:
Click to view attachment
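For those who prefer a script, here is a numpy sketch in the same spirit as the GIMP recipe, taking one RGB pixel per 2x2 Bayer cell instead of using the multiply-and-downscale trick. The RGGB phase is an assumption (the actual Mastcam Bayer phase may differ) and the filenames are placeholders:

CODE
import numpy as np
from PIL import Image

# Placeholder filename for a 1536x1152 raw Bayer frame.
bayer = np.asarray(Image.open("mastcam_bayer.jpg").convert("L"), dtype=np.float32)

# Assume an RGGB layout: take R and B directly, average the two greens,
# producing one RGB pixel per 2x2 cell (half resolution, like the GIMP recipe).
r = bayer[0::2, 0::2]
g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0
b = bayer[1::2, 1::2]

rgb = np.dstack([r, g, b])
# Normalize to the full 0-255 range, roughly like Colors->Auto->Normalize.
rgb = (rgb - rgb.min()) * (255.0 / max(rgb.max() - rgb.min(), 1.0))
Image.fromarray(rgb.astype(np.uint8)).save("mastcam_halfres.png")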
fredk
Here's the FFT of the JPEGged Bayer patterns; the upper two are a patch of smooth sky, and the lower two a patch of ground:
Click to view attachment
In the ground image you only see the 2-pixel-scale (rough) periodicity corresponding to the Bayer pattern, which shows up as broad peaks at the edges of the FFT. In the sky image you also see peaks at the 2-pixel scale at the edges of the FFT, but they are sharp now since the sky is smooth. And you can also see FFT peaks halfway and a quarter of the way to the edges, corresponding to 4- and 8-pixel periodicity. But of course there should be no 4- or 8-pixel periodicity in a smooth Bayer image! So clearly those peaks are the result of JPEGging.

So I tried to filter out those peaks in the power spectrum. Here's the result on the same image as ugordan used:
Click to view attachment
Very similar result! The Fourier space filtering beautifully gets rid of the blotchy pattern on large smooth areas, but breaks down at the edges of those areas since the periodicity breaks down there. But there was no need to make the algorithm adaptive here - it works in one simple step.

Here's the horizon shot:
Click to view attachment
Again, a great job on the sky, but very little improvement on the not-so-periodic blotches over the mound.
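For anyone curious what that kind of Fourier-space notch filtering looks like in code, here is a rough numpy sketch. The peak locations and notch width are assumptions chosen to match the 4- and 8-pixel periodicity described above, not fredk's actual filter:

CODE
import numpy as np
from PIL import Image

img = np.asarray(Image.open("bayer_patch.jpg").convert("L"), dtype=np.float64)
F = np.fft.fftshift(np.fft.fft2(img))
h, w = img.shape
cy, cx = h // 2, w // 2

# Null small neighborhoods around the assumed artifact peaks: 4- and 8-pixel
# periodicity sits at 1/4 and 1/8 of the sampling frequency.
r = 2  # notch half-width in frequency bins (assumption)
for fy in (-0.25, 0.25, -0.125, 0.125, 0.0):
    for fx in (-0.25, 0.25, -0.125, 0.125, 0.0):
        if fy == 0.0 and fx == 0.0:
            continue  # keep the DC / low-frequency content
        y, x = cy + int(round(fy * h)), cx + int(round(fx * w))
        F[y - r:y + r + 1, x - r:x + r + 1] = 0

filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
Image.fromarray(np.clip(filtered, 0, 255).astype(np.uint8)).save("filtered.png")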
ngunn
I'm totally fascinated by the various approaches to de-Bayering being tried out and discussed here, and that's from somebody not normally interested in the details of image processing techniques. I've learned a lot from the many posts, especially ugordan's and now fredk's. I love Airbag's slicing of the Gordian knot for a quick shortcut. Who says airbags are no use on this mission?
Art Martin
I hope this is the right section for this. I'm actually replying to a post in another thread but, since it's about image processing, thought it better to put it here.

Amazing comparing the HiRISE anaglyph to what we actually see on the ground. One thing that seems obvious to me, though, is that there is a real exaggeration of relief in the HiRISE 3D effect (mesas are taller, canyons are deeper), most likely because the left and right images are taken a great deal farther apart than the human eyes. If I understand correctly, they are simply images taken at different points in the orbit, not by two cameras side by side as on the rovers. While that relief is stunning and produces this amazing view of the surface from above, it is not really a true representation of what a human observer would see from orbit.

In these days of phenomenal image and video processing software, where a program can build intermediate frames of a video by analyzing the pixels of the surrounding frames, I wonder if someone hasn't devised a way of correcting the relief of a 3D anaglyph when the actual separation of the two images is known. I can certainly picture the code process in my mind, and it doesn't seem complicated if one works with image comparison coding. I'm a computer programmer, but it's been years since I did anything involving pixel manipulation, and my relearning curve would be extensive, or I'd tackle it myself.

Sure seems that anaglyphs have been around long enough that someone would have figured this out by now. Any thoughts?

QUOTE (walfy @ Aug 26 2012, 02:21 AM) *
I borrowed Fred's excellent rendition to compare with HiRISE anaglyph of the prime science region around the inverted riverbed. It's a very narrow angle of view. If I marked some features wrong, please let me know.

Click to view attachment

Phil Stooke
I don't do anaglyphs so I can't get technical here, but basically the 3-D map created by a stereo pair can be displayed with any vertical exaggeration you like. Typically they are made with some exaggeration because most scenes are rather bland without it. For a given stereo pair there may be some default value that is normally used but it could be changed if desired.

Phil

ngunn
QUOTE (Art Martin @ Aug 26 2012, 05:25 PM) *
I wonder if someone hasn't devised a way of correcting the relief of a 3D anaglyph


We've discussed this here a while ago, but rather than try to dig back for that, here are a couple of salient points. Anaglyphs don't have a constant intrinsic exaggeration factor. The apparent relief you see depends on the size of the image and the distance you view it from. Adjust those and you could in theory view any anaglyph without line-of-sight exaggeration. It's true that in many cases you'd have to enlarge the image enormously and sit very close to it!!

One solution I've suggested is the inclusion of a small virtual cube in the corner of each anaglyph to serve as a three dimensional scale bar, so if the cube looks too tall you know you're seeing the scene exaggerated by the same amount.
fredk
I think what Art's suggesting is adjusting the apparent relief while keeping the viewing size/distance constant. I could imagine doing that, for example by morphing one frame part way towards the other to reduce relief. But that would be hard and would involve some degree of faking for the intermediate viewpoints.

For now, ngunn's approach can at least help reduce exaggerated relief.
john_s
QUOTE (ngunn @ Aug 26 2012, 11:09 AM) *
It's true that in many cases you'd have to enlarge the image enormously and sit very close to it!!


Actually, does enlarging the image have any effect on the apparent vertical exaggeration? I wouldn't expect so, because there should be no vertical exaggeration when the convergence angle of your eyes matches the convergence angle of the original image pair [convergence angle = angle between the two lines of sight in the stereo pair, measured at the surface location being viewed]. The convergence angle of your eyes depends on their distance from the image, but doesn't depend on the image magnification.

John
Pando
Here's my attempt at creating a 3d anaglyph image of the distant hills...

ngunn
QUOTE (john_s @ Aug 26 2012, 06:55 PM) *
Actually, does enlarging the image have any effect on the apparent vertical exaggeration? I wouldn't expect so


You are correct of course. It's the viewing distance alone that does it. I was confusing anaglyphs with cross-eyed pairs where the size does have an effect because it changes the angles too.

QUOTE (Pando @ Aug 26 2012, 07:59 PM) *
Here's my attempt at creating a 3d anaglyph image of the distant hills.


Excellent! smile.gif
Art Martin
Yes, that's exactly what I was wondering about. It would very much involve faking one of the images, based on an analysis of an anaglyph created with the wider separation of views, and then rebuilding the anaglyph with one original and the "faked" image. I guess derived image would be a better term, much like smoothing a video shot at low FPS by having the computer generate the intermediate frames for a standard frame rate, based on a best guess of how motion and scaling would occur in each frame.

When I've created anaglyphs in the past, the two original images are lined up vertically first and then aligned horizontally for the most comfortable view. This results in a blue- and a red-tinted image combining both of the originals. When you view it without the glasses you can see distinct blue and red tinted objects: close-up ones have more horizontal distance between them, while far-away ones have very little, or they're essentially right on top of one another. When the left and right images are taken, say, hundreds of miles apart, those distances get very exaggerated when viewed by human eyes.

What the program would do is figure out how much each pixel in, say, the right image is offset from its corresponding pixel in the left image, and bring it back closer in the proportion between a standard eye viewing angle and the actual image angles. I'd think that would be fairly easy to do on a long-distance aerial shot but very tough on something close up, because objects could block other ones from left to right. So I guess this is a challenge to the programmers who have written the wonderful anaglyph software out there, which pretty much assumes the original shots are taken at standard eye separation. They're already doing the processing when they build the final red/blue image; you'd just need one more parameter for the separation between the original images. Instead of simply combining the shots together, the blue portion would be derived and then combined.

One advantage of having this feature would be that you could also intentionally exaggerate the relief, by creating the derived image at a much wider separation than the original shot, to more readily spot depressions and things jutting up.
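A very crude sketch of that pixel-shifting idea in Python, assuming a per-pixel horizontal disparity map is already available from a stereo matcher; occlusions and hole-filling are ignored, and the function name is purely illustrative:

CODE
import numpy as np

def rescale_baseline(left, disparity, scale):
    """Synthesize a reduced-baseline 'right' view by shifting each left-image
    pixel by a scaled fraction of its measured disparity.
    Crude forward warp: uncovered pixels are simply left black."""
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    xs = np.arange(w)
    for y in range(h):
        new_x = np.clip(np.round(xs - scale * disparity[y]).astype(int), 0, w - 1)
        out[y, new_x] = left[y]
    return out

# Usage sketch: scale=0.1 would mimic a baseline one tenth of the original;
# the result would then be combined with the untouched left frame into an anaglyph.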

QUOTE (fredk @ Aug 26 2012, 10:17 AM) *
I think what Art's suggesting is adjusting the apparent relief while keeping the viewing size/distance constant. I could imagine doing that, for example by morphing one frame part way towards the other to reduce relief. But that would be hard and would involve some degree of faking for the intermediate viewpoints.

For now, ngunn's approach can at least help reduce exaggerated relief.

ElkGroveDan
QUOTE (ugordan @ Aug 25 2012, 04:29 AM) *
This is just for fun, I tried to implement an adaptive correction for the JPEGged (ugh!), raw Bayered images

Very ingenious thinking, Gordan -- and it seems to have worked well.
Roby72
A few remarks about the near focus of the mastcams:

The pictures of the sundial taken by the Mastcam-100 have not been in focus so far.
I suspect that the near-focus limit of this camera is a little bit farther out - for example, the cable running to the left of the dial is sharp:

http://mars.jpl.nasa.gov/msl-raw-images/ms...1000E1_DXXX.jpg

In contrast, the M-34 has taken sundial pictures that are in sharp focus:

http://mars.jpl.nasa.gov/msl-raw-images/ms...0000E1_DXXX.jpg

Does anyone know the distance from the Mastcams to the dial?

Robert
mcaplinger
QUOTE (Roby72 @ Aug 27 2012, 01:22 PM) *
Does anyone know the distance from the Mastcams to the dial?

The Marsdial is roughly 7.6 cm square and one side is 296 pixels long in the 34mm image. The IFOV of the 34mm is about 218 microrads, so the distance is roughly 1.2 meters.
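For anyone following along, the arithmetic behind that estimate (just the numbers quoted above, worked through with the small-angle approximation):

CODE
# Marsdial distance estimate from the numbers above.
size_m = 0.076        # Marsdial edge length, ~7.6 cm
pixels = 296          # edge length in the M-34 image, pixels
ifov = 218e-6         # M-34 instantaneous field of view, rad/pixel

angle = pixels * ifov             # angle subtended by one edge, radians
distance = size_m / angle         # small-angle approximation
print(round(distance, 2), "m")    # ~1.18 m, i.e. roughly 1.2 meters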
PDP8E
Back from a little vacation, and catching up on all the threads.
After several attempts to circumvent the JPEG-induced color/pattern bleeds in the Bayers, I decided to just make a good B&W image. It has several hacks: rank-order adaptive noise reduction, resolution reduced to 50%, and multiple stacks.
(and it's not that good) huh.gif

Click to view attachment

... maybe MSSS will take pity on the collective suffering and hacks found here on these threads and authorize posting a few PNG raw Bayers (just to see if all of the UMSF debayer programs are working properly, of course!)

mcaplinger
QUOTE (PDP8E @ Aug 27 2012, 05:12 PM) *
... maybe MSSS will take pity on the collective suffering and hacks found here on these threads and authorize a posting of a few PNG raw bayers...

What would be more in keeping with the spirit of the data release policy (IMHO) would be for us to demosaic the data and then JPEG that, but I can't do either one without permission. If raw data acquisition becomes common I'll ask, but I don't think it's worth it for a relative handful of images.
ElkGroveDan
Sounds like a great task for a trusted college intern.
mcaplinger
QUOTE (ElkGroveDan @ Aug 27 2012, 09:06 PM) *
Sounds like a great task for a trusted college intern.

I have no idea what your point is. It would take me about a minute to process all these images and post them. It's the permission to do so that's lacking.
JohnVV
The JPEG issue is mostly solved:
http://imgbox.com/g/9pzCUh6YaZ
not "white balanced"
the code used
CODE
gmic  0017MR0050002000C0_DXXX.jpg -bayer2rgb 3,3,3 -pde_flow 5,20 -sharpen 2 -o 0017MR0050002000C0.png


A PDE flow is used to remove most of the artifacts.
ugordan
QUOTE (mcaplinger @ Aug 28 2012, 06:44 AM) *
It's the permission to do so that's lacking.

I can understand that posting raws as PNGs would be iffy because we'd be getting the same quality product as you, but why would de-Bayering on the ground and JPEG-ing present an issue?
ugordan
QUOTE (JohnVV @ Aug 28 2012, 09:30 AM) *
the jpg issue is mostly solved

John, that looks very smooth, but there's an obvious loss of sharpness in the resulting images.
RegiStax
QUOTE (ugordan @ Aug 28 2012, 09:59 AM) *
John, that looks very smooth, but there's an obvious loss of sharpness in the resulting images.

I've attached a wavelet-sharpened version of one of the PNGs; it seems possible to resharpen them without too many of the de-Bayer artifacts showing up.

cheers
CorB
Click to view attachment
jmknapp
I wonder what the effective bits per pixel of the MASTCAM raw images is. The cameras use the KODAK KAI-2020CM sensor. Is it sampled with a 12-bit ADC like MARDI and NAVCAM? One reference for NAVCAM gives an SNR of >200 for certain conditions. So that would mean effectively 7-8 bits per pixel for that camera anyway?

It was interesting that yesterday Mike Malin referred to stacking images.
mcaplinger
QUOTE (jmknapp @ Aug 28 2012, 03:08 AM) *
I wonder what the effective bits per pixel of the MASTCAM raw images is.

The MAHLI paper I've referred to several times before is an excellent source of this kind of information. From section 7.5.1:

"Acquired as 12-bit images, MAHLI data are converted onboard the instrument, without loss
of information, to 8-bit images using a square-root companding look-up table."
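For illustration, here is a minimal sketch of what a generic square-root companding table looks like; the actual MMM flight LUT is not reproduced here, so the exact values are assumptions:

CODE
import numpy as np

# Illustrative 12-bit -> 8-bit square-root companding table.
dn12 = np.arange(4096)
lut = np.round(np.sqrt(dn12 / 4095.0) * 255.0).astype(np.uint8)

def compand(raw12):
    """Map 12-bit DNs to 8-bit companded values."""
    return lut[raw12]

def expand(dn8):
    """Approximate inverse: 8-bit companded value back to a 12-bit DN."""
    return np.round((dn8 / 255.0) ** 2 * 4095.0).astype(np.uint16)

# The quantization step grows with signal level (fine at the dark end,
# coarse near saturation):
print(expand(np.array([10, 11, 200, 201])))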
mcaplinger
QUOTE (ugordan @ Aug 28 2012, 12:57 AM) *
why would de-Bayering on the ground and JPEG-ing present an issue?

I am not authorized to put any data out, in any form. You might want to take a look at http://blogs.agu.org/martianchronicles/201...8/blogging-msl/ -- Ryan discusses in general terms the rules that we operate under. I am constantly evaluating each of my posts to make sure that it is only derived from public information.

If you don't like it, complain to my boss.
Airbag
QUOTE (mcaplinger @ Aug 28 2012, 08:48 AM) *
"Acquired as 12-bit images, MAHLI data are converted onboard the instrument, without loss
of information, to 8-bit images using a square-root companding look-up table."


I read that a while back and was puzzled by how one could do such a conversion without a loss of information, unless the original data already fit in 8 bits. Any non-trivial processing operation results in a loss of information somewhere (usually with the purpose of increasing apparent information somewhere else).

I understand companding and all that, but that is not a two-way operation without loss of information. The resulting 8 bit data cannot be converted back into the original 12 bit data, surely, as multiple values in the 12 bit sample get mapped to the same 8 bit value?

Just trying to understand what exactly was meant in the MAHLI document.

Airbag
Floyd
mcaplinger: We all really appreciate the fantastic images released, and I think the misunderstanding was that you are bound by the image release policy, not being the one doing the releasing. After the appropriate time, the eager ones here will have access to raw data.

Airbag: I'm not an expert, but I think it means without loss of spatial resolution. Clearly there is a minor loss of brightness precision per pixel going from 12 to 8 bits (and back again). However, visually, the 8-bit square-root encoding gives the full dynamic range and preserves the low-light/dark range preferentially to the saturated white end.
ugordan
Airbag, also look up photon shot noise to see why even 12 to 8 bit conversion is "sorta" lossless.
Airbag
Ugordan,

But photon shot noise is less of an issue with larger numbers of photons (i.e. higher CCD signal levels), and that is exactly where a square-root compander "does its thing" the most, so I don't see how photon shot noise would be relevant to my original question?

I could understand it if the CCD read noise was a dominant factor, and thus the 12 bits of data are really only "effectively" worth around 8 bits, but that would imply a huge noise contribution!

Airbag
mcaplinger
QUOTE (Airbag @ Aug 28 2012, 09:17 AM) *
But photon shot noise is less of an issue with larger number of photons...

shot noise = sqrt(signal), so shot noise is higher at higher signal levels. The square root encoding is coarser at higher levels, finer at lower levels.

You could have a philosophical debate about what this means, but that's the way our cameras work.
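To put rough numbers on that, a quick sketch comparing the shot noise to the step size of an illustrative square-root table at a few signal levels; the fullwell and the table are assumptions for illustration, not the flight values:

CODE
import numpy as np

fullwell = 30000.0  # electrons; assumed, within the 20K-40K range quoted elsewhere

def sqrt_step(signal_e):
    """Size of one 8-bit companded step, in electrons, near a given signal level."""
    code = np.sqrt(signal_e / fullwell) * 255.0
    next_signal = ((code + 1.0) / 255.0) ** 2 * fullwell
    return next_signal - signal_e

# The encoding step stays roughly proportional to the shot noise across the range,
# so little real information is lost by the 12-to-8-bit companding.
for s in (100.0, 1000.0, 10000.0, 30000.0):
    print("signal %6.0f e-: shot noise %5.1f e-, LUT step %5.1f e-"
          % (s, np.sqrt(s), sqrt_step(s)))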
Airbag
I was looking at shot noise from a S/N point of view but I get the hint and will skip any debates - thanks for the clarifications everybody!

Airbag
jmknapp
QUOTE (mcaplinger @ Aug 28 2012, 08:48 AM) *
The MAHLI paper I've referred to several times before is an excellent source of this kind of information.


http://rd.springer.com/article/10.1007/s11...4/fulltext.html

From another paper....

QUOTE
MAHLI shares common electronics, detector, and
software designs with the MSL Mars Descent Imager
(MARDI) and the two Mast Cameras (Mastcam).


Ahh, good to know. Wonder what the (optimum) SNR is.
mcaplinger
QUOTE (jmknapp @ Aug 28 2012, 12:47 PM) *
Wonder what the (optimum) SNR is.

I often tell people who are fond of such numeric metrics as SNR, MTF, ENOB, etc, that no matter how optimal those numbers make your camera sound, anybody can tell a good image from a bad image. I think the MMM images hold up pretty well.

That said, you could work out the best possible SNR we could get from the Truesense datasheet. You can't ever do better than sqrt(fullwell) for a single measurement due to shot noise and these sensors have a fullwell of 20K-40K electrons so 140:1 to 200:1 is as good as it could be.
jmknapp
QUOTE (mcaplinger @ Aug 28 2012, 04:25 PM) *
these sensors have a fullwell of 20K-40K electrons so 140:1 to 200:1 is as good as it could be.


200:1 is about 46 dB SNR, which according to the method on ctecphotonics gives an effective number of bits (ENOB) of (46-2)/6 = 7.3 bits. That could be improved if desired by stacking?

The images are very impressive to see--just wondering what the limits are when the images are considered as abstract data, perhaps to be analyzed by machines teasing out the last bit of information.
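A quick back-of-the-envelope sketch of the shot-noise-limited SNR and the sqrt(N) improvement from stacking; these are just the numbers quoted above run through the standard ENOB approximation, not measured values:

CODE
import math

for fullwell in (20000, 40000):
    snr = math.sqrt(fullwell)                # best-case single-frame SNR (shot-noise limit)
    snr_db = 20.0 * math.log10(snr)
    enob = (snr_db - 1.76) / 6.02            # standard ENOB approximation
    print("fullwell %5d e-: SNR %5.0f:1 (%4.1f dB), ~%.1f effective bits"
          % (fullwell, snr, snr_db, enob))

# Stacking N frames improves SNR by sqrt(N) for uncorrelated noise:
for n in (4, 16):
    print("stack of %2d frames: SNR gain x%.0f, ~+%.1f bits"
          % (n, math.sqrt(n), math.log2(math.sqrt(n))))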
This is a "lo-fi" version of our main content. To view the full version with more information, formatting and images, please click here.
Invision Power Board © 2001-2018 Invision Power Services, Inc.