I've almost made this post several times today but I had to go back and redo my work multiple times because I really just couldn't believe it.
Going into this, I was absolutely sure I wouldn't get anything from it. It just seemed so incredibly unlikely to work.
Here are the original HST 2002/2003 maps of Pluto to refresh your memory.
http://i.imgur.com/TQEMK4y.png
Below are the 2002/2003 combined observations of Pluto, run through my experimental image processing to bring out albedo variations.
http://i.imgur.com/MJhOZEa.png
Here is an animation fading between the above and scalbers' latest high resolution map (downscaled, of course). (13MB gif)
http://i.imgur.com/AKl76kX.gif
(ctrl + scroll wheel to zoom - maximum possible encouraged)
Finally, here's the single frame from the fade.
http://i.imgur.com/KACBLPX.png
I've also taken this process further with a different map and it appears to continue bringing out small increments of detail each time; to what limit, I have no idea. One especially important note about all of this: it's somewhat like finding Waldo without knowing what Waldo looks like. Without knowing what to look for, it would have been increasingly difficult to know how to set the parameters in each iteration to avoid corrupting the details.
----------
Edit
----------
I should also note, just in case it wasn't clear, the HST combined map was directly processed. Zero data from NH was involved in pulling out the details. The map from scalbers is just to compare.
It looks pretty close to me as well.
I started working on it because I couldn't quite get the HST maps to line up with much on the NH maps so I took a shot in the dark, hoping I could get something to come out so I could get a better alignment.
Very cool work.
It'd be interesting to see this tried with the Hubble Ceres imagery in order to validate & further refine your technique for other applications, but the contrast levels there aren't nearly as high as those on Pluto so it might not work nearly as well.
It definitely works best with high contrast terrain. I have already experimented a little with Ceres, with less appealing results. I'll give it another go sometime, though. I'm terrible at planet mapping, but if someone wanted to use the HST hemispheric views to make a quick map, it would definitely make comparison a lot easier.
There is much more agreement between the images I just posted than just large scale structures. Pick something from the processed HST image and watch it resolve on the NH map; there's a very large number of these. These weren't originally resolved and technically shouldn't be resolvable, but they are there.
A good one to notice is the crater or arc (difficult to tell) near the center right on Cthulhu.
Another is a curly feature on the northern rim of Tombaugh Regio. That one is really well defined.
That probably points to the way your process is tailored more than anything else, though. As Fred pointed out, the information just isn't present in the HST images because of the resolution. By definition, you can't get sub-pixel information from a pixel since it's a uniform unit (actually a single DN, if my weak grasp of the appropriate terminology is correct).
To clarify again, all processing was performed on the HST image prior to alignment with the NH map. This is why I provided both the uncorrected and the corrected views of the map for comparison. No retouching of any form is used in this. It's a mixture of lots and lots of different layering techniques and deconvolution. And again, full disclosure, I have no idea why it appears to work at all, all limitations of all involved systems considered.
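To give a very rough idea of the deconvolution half, here is a generic Richardson-Lucy sketch in Python/scikit-image. This is not my actual pipeline or parameters; the PSF, iteration count, and filenames are placeholders, it's only the general kind of step involved:

    import numpy as np
    from skimage import io, img_as_float
    from skimage.restoration import richardson_lucy

    def gaussian_psf(size=9, sigma=1.5):
        # Guessed, symmetric Gaussian point-spread function.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
        return psf / psf.sum()

    # Load the blurry map as a single-channel float image (placeholder filename).
    img = img_as_float(io.imread("hst_pluto_map.png", as_gray=True))

    # Deconvolve. More iterations sharpen further but also amplify noise,
    # which is exactly why the parameter choices matter so much.
    restored = richardson_lucy(img, gaussian_psf(), num_iter=30)
    io.imsave("hst_pluto_deconvolved.png", (np.clip(restored, 0, 1) * 255).astype(np.uint8))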
OK... ZLD, let's be absolutely clear about what you are seeing. Here is an image I just made which takes a bit of text (left), then saves it as a low quality JPG (middle - it still looks good) - but then cranks up the contrast to a ridiculous degree (right). It's now full of artifacts. That's what JPG does, that's how its algorithm works. You can't avoid it.
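If anyone wants to reproduce the effect themselves, something along these lines does it (Python/PIL sketch; the text, quality, and contrast settings here are arbitrary, not the exact script behind my image):

    from PIL import Image, ImageDraw, ImageEnhance

    # Render a bit of text: the clean source (left panel).
    img = Image.new("L", (320, 80), color=255)
    draw = ImageDraw.Draw(img)
    draw.text((10, 30), "PLUTO", fill=0)

    # Save at low JPEG quality and reload; it still looks fine by eye (middle panel).
    img.save("text_lowq.jpg", quality=10)
    lowq = Image.open("text_lowq.jpg")

    # Crank the contrast hard; the 8x8 block artifacts become obvious (right panel).
    stretched = ImageEnhance.Contrast(lowq).enhance(8.0)
    stretched.save("text_lowq_stretched.png")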
Here are several highlighted features that, in my opinion, are not similar to your example and are also, extremely coincidentally, close to actual features.
http://i.imgur.com/AzUFdDu.gif
Yes... but in the same image there are 100 other points that are completely different. If you lay one complex pattern over another there will always be a few points that appear to correspond. The biggest, clearest features in the HST image should match LORRI much better than a few faint things do, but they don't.
I'm not saying your work is useless, I'm saying this particular application of it is mistaken.
Phil
Left: generated noise
Right: wild processing of the generated noise
It seems like test data is not scarce: you could print out any known image, take a photograph of it from far away, and run your process on that photo, then see if you recover detail at a higher resolution than the photograph of the printout. (Of course, downsampling a digital image would seem to accomplish the same goal.)
If you want to make a double blind study of it, someone else could provide the photos, wherein you would have no possible way to obtain the photos in any other way (e.g., through Google Image Search).
Roughly speaking, use the same methodology as studies of putative E.S.P.
That should give you a much broader supply of test data than solar system imagery alone.
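In code, the downsampled version of that test is simple to set up. A sketch (Python/scikit-image; a built-in test photo stands in for the printout, and the marked line is only a placeholder for the actual enhancement process):

    import numpy as np
    from skimage import data, img_as_float
    from skimage.transform import resize
    from skimage.metrics import structural_similarity

    # Ground truth: any known image stands in for the printout.
    truth = img_as_float(data.camera())            # 512 x 512 grayscale

    # Simulate the low-resolution observation (the photo taken from far away).
    lowres = resize(truth, (32, 32), anti_aliasing=True)

    # PLACEHOLDER: plain bicubic upsampling back to full size.
    # Replace this single call with the process actually being tested.
    recovered = resize(lowres, truth.shape, order=3)

    # Score against the truth. Genuinely recovered detail should beat
    # plain interpolation by a clear margin, not just by a hair.
    baseline = resize(lowres, truth.shape, order=0)
    print("recovered SSIM:", structural_similarity(recovered, truth, data_range=1.0))
    print("baseline  SSIM:", structural_similarity(baseline, truth, data_range=1.0))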
That's the type of correspondence I was looking for, JRehling. Thank you for this idea. I'd be highly interested in trying this.
I do think the imaging device can play a very large role in whether this works, though. Consumer devices produce much more noise, which is very apparent when doing this, compared to much higher quality research-grade CCDs. I've tried this to some extent.
Hi ZLD,
It's interesting; can your algorithm extract details from http://www.bbc.com/news/science-environment-30524429?
I only worked on a small crop due to the strange shape of the image. Sudden contrast changes, such as at the edges, play havoc.
Thanks ZLD, nice processing. It was interesting to see what your method can do with this really very difficult image (non-linear motion blur, etc.), and to compare with previous attempts:
http://www.unmannedspaceflight.com/index.php?s=&showtopic=7896&view=findpost&p=216419
http://www.unmannedspaceflight.com/index.php?showtopic=7896&st=1050&p=216427&#entry216427
http://blogs.esa.int/rosetta/2014/12/18/updates-from-agu/#comment-283282
Maybe it will help you to tune your algorithm. If you want I'll give some more "difficult" samples for testing.
Sure thing, Alex. I'd be very interested. This has been an evolving process for months now.
It looked computer generated from the outset, so I went with that assumption.
Well a wrong assumption will certainly wreck everything following. Oops.
Would you care to describe how you reached your test image result? I can't seem to reproduce it myself.
When I saw that 'test image' I fed it to a little stochastic battalion of filters I maintain.
The only instruction for convergence was high-frequency edges exceeding 30%.
Here is what my script 'hallucinated'
Haha PDP8E.
Also, finally got around to looking through the paper. If I am understanding correctly, the very orange image is based on a 6x1 pixel image and then stretched out. This alone would never be recoverable into a true image. Very interesting paper though.
An old chestnut from the technology of image file compression: If two different source images are compressed and they make identical target files, you cannot recover from the target which one was the original source.
These one-row images present a stark situation: If you have three pixels which are, in sequence: BLACK - GRAY - WHITE, you cannot determine whether the real object (at, say, 1x100 pixel resolution) had a sharp cliff from black to white or a gradual transition. You cannot. The information is not there.
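A toy numerical version of that, just to make it concrete (Python; block averaging stands in for the optics):

    import numpy as np

    # Two very different "true" 30-sample brightness profiles:
    # a sharp cliff from black to white at the midpoint...
    cliff = np.concatenate([np.zeros(15), np.ones(15)])
    # ...and a gradual ramp across the middle third.
    ramp = np.concatenate([np.zeros(10), np.linspace(0.0, 1.0, 10), np.ones(10)])

    # Downsample each to three pixels by averaging 10 samples per pixel.
    def to_three_pixels(profile):
        return profile.reshape(3, 10).mean(axis=1)

    print(to_three_pixels(cliff))  # [0.  0.5 1. ]  -> BLACK, GRAY, WHITE
    print(to_three_pixels(ramp))   # [0.  0.5 1. ]  -> the exact same three pixels

Both real objects collapse onto the same BLACK - GRAY - WHITE measurement, so nothing in the three pixels can tell you which one was out there.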
But there are some possible (and related) saving graces that can give you additional information:
1) You may have a priori information about the likely transitions. If you knew, for example, that the image was of Mercury, you would have a lot of constraints on the norms for transitions. If you knew, moreover, that the image was of Mercury at a high phase angle, you would have still more information. But if you didn't know that, you'd be much more limited in your ability to guess between abrupt versus gradual transitions.
Just knowing that the image is of a body in space (and not, say, a Captcha of blurred text) is potentially useful information, but that only goes so far. Iapetus, Mars, and Mercury have profoundly different norms for how sharp/gradual transitions are. You can guess with one set of norms and luckily get some details right, but it was a guess. If you guess the world is visually like Mars but it turns out to be visually like Iapetus, your guess is simply wrong. And if you have no a priori information, the guess remains a guess.
2) The one-row case is not a common one, so you have information from adjacent rows which might inform how abrupt/gradual the transitions are. This doesn't provide new information in an absolute sense (the real object might have sharply defined square blocks as its true shading!), but one can infer norms across the surface and then use those locally.
3) When you have selective high-resolution imaging of a world but only low-resolution imaging in many other areas, you can use this information to set the parameters in (1). This seems applicable for, e.g., Europa and Pluto.
But given, say, an exoplanet with no high-resolution data possible, the ability to guess at details more fine than we can see in the raw image is going to be close to nil.
I absolutely do agree that a measured 'image' a single pixel in height will leave the raw information as the best obtainable at the time. Without lots of other data, there's nothing else to work from.
However, it wouldn't be completely useless to make inferences based on lots of other collected data and following up by defining several scenarios based on multiple interpretations of the data. Isn't that the basis for forming future experiments most of the time?
That's all fair and well; to make the process more mature, one would define the parameters distinguishing one situation from another, e.g., Mercury at high phase angle, Mars at high phase angle, Iapetus at high phase angle, Mercury at low phase angle, etc. (Certainly, resolution is another important determiner of context: rough at one resolution is smooth at another.)
Then, for any given image, you could say, here's a P% increase in resolution assuming a [KNOWN WORLD]-like surface at this phase angle, and whatever one sees or doesn't see, the prediction is appropriately contextualized. If the assumption is correct, the prediction should be correct. If we know that the assumption is questionable, then we know that the prediction is questionable.