HST Albedo Map Processing

Post #1 | Jul 27 2015, 01:16 AM
Member | Group: Members | Posts: 555 | Joined: 27-September 10 | Member No.: 5458
I've almost made this post several times today, but I had to go back and redo my work multiple times because I really just couldn't believe it. Going into this, I was absolutely sure I wouldn't get anything from it. It just seemed so incredibly unlikely to work.

Here are the original HST 2002/2003 maps of Pluto to refresh you. Below is the 2002/2003 combined observation of Pluto, run through my experimental image processing to bring out albedo variations. Here is an animation fading between the above and scalbers' latest high-resolution map (downscaled, of course). (13 MB gif; ctrl + scroll wheel to zoom, maximum zoom encouraged.) Finally, here's the single frame from the fade.

I've also taken this process further with a different map, and it appears to continue bringing out small increments of detail each time; to what limit, I have no idea.

One especially important note about all of this: it's somewhat like finding Waldo without knowing what Waldo looks like. Without knowing what to look for, it would have been increasingly difficult to know how to set the parameters in each iteration to avoid corrupting the details.

---------- Edit ----------

I should also note, just in case it wasn't clear: the HST combined map was directly processed. Zero data from NH was involved in pulling out the details. The map from scalbers is just for comparison.
Post #2 | Jul 29 2015, 05:20 PM
Haha PDP8E.

Also, I finally got around to looking through the paper. If I'm understanding correctly, the very orange image is based on a 6x1-pixel image that was then stretched out. This alone would never be recoverable into a true image. Very interesting paper, though. This should be close to what was resampled. I'm not quite sure how they simulated the upper and lower lines; probably just an average falloff or something.

---------- Edit ----------

I decided to make a quick attempt to see what could be done with a 6x1 image. Still not much, but it seems to correspond to what's on the map. The color data is derived from the same 6x1-pixel image, just applied differently than it was in the paper. I could see their method being pretty useful for observing exoplanets if the resolution can reach a few pixels across.
Post #3 | Jul 29 2015, 06:18 PM
Senior Member | Group: Members | Posts: 2530 | Joined: 20-April 05 | Member No.: 321
An old chestnut from the technology of image file compression: If two different source images are compressed and they make identical target files, you cannot recover from the target which one was the original source.
These one-row images present a stark situation: if you have three pixels which are, in sequence, BLACK - GRAY - WHITE, you cannot determine whether the real object (at, say, 1x100-pixel resolution) had a sharp cliff from black to white or a gradual transition. You cannot. The information is not there. But there are some possible (and related) saving graces that can give you additional information:

1) You may have a priori information about the likely transitions. If you knew, for example, that the image was of Mercury, you would have a lot of constraints on the norms for transitions. If you knew, moreover, that the image was of Mercury at a high phase angle, you would have still more information. But if you didn't know that, you'd be much more limited in your ability to guess between abrupt versus gradual transitions. Just knowing that the image is of a body in space (and not, say, a Captcha of blurred text) is potentially useful information, but that only goes so far. Iapetus, Mars, and Mercury have profoundly different norms for how sharp or gradual transitions are. You can guess with one set of norms and luckily get some details right, but it was a guess. If you guess the world is visually like Mars but it turns out to be visually like Iapetus, your guess is simply wrong. And if you have no a priori information, the guess remains a guess.

2) The one-row case is not a common one, so you have information from adjacent rows which might inform how abrupt or gradual the transitions are. This doesn't provide new information in an absolute sense (the real object might have sharply defined square blocks as its true shading!), but one can infer norms across the surface and then use those locally.

3) When you have selective high-resolution imaging of a world but only low-resolution imaging in many other areas, you can use that information to set the parameters in (1). This seems applicable for, e.g., Europa and Pluto.

But given, say, an exoplanet with no high-resolution data possible, the ability to guess at details finer than we can see in the raw image is going to be close to nil.
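The cliff-versus-ramp ambiguity above can be demonstrated directly (my own toy construction, not from the post): a hard step and a gradual ramp, box-averaged down by the same factor, produce identical low-resolution pixels, so the downsampled image alone cannot distinguish them:

```python
import numpy as np

def downsample(profile, factor):
    """Box-average a 1-D profile by the given factor."""
    return profile.reshape(-1, factor).mean(axis=1)

n, factor = 90, 30

# A sharp cliff: black, then an abrupt jump to white at the midpoint.
cliff = np.zeros(n)
cliff[45:] = 1.0

# A gradual ramp occupying the whole middle third.
ramp = np.zeros(n)
ramp[30:60] = (np.arange(30) + 0.5) / 30
ramp[60:] = 1.0

lo_cliff = downsample(cliff, factor)   # -> [0.0, 0.5, 1.0]
lo_ramp = downsample(ramp, factor)     # -> [0.0, 0.5, 1.0]
```

Two very different source profiles, one identical BLACK - GRAY - WHITE result: exactly the sense in which "the information is not there".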