DSCOVR
fredk
post Oct 31 2016, 05:02 PM
Post #121


QUOTE (scalbers @ Oct 30 2016, 04:17 PM) *
The funny thing is that I once inadvertently ran this without air, and the ocean indeed looked darker. This was a goofy accidental run (image below) with Mars atmospheric pressure and Earth aerosols.

Thanks a lot, Steve. This is very cool: Earth without air. Comparing your with- and without-air views, I can certainly see your point about the oceans being dominated by light from the sky: the Earth is the blue planet because our sky is blue. Apart from the continents, I can almost imagine these views as fish-eye views, seen from the ground, of a sky with patchy clouds.

I'm still curious about one detail: switching off the air makes the oceans much darker, but how much of that darkening is due to the removal of light scattered in the air above the water, and how much is due to the removal of sky light reflected back up from the water's surface (i.e., the removal of a sort of wide-angle "sky glint")? My guess is that the former dominates, since the water's surface is not a very good reflector (except at large angles of incidence).
scalbers
post Oct 31 2016, 11:35 PM
Post #122



Indeed it's interesting to imagine the aspects of symmetry between looking up at the sky and looking down from space - I can almost get dizzy looking up and imagining this.

If we assume no aerosols, then I agree that light scattered upward by the air dominates. The Rayleigh phase function is pretty similar upward and downward, and only about 8% of the downward diffuse light is reflected back up by the water (a quick check of that number is sketched below). If aerosols are present, the scattered/reflected ratio may vary somewhat (considering the asymmetry factor), though probably not enough to change the main conclusion in most cases.
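That ~8% number is easy to sanity-check. A minimal Python sketch, assuming a flat water surface, unpolarized Fresnel reflection at n = 1.33, and an isotropic sky (the exact value depends on surface roughness and the sky's actual radiance distribution):
CODE
import numpy as np

def fresnel_unpolarized(theta_i, n=1.33):
    """Unpolarized Fresnel reflectance of a flat air-water interface."""
    theta_t = np.arcsin(np.sin(theta_i) / n)   # Snell's law
    rs = (np.sin(theta_i - theta_t) / np.sin(theta_i + theta_t)) ** 2
    rp = (np.tan(theta_i - theta_t) / np.tan(theta_i + theta_t)) ** 2
    return 0.5 * (rs + rp)

# Average over an isotropic sky, weighting by projected solid angle:
theta = np.linspace(1e-4, np.pi / 2 - 1e-4, 20000)
r = fresnel_unpolarized(theta)
r_diffuse = np.trapz(r * 2 * np.sin(theta) * np.cos(theta), theta)
print(f"Diffuse-sky reflectance of flat water: {r_diffuse:.1%}")  # ~6-7%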

A fun example that helps illustrate some of this: Lake Titicaca looks much darker than the ocean areas, because at its high altitude there is less air above it.
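To put a rough number on that (a back-of-envelope estimate with assumed values, not a measurement): Rayleigh scattering scales with the overlying air column, which falls off roughly exponentially with altitude.
CODE
import numpy as np

h = 3.8   # km, approximate elevation of Lake Titicaca
H = 8.4   # km, nominal pressure scale height of the lower atmosphere
print(f"Air column above Titicaca vs. sea level: {np.exp(-h / H):.0%}")
# -> roughly 60-65%, so correspondingly less sky light over the lake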


fredk
post Nov 1 2016, 03:29 PM
Post #123


The symmetry of the Rayleigh phase function tells us one more interesting thing. Neglecting the subdominant light scattered or reflected from the water, the intensity of the ocean (near the centre of the disk but away from the sun glint) seen from space with the sun roughly behind your back would be similar to the Rayleigh-scattered intensity of the sky (away from the horizon) from the ground (near sea level) when the sun is high. Of course getting into the (mainly forward-scattered) Mie regime spoils this and will make the sky from the ground generally brighter than the ocean from above.
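For reference, here is the Rayleigh phase function (up to normalization) and its up/down symmetry in a few lines:
CODE
import numpy as np

# P(theta) = (3/4) * (1 + cos(theta)^2): symmetric about 90 degrees, which
# is what makes "ocean from above" and "sky from below" comparable here.
angles = np.array([0, 45, 90, 135, 180])
P = 0.75 * (1 + np.cos(np.radians(angles)) ** 2)
print(dict(zip(angles.tolist(), P.round(3).tolist())))
# {0: 1.5, 45: 1.125, 90: 0.75, 135: 1.125, 180: 1.5}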

But basically the intensity of the sky near noon on the clearest, least dusty day would be similar to the intensity (and colour) of the ocean when viewing the full Earth from above. For those of us who aren't going to make it into space, at least we can imagine a bit more quantitatively now!
JRehling
post Nov 1 2016, 05:20 PM
Post #124


The question of what Earth would look like without its atmosphere is potentially ambiguous; it could mean:

1) What would the view from space be if the light reflected off the surface/ocean were not altered on its way up to the camera?
2) What would the surface/ocean itself look like if it had a black sky above it?
3) Both (1) and (2).

A related example: In towns/cities when there is snow on the ground and cloud cover overhead, night can be astonishingly bright because streetlights reflect off the clouds, and that light reflects off the snow, in what is effectively a damped infinite feedback loop.

The situation looking at normal, natural surfaces from above has some degree of this, with the sky altering how the surface looks and the surface, surely, altering how the sky looks.

If you wanted to address (1), an "easy" way to do it would be to ground-truth the DSCOVR images by taking images of isotropic surfaces (e.g., the ocean, snow, certain deserts). Compare your camera's color values with DSCOVR pixels of the same surface unit and determine the function that relates the two. Then apply the inverse function to DSCOVR images of the whole planet.
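A minimal sketch of that procedure, assuming a handful of matched measurements and a simple per-channel linear model (the arrays and the model choice are illustrative, not part of the proposal):
CODE
import numpy as np

# Hypothetical matched colours for a few uniform surface patches:
ground_rgb = np.array([[0.12, 0.15, 0.30],
                       [0.45, 0.40, 0.30],
                       [0.80, 0.78, 0.75]])   # calibrated ground camera
dscovr_rgb = np.array([[0.22, 0.27, 0.46],
                       [0.50, 0.49, 0.45],
                       [0.78, 0.79, 0.80]])   # matching DSCOVR pixels

# Fit dscovr = a*ground + b per channel, then apply the inverse.
coeffs = [np.polyfit(ground_rgb[:, c], dscovr_rgb[:, c], 1) for c in range(3)]

def remove_atmosphere(img):
    """Invert the fitted relation on a DSCOVR image of shape (H, W, 3)."""
    out = np.empty_like(img, dtype=float)
    for c, (a, b) in enumerate(coeffs):
        out[..., c] = (img[..., c] - b) / a
    return np.clip(out, 0.0, 1.0)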

FWIW, I recently did something like this with images of Mercury that I took in a daytime sky. I subtracted the R, G, and B values of the background sky from the whole image, including the portions containing Mercury. The result gives a black sky and a brownish-grey Mercury approximating the colors seen by MESSENGER. That is the "looking up" counterpart of the "looking down" correction that (1) would require.
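The subtraction itself is only a couple of lines; a self-contained toy version on an invented frame:
CODE
import numpy as np

# Synthetic stand-in: a uniform daytime-sky frame with a brighter disc
# playing the role of Mercury.
img = np.full((200, 200, 3), [0.55, 0.60, 0.75])
yy, xx = np.mgrid[:200, :200]
img[(yy - 100) ** 2 + (xx - 100) ** 2 < 15 ** 2] += [0.20, 0.17, 0.12]

sky = img[10:60, 10:60].reshape(-1, 3).mean(axis=0)   # patch of empty sky
no_sky = np.clip(img - sky, 0.0, None)                # black sky, planet kept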
Stratespace
post Dec 9 2016, 10:49 PM
Post #125


I'm starting to despair of the DSCOVR images.
I first worked with the PNG images provided by their server, together with their metadata. After days of work, I still couldn't understand what I was seeing: when projected onto a map with SPICE kernels, the data did not appear well registered, and there were slight shifts that changed with the local view angle. I contacted NASA, who kindly replied that the PNG images are not scientifically accurate in terms of localization, and that I should work with the L1B calibrated data instead.
In short, the L1B data are calibrated and have all bands registered in a common reference frame. In other words, for each image you get the grey level for each band, as well as the local latitude/longitude of each pixel (a coordinate map, if you prefer).
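Given those per-pixel grids, the debug overlay described below reduces to a lookup. A brute-force sketch with invented names (lat_grid/lon_grid are the (H, W) metadata arrays, coast an (N, 2) array of coastline vertices):
CODE
import numpy as np

def overlay_coastline(img, lat_grid, lon_grid, coast, tol=0.05):
    """Paint every pixel whose metadata lat/lon falls within tol degrees
    of a coastline vertex. Brute force, but fine for a debug overlay."""
    out = img.copy()
    for lat, lon in coast:
        hit = (np.abs(lat_grid - lat) < tol) & (np.abs(lon_grid - lon) < tol)
        out[hit] = 255
    return out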
But after hours of playing with those data, it appears the problem is even worse: the L1B data are even more wrong, and as delivered it is impossible to project the images properly onto a map.
As a short demonstration, I projected a "true" coastline map onto the images according to their lat/long maps. Here is the result:


After a bit of investigation, I finally understood where the problem lies in those raw calibrated data. Unfortunately, the coordinate maps associated with the images were generated very roughly: the DSCOVR EPIC team treated every non-null pixel as "the Earth" and the darkness of space as "not Earth". As a result, the atmosphere is treated as if it were the ground, and ten pixels beyond the limb the maps still report valid latitudes and longitudes!
You can see this in the following images, where I show a portion of Earth's limb, and the same portion with the size of the Earth incorrectly given by the metadata superimposed on it:

As a consequence, when you project your coastline map onto the images for debugging, you see the coast floating up in the upper atmosphere:


I think we can all agree it is clearly off-target...

I've already tried different techniques to correct for this issue, such as detecting the "true" size of the Earth in the images and shrinking all the lat/long maps accordingly, but this is very hard in practice: you need to distinguish lit portions of the limb that are clearly visible from barely visible portions that are already in night. Remember that DSCOVR orbits around L1, not at L1 itself, so part of the night side is visible.
I've also implemented alternative techniques, such as cloud removal and recognition of surface features to morph the provided metadata toward a more correct geometry, but it is very hard to make this 100% automated with high confidence without spending weeks and weeks of effort.
As a last resort, I tried working with the (uncalibrated) L1A data, but it fails for the same reason.

My conclusion so far: I'm done with these data. They lack metadata reliable enough for automated work at pixel resolution, to make animated maps for example. It's fine to make movies from the original images themselves (no transformation needed), and it's fine to transform one image by hand, but working toward a smooth transformed animation is pointless, at least without days and days of work. The optical-flow and other filtering techniques I've implemented to automatically guess the correct lat/long grid for each image are quite complex, but still insufficient to do the job. That's a shame; the outcome would have been awesome...
scalbers
post Dec 10 2016, 06:13 PM
Post #126


Indeed it's tricky to get accurate mapping. Here is a somewhat empirical fit used in my recent blinking comparison of synthetic vs DSCOVR:

Attached Image


http://stevealbers.net/allsky/cases/dscovr...k_162641808.gif

I did have some success in simulating the limb shading to help with the fitting. There is still some extra atmosphere appearing in the actual image for some reason.

It's interesting to consider how thick the atmosphere appears from this vantage point, and where the (often invisible) limb of the solid Earth is located. The simulations may provide useful estimates of the distance between the limb and the first "non-zero" pixel based on the atmosphere and on limb shading relating to the phase angle.
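A back-of-envelope estimate of the scale involved, using nominal values that are assumptions here (EPIC's ~0.62 deg field of view across 2048 pixels, a range of ~1.5 million km, scattering detectable to ~100 km altitude):
CODE
import numpy as np

range_km, r_earth_km, h_atm_km = 1.5e6, 6371.0, 100.0
px_per_deg = 2048 / 0.62   # assumed EPIC plate scale
r_solid = np.degrees(np.arctan(r_earth_km / range_km)) * px_per_deg
r_air = np.degrees(np.arctan((r_earth_km + h_atm_km) / range_km)) * px_per_deg
print(f"solid-Earth radius ~{r_solid:.0f} px; "
      f"atmosphere adds ~{r_air - r_solid:.1f} px beyond the limb")
# -> roughly 800 px and ~13 px at full resolution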


Phil Stooke
post Dec 10 2016, 08:14 PM
Post #127


How about this: don't use the limb-detection routine alone to locate the limb; use it plus a limb-fitting routine to get a best fit, and use that to establish the central (sub-spacecraft) pixel. Then use that central point plus a calculated radius, based on range and geometry, to fit the 'true limb' to the image. You can adjust some of the geometry parameters until they give optimum results.
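A sketch of that fitting step, using a simple algebraic least-squares circle fit (one implementation choice among several; the synthetic arc is illustrative):
CODE
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) circle fit to detected limb points: solve
    x^2 + y^2 = 2*a*x + 2*b*y + c for the centre (a, b) and radius."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Synthetic check: a noisy partial arc, like a lit limb near the terminator.
t = np.linspace(0.2, 2.0, 50)
x = 1024 + 900 * np.cos(t) + np.random.normal(0, 0.5, t.size)
y = 1024 + 900 * np.sin(t) + np.random.normal(0, 0.5, t.size)
print(fit_circle(x, y))   # ~ (1024, 1024, 900)
# The fitted centre gives the sub-spacecraft pixel; the radius can then be
# replaced by one computed from range and geometry.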

More simply, you could use your existing routine but multiply by the fraction necessary to shrink the coastline map to fit the image. Once established, that fraction should be fairly constant, or at least could be varied as a function of range.

Phil


Stratespace
post Dec 10 2016, 09:21 PM
Post #128


The limb detection is heavily affected by uncertainties around the terminator. In some images the pixels disappear when the Sun is locally 10° above the horizon; in other images they disappear only when the Sun is virtually 0° above the horizon!
In addition, it appears that a shrink/translate transform is not enough to compensate for all of the errors in the metadata, which is why I switched to optical flow that tries to fit the coastlines and other salient features (not an affine transformation; a sketch of the idea is at the end of this post).
It somewhat works, but requires significant tuning to run on a batch of several images. I can't afford the time it would take to make it work on thousands of images with a very low error rate.
In other words:
QUOTE (Phil Stooke @ Dec 10 2016, 09:14 PM) *
More simply, you could use your existing routine but multiply by the fraction necessary to shrink the coastline map to fit the image.
doesn't work.

I'm really curious to know how the scientists actually work with such errors. Do they apply the same kind of corrective process (hopefully with more time than two hours on a weekend, like us)? Can we expect them to correct their calibrated data accordingly one day?
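For concreteness, a toy sketch of the flow-based correction mentioned above (OpenCV's Farneback optical flow on synthetic stand-in data; the real pipeline is more involved): estimate the per-pixel displacement between a view predicted by the L1B lat/lon maps and the observed frame, then warp the lat/lon grids by that displacement.
CODE
import cv2
import numpy as np

h = w = 256
observed = np.zeros((h, w), np.uint8)
cv2.circle(observed, (128, 128), 60, 255, -1)   # the Earth as imaged
rendered = np.roll(observed, 3, axis=1)         # metadata off by 3 px
lat_grid = np.repeat(np.linspace(90, -90, h, dtype=np.float32)[:, None], w, 1)
lon_grid = np.repeat(np.linspace(-180, 180, w, dtype=np.float32)[None, :], h, 0)

# observed(y, x) ~ rendered(y + flow[..., 1], x + flow[..., 0]); the flow is
# only meaningful near edges and features, which is why coastlines matter.
flow = cv2.calcOpticalFlowFarneback(observed, rendered, None,
                                    0.5, 4, 31, 3, 7, 1.5, 0)
yy, xx = np.mgrid[:h, :w].astype(np.float32)
# Resample the metadata grids at the displaced positions to re-register them.
lat_fixed = cv2.remap(lat_grid, xx + flow[..., 0], yy + flow[..., 1],
                      cv2.INTER_LINEAR)
lon_fixed = cv2.remap(lon_grid, xx + flow[..., 0], yy + flow[..., 1],
                      cv2.INTER_LINEAR)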
scalbers
post Dec 10 2016, 10:00 PM
Post #129


Maybe the scientists are doing something like this: https://ntrs.nasa.gov/archive/nasa/casi.ntr...20160011149.pdf


Stratespace
post Dec 10 2016, 10:25 PM
Post #130


You are right; thank you very much for the link!
Floyd
post Dec 11 2016, 02:32 PM
Post #131


Here is a maybe crazy, maybe not-so-crazy idea: could interested scientists or institutions set up one to three dozen lasers around the globe that point at DSCOVR during daylight and act as fiducial markers? The images would all contain a set of hot pixels for precise alignment. I'm sure a few universities across the globe would be happy to operate a facility to put themselves on the map as reference points. Laser wavelengths could be chosen to blind only one channel of one pixel for each fiducial laser.



Stratespace
post Dec 11 2016, 04:24 PM
Post #132


QUOTE (Floyd @ Dec 11 2016, 03:32 PM) *
Here is a maybe crazy, maybe not-so-crazy idea: could interested scientists or institutions set up one to three dozen lasers around the globe that point at DSCOVR during daylight and act as fiducial markers? The images would all contain a set of hot pixels for precise alignment. I'm sure a few universities across the globe would be happy to operate a facility to put themselves on the map as reference points. Laser wavelengths could be chosen to blind only one channel of one pixel for each fiducial laser.
According to the paper linked above, they don't need this. They cross-correlate the EPIC images with images taken simultaneously by satellites in LEO, achieving a precision of a fraction of a pixel, which is sufficient for their needs.
Considering they already have very precise per-image registration for scientific purposes (the "navigation" precision in their paper), it's very unfortunate that they don't update the L1B data with those corrected lat/lon values...
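The operation the paper describes, registration to a fraction of a pixel by cross-correlation, is standard; a minimal sketch with scikit-image on synthetic data (tooling choice mine, not necessarily theirs):
CODE
import numpy as np
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
reference = rng.random((256, 256))                 # stand-in reference view
moving = np.roll(reference, (3, -5), axis=(0, 1))  # stand-in shifted view

# upsample_factor refines the correlation peak to sub-pixel precision.
shift, error, _ = phase_cross_correlation(reference, moving,
                                          upsample_factor=100)
print(shift)   # ~ [-3.  5.]: the offset needed to re-register `moving`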
scalbers
post Dec 11 2016, 06:08 PM
Post #133


It seems from the reference I linked that the image correlation is done mainly in areas near the center of the disk. Even when I do this manually, discrepancies show up very close to the limb. I suppose this relates to what was mentioned above about the EPIC team's decision on where the limb is when registering and combining the images from the various channels. It sounds like I might get better results with my blinking comparison if I worked with the raw data rather than the web imagery.


Explorer1
post Dec 21 2016, 07:58 PM
Post #134


Big changes to the public website interface, plus an 'enhanced colour' option:

http://epic.gsfc.nasa.gov/
Michael Boccara
post Jan 24 2017, 09:45 AM
Post #135


Following the previous discussion with Stratespace about DSCOVR data inaccuracies, I have some new information from the DSCOVR team: they finally acknowledged an error in their ephemeris, at least in the lunar position, caused by using geodetic coordinates where geocentric coordinates were needed. It produced an absurd "rosette-shaped" path for the Moon around the Earth, as shown in the video below (sorry for the very artisanal screen capture, showing my Unity3D development environment):
https://drive.google.com/file/d/17C7FVMH5oU...y9hZKnTlLg/view
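For readers unfamiliar with the distinction: geodetic latitude is referred to the ellipsoid normal, geocentric latitude to the Earth's centre; they differ by up to about 0.19 deg near 45 deg latitude, which is small on a map but large when silently mixed into an ephemeris. The standard conversion, as a sketch with WGS84 values (not the team's code):
CODE
import numpy as np

E2 = 0.00669437999014   # WGS84 first eccentricity squared

def geodetic_to_geocentric(lat_deg):
    """tan(lat_geocentric) = (1 - e^2) * tan(lat_geodetic)."""
    return np.degrees(np.arctan((1.0 - E2) * np.tan(np.radians(lat_deg))))

for lat in (0.0, 30.0, 45.0, 60.0):
    print(f"geodetic {lat:4.0f} -> geocentric {geodetic_to_geocentric(lat):8.4f} deg")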

This same error is also behind a problem I had with the famous "Moon photobombing" images of July 5, 2016, when the Moon passed through EPIC's field of view. Here's an article for those who missed it:
https://www.nasa.gov/feature/goddard/2016/n...-time-in-a-year
Now see the video I made with my Blueturn app, which shows interpolated EPIC images from July 5 together with a 3D model of the Moon (rightmost) based on their ephemeris:
https://vimeo.com/189285144/6916063e34
I am now waiting for the EPIC team to fix their database; hopefully I'll then be able to integrate a correct 3D model of the Moon and fix the above video.

I am currently in discussion with the DSCOVR team to find out whether this error also applies to DSCOVR's position and attitude, in which case there is new hope of achieving an accurate orthophoto calibration of the images in 3D.
