High-Res DEMs from single HiRISE images, First results of new "Shape from Shading" algorithm
Post #1 | Jan 16 2010, 03:30 PM
Member | Group: Members | Posts: 713 | Joined: 30-March 05 | Member No.: 223
Hi all,
Here is the long-overdue continuation of the "Alien Landscapes" series, this time based on 3D DEMs generated with "Shape from Shading" from single HiRISE images. Enjoy! Click on the images for larger versions.

Detail views from PSP_002172_1410 (large gully system)
Detail view of gullies from PSP_001376_1675
Detail of gully system in PSP_002022_1455
Dune views from PSP_004339_1890
Detail from PSP_001834_1605

Here is some background info on the making of the images: "Shape from Shading" (SFS), i.e. the possibility to extract shape information from a single image, has always been a fascinating topic for me. Now I found the time to implement a prototype for a new SFS algorithm based on some ideas that I've been thinking about for a long time.

The problem with existing SFS approaches (see here for a survey) is that they either tend to over-smooth the details (due to the regularization constraint) or suffer from excessive noise in the high-frequency components of the reconstructed surface. Another problem is the large demand on CPU resources, which makes them very challenging to apply to large-scale input data such as HiRISE orbiter images. So for a long time I was rather sceptical about the potential of SFS, and my impression was that methods based on multiple images (stereo) must be far superior to single-image SFS.

However, after a long time of experimenting, combining existing approaches with some new ideas, I got the following quite promising first results that I'd like to share. All of the images were generated from a single HiRISE image (no depth information was used from stereo or laser altimeter data). Also, no texturing or additional coloring/shading was applied when rendering the surface; every detail visible is real 3D down to the pixel level. For rendering I used a very simple model based on Lambertian reflection with Gouraud shading.

The resolution of the images is still moderate: that is, downsampled detail crops on the order of 0.5-1 megapixels. However, despite the heavy math machinery that drives the core of the algorithm (several systems of equations with millions of unknowns), the processing time is still moderate (about 15 minutes per med-res image, using about 2 GB of main memory), so the application to full-res HiRISE images should be possible.

The following image shows an example to illustrate the general principle (click to enlarge). On the left-hand side is the 2D input image (a simple noisy JPEG from the web with unknown light source direction). The right-hand side shows the recovered 3D surface re-lit under a different light source direction.

Note that one problem of the current implementation of the algorithm is its vulnerability to notable distortions in the low-frequency components (i.e. large-scale variations) of the generated surface. However, I'm confident that this can be overcome by an improved version or by adding the large-scale depth information from stereo-based DEMs or altimeter data (MOLA) where available.
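To make the rendering step a bit more concrete: the post says the surface was shaded with a simple Lambertian model. Below is a minimal per-pixel sketch of Lambertian shading of a height field in Python. It is purely illustrative - it is not the actual renderer used for the images above, it computes per-pixel normals rather than the Gouraud interpolation mentioned, and the toy DEM and light direction are made up.

[code]
import numpy as np

def lambertian_render(dem, light_dir, albedo=1.0):
    # Surface gradients via central differences (rows = y, columns = x)
    dz_dy, dz_dx = np.gradient(dem.astype(float))
    # Un-normalized normals of z = f(x, y): (-dz/dx, -dz/dy, 1)
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(dz_dx)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # Unit vector pointing toward the light source
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    # Lambertian reflectance: I = albedo * max(n . l, 0)
    shaded = albedo * np.tensordot(normals, light, axes=([2], [0]))
    return np.clip(shaded, 0.0, 1.0)

# Toy example: re-light a made-up surface from the "north-west"
dem = np.random.default_rng(0).random((256, 256)).cumsum(axis=0)
img = lambertian_render(dem, light_dir=(-1.0, -1.0, 1.4))
[/code]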
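On the last point about the low-frequency distortions: one straightforward way to graft the large-scale shape of a stereo or MOLA DEM onto the pixel-level SFS relief is a frequency-split merge. The sketch below only illustrates that general idea and is not part of the algorithm described above; it assumes both DEMs are already co-registered on the same grid, and the crossover scale sigma_px is an arbitrary example value.

[code]
from scipy.ndimage import gaussian_filter

def merge_dems(sfs_dem, reference_dem, sigma_px=50.0):
    # Large-scale (low-frequency) components of both surfaces
    sfs_low = gaussian_filter(sfs_dem, sigma_px)
    ref_low = gaussian_filter(reference_dem, sigma_px)
    # Pixel-level relief from SFS, with its distorted large-scale part removed
    detail = sfs_dem - sfs_low
    # Large-scale shape from the reference DEM + fine detail from SFS
    return ref_low + detail
[/code]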
Post #2 | Jan 19 2010, 04:19 PM
Founder | Group: Chairman | Posts: 14449 | Joined: 8-February 04 | Member No.: 1
OK - yeah - do you think you'll be able to share the software and technique at some point? I have enough ideas for this that would keep you on your toes till April.
Post #3 | Jan 19 2010, 05:37 PM
IMG to PNG GOD | Group: Moderator | Posts: 2257 | Joined: 19-February 04 | From: Near fire and ice | Member No.: 38
ok, no problem: here I generated some views of it in 3D

*Jaw drops to the floor*

This is awesome, probably the most interesting UMSF-member-developed software I have seen in the history of UMSF, so I can simply repeat what Doug said above (actually I could probably keep you busy till April *next* year or longer ;-).

If I could combine output from SFS with the output from my stereomatcher, I'd get DEMs of probably something like 10 times higher resolution than I have ever dreamt of. Combining DEMs like this is probably not difficult. If you manage to get this to work the way you want to, there are lots of people here who would love to test it. And I would make my stereomatcher available - it would need a lot of sprucing up though, and it still has some bugs and quirks.

Combining results from these two approaches opens up *lots* of possibilities. In particular, the low-frequency variations from my software are pretty accurate (how accurate depends largely on how accurate the viewing geometry information is), while high-frequency details are a problem and frequently 'disappear'.

P.S.: One nice thing about the algorithm is that one does not need to specify any external calibration parameters, like the light source direction (sun incidence and azimuth), because those are estimated simultaneously with the recovered 3D surface. In contrast, my stereomatcher needs accurate viewing geometry information, field of view, etc. Inaccurate viewing geometry information typically manifests itself as a 'tilted' DEM.
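On the P.S.: the posts do not say how incidence and azimuth are estimated internally, but for comparison, a classical stand-alone estimator from the SFS literature recovers the sun azimuth from the mean direction of the image brightness gradients. A rough sketch of that idea, illustrative only and much cruder than a joint estimate with the surface:

[code]
import numpy as np

def estimate_light_azimuth(image):
    # Brightness gradients (rows = y, columns = x)
    gy, gx = np.gradient(image.astype(float))
    # For a statistically isotropic surface, the mean gradient points
    # toward the illuminated side; its angle approximates the sun azimuth.
    return np.arctan2(gy.mean(), gx.mean())
[/code]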