AVIATR - Titan Airplane Mission Concept: Proposed unmanned aerial exploration of Titan
Post #1 | Apr 16 2010, 12:20 AM
Senior Member | Group: Moderator | Posts: 2785 | Joined: 10-November 06 | From: Pasadena, CA | Member No.: 1345
The AVIATR mission concept is an unmanned aerial vehicle that would fly over Titan's surface. Its nominal one-year mission would return detailed, high-resolution images of Titan's diverse landscapes for better comparison to Earth's geological processes. Selected regions could be imaged at resolutions near 30 cm/pixel, equivalent to current HiRISE imaging of Mars. In addition, atmospheric sampling would yield profiles of Titan's thick lower atmosphere and of how it relates to Earth's atmospheric processes and weather systems.
Further details of the AVIATR mission concept were presented at the Lunar and Planetary Science Conference 2010 and at Titan Through Time 2010. See: Barnes et al., LPSC 41 (2010), Abstract 2551, "AVIATR: Aerial Vehicle for In-situ and Airborne Titan Reconnaissance." Freely available here: http://www.lpi.usra.edu/meetings/lpsc2010/pdf/2551.pdf
And also: http://www.info.uidaho.edu/documents/2010%...18467&doc=1
--------------------
Some higher-resolution images are available at my photostream: http://www.flickr.com/photos/31678681@N07/
Post #2 | Jul 13 2010, 02:33 PM
Senior Member | Group: Members | Posts: 1018 | Joined: 29-November 05 | From: Seattle, WA, USA | Member No.: 590
I think I see. Part of my confusion is that, in the computer biz, when we talk about the resolution of a screen, we always just mean the pixels. So, if I understand correctly, when you guys talk about resolution, you include all the factors that could degrade the image: the pixel scale, of course, but also atmospheric noise, diffraction, probably even noise in the electronics themselves. Beyond a certain point (all other things being equal), a finer pixel scale will not improve resolution at all. And so the dispute you two are having is not over the actual hardware being used but over the effect of these other factors?
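[The diminishing-returns point above can be made concrete with the Rayleigh criterion: once the pixel scale is finer than the diffraction blur of the optics, extra pixels add no new spatial information. A minimal sketch, with purely illustrative numbers that are not actual AVIATR specifications:]

```python
import math

def diffraction_limited_ground_resolution(wavelength_m, aperture_m, altitude_m):
    """Rayleigh criterion: theta ~ 1.22 * lambda / D, projected to the ground."""
    theta = 1.22 * wavelength_m / aperture_m  # angular resolution, radians
    return theta * altitude_m                 # metres on the ground

# Hypothetical example only: a 5 cm aperture imaging at 550 nm from 3.5 km altitude.
limit = diffraction_limited_ground_resolution(550e-9, 0.05, 3500)
print(f"diffraction-limited ground resolution ~ {limit:.3f} m")
# A pixel scale much finer than this blur width oversamples the optics:
# more pixels per blur width no longer improve the resolved detail.
```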
Ralph: When you talk about doing science below the pixel scale, are you talking about making repeated observations of the same thing and computing a higher-resolution model from that? That is, you have to depend on having a static target. Or do you mean something more complex? (I may be guilty of seeing Bayesian and Markov networks everywhere these days.) ;-) --Greg
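[The "repeated observations" idea described above is the core of multi-frame super-resolution: if each frame of a static target is offset by a known sub-pixel shift, the samples can be interleaved on a finer grid. A toy 1-D shift-and-add sketch; the function name and numbers are invented for illustration:]

```python
import numpy as np

def shift_and_add(frames, shifts, upsample=4):
    """Naive multi-frame super-resolution on 1-D signals.

    frames: list of low-res 1-D arrays of equal length.
    shifts: known sub-pixel offset (in low-res pixels) of each frame
            relative to the first; a static target is assumed.
    """
    n = len(frames[0])
    hi = np.zeros(n * upsample)      # accumulated samples on the fine grid
    weight = np.zeros(n * upsample)  # how many samples landed in each fine bin
    for frame, s in zip(frames, shifts):
        for i, v in enumerate(frame):
            # Place each low-res sample at its sub-pixel position.
            j = int(round((i + s) * upsample))
            if 0 <= j < len(hi):
                hi[j] += v
                weight[j] += 1
    return hi / np.maximum(weight, 1)

# Two frames offset by a quarter pixel interleave onto a 4x finer grid.
result = shift_and_add([np.ones(4), np.ones(4)], [0.0, 0.25], upsample=4)
```

[Real pipelines (e.g. drizzling dithered HST frames) additionally spread each sample over the fine grid and deconvolve the PSF, but the interleaving principle is the same.]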
Post #3 | Jul 14 2010, 04:54 PM
Member | Group: Members | Posts: 613 | Joined: 23-February 07 | From: Occasionally in Columbia, MD | Member No.: 1764
> And so the dispute you two are having is not over the actual hardware being used but over the effect of these other factors?

I don't know what the dispute was about. As far as I am concerned there is no dispute: the resolution as normally defined is 350 m, and that's that.

> Ralph: When you talk about doing science below the pixel scale, are you talking about making repeated observations of the same thing and computing a higher-resolution model from that? That is, you have to depend on having a static target. Or do you mean something more complex? (I may be guilty of seeing Bayesian and Markov Networks everywhere these days.) ;-)

What you describe sounds a bit like how I understand 'super-resolution' works. (I think the procedure can also be referred to as 'dithering'; it was used on Pathfinder, and also on HST.) Radio astronomers (whose angular resolution is typically low, defined by the real aperture) use similar methods, e.g. allowing objects to pass through the beam as the Earth or spacecraft rotates. The key is having a well-defined PSF, and having a precise enough pointing history to know where in the scene the pixels of the detector actually are.

But it can be as simple as taking an image (many pixels) of an object which is geometrically smaller than a pixel (e.g. a star) but whose image, as defined by the telescope optical system, is much larger. The information obtained by sampling many pixels allows you to estimate where the star was to much less than one pixel, if the point-spread function is known. That's a nicely-posed problem for a point source like a star; the cleverness (Maximum Entropy, Bayesian, whatever) comes in how you deconvolve that PSF from the image to make a best estimate of a more complex scene (non-point objects like planets, many stars, plus some noise).
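[The star example above can be sketched directly: with a single point source on a dark background and a known PSF (here assumed Gaussian for simplicity), an intensity-weighted centroid recovers the position to well below one pixel. All numbers are illustrative:]

```python
import numpy as np

def subpixel_centroid(image):
    """Locate a single point source to sub-pixel precision by
    intensity-weighted centroiding; assumes a dark, noise-free background."""
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    total = image.sum()
    return (xs * image).sum() / total, (ys * image).sum() / total

# Simulate a star: Gaussian PSF (sigma = 1.2 px) centred at x = 6.7, y = 4.3,
# sampled on a 12x12 detector. The true position lies between pixel centres.
yy, xx = np.mgrid[0:12, 0:12]
psf = np.exp(-((xx - 6.7) ** 2 + (yy - 4.3) ** 2) / (2 * 1.2 ** 2))
x0, y0 = subpixel_centroid(psf)
# x0, y0 recover the true position to a small fraction of a pixel.
```

[With noise, or with extended scenes, a simple centroid no longer suffices; that is where PSF-fitting and the deconvolution machinery (Maximum Entropy, Bayesian estimators) mentioned above come in.]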