MSL image metadata sources & documentation?
yogi
post Oct 10 2015, 11:38 AM
Post #1


Junior Member

Group: Members
Posts: 30
Joined: 8-September 14
From: London, UK
Member No.: 7254



Hi -

If this is not the right place to ask, sorry! Please point me to the right place.

I'm looking through JPL's JSON endpoint (http://json.jpl.nasa.gov/data.json) for rover imagery metadata, especially the MSL data.
I'd love:
  • pointers to documentation to help me understand the metadata I've found (see below)
  • pointers to other metadata sources. For example, I happened to find out yesterday that there is a separate JSON endpoint for map tile and traverse info. Is there a list of such metadata sources somewhere?
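
In case it helps to see what I'm working with, here is a minimal sketch of how I'm currently reading one of the per-sol image files (the sol-713 one I link below). It's Python with the requests library; since I'm not sure of the exact nesting of the JSON, I just search for the keys I care about rather than assuming a particular structure:

[code]
import requests

SOL_URL = "http://msl-raws.s3.amazonaws.com/images/images_sol713.json"
data = requests.get(SOL_URL).json()

def find_key(obj, key):
    """Recursively yield every value stored under `key`, wherever it appears."""
    if isinstance(obj, dict):
        if key in obj:
            yield obj[key]
        for value in obj.values():
            yield from find_key(value, key)
    elif isinstance(obj, list):
        for item in obj:
            yield from find_key(item, key)

# The two fields my questions below are mostly about:
print("rover_xyz:", next(find_key(data, "rover_xyz"), None))
print("camera_vector:", next(find_key(data, "camera_vector"), None))
[/code]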

Specific things I'm struggling to figure out from files such as http://msl-raws.s3.amazonaws.com/images/images_sol713.json:
  1. what's the coordinate frame for rover_xyz and the 'C' & 'A' vectors from the CAHVOR model?
    1. I believe 'C' is in meters but I'm not sure relative to what coordinate frame. Is the origin perhaps at the center of the rover? (That would make sense because C and A tend to be at about right angles, except when the camera is looking up or down)
    2. rover_xyz starts at (0,0,0) on sol 1 but in what unit? It's not meters.
  2. what are the semantics of camera_vector? It sometimes deviates appreciably from the image-plane normal 'A'.
  3. Am I right that the given CAHVOR/CAHVORE parameters and the other metadata from the JSON endpoint aren't enough on their own to place an image in the surrounding 3D world? The sensor's pixel density isn't specified anywhere, yet it seems to be needed to relate the H', V' vectors computed from A, H, V (which are in sensor pixels) to the C vector (which is in world coordinates, meters?). Currently I'm hard-coding the sensor pixel density for each camera. I've read the Di & Li (2004) paper on the CAHVOR camera model and its photogrammetric conversion for planetary applications, but that paper only deals with rendering a single image in its own camera plane, as opposed to placing several images on separate planes embedded in a shared 3D space. (A sketch of my current decomposition follows this list.)
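
For reference, this is roughly how I'm decomposing the CAHV part at the moment (a Python/NumPy sketch of my own code, ignoring the O, R and E terms; the formulas are my reading of the CAHV literature, so corrections welcome):

[code]
import numpy as np

def decompose_cahv(C, A, H, V):
    """Split the CAHV vectors into pinhole quantities: C is in world units
    (metres?), A is a unit pointing vector, and H, V mix the image-plane
    directions with pixel-scale intrinsics."""
    C, A, H, V = (np.asarray(x, dtype=float) for x in (C, A, H, V))
    hc = H @ A                    # principal point column (pixels)
    vc = V @ A                    # principal point row (pixels)
    Hp = H - hc * A               # un-normalised H'
    Vp = V - vc * A               # un-normalised V'
    hs = np.linalg.norm(Hp)       # horizontal focal length (pixels)
    vs = np.linalg.norm(Vp)       # vertical focal length (pixels)
    return C, A, Hp / hs, Vp / vs, hc, vc, hs, vs

def pixel_ray(C, A, Hprime, Vprime, hc, vc, hs, vs, x, y):
    """World-frame ray through image pixel (x, y): origin C, unit direction."""
    d = A + ((x - hc) / hs) * Hprime + ((y - vc) / vs) * Vprime
    return C, d / np.linalg.norm(d)
[/code]

The hard-coded pixel density I mentioned comes in when I try to lay the image plane out in metres next to C; question 3 is really about whether that extra step should be necessary at all.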

Any insights or pointers to documentation?

The reason I'm looking into this is pure curiosity (I'm a layperson). I discovered mhoward's wonderful Midnight Planets recently and would love to build part of something similar as a learning experience. I've also checked out NAIF, which Michael mentions on his website as a source of metadata, but I'm not sure exactly where to look: I've dug through their collection of command-line tools written in C, but it all seemed far more complex than should be required just to find out where, in what attitude, and with what camera model an image was taken.
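
For what it's worth, this is the sort of query I was hoping NAIF/SPICE would let me make. It's only a guess at what it might look like via the spiceypy Python wrapper, and the kernel path and frame names are placeholders I haven't verified:

[code]
import spiceypy as spice

# Placeholder meta-kernel: the real MSL kernels on the NAIF server will have
# different names; this just stands in for "load everything MSL-related".
spice.furnsh("msl_meta_kernel.tm")

# Roughly sol 713 (the sol whose JSON file I linked above).
et = spice.str2et("2014-08-08T12:00:00")

# Rover position relative to Mars, in the Mars body-fixed frame.
pos, light_time = spice.spkpos("MSL", et, "IAU_MARS", "NONE", "MARS")

# Rotation from the Mars body-fixed frame to (what I assume is) the rover frame.
rot = spice.pxform("IAU_MARS", "MSL_ROVER", et)

print(pos)
print(rot)
[/code]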

Thanks heaps for any help/pointers!

Tobias