
AdrianC
Posted on: Oct 30 2013, 07:54 PM


Newbie
*

Group: Members
Posts: 8
Joined: 3-October 13
Member No.: 7010


QUOTE
Is the varying color of the terrain due to surface specularity and changing camera position implicit in your flow chart?
I'd think that specularity needs to be subtracted away, so that the Phong lighting conditions and the surface specularity are separated for an appropriate texture / vertex definition and rendering.
Or do you intend to switch textures during the sequence (working purely with Phong) as an alternative?


@Gerald
Specularity will be subtracted from each image. This will definitely keep the result close to the original.
I was thinking of complementing the effect with a simple shading model, but the images contain lots of phenomena: specularity, atmospheric diffraction, CCD amplifier glow (vertical glowing strips, mostly visible when the shield detaches), etc.
There will be texture switching, but performed only on the frames with the specularity subtracted; the specularity will be added back onto the final composite.
On the other hand, the CCD amplifier glow will be an added effect, because the shield will be an actual animated 3D object with textures derived from the originals, but with CG shadows.
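One simple way to sketch that specularity split: for registered frames, the specular lobe moves with the camera while the diffuse term stays put, so the per-pixel temporal minimum gives a crude diffuse layer, and each frame's positive residual is the specularity to be added back on the composite. This is only an illustration of the idea (function name and the registered-luminance assumption are mine), not the actual pipeline:

```python
import numpy as np

def split_specular(frames):
    """Split registered frames into a shared diffuse layer and
    per-frame specular residuals.

    frames: (T, H, W) registered luminance frames.  The per-pixel
    temporal minimum is a crude diffuse estimate; each frame's
    non-negative residual is its specular layer.
    """
    frames = np.asarray(frames, dtype=float)
    diffuse = frames.min(axis=0)                    # camera-independent layer
    specular = np.clip(frames - diffuse, 0.0, None) # per-frame residual
    return diffuse, specular
```

By construction, `diffuse + specular` reconstructs every input frame exactly, which is what lets the specular layer be subtracted for texturing and added back afterwards.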

QUOTE
It's unclear from the processing flow, but it's possible that that's essentially what he's proposing.

That said, estimating the position and pose of each MARDI frame would be a useful contribution all by itself.


@djellison and @mcaplinger

That is correct. Summing the whole process up in a few words: it is basically MARDI imagery draped over HiRISE elevation data.
The key difference I am trying to make is to keep the whole process as scientifically accurate as possible, in order to derive some interesting data such as instant altitude, instant velocity, orientation, the amount of dust blown before touchdown, etc.
Everything depends on how all the data sets come together.
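Once a position has been solved for each frame, instant velocity falls out of a finite difference over the frame timestamps. A minimal sketch (the function name and the assumption of already-solved, timestamped camera positions are mine):

```python
import numpy as np

def instant_velocity(positions, times):
    """Finite-difference velocity from solved per-frame camera positions.

    positions: (N, 3) camera positions in metres.
    times:     (N,)  frame timestamps in seconds.
    Interior samples use central differences; the two endpoints fall
    back to one-sided differences.
    """
    positions = np.asarray(positions, dtype=float)
    times = np.asarray(times, dtype=float)
    return np.gradient(positions, times, axis=0)
```

Instant altitude then comes directly from the vertical component of each solved position relative to the DEM.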

As digital VFX artists we usually work with artistic impressions of reality and, mostly because of time constraints, plain old empirical approaches to just about everything. It may not sound like what "making of" promotional materials describe about movie VFX, but that's just how things are done in this industry. After a whole decade I find this a disappointing and inefficient process, with too much decision weight resting on untrained people such as some directors and directors of photography. I'm not naming names, but imagine that some DOPs have immensely limited knowledge of optics, basic light behavior, and sometimes physics in general.
Please excuse this rant; it's strictly my opinion, but it is one of the main reasons I started this project. I want it to have quantifiable results, not end up as an artistic impression or some "simulation".
  Forum: MSL · Post Preview: #204212 · Replies: 24 · Views: 10620

AdrianC
Posted on: Oct 26 2013, 05:31 PM




I'm posting a workflow diagram to better illustrate the process I'm trying to follow.
Please note I made this from memory on my phone, so please excuse any errors.
The diagram is based on the assumption that the main image source is compressed.
  Forum: MSL · Post Preview: #204091 · Replies: 24 · Views: 10620

AdrianC
Posted on: Oct 26 2013, 05:09 PM




QUOTE
Are you sure? A quick look at http://pds-imaging.jpl.nasa.gov/data/msl/M...EX/RDR_CMDX.TAB shows 1280 EDL frames (all as "MrdiRecoveredProducts") though these are inconveniently scattered across releases 1-3 because of their transmission times (32 on 1, 864 on 2, and 384 on 3).


This is good news. My only source was MSL Curiosity Analyst's Notebook http://an.rsl.wustl.edu/msl/mslbrowser/br2.aspx?tab=solsumm
The search returned only 388 MARDI products. I was under the assumption that this was the whole release.
I greatly appreciate the help. smile.gif

QUOTE
Per the documentation, the raw data are the digitized samples coming out of the camera and then passed through a 12-to-8-bit square-root table with no additional lossy compression. I'm not certain what you are concerned about in your images, but we've found that the two green pixels in the Bayer pattern have slightly different responses and this has to be accounted for for best color reconstruction. Bayer reconstruction is a very involved topic.


My first concern was that I was forced to use the public JPG images, which are not that great for feature detection.
My second concern was that, not being able to correctly parse the raw files and being forced to use the DRCL .IMG versions, I was stuck with the moire pattern generated by the different responses of the two green pixels.

Again, thanks for the valuable information. I will now try to implement the decompanding tables.
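For reference, a 12-to-8-bit square-root encoding is inverted with a simple 256-entry lookup table. The real flight tables are in the PDS documentation and should be used for actual processing; the LUT below only illustrates the idealized square-root law:

```python
import numpy as np

def build_decompanding_lut(bits_in=8, bits_out=12):
    """Idealized 8-bit -> 12-bit inverse square-root LUT.

    Inverts the idealized companding law  v8 = max8 * sqrt(v12 / max12).
    The actual flight tables differ slightly and live in the PDS/SIS
    documentation.
    """
    max_in = 2 ** bits_in - 1    # 255
    max_out = 2 ** bits_out - 1  # 4095
    v = np.arange(max_in + 1, dtype=np.float64)
    return np.round((v / max_in) ** 2 * max_out).astype(np.uint16)

def decompand(img8, lut):
    """Map an 8-bit companded image back to approximate 12-bit linear DN."""
    return lut[img8]
```

Because the square root compresses the bright end, the recovered 12-bit values are roughly photon-noise-uniform, which is the point of companding in the first place.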

  Forum: MSL · Post Preview: #204090 · Replies: 24 · Views: 10620

AdrianC
Posted on: Oct 26 2013, 12:19 PM




This is an HSV colorspace version to further show the chrominance macro-blocking.
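Isolating the HSV saturation channel is a quick way to make the chroma macro-blocks pop out, since block-based compression disturbs chrominance far more than luminance. A minimal sketch (my own helper, not part of the original processing):

```python
import numpy as np

def saturation_map(rgb):
    """Per-pixel HSV saturation of an RGB image with values in [0, 1].

    S = (max - min) / max, with S = 0 where the pixel is black.
    Compression macro-blocks show up as flat rectangular patches here.
    """
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    return np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1.0), 0.0)
```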
  Forum: MSL · Post Preview: #204086 · Replies: 24 · Views: 10620

AdrianC
Posted on: Oct 26 2013, 12:16 PM




Upon further inspection of what is available in the PDS, I've hit a major snag. The raw data is a small subset of what was recorded, compared with the whole JPG sequence found here > http://mars.nasa.gov/msl/multimedia/raw/?s...mp;camera=MARDI

This means that of a total of 1283 frames recorded, about 650 - 750 frames are relevant to the EDL, and on the PDS website the latest release is only 388 frames long.
A quick look indicates that this release contains every 3rd frame of what was captured. This means I am forced to work with the full JPG sequence in order to compute the whole camera motion.

About the raw non-debayered images:
I tried debayering blindly, without respecting the tables in the documentation. I have found that there are two schemes for interpolating the green channel, but there is more work to be done here.
One conclusion is that the compression is applied on board the craft, probably for memory reasons (4 GB of DRAM, if I remember correctly).
The point is that even the non-debayered images exhibit signs of spatial compression, reducing the color fidelity by introducing visible color macro-blocks.
A quick comparison is in the attachment.
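The green-response mismatch that causes the checkerboard/moire artifact can be compensated before demosaicing by equalizing the means of the two green sub-lattices. A sketch, assuming an RGGB/GRBG-style layout where the two greens sit at (even, odd) and (odd, even) sites (the true MARDI pattern is in the camera documentation):

```python
import numpy as np

def balance_greens(mosaic):
    """Equalize the two green sub-lattices of a raw Bayer mosaic.

    Returns a float copy in which the second green lattice is rescaled
    so its mean matches the first.  The slight gain difference between
    the two greens is what produces the checkerboard artifact after
    naive demosaicing.
    """
    m = np.asarray(mosaic, dtype=float).copy()
    g1 = m[0::2, 1::2]                 # green sites on red rows
    g2 = m[1::2, 0::2]                 # green sites on blue rows
    gain = g1.mean() / g2.mean()       # relative response of the greens
    m[1::2, 0::2] *= gain              # bring g2 onto g1's scale
    return m
```

A global gain is the crudest possible correction; a per-tile or intensity-dependent fit would be closer to what proper Bayer reconstruction requires.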
  Forum: MSL · Post Preview: #204085 · Replies: 24 · Views: 10620

AdrianC
Posted on: Oct 9 2013, 09:01 AM




Sorry. Obviously I meant SkyCrane.
The patterns in which the dust is blasted are very interesting. Yes, the one on the left is smaller; either the engines were correcting attitude or the terrain is significantly sloped. Or both.
Once I am done registering all the images, things will get clearer.
  Forum: MSL · Post Preview: #203683 · Replies: 24 · Views: 10620

AdrianC
Posted on: Oct 8 2013, 07:29 PM




Thanks for the feedback everyone.

About the estimated work time: yes, it might be completely erroneous. There are lots of bumps I have to deal with, mainly with the data formats. I will come right back with a complete set of questions regarding the PDS IMG format (I'm reading the documentation).
In the meantime, if anybody has a favorite reader / converter for PDS IMG images, please let me know.

Also, I started running a few tests on how to deal with the dust we see during the terminal phase. Nothing conclusive yet. As mcaplinger stated, this is a troublesome portion of the imagery. My first take: I will try not to use the dusty image sets and use data from previous frames instead. This violates the commitment to use only MARDI imagery, because the dust must be added back somehow.

This is an animation of 4 frames; on the left is the difference between a clean frame and the luminance of each dusty frame. The result is an enhanced image of the dust blown by the lander.
(Please note this was done on a compressed data set (the old set of JPEGs), with no lens distortion removed, no color look-up table, and no vignetting removed.)
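The clean-minus-dusty luminance difference described above can be sketched in a few lines. Registered RGB frames in [0, 1] and Rec. 601 luma weights are my assumptions; the post does not state the exact weighting used:

```python
import numpy as np

def dust_map(clean, dusty):
    """Enhance blown dust as the luminance difference between a clean
    reference frame and a dusty frame.

    clean, dusty: registered (H, W, 3) RGB arrays in [0, 1].
    Uses Rec. 601 luma weights (an assumption; any luma weighting
    that sums to 1 behaves similarly).
    """
    w = np.array([0.299, 0.587, 0.114])  # Rec. 601 luma weights
    luma_clean = clean @ w
    luma_dusty = dusty @ w
    return np.abs(luma_clean - luma_dusty)
```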

@MahFL you are right. I'm well used to this. In my world, all alterations made to a sequence of images must be invisible. There are a lot of movies out there with hundreds of hours of invisible work, mostly removing unwanted objects left in shot during production just because someone forgot. Nothing glamorous about that.
  Forum: MSL · Post Preview: #203670 · Replies: 24 · Views: 10620

AdrianC
Posted on: Oct 7 2013, 05:04 PM




Hello everybody.
I have been reading this forum for quite a while now (8+ months) but never had the nerve, or the data, to actually contribute.
Let me start by asking everybody two questions.

"What do you think about a high quality remasterization of MSL's MARDI descent and landing imagery?"

"Is it worth the effort having a higher resolution, higher frame rate version of the descent?"

Obviously I'm talking about a high-quality remaster (details will follow). My background is in digital visual effects (VFX), more specifically 10 years in matchmoving, which involves advanced photogrammetry knowledge, and digital compositing.

The final results will be:
-a minimum horizontal resolution of 2.6K
-25 to 60 fps (TBD)
-derived telemetry overlaid on the imagery, plus the current stage (e.g. heat shield separation, engines)
-the original radio comms as sound.

I am aware that such attempts already exist (here). That is just a mere optical-flow interpolation, which does not do any justice to the mission.

The simplified roadmap is:
-use HiRISE data to create a digital elevation model, then convert it to a polygonal mesh
-compute the absolute position of each MARDI frame
-camera-project a series of MARDI frames onto the ground geometry; each temporal sample will contribute to a higher-resolution image by being stacked on the ground geometry
-final rendering will be based ONLY on MARDI data, with interpolation between camera positions as physically accurate as possible.
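The camera-projection step above boils down to a pinhole projection of DEM mesh vertices into each solved MARDI pose. A sketch with no lens distortion (real work would use the camera's published calibration; the intrinsics below are placeholders, not MARDI values):

```python
import numpy as np

def project_points(points, cam_pos, R, f, cx, cy):
    """Project world points into an ideal pinhole camera.

    points : (N, 3) world coordinates (e.g. DEM mesh vertices)
    cam_pos: (3,)   camera position in world coordinates
    R      : (3, 3) world-to-camera rotation matrix
    f, cx, cy: focal length and principal point in pixels (placeholders;
               the real values would come from the camera calibration)
    Returns (N, 2) pixel coordinates (u, v).
    """
    pc = (points - cam_pos) @ R.T   # rotate into the camera frame
    z = pc[:, 2]                    # depth along the boresight
    u = f * pc[:, 0] / z + cx
    v = f * pc[:, 1] / z + cy
    return np.stack([u, v], axis=1)
```

Stacking then amounts to accumulating, for each mesh vertex, the samples from every frame whose projection lands on valid pixels, which is where the resolution gain over any single frame comes from.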

This is no easy task; my estimate is a minimum of 40-100 hours of work, with lots of data missing. There will be some programming and some small tool development. But since next month I will be out of a job (corporate financial problems), I thought I might ask you all about the possible interest in this project.

PS: If I posted in the wrong section, I ask the moderators to kindly help me.
  Forum: MSL · Post Preview: #203650 · Replies: 24 · Views: 10620


