3D Projecting JunoCam images back onto Jupiter using SPICE
Matt Brealey
post Dec 12 2017, 02:50 PM
Post #1





Hi Everyone,

As part of an upcoming VR/VFX-based project, I'm attempting to project the raw images from JunoCam back onto a 3D model of Jupiter, the goal being to do this precisely enough that multiple images from a specific perijove can be viewed on the same model. However, I'm currently having one or two issues calculating the correct JunoCam rotation values, and was hoping someone with more experience might be able to point me in the right direction!

My process for processing a single raw image is as follows:
  1. Recombine the raw framelets into multiple exposures, each matching the layout of JunoCam's CCD/the original exposures.
  2. Undistort these original exposures.
  3. Get the correct position/rotation of JunoCam using Spiceypy and the relevant kernels.
  4. Project the undistorted captured images through the now correctly positioned/rotated 3D camera onto a 3D model of Jupiter.
  5. Get the position of the Sun relative to Jupiter, and use this to remove Jupiter's specular highlight in the resultant image.
Thanks to Gerald's fantastic documentation, I'm reasonably confident I've achieved steps 1-2 (thank you very much, Gerald!). Using a meta-kernel generated by Boris Semenov along with additional ck/spk kernels as required, I've also managed to get the position/rotation data required in steps 3-5. However, my rotation values for JunoCam always seem to be slightly off in a suspiciously consistent way, which makes me think I'm perhaps misunderstanding the frames involved.

The frame transformations I'm calculating are currently: J2000 >> IAU_JUPITER >> JUNO_SPACECRAFT >> JUNO_JUNOCAM_CUBE >> JUNO_JUNOCAM.

Here's the snippet of Spiceypy code used:
CODE
import numpy as np
import spiceypy as spice

# rotationMatrixToEulerAngles and radiansXYZtoDegreesXYZ are my own helper
# functions for converting a rotation matrix to XYZ Euler angles in degrees.

# Calculate the rotation offset between J2000 and IAU_JUPITER
rotationMatrix = spice.pxform("J2000", "IAU_JUPITER", exposureTimeET)
rotationAngles = rotationMatrixToEulerAngles(rotationMatrix)
j2000ToIAUJupiterRotations = radiansXYZtoDegreesXYZ(rotationAngles)

# Get the position of JUNO_SPACECRAFT relative to JUPITER in the IAU_JUPITER frame, ignoring corrections
junoPosition, lt = spice.spkpos('JUNO_SPACECRAFT', exposureTimeET, 'IAU_JUPITER', 'NONE', 'JUPITER')

# Get the rotation of JUNO_SPACECRAFT relative to the IAU_JUPITER frame
rotationMatrix = spice.pxform("IAU_JUPITER", "JUNO_SPACECRAFT", exposureTimeET)
rotationAngles = rotationMatrixToEulerAngles(rotationMatrix)
junoRotations = radiansXYZtoDegreesXYZ(rotationAngles)

# Get the rotation of JUNO_JUNOCAM_CUBE relative to JUNO_SPACECRAFT, from the fixed matrix in juno_v12.tf
rotationMatrix = np.matrix([[-0.0059163, -0.0142817, -0.9998805],
                            [ 0.0023828, -0.9998954,  0.0142678],
                            [-0.9999797, -0.0022981,  0.0059497]])
rotationAngles = rotationMatrixToEulerAngles(rotationMatrix)
junoCubeRotations = radiansXYZtoDegreesXYZ(rotationAngles)

# Get the rotation of JUNO_JUNOCAM relative to JUNO_JUNOCAM_CUBE, from the Euler angles in juno_v12.tf
junoCamRotations = [0.583, -0.469, 0.69]

The resultant position coordinates and Euler angles are then transferred to a matching hierarchy of objects in 3D. The result is a JUNO_JUNOCAM 3D camera that should, in theory, see exactly what is shown in the matching original exposure from steps 1-2.
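As a side note, this matrix-to-Euler step is exactly where rotation-order conventions can silently bite. Here's a minimal NumPy sketch (assuming a ZYX order, analogous to what spice.m2eul(r, 3, 2, 1) computes; these helpers are illustrative, not the ones in my pipeline) that round-trips a known rotation:

```python
import numpy as np

def rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def euler_zyx_from_matrix(r):
    # Decompose r = Rz(a) @ Ry(b) @ Rx(c); a different multiplication
    # order yields different angles, which is exactly the kind of
    # mismatch a 3D package with another convention will produce.
    a = np.arctan2(r[1, 0], r[0, 0])
    b = -np.arcsin(r[2, 0])
    c = np.arctan2(r[2, 1], r[2, 2])
    return a, b, c

# Round-trip check: recover the angles we started with
angles = (0.3, -0.5, 1.1)
m = rz(angles[0]) @ ry(angles[1]) @ rx(angles[2])
assert np.allclose(euler_zyx_from_matrix(m), angles)
```

If the extracted angles don't round-trip through the 3D package's own rotation order, that's a strong hint the conventions disagree.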

Using the 8th exposure (calculated time 2017-05-19 07:07:11.520680) of this image taken during perijove 6, zooming in on the final projected result gives me the following image:



As you can see, the RGB alignment is close, but not quite perfect. After doing some additional experimentation, I realised that the solution was to add an additional X rotation of -0.195° to the camera for exposure 1, -0.195°*2 for exposure 2, -0.195°*3 for exposure 3, etc. Applying this additional rotation gives me the following image:



Still not perfect, as this value is no doubt slightly inaccurate, but it's certainly a lot closer. The consistency of this additional rotation (plus the fact that the required value changes between different raw images) makes me think that I'm perhaps missing something obvious in my code above! (I am admittedly not yet compensating for planetary rotation, but I wouldn't expect that to cause a misalignment of this magnitude.)

Problem 2 is that in addition to the alignment offset between exposures, there is another global alignment offset across the entire image-set. Looking again at the 8th exposure, this time in 3D space, we see the following:



The RGB bands are from our single exposure, which has been projected through our 3D JunoCam onto the black sphere, my stand-in for Jupiter (sitting at the exact centre of the 3D space). As you can see, there is an alignment mismatch. After yet more experimentation, I found that applying a rotation of approximately [-9.2°, 1.7°, -1.2°] to the entire range of exposures (at the JUNOCAM level) more closely aligns our 8th exposure with its expected position on Jupiter's surface; however, this does introduce a very slight drift in subsequent exposures:



I hope that all makes some sense! I'm (probably fairly obviously) new to SPICE, but after reading multiple tutorials/forum posts elsewhere, I still can't seem to pinpoint the one or two things I'm no doubt misunderstanding!

I'm incredibly excited about what's possible once I iron out the issues in this process, so if anyone has any suggestions, I'd certainly love to hear them!

Thanks very much in advance!

Matt
Gerald
post Dec 12 2017, 06:24 PM
Post #2





With 80.84 exposures per Juno rotation you should get close to a good alignment; otherwise, the root cause of the displacement is likely to be somewhere else. The interframe delay is documented in the JSON files, but is 1 ms off according to the JunoCam instrument kernel file, version 2. The usual value is near 0.37 seconds. The same file documents a timing offset. Note also that the angular velocity of Juno varies, even within a perijove pass.
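As a back-of-envelope sketch of how those numbers relate (the delay and spin-rate figures below are illustrative round values, not authoritative ones):

```python
# Juno spins at roughly 2 rpm; JunoCam takes one RGB exposure set per
# interframe delay, which is ~0.37 s (the JSON value minus the ~1 ms
# correction documented in the version-2 instrument kernel).
SPIN_RATE_DEG_PER_S = 2.0 * 360.0 / 60.0   # ~2 rpm -> 12 deg/s
INTERFRAME_DELAY = 0.3711                  # seconds, illustrative value

deg_per_exposure = SPIN_RATE_DEG_PER_S * INTERFRAME_DELAY   # ~4.45 deg of spin per exposure
exposures_per_rotation = 360.0 / deg_per_exposure           # ~80.8

# A residual mis-rotation of ~0.195 deg per exposure (as reported above)
# would correspond to a per-frame timing error of roughly:
timing_error_s = 0.195 / SPIN_RATE_DEG_PER_S                # ~16 ms
```

At ~12 deg/s of spin, even a few milliseconds of timing error produces a visible rotation offset, so small per-frame timing slop accumulates quickly across an image.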
mcaplinger
post Dec 12 2017, 11:31 PM
Post #3





QUOTE (Matt Brealey @ Dec 12 2017, 06:50 AM) *
if anyone has any suggestions, I'd certainly love to hear them!

With pxform and all the right kernels loaded, you should be able to go directly from the IAU_JUPITER frame to the JUNO_JUNOCAM frame without that other stuff. Not saying what you were doing was wrong, but it's a potential source of confusion.

I think 0.2 degrees is about 1 second of timing error, which is way bigger than what I would expect.

I tried to document what I think is the correct way to do this in the Python snippets in https://naif.jpl.nasa.gov/pub/naif/JUNO/ker..._junocam_v02.ti -- I presume you saw those.


--------------------
Disclaimer: This post is based on public information only. Any opinions are my own.
Bjorn Jonsson
post Dec 13 2017, 01:03 AM
Post #4





Another possible issue is that at least in my case, I always need to make small corrections to the pointing. I suspect it's mainly because of this from the kernel mentioned above by Mike:

QUOTE
"We have found that there is a fixed bias of 61.88 msec in the start time with a possible jitter of order 20 msec relative to the reported value...

The pointing error I get is typically 3-10 pixels.

However, it seems obvious to me that here something else is going on; even a 10 pixel pointing error results in much better color alignment (except maybe near the limb).

And this reminds me I really need to spruce up my own Juno-related SPICE code. It's really messy (including lots of code I commented out) because considerable experimentation was required following the Earth flyby to get everything working correctly (including a section of code to optionally correct the pointing).

And just in case: Be sure you are using up to date kernels. New versions of some of the instrument/frame kernels were released a few months ago. The new versions are significantly more accurate than the old ones (but nevertheless the old ones didn't result in a significant color misalignment).
Kevin Gill
post Dec 18 2017, 12:00 AM
Post #5





QUOTE (Bjorn Jonsson @ Dec 12 2017, 09:03 PM) *
Another possible issue is that at least in my case, I always need to make small corrections to the pointing. [...]


One thing I have found in my most recent project is to use the specific ids for the Red, Green, and Blue framelets and compute position and pointing separately based on those.

When I'm computing the visible ray surface intercept, I'm doing something similar to:

CODE
import math
import spiceypy as spice

# JunoCam NAIF instrument IDs (from the Juno frames/instrument kernels)
JUNO_JUNOCAM = -61500
JUNO_JUNOCAM_BLUE = -61501
JUNO_JUNOCAM_GREEN = -61502
JUNO_JUNOCAM_RED = -61503
JUNO_JUNOCAM_METHANE = -61504

def surface_point_for_channel_at_et(channel, et):
    # Look up the channel's frame and boresight, then intersect that
    # boresight ray with Jupiter's reference ellipsoid
    shape, frame, bsight, n, bounds = spice.getfov(channel, 4, 32, 32)
    spoint, etemit, srfvec = spice.sincpt("Ellipsoid", "JUPITER", et, "IAU_JUPITER", "CN+S", "JUNO", frame, bsight)

    radius, lon, lat = spice.reclat(spoint)
    return math.degrees(lat), math.degrees(lon), radius

et = spice.str2et("2016-DEC-11 17:14:13.003")
spoint_red = surface_point_for_channel_at_et(JUNO_JUNOCAM_RED, et)
spoint_green = surface_point_for_channel_at_et(JUNO_JUNOCAM_GREEN, et)
spoint_blue = surface_point_for_channel_at_et(JUNO_JUNOCAM_BLUE, et)


You'll see that at the exact same time, the boresight intercepts will be different. I wonder if this might help resolve your problem.

-- Kevin
mcaplinger
post Dec 18 2017, 01:45 AM
Post #6





QUOTE (Kevin Gill @ Dec 17 2017, 04:00 PM) *
One thing I have found in my most recent project is to use the specific ids for the Red, Green, and Blue framelets and compute position and pointing separately based on those.

You can try that if you want to, but I'm not sure if those boresights have been corrected for distortion -- we use the JUNO_JUNOCAM frame in our recommended processing.


Matt Brealey
post Dec 18 2017, 03:54 PM
Post #7





Hi Everyone,

Thanks so much for the responses! After re-writing and re-checking the code several times over the last few days, I came to the conclusion that the 3D software I was using was actually the cause of the problem. It appears that it’s incorrectly concatenating rotation matrices, somehow resulting in an incorrect rotation order when finally converting to Euler angles (the logic of this frankly baffles me slightly!)

I’ve managed to get around this by using SPICE itself to calculate the matrix-to-Euler part of the process, and the result is MUCH better. I’m now getting the expected camera rotations for every raw image/metadata file combo I throw at it. Here's a very quick test render of a processed raw image from perijove 06 (please excuse the quick sharpening/color correction, which is only applied in order to better see render errors).

Although there are clearly still a few things to be worked out, this is getting me much closer to a usable image, far more quickly than my previous 2D-only method. From start to finish, processing a new raw image now takes only ~15-20 seconds, as opposed to ~2-3 hours of manual tweaking with the 2D method. This includes combining the framelets, undistorting, calculating the camera positions, projecting onto the surface and stitching into a single image/texture (I'm not yet incorporating the removal of Jupiter's specularity, which will likely add another few seconds per image).

The next step is to try and combine multiple exposures into a single texture. I've just done a quick test using 8 images taken over 20 minutes from perijove 6, and the resultant GIF is available here. As you can see, there's currently quite a lot of movement in the positions of surface features across the 8 frames, with exposures 4-5 being the most inconsistent of the group. Whilst some disparity is to be expected due to this being an unwrapped 3D texture, if anyone has any other avenues for me to explore, I'd love to hear them! This evening's task is to build in an extra control allowing me to optionally add that 0-20 msec jitter in start time per image, which should hopefully start to address the issues.
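For reference, the control I have in mind is essentially just a tunable per-image offset on the exposure times. A hypothetical sketch (the 61.88 ms bias figure comes from juno_junocam_v02.ti; the helper name is my own, and the sign convention on the bias may need flipping):

```python
# Fixed start-time bias documented in juno_junocam_v02.ti, in seconds
START_TIME_BIAS = 0.06188

def exposure_et(start_et, k, interframe_delay, jitter=0.0):
    # Ephemeris time of the k-th exposure (k = 0, 1, 2, ...);
    # 'jitter' is the hand-tuned 0-20 ms per-image term, in seconds.
    return start_et + START_TIME_BIAS + jitter + k * interframe_delay
```

The idea is to leave `jitter` as the single free parameter to tweak per image when aligning exposures against the surface.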

Gerald: Thanks for the advice! As part of trying to understand my current alignment issues, I'm going to start calculating angular velocity/exposures-per-rotation for every raw image processed, so your values are very helpful.
mcaplinger: I did see those snippets, thank you! It's incredibly useful to find things like that amongst the docs.
Bjorn: Thanks for highlighting the start time jitter, I did somehow miss that on my initial pass. And you were right about those kernels, too - the recent ones provided much better results.
Kevin: Interesting, thanks! I'm not quite brave enough to go the surface intercept route just yet, but the results you're getting look fantastic! I'd be very interested to compare the results from our two methods once I iron out the issues, if you'd be up for perhaps processing the same image? :-)

Cheers!

Matt
Kevin Gill
post Dec 18 2017, 04:09 PM
Post #8





I'm glad you figured out a solution! The stuff I've been working on has been producing map-projected images, but I think it might be a dead end for producing perspective output (which is my primary goal) unless I bring Maya or Blender into the mix. The boresight calculations are pretty good for the individual color channels, but like mcaplinger said, I've seen distortion around the extremes of the field of view.
Brian Swift
post Dec 18 2017, 06:50 PM
Post #9





Question for everyone with a raw pipeline using the juno_junocam_v02.ti lens parameters: what is the magnitude of the color mis-alignment you observe (particularly near the edges of the CCD) - sub-pixel, one pixel, or several pixels?
mcaplinger
post Dec 19 2017, 05:31 PM
Post #10





QUOTE (Brian Swift @ Dec 18 2017, 10:50 AM) *
Question for everyone with a raw pipeline using the juno_junocam_v02.ti lens parameters - what is the magnitude of the color mis-alignment you observe (particularly near the edges of the CCD), sub-pixel, pixel, or several pixels?

The standard deviation of our model validation efforts to date (based on comparing the observed vs. predicted positions of the Galilean satellites on orbit 1) is about 1 pixel in X and about 2.8 pixels in Y, but that probably doesn't get more than halfway to the edge of the field. The presumption is that the larger Y error is due to timing slop.

Try as we might, I'm not expecting that we will ever come up with a subpixel-accurate model that works across the whole field without some kind of ad hoc adjustment (at a minimum, a per-image timing tweak). I'd love to be proven wrong.


Brian Swift
post Dec 21 2017, 06:16 PM
Post #11





QUOTE (mcaplinger @ Dec 19 2017, 09:31 AM) *
The standard deviation of our model validation efforts to date (based on comparing the observed vs predicted position of the galilean satellites on orbit 1) is about 1 pixel in X and about 2.8 pixels in Y


Thanks. That's on the order of what I'm seeing with my current lens model, which is based on CruisePhase and ORBIT_00 "stars".
