Unmanned Spaceflight.com _ Juno _ Juno PDS data

Posted by: elakdawalla Jan 8 2016, 10:15 PM

There is now PDS-format JunoCam cruise and Earth flyby data available; it's been submitted to the PDS, but MSSS has gone ahead and posted it on their website. http://planetary.s3.amazonaws.com/data/juno/junocam_cruise.html. Unlike my usual index pages, there aren't any thumbnails because of the odd nature of JunoCam images, with their long skinny shapes and interleaved framelets. I haven't played much with these data because it's a bit beyond my skill -- I look forward to seeing what any of you can do with it.

Posted by: Gerald Jan 14 2016, 02:08 AM

In the meantime I've looked at all of the 2x121 IMG files, but only at a few in more detail.

Filename convention, with JNCE_2013282_00M103_V01 as an example (see the parsing sketch after this list):
- 3 letters "JNC" for JunoCam,
- 1 letter, "E" for EDR, "R" for RDR,
- "_" always,
- 4 digit year,
- 3 digit day of year,
- "_" always,
- "00" common in all files thus far,
- 1 letter for applied filters, "A": all, "C": colors red, green, blue, "R" red, "G" green, "B" blue, "M" methane,
- 3 digits image counter,
- "_V" version prefix always,
- 2 digits version number.
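
As an illustration, here is a minimal Python sketch of parsing this convention (the regex and the field names are mine, not from any official documentation):

CODE
import re

# Pattern for product IDs like JNCE_2013282_00M103_V01, following the
# convention listed above; group names are illustrative.
PATTERN = re.compile(
    r"^JNC(?P<level>[ER])_"    # JNC + E (EDR) or R (RDR)
    r"(?P<year>\d{4})"         # 4-digit year
    r"(?P<doy>\d{3})_"         # 3-digit day of year
    r"(?P<prefix>\d{2})"       # "00" in all files thus far
    r"(?P<filters>[ACRGBM])"   # filter code
    r"(?P<counter>\d{3})"      # image counter
    r"_V(?P<version>\d{2})$"   # 2-digit version number
)

def parse_product_id(name):
    m = PATTERN.match(name)
    if m is None:
        raise ValueError("not a JunoCam product ID: " + name)
    return m.groupdict()

print(parse_product_id("JNCE_2013282_00M103_V01"))
# {'level': 'E', 'year': '2013', 'doy': '282', 'prefix': '00',
#  'filters': 'M', 'counter': '103', 'version': '01'}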

The IMG files are raw data without a header; the header is instead in a separate LBL file.
EDRs are 8-bit unsigned char, RDRs 16-bit unsigned short, big-endian encoded, hence the bytes need to be swapped on little-endian (Intel-type) processors.
Some of the "methane" test images use sampling factor 2 and are 816 pixels wide - a little less than 1648/2 - starting with 2013107_00M053.
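
For illustration, a minimal numpy sketch of reading these files under the assumptions just stated; the dimensions must be taken from the detached LBL label (function names are mine):

CODE
import numpy as np

def read_edr(path, lines, samples=1648):
    # EDR: 8-bit unsigned char, no embedded header
    return np.fromfile(path, dtype=np.uint8).reshape(lines, samples)

def read_rdr(path, lines, samples=1648):
    # RDR: 16-bit unsigned short, big endian; the '>u2' dtype makes numpy
    # swap the bytes automatically on little-endian (Intel-type) machines
    return np.fromfile(path, dtype=">u2").reshape(lines, samples)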

The RDR LBL files say

QUOTE
SAMPLE_BIT_MODE_ID = "SQROOT"

just as for the EDRs, but at first glance they look linearized to me.

Here is a comparison of an 8-fold enlarged crop of EFB13 with the corresponding files JNCE_2013282_00M103_V01 and JNCR_2013282_00M103_V01 - the EDR converted in a straightforward way, the 16-bit RDR grey values divided by 20.25 to use the whole grey-scale range:


There are at least two kinds of differences between EFB13 and the EDR:
- EFB13 - at least the version I worked with - shows JPG artifacts,
- the EDR image shows 16x16-pixel macroblocks, possibly to account for HDR.
The macroblocks are present in the RDR, as well.

I didn't find information on how the raw data numbers can be restored.
So some questions about alternative approaches arise:
- Do additional tables of grey-value offsets for each macroblock exist?
- Does a convention exist for how to calculate the relative offset between two neighboring macroblocks?
- Is a subsequent application of an H.264-compliant or similar deblocking post-process assumed?

Once the macroblock question is solved, some of the EFB images might eventually yield better results than the initial EFB versions.

Several images appear well-suited to study
- image geometry, effects of TDI, possible probe nutation, using stars,
- stray light of the Sun as a well-defined source of light.


Posted by: mcaplinger Jan 14 2016, 04:03 AM

QUOTE (Gerald @ Jan 13 2016, 06:08 PM) *
I didn't find information on how the raw data numbers can be restored.
So some questions about alternative approaches arise:
- Do additional tables of grey-value offsets for each macroblock exist?
- Does a convention exist for how to calculate the relative offset between two neighboring macroblocks?
- Is a subsequent application of an H.264-compliant or similar deblocking post-process assumed?

I'm not sure exactly what you mean by this. The 16x16 block artifacts are from the onboard data compression (same as was used for the MOC way back when -- JPEG-like but with 16x16 transform blocks.) There is no way in general to restore anything as the raw image was never transmitted.

The only difference between the EDR and the RDR is that the RDR has been linearized and a radiometric scale factor applied.

If this had been a real PDS release you would also have access to the documentation. That release is coming but I don't know when; unlike previous missions we don't deliver directly to the PDS. For now I'll see if I can post the documentation on the web site.

Posted by: Gerald Jan 15 2016, 02:41 AM

I've implemented a quick & dirty de-blocking algorithm which just estimates a constant grey-value offset for each 16x16-pixel block in the EDR image.
It seems that adding an appropriate constant per block might be able to remove the block edges:


I thought that the constant should have been stored somewhere, since it's only one byte per 16x16 block, at most.
The easiest explanation would be a glitch in the DCT decompression script causing omission or truncation of the lowest-frequency term.
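
A rough sketch of one way to estimate such per-block offsets (just the idea, not exactly the code I use): give each 16x16 block a constant offset so that its edges match the already-adjusted blocks to its left and above.

CODE
import numpy as np

def deblock(img, bs=16):
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            diffs = []
            if bx > 0:  # compare left edge with the left neighbor's right edge
                diffs.append(np.mean(out[by:by+bs, bx-1] - out[by:by+bs, bx]))
            if by > 0:  # compare top edge with the upper neighbor's bottom edge
                diffs.append(np.mean(out[by-1, bx:bx+bs] - out[by, bx:bx+bs]))
            if diffs:   # shift the whole block by the mean edge mismatch
                out[by:by+bs, bx:bx+bs] += np.mean(diffs)
    return out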

... An SIS-like documentation would certainly be useful and would answer several questions before they arise.

Posted by: elakdawalla Jan 15 2016, 02:53 AM

When I ran IMG2PNG on these images I thought the blockiness was because of a bug in IMG2PNG, but I've already emailed Bjorn so many feature requests recently that I didn't bug him about this one. Now I'm glad I didn't :)

Posted by: mcaplinger Jan 15 2016, 03:51 AM

QUOTE (Gerald @ Jan 14 2016, 06:41 PM) *
I've implemented a quick & dirty de-blocking algorithm which just estimates a constant grey-value offset for each 16x16-pixel block in the EDR image.

Is what you're calling the "raw" image stretched or something? The raw data don't/shouldn't have anything like the level of visible blockiness that this image has. Either there is something wrong with whatever you're using to convert formats or something messed up in the EDR files themselves.

[Indeed, on further checking my original decompressed version of EFB13 doesn't show these block artifacts. Something is screwed up in our processing chain that produced the EDRs. Thanks for noticing this!]

Posted by: Gerald Jan 22 2016, 05:56 PM

This week I worked on a better understanding of the hot pixels, mainly in order to infer a specific point noise filter which protects stars.
The stars will help geometric calibration of the camera, which is an ingredient for accurate RGB registering.

Here is a summary of the observed behavior of the hot pixels:
TDI causes hot pixels to show up as "hot streaks" with the number of TDI steps as length in pixels.
"Hot streaks" grow downward with increasing TDI, if the CH4 filter is defined as "up" and the red filter as "down".
Hot pixels vary over months, mostly in number, but intensity can vary, too.

Here is an example comparing hot streaks of 2012, doy (day of year) 294, with 2014, doy 038:


The 2012294 image was obtained by stacking all framelets of the six TDI 60 images JNCE_2012294_00R036_V01 to JNCE_2012294_00R041_V01, followed by a kind of horizontal high-pass filter (see the sketch below).
The 2014038 image is the corresponding processing result of the four TDI 80 images JNCE_2014038_00R114_V01 to JNCE_2014038_00R117_V01.
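
The stacking-plus-high-pass step could look roughly like this in Python (a simplified illustration; the kernel width is an arbitrary example value):

CODE
import numpy as np

def stack_framelets(framelets):
    # average many framelets of one filter band: fixed-pattern hot pixels
    # survive the averaging, while stars and random noise are suppressed
    return np.mean([f.astype(np.float64) for f in framelets], axis=0)

def horizontal_highpass(img, width=31):
    # subtract a horizontal running mean to remove the smooth background
    kernel = np.ones(width) / width
    lowpass = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, img)
    return img - lowpass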

I've then defined a heuristic which reduces the images further to estimated hot-pixel positions with respect to the considered framelet, with the lower left corner defined as coordinates (0;0). Hot pixels may lie outside the - in this case - red filter readout zone, even in the green filter readout zone, but the streaks reach into the red filter readout zone. So hot-pixel positions beyond framelet y-coordinate 127 occur.
 file_list_red_01.txt_hot_pixels.csv.txt ( 2.02K ) : 1041
 file_list_red_03.txt_hot_pixels.csv.txt ( 3.19K ) : 1049

When combining hot pixels of the red and the green filter, streaks starting in green and ending in red can actually be observed. These inter-filter hot streaks confirm the relative y-offset of 155 pixels between the red and the green readout zone.

I didn't see good data for the blue filter; thus far, those show strong compression artifacts. Nevertheless, you may see some matches with the green filter.

Here is an example of stacked and high-pass-filtered framelets as insets in a simulation of TDI 80 hot streaks from reduced data:

The CH4 input data were TDI 64 (of 2011238), therefore the streak length in the simulation differs from the inset image. The CH4 data are from early in the cruise, so matches with the blue data are less likely.

Posted by: Gerald Jan 30 2016, 03:27 PM

This one is mostly about finding blips used as star candidates.

After some correlation analysis, it turned out that hot streaks in the EDRs behave sub-additively, as expected for square-root-encoded data. But squaring didn't result in perfect additivity: the inferred contribution of the hot streak still showed a dependence on the background brightness:


Nevertheless I've subtracted the repetitive patterns on dark background in a linear way, by subtracting the squares of the values obtained from the EDRs, in order to obtain a 0th-degree approximation of cleaned images. Blink GIF before/after cleaning:

This imperfection is to some degree intentional, to encourage at least some resilience in the subsequent processing.

The next processing step tries to find blips which are good star candidates. The centroid of each blip (using pixels above some threshold) is determined after subtracting the mean background level in a ring around the blip; a sketch of this step follows below.
Then a best-fitting line through the centroid is determined via principal component analysis (https://en.wikipedia.org/wiki/Principal_component_analysis). With TDI 80, stars form small streaks.
After calculating the vertical and horizontal mean distances of the pixels of the blip, weighted by brightness, blips resembling hot streaks are filtered out (some resilience with respect to hot streaks). The remaining star candidates are each marked by a circle.
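
The centroid step might look like this (a simplified illustration; radii and threshold are arbitrary example values):

CODE
import numpy as np

def blip_centroid(img, x0, y0, r_in=4, r_out=8, thresh=5.0):
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    r = np.hypot(xx - x0, yy - y0)
    ring = (r >= r_in) & (r < r_out)
    background = np.mean(img[ring])            # mean level in the ring
    core = (r < r_in) & (img - background > thresh)
    if not core.any():
        return None                            # no star candidate here
    w = img[core] - background                 # brightness weights
    return (np.sum(xx[core] * w) / np.sum(w),  # x centroid
            np.sum(yy[core] * w) / np.sum(w))  # y centroid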

I've then roughly compared the marked blips with star maps by eye, to get a preliminary assessment of how many stars can be identified for a good geometric calibration.
For image JNCE_2014038_00R117_V01, a 180-degree panorama, I could find about 80 good star candidates. Here is a corresponding annotated intermediate processing step:

Some similar images are available. This should be sufficient to calibrate at least the red filter area geometrically.

Next, I'll probably simulate BSC stars with two families of geometric camera models: a purely radially Brown-Conrady-distorted pinhole model, and a purely radially Brown-Conrady and hyperbolically distorted pinhole model. I'm expecting the hyperbolic model to be better suited, since a Brown distortion alone cannot model a hyperbolic distortion, if my draft calculations are correct. But I currently think an idealized rotating pushframe camera should distort hyperbolically. A full radial Taylor series, not skipping any second summand as in the radial Conrady model, might be a third option. The latter approach is sufficiently powerful to describe hyperbolic distortions as well.

Btw.: JNCE_2014038_00R116_V01 probably shows a GCR (galactic cosmic ray) hit:

Posted by: Gerald Feb 15 2016, 02:46 PM

This is about a first crude simulation of image JNCE_2014038_00R117_V01 using BSC stars.

Starting from a version of the star catalog http://heasarc.gsfc.nasa.gov/W3Browse/star-catalog/bsc5p.html
 bsc5p_equatorial.zip ( 155.14K ) : 1145

I've calculated and added the corresponding 3D unit vectors (see the conversion sketch below):
 bsc5p_equatorial_vectors.zip ( 383.81K ) : 1129
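
The conversion itself is just the standard transformation from equatorial coordinates to Cartesian unit vectors, e.g. (assuming RA/Dec in degrees):

CODE
import numpy as np

def radec_to_unit_vector(ra_deg, dec_deg):
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])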

Then, crudely assuming
- a camera rotation of 79 frames per 360° (a guess from EFB images, neglecting the effect of spacecraft motion),
- a pinhole horizontal scale of 29° for 800 pixels from an assumed optical center at (822;600) towards one side (a simplified and somewhat inaccurate interpretation of the JunoCam paper, http://link.springer.com/article/10.1007%2Fs11214-014-0079-x, section 1),
- a consecutive Brown distortion with K1 = -3.839251e-8 (JunoCam paper, 6.7, with the sign added),
- red, green, blue, CH4 filters assumed to start at y values 306, 461, 616, 781, with the lower left corner as (0;0) (recalculated from http://www.unmannedspaceflight.com/index.php?s=&showtopic=2548&view=findpost&p=203948),
and neglecting TDI, I've calculated the 3D unit vectors corresponding to the centroids of the blips, and annotated several of them with star candidates:
 Stars_JNCE_2014038_00R117_V01_Star_candidates_vec_pinhole_brown2_man_annot.txt ( 20.35K ) : 1031

then reduced the list considerably
 Stars_JNCE_2014038_00R117_V01_Star_candidates_vec_pinhole_brown2_man_annot_smallExcerpt.txt ( 942bytes ) : 1031

and fused it with the corresponding BSC data to
 bsc5p_equatorial_vectors_small_excerpt_assoc_junocam_img117.txt ( 1.57K ) : 1050


Further restriction to three stars - chosen so as to avoid almost-singular configurations - results in two 3x3 matrices:
 bsc5p_equatorial_vectors_small_excerpt_assoc_junocam_img117_matOnly.txt ( 345bytes ) : 1080


Inverting the transpose of the matrix of BSC vectors, and multiplying by the transposed matrix of the blip vectors from the left, results in an estimated transformation matrix from BSC to JNCE_2014038_00R117_V01 vectors.
This matrix, however, isn't quite orthonormal. Applying the vector (cross) product twice to replace two of the columns of the transformation matrix, and then normalizing the columns, adjusts the transformation matrix to an orthonormal one; see the sketch below.
 Stars_JNCE_2014038_00R117_V01_TransformationMatrix01.txt ( 174bytes ) : 1020
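
In Python, this three-star estimate plus re-orthonormalization might read as follows (my reading of the procedure; variable names are illustrative):

CODE
import numpy as np

def rotation_from_three_stars(bsc_vecs, blip_vecs):
    # each argument: 3x3 array whose rows are unit vectors
    S = np.asarray(bsc_vecs).T       # columns = catalog vectors
    B = np.asarray(blip_vecs).T      # columns = measured blip vectors
    R = B @ np.linalg.inv(S)         # maps BSC vectors onto blip vectors

    # R is generally not quite orthonormal: rebuild two columns with
    # cross products, then normalize all three
    c0 = R[:, 0] / np.linalg.norm(R[:, 0])
    c2 = np.cross(c0, R[:, 1])
    c2 /= np.linalg.norm(c2)
    c1 = np.cross(c2, c0)
    return np.column_stack([c0, c1, c2])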


A first attempt to simulate star positions for JNCE_2014038_00R117_V01 by using BSC stars and the obtained transformation resulted in this annotated image:


Adjusting
- the camera rotation to 81.0 frames per 360°,
- the pinhole scale to 28.3° for 800 pixels,
and treating multiple occurrences of the same star, resulted in a somewhat better simulation.

The blip vectors used for the transformation matrix are still to be adjusted accordingly.

Despite errors that in some cases add up globally to about 40 pixels at this crude stage, the simulation helps to identify stars more reliably, which is needed for RMS minimization methods.
Among the next intended steps are
- merging the above individual calculations into a more automated pipeline,
- considering the displacement due to TDI, and
- developing and applying optimization methods to pin down the geometric camera parameters (at least for red and green bands).

Posted by: Gerald Feb 22 2016, 06:23 PM

The following tentative BSC star identifications of blips in image 117 look good enough that I'll now start working on RMS minimization of the geometric parameters.


Using the HR numbers simplified the comparison of the simulations

with the cleaned JunoCam image, here annotated according to the above survey of the blip identifications.

There is still a small number of flaws in the survey, but it should be easy for the RMS optimization to flag them as invalid.

Posted by: Gerald Mar 24 2016, 07:29 PM

This article is about a geometric calibration method using stars of a single swath and first results for image JNCE_2014038_00R117_V01:
 junocam06_geometry_by_stars.pdf ( 254.45K ) : 1369


Here is a graphic of two of the inferred Brown distortions compared to the MSSS laboratory result, with the y-value of the optical axis chosen by the optimization algorithm:



The four simulated images corresponding to the distortions considered in appendix A of the above article:
With constant optical axis at y=600, for Brownian K1, and for Brownian K1+K2:


Y-value for optical axis chosen by optimization algorithm, again for Brownian K1, and for Brownian K1+K2:



========

I'll probably work on RGB registration for spheroids (with fuzzy surfaces) before refining the geometric in-flight calibration.

Posted by: wildespace Jun 27 2016, 07:59 AM

I don't know how to work with IMG files (they won't even mount on a virtual drive on my laptop), so could someone please post JunoCam's RGB images of stars and the zodiacal light? Information about the exposure for those images would be appreciated too.

Thanks :)

Posted by: Gerald Jun 27 2016, 05:36 PM

I'll try to convert the square-root-encoded EDR IMGs to PNGs if you like, within about a day, and provide a link to a list of the files; a sketch of the conversion is below. But be aware that some of the images currently show the blocky decompression glitch.
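
Roughly like this (true decompanding uses the instrument's piecewise-linear table; squaring the DN only approximates its inverse; requires numpy and Pillow):

CODE
import numpy as np
from PIL import Image

def edr_to_png(edr, png_path):
    # edr: 2D uint8 array of square-root-encoded DN
    linear = edr.astype(np.float64) ** 2      # approximate inverse of SQROOT
    scaled = 255.0 * linear / linear.max()    # stretch to the display range
    Image.fromarray(scaled.astype(np.uint8)).save(png_path)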

Posted by: mcaplinger Jun 27 2016, 05:49 PM

We didn't take any RGB images of stars and the zodiacal light. We took some red and some green with spectral crosstalk because the amount of TDI was longer than the frame height would support.

As for processing the IMG files, there's a saying about giving a man a fish; I don't quite recall how it goes. :)

Posted by: Gerald Jun 27 2016, 09:59 PM

Well, applying the saying would mean: read my post #2 above in order to understand the structure of the IMGs, then try Björn's IMG2PNG, some other image-processing software able to read raw data, or write your own converter software.
If all these options fail, ask for help.

Posted by: wildespace Jun 28 2016, 07:11 AM

QUOTE (mcaplinger @ Jun 27 2016, 06:49 PM) *
We didn't take any RGB images of stars and the zodiacal light.

I see. Emily's page gave me the impression that you did, with headers like:
QUOTE
2013107_00C048
Filters: BGR


I used IMG2PNG to create JNCE_2013107_00C048_V01.png, but as I don't know much about processing such images, I got a black strip with barely a hint of what looks like noise. And I also don't know how to combine that raw imagery into an RGB composite.

Posted by: Gerald Jun 28 2016, 12:14 PM

This is a 16-fold brightness-stretched and 2-fold enlarged crop of 2013107_00C048:


It shows one copy of a pattern which repeats every 3x128 = 384 pixel rows in the raw swath.
Those repetitive patterns are mostly caused by hot pixels, but there exist some repetitive spots of a different type, too.
The 3x128 pixels result from exposures made of three color bands times the 128-pixel framelet height.

This is a 16-fold enlarged crop:

It shows a vertical line of 4 bright pixels. This line is the result of one hot pixel on the CCD copied 4 times by the TDI mechanism. It indicates that 4 TDI steps have been applied for this image. TDI 4 is hard for finding many stars; TDI 64 and TDI 80 are much better suited for this purpose.
The noisy 16x16 block indicates that the image has been compressed lossily on macroblocks (tiles) of 16x16 pixels.


When looking for real objects, first mark all repetitive patterns, or clean the image of these patterns.
Then decompose the swath into framelets of height 128 pixels, grouped into exposures of 3 framelets. Insert a gap of 27 pixels between neighbouring framelets within one exposure. Assign a color channel to each framelet within each exposure. Shift the exposures vertically until you get a match of corresponding features (about 114 pixels, give or take a few). Use the valid color channel of each exposure to obtain full RGB coverage; a sketch of this assembly follows below. The result will be a first draft of an RGB image. I'd recommend using 2013282_000C91 (aka EFB01), showing Earth's moon, as a first exercise.
The first 22 slides of http://www.ajax.ehu.es/Juno_amateur_workshop/talks/06_03_Junocam_processing_Eichstadt.pdf should apply to moon and star RGB images as well. Spacecraft trajectory and rotation of the target objects can be neglected for distant targets.
You may use weights for the colors in order to adjust the raw colors if you use EDRs; the weights are likely to undergo refined calibration.
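
The assembly sketch referred to above, with all the numbers taken from this recipe (the code layout is illustrative, the shift will need per-image tweaking, and the B, G, R framelet order within each exposure is assumed from the "Filters: BGR" label):

CODE
import numpy as np

FRAMELET_H = 128      # framelet height in pixels
INTRA_GAP = 27        # gap between framelets within one exposure
EXPOSURE_SHIFT = 114  # vertical shift between exposures (give or take a few)

def assemble_rgb(swath):
    n_exp = swath.shape[0] // (3 * FRAMELET_H)
    height = EXPOSURE_SHIFT * (n_exp - 1) + 3 * FRAMELET_H + 2 * INTRA_GAP
    rgb = np.zeros((height, swath.shape[1], 3))
    for e in range(n_exp):
        for c in range(3):  # 0 = blue, 1 = green, 2 = red
            framelet = swath[(3 * e + c) * FRAMELET_H:
                             (3 * e + c + 1) * FRAMELET_H]
            top = e * EXPOSURE_SHIFT + c * (FRAMELET_H + INTRA_GAP)
            # later exposures simply overwrite the overlap; a real
            # pipeline would blend or average instead
            rgb[top:top + FRAMELET_H, :, 2 - c] = framelet
    return rgb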

Posted by: mcaplinger Jun 28 2016, 02:16 PM

QUOTE (Gerald @ Jun 28 2016, 04:14 AM) *
This is a 16-fold brightness-stretched and 2-fold enlarged crop of 2013107_00C048:

This particular image was part of a mapping operations test. Since it has a short exposure, it likely doesn't show any real objects, just noise.

Posted by: javierluiso Jul 2 2016, 03:33 AM

QUOTE (wildespace @ Jun 27 2016, 04:59 AM) *
I don't know how to work with IMG files (they won't even mount on a virtual drive on my laptop), so could someone please post JunoCam's RGB images of stars and the zodiacal light? Information about the exposure for those images would be appreciated too.

Thanks :)


Hi, I've been working on loading and processing .IMG files using Octave/MATLAB scripts.
Take a look at https://javierluiso.wordpress.com/ - I hope my work will be useful to you.

Javier

Posted by: JohnVV Jul 2 2016, 07:13 PM

any reason you are not opening the img files as raw with a header
or
using img2png
or
using isis3
or
using vicar
or
using ???

now Juno data is a bit odd and needs a lot of processing

Posted by: Gerald Jul 2 2016, 08:47 PM

Reading the IMGs is just the first small hurdle. The IMGs are raw binary data streams without an embedded header. More important is opening the files in a tool or with a computer language you're used to, and which is sufficiently powerful to perform the sophisticated processing.

Posted by: mcaplinger Jul 2 2016, 09:05 PM

QUOTE (Gerald @ Jul 2 2016, 12:47 PM) *
The IMGs are raw binary data streams without an embedded header.

"raw binary data streams"? They're just normal 2D images, in row-major order, with 8 or 16-bit pixels. You could load them raw into Photoshop if you read the image dimensions out of the label manually.

Unpacking the framelets might be a bit of a challenge, but we do that for you with the image releases that have actual content, like EFB.

Most simple manipulations are a few lines of code in any modern processing environment.

Posted by: Gerald Jul 3 2016, 06:25 AM

To me, the easiest file format to handle is the EDR IMG format, as provided in the MSSS PDS. But I can't assess whether I'm representative.
Essential for fast and good results is early access to these files. But I can handle the RDRs, too.
If intermediately processed images are the only ones available, I might need to reconstruct the EDRs as far as possible.

Posted by: Gerald Mar 8 2017, 04:05 AM

http://pds-imaging.jpl.nasa.gov/data/juno/ is online.

Posted by: elakdawalla Jun 27 2017, 10:54 PM

https://pds.nasa.gov/tools/subscription_service/SS-20170620.shtml

Posted by: Bjorn Jonsson Aug 26 2017, 01:07 AM

Today when starting work on some additional Juno images after a hiatus of several weeks, I noticed that new versions are available of some important instrument/frame SPICE kernels. This is of importance for everyone using SPICE data for JunoCam processing. In particular the new version of the JunoCam instrument kernel is interesting; among other things it contains this note:

QUOTE
...there is a fixed bias of 61.88 msec in the start time with a possible jitter of order 20 msec relative to the reported value (that is, you should add 61.88 msec to the value of START_TIME.) You should also add 1 msec to the value of INTERFRAME_DELAY.


The 1 msec addition to the interframe delay has been discussed earlier in at least one of the perijove image processing threads. However, the 61.88 msec bias is new and significant information to me. I had started to suspect that something like this might be happening, since the correction I had to make to the pointing was always rather similar (but not identical), even for images from different perijoves.

There is also a new version (juno_v12.tf) of the frame kernel.
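
For anyone applying these corrections, here is a minimal spiceypy sketch of the per-frame timing (the meta-kernel name and the label values are illustrative examples):

CODE
import spiceypy as spice

spice.furnsh("juno_meta_kernel.tm")      # hypothetical meta-kernel (incl. LSK)

START_TIME = "2017-07-11T01:23:45.678"   # from the PDS label (example value)
INTERFRAME_DELAY = 0.371                 # seconds, from the label (example)

def frame_et(i):
    # corrected ephemeris time of frame i, per the instrument kernel note:
    # add 61.88 msec to START_TIME and 1 msec to INTERFRAME_DELAY
    et0 = spice.str2et(START_TIME) + 0.06188
    return et0 + i * (INTERFRAME_DELAY + 0.001)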

Posted by: Bjorn Jonsson Sep 5 2017, 09:29 PM

The new frame and instrument kernels (plus information/comments they contain) mentioned in the previous post have eliminated most of the systematic errors I have been getting, meaning that the corrections I need to make to the camera pointing are now much smaller. What I'm doing is comparable to what ISIS3's deltack does. However, small errors apparently remain. Because of this there is one issue I hope Mike can clarify (I suspect Gerald also knows something about this).

juno_junocam_v02.ti has lines like this:

INS-6150#_DISTORTION_X = 814.21

The width of the images is 1648 pixels. Does the above line mean that the optical axis passes through x=814.21 in the images and not width/2 (=824) as in other spacecraft cameras (e.g. Cassini, Voyager, Galileo and Dawn) I'm familiar with?

Posted by: mcaplinger Sep 5 2017, 10:58 PM

QUOTE (Bjorn Jonsson @ Sep 5 2017, 01:29 PM) *
Does the above line mean that the optical axis passes through x=814.21 in the images and not width/2 (=824) as in other spacecraft cameras (e.g. Cassini, Voyager, Galileo and Dawn) I'm familiar with?

Yes. At least that's what we are suggesting you use in the camera model. No camera really has the optic axis going directly through width/2 -- if it says it does, that most likely only means that other parts of the camera model have been adjusted to compensate for that assumption (camera model parameters don't have to be physically accurate as long as the residual errors are low.)

For this most recent kernel update, we (a summer intern and I) measured thousands of cruise star image locations and then used Nelder-Mead nonlinear optimization to minimize camera model error while varying seven model parameters: image optical center (cx, cy), the first two terms in the radial distortion function (k1, k2), camera focal length (fl), and camera boresight angular misalignment in X and Y (xr, yr). The residual error is still about 1 pixel cross-spin and ~2.5 pixels down-spin (1 sigma); the latter higher because of remaining timing slop.
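
A toy sketch of this kind of seven-parameter fit with SciPy's Nelder-Mead follows; the parameter set matches the description above, but the projection model and all numbers are illustrative assumptions, not the actual MSSS model:

CODE
import numpy as np
from scipy.optimize import minimize

def unpack(p):
    # rescale the parameters to comparable magnitudes for the optimizer
    cx, cy, k1s, k2s, fl, xrs, yrs = p
    return cx, cy, k1s * 1e-8, k2s * 1e-14, fl, xrs * 1e-3, yrs * 1e-3

def project(v, cx, cy, k1, k2, fl, xr, yr):
    # pinhole projection with Brown radial distortion and small boresight
    # misalignment angles xr, yr (radians); v: Nx3 array of unit vectors
    v = v.copy()
    v[:, 0] += yr * v[:, 2]
    v[:, 1] -= xr * v[:, 2]
    x = fl * v[:, 0] / v[:, 2]
    y = fl * v[:, 1] / v[:, 2]
    r2 = x**2 + y**2
    d = 1.0 + k1 * r2 + k2 * r2**2
    return np.column_stack([cx + d * x, cy + d * y])

def cost(p, star_vecs, observed_xy):
    return np.sum((project(star_vecs, *unpack(p)) - observed_xy) ** 2)

# fake data standing in for measured star image locations
rng = np.random.default_rng(0)
true = np.array([814.0, 600.0, -6.0, 0.0, 1480.0, 1.0, -0.5])
vecs = rng.normal(size=(200, 3))
vecs[:, 2] = np.abs(vecs[:, 2]) + 3.0
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
obs = project(vecs, *unpack(true)) + rng.normal(scale=0.5, size=(200, 2))

fit = minimize(cost, x0=np.array([824.0, 600.0, 0.0, 0.0, 1500.0, 0.0, 0.0]),
               args=(vecs, obs), method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-8})
print(fit.x)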

Posted by: Gerald Sep 6 2017, 12:02 PM

x=814.21 for the optical axis is astonishingly similar to the values I got in subsubsection 4.1.3 of the article at http://www.unmannedspaceflight.com/index.php?showtopic=8143&view=findpost&p=230113.
Unfortunately, I need to deviate from the best fits derived from star positions in order to obtain best fits of local RGB alignment, the latter being two orders of magnitude more accurate. There must still be some unconsidered effect, at least in my models. I hope that I'll find more time for geometrical calibration near the end of this year. Possibly K3 plays some role near the margins. And I'm inclined to verify whether there is some small chromatic aberration, and whether the pixels are perfectly square.

Posted by: mcaplinger Sep 6 2017, 04:57 PM

QUOTE (Gerald @ Sep 6 2017, 04:02 AM) *
Unfortunately, I need to deviate from the best fits derived from star positions in order to obtain best fits of local RGB alignment, the latter being two orders of magnitude more accurate.

If you could document this for inclusion in the PDS kernels, that would be a very useful contribution.

Posted by: Gerald Sep 6 2017, 07:11 PM

I'll try to squeeze it in appropriately during the first half of October, at least partially. I could also take some of the (vast amount of) pretty unstructured material to Riga, and meet with your colleague Michael Ravine. This week, and most of next week, PJ08 processing and preparing the EPSC talk(s) have priority.

Posted by: Bjorn Jonsson Sep 19 2017, 12:38 AM

QUOTE (Gerald @ Sep 6 2017, 12:02 PM) *
Unfortunately, I need to deviate from the best fits derived from star positions in order to obtain best fits of local RGB alignment, the latter being two orders of magnitude more accurate. There must still be some unconsidered effect, at least in my models. I hope that I'll find more time for geometrical calibration near the end of this year. Possibly K3 plays some role near the margins. And I'm inclined to verify whether there is some small chromatic aberration, and whether the pixels are perfectly square.

k1 and k2 values that result in more accurate RGB alignment would be very useful.

Using the updated radial distortion function parameters (k1 and k2) and focal length from the new JunoCam kernel file has reduced the RGB color alignment errors I'm getting (the updated frame kernel is probably also significant here). After reprojecting the framelets to simple cylindrical projection I usually warp the green and blue channels into the red channel because even an alignment error of just ~2 pixels is noticeable in enhanced/sharpened images - I want 'perfect' alignment. However, when I use the new kernels the alignment errors are smaller than before (I need to process more images to completely confirm this though). The area where I need to warp the GB channels is also smaller (it's near the image edges - close to center the alignment is/was perfect). Some of the alignment errors might be due to slight inaccuracies in the camera pointing parameters I'm using when reprojecting the images.

Regarding a possible small chromatic aberration: Is it possible that the best way to get rid of RGB alignment errors might be to use slightly different k1 and k2 values for the different color channels? Has this been tried?

Posted by: mcaplinger Sep 19 2017, 02:34 AM

QUOTE (Bjorn Jonsson @ Sep 18 2017, 04:38 PM) *
Is it possible that the best way to get rid of RGB alignment errors might be to use slightly different k1 and k2 values for the different color channels?

I'd have thought that the focal length would be a larger variable than k1/k2.

Gerald has said that he is getting residuals much smaller than 0.1 pixels. I haven't been able to get anything close to that good, so it would be very useful for him to document his processing in a way that could be incorporated into the I kernel.

Posted by: Brian Swift Sep 25 2017, 05:24 AM

QUOTE (mcaplinger @ Sep 5 2017, 03:58 PM) *
For this most recent kernel update, we (a summer intern and I) measured thousands of cruise star image locations and then used Nelder-Mead nonlinear optimization to minimize camera model error while varying seven model parameters: image optical center (cx, cy), the first two terms in the radial distortion function (k1, k2), camera focal length (fl), and camera boresight angular misalignment in X and Y (xr, yr). The residual error is still about 1 pixel cross-spin and ~2.5 pixels down-spin (1 sigma); the latter higher because of remaining timing slop.


Hi Mike,

I’ve also been looking at PDS cruise phase full-rotation color images for camera geometry calibration. I noticed what appear to be changes in compression artifacts between datatakes without an indication of a change in the .LBL file. For example, between JNCE_2016026_00C00063_V01 and JNCE_2016026_00C00064_V01.
Could you comment on that?

I also wonder if there are opportunities for non-standard imaging around apojove (or any time Jupiter is fairly small in the FOV). If so, here is some imagery I think could be useful:

1. Command JunoCam to disable TDI and capture a full rotation of color frames with EXPOSURE_DURATION around 464ms to 816ms. The goal being to capture bright stars or moons as slightly curved trails passing over filter boundaries, which could then be used to help characterize chromatic aberration and geometry. Ideally, collect several of these at different altitudes to capture trails from the moons in different sets of CCD columns.

2. Collect a sequence of low compression color frames spanning more than one full rotation. The goal being to allow wrap around linkage for FOV measurement to be performed with more stars (and away from the first frame which seems to have a higher noise level). Ideally, collect a series of these, each starting at a different random rotation angle.

Thanks,
Brian Swift

Posted by: mcaplinger Sep 25 2017, 04:14 PM

QUOTE (Brian Swift @ Sep 24 2017, 09:24 PM) *
I noticed what appear to be changes in compression artifacts between datatakes without an indication of a change in the .LBL file. For example, between JNCE_2016026_00C00063_V01 and JNCE_2016026_00C00064_V01.
Could you comment on that?

A visual example of what you are reacting to would be helpful.

Without looking at these specific images, we've observed an effect where as the sensor warms up during operation, the dark level in the images interacts with the companding table in such a way as to (paradoxically) make the images less noisy, which changes the behavior of the lossy compressor. Unfortunately, the PDS archive product wasn't defined with a field to record what the actual compression factor was (that is, we don't record how big the original downlink file was), which would give some indication of what level of compression artifact to expect.

QUOTE
I also wonder if there are opportunities for non-standard imaging around apojove

At the moment Junocam is turned off except near perijove, and the time near perijove is fully subscribed with images of the planet.

With regard to calibration, my philosophy is to try to extract all the information possible from images that we already have before taking new images. There were a few instances in cruise where the TDI was commanded assuming the spacecraft was spinning at 1 RPM and it was really spinning at 2 RPM, and these images have the kinds of streaks you're describing. We just ignored them when we were doing our calibration, but maybe they have some utility.

There are many, many images taken all the way around the orbit on orbit 1 of the moons with no TDI, and that's a good source of geometric information that we are in the early stages of looking at.

Taking long exposures without TDI is outside the parameters that the instrument was designed to work within. It might be possible to command this, but obviously you can't take contiguous frames around a spin so you'd be taking images of somewhat random star fields.

Posted by: Brian Swift Sep 26 2017, 05:09 PM

QUOTE (mcaplinger @ Sep 25 2017, 09:14 AM) *
A visual example of what you are reacting to would be helpful.

As an example, the images below are the third frames from JNCE_2016026_00C00063_V01.IMG and JNCE_2016026_00C00064_V01.IMG, which have START_TIMEs 2016-01-26T08:56:02.168 and 2016-01-26T09:01:02.082. The images were decompanded, and DN 0 to 8 were stretched to the full display range.



QUOTE
Unfortunately, the PDS archive product wasn't defined with a field to record what the actual compression factor was

Besides the compression factor, are there other instrument configuration parameters that affect imaging characteristics and are not recorded in the metadata?

QUOTE
At the moment Junocam is turned off except near perijove, and the time near perijove is fully subscribed with images of the planet.

OK. If I decide to request some special imaging I'll run it through the perijove voting process, with the imaging to occur during or at the end of the departure movie collection.

QUOTE
There are many, many images taken all the way around the orbit on orbit 1 of the moons with no TDI...

Great, I’ll go look for them on PDS. I’d assumed all the non-cruise imagery was on the missionjuno web site.

One more question: is the imagery collected for the Junocam Calibration Report available online anywhere?

Thanks again.

Posted by: mcaplinger Sep 27 2017, 01:05 AM

QUOTE (Brian Swift @ Sep 26 2017, 09:09 AM) *
As an example, the below images are the third frames from JNCE_2016026_00C00063_V01.IMG and JNCE_2016026_00C00064_V01.IMG

Yes, that's an example of the effect I described.
QUOTE
Besides the compression factor, are there other instrument configuration parameters that affect imaging characteristics and are not recorded in the metadata?

If we ever changed any of the piecewise linear companding parameters (which we don't plan to do) there is no place to record them in the metadata. One issue is that the PDS doesn't easily provide a way to define new keywords and has a pretty sparse set of standard keywords (for example, we had to lobby to get a keyword to describe the amount of TDI -- even though TDI has been used on earlier instruments, there is no standard keyword for it.)
QUOTE
One more question: is the imagery collected for the Junocam Calibration Report available online anywhere?

Not publicly, no, not at this time. Some missions archive their ground data and some don't, and of the ones that do, some of them spend a lot of effort to make that intelligible and some don't. Most of our ground images were taken with ground support equipment that read out a large area of the CCD including the "junk" between band edges, and so it doesn't fit perfectly into the definition of the standard PDS product.

If I thought any of the ground images were generally useful, I'd be more interested in trying to get them into some releasable form.

Posted by: Bjorn Jonsson Sep 27 2017, 11:00 PM

Is there any possibility the compression factor could be included in future PDS releases? There is a PDS keyword for a compression factor - the Galileo and Cassini PDS files include it and it's *very* useful, especially in the case of Galileo (I always check the compression factor when processing Galileo images).

Posted by: mcaplinger Sep 27 2017, 11:43 PM

QUOTE (Bjorn Jonsson @ Sep 27 2017, 03:00 PM) *
Is there any possibility the compression factor could be included in future PDS releases?

I see "encoding_compression_ratio", is that what you're talking about?

It's more difficult than you would think to add because we can't make any changes without going through another lengthy review/approval cycle. And frankly, while I admit it was an oversight, I am not seeing a really compelling need to know it. We could perhaps consider having some kind of ancillary table or text file and I'd be happy to informally distribute that.

Posted by: Bjorn Jonsson Sep 28 2017, 01:01 AM

Yes, ENCODING_COMPRESSION_RATIO in the Galileo files (it's INST_CMPRS_RATIO in the Cassini PDS files).

An ancillary table or text file would be great - it wouldn't make any difference to me whether the compression ratio was in a PDS formatted LBL file or in a separate text file.

Posted by: Brian Swift Sep 29 2017, 08:13 PM

The Mathematica apps I’ve developed for processing JunoCam raw images with the Hugin panorama application are available at https://github.com/BrianSwift/JunoCam

Juno25 converts MissionJuno website -raw.png files to full CCD frame images which can be processed as a panorama in Hugin.

Juno24e extracts control points from PDS cruise phase star fields to drive Hugin’s lens parameter generation.

The produced lens parameters v=44.96 b=-0.01045 d=12.99 e=13.89 yield an average control point distance of 0.252, a standard deviation of 0.319, and a maximum of 3.13. A description of the parameters is at http://wiki.panotools.org/Lens_correction_model

First image is available at https://flic.kr/p/XXpGUx

Posted by: Gerald Oct 28 2017, 11:42 PM

QUOTE (mcaplinger @ Sep 19 2017, 04:34 AM) *
Gerald has said that he is getting residuals much smaller than 0.1 pixels. I haven't been able to get anything close to that good, so it would be very useful for him to document his processing in a way that could be incorporated into the I kernel.

I'm aligning the RGB centroids of a marble movie image according to some convention by adjusting camera parameters. This problem can be stated as an underdetermined system. Therefore it isn't too difficult to find a solution with an error on the order of a few millipixels, much better than is possible with star streaks.
This doesn't mean that such a local solution needs to apply globally. But it provides a means of obtaining the rather accurate data points required for global calibration.
Here is an intermediate and very technical report, including some analysis of several scenarios:
 junocam10_geometry_by_single_mm_pj08.pdf ( 1.5MB ) : 974

It's what I've been able to write up before I needed to switch tasks. I hope that I'll find time to continue with global calibration after a first run of PJ09 processing is completed, using an RMS minimization method I've hinted at in the above article.
Accurate registration isn't limited to centroids, but determining centroids is rather straightforward to implement.


Posted by: Brian Swift Dec 18 2017, 06:57 PM

Mike, how small of an INTERFRAME_DELAY can be commanded to Junocam?
I’m thinking that if a value like .02 seconds were used when targeting moons, the multiple images gathered could potentially be used for super-resolution imaging.

Posted by: mcaplinger Dec 18 2017, 08:00 PM

QUOTE (Brian Swift @ Dec 18 2017, 10:57 AM) *
Mike, how small of an INTERFRAME_DELAY can be commanded to Junocam?
I’m thinking that if a value like .02 seconds were used when targeting moons, the multiple images gathered could potentially be used for super-resolution imaging.

No way. The minimum interframe time is about 200 msec, 10x longer than what you suggest. The camera has to flush the CCD and then read out the subframe between each frame.

Posted by: Gerald Dec 18 2017, 10:42 PM

Any attempt to create SR products would require losslessly compressed images. Short of that, it is already an improvement to reduce compression artifacts and other image noise. Occasionally, lossless images are made. In these cases, there is at least a theoretical chance to use the three color channels for resolution improvements, and sometimes you even get an overlap of two framelets of the same color band for the target object, such that four takes of the same target can be assumed. This would allow a doubling of the resolution in the best case. Further methods would combine several raws of a sequence of images of the same target. This requires, however, very accurate alignment and de-rotation.
But usually I'm already happy when overlapping channels can be used to reduce image noise.

Posted by: Brian Swift Jan 31 2018, 01:50 AM

Mike, were the MTF images collected during thermovac saved? (I didn't see them cataloged in the calibration report). And do you recall if the test patterns covered most of the camera field of view?

Unrelated, has there been much analysis of the PJ10 lightning search images? When I took a look, they seemed like mostly spot noise. I guess I was expecting to see stars with the long TDI. Thanks.

Posted by: mcaplinger Jan 31 2018, 01:56 AM

QUOTE (Brian Swift @ Jan 30 2018, 05:50 PM) *
Mike, were the MTF images collected during thermovac saved? (I didn't see them cataloged in the calibration report). And do you recall if the test patterns covered most of the camera field of view?

Yes, they were saved, and no, the test patterns don't cover much of the field of view.


Posted by: Brian Swift Jan 31 2018, 04:11 AM

QUOTE (mcaplinger @ Jan 30 2018, 05:56 PM) *
Yes, they were saved, and no, the test patterns don't cover much of the field of view.

Thanks for the super quick response. No plumb-lines there.

Posted by: mcaplinger Jan 31 2018, 05:55 AM

QUOTE (Brian Swift @ Jan 30 2018, 05:50 PM) *
Unrelated, has there been much analysis of the PJ10 lightning search images?

I took a casual look when they came down. There are a large number of particle hits in these images; I see a few things that could be stars but they don't jump out as such and that wasn't what we were trying to accomplish (of course, I didn't see any lightning either but it's a long shot given the short exposure times.) There's more work to do with these images if anyone is interested.

Posted by: Gerald Jan 31 2018, 10:53 AM

One of the questions I've been interested in has been: is JunoCam able to discern Jupiter in Io shine? The answer is: yes, it is!
Here is a stretched raw, with bright Io at the right and Jupiter at the left side:



During the next years, we may see much of Jupiter's night side. So one question will be whether we can track at least some of the largest storm systems even on the night side under Io shine.
Of course, detecting aurorae or lightning, or even some faint moons, would be interesting, too. Another goal would be using JunoCam as a radiation flux detector on the basis of night shots.

Posted by: mcaplinger Jan 31 2018, 12:30 PM

QUOTE (Gerald @ Jan 31 2018, 02:53 AM) *
One of the questions I've been interested in has been: is JunoCam able to discern Jupiter in Io shine? The answer is: yes, it is!

Prove it, by plotting the expected geometric limb of the planet on this image. I think this is much more likely to be a stray light artifact.

Posted by: Gerald Jan 31 2018, 03:37 PM

Far from anything I'd call a proof, but a first plausibility cross-check:



The first row shows maps of cosines of emission angles with a horizontal FOV of 60 degrees and a vertical FOV of 60 degrees, centered on Jupiter.
The second row consists of corresponding enhanced renditions derived from JunoCam raws, without trajectory or shape model, and cropped arbitrarily.

Posted by: fredk Jan 31 2018, 05:26 PM

Ioshine or light scattered along Jupiter's atmosphere from the daylit side? Or skyglow? Earth's night atmosphere is fairly bright due to skyglow. How bright would you expect Ioshine to be?

Posted by: Gerald Jan 31 2018, 06:05 PM

With Io's radius (https://en.wikipedia.org/wiki/Io_(moon)) of about 1820 km, I get a cross section of about 10.4e6 km². With Io's distance from Jupiter of about 420,000 km, I get a hemisphere surface area (https://en.wikipedia.org/wiki/Sphere#Surface_area) of that radius of about 1.1e12 km². The quotient of these two areas is about 1e5.
Assuming 4000 DN for Jupiter's solar-illuminated surface at TDI 3, we would get a theoretical 80,000 DN at TDI 60, such that we should be within the same order of magnitude as one DN for Io shine; the arithmetic is spelled out below. Io's albedo is pretty high, at about 0.63. Some binning/blurring over the noise (which I obtained by blurring an enhanced intermediately processed image) should then result in a detectable signal similar to the one presumably observed.
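
Spelled out as a few lines of Python (same numbers as above):

CODE
import math

r_io = 1820.0        # km, Io's radius
d_io = 420000.0      # km, Io's distance from Jupiter

cross_section = math.pi * r_io**2      # ~1.04e7 km^2
hemisphere = 2 * math.pi * d_io**2     # ~1.11e12 km^2
dilution = hemisphere / cross_section  # ~1.1e5

dn_tdi3 = 4000.0                       # DN, sunlit Jupiter at TDI 3
dn_tdi60 = dn_tdi3 * 60 / 3            # theoretical ~80,000 DN at TDI 60
print(dn_tdi60 / dilution)             # ~0.75 DN: order of one DN of Io shine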

Posted by: mcaplinger Jan 31 2018, 06:38 PM

QUOTE (Gerald @ Jan 31 2018, 07:37 AM) *
Far from anything I'd call a proof, but a first plausibility cross-check...

Well, maybe. There are clearly a whole lot of stray light artifacts in later images as the sunlit limb comes into the FOV (see image 10, for example) -- you can see hints of this in your version of image 6.

I wasn't expecting so many radiation artifacts in these images, but honestly we only took them because we had nothing else to take with the planet out of the FOV.

Posted by: Gerald Jan 31 2018, 08:18 PM

I've seen the stray light in the later images - and intentionally omitted those. It isn't clear, then, whether Jupiter's night side can't be resolved in images after #06 because of the stray light, or because Io is in Jupiter's shadow or otherwise out of reach.
But the essential question will be whether we'll find a way to take Jupiter images during the next solar conjunction(s), when we'll see mostly Jupiter's night side and no Earth-based observation will be possible. If we can show that stacked high-TDI night shots are able to resolve storm features, we could bridge some of the observational gap.
A stack of images would also be suitable for removing most or all of the artifacts caused by energetic particle hits or hot pixels, and for improving the signal strength.

Posted by: Brian Swift Feb 23 2018, 08:46 PM

Anyone have suggestions on how I can determine the rotation (and tilt) of the JunoCam CCD relative to Juno's spin axis? I'm new to SPICE, but have played around with Frame Transformations on WebGeocalc. Unfortunately, my linear algebra and geometry skills have gotten a little rusty, being underused for a few decades. Thanks.

Posted by: Gerald Feb 24 2018, 02:17 AM

Off the cuff, I can only say that the deviation from orthonormality is small, and I'm not quite sure whether the values are perfectly constant, since Juno's spin axis might undergo tiny changes. I've run several calibration series on marble movie images, and applied the results to images near these series when highly accurate alignment was called for. But for the alignment of close-up images, the deviation from orthonormality usually didn't play an obvious role. Errors induced by these inaccuracies are probably on a subpixel level.
I'm presuming that the main source of residual misalignments in my processing is inaccuracies of my optical distortion model. I'm working on this question in small time slices between all the other event-driven activities. However, I don't rely on any of the published ik versions thus far. I think that the Brownian approach is inherently unstable for wide angles, and I'm inclined to do the math for a different approach, maybe together with an article, if successful. I also cannot entirely rule out some small chromatic aberration, or a tiny deviation of the CCD pixel grid from square.
Other possible causes of small misalignments might be deviations of Jupiter from its idealized IAU shape, small oscillations of Juno's solar panels, or a small varying torque-free precession of Juno's spin axis.
There are several degrees of freedom that may partially cancel each other, so it's not quite trivial to pin down the actual physical settings with high accuracy.

Posted by: mcaplinger Feb 24 2018, 03:29 AM

QUOTE (Brian Swift @ Feb 23 2018, 12:46 PM) *
Anyone have suggestions on how I can determine the rotation (and tilt) of the JunoCam CCD relative to Juno's spin axis?

If you use SPICE, all of this is managed for you by the frames system. If you have a vector in the JUNO_JUNOCAM coordinate system (which is formed by the camera boresight and the CCD line and sample directions) and you want to transform it into some other system, either inertial or not, you can just call pxform to get a rotation matrix for a particular time, and then call mxv to transform a vector from one coordinate system to another.

The raw values for what these transforms consist of are in the "frames kernel" https://naif.jpl.nasa.gov/pub/naif/JUNO/kernels/fk/juno_v12.tf but all you need to do when using SPICE is load this kernel, and the software does the rest.

As Gerald says, these angles are pretty small, so for many purposes you can just assume that Junocam is perfectly pointed along the spacecraft's -X axis.

You don't need to understand mathematically how a rotation matrix or matrix-vector multiplication work to use SPICE effectively, although it doesn't hurt.
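
For example, in Python via spiceypy (kernel file names are illustrative; an LSK, SCLK and a C kernel covering the requested time must be loaded, and +Z is taken as the boresight here):

CODE
import spiceypy as spice

spice.furnsh("juno_v12.tf")      # frames kernel
spice.furnsh("naif0012.tls")     # leapseconds kernel, needed by str2et
spice.furnsh("juno_sclk.tsc")    # SCLK kernel (illustrative name)
spice.furnsh("juno_sc_rec_170101_170107_v01.bc")  # C kernel (example)

et = spice.str2et("2017-01-02T12:00:00")
# rotation matrix taking JUNO_JUNOCAM vectors into the inertial J2000 frame
mat = spice.pxform("JUNO_JUNOCAM", "J2000", et)
# transform a vector with that matrix; here the assumed camera boresight
boresight_j2000 = spice.mxv(mat, [0.0, 0.0, 1.0])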

Posted by: Brian Swift Feb 26 2018, 08:22 AM

Mike, Gerald - Thanks for the replies.

Rotation and tilt relative to the spin axis are two of the extrinsic parameters produced by my camera modeling process, and they end up being adjustable parameters in my image formation pipeline. So I was curious how much the spin axis has changed between the imagery used for camera modeling (2016026, 2016040, 2016130) and the perijove imagery. Changing the rotation by 0.1 degree definitely has a noticeable effect on the assembled imagery.

Gerald - in addition to your misalignment candidates, one I've wondered about is the effect of onboard compression on imagery with high gradients, such as point sources (which I use to produce the camera model) and Jupiter's limb (which I use to judge the alignment).

I also think there is a non-radially-symmetric component to the misalignment.
I'm currently capturing that in the p1, p2 parameters of my Brown model.

Posted by: Gerald Feb 26 2018, 12:58 PM

For DCT compressed images, you get the usual type of compression artifacts, of course.

Here is a selected calibration run for PJ09 images; I think it's the #001 to #041 RGB approach images:


The top right diagram shows the inferred rotation around the optical axis in radians, under constant assumptions for Brownian K1, K2 and a fixed z/x scale factor.
You'll see that this fluctuates between -0.1 and +0.1 degrees (between -0.002 and +0.002 radians).
This calibration run used a 7-parameter camera model with 3 assumed and 4 inferred parameters.
I've also tested several model extensions with more parameters, but I have no final opinion yet. There are many ways to get almost-aligned images. I haven't yet had the time to implement a more global RMS error minimization method applied to a more complex camera model and many images at the same time.
As long as I don't need to work to full scientific accuracy, the model assumptions I'm currently applying return useful images. So the pressure to squeeze out another digit of relative accuracy isn't very high at the moment.
But there are some other tasks until mid-March that can't wait. I see some hope of being able to continue with more complex calibration runs in the second half of March, before PJ12.

Posted by: mcaplinger Feb 26 2018, 03:37 PM

QUOTE (Brian Swift @ Feb 26 2018, 12:22 AM) *
So, I was curious how much the spin axis has changed

The spacecraft sends down its orientation as determined by the star trackers at fairly high resolution and these data are assembled into the C kernels, so it's not like this is some big mystery if you are using SPICE.
QUOTE
I’ve wondered about is the effect of onboard compression...

Certainly there is a small effect, especially for the high compression factors we were using for star imaging in cruise. But I wouldn't say it's any more significant than other sources of error, like motion blur from spacecraft nutation. We didn't even bother to compute centroids for our analysis, we just used the eyeball location of stars. As I've said before, expecting subpixel registration without any manual adjustment steps is probably not achievable.

Posted by: Brian Swift Feb 26 2018, 06:29 PM

QUOTE (Gerald @ Feb 26 2018, 04:58 AM) *
This calibration run used a 7-parameter camera model with 3 assumed and 4 inferred parameters.

Does this mean all your inferred (extrinsic) parameters are only derived from the imagery and don't rely on SPICE geometry?

Posted by: Gerald Feb 26 2018, 10:25 PM

Inferred means inferred from raw JunoCam images.
Assumptions can be from any source, including SPICE, or just guessed, or arbitrary.

Posted by: Brian Swift Feb 26 2018, 10:30 PM

Note for anyone else who hasn't noticed...
The coverage dates of the SPICE kernels at http://wgc.jpl.nasa.gov:8080/webgeocalc/#FrameTransformation - which currently only go through 5/2017 (PJ6) when the "JUNO" kernel set is selected - can be extended for PJ7-PJ11 by:
1. Initially select JUNO
2. Select "Manual" (at the bottom of the pulldown) "Choose Kernels..."
3. Navigate to "JUNO/kernels/ck"
4. Select the "juno_sc_rec_yymmdd_yymmdd_v01.bc" with a date range covering the desired time.

Posted by: mcaplinger Feb 26 2018, 10:36 PM

QUOTE (Gerald @ Feb 26 2018, 02:25 PM) *
Inferred means inferred from raw JunoCam images.
Assumptions can be from any source, including SPICE, or just guessed, or arbitrary.

I'm not sure that answers the original question.

To the extent that I understand Gerald's processing flow, he uses the spacecraft position data (SPK file) at least in some cases, but never the orientation data (CK).

Posted by: Gerald Feb 26 2018, 11:27 PM

I'm not quite sure any more what we are talking about.
For the geometrical camera calibration, I don't need SPICE data, but I can use them.
For processing close-up images, or for rendering illumination-adjusted images, I'm using sets of 8 dumped (saved) trajectory position files with different frame settings, which implicitly contain all relevant kernel information, including CK.
But the processing is considerably more complex than the processed images might suggest.

Posted by: mcaplinger Feb 27 2018, 03:32 AM

QUOTE (Gerald @ Feb 26 2018, 03:27 PM) *
For processing close-up images, or for rendering illumination-adjusted images, I'm using sets of 8 dumped (saved) trajectory position files with different frame settings, which implicitly contain all relevant kernel information, including CK.

I guess I assumed since you independently computed the spin rate and axis from the marble images that your position information was in an inertial or Jupiter-fixed frame and didn't include any knowledge of spacecraft orientation, but I guess if you have position in a spacecraft-fixed frame then the orientation is convolved into that.

That aside, I've tried to document the standard processing flow in the Junocam I kernel. There's some evidence that this works to 1-2 pixel accuracy, and I don't think one can expect anything better than that without doing limb fits or some kind of ad hoc color registration.

Posted by: Brian Swift Mar 1 2018, 04:15 AM

QUOTE (Gerald @ Feb 26 2018, 04:58 AM) *
Here is a selected calibration run for PJ09 images; I think it's the #001 to #041 RGB approach images:

From the drawing conclusions from a single data point department...

Left is the first image of the PJ09 approach, showing a large glitch with my default rotation; right uses the .0025 rotation from your graph.

Posted by: Brian Swift Mar 1 2018, 05:30 AM

Gerald, do you have a RotOffestZ estimate for JNCE_2017244_08C00121 easily accessible?
It's an image I use often in development and testing.

Posted by: Gerald Mar 1 2018, 12:08 PM

The effect in your comparison is indeed larger than I remembered. For the small marble movie images, a misalignment of one or two pixels can be rather obvious.
For PJ08, I've performed calibration runs only for the early approach phase, but with several different sets of assumptions.
For the same assumptions as used in the above PJ09 calibration run, the best-fit data have been around .002 radians of rotation around JunoCam's optical axis.
But this might have convolved unmodeled s/c or camera properties specific to the position of Jupiter in the early approach images.

During the first few hours after a s/c op, Juno may oscillate a bit, before it's damped down to the level of statistical noise.

Posted by: mcaplinger Mar 1 2018, 03:46 PM

QUOTE (Gerald @ Mar 1 2018, 04:08 AM) *
During the first few hours after a s/c op, Juno may oscillate a bit...

True. If you're using the C kernel, any oscillation should be captured in the kernel. If you're not using the C kernel, then all bets are off. I still find it hard to tell in this whole exchange if people are using the C kernel, and if so, how and when.

If there are errors in the angular offsets for Junocam in the I kernel and frames kernel, or if there's spacecraft motion not being captured in the C kernel, I'd like to know about those problems so they can be fixed.

Posted by: Brian Swift Mar 1 2018, 05:24 PM

QUOTE (Gerald @ Mar 1 2018, 04:08 AM) *
The effect in your comparison is indeed larger than I remembered.

In my original PJ09 approach movie (with all frames processed using
the same orientation parameters), it was only this first frame that
had a large glitch. Since your graph showed it as an outlier, I was curious if
using your rotation value would correct it.

Posted by: Brian Swift Mar 1 2018, 06:06 PM

QUOTE (mcaplinger @ Mar 1 2018, 07:46 AM) *
I still find it hard to tell in this whole exchange if people are using the C kernel, and if so, how and when.

I currently only use the SPICE "AV Magnitude" as the spin rate in my image production pipeline.
(This pipeline is fairly basic, not doing image rectification.)

I use no SPICE info in my lens modeling process.

Posted by: Gerald Mar 1 2018, 11:06 PM

QUOTE (mcaplinger @ Mar 1 2018, 04:46 PM) *
... I still find it hard to tell in this whole exchange if people are using the C kernel, and if so, how and when...

In my usual way of processing images, I'm using C kernel information only for a small number of selected trajectory positions, and I extrapolate s/c and camera pointing on the basis of the actual images.
With some effort, I could probably replace my own pointing calculations with data retrieved from SPICE, or I could use more or redundant trajectory positions with C kernel information and extrapolate to overlapping trajectory fragments for consistency tests. But as of yet, the processing doesn't allow a fully conclusive error analysis of SPICE data. Easiest to check would be fluctuations of the rotational phase angle with respect to the actual images, using SPICE pointing information from very close to the time an image has been taken, e.g. SPICE data at the image start or stop time (rounded to one or five seconds).
In general, for more recent perijoves I've needed fewer adjustments of the data retrieved from SPICE than before. That may be partially due to more practice, but probably also due to improved quality of the data.

Posted by: Gerald Mar 2 2018, 10:51 AM

Without any warranty, and not even tested: here, in terms of a manually modified SPICE instrument kernel, are the entirely heuristic Brownian distortion parameters and focal length, in ik notation, that I've mostly been working with over the last year or so.
Attached: juno_junocam_v02_ge01.txt (15.32K)

(rename extension from txt to ik)

Those are without any formal empirical justification, and subject to change without prior notice.

I'm planning to work on a refined and empirically justified version, but not in the short run.

Posted by: Brian Swift Mar 2 2018, 08:48 PM

Juno28g, a JunoCam raw processing pipeline implemented in Mathematica, is available under permissive open source license at https://github.com/BrianSwift/JunoCam

This basic pipeline implements pixel value decompanding (see the sketch below), lens distortion correction, equirectangular projection, and assembly of raw frames into a final image. Only a per-channel scaling is applied for color balance, leaving beautification in Photoshop to the user’s artistic interpretation.

Produces usable images with defaults, but the appearance of seams can be reduced by refining parameters related to Juno's spin rate and JunoCam's orientation relative to the spin axis.

Works with MissionJuno website formatted -raw.png and PDS .IMG or .IMG.gz raw image files.

Output format is 16-bit PNG.

Produces approach/departure movies.

The default lens model is Brown K1,K2,P1,P2. However, more baroque models involving chromatic aberration and principal point offsets are supported.
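On the decompanding step mentioned above: the DNs are companded to 8 bits with a roughly square-root law (the SQROOT mode noted in the PDS labels), so decompanding maps them back to an approximately linear 12-bit scale. A minimal sketch, assuming a pure square-root law; the actual flight companding table may differ:

CODE
import numpy as np

# Rough square-root-law decompanding sketch: 8-bit companded DNs back to a
# linear 12-bit range. The real pipeline uses a lookup table; this simple
# quadratic is only an approximation for illustration.
def decompand_sqrt(dn8):
    dn8 = np.asarray(dn8, dtype=np.float64)
    return np.round((dn8 / 255.0) ** 2 * 4095.0).astype(np.uint16)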

PJ11_27 processed with this pipeline:

Posted by: Sean Mar 3 2018, 02:11 AM

Thanks for sharing this Brian...very keen to wrap my head around Mathematica in order to get my mitts on some equirectangular projection!

Posted by: Brian Swift Mar 3 2018, 05:21 PM

QUOTE (Sean @ Mar 2 2018, 06:11 PM) *
Thanks for sharing this Brian...very keen to wrap my head around Mathematica in order to get my mitts on some equirectangular projection!

Your results are looking great.

If seams/limb-steps are noticeable in a high-altitude image, try changing rExtrinsic to 0.0 or -0.002.
This may not help with seams in lower altitude images since the pipeline doesn't compensate for parallax.

Posted by: Sean Mar 3 2018, 10:00 PM

Ah thankee! I've fed 214 raw files into the script so it should be finished in about 20 hours.

Do the seams refer to the fringing on high contrast features? I've taken to aligning/puppet-warping these by hand for selected files.

Posted by: Brian Swift Mar 3 2018, 10:43 PM

QUOTE (Sean @ Mar 3 2018, 02:00 PM) *
Do the seams refer to the fringing on high contrast features? I've taken to aligning/puppet-warping these by hand for selected files.

Seams would be sharp discontinuities along vertical lines, possibly starting at a misalignment along the terminator.

The fringing may be due to misalignment caused by rExtrinsic being off, or possibly by parallax shifts between framelets,
the latter of which I've only recently started thinking about. While I haven't tried this yet, if the problem is due to parallax,
you could try changing spiceYawRate, which might improve alignment in some areas and make it worse in others, and then
you could merge the good regions. A 1/27 degree change is about one pixel.
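(For scale: JunoCam's roughly 58-degree field of view spans 1648 pixels, about 0.035 degrees per pixel, which is where the 1/27 degree ≈ one pixel figure comes from.)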

Posted by: Brian Swift Mar 4 2018, 07:16 AM

QUOTE (Brian Swift @ Mar 3 2018, 02:43 PM) *
Seams would be sharp discontinuities along vertical lines, possibly starting at a misalignment along the terminator.

Meant limb, not terminator.

The left image (processed with default rExtrinsic = 0.00159) shows the limb discontinuity, and seams stretching vertically from the discontinuities.
Right image (processed with rExtrinsic = -0.002) reduces the discontinuities, seams, and also the limb coloring artifacts.
(Unfortunately, in this case, reducing the artifact on one side increases it on the other side [not shown]).

Posted by: Sean Mar 4 2018, 05:07 PM

Thank you for the explanation.

I have noticed vertical offsets on each of the R, G, B channel strips which I show in a tweet here; https://twitter.com/_TheSeaning/status/970341243422232582

I presume this has to do with Juno's orbit and the parallax shifts between framelets you mentioned?

I would rather have accuracy in foreground areas since pixel discrepancies fall off with distance, which would be fine for my purpose at least.

The script has been running merrily since last night with nary a hiccup, it is very impressive!


Posted by: Brian Swift Mar 4 2018, 10:21 PM

QUOTE (Sean @ Mar 4 2018, 09:07 AM) *
I have noticed vertical offsets on each of the R, G, B channel strips which I show in a tweet here; https://twitter.com/_TheSeaning/status/970341243422232582

I presume this has to do with Juno's orbit and the parallax shifts between framelets you mentioned?

I would rather have accuracy in foreground areas since pixel discrepancies fall off with distance, which would be fine for my purpose at least.


I'm not sure which image you are working on in the tweet. But I think the fringing here in PJ11_13 is due to my software's simplifying assumption that the spacecraft doesn't move significantly in the 0.6 sec between red and blue exposures. However, for PJ11_13 the distance to the surface changes from 8289 to 8072 km while the entire image is being collected. Also, disregard what I said about changing spiceYawRate; it probably won't help with this.



QUOTE (Sean @ Mar 4 2018, 09:07 AM) *
The script has been running merrily since last night with nary a hiccup, it is very impressive!

Glad it's proved entertaining. I've been running it over the past few nights.

Posted by: Sean Mar 4 2018, 10:31 PM

The example image I used was PJ08_118

Posted by: Sean Mar 5 2018, 02:50 AM

'Perijoves'

214 images, 11 perijoves, 3.15 billion pixels http://www.gigapan.com/gigapans/206659 *update: that includes all the black space I added, more like 1.8 billion pixels*

https://flic.kr/p/GL7AWL

Posted by: Brian Swift Mar 5 2018, 04:38 AM

QUOTE (Sean @ Mar 4 2018, 06:50 PM) *
'Perijoves'
214 images, 11 perijoves, 3.15 billion pixels http://www.gigapan.com/gigapans/206659

Very cool! You've run more perijove data through the pipeline than I have.

Posted by: Brian Swift Mar 5 2018, 05:17 AM

QUOTE (Sean @ Mar 4 2018, 06:50 PM) *
'Perijoves' 214 images

If you have all the meta-data and .mx.gz files, you could try evaluating "Create Movie"
(after manually executing Startup, Controlling Parameters, Definitions and Functions,
and skipping "Raw to Final Image Processing Pipeline".)

I have no idea what it will do when given multiple perijoves.

Posted by: Brian Swift Mar 5 2018, 09:19 AM

QUOTE (Sean @ Mar 4 2018, 09:07 AM) *
I have noticed vertical offsets on each of the R, G, B channel strips which I show in a tweet here; https://twitter.com/_TheSeaning/status/970341243422232582

I added a feature intended to help anyone who is interested in shifting color channels around to remove fringes, or playing with seams...

exportFullFrames: Export each corrected full frame image as a separate PNG file.
Load these into Photoshop as layers via "File -> Scripts -> Load Files into Stack...", and then select all layers and change the blending mode to "Lighten".

Also, if you are manually removing dark spots, it might be easier (more automatable) to perform the edits in the raw file before running it through the pipeline; see the sketch below.
(Fixes should repeat every 3*128=384 lines.) Just don't change the raw file name; the pipeline parses out the various fields encoded in the name.
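As a concrete example of that kind of pre-pipeline edit, a hypothetical sketch assuming numpy and Pillow; the filename and blemish coordinates are illustrative only:

CODE
import numpy as np
from PIL import Image

# Hypothetical sketch: repeat a dark-spot fix in every framelet set of a
# raw file. Each R/G/B set spans 3*128 = 384 lines, so the same (row, col)
# within a set recurs every 384 lines. The file is written back under its
# original name so the pipeline can still parse the fields encoded in it.
fname = "hypothetical-raw.png"
raw = np.array(Image.open(fname))
row_in_set, col = 200, 1000              # illustrative blemish location
for top in range(0, raw.shape[0] - 383, 384):
    raw[top + row_in_set, col] = raw[top + row_in_set, col - 1]  # neighbor fill
Image.fromarray(raw).save(fname)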

Posted by: Bjorn Jonsson Mar 5 2018, 11:22 PM

QUOTE (mcaplinger @ Mar 1 2018, 03:46 PM) *
True. If you're using the C kernel, any oscillation should be captured in the kernel. If you're not using the C kernel, then all bets are off. I still find it hard to tell in this whole exchange if people are using the C kernel, and if so, how and when.

If there are errors in the angular offsets for Junocam in the I kernel and frames kernel, or if there's spacecraft motion not being captured in the C kernel, I'd like to know about those problems so they can be fixed.

There are definitely differences between how we are doing this. My processing differs in some ways from what Gerald or Brian (or anyone else) is doing, although the fundamentals should be the same.

Regarding C kernels, I always use orientation data from the C kernel for every R/G/B set of framelets (typically 25-40 sets per image), and I do the same for the spacecraft position. If possible, as a 'sanity check', I would be interested in knowing the typical pointing errors I should be getting if I use START_TIME+0.068 seconds plus 'frame number' times (INTERFRAME_DELAY+0.001) as the time when extracting the pointing from the C kernel (I sometimes adjust the 0.068 value slightly); a sketch of this timing appears below. I determine the pointing errors using limb fits in images that have been geometrically corrected using the appropriate Brownian distortion parameters. The typical pointing error I get is ~10 pixels, but this varies a bit (it's rarely less than ~3 or bigger than 20). The pointing correction varies a bit from the first framelet set to the last (using a constant correction would increase the artifacts at the limb that have been discussed in the preceding posts, and the color at the limb near the top and/or bottom can get messed up).

I think everything I'm doing works correctly, partially because for images where I know the pointing to be accurate I get the expected result, i.e. no pointing error (example: the Cassini image subset where C-smithed C kernels are available). However, I'm not quite as sure as I'd like to be that everything is correct. The reason is that debugging this thing isn't completely trivial when I don't know exactly which result I should be getting (in particular, I don't know how big the pointing error really is). However, it's encouraging that I have *probably* noticed artifacts in images not processed by me that could indicate pointing errors similar to what I've been getting.

The pointing correction can be omitted (and I sometimes omit it). However, doing so can result in obvious color artifacts near the limb while artifacts far from the limb are much smaller and often not of significance unless you want very high geometric accuracy.
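For concreteness, the timing recipe above as a minimal spiceypy sketch; the metakernel, START_TIME, interframe delay, and framelet count below are placeholders, not values from any particular image:

CODE
import spiceypy as spice

# Sketch of the per-framelet-set C kernel lookup described above: camera
# pointing at START_TIME + 0.068 s plus k*(INTERFRAME_DELAY + 0.001 s).
spice.furnsh("juno.tm")                 # hypothetical metakernel (LSK/FK/CK...)
start_time = "2018-04-01T09:00:00"      # stand-in for the label START_TIME
interframe_delay = 0.371                # stand-in for the label value, seconds
et0 = spice.str2et(start_time) + 0.068  # fixed start-time bias
for k in range(30):                     # ~25-40 framelet sets per image
    et = et0 + k * (interframe_delay + 0.001)
    cmat = spice.pxform("J2000", "JUNO_JUNOCAM", et)  # pointing for set k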

Posted by: mcaplinger Mar 6 2018, 03:06 AM

QUOTE (Bjorn Jonsson @ Mar 5 2018, 03:22 PM) *
Regarding C kernels, I always use orientation data from the C kernel for every R/G/B set of framelets (typically 25-40 sets per image) and I also do this for the spacecraft position.

This is the only method I would have imagined would be adequate without a lot of manual tweaking, and it's the way the MSSS processing works. As far as I can tell from reading Brian's Mathematica code (I don't have Mathematica and the raw ASCII is not the easiest thing to read) he isn't using the C kernel data and he only uses the spacecraft position at the beginning of imaging, but his results are very good so perhaps this doesn't matter in most cases.

As for the expected errors, using our parameters the position error of the galilean satellites on orbit 1 had standard deviations of about 1 in X and about 2.8 in Y. I attribute the higher Y error to timing slop, but we could have residual geometric distortion error because the satellites were probably near the optic axis where distortions are lower. For obscure reasons, our own standard processing isn't using our updated parameters yet, though.

Posted by: Brian Swift Mar 6 2018, 04:03 AM

QUOTE (mcaplinger @ Mar 5 2018, 07:06 PM) *
.. As far as I can tell from reading Brian's Mathematica code (I don't have Mathematica and the raw ASCII is not the easiest thing to read)

Ewww (that stuff isn't pretty). If you download and install the much too large Wolfram CDF Player (http://www.wolfram.com/cdf-player/), you'll be able to view the document properly formatted.
(Hover over the right side to show section expansion controls.)

QUOTE
he isn't using the C kernel data and he only uses the spacecraft position at the beginning of imaging, but his results are very good so perhaps this doesn't matter in most cases.

My simple pipeline doesn't do map projection, so it doesn't use position. The only thing it uses from SPICE is Angular Velocity Magnitude for the spin rate.
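For anyone wanting to pull that single number from SPICE, a sketch with spiceypy; the metakernel and time are placeholders (-61 is Juno's spacecraft clock ID, -61000 the spacecraft's CK frame class ID):

CODE
import numpy as np
import spiceypy as spice

# Sketch: spin rate as the magnitude of the angular velocity returned
# alongside the pointing by the C kernel reader.
spice.furnsh("juno.tm")                    # hypothetical metakernel
et = spice.str2et("2018-04-01T09:00:00")   # hypothetical image time
sclkdp = spice.sce2c(-61, et)              # ET -> encoded spacecraft clock
cmat, av, clkout = spice.ckgpav(-61000, sclkdp, 1.0, "J2000")
spin_deg_per_s = np.degrees(np.linalg.norm(av))  # ~12 deg/s at Juno's 2 rpm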

Posted by: Gerald Mar 6 2018, 05:03 AM

For my drafts, like http://junocam.pictures/gerald/uploads/20171106a/, I don't use SPICE data, except data similar to those in the ik. It's a fixed processing pipeline that just uses the PJ number as a PJ specific parameter in order to know which directory to process. The other parameters are so similar between perijoves, that only higher quality requirements need a specific calibration.
The principle is similar to Brian's. Calibration runs are optional and based merely on distant Jupiter images.
This simplified processing method ends where data about solar illumination, other 3D data, or accurate alignment of close-up features is required. Then SPICE trajectory and pointing data simplify processing considerably. For New Horizons, I made some experiments with inferring orbital trajectory data and mass estimates of Pluto and Charon from images. But that's rather complicated, time-consuming, and not necessarily very accurate. Especially Jupiter, with its oblate shape, would add another challenge.

That said, Brian accomplished an important milestone in understanding JunoCam's way of taking images.
This understanding is of value in itself, since it prepares you for the next level of challenges. And beyond some point, when you explore unknown territory, nobody will be able to give you useful help anymore, and you'll need to rely on your own acquired capabilities.

Posted by: Brian Swift Mar 18 2018, 04:38 PM

Mike, Can you release to public domain the photo of the sensor and filters that is Figure 12 in "Junocam: Juno’s Outreach Camera"?
If not, I'll remove the copy embedded in the comments of my pipeline in the next release.

Posted by: mcaplinger Mar 23 2018, 06:11 PM

QUOTE (Brian Swift @ Mar 18 2018, 08:38 AM) *
Mike, Can you release to public domain the photo of the sensor and filters that is Figure 12 in "Junocam: Juno’s Outreach Camera"?

The paper is open access: "This article is distributed under the terms of the Creative Commons Attribution License
which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the
source are credited."

Posted by: Brian Swift Mar 24 2018, 12:44 AM

QUOTE (mcaplinger @ Mar 23 2018, 11:11 AM) *
The paper is open access...

Thanks Mike. Can you make an original-resolution version of the image available?

Also, do you know how accurate the .001s adjusted INTERFRAME_DELAY is?
I've only seen the values of the INTERFRAME_DELAY and adjustment given
to 1ms of precision, but I believe the accuracy is significantly better than that.



Posted by: mcaplinger Mar 24 2018, 01:22 AM

QUOTE (Brian Swift @ Mar 23 2018, 04:44 PM) *
Thanks Mike. Can you make an original resolution version of the image available.

I guess, but it's just eye candy without practical benefit, right?
QUOTE
Also, do you know how accurate the .001s adjusted INTERFRAME_DELAY is?
I've only seen the values of the INTERFRAME_DELAY and adjustment given
to 1ms of precision, but I believe the accuracy is significantly better than that.

If I understand your question, it's not. The delay is commanded in multiples of 20000 cycles of a 20 MHz clock, and the 1 msec offset was caused by an off-by-one misunderstanding in the commanding of the counter. So to the accuracy of the clock oscillator, which we don't know anything about other than it's within 20 ppm or so, the adjustment really is 1.0000 msec. It's probably slightly different, but we have no internal way of assessing that.
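(Worked out: 20000 cycles / 20 MHz = exactly 1.0000 msec per count, so an off-by-one in the commanded count shifts the delay by one full millisecond, subject only to the ~20 ppm oscillator tolerance.)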

Posted by: Brian Swift Mar 24 2018, 05:21 AM

QUOTE (mcaplinger @ Mar 23 2018, 05:22 PM) *
I guess, but it's just eye candy without practical benefit, right?

I'm thinking of creating a diagram out of it by correcting the perspective and overlaying the filter readout ranges.

QUOTE
If I understand your question, it's not. The delay is commanded in multiples of 20000 cycles of a 20 MHz clock, and the 1 msec offset was caused by an off-by-one misunderstanding in the commanding of the counter. So to the accuracy of the clock oscillator, which we don't know anything about other than it's within 20 ppm or so, the adjustment really is 1.0000 msec. It's probably slightly different, but we have no internal way of assessing that.

The above is exactly what I was curious about. My modeling is currently estimating a value of 0.001032 sec for the adjustment.

Posted by: mcaplinger Mar 24 2018, 03:55 PM

QUOTE (Brian Swift @ Mar 23 2018, 09:21 PM) *
My modeling is currently estimating a value of 0.001032 sec for the adjustment.

Well, I can't say that's impossible, but it's about 30x larger than the largest clock error I would expect given the specification of the clock oscillator.

An error in interframe delay and/or spin rate would lead to increasing downspin errors over the course of a full image acquisition. Our star and galilean satellite imaging analyses don't show those to my eye (if I'm doing the math right, the error you state would build up to 82*0.000032*371 or almost a second of error over a full spin, which would be easily noticeable) but I don't claim to know for sure.

Posted by: Gerald Mar 26 2018, 02:16 AM

82 * 32µs = 2.624ms. This would be a little less than 1 pixel. You probably won't be much more accurate than that on the basis of stars when working with a single swath.
But a relative error of 32µs / 371ms = 8.6e-5 would add up to about 1 sec within a little more than three hours. The clock is certainly more accurate by orders of magnitude.
So, the error is much more likely in the geometrical camera calibration than in the clock.

Posted by: Bjorn Jonsson Mar 26 2018, 11:27 PM

The START_TIME value is also not accurate and this can be of significance. From the juno_junocam_v02.ti kernel:

QUOTE
We have found that there is a fixed bias of 61.88 msec in the start time with a possible jitter of order 20 msec relative to the reported value...

Posted by: Brian Swift Apr 7 2018, 01:43 AM

Juno28g, a JunoCam raw processing pipeline implemented in Mathematica, has been updated. It is available under a permissive open source license at https://github.com/BrianSwift/JunoCam

Changes:


Posted by: Sean Apr 7 2018, 10:19 AM

Here is my first pass using Brian's updated pipeline. I can confirm memory burps when upscaling.

PJ12_100
https://flic.kr/p/25USKtr


Posted by: Brian Swift Apr 7 2018, 06:10 PM

QUOTE (Sean @ Apr 7 2018, 03:19 AM) *
Here is my first pass using Brian's updated pipeline. I can confirm memory burps when upscaling.

A bit of a hack workaround for memory issues is to process a subset of frames with the frameRangeToProcess parameter,
which is what I used for processing the Io full-spin images at 90 pix/deg.

Posted by: Brian Swift Apr 10 2018, 05:57 PM

Juno24u, a JunoCam camera modeling Mathematica notebook, is available under permissive open source license at https://github.com/BrianSwift/JunoCam

This notebook produces the lens parameters used in the Juno28g raw processing pipeline.

From 29 CruisePhase and Orbit_00 data takes, 519 point source correspondences are automatically identified.

Modeling optimizes the lens parameters to minimize the mean of the distances between paired points forward-transformed to equirectangular coordinates.

Various plots of residuals are produced to help evaluate the model and attempt to identify systematic modeling deficiencies.

This is a plot of residual distances (in pixels) between pairs of forward-transformed points:



These two plots show residuals in the Along-Spin and Cross-Spin directions.





The notebook contains many more plots (with tooltips for each point). These are viewable when the notebook is opened with Wolfram CDF Player available from https://www.wolfram.com/cdf-player/

A CSV table of point correspondences is produced for anyone else building a JunoCam camera model. This table is also available at https://github.com/BrianSwift/JunoCam

Additionally, PDFs annotating raw files are produced in which point sources used in modeling are circled and neighboring values of flattened and raw data are displayed.

A rasterized greyscale version of an annotated raw calibration file is available at https://www.missionjuno.swri.edu/junocam/processing?id=4468

When this data is input to a JunoCam raw processing pipeline, the resulting image can help evaluate how well the pipeline's lens/camera model aligns Red, Green, and Blue framelets.

The output produced by the current Juno24g pipeline is available at https://www.missionjuno.swri.edu/junocam/processing?id=4469

Posted by: Bjorn Jonsson Jun 6 2018, 12:35 AM

I'm currently making a few improvements to my JunoCam-related software and now have some questions for Mike (Gerald might also be familiar with this).

Yesterday I noticed that recently a very interesting correction was made to how the USGS ISIS3 software handles JunoCam images.

This was interesting to me because having processed a lot of JunoCam images I have gradually started to suspect there still was a small, systematic geometric error in my processing pipeline. In particular the (small) correction I have to apply to the camera pointing is almost always negative in the sample direction in image coordinates - this is highly suspicious. Also I usually have to modify the START_TIME (only adding 0.06188 and modifying INTERFRAME_DELAY usually isn't quite accurate enough).

But now I see from the most recent ISIS3 files (in particular the junoAddendum005.ti file located in the iak directory where ISIS3 stores the JunoCam SPICE kernels) that the B/G/R framelet offsets from the top of the 1648x1200 frame are 441.52, 596.52 and 751.52, respectively. Before this recent correction ISIS3 used the values 456, 611 and 766, respectively. Interestingly these are also the values I have been using in my software until now.

To make absolutely sure I now have everything correct, below is an image showing the locations of the framelets within a single 1648x1200 pixel image as the image would appear if I wanted to handle this as if the image was from a regular framing camera:



Is the above image correct? (in particular the exact vertical position of the B/G/R framelets).

[EDIT June 9, 2018: The image turned out to be incorrect, the vertical position of the framelets isn't correct]

Another question to make sure I understand everything correctly - this is also based on numbers from the above mentioned junoAddendum005.ti file. Assuming Juno always has perfect pointing and you command the spacecraft to point JunoCam (or more accurately, JunoCam's optical axis) at a star, would the star be at pixel (814.21, 600) in the resulting image? Normally you would expect the star to be at (824, 600) (i.e. x=number of pixels in the sample direction divided by 2) but these images have dummy pixels etc. at the image edges.

The source of the above confusion regarding the filter offsets is probably this diagram from the juno_junocam_v02.ti kernel:

CODE
      --- 0,0--------------------------------------------.
  4.94 deg | METHANE 128 pix      *                      |     --------
      ---  `-----------------------------------------1648,128   | 245 pixels
                                                                |
      --- 0,0--------------------------------------------.      |
  4.94 deg | BLUE    128 pix      *                      |      |  ---
      ---  `-----------------------------------------1648,128   |   |80 pixels
                             +Zjc x-------------> +Xjc         --- ---
      --- 0,0---------------------|----------------------.      |   |75 pixels
  4.94 deg | GREEN   128 pix      *                      |      |  ---
      ---  `----------------------|------------------1648,128   |
                                  |                             |
      --- 0,0---------------------|----------------------.      | 230 pixels
  4.94 deg | RED     128 pix      *                      |     --------
      ---  `----------------------|------------------1648,128
                                  |
                                  V +Yjc


I interpreted this to mean that the filter offsets relative to the top of the 1648x1200 pixel frame were 456, 611 and 766 and apparently the ISIS3 software also did this until it was corrected last month. Either the diagram above is very easy to misunderstand or it is incorrect (I suspect the latter).

Posted by: mcaplinger Jun 6 2018, 12:50 AM

QUOTE (Bjorn Jonsson @ Jun 5 2018, 04:35 PM) *
Either the diagram above is very easy to misunderstand or it is incorrect (I suspect the latter).

I suggested that that diagram be removed completely as I found it more confusing than helpful; I'm actually not sure if it's wrong or just incomplete. At any rate, you're supposed to be using the values in INS-6150*_DISTORTION_Y. I tried to be as clear and specific as I could be in the comments and pseudocode in the "optical distortion" section. I can't immediately map what I said there to numbers like 441.52; since we don't read out, much less transmit, the entire frame, I don't spend a lot of time thinking about what the coordinates of the entire frame are any more.

I hope that helped. Sorry for any confusion. If you find some ambiguity in the "optical distortion" section that you would like resolved, let me know.
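To make the flavor of that recipe concrete, here is a sketch of a two-coefficient radial model of the Brown type together with its fixed-point inversion; k1 and k2 stand in for tabulated coefficients, coordinates are in pixels relative to the optic axis, and this is an illustration rather than the kernel's literal pseudocode:

CODE
# Radial distortion sketch: forward model plus fixed-point inversion.
def distort(c, k1, k2):
    xd, yd = c
    r2 = xd*xd + yd*yd
    dr = 1 + k1*r2 + k2*r2*r2          # radial scale factor
    return [xd*dr, yd*dr]

def undistort(c, k1, k2, iters=5):
    # invert distort() by fixed-point iteration
    xd, yd = c
    for _ in range(iters):
        r2 = xd*xd + yd*yd
        dr = 1 + k1*r2 + k2*r2*r2
        xd, yd = c[0]/dr, c[1]/dr
    return [xd, yd]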

Posted by: Gerald Jun 6 2018, 05:51 AM

QUOTE (Bjorn Jonsson @ Jun 6 2018, 02:35 AM) *
But now I see from the most recent ISIS3 files (in particular the junoAddendum005.ti file located in the iak directory where ISIS3 stores the JunoCam SPICE kernels) that the B/G/R framelet offsets from the top of the 1648x1200 frame are 441.52, 596.52 and 751.52, respectively. Before this recent correction ISIS3 used the values 456, 611 and 766, respectively. Interestingly these are also the values I have been using in my software until now.

What really counts are the relative values. As long as the relative y-offsets (155) are the same, you can either adjust the y-position of the optical axis, or the absolute values of the readout region. I prefer to leave the absolute pixel positions as they have been and adjust the optical axis instead. But I'd call this a matter of taste.

Posted by: mcaplinger Jun 6 2018, 06:09 AM

QUOTE (Gerald @ Jun 5 2018, 09:51 PM) *
What really counts are the relative values.

I'm not 100% sure what you mean by this. If you use the recipe I describe then you can just use the tabulated DISTORTION_Y values in the kernel and they account for both the readout offsets and the optic axis in one step. If you don't follow my recipe, then you're on your own for figuring out how those are combined: which is more work, no advantage I can see, and more potential error sources.

Posted by: Brian Swift Jun 6 2018, 07:52 AM

I'll speculate the value changes were to fix their implementation of Mike's lens correction.
Subtracting their new offset values from CCD mid-line yields the juno_junocam_v02.ti INS-6150?_DISTORTION_Y values.
600-284.52=315.48
600-441.52=158.48
600-596.52=3.48
600-751.52=-151.52

FWIW, my pipeline converts framelets back to full frames using 456, 611, 766 as the readout positions prior to applying the lens correction.

I didn't see these "Addendum Kernel" files under source control on GitHub, and only found them after doing a partial data install with:
rsync -azv --delete --partial --exclude='kernels/ck' isisdist.astrogeology.usgs.gov::isis3data/data/juno data/

I didn't see an Issue directly related to this change at https://isis.astrogeology.usgs.gov/fixit/issues?set_filter=1

But maybe they discovered the problem while investigating and fixing their most recent JunoCam Issue https://isis.astrogeology.usgs.gov/fixit/issues/5236

Posted by: Gerald Jun 6 2018, 10:01 AM

QUOTE (mcaplinger @ Jun 6 2018, 08:09 AM) *
I'm not 100% sure what you mean by this...

When you are just using the kernel for processing, it doesn't make a difference.
When you are calibrating the optics, it's an advantage to know that the relative positions of the readout regions are reliable constants that don't require calibration, except maybe for determining some chromatic aberration. Those relative positions are only implicitly provided by the DISTORTION_Y values. So the description of the CCD layout is helpful. The settings would be different for a 3-CCD camera, with a possibly individual distortion to be determined for each CCD separately.

Posted by: Bjorn Jonsson Jun 7 2018, 12:10 AM

QUOTE (Brian Swift @ Jun 6 2018, 07:52 AM) *
But maybe they discovered the problem while investigating and fixing their most recent JunoCam Issue https://isis.astrogeology.usgs.gov/fixit/issues/5236

Yes, that's the one. See #6 there ("...Correcting this has also shown that our filter positions on the ccd are incorrect.").

QUOTE (Brian Swift @ Jun 6 2018, 07:52 AM) *
I didn't see these "Addendum Kernel" files under source control on GitHub, and only found them after doing a partial data install with:
rsync -azv --delete --partial --exclude='kernels/ck' isisdist.astrogeology.usgs.gov::isis3data/data/juno data/

That's also how I got them. Both the new version of the kernel (junoAddendum005.ti) and the previous version (junoAddendum004.ti) contain this near the top:

INS-61500_BORESIGHT_LINE = 600
INS-61500_BORESIGHT_SAMPLE = 814.21

However, further down there is an important difference. The new version (junoAddendum005.ti) contains this (I have added the value from the previous version in parentheses):

INS-61504_FILTER_NAME = 'METHANE'
INS-61504_FILTER_OFFSET = 284.52 (291)
INS-61504_FILTER_LINES = 128
INS-61504_FILTER_SAMPLES = 1648

INS-61501_FILTER_NAME = 'BLUE'
INS-61501_FILTER_OFFSET = 441.52 (456)
INS-61501_FILTER_LINES = 128
INS-61501_FILTER_SAMPLES = 1648

INS-61502_FILTER_NAME = 'GREEN'
INS-61502_FILTER_OFFSET = 596.52 (611)
INS-61502_FILTER_LINES = 128
INS-61502_FILTER_SAMPLES = 1648

INS-61503_FILTER_NAME = 'RED'
INS-61503_FILTER_OFFSET = 751.52 (766)
INS-61503_FILTER_LINES = 128
INS-61503_FILTER_SAMPLES = 1648

It's because of this that I *think* the reconstructed full frame that I posted yesterday is correct. The optical axis is at y=600 and the new numbers above are consistent with Mike's code if I understand everything correctly.

BTW the reason I want to reconstruct the full frame as in the image in my above post is that once I've done that I can proceed as if each set of B/G/R framelets is an image from a regular 1648x1200 pixel framing camera. JunoCam is then no longer a special case and I can use more or less the same processing pipeline I'm using for e.g. Cassini and Voyager images when reprojecting the framelets to simple cylindrical projection (with the addition of automatically mosaicking everything in the end to get one map containing data from all of the framelets).

QUOTE (Brian Swift @ Jun 6 2018, 07:52 AM) *
FWIW, my pipeline converts framelets back to full frames using 456, 611, 766 as the readout positions prior to applying the lens correction.

That's also what I have been doing until now. But now I suspect this is incorrect, see e.g. the data above from junoAddendum005.ti. I'll probably do some experiments soon to see what seems to work best but I'm still slightly confused (hopefully I can finish figuring this out this weekend). It was to make absolutely sure I understand everything correctly that I included this elementary question in my post yesterday:

QUOTE (Bjorn Jonsson @ Jun 6 2018, 12:35 AM) *
Assuming Juno always has perfect pointing and you command the spacecraft to point JunoCam (or more accurately, JunoCam's optical axis) at a star, would the star be at pixel (814.21, 600) in the resulting image? Normally you would expect the star to be at (824, 600) (i.e. x=number of pixels in the sample direction divided by 2) but these images have dummy pixels etc. at the image edges.

The value 814.21 mentioned here is in the junoAddendum005.ti file (INS-61500_BORESIGHT_SAMPLE) and also in the juno_junocam_v02.ti kernel (INS-6150#_DISTORTION_X).

I should add that it was when I started processing the JIRAM data back in January that I started to suspect really strongly that something wasn't quite correct in my JunoCam processing pipeline. In the JIRAM images, the position of Jupiter's limb (or disc in global views) always matched (or very nearly matched) what I computed from the kernels.

Posted by: mcaplinger Jun 7 2018, 05:57 AM

QUOTE (Bjorn Jonsson @ Jun 5 2018, 04:35 PM) *
would the star be at pixel (814.21, 600) in the resulting image?

The 814.21 is the pixel coordinate given the 1648-wide image, so any issues about dummy pixels have already been accounted for.

I've got no idea what USGS is doing with the 600. The optic axis isn't at a Y of 600 any more than it's at an X of 1648/2.

I wrote the code in the kernel file so there would be no ambiguity about what needed to be done with managing the framelets. This code has been validated with limb fitting in one direction and ray tracing in the other so I'm pretty confident it's right to a pixel or two. If you want to use a different formalism, that's fine, but don't expect me to debug it -- I had enough trouble with my recommended method.

Posted by: Brian Swift Jun 7 2018, 07:00 AM

QUOTE (Bjorn Jonsson @ Jun 6 2018, 05:10 PM) *
Yes, that's the one. See #6 there ("...Correcting this has also shown that our filter positions on the ccd are incorrect.").

Yup, missed that.

Note, (as Mike mentioned) the DISTORTION_Y values used by the "code recipe" in juno_junocam_v02.ti
bake together the readout offsets with the "optical axis". These can be un-baked as follows:

600+78.48-158.48-64=456.00
600+78.48-3.48-64=611.00
600+78.48-(-151.52)-64=766.00

(600 == ccd center Y, and 64 == half a framelet)

Since my pipeline doesn't attempt to hit specific targets, I haven't been concerned with boresight
and just allow it to be the CCD center.

I'll admit I hadn't realized until this current review of the recipe that DISTORTION_ parameters were specifying
the boresight. I'd assumed they were only specifying the "principal point" offsets to the Brown distortion model,
and that the boresight would be at CCD center.

Mike:
While it shouldn't affect image processing (since it isn't associated with a filter), do you think the value of
INS-61500_DISTORTION_Y (optical axis relative to full frame) should be 678.48 instead of 78.48?

Posted by: Gerald Jun 7 2018, 11:12 AM

Somehow I get the feeling that with almost each post another frame of reference is introduced implicitly, and mixed with the frames already mentioned before.
So, a first step toward defining what everyone is talking about would be an unambiguous definition of the frame(s) of reference you are referring to.

I'm usually working with the two frames of reference that define either the lower left or the upper left pixel of the CCD as (0;0), and that use the horizontal offset between two consecutive pixels as the unit of length.
Other translational options that seem to be discussed here are defining pixel row 600 as 0 for y but column 0 as 0 for x, or, different from that, defining y=600 as the y-position of the optical axis, both with two y-flipped versions.

So, just discussing constants without sufficient context regarding the applied frame of reference will always result in confusion.

The rotation matrix of the camera frame with respect to the Juno frame is yet another question, as is Juno's rotation axis with respect to the Juno frame; both are handled outside JunoCam's instrument kernel.

I'm going to analyse JunoCam's geometry during the next three months, hopefully to subpixel accuracy, but I hesitate to discuss the many dead ends.
In particular, I'll review whether the Brownian distortion model is applicable to JunoCam at all at this level of accuracy, and develop and check a set of alternative families of geometrical distortion models.
I'm planning to discuss this at EPSC in September as part of a talk. But I don't intend to release much intermediate material before then, except the corresponding EPSC abstract.
If the investigation results in any new insight, I may consider writing a peer-reviewed paper, and in that case of course provide data for a refined instrument kernel.

Posted by: mcaplinger Jun 7 2018, 03:20 PM

QUOTE (Brian Swift @ Jun 6 2018, 11:00 PM) *
do you think the value of
INS-61500_DISTORTION_Y (optical axis relative to full frame) should be 678.48 instead of 78.48?

The code doesn't use this value, I didn't put it in the kernel, and I don't know what it's supposed to mean. You could very well be right technically (since for brevity I folded the typical height/2 offset into the constants themselves as Gerald points out.)

In a perfect world, these keywords would have well-defined and documented meanings, and the SPICE toolkit would have functions that used them and performed the basic camera geometry functions. As it is, the toolkit doesn't have those functions and the keywords don't have well-defined meanings, leaving us to do the best we can in the comments.

The comments were written after a big effort to determine the best possible fit from in-flight data and in the face of massive confusion from practically everyone who had ever written code to deal with pushframe systems about how to manage Junocam data. So my attitude was very much "do this specific thing" instead of "here's a whole bunch of information that will allow you to make your own design choices and mistakes and then come back to me and complain about it."

Posted by: algorithm Jun 7 2018, 08:01 PM

Mr Caplinger, you are a living legend, brilliant!

Posted by: Brian Swift Jun 7 2018, 09:21 PM

QUOTE (Gerald @ Jun 7 2018, 04:12 AM) *
I'm going to analyse JunoCam's geometry during the next three months, hopefully to subpixel accuracy, but I hesitate to discuss the many dead ends.
In particular, I'll review whether the Brownian distortion model is applicable to JunoCam at all at this level of accuracy, ...

The results in my 4/10/2018 posting in this thread demonstrate subpixel correction from a Brown-Conrady model using only K1,K2,P1,P2 parameters. I found a similar level of correction from a model consisting of Brown-Conrady K1,K2 with two additional parameters for chromatic aberration. I stuck with K1,K2,P1,P2 for my pipeline based only on a preference for a single correction function rather than three based on color.

I think getting to deci-pixel or centi-pixel alignment, or untangling non-radial geometric corrections from CA and from timing slop, would certainly be an advancement.

If I were to revisit my modeling, I’d change the process I use for generation of control points. Currently intensity centroids are derived from images that have been flattened for detecting point sources. I’d change this to only use the flattened images for detection and coarse location identification, and then produce the intensity centroids from the corresponding locations in the unflattened images.

I look forward to your results.

Posted by: Bjorn Jonsson Jun 7 2018, 09:52 PM

QUOTE (mcaplinger @ Jun 7 2018, 03:20 PM) *
The comments were written after a big effort to determine the best possible fit from in-flight data and in the face of massive confusion from practically everyone who had ever written code to deal with pushframe systems about how to manage Junocam data. So my attitude was very much "do this specific thing" instead of "here's a whole bunch of information that will allow you to make your own design choices and mistakes and then come back to me and complain about it."

Well, that might be me (among others). In hindsight maybe I should have rewritten most of my code from scratch. What happened though is that I had largely finished writing it back in 2013 around the time of the Earth flyby (at that time less information was available). I then modified/corrected it in the months following JOI when more information was available. Big thanks for taking the time to post various useful information here and for answering various JunoCam-related questions.

And the bug/small systematic error I've been dealing with is now finally gone (and a lot of my confusion too), see below:

QUOTE (mcaplinger @ Jun 7 2018, 05:57 AM) *
QUOTE (Bjorn Jonsson @ Jun 5 2018, 04:35 PM) *
would the star be at pixel (814.21, 600) in the resulting image?
The 814.21 is the pixel coordinate given the 1648-wide image, so any issues about dummy pixels have already been accounted for.

That's it!! It turns out that the main problem was that I was using an incorrect value here. Having done a few quick tests it's clear that the small systematic error is now gone and the limb fit errors I get have dropped to a maximum of 2-3 pixels (0-1 is common but determining the exact value is often difficult due to haze at the limb).

QUOTE (mcaplinger @ Jun 7 2018, 05:57 AM) *
I've got no idea what USGS is doing with the 600. The optic axis isn't at a Y of 600 any more than it's at an X of 1648/2.

I've got no idea either. But it's now clear that the recent changes to ISIS3 (or the ISIS3-specific kernels) are of no relevance to me. The correct framelet offsets (from the top) are 456, 611 and 766 pixels as I've been using.
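For concreteness, a minimal sketch of rebuilding a full frame from one B/G/R framelet set with those offsets; the framelet arrays are placeholders:

CODE
import numpy as np

# Sketch: rebuild a 1648x1200 full frame from one 128-line framelet per band,
# using readout offsets 456 (blue), 611 (green), 766 (red) from the top.
def assemble(blue, green, red):
    frame = np.zeros((1200, 1648), dtype=np.float32)
    for framelet, top in ((blue, 456), (green, 611), (red, 766)):
        frame[top:top + 128, :] = framelet   # each framelet is 128 x 1648
    return frame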

But the diagram in the IK file is confusing as has been mentioned. In particular the numbers in the diagram are of no direct relevance to your code in the IK file but this isn't explicitly mentioned. Perhaps it would be best to remove the diagram as you suggested (or at least change it).

QUOTE (Gerald @ Jun 7 2018, 11:12 AM) *
I'm going to analyse JunoCam's geometry during the next three months, hopefully to subpixel accuracy, but I hesitate to discuss the many dead ends.
In particular, I'll review whether the Brownian distortion model is applicable to JunoCam at all at this level of accuracy, and develop and check a set of alternative families of geometrical distortion models.
I'm planning to discuss this at EPSC in September as part of a talk. But I don't intend to release much intermediate material before then, except the corresponding EPSC abstract.
If the investigation results in any new insight, I may consider writing a peer-reviewed paper, and in that case of course provide data for a refined instrument kernel.

This is very interesting and I look forward to seeing the results even though distortion isn't a big problem in my case. If color channel misalignment becomes too noticeable I simply warp the G/B channels into the red channel. The misalignment isn't big (can be ~2 pixels near the image edges) but still noticeable in high contrast areas in enhanced images.

Posted by: Gerald Jun 7 2018, 11:42 PM

I'm trying to analyse Jupiter's cloud velocity field, where displacements of features between images reprojected to common vantage points are often less than a raw pixel. So, with global geometrical uncertainties, I need to apply global corrections that remove the very information I'd like to retrieve. To resolve this, not just the color channel alignment but also the absolute alignment with Jupiter's shape model should be off by no more than 1/10 of a raw pixel. Manually, I can adjust to a fraction of a pixel, but that's rather time-consuming, and still less accurate and reliable than it should be for this purpose.

All of my attempts to define a globally accurate lens distortion model on the basis of Brown-Conrady with only 4 free parameters have failed thus far. Locally it's no problem to go below 1/100 pixel accuracy with respect to some error metrics. But I would have been surprised if Brown-Conrady had worked so easily for the global lens geometry. With an increasing number of non-zero parameters, the Brown-Conrady model becomes more and more unstable and unrealistic. That issue has been well known for several decades at least (probably more than a century in some cases) for other approximations derived from Taylor polynomials. (Think, e.g., of splines of polynomials of degree 7 rather than cubic splines; they oscillate too much to be useful in practice.) Therefore, I'll consider approaches rather different from Brown-Conrady. But I can't tell which of the long list of more or less straightforward alternatives will be as successful as required before it's elaborated and tested. Those methods have their own specific issues.

Another point that might turn out to be relevant is Jupiter's deviation from its idealized IAU shape model. I might eventually ask the Juno gravity group for numbers, or do the math myself.

Posted by: Bjorn Jonsson Jun 8 2018, 12:16 PM

QUOTE (Gerald @ Jun 7 2018, 11:42 PM) *
All of my attempts to define a globally accurate lens distortion model on the basis of Brown-Conrady with only 4 free parameters have failed thus far. Locally it's no problem to go below 1/100 pixel accuracy with respect to some error metrics. But I would have been surprised if Brown-Conrady had worked so easily for the global lens geometry. With an increasing number of non-zero parameters, the Brown-Conrady model becomes more and more unstable and unrealistic. That issue has been well known for several decades at least (probably more than a century in some cases) for other approximations derived from Taylor polynomials. (Think, e.g., of splines of polynomials of degree 7 rather than cubic splines; they oscillate too much to be useful in practice.) Therefore, I'll consider approaches rather different from Brown-Conrady. But I can't tell which of the long list of more or less straightforward alternatives will be as successful as required before it's elaborated and tested. Those methods have their own specific issues.


I can imagine it might be difficult to find a mathematical function that works well everywhere if you want this level of accuracy - there could be various minor 'irregularities'. For example minor distortions might not be perfectly symmetrical around the optical axis at this level of accuracy.

I wonder if it might make sense to measure control points using stars in JunoCam images and then triangulate the control points. Another set of triangulated control points would contain the known, correct (i.e. without optical distortions) positions of the stars. You could then determine an affine transform for each triangle and use this to map from distorted pixel positions to non-distorted positions or vice versa. A possible drawback is that you might need a lot of control points (i.e. small triangles) for sufficient accuracy.
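A sketch of that triangulation idea, assuming numpy and scipy; src holds the measured (distorted) star positions and dst the corresponding ideal positions, both hypothetical (n, 2) arrays:

CODE
import numpy as np
from scipy.spatial import Delaunay

# Piecewise-affine warp from triangulated control points: barycentric
# coordinates in the enclosing source triangle are reused as weights on the
# matching target vertices. Points outside the hull (find_simplex == -1)
# are left as NaN.
def build_warp(src, dst):
    tri = Delaunay(src)
    def warp(pts):
        pts = np.atleast_2d(np.asarray(pts, dtype=float))
        out = np.full_like(pts, np.nan)
        simplices = tri.find_simplex(pts)
        for i, (p, s) in enumerate(zip(pts, simplices)):
            if s == -1:
                continue                     # outside the triangulation
            verts = tri.simplices[s]         # vertex indices of the triangle
            b = tri.transform[s, :2] @ (p - tri.transform[s, 2])
            bary = np.append(b, 1.0 - b.sum())  # barycentric coordinates
            out[i] = bary @ dst[verts]       # same weights in target triangle
        return out
    return warp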

Posted by: fredk Jun 8 2018, 02:38 PM

Sorry for not having followed all the details in this thread, but what do you use to fit the B-C parameters? Are you finding the best-fit parameters from images of Jupiter using its modeled shape/position? Or are you using images that show stars? It seems the latter would avoid any modeling uncertainties in the shape of Jupiter since the stars' true directions are precisely known.

Posted by: mcaplinger Jun 8 2018, 03:07 PM

QUOTE (fredk @ Jun 8 2018, 06:38 AM) *
Sorry for not having followed all the details in this thread, but what do you use to fit the B-C parameters?

The Junocam I kernel parameters were fit using stars from the cruise imaging and validated using the positions of the galilean satellites during PJ1.

How Gerald's processing works with regard to camera parameters has never been clear to me.

I am personally of the opinion that expecting deep sub-pixel registration between bands is a pipe dream, but obviously I could be wrong.

Posted by: Brian Swift Jul 26 2018, 04:19 AM

Anyone have a script/tool to bulk download raw data and metadata from missionjuno site that they are willing to share?

Posted by: Brian Swift Aug 1 2018, 06:42 PM

Mike, two unrelated questions:

1. How many different image compression quality level settings are used by junocam on a typical peri-jove?

2. "FOCAL_PLANE_TEMPERATURE" appears to always be "273.0 <K>". Is this a bug or a feature (of very precise temp control)?

Posted by: mcaplinger Aug 1 2018, 09:37 PM

QUOTE (Brian Swift @ Aug 1 2018, 10:42 AM) *
1. How many different image compression quality level settings are used by junocam on a typical peri-jove?

2. "FOCAL_PLANE_TEMPERATURE" appears to always be "273.0 <K>". Is this a bug or a feature (of very precise temp control)?

1) It depends, but nearly all of the lossy images have been taken with the same compression requantization parameter for the past several PJs, with an occasional single image with a little higher value.

2) For a variety of reasons, the 273K value basically means that the software couldn't easily figure out the FPA temperature. None of our processing so far tries to do anything with the FPA temperature, so we haven't tried to fix this problem very aggressively.

Sending a message to jncdata@msss.com is the prescribed way of asking questions about the PDS products.

Posted by: Gerald Sep 23 2018, 12:39 PM

QUOTE (fredk @ Jun 8 2018, 04:38 PM) *
Sorry for not having followed all the details in this thread, but what do you use to fit the B-C parameters? Are you finding the best-fit parameters from images of Jupiter using its modeled shape/position? Or are you using images that show stars? It seems the latter would avoid any modeling uncertainties in the shape of Jupiter since the stars' true directions are precisely known.

I've given a partial answer to these questions at EPSC a few days ago.
http://junocam.pictures/gerald/talks/epsc2018-586/.

I'll refine this further over the next several weeks in background processing, and may occasionally provide corresponding SPICE kernels.
There are some more methods for calibration and cross-checking, only partially implemented as of yet.
I've also written up portions of the deeper theoretical background. But that's not yet available online.

I used stars for calibration purposes a few years ago. But they have their own issues, e.g. real or optical binaries with unclear centroids, TDI blur, background noise, few photons, aliasing, velocity aberration (which can be adjusted for in a formal way), etc.

Posted by: fredk Sep 24 2018, 03:20 PM

Thanks for the details, Gerald. Can you point to some starfield images?

Posted by: Gerald Sep 24 2018, 07:09 PM

You'll find some material, if you go back to http://www.unmannedspaceflight.com/index.php?s=&showtopic=8143&view=findpost&p=229336.

But you can also take a more extended look at https://pds-imaging.jpl.nasa.gov/volumes/juno.html.
The EDRs start at https://pds-imaging.jpl.nasa.gov/data/juno/JNOJNC_0001/DATA/EDR/CRUISE/. Check for possible DCT decoding issues.

Posted by: Brian Swift Sep 26 2018, 06:32 AM

QUOTE (fredk @ Sep 24 2018, 08:20 AM) *
Thanks for the details, Gerald. Can you point to some starfield images?

The starfield image filenames and the point source (stars/moons) correspondences used in my camera modeling process are available in the file Juno24Matchpoints.csv in my repository at https://github.com/BrianSwift/JunoCam.

These are full revolution images that appear to have relatively low amounts of compression.

This post has an overview of my process: http://www.unmannedspaceflight.com/index.php?showtopic=8143&view=findpost&p=239084

Posted by: Brian Swift Sep 28 2018, 04:11 PM

Question for everyone: what tool(s) do you use to identify objects (moons/stars) in an instrument's field of view?
I'm currently looking at the SPICE-enhanced Cosmographia from NAIF, and Celestia.
If I used SPICE routines directly, it appears I'd have to iterate through all the known objects, testing each one to see if it is in the FOV.
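For ephemeris objects, that brute-force loop is straightforward with spiceypy's fovtrg (stars would still need a catalog plus a frame transform); the metakernel, time, and body list below are placeholders:

CODE
import spiceypy as spice

# Brute-force FOV check for known ephemeris bodies; assumes an IK defining
# the JUNO_JUNOCAM FOV plus SPK/PCK/FK/LSK are furnished via the metakernel.
spice.furnsh("juno.tm")                    # hypothetical metakernel
et = spice.str2et("2016-08-27T09:00:00")   # hypothetical time
for body in ("IO", "EUROPA", "GANYMEDE", "CALLISTO"):
    if spice.fovtrg("JUNO_JUNOCAM", body, "ELLIPSOID",
                    "IAU_" + body, "LT+S", "JUNO", et):
        print(body, "is in the FOV")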

Posted by: Gerald Sep 28 2018, 05:13 PM

Depends on the purpose. For JunoCam calibration, I've used the Bright Star Catalog (BSC), together with a home-made rendering tool. But sometimes, the printed Uranometria 2000.0, or the offline tool Red Shift. Sometimes, the online http://nova.astrometry.net/upload can help, or https://www.google.com/sky/. Or https://eyes.jpl.nasa.gov/eyes-on-juno.html. There are several more star catalogs, etc.

Posted by: mcaplinger Sep 28 2018, 06:10 PM

QUOTE (Brian Swift @ Sep 28 2018, 08:11 AM) *
Question for everyone, what tool(s) do you use to identify objects (moons/stars) in an instrument's field of view?...
If I used SPICE routines directly, it appears I'd have to iterate through all the known objects testing each one to see if it is in the FOV.

Yep. I just use a small subset of the BSC and loop as follows (note, no aberration correction):

CODE
# readstars/radec/distort are local helpers; stars maps name -> (RA, Dec, mag);
# pxform/mxv are SPICE calls; yoff is the framelet y-offset for the band.
readstars("bscbrite")
for i in range(0, nframes):
    t = t0 + i*interf
    c = pxform("j2000", "juno_junocam", t)
    for s in stars:
        if stars[s][2] > 3.5: continue      # skip stars that are too dim
        to_star = radec(stars[s][0], stars[s][1])
        to_star = mxv(c, to_star)
        fl = 10.997/7.4e-3                  # focal length in pixels (mm / pixel pitch)
        alpha = to_star[2]/fl
        cam = [to_star[0]/alpha, to_star[1]/alpha]
        cam = distort(cam)
        cam[1] += yoff
        if to_star[2] > 0 and abs(cam[0]) < 1648/2 and abs(cam[1]) < 128/2:
            x = cam[0] + 1648/2
            y = cam[1] + 128/2
            print(s, stars[s][2], x, y + i*128)

Posted by: Brian Swift Oct 9 2018, 05:49 PM

Thanks Mike, Gerald. I ended up using Cosmographia to determine the names of the stars in a few frames, used SIMBAD to look up their Hipparcos IDs, which I then used to look up RA/Dec in a Hipparcos catalog I downloaded from NAIF. After working out the SPICE frame transforms using the few stars, I was able to automatically match all my detected "stars". Then I investigated a couple of outlier matches, which turned out to be Ganymede and Saturn.

Posted by: Brian Swift Oct 9 2018, 05:55 PM

Mike, just curious, is there an explanation for the 20 msec START_TIME jitter that is mentioned in juno_junocam_v02.ti?

Posted by: mcaplinger Oct 9 2018, 11:02 PM

QUOTE (Brian Swift @ Oct 9 2018, 09:55 AM) *
Mike, just curious, is there an explanation for the 20 msec START_TIME jitter...

It's a software issue having to do with when the timestamp is captured relative to when the command to start imaging is sent.

Posted by: Brian Swift Oct 10 2018, 05:01 PM

QUOTE (mcaplinger @ Oct 9 2018, 04:02 PM) *
It's a software issue having to do with when the timestamp is captured relative to when the command to start imaging is sent.

Is the imaging electronics' start signal linked to the spacecraft system clock, or is it driven from some other free-running
asynchronous clock? My thinking is that if the start were linked to the spacecraft clock, the 20 ms jitter would
have a 1/256 s (low-precision spacecraft clock) quantization. So I could model the actual start time as a 0 to 5 tick offset
from the adjusted captured timestamp.

Posted by: mcaplinger Oct 10 2018, 06:20 PM

QUOTE (Brian Swift @ Oct 10 2018, 09:01 AM) *
Is the the imaging electronics start signal linked to the spacecraft system clock or is it driven from some other free running
asynchronous clock?

Neither, sort of? It's complicated. The software sends the command to start imaging over a 57600 baud UART and that command is received and imaging starts based on the free-running oscillator in the JDEA. Meanwhile, back in the spacecraft computer, there is then a potentially variable delay before the timestamp is captured from the spacecraft clock.

Yes, the spacecraft clock is quantized, but the delay is of order 20 msec for reasons having nothing to do with that. The delay could be 20 msec and it could be 40 msec and it could, perhaps, be anywhere in between or even longer. I don't think it can be shorter but I can't swear to that.

I can't go into this in more detail, sorry.

Posted by: Brian Swift Oct 10 2018, 11:58 PM

Thanks Mike, that answers my question. (Just not the answer I was hoping for.)

Posted by: mcaplinger Oct 11 2018, 01:01 AM

While I'm sure you already appreciate this, it's worth noting that the uncertainty only applies to the START_TIME; the interframe time is set by the JDEA and has nothing to do with the spacecraft clock.

For on-orbit images, I've had some luck characterizing the START_TIME by doing limb fitting of the first image that contains the limb. It would be interesting to know if there are any systematics of the error, but none were apparent in my brief look at it. The cruise star imaging appeared to be behaving differently with regard to the error.

Posted by: Bjorn Jonsson Oct 13 2018, 08:53 PM

QUOTE (mcaplinger @ Oct 11 2018, 01:01 AM) *
For on-orbit images, I've had some luck characterizing the START_TIME by doing limb fitting of the first image that contains the limb. It would be interesting to know if there are any systematics of the error, but none were apparent in my brief look at it. The cruise star imaging appeared to be behaving differently with regard to the error.

This is exactly what I've been doing. Following this I also do limb fitting of the last image containing the limb and then adjust the interframe delay slightly if necessary (I typically end up with values like 0.3711 or 0.3708 instead of 0.371); see the sketch below. After I fixed a bug in my software several months ago, and another relatively minor and obscure bug last month, this is usually sufficient and no further adjustments are necessary. However, I sometimes need to adjust the pointing by something like ±3 pixels in the horizontal direction in the really hi-res images. This might be because an SPK kernel containing the reconstructed PJ15 trajectory wasn't available (it is now). Also, the images of the northern hemisphere are sometimes slightly problematic. It's as if the spacecraft orientation changes slightly during image acquisition, but of course that's highly unlikely. I'm still working on this particular problem (apparently it's the only one left in my processing pipeline). It doesn't cause any color channel misalignments though, so it's not a big inconvenience.
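A minimal sketch of that two-step timing adjustment (hypothetical helper and frame objects; the limb fitting itself, which is the hard part, is not shown):

CODE
def refine_timing(t0, interframe, first, last):
    # first/last: the first and last framelets that contain the limb.
    # limb_fit_time_error returns the timing error (seconds) implied by
    # the offset between the predicted and observed limb in one framelet.
    t0 += limb_fit_time_error(first, t0, interframe)
    # Absorb the residual error at the far end of the image by slightly
    # rescaling the interframe delay (e.g. 0.371 s -> 0.3708 or 0.3711 s).
    n = last.index - first.index
    interframe += limb_fit_time_error(last, t0, interframe) / n
    return t0, interframe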

As a 'sanity check' it would be interesting to know which START_TIME value you got for an image or two, especially for images obtained at an altitude of ~25,000 km or closer.

Posted by: volcanopele Oct 26 2018, 10:16 PM

PJ11 and PJ12 data is now in the PDS (or at least the JIRAM data is available)


Posted by: Brian Swift Feb 4 2019, 10:40 PM

Juno3D, a JunoCam raw-to-textured-3D-object processing pipeline implemented in Mathematica, is available under a permissive open-source license at https://github.com/BrianSwift/JunoCam/tree/master/Juno3D

Sample outputs produced from an Earth flyby image are also available in the GitHub directory.

What it does...

Converts a Junocam raw image and metadata to a 3D object suitable for importing into a 3rd-party rendering application (such as Blender).

Input is a MissionJuno-website-formatted -raw.png, or a PDS .IMG or .IMG.gz raw image file, plus a .json or .LBL metadata file.

Output is a Wavefront geometry and material file (.obj, .mtl) and an image/texture file for each color (.png).

Raw image files and metadata are imported; SPICE and the camera model are used to map image pixels to a tessellated representation of the target (Jupiter) surface. Imagery that doesn't intercept the target is projected onto a backstop surface (a large sphere centered at the spacecraft location with a radius less than the distance to the limb). Texture-mapped geometry is then written out.

Information useful for rendering scenes (such as spacecraft position and Blender camera rotation values) is included as comments in the .obj file (which is a textual format.)

By default, flat-field adjustment and linear-light-to-sRGB conversion are performed on the generated imagery textures. These sRGB-encoded textures provide a close representation of actual Jupiter contrast and may need post-processing in a tool like Photoshop to enhance details in the cloud structures.

Can mark SPICE computed limb locations in texture images.

The current lens model uses Brown-Conrady K1, K2, Xp, Yp plus linear and cubic CA terms; a sketch of the general form follows.
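A minimal sketch of a Brown-Conrady-style forward distortion of that shape (an illustration of the general form, not the exact Juno3D model; here a single per-channel scale factor stands in for the linear and cubic CA terms):

CODE
def distort_point(x, y, k1, k2, xp, yp, ca_scale=1.0):
    # x, y: ideal (undistorted) coordinates; xp, yp: principal point.
    xc, yc = x - xp, y - yp
    r2 = xc * xc + yc * yc
    f = 1.0 + k1 * r2 + k2 * r2 * r2   # radial distortion factor
    # ca_scale applies a per-color radial magnification for chromatic aberration
    return xp + ca_scale * f * xc, yp + ca_scale * f * yc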

The output structure of a separate set of geometry for each of the three colors requires the non-default shader/material and rendering options described below under "Rendering in 3rd party application".

Rendering in 3rd party application

The 3D object produced by this script has three separate but co-located groups of geometry. Each set of geometry is textured with imagery for a different color channel. Each set of geometry has different triangle vertex locations, so the geometry associated with each color channel consists of triangles that are both inter-penetrating and nearly co-planar with the geometry from the other color channels. Therefore, each set of geometry must effectively be rendered independently to the red/green/blue channels of the final image.

The above organisation means no spatial resampling of the raw data is performed by this script; lens correction and perspective projection are accomplished in a single resampling done in the renderer.

Step by step instructions to setup this rendering configuration in Blender and Apple XCode SceneKit Viewer are available in the Mathematica notebook. The Blender setup process is also available as a screen recording on YouTube at: https://youtu.be/6Wx--D_OxbI

If you don't have Mathematica, the source code can still be viewed with correct formatting using Wolfram CDF Player available at https://www.wolfram.com/cdf-player/

Earth flyby image rendered in Blender at 45.51 pixels/deg: https://flic.kr/p/2dtZxpw

Posted by: Brian Swift Feb 5 2019, 07:00 AM

QUOTE (Bjorn Jonsson @ Oct 13 2018, 12:53 PM) *
This is exactly what I've been doing. Following this I also do limb fitting of the last image containing the limb and then adjust the interframe delay slightly if necessary (I typically end up with values like 0.3711 or 0.3708 or something like that instead of 0.371). ...

As a 'sanity check' it would be interesting to know which START_TIME value you got for an image or two, especially for images obtained at an altitude of ~25,000 km or closer.

Instead of limb fitting for start time and interframe delay, have you considered fitting for start time and planet radius?

I believe the Junocam visible limb is at a higher elevation than the 1-bar limb returned by SPICE.
For example, for the occultation of Io in PJ16_11, SPICE (WebGeocalc) predicts a start time of 2018-10-29T20:50:43.855,
which is almost a second later than the image time of frame 8, 20:50:42.932. However, the image shows the
visible limb already covering about 1/3 of Io.

I've only done a few fits since my process isn't fully automated yet, but here are some preliminary results:

Radius increase (km)   Start time adjustment (s)   Image ID   Frames used for fit   Filter
95                     -0.013                      PJ14_19    7,31                  green
47                     -0.004                      PJ14_25    13,29                 green
45                     -0.004                      PJ14_25    7,35                  green
90                     +0.0015                     PJ16_11    7,31                  red
95                     +0.0020                     PJ16_11    8,33                  green
90                     +0.0005                     PJ16_11    12,33                 blue


I have no reason to expect the height variation to be uniform across Jupiter, and can think of enough potential effects on it to eliminate any desire on my part to attempt to model it.

Also, the time offsets above could all be shifted by a constant, since my camera model has its own spacecraft-to-CCD frame transform.

Mike - are you able to request a specific Juno spin phasing to ensure JunoCam is pointing in the right direction for a time-critical event like the PJ16_11 Io occultation? (I think if this image had been taken 15 seconds earlier, Io would still have been some distance from the limb, and 15 seconds later it would have been completely covered.) If so, that's pretty cool; if not, it's good to get lucky.

Posted by: Gerald Feb 5 2019, 02:42 PM

QUOTE (Brian Swift @ Feb 5 2019, 08:00 AM) *
I have no reason to expect the height variation to be uniform across Jupiter, and can think of enough potential effects on it to eliminate any desire on my part to attempt to model it.

Jupiter's gravitational equipotential deviates by up to about 40 km from the IAU ellipsoid in a non-trivial way. Locally, there are also translucent or opaque haze layers of up to about three scale heights. We don't know the exact pressure of the level of the cloud tops.

Posted by: JohnVV Feb 5 2019, 05:04 PM

isis 3.6 now has an import tool, "junocam2isis"

then use the standard isis3 pipeline

a random EDR image test: "JNCE_2018038_11C00011_V01"



near the north pole in orthographic projection, then minor image enhancement in GIMP using G'MIC

Posted by: Brian Swift Feb 5 2019, 07:01 PM

QUOTE (Gerald @ Feb 5 2019, 06:42 AM) *
Jupiter's gravitational equipotential deviates by up to about 40 km from the IAU ellipsoid in a non-trivial way....

Thanks Gerald, I was curious about the magnitude of the gravitational equipotential deviation.
Can you point me to a reference for the 40 km figure, or did you derive it from the published gravitational
spherical harmonic values?

Posted by: Brian Swift Feb 5 2019, 08:11 PM

QUOTE (JohnVV @ Feb 5 2019, 09:04 AM) *
isis 3.6 now has a import tool "junocam2isis"

then use the standard isis3 pipeline

a random EDR image test "JNCE_2018038_11C00011_V01"
...
near the northpole in orthographic projection , then minor image enhancing in gimp using gmic

Cool, John. Could you run JNCE_2013282_00C00102_V01 (this is the Earth flyby image I used for
an example above)? I'm curious how ISIS compares to my processing.
About how long does it take ISIS to produce the image?
Do you know if ISIS is able to produce the final image with one image resampling or two?

Posted by: JohnVV Feb 5 2019, 08:39 PM

i will grab "JNCE_2013282_00C00102_V01" tonight

isis has to remap in order to mosaic the parts together; simple cylindrical or Mercator projections are the ones normally used

the above was then remapped to orthographic from Mercator

so resampled from FULL res to 45.5111111111 <pixels/degree> (a map that is 16384x8192 px)
so a small 16k map for testing the tool, i did not want it to take all night to run the set of tools:
1) junocam2isis
2) spiceinit
3) cam2map (took the longest time)
4) automos
5) GDAL (gdal_translate)
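For reference, the same pipeline driven from Python via subprocess (a sketch only; it assumes ISIS and GDAL are on the PATH, the filenames are illustrative, and junocam2isis emits one cube per framelet/filter, so spiceinit and cam2map must be run over each):

CODE
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

run(["junocam2isis", "from=JNCE_2018038_11C00011_V01.LBL", "to=pj11.cub"])
run(["spiceinit", "from=pj11_RED_0001.cub"])                 # one cube shown
run(["cam2map", "from=pj11_RED_0001.cub",
     "to=pj11_RED_0001.map.cub", "map=mercator.map"])
run(["automos", "fromlist=cubes.lis", "mosaic=mosaic.cub"])  # list of map cubes
run(["gdal_translate", "mosaic.cub", "mosaic.tif"])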

Posted by: mcaplinger Feb 5 2019, 08:55 PM

QUOTE (Brian Swift @ Feb 4 2019, 11:00 PM) *
For example, for the occultation of Io in PJ16_11, SPICE (WebGeocalc) predicts a start time of 2018-10-29T20:50:43.855,
which is almost a second later than the image time of frame 8, 20:50:42.932.

Using which SPICE kernels? Always important to make sure you are using the best reconstructions, the right SCLK-SCET, leapseconds, etc.
QUOTE
are you able to request a specific Juno spin phasing to ensure JunoCam is pointing in the right direction for a time critical event like PJ16_11 Io occultation?

Wish I could take credit for this, but AFAIK we didn't even intentionally command an image for this event, much less a specific spin phasing, so it's all random chance.

Posted by: JohnVV Feb 6 2019, 01:38 AM

isis3 remaps images - there is no back-to-camera mapping (map2cam) for juno images

the Earth flyby image "JNCE_2013282_00C00102_V01"



--- edit ---

for some unknown reason the JPGs of the TIFF images are not uploading
-- so on to imgbox
Simple cylindrical (color is oversaturated)
http://imgbox.com/hQK14D0K
orthographic (color is oversaturated)
http://imgbox.com/6VIBMjR7
BlueMarble -- in the same ortho projection for comparison
http://imgbox.com/TQ5Q21Fe

Posted by: Brian Swift Feb 6 2019, 05:12 AM

QUOTE (JohnVV @ Feb 5 2019, 05:38 PM) *
...
ortho( color is over saturated )

Thanks for posting these, they answered my question.

Posted by: mcaplinger Feb 6 2019, 11:28 PM

QUOTE (Brian Swift @ Feb 4 2019, 11:00 PM) *
I believe the Junocam visible limb is at a higher elevation than the 1-bar limb returned by SPICE.

The image below shows per-channel overlays of the limbs of Io and Jupiter as predicted by our best current model for PJ16-011, no timing adjustment other than our nominal recommended one. The Io limb is pretty close to dead on (maybe a pixel off in the red) and the Jupiter limb is clearly low relative to the observed limb by 4 pixels or so. So that does support the idea that what we see is higher than the limb defined by the IAU radii in the SPICE planetary constants file. Of course, the observed limb is not super-sharp so it's a judgement call about where it is exactly.


Posted by: volcanopele Feb 12 2019, 04:06 PM

Juno data (including JunoCAM but excluding JIRAM) for PJ13 and PJ14 is now in the PDS:

https://pds.nasa.gov/datasearch/subscription-service/SS-20190211.shtml

Posted by: Bjorn Jonsson Feb 13 2019, 12:39 AM

QUOTE (Brian Swift @ Feb 5 2019, 07:00 AM) *
QUOTE (Bjorn Jonsson @ Oct 13 2018, 12:53 PM) *
This is exactly what I've been doing. Following this I also do limb fitting of the last image containing the limb and then adjust the interframe delay slightly if necessary (I typically end up with values like 0.3711 or 0.3708 or something like that instead of 0.371). ...

As a 'sanity check' it would be interesting to know which START_TIME value you got for an image or two, especially for images obtained at an altitude of ~25,000 km or closer.

Instead of limb fitting for start time and inter frame delay, have you considered fitting for start time and planet radius?

I believe the Junocam visible limb is at a higher elevation than the 1-bar limb returned by SPICE.
For example, for the occultation of Io in PJ16_11, SPICE (WebGeocalc) predicts a start time of 2018-10-29T20:50:43.855,
which is almost a second later than the image time of frame 8, 20:50:42.932. However, the image shows the
visible limb already covering about 1/3 of Io.

I briefly considered fitting for the planet radius but decided not to for several reasons. The biggest is that I suspect the cloud altitudes vary depending on cloud 'type' (light or dark). In particular, the bright, white ammonia clouds should usually be at a higher altitude than the darker clouds. In other words, if this is correct, Jupiter's shape is slightly 'irregular' if you define Jupiter's radius by the top of the visible clouds and not by a specific pressure level. I think I may have seen tentative evidence of variable cloud altitudes at the limb in some images, but I'm not completely sure yet and need to look further into this. Another possible source of errors is that a reconstructed SPK kernel doesn't become available until several weeks after a flyby. I have yet to compare the reconstructed SPK kernels to the predict kernels that become available immediately.

QUOTE (Brian Swift @ Feb 5 2019, 07:00 AM) *
I've only done a few fits since my process isn't fully automated yet, but here are some preliminary results:

Radius increase (km), start time adjustment, image id, frame numbers used for fit, filter
95 -.013 PJ14_19 7,31 green
47 -.004 PJ14_25 13,29 green
45 -.004 PJ14_25 7,35 green
90 .0015 PJ16_11 7,31 red
95 .0020 PJ16_11 8,33 green
90 .0005 PJ16_11 12,33 blue


I have no reason to expect the height variation to be uniform across Jupiter, and can think of enough potential effects on it to eliminate any desire on my part to attempt to model it.

This is very interesting and I'll probably do some processing runs to see if I get similar results (I hadn't processed these exact images yet). I'm especially interested in the start time adjustment.

QUOTE (Gerald @ Feb 5 2019, 02:42 PM) *
We don't know the exact pressure of the level of the cloud tops.

And also the cloud tops are probably not at the same pressure level everywhere - see above.

QUOTE (mcaplinger @ Feb 6 2019, 11:28 PM) *
The image below shows per-channel overlays of the limbs of Io and Jupiter as predicted by our best current model for PJ16-011, no timing adjustment other than our nominal recommended one. The Io limb is pretty close to dead on (maybe a pixel off in the red) and the Jupiter limb is clearly low relative to the observed limb by 4 pixels or so. So that does support the idea that what we see is higher than the limb defined by the IAU radii in the SPICE planetary constants file. Of course, the observed limb is not super-sharp so it's a judgement call about where it is exactly.


Big thanks for posting these, I've been wanting to see something like this for some time. Typically, the errors I'm getting at Jupiter's limb are similar to this when I adjust the start time using the recommended values but the fuzziness of the limb often complicates matters and makes it difficult to locate the cloud tops.

Posted by: Brian Swift Feb 13 2019, 07:40 AM

QUOTE (Bjorn Jonsson @ Feb 12 2019, 04:39 PM) *
Another possible source of errors is that a reconstructed SPK kernel doesn't become available until several weeks after a a flyby. I have yet to compare the reconstructed SPK kernels to the predict kernels that become available immediately.

I believe there was a change of about 8 ms on PJ16_11. (I scratched my head for a bit trying to figure out why I wasn't getting the same result I had produced earlier, and eventually realized I'd updated the SPK at some point.)

FWIW, in my matching code I gradient filter the image and define the limb as the maximal gradient.
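In case it's useful, a minimal sketch of that idea, assuming a NumPy image and scanning for the limb along each row (the actual matching code is more involved):

CODE
import numpy as np

def limb_positions(img):
    # Gradient-filter each row; take the location of the maximal absolute
    # gradient as the limb crossing for that row.
    grad = np.abs(np.diff(img.astype(float), axis=1))
    cols = np.argmax(grad, axis=1)
    strength = grad[np.arange(img.shape[0]), cols]
    # Keep only rows where the edge is strong enough to be a real limb.
    keep = strength > strength.mean() + 2 * strength.std()
    return np.flatnonzero(keep), cols[keep] + 0.5  # (row, column) samples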
QUOTE
This is very interesting and I'll probably do some processing runs to see if I get similar results (I hadn't processed these exact images yet). I'm especially interested in the start time adjustment.

The start times could all easily be off by a constant since my model uses its own spacecraft to camera frame transform matrix. (I haven't checked to see how much z-rotation difference there is between mine and the standard matrix).

Posted by: Brian Swift Feb 23 2019, 08:00 AM

Update to Juno3D pushed to https://github.com/BrianSwift/JunoCam/tree/master/Juno3D


Posted by: Brian Swift Mar 11 2019, 04:32 PM

Mike, I just noticed that prior to PJ10 TDI=1 was rare (TDI=2/3 typical), but from PJ10 forward TDI=1 seems to be the default.
I'm just curious, can you comment on the motivation for the change?

Posted by: mcaplinger Mar 11 2019, 05:53 PM

QUOTE (Brian Swift @ Mar 11 2019, 08:32 AM) *
can you comment on the motivation for the change?

Higher signal levels due to the evolving sun angle.

We are still using high TDI for some of the polar images in an effort to see the circumpolar cyclones better.

Posted by: Brian Swift Feb 5 2020, 05:51 PM

Mike, can you describe the map projection used for the images that accompany the raw data images on the missionjuno site,
or point me to an existing description?

Posted by: mcaplinger Feb 5 2020, 09:41 PM

QUOTE (Brian Swift @ Feb 5 2020, 09:51 AM) *
Mike, can you describe the Map Projection used for images that accompany the raw data images on the missionjuno site...

It is an uncontrolled point perspective projection only intended as an unofficial pretty picture and not archived with PDS.

Posted by: Bjorn Jonsson Mar 29 2020, 12:14 AM

At long last I'm adding methane channel processing to my processing pipeline. I'm using image PJ25_19 for testing. My problem is that I am getting large geometric errors but then I remembered discussion of tests where the methane image readout region was changed. I strongly suspect this is the reason for the geometric errors and if I'm correct the IK kernel is incorrect when processing the PJ25 methane images. What is the location of the new readout region, either in absolute pixel coordinates or relative to the readout region before the change?

Has the new/changed readout region been in use after a specific date/perijove or is the 'old' readout region still used in some of the recent images?

Posted by: mcaplinger Mar 29 2020, 06:54 AM

QUOTE (Bjorn Jonsson @ Mar 28 2020, 04:14 PM) *
What is the location of the new readout region, either in absolute pixel coordinates or relative to the readout region before the change?

For the new readout region, INS-61504_DISTORTION_Y should be 405.48 (the first line of the region changed from 291 to 201.)

Inconveniently, we continue to switch back and forth between the two readout regions (as it happens, we have to use the old methane setting to take RGB data because the parameters depend on each other). Typically the old one is used for distant images and the new one only for images close to perijove. For PDS products this will be indicated in the comment for each image somehow, but I'm not sure if this information will show up in the missionjuno metadata.

Sorry for this confusing state of affairs, we didn't anticipate that we might want to change these parameters at all.

Posted by: Brian Swift Jun 2 2021, 05:35 AM

Mike, is the data for the "Junocam relative filter response" table on page 6 of the "Junocam Calibration Report" available anywhere?
Also, can you describe from what locations on the CCD the monochromator values were obtained (and/or share example images)?
(considering possible impact of flat field on measured filter response)


Posted by: Brian Swift Jan 10 2022, 07:32 PM

Has anyone worked with ISIS cubes produced with junocam2isis fullccd=yes?
Does it seem sensible that the produced cubes are single rather than 3-band?


Posted by: mcaplinger Jan 10 2022, 08:56 PM

QUOTE (Brian Swift @ Jan 10 2022, 11:32 AM) *
Does it seem sensible that the produced cubes are single rather than 3-band?

There are a lot of things that don't seem sensible about ISIS. I think it would depend on your use case as to whether this was a feature or a bug.

Posted by: JohnVV Jan 11 2022, 12:24 AM

QUOTE
Does it seem sensible that the produced cubes are single rather than 3-band?

how do you know that they are single band?

the few that i have processed with isis are rgb

Posted by: Brian Swift Jan 11 2022, 04:35 AM

QUOTE (JohnVV @ Jan 10 2022, 04:24 PM) *
how do you know that they are single band ?

the few that i have processed with isis are rgb

Viewing the cube produced by junocam2isis fullccd=yes with qview showed only 1 grey filter.
Also processing the cube with explode only produced one file.

I looked for a command that would report the number of filters/bands in a cube, but didn't see an obvious choice.

Posted by: Brian Swift Jan 13 2022, 10:09 PM

Note for anyone creating ISIS control networks for JunoCam data.
If you create a control network based on MissionJuno data, it won't "link" to ISIS cubes created from PDS data, because the SerialNumber for the PDS cubes will be slightly different from the one in a MissionJuno-based control network. The difference arises because the PDS StartTime has been improved/changed based on limb fits.

Posted by: volcanopele Jan 13 2022, 11:10 PM

Can confirm. Had to redo my PJ34 control network for the PDS release. Thankfully, I didn't feel the need to use quite so many points this time around, so it wasn't quite as bad. Still needed jigsaw to change both camera angles and camera position to get a good solution. But the differences weren't nearly as bad as with the MissionJuno PNG files. BTW, I saw your ISIS bug report. I'm having to use ISIS 5.0.2 for CaSSIS and JunoCAM due to a jigsaw bug in 6.0 with push-frame cameras that require the use of observations mode in jigsaw.

Posted by: Brian Swift Jan 19 2022, 09:43 PM

The ISIS tool junocam2isis has a FULLCCD option which produces cubes of dimensions 1648x1200, with image data in rows 441-568, 596-723, and 751-878 (counting the first row as 0). These values are offset by 15 from the start lines of 456, 611, and 766 mentioned at http://www.unmannedspaceflight.com/index.php?showtopic=2548&view=findpost&p=203948

Does anyone think this is correct? I just want to check before submitting a bug report since I'm relatively new to using ISIS.

Posted by: mcaplinger Jan 19 2022, 11:12 PM

QUOTE (Brian Swift @ Jan 19 2022, 01:43 PM) *
Does anyone think this is correct?

That doesn't sound correct to me, but I'm not sure what subsequent processing they are doing with such an image.

If you could show that the lat/lon mapping for the same image pixel was different between a FULLCCD image and a one-band image, that would be definitive proof of an inconsistency somewhere.

I'm not sure if you can back those start lines out from the info in the I kernel, but maybe.
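One way to run that test would be ISIS's campt, which reports ground coordinates for a given sample/line (a sketch; the cube names and the row offset between the one-band and FULLCCD cubes are illustrative):

CODE
import subprocess

def latlon(cube, sample, line):
    # campt prints a PVL group including PlanetocentricLatitude and
    # PositiveEast360Longitude for the requested pixel.
    out = subprocess.run(["campt", "from=" + cube,
                          "sample=%d" % sample, "line=%d" % line],
                         capture_output=True, text=True, check=True).stdout
    vals = dict(map(str.strip, row.split("=", 1)) for row in out.splitlines()
                if "=" in row)
    return vals.get("PlanetocentricLatitude"), vals.get("PositiveEast360Longitude")

# Same physical pixel, expressed in each cube's own line numbering:
print(latlon("oneband_GREEN_0001.cub", 824, 64))
print(latlon("fullccd_0001.cub", 824, 64 + 596))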

Posted by: Bjorn Jonsson Jan 19 2022, 11:42 PM

This seems strange but very similar numbers (441.52 etc.) appear in the junoAddendum005.ti kernel that comes with ISIS. I don't know how ISIS uses these values though so this might or might not be working correctly in ISIS.

Posted by: mcaplinger Jan 20 2022, 12:23 AM

QUOTE (Bjorn Jonsson @ Jan 19 2022, 03:42 PM) *
This seems strange but very similar numbers (441.52 etc.) appear in the junoAddendum005.ti kernel that comes with ISIS.

Good catch. I suspect the issue is that that file says INS-61500_BORESIGHT_LINE = 600 when I don't think it is, but 600 - INS-61502_FILTER_OFFSET = 3.48, which is INS-61502_DISTORTION_Y in the official I kernel. So it may be that those two confusions/errors cancel out in their processing.

Posted by: Brian Swift Jan 21 2022, 12:44 AM

QUOTE (Bjorn Jonsson @ Jan 19 2022, 03:42 PM) *
This seems strange but very similar numbers (441.52 etc.) appear in the junoAddendum005.ti kernel that comes with ISIS. I don't know how ISIS uses these values though so this might or might not be working correctly in ISIS.

Thanks Bjorn, the INS-6150?_FILTER_OFFSET values in junoAddendum005.ti do control where the data is placed in the FULLCCD cubes.

QUOTE (mcaplinger @ Jan 19 2022, 03:12 PM) *
If you could show that the lat/lon mapping for the same image pixel was different between a FULLCCD image and a one-band image, that would be definitive proof of an inconsistency somewhere.

Thanks Mike, I took a look and lat/lon match up with about 0.5 pixel vertical difference, which I suspect is due to the current FILTER_OFFSET values having a 0.52 fractional part. If I change the FILTER_OFFSETs to the "standard" whole number values 292,457,612,767 and re-spiceinit the FULLCCD and one-band cubes, the alignment is exact.

However, from junoAddendum004.ti to junoAddendum005.ti the FILTER_OFFSETs are changed to the current odd values with the comment:
"2018-05-14 Jesse Mapel - Changed filter offsets to match code from M Caplinger."

Posted by: mcaplinger Jan 21 2022, 06:26 AM

QUOTE (Brian Swift @ Jan 20 2022, 04:44 PM) *
However, from junoAddendum004.ti to junoAddendum005.ti the FILTER_OFFSETs are changed to the current odd values with the comment:
"2018-05-14 Jesse Mapel - Changed filter offsets to match code from M Caplinger."

I don't have any idea what they are doing with the junoAddendum005.ti file or why they need those values. My code only uses the values from the standard I kernel.

It sounds like the error, if it is an error, is cancelled out by whatever they're doing in the code. I did some tests between my results and ISIS results a couple of years ago, and they matched at the eyeball level (1-2 pixels, perhaps better) so there are no gross discrepancies AFAIK.

Posted by: Brian Swift Jan 21 2022, 06:40 PM

Mike,
Before I mention it to the ISIS folks, I just want to confirm that the below statement still stands.

QUOTE (mcaplinger @ Oct 20 2013, 03:02 PM) *
... The actual start lines for each 128-line band framelet relative to line 0 at the "top" of the sensor are 291, 456, 611, and 766 with the nominal optic axis at line 600. And the focal length is 10.997mm.

Rephrased: the readout start lines 291, 456, 611, and 766 are relative to the 1200-row CCD "Active Pixels" area, with rows numbered starting from 0.

I assume the actual readout lines used by the instrument are 6 larger than these numbers, to skip the 2 dark and 4 buffer rows, which is why "top" is in quotes in your description.

I actually messed this up when building my camera model, and assumed the readouts were relative to the CCD's 1214 total pixel rows.

Just out of curiosity, since I'm dragging you down memory lane, are the readout offsets A) parameters to every imaging command, B) fixed in firmware, C) fixed in flight software, D) part of a table of instrument parameter sets in flight software that are selected by an imaging command parameter, or E) transmitted telepathically from the spice-infused hamster running on a wheel aligned with Juno's spin axis?


Posted by: mcaplinger Jan 22 2022, 06:01 PM

QUOTE (Brian Swift @ Jan 21 2022, 10:40 AM) *
I assume the actual readout lines used by the instrument are 6 larger than these numbers to skip the 2 dark and 4 buffer rows, which is why "top" is in quotes in your description.

I don't think this is true, no.
QUOTE
are the readout offsets A)parameters to every imaging command B)fixed in firmware C) fixed in flight software D)part of a table of instrument parameter sets in flight software that are selected by an imaging command parameter...

D except there is one set and it's always used by every imaging command.

There are five parameters: how many lines to skip at the top during CCD readout (done in hardware in the camera head), and then how many lines to skip at the top of each 155-line band area to produce the 128 lines that are actually sent to the ground (done in software). 155-line regions for bands that aren't being taken in any given image are also skipped in hardware (at the top -- if you commanded a red/blue image, the green would get read out and then thrown away in software, because the hardware can't skip lines in the middle of a readout, only at the beginning.)

I'll quote from the Junocam User's Guide, which says the same thing:

QUOTE
Junocam uses a color filter array with four spectral filter regions
which is bonded to the surface of its CCD image sensor. The process
of imaging reads out a single rectangular area of the CCD, and then
the flight software edits that frame to omit areas between the filter
regions which do not contain usable image data. The CCD map defines
which areas correspond to which filter regions. Each region is
nominally 155 lines high, and 128 lines are extracted for each color
band.

The CCD map consists of five entries: the starting line for band 1,
and then the number of lines at the top of each 155-pixel-high region to
skip to form the 128-line output.

The default settings for the CCD map are 287, 4, 14, 14, 14.


In the I kernel, I concentrated on describing how to do the geometric correction for the 128-line framelets we actually get on the ground, and never tried to describe in detail exactly where those pixels were coming from, and none of my processing ever tries to reconstruct the "original" frame like FULLCCD does. I think the ASCII drawing in the I kernel is accurate, but I wouldn't swear to it since none of my processing is doing anything with it. And to reiterate, my interaction with the ISIS team has been pretty minimal and I have not made an effort to understand their design choices, some of which seem odd to me.

Posted by: Brian Swift Jan 23 2022, 12:14 AM

Thanks Mike, this is great info.

QUOTE (mcaplinger @ Jan 22 2022, 10:01 AM) *
I'll quote from the Junocam User's Guide, which says the same thing:

Hmm, the Junocam User's Guide isn't part of my collection of JunoCam-related documents.
Is it an MSSS internal doc, or a JPL (or other agency) doc?

I can see the functional description translates into effective readout lines as:
287 + 4 = 291
287 + 155 + 14 = 456
287 + 155 + 155 + 14 = 611
287 + 155 + 155 + 155 + 14 = 766
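The same translation as a few lines of Python, just restating the arithmetic above (each 155-line region starts 155 lines after the previous one):

CODE
ccd_map = [287, 4, 14, 14, 14]  # start of band 1 region, then per-band skips
base, skips = ccd_map[0], ccd_map[1:]
starts = [base + i * 155 + skip for i, skip in enumerate(skips)]
print(starts)  # [291, 456, 611, 766]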

QUOTE
I think the ASCII drawing in the I kernel is accurate, but I wouldn't swear to it ...

Since these offsets are relative to the very first line of the CCD, the center reference in the ASCII diagram in juno_junocam_v03.ti
is at offset 599.5 (291, the start of the methane region, + 63.5 to the center of methane + 245 to the diagram's center reference).

This 599.5 is 6 lines from the center of the Active Pixels area, which is at offset 605.5 = (6+1205)/2, where 6 and 1205
are the first and last lines of the Active Pixels area.

Diagram of CCD layout from KAI-2020-D.PDF for anyone wondering where these CCD numbers come from:


QUOTE
...design choices, some of which seem odd to me.

Nominally, ISIS breaks each filter into a separate image. Based on the issues I've had with the FULLCCD option
and the responses at https://astrodiscuss.usgs.gov/t/should-junocam2isis-fullccd-yes-produced-single-band-cubes/797/5?u=bswift
I have the impression that no one else has used it.


Posted by: mcaplinger Jan 25 2022, 04:47 PM

QUOTE (Brian Swift @ Jan 22 2022, 04:14 PM) *
Based on the issues I've had with the FULLCCD option... I have the impression that no one else has used it.

Certainly we didn't ask for this functionality, no other pushframe imager support in ISIS has it, and we weren't asked for, nor did we provide, any input about how pixels are mapped to CCD lines.

I acknowledge that the I kernel is not very clear about what the CCD layout is; technically it doesn't need to be. I didn't concentrate on this because we were trying to get the framelet parameters all correct, and I believe they are. But how we got to that point is more confusing than one might hope. I did find a correction for the dark/buffer CCD rows buried in the parameter-fitting code that we used to update the I and frames kernel values from cruise star imaging. Unfortunately, near the end of that analysis we realized that the random timestamping errors were going to swamp any effort to get to subpixel precision on this, so what we have now is probably near the limit of what can be done.

I will conclude by observing that camera models don't have to be "right" and often aren't, they just have to yield correct and consistent results.

Posted by: volcanopele Jan 25 2022, 04:57 PM

CaSSIS does have similar functionality with tgocassisstitch (and tgocassisunstitch). But no, we don't use it either. The only place where we might want to use it is during bundle adjustment, but we just make use of observations mode in jigsaw to ensure that framelets taken simultaneously are adjusted together.

Posted by: Brian Swift Oct 3 2022, 12:48 AM

A good "Introduction to JunoCam Imaging" thread on twitter: https://twitter.com/akaschs/status/1576220654801936384

Posted by: Kevin Gill Oct 7 2022, 08:40 PM

OK, so I just wrote up a new JunoCam processing pipeline. I wrote it in Rust as a dedicated processing package which uses a much less complex method than what I have been doing. I'm mainly just using the CK kernels to determine spacecraft rotation in order to restitch the framelets together (leading to the primary limitation that it doesn't yet account for spacecraft motion). It does dark/flat calibration, hot-pixel correction, and blemish infill correction. I tried to make it as cross-platform as possible, but so far I've only tested it on Linux (Kubuntu 22.04 & Ubuntu 22.04 in WSL, specifically). So far I've only written it as a command-line tool, but it probably wouldn't be too hard to add a GUI. I expect it to be somewhat buggy and imperfect in its current state, and it only supports RGB images, though I do plan to add methane and other single-channel image options.

I put it up on GitHub if anyone wants to check it out: https://github.com/kmgill/junocam_processing
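As a generic illustration of the dark/flat/hot-pixel stage (not the actual Rust code; the threshold and filter size here are arbitrary):

CODE
import numpy as np
from scipy.ndimage import median_filter

def calibrate(raw, dark, flat, hot_thresh=5.0):
    # Standard dark subtraction and flat-field division.
    cal = (raw.astype(float) - dark) / np.where(flat > 0, flat, 1.0)
    # Hot-pixel correction: replace outliers with the local median.
    med = median_filter(cal, size=3)
    resid = cal - med
    hot = np.abs(resid) > hot_thresh * resid.std()
    cal[hot] = med[hot]
    return cal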



It does OK



At perijove, it shows its limitation with respect to spacecraft motion, but this should be correctable.

Posted by: volcanopele Apr 8 2023, 03:59 AM

One thing that kind of went unnoticed by me was that data from the SRU is finally trickling into the PDS: https://pds-imaging.jpl.nasa.gov/data/juno/JNOSRU_0001/

Only one orbit so far (36), but at least the SIS is out now. Once orbit 34 is released, I'll take a look at writing a script that can at least allow for adding geometry backplanes so you can reproject them in ISIS using nomap2map.
