Juno perijoves 2 and 3, October 19 and December 11, 2016
Post #1 | Oct 26 2016, 04:44 PM | Junior Member
A lot has happened and it seemed like a good time to start a new topic. We will be staying in 53-day orbits until the project has a full understanding of the risks that may or may not be associated with reducing the orbit period to 14 days per our previous plan.
Post #2 | Feb 3 2020, 11:48 PM | IMG to PNG GOD (Moderator)
There is also clearly some misalignment far from the limb that can be seen by 'blinking' the red/green/blue channels rapidly in high contrast areas. The misalignment should be smaller. One important issue is that the x (sample) coordinate of the optical axis is *not* at the center of the CCD (i.e. x=824) in the framelets. Depending on how you are creating these images this may be important, but I'm not sure it's enough to cause all of the misalignment (in particular not the misalignment near the center of your images).
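For anyone reproducing the geometry, the effect of an off-center optical axis is easy to sketch. The radial distortion model below follows the structure of the sample code in the JunoCam IK kernel comments; the constants (CX, CY, K1, K2) are illustrative values quoted from memory and should be checked against juno_junocam_v03.ti before use.

```python
# Sketch of the JunoCam distort/undistort scheme described in the IK kernel.
# CAUTION: CX, CY, K1, K2 are illustrative; verify against juno_junocam_v03.ti.
CX, CY = 814.21, 78.48        # optical-axis pixel; note CX is NOT the CCD center x=824
K1, K2 = -5.96e-08, 2.74e-14  # radial distortion coefficients

def distort(xu, yu):
    """Ideal (undistorted) focal-plane pixel offsets -> distorted offsets."""
    r2 = xu * xu + yu * yu
    dr = 1.0 + K1 * r2 + K2 * r2 * r2
    return xu * dr, yu * dr

def undistort(xd, yd, iters=5):
    """Invert distort() by fixed-point iteration, as in the IK sample code."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        dr = 1.0 + K1 * r2 + K2 * r2 * r2
        xu, yu = xd / dr, yd / dr
    return xu, yu

# A pixel (x, y) is converted to offsets about (CX, CY), not about x=824:
x, y = 100.0, 60.0
xu, yu = undistort(x - CX, y - CY)
```

The round trip distort(undistort(p)) should agree with p to well under a pixel; if it doesn't, the iteration count or constants are off.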
Post #3 | Feb 4 2020, 10:58 PM | Member
QUOTE
... 'blinking' the red/green/blue channels rapidly in high contrast areas.

Doh. I should have already been doing this. I've just been looking at blue/red fringing in high contrast areas. Björn, are you using the standard camera model, or have you developed your own?
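Blinking channels is a visual check; the same misregistration can also be quantified. A minimal sketch using phase correlation (plain NumPy, with synthetic data standing in for two color channels) to recover the integer-pixel shift between channels:

```python
import numpy as np

def channel_shift(a, b):
    """Estimate the integer (row, col) shift of image b relative to a
    via phase correlation (peak of the normalized cross-power spectrum)."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12          # keep phase only
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap peaks past the halfway point into negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Synthetic stand-in for a red/green channel pair with known misregistration:
rng = np.random.default_rng(0)
red = rng.random((128, 128))
green = np.roll(red, (3, -2), axis=(0, 1))   # shifted 3 rows, -2 columns
shift = channel_shift(red, green)            # -> (3, -2)
```

On real framelets the two channels see slightly different terrain, so the peak is blurrier, but a clear non-zero shift still indicates a geometry or timing error.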
Post #4 | Feb 7 2020, 01:01 AM | IMG to PNG GOD (Moderator)
QUOTE
Björn, are you using the standard camera model, or have you developed your own?

This depends on how you define "...using the standard camera model". I'm using software written by myself for the geometric processing (reprojecting the framelets to a simple cylindrical and/or polar map etc.). However, some of the code is directly based on information/code from the IK kernel, in particular the distort/undistort code, the location of R/G/B on the CCD, the FOV etc. I'm also using the SPICE toolkit.

It works wonderfully now, especially after lots of improvements I made in November and December 2019 (what was supposed to be a minor improvement in early November triggered a flood of new ideas for improving the software, resulting in faster processing, improved/proper flatfielding, a new (?) function for removing limb darkening, better photometric parameters, an empirical model of the skylight illumination near the terminator, easier limb fits etc.).

The only issue I'm working on now is the value I need to add to START_TIME. Using a fairly large sample of images it has become absolutely clear that in my case the average value I need to use is lower than the correct value (0.068). The average value I use is close to 0.040 but it varies and is sometimes close to 0.068. This means that something is wrong. It has no visual effect though and is in that sense not a serious problem (in particular there is negligible misalignment between the R/G/B channels or adjacent framelets from the same color channel). I can think of at least three plausible reasons for this (in fact all of them might contribute to the error) and now that I'm almost finished with the PJ24 images (at least for the time being) I plan on taking a detailed look at this issue.
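For reference, the timing model being adjusted here can be sketched as follows. The structure (labeled start time plus a fixed bias, then a per-framelet interframe delay plus a small fixed delta) follows my reading of the JunoCam IK comments; the 0.06188 s bias is the value in juno_junocam_v03.ti, and the 0.001 s delta should be verified against the kernel before use.

```python
START_TIME_BIAS = 0.06188   # seconds, from juno_junocam_v03.ti
INTERFRAME_DELTA = 0.001    # seconds, added to the labeled interframe delay
                            # (verify against the IK kernel comments)

def framelet_time(start_time, k, interframe_delay, extra_offset=0.0):
    """Exposure time of framelet k, in seconds past some epoch.

    start_time       -- labeled START_TIME converted to seconds past the epoch
    k                -- framelet index (0-based)
    interframe_delay -- labeled INTERFRAME_DELAY in seconds
    extra_offset     -- empirical correction from limb fits (the quantity
                        under discussion above; ideally zero)
    """
    return (start_time + START_TIME_BIAS + extra_offset
            + k * (interframe_delay + INTERFRAME_DELTA))

# e.g. third framelet (k=2) of an image with a 0.370 s interframe delay:
t = framelet_time(0.0, 2, 0.370)   # 0.06188 + 2 * 0.371 = 0.80388 s
```

A limb-fitted offset of ~0.040 instead of ~0 in `extra_offset` is exactly the discrepancy being described.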
Post #5 | Feb 7 2020, 06:55 AM | Member
QUOTE
... some of the code is directly based on information/code from the IK kernel, in particular the distort/undistort code,

That is what I was referring to. I and (I believe) Gerald have our own distort/undistort code, and Kevin (I believe) is using the standard code via ISIS, but I didn't know if you had your own or used the standard code.

QUOTE
... in faster processing, improved/proper flatfielding, a new (?) function for removing limb darkening, better photometric parameters, an empirical model of the skylight illumination near the terminator, easier limb fits etc.).

Awesome. Wish I had time to do a proper illumination removal model. When you combine multiple images, is there residual brightness variation that needs to be adjusted to eliminate boundaries between images?

QUOTE
The only issue I'm working on now is the value I need to add to START_TIME. Using a fairly large sample of images it has become absolutely clear that in my case the average value I need to use is lower than the correct value (0.068). The average value I use is close to 0.040 but it varies and is sometimes close to 0.068.

Note, the START_TIME_BIAS in juno_junocam_v03.ti is 0.06188, not .068. My start times (based on limb fit) range from .03 to .08.
Post #6 | Feb 9 2020, 11:40 PM | IMG to PNG GOD (Moderator)
QUOTE
... in faster processing, improved/proper flatfielding, a new (?) function for removing limb darkening, better photometric parameters, an empirical model of the skylight illumination near the terminator, easier limb fits etc.). Awesome. Wish I had time to do a proper illumination removal model. When you combine multiple images, is there residual brightness variation that needs to be adjusted to eliminate boundaries between images?

Yes, there are always some differences and I don't think they can ever be eliminated. The reason is that the photometric parameters differ a bit for different parts of Jupiter (in particular I'm pretty sure that the parameters for the polar areas differ from the parameters closer to the equator) and they also vary as a function of time (changes in color/brightness/haze etc.). In other words: there's no such thing as a 'perfect' photometric model for Jupiter. For simplification purposes I'm using the same parameters everywhere.

That said, the differences at the boundaries between images are much smaller now than they used to be when mosaicking images. The intensity differences are smaller, but what's maybe even more important is that there are now no significant color differences at the seams (unless there are significant differences in the emission angle).

These smaller differences are not only because of more accurate photometric parameters when removing the illumination effects. I recently discovered that proper and accurate flat fielding is much more important when processing JunoCam images than I used to think. There is some vignetting in the raw images. The effects of flat fielding are not very noticeable in images with lots of high frequency, high contrast features but they are more noticeable in low contrast areas.
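As a concrete (and hypothetical) illustration of this kind of illumination removal: a Minnaert-type correction divides the observed intensity by mu0^k * mu^(k-1), where mu0 and mu are the cosines of the incidence and emission angles. The k below is a made-up placeholder, not a fitted parameter, and using a single global k is exactly the simplification described above.

```python
import numpy as np

def remove_illumination(img, mu0, mu, k=0.95):
    """Divide out a Minnaert photometric function I = A * mu0**k * mu**(k-1).

    img      -- observed intensities
    mu0, mu  -- cosines of incidence and emission angles, per pixel
    k        -- Minnaert exponent; 0.95 is a placeholder, not a fitted value
    """
    corr = img / (mu0 ** k * mu ** (k - 1.0))
    corr[(mu0 <= 0) | (mu <= 0)] = 0.0   # night side / off-disk: no correction
    return corr

# Round trip on synthetic data: a flat albedo map shaded by the same model
rng = np.random.default_rng(1)
albedo = 0.5 + 0.1 * rng.random((64, 64))
mu0 = rng.uniform(0.2, 1.0, (64, 64))
mu = rng.uniform(0.2, 1.0, (64, 64))
shaded = albedo * mu0 ** 0.95 * mu ** -0.05
recovered = remove_illumination(shaded, mu0, mu, k=0.95)
```

Real data won't round-trip like this, of course; the residual seam differences come precisely from the model's k not being correct everywhere.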
Here is an image where the effects of flat fielding are particularly noticeable, an animated GIF example from image PJ14_26 (here the illumination has not been changed). Needless to say, the flat fielded images with illumination removed are easier to deal with when making mosaics.

The flat fielding also greatly reduces the horizontal banding seen in some images, especially in the blue channel in hi-res images of low contrast areas. This is an example from the blue channel in image PJ10_28 without flat fielding (the contrast has been increased a lot): the individual framelets are too dark at the top and too bright at the bottom. Flat fielding greatly reduces this banding but does not completely eliminate it.

The absence of an 'official' JunoCam flat field turned out to be a smaller problem than I was initially expecting. As a starting point I found a flat field image in this post from Mike. Using this image directly didn't work well (I tried all decompanded and not decompanded combinations to be sure); I had to make significant modifications to it in order for it to work well. This was largely a trial and error process involving mosaics where the difference in emission angle is small in the overlap area, checking images where I knew that the brightness near the right edge shouldn't be lower than near the center, and also checking for the horizontal banding mentioned above. I ended up with a flat field that seems to work very well but I'll probably make further modifications to it later - importantly, I really have no idea exactly how close it is to a 'perfect' flat field. This is the flat field I'm currently using: apart from other changes, high frequency artifacts, blemishes etc. have been removed since I prefer to fix these in a separate processing step and not as part of the flat fielding.

Has anyone else been flat fielding the JunoCam images and if so, how?

QUOTE
Note, the START_TIME_BIAS in juno_junocam_v03.ti is 0.06188, not .068. My start times (based on limb fit) range from .03 to .08

Oops, I didn't look up the correct value before writing the incorrect value 0.068, but it doesn't change the fact that the ~0.040 value I mentioned is suspiciously low. The range I have seen (also from limb fits) is slightly bigger, ~0.005 to ~0.085.

QUOTE
My assumptions about camera pointing are imperfect, and I'm doing some manual limb fitting of each image. ... Regarding the limb fit: Note that the opacity of the hazes, and so the apparent limb, varies with latitude. This applies to Jupiter's equipotential surface relative to a spheroid, too. (AFAIK, full detail of the latter isn't publicly accessible, yet.)

I'm also measuring the limb position in every image I process - I've sometimes had the impression that this was rare. I then feed the measured limb positions into software that gives me the START_TIME and interframe delay that are consistent with the measured limb positions. Hazes, variable cloud altitudes etc. greatly complicate this though. Also, the appearance of the limb in the blue images is significantly different from the red (and also green) images, and this affects the limb position measurements. Maybe I should measure the limb positions from red images only.
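For readers wanting to try this, the mechanics of applying a flat field are simple; the hard part, as described above, is deriving a good flat. A generic sketch (the flat and dark level here are synthetic stand-ins, not the actual pipeline or any official product):

```python
import numpy as np

def flat_field(framelet, flat, dark=0.0):
    """Apply a flat field to one decompanded (linearized) framelet.

    framelet -- decompanded framelet, shape (128, 1648) for JunoCam
    flat     -- flat-field/gain image of the same shape, assumed already
                scaled so its values are near 1.0 (placeholder, not official)
    dark     -- dark/bias level to subtract before dividing
    """
    return (framelet - dark) / flat

# Round trip on synthetic data: a scene multiplied by a vignetting profile
rng = np.random.default_rng(2)
scene = rng.uniform(50.0, 200.0, (128, 1648))
x = np.linspace(-1.0, 1.0, 1648)
vignette = 1.0 - 0.2 * x ** 2              # brighter center, darker edges
raw = scene * vignette + 10.0              # 10.0 = fake dark level
recovered = flat_field(raw, np.tile(vignette, (128, 1)), dark=10.0)
```

Dividing by the flat after dark subtraction (rather than before) matters: vignetting multiplies the signal, not the bias, which is also why an incorrect dark level shows up as residual banding.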
Post #7 | Feb 11 2020, 08:07 PM | Member
QUOTE
Has anyone else been flat fielding the JunoCam images and if so, how?

My flat fields (gains) were derived from the average of 150 bright framelets (30 from each of PJ12 to PJ16), which was smoothed by fitting with a 10th order polynomial. Dark spots in the average are merged into the polynomial-based flat. I've uploaded the gain, flat (not used), and debias images as 32-bit TIFF to the GitHub repo: https://github.com/BrianSwift/JunoCam/tree/master/Juno3D . An animated gif shows the effect of the flat field on the PJ10_28 blue channel.

QUOTE
... I then feed the measured limb positions into software that gives me the START_TIME and interframe delay that are consistent with the measured limb positions.

I only use limb fits to adjust START_TIME, centering the SPICE limb within the visible limb using the average of the time offset computed independently for R, G, B. I assume variance in the altitude of the visible limb is due to real atmospheric variation relative to the SPICE ellipsoid. I'm unaware of any physical justification for varying the interframe delay.
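A minimal sketch of the smoothing step described above: average many bright framelets, then fit a 10th-order polynomial to get a smooth gain profile. This is a 1-D simplification along the sample axis with synthetic data; the real derivation presumably works on full 2-D framelets.

```python
import numpy as np
from numpy.polynomial import Polynomial

def smooth_gain(mean_framelet, deg=10):
    """Fit a 10th-order polynomial to the column means of an averaged
    bright framelet, giving a smooth gain profile along the sample axis."""
    profile = mean_framelet.mean(axis=0)       # collapse rows -> one profile
    x = np.arange(profile.size)
    fit = Polynomial.fit(x, profile, deg=deg)  # least-squares polynomial fit
    gain = fit(x)
    return gain / gain.mean()                  # normalize to unit mean

# Synthetic averaged framelet: smooth vignetting profile plus pixel noise
rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 1648)
true_profile = 1.0 - 0.3 * x ** 2
mean_framelet = true_profile[None, :] * (1.0 + 0.01 * rng.standard_normal((128, 1648)))
gain = smooth_gain(mean_framelet)
```

The polynomial fit keeps the smooth vignetting while rejecting pixel-scale noise and blemishes, which is the point of smoothing the averaged framelets rather than using them directly.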