Sorry if this has been discussed anywhere but I can't seem to find much information about it - why is it that Voyager's pointing is so haphazard, as in this video of all the RAW Jupiter images - https://www.youtube.com/watch?v=bf5QJ8iFxUs?
I came across this link which says that pointing information for the images exists - http://pds-rings.seti.org/voyager/ck.html, but have also read on this forum that it's not very accurate. Would the information be useful in automatically aligning composite images and mosaics, or is it too coarse? Would it at least be useful in getting general alignments that could be refined by hand?
And does anyone know why the cameras could not be pointed more accurately, or why accurate information could not be returned with the images? I assume it was some technical limitation, but just curious what it might be.
The north azimuth that can be determined using the C kernels is accurate (this applies to all of the Voyager C kernels), and that alone makes the available kernels highly useful. The pointing itself (i.e. the location of the subspacecraft point in the image) is typically off by ~100 pixels and needs to be corrected. The C-smithed kernels are an exception (they are much more accurate), but they are only available for the Voyager 1 Saturn images.
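To illustrate what "north azimuth from the C kernels" means in practice, here's a minimal pure-NumPy sketch. It assumes the SPICE convention that the C-matrix rotates inertial vectors into the camera frame, with rows being the camera sample, line, and boresight axes; the sign/zero-point convention for the azimuth angle is my own assumption and may need adjusting to a particular image coordinate system.

```python
import numpy as np

def north_azimuth(cmat):
    """North azimuth in the image plane from a CK C-matrix.

    Assumes (per SPICE convention) that cmat rotates inertial vectors
    into the camera frame, with rows = camera X (sample), Y (line),
    Z (boresight) axes.  The angle convention below (measured from the
    +sample axis toward the +line axis) is an assumption.
    """
    north_inertial = np.array([0.0, 0.0, 1.0])   # +Z of the inertial frame
    n_cam = cmat @ north_inertial                # north in camera coordinates
    return np.degrees(np.arctan2(n_cam[1], n_cam[0]))
```

Since the azimuth is good but the boresight position is off by ~100 pixels, you can trust this angle for rotating frames to a common orientation while still solving for the x/y offset separately.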
Okay, thank you both for the information - it all makes more sense now.
I'm just starting to get stabilization working for the movies, which works for most simple cases - I was planning to try to crowd-source the rest (assuming such a crowd could be found - I figured some people on Reddit might be interested). But it sounds like once the C-smithed kernels are available that wouldn't be necessary - it would be possible to generate stable movies and composites somewhat automatically.
Of course, generating accurate C kernels would be where all the hard work is...
Well, I'll post movies as they get more stable, as much as can be automated anyway - I can't stop now...
The links to the downloads on the C kernels page weren't working so I wrote to Mark Showalter - he is working on C-smithing the Cassini kernels right now, and plans to C-smith the Voyager kernels once the Cassini data is done, which might be another year or so.
So, regarding the accuracy of the C kernels (for the kernels obtained from the SEDR data):
Have you read through my post
"Voyager Images and Isis3, applications and methods"?
http://www.unmannedspaceflight.com/index.php?showtopic=8198
Use your preferred way of handling the truncated lines:
add a grid and then measure the offset. You can then offset the non-gridded image by the x & y amount; this works well.
For non-gas-giant targets, a control network can be used, BUT this is easier IF the images are already close to accurate.
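The grid-and-offset correction described above can be sketched in a few lines of NumPy (the function name is mine, and this integer-shift version is a minimal stand-in for whatever tool you measure the offset with):

```python
import numpy as np

def shift_image(img, dx, dy):
    """Shift an image by a measured integer (dx, dy) pointing offset.

    np.roll wraps pixels around the edges; for real work you would pad
    or crop instead, but for small pointing corrections on frames with
    dark margins the wrap-around is usually harmless.
    """
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)
```

For subpixel offsets you'd swap np.roll for an interpolating shift (e.g. scipy.ndimage.shift), but the workflow — measure against a grid, then translate — is the same.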
Thanks for the link, I haven't tried ISIS yet - would like to someday though.
At the moment the goal is to make fairly rough movies, so motion of the target between images isn't accounted for, nor geometric distortion, but they look okay(ish) for the task. Here's the Voyager 1 Jupiter rotation movie in color, still kind of glitchy (v0.41) - http://imgur.com/MgNRzyE.
Maybe someday a later version could get into using ISIS to make more accurate versions, but that's kind of over the horizon at the moment.
But yeah, whatever can't be automated will have to be done by hand - will try to reduce that as much as possible though.
Without correction for geometric distortion, the Jupiter approach movies will look as though they were filmed underwater (I speak here from bitter experience).
I've already completed this particular video using a much more refined version of the method outlined in this clip from 2010:
https://www.youtube.com/watch?v=0XjW0vZZZXw
Here's a snippet of the final movie, which I can't yet release for various reasons (but look out for it soon!):
https://youtu.be/ZLSD0_-3LTM?VQ=HD1080
That's good to know, thanks - I'd read something about the flatfield correction not being ideal, so it might be nice to add something like that to the pipeline further down the line. Also will need some good reseau-mark-removal cleanup algorithms, since they're so noticeable on the limbs.
And for cleaning up noisy images it could be a nice citizen science type project to host the images online somewhere so people could download some and clean them up. Someday...
I use G'MIC (it was called GREYCstoration), but it evolved into including basically everything ImageMagick can do, plus PDE-based smoothing and inpainting.
The PDS IMQ header (yes, the IMQ) has x/y points for the reseau marks.
Thanks, G'mic looks interesting, especially if it can do inpainting well.
I've got the program set up to use the later PDS volumes, so would be using that data - it includes the reseau mark locations like so -
C1469813_RESLOC.TXT:
This is a table of the center locations of
the reseau markings in the corresponding raw image, C1469813_RAW.IMG. The
table has 202 rows, one per reseau marking. Each row contains values for the
line and sample coordinates. Note that lines and samples range from 1 to 800,
although some reseau markings can fall outside these limits. This file was
derived from the corresponding VICAR-format binary data file
C1469813_RESLOC.DAT. It has been converted to ASCII text format for users'
convenience. An extra column contains the reseau marking number as originally
identified by the Voyager Imaging Team; this number differs from the order of
the rows in the file. See Fig. 1 of
Danielson, G. E., P. N. Kupferman, T. V. Johnson, and L. A. Soderblom 1981.
Radiometric performance of the Voyager cameras. J. Geophys. Res. 86, 8683-
8689.
C1469813_RESLOC.TAB:
1, 4.9486, 12.4470, 1
2, 0.3756, 51.5631, 3
3, 2.4425,126.7564, 4
4, 1.5738,205.6022, 5
5, -0.2871,282.6453, 6
6, -0.1777,362.7857, 7
...
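Given that table layout (row number, line, sample, team mark number, with 1-based coordinates), parsing the .TAB file and doing a first-pass reseau cleanup is straightforward. This is only a sketch: the crude median fill below is a placeholder for proper inpainting (G'MIC or similar), and the function names are mine.

```python
import numpy as np

def read_resloc(rows):
    """Parse *_RESLOC.TAB rows into 0-based (line, sample) positions.

    Column layout assumed from the label text quoted above:
    row number, line, sample, imaging-team mark number.
    Non-data lines (headers, "...") are skipped.
    """
    marks = []
    for row in rows:
        fields = row.split(",")
        if len(fields) != 4:
            continue
        line, samp = float(fields[1]), float(fields[2])
        marks.append((line - 1.0, samp - 1.0))   # convert 1-based to 0-based
    return marks

def remove_reseau(img, marks, radius=4):
    """Crude reseau cleanup: overwrite a small box at each mark with the
    median of a larger surrounding patch.  A sketch only; a real
    pipeline would use proper inpainting."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y, x in marks:
        yi, xi = int(round(y)), int(round(x))
        y0, y1 = max(yi - 2 * radius, 0), min(yi + 2 * radius + 1, h)
        x0, x1 = max(xi - 2 * radius, 0), min(xi + 2 * radius + 1, w)
        fill = np.median(out[y0:y1, x0:x1])      # background-dominated median
        out[max(yi - radius, 0):min(yi + radius + 1, h),
            max(xi - radius, 0):min(xi + radius + 1, w)] = fill
    return out
```

Note that some marks have coordinates slightly below 1 (like row 5 above), i.e. just off the edge of the 800x800 frame, so the clamping matters.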
For filling in large gaps, I was thinking that since the program knows what the size of the target should be (thanks to the SPICE data), once you have the image centered, you could fill in the target area where it's black by filling in from the previous good frame, and blurring the edges a bit. Either that or some kind of texture synthesis, e.g. http://eric-yuan.me/texture-synthesis/.
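The previous-good-frame idea sketches out like this in NumPy. Everything here is illustrative (names, the black threshold, the box-blur feathering); the target mask is assumed to come from the SPICE-predicted disk geometry.

```python
import numpy as np

def fill_gaps(curr, prev, target_mask, black_thresh=8, feather=3):
    """Fill black dropouts inside the target disk from the previous frame.

    target_mask: boolean array, True where the target should be (known
    from SPICE geometry).  Pixels darker than black_thresh inside the
    mask are taken from prev, with a feathered (repeatedly box-blurred)
    blend mask so the seams are soft.
    """
    gap = target_mask & (curr < black_thresh)
    alpha = gap.astype(float)
    for _ in range(feather):                     # 3x3 box blur, repeated
        p = np.pad(alpha, 1, mode="edge")
        alpha = sum(p[i:i + alpha.shape[0], j:j + alpha.shape[1]]
                    for i in range(3) for j in range(3)) / 9.0
    alpha = np.clip(alpha, 0.0, 1.0)
    return curr * (1.0 - alpha) + prev * alpha
```

Texture synthesis (as in the link above) would handle large gaps better, since the previous frame's features will have rotated slightly; this blend is only a cheap first approximation.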
I'm still having trouble stabilizing images due to all the bad frames, so cleaning up the images might need to be higher in priority - until then the movies are going to be fairly choppy. But the expected target size from SPICE has matched the actual image very well from the images I've looked at, so that might help with centering the images, since the Hough circle detector can look for circles of a specific radius.
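Since the radius is known in advance from SPICE, the Hough search collapses to a single radius, which is simple enough to sketch in pure NumPy (OpenCV's HoughCircles with minRadius/maxRadius pinned near the predicted value does the same job properly; the edge threshold here is arbitrary):

```python
import numpy as np

def hough_circle_center(img, radius, edge_thresh=50.0, n_angles=360):
    """Locate a circle of known radius with a fixed-radius Hough vote.

    Every edge pixel (by gradient magnitude) votes for candidate centers
    one radius away in all directions; the accumulator peak is the
    center of the limb.  Returns (line, sample).
    """
    gy, gx = np.gradient(img.astype(float))
    edges = np.argwhere(np.hypot(gx, gy) > edge_thresh)
    acc = np.zeros(img.shape, dtype=np.int32)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    dy = np.round(radius * np.sin(angles)).astype(int)
    dx = np.round(radius * np.cos(angles)).astype(int)
    h, w = img.shape
    for y, x in edges:
        cy, cx = y + dy, x + dx
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)      # accumulate votes
    return np.unravel_index(np.argmax(acc), acc.shape)
```

A nice side effect of fixing the radius is robustness to bad frames: noise pixels vote incoherently, while true limb pixels all vote for the same center.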
I've been having trouble getting camera pointing information from SPICE - I've tried both the older C-kernels from NAIF and the newer versions from PDS, Voyager 1 and 2 data, run through all available image times, tried the scan platform vs cameras, set the time tolerance to increasingly larger values, but it still comes back with pointing information not found. I must be doing something obviously wrong - does anyone know what it is?
Here's the Python code using SpiceyPy (I also tried it in C to make sure there wasn't something wrong with the Python interface but got the same results) -
Thanks for any pointers!
I found the problem - somehow in all the permutations of inputs I missed one - it needed the scan platform instead of one of the cameras - Doh!
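For anyone else who hits this wall, the working query looks roughly like the following with SpiceyPy. This is a sketch, not the poster's original code: it assumes the LSK, SCLK, and CK kernels have already been loaded with furnsh(), and the tolerance value is arbitrary.

```python
# NAIF ID codes from the Voyager kernels: CK pointing is stored for the
# scan platform, not for the individual cameras.
VG1_SCAN_PLATFORM = -31100   # Voyager 1; ISS NA is -31101, WA is -31102
VG2_SCAN_PLATFORM = -32100   # Voyager 2

def platform_pointing(sclk_string, tol_ticks=1000.0, sc=-31):
    """Look up the C-matrix for the Voyager 1 scan platform.

    Assumes the needed LSK/SCLK/CK kernels are already furnished;
    the Voyager C kernels are referenced to B1950.  ckgp raises
    NotFoundError when the time falls in a pointing gap.
    """
    import spiceypy as spice
    sclkdp = spice.scencd(sc, sclk_string)        # encode SCLK to ticks
    cmat, clkout = spice.ckgp(VG1_SCAN_PLATFORM, sclkdp, tol_ticks, "B1950")
    return cmat
```

The camera-specific IDs only have frame definitions (fixed offsets from the platform), so asking ckgp for them directly finds no data.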
I had to dig into the kernel files with some SPICE command line tools to find it - some spacit output -
Thanks - I guess this means there are gaps in the pointing data...
I'm just getting started with this pointing stuff, so it will take me a while to get there. So far I can just project a straight-on view to a simple cylindrical map, but need to incorporate the pointing information and axial tilt, etc. Lots of trigonometry...
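The straight-on-view-to-cylindrical-map step, before adding pointing and axial tilt, boils down to one trig relation per map cell: x = R·cos(lat)·sin(lon−lon0), y = −R·sin(lat) on an orthographic disk. A minimal nearest-neighbour sketch (names and conventions are mine; image y is taken to grow downward):

```python
import numpy as np

def disk_to_cylindrical(img, cx, cy, R, lon0=0.0, nlat=90, nlon=180):
    """Reproject a straight-on (orthographic) disk view to a simple
    cylindrical lat/lon map, ignoring axial tilt.

    For each map cell, sample the disk pixel at
    x = cx + R cos(lat) sin(lon - lon0),  y = cy - R sin(lat).
    Cells on the far side of the globe are left at zero.
    """
    lats = np.linspace(-np.pi / 2, np.pi / 2, nlat)
    lons = np.linspace(-np.pi, np.pi, nlon)
    lon_g, lat_g = np.meshgrid(lons, lats)
    vis = np.cos(lat_g) * np.cos(lon_g - lon0) > 0   # facing the camera
    x = cx + R * np.cos(lat_g) * np.sin(lon_g - lon0)
    y = cy - R * np.sin(lat_g)
    xi = np.clip(np.round(x).astype(int), 0, img.shape[1] - 1)
    yi = np.clip(np.round(y).astype(int), 0, img.shape[0] - 1)
    out = np.zeros((nlat, nlon), dtype=img.dtype)
    out[vis] = img[yi[vis], xi[vis]]
    return out
```

Folding in the pointing just means replacing the (cx, cy, lon0) assumptions with values derived from the C-matrix and ephemeris, and the tilt becomes one more rotation applied before the projection.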