Hey gang, I finally discovered an easy way to align images of various celestial targets taken at slightly different times! I used a plugin for ImageJ called bUnwarpJ. I'm really pleased with most of the results I've gotten so far. I've noticed that the receding limb sometimes doesn't warp as well as the approaching limb, though. That's something I still need to figure out, along with mosaicking and the issue I'm about to bring up.
Anyway, to celebrate performing this pretty rudimentary image processing task for the first time, I tried aligning Dawn images taken through filters 2, 7 and 8 (corresponding roughly to green, red and blue) using .img files I had converted to .gif through NasaView. The thing is, my resulting color images keep coming out looking incredibly garish. One attempt ended up with a bright pink Ceres.
I've noticed that a lot of raw NASA images, calibrated or uncalibrated, vary wildly in brightness, even across images taken through the same filter with the same exposure. How am I supposed to rectify this, or even begin to know what to do? This has been driving me crazy for a while. I know there's no such thing as "true" color, that color is subjective, etc., but I've always assumed that a given combination of three frames taken through three different filters can be mixed in a specific way that is scientifically useful, predictable, and proper.
I guess what I'm asking is, how do you guys decide on how much of each filter to use when mixing color?
Amateur astrophotographers now routinely use de-rotation software for imaging Jupiter. Jupiter rotates so quickly that it's hard to get imaging data fast enough before the rotation is significant. I haven't used this myself.
Images taken from spacecraft (e.g., Ceres from Dawn) face an even greater challenge, due to the motion of the spacecraft.
"Scientifically useful" and "proper" may be contradictory things. Ultraviolet images of Venus are scientifically useful but are inherently unlike anything a human can see with their eyes.
There really is no objective true color, and that's not just a minor nitpick. It varies dramatically based on the lighting, etc. Almost all space images of solar system objects (from Mercury to Saturn) appear on a monitor much dimmer than the world would appear in direct sunlight. That inherently changes the color a great deal. Deep sky objects, on the other hand, are almost always displayed on a monitor much brighter than they would appear if seen with the naked eye.
The human eye has three broadband color sensors. Many space camera systems have narrowband filters corresponding to the middle of the R, G, and B ranges, and those can approximate human RGB color, but can never be guaranteed to capture it correctly. For example, a sodium yellow line is, in principle, completely invisible to RGB narrowband filters, while the human eye and cameras with broad RGB can see it just fine.
If an image seems too garish to you, you can adjust the saturation.
In summary, there is no validity, in digital imaging, to Keats' lines:
"Beauty is truth, truth beauty," – that is all
Ye know on earth, and all ye need to know.
Once again, I know there's no valid color in digital imaging. What I'm asking is: when you're given a repeating series of images of a target taken through multiple filters, each filter always at the same exposure, and you'd like to process three differently filtered images into one color image, what do you do when images taken through the same filter have wildly inconsistent brightness levels that can't be explained by varying exposure? Are there any references for how a frame taken through a certain filter at a certain exposure should look, absent these brightness fluctuations? I want to know how you choose the specific levels for each filter when processing images, whether or not they include ultraviolet or infrared. I'm not interested in duplicating the exact color my eye would see; I just want to be like you guys.
In case you do want to consider perceived color, here are the steps I like to take with full-color synthetic (rendered) images. Assuming we start with a linear spectral radiance scaling, we can do these sorts of things:
1) Convolve the spectral radiance with the tristimulus color functions
2) Apply the 3x3 transfer matrix that puts the XYZ image into the RGB color space of the computer monitor
3) Include a gamma correction to match the monitor brightness scaling
Even though this would strictly speaking apply to perceived color, it should give pretty close results with "actual" color. I define this as if you're sitting (floating) out in space holding your computer monitor side-by-side with the object of interest and seeing if they match.
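The three steps above can be sketched in Python. This is a toy sketch: the Gaussian colour-matching functions and the flat 2.2 gamma below are rough stand-ins for the real CIE tables and a monitor's actual transfer curve, and the exposure normalization is just a convenience.

```python
import numpy as np

# Rough Gaussian stand-ins for the CIE x-bar, y-bar, z-bar
# colour-matching functions (illustrative values, not the official tables).
def cmf(wl_nm):
    x = (1.06 * np.exp(-0.5 * ((wl_nm - 600) / 38) ** 2)
         + 0.36 * np.exp(-0.5 * ((wl_nm - 446) / 19) ** 2))
    y = 1.00 * np.exp(-0.5 * ((wl_nm - 556) / 47) ** 2)
    z = 1.78 * np.exp(-0.5 * ((wl_nm - 449) / 23) ** 2)
    return np.stack([x, y, z])

# Standard sRGB matrix taking XYZ to linear RGB.
XYZ_TO_RGB = np.array([[ 3.2406, -1.5372, -0.4986],
                       [-0.9689,  1.8758,  0.0415],
                       [ 0.0557, -0.2040,  1.0570]])

def radiance_to_rgb(wl_nm, spectral_radiance):
    """Steps 1-3: integrate against the tristimulus functions,
    apply the XYZ->RGB matrix, then gamma-encode for the monitor."""
    dwl = wl_nm[1] - wl_nm[0]                                   # uniform grid assumed
    xyz = (cmf(wl_nm) * spectral_radiance).sum(axis=1) * dwl    # step 1
    rgb_lin = XYZ_TO_RGB @ xyz                                  # step 2
    rgb_lin = np.clip(rgb_lin / rgb_lin.max(), 0.0, 1.0)        # normalize exposure
    return rgb_lin ** (1.0 / 2.2)                               # step 3

wl = np.linspace(380.0, 780.0, 401)
rgb = radiance_to_rgb(wl, np.ones_like(wl))  # flat (white-ish) spectrum
```

With real CIE tables in place of `cmf` and the monitor's measured gamma, this is the full pipeline from spectral radiance to displayable pixels.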
For a less exact but very fast approach, I make a color combination of R, G, B images and then compare its levels to a color photo processed by one of the masters here (like ugordan for Saturn targets, machi for small bodies), and I adjust the levels to make my image's histogram look similar to the example image by setting the maximum values of R, G, and B channels to be near those for the brightest color in the example image.
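That fast approach can be sketched as a small helper (hypothetical function; `img` is your color combination and `ref` the master's example, both H×W×3 arrays):

```python
import numpy as np

def match_channel_maxima(img, ref):
    """Scale each R, G, B channel of `img` so its maximum matches the
    corresponding channel maximum of the reference colour image."""
    out = img.astype(float).copy()
    for c in range(3):
        m = out[..., c].max()
        if m > 0:
            out[..., c] *= float(ref[..., c].max()) / m
    return out
```

Matching only the channel maxima is crude compared to full histogram matching, but it is usually enough to pull the overall color balance toward the example image.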
Besides the 8-bit (256 tones) vs. 16-bit (65,536 tones) issue Bjorn posted about, NASAView normalizes the images: it auto-stretches from -5% to +105%, so the top and bottom 5% of tones are turned black (0) or white (255).
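What that kind of auto-stretch does to your data can be illustrated with a rough percentile-clip version (an approximation of the described behaviour, not NASAView's actual code):

```python
import numpy as np

def autostretch_8bit(data, clip=5.0):
    """Map the clip..(100-clip) percentile range onto 0..255;
    everything outside is clipped to black (0) or white (255)."""
    lo, hi = np.percentile(data, [clip, 100.0 - clip])
    out = (data.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(out, 0.0, 255.0).astype(np.uint8)
```

Because `lo` and `hi` depend on each frame's own histogram, two frames of the same target through the same filter can come out at completely different brightnesses — which is exactly the inconsistency being asked about.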
Then there is the issue of the 8-bit image depth, and then the camera calibration needs to be looked at: each CCD array and filter has a set of calibration images that need applying. For example, for Dawn, the flat images:
http://sbn.psi.edu/archive/dawn/fc/DWNCALFC2/DATA/FLATS/
"FC2_F1_FLAT_V02.IMG" -- an 8-bit normalized copy of the 32-bit float:
http://imgbox.com/KFAWcsu5
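Applying a flat is essentially a per-pixel division by the normalized flat field; a minimal sketch (ignoring the dark/bias steps a full calibration would also need):

```python
import numpy as np

def apply_flat(raw, flat):
    """Divide a raw frame by its mean-normalized flat field to remove
    pixel-to-pixel sensitivity variation (zero pixels guarded against)."""
    norm = flat / flat.mean()
    return raw / np.where(norm == 0, 1.0, norm)
```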
Now, converting DIGITAL R, G, B black-and-white images into a single color image is WAY easier than it used to be with FILM (I used to do dye transfer and color separations in the darkroom). A LOT of math needs to go into it, or you can take the non-scientific, artistic approach and just have it "look good". Read up on the MSL, Opportunity, and Spirit "sundial" color-target image processing, or the Viking color chart processing (and the blue sky in the first image).
If they are FITS images, then they are NOT PDS img (img/lbl) files. FIT or FITS is mainly used in astronomy, and most editors can work with them. For FITS images I use:
Nip2 -- handles 32- and 16-bit images
G'MIC (terminal tool) -- handles 32- and 16-bit images
GIMP 2.9.3 (DEVELOPMENT) -- handles 32- and 16-bit images
and sometimes GDAL.
I take it you are referring to these:
http://sbn.psi.edu/archive/dawn/fc/DWNVFC2_1A/DATA/
The FIT:
http://sbn.psi.edu/archive/dawn/fc/DWNVFC2_1A/DATA/FITS/2011223_SURVEY/2011238_CYCLE6/FC21A0006338_11238030914F1H.FIT
The FIT in G'MIC (handy for images in the web browser) -- upside down; the FITS format starts at the bottom and reads UP:
http://imgbox.com/cSN2rQ22
The data in the FIT images are 16-bit SIGNED integers (-32768 to +32767). If the Java ImageJ is normalizing them, they will be all over the place. PNG does NOT support 16-bit signed short data, just 16-bit unsigned short (0 to 65535).
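To get signed 16-bit FITS data into a range PNG accepts, the usual trick is the FITS BZERO = 32768 convention: add a fixed offset that shifts the signed range onto 0..65535.

```python
import numpy as np

def signed16_to_uint16(data):
    """Shift signed 16-bit data (-32768..32767) into PNG's unsigned
    16-bit range (0..65535) by adding the conventional 32768 offset."""
    return (data.astype(np.int32) + 32768).astype(np.uint16)
```

This is a pure offset, so it preserves the relative brightness of every pixel — unlike an auto-stretch, which rescales each frame independently.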
The IMG,
http://sbn.psi.edu/archive/dawn/fc/DWNVFC2_1A/DATA/IMG/2011223_SURVEY/2011238_CYCLE6/FC21A0006338_11238030914F1H.IMG
opens just fine in ISIS3 and imports correctly.
Less than 24 hours ago I had no idea how to write command lines... but after seven continuous hours last night and five continuous hours today of caffeine, cursing and sifting through forum posts, I finally made my first NASA image with good color from images that weren't contrast stretched~~~!!!!!!!!!!11111
Thank you QFitsView, ISIS3 and everyone in the ISIS community!
Most things I work on involve typing into the terminal at some point. This is not new; back in the '80s, when I started using these infernal things (and 5-inch floppies), it was the norm.
Yep... I was pretty much determined to install and attempt to use ISIS at all costs last night, and I did that, and then I found a way to get .pngs that hadn't been contrast stretched this afternoon. There really needs to be easy-to-use software that poops out entire folders of unstretched .pngs, for people like me who have spent the past three years trying to do this. But honestly, after getting ISIS running, I feel pretty content with it. I just wish there was a way to batch convert entire folders of images into cubes, and entire folders of cubes into unstretched .pngs.
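Batching the folder conversions is scriptable. Here is a hedged sketch driving ISIS3's `dawnfc2isis` and `isis2std` from Python — the parameter names are from memory (check each app's help), and the manual-stretch minimum/maximum are placeholders you'd set for your own data:

```python
import subprocess
from pathlib import Path

def batch_img_to_png(folder):
    """Convert every Dawn FC .IMG in `folder` to an ISIS cube, then
    export each cube as a PNG using a fixed manual stretch instead of
    the default per-frame auto-stretch."""
    for img in sorted(Path(folder).glob("*.IMG")):
        cub = img.with_suffix(".cub")
        png = img.with_suffix(".png")
        subprocess.run(["dawnfc2isis", f"from={img}", f"to={cub}"],
                       check=True)
        subprocess.run(["isis2std", f"from={cub}", f"to={png}",
                        "format=png", "stretch=manual",
                        "minimum=0", "maximum=16383"],
                       check=True)
```

Using the same fixed minimum/maximum for every frame is what keeps the exported PNGs mutually consistent.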
If you need help with ISIS3 (and I am assuming CentOS 6.8, the free version of RHEL 6.8), PM me or send me an email. I find that mixing ISIS3, GDAL, and G'MIC makes a good tool set. You may or may not be aware of one of the cool, mostly undocumented options: you can convert a cub to a raw AND keep all the info, so you can move it back to a cub after working on it.
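The cub-to-raw round trip looks roughly like this — app and parameter names are from memory, so verify with `isis2raw -h` / `raw2isis -h`, and the sample/line counts must match your cube's actual dimensions:

```python
import subprocess

def cub_raw_roundtrip(cub, raw, out_cub, samples, lines):
    """Export an ISIS cube to raw 32-bit data, let an external editor
    (G'MIC, GIMP, ...) work on it, then rebuild a cube from the raw."""
    subprocess.run(["isis2raw", f"from={cub}", f"to={raw}",
                    "bittype=32bit"], check=True)
    # ... edit the raw file externally here ...
    subprocess.run(["raw2isis", f"from={raw}", f"to={out_cub}",
                    f"samples={samples}", f"lines={lines}",
                    "bittype=real"], check=True)
```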