@tanjent Yes, in addition to the 3 visible bands, Himawari has 13 infrared bands that work at night. I assume their process is to take one of those bands, give it an alpha channel for transparency, then overlay the transparent image on an existing, cloudless RGB image with even daytime illumination. This would also explain why the night-time clouds don't match up exactly with visible observations, since the IR bands are sensitive to different atmospheric phenomena than the visible bands are.
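For anyone curious what that overlay step might look like, here's a minimal sketch of the general idea (my guess at the process, not CIRA's actual code) using plain numpy array math:

```python
import numpy as np

# Illustrative sketch only: alpha-blend a grayscale IR-derived cloud layer
# over a pre-made cloud-free RGB base image. All names here are made up.

def composite_ir_clouds(base_rgb, cloud_gray, cloud_alpha):
    """Alpha-blend a grayscale cloud layer over an RGB base.

    base_rgb:    (H, W, 3) float array in [0, 1], cloud-free background
    cloud_gray:  (H, W)    float array in [0, 1], IR-derived cloud brightness
    cloud_alpha: (H, W)    float array in [0, 1], per-pixel opacity
    """
    cloud_rgb = np.repeat(cloud_gray[..., None], 3, axis=2)  # gray -> RGB
    a = cloud_alpha[..., None]
    return a * cloud_rgb + (1.0 - a) * base_rgb

# Tiny usage example: an opaque cloud pixel replaces the base,
# a fully transparent pixel keeps the background unchanged.
base = np.zeros((2, 2, 3)); base[..., 2] = 0.5          # bluish background
clouds = np.ones((2, 2))                                 # white cloud layer
alpha = np.array([[1.0, 0.0], [0.5, 0.0]])               # varying opacity
out = composite_ir_clouds(base, clouds, alpha)
```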
This week I have been trading e-mails with Steve Miller and Dan Lindsey at NOAA/CSU/CIRA, who are responsible for the great true-color images on the CIRA/RAMMB page that I'm using to create my videos. They have seen the YouTube channel, and Steve was kind of flabbergasted at first that someone else had managed to replicate their composite process, until I showed him the image credit (click 'see more' in the YouTube description) and explained that I was using their output.
But they both seem excited about the motion interpolation technique, its capabilities, and its applicability to other datasets, so I am in the process of writing better documentation for my project (which can be found here) to help them get Butterflow + my scripts running on their machines.
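For anyone who hasn't seen flow-based interpolation before, the core idea fits in a few lines. This is not Butterflow's code (Butterflow also has to estimate the flow field itself, which is the hard part); it just shows how an in-between frame can be synthesized once a flow field is known, using nearest-neighbor sampling for simplicity:

```python
import numpy as np

# Toy sketch of flow-based frame interpolation: given two frames and a dense
# per-pixel motion field between them, synthesize an intermediate frame by
# sampling each frame part of the way along the motion and blending.

def interpolate_frame(frame0, frame1, flow, t=0.5):
    """Blend of frame0 sampled t of the way along the flow and frame1
    sampled (1 - t) of the way back.

    frame0, frame1: (H, W) grayscale arrays
    flow:           (H, W, 2) per-pixel (dy, dx) motion from frame0 to frame1
    """
    h, w = frame0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Backward warping with nearest-neighbor sampling (real tools use
    # bilinear sampling and handle occlusions; this sketch does not)
    y0 = np.clip(np.rint(ys - t * flow[..., 0]).astype(int), 0, h - 1)
    x0 = np.clip(np.rint(xs - t * flow[..., 1]).astype(int), 0, w - 1)
    y1 = np.clip(np.rint(ys + (1 - t) * flow[..., 0]).astype(int), 0, h - 1)
    x1 = np.clip(np.rint(xs + (1 - t) * flow[..., 1]).astype(int), 0, w - 1)
    return (1 - t) * frame0[y0, x0] + t * frame1[y1, x1]

# Usage: a bright pixel moving 2 columns to the right between frames
# should land halfway (1 column over) in the midpoint frame.
f0 = np.zeros((5, 5)); f0[2, 1] = 1.0
f1 = np.zeros((5, 5)); f1[2, 3] = 1.0
flow = np.zeros((5, 5, 2)); flow[..., 1] = 2.0   # uniform motion: dx = +2
mid = interpolate_frame(f0, f1, flow, t=0.5)
```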
They are continuing to work on their image process, and just pushed an update which includes better color correction near the terminator, so the images are continually improving! And they're working on a new product similar to the VIIRS day/night band that includes night-time coverage, which should be incredible.
I also tried interpolating some hourly NOAA FIM cloud forecast images from Science on a Sphere that Steve Albers (scalbers) sent me. These work pretty well too; you can see a downscaled version and a cropped version here:
https://www.youtube.com/watch?v=KPpq_XmwKkM
https://www.youtube.com/watch?v=Q9zHM8IFB_k

Finally, I've been working my way through the
original paper about the optical flow algorithm underlying the interpolator, and it's given me lots of ideas (most of them likely well outside my coding abilities) about how the technique could be improved for the specific use case of satellite imagery. I'll write some of them up when I have time, in the "discussion" section of the documentation mentioned above. In particular, the bit about incorporating a priori knowledge seems useful, as we often know a lot in advance about the expected flow from one image to the next (i.e. most movement is constrained to the surface of the sphere, we know the timing of the terminator, etc.).
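As a toy example of that kind of a priori knowledge: the terminator's position follows directly from the image timestamp, so brightness changes there can be attributed to illumination rather than cloud motion. The formulas below are standard rough approximations (good to a degree or two), not anything from the paper or from CIRA's pipeline:

```python
import math
from datetime import datetime, timezone

# Illustrative only: predict which side of the terminator a pixel is on
# from the image timestamp, using textbook approximations for the
# subsolar point (ignores the equation of time, so expect ~1-2 deg error).

def subsolar_point(t: datetime):
    """Approximate (lat, lon) in degrees where the sun is directly overhead."""
    day = t.timetuple().tm_yday
    # Approximate solar declination for day-of-year
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day + 10)))
    # Subsolar longitude drifts 15 degrees of longitude per hour of UTC
    hours = t.hour + t.minute / 60.0 + t.second / 3600.0
    lon = (12.0 - hours) * 15.0
    lon = (lon + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return decl, lon

def is_daylit(lat, lon, t: datetime):
    """True if (lat, lon) is on the day side at time t (terminator = 90 deg away)."""
    slat, slon = subsolar_point(t)
    cos_zenith = (math.sin(math.radians(lat)) * math.sin(math.radians(slat))
                  + math.cos(math.radians(lat)) * math.cos(math.radians(slat))
                  * math.cos(math.radians(lon - slon)))
    return cos_zenith > 0.0

# Near an equinox at 12:00 UTC the subsolar point is close to (0, 0):
# the Greenwich meridian is lit, the antimeridian is in darkness.
t = datetime(2016, 3, 20, 12, 0, tzinfo=timezone.utc)
```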