Found this on Facebook:
His dissertation is available here
Being a non-expert, I only skimmed it for the pictures, but it looks like it uses the extra resolution available in today's camera chips to capture the directions of incoming light rays and extend the in-focus depth of field.
Microlenses are placed in front of groups of pixels, say 8x8 (64 pixels each), and the mini-picture formed on that 8x8 array can be used to extend the depth of field by roughly a factor of 8. This can be done without stopping down to a high f-ratio, which is the usual way to increase the DoF. Plus, it allows choosing, after the shot, to put near, far, or everything in between into focus.
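A rough sketch of how that after-the-fact refocusing works is "shift-and-add": each microlens position gives a sub-aperture view of the scene, and shifting each view in proportion to its offset from the aperture center before averaging selects a virtual focal plane. The function name, the toy random light field, and the 8x8/32x32 sizes below are all illustrative assumptions, not anything from the dissertation itself:

```python
import numpy as np

def refocus(light_field, alpha):
    """Average the sub-aperture views of a light field, each shifted in
    proportion to its (u, v) offset from the aperture center.
    alpha picks the virtual focal plane (alpha = 0 leaves views unshifted)."""
    n_u, n_v, h, w = light_field.shape
    out = np.zeros((h, w))
    for u in range(n_u):
        for v in range(n_v):
            # Shift grows with distance from the center of the 8x8 grid
            du = int(round(alpha * (u - n_u // 2)))
            dv = int(round(alpha * (v - n_v // 2)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (n_u * n_v)

rng = np.random.default_rng(0)
lf = rng.random((8, 8, 32, 32))   # toy 8x8 grid of 32x32-pixel views
image = refocus(lf, alpha=1.0)
print(image.shape)                # each alpha yields a differently focused 32x32 image
```

Sweeping alpha re-renders the same captured data at different focal depths, which is why no mechanical focusing is needed at exposure time.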
For space, this looks most interesting for microscopic imaging. It would allow a single shot to capture a whole 3D microscopic image, in focus over a very wide distance range. You could create the large-DoF picture on the imager and upload only the processed image, or upload the full image and choose where to focus back on Earth. It would reduce your effective number of pixels, but it would remove the need for a focusing apparatus, aperture control, or focus steps as you bring the imager closer to the object under study.