Folks
This is certainly unrelated to UMSF (and therefore, I assume, not best placed in the image processing sub-forum), but on the other hand it's something that the massive collective experience and prowess of this place should be able to provide pertinent thoughts on. Indeed, I'm kicking myself for not thinking of asking earlier, having spent so long marvelling at the images generated here.
Take the following image:
That's a good problem and I'm interested to see what people come up with.
What is the best technique to pull out signal from a varying background?
My recipe:
Duplicate image
Gaussian blur 10
Gaussian blur 10 (again)
Subtract from the original image, then add 128
Contrast stretch
Duplicate image
Apply a High Pass filter to the new image, then change its blend mode to Hard Light.
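For anyone following the recipe above outside Photoshop, here is a rough Python/NumPy sketch of the same idea: estimate the slowly varying background with repeated Gaussian blurs, subtract it, re-centre on mid-grey, and contrast-stretch. This is my own approximation of the steps, not the exact Photoshop operations; the function name and sigma value are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def flatten_background(img, sigma=10.0):
    """Estimate the slowly varying background with two Gaussian blurs,
    subtract it from the original, re-centre on mid-grey, then apply a
    simple linear contrast stretch."""
    img = img.astype(np.float64)
    background = gaussian_filter(gaussian_filter(img, sigma), sigma)
    diff = img - background + 128.0          # "subtract from original + 128"
    diff = np.clip(diff, 0, 255)
    lo, hi = diff.min(), diff.max()          # linear contrast stretch
    if hi > lo:
        diff = (diff - lo) * 255.0 / (hi - lo)
    return diff.astype(np.uint8)
```

The double blur just makes the background estimate smoother than a single pass at the same sigma; adding 128 keeps the (signed) difference visible as grey rather than clipping the negative half to black.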
Result:
Photoshop only...
Change mode to Greyscale
Contrast and Levels stretch
Invert image
Levels stretch
Select and delete noise
Gaussian blur
Levels stretch
Copy and paste to original image
Mode to Hard Light
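Since both recipes end with a Hard Light blend, it may help to see what that mode actually computes. Below is a NumPy sketch of the commonly documented Hard Light formula (my reconstruction of the standard definition, not Photoshop's internals): where the blend layer is darker than mid-grey the result is a multiply, where lighter it is a screen.

```python
import numpy as np

def hard_light(base, blend):
    """Hard Light blend: multiply where the blend layer is darker than
    mid-grey (128), screen where it is lighter."""
    base = base.astype(np.float64)
    blend = blend.astype(np.float64)
    dark = 2.0 * base * blend / 255.0
    light = 255.0 - 2.0 * (255.0 - base) * (255.0 - blend) / 255.0
    out = np.where(blend < 128, dark, light)
    return np.clip(out, 0, 255).astype(np.uint8)
```

In these recipes the high-pass (or processed) copy acts as the blend layer, so flat areas near mid-grey leave the base largely unchanged while edges get pushed towards black or white.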
Result:
Many thanks folks, and any further thoughts from people are very welcome. As I say, in terms of the actual technique we employ this would be one of the poorer examples, and indeed there are physical/chemical tricks we can play on it before the image processing comes in. On the other hand, as you show, it's a tough task and one that will play a key role in determining if this has genuine forensic application.
As an aside, where this reagent does have clear potential is with fingerprint imaging, and we've obtained prints from an unprecedented range of media this way.
As another aside - and one pertinent to UMSF and indeed any aspect of scientific endeavour - this whole branch of investigation came from entirely serendipitous observations. We were doing work on the reagent (S2N2) in a completely unrelated area, when we stumbled upon the fingerprint results and then the inkjet work. The results we published generated interest and enquiry from around the world - from Orange County Police Dept through to Nature magazine; from a science writing course at MIT through to the scriptwriters of Numb3rs. But it came about by chance, through a bit of lateral thinking applied to serendipitous observations obtained during fundamental, speculative research. There's a moral there...
You might want to scan it at a very high resolution, like 600-1200 dpi, so we can apply a subtler effect across a wider range of data points.
Ah, I did wonder about the resolution issue.
Here is a re-scan at 900 dpi - all I've done to it is rotate, flip and crop in Photoshop.
Here is my shot at enhancing the image.
I used the original lower-res version; I'll try the higher-res this week...
I ran an adaptive rank order filter to soften any noise-like pixels (26.78% of pixels) while retaining all edges (i.e. adaptive, using a 5x5 kernel to see the edges). I then used 4 histogram equalization filters (32x32, 64x64, 96x96, and 128x128) and ran them in a stochastic fashion over the image to separate the varying background from the faint foreground 'characters'. You can see the artifacts from this processing on the left edge as the different filter sizes moved in a random pattern over the image. The results were obtained after about 2000 iterations, which were summed in an output array and then averaged.
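A rough Python reconstruction of that stochastic multi-scale equalization might look like the following. To be clear, this is my own sketch under assumptions, not PDP8E's actual C++ program: I assume an 8-bit greyscale input, omit the adaptive rank-order pre-filter, and the iteration count and window placement scheme are illustrative.

```python
import numpy as np

def local_equalize(tile):
    """Histogram-equalize one 8-bit tile to the full 0-255 range."""
    hist = np.bincount(tile.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf[0]) / max(cdf[-1] - cdf[0], 1) * 255.0
    return cdf[tile]

def stochastic_equalize(img, sizes=(32, 64, 96, 128), iters=2000, seed=0):
    """Random windows of several sizes are equalized locally; results are
    summed in an accumulator and averaged, with the original image as
    the base layer of the stack."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    acc = img.astype(np.float64).copy()      # original at bottom of stack
    cnt = np.ones_like(acc)
    for _ in range(iters):
        s = int(rng.choice(sizes))
        y = int(rng.integers(0, max(h - s, 1)))
        x = int(rng.integers(0, max(w - s, 1)))
        tile = img[y:y+s, x:x+s]
        acc[y:y+s, x:x+s] += local_equalize(tile)
        cnt[y:y+s, x:x+s] += 1
    return (acc / cnt).astype(np.uint8)
```

Averaging many randomly placed local equalizations is what smooths out the hard tile boundaries you would get from a single fixed grid, though some edge artifacts remain where windows overlap the image border less often.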
I think the letter inside the envelope moved around during handling and made multiple random soft and hard 'impressions' on the envelope, that resulted in the smudge-like letters -etc. I would like to see an image of the original letter that was inside this envelope (to establish ground truth).
Many thanks for that - fascinating! I suspect the smearing problem is less to do with movement and more to do with simple diffusion. In fact it's very odd that the ink components don't simply diffuse out completely - as I said, we can see this effect through more than one sheet of paper, and it's conceptually very strange that any structure is retained at all.
The original is just normal text, but the process does add stuff onto the background, so I can scan normal text that has been treated this way if that would help.
For reasons I won't go into now, it would be very useful to have one of these processes run again, exactly the same, on a re-scan of the sample after a few days; I'll get back to people on that.
Rest assured that I'll be happy to formally acknowledge people's efforts on this.
Here is the enhancement on the higher res image....same technique...
Thanks again, that's very interesting.
At the risk of prolonging a distinctly non-UMSF related topic (this is the last of it, honest):
(i) as requested, the following is a low resolution scan of original text that has been chemically treated this way - note the golden, conducting layer on the text; it shows the general relationship between what we'd hope for and the background generated by the technique.
Here are the two images processed with a "decorrelation stretch" using the ImageJ plugin DStretch. Essentially, this process performs a Principal Components Analysis on the three color bands (I chose YCbCr color space), then contrast-stretches each Principal Component image, then inverts the PCA and contrast-stretches the resulting image in RGB space.
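For those without ImageJ, the decorrelation stretch can be sketched quite compactly. The version below is my own minimal reconstruction, working directly in RGB rather than YCbCr for brevity; the target standard deviation of 50 per component is an arbitrary choice of mine, not a DStretch parameter.

```python
import numpy as np

def decorrelation_stretch(img):
    """Minimal decorrelation stretch: PCA on the three colour bands,
    rescale each principal component to equal variance, rotate back,
    and clip to the displayable 0-255 range."""
    x = img.reshape(-1, 3).astype(np.float64)
    mean = x.mean(axis=0)
    cov = np.cov((x - mean).T)
    evals, evecs = np.linalg.eigh(cov)
    # equalize the variance of each principal component (target sigma = 50)
    scale = 50.0 / np.sqrt(np.maximum(evals, 1e-12))
    t = evecs @ np.diag(scale) @ evecs.T
    y = (x - mean) @ t.T + mean
    y = np.clip(y, 0, 255)
    return y.reshape(img.shape).astype(np.uint8)
```

The effect is to exaggerate exactly the small between-band colour differences that carry the hidden signal, which is why it pairs well with a noise-reduction pass first.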
First image:
I'm not an 'admin' but I don't think you need to worry about inviting members here to help with your enquiry. In fact I think it is a very good place to come to if you want high quality responses about image processing. The thread has no malignant potential so won't ring any alarm bells.
If Doug or any of the other admins disagree I'm sure they will let me know!
Another crack at the original:
Both images 8c and 9 needed a slight change in the processing recipe. Since they were bigger (more pixels), I needed to increase the Gaussian and High Pass radii.
(I used two Gaussian blurs of 20 for the pseudo-background, High Pass at 60 (the Unsharp Mask stayed the same), and the last two Gaussian blurs were at 0.8.)
(The first Levels layer was set at 83, 1.00, 175; the second Levels layer at 0, 1.20, 200.)
I also added a second curves layer:
Curve
(0,0)
(105, 63)
(170, 170)
(255, 255)
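For reference, the Levels and Curves layers quoted above map pixel values as follows. This is a NumPy sketch of the standard formulas (black/white point plus mid-tone gamma for Levels, piecewise-linear interpolation through control points for Curves), not Photoshop's exact implementation, which uses a smooth spline for Curves.

```python
import numpy as np

def levels(img, black, gamma, white):
    """Photoshop-style Levels: map [black, white] onto [0, 255] with a
    mid-tone gamma adjustment."""
    x = np.clip((img.astype(np.float64) - black) / (white - black), 0, 1)
    return (255.0 * x ** (1.0 / gamma)).astype(np.uint8)

def curves(img, points):
    """Curves layer approximated by piecewise-linear interpolation
    through the given (input, output) control points."""
    xs, ys = zip(*points)
    return np.interp(img, xs, ys).astype(np.uint8)

# the second curves layer from the recipe above
curve_points = [(0, 0), (105, 63), (170, 170), (255, 255)]
```

With these control points the curve darkens the lower mid-tones (105 maps to 63) while leaving the highlights untouched, which is what pushes the residual background towards white relative to the lettering.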
I expanded the original image (using old parameters) and the mark8c and mark9 images and cropped them down to the common overlap region.
Here are the results (all cut down by 50%):
mark2 original:
Here is a difference image between Mark8c and Mark9. (Looks like a chunk shifted down and to the right between 8c and 9.)
Animated GIF cycling through original (darker)-->Mark8c-->Mark9:
And now, one last blink comparing the (B&W converted and contrast stretched) original lettering, with my processing from mark9 hi-res:
Excellent stuff Mike, thank you !
Here is the second high res image (converted to grayscale)
Same technique - all images done the same way:
* adaptive rank-order filter to median out serious outlier pixels within a kernel
* then a few different sizes of histogram-equalization filters run over the image for smart contrast
* summed into the output image and finally averaged per pixel
* run time is about 2 seconds per big image on an old laptop
* using just a home-brew C++ program for the processing
* the 'grey edges' are the original image at the bottom of the average stack
I wonder how useful it would be to apply one of these greyscale techniques in conjunction with a decorrelation stretch; there is definitely a significant amount of information hidden in the differences between the color bands, but noise-removal techniques would help bring this out. PDP8E, could you try applying your technique to all three bands of the input image separately and see what happens, either in RGB or YCbCr space?
jekbradbury,
I am on it!
back in a day or two....
OK... I took the last big color image and split it into its color bands (RGB)
* I applied the same technique above (noise reduction, and then histo-equal via stochastic stamping) to each band independently
* then put the RGBs back together in one image
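The split/process/recombine step generalizes to any single-channel enhancement. A tiny helper along these lines would do it (`per_band` is my own hypothetical name, not anything from the posts above):

```python
import numpy as np

def per_band(img, func):
    """Apply a single-channel enhancement function to each colour band
    independently, then reassemble the multi-band image."""
    return np.stack([func(img[..., c]) for c in range(img.shape[-1])],
                    axis=-1)
```

Any of the greyscale techniques in this thread (background flattening, stochastic equalization) could be passed in as `func`, after which the recombined RGB image could be fed to a decorrelation stretch.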
again, fascinating stuff - thank you!
Powered by Invision Power Board (http://www.invisionboard.com)
© Invision Power Services (http://www.invisionpower.com)