https://www.youtube.com/watch?v=o7POpid-e8U
there are several that illustrate the same thing...by shifting your perspective a few pixels and merging
them, you somehow achieve that super resolution. That's where I get lost.
The movements of the camera are so small that the perspective shift is negligible. What happens is not so much that the image moves by a few whole pixels (although that in itself could help) but that it moves by x pixels plus a fraction. That fraction means each new frame samples the scene in between the pixel positions of a single image. Thus, your stack of 20-40 handheld images will contain a lot more information than any one frame.
By combining images whose shifts are (x + a fraction) of a pixel, you get around a fundamental limit described by the Nyquist-Shannon sampling theorem: a single image cannot resolve detail finer than about 2 pixels of spacing. Your stack of images has in effect put samples in between the pixels of a single image, so the effective pixel spacing of the stack is much smaller than in a single frame. Thus you can resolve smaller details.
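To make that concrete, here is a minimal Python sketch of the shift-and-add idea (not the full Drizzle algorithm): it assumes the sub-pixel offset of each frame is already known, whereas real software estimates the offsets by registering the frames against each other. Each frame's samples are dropped onto a grid that is `scale` times finer and then averaged.

```python
import numpy as np

def shift_and_add(frames, offsets, scale=2):
    """Combine low-res frames with known sub-pixel offsets onto a finer grid.

    frames  : list of 2-D arrays, all the same shape
    offsets : list of (dy, dx) sub-pixel shifts, in low-res pixel units
    scale   : upsampling factor of the output grid
    """
    h, w = frames[0].shape
    hi_sum = np.zeros((h * scale, w * scale))
    hi_cnt = np.zeros_like(hi_sum)

    rows, cols = np.mgrid[0:h, 0:w]            # low-res sample coordinates
    for frame, (dy, dx) in zip(frames, offsets):
        # Map each low-res sample to its nearest cell on the fine grid.
        r = np.clip(np.round((rows + dy) * scale).astype(int), 0, h * scale - 1)
        c = np.clip(np.round((cols + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(hi_sum, (r, c), frame)
        np.add.at(hi_cnt, (r, c), 1)

    hi_cnt[hi_cnt == 0] = 1                    # avoid division by zero in empty cells
    return hi_sum / hi_cnt
```

If the random handheld shifts cover the fractional positions reasonably well, the fine grid fills in and resolves detail no single frame could; cells that never receive a sample would need interpolation in a real implementation.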
A final gain should be in the image noise: the signal adds up coherently while the random noise partly cancels, so averaging N frames improves the signal-to-noise ratio by roughly the square root of N, i.e. the final image has less visible noise.
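A quick numerical check of that claim, using made-up signal and noise levels: averaging 25 frames with independent noise should cut the noise standard deviation to about 1/5 of a single frame.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0                 # "true" pixel value
sigma = 10.0                   # per-frame noise standard deviation
n_frames = 25

frames = signal + rng.normal(0.0, sigma, size=(n_frames, 100_000))
stacked = frames.mean(axis=0)  # average the stack pixel by pixel

print(f"single frame noise: {frames[0].std():.2f}")   # ~10
print(f"stacked noise:      {stacked.std():.2f}")     # ~10 / sqrt(25) = 2
```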
I discovered this morning that it is called the Drizzle Method; it's kinda explained here: http://www.stark-labs.com/craig/arti...rizzle_API.pdf
There is a related procedure relevant to landscape photography: take several shots with identical exposure and focus, each a few seconds apart. In Photoshop these can be stacked and then merged with a median blend. The effect is to eliminate objects that appear in only one or a few of the shots (typically people).
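For anyone doing this outside Photoshop, here is a minimal sketch of the same median-stack idea in Python. The file names are placeholders, and it assumes the shots are already aligned, identically exposed, and the same size.

```python
import numpy as np
from PIL import Image

# Placeholder file names for a set of aligned, identically exposed shots.
paths = ["shot_01.jpg", "shot_02.jpg", "shot_03.jpg", "shot_04.jpg", "shot_05.jpg"]
stack = np.stack([np.asarray(Image.open(p), dtype=np.float32) for p in paths])

# Per-pixel median across the stack: the static scene survives, anything that
# appears in only a minority of frames (passers-by, cars, birds) is rejected.
median_image = np.median(stack, axis=0)
Image.fromarray(median_image.astype(np.uint8)).save("merged_median.jpg")
```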
John
There was a thread on this subject some while back -
Stacking for noise