
Thread: Resolution: Micrography versus Photography

  1. #1

    Join Date
    Feb 2012
    Location
    Texas
    Posts
    6,956
    Real Name
    Ted

    Resolution: Micrography versus Photography

I've been reading up on fundamental resolution as it applies to object/subject detail as perceived by a digital sensor. In just about every article on the subject in photographic literature, the sensor is said to require a sampling rate of at least 2X the spatial frequency (F) of the detail as it appears at the image plane (i.e. the sensor).

Example: my macro lens can resolve, say, 20um wide stripes at 1:1 and f/5.6 - a spatial frequency of 50 l/mm (lines, not line pairs) - again at the image plane. A full cycle (one dark line plus one light line) is therefore 2X 1/50 = 40um, and Nyquist asks for two pixels per cycle - thus 1 pixel should be equal to, or less than, 20um.
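That arithmetic can be sketched in Python (taking, as stated, "l" to mean single lines, so one full dark/light cycle is two lines; the function name is mine, and the 2.3 figure from the microscopy pages is shown only for comparison):

```python
# Largest pixel pitch that still samples a striped target adequately.
# Assumes "lines" (not line pairs): one full cycle = 2 lines.

def max_pixel_pitch_um(lines_per_mm, samples_per_cycle=2.0):
    """Biggest pixel pitch (in um) that gives samples_per_cycle samples
    across one full cycle (one dark + one light stripe)."""
    cycle_um = 2 * 1000.0 / lines_per_mm  # one line pair, in um
    return cycle_um / samples_per_cycle

print(max_pixel_pitch_um(50))        # Nyquist (2X): 20.0 um
print(max_pixel_pitch_um(50, 2.3))   # microscopy's 2.3X: ~17.4 um
```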

I do realize there are many ways to arrive at a similar conclusion, not the least of which would include data that has more to do with lens aberrations, diffraction, visual acuity and display resolution, but here I am only talking about digital sampling after the lens has done its blurry stuff.

My dilemma is that the same research often brings up digital microscopy web pages. Almost without exception, they quote the Nyquist limit/criterion/whatever as being 2.3X the spatial frequency (F) of the detail. I would like to know how the magic "2.3" (oft-quoted, but never explained) was derived.
    Last edited by xpatUSA; 23rd February 2012 at 06:45 PM. Reason: getting old . .

  2. #2

    Join Date
    Apr 2011
    Location
    Western MA, USA
    Posts
    455
    Real Name
    Tom

    Re: Resolution: Micrography versus Photography

My guess is that 2.3 is just "Kentucky windage" to accommodate real-world data. I have applied something pretty much like that to my sampling rates for Fourier analyses of data. The reason you cannot go up to the Nyquist limit in practice is basically because it doesn't really apply to what you are doing in the real world. Take just one really big assumption -- that all frequencies in the data set are overtones of the sampling window. This is almost assuredly false. Thus, you will have leakage in your frequency data.

This is easiest to discuss in the time domain with one-dimensional data. Assume, for example, that you sample data every 0.1 sec for 1 second. The first harmonic will, of course, be 1 Hz, and the Nyquist limit will nominally be 5 Hz. However, FTs assume that the data is periodic in your sampling window. What if there is a component that is, say, 4.5 Hz? It will violate the mathematical constraints of doing an FT on the data. What actually happens with digital data is that the 4.5 Hz component will appear as a lower peak (this is called picket fencing, and can be mitigated by zero-filling) that is spread evenly between the two defined values of 4 Hz and 5 Hz. It will appear that your data has equal-sized components at both frequencies. In addition, the data will "spill out" of your frequency range entirely and cause aliasing -- the 5 Hz bin will actually appear to have a bit more height than the 4 Hz bin because of this, and all the bins will be non-zero, even if the only component present is 4.5 Hz.
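A quick numpy illustration of that leakage, using the same numbers as the example above (10 samples of a 4.5 Hz tone; the 1/N normalization of the bin heights is an arbitrary choice on my part):

```python
import numpy as np

fs, T = 10.0, 1.0                       # sample every 0.1 s for 1 s -> 10 points
t = np.arange(0, T, 1 / fs)
x = np.sin(2 * np.pi * 4.5 * t)         # 4.5 Hz: not an overtone of the 1 s window

mags = np.abs(np.fft.rfft(x)) / len(x)  # bins at 0, 1, 2, 3, 4, 5 Hz
for f_hz, m in enumerate(mags):
    print(f"{f_hz} Hz bin: {m:.3f}")
```

Every bin comes out non-zero even though only one frequency is present, and the 5 Hz (Nyquist) bin ends up the tallest -- consistent with the point about aliasing adding extra height there.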

    Now, to be able to reasonably interpret real-world data that is not infinitely repetitive; does not consist only of overtones of the sampling window base frequency; and is not limited to frequencies that are below the Nyquist limit, we need to back off from the Nyquist limit a bit. I have often used a value of 1/2.5 times the sampling frequency for practical data, but that will depend on various things (how sharp is the rolloff of your anti-aliasing filter, for example).
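That rule of thumb is just arithmetic, but for completeness (the function name and the 2.5 default are mine, not any standard):

```python
def required_sampling_rate(f_max_hz, factor=2.5):
    """Sampling rate for content up to f_max_hz, with a real-world
    margin (factor > 2) instead of the bare Nyquist factor of 2."""
    return factor * f_max_hz

print(required_sampling_rate(4.0))       # 10.0 samples/s with the 2.5X margin
print(required_sampling_rate(4.0, 2.0))  # 8.0 samples/s at the bare Nyquist rate
```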

    So, after all is said and done, I think they're just trying to accommodate the messiness of reality. If you research microscopy, you'll probably find that some influential lab manual used that for the limit, and it became the norm. FWIW

  3. #3

    Join Date
    Feb 2012
    Location
    Texas
    Posts
    6,956
    Real Name
    Ted

    Re: Resolution: Micrography versus Photography

Quote Originally Posted by tclune
    My guess is that 2.3 is just "Kentucky windage" to accommodate real-world data.

    Now, to be able to reasonably interpret real-world data that is not infinitely repetitive; does not consist only of overtones of the sampling window base frequency; and is not limited to frequencies that are below the Nyquist limit, we need to back off from the Nyquist limit a bit. I have often used a value of 1/2.5 times the sampling frequency for practical data, but that will depend on various things (how sharp is the rolloff of your anti-aliasing filter, for example).

    So, after all is said and done, I think they're just trying to accommodate the messiness of reality. If you research microscopy, you'll probably find that some influential lab manual used that for the limit, and it became the norm. FWIW
    Thanks Tom,

As good a guess as any I've read out there to date, and it would perhaps explain the regular occurrence of "2.3X", although there are occasional excursions to "2.5X" and even "3X". In my world of watch photography, repetitive detail does come up, e.g. crown grooves (yes, I realize they're round), strap weaves, dial patterns, and such. And, in my world, there's a bit of emphasis placed on sharpness too. Oddly enough, we also like any scratches to be perfectly imaged (wabi-sabi, almost the inverse of bokeh).

Fortunately, there's no AA filter in front of my SD9's Foveon sensor - one less thing to scratch my head over ;-)

    Ted
