I just did that but NO luck. The process did NOT work quite as advertised in Rawpedia but I have posted a new thread in the RT forum to see if some help from there is possible.
I was able to find on GitHub a more up-to-date version of what I think is the applicable XML file, which did contain information about the Canon mirrorless cameras and lenses, but I could NOT download it.
Apparently, RT also supports the Adobe LCP files but that will take a little more time to investigate.
Let me start by saying that this question is drifting away from discussion about camera lenses. Please feel free to ignore.
If I understand what you are saying this could be similarly useful in a slightly different scenario. Similar in the sense that like your race photo there is no option for getting a better original. As I began to learn how to post process digital images (what I call developing in the case of photos with raw files) I also became interested in digitizing what I might call those old family photos from the past (i.e., long before digital cameras).
This involves scanning originals and then trying to improve the appearance. This is a case where I'd say you cannot recover detail that is missing from the original photo (in either print or slide form). However, I'd argue that, as mentioned, in some cases pretty amazing improvements are possible. What I've been doing is adjusting the resolution on the scanner based on the size of the original and the size of a possible print. Original being FF frame size in the case of slides. "Possible" meaning a bit of a guess as to what could produce an acceptable result. However, what you call up-sampling (which I've been calling up-scaling) is something that I've previously thought would be worthless.
Might you be saying that it could do some good in this case as well?
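To make my guesswork concrete, here is roughly the arithmetic I have been doing, written out as a small Python sketch; the sizes and the print resolution are only illustrative assumptions, not recommendations:

```python
# Rough scan-resolution arithmetic (illustrative numbers only).
# Idea: scan at enough dots per inch that the resulting pixels can feed
# the intended print at the printer's preferred resolution.

original_width_in = 1.4    # a 35 mm slide is roughly 1.4 inches wide
print_width_in = 10.0      # width of the "possible" print (a guess)
print_dpi = 300            # a common assumption for a quality print

pixels_needed = print_width_in * print_dpi       # 3000 pixels across the print
scan_dpi = pixels_needed / original_width_in     # about 2143 dpi on the scanner

print(f"Scan at about {scan_dpi:.0f} dpi for a {print_width_in}\" print at {print_dpi} dpi")
```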
The older upsampling methods can be described mathematically, but if I understand correctly, the new machine-learning-based ones can't be: they are just a large number of patterns extracted from the training sets. So, I think it's particularly hard to know how much the tool will help. I had one less-than-fully-sharp photo for which Adobe's Super Resolution was essentially no help, and I reverted to the original. I had another instance (the race photo I mentioned) where it made a substantial difference.
I don't think a lack of detail in a scan is any different in this respect from a lack of detail in a straight digital capture.
So I think the answer is: let the software do its imputation, and decide whether the results are better.
I've done that a lot with old family pics and such. My Canon all-in-one printer has a film holder for its scanner, and it works quite well.
The correct term is re-sampling whether up or down. However, "up-sampling" is commonly used and understood. Up-scaling is less so, and "up-sizing" is de trop ... ;-)
We are venturing far afield, but: sampling and scaling are entirely different things in my professional world. Rescaling doesn't change the number of datapoints; it simply represents the data on a mathematically different scale. Converting 5 measurements of sound pressure to dBA entails a logarithmic rescaling that leaves you with 5 data points. Sampling refers to the selection (or in this case, generation) of datapoints and is entirely different from scaling. I don't know who is the arbiter of "re-sampling" being correct and up- and down-sampling being incorrect, but "downsample" is routinely used in statistical disciplines. In fact, just a week or two ago, I told a student to handle a problem she's working on by randomly downsampling. I didn't tell her anything about scaling.
In the case of photography, I think the key distinction is between altering how many pixels are present, which is a sampling issue, and changing how far apart they are, which is a matter of magnification and hence of a linear rescaling.
Unnecessarily disparaging, particularly given that you were the person who introduced the question of terminology in post #146.
The distinction between sampling and scaling is useful and arises often in everyday life. Want to convert the weight of 5 people from "Imperial" to metric? That's just rescaling: a different set of numbers, but the same five observations. Want to estimate how fast your internet is in the evening, but don't want to measure it every day? Sample 5 or 10 days and average them, and you can probably get a decent estimate for all evenings. No scaling involved.
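If it helps, the difference is easy to show in a few lines of Python (the numbers are made up):

```python
import random

# Rescaling: the same five observations, expressed on a different scale.
weights_lb = [150, 172, 198, 135, 210]
weights_kg = [w * 0.45359237 for w in weights_lb]   # still exactly 5 data points

# Sampling: measuring only some evenings and averaging, instead of all of them.
evening_speeds_mbps = [88, 91, 74, 95, 90, 60, 85, 93, 79, 87, 92, 81]  # made-up data
sampled = random.sample(evening_speeds_mbps, 5)     # pick 5 of the 12 evenings
estimate = sum(sampled) / len(sampled)              # no scaling involved

print(weights_kg)
print(f"Estimated evening speed: {estimate:.1f} Mbps")
```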
One difference in terminology is that in statistical disciplines, unlike photography, one doesn't often encounter the term "upsampling". Normally, that process is called "imputation". Nonetheless, it's the same principle: adding more observations, just synthetic ones.
Yes, at over 80, I do get a bit grumpy sometimes.
After a little reading, I don't like the term at all, if applied to an image seeing as "sample" means a smaller part of a set. Therefore, to me, making a larger set from a smaller set by interpolation or AI needs a term different than "upsampling".
In other words, I agree: "imputation" is a better fit than "upsampling".
It looks like my mistaken usage caused some of this discussion, but I’m afraid I’m still NOT certain that I understand what Dan meant when saying “These tools are getting better; for some images, the machine-learning based enhancement in Adobe software works better than the mathematical algorithms the software offers for upsampling.”, where I took “upsampling” to pertain to scaling. From that sentence I deduce that I might NOT have this problem if I were knowledgeable about Adobe software products such as Photoshop.
However, I cannot find the term "Upsampling" (nor "sampling" or "downsampling", for that matter) referenced in RawPedia (i.e., the documentation for RawTherapee), nor have I encountered it when using RawTherapee. What RawTherapee does reference is something it calls “Resizing”, which is described here. You might notice that this tool offers an option called “Allow Upscaling” (i.e., a safeguard against mistakenly entering a larger size than would be expected in most cases).
For whatever it might be worth, I also use GIMP to process image files. Even though it cannot handle raw files, it does support multi-layer editing which, I believe, is also offered by Photoshop and comes in quite handy for lots of post-development processing (i.e., once a quality picture has been created) and various kinds of transformations. It also has a tool for “Scaling” images, described here, that allows the number of pixels to be either increased or decreased (i.e., which I’m tempted to also associate with up or down).
My unfamiliarity with the term “sampling” caused me to deduce that it might be the same thing as "scaling" per the referenced explanations. Possibly incorrectly? If so, it sounds like I should at least have a grasp of what “sampling” means and how it is different. I don’t expect any of you to provide a tutorial, but if you know of some reference material that offers an explanation I’d be grateful for that.
Unfortunately, there is no standard terminology. When software is said to "resize", it can either entail sampling or not. Here is the relevant text from a page about Photoshop:
"To change the image size or resolution and allow the total number of pixels to adjust proportionately, make sure that Resample is selected, and if necessary, choose an interpolation method from the Resample menu. To change the image size or resolution without changing the total number of pixels in the image, deselect Resample."

For printing, one should resample, as the level of detail in the resulting image file should match the requirements of the printer.
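As a rough illustration of the difference, not Photoshop itself but the same two operations done with the Pillow library (file names and sizes are placeholders):

```python
from PIL import Image  # Pillow; the file names are placeholders

img = Image.open("photo.tif")
w, h = img.size

# "Resample" selected: interpolation generates new pixels (here 1.5x as many
# in each direction) so a larger print can still be fed about 300 ppi.
resampled = img.resize((int(w * 1.5), int(h * 1.5)), Image.LANCZOS)
resampled.save("resampled_for_print.tif", dpi=(300, 300))

# "Resample" deselected: the pixels are left alone; only the stated ppi
# changes, so the print comes out larger but no detail is added or removed.
img.save("same_pixels_new_ppi.tif", dpi=(200, 200))
```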
Sorry, but I have no idea about RawTherapee or the Gimp.
Have a look here:
https://en.wikipedia.org/wiki/Image_scaling
It might be better read more than once.
I will be glad to answer any questions about that article. It does use the term "scaling" which should be understood in context, albeit not technically correct.
HTH.
Thanks for the reference. After using it for my own reference, it looks like I should have considered searching Wikipedia myself. However, I’d have searched for “upsampling” and come up with this result, which quickly dives in a little deeper than I was intending to go. Thankfully you knew to go after “image scaling”, which avoided the need for me to go and find my college math books. It also used an uncommon term, “Lanczos”, that I recognize from my experience with RawTherapee when doing what it calls “Resizing”.
Anyway, I’ve concluded that it looks, to me, like I was at least on the right track. This looks like another one of those cases, like with “mosaic”, where some prefixes need to be incorporated when applying a commonly used word like “sample” and/or “scale” to convey a more specific concept not normally associated with the word. I don’t feel anywhere near qualified to weigh in on whether “sample” or “scale” is a better fit for the subject at hand, but it looks like there might be reasonably good agreement that “up” means “more pixels” and “down” means “fewer pixels”.
As for my original question about applying the technique mentioned for improving an imperfect race photograph to my old, non-digital paper photos that have been scanned and digitized: it is something I intend to experiment with to see if it helps. Thanks for that.
That is correct.
Good luck with it ...
Found this while browsing DPR's articles:
https://www.dpreview.com/news/050146...essive-results
A whole new meaning for changing-the-number-of-pixels-in-an-image ...
Both Topaz Labs (Gigapixel AI) and Adobe (ACR / LR) have had Super Resolution out there for a while. These applications seem to have similar uses.
I have no idea how the Google product compares to the others. I look forward to seeing this technology being unleashed on the real world and not stuck in an AI lab. Unfortunately, with Google, one never knows...
I read in the link that Google also does Super Resolution (SR) but then applies Cascaded Diffusion Modeling to further improve the SR output, thereby claiming superior results to other methods according to their subjective tests.
For me, AI stuff is double-Dutch and I have no use for it - myself preferring to stick with original captures and to never re-sample upward. In other words, if more "detail" is required: use a longer lens or step closer.
Stepping closer or using a longer lens isn't always an option, and for those of us who print, and therefore need a lot more pixels than you need on screen, some form of upsampling is unavoidable in many instances.
What I find intriguing is the difference between traditional and AI-based upsampling. The traditional methods, which use approaches like bicubic and nearest-neighbor sampling, can be described by mathematical algorithms. If you dig a bit, you can find the math online. The software is simply doing that math. If I understand correctly, the AI-based upsampling is fundamentally different; the software includes a summary of the patterns the computer (theirs, not yours) found in examining a very large number of test images. In that respect, it's similar to AI-based selection methods. And I think for now, one can expect the same result: sometimes they will work better, sometimes they won't work as well, and the only way to be certain is to try.
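To make "the software is simply doing that math" concrete, here is nearest-neighbor upsampling written out by hand in a few lines of Python; bicubic is the same idea, just with a weighted 4x4 neighborhood instead of a straight copy:

```python
import numpy as np

def upsample_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbor upsampling: every new pixel is a copy of the closest
    original pixel. This fixed rule is the entire 'algorithm'."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A tiny 2x2 "image" just to show the math; real software applies the same
# rule to every channel of a real photo.
tiny = np.array([[10, 200],
                 [50, 120]], dtype=np.uint8)
print(upsample_nearest(tiny, 2))
```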
I played long ago with something also called Super-Resolution.
Following an article in PetaPixel (presumably culled from elsewhere) one would:
1) Take about 7-14 shots hand-held.
2) Align the images e.g. align_image_stack.exe
3) Upsize (er, impute) them with Nearest Neighbor by an integer factor.
4) Merge them, e.g. enfuse.exe.
IIRC.
No additional Math involved.
Apparently gooder than upsizing by simple bicubic or NN ... (rough sketch of the idea below).
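For what it's worth, a bare-bones Python sketch of steps 3 and 4, assuming the frames have already been aligned (e.g. by align_image_stack); the file names and factor are placeholders, and a plain average stands in for enfuse's fancier blending:

```python
import glob
import numpy as np
from PIL import Image

FACTOR = 2  # integer upsize factor

# Step 3: nearest-neighbor upsize each (already aligned) frame.
frames = []
for path in sorted(glob.glob("aligned_*.tif")):
    img = Image.open(path)
    w, h = img.size
    big = img.resize((w * FACTOR, h * FACTOR), Image.NEAREST)
    frames.append(np.asarray(big, dtype=np.float32))

# Step 4: merge. enfuse does something smarter, but a straight average shows
# the idea: hand-held jitter puts slightly different scene detail on each
# upsized pixel, and averaging the frames recovers some of it.
merged = np.mean(frames, axis=0)
Image.fromarray(np.clip(merged, 0, 255).astype(np.uint8)).save("superres.tif")
```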
Maybe so ...