Re the newest release, see this
Thanks for posting, very interesting from my POV.
I just set the cat amongst the pigeons elsewhere:
https://www.dpreview.com/forums/post/62297753
I've tested this new functionality on an image I shot on Sunday.
The shot was taken with an expensive Nikkor 70-200mm f/2.8 lens on my D810, which was sitting on a heavy-duty tripod. I shot at f/11 at 1/200th sec using studio flash.
I imported both image files into Photoshop and stacked them on top of one another. At 100% magnification I see no difference. At 200% there is a hint of a difference, and at 400% I can start making out some very small changes in micro-contrast. As an example, a few stray hairs looked a tad sharper.
It took Camera Raw about 30 seconds to create the new linear raw file. My initial impression is "what's the big deal?" I guess I'm going to have to do a bit more testing.
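For anyone who would rather measure than eyeball stacked layers at 400%, here is a rough Python/NumPy sketch (purely my own illustration, not anything Adobe ships) of the idea: subtract the two renderings pixel-by-pixel and look at the mean and maximum absolute difference. The tiny synthetic arrays below stand in for the two exported image files.

```python
import numpy as np

def difference_stats(img_a: np.ndarray, img_b: np.ndarray):
    """Mean and max absolute per-pixel difference between two
    same-sized image arrays (e.g. two 8-bit exports of one shot)."""
    diff = np.abs(img_a.astype(np.float64) - img_b.astype(np.float64))
    return diff.mean(), diff.max()

# Synthetic stand-ins: identical 8-bit frames except one pixel
# nudged by 3 levels, mimicking a subtle micro-contrast change.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[0, 0] = 3

mean_d, max_d = difference_stats(a, b)
print(mean_d, max_d)  # mean 0.1875, max 3.0
```

In practice you would load the two exports with an image library instead of building arrays by hand; a max difference of only a few 8-bit levels matches the "hint of difference at 200%" impression above.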
Yes, and that also shows in their blog's animated example, second image here:
https://theblog.adobe.com/enhance-details/
My experience of similar tools, e.g. Piccure+, is that if you start with a really good capture, as you obviously did, the gain - if any - is quite small and can actually be negative, with the sudden appearance of unexpected artifacts.
In more testing, maybe less-than-perfect shots would give "better" results?
Thanks for the link, Ted. The difference I saw with my images was even less noticeable than what the Adobe example shows, but it is certainly in that range.
I'm going to have to sit down and have a hard look at it when I get back in late March. I just don't have the time to work on this right now. I suspect I will have to create a large format print with / without the new demosaicing algorithm and compare the outputs.
I quickly tried this and couldn't see any improvement. I may be wrong.
Will be following this thread! Thank you!
The main advantages seem to be for the Fuji X-trans sensor:
https://www.lightroomqueen.com/whats...eid=868904fa09
Dave
Actually, there are four examples listed:
- You’re a Fuji X-Trans photographer (especially if you’ve noticed “worm” artifacts when sharpening your photos).
- You’re shooting with a high quality Bayer sensor without a low pass filter and you’re using a very sharp lens.
- You’re creating large prints or expensive books.
- You’re affectionately known as a pixel peeper because you like to view your photos at 1:1 view or greater, and eke out every last bit of detail on your best photos.
The second and third apply to me, and I do the fourth when doing input and in-process sharpening. Adobe took some time to properly support the X-Trans sensor; that problem goes back to when the first generations hit, but it was resolved a few years ago.
Again, I can only see the differences when I pixel peep at magnifications of 200% and greater.
It's based on AI, but apparently just to make calculations more efficient and reduce computational compromises: https://www.engadget.com/2019/02/12/...hance-details/
I don't understand this part:
"...then [use] machine learning built into the latest Mac OS and Windows 10 operating systems to run this network," said Adobe.
From the Adobe site:
"Enhance Details requires Apple’s Core ML and Microsoft’s Windows ML, so it won’t work on older operating systems. Please be sure to update to macOS 10.13 or Win 10 (1809) or later."
It seems the newest versions of both Apple's and Microsoft's operating systems have additional machine-learning functionality built in that supports this.
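If you want to check whether a machine meets Adobe's stated minimums before installing, here is a rough Python sketch (my own illustration; the build number 17763 for Windows 10 1809 and the macOS 10.13 floor are taken from Adobe's note above, everything else is an assumption):

```python
import platform
import sys

def meets_enhance_details_os_requirement() -> bool:
    """Illustrative check against Adobe's stated OS floor for
    Enhance Details: macOS 10.13+ (Core ML) or Windows 10
    build 17763 / version 1809+ (Windows ML)."""
    system = platform.system()
    if system == "Darwin":
        parts = [int(p) for p in platform.mac_ver()[0].split(".")[:2]]
        while len(parts) < 2:
            parts.append(0)
        return tuple(parts) >= (10, 13)
    if system == "Windows":
        # sys.getwindowsversion() only exists on Windows builds of Python.
        return sys.getwindowsversion().build >= 17763
    return False  # other platforms aren't supported by the feature
```

This only checks the OS version, of course - it says nothing about whether Camera Raw itself is current.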
https://theblog.adobe.com/enhance-details/