For example, here's the Nikon Z 50/1.8S: https://imgsv.imaging.nikon.com/lineup/lens/z-mount/z_50mmf1...
Doesn't take much guessing where the aperture is in this lens.
I come down firmly on the side of “yes,” and started my explanation of why with the photographic work of Salvador Dali. The guy was an absolute darkroom master. He wasn’t just a person with a vivid imagination who could paint; he had an engineer’s understanding of optics, physics, and the chemistry of photography. This led into a sort of point about programming not being an end in itself, but rather one part of the toolbelt of a modern digital designer.
ANYWAY. This guy, not just busting out a camera and Photoshop, but actually taking apart and rebuilding lenses: more of this. Get dirty with your tools.
I've been meaning to put together a personal blog with Jekyll to learn the tool, maybe that would be an interesting subject for some posts.
Film cameras are a marvel. The first time you look through a really good viewfinder, you'll feel like you're in your own private cinema.
An example is the "contrast-adaptive sharpening" that AMD introduced a while back in their driver. I've been using contrast-adaptive sharpening for a long time in my film development. I use a developer (Agfa Rodinal, with many clones such as Compard R09) with a solvent effect that tends to increase edge sharpness. If you use it in high dilution without any agitation then highlights or sharp edges will tend to "locally deplete" the developer and reduce the rate of development. In combination with the edge-enhancing nature, this compensating effect tends to help reduce the haloing from the sharpening (and of course it also helps pull your highlights back a bit in general). Which is basically what contrast-adaptive sharpening does!
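The analogy translates to code pretty directly. Below is a toy, hypothetical sketch (not AMD's actual FidelityFX implementation): a plain unsharp mask whose strength is scaled down wherever local contrast is already high, which is the same halo-avoiding, "locally depleting" behavior described above.

```python
import numpy as np

def contrast_adaptive_sharpen(img, amount=1.0):
    """Unsharp mask whose strength drops in already-contrasty areas.

    img: 2D float array in [0, 1]. Toy illustration only.
    """
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    # 3x3 neighborhood as a stack of 9 shifted views
    neigh = np.stack([p[i:i+h, j:j+w] for i in range(3) for j in range(3)])
    local_contrast = neigh.max(axis=0) - neigh.min(axis=0)
    detail = img - neigh.mean(axis=0)  # plain unsharp-mask term
    # the "adaptive" part: strong edges get less extra sharpening,
    # much like local developer depletion slowing development at edges
    weight = amount * (1.0 - np.clip(local_contrast, 0.0, 1.0))
    return np.clip(img + weight * detail, 0.0, 1.0)
```

Flat areas have zero detail term and pass through unchanged, while hard edges are sharpened only gently, so halos stay small.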
And of course there's all those skeuomorphic terms we've adopted that we no longer consciously think about the origins of. Dodging and burning, unsharp masks, and airbrushing are all real things that people used to do by hand!
Drug sequences in Dredd. Very similar soft and slightly haunting quality.
How is a new lens design made nowadays, and is there much innovation still in lens design? I mean, it seems like, aside from zoom lenses, the designs were done (by hand! and slide rule) 50 to 100 years ago.
What do you do now? Have a ray-tracing solver with constraints on refractive index, desired weight, number of elements, distortions, etc. (and cost!) factor all these things in and see what multi-element lens stack it comes out with? Or is it just small variations on known, proven designs now?
from what I've heard, there is a delicate tradeoff between designing a lens that has pleasing bokeh (smooth MTF when out of focus) and designing a lens that is sharp when focused--I've heard that to do the former, you basically /need/ spherical aberration, but to do the latter, you need to remove as much spherical aberration as possible. I believe this is what people are referring to when they say modern lenses are too sterile and that the old lenses (which, because aspherical elements basically didn't exist, had lots of spherical aberration) have more character.
Also, a decent zoom lens design is still very difficult. It basically needs to be designed trial-and-error style, and there are a LOT of knobs to tweak, as the zooming affects all kinds of figures of merit. Nowadays it's basically fully automated with optical solvers (they use raytracing, often only along one axis, and solve for things like spherical aberration, chromatic aberration, etc.)
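To give a flavor of the "solver plus knobs" idea, here's a deliberately tiny sketch: a paraxial ray-transfer-matrix model of a single thick lens, brute-forcing one knob (the front surface radius of a symmetric biconvex lens) against one figure of merit (focal length). Real optical solvers trace skew rays through dozens of surfaces and balance many aberrations at once; all numbers here are made up for illustration.

```python
import numpy as np

def thick_lens_matrix(r1, r2, t, n):
    """Paraxial 2x2 ray-transfer matrix of a single thick lens in air.

    r1, r2: signed surface radii (mm), t: center thickness (mm),
    n: refractive index of the glass.
    """
    def refract(r, n1, n2):
        # refraction at a spherical surface of radius r
        return np.array([[1.0, 0.0], [-(n2 - n1) / (r * n2), n1 / n2]])
    def translate(d):
        return np.array([[1.0, d], [0.0, 1.0]])
    return refract(r2, n, 1.0) @ translate(t) @ refract(r1, 1.0, n)

def focal_length(r1, r2, t, n):
    # effective focal length from the C element of the system matrix
    return -1.0 / thick_lens_matrix(r1, r2, t, n)[1, 0]

# one-knob "optimization": find the biconvex radius giving a 50 mm lens
target = 50.0
r_best = min(np.linspace(20.0, 120.0, 4001),
             key=lambda r: abs(focal_length(r, -r, 4.0, 1.52) - target))
```

A real merit function would sum weighted errors over many such quantities (spot size, distortion, chromatic shift) at several zoom positions, which is where the trial-and-error explosion comes from.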
Just because you have asphericals doesn't mean you automatically solve all your problems. The onion-ring bokeh effect is still unfortunately an issue with asphericals.
Also, one thing I really wish would happen is for open-source lens correction to get better. It's currently absolutely shit.
For example, Sony describes their modern 50mm f/1.4 as a "refined double-Gauss design incorporating two aspherical elements, including one precision AA (advanced aspherical) element, works with an ED (Extra-low Dispersion) glass element to suppress field curvature and distortion". The basic double-Gauss design has been around since the 19th century!
CNC optical design really hit big in the 90s. The problem space for lens design is obviously high-dimensional and large (number of elements, element composition, element placement, element size, curvature of each optical surface, cement between the elements...) and that made it possible to just brute-force a bunch of different lenses and tune it all.
For prime lenses, that pretty much solved "reasonable" lenses for "reasonable" angles of view, "reasonable" aperture/intensity, and "reasonable" numbers of elements. A 7 element lens of a particular focal length, designed in the 90s, is significantly better than one designed in the 70s, but probably about the same as today.
There has still been significant progress on the "unreasonable" lenses. For example, f/1.4 lenses from the 50s were poor wide open, ones from the 70s were mediocre, 90s ones were decent, and modern ones are excellent wide open. Superfast lenses, superwide lenses, super-sharp lenses, and lenses optimized for beyond-visual or wide-spectrum photography (IR or UV at the same time as visible light) have all benefited significantly since the 90s.
Lenses have been moving to more and more elements over time, as well as using more exotic shapes and materials, all of which gives more ability to correct the image. A lens from the 50s would have been around 5-7 elements, all spherical (all surfaces are a sphere of some radius), using basic types of glass. A modern superlens will incorporate several ultra-low dispersion elements (typically grown fluorite crystals) and use at least one aspherical element, on top of having a significantly higher number of elements in general (7 elements was a lot in the 70s, modern prime lenses might have 12-14, with 3-4 of them being exotic).
This may seem like an obvious statement, but optics are a very complex materials-science problem, and on top of the optical design, the materials and manufacturing have become much more advanced over time. At the dawn of photography, basic crown glass and flint glass were about all that was available.

The coatings are also very important: every time a beam of light crosses an air-glass interface, a bit of it is reflected, and those stray reflections destroy contrast. Many of the modern optical designs used in the 70s to 90s were actually known quite early - the Planar type (used in your standard 50mm f/1.8 lens) was invented in 1896 and the Plasmat type in 1918 - but the number of air-glass interfaces meant the contrast was very low on those early lenses and they were not practical. Instead, fewer elements produced more contrast (because of fewer air-glass interfaces), and elements were often cemented together with balsam pitch so that you didn't have an air-glass interface between them (for example in the Cooke Triplet or Zeiss Sonnar designs). More advanced designs had to wait for the advent of coatings just before World War 2, and then there was an explosion of designs after the war. Multi-coatings (Pentax SMC was the early winner) did the same in the 70s, and the expiration of the SMC patents enabled another explosion in the 2000s. The same applies to glass types and other components of lenses, most recently aspheric elements and fluorite.
And yes as you mention it has always been hard to design good zoom lenses, they are as complex as a "super" prime lens even with a relatively modest zoom range and relatively slow aperture. "Superzoom" lenses covering a giant zoom range are even harder to do. The 70-200mm type has been around a long time and good 70-200 designs were available in the 70s for the professional market, then in the 2000s the pro market got 24-70 f/2.8 and 24-105 f/4 types, and more recently there's been some good consumer ones like 17-50 f/2.8 or 18-35 f/1.8. So the field of zoom lenses is still very much advancing.
so tl;dr: basic prime lenses were "solved" in the 90s, while really high-end prime lenses that do exotic things, and zoom lenses, are still making progress by increasing the number of elements and using more exotic elements.
the lens is designed to take rays traveling from optical infinity, and converge all those rays at a focal plane that's (eg) 30 millimeters away, right? well, if you flip the lens around backwards, then it will send out rays towards optical infinity from a focal plane that's 30mm away... which is what a macro lens is.
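the same intuition falls out of the thin-lens equation: as the subject distance approaches the focal length, the image distance (and so the magnification) blows up. a quick sketch, with purely illustrative numbers:

```python
# Thin-lens sketch of why a reversed/close-focused lens acts as a macro:
# magnification m = s'/s grows fast as the subject distance s
# approaches the focal length f.
def image_distance(f, s):
    """Solve the thin-lens equation 1/f = 1/s + 1/s' for s'."""
    return 1.0 / (1.0 / f - 1.0 / s)

def magnification(f, s):
    return image_distance(f, s) / s

# a 50 mm lens with a subject 100 mm away gives 1:1 reproduction,
# while a subject only 60 mm away gives 5x magnification
m_1to1 = magnification(50.0, 100.0)
m_macro = magnification(50.0, 60.0)
```
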
in fact in the old days people used to use their camera as an enlarger to expose their negative onto prints. The theory was that using the same lens that took the image, in reverse, would help to correct the aberrations of the lens... I am not sure that's actually supportable compared to using a real enlarger lens that's optimized for high-magnification work, but it's a neat theory, and possibly better than just using a random crap enlarger lens ;)
now if you really want to have fun... ever wonder why diffraction makes your image softer at smaller apertures? ;)
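to spoil the teaser a bit: the diffraction blur (the Airy disk) grows linearly with the f-number, d ≈ 2.44·λ·N, so stopping down eventually makes the blur spot larger than a sensor pixel. a back-of-the-envelope sketch (the function name and numbers are just for illustration):

```python
def airy_disk_diameter_um(f_number, wavelength_nm=550.0):
    """First-minimum Airy disk diameter in micrometres: d = 2.44 * lambda * N."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

# at f/2 with green light the blur disk is under 3 um (smaller than
# many sensor pixels); at f/16 it balloons to ~21 um, several pixels
d_f2 = airy_disk_diameter_um(2)
d_f16 = airy_disk_diameter_um(16)
```
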
This is somewhat an example of "if that was a dollar, someone would have picked it up already"
The value here is in novelty - it doesn't look like anything else. If it was mass produced, demand for the effect would likely go down, not up. And it's certainly not without compromise - sharpness across the entire frame in those example shots is worse than your average plastic 18-55mm kit zoom.
If this could be a bit smoother in the background, on the other hand ...
I'd be astonished if this hasn't been done before, at least several times in variations.
However, someone having done it, and you and I finding out about it, are very different things. And the latter is much, much easier today than it was for most of the life of this lens.
I know a few people who toy with lenses and very often remove, invert or replace elements (with elements from other lenses)
This is making the "news" because today everyone is looking for that sweet sweet internet karma
Explore a few hobbyist forums and discords and you'll find plenty of crazy modifications that aren't talked about on petapixel
Is there software that helps with artistic lens design?
I think the three easiest factors in a "pleasing" image are spherical aberration, apodization, and field curvature. Spherical aberration is what makes a soft-focus lens soft, and a slight amount of spherical aberration is responsible for the famous "Leica glow". People generally find at least a small amount of this aberration pleasing.
"Bokeh" is the rendering of the out-of-focus areas of an image, and it's the combination of a number of optical factors. Highlights in the bokeh area will take on the shape of the aperture. Crudely, if you cut a dick-shaped hole in a piece of paper and tape it over your lens, you will have dick-shaped highlights. Apodization is basically the idea that if, instead of a hard aperture, you use a neutral-density filter that gets darker at the edges, your bokeh won't have hard edges; it'll have a gradient. STF lenses use a separate aperture with one of these gradient filters to alter the edges of the bokeh so it's smooth. Other aberrations also manifest in the bokeh - uncorrected sagittal field curvature will result in the "swirlies" of a Petzval lens, and spherical aberration likely helps smooth out the bokeh a bit as well.
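a toy way to see the apodization point: model the aperture as a 2D transmission map. a hard disc jumps from 1 to 0 at the rim (hard-edged bokeh discs), while an apodized disc fades out radially (smooth, STF-style discs). purely illustrative, not any real lens's profile:

```python
import numpy as np

def aperture(n=101, apodized=False):
    """Transmission map of a circular aperture on an n x n grid.

    apodized=True models a radial ND gradient: full transmission at
    the centre, fading to zero at the rim (a crude STF-style profile).
    """
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    r = np.hypot(x, y)
    if apodized:
        return np.clip(1.0 - r, 0.0, 1.0)
    return (r <= 1.0).astype(float)

hard = aperture()
soft = aperture(apodized=True)
# an out-of-focus highlight roughly takes on this map as its shape:
# the hard disc has a sharp rim, the apodized one a smooth falloff
```
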
if your software gives you metrics for these optical aberrations... that would be a good place to start.
tbh someone else mentioned Lytro and this is something they would have been good at. Take an "accurate" lightfield image and you could reproject it as if it had come through a different (not accurate) lens. From what I know the resolution on Lytros was never that great, but maybe you could do something similar with a custom Reshade filter in video games, change the rendering characteristics of a game in accordance with the optical characteristics of some lens. Perhaps raytracing might be useful to model the exact optical behavior of the rays inside the lens in realtime...
(what software are you using btw? anything available for free / at a semi reasonable cost? I've always thought it would be interesting to try single-point-turning custom elements and building my own custom lenses... completely absurd idea but neat to think about.)
I was looking at https://github.com/quartiq/rayopt (found on HN) earlier. (Check out https://github.com/quartiq/rayopt-notebooks for examples.) I've also tinkered a bit with Lytro files; they have maybe 8 "angle" samples per "pixel" - not quite enough to do much.
My idea was to try to use something like a privacy filter (which blocks some angles of light) inside a lens, for the purpose of having a nonlinear bokeh falloff. One of the issues with using something like f/1.2 for a portrait is that while the background will be nice and blurry, so will the ear and the nose if you focus on the eyes. But if you were able to set up a "smart aperture" that filters out rays that are just slightly out of focus, keeping the ones that are either perfectly in focus or way out of focus, you'd get a really blurry background while keeping the depth of field workable.
(laser-printing a negative for direct contact printing is kind of a fun process in general!)
the idea of using an angle-selective privacy filter is kind of an interesting one; I can't really picture in my head whether or not that would work. I think the answer might be that yes, it would probably change the bokeh, but it might also result in uneven illumination across the image or weird color shifts? it would be neat to see what happens though.
possibly also of interest: https://en.wikipedia.org/wiki/Photon_sieve
Sadly, you need to have significantly higher light field resolution to do an 'acceptable' job of approximating lens aberrations, and in the end it probably wouldn't look better than a 2D filter approximating it. Nonetheless I like the idea.
Now to really make it art, figure out how to say something with the effect. A portrait would have been a nice addition to the collection. How would it change the feeling of looking at a face?
That’s not to take away from how cool this is; just asking, as a practical thing, why someone else besides the person in question would buy another lens and do this vs. use a digital solution to get a similar effect.
You cannot do with software what you can do with a lens. The information is lost by the time it's in a raw file. Not just depth, but also brightness. Any white highlights are also lost information; a lens can shape a highlight to look different in a way which cannot be done in software without using HDR (i.e. multiple exposures).
However, those don't seem to be "real real holograms", but rather something closer to glasses-free stereoscopic 3D, like the Nintendo 3DS or some 3D TVs?
I wonder how they manage to have a picture cross the frame of the screen; AFAIK this isn't possible with either "real real holograms" or stereoscopic 3D?
(For previous research into "real real holographic video", see these :
The camera lens has a bunch more information to work with in terms of depth. Along the same lines, an iPhone can fake bokeh by applying a filter masked by depth information, but it's not really in the same league as the real optical effect you get from a real lens in the field. (In terms of responsiveness and the quality of detail that's captured.)
OTOH, as much as I think legitimate optical effects can be useful/special at capture time, I have limited use for computational effects at capture time. As long as the 'digital negative' has all the information, I'd much rather be able to apply the computational effect later. (This is an advantage of digital techniques - you can adjust them later.)
1) He is using physical elements to get the interesting shot. There is something difficult to describe, but innately alluring, about the "real" or physical aspects of our increasingly digital world. The bending of light, the crystals in film, the feel of the pages of a real book. There is something about the medium itself carrying meaning, even if the actual communicated value is the same. Perhaps it speaks to a timelessness, or durability, of good ideas, beautiful pictures, and powerful words that can last thousands of years. Whereas in the digital realm, my works sometimes only last as long as the hard drive they live on. I think it is deeper than that, though. Physicality is something we can relate to, touch, smell, taste; we experience it far more deeply than an abstract concept represented by pixels on a screen.
E.g. Why is a book better than a PDF? I have no good answer.
2) Despite being in photography for decades, I was unaware of this specific effect of manipulating the elements inside a lens. I had no idea this effect was possible. The tinkering, the curiosity followed down a path few would bother with: isn't that what hackers are all about? I am inspired by his diving deep on topics of interest.
You could try to approximate a crappier version of it but there would be noticeable differences, and sometimes nuance in art matters.
Definition of je ne sais quoi
: something (such as an appealing quality) that cannot be adequately described or expressed.
This sounds like fluff driven by emotion and not reason, but I encourage readers to embrace the lack of reason for some elements of human experience.
We are anything but perfectly rational beings; this is one powerful aspect of life where efficiency and logic are sacrificed on the altar of subjective experience.
Sometimes it's good to just do things for the experience and nothing more. This is especially true for many hobbies such as Amateur Radio. Guys can armchair quarterback antennas, coax, baluns/ununs, whatever, all day long, arguing about which is right. Then a guy goes out and just tries it, does it all wrong according to the "in" crowd, and has tons of fun. <--- this is me. Doing it because I can and it's fun not because it's "better"
I’ve been using computers and digital graphics applications for about thirty years now.
If I were to get myself a new hobby I wouldn’t necessarily want to have one that made me spend more time in front of a computer. That’s one reason people might want to do things the analog way.
I’m currently reading a book written in 2017 by an author who still uses a typewriter. A good book, and it might help him to avoid distractions when writing.
Practical effects always feel better than computational ones, in terms of feedback.
Which is weird to say, because for me it only really happened after digital cameras gave instant previews with zoom and I could tweak and click while getting to know some new hardware.
I did some digital tilt-shifts and then played around with a manually shifted lens once - seeing what you click is so "tactile" (and the bar for it will shift as more computation can be done in preview).
The thing is that the digital version is still guessing depth with some gradient map, particularly at such a long focal length.
The reason a lot of the in-app portrait modes have improved massively is because there are secondary depth sensors which fill in the data that is needed to compute a good looking depth map.
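That pipeline can be caricatured in a few lines: blend a sharp frame with a blurred copy using a per-pixel weight derived from the depth map, which is exactly why the result is only as good as that map. All the names and numbers below are made up for illustration; a real portrait mode uses learned or sensor depth and physically based per-depth blur kernels.

```python
import numpy as np

def box_blur(img, k=3):
    """Crude k x k box blur of a 2D image via shifted views."""
    p = np.pad(img, k // 2, mode='edge')
    h, w = img.shape
    return np.mean([p[i:i+h, j:j+w] for i in range(k) for j in range(k)],
                   axis=0)

def fake_portrait(img, depth, focus_depth, falloff=4.0):
    """Blend sharp and blurred copies, weighted by distance from the
    focus depth: pixels at focus stay sharp, far pixels get the blur."""
    w = np.clip(falloff * np.abs(depth - focus_depth), 0.0, 1.0)
    return (1.0 - w) * img + w * box_blur(img)
```

Any error in the depth map shows up directly as wrongly sharp or wrongly blurred pixels, with no optical information left to recover from.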
> why someone else besides the person in question would buy another lens
I can't think of a single reason except to mess with things till you get the photo you want :)
I wasted a bunch of time with a very cheap (but great) Nikkor 50mm f/1.8D going for different bokehs during a party shoot, which turned out to be a fun, repeatable project for a tutorial I was doing later.
And eventually I turned to a Lensbaby punchable bokeh kit plus a lot of vinyl cutouts of random shapes (mostly Om/ॐ shapes for temple pictures).
I'm sure I could get a digital solution for that, but it involved a lot less work to punch a pattern and go nuts with it.
 - https://www.flickr.com/photos/t3rmin4t0r/4362410419/
It's fun to fiddle and tinker. As much as someone could make the image in a render (I actually seriously doubt the picture could be recreated with post-processing alone unless there was a light field camera involved or something) who would think to do so? This didn't come about because someone was looking for the effect, but because someone was exploring and found it.
Such filters live in 2D, optics in 3D; part of why most filters for things like "bokeh" don't work particularly well is that without an accurate depth map you are guessing at, or ignoring, that 3rd dimension.
On the flip side - this stuff is aesthetic, so there isn't one "proper" way to do it.
People like to play. People like to place artificial constraints upon their work, and then see what they can make within those constraints.
People get very used to a particular camera and lens, and sometimes they get a bit stuck in a rut and want to break away from that.
People sometimes like unpredictability, and it can lead them onto different ways of working or different ways to present a result.
A good deal of photo filters are based on real-world optics. Sometimes optics that are outdated (Polaroid mode), sometimes because the physical setup is really expensive. What I am asking is whether this particular effect is hard to replicate. I feel like I've seen it before in digital art but don't know if it's the same or not, and whether the physical effect is actually somehow magical or not.
We can have both. The world is not boolean.
“Es gibt kein Kompromiss / Es zählt nur ja und nein / Wir sehn wies wirklich ist / Wir denken digital” (“There is no compromise / Only yes and no count / We see how it really is / We think digital”)
I too am an amateur/prosumer photographer who uses an entirely digital process. But tech has its weaknesses, especially in photography -- digital tech still can't replicate a lot of what analog can do, and it's important to embrace that.
That $4000 mirrorless still can't replicate the dynamic range of 35mm film. Photoshop still can't replicate the contrasty reds of Velvia film. And so on...
If there is an equivalent way to achieve the effect digitally, for many people that is good information and they can do that, and don't need to go to the lengths this guy did.
Poorly, if experience with similar things has anything to teach us.
Lens hacks result in lost signal. You're sending fewer photons to less surface area, and you're losing readings.
The cameras of the future might look a lot less like the ones we have today: stereo and light field capture, explosion of sensors, and design that incorporates both advances in optics as well as ML-assisted optimization.
edit: -3 :( I've got unpopular opinions as always, I suppose. We'll see how it pans out in ten years. I predict we're in for a sea change. Photogrammetry is going to be huge.