Flipped an element in an old lens and got 'magic' bokeh (2018) (petapixel.com)
181 points by colinprince 10 days ago | 89 comments





The Zeiss, like the huge majority of ~50mm lenses in existence, is a so-called Double-Gauss design [1] (yes, that Gauss [2]). The role of the interior doublet is mostly to reduce aberrations, so reversing it basically has the opposite effect (at least spherical and chromatic aberration appear to go crazy). Would be interesting to see some raytrace diagrams of the modified lens.
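
For anyone curious, here's a toy paraxial (ABCD ray-transfer matrix) sketch of the general idea. The focal lengths and spacing below are made up, not the Zeiss prescription, but it shows how flipping an asymmetric group leaves the first-order focal length alone while shifting where the focus lands (and, in a real thick doublet, upsetting the aberration balance):

    import numpy as np

    def thin_lens(f):
        # Refraction matrix for a thin lens of focal length f (mm)
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    def gap(d):
        # Free-space propagation over d mm
        return np.array([[1.0, d], [0.0, 1.0]])

    def system(elements):
        # Light hits elements[0] first, so multiply right-to-left
        m = np.eye(2)
        for e in elements:
            m = e @ m
        return m

    def report(name, m):
        efl = -1.0 / m[1, 0]       # effective focal length
        bfd = -m[0, 0] / m[1, 0]   # last element to focal plane
        print(f"{name}: EFL = {efl:.1f} mm, back focus = {bfd:.1f} mm")

    # Hypothetical asymmetric pair: strong positive + weaker negative, 4 mm apart
    report("normal ", system([thin_lens(40.0), gap(4.0), thin_lens(-90.0)]))
    report("flipped", system([thin_lens(-90.0), gap(4.0), thin_lens(40.0)]))

Real raytracing software does this surface by surface with the actual curvatures and glasses, which is where the aberration plots would come from.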

[1] https://en.wikipedia.org/wiki/Double-Gauss_lens

[2] https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss


Even the most modern "ultra high performance" 50 mm lenses still have a Double Gauss lens at their core.

For example, here's the Nikon Z 50/1.8S: https://imgsv.imaging.nikon.com/lineup/lens/z-mount/z_50mmf1...

Doesn't take much guessing where the aperture is in this lens.


Funny… just thirty minutes ago I was giving a sort of guest lecture to some UX students, one of my key talking points was that old “should designers learn to code?” chestnut. (If you're not familiar, and want to see dozens or hundreds of people instantly start arguing on Twitter, go ahead and ask.)

I come down firmly on the side of “yes,” and started my explanation of why with the photographic work of Salvador Dali. The guy was an absolute darkroom master. He wasn’t just a person who had a vivid imagination and could paint; he had an engineer’s understanding of optics, physics and the chemistry of photography. This led into a sort of point about programming not being an end in itself, but being part of the toolbelt of a modern digital designer.

ANYWAY. This guy, not just busting out a camera and photoshop, but actually taking apart and rebuilding lenses, more of this. Get dirty with your tools.


I think coders should learn to shoot and develop film, and ideally learn to wet print. Not that it's something you'll need at your day job, but it's just such a fun hobby for an engineering type of person. The basic process is simple and yet infinitely modifiable, and there are numerous deep rabbit holes you can go down once you master the basics. It's definitely one of those "a moment to learn, a lifetime to master" type hobbies. And it's really enjoyable for someone who stares at a screen all day to work with your hands and do something that is completely analog.

I've been meaning to put together a personal blog with Jekyll to learn the tool, maybe that would be an interesting subject for some posts.


I started the other way around!

Film cameras are a marvel. First time looking through a really good viewfinder, and you'll feel like you're in your own private cinema.


I've been wanting to implement some of photoshop and Lightroom functionality from scratch to really understand what all the sliders are doing to the image.

100% correct. I am an artist working in realtime visual effects (previsualization), and it's surprising how often knowledge of how realtime 3D works, on both the hardware and software end, enhances what I can do with my art (and how little fellow artists want anything to do with anything technical). There's so much you can do once you realize how textures are mapped or what gets affected by polygon sorting order...

I don't work in computer graphics, but I've always gone the other direction and been amused by the analogies of digital image processing to the physical ones.

An example is the "contrast-adaptive sharpening" that AMD introduced a while back in their driver. I've been using contrast-adaptive sharpening for a long time in my film development. I use a developer (Agfa Rodinal, with many clones such as Compard R09) with a solvent effect that tends to increase edge sharpness. If you use it in high dilution without any agitation then highlights or sharp edges will tend to "locally deplete" the developer and reduce the rate of development. In combination with the edge-enhancing nature, this compensating effect tends to help reduce the haloing from the sharpening (and of course it also helps pull your highlights back a bit in general). Which is basically what contrast-adaptive sharpening does!
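
For the curious, here's a very rough NumPy sketch of the digital side of that analogy. It's loosely inspired by the idea behind AMD's CAS, not their actual shader, and the 3x3 window and strength parameter are just illustrative: sharpen with a small high-pass, but back the strength off where local contrast is already high, which is the same self-limiting-near-dense-edges behaviour as stand development.

    import numpy as np
    from scipy.ndimage import convolve, maximum_filter, minimum_filter

    def adaptive_sharpen(img, strength=0.8):
        # img: 2D float array in [0, 1]
        local_contrast = maximum_filter(img, size=3) - minimum_filter(img, size=3)
        weight = strength * (1.0 - local_contrast)      # back off where contrast is high
        blur = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
        high_pass = img - convolve(img, blur, mode="nearest")
        return np.clip(img + weight * high_pass, 0.0, 1.0)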

And of course there are all those skeuomorphic terms we've adopted whose origins we no longer consciously think about. Dodging and burning, unsharp masks, and airbrushing are all real things that people used to do by hand!


This is actually probably more useful for video. Many storylines contain a dream or otherworldly sequence, or just a romantic interlude with a flower. Lots of applications; this could be used to create certain feelings and emotions.

My first thought when I saw the images...

Drug sequences in Dredd. Very similar soft and slightly haunting quality.


Better title might be 'A "Magic Bokeh" Lens Modification' - from the original YouTube video.

I would love to know from someone knowledgeable --

How is a new lens design made nowadays, and is there much innovation still in lens design? I mean, it seems like, aside from zoom lenses, the designs were done (by hand and slide rule!) 50 to 100 years ago.

What do you do now? Have a ray tracing linear solver with constraints about refractive index, desired weight, number of elements, distortions, etc. (and cost!) factor all these things in and see what multiple lens stack it comes out with? Or is it just small variations on known proven designs now?


There is plenty of "work" to be done regarding lens design:

from what I've heard, there is a delicate tradeoff between designing a lens that has pleasing bokeh (smooth MTF when out of focus) and designing a lens that is sharp when focused--I've heard that to do the former, you basically /need/ spherical aberration, but to do the latter, you need to remove as much spherical aberration as possible. I believe this is what people are referring to when they say modern lenses are too sterile and that the old lenses (which, because aspherical elements basically didn't exist, had lots of spherical aberration) have more character.

Also, a decent zoom lens design is still very difficult. It basically needs to be designed trial-and-error style, and there are a LOT of knobs to tweak, as the zooming affects all kinds of figures of merit. Nowadays it's basically fully automated with optical solvers (they use raytracing, but along one axis, and they solve for things like spherical aberration, chromatic aberration, etc.).
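
To make the "solver" idea concrete, here's a minimal toy in Python: two hypothetical thin-lens powers and a spacing (nothing like a real prescription) driven toward target first-order properties by a generic optimizer. Real design software does the same thing with full raytraces and dozens of aberration and constraint terms in the merit function:

    import numpy as np
    from scipy.optimize import minimize

    def first_order(f1, f2, d):
        # Paraxial EFL and back focal distance of two thin lenses d mm apart
        m = (np.array([[1.0, 0.0], [-1.0 / f2, 1.0]])
             @ np.array([[1.0, d], [0.0, 1.0]])
             @ np.array([[1.0, 0.0], [-1.0 / f1, 1.0]]))
        return -1.0 / m[1, 0], -m[0, 0] / m[1, 0]

    def merit(params, target_efl=50.0, target_bfd=38.0):
        efl, bfd = first_order(*params)
        return (efl - target_efl) ** 2 + (bfd - target_bfd) ** 2

    result = minimize(merit, x0=[40.0, -80.0, 5.0], method="Nelder-Mead")
    print("f1, f2, spacing:", result.x)
    print("achieved EFL, back focus:", first_order(*result.x))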

Just because you have asphericals doesn't mean you automatically solve all your problems. The onion-ring bokeh effect is still unfortunately an issue with asphericals.

Also, one thing I really wish would happen is for open source lens correction to get better. It's currently absolutely shit.


I'm a hobbyist, not an expert, but my impression is that many modern lens designs are heavily based on classic designs, with more corrective elements (often at least 10 total vs ~6 in many classic designs), as well as increased use of special elements (e.g. aspherical) and proprietary coatings which improve optical properties.

For example, Sony describes their modern 50mm f/1.4 as a "refined double-Gauss design incorporating two aspherical elements, including one precision AA (advanced aspherical) element, [that] works with an ED (Extra-low Dispersion) glass element to suppress field curvature and distortion". The basic double-Gauss design has been around since the 19th century!


(I don't work in the field, I just do photography as a hobby, but as let's say an "advanced amateur" interested in lens design and history:)

Computer-driven optical design really hit it big in the 90s. The problem space for lens design is obviously high-dimensional and large (number of elements, element composition, element placement, element size, curvature of each optical surface, cement between the elements...), and that made it possible to just brute-force a bunch of different lenses and tune it all.

For prime lenses, that pretty much solved "reasonable" lenses for "reasonable" angles of view, "reasonable" aperture/intensity, and "reasonable" numbers of elements. A 7 element lens of a particular focal length, designed in the 90s, is significantly better than one designed in the 70s, but probably about the same as today.

There has still been significant progress made on the "unreasonable" lenses. For example, f/1.4 lenses from the 50s were poor wide open, ones from the 70s were mediocre, 90s designs were decent, and modern lenses are excellent wide open. Superfast lenses, superwide lenses, super-sharp lenses, and lenses optimized for beyond-visual or wide-spectrum photography (IR or UV at the same time as visible light) have all benefited significantly since the 90s.

Lenses have been moving to more and more elements over time, as well as using more exotic shapes and materials, all of which gives more ability to correct the image. A lens from the 50s would have been around 5-7 elements, all spherical (all surfaces are a sphere of some radius), using basic types of glass. A modern superlens will incorporate several ultra-low dispersion elements (typically grown fluorite crystals) and use at least one aspherical element, on top of having a significantly higher number of elements in general (7 elements was a lot in the 70s, modern prime lenses might have 12-14, with 3-4 of them being exotic).

This may seem like an obvious statement, but optics are a very complex materials science problem, and on top of the optical design the materials and manufacturing have become much more advanced over time. At the dawn of photography, basic crown glass and flint glass were about all that was available.

The coatings are also very important - every time a beam of light crosses an air-glass interface, a bit of it reflects, which costs contrast. Many of the modern optical designs used in the 70s to 90s were actually known quite early - the Planar lens type (used in your standard 50mm f/1.8 lens) was invented in 1896, and the Plasmat type was invented in 1918 - but the number of air-glass interfaces meant the contrast was very low on those early lenses and they were not practical. Instead, fewer elements produced more contrast (because of fewer air-glass interfaces), and elements were often cemented together with balsam pitch so that you didn't have an air-glass interface between them (for example in the Cooke Triplet or Zeiss Sonnar designs).

More advanced designs had to wait for the advent of coatings just before World War 2, and then there was an explosion of designs after the war. Multi-coatings (Pentax SMC was the early winner) did the same in the 70s, and the expiration of the SMC patents enabled another explosion in the 00s. The same applies to glass types and other components of the lenses, most recently aspheric elements and fluorite.

And yes, as you mention, it has always been hard to design good zoom lenses; they are as complex as a "super" prime lens even with a relatively modest zoom range and relatively slow aperture. "Superzoom" lenses covering a giant zoom range are even harder to do. The 70-200mm type has been around a long time and good 70-200 designs were available in the 70s for the professional market; then in the 2000s the pro market got 24-70 f/2.8 and 24-105 f/4 types, and more recently there have been some good consumer ones like 17-50 f/2.8 or 18-35 f/1.8. So the field of zoom lenses is still very much advancing.

So, tl;dr: basic prime lenses were "solved" in the 90s; really high-end primes that do exotic things, and zoom lenses, are still making progress by increasing the number of elements and using more exotic ones.


It seems like this type of lens has been available since the 1920s, and there would have been nothing stopping, say, the German Expressionists from using this sort of technique? I love seeing this sort of thing, it's part of why I find modern demoscene so fascinating.

Reminds me of realising years and years ago that one can make a macro lens by reversing a normal prime. (Which I know! Everyone knows that, you can even get proper adapters for it, but it was a genuine discovery for me at the time.) Doing things in camera, rather than later in software, just is more enjoyable for a lot of photographers. I know I find it more fun.

I've been shooting for 10 years and didn't know this at all!! I just saw a photo of a lens mounted in reverse and my mind is blown. Time to go down a rabbit hole trying to understand why this works...

lenses pass light in both directions, if you flip it around then it bends the light the opposite way ;)

http://hyperphysics.phy-astr.gsu.edu/hbase/geoopt/image.html...

the lens is designed to take rays traveling from optical infinity, and converge all those rays at a focal plane that's (eg) 30 millimeters away, right? well, if you flip the lens around backwards, then it will send out rays towards optical infinity from a focal plane that's 30mm away... which is what a macro lens is.

in fact in the old days people used to use their camera as an enlarger to expose their negatives onto prints. The theory was that using the same lens that took the image, in reverse, would help to correct the aberrations of the lens... I am not sure that's actually supportable compared to using a real enlarger lens that's optimized for high-magnification work, but it's a neat theory, and possibly better than just using a random crap enlarger lens ;)

now if you really want to have fun... ever wonder why diffraction makes your image softer at smaller apertures? ;)

https://www.cambridgeincolour.com/tutorials/diffraction-phot...
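
(Back-of-envelope version of that link's point, assuming ~550 nm green light: the Airy disk diameter is roughly 2.44 x wavelength x f-number, so it grows linearly as you stop down and eventually swamps the pixel pitch no matter how good the glass is.)

    wavelength_nm = 550.0   # green light
    for n in (2.8, 5.6, 11, 22):
        airy_um = 2.44 * wavelength_nm * n / 1000.0   # Airy disk diameter, microns
        print(f"f/{n}: Airy disk ~ {airy_um:.1f} um (typical pixel pitch is 3-6 um)")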


Are you also familiar with extension tubes? I think with both the reversed lens and extension tubes, the lens sits further from the film or sensor (focal plane). Somewhat counter-intuitively, at least for me: when the lens is focused at infinity, it's at its closest point to the focal plane. Moving it further from the focal plane focuses closer.
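
A few toy thin-lens numbers (a generic idealized 50mm, not any particular lens) make that concrete: as the lens-to-sensor distance grows past the focal length - which is effectively what extension tubes or reversing buy you - the subject distance collapses and the magnification climbs toward 1:1 and beyond.

    f = 50.0   # focal length, mm
    for d_i in (52.0, 60.0, 75.0, 100.0):        # lens-to-sensor distance, mm
        d_o = 1.0 / (1.0 / f - 1.0 / d_i)        # subject distance from 1/f = 1/d_o + 1/d_i
        print(f"lens {d_i:.0f} mm from sensor -> subject at {d_o:.0f} mm, "
              f"magnification {d_i / d_o:.2f}x")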

even better, you can use a reversed prime as a loupe to view slides!

that's interesting, and especially interesting that nobody has done it before, considering the amount of money there is in specialty lenses...

This is somewhat an example of "if that was a dollar, someone would have picked it up already"


The money in lenses goes towards more predictability - sharper, wider aperture, faster to focus, smooth bokeh for portraits - or technical achievements like ultratelephoto lenses. The people who can pay are looking for those features, not random effects.

The value here is in novelty - it doesn't look like anything else. If it was mass produced, demand for the effect would likely go down, not up. And it's certainly not without compromise - sharpness across the entire frame in those example shots is worse than your average plastic 18-55mm kit zoom.


There are lenses made for unusual effects rather than optical quality, but it's more of a niche market

https://microsites.lomography.com/petzval-58-bokeh-control-l...

https://lensbaby.com/products/composer-pro-ii-with-sweet-50-...


I'd be surprised if this is truly the first time. I'm guessing there was someone assembling a lens who did this by mistake, saw from the results that it wasn't right, and then went back in to see what needed to be corrected.

From the sample photos it seems like the lens is rather useless this way. While some bokeh is awesome, this looks more like someone applied an LSD-filter and called it a day; nothing that you could use for a portrait. It also seems the pictures are not too sharp. Therefore, there's probably not too much money to be made.

If this could be a bit smoother in the background, on the other hand ...


I dunno. I'd buy a micro 4/3 one like a shot. I'm already using lenses from SLR magic that aren't THE sharpest possible things, but have really good bokeh and pleasing departure from perfect focus outside perfect focal distance.

> especially interesting that nobody has done it before

I'd be astonished if this hasn't been done before, at least several times in variations.

However, someone having done it, and you and I finding out about it, are very different things. And the latter is much, much easier today than it was for most of the life of this lens.


> especially interesting that nobody has done it before

I know a few people who toy with lenses and very often remove, invert or replace elements (with elements from other lenses)

This is making the "news" because today everyone is looking for that sweet sweet internet karma

Explore a few hobbyist forums and discords and you'll find plenty of crazy modifications that aren't talked about on petapixel


So most lens design software I've seen is optimized for a "scientific instrument" workflow - optimizing for sharpness across spectrum and angle.

Is there software that helps with artistic lens design?


"perfect" optical performance is something that you can mathematically express, while whether an abberation is "pleasing" depends on the person and the scene it's used on. There is no mathematical formula that says a Petzval portrait looks nice and many people would say they don't.

I think the three easiest factors in a "pleasing" image are spherical aberration, apodization, and field curvature. Spherical aberration is what makes a soft-focus lens soft, and a slight amount of spherical aberration is responsible for the famous "Leica glow". People generally find at least a small amount of this aberration to be pleasing.

"Bokeh" is the rendering of the out-of-focus areas of an image, and it's the combination of a number of optical factors. Highlights in the bokeh area will take on the shape of the aperture. Crudely, if you cut a dick shaped hole in a piece of paper, then tape it over your lens, you will have dick-shaped highlights. Apodization basically is the idea that if instead of a hard aperture, you use a neutral-density filter that gets darker at the edges, then your bokeh won't have hard edges, it'll have a gradient. STF lenses use a separate aperture with one of these gradient filters to alter the edges of the bokeh so it's smooth. Other abberations also manifest in the bokeh - uncorrected sagittal field curvature will result in the "swirlies" of a Petzval lens, and spherical abberation likely helps smooth out the bokeh a bit as well.

if your software gives you metrics for these optical aberrations... that would be a good place to start.

tbh someone else mentioned Lytro and this is something they would have been good at. Take an "accurate" lightfield image and you could reproject it as if it had come through a different (not accurate) lens. From what I know the resolution on Lytros was never that great, but maybe you could do something similar with a custom Reshade filter in video games, change the rendering characteristics of a game in accordance with the optical characteristics of some lens. Perhaps raytracing might be useful to model the exact optical behavior of the rays inside the lens in realtime...

(what software are you using btw? anything available for free / at a semi reasonable cost? I've always thought it would be interesting to try single-point-turning custom elements and building my own custom lenses... completely absurd idea but neat to think about.)


Hey, thanks for the info.

I was looking at https://github.com/quartiq/rayopt (found on HN) earlier. (Check out https://github.com/quartiq/rayopt-notebooks for examples.) I've also tinkered a bit with Lytro files; they have maybe 8 "angle" samples per "pixel" - not quite enough to do much.

My idea was to try to use something like a privacy filter (blocks some angles of light) inside a lens for the purpose of having a nonlinear bokeh falloff. One of the issues with using something like F1.2 for a portrait is that while the background will be nice and blurry, so will the ear and the nose if you focus on the eyes. But if you're able to set up a "smart aperture" that filters out rays that are just out of focus, and keep the ones that are either perfectly in focus or way out of focus, you'd get a really blurry background while keeping the depth of field workable.


if you just want to achieve non-linear falloff (that's apodization) then all you need to do is the graduated-density filter I mentioned, with darker edges and a clear center. In principle you could probably expose a negative or laser-print a transparency to your specifications.

(laser-printing a negative for direct contact printing is kind of a fun process in general!)

the idea of using a privacy filter (an angle-selective micro-louver film) is kind of an interesting one; I can't really picture in my head whether or not that would work. I think the answer might be that yes, it would probably change the bokeh, but it might also result in uneven illumination across the image or weird color shifts? it would be neat to see what happens though.

possibly also of interest: https://en.wikipedia.org/wiki/Photon_sieve


Re: Lytro, I gave them that precise advice back in ~2008, and I believe they had already talked about it. Nearly 10 years later, their cinema camera boasted 'lens emulation' but I'm not personally convinced it emulated bokeh digitally, despite the whizz-bang stage demos.

Sadly, you need to have significantly higher light field resolution to do an 'acceptable' job of approximating lens aberrations, and in the end it probably wouldn't look better than a 2D filter approximating it. Nonetheless I like the idea.


Seriously cool. I love creative accidents like this. True hacker culture.

I'd love to see video recorded this way. Looks like it has lots of potential for those "characters take psychedelics" parts of the script.

I love where your head is at. That instantly reminded me of this clip. I adore this song because of it.

https://www.youtube.com/watch?v=oNHP5Z7RZcA


Kinda neat. Many of these remind me of observing a solar eclipse, where trees cast shadows with zillions of little crescents. Fun to experience in person, but distracting to see in an image because I get the distinct "something is not right here" feeling.

Now to really make it art, figure out how to say something with the effect. A portrait would have been a nice addition to the collection. How would it change the feeling of looking at a face?


(2018)

As an amateur photographer… can someone smarter/more educated than me explain to me why practically I would prefer this over a computational solution? This looks really cool, but there are dozens of apps and filters out there that do stuff like this, no?

That’s not to take away from how cool this is, just asking as a practical thing why someone else besides the person in question would buy another lens and do this vs. use a digital solution to get a similar effect.


A lens shapes a light field onto a sensor. Once light hits the sensor, it is flattened into a 2D image, quantized, and saturated based on the limitations of the sensor.

You cannot do with software what you can do with a lens. The information is lost by the time it's in a raw file. Not just depth, but also brightness. Any white highlights are also lost information; a lens can shape a highlight to look different in a way which cannot be done in software without using HDR (i.e. multiple exposures).


Damn, I wanted to link Lytro's website, but they shut down? :(

Making up new bokeh shapes, depth-based special effects, etc. are the type of thing Lytro should have done with their cameras, instead of just «focus after capture time», which is mathematically interesting but doesn’t enable fundamentally new art.

They are working on the other side of the equation now, making displays that directly reproduce light fields. Check out https://www.lightfieldlab.com/

"Real holograms. No headgear."

https://variety.com/2018/digital/features/light-field-lab-ho...

However, those don't seem to be "real real holograms", but rather something closer to glasses-free stereoscopic 3D like the Nintendo 3DS or some 3D TVs?

I wonder how they can manage to have a picture cross the frame of the screen; AFAIK this isn't possible with either "real real holograms" or stereoscopic 3D?

(For previous research into "real real holographic video", see these :

https://www.media.mit.edu/spi/holoVideoAll.htm

https://phys.org/news/2010-05-holographic-3d.html

https://www.businessinsider.com/cheap-holographic-video-comi...

https://phys.org/news/2018-11-big-application-d-holography-h... )


> why practically I would prefer this over a computational solution?

The camera lens has a bunch more information to work with in terms of depth. Along the same lines, an iPhone can fake bokeh by applying a filter masked by depth information, but it's not really in the same league as the real optical effect you get from a real lens in the field. (In terms of responsiveness and the quality of detail that's captured.)

OTOH, as much as I think legitimate optical effects can be useful/special at capture time, I have limited use for computational effects at capture time. As long as the 'digital negative' has all the information, I'd much rather be able to apply the computational effect later. (This is an advantage of digital techniques - you can adjust them later.)
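
For reference, the depth-masked blur mentioned above is very roughly this (everything here - the function name, parameters, and the Gaussian standing in for a lens-shaped kernel - is made up for illustration; real portrait modes use much fancier matting and layered rendering):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fake_bokeh(image, depth, focus_depth, falloff=0.5, max_sigma=8.0):
        # image, depth: HxW float arrays; depth in the same units as focus_depth
        blurred = gaussian_filter(image, sigma=max_sigma)
        blend = np.clip(np.abs(depth - focus_depth) / falloff, 0.0, 1.0)
        return (1.0 - blend) * image + blend * blurred   # 0 = keep sharp, 1 = fully blurred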


Why would anybody do anything in camera vs. digital? Because some things just aren't the same when done in post. Is this one of them? Maybe the person doing it physically isn't a software person who could build something that doesn't exist yet, but they have old lenses lying around and aren't afraid to tinker. Much faster results. The "bokeh" from these smartphone cameras is still annoyingly not right, so there's that aspect as well.

This is cool in two ways.

1) He is using physical elements to get the interesting shot. There is something difficult to describe, but innately alluring, about the "real" or physical aspects of our increasingly digital world. The bending of light, the crystals in film, the feel of the pages of a real book. There is something about the medium itself carrying meaning, even if the actual communicated value is the same. Perhaps it speaks to a timelessness, or a durability, of good ideas, beautiful pictures, and powerful words that can last thousands of years. Whereas in the digital realm, my worlds sometimes only last as long as the hard drive they live on. I think it is deeper than that. Physicality is something we can relate to, touch, smell, taste; we experience it far more deeply than an abstract concept represented by pixels on a screen.

E.g. Why is a book better than a PDF? I have no good answer.

2) Although I've been in photography for decades, I was unaware of this specific effect from manipulating the elements inside a lens. I had no idea this effect was possible. The tinkering, the curiosity followed down a path few would bother with - isn't that what hackers are all about? I am inspired by his diving deep on topics of interest.


That makes a lot of sense. I am looking at it from the point of view of the finished product, the photograph, rather than the process of creating it. I personally tend to do things “the hard way” at times because I feel like I, for lack of a better term, put my soul into the thing that way. But as a practical matter, people viewing a photograph generally don’t know how it was produced, so is this particular effect unique and/or hard to reproduce?

You wouldn't be able to recreate the exact effect digitally.

You could try to approximate a crappier version of it but there would be noticeable differences, and sometimes nuance in art matters.


Even if it was an exact replica, there is, as the French say, a je ne sais quoi about an object crafted lovingly in a manual fashion.

Definition of je ne sais quoi : something (such as an appealing quality) that cannot be adequately described or expressed.

This sounds like fluff driven by emotion and not reason, but I encourage readers to embrace the lack of reason for some elements of human experience.

We are anything but perfectly rational beings; this is one powerful aspect where efficiency and logic are sacrificed on the altar of subjective experience.


Maybe they don't know, but there is an unspoken energy to things that are real, that have a history all their own. Sometimes, for some people, the object itself is part of the message, the way it was crafted part of its value. Explain sentimentality and you explain this hahahah.

Serendipity in the real world can produce effects that you may not think to work to create digitally. On the flip side, digital processing can do things that would be a pain to do with a physical pipeline. Imho more possibilities from all avenues makes for a richer world.

Because no digital solution can recognize depth correctly from a single image, which would be needed for this effect.

Granted, but also haven’t we done this: https://corephotonics.com/products/digital-bokeh/

That's from multiple images, and if you look closely it looks really bad. Just look at her hand in the "far focus" image. A digital method needs to fill out information that's just not present in the image.

It's not from a single image.

Why climb a mountain when you can just send a drone to the top? Some things are worth doing just because they are.

This. It's easy to get into the mindset that "It's all been done" or "somebody already did it better, so why bother?"

Sometimes it's good to just do things for the experience and nothing more. This is especially true for many hobbies such as Amateur Radio. Guys can armchair quarterback antennas, coax, baluns/ununs, whatever, all day long, arguing about which is right. Then a guy goes out and just tries it, does it all wrong according to the "in" crowd, and has tons of fun. <--- this is me. Doing it because I can and it's fun not because it's "better"


Different strokes for different folks.

I’ve been using computers and digital graphics applications for about thirty years now.

If I were to get myself a new hobby I wouldn’t necessarily want to have one that made me spend more time in front of a computer. That’s one reason people might want to do things the analog way.

I’m currently reading a book written in 2017 by an author who still uses a typewriter. A good book, and it might help him to avoid distractions when writing.


> can someone smarter/more educated than me explain to me why practically I would prefer this over a computational solution?

Practical effects always feel better over computational ones - in terms of feedback.

Which is weird to say because it only really happened for me after digital cameras gave instant previews with zoom & I could tweak/click while getting to know some new hardware.

I did some digital tilt-shifts and then played around with a manually shifted lens once - seeing what you click is so "tactile" (and the bar for it will shift as more computation can be done in preview).

The thing is that the digital thing is still guessing depth with some gradient map, particularly at such a long focal length.

The reason a lot of the in-app portrait modes have improved massively is because there are secondary depth sensors which fill in the data that is needed to compute a good looking depth map.

> why someone else besides the person in question would buy another lens

I can't think of a single reason except to mess with things till you get the photo you want :)

I wasted a bunch of time with a very cheap (but great) Nikkor 50mm f/1.8D for different bokehs during a party shoot, which turned out to be a fun project "makeable" for a tutorial I was doing later.

And eventually turned to a lensbaby bokeh punchable + a lot of vinyl clips of random shapes (mostly Om/ॐ shapes for temple pictures).

I'm sure I could get a digital solution for that, but it involved a lot less work to punch a pattern and go nuts with it.

[1] - https://www.flickr.com/photos/t3rmin4t0r/4362410419/


Same reason folks generally like kaleidoscopes, but I don't see as many of those in screen savers anymore. That is, I don't think there is an objective answer. A lot of the fun in this is the unexpected and unrepeatable nature of the results.

Depth information is lost once light hits the sensor. Software can attempt to account for this (portrait mode) but it's imperfect.

Why take pictures at all? Photorealistic renders are at our fingertips with things like Blender and Cycles. Just take a look at /r/rendered.

It's fun to fiddle and tinker. As much as someone could make the image in a render (I actually seriously doubt the picture could be recreated with post-processing alone unless there was a light field camera involved or something) who would think to do so? This didn't come about because someone was looking for the effect, but because someone was exploring and found it.


> This looks really cool, but there are dozens of apps and filters out there that do stuff like this, no?

Such filters live in 2D, optics in 3D - part of why most filter for things like "bokeh" don't work particularly well is that without an accurate depth map you are guessing or ignoring that 3rd dimension.

On the flip side - this stuff is aesthetic, so there isn't one "proper" way to do it.


An extreme version of your question: Why even bother taking pictures? Just ray-trace or otherwise render your images.

> As an amateur photographer… can someone smarter/more educated than me explain to me why practically I would prefer this over a computational solution? This looks really cool

People like to play. People like to place artificial constraints upon their work, and then see what they can make within those constraints.

People get very used to a particular camera and lens, and sometimes they get a bit stuck in a rut and want to break away from that.

People sometimes like unpredictability, and it can lead them onto different ways of working or different ways to present a result.


Software is typically immature hardware. To the extent that you can do something in hardware the results are more often superior, if less flexible.

I'm so tired of this arrogant attitude. "Why should I do something in the real world when I can simulate it with a good-enough-but-ultimately-inferior digital process?"

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

https://news.ycombinator.com/newsguidelines.html


I think the many qualifications in OP's question make it clear that the intention was not to be arrogant, but simply to ask a question and maybe learn something about photography. I had the same question, not because I'm looking down on people who do this but because I genuinely don't know very much about the photography world and think that it might be possible that there are some unknown advantages to one method over the other.

I don't think the OP is arrogant. I think it's a genuine question. I didn't think of the question itself, but after it was posed, I didn't know the answer (because I have no background in photography or image processing). A few other folks have responded with a technical answer, which is that rendering this would require depth information which obviously isn't available in a 2D image.

I in no way meant that as arrogant. I think it’s really cool that that person figured it out. What I am wondering is in what way the effect is different when produced digitally and does it matter?

A good deal of photo filters are based on real-world optics. Sometimes optics that are outdated (Polaroid mode), sometimes because the physical setup is really expensive. What I am asking is whether this particular effect is hard to replicate. I feel like I’ve seen it before in digital art but don’t know if it’s the same or not, and whether the physical effect is actually somehow magical or not.


I guess in simple terms: if you apply a digital filter to a regular image after capture, you can only manipulate elements that are there, or maybe introduce artificial effects. Special filters or lens adjustments done before capture, however, can manipulate the received light before information is discarded in producing that regular photograph.

How is it arrogant? He's honestly asking. There might be good reasons for using a digital solution, such as keeping valuable equipment intact.

We can have both. The world is not boolean.


Your last sentence there made me think of an old Welle: Erdball song…

“Es gibt kein Kompromiss / Es zählt nur ja und nein / Wir sehn wies wirklich ist / Wir denken digital”


It's arrogant when seen in a larger trend of technologists constantly trying to substitute everything with technology. "When all you have is a hammer, everything seems like a nail"

I too am an amateur/prosumer photographer who uses an entirely digital process. But tech has its weaknesses, especially in photography -- digital tech still can't replicate a lot of what analog can do, and it's important to embrace that.

That $4000 mirrorless still can't replicate the dynamic range of 35mm film. Photoshop still can't replicate the contrasty reds of Velvia film. And so on...


Chill, it didn't come across as arrogant at all to me. I had the same question.

If there is an equivalent way to achieve the effect digitally, for many people that is good information and they can do that, and don't need to go to the lengths this guy did.


I wish there was a way to learn more about photographic lens design. Seems completely inaccessible. I wanted to create some basic lenses from scrap parts (this has been done before by others)

I'm not seeing any photos on that article. Server load issue?

How could I recreate this effect with an app/website ?

1. Buy two of those lenses and modify one to cause this effect.
2. Take tens of thousands (possibly orders of magnitude more, possibly less) of photographs of the exact same scene with near-identical lighting conditions.
3. Use both sets to train a deep learning model to take input photographs and produce the same output.
4. Grow the dataset until you reach the desired look for a general input photograph.

> How could I recreate this effect with an app/website ?

Poorly, if experience with similar things has anything to teach us.


He can flip a lens, but he can't rotate his phone 90 degrees?

Here's hoping the swirly/crazy bokeh phase is soon over.

Magic bokeh? No offense, but it's horrible and nothing magic about it.

At some point all of these effects would probably be better to produce computationally.

Lens hacks result in lost signal. You're sending fewer photons to less surface area, and you're losing readings.

The cameras of the future might look a lot less like the ones we have today: stereo and light field capture, explosion of sensors, and design that incorporates both advances in optics as well as ML-assisted optimization.

edit: -3 :( I've got unpopular opinions as always, I suppose. We'll see how it pans out in ten years. I predict we're in for a sea change. Photogrammetry is going to be huge.


Most information is lost when light hits the sensor. You mentioned a few ways we might be able to capture more of that. But until we get all that we can play with that information through lenses.

Many of these effects cannot be produced without a depth map, since distance to the lens is part of the maths.


