This particular design is from the iPhone 7 (or, more precisely, my guess is that it's from the iPhone 7, due both to the date of the patent [1] and to the elements matching up with marketing images of the iPhone 7), which is 6 years old now, but I think it's broadly still representative of modern smartphone lenses. In the past 5 years or so, advancements in phone cameras have come mostly in better sensors, far better image processing, and adding more cameras, but the basic principles of the compact ultra-aspherical lens design seem to still be in place.
As an example, here [2] is an exploded view of the iPhone 6's lens setup, and here [3] is an exploded view of the more recent iPhone 12's lens setup. The iPhone 12 gained an extra element, but they both use similarly weird ripply elements and you can see the clear lineage between the two phones.
Also, as mentioned in the tweet thread and elsewhere in the comments here, Kats Ikeda has an excellent, incredibly detailed and thorough explainer on mobile phone lens design [4]. I don't actually know a whole lot about the optics field; I'm just dabbling as a hobbyist from a computer graphics perspective. Kats Ikeda's site is a much better resource than my Twitter posts for learning about this stuff.
I was working on a CG art project [1] where I wanted to render the final image using a real-world lens with lots of "swirly" bokeh [2], but the renderer I was using didn't have the ability to simulate real-world lenses. So, to finish the project, I had to write my own plugin to achieve this effect. Afterwards, I found the topic interesting, so I started going further down the rabbit hole!
Complicated, but simply an incremental evolution of a very old solution. These complicated lens arrangements to counteract chromatic aberration got started in the 18th century. The game is all about getting three distinct wavelengths to focus despite each reacting differently at each lens. The lens in the iPhone looks complex but it isn't any more complicated than others. The most complex I've ever read about was for a satellite which had to maintain focus despite temperature changes. IIRC that employed a stack of over a dozen lenses.
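The classic two-element version of that game is the achromatic doublet: pair a crown glass and a flint glass (different Abbe numbers V) and split the total optical power so the chromatic terms cancel at two wavelengths. A minimal sketch; the glass values in the example are illustrative, not from any specific catalog:

```python
def achromat_powers(phi_total, V1, V2):
    """Split total power phi_total between a crown element (Abbe number V1)
    and a flint element (Abbe number V2) so the combined focal length is the
    same at the two design wavelengths (the F and C lines):
        phi1 + phi2 = phi_total
        phi1 / V1 + phi2 / V2 = 0   (achromatic condition)
    Returns (phi1, phi2); power is the reciprocal of focal length."""
    phi1 = phi_total * V1 / (V1 - V2)
    phi2 = -phi_total * V2 / (V1 - V2)
    return phi1, phi2

# Example: a 100 mm doublet from a typical crown (V ~ 60) and flint (V ~ 36).
# The crown ends up as a strong positive lens, the flint as a weaker negative
# one -- the familiar cemented-doublet layout.
p_crown, p_flint = achromat_powers(1.0 / 100.0, 60.0, 36.0)
```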
The craziness in the iPhone design isn't exactly the number of elements, but their weird aspherical shapes. The traditional approaches for designing optics have been developed for spherical lenses because they are much simpler to describe mathematically. Even then, optimizing them requires a lot of numerical computation. At first glance, optimizing these wavy, complex lens shapes seems to require much more involved methods.
I don't think the computational complexity has been a significant problem for at least half a century. Aspheric lenses are described by simple polynomials and ray tracing is just elementary geometry. You only need to analyze a handful of rays to get the relevant parameters of an imaging system. Probably a few thousand floating point operations in total.
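For what it's worth, the "simple polynomials" here are usually the standard even-asphere sag equation (a conic base term plus even powers of the radial coordinate), and the ray tracing is just surface intersection plus Snell's law. A rough 2D sketch of both pieces, not any production ray tracer:

```python
import math

def asphere_sag(r, R, k, coeffs):
    """Sag z(r) of a standard even asphere at radial coordinate r.
    R: base radius of curvature, k: conic constant,
    coeffs: [a4, a6, ...] multiplying r^4, r^6, ...
    With k = 0 and no coefficients this reduces to a sphere."""
    c = 1.0 / R
    z = c * r ** 2 / (1.0 + math.sqrt(1.0 - (1.0 + k) * c ** 2 * r ** 2))
    for i, a in enumerate(coeffs):
        z += a * r ** (2 * i + 4)
    return z

def refract_2d(d, n, n1, n2):
    """Snell's law in vector form (2D): refract unit direction d at unit
    surface normal n, going from index n1 into n2.
    Assumes no total internal reflection."""
    cos_i = -(d[0] * n[0] + d[1] * n[1])
    eta = n1 / n2
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    cos_t = math.sqrt(1.0 - sin2_t)
    return (eta * d[0] + (eta * cos_i - cos_t) * n[0],
            eta * d[1] + (eta * cos_i - cos_t) * n[1])
```

With k = 0 and an empty coefficient list the sag matches the exact spherical sag R - sqrt(R^2 - r^2), which is a handy sanity check.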
The problem is that it is extremely difficult to manufacture surfaces with optical quality that are not spherical. Even today, aspheres tend to be more expensive than spherical lenses and have significantly worse surface quality. This is the price you have to pay if the objective has to fit inside a phone.
>The name “Anastigmat” literally means “non-astigmatic” lens, since it corrects astigmatism and its cousin the field curvature. Which is funny, because astigmatism itself means “non-stigmatic”, where stigmatism is “an image-formation property of an optical system which focuses a single point source in object space into a single point in image space”.
>I love the word Anastigmat. That means that “Anastigmat” can be translated as a “non-non-stigmatic”. I guess “Stigmat” didn’t really have a good ring to it. Nowadays, almost all lenses have the basic aberrations corrected, so we don’t have the need to call them Anastigmats anymore. Sad.
Interesting. He doesn't really explain how they're designed though. Must be just automatic global optimisation at this point though. No way a human manually optimised all the ray paths.
Usually some form of gradient descent is combined with a global optimization routine that tries to explore the parameter space. This is nothing new, optics has been designed this way since the early 60s.
Optics equations are nonlinear (and generally nonconvex). Global optimization for nonconvex functions didn't exist in the 60s, and neither did the fast function evaluation needed for brute-force solutions.
Definitely not global (it's probably NP-hard), but my guess is that it's optimized in some basic sense. (Probably still using very specific parametrizations of the lenses and such.)
At some point we’ll just use metaoptics for lenses that are fully optimized using some general heuristics, but so far I am not sure they (both lenses and heuristics) are ready for prime time :)
Radially symmetrical lenses are extremely simple to calculate for and discontinuities are rare, so any gradient descent algorithm could find this pretty easily. A 3 lens system can probably be optimized with less computation than a single frame of a modern AAA game.
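As a toy illustration of that claim -- not how real lens-design software works, which optimizes ray aberrations across many surfaces and wavelengths -- here is the same loop structure applied to the thin-lens lensmaker's equation, with finite-difference gradients:

```python
def focal_length(c1, c2, n=1.5):
    """Thin-lens lensmaker's equation: 1/f = (n - 1) * (c1 - c2),
    where c1, c2 are the front and back surface curvatures."""
    return 1.0 / ((n - 1.0) * (c1 - c2))

def merit(c1, c2, f_target=50.0):
    """Toy merit function: squared error from a target focal length.
    Real merit functions score spot sizes, distortion, etc."""
    return (focal_length(c1, c2) - f_target) ** 2

def optimize(c1, c2, lr=1e-7, eps=1e-6, steps=500):
    """Plain gradient descent with central finite differences."""
    for _ in range(steps):
        g1 = (merit(c1 + eps, c2) - merit(c1 - eps, c2)) / (2.0 * eps)
        g2 = (merit(c1, c2 + eps) - merit(c1, c2 - eps)) / (2.0 * eps)
        c1 -= lr * g1
        c2 -= lr * g2
    return c1, c2

# Start with a biconvex lens that focuses at 40 mm and let the optimizer
# pull it to the 50 mm target.
c1, c2 = optimize(0.03, -0.02)
```

Even this naive version converges in a few dozen iterations, which is the point: the per-evaluation cost is tiny, so the hard part of lens design was never raw compute.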
How they manufacture these is what I'd like to know.
Considering 3-lens systems were being used in telescopes over 200 years ago, I'd bet it takes far less computation than a single pixel in a modern game.
In high school it’s introduced when they close the blinds and turn off the lights and you play with the glass blocks and 3 lines of light, if that jogs your memory.
Interesting. This is on the young side internationally. FWIW the Dutch have also been global leaders in optics for most of history (the field really only starts when the Dutch invented the telescope and the microscope).
To add a point: in Japan (also where many lens manufacturers are headquartered), it's taught in high school and only to those who chose physics, but some basics are taught to everyone in junior high school.
I think it's the former, if you are in the US. I also did not learn this formula in high school. My physics class barely went beyond momentum, and I remember the slightest hint of electricity being discussed.
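For anyone whose class skipped it: the formula being referred to is presumably the thin lens equation, 1/f = 1/d_o + 1/d_i, relating focal length to object and image distances. A quick sketch:

```python
def image_distance(f, d_o):
    """Thin lens equation: 1/f = 1/d_o + 1/d_i, solved for the image
    distance d_i. f and d_o in the same units (e.g. mm)."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def magnification(f, d_o):
    """Lateral magnification m = -d_i / d_o (negative means inverted)."""
    return -image_distance(f, d_o) / d_o

# A 50 mm lens focused on a subject 2 m away forms the image about
# 51.3 mm behind the lens -- which is why focusing closer racks the
# lens slightly away from the sensor.
d_i = image_distance(50.0, 2000.0)
```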
This is from iPhone 7, released 6 years ago. I would like to see an actually modern lens design. Smartphone cameras got so much better in the last 6 years.
The iPhone 12 Pro is insanely better than the iPhone X--especially in low light. I have a bunch of fairly high-end camera equipment that I rarely use outside of some fairly specific circumstances at this point
I have high end kit (and some semi-pro lighting) which I love to use, but I now use a secondhand iPhone X, an Apexel teleconverter lens and a cheap Ulanzi grip, and I really enjoy this combination for its immediacy.
The higher end kit is now increasingly used in novel, DIY ways, because carrying a phone has taught me that gear should be used for its strengths.
Cameraphones have gotten to the point where I should probably consider getting some external lens gear. It didn't really make sense to me when there were so many other compromises with respect to image quality but offering some additional options when traveling with just a phone is probably legit useful at this point.
The Apexel HD 2X lens has been useful for me. Cheap enough not to stress about, and just enough extra reach to be useful.
I really enjoy the cheap little Ulanzi CapGrip thing, too.
It's definitely not 100% like using a real camera (no half-press for focus, though I am a back-button-focus guy on DSLRs anyway!)
But it adds just enough of a camera-like feel to allow relaxed one-handed grips and slightly more immediate shooting. And it weighs nothing so it's always in a bag or jacket pocket.
Thanks. I'll probably wait until my next upgrade which will probably be a good couple of years. But I can absolutely see pulling the plug on upgrading my other cameras at that point. Which would make even going with a couple higher-end Moment lenses pretty thinkable.
I was really quite interested in film at one point but the overhead in terms of big equipment and film processing cost was just more than I wanted to handle. A smartphone today with a bit of software would have been a game changer.
I have just the one now. It's the Apexel HD 2X telephoto.
I bought it because I wanted a slightly longer lens for portrait location scouting, and chose this one not because I am certain it is excellent but because I doubt there's enough of a quality gap between the Apexels and the Moment or Rhinoshield lenses to justify the much higher price.
It is really fairly good though, until you get into the corners. But then I happily use vintage lenses on my A7II that are worse in the corners!
What difference there is, is that the Moment lenses use a tiny bayonet, which I think I would prefer to the 17mm thread Apexel, Ulanzi and others use. Screwing the lens on can be fiddly, especially with the semi-open, three-quarter circumference attachment threads you typically see on lens adapter phone cases.
Don't use the clamp attachments. They are hard to align and I think that is a lot of why people find attachment lenses so disappointing.
Get a phone case with a built-in 17mm thread (or bayonet if you pick a bayonet lens type). I think I'm using a Ulanzi case.
The only frustration that remains is that the iPhone's built-in camera app does not let you force the choice of either of the lenses. It will choose for you if you pick 2x. So you might want an app. I really like the ProCamera app.
I'm not sure whether accessory lenses are so useful on some of the three and four camera mobiles, but it works well enough for my iPhone X.
I've had a few clamp-style lens attachments for my iPhone and you have to keep them very clean or they can scratch the screen. Plus the alignment, as you said, is finicky.
It really boils down to this. A lot of people have enough pocket space to need no bag most of the time, so having one just for the camera is extra grating.
What surprises me the most is that, on the other side, smartphone photo UI hasn't progressed much from the early days. A lot of my shots that fail on a smartphone fail because of the controls. The cycle of zooming/framing, then locking focus and/or exposure, is still a pain; adjusting exposure or forcing an ISO setting that could result in a bad photo is out of the question. I ended up bringing an old DSLR on random walks when I might want to capture stray cats and don't care about the extra bulk.
All of these are easier to do with third party apps, but those aren't available on quick access. And taking photos out of quick access is so slow. A DSLR is ready instantly from power down; with a phone, we're happy if it instantly unlocks.
I have a fairly large FF DSLR Canon system that I rarely use at this point--though not sure if it's worth selling. (I do use it now and then.) I use my Fujifilm mirrorless setup more but even that almost has to be a longer trip where I plan to take a lot of pictures. iPhone is fine for most purposes.
Yeah I do a lot of walking and long trips. I had a mid range DSLR and it was horrible carrying it everywhere and juggling lenses and batteries. The iPhone works 90% of the cases perfectly and the last 10% of the cases, I am happy with the compromise.
Edit: The thing that really killed the DSLR for me was lugging the whole kit up Scafell Pike in the pissing rain and cold in the UK. I got some good photos in the morning but carried a brick for the entire day after the battery dumped because of the cold. I had an iPhone XR back then, and that kept me going all day fine. When I got back, the whole damn camera, despite being weather sealed, had condensation in it.
No manufacturer will actually guarantee their weather sealing. It surprised me when I went and researched it. YouTuber James Popsys[0] takes his cameras out in the cold and wet all the time, so whatever he uses is probably a good starting point if you decide to try interchangeable lens cameras again and want one that actually handles moist weather well.
OM System (the camera division of Olympus sold off as a separate company) recently announced their OM-1 camera, which together with some of their expensive lenses got an IP53 rating. Which is still less than many smartphones.
The weather seal may not have been breached. If you put your camera together in a nice warm house, then it would have trapped warm moist air inside it. When you carried it in the cold, the moisture in the air would have condensed.
Best advice with batteries is to keep them in your coat pocket, where they stay reasonably warm.
They aren't comparable, not really. In the same way that an 8x10" camera and a full-frame camera aren't.
But they are excellent.
Good phone cameras have a value proposition all of their own, and they are utterly changing what we expect from photography.
My mobile has taught me that a lot of what I relied on or worked with in a DSLR or mirrorless is a crutch.
Mobile phone cameras force you, for example, to really think about composition, because you can't simply blur out the bits you don't like (portrait mode still sucks). They force you to think of other ways to isolate subjects, other ways to make use of light and contrast.
I've owned a lot of kit over 20 years or so, and I still own a lot of it -- I'm using a 14-year-old full-frame DSLR and a seven-year-old full-frame mirrorless.
In the last two years, when studio portrait photography has been complicated or impossible, I have used my phone a lot, when out walking by myself. What I thought was just a way to not-totally-give-up photography has turned into a work of its own.
It will anger a lot of photographers who like to whine about how mobile phone cameras can't do X, Y and Z, but here's the truth: if you don't understand what a mobile phone camera can teach you as a photographer, you're probably not really trying.
I shoot professionally (DSLR) and there are situations in my work (automotive) where my iPhone 12 mini just won't work BUT there are also lot of situations in my work where the iPhone could get a better shot—or, at least, get a good shot more easily.
A solid color workflow, tilt-shift/specialized optics, high resolution, and lighting/strobe control all go to the DSLR system, whereas just about everything else goes to the iPhone.
Honestly it's just a different media. There's no need to think too hard about it. Might as well compare oil pastel to charcoal
Personally I think almost all of the output of phone cameras (except what's in their glossy ads) is missing a certain je ne sais quoi that distinguishes Photography (note the capital P) from an ordinary photo. That and you can pry the amazing bokeh from a high-quality SLR lens from my cold dead hands.
> Honestly it's just a different media. There's no need to think too hard about it. Might as well compare oil pastel to charcoal
You might, but even then you might also consider that learning from one medium can translate even to a very different medium.
But in any case here we're not talking about quite different media. The images are processed and edited in fundamentally the same way. As a result, many photographers tend to perceive the one (mobile phone digital photography) as an inherently inferior subset of the range of the other (large digital sensor photography).
This is because photographers -- particularly in the digital era -- do not ordinarily consider the creative advantages of self-imposed restrictions.
And when deliberate creative restrictions are suggested, the immediate response is to dismiss them or to suggest that they are wasteful (you're losing information at capture), better achieved in post-processing (so you have more "flexibility").
So there is a "precision maximalist" and "gear maximalist" default to most contemporary photography. Use the sharpest, fastest, least quirky lens you can get, shoot at the highest resolution, buy the latest if you can.
All of these things are narrowly valuable in some area of photography, but they get glommed together to make a giant assumption.
The mobile phone presents a convenient, nearly unavoidable way to question many of the impacts of that maximalist position.
One of the things I often want to do is ask photographers "why are your DSLR photos better than your friend's mobile phone photos?" and then poke them in the ribs each time they mention the technical advantages of their kit. But it would generally get quite violent.
> So there is a "precision maximalist" and "gear maximalist" default to most contemporary photography. Use the sharpest, fastest, least quirky lens you can get, shoot at the highest resolution, buy the latest if you can.
I wouldn't conflate "buy the latest you can" with gear maximalism. My own journey with that started with dissatisfaction with the quality of the output that I had, and, as I became a better photographer, running up against the limits of my camera and lens. Then, after understanding what I was experiencing was indeed the hardware and not a limitation of my skill, I upgraded, and the hardware leapfrogged my abilities. Then I got better and I once again started hitting the limits of my hardware. Though I am now at a point where the hardware and my abilities are good enough that I am as content with the output as one can reasonably expect for a hardcore amateur. My point is that "gear maximalism" is not always just mere materialism.
As for the phone, it's convenient. I can gussy it up, and if the output is sufficiently pig-like I can put lipstick on it till it looks good enough. With the right software I can even output RAW files and theoretically feed them into Lightroom like my normal workflow. But it will never be as good, and there are so many techniques that I simply can't do with it.
"The best camera is the one in your hand" is true and all, but it's flooding the world with shit photos. Good is the enemy of great, after all.
> there are so many techniques that I simply can't do with it.
But that's the point of what I'm getting at, concerning limitations and maximalism.
We've got into a mindset where we seek to push the edges of the kit we have -- we are urged to shoot faster lenses, shallower depth of field, lower and lower light.
But it doesn't get you a higher rate of good photos. A good photo is a good photo regardless of kit, and it's possible to flounder creatively when you have more scope.
As a photographer I think we should be able to creatively progress with whatever photographic tool we have in our hands. See a creative angle, refine it, improve on it, etc. Finding a way to be creative within a set of limitations is all that matters, and therefore starting with a serious set of restrictions is good practice.
FWIW I am a fairly good portraitist, a long-standing small-venue gig photographer, and I'm pretty technically-minded. But thinking back to the first DSLR I owned (a secondhand Fuji S2 Pro), I can think of precisely one photographic limitation I'd run into regularly even now: its high ISO performance was not that good. Otherwise I'd have that camera back to shoot with, even now, especially in the studio. Though I might rage at the maximum card capacity.
I'm regularly surprised to hear people talking of hitting the photographic limits of the kit they use; it suggests to me very specific niches or unusually rubbish kit lenses (I've owned one of those, and even then I got some results I liked).
Because I can't think of a popular DSLR made after about 2005 that is bad; certainly not after 2008. I can think of only a couple of entry-level DSLR or mirrorless cameras with limitations that would trouble me (the lack of a depth of field preview being the main one, aside from glasses-unfriendly viewfinders).
I still shoot with digital kit I bought 13 years ago, and got some of my best results in the last few years with the oldest of those cameras that is now so cheap I'm better off not selling it. I've only bought secondhand, old-model equipment since 2011. I probably won't ever buy new again -- except maybe a Cambo Actus.
The same is true of lenses. I'm currently doing a bit of work with an early 1980s Vivitar wide-to-standard zoom that cost me £30. Crap in the corners, gorgeous otherwise. I don't plan to buy a new lens ever again.
I'd always recommend going back to your former kit (buy it again, cheap on eBay) and testing those beliefs about limits in retrospect.
I still have that older equipment. (In fact, I even bought the updated model, and gave the original to a friend. She knows full well the kinds of pictures that setup has taken and it motivates her to improve) It's lighter, it's what I built my reputation on, and frankly the (later) investment in lenses and such for that platform wasn't cheap. In fact I'm very proud of how much attention I've gotten (which isn't that much, by influencer standards) with just a Costco kit lens and body.
> But it doesn't get you a higher rate of good photos.
Beg to differ here, if one knows what they're doing.
I do a lot of low-light, event photography, among other stuff, and the fancy expensive kit is just flat-out way better. I've shot thousands of these photos with both by now, and the extra technical flexibility is totally worth it. Hell, the extra sensor size and autofocus alone took me to a whole new level. Do you know how much it sucks trying to do low-light with manual focus at a DJ performance because your autofocus can't keep up? How much time do you want to spend twiddling noise reduction controls because the entry-level kit's ISO 1600 looks like crap? How much shadow detail are you willing to sacrifice? It's such a pain in the ass.
And through all that I like to take a few shots with my phone for the instant social media gratification... and those shots are never anywhere near as good. The color is flatter, depth of field is blah, everything is just...meh. They look good on instagram, but that's about it.
> I can think of precisely one photographic limitation I'd run into regularly even now: its high ISO performance was not that good. Otherwise I'd have that camera back to shoot with, even now, especially in the studio. Though I might rage at the maximum card capacity.
Growing up I messed around with 35mm and even had an SLR camera for a short while. But I got serious with my first DSLR (a Nikon D3100). Maybe I was wrong in expecting something comparable to my old 35mm stuff, but the dynamic range sucked, high ISO sucked (1600 is grainy AF), autofocus with like 12 points and only one cross-type sensor sucked, and no matter how well I focused (even on a nice 35mm prime) I couldn't get the razor sharpness I was after. There was something about how that sensor processed certain light transitions that had a really nice quality, and the camera took a lot of hits, but ultimately I felt like I was hitting a plateau.
One example: I like to do long-exposure shots of firedancers doing their thing. Depending on the fuel type the dancer uses, there could be a lot of dynamic range to deal with, and most of the other photographers in this space seem to struggle to cope with it: either the dancer is exposed nicely but the flame is blown out, or the flame is exposed nicely but the dancer is super dark (losing a lot of color detail if you try and fix it in post). Or they just try to squash both together and the picture looks like a bad HDR merge. Here is another place where the sensor and lens matter a ton, and both my phone and my DX kit struggle. Can I do and have I done these shots on that kit? Sure. But the nice camera, with its amazing sensor, and the amazing lenses I have for it, are able to capture way more detail, and as a result I get way better results straight away and spend way less time in post trying to fix things.
> I do a lot of low-light, event photography, among other stuff, and the fancy expensive kit is just flat-out way better. I've shot thousands of these photos with both by now, and the extra technical flexibility is totally worth it.
Mm, yep. I do gig photography in small, dark venues -- I've shot probably 125 thousand photos at high ISOs, with six different cameras (Nikon DSLRs for the bulk of it, and Sony mirrorless experiments alongside in a parallel track).
> Hell, the extra sensor size and autofocus alone took me to a whole new level.
The difference in light gathering between an APS-C and full-frame camera of the same technological era is usually about one stop. If you make a several-year jump at the same time, that can be noticeable, I suppose.
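That "about one stop" figure falls out of the sensor area ratio (assuming the same f-number and equivalent framing); a quick back-of-the-envelope check with nominal sensor dimensions:

```python
import math

def stop_advantage(w1, h1, w2, h2):
    """Approximate light-gathering advantage in stops between two sensor
    sizes, taken as log2 of the area ratio. Assumes the same f-number,
    equivalent framing, and same-era sensor technology."""
    return math.log2((w1 * h1) / (w2 * h2))

# Full frame (36 x 24 mm) vs. a typical Nikon-style APS-C (23.6 x 15.6 mm):
# the area ratio is about 2.35x, i.e. roughly 1.2 stops.
adv = stop_advantage(36.0, 24.0, 23.6, 15.6)
```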
> Do you know how much it sucks trying to do low-light with manual focus at a DJ performance because your autofocus can't keep up?
At gigs, I do quite a lot of manual focus (and single-point, back-button focus with manual adjustment). I have no particular issue with it and it helps solve problems autofocus simply cannot solve.
Especially with a Nikon, because you get focus point confirmation even manually.
You do a lot more thinking and observation about equivalent distance. For example if you cannot focus on someone's face because they are behind a brightly lit microphone that autofocus will always prefer: How far are they behind it? If you focus further down the microphone stand or on a belt buckle, lock and recompose on their face, without changing your position, how much does that correct it? Or can you judge how far in inches and then literally move yourself or your camera forward?
One good cross-type sensor is all you absolutely need (stay away from old-style Phase One autofocus kit if this bothers you!)
Admittedly back button focus makes that more useful. And focus peaking on a mirrorless camera is the bee's knees.
> How much time do you want to spend twiddling noise reduction controls because the entry-level kit's ISO 1600 looks like crap? How much shadow detail are you willing to sacrifice? It's such a pain in the ass.
I said above that the only thing that really forced me to upgrade from my S2 Pro was low light performance. But again, I think if I was to go back to using that camera, I would be getting results I wanted a whole stop, maybe a stop and a half faster than I did at the time, through better subject selection, better understanding of its limitations and fifteen years more understanding of exposure.
If you're talking about the D3100, I'd shoot that up to ISO 3200, I reckon. Its low light ISO performance is a little better than the D300 which I used for years at gigs with enormous success. And its dynamic range is not all that noticeably worse. As long as you are careful with your exposure, that's a useful little camera.
DxO Optics Pro/PhotoLab helps of course.
Newer kit does a lot of stuff better, certainly, and it can save you time. And yeah, low light photography is about the only place people run up against the limitations of older kit. But IMO nothing since 2008 has any kind of problem with low light, and in many cases the newer kit is tripped up by the exact same difficult situations.
I dunno dude, ISO 3200 is nearly unusable on the D3100 for anything you want a large picture of; the noise was just too much for me, especially when I didn't want to shoot fully stopped up. The noise has a look of its own, and it compressed horribly and would lose a ton of color detail. Maybe with the right software, but I like my workflow as is. The newer (relatively speaking) Nikon sensors contribute some additional DR as well; a comparison I saw in a performance review a while back put the gap closer to 2 or 3 stops overall.
Maybe it works for your style, but it was cramping mine. There is enough suffering in art, why add to it?
> The difference in light gathering between an APS-C and full-frame camera of the same technological era is usually about one stop. If you make a several-year jump at the same time, that can be noticeable, I suppose.
That's exactly what I am talking about, a sensor and format upgrade after several years. I have a DX body of the same generation and its sensor performance is very close to its FX brother, and they're both considerably better than the D3100.
A lot of people (including this comment) seem to think that DSLR=“good camera” and all real photos are taken on 35mm format SLRs, but none of that is true.
SLR is a specialized format designed for fast autofocus used by news/sports photographers and 35mm “full frame” is not the largest film size.
Rangefinder and mirrorless formats actually have a much easier time with lens design and don’t need such gigantic lenses, which might make them less impressive looking than an SLR, but I have a cheap Mamiya medium format rangefinder that takes better street photos than they do. And if you’re doing product or landscape photography, you want a large format digital back like Phase One or a view camera, which really are the top of the line.
> SLR is a specialized format designed for fast autofocus used by news/sports photographers
Designed for, no. Optimised for and evolved for, yes! :-)
When the SLR emerged it was definitely slower than the cameras press photographers were using. Even the later instant-return-mirror models, probably. Press photographers didn't switch to SLRs until the Nikon F arrived fifteen years later, in the first few years of the Laos and Vietnam wars. Press shooters mostly hadn't been shooting 35mm for more than a decade before that.
The rest: I'm not convinced at this point, and I think high-end mirrorless really blurs most of those boundaries pretty conclusively. We're definitely stepping towards a formatless world in the sense of distinct boundaries, with computational photography being a big part of it.
Which is precisely why I'd urge really any digital photographer to spend as much time learning to get the best out of their mobile phone as possible. It is one point on the continuum; small sensors but a lot of computation.
> My mobile has taught me that a lot of what I relied on or worked with in a DSLR or mirrorless is a crutch.
> Mobile phone cameras force you, for example, to really think about composition, because you can't simply blur out the bits you don't like (portrait mode still sucks). They force you to think of other ways to isolate subjects, other ways to make use of light and contrast.
I would argue that Bokeh is an incredibly important tool that is part of composition and saying it is just there to "blur out the bits you don't like" seems weird to me.
> I would argue that Bokeh is an incredibly important tool that is part of composition and saying it is just there to "blur out the bits you don't like" seems weird to me.
And yet that is exactly how many photographers use depth of field. Especially amateur portraitists; the shallowest depth of field they can pull off, with the eyes sharp and the least "distracting" background.
(Ever notice the fundamental difference between commercial editorial photography and amateur portrait photography? It is this. Amateur portraitists use blurry backgrounds as a tool to escape complexity.)
FWIW, 25 years ago we would not have had this conversation (notwithstanding the non-existence of HN), because this idea of out-of-focus rendering as an inherently important quality of a specific lens almost never came up. (And we didn't have the word until Mike Johnston introduced it to the English language.)
Bokeh obsession (as distinct from what used to be called "choosing the right depth of field for your composition") is a very modern, super-fast-aperture, gear-maximalist thing. The mobile phone bursts this very contemporary bubble, if you'll excuse the pun.
As an amateur I like to use the depth of field to bring a 3d quality to my shots, sort of separate the layers a bit. Usually stopped down a bit, not wide open.
And this is currently the biggest difference I can notice between photos produced with my X-T30 and a Xiaomi mid-tier phone, the pictures don't look flat, sometimes it even feels like looking through a window into the scene.
The other two differences are field of view and resolution, since my phone is equipped with a wide angle lens. TBH, when I shoot the wide end of my kit lens on the X-T30 and don't pixel peep, it's hard to tell apart the JPEGs from my camera and my phone.
> As an amateur I like to use the depth of field to bring a 3d quality to my shots, sort of separate the layers a bit. Usually stopped down a bit, not wide open.
To be clear what I mean is that really the "shot at f/1.4 wide open" portrait thing is an amateur phenomenon. It is just not a big part of commercial portraiture at all, outdoors or in.
> TBH, when I shoot the wide end of my kit lens on the X-T30 and don't pixel peep, it's hard to tell apart the jpeg's from my camera and my phone.
Right. This is what I find -- especially with black and white images (my iPhone seems to produce images that work well with, say, a Pan F thing).
But this depth-of-field stuff is what I mean about the way mobile phones force you to think about subject isolation, all of the time; even on the telephoto lens, except with subjects less than a couple of feet away.
My experience is that people who have never used a DSLR do not have as much trouble getting phone photos they like that are actually good.
Partly because they haven't had either the scope or crutch it offers; they are used to seeing and instinctively composing without it.
Also because DSLR users spend most of their time composing wide open, not at their target aperture, which ultimately warps our perception and steers us towards shooting images more like the full-time view we get from the viewfinder.
There is an art to subject isolation in composition with constant front-to-back sharpness at all times, and a mobile phone will force you to think about it.
It depends on what you're doing and what the photos are for. If you're shooting events, sports, wildlife, or want to do shots with specialty (like ultra-wide-angle) lenses, precise aperture control, large blowups: no. But phones can now handle a huge amount of what most people, even relatively serious photographers, need day to day or on a random hike/city visit/etc. If you're shooting with a DSLR and whatever kit lens it came with on A, you're probably fine using a phone.
Besides telephoto and real macro, the results of the iPhone 12 made the idea of gear renewal almost absurd for me, it’s just incredible and I’m not talking about the 12 pro.
There are trade-offs, but none of them justifies carrying heavy gear around most of the time.
edit: it’s a hobby and I just spent a lot of time trying to take pretty pictures, and searching the proper cost tradeoff to do them
I think this is also why we see lots of people who own high end full-frame etc., are now adapting old lenses, building extraordinary DIY rigs like the "digital camera obscura" trend of the moment, making entirely custom lenses with surplus optics and 3D printers, etc.
It's the same reason as for why there's so much interest in film.
These things have different strengths. I think mobile phones have liberated photography in a way unparalleled since the time when 35mm film liberated photography.
There was a period when DSLRs were improving pretty rapidly and it made sense to move up on a pretty regular cadence because the new stuff was so much better. (And we've seen this with phones more recently.)
There's definitely a retro aspect to film and vinyl. Personally that's not for me having lived with the limitations of both. But I spent years working with custom film developing chemistry and the like.
I was photo editor of my undergrad newspaper, editor of high school yearbook, and made beer money with photography in various ways. (I was also de facto photo editor of a newspaper in grad school among other things.)
I spent a lot of time on it and got a lot of enjoyment out of it. I also probably got a bit burned out and wasn't really interested once I no longer had easy access to a good darkroom. But yeah, happy I did it, zero interest in doing it any longer. Digital pretty much rekindled that interest.
(Also could really not get into video in those days because overhead was just too much. Don't have obvious entry point into creative video these days but at least casual is easier.)
No but earlier many overshot what they needed because smartphones were so obviously inferior even for social media. FF and even APS-C are still clearly better but what matters here is that modern smartphones are reaching the “good enough” bar to no longer look bad.
Lenses can have a big impact on low light sensitivity. No material is 100% transparent, and camera lenses have many elements for light to pass through before it reaches the sensor. The more lens you have between the subject and sensor, the less light is being captured.
That's why, if you're super concerned with low light sensitivity, you go with fixed-focal length prime lenses. Not having to support zoom means far fewer lens elements through which to pass light.
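The cumulative loss described above is easy to estimate from per-surface reflection figures. A minimal sketch (the ~4% uncoated and ~0.5% multicoated numbers are typical textbook values, not measurements of any particular lens, and absorption is ignored):

```python
def transmission(num_elements, reflectance_per_surface):
    """Fraction of light surviving all air-glass surfaces.

    Each element has two surfaces; each surface reflects away a
    fixed fraction of the incident light (absorption ignored).
    """
    surfaces = 2 * num_elements
    return (1 - reflectance_per_surface) ** surfaces

# A 5-element prime vs. a 15-element zoom, uncoated (~4% per surface):
print(transmission(5, 0.04))    # ~0.66 of the light gets through
print(transmission(15, 0.04))   # ~0.29 -- why element count matters

# Modern multicoating (~0.5% per surface) changes the picture:
print(transmission(15, 0.005))  # ~0.86 even with many elements
```

The coated case shows why modern zooms can get away with so many elements: coatings reduce the per-surface penalty far more than dropping elements does.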
It's more the case that the wider the aperture, the more the thing needs correcting for aberrations. A fixed focal length prime lens is much easier to correct than a zoom lens, because all the aberrations can be optimised down in the design. For a zoom lens, the aberrations need to be controlled while parts of the lens are moving around in weird ways.
I have a 18-50mm F/2 zoom lens. It's a really nice piece of kit, but I noticed that at the extreme wide end of the zoom range, it has quite catastrophic coma. Zoom lenses are hard.
Yes, that is exactly why. There is a bit of an "unobtanium" problem, in that materials science has been stuck on this problem for quite a long time. Barring a revolutionary new take on lens materials, this is about as good as it gets, given what people are actually willing to pay for.
One might then think naively that we should take equal care to match the receiver to its transmission line so that all the energy captured by the receiving antenna is actually delivered to the receiver. However, this is not the case for a good receiver.
[...]
Quite generally, in order to deliver the maximum signal/noise ratio at its output, an ideal receiver must reflect all the energy incident upon it. It is only in the limit of an infinitely bad receiver, in which all the output noise is generated internally, that matched input impedance becomes the condition for maximum signal/noise ratio at the output. Actual receivers are somewhere between ideal and infinitely bad, and so they perform best when partially matched, so that a part of the incident energy is reflected and radiated back out the receiver antenna. In fact, an ideal receiver does not run on energy at all, but reflects back all the energy incident on it!
[...]
This fact surprises many people on first hearing; but we note that it is so general that it remains true in quantum theory, at optical frequencies where hf >> kT and the Nyquist noise formula no longer holds. For initiation of a photo chemical reaction it is not necessary that the light energy be absorbed. For example, it might be thought that the eyes of animals adapted to seeing in the dark would have pupils that act as perfect black bodies, absorbing all the incident light energy. On the contrary, it is a familiar fact that the animals with best night vision have eyes that reflect the incident light strongly, looking like search-lights in the dark.
I don't think there are any really big changes to the lens design recently... All the effort has gone into multi-camera setups, and different types of lens for macro, wide angle, telephoto, etc.
To my understanding, there is still only a single set of moving lenses in an iPhone 13 lens.
My S21 with "scumbag zoom" or creeper zoom. The amount of zoom, in something as thin as it is, is mind blowing. To zoom 10x, and be able to take great pictures, hand held with almost no shake. Then you have the top GoPros shooting 5k with gimbal like stabilization. Cameras are getting crazy.
If I remember right, newer Samsungs have their stack of telephoto optics pointing toward the top of the phone, parallel to the display. Then use a periscope style prism to look out and forward. That way you aren't limited by the device thickness (although I guess now the diameter of the lenses is constrained)
There have been experiments with this over the years. Minolta coined the phrase for a 3x zoom in the DiMAGE X series, starting twenty years ago, and as I recall they were the first to use this technology in a consumer camera:
Yining Karl Li has mentioned in previous posts their intention to write a blog post about modern lens optics, which should outline multiple things.
Hopefully, a general overview of the 100s of lenses they've found/made the models for, alongside how modern optics fail to be accurately modeled by the polynomial model of optics...
Polynomial models are generally usable for most applications. If you need lots of accuracy, such as for long-range stereo, they sorta work only for traditional and non-wide lenses.
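The polynomial model the comment refers to is usually the Brown–Conrady radial model used in camera calibration. A minimal sketch (the coefficient values here are made up for illustration):

```python
def apply_radial_distortion(x, y, k1, k2=0.0, k3=0.0):
    """Map an ideal (undistorted) normalized image point to its
    distorted position via the Brown-Conrady radial polynomial:
        factor = 1 + k1*r^2 + k2*r^4 + k3*r^6
    Negative k1 gives barrel distortion (typical of wide lenses)."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return x * factor, y * factor

# Near the center the polynomial barely moves the point...
print(apply_radial_distortion(0.1, 0.1, k1=-0.2))
# ...but far off-axis (large r, as on a wide lens) the low-order
# polynomial must work much harder, which is where it breaks down.
print(apply_radial_distortion(0.7, 0.7, k1=-0.2))
```

This is why the model "sorta works" only for narrow fields of view: the fit is a low-order Taylor-style approximation around the optical axis, and wide lenses push r well outside its comfortable range.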
I recently switched from a OnePlus3 to an iPhone 12 mini and although the pictures are really much better, one thing is worse, the lens glare. There is often a glare and sometimes a bright green dot. Is that a consequence of these many elements? Less spherical and chromatic aberrations but more glare and other internal reflections?
> When filmed, the Gaussian Girl is shot through a soft-focus filter, a piece of translucent plastic (or very sheer silk, in the days of classic film), or a quick smear of Vaseline, depending on the director's and/or cinematographer's preference. This surrounds her with a softly glowing aura, and smooths out any unappealing pores or lines on her face. The result makes her look nothing short of ethereal.
From what I've learned from 3D software tutorials (trying to simulate a glare), this glare comes from the lenses, and each lens will add another ring to the glare? Not sure.
From reading some of the comments it sounds like the elements for these lenses are plastic?
Is there a downside to using plastic lens elements compared to glass, in terms of image quality?
If there isn't a downside, could lenses for SLRs/mirrorless cameras also use such elements?
Edit: It sounds like aspheric lens elements might be made from plastic in SLR lenses. But also I found that plastic might not be used for outer lens elements as it scratches easily
I think the main reason to use plastic would be the potentially higher refractive index and better strength properties, which are also why plastic lenses are used in spectacles (some glasses can have higher refractive indexes but still require heavier thicker lenses to avoid the risk of shattering). I would assume that glass lenses would be more likely to shatter if the phone were dropped. But it may really be that the plastic lenses can be manufactured to a higher quality for a reasonable cost.
In my experience, I can't speak too much about commercial lenses for DSLR or even SLR camera systems, but I can speak to some degree about something like the payloads used for aerospace and defense applications.
Designs of aspheres are typically created via Zernike Polynomials. If you have access to an Optical Design program such as Code V, Zemax, OSLO, ASAP, HEXAGON (Raytheon), or Quadoa or equivalent one can spit out something called a SAG Table which takes in account the Zernike Polynomials and allows for much easier time when it comes time to produce either mold or the actual aspheric curved surface that's going to be machined.
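For reference, a sag table like the one mentioned is typically generated from the standard even-asphere surface equation. A rough sketch (the curvature, conic constant, and polynomial coefficients below are arbitrary illustrative values, not from any real design):

```python
import math

def asphere_sag(r, c, k=0.0, coeffs=()):
    """Surface height z at radial distance r for an even asphere:
        z = c*r^2 / (1 + sqrt(1 - (1+k)*c^2*r^2)) + A4*r^4 + A6*r^6 + ...
    c is curvature (1/radius), k the conic constant, and coeffs the
    even-order polynomial terms (A4, A6, ...). With k = 0 and no
    polynomial terms this reduces to an exact sphere."""
    z = c * r * r / (1.0 + math.sqrt(1.0 - (1.0 + k) * c * c * r * r))
    for i, a in enumerate(coeffs):
        z += a * r ** (4 + 2 * i)
    return z

# A tiny sag table for a hypothetical small aspheric element:
for r in (0.0, 0.2, 0.4, 0.6, 0.8):
    z = asphere_sag(r, c=0.5, k=-1.2, coeffs=(0.01, -0.002))
    print(f"r = {r:.1f} mm  sag = {z:.6f} mm")
```

A machinist or mold maker works from exactly this kind of radius-vs-height table; the design program's job is choosing c, k, and the polynomial coefficients.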
Yes, there is a downside to using plastic, namely chromatic aberration due to lack of index matching. Other issues are limitations on the bandwidth of transmitted light. Furthermore, most "plastics" used for cell phone lenses have historically been glass-like plastics such as PMMA (polymethyl methacrylate) or Zeonex. Most of these materials, however, are extremely sensitive in the UV. The plastic lenses also don't do well with wavefront correction, or even with just shaping light. To make matters worse, coating plastic is usually problematic, as it doesn't do well with the higher-temperature coatings that tend to aid in not only anti-reflection but also color correction and glare reduction, i.e. PECVD or other types of plasma-enhanced coatings.
Using glass, and by glass I mean a crystalline glass (perhaps something closer to a high-grade fused silica or, if budget permits, sapphire or some type of chalcogenide), one gains better modeling control of the index of refraction (Abbe number, glass melt, etc.) and also overall optical sensitivities and STOP (structural, thermal, and optical) performance parameters. Something else to consider is that the coefficient of thermal expansion between glass and metal can more easily be optimized. But all this comes at a cost.
The reason for plastic typically winning over glass is procurement and manufacturing cost. Plastic can have molds machined with tolerances held rather tightly. Because of this it becomes much easier to pop out lenses. As of late, holographic replication and lithographic roll-to-roll processing have also made plastic optics a more cost-effective option. This is a form of additive manufacturing and is probably where things are headed.
Manufacturing "glass optics", especially aspherical optics, depends on first ensuring the substrate is pure and that the glass melts are consistent. In most cases glass must be thermally cycled, typically annealed, to ensure and maintain rigidity. Rough machining is then performed, essentially hogging out the material, in some cases yielding only 60 to 70% usable material. Finally, some type of computer-controlled polishing, MRF (magnetorheological finishing), or single-point diamond turning of the crystalline glass is performed. The final finishing steps are extremely costly: a combination of advanced metrology (i.e. MTF, interferometry, point-cloud analysis, non-contact surface analysis, various forms of spectroscopy) has to be performed to ensure that fabrication has optimized the surfaces for wavefront performance, by reducing or in most cases eliminating mid-spatial frequencies in the form of micron- to nanometer-level peaks and valleys, all while not changing or sacrificing the size and other physical dimensions, and preserving the most sensitive clear aperture (i.e. the usable and functional area of the lens). It can be quite a time-consuming process.
TLDR; Glass is better than Plastic, but is more costly to manufacture at scale, unless volume and market acceptance can drive the cost down.
Movies are often shot with anamorphic lenses which are lenses with a sort of vertical cylinder shape. This lets wider aspect ratios be shot using the full frame of a more square aspect film or sensor.
Actually, I'm much more amazed by what these lens designs for cameras look like. Sure, I can appreciate how they fit these designs into a tiny amount of space, but they are also not constrained in shape by the same constraints as glass lenses.
Even more amazing is how they designed very complex lenses in the 50's and 60's essentially by hand. And those designs have stood the test of time and have barely changed. Compared to what modern lens designers have available to them, I just find it mindblowing how the designers then could come up with these things.
Sony makes the l̶e̶n̶s̶e̶s̶ sensors on modern iPhones and, if rumors are to be believed, also the micro OLED displays on a forthcoming new product category.
It's funny. The 1970s Sony Trinitron CRT TVs are what inspired Steve Jobs to make the Apple II case, and here they are working together, at the highest end of technology, many decades later.
Sony only make the sensor, it looks like a bunch of companies manufacture the lenses (sharp, lg infotech, and another company I forgot from my googling 10 seconds ago).
But as the linked thread says, apple designed the lens itself (as it has a patent on it, which is apparently common?)
I am curious what the Samsung lens system looks like given it has real optical zoom
Samsung doesn’t have real optical zoom. Their system works by using a separate 1x, 3x, and 10x zoom camera and software interpolation for in-between levels. The 10x zoom camera has the most interesting optics because its lens stack is too long to fit in the phone, so instead it’s a “periscope” camera in which the lenses are sideways and light hits them through a prism, then goes through another prism to hit the sensor. Here’s a diagram of Apple’s (shorter focal length and hitherto unimplemented) version:
Sony also made the AppleColor 640x480 monitor in 1987 with the release of the Mac II. This was quite likely the most fantastic monitor available in this size in this year. It was rumored to be a regular Sony Computer monitor without an anti-reflective coating so it could be brighter and clearer.
What I have been wondering for years is why this technology hasn't found its way back into regular cameras. AFAICT, regular cameras with vastly bigger apertures, which should be able to take better photos, in fact are generally much worse until you get into huge bulky DSLRs. Why?
> AFAICT, regular cameras with vastly bigger apertures, which should be able to take better photos, in fact are generally much worse until you get into huge bulky DSLRs. Why?
Are you sure about this?
The last point-and-shoot camera I owned I bought around 2010, and other than zoom, my smartphone gives much better results. But then I met someone with a new hobby-level Panasonic point-and-shoot camera and I was flabbergasted at the quality of the photos she gets out of it. I tried taking some pictures with it, and compared to my smartphone they all looked like they belonged in a magazine. Smartphones have a ton of processing power and sensor quality packed into a small package, but I think it's still hard to outperform a half-decent lens and good quality sensor that are much, much larger, and it seems like point-and-shoot cameras have gotten really good.
Do you remember what model your friend's camera was? I have a DMC-ZX60 which I would rate as OK-but-not-great. It seems to me that it should be possible to pack a truly awesome camera into that form factor, considering that smart phone cameras are literally orders of magnitude smaller.
Point-and-shoot cameras are a dying segment. The target market for this segment is taking casual photos of friends and family. These buyers prioritize convenience and price over performance. Smartphones meet these requirements as well as or better than their dedicated counterparts.
I assume the market is extremely bimodal. Either you are satisfied with a phone camera, or you want to go all out and have all the benefits of a DSLR. There are very, very few people who want a stand-alone camera that offers less than a DSLR.
It’s worth saying that there are great quality prosumer compact cameras, but the price point is much higher for this (>$600) than the old school cheap $100 point and shoot used to be.
It’s the bottom end of the market that’s been completely made obsolete, and as the quality gap shrinks there are also fewer people that want to carry / spend money on two devices.
The picture in the tweet only shows rays originating from an object that is close to the center of the field of view. The article mentioned in another comment has better schematics: https://www.pencilofrays.com/lens-design-forms/#mobile
Smartphone cameras usually have a pretty wide field of view. The complex shapes are required in order to compensate for the aberrations caused by the large angles between the optical rays and the lens surfaces.
No, the colors in that schematic are only a visual aid for distinguishing the rays. Each set of colored ray bundles comes from a single object point. The task of the lens is to map them into a single point in the image plane. At the same time, the lens is designed not to split the light by colors, since this would cause chromatic aberrations. This is not shown in this particular schematic.
This weekend by chance I met the lead engineer for the factory producing these. I showed him this article. He humbly requested to extend his own thanks to all engineers working in the industry. Genuine yoda.
I've always been fascinated by Fresnel lenses[0]. Is this a similar concept of squeezing a much bigger lens down into a smaller space, or is it achieving something that would not be possible with a single lens?
It would be really cool to seen an animation of this showing a sweep of the approach angle from one side of the sensor to the other.
What ever happened to holographic lenses? I thought they were supposed to revolutionize lens design. Create a complex lens, take a hologram of it, and you can cheaply print infinite copies of it. I understand there was some light loss, is that why they never took over lens manufacture?
(astronomer here). Some comments below partially answered the question, as modern optical/ir telescopes are based on mirror designs (i.e. they have primary, secondary, and sometimes the tertiary mirror like the Vera Rubin telescope). But the lenses are still heavily used in the camera part of the instrument or in spectrographs. And there is a lot of know-how there, as well as optimization in terms of minimising light losses, aberrations etc. I think the constraints and goals are clearly different from the cameras in iphones, but I'd think the techniques must be pretty similar (although Apple clearly has a ton of money that astronomers don't so it's possible that there are some big innovations there, that I don't know about).
Not an astronomer, but from my limited knowledge telescopes usually use reflection, i.e. mirrors, so they don't need to correct for chromatic aberrations introduced by lenses, and they also don't use things like optical stabilisation. Refractors could possibly be corrected, but AFAIK they aren't widely used in astronomy, and hobby telescopes are such a niche market.
Possibly for refracting telescopes where the light passes through lenses but pretty much all serious telescopes use mirrors to reflect light and therefore couldn't do this. However, I'm sure that you can come up with some wild optics if you are optimizing for size or minimizing distortion.
Maybe the fact that there are many artificial stars in that photo, due to software adding, in postprocessing, new light sources which didn't really exist? I have no idea if this happens, but it is the claim they were making, and it would certainly be worthy of a "c'mon" if true.
> Maybe the fact that there are many artificial stars in that photo due to software adding new light sources which didn't really exist
Huh??
How exactly did you come to the conclusion they just added a bunch of fake stars? Am I missing something? Post processing is a normal part of astrophotography, even with a DSLR.
That sounds like a question for them, not me, I made no such claims. In fact I'd be quite interested in any evidence as well, although I wouldn't be surprised if they were using some kind of ML model in postprocessing which ended up adding some extra stars here and there (more charitably, you could think of it as converting noise into stars, when it was really just noise all along). But I'm certainly no expert on how smartphones process their photos.
> Post processing is a normal part of astrophotography, even with a DSLR.
Yes as someone interested in low light photography I'm well aware of this and it's why I'd be quite curious to see any evidence (assuming it exists) of smartphones doing weird things to their low light photos.
I see zero evidence of an effect like that in the linked photos. I assumed they were talking about artifacting they saw in their own attempt to take photos at night with an iPhone.
Either way, given the content of those links, I don't think their comment makes much sense.
There is something with the hardware as well; there is some green glare even on easy shots. But the fake-lights problem, I think, is software: the software tries to "pretty up" the image and introduces these fake lights.
The problem is an artifact of size constraints and of plastic lenses having far less variation in optical density, which means a lot more work to correctly direct light to avoid various types of distortion.
Long gone are my high-school days of grinding and polishing a 20cm mirror in the cellar, carefully using Foucault knife tests to parabolize it. As an undergrad, using Gaussian formulae when matching lenses in eyepieces. In grad school, writing ray-tracing codes to design multi-element lenses. Then, as a postdoc, using Zernike polynomials to estimate optical errors in the hexagonal mirror segments of the Keck 10 meter telescope.
Today's iPhone optics astonish and impress me: A lens built with over a half-dozen aspherical elements. Coordinated imaging with multiple cameras. Wow!
> Today's iPhone optics astonish and impress me: A lens built with over a half-dozen aspherical elements. Coordinated imaging with multiple cameras. Wow!
Yep. And the same is true of modern "prime" lenses, which appear more like incredibly optimised non-zooming-zooms than the traditional prime recipes we are familiar with from textbooks.
A side note: thank you thank you thank you for The Cuckoo's Egg.
As a Brit I would ordinarily say that the reason I am a tech guy is the ZX Spectrum or the BBC Micro, but I am not sure my life would be the same in a much bigger way without your book.
At 15 or 16 in early 1990 I was sure I wasn't going to university; I was sure I didn't belong there. A friend of my mother's suggested your book to her as something I should read, and I received it as a birthday present.
Over time I have come to understand that this recommendation was extraordinarily shrewd. Not just about the tech aspect, or the internet community aspect, or the thrill of the chase aspect, but because it helped me understand that I was exactly the sort of person who should go to university.
When I am asked to recommend books it's yours I recommend, twinned with John Naughton's A Brief History Of The Future; they are like companion volumes for a particular kind of way to think and feel about technology. Douglas Hofstadter should be so lucky.
Hello! Original author (original tweeter?) here.
This particular design is from the iPhone 7 (or, more precisely, my guess is that it's from the iPhone 7 due to both the date of the patent [1] and due to the elements matching up with marketing images of the iPhone 7), which is 6 years old now, but I think it's broadly still representative of modern smartphone lenses. In the past 5 years or so, advancements in phone cameras have come mostly in better sensors, far better image processing, and adding more cameras, but the basic principles of the compact ultra-aspherical lens design seem to still be in place.
As an example, here [2] is an exploded view of the iPhone 6's lens setup, and here [3] is an exploded view of the more recent iPhone 12's lens setup. The iPhone 12 gained an extra element, but they both use similarly weird ripply elements and you can see the clear lineage between the two phones.
Also, as mentioned in the tweet thread and elsewhere in the comments here, Kats Ikeda has an excellent, incredibly detailed and thorough explainer on mobile phone lens design [4]. I don't actually know a whole lot about the optics field; I'm just dabbling as a hobbyist from a computer graphics perspective. Kats Ikeda's site is a much better resource than my Twitter posts for learning about this stuff.
[1] https://patentimages.storage.googleapis.com/7e/4e/3f/4e88d65...
[2] https://pbs.twimg.com/media/FMyGm6IVkAY77eY?format=jpg&name=...
[3] https://pbs.twimg.com/media/FMyIbszVQAQhMQ_?format=jpg&name=...
[4] https://www.pencilofrays.com/lens-design-forms/#mobile
Thank you for the links. How did you originally find yourself falling down this rabbit hole?
I was working on a CG art project [1] where I wanted to render the final image using a real-world lens with lots of "swirly" bokeh [2], but the renderer I was using didn't have the ability to simulate real-world lenses. So, to finish the project, I had to write my own plugin to achieve this effect. Afterwards, I found the topic interesting, so I started going further down the rabbit hole!
[1] https://pixar-community-production.s3.us-west-1.amazonaws.co...
[2] https://petapixel.com/2020/10/22/this-cheap-projector-lens-c...
Just want to say your project is awesome! And you nailed the swirly bokeh :)
Is everything in that scene CG? That's phenomenal.
Complicated, but simply an incremental evolution of a very old solution. These complicated lens arrangements to counteract chromatic aberration got started in the 18th century. The game is all about getting three distinct wavelengths to focus despite each reacting differently at each lens. The lens in the iPhone looks complex but it isn't any more complicated than others. The most complex I've ever read about was for a satellite which had to maintain focus despite temperature changes. Iirc that employed a stack of over a dozen lenses.
https://en.wikipedia.org/wiki/Achromatic_lens
https://www.dpreview.com/forums/post/61074908 https://photography.tutsplus.com/tutorials/here-is-what-to-l...
https://www.spiedigitallibrary.org/journals/journal-of-appli...
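The 18th-century trick linked above, the achromatic doublet, comes down to two lines of algebra: split the total power between a crown and a flint element so that their dispersions cancel. A sketch (the Abbe numbers below are typical catalog values for BK7 crown and SF2 flint glass):

```python
def achromat_powers(total_power, v_crown, v_flint):
    """Element powers for a thin achromatic doublet in contact.

    Solves phi1 + phi2 = phi  and  phi1/V1 + phi2/V2 = 0, so the
    focal shift between the two design wavelengths cancels out.
    """
    phi1 = total_power * v_crown / (v_crown - v_flint)
    phi2 = -total_power * v_flint / (v_crown - v_flint)
    return phi1, phi2

# A 100 mm doublet from BK7 crown (V ~ 64.2) and SF2 flint (V ~ 33.9):
phi1, phi2 = achromat_powers(1 / 100.0, 64.17, 33.85)
print(f"crown f1 = {1/phi1:.1f} mm, flint f2 = {1/phi2:.1f} mm")
# The crown is a strong positive lens, the flint a weaker negative
# one; together they still add up to f = 100 mm.
```

Every extra correction (a third wavelength, field flattening, coma) adds another constraint and usually another element, which is how these stacks grow to a dozen lenses.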
The craziness in the iPhone design isn't exactly the number of elements, but their weird aspherical shapes. The traditional approaches for designing optics have been developed for spherical lenses because they are much simpler to describe mathematically. Even then, optimizing them requires a lot of numerical computation. At first glance, optimizing these wavy, complex lens shapes seems to require much more involved methods.
I don't think the computational complexity has been a significant problem for at least half a century. Aspheric lenses are described by simple polynomials and ray tracing is just elementary geometry. You only need to analyze a handful of rays to get the relevant parameters of an imaging system. Probably a few thousand floating point operations in total.
The problem is that it is extremely difficult to manufacture surfaces with optical quality that are not spherical. Even today, aspheres tend to be more expensive than spherical lenses and have significantly worse surface quality. This is the price you have to pay if the objective has to fit inside a phone.
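The "elementary geometry" point is easy to illustrate with a paraxial ray trace, where each surface costs only a few multiplies (the plano-convex example lens here is made up for illustration):

```python
def paraxial_trace(surfaces, y=1.0, u=0.0):
    """Trace a paraxial ray through a list of
    (curvature, n_before, n_after, thickness_after) surfaces.
    Refraction: n'u' = n*u - y*(n' - n)*c; transfer: y' = y + t*u'.
    Returns the final ray height and slope."""
    for c, n1, n2, t in surfaces:
        u = (n1 * u - y * (n2 - n1) * c) / n2
        y = y + t * u
    return y, u

# Thin plano-convex lens, n = 1.5, R1 = 50 mm, flat rear surface:
lens = [
    (1 / 50.0, 1.0, 1.5, 0.0),  # curved front surface (zero thickness)
    (0.0,      1.5, 1.0, 0.0),  # flat rear surface
]
y, u = paraxial_trace(lens, y=1.0, u=0.0)
print("focal length:", -y / u, "mm")  # f = R/(n-1) = 100 mm
```

The expensive part of real design is not this trace but doing it for many field angles, wavelengths, and pupil positions inside an optimization loop, and then actually manufacturing the resulting surfaces.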
If only the designers at Apple had access to devices that could perform numerical computations...
You first have to know what to compute before you can compute it. A computer won't tell you that.
The comment of Daniel Darabos points to an interesting resource, explaining how these lenses are designed:
https://www.pencilofrays.com/lens-design-forms/#mobile
>The name “Anastigmat” literally means “non-astigmatic” lens, since it corrects astigmatism and its cousin, field curvature. Which is funny, because astigmatism means “non-stigmatic”, where stigmatism is “an image-formation property of an optical system which focuses a single point source in object space into a single point in image space”.
>I love the word Anastigmat. That means that “Anastigmat” can be translated as a “non-non-stigmatic”. I guess “Stigmat” didn’t really have a good ring to it. Nowadays, almost all lenses have the basic aberrations corrected, so we don’t have the need to call them Anastigmats anymore. Sad.
Boy, that page is Comprehensive on lens design - that's an awesome resource.
Interesting. He doesn't really explain how they're designed though. Must be just automatic global optimisation at this point though. No way a human manually optimised all the ray paths.
Usually some form of gradient descent is combined with a global optimization routine that tries to explore the parameter space. This is nothing new, optics has been designed this way since the early 60s.
Are you sure?
Optics equations are nonlinear (and generally nonconvex). Global optimization for nonconvex functions did not exist in the 60s. Even fast function evaluation (for brute-force approaches) did not exist in the 60s.
Definitely not global (probably NP-hard), but my guess is that it's optimized in some basic sense, probably still using very specific parametrizations of the lenses and such.
At some point we’ll just use metaoptics for lenses that are fully optimized using some general heuristics, but so far I am not sure they (both lenses and heuristics) are ready for prime time :)
Radially symmetrical lenses are extremely simple to calculate, and discontinuities are rare, so any gradient descent algorithm could find this pretty easily. A 3-lens system can probably be optimized with less computation than a single frame of a modern AAA game.
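As a toy sketch of that kind of optimization (this is not how commercial design tools work internally; the glasses, target, and weighting here are invented for illustration), finite-difference gradient descent on a two-thin-lens merit function recovers the textbook achromat power split:

```python
# Toy merit-function optimization: find thin-lens powers p1, p2 (1/mm)
# for a cemented doublet with a target focal length of 100 mm that also
# cancels primary chromatic aberration.  The Abbe numbers V1 (crown)
# and V2 (flint) are illustrative values.

def merit(p1, p2, phi_target=0.01, V1=60.0, V2=36.0):
    power_err = (p1 + p2) - phi_target        # total-power residual
    chroma = V1 * (p1 / V1 + p2 / V2)         # scaled chromatic residual
    return power_err**2 + chroma**2

def grad(f, p1, p2, h=1e-7):
    # central finite differences, much as an early design program might
    g1 = (f(p1 + h, p2) - f(p1 - h, p2)) / (2 * h)
    g2 = (f(p1, p2 + h) - f(p1, p2 - h)) / (2 * h)
    return g1, g2

p1, p2 = 0.01, 0.0                            # crude starting design
for _ in range(20000):
    g1, g2 = grad(merit, p1, p2)
    p1, p2 = p1 - 0.1 * g1, p2 - 0.1 * g2

# converges to the classic achromat split p1 = 0.025, p2 = -0.015
# (i.e. a +40 mm crown element cemented to a -66.7 mm flint element)
```

Real programs optimize dozens of surface parameters against hundreds of traced rays, typically with damped least squares rather than plain gradient descent, but the overall structure -- a merit function, a local optimizer, and some global exploration -- is the same.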
How they manufacture these is what I'd like to know.
Considering 3-lens systems were being used in telescopes over 200 years ago, I'd bet it takes far less computation than a single pixel in a modern game.
https://en.m.wikipedia.org/wiki/Barlow_lens
https://en.wikipedia.org/wiki/Achromatic_lens
Yeah, but we could also recognise trees 200 years ago
Doing it with spherical lenses is high school math. We were talking about highly aspheric lenses.
There's some interesting info from Zemax on manufacture.
https://www.zemax.com/blogs/news/future-of-cell-phone-camera...
Coarse manufacture is fairly simple - you can just press out plastic blanks. The tricks are fine shaping and polishing. You can also use resin:
https://jp.mitsuichemicals.com/en/techno/feature/feature06.h...
“You might have seen the lensmaker’s equation as early as high school, and this is the essence of the performance of the lens.”
We’re either doing high school wrong or this person has seen their fair share of top schools in their lifetime.
In high school it’s introduced when they close the blinds and turn off the lights and you play with the glass blocks and 3 lines of light, if that jogs your memory.
A school with working blinds? Where turning out the lights doesn't result in a riot. Giving students glass and expecting it back without cracks in it.
In my school all of that equipment was broken, lost or stolen years before I got there.
I think the lensmaker's equation is pretty par for the course for most advanced physics classes in high school, no?
Over here in The Netherlands it was also definitely part of our physics course, I believe taught around the age of 14 or 15.
Interesting. This is on the young side internationally. FWIW the Dutch have also been global leaders in optics for most of history (the field really only starts when the Dutch invented the telescope and the microscope).
To add a point: in Japan (also where major lens manufacturers are headquartered), it's taught in high school and only to those who chose physics, but some basics are taught to everyone in junior high school.
I think it's the former, if you are in the US. I also did not learn this formula in high school; my physics class barely went beyond momentum, and I remember only the slightest hint of electricity being discussed.
High schoolers probably learned the thin lens equation.
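For reference, both equations being discussed fit in a couple of lines (thin-lens sign convention, where R2 is negative for a biconvex lens; the numbers in the comment are illustrative):

```python
def lensmaker_focal_length(n, R1, R2):
    # Lensmaker's equation (thin lens): 1/f = (n - 1) * (1/R1 - 1/R2)
    return 1.0 / ((n - 1.0) * (1.0 / R1 - 1.0 / R2))

def image_distance(f, d_object):
    # Thin lens imaging equation: 1/f = 1/d_o + 1/d_i
    return 1.0 / (1.0 / f - 1.0 / d_object)

# A symmetric biconvex lens of glass with n = 1.5 and |R| = 100 mm
# has f = 100 mm; an object 300 mm away then images at 150 mm.
```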
I saw it in A level physics, and it was part of the standard curriculum, so not that rare.
This is from iPhone 7, released 6 years ago. I would like to see an actually modern lens design. Smartphone cameras got so much better in the last 6 years.
The iPhone 12 Pro is insanely better than the iPhone X--especially in low light. I have a bunch of fairly high-end camera equipment that I rarely use outside of some fairly specific circumstances at this point.
Yeah, likewise.
I have high end kit (and some semi-pro lighting) which I love to use, but I now use a secondhand iPhone X, an Apexel teleconverter lens and a cheap Ulanzi grip, and I really enjoy this combination for its immediacy.
The higher end kit is now increasingly used in novel, DIY ways, because carrying a phone has taught me that gear should be used for its strengths.
Cameraphones have gotten to the point where I should probably consider getting some external lens gear. It didn't really make sense to me when there were so many other compromises with respect to image quality, but having some additional options when traveling with just a phone is probably legitimately useful at this point.
The Apexel HD 2X lens has been useful for me. Cheap enough not to stress about, and just enough extra reach to be useful.
I really enjoy the cheap little Ulanzi CapGrip thing, too.
It's definitely not 100% like using a real camera (no half-press for focus, though I am a back-button-focus guy on DSLRs anyway!)
But it adds just enough of a camera-like feel to allow relaxed one-handed grips and slightly more immediate shooting. And it weighs nothing so it's always in a bag or jacket pocket.
Thanks. I'll probably wait until my next upgrade which will probably be a good couple of years. But I can absolutely see pulling the plug on upgrading my other cameras at that point. Which would make even going with a couple higher-end Moment lenses pretty thinkable.
The always with you is a big thing.
Filmic Pro was a total game changer re: compression options and image control. That's definitely when I started using video a ton.
I was really quite interested in film at one point but the overhead in terms of big equipment and film processing cost was just more than I wanted to handle. A smartphone today with a bit of software would have been a game changer.
Would you mind sharing which Apexel lenses you recommend?
I have just the one now. It's the Apexel HD 2X telephoto.
I bought it because I wanted a slightly longer lens for portrait location scouting, and chose this one not because I am certain it is excellent but because I doubt there's enough of a quality gap between the Apexels and the Moment or Rhinoshield lenses to justify the much higher price.
It is really fairly good though, until you get into the corners. But then I happily use vintage lenses on my A7II that are worse in the corners!
What difference there is, is that the Moment lenses use a tiny bayonet, which I think I would prefer to the 17mm thread Apexel, Ulanzi and others use. Screwing the lens on can be fiddly, especially with the semi-open, three-quarter circumference attachment threads you typically see on lens adapter phone cases.
Don't use the clamp attachments. They are hard to align and I think that is a lot of why people find attachment lenses so disappointing.
Get a phone case with a built-in 17mm thread (or bayonet if you pick a bayonet lens type). I think I'm using a Ulanzi case.
The only frustration that remains is that the iPhone's built-in camera app does not let you force the choice of either of the lenses. It will choose for you if you pick 2x. So you might want an app. I really like the ProCamera app.
I'm not sure whether accessory lenses are so useful on some of the three and four camera mobiles, but it works well enough for my iPhone X.
I've had a few clamp-style lens attachments for my iPhone and you have to keep them very clean or they can scratch the screen. Plus the alignment, as you said, is finicky.
Good heavens. A 36X lens from Apexel is all of $64. I love the 21st century.
Yeah 13 Pro here. I sold my DSLR last year. I don't need it any more and I hated carrying it around everywhere.
> I hated carrying it around everywhere.
It really boils down to this. A lot of people have enough pocket space to need no bag most of the time, so having one just for the camera is extra grating.
What surprises me the most is that, on the other side, smartphone photo UI hasn’t progressed much from the early days. A lot of my shots that fail on a smartphone fail because of the controls. The cycle of zooming/framing and then locking focus and/or exposure is still a pain, adjusting exposure is clumsy, and forcing an ISO setting that could result in a bad photo is out of the window. I ended up bringing an old DSLR on random walks when I might want to capture stray cats and don’t care about the extra bulk.
All of these are easier to do with third party apps, but those aren’t available on quick access. And taking photos out of quick access is so slow. A DSLR is ready instantly from power down, we’re happy if the phone instantly unlocks.
I have a fairly large FF DSLR Canon system that I rarely use at this point--though not sure if it's worth selling. (I do use it now and then.) I use my Fujifilm mirrorless setup more but even that almost has to be a longer trip where I plan to take a lot of pictures. iPhone is fine for most purposes.
Yeah, I do a lot of walking and long trips. I had a mid-range DSLR and it was horrible carrying it everywhere and juggling lenses and batteries. The iPhone works perfectly in 90% of cases, and in the last 10% I am happy with the compromise.
Edit: The thing that really killed the DSLR for me was lugging the whole kit up Scafell Pike in the pissing rain and cold in the UK. I got some good photos in the morning but carried a brick for the entire day after the battery dumped because of the cold. I had an iPhone XR back then and that kept me going all day fine. When I got back the whole damn camera despite being weather sealed had condensation in it.
No manufacturer will actually guarantee their weather sealing. It surprised me when I went and researched it. YouTuber James Popsys[0] takes his cameras out in the cold and wet all the time, so whatever he uses is probably a good starting point if you decide to try interchangeable lens cameras again and want one that actually handles moist weather well.
[0] https://www.youtube.com/watch?v=C9VAidHrnY8
OM System (the camera division of Olympus, sold off as a separate company) recently announced their OM-1 camera, which together with some of their expensive lenses got an IP53 rating. Which is still less than many smartphones.
Yeah. But people are pretty locked into their systems.
My recommendation is specific to someone who opted out of systems in favor of an iPhone.
The weather seal may not have been breached. If you put your camera together in a nice warm house, then it would have trapped warm moist air inside it. When you carried it in the cold, the moisture in the air would have condensed.
Best advice with batteries is to keep them in your coat pocket, where they stay reasonably warm.
It’s really that comparable??
They aren't comparable, not really. In the same way that an 8x10" camera and a full-frame camera aren't.
But they are excellent.
Good phone cameras have a value proposition all of their own, and they are utterly changing what we expect from photography.
My mobile has taught me that a lot of what I relied on or worked with in a DSLR or mirrorless is a crutch.
Mobile phone cameras force you, for example, to really think about composition, because you can't simply blur out the bits you don't like (portrait mode still sucks). They force you to think of other ways to isolate subjects, other ways to make use of light and contrast.
I've owned a lot of kit over 20 years or so, and I still own a lot of it -- I'm using a 14-year-old full-frame DSLR and a seven-year-old full-frame mirrorless.
In the last two years, when studio portrait photography has been complicated or impossible, I have used my phone a lot, when out walking by myself. What I thought was just a way to not-totally-give-up photography has turned into a work of its own.
It will anger a lot of photographers who like to whine about how mobile phone cameras can't do X, Y and Z, but here's the truth: if you don't understand what a mobile phone camera can teach you as a photographer, you're probably not really trying.
I shoot professionally (DSLR) and there are situations in my work (automotive) where my iPhone 12 mini just won't work BUT there are also a lot of situations in my work where the iPhone could get a better shot—or, at least, get a good shot more easily.
A solid color workflow, tilt-shift/specialized optics, high resolution, and lighting/strobe control all go to the DSLR system, whereas just about everything else goes to the iPhone.
Honestly it's just a different media. There's no need to think too hard about it. Might as well compare oil pastel to charcoal
Personally I think almost all of the output of phone cameras (except what's in their glossy ads) is missing a certain je ne sais quoi that distinguishes Photography (note the capital P) from an ordinary photo. That and you can pry the amazing bokeh from a high-quality SLR lens from my cold dead hands.
> Honestly it's just a different media. There's no need to think too hard about it. Might as well compare oil pastel to charcoal
You might, but even then you might also consider that learning from one medium can translate even to a very different medium.
But in any case here we're not talking about quite different media. The images are processed and edited in fundamentally the same way. As a result, many photographers tend to perceive the one (mobile phone digital photography) as an inherently inferior subset of the range of the other (large digital sensor photography).
This is because photographers -- particularly in the digital era -- do not ordinarily consider the creative advantages of self-imposed restrictions.
And when deliberate creative restrictions are suggested, the immediate response is to dismiss them or to suggest that they are wasteful (you're losing information at capture), better achieved in post-processing (so you have more "flexibility").
So there is a "precision maximalist" and "gear maximalist" default to most contemporary photography. Use the sharpest, fastest, least quirky lens you can get, shoot at the highest resolution, buy the latest if you can.
All of these things are narrowly valuable in some area of photography, but they get glommed together to make a giant assumption.
The mobile phone presents a convenient, nearly unavoidable way to question many of the impacts of that maximalist position.
One of the things I often want to do is ask photographers "why are your DSLR photos better than your friend's mobile phone photos?" and then poke them in the ribs each time they mention the technical advantages of their kit. But it would generally get quite violent.
> So there is a "precision maximalist" and "gear maximalist" default to most contemporary photography. Use the sharpest, fastest, least quirky lens you can get, shoot at the highest resolution, buy the latest if you can.
I wouldn't conflate "buy the latest you can" with gear maximalism. My own journey with that started with dissatisfaction with the quality of the output I was getting and, as I became a better photographer, running up against the limits of my camera and lens. Then, after understanding that what I was experiencing was indeed the hardware and not a limitation of my skill, I upgraded, and the hardware leapfrogged my abilities. Then I got better and once again started hitting the limits of my hardware. I am now at a point where the hardware and my abilities are good enough that I am as content with the output as a hardcore amateur can reasonably expect to be. My point is that "gear maximalism" is not always just mere materialism.
As for the phone, it's convenient. I can gussy it up, and if the output is sufficiently pig-like I can put lipstick on it till it looks good enough. With the right software I can even output RAW files and theoretically feed them into Lightroom like my normal workflow. But it will never be as good, and there are so many techniques that I simply can't do with it.
"The best camera is the one in your hand" is true and all, but it's flooding the world with shit photos. Good is the enemy of great, after all.
> there are so many techniques that I simply can't do with it.
But that's the point of what I'm getting at, concerning limitations and maximalism.
We've got into a mindset where we seek to push the edges of the kit we have -- we are urged to shoot faster lenses, shallower depth of field, lower and lower light.
But it doesn't get you a higher rate of good photos. A good photo is a good photo regardless of kit, and it's possible to flounder creatively when you have more scope.
As a photographer I think we should be able to creatively progress with whatever photographic tool we have in our hands. See a creative angle, refine it, improve on it, etc. Finding a way to be creative within a set of limitations is all that matters, and therefore starting with a serious set of restrictions is good practice.
FWIW I am a fairly good portraitist, a long-standing small-venue gig photographer, and I'm pretty technically-minded. But thinking back to the first DSLR I owned (a secondhand Fuji S2 Pro), I can think of precisely one photographic limitation I'd run into regularly even now: its high ISO performance was not that good. Otherwise I'd have that camera back to shoot with, even now, especially in the studio. Though I might rage at the maximum card capacity.
I'm regularly surprised to hear people talking of hitting the photographic limits of the kit they use; it suggests to me very specific niches or unusually rubbish kit lenses (I've owned one of those, and even then I got some results I liked).
Because I can't think of a popular DSLR made after about 2005 that is bad; certainly not after 2008. I can think of only a couple of entry-level DSLR or mirrorless cameras with limitations that would trouble me (the lack of a depth of field preview being the main one, aside from glasses-unfriendly viewfinders).
I still shoot with digital kit I bought 13 years ago, and got some of my best results in the last few years with the oldest of those cameras that is now so cheap I'm better off not selling it. I've only bought secondhand, old-model equipment since 2011. I probably won't ever buy new again -- except maybe a Cambo Actus.
The same is true of lenses. I'm currently doing a bit of work with an early 1980s Vivitar wide-to-standard zoom that cost me £30. Crap in the corners, gorgeous otherwise. I don't plan to buy a new lens ever again.
I'd always recommend going back to your former kit (buy it again, cheap on eBay) and testing those beliefs about limits in retrospect.
I still have that older equipment. (In fact, I even bought the updated model and gave the original to a friend. She knows full well the kinds of pictures that setup has taken, and it motivates her to improve.) It's lighter, it's what I built my reputation on, and frankly the (later) investment in lenses and such for that platform wasn't cheap. In fact I'm very proud of how much attention I've gotten (which isn't that much, by influencer standards) with just a Costco kit lens and body.
> But it doesn't get you a higher rate of good photos.
Beg to differ here, if one knows what they're doing.
I do a lot of low-light, event photography, among other stuff, and the fancy expensive kit is just flat-out way better. I've shot thousands of these photos with both by now, and the extra technical flexibility is totally worth it. Hell, the extra sensor size and autofocus alone took me to a whole new level. Do you know how much it sucks trying to do low-light with manual focus at a DJ performance because your autofocus can't keep up? How much time do you want to spend twiddling noise reduction controls because the entry-level kit's ISO 1600 looks like crap? How much shadow detail are you willing to sacrifice? It's such a pain in the ass.
And through all that I like to take a few shots with my phone for the instant social media gratification... and those shots are never anywhere near as good. The color is flatter, depth of field is blah, everything is just...meh. They look good on instagram, but that's about it.
> I can think of precisely one photographic limitation I'd run into regularly even now: its high ISO performance was not that good. Otherwise I'd have that camera back to shoot with, even now, especially in the studio. Though I might rage at the maximum card capacity.
Growing up I messed around with 35mm and even had an SLR camera for a short while. But I got serious with my first DSLR (a Nikon D3100). Maybe I was wrong in expecting something comparable to my old 35mm stuff, but the dynamic range sucked, high ISO sucked (1600 is grainy AF), autofocus with like 12 points and only one cross-type sensor sucked, and no matter how well I focused the image (even on a nice 35mm prime) I couldn't get the razor sharpness I was after. There was something about how that sensor processed certain light transitions that had a really nice quality, and the camera took a lot of hits, but ultimately I felt like I was hitting a plateau.
One example: I like to do long-exposure shots of firedancers doing their thing. Depending on the fuel type the dancer uses there could be a lot of dynamic range to deal with, and most of the other photographers in this space seem to struggle in coping with it: either the dancer is exposed nicely but the flame is blown out, or the flame is exposed nicely but the dancer is super dark (losing a lot of color detail if you try and fix it in post). Or they just try to squash both together and the picture looks like a bad HDR merge. Here is another place where the sensor and lens matters a ton and both my phone and my DX kit struggle. Can I do and have I done these shots on that kit? Sure. But the nice camera, with its amazing sensor, and the amazing lenses I have for it, are able to capture way more detail, and as a result I get way better results straight-away and spend way less time in post trying to fix things.
Another way to put this, more pithily:
To a first approximation, every great photo -- even every great live music photo -- has been taken with equipment less capable than that D3100.
Even more pithily:
Ansel Adams did his best work on equipment far less capable than a D3100.
The point is...?
> I do a lot of low-light, event photography, among other stuff, and the fancy expensive kit is just flat-out way better. I've shot thousands of these photos with both by now, and the extra technical flexibility is totally worth it.
Mm, yep. I do gig photography in small, dark venues -- I've shot probably 125 thousand photos at high ISOs, with six different cameras (Nikon DSLRs for the bulk of it, and Sony mirrorless experiments alongside in a parallel track).
> Hell, the extra sensor size and autofocus alone took me to a whole new level.
The difference in light gathering between an APS-C and full-frame camera of the same technological era is usually about one stop. If you make a several-year jump at the same time, that can be noticeable, I suppose.
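That one-stop figure can be sanity-checked from sensor areas alone (nominal sensor dimensions, comparing total light gathered at equal f-number and shutter speed):

```python
import math

full_frame = 36.0 * 24.0   # mm^2, nominal 35mm "full frame" sensor
aps_c = 23.5 * 15.6        # mm^2, nominal Nikon DX sensor

# Total light gathered scales with sensor area, so the format gap is:
stops = math.log2(full_frame / aps_c)   # a bit over one stop
```

Per-pixel noise also depends on pixel size and sensor generation, which is why a several-year technology jump can matter more than the format change itself.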
> Do you know how much it sucks trying to do low-light with manual focus at a DJ performance because your autofocus can't keep up?
At gigs, I do quite a lot of manual focus (and single-point, back-button focus with manual adjustment). I have no particular issue with it and it helps solve problems autofocus simply cannot solve.
Especially with a Nikon, because you get focus point confirmation even manually.
You do a lot more thinking and observation about equivalent distance. For example if you cannot focus on someone's face because they are behind a brightly lit microphone that autofocus will always prefer: How far are they behind it? If you focus further down the microphone stand or on a belt buckle, lock and recompose on their face, without changing your position, how much does that correct it? Or can you judge how far in inches and then literally move yourself or your camera forward?
One good cross-type sensor is all you absolutely need (stay away from old-style Phase One autofocus kit if this bothers you!)
Admittedly back button focus makes that more useful. And focus peaking on a mirrorless camera is the bee's knees.
> How much time do you want to spend twiddling noise reduction controls because the entry-level kit's ISO 1600 looks like crap? How much shadow detail are you willing to sacrifice? It's such a pain in the ass.
I said above that the only thing that really forced me to upgrade from my S2 Pro was low light performance. But again, I think if I was to go back to using that camera, I would be getting results I wanted a whole stop, maybe a stop and a half faster than I did at the time, through better subject selection, better understanding of its limitations and fifteen years more understanding of exposure.
If you're talking about the D3100, I'd shoot that up to ISO 3200, I reckon. Its low light ISO performance is a little better than the D300 which I used for years at gigs with enormous success. And its dynamic range is not all that noticeably worse. As long as you are careful with your exposure, that's a useful little camera.
DxO Optics Pro/PhotoLab helps of course.
Newer kit does a lot of stuff better, certainly, and it can save you time. And yeah, low light photography is about the only place people run up against the limitations of older kit. But IMO nothing since 2008 has any kind of problem with low light, and in many cases the newer kit is tripped up by the exact same difficult situations.
I dunno dude, ISO 3200 is nearly unusable on the D3100 for anything you want a large picture of; the noise was just too much for me, especially when I didn't want to shoot fully stopped up. The noise has a look of its own, and it compressed horribly and would lose a ton of color detail. Maybe with the right software, but I like my workflow as is. The newer (relatively speaking) Nikon sensors contribute some additional DR as well; a comparison I saw in a performance review a while back put the gap closer to 2 or 3 stops overall.
Maybe it works for your style, but it was cramping mine. There is enough suffering in art, why add to it?
> The difference in light gathering between an APS-C and full-frame camera of the same technological era is usually about one stop. If you make a several-year jump at the same time, that can be noticeable, I suppose.
That's exactly what I am talking about, a sensor and format upgrade after several years. I have a DX body of the same generation and its sensor performance is very close to its FX brother, and they're both considerably better than the D3100.
A lot of people (including the comment above) seem to think that DSLR = “good camera” and that all real photos are taken on 35mm-format SLRs, but none of that is true.
SLR is a specialized format designed for fast autofocus used by news/sports photographers and 35mm “full frame” is not the largest film size.
Rangefinder and mirrorless formats actually have a much easier time with lens design and don’t need such gigantic lenses, which might make them less impressive looking than an SLR, but I have a cheap Mamiya medium format rangefinder that takes better street photos than they do. And if you’re doing product or landscape photography, you want a large format digital back like Phase One or a view camera, which really are the top of the line.
> SLR is a specialized format designed for fast autofocus used by news/sports photographers
Designed for, no. Optimised for and evolved for, yes! :-)
When the SLR emerged it was definitely slower than the cameras press photographers were using. Even the later instant-return-mirror models, probably. Press photographers didn't switch to SLRs until the Nikon F arrived fifteen years later, in the first few years of the Laos and Vietnam wars. Press shooters mostly hadn't been shooting 35mm for more than a decade before that.
The rest: I'm not convinced at this point, and I think high-end mirrorless really blurs most of those boundaries pretty conclusively. We're definitely stepping towards a formatless world in the sense of distinct boundaries, with computational photography being a big part of it.
Which is precisely why I'd urge really any digital photographer to spend as much time learning to get the best out of their mobile phone as possible. It is one point on the continuum; small sensors but a lot of computation.
> My mobile has taught me that a lot of what I relied on or worked with in a DSLR or mirrorless is a crutch.
> Mobile phone cameras force you, for example, to really think about composition, because you can't simply blur out the bits you don't like (portrait mode still sucks). They force you to think of other ways to isolate subjects, other ways to make use of light and contrast.
I would argue that Bokeh is an incredibly important tool that is part of composition and saying it is just there to "blur out the bits you don't like" seems weird to me.
> I would argue that Bokeh is an incredibly important tool that is part of composition and saying it is just there to "blur out the bits you don't like" seems weird to me.
And yet that is exactly how many photographers use depth of field. Especially amateur portraitists; the shallowest depth of field they can pull off, with the eyes sharp and the least "distracting" background.
(Ever notice the fundamental difference between commercial editorial photography and amateur portrait photography? It is this. Amateur portraitists use blurry backgrounds as a tool to escape complexity.)
FWIW, 25 years ago we would not have had this conversation (notwithstanding the non-existence of HN), because this idea of out-of-focus rendering as an inherently important quality of a specific lens almost never came up. (And we didn't have the word until Mike Johnston introduced it to the English language.)
Bokeh obsession (as distinct from what used to be called "choosing the right depth of field for your composition") is a very modern, super-fast-aperture, gear-maximalist thing. The mobile phone bursts this very contemporary bubble, if you'll excuse the pun.
As an amateur I like to use the depth of field to bring a 3d quality to my shots, sort of separate the layers a bit. Usually stopped down a bit, not wide open. And this is currently the biggest difference I can notice between photos produced with my X-T30 and a Xiaomi mid-tier phone: the pictures don't look flat, sometimes it even feels like looking through a window into the scene. The other two differences are field of view and resolution, since my phone is equipped with a wide-angle lens. TBH, when I shoot the wide end of my kit lens on the X-T30 and don't pixel peep, it's hard to tell apart the JPEGs from my camera and my phone.
> As an amateur I like to use the depth of field to bring a 3d quality to my shots, sort of separate the layers a bit. Usually stopped down a bit, not wide open.
To be clear what I mean is that really the "shot at f/1.4 wide open" portrait thing is an amateur phenomenon. It is just not a big part of commercial portraiture at all, outdoors or in.
> TBH, when I shoot the wide end of my kit lens on the X-T30 and don't pixel peep, it's hard to tell apart the JPEGs from my camera and my phone.
Right. This is what I find -- especially with black and white images (my iPhone seems to produce images that work well with, say, a Pan F thing).
But this depth-of-field stuff is what I mean about the way mobile phones force you to think about subject isolation, all of the time; even on the telephoto lens, except with subjects less than a couple of feet away.
My experience is that people who have never used a DSLR do not have as much trouble getting phone photos they like that are actually good.
Partly because they haven't had either the scope or crutch it offers; they are used to seeing and instinctively composing without it.
Also because DSLR users spend most of their time composing wide open, not at their target aperture, which ultimately warps our perception and steers us towards shooting images more like the full-time view we get from the viewfinder.
There is an art to subject isolation in composition with constant front-to-back sharpness at all times, and a mobile phone will force you to think about it.
It depends on what you're doing and what the photos are for. If you're shooting events, sports, wildlife, or want to do shots with specialty--like ultra-wideangle--lenses, precise aperture control, large blowups, no. But phones can now handle a huge amount of what most people--even relatively serious photographers--need day to day or on a random hike/city visit/etc. If you're shooting with a DSLR and whatever kit lens it came with on A, you're probably fine using a phone.
Besides telephoto and real macro, the results of the iPhone 12 made the idea of gear renewal almost absurd for me; it’s just incredible, and I’m not talking about the 12 Pro.
There are trade-offs, but none of them justify carrying heavy gear around most of the time.
edit: it’s a hobby, and I just spent a lot of time trying to take pretty pictures and searching for the proper cost tradeoff to do them
Right. Gear churn just seems peculiar.
I think this is also why we see lots of people who own high-end full-frame gear etc. now adapting old lenses, building extraordinary DIY rigs like the "digital camera obscura" trend of the moment, making entirely custom lenses with surplus optics and 3D printers, etc.
It's the same reason as for why there's so much interest in film.
These things have different strengths. I think mobile phones have liberated photography in a way unparalleled since the time when 35mm film liberated photography.
>Gear churn just seems peculiar.
There was a period when DSLRs were improving pretty rapidly and it made sense to move up on a pretty regular cadence because the new stuff was so much better. (And we've seen this with phones more recently.)
There's definitely a retro aspect to film and vinyl. Personally that's not for me having lived with the limitations of both. But I spent years working with custom film developing chemistry and the like.
I still think of myself as someone who shoots a little bit of film, though, like other things I still think of myself as doing, I haven't since the pandemic.
But I think I view film photography, darkroom work, and mobile phones in basically the same way.
They are just some photography tools among many, and my photography education would be incomplete without them.
I was photo editor of my undergrad newspaper, editor of high school yearbook, and made beer money with photography in various ways. (I was also de facto photo editor of a newspaper in grad school among other things.)
I spent a lot of time on it and got a lot of enjoyment out of it. I also probably got a bit burned out, and wasn't really interested once I no longer had easy access to a good darkroom. But yeah, happy I did it, zero interest in doing it again. Digital pretty much rekindled that interest.
(Also could really not get into video in those days because overhead was just too much. Don't have obvious entry point into creative video these days but at least casual is easier.)
No, but earlier many overshot what they needed because smartphones were so obviously inferior, even for social media. FF and even APS-C are still clearly better, but what matters here is that modern smartphones are reaching the “good enough” bar and no longer look bad.
Absolutely for amateur and enthusiast level usage. 90% of a decent photograph isn't the camera but the photographer and the post processing.
They say the best camera is the one you have on you …
Low light performance is more about the sensor than the lenses though, isn't it?
Sensors these days are phenomenal.
Last time I lugged a DSLR around all it did was give me a sore neck.
Lenses can have a big impact on low-light sensitivity. No material is 100% transparent, and camera lenses have many elements for light to pass through before it reaches the sensor. The more glass between the subject and the sensor, the less light is captured.
That's why, if you're super concerned with low light sensitivity, you go with fixed-focal length prime lenses. Not having to support zoom means far fewer lens elements through which to pass light.
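To put rough numbers on the element-count effect described above (the per-surface transmittances here are assumptions for illustration, not measured values for any real lens):

```python
# Each lens element adds two air-glass surfaces, and each surface reflects
# away a small fraction of the light; the losses compound geometrically.
def transmission(elements, per_surface):
    return per_surface ** (2 * elements)  # two surfaces per element

for elements in (4, 7, 15):
    coated = transmission(elements, 0.995)    # good multi-layer AR coating
    uncoated = transmission(elements, 0.96)   # bare glass, ~4% loss per surface
    print(f"{elements:2d} elements: {coated:.1%} coated vs {uncoated:.1%} uncoated")
```

With modern anti-reflective coatings the per-surface loss is small, which is part of why the replies below dispute that element count is the primary reason primes are faster.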
That's not the primary reason, which is that they don't make f1.2 zooms.
Think about why there are no f1.2 zooms
Theoretically there could be an f/1.2 zoom; the issue is that the size and expense to manufacture it would ruin any possible commercial viability.
It's more the case that the wider the aperture, the more the thing needs correcting for aberrations. A fixed focal length prime lens is much easier to correct than a zoom lens, because all the aberrations can be optimised down in the design. For a zoom lens, the aberrations need to be controlled while parts of the lens are moving around in weird ways.
I have a 18-50mm F/2 zoom lens. It's a really nice piece of kit, but I noticed that at the extreme wide end of the zoom range, it has quite catastrophic coma. Zoom lenses are hard.
Yes, that is exactly why. There is a bit of an "unobtanium" problem, in that materials science has been stuck on this problem for quite a long time. Barring a revolutionary new take on lens materials, this is about as good as it gets across what people are actually willing to pay for.
Not for the reason cited by parent. Hence my comment.
I suspect that's mostly down to sensor design (big pixels, reflective layers behind the sensor so the light goes through twice, etc) and software.
> reflective layers behind the sensor
Reminds me of this surprising little tidbit from E. T. Jaynes' 1991 paper "Theory of Radar Target Discrimination" (https://bayes.wustl.edu/etj/articles/craddis.pdf)
One might then think naively that we should take equal care to match the receiver to its transmission line so that all the energy captured by the receiving antenna is actually delivered to the receiver. However, this is not the case for a good receiver.
[...]
Quite generally, in order to deliver the maximum signal/noise ratio at its output, an ideal receiver must reflect all the energy incident up on it. It is only in the limit of an infinitely bad receiver, in which all the output noise is generated internally, that matched input impedance becomes the condition for maximum signal/noise ratio at the output. Actual receivers are somewhere between ideal and infinitely bad, and so they perform best when partially matched, so that a part of the incident energy is reflected and radiated back out the receiver antenna. In fact, an ideal receiver does not run on energy at all, but reflects back all the energy incident on it!
[...]
This fact surprises many people on first hearing; but we note that it is so general that it remains true in quantum theory, at optical frequencies where hf >> kT and the Nyquist noise formula no longer holds. For initiation of a photo chemical reaction it is not necessary that the light energy be absorbed. For example, it might be thought that the eyes of animals adapted to seeing in the dark would have pupils that act as perfect black bodies, absorbing all the incident light energy. On the contrary, it is a familiar fact that the animals with best night vision have eyes that reflect the incident light strongly, looking like search-lights in the dark.
I don't think there are any really big changes to the lens design recently... All the effort has gone into multi-camera setups, and different types of lens for macro, wide angle, telephoto, etc.
To my understanding, there is still only a single set of moving lenses in an iPhone 13 lens.
Correction: It was released 2016, that's 6 years ago.
The point still stands though. But 6 instead of 8 years.
And mine just got updated to iOS 15.
Meanwhile my Pixel 3 XL is three years old and is no longer receiving updates.
That’s what I tell people when they ask why I use iPhones. And then they claim that it is not so.
My S21 has the "scumbag zoom" or creeper zoom. The amount of zoom in something as thin as it is is mind-blowing: 10x zoom with great pictures, handheld, with almost no shake. Then you have the top GoPros shooting 5K with gimbal-like stabilization. Cameras are getting crazy.
If I remember right, newer Samsungs have their stack of telephoto optics pointing toward the top of the phone, parallel to the display. Then use a periscope style prism to look out and forward. That way you aren't limited by the device thickness (although I guess now the diameter of the lenses is constrained)
Yeah -- this is typically called "folded optics".
There have been experiments with this over the years. Minolta coined the phrase for a 3x zoom in the DiMAGE X series, starting twenty years ago, and as I recall they were the first to use this technology in a consumer camera:
https://www.dpreview.com/reviews/minoltadimagex
Yining Karl Li has mentioned in previous posts an intention to write a blog post about modern lens optics, which should cover multiple things.
Hopefully a general overview of the hundreds of lenses they've found/made models for, alongside how modern optics fail to be accurately modeled by the polynomial model of optics...
Polynomial models are generally usable for most applications. If you need lots of accuracy, such as for long-range stereo, they only sort of work, and only for traditional, non-wide lenses.
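For context, the "polynomial model" here usually refers to the Brown-Conrady-style radial distortion polynomial used in camera calibration. A minimal sketch with invented coefficients:

```python
# Radial distortion: image points are scaled by a polynomial in r^2.
# Low-order fits like this break down for fisheye and other very wide lenses.
def distort(x, y, k1, k2):
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Negative k1 models barrel distortion: points are pulled toward the centre.
print(distort(0.5, 0.0, k1=-0.2, k2=0.05))
```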
I recently switched from a OnePlus 3 to an iPhone 12 mini, and although the pictures are really much better, one thing is worse: the lens glare. There is often a glare and sometimes a bright green dot. Is that a consequence of these many elements? Less spherical and chromatic aberration, but more glare and other internal reflections?
Maybe dumb to suggest, but when this happens usually the lens is dirty / a bit greasy, at least from my experience with iPhone cameras.
I’ll degrease the lenses now, perhaps this was quite a brilliant suggestion :)
Not only is this likely, I intentionally leave my phone camera lenses smudged sometimes because it produces an unpredictable dreamy effect.
That's an old Hollywood special effect: https://tvtropes.org/pmwiki/pmwiki.php/Main/GaussianGirl
> When filmed, the Gaussian Girl is shot through a soft-focus filter, a piece of translucent plastic (or very sheer silk, in the days of classic film), or a quick smear of Vaseline, depending on the director's and/or cinematographer's preference. This surrounds her with a softly glowing aura, and smooths out any unappealing pores or lines on her face. The result makes her look nothing short of ethereal.
From what I've learned from 3D software tutorials (trying to simulate a glare), this glare comes from the lens elements, and each element adds another ring to the glare? Not sure.
The OnePlus 3 had a sapphire camera lens, Apple does not. Maybe that's why the flare.
From reading some of the comments it sounds like the elements for these lenses are plastic?
Is there a downside to using plastic lens elements compared to glass, in terms of image quality?
If there isn't a downside, could lenses for SLRs/mirrorless cameras also use such elements?
Edit: It sounds like aspheric lens elements might be made from plastic in SLR lenses. But also I found that plastic might not be used for outer lens elements as it scratches easily
I think the main reason to use plastic would be the potentially higher refractive index and better strength properties, which are also why plastic lenses are used in spectacles (some glasses can have higher refractive indexes but still require heavier thicker lenses to avoid the risk of shattering). I would assume that glass lenses would be more likely to shatter if the phone were dropped. But it may really be that the plastic lenses can be manufactured to a higher quality for a reasonable cost.
Plastic typically has much worse reflection and losses than AR coated glass (at least for narrowband lenses - I'm extrapolating to photo lenses).
In my experience: I can't speak too much about commercial lenses for DSLR or even SLR camera systems, but I can speak to some degree about payloads used for aerospace and defense applications.
Designs of aspheres are typically created via Zernike polynomials. If you have access to an optical design program such as Code V, Zemax, OSLO, ASAP, HEXAGON (Raytheon), or Quadoa (or equivalent), one can spit out something called a SAG table, which takes the polynomial description into account and allows for a much easier time when it comes time to produce either the mold or the actual aspheric curved surface that's going to be machined.
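As a rough illustration of what a sag table encodes, here is the common even-asphere sag form evaluated over the aperture. The curvature, conic constant, and polynomial coefficients below are invented for illustration; real exports are much denser and carry tolerances.

```python
import math

# z(r) = c*r^2 / (1 + sqrt(1 - (1+k)*c^2*r^2)) + A4*r^4 + A6*r^6 + A8*r^8
def sag(r, c, k, coeffs):
    conic = c * r * r / (1.0 + math.sqrt(1.0 - (1.0 + k) * c * c * r * r))
    return conic + sum(a * r ** (2 * i) for i, a in enumerate(coeffs, start=2))

c, k = 1 / 1.8, -1.2            # curvature (1/mm) and conic constant
coeffs = [1e-3, -4e-5, 6e-7]    # A4, A6, A8 polynomial terms
for r in (0.0, 0.2, 0.4, 0.6, 0.8):   # radial positions in mm
    print(f"r = {r:.1f} mm -> sag = {sag(r, c, k, coeffs):+.6f} mm")
```

A table like this (surface height vs radial position) is what the mold maker or machinist actually consumes.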
Yes, there is a downside to using plastic, namely chromatic aberration due to lack of index matching. Other issues are limitations on the bandwidth of transmitted light. Furthermore, most "plastics" used for cell phone lenses have historically been glass-like plastics such as PMMA (polymethyl methacrylate) or Zeonex. Most of these materials, however, are extremely sensitive in the UV. Plastic lenses also don't do well with wavefront correction, or even with just shaping light. To make matters worse, coating plastic is usually problematic, as it doesn't tolerate the higher-temperature coating processes (i.e. PECVD or other types of enhanced plasma coatings) that tend to aid in not only anti-reflection but also color correction and glare reduction.
Using glass, and by glass I mean a crystalline glass, perhaps something closer to a high-grade fused silica or, if budget permits, sapphire or some type of chalcogenide, one gains better modeling control of the index of refraction (Abbe number, glass melt, etc.) and also of overall optical sensitivities and STOP (structural, thermal and optical) performance parameters. Something else to consider is that the coefficient of thermal expansion between glass and metal can more easily be optimized. But all this comes at a cost.
The reason for choosing plastic over glass is typically procurement and manufacturing cost. Plastic can have molds machined with tolerances held rather tightly; because of this, it becomes much easier to pop out lenses. As of late, holographic replication and lithographic roll-to-roll processing have also made plastic optics a more cost-effective option. This is a form of additive manufacturing and is probably where things are headed.
Manufacturing "glass optics", especially aspherical ones, depends on first ensuring the substrate is pure and that the glass melts are consistent. In most cases glass must be thermally cycled, typically annealed, to ensure and maintain rigidity. Rough machining is then performed, essentially hogging out the material, in some cases yielding only 60 to 70% usable material. Finally, some type of computer-controlled polishing, MRF (magnetorheological finishing), or single-point diamond turning of the crystalline glass is performed. The final finishing steps are extremely costly: a combination of advanced metrology (i.e. MTF, interferometry, point-cloud analysis, non-contact surface analysis, various forms of spectroscopy) has to be performed to ensure the surfaces are optimized for wavefront performance, by reducing or in most cases eliminating mid-spatial frequencies in the form of micron- to nanometer-level peaks and valleys, all while not changing the size and other physical dimensions and preserving the most sensitive clear aperture (i.e. the usable, functional area of the lens). It can be quite a time-consuming process.
TLDR; Glass is better than Plastic, but is more costly to manufacture at scale, unless volume and market acceptance can drive the cost down.
Perhaps this technology could be useful for breakthrough starshot: https://en.m.wikipedia.org/wiki/Breakthrough_Starshot
They are still rotationally symmetric, at least.
what are the applications for non rotationally symmetric lenses?
Movies are often shot with anamorphic lenses which are lenses with a sort of vertical cylinder shape. This lets wider aspect ratios be shot using the full frame of a more square aspect film or sensor.
https://www.bhphotovideo.com/explora/amp/video/buying-guide/...
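The arithmetic behind the anamorphic squeeze is simple; the sensor shapes and squeeze factors below are just illustrative:

```python
# An anamorphic lens compresses the image horizontally by a fixed "squeeze"
# factor; stretching it back out in post yields a wider aspect ratio than
# the sensor itself has.
def desqueezed_aspect(sensor_w, sensor_h, squeeze):
    return (sensor_w * squeeze) / sensor_h

print(f"{desqueezed_aspect(4, 3, 2.0):.2f}:1")    # 4:3 sensor, 2x squeeze -> 2.67:1
print(f"{desqueezed_aspect(16, 9, 1.33):.2f}:1")  # 16:9 sensor, 1.33x squeeze -> 2.36:1
```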
Some people have rotational asymmetry in their eyes, and this is compensated for by eyeglasses with rotational asymmetry.
https://en.wikipedia.org/wiki/Astigmatism
LIDAR and Laser beam shaping applications.
Actually, I'm much more amazed by what lens designs for cameras look like. Sure, I can appreciate how they fit these designs into a tiny amount of space, but they are also not constrained in shape by the same constraints as glass lenses.
Even more amazing is how they designed very complex lenses in the 50's and 60's essentially by hand. And those designs have stood the test of time and have barely changed. Compared to what modern lens designers have available to them, I just find it mindblowing how the designers then could come up with these things.
https://nitter.net/yiningkarlli/status/1498069538264399872
Thank you!
Sony makes the l̶e̶n̶s̶e̶s̶ sensors on modern iPhones and, if rumors are to be believed, also the micro OLED displays on a forthcoming new product category.
It's funny. The 1970s Sony Trinitron CRT TVs are what inspired Steve Jobs to make the Apple II case, and here they are working together, at the highest end of technology, many decades later.
Sony only make the sensor; it looks like a bunch of companies manufacture the lenses (Sharp, LG Infotech, and another company I forgot from my googling 10 seconds ago).
But as the linked thread says, Apple designed the lens itself (it has a patent on it, which is apparently common?)
I am curious what the Samsung lens system looks like given it has real optical zoom
Samsung doesn’t have real optical zoom. Their system works by using a separate 1x, 3x, and 10x zoom camera and software interpolation for in-between levels. The 10x zoom camera has the most interesting optics because its lens stack is too long to fit in the phone, so instead it’s a “periscope” camera in which the lenses are sideways and light hits them through a prism, then goes through another prism to hit the sensor. Here’s a diagram of Apple’s (shorter focal length and hitherto unimplemented) version:
https://www.dpreview.com/files/p/articles/8346943838/iPhone_...
The periscope lens has variable optical zoom, doesn't it?
No, it's all digital.
Sony also made the AppleColor 640x480 monitor in 1987 with the release of the Mac II. This was quite likely the most fantastic monitor available in this size in this year. It was rumored to be a regular Sony Computer monitor without an anti-reflective coating so it could be brighter and clearer.
I think Sony make the sensors only.
What I have been wondering for years is why this technology hasn't found its way back into regular cameras. AFAICT, regular cameras with vastly bigger apertures, which should be able to take better photos, in fact are generally much worse until you get into huge bulky DSLRs. Why?
> AFAICT, regular cameras with vastly bigger apertures, which should be able to take better photos, in fact are generally much worse until you get into huge bulky DSLRs. Why?
Are you sure about this?
The last point-and-shoot camera I owned I bought around 2010, and other than zoom, my smartphone gives much better results. But then I met someone with a new hobby-level Panasonic point-and-shoot camera and I was flabbergasted at the quality of the photos she gets out of it. I tried taking some pictures with it, and compared to my smartphone they all looked like they belonged in a magazine. Smartphones have a ton of processing power and sensor quality packed into a small package, but I think it's still hard to outperform a half-decent lens and good-quality sensor that are much, much larger, and it seems like point-and-shoot cameras have gotten really good.
Do you remember what model your friend's camera was? I have a DMC-ZX60 which I would rate as OK-but-not-great. It seems to me that it should be possible to pack a truly awesome camera into that form factor, considering that smart phone cameras are literally orders of magnitude smaller.
It's a Lumix DC-GF9.
Thanks.
Point-and-shoot cameras are a dying segment. The target market for this segment is taking casual photos of friends and family. These buyers prioritize convenience and price over performance. Smartphones meet these requirements as good or better than their dedicated counterparts.
I assume the market is extremely bimodal. Either you are satisfied with a phone camera, or you want to go all out and have all the benefits of a DSLR. There are very, very few people who want a stand-alone camera that offers less than a DSLR.
It’s worth saying that there are great quality prosumer compact cameras, but the price point is much higher for this (>$600) than the old school cheap $100 point and shoot used to be.
It’s the bottom end of the market that’s been completely made obsolete, and as the quality gap shrinks there are also fewer people who want to carry / spend money on two devices.
It looks way less crazy when you "focus" on just the payload areas of the optic pipeline, trimming out the "lunettic fringes".
Here is what I mean:
https://i.imgur.com/ktbWf0X.png
Why are the lenses so complex outside of the area that matters? Structural integrity / resistance to dropping?
The picture in the tweet only shows rays originating from an object that is close to the center of the field of view. The article mentioned in another comment has better schematics: https://www.pencilofrays.com/lens-design-forms/#mobile
Smartphone cameras usually have a pretty wide field of view. The complex shapes are required in order to compensate for the aberrations caused by the large angles between the optical rays and the lens surfaces.
So the lenses take in incoming white light and split it by color? And there's one sensor for each color or something?
No, the colors in that schematic are only a visual aid for distinguishing the rays. Each set of colored ray bundles comes from a single object point. The task of the lens is to map them into a single point in the image plane. At the same time, the lens is designed not to split the light by colors, since this would cause chromatic aberrations. This is not shown in this particular schematic.
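The classic trick for keeping the colors focused together is pairing a low-dispersion crown element with a high-dispersion flint. A textbook thin-lens sketch; the Abbe numbers below are illustrative (roughly BK7-like and F2-like glasses):

```python
# Achromatic doublet: choose element powers so the primary chromatic focal
# shifts cancel: phi1/V1 + phi2/V2 = 0, with phi1 + phi2 = phi (total power).
def achromat_split(f_total, v_crown, v_flint):
    phi = 1.0 / f_total
    phi1 = phi * v_crown / (v_crown - v_flint)   # positive crown element
    phi2 = -phi * v_flint / (v_crown - v_flint)  # negative flint element
    return 1.0 / phi1, 1.0 / phi2

f1, f2 = achromat_split(100.0, 60.0, 36.0)
print(f"crown f1 = {f1:.1f} mm, flint f2 = {f2:.1f} mm")  # 40.0 mm and -66.7 mm
```

The same dispersion-cancelling game, played with more elements and aspheric surfaces, is what the multi-element phone lens stacks are doing.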
I think they’re still pretty aspherical, right?
This weekend by chance I met the lead engineer for the factory producing these. I showed him this article. He humbly requested to extend his own thanks to all engineers working in the industry. Genuine yoda.
I've always been fascinated by Fresnel lenses[0]. Is this a similar concept of squeezing a much bigger lens down into a smaller space, or is it achieving something that would not be possible with a single lens?
It would be really cool to see an animation of this showing a sweep of the approach angle from one side of the sensor to the other.
[0] https://en.wikipedia.org/wiki/Fresnel_lens
The browser scores given for Interop2022 (Chrome: 71%, Firefox: 74%, and Safari: 73%) are heavily weighed down because of the "Investigation" part of the score which of course they all score 0 on because it's just begun. Without that, their scores on the mentioned focus areas would look more like:
- Chrome: 79%
- Firefox: 82%
- Safari: 81%
I feel like this gives a better insight into the current state of things
What ever happened to holographic lenses? I thought they were supposed to revolutionize lens design. Create a complex lens, take a hologram of it, and you can cheaply print infinite copies of it. I understand there was some light loss, is that why they never took over lens manufacture?
So are these lens assemblies designed by some optimizer? There are a lot of radical shapes in there.
Yes. Zemax, CodeV, and other optical design software is fundamentally optimization as a design strategy.
It's possible, but they are sufficiently regular that I think some obvious hand-optimization math could get you there. I've seen similar in other fields.
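A toy version of optimization-as-design, nothing like what Zemax or Code V do internally, but the same basic idea of ray tracing plus a merit function: hold a singlet's total power fixed, sweep its "bending" (Coddington shape factor), and pick the shape with the least spherical aberration. All numbers here are made up for illustration.

```python
import math

def refract(d, nrm, n1, n2):
    # Vector form of Snell's law; d and nrm are unit vectors, and nrm
    # points against the incoming ray (so d . nrm < 0). No TIR handling.
    mu = n1 / n2
    cos_i = -(d[0] * nrm[0] + d[1] * nrm[1])
    cos_t = math.sqrt(1.0 - mu * mu * (1.0 - cos_i * cos_i))
    k = mu * cos_i - cos_t
    return (mu * d[0] + k * nrm[0], mu * d[1] + k * nrm[1])

def hit(p, d, zv, c):
    # Intersect ray p + t*d with a spherical surface of curvature c
    # whose vertex sits on the optical axis at z = zv.
    if abs(c) < 1e-12:  # flat surface
        t = (zv - p[1]) / d[1]
        return (p[0] + t * d[0], zv), (0.0, -1.0)
    R = 1.0 / c
    cz = zv + R  # centre of curvature on the axis
    ox, oz = p[0], p[1] - cz
    b = ox * d[0] + oz * d[1]
    disc = math.sqrt(b * b - (ox * ox + oz * oz - R * R))
    t = -b - disc if R > 0 else -b + disc  # intersection nearest the vertex
    q = (p[0] + t * d[0], p[1] + t * d[1])
    return q, (q[0] / R, (q[1] - cz) / R)  # unit normal opposing the ray

def axis_crossing(h, c1, c2, n=1.5, thick=4.0):
    # Trace a ray parallel to the axis at height h through a singlet lens
    # and return the z where it recrosses the optical axis.
    p, d = (h, -10.0), (0.0, 1.0)
    p, nrm = hit(p, d, 0.0, c1)
    d = refract(d, nrm, 1.0, n)
    p, nrm = hit(p, d, thick, c2)
    d = refract(d, nrm, n, 1.0)
    return p[1] - p[0] * d[1] / d[0]

# Hold the total power fixed (f = 100 mm via the thin-lens equation) and
# sweep the shape factor q, scoring each bending by its longitudinal
# spherical aberration (marginal focus vs near-axis focus).
n, f = 1.5, 100.0
C = 1.0 / ((n - 1.0) * f)  # total curvature c1 - c2

def lsa(q):
    c1, c2 = C * (1 + q) / 2, C * (q - 1) / 2
    return abs(axis_crossing(10.0, c1, c2) - axis_crossing(0.5, c1, c2))

best = min((lsa(i / 20.0), i / 20.0) for i in range(-24, 25))
print("best shape factor q ~ %.2f" % best[1])
```

For n = 1.5 and an object at infinity, thin-lens theory puts the optimum near q of about 0.71 (roughly convex-plano, curved side toward the subject), and the brute-force trace lands in that neighborhood. Real design software does the same thing with far better optimizers, dozens of variables, and merit functions covering many field points and wavelengths.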
I don’t really get it. Also could this be used to improve telescopes?
(astronomer here). Some comments below partially answered the question, as modern optical/IR telescopes are based on mirror designs (i.e. they have a primary, secondary, and sometimes a tertiary mirror, like the Vera Rubin telescope). But lenses are still heavily used in the camera part of the instrument, or in spectrographs. And there is a lot of know-how there, as well as optimization in terms of minimising light losses, aberrations etc. I think the constraints and goals are clearly different from the cameras in iPhones, but I'd think the techniques must be pretty similar (although Apple clearly has a ton of money that astronomers don't, so it's possible there are some big innovations there that I don't know about).
SCT reflector telescopes use an exotic looking "corrector plate" (lens) to compensate for the easier-to-produce spherical mirrors vs parabolic.
https://en.m.wikipedia.org/wiki/Schmidt_corrector_plate
Not an astronomer, but from my limited knowledge telescopes usually use reflection, e.g. mirrors, so they don't need to correct for chromatic aberrations introduced by lenses, and they also don't use things like optical stabilisation. Refractors could possibly be corrected, but AFAIK they aren't widely used in astronomy, and hobby telescopes are such a niche market.
Possibly for refracting telescopes where the light passes through lenses but pretty much all serious telescopes use mirrors to reflect light and therefore couldn't do this. However, I'm sure that you can come up with some wild optics if you are optimizing for size or minimizing distortion.
These designs are required due to the package size constraints and the relatively generous lighting conditions.
Isn't the primary reason that the picture has to be projected on a flat plane?
If you listen closely, you can hear them (stabilising?) constantly! (iPhone 6)
Is there something like this for Samsung?
How much does this lens cost?
They’re crazy, and still no match for DSLRs
DSLRs are no match for a Fujifilm GFX100 or a Phase One or the Hubble Space Telescope but none of those users seem to like comparing them as much.
I don’t think you can carry a Hubble telescope with you
And yet you can’t take a picture of the stars or street lights on an iPhone…
Well, that's definitely not true.
Stars: https://www.macrumors.com/2021/10/10/amazing-night-sky-photo...
Street light: https://cdn.vox-cdn.com/thumbor/R51UlZi7g1UPq7b9Be05nWfOP_o=...
https://twitter.com/zoneoftech/status/1236746998805680128
C'mon. From your link: "The images were shot using Apple's ProRAW format and then edited using the mobile version of Lightroom on the iPhone itself"
I have the 12, and it multiplies any tiny source of light into dozens of fake lights.
The street light I'm referring to is much darker than this: a dark street with a string of lights. All the lights get multiplied as fake lights.
> The images were shot using Apple's ProRAW format and then edited using the mobile version of Lightroom on the iPhone itself
In other words, they were shot on an iPhone. And processed there too, incidentally. What's to "c'mon" about?
> What's to "c'mon" about?
Maybe the fact that there are many artificial stars in that photo due to software adding, in postprocessing, new light sources which didn't really exist? I have no idea if this happens, but it is the claim they were making, and it would certainly be worthy of a "c'mon" if true.
> Maybe the fact that there are many artificial stars in that photo due to software adding new light sources which didn't really exist
Huh??
How exactly did you come to the conclusion they just added a bunch of fake stars? Am I missing something? Post processing is a normal part of astrophotography, even with a DSLR.
That sounds like a question for them, not me, I made no such claims. In fact I'd be quite interested in any evidence as well, although I wouldn't be surprised if they were using some kind of ML model in postprocessing which ended up adding some extra stars here and there (more charitably, you could think of it as converting noise into stars, when it was really just noise all along). But I'm certainly no expert on how smartphones process their photos.
> Post processing is a normal part of astrophotography, even with a DSLR.
Yes as someone interested in low light photography I'm well aware of this and it's why I'd be quite curious to see any evidence (assuming it exists) of smartphones doing weird things to their low light photos.
I see zero evidence of an effect like that in the linked photos. I assumed they were talking about artifacting they saw in their own attempt to take photos at night with an iPhone.
Either way, given the content of those links, I don't think their comment makes much sense.
> C'mon. From your link: "The images were shot using Apple's ProRAW format and then edited using the mobile version of Lightroom on the iPhone itself"
Uh, yeah, that’s standard process for astrophotography. Not at all a “c’mon” moment.
It’s still extremely impressive that a smartphone camera can do this. I say this as someone that does a good bit of astrophotography as a hobby.
Physics is still the law. As long as your aperture is small, so is the usable amount of light you’re collecting at the top of that lens funnel.
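A back-of-envelope comparison (the focal lengths are assumptions, not measured values for any particular phone): the light gathered scales with the entrance pupil area, which is focal length over f-number, so a phone's fast f-number still corresponds to a tiny pupil.

```python
import math

# Entrance pupil diameter = focal_length / f_number; area scales as diameter^2.
def pupil_area(focal_mm, f_number):
    d = focal_mm / f_number
    return math.pi / 4.0 * d * d

phone = pupil_area(5.7, 1.6)   # typical smartphone main camera (assumed)
ff = pupil_area(50.0, 1.8)     # full-frame 50 mm f/1.8 prime
print(f"the full-frame prime gathers ~{ff / phone:.0f}x the light")
```

This is why computational tricks (frame stacking, denoising) carry so much of the load in phone night modes.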
The problem is that the tiny light from a star or a street lamp is multiplied (in software) multiple times, to comic effect...
Of course you can.
Stars example: https://www.macrumors.com/2021/10/10/amazing-night-sky-photo...
Street light example: https://www.cnet.com/tech/mobile/iphone-13-pro-vs-12-pro-max...
C'mon. From your link: "The images were shot using Apple's ProRAW format and then edited using the mobile version of Lightroom on the iPhone itself"
I have the 12, and it multiplies any tiny source of light into dozens of fake lights.
The street light I'm referring to is much darker than this: a dark street with a string of lights. All the lights get multiplied as fake lights.
Do you have an example for that? Would be interesting to see. Edit: Or do you mean lens flaring like here? https://discussions.apple.com/thread/252037681
Is this just a software problem? Android has had various night sight modes for several years that make dark views work pretty well?
There is something with the hardware as well; there is some green glare even on easy shots. But the fake-lights problem I think is software: the software tries to "pretty up" the image and introduces these fake lights.
"You think anyone would be doing this kind of utterly insane stuff if it wasn't absolutely necessary?"
- Not a software developer. Obviously.
The problem is an artifact of the size constraints and of plastic lenses having far less variation in optical density, which means a lot more work to correctly direct light and avoid various types of distortion.
https://www.pencilofrays.com/lens-design-forms/#mobile has a much more in-depth discussion of the how and why of the lens system design (someone referenced this in a reply to the linked thread)
Plastic lenses are actually more expensive to make than glass. You need extreme precision, more than with lapped glass lenses.
A Taiwanese factory northwest of Dhaka https://www.youngoptics.com/ makes the lion's share of molds for plastic lenses worldwide.
YoungOptics used to make them for Sony too. I don't know if it's still the case today.
Long gone are my high-school days of grinding and polishing a 20cm mirror in the cellar, carefully using Foucault knife tests to parabolize it. As an undergrad, using Gaussian formulae when matching lenses in eyepieces. In grad school, writing ray-tracing codes to design multi-element lenses. Then, as a postdoc, using Zernike polynomials to estimate optical errors in the hexagonal mirror segments of the Keck 10 meter telescope.
Today's iPhone optics astonish and impress me: A lens built with over a half-dozen aspherical elements. Coordinated imaging with multiple cameras. Wow!
> Today's iPhone optics astonish and impress me: A lens built with over a half-dozen aspherical elements. Coordinated imaging with multiple cameras. Wow!
Yep. And the same is true of modern "prime" lenses, which appear more like incredibly optimised non-zooming-zooms than the traditional prime recipes we are familiar with from textbooks.
A side note: thank you thank you thank you for The Cuckoo's Egg.
As a Brit I would ordinarily say that the reason I am a tech guy is the ZX Spectrum or the BBC Micro, but I am not sure my life would be the same in a much bigger way without your book.
At 15 or 16 in early 1990 I was sure I wasn't going to university; I was sure I didn't belong there. A friend of my mother's suggested your book to her as something I should read, and I received it as a birthday present.
Over time I have come to understand that this recommendation was extraordinarily shrewd. Not just about the tech aspect, or the internet community aspect, or the thrill of the chase aspect, but because it helped me understand that I was exactly the sort of person who should go to university.
When I am asked to recommend books it's yours I recommend, twinned with John Naughton's A Brief History Of The Future; they are like companion volumes for a particular kind of way to think and feel about technology. Douglas Hofstadter should be so lucky.