TaylorAlexander a year ago

Ooooh I have an original Lytro and of course it just sits there. I am going to have to try this!

By the way, don't miss this video about the failed 755-megapixel, 300fps Lytro Cinema camera, a contraption the size of a car with off-the-charts data storage requirements.

https://www.youtube.com/watch?v=pfjYecJHMRU

  • obamallama a year ago

    If the human retinal density were equal to the foveal density across the whole eye, our optic nerves would be half a foot across.

    And if my grandmother had wheels, she'd be a bicycle.

    In short: pay attention or get left behind.

    • ideashower a year ago

      The whole segment never fails to crack me up on a bad day: https://www.youtube.com/watch?v=OplyHCIBmfE

      • dylan604 a year ago

        holyjeebuschristo!!!

        i have no idea who any of these people are, but i absolutely am in tears over this. the expression on his face at the mere suggestion of "if it had ham", but then she kept going with it, and his reaction. ohmugawd!! but it all started with "you're doing it wrong" flat out, not sugar coating it, nothing. ahhhh

  • carlosdp a year ago

    What's wild is that it turns out we were only a few years away from technology that could serve the same purpose as the cinema camera, running on off-the-shelf GPUs in a normal desktop (NeRFs)

    • oofbey a year ago

      NeRFs are cool and someday will do what the Lytro cinema camera claimed. But today it’s nowhere close. See how Lytro took 10 years to go from lab science to a cinema quality camera. NeRFs have a similar journey ahead of them. Today they kinda mostly sorta work in the lab.

      • TaylorAlexander a year ago

        Also just from a fidelity perspective it seems better to fully capture all the light rays than to capture part of the scene and try to recompute it later.

        • dTal a year ago

          Indeed. NeRFs are impressive from an interpolation standpoint, but they can't magically beat Nyquist - any component of the radiance field with a higher frequency than the sampling rate will be missed, and probably aliased. In practical terms, this means things like highly specular reflections, "glittery" or "twinkly" surfaces, and magnifying optics - anything which could "project" an obvious pattern onto the capture plane.

          An example would be the surface of a swimming pool - light reflecting off the water projects a pattern like this [0] onto the nearest wall, and if that's where your camera positions are, they are effectively point-sampling that pattern - there is no data for what happens in between. The resulting NeRF reconstruction will look dull and flat, with the water taking on a matte or satin texture.

          Another example would be a laser beam - all the cameras see is a weak translucent line, if they see anything at all, and you cannot infer that you will receive a blinding amount of light if you put your eye right in the beam unless you happen to know about lasers. Even if you do know about lasers, if the laser itself is not in shot, you cannot determine which way the beam is going.

          Assuming that nothing in the scene is emitting perversely high-frequency signals like a laser, you could in theory make a guess at some of this information by inferring a lot more about the scene - calculating the position of lights, computing normal maps, guessing that certain sets of points are made from the same material and therefore likely have the same BSDF. But current NeRFs don't do any of that, they just run volume rendering on a set of emissive points.

          [0] https://www.google.com/search?q=swimming+pool+reflection
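
          To make the sampling argument concrete, here is a toy Python sketch (the names and numbers are illustrative, not from any NeRF toolkit): one axis of a radiance field carries a 40-cycle "glitter" component and is sampled at 9 camera positions, far below the Nyquist rate.

            import numpy as np

            x = np.linspace(0.0, 1.0, 2000)                 # dense ground-truth axis
            field = 0.5 + 0.5 * np.cos(2 * np.pi * 40 * x)  # 40-cycle glitter pattern

            cameras = np.linspace(0.0, 1.0, 9)              # 9 cameras; Nyquist needs > 80
            samples = 0.5 + 0.5 * np.cos(2 * np.pi * 40 * cameras)

            recon = np.interp(x, cameras, samples)          # best any interpolator can do
            print(field.std(), recon.std())                 # recon is flat: the sparkle is gone

          With this spacing the 40-cycle pattern aliases exactly to DC: every camera reads the same value, so the reconstruction comes out uniformly "matte", just as described.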

joshu a year ago

I love this. I collect interesting obsolete hardware; I bought a Light L16 recently for cheap but it can no longer update itself, unfortunately. Lytro is on my list as well.

The Lytro itself is a lightfield / plenoptic camera. It also captures the angle of the light coming into the camera, which lets you calculate focus AFTER the photo has been taken. Focus is, of course, itself just another computation. http://graphics.stanford.edu/papers/lfcamera/
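
A minimal sketch of the shift-and-add refocusing idea from that paper; subviews here is a hypothetical (U, V, H, W) array of sub-aperture images that you would first have to decode from the raw capture:

  import numpy as np

  def refocus(subviews, alpha):
      """Shift each sub-aperture view in proportion to its position in the
      aperture, then average. Varying alpha moves the synthetic focal
      plane; alpha = 0 reproduces the as-shot focus."""
      U, V, H, W = subviews.shape
      out = np.zeros((H, W))
      for u in range(U):
          for v in range(V):
              shift = (int(round(alpha * (u - U // 2))),
                       int(round(alpha * (v - V // 2))))
              out += np.roll(subviews[u, v], shift, axis=(0, 1))
      return out / (U * V)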

  • d1sxeyes a year ago

    That’s a really interesting way of phrasing it that I’d never fully considered: of course you are right, a physical lens is just performing predictable mathematical transformations on the light passing through it.

    Thank you, that was something of a revelation in understanding how these cameras actually work.

    • weinzierl a year ago

      "just performing predictable mathematical transformations"

      Just linear algebra, to be precise. A lens is just a matrix and rays are simply vectors. Combining lenses is merely matrix multiplication, and a complex optical system can be represented by the result, which is a plain simple matrix again. The eigenvalues tell you interesting properties of your optical system.

      I've always enjoyed this part of physics because it is so simple and elegant, yet so powerful.

      If you look closely, linear algebra pops up in many places.
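
      For the curious, the formalism is the paraxial ray-transfer ("ABCD") matrix: a ray is a (height, angle) vector and each optical element is a 2x2 matrix. A quick sketch with illustrative numbers:

        import numpy as np

        def thin_lens(f):  return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
        def free_space(d): return np.array([[1.0, d], [0.0, 1.0]])

        # Object plane 100mm in front of a 50mm lens, screen 100mm behind it.
        system = free_space(100) @ thin_lens(50) @ free_space(100)
        ray = np.array([1.0, 0.02])  # height 1mm, angle 0.02 rad at the object
        print(system @ ray)          # height -1mm for *any* input angle

      The upper-right entry of the system matrix is zero here, which is exactly the imaging condition: output height no longer depends on input angle, and the heights form an inverted 1:1 image.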

      • fhars a year ago

        Well, sort of... quite a bit of materials science and engineering goes into making and shaping the glass of a lens to make it behave like a matrix (i.e. to extend the range in which this linear approximation to the actual behavior of a blob of glass is valid).

        • gnramires a year ago

          Blobs of glass are actually overwhelmingly linear :) Non-linear effects in optics are generally negligible unless you go to very high energies (scattering, like Raman scattering) or special materials (like fluorescent ones). You need a high-powered laser or certain materials for those regimes.

          That said, the geometric functions (i.e. the positions of rays w.r.t. other objects) are indeed probably non-linear in the object position parameters, and intensities certainly are (example: moving an object in front of a screen by X amount changes the illumination in front of the screen non-linearly). It's important to keep in mind what we mean by linearity (it's in terms of light field intensities for a fixed-geometry scene) -- the scene supplies the transfer function coefficients.

          What really makes things complicated in real life is (1) the presence of noise and (2) imperfections (and unknowns) in your physical/geometric apparatus. Even if you know the system perfectly, noise generally prevents inverting (or easily inverting) transfer functions, i.e. undoing blurs and arbitrarily refocusing images with simple detectors. Imperfections, and even the lack of rigidity of your lenses and system, add more difficulty still. That's why making a simple computational lens is not so easy.
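
          A toy 1-D illustration of the noise point, assuming a perfectly known Gaussian blur (everything here is hypothetical): exact inversion explodes wherever the transfer function is tiny, while a regularized (Wiener-style) inverse stays usable.

            import numpy as np

            rng = np.random.default_rng(0)
            signal = np.zeros(256); signal[100:130] = 1.0
            kernel = np.exp(-0.5 * ((np.arange(256) - 128) / 4.0) ** 2)
            kernel /= kernel.sum()

            H = np.fft.fft(np.fft.ifftshift(kernel))            # known transfer function
            blurred = np.fft.ifft(np.fft.fft(signal) * H).real
            noisy = blurred + 1e-3 * rng.standard_normal(256)   # tiny detector noise

            naive = np.fft.ifft(np.fft.fft(noisy) / H).real     # exact inverse: blows up
            wiener = np.fft.ifft(np.fft.fft(noisy) * np.conj(H)
                                 / (np.abs(H) ** 2 + 1e-4)).real
            print(np.abs(naive - signal).max(), np.abs(wiener - signal).max())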

        • gumby a year ago

          It sure does. But if you can replace many of the lenses with computation it's a big win. If you can compensate for some of the inherent aberration in a lens design, or for the manufacturing variation in a specific lens, it's an even bigger win.

          In other words, one does not obviate the other.

      • ddalex a year ago

        Going on a metaphysical tangent, it is a bit weird that A LOT of physical processes can be modelled with linear algebra, and don't require something way more advanced ...

        • fhars a year ago

          That is actually deliberate: we tend to organize everyday life and the devices we build around processes that are easy to model, which is why so many things look like linear processes and harmonic oscillators (the first two terms of the Taylor expansion of the actual behavior). We change the type of spring we use if the current one starts to wear out early under normal operating conditions.
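
          Spelled out, the expansion in question: near an equilibrium point x_0, a smooth potential looks like

            V(x) \approx V(x_0) + \tfrac{1}{2} V''(x_0)\,(x - x_0)^2

          since V'(x_0) = 0 at equilibrium, so the force F \approx -V''(x_0)(x - x_0) is Hooke's law, i.e. a harmonic oscillator.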

          • IIAOPSW a year ago

            Let me rephrase on his behalf: isn't it curious just how unreasonably capable our math is at expressing physical laws of the universe which we did not invent?

            • jamiek88 a year ago

              I don’t know, seems a bit Anthropic Puddle to me.

        • Someone a year ago

          I think it’s more that we developed linear algebra, and that made us approximate physical processes by linear models. If you look closely enough, almost nothing is linear.

          For example, we happily draw a linear scale on a mercury thermometer, even though we know that not to be correct, even if we (incorrectly) assume that the coefficient of expansion of mercury is independent of its temperature. Also, try explaining why the conversions between Celsius, Fahrenheit and Kelvin are all linear (answer for Kelvin and Celsius: they technically only are since 2019, when we redefined them (https://en.wikipedia.org/wiki/2019_redefinition_of_the_SI_ba...). I think the Fahrenheit scale was redefined as a shift of the Kelvin one around the same time.)
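
          For reference, the conversions in question are affine rather than strictly linear:

            T_K = T_C + 273.15, \qquad T_F = \tfrac{9}{5}\,T_C + 32

          Only temperature differences transform without the offsets (\Delta T_F = \tfrac{9}{5}\,\Delta T_C).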

    • joshu a year ago

      More importantly, focusing a lens is only one of many possible transformations. Look up coded apertures. So much is possible.

  • slopperyslip a year ago

    Was curious about the Light L16 seeing how cheap it is now, and did some digging around for how to update it. This GitHub project from an XDA thread may do the trick: https://github.com/helloavo/Light-L16-Archive

    • icelancer a year ago

      Oh wow, this camera! I saw it years ago and never thought about it again. It's such an interesting idea for my field of work, which wants variable focus throughout a video shot with high-speed photography but can't get it with the equipment that exists on the market.

    • fire a year ago

      thanks for this, I've been needing to factory reset my L16 but I couldn't locate any of the firmware

    • joshu a year ago

      Thank you for this.

  • sacnoradhq a year ago
    • joshu a year ago

      I have a CueCat and a PCjr keyboard in my collection, as it focuses on tech innovation failures. A bunch of other items on this list would be good additions.

      I should probably make a list of my collection. A working Juicero. A rolleron. An Itanium CPU. A working General Magic device. The iBook I wrote del.icio.us on…

      I had Capsela as a kid. Hadn’t thought about it in decades. Thank you.

    • jacquesm a year ago

      > Grandparents had a Heathkit TV like this with an ultrasonic remote that could also be controlled by jingling keys

      I wish my Sony A7 had such a nice and elegant shortcut for remote video recording. Instead it has a crappy little remote control that eats batteries like candy, seems to un-pair spontaneously mid recording session, and other such niceties.

    • steanne a year ago

      LibraryThing still sells and works with CueCats.

matsemann a year ago

> Killer feature of this new technology was the ability to refocus the image after it was taken! (...), the camera was trying to solve a problem that didn’t exist

A related camera technology, but one which I've found great to use, is 360 cameras. The ability to re-frame (not refocus) the image as I want in post is so great. No longer do I have to spend many attempts positioning a camera perfectly - I just shoot and later "point" the camera correctly. For instance, when filming ski videos it could take multiple attempts to get the angle right so that exactly the person+jump was in view. Now it just works.

  • lostforwords a year ago

    I worked in scientific imaging, and when Lytro came on the scene it showed a lot of promise: the ability to clearly capture and measure across multiple focal planes in a single image. But Lytro focused on the consumer audience and never produced a product suitable for scientific imaging. You'll see Raytrix is still around in this space, as they did embrace the scientific market.

  • eyesee a year ago

    I worked on 360 cameras / software for many years. Looking forward to the day when an article like this comes out for one of the products I worked on.

    When you’re in the heart of it, it’s so easy to take pride in the technical challenges you’ve overcome but completely miss the realities of the marketplace.

    • matsemann a year ago

      I really love my 360 camera. Use it every day. I fill my 128 GB memory card multiple times a week. I use it on my helmet when biking for work, or on a selfie stick when skiing so that it looks like I have a personal camera man following me (example: https://www.instagram.com/reel/CnFtVsMJd39/?igshid=YmMyMTA2M... )

      But yeah, they haven't taken off that much compared to traditional action cams. The frame rate and resolution after reframing is the blocker for many, and that seems hard to fix with the currently available sensors; no one right now wants to gamble on spending much on R&D to launch the next gen.

      • eyesee a year ago

        The problem is that you’re always at a sensor deficit, because you need so much more resolution to cover a wide field of view. When sensors and chipsets would finally catch up, user expectations would grow to match. Now we’re up against physics: you can't add more resolution past the diffraction limit, and larger sensors are impractical for super-fisheye lenses.
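
        Back-of-envelope for that diffraction limit, using the standard Airy-disk formula (the numbers are illustrative):

          wavelength_um, f_number = 0.55, 2.0        # green light through an f/2 lens
          airy_diameter_um = 2.44 * wavelength_um * f_number
          print(airy_diameter_um)                    # ~2.7 um: sub-micron pixels add little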

MrGilbert a year ago

I must admit, I have a really soft spot for well-done reverse engineering write-ups like this. Show me logic board pictures, console commands, code and who-knows-what, and I'm happy as a squirrel.

  • mjburgess a year ago

    I like the simplicity and elegance of embedded programming, but I never have much motivation to do any. I've never really been a hardware person (by choice).

daniel_reetz a year ago

Author, if you're reading, a really cool thing would be to create a small array of these cameras (4 of them) side by side.

The very wide baseline should allow you to enhance/deepen the refocusing effect and create a wide format image. I've always wanted to do this.

  • Frost1x a year ago

    You can also extract depth information from each camera's lens array. In theory, if you combine that capability on a per-camera basis with photogrammetric and/or stereo imaging techniques, you should be able to passively get a fairly refined depth map of a scene, provided the scene is static or you have well-synchronized shots. Something I also wanted to explore but never had time to do.

    At the time, I always considered passive ranging to be a potentially more valuable feature of lightfield cameras than refocusing images.

    • daniel_reetz a year ago

      My point was that the depth information improves with greater stereo baseline (the distance between Lytros, in this case).
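
      For reference, the standard stereo relation behind that point: with focal length f, baseline B, and disparity d,

        Z = \frac{f\,B}{d}, \qquad \Delta Z \approx \frac{Z^2}{f\,B}\,\Delta d

      so for a fixed disparity error \Delta d, the depth error shrinks as the baseline B grows.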

      Depth maps from light fields were a topic Disney Research was publishing on while I was there. Great stuff - highly recommend checking out their SIGGRAPH paper.

Firerouge a year ago

Shame Lytro never made a security video camera variant. It seems like the ideal application, ensuring that anything from vehicle plates in the background to faces in the foreground can be brought into focus.

At what sort of fps can the API produce JPEGs? It sounds like the larger Illum can manage 3 fps natively, but I couldn't find specs for the original Lytro.

  • skhr0680 a year ago

    Ideally a surveillance camera will already be focused to have the entire scene sharp. The Lytro camera had postage-stamp resolution as far as I remember, so it wouldn’t solve the problem that you are thinking of.

    • folkrav a year ago

      > Ideally a surveillance camera will already be focused to have the entire scene sharp

      In practice a lot of surveillance cameras seem to be focused to have the entire scene a blurred mess

      • LoganDark a year ago

        In practice a lot of surveillance cameras were never focused at all. They were installed that way and never configured

  • WithinReason a year ago

    The images produced by Lytro are 1.2 megapixels, so not enough to resolve small detail.

    • atdrummond a year ago

      They had a “DSLR” which was about 4x higher resolution than the original device.

seszett a year ago

I've been intrigued by this camera ever since it came out but I had not checked eBay for a long time so I was surprised to see "$20 to $40" mentioned there.

Well, the cheapest one is actually $200 as far as I can see, showing results worldwide.

  • ljf a year ago

    I saw one here in the UK at a car boot sale last year for £5. I picked it up and chatted to the owner about it, but knew I had no need for it. On the way home I passed him again and he offered it to me for free, but I politely declined as I already hoard enough stuff - so they are out there for a bargain, you just need to get lucky I guess.

  • gommm a year ago

    Yup, same here :(

rixrax a year ago

Nice to see old Lytro hardware brought back to life.

When they were launching the company, I was so excited about the possibilities. Maybe naively, I was really rooting for being able to take a photo and then, in their software (and later in Lightroom), slide or brush points of focus in and out. Alas, all we got were these gimmicky square pictures that you could click to bring various parts into focus.

  • AstixAndBelix a year ago

    As always, a company makes a very interesting piece of hardware available but locks it down with proprietary software, so when it dies almost nobody can keep using it effectively.

    At least projects like this, reverse engineering the file format or even the firmware, can bring new life to these quirky cameras.

  • nathancahill a year ago

    Yeah, I think the concept is cool from a physics standpoint and the software didn't do it justice. If it had launched after the iPhone shipped Portrait mode, I think it might have angled more in that direction. My sense is that it came to market too early.

donatj a year ago

I was just tweeting the other day about how my Lytro was really fun for a couple of days - jogging through focus was neat - but after the initial glow it was just an overpriced camera that gave me underwhelming 4 megapixel photos as the final product.

I had always hoped for better software to come along. I’ll be curious to see if it can find a second life.

gumby a year ago

I'm really surprised that light-field cameras haven't been added to any phones. I'd think it would be valuable for "fixing" photos, and computation could perhaps replace the LiDAR.

Though it feels like the cameras are in a "pro" or at least "prosumer" race. Every Apple camera announcement seems to tout "advances" that are mostly irrelevant to me (who, like I assume most people, pretty much only uses the camera to take pictures of people and pets).

  • kridsdale1 a year ago

    The latest iPhones fake this in Cinematic filming mode. They use depth info and a long depth of field, then apply a distance-based Gaussian blur to fake a shallow DOF in post. But you can go back and edit the footage, selecting subjects other than what was “focused on” during filming.

jonah-archive a year ago

!! I need to check this out. A friend gave me an old Lytro Illum with a bad test firmware a while back that I never got around to doing anything with; maybe there's some possibility here.

stuntkite a year ago

I always wanted to get one and use it for depth mapping. I'm so glad someone else had the same idea. Kind of a bummer that the company tanked, but I get it.

sedivy94 a year ago

I was just thinking about Lytro the other day... I was quite young but very excited by the prospect of that technology. I know computational photography and AI have done more for cameras in the past decade, but I'm still hoping for a paradigm-shattering advancement in sensor tech.

jbandela1 a year ago

On a side note, do you think this type of light field photography could be packaged in a phone camera? Right now we have multiple lenses sticking out of the back of the phone. Could we replace all that with just a single large light sensor?

  • GeompMankle a year ago

    If I correctly interpreted your question as ending in "a single large light[field] sensor", no. Light field cameras take a single field of view and overall entrance pupil, and then, given that field of view, trade resolution for being able to note which part of the entrance pupil saw the object, in that low-resolution way. As a person who specializes in physical optics rather than computational optics, my extremely biased standpoint is that light field sensors take one really great camera and turn it into a bunch of fairly poor cameras collecting 3D depth information. If you value 3D depth information, and the ability to add realistic depth-of-field blur to a photo, more than the 2D resolution and sensitivity/noise properties of the photo, lightfield is for you.

    The reason we have multiple lenses sticking out of a phone is to achieve similarly high image pixel counts at high sensitivity/low noise across wildly different fields of view (say 108 deg, 69 deg, 28 deg), and wildly different per-pixel angular fields of view, in a cost-effective manner. A single imaging block that sees the whole 108 deg field of view of the wide camera with the per-pixel angular field of view of the 28 deg telephoto imager is generally infeasible in a cost-effective way unless you are IMAX.
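
    The resolution trade above, in round and purely hypothetical numbers:

      sensor_mp = 40.0                       # hypothetical phone-class sensor
      aperture_samples = 10 * 10             # 10x10 directions behind each microlens
      print(sensor_mp / aperture_samples)    # 0.4 MP left per refocusable view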

    • jbandela1 a year ago

      Thank you for your detailed answer.

sacnoradhq a year ago

Oh sweet. I have an original Lytro. Battery is dead so it would need a repair.

highenergystar a year ago

I wonder if a Lytro could be attached to a full-frame DSLR to take the 'same' picture as the DSLR, and augment the DSLR's picture with the depth field info captured by the Lytro ...

worldmerge a year ago

Very cool! Would it be possible to use this for NeRFs? I'm very new to them, but it seems like they use light data heavily - at least Luma AI seems to.

  • ladberg a year ago

    I'm not sure if you're already aware of the connection, but the founder of Lytro (Ren Ng) was actually the advisor on the original NeRF paper! I took a graphics class from him in college and it was one of my favorite classes. In addition to the course material it was great to hear about what his lab was working on and future moonshot projects.

  • Zetobal a year ago

    It wouldn't make sense to use these for NeRFs because of the variable focus... and the low resolution. The depth prediction used by NeRF is also miles ahead of the depth maps produced by Lytro cams.

AustinDev a year ago

Dang, I literally just sent my Lytro to the recycler during my spring cleaning.

olliej a year ago

awww, I finally threw mine out during the house cleaning maybe two months ago :(

  • UmYeahNo a year ago

    Weird. Came here to say the same thing. I let mine charge for a couple days and it wouldn't come back to life, so off to the recyclers it went. Such a shame.

WithinReason a year ago

Could you construct a NeRF from a single Lytro image?

  • dTal a year ago

    In all likelihood yes, but it probably wouldn't do what you want. NeRF is better at interpolation than extrapolation, because it can't confabulate things that were occluded in all of the input data. The main lens of the Lytro is small, so it's basically like having all your input images confined to a circle a few inches across. As soon as you tried to reconstruct anything outside that, you'd see gaps.

    It might be useful for boosting its mediocre resolution, though.

  • jasonwatkinspdx a year ago

    I believe so. The underlying data is essentially 4D.