BTW, the OG demo from Johnny Lee is from 2007: https://www.youtube.com/watch?v=Jd3-eiid-Uw
Yeah, this came out during the last few weeks of my time in high school. It is a major reason I got into computer science and became a programmer. Good Times!
Wow that's really cool!
Wow, thanks for sharing, that really takes me back. I was so hyped when I saw this as a kid; my dad and I made a mount for my glasses with two IR LEDs and a battery. I remember being super impressed with the effect.
I also went to Maplin (the UK's RadioShack) and bought some infrared LEDs to hack together something to achieve this same effect. In the end I just taped the Wii Sensor Bar to my glasses!
After watching his videos, I went out and bought an IR pen so I could mimic his digital whiteboard. I think over the years the Bluetooth stack changed, so I could no longer pair it with Windows.
Good memories
I tried implementing this with face detection-based head tracking after that demo (or maybe before; I can't remember). I got it working but the effect was very underwhelming. It looks great in that video, but it kind of sucks in real life.
I think the problem is in real life you have an enormous number of other visual cues that tell you that you're not really seeing something 3D - focus, stereoscopy (not for me though sadly), the fact that you know you're looking at the screen, inevitable lag from cameras, etc.
I can't view the videos because of their stupid cookie screen, but I wouldn't be too excited about this. The camera lag especially is probably impossible to solve.
Probably also the fact you just don't move much while sitting in front of a screen, so the stereoscopy effect is much more relevant.
This (and several of his ideas) is the reason I value simple solutions so much in my work, along with optimising for low cost. "If Johnny Lee can do this crazy thing on the cheap, I can come up with something creative too."
Have you seen the video of the guy who built a camera to film light moving in super slowmo? The things people build amaze me.
Do you have a link?
https://www.youtube.com/watch?v=o4TdHrMi6do
Well worth taking a look at
yes agreed! very inspiring.
Agreed!
Thanks for posting, I was sure I recalled something like this from a long time ago. I also built myself a FreeTrack headset (https://en.wikipedia.org/wiki/FreeTrack) around the same time to play the Arma / Operation Flashpoint games, using IR LEDs attached to a hat that my webcam would track.
That was a blast from the past! Many, including me, surely still have those Wii sensors/controllers around. Fun times!
Yes - he's the OG who also inspired us! Credit where credit is due!
Came here to say the same. I remember playing around with this back in the day, but using two candles instead of the sensor bar. Yes, it works. No, it’s not a good idea to hold candles that close to your hair.
This is all a severely underutilized technology
Hi HN, I'm Sten, one of the creators.
We built this because we wanted 3D experiences without needing a VR headset. The approach: use your webcam to track where you're looking, and adjust the 3D perspective to match.
Demo: https://portality.io/dragoncourtyard/ (Allow camera, move your head left/right)
It creates motion parallax - the same depth cue your brain uses when you look around a real object. Feels like looking through a window instead of at a flat screen.
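For the technically curious: the core trick is an off-axis (asymmetric) projection driven by the tracked head position. A minimal three.js-flavored sketch of the idea (illustrative only, not our actual code; the screen dimensions and names here are made up):

    import * as THREE from 'three';

    // Screen half-extents in meters (rough values for a 24" monitor),
    // with the screen plane at z = 0 and the viewer at positive z.
    const halfW = 0.26, halfH = 0.16;
    const NEAR = 0.01, FAR = 100;

    // Rebuild the frustum each frame so the screen rectangle stays fixed
    // in space while the eye point follows the tracked head.
    function updateOffAxisCamera(camera: THREE.PerspectiveCamera, head: THREE.Vector3) {
      const s = NEAR / head.z; // project the screen edges onto the near plane
      camera.projectionMatrix.makePerspective(
        (-halfW - head.x) * s, // left
        ( halfW - head.x) * s, // right
        ( halfH - head.y) * s, // top
        (-halfH - head.y) * s, // bottom
        NEAR, FAR
      );
      // keep the inverse in sync for raycasting etc.
      camera.projectionMatrixInverse.copy(camera.projectionMatrix).invert();
      camera.position.copy(head); // the camera sits where the head is
    }

Run that every frame with the webcam-estimated head position and the screen starts behaving like a window.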
Known limitations:
- Only works for one viewer at a time
- Needs decent lighting
- Currently WebGL only
We're still figuring out where this is genuinely useful vs just a novelty. Gaming seems promising, also exploring education and product visualization.
Happy to answer questions!
It was definitely useful and appreciated on the "New" Nintendo 3DS XL, which also used a camera to track your eye movements and adjust the divergence accordingly. I hate the fact that Nintendo abandoned this technology because experiencing Ocarina of Time and Star Fox 64 in 3D was world-changing to me.
I'd say I'm not the only one who misses this technology in games, because a used New 3DS XL costs at least $200 on eBay right now, which is more than what I paid new.
I always thought 3D would combine really nicely with ray-traced graphics full of bright colors and reflections, similar to all those ray tracing demos with dozens of glossy marbles.
The 3DS is different, it's using a lenticular screen so your eyes actually see different images! The eye tracking allows it to work even if the position of your eyes changes (ie because you moved your head).
Presumably developers could have combined this with parallax head tracking for an even stronger effect when you move your head (or the console), but as far as I know no one did.
Do you mean combining the eye tracking with the lenticular screen? What do you think the use cases could be?
Well, there are two different ways (among others) that your brain detects depth:
1. Each eye sees the object from a different angle.
2. Both eyes see the object from a different angle when the object is moved relative to your head.
The 3DS does only #1. TFA does only #2. Presumably if you did both, you could get a much stronger effect.
I think the New 3DS had the hardware to do both in theory, but it probably would have made development and backwards compatibility overly complicated!
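Combining them would roughly mean deriving two eye positions from the one tracked head position and rendering an off-axis view per eye into the lenticular channels. A hypothetical TypeScript fragment (IPD value and names invented for illustration):

    import * as THREE from 'three';

    const IPD = 0.063; // average interpupillary distance, in meters

    // One tracked head position yields two eye positions; render one
    // off-axis view per eye and route each to its lenticular channel.
    function eyePositions(head: THREE.Vector3): [THREE.Vector3, THREE.Vector3] {
      const half = new THREE.Vector3(IPD / 2, 0, 0);
      return [head.clone().sub(half), head.clone().add(half)];
    }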
Yeah Nintendo 3DS XL was awesome, but even then, you'd have to use that one specific console in order to be able to play that game.
What we're thinking is to make this technology - as long as there is a camera and a screen - instantly accessible across billions of devices.
That means that the 3D effect would be applicable not only for games built for that specific console - but for any and all games that are already in a 3D environment.
The technology is still alive and well in some genres, particularly flight sims. One common free solution is to run OpenTrack with the included neural net webcam tracker, which plugs into TrackIR-enabled apps and works like a charm.
FYI: Samsung recently released a monitor with similar technology, I believe.
I'm surprised there's still a market for non-VR consumer 3D! I remember the post-Avatar rush of 3D-related products that never quite panned out.
I remember the 3D glasses that you could plug into the Sega Master System in the mid-80s. They took what would be interlaced frames and rendered them to different eyes instead (which made the version getting shown on the connected TV pretty trippy too).
And then there was the time travel arcade game (also by Sega) that used a kind of Pepper's Ghost effect to give the appearance of 3D without glasses. That was in the early 90s.
I think the idea of 3D displays keeps resurfacing because there's always a chance that the tech has caught up to people's dreams, and VR displays sure have brought the latency down a lot but even the lightest headsets are still pretty uncomfortable after extended use. Maybe in another few generations... but it will still feel limiting until we have holodeck-style environments IMO.
I wasn't aware of all of those, will check them out - thanks for sharing!
Yes, I believe you are right that the tech is catching up with concepts that seemed futuristic in the past. For example, the hardware today supports much more than it could have, say, 5-10 years ago.
Our hypothesis is that the current solutions out there still require the consumer to buy something, wear something, install something etc. - while we want to build something that becomes instantly accessible across billions of devices without any friction for the actual consumer.
VR has taken over this market. Get a VR headset; you won't be disappointed.
Has it though? And what "market" are you referring to here?
Fully agreed that if you want 100% full 6DOF immersion - go and pay hundreds or even thousands of dollars to wear a heavy, cumbersome headset. We're not disputing that or thinking of competing with that.
What we're saying is that there may be a much larger market consisting of people who are not ready to commit to pay so much money to wear something that will give them motion sickness after 10 minutes.
If you're developing a VR game, your market consists of the 50 million people around the world who own a VR headset. That's great. But since you already built the VR world in 3D, you could also open up the market to billions of people who want to play your game on their own devices.
Admittedly, it won't be the same experience, but it could be a "midpoint". Not everyone can afford, or is willing to pay for, a VR headset.
Other than DCS, Skyrim, and that one Star Wars game at Dave and Buster's where you duel Darth Vader, I don't see a lot that sings to me just yet. Granted, I could easily get 2,000 hours out of DCS over the span of a decade just flying every third and fourth generation fighter jet ever made.
Maybe VR doesn't need that many games because the small handful of good ones have so much depth and replay value. I guess I just talked myself into a $700 VR kit and possibly a $700 GPU upgrade, depending on whether or not my RTX 3060 is up to the task.
Not sure what VR kit you're looking at, but if it's a 4K headset to push at 90 fps you'll want something more like a 3080 or 4070. But if it's lower resolution it won't need quite so much power.
Back in college (~2008) we implemented this with a 7 foot tall back-projected screen and a couple of Wii remotes after seeing Johnny Lee’s video. The nice thing with that screen was that you could stand so close to it you couldn’t really see the edges.
We had as many people come test as we could, and we found that 90% of them didn't get a sense of depth, likely because it lacked stereo-vision cues. It only worked for folks with some form of monocular vision, including myself, who were used to relying primarily on other cues like parallax.
That's interesting! Did you continue to play around with it and take it further?
We did not, no. Just wrote up the report and moved on.
I don't know if you designed it for a specific monitor, but here's some feedback. I tried using it on my M1 Mac.
First, there is no loading indicator and it takes too long to start, so I thought it was broken a few times before I realized I had to wait longer.
Second, although it was clearly tracking my head and moving the camera, it did not make me feel like I was looking through a window into a 3D scene behind the monitor.
These kinds of demos have been around before. I don't know why some work better than others.
some others:
https://discourse.threejs.org/t/parallax-effect-using-face-t... https://www.anxious-bored.com/blog/2018/2/25/theparallaxview...
Isn't it because the webcam FOV is unknown? It's needed to estimate distance from the face's size in pixels (along with the real face size, but that should vary less). The three.js demo had a strength parameter that can be used to calibrate. The iPhone app is pre-calibrated for the most common devices, I believe.
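The pinhole-camera math behind that is short, which is why a single "strength" knob can stand in for real calibration. A sketch, assuming an average face width (every constant here is a guess):

    // Estimate viewer distance from the detected face width, assuming a
    // pinhole camera. Both the FOV and the true face width are unknowns,
    // which is exactly why per-device calibration helps.
    const AVG_FACE_WIDTH_M = 0.15; // assumed average face width

    function distanceMeters(faceWidthPx: number, imageWidthPx: number,
                            horizontalFovDeg: number): number {
      const fovRad = (horizontalFovDeg * Math.PI) / 180;
      const focalPx = (imageWidthPx / 2) / Math.tan(fovRad / 2);
      return (AVG_FACE_WIDTH_M * focalPx) / faceWidthPx;
    }

    // e.g. a 150 px wide face in a 1280 px frame at 60° FOV comes out to ~1.1 m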
Thanks for sharing all this feedback!
And we will check out these links, appreciate you sharing it!
I can confirm that it works decently well with a sunny roof window in the background, which is normally enough for people to complain that my face is too dark.
8yo me, who instinctively tried to look past the edges of the display during intense gaming sessions, would appreciate this feature very much. My belief is that if it shifted the POV to a lesser degree than in the demo, people generally wouldn't notice, but would still subconsciously register it as a more immersive experience.
I'm also glad that the web version doesn't try to cook my laptop - good work.
Thanks!
If you click "Menu" and then "Settings" you can play around with e.g. the sensitivity. Ideally we'd automatically optimize the calibration according to for example what device you are using, but that's something we would do a bit more long-term.
Appreciate it!
Very cool!
I can see this being quite useful for educational demonstrations of physics situations and mechanical systems (complex gearing, etc.). Also maybe for product simulations/demonstrations in the design phase — take output from CAD files and make a nice little 3D demo.
Maybe have an "inertia(?)" setting that makes it keep turning when you move far enough off center, as if you were continuing to walk around it.
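Something like this, hypothetically (threshold and gain invented):

    // "Inertia" mode: once the head is far enough off center, keep
    // rotating the model at a rate proportional to the overshoot, as if
    // the viewer kept walking around it.
    const DEAD_ZONE = 0.3;  // normalized head offset where inertia kicks in
    const TURN_GAIN = 1.5;  // radians/sec per unit of overshoot

    function inertiaStep(headX: number, dt: number, angle: number): number {
      const overshoot = Math.max(0, Math.abs(headX) - DEAD_ZONE);
      return angle + Math.sign(headX) * overshoot * TURN_GAIN * dt;
    }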
The single-viewer limitation seems obvious and fundamental, and maybe a bit problematic for the above use cases, such as when showing something to a small group of people. One key may be to take steps to ensure it robustly locks onto and follows only one face/set of eyes. It would be jarring to have it locking onto different faces as conditions or positions subtly change.
Good ideas - we've considered these as well actually!
The inertia idea wouldn't be too difficult to implement, but its usefulness would probably depend on the application area.
Yep exactly. Usually it locks onto one person's face but it can also jump around, so there are still optimizations we can do there - but generally it's supposed to be for one person. If you compare to VR headsets, two people can't wear the same VR headset anyway!
The latency is unfortunately bad enough that it prevents me from getting any depth illusion :( I've seen other implementations where it works really well though.
May I ask what device you are trying it on? In our experience, it can vary depending on what device - something we're looking to improve on further down the line.
Do you remember which other implementations you've seen that worked really well?
The obvious use case would be to replace the clunky head tracking systems which are often used in simulator games.
Systems like trackir, which require dedicated hardware.
You can do this today with OpenTrack: https://github.com/opentrack/opentrack
Also, TrackIR is just an IR webcam, IR LEDs, and a hat with reflectors. You can DIY the exact same setup easily with OpenTrack, but OpenTrack also has a neural net webcam-only tracker which is, AFAIK, pretty much state of the art. At any rate it works incredibly robustly.
Actually, I have already used it to implement the same idea as the post, with the added feature of anaglyph (red/blue) glasses 3D. The way I did it, I put an entire lightfield into a texture and rendered it with a shader. Then I just piped the output of OpenTrack directly into the shader, and Bob's your proverbial uncle. The latency isn't quite up to VR standard (the old term for this is "fishtank VR"), but it's still quite convincing if you don't move your head too fast.
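For anyone who wants to replicate the plumbing, reading OpenTrack's UDP output is only a few lines. A Node/TypeScript sketch (as I recall the packet is six little-endian doubles: x, y, z, yaw, pitch, roll, on port 4242 by default, but verify against the opentrack docs):

    import * as dgram from 'node:dgram';

    const sock = dgram.createSocket('udp4');
    sock.on('message', (buf) => {
      if (buf.length < 48) return; // expect 6 x 8-byte doubles
      const [x, y, z, yaw, pitch, roll] =
        Array.from({ length: 6 }, (_, i) => buf.readDoubleLE(i * 8));
      // feed these into the shader uniforms here
      console.log({ x, y, z, yaw, pitch, roll });
    });
    sock.bind(4242); // opentrack's default UDP output port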
There's already a wide variety of Opentrack plugins that use everything from off the shelf webcams to DIY infrared trackers to an iPhone app and FaceID/AirPods.
Trackir is just a camera with an infrared led.
How about contributing that to Godot?
Definitely! Our current focus is on Unity, because that's what we're most used to, but we'd build the solution for at least Unreal and Godot as well!
This is fun! But I see it showing me a close-up view (smaller FOV) if I move my head back, and a wider view (larger FOV) if I bring my head closer. This is the opposite of what I'd expect: if I bring my head nearer to the screen, I should see more detail, closer up (narrower FOV).
Your expectation doesn't match real life! Try it with a window. If you bring your head closer to the window, your FOV increases, ie you can see more of the scene outside the window.
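The geometry: a window of width w seen from distance d subtends an angle of 2·atan(w / 2d), which grows as d shrinks. A quick sanity check (numbers made up):

    // Angular width of a window of width w (meters) from distance d (meters).
    const fovDeg = (w: number, d: number) =>
      (2 * Math.atan(w / (2 * d)) * 180) / Math.PI;

    fovDeg(1, 1.0); // ≈ 53°
    fovDeg(1, 0.5); // ≈ 90° - closer head, wider view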
But there is no window. There is a box with a dragon in it. When I bring my eyes closer to the box, I expect to see more of the box, and more of the dragon. You can try it with any real box; substitute any appropriate object for the dragon. Pick your favorite 3D environment that features walls and dragons; notice how both grow, covering more of your FOV. This is what I expect to see as I draw closer to the screen.
The screen is the window.
Exactly what Wowfunhappy and jeffhuys are saying!
The demo gives just a blank screen on Android Firefox, Kiwi and Chrome.
It works for me, just needs a time to load.
Ah, thanks! Got it now on a better connection. A loading indicator would ease confusion.
It seems to vary a lot depending on device. It's just a "basic demo" for now, but yes good idea, thanks!
How long, and on what internet connection? I'm at 1 min and counting on 50 Mbit. But maybe it doesn't work on Ubuntu 24 + Firefox? It should be WebGL capable though.
OK, it took around 2 min for me to load, then it works.
Devtools say 40.23 MB / 13 MB transferred. Even on 1000Mbps it needed a moment. Works for me on Ubuntu 24 and Firefox.
Definitely laggy, but it works even under low lighting conditions and with a camera that is not facing straight forward.
May I ask what device and browser you are using?
> This content is hosted by a third party provider that does not allow video views without acceptance of Targeting Cookies. Please set your cookie preferences for Targeting Cookies to yes if you wish to view videos from these providers.
OK, that's a no then.
It's YouTube. This is super common, just worded in a weird way.
It's also incorrect. They could just use youtube-nocookie.com instead if tracking cookies are disabled. https://support.google.com/youtube/answer/171780?hl=en#zippy...
The funny thing is I'm unable to navigate the UI to see the video. I disabled (enabled) targeted cookies, and it still doesn't show.
Yeah, what is up? When I accept cookies I find myself at the bottom of the page at a checkout button with a dollar price on it. This confused me; after three more tries I gave up.
That's the first time I've seen that; it's a no for me too then. Good to know I'm being targeted though :)
I got excited about this a few years ago when I was into digital pinball. I built an open source project called Phonemote using an iPhone to track eyes and relay telemetry over WiFi/BT and a plugin for FuturePinball.
However the result wasn’t that useful because we humans have 2 eyes.
The 3d effect is very compelling if you close/cover one eye.
But that becomes annoying quickly.
The best result I had was smearing a little hair gel on the center of my left glasses lens. Then it felt like I was using two eyes, but really only my right eye was seeing the pinball table clearly, fooling my brain.
There is a Taiwanese company that has developed a sandwiched lenticular screen protector for phones that should fix this. Looks amazing in my opinion. You should try it out with your project.
Oh yes at the time I ordered a sheet but it didn’t align with my display panel. I should try that again.
Do you have a link to that? Sounds cool.
From my experience this only ever looks good in a recorded video, because we're used to assuming that the real object captured in the video is perceivable in stereoscopic form. But it's not. The real use of head and view tracking is as a controller, not to improve immersion.
Calling this 3D is a stretch; it does not make things appear like they're coming out of the screen. That requires a way to show something different to each eye, which is not possible with a standard display.
I did see a demo of 3D without glasses on a full monitor that DID make it look like it was coming out of the screen at CES, it requires a $3000 monitor though: https://www.3dgamemarket.net/content/32-4k-glasses-free-3d-g...
Also the 3DS obviously.
Not strictly true. Out past a certain distance your brain uses parallax for depth cues, because the difference between the two eyes' images is too small. It's 3D, just not stereoscopic 3D.
Shutter glasses would like to disagree with your assertion.
But does it actually make stuff look like you can reach out and touch it like the 3D movies at theme parks? I don't see how it would. This just seems like parallax.
You can do this with a Kinect for head tracking. People do it with homemade pinball machines, using a TV as the table and a Kinect to track your head, so the table looks like it extends in 3D down into the TV.
But I don't think it creates the full 3D effect with things looking like they are coming way out of the screen and like a tangible thing you can reach out and touch.
Reminds me of the Amazon Fire phone which featured similar tech prominently. "Dynamic perspective".
https://www.youtube.com/watch?v=6trOg2IK2Zg
Tangentially involved in that project.
One of the big issues with that phone was that in order to do dynamic perspective, you have to run a 3D render at 60 fps constantly. That's a huge power hog, and it prevents you from using many of the power-saving techniques you otherwise could on a normal phone -- shutting down the GPU, reduced refresh rate, heck, even RAM-backed displays.
Interestingly, for this parallax 3D effect to work, the head tracking needs to basically move "backwards" from typical head tracking since it needs to keep the focal point the same, if I'm understanding correctly. Any time I've tried this out it's fun and would likely be most useful for something like a 3D painting you hang on your wall.
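If I have it right, it's roughly this inversion (a hypothetical three.js-style sketch, using a plain look-at as a stand-in for a proper off-axis projection):

    import * as THREE from 'three';

    const focalPoint = new THREE.Vector3(0, 0, 0); // fixed point in the screen plane

    // Sim-style head tracking turns the view with your head; the parallax
    // effect instead translates the camera with your head while keeping
    // the target fixed, so the scene shifts against your motion.
    function updateParallaxCamera(camera: THREE.PerspectiveCamera, head: THREE.Vector3) {
      camera.position.copy(head);
      camera.lookAt(focalPoint);
    }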
Related idea, but not the same, might be my iOS and Android app that uses your phone's AR for head tracking and then sends that data to your PC for smooth sim game head tracking. https://smoothtrack.app
I remember this being done for a DSiWare game released ~2010 - I couldn't find much footage apart from this quick clip from a trailer.
https://youtu.be/4zZfsyHEcZA?si=BE2I991zEVxPEt9F&t=57
Seemed to have some different names depending on region (Looksley's Line Up, Tales in a Box: Hidden Shapes in Perspective). I recall it working very well at the time.
How is this different from the many available head tracking devices? I used to play Flight Simulator with head tracking[1]. It was great, although it did consume a fair bit of CPU. IR trackers are more efficient.
[1] - https://www.youtube.com/watch?v=P07nIcczles (actually, this one was using a paper tracker because the face tracker had a big impact on fps)
Really fancy idea and cool project! It's a bit stuttery, probably due to the 30 fps of a webcam, but it works. Feels slightly weird somehow though, probably from the lag.
update: just tried to open the site again now, and it's gone and leads to some kind of shop?
update2: oh use the link in the comment for the demo: https://portality.io/dragoncourtyard/
here's a webgl poc that was tested with mobile & desktop browsers: https://www.webgma.co.il/Articles/window-3d-tracking/en/
source code at https://github.com/guyromm/Window
IMHO this should be useful for driving/flight sims, giving the player the ability to lean inside the vehicle and change their viewpoint on the surroundings.
Does this really fool you if you have two eyes? I haven't been able to try it or watch the videos (they're behind a cookie warning I can't get past).
If the latency is low enough to fool you, it makes the screen look like a window. It's much more limited than a headset, but especially with a natural scene, it does get the illusion across.
But you're still behind the window.
Is this similar to the Sony spatial 3D display?
My company bought one of those Sony Spatial Reality Displays a couple of weeks ago, and I have to say it is truly impressive; the 3D effect is really convincing.
I just tried this demo and it is cool, but nowhere near as good.
Seem to recall one of the pinball emulators having this as a plug-in years ago.
Drawback is that it only works if you constantly move your head or device.
There was a Compiz 0.8 plugin for that.
head/eye tracking?