Show HN: M. C. Escher spiral in WebGL inspired by 3Blue1Brown
static.laszlokorte.de
The latest 3Blue1Brown video [1] about the M. C. Escher print gallery effect inspired me to re-implement the effect as WebGL fragment shader on my own.
You are forgiven for not knowing about the University of Leiden's "Escher and the Droste effect" site from 2002, given that it shut down in 2024, but they were the first to try filling in the centre of Print Gallery and to make the association with the cocoa tins:
https://web.archive.org/web/20020802200015/http://escherdros...
It's cited by 3b1b themselves, who used Leiden's un-spiralized image to describe the effect.
I did my own version too: https://www.youtube.com/watch?v=xxLfDHe93_M
Very cool! I once tried rendering his towers. Mainly used normal canvas drawing though :)
https://bewelge.github.io/escherTower/
Why not include the Print Gallery image? Or, if worried about copyright, add the ability to load an image.
Why not allow the upload of an arbitrary image?
what could go wrong
You would have to know the position of the smaller copy in the uploaded image for the effect to work.
The main issue is that the image needs to have a high enough resolution to be sharp at all zoom scales. Currently my images are vector graphics that I rasterize depending on the screen resolution.
The Escher Print Gallery requires even larger scales, as it uses a zoom factor of 256 across the image (vs. 16 for my images).
Others have solved this by either vectorizing the Print Gallery or even rebuilding the scene as a 3D signed distance field that can be sampled via ray marching.[1] The latter yields the best result but I did not want to copy it.
[1]: https://www.shadertoy.com/view/Mdf3zM
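For the curious, the maths behind that zoom factor is the de Smit–Lenstra construction that the 3b1b video builds on. A rough sketch of it (my own illustration, not OP's shader code): an image that repeats under scaling by a factor r can be pushed through the conformal map z → z^β, with β = 2πi / (ln r + 2πi), which turns each scaling period into a combined rotation-and-scale — the Escher spiral.

```javascript
// Sketch of the de Smit-Lenstra "twist" behind the Escher effect.
// An image that repeats under scaling by r is deformed by the
// conformal map z -> z^beta, beta = 2*pi*i / (ln r + 2*pi*i),
// so one trip around the spiral lands exactly on the next copy.
function drosteTwist(r) {
  const lnr = Math.log(r);
  const twoPi = 2 * Math.PI;
  // beta computed by multiplying through by the complex conjugate
  const denom = lnr * lnr + twoPi * twoPi;
  const betaRe = (twoPi * twoPi) / denom;
  const betaIm = (twoPi * lnr) / denom;
  // One period of the original (multiplication by r) becomes
  // multiplication by r^beta = exp(beta * ln r): a rotation-and-scale.
  const scale = Math.exp(betaRe * lnr);
  const rotationDeg = (betaIm * lnr * 180) / Math.PI;
  return { scale, rotationDeg };
}

// For Print Gallery's r = 256: each copy is ~22.58x smaller and
// rotated ~157.6 degrees -- the numbers de Smit and Lenstra report.
console.log(drosteTwist(256));
```

With r = 16 the same formula gives a much gentler twist (roughly 10.2x and 58.7°), which is one reason lower zoom factors are kinder to image resolution.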
Very cool! However, it took me a while to figure out how this was supposed to be used.
For others:
On desktop, at least, you need to click and drag up/down on the left-hand control that says "swipe" with two arrows.
Or click "Autoplay".
laszlokorte -- can I suggest that the up/down icons should also be clickable/holdable? Since they're icons, they look like buttons, not a "swipe area". And also, maybe default to having autoplay on (but still with the controls visible)? Because it was not clear to me, at first, that the whole point of the site is infinite zoom.
Mouse scroll wheel works, too
Thanks for the suggestion! I added a slow initial auto zoom and updated the up/down arrows to work while being pressed.
Note to other viewers: getting the Escher-esque effect requires tapping a checkbox at the top of the page (easy to miss on a large monitor).
I now updated the default view to already show the Escher effect :)
stupid questions for the webgl experts here:
- can you build an entire FPS game using WebGL? how is physics handled? how are collision detection and enemy AI handled? what kind of frame rate can you expect from a Counter-Strike-like game made in WebGL?
- what is the difference between webgl and threejs and babylonjs?
- what is the man-hour effort involved in doing something like this, assuming you know HTML, CSS and JS pretty well but are not familiar with gamedev?
- is open gl the non web version of web gl? or are they completely different?
You can implement the graphics part of it using WebGL. It's strictly a graphics API for drawing to the screen. But there are dedicated libraries for e.g. physics that you can use in your WebGL 2 app, and entire 3D engines (like those you mentioned) that target WebGL. Or you can DIY.
> is open gl the non web version of web gl? or are they completely different?
The current version of WebGL, WebGL 2, is like OpenGL ES 3.0.
> what is the man hour effort involved for doing something like this assuming you know html, css and js pretty well but not familiar with gamedev
Almost trivial with AI. I just started making games with threejs. threejs is pretty much the abstractions you'd end up writing yourself if you wanted to use WebGL.
The hard part is refining, polish, creating fun mechanics, and creating assets.
> Almost trivial with Ai.
Not true in the slightest.
> The hard part is refining, polish, creating fun mechanics, and creating assets.
All things that AI cannot, by definition, do. So, not trivial at all with AI.
Fuck AI, man.
Very few questions are stupid, these are not.
Yes, you can definitely build an entire fps game using WebGL for rendering. Typically using JavaScript to handle physics, collision, gameplay, etc.
My current WebGL project renders high-definition terrain, high-poly animated models, thousands of particles, shaders, sound and more at over 150 frames per second on a 10-year-old PC with an RTX 3060. I have found hardware acceleration is often not enabled in the browser, or Windows will default to the integrated graphics card when running the browser; that must be changed in the Windows Graphics Settings.
WebGL is a graphics API for talking right to the graphics card, supported by The Browser. ThreeJS and BabylonJS are libraries that make it easier to render 2D and 3D graphics; both use WebGL and/or WebGPU behind the scenes for rendering.
Development with HTML/CSS/JavaScript and WebGL is my favorite stack to work with. Development is fast, reloading is quick, and errors and debugging are handled directly in the browsers, which have great debug information and performance tracking. No compile time, and support on lots of devices.
Yes, OpenGL came first. WebGL is a JavaScript binding of a subset of OpenGL functionalities.
> Development with HTML/CSS/JavaScript and WebGL is my favorite stack to work with.
I love this myself, but..
> have great debug information
How do you debug WebGL stuff? I find that to be one of the least debuggable things I've ever done with computers. If there's multiple shaders feeding into one another, the best I can usually come up with is drawing the intermediate results to screen and debugging visually. Haven't been paying too much attention to the space the past 2-3 years though, so I'm wondering if some new tools emerged that make this easier.
The JavaScript debugging is great right out of The Browser these days.
WebGL debugging... it's a combination of approaches. A lot of it is done the way you described, visually, especially for shader-related issues. For API calls, logging gets most things figured out; there is also this: https://github.com/KhronosGroup/WebGLDeveloperTools
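As a sketch of that call-logging idea (the linked WebGLDeveloperTools repo does this far more thoroughly with its makeDebugContext helper), one hypothetical way to trace every GL call is to wrap the context in a Proxy:

```javascript
// Illustrative sketch: wrap a (WebGL) context so every method call is
// logged with its arguments before being forwarded to the real object.
function logCalls(gl, log = console.log) {
  return new Proxy(gl, {
    get(target, prop) {
      const value = target[prop];
      if (typeof value !== 'function') return value; // constants pass through
      return (...args) => {
        log(`gl.${String(prop)}(${args.join(', ')})`);
        return value.apply(target, args);
      };
    },
  });
}

// Usage sketch: const gl = logCalls(canvas.getContext('webgl2'));
```

This gives a call trace without touching the rendering code, though it won't catch GL errors the way makeDebugContext's getError checks do.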
- first of all thank you very much for the detailed insight
- as a guy who is very much new to gamedev, threejs etc but not to programming (have a decade of programming experience on backends, android apps etc) i am running into lots of questions as i try to build a mental model of what game dev process looks like
- let us say i wanted to add a player 3d model into this setup, the player can walk, run, crouch, shoot, throw a grenade, go prone, take cover to the wall etc. how do these animations get implemented? what kind of tools are needed for making these animations
- i read that the technique used is called skeletal animation. how are you supposed to think about this? you press w, the character moves forward. in terms of animation that means your character needs to play the standing at one place animation initially and transition to the walking animation as long as the w button is pressed. now you press shift and this walking animation needs to transition to running animation as long as shift is pressed. is this the right way to think about this?
- do we need intermediate animations like "transition from walk to run", "transition from run to walk", "transition from walk to crouch" etc? that would add a lot of states would it not?
- are there LLM tools that you are aware of that are capable of generating these animations?
- i also read there are different file formats like obj, fbx, m3d, glb etc. is the same data stored in these files in a slightly different way like csv vs json or are they completely different?
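On the state-machine question above: yes, that mental model is roughly right, but engines typically do not need authored "walk-to-run" clips. Instead they crossfade between the base clips by ramping per-clip blend weights over a short window (this is roughly what three.js's AnimationMixer does with crossFadeTo). A minimal, engine-free sketch of the idea:

```javascript
// Sketch of animation crossfading: only the base clips (idle, walk,
// run, ...) are needed; transitions are made by ramping blend weights
// over a short duration rather than by authoring transition clips.
class AnimationBlender {
  constructor(initial) {
    this.weights = { [initial]: 1 };
    this.current = initial;
    this.fade = null;
  }
  // Begin a crossfade to `state`; `duration` is in seconds.
  transitionTo(state, duration) {
    this.fade = { from: this.current, to: state, duration, t: 0 };
    this.current = state;
  }
  // Advance the fade; call once per frame with the frame's delta time.
  update(dt) {
    if (!this.fade) return;
    this.fade.t = Math.min(this.fade.t + dt / this.fade.duration, 1);
    this.weights[this.fade.from] = 1 - this.fade.t;
    this.weights[this.fade.to] = this.fade.t;
    if (this.fade.t === 1) this.fade = null;
  }
}

const blender = new AnimationBlender('idle');
blender.transitionTo('walk', 0.2); // e.g. on keydown of W
blender.update(0.1);               // halfway through the 0.2s fade
console.log(blender.weights);      // { idle: 0.5, walk: 0.5 }
```

The skinned mesh then poses each bone at the weighted average of the active clips' poses, so the character smoothly leans from one motion into the other.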
This is awesome! I'd love to be able to upload a custom image too.
Cool, I think? It's unusable on mobile Google Chrome. Pinch to zoom worked for about a split second and now it’s broken
I have not implemented proper multi-touch controls yet. Currently the gizmos need to be used for zooming, panning and rotating.
I will add multi-touch gestures soon.
Nice! Nit: on mobile (FF, if it matters), swiping down for some time makes the edges very grainy.
Same with swiping up.
This is awesome. I'd love to see the original escher image scroll through there.