pistachiopro a year ago

Ten Minute Physics is a great resource! Matthias Mueller makes concise demos and explains them really well. (Though I think it would take someone without existing familiarity with the subject matter a lot longer than 10 minutes to digest the results. Lots of rewatching!)

A really interesting result that's come out of Matthias and co.'s research in the last few years, and is still percolating its way through the realtime physics community, is the so-called "small step" technique. Explained at a really high level: integrating a physical system requires solving large sparse systems of nonlinear equations. Traditionally, the equations would be solved once a frame with iterative methods (often with a fixed number of iterations, such as 10 or 20 per frame). It turns out it's massively more efficient, for accurate results, to run just a single iteration of the solver but increase the physics framerate by 10 or 20 times (or actually, since there's a tiny bit of overhead to process a physics frame, for equivalent computational load you can usually only increase the framerate 9-18 times). The intuition here is that even though 1 iteration of the solver is much less accurate than 20, by only taking a small step forward in time the error drift in the physical system is even lower. It's only somewhat handwavy to say that adding solver iterations reduces error at a linear rate, while shrinking the timestep reduces error quadratically.
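
To make the loop structure concrete, here's a minimal, self-contained Python sketch of the idea (a single point mass on a distance constraint, i.e. a pendulum). The numbers and structure are my own illustration, not code from Ten Minute Physics:

    import math

    # "Small steps": many tiny substeps, each with ONE constraint pass,
    # rather than one big step with many solver iterations.
    def step_frame(pos, vel, frame_dt, substeps=20):
        g = (0.0, -9.81)
        rest_len = 1.0                     # constraint: keep |pos| == rest_len
        h = frame_dt / substeps            # tiny timestep
        for _ in range(substeps):
            prev = pos
            vel = (vel[0] + g[0] * h, vel[1] + g[1] * h)
            pos = (pos[0] + vel[0] * h, pos[1] + vel[1] * h)
            d = math.hypot(pos[0], pos[1])           # single constraint projection
            pos = (pos[0] * rest_len / d, pos[1] * rest_len / d)
            vel = ((pos[0] - prev[0]) / h, (pos[1] - prev[1]) / h)
        return pos, vel

    pos, vel = (1.0, 0.0), (0.0, 0.0)
    for _ in range(600):                   # 10 seconds of simulation at 60 fps
        pos, vel = step_frame(pos, vel, 1.0 / 60.0)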

While almost all engines have added an option to use the small-step technique, one issue slowing adoption is that it is too accurate! The old way of solving the nonlinear equations once a frame introduced a huge amount of numerical damping. That greatly reduced accuracy, but it did serve to stabilize simulations, in a way. With the small-step technique, numerical damping is greatly reduced, so intentional, physical models of damping need to be added back in and tuned by hand. People aren't used to working this way, so designers are still toggling back to the "large step" method. It's likely that once people figure out how to properly use this new technique, you'll see a noticeable uptick in the fidelity of physics in games.
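
For anyone experimenting with this, the simplest explicit replacement for the lost numerical damping is a tunable velocity decay applied each substep. This is just a rough illustration of the idea (XPBD itself offers more principled, per-constraint damping), not any engine's actual API:

    import math

    # Hypothetical explicit damping: decay velocities by a tunable rate each
    # substep instead of relying on the integrator's accidental damping.
    def apply_damping(velocities, damping_rate, h):
        k = math.exp(-damping_rate * h)    # frame-rate-independent decay factor
        return [(vx * k, vy * k) for (vx, vy) in velocities]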

  • hnuser123456 a year ago

    This is very coincidental timing. The physics simulation in the Source engine and Garry's Mod absolutely cemented my interest in computational physics. Fifteen years ago I really wanted to see fully simulated water in games, particularly GMod, but it never really happened.

    This never stopped bothering me, so I recently loaded up the game and programmed an entity that was simply a particle repelling other particles with inverse distance-squared falloff. I was only able to get up to around 100 particles before the game slowed from 200+ fps to ~10. There seemed to be a huge amount of overhead in the Lua scripting system's connection to the game/physics engine. Something about getting the positions of 100 particles, or asking for all the particles within a given range of each particle, even for only 100, was enough to push frametimes to 100-1000 ms. I wanted at least 500-1000 particles at 10+ fps, and I'm sure that in terms of raw math that is nothing for modern CPUs, even with the most naive implementation. I had to figure out exactly which Lua functions had the most overhead for such a small amount of simple math.

    So I rewrote the script, chasing every rabbit hole that would let me increase the particle count while maintaining over 10 fps:

    1. Switched from a per-particle loop to a global loop that cached each particle's position once per frame, so the work was more pure math and fewer engine-binding calls/"context switches". Went from roughly 50 particles to roughly 150.

    2. Limited the particle simulation to 10 Hz instead of 66+ Hz and adjusted the force scale to compensate. Raised the limit to roughly 300-400 particles.

    3. Added a dynamic scaling system, so that no more than 100 particles are updated per 10 Hz tick. Raised the limit to roughly 700 particles. (A rough sketch of this caching/batching pattern follows below.)
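
    Here's roughly the shape of the result, sketched in Python. engine.get_position and engine.add_force are hypothetical stand-ins for whatever the Lua bindings actually expose, and the constants are illustrative; this is not my actual GMod script:

        import math

        BATCH_SIZE = 100      # step 3: update at most 100 particles per tick
        cursor = 0            # rotates through the particle list across ticks

        def tick(engine, particles, dt):
            global cursor
            # step 1: read all positions once, then work on plain data
            positions = [engine.get_position(p) for p in particles]
            n = len(particles)
            # step 3: pick this tick's batch, wrapping around the list
            batch = [(cursor + k) % n for k in range(min(BATCH_SIZE, n))]
            cursor = (cursor + BATCH_SIZE) % n
            for i in batch:
                xi, yi, zi = positions[i]
                fx = fy = fz = 0.0
                for j, (xj, yj, zj) in enumerate(positions):
                    if j == i:
                        continue
                    dx, dy, dz = xi - xj, yi - yj, zi - zj
                    r2 = dx * dx + dy * dy + dz * dz + 1e-6
                    s = 1.0 / (r2 * math.sqrt(r2))   # inverse-square repulsion
                    fx += dx * s
                    fy += dy * s
                    fz += dz * s
                # step 2: dt is the long 10 Hz tick, so the force scale compensates
                engine.add_force(particles[i], (fx * dt, fy * dt, fz * dt))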

    At this point there are other, non-CPU bottlenecks. Any method of drawing 700 entities, even as basic wireframes, is significant, even with a very high-end GPU on a nearly 20-year-old game, and there's no easy way for me to break past this.

    I'm sure that if I learned how the Source engine's plugin system worked and hooked into the physics engine with C++ instead of going through the Lua bindings, I could simulate a thousand particles at 100+ fps, but that's about where the scope of the project starts to contend with "real life", so I continue to wait for a 3D sandbox game with satisfying fluid physics.

    Your optimization of "only one solver iteration per (much smaller) frame" is in one sense the same as, and in another sense the opposite of, my method of simply not simulating every particle every frame... I just didn't have the spare performance for more. Then again, I'm only simulating a single force field, with no proper collision detection or anything fancy.

    • modeless a year ago

      GPUs can definitely handle more than 700 entities. Here's 100k in a web browser: http://david.li/fluid/

      But even most offline-rendered fluid simulations are not that convincing to my eyes, let alone real-time ones. It will be a long, long time before we have truly realistic behavior of real-time water in unconstrained settings for games.

      • lossolo a year ago

        Have you seen RealFlow? I was having fun with their fluid physics simulator a few years ago; it's really good.

        https://realflow.com/

        • modeless a year ago

          It looks cool, but I find that there's a characteristic "blobby" look to these simulations, especially visible in mostly flat parts or whenever thin sheets of liquid form and break apart. I imagine that increased resolution can mitigate it, but the cost eventually becomes prohibitive even for offline simulation, so people settle for "good enough." And I think foam/bubbles are usually kind of hacky (when present at all) rather than being simulated in a principled way.

      • hnuser123456 a year ago

        Yeah, it must be something specific about the overhead of drawing entities in the Source engine. It doesn't matter whether I render the particles as 2D sprites or as 3D models; both have a similar performance impact over not rendering them at all once there are 700 of them, and there's no way it's a hardware limitation.

        The GP shared some links below from other indie devs; there is a realtime water simulation that looks pretty dang good, though it looks to be on the scale of a human-sized tub.

    • ladberg a year ago

      It sounds like the problem is that you're doing O(N^2) calculations every tick, because each particle has to interact with every other particle. To make it faster, you should keep some spatial data structure so that you can query the other particles within a certain range of your current particle and interact with just those, ignoring the rest.

      The usual implementation is an octree or something similar.

      That said, N=100 should still be fast with the naive algorithm, so I'd definitely move off the scripting language to something faster.

      • cyber_kinetist a year ago

        For particles of uniform size, a simple uniform grid will work best. If you set the grid cell size to the particle diameter, then you only need to search the adjacent voxels for each particle. The uniform grid method is also easy to port to the GPU, unlike other acceleration structures like BVHs or octrees.
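
        Something like this, as a rough Python sketch (the details are my own illustration): hash each particle into a cell whose size matches the search radius, then only test the 3x3x3 block of cells around it.

            from collections import defaultdict

            # Uniform grid: cell size == interaction radius, so candidates for
            # each particle lie in the 3x3x3 block of cells around it.
            def find_neighbors(positions, radius):
                grid = defaultdict(list)
                for idx, (x, y, z) in enumerate(positions):
                    grid[(int(x // radius), int(y // radius), int(z // radius))].append(idx)
                neighbors = {i: [] for i in range(len(positions))}
                r2 = radius * radius
                for i, (x, y, z) in enumerate(positions):
                    cx, cy, cz = int(x // radius), int(y // radius), int(z // radius)
                    for dx in (-1, 0, 1):
                        for dy in (-1, 0, 1):
                            for dz in (-1, 0, 1):
                                for j in grid[(cx + dx, cy + dy, cz + dz)]:
                                    px, py, pz = positions[j]
                                    if j != i and (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2 <= r2:
                                        neighbors[i].append(j)
                return neighbors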

      • hnuser123456 a year ago

        I do have a parameter so that each particle only considers other particles within 750 units (the free space is 3000x3000x3000 units), but that didn't seem to matter much, and technically it's still calculating the distance between each pair twice, so I guess I could speed that up. Vector/table operations seemed slower than plain loops; not sure if there's plain interpreter speedup possible there?
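
        Computing each pair only once is an easy win: iterate j > i and apply equal and opposite forces. A quick Python sketch of that pattern, just as an illustration:

            # Visit each (i, j) pair once and apply equal-and-opposite forces,
            # halving the distance calculations.
            def accumulate_forces(positions, forces):
                n = len(positions)
                for i in range(n):
                    xi, yi, zi = positions[i]
                    for j in range(i + 1, n):    # j > i: each pair handled once
                        dx, dy, dz = xi - positions[j][0], yi - positions[j][1], zi - positions[j][2]
                        r2 = dx * dx + dy * dy + dz * dz + 1e-6
                        s = 1.0 / (r2 ** 1.5)    # inverse-square along the unit vector
                        fx, fy, fz = dx * s, dy * s, dz * s
                        forces[i][0] += fx; forces[i][1] += fy; forces[i][2] += fz
                        forces[j][0] -= fx; forces[j][1] -= fy; forces[j][2] -= fz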

    • pistachiopro a year ago

      Realtime fluid sim is making lots of progress, though full 3D sims still appear to be computationally out of reach, at least for games that aren't focused exclusively on them. Here are some fun fluid sim examples from members of the Ten Minute Physics Discord:

      https://twitter.com/Michael_Moroz_/status/159507609967093350...

      https://twitter.com/MytinoGames/status/1600278137564184579

      • hnuser123456 a year ago

        I might have to join the Discord. That first link is extremely satisfying, and the shading on the materials in the second one does a great job of giving the feeling of the third dimension without the computational complexity. Thank you for sharing.

    • a_t48 a year ago

      > At this point there are other, non-CPU bottlenecks. Any method of drawing 700 entities, even as basic wireframes, is significant, even with a very high-end GPU on a nearly 20-year-old game, and there's no easy way for me to break past this.

      I've run into this before - it _can_ be fast if your engine supports some form of mesh instancing and exposes it to the user.

      • ngcc_hk a year ago

        Would it still be slow if it recalculates everything?

        • a_t48 a year ago

          If what recalculates what?

          • ngcc_hk a year ago

            Relying on the engine's mesh calculation means you're not in control. For a simple model that's OK, but for a complex one you could really optimize, e.g. not calculating anything more than 3 meters away.

            • a_t48 a year ago

              My comment only referred to rendering.

jspdown a year ago

There is some great content on his YouTube channel as well. I highly recommend having a look. He is one of the people behind PBD and XPBD. I found watching his videos and playing with his demos really helpful while studying the paper.

avsteele a year ago

If you want to play with a little demo I made for talking about laser cooling physics, go to:

https://zerok.com/demo/

Admittedly it will be a bit hard to follow without the accompanying lecture, but if you zoom in and press the buttons (cool, compress, etc.) it looks cool at least!

(It only performs well in Chrome-derived browsers.)

amelius a year ago

I have a question about the position-based dynamics approach. Since the constraints are solved iteratively for every element in the simulation, you can still end up with a system where the constraints do not hold for some of the elements.

Example: you have a link between A and B, and a link between B and C. If you adjust A and B according to the length of the first link, and then B and C according to the second, you've moved B again, and the first link might not have the right length anymore.

  • pistachiopro a year ago

    I think your question speaks to something deep and interesting about XPBD.

    All classical mechanics sims revolve on some level around "integrating" Newton's second law of motion: F = ma. Force equals mass times acceleration. XPBD constraints correspond directly to forces, and acceleration is the second time derivative of position. So to update positions every frame, which is what we want our physics sim to do, we need to integrate F = ma.

    Usually we want to use an integrator from the family of "unconditionally stable" integrators. These integrators are guaranteed not to blow up, no matter what weird, difficult input the user gives the simulation. In realtime physics sims we generally use one called "Implicit Euler." And Implicit Euler is the worst one! It's the least accurate and it adds huge amounts of artificial damping, but it has one really big thing going for it: it's the most resilient to "under-converged" solutions to the equations of motion.

    So, like you pointed out in your question, if you resolve the constraints one after the other, constraints further down the list are likely to violate previous constraints. Most of the time you can mitigate this by running over the list of constraints multiple times, but you'd have to do that many, many times for your constraints to perfectly "converge," such that they are all simultaneously satisfied.
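
    Concretely, a single "one after the other" pass over distance constraints looks something like this (a simplified PBD-style projection in Python; real XPBD adds compliance and Lagrange multipliers):

        import math

        # One Gauss-Seidel-style pass: each constraint is satisfied in turn,
        # possibly disturbing constraints that were already solved.
        def solve_constraints_once(positions, inv_masses, constraints):
            for (i, j, rest_len) in constraints:
                dx = positions[j][0] - positions[i][0]
                dy = positions[j][1] - positions[i][1]
                dist = math.hypot(dx, dy)
                w = inv_masses[i] + inv_masses[j]
                if dist == 0.0 or w == 0.0:
                    continue
                corr = (dist - rest_len) / (dist * w)
                positions[i][0] += dx * corr * inv_masses[i]
                positions[i][1] += dy * corr * inv_masses[i]
                positions[j][0] -= dx * corr * inv_masses[j]
                positions[j][1] -= dy * corr * inv_masses[j]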

    The more accurate, energy-preserving unconditionally stable integrators, like "Implicit Midpoint," require perfectly converged solutions. If you don't resolve your constraints perfectly, the "unconditionally stable" Implicit Midpoint will explode! But good ol' Implicit Euler is much less uptight. You can give it surprisingly under-converged solutions and it'll keep on trucking. The shortcomings of Implicit Euler are reduced by taking smaller timesteps between frames, so you generally get the best bang for your buck by investing compute time into more frames with a sloppy, non-converging solver than into a better integrator that forces convergence.

    At least that's what it looks like now. There's still tons of active research going on in these areas. Maybe someone will discover a faster solver that makes convergence easier, in which case we'll all switch over to more sophisticated integrators.

  • meheleventyone a year ago

    Depends on how you solve the constraints. The papers use an iterative method that might not converge to meet all constraints all the time, but you can use other methods instead. In general, though, constraint satisfaction is good enough with the iterative approach.

    I’m writing a physics engine in my spare time for fun using XPBD so here’s a look at the iterative method in action: https://youtu.be/A3_W_EFFsm8

    This is running in Chrome at 60Hz with fifteen sub-steps, each with one constraint solver iteration.

    I'm currently integrating rigid bodies, which is fun.

    • amelius a year ago

      I think the constraints don't have a unique solution. E.g. with multiple overlapping polygons in 2D, how would you resolve that globally in a way such that the solution converges to the physical one?

      • meheleventyone a year ago

        Yup, you can construct constraints such that they are impossible to solve. Solving with global methods can also end up in cases where the solver takes too long for the simulation to remain real-time. Lots of trade-offs to make based on what you're trying to do.

emmelaich a year ago

Didn't realise at first that some of the demos were interactive, e.g. the cloth can be pushed around with the mouse. Impressive!

thinkingkong a year ago

This has one of those timeless qualities to it. Reminds me of the old NeHe OpenGL tutorials.

  • deely3 a year ago

    "NeHe opengl tutorials" - these words of wisdom trully brings me back in time.

lukko a year ago

Matthias' tutorials are brilliant! We used them as a reference for building the soft and rigid body sims in our breathing app: https://www.lungy.app

725686 a year ago

For a moment I thought it was the very well-loved minutephysics YouTube channel...

whinvik a year ago

I am having fun recreating some of the tutorials in Python. Getting the visualizations working the same way is harder though, so I am not doing everything the same way.

  • explodingman a year ago

    If you get anywhere, put a link to your code here. I'm bookmarking this spot and I'd appreciate a look at some Python code when I come back to this (kinda busy right now).

ndsipa_pomu a year ago

There's a typo in the link on that page - "Ten Minute Phyics"

nigamanth a year ago

This honestly reminds me of the Feynman Lectures. It's similar, except Ten Minute Physics explains slightly more modern physics concepts.

  • gfaregan a year ago

    Classical and continuum mechanics are hundreds of years old. A surprising amount of the numerics is pretty old too.

_joel a year ago

How much can ten minutes of physics change when gravity is invited?

xchip a year ago

Awesome content, highly recommended!

waynesonfire a year ago

Interesting ideas, but too short. 100 minutes would be better.