This aligns with what I've observed in computational physics.
Trying to handle instantaneous constraints and propagating modes with a single timestepper is often suboptimal.
I developed a framework that systematically separates these components: using direct elliptic solvers for constraints and explicit methods for flux evolution. The resulting algorithms are both more stable and more efficient than unified implicit approaches.
The key insight is that many systems (EM, GR, fluids, quantum mechanics) share the same pattern:
- An elliptic constraint equation (solved directly, not timestepped)
- A continuity law for charge/mass/probability flux
- Wave-like degrees of freedom (handled with explicit methods)
With this structure, you can avoid the stiffness issues entirely rather than trying to power through them with implicit methods.
Paper: https://zenodo.org/records/16968225
You are probably very familiar with it, but this has been the basis of most numerical solvers for the Navier-Stokes equations since the late 1960s:
https://en.wikipedia.org/wiki/Projection_method_(fluid_dynam...
A disadvantage is that you get a splitting error term that tends to dominate, so you gain nothing from using higher-order methods in time.
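To make the constraint-vs-evolution split concrete, here is a minimal sketch of the elliptic piece in isolation: the pressure-projection step is just a Poisson solve that removes the divergent part of a velocity field. This uses an FFT-based Poisson solve on a 2D periodic grid; the grid size and random test field are placeholders, and it is only the projection step, not a full Navier-Stokes integrator.

    import numpy as np

    def project_divergence_free(u, v, dx):
        """Remove the divergent part of a periodic 2D field (u, v): solve
        laplacian(phi) = div(u, v) spectrally, then subtract grad(phi)."""
        n = u.shape[0]
        k = 2 * np.pi * np.fft.fftfreq(n, d=dx)          # angular wavenumbers
        kx, ky = np.meshgrid(k, k, indexing="ij")
        u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
        div_hat = 1j * kx * u_hat + 1j * ky * v_hat      # divergence in Fourier space
        k2 = kx**2 + ky**2
        k2[0, 0] = 1.0                                   # avoid 0/0 for the mean mode
        phi_hat = -div_hat / k2                          # solves -k^2 phi_hat = div_hat
        phi_hat[0, 0] = 0.0
        u_new = np.real(np.fft.ifft2(u_hat - 1j * kx * phi_hat))
        v_new = np.real(np.fft.ifft2(v_hat - 1j * ky * phi_hat))
        return u_new, v_new

    # A random field comes back with its (spectral) divergence removed to round-off.
    rng = np.random.default_rng(0)
    u0, v0 = rng.standard_normal((2, 64, 64))
    u1, v1 = project_divergence_free(u0, v0, dx=1.0 / 64)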
> Trying to handle instantaneous constraints and propagating modes with a single timestepper is often suboptimal.
When I read statements like this, I wonder if this is related to the optimal policy conditions required for infinitely lived Bellman equations to have global and per-period policies in alignment.
That's a fascinating parallel! Both involve separating timeless constraints (value function / elliptic equation) from temporal dynamics (policy / flux evolution).
Trying to timestep through a constraint that should hold instantaneously creates artificial numerical difficulties. The Bellman equation's value iteration is actually another example of this same pattern...
Please don't "publish" on Zenodo. If you think your work has merit, go arXiv -> peer review -> open access journal. Otherwise, put it on your own website. Zenodo is a repository for artefacts (mainly datasets): if you try to put papers on it, people will think you're a crank. It's about as damaging for your reputation (and the reputation of your work) as a paper mill.
Of course, make sure you've done a thorough literature search, and that your paper is written from the perspective of "what is the contribution of this paper to the literature?", since most people reading your work will not read it in isolation: it'll be the hundredth or thousandth paper they've skimmed, trying to find those dozen papers relevant to their work.
In the vast majority of cases, people using ODE solvers are working with physical systems, where there likely aren't more than a dozen digits of accuracy in any of the parameters. So you set some fixed level of accuracy for the output, and seek a method that can reliably attain that accuracy at the lowest cost. (Either that, or the system is too chaotic, in which case you're told not to bother since the output would be meaningless anyway.)
Something I've had trouble finding any information about is, what if we have a mathematical system, where the initial configuration is known to infinite precision, and we want to solve the ODE to an arbitrary precision (say, 100 digits, or 1000 digits)? All the usual stepwise methods would seemingly require a very high order to avoid a galactic step count, but initializing all the coefficients, etc., would be a struggle in itself.
Is this the best that can be done, or are there less-usual methods that would win out at extremely high accuracy levels?
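One option, if you haven't already run into it: Taylor-series integrators generalize naturally to arbitrary precision, because the order can grow with the requested accuracy. A quick sketch with mpmath's built-in odefun (its arbitrary-precision Taylor method) on the toy problem y' = y, so the answer can be checked against e; the precision setting here is an arbitrary choice.

    from mpmath import mp, odefun, exp

    mp.dps = 105                          # work with ~105 significant digits

    # odefun integrates y'(x) = F(x, y) with an adaptive Taylor-series method
    # at the current working precision; y' = y, y(0) = 1 gives y(1) = e.
    y = odefun(lambda x, y: y, 0, 1)
    approx = y(1)
    print(approx)
    print(abs(approx - exp(1)))           # error roughly at the working precision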
It depends on the dimensions of your system. If your system is one- or low-dimensional, or the function has sufficient smoothness, then it makes sense to assume the solution is a series of some kind and iteratively solve the least-squares fit using gradient descent. If you pick the right kind of series, you get extremely quick convergence.
When the number of dimensions becomes very high, or the function becomes extremely irregular, variations of Monte Carlo are generally much better than any other method, but the attainable accuracy is still much lower than what the low-dimensional methods achieve.
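For the low-dimensional, smooth case the parent comment describes, a concrete toy version is series collocation: pick a basis, write the ODE residual at a set of points as a least-squares problem in the coefficients, and minimize. The sketch below uses a Chebyshev basis on y' = -y, y(0) = 1 so the result can be checked against exp(-t); since the loss is quadratic, a direct least-squares solve stands in for the gradient-descent iteration (both reach the same minimizer). The degree and point count are arbitrary choices.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Ansatz: y(t) ~ sum_k c_k T_k(2t - 1) on [0, 1], fitted to y' = -y, y(0) = 1.
    deg, n = 10, 40
    t = 0.5 * (1 - np.cos(np.pi * np.arange(n) / (n - 1)))   # Chebyshev-spaced points
    x = 2 * t - 1                                            # map [0, 1] -> [-1, 1]

    V = C.chebvander(x, deg)                                 # V[i, k] = T_k(x_i)
    D = np.column_stack([                                    # D[i, k] = d/dt T_k(2t - 1)
        2 * C.chebval(x, C.chebder(np.eye(deg + 1)[k])) for k in range(deg + 1)
    ])

    # Linear least-squares system: ODE residual (D + V) c = 0, initial value V(t=0) c = 1.
    A = np.vstack([D + V, C.chebvander(np.array([-1.0]), deg)])
    b = np.concatenate([np.zeros(n), [1.0]])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)

    print(np.abs(C.chebval(x, c) - np.exp(-t)).max())        # spectral accuracy in deg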
> It depends on the dimensions of your system. If your system is one- or low-dimensional, or the function has sufficient smoothness, then it makes sense to assume the solution is a series of some kind and iteratively solve the least-squares fit using gradient descent. If you pick the right kind of series, you get extremely quick convergence.
Thank you, this sounds like what I'm looking for. Would you know of any further resources on this? Most of what I've been playing with has 4 dimensions or fewer.
(E.g., in one particular problem that vexed me, I had a 2D vector x(t) and an ODE x'(t) = F(t,x(t)), where F is smooth and well-behaved except for a singularity at the origin x = (0,0). It always hits the origin in finite time, so I wanted to calculate that hitting time from a given x(0). Series solutions work well in the vicinity of the origin, the problem is accurately getting to that vicinity.)
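For the hitting-time part specifically, one practical pattern is: integrate with tight tolerances and a terminal event at a small hand-off radius around the singularity, then finish with the local series from there. Here is a rough double-precision sketch with scipy's event detection; since the original F isn't given, a stand-in F (unit-speed motion straight toward the origin, exact hitting time |x(0)|) is used, and the hand-off radius is arbitrary. For 100+ digits you'd need an arbitrary-precision integrator instead of scipy, but the structure is the same.

    import numpy as np
    from scipy.integrate import solve_ivp

    def F(t, x):                           # stand-in RHS, singular at the origin:
        r = np.hypot(x[0], x[1])           # unit-speed motion toward (0, 0),
        return -x / r                      # so the exact hitting time is |x(0)|

    eps = 1e-8                             # hand-off radius around the singularity

    def near_origin(t, x):                 # terminal event: |x| falls to eps
        return np.hypot(x[0], x[1]) - eps
    near_origin.terminal = True
    near_origin.direction = -1

    x0 = np.array([0.3, 0.4])              # |x0| = 0.5
    sol = solve_ivp(F, (0.0, 10.0), x0, events=near_origin, rtol=1e-12, atol=1e-14)

    t_eps = sol.t_events[0][0]             # time to reach the hand-off radius
    t_hit = t_eps + eps                    # local solution: remaining time is eps here
    print(t_hit, abs(t_hit - 0.5))         # error limited by double precision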
There are adaptive-order ODE solvers that can adapt to arbitrarily high order. For example, https://arxiv.org/abs/2412.14362 (which I'm a co-author on) can get to ~200th order, which should be enough to efficiently solve an ODE to tens of thousands of digits.
The orbital example where BDF loses momentum is really about the difference between a second-order method (BDF2) and a fourth-order method (RK), rather than explicit vs implicit (but: no linear multistep method with order > 2 can be A-stable, the second Dahlquist barrier; since the whole point of implicit methods is to achieve stability, the higher-order BDF formulas are relatively niche).
There are whole families of _symplectic_ integrators that conserve physical quantities and are much more suitable for this sort of problem than either option discussed. Even a low-order symplectic method will conserve momentum on an example like this.
Obviously^1. But it illustrates the broader point of the article, even if better choices are available for this concrete problem.
1) If you have studied these things in depth, which many/most users of solver packages have not.
The fascinating thing is that discrete symplectic integrators typically can only conserve one of the physical quantities exactly, e.g. angular momentum but not energy in orbital mechanics.
leapfrog!
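For anyone who hasn't seen it, leapfrog (kick-drift-kick) is about as small as a symplectic integrator gets. A minimal sketch on a Kepler orbit in units with GM = 1 (the initial conditions and step size are arbitrary): angular momentum is conserved to round-off, and the energy error stays bounded rather than drifting.

    import numpy as np

    def accel(q):                          # Kepler acceleration, GM = 1
        r = np.linalg.norm(q)
        return -q / r**3

    q = np.array([1.0, 0.0])               # position
    p = np.array([0.0, 1.2])               # velocity (a moderately eccentric orbit)
    dt, steps = 1e-3, 200_000

    L0 = q[0] * p[1] - q[1] * p[0]
    E0 = 0.5 * p @ p - 1 / np.linalg.norm(q)

    for _ in range(steps):
        p = p + 0.5 * dt * accel(q)        # kick
        q = q + dt * p                     # drift
        p = p + 0.5 * dt * accel(q)        # kick

    L = q[0] * p[1] - q[1] * p[0]
    E = 0.5 * p @ p - 1 / np.linalg.norm(q)
    print(abs((L - L0) / L0))              # ~ round-off: conserved exactly by leapfrog
    print(abs((E - E0) / E0))              # small and bounded, no secular drift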
I always see RK(Fe) mentioned but almost never Bulirsch-Stoer (https://en.wikipedia.org/wiki/Bulirsch%E2%80%93Stoer_algorit...). Some years ago, I was looking to do high-precision orbital motion (involving several bodies) simulations, and came across this in Numerical Recipes in C. I don't pretend to understand how B-S works, but it produced amazing precision with adaptive step size adjustment. For highly chaotic systems, it held out far longer than RK.
Unfortunately the Numerical Recipes book is known for being pretty horrible in general for ODEs. For chaotic systems at high tolerances, you likely want to be using Vern9, which is a highly efficient 9th-order RK solver.
Is there a reference for ODEs that you like?
Hairer and Wanner (Solving Ordinary Differential Equations) is the classic if you want a graduate level understanding of the mathematical details of all the different methods, but it is incredibly dense (thorough understanding of both volumes will take months to years). If you want a more approachable reference, I would recommend https://book.sciml.ai/ (specifically chapters 7-9) which skips over most of the detail on how to construct ODE solvers but covers quite well the methods that tend to be used in practice.
B-S is just a (bad) explicit Runge-Kutta method. You can see this formally since if you take any order of the B-S method, you can write out its computation as a Runge-Kutta tableau. If you do this, you'll see it ends up being an explicit RK method that needs many more f evaluations to achieve the leading truncation error coefficients and order that the leading RK methods get. For example, 9th order B-S uses IIRC about 2-3 times as many f evaluations as Vern9, while Vern9 gets something like 3 orders of magnitude less error (on average w.r.t. LTE measurements). If you're curious about details of optimizing RK methods, check out https://www.youtube.com/watch?v=s_t6dIKjUUc.
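For readers who haven't seen the construction: a Bulirsch-Stoer (Gragg-Bulirsch-Stoer) step is the modified midpoint rule run with several substep counts, followed by Richardson extrapolation in h^2, and unrolling those stages is what gives the (large) equivalent RK tableau mentioned above. A bare-bones sketch, with the substep sequence and level count chosen arbitrarily:

    import numpy as np

    def modified_midpoint(f, t, y, H, n):
        """Gragg's modified midpoint rule over one macro step H using n substeps.
        Its error expansion has only even powers of h = H/n, which is what makes
        extrapolation in h^2 work."""
        h = H / n
        z0, z1 = y, y + h * f(t, y)
        for k in range(1, n):
            z0, z1 = z1, z0 + 2 * h * f(t + k * h, z1)
        return 0.5 * (z0 + z1 + h * f(t + H, z1))

    def gbs_step(f, t, y, H, levels=5):
        """One extrapolated step: combine results for n = 2, 4, 6, ... with
        Aitken-Neville extrapolation in h^2 toward h -> 0."""
        ns = [2 * (j + 1) for j in range(levels)]
        T = [[modified_midpoint(f, t, y, H, ns[0])]]
        for j in range(1, levels):
            row = [modified_midpoint(f, t, y, H, ns[j])]
            for k in range(1, j + 1):
                factor = (ns[j] / ns[j - k]) ** 2 - 1
                row.append(row[k - 1] + (row[k - 1] - T[j - 1][k - 1]) / factor)
            T.append(row)
        return T[-1][-1]

    # Check on y' = y over a single macro step to t = 1: the extrapolated value is
    # far more accurate than any single modified-midpoint pass.
    y1 = gbs_step(lambda t, y: y, 0.0, np.array([1.0]), 1.0, levels=6)
    print(abs(y1[0] - np.e))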
That isn't to say B-S methods aren't worth anything. The interesting thing is that they are a way to generate arbitrary order RK methods (even if no specific order is efficient, it gives a general scheme) and these RK methods have a lot of parallelism that can be exploited. We really worked through this a few years ago building parallelized implementations. The results are here: https://ieeexplore.ieee.org/abstract/document/9926357/ (free version https://arxiv.org/abs/2207.08135) and are part of the DifferentialEquations.jl library. The results are a bit mixed though, which is why they aren't used as the defaults. For non-stiff equations, even with parallelism it's hard to ever get them to match the best explicit RK methods, they just aren't efficient enough.
For stiff equations, if you are in the regime where BLAS does not multithread yet (basically <256 ODEs, so up to 256x256 LU factorizations, but this is BLAS/CPU-dependent), then you can use parallelization in the form of the implicit extrapolation methods in order to factorize different matrices at the same time. Because you're doing more work than something like an optimized Rosenbrock method, you don't get strictly better, but by using this to get a high-order method that exploits parallelism where other methods end up being serial, you can get a good 2x-4x if you tune the initial order and everything well. This is really nice because for 4-200 ODEs this ends up being the fastest method for stiff ODEs that we know of right now according to the SciMLBenchmarks https://docs.sciml.ai/SciMLBenchmarksOutput/stable/, and that range is a sweet spot where "most" user code online tends to be. However, since it does require tuning a bit to your system, setting threads properly, having your laptop plugged in, using more memory, and possibly upping the initial order manually, it's not the most user-friendly method. Therefore it doesn't make a great default, since you're putting a lot more considerations into the user's mind for a 2x-4x... but it's a great option to have for the person really trying to optimize. This also suggests different directions that would be more fruitful and would beat this algorithm pretty handily... but that research is in progress.
This is taught in the intro class in basic numerics. Methods are considered more robust and practical if they have larger stability regions. Von Neumann is the OG stability analyst for PDEs.
The whole point of the article is that "methods are considered more robust and practical if they have larger stability regions" isn't (or at least shouldn't be) necessarily true :-/.
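A tiny numerical illustration of that point, on the Dahlquist test problem y' = lambda*y (the lambda and step sizes below are arbitrary): implicit Euler's amplification factor 1/(1-z) keeps it stable at any step size, while explicit Euler's 1+z blows up once |1+z| > 1, but stability alone says nothing about how small the step must be to actually resolve the solution.

    import numpy as np

    lam = -50.0                            # stiff-ish test problem y' = lam*y on [0, 1]

    def explicit_euler(h):
        y = 1.0
        for _ in range(round(1.0 / h)):
            y *= 1 + h * lam               # amplification factor R(z) = 1 + z
        return y

    def implicit_euler(h):
        y = 1.0
        for _ in range(round(1.0 / h)):
            y /= 1 - h * lam               # amplification factor R(z) = 1 / (1 - z)
        return y

    exact = np.exp(lam)
    for h in (0.1, 0.01, 0.001):
        z = h * lam
        print(h, abs(1 + z), abs(1 / (1 - z)),
              abs(explicit_euler(h) - exact), abs(implicit_euler(h) - exact))
    # At h = 0.1 (z = -5) explicit Euler is outside its stability region and explodes,
    # while implicit Euler stays bounded; both still need small steps to resolve the
    # transient accurately, which is the "stability != accuracy" point of the article.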
I was expecting a mention of symplectic ODE solvers, although perhaps that was beyond the scope of the blog post. For Hamiltonian ODEs, you can design methods that explicitly preserve energy, outperforming more generic methods.
Yeah, that's a separate post https://scicomp.stackexchange.com/questions/29149/what-does-.... I wanted to keep this post as simple as possible. Once you can show cases where explicit Runge-Kutta methods outperform (by some metrics) implicit Runge-Kutta methods, it leads to a whole understanding of "what matters is what you're trying to measure". And then yes, symplectic integrators are for long time integrations (explicit RK methods will show lower error for shorter times, so it's specifically longer time integrations on symplectic manifolds, though there are implicit RK methods which are symplectic, but ... tradeoff tradeoff)
> Those kinds of models aren’t really realistic? Energy goes to infinity, angular momentum goes to infinity, the chemical concentration goes to infinity: whatever you’re modeling just goes crazy! If you’re in this scenario, then your model is probably wrong. Or if the model isn’t wrong, the numerical methods aren’t very good anyways.
Uh... that's not how it works. The whole point of making a model and using a simulator is you don't already know what's going to happen. If your system is going to blow up, your model needs to tell you that. If it does, that's good, because it caught your mistake. And if it doesn't, it's obviously wrong, possibly in a deadly way.
Having worked on non-linear simulators/ODE solvers off and on for a decade, I agree and disagree with what you're saying.
I agree with you that that is 100% the point: you don't already know what's going to happen and you're doing modelling and simulation because it's cheaper/safer to do simulation than it is to build and test the real physical system. Finding failure modes, unexpected instability and oscillations, code bugs, etc. is an absolutely fantastic output from simulations.
Where I disagree: you don't already know what's going to happen, but you DO know generally what is going to happen. If you don't have, at a minimum, an intuition for what's going to happen, you are going to have a very hard time distinguishing between "numerical instability with the simulation approach taken", "a bug in the simulation engine", "a model that isn't accurately capturing the physical phenomena", and "an unexpected instability in an otherwise reasonably-accurate model".
For the first really challenging simulation engine that I worked on early on in my career I was fortunate: the simulation itself needed to be accurate to 8-9 sig figs with nanosecond resolution, but I also had access to incredibly precise state snapshots from the real system (which was already built and on orbit) every 15 minutes. As I was developing the simulator, I was getting "reasonable" values out, but when I started comparing the results against the ground-truth snapshots I could quickly see "oh, it's out by 10 meters after 30 minutes of timesteps... there's got to be either a problem with the model or a numerical stability problem". Without that ground truth data, even just identifying that there were missing terms in the model would have been exceptionally challenging. In the end, the final term that needed to get added to the model was Solar Radiation Pressure; I wasn't taking into account the momentum transfer from the photons striking the SV and that was causing just enough error in the simulation that the results weren't quite correct.
Other simulations I've worked on were more focused on closed-loop control. There was a dynamics model and a control loop. Those can be deceptive to work on in a different way: the open-loop model can be surprisingly incorrect and a tuned closed-loop control system around the incorrect model can produce seemingly correct results. Those kinds of simulations can be quite difficult to debug as well, but if you have a decent intuition of the kinds of control forces that you ought to expect to come from the controller, you can generally figure out if it's a bad numerical simulation, a bad model, or a good model of a bad system... but without those kinds of gut feelings and maybe some back-of-the-envelope math it's going to be challenging and it's going to be easy to trick yourself into thinking it's a good simulation.
Love these kinds of comments where I get to learn about things that I haven't played with.
> the simulation itself needed to be accurate to 8-9 sig figs with nanosecond resolution
What kind of application would require such demanding tolerances? My first thought was nuclear fission. But then the fact that you had one in orbit sending a data feed every 15 mins imploded my fanciful thinking.
Ahh, I really wish I could go into all of the details because it was a really cool project. The high-level answer (which is also related to the software-defined radio post that's on the front page right now) is that it was simulating the orbit of a satellite such that we could simulate the signals the satellite was transmitting with enough accuracy to implicitly get Doppler shift. That is... we didn't explicitly model Doppler shift in any piece of the system, it just showed up as a result of accurately modelling the orbit of the satellite's position and velocity relative to the receiver on Earth.
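For anyone curious how Doppler "falls out" without being modelled explicitly: once the simulator gives you the transmitter's position and velocity relative to the receiver, the received frequency follows from the range rate alone. A back-of-the-envelope sketch (non-relativistic; the carrier frequency and geometry below are made-up illustrative numbers, not from the project described):

    import numpy as np

    C = 299_792_458.0                      # speed of light, m/s
    f_tx = 1_575.42e6                      # example L-band carrier, Hz

    def received_frequency(r_sat, v_sat, r_rx, v_rx):
        """First-order Doppler: f_rx = f_tx * (1 - range_rate / c)."""
        rel_pos = r_sat - r_rx
        rel_vel = v_sat - v_rx
        range_rate = rel_pos @ rel_vel / np.linalg.norm(rel_pos)   # d|r|/dt
        return f_tx * (1.0 - range_rate / C)

    # A transmitter receding along the line of sight at ~3 km/s shifts an L-band
    # carrier by roughly -16 kHz.
    r_sat = np.array([7.0e6, 0.0, 0.0]); v_sat = np.array([3.0e3, 0.0, 0.0])
    r_rx = np.array([6.4e6, 0.0, 0.0]);  v_rx = np.array([0.0, 0.0, 0.0])
    print(received_frequency(r_sat, v_sat, r_rx, v_rx) - f_tx)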
Fantastic.
My second guess was that this was a part of a GPS like service.
Without elaborating, I'll just say... you're not wrong.