aappleby 6 minutes ago

There's another way to do this, which I have used successfully for years -

The fixed time step simulation is the ground truth "goal", and the "render simulation" is "pulled" towards the goal using an exponential moving average interpolation.

I find that moving the render simulation 90% of the way to the goal every 80 milliseconds feels good for UI interactions. That means that the rendered version is 99% of the way to the goal after 160 msec, 99.9% after 240 msec, etcetera.

This also has the nice property that the render sim is always smooth regardless of tick rate and it never overshoots the goal sim.
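A minimal sketch of that framerate-independent pull (function and parameter names are illustrative, not from any particular codebase):

```python
def ema_follow(render_value: float, goal_value: float, dt: float,
               fraction: float = 0.9, period: float = 0.080) -> float:
    """Move render_value toward goal_value so that `fraction` of the
    remaining gap is closed every `period` seconds, regardless of dt."""
    # The gap that remains after dt seconds is (1 - fraction) ** (dt / period),
    # so two 40 ms steps compose to exactly one 80 ms step.
    remaining = (1.0 - fraction) ** (dt / period)
    return goal_value - (goal_value - render_value) * remaining
```

Because the remaining gap decays exponentially, the follow never overshoots and is smooth for any frame rate, matching the properties described above.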

meheleventyone 3 days ago

There's a third way here, so rather than:

* Interpolate between the previous state and the new state by the remaining time which leaves the action delayed in the past.

* Simulate an extra variable delta tick with the extra issues you can get from going off cadence.

You can simulate a temporary tick a full fixed tick ahead and then interpolate between the current tick and the future tick. Typically the cost of ticking a new frame doesn't depend hugely on the length of time it's simulating so the cost is largely the same as the method presented. This has some advantages in terms of keeping the time delta consistent, meaning you don't need to worry about it throughout the simulation code.
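A sketch of that loop shape, with a toy one-variable "game state" standing in for the real simulation (all names are illustrative):

```python
import copy

FIXED_DT = 1.0 / 30.0

def tick(state: dict) -> dict:
    # Toy fixed-step simulation: every call advances by exactly FIXED_DT.
    new = copy.deepcopy(state)
    new["x"] += new["vx"] * FIXED_DT
    return new

def advance_and_render_state(state: dict, accumulator: float, frame_dt: float):
    accumulator += frame_dt
    while accumulator >= FIXED_DT:      # real, committed ticks
        state = tick(state)
        accumulator -= FIXED_DT
    future = tick(state)                # throwaway tick, one full step ahead
    alpha = accumulator / FIXED_DT      # fraction of the way into the next tick
    render_x = state["x"] + (future["x"] - state["x"]) * alpha
    return state, accumulator, render_x
```

The throwaway tick always simulates exactly `FIXED_DT`, so the simulation code never sees a variable delta; only the interpolation factor varies per frame.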

  • modeless 3 days ago

    This runs the risk of showing events that don't occur, as you're rewinding and replaying part of a time step every frame. For example, the player might die in the temporary time step and you'd show the first frame of death animation but then a jump input comes in later that frame and when you redo the time step for real the player doesn't die.

    • jayd16 3 days ago

      Yes but rollback and replay still looks good in practice and is considered state of the art in latency dependent games (like fighting games).

      • modeless 3 days ago

        For multiplayer, sure. The latency is large and variable, so you have to make compromises. But local inputs have low and fixed latency, and ideally you don't need the same compromises.

        The ideal solution from a latency perspective is non-fixed time steps, but there are a lot of downsides of course. The rewind/replay solution could be improved with some tweaks. You could set a flag in the temporary time step to disable all irreversible state changes like dying, so you would never see/hear a flicker of death.
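        One way to sketch that flag (the `speculative` parameter and the toy state are illustrative, not from the article):

```python
def tick(state: dict, speculative: bool = False) -> dict:
    new = dict(state)
    new["hp"] -= new["damage_per_tick"]
    # Irreversible transitions are gated off during the throwaway
    # prediction tick, so a mis-predicted death is never shown or heard.
    if new["hp"] <= 0 and not speculative:
        new["dead"] = True
    return new
```

The committed tick later re-runs the same step without the flag and applies the irreversible change for real.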

        • tomck 2 days ago

          Why do you think you would see a flicker of death?

          For local inputs, the fixed timestep is always a frame or smaller, this is not an issue unless you're over a network

          Edit: Oh I see, this is a problem with the commenter's suggestion of predicting the future though

    • meheleventyone 3 days ago

      Yes, if there was a change in input that would mean there was a significant difference, but it'd be both fleeting and rare at most game framerates. If one frame you predicted death and the next frame it was corrected, it's unlikely much of anything will have started happening. In a rollback networking situation you can often go back several frames of incorrect prediction without it being particularly noticeable, so one frame is basically immaterial.

  • jakubtomsu 3 days ago

    That's a cool idea! However, it does mean I would still have to write the interpolation code, which is not fun. But it's a good way to solve the latency issue for sure. Thanks for the idea, I'll add a note to mention it somewhere in the post.

    • meheleventyone 3 days ago

      Thanks, like most engineering things it's all about picking the set of tradeoffs that you can live with. :D

dyarosla 3 days ago

The main reason to prefer interpolation is that your fixed time step function never needs to operate on variable time, which removes a complicated dependency.

For instance, modifying character accelerations based on a fixed time step constant is far more straightforward than the methods required to work with variable time deltas (due to floating point accumulated error). This is why any action-based deterministic game (think platformers, shooters, physics based games) will opt for this.

IMO it is much more straightforward to have a render method that pre-allocates some extra memory, interpolates a handful of values and renders them vs the nondeterminism introduced in a game logic method that has to take into account variable time (especially if also networking multiplayer state). And for this you trade off a frame of visual-only latency - a choice I’d take any day.
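The point about per-tick constants can be made concrete. A sketch (the drag value and functions are illustrative):

```python
FIXED_DT = 1.0 / 60.0
DRAG_PER_TICK = 0.95  # exact per-tick constant, valid only because dt is fixed

def fixed_step_velocity(vel: float, accel: float) -> float:
    # Every tick applies the same multiply, so N ticks are reproducible
    # on any machine that runs the same number of ticks.
    return (vel + accel * FIXED_DT) * DRAG_PER_TICK

def variable_step_velocity(vel: float, accel: float, dt: float) -> float:
    # With variable dt the drag must be re-derived per frame, and the
    # result depends on how the elapsed time was sliced into frames.
    return (vel + accel * dt) * (DRAG_PER_TICK ** (dt / FIXED_DT))
```

Simulating the same second of game time as 60 small steps or 30 double-size steps gives measurably different velocities, which is exactly the nondeterminism being traded away.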

  • rkangel 3 days ago

    > modifying character accelerations based on a fixed frame constant is far more straightforward than the methods required to work with variable time deltas

    For those who don't know - the reason this is hard is because of the different amounts of maths error that can accumulate between the two approaches (usually FP precision error). Doing some maths in 10 increments a tenth of the size will likely end up with a slightly different value than doing it once in a full size increment.

    This is particularly important in multiplayer games where multiple players need to be able to do the same calculations and get the identical result. It is not good if the world begins to diverge just because you've got different frame rates!
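    A one-line demonstration with standard IEEE 754 doubles: advancing one second in ten tenth-size steps does not equal one full-size step.

```python
# Ten increments a tenth of the size accumulate rounding error that a
# single full-size increment does not:
many_small = sum(0.1 for _ in range(10))
one_big = 1.0
diverged = many_small != one_big  # True: the two paths disagree
```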

Mathnerd314 3 days ago

For reference, SuperTux's code: https://github.com/SuperTux/supertux/blob/8d79ac7ce4db5e4225... (disclosure: not code I wrote, but I've tweaked it here and there)

The idea is similar: just simulate one logical step at a time, with the fixed timestep (this is important because SuperTux uses simple Euler integration, which is timestep-sensitive). But there is tracking code that sleeps / adds extra logical steps between frames, so the rate of logical frames ends up corresponding closely to the rate of rendered frames. And as with the final solution here, there's no interpolation in rendering: you just display the latest game state without storing the previous one.

thegeomaster 3 days ago

One important approach is missing: extrapolation.

Usually, your entities all have velocities, which you can use to extrapolate from the last simulated state to the current one (after less than dt time has passed). For things like visual effects, you'd have to have a custom extrapolate implementation, which is not really different from a custom interpolate implementation that you need for interpolation.

This eliminates the lag issue, and at anywhere close to 60FPS, looks perfectly fine. It will look strange at very low framerates, but at that point, you can just automatically switch it off.
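The per-entity extrapolation itself is tiny; a sketch (names illustrative):

```python
def extrapolate_position(pos: float, vel: float, time_since_tick: float) -> float:
    # Project the last simulated state forward by the time that has
    # elapsed since the tick, instead of blending toward a past state.
    return pos + vel * time_since_tick
```

The render loop calls this with `time_since_tick` in `[0, dt)`, so no second copy of the game state is ever needed.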

You do need a way to extrapolate game state, which is slightly painful, but the author's proposed solution has big drawbacks (which he hints at). Since it touches all game state each frame (even though it's "just" a memcpy), it completely changes the performance characteristics of your main loop.

Without this, the complexity of your game step is linear in the number of updated or rendered entities, so you can add large amounts of additional state at any time, as long as only a small part of it will be visible/relevant to update each frame.

With the author's approach, your step complexity is linear in the state size. You basically have an additional write for all state, which gives you a very restrictive upper limit. It's not just AAA games - as soon as you add a particle system, you've created a great many entities which you now need to memcpy every frame.

The scalable solution to this complication is copy-on-write, which is more complexity... or, bite the bullet, write that extrapolation function, and enjoy your freedom to introduce crazy particles, physics, MMO world state, or whatever you want! At real-world framerates, it will look no different.

  • jakubtomsu 3 days ago

    Extrapolation has some very serious drawbacks in action games which make it unusable though. You cannot just move entities forward in time by velocity*delta, because that ignores all collisions with the world. So you end up with a lot of weird jitter and ugly effects like that. And once you start adding support for things like collision while extrapolating, why not just tick the entire game;)
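    A tiny illustration of the collision problem (wall position and numbers made up):

```python
WALL_X = 10.0  # solid wall the real simulation would clamp against

def extrapolate_naive(pos_x: float, vel_x: float, t: float) -> float:
    # Pure pos + vel*t extrapolation knows nothing about the wall...
    return pos_x + vel_x * t

# ...so a fast entity renders inside the wall until the next real tick
# snaps it back out, producing the jitter described above.
rendered_x = extrapolate_naive(9.9, 50.0, 0.016)  # ~10.7, past the wall
```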

    • jayd16 3 days ago

      > why not just tick the entire game

      In a multiplayer scenario, you might not have enough information to tick the entire game. You'll probably want to extrapolate input from other users, at least.

      • meheleventyone 3 days ago

        On the contrary in multiplayer extrapolation is the last thing you want to do because the time to correction is long and the prediction is really coarse. It essentially results in a lot of corrections, applied late which looks and feels awful.

    • jakubtomsu 3 days ago

      (collisions are by far not the only issue here, it's just the first one you run into)

  • dyarosla 3 days ago

    Extrapolation is one of those ideas that’s not actually used in practice; at least I’ve yet to see it used in any games in any meaningful capacity.

    It’s just far too complicated and requires custom logic while resulting in worse results than more straightforward options. Even for multiplayer games the “extrapolation” is often done by repeating input states and running the regular game loop.

    I also wouldn’t equate the interpolation approach with extrapolation. With interpolation you interpolate between two valid states. With extrapolation you produce a potentially invalid state (i.e. a character that’s inside a wall). The only workaround for the latter issue is to perform a full game tick, at which point you’re no longer doing extrapolation.

    • jayd16 3 days ago

      > Extrapolation is one of those ideas that’s not actually used in practice

      This is how VR frame doubling works, no? "Timewarp"/"Spacewarp"

      Also I would think that a lot of netcode would be considered extrapolation. You'd extrapolate a peer's input or velocity (and perhaps clean it up with further local simulation) and then deal with mis-prediction when changes are replicated.

      • dyarosla 3 days ago

        For the former, Timewarp is used at an OS level to perturb the visibly rendered quad to match the display time orientation. There’s no extrapolation: the rendered frame is simply adjusted to account for the change in headset orientation.

        For the latter, as I mentioned, the extrapolation is not on velocity: you still compute regular game ticks but by holding the input constant. This is quite different from extrapolating velocities.

        • jayd16 3 days ago

          Spacewarp takes the motion vectors and depth buffer and generates new frames from the extrapolated motion. It's detailed here: https://developers.meta.com/horizon/blog/introducing-applica...

          > For the latter, as I mentioned, the extrapolation is not on velocity: you still compute regular game ticks but by holding the input constant. This is quite different from extrapolating velocities.

          Replicating velocity is fairly common. Unreal's character movement replicates velocity and not inputs. I would personally argue that even doing a full game tick with replicated velocities is extrapolation. I'm not sure what the distinction would be or what counts as a full tick with error correction vs local extrapolation per tick with error correction.

          • dyarosla 3 days ago

            I agree- what’s the difference between error correction and a full tick? At what point do you draw the line on error correction?

            Extrapolation is often used to mean extrapolating values without error correction, at which point the results are less than stellar.

            Spacewarp is, like Timewarp, a way to match the render frame time on a headset, but by creating a warp of the output image. I’ll concede that this is technically extrapolation, but it’s far from what’s generally referred to when describing updating entity values in game loops.

davexunit 3 days ago

It's an interesting approach because the one frame of lag introduced by the typical interpolation strategy is not ideal, but I don't think many games can make use of it. As the article states, game state needs to be in a relatively small contiguous chunk of memory with no pointers to chase. That's a very serious constraint.

akira2501 3 days ago

    // Bitwise OR the bit flags
    tick_input.actions[action] += flags
Why?
brid 3 days ago

"Note: This method probably won’t work very well in cases when game_tick can take a very significant amount of time, or the game state is gigantic, e.g. in a big AAA game which needs to run the physics engine and whatnot."

eru 3 days ago

It's somewhat amusing, that someone smart working in a language like Go would come to the same conclusion that Haskell practically forces on you (unless you work to avoid it): by default, your state should be copy-on-write.

> For this reason we duplicate the entire game state into a new, temporary render-only game state, and simulate the tick on this one. That way the fixed timestep game state stays untouched.

Of course, because it's Go, they have to duplicate everything explicitly. A poor-man's copy-on-write.

> Tangent: determinism and replay

> You need to make sure entities are always updated in the same order. This means deterministic O(1) datastructures like pools are your friend.

That's not actually required. But I guess Go makes this the least painful avenue forward?

  • meheleventyone 3 days ago

    > > Tangent: determinism and replay

    > > You need to make sure entities are always updated in the same order. This means deterministic O(1) datastructures like pools are your friend.

    > That's not actually required. But I guess Go makes this the least painful avenue forward?

    The formulation isn't complete but they're correct for what they're getting at. It should be:

    "You need to make sure entities are always updated in the same order iff the order matters to the end result."

    For example, suppose you move entity A before entity B, and collision-check each move to make sure it's valid. If A and B would intersect after moving without collision checking, you will end up with different positions of A and B depending on which one moves first.

    Your choices then are to make sure entity update order is deterministic or rework the problem so it doesn't have the ordering dependency. The latter can be quite hard to do and the conditions where there is an ordering dependency can be subtle and unintuitive making it hard to cover all cases.
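    A minimal illustration of that ordering dependency (1-D positions and the collision rule are made up for the example):

```python
def move_with_check(mover: float, other: float, step: float) -> float:
    # Reject the move if it would land on the other entity's position.
    target = mover + step
    return mover if target == other else target

# Both entities try to move right; B currently occupies A's target spot.
a, b = 0.0, 1.0

# Order 1: A moves first. A's move onto B is rejected, then B moves away.
a1 = move_with_check(a, b, 1.0)   # blocked, stays at 0.0
b1 = move_with_check(b, a1, 1.0)  # moves to 2.0

# Order 2: B moves first. B vacates the spot, so A's move succeeds.
b2 = move_with_check(b, a, 1.0)   # moves to 2.0
a2 = move_with_check(a, b2, 1.0)  # moves to 1.0

# Same inputs, different final position for A depending on update order.
```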

  • pixelpoet 3 days ago

    Pretty sure that's Odin, not Go.

    • eru 3 days ago

      Oh, I used my browser's `inspect` on the source code snippet posted, and they were using Go syntax highlighting, so I assumed the code is written in Go.

      Thanks for pointing this out.

      Well, I guess my original point applies to Odin instead of Go then.

  • diggan 3 days ago

    Maybe I misunderstand your comment, but seems like you're misunderstanding the contents of the article/the point?

    They're doing a duplicate of the state not because it's required in the language or whatever, but because they want to predict something without affecting the actual state. The only way of doing that is running the actual code against a new state duplicated from the existing, actual state.

    Not sure what the language has to do with it, you'd have to do the same in any language, explicitly or implicitly.

    • eru 3 days ago

      In Haskell (and Erlang etc.), all data structures 'want' to be copy-on-write by default. That means you don't need to do anything extra to act as if your code is running against a duplicate: it's behaving like that anyway by default.

      Git and bcachefs work the same way, btw: you never really change anything, you just make a new version (that shares most of its storage with the previous version), and then at the end update a pointer.

      Making a 'copy' in these systems is basically O(1), but making updates might cost you O(log n) (details depend on the data structure in question, some can do O(1)), and in general you want some form of garbage collection, or some type system that really cares about tracking these kinds of thing statically; think something like Rust's borrow checker on steroids.
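      In Python terms the contrast can be sketched like this (illustrative only; Python's built-ins are not persistent structures, so immutability is emulated with a frozen dataclass):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class GameState:
    score: int
    level: int

s0 = GameState(score=0, level=1)
# "Updating" returns a new version; s0 is untouched, so a speculative
# tick needs no explicit bulk copy and no rollback afterwards.
s1 = replace(s0, score=10)
```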

      > Not sure what the language has to do with it, you'd have to do the same in any language, explicitly or implicitly.

      If instead your language or system encourages mutation, then making a complete O(n) copy of your state is the easiest way to get the desired behaviour.

      • diggan 2 days ago

        > In Haskell (and Erlang etc.), all data structures 'want' to be copy-on-write by default. That means you don't need to do anything extra to act as if your code is running against a duplicate: it's behaving like that anyway by default.

        Right, but still, that's an implementation detail. It could have been implicit, or it can (like in the authors case be explicit), how is that important?

        I'm well aware of languages that do this implicitly, as I'm mostly writing Clojure code, which is all about immutability and persistent data structures. I still don't understand why you want to highlight that this could be implicit instead of explicit; what the author had to do wouldn't change really.

      • meheleventyone 2 days ago

        This isn't really trying to emulate copy-on-write though. It's intentionally making a copy temporarily to mutate it further whilst keeping the original ready to be mutated on the next tick. They share superficial similarity but that's about it.

        As a random tangent: whilst memcpy in the general case is going to be O(n), given that in this program n is a fixed size, the operation in context is always going to be O(1). Fundamentally, though, it's not a very good way to compare the performance profiles of the approaches to this problem on real hardware.

        • eru 2 days ago

          > As a random tangent, whilst memcpy in the general case is going to be O(n) given that in this program the n is a fixed size the operation in context is always going to be O(1).

          Well, with a fixed n, every operation trivially becomes O(1).

          > This isn't really trying to emulate copy-on-write though. It's intentionally making a copy temporarily to mutate it further whilst keeping the original ready to be mutated on the next tick. They share superficial similarity but that's about it.

          Having to copy everything is exactly what you need to achieve this effect, when you don't have copy-on-write. You need to make a copy somewhere. (Or otherwise, explicitly roll back any changes you made.)