I’m not sure what you mean. Do you have a source for a more complete explanation?
H.261 is way older than Doom and already uses motion compensation and inter-picture prediction.
The aforementioned demos work by recording all input control states for every tic and then replaying them later in-engine. But I don’t think that parallels video encoding.
EDIT: Sorry, I misread. But yeah, Doom does not send deltas; multiplayer also works by sending, every tic, the same input states that demo recording saves.
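To make the record/replay idea concrete, here’s a minimal C sketch loosely modeled on Doom’s per-tic input command (ticcmd_t). The field names, sizes, and buffer limits are illustrative assumptions, not the engine’s actual layout:

```c
/* Sketch of per-tic input recording and replay, loosely modeled on
 * Doom's ticcmd_t. Fields and limits are illustrative only. */

typedef struct {
    signed char   forwardmove; /* forward/backward speed for this tic */
    signed char   sidemove;    /* strafe speed for this tic */
    short         angleturn;   /* turning delta for this tic */
    unsigned char buttons;     /* fire/use/weapon-change bits */
} ticcmd_t;

#define MAX_TICS (35 * 60 * 10) /* ~10 minutes at 35 tics per second */

static ticcmd_t demo_buf[MAX_TICS];
static int      demo_len = 0;

/* While recording: store the command built from live input this tic. */
void demo_record(const ticcmd_t *cmd)
{
    if (demo_len < MAX_TICS)
        demo_buf[demo_len++] = *cmd;
}

/* While playing back: feed the stored command into the same game loop
 * that normally consumes live input. Returns 0 when the demo ends. */
int demo_play(int tic, ticcmd_t *out)
{
    if (tic >= demo_len)
        return 0;
    *out = demo_buf[tic];
    return 1;
}
```

Because the simulation is deterministic given the same per-tic inputs, replaying the buffer reproduces the original run, and the same property is what lets multiplayer exchange only input states rather than world deltas.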
I am probably misremembering details, as this was something we learned about in my algorithms class in college about a decade ago. I’ll see if I can dig up more details after work today.