TAGS: SIMULATION // ARCHITECTURE // PERFORMANCE

Data Age vs. Latency: What “Performance” Means in Integrated Simulation Systems

In an integrated simulation setup, people use the word performance like it is one number. In reality, it is usually two different problems mixed together.

One problem is latency. The other is data age. If you don’t separate them, you end up blaming the wrong machine, or you waste time optimizing code that was never the issue.

Latency is the time it takes data to move through a stage. In middleware terms, it is what happens from the moment the packet reaches your application until the moment you send it out. This is the part you own.

Data age is how old the state is when it becomes useful at the other end. Data age is not only your middleware. It belongs to the whole chain.

It can grow before it reaches you. It can also grow after you send it. In integration work, that is normal.

There is also a third actor people forget. Network infrastructure adds its own delay. Switches, routers, firewalls, even weird NIC driver paths. That latency belongs to neither the producer nor your middleware. It is just part of the pipeline.

A formula that helps

When I want to explain data age in a simple way, I think of it like this:

Data age = latency + sampling delay + buffering

    Black box: event happens
          │  sampling + network
          ▼
    Middleware processing
          │  network latency
          ▼
    Black box: internal logic
          │  display latency
          ▼
    Visible result

    ◀──── DATA AGE grows across the whole chain ────▶

End-to-end view. Latency is internal to each stage; data age is the cumulative sum of the entire pipeline.

Latency is the travel and processing time through stages. Sampling delay is the “waiting for the next update” problem. Buffering is what happens when any box in the middle decides to queue instead of passing data forward.

This formula is not strict math. It is a mental model. But it helps you stop mixing different problems together.

If your producer publishes at 5 Hz, the sampling delay is already there: on average the state you hold is 100 ms behind, and just before the next update it is 200 ms behind. Even with a perfect network and perfect middleware, you are still waiting for the next update.

If any system buffers deeply, buffering becomes the real enemy. Your middleware can run fast and still deliver old state, because the state was already stale by the time it reached you.
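To make the three terms concrete, here is a toy decomposition with timestamps. Every number and name is made up for illustration; the only real content is which subtraction belongs to which term:

```python
# One state update moving through the pipeline (times in seconds).
event_time   = 0.000   # the physical event happens inside the producer
sample_time  = 0.150   # the producer's next 5 Hz tick finally samples it
recv_time    = 0.170   # the packet reaches our middleware
send_time    = 0.172   # we forward it after our processing
visible_time = 0.210   # the consumer finally shows it

sampling_delay = sample_time - event_time    # waiting for the next update
our_latency    = send_time - recv_time       # the only part we own
downstream     = visible_time - send_time    # network + buffering + display
data_age       = visible_time - event_time   # how old the state is when useful

print(f"sampling delay: {sampling_delay * 1000:.0f} ms")  # 150 ms
print(f"our latency:    {our_latency * 1000:.0f} ms")     # 2 ms
print(f"data age:       {data_age * 1000:.0f} ms")        # 210 ms
```

Notice that "our latency" is 2 ms out of a 210 ms data age. That ratio is the whole argument of this article.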

What you control and what you don’t

Most machines in an integration are a black box. You don’t rewrite their internal architecture. You don’t get to change how they schedule threads or manage queues.

So your job is to control your boundary. Receive in the shortest time possible. Do predictable work. Send in the shortest time possible. Then measure your portion and document it.

That is how you prove whether the delay is created inside your software or outside it.
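A minimal sketch of that boundary discipline in Python. The packet handling is a placeholder and every name is illustrative; the point is the two timestamps and the rolling measurement:

```python
import time
from collections import deque

latencies = deque(maxlen=1000)   # rolling window of *our* per-packet latency

def transform(packet):
    return packet                # stand-in for the real translation logic

def send(packet):
    pass                         # stand-in for the outbound socket write

def handle_packet(packet):
    recv = time.perf_counter()   # clock starts when the packet reaches us
    send(transform(packet))      # predictable, bounded work, then straight out
    latencies.append(time.perf_counter() - recv)

def our_p99():
    """The number you document: near-worst-case latency of our stage only."""
    if not latencies:
        return 0.0
    return sorted(latencies)[int(len(latencies) * 0.99)]
```

With a number like that logged per build, "whose delay is it" becomes a measurement instead of a debate.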

Black box does not always mean zero control though. With documentation and experience, you can sometimes tune these systems without touching their code: update rates, buffer depths, network settings, priority modes, and similar controls. Sometimes it helps a lot, sometimes it barely moves the needle, but it is the only lever you usually have.

A real example I faced

I had to integrate two systems that were not designed for each other.

One system was a simulation engine. It was designed to send entity updates at a low rate, around 5 Hz. That was not a bug. That was the design. It also reported velocity and speed, but it still sent updates at that slow rate.

On the other side was an image generator. The IG needs high-rate position updates so it can draw the entity every frame and keep the motion smooth. If the position only jumps every 200 ms, the aircraft doesn’t look like it is flying. It looks like it is teleporting.
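The arithmetic of that mismatch is worth writing down. The 5 Hz rate is from the example; the 60 FPS frame rate and 100 m/s aircraft speed are my assumptions for illustration:

```python
update_rate_hz = 5      # the simulation engine's design rate, from the example
frame_rate_hz  = 60     # an assumed IG frame rate; real IGs vary
speed_m_s      = 100.0  # an assumed aircraft speed for illustration

frames_per_update = frame_rate_hz / update_rate_hz  # identical frames drawn
jump_interval_ms  = 1000 / update_rate_hz           # time between jumps
jump_m            = speed_m_s / update_rate_hz      # distance of each jump

print(frames_per_update, jump_interval_ms, jump_m)  # 12.0 200.0 20.0
```

Twelve identical frames followed by a 20 m jump is exactly the teleporting effect described above.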

The key detail here is that the IG did not predict the aircraft position from velocity. Its internal logic cared about position updates. Velocity being present in the packet did not mean the IG was using it for extrapolation.
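For contrast, here is what first-order extrapolation (dead reckoning) from velocity would look like. This is a generic sketch of the technique the IG did not apply, with made-up numbers, not anything from the IG's actual code:

```python
def extrapolate(pos, vel, last_update_t, now):
    """First-order dead reckoning: advance the last known position
    along the last known velocity. Purely illustrative."""
    dt = now - last_update_t
    return tuple(p + v * dt for p, v in zip(pos, vel))

# Last 5 Hz update arrived at t = 0.0 s; a frame is drawn at t = 0.1 s,
# halfway to the next update:
predicted = extrapolate(
    pos=(1000.0, 500.0, 300.0),   # made-up position, metres
    vel=(100.0, 0.0, -5.0),       # made-up velocity, m/s
    last_update_t=0.0,
    now=0.1,
)
print(predicted)
```

A consumer that does this fills the 200 ms gaps on its own. A consumer that does not, like this IG, is entirely at the mercy of the producer's update rate.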

So you end up with a mismatch in the nature of the two systems.

The simulation engine was built to publish entity state at a low rate. The image generator was built to consume high-rate position updates for smooth rendering.

Here you don’t have control over the internal code of either system. The job is still to make them integrate.

This is where data age and sampling delay become more useful than raw middleware latency. My middleware could be fast. The motion could still look bad, because the producer is sampling slowly and the IG is not filling the gaps.

The fix was tuning the simulation engine's settings, guided by documentation and experience, to push the update rate high enough for the IG to draw the aircraft position smoothly in real time, without the teleporting effect.

This is the reality of integration work. Sometimes performance is not about “making code faster”. Sometimes performance is about aligning two systems that were built with different purposes.

The simple takeaway

Latency tells you how fast your stage is.

Data age tells you how current the whole system is.

And in integrated simulation systems, data age is often controlled by sampling delay, buffering, and even network infrastructure. Your middleware can only control its piece. The rest is black boxes, documentation, and whatever tuning the system allows.