For some reason, everybody loves to use frames per second as a unit of gaming performance. This is understandable. After all, it’s what displays and film use, more is better, and 60 per second is easier to conceptualize than one per 16.67 milliseconds. While intuitive, FPS has some serious problems as a performance metric, and the purpose of this post is to explain why graphics research and engineering tend to prefer frame time instead.
The reason we care about FPS at all is user responsiveness. A user wants an experience that is fluid and smooth, and responds immediately to their input. In real-time games, this not only feels better but can also give the player an advantage, because they are able to see in-game events sooner and thus respond to them more quickly. For a good user experience, we want the user to receive each visual stimulus as quickly as possible. None of this has anything to do with frames per second. What we’re really after is the amount of time the user ends up waiting before they receive the next stimulus, which is actually seconds per frame, but usually written in milliseconds because human beings prefer whole numbers.
It turns out that the improvement in seconds per frame shrinks rapidly as FPS increases:
This graph shows that there is a clear point of diminishing returns here. At some point, the limits of the human brain will kick in and make any further frame time improvements imperceptible. Different users and games will draw that line at different places, but there is definitely a cutoff point, beyond which all FPS numbers are effectively the same. So, the higher the FPS value gets, the less it differs from the next lowest value. This doesn’t work as a unit of measurement. It’s kind of like having one kilogram for weighing elephants and a different one for weighing mice.
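The diminishing returns are easy to see with nothing more than the conversion between the two units. A quick sketch (plain arithmetic, no assumptions beyond frame time = 1000 / FPS):

```python
# Frame time in milliseconds for a given frame rate.
def frame_time_ms(fps):
    return 1000.0 / fps

# Each doubling of FPS buys back less and less absolute time.
rates = [30, 60, 120, 240, 480]
for lo, hi in zip(rates, rates[1:]):
    saved = frame_time_ms(lo) - frame_time_ms(hi)
    print(f"{lo:>3} -> {hi:>3} FPS saves {saved:5.2f} ms per frame")
```

Going from 30 to 60 FPS saves over 16 ms per frame; going from 240 to 480 FPS saves barely 2 ms, even though the FPS delta is eight times larger.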
To illustrate the misleading qualities of FPS, here’s some data showing how much faster a certain game gets under DX12 with a high-end GPU and different CPUs. The graphs were generated from the measurements given in this article. Their testing was done on the minimal quality level at 1920×1080, but I can tell you with authority that the same speedups will be measured on higher quality levels, given this GPU and resolution.
Based on the pink line, you might be inclined to think that some CPUs are more “DX12-capable” than others, but you’d be misled. The actual performance gain is similar across all of the CPUs and doesn’t really correlate with the change in FPS. Even more interesting is that, while a consistent gain will feel faster at slower speeds, it will register as a smaller FPS improvement.
This brings us to our last point about FPS. An FPS delta (change in FPS) is a completely useless number. You cannot tell how much better something got using only the FPS improvement. You have to know what the baseline value was, and then you still have to do some math in your head to figure out how much better that is in relative terms, because a change of, say, 5 FPS is a lot more significant if we started at 20 than at 200.
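Using the document’s own example, here is what a 5 FPS delta actually works out to at the two baselines (just the FPS-to-milliseconds conversion, nothing else assumed):

```python
# Milliseconds of frame time saved by going from one frame rate to another.
def ms_saved(fps_before, fps_after):
    return 1000.0 / fps_before - 1000.0 / fps_after

# The same +5 FPS delta at two different baselines.
for base in (20, 200):
    saved = ms_saved(base, base + 5)
    pct = 5.0 / base * 100.0
    print(f"{base} -> {base + 5} FPS: {saved:.3f} ms saved ({pct:.1f}% faster)")
```

At a 20 FPS baseline, +5 FPS saves 10 ms per frame, a 25% speedup; at 200 FPS, the same delta saves about 0.12 ms, a 2.5% speedup. The delta alone tells you nothing.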
Using frame time, once we’re accustomed to it, allows us to express costs and benefits clearly using a fixed frame of reference. A 60 Hz frame is 16.67 ms, and is considered good. A 1 ms change is always a 1 ms change no matter what, though whether it matters depends on whether frame time is already below the target.
If we are trying to assess the value proposition for a piece of hardware, then FPS per dollar is one of the worst possible metrics. At a fixed price point, as you increase the FPS per dollar, the actual added value approaches zero. You should not use FPS per dollar when making purchasing decisions. The only people who benefit from its use are GPU salesmen.
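To see how FPS per dollar can steer a purchase the wrong way, consider a sketch with two hypothetical upgrades at the same made-up price (the cards, prices, and frame rates here are invented for illustration, not from the article):

```python
# Two hypothetical upgrades, both costing the same $500.
PRICE = 500.0

def value_metrics(fps_before, fps_after):
    """Return (FPS gained per dollar, ms of frame time saved per dollar)."""
    fps_per_dollar = (fps_after - fps_before) / PRICE
    ms_per_dollar = (1000.0 / fps_before - 1000.0 / fps_after) / PRICE
    return fps_per_dollar, ms_per_dollar

# Upgrade A: 200 -> 300 FPS. Huge FPS delta, tiny real-time saving.
a_fps, a_ms = value_metrics(200, 300)
# Upgrade B: 40 -> 60 FPS. Small FPS delta, large real-time saving.
b_fps, b_ms = value_metrics(40, 60)

print(f"A: {a_fps:.3f} FPS/$, {a_ms * 1000:.2f} us/$ saved")
print(f"B: {b_fps:.3f} FPS/$, {b_ms * 1000:.2f} us/$ saved")
```

FPS per dollar ranks upgrade A five times higher, yet upgrade B saves five times more actual frame time per dollar, and it is B that would transform the user’s experience.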