For the avid individual who wishes to gauge their own system's performance, or to monitor frequencies, framerates and temperatures in real time during gameplay, MSI's Afterburner tool, or more specifically RivaTuner RTSS, has long dominated the field with its minimalistic yet descriptive on-screen display.

Others choose the built-in OSD offered by their GPU vendor of choice, such as AMD's Radeon Overlay or NVIDIA's Performance Overlay, but these options leave a lot to be desired for some, unless you're in the AMD camp that is.

Intel, however, have been working endlessly to validate their allegedly "first" attempt at entering the graphics card market, bolstering their entry-level components with supportive and frequent software updates.

While Intel's ARC Alchemist graphics cards remain extremely limited in terms of legacy API support and overall performance compared to equivalent GeForce / Radeon GPUs, Intel are looking to revamp their own OSD utility to play catch-up with the competition, or in my eyes surpass it entirely.

Intel are boasting about their latest "Game On" graphics driver update for the ARC Alchemist series of products, promising an average 19% performance improvement in DX11 titles compared to the original launch drivers. That doesn't sound far out of the ordinary given how little optimization an entirely new product and architecture would have upon first entering the market.

But in typical fashion, Intel being Intel, they generally only opt to compare performance figures across an assortment of Free-2-Play titles that the majority of actual consumers play, such as LoL, Valorant, CS:GO, Apex Legends, Destiny 2 and Genshin Impact, which possibly has to be the first time a hardware vendor has benchmarked Genshin.

The proof is in the pudding though: Intel have been working extensively on the software end to prop up the lackluster hardware side of ARC Alchemist. While still not surpassing the likes of the RTX 3060, or god forbid the RX 6700, there is no doubt that Intel ARC GPUs perform better today than they did at launch, while still remaining a third wheel in the market, at least for now, whose products serve only as a cheap AV1 encoder for video compression and/or Twitch streaming.

The performance strides Intel have been making aren't what I'm interested in, however; that would be the introduction of a new "beta" for Intel's own PresentMon utility program.

Intel have redesigned PresentMon completely from the ground up, with this revised Beta serving as another in-game performance overlay, capture and telemetry tool that has a rather neat trick up its sleeve.

While PresentMon sadly features a disgusting color scheme that's an obvious reminder of the company who made it, the utility includes the specific metric of "GPU Busy", which you'd previously only be able to judge and monitor in-game through Intel's own Graphics Performance Analyzer (GPA). In my personal opinion, Intel's GPA should have been the universal standard in performance monitoring and "benchmarking" across the board with Tech Tubers.

But of course Intel's GPA is extremely convoluted, and tech tuber scum seemingly only wish to benchmark specific titles for mere minutes at a time; switching from MSI AB / RTSS to Intel's GPA would only waste time, resources and energy, let alone the additional time it would take to compile figures, given how their reviews are ushered out in as short a timeframe as possible.

What Intel's GPA utility does is now what Intel's PresentMon will gladly tell you in a more abbreviated manner, that being the aforementioned "GPU Busy" metric, which measures how long the GPU actually spent working on each frame; compare it against the total frame time and you can see how much of each frame the GPU spent "idling", which is just a fancy way of determining whether a particular in-game scene or sequence is CPU bound or GPU bound.

If the GPU is "waiting" rather than completely engaged in the task of rendering frames, then obviously in such a scenario the GPU's performance is being held back by the CPU.
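The idea can be sketched in a few lines of Python. This is purely an illustration of the GPU Busy concept, not PresentMon's actual output format; the function name, field layout and 95% threshold are my own assumptions.

```python
# Hypothetical sketch: classifying frames as CPU- or GPU-bound by
# comparing "GPU Busy" time against total frame time. The threshold
# and sample numbers are illustrative, not PresentMon's real schema.

def classify_frame(frame_time_ms: float, gpu_busy_ms: float,
                   threshold: float = 0.95) -> str:
    """If the GPU was busy for nearly the entire frame, the GPU is the
    bottleneck; a large idle gap means the CPU couldn't feed it fast enough."""
    busy_ratio = gpu_busy_ms / frame_time_ms
    return "GPU-bound" if busy_ratio >= threshold else "CPU-bound"

frames = [
    (16.7, 16.2),  # GPU busy ~97% of the frame
    (16.7, 8.0),   # GPU idle for roughly half the frame
]
for frame_time, gpu_busy in frames:
    print(f"{frame_time:5.1f} ms frame, {gpu_busy:5.1f} ms busy -> "
          f"{classify_frame(frame_time, gpu_busy)}")
```

The first sample frame comes out GPU-bound, the second CPU-bound, which is exactly the distinction the overlay surfaces at a glance.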

Of course Intel's Graphics Performance Analyzer goes into far more depth on such detail, but it's fantastic to see something of the sort make its way into the mainstream, especially considering how most modern games end up releasing as unoptimized abominations of fecal matter.

The modern tech tuber of today is more than happy to give you a graph depicting frame times in milliseconds, whereas before all anyone cared about reporting / recording was average framerates and the 1% and 0.1% percentile lows, which more or less capture performance drops and visual stutters.
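For anyone unfamiliar with how those percentile lows are derived from frame times, here's a minimal sketch. It follows the common "average of the worst N% of frames" convention; individual tools may compute it slightly differently.

```python
# Minimal sketch: average FPS and 1% / 0.1% "lows" from per-frame
# times in milliseconds, using the common worst-N%-of-frames method.

def percentile_low(frame_times_ms, pct):
    """FPS equivalent of the slowest pct% of frames (pct=1 -> 1% low)."""
    worst = sorted(frame_times_ms, reverse=True)
    count = max(1, int(len(worst) * pct / 100))
    slice_avg_ms = sum(worst[:count]) / count
    return 1000.0 / slice_avg_ms

# Synthetic capture: a steady ~60 FPS with 1% of frames stuttering.
frame_times = [16.7] * 990 + [33.3] * 10
avg_fps = 1000.0 / (sum(frame_times) / len(frame_times))
print(f"average: {avg_fps:.1f} FPS")
print(f"1% low:  {percentile_low(frame_times, 1):.1f} FPS")
print(f"0.1% low: {percentile_low(frame_times, 0.1):.1f} FPS")
```

The averages look healthy while the lows expose the stutter, which is precisely why reviewers report them alongside the mean.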

Being able to discern whether a game sequence is CPU or GPU limited is crucial if in-game benchmarks are to evolve into something as authentic and accurate as possible; what good is it to benchmark a scene that's so utterly fucked it cannot harness a fraction of the GPU's potential? Your numbers would be worthless.

For the time being you're able to download and install the new PresentMon Beta, which works across all hardware vendors, a tactic that NVIDIA sure as shit despises. It accurately reports real-time information for my own AMD Ryzen and Radeon hardware, so there are no problems with misread frequencies on my end.

I myself look forward to the evolution of Intel's PresentMon and will more than likely use it extensively in conjunction with Intel's GPA tool for performance captures and benchmarks.