What is Sclender, and how does it compare to other upsampling technologies like Nvidia DLSS and AMD FidelityFX?
The moniker "Sclender" stands for "scale and render", and that is exactly what it does: it works with your computer's operating system to capture an app's visual output (the "frame buffer"), then resizes and filters it to produce a new image. Sclender is distinct from technologies like DLSS and FidelityFX in two respects:
1) it is an application, not a driver
2) its processing runs on the CPU rather than the GPU
Because Sclender is not a driver, it does not interfere with driver-based upsampling technologies like DLSS and FidelityFX; in fact, it can be used alongside them. Nor are they required: Sclender can leverage your CPU's untapped power to give you a better image on its own. And unlike DLSS and FidelityFX, Sclender does not require an app to be optimized for it in order to do a good job with that app's output.
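To make the "scale and render" idea concrete, here is a minimal sketch of that kind of pipeline in Python/NumPy. The function names and the nearest-neighbor/box-filter choices are illustrative assumptions, not Sclender's actual filters; the point is only the capture → resize → filter flow.

```python
import numpy as np

def upscale_nearest(frame: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbor resize: repeat each pixel `factor` times on both axes.
    (Illustrative stand-in for Sclender's real resizing step.)"""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

def box_smooth(frame: np.ndarray) -> np.ndarray:
    """A crude 3x3 box filter as a stand-in "filtering" stage, pure CPU/NumPy."""
    h, w = frame.shape[:2]
    padded = np.pad(frame, ((1, 1), (1, 1), (0, 0)), mode="edge").astype(np.float32)
    acc = np.zeros(frame.shape, dtype=np.float32)
    for dy in (0, 1, 2):          # sum the 9 shifted copies of the image
        for dx in (0, 1, 2):
            acc += padded[dy:dy + h, dx:dx + w]
    return (acc / 9.0).astype(frame.dtype)

# A captured frame buffer would be an HxWx3 array; a 4x4 toy frame here:
frame = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
out = box_smooth(upscale_nearest(frame, 2))  # 8x8x3 result
```

A real implementation would loop this per frame and hand the result back to the compositor, but the resize-then-filter shape of the work is the same.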
Getting the most out of Sclender: High FPS at High Res
So how do you get 144 fps at 4K using Sclender? High-performance parts are key.
- RAM speed matters. The faster your RAM, the higher your throughput and the less your CPU cores have to compete for it.
- RAM channels matter. Three or more channels let all your cores work at full tilt without waiting on each other.
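The impact of RAM throughput is easy to probe. The sketch below is my own rough single-core benchmark, not a Sclender tool: it times a large array copy to estimate effective memory bandwidth. Running several instances in parallel processes is one way to watch cores competing for the same channels.

```python
import time
import numpy as np

def copy_bandwidth_gbs(size_mb: int = 256, repeats: int = 5) -> float:
    """Estimate effective memory bandwidth in GB/s from a large array copy.
    Each pass reads and writes `size_mb` MB, so 2 * size_mb MB move per copy."""
    a = np.ones(size_mb * 1024 * 1024 // 8, dtype=np.float64)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        a.copy()
        best = min(best, time.perf_counter() - start)  # keep the fastest pass
    return (2 * size_mb / 1024) / best

print(f"~{copy_bandwidth_gbs():.1f} GB/s effective copy bandwidth")
```

If the number barely rises when you add parallel instances, your cores are already saturating the memory channels you have.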
Consider a high-performance server motherboard with at least 3 RAM channels (consumer gaming platforms typically offer only 2), a server-class CPU (such as an AMD EPYC) with more cores than the game can make use of (so the game logic won't be competing with Sclender), and the fastest RAM the board supports.
One more tip: if you mix Sclender's CPU-based filters with GPU-based shaders, try not to switch back and forth between the two kinds of hardware, as this can severely degrade performance. If you start with Dejag and follow up with XBR, finish with SuperEagle rather than returning to Dejag. Likewise, if you start with XBR and switch to Dejag HQ, finish with Dejag. Use performance shaders/filters late in your filter sequencing, when their artifacts are less likely to be noticeable.
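A simple way to sanity-check a planned filter sequence is to count CPU↔GPU transitions before committing to it. In the sketch below, the device assignments (Dejag and Dejag HQ on the CPU, XBR and SuperEagle on the GPU) are my inference from the examples above, not documented Sclender behavior.

```python
# Assumed device placement, inferred from the sequencing examples in the text.
FILTER_DEVICE = {
    "Dejag": "cpu",
    "Dejag HQ": "cpu",
    "XBR": "gpu",
    "SuperEagle": "gpu",
}

def hardware_switches(chain: list[str]) -> int:
    """Count how many times a filter chain hops between CPU and GPU."""
    devices = [FILTER_DEVICE[name] for name in chain]
    return sum(1 for prev, cur in zip(devices, devices[1:]) if prev != cur)

# The recommended chain switches hardware only once...
assert hardware_switches(["Dejag", "XBR", "SuperEagle"]) == 1
# ...while bouncing back to Dejag would switch twice.
assert hardware_switches(["Dejag", "XBR", "Dejag"]) == 2
```

Fewer switches means fewer round trips of the frame between system RAM and GPU memory, which is where the performance loss comes from.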
One last thing: Microsoft recently added a feature to Windows that enables two GPUs to be used at once, even from different manufacturers and even if one is integrated and the other discrete. But the feature is far from reliable: it is prone to crashes and may actually reduce performance in some cases, so you're on your own if you try it.