It's very hard to say that a drive that sequentially reads nearly 12,500MB a second doesn't live up to expectations, especially when its performance everywhere else is better than anything else I've tested, but the sticker on the box of the Samsung 9100 Pro promising up to 14,800MB/s sticks in the craw of an otherwise perfect M.2 PCIe 5.0 SSD.
Starting at $199.99 for a 1TB capacity drive (about £155/AU$315), the 9100 Pro is Samsung's first 'true' PCIe 5.0 SSD after the Samsung 990 EVO and Samsung 990 EVO Plus. Both those drives are PCIe 5.0, but they only use two PCIe 5.0 lanes, which limits their practical speeds to PCIe 4.0 standards.
The 9100 Pro, meanwhile, is a full-fat PCIe 5.0 x4 M.2 drive, meaning its interface has a theoretical maximum of roughly 15,750MB/s in each direction. In practice, flagship drives advertise sequential reads approaching 15,000MB/s and sequential writes around 14,000MB/s, figures that have been edging up steadily over the past few years.
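As a rough sanity check on that ceiling, you can work the interface math yourself: PCIe 5.0 signals at 32GT/s per lane with 128b/130b line encoding, so a four-lane link tops out just under 16GB/s in each direction before protocol overhead. A quick sketch:

```python
# Theoretical bandwidth of a PCIe 5.0 x4 link (interface limit, not drive speed).
GT_PER_LANE = 32          # PCIe 5.0 signaling rate: 32 GT/s per lane
ENCODING = 128 / 130      # 128b/130b line encoding efficiency
LANES = 4                 # x4 M.2 slot

# Gigatransfers/s -> payload gigabits/s -> megabytes/s
gbps_payload = GT_PER_LANE * ENCODING * LANES
mbps = gbps_payload * 1000 / 8   # 1 Gb/s = 125 MB/s

print(f"{mbps:,.0f} MB/s")  # ≈ 15,754 MB/s before protocol overhead
```

That ~15,750MB/s figure is why advertised maximums keep creeping toward 15,000MB/s but can never exceed it on this interface.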
(Image credit: Future / John Loeffler)
Samsung promises that its latest drive can hit up to 14,800MB/s sequential reads and 13,400MB/s sequential writes, and the Samsung 9100 Pro only somewhat gets there. It clocked a maximum sequential write speed of 13,066MB/s in my testing, but its maximum sequential read speed only hit 12,427MB/s. That's still incredibly fast, but it's not what's promised on the box.
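To put those numbers in perspective, here's the gap between Samsung's rated speeds and my measured results (all figures from my testing above):

```python
# Rated vs measured sequential speeds for the 9100 Pro (MB/s, from my testing).
rated = {"read": 14_800, "write": 13_400}
measured = {"read": 12_427, "write": 13_066}

for op in ("read", "write"):
    pct = measured[op] / rated[op] * 100
    print(f"{op}: {measured[op]:,} of {rated[op]:,} MB/s ({pct:.1f}% of rated)")
    # read ≈ 84.0% of rated, write ≈ 97.5% of rated
```

The write result essentially delivers on the spec sheet; the read result is where the shortfall lives.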
Could that change with BIOS or firmware updates? Sure, possibly. But other PCIe 5.0 drives, like the Crucial T705, are already hitting close to 14,500MB/s. The T705 manages a maximum sequential read speed of 14,390MB/s in CrystalDiskMark 8 on the same testbench: a Gigabyte Aorus X870E motherboard, AMD Ryzen 9 9950X processor, and 32GB of Corsair Dominator DDR5 running at 6,600MT/s, using integrated graphics so there's no interference from a graphics card.
Meanwhile, compared to its predecessor, the Samsung 990 Pro, the 9100 Pro is a much better overall drive, though there are circumstances where the 990 Pro still manages to outperform its successor, such as same-drive and secondary-drive copy times and random read and write speeds.
(Image credit: Future / John Loeffler)
On balance, the Samsung 9100 Pro isn't the undisputed best SSD you can buy, and some users (such as gamers or general-use enthusiasts) will likely be happier with other PCIe 5.0 or even PCIe 4.0 drives on the market, many of which are cheaper than the 9100 Pro.
That said, this is a drive for professional users and for those who need to save or otherwise write large files to disk regularly, and for that, the Samsung 9100 Pro is the best M.2 SSD on the market.
Gamers won't find much here worth the investment, unfortunately, with even the 990 Pro outperforming the 9100 Pro in 3DMark's gaming-focused SSD benchmark. The 9100 Pro also falls about 17% behind the Corsair MP700 Elite PCIe 5.0 SSD on that test.
For general business users, the 9100 Pro is better than its predecessor but lags behind the Crucial T705 in PCMark 10, which tests general productivity performance both as your main system drive and as a secondary data drive.
Given all this, you'd think the 9100 Pro should score lower overall, but it comes in very strong on sequential write performance, which is a big deal for professional users who need to save media projects that run to many gigabytes.
Nothing disrupts a workflow more than a project autosaving for half a minute or more, and this is where the 9100 Pro shines. Offering up to 39% faster sequential write performance than the Crucial T705, its nearest competitor in this category, the Samsung 9100 Pro really leans into its pro branding.
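To see why sequential write speed matters for big saves, just divide file size by throughput. Below is a sketch using my measured write speed, a hypothetical 60GB project file (purely for illustration), and a ~9,400MB/s competitor rate implied by the 39% gap above; real saves also involve CPU-side work, so treat these as floors, not predictions:

```python
# Rough time to flush a large file at a sustained sequential write rate.
def save_time_seconds(file_gb: float, write_mbps: float) -> float:
    return file_gb * 1000 / write_mbps  # GB -> MB, then MB / (MB/s)

project_gb = 60  # hypothetical large media project
print(f"9100 Pro (13,066 MB/s): {save_time_seconds(project_gb, 13_066):.1f}s")  # ≈ 4.6s
print(f"~9,400 MB/s drive:      {save_time_seconds(project_gb, 9_400):.1f}s")   # ≈ 6.4s
```

A couple of seconds saved per autosave doesn't sound like much, but it compounds over hundreds of saves a day.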
The 9100 Pro unit I tested did not come with a built-in heatsink, so its thermal performance is largely a function of the excellent PCIe 5.0 heatsink on the Gigabyte Aorus X870E motherboard I used for testing (alongside the AMD Ryzen 9 9950X and 32GB of Corsair Dominator DDR5 RAM at 6,600MT/s), which means I can't speak to the quality of Samsung's optional heatsink in this review.
Also, I tested a 9100 Pro with a 4TB capacity, whereas every other drive I tested had a 2TB capacity. This, in itself, shouldn't impact baseline scores too much, if at all, but it's worth pointing out that while I still consider this an apples-to-apples comparison, it's more of a Cosmic Crisp-to-Red Delicious comparison, so your actual experienced performance might vary slightly from mine.
All that said, the other major problem with this drive is its price. It's an expensive drive, and for a lot of gamers and those who are more interested in faster loading times for their programs and files, the Crucial T705 is simply a better option with comparable write speeds but close-to-max read speeds for a lower price.
If you're looking for a drive that you can use in more of a professional capacity with frequent saves of very large files like video projects or video game packages in Unreal Engine, the Samsung 9100 Pro is the best SSD you're going to get for that purpose, and it will absolutely speed up your everyday workflow considerably.
Samsung 9100 Pro: Price & availability
(Image credit: Future / John Loeffler)
How much does it cost? Starting at $199.99 (about £155/AU$315)
When is it available? Available now
Where can you get it? Available in the US, UK, and Australia
The Samsung 9100 Pro is available in the US, UK, and Australia starting on March 18, 2025, for $199.99 (about £155/AU$315) for a 1TB drive.
Higher capacities will cost you more, with the 2TB capacity going for $299.99 (about £230/AU$470) and the 4TB capacity going for $549.99 (about £425/AU$865).
The 9100 Pro 8TB capacity drive is expected to launch in H2 2025, though its price hasn't been released yet.
This makes the 9100 Pro roughly 30% more expensive at its entry capacity than the Crucial T705 1TB, and slightly more expensive than the launch MSRP of the Samsung 990 Pro it replaces.
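One way to compare across capacities is price per gigabyte; at the launch MSRPs listed above, the larger 9100 Pro capacities are meaningfully cheaper per gigabyte:

```python
# Launch MSRPs (USD) by capacity, from the pricing above (capacities in GB).
msrp = {1_000: 199.99, 2_000: 299.99, 4_000: 549.99}

for gb, price in msrp.items():
    print(f"{gb // 1000}TB: ${price / gb:.3f}/GB")
    # 1TB ≈ $0.200/GB, 2TB ≈ $0.150/GB, 4TB ≈ $0.137/GB
```

In other words, the 4TB model is over 30% cheaper per gigabyte than the 1TB model, which is worth factoring in if you're already paying a premium for this drive.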
Samsung 9100 Pro: Specs
Should you buy the Samsung 9100 Pro?
(Image credit: Future / John Loeffler)
Buy it if...
You need pro-level sequential write performance Are you writing 20GB files to disk every time you save your architecture project? This drive is for you.
You want an M.2 SSD with high-capacity options With a 4TB drive available now and 8TB capacity coming later this year, this is one of the most spacious M.2 SSDs on the market.
Don't buy it if...
You're on a budget This is hardly the cheapest PCIe 5.0 drive out there, even at this level of performance.
You're looking for a PS5 SSD If you want a new SSD for your PS5 console, this drive is faster than the PS5's PCIe 4.0 interface can use, so you'd be paying for speed the console will never see. You're better off getting the 990 Pro.
Samsung 9100 Pro: Also consider
If my Samsung 9100 Pro review has you looking for other options, here are two more M.2 SSDs to consider...
Crucial T705 The Crucial T705 is the best all-around PCIe 5.0 drive you can buy, with stellar performance across the board and a fairly accessible price point.
Samsung 990 Pro The Samsung 990 Pro is the best PCIe 4.0 SSD going, and for 95% of users, this drive will be more than enough for your needs at a decent enough price.
How I tested the Samsung 9100 Pro
I used it for gaming, content creation, and general storage use
I used my standard suite of SSD benchmarks as well as daily use
To test the 9100 Pro, I ran it through our standard benchmark suite, including CrystalDiskMark 8, PassMark, PCMark 10, 3DMark, and our proprietary 25GB file copy test.
I used this drive as my main system storage (C:\) drive for over a week on my test bench, where I used it extensively for loading games for graphics card benchmarking purposes, content creation, and more. This included loading games and large batches of photos for editing in Lightroom and Photoshop for various reviews.
I've been testing hardware components for TechRadar for over three years now, including several major SSD reviews from Samsung, PNY, and others, so I know what the latest SSDs are best for and whether they are worth your hard-earned money.
The AMD Ryzen 9 9950X3D has something of a high bar to clear given the strength of AMD's first Zen 5 3D V-Cache chip, the Ryzen 7 9800X3D, but having spent a week testing this chip, I can say unequivocally that AMD has produced the best processor ever made for the consumer market.
Whether it's gaming, creating, or general productivity work, the Ryzen 9 9950X3D doesn't suffer from the same hang-ups that kept its predecessor, the AMD Ryzen 9 7950X3D, from completely dominating its competition among the previous generation of processors.
Like its predecessor, the Ryzen 9 9950X3D will sell for $699 / £699 / AU$1,349 when it goes on sale on March 12, 2025. This makes it the most expensive consumer processor on the market, so definitely be prepared to invest quite a bit for this chip, especially if you're upgrading from an Intel or AMD AM4 system. As an AM5 chip, you'll need to upgrade some major components, including motherboard and possibly RAM.
Unlike nearly all other X3D chips besides the 9800X3D and 9900X3D, however, the Ryzen 9 9950X3D is fully overclockable thanks to AMD rearchitecting the way the 3D V-cache sits on the compute die, so there's a lot more that this chip can do that other X3D chips can't.
That includes beating out the current champ for the best gaming CPU, the 9800X3D, in most games while also offering substantially better general and creative performance thanks to twice as many processing cores.
That doesn't mean that the AMD Ryzen 9 9950X3D is flawless, as there are some things to caveat here (which I'll get into in more depth below), but as an overall package, you simply won't find a better CPU on the market right now that will let you do just about anything you want exceptionally well while still letting you run a more reasonable cooling solution. Just be prepared to pay a premium for all that performance.
AMD Ryzen 9 9950X3D: Price & availability
(Image credit: Future / John Loeffler)
How much will it cost? US MSRP is $699 / £699 / AU$1,349
When is it available? It goes on sale on March 12, 2025
Where is it available? It will be available in the US, UK, and Australia at launch
The Ryzen 9 9950X3D goes on sale March 12, 2025, for a US MSRP of $699 / £699 / AU$1,349 in the US, UK, and Australia, respectively, making it the most expensive processor on the market.
It comes in at the same price as its predecessor, the Ryzen 9 7950X3D, when it launched, and costs $100 more than the Ryzen 9 9900X3D that launches on the same day.
This is also just over $200 more expensive than the Ryzen 7 9800X3D, which has nearly the same level of gaming performance (and in some cases surpasses the 9950X3D), so if you are strictly looking for a gaming CPU, the 9800X3D might be the better value.
Compared to Intel's latest flagship processor, meanwhile, the Ryzen 9 9950X3D is just over $100 more expensive than the Intel Core Ultra 9 285K, though that chip requires a whole new motherboard chipset if you're coming from an Intel LGA 1700 chip like the Intel Core i9-12900K, so it might represent a much larger investment overall.
Value: 3.5 / 5
AMD Ryzen 9 9950X3D: Specs
128MB L3 Cache (96MB + 32MB)
Fully overclockable
Not all processing cores have access to 3D V-cache
Compared to the Ryzen 9 7950X3D, there don't seem to be too many changes spec-wise, but there's a lot going on under the hood here.
First, the way the 3D V-cache is seated differs considerably from the 7950X3D: in the 9950X3D, it's seated underneath the processing die rather than above it.
This means that the processing cores are now in 'direct' contact with the lid and cooling solution for the chip, allowing the 9950X3D to be fully overclocked, whereas the V-cache in the 7950X3D sat between the lid and the processing cores, making careful thermal design and power limits necessary and ruling out overclocking.
The 9950X3D keeps the same two-module split in its L3 cache as the 7950X3D, so only one of the two eight-core CCXs in the chip actually has access to the added V-cache (32MB + 64MB), while the other just has access to 32MB.
The idea is to give the cores in one CCX dedicated, direct access to a larger pool of cache. Last generation, this honestly produced somewhat mixed results compared to the 7800X3D, which didn't split its V-cache this way and ultimately delivered higher gaming performance.
Whatever issue there was with the 7950X3D looks to have been largely fixed with the 9950X3D, but some hiccups remain, which I'll get to in the performance section.
Beyond that, the 9950X3D has slightly higher base and boost clock speeds, as well as a 50W higher TDP, but its 170W TDP isn't completely unmanageable, especially next to Intel's competing chips.
Specs: 4.5 / 5
AMD Ryzen 9 9950X3D: Performance
(Image credit: Future / John Loeffler)
Almost best-in-class gaming performance
Strong overall performance
While the Ryzen 7 7800X3D was indisputably a better gaming chip than the Ryzen 9 7950X3D by the numbers, I was very curious going into my testing how this chip would fare against the 9800X3D. I'm happy to report that not only is the 9950X3D better on the whole when it comes to gaming, it's a powerhouse for general computing and creative work as well, making it the best all-around processor on the market right now.
On the synthetic side, the Ryzen 9 9950X3D goes toe-to-toe with the Intel Core Ultra 9 285K in multi-core performance, coming within 2% of Intel's best on average, and chalking up a 10% stronger single-core result than the 285K.
Compared to its predecessor, the 7950X3D, the 9950X3D is about 15% faster in multi-core and single-core performance, while also barely edging out the Ryzen 9 9950X in multi-core performance.
Compared to the Ryzen 7 9800X3D, the eight-core difference between the two really shows up in the results, with the 9950X3D posting 61% better multi-core performance and a roughly 5% better single-core score than the 9800X3D.
On the creative front, the 9950X3D outclasses Intel's best and anything else in the AMD Ryzen lineup that I've tested overall (we'll see how it fares against the 9900X3D once I've had a chance to test that chip), though it is worth noting that the Intel Core Ultra 9 285K is still the better processor for video editing work.
The AMD Ryzen X3D line is all about gaming though, and here, the Ryzen 9 9950X3D posts the best gaming performance of all the chips tested, with one caveat.
In the Total War: Warhammer III Mirrors of Madness benchmark, the Ryzen 9 9950X3D only scores a few fps higher than the non-X3D Ryzen 9 9950X (331 fps to 318 fps, respectively), while also scoring substantially lower than the 9800X3D's 506 fps in the same benchmark. That's a roughly 35% slower showing for the 9950X3D, and given that it's roughly where the non-X3D chip scored, it's clear that Total War: Warhammer III was running on cores that didn't have access to the extra V-cache.
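The gap in that benchmark is easy to quantify from the fps figures above:

```python
# Total War: Warhammer III Mirrors of Madness results (fps, from my testing).
fps = {"9800X3D": 506, "9950X3D": 331, "9950X (non-X3D)": 318}

slower_pct = (fps["9800X3D"] - fps["9950X3D"]) / fps["9800X3D"] * 100
print(f"9950X3D is {slower_pct:.1f}% slower than the 9800X3D here")  # ≈ 34.6%

# And only ~4% ahead of the non-X3D part, which is what suggests the game
# landed on cores without access to the extra V-cache.
vs_non_x3d = (fps["9950X3D"] - fps["9950X (non-X3D)"]) / fps["9950X (non-X3D)"] * 100
print(f"...and just {vs_non_x3d:.1f}% ahead of the 9950X")
```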
This is an issue with the Windows process scheduler that might be fixed in time so that games are run on the right cores to leverage the extra cache available, but that's not a guarantee the way it is with the 9800X3D, which gives all cores access to its added V-cache so there aren't similar issues.
It might be a fairly rare occurrence, but if your favorite game doesn't take advantage of the extra cache that you're paying a lot of money for, that could be an issue, and it might not be something you'll ever know unless you have a non-X3D 9950X handy to test with the way I do.
With that in mind, if all you want is a gaming processor, and you really don't care about any of these other performance categories, you're probably going to be better served by the 9800X3D, as you will get guaranteed gaming performance increases, even if you don't get the same boost in other areas.
While that's a large caveat, it can't take away from the overall performance profile of this chip, which is just astounding pretty much across the board.
If you want the best processor on the market overall, this is it, even with its occasional blips, especially since it runs much cooler than Intel's chips and its power draw is much more acceptable for midrange PCs to manage.
Performance: 4.5 / 5
Should you buy the AMD Ryzen 9 9950X3D?
(Image credit: Future / John Loeffler)
Buy the AMD Ryzen 9 9950X3D if...
You want spectacular performance no matter the workload While gamers will be especially interested in this chip, its real strength is that it's strong everywhere.
You want the best gaming performance When using 3D V-cache, this processor's gaming chops are unbeatable.
Don't buy it if...
You want consistent top-tier gaming performance When games run on one of this chip's 3D V-cache cores, you're going to get the best performance possible, but Windows might not assign a game to those cores, so you might miss out on this chip's signature feature.
You're on a budget This chip is crazy expensive, so only buy it if you're flush with cash.
Also consider
AMD Ryzen 7 9800X3D If you want consistent, top-tier gaming performance, the 9800X3D will get you performance nearly as good as this chip's, though more consistently.
How I tested the AMD Ryzen 9 9950X3D
I used the chip as my main workstation processor and used my updated battery of benchmarks to measure its performance
I used it for general productivity, creative, and gaming workloads
I spent about a week with the Ryzen 9 9950X3D as my main workstation CPU, where I ran basic computing workloads as well as extensive creative work, such as Adobe Photoshop.
I also spent as much time as I could gaming with the chip, including titles like Black Myth: Wukong and Civilization VII. I also used my updated suite of benchmark tools including industry standard utilities like Geekbench 6.2, Cyberpunk 2077, and PugetBench for Creators.
I've been reviewing components for TechRadar for three years now, including more than a dozen processor reviews in that time, so you can trust my testing process and recommendations if you're looking for the best processor for your needs and budget.
AMD had one job to do with the launch of its RDNA 4 graphics cards, spearheaded by the AMD Radeon RX 9070 XT, and that was to not get run over by Blackwell too badly this generation.
With the RX 9070 XT, not only did AMD manage to hold its own against the GeForce RTX monolith, it perfectly positions Team Red to take advantage of the growing discontent among gamers upset over Nvidia's latest GPUs with one of the best graphics cards I've ever tested.
The RX 9070 XT is without question the most powerful consumer graphics card AMD's put out, beating the AMD Radeon RX 7900 XTX overall and coming within inches of the Nvidia GeForce RTX 4080 in 4K and 1440p gaming performance.
It does so with an MSRP of just $599 (about £510 / AU$870), which is substantially lower than those two cards' MSRPs, much less their asking prices online right now. This matters because AMD traditionally hasn't faced the kind of scalping and price inflation that Nvidia's GPUs experience (it does happen, obviously, but not nearly to the same extent as with Nvidia's RTX cards).
That means, ultimately, that gamers who look at the GPU market and find empty shelves, extremely distorted prices, and uninspiring performance for the price they're being asked to pay have an alternative that will likely stay within reach, even if price inflation keeps it above AMD's MSRP.
The RX 9070 XT's performance comes at a bit of a cost though, such as the 309W maximum power draw I saw during my testing, but at this tier of performance, this actually isn't that bad.
This card also isn't too great when it comes to non-raster creative performance and AI compute, but no one is looking to buy this card for its creative or AI performance, as Nvidia already has those categories on lock. No, this is a card for gamers out there, and for that, you just won't find a better one at this price. Even if the price does get hit with inflation, it'll still likely be way lower than what you'd have to pay for an RX 7900 XTX or RTX 4080 (assuming you can find them at this point) making the AMD Radeon RX 9070 XT a gaming GPU that everyone can appreciate and maybe even buy.
AMD Radeon RX 9070 XT: Price & availability
(Image credit: Future / John Loeffler)
How much is it? MSRP is $599 (about £510 / AU$870)
When can you get it? The RX 9070 XT goes on sale March 6, 2025
Where is it available? The RX 9070 XT will be available in the US, UK, and Australia at launch
The AMD Radeon RX 9070 XT is available as of March 6, 2025, starting at $599 (about £510 / AU$870) for reference-spec third-party cards from manufacturers like Asus, Sapphire, Gigabyte, and others, with OC versions and those with added accoutrements like fancy cooling and RGB lighting likely selling for higher than MSRP.
At this price, the RX 9070 XT comes in about $150 cheaper than the RTX 5070 Ti, and about $50 more expensive than the RTX 5070 and the AMD Radeon RX 9070, which also launches alongside the RX 9070 XT. This price also puts the RX 9070 XT on par with the MSRP of the RTX 4070 Super, though this card is getting harder to find nowadays.
While I'll dig into performance in a bit, given the MSRP (and the reasonable hope that this card will be findable at MSRP in some capacity), the RX 9070 XT's value proposition is second only to the RTX 5070 Ti's if you're going by MSRPs alone. Since price inflation on the RTX 5070 Ti will persist for some time at least, in many cases you'll likely find the RX 9070 XT offers the best performance per dollar of any enthusiast card on the market right now.
Value: 5 / 5
AMD Radeon RX 9070 XT: Specs
(Image credit: Future / John Loeffler)
PCIe 5.0, but still just GDDR6
Hefty power draw
The AMD Radeon RX 9070 XT is the first RDNA 4 card to hit the market, so it's worth digging into its architecture for a bit.
The new architecture is built on TSMC's N4P node, the same as Nvidia Blackwell, and in a move away from AMD's MCM push with the last generation, the RDNA 4 GPU is a monolithic die.
As there's no direct predecessor for this card (or for the RX 9070, for that matter), there's no true apples-to-apples comparison to make, but if the RX 9070 XT had a last-gen equivalent, it would sit roughly between the RX 7800 XT and the RX 7900 GRE.
The Navi 48 GPU in the RX 9070 XT sports 64 compute units, alongside 64 ray accelerators, 128 AI accelerators, and 64MB of L3 cache. Its cores run at a 1,600MHz base clock but can boost as high as 2,970MHz, just shy of the 3GHz mark.
It uses the same GDDR6 memory as the last-gen AMD cards, with a 256-bit bus and a 644.6GB/s memory bandwidth, which is definitely helpful in pushing out 4K frames quickly.
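That bandwidth figure falls out of the bus width and per-pin data rate. Working backwards from the quoted 644.6GB/s implies a per-pin rate of roughly 20.1Gbps; note the pin rate here is inferred from the bandwidth figure for illustration, not a spec I've verified separately:

```python
# GDDR6 memory bandwidth = per-pin data rate x bus width / 8 bits per byte.
bus_bits = 256
pin_gbps = 20.14  # inferred from the quoted 644.6GB/s; GDDR6 commonly bins near 20Gbps

bandwidth_gbs = pin_gbps * bus_bits / 8
print(f"{bandwidth_gbs:.1f} GB/s")  # ≈ 644.5 GB/s
```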
The TGP of the RX 9070 XT is 304W, which is a good bit higher than the RX 7900 GRE, though for that extra power, you do get a commensurate bump up in performance.
Specs: 4 / 5
AMD Radeon RX 9070 XT: Design
(Image credit: Future / John Loeffler)
No AMD reference card
High TGP means bigger coolers and more cables
There's no AMD reference card for the Radeon RX 9070 XT, but the unit I got to test was the Sapphire Pulse Radeon RX 9070 XT, which I imagine is pretty indicative of what we can expect from the designs of the various third-party cards.
The 304W TGP all but ensures that any version of this card you find will be a triple-fan cooler over a pretty hefty heatsink, so it's not going to be a great option for small form factor cases.
Likewise, that TGP just puts it over the line where it needs a third 8-pin PCIe power connector, something that you may or may not have available in your rig, so keep that in mind. Even if you do have three spare power connectors, cable management will almost certainly be a hassle.
After that, it's really just about aesthetics, as the RX 9070 XT (so far) doesn't have anything like the dual pass-through cooling solution of the RTX 5090 and RTX 5080, so it's really up to personal taste.
As for the card I reviewed, the Sapphire Pulse shroud and cooling setup on the RX 9070 XT was pretty plain, as far as desktop GPUs go, but if you're looking for a non-flashy look for your PC, it's a great-looking card.
Design: 4 / 5
AMD Radeon RX 9070 XT: Performance
(Image credit: Future / John Loeffler)
Near-RTX 4080 levels of gaming performance, even with ray tracing
Non-raster creative and AI performance lags behind Nvidia, as expected
Likely the best value you're going to find anywhere near this price point
A note on my data
The charts shown below offer the most recent data I have for the cards tested for this review. They may change over time as more card results are added and cards are retested. The 'average of all cards tested' includes cards not shown in these charts for readability purposes.
Simply put, the AMD Radeon RX 9070 XT is the gaming graphics card we've been clamoring for this entire generation. While it shows some strong performance in synthetics and raster-heavy creative tasks, gaming is where this card really shines, coming within 7% of the RTX 4080 overall and within 4% of its gaming performance specifically. For a card launching at half the RTX 4080's launch price, that's a fantastic showing.
The RX 9070 XT is really squaring up against the RTX 5070 Ti, however, and here the RTX 5070 Ti does manage to pull well ahead, though the gap is much closer than I expected going in.
On the synthetics side, the RX 9070 XT excels at rasterization workloads like 3DMark Steel Nomad, while the RTX 5070 Ti wins out in ray-traced workloads like 3DMark Speed Way, as expected, but AMD's third-generation ray accelerators have definitely come a long way in catching up with Nvidia's more sophisticated hardware.
Also, as expected, when it comes to creative workloads, the RX 9070 XT performs very well in raster-based tasks like photo editing, and worse at 3D modeling in Blender, which is heavily reliant on Nvidia's CUDA instruction set, giving Nvidia an all but permanent advantage there.
In video editing, the RX 9070 XT likewise lags behind, though it's still close enough to Nvidia's RTX 5070 Ti that video editors won't notice much difference, even if the difference is there on paper.
Gaming performance is what we're on about though, and here the sub-$600 GPU holds its own against heavy hitters like the RTX 4080, RTX 5070 Ti, and Radeon RX 7900 XTX.
In 1440p gaming, the RX 9070 XT is about 8.4% faster than the RTX 4070 Ti and RX 7900 XTX, just under 4% slower than the RTX 4080, and about 7% slower than the RTX 5070 Ti.
This strong performance carries over into 4K gaming as well, thanks to the RX 9070 XT's 16GB VRAM. Here, it's about 15.5% faster than the RTX 4070 Ti and about 2.5% faster than the RX 7900 XTX. Against the RTX 4080, the RX 9070 XT is just 3.5% slower, while it comes within 8% of the RTX 5070 Ti's 4K gaming performance.
When all is said and done, the RX 9070 XT doesn't quite overpower one of the best Nvidia graphics cards of the last generation (and definitely doesn't topple the RTX 5070 Ti), but given its performance class, its power draw, its heat output (which wasn't nearly as bad as the power draw might indicate), and most of all, its price, the RX 9070 XT is easily the best value of any graphics card playing at 4K.
And given Nvidia's position with gamers right now, AMD has a real chance to win over some converts with this graphics card, and anyone looking for an outstanding 4K GPU absolutely needs to consider it before making their next upgrade.
Performance: 5 / 5
Should you buy the AMD Radeon RX 9070 XT?
Buy the AMD Radeon RX 9070 XT if...
You want the best value proposition for a high-end graphics card The performance of the RX 9070 XT punches way above its price point.
You don't want to pay inflated prices for an Nvidia GPU Price inflation is wreaking havoc on the GPU market right now, but this card might fare better than Nvidia's RTX offerings.
Don't buy it if...
You're on a tight budget If you don't have a lot of money to spend, this card is likely more than you need.
You need strong creative or AI performance While AMD is getting better at creative and AI workloads, it still lags far behind Nvidia's competing offerings.
How I tested the AMD Radeon RX 9070 XT
I spent about a week with the AMD Radeon RX 9070 XT
I used my complete GPU testing suite to analyze the card's performance
I tested the card in everyday, gaming, creative, and AI workload usage
Test System Specs
Here are the specs on the system I used for testing:
I spent about a week with the AMD Radeon RX 9070 XT, which was spent benchmarking, using, and digging into the card's hardware to come to my assessment.
I used industry-standard benchmark tools like 3DMark, Cyberpunk 2077, and PugetBench for Creators to get comparable results with other competing graphics cards, all of which have been tested using the same testbench setup listed on the right.
I've reviewed more than 30 graphics cards in the last three years, and so I've got the experience and insight to help you find the best graphics card for your needs and budget.
A lot of promises were made about the Nvidia GeForce RTX 5070, and in some narrow sense, those promises are fulfilled with Nvidia's mainstream GPU. But the gulf between what was expected and what the RTX 5070 actually delivers is simply too wide a gap to bridge for me and the legion of gamers and enthusiasts out there who won't be able to afford—or even find, frankly—Nvidia's best graphics cards from this generation.
Launching on March 5, 2025, at an MSRP of $549 / £549 / AU$1,109 in the US, UK, and Australia, respectively, this might be one of the few Nvidia Blackwell GPUs you'll find at MSRP (along with available stock), but only for lack of substantial demand. As the middle-tier GPU in Nvidia's lineup, the RTX 5070 is meant to have broader appeal and more accessible pricing and specs than the enthusiast-grade Nvidia GeForce RTX 5090, Nvidia GeForce RTX 5080, and Nvidia GeForce RTX 5070 Ti, but of all the cards this generation, this is the one that seems to have the least to offer prospective buyers over what's already on the market at this price point.
That's not to say there is nothing to commend this card. The RTX 5070 does get up to native Nvidia GeForce RTX 4090 performance in some games thanks to Nvidia Blackwell's exclusive Multi-Frame Generation (MFG) technology. And, to be fair, the RTX 5070 is a substantial improvement over the Nvidia GeForce RTX 4070, so at least in direct gen-on-gen uplift, there is a roughly 20-25% performance gain.
If we're just talking framerates, then in some very narrow cases this card can indeed match an RTX 4090, but at 4K with ray tracing and cranked-up settings, the input latency for the RTX 5070 with MFG can be noticeable depending on your settings, and it can become distracting. Nvidia Reflex helps, but if you take RTX 4090 performance to mean the same experience as the RTX 4090, you simply won't get that with MFG, even in the 80 or so games that support it currently.
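The latency point is worth unpacking: frame generation multiplies displayed frames, but input is only sampled on rendered frames, so responsiveness tracks the base framerate, not the displayed one. Here's a deliberately simplified toy model (the 4x multiplier matches Nvidia's MFG marketing; the latency math ignores render-queue and display delays):

```python
# Simplified model: frame generation raises displayed fps by the multiplier,
# but input-to-photon latency still tracks the rendered (base) framerate.
def mfg(base_fps: float, multiplier: int = 4):
    displayed_fps = base_fps * multiplier
    input_latency_ms = 1000 / base_fps  # input sampled once per rendered frame
    return displayed_fps, input_latency_ms

shown, latency = mfg(30)  # e.g. a heavy 4K ray-traced scene rendering at 30fps
print(f"{shown:.0f} fps shown, but ~{latency:.0f} ms between input samples")
```

A counter reading 120fps with MFG can therefore still feel like a 30fps game under your fingers, which is exactly the disconnect described above.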
(Image credit: Future / John Loeffler)
Add to all this the fact that the RTX 5070 barely outpaces the Nvidia GeForce RTX 4070 Super when you take MFG off the table (which will be the case for the vast majority of games played on this card) and you really don't have anything to show for the extra 30W of power this card pulls down over the RTX 4070 Super.
With the RTX 5070 coming in at less than 4% faster in gaming without MFG than the non-OC RTX 4070 Super, and roughly 5% faster overall, the RTX 5070 is essentially a stock-overclocked RTX 4070 Super, performance-wise, with the added feature of MFG. An overclocked RTX 4070 Super might even match or exceed the RTX 5070's overall performance in all but a handful of games, and that doesn't even touch on AMD's various offerings in this price range, like the AMD Radeon RX 7900 GRE or AMD's upcoming RX 9070 XT and RX 9070 cards.
Given that the RTX 4070 Super is still generally available on the market (at least for the time being) at a price where you're likely to find it for less than available RTX 5070 cards, and competing AMD cards are often available for less, easier to find, and offer roughly the same level of performance, I really struggle to find any reason to recommend this card, even without the questionable-at-best marketing for this card to sour my feelings about it.
I caught a lot of flak from enthusiasts for praising the RTX 5080 despite its 8-10% performance uplift over the Nvidia GeForce RTX 4080 Super, but at the level of the RTX 5080, there is no real competition and you're still getting the third-best graphics card on the market with a noticeable performance boost over the RTX 4080 Super for the same MSRP. Was it what enthusiasts wanted? No, but it's still a fantastic card with few peers, and the base performance of the RTX 5080 was so good that the latency problem of MFG just wasn't an issue, making it a strong value-add for the card.
You just can't claim that for the RTX 5070. There are simply too many other options for gamers to consider at this price point, and MFG just isn't a strong enough selling point at this performance level to move the needle. If the RTX 5070 is the only card you have available to you for purchase and you need a great 1440p graphics card and can't wait for something better (and you're only paying MSRP), then you'll ultimately be happy with this card. But the Nvidia GeForce RTX 5070 could have and should have been so much better than it ultimately is.
Nvidia GeForce RTX 5070: Price & availability
(Image credit: Future / John Loeffler)
How much is it? MSRP/RRP starting at $549 / £549 / AU$1,109
When can you get it? The RTX 5070 goes on sale on March 5, 2025
Where is it available? The RTX 5070 will be available in the US, UK, and Australia at launch
The Nvidia GeForce RTX 5070 is available starting March 5, 2025, with an MSRP of $549 / £549 / AU$1,109 in the US, UK, and Australia, respectively.
This puts it at the same price as the current RTX 4070 MSRP, and slightly less than that of the RTX 4070 Super. It's also the same MSRP as AMD's RX 7900 GRE and upcoming RX 9070, and slightly cheaper than the AMD RX 9070 XT's MSRP.
The relatively low MSRP for the RTX 5070 is one of the bright spots for this card, as well as the existence of the RTX 5070 Founders Edition card, which Nvidia will sell directly at MSRP. This will at least put something of an anchor on the card's price in the face of scalping and general price inflation.
Value: 4 / 5
Nvidia GeForce RTX 5070: Specs
GDDR7 VRAM and PCIe 5.0
Higher power consumption
Still just 12GB VRAM, and fewer compute units
The Nvidia GeForce RTX 5070 is a mixed bag when it comes to specs. On the one hand, you have advanced technology like the new PCIe 5.0 interface and new GDDR7 VRAM, both of which appear great on paper.
On the other hand, it feels like every other spec was configured and tweaked to make sure that it compensated for any performance benefit these technologies would impart to keep the overall package more or less the same as the previous generation GPUs.
For instance, while the RTX 5070 sports faster GDDR7 memory, it doesn't expand the VRAM pool beyond 12GB, unlike its competitors. Nvidia may be hoping the faster memory makes up for the static VRAM amount, but the 5070 also gets only a modest increase in compute units over the RTX 4070 (48 compared to 46), and a noticeable decrease from the RTX 4070 Super's 56.
Whatever performance gains the RTX 5070 makes with its faster memory, then, are largely neutralized by the larger number of compute units (along with the requisite CUDA cores, RT cores, and Tensor cores) in the RTX 4070 Super.
(Image credit: Future / John Loeffler)
The base clock on the RTX 5070 is notably higher, but its boost clock is only slightly increased, which is ultimately where it counts while playing games or running intensive workloads.
Likewise, whatever gains the more advanced TSMC N4P node offers the RTX 5070's GPU over the TSMC N4 node of its predecessors seem to be eaten up by the cutting down of the die. Whether there was a power or cost reason for this, I have no idea, but I think this decision is what ultimately sinks the RTX 5070.
It seems like every decision was made to keep things right where they are rather than move things forward. That would be acceptable, honestly, if there was some other major benefit like a greatly reduced power draw or much lower price (I've argued for both rather than pushing for more performance every gen), but somehow the RTX 5070 manages to pull down an extra 30W of power over the RTX 4070 Super and a full 50W over the RTX 4070, and the price is only slightly lower than the RTX 4070 was at launch.
Finally, this is a PCIe 5.0 x16 GPU, which means that if your platform provides 16 or fewer PCIe 5.0 lanes and you're also using a PCIe 5.0 SSD, one of those two components is going to get knocked down to PCIe 4.0, and most motherboards default to prioritizing the GPU.
You might be able to set your PCIe 5.0 priority to your SSD in your motherboard's BIOS settings and put the RTX 5070 into PCIe 4.0, but I haven't tested how this would affect the performance of the RTX 5070, so be mindful that this might be an issue with this card.
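To put rough numbers on that PCIe 4.0 fallback, here's a back-of-envelope sketch using the per-lane transfer rates from the PCIe spec (16GT/s for Gen4, 32GT/s for Gen5, both with 128b/130b encoding). These are theoretical link ceilings, not real-world throughput:

```python
# Back-of-envelope PCIe link bandwidth: transfer rate (GT/s) per lane,
# scaled by the 128b/130b encoding overhead used since PCIe 3.0.
GT_PER_LANE = {"4.0": 16, "5.0": 32}  # gigatransfers per second

def link_bandwidth_gbps(gen: str, lanes: int) -> float:
    """Approximate one-direction bandwidth in GB/s for a PCIe link."""
    return GT_PER_LANE[gen] * lanes * (128 / 130) / 8

gpu_at_5 = link_bandwidth_gbps("5.0", 16)  # GPU keeping all 16 Gen5 lanes
gpu_at_4 = link_bandwidth_gbps("4.0", 16)  # GPU demoted to Gen4
print(f"x16 Gen5: {gpu_at_5:.1f} GB/s, x16 Gen4: {gpu_at_4:.1f} GB/s")
```

Demoting the GPU from Gen5 to Gen4 halves its theoretical link bandwidth, though in practice today's games rarely saturate even an x16 Gen4 link.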
Specs: 2.5 / 5
Nvidia GeForce RTX 5070: Design
(Image credit: Future / John Loeffler)
No dual-pass-through cooling
FE card is the same size as the RTX 4070 and RTX 4070 Super FE cards
The Nvidia GeForce RTX 5070 Founders Edition looks nearly identical to the RTX 5090 and RTX 5080 that preceded it, but with some very key differences, both inside and out.
One of the best things about the RTX 5090 and RTX 5080 FE cards was the innovative dual pass-through cooling solution on those cards, which improved thermals so much that Nvidia was able to shrink the size of those cards from the gargantuan bricks of the last generation to something far more manageable and practical.
(Image credit: Future / John Loeffler)
It would have been nice to see what such a solution could have done for the RTX 5070, but maybe it just wasn't possible to engineer it so it made any sense. Regardless, it's unfortunate that it wasn't an option here, even though the RTX 5070 is hardly unwieldy (at least for the Founders Edition card).
Otherwise, it sports the same 16-pin power connector placement as the RTX 5090 and RTX 5080, so 90-degree power connectors won't fit the Founders Edition, though you will have better luck with most, if not all, AIB partner cards which will likely stick to the same power connector placement of the RTX 40 series.
The RTX 5070 FE will fit inside even an SFF case with ease, and its lighter power draw means that even if you have to rely on the included two-to-one cable adapter to plug in two free 8-pin cables from your power supply, it will still be a fairly manageable affair.
Lastly, like all the Founders Edition cards before it, the RTX 5070 has no RGB, with only the white backlight GeForce RTX logo on the top edge of the card to provide any 'flair' of that sort.
Design: 3.5 / 5
Nvidia GeForce RTX 5070: Performance
(Image credit: Future / John Loeffler)
Almost no difference in performance over the RTX 4070 Super without MFG
Using MFG can get you native RTX 4090 framerates in some games
Significantly faster performance over the RTX 4070
A note on my data
The charts shown below offer the most recent data I have for the cards tested for this review. They may change over time as more card results are added and cards are retested. The 'average of all cards tested' includes cards not shown in these charts for readability purposes.
Boy howdy, here we go.
The best thing I can say about the performance of this card is that it is just barely the best 1440p graphics card on the market as of this review, and that DLSS 4's Multi Frame Generation can deliver the kind of framerates Nvidia promises in those games where the technology is available, either natively or through the Nvidia App's DLSS override feature.
Both of those statements come with a lot of caveats, though, and the RTX 5070 doesn't make enough progress from the last gen to make a compelling case for itself performance-wise, especially since its signature feature is only available in a smattering of games at the moment.
On the synthetic side of things, the RTX 5070 looks strong against the card it's replacing, the RTX 4070, and generally offers about 25% better performance on synthetic benchmarks like 3DMark Steel Nomad or Speed Way. It also has higher compute performance in Geekbench 6 than its direct predecessor, though not by as drastic a margin (about 10% better).
Compared to the RTX 4070 Super, however, the RTX 5070's performance is only about 6% better overall, and only about 12% better than the AMD RX 7900 GRE's overall synthetic performance.
Again, a win is a win, but it's much closer than it should be gen-on-gen.
The RTX 5070 runs into similar issues on the creative side, where it only outperforms the RTX 4070 Super by about 3% overall, with its best performance coming in PugetBench for Creators' Adobe Premiere benchmark (~13% better than the RTX 4070 Super), but faltering somewhat with Blender Benchmark 4.3.0.
This isn't too surprising, as the RTX 5070 hasn't been released yet, and GPUs tend to perform better in Blender several weeks or months after launch, once the devs can optimize for the new hardware.
All in all, for this class of card, the RTX 5070 is a solid choice for those who might want to dabble in creative work without much of a financial commitment, but real pros looking to upgrade without spending a fortune are better off with the Nvidia GeForce RTX 5070 Ti.
It's with gaming, though, where the real heartbreak comes with this card.
Technically, with just 12GB VRAM, this isn't a 4K graphics card, but both the RTX 4070 Super and RTX 5070 are strong enough cards that you can get playable native 4K in pretty much every game so long as you never, ever touch ray tracing, global illumination, or the like. Unfortunately, both cards perform roughly the same under these conditions at 4K, with the RTX 5070 pulling into a slight lead of 5fps or more in a few games like Returnal and Dying Light 2.
However, in some titles like F1 2024, the RTX 4070 Super actually outperforms the RTX 5070 when ray tracing is turned on, or when DLSS is set to balanced and without any Frame Generation. Overall and across different setting configurations, the RTX 5070 only musters a roughly 4.5% better average FPS at 4K than the RTX 4070 Super.
It's pretty much the same story at 1440p, as well, with the RTX 5070 outperforming the RTX 4070 Super by about 2.7% across configurations at 1440p. We're really in the realm of what a good overclock can get you on an RTX 4070 Super rather than a generational leap, despite all the next-gen specs that the RTX 5070 brings to bear.
OK, but what about the RTX 4090? Can the RTX 5070 with DLSS 4 Multi Frame Generation match the native 4K performance of the RTX 4090?
Yes, it can, at least if you're only concerned with average FPS. The only game with an in-game benchmark that I can use to measure the RTX 5070's MFG performance is Cyberpunk 2077, and I've included those results here, but in Indiana Jones and the Great Circle and Dragon Age: Veilguard (using the Nvidia App's override function) I pretty much found MFG to perform consistently as promised, delivering substantially faster FPS than DLSS 4 alone and landing in the ballpark of where the RTX 4090's native 4K performance ends up.
And so long as you stay far away from ray tracing, the base framerate at 4K will be high enough on the RTX 5070 that you won't notice too much, if any, latency in many games. But when you turn ray tracing on, even the RTX 5090's native frame rate tanks, and it's those baseline rendered frames that handle changes based on your input, and the three AI-generated frames based on that initial rendered frame don't factor in whatever input changes you've made at all.
As such, even though you can get up to 129 FPS at 4K with Psycho RT and the Ultra preset in Cyberpunk 2077 on the RTX 5070 (blowing way past the RTX 5090's native 51 average FPS on the same settings), only 44 of the RTX 5070's 129 frames per second reflect active input. This leads to a situation where your game looks like it's flying by at 129 FPS, but feels like it's still a sluggish 44 FPS.
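The arithmetic behind that "looks like 129, feels like 44" gap is simple enough to sketch; the frame times below are idealized and ignore render queue and display latency:

```python
def frame_time_ms(fps: float) -> float:
    """Milliseconds between successive frames at a given frame rate."""
    return 1000.0 / fps

displayed_fps = 129  # what the FPS counter reports with 4x MFG
rendered_fps = 44    # frames actually rendered from player input

# The screen updates every ~7.8ms, but the game only samples your
# input on rendered frames, i.e. every ~22.7ms.
print(f"visual frame time: {frame_time_ms(displayed_fps):.1f}ms")
print(f"input frame time:  {frame_time_ms(rendered_fps):.1f}ms")
```

In other words, the motion on screen refreshes roughly three times faster than the game actually responds to you, which is exactly the mismatch your hands pick up on.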
For most games, this isn't going to be a deal breaker. While I haven't tried the RTX 5070 with 4x MFG on Satisfactory, I'm absolutely positive I will not feel the difference, as it's not the kind of game where you need fast reflexes (other than dealing with the effing Stingers), but Marvel Rivals? You're going to feel it.
Nvidia Reflex definitely helps take the edge off MFG's latency, but it doesn't completely eliminate it, and for some games (and gamers) that is going to matter, leaving the RTX 5070's MFG experience too much of a mixed bag to be a categorical selling point. I think the hate directed at 'fake frames' is wildly overblown, but in the case of the RTX 5070, it's not entirely without merit.
So where does that leave the RTX 5070? Overall, it's the best 1440p card on the market right now, and its relatively low MSRP makes it the best value proposition in its class. It's also much more likely that you'll actually be able to find this card at MSRP, making the question of value more than just academic.
For most gamers out there, Multi Frame Generation is going to be great, and so long as you go easy on the ray tracing, you'll probably never run into any practical latency in your games, so in those instances, the RTX 5070 might feel like black magic in a circuit board.
But my problem with the RTX 5070 is that it is absolutely not the RTX 4090, and for the vast majority of the games you're going to be playing, it never will be, yet that's essentially what was promised when the card was announced. Instead, the RTX 5070 is an RTX 4070 Super with MFG bolted on, so a handful of games look like they're running on an RTX 4090 but may or may not feel like they are, and that's just not good enough.
It's not what we were promised, not by a long shot.
Performance: 3 / 5
Should you buy the Nvidia GeForce RTX 5070?
(Image credit: Future / John Loeffler)
Buy the Nvidia GeForce RTX 5070 if...
You don't have the money for (or cannot find) an RTX 5070 Ti or RTX 4070 Super This isn't a bad graphics card, but there are so many better cards that offer better value or better performance within its price range.
You want to dabble in creative or AI work without investing a lot of money The creative and AI performance of this card is great for the price.
Don't buy it if...
You can afford to wait for better Whether it's this generation or the next, this card offers very little that you won't be able to find elsewhere within the next two years.
Also consider
Nvidia GeForce RTX 5070 Ti The RTX 5070 Ti is a good bit more expensive, especially with price inflation, but if you can get it at a reasonable price, it is a much better card than the RTX 5070.
Nvidia GeForce RTX 4070 Super With Nvidia RTX 50 series cards getting scalped to heck, if you can find an RTX 4070 Super for a good price, it offers pretty much identical performance to the RTX 5070, minus the Multi Frame Generation.
I spent about a week testing the Nvidia GeForce RTX 5070, using it as my main workstation GPU for creative content work, gaming, and other testing.
I used my updated testing suite including industry standard tools like 3DMark and PugetBench for Creators, as well as built-in game benchmarks like Cyberpunk 2077, Civilization VII, and others.
I've reviewed more than 30 graphics cards for TechRadar in the last two and a half years, as well as extensively testing and retesting graphics cards throughout the year for features, analysis, and other content, so you can trust that my reviews are based on experience and data, as well as my desire to make sure you get the best GPU for your hard earned money.
The Nvidia GeForce RTX 5070 Ti definitely had a high expectation bar to clear after the mixed reception of the Nvidia GeForce RTX 5080 last month, especially from enthusiasts.
And while there are things I fault the RTX 5070 Ti for, there's no doubt that it has taken the lead as the best graphics card most people can buy right now—assuming that scalpers don't get there first.
The fact that the RTX 5070 Ti beats both the RTX 4070 Ti and RTX 4070 Ti Super handily in terms of performance would normally be enough to get it high marks, but this card even ekes out a win over the Nvidia GeForce RTX 4080 Super, shooting it nearly to the top of the best Nvidia graphics card lists.
As one of the best 4K graphics cards I've ever tested, it isn't without faults, but we're really only talking about the fact that Nvidia isn't releasing a Founders Edition card for this one, and that's unfortunate for a couple of reasons.
For one, and probably most importantly, without a Founders Edition card guaranteed to sell for MSRP directly from Nvidia's website, the MSRP for this card is just a suggestion. And without an MSRP card from Nvidia keeping AIB partners onside, it'll be hard to find a card at Nvidia's $749 price tag, reducing its value proposition.
Also, because there's no Founders Edition, Nvidia's dual pass-through design to keep the card cool will pass the 5070 Ti by. If you were hoping that the RTX 5070 Ti might be SFF-friendly, I simply don't see how the RTX 5070 Ti fits into that unless you stretch the meaning of small form factor until it hurts.
Those aren't small quibbles, but given everything else the RTX 5070 Ti brings to the table, they do seem like I'm stretching myself a bit to find something bad to say about this card for balance.
For the vast majority of buyers out there looking for outstanding 4K performance at a relatively approachable MSRP, the Nvidia GeForce RTX 5070 Ti is the card you're going to want to buy.
Nvidia GeForce RTX 5070 Ti: Price & availability
(Image credit: Future / John Loeffler)
How much is it? MSRP is $749/£729 (about AU$1,050), but with no Founders Edition, third-party cards will likely be higher
When can you get it? The RTX 5070 Ti goes on sale February 20, 2025
Where is it available? The RTX 5070 Ti will be available in the US, UK, and Australia at launch
The Nvidia GeForce RTX 5070 Ti goes on sale on February 20, 2025, starting at $749/£729 (about AU$1,050) in the US, UK, and Australia, respectively.
Unlike the RTX 5090 and RTX 5080, there is no Founders Edition card for the RTX 5070 Ti, so no version of this card is guaranteed to sell at MSRP, which does complicate things given the current scalping frenzy we've seen for the previous RTX 50 series cards.
While stock of the Founders Edition RTX 5090 and RTX 5080 might be hard to find even from Nvidia, there is a place, at least, where you could theoretically buy those cards at MSRP. No such luck with the RTX 5070 Ti, which is a shame.
The 5070 Ti MSRP does at least come in under the launch MSRPs of both the RTX 4070 Ti and RTX 4070 Ti Super, neither of which had Founders Edition cards, so stock and pricing will hopefully stay close to where those cards have been selling.
The 5070 Ti's MSRP puts it at the lower end of the enthusiast class, and while we haven't seen the price for the AMD Radeon RX 9070 XT yet, it's unlikely that AMD's competing RDNA 4 GPU will sell for much less than the RTX 5070 Ti. If you're not in a hurry, though, it might be worth waiting a month or two to see what AMD has to offer in this range before deciding which is the better buy.
Value: 4 / 5
Nvidia GeForce RTX 5070 Ti: Specs
(Image credit: Future / John Loeffler)
GDDR7 VRAM and PCIe 5.0
Slight bump in power consumption
More memory than its direct predecessor
Like the rest of the Nvidia Blackwell GPU lineup, there are some notable advances with the RTX 5070 Ti over its predecessors.
First, the RTX 5070 Ti features faster GDDR7 memory and 4GB more VRAM than the RTX 4070 Ti's 12GB, meaning its larger, faster memory pool can process high-resolution texture files faster, making it far more capable at 4K resolutions.
Also of note is its 256-bit memory interface, which is 33.3% larger than the RTX 4070 Ti's, and equal to that of the RTX 4070 Ti Super. 64 extra bits might not seem like a lot, but just like trying to fit a couch through a door, an extra inch or two of space can be the difference between moving the whole thing through at once or having to do it in parts, which translates into additional work on both ends.
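The couch analogy cashes out as simple arithmetic: peak memory bandwidth is the per-pin data rate times the bus width. The sketch below assumes commonly quoted per-pin rates (21Gbps GDDR6X on the RTX 4070 Ti, 28Gbps GDDR7 on the RTX 5070 Ti), which are worth double-checking against the official spec sheets:

```python
def mem_bandwidth_gbs(pin_rate_gbps: float, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin rate times bus width, bits to bytes."""
    return pin_rate_gbps * bus_bits / 8

rtx_4070_ti = mem_bandwidth_gbs(21, 192)  # GDDR6X on a 192-bit bus
rtx_5070_ti = mem_bandwidth_gbs(28, 256)  # GDDR7 on a 256-bit bus (assumed 28Gbps)
print(f"RTX 4070 Ti: {rtx_4070_ti:.0f}GB/s, RTX 5070 Ti: {rtx_5070_ti:.0f}GB/s")
```

Under those assumptions, the wider bus and faster memory combine for roughly 78% more raw bandwidth gen-on-gen, which is why the difference shows up most at 4K.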
(Image credit: Future / John Loeffler)
There's also the new PCIe 5.0 x16 interface, which speeds up communication between the graphics card, your processor, and your SSD. If you have a PCIe 5.0 capable motherboard, processor, and SSD, just make note of how many PCIe 5.0 lanes you have available.
The RTX 5070 Ti will take up 16 of them, so if you only have 16 lanes available and you have a PCIe 5.0 SSD, the RTX 5070 Ti is going to get those lanes by default, throttling your SSD to PCIe 4.0 speeds. Some motherboards will let you set PCIe 5.0 priority, if you have to make a choice.
The RTX 5070 Ti uses slightly more power than its predecessors, but in my testing its maximum power draw came in at just under the card's 300W TDP.
As for the GPU inside the RTX 5070 Ti, it's built using TSMC's N4P process node, which is a refinement of the TSMC N4 node used by its predecessors. While not a full generational jump in process tech, the N4P process does offer better efficiency and a slight increase in transistor density.
Specs & features: 5 / 5
Nvidia GeForce RTX 5070 Ti: Design & features
(Image credit: Future / John Loeffler)
No Nvidia Founders Edition card
No dual-pass-through cooling (at least for now)
There is no Founders Edition card for the RTX 5070 Ti, so the RTX 5070 Ti you end up with may look radically different than the one I tested for this review, the Asus Prime GeForce RTX 5070 Ti.
Whatever partner card you choose though, it's likely to be a chonky card given the card's TDP, since 300W of heat needs a lot of cooling. While the RTX 5090 and RTX 5080 Founders Edition cards featured the innovative dual pass-through design (which dramatically shrank the card's width), it's unlikely you'll find any RTX 5070 Ti cards in the near future that feature this kind of cooling setup, if ever.
With that groundwork laid, you're going to have a lot of options for cooling setups, shroud design, and lighting options, though more feature-rich cards will likely be more expensive, so make sure you consider the added cost when weighing your options.
As for the Asus Prime GeForce RTX 5070 Ti, the sleek shroud of the card lacks the RGB that a lot of gamers like for their builds, but for those of us who are kind of over RGB, the Prime's design is fantastic and easily worked into any typical mid-tower case.
The Prime RTX 5070 Ti features a triple-fan cooling setup, with one of those fans having complete passthrough over the heatsink fins. There's a protective backplate and stainless bracket over the output ports.
The 16-pin power connector rests along the card's backplate, so even if you invested in a 90-degree angled power cable, you'll still be able to use it, assuming your power supply meets the recommended 750W listed on Asus's website. There's a 3-to-1 adapter included with the card, as well, for those who haven't upgraded to an ATX 3.0 PSU yet.
Design: 4 / 5
Nvidia GeForce RTX 5070 Ti: Performance
(Image credit: Future / John Loeffler)
RTX 4080 Super-level performance
Massive improvement over the RTX 4070 Ti Super
Added features like DLSS 4 with Multi-Frame Generation
A note on my data
The charts shown below offer the most recent data I have for the cards tested for this review. They may change over time as more card results are added and cards are retested. The 'average of all cards tested' includes cards not shown in these charts for readability purposes.
And so we come to the reason we're all here, which is this card's performance.
Given the...passionate...debate over the RTX 5080's underwhelming gen-on-gen uplift, enthusiasts will be very happy with the performance of the RTX 5070 Ti, at least as far as it relates to the last-gen RTX 4070 Ti and RTX 4070 Ti Super.
Starting with synthetic scores, at 1080p, both the RTX 4070 Ti and RTX 5070 Ti are so overpowered that they get close to CPU-locking on 3DMark's 1080p tests, Night Raid and Fire Strike, though the RTX 5070 Ti does come out about 14% ahead. The RTX 5070 Ti begins to pull away at higher resolutions and once you introduce ray tracing into the mix, with roughly 30% better performance at these higher level tests like Solar Bay, Steel Nomad, and Port Royal.
In terms of raw compute performance, the RTX 5070 Ti scores about 25% better in Geekbench 6 than the RTX 4070 Ti and about 20% better than the RTX 4070 Ti Super.
In creative workloads like Blender Benchmark 4.3.0, the RTX 5070 Ti pulls way ahead of its predecessors, though the 5070 Ti, 4070 Ti Super, and 4070 Ti all pretty much max out what a GPU can add to my Handbrake 1.9 4K-to-1080p encoding test, with all three cards cranking out about 220 FPS encoded on average.
Starting with 1440p gaming, the gen-on-gen improvement of the RTX 5070 Ti over the RTX 4070 Ti is a respectable 20%, even without factoring in DLSS 4 with Multi-Frame Generation.
The biggest complaint that some have about MFG is that if the base frame rate isn't high enough, you'll end up with controls that can feel slightly sluggish, even though the visuals you're seeing are much more fluid.
Fortunately, outside of turning ray tracing to its max settings and leaving Nvidia Reflex off, you're not really going to need to worry about that. In all but one of the games I tested at native 1440p with ray tracing, the RTX 5070 Ti's minimum FPS hit or exceeded 60 FPS, often by a lot.
Only F1 2024 had a lower-than-60 minimum FPS at native 1440p with max ray tracing, and even then, it stayed above 45 fps, which is fast enough that input latency is unlikely to be noticeable in practice. For 1440p gaming, then, there's absolutely no reason not to turn on MFG whenever it's available, since it can substantially increase framerates, often doubling or even tripling them without issue.
For 4K gaming, the RTX 5070 Ti native performance is spectacular, with nearly every title tested hitting 60 FPS or greater on average, with those that fell short only doing so by 4-5 frames.
Compared to the RTX 4070 Ti and RTX 4070 Ti Super, the faster memory and expanded 16GB VRAM pool definitely turn up for the RTX 5070 Ti at 4K, delivering about 31% better overall average FPS than the RTX 4070 Ti and about 23% better average FPS than the RTX 4070 Ti Super.
In fact, the average 4K performance for the RTX 5070 Ti pulls up pretty much dead even with the RTX 4080 Super's performance, and about 12% better than the AMD Radeon RX 7900 XTX at 4K, despite the latter having 8GB more VRAM.
Like every other graphics card besides the RTX 4090, RTX 5080, and RTX 5090, playing at native 4K with ray tracing maxed out is going to kill your FPS. To the 5070 Ti's credit, though, minimum FPS never dropped so low as to turn things into a slideshow, even if the 5070 Ti's 25 FPS minimum in Cyberpunk 2077 was noticeable.
Turning on DLSS in these cases is a must, even if you skip turning on MFG, but the RTX 5070 Ti's balanced upscaled performance is a fantastic experience.
Leave ray tracing turned off (or set to a lower setting), however, and MFG definitely becomes a viable way to max out your 4K monitor's refresh rate for seriously fluid gaming.
Overall then, the RTX 5070 Ti delivers substantial high-resolution gains gen-on-gen, which should make enthusiasts happy, without having to increase its power consumption all that much.
Of all the graphics cards I've tested over the years, and especially over the past six months, the RTX 5070 Ti is pretty much the perfect balance for whatever you need it for, and if you can get it at MSRP or reasonably close to MSRP, it's without a doubt the best value for your money of any of the current crop of enthusiast graphics cards.
Performance: 5 / 5
Should you buy the Nvidia GeForce RTX 5070 Ti?
(Image credit: Future / John Loeffler)
Buy the Nvidia GeForce RTX 5070 Ti if...
You want the perfect balance of 4K performance and price Assuming you can find it at or close to MSRP, the 4K value proposition on this card is the best you'll find for an enthusiast graphics card.
You want a fantastic creative graphics card on the cheap While the RTX 5070 Ti doesn't have the RTX 5090's creative chops, it's a fantastic pick for 3D modelers and video professionals looking for a (relatively) cheap GPU.
You want Nvidia's latest DLSS features without spending a fortune While this isn't the first Nvidia graphics card to feature DLSS 4 with Multi Frame Generation, it is the cheapest, at least until the RTX 5070 launches in a month or so.
Don't buy it if...
You want the absolute best performance possible The RTX 5070 Ti is a fantastic performer, but the RTX 5080, RTX 4090, and RTX 5090 all offer better raw performance if you're willing to pay more for it.
You're looking for something more affordable While the RTX 5070 Ti has a fantastic price for an enthusiast-grade card, it's still very expensive, especially once scalpers get involved.
You only plan on playing at 1440p If you never plan on playing at 4K this generation, you might want to see if the RTX 5070 or AMD Radeon RX 9070 XT and RX 9070 cards are a better fit.
Also consider
Nvidia GeForce RTX 5080 While more expensive, the RTX 5080 features fantastic performance and value for under a grand at MSRP.
Nvidia GeForce RTX 4080 Super While this card might not be on the store shelves for much longer, the RTX 5070 Ti matches the RTX 4080 Super's performance, so if you can find the RTX 4080 Super at a solid discount, it might be the better pick.
I spent about a week testing the Nvidia GeForce RTX 5070 Ti, using it mostly for creative work and gaming, including titles like Indiana Jones and the Great Circle and Avowed.
I also used my updated suite of benchmarks including industry standards like 3DMark and Geekbench, as well as built-in gaming benchmarks like Cyberpunk 2077 and Dying Light 2.
I also test all of the competing cards in a given card's market class using the same test bench setup throughout so I can fully isolate GPU performance across various, repeatable tests. I then take geometric averages of the various test results (which better insulates the average from being skewed by tests with very large test results) to come to comparable scores for different aspects of the card's performance. I give more weight to gaming performance than creative or AI performance, and performance is given the most weight in how final scores are determined, followed closely by value.
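For the curious, the geometric averaging described above looks something like this in Python; the FPS numbers are made up purely to show how a single outlier skews the arithmetic mean far more than the geometric one:

```python
import math

def geometric_mean(scores: list[float]) -> float:
    """Geometric mean via logs: the nth root of the product of n scores."""
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# Hypothetical results: three typical games plus one outlier esports title
fps_results = [60, 75, 90, 480]
print(f"arithmetic mean: {sum(fps_results) / len(fps_results):.1f}")  # pulled up by the outlier
print(f"geometric mean:  {geometric_mean(fps_results):.1f}")          # closer to typical results
```

Here the arithmetic mean lands around 176 FPS while the geometric mean stays near 118 FPS, much closer to how the card performs in a typical game.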
I've been testing GPUs, PCs, and laptops for TechRadar for nearly five years now, with more than two dozen graphics card reviews under my belt in the past three years alone. On top of that, I have a master's degree in Computer Science and have been building PCs and gaming on PCs for most of my life, so I am well qualified to assess the value of a graphics card and whether it's worth your time and money.
Back in the olden days of Wi-Fi 5, it was Asus’ ZenWiFi ‘mega mesh’ wireless routers that led the world. While regular mesh systems merely dribbled performance across numerous network nodes (like many still do), the ZenWiFi nodes innovatively used secondary 5GHz channels (plus, the nascent 6GHz channel) as fast backhaul to maintain peak performance at a distance. Nowadays, such advantages are built into the Wi-Fi 6E and Wi-Fi 7 standards, so where does that leave a Wi-Fi 7 version in the form of the ZenWiFi BT10? Let’s find out.
The two smart-looking nodes seem identical, but note the discreet sticker denoting one as the master. Failing to notice this may lead to hair being torn out, swearing for half an hour, decrying the powers that be and wondering why the dang thing won’t connect when you’ve obviously done everything right, repeatedly checked the password and #$@&%! drat. Otherwise, the setup process is simple via the phone app.
The app provides the usual monitoring and management settings on the first screen, and immediately asks if you want to reset the default password and set up a separate IoT network. You can assign devices to people, limit bandwidth, block them or assign them QoS optimizations for gaming, streaming or WFH. An ‘Insight’ tab provides suggestions for security and optimization. The Family tab enables you to set content filtering and on/off schedules (and, unlike some rivals, these settings are free).
Other features include Asus’ (Trend Micro-powered) AiProtection, which scans and protects your network as well as all the devices on it. The usual networking tools are available, including Google Assistant voice control. Ultimately, it’s well-featured and very intuitive. My main concern is that the QoS controls have a feature that tracks the websites used by everyone on the network. That raises some serious privacy issues.
Wired connections are the same on both nodes: there’s Gigabit WAN/LAN, 10G Ethernet WAN/LAN, and 10G Ethernet LAN. All the ports are color coded but that could be confusing to some users. There’s also a USB 3.0 port, which can be used for file sharing and media serving.
So how does it perform? On paper, the ZenWiFi BT10 is a tri-band router with 18,000 Mbps worth of throughput. Note, you can choose to reserve the 6GHz channel for backhaul, but leaving it at ‘Auto’ saw better results. I tested it by downloading video files from a Synology NAS to an HP OmniBook Ultra Flip 14 at close range, two rooms away (by the second node, at the front of the house) and 15 meters away in the back garden. It scored 1,661 Mbps, 614 Mbps and 370 Mbps, respectively, which is an excellent result.
All in all, the Asus ZenWiFi BT10 is a very appealing package that looks good, offers heaps of intuitive and useful features, plus fast performance to boot. Then there’s the price… Two nodes cost an eye-watering $900 / £779 / AU$2,799. Still, if you need high-end functionality and speed, it’s hard to beat.
Asus ZenWiFi BT10 review: Price and availability
(Image credit: Future)
How much does it cost? $900 / £779 / AU$2,799
When is it available? Now
Where can you get it? Available in the US, UK, and Australia
Asus has a plethora of Wi-Fi 7 routers, but (like other vendors) it’s pushing the expensive premium models out first. I saw many of its budget Wi-Fi 7 routers at the Computex 2024 trade show and those will offer similar features at lower cost, but there’s no sign of them appearing in most markets, at least not at the time of publication.
Until then, we’re stuck with inflated price tags. It costs $900 in the US, £779 in the UK and AU$2,799 in Australia. For some reason, Aussies seem to be getting particularly hard done by in this case. Most regions sell single nodes, but only a few, it seems, sell the three-node kit.
A tempting alternative is the Asus ROG Rapture GT-BE98. While it’s not a mesh system, the powerful gaming behemoth can single-handedly rival the speeds, performance and features of the BT10, but at a cheaper price. As with most current Asus routers, older models or cheap ones can be used as nodes thanks to Asus’ AiMesh technology – a potentially affordable way of expanding the network into dead zones. However, it’s quite a confronting device and not everyone will want what looks like a giant robot spider in their home.
Value score: 4 / 5
Asus ZenWiFi BT10 review: Specifications
(Image credit: Future)
Asus ZenWiFi BT10 review: Design
Sleek enough for a stylish home
Simple to set up
App (and web-based firmware) are responsive, powerful and intuitive
The physical design of the ZenWiFi BT10 is not far from its predecessor, the ZenWiFi AX XT8. The grilles at the sides are more refined, but both will happily fit into a stylish home or office better than most on the market.
Setting it up is simple, thanks to the mature, intuitive and well-featured app. Just note that, despite the similarities, there’s a sticker on the primary node and you need to connect to that, as using the secondary node won’t work.
While there are many features accessible within the app, Asus has these and many more advanced options accessible via a web browser, and both interfaces are intuitive and responsive to use.
Design score: 5 / 5
Asus ZenWiFi BT10 review: Features
(Image credit: Future)
Security and Parental controls are included (without subscription)
Has almost every consumer networking feature under the sun
10G Ethernet LAN and WAN ports
The Asus router app has been around for some time now and it’s well laid out, intuitive and packed full of features. The opening screen displays a wireless network map and provides a button to manually run a wireless-optimization cycle. There’s a real-time traffic monitor, CPU and RAM monitor, and an at-a-glance display of what type of devices are connected wirelessly and via cables.
The second tab breaks down which devices are connected along with their IP addresses and the resources they’re using. You can easily block them or assign them to family members to provide parental controls. There’s also the ability to configure Asus' AiMesh feature which lets you do things like turn the LEDs off, prioritize the 6GHz channel for backhaul or client connection, and see details like the IP address, MAC address and firmware version. The Insight tab offers smart recommendations regarding using secure connections, intrusion prevention and setting up family groups.
The built-in network security is called Asus AiProtection and it’s powered by Trend Micro. In addition to providing network security assessments, it offers malicious site blocking, two-way intrusion prevention and infected device isolation. It also powers the parental controls and (mercifully) doesn’t require a separate premium subscription – unlike other rivals.
The Family tab lets you add people and their devices to customizable and preset groups. This can provide web filtering that’s suitable for different children’s age groups (plus adults), setting up both online and offline schedules for each day of the week. Again, I’m very pleased to see Asus provide these features without asking for a subscription fee.
The final tab offers access to other standard router features, including QoS and VPN. While the analysis features that come with this are useful, I am concerned about the website history logging, which enables people to spy on the online activity of everyone on the network. You can also set up a USB port as a SAMBA media server or FTP file server, and there’s the ability to add Alexa and Google Assistant integration.
Accessing the firmware via a web browser provides access to all of the above along with functions like Dual WAN, 3G/4G LTE USB WAN, Port Forwarding, Port Triggering, DMZ, DDNS, IPTV, automatic BitTorrent downloading, VPN management, Apple Time Machine compatibility and Shared Folder Privileges, among other high-level network-admin features. Just note that many of these are available on Asus’ lesser routers, so don’t splash out on an expensive model just because one catches your eye.
Physically, each node has Gigabit WAN, 10G Ethernet LAN, and 10G Ethernet WAN/LAN network ports, plus a USB-A 3.0 connection. It’s also worth mentioning Asus’ AiMesh feature which can use most current (and many older), cheap and premium Asus routers as nodes to further extend a network.
Features: 5 / 5
Asus ZenWiFi BT10 review: Performance
Tri-band Wi-Fi 7 for blistering real-world speed
One of the best performers at long range
10G Ethernet for fast wired connections
Asus ZenWiFi BT10 benchmarks
Close range: 1,661 Mbps
Medium range: 614 Mbps
Long range: 370 Mbps
The tri-band (2.4GHz, 5GHz and 6GHz) router promises 18,000 Mbps of theoretical performance, but that only exists in lab conditions and certainly can’t be achieved in the real world where every network’s situation will be different. It’s possible to reserve the 6GHz channel for backhaul only, but leaving it set to ‘Auto’ saw better results.
I ran my tests, which included downloading large video files from a Synology NAS (with a wired, 10G Ethernet port) connected to the router, to a Wi-Fi 7-equipped HP OmniBook Ultra Flip 14 laptop at three different ranges.
Up close, it managed 1,661 Mbps, which I’ve only seen beaten by Netgear’s Nighthawk 7 RS700S Wi-Fi router. Two rooms away, at the front of my single-story home (by the second node), it managed 614 Mbps. While that’s a significant drop, it’s still impressive, although other premium routers and mesh systems can be a bit faster. More impressively, the BT10 managed 370 Mbps, 15 meters away, outside the home in the garden. Only top-tier three-node mesh systems have rivaled that (and not all do).
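Figures like these are just bytes moved over elapsed time; a quick sketch of the conversion (the file size and timing below are hypothetical, chosen only to land near the close-range result, and not from the actual test run):

```python
def throughput_mbps(bytes_transferred, seconds):
    """Convert a timed file transfer into megabits per second."""
    bits = bytes_transferred * 8
    return bits / seconds / 1_000_000

# Hypothetical: a 2GB video file downloaded in 9.6 seconds works
# out to ~1,667 Mbps, roughly the close-range result above.
rate = throughput_mbps(2 * 1000**3, 9.6)
```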
In short, it’s very fast indeed, and I happily edited 4K video on my laptop from across the network with no issues at all.
Performance: 5 / 5
(Image credit: Future)
Should you buy the Asus ZenWiFi BT10?
Buy it if...
You want fast Wi-Fi
Wi-Fi 7 really is a game-changer in that it offers superlative performance for old and new devices alike. I never like calling anything future-proof, but the fact that only cutting-edge clients can come close to accessing its full performance is telling. It will be a very long time before it feels slow.
You have weak Wi-Fi in some areas
Some premium routers do a great job of distributing a strong signal across a large area. But there are plenty of larger buildings that have dead spots due to size or thick walls. If that’s the case, the BT10 will likely help you out, and you can still add additional nodes via Asus’ AiMesh technology.
You hate subscriptions
It’s been disappointing to see that some premium-brand routers now come with core features that you have to pay even more for. In some cases, that means paying for both parental controls and security software, separately. Asus deserves credit for keeping it all free.
Don't buy it if…
You want to save money
The BT10, like many premium Wi-Fi 7 kits, is incredibly expensive. While it’s nice to have a future-proof setup, you can still buy last-gen Wi-Fi 6 and 6E models, with similar features for substantially less. You can also add cheap nodes using old and cheap Asus routers that are AiMesh compatible.
You live in Australia
Australians appear to be the victims of price gouging when it comes to premium Wi-Fi 7 networking devices. The price here is anomalously high compared to other regions, even with the usual tax and shipping issues.
You only want basic features
Some people just want to access the internet without much fuss. If that’s the case, then the BT10 is overpowered, over-featured and overpriced for your requirements. You can save a massive amount of money on a lesser device that will still fulfill your needs.
Also consider
If you're undecided about investing in the Asus ZenWiFi BT10 router, I've compared its specs with three alternatives that might suit you better.
Netgear Nighthawk RS700S
The elder sibling of the RS300 is twice as expensive, but it provides Wi-Fi 7 with an even faster speed of 19 Gbps, and has 10G Ethernet, so it’s great for high-speed broadband connections.
Asus ROG Rapture GT-BE98
This giant robot spider is the ZenWiFi BT10’s big, gamer-oriented brother. If you can get past the looks, it offers similar features and performance in one, less-expensive package.
TP-Link Deco BE63
It’s more mature in the market and the price has dropped even more. You also get three nodes to spread the signal even further. It’s a great-value Wi-Fi 7 mesh kit.
We pride ourselves on our independence and our rigorous review-testing process, offering up long-term attention to the products we review and making sure our reviews are updated and maintained - regardless of when a device was released, if you can still buy it, it's on our radar.
At first glance, the Nvidia GeForce RTX 5080 doesn't seem like that much of an upgrade from the Nvidia GeForce RTX 4080 it is replacing, but that's only part of the story with this graphics card.
Its performance, to be clear, is unquestioningly solid, positioning it as the third-best graphics card on the market right now, by my testing, and its new PCIe 5.0 interface and GDDR7 VRAM further distances it from the RTX 4080 and RTX 4080 Super from the last generation. It also outpaces the best AMD graphics card, the AMD Radeon RX 7900 XTX, by a healthy margin, pretty much locking up the premium, enthusiast-grade GPUs in Nvidia's corner for at least another generation.
Most impressively, it does this all for the same price as the Nvidia GeForce RTX 4080 Super and RX 7900 XTX: $999 / £939 / AU$2,019. This is also a rare instance where a graphics card launch price actually recedes from the high watermark set by its predecessor, as the RTX 5080 climbs down from the inflated price of the RTX 4080 when it launched back in 2022 for $1,199 / £1,189 / AU$2,219.
Then, of course, there's the new design of the card, which features a slimmer dual-slot profile, making it easier to fit into your case (even if the card's length remains unchanged). The dual flow-through fan cooling solution does wonders for managing the extra heat generated by the extra 40W TDP, and while the 12VHPWR cable connector is still present, the 3-to-1 8-pin adapter is at least somewhat less ridiculous than the RTX 5090's 4-to-1 dongle.
The new card design also repositions the power connector itself to make it less cumbersome to plug a cable into the card, though it does pretty much negate the benefit of the 90-degree-angled cables that gained popularity with the high-end RTX 40 series cards.
Finally, everything is built off of TSMC's 4nm N4 process node, making it one of the most cutting-edge GPUs on the market in terms of its architecture. While AMD and Intel will follow suit with their own 4nm GPUs soon (AMD RDNA 4 also uses TSMC's 4nm process node and is due to launch in March), right now, Nvidia is the only game in town for this latest hardware.
None of that would matter if the card didn't perform, but gamers and enthusiasts can rest assured that even without DLSS 4, you're getting a respectable upgrade. It might not have the wow factor of the beefier RTX 5090, but for gaming, creating, and even AI workloads, the Nvidia GeForce RTX 5080 is a spectacular balance of performance, price, and innovation that you won't find anywhere else at this level.
Nvidia GeForce RTX 5080: Price & availability
(Image credit: Future)
How much is it? MSRP is $999 / £939 / AU$2,019
When can you get it? The RTX 5080 goes on sale January 30, 2025
Where is it available? The RTX 5080 will be available in the US, UK, and Australia at launch
Where to buy the RTX 5080
Looking to pick up the RTX 5080? Check out our Where to buy RTX 5080 live blog for updates to find stock in the US and UK
The Nvidia GeForce RTX 5080 goes on sale on January 30, 2025, starting at $999 / £939 / AU$2,019 for the Founders Edition and select AIB partner cards, while overclocked (OC) and more feature-rich third-party cards will be priced higher.
This puts the Nvidia RTX 5080 about $200 / £200 / AU$200 cheaper than the launch price of the last-gen RTX 4080, while also matching the price of the RTX 4080 Super.
Both of those RTX 40 series GPUs should see some downward price pressure as a result of the RTX 5080's release, which might complicate the value proposition of the RTX 5080 over the other two.
The RTX 5080 is also launching at the same MSRP as the AMD Radeon RX 7900 XTX, which is AMD's top GPU right now. And with AMD confirming that it does not intend to launch an enthusiast-grade RDNA 4 GPU this generation, the RTX 5080's only real competition is from other Nvidia graphics cards like the RTX 4080 Super or RTX 5090.
This makes the RTX 5080 a great value proposition for those looking to buy a premium 4K graphics card, as its price-to-performance ratio is very strong.
Value: 4 / 5
Nvidia GeForce RTX 5080: Specs & features
(Image credit: Future)
GDDR7 VRAM and PCIe 5.0
Still just 16GB VRAM
Slightly higher 360W TDP
While the Nvidia RTX 5080 doesn't push the spec envelope quite as far as the RTX 5090 does, its spec sheet is still impressive.
For starters, like the RTX 5090, the RTX 5080 uses the faster, next-gen PCIe 5.0 interface that allows for faster data processing and coordination with the CPU, which translates directly into higher performance.
You also have new GDDR7 VRAM in the RTX 5080, only the second card to have it after the RTX 5090, and it dramatically increases the memory bandwidth and speed of the RTX 5080 compared to the RTX 4080 and RTX 4080 Super. Those latter two cards both use slower GDDR6X memory, so even though all three cards have the same amount of memory (16GB) and memory bus-width (256-bit), the RTX 5080 has a >25% faster effective memory speed of 30Gbps, compared to the 23Gbps of the RTX 4080 Super and the 22.4Gbps on the base RTX 4080.
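That memory advantage is easy to quantify: peak bandwidth is just the bus width multiplied by the per-pin data rate, divided by eight bits per byte. A quick sketch using the figures above:

```python
def peak_bandwidth_gbs(bus_width_bits, effective_gbps):
    """Peak memory bandwidth in GB/s: bus width (bits) times the
    effective per-pin data rate (Gbps), divided by 8 bits per byte."""
    return bus_width_bits * effective_gbps / 8

rtx_5080 = peak_bandwidth_gbs(256, 30)        # 960.0 GB/s
rtx_4080_super = peak_bandwidth_gbs(256, 23)  # 736.0 GB/s
rtx_4080 = peak_bandwidth_gbs(256, 22.4)      # ~716.8 GB/s
```

With the bus width fixed at 256 bits across all three cards, the jump from GDDR6X to GDDR7 accounts for the entire bandwidth gain.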
This is all on top of the Blackwell GPU inside the card, which is built on TSMC's 4nm process, compared to the Lovelace GPUs in the RTX 4080 and 4080 Super, which use TSMC's 5nm process. So even though the transistor count on the RTX 5080 is slightly lower than its predecessor's, the smaller transistors are faster and more efficient.
The RTX 5080 also has a higher SM count, 84, compared to the RTX 4080's 76 and the RTX 4080 Super's 80, meaning the RTX 5080 has the commensurate increase in shader cores, ray tracing cores, and Tensor cores. It also has a slightly faster boost clock (2,617MHz) than its predecessor and the 4080 Super variant.
Finally, there is a slight increase in the card's TDP, 360W compared to the RTX 4080 and RTX 4080 Super's 320W.
Specs & features: 4.5 / 5
Nvidia GeForce RTX 5080: Design
(Image credit: Future)
Slimmer dual-slot form factor
Dual flow-through cooling system
The redesign of the Nvidia RTX 5080 is identical to that of the RTX 5090, featuring the same slimmed-down dual slot profile as Nvidia's flagship card.
If I were to guess, the redesign of the RTX 5080 isn't as essential as it is for the RTX 5090, which needed a way to bring better cooling for the much hotter 575W TDP, and the RTX 5080 (and eventually the RTX 5070) just slotted into this new design by default.
That said, it's still a fantastic change, especially as it makes the RTX 5080 thinner and lighter than its predecessor.
(Image credit: Future)
The core of the redesign is the new dual flow-through cooling solution, which uses an innovative three-part PCB inside to open up a gap at the front of the card, allowing a second fan to blow cooler air over the heat sink fins drawing heat away from the GPU.
(Image credit: Future)
This means that you don't need as thick of a heat sink to pull away heat, which allows the card itself to get the same thermal performance from a thinner form factor, moving from the triple-slot RTX 4080 design down to a dual-slot RTX 5080. In practice, this also allows for a slight increase in the card's TDP, giving the card a bit of a performance boost as well, just from implementing a dual flow-through design.
Given that fact, I would not be surprised if other card makers follow suit, and we start getting much slimmer graphics cards as a result.
(Image credit: Future)
The only other design choice of note is the 90-degree turn of the 16-pin power port, which should make it easier to plug the 12VHPWR connector into the card. The RTX 4080 didn't suffer nearly the same kinds of issues with its power connectors as the RTX 4090 did, so this design choice really flows down from engineers trying to fix potential problems with the much more power-hungry RTX 5090. But, if you're going to implement it for your flagship card, you might as well put it on all of the Founders Edition cards.
Unfortunately, this redesign means that if you invested in a 90-degree-angled 12VHPWR cable, it won't work on the RTX 5080 Founders Edition, though third-party partner cards will have a lot of different designs, so you should be able to find one that fits your cable situation.
Design: 4.5 / 5
Nvidia GeForce RTX 5080: Performance
(Image credit: Future)
Excellent all-around performance
Moderately more powerful than the RTX 4080 and RTX 4080 Super, but nearly as fast as the RTX 4090 in gaming
You'll need DLSS 4 to get the best results
A note on my data
The charts shown below are the most recent test data I have for the cards tested for this review and may change over time as more card results are added and cards are retested. The 'average of all cards tested' includes cards not shown in these charts for readability purposes.
A note on the RTX 4080 Super
In my testing for this review, the RTX 4080 Super scored consistently lower than it has in the past, which I believe is an issue with my card specifically that isn't reflective of its actual performance. I'm including the data from the RTX 4080 Super for transparency's sake, but I wouldn't take these numbers as-is. I'll be retesting the RTX 4080 Super soon, and will update my data with new scores once I've troubleshot the issue.
Performance is king, though, and so naturally all the redesign and spec bumps won't amount to much if the RTX 5080 doesn't deliver better performance as a result, and fortunately it does—though maybe not as much as some enthusiasts would like.
Overall, the RTX 5080 manages to score about 13% better than the RTX 4080 and about 19% better than the AMD Radeon RX 7900 XTX, a result that will disappoint anyone hoping for an uplift closer to 20% or better (especially after seeing the 20-25% uplift on the RTX 5090).
If we were just to go off those numbers, some might call them disappointing, regardless of all the other improvements to the RTX 5080 in terms of design and specs. All this needs to be put in a broader context though, because my perspective changed once I compared the RTX 5080 to the RTX 4090.
Overall, the RTX 5080 is within 12% of the overall performance of the RTX 4090, and within 9% of the RTX 4090's gaming performance, which is a hell of a thing and simply can't be ignored, even by enthusiasts.
Starting with synthetic benchmarks, the RTX 5080 scores about 13% better than the RTX 4080 and RX 7900 XTX, consistently beating the RTX 4080 and substantially outpacing the RX 7900 XTX in ray-traced workloads (though the RX 7900 XTX does pull down a slightly better average 1080p rasterization score, to its credit).
Compared to the RTX 4090, the RTX 5080 comes in at about 15% slower on average, with its worst performance coming at lower resolutions. At 4K, though, the RTX 5080 comes in just 7% slower than the last-gen flagship.
In terms of compute performance, the RTX 5080 trounces the RX 7900 XTX, as expected, by about 38%, with a more modest 9% improvement over the RTX 4080. Against the RTX 4090, however, the RTX 5080 comes within just 5% of the RTX 4090's Geekbench compute scores. If you're looking for a cheap AI card, the RTX 5080 is definitely going to be your jam.
On the creative side, the PugetBench for Creators Adobe Photoshop benchmark still isn't working for the RTX 5080, so I can't tell you much about its creative raster performance yet (though I will update these charts once that issue is fixed), but going off the 3D modeling and video editing scores, the RTX 5080 is an impressive GPU, as expected.
The entire 3D modeling industry is effectively built on Nvidia's CUDA, so against the RTX 5080, the RX 7900 XTX doesn't stand a chance as the 5080 more than doubles the RX 7900 XTX's Blender Benchmark performance. Gen-on-gen though, the RTX 5080 comes in with about 8% better performance.
Against the RTX 4090, the RTX 5080 comes within 15% of its performance, and for good measure, if you're rocking an RTX 3090 and you're curious about the RTX 5080, it outperforms the RTX 3090 by about 75% in Blender Benchmark. If you're on an RTX 3090 and want to upgrade, you'll probably still be better off with an RTX 4090, but if you can't find one, the RTX 5080 is a great alternative.
In terms of video editing performance, the RTX 5080 doesn't do as well as its predecessor in PugetBench for Creators Adobe Premiere and effectively ties in my Handbrake 4K to 1080p encoding test. I expect that once the RTX 5080 launches, Puget Systems will be able to update its tools for the new RTX 50 series, so these scores will likely change, but for now, it is what it is, and you're not going to see much difference in your video editing workflows with this card over its predecessor.
(Image credit: Future)
The RTX 5080 is Nvidia's premium "gaming" card, though, so its gaming performance is what's going to matter to the vast majority of buyers out there. For that, you won't be disappointed. Working just off DLSS 3 with no frame generation, the RTX 5080 will get you noticeably improved framerates gen-on-gen at 1440p and 4K, with substantially better minimum/1% framerates as well for smoother gameplay. Turn on DLSS 4 with Multi-Frame Generation and the RTX 5080 does even better, blowing well past the RTX 4090 in some titles.
DLSS 4 with Multi-Frame Generation is game developer-dependent, however, so even though this is the flagship gaming feature for this generation of Nvidia GPUs, not every game will feature it. For testing purposes, then, I stick to DLSS 3 without Frame Generation (and the AMD and Intel equivalents, where appropriate), since this allows for a more apples-to-apples comparison between cards.
At 1440p, the RTX 5080 gets about 13% better average fps and minimum/1% fps overall, with up to 18% better ray tracing performance. Set DLSS 3 to balanced and ray tracing to its highest settings, and the RTX 5080 gets you about 9% better average fps than its predecessor, but a massive 58% higher minimum/1% fps, on average.
Compared to the RTX 4090, the RTX 5080's average 1440p fps comes within 7% of the RTX 4090's, and within 2% of its minimum/1% fps, on average. In native ray-tracing performance, the RTX 5080 slips to within 14% of the RTX 4090's average fps and within 11% of its minimum/1% performance. Turn on balanced upscaling, however, and everything changes, with the RTX 5080 coming within just 6% of the RTX 4090's ray-traced upscaled average fps and beating the RTX 4090's minimum/1% fps average by almost 40%.
Crank things up to 4K and the RTX 5080's lead over the RTX 4080 grows a good bit. With no ray tracing or upscaling, the RTX 5080 gets about 20% faster average fps and minimum/1% fps than the RTX 4080, overall. Its native ray tracing performance is about the same, however, and its minimum/1% fps average actually falls behind the RTX 4080's, both with and without DLSS 3.
Against the RTX 4090, the RTX 5080 comes within 12% of its average fps and within 8% of its minimum/1% performance without ray tracing or upscaling. It falls behind considerably in native 4K ray tracing performance (which is to be expected, given the substantially higher RT core count for the RTX 4090), but when using DLSS 3, that ray tracing advantage is cut substantially and the RTX 5080 manages to come within 14% of the RTX 4090's average fps, and within 12% of its minimum/1% fps overall.
Taken together, the RTX 5080 makes some major strides toward RTX 4090 performance across the board, closing a little more than half of the gap between the RTX 4080 and RTX 4090.
The RTX 5080 beats its predecessor by just over 13% overall, and comes within 12% of the RTX 4090's overall performance, all while costing less than both RTX 40-series cards' launch MSRPs, making it an incredible value for a premium card to boot.
Performance: 4 / 5
Should you buy the Nvidia GeForce RTX 5080?
(Image credit: Future)
Buy the Nvidia GeForce RTX 5080 if...
You want fantastic performance for the price You're getting close to RTX 4090 performance for under a grand (or just over two, if you're in Australia) at MSRP.
You want to game at 4K This card's 4K gaming performance is fantastic, coming within 12-14% of the RTX 4090's in a lot of games.
You're not willing to make the jump to an RTX 5090 The RTX 5090 is an absolute beast of a GPU, but even at its MSRP, it's double the price of the RTX 5080, so you're right to wonder if it's worth making the jump to the next tier up.
Don't buy it if...
You want the absolute best performance possible The RTX 5080 comes within striking distance of the RTX 4090 in terms of performance, but it doesn't actually get there, much less reaching the vaunted heights of the RTX 5090.
You're looking for something more affordable At this price, it's an approachable premium graphics card, but it's still a premium GPU, and the RTX 5070 Ti and RTX 5070 are just around the corner.
You only plan on playing at 1440p While this card is great for 1440p gaming, it's frankly overkill for that resolution. You'll be better off with the RTX 5070 Ti if all you want is 1440p.
Also consider
Nvidia GeForce RTX 4090 With the release of the RTX 5090, the RTX 4090 should see its price come down quite a bit, and if scalpers drive up the price of the RTX 5080, the RTX 4090 might be a better bet.
Nvidia GeForce RTX 5090 Yes, it's double the price of the RTX 5080, and that's going to be a hard leap for a lot of folks, but if you want the best performance out there, this is it.
I spent about a week testing the RTX 5080, using my updated suite of benchmarks like Black Myth Wukong, 3DMark Steel Nomad, and more.
I also used this card as my primary work GPU where I relied on it for photo editing and design work, while also testing out a number of games on it like Cyberpunk 2077, Black Myth Wukong, and others.
I've been testing graphics cards for TechRadar for a couple of years now, with more than two dozen GPU reviews under my belt. I've extensively tested and retested all of the graphics cards discussed in this review, so I'm intimately familiar with their performance. This gives me the best possible position to judge the merits of the RTX 5080, and whether it's the best graphics card for your needs and budget.
When can you get it? The RTX 5080 goes on sale January 30, 2025
Where is it available? The RTX 5080 will be available in the US, UK, and Australia at launch
Where to buy the RTX 5080
Looking to pick up the RTX 5080? Check out our Where to Buy RTX 5080 live blog for updates to find stock in the US and UK.
The Nvidia GeForce RTX 5080 goes on sale on January 30, 2025, starting at $999 / £939 / AU$2,019 for the Founders Edition card from Nvidia, as well as select AIB partner cards. Third-party overclocked (OC) cards and those with other extras like liquid cooling and RGB will ultimately cost more.
The RTX 5080 launches at a much lower price than the original RTX 4080, which had a launch MSRP of $1,199 / £1,189 / AU$2,219, though the RTX 5080 does come in at the same price as the Nvidia RTX 4080 Super.
It's worth noting that the RTX 5080 is fully half the MSRP of the Nvidia RTX 5090 that launches at the same time, and given the performance of the RTX 5080, a lot of potential buyers out there will likely find the RTX 5080 to be the better value of the two cards.
Value: 4 / 5
Nvidia GeForce RTX 5080: Specs & features
(Image credit: Future / John Loeffler)
GDDR7 and PCIe 5.0
Slightly higher SM count than RTX 4080 Super
Moderate increase in TDP, but nothing like the RTX 5090
Specs & features: 4 / 5
The Nvidia GeForce RTX 5090 is a difficult GPU to approach as a professional reviewer because it is the rare consumer product that is so powerful, and so good at what it does, you have to really examine if it is actually a useful product for people to buy.
Right out the gate, let me just lay it out for you: depending on the workload, this GPU can get you up to 50% better performance versus the GeForce RTX 4090, and that's not even factoring in multi-frame generation when it comes to gaming, though on average the performance is still a respectable improvement of roughly 21% overall.
Simply put, whatever it is you're looking to use it for, whether gaming, creative work, or AI research and development, this is the best graphics card for the job if all you care about is pure performance.
Things get a bit more complicated if you want to bring energy efficiency into the equation. But if we're being honest, if you're considering buying the Nvidia RTX 5090, you don't care about energy efficiency. This simply isn't that kind of card, and so as much as I want to make energy efficiency an issue in this review, I really can't. It's not intended to be efficient, and those who want this card do not care about how much energy this thing is pulling down—in fact, for many, the enormous TDP on this card is part of its appeal.
Likewise, I can't really argue too much with the card's price, which comes in at $1,999 / £1,939 / AU$4,039 for the Founders Edition, and which will likely be much higher for AIB partner cards (and that's before the inevitable scalping begins). I could rage, rage against the inflation of the price of premium GPUs all I want, but honestly, Nvidia wouldn't charge this much for this card if there wasn't a line out the door and around the block full of enthusiasts who are more than willing to pay that kind of money for this thing on day one.
Do they get their money's worth? For the most part, yes, especially if they're not a gamer but a creative professional or AI researcher. If you're in the latter camp, you're going to be very excited about this card.
If you're a gamer, you'll still get impressive gen-on-gen performance improvements over the celebrated RTX 4090, and the Nvidia RTX 5090 is really the first consumer graphics card I've tested that can get you consistent, high-framerate 8K gameplay even before factoring in Multi-Frame Generation. That marks the RTX 5090 as something of an inflection point of things to come, much like the Nvidia RTX 2080 did back in 2018 with its first-of-its-kind hardware ray tracing.
Is it worth it though?
That, ultimately, is up to the enthusiast buyer who is looking to invest in this card. At this point, you probably already know whether or not you want it, and many will likely be reading this review to validate those decisions that have already been made.
In that, rest easy. Even without the bells and whistles of DLSS 4, this card is a hearty upgrade to the RTX 4090, and considering that the actual price of the RTX 4090 has hovered around $2,000 for the better part of two years despite its $1,599 MSRP, if the RTX 5090 sticks close to its launch price, it's well worth the investment. If it gets scalped to hell and sells for much more above that, you'll need to consider your purchase much more carefully to make sure you're getting the most for your money. Make sure to check out our where to buy an RTX 5090 guide to help you find stock when it goes on sale.
Nvidia GeForce RTX 5090: Price & availability
How much is it? MSRP is $1,999 / £1,939 / AU$4,039
When can you get it? The RTX 5090 goes on sale January 30, 2025
Where is it available? The RTX 5090 will be available in the US, UK, and Australia at launch
Where to buy the RTX 5090
Looking to pick up the RTX 5090? Check out our Where to buy RTX 5090 live blog for updates to find stock in the US and UK.
The Nvidia GeForce RTX 5090 goes on sale on January 30, 2025, starting at $1,999 / £1,939 / AU$4,039 for the Nvidia Founders Edition and select AIB partner cards. Overclocked (OC) and other similarly tweaked cards and designs will obviously run higher.
It's worth noting that the RTX 5090 is 25% more expensive than the $1,599 launch price of the RTX 4090, but in reality, we can expect the RTX 5090 to sell for much higher than its MSRP in the months ahead, so we're really looking at an asking price closer to the $2,499.99 MSRP of the Turing-era Nvidia Titan RTX (if you're lucky).
Of course, if you're in the market for the Nvidia RTX 5090, you're probably not squabbling too much about the price of the card. You're already expecting to pay the premium, especially the first adopter premium, that comes with this release.
That said, this is still a ridiculously expensive graphics card for anyone other than an AI startup with VC backing, so it's worth asking yourself before you confirm that purchase if this card is truly the right card for your system and setup.
Value: 3 / 5
Nvidia GeForce RTX 5090: Specs & features
(Image credit: Future / John Loeffler)
First GPU with GDDR7 VRAM and PCIe 5.0
Slightly slower clocks
Obscene 575W TDP
There are a lot of new architectural changes in the Nvidia RTX 50 series GPUs that are worth diving into, especially the move to a transformer AI model for its upscaling, but let's start with the new specs for the RTX 5090.
First and foremost, the flagship Blackwell GPU is the first consumer graphics card to feature next-gen GDDR7 video memory, and it is substantially faster than GDDR6 and GDDR6X (a roughly 33% increase in Gbps over the RTX 4090). Add in the much wider 512-bit memory interface and you have a total memory bandwidth of 1,790GB/s.
This, more than even the increased VRAM pool of 32GB versus the RTX 4090's 24GB, makes this GPU the first really capable 8K graphics card on the market. 8K textures have an enormous footprint in memory, so moving them through the rendering pipelines to generate playable framerates isn't really possible with anything less than what this card offers.
Yes, you can, maybe, get playable 8K gaming with some RTX 40 or AMD Radeon RX 7000 series cards if you use aggressive upscaling, but you won't really be getting 8K visuals that'll be worth the effort. In reality, the RTX 5090 is what you want if you want to play 8K, but good luck finding an 8K monitor at this point. Those are still years away from really going mainstream (though there are a growing number of 8K TVs).
If you're settling in at 4K though, you're in for a treat, since all that bandwidth means faster 4K texture processing, so you can get very fast native 4K gaming with this card without having to fall back on upscaling tech to get you to 60fps or higher.
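That headline bandwidth figure falls out of simple arithmetic. As a rough sanity check (the 28Gb/s GDDR7 and 21Gb/s GDDR6X per-pin rates here are inferred from the roughly 33% uplift mentioned above, not quoted from a spec sheet):

```python
# Back-of-the-envelope memory bandwidth: per-pin data rate times bus
# width, divided by 8 bits per byte. Per-pin rates are assumptions.
def memory_bandwidth_gb_s(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Total memory bandwidth in GB/s."""
    return per_pin_gbps * bus_width_bits / 8

rtx_5090 = memory_bandwidth_gb_s(28, 512)  # ~1,792 GB/s, in line with the quoted 1,790GB/s
rtx_4090 = memory_bandwidth_gb_s(21, 384)  # ~1,008 GB/s for comparison
print(rtx_5090, rtx_4090)
```

The wider 512-bit bus contributes as much to the gen-on-gen gain as the faster memory itself.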
(Image credit: Future / John Loeffler)
The clock speeds on the RTX 5090 are slightly slower, which is just as well, because the other major top-line specs for the RTX 5090 are its gargantuan 575W TDP and its PCIe 5.0 x16 interface. According to Nvidia, that thermal challenge required a major reengineering of the PCB inside the card, which I'll get to in a bit.
The PCIe 5.0 x16 interface, meanwhile, is the first of its kind in a consumer GPU, though you can expect AMD and Intel to quickly follow suit. This matters because a number of newer motherboards have PCIe 5.0 lanes ready to go, but most people have been using those for PCIe 5.0 M.2 SSDs.
If your motherboard has 20 PCIe 5.0 lanes, the RTX 5090 will take up 16 of those, leaving just four for your SSD. If you have one PCIe 5.0 x4 SSD, you should be fine, but I've seen motherboard configurations with two or three PCIe 5.0 x4 M.2 slots, so if you've loaded those up with PCIe 5.0 SSDs, you're likely to see some of them drop down to slower PCIe 4.0 speeds. I don't think it'll be that big of a deal, but it's worth considering if you've invested a lot into your SSD storage.
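To make the lane budgeting concrete, here's a hypothetical sketch for a 20-lane PCIe 5.0 platform; real boards split lanes according to chipset and BIOS rules, so treat the numbers as illustrative:

```python
# Hypothetical PCIe 5.0 lane budget: a full x16 GPU link leaves only
# one x4 allocation for M.2 storage on a 20-lane platform.
TOTAL_PCIE5_LANES = 20
GPU_LANES = 16            # the RTX 5090's full x16 link
LANES_PER_M2_SLOT = 4

remaining = TOTAL_PCIE5_LANES - GPU_LANES  # 4 lanes left over

def slots_at_full_speed(populated_slots: int) -> int:
    """How many populated PCIe 5.0 M.2 slots can keep their full x4 link."""
    return min(populated_slots, remaining // LANES_PER_M2_SLOT)

print(slots_at_full_speed(1))  # one SSD keeps full PCIe 5.0 speed
print(slots_at_full_speed(3))  # only one of three keeps it; the rest fall back
```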
As for the other specs, they're more or less similar to what you'd find in the RTX 4090, just more of it. The new Blackwell GB202 GPU in the RTX 5090 is built on a TSMC 4nm process, compared to the RTX 4090's TSMC 5nm AD102 GPU. The SM design is the same, so 128 CUDA cores, one ray tracing core, and four tensor cores per SM. At 170 SMs, you've got 21,760 CUDA cores, 170 RT cores, and 680 Tensor cores for the RTX 5090, compared to the RTX 4090's 128 SMs (so 16,384 CUDA, 128 RT, and 512 Tensor cores).
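The per-SM layout described above makes the core totals a straightforward multiplication:

```python
# SM math from the paragraph above: each SM carries 128 CUDA cores,
# one RT core, and four Tensor cores on both AD102 and GB202.
def core_counts(sm_count: int) -> tuple[int, int, int]:
    """Return (CUDA, RT, Tensor) core totals for a given SM count."""
    return sm_count * 128, sm_count * 1, sm_count * 4

print(core_counts(170))  # RTX 5090: (21760, 170, 680)
print(core_counts(128))  # RTX 4090: (16384, 128, 512)
```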
Specs & features: 4.5 / 5
Nvidia GeForce RTX 5090: Design
(Image credit: Future / John Loeffler)
Slim, dual-slot form factor
Better cooling
So there's a significant change to this generation of Nvidia Founders Edition RTX flagship cards in terms of design, and it's not insubstantial.
Holding the RTX 5090 Founders Edition in your hand, you'll immediately notice two things: first, you can comfortably hold it in one hand thanks to it being a dual-slot card rather than a triple-slot, and second, it's significantly lighter than the RTX 4090.
A big part of this is how Nvidia designed the PCB inside the card. Traditionally, graphics cards have been built with a single PCB that extends from the inner edge of the PC case, down through the PCIe slot, and far enough back to accommodate all of the modules needed for the card. On top of this PCB, you'll have a heatsink with piping from the GPU die itself through a couple of dozen aluminum fins to dissipate heat, with some kind of fan or blower system to push or pull cooler air through the heated fins to carry away the heat from the GPU.
The problem with this setup is that if you have a monolithic PCB, you can only really extend the heatsinks and fans off of the PCB to help cool it since a fan blowing air directly into a plastic wall doesn't do much to help move hot air out of the graphics card.
(Image credit: Future / John Loeffler)
Nvidia has a genuinely novel innovation on this account, and that's ditching the monolithic PCB that's been a mainstay of graphics cards for 30 years. Instead, the RTX 5090 (and presumably subsequent RTX 50-series GPUs to come), splits the PCB into three parts: the video output interface at the 'front' of the card facing out from the case, the PCIe interface segment of the card, and the main body of the PCB that houses the GPU itself as well as the VRAM modules and other necessary electronics.
This segmented design allows a gap in the front of the card below the fan, so rather than a fan blowing air into an obstruction, it can fully pass over the fins of the GPU's heatsink, substantially improving the thermals.
As a result, Nvidia is able to shrink the width of the card down considerably, moving from a 2.4-inch width to a 1.9-inch width, a roughly 20% reduction on paper. In practice, it feels substantially smaller than its predecessor, and it's definitely a card that won't completely overwhelm your PC case the way the RTX 4090 does.
(Image credit: Future / John Loeffler)
That said, the obscene power consumption required by this card means that the 8-pin adapter included in the RTX 5090 package is a comical 4-to-1 dongle that pretty much no PSU in anyone's PC case can really accommodate.
Most modular PSUs give you three PCIe 8-pin power connectors at most, so let's just be honest about this setup: you're going to need a new ATX 3.0 PSU of at least 1,000W to run this card (its official PSU recommendation is 950W, but just round up; you're going to need it), so make sure you factor that into your budget if you pick this card up.
Otherwise, the look and feel of the card isn't that different than previous generations, except the front plate of the GPU where the RTX 5090 branding would have gone is now missing, replaced by a finned shroud to allow air to pass through. The RTX 5090 stamp is instead printed on the center panel, similar to how it was done on the Nvidia GeForce RTX 3070 Founders Edition.
As a final touch, the white back-lit GeForce RTX logo and the X strips on the front of the card, when powered, add a nice RGB-lite touch that doesn't look too gaudy, though RGB fans out there might find it rather plain.
Design: 4.5 / 5
Nvidia GeForce RTX 5090: Performance
(Image credit: Future)
Most powerful GPU on the consumer market
Substantially faster than RTX 4090
Playable 8K gaming
A note on my data
The charts shown below are the most recent test data I have for the cards tested for this review and may change over time as more card results are added and cards are retested. The 'average of all cards tested' includes cards not shown in these charts for readability purposes.
So how does the Nvidia GeForce RTX 5090 stack up against its predecessor, as well as the best 4K graphics cards on the market more broadly?
Very damn well, it turns out, managing to improve performance over the RTX 4090 in some workloads by 50% or more, while leaving everything else pretty much in the dust.
When looked at from 30,000 feet, though, the overall performance gains are respectable gen-on-gen, but they aren't the kind of earth-shattering gains the RTX 4090 made over the Nvidia GeForce RTX 3090.
Starting with synthetic workloads, the RTX 5090 scores anywhere from 48.6% faster to about 6.7% slower than the RTX 4090 in various 3DMark tests, depending on the workload. The only poor result for the RTX 5090 was in 3DMark Night Raid, a test that both cards so completely overwhelm that the difference could come down to CPU bottlenecking or other issues that aren't easily identifiable. On every other 3DMark test, though, the RTX 5090 scores 5.6% better or higher, more often than not by 20-35%. In the most recently released test, Steel Nomad, the RTX 5090 is nearly 50% faster than the RTX 4090.
On the compute side of things, the RTX 5090 is up to 34.3% faster in Geekbench 6's OpenCL compute test and 53.9% faster in Vulkan, making it an absolute monster for AI researchers to leverage.
On the creative side, the RTX 5090 is substantially faster in 3D rendering, scoring between 35% and 49.3% faster in my Blender Benchmark 4.30 tests. There's very little difference between the two cards when it comes to video editing though, as they essentially tie in PugetBench for Creators' Adobe Premiere test and in Handbrake 1.7 4K to 1080p encoding.
The latter two results might be down to CPU bottlenecking, as even the RTX 4090 pushes right up against the performance ceiling set by the CPU in a lot of cases.
When it comes to gaming, the RTX 5090 is substantially faster than the RTX 4090, especially at 4K. In non-upscaled 1440p gaming, you're looking at a roughly 18% better average frame rate and a 22.6% better minimum/1% framerate for the RTX 5090. With DLSS 3 upscaling (but no frame generation), you're looking at 23.3% better average and 23% better minimum/1% framerates overall with the RTX 5090 vs the RTX 4090.
With ray tracing turned on and no upscaling, you're getting 26.3% better average framerates and about 23% better minimum/1% framerates, and with upscaling set to balanced (again, no frame generation), you're looking at about 14% better average fps and about 13% better minimum/1% fps for the RTX 5090 against the RTX 4090.
At 4K, however, the faster memory and wider memory bus really make a difference. With upscaling and ray tracing turned off, you're getting upwards of 200 fps at 4K from the RTX 5090 on average, compared to the RTX 4090's 154 average fps, a nearly 30% increase. The average minimum/1% fps for the RTX 5090 is about 28% faster than the RTX 4090's, as well. With DLSS 3 set to balanced, you're looking at a roughly 22% better average framerate overall compared to the RTX 4090, with an 18% better minimum/1% framerate on average as well.
With ray tracing and no upscaling, the difference is even more pronounced with the RTX 5090 getting just over 34% faster average framerates compared to the RTX 4090 (with a more modest 7% faster average minimum/1% fps). Turn on balanced DLSS 3 with full ray tracing and you're looking at about 22% faster average fps overall for the RTX 5090, but an incredible 66.2% jump in average minimum/1% fps compared to the RTX 4090 at 4K.
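For reference, all of the gen-on-gen percentages quoted in this section are simple relative deltas against the older card's result; a quick sketch using the 4K rasterization averages above:

```python
# Relative uplift: (new - old) / old, expressed as a percentage.
def uplift_pct(new_fps: float, old_fps: float) -> float:
    return (new_fps - old_fps) / old_fps * 100

# ~200 fps (RTX 5090) vs ~154 fps (RTX 4090) at native 4K.
print(round(uplift_pct(200, 154), 1))  # ~29.9, the 'nearly 30%' figure
```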
Again, none of this even factors in single frame generation, which can already substantially increase framerates in some games (though with the introduction of some input latency). Once Multi-Frame Generation rolls out at launch, you can expect to see these framerates for the RTX 5090 run substantially higher. Pair that with Nvidia Reflex 2 to help mitigate the input latency issues frame generation can introduce, and the playable performance of the RTX 5090 will only get better with time, and it's starting from a substantial lead right out of the gate.
In the end, the overall baseline performance of the RTX 5090 comes in about 21% better than the RTX 4090, which is what you're really looking for when it comes to a gen-on-gen improvement.
That said, you have to ask whether the performance improvement you do get is worth the enormous increase in power consumption. That 575W TDP isn't a joke. I maxed out at 556W of power at 100% utilization, and I hit 100% fairly often in my testing and while gaming.
The dual flow-through fan design also does a great job of cooling the GPU, but at the expense of turning the card into a space heater. That 575W of heat needs to go somewhere, and that somewhere is inside your PC case. Make sure you have adequate airflow to vent all that hot air, otherwise everything in your case is going to slowly cook.
As far as performance-per-dollar goes, this card does slightly better than the RTX 4090 on value for the money, but that's never been a buying factor for this kind of card anyway. You want this card for its performance, plain and simple, and in that regard, it's the best there is.
Performance: 5 / 5
Should you buy the Nvidia GeForce RTX 5090?
(Image credit: Future)
Buy the Nvidia GeForce RTX 5090 if...
You want the best performance possible From gaming to 3D modeling to AI compute, the RTX 5090 serves up best-in-class performance.
You want to game at 8K Of all the graphics cards I've tested, the RTX 5090 is so far the only GPU that can realistically game at 8K without compromising on graphics settings.
You really want to flex This card comes with a lot of bragging rights if you're into the PC gaming scene.
Don't buy it if...
You care about efficiency At 575W, this card might as well come with a smokestack and a warning from your utility provider about the additional cost of running it.
You're in any way budget-conscious This card starts off more expensive than most gaming PCs and will only become more so once scalpers get their hands on them. And that's not even factoring in AIB partner cards with extra features that add to the cost.
You have a small form-factor PC There's been some talk about the new Nvidia GPUs being SFF-friendly, but even though this card is thinner than the RTX 4090, it's just as long, so it'll be hard to fit into a lot of smaller cases.
Also consider
Nvidia GeForce RTX 4090 I mean, honestly, this is the only other card you can compare the RTX 5090 to in terms of performance, so if you're looking for an alternative to the RTX 5090, the RTX 4090 is pretty much it.
I spent about a week and a half testing the Nvidia GeForce RTX 5090, both running synthetic tests as well as using it in my day-to-day PC for both work and gaming.
I used my updated testing suite, which uses industry standard benchmark tools like 3DMark, Geekbench, Pugetbench for Creators, and various built-in gaming benchmarks. I used the same testbench setup listed to the right for the purposes of testing this card, as well as all of the other cards I tested for comparison purposes.
I've tested and retested dozens of graphics cards for the 20+ graphics card reviews I've written for TechRadar over the last few years, and so I know the ins and outs of these PC components. That's why you can trust my review process to help you make the right buying decision for your next GPU, whether it's the RTX 5090 or any of the other graphics cards I review.