Gadget news
Nvidia GeForce RTX 5070 Ti review: nearly perfect, but with one major flaw
7:10 pm | February 20, 2025


Nvidia GeForce RTX 5070 Ti: Two-minute review

The Nvidia GeForce RTX 5070 Ti definitely had a high expectation bar to clear after the mixed reception of the Nvidia GeForce RTX 5080 last month, especially from enthusiasts.

And while there are things I fault the RTX 5070 Ti for, there's no doubt that it has taken the lead as the best graphics card most people can buy right now—assuming that scalpers don't get there first.

The RTX 5070 Ti starts at $749 / £729 (about AU$1,050), a noticeably lower MSRP than its predecessor, the Nvidia GeForce RTX 4070 Ti, commanded at launch, and lower than the buffed-up Nvidia GeForce RTX 4070 Ti Super as well.

The fact that the RTX 5070 Ti beats both of those cards handily in terms of performance would normally be enough to get it high marks, but this card even ekes out a win over the Nvidia GeForce RTX 4080 Super, shooting it nearly to the top of the best Nvidia graphics card lists.

As one of the best 4K graphics cards I've ever tested, it isn't without faults, but we're really only talking about the fact that Nvidia isn't releasing a Founders Edition card for this one, and that's unfortunate for a couple of reasons.

For one, and probably most importantly, without a Founders Edition card guaranteed to sell for MSRP directly from Nvidia's website, the MSRP for this card is just a suggestion. And without an MSRP card from Nvidia keeping AIB partners onside, it'll be hard to find a card at Nvidia's $749 price tag, which undercuts its value proposition.

Also, because there's no Founders Edition, Nvidia's dual pass-through cooling design will pass the 5070 Ti by. If you were hoping this card might be SFF-friendly, I simply don't see how it fits into that category unless you stretch the meaning of small form factor until it hurts.

Those aren't small quibbles, but given everything else the RTX 5070 Ti brings to the table, it does feel like I'm stretching a bit to find something bad to say about this card for balance's sake.

For the vast majority of buyers out there looking for outstanding 4K performance at a relatively approachable MSRP, the Nvidia GeForce RTX 5070 Ti is the card you're going to want to buy.

Nvidia GeForce RTX 5070 Ti: Price & availability

The Nvidia GeForce RTX 5070 Ti sitting on its retail packaging

(Image credit: Future / John Loeffler)
  • How much is it? MSRP is $749/£729 (about AU$1,050), but with no Founders Edition, third-party cards will likely be higher
  • When can you get it? The RTX 5070 Ti goes on sale February 20, 2025
  • Where is it available? The RTX 5070 Ti will be available in the US, UK, and Australia at launch

The Nvidia GeForce RTX 5070 Ti goes on sale on February 20, 2025, starting at $749 / £729 (about AU$1,050) in the US, UK, and Australia.

Unlike the RTX 5090 and RTX 5080, there is no Founders Edition card for the RTX 5070 Ti, so there are no versions of this card guaranteed to sell at MSRP, which does complicate things given the scalping frenzy we've seen for the earlier RTX 50 series cards.

While stock of the Founders Edition RTX 5090 and RTX 5080 might be hard to find even from Nvidia, there is a place, at least, where you could theoretically buy those cards at MSRP. No such luck with the RTX 5070 Ti, which is a shame.

The 5070 Ti's MSRP does at least come in under the launch MSRPs of both the RTX 4070 Ti and RTX 4070 Ti Super, neither of which had Founders Edition cards, so stock and pricing will hopefully stay within the bounds of where those cards have been selling.

The 5070 Ti's MSRP puts it at the lower end of the enthusiast class. We haven't seen pricing for the AMD Radeon RX 9070 XT yet, and it's unlikely that AMD's competing RDNA 4 GPU will sell for much less than the RTX 5070 Ti, but if you're not in a hurry, it might be worth waiting a month or two to see what AMD has to offer in this range before deciding which is the better buy.

  • Value: 4 / 5

Nvidia GeForce RTX 5070 Ti: Specs

A closeup of the power connector on the Nvidia GeForce RTX 5070 Ti

(Image credit: Future / John Loeffler)
  • GDDR7 VRAM and PCIe 5.0
  • Slight bump in power consumption
  • More memory than its direct predecessor

Like the rest of the Nvidia Blackwell GPU lineup, there are some notable advances with the RTX 5070 Ti over its predecessors.

First, the RTX 5070 Ti features faster GDDR7 memory and 4GB more VRAM than the RTX 4070 Ti's 12GB. That larger, faster memory pool can process high-resolution textures more quickly, making the card far more capable at 4K.

Also of note is its 256-bit memory interface, which is 33.3% wider than the RTX 4070 Ti's 192-bit bus and equal to that of the RTX 4070 Ti Super. An extra 64 bits might not seem like a lot, but just like trying to fit a couch through a door, an extra inch or two of clearance can be the difference between moving the whole thing through at once and having to do it in parts, which means additional work on both ends.
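
To put rough numbers on that couch analogy, peak memory bandwidth is just the bus width (in bytes) multiplied by the per-pin data rate. The sketch below is illustrative only: the 192-bit figure follows from the 33.3% comparison above, while the per-pin speeds (21Gbps GDDR6X, 28Gbps GDDR7) are ballpark assumptions rather than numbers from this review.

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak theoretical memory bandwidth in GB/s: (bus width / 8 bits per byte) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

# RTX 4070 Ti-class setup: 192-bit bus with an assumed 21Gbps GDDR6X
print(peak_bandwidth_gb_s(192, 21.0))  # ~504 GB/s
# RTX 5070 Ti-class setup: 256-bit bus with an assumed 28Gbps GDDR7
print(peak_bandwidth_gb_s(256, 28.0))  # ~896 GB/s
```

The wider bus and the faster memory compound, which is why the 4K gains later in this review are larger than either change would deliver on its own.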

The output ports on the Nvidia GeForce RTX 5070 Ti

(Image credit: Future / John Loeffler)

There's also the new PCIe 5.0 x16 interface, which speeds up communication between the graphics card, your processor, and your SSD. If you have a PCIe 5.0 capable motherboard, processor, and SSD, just make note of how many PCIe 5.0 lanes you have available.

The RTX 5070 Ti will take up 16 of them, so if you only have 16 lanes available and you have a PCIe 5.0 SSD, the RTX 5070 Ti is going to get those lanes by default, throttling your SSD to PCIe 4.0 speeds. Some motherboards will let you set PCIe 5.0 priority, if you have to make a choice.
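
As a rough sketch of why that lane split matters, each PCIe 5.0 lane carries about 32 GT/s, or roughly 3.9 GB/s of usable bandwidth in each direction, and PCIe 4.0 is about half that per lane. The figures below are back-of-the-envelope approximations, not measurements from this review.

```python
# Approximate usable bandwidth per lane in GB/s (one direction); PCIe 4.0 is roughly half of 5.0
PER_LANE_GB_S = {"PCIe 5.0": 3.9, "PCIe 4.0": 1.95}

def link_bandwidth_gb_s(gen: str, lanes: int) -> float:
    """Approximate one-direction link bandwidth for a given PCIe generation and lane count."""
    return PER_LANE_GB_S[gen] * lanes

print(link_bandwidth_gb_s("PCIe 5.0", 16))  # GPU on a full Gen 5 x16 link: ~62 GB/s
print(link_bandwidth_gb_s("PCIe 4.0", 4))   # an SSD pushed down to Gen 4 x4: ~7.8 GB/s
```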

The RTX 5070 Ti uses slightly more power than its predecessors, but in my testing its maximum power draw came in at just under the card's 300W TDP.

As for the GPU inside the RTX 5070 Ti, it's built using TSMC's N4P process node, which is a refinement of the TSMC N4 node used by its predecessors. While not a full generational jump in process tech, the N4P process does offer better efficiency and a slight increase in transistor density.

  • Specs & features: 5 / 5

Nvidia GeForce RTX 5070 Ti: Design & features

The backplate of the Nvidia GeForce RTX 5070 Ti

(Image credit: Future / John Loeffler)
  • No Nvidia Founders Edition card
  • No dual-pass-through cooling (at least for now)

There is no Founders Edition card for the RTX 5070 Ti, so the card you end up with may look radically different from the one I tested for this review, the Asus Prime GeForce RTX 5070 Ti.

Whatever partner card you choose, though, it's likely to be a chonky one given the 300W TDP, since that much heat needs a lot of cooling. While the RTX 5090 and RTX 5080 Founders Edition cards featured the innovative dual pass-through design (which dramatically shrank those cards' width), it's unlikely you'll find any RTX 5070 Ti cards in the near future that feature this kind of cooling setup, if ever.

With that groundwork laid, you're going to have a lot of choice in cooling setups, shroud designs, and lighting, though more feature-rich cards will likely cost more, so factor in the added expense when weighing your options.

As for the Asus Prime GeForce RTX 5070 Ti, its sleek shroud lacks the RGB that a lot of gamers like for their builds, but for those of us who are kind of over RGB, the Prime's design is fantastic and will work easily in any typical mid-tower case.

The Prime RTX 5070 Ti features a triple-fan cooling setup, with one of those fans having complete passthrough over the heatsink fins. There's a protective backplate and stainless bracket over the output ports.

The 16-pin power connector rests along the card's backplate, so even if you invested in a 90-degree angled power cable, you'll still be able to use it, assuming your power supply meets the recommended 750W listed on Asus's website. There's a 3-to-1 adapter included with the card, as well, for those who haven't upgraded to an ATX 3.0 PSU yet.
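
For context on that 750W recommendation, here's a hedged, back-of-the-envelope power budget; the non-GPU figures are illustrative assumptions, not numbers from Asus or Nvidia.

```python
# Illustrative system power budget; only the GPU's 300W TDP comes from this review
budget_watts = {
    "GPU (RTX 5070 Ti TDP)": 300,
    "CPU under gaming load (assumed)": 150,
    "Motherboard, RAM, SSD, fans (assumed)": 75,
}

total_draw = sum(budget_watts.values())
psu_rating = 750  # Asus's recommended PSU wattage
print(f"Estimated draw: {total_draw}W, leaving {psu_rating - total_draw}W of headroom for transient spikes")
```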

  • Design: 4 / 5

Nvidia GeForce RTX 5070 Ti: Performance

An Nvidia GeForce RTX 5070 Ti on a table with its retail packaging

(Image credit: Future / John Loeffler)
  • RTX 4080 Super-level performance
  • Massive improvement over the RTX 4070 Ti Super
  • Added features like DLSS 4 with Multi-Frame Generation
A note on my data

The charts shown below offer the most recent data I have for the cards tested for this review. They may change over time as more card results are added and cards are retested. The 'average of all cards tested' includes cards not shown in these charts for readability purposes.

And so we come to the reason we're all here, which is this card's performance.

Given the...passionate...debate over the RTX 5080's underwhelming gen-on-gen uplift, enthusiasts will be very happy with the performance of the RTX 5070 Ti, at least as far as it relates to the last-gen RTX 4070 Ti and RTX 4070 Ti Super.

Starting with synthetic scores, at 1080p, both the RTX 4070 Ti and RTX 5070 Ti are so overpowered that they come close to being CPU-limited in 3DMark's 1080p tests, Night Raid and Fire Strike, though the RTX 5070 Ti does come out about 14% ahead. The RTX 5070 Ti begins to pull away at higher resolutions and once you introduce ray tracing into the mix, with roughly 30% better performance in higher-level tests like Solar Bay, Steel Nomad, and Port Royal.

In terms of raw compute performance, the RTX 5070 Ti scores about 25% better in Geekbench 6 than the RTX 4070 Ti and about 20% better than the RTX 4070 Ti Super.

In creative workloads like Blender Benchmark 4.30, the RTX 5070 Ti pulls way ahead of its predecessors, though the 5070 Ti, 4070 Ti Super, and 4070 Ti all pretty much max out what a GPU can add to my Handbrake 1.9 4K to 1080p encoding test, with all three cards cranking out about 220 FPS encoded on average.

Starting with 1440p gaming, the gen-on-gen improvement of the RTX 5070 Ti over the RTX 4070 Ti is a respectable 20%, even without factoring in DLSS 4 with Multi-Frame Generation.

The biggest complaint that some have about MFG is that if the base frame rate isn't high enough, you'll end up with controls that can feel slightly sluggish, even though the visuals you're seeing are much more fluid.

Fortunately, outside of maxing out ray tracing and leaving Nvidia Reflex off, you're not really going to need to worry about that. In all but one of the games I tested at native 1440p with ray tracing, the RTX 5070 Ti's minimum FPS hit or exceeded 60 FPS, often by a lot.

Only F1 2024 had a lower-than-60 minimum FPS at native 1440p with max ray tracing, and even then it stayed above 45 fps, which is a high enough base frame rate that any added input latency shouldn't be noticeable in practice for most players. For 1440p gaming, then, there's little reason not to turn on MFG whenever it's available, since it can substantially increase framerates, often doubling or even tripling them, without issue.
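
That judgment comes down to simple frame-time math: responsiveness tracks the rendered (base) frame rate, while MFG only multiplies the displayed frame rate. A minimal sketch, assuming an idealized 3x generation factor and ignoring Reflex and frame-generation overhead:

```python
def frame_time_ms(fps: float) -> float:
    """Milliseconds between frames at a given frame rate."""
    return 1000.0 / fps

base_fps = 45      # roughly the F1 2024 minimum mentioned above
mfg_factor = 3     # assumed generation factor for illustration; the real factor depends on the MFG setting

displayed_fps = base_fps * mfg_factor
print(frame_time_ms(base_fps))       # ~22.2ms between rendered frames, which is what your inputs feel
print(frame_time_ms(displayed_fps))  # ~7.4ms between displayed frames, which is what your eyes see
```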

For 4K gaming, the RTX 5070 Ti's native performance is spectacular, with nearly every title tested hitting 60 FPS or better on average, and those that fell short only did so by 4-5 frames.

Compared to the RTX 4070 Ti and RTX 4070 Ti Super, the faster memory and expanded 16GB VRAM pool definitely pay off at 4K, delivering about 31% better overall average FPS than the RTX 4070 Ti and about 23% better average FPS than the RTX 4070 Ti Super.

In fact, the RTX 5070 Ti's average 4K performance pulls up pretty much dead even with the RTX 4080 Super's, and lands about 12% ahead of the AMD Radeon RX 7900 XTX, despite the latter having 8GB more VRAM.

Like every other graphics card besides the RTX 4090, RTX 5080, and RTX 5090, playing at native 4K with ray tracing maxed out is going to kill your FPS. To the 5070 Ti's credit, though, minimum FPS never dropped so low as to turn things into a slideshow, even if the 5070 Ti's 25 FPS minimum in Cyberpunk 2077 was noticeable.

Turning on DLSS in these cases is a must, even if you skip MFG, and the RTX 5070 Ti's balanced upscaling delivers a fantastic experience.

Leave ray tracing turned off (or set to a lower setting), however, and MFG definitely becomes a viable way to max out your 4K monitor's refresh rate for seriously fluid gaming.

Overall then, the RTX 5070 Ti delivers substantial high-resolution gains gen-on-gen, which should make enthusiasts happy, without having to increase its power consumption all that much.

Of all the graphics cards I've tested over the years, and especially over the past six months, the RTX 5070 Ti is pretty much the perfect balance for whatever you need it for, and if you can get it at MSRP or reasonably close to MSRP, it's without a doubt the best value for your money of any of the current crop of enthusiast graphics cards.

  • Performance: 5 / 5

Should you buy the Nvidia GeForce RTX 5070 Ti?

A masculine hand holding the Nvidia GeForce RTX 5070 Ti

(Image credit: Future / John Loeffler)

Buy the Nvidia GeForce RTX 5070 Ti if...

You want the perfect balance of 4K performance and price
Assuming you can find it at or close to MSRP, the 4K value proposition on this card is the best you'll find for an enthusiast graphics card.

You want a fantastic creative graphics card on the cheap
While the RTX 5070 Ti doesn't have the RTX 5090's creative chops, it's a fantastic pick for 3D modelers and video professionals looking for a (relatively) cheap GPU.

You want Nvidia's latest DLSS features without spending a fortune
While this isn't the first Nvidia graphics card to feature DLSS 4 with Multi Frame Generation, it is the cheapest, at least until the RTX 5070 launches in a month or so.

Don't buy it if...

You want the absolute best performance possible
The RTX 5070 Ti is a fantastic performer, but the RTX 5080, RTX 4090, and RTX 5090 all offer better raw performance if you're willing to pay more for it.

You're looking for something more affordable
While the RTX 5070 Ti has a fantastic price for an enthusiast-grade card, it's still very expensive, especially once scalpers get involved.

You only plan on playing at 1440p
If you never plan on playing at 4K this generation, you might want to see if the RTX 5070 or AMD Radeon RX 9070 XT and RX 9070 cards are a better fit.

Also consider

Nvidia GeForce RTX 5080
While more expensive, the RTX 5080 features fantastic performance and value for under a grand at MSRP.

Read the full Nvidia GeForce RTX 5080 review

Nvidia GeForce RTX 4080 Super
While this card might not be on the store shelves for much longer, the RTX 5070 Ti matches the RTX 4080 Super's performance, so if you can find the RTX 4080 Super at a solid discount, it might be the better pick.

Read the full Nvidia GeForce RTX 4080 Super review

How I tested the Nvidia GeForce RTX 5070 Ti

  • I spent about a week with the RTX 5070 Ti
  • I used my complete GPU testing suite to analyze the card's performance
  • I tested the card in everyday, gaming, creative, and AI workload usage
Test System Specs

Here are the specs on the system I used for testing:

Motherboard: ASRock Z790i Lightning WiFi
CPU: Intel Core i9-14900K
CPU Cooler: Gigabyte Aorus Waterforce II 360 ICE
RAM: Corsair Dominator DDR5-6600 (2 x 16GB)
SSD: Crucial T705
PSU: Thermaltake Toughpower PF3 1050W Platinum
Case: Praxis Wetbench

I spent about a week testing the Nvidia GeForce RTX 5070 Ti, using it mostly for creative work and gaming, including titles like Indiana Jones and the Great Circle and Avowed.

I also used my updated suite of benchmarks including industry standards like 3DMark and Geekbench, as well as built-in gaming benchmarks like Cyberpunk 2077 and Dying Light 2.

I also test all of the competing cards in a given card's market class using the same test bench setup throughout so I can fully isolate GPU performance across various, repeatable tests. I then take geometric averages of the various test results (which better insulates the average from being skewed by tests with very large test results) to come to comparable scores for different aspects of the card's performance. I give more weight to gaming performance than creative or AI performance, and performance is given the most weight in how final scores are determined, followed closely by value.
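
For anyone curious what that averaging step looks like, here's a minimal sketch of a geometric mean with made-up FPS numbers; it is not the actual scoring tool used for these reviews.

```python
import math

def geometric_mean(values: list[float]) -> float:
    """Geometric mean: the n-th root of the product, far less sensitive to one huge outlier than an arithmetic mean."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

fps_results = [240.0, 95.0, 61.0, 78.0]     # hypothetical per-game average FPS
print(geometric_mean(fps_results))          # ~102: one 240fps esports result doesn't dominate the score
print(sum(fps_results) / len(fps_results))  # ~118.5: the arithmetic mean gets pulled up much further
```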

I've been testing GPUs, PCs, and laptops for TechRadar for nearly five years now, with more than two dozen graphics card reviews under my belt in the past three years alone. On top of that, I have a Master's degree in Computer Science and have been building PCs and gaming on PCs for most of my life, so I am well qualified to assess the value of a graphics card and whether it's worth your time and money.

  • Originally reviewed February 2025
Nvidia GeForce RTX 5080 review: nearly RTX 4090 performance for a whole lot less
5:00 pm | January 29, 2025


Nvidia GeForce RTX 5080: Two-minute review

At first glance, the Nvidia GeForce RTX 5080 doesn't seem like that much of an upgrade from the Nvidia GeForce RTX 4080 it is replacing, but that's only part of the story with this graphics card.

Its performance, to be clear, is unquestionably solid, positioning it as the third-best graphics card on the market right now by my testing, and its new PCIe 5.0 interface and GDDR7 VRAM further distance it from the last-gen RTX 4080 and RTX 4080 Super. It also outpaces the best AMD graphics card, the AMD Radeon RX 7900 XTX, by a healthy margin, pretty much locking up the premium, enthusiast-grade GPU tier in Nvidia's corner for at least another generation.

Most impressively, it does this all for the same price as the Nvidia GeForce RTX 4080 Super and RX 7900 XTX: $999 / £939 / AU$2,019. This is also a rare instance where a graphics card launch price actually recedes from the high watermark set by its predecessor, as the RTX 5080 climbs down from the inflated price of the RTX 4080 when it launched back in 2022 for $1,199 / £1,189 / AU$2,219.

Then, of course, there's the new design of the card, which features a slimmer dual-slot profile, making it easier to fit into your case (even if the card's length remains unchanged). The dual flow-through fan cooling solution does wonders for managing the extra heat generated by the extra 40W of TDP, and while the 12VHPWR cable connector is still present, the 3-to-1 8-pin adapter is at least somewhat less ridiculous than the RTX 5090's 4-to-1 dongle.

The new card design also repositions the power connector itself to make it less cumbersome to plug a cable into the card, though it does pretty much rule out the 90-degree-angled cables that gained popularity with the high-end RTX 40 series cards.

Finally, everything is built off of TSMC's 4nm N4 process node, making it one of the most cutting-edge GPUs on the market in terms of its architecture. While AMD and Intel will follow suit with their own 4nm GPUs soon (AMD RDNA 4 also uses TSMC's 4nm process node and is due to launch in March), right now, Nvidia is the only game in town for this latest hardware.

None of that would matter if the card didn't perform, however, but gamers and enthusiasts can rest assured that even without DLSS 4, you're getting a respectable upgrade. It might not have the wow factor of the beefier RTX 5090, but for gaming, creating, and even AI workloads, the Nvidia GeForce RTX 5080 is a spectacular balance of performance, price, and innovation that you won't find anywhere else at this level.

Nvidia GeForce RTX 5080: Price & availability

An RTX 5080 sitting on its retail packaging

(Image credit: Future)
  • How much is it? MSRP is $999 / £939 / AU$2,019
  • When can you get it? The RTX 5080 goes on sale January 30, 2025
  • Where is it available? The RTX 5080 will be available in the US, UK, and Australia at launch
Where to buy the RTX 5080

Looking to pick up the RTX 5080? Check out our Where to buy RTX 5080 live blog for updates to find stock in the US and UK

The Nvidia GeForce RTX 5080 goes on sale on January 30, 2025, starting at $999 / £939 / AU$2,019 for the Founders Edition and select AIB partner cards, while overclocked (OC) and more feature-rich third-party cards will be priced higher.

This puts the Nvidia RTX 5080 about $200 / £200 / AU$200 cheaper than the launch price of the last-gen RTX 4080, while also matching the price of the RTX 4080 Super.

Both of those RTX 40 series GPUs should see some downward price pressure as a result of the RTX 5080 release, which might complicate the value proposition of the RTX 5080 compared to its last-gen siblings.

The RTX 5080 is also launching at the same MSRP as the AMD Radeon RX 7900 XTX, which is AMD's top GPU right now. And with AMD confirming that it does not intend to launch an enthusiast-grade RDNA 4 GPU this generation, the RTX 5080's only real competition is from other Nvidia graphics cards like the RTX 4080 Super or RTX 5090.

This makes the RTX 5080 a great value proposition for those looking to buy a premium 4K graphics card, as its price-to-performance ratio is very strong.

  • Value: 4 / 5

Nvidia GeForce RTX 5080: Specs & features

A masculine hand holding an Nvidia GeForce RTX 5080 showing off the power connector

(Image credit: Future)
  • GDDR7 VRAM and PCIe 5.0
  • Still just 16GB VRAM
  • Slightly higher 360W TDP

While the Nvidia RTX 5080 doesn't push the spec envelope quite as far as the RTX 5090 does, its spec sheet is still impressive.

For starters, like the RTX 5090, the RTX 5080 uses the faster, next-gen PCIe 5.0 interface that allows for faster data processing and coordination with the CPU, which translates directly into higher performance.

You also have new GDDR7 VRAM in the RTX 5080, only the second card to have it after the RTX 5090, and it dramatically increases the memory bandwidth and speed of the RTX 5080 compared to the RTX 4080 and RTX 4080 Super. Those latter two cards both use slower GDDR6X memory, so even though all three cards have the same amount of memory (16GB) and memory bus-width (256-bit), the RTX 5080 has a >25% faster effective memory speed of 30Gbps, compared to the 23Gbps of the RTX 4080 Super and the 22.4Gbps on the base RTX 4080.
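
Since all three cards share a 256-bit bus, those effective speeds map directly onto peak theoretical bandwidth. A quick worked example using the figures above (peak numbers, not measured throughput):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, effective_gbps: float) -> float:
    """Peak theoretical memory bandwidth in GB/s: bus width in bytes times the effective per-pin rate."""
    return bus_width_bits / 8 * effective_gbps

effective_rates = {"RTX 5080 (GDDR7)": 30.0, "RTX 4080 Super (GDDR6X)": 23.0, "RTX 4080 (GDDR6X)": 22.4}
for card, rate in effective_rates.items():
    print(card, peak_bandwidth_gb_s(256, rate), "GB/s")
# ~960 vs ~736 vs ~716.8 GB/s, which is where the >25% memory speed advantage comes from
```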

This is all on top of the Blackwell GPU inside the card, which is built on TSMC's 4nm process, compared to the Lovelace GPUs in the RTX 4080 and 4080 Super, which use TSMC's 5nm process. So even though the transistor count on the RTX 5080 is slightly lower than its predecessor's, the smaller transistors are faster and more efficient.

The RTX 5080 also has a higher SM count, 84, compared to the RTX 4080's 76 and the RTX 4080 Super's 80, meaning the RTX 5080 has the commensurate increase in shader cores, ray tracing cores, and Tensor cores. It also has a slightly faster boost clock (2,617MHz) than its predecessor and the 4080 Super variant.

Finally, there is a slight increase in the card's TDP, 360W compared to the RTX 4080 and RTX 4080 Super's 320W.

  • Specs & features: 4.5 / 5

Nvidia GeForce RTX 5080: Design

An Nvidia GeForce RTX 5080 leaning against its retail packaging with the RTX 5080 logo visible

(Image credit: Future)
  • Slimmer dual-slot form factor
  • Dual flow-through cooling system

The redesign of the Nvidia RTX 5080 is identical to that of the RTX 5090, featuring the same slimmed-down dual slot profile as Nvidia's flagship card.

If I were to guess, the redesign isn't as essential for the RTX 5080 as it is for the RTX 5090, which needed better cooling for its much hotter 575W TDP; the RTX 5080 (and eventually the RTX 5070) simply slotted into this new design by default.

That said, it's still a fantastic change, especially as it makes the RTX 5080 thinner and lighter than its predecessor.

The dual flow through cooling system on the Nvidia GeForce RTX 5080

(Image credit: Future)

The core of the redesign is the new dual flow-through cooling solution, which uses an innovative three-part PCB inside to open up a gap at the front of the card, allowing a second fan to blow cooler air over the heat sink fins drawing heat away from the GPU.

A view of the comparative slot width of the Nvidia GeForce RTX 5080 and RTX 4080

(Image credit: Future)

This means that you don't need as thick of a heat sink to pull away heat, which allows the card itself to get the same thermal performance from a thinner form factor, moving from the triple-slot RTX 4080 design down to a dual-slot RTX 5080. In practice, this also allows for a slight increase in the card's TDP, giving the card a bit of a performance boost as well, just from implementing a dual flow-through design.

Given that fact, I would not be surprised if other card makers follow suit, and we start getting much slimmer graphics cards as a result.

A masculine hand holding an Nvidia GeForce RTX 5080 showing off the power connector

(Image credit: Future)

The only other design choice of note is the 90-degree turn of the 16-pin power port, which should make it easier to plug the 12VHPWR connector into the card. The RTX 4080 didn't suffer nearly the same kinds of issues with its power connectors as the RTX 4090 did, so this design choice really flows down from engineers trying to fix potential problems with the much more power-hungry 5090. But, if you're going to implement it for your flagship card, you might as well put it on all of the Founders Edition cards.

Unfortunately, this redesign means that if you invested in a 90-degree-angled 12VHPWR cable, it won't work on the RTX 5080 Founders Edition, though third-party partner cards will have a lot of different designs, so you should be able to find one that fits your cable situation.

  • Design: 4.5 / 5

Nvidia GeForce RTX 5080: Performance

An Nvidia GeForce RTX 5080 slotted and running on a test bench

(Image credit: Future)
  • Excellent all-around performance
  • Moderately more powerful than the RTX 4080 and RTX 4080 Super, but nearly as fast as the RTX 4090 in gaming
  • You'll need DLSS 4 to get the best results
A note on my data

The charts shown below are the most recent test data I have for the cards tested for this review and may change over time as more card results are added and cards are retested. The 'average of all cards tested' includes cards not shown in these charts for readability purposes.

A note on the RTX 4080 Super

In my testing for this review, the RTX 4080 Super scored consistently lower than it has in the past, which I believe is an issue with my card specifically that isn't reflective of its actual performance. I'm including the data from the RTX 4080 Super for transparency's sake, but I wouldn't take these numbers as-is. I'll be retesting the RTX 4080 Super soon, and will update my data with new scores once I've troubleshot the issue.

Performance is king, though, and so naturally all the redesign and spec bumps won't amount to much if the RTX 5080 doesn't deliver better performance as a result, and fortunately it does—though maybe not as much as some enthusiasts would like.

Overall, the RTX 5080 manages to score about 13% better than the RTX 4080 and about 19% better than the AMD Radeon RX 7900 XTX, a result that will disappoint some (especially after seeing the 20-25% uplift on the RTX 5090) who were hoping for something closer to 20% or better.

If we were just to go off those numbers, some might call them disappointing, regardless of all the other improvements to the RTX 5080 in terms of design and specs. All this needs to be put in a broader context though, because my perspective changed once I compared the RTX 5080 to the RTX 4090.

Overall, the RTX 5080 is within 12% of the overall performance of the RTX 4090, and within 9% of the RTX 4090's gaming performance, which is a hell of a thing and simply can't be ignored, even by enthusiasts.

Starting with synthetic benchmarks, the RTX 5080 scores about 13% better than the RTX 4080 and RX 7900 XTX, consistently beating the RTX 4080 and substantially beating the RX 7900 XTX in ray-traced workloads (though the RX 7900 XTX does pull down a slightly better average 1080p rasterization score, to its credit).

Compared to the RTX 4090, the RTX 5080 comes in at about 15% slower on average, with its worst performance coming at lower resolutions. At 4K, though, the RTX 5080 comes in just 7% slower than the last-gen flagship.

In terms of compute performance, the RTX 5080 trounces the RX 7900 XTX, as expected, by about 38%, with a more modest 9% improvement over the RTX 4080. Against the RTX 4090, however, the RTX 5080 comes within just 5% of the RTX 4090's Geekbench compute scores. If you're looking for a cheap AI card, the RTX 5080 is definitely going to be your jam.

On the creative side, the PugetBench for Creators Adobe Photoshop benchmark still isn't working with the RTX 5080, so I can't tell you much about its creative raster performance yet (though I will update these charts once that issue is fixed), but going off the 3D modeling and video editing scores, the RTX 5080 is an impressive GPU, as expected.

The entire 3D modeling industry is effectively built on Nvidia's CUDA, so against the RTX 5080, the RX 7900 XTX doesn't stand a chance as the 5080 more than doubles the RX 7900 XTX's Blender Benchmark performance. Gen-on-gen though, the RTX 5080 comes in with about 8% better performance.

Against the RTX 4090, the RTX 5080 comes within 15% of its performance, and for good measure, if you're rocking an RTX 3090 and curious about this card, the RTX 5080 outperforms the RTX 3090 by about 75% in Blender Benchmark. If you're on an RTX 3090 and want to upgrade, you'll probably still be better off with an RTX 4090, but if you can't find one, the RTX 5080 is a great alternative.

In terms of video editing performance, the RTX 5080 doesn't do as well as its predecessor in PugetBench for Creators Adobe Premiere and effectively ties in my Handbrake 4K to 1080p encoding test. I expect that once the RTX 5080 launches, Puget Systems will be able to update its tools for the new RTX 50 series, so these scores will likely change, but for now, it is what it is, and you're not going to see much difference in your video editing workflows with this card over its predecessor.

An Nvidia GeForce RTX 5080 slotted into a motherboard

(Image credit: Future)

The RTX 5080 is Nvidia's premium "gaming" card, though, so its gaming performance is what's going to matter to the vast majority of buyers out there. For that, you won't be disappointed. Working just off DLSS 3 with no frame generation, the RTX 5080 will get you noticeably improved framerates gen-on-gen at 1440p and 4K, with substantially better minimum/1% framerates as well for smoother gameplay. Turn on DLSS 4 with Multi-Frame Generation and the RTX 5080 does even better, blowing well past the RTX 4090 in some titles.

DLSS 4 with Multi-Frame Generation is game developer-dependent, however, so even though this is the flagship gaming feature for this generation of Nvidia GPUs, not every game will feature it. For testing purposes, then, I stick to DLSS 3 without Frame Generation (and the AMD and Intel equivalents, where appropriate), since this allows for a more apples-to-apples comparison between cards.

At 1440p, the RTX 5080 gets about 13% better average fps and minimum/1% fps overall, with up to 18% better ray tracing performance. Set DLSS 3 to balanced and ray tracing to its highest settings, and the RTX 5080 gets you about 9% better average fps than its predecessor, but a massive 58% higher minimum/1% fps, on average.

Compared to the RTX 4090, the RTX 5080's average 1440p fps comes within 7% of the RTX 4090's, and within 2% of its minimum/1% fps, on average. In native ray-tracing performance, the RTX 5080 slips to within 14% of the RTX 4090's average fps and within 11% of its minimum/1% performance. Turn on balanced upscaling, however, and everything changes, with the RTX 5080 coming within just 6% of the RTX 4090's ray-traced upscaled average fps and beating the RTX 4090's minimum/1% fps average by almost 40%.

Cranking things up to 4K, the RTX 5080's lead over the RTX 4080 grows a good bit. With no ray tracing or upscaling, the RTX 5080 gets about 20% faster average fps and minimum/1% fps than the RTX 4080 overall. Its native ray tracing performance is about the same, however, and its minimum/1% fps average actually falls behind the RTX 4080's, both with and without DLSS 3.

Against the RTX 4090, the RTX 5080 comes within 12% of its average fps and within 8% of its minimum/1% performance without ray tracing or upscaling. It falls behind considerably in native 4K ray tracing performance (which is to be expected, given the substantially higher RT core count for the RTX 4090), but when using DLSS 3, that ray tracing advantage is cut substantially and the RTX 5080 manages to come within 14% of the RTX 4090's average fps, and within 12% of its minimum/1% fps overall.

Taken together, the RTX 5080 makes some major strides toward RTX 4090 performance across the board, getting a little more than halfway across the performance gap between the RTX 4080 and the RTX 4090.

The RTX 5080 beats its predecessor by just over 13% overall, and comes within 12% of the RTX 4090's overall performance, all while costing less than both RTX 40 series cards' launch MSRPs, making it an incredible value for a premium card to boot.

  • Performance: 4 / 5

Should you buy the Nvidia GeForce RTX 5080?

A masculine hand holding up an Nvidia GeForce RTX 5080 against a green background

(Image credit: Future)

Buy the Nvidia GeForce RTX 5080 if...

You want fantastic performance for the price
You're getting close to RTX 4090 performance for under a grand (or just over two, if you're in Australia) at MSRP.

You want to game at 4K
This card's 4K gaming performance is fantastic, coming within 12-14% of the RTX 4090's in a lot of games.

You're not willing to make the jump to an RTX 5090
The RTX 5090 is an absolute beast of a GPU, but even at its MSRP, it's double the price of the RTX 5080, so you're right to wonder if it's worth making the jump to the next tier up.

Don't buy it if...

You want the absolute best performance possible
The RTX 5080 comes within striking distance of the RTX 4090 in terms of performance, but it doesn't actually get there, much less reaching the vaunted heights of the RTX 5090.

You're looking for something more affordable
At this price, it's an approachable premium graphics card, but it's still a premium GPU, and the RTX 5070 Ti and RTX 5070 are just around the corner.

You only plan on playing at 1440p
While this card is great for 1440p gaming, it's frankly overkill for that resolution. You'll be better off with the RTX 5070 Ti if all you want is 1440p.

Also consider

Nvidia GeForce RTX 4090
With the release of the RTX 5090, the RTX 4090 should see its price come down quite a bit, and if scalpers drive up the price of the RTX 5080, the RTX 4090 might be a better bet.

Read the full Nvidia GeForce RTX 4090 review

Nvidia GeForce RTX 5090
Yes, it's double the price of the RTX 5080, and that's going to be a hard leap for a lot of folks, but if you want the best performance out there, this is it.

Read the full Nvidia GeForce RTX 5090 review

How I tested the Nvidia GeForce RTX 5080

  • I spent about a week and a half with the RTX 5080
  • I used my complete GPU testing suite to analyze the card's performance
  • I tested the card in everyday, gaming, creative, and AI workload usage
Test System Specs

Here are the specs on the system I used for testing:

Motherboard: ASRock Z790i Lightning WiFi
CPU: Intel Core i9-14900K
CPU Cooler: Gigabyte Aorus Waterforce II 360 ICE
RAM: Corsair Dominator DDR5-6600 (2 x 16GB)
SSD: Crucial T705
PSU: Thermaltake Toughpower PF3 1050W Platinum
Case: Praxis Wetbench

I spent about a week testing the RTX 5080, using my updated suite of benchmarks like Black Myth Wukong, 3DMark Steel Nomad, and more.

I also used this card as my primary work GPU where I relied on it for photo editing and design work, while also testing out a number of games on it like Cyberpunk 2077, Black Myth Wukong, and others.

I've been testing graphics cards for TechRadar for a couple of years now, with more than two dozen GPU reviews under my belt. I've extensively tested and retested all of the graphics cards discussed in this review, so I'm intimately familiar with their performance. This gives me the best possible position to judge the merits of the RTX 5080, and whether it's the best graphics card for your needs and budget.

  • Originally reviewed January 2025
Nvidia announces RTX 50-series graphics cards with DLSS 4
4:27 pm | January 7, 2025


Nvidia today announced the GeForce RTX 50 series, the company’s next generation of desktop and mobile graphics processors. Based on the latest Blackwell architecture, the product stack currently includes the RTX 5090, the RTX 5080, the RTX 5070 Ti, and the RTX 5070 in both desktop and mobile variants. The four GPUs have obvious spec differences but let’s focus on things that are common first. All new Blackwell GPUs are built on the TSMC 4NP process. They feature new 5th generation Tensor cores and 4th generation ray tracing cores. There’s a newer 9th gen NVENC encoder and 6th gen...


Acer Predator Triton 14 review: lightweight and affordable with great performance
5:00 pm | April 2, 2024


Acer Predator Triton 14: Two-minute review

“Thinner laptops imbued with the latest hardware” is an adequate mantra for Acer’s Predator Triton series of gaming laptops. From the 500 to the 300 SE, these powerful yet slim devices continuously balance power and form factor, and this is nowhere more evident than in the latest Acer Predator Triton 14.

Starting at just $1,499.99 in the US (£1,083.05 / AU$4,599), standing less than an inch tall when closed, and weighing under four pounds, the model I reviewed is packed with a 14-core Intel Core i7-13700H CPU, an Nvidia RTX 4070 GPU, 16GB of RAM, and a 1TB SSD. Then there's the 14-inch display with a 2560 x 1600 resolution and a 250Hz refresh rate.

Accompanying the wonderful display are powerful DTS:X-certified speakers that are loud enough when performance isn't being pushed. Through and through, the Predator Triton 14 is suitable for more than just gaming.

The form factor makes it great for general computing, while the powerful components are more than good enough for editing photos or videos in Adobe Creative Suite software. Portability doesn't come at the cost of respectable battery life either, and there are multiple ways to charge the gaming laptop as well.

The port selection is well thought out, and a wonderful keyboard features per-key RGB lighting and plenty of hotkeys. All of these can be customized through the Predator Sense app, alongside the performance settings. Even the trackpad is smooth as butter, with an incorporated fingerprint scanner.

Despite the balanced approach, some compromises come with the Predator Triton 14. As mentioned above, the cooling and fan system can get incredibly loud when playing a game like Cyberpunk 2077 or Alan Wake II, which means headphones are going to be a must. However, that shouldn't be too much of a problem when writing a review on Google Docs while listening to music on Tidal. The underside can get uncomfortably hot under load as well, so make sure it's placed on something like a desk if you intend to game on it especially hard.

Potential buyers looking for the Predator Triton 14 to be their main general-use laptop may also need to understand that this is a dedicated purchase. Unfortunately, neither the RAM nor the SSD storage is upgradeable at this time. RAM-wise, 32GB is slowly becoming the top-tier standard, so having 16GB may be a bit on the lower side, but it'll still get you several years of gaming.

Meanwhile, with modern AAA games using well over 100GB of storage, 1TB really isn’t cutting it much anymore. Right now, these specs are more than adequate, but they’re coming close to “not much longer” status. 

If that doesn't necessarily matter, there's so much to appreciate with the Acer Predator Triton 14. Not only does it look ready for action, but it's ready for any type of game users throw at it. Gamers and content creators looking for solid 1440p performance are going to have a blast with this, and given its decent price point, it easily makes our list of the best gaming laptops going.


Acer Predator Triton 14: Price and availability

  • How much does it cost? It's available in two configurations in the US and UK for $1,499 (£1,083.05) and $1,999 (£1,575.38), and in Australia for AU$4,599
  • When is it available? Available now 
  • Where can you get it? From Acer’s online store in U.S., UK and AU

 

Both the US and UK configurations of the Acer Predator Triton 14 share the same specs at their $1,499 (£1,083.05) and $1,999 (£1,575.38) price points. All configurations have an identical Intel Core i7 CPU, 16GB of LPDDR5 RAM, port selection, audio, and Full HD webcam.

At the base price, users get an Nvidia RTX 4050 with 6GB of VRAM, a 1920 x 1200 display at 165Hz, and 512GB of SSD storage. This is totally fine for anyone looking to stay in the 1080p native resolution range when gaming. The highest $1,999 configuration for 1440p performance comes with an Nvidia RTX 4070 with 8GB of VRAM, a 2560 x 1600 display at 250Hz, and 1TB of SSD storage. There is only one configuration available in Australia at the time of writing, which is in line with the top-tier option except that it offers 32GB of RAM.

There are two 14-inch gaming laptops that come to mind when thinking of alternatives to the Predator Triton 14. One is the more expensive Razer Blade 14, which starts at $2,399. For those who need something cheaper, the Lenovo Legion 5 Slim 14 gives up some performance power for a $1,439.99 price point. With that said, the Predator Triton 14 does find a happy medium when it comes to value.

  • Value score: 4 / 5

Acer Predator Triton 14: Specs

The Acer Predator Triton 14 currently comes in two configurations in the United States,  two in the UK, and one in Australia. 

Acer Predator Triton 14: Design

The Acer Predator Triton 14 at various angles

(Image credit: Future - Joel Burgess)
  • The design matches aggressiveness with modesty alongside a healthy port selection
  • Awesome visual/audio capabilities 
  • Outstanding keyboard layout and touchpad

The Acer Predator Triton 14 hasn't changed its looks over the past two years, and that's totally fine. Acer's 14-inch gaming laptop packs more powerful components while still managing to be lightweight and thin, which is an accomplishment on its own. Only coming in one color, Sparkly Silver, the Predator Triton 14 feels solid enough to hold in one hand and doesn't feel like it would break if dropped. Regardless of the power packed in, there are three sets of vents on the sides and rear, which can turn into leaf blowers when performance is pushed to the max.

Port selection is solid, with the right side housing an HDMI port, a USB-A port, and a headphone jack. The other side has a charging port for the nice-sized power adapter, a USB-A port, and a USB-C port that can also be used for charging. At the front of the gaming laptop is a microSD card slot, which will definitely be helpful for creators looking to offload content for use later.

Once opened, the 14-inch display provides fantastic image quality and performance. For one, the display is VESA certified for DisplayHDR 600, which delivers great image quality with vivid colors, crisp contrast, high brightness, and deep blacks.

This means that outside of gaming or watching videos, color correcting in Photoshop and Premiere is easier. Though only a handful of games will be able to match the 250Hz refresh rate with these performance specs, gameplay looks noticeably smooth. When the cooling fans aren't running loudly, the DTS:X speakers sound great as well. Having the codec also means that users can get true virtual surround sound or spatial audio when using headphones.

Keyboard input strikes a nice balance between being tactile and punchy. Typing out long-form editorial content feels comfortable and precise, and playing games like Call of Duty: Modern Warfare III and Cyberpunk 2077 feels just as good as writing a complex email.

Then there's the per-key RGB lighting that adds a bit of personal flair. On top of that are several function keys, including some for media controls and access to the Predator Sense app, and there's even a button to switch between performance modes. The touchpad feels great and smooth as well, with a fingerprint scanner in its top-left corner. However, even casual gamers will know to get a gaming mouse instead.

  • Design score: 4.5 / 5

Acer Predator Triton 14: Performance

The Acer Predator Triton 14 at various angles

(Image credit: Future - Joel Burgess)
  • 1440p gaming at high settings are possible
  • DLSS is clutch 
  • Cooling fans get outrageously loud and lap can get hot 
Benchmarks

 

Here's how the Acer Predator Triton 14 performed in our suite of benchmark tests:

3DMark: Speed Way: 2654; Fire Strike: 24205; Time Spy: 11147
Geekbench 6: 2633 (single-core); 14626 (multi-core)
Total War: Warhammer III (1080p, Ultra): 83.6fps; (1080p, Low): 212.6fps
Cyberpunk 2077 (1080p, Ultra): 90.74fps; (1080p, Low): 89.15fps
F1 23 (1080p, Ultra): 43fps; (1080p, Low): 208fps
25GB File Copy Transfer Rate (MBps): 2214.55
Handbrake 1.6: 63 fps
CrossMark Overall: 2075; Productivity: 1980; Creativity: 2155; Responsiveness: 2132
Web Surfing (Battery Informant): 5:17:26
PCMark 10 Battery (Gaming): 1:49

At 1440p, the Acer Predator Triton 14 successfully manages to play some of the top AAA games available without much problem. As of right now, two of the most visually demanding games on PC are Cyberpunk 2077 and Alan Wake II, and the Predator Triton 14 handles both well at 1440p, with frame rates usually in the 60 fps range.

Having the 14-core Intel Core i7-13700H and Nvidia RTX 4070 really goes a long way toward helping in-game performance. The RTX 4070 also means users can enable AI upscaling through DLSS, which can push in-game performance even further. Given the 2560 x 1600, 16:10 display, one way to get higher performance is to render a game at 1080p (or its 16:10 equivalent, 1920 x 1200) and upscale from there, though native 2560 x 1600 is fine on its own. Be mindful that not many current games will even come close to hitting the 250-frames-per-second mark at native resolution; the only titles that could theoretically get there are Counter-Strike 2, Doom Eternal, and Fortnite with graphics settings in a reasonable range.
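
To see why rendering at a lower internal resolution and upscaling helps so much, compare raw pixel counts; the DLSS render-scale factors below are the commonly cited per-axis values and are included here as assumptions rather than figures from this review.

```python
def pixels(width: int, height: int) -> int:
    """Total pixels per frame at a given resolution."""
    return width * height

native = pixels(2560, 1600)   # the Triton 14's panel
lower = pixels(1920, 1200)    # the 16:10 equivalent of 1080p
print(round(native / lower, 2))  # ~1.78x the pixels per frame at native resolution

# Commonly cited DLSS per-axis render scales, treated as assumptions here
dlss_scale = {"Quality": 0.667, "Balanced": 0.58, "Performance": 0.50}
for mode, s in dlss_scale.items():
    print(mode, pixels(round(2560 * s), round(1600 * s)), "pixels rendered before upscaling")
```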

Both Cyberpunk 2077 and Alan Wake II are also fine examples of games that'll test a GPU's ray tracing capabilities. Thankfully, both work excellently in that regard, though Alan Wake II requires some tweaks to maintain a high frame rate. Of course, these games will require max power output, or Turbo mode, which will have the fans running incredibly loud. If playing on your lap, the heat can get uncomfortable.

Outside of gaming, Adobe Creative Suite performance is acceptable as well. We were able to use multiple layers on photos without much slowdown when using Photoshop. Through Premiere Pro, we could edit 10 minutes worth of 4K video that took less than 10 minutes to export. When it came to general use tasks like web browsing, I had a few dozen Google Chrome tabs opened without much issue. 

  • Performance score: 4.5 / 5

Acer Predator Triton 14: Battery life

HP laptop various angles

(Image credit: Future - Joel Burgess)
  • Battery life reached the halfway mark in about 4 ½ hours 
  • Can be charged through a charging port or USB
  • Recharge time is around two hours

Gaming laptops aren't necessarily known for their battery life prowess, but Eco mode on the Acer Predator Triton 14 does lead to impressive results. It took around 4 ½ hours for the battery to reach the halfway point, and we were able to get around seven hours and some change in total. Of course, turning off features like Bluetooth, alongside turning down the brightness and keyboard lighting, can help reduce battery load too. Playing games that push the laptop to the max will deliver around an hour's worth of gaming, so it's best to keep it plugged in if you plan on doing so.

Gamers who need to get work done during a bi-coastal trip should have plenty of time before they need to charge. Users can charge via the power port, which will take around two hours to get the battery to full. Meanwhile, users who forget their power brick can charge through the USB-C port, but won't get the same level of performance.

  • Battery life score: 4 / 5

Should I buy the Acer Predator Triton 14?

Buy it if...

You need a slim gaming laptop with respectable performance 
Weighing under 4 lbs and standing less than an inch tall when closed, the Acer Predator Triton 14 still manages to shine when it comes to 1440p performance.


Don't buy it if...

You would like a quieter machine when pushing specs to the max
The cooling system is incredibly loud when pushing high-quality visuals and performance. 


How I tested the Acer Predator Triton 14

  • Tested over a two week period 
  • Split between general tasks, creative work and gaming 

My time with the Acer Predator Triton 14 lasted a little over two weeks. During the day, I used it as my main laptop while working my office job, which is where I was able to test general performance and the speakers. During office hours, I used Google Chrome and related services like Google Docs, listened to high-fidelity music on Tidal, and ran creative software.

Through Adobe Photoshop and Premiere Pro, I was able to create graphics and short-form video clips. When away from work, I took the time to play various AAA games. These games included Cyberpunk 2077, Forza Motorsport (2023), Dead Space (2023), Call of Duty: Modern Warfare, and Alan Wake II. 

Since 2020, I've been covering various gaming laptops for TechRadar. As a PC gaming enthusiast, I can definitely help anyone who is looking for a gaming laptop that's worth their money and meets their performance needs.

  • First reviewed April 2024
Asus ROG G22CH review: the Intel NUC Extreme lives on, at least in spirit
7:00 pm | March 1, 2024


Asus ROG G22CH: One-minute review

As chipsets get smaller and more efficient, the past handful of years have seen a rise in smaller-form gaming PCs like the Asus ROG G22CH. 

Not only are they non-intrusive compared to the biggest and best gaming PCs, but they have a nice amount of portability as well. Most importantly, clever cooling and component management allow them to pack a nice performance punch at the cost of real upgradability. 

In the case of the ROG G22CH, the rig looks like a horizontally wider version of the Xbox Series X. There’s a sleek all-black look that’s accented by some nice angles with customizable RGB lighting. With that said, the performance specs are also miles ahead of a similar console. 

The ROG G22CH has an Intel i9-13900K CPU, Nvidia GeForce RTX 4070 GPU, 32GB DDR5 RAM, and a 1TB SSD. That’s more than enough for some solid native 1440p gaming with the ability for 4K through DLSS upscaling. 

Starting at $1,399.99 in the US (about £1,120 / AU$1,960), it can get expensive pretty quickly as you increase the specs, with UK and Australian buyers more restricted in the kinds of configurations they can buy.

This is a bit of an issue, since upgradability down the line is likely going to be a problem due to the extremely tight chassis. When packing so much performance into such a small rig, efficient cooling is a must. There are two different cooling options, air and liquid, but both are loud during intensive tasks.

That said, potential buyers looking for a small-form gaming desktop should definitely keep the Asus ROG G22CH in mind, since it's one of the few available on the market now that Intel has retired its NUC Extreme line. Beyond its pretty aggressive styling, its performance prowess is where it matters the most, and it excels in this regard. The gaming desktop can run all of the most popular esports games, such as Fortnite and Valorant, at high frame rates while handling more visually demanding games like Alan Wake 2 without much fuss. If cost and upgradability are a problem, it might be best to go with a gaming rig that has a bigger case.

An Asus ROG G22CH on a desk

(Image credit: Future / John Loeffler)

Asus ROG G22CH: Price & availability

  •  How much does it cost? Configurations range between $1,399 and $2,499  
  •  When is it available? It is available now in the US, UK, and Australia  
  •  Where can you get it? From various stores depending on territory  

The Asus ROG G22CH is relatively expensive regardless of what configuration one has. For gamers looking for some solid 1080p gaming, the $1,399 option comes with an Intel Core i5-13400F, Nvidia GeForce RTX 3060, 16GB DDR5 RAM, and a 512GB SSD. 

That’s definitely a solid choice for anyone looking to play some of the bigger esports games like Fortnite, Rocket League, Call of Duty, or Valorant. Our review configuration came to about $2,299, and for $200 more users can step up to the Intel Core i9-14900KF, though this isn't necessarily a huge jump in CPU power. 

When it comes to the UK, there’s only one option available, which includes an Intel Core i7, Nvidia GeForce RTX 4070, 16GB RAM, and a 2TB SSD for £2,099. Australian buyers have two configurations they can buy. Both have an Nvidia GeForce RTX 4070, 32GB DDR5, and a 1TB SSD, but for AU$4,699 you can get an Intel Core i7-14700F configuration, or for AU$4,999 you can get an Intel Core i9-14900KF system. 

For good measure, a mouse and keyboard come packed in with all configurations. Serious gamers will probably want to check out the best gaming keyboard and best gaming mouse options though, as the stock peripherals aren't spectacular.

Small-form PC gaming rigs are usually expensive and naturally face issues when it comes to upgrading. However, the Acer Predator Orion 3000 is the most approachable price-wise, and its lowest configuration is a bit more powerful than the ROG G22CH. Meanwhile, if performance is the main concern regardless of money, the Origin Chronos V3 offers a little more upgrade wiggle room, while the Corsair One i300 has the best form factor.

An Asus ROG G22CH on a desk

(Image credit: Future / John Loeffler)

Asus ROG G22CH: Specs

 The Asus ROG G22CH currently comes in a variety of customizable configurations.  

An Asus ROG G22CH on a desk

(Image credit: Future / John Loeffler)

Asus ROG G22CH: Design

  • The case measures 4.53 x 12.72 x 11.30 inches and weighs 18.52 lbs 
  • An all-black design is accented with two strips of RGB lighting    
  • There's not much room for GPU upgrading

Form and functionality are the most important attributes of a small-sized gaming PC, and the Asus ROG G22CH balances both well. When it comes to design, there’s much to appreciate in terms of the all-black chassis. Having two vertical strips of customizable RGB lighting on the front panel does lend the rig some personality. 

There’s one small strip on the upper left side and a longer one on the lower right side. Between them is an angular cut alongside the ROG logo. When it comes to ventilation, there’s some on every side of the ROG G22CH. Just looking at the front panel, the overall design is really sleek and could give the Xbox Series X a run for its money.

Image 1 of 3

An Asus ROG G22CH on a desk

(Image credit: Future / John Loeffler)
Image 2 of 3

An Asus ROG G22CH on a desk

(Image credit: Future / John Loeffler)
Image 3 of 3

An Asus ROG G22CH on a desk

(Image credit: Future / John Loeffler)

There are plenty of ports available as well. The top I/O panel features two USB-A ports alongside a single USB-C port, a 3.5mm combo jack, and a power button. Unfortunately, that USB-C port is the only one available on this PC. On the back are four USB-A ports split between 2.0 and 3.2, three audio jacks, and a gigabit Ethernet port. That should be more than enough for most PC gamers and creatives, though.

Though upgradability will be tough, the ROG G22CH does make the process somewhat easier. Thanks to a tool-free design, a sliding latch lets both side panels and the upper portion be lifted off to access the inside. Having that ability without reaching for a screwdriver does help a lot, but outside of possibly the RAM and SSD, getting a large GPU in or attempting to swap out the motherboard in the future is going to be difficult, if not impossible. 

An Asus ROG G22CH on a desk

(Image credit: Future / John Loeffler)

Asus ROG G22CH: Performance

  • 1440p performance is spectacular  
  • DLSS can do 4K when needed  
  • Fans will run at full volume   
Benchmarks

Here's how the Asus ROG G22CH performed in our series of benchmarks:

3DMark Speed Way: 4,404; Fire Strike: 34,340; Time Spy: 17,500
GeekBench 6 (single-core): 2,866; (multi-core): 17,650
Total War: Warhammer III (1080p, Ultra): 137 fps; (1080p, Low): 343 fps
Cyberpunk 2077 (1080p, Ultra): 123 fps; (1080p, Low): 162 fps
Dirt 5 (1080p, Ultra): 173 fps; (1080p, Low): 283 fps

Outside of gaming, the Asus ROG G22CH is a phenomenal workhorse for various general and creative tasks. Using Google Chrome while listening to high-fidelity music through Tidal made for a fine working experience. 

Using the Adobe Suite worked totally fine on the G22CH as well. Photoshop was able to handle multi-layer projects with incredibly high-resolution photos without issue, and Premiere Pro made editing 4K videos easy, with speedy export times. 

That said, this is a gaming desktop, and it's in gaming performance where the G22CH really shines.

When it comes to handling the top tier of high-fidelity visuals in gaming, the G22CH can handle Cyberpunk 2077, Red Dead Redemption II, Alan Wake II, and the like at native 1440p at high frame rates without breaking a sweat. Our Cyberpunk 2077 tests produced an average 123 fps on Ultra settings at 1080p. Bumping to 1440p with path tracing placed frame rates in the high 90s. Having everything turned to the max in settings allowed Alan Wake II to run in the high 60s. 

If you want to go up to 4K, you're definitely going to have to rely on Nvidia’s DLSS technology, but it's possible with the right settings tweaks.

When it comes to high esports-level performance, users can enjoy a serious edge over the competition right now. Games like Call of Duty: Warzone, Valorant, Counter-Strike 2, and Fortnite were able to pump out frame rates well over 100 fps on high settings, which is more than enough for the best gaming monitors. At more competitive settings, it’s easy enough to reach past 200 fps. 

Just understand that users will know when the G22CH is being pushed to the limit. When playing rounds of Helldivers 2 and Alan Wake II, the noise from the PC's fans reached around the low 80-decibel mark. This means that headsets are going to be necessary when gaming. 

An Asus ROG G22CH on a desk

(Image credit: Future / John Loeffler)

Should you buy the Asus ROG G22CH?

Buy the Asus ROG G22CH if...

Don't buy it if...

How I tested the Asus ROG G22CH

I tested the Asus ROG G22CH over two weeks. During the day, I handled many general computing tasks through Google Chrome and Tidal. Having multiple Google Chrome tabs open allowed me to use Asana, Google Docs, and Hootsuite. For creating graphics alongside short-form social media video content, I used Adobe Premiere and Photoshop. 

To test high frame rate possibilities, I played games including Call of Duty: Warzone, Valorant, and Fortnite. To see how hard the rig could push visual fidelity, I tried games including Cyberpunk 2077, Alan Wake 2, and Forza Motorsport (2023).

I’ve spent the past several years covering monitors alongside other PC components for TechRadar. Outside of gaming, I’ve been proficient in the Adobe Suite for over a decade as well. 

Read more about how we test

First reviewed March 2024

Nvidia GeForce RTX 4060: the best midrange graphics card for the masses
4:00 pm | June 28, 2023

Author: admin | Category: Computers Gadgets | Tags: , , , | Comments: Off

Nvidia GeForce RTX 4060: Two-minute review

Nvidia really wants you to know that the Nvidia GeForce RTX 4060 is a card for those who are still running a GTX 1060 or RTX 2060, and it's really Team Green's best marketing strategy for this card.

To be clear, the Nvidia RTX 4060 is respectably better than the Nvidia RTX 3060 it replaces, and comes in at a lower launch MSRP of $299 (about £240/AU$450) than its predecessor. Its 1080p gaming performance is the best you're going to find under $300, and its 1440p performance is pretty solid, especially when you turn on DLSS. If you're playing a game with DLSS 3 and frame generation, even better.

Unfortunately, the card's 4K performance suffers due to the limited video memory it's working with, which is a third smaller than the initial RTX 3060 run's 12GB VRAM pool (though at least it doesn't go below the 8GB of the later RTX 3060 variants).

You also get more sophisticated ray tracing and tensor cores than those found in the Ampere generation, and this maturity shows up in the card's much-improved ray tracing and DLSS performance.

There are also some added bonuses for streamers as well like AV1 support, but this is going to be a lower-midrange gamer's card, not a streamer's, and for what you're getting for the price, it's a great card.

The real problem for this card though is the Nvidia GeForce RTX 3060 Ti. For more than a year after the RTX 3060 Ti hit the scene, it topped our best graphics card list for its spectacular balance of price and performance, punching well above its weight and even outshining the Nvidia GeForce RTX 3070.

Ever since the crypto bubble popped and Nvidia Lovelace cards started hitting the shelves, the last-gen Nvidia Ampere cards have absolutely plummeted in price, including the RTX 3060 Ti. You can now get the RTX 3060 Ti for well below MSRP, and even though the RTX 4060 outperforms the RTX 3060 by roughly 20%, it still falls short of the RTX 3060 Ti, so if you are able to get an RTX 3060 Ti for near or at the same price as the RTX 4060, it might be a better bet. I haven't seen the RTX 3060 Ti drop that low yet, but it's definitely possible.

The RTX 3060 Ti remains competitive here largely because many of the best features of the RTX 4060 depend on other people implementing Nvidia's DLSS 3 technology in their products. DLSS 3 with Frame Generation is incredible for most games (though there are some latency issues to work out), but the number of games that implement it is rather small at the moment.

Many newer games will have it, but as we've seen with the recent controversy over Starfield partnering with AMD, one of the biggest PC games of the year might not have DLSS implemented at all at launch. It's a hard thing to hold against the RTX 4060 as a solid negative, since when the technology is implemented, it works incredibly well. But it's also unavoidable that Nvidia's biggest selling point of this generation of graphics cards is explicitly tied to the cooperation of third-party game developers.

With something like the Nvidia GeForce RTX 4070, DLSS 3 is a nice feature to have, but it doesn't make or break the card. With the RTX 4060, its appeal is deeply tied to whether or not you have this tech available in your games, and it seriously undercuts the card when it isn't. Its non-DLSS performance is only better than the RTX 3060 by a standard gen-on-gen uplift at 1080p, and without DLSS, 1440p gaming is possible, but will be severely hampered by the limited VRAM. 4K gaming, meanwhile, would be out of the question entirely.

All that said, the Nvidia RTX 4060 is still going to be one hell of an upgrade for anyone coming from a GTX 1060 or RTX 2060, which is really where this card is trying to find its market. RTX 3060 gamers will honestly be better off just saving up some more money for the RTX 4070 than worrying about the RTX 4060 (and you can probably skip the Nvidia RTX 4060 Ti, honestly).

If you're looking for the best cheap graphics card from Nvidia, the Nvidia GeForce RTX 4060 is probably as good as it's going to get for a while, since there have been few - if any - rumblings about an Nvidia RTX 4050 or Nvidia RTX 4050 Ti coming to the budget segment any time soon. Whether it's worth upgrading from an RTX 3060 is debatable, but if money is tight and you're looking for an upgrade from a Pascal- or Turing-era 60-series card, you'll absolutely love this card.

Nvidia GeForce RTX 4060: Price & availability

An Nvidia GeForce RTX 4060 on a table with its retail packaging

(Image credit: Future / John Loeffler)
  • How much is it? US MSRP $299 (about £240/AU$450)
  • When is it out? June 29, 2023
  • Where can you get it? Available globally

The Nvidia GeForce RTX 4060 is available on June 29, 2023, for an MSRP of $299 (about £240/AU$450), which is about 10% less than the RTX 3060 was when it launched in 2021.

There is a caveat to this pricing in that there is no Nvidia Founders Edition of the RTX 4060, so it is only available from third-party partners like Asus, PNY, and others. These manufacturers can charge whatever they want for the card, so you can expect to see many of the cards priced higher than Nvidia's MSRP, but there will be those like the Asus RTX 4060 Dual that I tested for this review that will sell at MSRP.

While this card is cheaper than most, it's not the cheapest of the current generation. That would be the AMD Radeon RX 7600, which has an MSRP of $269.99 (about £215/AU$405) and still offers the best performance-to-price value of any of the current-gen cards. Still, given the actual level of performance you get from the RTX 4060, it makes a compelling value case of its own, even if its rival is cheaper in the end.

Nvidia GeForce RTX 4060: Features and chipset

An Nvidia GeForce RTX 4060 on a table with its retail packaging

(Image credit: Future / John Loeffler)
  • 3rd-gen ray tracing and 4th-gen tensor cores
  • Only 8GB VRAM
  • DLSS 3 with Frame Generation under $300

In terms of specs, the Nvidia GeForce RTX 4060 is a marked improvement over the Nvidia RTX 3060 thanks to a smaller TSMC 5nm process node compared to the RTX 3060's 8nm Samsung node. It also features much faster clock speeds, with a roughly 39% faster base and boost clock speed.

You also have a faster memory speed, but a smaller VRAM pool and smaller memory bus, so you end up with a roughly 25% smaller memory bandwidth, which really puts a ceiling on higher resolution performance.
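
To put a number on that ceiling: the review doesn't quote the raw figures, so the bus widths and data rates below are Nvidia's commonly published specs for each card rather than anything measured here, but they show where a "roughly 25% smaller" bandwidth figure comes from.

```python
# Illustrative bandwidth comparison using Nvidia's published memory specs
# (128-bit / 17 Gbps for the RTX 4060, 192-bit / 15 Gbps for the 12GB RTX 3060).
# These numbers are not quoted in the review itself.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Raw memory bandwidth in GB/s: bytes moved per transfer times transfer rate."""
    return bus_width_bits / 8 * data_rate_gbps

rtx_4060 = bandwidth_gbs(128, 17)   # 272 GB/s
rtx_3060 = bandwidth_gbs(192, 15)   # 360 GB/s

print(f"RTX 4060: {rtx_4060:.0f} GB/s, RTX 3060: {rtx_3060:.0f} GB/s")
print(f"Reduction: {1 - rtx_4060 / rtx_3060:.0%}")   # roughly 24%
```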

Still, with faster clock speeds, more mature ray tracing and tensor cores, and a lower TGP than its predecessor, this is one of the most powerful and energy-efficient graphics cards in its class. 

Nvidia GeForce RTX 4060: design

An Nvidia GeForce RTX 4060 on a table with its retail packaging

(Image credit: Future / John Loeffler)

There is no reference design for the Nvidia RTX 4060, since there is no Founders Edition, so the design of the card is going to depend entirely on which version you get from which manufacturer. 

In my case, I received the Asus GeForce RTX 4060 Dual OC edition, which features a dual fan design and a much smaller footprint befitting a midrange card. Thankfully, the card uses an 8-pin power connector, so there's no need to fuss with any 12VHPWR adapter cables.

Image 1 of 2

An Nvidia GeForce RTX 4060 on a table with its retail packaging

(Image credit: Future / John Loeffler)
Image 2 of 2

An Nvidia GeForce RTX 4060 on a table with its retail packaging

(Image credit: Future / John Loeffler)

It comes with the now-standard three DisplayPort 1.4 and one HDMI 2.1 video outputs on this generation of Nvidia cards, so those with one of the best USB-C monitors will once again be out of luck here.

The card is a dual-slot width, so you shouldn't have any issues getting it into a case, and it's light enough that you really should be able to get away without having to use a support bracket.

Nvidia GeForce RTX 4060: Performance

An Nvidia RTX 4060 slotted into a test bench

(Image credit: Future / John Loeffler)
  • Best-in-class 1080p gaming performance
  • Huge improvement if coming from RTX 2060 or older
Test system specs

This is the system we used to test the Nvidia GeForce RTX 4060:

CPU: Intel Core i9-13900K
CPU Cooler: Cougar Poseidon GT 360 AIO
RAM: 64GB Corsair Dominator Platinum RGB DDR5-6600MHz
Motherboard: MSI MAG Z790 Tomahawk Wifi
SSD: Samsung 990 Pro 2TB NVMe M.2 SSD
Power Supply: Corsair AX1000
Case: Praxis Wetbench

When it comes to 1080p, the Nvidia RTX 4060 offers the best gaming performance under $300.

The AMD RX 7600 gives it a run for its money in pure rasterization performance, and even manages to beat out the RTX 4060 on occasion, but once you start cranking up ray tracing the RTX 4060 absolutely pulls away from its rivals.

This is especially true when you flip the switch on DLSS, which makes 1440p gaming a very feasible option with this card. While this definitely isn't going to be one of the best 1440p graphics cards, on certain titles with certain settings, you'll be surprised what you can get away with.

Synthetic Benchmarks

When it comes to synthetic benchmarks, you get the typical blow-for-blow between Nvidia and AMD cards that we've seen in the past, with AMD outperforming on pure rasterization tests like 3DMark Time Spy and Firestrike, while Nvidia pulls ahead on ray tracing workloads like Port Royal and Speedway.

The RTX 4060 and RX 7600 are close enough in terms of raw performance that it might as well be a wash on average, but it's worth noting that the RTX 4060 is about 20% better on average than the RTX 3060. I point that out mostly to contrast it with the RTX 4060 Ti, which was only about 10-12% better than the RTX 3060 Ti on average. 

A 20% improvement gen-on-gen, on the other hand, is much more respectable and justifies considering the RTX 4060 as an upgrade even with an RTX 3060 in your rig. You might not actually make that jump for an extra 20% performance with this class of GPU, but it's at least worth considering, unlike with the RTX 4060 Ti.

Gaming Benchmarks

Where the RTX 4060 really takes off though is in gaming performance. Compared to the RX 7600, it's more or less even when just playing at 1080p with max settings without ray tracing or upscaling. Notably, the RTX 4060 actually underperforms the RX 7600 by about 9% in Cyberpunk 2077 when you're not using ray tracing or upscaling.

Crank ray tracing up to Psycho in Cyberpunk 2077 though, and the value of the RTX 4060 really starts to show through. The RX 7600 absolutely tanks when RT is maxed, but that's not universal across the board. In other games, the RX 7600 is competitive, but Cyberpunk 2077 really is AMD's Achilles' Heel. Meanwhile, the RTX 3060 holds up fairly well on some titles, while the RTX 4060 pulls ahead by a substantial amount on others.

With upscaling turned on, the RTX 4060 manages to substantially outperform both the RTX 3060 and the RX 7600. If you leave DLSS at its base settings and don't touch frame generation, the RTX 4060 pulls off a clean win in Cyberpunk 2077; and even where it posts a slightly lower average framerate than the RTX 3060, its higher minimum framerate makes it a much more stable experience across the board.

Once you turn on frame generation though, things swing dramatically in the RTX 4060's favor. You can even increase the resolution in Cyberpunk 2077 to 1440p with Frame Generation on and you'll get more fps on average and at a minimum than you would with the RTX 3060 at 1080p, while the RX 7600 simply can't keep up at this level.

Unfortunately, a lot of this is dependent on developers implementing Nvidia's new technology. Without DLSS 3 with Frame Generation, you still get respectably better performance than the RTX 3060, but nothing that absolutely blows you away. 

Meanwhile, the RX 7600 offers a compelling alternative if you're looking to save some money and don't care about 1440p or ray tracing.

Still, if you can toggle a setting and give yourself an extra 50 fps on a demanding game, there really is no comparison, and on this alone, the RTX 4060 wins out by default.

Should you buy the Nvidia GeForce RTX 4060?

A man's hand holding up the Nvidia RTX 4060

(Image credit: Future / John Loeffler)

Buy it if...

You want the best 1080p gaming under $300
This card is a 1080p champ in its weight class, even if it walks right up to the line of the middle midrange.

You want fantastic ray tracing support
Nvidia pioneered real-time ray tracing in games, and it really shows here.

Don't buy it if...

You want the best value
While the RTX 4060 is very well-priced, the AMD RX 7600 offers a much better price-to-performance ratio.

You don't care about ray tracing or upscaling
Ray tracing is honestly overrated and a lot of games don't offer or even need upscaling, so if you don't care about these features, Nvidia's RTX 4060 might not offer enough for you to spend the extra money.

Nvidia GeForce RTX 4060: Also consider

How I tested the Nvidia GeForce RTX 4060

  • I spent about a week testing the card 
  • I looked at the card's gaming performance and raw synthetic performance
  • I used our standard battery of graphics card tests and several current PC games to push the GPU to its limits.

I spent extensive time testing the RTX 4060 over a number of days, using synthetic tests like 3DMark and Passmark, while also running several games on the card at different settings and resolutions.

I also tested its closest rival card as well as the card it is replacing in Nvidia's product stack and compared the performance scores across the cards to assess the card's overall performance.

I did this using the latest Nvidia and AMD drivers on a test bench using all of the same hardware for each card tested so that I could isolate the graphics card's contribution to the overall performance I found in-game or in synthetic benchmarks. 

We pride ourselves on our independence and our rigorous review-testing process, offering up long-term attention to the products we review and making sure our reviews are updated and maintained - regardless of when a device was released, if you can still buy it, it's on our radar.

Read more about how we test

First reviewed June 2023

Nvidia GeForce RTX 4060 Ti: A DLSS and ray tracing upgrade, but not much else
4:00 pm | May 23, 2023

Author: admin | Category: Computers Gadgets | Tags: , , , | Comments: Off

Nvidia GeForce RTX 4060 Ti: two minute review

The Nvidia GeForce RTX 4060 Ti is without a doubt one of the most anticipated graphics card launches of this generation, and now that it's here, it should be an easy win for Nvidia over archrival AMD. I wish that were the case.

That's not to say that the RTX 4060 Ti doesn't hold up well against AMD's midrange offerings at the moment, it absolutely does, and there's no question that the features this card brings to the table are impressive, especially DLSS 3, which is the first time a properly midrange GPU (under $500/£500/AU$750) is seeing this feature.

It goes without saying that Nvidia is leaning into DLSS 3 as its biggest selling point, and as I'll get into later, it definitely delivers significantly better performance than the RTX 4060 Ti should be capable of given its various specs — even factoring in the expanded cache which widens up the memory bandwidth of the card despite still having just 8GB GDDR6 VRAM to work with.

The decision to go with 8GB VRAM for this card — a 16GB VRAM variant is going to be released in July 2023 for an MSRP of $499 (about £400/AU$750) — is probably the only thing that kept the price on this card under $400. With an MSRP of $399 (about £320/AU$600), the Nvidia Founders Edition RTX 4060 Ti 8GB is the same price as the Nvidia GeForce RTX 3060 Ti it is replacing, and generally, it offers a pretty good value for that money, with some caveats.

In terms of native, non-DLSS performance, there isn't a whole lot of improvement over the previous generation, which is definitely going to disappoint some, if not many. Given the kinds of performance advances we've seen with higher-end cards, we were hoping to see that extend down into the heart of the midrange, but it seems those benefits generally stop at the Nvidia GeForce RTX 4070.

Instead, you have a card that relies very heavily on DLSS to carry its performance over the line, and where it works, it is generally phenomenal, offering a real, playable 1440p gaming experience, and even brushing up against some decent 4K performance with the right settings. 

This is something AMD has really struggled to match with its FSR, and so Nvidia really has a chance to score a major blow against AMD, but as we'll see, the best AMD graphics card in the last generation's midrange, the AMD Radeon RX 6750 XT, actually outperforms the RTX 4060 Ti in non-ray tracing workloads, including gaming, so this does not bode well for Nvidia once AMD releases its current-gen midrange cards.

This is somewhat exacerbated by the fact that the RTX 4060 Ti's ability to use its new features is fairly limited, and while features like DLSS 3 with Frame Generation are available on the best PC games like Cyberpunk 2077 and Returnal, as of the launch of the RTX 4060 Ti, there are only 50-ish games that support DLSS 3. 

This list will surely grow over time, but you certainly won't get this kind of support on games that may be just recent enough to push the RTX 4060 Ti in terms of its performance, while being just old enough that you'll never see a DLSS 3 patch for it.

I can say that if you're coming from an RTX 2060 Super or older, then this card is absolutely going to blow your mind. It's effectively the RTX 3060 Ti's NG+, so if you missed what I consider to be the best graphics card of the last generation, you'll get all that and more with the RTX 4060 Ti. If you're coming from an Nvidia Ampere card though — especially anything above the RTX 3060 Ti — chances are you're going to find this is really a downgrade with some neat features to soften the blow. 

Nvidia GeForce RTX 4060 Ti: Price & availability

An Nvidia GeForce RTX 4060 Ti

(Image credit: Future / John Loeffler)
  • How much is it? MSRP listed at $399 (about £320, AU$600)
  • When is it out? It is available starting May 24, 2023
  • Where can you get it? You can buy it in the US, UK, and Australia

The Nvidia GeForce RTX 4060 Ti 8GB is available starting May 24, 2023, with an MSRP of $399 (about £320, AU$600). This is the same launch price as the Nvidia RTX 3060 Ti that this card is replacing, so we're glad to see that Nvidia didn't increase the price on this variant with this generation.

This also puts it on par with the AMD Radeon RX 6750 XT, which initially launched for $549 (about £440/AU$825), but which you can find under $400 right now, even without discounts, at major retailers. AMD hasn't released an RX 7700 XT yet, which would be this card's more natural competition, so comparing the RX 6750 XT and the RTX 4060 Ti isn't really fair, but it's all we have for now until AMD launches its RTX 4060 Ti challenger.

Nvidia GeForce RTX 4060 Ti: Features and chipset

An Nvidia GeForce RTX 4060 Ti

(Image credit: Future / John Loeffler)
  • Only uses one 8-pin...but still requires a 16-pin converter?
  • 3rd-generation ray tracing and 4th-generation tensor cores
  • 288.0 GB/s memory bandwidth, but 32MB L2 cache boosts effective bandwidth to 554.0 GB/s (according to Nvidia)

The Nvidia RTX 4060 Ti is the first properly midrange Nvidia Lovelace graphics card, and so it is built on TSMC's 5nm process, with about 22.9 billion transistors across 34 streaming multiprocessors (SM), which come with 128 shader cores (CUDA), 4 fourth-generation tensor cores, and 1 third-generation ray tracing core per SM.

The clock speed is a solid 2,310MHz base clock, which is about a 64% improvement over the RTX 3060 Ti's 1,410MHz, with a boost clock of 2,535MHz, or about 52% faster than the RTX 3060 Ti's 1,665MHz.

The biggest difference between the two cards is the memory bus. The Nvidia RTX 4060 Ti uses a 128-bit bus for 8GB GDDR6 VRAM, while the RTX 3060 Ti uses a 256-bit bus for the same amount of VRAM. The RTX 4060 Ti has a faster memory clock (2,250MHz), and combined with the expanded 32MB L2 cache, it ends up with a slightly faster effective memory speed of 18 Gbps to the RTX 3060 Ti's 15 Gbps, along with a much higher effective memory bandwidth.
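
If you want to see how those figures hang together, the arithmetic below uses only the numbers quoted in this section (34 SMs with 128 shader cores each, the 2,310MHz versus 1,410MHz base clocks, and a 128-bit bus at 18 Gbps). Treat it as a quick sanity check rather than anything out of Nvidia's documentation; note that the 554 GB/s "effective" figure can't be derived this way, since it folds in the hit rate of the 32MB L2 cache.

```python
# Back-of-the-envelope check of the RTX 4060 Ti figures quoted above.

sm_count = 34                  # streaming multiprocessors
cuda_per_sm = 128              # shader (CUDA) cores per SM
print("CUDA cores:", sm_count * cuda_per_sm)                       # 4352

base_new, base_old = 2310, 1410    # MHz, RTX 4060 Ti vs RTX 3060 Ti base clocks
print(f"Base clock uplift: {base_new / base_old - 1:.0%}")         # ~64%

bus_bits, rate_gbps = 128, 18      # bus width and effective memory speed
print("Raw memory bandwidth:", bus_bits // 8 * rate_gbps, "GB/s")  # 288 GB/s
```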

Still, this really does smack of over-engineering. The move to a 128-bit bus doesn't seem necessary, and given what we've seen of other Lovelace cards, like the Nvidia GeForce RTX 4070, I definitely think Nvidia could have stuck with a higher bus width and it wouldn't be catching nearly the grief it is getting over this. 

What's more, even though the performance of the RTX 4060 Ti is better than the RTX 3060 Ti, I really think that had this card had the same bus width as the RTX 3060 Ti, it would absolutely demolish anything that approached it in the midrange. As it stands, the RTX 4060 Ti is great, but fails to score the knockout it really needed.

It is worth mentioning though that this card also uses significantly less power than the RTX 3060 Ti. That card had a TGP of 200W, while the RTX 4060 Ti 8GB comes in at 160W, a 20% reduction in power draw. This is great for keeping your build under 600W, and it's a move in the right direction that deserves to be praised.

Nvidia GeForce RTX 4060 Ti: Design

An Nvidia GeForce RTX 4060 Ti

(Image credit: Future / John Loeffler)

The Nvidia RTX 4060 Ti Founders Edition keeps the same black heatsink with chrome trim as the other Founders Edition cards this generation, and — unfortunately — it also sticks with the 12VHPWR 16-pin power connector. Fortunately, you only need to plug a single 8-pin into it, so it is at least somewhat easier to manage in a case.

Also easier to manage is the size of the card. Using the same dual-fan design as previous Founders Edition cards, the RTX 4060 Ti shrinks things down a bit. While it's still a dual-slot card, it comes in at just under 9.5 inches long and 4.5 inches tall, making it the kind of card that will easily fit in your case.

There's not much flash here, but that's a given with the Founders Edition, so if you're looking for visual bells-and-whistles like RGB or super-cooling features like a triple fan design, you're going to want to look at any of the third-party cards that release alongside this one for those.

Nvidia GeForce RTX 4060 Ti: Performance

An Nvidia GeForce RTX 4060 Ti

(Image credit: Future / John Loeffler)
  • DLSS 3 is the real draw here
  • Improved ray tracing performance
  • Baseline performance not much improved over the RTX 3060 Ti

When it comes to performance, the Nvidia GeForce RTX 4060 Ti really leans on DLSS 3 for most of its major performance gains, and while this can be substantial, some are going to feel somewhat disappointed.

Test system specs

This is the system we used to test the Nvidia GeForce RTX 4060 Ti:

CPU: Intel Core i9-13900K
CPU Cooler: Cougar Poseidon GT 360 AIO
RAM: 64GB Corsair Dominator Platinum RGB DDR5-6600MHz
Motherboard: MSI MAG Z790 Tomahawk Wifi
SSD: Samsung 990 Pro 2TB NVMe M.2 SSD
Power Supply: Corsair AX1000
Case: Praxis Wetbench

This is largely because even with the introduction of some pretty advanced tech, there aren't a lot of games out right now that can really leverage the best features of this card. While three of the games I use as benchmarks — F1 2022, Cyberpunk 2077, and Returnal — all feature frame generation, these are also three of the latest games out there from major studios that have the time and staffing to implement DLSS 3 with Frame Generation in their games. 

There is a DLSS 3 plug-in coming for Unreal Engine, which should definitely expand the number of games that feature the tech, but it's still going to be a while before that starts to trickle down to the gamers who will actually be using this card.

I'll get more into DLSS 3 and Frame Generation in a bit, but a quick glance over the underlying architecture for the RTX 4060 Ti tells something of a story here, as shown by synthetic benchmarks using 3DMark and Passmark 3D.

Synthetic Benchmarks

As you can see, the RTX 4060 Ti beats out the RTX 3060 Ti, but only just barely, getting about 11% better performance than the card it's replacing. This is okay, I guess, but hardly the generational leap that previous Lovelace cards have been making. 

For example, the RTX 4070 offers a roughly 21% jump over the RTX 3070 on these same synthetic benchmarks. In fact, this puts the RTX 4060 Ti just ahead of the RX 6750 XT, and ultimately just behind the RTX 3070 in terms of raw performance.

As a gaming card, the performance outlook is better, but not by a whole lot overall.

Gaming Benchmarks

On games with heavy effects-based visuals like Metro: Exodus and Cyberpunk 2077 where the advanced architecture of the RTX 4060 Ti can be leveraged, it does edge out the competition, sometimes. The RX 6750 XT still manages a slightly better fps on Returnal at 1080p, on average, when not using ray tracing or upscaling tech, for example. 

The RTX 4060 Ti also gets crushed in CS:GO at 1080p, relatively speaking, which I chalk up entirely to pushing textures through the smaller memory bus of the RTX 4060 Ti. The 192-bit bus on the RX 6750 XT's 12GB GDDR6 VRAM and the 256-bit bus on the RTX 3060 Ti's 8GB GDDR6 really show up in cases like this.

Things do start to turn in the RTX 4060 Ti's favor once you start fiddling with ray tracing. The third-generation ray tracing cores on the RTX 4060 Ti are definitely much more capable than the RTX 3060 Ti's and especially more than the RX 6750 XT's, which are second-generation and first-generation cores, respectively.

The RTX 4060 Ti is the first midrange card I've tested that is going to give you playable native ray-traced gaming at 1080p consistently on max settings, though it will struggle to get to 60 fps on more demanding titles like Cyberpunk 2077.

But let's be honest: nobody is playing any of these games with native resolution ray tracing; you're going to be using an upscaler (and if you aren't, then you really need to start). 

Here, the performance of Nvidia's DLSS really shines over AMD's FSR, even without frame generation. In both Cyberpunk 2077 and Returnal, the RTX 4060 Ti can get you over 120 fps on average when using the DLSS Ultra Performance preset, and if you want things to look their best, you can easily get well north of 60 fps on average with every setting maxed out, even ray tracing.

Now, one of the things that the wider memory bus on the RTX 3060 Ti gave that card was faster throughput when gaming at 1440p. Not every game is going to run great at 1440p, but for a lot of them, you're going to be able to get a very playable frame rate. 

The RTX 4060 Ti improves over the RTX 3060 Ti here, but not nearly as much as it should, and on games like F1 2022 and CS:GO where that memory bandwidth difference is going to show, well, it shows up here at 1440p, too.

Of course, once you turn on ray tracing, most games are going to turn into a slide show, but unsurprisingly, the RTX 4060 Ti manages to score a win here on every ray-traced game I tested.

That said, you are really pushing it here on these settings, and you're better off using upscaling if you're going to go for 1440p, especially with settings turned up.

The biggest win for the RTX 4060 Ti here is with Cyberpunk 2077, where it manages 67% better performance at max quality settings than the RX 6750 XT, but maddeningly, it's only about 13% better than the RTX 3060 Ti on the quality preset. On ultra performance, the RTX 4060 Ti is about 52% better than the RX 6750 XT, but again, only 13% better than the RTX 3060 Ti.

When it comes to Returnal, the RX 6750 XT is essentially tied with the RTX 4060 Ti on the quality preset for FSR 2.1 and DLSS, respectively. Bump this up to ultra performance, and the RTX 4060 Ti does better, beating out the RX 6750 XT by about 22% and the RTX 3060 Ti by about 17%.

I imagine the RTX 4060 Ti will perform more or less the same across most games that still rely on DLSS 2.0, which number more than 200. For those games that really leverage DLSS 3 with Frame Generation though, it really is another story entirely.

With Frame Generation, you can get about a 40-60% performance improvement on games that support it. This isn't nothing, since this can even get you playing Cyberpunk 2077 at a playable framerate at 4K on ultra performance. The RTX 3060 Ti and RX 6750 XT really don't have any answer to this, and so they are going to lag behind considerably on any games that have DLSS 3 with Frame Generation.

Does Frame Generation increase latency on some titles, along with other issues? Sure. Will it matter to gamers who get to play Cyberpunk 2077, Returnal, and other titles as if they were running on an RTX 3080 Ti? Probably not.

Will any of this matter to anyone who doesn't play those games? Obviously not. And that is ultimately the issue with this card. For what it does well, it has no peer at this price, but if you already have an RTX 3060 Ti, then there is really very little reason to upgrade to this card. Hell, if you have an RX 6750 XT, you might feel like you're better off just waiting to see what AMD has in store for the RX 7700 XT, and I would not blame you in the slightest. 

This isn't a whiff by Team Green by any means, but there's no getting around the fact that the performance of the Nvidia GeForce RTX 4060 Ti absolutely leaves a massive opening for AMD to exploit in the coming months with the RX 7700 XT, or even the RX 7650 XT.

Should you buy the Nvidia GeForce RTX 4060 Ti?

An Nvidia GeForce RTX 4060 Ti

(Image credit: Future / John Loeffler)

Buy it if...

Don’t buy it if…

Also consider

How I tested the Nvidia GeForce RTX 4060 Ti

I spent several days with the RTX 4060 Ti running benchmarks, playing games, and generally measuring its performance against competing cards.

I paid special attention to its DLSS 3 Frame Generation technology, since this is one of the card's biggest selling points, and played several games at length with the tech turned on.

Having covered and tested many graphics cards in my career, I know how a graphics card at this level should perform. 

Read more about how we test

Nvidia GeForce RTX 4070 review: the GPU you’ve been waiting for is finally here
4:00 pm | April 12, 2023

Author: admin | Category: Computers Gadgets | Tags: , , , | Comments: Off

Nvidia GeForce RTX 4070: Two-minute review

The Nvidia GeForce RTX 4070 is here at long last, and for gamers who've been starved for an upgrade, go ahead and pick this one up. It can do just about everything.

It's hard to follow up the RTX 3070, one of the best graphics cards of all time, and in our Nvidia GeForce RTX 3070 review, we praised that card for being an outstanding performer at 1080p and 1440p — which is where the overwhelming majority of PC gamers play — while also being a much more affordable option than the other two launch cards in Nvidia's Ampere lineup. We especially noted how the RTX 3070 offered comparable performance to the RTX 2080 Ti for half the price.

Everything we said about the RTX 3070 applies just as easily to the RTX 4070, only now it doesn't just dabble in 4K; it can competently game at every resolution, making it a graphics card that everybody can fall in love with without spending a fortune.

A lot has changed since the RTX 3070 launched towards the end of 2020, and unfortunately, not everything changed for the better. Things are more expensive pretty much everywhere you look, and the Nvidia RTX 4070 isn't immune. At $599 (about £510 / AU$870), the RTX 4070 is fully 20% more expensive than the RTX 3070 was at launch.

The Nvidia GeForce RTX 4070 graphics card standing on top of its retail packaging

(Image credit: Future / John Loeffler)

I'm not happy about this at all, and you shouldn't be either, but all you have to do is look at the scores the RTX 4070 puts up on the board and you'll be as hard pressed as I am to dock it any points for this. It consistently puts out RTX 3080-level performance more or less across the board and even manages to bloody the nose of the Nvidia GeForce RTX 3080 Ti, and while the RTX 3080 beats out the RTX 4070 at native 4K, turn on DLSS and the RTX 3080 simply gets blown out. 

On the other side of the aisle, the AMD Radeon RX 7900 XT is Team Red's nearest real competition, and it struggles to justify itself in the presence of the RTX 4070. While the RX 7900 XT solidly outperforms the 4070, it's also 50% more expensive, and the benefits of the RX 7900 XT get quickly drowned out by the power of DLSS, especially in titles with DLSS 3.

Moreover, the RTX 4070 makes for a pretty competent creator GPU, offering indie developers and artists who don't have the funding to get themselves an Nvidia GeForce RTX 4090 a handy option for getting some work done within a more limited budget. It's not going to power a major movie studio or anything, but if you're dabbling in 3D modeling or video editing, this card is a great compromise between price and performance.

Finally, wrap this all into a package that feels like a downright normal graphics card from ye olden days, back before you needed to include support brackets and ballast to keep your gaming PC from tipping over, and you end up with a graphics card that can easily power some of the best gaming PCs while actually fitting into your PC case and your budget.

This graphics card has its issues, which is inevitable, but given what's on offer here, it's easy enough to look past its shortcomings and enjoy some truly outstanding performance at a reasonable enough price.

Nvidia GeForce RTX 4070 review: Price & availability

An Nvidia GeForce RTX 4070 graphics card seated inside its retail packaging

(Image credit: Future / John Loeffler)
  • How much is it? $599 (about £510 / AU$870)
  • When is it out? April 13, 2023
  • Third-party cards retail prices will match or exceed Nvidia's MSRP

The Nvidia GeForce RTX 4070 is available starting April 13, 2023, with an MSRP of $599 (about £510 / AU$870). Third-party partners will have their own versions of the RTX 4070 that will vary in price, but they will always have a matching or higher regular retail price than the Nvidia GeForce RTX 4070 Founders Edition.

Notably, the RTX 4070 is getting a 20% price increase over the card it's replacing, the RTX 3070, which had a launch price of $499 in the US (about £425 / AU$725). While we'd have loved to see the price stay the same gen-over-gen, this should come as no surprise to anyone who has been watching GPU price inflation recently.

The Nvidia GeForce RTX 4080, for example, has a ludicrously high MSRP of $1,199 (a roughly 72% jump over the RTX 3080), while the Nvidia GeForce RTX 4090 also increased its price over the Nvidia GeForce RTX 3090 to $1,599 from the 3090's $1,499.

Meanwhile, we haven't seen AMD's direct RTX 4070 competitor yet, the AMD Radeon RX 7800 XT, but the AMD Radeon RX 7900 XT is the closest AMD has this generation with an $899 / £799 (around AU$1,350) MSRP, putting it 50% more expensive than the RTX 4070.

This card is also the same price as the Nvidia GeForce RTX 3070 Ti, for what it's worth, and considering that the RTX 4070 punches well above the 3070 Ti's performance, you do at least get a better sense of value out of this card than anything from the last generation.

  • Price score: 4 / 5

Nvidia GeForce RTX 4070 review: Features & chipset

The power connector for an Nvidia GeForce RTX 4070 graphics card

(Image credit: Future / John Loeffler)
  • DLSS 3 with full Frame Generation
  • Third-gen Ray Tracing Cores and fourth-gen Tensor Cores
  • Lower TGP than RTX 3070

The Nvidia RTX 4070 doesn't change too much on paper over its last-gen predecessor, featuring the same number of streaming multiprocessors, therefore the same number of CUDA cores (5,888), ray-tracing cores (46), and tensor cores (184).

It does bump up its memory to the faster GDDR6X and adds an additional 50% VRAM for a total of 12GB. With a 192-bit bus and a memory clock of 1,313MHz, the RTX 4070 has an effective memory speed of 21 Gbps, equal to that of the Nvidia RTX 4070 Ti, for a memory bandwidth of 504.2 GB/s.

It has a lower base and boost frequency than the 4070 Ti, clocking in at 1,920MHz base and 2,475MHz boost (compared to 2,310MHz base and 2,610MHz boost for the 4070 Ti), but this is a substantial bump up from the 1,500MHz base and 1,725MHz boost frequency of the RTX 3070.
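
For what it's worth, those on-paper numbers are internally consistent. Assuming the Ada layout described in the RTX 4060 Ti review above (128 CUDA cores, four tensor cores, and one RT core per SM), the 46 RT cores imply 46 SMs, and a 192-bit bus at 21 Gbps lands right on the quoted bandwidth; the extra 0.2 GB/s in Nvidia's 504.2 GB/s figure presumably comes from the exact data rate sitting a touch over 21 Gbps. A quick illustrative check:

```python
# Back-of-the-envelope check of the RTX 4070 figures quoted above.

sm_count = 46                  # implied by 46 RT cores at one RT core per SM
print("CUDA cores:", sm_count * 128)      # 5888
print("Tensor cores:", sm_count * 4)      # 184

bus_bits, rate_gbps = 192, 21             # 192-bit bus, 21 Gbps GDDR6X
print("Memory bandwidth:", bus_bits // 8 * rate_gbps, "GB/s")    # 504 GB/s
```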

This is owing to the 5nm TSMC process used to fab the AD104 GPU, compared to the Samsung 8nm process for the RTX 3070's GA104. Those faster clocks also power next-gen ray tracing and tensor cores, so even though there are the same number of cores in both the RTX 4070 and the RTX 3070, the RTX 4070's are both much faster and more sophisticated.

Also factor in Nvidia Lovelace's DLSS 3 with Frame Generation capacity, something that Nvidia Ampere and Turing cards don't have access to, and what looks like two very similar cards on paper turns out to be anything but in practice.

Finally, thanks to the 5nm process, Nvidia is able to squeeze more performance out of less power, so the TGP for the RTX 4070 is just 200W, making it a fantastic card for a lower-power, sub-600W build.
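
To give a rough sense of what "sub-600W build" means in practice, here's a sketch of a power budget around the RTX 4070. Only the 200W TGP comes from this review; the other draws are ballpark assumptions for illustration, not measurements.

```python
# Rough power-budget sketch for a sub-600W build around the RTX 4070.
# The GPU's TGP is from the review; everything else is an assumed ballpark figure.

psu_budget_w = 600
draws_w = {
    "RTX 4070 (TGP)": 200,              # stated above
    "CPU under load": 150,              # assumed midrange CPU
    "Motherboard, RAM, SSD, fans": 75,  # assumed
}

total = sum(draws_w.values())
print(f"Estimated load: {total}W, headroom under a {psu_budget_w}W budget: {psu_budget_w - total}W")
```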

  • Features & chipset: 5 / 5

Nvidia GeForce RTX 4070 review: Design

The RTX 4070 logo etched into the trim of the Nvidia GeForce RTX 4070 graphics card

(Image credit: Future / John Loeffler)
  • Same size as the RTX 3070
  • 16-pin power connector
  • Same design as RTX 4090 and RTX 4080

With the RTX 4070 Founders Edition, Nvidia finally delivers a next-gen graphics card that can actually fit in your case without requiring a construction winch to hold it in place.

OK, the previous cards weren't that bad, and even at the reduced form factor and weight, you'll still want to toss a GPU bracket into your case for good measure (there's no harm in protecting your investment, after all).

Image 1 of 6

An Nvidia GeForce RTX 4070 graphics card

(Image credit: Future / John Loeffler)
Image 2 of 6

An Nvidia GeForce RTX 4070 graphics card on its retail packaging

(Image credit: Future / John Loeffler)
Image 3 of 6

The output ports of an Nvidia GeForce RTX 4070 graphics card

(Image credit: Future / John Loeffler)
Image 4 of 6

An Nvidia GeForce RTX 4070 graphics card standing upright on a pink desk mat

(Image credit: Future / John Loeffler)
Image 5 of 6

An Nvidia GeForce RTX 4070 graphics card standing upright next to the RTX 3070

(Image credit: Future / John Loeffler)
Image 6 of 6

An Nvidia GeForce RTX 4070 graphics card sitting in front of a much larger Nvidia RTX 4080 graphics card

(Image credit: Future / John Loeffler)

But holding the RTX 4070 in my hand, this is the first card of this generation that doesn't feel like a piece of machinery. Even the more modestly-sized AMD Radeon RX 7900 XTX and RX 7900 XT feel substantial, while the RTX 4070 feels like an old school GeForce graphics card from a couple years back.

The RTX 4070 Founders Edition keeps the same fan design as the RTX 4090 and RTX 4080 that preceded it (a fan on the front and back), but it shrinks everything down to a dual-slot card about two-thirds the size of those monsters. The RTX 4070 also features the same outputs as previous RTX Lovelace cards (so no USB-C out), and a 16-pin power connector with an included adapter for two 8-pin leads to power the card.

With a TGP of 200W, Nvidia could theoretically have just gone with a single 8-pin connector, but Team Green seems absolutely committed to the 12VHPWR cable. I'll never stop complaining about this, but it is what it is. If you have an ATX 3.0 power supply, you won't need to worry about that, but the rest of us will have to deal with additional cable management.

  • Design score: 4.5 / 5

Nvidia GeForce RTX 4070 review: Performance

An Nvidia GeForce RTX 4070 graphics card slotted into a motherboard

(Image credit: Future / John Loeffler)
  • Phenomenal gaming performance
  • Can easily push 60 fps in 4K gaming with DLSS
  • RTX 3080 performance at 60% of the power

Right out the gate, let's just say that the Nvidia RTX 4070 is the best 1440p graphics card on the market right now, and it's likely to remain at the top of that list for a good long while.

Its performance prowess isn't limited to just 1440p, mind you, and when I get into the gaming performance, you'll see that its 4K gaming potential is exciting (with caveats), but for starters, we can dig into its synthetic performance in tests like 3DMark to see how the fundamentals stack up.

General Performance

As you can see, the RTX 4070 outperforms the RTX 3070 by about 21% overall, while underperforming the RTX 3080 by about 1.37%, which is close enough to effectively tie the last-gen 4K powerhouse, and underperforms the RTX 3080 Ti by about 6%. Considering that the RTX 3080 Ti's MSRP is nearly twice that of the RTX 4070, this is an astounding result. 

The RTX 4070 does lag behind the RTX 4070 Ti and the RX 7900 XT by quite a bit, averaging about 22% worse performance than the RX 7900 XT and about 13.5% worse performance than the RTX 4070 Ti. These current-gen cards also have substantially better hardware, so this isn't unexpected.

Creative Performance

When it comes to creative performance, well, we have a more limited dataset to work with since Blender Benchmark 3.5.0 decided it only wanted to test half the cards I tried to run it on (including failing to run on the RTX 4070), so we'll have to come back to that one at a later date once the benchmark is updated.

In the meantime, the tests I was able to run really showcased how well the RTX 4070 can handle creative workloads. On Adobe Premiere and Adobe Photoshop, the RTX 4070 performed noticeably better than the RTX 3080 across both apps and fell in very close behind the RTX 4070 Ti for an overall second place finish.

In lieu of Blender's Benchmark, V-Ray 5 is a fairly good stand-in, as well as an excellent benchmark in its own right. Here, the RX 7900 XT wouldn't run, since it doesn't use CUDA or Nvidia's RTX, but we can see the RTX 4070 coming in a respectable runner up to the RTX 4070 Ti.

One of my recent favorite workloads, Lumion 12.5, renders an architectural design into either a short movie clip at 1080p or 4K at 60 fps, making it one of the best benchmarks for creatives to see how a graphics card handles production level workloads rather than synthetic tests.

It requires the same kind of hardware as many of the best PC games in order to light a scene, create realistic water effects, and reproduce foliage on trees, and it's the kind of real-world benchmark that tells more about the card than a simple number devoid of context.

Considering that it can take a five-second, 60 fps movie clip an hour to render at production quality, I switched things up a bit and rather than calculate frames per second, like I do with Handbrake's encoding test, I use frames per hour to give a sense of how long a movie clip you can produce if you leave the clip to render overnight (a common practice).
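
To make that metric concrete, the arithmetic is simple: a five-second clip at 60 fps is 300 frames, so a card that needs a full hour for it is producing 300 frames per hour. A minimal sketch of the calculation (not the actual test script):

```python
# Illustrative frames-per-hour calculation for the Lumion render test.
# This is just the arithmetic behind the metric, not the benchmark itself.

def frames_per_hour(clip_seconds: float, clip_fps: float, render_hours: float) -> float:
    total_frames = clip_seconds * clip_fps
    return total_frames / render_hours

# Example from the text: a 5-second, 60 fps clip that takes an hour to render.
print(frames_per_hour(5, 60, 1.0))   # 300.0 frames per hour
```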

In the case of the RTX 4070, it rendered a five-second movie clip at 60 fps at draft (1-star) quality 13% faster than the RTX 3080, about 7% faster than the RTX 3080 Ti, and nearly 23% faster than the RX 7900 XT. 

It lagged behind the RTX 4070 Ti, though, by about 8%, a deficit that grew wider at 1080p production (4-star) quality, where the RTX 4070 rendered the movie 25% slower than the 4070 Ti and 6.78% slower than the RX 7900 XT. 

For Handbrake, the RTX 4070 manages to pull out its first clean win on the creative side, though not by a whole lot. Still, 170 frames per second encoding from 4K to 1080p is not bad at all.

Overall then, the RTX 4070 puts in a solid creative performance, besting the RTX 3080, the RX 7900 XT, the RTX 3070 Ti, and the RTX 3070, while barely losing out to the RTX 3080 Ti.

Gaming Performance

An Nvidia GeForce RTX 4070 graphics card on a pink desk mat

(Image credit: Future / John Loeffler)

As good of a creative card as the RTX 4070 is, in its bones, this is a gamers' graphics card, so gaming performance is definitely where I spent most of my time testing the RTX 4070. I want to note that the included figures here are a representative sample of my testing, and that not all test results are shown.

When it comes to gaming performance, the RTX 4070 offers some of the best you're going to get at this price, though there are some stipulations to bring up right out the gate.

First, broadly speaking, this card can game at 4K on most games not called Cyberpunk 2077 or Metro: Exodus using max settings natively, so long as you keep things within reasonable limits. Or, really, one limit: keep ray tracing turned off.

Overall, the RTX 4070 gets about 58 fps on average at 4K when not ray tracing, with a floor of 45 fps at 4K, which is eminently playable. Turn ray tracing to the max and you get an average fps of 34 with a floor of 25, which is just better than a slideshow. 

The RTX 3080 doesn't fare too much better on this metric, managing 40 fps on average with a floor of 29 fps at max settings with ray tracing turned on, while the RTX 3080 Ti averages about 36 fps and a floor of 19 fps. This does put the RTX 4070 just behind the 3080 Ti in terms of average fps and with a higher fps floor than the 3080 Ti.

If you're dead set on ray tracing, the RTX 4070 can certainly deliver, thanks to DLSS, which can bump those numbers back up to 79 fps on average with a floor of 55 fps. Compare that to the RTX 3080's 80 fps average with a 58 fps floor in our tests and the RTX 4070 can definitely go toe to toe with the RTX 3080 when ray tracing on max settings if DLSS is on. 

In addition, the RTX 4070 gets about 10% less fps on average than the RTX 3080 Ti at 4K with ray tracing and DLSS on, (79 fps to the 3080 Ti's 88 fps), and a roughly 14% lower fps floor than the RTX 3080 Ti (55 fps to the 3080 Ti's 64 fps). 

Overall, the RTX 4070 manages an average 57 fps at 4K, with a floor of 41 fps, across all the settings I tested. This is about 28% lower than the RTX 4070 Ti (79 fps average, overall), about 10% lower than the RTX 3080 (63 fps average, overall), the RX 7900 XT (64 fps average, overall), and the RTX 3080 Ti (64 fps average, overall).

These numbers skew a bit against the RTX 4070, since the RTX 4070 Ti, RX 7900 XT, RTX 3080, and RTX 3080 Ti all handle native 4K gaming much better, but so few people play at native 4K anymore that it's a fairly meaningless advantage. 

Meanwhile, the RTX 4070 actually beats the RX 7900 XT by about 20% when using DLSS (versus the RX 7900 XT's FSR) at 4K with max settings and ray tracing; 79 fps on average to 66 fps on average, respectively. It also manages to strike a dead heat with the RTX 3080 (80 fps average) and come just 10% short of the RTX 3080 Ti's average RT performance at 4K with ray tracing. 

It's important to note as well that these don't factor in DLSS 3 Frame Generation, to make it a fair comparison.

As for the RTX 3070, the RTX 4070 manages about 39% better average 4K performance, with a 53% higher fps floor (57 fps average with a 43 fps floor for the RTX 4070 compared to the RTX 3070's 41 fps average and 28 fps floor).

When it comes to 1440p gaming, the RTX 4070 is on much more solid footing, even if some of the bigger cards definitely perform better in absolute terms. The RTX 4070 underperforms the RTX 3080 by about 8% in non-ray-traced, non-upscaled 1440p gaming, on average (105 fps to the RTX 3080's 115 fps), though they both have a very similar floor around 80-85 fps. 

Meanwhile, the RTX 4070 falls about 12% short of the RTX 3080 Ti's 119 average fps at non-ray-traced, non-DLSS 1440p.

Both the RTX 4070 Ti and RX 7900 XT kinda clobber the RTX 4070 with roughly 25-29% better performance at non-ray-traced, non-upscaled 1440p gaming, and this carries over into gaming with ray tracing settings maxed out, though the RTX 4070 is still getting north of 60 fps on average (67 fps, to be precise), with a relatively decent floor of 51 fps.

The real kicker though is when we turn on DLSS, at which point the RTX 4070 beats out everything but the RTX 4070 Ti and RTX 3080 Ti, including the RX 7900 XT, which it outperforms by about 29% on average (125 fps to 97 fps), with a much higher floor of 88 fps to the RX 7900 XT's 60 fps, a nearly 49% advantage.

The RTX 4070 also beats the RTX 3080 here too, with about 5% better performance on average and a 7.5% higher fps floor on average than the RTX 3080. Incredibly, the RTX 4070 is just 3% slower than the RTX 3080 Ti when both are using DLSS at 1440p with max ray tracing.

As for the RTX 3070, the RTX 4070 gets about 35% better performance at 1440p with ray tracing and DLSS 2.0 than the card it replaces (125 fps to 93 fps), with a nearly 53% higher fps floor on average (87 fps to the 3070's 57 fps), meaning that where the RTX 3070 is setting the 1440p standard, the RTX 4070 is blowing well past it into territory the RTX 3070 simply cannot go.

The story is pretty much the same at 1080p, with essentially no difference between the RTX 4070, the RTX 3080, the RTX 3080 Ti, and the RX 7900 XT; the RTX 3070 languishes about 30% behind, while the RTX 4070 Ti is off on its own, ahead of everyone else.

There was a lot of talk about the RTX 4070 ahead of its launch as benchmarks leaked, with people looking at numbers out of context and downplaying the card's performance based on one or two tests. Some even pointed to the price increase to write the card off as a disappointment.

Granted, I'm not thrilled about the 20% price increase either, but there's no getting around the fact that you're getting a graphics card here with just 200W TGP that's putting up numbers to rival the RTX 3080 Ti. And I haven't even touched on the new features packed into Lovelace that you can't get with the last-gen Nvidia graphics cards.

The numbers are what they are, and the RTX 4070's performance is simply outstanding across every resolution in all the ways that matter.

  • Performance score: 5 / 5

Should you buy the Nvidia GeForce RTX 4070?

A man's hand holding the Nvidia GeForce RTX 4070 graphics card

(Image credit: Future / John Loeffler)

Buy it if...

You want next-gen performance for less than $600
The Nvidia RTX 4070 offers performance on par with the RTX 3080 and even the RTX 3080 Ti for a good deal less.

You don't want a massive GPU
Graphics cards are starting to resemble Transformers nowadays (both the Autobot and power plant variety), so it's nice to get a graphics card that's just normal-sized.

You want next-gen features like DLSS 3
Nvidia's hardware is often on the bleeding edge of the industry, and features like DLSS 3 and Nvidia Reflex are Nvidia's not-so-secret force multipliers here.

Don't buy it if...

You can get an RTX 3080 cheap
Generally, the RTX 4070 is going to outperform the 3080, but if you don't care about the advanced features and can grab the 3080 in a bargain bin, you could save some money.

You're looking for Nvidia's next budget card
The RTX 4070 is a lot cheaper than the rest of the current-gen graphics card lineups from Nvidia and AMD, but at $600, it's still too expensive to truly be a "budget" GPU.

Nvidia GeForce RTX 4070 review: Also consider

If our Nvidia GeForce RTX 4070 review has you considering other options, here are two more graphics cards to consider...

How I tested the Nvidia GeForce RTX 4070

An Nvidia GeForce RTX 4070 graphics card slotted into a motherboard

(Image credit: Future / John Loeffler)
  • I spent about 50 hours with the RTX 4070 in total
  • Besides general benchmarking, I used the card for everyday gaming and creative work
My test bench specs

Here is the system I used to test the Nvidia GeForce RTX 4070:

CPU: AMD Ryzen 9 7950X3D
CPU Cooler: Cougar Poseidon GT 360 AIO Cooler
DDR5 RAM: 32GB Corsair Dominator Platinum @ 5,200MHz & 32GB G.Skill Trident Z5 Neo @ 5,200MHz
Motherboard: ASRock X670E Taichi
SSD: Samsung 980 Pro SSD @ 1TB
Power Supply: Corsair AX1000 80-Plus Titanium (1000W)
Case: Praxis Wetbench

When I test a graphics card, I start by making sure that all tests are performed on the same test bench setup to isolate GPU performance. I then run it through a series of synthetic benchmarking tools like 3DMark, as well as in-game benchmarks in the most recent PC games I can access, like Cyberpunk 2077 and F1 2022.

I run everything on the maximum settings possible without upscaling tech, and I run all tests at the resolutions a reader is most likely to use a given card at. In the case of the RTX 4070, this meant testing at 1080p, 1440p, and 2160p.

I also install the latest relevant drivers and rerun tests on any competing graphics card that I might have already reviewed and tested, like the RTX 4070 Ti, RX 7900 XT, and RTX 3080, so that I have the most current scores and any driver updates are accounted for. All of these scores are recorded and compared against the card's predecessor, its most direct rival, and the card directly above and below it in the product stack, if those cards are available.

I then average these scores into a final overall score and divide that by the card's MSRP to see how much performance every dollar or pound spent actually gets you, which is how I gauge the value a card brings to the table.
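
To make that value calculation concrete, here's a minimal sketch of the idea in Python; the function name and sample numbers are illustrative, not my actual scoring spreadsheet.

```python
def value_score(benchmark_averages: list[float], msrp: float) -> float:
    """Average the benchmark results, then divide by price for a rough performance-per-dollar figure."""
    overall = sum(benchmark_averages) / len(benchmark_averages)
    return overall / msrp

# Hypothetical averaged results at 1080p, 1440p, and 2160p against a $599 MSRP
print(round(value_score([140.0, 105.0, 57.0], 599.0), 3))
```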

Finally, I actually use the card in my own personal computer for several days, playing games, using apps like Adobe Photoshop or Adobe Illustrator, and watching for any anomalies, crashes, glitches, or visual disruptions that may occur during my time with the card. Having extensively covered and tested many graphics cards over the years, I know what a graphics card should do and how it should perform, and can readily identify when something is not performing up to expectations and when it exceeds them. 

Read more about how we test

First reviewed April 2023
