Asus ZenWiFi BT10 review: sleek and super fast Wi-Fi 7 mesh, but you’ll pay for it
7:35 am | February 13, 2025


Asus ZenWiFi BT10: Two-minute review

Back in the olden days of Wi-Fi 5, it was Asus’ ZenWiFi ‘mega mesh’ wireless routers that led the world. While regular mesh systems merely dribbled performance across numerous network nodes (like many still do), the ZenWiFi nodes innovatively used secondary 5GHz channels (plus, later, the nascent 6GHz channel) as fast backhaul to maintain peak performance at a distance. Nowadays, such advantages are built into the Wi-Fi 6E and Wi-Fi 7 standards, so where does that leave a Wi-Fi 7 version in the form of the ZenWiFi BT10? Let’s find out.

The two smart-looking nodes seem identical, but note the discreet sticker denoting one as the master. Failing to notice this may lead to hair being torn out, swearing for half an hour, decrying the powers that be and wondering why the dang thing won’t connect when you’ve obviously done everything right, repeatedly checked the password and #$@&%! drat. Otherwise, the setup process is simple via the phone app.

The app provides the usual monitoring and management settings on the first screen, and immediately asks if you want to reset the default password and set up a separate IoT network. You can assign devices to people, limit bandwidth, block them or assign them QoS optimizations for gaming, streaming or WFH. An ‘Insight’ tab provides suggestions for security and optimization. The Family tab enables you to set content filtering and on/off schedules (and, unlike some rivals, these settings are free).

Other features include Asus’ (Trend Micro-powered) AiProtection, which scans and protects your network as well as all the devices on it. The usual networking tools are available, including Google Assistant voice control. Ultimately, it’s well-featured and very intuitive. My main concern is that the QoS controls have a feature that tracks the websites used by everyone on the network. That raises some serious privacy issues.

Wired connections are the same on both nodes: there’s Gigabit WAN/LAN, 10G Ethernet WAN/LAN, and 10G Ethernet LAN. All the ports are color coded, though the scheme could still confuse some users. There’s also a USB 3.0 port, which can be used for file sharing and media serving.

So how does it perform? On paper, the ZenWiFi BT10 is a tri-band router with 18,000 Mbps of combined theoretical throughput. Note that you can choose to reserve the 6GHz channel for backhaul, but leaving it at ‘Auto’ saw better results. I tested it by downloading video files from a Synology NAS to an HP OmniBook Ultra Flip 14 at close range, two rooms away (by the second node, at the front of the house) and 15 meters away in the back garden. It scored 1,661 Mbps, 614 Mbps and 370 Mbps, respectively, which is an excellent result.

All in all, the Asus ZenWiFi BT10 is a very appealing package that looks good, offers heaps of intuitive and useful features, plus fast performance to boot. Then there’s the price… Two nodes cost an eye-watering $900 / £779 / AU$2,799. Still, if you need high-end functionality and speed, it’s hard to beat.

Asus ZenWiFi BT10 review: Price and availability

Asus ZenWiFi BT10 from the side with power cable, on a wooden desk

(Image credit: Future)
  • How much does it cost? $900 / £779 / AU$2,799
  • When is it available? Now
  • Where can you get it? Available in the US, UK, and Australia

Asus has a plethora of Wi-Fi 7 routers, but (like other vendors) it’s pushing the expensive premium models out first. I saw many of its budget Wi-Fi 7 routers at the Computex 2024 trade show and those will offer similar features at lower cost, but there’s no sign of them appearing in most markets, at least not at the time of publication.

Until then, we’re stuck with inflated price tags. It costs $900 in the US, £779 in the UK and AU$2,799 in Australia. For some reason, Aussies seem to be getting particularly hard done by in this case. Most regions sell single nodes, but only a few, it seems, sell the three-node kit.

A tempting alternative is the Asus ROG Rapture GT-BE98. While it’s not a mesh system, the powerful gaming behemoth can single-handedly rival the speeds, performance and features of the BT10, but at a cheaper price. As with most current Asus routers, older models or cheap ones can be used as nodes thanks to Asus’ AiMesh technology – a potentially affordable way of expanding the network into dead zones. However, it’s quite a confronting device and not everyone will want what looks like a giant robot spider in their home.

  • Value score: 4 / 5

Asus ZenWiFi BT10 review: Specifications

Asus ZenWiFi BT10 close up of the ports

(Image credit: Future)

Asus ZenWiFi BT10 review: Design

  • Sleek enough for a stylish home
  • Simple to set up
  • App (and web-based firmware) are responsive, powerful and intuitive

The physical design of the ZenWiFi BT10 is not far from its predecessor, the ZenWiFi AX XT8. The grilles at the sides are more refined, but both will happily fit into a stylish home or office better than most on the market.

Setting it up is simple, thanks to the mature, intuitive and well-featured app. Just note that, despite the similarities, there’s a sticker on the primary node and you need to connect to that, as using the secondary node won’t work.

While there are many features accessible within the app, Asus has these and many more advanced options accessible via a web browser, and both interfaces are intuitive and responsive to use.

  • Design score: 5 / 5

Asus ZenWiFi BT10 review: Features

Asus ZenWiFi BT10 app screenshots

(Image credit: Future)
  • Security and Parental controls are included (without subscription)
  • Has almost every consumer networking feature under the sun
  • 10G Ethernet LAN and WAN ports

The Asus router app has been around for some time now and it’s well laid out, intuitive and packed full of features. The opening screen displays a wireless network map and provides a button to manually run a wireless-optimization cycle. There’s a real-time traffic monitor, CPU and RAM monitor, and an at-a-glance display of what type of devices are connected wirelessly and via cables.

The second tab breaks down which devices are connected along with their IP addresses and the resources they’re using. You can easily block them or assign them to family members to provide parental controls. There’s also the ability to configure Asus’ AiMesh feature, which lets you do things like turn off the LEDs, prioritize the 6GHz channel for backhaul or client connections, and see details like the IP address, MAC address and firmware version. The Insight tab offers smart recommendations regarding secure connections, intrusion prevention and setting up family groups.

The built-in network security is called Asus AiProtection and it’s powered by Trend Micro. In addition to providing network security assessments, it offers malicious site blocking, two-way intrusion prevention and infected-device isolation. It also powers the parental controls and (mercifully) doesn’t require a separate premium subscription – unlike some rivals.

The Family tab lets you add people and their devices to customizable and preset groups. This provides web filtering that’s suitable for different children’s age groups (plus adults), and lets you set up online and offline schedules for each day of the week. Again, I’m very pleased to see Asus provide these features without asking for a subscription fee.

The final tab offers access to other standard router features, including QoS and VPN. While the analysis features that come with this are useful, I am concerned about the website history logging, which enables people to spy on the online activity of everyone on the network. You can also set up a USB port as a Samba media server or FTP file server, and there’s the ability to add Alexa and Google Assistant integration.
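If you do set up the USB port as an FTP file server, pulling a file off it from any machine on the LAN is straightforward. Here’s a minimal, illustrative Python sketch; the address, credentials and folder name are placeholders (192.168.50.1 is a common Asus LAN default, but yours may differ), not anything specific to the BT10:

```python
from ftplib import FTP

# Placeholder host and credentials for a router-hosted FTP share.
with FTP("192.168.50.1") as ftp:          # common Asus LAN address (assumption)
    ftp.login(user="admin", passwd="your-password")
    ftp.cwd("/sda1")                      # USB drive folder name (illustrative)
    with open("video.mp4", "wb") as local_file:
        # Stream the remote file to disk in binary mode.
        ftp.retrbinary("RETR video.mp4", local_file.write)
```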

Accessing the firmware via a web browser provides access to all of the above along with functions like Dual WAN, 3G / 4G LTE USB WAN, port forwarding, port triggering, DMZ, DDNS, IPTV, automatic BitTorrent downloading, VPN management, Apple Time Machine compatibility and shared folder privileges, among other high-level network-admin features. Just note that many of these are available on Asus’ lesser routers, so don’t splash out on an expensive model just because one catches your eye.

Physically, each node has Gigabit WAN, 10G Ethernet LAN, and 10G Ethernet WAN/LAN network ports, plus a USB-A 3.0 connection. It’s also worth mentioning Asus’ AiMesh feature which can use most current (and many older), cheap and premium Asus routers as nodes to further extend a network.

  • Features: 5 / 5

Asus ZenWiFi BT10 review: Performance

  • Tri-band Wi-Fi 7 for blistering real-world speed
  • One of the best performers at long range
  • 10G Ethernet for fast wired connections
Asus ZenWiFi BT10 benchmarks

Close range: 1,661 Mbps
Medium range: 614 Mbps
Long range: 370 Mbps

The tri-band (2.4GHz, 5GHz and 6GHz) router promises 18,000 Mbps of theoretical performance, but that only exists in lab conditions and certainly can’t be achieved in the real world where every network’s situation will be different. It’s possible to reserve the 6GHz channel for backhaul only, but leaving it set to ‘Auto’ saw better results.

My tests involved downloading large video files from a Synology NAS (wired to the router’s 10G Ethernet port) to a Wi-Fi 7-equipped HP OmniBook Ultra Flip 14 laptop at three different ranges.

Up close, it managed 1,661 Mbps, which I’ve only seen beaten by Netgear’s Nighthawk RS700S Wi-Fi 7 router. Two rooms away, at the front of my single-story home (by the second node), it managed 614 Mbps. While that’s a significant drop, it’s still impressive, although other premium routers and mesh systems can be a bit faster. More impressively, the BT10 managed 370 Mbps, 15 meters away, outside the home in the garden. Only top-tier three-node mesh systems have rivaled that (and not all do).
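To put those three figures in perspective, here’s a small Python sketch that expresses each result as a share of the close-range speed; the throughput numbers are the review’s own, the percentages are simple arithmetic:

```python
# The BT10's measured throughput at each test range (from this review).
results_mbps = {
    "close range": 1661,
    "medium range (two rooms away)": 614,
    "long range (15m, back garden)": 370,
}

peak = results_mbps["close range"]
for label, mbps in results_mbps.items():
    # Share of close-range throughput retained at this distance.
    print(f"{label}: {mbps} Mbps ({mbps / peak:.0%} of peak)")
```

That works out to roughly 37% of peak throughput two rooms away and 22% out in the garden, which is an unusually gentle falloff for a two-node kit.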

In short, it’s very fast indeed, and I happily edited 4K video on my laptop from across the network with no issues at all.

  • Performance: 5 / 5

Asus ZenWiFi BT10 from above, showing vents

(Image credit: Future)

Should you buy the Asus ZenWiFi BT10?

Buy it if...

You want fast Wi-Fi
Wi-Fi 7 really is a game-changer in that it offers superlative performance for old and new devices alike. I never like calling anything future-proof, but the fact that only cutting-edge clients can come close to accessing its full performance is telling. It will be a very long time before it feels slow.

You have weak Wi-Fi in some areas
Some premium routers do a great job of distributing a strong signal across a large area. But there are plenty of larger buildings that have dead spots due to size or thick walls. If that’s the case, the BT10 will likely help you out, and you can still add additional nodes via Asus’ AiMesh technology.

You hate subscriptions
It’s been disappointing to see that some premium-brand routers now come with core features that you have to pay even more for. In some cases, that means paying for both parental controls and security software, separately. Asus deserves credit for keeping it all free.

Don't buy it if…

You want to save money
The BT10, like many premium Wi-Fi 7 kits, is incredibly expensive. While it’s nice to have a future-proof setup, you can still buy last-gen Wi-Fi 6 and 6E models with similar features for substantially less. You can also extend the network cheaply by using older AiMesh-compatible Asus routers as nodes.

You live in Australia
Australians appear to be the victims of price gouging when it comes to premium Wi-Fi 7 networking devices. The price here is anomalously high compared to other regions, even with the usual tax and shipping issues.

You only want basic features
Some people just want to access the internet without much fuss. If that’s the case, then the BT10 is overpowered, over-featured and overpriced for your requirements. You can save a massive amount of money on a lesser device that will still fulfill your needs.

Also consider

If you're undecided about investing in the Asus ZenWiFi BT10 router, I've compared its specs with three alternatives that might suit you better.

Netgear Nighthawk RS700S
The elder sibling of the RS300 is twice as expensive, but it provides Wi-Fi 7 at an even faster speed of 19 Gbps and has 10G Ethernet, so it’s great for high-speed broadband connections.

Read our full Nighthawk RS700S review

Asus ROG Rapture GT-BE98
This giant robot spider is the ZenWiFi BT10’s big, gamer-oriented brother. If you can get past the looks, it offers similar features and performance in one less-expensive package.

Read our full Asus ROG Rapture GT-BE98 review

TP-Link Deco BE63
It’s been on the market longer and its price has dropped accordingly. You also get three nodes to spread the signal even further. It’s a great-value Wi-Fi 7 mesh kit.

Read our full TP-Link Deco BE63 review

How I tested the Asus ZenWiFi BT10

  • I tested it in typical home use
  • I tested it at short, medium and long range
  • I tested both the wired and wireless connections

We pride ourselves on our independence and our rigorous review-testing process, offering up long-term attention to the products we review and making sure our reviews are updated and maintained. Regardless of when a device was released, if you can still buy it, it’s on our radar.

Read more about how we test


  • First reviewed in February 2025
Nvidia GeForce RTX 5080 review: nearly RTX 4090 performance for a whole lot less
5:00 pm | January 29, 2025


Nvidia GeForce RTX 5080: Two-minute review

At first glance, the Nvidia GeForce RTX 5080 doesn't seem like that much of an upgrade from the Nvidia GeForce RTX 4080 it is replacing, but that's only part of the story with this graphics card.

Its performance, to be clear, is unquestionably solid, positioning it as the third-best graphics card on the market right now, by my testing, and its new PCIe 5.0 interface and GDDR7 VRAM further distance it from the RTX 4080 and RTX 4080 Super from the last generation. It also outpaces the best AMD graphics card, the AMD Radeon RX 7900 XTX, by a healthy margin, pretty much locking up the premium, enthusiast-grade GPUs in Nvidia's corner for at least another generation.

Most impressively, it does all this for the same price as the Nvidia GeForce RTX 4080 Super and RX 7900 XTX: $999 / £939 / AU$2,019. This is also a rare instance where a graphics card's launch price actually recedes from the high watermark set by its predecessor, as the RTX 5080 climbs down from the inflated $1,199 / £1,189 / AU$2,219 the RTX 4080 launched at back in 2022.

Then, of course, there's the new design of the card, which features a slimmer dual-slot profile, making it easier to fit into your case (even if the card's length remains unchanged). The dual flow-through fan cooling solution does wonders for managing the additional heat generated by the 40W-higher TDP, and while the 12VHPWR cable connector is still present, the 3-to-1 8-pin adapter is at least somewhat less ridiculous than the RTX 5090's 4-to-1 dongle.

The new card design also repositions the power connector itself to make it less cumbersome to plug a cable into the card, though it pretty much negates the 90-degree-angled cables that gained popularity with the high-end RTX 40 series cards.

Finally, everything is built off of TSMC's 4nm N4 process node, making it one of the most cutting-edge GPUs on the market in terms of its architecture. While AMD and Intel will follow suit with their own 4nm GPUs soon (AMD RDNA 4 also uses TSMC's 4nm process node and is due to launch in March), right now, Nvidia is the only game in town for this latest hardware.

None of that would matter if the card didn't perform, but gamers and enthusiasts can rest assured that even without DLSS 4, you're getting a respectable upgrade. It might not have the wow factor of the beefier RTX 5090, but for gaming, creating, and even AI workloads, the Nvidia GeForce RTX 5080 is a spectacular balance of performance, price, and innovation that you won't find anywhere else at this level.

Nvidia GeForce RTX 5080: Price & availability

An RTX 5080 sitting on its retail packaging

(Image credit: Future)
  • How much is it? MSRP is $999 / £939 / AU$2,019
  • When can you get it? The RTX 5080 goes on sale January 30, 2025
  • Where is it available? The RTX 5080 will be available in the US, UK, and Australia at launch
Where to buy the RTX 5080

Looking to pick up the RTX 5080? Check out our Where to buy RTX 5080 live blog for updates to find stock in the US and UK.

The Nvidia GeForce RTX 5080 goes on sale on January 30, 2025, starting at $999 / £939 / AU$2,019 for the Founders Edition and select AIB partner cards, while overclocked (OC) and more feature-rich third-party cards will be priced higher.

This puts the Nvidia RTX 5080 about $200 / £250 / AU$200 cheaper than the launch price of the last-gen RTX 4080, while also matching the price of the RTX 4080 Super.

Both of those RTX 40 series GPUs should see some downward price pressure as a result of the RTX 5080 release, which might complicate the value proposition of the RTX 5080 over the other two.

The RTX 5080 is also launching at the same MSRP as the AMD Radeon RX 7900 XTX, which is AMD's top GPU right now. And with AMD confirming that it does not intend to launch an enthusiast-grade RDNA 4 GPU this generation, the RTX 5080's only real competition is from other Nvidia graphics cards like the RTX 4080 Super or RTX 5090.

This makes the RTX 5080 a great value proposition for those looking to buy a premium 4K graphics card, as its price-to-performance ratio is very strong.

  • Value: 4 / 5

Nvidia GeForce RTX 5080: Specs & features

A masculine hand holding an Nvidia GeForce RTX 5080 showing off the power connector

(Image credit: Future)
  • GDDR7 VRAM and PCIe 5.0
  • Still just 16GB VRAM
  • Slightly higher 360W TDP

While the Nvidia RTX 5080 doesn't push the spec envelope quite as far as the RTX 5090 does, its spec sheet is still impressive.

For starters, like the RTX 5090, the RTX 5080 uses the faster, next-gen PCIe 5.0 interface that allows for faster data processing and coordination with the CPU, which translates directly into higher performance.

You also have new GDDR7 VRAM in the RTX 5080, only the second card to have it after the RTX 5090, and it dramatically increases the memory bandwidth and speed of the RTX 5080 compared to the RTX 4080 and RTX 4080 Super. Those latter two cards both use slower GDDR6X memory, so even though all three cards have the same amount of memory (16GB) and memory bus-width (256-bit), the RTX 5080 has a >25% faster effective memory speed of 30Gbps, compared to the 23Gbps of the RTX 4080 Super and the 22.4Gbps on the base RTX 4080.
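Those per-pin speeds translate directly into peak memory bandwidth once you factor in the shared 256-bit bus. A quick Python sketch of the arithmetic, using only the figures quoted above (the GB/s totals are derived here, not pulled from a spec sheet):

```python
# Peak memory bandwidth = per-pin speed (Gbps) * bus width (bits) / 8 bits-per-byte.
def peak_bandwidth_gb_s(speed_gbps: float, bus_width_bits: int = 256) -> float:
    return speed_gbps * bus_width_bits / 8

for name, speed in [("RTX 5080 (GDDR7)", 30.0),
                    ("RTX 4080 Super (GDDR6X)", 23.0),
                    ("RTX 4080 (GDDR6X)", 22.4)]:
    print(f"{name}: {peak_bandwidth_gb_s(speed):.0f} GB/s")  # 960 / 736 / 717 GB/s

print(f"Speed uplift vs 4080 Super: {30.0 / 23.0 - 1:.1%}")  # ~30%
print(f"Speed uplift vs 4080:       {30.0 / 22.4 - 1:.1%}")  # ~34%
```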

This is all on top of the Blackwell GPU inside the card, which is built on TSMC's 4nm process, compared to the Lovelace GPUs in the RTX 4080 and 4080 Super, which use TSMC's 5nm process. So even though the transistor count on the RTX 5080 is slightly lower than its predecessor's, the smaller transistors are faster and more efficient.

The RTX 5080 also has a higher SM count, 84, compared to the RTX 4080's 76 and the RTX 4080 Super's 80, meaning the RTX 5080 has the commensurate increase in shader cores, ray tracing cores, and Tensor cores. It also has a slightly faster boost clock (2,617MHz) than its predecessor and the 4080 Super variant.

Finally, there is a slight increase in the card's TDP, 360W compared to the RTX 4080 and RTX 4080 Super's 320W.

  • Specs & features: 4.5 / 5

Nvidia GeForce RTX 5080: Design

An Nvidia GeForce RTX 5080 leaning against its retail packaging with the RTX 5080 logo visible

(Image credit: Future)
  • Slimmer dual-slot form factor
  • Dual flow-through cooling system

The redesign of the Nvidia RTX 5080 is identical to that of the RTX 5090, featuring the same slimmed-down dual-slot profile as Nvidia's flagship card.

If I were to guess, the redesign of the RTX 5080 isn't as essential as it is for the RTX 5090, which needed a way to bring better cooling for the much hotter 575W TDP, and the RTX 5080 (and eventually the RTX 5070) just slotted into this new design by default.

That said, it's still a fantastic change, especially as it makes the RTX 5080 thinner and lighter than its predecessor.

The dual flow through cooling system on the Nvidia GeForce RTX 5080

(Image credit: Future)

The core of the redesign is the new dual flow-through cooling solution, which uses an innovative three-part PCB inside to open up a gap at the front of the card, allowing a second fan to blow cooler air over the heat sink fins drawing heat away from the GPU.

A view of the comparative slot width of the Nvidia GeForce RTX 5080 and RTX 4080

(Image credit: Future)

This means that you don't need as thick of a heat sink to pull away heat, which allows the card itself to get the same thermal performance from a thinner form factor, moving from the triple-slot RTX 4080 design down to a dual-slot RTX 5080. In practice, this also allows for a slight increase in the card's TDP, giving the card a bit of a performance boost as well, just from implementing a dual flow-through design.

Given that fact, I would not be surprised if other card makers follow suit, and we start getting much slimmer graphics cards as a result.

A masculine hand holding an Nvidia GeForce RTX 5080 showing off the power connector

(Image credit: Future)

The only other design choice of note is the 90-degree turn of the 16-pin power port, which should make it easier to plug the 12VHPWR connector into the card. The RTX 4080 didn't suffer nearly the same kinds of issues with its power connectors as the RTX 4090 did, so this design choice really flows down from engineers trying to fix potential problems with the much more power hungry 5090. But, if you're going to implement it for your flagship card, you might as well put it on all of the Founders Edition cards.

Unfortunately, this redesign means that if you invested in a 90-degree-angled 12VHPWR cable, it won't work on the RTX 5080 Founders Edition, though third-party partner cards will have a lot of different designs, so you should be able to find one that fits your cable situation.

  • Design: 4.5 / 5

Nvidia GeForce RTX 5080: Performance

An Nvidia GeForce RTX 5080 slotted and running on a test bench

(Image credit: Future)
  • Excellent all-around performance
  • Moderately more powerful than the RTX 4080 and RTX 4080 Super, but nearly as fast as the RTX 4090 in gaming
  • You'll need DLSS 4 to get the best results
A note on my data

The charts shown below are the most recent test data I have for the cards tested for this review and may change over time as more card results are added and cards are retested. The 'average of all cards tested' includes cards not shown in these charts for readability purposes.

A note on the RTX 4080 Super

In my testing for this review, the RTX 4080 Super scored consistently lower than it has in the past, which I believe is an issue with my card specifically that isn't reflective of its actual performance. I'm including the data from the RTX 4080 Super for transparency's sake, but I wouldn't take these numbers as-is. I'll be retesting the RTX 4080 Super soon, and will update my data with new scores once I've troubleshot the issue.

Performance is king, though, and so naturally all the redesign and spec bumps won't amount to much if the RTX 5080 doesn't deliver better performance as a result, and fortunately it does—though maybe not as much as some enthusiasts would like.

Overall, the RTX 5080 manages to score about 13% better than the RTX 4080 and about 19% better than the AMD Radeon RX 7900 XTX, a result that will disappoint some who were hoping for something closer to 20% or better (especially after seeing the 20-25% uplift on the RTX 5090).

If we were just to go off those numbers, some might call them disappointing, regardless of all the other improvements to the RTX 5080 in terms of design and specs. All this needs to be put in a broader context though, because my perspective changed once I compared the RTX 5080 to the RTX 4090.

Overall, the RTX 5080 is within 12% of the overall performance of the RTX 4090, and within 9% of the RTX 4090's gaming performance, which is a hell of a thing and simply can't be ignored, even by enthusiasts.

Starting with synthetic benchmarks, the card scores about 13% better than the RTX 4080 and RX 7900 XTX, consistently beating out the RTX 4080 and substantially beating the RX 7900 XTX in ray-traced workloads (though the RX 7900 XTX does pull down a slightly better average 1080p rasterization score, to its credit).

Compared to the RTX 4090, the RTX 5080 comes in at about 15% slower on average, with its worst performance coming at lower resolutions. At 4K, though, the RTX 5080 comes in just 7% slower than the last-gen flagship.

In terms of compute performance, the RTX 5080 trounces the RX 7900 XTX, as expected, by about 38%, with a more modest 9% improvement over the RTX 4080. Against the RTX 4090, however, the RTX 5080 comes within just 5% of the RTX 4090's Geekbench compute scores. If you're looking for a cheap AI card, the RTX 5080 is definitely going to be your jam.

On the creative side, the PugetBench for Creators Adobe Photoshop benchmark still isn't working for the RTX 5080, so I can't tell you much about its creative raster performance yet (though I will update these charts once that issue is fixed), but going off the 3D modeling and video editing scores, the RTX 5080 is an impressive GPU, as expected.

The entire 3D modeling industry is effectively built on Nvidia's CUDA, so against the RTX 5080, the RX 7900 XTX doesn't stand a chance as the 5080 more than doubles the RX 7900 XTX's Blender Benchmark performance. Gen-on-gen though, the RTX 5080 comes in with about 8% better performance.

Against the RTX 4090, the RTX 5080 comes within 15% of its performance, and for good measure, if you're rocking an RTX 3090 and curious about the RTX 5080, it outperforms the RTX 3090 by about 75% in Blender Benchmark. If you're on an RTX 3090 and want to upgrade, you'll probably still be better off with an RTX 4090, but if you can't find one, the RTX 5080 is a great alternative.

In terms of video editing performance, the RTX 5080 doesn't do as well as its predecessor in PugetBench for Creators Adobe Premiere and effectively ties in my Handbrake 4K to 1080p encoding test. I expect that once the RTX 5080 launches, Puget Systems will be able to update its tools for the new RTX 50 series, so these scores will likely change, but for now, it is what it is, and you're not going to see much difference in your video editing workflows with this card over its predecessor.

An Nvidia GeForce RTX 5080 slotted into a motherboard

(Image credit: Future)

The RTX 5080 is Nvidia's premium "gaming" card, though, so its gaming performance is what's going to matter to the vast majority of buyers out there. For that, you won't be disappointed. Working just off DLSS 3 with no frame generation, the RTX 5080 will get you noticeably improved framerates gen-on-gen at 1440p and 4K, with substantially better minimum/1% framerates as well for smoother gameplay. Turn on DLSS 4 with Multi-Frame Generation and the RTX 5080 does even better, blowing well past the RTX 4090 in some titles.

DLSS 4 with Multi-Frame Generation is game developer-dependent, however, so even though this is the flagship gaming feature for this generation of Nvidia GPUs, not every game will feature it. For testing purposes, then, I stick to DLSS 3 without Frame Generation (and the AMD and Intel equivalents, where appropriate), since this allows for a more apples-to-apples comparison between cards.

At 1440p, the RTX 5080 gets about 13% better average fps and minimum/1% fps overall, with up to 18% better ray tracing performance. Set DLSS 3 to balanced and ray tracing to its highest settings and the RTX 5080 gets you about 9% better average fps than its predecessor, but a massive 58% higher minimum/1% fps, on average.

Compared to the RTX 4090, the RTX 5080's average 1440p fps comes within 7% of the RTX 4090's, and within 2% of its minimum/1% fps, on average. In native ray-tracing performance, the RTX 5080 slips to within 14% of the RTX 4090's average fps and within 11% of its minimum/1% performance. Turn on balanced upscaling, however, and everything changes, with the RTX 5080 coming within just 6% of the RTX 4090's ray-traced upscaled average fps and beating the RTX 4090's minimum/1% fps average by almost 40%.

Crank things up to 4K and the RTX 5080's lead over the RTX 4080 grows a good bit. With no ray tracing or upscaling, the RTX 5080 gets about 20% faster average fps and minimum/1% fps than the RTX 4080, overall. Its native ray tracing performance is about the same, however, and its minimum/1% fps average actually falls behind the RTX 4080's, both with and without DLSS 3.

Against the RTX 4090, the RTX 5080 comes within 12% of its average fps and within 8% of its minimum/1% performance without ray tracing or upscaling. It falls behind considerably in native 4K ray tracing performance (which is to be expected, given the substantially higher RT core count for the RTX 4090), but when using DLSS 3, that ray tracing advantage is cut substantially and the RTX 5080 manages to come within 14% of the RTX 4090's average fps, and within 12% of its minimum/1% fps overall.

Taken together, the RTX 5080 makes some major strides in reaching RTX 4090 performance across the board, closing a little more than half of the performance gap between the RTX 4080 and RTX 4090.

The RTX 5080 beats its predecessor by just over 13% overall, and comes within 12% of the RTX 4090's overall performance, all while costing less than both RTX 40 series cards' launch MSRPs, making it an incredible value for a premium card to boot.

  • Performance: 4 / 5

Should you buy the Nvidia GeForce RTX 5080?

A masculine hand holding up an Nvidia GeForce RTX 5080 against a green background

(Image credit: Future)

Buy the Nvidia GeForce RTX 5080 if...

You want fantastic performance for the price
You're getting close to RTX 4090 performance for under a grand (or just over two, if you're in Australia) at MSRP.

You want to game at 4K
This card's 4K gaming performance is fantastic, coming within 12-14% of the RTX 4090's in a lot of games.

You're not willing to make the jump to an RTX 5090
The RTX 5090 is an absolute beast of a GPU, but even at its MSRP, it's double the price of the RTX 5080, so you're right to wonder if it's worth making the jump to the next tier up.

Don't buy it if...

You want the absolute best performance possible
The RTX 5080 comes within striking distance of the RTX 4090 in terms of performance, but it doesn't actually get there, much less reaching the vaunted heights of the RTX 5090.

You're looking for something more affordable
At this price, it's an approachable premium graphics card, but it's still a premium GPU, and the RTX 5070 Ti and RTX 5070 are just around the corner.

You only plan on playing at 1440p
While this card is great for 1440p gaming, it's frankly overkill for that resolution. You'll be better off with the RTX 5070 Ti if all you want is 1440p.

Also consider

Nvidia GeForce RTX 4090
With the release of the RTX 5090, the RTX 4090 should see its price come down quite a bit, and if scalpers drive up the price of the RTX 5080, the RTX 4090 might be a better bet.

Read the full Nvidia GeForce RTX 4090 review

Nvidia GeForce RTX 5090
Yes, it's double the price of the RTX 5080, and that's going to be a hard leap for a lot of folks, but if you want the best performance out there, this is it.

Read the full Nvidia GeForce RTX 5090 review

How I tested the Nvidia GeForce RTX 5080

  • I spent about a week and a half with the RTX 5080
  • I used my complete GPU testing suite to analyze the card's performance
  • I tested the card in everyday, gaming, creative, and AI workload usage
Test System Specs

Here are the specs on the system I used for testing:

Motherboard: ASRock Z790i Lightning WiFi
CPU: Intel Core i9-14900K
CPU Cooler: Gigabyte Aorus Waterforce II 360 ICE
RAM: Corsair Dominator DDR5-6600 (2 x 16GB)
SSD: Crucial T705
PSU: Thermaltake Toughpower PF3 1050W Platinum
Case: Praxis Wetbench

I spent about a week and a half testing the RTX 5080, using my updated suite of benchmarks like Black Myth Wukong, 3DMark Steel Nomad, and more.

I also used this card as my primary work GPU where I relied on it for photo editing and design work, while also testing out a number of games on it like Cyberpunk 2077, Black Myth Wukong, and others.

I've been testing graphics cards for TechRadar for a couple of years now, with more than two dozen GPU reviews under my belt. I've extensively tested and retested all of the graphics cards discussed in this review, so I'm intimately familiar with their performance. This gives me the best possible position to judge the merits of the RTX 5080, and whether it's the best graphics card for your needs and budget.

  • Originally reviewed January 2025
Nvidia GeForce RTX 5090: the supercar of graphics cards
5:00 pm | January 23, 2025


Nvidia GeForce RTX 5090: Two-minute review

The Nvidia GeForce RTX 5090 is a difficult GPU to approach as a professional reviewer because it is the rare consumer product that is so powerful, and so good at what it does, you have to really examine if it is actually a useful product for people to buy.

Right out the gate, let me just lay it out for you: depending on the workload, this GPU can get you up to 50% better performance versus the GeForce RTX 4090, and that's not even factoring in multi-frame generation when it comes to gaming, though on average the performance is still a respectable improvement of roughly 21% overall.

Simply put, whatever it is you're looking to use it for, whether gaming, creative work, or AI research and development, this is the best graphics card for the job if all you care about is pure performance.

Things get a bit more complicated if you want to bring energy efficiency into the equation. But if we're being honest, if you're considering buying the Nvidia RTX 5090, you don't care about energy efficiency. This simply isn't that kind of card, and so as much as I want to make energy efficiency an issue in this review, I really can't. It's not intended to be efficient, and those who want this card do not care about how much energy this thing is pulling down—in fact, for many, the enormous TDP on this card is part of its appeal.

Likewise, I can't really argue too much with the card's price, which comes in at $1,999 / £1,939 / AU$4,039 for the Founders Edition, and which will likely be much higher for AIB partner cards (and that's before the inevitable scalping begins). I could rage, rage against the inflation of the price of premium GPUs all I want, but honestly, Nvidia wouldn't charge this much for this card if there wasn't a line out the door and around the block full of enthusiasts who are more than willing to pay that kind of money for this thing on day one.

Do they get their money's worth? For the most part, yes, especially if they're not a gamer but a creative professional or AI researcher. If you're in the latter camp, you're going to be very excited about this card.

If you're a gamer, you'll still get impressive gen-on-gen performance improvements over the celebrated RTX 4090, and the Nvidia RTX 5090 is really the first consumer graphics card I've tested that can get you consistent, high-framerate 8K gameplay even before factoring in Multi-Frame Generation. That marks the RTX 5090 as something of an inflection point of things to come, much like the Nvidia RTX 2080 did back in 2018 with its first-of-its-kind hardware ray tracing.

Is it worth it though?

That, ultimately, is up to the enthusiast buyer who is looking to invest in this card. At this point, you probably already know whether or not you want it, and many will likely be reading this review to validate those decisions that have already been made.

In that, rest easy. Even without the bells and whistles of DLSS 4, this card is a hearty upgrade to the RTX 4090, and considering that the actual price of the RTX 4090 has hovered around $2,000 for the better part of two years despite its $1,599 MSRP, if the RTX 5090 sticks close to its launch price, it's well worth the investment. If it gets scalped to hell and sells for much more above that, you'll need to consider your purchase much more carefully to make sure you're getting the most for your money. Make sure to check out our where to buy an RTX 5090 guide to help you find stock when it goes on sale.

Nvidia GeForce RTX 5090: Price & availability

  • How much is it? MSRP is $1,999 / £1,939 / AU$4,039
  • When can you get it? The RTX 5090 goes on sale January 30, 2025
  • Where is it available? The RTX 5090 will be available in the US, UK, and Australia at launch
Where to buy the RTX 5090

Looking to pick up the RTX 5090? Check out our Where to buy RTX 5090 live blog for updates to find stock in the US and UK.

The Nvidia GeForce RTX 5090 goes on sale on January 30, 2025, starting at $1,999 / £1,939 / AU$4,039 for the Nvidia Founders Edition and select AIB partner cards. Overclocked (OC) and other similarly tweaked cards and designs will obviously run higher.

It's worth noting that the RTX 5090 is 25% more expensive than the $1,599 launch price of the RTX 4090, but in reality, we can expect the RTX 5090 to sell for much higher than its MSRP in the months ahead, so we're really looking at an asking price closer to the $2,499.99 MSRP of the Turing-era Nvidia Titan RTX (if you're lucky).

Of course, if you're in the market for the Nvidia RTX 5090, you're probably not squabbling too much about the price of the card. You're already expecting to pay the premium, especially the first adopter premium, that comes with this release.

That said, this is still a ridiculously expensive graphics card for anyone other than an AI startup with VC backing, so it's worth asking yourself before you confirm that purchase if this card is truly the right card for your system and setup.

  • Value: 3 / 5

Nvidia GeForce RTX 5090: Specs & features

The Nvidia GeForce RTX 5090's power connection port

(Image credit: Future / John Loeffler)
  • First GPU with GDDR7 VRAM and PCIe 5.0
  • Slightly slower clocks
  • Obscene 575W TDP

There are a lot of new architectural changes in the Nvidia RTX 50 series GPUs that are worth diving into, especially the move to a transformer AI model for its upscaling, but let's start with the new specs for the RTX 5090.

First and foremost, the flagship Blackwell GPU is the first consumer graphics card to feature next-gen GDDR7 video memory, and it is substantially faster than GDDR6 and GDDR6X (a roughly 33% increase in Gbps over the RTX 4090). Add in the much wider 512-bit memory interface and you have a total memory bandwidth of 1,790GB/s.
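Both of those numbers check out with a little arithmetic. Assuming 28Gbps GDDR7 modules on the RTX 5090 and the RTX 4090's 21Gbps GDDR6X on a 384-bit bus (my figures, based on published specs rather than anything stated in this review), a quick sketch:

```python
# Peak bandwidth = per-pin speed (Gbps) * bus width (bits) / 8.
def bandwidth_gb_s(speed_gbps: float, bus_width_bits: int) -> float:
    return speed_gbps * bus_width_bits / 8

rtx_5090 = bandwidth_gb_s(28.0, 512)  # -> 1792 GB/s, matching the quoted ~1,790GB/s
rtx_4090 = bandwidth_gb_s(21.0, 384)  # -> 1008 GB/s

print(f"RTX 5090: {rtx_5090:.0f} GB/s, RTX 4090: {rtx_4090:.0f} GB/s")
print(f"Per-pin speed increase: {28.0 / 21.0 - 1:.0%}")  # ~33%, as quoted
```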

This, more than even the increased VRAM pool of 32GB vs 24GB for the RTX 4090, makes this GPU the first really capable 8K graphics card on the market. 8K textures have an enormous footprint in memory, so moving them through the rendering pipelines to generate playable framerates isn't really possible with anything less than what this card offers.
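To give a rough sense of that footprint, consider a single uncompressed 8K render target; a game keeps many of these (plus textures) resident at once. The format and surface count below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope VRAM cost of uncompressed 8K surfaces (illustrative).
width, height = 7680, 4320   # 8K UHD resolution
bytes_per_pixel = 4          # assuming a simple RGBA8 format

surface_mb = width * height * bytes_per_pixel / 1024**2
print(f"One 8K RGBA8 surface: {surface_mb:.0f} MB")            # ~127 MB

# A renderer juggles many such surfaces (G-buffer, depth, post-processing
# targets), so the working set balloons quickly at 8K.
print(f"Ten such surfaces: {surface_mb * 10 / 1024:.2f} GB")   # ~1.24 GB
```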

Yes, you can, maybe, get playable 8K gaming with some RTX 40 or AMD Radeon RX 7000 series cards if you use aggressive upscaling, but you won't really be getting 8K visuals that'll be worth the effort. In reality, the RTX 5090 is what you want if you want to play 8K, but good luck finding an 8K monitor at this point. Those are still years away from really going mainstream (though there are a growing number of 8K TVs).

If you're settling in at 4K though, you're in for a treat, since all that bandwidth means faster 4K texture processing, so you can get very fast native 4K gaming with this card without having to fall back on upscaling tech to get you to 60fps or higher.

The GeForce RTX logo on the Nvidia GeForce RTX 5090

(Image credit: Future / John Loeffler)

The clock speeds on the RTX 5090 are slightly slower than the RTX 4090's, which is probably for the best, given the other major top-line specs: a gargantuan 575W TDP and a PCIe 5.0 x16 interface. That TDP presents a thermal challenge which, according to Nvidia, required a major reengineering of the PCB inside the card, which I'll get to in a bit.

The PCIe 5.0 x16 interface, meanwhile, is the first of its kind in a consumer GPU, though you can expect AMD and Intel to quickly follow suit. This matters because a number of newer motherboards have PCIe 5.0 lanes ready to go, but most people have been using those for PCIe 5.0 m.2 SSDs.

If your motherboard has 20 PCIe 5.0 lanes, the RTX 5090 will take up 16 of those, leaving just four for your SSD. If you have one PCIe 5.0 x4 SSD, you should be fine, but I've seen motherboard configurations that have two or three PCIe 5.0 x4 m.2 slots, so if you've got one of those and you've loaded them up with PCIe 5.0 SSDs, you're likely to see those SSDs drop down to the slower PCIe 4.0 speeds. I don't think it'll be that big of a deal, but it's worth considering if you've invested a lot into your SSD storage.
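If you want to run the lane math for your own board, it's basic subtraction. Here's a quick sketch; the lane counts are illustrative assumptions, since they vary by platform:

```python
# Illustrative PCIe 5.0 lane budgeting; actual lane counts vary by CPU and chipset.
cpu_pcie5_lanes = 20   # hypothetical board with 20 PCIe 5.0 lanes
gpu_lanes = 16         # the RTX 5090 runs at PCIe 5.0 x16
ssd_slots = [4, 4]     # two PCIe 5.0 x4 m.2 slots, as described above

remaining = cpu_pcie5_lanes - gpu_lanes
for i, width in enumerate(ssd_slots, start=1):
    if remaining >= width:
        remaining -= width
        print(f"m.2 slot {i}: full PCIe 5.0 x{width}")
    else:
        print(f"m.2 slot {i}: falls back to PCIe 4.0 ({remaining} Gen5 lanes left)")
```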

As for the other specs, they're more or less similar to what you'd find in the RTX 4090, just more of it. The new Blackwell GB202 GPU in the RTX 5090 is built on a TSMC 4nm process, compared to the RTX 4090's TSMC 5nm AD102 GPU. The SM design is the same, so 128 CUDA cores, one ray tracing core, and four tensor cores per SM. At 170 SMs, you've got 21,760 CUDA cores, 170 RT cores, and 680 Tensor cores for the RTX 5090, compared to the RTX 4090's 128 SMs (so 16,384 CUDA, 128 RT, and 512 Tensor cores).
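Those core counts all fall straight out of the per-SM layout, if you want to check the arithmetic:

```python
# Per-SM layout for both GPUs, per the review: 128 CUDA, 1 RT, and 4 Tensor cores.
def cores_from_sms(sms: int) -> tuple[int, int, int]:
    return sms * 128, sms * 1, sms * 4

print(cores_from_sms(170))  # RTX 5090: (21760, 170, 680)
print(cores_from_sms(128))  # RTX 4090: (16384, 128, 512)
```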

  • Specs & features: 4.5 / 5

Nvidia GeForce RTX 5090: Design

The Nvidia GeForce RTX 5090 sitting on its packaging

(Image credit: Future / John Loeffler)
  • Slim, dual-slot form factor
  • Better cooling

So there's a significant change to the design of this generation of Nvidia Founders Edition RTX flagship cards, and it's not insubstantial.

Holding the RTX 5090 Founders Edition in your hand, you'll immediately notice two things: first, you can comfortably hold it in one hand thanks to it being a dual-slot card rather than a triple-slot, and second, it's significantly lighter than the RTX 4090.

A big part of this is how Nvidia designed the PCB inside the card. Traditionally, graphics cards have been built with a single PCB that extends from the inner edge of the PC case, down through the PCIe slot, and far enough back to accommodate all of the modules needed for the card. On top of this PCB, you'll have a heatsink with piping from the GPU die itself through a couple of dozen aluminum fins to dissipate heat, with some kind of fan or blower system to push or pull cooler air through the heated fins to carry away the heat from the GPU.

The problem with this setup is that if you have a monolithic PCB, you can only really extend the heatsinks and fans off of the PCB to help cool it since a fan blowing air directly into a plastic wall doesn't do much to help move hot air out of the graphics card.

A split view of the Nvidia GeForce RTX 5090's dual fan passthrough design

(Image credit: Future / John Loeffler)

Nvidia has a genuinely novel innovation on this account, and that's ditching the monolithic PCB that's been a mainstay of graphics cards for 30 years. Instead, the RTX 5090 (and presumably subsequent RTX 50-series GPUs to come), splits the PCB into three parts: the video output interface at the 'front' of the card facing out from the case, the PCIe interface segment of the card, and the main body of the PCB that houses the GPU itself as well as the VRAM modules and other necessary electronics.

This segmented design allows a gap in the front of the card below the fan, so rather than a fan blowing air into an obstruction, it can fully pass over the fins of the GPU's heatsink, substantially improving the thermals.

As a result, Nvidia is able to shrink the width of the card down considerably, moving from a 2.4-inch width to a 1.9-inch width, or a roughly 20% reduction on paper. That said, it feels substantially smaller than its predecessor, and it's definitely a card that won't completely overwhelm your PC case the way the RTX 4090 does.

The 4 8-pin to 16-pin 12VHPWR adapter included with the Nvidia GeForce RTX 5090

(Image credit: Future / John Loeffler)

That said, the obscene power consumption required by this card means that the 8-pin adapter included in the RTX 5090 package is a comical 4-to-1 dongle that pretty much no PSU in anyone's PC case can really accommodate.

Most modular PSUs give you three PCIe 8-pin power connectors at most, so let's just be honest about this setup: you're going to need a new ATX 3.0 PSU of at least 1000W to run this card (its officially recommended PSU is 950W, but just round up, you're going to need it), so make sure you factor that into your budget if you pick this card up.
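For rough PSU budgeting, you can sum worst-case component draws and add headroom for transients. A quick sketch, where every number other than the 575W TDP is a placeholder estimate rather than a measurement from this review:

```python
# Crude PSU sizing: sum worst-case component draw, then add ~30% headroom for
# transient spikes. Non-GPU figures are illustrative guesses, not review data.
loads_w = {
    "RTX 5090": 575,       # TDP from the spec discussion above
    "CPU": 250,            # hypothetical high-end CPU under load
    "Rest of system": 75,  # fans, drives, RAM, motherboard (rough estimate)
}
total = sum(loads_w.values())
print(f"Estimated load: {total}W -> look for a PSU around {total * 1.3:.0f}W")
# Estimated load: 900W -> look for a PSU around 1170W
```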

Otherwise, the look and feel of the card isn't that different from previous generations, except that the front plate where the RTX 5090 branding would have gone is now gone, replaced by a finned shroud that lets air pass through. The RTX 5090 stamp is instead printed on the center panel, similar to how it was done on the Nvidia GeForce RTX 3070 Founders Edition.

As a final touch, the white back-lit GeForce RTX logo and the X strips on the front of the card add a nice RGB-lite touch when powered that doesn't look too gaudy, though dedicated RGB fans might find it rather plain.

  • Design: 4.5 / 5

Nvidia GeForce RTX 5090: Performance

An Nvidia GeForce RTX 5090 slotted into a test bench

(Image credit: Future)
  • Most powerful GPU on the consumer market
  • Substantially faster than RTX 4090
  • Playable 8K gaming
A note on my data

The charts shown below are the most recent test data I have for the cards tested for this review and may change over time as more card results are added and cards are retested. The 'average of all cards tested' includes cards not shown in these charts for readability purposes.

So how does the Nvidia GeForce RTX 5090 stack up against its predecessor, as well as the best 4K graphics cards on the market more broadly?

Very damn well, it turns out, managing to improve performance over the RTX 4090 in some workloads by 50% or more, while leaving everything else pretty much in the dust.

When looked at from 30,000 feet, though, the overall performance gains are respectable gen-on-gen, but they aren't the kind of earth-shattering leap the RTX 4090 made over the Nvidia GeForce RTX 3090.

Starting with synthetic workloads, the RTX 5090 scores anywhere from 48.6% faster to about 6.7% slower than the RTX 4090 in various 3DMark tests, depending on the workload. The only poor performance for the RTX 5090 was in 3DMark Night Raid, a test that both cards so completely overwhelm that the difference here could be down to CPU bottlenecking or other issues that aren't easily identifiable. On every other 3DMark test, though, the RTX 5090 scores 5.6% better or higher, more often than not by 20-35%. In the most recently released test, Steel Nomad, the RTX 5090 is nearly 50% faster than the RTX 4090.

On the compute side of things, the RTX 5090 is up to 34.3% faster in Geekbench 6's OpenCL compute test and 53.9% faster in Vulkan, making it an absolute monster for AI researchers to leverage.

On the creative side, the RTX 5090 is substantially faster in 3D rendering, scoring between 35% and 49.3% faster in my Blender Benchmark 4.30 tests. There's very little difference between the two cards when it comes to video editing though, as they essentially tie in PugetBench for Creators' Adobe Premiere test and in Handbrake 1.7 4K to 1080p encoding.

The latter two results might be down to CPU bottlenecking, as even the RTX 4090 pushes right up against the performance ceiling set by the CPU in a lot of cases.

When it comes to gaming, the RTX 5090 is substantially faster than the RTX 4090, especially at 4K. In non-upscaled 1440p gaming, you're looking at a roughly 18% better average frame rate and a 22.6% better minimum/1% framerate for the RTX 5090. With DLSS 3 upscaling (but no frame generation), you're looking at 23.3% better average and 23% better minimum/1% framerates overall with the RTX 5090 vs the RTX 4090.

With ray tracing turned on and no upscaling, you're getting 26.3% better average framerates and about 23% better minimum/1% framerates, and with upscaling set to balanced (again, no frame generation), you're looking at about 14% better average fps and about 13% better minimum/1% fps for the RTX 5090 against the RTX 4090.

At 4K, however, the faster memory and wider memory bus really make a difference. Without upscaling and ray tracing turned off, you're getting upwards of 200 fps at 4K for the RTX 5090 on average, compared to the RTX 4090's 154 average fps, a nearly 30% increase. The average minimum/1% fps for the RTX 5090 is about 28% faster than the RTX 4090, as well. With DLSS 3 set to balanced, you're looking at a roughly 22% better average framerate overall compared to the RTX 4090, with an 18% better minimum/1% framerate on average as well.
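All of these gen-on-gen figures are simple relative changes; as a quick check on the 4K raster numbers above:

```python
# Gen-on-gen uplift as a percentage: (new - old) / old * 100.
def uplift_pct(new_fps: float, old_fps: float) -> float:
    return (new_fps - old_fps) / old_fps * 100

print(f"{uplift_pct(200, 154):.1f}%")  # 4K average fps from above: ~29.9%, i.e. nearly 30%
```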

With ray tracing and no upscaling, the difference is even more pronounced with the RTX 5090 getting just over 34% faster average framerates compared to the RTX 4090 (with a more modest 7% faster average minimum/1% fps). Turn on balanced DLSS 3 with full ray tracing and you're looking at about 22% faster average fps overall for the RTX 5090, but an incredible 66.2% jump in average minimum/1% fps compared to the RTX 4090 at 4K.

Again, none of this even factors in single frame generation, which can already substantially increase framerates in some games (though with the introduction of some input latency). Once Multi-Frame Generation rolls out at launch, you can expect to see these framerates for the RTX 5090 run substantially higher. Pair that with Nvidia Reflex 2 to help mitigate the input latency issues frame generation can introduce, and the playable performance of the RTX 5090 will only get better with time, and it's starting from a substantial lead right out of the gate.

In the end, the overall baseline performance of the RTX 5090 comes in about 21% better than the RTX 4090, which is what you're really looking for when it comes to a gen-on-gen improvement.

That said, you have to ask whether the performance improvement you do get is worth the enormous increase in power consumption. That 575W TDP isn't a joke. I maxed out at 556W of power at 100% utilization, and I hit 100% fairly often in my testing and while gaming.
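If you want to see where your own card lands, the Nvidia driver exposes board power through the nvidia-smi utility, which you can poll from a script. A minimal sketch, assuming nvidia-smi is on your PATH:

```python
import subprocess

# Poll the GPU's instantaneous board power and utilization via nvidia-smi.
# Sampling granularity and accuracy vary by driver and card.
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=power.draw,utilization.gpu",
     "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
).stdout.strip()
power_w, util_pct = (float(v) for v in out.split(", "))
print(f"Board power: {power_w:.0f}W at {util_pct:.0f}% utilization")
```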

The dual flow-through fan design also does a great job of cooling the GPU, but at the expense of turning the card into a space heater. That 575W of heat needs to go somewhere, and that somewhere is inside your PC case. Make sure you have adequate airflow to vent all that hot air, otherwise everything in your case is going to slowly cook.

As far as performance-per-price goes, this card does slightly better than the RTX 4090 on value for the money, but that's never been a buying factor for this kind of card anyway. You want this card for its performance, plain and simple, and in that regard, it's the best there is.

  • Performance: 5 / 5

Should you buy the Nvidia GeForce RTX 5090?

A masculine hand holding an RTX 5090

(Image credit: Future)

Buy the Nvidia GeForce RTX 5090 if...

You want the best performance possible
From gaming to 3D modeling to AI compute, the RTX 5090 serves up best-in-class performance.

You want to game at 8K
Of all the graphics cards I've tested, the RTX 5090 is so far the only GPU that can realistically game at 8K without compromising on graphics settings.

You really want to flex
This card comes with a lot of bragging rights if you're into the PC gaming scene.

Don't buy it if...

You care about efficiency
At 575W, this card might as well come with a smokestack and a warning from your utility provider about the additional cost of running it.

You're in any way budget-conscious
This card starts off more expensive than most gaming PCs and will only become more so once scalpers get their hands on them. And that's not even factoring in AIB partner cards with extra features that add to the cost.

You have a small form-factor PC
There's been some talk about the new Nvidia GPUs being SFF-friendly, but even though this card is thinner than the RTX 4090, it's just as long, so it'll be hard to fit into a lot of smaller cases.

Also consider

Nvidia GeForce RTX 4090
I mean, honestly, this is the only other card you can compare the RTX 5090 to in terms of performance, so if you're looking for an alternative to the RTX 5090, the RTX 4090 is pretty much it.

Read the full Nvidia GeForce RTX 4090 review

How I tested the Nvidia GeForce RTX 5090

  • I spent about a week and a half with the RTX 5090
  • I used my complete GPU testing suite to analyze the card's performance
  • I tested the card in everyday, gaming, creative, and AI workload usage
Test System Specs

Here are the specs on the system I used for testing:

Motherboard: ASRock Z790i Lightning WiFi
CPU: Intel Core i9-14900K
CPU Cooler: Gigabyte Aorus Waterforce II 360 ICE
RAM: Corsair Dominator DDR5-6600 (2 x 16GB)
SSD: Crucial T705
PSU: Thermaltake Toughpower PF3 1050W Platinum
Case: Praxis Wetbench

I spent about a week and a half testing the Nvidia GeForce RTX 5090, both running synthetic tests as well as using it in my day-to-day PC for both work and gaming.

I used my updated testing suite, which uses industry standard benchmark tools like 3DMark, Geekbench, PugetBench for Creators, and various built-in gaming benchmarks. I used the same test bench setup listed above for testing this card, as well as all of the other cards I tested for comparison purposes.

I've tested and retested dozens of graphics cards for the 20+ graphics card reviews I've written for TechRadar over the last few years, and so I know the ins and outs of these PC components. That's why you can trust my review process to help you make the right buying decision for your next GPU, whether it's the RTX 5090 or any of the other graphics cards I review.

  • Originally reviewed January 2025
Intel Arc B570 review: great value, but overshadowed by the far superior Arc B580
5:00 pm | January 16, 2025

Author: admin | Category: Computers Computing Computing Components Gadgets | Tags: , | Comments: Off

Intel Arc B570: Two-minute review

The Intel Arc B570 is the kind of graphics card I desperately want to love, but my tech-addled PC gaming heart belongs to another.

I'm not talking about the recently-announced Nvidia RTX 50 series GPUs (though we'll see about those in due time). No, I've fallen for the Intel Arc B580, easily one of the best graphics cards on the market thanks to its fantastic 1440p and 1080p gaming performance. And, unfortunately, its price is so good that it's hard to really recommend the Arc B570 in good conscience.

To be fair, the Intel Arc B570's $219 / £219 (around AU$350) MSRP arguably makes it the best cheap graphics card going right now simply by default. The next-cheapest current-gen GPUs (as of January 2025) from AMD (the Radeon RX 7600) and Nvidia (the GeForce RTX 4060) are roughly 20% to 25% more expensive, and it's still $30 / £30 (about AU$90) cheaper than the Arc B580.

But the problem is that despite some impressive specs for a card this cheap, and solid 1080p performance, for just a little bit more you can get a far more future-proofed GPU that will let you game without compromise at a higher 1440p resolution if you go for the Arc B580. Of course, that's assuming you can get that card at its normal retail price and not the jacked-up prices being charged online by profiteering retailers and third-party sellers.

An Intel Arc B570 seen from the back

(Image credit: Future / John Loeffler)

But looking at the Arc B570 strictly on its merits, ignoring any external factors that are subject to change, it's undeniable that the Arc B570 is one of the best 1080p graphics cards you can buy, especially considering its price.

At this price point, you really have to compare the Arc B570 against cards that are several years old, like the Nvidia GeForce GTX 1060, to really put things in perspective. For example, the Nvidia GeForce RTX 3050 had a launch price $30 higher than the Arc B570, and even though I no longer have that card on hand for the head-to-head matchup I'd like, it really wasn't good enough to justify its price. Say what you will about the Arc B570, but in no universe can you say that you're not getting your money's worth with this GPU.

The heartbreak, then, is just that this card is simply overshadowed by its slightly more expensive sibling. If the Intel Arc B570 were priced at $199, it would be walking away with a definitive budget win. Hell, it still sort of does, but with so little separating the B570 and the B580, pretty much every potential buyer is better off borrowing that extra bit of cash from a friend, sibling, parent, or even a stranger, and picking up the more powerful B580.

Intel Arc B570: Price & availability

An Intel Arc B570 on top of its retail packaging

(Image credit: Future / John Loeffler)
  • How much is it? Starting at $219 / £219 (around AU$350)
  • When can you get it? You can get it from January 16, 2025
  • Where is it available? You can get it in the US, UK, and Australia

The Intel Arc B570 goes on sale in the US, UK, and Australia on January 16, 2025, for $219 / £219 (around AU$350).

This puts it at just $30 / £30 (about AU$90) cheaper than the Intel Arc B580 released in December 2024. That said, it is a good deal cheaper than the competing AMD Radeon RX 7600 and Nvidia RTX 4060, both of which run at least 20% more expensive for roughly the same performance.

I'll dig into the performance-per-dollar of this card in a bit, but I can tell you now that it's one of the best you'll find on a modern GPU. Still, it comes in a distant second to the Intel Arc B580, making it a hard card to recommend unless you are seriously strapped for cash or the B580 is being scalped at too high a price.

  • Value: 4.5 / 5

Intel Arc B570: Specs

The top trim of an Intel Arc B570

(Image credit: Future / John Loeffler)
  • 10GB VRAM is a nice-to-have feature
  • Decently-sized memory bus
  • Specs: 4 / 5

Intel Arc B570: Performance

An Intel Arc B570 running on an open test bench

(Image credit: Future / John Loeffler)
  • Great 1080p performance
  • Doable 1440p (within reason)
  • Arc B580 is way better for not a whole lot more money

Ultimately, what matters is performance, and the top-line numbers for the Intel Arc B570 are impressive for a card at its price point, but it is almost exclusively a 1080p graphics card unless you make a lot of compromises for 1440p resolution that frankly aren't going to be worth it in the end.

In terms of creative workloads or AI, this isn't the card for you. I'd simply go for the RTX 4060 if you're really strapped for cash but need something more than a basic cheap gaming GPU.

It also has to be noted that its 1080p gaming performance isn't going to match its more expensive competition on a lot of games, so if you're looking for a graphics card that consistently gets you 60fps at 1080p on max settings without question, you might be better off with some of this card's more expensive competitors.

That said, on average across the several games in my testing suite, including titles like Cyberpunk 2077, F1 2024, Total War: Warhammer III, and others, this card did manage an average 1080p fps of 60, with an average minimum fps of 34.

Of course, it played better in some games than others, and in some games you won't get a playable frame rate at max settings (like Black Myth Wukong), but over the course of all the titles I played, it's more than passable for 1080p, with the occasional playable 1440p experience.

For its price, it's genuinely excellent, especially for getting you a card capable of ray-traced gameplay, but for just a little bit more, you can get a lot better with the B580.

  • Performance: 3.5 / 5

Should you buy the Intel Arc B570?

An Intel Arc B570 being held by a masculine hand

(Image credit: Future / John Loeffler)

Buy the Intel Arc B570 if...

You are on a very tight budget
There aren't a lot of current-gen GPUs available at this price point, and even then, this is the cheapest so far.

You only care about basic 1080p gaming
If you are only looking for a cheap 1080p GPU with some modern extras like ray tracing, this card could be a compelling value at MSRP.

Don't buy it if...

You want to game at 1440p
Despite its extra VRAM and decent memory bus, it just doesn't have the specs for consistent 1440p gaming without some serious compromises.

You have some wiggle room in your budget
If you are even slightly flexible in your budget, the Arc B580 is a much, much better option for not a whole lot more money.

Also consider

Intel Arc B580
OK, so I'm going to be honest, the only other card you should be considering is the Arc B580. If you have any room in your budget, get this card instead. It's so much better for just a little more of an investment.

Read the full Intel Arc B580 review

How I tested the Intel Arc B570

  • I spent about a week with the Intel Arc B570
  • I used it primarily as a gaming GPU with some light creative work
  • I ran the Arc B570 through my revamped testing suite

I tested the Intel Arc B570 using my newly revamped testing suite, including the latest 3DMark tests like Steel Nomad and Solar Bay, as well as the newest gaming benchmarks like Black Myth Wukong and F1 2024.

I used the Arc B570 as my primary GPU on my work PC, using it for basic productivity, creative, and moderate gaming in the office.

I've been testing GPUs for TechRadar for more than two years now, and have extensively benchmarked all of the latest GPUs several times over, so I am well aware of where this card's performance sits amongst its competition as well as how good of a value it is at its price point.

  • Originally reviewed January 2025
Orico O7000 SSD review: high-end PCIe 4.0 storage without the frills
4:00 pm | December 30, 2024

Author: admin | Category: Computers Computing Computing Components Gadgets Storage & Backup | Tags: | Comments: Off

Orico O7000: One-minute review

Although we’re technically in the PCIe 5.0 era of storage, it’s really been a golden age for PCIe 4.0 SSDs thanks to a plethora of choices, such as the Orico O7000, which is positioned as a drive with high-end performance for a midrange price.

This PCIe 4.0 SSD ranges from 512GB to 4TB, and for this review we’re taking the 1TB model for a spin. Rated for 7000MB/s in reads and 6500MB/s in writes, the O7000 isn’t quite at the top-end of PCIe 4.0 storage (which would be the Samsung 990 Pro), but it’s close.

What makes this drive particularly interesting is its current $69 price (about £55/AU$100), which is relatively low nowadays for a drive of this caliber.

Compared to top-end PCIe 4.0 SSDs like the FireCuda 530R from Seagate and the MP600 Pro NH from Corsair, the O7000 is just a shade slower for the most part, and often ties the two drives.

However, when writing lots of data to the O7000 (like if you’re moving all your game installations to it), its performance easily bogs down and becomes extremely slow.

The O7000 1TB also has a weakness in that other brands offer the same SSD hardware under different names, and these drives can sometimes be cheaper, but sales pricing is always a fickle thing.

For now, though, the O7000 is the cheapest option for this hardware, and its performance makes it one of the best values in PCIe 4.0 storage.

Orico O7000: Price & availability

An Orico O7000 SSD on a table with its retail packaging

(Image credit: Future / Matthew Connatser)
  • How much does it cost? From $69 (about £55/AU$100)
  • When is it out? Available now
  • Where can you get it? Available in the US, with UK and Australia availability pending

The O7000 is currently available on Amazon and Newegg, though at the time of writing, Amazon only offers the 1TB model, while Newegg also has the 2TB and 4TB variants in stock. The 1TB model costs $59, the 2TB $93, and the 4TB $196. The 2TB is currently the best deal on a dollar per GB basis, but since SSD prices can easily go up or down, it’s unclear how long this will be true for.
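The dollar-per-gigabyte comparison is easy to reproduce from the prices above:

```python
# Price per gigabyte for the listed O7000 capacities (decimal GB, as drives are sold).
prices_usd = {"1TB": (59, 1000), "2TB": (93, 2000), "4TB": (196, 4000)}
for model, (price, gb) in prices_usd.items():
    print(f"{model}: ${price / gb * 100:.2f} per 100GB")
# 1TB: $5.90, 2TB: $4.65, 4TB: $4.90 per 100GB -- the 2TB is the best deal
```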

At $59 for the 1TB model, it’s significantly cheaper than other flagship PCIe 4.0 drives like Seagate’s FireCuda 530R and Corsair’s MP600 Pro NH, which cost $94 and $84 respectively for the 1TB version. It’s also much cheaper than older PCIe 5.0 SSDs, such as the $169 FireCuda 540 1TB.

Orico O7000: Specs

Orico O7000: Performance

An Orico O7000 SSD slotted into a motherboard

(Image credit: Future / Matthew Connatser)

Overall, the O7000 shows good, but not perfect, performance. The 530R and MP600 Pro NH were faster for the most part, but in nearly every single test the O7000 nipped at their heels. There were even a few instances where the O7000 tied or beat the Seagate and Corsair drives.

A particularly bright spot for the O7000 is its thermal performance and efficiency. It only maxed out at 50 degrees Celsius, far lower than the three other SSDs we tested. That makes the O7000 a decent candidate for laptops and handheld PCs, which often lack good SSD cooling.

Of course, heat is largely a product of power consumption, so the O7000 is a low-power drive too, another reason to choose it for a battery-powered device.

The FireCuda 540 was of course in its own league, but it costs about twice as much and benefits from active cooling, something the O7000 definitely doesn’t need.

However, the O7000’s biggest weakness is in sustained writes. All SSDs suffer from lowered writing performance the less free space they have, but the O7000 can see its performance drop to as low as 150MB/s. However, it’s rare to come across this situation in the real world since it requires tens or hundreds of gigabytes of large files being moved to an SSD.

Orico O7000: Verdict

A hand holding an Orico O7000 SSD

(Image credit: Future / Matthew Connatser)

While the O7000 1TB isn’t perfect, its overall performance is very close to what you get with the FireCuda 530R 2TB and MP600 Pro NH 2TB. With a substantially lower price tag of $69, that makes the O7000 a better deal for the most part. If you’re looking for a new boot drive on a budget or if you just want some fast, secondary storage, the O7000 is a sensible choice.

However, the SSD that the O7000 1TB is based on is also made by other brands, including TeamGroup with its MP44. There’s really no reason to buy the more expensive drive since they’re essentially identical, but that works in Orico’s favor right now since the O7000 is cheaper.

The O7000 2TB is an especially good deal right now, going for just $93, which isn’t much more than lots of 1TB drives. Spending the extra $34 to get double the storage and the same performance makes the O7000 2TB especially appealing versus the 1TB model, though we don’t know how long this will hold since SSD prices can change quite frequently (and have done so as this review was in progress).

Should you buy the Orico O7000?

Buy the Orico O7000 if...

You need fast storage on a budget
The O7000 has nearly top-end performance for a PCIe 4.0 SSD, and should work fine for just about anyone.

You want a cool and efficient SSD
The O7000 consumes very little power, which means longer battery life on mobile devices and low temperatures even without a heatsink.

Don't buy it if...

You need top-end performance period
The O7000 isn’t quite as fast as other PCIe 4.0 SSDs and stumbles in sustained writing.

You can find another SSD with the same hardware for less
There are a few SSDs out there that use the same components, such as TeamGroup’s MP44 and Lexar’s NM790. They’re essentially identical, so if they’re cheaper, just get one of those instead.

How I tested the Orico O7000

Although Intel’s new Core Ultra 200 Series has technically replaced last-gen 14th Gen CPUs, I’m using my LGA 1700 test bench for SSD testing, rather than an LGA 1851 test bench.

This is because SSDs running on Arrow Lake CPUs actually perform significantly worse than on 14th Gen CPUs. We're not entirely sure why this is the case, but in order to show our SSDs' best possible performance, we have to use Intel's last-gen CPUs.

The LGA 1700 test bench is equipped with the Core i9-14900K, ASRock's Z790 Taichi Lite motherboard, 32GB of DDR5 clocked to 5600MHz, and Corsair's H170i iCUE LINK liquid cooler with a 420mm radiator. We also tested the FireCuda 530R, FireCuda 540, and MP600 Pro NH for comparison.

Intel Arc B580 review: A spectacular success for Intel and a gateway to 1440p for gamers on a budget
5:00 pm | December 12, 2024

Author: admin | Category: Computers Computing Computing Components Gadgets | Tags: , , , | Comments: Off

Intel Arc B580: Two-minute review

When I reviewed the Arc A770 and A750, I said that these Alchemist GPUs were impressive first efforts for Intel's Arc range, but not yet at the level that they needed to be to compete with the likes of Nvidia and AMD in discrete graphics.

Well, with the release of the new Intel Arc B580 (2nd-gen Battlemage), there's no doubt that Intel has produced one of the best graphics cards of this generation, and given gamers on a budget an absolute gift just in time for the holidays.

For starters, let's talk about the price of this GPU. At just $249.99 / £249.99 / AU$439, the Arc B580 undercuts both Nvidia's and AMD's budget offerings, the RTX 4060 and RX 7600, while offering substantially better performance, making its value proposition untouchable at this price range.

While I'll dig deeper into the performance in a bit, I'll cut to the chase and point out the simple fact that neither the RTX 4060 nor the RX 7600 can game at 1440p without severely compromising graphics quality. Not only can the B580 perform this feat, it does so brilliantly.

This comes down to some very straightforward spec choices that Intel made with its Battlemage debut that, especially in hindsight, make Nvidia and AMD's respective decisions even more baffling. First, with a VRAM pool of 12GB, the B580 can hold the larger texture files needed for 1440p gaming, whereas the RTX 4060 Ti cannot, due to its 8GB VRAM loadout.

Then there's the B580's wider 192-bit memory interface, compared to the RTX 4060 Ti's and RX 7600 XT's 128-bit. While this might seem like an obscure spec, it's the secret sauce for the B580. This beefier interface allows it to process those larger texture files much faster than its competitors, so this GPU can fully leverage its bigger VRAM pool in a way that Nvidia and AMD's competing cards simply can't, even with larger VRAM configurations.
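The interface math alone tells a lot of the story here; a one-line sketch (simplified, since real cards also differ in memory clock):

```python
# Bandwidth scales linearly with bus width at a given per-pin speed, so a
# 192-bit interface moves 1.5x the data of a 128-bit one, clocks being equal.
b580_bus_bits, competitor_bus_bits = 192, 128
print(f"{b580_bus_bits / competitor_bus_bits:.2f}x relative bandwidth")  # 1.50x
```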

Boiling all this down, you end up with a budget-class GPU that can get you fast 1440p framerates the likes of which we haven't seen since the RTX 3060 Ti.

Even more impressive, in my mind, is that I did not encounter a single game where there was some kind of quirk or hiccup caused by the driver. With the Arc Alchemist cards last year, there were issues with some games not running well because of inadequate driver support, or a game's reliance on an older version of DirectX that the Alchemist GPUs weren't optimized for. I didn't encounter any of those problems this time around. The Intel graphics team's long, hard work on getting Arc's drivers up to par has definitely paid off.

If there's a criticism I can make of this graphics card, it's that its creative performance isn't as good as Nvidia's. But given the entire creative world's reliance on Nvidia's bespoke CUDA instruction set, neither Intel nor AMD were ever really going to be able to compete here.

Fortunately, the Intel Arc B580 is a graphics card for gaming, and for any gamer looking to play at 1440p resolution on the cheap, the B580 is really the only graphics card that can do it, making it the only GPU you should be considering at this price point.

Intel Arc B580: Price & availability

An Intel Arc B580 resting upright on its retail packaging

(Image credit: Future / John Loeffler)

The Intel Arc B580 is available in the US, UK, and Australia, and has been from December 13, 2024, starting at $249.99, £249.99, and AU$439 respectively. Third-party graphics card partners like Acer, ASRock, and others will have their own variants of the B580, and their prices may be higher, depending on the card.

The closest competition for the Arc B580 in terms of price are the Nvidia RTX 4060 and AMD RX 7600, both of which have a $20-$50 higher MSRP. And while Nvidia and AMD are preparing to roll out their next-gen graphics cards starting next month, it will still be a few months after the initial flagship launches before either company's budget offerings are announced. So, the B580 is the only current-gen GPU available for under $250 / £250 / AU$450 at the moment, and will likely remain so for many months to come.

  • Value: 5/5

Intel Arc B580: Specifications

The video output ports on the Intel Arc B580

(Image credit: Future / John Loeffler)

Intel Arc B580: Architecture & features

A masculine hand holding up the Intel Arc B580

(Image credit: Future / John Loeffler)

The Intel Arc B580 is the first discrete GPU from Intel based on its new Xe2 graphics architecture, codenamed Battlemage, and there are a lot of low-level changes over the previous-gen Intel Arc Alchemist. Many of these are small tweaks to the architectural design, such as the move from SIMD32 to SIMD16 instructions, but when taken together, all of these small changes add up to a major overhaul of the GPU.

That, in addition to using TSMC's 5nm process, means that even though the GPU itself has become physically smaller in just about every measure, it's much more powerful.

The B580 has a roughly 17% reduction in compute units from the Arc A580 and about 10% fewer transistors, but Intel says that its various architectural changes produce about 70% better performance per compute unit (or Xe core, as Intel calls it). I haven't tested or reviewed the Intel Arc A580, so I can't say for certain if that claim holds up, but there has definitely been a major performance gain gen-on-gen based on my experience with the higher-end Arc Alchemist cards. We also can't ignore the substantially faster boost clock of 2,850MHz, up from 1,700MHz for the A580.

Outside of the GPU architecture, there is also a smaller memory bus, with the A580's 256-bit interface dropping down to 192-bit for the B580, but the B580 features a 50% increase in its video memory pool, as well as a faster memory clock.

  • Specs & features: 4.5 / 5

Intel Arc B580: Design

The brand marking on the Intel Arc B580

(Image credit: Future / John Loeffler)

The Intel Arc B580 Limited Edition reference card is what you'd call the 'base' version of this GPU, but don't call it basic.

Despite its all-black-with-white-accent-lettering appearance, this is a good-looking graphics card, much like the initial Arc Alchemist GPUs before it, thanks to its matte, textured black shroud, dual-fan cooling, and rather understated aesthetic.

In a PC component world full of ostentatious, overly aggressive and flashy designs, there is something almost respectable about a graphics card in 2024 that presents itself without gimmicks, almost daring you to underestimate its capabilities due to its lack of RGB.

That said, there is one noticeable difference with this graphics card's design: the open 'window' over the internal heatsink to help with airflow and cooling. Unfortunately, the HWInfo64 utility I use to measure temperature and power draw for the GPUs I review couldn't read the Arc B580 during testing, so I can't tell you how much of a difference this window makes compared to something like the Intel Arc A750—but it certainly won't hurt its thermals.

Beyond that, the card also sports a single 8-pin power connector, in keeping with its 190W TBP, so you can pretty much guarantee that if you already have a discrete GPU in your system, you'll have the available power cables from your PSU required to use this GPU.

It's also not a very large graphics card, though at about 10.7 inches / 272mm it is longer than some RTX 4060 and RX 7600 GPUs (third-party variants might be more compact). In any case, it's a dual-slot card, so it'll fit in place as an upgrade for just about any graphics card currently in your PC.

  • Design: 4.5 / 5

Intel Arc B580: Performance

An Intel Arc B580 running on a test bench

(Image credit: Future / John Loeffler)

OK, so now we come to why I am absolutely in love with this graphics card: performance.

Unfortunately, I don't have an Intel Arc A580 card on hand to compare this GPU to, so I can't directly measure how the B580 stacks up to its predecessor. But I can compare the B580 to its current competition, as well as the Intel Arc A750, which prior to this release was selling at, or somewhat below, the price of this graphics card, and has comparable specs.

In terms of pure synthetic performance, the Arc B580 comes in second to the Nvidia RTX 4060 Ti, performing about 10% slower overall. That said, there were some tests, like 3DMark Fire Strike Ultra, Wild Life Extreme (and Wild Life Extreme Unlimited), and Time Spy Extreme where the extra VRAM allowed the Arc B580 to pull ahead of the much more expensive Nvidia RTX 4060 Ti. The Arc B580 did manage to outperform the RTX 4060 by about 12%, however.

Creative workloads aren't the Arc B580's strongest area, with Nvidia's RTX 4060 and RTX 4060 Ti performing substantially better. This might change once the PugetBench for Creators Photoshop benchmark gets updated, however, as it crashed during every single test I ran, regardless of which graphics card I was using.

Notably, the Intel Arc B580 encoded 4K video to 1080p at a faster rate using Intel's H.264 encoder in Handbrake 1.6.1 than all of the other cards tested using Nvidia's or AMD's H.264 options, so this is something for game streamers to consider if they're looking for a card to process their video on the fly.

But what really matters with this GPU is gaming, and if you compare this graphics card's 1080p performance to the competition, you'll have to go with the nearly 40% more expensive Nvidia RTX 4060 Ti to beat it, and even then it's not a crushing defeat for Intel. While I found the Arc B580 to be about 17% slower than the RTX 4060 Ti on average at 1080p (with no ray tracing or upscaling), it's still hitting 82 FPS on average overall, and it actually has a slightly higher minimum/1% FPS performance of just under 60 FPS.

The AMD RX 7600 XT, Intel Arc A750, and Nvidia RTX 4060 don't even come close to reaching these kinds of numbers, with the Arc B580 scoring a roughly 30% faster average 1080p FPS and an incredible 52% faster minimum/1% FPS advantage over the Nvidia RTX 4060, which comes in a very distant third place among the five GPUs being tested. All in all, it's an impressive performance from the Intel Battlemage graphics card.

Also worth noting is that the Intel Arc B580's ray-tracing performance is noticeably better than AMD's, and not that far behind Nvidia's, though its upscaling performance lags a bit behind AMD and Nvidia at 1080p.

Even more impressive, though, is this card's 1440p performance.

Typically, if you're going to buy any 1440p GPU, let alone the best 1440p graphics card, you should expect to pay at least $400-$500 (about £320-£400 / AU$600-AU$750). And to really qualify as a 1440p GPU, you need to hit an average of 60 FPS overall, with an average FPS floor of about 40 FPS. Anything less than that, and you're going to have an uneven experience game-to-game.
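Expressed as a rule, with the thresholds taken straight from the paragraph above (the sample figures below are illustrative, not review data):

```python
# The review's 1440p bar: >= 60 fps average overall and >= 40 fps minimum/1% floor.
def qualifies_for_1440p(avg_fps: float, min_fps: float) -> bool:
    return avg_fps >= 60 and min_fps >= 40

print(qualifies_for_1440p(72, 48))  # True: comfortably clears the bar
print(qualifies_for_1440p(58, 41))  # False: the average falls short
```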

In this regard, the only two graphics cards I tested that qualify are the Nvidia RTX 4060 Ti and the Intel Arc B580, and they are very close to each other in terms of 1440p performance. (I can give an honorable mention to the Nvidia RTX 4060, which almost got there, but not quite).

While Nvidia has certain built-in advantages owing to its status as the premier GPU brand (so pretty much any game is optimized for Nvidia hardware by default), at 1440p it only barely ekes out a win over the Intel Arc B580. And that's ultimately down to its stronger native ray-tracing performance, a scenario which pretty much no one opts for. If you're going to use ray tracing, you're going to use upscaling, and in that situation, the RTX 4060 Ti and Arc B580 are effectively tied at 1440p.

And this 1440p performance in particular is why I'm so enthusiastic about this graphics card. While this is the performance section of the review, I can't help but talk about the value that this card represents for gamers—especially the growing number of 1440p-aspiring gamers out there.

Prior to the Intel Arc B580, gaming at 1440p—which is the PC gaming sweet spot; believe me, I've extensively tested nearly every GPU of the past four years at 1440p—was something reserved for the petit bourgeois of PC gamers. These are the folks not rich enough to really go in for the best 4K graphics cards, but they've got enough money to buy a 1440p monitor and a graphics card powerful enough to drive it.

This used to mean something approaching a grand just for these two items alone, locking a lot of gamers into incremental 1080p advances for two successive generations. No more.

Now, with an entry-level 1440p monitor coming in under $300 / £300 / AU$450, it's entirely possible to upgrade your rig for 1440p gaming for about $500 / £500 / AU$750 with this specific graphics card (and only this graphics card), which is absolutely doable for a hell of a lot of gamers out there who are still languishing at 1080p.

Ultimately, this, more than anything, raises the Intel Arc B580 into S-Tier for me, even though Nvidia's $399.99 RTX 4060 Ti GPU gets slightly better performance. The Nvidia RTX 4060 Ti just doesn't offer this kind of value for the vast majority of gamers out there, and even with its improved performance since its launch, the 4060 Ti is still very hard to recommend.

The Nvidia RTX 4060, meanwhile, can't keep up with the B580 despite being 20% more expensive. And with the AMD RX 7600 XT, laden with its $329.99 MSRP (about £250 / AU$480 RRP), falling noticeably behind the B580, the RX 7600 (which I haven't had a chance to retest yet) doesn't stand a chance (and has a slightly more expensive MSRP).

And, it has to be emphasized, I experienced none of the driver issues with the Intel Arc B580 that I did when I originally reviewed the Intel Arc A750 and Arc A770. Every game I tested ran perfectly well, even if something like Black Myth Wukong ran much better on the two Nvidia cards than it did on Intel's GPUs. Tweak some settings and you'll be good to go.

This was something that just wasn't the case with the previous-gen Arc graphics cards at launch, and it truly held Intel back at the time. In one of my Intel Arc Alchemist reviews, I compared that generation of graphics cards to fantastic journeyman efforts that were good, but maybe not ready to be put out on the show floor. No more. Intel has absolutely graduated to full GPU maker status, and has done so with a card more affordable than the cheapest graphics cards its competition has to offer.

Simply put, for a lot of cash-strapped gamers out there, the Intel Arc B580's performance at this price is nothing short of a miracle, and it makes me question how Intel of all companies was able to pull this off while AMD and Nvidia have not.

Even if you don't buy an Intel Arc B580, give Intel its due for introducing this kind of competition into the graphics card market. If Intel can keep this up for the B570, and hopefully the B770 and B750, then Nvidia and AMD will have no choice but to rein in their price inflation with the next-gen cards they plan to offer next year, making it a win-win for every gamer looking to upgrade.

  • Performance: 4.5 / 5

Intel Arc B580: Should you buy it?

A masculine hand holding an Intel Arc B580

(Image credit: Future / John Loeffler)

Buy the Intel Arc B580 if...

You want an extremely affordable 1440p graphics card
A 1440p graphics card can be quite expensive, but the Intel Arc B580 is incredibly affordable.

You're looking for great gaming performance
The Intel Arc B580 delivers incredible framerates for the price.

Don't buy it if...

You're looking for a budget creative GPU
While the B580 isn't terrible, if you're looking for a GPU for creative work, there are better cards out there.

You want a cheap GPU for AI workloads
The Intel Arc B580 might have dedicated AI hardware, but it still lags behind Nvidia by a good amount.

Also consider

Nvidia GeForce RTX 4060
The Nvidia RTX 4060 is a better option for a lot of creative tasks on a budget, though its gaming performance isn't as strong despite the higher price.

Read the full Nvidia GeForce RTX 4060 review

Nvidia GeForce RTX 4060 Ti
If you want a strong 1080p and 1440p gaming GPU, but also need some muscle for creative or machine learning/AI workloads, this card is what you'll want, so long as you're willing to pay the extra premium in the price.

Read the full Nvidia GeForce RTX 4060 Ti review

How I tested the Intel Arc B580

The backplate of the Intel Arc B580

(Image credit: Future / John Loeffler)
  • I tested the Intel Arc B580 for about three weeks
  • I used my updated suite of graphics card benchmark tests
  • I used the Arc B580 as my primary work GPU for creative workloads like Adobe Photoshop, as well as some in-depth game testing
Test System Specs

Here are the specs on the system I used for testing:

Motherboard: ASRock Z790i Lightning WiFi
CPU: Intel Core i9-14900K
CPU Cooler: Gigabyte Aorus Waterforce II 360 ICE
RAM: Corsair Dominator DDR5-6600 (2 x 16GB)
SSD: Crucial T705
PSU: Thermaltake Toughpower PF3 1050W Platinum
Case: Praxis Wetbench

Over the course of about three weeks, I used the Intel Arc B580 as my primary workstation GPU when I wasn't actively benchmarking it.

This included using the graphics card for various creative workloads like Adobe Photoshop and light video encoding work.

I also used the B580 for some in-depth game testing, including titles like Black Myth Wukong, Satisfactory, and other recently released games.

I've been doing graphics card reviews for TechRadar for more than two years now, and I've done extensive GPU testing previous to that on a personal basis as a lifelong PC gamer. In addition, my computer science coursework for my Master's degree utilized GPUs very heavily for machine learning and other computational workloads, and as a result, I know my way around every aspect of a GPU. As such, you can rest assured that my testing process is both thorough and sound.

  • Originally reviewed December 2024
AMD Ryzen 7 9800X3D review: a gaming dynamo with new, unexpected surprises
8:24 pm | November 11, 2024

Author: admin | Category: Computers Computing Computing Components Gadgets | Tags: , , | Comments: Off

AMD Ryzen 7 9800X3D: Two-minute review

This generation of processors has been a mixed bag at best (and disappointing at worst), so it makes sense that Team Red would go all out to ensure the AMD Ryzen 7 9800X3D delivered something that exceeded expectations.

In that regard, the 9800X3D is a smashing success, delivering substantially better gaming performance than the AMD Ryzen 7 7800X3D that it replaces, though at a slightly higher $479 / £449.99 (about AU$700) price point. For gamers though, most will happily pay a bit more for a roughly 15% increase in gaming performance on average.

The chip isn't without faults, though. It isn't the absolute best processor for gaming in terms of framerates (that title belongs to the Intel Core i9-14900K, averaged out over several games), but where the last-gen Intel flagship simply threw raw wattage at the problem to get to the top, AMD's latest gaming processor uses substantially less power to come within 6% of the 14900K's overall gaming performance. That difference is so close to the margin of variance between test setups that if I reran all my tests next week, the 9800X3D might beat it outright.

But, honestly, it doesn't need to do that. Intel's 14900K is overkill for anyone not running creative workloads like video editing, and the power cost is simply too high to justify an extra 6% of overall gaming performance in synthetic tests. The Ryzen 7 9800X3D, meanwhile, will get you effectively identical real-world performance at a lower cost, both in MSRP terms and in reduced power consumption, plus indirect savings like not needing to shell out for a 360mm AIO cooler to get the most out of the chip.

For that, the Ryzen 7 9800X3D is indisputably the best processor for gaming you can buy right now, and it cements 3D V-Cache as the second most impressive innovation in gaming hardware after AI upscaling, and one that Intel simply doesn't have an answer for.

AMD Ryzen 7 9800X3D: Price & availability

An AMD Ryzen 7 9800X3D in its retail packaging

(Image credit: Future / John Loeffler)
  • How much is it? Its MSRP is $479 / £449.99 (about AU$700)
  • When is it out? It went on sale November 7, 2024
  • Where can you get it? You can get it in the US, UK, and Australia

The AMD Ryzen 7 9800X3D is available now in the US, UK, and Australia for $479 / £449.99 / AU$799, respectively.

This is a bump up from the price of the Ryzen 7 7800X3D it replaces, which launched at $449 / £439 / AU$779. I would have loved to see the price remain the same, of course, but the last-gen chip wasn't exactly a cheap processor to begin with, and both chips are very much targeted at an enthusiast market where the price bump here isn't exactly going to be a dealbreaker—so long as the performance increase justifies the bump in price.

In terms of Intel's competing offerings right now, on the performance side, the Intel Core Ultra 9 285K (and Intel Core i9-14900K, for that matter) is $110 / £100 / AU$300 more expensive to get the same kind of gaming performance. As for in-class silicon, the 9800X3D is about 15-20% more expensive than the competing Intel Core Ultra 7 265K, and is roughly 33% more expensive than the AMD Ryzen 7 9700X.

Essentially, the 9800X3D is a specialist chip for gamers, and while it isn't a performance slouch, at this price it's really only for PC gamers who want the best overall gaming processor and don't much care about stellar performance elsewhere. For those who need more than just a gaming chip, other options from AMD and Intel will be a better fit for the price.

  • Value: 3.5 / 5

AMD Ryzen 7 9800X3D: Specs

AMD Ryzen 7 9800X3D: Features & chipset

A mockup of the design of AMD's 2nd-generation 3D V-cache

(Image credit: AMD)

The fundamental specs of the 9800X3D aren't much different from the 7800X3D. They both sport the same 8-core/16-thread setup as the AMD Ryzen 7 9700X, but like the 7800X3D, the 9800X3D has an additional 64MB L3 cache while the Ryzen 7 9700X only has 32MB L3 cache.

This 3D V-Cache in the Ryzen 7 9800X3D has been redesigned from the previous two generations of AMD X3D chips. This second-generation 3D V-Cache, as AMD calls it, fundamentally changes how the 3D V-Cache die interfaces with the main processing die, which addresses some of the main complaints of the previous two generations of X3D chips.

In the first iteration of 3D V-Cache, the extra cache die was stacked on top of the main processing cores, but with 2nd-generation V-Cache, the extra cache die is underneath the main die, leaving the main processing cores free to directly interface with the CPU cooler.

This is a big deal, since the processing cores are where all the heat is being generated, so having an extra layer of silicon between it and the CPU cooler had a lot of implications for what the chip could do. Thermals had to be carefully managed, so clock speeds had to be kept in check and there was no ability to overclock the chip.

By moving the 3D V-Cache die underneath the main processor core complex, the thermal restraints around clock speeds and voltage no longer apply, so the 9800X3D is the first 3D V-cache chip to feature full overclocking support, allowing precise voltage controls at the same voltage limits as the rest of the Ryzen 9000-series lineup.

Compared to the AMD Ryzen 7 7800X3D, then, the 9800X3D benefits from noticeably faster base clock and boost clock speeds out of the box, and overclockers can now tinker with their CPUs without too much concern that they'll brick the chip (though with overclocking, that is always possible and can void your warranty, so use caution).

Beyond that, the only major change from the previous generation is faster DDR5 memory support, from 5200MHz with the 7800X3D to 5600MHz with the 9800X3D, though both chips support AMD EXPO memory overclocking for even faster memory speeds.

  • Features: 4 / 5

AMD Ryzen 7 9800X3D: Performance

At the end of the day though, all that fancy new tech wouldn't amount to much if the chip's performance didn't deliver, and thankfully, it does - though not universally.

In the synthetic benchmarks, the Ryzen 7 9800X3D showed very strong single-core performance on par with the rest of the Ryzen 9000-series lineup. The Ryzen 7 7800X3D, meanwhile, lags behind its Ryzen 7000 siblings noticeably, owing to the need to control thermals by limiting clock speeds. The Ryzen 9800X3D does not have this problem. Likewise, its multi-core performance is also unconstrained, running ahead of the Ryzen 7 9700X across the board.

On the creative front, this is generally not going to be a chip for creatives to concern themselves with, though there is one exception. If you're a photographer or graphic designer who does a lot of work in Adobe Photoshop or its alternatives, the Ryzen 7 9800X3D's extra cache is going to be a serious benefit for your workflows, beating out even the Intel Core i9-14900K in PugetBench for Adobe Photoshop by a few hundred points.

For everyone else, though, this chip is not going to do much for you.

On the gaming side, this is where the 9800X3D really shows off, though there's a bit of a caveat to that. In games where the main CPU bottleneck is game logic, such as Total War: Warhammer III or Civilization VI, the extra 3D V-Cache isn't necessarily going to help your game performance. In that instance, you're going to want something with the fastest clocks possible to plow through all those AI decision trees or physics calculations before a game frame is even drawn.

As such, Intel's last-gen (and even current-gen) chips have an advantage in some games like Returnal (where complex bullet and geometry physics are the main CPU workload) or Total War: Warhammer III (where a lot of individual actors need to have their logic calculated quickly) because these gaming workloads benefit from faster clock speeds.

Where 3D V-Cache really benefits gaming is when there's data being communicated from the CPU to the GPU, like texture files or model geometry, and that additional cache memory can retain these smaller-but-not-tiny files in the fastest possible memory that can hold it. This mitigates the latency introduced when drawing a new game frame when the CPU has to go back to RAM to fetch a file because it didn't already have it in its much closer cache memory.

Games like F1 2023 and Tiny Tina's Wonderlands benefited mightily from the extra available cache. In the case of the former, the Ryzen 7 9800X3D just wallops the Intel Core i9-14900K, and in the case of the latter, it runs a very close second.

Taken all together, the Intel Core i9-14900K has a slight advantage given the mix of games I used to test these chips, but for most gamers, the odds are good that what you're really looking for is a processor that works best with your graphics card most of the time, and in this case, that'll be the Ryzen 7 9800X3D.

Overall, then, with performance that comes in neck-and-neck with the best Intel processors in gaming workloads on average, the Ryzen 7 9800X3D would already be an incredible chip.

But I simply can't get over the fact that the 9800X3D can do this with just 53% of the power of the Core i9-14900K. Add to that the Ryzen 7 9800X3D's impressive single- and multi-core performance, surprisingly great Photoshop performance, and gen-on-gen performance gains at very little power or monetary cost, and the Ryzen 7 9800X3D is easily one of the best AMD processors ever made.
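Put numbers on that and the efficiency gap is stark; using the review's own figures of roughly 94% of the performance at 53% of the power:

```python
# Relative gaming performance-per-watt from the figures above: ~6% slower
# overall (0.94x) at ~53% of the Core i9-14900K's power draw.
rel_perf, rel_power = 0.94, 0.53
print(f"{rel_perf / rel_power:.2f}x the 14900K's performance per watt")  # ~1.77x
```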

  • Performance: 5 / 5

Should you buy the AMD Ryzen 7 9800X3D?

An AMD Ryzen 7 9800X3D in a masculine hand

(Image credit: Future / John Loeffler)

Buy the AMD Ryzen 7 9800X3D if...

On balance, the Ryzen 7 9800X3D is as good a gaming processor as you'll ever need.

Unlike its predecessor, the Ryzen 7 9800X3D can keep up with its peer class in general performance as well, not just gaming.

Don't buy it if...

If you're looking for more of a general-purpose processor, this chip isn't really for you.

The Ryzen 7 9800X3D isn't cheap, and for those on a budget, there are good processors out there that will get the job done.

Also consider

The Intel Core i7-14700K is still my favorite processor for its incredible performance at an accessible price.

Read the full Intel Core i7-14700K review

  • Originally reviewed November 2024
Intel Core Ultra 9 285K and Intel Core Ultra 5 245K Review
6:00 pm | October 24, 2024

Author: admin | Category: Computers Computing Computing Components Gadgets | Tags: , , | Comments: Off

Intel Core Ultra 9 285K & Intel Core Ultra 5 245K: One-minute review

An Intel Core Ultra 9 processor in its retail packaging

(Image credit: Future / John Loeffler)

I've had a couple of pre-briefings with Intel over the past couple of months about Intel Arrow Lake, so I can't say I'm surprised by the Intel Core Ultra 9 285K and Intel Core Ultra 5 245K, but that doesn't mean enthusiasts are going to be any less disappointed with what we got in the end.

Both Core Ultra chips effectively match the performance of the chips they replace. I've been saying for a while now that we have to stop treating performance as the only metric that matters, but the efficiency gains these chips offer aren't substantial enough to merit the investment if you're rocking a 13th-gen Intel chip or better.

The new chips do come with some architecture changes worth noting, though, and they aren't all useless for consumers. For starters, the chips come equipped with an NPU, the first Intel desktop processors to do so, and the new Intel Arc integrated GPU offers improved graphics capabilities that will make a real difference for some AIO (all-in-one) PCs.

However, these additions are largely unnecessary for just about any gamer or content creator out there, since most desktops will have a discrete graphics card that runs AI circles around the NPU in these chips and makes the Intel Arc iGPU pretty much a non-factor in anything but the most budget gaming PC from a couple of years ago.

For some folks, unfortunately, Intel Arrow Lake misses the target they wanted it to hit, and with strong competition from AMD’s Ryzen 9000-series, these aren't the best processors for gaming or content creation. But, Intel has to start somewhere as it shifts to a new platform, and it managed to produce a very different kind of chip over its predecessors without giving up too much on the performance front, which is no easy feat.

Ultimately, they're perfectly fine chips if you're buying them in a prebuilt PC, or if you're coming in from 11th-gen Intel or older (or making the move from AMD), since you'll have to buy all-new kit anyway; you might as well set yourself up for Nova Lake next year. But anyone with a Raptor Lake chip isn't going to see any real benefit from these, so they're better off waiting for Nova Lake to make the jump.

Intel Core Ultra 9 285K & Intel Core Ultra 5 245K: Price & availability

An Intel Core Ultra 5 processor in its retail packaging

(Image credit: Future / John Loeffler)

The Intel Core Ultra 9 285K and Intel Core Ultra 5 245K are available now in the US, UK, and Australia, priced at $589 / £548.99 / AU$1,099 and $309 / £289.99 / AU$589, respectively.

While the prices for these two chips stay the same or come in slightly lower than their predecessors, which is good, there are a bunch of added costs to upgrade to these chips... which is bad. First, they require the new LGA 1851 socket, so you're going to have to buy a new motherboard in order to use them. They also don't support DDR4 RAM, so you're likely going to have to buy new DDR5 RAM as well.

The LGA 1851 socket does take the same CPU coolers as an LGA 1700 socket, though, so if you have a 12th-gen or better processor, at least your cooler will fit.

Against AMD’s latest, the Core Ultra 9 285K is better priced than AMD’s flagship Ryzen 9 9950X, but more expensive than the Ryzen 9 9900X. The Core Ultra 5 245K is slightly more expensive than AMD’s competing Ryzen 5 9600X.

  • Value: 3 / 5

Intel Core Ultra 9 285K & Intel Core Ultra 5 245K: Specs

An Intel Core Ultra 5 processor slotted into a motherboard

(Image credit: Future / John Loeffler)

Intel Core Ultra 9 285K & Intel Core Ultra 5 245K: Chipset & features

The Intel Core Ultra 9 285K and Intel Core Ultra 5 245K are newly architected desktop processors, powered by the same Lion Cove P-cores and Skymont E-cores that debuted in Intel's Lunar Lake laptop chips earlier this year.

In its tile-based design, Intel Arrow Lake is essentially Intel Meteor Lake for desktops, and it features the same 13 TOPS Intel NPU 3 neural processor as Meteor Lake, as well as the same Intel Arc Alchemist integrated GPU with four Xe cores (including four ray tracing cores) as its laptop cousin.

Compared to the Core i9-14900K and Core i5-14600K respectively, the max clock speeds of the Core Ultra 9 285K and Core Ultra 5 245K are slightly lower on the performance cores (though with higher base frequencies) and higher across the board on the efficiency cores.

The maximum supported RAM is unchanged at 192GB. The Core Ultra chips drop DDR4 support entirely, but in exchange they support faster DDR5 memory at up to 6,400MT/s.
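
For a sense of what that memory speed means, here's a quick peak-bandwidth calculation; the dual-channel, 64-bit-per-channel configuration is an assumption typical of desktop platforms, not an Intel-quoted figure.

```python
# Peak theoretical DDR5-6400 bandwidth. Dual-channel operation with a
# 64-bit (8-byte) bus per channel is assumed, as is typical on desktop.

transfers_per_second = 6_400_000_000  # 6,400MT/s
bytes_per_transfer = 8                # 64-bit channel
channels = 2

peak_gb_s = transfers_per_second * bytes_per_transfer * channels / 1e9
print(f"Peak theoretical bandwidth: {peak_gb_s:.1f} GB/s")  # 102.4 GB/s
```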

Other than that, the TDP of the two Core Ultra chips is essentially unchanged from the 14th-gen chips they're replacing, but they do have a 5°C higher TjMax (Tjunction max, the maximum temperature a processor can hit before it lowers performance to prevent overheating), so the chips won't start to throttle until they hit 105°C.
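
Conceptually, the throttling behavior TjMax governs looks something like the simplified loop below. This is a hypothetical sketch of the general mechanism, not Intel's actual firmware logic; the function name, step size, and recovery margin are all invented for illustration.

```python
# Hypothetical sketch of TjMax-driven throttling. Real CPUs handle this in
# hardware/firmware with far finer-grained control; the 100MHz step and
# 5C recovery margin here are invented for illustration.

TJ_MAX_C = 105  # Arrow Lake's throttle point, per Intel's spec

def adjust_clock(temp_c: float, clock_mhz: int, max_clock_mhz: int) -> int:
    """Back off clocks at TjMax; restore headroom once temperatures fall."""
    if temp_c >= TJ_MAX_C:
        return max(clock_mhz - 100, 800)            # shed 100MHz, floor at 800MHz
    if temp_c < TJ_MAX_C - 5:
        return min(clock_mhz + 100, max_clock_mhz)  # claw clocks back gradually
    return clock_mhz                                # hold steady near the limit

print(adjust_clock(106.0, 5700, 5700))  # 5600: over TjMax, so clocks drop
```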

  • Features: 3.5 / 5

Intel Core Ultra 9 285K & Intel Core Ultra 5 245K: Performance

Intel Core Ultra 5 processor in a motherboard

(Image credit: Future / John Loeffler)

Well, we’ve finally come to the performance segment of the review, and I wish I had better news for you, but most of you will be disappointed.

Starting with synthetic performance, the Core Ultra 9 285K is a very mixed bag vis-a-vis the Core i9-14900K and AMD Ryzen 9 9950X and Ryzen 9 9900X.

In some tests like Geekbench 6.2, the Core Ultra 9 outperforms the 14900K in single-core performance by about 8%, only to lose out by about the same in Cinebench R23’s single-core benchmark. Meanwhile, in Cinebench R23’s multi-core performance, the Core Ultra 9 285K comes in about 12% faster than the 14900K and is essentially tied with the AMD Ryzen 9 9950X.

The Core Ultra 5 245K, meanwhile, is effectively even with the Core i5-14600K, but fares much better in PCMark 10’s Home CPU benchmark, showing a roughly 14.5% performance boost over the 14600K and a 5.6% better showing than the AMD Ryzen 5 9600X.

In terms of average creative performance, the Core Ultra 9 285K does slightly better than the 14900K but slightly worse than the Ryzen 9 9950X — it’s substantially better than the Ryzen 9 9900X, on average, however.

The Core Ultra 5 245K, meanwhile, does slightly worse on average than the Core i5-14600K, but comes out nearly 23% better than the Ryzen 5 9600X.

The gaming performance of the Core Ultra chips was easily the biggest disappointment, however, and is where these two chips really falter against Raptor Lake Refresh.

The Core Ultra 9 285K came in about 14% slower in gaming performance than the Core i9-14900K (though about 7-8% better than the Ryzen 9 9950X and Ryzen 9 9900X).

The Core Ultra 5 245K, meanwhile, came in about 9% slower than the i5-14600K, and only about 4% faster than the Ryzen 5 9600X.

Needless to say, if you’re looking for the best processor for gaming, you’ll want to look at the AMD Ryzen 7 7800X3D or wait to see what the upcoming AMD Ryzen 7 9800X3D does later this year.

When all the scores are tabulated and the final averages calculated, the Core Ultra 9 285K shows slightly better multi-core performance, slightly lower single-core performance, slightly better creative performance, and much worse gaming performance against its predecessor.

The Core Ultra 5 245K is generally slower for just about everything compared to the Core i5-14600K, though it does have much better productivity performance, so this will make a great chip for affordable AIO PCs without discrete graphics.

The real disappointment with Arrow Lake, though, lies with its energy efficiency... or lack thereof. Most people don’t even need the performance of the Intel Core i9-14900K or even the Intel Core i5-14600K, so I’d be fine with lower performance if it meant that there was much less power draw, but the Core Ultra 9 285K and Core Ultra 5 245K max out at 90.5% and 93.3% of the power of their predecessors, respectively.
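
Put in rough wattage terms, the math looks like this. The 90.5% and 93.3% ratios come from my testing; the 253W and 181W baselines are Intel's rated maximum turbo power for the predecessors, standing in here for measured draw, which will vary by system.

```python
# Implied peak power draw. The ratios are from testing; the baseline watts
# are Intel's rated maximum turbo power for each predecessor, used here as
# an assumed stand-in for measured draw.

chips = [
    ("Core Ultra 9 285K", "Core i9-14900K", 253, 0.905),
    ("Core Ultra 5 245K", "Core i5-14600K", 181, 0.933),
]

for new_chip, old_chip, old_watts, ratio in chips:
    print(f"{new_chip}: ~{old_watts * ratio:.0f}W vs {old_chip} at {old_watts}W")
```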

That’s still much too high, and at that point, you might as well just stick with Raptor Lake Refresh and undervolt the CPU.

Ultimately, given the significantly higher cost of making the switch to these processors from the LGA 1700 chips, the performance and efficiency just don’t make these compelling purchases on their own.

If you’re shopping for the best prebuilt gaming PC though, though, I won’t be too worried about picking between one with a 14th-gen chip or these new Core Ultras. You’re not going to notice the difference.

  • Performance: 3.5 / 5

Intel Core Ultra 9 285K & Intel Core Ultra 5 245K: Should you buy it?

Buy the Intel Core Ultra 9 285K or Intel Core Ultra 5 245K if...

If you haven't made the leap to the latest Intel processors, you're going to have to buy all new stuff anyway, so you might as well go for these chips and future-proof your PC for Nova Lake next year.

While the efficiency gains on these two chips aren't huge, they are more efficient, which is definitely a good thing.

Don't buy them if...

Running these chips is going to require a new motherboard at least, and likely will require you to buy new RAM as well, making these chips a substantial investment.

While the performance of these chips is great in absolute terms, they aren't any better than their predecessors overall, and they're substantially worse for gaming.

Intel Core Ultra 9 285K & Intel Core Ultra 5 245K: Also Consider

The Intel Core i7-14700K is still my pick for the best processor for most people thanks to its strong performance and accessible pricing.

Read the full Intel Core i7-14700K review

If you're looking for the best processor for gaming, then this is the processor you need to buy, at least until its successor comes out.

Read the full AMD Ryzen 7 7800X3D review

How I tested the Intel Core Ultra 9 285K and Intel Core Ultra 5 245K

When I test processors, I put them through a rigorous process that takes into account synthetic benchmarks, creative workloads, gaming performance, and more to arrive at my scores.

I use industry standard tools like Geekbench, Cinebench, and PCMark, as well as creative apps like Adobe Photoshop, Blender, and Handbrake.

For gaming, I use built-in benchmarks for CPU-intensive games like Total War: Warhammer III on low graphics settings at 1080p to better isolate a CPU's impact on the game's framerates.

Finally, to keep CPU scores comparable, I use the same system for common-socket processors, with the fastest RAM and SSD and the most powerful GPU and motherboard available, maintaining consistency wherever possible.

With each new processor I review, I retest previous processors I've reviewed in order to get updated scores for each, after installing the latest system and BIOS updates.
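
For the curious, final averages like the ones quoted above can be tabulated along these lines; the scores and test names below are invented stand-ins, not my actual results. Because benchmark scales differ wildly, each result is normalized against a baseline chip first, and a geometric mean keeps any single outsized score from dominating.

```python
# Hypothetical score aggregation. The numbers are invented for illustration;
# normalize-then-geometric-mean is a common way to average benchmarks that
# use very different scales.
from math import prod

baseline  = {"Cinebench R23 MT": 30000, "Geekbench 6 ST": 2900, "F1 avg fps": 310}
candidate = {"Cinebench R23 MT": 33500, "Geekbench 6 ST": 3100, "F1 avg fps": 265}

ratios = [candidate[test] / baseline[test] for test in baseline]
geomean = prod(ratios) ** (1 / len(ratios))
print(f"Relative overall score: {geomean:.3f}x the baseline chip")
```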

I've tested and retested two generations of processors more times than I can count over the last couple of years, so I'm intimately familiar with how these chips perform. My deep computer science and journalism background lets me put all of this testing data into its proper context for consumers, so they can make the right choice when shopping for a new processor.

  • Originally reviewed October 2024