Apple buys secretive audio AI startup Q.ai
9:05 am | January 30, 2026

Author: admin | Category: Mobile phones news

Apple has bought the secretive Israeli audio AI startup Q.ai, in a transaction that's valued at around $2 billion. That would make it Apple's second most expensive purchase in history, outdone only by its acquisition of Beats. Q.ai is working on AI technology for audio, specifically applications of machine learning to help devices understand whispered speech and to enhance audio quality in challenging environments. Last year, Q.ai filed a patent application for using "facial skin micromovements" to detect mouthed or spoken words, identify a person, and assess their emotions and...

I spent days testing Acer’s new 16-inch laptop — and sadly AI remains just a gimmick
12:50 pm | January 27, 2026

Author: admin | Category: Computers, Computing, Gadgets, Laptops, Windows Laptops

Acer Aspire 16 AI: Two-minute review

The Acer Aspire 16 AI is a large laptop promising powerful AI features in an elegant body. It certainly looks the part, thanks to the premium materials and finish, as well as the impressively thin chassis. It’s also surprisingly light for a laptop of this size, which further improves its portability.

However, the price paid for this litheness is the somewhat flimsy build quality, falling below the standards of the best laptop constructions. There’s a fair amount of flex to the chassis, while the lid hinge doesn’t offer the greatest stability – although it at least managed to stay planted while I typed.

There are a good number of ports on the Aspire 16 AI, including two USB-C and two USB-A ports. However, the former are located closest to you, a choice I usually lament since it means your power adapter has to cross over with any cable you have plugged in to the USB-A port. It’s also a shame that the card reader is only fit for microSDs.

For day-to-day use, the Aspire 16 AI is very capable. It can handle light productivity and 4K streaming without missing a beat. However, the included AI features are disappointing: they’re either too basic in their functionality or fail to work altogether.

Gaming also proved to be a lackluster experience. Its shared memory GPU can just about handle AAA titles on the lowest settings, and even then you won’t exactly be treated to the smoothest frame rates.

Close-up of camera on Acer Aspire 16 AI on pink background

(Image credit: Future)

Under these kinds of intensive workloads, the Aspire 16 AI can generate a fair amount of heat, but thankfully it’s concentrated underneath, towards the back. Coupled with the hushed fans, the Aspire 16 AI remains comfortable to use in such scenarios.

The display in my review unit, with its OLED technology and 2048 x 1280 resolution, provided a crystal-clear image, rendering colors vividly and delivering high brightness levels. This latter aspect is especially useful for combating reflections, which can be quite prominent.

Thanks to the spacing and satisfying feel of its keys, the keyboard on the Aspire 16 AI is easy to use. However, the number pad keys are too small for my liking, and I wished there was a right Control key, as I find this useful for productivity purposes.

The touchpad is smooth and large, which helps with navigation, but it can get in the way when typing. Also, the one in my review unit felt loose and rattled when clicking, making it awkward to use and suggesting poor quality control.

Battery life is somewhat disappointing, and isn’t a patch on that of the smaller 14 AI. In fact, many of its key rivals can outlast it. I only managed to get roughly nine hours from it when playing a movie on a continuous loop.

On the face of it, the Aspire 16 AI might look like good value, but it doesn’t deliver enough to justify its cost. Its slender form and mostly great display aren’t enough to make up for its drawbacks, while other laptops at this price point offer more complete packages.

Acer Aspire 16 AI review: Price & Availability

  • $649.99 / £799.99 / AU$1,499
  • Available now in various configurations
  • Better value rivals exist

The Aspire 16 AI starts from $649.99 / £799.99 / AU$1,499 and is available now. It can be configured with various processors, including Intel and Qualcomm (ARM) chips, with a couple of storage and RAM options to choose from.

Unfortunately, there are better-value laptops out there with more power and performance, better suited to heavier workloads. The Apple MacBook Air 13-inch (M4) is one such example. Its starting price isn't as low, but it's similar to that of the higher-spec models of the Aspire 16 AI. It also has excellent build quality, making it a better value proposition all things considered.

If you want to stick with Windows, the Asus TUF Gaming A16 Advantage Edition is another alternative. Again, it’s similarly priced to the higher-spec variants of the Aspire 16 AI, but offers much better gaming performance, chiefly thanks to its AMD Radeon RX 7600S GPU. It’s no surprise we think it’s one of the best cheap gaming laptops around right now.

  • Value: 3 / 5

Acer Aspire 16 AI review: Specs

Acer Aspire 16 AI specs (base config vs review config)

  • Price: $649.99 / £799.99 / AU$1,499 (base); £949, about $1,280 / AU$1,960 (review)
  • CPU: Qualcomm Snapdragon X X1-26-100, 8 cores, 3GHz (base); AMD Ryzen AI 7 350, 8 cores, 2.0GHz (review)
  • GPU: Qualcomm Adreno, shared memory (base); AMD Radeon 860M, shared memory (review)
  • RAM: 16GB LPDDR5X (both)
  • Storage: 512GB PCIe NVMe 4.0 M.2 (base); 1TB PCIe NVMe 4.0 M.2 (review)
  • Display: 16-inch WUXGA (1920 x 1200) 16:10 ComfyView matte IPS, 120Hz (base); 16-inch WUXGA+ (2048 x 1280) 16:10 OLED, 120Hz (review)
  • Ports and connectivity: 2x USB-C (Thunderbolt 4), 2x USB-A, 1x HDMI 2.1, 1x headset jack, 1x microSD, Wi-Fi 7, Bluetooth 5.4 (both)
  • Battery: 65Wh (both)
  • Dimensions: 14 x 9.8 x 0.6 inches / 355 x 250 x 16mm (both)
  • Weight: 3.4lbs / 1.55kg (both)

Acer Aspire 16 AI review: Design

Close-up of keyboard on Acer Aspire 16 AI

(Image credit: Future)
  • Brilliantly thin and light
  • Not the sturdiest
  • Touchpad issues

Thanks to its minimal design, the Aspire 16 AI has sleek looks. The low-shine metallic lid also adds to its elegance, befitting its premium price tag.

It's pleasingly light and slender, too, making it more portable than you might expect for a 16-inch laptop. The display bezel is minuscule as well, which maximizes the usable screen area.

There's a satisfying click when you close the lid on the Aspire 16 AI, something I haven't encountered on any other laptop before. The hinge also allows the screen to recline all the way back to 180 degrees, something I'm always happy to see.

However, lid stability isn’t the best, as it’s prone to wobbling, although, thankfully, it remains stable while typing on the keyboard. The overall construction of the Aspire 16 AI isn’t especially impressive, either, with the chassis having a fair amount of flex.

Close-up of touchpad on Acer Aspire 16 AI

(Image credit: Future)

Worse still, the touchpad in my review unit had a horrible rattle, as if some part was loose at the bottom section. It’s possible this issue is confined to my review unit alone – perhaps it had been passed around several journalists before it got to me – but the issue still doesn’t speak highly of its build quality or Acer's quality control.

There’s a varied selection of ports on the Aspire 16 AI, spread evenly across both sides. On the left are two USB-C ports, one USB-A port, and an HDMI port. However, I found it inconvenient that the USB-C ports are placed nearest to you, since one has to be used for the power adapter; I much prefer the thick cable for this to trail from the back of the laptop, rather than from the middle, as it does with the Aspire 16 AI.

On the right you’ll find another USB-A port, followed by a combo audio jack and a microSD card reader. It’s a shame the latter can’t accommodate standard SD card sizes, but this is a small grievance.

  • Design: 3.5 / 5

Acer Aspire 16 AI review: Performance

Close-up of left-side ports on Acer Aspire 16 AI, on desk with pink background

(Image credit: Future)
  • Good productivity and streaming performance
  • Poor for gaming
  • Useless AI features

The Aspire 16 AI offers great general performance. It takes light productivity in its stride, from word processing to spreadsheet creation, and multiple browser tabs didn't cause a problem for me, thanks to the 16GB of RAM in my review unit. Streaming 4K content is well within its grasp, too: I experienced little buffering or slowdown, which made for a seamless viewing experience in the main.

However, despite what Acer claims, the gaming performance of the Aspire 16 AI is quite poor. With its shared memory, the AMD Radeon GPU didn’t handle AAA titles very well. When I played Cyberpunk 2077 with the default Ray Tracing: Low preset and resolution scaling set to Performance mode, I was getting 20fps on average – not what you’d call playable.

The best I could achieve with the game was about 38fps, but that was at the lowest possible graphics preset and the resolution dropped to 1080p. This at least made it playable, but if you’re expecting to get even moderately close to the performance of the best gaming laptops, you’ll be sorely disappointed.

During my playtime, the Aspire 16 AI generated a fair amount of heat. Fortunately, this was heavily concentrated underneath and at the back, thus steering clear of any parts you might actually touch. Fan noise is also pleasantly subdued.

As when I tested the 14 AI, the AI features the Aspire 16 AI sports are disappointing. The centerpiece appears to be Acer LiveSense, a photo editing and webcam suite with very basic functionality, not to mention a poor UI and frequent glitches.

For more AI features, you’ll have to download Acer Intelligence Space, the brand’s hub. Contrary to when I tested the 14 AI, I managed to install it successfully. However, it didn’t get off to an auspicious start, as a dialog box warned me that I had insufficient memory resources, explaining that it needed 6.5GB free and a total of at least 16GB to execute smoothly.

Close-up of right-hand ports on Acer Aspire 16 AI on table

(Image credit: Future)

I proceeded anyway and was greeted with a clear user interface that revealed the various AI apps I could install. However, a large portion of them seem to be incompatible with the Aspire 16 AI, and those that are compatible were, once again, very limited in their functionality.

On a more positive note, the 2K OLED display in my review unit was as clear and as vibrant as you might expect. The very shiny coating can cause prominent reflections, but these can be mitigated by the screen’s brightness values (especially if you disable the ‘change brightness based on content’ setting).

The keyboard feels premium, too, thanks to the subtle texture and tight fit of the keys themselves. They're also light, tactile, and reasonably spaced, although perhaps not to the extent of other laptop keyboards. I didn't find this aspect to be a problem when typing, but I did while gaming, as it made adopting the WASD position more uncomfortable for me.

At least the number pad doesn't eat into the layout space. However, unlike many full-sized laptop keyboards I've experienced, it's the number pad itself that feels cramped, with keys too narrow to use easily. Another small but notable gripe I have with the keyboard is the absence of a right Control key, which can be frustrating when performing productivity tasks.

The touchpad performs well enough, with its large and smooth surface making for easy navigation. However, thanks to the aforementioned rattle in its bottom portion, clicks felt unpleasant. It can also get in the way while typing: on occasion, the base of my thumbs would move the cursor, although thankfully without registering clicks or taps.

  • Performance: 3.5 / 5

Acer Aspire 16 AI review: Battery Life

Back of Acer Aspire 16 AI open on table with pink background

(Image credit: Future)
  • Average battery life
  • 14 AI battery life much better
  • Other rivals are better, too

The battery life of the Aspire 16 AI isn’t particularly impressive. It lasted just over nine hours in our movie playback test, which is a middling result. This is a far cry from the time achieved by the 14 AI, which lasted over twice as long, making the Aspire 16 AI even more disappointing by comparison.

What’s more, plenty of its rivals can beat this score, including the Microsoft Surface Laptop 13-inch, which managed over 17 hours, and the Asus TUF Gaming A16 Advantage Edition, which lasted 11 hours.

  • Battery Life: 3.5 / 5

Should I buy the Acer Aspire 16 AI?

Acer Aspire 16 AI Scorecard

  • Value: Starting prices are low, but climb up the specs and the value starts to diminish. 3 / 5
  • Design: Build quality isn't the best, but it's impressively thin and light. It looks good, too. 3.5 / 5
  • Performance: Everyday tasks are dispatched without a hitch, but it can't cope well with heavier demands, such as gaming. The display is very good, though. 3.5 / 5
  • Battery Life: Only average, and the smaller 14 AI absolutely obliterates it on this front. 3 / 5
  • Total: The Aspire 16 AI is a capable workhorse, but its poor GPU, underwhelming AI features, and suspect build quality result in a middling machine. 3 / 5

Buy the Acer Aspire 16 AI if...

You want a large and bright display
The 16-inch OLED on my model looked great, and its high brightness can overcome its reflective coating.

You want something portable
Despite its large size, the Aspire 16 AI is impressively light and thin, making it easy to carry around.

Don't buy it if...

You’ll be running graphics-intensive apps
The Aspire 16 AI could barely handle AAA gaming at modest settings, saddled as it is with a shared memory GPU.

You want a super-sturdy machine
There’s plenty of flex in the body, and the seemingly broken touchpad on my particular unit was disconcerting.

Acer Aspire 16 AI review: Also Consider

Asus TUF Gaming A16 Advantage Edition
If you’re after more graphical power but don’t want to spend more for it, the TUF Gaming A16 Advantage Edition might be the solution. It comes equipped with an AMD Radeon RX 7600S GPU, which is capable of handling AAA titles smoothly, although you may have to forgo Ray Tracing. Read our full Asus TUF Gaming A16 Advantage Edition review.

Apple MacBook Air 13-inch (M4)
Unusually for an Apple product, this MacBook Air is actually a great budget pick if you’re after a powerful machine, being among the best laptops for video editing for this reason. Its sumptuous design and display are additional feathers in its creative cap. Read our full Apple MacBook Air 13-inch (M4) review.

How I tested the Acer Aspire 16 AI

  • Tested for several days
  • Used for various tasks
  • Plentiful laptop reviewing experience

I tested the Aspire 16 AI for several days, during which time I used it for various tasks, from productivity and browsing to streaming and gaming.

I also ran our series of benchmark tests to assess its all-round performance more concretely, and played a movie on a continuous loop while unplugged to see how long its battery lasted.

I have been using laptops for decades, and have reviewed a large and varied selection of them too, ranging in their form factors, price points, and intended purposes.

  • First reviewed: January 2026
  • Read more about how we test
My hands-on experience of the Asus Ascent GX10 was a radical one that is only relevant to those actively engaged in AI development
4:05 am

Author: admin | Category: Computers, Computing, Gadgets

Rather than a review, this is a ‘hands-on’ in which I’ve explored what the Asus Ascent GX10 offers, providing information that might be critical to those considering purchasing one.

The first important piece of information about this hardware is that it isn't a PC; or rather, it's not an Intel, AMD or other x86-compliant platform that can run Windows.

It’s built around ARM technology, much like modern phones and tablets, although its ARM technology has been scaled up to work with massively powerful Nvidia Blackwell technology that is capable of 1 petaFLOP of AI performance using FP4.

This has all been shoehorned into a gorgeously engineered 150mm-square, 51mm-high form factor that resembles an oversized NUC.

The system can be used directly by attaching a mouse, keyboard, and screen, but it’s also intended to be used in a headless mode from another system, which might explain why it comes with relatively modest onboard storage.

What this system doesn’t allow for is much expansion, at least internally. The inclusion of a special networking connection, the Nvidia ConnectX-7 port, does allow another Ascent GX10 node to be stacked on top, doubling the amount of processing power and the price.

The platform that runs the integrated AI software stack is Ubuntu Linux, so familiarity with that might be useful for those wanting to work directly on it.

As anyone working in AI can already attest, nothing to do with this type of development is cheap, and the Asus Ascent GX10 is more than $3000 for a single node.

But given the expertise needed to use this hardware and the associated developer costs, this AI-focused hardware might be the least expensive part of any project. And, with memory costs rising dramatically, a system with 128GB of LPDDR5X onboard might be significantly more expensive by the end of 2026 than it is at the start of it.

Asus Ascent GX10: Price and availability

  • How much does it cost? From $3090, £2800
  • When is it out? Available now
  • Where can you get it? From online retailers.

The ASUS Ascent GX10 isn’t available directly from Asus, but it's easy to find at many online retailers, including Amazon.

For US readers, the price on Amazon.com is $3099.99 for the 1TB storage SKU (GX10-GG0015BN), and $4,149.99 for the 4TB storage model (GX10-GG0016BN).

Given that a 4TB Gen 5 SSD is about $500, that is a remarkable price hike for the extra storage capacity.

For UK readers, on Amazon.co.uk the 1TB model price is £3,769, but I found it via online retailer SCAN for a more palatable £2,799.98. SCAN also carries a 2TB option for £3,199.99 and the 4TB model for £3,638.99.

An important detail about this platform is that the hardware inside the GX10 isn't exclusive to Asus, as the Nvidia silicon is (in theory) available across a number of brands, and Nvidia has its own model.

The Nvidia DGX Spark Personal AI Supercomputer, as the originator modestly calls it, costs £3699.98 in the UK, for a system with 128GB of RAM and 4TB of storage.

Acer offers the Veriton AI GN100, which bears an uncanny visual resemblance to the Asus but comes with 4TB of storage, like the Nvidia option. This is £3999.99 direct from Acer in the UK, but only $2999.99 from Acer in the US.

Another choice is the Gigabyte AI TOP ATOM Desktop Supercomputer, a 4TB storage model that sells for £3479.99 from SCAN in the UK, and can be found on Amazon.com for $3999.

And the final model with the same spec as most is the MSI EdgeXpert Desktop AI Supercomputer, selling for £3,598.99 from SCAN in the UK, and $3999 on Amazon.com for US customers.

Overall, the prices of all these products are roughly in the same ballpark, but the Asus in its 1TB configuration is one of the cheaper choices, especially for those in Europe.

Asus Ascent GX10 AI Supercomputer

(Image credit: Mark Pickavance)

Asus Ascent GX10: Specs

  • CPU: Nvidia GB10, ARM v9.2-A, 20 cores (10 Cortex-X925, 10 Cortex-A725)
  • GPU: Nvidia Blackwell (integrated into the GB10)
  • RAM: 128GB LPDDR5X unified system memory
  • Storage: 1TB M.2 NVMe PCIe 4.0 SSD
  • Expansion: N/A
  • Ports: 3x USB 3.2 Gen 2x2 Type-C (20Gbps, DisplayPort 2.1 alt mode), 1x USB 3.2 Gen 2x2 Type-C with PD in (180W, EPR PD 3.1), 1x HDMI 2.1, 1x Nvidia ConnectX-7 SmartNIC
  • Networking: 10GbE LAN, AW-EM637 Wi-Fi 7 (Gig+), Bluetooth 5.4
  • OS: Nvidia DGX OS (Ubuntu Linux)
  • PSU: 48V 5A, 240W
  • Dimensions: 150 x 150 x 51mm (5.91 x 5.91 x 2.01 inches)
  • Weight: 1.48kg

Asus Ascent GX10: Design

  • Uber NUC
  • Connect-7 scalability
  • Limited internal access

Asus Ascent GX10 AI Supercomputer

(Image credit: Mark Pickavance)

While the GX10 looks like an oversized NUC mini PC, at 1.48kg it's heavier than any I’ve previously encountered. And that doesn’t include the substantial 240W PSU.

The front is an elegant grill with only the power button for company, and all the ports are on the rear. These include four USB-C ports, one of which is required for the PSU to connect, a single 10GbE LAN port and a single HDMI 2.1 video out.

You can connect more than one monitor by using the USB 3.2 Gen 2x2 ports in DP Alt mode, if you have the adapters to convert those into DisplayPort.

What seems mildly odd is that Asus went with USB 3.2 Gen 2x2, a standard that proved an effective dead end in USB development, rather than USB4. And there are no Type-A USB ports at all, forcing the buyer to use an adapter or hub to attach a mouse and keyboard to this system.

As mice and keyboards are still mostly USB-A, that’s slightly irritating.

But what makes this system truly interesting is the inclusion of a ConnectX-7 Smart NIC alongside the more conventional 10GbE Ethernet port.

The best the 10GbE LAN port can offer is a data transfer rate of around 840MB/s, which is technically slower than the USB ports, even if it's quick by networking standards.
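
That figure can be sanity-checked against the raw line rate. The following is illustrative arithmetic using the ~840MB/s number above, with the gap down to protocol and filesystem overhead:

```python
# 10GbE line rate vs the ~840MB/s real-world figure quoted above.
# Back-of-envelope arithmetic only, not a measured benchmark.

LINE_RATE_BYTES = 10e9 / 8   # 10 Gbit/s = 1.25 GB/s before overhead
OBSERVED_BYTES = 840e6       # quoted real-world transfer rate

efficiency = OBSERVED_BYTES / LINE_RATE_BYTES
print(f"{efficiency:.0%} of line rate")  # roughly 67%
```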

The ConnectX-7 port is a technology developed by Mellanox Technologies Ltd, an Israeli-American multinational supplier of computer networking products based on InfiniBand and Ethernet technology that was acquired by Nvidia in 2019.

In this context, ConnectX-7 provides a means to link a second GX10 directly over a 200 Gbit/s (25 GB/s) InfiniBand network, enabling performance scaling across the two systems.

There are certainly parallels between this technology and the era when Nvidia enabled two GPUs to work in unison via a dedicated interconnect, but the ConnectX-7 interface is a much more sophisticated option: both processing and memory can be used in a collective exercise, enabling the handling of large-scale models with over 400 billion parameters. That's double the 200 billion that a single unit can cope with.
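
Those parameter counts line up with the memory on offer. A rough sketch, assuming 4-bit FP4 weights at 0.5 bytes per parameter and ignoring activation, KV-cache and OS overhead (which is why the real-world limits sit below these ceilings):

```python
# Rough upper bound on model size that fits in unified memory at FP4.
# Assumes 0.5 bytes per parameter; ignores activation/KV-cache/OS
# overhead, so the quoted 200B/400B figures sit below these ceilings.

BYTES_PER_FP4_PARAM = 0.5
NODE_MEMORY_BYTES = 128e9   # 128GB of unified LPDDR5X per GX10

def max_params_billions(nodes: int = 1) -> float:
    """Ceiling on parameter count (in billions) for a given node count."""
    return nodes * NODE_MEMORY_BYTES / BYTES_PER_FP4_PARAM / 1e9

print(max_params_billions(1))  # 256.0, i.e. ~200B once overhead is deducted
print(max_params_billions(2))  # 512.0 across two linked nodes
```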

Asus Ascent GX10 AI Supercomputer

(Image credit: Mark Pickavance)

Mellanox does make ConnectX switches, but I'm not sure whether it's possible to connect more than two GX10s via one of those. Realistically, each system is still only capable of 200 Gbit/s communication, so adding nodes beyond two might offer diminishing returns. But this technology is utilised in switched fabrics for enterprise data centres and high-performance computing, where the Mellanox Quantum family of InfiniBand switches supports up to 40 ports running at HDR 200 Gbit/s.

It may be that products like the GX10 will be the vanguard for the wider use and application of ConnectX technology, and a blueprint for easily expandable clusters.

However, the last aspect I looked at on the GX10 was a disappointment, and it was the only nod to upgradability that this system has, beyond adding a second machine.

On the underside of the GX10 is a small panel that can be removed to provide access to the one M.2 NVMe drive that this system supports.

In our review unit, the slot was occupied by a single 2242 M.2 PCIe 4.0 1TB drive, although you can also get this system with 4TB. The fact that there isn't room for a 2280 drive is a shock, because it effectively limits the maximum internal storage to 4TB.

But conversely, the only other of these types of systems I’ve seen, the Acer GN100 AI Mini Workstation, has no access to the internal storage at all. So perhaps Asus Ascent GX10 owners should be thankful for small mercies.

Asus Ascent GX10: Features

  • ARM 20-core CPU
  • Grace Blackwell GB10
  • AI platforms compared

The Nvidia GB10 Grace Blackwell Superchip represents a significant leap in AI hardware, emerging from a collaborative effort between Nvidia and ARM. Its origins lie in the growing demand for specialised computing platforms capable of supporting the rapid development and deployment of artificial intelligence models. Unlike traditional x86-based systems, the GB10 is built around ARM v9.2-A architecture, featuring a combination of 20 ARM cores—specifically, 10 Cortex-X925 and 10 Cortex-A725 cores. This design choice reflects a broader industry trend towards ARM-based solutions, which offer improved efficiency and scalability for AI workloads.

The GB10’s capabilities are nothing short of remarkable. It integrates a powerful Nvidia Blackwell GPU paired with the ARM CPU, delivering up to a petaFLOP of AI performance using FP4 precision. This level of computational power is particularly suited to the training and inference of large language models (LLMs) and diffusion models, which underpin much of today’s generative AI. The system is further enhanced by 128GB of unified LPDDR5x memory, ensuring that even the most demanding AI tasks can be handled efficiently.

The GB10’s operating environment is based on Ubuntu Linux, specifically tailored with NVIDIA’s DGX OS, making it an ideal platform for developers familiar with open-source AI tools and workflows.

There is an exceptionally fine irony to this OS choice, since Nvidia’s hardly been a friend to Linux over the past three decades, and has actively obstructed its attempts to compete more widely with Microsoft Windows. If anyone doubts my opinion on the relationship between Linux and Nvidia, then search for “Linus Torvalds” and “Nvidia”. Recently, Linus has warmed to the company, but much less to Nvidia CEO Jensen Huang. And, he’s not a fan of the AI industry, which he described as "90% marketing and 10% reality".

Looking to the future, the evolution of the GB10 and similar superchips will likely be shaped by the ongoing arms race in AI hardware. As models grow ever larger and more complex, the need for even greater memory bandwidth, faster interconnects, and more efficient processing architectures will drive innovation. The modularity offered by technologies like ConnectX-7 hints at a future where AI systems can be scaled seamlessly by linking multiple nodes, enabling the handling of models with hundreds of billions of parameters.

In terms of raw AI performance, the GB10 delivers up to 1 petaFLOP at FP4 precision, which is heavily optimised for quantised AI workloads. While this is less than the multi-petaFLOP performance of NVIDIA’s flagship data centre chips (such as the Blackwell B200 or GB200), the GB10’s power efficiency is a standout. It operates at around 140W TDP, far lower than the 250W or more seen in GPUs like the RTX 5070, yet offers vastly more memory (128GB vs 12GB on the 5070). This makes the GB10 especially suitable for developers and researchers who need to work with large models locally, without the need for a full server rack.
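
The efficiency point can be made concrete from the figures quoted above (1 petaFLOP of FP4 compute at around 140W TDP). This is illustrative arithmetic from the stated specs, not a measurement:

```python
# Performance-per-watt from the figures above: 1 petaFLOP (FP4)
# at roughly 140W TDP. Illustrative arithmetic, not a benchmark.

GB10_FP4_FLOPS = 1e15  # 1 petaFLOP at FP4 precision
GB10_TDP_WATTS = 140

flops_per_watt = GB10_FP4_FLOPS / GB10_TDP_WATTS
print(f"{flops_per_watt / 1e12:.1f} TFLOPS per watt")  # ~7.1
```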

Asus Ascent GX10 AI Supercomputer

(Image credit: Mark Pickavance)

While there are some other players hidden in the shadows, mostly Chinese, the key AI players are Nvidia, AMD, Google and Apple.

Nvidia has the Blackwell B200/GB200 products as its datacenter flagships, offering up to 20 petaFLOPS of sparse FP4 compute and massive HBM3e memory bandwidth. These are massively expensive enterprise products; the GB10, by contrast, is a scaled-down, more accessible version for desktop and edge use, trading some peak performance for efficiency and compactness.

AMD's line of AI accelerators is the Instinct MI300/MI350 series; these are competitive in terms of raw compute and memory bandwidth, with the MI350X offering up to 288GB of HBM3e and strong FP4/FP6 performance. But they don't offer the same level of flexibility as the GB10, even if they're better suited to inference tasks. The same can be said for Google's TPU v6/v7, a technology that is highly efficient for large-scale inference and optimised for Google's own cloud and AI services.

Apple's M3/M4/M5 and edge AI chips, meanwhile, are optimised for on-device AI in consumer products, with impressive efficiency and integrated neural engines. However, these chips are not designed for large-scale model training or inference, and their memory and compute capabilities are far below what the GB10 offers for professional AI development.

The NVIDIA GB10 Grace Blackwell Superchip stands out as a bridge between consumer AI hardware and data centre accelerators. It offers a unique blend of high memory capacity, power efficiency, and local accessibility, making it ideal for developers and researchers who need serious AI capability without the scale or cost of a full server. While it cannot match the absolute peak performance of the largest data centre chips, its unified memory, advanced interconnects, and software support make it a compelling choice for cutting-edge AI work at the desktop.

However, that statement does assume that current AI is a path worth taking.

Asus Ascent GX10: AI Reality Check

Asus Ascent GX10 AI Supercomputer

(Image credit: Mark Pickavance)

Looking at the specifications of the Asus Ascent GX10, it's easy to be impressed by how much computing power Asus, with the help of Nvidia, has managed to squeeze into a tiny computer, and its ability to scale.

However, there are three practical little pigs living in this AI straw house, and in this story, I’m the wolf.

Those researching AI might think I'm referring to the three AI issues that confront all public implementations: algorithmic bias, lack of transparency (aka explainability), and the significant ethical and societal risks associated with the spread of misinformation. But I'm not, because those are potentially fixable to a degree.

Instead, I'm talking about the three unfixable issues with current models.

Almost every AI platform is based on a concept called the Deep Neural Net, and under that are two approaches: LLMs (Large Language Models) and Diffusion models, the latter being the ones that can generate images and video.

What both these sides of the Deep Neural Net coin show is a pattern-matching approach to problems, as if the computer were playing a complex version of the children's card game Snap. The results are coloured by the scale of the data and by how quickly the routines and hardware platforms find the patterns.

Before IBM made computers, it sold card files, built on the idea that it was quicker to navigate the cards to the information you wanted.

It's a generalisation, but these models are purely more sophisticated versions of that: if the pattern they're looking for doesn't exist in the data, the routine can't inspirationally create it.
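
The card-file analogy can be made concrete with a toy next-token model. This is a deliberately crude sketch (a bigram frequency table, nowhere near a real LLM), but it illustrates the point: a context absent from the training data leaves the routine with nothing to match.

```python
from collections import defaultdict, Counter

# Toy bigram "model": pure pattern matching over its training text.
# A crude sketch of the point above - if a pattern isn't in the data,
# there is nothing grounded to offer.

def train(text: str) -> dict:
    model = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model: dict, word: str) -> str:
    if word in model:
        return model[word].most_common(1)[0][0]  # best-matching pattern
    # Honest fallback; a production model tends to guess (hallucinate)
    # rather than admit this.
    return "<no grounded answer>"

corpus = "the cat sat on the mat and the cat sat down"
model = train(corpus)
print(predict(model, "cat"))  # -> "sat": a pattern seen in the data
print(predict(model, "dog"))  # -> "<no grounded answer>": unseen context
```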

To make the results seem less random, model designers have tried to specialise their AI constructs to focus on narrower criteria, but the Nirvana of AGI (Artificial General Intelligence) is that the AI should be generally applicable to almost any problem.

How this issue manifests in AI responses is that, when confronted with a pattern the routine can't match accurately, it simply offers up the partial matches it found, which may or may not be related at all.

These 'hallucinations', as they're often called, reflect a choice the model makers face between the AI admitting it has no idea what the answer is and delivering a response with a remarkably low probability of being correct. Given that AI companies don't like the idea of their models admitting they haven't a clue, hallucinations are deemed preferable.

Asus Ascent GX10 AI Supercomputer

(Image credit: Mark Pickavance)

Perhaps some of the problem here is not the AI, but that users aren't trained to check what the AI is producing, an argument that isn't entirely spurious.

The next issue is the classic ‘prompt injection’ problem, or more accurately one of conversational context: you ask a question, then, often based on the response, realise you asked the wrong one and head off in an entirely different direction. The AI doesn’t recognise the pivot, tries to apply its previous pattern constructions to the new problem, and becomes entirely confused.

And the final problem, where current AI entirely falls down, might be classed as original thinking: what the user wants is a new approach to a problem that hasn’t been documented before. What has defined humans as especially impressive thinkers is their ability to abstract, and that is something current AI doesn’t do, even modestly.

While prompt injection can probably be solved, the other two issues, generalisation and abstraction, are unlikely to be fixed by the Deep Neural Net. They need a radically new approach, and ironically not one that AI is likely to come up with.

Some of you reading this will be wondering why I’ve inserted this information into this product reveal, but the whole purpose of the Asus Ascent GX10 is to facilitate the design and testing of LLMs and Diffusion models, and at this time, these have significant limitations.

But critically, the development of the whole Deep Neural Net direction doesn’t appear to offer a resolution to some of these more problematic issues, which suggests it might ultimately be a dead end.

It might turn out to be useful for lots of problems, but it's not the AI we’re looking for, and the likelihood of it evolving into that true artificial intelligence is extremely low.

This is especially relevant to the Asus Ascent GX10, since it doesn’t have a practical purpose beyond the creation of models, as it’s not a PC.

These aren’t all the issues associated with AI, but they’re some of the ones that might directly impact those buying the GX10, at some point or another.

Asus Ascent GX10: Early verdict

Asus Ascent GX10 AI Supercomputer

(Image credit: Mark Pickavance)

It’s exciting to see Asus make something this radical, showing that it truly believes in a post-Windows, post-PC future where hardware is purely specified for a specific task, and in the case of the Asus Ascent GX10, that’s AI model development.

I’ve already covered the caveats regarding that subject, so for the purpose of this conclusion, let's pretend that AI is the solid bet that some think, and not an underachieving dead end that others believe.

For corporations, the cost of this hardware won’t be an issue, and it gives their IT people a way to gain experience building AI models and evaluating their worth.

The beauty of a system like the GX10 is that it’s a finite cost, unlike buying access to an AI server centre cluster, which will be an ongoing cost, and likely to become more expensive if demand is high. While the data centre still might be needed for the biggest projects, or for deployment, the GX10 does provide a first rung for any proof of concept.

However, if the AI path is not the one that is ultimately taken, this machine becomes mostly a beautifully engineered paperweight.

For more compact computing, see our guide to the best mini PCs you can buy

South Korea passes the first AI regulations
12:22 am | January 23, 2026

Author: admin | Category: Mobile phones news | Tags: , | Comments: Off

South Korea has launched a landmark set of laws to regulate AI before any other country or bloc (the EU's regulations are set to go into effect in stages through next year). Under Korea's AI Basic Act, companies must ensure there is human oversight for "high-impact" AI in fields like nuclear safety, drinking water, transport, healthcare, and financial uses like credit evaluation and loan screening. Companies must also give users advance notice about products or services using high-impact or generative AI, and clearly label AI-generated output that is difficult to distinguish from...

Apple’s next wearable tipped to be an AI pin with cameras
3:07 pm | January 22, 2026

Author: admin | Category: Mobile phones news | Tags: | Comments: Off

Fancy an Apple-branded wearable AI pin? A new report suggests Apple is working on a compact AI-powered pin, roughly the size of an AirTag. If it makes it to market, the device would mark Apple’s first product built primarily around AI and could run the revamped Siri chatbot. According to a report from The Information, the wearable is still in development, though the project could face delays or even be shelved altogether. An AI-generated image of the rumored Apple AI pin. The AI pin is reportedly circular in shape, closely resembling the AirTag in both size and design, with an...

Mercy review: Chris Pratt’s new AI sci-fi thriller is so haphazard, you’ll wonder if ChatGPT could do a better job of writing it
8:00 pm | January 21, 2026

Author: admin | Category: Computers Entertainment Gadgets Streaming | Tags: | Comments: Off

I need everyone in the movie industry to listen up and repeat the following pact: "I solemnly swear to never make a film told through the lens of social media ever again. Never will I sit my main character in front of a screen, digesting the rest of the storyline through open internet tabs, Instagram feeds and MacBook files. I will only include digital elements if it effectively serves the plot."

Agreed? Great, because Chris Pratt's new AI sci-fi thriller Mercy is the latest victim of this heinous crime. Of its 101-minute runtime, Pratt spends 90 minutes sitting in the same chair, accused of a murder he didn't commit. Instead of being given a defense lawyer like a normal society would, he has to face off against an AI-generated judge in a 'mercy' courtroom (who conveniently looks exactly like Rebecca Ferguson).

If he can't prove his innocence past a certain percentage, he'll be fried on the spot. Override the algorithm sufficiently, and he'll walk free. Cue an entire movie of sifting through ring cam footage, facetiming witnesses and finding crucial evidence on his daughter's private Finsta account.

The gimmick wears off after about 15 minutes. Pratt himself is clearly loving it (possibly due to the ease of his character also being called Chris), but unsurprisingly, this doesn't translate offscreen. Mercy is mundane in its own unique way, but there are few surprises – it'll hit you over the head with its ambivalent AI messaging.

Mercy refuses to call AI a hero or a villain, and that's a missed opportunity

"Maybe humans and AI both make mistakes" is a line of dialogue in Mercy that I've only slightly paraphrased, and it sums up the movie's moral vagueness in one nifty sentence. Sure, we've just spent an hour and 40 minutes watching an AI-generated court judge nearly kill Chris over a wrongful conviction, but we all make mistakes, right?

This was Amazon MGM Studios' chance to lay down the AI line by deciding what side of the industry argument they're on. Instead, they've chosen to sit on the fence, and that transforms any vim and vigor Mercy did have into pure monotony. If we're not using storytelling to send home a powerful message, especially about something so ever-changing, then what's the point?

Of course, the point is to make a bit of money at the box office by seeming to touch on a topical subject. It's the same way that a social media influencer might look like they're supporting a social campaign, but are actually doing the surface-level bare minimum to help it. Mercy could have been an industry-changing heavyweight piece of art, but no – let's play around with some CGI graphics instead.

For a big-budget studio, these graphics feel incredibly cheap. This is where the most obvious connection to Prime Video's take on War of the Worlds, starring Ice Cube, comes into play. Both have the same function and aesthetic look – almost as if Amazon is ashamed that this uninspired slop is all it's got to offer.

Rebecca Ferguson is our one and only savior

Rebecca Ferguson as an AI judge

Rebecca Ferguson is our AI judge. (Image credit: Amazon MGM Studios)

Almost no movie (perhaps with the exception of 2023 thriller Missing) can use tech, screens and social media as its sole method of storytelling to its advantage – the concept is as lame as lame comes. But our AI-fashioned Rebecca Ferguson is the jewel in our crown of criminal offenses.

Even as a non-human entity, Ferguson shines. She's far from a voice of reason, but seeing the cracks in her generated facade is easily the most satisfying payoff in this otherwise faltering farce. She's also the only source of continuity when Mercy decides to finally let Chris out of his chair for an unhinged 15-minute duration, abandoning all of its narrative mechanics without warning.

You get where I'm coming from here. ChatGPT could probably have written a much stronger script and overarching plot, while watching any other takes on AI or the digital world would be a more shrewd use of your time. Our best case scenario is hoping Mercy is popular enough to finance more Guardians of the Galaxy or Star-Lord content, and then never speak of it again.


The Audeze Maxwell 2 is an incredible high-end gaming headset – but don’t expect a big upgrade from its predecessor
5:00 pm | January 15, 2026

Author: admin | Category: Computers Gadgets Gaming | Tags: , | Comments: Off

Audeze Maxwell 2: one-minute review

Audeze is known for enthusiast-level audiophile hardware. When that tech drips down to the Maxwell gaming line, reviewer and consumer alike have an important question to ask: Can we actually hear a difference versus the competition?

The answer in this case is a resounding, 90mm driver-powered ‘yes’ in the form of the new Audeze Maxwell 2 wireless gaming headset that rattles your eardrums and stupefies you into a kind of aural nirvana. In 15 years of reviewing audio equipment from gaming to studio product categories – and do excuse me while I do a little sick in my mouth at the pompousness of this statement – I’ve rarely heard such a well-rounded and emotive frequency response.

It’s important to keep that in mind, because although there is good reason to be critical of this headset as a consumer release, it really can’t be faulted in raw audio terms. If great sound is all you care about, money no object, then you’ve already read all you needed to in this review. Go and enjoy your new headset.

But gaming headsets have become an incredibly crowded vertical, and in the race to win our attention and money, manufacturers have really spoiled us with features lately. Broadcast-quality noise-cancelling mics, simultaneous 2.4GHz wireless and Bluetooth connections, and even active noise cancellation have started to feel like table stakes in the flagship model end of the market.

That places a lot of importance on the secondary features of this follow-up to the original Audeze Maxwell. After all, it follows a prior model (the 'Gen 1', if you will) that gobbled up acclaim and awards like a ravenous James Cameron on a nineties Oscars night. The Gen 1 Maxwells are available for around $100 / £100 less than the new 'Gen 2s', and sadly, there’s no single must-have feature about the newer incarnation that justifies spending more money.

The Gen 2s do feature the company’s SLAM Acoustic Management, a marketing buzzword for ‘better audio’ more or less, along with Bluetooth connectivity, a wider headband for better weight distribution, and redesigned physical controls that do indeed feel pleasant to locate and operate. But given that the newer model is 2.4oz / 70g heavier than its predecessor, elements like the headband design upgrade feel less like a win and more like a necessity.

So here I am in the very strange position of reviewing a stellar headset that I can’t fully recommend, because so much of what makes it stellar was also true of the outgoing model.

The Audeze Maxwell 2 gaming headset lying on a pale desk with the top pointing toward the camera and being held in a hand

(Image credit: Future/Phil Iwaniuk)

Audeze Maxwell 2: price & availability

  • List price: $349.99 / £339.99 / around AU$520 (Xbox version)
  • Pricier than the Razer BlackShark V3 Pro but cheaper than the SteelSeries Arctis Nova Elite
  • Maxwell V1 is still available and cheaper

You’d expect a premium price from an Audeze headset. The company has made its name by delivering no-compromise sound from audiophile-grade equipment, and a lot of that tech has found its way into the Maxwell gaming line. You can hear and feel the quality difference compared to the vast majority of gaming headsets immediately, even versus some of our favourite options like the Razer BlackShark V3 Pro and the SteelSeries Arctis Nova Pro Wireless.

There’s a slight price difference between the PS5 and Xbox versions of this headset, both of which are also compatible with PC and mobile devices via Bluetooth. The PlayStation option is slightly cheaper at $329.99 / £339, while the Xbox version has a $349.99 / £369 list price. This makes it significantly cheaper than the other recent audiophile gaming headset, the SteelSeries Arctis Nova Elite, which retails for a chonky $599.99 / £599.99.

It’s not just the sound that communicates where the money’s been spent. The build quality and presentation are also wonderful, like something you’d find waiting for you on an eye-wateringly expensive first-class plane seat. The only caveat, as you’ll read numerous times throughout this review, is that the outgoing model currently sells for less, and it’s fundamentally just as good.

The Audeze Maxwell 2 gaming headset lying on a pale desk with a close-up view of its buttons

(Image credit: Future/Phil Iwaniuk)

Audeze Maxwell 2: Specs

Price: $349.99 / £339.99 / around AU$520
Weight: 17.2oz / 490g
Drivers: 90mm Planar Magnetic
Compatibility: PC, Xbox Series X|S, PlayStation 5, Nintendo Switch|2, MacOS, iOS, Android
Connection type: Bluetooth, 2.4GHz wireless, wired 3.5mm/USB-C
Battery life: 80 hours
Features: Detachable hypercardioid 16-bit/48kHz high bandwidth mic with FILTER AI noise removal, internal beamforming mics, 24-bit/96kHz high-resolution audio, patent-pending SLAM technology, Bluetooth support for Auracast, LE Audio, LDAC, and AAC
Software: Audeze App (PC and mobile)

Audeze Maxwell 2: Design & features

  • Chunky and imposing looks, but very heavy
  • Pro audio finish with impressive materials choices
  • Control layout takes some getting used to

One thing’s for certain: you’re extremely unlikely to misplace this headset. Weighing in at 17.2oz / 490g (or around 16.2oz / 460g if you remove the magnetic earcup plates) and featuring deep, luxurious cushioning around formidably large cups housing 90mm drivers, this is a strikingly solid model that conveys quality and longevity as soon as you cast your eyes over it. Brushed gunmetal finish, soft memory foam cushioning, and a new inner headband suspension strap with breathable holes combine to create an aesthetic that communicates the Maxwell 2’s mission: audiophile-grade gear in the gaming market.

I love that look, personally. I’m especially impressed by little details like the pin-sharp Audeze logos on each earcup, beneath the magnetic covers. Remove the detachable mic, and this is definitely a pair of headphones you wouldn’t mind being seen in public wearing.

There’s a downside to that: it’s an especially heavy model. Weight doesn’t have a linear relationship with discomfort, of course, and manufacturers can do plenty to minimise the effect of 17.2oz / 490g sitting across your cranium. But if you are prone to discomfort when wearing bulkier cans, this particular attribute is worth keeping in mind.

Personally, I found the comfort levels high for two to three hours of use. I do feel the weight across the top of my head, and also in the increased clamping force of the earcups around my ears, but not to such a degree that I need a break. It’s January as I write this in the UK, so heat isn’t an issue, but I could imagine the pleather earcup cushions might be more of an issue in hotter climes, as with any headset that has a lot of clamping force.

Moving on to the layout of its physical controls, redesigned for this Gen 2 model, I’m impressed overall. I love to have a physical chat mix dial on the headset, and sure enough, there’s a nice notched scroller with beautiful actuation on the rear-left earcup, just below the volume scroll wheel. On the right-hand cup are the power button and mic mute toggle switch, the latter of which is recessed so much that it can be slightly fiddly to operate, but it feels like the switch itself is of a high-quality, durable standard. The only fiddly aspect of the layout is the pairing button on the front of the left earcup, the position of which takes a little while to commit to memory.

The Audeze Maxwell 2 gaming headset lying on a pale desk with a hand tweaking an ear cushion

(Image credit: Future/Phil Iwaniuk)

Audeze Maxwell 2: Performance

  • Sound with a genuine wow factor
  • AI mic noise cancellation is hit or miss
  • App support for further tweaks

Now we’re into the section where the Maxwell 2 shines. It takes a lot of clever designers, engineers, and the right suppliers to achieve audio this good, and, particularly in the gaming vertical, most manufacturers have simply never taken the commercial risk of spending this much to reach this level of fidelity.

Audeze has the advantage of having honed its tech in the enthusiast space, giving the company a library of designs and parts to refer to when designing a gaming model. We saw the fruits of that labour in the original Maxwell, and now those same 90mm drivers with their frankly preposterous 10Hz-50kHz frequency response have been tuned further with Audeze’s SLAM technology.

The marketing materials say this technology allows for “heightened spatial immersion, precise and punchier bass response”, and I can’t argue with that. Apparently, it’s all down to the physical construction of the drivers, not a software-level boost, and that’s always firmer ground to establish audio fidelity on.

The overall fidelity standard has risen in gaming headsets lately, so the difference between contemporaries like the Razer BlackShark V3 Pro or SteelSeries Arctis Nova Pro Wireless and this model isn’t as night and day as it might have been five years ago, when ‘gaming’ models were still artificially boosting their low end at the cost of clarity. Rather, it’s that every component part of the aural landscape is that bit clearer and more emotive.

A man wearing the Audeze Maxwell 2 gaming headset showing the left side of his face and the left ear cup and microphone.

(Image credit: Future/Phil Iwaniuk)

The bass response is huge, but tight. It doesn’t overwhelm the rest of the EQ spectrum, leaving room for sparkly high-end frequencies to chime. Human speech sounds true-to-life, indicating a well-tuned midrange response, while the stereo landscape feels impressively vast when you listen to the right sources. Try out some binaural recordings, and you’ll hear what I mean.

If audio reproduction is peerless – and it is – then audio output is a different story. The mic on this Gen 2 model features AI-assisted noise cancellation, and not to blame everything on Skynet’s malicious invasion of our lives, but it doesn’t work very well in my experience.

I’ve tested this headset using every connection type available, including digital and analog wired connections, but whenever I enable the AI noise cancellation, I get a muffled sound in Discord. That’s likely due to Discord and Audeze’s noise cancellation technologies working counterproductively over the top of each other, but whatever the cause, it’s a shame to have to turn off the noise cancellation.

After some tuning, the mic sounds much better. But at this price point, the expectation is for a mic with out-of-the-box quality. Similarly, there are some other chinks in the armor when you dig around in the connectivity options. Simultaneous Bluetooth connection is only possible with a digital or analog wired connection, rather than with the 2.4GHz wireless via the dongle – another feature you’d hope for at this price.

I can’t knock the 80-hour battery life, mind you. That’s an incredible figure, and charging is easy enough via a USB-C connection just below the mic.

The Audeze Maxwell 2 lying on a pale desk showing its cups toward the camera

(Image credit: Future/Phil Iwaniuk)

Should you buy the Audeze Maxwell 2?

Buy it if...

You’ll stop at nothing for incredible sound
It was never in doubt – there’s no comparison to the fidelity, punchiness, and emotion generated by the 90mm drivers inside these earcups.

You’re all about that bass
The bass response from these drivers is like sprinting into a brick wall - with a tailwind.

You want audiophile headset looks
So long, RGB, and tribal designs. Hello to a grown-up aesthetic that you’d be happy to wear in the street.

Don't buy it if...

You can find a Gen 1 for sale instead
It’s practically just as good, and it’s available for less. Sorry, Gen 2, but it just makes sense to buy the older model.

You need the utmost mic quality
There are some issues with Discord’s noise cancellation and the AI-powered Audeze version.

Simultaneous Bluetooth and 2.4GHz wireless is a deal-breaker
If this is a must-have for you, then you'll need to look elsewhere, though you can still achieve simultaneous Bluetooth alongside a digital or analog wired connection.

Also consider...

Does this Audeze model put you ill at ease? Consider these premium wireless alternatives.

Audeze Maxwell 2
Price: $349.99 / £339.99 / around AU$520
Weight: 17.2oz / 490g
Drivers: 90mm Planar Magnetic
Compatibility: PC, Xbox Series X|S, PlayStation 5, Nintendo Switch|2, MacOS, iOS, Android
Connection type: Bluetooth, 2.4GHz wireless, wired 3.5mm/USB-C
Battery life: 80 hours
Features: Detachable hypercardioid 16-bit/48kHz high bandwidth mic with FILTER AI noise removal, internal beamforming mics, 24-bit/96kHz high-resolution audio, patent-pending SLAM technology, Bluetooth support for Auracast, LE Audio, LDAC, and AAC
Software: Audeze App (PC and mobile)

Razer BlackShark V3 Pro
Price: $249.99 / £249.99 / around AU$510
Weight: 12.9oz / 367g
Drivers: Razer TriForce Bio-Cellulose 50mm Gen-2
Compatibility: PC, Xbox Series X (Xbox version only), PlayStation 5 (PlayStation version only), iOS, Android
Connection type: Bluetooth, 2.4GHz wireless (Hyperspeed dongle), USB wired, 2.5mm wired
Battery life: 70 hours
Features: ANC, Razer HyperClear full-band 12mm mic, THX Spatial Audio
Software: Razer Audio App, Razer Synapse

SteelSeries Arctis Nova Pro Wireless
Price: $349 / £329 / AU$649
Weight: 11.85oz / 336g
Drivers: 40mm neodymium
Compatibility: Xbox Series X|S, Xbox One, PS5, PS4, Nintendo Switch, PC, Mac, Mobile
Connection type: 2.4GHz wireless (via dongle), wired (USB-C), Bluetooth 5.3
Battery life: Up to 60 hours (2 x fully-charged batteries), Infinite Power System
Features: ANC, magnetic drivers, 360-degree spatial audio, retractable ClearCast 2.X mic
Software: SteelSeries GG/Sonar (PC)

Razer BlackShark V3 Pro
Quite simply, the best all-round gaming headset on the market today. The V3 Pro version features ANC, a great mic, and a comparable 70-hour battery life, bested only in raw audio fidelity by the Audeze Maxwell 2.

For more information, check out our full Razer BlackShark V3 Pro review.

SteelSeries Arctis Nova Pro Wireless
Featuring SteelSeries’ unique dual-battery charging solution, premium looks, plus ANC implementation, the Nova Pro Wireless is a premium headset option with few faults.

For more information, check out our full SteelSeries Arctis Nova Pro Wireless review.

How I tested the Audeze Maxwell 2

  • Weeks of solid use on PC, Xbox, and Mac
  • All connection types tested
  • Put through its paces in gaming, movies, music and work calls

When a headset with audio fidelity chops as formidable as this arrives, there’s only one thing for it: you play lossless classical music, as loud as your ears can withstand, until entering a stupor. That’s stage one of testing this headset.

Given that there are several connection options and multi-device compatibility, I checked each option off to ensure functionality and fidelity. I also updated the firmware via the Audeze software before poking around in the app options.

Given that the higher weight looked like it might be an issue, I wore the Maxwell 2 all day during my workday for a week straight, which included using it for work calls. That also gave me a chance to take feedback on the mic quality using different chat clients, which is where I identified that the Discord issue isn’t a universal noise-cancelling problem.

Oh, and in case you’re wondering, Baby Steps sounds fantastic through these things.

First reviewed December 2025-January 2026

Read more about how we test

Clipchamp (2026) review
7:54 pm | January 14, 2026

Author: admin | Category: Computers Gadgets Pro | Tags: | Comments: Off

The Microsoft-owned Clipchamp is distinct from most video editors, since the main draw here is that you can edit videos in your browser (provided that browser is either Chrome or Edge).

There’s an obvious advantage to that - as long as you’re logged in to your account, you can work from any computer. There’s no need to check whether your machine has top-end specs, and you don’t need to install any additional software.

Now, this isn't going to compete with Premiere Pro, Final Cut, or any of the other best video editing software I've used. As the name suggests, it's a lot more basic than those apps, and a lot of its use depends on adding content to pre-built templates.

I took a look at how easy it is to use the tool, and whether Clipchamp has a place in the creative workflow.

Clipchamp: Pricing & plans

Using Microsoft Clipchamp to edit videos during our review

(Image credit: Microsoft // Future)
  • Generous free option with no watermarks
  • Premium subscription with Microsoft 365

Being able to edit online is one thing, being restricted to only a couple of browsers is another. I’m not a fan of being forced to work with a specific browser. Personally, I like Firefox and Safari, but Clipchamp is only compatible with Microsoft Edge and Google Chrome. If you already use these browsers, great, but if you don’t, you’ll have to decide from the outset if that restriction will put you off using this video editor.

As for the price, Clipchamp comes in two flavours: ‘Free’ and ‘Premium'.

‘Free’ is surprisingly generous: it lets you work on projects up to 1080p, gives you access to what Microsoft calls ‘AI editing tools’ for audio and video, and lets you record your computer’s screen, webcam, and audio – all without any watermark anywhere, which is pretty cool.

As for ‘Premium’, its projects can be up to 4K, and you gain access to premium stock assets, filters and effects (‘Free’ only has a basic assortment of those).

Unfortunately, though, you can’t get a Premium subscription as a standalone. Instead, Clipchamp is bundled with Microsoft 365, so if you’re not one for subscribing to business software, you’ll have to decide if Clipchamp Premium is worth getting for between $100 and $130 a year – which is quite hefty for an online video editor – or whether a tool like Canva Video might be the better pick. On the bright side, if you already subscribe to Microsoft 365, then you can have fun with Premium right now.

Clipchamp: Getting started

Using Microsoft Clipchamp to edit videos during our review

(Image credit: Microsoft // Future)
  • You absolutely need a Microsoft login for personal accounts

You can choose to use your email address, or log in through your Google or Microsoft account… except that if you choose to work on personal projects, Clipchamp will then inform you that only Microsoft accounts can do so.

And that’s after you’ve given your email address, created a password, and clicked through many, many emails and buttons.

Making the state of play clear from the get-go would’ve saved me a lot of time. It doesn’t really endear you to the service you’re about to explore.

However, I decided to put that little hiccup - something that could easily be fixed with a few lines of text at the login page - to the side, and set off exploring the online service.

Clipchamp: Interface & experience

Using Microsoft Clipchamp to edit videos during our review

(Image credit: Microsoft // Future)
  • Impressive considering it's browser-based
  • Good interface with easy to use tools
  • Experience marred by tiny preview section and lots of buffering

The home page looks fine. You’ve got a sidebar on the left to access your settings and ready-made templates, among other things, while the bulk of the page is devoted to tips and tricks encouraging you to try new features (I was offered recording from a webcam, and using digital voices to turn text into speech). You’ll also see a few featured templates, a button to edit by yourself, another to edit with the help of AI, and, at the bottom, all your previous projects.

Nothing new here really in terms of design and layout, but it’s simple and clear, which helps you get to where you wish to go.

I thought I’d try out the manual editing first, as that’s my usual bread and butter… And I must say, it works really well. To the left is a sidebar containing all available tools. From there, you have access to any media you uploaded to the service, a library of stock assets, text tools and transitions, templates (again), and a section dedicated to recording media. This includes webcam, a connected camera, your desktop, or a microphone (all of which worked really well). This is also another place where the ‘text to speech’ option can be accessed.

All well and good.

When it comes to editing, it’s all about dragging. Drag a clip from your library to the timeline to add it to your project, and repeat the process to build up your edit. Drag a clip’s edges to resize it, drag an entire clip to move it around, and select an item in the timeline for its changeable parameters to appear in a sidebar to the right. It’s all pretty intuitive and standard fare.

The one thing that annoyed me is how small the preview section is. This is generally the part of the interface that needs to be as big as possible, so you can see what you’re working on. Here, it’s tiny.

Worse still, dragging the playhead along the timeline doesn’t update what you see in that preview section, so you can’t quickly scroll to another part of your edit and carry on working: you have to wait for the buffering to end.

That’s an obvious downside to working online, but it’s also a frustrating one if you’re used to working fast. If you’re a casual editor, you might be fine with that though.

Clipchamp: Recording

Editing videos in Clipchamp, Microsoft's free video editor

(Image credit: Microsoft)
  • All options work well
  • Choose your text-to-speech narrator wisely

You get four recording options in Clipchamp: Camera, Screen, Camera & Screen, and text-to-speech. These work exactly as you’d expect - grant the app access to your mic and webcam, select which window, tab, or desktop to record, hit Share.

It’s not a bad shout if you need one of the best free screen recorders with no-fuss, no-hassle setup and use. It’s also useful for recording piece-to-camera videos and webinars.

The built-in text-to-speech software is slightly different. It’s like a robot narrator. Input content into the text field, choose a language and voice that fits your video, then tinker with the pitch and speed to create something that passes for human speech.

The variety across the board here is excellent. However, some voices were much more natural than others, closer to ‘realistic’ smart speaker voices than the usual stilted robots found in Microsoft apps. Save the sound clip and you can drag it onto your timeline like any other media.  

Clipchamp: AI editing

Using Microsoft Clipchamp to edit videos during our review

(Image credit: Microsoft // Future)
  • Not truly AI
  • Automatic algorithms, and not very clever ones at that

Now, might AI overcome some of the buffering I experienced? After all, if the algorithms do the work for you, it should be a much easier affair.

To be honest, this was one of the most disappointing aspects of Clipchamp. I can live with a bit of buffering. But the claims of AI editing are laughable.

First things first, I uploaded some footage - and that process is absolutely fine. Then I had to like or dislike a bunch of themes, or select the option ‘choose for me’.

When it comes to orientation, it’s either landscape or portrait (the wider range of options I found when editing manually wasn’t available this time round). You can choose from a handful of songs and fonts, or just accept the default selection that’s presented, and then export.

As the algorithm did its thing, I was offered various ways to save the project: save to the desktop, upload it to an online storage service such as OneDrive, Google Drive or Dropbox, or send it to social media sites such as YouTube, TikTok or LinkedIn.

Then came the big reveal.

I have to say, I wasn’t impressed with the output. Sure, everything was edited for me, but the choices were anaemic.

I uploaded widescreen shots and requested a vertical video suitable for social media. The algorithm didn’t crop my footage. It just presented it with massive black bars top and bottom. This was not what I was expecting.

The editing was also unimpressive. Oh, and the preview section during export could be bigger too (what is it with Clipchamp and tiny preview sections?).

I tried multiple times, and noticed the edit seemed to follow the order the clips were in, and it didn’t even cut to the beat of its chosen song. Really, that should be a basic feature for an AI tool.

If, like me, you’re not happy with the results, you can always ‘Keep Editing’, i.e., take the work already done by the machine and refine it to your liking in the manual editing section. That could definitely save some time. Personally, though, I’d bin the whole thing and start properly from scratch. But maybe that’s just me.

Should I buy Clipchamp?

Using Microsoft Clipchamp to edit videos during our review

(Image credit: Microsoft // Future)

Buy it if…

You’re looking for an easy way to edit online, with some simple tools that are well implemented, and best of all, the free tier doesn’t watermark your output!

Don’t buy it if…

You’re not a fan of waiting for the interface to catch up with you, you’d appreciate a bigger preview section, or you’re far from impressed by the lamentable AI features.

For more editors, we've tested and reviewed the best free video editing software and the best video editing apps.

CyberLink PowerDirector 365 (2026) review
3:26 pm |

Author: admin | Category: Computers Gadgets Pro Software & Services | Tags: | Comments: Off

When you think of the best video editing software, you tend to think of the big players like Adobe Premiere Pro, Apple Final Cut Pro, and even DaVinci Resolve. The problem is, these professional-grade tools can feel intimidating.

And that's where CyberLink PowerDirector 365 comes in. It offers high-end tools and editing workflow, wrapped up in an easy-to-understand interface that's suitable for beginners.

So, we took a look at the latest version (v24) to see how PowerDirector stacks up.

CyberLink PowerDirector 365: Price & availability

  • Competitively priced subscription
  • Often discounted

Like so many software packages these days, PowerDirector is only available via subscription. You do have a couple of options, though: pay $80 a year for it alone, or combine it with PhotoDirector for $145 annually.

That’s the basic price, but you’ll find CyberLink often offers steep discounts for its software. For instance, as of this writing, you can get these for $60 or $93 respectively.

It’s definitely much cheaper than Adobe Premiere Pro, and it would take four years of paying for PowerDirector at full price to exceed the cost of Apple’s Final Cut Pro. So price-wise, it’s pretty good.
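For the curious, that break-even claim is easy to sanity-check. A minimal sketch, assuming Final Cut Pro’s one-off US price of $299.99 (a figure not stated in this review) against PowerDirector’s $80/year full price:

```python
# Assumed prices: Final Cut Pro one-off purchase vs PowerDirector yearly subscription.
FCP_PRICE = 299.99
POWERDIRECTOR_YEARLY = 80.0

def cumulative_cost(years: int) -> float:
    """Total spent on the PowerDirector subscription after a given number of years."""
    return POWERDIRECTOR_YEARLY * years

# First year in which the running subscription total exceeds FCP's one-off cost.
break_even = next(n for n in range(1, 11) if cumulative_cost(n) > FCP_PRICE)
print(break_even)  # 4 - three years ($240) stays under $299.99; four years ($320) goes over
```

In other words, only long-term subscribers paying full price would end up spending more than a Final Cut Pro buyer.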

Even better, you can download the software and start using it for free to make sure it works as you intend it to. You’ll encounter limitations, such as a watermark on your output and a host of advanced tools and effects that are off limits, but the essential ones aren’t.

CyberLink PowerDirector 365: Interface

Using CyberLink PowerDirector 365 to edit a video for our review

(Image credit: CyberLink // Future)
  • Well-organized interface
  • Clear navigation

Launch PowerDirector and you’ll be graced with its welcome screen. From there, you can of course click on ‘New Project’ and get into the editing side of things (more on that in a minute), but that’s not all that window has to offer. You’ll find a handful of large icons, most of which offer quick drag-and-drop effects.

They’re there if you’ve already got a video clip or exported project that you wish to alter with one specific effect throughout. Click on one of those icons, a pop-up window appears, drop a clip onto it, and the software gets to work. Convenient, yes, but this isn’t editing. So let’s check out the editing side of things.

We’ve now reached a stage in interface development where, if you’ve seen one video editor, you’ve pretty much seen them all. I don’t view that as a bad thing: it makes it easy to switch between them. Aside from having a sidebar on the right instead of the left, or similar, it should take you seconds to find your way around PowerDirector’s interface.

You’ll find a list of icons, top left, which control the top third of the interface. These allow you to switch between your clips, and specific functions, such as titles, transitions, effects, and so on. Top right is the preview section; it’s linked either to your timeline (which takes up the bottom half of the interface) or any selected clip in your media section.

Unlike Premiere Pro, the interface isn’t customisable. You work with what you get. It’s even more inflexible than Final Cut Pro - and I thought FCP was strict! At the end of the day, though, that’s not entirely a bad thing: it means you can sit in front of any computer with PowerDirector installed and know where everything is. That’s a big plus in my book. The price for that familiarity is a rigid interface. Is it a price worth paying? That depends on your preference and workflow.

CyberLink PowerDirector 365: Tools

Using CyberLink PowerDirector 365 to edit a video for our review

(Image credit: CyberLink // Future)
  • Everything you need to edit a video
  • Free to add effects, transitions, and titles
  • No keyboard controls

Everything you need to edit a video project is there for you to use. The timeline has multiple layers, so you can end up making a relatively complex movie. You’ll find various animated titles, Transitions, Effects, Particles, Stickers, and more, all ready to spruce up your edit. They are all excellent and well crafted.

If you’re on a subscription, it’s all available to you, but if you’re working with the free version, you’ll encounter some serious limitations: most of these tools are ‘premium’ ones, recognisable by a small black crown inside a yellow circle at the top left of a tool’s icon. You’ll still be able to insert them into your project, but when it comes to exporting, you’ll be offered the choice of forking out to use them or having them automatically removed prior to rendering.

Some tools can’t be accessed unless you log in to your CyberLink account (which is free to set up). That’s because they’re AI-based and require credits to function. You get 100 credits per month with a subscription, and additional credit packs are discounted by 50% as long as you keep paying; you can also buy these packs at full price while using the free version. They start at 100 credits and go up to 2,000, and obviously the more you buy, the cheaper each individual credit gets.

Editing is simple, but it could be easier, mind you. Maybe it’s because I’m used to more professional editors: I use the keyboard a lot when editing, and other programs let me use the J, K and L keys to play back in reverse, stop, and play forward respectively; the left and right arrow keys move me back or forward one frame, and the up and down arrow keys jump me to the next or previous edit point…

And there are so many others. These greatly speed up my work. Unfortunately, PowerDirector doesn’t have any of those, which forces users to rely more on the mouse or trackpad. It’s not necessarily a bad thing, especially if you’re not used to such shortcuts, but the lack of options certainly is.

CyberLink PowerDirector 365: Latest updates

Using CyberLink PowerDirector 365 to edit a video for our review

(Image credit: CyberLink // Future)
  • Strong push for AI-based tools
  • AI credits required, but implementation isn't consumer-friendly
  • Devs regularly adding new features

One of the great things about PowerDirector is that new features are regularly released - whether they’re new effects to celebrate a forthcoming festive season, or new tools. At the time of review (January 2026), CyberLink is making an increasing push for AI-based tools, which are, as you’d expect, powered by separately purchased credits.

One of the newest additions is ‘Video Generator’. The way it works is, you choose a style from a list of thumbnails, add your own photo, and PowerDirector transforms it to match that style, animating it for 5 or 10 seconds for good measure.

The one that appealed to me the most was the ‘AI Anime Video Effect’, as it transforms your clip into animation. You have 17 styles to choose from, and the process is designed to turn 10, 20 or 30 seconds of video into your preferred style.

The only problem I can see with such features is that you have to pay before you see the results. You do get a tiny preview of the effect based on a placeholder image by mousing over the thumbnail, but truth be told, that’s really not enough.

What if ‘Vivid’ didn’t work as an anime style for your project, but ‘Classic’ would’ve been better? Well, you’ll have to pay again. The idea and concepts are good, but the implementation doesn’t feel consumer-friendly to me.

CyberLink PowerDirector 365: Final verdict

Using CyberLink PowerDirector 365 to edit a video for our review

(Image credit: CyberLink // Future)

CyberLink PowerDirector 365 remains one of the best video editing tools for beginners, as well as intermediate editors.

It's packed with all the tools most general users will need for content creation - and at a fraction of the price of higher-end and premium software. Especially if you manage to grab a discounted subscription. Bonus points for offering a free, if limited, option.

I like the overall workflow and the number of features that keep coming to PowerDirector. I even enjoyed using the AI tools here. But the fact that you need to keep buying credits without the ability to simply preview the AI generation means it loses a star in my review. For me, that doesn't feel fair to users.

Beyond that, though, there's not much I don't like about PowerDirector 365, especially for those who want to create professional-looking videos without the steep learning curve I often see in other video editors.

Should I buy CyberLink PowerDirector 365?

Using CyberLink PowerDirector 365 to edit a video for our review

(Image credit: CyberLink // Future)

Buy it if...
You want a video editor that is simple to use, is affordable (or even free), and gets regularly updated with new tools and fun effects, transitions, and animated texts.

Don't buy it if...
You feel you need a video editor that’s more fluid, and you’re not a fan of the ‘pay before you see’ model that’s used for the AI tools.

For more editors, we've tested and reviewed the best free video editing software.

Bixby to get a major AI upgrade with One UI 8.5
8:29 pm | January 9, 2026

Author: admin | Category: Mobile phones news | Tags: | Comments: Off

When the AI revolution began, many were expecting Samsung to upgrade its assistant Bixby with an AI brain, but that didn't come with the last couple of One UI upgrades. Now, a Reddit user has shared some screenshots from their rooted device, running One UI 8.5 and showing off a brand new, smarter Bixby.

Bixby with AI

The screenshots show that Bixby leverages Perplexity AI for most of its tasks, suggesting the earlier rumors about Samsung partnering with Perplexity were accurate. But Bixby integration doesn't stop there. It will also work with The Weather Channel, HERE Maps,...
