The new Titan is here. The RTX 3090 may not come with the traditional name, but the $1500 price tag and 24GB of GDDR6X memory show that this is a firmly ‘prosumer’ GPU, designed for scientists and content creators who’ll consider the card a cheaper alternative to Nvidia’s professional Quadros – and as you’ll see in the final verdict, that puts us firmly in the frame as target customers. The RTX 3090 is also pitched as a gaming monster, capable of driving an 8K display at a time when even 4K has yet to be broadly adopted. That makes it a fascinating candidate for review, as there’s so much to cover – that unique triple-slot cooler, the fully-enabled GA102 GPU inside and Ampere’s efficient architecture, all combined into what Nvidia promises to be the world’s fastest graphics card. We’ll put that claim to the test in our newly redesigned gauntlet of gaming benchmarks, along with our impressions of the card’s hardware design and power efficiency.
As we noted in our RTX 3080 review, Ampere represents an important moment for Team Green. After going through the pain of introducing new features like RTX and DLSS last generation atop a modest performance increase, the 30-series cards are a chance to back up these innovations with the kind of raw speed that requires no buy-in from gamers or developers to appreciate.
Nvidia has relied on a die shrink to accomplish this performance boost, moving from TSMC’s 12nm process to Samsung’s 8nm. If you compare the RTX 3090 and the RTX 2080 Ti, the new card has 50 per cent more transistors – yet its die is 20 per cent smaller, which unlocks both efficiency and performance improvements. Combine this with architectural changes like next-gen ray tracing and tensor cores and faster, more efficient GDDR6X memory, and you have a recipe for a seriously capable graphics card. In the case of the RTX 3080, the combination resulted in a massive increase in horsepower, but what do you get for your extra $700 with the RTX 3090?
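Those headline figures imply a healthy jump in transistor density. A quick sanity check using the rounded percentages above (illustrative ratios, not exact die specifications):

```python
# Density gain implied by the quoted figures: ~50 per cent more
# transistors on a die ~20 per cent smaller. These are rounded,
# illustrative ratios rather than exact die measurements.
transistor_ratio = 1.50   # RTX 3090 transistor count vs RTX 2080 Ti
area_ratio = 0.80         # new die area relative to the old one

density_gain = transistor_ratio / area_ratio
print(f"Approximate transistor density gain: {density_gain:.2f}x")
```

In other words, roughly 1.9 times as many transistors per unit of silicon area, which is where much of the efficiency headroom comes from.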
Part of that extra cash has no doubt gone into the thermal solution. We were impressed with the RTX 3080 FE’s novel ‘flow-through’ cooler, with one fan on each side of the card, but the 3090 FE takes it to another level. It boasts a genuine triple-slot design (rather than the ‘two and three quarters’ we see more commonly) and tipped our kitchen scales at 2.2 kilograms. This graphics card is longer than the Xbox Series X is tall (we’ve checked!) and you may struggle to fit it into a standard case – so if you’re planning to get one, check your case clearances carefully. The card’s pennant-shaped PCB leaves the entire back end of the card free for a hefty aluminium fin stack, and combined with the card’s inflated volume you get substantial cooling performance. We don’t have the equipment for rigorous thermal testing, but we saw the card spend much of its time in the mid 60s to low 70s degrees Celsius, putting it up there with some of the best third-party cooling solutions.
The RTX 3090 Founders Edition is one of two graphics cards to support Nvidia’s new 12-pin power standard, alongside the RTX 3080 FE. The 12-pin connector looks odd at first glance, but it’s cleverly designed to tuck an unprecedented amount of power into a single port – one that’s smaller than the dual or triple eight-pin inputs you’d normally expect to find on a graphics card of this calibre. The 3090’s PCB only extends to about the middle of the card, so that’s where the 12-pin input is located too. You’ll get a handy – if slightly too short to be sightly – 12-pin to dual eight-pin adapter in the box, so you don’t need to worry about upgrading your power supply if you’re at or above the 750W recommendation. Thankfully, makers of modular power supplies are selling or giving away dedicated full-length cables terminating in the new 12-pin connector if you prefer a tidier look.
Like the RTX 3080, the RTX 3090 has a full complement of high-bandwidth ports, including one HDMI 2.1 and three DisplayPort 1.4a connections on the Founders Edition card we’re testing. The HDMI 2.1 port is critical for high-resolution, high refresh rate gaming on TVs and next-gen monitors, as its 48Gbps of bandwidth allows for 10 or 12-bit colour at 4K resolution and a refresh rate of 120Hz. It’ll also allow an 8K connection at 60Hz – something that Nvidia has already leaned into heavily in their marketing of the RTX 3090, despite a vanishingly small number of 8K displays on the market.
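A back-of-the-envelope calculation shows why that 48Gbps figure matters. The sketch below computes raw, uncompressed video bandwidth only – it deliberately ignores blanking intervals and HDMI 2.1’s link-encoding overhead, so treat the numbers as rough illustrations rather than signalling-accurate figures:

```python
# Rough uncompressed video bandwidth, ignoring blanking intervals
# and link-encoding overhead (real HDMI 2.1 signalling differs, so
# these are illustrative figures only).
def raw_bandwidth_gbps(width, height, refresh_hz, bits_per_channel, channels=3):
    bits_per_frame = width * height * bits_per_channel * channels
    return bits_per_frame * refresh_hz / 1e9

print(f"4K120, 10-bit: {raw_bandwidth_gbps(3840, 2160, 120, 10):.1f} Gbps")
print(f"8K60,  10-bit: {raw_bandwidth_gbps(7680, 4320, 60, 10):.1f} Gbps")
```

Even on this simplified maths, 4K120 at 10-bit fits comfortably within 48Gbps, while 8K60 at 10-bit exceeds it – which is why higher bit depths at 8K lean on features like Display Stream Compression rather than raw bandwidth alone.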
The 4K120 or 8K60 support meshes well with Ampere’s newfound support for AV1 decoding, an emerging video standard that is up to 50 per cent more efficient than the prevalent H.264 codec. That should significantly cut the bandwidth required to stream high-resolution and/or high frame-rate video, and allows up to 8K 60fps video content to be delivered. Chrome, Twitch and VLC are all confirmed to support AV1 decoding with the RTX 3090, although a precise rollout date for each service has yet to be announced.
The 3090 is also the only 30-series card to feature an NVLink port, allowing two 3090s to be hooked together in SLI – although this is intended more for data-crunching than gaming. Nvidia has announced that it won’t create any SLI driver profiles after January 2021, so expect the gaming applications to shrink from ‘limited’ to ‘non-existent’ going forward. One thing you won’t find on the 3090 is a USB-C VirtualLink port, as this standard sadly never found mainstream adoption amongst VR headsets.
Nvidia’s initial Ampere reveal included some impressive claims, and one in particular stood out: the company stated that Ampere was extraordinarily power efficient, outperforming the respectable Turing architecture watt-for-watt and achieving almost double the performance of Turing at the same (relatively low) power envelope. Rather than accepting better performance at a similar power spec and calling it a day, Nvidia has maximised the amount of power available to the new GPUs – hence the new 12-pin connector we saw earlier. As with the RTX 3080, Nvidia recommends RTX 3090 owners use a 750W power supply, and rates the card at 350W – 30W higher than the RTX 3080.
To see how the RTX 3090’s power usage and efficiency compares to the RTX 3080 and earlier GPUs like the Titan RTX, RTX 2080 Ti and RX 5700 XT, we measured their power draw and frame-rates in Death Stranding and Gears 5, both at maximum settings. The measurements were obtained using Nvidia’s power capture analysis tool (PCAT), which sits between the graphics card and its power sources (e.g. the PCIe slot and the GPU’s eight-pin or 12-pin power inputs), measuring the wattage before passing it through to the card. The testing here is relatively simplistic – we measured frame-rate and power draw in specific scenes.
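The ‘watts per frame’ comparisons that follow boil down to a simple division of those two measurements. A minimal sketch of the metric – the wattage and frame-rate values here are placeholders, not our measured results:

```python
# Minimal sketch of the 'watts per frame' efficiency metric.
# The power and frame-rate figures below are hypothetical
# placeholders, not our measured results.
def watts_per_frame(avg_power_w, avg_fps):
    return avg_power_w / avg_fps

card_a = watts_per_frame(avg_power_w=350.0, avg_fps=100.0)  # hypothetical card
card_b = watts_per_frame(avg_power_w=320.0, avg_fps=88.0)   # hypothetical card

# Relative efficiency: how much more power per frame card B needs.
extra = (card_b / card_a - 1) * 100
print(f"Card B uses {extra:.1f}% more watts per frame than card A")
```

A lower figure means the card converts each watt into more frames – which is why a higher-TDP card can still come out ahead if its performance scales faster than its power draw.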
The results here are fascinating. Despite sporting a higher TDP, the 3090 actually consumes fewer watts to generate a single frame than the 3080 in the titles we tested: in Death Stranding, the RTX 3080 requires four per cent more power per frame, and this drops to just under three per cent in Gears 5. This may sound implausible bearing in mind the beastly nature of the RTX 3090 – but while the new card’s total graphics power is just over nine per cent higher overall (350W versus 320W), it is capable of a performance bump in excess of that number.
The next power band along is the RTX 2080 Ti and the Titan RTX, which consume 11 and 14 per cent more watts per frame, respectively, in our static Death Stranding scene. There’s another leap to the RX 5700 XT, AMD’s fastest card at the time of publication (if you discount the short-lived Radeon VII), which requires nearly 20 per cent more watts per frame – and this is a title that typically runs well on AMD hardware. The Gears 5 test shows a much tighter field – we’re much closer to the Turing cards in efficiency terms, but AMD’s ‘watts per frame’ figure is much, much higher.
Regardless, the order stays the same, with the 3090 being the most efficient card we’ve tested, followed closely by the RTX 3080, then the 2080 Ti and Titan RTX some distance after that, and the RX 5700 XT bringing up the rear. We expected the RTX 3070 to be a more efficient performer than the 3080, given its more modest power envelope, but we never expected to see the 3090 achieve the same feat.
As we mentioned in our RTX 3080 review, we’ve chosen to update our test rig for the first time in a few years to be ready for both Nvidia and AMD’s upcoming graphics cards. We’ve swapped out our Intel Core i7 8700K and Asus Maximus XI Extreme Z390 motherboard for the new Core i9 10900K, installed on a high-end Asus Maximus XII Extreme Z490 motherboard. Our memory has increased in frequency slightly, with the new rig using two 8GB sticks of G.Skill Trident Z Royal running at 3600MHz CL16. These faster components should reduce CPU bottlenecking at lower resolutions.
Our storage needs have also outgrown our prior setup, so we’ve opted for a 2TB Samsung 970 Evo Plus NVMe drive provided by Box. The system is powered by a 1000W Corsair power supply, so we comfortably exceed the 750W recommendation for the 3090 – which will presumably be the most power-hungry GPU we’ll see in the immediate future.
Note that we considered opting for a Ryzen 9 3950X test bench to take advantage of the new GPU’s PCIe 4.0 capabilities, but our investigations revealed higher frame-rates at lower resolutions by sticking with an Intel test bed – and more importantly, equal performance at 4K even with the RTX 3080 being limited to PCIe 3.0. We’ll continue to monitor the situation in the future, but for now all signs point towards raw CPU horsepower having a greater effect on performance than PCIe bandwidth outside of a few specific scenarios.
With all that said and done, it’s time to sink our teeth into some game benchmarks. Click the quick links below, or hit the Next button to get started.