Rumor mill: With the RTX 4000 series now pretty much behind us, leaks and rumors are emerging about Nvidia's next generation of consumer graphics cards, the RTX 5000 line. The latest of these offers some performance indicators, including a claimed 1.7x overall uplift for the RTX 5090 compared to its predecessor. Another leak points to Nvidia finally using a multi-chiplet design in the company's high-performance compute GPUs.

Starting with the consumer series, a leaker on the Chiphell forum, Panzerlied, has posted what are claimed to be stats for the RTX 5090: a 50% increase in scale (which presumably refers to cores), a 52% increase in memory bandwidth, a 78% increase in L2 cache, a 15% increase in frequency, and a 1.7x performance uplift.

Applying those figures to the RTX 4090's published specs (16,384 CUDA cores, a 2.52 GHz boost clock, and 72MB of L2 cache) would suggest that the successor will pack around 24,000 CUDA cores, a roughly 2.9 GHz boost clock, and 128MB of L2 cache.
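
For anyone who wants to check the back-of-envelope math, here's a minimal Python sketch that applies the rumored uplifts to the RTX 4090's published figures. The percentages come straight from the leak; everything else is simple multiplication:

    # RTX 4090 baseline specs (published figures)
    CORES = 16_384      # CUDA cores
    BOOST_GHZ = 2.52    # boost clock, GHz
    L2_MB = 72          # L2 cache, MB

    # Rumored generational uplifts from the Chiphell post
    print(f"CUDA cores:  {CORES * 1.50:,.0f}")        # ~24,576, i.e. "around 24,000"
    print(f"Boost clock: {BOOST_GHZ * 1.15:.2f} GHz")  # ~2.90 GHz
    print(f"L2 cache:    {L2_MB * 1.78:.0f} MB")       # ~128 MB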

It’s also suggested that the RTX 5090’s memory will use GDDR7 boosted to 32 Gbps. The successor to the AD102 GPU is rumored to include a 512-bit memory bus, though the full bus might not be enabled on the RTX 5090. As VideoCardz notes, the card could instead come with configurations such as 512-bit/24 Gbps or 448-bit/28 Gbps, as the quick calculation below illustrates.
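
Peak memory bandwidth is simply bus width times per-pin data rate, divided by eight to convert bits to bytes, so the rumored configurations are easy to sanity-check. A minimal sketch, using the RTX 4090's 384-bit/21 Gbps setup as the baseline:

    def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
        # bits per transfer * transfers per second / 8 bits per byte -> GB/s
        return bus_width_bits * data_rate_gbps / 8

    print(bandwidth_gb_s(384, 21))  # RTX 4090 baseline: 1008 GB/s
    print(bandwidth_gb_s(512, 32))  # full 512-bit GDDR7: 2048 GB/s
    print(bandwidth_gb_s(512, 24))  # 1536 GB/s
    print(bandwidth_gb_s(448, 28))  # 1568 GB/s

Notably, the leaked 52% bandwidth increase over the 4090's 1008 GB/s works out to roughly 1,530 GB/s, which sits right between the two cut-down configurations rather than the full 2,048 GB/s setup.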

While everyone has their own theory about the next generation of Nvidia cards, it’s worth mentioning that Panzerlied has made accurate claims in the past. Moreover, these most recent rumors were “confirmed” by prolific hardware leaker kopite7kimi.

If the figures are accurate, or even close, one has to wonder what sort of price tag Nvidia will slap on the RTX 5090. Team Green was heavily criticized over its Lovelace pricing, but it’s hard to imagine the RTX 5090 being cheaper than, or even the same price as, the $1,600 RTX 4090.

Previous RTX 5000-series rumors also pointed to significant performance increases over Lovelace. There’s no word on a release date yet, though many expect the cards to land next year.

In a related story, kopite7kimi also made some claims about Nvidia’s next-gen products. He says that the Blackwell architecture will be used across both consumer and datacenter GPUs, as opposed to the current Ada Lovelace/Hopper split. Moreover, Nvidia will apparently follow Intel and AMD in using a multi-chiplet module (MCM) design for the first time in its datacenter-class GPUs.

“After the dramas of GA100 and GH100, it seems that GB100 is finally going to use MCM,” kopite7kimi wrote. “Maybe GB100=2*GB102.”

Even if Nvidia does go down the multi-chiplet route for its compute GPUs, the company is still expected to stick with a monolithic design for consumer products.