
Micron is planning an aggressive ramp of GDDR6 technology across the entire GPU business. The introduction of GDDR6 has significant implications for the mainstream PC graphics market, where GDDR5 remains the dominant memory architecture.

GDDR5 has had an exceptional run. First introduced in 2008, it still dominates the PC graphics industry below the $300 price point, where the bulk of GPUs are sold. All good things, however, must come to an end. With VR and 4K both pushing into the mainstream, it's time to boost memory bandwidth and capacities above the level GDDR5 provides. GDDR5X was a short-term solution for increasing bandwidth at the high end, but both AMD and Nvidia need solutions that scale up at lower total cost.

GDDR6

The diagram above (PDF) compares GDDR5X and GDDR6. While the data rate is the same between the two designs, GDDR6 has two completely independent memory channels that can read or write data as needed. This should improve overall memory efficiency. Access granularity has also improved, from 64 bytes to 32 bytes.
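The smaller access granularity falls out of the channel arithmetic. As a rough sketch (the 64-byte and 32-byte figures come from the comparison above; the burst length of 16 and the 32-bit vs. dual 16-bit channel split are assumptions based on how these standards are typically described, not Micron's diagram):

```python
def access_granularity_bytes(channel_width_bits: int, burst_length: int) -> int:
    """Bytes delivered by a single burst on one memory channel."""
    return channel_width_bits * burst_length // 8

# GDDR5X: one 32-bit channel, burst length 16 -> 64-byte accesses
print(access_granularity_bytes(32, 16))  # 64

# GDDR6: two independent 16-bit channels, burst length 16 -> 32-byte accesses
print(access_granularity_bytes(16, 16))  # 32
```

Smaller accesses mean less wasted bandwidth when the GPU only needs a few bytes from a given location, which is where the efficiency gain comes from.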

Just as GDDR5 delivered a substantial memory bandwidth improvement at every level compared with older GDDR3 cards, GDDR6 should boost the performance of lower-end and midrange cards as well.

"As GPUs crank out more bandwidth over time, the memory needs to keep up or information technology's going to become stuck," Kristopher Kido, Micron's global graphics memory managing director, told VentureBeat. "Our partners will make up one's mind how fast to run it. But it's clear that operation has to keep increasing for deep learning, autonomous vehicles, and other workloads."

Currently, GDDR6 is expected to launch with transfer rates between 12Gb/s and 16Gb/s per pin. That's significantly faster than GDDR5, though we don't know when AMD and Nvidia will adopt the memory standard. AMD's Polaris family is GDDR5-based with no news of a successor, and Nvidia has been mum about its own refresh plans in 2022. As for HBM2, it appears stuck at the high end of the market. Neither AMD nor Nvidia has stated whether it will use HBM2 for future high-end cards, and Nvidia has never deployed it in mainstream or high-end consumer GPUs either (high-end, in this case, being GPUs in the $400 to $600 range).
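To put those per-pin rates in perspective, here is the back-of-the-envelope bandwidth math. The 256-bit bus is a hypothetical midrange configuration chosen for illustration, not a spec from the article, and the 8Gb/s GDDR5 baseline is a typical top-end GDDR5 rate:

```python
def peak_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin rate times bus width, over 8 bits/byte."""
    return per_pin_gbps * bus_width_bits / 8

# Quoted GDDR6 range on an assumed 256-bit bus
print(peak_bandwidth_gbs(12, 256))  # 384.0 GB/s
print(peak_bandwidth_gbs(16, 256))  # 512.0 GB/s

# Typical fast GDDR5 (8 Gb/s) on the same bus, for comparison
print(peak_bandwidth_gbs(8, 256))   # 256.0 GB/s
```

Even at the low end of the quoted range, that's a 1.5x improvement over fast GDDR5 on an identical bus, which is why the standard matters for midrange cards and not just flagships.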

When AMD switched to HBM, it justified the move by citing the difficulty of scaling GDDR5 up to higher clock speeds and the increased power consumption that resulted. This paid off with Fury and Vega: by all accounts, both GPUs consume much less power than they would have with GDDR5 or GDDR5X. At the same time, however, GDDR6 may present a better overall profile for future designs, especially if HBM2 costs can't be brought under control.