
GDDR6X

The GPU memory standard used in high-end RTX 30- and 40-series cards, delivering up to 1,008 GB/s of bandwidth.

GDDR6X is an enhanced variant of the GDDR6 memory standard, developed by Micron and used in NVIDIA's high-end RTX 30-series and RTX 40-series GPUs. The "X" variant uses PAM4 (Pulse Amplitude Modulation with 4 levels) signaling to roughly double data throughput per pin over standard GDDR6, without requiring a wider bus.
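To make the PAM4 point concrete, here is a minimal sketch: at the same symbol rate, PAM4's four voltage levels encode two bits per symbol, where NRZ (used by standard GDDR6) encodes one. The 9.75 GBaud figure below is purely illustrative, not a quoted spec.

```python
# Per-pin throughput: NRZ (1 bit/symbol) vs PAM4 (2 bits/symbol).
# The symbol rate is an illustrative assumption, not a spec value.
def per_pin_gbps(symbol_rate_gbaud: float, bits_per_symbol: int) -> float:
    return symbol_rate_gbaud * bits_per_symbol

symbol_rate = 9.75                     # GBaud, assumed for illustration
nrz = per_pin_gbps(symbol_rate, 1)     # NRZ-style signaling
pam4 = per_pin_gbps(symbol_rate, 2)    # PAM4: 4 voltage levels = 2 bits/symbol
print(nrz, pam4)                       # PAM4 doubles throughput at the same symbol rate
```

This is why GDDR6X reaches higher data rates without a wider bus or a proportionally higher clock.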

Which Cards Use GDDR6X

GDDR6X appears on the flagship and upper-midrange cards in the RTX 30 and 40 lineups:

  • RTX 3090 / 3090 Ti — 24GB GDDR6X, 936 GB/s
  • RTX 3080 Ti — 12GB GDDR6X, 912 GB/s
  • RTX 4090 — 24GB GDDR6X, 1,008 GB/s
  • RTX 4080 Super — 16GB GDDR6X, 736 GB/s
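The bandwidth figures above follow directly from per-pin data rate and bus width (peak GB/s = Gbps per pin × bus width in bits ÷ 8). A quick sketch; the data rates and bus widths below are the commonly published figures for these cards:

```python
# Peak memory bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte.
def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

cards = {
    "RTX 3090":       (19.5, 384),  # -> 936 GB/s
    "RTX 3080 Ti":    (19.0, 384),  # -> 912 GB/s
    "RTX 4090":       (21.0, 384),  # -> 1008 GB/s
    "RTX 4080 Super": (23.0, 256),  # -> 736 GB/s
}
for name, (rate, width) in cards.items():
    print(f"{name}: {bandwidth_gbs(rate, width):.0f} GB/s")
```

Note that the RTX 4080 Super's lower total bandwidth comes from its narrower 256-bit bus, even though its per-pin data rate is the highest of the group.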

Standard GDDR6 (without the X) is used on lower-tier cards (RTX 4060 Ti and below in the 40-series), which have notably lower bandwidth. The RTX 4070, 4070 Super, and 4070 Ti also use GDDR6X, albeit on a narrower 192-bit bus.

GDDR6X vs GDDR7

GDDR7, used in the RTX 50-series, delivers roughly 1.5–2x the peak bandwidth of GDDR6X. The RTX 4090's 1,008 GB/s is still competitive but no longer class-leading. For rigs built in 2025 or later, GDDR7 cards will outperform equivalent GDDR6X hardware by a meaningful margin.

Thermal Characteristics

GDDR6X runs hotter than standard GDDR6. The RTX 3090 is the worst case (its memory chips sit on both sides of the PCB, and the rear chips can exceed 100°C), and RTX 4090 memory junction temperatures of 90°C+ under sustained load are common. This is within spec but worth noting if your inference rig runs 24/7. Good case airflow and, for some users, aftermarket cooling make a real difference.

Why It Matters for Local AI

If you already own an RTX 3090 or 4090, GDDR6X is an excellent foundation for a local LLM rig. The RTX 4090's 1,008 GB/s remains competitive with 2025's hardware, and 24GB of capacity is enough for serious 32B-class inference. Used RTX 3090s are often the best price-to-performance option for 24GB VRAM on a budget.
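A back-of-envelope calculation shows why 24GB and ~1 TB/s work well for 32B-class models. The quantization size and the "read all weights per token" model below are rough assumptions for estimation, not measurements:

```python
# Does a 32B-parameter model fit in 24 GB, and what token rate does
# memory bandwidth allow? Rough estimates under stated assumptions.
def model_gb(params_billions: float, bytes_per_param: float) -> float:
    # billions of params x bytes/param = GB (1e9 params * bytes ~ 1 GB)
    return params_billions * bytes_per_param

weights = model_gb(32, 0.5)   # 4-bit quantization ~ 0.5 bytes/param -> 16 GB
print(weights)                # leaves ~8 GB of a 24 GB card for KV cache etc.

# Decoding reads roughly the full weight set once per token, so peak
# bandwidth bounds single-stream throughput: bandwidth / model size.
bandwidth = 1008              # GB/s, RTX 4090
print(round(bandwidth / weights))  # ~63 tokens/s theoretical ceiling
```

Real-world throughput lands below that ceiling, but the estimate explains why bandwidth, not compute, is usually the bottleneck for single-user local inference.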