CraftRigs

AMD RyzenClaw vs NVIDIA DGX Spark: Which Local AI Workstation Is Worth It in 2026?

By Chloe Smith · 7 min read

Some links on this page may be affiliate links. We disclose it because you deserve to know, not because it changes anything. Every recommendation here comes from benchmarks, not budgets.

Two thousand dollars. That's the gap between these machines right now, and it opened almost overnight.

When NVIDIA quietly posted a price update to its developer forum in late February, the DGX Spark jumped from $3,999 to $4,699 — a $700 hit blamed on DRAM and NAND flash shortages squeezing the 128GB LPDDR5X unified memory inside the chassis. Meanwhile AMD had just launched its Agent Computer initiative, positioning the RyzenClaw configuration at $2,700. Same memory capacity. Nearly identical AI inference benchmarks on the workloads most people actually run. One costs 74% more than the other.

So let's be blunt: the value equation here has shifted, and it's shifted hard toward AMD.


What Actually Happened to the DGX Spark Price

NVIDIA's statement was terse. "We have adjusted the MSRP of DGX Spark (Founders Edition) due to worldwide constraints in memory supply." That's it. No roadmap update, no new specs to justify the increase, no apology. You're paying $700 more for the exact same hardware you could have bought three months earlier.

This isn't theoretical scarcity either. DDR5 RAM kits are seeing wild price swings across the board right now — Corsair and G.Skill kits hit $4,000 on Newegg at peak. Apple pulled its 512GB Mac Studio upgrade option entirely. Framework's chip shortage tracker has shown steady RAM price climbs for six weeks straight. NVIDIA's 128GB LPDDR5X pool is expensive to source, and they passed that cost straight to buyers.

The 18% price hike landed without warning. If you ordered a DGX Spark before February 27th, congratulations, you saved $700. If you're buying now, you're paying $4,699 for a machine that John Carmack publicly called "half-baked" at launch and that thermally throttled under sustained workloads until a January 2026 software update fixed it. That's a rough history for a nearly-$5,000 purchase.

Caution

The DGX Spark's January 2026 software update resolved the thermal throttling issues that plagued early units. If you're buying used or refurbished, verify the firmware is current before committing.


The RyzenClaw, Explained

AMD's RyzenClaw is a hardware configuration spec, not a branded product you can buy on a shelf. It's AMD's official recommended build for running OpenClaw — their local AI agent framework — using a Ryzen AI Max+ 395 system with 128GB of unified LPDDR5x-8000 memory, with 96GB of that pool allocated as variable graphics memory.

The reference hardware is available through multiple vendors. The Minisforum MS-S1 Max hits $2,999 at Microcenter with 128GB RAM and 2TB SSD. HP's ZBook Ultra workstation lands in similar territory. ASUS has ROG-branded configurations that skew slightly higher. The $2,700 figure represents the entry point for a capable RyzenClaw-spec machine — you can find 128GB Ryzen AI Max+ 395 mini PCs starting around that mark, though specific configurations vary.

The Ryzen AI Max+ 395 itself is a 16-core Zen 5 chip built on TSMC's 4nm process. It's x86. That matters more than most benchmarks suggest.

Note

RyzenClaw config at a glance: Ryzen AI Max+ 395 (16-core Zen 5, 3.0GHz), 128GB LPDDR5x-8000 unified memory, up to 96GB allocatable as VRAM, ~126 TOPS AI compute, Windows 11 native. Reference pricing starts around $2,700.


The Benchmark Reality

AMD published specific numbers when they launched the Agent Computer initiative. A RyzenClaw system running Qwen 3.5 35B A3B delivers roughly 45 tokens per second, processes 10,000 input tokens in about 19.5 seconds, handles a 260K token context window, and can run up to six agents concurrently. For the 122B parameter version of the same model — something you cannot fit on any discrete consumer GPU — RyzenClaw handles it. Locally. Without cloud.
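AMD's published figures are enough for some back-of-envelope latency math. A minimal sketch, using only the numbers quoted above (45 tok/s decode, 10,000 tokens prefilled in ~19.5 seconds); real throughput will vary with model, quantization, and context length:

```python
# Back-of-envelope latency estimates from AMD's published RyzenClaw figures.
# These are the article's numbers, not independent measurements.

PREFILL_TOKENS = 10_000
PREFILL_SECONDS = 19.5
DECODE_TOKS_PER_SEC = 45

# Prompt-processing (prefill) rate implied by the published benchmark.
prefill_rate = PREFILL_TOKENS / PREFILL_SECONDS  # ~513 tok/s

def job_latency(prompt_tokens: int, output_tokens: int) -> float:
    """Estimated seconds to ingest a prompt and generate a response."""
    return prompt_tokens / prefill_rate + output_tokens / DECODE_TOKS_PER_SEC

print(f"prefill rate: {prefill_rate:.0f} tok/s")
print(f"50K-token prompt + 1K-token answer: ~{job_latency(50_000, 1_000):.0f} s")
```

The takeaway: even a 50K-token prompt plus a 1K-token answer lands around two minutes end to end, which is what makes the 260K context window usable rather than theoretical.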

NotebookCheck ran a direct comparison and landed here: "In various AI benchmarks and pure inference speed, the chips are nearly on par, especially in FP16 and FP64 tasks. The memory bandwidth and many other performance figures are also identical on paper."

The DGX Spark has a real FP4 advantage — 1 petaFLOP of FP4 throughput is genuinely impressive for a desktop box, and NVIDIA's quantization tooling extracts performance AMD can't match at those lower precision levels. For FP4 inference specifically, the DGX Spark wins. But FP4 matters mainly when you're running NVIDIA's own optimized model containers, which brings us to the ecosystem problem.
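To see why low-precision formats shrink the memory and bandwidth bill, here's a deliberately simplified sketch of 4-bit quantization. Note the hedge: NVIDIA's FP4 is a floating-point (e2m1-style) format, not the plain symmetric integer scheme below; this is intuition only.

```python
# Simplified symmetric int4 quantization, for intuition about why 4-bit
# formats cut memory ~4x vs FP16. NOT NVIDIA's actual FP4 format, which is
# floating-point and has different error characteristics.

def quantize_int4(weights):
    """Map floats to 4-bit signed ints in [-7, 7] with one shared scale."""
    scale = max(abs(w) for w in weights) / 7 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.56, 0.33, 0.91, -0.07]
q, scale = quantize_int4(weights)
restored = dequantize(q, scale)
err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, f"max error: {err:.3f}")
```

Each weight now takes 4 bits instead of 16, at the cost of a bounded rounding error per value. NVIDIA's tooling advantage is in keeping that error from compounding across billions of parameters.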

AMD wins on FP16 and FP64 tasks. The gap at FP4 is real but narrower in practice than the theoretical specs suggest.


Why the Architecture Difference Matters More Than the FLOPS

The DGX Spark runs on an ARM-based Grace CPU module, part of the same Grace family NVIDIA uses in its data center Grace Blackwell systems. Powerful, but not x86. The DGX OS is Ubuntu 24.04 LTS with a light NVIDIA flavor. There's no Windows support. Legacy apps don't run. If you need anything outside the NVIDIA container ecosystem, you're doing significant work to get it running.

The RyzenClaw runs standard x86 Windows 11. OpenClaw itself uses WSL2 with LM Studio and llama.cpp for inference — a stack that also runs natively on a gaming PC, a laptop, or any other Windows machine you already own. AMD says setup takes under an hour. Based on community reports, that's accurate for anyone who's used a terminal before.
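One practical consequence of that stack: both llama.cpp's server and LM Studio expose an OpenAI-compatible HTTP API, so any script on the same machine can talk to the local model. A minimal sketch, assuming a server on LM Studio's default port 1234 and a hypothetical local model name; adjust both to your setup:

```python
# Minimal sketch of querying a local OpenAI-compatible endpoint, as exposed
# by llama.cpp's llama-server or LM Studio. The URL and model name are
# assumptions -- LM Studio defaults to port 1234; llama-server to 8080.
import json
import urllib.request
import urllib.error

payload = {
    "model": "qwen-35b-a3b",  # hypothetical local model identifier
    "messages": [{"role": "user", "content": "Summarize this repo's README."}],
    "max_tokens": 256,
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
        print(reply)
except (urllib.error.URLError, OSError) as e:
    print(f"No local server running: {e}")
```

The same script works unchanged against a cloud OpenAI-compatible endpoint, which is exactly the portability argument: your tooling doesn't know or care that inference moved on-device.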

This is the underappreciated part of the comparison. The DGX Spark is a specialized AI appliance. The RyzenClaw is a Windows workstation that happens to run 122B parameter models locally. Those are different products for different workflows.

For a look at how another ARM-based NVIDIA appliance stacks up, see the DGX Spark vs Mac Studio comparison.


AMD's Agent Computer Vision

The Agent Computer concept AMD published on March 13th is worth taking seriously, not just as marketing. The core argument: AI agents don't need cloud infrastructure. They need persistent, local compute with direct data access and no subscription ceiling.

AMD's OpenClaw framework runs Memory.md locally through embedded vectors — no cloud sync, no external API calls, no data leaving your machine. For developers running autonomous coding agents, research pipelines, or multi-agent workflows, always-on local inference at $2,700 is a fundamentally different proposition than $4,699 plus the hidden cost of NVIDIA ecosystem lock-in.

The six-concurrent-agents figure is the number I'd lean on. Orchestrating agent swarms on consumer hardware, at this price point, wasn't practical 18 months ago. That it's mundane now says something about where local AI has landed.

Tip

AMD recommends allocating 96GB of the 128GB memory pool as variable graphics memory for RyzenClaw inference workloads. This isn't the default — you need to configure it manually in the BIOS/UEFI. Don't skip this step; it has a measurable effect on throughput.
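Why 96GB specifically? A rough budget sketch shows how tight the fit is for the 122B model. The bytes-per-parameter and KV-cache figures below are illustrative assumptions, not AMD's published breakdown; real numbers depend on the quantization format and model architecture:

```python
# Rough memory budget for the 96GB variable-graphics-memory allocation.
# ~0.55 bytes/param approximates a 4-bit quant plus overhead; the per-token
# KV-cache cost is an illustrative assumption, not a published spec.

VGM_GB = 96

def weights_gb(params_billion: float, bytes_per_param: float = 0.55) -> float:
    return params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 = GB

def kv_cache_gb(context_tokens: int, kb_per_token: float = 100.0) -> float:
    return context_tokens * kb_per_token / 1e6  # KB -> GB

model = weights_gb(122)        # ~67 GB for a 4-bit 122B model
cache = kv_cache_gb(260_000)   # ~26 GB at the full 260K context
print(f"weights ~{model:.0f} GB + KV cache ~{cache:.0f} GB "
      f"= ~{model + cache:.0f} of {VGM_GB} GB")
```

Under these assumptions, the weights and a full-context KV cache together land just inside the 96GB pool, which is why skipping the allocation step (and defaulting to a smaller VRAM carve-out) can push a large model off the GPU entirely.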


When the DGX Spark Still Wins

I don't want to oversell the AMD case into absurdity. There are real scenarios where $4,699 for the DGX Spark is the right call.

If you're working in PyTorch with CUDA-optimized training loops, the NVIDIA ecosystem is still years ahead. ROCm has improved dramatically but it's not CUDA. If your job involves fine-tuning on NVIDIA's TensorRT-LLM stack or deploying containers that'll eventually move to an H100 cluster, training locally on a DGX Spark means your code ports directly. That workflow coherence has dollar value.

The 200 Gbps ConnectX-7 NIC in the DGX Spark is also real. If you're running multi-node configurations or connecting the Spark to a larger DGX Pod setup, that networking capability is relevant. The RyzenClaw has dual 10GbE, which is fine for most workloads but isn't the same thing.

And honestly — the DGX Spark's industrial design is exceptional. The champagne-gold metal chassis, the metal foam panels, the overall form factor. If you're going to put a local AI workstation on your desk and have clients or colleagues see it, the DGX Spark looks the part in a way that a Minisforum mini PC doesn't.

That's a legitimate consideration and I won't pretend otherwise.


The Verdict

At $2,700 vs $4,699, the burden of proof sits entirely on the DGX Spark. And it can't clear that bar for most buyers.

If you're a developer or researcher running local LLMs, building AI agent pipelines, doing inference work on large models, or just want a capable AI workstation that doesn't require cloud subscriptions — the RyzenClaw configuration wins. The benchmarks are close enough that the $2,000 price difference overwhelms any FLOP-counting advantage. The x86 architecture means you keep your existing software ecosystem. The Windows compatibility means you don't need to relearn your tooling.

Buy the DGX Spark if CUDA is genuinely non-negotiable for your workflow, you need the enterprise networking, or you're buying it because it's going on camera.

Buy the RyzenClaw if you want the best local AI workstation for the money in March 2026.

That's the whole equation.


Tags: ryzen-ai-max, dgx-spark, local-ai-workstation, amd-agent-computer, nvidia, 2026
