DigitalOcean has added AMD Instinct MI350X GPUs to its Agentic Inference Cloud, giving customers a new high-performance compute option for inference workloads. The MI350X, built on AMD's CDNA 4 architecture, delivers lower latency and higher throughput than DigitalOcean's previous GPU offerings, enabling larger models and faster token generation.
The MI350X is built on a 3 nm process and pairs 288 GB of HBM3e memory with a generational uplift in AI compute over the MI300X series, positioning it as a competitor to NVIDIA's high-end data-center GPUs. DigitalOcean's announcement on February 19, 2026 follows AMD's June 2025 launch of the MI350X; the new GPU Droplets are initially available in the Atlanta data center, with expansion to additional regions planned in the coming months.
Vinay Kumar, Chief Product and Technology Officer, said the new GPUs boost performance and provide "the massive memory capacity needed to run the world's most complex AI workloads while delivering compelling unit economics." The company has already helped Character.AI double its production request throughput and cut inference costs by 50% using earlier AMD Instinct GPUs.
The launch strengthens DigitalOcean's competitive positioning against hyperscalers by offering a cost-effective, inference-optimized platform. The company plans to add AMD Instinct MI355X GPUs and liquid-cooled racks next quarter, signaling a continued commitment to high-performance compute for AI workloads.
Analysts have noted the positive market reaction to DigitalOcean's AI strategy, with upgrades from Cantor Fitzgerald and BofA Securities citing the company's focus on agentic AI assistants. The new GPU Droplets are expected to strengthen DigitalOcean's appeal to developers and SMBs seeking advanced AI infrastructure.