πŸ–₯ DigitalOcean GPU Infrastructure Reference

Internal one-stop shop β€” SKUs, specs, pricing, networking, regions, data centers, inference products, and roadmap.

Internal Reference Β· Updated May 2026 Β· βœ“ Pricing from DO Docs
On-Demand: Pay per second
Contract: 12-month reserved
Sold Out: No capacity
Coming Soon: Roadmap
TBD: Confirm with engineering
NVIDIA GPU Droplets
GPU Model | Config | VRAM | vCPU | RAM | Boot Disk | Scratch Disk | Size Slug | Virtualization | Mode | Status
RTX 4000 Ada | 1Γ— | 20 GB | 8 | 32 GB | 500 GiB NVMe | β€” | gpu-rtx4000ada-20gb | Passthrough | On-Demand | GA
L40S | 1Γ— | 48 GB | β€” | β€” | 500 GiB NVMe | β€” | gpu-l40s-48gb | Passthrough | On-Demand | GA
RTX 6000 Ada | 1Γ— | 48 GB | β€” | β€” | 500 GiB NVMe | β€” | β€” | Passthrough | On-Demand | GA
H100 SXM5 | 1Γ— | 80 GB | β€” | β€” | 720 GiB NVMe | 5 TiB NVMe | gpu-h100x1-160gb | Passthrough + FabricMgr svc VM | On-Demand | GA
H100 SXM5 | 8Γ— | 640 GB | 160 | 1,920 GB | 2 TiB NVMe | 40 TiB NVMe | gpu-h100x8-640gb / gpu-h100x8-640gb-contracted | Passthrough + 4Γ— NVSwitch + FabricMgr | On-Demand Β· Contract | GA
H200 SXM5 | 1Γ— | 141 GB | 24 | 240 GB | 720 GiB NVMe | 5 TiB NVMe | gpu-h200x1-141gb | Passthrough + FabricMgr svc VM | On-Demand | GA
H200 SXM5 | 8Γ— | 1,128 GB (1.1 TB) | 192 | 1,920 GB | 2 TiB NVMe | 40 TiB NVMe | gpu-h200x8-1128gb / gpu-h200x8-1128gb-contracted | Passthrough + 4Γ— NVSwitch + FabricMgr | Sold Out Β· Contract | GA
B300 (Blackwell) | 1Γ— | 288 GB HBM3e | TBD | TBD | β€” | β€” | gpu-b300x1-… | Passthrough + DOCA + CX-8 NICs | Contract | GA (single ~3/31/26)
B300 (Blackwell) | 8Γ— | 2,304 GB (2.25 TB) | TBD | TBD | β€” | β€” | gpu-b300x8-2304gb-contracted | Passthrough + 2Γ— NVSwitch + DOCA + CX-8 | Contract | Multi-node ~mid-May 2026
AMD GPU Droplets
GPU Model | Config | VRAM | vCPU | RAM | Boot Disk | Scratch Disk | Size Slug | Virtualization | Mode | Status
MI300X | 1Γ— | 192 GB HBM3 | 20 | 240 GB | β€” | β€” | β€” | SR-IOV (VF) β€” intentional | On-Demand | GA
MI300X | 8Γ— | 1,536 GB | 160 | 1,280 GB | 2 TiB NVMe | 40 TiB NVMe | β€” | SR-IOV (VF) + Infinity Fabric | On-Demand Β· Contract | GA
MI325X | 8Γ— | 2,048 GB (2 TB) | 160 | 1,280 GB | 2 TiB NVMe | 40 TiB NVMe | β€” | SR-IOV (VF) + Infinity Fabric | Contract Only | GA
MI350X | 8Γ— | 2,304 GB (2.25 TB) | TBD | TBD | 2 TiB NVMe | 40 TiB NVMe | gpu-mi350x8-2304gb | SR-IOV (VF) + Infinity Fabric (CDNA 4) | Contract Only | GA (RIC1+ATL1, Feb/Mar 2026)
MI355X ⚑ | 8Γ— | ~2,304 GB (est.) | β€” | β€” | β€” | β€” | gpu-mi355x8-2304gb (est.) | SR-IOV (VF), expected same as MI350X; liquid-cooled | Coming Soon | Q3 2026 est.
⚑ Upcoming items β€” confirm with engineering before sharing externally. Sources: Official Docs Β· #prm-gpu-do-anysphere Β· MI350X announcement
Billing: Per-second with 5-minute minimum. Powered-off Droplets still charge β€” destroy to stop billing. Monthly estimates = hourly Γ— 720 (30 days Γ— 24 hrs). Contract pricing requires 12-month commitment via sales. Source: DO Droplet Pricing Docs Β· GPU Pricing Page
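Quick math for quoting follows directly from the rule above. A minimal sketch (illustrative only; rates hard-coded from the tables below):

```python
# Illustrative billing math only; rates hard-coded from the pricing tables below.
HOURS_PER_MONTH = 720                                  # 30 days x 24 hrs

def monthly_estimate(hourly: float) -> float:
    return hourly * HOURS_PER_MONTH

def contract_discount(on_demand_hr: float, contract_hr: float) -> float:
    return 1 - contract_hr / on_demand_hr

# H100 SXM5 8x node: $23.92/hr on-demand, $15.92/hr on a 12-month contract
print(round(monthly_estimate(23.92)))                  # ~17222 -> "~$17,222/mo"
print(round(23.92 / 8, 2))                             # 2.99   -> per GPU/hr
print(round(contract_discount(23.92, 15.92) * 100))    # ~33    -> "% off" in contract summary
```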
NVIDIA β€” On-Demand Pricing
RTX 4000 Ada
1Γ— GPU Β· 20 GB VRAM Β· 8 vCPU Β· 32 GB RAM
Hourly: $0.76
Monthly (~720 hrs): ~$547
Per GPU/hr: $0.76

On-Demand Β· GA
L40S
1Γ— GPU Β· 48 GB VRAM
Hourly: $1.57
Monthly (~720 hrs): ~$1,130
Per GPU/hr: $1.57

On-Demand Β· GA
RTX 6000 Ada
1Γ— GPU Β· 48 GB VRAM
Hourly: $1.57
Monthly (~720 hrs): ~$1,130
Per GPU/hr: $1.57

On-Demand Β· GA
H100 SXM5 (1Γ—)
1Γ— GPU Β· 80 GB VRAM Β· 720 GiB boot + 5 TiB scratch
Hourly: $3.39
Monthly (~720 hrs): ~$2,441
Per GPU/hr: $3.39

On-Demand Β· GA
H100 SXM5 (8Γ—)
8Γ— GPUs Β· 640 GB VRAM Β· 160 vCPU Β· 1,920 GB RAM Β· 2 TiB + 40 TiB
Hourly (node): $23.92
Monthly (~720 hrs): ~$17,222
Per GPU/hr: $2.99

Mostly Sold Out Β· GA
H200 SXM5 (1Γ—)
1Γ— GPU Β· 141 GB VRAM Β· 24 vCPU Β· 240 GB RAM
Hourly: $3.44
Monthly (~720 hrs): ~$2,477
Per GPU/hr: $3.44

On-Demand Β· GA
H200 SXM5 (8Γ—)
8Γ— GPUs Β· 1,128 GB VRAM Β· 192 vCPU Β· 1,920 GB RAM Β· 2 TiB + 40 TiB
Hourly (node): $27.52
Monthly (~720 hrs): ~$19,814
Per GPU/hr: $3.44

Sold Out Β· GA
B300 (Blackwell)
1Γ— or 8Γ— Β· 288 GB / 2,304 GB VRAM Β· Contract only
Hourly: Contact Sales
Monthly: Contact Sales
Market 12-mo range: ~$5.65+/GPU/hr

Contract Β· GA ~2026
AMD β€” Pricing
MI300X (1Γ—)
1Γ— GPU Β· 192 GB VRAM Β· 20 vCPU Β· 240 GB RAM
Hourly: $1.99
Monthly (~720 hrs): ~$1,433
Per GPU/hr: $1.99

On-Demand Β· GA
MI300X (8Γ—)
8Γ— GPUs Β· 1,536 GB VRAM Β· 160 vCPU Β· 1,280 GB RAM Β· 2 TiB + 40 TiB
Hourly (node): $15.92
Monthly (~720 hrs): ~$11,462
Per GPU/hr: $1.99

On-Demand Β· GA
MI325X (8Γ—)
8Γ— GPUs Β· 2,048 GB VRAM Β· 160 vCPU Β· 1,280 GB RAM Β· 2 TiB + 40 TiB
Hourly (node): Contact Sales
Monthly: Contact Sales
Contract 12-mo/GPU: $1.69/GPU/hr β†’ ~$9,734/mo

Contract Only Β· GA
MI350X (8Γ—)
8Γ— GPUs Β· 2,304 GB VRAM Β· 2 TiB + 40 TiB NVMe
Hourly: Contact Sales
Monthly: Contact Sales
Per GPU/hr: Contact Sales

Contract Only Β· GA
MI355X (8Γ—) ⚑
8Γ— GPUs Β· ~2,304 GB VRAM Β· Liquid-cooled Β· Coming soon
Hourly: Not yet available
Monthly: Not yet available
Per GPU/hr: TBD

Coming Soon Β· TBD
12-Month Contract Pricing Summary
GPU | Config | Per GPU/hr (12-mo contract) | Node/hr (12-mo) | Monthly/node (est.) | Savings vs On-Demand | Notes
H100 SXM5 | 8Γ— | $1.99/GPU/hr | $15.92/hr | ~$11,462/mo | ~33% off | vs $23.92/hr on-demand
MI300X | 8Γ— | $1.49/GPU/hr | $11.92/hr | ~$8,582/mo | ~25% off | vs $15.92/hr on-demand
MI325X | 8Γ— | $1.69/GPU/hr | $13.52/hr | ~$9,734/mo | β€” | Contract-only SKU; no OD baseline
H200 SXM5 | 8Γ— | Contact Sales | Contact Sales | β€” | β€” | Contract required; confirm with sales
B300 | 8Γ— | Contact Sales | Contact Sales | β€” | β€” | Market range from $5.65/GPU/hr (12-mo); DO pricing TBD
MI350X | 8Γ— | Contact Sales | Contact Sales | β€” | β€” | Contract-only; confirm with sales
Inference Products Pricing
Product | Pricing Model | Rate | Notes
Serverless Inference | Per-token (per model) | See Model Catalog β€” varies by model | No GPU provisioning cost; billed per token only when running inference
Dedicated Inference | Per GPU-hour | Same rates as GPU Droplet hourly for selected GPU | B300, H100, H200, MI300X, MI325X, etc. β€” mirrors GPU Droplet pricing
Inference Hub | No extra charge | $0.00 | Platform access is free; pay only for Serverless or Dedicated usage
BYOM Model Storage | Flat monthly | $5.00/mo | Model weights stored in service-managed Spaces location
Pricing sourced from: DO Droplet Pricing Docs (validated Apr 7, 2026) Β· GPU Droplets Pricing Page Β· Inference Pricing Docs (validated May 1, 2026) Β· DO Blog (contract rates). All prices USD. Contract pricing via 12-month commitment through sales.
Intra-Node NVIDIA NVLink
900 GBps
450 GBps/dir; 4Γ— NVSwitches (H100/H200), 2Γ— (B300)
Inter-Node RoCEv2 Fabric
3.2 Tbps
8 Γ— 400 Gbps β€” 1 dedicated NIC per GPU
ib_write_bw Observed
~390 Gbps
Across all 400G GPU fabrics (AMD & NVIDIA)
NCCL Test β€” B300 8Γ—
~838 GB/s
ric1node350, Apr 9, 2026
NCCL Test β€” H200 8Γ—
~482 GB/s
nyc2node5101, Apr 9, 2026
Host Public (N/S)
10 Gbps
All GPU Droplets
Host Private (E/W)
25 Gbps
All GPU Droplets
H200 Host Ethernet
4Γ—100G
400 Gbps total host-side
Per-SKU Networking Details
GPU SKU | Intra-node Interconnect | Intra-node BW | Switch/Fabric | Inter-node | GPU Fabric BW | Host Ethernet | Public | Private | NIC Model | NIC–GPU Pairing
H100 SXM5 (8Γ—) | NVLink | 900 GBps | 4Γ— NVSwitches | RoCEv2 | 3.2 Tbps (8Γ—400G) | 4Γ—100 Gbps | 10 Gbps | 25 Gbps | CX-7 (1/GPU) | 1:1 GPU↔NIC rail
H200 SXM5 (8Γ—) | NVLink | 900 GBps | 4Γ— NVSwitches | RoCEv2 | 3.2 Tbps (8Γ—400G) | 4Γ—100 Gbps | 10 Gbps | 25 Gbps | CX-7 (1/GPU) | 1:1 GPU↔NIC rail
B300 (8Γ—) | NVLink (NVSwitch) | TBD | 2Γ— NVSwitches | RoCEv2 | est. 3.2 Tbps | TBD | 10 Gbps | 25 Gbps | CX-8 onboard (2 VF NICs/GPU) | 1:1 β€” DOCA drivers required
MI300X / MI325X (8Γ—) | Infinity Fabric (on-die) | 896 GB/s (bidirectional) | N/A β€” on-die | RoCEv2 | 3.2 Tbps (8Γ—400G) | β€” | 10 Gbps | 25 Gbps | SR-IOV VF (1/GPU) | 1:1 GPU↔NIC rail (VF mode)
MI350X (8Γ—) | Infinity Fabric (CDNA 4) | TBD | N/A β€” on-die | RoCEv2 | est. 3.2 Tbps | β€” | 10 Gbps | 25 Gbps | SR-IOV VF (1/GPU) | 1:1 GPU↔NIC rail (VF mode)
MI355X (8Γ—) | Infinity Fabric | TBD | N/A β€” on-die | RoCEv2 | TBD | β€” | β€” | β€” | Same as MI350X (expected) | β€”
RTX 4000 / L40S / RTX 6000 (1Γ—) | N/A (single GPU) | N/A | None | N/A | N/A | β€” | 10 Gbps | 25 Gbps | β€” | N/A
NIC–GPU Rail Architecture: Each GPU is paired 1:1 with a dedicated NIC, forming a "rail." Inter-node traffic stays on its rail: to reach GPU 5 on a remote node, local GPU 4 first moves the data to local GPU 5 over NVSwitch (Infinity Fabric on AMD), and GPU 5's paired NIC then sends it over the RoCE fabric to the remote GPU 5. There is no redundancy within a rail. "Contracted" slugs additionally pass the GPU fabric NICs through to the Droplet.
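To sanity-check collective bandwidth against the NCCL figures above on a node you have access to, a rough all-reduce probe with torch.distributed looks like the sketch below (assumes PyTorch with NCCL/RCCL and torchrun; this is not the exact harness used for the numbers above).

```python
# Rough all-reduce bandwidth probe (sketch; assumes PyTorch with NCCL, or RCCL on ROCm builds).
# Launch on one 8-GPU node with:  torchrun --nproc_per_node=8 allreduce_bw.py
import os
import time
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")                 # torchrun supplies rendezvous env vars
    rank = dist.get_rank()
    world = dist.get_world_size()
    torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", rank)))

    x = torch.ones(1 << 28, dtype=torch.float32, device="cuda")   # ~1 GiB payload per rank

    for _ in range(5):                                       # warm-up iterations
        dist.all_reduce(x)
    torch.cuda.synchronize()

    iters = 20
    t0 = time.time()
    for _ in range(iters):
        dist.all_reduce(x)
    torch.cuda.synchronize()
    avg = (time.time() - t0) / iters

    size = x.numel() * x.element_size()
    busbw = size * 2 * (world - 1) / world / avg / 1e9       # nccl-tests "bus bandwidth" convention
    if rank == 0:
        print(f"all_reduce {size / 2**30:.1f} GiB: {busbw:.1f} GB/s bus bandwidth")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```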
AMD SR-IOV Note: All AMD GPUs are exposed as Virtual Functions (VF) via SR-IOV; this is intentional, not a bug, and does not affect RoCE performance. However, hipIpcOpenMemHandle (cross-process GPU memory sharing on the same node) may fail in VF mode. An AMD ticket is open; no customer-facing workaround has been confirmed yet.
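For diagnosing that limitation, a minimal same-node reproducer sketch is below, assuming PyTorch on ROCm (sharing a GPU tensor between processes goes through the HIP IPC handles mentioned above). Confirm with engineering whether this exact path hits the VF-mode issue before using it with customers.

```python
# Reproducer sketch for same-node cross-process GPU memory sharing (PyTorch on ROCm assumed).
# Passing a GPU tensor to another process sends an IPC handle, not a copy; opening that
# handle in the child is where a VF-mode hipIpcOpenMemHandle failure would surface.
import torch
import torch.multiprocessing as mp

def child(q):
    t = q.get()                                  # opens the shared GPU memory handle here
    print("child sees sum =", t.sum().item())

if __name__ == "__main__":
    mp.set_start_method("spawn")
    q = mp.Queue()
    parent_tensor = torch.ones(1 << 20, device="cuda")   # "cuda" maps to the HIP device on ROCm
    p = mp.Process(target=child, args=(q,))
    p.start()
    q.put(parent_tensor)                         # parent keeps its reference alive until join
    p.join()
```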
B300 Special Requirement: B300 requires DOCA drivers for Mellanox CX-8 NICs. Must use gpu-h100x8-base image (poorly named but supports all hardware including B300). The gpu-h100x1-base image lacks DOCA drivers and will not work with B300.
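For reference, provisioning against the correct image through the public API looks roughly like this. This is a hypothetical sketch using the generic DO API v2 create-droplet call; the size slug and image name are taken from the tables above and should be confirmed before use.

```python
# Hypothetical sketch: create a B300 GPU Droplet via the DO API v2 (python-requests).
# Confirm the exact public slug for the gpu-h100x8-base image (the one that ships DOCA drivers).
import os
import requests

resp = requests.post(
    "https://api.digitalocean.com/v2/droplets",
    headers={"Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}"},
    json={
        "name": "b300-node-01",
        "region": "ric1",                          # B300 is RIC1-only per the region table
        "size": "gpu-b300x8-2304gb-contracted",    # contracted slug from the SKU table
        "image": "gpu-h100x8-base",                # required image: includes DOCA for CX-8 NICs
        "ssh_keys": [],                            # add key fingerprints or IDs before running
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["droplet"]["id"])
```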
Sources: #gpu-droplet Sep 2025 Β· #solutions-team-public Aug 2025 Β· #gpu-droplet Apr 2026 (SR-IOV) Β· #prm-gpu-do-anysphere Apr 2026 (B300/DOCA)
Capacity note: H100 and H200 are largely sold out on-demand. Capacity fluctuates β€” customers can sometimes grab single machines. Direct enterprise customers to sales for contracted capacity. NVIDIA GPUs currently NOT available in European regions (GDPR consideration). AMS3 (Amsterdam) mentioned on product page as a future GPU location.
GPU SKU | Config | Available Regions | Mode | Max Cluster Size | SLA | Spin-up | Notes
RTX 4000 Ada | 1Γ— | TOR1 | On-Demand | 1 | 99.5%/mo | <1 min | TOR1 only per official docs
L40S / RTX 6000 Ada | 1Γ— | TOR1 + others TBD | On-Demand | 1 | 99.5%/mo | <1 min | Confirm full region list
H100 SXM5 | 1Γ— / 8Γ— | NYC2, NYC3, TOR1, ATL1 | Sold Out / Contract | 512 GPUs / 64 nodes | 99.5%/mo | <1 min | On-demand capacity fluctuates; spot via grafana capacity monitor
H200 SXM5 | 1Γ— / 8Γ— | NYC2, ATL1 | Sold Out / Contract | 512 GPUs / 64 nodes | 99.5%/mo | <1 min | Very limited on-demand; mostly contracted
B300 | 1Γ— / 8Γ— | RIC1 | Contract Β· Sold Out | TBD | 99.5%/mo | TBD | Single-node GA ~3/31/26; multi-node ~mid-May 2026; RIC1 only
MI300X | 1Γ— / 8Γ— | NYC1, TOR1, ATL1 | On-Demand / Contract | 512 GPUs / 64 nodes | 99.5%/mo | <1 min | Dedicated Inference regions: NYC1, TOR1, ATL
MI325X | 8Γ— | ATL1, SFO2, SFO3, NYC1 | Contract Only | 512 GPUs / 64 nodes | 99.5%/mo | β€” | Active firmware upgrade program underway
MI350X | 8Γ— | RIC1, ATL1 | Contract Only | TBD | 99.5%/mo | β€” | ATL1 (Feb 2026) + RIC1 (Mar 12, 2026). On-demand option TBD.
MI355X ⚑ | 8Γ— | MEM1 | Coming Soon | TBD | β€” | β€” | MEM1 cluster referenced Apr 2026. Liquid-cooled racks. Q3 2026 est.
Sources: Region Availability Docs Β· #solutions-team-public Apr 2026 Β· #gpu-program Jan 2026
Region | Location | Launched | GPU SKUs | Purpose / Design | Key Infrastructure | Notes
ATL1 | Atlanta-Douglasville, GA | June 2025 | H200, MI325X, MI300X, MI350X | Largest DO DC at launch; AI/ML optimized; AMD Developer Cloud partnership | VAST storage; multi-room colo; full DO stack (DOKS, DBs, App Platform, LBaaS) | 80% of CPTO teams involved in buildout. Inference available (NYC1, TOR1, ATL).
RIC1 | Richmond, VA | Mar 12, 2026 | B300, MI350X | Purpose-built next-gen GPU; high-density GPU pod design | High-density B300 + MI350X pods; VAST storage; full network stack; DOKS-ready; Jammy kernel for B300 | B300 private preview completed in 6 weeks (vs 8-week "speed of light" target). Single-node GA ~3/31/26.
NYC2 | New York, NY | Legacy | H100, H200 | Legacy DC; core DO stack | H100/H200 nodes (largely sold out) | Sparse GPU capacity; H200 test nodes used by virt team
TOR1 | Toronto, Canada | Legacy | H100, RTX 4000 Ada, MI300X | Multi-GPU region; RTX 4000 Ada exclusive to TOR1 | Standard DO stack; Canadian data residency | RTX 4000 Ada only available in TOR1 as of Mar 2026
SFO2/3 | San Francisco, CA | Legacy | MI325X | AMD GPU region; active firmware upgrade efforts | MI325X fleet nodes | MI325X firmware upgrades underway; SFO3 validated Jan 2026
NYC1 | New York, NY | Legacy | MI300X | Legacy DC; MI300X + Dedicated Inference launch region | Standard DO stack | One of 3 Dedicated Inference private preview regions (NYC1, TOR1, ATL)
MEM1 ⚑ | Memphis, TN (est.) | Q3 2026 est. | MI355X | Dedicated MI355X cluster; liquid-cooled racks | TBD | ⚑ Referenced in Apr 2026 Moonshot AI POC thread. Confirm DC name/location with engineering.
Sources: #announcements ATL1 Jun 2025 Β· #announcements RIC1 Mar 2026 Β· #gpu-droplet MEM1/MI355X Apr 2026 Β· MI350X announcement Feb 2026
Product | Status | GPU Infra | Pricing | API / Access | Supported GPUs | Key Features | Docs
Serverless Inference | GA | Latest NVIDIA GPUs (DO-managed; runs on DI platform) | Per-token per model | inference.do-ai.run /v1/chat/completions | NVIDIA (Blackwell + others, managed by DO) | No GPU provisioning; OpenAI-compatible API; Day-0 model launches; unified billing; prompt caching; streaming; security-hardened defaults | Docs
Dedicated Inference | GA (Apr 30, 2026) | Private, reserved GPU instances | Per GPU-hour (same as GPU Droplet rates) | Control Panel + API | B300, H100, H200, MI300X, MI325X, etc. | No noisy-neighbor; auto-scaling; custom concurrency & sequence lengths; scale-to-zero; BYOM; LoRA adapters; speculative decoding; private & isolated; 1-click from Serverless/Playground | Docs
Inference Hub | Public Preview (Mar 2026) | Same as above | $0 for Hub; pay per usage | DO Control Panel UI | Serverless + Dedicated | Model Catalog (search/filter); Playground; built-in code snippets; single pane of glass for both modes; model evaluation; intelligent routing (roadmap) | Docs
GPU Droplets | GA | Raw VM with dedicated GPU(s) | Per GPU-hour | DO API, CLI, Control Panel, Terraform | All SKUs (H100, H200, B300, MI300X, MI325X, MI350X, RTX series, L40S) | Full root access; DOKS integration; VAST/NFS shared storage; multi-node up to 512 GPUs; 99.5% SLA; pre-installed CUDA/ROCm/PyTorch/TensorFlow | Docs
Recent Serverless Inference Model Launches (2026)
Model | Provider | API Model ID | Launch Date | Day-0? | Use Case
GLM-5 | Zhipu AI | glm-5 | Mar 19, 2026 | Day-0 (GTC) | Deep reasoning, long-context, agentic
Kimi-K2.5 | Moonshot AI | kimi-k2.5 | Mar 19, 2026 | Day-0 (GTC) | Multi-step reasoning, multimodal
MiniMax-M2.5 | MiniMax | minimax-m2.5 | Mar 19, 2026 | Day-0 (GTC) | High-volume production, coding, agents
Nemotron 3 Super | NVIDIA | nemotron-3-super | Mar 19, 2026 | Day-0 (GTC) | Fast multilingual reasoning, agentic (120B hybrid)
Arcee Trinity Large Thinking | Arcee AI | trinity-large-thinking | Apr 2, 2026 | No | Agentic, long-horizon, multi-turn tool calls
Opus 4.7 | Anthropic | opus-4.7 | Apr 17, 2026 | Day-0 | Advanced reasoning, agentic, coding
GPT Image 2.0 | OpenAI | openai-gpt-image-2 | Apr 23, 2026 | Day-0 | Image generation β€” /v1/images/generations
GPT-5.5 | OpenAI | openai-gpt-5.5 | Apr 28, 2026 | Day-0 | Autonomous multi-step agent tasks, coding
Sources: #announcements Dedicated Inference GA Apr 2026 Β· #announcements Inference Hub Mar 2026 Β· Available Models Docs
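Since Serverless Inference exposes an OpenAI-compatible endpoint (see the products table above), calling one of the models from the launch table looks roughly like this. A sketch assuming the standard openai Python client; the environment-variable name for the access key is illustrative.

```python
# Sketch of a Serverless Inference call via the OpenAI-compatible endpoint listed above.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.do-ai.run/v1",   # endpoint from the products table
    api_key=os.environ["DO_INFERENCE_KEY"],      # illustrative env var for the access key
)

resp = client.chat.completions.create(
    model="glm-5",                               # any model ID from the launch table / catalog
    messages=[{"role": "user", "content": "Summarize RoCEv2 in one sentence."}],
)
print(resp.choices[0].message.content)
```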
⚑ All roadmap items are internal signals from Slack and announcements. Confirm current status and timelines with Product/Capacity before sharing externally. Timelines shift frequently.
GPU / Feature | Vendor | Config | VRAM | Region | Target Date | Status | Notes / Source
B300 β€” Multi-node GA | NVIDIA | 8Γ— multi-node | 2,304 GB HBM3e | RIC1 | ~Mid-May 2026 | In Progress | Single-node GA ~3/31. Multi-node ~mid-May. Contract only + sold out. β€” #gpu-program Jan 2026
MI355X β€” GA | AMD | 8Γ— | ~2,304 GB (est.) | MEM1 | Q3 2026 (est.) | Coming Soon | Liquid-cooled racks. "Next quarter, DO will deploy MI355X GPUs." β€” Feb 2026 press release
B300 β€” On-Demand | NVIDIA | 1Γ— / 8Γ— | 288 GB / 2,304 GB | RIC1 | TBD | TBD | Currently contract-only + sold out. On-demand when capacity expands.
MI350X β€” On-Demand | AMD | 8Γ— | 2,304 GB | RIC1, ATL1 | TBD | TBD | Currently contract-only. On-demand under discussion. β€” #gpu-program Jan 2026
GPU Contract Customer Experience | β€” | β€” | β€” | All | H1 2026 | In Progress | 2026 priority: improve contract GPU customer experience as GPU footprint expands rapidly. Led by Jenni Griesmann. β€” #gpu-program Jan 2026
MXFP4/NVFP4 Precision Support | NVIDIA / AMD | β€” | β€” | β€” | TBD | TBD | Requires B300 or MI350X. Targets FP8 quality gap for memory-intensive inference. β€” DO Blog Apr 2026
AMS3 (Amsterdam) GPU | TBD | β€” | β€” | AMS3 | 2026+ | TBD | DO product page mentions AMS3 as a current/upcoming GPU region. First Europe GPU location. GPU type TBD.
Sources: #gpu-program Jan 2026 Β· #announcements RIC1 Mar 2026 Β· MI355X announcement Feb 2026 Β· GPU Droplets product page (AMS3)
πŸ’° On-Demand Pricing (Hourly β†’ Monthly)

RTX 4000 Ada 1Γ—  β†’  $0.76/hr (~$547/mo)
L40S / RTX 6000 Ada 1Γ—  β†’  $1.57/hr (~$1,130/mo)
MI300X 1Γ—  β†’  $1.99/hr (~$1,433/mo)
MI300X 8Γ—  β†’  $15.92/hr (~$11,462/mo)
H100 SXM5 1Γ—  β†’  $3.39/hr (~$2,441/mo)
H100 SXM5 8Γ—  β†’  $23.92/hr (~$17,222/mo)
H200 SXM5 1Γ—  β†’  $3.44/hr (~$2,477/mo)
H200 SXM5 8Γ—  β†’  $27.52/hr (~$19,814/mo)
MI325X 8Γ—  β†’  Contact Sales
MI350X 8Γ—  β†’  Contact Sales
B300  β†’  Contact Sales
πŸ“ 12-Month Contract Pricing

H100 8Γ—  β†’  $1.99/GPU/hr = $15.92/hr (~$11,462/mo) Β· 33% off OD
MI300X 8Γ—  β†’  $1.49/GPU/hr = $11.92/hr (~$8,582/mo) Β· 25% off OD
MI325X 8Γ—  β†’  $1.69/GPU/hr = $13.52/hr (~$9,734/mo)
H200 8Γ—  β†’  Contact Sales
B300  β†’  Contact Sales (~$5.65+/GPU/hr market rate)
MI350X  β†’  Contact Sales

All contracts via 12-month commitment through sales team.
⚑ Key Sales & Solutions Facts

β€’ H100/H200: largely sold out; on-demand fluctuates
β€’ B300: contract-only + sold out; single-node GA ~3/31/26
β€’ AMD GPUs use SR-IOV (VF) β€” intentional, not a bug
β€’ NVIDIA NOT available in European regions
β€’ AMS3 (Amsterdam) mentioned as upcoming GPU DC
β€’ Multi-node max: 64 nodes / ~512 GPUs
β€’ Spin-up: <1 minute (on-demand)
β€’ SLA: 99.5%/month, 5-minute intervals
β€’ Dedicated Inference provisioning: ~10–20 min (DOKS + LBaaS + model load)
πŸ— Architecture Quick Facts

β€’ NVIDIA intra-node: NVLink 900 GBps (4Γ— NVSwitches H100/H200; 2Γ— B300)
β€’ AMD intra-node: Infinity Fabric 896 GB/s (on-die)
β€’ Inter-node: RoCEv2, 3.2 Tbps (8Γ—400G), 1 dedicated NIC per GPU
β€’ ib_write_bw: ~390 Gbps observed across all 400G fabrics
β€’ NCCL B300: ~838 GB/s Β· H200: ~482 GB/s
β€’ 1Γ— NVIDIA: FabricManager svc VM isolates GPU from other customers
β€’ AMD: SR-IOV VF intentional; no RoCE performance penalty
β€’ B300: requires DOCA drivers (use gpu-h100x8-base image)
πŸ“Œ Blank Fields β€” Engineering Follow-ups

β€’ B300 vCPU, RAM, disk specs β€” confirm with engineering
β€’ MI350X vCPU, RAM β€” confirm with engineering
β€’ MI355X all specs (VRAM, vCPU, RAM) β€” TBD
β€’ L40S / RTX 6000 Ada β€” full region list
β€’ B300 exact hourly pricing β€” contact sales
β€’ MI350X / MI355X pricing β€” contact sales
β€’ MEM1 DC name/location β€” confirm with DC ops
β€’ AMS3 GPU types and timeline β€” confirm with product
Disclaimer: Compiled from internal Slack (#announcements, #gpu-droplet, #gpu-program, #solutions-team-public, #dedicated-inference-dev) and official docs as of May 4, 2026. Pricing from DO Pricing Docs validated Apr 7, 2026. Roadmap items (⚑) are indicative and subject to change.