AWS vs Azure vs GCP: pricing compared across 3,000+ instances in 2026
None of the three providers is cheapest across the board. AWS tends to win on general-purpose x86 and has the deepest ARM lineup. GCP is meaningfully cheaper on GPU and on committed-use discounts for sustained workloads. Azure is competitive on 3-year reserved commitments and has the best spot discounts on specific memory-heavy SKUs. The category winners rarely agree — the provider you should pick depends on which instance family you land on.
How we got these numbers
WhichVM pulls directly from the public pricing APIs that AWS, Azure, and GCP publish. For this write-up we took a snapshot on April 15, 2026 across 3,000+ instance types and the primary commercial regions for each provider. All numbers below are on-demand Linux pricing in USD unless noted. When we say “reserved” we mean the standard no-upfront 1-year term (AWS Reserved Instances, Azure Reserved VM Instances, GCP committed use discounts).
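The extraction behind a snapshot like this is straightforward to replicate. Below is a minimal sketch against a hand-trimmed record shaped like AWS's public Price List bulk offer file (the schema is assumed from the published offer-file format; the real file is orders of magnitude larger, and the Azure and GCP APIs have different shapes):

```python
# Minimal sketch of on-demand price extraction from an AWS Price List
# bulk offer file. SAMPLE_OFFER is a hand-trimmed record in the assumed
# schema; the real file is fetched from the public pricing endpoint.
SAMPLE_OFFER = {
    "products": {
        "SKU123": {
            "attributes": {
                "instanceType": "m5.large",
                "operatingSystem": "Linux",
                "tenancy": "Shared",
                "location": "US East (N. Virginia)",
            }
        }
    },
    "terms": {
        "OnDemand": {
            "SKU123": {
                "SKU123.TERM1": {
                    "priceDimensions": {
                        "SKU123.TERM1.RATE1": {
                            "unit": "Hrs",
                            "pricePerUnit": {"USD": "0.0960000000"},
                        }
                    }
                }
            }
        }
    },
}

def on_demand_usd(offer: dict, instance_type: str) -> float:
    """Find the Linux, shared-tenancy on-demand $/hr for an instance type."""
    for sku, product in offer["products"].items():
        attrs = product["attributes"]
        if (attrs.get("instanceType") == instance_type
                and attrs.get("operatingSystem") == "Linux"
                and attrs.get("tenancy") == "Shared"):
            for term in offer["terms"]["OnDemand"].get(sku, {}).values():
                for dim in term["priceDimensions"].values():
                    return float(dim["pricePerUnit"]["USD"])
    raise KeyError(instance_type)

print(on_demand_usd(SAMPLE_OFFER, "m5.large"))  # 0.096
```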
We’re comparing listed public prices. Enterprise agreements, EDPs, private pricing addendums, and negotiated discounts will shift the picture — sometimes by 20–40%. Anyone spending seven figures a year is not paying rate card. But for the other 95% of teams, rate card is what shows up on the invoice.
General purpose: the entry tier
This is the bucket most web apps, small databases, and internal services land in. The canonical comparison is m5.large (AWS) vs D2s v3 (Azure) vs n2-standard-2 (GCP) — all 2 vCPU, 8 GB RAM, x86, all priced from the same snapshot.
Across us-east-1 / eastus / us-central1, AWS (m5.large at $0.0960/hr) and GCP (n2-standard-2 at $0.0971/hr) come in within a cent of each other on Linux on-demand. Azure’s rate card pricing for the equivalent SKU varies by region and contract type — check the live compare for current numbers, as Azure list prices can differ significantly from the other two depending on the SKU and region. On committed pricing, GCP’s 3-year CUD on N2 is aggressive enough that it becomes the cheapest general-purpose x86 option once you’re willing to commit. AWS’s Savings Plans are more flexible but the headline discount is lower.
The more interesting move in 2026 is ARM. AWS’s Graviton3 lineup — m7g, c7g, r7g — runs roughly $0.01/hr lower than their x86 equivalents, which sounds small but adds up to $7–10 per instance per month on always-on workloads. Azure’s Cobalt instances are catching up but the regional coverage is still thin. GCP’s Axion (T2A) is promising on paper but has narrower availability than Graviton.
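The monthly arithmetic behind that claim is worth making explicit. A quick sketch, using the m5.large rate from the snapshot and a hypothetical Graviton rate for illustration:

```python
# Back-of-envelope check on the Graviton delta: a ~$0.01/hr saving on an
# always-on instance, at the usual ~730 hours/month billing approximation.
HOURS_PER_MONTH = 730

def monthly_delta(x86_hourly: float, arm_hourly: float) -> float:
    return round((x86_hourly - arm_hourly) * HOURS_PER_MONTH, 2)

# m5.large from the snapshot vs a hypothetical m7g.large rate --
# check the live compare for current numbers.
print(monthly_delta(0.0960, 0.0850))  # 8.03
```

A $0.011/hr gap lands at about $8/month per instance, squarely in the $7–10 range quoted above.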
Next.js SSR, small Postgres, staging APIs, background workers: m7g.large on AWS is the value pick in 2026. 2 vCPU Graviton3, 8 GB RAM, comes in around $7–10/month cheaper than m5.large. If you’re on GCP and can commit 3 years, n2-standard-2 with CUD beats it on total cost. Azure’s D2s v3 list price varies — verify current rates before committing.
Compute-optimized: CPU-bound work
Compute-optimized lines up cleanly across providers: AWS c7g / c7i, Azure compute-optimized (F-series), GCP c3 / c3a. The ARM advantage is real but subtle on hourly rates — c7g.xlarge (Graviton3) runs about $0.01–0.014/hr below c6i.xlarge (Intel), which reads as noise on a single instance but compounds to $8–10 per instance per month on always-on fleets. For stateless CPU-bound work (build runners, video transcoding, compression, ML preprocessing) the ARM toolchain is no longer the blocker it was in 2022, making the switch straightforward.
GCP c3 is priced in line with AWS x86 but has a narrower regional footprint. For CI or batch work where you can pick the family, Graviton3 ARM beats x86 on price in 2026.
CI runners, self-hosted GitHub Actions, stateless API fleets, build farms: c7g.xlarge or c7g.2xlarge on spot pricing is the value sweet spot, and spot for CI is a near-free lunch if your jobs finish in under ~15 minutes.
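For budgeting, the spot CI math is simple. The rate and job volume below are illustrative assumptions, not quoted prices:

```python
# Rough monthly cost of a spot-backed CI fleet. Spot rate, job count,
# and job length are illustrative placeholders.
def ci_monthly_cost(spot_hourly: float, jobs_per_day: int,
                    minutes_per_job: float, days: int = 30) -> float:
    hours = jobs_per_day * minutes_per_job / 60 * days
    return round(spot_hourly * hours, 2)

# e.g. 200 jobs/day at 10 min each on a hypothetical $0.05/hr spot instance
print(ci_monthly_cost(0.05, 200, 10))  # 50.0
```

Even a busy 200-jobs-a-day pipeline is only ~1,000 compute-hours a month, which is why spot CI bills tend to be small.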
Memory-optimized: caches and analytical DBs
Memory-heavy workloads are where the provider gap is tightest at rate card. r6i.large (AWS, $0.126/hr) and n2-highmem-2 (GCP, $0.131/hr) land within ~4% of each other on Linux on-demand — effectively a coin flip on price for the same 2 vCPU / 16 GiB profile. Azure’s E-series spot prices are broadly in line with primary region rates; the bigger saving on Azure memory SKUs comes from reserved instances rather than spot.
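Sanity-checking the ~4% figure from the two quoted rates (both SKUs are 2 vCPU / 16 GiB, so the hourly gap is also the $/GiB gap):

```python
# Percent premium of the GCP rate over the AWS rate, using the
# April 2026 snapshot numbers quoted above.
R6I_LARGE = 0.126      # AWS r6i.large, $/hr
N2_HIGHMEM_2 = 0.131   # GCP n2-highmem-2, $/hr

gap_pct = round((N2_HIGHMEM_2 - R6I_LARGE) / R6I_LARGE * 100, 1)
print(gap_pct)  # 4.0
```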
For in-memory caches, the x2g / x2i families on AWS are worth flagging: cost-per-GB-RAM is lower than anything GCP publishes, and Azure’s memory-optimized lines don’t scale to the same RAM/vCPU ratios.
Redis / Memcached nodes, Postgres with heavy pg_stat pressure, analytical warehouses under 500 GB: r6i.large or r7g.large on 1-year reserved is the balanced pick. If you’re memory-bound past 256 GB RAM, go straight to x2idn — no equivalent on the other clouds lands at the same $/GB.
GPU: where the gap gets real
GPU pricing is the least consistent category and the one where picking the wrong provider costs the most. For T4 inference-class GPUs, g4dn.xlarge on AWS and NC4as_T4_v3 on Azure are close, with Azure usually a few percent cheaper on-demand. GCP’s n1-standard-4 + T4 can be the cheapest of the three but availability is spotty in 2026.
At the training tier the picture is less clear-cut. p4d.24xlarge (8× A100) on AWS, Azure’s ND96amsr_A100_v4, and GCP’s a2-highgpu-8g all sit at different price points — verify current on-demand rates on the compare page before making a training infrastructure decision, as A100 pricing has shifted with capacity availability. For H100, all three providers are supply-constrained; rate card matters less than whether you can actually get capacity.
LLM inference up to ~13B parameters, image/embedding workloads, light training: g4dn.xlarge on AWS is the default value pick — the ecosystem of prebuilt DLAMI images and the spot market are both deep. For training runs on A100, compare current rates across providers before committing — A100 on-demand pricing varies with capacity availability in 2026. For H100, pick whichever provider has inventory.
Regional anomalies you can exploit
Same SKU, different region, very different bill. A few we saw consistently in the April snapshot:
- AWS, m5.large: ap-south-1 ($0.1010/hr) is actually ~5% more expensive than us-east-1 ($0.0960/hr). Mumbai carries a premium, not a discount. If price is the priority, us-east-1 wins.
- Azure, D2s v3: westus3 vs eastus — ~5–6% cheaper in westus3 with no latency penalty for US-West-based users.
- GCP, n2-standard-4: us-central1 vs europe-west4 — Iowa is ~8–9% cheaper and still the default recommendation for most non-EU workloads.
One important note: don’t assume that the closest geographic region is the cheapest. AWS Mumbai (ap-south-1) actually costs more than N. Virginia for the same SKU — the premium goes the other way. For GCP and Azure, secondary US regions tend to be 5–9% cheaper than the primary US East regions with no meaningful latency difference for most non-latency-critical workloads.
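The Mumbai premium works out like this from the quoted snapshot rates:

```python
# Regional premium of one region's rate over another, same SKU.
def regional_premium_pct(base_hourly: float, other_hourly: float) -> float:
    return round((other_hourly - base_hourly) / base_hourly * 100, 1)

# m5.large: us-east-1 vs ap-south-1, from the April snapshot above
print(regional_premium_pct(0.0960, 0.1010))  # 5.2
```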
Commitment discounts: who wins at 1yr vs 3yr
On 1-year no-upfront, AWS and Azure are competitive on general-purpose SKUs — rate card prices differ more than the commitment discount rates, so check the actual reserved price for your specific SKU rather than assuming a provider-level pattern. GCP’s 1-year CUD structure differs from the other two; compare the effective hourly rate directly on the compare page with the pricing model filter set to reserved.
On 3-year all-upfront, the order flips. GCP’s 3-year CUD on N2 and C3 is the deepest discount of the three and usually wins on total cost. Azure’s 3-year RI on memory-optimized E-series is competitive. AWS Savings Plans are more flexible than either but the headline discount is lower.
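Comparing an all-upfront commitment against on-demand means amortizing the upfront payment into an effective hourly rate. A sketch with a placeholder upfront price, not a quoted one:

```python
# Effective $/hr of a committed term: amortize the upfront over the
# term's hours, plus any residual hourly charge during the term.
def effective_hourly(upfront_usd: float, term_years: int,
                     hourly_during_term: float = 0.0) -> float:
    hours = term_years * 8760
    return round(upfront_usd / hours + hourly_during_term, 4)

# e.g. a hypothetical $1,500 all-upfront 3-year commitment
print(effective_hourly(1500, 3))  # 0.0571
```

Run this for your actual SKU's reserved quote and compare the result against its on-demand rate; the headline "discount percentage" often hides how different the rate cards are to begin with.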
GCP custom machine types: the waste killer
One structural differentiator that rate-card comparisons miss: GCP lets you pick the exact vCPU-to-RAM ratio at provisioning time instead of forcing you into predefined t-shirt sizes. If a workload needs 2 vCPU and 5 GiB, you pay for 2 vCPU and 5 GiB — not the 8 GiB that comes with e2-standard-2, and not the 16 GiB that comes with e2-standard-4.
Concrete case: a Kubernetes pod originally sized for e2-standard-4 (4 vCPU, 16 GB) that actually uses ~5 GB of RAM can drop to e2-custom-2-5120 for a meaningful reduction on the monthly run rate. GCP charges roughly a 5% premium per vCPU/GB on custom types, but eliminating over-provisioned memory more than offsets it. For container-heavy workloads where pod requests don’t line up with any predefined SKU, this is the single strongest structural argument for GCP over the other two.
Pick by workload, not by family
The family-based breakdown above is the mental model most teams already have. But in practice you’re picking an instance for a specific workload. The quick-lookup version:
| Workload | AWS | Azure | GCP |
|---|---|---|---|
| Web app / general API | m7g.large | D2ads v5 | n2-standard-2 |
| Kubernetes workers | m7g.large | D2as v5 | e2-custom-* |
| CI / build farm (bursty) | c7g spot | Fsv2 spot | t2d spot |
| Small Postgres | r7g.large | E2ads v5 | n2-highmem-2 |
| Redis / in-memory cache | r7g / x2idn | Easv5 | n2-highmem |
| LLM inference (T4-class) | g4dn.xlarge | NC4as T4 v3 | n1 + T4 |
| LLM training (A100) | p4d.24xlarge | ND A100 v4 | a2-highgpu-8g |
| Confidential / PII | m7i + Nitro Enclaves | DCadsv6 | n2d Confidential VM |
The honest verdict
There is no “cheapest cloud.” There are cheapest instances, and which provider publishes them depends on the category:
- Web apps / general purpose / ARM-friendly workloads: AWS, pick Graviton.
- CPU-bound batch and CI: AWS Graviton3 (c7g) on spot, or Azure F-series if you need x86 and already run on Azure.
- Memory-heavy Postgres / caches: AWS r6i / r7g, or Azure E-series on spot for non-production.
- GPU inference: AWS g4dn for ecosystem depth and spot availability. For A100 training, compare current on-demand rates — pricing shifts with capacity.
- 3-year committed / steady workloads: GCP on N2 or A2.
The meta-lesson: the “which cloud” debate is mostly the wrong question. Pick the workload, then pick the instance, then let the provider fall out of that decision.
Try it yourself
WhichVM has every instance across the three providers with the same pricing snapshot we used for this article. Some useful entry points:
- The full AWS us-east-1 table — sortable by price, vCPU, memory, GPU.
- General-purpose cross-cloud compare (AWS m7g.large vs Azure Standard_D2s_v3 vs GCP n2-standard-2).
- ARM vs x86 compare — c7g.xlarge vs c6i.xlarge.
- g4dn.xlarge detail page — with per-region pricing.
Find something that looks wrong, missing, or out of date? Flag it via . All pricing on WhichVM is refreshed daily from the official provider APIs.
Built by Jayanth. More posts on the blog index.