Storage on a Budget: R2 vs GCS Cost Breakdown

Object storage looks like a simple “pay for bytes” deal until you read the fine print. You’re billed for three things: storage at rest, operations (reads/writes/metadata calls), and data leaving the platform. That last part (egress) is where budgets turn into horror stories. Here’s the core split: R2 charges for storage and requests but does not charge for public internet egress; GCS charges for all three. If your workload serves users on the open web, that policy difference often dominates everything else.

Storage classes: keep hot hot, cold cold

Both providers offer a hot tier for frequently accessed objects and colder tiers for archives. The trap is the same everywhere: cold tiers look cheap until an “archive” suddenly becomes popular and you pay retrieval fees. Use hot storage for anything on a user path, and only push to colder classes when you’re confident reads will be rare and predictable. Don’t play penny games with content people actually touch.
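
To see where the penny games break down, here’s a back-of-envelope sketch. The per-GB and retrieval prices are illustrative assumptions, not quotes from either provider; the point is the shape of the curve, since once enough of an archive gets read back, the retrieval fee erases the storage discount.

```python
# Back-of-envelope break-even for hot vs. cold storage classes.
# All prices below are illustrative placeholders, not quotes from either provider.

HOT_PER_GB = 0.020       # $/GB-month, hot class (assumed)
COLD_PER_GB = 0.004      # $/GB-month, cold class (assumed)
RETRIEVAL_PER_GB = 0.02  # $/GB read back out of the cold class (assumed)

def monthly_cost(gb_stored: float, gb_read_back: float, cold: bool) -> float:
    """Storage at rest plus retrieval fees for one month."""
    if cold:
        return gb_stored * COLD_PER_GB + gb_read_back * RETRIEVAL_PER_GB
    return gb_stored * HOT_PER_GB  # hot classes typically have no retrieval fee

if __name__ == "__main__":
    stored = 1000  # GB of "archives" (assumed)
    for read_back in (0, 100, 500, 1000):
        hot = monthly_cost(stored, read_back, cold=False)
        cold = monthly_cost(stored, read_back, cold=True)
        print(f"{read_back:>5} GB read back: hot=${hot:.2f}  cold=${cold:.2f}")
```

With these made-up rates, the cold class wins handily while reads stay rare, but once the whole archive gets read back in a month it costs more than just keeping the data hot.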

Requests and access patterns

Reads and writes aren’t free on either side. R2 groups mutating calls as a pricier class and simple reads as a cheaper class. GCS has a similar split with slightly different accounting by bucket type and region. This matters because UX choices multiply requests: image galleries that fetch one thumbnail per scroll tick, or APIs that request objects in chatty bursts, will grow the operations line even when the bytes themselves are cheap. Batch where you can, version assets immutably (great for caching), and keep an eye on how many GETs a single page view triggers.
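
As a rough illustration of how GETs-per-page-view multiply, here’s a sketch using the same per-read rates as the table further down (R2 Class B at $0.36 per million reads, GCS Class B at roughly $0.40 per million); the page-view and GET counts are assumptions.

```python
# How many GETs does one page view trigger, and what does that do to the
# monthly operations line? Read rates match the cost table later in the post.

R2_READ_PER_M = 0.36    # $ per million Class B reads
GCS_READ_PER_M = 0.40   # $ per million Class B reads (== $0.0004 per 1,000)

def monthly_read_cost(page_views: int, gets_per_view: int, rate_per_million: float) -> float:
    reads = page_views * gets_per_view
    return reads / 1_000_000 * rate_per_million

views = 500_000  # monthly page views (assumed)
for gets in (2, 20, 60):  # e.g. one hero image vs. a chatty thumbnail grid
    print(f"{gets:>2} GETs/view: "
          f"R2=${monthly_read_cost(views, gets, R2_READ_PER_M):.2f}  "
          f"GCS=${monthly_read_cost(views, gets, GCS_READ_PER_M):.2f}")
```

The dollar amounts stay small in isolation; the lesson is that a 30x difference in GETs per page view is a 30x difference in the operations line, regardless of provider.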

Egress and the CDN reality check

This is the budget killer. With R2, serving objects directly to the public doesn’t incur egress fees. With GCS, it does, at tiered per-GB rates that scale with traffic. Yes, you can hide a lot of that behind a CDN, and you should. A well-tuned CDN shrinks origin egress by turning most traffic into cache hits. But that’s another moving part to engineer and monitor, and cache misses still backhaul to the origin. The contrast is simple: R2 starts at zero for internet egress and lets you add a CDN for performance; GCS expects a CDN to keep the bill sane.
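
Here’s a sketch of that dynamic, using the $0.12/GB first-tier GCS egress rate from the table below and an assumed amount of traffic; only cache misses backhaul to the origin and show up as paid egress.

```python
# Effect of CDN cache hit ratio on the origin egress bill.
# $0.12/GB matches the first GCS egress tier used in the cost table below;
# R2 public internet egress is $0 regardless of hit ratio.

GCS_EGRESS_PER_GB = 0.12

def origin_egress_cost(total_gb_served: float, cache_hit_ratio: float) -> float:
    """Only cache misses backhaul to the origin and get billed as egress."""
    misses_gb = total_gb_served * (1 - cache_hit_ratio)
    return misses_gb * GCS_EGRESS_PER_GB

served = 2_000  # GB served to users this month (assumed)
for hit in (0.0, 0.80, 0.95, 0.99):
    print(f"hit ratio {hit:.0%}: GCS origin egress ≈ "
          f"${origin_egress_cost(served, hit):.2f}, R2 = $0.00")
```

A 95% hit ratio cuts the example bill from $240 to $12, which is exactly why GCS deployments lean so hard on the CDN; the R2 column stays at zero either way.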

Free tiers and realistic scenarios

Free tiers decide how expensive your prototype period feels. R2’s free allocation (storage + a very generous read quota) lets many small apps run without paying while they find their audience, and zero egress means spikes from social shares don’t instantly become a tax. GCS’s Always Free is helpful but smaller and region-limited. It’s easy to burn through the free operations on read-heavy workloads and then fall straight into paid egress the moment traffic gets interesting.

Think through two common patterns. First, a modest app: a few dozen to a few hundred gigabytes stored, six figures of writes per month (uploads/updates), a couple million reads, and a few hundred gigabytes of user downloads. On R2, you’re paying storage + requests. On GCS, the egress line usually becomes the largest single number on the invoice. Second, a weekend growth spurt: half a terabyte stored, millions of reads, and multiple terabytes of downloads because a link caught fire. R2’s bill scales with storage and requests; GCS’s bill is now egress-dominated. Neither case is exotic; they’re what “mildly popular” looks like.

Performance, ecosystem, and migration

Latency is largely a CDN story in 2025. R2 benefits from Cloudflare’s global network and plays nicely with edge compute. GCS performs well from the regions you choose and pairs tightly with Google’s CDNs. The more important difference is ecosystem gravity. If your team already lives in Google Cloud (IAM, logs, transfer jobs, lifecycle rules, dual/multi-region replication), then GCS will feel like home. If your tooling, SDKs, and policy mental model are S3-shaped, R2 is a low-friction fit.

You don’t have to pick forever on day one. A pragmatic trial is “public assets first”: sync a slice of objects to R2, point only the public asset domain at it, and measure. If the curve is better (it usually is for bandwidth-heavy paths), migrate more. Access controls (signed URLs, bucket policies) exist in both worlds; choose the one that matches how your team already reasons about permissions. Durability claims and SLAs are table stakes at this tier. Your real availability will be dominated by CDN hit ratios and your retry/backoff policies.
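
If you want to kick the tires on the “public assets first” idea, R2 speaks the S3 API, so a standard S3 SDK such as boto3 can mint signed URLs against it. The endpoint, bucket, key, and credentials below are placeholders, not real values.

```python
# A minimal sketch of issuing a time-limited signed URL during a
# "public assets first" trial. R2 exposes an S3-compatible endpoint,
# so boto3's presigned URLs work; all identifiers here are placeholders.

import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://<account_id>.r2.cloudflarestorage.com",  # placeholder
    aws_access_key_id="<r2_access_key_id>",          # placeholder
    aws_secret_access_key="<r2_secret_access_key>",  # placeholder
    config=Config(signature_version="s3v4"),
)

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "public-assets", "Key": "img/hero.webp"},  # placeholder names
    ExpiresIn=3600,  # link valid for one hour
)
print(url)
```

The same mental model (a signed URL with an expiry) carries over to GCS’s signed URLs, which is part of why the access-control story shouldn’t be the deciding factor.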

Cost estimation

Reading all of that in prose isn’t much fun, so let’s compare the two side by side instead.

Scenario 1 assumptions: 100 GB stored, 100k writes, 2M reads, 200 GB egress
  R2: storage 100 × $0.015 = $1.50; Class A 0.1M × $4.50/M = $0.45; Class B 2M × $0.36/M = $0.72; egress $0.00 → total $2.67
  GCS: storage 100 × $0.020 = $2.00; Class A 100k × $0.005/1k = $0.50; Class B 2M × $0.0004/1k = $0.80; egress 200 GB × $0.12 = $24.00 → total $27.30

Scenario 2 assumptions: 500 GB stored, 1M writes, 50M reads, 5 TB egress
  R2: storage 500 × $0.015 = $7.50; Class A 1M × $4.50/M = $4.50; Class B 50M × $0.36/M = $18.00; egress $0.00 → total $30.00
  GCS: storage 500 × $0.020 = $10.00; Class A 1M × $0.005/1k = $5.00; Class B 50M × $0.0004/1k = $20.00; egress (1,000 GB × $0.12) + (4,000 GB × $0.11) = $560.00 → total $595.00
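
If you’d rather poke at the numbers yourself, here’s a small script that reproduces the arithmetic above with the same rates; like the table, it ignores free-tier allowances.

```python
# Reproduces the cost arithmetic above. Prices are the same ones the
# scenarios use; free-tier allowances are ignored, as they are above.

def r2_monthly(gb_stored, writes, reads, egress_gb):
    storage = gb_stored * 0.015            # $/GB-month
    class_a = writes / 1_000_000 * 4.50    # mutating calls, per million
    class_b = reads / 1_000_000 * 0.36     # reads, per million
    egress = 0.0                           # no public internet egress fee
    return storage + class_a + class_b + egress

def gcs_monthly(gb_stored, writes, reads, egress_gb):
    storage = gb_stored * 0.020            # $/GB-month
    class_a = writes / 1_000 * 0.005       # per 1,000 operations
    class_b = reads / 1_000 * 0.0004       # per 1,000 operations
    first_tier = min(egress_gb, 1_000) * 0.12   # first ~1 TB of egress
    rest = max(egress_gb - 1_000, 0) * 0.11     # next egress tier
    return storage + class_a + class_b + first_tier + rest

scenarios = [
    ("modest app", 100, 100_000, 2_000_000, 200),
    ("growth spurt", 500, 1_000_000, 50_000_000, 5_000),
]
for name, gb, writes, reads, egress in scenarios:
    print(f"{name}: R2 ≈ ${r2_monthly(gb, writes, reads, egress):.2f}, "
          f"GCS ≈ ${gcs_monthly(gb, writes, reads, egress):.2f}")
```

Swap in your own storage, request, and egress figures to see which line item dominates your bill before you commit to a migration.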

The pragmatic choice

Whatever you choose, put meters on it. Track bytes stored, request counts by type, origin egress (or cache fill), and CDN hit ratio. Alert on step changes. For GCS, distinguish “egress avoided by CDN” from “egress paid.” For R2, watch read volume as traffic climbs so request costs don’t quietly overtake storage.
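
“Alert on step changes” can start out very simple. Here’s a minimal sketch that flags a daily metric jumping well above its trailing average; the window and threshold are arbitrary assumptions you’d tune for your own traffic.

```python
# Crude step-change detector for a daily metric (egress GB, read count, etc.):
# flag today's value if it exceeds the trailing average by a factor.
# The 7-day window and 2x threshold are arbitrary assumptions.

def step_change(history: list[float], today: float,
                window: int = 7, factor: float = 2.0) -> bool:
    recent = history[-window:]
    if not recent:
        return False
    baseline = sum(recent) / len(recent)
    return today > baseline * factor

daily_egress_gb = [40, 42, 39, 45, 41, 44, 43]  # made-up last week
print(step_change(daily_egress_gb, today=180))   # True: a link caught fire
```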

The decision framework is simple:

  • If your workload is public facing and bandwidth heavy, R2 produces a flatter, more legible bill: storage + requests, with no popularity tax.
  • If your workload is tightly coupled to Google Cloud’s ecosystem and you’re already running a high hit ratio CDN in front of storage, GCS can still make sense; the operational ease may outweigh raw cost.
  • If you’re unsure, migrate only the internet-facing assets to R2 and compare real invoices for a month. Numbers beat guesswork.
