Amazon Elastic Block Store (EBS) Guide [2025]: Pricing & Performance
If EC2 instances are your servers, Amazon Elastic Block Store (EBS) is the hard drive that keeps your data safe — even when the server stops.
I’ve seen teams lose hours of work because they didn’t understand this simple truth. They spun up an EC2 instance, deployed their application, and when the instance terminated… everything vanished. The database, the logs, the uploaded files — gone.
That’s when Amazon EBS becomes your best friend in AWS.
In this guide, I’ll walk you through everything you need to know about Amazon Elastic Block Store — from the fundamentals to production-grade performance tuning. Whether you’re preparing for the AWS SAA-C03 exam or building real infrastructure, this is the EBS knowledge that actually matters.
What is Amazon EBS?
Amazon Elastic Block Store (EBS) provides persistent block-level storage volumes for EC2 instances. Think of it as a network-attached hard drive that exists independently of your EC2 instance’s lifecycle.
Here’s what makes EBS special:
Persistence — Your data survives instance stops, starts, and even terminations (if configured correctly). Unlike the temporary storage that comes with some EC2 instances, EBS volumes stick around.
Flexibility — You can detach an EBS volume from one instance and attach it to another. Need to move your database to a bigger server? Just detach, attach, and you’re done.
Snapshots — EBS lets you create point-in-time backups that are stored in S3. These snapshots are incremental, so you only pay for changed data after the first backup.
Encryption — With AWS KMS integration, encrypting your data at rest and in transit is straightforward. No custom encryption logic needed.
EBS vs Instance Store: The Critical Difference
This trips up a lot of beginners, so let me be clear.
Instance Store (also called ephemeral storage) is physically attached to the host machine running your EC2 instance. It’s blazing fast, but here’s the catch — when your instance stops or terminates, that data is gone forever. It’s like RAM for storage.
EBS volumes are network-attached. They’re slightly slower than Instance Store (we’re talking milliseconds here), but your data persists. For databases, application state, or anything you can’t afford to lose, EBS is the answer.
💡 Real-World Scenario: Imagine running a MySQL database on EC2. If you use Instance Store, stopping the instance (or a failure of the underlying host) could wipe your entire database; Instance Store data survives a reboot, but not a stop or termination. With EBS, your data survives stops and restarts, and you can even create automated backups with snapshots.
How Amazon EBS Works (Architecture Overview)
Understanding the architecture helps you make better decisions about volume types and performance.
EBS Volumes
An EBS volume is essentially a virtual disk that lives in a specific Availability Zone (AZ). When you create a volume, you choose the AZ, size, volume type, and performance characteristics.
Key points about volumes:
- Each volume exists in one AZ only — you can’t attach a volume in us-east-1a to an instance in us-east-1b
- Volumes range from 1 GiB up to 16 TiB for most volume types (io2 Block Express supports up to 64 TiB)
- You can attach multiple volumes to a single EC2 instance
- Most volume types support only one instance attachment at a time (with exceptions)
EBS Snapshots
Snapshots are point-in-time copies of your EBS volumes stored in Amazon S3. They’re incremental — the first snapshot captures everything, but subsequent snapshots only store the blocks that changed since the last snapshot.
This makes snapshots cost-effective for backup strategies. A 500 GB volume with 50 GB of daily changes won’t cost you 500 GB per snapshot after the first one.
EBS Multi-Attach
For specific use cases, Multi-Attach allows you to attach a single io1 or io2 volume to up to 16 Nitro-based EC2 instances in the same AZ simultaneously.
This is useful for:
- Clustered applications that manage concurrent write operations
- High-availability scenarios where multiple nodes need shared storage access
⚠️ Important: Multi-Attach requires your application to handle concurrent write coordination. EBS doesn’t manage write conflicts for you.
EBS Encryption
EBS encryption uses AWS Key Management Service (KMS) to encrypt your data. When you enable encryption on a volume:
- Data at rest is encrypted
- Data moving between EC2 and EBS is encrypted
- Snapshots created from the volume are encrypted
- Volumes created from those snapshots are encrypted
The encryption happens transparently — no code changes required in your application.
EBS Lifecycle
The typical lifecycle looks like this:
- Create a volume (or restore from snapshot)
- Attach to an EC2 instance
- Format the filesystem (first time only), then mount it
- Use — read/write your data
- Snapshot — create backups as needed
- Detach — when moving to another instance
- Delete — when no longer needed
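The create/attach/snapshot steps of this lifecycle can be sketched with boto3. The instance ID and device name below are placeholders; in practice you would also wait for the volume to reach the `available` state before attaching, and formatting/mounting happens on the instance itself, outside the AWS API:

```python
def provision_and_backup(ec2, az: str, size_gib: int):
    """Walk the first API steps of the EBS lifecycle with an EC2 client:
    create a gp3 volume, attach it to an instance, then snapshot it."""
    vol = ec2.create_volume(AvailabilityZone=az, Size=size_gib,
                            VolumeType="gp3")
    # Real code should wait for the volume state to become 'available' here.
    ec2.attach_volume(VolumeId=vol["VolumeId"],
                      InstanceId="i-0123456789abcdef0",  # placeholder
                      Device="/dev/xvdf")                # placeholder
    snap = ec2.create_snapshot(VolumeId=vol["VolumeId"],
                               Description="first backup")
    return vol["VolumeId"], snap["SnapshotId"]

# With boto3 (requires AWS credentials):
#   import boto3
#   provision_and_backup(boto3.client("ec2"), "us-east-1a", 100)
```

Passing the client in as a parameter keeps the function easy to test with a stub and easy to point at different regions or accounts.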
EBS Volume Types Explained (2025 Updates)
Choosing the right volume type is one of the most important decisions you’ll make. Let me break down each option with real use cases.
gp3 — General Purpose SSD (Recommended Default)
gp3 is the current generation general-purpose SSD and should be your default choice for most workloads.
| Specification | Value |
|---|---|
| Baseline IOPS | 3,000 (included) |
| Baseline Throughput | 125 MiB/s (included) |
| Max IOPS | 16,000 |
| Max Throughput | 1,000 MiB/s |
| Size Range | 1 GiB – 16 TiB |
Why gp3 wins: Unlike gp2, you can provision IOPS and throughput independently of volume size. Need 10,000 IOPS on a 100 GB volume? No problem. With gp2, you’d need a 3,334 GB volume to get the same performance.
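A quick sketch of that sizing math (the helper names are mine, not an AWS API):

```python
import math

def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline: 3 IOPS per GiB, floor of 100, cap of 16,000."""
    return min(max(3 * size_gib, 100), 16_000)

def gp2_size_for_iops(target_iops: int) -> int:
    """Smallest gp2 volume (GiB) whose baseline meets target_iops."""
    return math.ceil(target_iops / 3)

# A 100 GiB gp2 volume only gets 300 baseline IOPS...
print(gp2_baseline_iops(100))      # 300
# ...while hitting 10,000 IOPS on gp2 requires a 3,334 GiB volume.
print(gp2_size_for_iops(10_000))   # 3334
```

On gp3, the same 10,000 IOPS can simply be provisioned on a 100 GiB volume.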
Use cases: Boot volumes, development environments, small to medium databases, general application workloads.
gp2 — General Purpose SSD (Legacy)
gp2 is the previous generation. It’s still available, but there’s rarely a reason to choose it over gp3.
The main difference: gp2 IOPS scale with volume size (3 IOPS per GiB, up to 16,000). For small volumes, you get burst credits that allow temporary performance spikes.
My recommendation: Migrate existing gp2 volumes to gp3. You’ll likely get better performance at lower cost.
io1/io2 — Provisioned IOPS SSD
When you need guaranteed, consistent performance, io1 and io2 are your options.
| Specification | io1 | io2 |
|---|---|---|
| Max IOPS | 64,000 | 64,000 (256,000 with io2 Block Express) |
| Max Throughput | 1,000 MiB/s | 4,000 MiB/s (Block Express) |
| Durability | 99.8% – 99.9% | 99.999% |
| Multi-Attach | Yes | Yes |
io2 Block Express is the latest option, offering the highest performance for mission-critical databases.
Use cases: Production databases (MySQL, PostgreSQL, Oracle), latency-sensitive applications, workloads requiring sustained IOPS.
💡 Pro Tip: io2 offers better durability than io1 at the same price. If you’re using io1, consider switching to io2.
st1 — Throughput Optimized HDD
st1 is designed for frequently accessed, throughput-intensive workloads where you care more about sequential read/write speed than random I/O.
| Specification | Value |
|---|---|
| Baseline Throughput | 40 MiB/s per TiB |
| Max Throughput | 500 MiB/s |
| Max IOPS | 500 |
| Size Range | 125 GiB – 16 TiB |
Use cases: Big data workloads, data warehouses, log processing, streaming workloads like Kafka.
Not suitable for: Boot volumes, databases requiring random I/O.
sc1 — Cold HDD
sc1 is the lowest-cost option, designed for infrequently accessed data.
| Specification | Value |
|---|---|
| Baseline Throughput | 12 MiB/s per TiB |
| Max Throughput | 250 MiB/s |
| Max IOPS | 250 |
| Size Range | 125 GiB – 16 TiB |
Use cases: Archive data, infrequent backups, scenarios where cost matters more than performance.
🤔 Reflection: Which volume type should you use for your next EC2 deployment? For most cases, start with gp3. Only move to io2 if you need guaranteed high IOPS, or to st1/sc1 if throughput and cost matter more than latency.
EBS Snapshots & Backup Strategy
A solid backup strategy can save your job — I’ve seen it happen. Let’s cover how to do it right.
How Incremental Snapshots Work
The first snapshot of a volume copies all the data. Every snapshot after that only stores the blocks that changed since the previous snapshot.
Here’s the beautiful part: each snapshot is still a complete, independent backup. If you delete an intermediate snapshot, AWS automatically consolidates the data so your remaining snapshots stay valid.
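The incremental idea can be illustrated with a toy Python sketch. This is nothing like the real EBS block store, just the bookkeeping: each snapshot stores only changed blocks, yet still represents a complete point-in-time view:

```python
def incremental_snapshot(volume, prev_view):
    """Return (stored_blocks, full_view): the blocks this snapshot must
    store, and the complete point-in-time view it represents."""
    stored = {i: b for i, b in volume.items()
              if prev_view is None or prev_view.get(i) != b}
    return stored, dict(volume)

vol = {0: "a", 1: "b", 2: "c"}                    # toy volume: block -> data
stored1, view1 = incremental_snapshot(vol, None)  # first snapshot: full copy
vol[1] = "B"                                      # one block changes
stored2, view2 = incremental_snapshot(vol, view1) # stores only that block
print(len(stored1), len(stored2))  # 3 1
print(view2)                       # {0: 'a', 1: 'B', 2: 'c'}
```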
Crash-Consistent vs Application-Consistent Snapshots
Crash-consistent snapshots capture the volume exactly as it is at that moment — like pulling the power cord on a server. The filesystem might have uncommitted writes in memory.
Application-consistent snapshots ensure the application has flushed all data to disk before the snapshot. For databases, this means freezing writes momentarily or using native backup tools first.
For production databases: Always use application-consistent snapshots. Flush the database cache, pause writes if possible, then snapshot.
Automating with Amazon Data Lifecycle Manager (DLM)
Manually creating snapshots doesn’t scale. Data Lifecycle Manager automates the entire process.
With DLM, you can:
- Schedule snapshots on a recurring basis (hourly, daily, weekly)
- Automatically delete old snapshots based on retention rules
- Copy snapshots to other regions for disaster recovery
- Apply tags for cost tracking
Example DLM Policy:
- Take daily snapshots at 3 AM UTC
- Retain snapshots for 30 days
- Copy to a secondary region
- Tag with “Environment: Production”
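That example policy maps to the `PolicyDetails` document you would pass to DLM's CreateLifecyclePolicy API. A hedged sketch of the structure (the target region, tags, and schedule name are examples to adapt):

```python
# Illustrative DLM policy document: daily 03:00 UTC snapshots of volumes
# tagged Environment=Production, retained 30 days, copied to us-west-2.
# Field names follow the DLM CreateLifecyclePolicy schema; values are examples.
policy_details = {
    "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "Environment", "Value": "Production"}],
    "Schedules": [{
        "Name": "DailyBackup",
        "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS",
                       "Times": ["03:00"]},
        "RetainRule": {"Count": 30},
        "CrossRegionCopyRules": [{
            "Target": "us-west-2",          # DR region (example)
            "Encrypted": True,
            "RetainRule": {"Interval": 30, "IntervalUnit": "DAYS"},
        }],
        "TagsToAdd": [{"Key": "Environment", "Value": "Production"}],
    }],
}
# boto3: boto3.client("dlm").create_lifecycle_policy(..., PolicyDetails=policy_details)
print(policy_details["Schedules"][0]["RetainRule"]["Count"])  # 30
```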
Cross-Region and Cross-Account Snapshot Copying
For disaster recovery, you’ll want snapshots in multiple regions. You can copy snapshots:
- Across regions — Protect against regional outages
- Across accounts — Separate backup accounts for security
⚠️ Security Note: When copying encrypted snapshots across accounts, you need to share the KMS key or re-encrypt with a key the target account can access.
Restoring Volumes from Snapshots
Creating a volume from a snapshot is straightforward, but there’s a performance consideration.
When you create a volume from a snapshot, the data is loaded lazily from S3. The first read of any block will be slower as it’s fetched from S3.
For production workloads: Enable Fast Snapshot Restore (FSR) to pre-warm volumes. Or, read all blocks before putting the volume into production (you can use tools like dd or fio for this).
❓ Quiz: Do snapshots back up the entire volume or only the changed blocks?
Answer: After the initial full snapshot, subsequent snapshots only back up changed blocks. But each snapshot still represents a complete point-in-time copy.

EBS Encryption Explained
Encryption isn’t optional anymore — it’s expected. Here’s how EBS encryption works in practice.
KMS-Managed Encryption
When you enable encryption on an EBS volume, AWS uses a KMS key (historically called a Customer Master Key, or CMK) to encrypt the data encryption key. You can use:
- AWS managed key (`aws/ebs`) — AWS creates and manages it for you
- Customer managed key — You create and control the key, including rotation and access policies
Enabling Default Encryption
You can enable EBS encryption by default at the account level for each region. Once enabled, all new EBS volumes and snapshots are automatically encrypted.
This is a governance best practice — no one can accidentally create an unencrypted volume.
To enable: Go to EC2 Console → Account Attributes → EBS Encryption → Enable
Encrypted Snapshots and Volumes
The encryption relationship is straightforward:
- Encrypted volume → Encrypted snapshot
- Encrypted snapshot → Encrypted volume
- Unencrypted snapshot → You can create an encrypted volume (by copying with encryption enabled)
Cross-Account Key Usage
If you share an encrypted snapshot with another account, they need access to the KMS key to use it. You have two options:
- Share your KMS key with the other account
- Have them copy the snapshot and re-encrypt with their own key
🔐 Compliance Note: For PCI-DSS, HIPAA, and SOC2 compliance, encryption at rest is typically required. EBS encryption with KMS-managed keys satisfies these requirements when configured correctly.
Performance Optimization
Performance issues with EBS usually come down to misunderstanding the relationship between volume characteristics and workload patterns.
Volume Size vs IOPS Relationship
For gp2 volumes, IOPS scale with size: 3 IOPS per GiB, minimum 100 IOPS, maximum 16,000 IOPS.
For gp3 volumes, baseline 3,000 IOPS is included regardless of size. You can provision up to 16,000 IOPS independently.
The takeaway: With gp3, you don’t need to over-provision storage just to get more IOPS.
gp3 Throughput Tuning
gp3 includes 125 MiB/s baseline throughput. For throughput-heavy workloads, you can provision up to 1,000 MiB/s.
Calculate your needs:
- Streaming large files? Increase throughput
- Running a database with small random reads? Focus on IOPS instead
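Because IOPS and throughput are independent knobs on gp3, a live volume can be retuned with the EC2 ModifyVolume API. A sketch of the parameters (the volume ID is a placeholder):

```python
# Parameters for EC2 ModifyVolume: raise a gp3 volume to 10,000 IOPS and
# 500 MiB/s without changing its size. Volume ID is a placeholder.
modify_kwargs = {
    "VolumeId": "vol-0123456789abcdef0",  # placeholder
    "VolumeType": "gp3",
    "Iops": 10_000,      # up to 16,000
    "Throughput": 500,   # MiB/s, up to 1,000
}
# With boto3: boto3.client("ec2").modify_volume(**modify_kwargs)
print(modify_kwargs["Iops"], modify_kwargs["Throughput"])  # 10000 500
```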
RAID Strategies on EC2
For workloads exceeding single-volume limits, you can combine multiple EBS volumes using RAID:
RAID 0 (Striping) — Combines volumes for additive IOPS and throughput. No redundancy — if one volume fails, you lose everything. Use for temporary high-performance workloads where data can be recreated.
RAID 1 (Mirroring) — Writes data to two volumes simultaneously. Provides redundancy but no performance gain. Use when you need protection beyond EBS’s built-in durability.
⚠️ Avoid RAID 5/6 on EBS — the parity writes create significant overhead and aren’t worth the complexity.
Monitoring with CloudWatch Metrics
Key EBS metrics to monitor:
- VolumeReadOps/VolumeWriteOps — Total operations
- VolumeReadBytes/VolumeWriteBytes — Data transferred
- VolumeTotalReadTime/VolumeTotalWriteTime — Latency
- VolumeQueueLength — Operations waiting (high values indicate bottleneck)
- BurstBalance — For gp2, shows remaining burst credits
Set alarms on VolumeQueueLength — a sustained high queue length means your volume can’t keep up with demand.
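A sketch of the parameters you might pass to CloudWatch's PutMetricAlarm for that queue-length alarm (the alarm name, threshold, and SNS topic ARN are examples to adapt):

```python
# Parameters for CloudWatch PutMetricAlarm: alert when VolumeQueueLength
# stays high for 15 minutes straight. Names and values are examples.
alarm_kwargs = {
    "AlarmName": "ebs-queue-depth-high",          # example name
    "Namespace": "AWS/EBS",
    "MetricName": "VolumeQueueLength",
    "Dimensions": [{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,              # seconds per datapoint
    "EvaluationPeriods": 3,     # 3 x 5 min = 15 min sustained
    "Threshold": 8.0,           # tune to your workload
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:ops-alerts"],  # example
}
# With boto3: boto3.client("cloudwatch").put_metric_alarm(**alarm_kwargs)
print(alarm_kwargs["MetricName"])  # VolumeQueueLength
```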
Burst vs Baseline Performance
gp2 volumes under 1 TiB can burst to 3,000 IOPS using burst credits. Once credits are depleted, performance drops to baseline.
gp3 doesn’t have burst mechanics — you get consistent performance based on your provisioned values.
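AWS documents the gp2 credit bucket as 5.4 million I/O credits, refilled at the baseline rate. A small sketch of how long a full bucket can sustain a 3,000 IOPS burst:

```python
def gp2_burst_minutes(size_gib: int, burst_iops: int = 3_000) -> float:
    """Minutes a gp2 volume can sustain burst_iops from a full credit
    bucket (5.4 million I/O credits), given that credits refill at the
    baseline rate of 3 IOPS per GiB (floor 100)."""
    baseline = min(max(3 * size_gib, 100), 16_000)
    if baseline >= burst_iops:
        return float("inf")             # no bursting needed at >= 1,000 GiB
    drain_rate = burst_iops - baseline  # net credits consumed per second
    return 5_400_000 / drain_rate / 60

print(round(gp2_burst_minutes(100)))  # 33 (minutes at 3,000 IOPS)
```

Once the bucket empties, the volume drops back to its baseline until credits accumulate again, which is exactly the sawtooth you see in BurstBalance graphs.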
🤔 Reflection: Is your EC2 application throttled by EBS? Check CloudWatch for BurstBalance depletion or high queue lengths. If you see issues, consider switching to gp3 with higher provisioned IOPS.
EBS Use Cases in Real DevOps Work
Let me share some patterns I’ve used in production environments.
Database Workloads
MySQL/PostgreSQL on io2: For production databases, io2 provides the consistent low-latency performance databases need. Provision IOPS based on your query patterns — typically 50-100 IOPS per connection for OLTP workloads.
Snapshot strategy: Use database-native tools to create consistent dumps, then snapshot the volume. Alternatively, use AWS Backup with database-aware agents.
Container Workloads (EKS)
EBS CSI Driver enables persistent volumes in Kubernetes. StatefulSets like databases or message queues can attach EBS volumes that survive pod restarts.
Configuration: Use gp3 StorageClass with appropriate IOPS for your workload. Remember — the volume stays in one AZ, so configure your workload accordingly.
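That configuration can be sketched as a StorageClass for the EBS CSI driver. The name and performance values below are illustrative, not recommendations:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-fast              # example name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "6000"                # example: above the 3,000 baseline
  throughput: "250"           # MiB/s, example value
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer  # provision in the pod's AZ
allowVolumeExpansion: true
```

`WaitForFirstConsumer` matters precisely because of the single-AZ constraint: the volume is only created once Kubernetes knows which AZ the consuming pod landed in.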
CI/CD Build Runners
Build runners benefit from EBS for caching dependencies. A gp3 volume storing your npm/pip/maven cache dramatically speeds up builds.
Tip: Use a dedicated EBS volume for the cache directory. This separates cache data from the boot volume, making it easier to snapshot and restore.
Web Servers
For web servers, gp3 handles application logs, uploaded files, and static assets efficiently. The baseline 3,000 IOPS is plenty for most web workloads.
Log rotation: Implement log rotation and ship logs to CloudWatch or S3 to prevent volumes from filling up.
Analytics Workloads
st1 for data lakes: When you’re processing large sequential datasets, st1’s throughput optimization makes sense. A cluster of EMR nodes reading from st1 volumes can efficiently process terabytes of data.
EBS Security & Monitoring
Security isn’t just about encryption — it’s about access control and visibility.
IAM Permissions for EBS
Control who can create, attach, modify, and delete volumes with IAM policies. Key actions to manage:
- `ec2:CreateVolume`
- `ec2:AttachVolume`
- `ec2:DetachVolume`
- `ec2:DeleteVolume`
- `ec2:CreateSnapshot`
- `ec2:DeleteSnapshot`
Use resource tags and conditions to limit access. For example, only allow developers to create volumes in development accounts.
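As an illustration of tag-based conditions, here is a sketch of an IAM statement that only permits deleting volumes tagged as development (the tag key and value are assumptions for the example):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DeleteDevVolumesOnly",
    "Effect": "Allow",
    "Action": ["ec2:DeleteVolume", "ec2:DetachVolume"],
    "Resource": "arn:aws:ec2:*:*:volume/*",
    "Condition": {
      "StringEquals": {"aws:ResourceTag/Environment": "development"}
    }
  }]
}
```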
Snapshot Sharing Risks
When you share a snapshot, you’re sharing all the data on it. Before sharing:
- Ensure no sensitive data exists on the volume
- Consider creating a sanitized copy specifically for sharing
- Use encryption with shared KMS keys for controlled access
🚀 Security Tip: Never share snapshots publicly unless you absolutely intend to. Public snapshots are discoverable and downloadable by anyone with an AWS account.
GuardDuty and CloudTrail
GuardDuty can detect suspicious activity related to EBS, like unusual snapshot creation patterns or unexpected API calls from new locations.
CloudTrail logs all EBS API calls. Set up alerts for:
- `DeleteVolume` on production resources
- `ModifySnapshotAttribute` (changing sharing settings)
- `CreateSnapshot` from unauthorized principals
AWS Config for Compliance
Use AWS Config rules to enforce EBS policies:
- `encrypted-volumes` — Ensures all volumes are encrypted
- `ebs-optimized-instance` — Checks that instances are EBS-optimized
- `ebs-snapshot-public-restorable-check` — Alerts on publicly shared snapshots
Common Mistakes to Avoid
Learn from others’ mistakes — these are issues I’ve encountered repeatedly.
Using gp2 instead of gp3 — gp3 offers better performance at lower cost for most workloads. Unless you have a specific reason, always choose gp3.
Orphaned EBS volumes — When you terminate an EC2 instance, the root volume might be deleted, but additional volumes often aren’t. These orphaned volumes continue accruing charges. Audit regularly.
Relying only on volume-level redundancy — EBS replicates data within an AZ, but that doesn’t protect against AZ failures or accidental deletion. Always maintain snapshot backups.
Unencrypted snapshots — Even if your volumes are encrypted, check that snapshots are too. Enable default encryption at the account level to prevent gaps.
Using HDD volumes for databases — st1 and sc1 are optimized for sequential throughput, not random I/O. Databases need IOPS — use gp3 or io2.
Ignoring DeleteOnTermination — By default, root volumes are deleted when instances terminate, but additional volumes aren’t. Set DeleteOnTermination = true for volumes that should be cleaned up automatically, or false for data that must persist.
⚠️ Callout: Always set `DeleteOnTermination` intentionally. The default behavior can lead to either data loss or forgotten orphaned volumes.
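The orphaned-volume audit mentioned above is easy to script. A minimal sketch, assuming a boto3 EC2 client: unattached volumes report the `available` state:

```python
def find_orphaned_volumes(ec2_client):
    """Return IDs of EBS volumes in the 'available' state, i.e. not
    attached to any instance but still accruing storage charges."""
    resp = ec2_client.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )
    return [v["VolumeId"] for v in resp["Volumes"]]

# With boto3 (requires AWS credentials):
#   import boto3
#   print(find_orphaned_volumes(boto3.client("ec2")))
```

For large fleets, use the `describe_volumes` paginator rather than a single call, and run the audit per region.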
EBS Pricing Breakdown
Understanding pricing helps you optimize costs without sacrificing performance.
Volume Pricing
You pay per GB-month for provisioned storage:
- gp3: ~$0.08/GB-month + $0.005/provisioned IOPS (above 3,000) + $0.04/provisioned MB/s (above 125)
- gp2: ~$0.10/GB-month
- io1: ~$0.125/GB-month + $0.065/provisioned IOPS
- io2: ~$0.125/GB-month + $0.065/provisioned IOPS
- st1: ~$0.045/GB-month
- sc1: ~$0.015/GB-month
Prices vary by region — check current AWS pricing.
Snapshot Pricing
Snapshots are stored in S3 and priced at ~$0.05/GB-month. Remember, incremental snapshots mean you’re only paying for changed data after the first snapshot.
Cross-Region Copy Costs
Copying snapshots across regions incurs data transfer charges plus storage in the destination region. Plan for these costs in your disaster recovery budget.
Cost Optimization Example
Scenario: You have a 500 GB gp2 volume with consistent 5,000 IOPS requirement.
gp2 cost: 500 GB × $0.10 = $50/month (but gp2 only delivers 1,500 IOPS at this size — you’d need 1,667 GB for 5,000 IOPS = $166.70/month)
gp3 cost: 500 GB × $0.08 = $40/month + 2,000 extra IOPS × $0.005 = $10/month = $50/month total
By switching to gp3, you get the performance you need at a fraction of the cost.
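The comparison is easy to script. A sketch using the approximate us-east-1 list prices quoted above:

```python
import math

GP2_PER_GB, GP3_PER_GB = 0.10, 0.08          # approx. us-east-1 list prices
GP3_PER_EXTRA_IOPS, GP3_FREE_IOPS = 0.005, 3_000

def gp2_monthly_cost(size_gb: int, required_iops: int) -> float:
    """gp2 ties IOPS to size (3 IOPS/GB), so you pay for whichever
    dimension forces the bigger volume."""
    size_needed = max(size_gb, math.ceil(required_iops / 3))
    return round(size_needed * GP2_PER_GB, 2)

def gp3_monthly_cost(size_gb: int, required_iops: int) -> float:
    """gp3 bills size and IOPS independently (first 3,000 IOPS included)."""
    extra_iops = max(0, required_iops - GP3_FREE_IOPS)
    return round(size_gb * GP3_PER_GB + extra_iops * GP3_PER_EXTRA_IOPS, 2)

print(gp2_monthly_cost(500, 5_000))  # 166.7
print(gp3_monthly_cost(500, 5_000))  # 50.0
```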
Conclusion
EBS is the backbone of persistent storage in AWS. When you understand EBS well, you build faster, safer, and more predictable infrastructure.
Here’s what to remember:
- Choose gp3 as your default volume type — it’s cost-effective and flexible
- Use io2 for production databases that need guaranteed performance
- Automate snapshots with Data Lifecycle Manager — manual backups don’t scale
- Enable default encryption at the account level — no exceptions
- Monitor CloudWatch metrics to catch performance issues before users notice
- Clean up orphaned volumes — they’re easy to forget and expensive over time
Whether you’re running a simple web server or a complex database cluster, EBS provides the storage foundation you need. Master these concepts, and you’ll handle any storage challenge AWS throws at you.
👉 Take the Free Amazon EC2 & EBS Hands-On Course to master volumes, snapshots, and performance tuning with real labs.
Frequently Asked Questions
What is Amazon EBS?
Amazon Elastic Block Store (EBS) is a high-performance block storage service designed for use with Amazon EC2 instances. It provides persistent storage that exists independently of EC2 instance lifecycles, making it suitable for databases, file systems, and applications requiring durable storage.
What are the types of EBS volumes?
AWS offers five EBS volume types: gp3 and gp2 (General Purpose SSD), io1 and io2 (Provisioned IOPS SSD), st1 (Throughput Optimized HDD), and sc1 (Cold HDD). Each type is optimized for different workload characteristics, from high-performance databases to infrequently accessed archives.
How do EBS snapshots work?
EBS snapshots are point-in-time backups stored in Amazon S3. The first snapshot copies all data, while subsequent snapshots are incremental — only capturing blocks that changed since the previous snapshot. Each snapshot remains a complete, independent backup that can restore a full volume.
What is the difference between EBS and Instance Store?
EBS volumes are network-attached persistent storage that survives instance stops and terminations. Instance Store is physically attached to the host and provides temporary storage — data is lost when the instance stops. Use EBS for data that must persist; use Instance Store only for temporary caches or scratch data.
Is Amazon EBS encrypted?
EBS supports encryption at rest and in transit using AWS KMS keys. You can enable default encryption at the account level, ensuring all new volumes and snapshots are automatically encrypted. Encrypted snapshots can only create encrypted volumes, maintaining security throughout the data lifecycle.
