AWS EBS Deep Dive: gp2 vs gp3 vs io1 — Hands-On AWS Lab for Beginners

Introduction

Let me tell you something that still haunts me from my early AWS days. A client’s production database went from snappy to sluggish overnight. The culprit? They’d launched an RDS instance backed by gp2 storage, burned through burst credits during a traffic spike, and watched their IOPS crater to baseline. Nobody understood why the database “randomly” became slow every afternoon.

This lab exists because EBS misunderstandings cause real outages. I’ve seen teams overpay by 40% using io1 when gp3 would’ve been perfect. I’ve watched engineers lose data because they thought terminating an instance wouldn’t touch “their” EBS volume. These aren’t hypotheticals—they’re Tuesday.

What you’ll learn:

  • The actual performance and cost differences between gp2, gp3, and io1
  • How to create, attach, mount, and expand EBS volumes without downtime
  • Snapshot creation and restoration that actually works
  • Production-grade habits that prevent 3 AM pages

Who should do this lab: Anyone touching EC2 in production. If you’re studying for the Solutions Architect exam, this is foundational. If you’re a DevOps engineer inheriting AWS infrastructure, you need this yesterday.

Common mistakes I see constantly:

  • Choosing gp2 “because it’s the default” without understanding burst credits
  • Attaching volumes in the wrong Availability Zone (yes, this fails silently in weird ways)
  • Forgetting that snapshots are incremental but restore creates full volumes
  • Never testing snapshot restores until disaster strikes

Lab Overview

You’re going to build something practical: an EC2 instance with multiple EBS volume types attached, formatted, and mounted. Then you’ll expand a volume online, snapshot it, and restore from that snapshot.

High-level workflow:

  1. Create three EBS volumes (gp2, gp3, io1) in your EC2 instance’s AZ
  2. Attach all three to your running instance
  3. Format and mount them to different directories
  4. Expand one volume without rebooting
  5. Create a snapshot
  6. Restore a new volume from that snapshot and validate data integrity

Skills you’ll walk away with:

  • Confident EBS volume lifecycle management
  • Understanding of when to use each volume type
  • Online volume expansion (this alone saves you maintenance windows)
  • Snapshot-based disaster recovery

Where this matters in production: Database storage, application logs, container persistent volumes, boot volumes, backup strategies. EBS is everywhere. Get this wrong, and you’re either bleeding money or losing data.


Prerequisites

Before starting, confirm you have:

  • An AWS account with permissions to manage EC2 and EBS
  • A running EC2 instance (the commands below assume Amazon Linux)
  • SSH access or Session Manager configured for that instance

Note the instance's Availability Zone (e.g., us-east-1a); you'll need it throughout.


Step-by-Step Hands-On Lab

Step 1: Create EBS Volumes in the Console

Navigate to EC2 Console → Elastic Block Store → Volumes → Create Volume.

You’ll create three volumes to compare. Here’s what matters:

Volume 1: gp2 (General Purpose SSD)

  • Size: 10 GiB
  • Volume type: gp2
  • Availability Zone: Same as your EC2 instance (critical!)
  • Encryption: Enable with default KMS key

gp2 gives you a baseline of 3 IOPS per GiB (with a 100 IOPS floor) and burst capability up to 3,000 IOPS. A 10 GiB volume sits at the 100 IOPS floor, which is basically nothing. This is why small gp2 volumes feel fast at first (burst credits) and then crawl once the credits run out.

Volume 2: gp3 (Latest General Purpose SSD)

  • Size: 10 GiB
  • Volume type: gp3
  • IOPS: 3,000 (default)
  • Throughput: 125 MiB/s (default)
  • Availability Zone: Same as EC2 instance
  • Encryption: Enable

gp3 is the better choice 90% of the time. You get 3,000 IOPS baseline regardless of size. No burst credit anxiety. Often 20% cheaper than equivalent gp2.

Volume 3: io1 (Provisioned IOPS SSD)

  • Size: 10 GiB
  • Volume type: io1
  • IOPS: 500 (the maximum a 10 GiB io1 volume allows at the 50:1 IOPS-to-size ratio)
  • Availability Zone: Same as EC2 instance
  • Encryption: Enable

io1 is for workloads demanding consistent, high IOPS—think production databases with strict latency requirements. It’s expensive. I’ve seen teams use io1 for dev environments because “production uses it.” Don’t be that team.

What you should see: Three volumes in “available” state. Note the Volume IDs.

Common misconfiguration: Wrong AZ. If your EC2 is in us-east-1a and you create the volume in us-east-1b, attachment fails. There’s no cross-AZ EBS attachment.
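The console steps above map one-to-one onto the AWS CLI. Here's a hedged sketch (assumes configured credentials; us-east-1a is a placeholder, substitute your instance's AZ):

```shell
# Sketch: create the three lab volumes via the AWS CLI.
# AZ below is a placeholder -- it must match your EC2 instance's AZ.
AZ=us-east-1a

aws ec2 create-volume --availability-zone "$AZ" --size 10 \
  --volume-type gp2 --encrypted

aws ec2 create-volume --availability-zone "$AZ" --size 10 \
  --volume-type gp3 --iops 3000 --throughput 125 --encrypted

aws ec2 create-volume --availability-zone "$AZ" --size 10 \
  --volume-type io1 --iops 500 --encrypted
```

Each call returns the new VolumeId; note them down, as the console workflow suggests.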

Step 2: Attach EBS Volumes to Your EC2 Instance

Select each volume → Actions → Attach Volume.

  • Instance: Select your running EC2 instance
  • Device name: Accept the suggested name (e.g., /dev/sdf, /dev/sdg, /dev/sdh)

Repeat for all three volumes.

Device naming reality check: AWS suggests /dev/sdf but Linux might show it as /dev/xvdf or /dev/nvme1n1 depending on instance type. Nitro-based instances use NVMe naming. Don’t panic when names don’t match exactly.

What you should see: Volume state changes to “in-use” with your instance ID shown.
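On Nitro instances you can map NVMe device names back to EBS volume IDs without guessing: the NVMe serial number carries the volume ID (column availability may vary slightly with your lsblk version):

```shell
# On Nitro instances, the SERIAL column shows the backing EBS volume ID
# (e.g. vol0abc123... -- the volume ID without the dash).
lsblk -o NAME,SIZE,TYPE,SERIAL
```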

Step 3: Connect to Your EC2 Instance

SSH into your instance:

ssh -i your-key.pem ec2-user@your-instance-public-ip

Or use Session Manager from the EC2 console if you’ve configured it.

Step 4: Identify the New Disks

Run these commands to see your attached volumes:

lsblk

You’ll see output like:

NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda          202:0    0   8G  0 disk
└─xvda1       202:1    0   8G  0 part /
xvdf          202:80   0  10G  0 disk
xvdg          202:96   0  10G  0 disk
xvdh          202:112  0  10G  0 disk

The three 10G disks are your new EBS volumes. No mountpoints yet—they’re raw disks.

sudo file -s /dev/xvdf

Output: /dev/xvdf: data means no filesystem exists. Good—we’ll create one.

Step 5: Create Filesystem and Mount Volumes

Format each volume with ext4:

sudo mkfs -t ext4 /dev/xvdf
sudo mkfs -t ext4 /dev/xvdg
sudo mkfs -t ext4 /dev/xvdh

Create mount points:

sudo mkdir /mnt/gp2-volume
sudo mkdir /mnt/gp3-volume
sudo mkdir /mnt/io1-volume

Mount the volumes:

sudo mount /dev/xvdf /mnt/gp2-volume
sudo mount /dev/xvdg /mnt/gp3-volume
sudo mount /dev/xvdh /mnt/io1-volume

Verify with:

df -h

You should see all three mounted with ~9.8G available (ext4 reserves some space).

Making mounts persistent across reboots:

Get the UUID of each volume:

sudo blkid

Edit /etc/fstab:

sudo nano /etc/fstab

Add lines like:

UUID=your-uuid-here /mnt/gp2-volume ext4 defaults,nofail 0 2

The nofail option is crucial—without it, your instance won’t boot if the volume is detached.
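To reduce hand-editing mistakes in /etc/fstab, you can generate the lines from blkid output instead of typing them. A minimal sketch (the make_fstab_line helper is ours, not a standard tool):

```shell
# Hypothetical helper: emit a safe fstab line (nofail included) for a UUID + mount point.
make_fstab_line() {
  printf 'UUID=%s %s ext4 defaults,nofail 0 2\n' "$1" "$2"
}

# With real devices you would feed it blkid output, e.g.:
#   make_fstab_line "$(sudo blkid -s UUID -o value /dev/xvdf)" /mnt/gp2-volume
make_fstab_line "1111-2222-3333" /mnt/gp2-volume
```

Review the generated line before appending it to /etc/fstab; a typo there is exactly the boot failure nofail is meant to protect against.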

Step 6: Expand EBS Volume Online

Let’s expand the gp3 volume from 10 GiB to 20 GiB without rebooting.

In the Console:
EC2 → Volumes → Select your gp3 volume → Actions → Modify Volume

  • Change size to 20 GiB
  • Click Modify

The volume enters “optimizing” state. Wait until it shows “completed” (usually under a minute for small volumes).

On the instance, extend the filesystem:

lsblk

You’ll see the disk is now 20G, but the filesystem hasn’t grown yet.

If the disk had a partition table, you would first grow the partition with sudo growpart /dev/xvdg 1. We formatted the whole disk without partitions, so go straight to extending the filesystem:

sudo resize2fs /dev/xvdg

Verify:

df -h /mnt/gp3-volume

Now shows ~20G available. No reboot. No downtime. This is one of my favorite EBS features.
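The same expansion works from the CLI. A sketch (assumes credentials; the volume ID is a placeholder):

```shell
# Expand a volume to 20 GiB and watch the modification progress.
VOL=vol-0123456789abcdef0   # placeholder -- use your gp3 volume ID

aws ec2 modify-volume --volume-id "$VOL" --size 20

# Poll until the state moves from "modifying" through "optimizing" to "completed":
aws ec2 describe-volumes-modifications --volume-ids "$VOL" \
  --query 'VolumesModifications[0].ModificationState'
```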

Step 7: Create EBS Snapshot

Let’s write test data first:

echo "Snapshot test data - $(date)" | sudo tee /mnt/gp3-volume/testfile.txt

In the Console:
EC2 → Volumes → Select gp3 volume → Actions → Create Snapshot

  • Description: “Lab 0.3 – gp3 snapshot test”
  • Click Create Snapshot

Navigate to Snapshots to monitor progress.

What snapshots actually capture: Point-in-time block-level copy. The first snapshot copies all used blocks. Subsequent snapshots are incremental—only changed blocks. But here’s the gotcha: each snapshot is independent for restoration purposes. Delete snapshot #1, and snapshot #2 still works.

Crash-consistent vs application-consistent: EBS snapshots are crash-consistent—like pulling the power cord. For databases, flush buffers first or use application-aware backup tools.
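For filesystem-level consistency you can freeze writes around the snapshot call. A sketch, assuming AWS credentials on the instance and a placeholder volume ID (create-snapshot returns as soon as the snapshot is initiated, so the freeze window stays short):

```shell
# Application-consistent-ish snapshot: freeze the filesystem, snapshot, unfreeze.
VOL=vol-0123456789abcdef0   # placeholder -- use your gp3 volume ID

sudo fsfreeze -f /mnt/gp3-volume
aws ec2 create-snapshot --volume-id "$VOL" \
  --description "Lab 0.3 - app-consistent gp3 snapshot"
sudo fsfreeze -u /mnt/gp3-volume
```

For databases, prefer the engine's native flush/lock mechanism over a bare fsfreeze.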

Step 8: Restore Volume from Snapshot

Once your snapshot shows “completed”:

EC2 → Snapshots → Select snapshot → Actions → Create Volume from Snapshot

  • Volume type: gp3
  • Size: 20 GiB (or larger)
  • AZ: Same as your instance
  • Encryption: Enable

Attach this new volume to your instance as /dev/sdi.

On the instance:

sudo mkdir /mnt/restored-volume
sudo mount /dev/xvdi /mnt/restored-volume
cat /mnt/restored-volume/testfile.txt

Your test data should appear. Snapshot restore validated.


Real Lab Experiences: Architect Insights

The gp2 burst credit disaster: A startup ran their MongoDB cluster on gp2 volumes. Worked great in testing. In production, sustained write load depleted burst credits within hours. Database latency spiked from 2ms to 200ms. They blamed MongoDB, then the network, then EC2. Three days of firefighting before someone checked CloudWatch’s BurstBalance metric. Migration to gp3 fixed it in an hour.

The io1 budget explosion: A team provisioned io1 at 10,000 IOPS for a dev database “to match production.” Monthly bill: $650 for one volume. Production’s actual IOPS usage: 800 average. They could’ve used gp3 for $80.

Snapshot restoration surprise: An engineer snapshotted a 500 GiB volume and assumed the restored volume would perform at full speed immediately. Instead, first reads were painfully slow because each block lazy-loads from S3 on first access, and the production app timed out. Lesson: Use Fast Snapshot Restore for critical recovery scenarios, or pre-warm volumes by reading all blocks.
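The pre-warm can be as simple as reading every block once. A sketch, assuming the restored disk shows up as /dev/xvdi (check lsblk first):

```shell
# Force every block of a restored volume to lazy-load from S3 up front.
# Device name is an assumption -- confirm with lsblk before running.
sudo dd if=/dev/xvdi of=/dev/null bs=1M status=progress
```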

My advice before touching production EBS: Always enable encryption. Always tag volumes with purpose and owner. Always test snapshot restores before you need them. Always check the AZ twice.


Architecture Diagram Description

Picture your architecture:

  • EC2 Instance (center) running Amazon Linux in us-east-1a
  • Three EBS Volumes attached: gp2 (10 GiB), gp3 (20 GiB), io1 (10 GiB) — all in us-east-1a
  • EBS Snapshot stored in S3 (managed by AWS, you don’t see the bucket)
  • Restored Volume created from snapshot, attached to the same instance

The snapshot flow: EBS Volume → Snapshot API → S3 (AWS-managed) → New EBS Volume. You pay for snapshot storage at S3 rates, significantly cheaper than EBS volume pricing.


Validation and Testing

Confirm your setup:

# View all block devices
lsblk

# Check mounted filesystems
df -h

# Verify filesystem types
mount | grep /mnt

# Test write performance (rough test)
sudo dd if=/dev/zero of=/mnt/gp3-volume/testblock bs=1M count=100
sudo rm /mnt/gp3-volume/testblock

# Verify persistence after unmount/remount
sudo umount /mnt/gp3-volume
sudo mount /dev/xvdg /mnt/gp3-volume
cat /mnt/gp3-volume/testfile.txt

Troubleshooting Guide

Volume not visible after attachment:

sudo dmesg | tail -20

Look for disk detection messages. Try lsblk again after 30 seconds—NVMe volumes sometimes need a moment.

Attachment fails silently:
Check the AZ. Seriously. It’s almost always the AZ.

Filesystem won’t expand after volume resize:
You modified the volume, but did you run resize2fs? For XFS filesystems, use xfs_growfs instead.

Mount fails with wrong fs type:

sudo file -s /dev/xvdf

Confirm a filesystem exists. If it shows “data,” you need mkfs first.

Snapshot stuck in “pending”:
Large volumes with heavy I/O take longer. Check for EBS API throttling if you’re snapshotting many volumes simultaneously.


AWS Best Practices: Solutions Architect Level

Security:

  • Enable encryption by default at the account level (EC2 → EBS Encryption settings)
  • Use customer-managed KMS keys for sensitive workloads
  • Never attach unencrypted volumes to instances handling PII

Reliability:

  • Automate snapshots with Amazon Data Lifecycle Manager
  • Test restores quarterly—untested backups aren’t backups
  • Use Multi-Attach io1/io2 only with cluster-aware filesystems

Cost Optimization:

  • Default to gp3 for new workloads
  • Right-size io1/io2 IOPS based on CloudWatch metrics, not guesses
  • Delete unattached volumes and old snapshots monthly
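Finding the unattached volumes is a one-liner. A sketch (assumes credentials; review the list before deleting anything):

```shell
# List unattached ("available") volumes -- candidates for cleanup.
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[].{ID:VolumeId,Size:Size,Type:VolumeType,Created:CreateTime}' \
  --output table
```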

Operational Excellence:

  • Tag every volume: Name, Environment, Owner, Application
  • Monitor BurstBalance for any remaining gp2 volumes
  • Set up CloudWatch alarms for VolumeReadOps and VolumeWriteOps

Scaling Consideration:

  • EBS volumes can be resized up, never down
  • Plan initial sizes with growth in mind
  • Use Elastic Volumes to adjust IOPS and throughput without detachment

AWS EBS Interview Questions (Solutions Architect & DevOps)

These questions come directly from interviews I’ve conducted and attended. Know these cold before your next AWS role interview.

Q1: What’s the difference between gp2 and gp3, and when would you choose each?

Strong Answer: gp2 uses a burst credit model where IOPS scales with volume size (3 IOPS per GiB, baseline). Small volumes run on burst credits and throttle when depleted. gp3 provides consistent 3,000 IOPS baseline regardless of size, with independently adjustable IOPS and throughput. Choose gp3 for new workloads—it’s typically 20% cheaper with predictable performance. Only use gp2 if you’re maintaining legacy infrastructure or have specific burst-pattern workloads that benefit from the credit model.

Red flag answer: “They’re basically the same, gp3 is just newer.”

Q2: Can you attach an EBS volume to multiple EC2 instances simultaneously?

Strong Answer: Standard EBS volumes support single-instance attachment only. However, io1 and io2 volumes support Multi-Attach, allowing attachment to up to 16 Nitro-based instances in the same AZ. Critical caveat: you must use a cluster-aware filesystem like GFS2 or OCFS2—standard ext4 or XFS will cause data corruption. Multi-Attach is designed for clustered applications, not general file sharing.

Follow-up trap: “Would you use Multi-Attach for a shared web content directory?” Answer: No—use EFS or FSx for shared filesystems. Multi-Attach is for clustered databases and failover scenarios.

Q3: An EBS volume shows “available” but won’t attach to your instance. What do you check?

Strong Answer: First, verify the Availability Zone—EBS volumes can only attach to instances in the same AZ. Second, check if the volume is encrypted with a KMS key the instance’s IAM role can’t access. Third, confirm the instance isn’t at its volume attachment limit (varies by instance type, typically 28 for Nitro). Fourth, check if the volume is already being attached (stuck in “attaching” state from a failed previous attempt).

Q4: How do EBS snapshots work, and are they incremental or full copies?

Strong Answer: Snapshots are incremental at the storage level—only changed blocks since the last snapshot are saved. However, each snapshot is logically independent; you can delete any snapshot without affecting others. Snapshots are stored in S3 (AWS-managed, not visible in your buckets) and are regional by default. First read from a restored volume may have latency due to lazy loading from S3—use Fast Snapshot Restore for production-critical recovery.

Q5: Your database team reports intermittent slow performance on an RDS instance using gp2 storage. How do you troubleshoot?

Strong Answer: Immediately check CloudWatch for the BurstBalance metric. If it’s dropping toward zero and correlating with performance dips, they’ve exhausted burst credits. Check VolumeReadOps and VolumeWriteOps to understand I/O patterns. Solutions: migrate to gp3 for consistent baseline IOPS, increase volume size to raise gp2 baseline (3 IOPS per GiB), or move to io1/io2 for guaranteed provisioned IOPS. I’d recommend gp3 migration—it’s usually cheaper and eliminates burst credit concerns entirely.

Q6: How would you design an EBS backup strategy for a production database?

Strong Answer: Implement Amazon Data Lifecycle Manager for automated, scheduled snapshots with defined retention policies. For databases, ensure application-consistent snapshots by flushing buffers before snapshot (or use native database backup tools to S3). Enable cross-region snapshot copy for disaster recovery. Tag snapshots with application name, environment, and creation date. Test restore procedures quarterly—document RTO/RPO and validate them. For critical workloads, enable Fast Snapshot Restore in target AZs to eliminate first-read latency during recovery.

Q7: What happens to EBS volumes when you terminate an EC2 instance?

Strong Answer: It depends on the DeleteOnTermination attribute. Root volumes default to DeleteOnTermination=true—they’re deleted. Additional volumes default to DeleteOnTermination=false—they persist as “available” volumes. This catches people constantly. Always verify this setting before termination, especially for instances with important data volumes. You can modify this attribute on running instances via the console or CLI.
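Checking and flipping DeleteOnTermination from the CLI looks roughly like this (credentials assumed; IDs and device name are placeholders):

```shell
# Which attached volumes die with the instance?
ID=i-0123456789abcdef0   # placeholder instance ID

aws ec2 describe-instances --instance-ids "$ID" \
  --query 'Reservations[].Instances[].BlockDeviceMappings[].{Dev:DeviceName,Delete:Ebs.DeleteOnTermination}' \
  --output table

# Flip the flag so a data volume survives termination:
aws ec2 modify-instance-attribute --instance-id "$ID" \
  --block-device-mappings '[{"DeviceName":"/dev/sdf","Ebs":{"DeleteOnTermination":false}}]'
```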

Q8: Explain EBS encryption and its performance impact.

Strong Answer: EBS encryption uses AES-256 and is handled at the instance level with negligible performance impact on modern instance types—the encryption/decryption happens on the Nitro hardware. You can use AWS-managed keys or customer-managed KMS keys. Encrypted volumes produce encrypted snapshots, and snapshots can only be shared cross-account if using customer-managed KMS keys with proper key policies. Best practice: enable encryption by default at the account level.


Frequently Asked Questions (AWS EBS Deep Dive)

What is the difference between gp2 and gp3 EBS volumes?

gp2 and gp3 are both general-purpose SSD volumes, but they differ in performance model and pricing. gp2 provides 3 IOPS per GiB with burst capability up to 3,000 IOPS using a credit system. gp3 delivers a consistent baseline of 3,000 IOPS and 125 MiB/s throughput regardless of volume size, with the ability to provision up to 16,000 IOPS and 1,000 MiB/s independently. gp3 is typically 20% cheaper than gp2 for equivalent capacity and offers more predictable performance without burst credit management.

Can I expand an EBS volume without downtime?

Yes, you can expand EBS volumes without downtime or detachment. Use the Modify Volume feature in the AWS Console or CLI to increase size, IOPS, or throughput. After modification completes, extend the filesystem using resize2fs for ext4 or xfs_growfs for XFS—no reboot required. Note that you can only increase volume size, never decrease it, and you must wait at least 6 hours between modifications to the same volume.

How long do EBS snapshots take to create?

EBS snapshot creation is nearly instantaneous from your perspective—the snapshot enters “pending” state immediately and you can continue using the volume. The actual data transfer to S3 happens in the background. First snapshots of large volumes take longer to complete (hours for multi-terabyte volumes), while incremental snapshots complete faster since only changed blocks are copied. Snapshot completion time depends on volume size, amount of changed data, and current EBS service load.

What happens to EBS data when an EC2 instance is terminated?

When you terminate an EC2 instance, root EBS volumes are deleted by default (DeleteOnTermination=true), while additional attached volumes persist in “available” state (DeleteOnTermination=false). This default behavior catches many users by surprise. To preserve root volume data, either modify the DeleteOnTermination attribute before termination or create a snapshot. Always verify termination protection and volume deletion settings for instances containing important data.

Can I attach an EBS volume to an instance in a different Availability Zone?

No, EBS volumes can only attach to EC2 instances within the same Availability Zone. This is a fundamental EBS limitation. To move data between AZs, create a snapshot of the volume, then create a new volume from that snapshot in the target AZ. For cross-region transfers, copy the snapshot to the destination region first, then create a volume. Plan your architecture with this constraint in mind, especially for high-availability designs.

How much does EBS cost compared across volume types?

EBS pricing varies significantly by volume type. As of current AWS pricing (verify for your region): gp3 costs approximately $0.08/GB-month for storage plus separate charges for provisioned IOPS beyond 3,000. gp2 costs approximately $0.10/GB-month with IOPS included based on size. io1 costs approximately $0.125/GB-month plus $0.065 per provisioned IOPS-month—a 10,000 IOPS volume adds $650/month in IOPS charges alone. Snapshots cost approximately $0.05/GB-month for stored data. Always calculate total cost including IOPS for io1/io2 volumes.
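The io1 figure above is easy to sanity-check. A quick sketch using the quoted rates (verify current pricing for your region):

```shell
# Recompute the io1 example from the quoted us-east-1 rates:
# $0.065 per provisioned IOPS-month.
awk 'BEGIN {
  iops = 10000
  printf "io1 IOPS charge: $%.0f/month\n", iops * 0.065
}'
```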

Are EBS snapshots stored in S3?

Yes, EBS snapshots are stored in Amazon S3, but in AWS-managed buckets that don’t appear in your S3 console. You’re charged S3 storage rates for snapshot data (approximately $0.05/GB-month). Snapshots are stored incrementally—only changed blocks consume additional storage. Snapshots are regional by default but can be copied cross-region for disaster recovery. You cannot access snapshot data directly through S3 APIs; you must create an EBS volume from the snapshot.

How do I recover data from an EBS snapshot?

To recover data from an EBS snapshot, navigate to EC2 → Snapshots, select your snapshot, and choose “Create Volume from Snapshot.” Specify the target Availability Zone (must match your EC2 instance’s AZ), volume type, and size (can be larger than original). Once the volume is created, attach it to your instance, then mount it using standard Linux commands. First reads from restored volumes may have latency due to lazy loading—for production recovery, enable Fast Snapshot Restore to eliminate this delay.

What is EBS burst balance and why does it matter?

Burst balance is a credit system for gp2 volumes that allows small volumes to temporarily exceed their baseline IOPS. Each gp2 volume earns credits when operating below baseline (3 IOPS per GiB) and spends them when bursting up to 3,000 IOPS. A depleted burst balance causes performance to drop to baseline—potentially as low as 100 IOPS for very small volumes. Monitor the BurstBalance CloudWatch metric; if it consistently drops below 20%, consider migrating to gp3 or increasing volume size. gp3 volumes don’t have burst balance—they provide consistent baseline performance.
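The gp2 baseline rule (3 IOPS per GiB, floored at 100 and capped at 16,000) is simple enough to sketch as a shell helper (the function name is ours, not an AWS tool):

```shell
# Rough gp2 baseline-IOPS rule: max(100, 3 * size_gib), capped at 16,000.
gp2_baseline_iops() {
  local iops=$(( 3 * $1 ))
  if (( iops < 100 )); then iops=100; fi
  if (( iops > 16000 )); then iops=16000; fi
  echo "$iops"
}

gp2_baseline_iops 10     # small volume: hits the 100-IOPS floor
gp2_baseline_iops 1000   # 3,000 baseline -- no burst headroom left
```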

Should I use io1 or io2 for production databases?

io2 is generally preferred over io1 for production databases due to higher durability (99.999% vs 99.9%) and a far better IOPS-to-storage ratio (500:1 vs 50:1) at the same price. io2 Block Express offers even higher performance (up to 256,000 IOPS) on supported Nitro instances. However, evaluate whether you actually need provisioned IOPS—many production databases perform well on gp3 with up to 16,000 IOPS at significantly lower cost. Reserve io1/io2 for workloads with strict latency requirements, high sustained IOPS needs, or databases requiring Multi-Attach for clustering.


Conclusion and Next Steps

You’ve now completed a real EBS workflow: creating volumes across three types, understanding their performance characteristics, mounting and expanding storage online, and implementing snapshot-based recovery. This isn’t theoretical—these are the exact operations you’ll perform in production.

What breaks when EBS is misunderstood:

  • Databases slow down mysteriously (burst credits)
  • Costs spiral without explanation (io1 everywhere)
  • Disaster recovery fails when you need it most (untested snapshots)
  • Instances won’t boot (bad fstab entries without nofail)

Master this, and you’ve eliminated an entire category of production incidents.

Next Lab: {link} Lab 0.4 — EC2 Instance Lifecycle: Stop, Start, Terminate, and Data Persistence Behavior

You’ll learn what happens to your EBS volumes, instance store data, and public IPs when you stop, start, or terminate instances. Spoiler: the defaults surprise people.

