Lab 12: Software RAID with mdadm

Time: 40 minutes | Level: Advanced | Docker: docker run -it --rm --privileged ubuntu:22.04 bash

Software RAID uses your CPU to mirror, stripe, or parity-protect data across multiple block devices. mdadm is the standard Linux tool for creating and managing MD (Multiple Device) arrays driven by the kernel's md subsystem.


Prerequisites

docker run -it --rm --privileged ubuntu:22.04 bash
apt-get update -qq && apt-get install -y mdadm

Step 1: Create Virtual Disks (Loopback Devices)

# Create explicit loop device nodes
for i in 30 31 32 33; do mknod /dev/loop$i b 7 $i 2>/dev/null || true; done

# Create 150 MiB disk images
dd if=/dev/zero of=/tmp/r1.img bs=1M count=150
dd if=/dev/zero of=/tmp/r2.img bs=1M count=150
dd if=/dev/zero of=/tmp/r3.img bs=1M count=150

# Attach to loop devices
losetup /dev/loop30 /tmp/r1.img
losetup /dev/loop31 /tmp/r2.img
losetup /dev/loop32 /tmp/r3.img

# Verify
losetup -a | grep "loop3[012]"

📸 Verified Output:

💡 Check /proc/mdstat at any time to see all active RAID arrays and their status.


Step 2: Create a RAID 1 Array (Mirroring)

RAID 1 writes identical data to all member disks — if one fails, data survives on the others.

📸 Verified Output:

💡 [UU] means both member disks are up. [_U] would mean the first disk is degraded or missing.


Step 3: Inspect the Array with mdadm --detail

📸 Verified Output:


Step 4: Format and Mount the RAID Array
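One way to format and mount the array, assuming /dev/md0 from Step 2; the mount point /mnt/raid and the test file are assumptions for this lab:

```shell
# Put an ext4 filesystem on the array and mount it
mkfs.ext4 -q /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid

# Write a test file we can check again after the failure simulation
echo "data on the mirror" > /mnt/raid/testfile.txt
df -h /mnt/raid
```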

📸 Verified Output:


Step 5: Simulate a Disk Failure

📸 Verified Output:

💡 [_U] — first slot is degraded (_), second is up (U). Data is still fully accessible from loop31!


Step 6: Replace the Failed Disk (Recovery)

📸 Verified Output:

💡 On real drives, a resync takes minutes to hours depending on array size. watch cat /proc/mdstat lets you monitor rebuild progress.


Step 7: RAID 5 and mdadm.conf

Create a RAID 5 array (striping with distributed parity):

📸 Verified Output:

Save the array configuration:

📸 Verified Output:

💡 Without /etc/mdadm/mdadm.conf, arrays may not auto-assemble at boot. Always save the config after creating arrays.


Step 8: Capstone — RAID Level Comparison

📸 Verified Output:


Summary

| RAID Level | Min Disks | Usable Capacity | Fault Tolerance | Use Case |
|------------|-----------|-----------------|-----------------|----------|
| RAID 0 | 2 | 100% | None | Performance (no redundancy) |
| RAID 1 | 2 | 50% | N-1 disks | Boot drives, critical data |
| RAID 5 | 3 | 67-94% | 1 disk | General-purpose storage |
| RAID 6 | 4 | 50-88% | 2 disks | High-capacity, archive |
| RAID 10 | 4 | 50% | 1 per mirror | Databases, high I/O |

| Command | Purpose |
|---------|---------|
| mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd{a,b} | Create RAID 1 |
| cat /proc/mdstat | Array status |
| mdadm --detail /dev/md0 | Detailed info |
| mdadm /dev/md0 --fail /dev/sda | Mark disk failed |
| mdadm /dev/md0 --remove /dev/sda | Remove failed disk |
| mdadm /dev/md0 --add /dev/sdc | Add replacement |
| mdadm --detail --scan >> /etc/mdadm/mdadm.conf | Save config |
