Lab 05: Shell Scripting — Real-World Automation

Time: 30 minutes | Level: Practitioner | Docker: docker run -it --rm ubuntu:22.04 bash


Overview

This capstone lab puts everything together: variables, loops, functions, and error handling — applied to real-world automation tasks. You'll build a backup script, a log rotator, a file watcher, and a system report generator. These are the building blocks of real DevOps and sysadmin automation.


Step 1: Backup Script — Timestamped Archives

A backup script that creates timestamped .tar.gz archives and reports what it created:

mkdir -p /tmp/source /tmp/backups
echo "config data" > /tmp/source/config.txt
echo "app data"    > /tmp/source/app.log

backup() {
  local src="$1" dest="$2"
  local ts
  ts=$(date +%Y%m%d_%H%M%S)   # assign separately so 'local' doesn't mask a failure
  local archive="${dest}/backup_${ts}.tar.gz"
  tar -czf "$archive" -C "$(dirname "$src")" "$(basename "$src")" 2>/dev/null
  echo "Backup created: $(basename "$archive")"
  echo "Size: $(du -sh "$archive" | cut -f1)"
}
backup /tmp/source /tmp/backups
ls /tmp/backups/

📸 Verified Output:

💡 Tip: date +%Y%m%d_%H%M%S produces a sortable timestamp. Archives named this way are automatically in chronological order when listed with ls. For daily backups, date +%Y%m%d (no time) creates at most one backup per day.


Step 2: Log Rotator — Managing Log Files

Log files grow forever without rotation. This rotator compresses old logs and enforces a retention limit:
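A minimal sketch of one way to write such a rotator (the rotate_logs name, the /tmp/lab5_logs path, and the retention count of 3 are illustrative choices, not the lab's reference script):

```shell
#!/usr/bin/env bash
# Log rotator sketch: compress a snapshot of the live log, truncate it,
# then enforce a retention limit on the compressed archives.
rotate_logs() {
  local log="$1" keep="${2:-3}"
  [ -s "$log" ] || return 0                  # nothing to rotate
  local ts
  ts=$(date +%Y%m%d_%H%M%S)
  gzip -c "$log" > "${log}.${ts}.gz"         # compressed snapshot
  : > "$log"                                 # truncate the live log in place
  # Retention: list newest first, delete everything past $keep
  ls -t "${log}".*.gz 2>/dev/null | tail -n +$((keep + 1)) | xargs -r rm -f
}

# Demo: five rotations, but only the newest 3 archives survive
mkdir -p /tmp/lab5_logs
for i in 1 2 3 4 5; do
  echo "entry $i" >> /tmp/lab5_logs/app.log
  rotate_logs /tmp/lab5_logs/app.log 3
  sleep 1                                    # ensure distinct timestamps
done
ls /tmp/lab5_logs/
```

Truncating with `: > "$log"` (rather than `rm`) keeps the file open for any process still writing to it.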

📸 Verified Output:

💡 Tip: In production, logrotate handles this automatically. But for custom log sources (app-specific files, script output), a hand-rolled rotator gives you full control over naming, compression, and retention policy.


Step 3: File Watcher — Detect New Files

Continuously poll a directory and report newly appearing files:
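A sketch of the polling approach (the watch_dir name, /tmp/lab5_watch paths, and poll counts are illustrative; associative arrays require bash 4+):

```shell
#!/usr/bin/env bash
# Polling file watcher: remember every file seen in an associative array
# and report anything new on each pass.
watch_dir() {
  local dir="$1" polls="${2:-5}" interval="${3:-1}"
  declare -A seen
  local f i
  for f in "$dir"/*; do [ -e "$f" ] && seen["$f"]=1; done   # baseline snapshot
  for ((i = 0; i < polls; i++)); do
    sleep "$interval"
    for f in "$dir"/*; do
      [ -e "$f" ] || continue
      if [ -z "${seen[$f]:-}" ]; then
        echo "New file detected: $(basename "$f")"
        seen["$f"]=1
      fi
    done
  done
}

rm -rf /tmp/lab5_watch && mkdir -p /tmp/lab5_watch
watch_dir /tmp/lab5_watch 5 1 > /tmp/lab5_watch.log &   # watch in background
watcher=$!
sleep 2; touch /tmp/lab5_watch/upload1.txt              # appears mid-watch
sleep 1; touch /tmp/lab5_watch/upload2.txt
wait "$watcher"                                          # let the watcher finish
cat /tmp/lab5_watch.log
```

Running the watcher in the background with `&` and collecting it with `wait` lets the same script both watch and generate the files being watched.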

📸 Verified Output:

💡 Tip: This polling approach works on any system without installing inotifywait. For production on Linux, inotifywait -m -e create dir/ is more efficient because it uses kernel events instead of polling — no missed files between intervals.


Step 4: System Report Generator

A formatted report that gathers disk, memory, and process data into a readable summary:
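One possible shape for the generator (section choice and layout are illustrative; free is guarded because it is not present on every system):

```shell
#!/usr/bin/env bash
# System report sketch: each section is one command's output under a header.
generate_report() {
  echo "===== System Report: $(date '+%Y-%m-%d %H:%M:%S') ====="
  echo
  echo "--- Disk Usage (root filesystem) ---"
  df -h /
  echo
  echo "--- Memory ---"
  if command -v free >/dev/null; then free -h; else echo "(free not available)"; fi
  echo
  echo "--- Top 5 Processes by Memory ---"
  # sort ps output numerically on column 4 (%MEM), newest hogs first
  ps aux | sort -rn -k4 | head -5 | awk '{printf "%-10s %5s%%  %s\n", $1, $4, $11}'
}
generate_report
```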

📸 Verified Output:

💡 Tip: Redirect the report to a file with generate_report > /tmp/report_$(date +%Y%m%d).txt and then email or Slack it, or use generate_report | tee /path/to/file to see it on-screen and save it simultaneously.


Step 5: Combining Concepts — Parameterized Automation

Chain the tools into one function that accepts arguments and handles errors:
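A sketch of such a parameterized function (the maint name, /tmp/lab5_* paths, and default retention of 2 are illustrative):

```shell
#!/usr/bin/env bash
# Parameterized maintenance: defaults let it run bare, arguments let
# CI/CD or cron drive it.
maint() {
  local src="${1:-/tmp/lab5_maint_src}"    # default source if no argument
  local keep="${2:-2}"                     # default retention count
  local dest=/tmp/lab5_maint_backups
  mkdir -p "$src" "$dest"
  local ts archive
  ts=$(date +%Y%m%d_%H%M%S)
  archive="$dest/backup_${ts}.tar.gz"
  tar -czf "$archive" -C "$(dirname "$src")" "$(basename "$src")" \
    || { echo "ERROR: backup of $src failed" >&2; return 1; }
  echo "Backed up $src -> $(basename "$archive")"
  # Enforce retention on the backup directory
  ls -t "$dest"/backup_*.tar.gz | tail -n +$((keep + 1)) | xargs -r rm -f
  echo "Archives retained: $(ls "$dest" | wc -l)"
}

maint                              # all defaults
sleep 1                            # ensure a distinct timestamp
maint /tmp/lab5_maint_src 1        # explicit source, keep only 1 archive
```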

📸 Verified Output:

💡 Tip: Default argument values (${1:-/tmp/maint_test}) let scripts work standalone or be driven by CI/CD parameters. This is how production automation scripts stay flexible without requiring every argument every time.


Step 6: Error Handling in Automation

Automation that fails silently is dangerous. Apply the full error-handling template to an automation task:
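Applied to a backup task, the template might look like this (the die() helper, log format, and /tmp/lab5_* paths are illustrative):

```shell
#!/usr/bin/env bash
# Error-handling template: strict mode, timestamped logging, fail loudly.
set -euo pipefail

LOGFILE=/tmp/lab5_automation.log
: > "$LOGFILE"                                   # start with a fresh log

log() { echo "[$(date +%T)] $*" | tee -a "$LOGFILE"; }
die() { log "FATAL: $*"; exit 1; }

log "Starting backup task"
mkdir -p /tmp/lab5_auto_src /tmp/lab5_auto_dest || die "cannot create directories"
echo "data" > /tmp/lab5_auto_src/file.txt
tar -czf /tmp/lab5_auto_dest/backup.tar.gz -C /tmp lab5_auto_src \
  || die "archive step failed"
log "Backup complete"
```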

📸 Verified Output:

💡 Tip: tee -a "$LOGFILE" writes to both stdout and the log file simultaneously. Always timestamp log lines — when things go wrong at 3am, you'll want to know exactly when each step happened.


Step 7: Putting It All Together — Full Maintenance Script

A production-style script combining backup, rotation, reporting, and error handling:
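A condensed sketch of such a script (paths and the KEEP retention count are illustrative):

```shell
#!/usr/bin/env bash
# Maintenance sketch: backup + rotation + report, with trap-driven cleanup
# defined first but guaranteed to run last.
set -euo pipefail

SCRATCH=$(mktemp -d)                        # temporary working space
cleanup() { rm -rf "$SCRATCH"; echo "Cleanup: scratch space removed"; }
trap cleanup EXIT                           # runs no matter how we exit

SRC=/tmp/lab5_full_src
DEST=/tmp/lab5_full_dest
KEEP=3
mkdir -p "$SRC" "$DEST"                     # idempotent: safe to re-run
echo "payload" > "$SRC/data.txt"

# Backup
ts=$(date +%Y%m%d_%H%M%S)
tar -czf "$DEST/backup_${ts}.tar.gz" -C "$(dirname "$SRC")" "$(basename "$SRC")"
echo "Backup: backup_${ts}.tar.gz"

# Rotation: keep only the newest $KEEP archives
ls -t "$DEST"/backup_*.tar.gz | tail -n +$((KEEP + 1)) | xargs -r rm -f

# Report
echo "Archives kept: $(ls "$DEST" | wc -l)"
```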

📸 Verified Output:

💡 Tip: Notice that cleanup runs last (via trap EXIT) even though it's defined at the top. This is the power of trap — it guarantees cleanup regardless of how the script exits. The script is also idempotent: run it again and it works correctly.


Step 8: Capstone — Scheduled Automation Simulation

Simulate a cron-driven automation system that checks its own schedule, prevents concurrent runs, and cleans up:
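The lock-file core of that simulation might be sketched like this (the LOCK path and function name are illustrative):

```shell
#!/usr/bin/env bash
# Lock-file pattern for a cron-driven job: refuse to run while a live
# lock exists, clear stale locks, release the lock on any exit.
LOCK=/tmp/lab5_maint.lock
rm -f "$LOCK"                               # start clean for this demo

acquire_lock() {
  if [ -f "$LOCK" ]; then
    local pid
    pid=$(cat "$LOCK")
    if kill -0 "$pid" 2>/dev/null; then     # is that PID still alive?
      echo "Another run (PID $pid) is still active; skipping."
      return 1
    fi
    echo "Removing stale lock left by PID $pid"
    rm -f "$LOCK"
  fi
  echo $$ > "$LOCK"                          # record our own PID
  trap 'rm -f "$LOCK"' EXIT                  # release on any exit path
}

acquire_lock || exit 0
echo "Maintenance running as PID $$"

# Simulate a second cron invocation arriving mid-run: it sees a live lock
if [ -f "$LOCK" ] && kill -0 "$(cat "$LOCK")" 2>/dev/null; then
  echo "Second invocation would be blocked by PID $(cat "$LOCK")"
fi

echo "Done."
```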

📸 Verified Output:

💡 Tip: The lock file pattern (echo $$ > lockfile + check with kill -0) prevents a second cron invocation from running while the first is still working. This is essential for any cron job that might run longer than its schedule interval. The stale-lock check (using kill -0) handles the case where a previous run crashed without cleaning up.


Summary

| Automation Component | Key Techniques | Real-World Use |
|---|---|---|
| Backup script | tar -czf, date timestamps, du | Data protection, disaster recovery |
| Log rotator | ls -t \| tail, mv, rm, retention count | Log management, disk space control |
| File watcher | associative arrays, background &, wait | Trigger on new uploads, deployments |
| Report generator | df, free, ps, awk formatting | Monitoring dashboards, email reports |
| Parameterized scripts | ${1:-default}, getopts | Reusable across environments |
| Error handling | set -euo pipefail, trap, die() | Production reliability |
| Lock files | echo $$, kill -0, trap release EXIT | Safe cron job concurrency |
| Timestamped logging | date +%T, tee -a | Audit trails, debugging |
| Idempotent operations | mkdir -p, check before create | Safe to re-run without side effects |
| Function libraries | source utils.sh | DRY, testable script components |
