This capstone lab puts everything together: variables, loops, functions, and error handling — applied to real-world automation tasks. You'll build a backup script, a log rotator, a file watcher, and a system report generator. These are the building blocks of real DevOps and sysadmin automation.
Step 1: Backup Script — Timestamped Archives
A backup script that creates timestamped .tar.gz archives and reports what it created:
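A minimal sketch of such a backup script (the /tmp source and destination paths and the seeded sample file are illustrative placeholders, not the lab's exact script):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical demo paths; replace with your real source and destination
SRC_DIR="${1:-/tmp/backup_demo_src}"
DEST_DIR="${2:-/tmp/backup_demo_dest}"

mkdir -p "$SRC_DIR" "$DEST_DIR"
echo "sample data" > "$SRC_DIR/sample.txt"   # seed demo content

timestamp=$(date +%Y%m%d_%H%M%S)
archive="$DEST_DIR/backup_${timestamp}.tar.gz"

# -C makes paths inside the archive relative to SRC_DIR's parent,
# so the archive unpacks cleanly anywhere
tar -czf "$archive" -C "$(dirname "$SRC_DIR")" "$(basename "$SRC_DIR")"

echo "Created: $archive ($(du -h "$archive" | cut -f1))"
```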
💡 Tip: date +%Y%m%d_%H%M%S produces a sortable timestamp. Archives named this way are automatically in chronological order when listed with ls. For daily backups, date +%Y%m%d (no time component) creates at most one backup per day.
Step 2: Log Rotator — Managing Log Files
Log files grow forever without rotation. This rotator compresses old logs and enforces a retention limit:
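One possible sketch (the /tmp/rotate_demo directory, the seeded demo logs, and the retention count of 3 are all illustrative assumptions):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical log directory and retention count
LOG_DIR="${1:-/tmp/rotate_demo}"
KEEP=3

mkdir -p "$LOG_DIR"
# Seed demo logs so the rotator has something to work on
for i in 1 2 3 4 5; do echo "entry $i" > "$LOG_DIR/app.$i.log"; done

# Compress every plain .log file in place
for f in "$LOG_DIR"/*.log; do
  gzip -f "$f"
  echo "Compressed: $f -> $f.gz"
done

# Retention: keep only the $KEEP newest compressed logs
ls -t "$LOG_DIR"/*.gz | tail -n +$((KEEP + 1)) | while read -r old; do
  rm -f "$old"
  echo "Deleted: $old"
done
```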
💡 Tip: In production, logrotate handles this automatically. But for custom log sources (app-specific files, script output), a hand-rolled rotator gives you full control over naming, compression, and retention policy.
Step 3: File Watcher — Detect New Files
Continuously poll a directory and report newly appearing files:
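A polling sketch along those lines; the watch directory, the one-second interval, and the three-poll cap (so the demo terminates) are assumptions for illustration:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical watch directory; MAX_POLLS bounds the demo so it terminates
WATCH_DIR="${1:-/tmp/watch_demo}"
INTERVAL=1
MAX_POLLS=3

mkdir -p "$WATCH_DIR"
known=$(ls -A "$WATCH_DIR")   # snapshot of what already exists

for ((i = 0; i < MAX_POLLS; i++)); do
  # Simulate an external writer dropping a file mid-run
  if [ "$i" -eq 1 ]; then echo "hello" > "$WATCH_DIR/newfile.txt"; fi

  current=$(ls -A "$WATCH_DIR")
  # comm -13 prints lines found only in the second (sorted) input: the new files
  new=$(comm -13 <(sort <<<"$known") <(sort <<<"$current"))
  if [ -n "$new" ]; then echo "New file(s): $new"; fi

  known=$current
  sleep "$INTERVAL"
done
```

A real watcher would replace the bounded for loop with `while true`.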
💡 Tip: This polling approach works on any system without installing inotifywait. For production on Linux, inotifywait -m -e create dir/ is more efficient because it uses kernel events instead of polling — no missed files between intervals.
Step 4: System Report Generator
A formatted report that gathers disk, memory, and process data into a readable summary:
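A sketch of such a generator; the section layout and the /tmp/report_demo.txt output path are illustrative, and free(1) is hedged because it only exists on Linux:

```bash
#!/usr/bin/env bash
set -euo pipefail

generate_report() {
  echo "===== System Report: $(date) ====="
  echo "--- Disk usage ---"
  df -h /
  echo "--- Memory ---"
  # free(1) is Linux-specific; fall back where it is absent
  free -h 2>/dev/null || vm_stat 2>/dev/null || echo "memory stats unavailable"
  echo "--- Top 5 processes by CPU ---"
  # '|| true' guards against SIGPIPE from head under pipefail
  ps aux | sort -nrk 3,3 | head -5 || true
  echo "===================================="
}

# Save and display at the same time (hypothetical output path)
generate_report | tee /tmp/report_demo.txt
```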
💡 Tip: Redirect the report to a file with generate_report > /tmp/report_$(date +%Y%m%d).txt and then email or Slack it. Add | tee /path/to/file to see it on-screen and save it simultaneously.
Step 5: Chaining the Tools into One Function
Chain the tools into one function that accepts arguments and handles errors:
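One way to sketch that wrapper; the /tmp/maint_test default, the seeded demo log, and the retention count of 3 are hypothetical:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical wrapper; the ${1:-...} defaults let it run with no arguments
run_maintenance() {
  local work_dir="${1:-/tmp/maint_test}"
  local keep="${2:-3}"

  mkdir -p "$work_dir/logs" "$work_dir/backups"
  echo "demo entry" > "$work_dir/logs/app.log"   # seed some data

  # Back up the logs, with an explicit failure path
  local stamp; stamp=$(date +%Y%m%d_%H%M%S)
  if ! tar -czf "$work_dir/backups/backup_$stamp.tar.gz" -C "$work_dir" logs; then
    echo "ERROR: backup failed" >&2
    return 1
  fi

  # Rotate: keep only the $keep newest archives
  ls -t "$work_dir"/backups/*.tar.gz | tail -n +$((keep + 1)) \
    | while read -r old; do rm -f "$old"; done

  echo "Maintenance complete in $work_dir ($(ls "$work_dir"/backups | wc -l) backups kept)"
}

run_maintenance "$@"
```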
💡 Tip: Default argument values (${1:-/tmp/maint_test}) let scripts work standalone or be driven by CI/CD parameters. This is how production automation scripts stay flexible without requiring every argument every time.
Step 6: Error Handling in Automation
Automation that fails silently is dangerous. Apply the full error-handling template to an automation task:
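A minimal sketch of that template applied to a small task (the log and work-directory paths are illustrative):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical log and work locations
LOGFILE="/tmp/automation_demo.log"
WORK_DIR="/tmp/automation_demo_work"

log() { echo "$(date '+%Y-%m-%d %H:%M:%S') $*" | tee -a "$LOGFILE"; }
die() { log "ERROR: $*"; exit 1; }

# Report the line of any command that fails unexpectedly
trap 'log "ERROR: command failed at line $LINENO"' ERR

log "Task start"
mkdir -p "$WORK_DIR" || die "cannot create $WORK_DIR"
date > "$WORK_DIR/last_run" || die "cannot write to $WORK_DIR"
log "Task complete"
```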
💡 Tip: tee -a "$LOGFILE" writes to both stdout and the log file simultaneously. Always timestamp log lines — when things go wrong at 3am, you'll want to know exactly when each step happened.
Step 7: Putting It All Together — Full Maintenance Script
A production-style script combining backup, rotation, reporting, and error handling:
💡 Tip: Notice that cleanup runs last (via trap EXIT) even though it's defined at the top. This is the power of trap — it guarantees cleanup regardless of how the script exits. The script is also idempotent: run it again and it works correctly.
Step 8: Cron-Style Automation with a Lock File
Simulate a cron-driven automation system that checks its own schedule, prevents concurrent runs, and cleans up:
```bash
#!/usr/bin/env bash
set -euo pipefail

LOCK_FILE="/tmp/automation.lock"
LOG="/tmp/automation_cron.log"
RUN_DIR="/tmp/cron_work"

log() { echo "$(date +%T) $*" | tee -a "$LOG"; }
die() { log "FATAL: $*" >&2; exit 1; }

acquire_lock() {
  if [ -f "$LOCK_FILE" ]; then
    local pid; pid=$(cat "$LOCK_FILE")
    if kill -0 "$pid" 2>/dev/null; then
      die "Already running (PID $pid)"
    else
      log "Stale lock found, removing"
      rm -f "$LOCK_FILE"
    fi
  fi
  echo $$ > "$LOCK_FILE"
  log "Lock acquired (PID $$)"
}

release_lock() {
  rm -f "$LOCK_FILE"
  log "Lock released"
}

run_job() {
  local job_name="$1"
  log "Job start: $job_name"
  mkdir -p "$RUN_DIR"
  echo "$(date): $job_name ran" >> "$RUN_DIR/history.txt"
  sleep 1 # simulate work
  log "Job done: $job_name"
}

acquire_lock
# Register cleanup only after we own the lock; otherwise a failed
# acquire_lock would delete the lock file of the run that holds it.
trap release_lock EXIT

run_job "daily-backup"
run_job "log-rotation"
run_job "report-generation"
log "All jobs complete"
cat "$RUN_DIR/history.txt"
```

📸 Verified Output:

```
05:49:07 Lock acquired (PID 42)
05:49:07 Job start: daily-backup
05:49:08 Job done: daily-backup
05:49:08 Job start: log-rotation
05:49:09 Job done: log-rotation
05:49:09 Job start: report-generation
05:49:10 Job done: report-generation
05:49:10 All jobs complete
Thu Mar 5 05:49:08 UTC 2026: daily-backup ran
Thu Mar 5 05:49:09 UTC 2026: log-rotation ran
Thu Mar 5 05:49:10 UTC 2026: report-generation ran
05:49:10 Lock released
```

💡 Tip: The lock file pattern (echo $$ > lockfile + check with kill -0) prevents a second cron invocation from running while the first is still working. This is essential for any cron job that might run longer than its schedule interval. The stale-lock check (using kill -0) handles the case where a previous run crashed without cleaning up.