Master standard streams (stdin, stdout, stderr), redirection operators (>, >>, 2>, &>), and the pipe | for chaining commands. Pipes are what make Linux so powerful — dozens of simple tools combined into complex workflows.
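A minimal sketch of the three streams (standard POSIX shell behavior; nothing here is specific to one distro):

```shell
# Every process starts with three streams:
#   fd 0 = stdin, fd 1 = stdout, fd 2 = stderr
echo "to stdout"        # writes to fd 1
echo "to stderr" >&2    # writes to fd 2
printf 'hello\n' | cat  # cat reads fd 0 from the pipe
```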
# stderr goes to terminal too (same place, different stream)
cat /nonexistent 2>&1
Step 2: Redirecting stdout
💡 > is destructive — it truncates the file before writing. >> is safe — it always appends. When in doubt, use >>. Many a production incident has started with > logfile where >> logfile was intended.
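A quick sketch of the difference, using a throwaway file (the path /tmp/demo.txt is just an example):

```shell
echo "first"  >  /tmp/demo.txt   # create/truncate, write one line
echo "second" >> /tmp/demo.txt   # append a second line
cat /tmp/demo.txt                # shows: first, second

echo "third"  >  /tmp/demo.txt   # truncates again — previous lines are gone
cat /tmp/demo.txt                # shows only: third
```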
Step 3: Redirecting stderr
💡 /dev/null is the black hole of Linux — anything written to it disappears. Perfect for suppressing noise: find / -name passwd 2>/dev/null hides all the "Permission denied" errors.
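A short sketch of stderr-only redirection (the file paths are illustrative):

```shell
# 2> captures only stderr; stdout still reaches the terminal
ls /tmp /nonexistent 2> /tmp/errors.txt   # listing prints; error is captured
cat /tmp/errors.txt                       # the "No such file" message

# Discard the error stream entirely
ls /nonexistent 2> /dev/null
```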
Step 4: Combining stdout and stderr
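One way to sketch the common forms (&> is bash syntax; the portable spelling works in any POSIX shell):

```shell
# bash shorthand: both streams to one file
ls /tmp /nonexistent &> /tmp/all.txt

# Portable equivalent: redirect stdout first, then point stderr at it
ls /tmp /nonexistent > /tmp/all.txt 2>&1

# Order matters: `2>&1 > file` dups stderr onto the terminal BEFORE
# stdout is redirected, so errors still print to the screen
```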
Step 5: Pipes — Connecting Commands
💡 Each | is a kernel pipe — a small in-memory buffer the shell wires between processes. Data flows left to right without any temp files: the stdout of one process is connected directly to the stdin of the next.
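A small pipeline sketch (field 7 of /etc/passwd is the login shell):

```shell
# One command's stdout becomes the next command's stdin:
# extract the shell field, sort it, then count duplicates
cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn
# → a frequency table of login shells, most common first
```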
Step 6: Useful Pipeline Tools
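A few of the workhorses, sketched with small inputs:

```shell
seq 1 100 | head -3          # first three lines: 1 2 3
seq 1 100 | tail -2          # last two lines: 99 100

# sort + uniq -c: frequency count (uniq only collapses ADJACENT
# duplicates, so sort first)
printf 'red\nblue\nred\n' | sort | uniq -c

ls /etc | wc -l              # count entries in /etc
grep -c ':0:' /etc/passwd    # count matching lines instead of printing them
```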
Step 7: tee — Split the Pipeline
💡 tee is invaluable in long pipelines where you want to save intermediate results while still processing further. command | tee output.txt | grep ERROR saves everything but only shows errors.
# awk: field processing
cat /etc/passwd | awk -F: '{print $1, $3}' | head -5
📸 Verified Output:
root 0
daemon 1
bin 2
sys 3
sync 4
# tee writes to file AND passes through to stdout
cat /etc/passwd | grep root | tee /tmp/root_lines.txt | wc -l
echo "Lines saved:"
cat /tmp/root_lines.txt