Lab 8: Prompt Engineering — Getting the Most Out of LLMs

Objective

Master the art and science of writing prompts that reliably produce high-quality LLM outputs. By the end you will be able to:

  • Apply the core principles of effective prompting

  • Use zero-shot, few-shot, and chain-of-thought techniques

  • Structure system prompts for consistent AI behaviour

  • Avoid common prompting pitfalls


Why Prompt Engineering Matters

LLMs are not search engines. The same question asked differently can produce dramatically different results:

❌ Bad:    "Write code"
✅ Good:   "Write a Python function that takes a list of integers and returns 
            the top 3 most frequent elements. Include type hints, a docstring, 
            and handle edge cases (empty list, ties). Use only standard library."

The model has the capability — your prompt determines whether it accesses the right capability.


The Anatomy of a Good Prompt


Core Techniques

1. Zero-Shot Prompting

No examples — just a clear instruction. Works well when the task is common in training data.

2. Few-Shot Prompting

Provide 2–5 examples to demonstrate the desired format and reasoning style. Dramatically improves consistency.

3. Chain-of-Thought (CoT) Prompting

For reasoning tasks, instruct the model to think step by step. Significantly improves accuracy on maths, logic, and multi-step problems.

Zero-shot CoT trick: Simply add "Let's think step by step" to almost any reasoning question.

4. Role Prompting

Give the model a specific identity and expertise level.

5. Structured Output

Force JSON or specific formats for programmatic use.

With modern APIs, use structured outputs (JSON mode) to guarantee valid JSON:

6. Retrieval-Augmented Prompting

Ground the model in specific facts by including relevant documents in the prompt:


System Prompt Design

The system prompt sets persistent behaviour for the entire conversation. Design it carefully:


Prompt Anti-Patterns

Anti-Pattern
Problem
Fix

Vague instruction

"Make it better"

"Improve the clarity and reduce the word count by 30%"

No format specified

Random output structure

"Return as JSON / bullet list / numbered steps"

Negative-only constraints

"Don't use jargon"

Add positive: "Use plain English a 16-year-old could understand"

No context

"Fix this"

Paste the actual code/text with error messages

Overloaded prompts

10 tasks in one message

Break into separate calls

Asking to lie

"Pretend you know X"

Provide X in the context instead


Advanced: Prompt Chaining

Break complex tasks into a pipeline of smaller prompts:


Prompt Engineering for Code


Summary

Technique
When to Use
Impact

Zero-shot

Common, simple tasks

Baseline

Few-shot

Custom formats, edge cases

High

Chain-of-Thought

Maths, logic, multi-step

Very High

Role prompting

Specialised expertise

Medium–High

Structured output

Programmatic use

Critical

Retrieval-augmented

Factual accuracy

Very High

Prompt chaining

Complex multi-step tasks

Very High


Further Reading

Last updated