03 Testing Fundamentals: Automated Testing in CI & Test Coverage
December 2025 (6330 Words, 36 Minutes)
1. Introduction: Your CI Checks Style, But Does It Check Correctness?
Where we are so far:
From Chapter 02 (Feature Development), you learned:
- Feature branch workflow with pull requests
- CI automatically runs on every PR
- Branch protection blocks merges if CI fails
- Ruff checks code style automatically
From Chapter 03 (Testing Fundamentals), you learned:
- How to write unit tests with pytest
- Testing pyramid and equivalence classes
- Boundary testing
- How to run tests locally
Your current CI workflow (from Chapter 02 (Feature Development)):
# .github/workflows/quality.yml
name: Code Quality
on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up uv
        uses: astral-sh/setup-uv@v4
      - name: Cache dependencies
        uses: actions/cache@v4
        with:
          path: ~/.cache/uv
          key: ${{ runner.os }}-uv-${{ hashFiles('**/uv.lock') }}
      - name: Install dependencies
        run: uv sync --dev
      - name: Run Ruff linter
        run: uv run ruff check .
      - name: Check formatting
        run: uv run ruff format --check .
      - name: Run Pyright
        run: uv run pyright
This CI already checks:
- ✅ Code style (Ruff linter)
- ✅ Formatting consistency (Ruff format)
- ✅ Type hints (Pyright)
But there’s a gap:
# Your PR passes CI:
✅ Ruff check: PASSED
✅ Ruff format: PASSED
# You merge the PR
$ git merge feature/fix-intersection
# Later, someone runs the tests:
$ pytest tests/
================================ FAILURES ==================================
tests/test_geometry.py::test_find_intersection_normal_angle FAILED
E AssertionError: assert None is not None
The problem:
- ✅ CI checked that code looks good (style)
- ❌ CI never checked that code works correctly (tests)
- ❌ Broken code passed CI and reached main branch!
Today we fix this gap: Add pytest to CI so broken code CANNOT merge.
Pedagogical Note: Practice First, Theory Second
In this course, we follow a “practice first” approach for testing:
- Chapter 03 (Testing Fundamentals) taught you how to write tests (pytest basics, equivalence classes, boundary values)
- Chapter 03 (TDD and CI) (this lecture) teaches you how to automate tests (CI integration, coverage measurement)
- Chapter 03 (Testing Theory and Coverage) teaches you why coverage works (formal theory, C0 vs C1, systematic test design)
This mirrors how professional developers learn: first get things working, then understand the theory behind it. By the time you reach Chapter 03 (Testing Theory and Coverage), you’ll already have hands-on experience with coverage metrics, making the theory more meaningful.
2. Learning Objectives
By the end of this lecture, you will:
- Add pytest to your existing CI pipeline (extend what you built in Chapter 02 (Feature Development))
- Understand test coverage and why it’s a measurable quality metric
- Integrate coverage reporting into GitHub Actions
- Use coverage to find untested code (objective, not subjective)
- Learn Test-Driven Development (TDD) as a discipline to prevent forgetting tests
- Apply the complete workflow to Road Profile Viewer
What you WILL learn:
- Extending CI to check correctness, not just style
- Measuring what percentage of your code is tested
- Using coverage to catch gaps in test suites
- TDD as a workflow that enforces test writing
What you WON’T learn:
- 100% coverage is always necessary (it’s not!)
- Coverage guarantees correctness (it doesn’t!)
- Advanced CI/CD pipelines (out of scope)
What you’ll learn in the NEXT lecture (Chapter 03 (Testing Theory and Coverage)):
- The formal theoretical framework for testing (Program, Model, Coverage Criterion)
- The difference between Statement (C0) and Branch (C1) coverage
- How to systematically design tests to achieve specific coverage goals
- Why equivalence classes and boundary values are also “coverage criteria”
3. Part 1: Adding Tests to Your CI Pipeline
3.1 Your Current CI (from Chapter 02)
You already have this working:
# .github/workflows/quality.yml (from Chapter 02)
name: Code Quality
on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up uv
        uses: astral-sh/setup-uv@v4
      - name: Cache dependencies
        uses: actions/cache@v4
        with:
          path: ~/.cache/uv
          key: ${{ runner.os }}-uv-${{ hashFiles('**/uv.lock') }}
      - name: Install dependencies
        run: uv sync --dev
      - name: Run Ruff linter
        run: uv run ruff check .
      - name: Check formatting
        run: uv run ruff format --check .
      - name: Run Pyright
        run: uv run pyright
This already:
- ✅ Runs automatically on every PR and push
- ✅ Blocks merge if checks fail (branch protection enabled)
- ✅ Ensures code style, formatting, and types are correct
What’s missing: Running pytest!
3.2 Adding pytest to CI
Update .github/workflows/quality.yml:
Add ONE step to your existing workflow:
# .github/workflows/quality.yml (UPDATED)
name: Code Quality
on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up uv
        uses: astral-sh/setup-uv@v4
      - name: Cache dependencies
        uses: actions/cache@v4
        with:
          path: ~/.cache/uv
          key: ${{ runner.os }}-uv-${{ hashFiles('**/uv.lock') }}
      - name: Install dependencies
        run: uv sync --dev
      - name: Run Ruff linter
        run: uv run ruff check .
      - name: Check formatting
        run: uv run ruff format --check .
      - name: Run Pyright
        run: uv run pyright
      # NEW: Add this step!
      - name: Run tests
        run: uv run pytest tests/ -v
That’s it! Now CI runs tests automatically after all quality checks.
3.3 What Happens Now
Before (Chapter 02 (Feature Development)):
PR created → CI runs (quality.yml):
✅ Ruff linter
✅ Ruff format
✅ Pyright
If all pass → Can merge
After (Chapter 03 (TDD and CI)):
PR created → CI runs (quality.yml):
✅ Ruff linter
✅ Ruff format
✅ Pyright
✅ Pytest ← NEW!
If all pass → Can merge
If ANY fail → Cannot merge
Branch protection (already enabled from Chapter 02 (Feature Development)) now blocks:
- PRs with linting issues (Ruff linter)
- PRs with formatting issues (Ruff format)
- PRs with type errors (Pyright)
- PRs with failing tests (pytest) ← NEW!
3.4 What Happens When Tests Fail in CI?
Example: Developer breaks a test
# geometry.py - Developer makes a change
def find_intersection(x_road, y_road, angle_degrees, camera_x=0, camera_y=1.5):
# Changed logic, didn't realize it broke edge case
if len(x_road) == 0:
return 0, 0, 0 # Changed from (None, None, None) - WRONG!
# ...
Push to PR:
$ git push origin feature/fix-intersection
CI runs automatically and shows:
Run uv run pytest tests/ -v

tests/test_geometry.py::test_find_intersection_normal_angle PASSED   [ 20%]
tests/test_geometry.py::test_find_intersection_empty_arrays FAILED   [ 40%]

=================================== FAILURES ===================================
____________________ test_find_intersection_empty_arrays _______________________

    def test_find_intersection_empty_arrays():
        x_road = np.array([])
        y_road = np.array([])
        x, y, dist = find_intersection(x_road, y_road, -10.0)
>       assert x is None, "Empty array should return None"
E       AssertionError: Empty array should return None
E       assert 0 is None

tests/test_geometry.py:42: AssertionError
========================= 1 failed, 4 passed in 0.08s ==========================
Error: Process completed with exit code 1.
GitHub shows:
- ❌ Code Quality / quality — Failed
- Merge button is disabled
Developer must:
- Look at CI logs
- See which test failed and why
- Fix the code
- Push again
- Wait for CI to pass
- Then merge
4. Part 2: The Coverage Problem - Did We Test Enough?
4.1 Humans Make Mistakes
Even with CI enforcing test execution, there’s another problem:
Scenario: Tests run, but don’t test everything
# geometry.py
def find_intersection(x_road, y_road, angle_degrees, camera_x=0, camera_y=1.5):
"""Find intersection between camera ray and road profile."""
# Path 1: Empty array check
if len(x_road) == 0 or len(y_road) == 0:
return None, None, None
# Path 2: Normal calculation
angle_rad = -np.deg2rad(angle_degrees)
# Path 3: Vertical angle edge case (HUMANS FORGOT TO TEST THIS!)
if np.abs(np.cos(angle_rad)) < 1e-10:
return None, None, None # What if this has a bug?
# Path 4: Calculate intersection
slope = np.tan(angle_rad)
# ... intersection calculation
return x_int, y_int, dist
Developer writes tests:
def test_find_intersection_normal_angle():
    # Tests Path 2 + Path 4 ✅
    pass

def test_find_intersection_empty_arrays():
    # Tests Path 1 ✅
    pass

# Oops! Forgot to test Path 3 (vertical angles)! ❌
CI status:
✅ All 2 tests passed!
But: Vertical angle code (Path 3) is never tested. Bug could be hiding there!
The problem: Humans can forget to test certain code paths.
4.2 Coverage: An Objective Metric
Question: How do we know if we tested enough code?
Answer: Test coverage - measures what percentage of code is executed by tests.
Definition:
Coverage = (Lines executed by tests / Total lines of code) × 100%
Example:
def find_intersection(x_road, y_road, angle_degrees, camera_x=0, camera_y=1.5):
    if len(x_road) == 0:                      # Line 1 - ✅ Tested
        return None, None, None               # Line 2 - ✅ Tested
    angle_rad = -np.deg2rad(angle_degrees)    # Line 3 - ✅ Tested
    if np.abs(np.cos(angle_rad)) < 1e-10:     # Line 4 - ❌ NEVER EXECUTED!
        return None, None, None               # Line 5 - ❌ NEVER EXECUTED!
    slope = np.tan(angle_rad)                 # Line 6 - ✅ Tested
    # ... more code                           # Lines 7-9 - ✅ Tested
    return x_int, y_int, dist                 # Line 10 - ✅ Tested
Coverage calculation:
Tested lines: 1, 2, 3, 6, 7, 8, 9, 10 = 8 lines
Untested lines: 4, 5 = 2 lines
Total: 10 lines
Coverage = 8/10 = 80%
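The same calculation can be written as simple set arithmetic. This is an illustrative sketch mirroring the 10-line example above, not how coverage.py works internally:

```python
# Statement coverage as set arithmetic (illustrative sketch).
executed = {1, 2, 3, 6, 7, 8, 9, 10}   # lines hit by the test suite
all_lines = set(range(1, 11))          # 10 executable lines in total

coverage = len(executed) / len(all_lines) * 100
missing = sorted(all_lines - executed)

print(f"Coverage: {coverage:.0f}%")    # Coverage: 80%
print(f"Missing lines: {missing}")     # Missing lines: [4, 5]
```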
Coverage report shows:
geometry.py: 80% coverage
Missing lines: 4-5
This tells us: “Hey! Lines 4-5 (vertical angle check) are never tested!”
Going Deeper (Chapter 03 (Testing Theory and Coverage)): In Chapter 03 (Testing Theory and Coverage), we’ll explore the theoretical foundations of coverage. You’ll learn that coverage is always “coverage of a model” - and there are different models (Control Flow Graphs, Input Domain Partitions) that lead to different coverage criteria (Statement, Branch, Path, etc.). For now, just understand that coverage tells you “what percentage of code ran during tests.”
4.3 Coverage is Not Perfect, But It’s Objective
What coverage DOES:
- ✅ Shows which lines of code are executed by tests
- ✅ Identifies completely untested code
- ✅ Provides measurable metric (80%, 90%, etc.)
- ✅ Human-independent (machine calculates it)
What coverage DOESN’T do:
- ❌ Guarantee code is correct (tests might not assert correctly)
- ❌ Test all possible inputs (still need equivalence classes!)
- ❌ Test logic correctness (can execute code but not verify it works)
Example: 100% coverage with bad test:
def calculate_tax(income):
    if income < 50000:
        return income * 0.1  # 10% tax
    else:
        return income * 0.2  # 20% tax

# Bad test with 100% coverage:
def test_calculate_tax():
    calculate_tax(30000)  # Executes the 10% branch ✅
    calculate_tax(60000)  # Executes the 20% branch ✅
    # But doesn't assert anything! ❌
Coverage: 100% ✅ Actually tests correctness? ❌
The takeaway: Coverage shows what you executed, not whether it’s correct.
But: It’s still valuable! Finding untested code is the first step.
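A minimal fix for the bad test above is to assert on the return values. Same 100% coverage, but now the test actually verifies correctness (the expected numbers follow directly from the 10%/20% rates):

```python
def calculate_tax(income):
    if income < 50000:
        return income * 0.1  # 10% tax
    else:
        return income * 0.2  # 20% tax

# Good test: same coverage, but it verifies the results.
def test_calculate_tax():
    assert calculate_tax(30000) == 3000.0   # 10% bracket
    assert calculate_tax(60000) == 12000.0  # 20% bracket
```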
Going Deeper (Chapter 03 (Testing Theory and Coverage)): We’ll explore these limitations in much more detail in Chapter 03 (Testing Theory and Coverage), including specific examples of how 100% coverage can still miss bugs, and how coverage relates to formal requirements.
5. Part 3: Setting Up Coverage Reporting
5.1 Installing coverage.py
Coverage is measured using coverage.py (pytest plugin: pytest-cov).
Add to pyproject.toml:
[project]
dependencies = [
    "numpy",
    "dash",
]

[project.optional-dependencies]
dev = [
    "pytest>=8.0",
    "pytest-cov>=4.0",  # Add this
    "ruff>=0.8",
]
Install:
$ uv sync
5.2 Running Coverage Locally
Run tests with coverage:
$ uv run pytest tests/ --cov=. --cov-report=term-missing
---------- coverage: platform win32, python 3.12.0 -----------
Name               Stmts   Miss  Cover   Missing
------------------------------------------------
geometry.py           45      3    93%   87-89
main.py               12      2    83%   15-16
road.py               34      0   100%
visualization.py      28     28     0%   1-45
------------------------------------------------
TOTAL                119     33    72%
What this shows:
- geometry.py: 93% covered, lines 87-89 not tested
- main.py: 83% covered, lines 15-16 not tested
- road.py: 100% covered! ✅
- visualization.py: 0% covered (no tests for UI yet)
- Overall: 72% coverage
5.3 Understanding the Coverage Report
Name               Stmts   Miss  Cover   Missing
------------------------------------------------
geometry.py           45      3    93%   87-89
Columns:
- Stmts (Statements): Total executable lines in file (45)
- Miss: Lines NOT executed by tests (3)
- Cover: Percentage covered (93%)
- Missing: Which line numbers are untested (87-89)
Action: Look at lines 87-89 in geometry.py:
# geometry.py, lines 87-89
if np.abs(np.cos(angle_rad)) < 1e-10:  # Line 87
    return None, None, None            # Line 88
                                       # Line 89 might be blank or a closing bracket
Realization: “We never test vertical angles! Need to add test!”
5.4 Adding Tests Based on Coverage
Before (93% coverage):
# tests/test_geometry.py
def test_find_intersection_normal_angle():
    # Tests normal angles
    pass

def test_find_intersection_empty_arrays():
    # Tests empty arrays
    pass
After seeing coverage report, add missing test:
def test_find_intersection_vertical_angle():
    """
    Test vertical ray (90°) - previously untested!

    Coverage report showed lines 87-89 were missing.
    """
    x_road = np.array([0, 10, 20, 30])
    y_road = np.array([0, 2, 4, 6])

    # Vertical ray should be handled specially
    x, y, dist = find_intersection(x_road, y_road, 90.0)
    assert x is None, "Vertical ray should return None"
Run coverage again:
$ uv run pytest tests/ --cov=. --cov-report=term-missing
Name               Stmts   Miss  Cover   Missing
------------------------------------------------
geometry.py           45      0   100%
Coverage improved from 93% → 100%! Lines 87-89 now tested.
6. Part 4: Integrating Coverage into CI
6.1 Adding Coverage to GitHub Actions
Update .github/workflows/quality.yml:
# .github/workflows/quality.yml (UPDATED with coverage)
name: Code Quality
on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up uv
        uses: astral-sh/setup-uv@v4
      - name: Cache dependencies
        uses: actions/cache@v4
        with:
          path: ~/.cache/uv
          key: ${{ runner.os }}-uv-${{ hashFiles('**/uv.lock') }}
      - name: Install dependencies
        run: uv sync --dev
      - name: Run Ruff linter
        run: uv run ruff check .
      - name: Check formatting
        run: uv run ruff format --check .
      - name: Run Pyright
        run: uv run pyright
      # UPDATED: Run tests with coverage
      - name: Run tests with coverage
        run: uv run pytest tests/ --cov=. --cov-report=term-missing --cov-fail-under=70
      - name: Upload coverage report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: coverage-report
          path: .coverage
Key addition:
--cov-fail-under=70
This means: CI fails if coverage drops below 70%.
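If you prefer to keep these flags out of the CI command line, the same settings can live in pyproject.toml, so a plain local pytest run enforces them too. A sketch: `addopts` is pytest's standard option-injection setting, and `fail_under` is coverage.py's documented report option, which pytest-cov respects:

```toml
# pyproject.toml (sketch) - equivalent to passing the flags on every run
[tool.pytest.ini_options]
addopts = "--cov=. --cov-report=term-missing"

[tool.coverage.report]
fail_under = 70  # local runs and CI both fail below 70%
```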
6.2 What Happens with Coverage Enforcement
Scenario: Developer adds code without tests
# geometry.py - Developer adds new function
def calculate_curvature(x_road, y_road):
    """Calculate road curvature at each point."""
    # ... 10 lines of new code
    return curvature_array

# But developer doesn't write tests for it!
Push to PR:
$ git push origin feature/add-curvature
CI runs:
Run uv run pytest tests/ --cov=. --cov-report=term-missing --cov-fail-under=70

---------- coverage: platform linux, python 3.12.0 -----------
Name               Stmts   Miss  Cover
--------------------------------------
geometry.py           55     10    82%
main.py               12      2    83%
road.py               34      0   100%
visualization.py      28     28     0%
--------------------------------------
TOTAL                129     40    69%

FAIL Required test coverage of 70% not reached. Total coverage: 69.00%
Error: Process completed with exit code 1.
GitHub shows:
❌ Code Quality / quality — Failed
Coverage dropped to 69% (required: 70%)
New code in geometry.py is not tested.
Developer must:
- Write tests for calculate_curvature()
- Push again
- Coverage increases above 70%
- CI passes
- Then merge
Result: Cannot merge untested code!
6.3 Setting Appropriate Coverage Thresholds
Common thresholds:
- 70% - Minimum acceptable (some code untested)
- 80% - Good (most code tested)
- 90% - Great (high confidence)
- 100% - Unrealistic for most projects (UI code, error handlers, etc.)
Recommendation for Road Profile Viewer:
--cov-fail-under=70 # Start here
# Later, increase as you add more tests:
--cov-fail-under=80
Strategy:
- Start with current coverage (e.g., 65%)
- Set threshold slightly higher (e.g., 70%)
- New code must maintain or improve coverage
- Gradually increase threshold over time
Don’t aim for 100%! Some code is hard to test (UI, error cases, etc.).
Preview (Chapter 03 (Testing Theory and Coverage)): The coverage tool reports “line coverage” (also called Statement Coverage or C0). In Chapter 03 (Testing Theory and Coverage), you’ll learn about “Branch Coverage” (C1), which is stronger - it requires testing both True and False outcomes of every decision. Branch coverage gives more confidence but requires more tests.
7. Part 5: Test-Driven Development (TDD) - A Discipline to Enforce Test Writing
7.1 The “Forgot to Test” Problem
Even with CI + Coverage, there’s still a gap:
1. Developer writes new function
2. Pushes to PR
3. CI fails (coverage too low)
4. Developer writes tests AFTER the fact
5. Push again
Problem: Tests are still an afterthought, just enforced by CI.
Better approach: Write tests FIRST, then code. Impossible to forget!
7.2 TDD: Tests First, Code Second
Idea: Before writing ANY code, write a test that defines what it should do.
The Red-Green-Refactor Cycle:
🔴 RED: Write a failing test
    "This is what I want the code to do"

🟢 GREEN: Write minimal code to make it pass
    "Make it work (don't make it perfect)"

🔵 REFACTOR: Clean up the code
    "Make it better while tests ensure it works"

Repeat for the next feature.
7.3 TDD Example: Building validate_road_data()
Requirement: Validate road data before using it.
Step 1: 🔴 RED - Write failing test
# tests/test_road_validation.py
import numpy as np
import pytest

from road import validate_road_data  # Function doesn't exist yet!

def test_validate_road_data_accepts_valid_data():
    """Valid data should pass without raising exception."""
    x_road = np.array([0, 10, 20, 30])
    y_road = np.array([0, 2, 4, 6])

    # Should not raise exception
    validate_road_data(x_road, y_road)
Run test:
$ uv run pytest tests/test_road_validation.py -v
E ImportError: cannot import name 'validate_road_data' from 'road'
🔴 RED: Test fails (function doesn’t exist). Good!
Step 2: 🟢 GREEN - Write minimal code to pass
# road.py
def validate_road_data(x_road, y_road):
"""Validate road data."""
pass # Minimal implementation - just don't crash
Run test:
$ uv run pytest tests/test_road_validation.py -v
tests/test_road_validation.py::test_validate_road_data_accepts_valid_data PASSED
🟢 GREEN: Test passes!
“But it does nothing!” - That’s OK! We’ll add real validation when we have tests that require it.
Step 3: 🔴 RED - Add test for edge case
def test_validate_road_data_rejects_empty_arrays():
    """Empty arrays should raise ValueError."""
    x_road = np.array([])
    y_road = np.array([])

    with pytest.raises(ValueError, match="Arrays cannot be empty"):
        validate_road_data(x_road, y_road)
Run test:
$ uv run pytest tests/test_road_validation.py -v
tests/test_road_validation.py::test_validate_road_data_rejects_empty_arrays FAILED
E Failed: DID NOT RAISE <class 'ValueError'>
🔴 RED: Test fails. Now we NEED to implement empty check.
Step 4: 🟢 GREEN - Implement empty check
# road.py
def validate_road_data(x_road, y_road):
"""Validate road data."""
if len(x_road) == 0 or len(y_road) == 0:
raise ValueError("Arrays cannot be empty")
Run tests:
$ uv run pytest tests/test_road_validation.py -v
tests/test_road_validation.py::test_validate_road_data_accepts_valid_data PASSED [50%]
tests/test_road_validation.py::test_validate_road_data_rejects_empty_arrays PASSED [100%]
🟢 GREEN: Both tests pass!
Step 5: Continue the cycle
Add more tests (mismatched lengths, NaN, non-increasing x):
def test_validate_road_data_rejects_mismatched_lengths():
    x_road = np.array([0, 10, 20])
    y_road = np.array([0, 2])
    with pytest.raises(ValueError, match="same length"):
        validate_road_data(x_road, y_road)

def test_validate_road_data_rejects_nan():
    x_road = np.array([0, 10, np.nan])
    y_road = np.array([0, 2, 4])
    with pytest.raises(ValueError, match="NaN"):
        validate_road_data(x_road, y_road)
Implement each validation:
def validate_road_data(x_road, y_road):
    if len(x_road) == 0 or len(y_road) == 0:
        raise ValueError("Arrays cannot be empty")
    if len(x_road) != len(y_road):
        raise ValueError("Arrays must have the same length")
    if np.any(np.isnan(x_road)) or np.any(np.isnan(y_road)):
        raise ValueError("Arrays must not contain NaN")
    # ... more validations
Step 6: 🔵 REFACTOR - Clean up code
def validate_road_data(x_road, y_road):
    """Validate road profile data."""
    # Check empty
    if len(x_road) == 0 or len(y_road) == 0:
        raise ValueError("Arrays cannot be empty")

    # Check same length
    if len(x_road) != len(y_road):
        raise ValueError("Arrays must have the same length")

    # Check for invalid numeric values (refactored into loop)
    for arr, name in [(x_road, "x_road"), (y_road, "y_road")]:
        if np.any(np.isnan(arr)):
            raise ValueError(f"{name} must not contain NaN")
        if np.any(np.isinf(arr)):
            raise ValueError(f"{name} must not contain infinity")
Run tests:
$ uv run pytest tests/test_road_validation.py -v
============================== 5 passed in 0.08s ==============================
🔵 REFACTOR complete: Code is cleaner, tests still pass!
7.4 TDD Benefits
1. Can’t forget to write tests
Without TDD: Write code → (maybe write tests later)
With TDD: Can't write code without test first!
2. Better function design
Writing tests first forces you to think:
- What inputs should this accept?
- What should it return?
- What errors should it raise?
3. 100% coverage from the start
Every line of code was written to pass a test
→ Every line is covered!
4. Refactoring confidence
Tests ensure refactored code still works
Can clean up without fear of breaking things
7.5 When to Use TDD
TDD works well for:
- ✅ Pure functions with clear requirements (validation, calculations)
- ✅ Bug fixes (write test that reproduces bug, then fix)
- ✅ Algorithms with well-defined behavior
TDD is hard for:
- ❌ Exploratory prototyping (“I don’t know what I’m building yet”)
- ❌ UI/UX work (hard to test aesthetics)
- ❌ Learning new libraries (need to experiment first)
Pragmatic approach:
1. Prototype/explore without TDD
2. Once you know what you want, DELETE prototype
3. Rebuild with TDD using lessons learned
Or:
1. Build it quick
2. Write tests for current behavior (characterization tests)
3. Refactor with TDD going forward
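Step 2 of that second path can be sketched as follows. Here slugify is a hypothetical example function (not part of Road Profile Viewer); the point is that a characterization test pins down whatever the code does today, so later refactoring cannot silently change observable behavior:

```python
# Hypothetical existing function whose current behavior we want to pin down.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Characterization test: assert what the code does TODAY, correct or not.
def test_slugify_current_behavior():
    assert slugify("Road Profile Viewer") == "road-profile-viewer"
    assert slugify("  Hello World  ") == "hello-world"
```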
TDD is a tool, not a religion. Use when it helps.
Connection to Chapter 03 (Testing Theory and Coverage): TDD naturally tends to produce high coverage because every line of code is written to make a test pass. In Chapter 03 (Testing Theory and Coverage), you’ll learn the formal relationship: TDD produces test suites that are “adequate” for multiple coverage criteria simultaneously.
8. Part 6: Hands-On Exercise - Complete Workflow
8.1 Exercise: Add calculate_viewing_distance() with CI + Coverage + TDD
Goal: Add new feature using the complete professional workflow.
Step 1: Create feature branch
$ git checkout -b feature/add-viewing-distance
Step 2: 🔴 RED - Write failing test (TDD)
Create tests/test_viewing_distance.py:
import numpy as np
import pytest

from geometry import calculate_viewing_distance

def test_calculate_viewing_distance_returns_positive_distance():
    """Should return positive distance for normal downward angle."""
    # Arrange
    x_road = np.array([0, 10, 20, 30])
    y_road = np.array([0, 2, 4, 6])
    angle = -10.0
    camera_y = 10.0  # Camera above road

    # Act
    distance = calculate_viewing_distance(x_road, y_road, angle, camera_y=camera_y)

    # Assert
    assert distance is not None, "Should find intersection"
    assert distance > 0, "Distance should be positive"

def test_calculate_viewing_distance_returns_none_when_ray_misses():
    """Should return None if ray doesn't intersect road."""
    x_road = np.array([0, 10, 20, 30])
    y_road = np.array([0, 2, 4, 6])
    angle = 45.0  # Upward angle, misses road

    distance = calculate_viewing_distance(x_road, y_road, angle)
    assert distance is None
Run tests:
$ uv run pytest tests/test_viewing_distance.py -v
E ImportError: cannot import name 'calculate_viewing_distance'
🔴 RED: Tests fail. Good!
Commit the failing test:
$ git add tests/test_viewing_distance.py
$ git commit -m "Add failing tests for calculate_viewing_distance (RED phase)"
Step 3: 🟢 GREEN - Implement function
Add to geometry.py:
def calculate_viewing_distance(x_road, y_road, angle_degrees, camera_x=0, camera_y=1.5):
    """
    Calculate maximum viewing distance along road.

    Returns distance from camera to farthest visible point on road.
    If no intersection found, returns None.
    """
    # Use existing find_intersection function
    x_int, y_int, _ = find_intersection(x_road, y_road, angle_degrees, camera_x, camera_y)
    if x_int is None:
        return None

    # Calculate Euclidean distance
    distance = np.sqrt((x_int - camera_x)**2 + (y_int - camera_y)**2)
    return distance
Run tests:
$ uv run pytest tests/test_viewing_distance.py -v
tests/test_viewing_distance.py::test_calculate_viewing_distance_returns_positive_distance PASSED [50%]
tests/test_viewing_distance.py::test_calculate_viewing_distance_returns_none_when_ray_misses PASSED [100%]
============================== 2 passed in 0.05s ==============================
🟢 GREEN: Tests pass!
Check coverage:
$ uv run pytest tests/test_viewing_distance.py --cov=geometry --cov-report=term-missing
geometry.py 95% (missing: line 112 - some edge case)
Commit implementation:
$ git add geometry.py
$ git commit -m "Implement calculate_viewing_distance (GREEN phase)"
Step 4: Add test for missing coverage
Coverage showed line 112 untested. Add test:
def test_calculate_viewing_distance_handles_empty_road():
    """Should handle empty road gracefully."""
    x_road = np.array([])
    y_road = np.array([])

    # Should return None (no road to intersect)
    distance = calculate_viewing_distance(x_road, y_road, -10.0)
    assert distance is None
Run with coverage:
$ uv run pytest tests/test_viewing_distance.py --cov=geometry --cov-report=term-missing
geometry.py 100%
Commit:
$ git add tests/test_viewing_distance.py
$ git commit -m "Add test for empty road edge case - 100% coverage"
Step 5: Push and create PR
$ git push -u origin feature/add-viewing-distance
$ gh pr create --title "Add calculate_viewing_distance function" --body "..."
GitHub Actions runs automatically:
✅ Ruff check: PASSED
✅ Ruff format: PASSED
✅ Tests: PASSED (3/3)
✅ Coverage: PASSED (74% > 70%)
All checks have passed
Merge button enabled! ✅
9. Part 7: The Complete Professional Workflow
9.1 Summary: CI + Coverage + TDD Together
┌─────────────────────────────────────────────────┐
│ Developer writes code                           │
└────────────────────────┬────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────┐
│ TDD: Write tests FIRST                          │
│   - Red: Failing test                           │
│   - Green: Minimal implementation               │
│   - Refactor: Clean up                          │
└────────────────────────┬────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────┐
│ Push to PR                                      │
└────────────────────────┬────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────┐
│ CI runs automatically                           │
│   ✅ Code quality (ruff)                        │
│   ✅ Tests (pytest)                             │
│   ✅ Coverage (pytest-cov)                      │
└────────────────────────┬────────────────────────┘
                         │
              ┌──────────┴──────────┐
              │                     │
              ▼                     ▼
        ┌──────────┐          ┌──────────┐
        │   FAIL   │          │   PASS   │
        └────┬─────┘          └────┬─────┘
             │                     │
             ▼                     ▼
┌────────────────────┐       ┌─────────────┐
│ Fix required:      │       │  Can merge  │
│ - Failing tests    │       └─────────────┘
│ - Low coverage     │
│ - Code quality     │
└────────────────────┘
9.2 The Three Layers of Protection
Layer 1: TDD (Discipline)
- Ensures tests are written
- Ensures tests are written FIRST
- Designs better code
Layer 2: CI (Enforcement)
- Automatically runs tests on every PR
- Blocks merge if tests fail
- No human can forget to run tests
Layer 3: Coverage (Measurement)
- Shows what percentage of code is tested
- Identifies untested code objectively
- Blocks merge if coverage drops
- Human-independent metric
Together: Nearly impossible to ship untested code!
10. Summary: What You’ve Accomplished
10.1 Before This Lecture
- ✅ Knew how to write tests (Chapter 03 (Testing Fundamentals))
- ❌ Tests were optional (easy to forget)
- ❌ No way to measure test quality
- ❌ Untested code could reach production
10.2 After This Lecture
- ✅ CI enforces test execution (can’t merge without passing tests)
- ✅ Coverage shows what’s tested (objective measurement)
- ✅ TDD prevents forgetting tests (write tests first)
- ✅ Complete professional workflow (TDD → CI → Coverage)
11. Key Takeaways
11.1 The Problem
Humans forget to run tests. We need automation.
11.2 The Solutions
1. CI Enforcement
- Tests run automatically on every PR
- Failing tests block merge
- Main branch stays stable
2. Coverage Measurement
- Objective metric (70%, 80%, 90%)
- Shows untested code
- Blocks merge if coverage drops
3. TDD Discipline
- Write tests FIRST, code SECOND
- Can’t forget to write tests
- Better design from test-first thinking
11.3 The Workflow
# 1. TDD: Write test first
🔴 RED → 🟢 GREEN → 🔵 REFACTOR
# 2. Push to PR
git push origin feature-branch
# 3. CI validates
✅ Tests pass
✅ Coverage ≥ 70%
# 4. Merge with confidence
11.4 Remember
- Coverage ≠ Correctness (but it helps find gaps)
- 70-80% is good enough (don’t aim for 100%)
- TDD is a tool (use when it helps)
- CI is mandatory (enforce, don’t hope)
12. Further Reading
Test Coverage:
- coverage.py documentation: coverage.readthedocs.io
- pytest-cov plugin: pytest-cov.readthedocs.io
- “When is 100% coverage not enough?” (thoughtworks.com)
Test-Driven Development:
- “Test Driven Development: By Example” by Kent Beck
- Martin Fowler on TDD: martinfowler.com/bliki/TestDrivenDevelopment.html
CI/CD Best Practices:
- GitHub Actions documentation
- Branch protection rules
- Continuous Integration patterns
Next Steps:
- Set up CI for Road Profile Viewer
- Add coverage reporting
- Practice TDD on new features
- Gradually increase coverage threshold
Coming Next: Chapter 03 (Testing Theory and Coverage) - Testing Theory
In Chapter 03 (Testing Theory and Coverage), we’ll dive deep into the theoretical foundations:
- Formal definitions: Program, Input Domain, Test Suite, Model, Coverage Criterion
- Statement Coverage (C0) vs Branch Coverage (C1) - and why C1 subsumes C0
- How to read a Control Flow Graph and design tests systematically
- Equivalence classes and boundary values as coverage criteria
- The requirements-tests-coverage spiral: how testing reveals missing specifications
This theory will help you understand why the practical techniques you learned today actually work.
Congratulations! You now have a complete, professional testing workflow: write tests first (TDD), run them automatically (CI), and measure objectively (coverage). Your code is safer, better tested, and production-ready.