
03 Testing Fundamentals: Automated Testing in CI & Test Coverage

lecture ci-cd github-actions pytest test-coverage coverage-py tdd automation

1. Introduction: Your CI Checks Style, But Does It Check Correctness?

Where we are so far:

From Chapter 02 (Feature Development), you learned:

From Chapter 03 (Testing Fundamentals), you learned:

Your current CI workflow (from Chapter 02 (Feature Development)):

# .github/workflows/quality.yml
name: Code Quality

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  quality:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up uv
        uses: astral-sh/setup-uv@v4

      - name: Cache dependencies
        uses: actions/cache@v4
        with:
          path: ~/.cache/uv
          key: ${{ runner.os }}-uv-${{ hashFiles('**/uv.lock') }}

      - name: Install dependencies
        run: uv sync --dev

      - name: Run Ruff linter
        run: uv run ruff check .

      - name: Check formatting
        run: uv run ruff format --check .

      - name: Run Pyright
        run: uv run pyright

This CI already checks:

But there’s a gap:

# Your PR passes CI:
✅ Ruff check: PASSED
✅ Ruff format: PASSED
✅ Pyright: PASSED

# You merge the PR
$ git merge feature/fix-intersection

# Later, someone runs the tests:
$ pytest tests/
================================ FAILURES ==================================
tests/test_geometry.py::test_find_intersection_normal_angle FAILED
E   AssertionError: assert None is not None

The problem:

Today we fix this gap: Add pytest to CI so broken code CANNOT merge.

Pedagogical Note: Practice First, Theory Second

In this course, we follow a “practice first” approach for testing:

This mirrors how professional developers learn: first get things working, then understand the theory behind it. By the time you reach Chapter 03 (Testing Theory and Coverage), you’ll already have hands-on experience with coverage metrics, making the theory more meaningful.


2. Learning Objectives

By the end of this lecture, you will:

  1. Add pytest to your existing CI pipeline (extend what you built in Chapter 02 (Feature Development))
  2. Understand test coverage and why it’s a measurable quality metric
  3. Integrate coverage reporting into GitHub Actions
  4. Use coverage to find untested code (objective, not subjective)
  5. Learn Test-Driven Development (TDD) as a discipline to prevent forgetting tests
  6. Apply the complete workflow to Road Profile Viewer

What you WILL learn:

What you WON’T learn:

What you’ll learn in the NEXT lecture (Chapter 03 (Testing Theory and Coverage)):


3. Part 1: Adding Tests to Your CI Pipeline

3.1 Your Current CI (from Chapter 02)

You already have this working:

# .github/workflows/quality.yml (from Chapter 02)
name: Code Quality

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  quality:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up uv
        uses: astral-sh/setup-uv@v4

      - name: Cache dependencies
        uses: actions/cache@v4
        with:
          path: ~/.cache/uv
          key: ${{ runner.os }}-uv-${{ hashFiles('**/uv.lock') }}

      - name: Install dependencies
        run: uv sync --dev

      - name: Run Ruff linter
        run: uv run ruff check .

      - name: Check formatting
        run: uv run ruff format --check .

      - name: Run Pyright
        run: uv run pyright

This already:

What’s missing: Running pytest!


3.2 Adding pytest to CI

Update .github/workflows/quality.yml:

Add ONE step to your existing workflow:

# .github/workflows/quality.yml (UPDATED)
name: Code Quality

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  quality:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up uv
        uses: astral-sh/setup-uv@v4

      - name: Cache dependencies
        uses: actions/cache@v4
        with:
          path: ~/.cache/uv
          key: ${{ runner.os }}-uv-${{ hashFiles('**/uv.lock') }}

      - name: Install dependencies
        run: uv sync --dev

      - name: Run Ruff linter
        run: uv run ruff check .

      - name: Check formatting
        run: uv run ruff format --check .

      - name: Run Pyright
        run: uv run pyright

      # NEW: Add this step!
      - name: Run tests
        run: uv run pytest tests/ -v

That’s it! Now CI runs tests automatically after all quality checks.


3.3 What Happens Now

Before (Chapter 02 (Feature Development)):

PR created → CI runs (quality.yml):
  ✅ Ruff linter
  ✅ Ruff format
  ✅ Pyright

If all pass → Can merge

After (Chapter 03 (TDD and CI)):

PR created → CI runs (quality.yml):
  ✅ Ruff linter
  ✅ Ruff format
  ✅ Pyright
  ✅ Pytest  ← NEW!

If all pass → Can merge
If ANY fail → Cannot merge

Branch protection (already enabled from Chapter 02 (Feature Development)) now blocks:


3.4 What Happens When Tests Fail in CI?

Example: Developer breaks a test

# geometry.py - Developer makes a change
def find_intersection(x_road, y_road, angle_degrees, camera_x=0, camera_y=1.5):
    # Changed logic, didn't realize it broke edge case
    if len(x_road) == 0:
        return 0, 0, 0  # Changed from (None, None, None) - WRONG!
    # ...

Push to PR:

$ git push origin feature/fix-intersection

CI runs automatically and shows:

Run uv run pytest tests/ -v

tests/test_geometry.py::test_find_intersection_normal_angle PASSED      [ 20%]
tests/test_geometry.py::test_find_intersection_empty_arrays FAILED      [ 40%]

=================================== FAILURES ===================================
_____________________ test_find_intersection_empty_arrays ______________________

    def test_find_intersection_empty_arrays():
        x_road = np.array([])
        y_road = np.array([])
        x, y, dist = find_intersection(x_road, y_road, -10.0)
>       assert x is None, "Empty array should return None"
E       AssertionError: Empty array should return None
E       assert 0 is None

tests/test_geometry.py:42: AssertionError
========================= 1 failed, 4 passed in 0.08s =========================
Error: Process completed with exit code 1.

GitHub shows:

Developer must:

  1. Look at CI logs
  2. See which test failed and why
  3. Fix the code
  4. Push again
  5. Wait for CI to pass
  6. Then merge

4. Part 2: The Coverage Problem - Did We Test Enough?

4.1 Humans Make Mistakes

Even with CI enforcing test execution, there’s another problem:

Scenario: Tests run, but don’t test everything

# geometry.py
def find_intersection(x_road, y_road, angle_degrees, camera_x=0, camera_y=1.5):
    """Find intersection between camera ray and road profile."""

    # Path 1: Empty array check
    if len(x_road) == 0 or len(y_road) == 0:
        return None, None, None

    # Path 2: Normal calculation
    angle_rad = -np.deg2rad(angle_degrees)

    # Path 3: Vertical angle edge case (HUMANS FORGOT TO TEST THIS!)
    if np.abs(np.cos(angle_rad)) < 1e-10:
        return None, None, None  # What if this has a bug?

    # Path 4: Calculate intersection
    slope = np.tan(angle_rad)
    # ... intersection calculation
    return x_int, y_int, dist

Developer writes tests:

def test_find_intersection_normal_angle():
    # Tests Path 2 + Path 4 ✅
    pass

def test_find_intersection_empty_arrays():
    # Tests Path 1 ✅
    pass

# Oops! Forgot to test Path 3 (vertical angles)! ❌

CI status:

✅ All 2 tests passed!

But: Vertical angle code (Path 3) is never tested. Bug could be hiding there!

The problem: Humans can forget to test certain code paths.


4.2 Coverage: An Objective Metric

Question: How do we know if we tested enough code?

Answer: Test coverage - measures what percentage of code is executed by tests.

Definition:

Coverage = (Lines executed by tests / Total lines of code) × 100%

Example:

def find_intersection(x_road, y_road, angle_degrees, camera_x=0, camera_y=1.5):
    if len(x_road) == 0:           # Line 1 - ✅ Tested
        return None, None, None    # Line 2 - ✅ Tested

    angle_rad = -np.deg2rad(angle_degrees)  # Line 3 - ✅ Tested

    if np.abs(np.cos(angle_rad)) < 1e-10:   # Line 4 - ❌ NEVER EXECUTED!
        return None, None, None              # Line 5 - ❌ NEVER EXECUTED!

    slope = np.tan(angle_rad)      # Line 6 - ✅ Tested
    # ... more code
    return x_int, y_int, dist      # Line 10 - ✅ Tested

Coverage calculation:

Tested lines: 1, 2, 3, 6, 7, 8, 9, 10 = 8 lines
Untested lines: 4, 5 = 2 lines
Total: 10 lines

Coverage = 8/10 = 80%

Coverage report shows:

geometry.py: 80% coverage
Missing lines: 4-5

This tells us: “Hey! Lines 4-5 (vertical angle check) are never tested!”

Going Deeper (Chapter 03 (Testing Theory and Coverage)): In Chapter 03 (Testing Theory and Coverage), we’ll explore the theoretical foundations of coverage. You’ll learn that coverage is always “coverage of a model” - and there are different models (Control Flow Graphs, Input Domain Partitions) that lead to different coverage criteria (Statement, Branch, Path, etc.). For now, just understand that coverage tells you “what percentage of code ran during tests.”


4.3 Coverage is Not Perfect, But It’s Objective

What coverage DOES:

What coverage DOESN’T do:

Example: 100% coverage with bad test:

def calculate_tax(income):
    if income < 50000:
        return income * 0.1    # 10% tax
    else:
        return income * 0.2    # 20% tax

# Bad test with 100% coverage:
def test_calculate_tax():
    calculate_tax(30000)  # Executes line 3 ✅
    calculate_tax(60000)  # Executes line 5 ✅
    # But doesn't assert anything! ❌

Coverage: 100% ✅
Actually tests correctness? ❌

The takeaway: Coverage shows what you executed, not whether it’s correct.

But: It’s still valuable! Finding untested code is the first step.
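The fix is small: keep the same two inputs, but assert on the returned values. A corrected version of the test above has identical coverage, yet a wrong tax rate now fails the build:

```python
def calculate_tax(income):
    if income < 50000:
        return income * 0.1    # 10% tax
    else:
        return income * 0.2    # 20% tax


# Good test: same 100% coverage, but wrong results now FAIL.
def test_calculate_tax():
    # abs() comparison avoids floating-point equality pitfalls
    assert abs(calculate_tax(30000) - 3000.0) < 1e-9
    assert abs(calculate_tax(60000) - 12000.0) < 1e-9
```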

Going Deeper (Chapter 03 (Testing Theory and Coverage)): We’ll explore these limitations in much more detail in Chapter 03 (Testing Theory and Coverage), including specific examples of how 100% coverage can still miss bugs, and how coverage relates to formal requirements.


5. Part 3: Setting Up Coverage Reporting

5.1 Installing coverage.py

Coverage is measured using coverage.py (pytest plugin: pytest-cov).

Add to pyproject.toml:

[project]
dependencies = [
    "numpy",
    "dash",
]

[dependency-groups]
dev = [
    "pytest>=8.0",
    "pytest-cov>=4.0",  # Add this
    "ruff>=0.8",
    "pyright>=1.1",     # CI runs pyright, so it belongs here too
]

Install:

$ uv sync

5.2 Running Coverage Locally

Run tests with coverage:

$ uv run pytest tests/ --cov=. --cov-report=term-missing

---------- coverage: platform win32, python 3.12.0 -----------
Name                    Stmts   Miss  Cover   Missing
-----------------------------------------------------
geometry.py                45      3    93%   87-89
main.py                    12      2    83%   15-16
road.py                    34      0   100%
visualization.py           28     28     0%   1-45
-----------------------------------------------------
TOTAL                     119     33    72%

What this shows:


5.3 Understanding the Coverage Report

Name                    Stmts   Miss  Cover   Missing
-----------------------------------------------------
geometry.py                45      3    93%   87-89

Columns:

Action: Look at lines 87-89 in geometry.py:

# geometry.py, lines 87-89
if np.abs(np.cos(angle_rad)) < 1e-10:   # Line 87
    return None, None, None              # Line 88
    # Line 89 might be blank or closing bracket

Realization: “We never test vertical angles! Need to add test!”


5.4 Adding Tests Based on Coverage

Before (93% coverage):

# tests/test_geometry.py
def test_find_intersection_normal_angle():
    # Tests normal angles
    pass

def test_find_intersection_empty_arrays():
    # Tests empty arrays
    pass

After seeing coverage report, add missing test:

def test_find_intersection_vertical_angle():
    """
    Test vertical ray (90°) - previously untested!
    Coverage report showed lines 87-89 were missing.
    """
    x_road = np.array([0, 10, 20, 30])
    y_road = np.array([0, 2, 4, 6])

    # Vertical ray should be handled specially
    x, y, dist = find_intersection(x_road, y_road, 90.0)
    assert x is None, "Vertical ray should return None"

Run coverage again:

$ uv run pytest tests/ --cov=. --cov-report=term-missing

Name                    Stmts   Miss  Cover   Missing
-----------------------------------------------------
geometry.py                45      0   100%

Coverage improved from 93% → 100%! Lines 87-89 now tested.


6. Part 4: Integrating Coverage into CI

6.1 Adding Coverage to GitHub Actions

Update .github/workflows/quality.yml:

# .github/workflows/quality.yml (UPDATED with coverage)
name: Code Quality

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  quality:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up uv
        uses: astral-sh/setup-uv@v4

      - name: Cache dependencies
        uses: actions/cache@v4
        with:
          path: ~/.cache/uv
          key: ${{ runner.os }}-uv-${{ hashFiles('**/uv.lock') }}

      - name: Install dependencies
        run: uv sync --dev

      - name: Run Ruff linter
        run: uv run ruff check .

      - name: Check formatting
        run: uv run ruff format --check .

      - name: Run Pyright
        run: uv run pyright

      # UPDATED: Run tests with coverage
      - name: Run tests with coverage
        run: uv run pytest tests/ --cov=. --cov-report=term-missing --cov-fail-under=70

      - name: Upload coverage report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: coverage-report
          path: .coverage

Key addition:

--cov-fail-under=70

This means: CI fails if coverage drops below 70%.
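Instead of repeating the flag on every invocation, the threshold can also live in configuration. A sketch (assuming coverage.py's `[tool.coverage.report]` section in `pyproject.toml`, which pytest-cov honors):

```toml
# pyproject.toml - equivalent to passing --cov-fail-under=70 each run
[tool.coverage.report]
fail_under = 70

[tool.pytest.ini_options]
addopts = "--cov=. --cov-report=term-missing"
```

With this in place, a plain `uv run pytest tests/` applies the same coverage gate locally and in CI.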


6.2 What Happens with Coverage Enforcement

Scenario: Developer adds code without tests

# geometry.py - Developer adds new function
def calculate_curvature(x_road, y_road):
    """Calculate road curvature at each point."""
    # ... 10 lines of new code
    return curvature_array

# But developer doesn't write tests for it!

Push to PR:

$ git push origin feature/add-curvature

CI runs:

Run uv run pytest tests/ --cov=. --cov-report=term-missing --cov-fail-under=70

---------- coverage: platform linux, python 3.12.0 -----------
Name                    Stmts   Miss  Cover
-------------------------------------------
geometry.py                55     10    82%
main.py                    12      2    83%
road.py                    34      0   100%
visualization.py           28     28     0%
-------------------------------------------
TOTAL                     129     40    69%

FAIL Required test coverage of 70% not reached. Total coverage: 69.00%
Error: Process completed with exit code 1.

GitHub shows:

❌ Code Quality / quality — Failed

Coverage dropped to 69% (required: 70%)
New code in geometry.py is not tested.

Developer must:

  1. Write tests for calculate_curvature()
  2. Push again
  3. Coverage increases above 70%
  4. CI passes
  5. Then merge

Result: Cannot merge untested code!
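What might the missing tests look like? A self-contained sketch (both the `np.gradient`-based implementation and the straight-road expectation are assumptions here, since the real `calculate_curvature` is not shown):

```python
import numpy as np


def calculate_curvature(x_road, y_road):
    """Sketch: curvature k = y'' / (1 + y'^2)^(3/2) via finite differences."""
    dy = np.gradient(y_road, x_road)   # first derivative y'
    d2y = np.gradient(dy, x_road)      # second derivative y''
    return d2y / (1.0 + dy**2) ** 1.5


def test_straight_road_has_zero_curvature():
    """A straight line should have (near-)zero curvature everywhere."""
    x_road = np.array([0.0, 10.0, 20.0, 30.0])
    y_road = np.array([0.0, 2.0, 4.0, 6.0])   # constant slope 0.2
    curvature = calculate_curvature(x_road, y_road)
    assert curvature.shape == x_road.shape
    assert np.allclose(curvature, 0.0)
```

Once tests like this exist, the new lines in geometry.py are executed and coverage climbs back above the threshold.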


6.3 Setting Appropriate Coverage Thresholds

Common thresholds:

70% - Minimum acceptable (some code untested)
80% - Good (most code tested)
90% - Great (high confidence)
100% - Unrealistic for most projects (UI code, error handlers, etc.)

Recommendation for Road Profile Viewer:

--cov-fail-under=70  # Start here

# Later, increase as you add more tests:
--cov-fail-under=80

Strategy:

  1. Start with current coverage (e.g., 65%)
  2. Set threshold slightly higher (e.g., 70%)
  3. New code must maintain or improve coverage
  4. Gradually increase threshold over time

Don’t aim for 100%! Some code is hard to test (UI, error cases, etc.).

Preview (Chapter 03 (Testing Theory and Coverage)): The coverage tool reports “line coverage” (also called Statement Coverage or C0). In Chapter 03 (Testing Theory and Coverage), you’ll learn about “Branch Coverage” (C1), which is stronger - it requires testing both True and False outcomes of every decision. Branch coverage gives more confidence but requires more tests.
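To make that preview concrete, here is a small illustration (not from the project): a single test executes every line of this function, yet exercises only one of the two branch outcomes.

```python
def clamp_non_negative(x):
    if x < 0:      # the only decision point
        x = 0
    return x


# This one test yields 100% LINE coverage: the if, the assignment,
# and the return all execute. But the decision's False outcome
# (x >= 0, skipping the assignment) is never taken, so BRANCH
# coverage is only 50%.
def test_negative_is_clamped():
    assert clamp_non_negative(-5) == 0


# Branch coverage (pytest --cov-branch) demands this second test too:
def test_non_negative_unchanged():
    assert clamp_non_negative(3) == 3
```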


7. Part 5: Test-Driven Development (TDD) - A Discipline to Enforce Test Writing

7.1 The “Forgot to Test” Problem

Even with CI + Coverage, there’s still a gap:

1. Developer writes new function
2. Pushes to PR
3. CI fails (coverage too low)
4. Developer writes tests AFTER the fact
5. Push again

Problem: Tests are still an afterthought, just enforced by CI.

Better approach: Write tests FIRST, then code. Impossible to forget!


7.2 TDD: Tests First, Code Second

Idea: Before writing ANY code, write a test that defines what it should do.

The Red-Green-Refactor Cycle:

🔴 RED: Write a failing test
   "This is what I want the code to do"

🟢 GREEN: Write minimal code to make it pass
   "Make it work (don't make it perfect)"

🔵 REFACTOR: Clean up the code
   "Make it better while tests ensure it works"

Repeat for next feature

7.3 TDD Example: Building validate_road_data()

Requirement: Validate road data before using it.

Step 1: 🔴 RED - Write failing test

# tests/test_road_validation.py
import numpy as np
import pytest
from road import validate_road_data  # Function doesn't exist yet!


def test_validate_road_data_accepts_valid_data():
    """Valid data should pass without raising exception."""
    x_road = np.array([0, 10, 20, 30])
    y_road = np.array([0, 2, 4, 6])

    # Should not raise exception
    validate_road_data(x_road, y_road)

Run test:

$ uv run pytest tests/test_road_validation.py -v

E   ImportError: cannot import name 'validate_road_data' from 'road'

🔴 RED: Test fails (function doesn’t exist). Good!


Step 2: 🟢 GREEN - Write minimal code to pass

# road.py
def validate_road_data(x_road, y_road):
    """Validate road data."""
    pass  # Minimal implementation - just don't crash

Run test:

$ uv run pytest tests/test_road_validation.py -v

tests/test_road_validation.py::test_validate_road_data_accepts_valid_data PASSED

🟢 GREEN: Test passes!

“But it does nothing!” - That’s OK! We’ll add real validation when we have tests that require it.


Step 3: 🔴 RED - Add test for edge case

def test_validate_road_data_rejects_empty_arrays():
    """Empty arrays should raise ValueError."""
    x_road = np.array([])
    y_road = np.array([])

    with pytest.raises(ValueError, match="Arrays cannot be empty"):
        validate_road_data(x_road, y_road)

Run test:

$ uv run pytest tests/test_road_validation.py -v

tests/test_road_validation.py::test_validate_road_data_rejects_empty_arrays FAILED
E   Failed: DID NOT RAISE <class 'ValueError'>

🔴 RED: Test fails. Now we NEED to implement empty check.


Step 4: 🟢 GREEN - Implement empty check

# road.py
def validate_road_data(x_road, y_road):
    """Validate road data."""
    if len(x_road) == 0 or len(y_road) == 0:
        raise ValueError("Arrays cannot be empty")

Run tests:

$ uv run pytest tests/test_road_validation.py -v

tests/test_road_validation.py::test_validate_road_data_accepts_valid_data PASSED   [50%]
tests/test_road_validation.py::test_validate_road_data_rejects_empty_arrays PASSED [100%]

🟢 GREEN: Both tests pass!


Step 5: Continue the cycle

Add more tests (mismatched lengths, NaN, non-increasing x):

def test_validate_road_data_rejects_mismatched_lengths():
    x_road = np.array([0, 10, 20])
    y_road = np.array([0, 2])
    with pytest.raises(ValueError, match="same length"):
        validate_road_data(x_road, y_road)

def test_validate_road_data_rejects_nan():
    x_road = np.array([0, 10, np.nan])
    y_road = np.array([0, 2, 4])
    with pytest.raises(ValueError, match="NaN"):
        validate_road_data(x_road, y_road)

Implement each validation:

def validate_road_data(x_road, y_road):
    if len(x_road) == 0 or len(y_road) == 0:
        raise ValueError("Arrays cannot be empty")

    if len(x_road) != len(y_road):
        raise ValueError("Arrays must have the same length")

    if np.any(np.isnan(x_road)) or np.any(np.isnan(y_road)):
        raise ValueError("Arrays must not contain NaN")

    # ... more validations

Step 6: 🔵 REFACTOR - Clean up code

def validate_road_data(x_road, y_road):
    """Validate road profile data."""

    # Check empty
    if len(x_road) == 0 or len(y_road) == 0:
        raise ValueError("Arrays cannot be empty")

    # Check same length
    if len(x_road) != len(y_road):
        raise ValueError("Arrays must have the same length")

    # Check for invalid numeric values (refactored into loop)
    for arr, name in [(x_road, "x_road"), (y_road, "y_road")]:
        if np.any(np.isnan(arr)):
            raise ValueError(f"{name} must not contain NaN")
        if np.any(np.isinf(arr)):
            raise ValueError(f"{name} must not contain infinity")

Run tests:

$ uv run pytest tests/test_road_validation.py -v

============================== 5 passed in 0.08s ==============================

🔵 REFACTOR complete: Code is cleaner, tests still pass!


7.4 TDD Benefits

1. Can’t forget to write tests

Without TDD: Write code → (maybe write tests later)
With TDD: Can't write code without test first!

2. Better function design

Writing tests first forces you to think:
- What inputs should this accept?
- What should it return?
- What errors should it raise?

3. 100% coverage from the start

Every line of code was written to pass a test
→ Every line is covered!

4. Refactoring confidence

Tests ensure refactored code still works
Can clean up without fear of breaking things

7.5 When to Use TDD

TDD works well for:

TDD is hard for:

Pragmatic approach:

1. Prototype/explore without TDD
2. Once you know what you want, DELETE prototype
3. Rebuild with TDD using lessons learned

Or:

1. Build it quick
2. Write tests for current behavior (characterization tests)
3. Refactor with TDD going forward

TDD is a tool, not a religion. Use when it helps.
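A characterization test, sketched (`legacy_format_distance` is a hypothetical stand-in for existing code you did not write test-first):

```python
# Hypothetical legacy function: before refactoring, pin down what it
# does TODAY - even if "today" is not what a spec would say.
def legacy_format_distance(meters):
    return "%.1f m" % meters


# Characterization test: asserts on current observed behavior,
# giving you a safety net for the refactoring that follows.
def test_format_distance_current_behavior():
    assert legacy_format_distance(12.34) == "12.3 m"
    assert legacy_format_distance(0) == "0.0 m"
```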

Connection to Chapter 03 (Testing Theory and Coverage): TDD naturally tends to produce high coverage because every line of code is written to make a test pass. In Chapter 03 (Testing Theory and Coverage), you’ll learn the formal relationship: TDD produces test suites that are “adequate” for multiple coverage criteria simultaneously.


8. Part 6: Hands-On Exercise - Complete Workflow

8.1 Exercise: Add calculate_viewing_distance() with CI + Coverage + TDD

Goal: Add new feature using the complete professional workflow.


Step 1: Create feature branch

$ git checkout -b feature/add-viewing-distance

Step 2: 🔴 RED - Write failing test (TDD)

Create tests/test_viewing_distance.py:

import numpy as np
import pytest
from geometry import calculate_viewing_distance


def test_calculate_viewing_distance_returns_positive_distance():
    """Should return positive distance for normal downward angle."""
    # Arrange
    x_road = np.array([0, 10, 20, 30])
    y_road = np.array([0, 2, 4, 6])
    angle = -10.0
    camera_y = 10.0  # Camera above road

    # Act
    distance = calculate_viewing_distance(x_road, y_road, angle, camera_y=camera_y)

    # Assert
    assert distance is not None, "Should find intersection"
    assert distance > 0, "Distance should be positive"


def test_calculate_viewing_distance_returns_none_when_ray_misses():
    """Should return None if ray doesn't intersect road."""
    x_road = np.array([0, 10, 20, 30])
    y_road = np.array([0, 2, 4, 6])
    angle = 45.0  # Upward angle, misses road

    distance = calculate_viewing_distance(x_road, y_road, angle)
    assert distance is None

Run tests:

$ uv run pytest tests/test_viewing_distance.py -v

E   ImportError: cannot import name 'calculate_viewing_distance'

🔴 RED: Tests fail. Good!

Commit the failing test:

$ git add tests/test_viewing_distance.py
$ git commit -m "Add failing tests for calculate_viewing_distance (RED phase)"

Step 3: 🟢 GREEN - Implement function

Add to geometry.py:

def calculate_viewing_distance(x_road, y_road, angle_degrees, camera_x=0, camera_y=1.5):
    """
    Calculate maximum viewing distance along road.

    Returns distance from camera to farthest visible point on road.
    If no intersection found, returns None.
    """
    # Use existing find_intersection function
    x_int, y_int, _ = find_intersection(x_road, y_road, angle_degrees, camera_x, camera_y)

    if x_int is None:
        return None

    # Calculate Euclidean distance
    distance = np.sqrt((x_int - camera_x)**2 + (y_int - camera_y)**2)
    return distance

Run tests:

$ uv run pytest tests/test_viewing_distance.py -v

tests/test_viewing_distance.py::test_calculate_viewing_distance_returns_positive_distance PASSED [50%]
tests/test_viewing_distance.py::test_calculate_viewing_distance_returns_none_when_ray_misses PASSED [100%]

============================== 2 passed in 0.05s ==============================

🟢 GREEN: Tests pass!

Check coverage:

$ uv run pytest tests/test_viewing_distance.py --cov=geometry --cov-report=term-missing

geometry.py       95%   (missing: line 112 - some edge case)

Commit implementation:

$ git add geometry.py
$ git commit -m "Implement calculate_viewing_distance (GREEN phase)"

Step 4: Add test for missing coverage

Coverage showed line 112 untested. Add test:

def test_calculate_viewing_distance_handles_empty_road():
    """Should handle empty road gracefully."""
    x_road = np.array([])
    y_road = np.array([])

    # Should return None (no road to intersect)
    distance = calculate_viewing_distance(x_road, y_road, -10.0)
    assert distance is None

Run with coverage:

$ uv run pytest tests/test_viewing_distance.py --cov=geometry --cov-report=term-missing

geometry.py       100%

Commit:

$ git add tests/test_viewing_distance.py
$ git commit -m "Add test for empty road edge case - 100% coverage"

Step 5: Push and create PR

$ git push -u origin feature/add-viewing-distance
$ gh pr create --title "Add calculate_viewing_distance function" --body "..."

GitHub Actions runs automatically:

✅ Ruff check: PASSED
✅ Ruff format: PASSED
✅ Pyright: PASSED
✅ Tests: PASSED (3/3)
✅ Coverage: PASSED (74% > 70%)

All checks have passed

Merge button enabled!


9. Part 7: The Complete Professional Workflow

9.1 Summary: CI + Coverage + TDD Together

┌─────────────────────────────────────────────────┐
│  Developer writes code                          │
└─────────────────┬───────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────┐
│  TDD: Write tests FIRST                         │
│  - Red: Failing test                            │
│  - Green: Minimal implementation                │
│  - Refactor: Clean up                           │
└─────────────────┬───────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────┐
│  Push to PR                                     │
└─────────────────┬───────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────────────┐
│  CI runs automatically                          │
│  ✅ Code quality (ruff)                         │
│  ✅ Tests (pytest)                              │
│  ✅ Coverage (pytest-cov)                       │
└─────────────────┬───────────────────────────────┘
                  │
            ┌─────┴─────┐
            │           │
            ▼           ▼
      ┌─────────┐ ┌─────────┐
      │  FAIL   │ │  PASS   │
      └────┬────┘ └────┬────┘
           │           │
           │           ▼
           │     ┌─────────────┐
           │     │  Can merge  │
           │     └─────────────┘
           │
           ▼
      ┌────────────────────┐
      │  Fix required:     │
      │  - Failing tests   │
      │  - Low coverage    │
      │  - Code quality    │
      └────────────────────┘

9.2 The Three Layers of Protection

Layer 1: TDD (Discipline)

Layer 2: CI (Enforcement)

Layer 3: Coverage (Measurement)

Together: Nearly impossible to ship untested code!


10. Summary: What You’ve Accomplished

10.1 Before This Lecture

10.2 After This Lecture


11. Key Takeaways

11.1 The Problem

Humans forget to run tests. We need automation.

11.2 The Solutions

1. CI Enforcement

2. Coverage Measurement

3. TDD Discipline

11.3 The Workflow

# 1. TDD: Write test first
🔴 RED → 🟢 GREEN → 🔵 REFACTOR

# 2. Push to PR
git push origin feature-branch

# 3. CI validates
✅ Tests pass
✅ Coverage ≥ 70%

# 4. Merge with confidence

11.4 Remember


12. Further Reading

Test Coverage:

Test-Driven Development:

CI/CD Best Practices:

Next Steps:

Coming Next: Chapter 03 (Testing Theory and Coverage) - Testing Theory

In Chapter 03 (Testing Theory and Coverage), we’ll dive deep into the theoretical foundations:

This theory will help you understand why the practical techniques you learned today actually work.


Congratulations! You now have a complete, professional testing workflow: write tests first (TDD), run them automatically (CI), and measure objectively (coverage). Your code is safer, better tested, and production-ready.

© 2026 Dominik Mueller