04 Requirements Engineering: From Tests to Specifications
December 2025 (7327 Words, 41 Minutes)
1. Introduction: Your Tests Are Green, But What Are You Actually Building?
Where you are so far:
From Chapter 03 (Testing Theory and Coverage), you discovered something uncomfortable: when you tried to improve test coverage for find_intersection(), you kept running into questions like:
- “What should happen when the ray is vertical?”
- “Is returning `None` for segments behind the camera correct behavior, or a bug?”
- “Who decided that parallel lines should use `t=0`?”
You realized that coverage gaps reveal requirements gaps. You can’t write good tests if you don’t know what the software is supposed to do.
The uncomfortable truth:
Throughout this course, you’ve been writing tests against implicit requirements - docstrings, type hints, code comments, and your own assumptions. But real software projects need explicit requirements that everyone agrees on.
Today’s question: How do we systematically capture, document, and validate what we’re building - before we write the tests?
2. Learning Objectives
By the end of this lecture, you will:
- Understand what requirements are and why they matter for software quality
- Distinguish functional vs non-functional requirements with concrete examples
- Identify stakeholders and understand their different perspectives
- Apply quality criteria to requirements (testable, measurable, unambiguous)
- Use modern tools for requirements: User Stories, GitHub Issues, Acceptance Criteria
- Link requirements to tests for traceability
Continued in Part 2:
- Requirements in the development workflow
- Eliciting requirements through stakeholder interviews
- Bug vs Change Request distinction
- POC-driven requirements discovery
- Preview of Agile methodology
What you WON’T learn yet:
- Formal specification languages (Z, VDM)
- Model-based requirements (SysML, UML use cases in depth)
- Regulatory requirements (safety-critical systems)
- Full Agile methodology (Chapter 04)
3. Requirements at the Code Level: Where Chapter 03 Left Off
Before we zoom out to business requirements, let’s start where we left off - at the code level.
3.1 Implicit Requirements in Code
In Chapter 03 (Testing Theory and Coverage), you discovered these implicit requirements hidden in find_intersection():
```python
import numpy as np
from numpy.typing import NDArray

def find_intersection(
    x_road: NDArray[np.float64],   # Implicit: expects numpy array of floats
    y_road: NDArray[np.float64],   # Implicit: must be same length as x_road
    angle_degrees: float,          # Implicit: probably 0-360, but not enforced
    camera_x: float = 0,
    camera_y: float = 1.5,
) -> tuple[float | None, float | None, float | None]:
    """
    Find the intersection point between the camera ray and the road profile.

    Returns:
        tuple of (x, y, distance) or (None, None, None) if no intersection
    """
    ...
```
What’s missing?
- What happens if `x_road` and `y_road` have different lengths?
- What’s the valid range for `angle_degrees`? (0-360? -180 to 180? Any float?)
- What if `x_road` is empty?
- What if `x_road` values are not sorted?
3.2 Design by Contract: Making Requirements Explicit
Design by Contract (DbC) is a programming methodology where you explicitly state:
- Preconditions: What must be true BEFORE calling the function
- Postconditions: What will be true AFTER the function returns
- Invariants: What remains true throughout execution
Example: Explicit requirements for find_intersection() (the comment approach)
```python
def find_intersection(
    x_road: NDArray[np.float64],
    y_road: NDArray[np.float64],
    angle_degrees: float,
    camera_x: float = 0,
    camera_y: float = 1.5,
) -> tuple[float | None, float | None, float | None]:
    """
    Find the intersection point between the camera ray and the road profile.

    Preconditions (Requirements on input):
    - len(x_road) == len(y_road) >= 2
    - x_road values are monotonically increasing (sorted left to right)
    - angle_degrees is in range [0, 360) degrees
    - camera_y > max(y_road) (camera is above the road)

    Postconditions (Guarantees on output):
    - If intersection exists: returns (x, y, dist) where:
        - x is in range [min(x_road), max(x_road)]
        - y is the interpolated road height at x
        - dist > 0 is the Euclidean distance from camera to intersection
    - If no intersection: returns (None, None, None)

    Special cases:
    - Vertical ray (angle = 90 or 270): returns (None, None, None)
    - All road segments behind camera: returns (None, None, None)
    - Ray parallel to road segment: uses segment start point
    """
    ...
```
This is requirements engineering at the function level.
Each precondition and postcondition is a testable requirement. Each special case is an equivalence class you learned in Chapter 03 (Testing Fundamentals).
3.3 The Problem with Comments: They Lie
Important Disclaimer: The docstring approach shown above is not how we want you to do Design by Contract in modern code. We show it because:
- You will encounter this pattern in real-world codebases, especially older ones
- Many textbooks and tutorials still teach this approach
- Understanding its problems helps you appreciate better alternatives
Do not adopt this pattern for new code. There are better ways, as we’ll see next.
The docstring approach above looks professional, but it has serious problems:
Problem 1: Comments make code harder to read
Every function now starts with 20+ lines of documentation before you see any actual code. In a large codebase, this becomes overwhelming - you’re swimming in comments just to understand what the code does.
Problem 2: Comments get outdated
Nothing forces the docstring to stay synchronized with the code. When you refactor or add features, do you always update the docstring? Be honest. Most developers don’t - and now your “contract” is a lie.
Problem 3: Comments don’t prevent bugs
The docstring says len(x_road) == len(y_road), but what happens if someone passes arrays of different lengths? The code will crash somewhere with a confusing error, not at the point where the contract was violated.
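To see how quietly this can go wrong, consider a hypothetical inner loop built on `zip()` (illustrative only - the real implementation may differ): `zip()` truncates to the shorter input, so the contract violation never surfaces at the call site.

```python
import numpy as np

x_road = np.array([0.0, 10.0, 20.0])
y_road = np.array([0.0, 5.0])  # violates the contract: one element short

# Hypothetical inner loop: zip() silently truncates to the shorter array,
# so the last road point just disappears instead of triggering an error.
points = list(zip(x_road, y_road))
print(len(points))  # 2 - the third point was dropped without any warning
```

The bug then shows up much later, as a mysteriously wrong intersection rather than a clear error where the bad arrays were passed in.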
3.4 Better Approach: Let the Type System Enforce Requirements
Instead of documenting requirements in comments, we can enforce them through better data representation.
Example: Enforcing x_road and y_road have the same length
The current signature allows this bug:
```python
import numpy as np

# BUG: Arrays have different lengths - but the type system allows it!
find_intersection(
    x_road=np.array([0, 10, 20]),
    y_road=np.array([0, 5]),  # Oops, missing one element
)
```
Better design: Use a single array of (x, y) tuples
```python
import numpy as np
from numpy.typing import NDArray

# Road profile is an Nx2 array where each row is (x, y).
# x and y are inherently linked - impossible to have mismatched lengths!
RoadProfile = NDArray[np.float64]  # Shape: (N, 2)

def find_intersection(
    road: RoadProfile,  # Shape (N, 2) - each row is (x, y)
    angle_degrees: float,
    camera_x: float = 0,
    camera_y: float = 1.5,
) -> tuple[float | None, float | None, float | None]:
    # Access points as road[i, 0] for x and road[i, 1] for y
    # Or unpack: x, y = road[i]
    ...

# Usage:
road = np.array([
    [0.0, 0.0],    # Point 1: x=0, y=0
    [10.0, 5.0],   # Point 2: x=10, y=5
    [20.0, 10.0],  # Point 3: x=20, y=10
])
```
Now the requirement “x and y must have the same length” is impossible to violate - each row contains both coordinates together.
Example: Enforcing valid angle ranges
The current signature allows this bug:
```python
# BUG: What does angle=500 even mean? Is it 500° or 140° (500 mod 360)?
find_intersection(x_road, y_road, angle_degrees=500)
```
Better design: Create an Angle class
```python
from dataclasses import dataclass
import math

@dataclass
class Angle:
    """An angle in degrees, normalized to [0, 360)."""

    _degrees: float

    def __post_init__(self) -> None:
        # Normalize to [0, 360) on construction
        self._degrees = self._degrees % 360

    @property
    def degrees(self) -> float:
        return self._degrees

    @property
    def radians(self) -> float:
        return math.radians(self._degrees)

    def is_vertical(self) -> bool:
        """Returns True if angle is approximately 90° or 270°."""
        return abs(math.cos(self.radians)) < 1e-10

# Now the function signature enforces valid angles:
def find_intersection(
    road: RoadProfile,
    angle: Angle,  # Always normalized, has is_vertical() method
    camera_x: float = 0,
    camera_y: float = 1.5,
) -> tuple[float | None, float | None, float | None]:
    if angle.is_vertical():
        return None, None, None
    ...
```
What changed?
- `Angle(500)` automatically becomes `Angle(140)` - the class enforces normalization
- The `is_vertical()` method encapsulates the magic number `1e-10`
- The requirement is now part of the type, not a comment that can be ignored
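A quick sketch of what the normalization buys you (the `Angle` class is repeated here in minimal form so the snippet runs on its own):

```python
from dataclasses import dataclass
import math

@dataclass
class Angle:
    """An angle in degrees, normalized to [0, 360)."""
    _degrees: float

    def __post_init__(self) -> None:
        self._degrees = self._degrees % 360  # normalize on construction

    @property
    def degrees(self) -> float:
        return self._degrees

    def is_vertical(self) -> bool:
        return abs(math.cos(math.radians(self._degrees))) < 1e-10

print(Angle(500.0).degrees)        # 140.0 - callers can no longer pass a "weird" angle
print(Angle(-90.0).degrees)        # 270.0 - negative inputs normalize too
print(Angle(270.0).is_vertical())  # True
```

Every `Angle` that exists is valid by construction, so `find_intersection()` no longer needs to document or check the range itself.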
3.5 Runtime Checks with Asserts (Rarely the Right Choice)
Some developers use assert statements to check requirements at runtime. You’ll see this in some codebases, so let’s understand it - but this is rarely the right approach for production code:
```python
def find_intersection(
    x_road: NDArray[np.float64],
    y_road: NDArray[np.float64],
    angle_degrees: float,
    ...
) -> tuple[float | None, float | None, float | None]:
    # Runtime requirement checks
    assert len(x_road) == len(y_road), "x_road and y_road must have same length"
    assert len(x_road) >= 2, "Road must have at least 2 points"
    assert 0 <= angle_degrees < 360, f"Angle must be in [0, 360), got {angle_degrees}"
    # ... actual implementation
```
Pros:
- Fails fast with clear error message
- Documents requirements in executable form
- Catches bugs during development
Cons (and these are serious):
- Performance overhead: Checks run every time the function is called
- Can be disabled: `python -O` removes all asserts - your production code might run without any of these checks!
- Crashes in production: An `AssertionError` in production is often worse than graceful handling - your application just dies
- Not a substitute for proper error handling: Asserts are meant for “this should never happen” cases, not input validation
- False sense of security: Developers think “I added asserts, I’m safe” - but they might be disabled

Bottom line: Asserts have a very narrow use case - checking internal invariants during development and testing. They are not a requirements enforcement mechanism. Don’t rely on them for production code. If you need to validate input, use proper validation with meaningful error handling (raise `ValueError` with a helpful message, return an error result, etc.).
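What that proper validation can look like - a sketch (the function name and messages are our own, not from the project). Unlike `assert`, a `raise` survives `python -O`:

```python
from collections.abc import Sequence

def validate_road_profile(x_road: Sequence[float], y_road: Sequence[float]) -> None:
    """Raise ValueError on invalid input - cannot be stripped by python -O."""
    if len(x_road) != len(y_road):
        raise ValueError(
            f"x_road and y_road must have the same length, "
            f"got {len(x_road)} and {len(y_road)}"
        )
    if len(x_road) < 2:
        raise ValueError(f"Road must have at least 2 points, got {len(x_road)}")

# The caller gets a precise error at the call site, not a crash deep inside:
validate_road_profile([0.0, 10.0], [0.0, 5.0])  # OK, returns None
# validate_road_profile([0.0, 10.0], [0.0])     # raises ValueError
```

Since it only uses `len()`, the same helper works for lists and numpy arrays alike.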
3.6 Tests as Executable Requirements
Here’s a key insight that connects requirements to testing:
Tests can serve as executable documentation of requirements.
Instead of:
- Comments (can be outdated)
- Asserts (can be disabled, crash in production)
- Better types (good but limited to structural constraints)
We can write tests that explicitly verify each requirement:
```python
import numpy as np
import pytest
from road_profile_viewer.geometry import find_intersection, Angle

class TestFindIntersectionRequirements:
    """Tests documenting the requirements for find_intersection()."""

    def test_req_vertical_ray_returns_none(self):
        """REQ-GEOM-001: Vertical rays shall return None."""
        road = np.array([[0, 0], [10, 5]])
        result = find_intersection(road, Angle(90), camera_x=5, camera_y=10)
        assert result == (None, None, None)

    def test_req_intersection_distance_positive(self):
        """REQ-GEOM-002: When intersection exists, distance shall be > 0."""
        road = np.array([[0, 0], [10, 5], [20, 10]])
        x, y, dist = find_intersection(road, Angle(45), camera_x=0, camera_y=15)
        assert dist is not None
        assert dist > 0

    def test_req_intersection_on_road(self):
        """REQ-GEOM-003: Intersection point shall lie on the road profile."""
        road = np.array([[0, 0], [10, 5], [20, 10]])
        x, y, dist = find_intersection(road, Angle(45), camera_x=0, camera_y=15)
        assert x is not None
        assert 0 <= x <= 20  # Within road x-range
```
Notice:
- Each test has a requirement ID (REQ-GEOM-001, etc.) - we’ll learn in Section 6 that good requirements should have unique identifiers, and here you see why: it lets us link tests directly to requirements
- The test name describes the requirement
- The test is the executable documentation
- Tests can’t get “out of sync” - if the code changes and breaks a requirement, the test fails
We’ll explore this further in Section 8: Linking Requirements to Tests, where we’ll see how to:
- Define a systematic naming scheme for requirements
- Use pytest markers to tag tests with requirement IDs
- Generate traceability reports showing which requirements are tested
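As a tiny preview, tagging a test with a requirement ID via a custom pytest marker could look like this (the `requirement` marker is our own convention, not a built-in; it would need to be registered in `pytest.ini` to avoid warnings):

```python
import pytest

@pytest.mark.requirement("REQ-GEOM-001")
def test_vertical_ray_returns_none():
    """Vertical rays shall return (None, None, None)."""
    ...  # body as in the test class above
```

With markers in place, `pytest -m requirement` style selection and report generation become possible - more on this in Section 8.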
But first, let’s zoom out and understand what requirements are at the business level.
3.7 The Bridge: From Code Requirements to Business Requirements
Notice what happened: We started with code coverage gaps and ended up with better data types and tests that verify requirements.
The same process happens at every level:
Code Coverage Gap → Function Requirement → Module Requirement → System Requirement → Business Need
Now let’s zoom out and look at requirements from the top down.
4. What Are Requirements?
4.1 Definition
Requirement: A condition or capability needed by a user to solve a problem or achieve an objective. — IEEE Standard 610.12
More practically:
A requirement describes what the system should do (or how well it should do it), without specifying how to implement it.
4.2 Functional vs Non-Functional Requirements
| Type | Definition | Example (Road Profile Viewer) |
|---|---|---|
| Functional | What the system does - specific behaviors and functions | "The system shall display the intersection point when the user clicks on the road profile" |
| Non-Functional | How well the system does it - quality attributes | "The intersection calculation shall complete in less than 100ms" |
Non-functional requirements categories:
- Performance: Speed, throughput, resource usage
- Security: Authentication, authorization, data protection
- Usability: Ease of learning, efficiency of use
- Reliability: Uptime, fault tolerance, recovery
- Scalability: Handling growth in users/data
- Maintainability: Ease of modification and debugging
4.3 Example: Road Profile Viewer Requirements
Functional Requirements:
- FR-1: The system shall load road profile data from CSV files
- FR-2: The system shall display the road profile as a 2D line chart
- FR-3: The system shall calculate and display the camera ray based on user-specified angle
- FR-4: The system shall calculate and highlight the intersection point between ray and road
- FR-5: The system shall display the distance from camera to intersection point
Non-Functional Requirements:
- NFR-1: The system shall load a 10,000-point road profile in under 2 seconds
- NFR-2: The UI shall update within 100ms after user input
- NFR-3: The system shall run on Python 3.10 or later
- NFR-4: The codebase shall maintain >80% test coverage
4.4 Requirement IDs: Naming Conventions and What They Mean
You might have noticed we’re using IDs like FR-1 and NFR-1 here, but earlier in Section 3.6 we used REQ-GEOM-001. Let’s clarify this.
Why do requirements need IDs?
- Traceability: Link requirements to tests, code, and documentation
- Communication: “Let’s discuss FR-4” is clearer than “that intersection thing”
- Change tracking: When FR-4 changes, you know which tests to update
- Coverage analysis: Which requirements have tests? Which don’t?
Common ID Naming Schemes:
| Scheme | Example | When to Use |
|---|---|---|
| Type-based | FR-1, NFR-2 | Simple projects, distinguishes functional vs non-functional |
| Domain-based | REQ-GEOM-001, REQ-UI-005 | Larger projects with multiple modules/components |
| Hierarchical | REQ-1.2.3 | Requirements that decompose into sub-requirements |
| Feature-based | LOAD-001, CALC-002 | Organizing by user-facing features |
All of these schemes refer to the same concept - they’re just different conventions. FR-4 and REQ-GEOM-001 both identify a specific requirement that can be tested.
Does the number imply priority or order?
No! The number is just an identifier, not a priority ranking.
- `FR-1` is not necessarily more important than `FR-5`
- Numbers are typically assigned in the order requirements were written/discovered
- Priority is tracked separately (we’ll see this in Section 7 with User Stories)
If you need to indicate priority, add it explicitly:
```
FR-4 [Priority: High]: The system shall calculate the intersection point...
FR-5 [Priority: Low]: The system shall export results to PDF...
```
Or use a separate priority field in your tracking system.
4.5 Where to Store Requirements: Formats and Tools
Requirements need to live somewhere. Here are the options, from informal to formal:
Informal (avoid for anything serious):
- Emails, chat messages, sticky notes
- Problem: Gets lost, no version control, no traceability
Semi-formal (good for most projects):
- Markdown files in your repository (e.g., `docs/requirements.md`)
  - Version controlled with code
  - Easy to read and edit
  - Can link to tests and code
- GitHub/GitLab Issues
  - Built-in to your development workflow
  - Links to PRs and commits
  - Labels for categorization (FR, NFR, priority)
  - Templates for consistent format
- Project management tools (Jira, Linear, Notion, Trello)
  - Rich tracking features
  - Integration with development tools
  - Custom fields for priority, status, etc.
Formal (regulated industries, large projects):
- IEEE 830 / ISO/IEC/IEEE 29148 format documents
  - Standardized structure for requirements specification
  - IEEE 830 was the classic standard (now superseded by 29148)
  - Required for some contracts/certifications
  - Heavy overhead for small projects
- Requirements Management Tools (IBM DOORS, Jama, Polarion)
  - Full traceability matrices
  - Change impact analysis
  - Audit trails
  - Expensive, complex
What should you use?
For this course and most software projects:
- Start with GitHub Issues - one issue per requirement or user story
- Use labels to categorize: `requirement`, `FR`, `NFR`, `priority:high`, etc.
- Link issues to PRs - when you implement a requirement, reference the issue
- Link issues to tests - mention the requirement ID in test docstrings
Example GitHub Issue for FR-4:
```
Title: FR-4: Calculate and highlight intersection point

## Description
The system shall calculate and highlight the intersection point
between the camera ray and the road profile.

## Acceptance Criteria
- [ ] Intersection point is calculated correctly
- [ ] Point is visually highlighted on the chart
- [ ] Coordinates are displayed to the user

## Priority
High - Core functionality

## Related
- Implements: REQ-GEOM-001, REQ-GEOM-002, REQ-GEOM-003
- Tests: test_geometry.py::TestFindIntersectionRequirements
```
We’ll explore this workflow in more detail in Section 8: Linking Requirements to Tests.
5. Stakeholders: Who Cares About Your Software?
5.1 What is a Stakeholder?
Stakeholder: Any person or organization that has an interest in or is affected by the system.
Different stakeholders have different - sometimes conflicting - requirements.
5.2 Types of Stakeholders
```mermaid
graph TD
    System[Software System] --- Users
    System --- Customers
    System --- Dev[Developers]
    System --- Ops[Operations]
    System --- Business
    System --- Reg[Regulators]
    Users --> EndUser[End Users]
    Users --> Admin[Administrators]
    Customers --> Buyer[Purchaser]
    Customers --> Sponsor[Project Sponsor]
    Dev --> Architect[Architect]
    Dev --> Programmer[Programmer]
    Dev --> Tester[QA/Tester]
    Business --> PM[Product Manager]
    Business --> Marketing
    Business --> Support
```
Important: This diagram is not exhaustive. The stakeholder roles for your project depend entirely on the domain, scope, and regulatory environment.
Stakeholder complexity varies dramatically by project type:
| Project Type | Typical Stakeholders | Why? |
|---|---|---|
| Medical Device Software | End users, Patients, Doctors, Hospital IT, Clinical researchers, FDA/regulatory bodies, Quality assurance, Risk management, Cybersecurity officers, Data privacy officers, Insurance companies, Legal/compliance | Lives at stake → heavy regulation (FDA, IEC 62304), audit requirements, liability concerns |
| Short Video Game (Indie) | Players, Developer(s), Publisher (if any), Platform (Steam, etc.) | Entertainment focus, minimal regulation, smaller scope |
| Banking Application | Customers, Bank employees, Regulators (BaFin, ECB), Security officers, Fraud detection, Compliance officers, Auditors | Financial regulation, security requirements, audit trails mandatory |
Your first task in any project: Identify who has a stake in your system. Missing a stakeholder means missing their requirements - which you’ll discover painfully late.
5.3 Stakeholder Matrix: Road Profile Viewer
| Stakeholder | Role | Primary Concerns | Example Requirement |
|---|---|---|---|
| Road Engineer | End User | Accuracy, ease of use | "Intersection accuracy within 0.1m" |
| Lab Manager | Customer | Cost, training time | "Trainable in under 1 hour" |
| IT Admin | Operations | Installation, maintenance | "Single-file deployment" |
| Developer | Internal | Code quality, testability | "Modular architecture for testing" |
| Safety Officer | Regulator | Reliability, audit trail | "All calculations logged" |
Key insight: The same feature looks different to different stakeholders. The “intersection calculation” is:
- To the engineer: A measurement tool
- To the manager: A cost/time saving
- To the developer: An algorithm to implement
- To QA: A behavior to verify
6. What Makes a Good Requirement?
Not all requirements are created equal. A bad requirement leads to:
- Misunderstandings between stakeholders
- Features that don’t meet user needs
- Tests that don’t actually validate correctness
- Disputes about whether the software is “done”
6.1 The INVEST Criteria (for User Stories)
| Letter | Criterion | Description | Bad Example | Good Example |
|---|---|---|---|---|
| I | Independent | Can be implemented without depending on other stories | "After login is done, show dashboard" | "Show dashboard for authenticated users" |
| N | Negotiable | Details can be discussed, not set in stone | "Use blue #0000FF for buttons" | "Buttons should be visually prominent" |
| V | Valuable | Delivers value to stakeholder | "Refactor database layer" | "Load profiles 50% faster" |
| E | Estimable | Team can estimate effort | "Make the system better" | "Add export to PDF feature" |
| S | Small | Can be completed in one sprint | "Implement all reporting" | "Generate single-profile report" |
| T | Testable | Clear criteria for "done" | "System should be fast" | "Response time < 200ms" |
6.2 The Testability Connection
Remember Chapter 03 (Testing Fundamentals)? The “Testable” criterion is where requirements engineering meets testing:
Untestable requirement:
“The system should be user-friendly”
How do you write a test for this? What does the test assert? “User-friendly” means different things to different people.
Testable requirement:
“The system shall load a road profile and calculate an intersection in under 2 seconds”
Now you can write a test:
```python
import time

def test_load_and_calculate_performance():
    """
    Performance test: Core workflow completes within time limit.

    Requirement: NFR-PERF-001
    """
    start_time = time.time()
    app.load_profile("sample_road.csv")
    app.set_camera_angle(45.0)
    result = app.calculate_intersection()
    elapsed = time.time() - start_time
    assert elapsed < 2.0, f"Workflow took {elapsed:.2f}s, exceeds 2 second limit"
    assert result is not None, "Calculation should return a result"
```
The key insight: if you can’t write a test for it, it’s not a good requirement. Vague requirements like “user-friendly” need to be refined into specific, measurable criteria before they’re useful.
6.3 Requirements and the Testing Pyramid
6.3.1 What Kind of Test Did We Just Write?
Look back at the test we wrote above:
```python
def test_load_and_calculate_performance():
    app.load_profile("sample_road.csv")
    app.set_camera_angle(45.0)
    result = app.calculate_intersection()
    assert elapsed < 2.0
```
This is NOT a unit test! It tests multiple components working together:
- File loading (`load_profile`)
- State management (`set_camera_angle`)
- Calculation (`calculate_intersection`)
- Timing across all operations
Remember the Testing Pyramid from Chapter 03 (Testing Fundamentals)?
```
         / \
        /E2E\        ← Few, slow, expensive
       /-----\
      /Module \      ← Some, moderate speed ◄── Our test is HERE
     /---------\
    /   Unit    \    ← Many, fast, cheap
   /-------------\
```
Our performance test is a module test (or integration test) - it tests components working together. This raises a question: if stakeholder requirements are tested at the module/E2E level, what are all those unit tests for?
6.3.2 Why Higher-Level Tests for Stakeholder Requirements?
The stakeholder requirement was:
“The system shall load a road profile and calculate an intersection in under 2 seconds”
This is fundamentally a system-level requirement - it spans multiple components. You cannot test this with a unit test of find_intersection() alone because:
- It includes file I/O time
- It includes the full workflow
- The stakeholder doesn’t care about `find_intersection()` - they care about the whole experience
This is why the testing pyramid exists:
- Stakeholder requirements → Tested at module/E2E level
- But we need many more unit tests… for what?
6.3.3 The Implementation Gap: Where Derived Requirements Come From
When implementing FR-CALC-001: "Calculate intersection and display result", developers make decisions:
- Data representation: Use an `NDArray` of shape (N, 2) or separate `x_road`, `y_road`?
- Algorithm choice: Ray-casting? Interpolation?
- Edge case handling: What about vertical rays? Camera below road?
- API design: Use an `Angle` class or raw floats?
The customer doesn’t care which approach you choose - both can satisfy FR-CALC-001.
But here’s the catch: These choices often depend implicitly on requirements - especially non-functional ones. Consider:
- Performance (NFR-PERF-001: “< 100ms response time”): Ray-casting with early exit is faster than checking all segments. This NFR influences algorithm choice.
- Memory (NFR-MEM-001: “< 50MB memory usage”): Storing the road as an `NDArray` of shape (N, 2) is more compact than separate arrays with metadata objects.
- Maintainability (NFR-MAINT-001: “Cyclomatic complexity < 10 per function”): Using an `Angle` class with a dedicated `is_vertical()` method keeps complexity low, even if slightly slower.
The customer’s explicit requirements (“calculate intersection”) don’t dictate these choices, but their implicit requirements (fast, memory-efficient, maintainable) do.
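For instance, the performance-driven “early exit” choice can be sketched like this (the segment representation and the `intersects` predicate are placeholders, not the project’s real API):

```python
def first_intersection(segments, intersects):
    """Scan segments left to right and stop at the first hit.

    Early exit means we examine ~n/2 segments on average instead of all n -
    a choice driven by the performance NFR, not the functional requirement.
    """
    for segment in segments:
        hit = intersects(segment)
        if hit is not None:
            return hit  # early exit: remaining segments are never examined
    return None

# Toy usage: "segments" are just numbers, a hit is any value >= 3
print(first_intersection([1, 2, 3, 4], lambda s: s if s >= 3 else None))  # 3
```

Either scanning strategy satisfies the functional requirement; only the NFR distinguishes them.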
6.3.3.1 Design Decision Documents (DDDs)
When you make significant implementation choices, you should document them. A Design Decision Document (or Architectural Decision Record, ADR) captures:
- Context: What problem are we solving?
- Decision: What did we choose and why?
- Consequences: What are the trade-offs?
- Alternatives Considered: What else did we evaluate?
Why bother documenting decisions?
- Future you will forget why you chose ray-casting over interpolation
- New team members need context, not just code
- Requirements traceability: Links decisions back to the NFRs that drove them
- Change impact: When NFR-PERF-001 changes from “< 100ms” to “< 10ms”, you know which decisions to revisit
Where to store DDDs:
Option 1: GitHub Issues (recommended for discussion-heavy decisions)
- Built-in comments for team discussion
- Link to PRs that implement the decision
- Labels for categorization (`decision`, `architecture`, `algorithm`)
- Searchable and integrated with your workflow
Example GitHub Issue:
```
Title: DDD-001: Ray-casting algorithm for intersection calculation

## Context
FR-CALC-001 requires calculating intersection between camera ray and road profile.
NFR-PERF-001 requires < 100ms response time.

## Decision
Use ray-casting with early exit when intersection found.

## Rationale
- Benchmarked at 15ms for 10,000-point profiles (meets NFR-PERF-001)
- Early exit optimization reduces average case to O(n/2)
- Well-documented algorithm, easy to test

## Alternatives Considered
- **Interpolation search**: Faster for sorted data but complex edge cases
- **Binary search on segments**: Requires preprocessing, adds complexity

## Consequences
- Vertical rays (90°, 270°) require special handling → REQ-GEOM-001
- Performance degrades linearly with profile size

## Related
- Implements: FR-CALC-001, constrained by NFR-PERF-001
- Derived requirements: REQ-GEOM-001, REQ-GEOM-002
```
Option 2: Markdown files in the repository (recommended for stable, approved decisions)
- Version controlled alongside code
- Review and discussion happens on the PR
- Common location: `docs/decisions/` or `docs/adr/`
- Template: MADR (Markdown Architectural Decision Records)
Practical workflow:
- Open a GitHub Issue to discuss the decision with the team
- Once consensus is reached, create a markdown file to record it
- Reference the issue in the markdown file for discussion history
- Link the decision document from relevant code (docstrings, comments)
But once you decide, you create new requirements that your code must satisfy. These are called derived requirements.
6.3.4 Derived Requirements: The Official Definition
The term “derived requirement” is standardized in industry:
Derived requirement (ISO/IEC/IEEE 29148:2018): “A requirement deduced or inferred from the collection and organization of requirements into a particular system hierarchy.”
The standard uses parent/child terminology:
- Parent requirement: What the stakeholder asked for
- Derived (child) requirement: What our implementation needs to satisfy to meet the parent
NASA’s Systems Engineering Handbook provides a practical definition:
“Derived requirements arise from constraints, consideration of issues implied but not explicitly stated in the high-level direction, or factors introduced by the selected architecture and design.”
NASA also uses the term self-derived requirements for requirements that emerge purely from design decisions - exactly what we’re discussing here.
6.3.5 Full Worked Example: From Stakeholder to Unit Test
Let’s trace the full chain:
Parent Requirement (Stakeholder):
FR-CALC-001: User shall see intersection point displayed on road profile chart
Module Test (Tests the stakeholder requirement):
```python
def test_fr_calc_001_intersection_displayed():
    """
    Acceptance test for FR-CALC-001.
    Tests the full workflow from user perspective.
    """
    app = RoadProfileViewer()
    app.load_profile("test_road.csv")
    app.set_camera_angle(45.0)
    result = app.calculate_and_display()
    assert result.intersection_point is not None
    assert result.chart_shows_marker
```
Implementation Decision:
“We’ll implement `find_intersection()` using ray-casting with numpy arrays. Vertical rays are undefined and should return None.”
Derived Requirements (from this decision):
REQ-GEOM-001: find_intersection() shall return (None, None, None) for vertical rays (90°, 270°)
REQ-GEOM-002: When intersection exists, returned distance shall be > 0
REQ-GEOM-003: Returned intersection point shall lie within road x-range
REQ-GEOM-004: Function shall accept road profile as NDArray with shape (N, 2)
Unit Tests (Test the derived requirements):
```python
class TestFindIntersectionDerivedRequirements:
    """Unit tests for derived requirements of find_intersection()."""

    def test_req_geom_001_vertical_ray(self):
        """REQ-GEOM-001: Vertical rays return None."""
        road = np.array([[0, 0], [10, 5]])
        result = find_intersection(road, Angle(90), camera_x=5, camera_y=10)
        assert result == (None, None, None)

    def test_req_geom_002_positive_distance(self):
        """REQ-GEOM-002: Distance is positive when intersection exists."""
        road = np.array([[0, 0], [10, 5], [20, 10]])
        x, y, dist = find_intersection(road, Angle(45), camera_x=0, camera_y=15)
        assert dist is not None and dist > 0

    def test_req_geom_003_on_road(self):
        """REQ-GEOM-003: Intersection lies within road bounds."""
        road = np.array([[0, 0], [10, 5], [20, 10]])
        x, y, dist = find_intersection(road, Angle(45), camera_x=0, camera_y=15)
        assert x is not None
        assert 0 <= x <= 20
```
The Traceability Chain:
```
FR-CALC-001 (Stakeholder)
│
├──► test_fr_calc_001_intersection_displayed()   [Module Test]
│
└──► find_intersection()                         [Implementation]
      │
      ├──► REQ-GEOM-001 ──► test_req_geom_001_vertical_ray()       [Unit Test]
      ├──► REQ-GEOM-002 ──► test_req_geom_002_positive_distance()  [Unit Test]
      └──► REQ-GEOM-003 ──► test_req_geom_003_on_road()            [Unit Test]
```
6.3.6 Where to Document Derived Requirements?
| Requirement Type | Audience | Where to Document |
|---|---|---|
| Stakeholder requirements | Everyone | GitHub Issues, User Stories |
| Derived requirements | Developers | Code (docstrings), test docstrings, internal docs |
For this course, the pragmatic approach:
- Stakeholder requirements → GitHub Issues with FR-/NFR- IDs
- Derived requirements → Document in code:
- Function docstrings (preconditions, postconditions)
- Test class/method docstrings
- Internal REQ-* comments in tests
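As a concrete sketch of this convention, the derived requirements from Section 6.3 could live directly in the function’s docstring. The signature below is a simplified assumption (a plain float angle instead of the `Angle` wrapper) and the body is a stub, not the real ray-casting implementation:

```python
from numpy.typing import NDArray

def find_intersection(road: NDArray, angle_deg: float,
                      camera_x: float, camera_y: float):
    """Intersect the camera ray with the road profile.

    Preconditions:
        REQ-GEOM-004: `road` is an NDArray with shape (N, 2).

    Postconditions:
        REQ-GEOM-001: Returns (None, None, None) for vertical rays (90°, 270°).
        REQ-GEOM-002: When an intersection exists, the returned distance is > 0.
        REQ-GEOM-003: The intersection point lies within the road's x-range.
    """
    # Ray-casting body omitted in this sketch.
    raise NotImplementedError
```

Because the REQ-* IDs live next to the code, any refactor that changes the contract forces the docstring (and the matching unit tests) to change with it.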
Why NOT put derived requirements in GitHub Issues?
- Too low-level for stakeholders (they don’t care about `Angle(90)`)
- Changes with implementation (if you refactor, derived requirements change)
- Clutters the issue tracker
6.3.7 The Testing Pyramid Makes Sense Now
| Test Level | What It Tests | Requirement Type | Run Frequency |
|---|---|---|---|
| E2E | Full user workflows | Stakeholder (FR-*) | Before release |
| Module | Component integration | Stakeholder (FR-*, NFR-*) | On PR merge |
| Unit | Implementation correctness | Derived (REQ-*) | Every commit |
The insight:
- We have few module/E2E tests because there are relatively few stakeholder requirements
- We have many unit tests because each implementation creates many derived requirements
- The pyramid shape emerges naturally from this structure!
Unit tests don’t directly test stakeholder requirements - they test that our implementation of those requirements is correct. The module/E2E tests verify the actual requirement.
6.3.8 Avoiding “Orphan” Requirements
NASA’s Software Engineering Handbook warns about orphan requirements - code or tests that can’t be traced to any parent requirement.
Signs of orphans:
- Unit tests with no clear purpose (“test_thing_works”)
- Code that “might be useful someday”
- Features nobody asked for
Prevention: Every derived requirement should trace back to a stakeholder requirement. If you can’t explain how test_vertical_ray_returns_none() helps satisfy a stakeholder need, question whether you need it.
Bidirectional traceability means you can trace:
- Forward: Stakeholder requirement → derived requirements → unit tests
- Backward: Unit test → derived requirement → stakeholder requirement
If either direction breaks, you have a problem.
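Once these links are recorded in machine-readable form, both directions can be checked automatically. A minimal sketch — the trace data below is illustrative, not the project’s real trace matrix:

```python
# Trace links: derived requirement -> parent stakeholder requirement
parents = {
    "REQ-GEOM-001": "FR-CALC-001",
    "REQ-GEOM-002": "FR-CALC-001",
}

# Trace links: unit test -> derived requirement (None = no link recorded)
test_links = {
    "test_req_geom_001_vertical_ray": "REQ-GEOM-001",
    "test_req_geom_002_positive_distance": "REQ-GEOM-002",
    "test_thing_works": None,  # classic orphan: no traceable purpose
}

# Backward check: every test must trace to a parented requirement
orphan_tests = [t for t, req in test_links.items()
                if req is None or req not in parents]

# Forward check: every derived requirement needs at least one test
covered = {req for req in test_links.values() if req is not None}
untested = [req for req in parents if req not in covered]

print("Orphan tests:", orphan_tests)        # -> ['test_thing_works']
print("Untested requirements:", untested)   # -> []
```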
7. Requirements Documentation Tools
7.1 The Spectrum of Formality
Informal                                                                 Formal
   |                                                                        |
   v                                                                        v
Emails → Sticky Notes → User Stories → Use Cases → IEEE 830 → Formal Specs (Z)
(IEEE 830 and its successor ISO/IEC/IEEE 29148 define standardized formats for requirements specifications - see Section 4.5 for details)
Most modern software development uses semi-formal approaches: structured enough to be clear, flexible enough to change.
7.2 User Stories
Format:
As a [role], I want [feature], so that [benefit].
Example:
As a road engineer, I want to see the intersection point highlighted on the chart, so that I can quickly verify my measurements.
Why this format works:
- Who: Identifies the stakeholder
- What: Describes the need (not the solution)
- Why: Explains the value (helps prioritization)
7.3 Acceptance Criteria: Given-When-Then
Each user story needs acceptance criteria - specific conditions that must be met for the story to be “done.”
Format (Gherkin syntax):
Given [initial context]
When [action occurs]
Then [expected outcome]
Example:
Feature: Intersection Calculation
Scenario: Normal intersection with road
Given a road profile is loaded with points [(0,0), (10,5), (20,10)]
And the camera is at position (0, 15)
When I set the viewing angle to 45 degrees
Then the system should display an intersection point
And the intersection should be between x=0 and x=20
And the distance should be greater than 0
Scenario: Vertical ray (edge case)
Given a road profile is loaded
And the camera is at position (5, 15)
When I set the viewing angle to 90 degrees
Then the system should display "No intersection"
This is directly testable! Tools like pytest-bdd can execute these as automated tests.
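Even without pytest-bdd, each Gherkin line maps one-to-one onto a step of a plain test: Given lines become setup, When the action, Then the assertions. A sketch of the vertical-ray scenario, using a hypothetical stub in place of the real intersection code:

```python
def find_intersection_stub(road, angle_deg, camera_x, camera_y):
    # Hypothetical stand-in: only the vertical-ray rule from the
    # scenario is modeled here, not real ray-casting.
    if angle_deg % 180 == 90:
        return (None, None, None)
    raise NotImplementedError("non-vertical rays not modeled in this stub")

def test_vertical_ray_scenario():
    # Given a road profile is loaded
    road = [(0, 0), (10, 5), (20, 10)]
    # And the camera is at position (5, 15)
    camera_x, camera_y = 5, 15
    # When I set the viewing angle to 90 degrees
    result = find_intersection_stub(road, 90, camera_x, camera_y)
    # Then the system should display "No intersection"
    assert result == (None, None, None)

test_vertical_ray_scenario()
```

pytest-bdd automates exactly this mapping: each Given/When/Then string is bound to a decorated step function, so the `.feature` file itself drives the test.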
7.4 GitHub Issues as Requirements
Modern development often uses GitHub Issues to track requirements:
## User Story
As a road engineer, I want to export my results to PDF so that I can include them in reports.
## Acceptance Criteria
- [ ] Export button visible on main screen
- [ ] PDF includes road profile chart
- [ ] PDF includes intersection coordinates
- [ ] PDF includes calculation parameters
- [ ] File name defaults to "road_profile_YYYY-MM-DD.pdf"
## Technical Notes
- Use `reportlab` library for PDF generation
- Follow existing chart styling
## Links
- Related to #42 (Export feature epic)
- Blocks #45 (Reporting milestone)
Why GitHub Issues Excel for Requirements
Remember Section 6’s INVEST criteria? Terms like “Independent,” “Small,” and “Valuable” are inherently subjective. What one developer considers “small enough” might seem too large to another. What the product owner considers “valuable” might not align with engineering priorities.
This is where GitHub Issues shine - they’re built for collaborative discussion:
- Comments: Team members can debate whether a requirement is truly “Independent” or has hidden dependencies
- Reactions: Quick feedback (👍 👎 🤔) on proposed acceptance criteria
- @mentions: Pull in specific expertise (“@security-team - does this meet NFR-SEC-001?”)
- Threads: Separate discussions for different aspects of the same requirement
- History: Full audit trail of how the requirement evolved through discussion
Example discussion on an issue:
@project-manager: I think this is small enough for one week.
@dev-alice: Actually, the PDF generation alone is 3 days. Can we split into “basic export” and “styled export”?
@dev-bob: Agreed. Also, is the date format in the filename testable? Which timezone?
@project-manager: Good points. Updated acceptance criteria to specify UTC and split the feature.
This collaborative refinement is exactly what you need for subjective criteria - the requirement improves through team discussion, and the conversation is preserved for future reference.
Additional Benefits:
- Linked to code (PRs reference issues)
- Tracked in version control
- Visible to all team members
- Progress tracked automatically
- Labels for categorization (FR, NFR, priority, milestone)
- Milestones for release planning
8. Linking Requirements to Tests: Traceability
8.1 What is Traceability?
Traceability: The ability to link requirements → tests → code in both directions.
Why it matters:
- Forward: Does every requirement have a test?
- Backward: Does every test verify a requirement?
- Impact analysis: If requirement changes, which tests/code are affected?
8.2 Implementing Traceability with pytest
Using markers to link tests to requirements:
import pytest

# Tests in this module are linked to requirements via custom markers.
# (Register the "requirement" marker in pytest.ini or pyproject.toml
# to avoid PytestUnknownMarkWarning.)
pytestmark = pytest.mark.requirements

@pytest.mark.requirement("FR-4")
def test_intersection_found_for_normal_case():
    """
    Requirement FR-4: The system shall calculate and highlight
    the intersection point between ray and road.
    """
    result = find_intersection(x_road, y_road, angle=45.0, camera_x=0, camera_y=10)
    assert result[0] is not None, "Should find intersection"

@pytest.mark.requirement("FR-4")
@pytest.mark.requirement("NFR-2")
def test_intersection_performance():
    """
    Requirements:
    - FR-4: Calculate intersection
    - NFR-2: Complete within 100ms
    """
    import time
    start = time.perf_counter()  # monotonic clock, preferred over time.time() for timing
    result = find_intersection(x_road, y_road, angle=45.0, camera_x=0, camera_y=10)
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert result[0] is not None
    assert elapsed_ms < 100, f"Took {elapsed_ms:.1f}ms, exceeds 100ms requirement"
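To turn those markers into a report, a `conftest.py` hook can collect every requirement ID at collection time. This is a config-fragment sketch built on the hypothetical `requirement` marker convention above, not a standalone script:

```python
# conftest.py (sketch) -- collect IDs from @pytest.mark.requirement
def pytest_collection_modifyitems(config, items):
    covered = set()
    for item in items:
        for marker in item.iter_markers(name="requirement"):
            covered.add(marker.args[0])          # e.g. "FR-4"
    config._covered_requirements = covered       # stash for reporting

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    covered = getattr(config, "_covered_requirements", set())
    terminalreporter.write_line(
        f"Requirements covered by tests: {sorted(covered)}")
```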
8.3 Requirements Coverage vs Code Coverage
Code coverage (Chapter 03): Did we execute all the code?
Requirements coverage: Did we test all the requirements?
| Metric | What it measures | Limitation |
|---|---|---|
| Code Coverage | % of code lines/branches executed by tests | 100% coverage ≠ all requirements tested |
| Requirements Coverage | % of requirements with at least one test | 100% coverage ≠ requirements correct |
You need both:
- High code coverage ensures your tests exercise the implementation
- High requirements coverage ensures your tests verify the specification
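Requirements coverage is then a simple computation once each test’s requirement IDs are known (e.g. from the Section 8.2 markers). A sketch with illustrative data — FR-5 is a made-up requirement with no test:

```python
# All tracked requirement IDs (normally taken from the issue tracker)
requirements = {"FR-4", "NFR-2", "FR-5"}

# Requirement IDs attached to each test (e.g. via pytest markers)
tested = {
    "test_intersection_found_for_normal_case": {"FR-4"},
    "test_intersection_performance": {"FR-4", "NFR-2"},
}

covered = set().union(*tested.values())
coverage_pct = 100 * len(covered & requirements) / len(requirements)

print(f"Requirements coverage: {coverage_pct:.0f}%")   # -> 67%
print("Untested:", sorted(requirements - covered))     # -> ['FR-5']
```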
9. Summary
| Concept | Key Point | Connection to Testing |
|---|---|---|
| Requirements | What the system should do (or how well) | Tests verify requirements are met |
| Functional vs Non-Functional | What it does vs how well it does it | Different test types for each |
| Stakeholders | Different perspectives, different requirements | Different acceptance criteria |
| INVEST criteria | Good requirements are testable | Testability enables test design |
| User Stories | As a [role], I want [feature], so that [benefit] | Acceptance criteria = test cases |
| Traceability | Requirements ↔ Tests ↔ Code | Requirements coverage metric |
What we covered:
- Requirements at the code level (Design by Contract, type safety)
- What requirements are (FR vs NFR)
- Who cares about them (stakeholders)
- What makes them good (INVEST, testability)
- How to document them (User Stories, GitHub Issues)
- How to link them to tests (traceability)
10. What’s Next: Part 2
In Part 2, we’ll cover:
- Requirements in the development workflow - Waterfall vs iterative approaches
- Business perspectives - How different roles view requirements
- Elicitation techniques - How to discover requirements through interviews
- Bug vs Change Request - A critical business distinction
- POC-driven requirements - Build to learn, including Gen AI approaches
- Preview of Agile - Why it emerged as a response to requirements uncertainty
Continue to Part 2 → Requirements Engineering - From Process to Practice