05 Agile Development Part 1: Philosophy, Practices, and User Stories
January 2026 (7497 Words, 42 Minutes)
1. Introduction: From Failure to Agility
In Part 2 of Requirements Engineering, we examined why traditional software development models often fail. We identified four critical problems:
- Late feedback — Working software appears 6-18 months after requirements decisions, meaning errors are discovered far too late
- The cost of change curve — Boehm’s research showed that defects found in production cost 30-100x more to fix than those caught during requirements
- Three false assumptions:
- “We can know all requirements upfront”
- “We can design completely before coding”
- “We can build it right the first time”
- The Planning Paradox — If we plan more, we waste time on plans that will change. If we plan less, we build the wrong thing.
These problems aren’t just theoretical. They’ve caused countless project failures, cost overruns, and frustrated developers and users alike.
But what if the problem isn’t poor execution of traditional methods? What if the methods themselves are fundamentally flawed?
In this two-part lecture series, we explore an alternative approach: Agile development. Rather than fighting uncertainty with more planning, Agile embraces uncertainty and builds processes that adapt to change.
Part 1 (this lecture) covers the philosophical foundation, the Agile Manifesto, Extreme Programming practices, and user stories. Part 2 covers the Scrum framework, sprint execution, and real-world evidence of Agile adoption.
2. Learning Objectives
By the end of this lecture, you will be able to:
- Explain the philosophical foundation of Agile using Herbert Simon’s concept of bounded rationality
- Articulate the four values and twelve principles of the Agile Manifesto
- Apply Extreme Programming (XP) practices including TDD, refactoring, pair programming, and continuous integration
- Compare human and AI pair programming — understanding when each approach provides the most value
- Write effective user stories using the “As a [role], I want [feature], so that [benefit]” format with acceptance criteria
- Track user stories using GitHub Issues — labels, milestones, task lists, and project boards
- Connect Agile practices to testing — understanding how iterative development integrates with TDD and CI
3. Herbert Simon’s Bounded Rationality: The Philosophical Foundation
Before the Agile Manifesto was written in 2001, a Nobel Prize-winning economist had already explained why traditional planning approaches are doomed to fail.
3.1 Who Was Herbert Simon?
Herbert Alexander Simon (1916-2001) was an American polymath whose work spanned economics, cognitive psychology, computer science, and organizational theory. He spent most of his career at Carnegie Mellon University, where he helped found one of the world’s first computer science departments.
Simon received two of the highest honors in different fields:
- ACM Turing Award (1975) — for contributions to artificial intelligence
- Nobel Prize in Economics (1978) — for his research on decision-making in organizations
His most influential work, Administrative Behavior (1947), introduced a concept that would revolutionize how we think about human decision-making.
3.2 Bounded Rationality Explained
Classical economics assumed that humans are perfectly rational decision-makers — they gather all available information, evaluate all possible alternatives, and choose the optimal solution.
Simon argued this is fiction. In reality, humans face three fundamental limitations:
| Limitation | Description | Example in Software |
|---|---|---|
| Cognitive limitations | We cannot process all available information | No developer can hold an entire system's behavior in their head |
| Time constraints | Decisions must be made before all data is available | Requirements must be written before users have used the system |
| Incomplete knowledge | We cannot predict how the environment will change | Market conditions, user needs, and technology evolve unpredictably |
Because of these limitations, Simon argued that humans don’t optimize — they satisfice (a word he coined, combining “satisfy” and “suffice”). We don’t find the best solution; we find a solution that’s good enough given our constraints.
3.3 The Connection to Software Development
Simon’s insight has profound implications for software development:
If bounded rationality is true, then complete requirements specifications are impossible. The Waterfall assumption — “we can know all requirements upfront” — is not just difficult to achieve; it’s philosophically incoherent.
Consider our Road Profile Viewer project. Even with careful requirements gathering, we cannot anticipate:
- How users will actually interact with the upload feature
- What edge cases will emerge in production
- How the underlying Dash framework might change
- What new features users will request after seeing the working software
This isn’t a failure of our requirements process — it’s a fundamental property of human cognition.
```mermaid
flowchart LR
    subgraph Traditional["Traditional Assumption"]
        direction TB
        A1[Complete Information] --> B1[Optimal Decision]
        B1 --> C1[Perfect System]
    end
    subgraph Bounded["Bounded Rationality (Reality)"]
        direction TB
        A2[Limited Information] --> B2[Satisfactory Decision]
        B2 --> C2[Learn from Use]
        C2 --> D2[Adapt & Improve]
        D2 --> A2
    end
    Traditional ~~~ Bounded
```
Simon’s conclusion: Instead of pretending we can achieve perfect upfront understanding, we should build processes that embrace learning and adaptation. This is exactly what Agile does.
4. Agile Methods Overview
4.1 The Rise of Agile: Historical Context
In the 1980s and early 1990s, the software engineering community believed that better software came through careful project planning, formalized quality assurance, and rigorous, controlled development processes. This “plan-driven” approach was developed for large, long-lived systems like aerospace and government software — projects that might take up to 10 years from specification to deployment.
However, when this heavyweight approach was applied to small and medium-sized business systems, problems emerged:
- More time was spent on process than programming — documenting how the system should be developed consumed resources that could have gone into actual development
- Constant rework as requirements changed — when requirements changed (as they inevitably did), specifications and designs had to be updated along with the code
- Software delivered too late — by the time software was available, the original business need had often changed so radically that the software was effectively useless
Dissatisfaction with these heavyweight approaches led to the development of agile methods in the late 1990s, including Extreme Programming (Beck 1999), Scrum (Schwaber and Beedle 2001), and DSDM (Stapleton 2003).
4.2 What Makes a Method “Agile”?
All agile methods share three fundamental characteristics:
| Characteristic | Description | Contrast with Plan-Driven |
|---|---|---|
| Interleaved Processes | Specification, design, and implementation happen together. Requirements are an outline, not a detailed specification. | Plan-driven: Separate phases with formal documents between each stage |
| Incremental Development | System developed in a series of increments. End-users and stakeholders are involved in specifying and evaluating each increment. | Plan-driven: Complete system delivered at end of project |
| Extensive Tool Support | Automated testing tools, configuration management, system integration, and UI generation tools support the rapid pace. | Plan-driven: Tools support documentation and compliance |
The result: agile methods create new releases every two to three weeks, involve customers to get rapid feedback, and minimize documentation by using informal communications rather than formal meetings with written documents.
```mermaid
flowchart LR
    subgraph Plan["Plan-Driven Development"]
        direction TB
        R1[Requirements<br>Engineering] --> RS[Requirements<br>Specification]
        RS --> D1[Design &<br>Implementation]
        D1 -.-> RC[Requirements<br>Change Requests]
        RC -.-> RS
    end
    subgraph Agile["Agile Development"]
        direction TB
        R2[Requirements<br>Engineering] <--> D2[Design &<br>Implementation]
    end
    Plan ~~~ Agile
```
4.3 The Five Principles of Agile Methods
While different agile methods vary in their specific practices, they share a common set of principles:
| Principle | Description | Practical Challenge |
|---|---|---|
| Customer Involvement | Customers should be closely involved throughout development. Their role is to provide and prioritize new requirements and to evaluate iterations. | Customers have other demands on their time and may not be able to participate full-time. External stakeholders (like regulators) are hard to represent. |
| Embrace Change | Expect requirements to change, and design the system to accommodate these changes. | Prioritizing changes can be extremely difficult when there are many stakeholders with different priorities. |
| Incremental Delivery | Software is developed in increments, with the customer specifying the requirements for each increment. | Rapid iterations may not fit with longer-term business planning and marketing cycles. |
| Maintain Simplicity | Focus on simplicity in both the software and the development process. Actively work to eliminate complexity from the system. | Under pressure from delivery schedules, teams may not have time to carry out simplifications. |
| People, Not Process | The skills of the development team should be recognized and exploited. Team members should develop their own ways of working. | Individual team members may not have suitable personalities for the intense involvement typical of agile methods. |
4.4 Where Agile Works Best
Agile methods have been particularly successful in two situations:
- Product development — where a software company is developing a small or medium-sized product for sale. Virtually all software products and apps are now developed using an agile approach.
- Custom system development — where there is a clear commitment from the customer to become involved in the development process and where there are few external stakeholders and regulations.
Why do these situations work well? Because they allow for continuous communication between the product manager or customer and the development team, and the software is stand-alone rather than tightly integrated with other systems being developed simultaneously.
4.5 The Agile Response to Uncertainty
Rather than fighting uncertainty with more planning, Agile works with uncertainty:
| Traditional Response to Uncertainty | Agile Response to Uncertainty |
|---|---|
| More planning upfront | Short planning cycles (1-4 weeks) |
| Heavy documentation | Working software as primary artifact |
| Change control boards | Welcome change as competitive advantage |
| Long release cycles (months/years) | Frequent releases (weeks) |
| Big bang testing at the end | Continuous testing throughout |
| Specialist handoffs | Cross-functional teams |
The core insight: feedback is more valuable than prediction. Instead of spending months trying to predict what users want, spend weeks building something and learning from their actual reactions.
5. The Agile Manifesto
5.1 History and Context
In February 2001, seventeen software practitioners gathered at a ski resort in Snowbird, Utah. They represented different methodologies — Extreme Programming, Scrum, Crystal, and others — but shared a common frustration with heavyweight processes.
Over a weekend, they distilled their collective wisdom into four values and twelve principles. The result was the Agile Manifesto, one of the most influential documents in software history.
5.2 The Four Values
Manifesto for Agile Software Development
We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more.
Note the final sentence: the manifesto doesn’t reject processes, documentation, contracts, or plans. It establishes priorities.
| Value | What It Means | What It Does NOT Mean |
|---|---|---|
| Individuals and interactions over processes and tools | Trust skilled people; minimize bureaucracy; face-to-face conversation | No processes at all; tools are useless |
| Working software over comprehensive documentation | Demonstrate progress with running code; document what's necessary | Zero documentation; never write anything down |
| Customer collaboration over contract negotiation | Ongoing conversation; shared understanding; partnership | No contracts needed; don't define scope |
| Responding to change over following a plan | Adapt as you learn; plans are living documents | No planning at all; just start coding |
5.3 The Twelve Principles
Behind the four values are twelve principles that guide Agile practice:
Delivery-focused:
- Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
- Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
- Working software is the primary measure of progress.
Change-embracing:
- Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
- The best architectures, requirements, and designs emerge from self-organizing teams.
- At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
People-centered:
- Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
- The most efficient and effective method of conveying information is face-to-face conversation.
- Business people and developers must work together daily throughout the project.
- Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
Quality-driven:
- Continuous attention to technical excellence and good design enhances agility.
- Simplicity — the art of maximizing the amount of work not done — is essential.
5.4 Practical Challenges of Agile Principles
The manifesto principles sound appealing, but applying them in practice raises difficult questions. Real-world organizations face constraints that can make pure agile adoption challenging:
| Principle | Practical Challenge | Real-World Implications |
|---|---|---|
| Customer involvement | Finding customers willing to participate full-time in the development process | Customers have their own jobs; Product Owners are often proxies with incomplete knowledge of real user needs |
| Embrace change | Not all stakeholders may be willing to accept that priorities will change | Some changes have high implementation costs; late fundamental changes may require significant rework even with good design |
| Maintaining simplicity | Simplicity requires extra work to achieve; under delivery pressure, teams skip refactoring | Technical debt accumulates; "just make it work" becomes the norm; future changes become harder |
| People over process | Not everyone has the personality for close collaborative work | Some team members prefer to work alone; pair programming can be exhausting; conflicts may arise without process guardrails |
| Minimal documentation | Long-lived systems require documentation; team turnover loses knowledge | When original developers leave, new team members struggle without documentation; maintenance becomes expensive |
5.5 When Is Agile Most Applicable?
Agile methods are not a universal solution. They work best under certain conditions:
Agile works well when:
- The product is developed by a small, co-located team who can communicate informally
- The system has few external stakeholders (customers, regulators, other systems)
- Requirements can realistically change frequently without massive rework costs
- Customer representatives are available and committed to participate
- The organization supports iterative delivery (not just one big bang release)
Agile may struggle when:
- The system requires detailed, upfront specification (e.g., medical devices, safety-critical systems)
- The team is distributed across time zones making face-to-face communication impossible
- Contracts require fixed scope rather than flexible priorities
- External regulators require comprehensive documentation before deployment
- The system must integrate with multiple other systems being developed in parallel
The reality: Most organizations today use a hybrid approach — adopting agile practices where possible while maintaining necessary documentation and processes for compliance, integration, or organizational constraints.
6. Extreme Programming (XP): Key Practices
6.1 What is XP?
Extreme Programming (XP) was created by Kent Beck in the late 1990s. The name comes from taking good software development practices to their “extreme”:
- If code review is good, review code constantly (pair programming)
- If testing is good, test first (TDD)
- If integration is good, integrate continuously (CI)
- If simple design is good, always seek the simplest solution (refactoring)
XP provides the technical practices that make Agile work. Without these practices, Agile becomes just “doing less planning” — which leads to chaos, not agility.
6.2 The XP Release Cycle
XP organizes work into releases, iterations, and tasks:
Release Level (weeks to months)
Within Each Iteration
At the start of a project, story cards capture customer requirements. The customer works with the team to prioritize these stories, and each release contains the highest-priority features that can fit within the schedule.
Once stories are assigned to a release, the development team breaks them into task cards. Each task represents roughly 1-2 days of work, making progress visible and estimation more accurate.
Connection to Previous Lecture: We introduced user stories in Chapter 04: Requirements Engineering. There, you learned the “As a… I want… So that…” format and INVEST criteria. XP’s story cards are the physical representation of these user stories — a lightweight way to capture and prioritize them without heavy documentation. The digital equivalents (GitHub Issues, Jira tickets) serve the same purpose today.
6.3 Core XP Practices
| Practice | Description | Connection to This Course |
|---|---|---|
| User Stories | Requirements as short descriptions from user perspective | Chapter 04: User Stories |
| Test-First Development (TDD) | Write failing test, write code to pass, refactor | Chapter 03: TDD |
| Refactoring | Improve code structure without changing behavior | Chapter 02: Refactoring |
| Pair Programming | Two developers, one computer, continuous review | See deep dive below |
| Continuous Integration | Merge frequently, test automatically on every commit | Chapter 02: CI/CD |
| Collective Ownership | Anyone can change any code; no "code silos" | Enabled by tests and pair programming |
| Simple Design | Design for current needs, not hypothetical futures | Chapter 02: YAGNI |
| Small Releases | Deploy to production frequently with minimal changes | Agile Principle #2 |
| Sustainable Pace | 40-hour weeks; avoid burnout and late-stage heroics | Agile Principle #10 |
| On-Site Customer | Customer representative available full-time to answer questions | Enables rapid feedback |
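Continuous integration from the table above is, in modern teams, almost always an automated workflow triggered on every commit. As a minimal sketch (the file path `.github/workflows/ci.yml`, the `requirements.txt` name, and the Python version are assumptions, not part of the course setup), a GitHub Actions workflow that runs the test suite on every push might look like:

```yaml
# Hypothetical minimal CI workflow: run tests on every push and pull request,
# so integration problems surface within minutes rather than at release time.
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest --cov
```

The point is not the specific tool: any setup that merges frequently and tests automatically on every commit satisfies the XP practice.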
6.4 Test-First Development Deep Dive
Test-first development (TDD) inverts the traditional development sequence. Instead of writing code and then writing tests, you:
- Write a failing test that defines the expected behavior
- Write the minimum code to make the test pass
- Refactor to improve design while keeping tests green
This approach has a crucial psychological benefit: it forces you to think about what the code should do before thinking about how to implement it.
The Test Lag Problem
In practice, many teams struggle to maintain discipline. When under schedule pressure, developers write tests after the code — or skip tests entirely. This creates test lag: a growing gap between production code and test coverage.
Once test lag develops, developers face a dilemma:
- Writing tests for existing code feels wasteful (the code already works)
- But untested code accumulates technical debt
- Eventually, fear of breaking untested code slows all development
XP addresses this by making test-first a team norm, not an individual choice. Pair programming helps enforce the discipline because your partner will notice if you skip the test step.
Example: Test-First for Profile Validation
```python
import pytest

# ValidationError is assumed to be defined elsewhere in the project
# (e.g., a custom exception class or pydantic's ValidationError).

# Step 1: Write the failing test
def test_profile_name_too_long_raises_error():
    """Names over 100 characters should be rejected."""
    long_name = "A" * 101
    with pytest.raises(ValidationError, match="Name must be 1-100 characters"):
        validate_profile_name(long_name)

# Step 2: Write minimum code to pass
def validate_profile_name(name: str) -> str:
    if len(name) > 100:
        raise ValidationError("Name must be 1-100 characters")
    return name

# Step 3: Refactor (add more validation rules)
def validate_profile_name(name: str) -> str:
    if not name or len(name) > 100:
        raise ValidationError("Name must be 1-100 characters")
    return name.strip()
```
6.5 Pair Programming Deep Dive
Pair programming is perhaps the most counterintuitive XP practice. Two developers work together on one computer:
- Driver: Types the code, focuses on the immediate task (tactical thinking)
- Navigator: Reviews each line as it’s typed, thinks about the bigger picture (strategic thinking)
Partners rotate roles frequently (every 15-30 minutes) and swap pairs throughout the day.
Benefits:
- Continuous code review — defects caught immediately
- Knowledge sharing — no single points of failure
- Mentoring — junior developers learn from senior developers in real-time
- Focus — harder to get distracted when someone is watching
What Does Research Say?
Pair programming has been studied extensively:
- Williams et al. (2000) found that pairs produced code with 15% fewer defects than individuals working alone, while taking only 15% more total person-hours. The quality improvement justified the apparent cost.
- Arisholm et al. (2007) conducted a controlled experiment with 295 professional developers. Their findings were more nuanced:
- For simple tasks, individuals were more efficient
- For complex tasks, pairs showed significant quality benefits
- Junior-junior pairs performed poorly; pairing works best with at least one experienced developer
The research suggests pair programming is not universally beneficial — it’s a tool best applied to complex, high-risk, or learning-intensive work.
When to use pair programming:
- Complex or unfamiliar code
- Critical system components
- Onboarding new team members
- When stuck on a difficult problem
When to work solo:
- Simple, routine tasks
- Exploratory prototyping
- When pair fatigue sets in (it’s mentally intensive)
- Well-understood, low-risk changes
6.6 AI as Your Pair Programming Partner: The New Navigator?
Since Lecture 1, you’ve been using GitHub Copilot as part of your development environment. Whether you realized it or not, every time Copilot suggested code and you evaluated whether to accept it — you were engaged in a form of pair programming with an AI partner.
This raises an important question: Do the research findings about human pair programming from 2000 and 2007 still apply in the age of generative AI? Or has AI fundamentally changed the equation?
6.6.1 The Rise of AI Pair Programmers
The landscape shifted rapidly:
- 2021: GitHub Copilot launched as an “AI pair programmer”
- 2022: ChatGPT demonstrated conversational coding assistance
- 2023: Claude and GPT-4 brought more sophisticated code understanding
- 2024-2025: AI coding assistants became mainstream development tools
These tools function as an always-available navigator — suggesting code completions, answering questions, and helping debug problems. You remain the driver; the AI offers navigation suggestions.
6.6.2 What Does Research Say?
Researchers have begun comparing human-human pair programming with human-AI pair programming. The findings are nuanced:
| Dimension | Human Pair Programming | AI Pair Programming |
|---|---|---|
| Speed | ~15% overhead (two people, one task) | 55% faster task completion (GitHub 2023) |
| Defect reduction | 15% fewer defects (Williams 2000) | Mixed: readability ↑, but 41% higher code churn (GitClear 2024) |
| Anxiety/motivation | Moderate improvement | Significant reduction (p < .001, d = 0.35) (STEM Education 2025) |
| Complex tasks | Strong benefits | Struggles without project context |
| Availability | Working hours, scheduling required | 24/7, infinite patience |
| Expertise matching | Fixed (whoever's available) | Customizable/adaptable to your level |
| Knowledge transfer | High (tacit knowledge shared) | None |
| Mentoring capability | Yes | No |
| Team culture building | Yes | No |
6.6.3 The Productivity Promise
The productivity gains are real and well-documented:
- GitHub’s controlled study (2023): Developers completed tasks 55% faster with Copilot
- Time savings: Up to 50% on documentation, 30-40% on repetitive coding tasks
- Developer experience: 90% of developers reported feeling more fulfilled; 70% experienced reduced mental effort on repetitive tasks (Accenture study)
These gains are most pronounced for boilerplate code, syntax lookup, and routine implementations — precisely the tasks where pair programming overhead is hardest to justify.
6.6.4 The Quality Question
However, the quality picture is more complicated. Research from 2024 presents conflicting evidence:
Positive findings (GitHub-sponsored research):
- Developers with Copilot were 56% more likely to pass all unit tests
- Code readability improved by 3.6%, maintainability by 2.5%
Concerning findings (independent research):
- GitClear 2024: AI-generated code has 41% higher churn rate — meaning it gets revised or deleted more often
- Uplevel Data Labs: “Developers with Copilot access saw a significantly higher bug rate while their issue throughput remained consistent”
Interpretation: AI may trade initial speed for later rework. The code ships faster, but may require more maintenance.
6.6.5 A Nuanced Finding for Learners
A 2025 study in the ACM Learning@Scale conference found that the benefits of AI pair programming were unevenly distributed among students:
- Students with strong metacognitive skills (ability to monitor and evaluate their own thinking) achieved enhanced performance with AI
- Students with weak metacognitive skills were negatively impacted by AI assistance
This suggests a critical insight: AI amplifies existing skills rather than replacing them. If you don’t understand why code works, AI won’t teach you — it will just generate more code you don’t understand.
This connects directly to our emphasis on TDD and testing: your tests verify that AI suggestions actually work correctly. You still need to understand the code well enough to write meaningful tests.
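To make this concrete, here is a small, entirely hypothetical illustration: suppose an AI assistant suggests a length check using a strict comparison. A boundary test written from the requirement ("names of 1 to 100 characters are valid"), rather than from the code, catches the off-by-one immediately:

```python
def is_valid_name_suggested(name: str) -> bool:
    """A plausible AI suggestion: looks right, but rejects 100-char names."""
    return 0 < len(name) < 100

def is_valid_name(name: str) -> bool:
    """Corrected after the boundary test failed: exactly 100 chars is valid."""
    return 0 < len(name) <= 100

def test_name_of_exactly_100_chars_is_valid():
    # The test encodes the requirement, not the implementation,
    # so it fails against the suggested version and passes against the fix.
    assert is_valid_name("A" * 100)
```

If you could not have written that test yourself, you also could not have spotted the bug in the suggestion.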
6.6.6 What AI Cannot Replace
Research consistently identifies capabilities that remain uniquely human:
- Project history and the “why”: AI doesn’t know why your team chose this architecture, what failed before, or what constraints shaped past decisions
- Tacit knowledge transfer: Senior developers share instincts and intuitions that aren’t documented anywhere
- Team culture and shared understanding: Building trust, navigating disagreements, and developing collective ownership
- Genuine challenge from experience: A human navigator says “I’ve seen this pattern fail before” with conviction from lived experience
- Ethical and contextual judgment: Understanding organizational politics, user needs, and societal implications
As one 2025 study noted: students “appreciated LLM-based tools as valuable pair programming partners” but “had different expectations compared to human teammates.”
6.6.7 Practical Guidance: When to Use Which
Based on the research, here’s updated guidance for the AI era:
Use AI pair programming for:
- Boilerplate code and repetitive implementations
- Syntax you don’t remember (API calls, library usage)
- Quick prototyping and exploration
- Late-night solo work when no human partner is available
- Documentation and code comments
Use human pair programming for:
- Complex design decisions requiring architectural judgment
- Learning new domains (the human can explain the “why”)
- Onboarding to a team and its codebase
- Critical system components where defects are costly
- When you need to be challenged, not just helped
The skilled developer of 2025 and beyond will likely use both strategically — recognizing when they need speed and availability (AI) versus when they need wisdom, challenge, and knowledge transfer (human).
The best pairing setup might be: a human navigator who knows when to let you drive — and when to suggest you ask Copilot for a syntax reminder.
6.7 How XP Practices Reinforce Each Other
XP practices aren’t independent — they form a mutually reinforcing system:
- User stories define what to test
- TDD ensures tests exist before code
- Refactoring keeps design simple
- Simple design enables quick integration
- CI enables small, frequent releases
- Small releases generate feedback for new user stories
- Pair programming improves quality of all activities
7. User Stories for Road Profile Viewer
Let’s apply XP practices to our course project: the Road Profile Viewer.
7.1 User Story Format
From Chapter 04, recall the user story format:
As a [role], I want [feature], so that [benefit].
This format captures:
- Who wants the feature (the role)
- What they want (the feature)
- Why they want it (the business value)
7.2 Story Cards and Task Cards
In traditional XP, user stories are written on physical story cards — index cards small enough to force brevity. The constraint is intentional: a story that can’t fit on a card is too big and needs to be split.
What goes on a Story Card:
- Story title
- Brief description (1-2 sentences)
- Priority (set by customer)
- Effort estimate (set by team, often in “story points”)
Once a story is selected for implementation, the development team breaks it into task cards. Each task represents a specific piece of work (1-2 days maximum):
| Story Card | Task Cards (derived from story) |
|---|---|
| Profile Upload<br>As a road engineer, I want to upload new road profiles via JSON files.<br>Priority: High · Estimate: 5 points | 1. Build the upload page UI<br>2. Validate uploaded JSON against the profile schema<br>3. Persist the validated profile to the database<br>4. Write tests for upload and validation |
Why physical cards?
In the early days of XP, physical cards served several purposes:
- Visibility: Cards on a board show project status at a glance
- Tactile manipulation: Physically moving cards creates a sense of progress
- Constraint: Small cards force concise requirements
- Collaboration: Cards can be arranged, prioritized, and discussed as a group
7.2.1 From Physical Cards to Digital Tools
Today, most teams use digital tools instead of physical cards. The shift happened for practical reasons:
- Remote work: Distributed teams can’t gather around a physical board
- Searchability: Finding a specific story among hundreds is instant
- Integration: Tools connect to version control, CI/CD, and documentation
- History: Every change is tracked — who moved what, when, and why
| Aspect | Physical Cards | GitHub Issues | Jira |
|---|---|---|---|
| Best for | Small co-located teams, workshops | Open source, small-medium teams, developers | Enterprise, large teams, management reporting |
| Cost | Index cards + markers | Free (public), included with GitHub | $7.75-15.25/user/month |
| Learning curve | None | Low (if you know GitHub) | Medium-High (many features) |
| Sprint planning | Move cards on board | GitHub Projects boards | Built-in sprint management |
| Reporting | Manual counting | Basic (insights tab) | Advanced (burndown, velocity, forecasting) |
For this course, we recommend GitHub Issues because:
- You’re already using GitHub for version control
- Issues link directly to commits and pull requests
- It’s free and sufficient for most projects
- Learning one tool deeply is better than learning many superficially
The tool doesn’t matter — the practice does. Whether you use sticky notes, GitHub Issues, or Jira, the principles remain the same: keep stories small, break them into tasks, and make progress visible.
7.3 Road Profile Viewer Feature Requirements
The project requires:
- Dropdown selector for multiple road profiles
- Upload page with JSON file upload
- Database persistence (SQLite or TinyDB)
- Pydantic data validation
- 90%+ C1 test coverage
7.4 Example User Stories
7.4.1 Story 1: Profile Selection
As a road engineer, I want to select from multiple stored road profiles using a dropdown, so that I can quickly switch between different road measurements.
Acceptance Criteria (Given-When-Then):
Scenario: Select a different profile
Given the application has 3 stored profiles
And the dropdown shows "Highway A1" as the default
When I click the dropdown and select "Rural Road B7"
Then the chart updates to show the "Rural Road B7" profile
And the intersection calculation uses the new profile data
Scenario: Default selection on startup
Given the application has stored profiles
When the application starts
Then the dropdown pre-selects the first profile alphabetically
And the chart displays that profile
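Acceptance criteria like these translate almost directly into automated tests. Here is a minimal sketch: `default_selection` is a hypothetical helper (not part of the project's prescribed API) that implements the "first profile alphabetically" rule, with a test mirroring the Given-When-Then scenario.

```python
def default_selection(profile_names):
    """Return the profile the dropdown should pre-select on startup:
    the first stored profile in alphabetical order, or None if empty."""
    return min(profile_names, default=None)

def test_default_selection_on_startup():
    # Given the application has stored profiles
    profiles = ["Rural Road B7", "Highway A1", "Mountain Pass C3"]
    # When the application starts,
    # Then the dropdown pre-selects the first profile alphabetically
    assert default_selection(profiles) == "Highway A1"
```

Note how each Given/When/Then clause becomes a comment above the corresponding test line, keeping the scenario and the test in sync.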
7.4.2 Story 2: Profile Upload
As a road engineer, I want to upload new road profiles via JSON files, so that I can add measurements from field surveys to the system.
Acceptance Criteria:
Scenario: Upload valid JSON profile
Given I am on the upload page
And I have a valid JSON file with 100 road points
When I select the file and click "Upload"
Then the system displays a preview chart
And I can enter a custom name for the profile
And clicking "Save" stores the profile in the database
And I receive a success confirmation
Scenario: Upload invalid JSON format
Given I am on the upload page
When I select a file that is not valid JSON
Then the system displays "Invalid JSON format" error
And the profile is NOT saved to the database
Scenario: Upload mismatched coordinates
Given I am on the upload page
When I select a JSON file where x_coordinates has 50 items but y_coordinates has 45 items
Then the system displays "Coordinate arrays must have equal length" error
And the profile is NOT saved
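The two error scenarios above can be sketched as a single validation step that runs before the preview chart is shown. `check_upload` is a hypothetical helper name; the error messages are taken verbatim from the scenarios.

```python
import json

def check_upload(raw: str):
    """Validate an uploaded profile file.

    Returns (data, None) on success, or (None, error_message) so the UI
    can display the error and skip saving to the database.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None, "Invalid JSON format"
    # Mismatched coordinate arrays are rejected with an explanation
    if len(data.get("x_coordinates", [])) != len(data.get("y_coordinates", [])):
        return None, "Coordinate arrays must have equal length"
    return data, None
```

Returning an error value rather than raising keeps the "profile is NOT saved" guarantee explicit: the caller only persists when the error slot is `None`.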
7.4.3 Story 3: Data Validation
As a system administrator, I want uploaded profiles validated against schema rules, so that invalid data cannot corrupt the database.
Acceptance Criteria:
Scenario: Valid profile data
Given a JSON file with name "Test Road" (9 characters, within the 1-100 limit)
And x_coordinates and y_coordinates each have 25 float values
When the system validates the file
Then validation passes
And the profile can be saved
Scenario: Name too long
Given a JSON file with name exceeding 100 characters
When the system validates the file
Then validation fails with "Name must be 1-100 characters"
Scenario: Empty coordinate arrays
Given a JSON file with empty x_coordinates array
When the system validates the file
Then validation fails with "Minimum 2 coordinate points required"
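These schema rules map almost one-to-one onto a Pydantic model. A sketch assuming Pydantic v2; the model name `RoadProfile` and field names follow the JSON keys used in the scenarios, but are otherwise illustrative.

```python
from pydantic import BaseModel, Field, model_validator

class RoadProfile(BaseModel):
    # "Name must be 1-100 characters"
    name: str = Field(min_length=1, max_length=100)
    # "Minimum 2 coordinate points required"
    x_coordinates: list[float] = Field(min_length=2)
    y_coordinates: list[float] = Field(min_length=2)

    @model_validator(mode="after")
    def coordinates_match(self) -> "RoadProfile":
        # "Coordinate arrays must have equal length"
        if len(self.x_coordinates) != len(self.y_coordinates):
            raise ValueError("Coordinate arrays must have equal length")
        return self
```

With this model, `RoadProfile.model_validate_json(raw)` parses and validates an upload in one step, raising a `ValidationError` whose per-field messages can be shown directly to the user.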
7.4.4 Story 4: Profile Preview
As a road engineer, I want to preview the profile graph before saving, so that I can verify I’m uploading the correct data.
Acceptance Criteria:
Scenario: Preview before save
Given I have uploaded a valid JSON file
When the system accepts the file
Then I see a graph showing the road profile
And the graph axes are labeled correctly
And I can still cancel without saving
7.4.5 Story 5: Database Persistence
As a road engineer, I want my profiles to persist across application restarts, so that I don’t lose my data when closing the application.
Acceptance Criteria:
Scenario: Profiles survive restart
Given I have saved 5 profiles to the database
When I restart the application
Then all 5 profiles appear in the dropdown
And each profile loads correctly when selected
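A minimal persistence sketch using the standard-library `sqlite3` module. The `profiles` table and the JSON-blob column are illustrative choices, not the course's prescribed schema; reopening the same database file after a restart restores all saved profiles.

```python
import json
import sqlite3

def open_db(path: str = "profiles.db") -> sqlite3.Connection:
    """Open (or create) the profile database."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS profiles (name TEXT PRIMARY KEY, data TEXT)"
    )
    return conn

def save_profile(conn, name, x_coordinates, y_coordinates):
    """Insert a profile, overwriting any existing profile with the same name."""
    conn.execute(
        "INSERT OR REPLACE INTO profiles VALUES (?, ?)",
        (name, json.dumps({"x": x_coordinates, "y": y_coordinates})),
    )
    conn.commit()

def list_profiles(conn):
    # Alphabetical order matches the dropdown's default-selection rule (Story 1)
    return [row[0] for row in conn.execute("SELECT name FROM profiles ORDER BY name")]

def load_profile(conn, name):
    row = conn.execute(
        "SELECT data FROM profiles WHERE name = ?", (name,)
    ).fetchone()
    return json.loads(row[0]) if row else None
```

Passing `":memory:"` to `open_db` gives a throwaway in-memory database, which is convenient for the integration tests required by the coverage goal.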
7.5 Managing User Stories in Practice
The user stories above are well-structured on paper, but how do teams actually track them during development? Let’s see how to translate these stories into GitHub Issues — the tool you’re already using for this course.
7.5.1 Using GitHub Issues for User Stories
GitHub Issues provides everything you need to manage user stories:
Labels categorize issues by type and priority:
- `story` — A user story (vs. `bug` or `task`)
- `priority:high`, `priority:medium`, `priority:low` — Priority level
- `sprint:1`, `sprint:2` — Which sprint the story is planned for
- `feature:upload`, `feature:validation` — Feature area
Milestones group stories into releases:
- “MVP Release v1.0” — First usable version
- “Sprint 1” — Work planned for current iteration
Task lists within issues track sub-tasks:
- Checkboxes for each task derived from the story
- Progress visible at a glance (3 of 6 complete)
Linking connects stories to implementation:
- Reference issues in commit messages: `Fixes #12`
- Pull requests automatically close related issues
7.5.2 Example: Profile Upload as a GitHub Issue
Here’s how Story 2: Profile Upload would look as a GitHub Issue:
User Story
As a road engineer, I want to upload new road profiles via JSON files, so that I can add measurements from field surveys to the system.
Acceptance Criteria
- [ ] Upload page accessible at `/upload`
- [ ] File picker accepts `.json` files only
- [ ] Valid JSON shows preview chart before saving
- [ ] Invalid JSON shows clear error message
- [ ] Mismatched coordinate arrays rejected with explanation
- [ ] Success confirmation displayed after save
- [ ] New profile appears in dropdown immediately
Tasks
- [ ] Create `/upload` route and template
- [ ] Implement file upload form component
- [ ] Add JSON parsing with error handling
- [ ] Implement Pydantic validation model
- [ ] Add database save operation
- [ ] Write integration tests (target: 90% coverage)
Definition of Done
- [ ] All acceptance criteria verified
- [ ] Tests written and passing
- [ ] Code reviewed and approved
- [ ] No known bugs
Why this structure works:
- User story at the top — Anyone can understand what and why
- Acceptance criteria as checkboxes — Clear definition of “done”
- Tasks as checkboxes — Track implementation progress
- Labels for filtering — Find all high-priority stories, all upload features, etc.
7.5.3 GitHub Projects for Sprint Planning
For sprint-level planning, use GitHub Projects (the newer “Projects V2”):
- Create a project board linked to your repository
- Add columns for your workflow: `Backlog` → `Sprint` → `In Progress` → `Review` → `Done`
- Drag issues between columns as work progresses
- Use filters to see only the current sprint: `milestone:"Sprint 1"`
The board gives the team a visual overview — the digital equivalent of the physical card wall that XP teams used in the 1990s.
7.5.4 When Teams Outgrow GitHub Issues
For larger teams or enterprise environments, tools like Jira offer:
- Advanced reporting: Burndown charts, velocity tracking, forecasting
- Sprint analytics: How much work completed vs. planned
- Workflow customization: Enforce processes, required fields
- Integration with Confluence: Link stories to documentation
The principles remain the same regardless of tool. If you master user stories, task breakdown, and sprint planning with GitHub Issues, you’ll adapt quickly to any other tool.
Further reading: GitHub Issues documentation and Atlassian Jira guides.
7.6 User Story Breakdown
| Story | Priority | Estimated Effort | Dependencies |
|---|---|---|---|
| Database Persistence | High | Medium | Schema design |
| Data Validation | High | Small | Pydantic setup |
| Profile Selection | High | Medium | Database, existing UI |
| Profile Upload | High | Large | Validation, Database |
| Profile Preview | Medium | Small | Upload form |
8. Summary
| Concept | Key Point | Connection to Testing |
|---|---|---|
| Bounded Rationality | We cannot fully specify systems upfront because humans satisfice, not optimize | Tests must evolve as understanding grows; incomplete specs mean incomplete tests |
| Agile Manifesto | 4 values, 12 principles prioritizing working software and responding to change | "Working software" requires tests; change requires regression tests |
| XP Practices | TDD, refactoring, pair programming, continuous integration | TDD is core XP practice; refactoring requires test safety net |
| AI Pair Programming | 55% faster for routine tasks, but 41% higher churn; amplifies existing skills | Tests verify AI suggestions work correctly; you still need to understand the code |
| User Stories | "As a [role], I want [feature], so that [benefit]" with acceptance criteria | Acceptance criteria are executable test cases |
| GitHub Issues | Digital story cards with labels, milestones, and task lists | Link issues to commits and PRs for traceability |
8.1 Key Takeaways
- Traditional models fail because bounded rationality makes complete upfront planning impossible.
- Agile accepts uncertainty and builds processes around short feedback loops.
- XP practices (TDD, refactoring, CI) provide the technical foundation that makes Agile work.
- AI pair programming complements but doesn’t replace human collaboration — use both strategically.
- User stories with acceptance criteria transform requirements into executable test specifications.
- GitHub Issues provides everything you need to track user stories, with tight integration to your code.
9. Reflection Questions
1. Herbert Simon’s insight: How does bounded rationality explain why your Road Profile Viewer requirements kept changing during development? Give a specific example from your team’s experience.
2. Manifesto interpretation: “Working software over comprehensive documentation” — does this mean no documentation? What documentation would you still write for Road Profile Viewer?
3. XP trade-offs: Pair programming requires two developers on one task. When would the benefits (continuous review, knowledge sharing) outweigh the apparent cost of having two people on one computer?
4. AI vs Human pairing: Based on the research presented, when would you choose human pair programming over AI assistance? Give a concrete example from your Road Profile Viewer work.
5. User story practice: Write a user story with acceptance criteria for a feature NOT in the current requirements — for example, “export profile to PDF” or “compare two profiles side-by-side.”
6. GitHub Issues practice: Take one of your user stories and create a GitHub Issue following the structure in Section 7.5.2. Include labels, acceptance criteria as checkboxes, and a task breakdown.
10. Further Reading
10.1 Books
- Kent Beck: Extreme Programming Explained (2nd edition) — The original XP book, still relevant today
- Mike Cohn: User Stories Applied — Practical guide to writing and using user stories
- Herbert A. Simon: Administrative Behavior (1947) — The philosophical foundation for understanding bounded rationality
10.2 Articles and Resources
- Agile Manifesto — The original document
- Agile Principles — The 12 principles behind the manifesto
- Stanford Encyclopedia of Philosophy: Bounded Rationality — Academic overview of Simon’s work
- Wikipedia: Herbert A. Simon — Overview of Simon’s life and contributions
- GitHub Issues documentation — Official GitHub guide
10.3 Tools
- GitHub Issues — Built-in issue tracking, integrated with your repository
- GitHub Projects — Kanban boards for sprint planning
- Jira — Enterprise-grade sprint management (covered in Part 2)
- Linear — Modern, developer-friendly project management
11. What’s Next
In Part 2: Scrum Framework and Real-World Application, we’ll cover:
- The Scrum Framework — roles, events, and artifacts
- Sprint execution — planning, daily standups, reviews, and retrospectives
- Real-world evidence — how the US Military is adopting Agile for mission-critical systems
- Practical application — running a sprint for the Road Profile Viewer
With the philosophy, practices, and user stories from Part 1, you’re ready to learn how Scrum organizes these elements into a repeatable process.