Home

07 Multi-Language Projects Part 1: C++ Development Fundamentals

lecture cpp cmake google-test clang-format clang-tidy compilation build-systems

1. Introduction: Your Python Code Works, But Can It Run on an Embedded System?

Your Road Profile Viewer works perfectly. The Python code is clean, tested, and follows all quality standards you learned in this course. The visualization is beautiful, the ray intersection algorithm is mathematically correct, and the CI pipeline glows green.

Then your advisor asks: β€œCan we run the ray intersection algorithm on the vehicle’s embedded ECU?”

You check the ECU specifications: a 200 MHz processor and 256 KB of RAM.

Your Python code cannot run here.

Not because it’s poorly written, but because Python itself requires an interpreter (typically 10-50 MB), significant RAM for per-object overhead, and an operating system to host it. None of that fits on this ECU.

This is not an unusual situation. In automotive, robotics, aerospace, and scientific computing, Python is often the prototyping language, but performance-critical code runs in C or C++.

THE LANGUAGE SPECTRUM
🐍 Python
  • ✍️ Easy to write
  • 🐒 Slow to run
  • πŸ“¦ High memory
  • πŸ”„ Interpreted
  • 🎭 Dynamic typing
  • ⏸️ GC pauses
  • πŸ“š Rich ecosystem
Great for: prototyping, data science, web backends, automation
⚑ C/C++
  • 🧩 Hard to write
  • πŸš€ Fast to run
  • πŸ’Ύ Low memory
  • βš™οΈ Compiled
  • πŸ”’ Static typing
  • 🎯 Manual memory
  • πŸ”§ Close to hardware
Great for: embedded systems, game engines, operating systems, real-time systems

The question is: How do you bridge these two worlds without rewriting everything?

This two-part lecture series answers that question: Part 1 (this lecture) covers C++ development fundamentals, and Part 2 shows how to bridge the two languages with pybind11.

In this lecture, you will learn how to:

  1. Understand the C++ build process β€” compilation, linking, and why it differs from Python
  2. Use CMake as the modern C++ build system
  3. Apply code quality tools (clang-format, clang-tidy) with the same discipline as Ruff for Python
  4. Write unit tests using Google Test
  5. Generate coverage reports for C++ code

By the end, you’ll have the foundation to write production-quality C++ code that can later be integrated with Python.


2. Learning Objectives

By the end of this lecture, you will be able to:

  1. Explain the difference between compiled and interpreted languages, and why it matters for embedded systems
  2. Set up a C++ project with CMake, including dependencies and build configurations
  3. Apply code quality tools to C++ code (clang-format, clang-tidy) with the same discipline as Ruff for Python
  4. Configure compiler warnings appropriately for different project requirements (safety vs. speed)
  5. Write unit tests for C++ code using Google Test
  6. Generate coverage reports for C++ code using gcov/llvm-cov

2.1 What You Won’t Learn

We won’t teach the C++ language itself in depth, compiler construction, or computer architecture; these topics deserve their own courses. Our focus is on software engineering practices that transfer from Python to C++.


3. Part 1: C++ Development Fundamentals

Before we can integrate C++ with Python, we need to understand how C++ development works. If you’ve only written Python, C++ will feel different in fundamental ways.

4. The Fundamental Difference: Compiled vs. Interpreted

4.1 How Python Executes Code

A deeper look under the hood

In this section, we explore Python’s execution model in more detail than before. Understanding bytecode, the Python Virtual Machine, and the interpreter helps us appreciate the fundamental differences between Python and C++β€”specifically why compiled languages can be 10-100x faster.

For most day-to-day development, what you learned earlier in this courseβ€”using uv for dependency management, understanding virtual environments, writing clean codeβ€”is what matters most. You can be a productive Python developer without knowing how the PVM works.

But when you need to understand why Python is slower, why C++ is used for performance-critical code, or how tools like pybind11 bridge the two worlds, this foundational knowledge becomes essential.

4.1.1 What is an Interpreter?

An interpreter is a program that executes code written in a programming language. Unlike a compiler (which translates the entire program to machine code before running), an interpreter processes code line by line (or statement by statement) at runtime.

The Python Interpreter (CPython)

When people say β€œPython,” they usually mean CPythonβ€”the reference implementation of Python written in C. It’s called CPython because the interpreter itself is written in C, not because it has anything to do with C++.

Other Python implementations exist: PyPy (with a JIT compiler, often much faster), Jython (runs on the JVM), IronPython (runs on .NET), and MicroPython (for microcontrollers).

For this course, we use CPython (the default when you install Python).

4.1.2 Running Python Directly

When you run a Python program:

python main.py

Here’s what happens:

πŸ“„
main.py
Your source code (text file)
β–Ό
🐍
Python Interpreter
Reads your code at runtime (CPython)
β–Ό
πŸ“¦
Bytecode
Intermediate representation (.pyc in __pycache__/)
β–Ό
βš™οΈ
Python Virtual Machine
Executes bytecode instruction by instruction
β–Ό
βœ…
Results

4.1.3 What is Bytecode?

You might wonder: if Python is β€œinterpreted,” why is there a β€œcompilation to bytecode” step? This is a common source of confusion.

Bytecode is an intermediate representation β€” a set of low-level instructions that are easier for the Python Virtual Machine (PVM) to execute than raw source code. Think of it as a β€œsimplified” version of your program that the PVM can process quickly.

The interpreter creates bytecode on first run. When Python executes your code for the first time:

  1. The interpreter reads your .py file
  2. It compiles the source code into bytecode
  3. It saves the bytecode to a .pyc file in the __pycache__/ directory
  4. The PVM executes the bytecode

Bytecode is cached and reused. On subsequent runs, Python optimizes startup:

  1. Python checks if a .pyc file exists for your module
  2. Python compares timestamps: Is the .pyc newer than the .py?
  3. If yes β†’ skip compilation, load bytecode directly (faster startup!)
  4. If no β†’ recompile (your source code changed)
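You can watch this caching step happen with the standard-library py_compile module. The module name and file contents below are just an example:

```python
import pathlib
import py_compile
import tempfile

# Write a tiny module, compile it, and locate the cached bytecode
# file that CPython places in __pycache__/.
tmp = tempfile.mkdtemp()
src = pathlib.Path(tmp) / "demo.py"
src.write_text("x = 1 + 2\n")

# py_compile.compile() returns the path of the generated .pyc file.
pyc_path = pathlib.Path(py_compile.compile(str(src)))
print(pyc_path.parent.name)   # __pycache__
print(pyc_path.suffix)        # .pyc
```

The exact file name encodes the interpreter version (e.g. `demo.cpython-312.pyc`), which is why a `.pyc` built by one Python version is not reused by another.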
πŸš€ First Run
πŸ“„main.py
β†’
βš™οΈCompile
β†’
πŸ“¦main.pyc
β–Ό
▢️Execute
⚑ Subsequent Runs
πŸ“„main.py
β†’
❓.pyc newer?
βœ“ Yes
β–Ό
πŸ“¦Load .pyc
β–Ό
▢️Execute
βœ— No
β–Ό
πŸ”„Recompile
β–Ό
▢️Execute

Why does this matter? It explains why the second run of a script starts faster than the first, and why __pycache__/ directories keep appearing in your projects.

You can inspect bytecode yourself:

import dis

def add(a, b):
    return a + b

dis.dis(add)

Output (Python 3.10 shown; exact opcodes vary between Python versions):

  2           0 LOAD_FAST                0 (a)
              2 LOAD_FAST                1 (b)
              4 BINARY_ADD
              6 RETURN_VALUE

These are the bytecode instructions the PVM actually executes. Each instruction is simple: load a value, add two values, return.

Important distinction:

| Aspect | Python Bytecode | C++ Machine Code |
|---|---|---|
| Executed by | Python Virtual Machine (software) | CPU directly (hardware) |
| Portability | Same bytecode runs on any OS with Python | Different binary for each OS/architecture |
| Speed | Slower (interpreted by PVM) | Faster (native execution) |
| File extension | `.pyc` | `.exe`, `.out`, or no extension |

This is why Python is often called β€œinterpreted” even though there’s a compilation stepβ€”the bytecode still needs the PVM to run, unlike C++ which compiles to native machine code.

4.1.4 What is the Python Virtual Machine (PVM)?

The Python Virtual Machine is not a separate program you installβ€”it’s the core execution engine built into the CPython interpreter. When you install Python, you get the PVM as part of the package.

The PVM is a C program (specifically, it’s the heart of CPython). It consists of:

  1. A bytecode evaluation loop β€” The main function that reads bytecode instructions one by one
  2. A stack β€” Where values are stored during computation
  3. Frame objects β€” Track function calls, local variables, and execution state
  4. Memory management β€” Handles object allocation and garbage collection

How the PVM executes your code:

🐍 Python Virtual Machine
πŸ“¦
Bytecode
.pyc
β†’
πŸ”„
Eval Loop
ceval.c
β†’
⚑
Execute Action
push/pop/call
β–Ό
β–Ό
πŸ“š
Stack
[values]
πŸ’Ύ
Memory
[objects]

The evaluation loop in action:

When you run result = 3 + 5, the PVM does this:

Bytecode instruction      Stack state       Action
─────────────────────     ───────────       ──────
LOAD_CONST 3              [3]               Push 3 onto stack
LOAD_CONST 5              [3, 5]            Push 5 onto stack
BINARY_ADD                [8]               Pop two values, add, push result
STORE_NAME 'result'       []                Pop 8, store in variable 'result'

The PVM is a stack-based virtual machine. Every operation pushes values onto a stack, performs computations, and pops results off.
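The push/pop behavior can be sketched as a toy stack machine. This is a deliberately simplified model (three opcodes, no frames, no error handling), not CPython’s real evaluation loop:

```python
# A toy stack machine mimicking how the PVM executes `result = 3 + 5`.
def run(program):
    stack, variables = [], {}
    for opcode, arg in program:
        if opcode == "LOAD_CONST":
            stack.append(arg)                 # push a constant onto the stack
        elif opcode == "BINARY_ADD":
            right, left = stack.pop(), stack.pop()
            stack.append(left + right)        # pop two values, push their sum
        elif opcode == "STORE_NAME":
            variables[arg] = stack.pop()      # pop the result into a variable
    return variables

program = [
    ("LOAD_CONST", 3),
    ("LOAD_CONST", 5),
    ("BINARY_ADD", None),
    ("STORE_NAME", "result"),
]
print(run(program))  # {'result': 8}
```

Each tuple plays the role of one bytecode instruction; the list `stack` is the robot’s notepad from the analogy below.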

The mental model for students:

When you run a Python script, imagine a tiny robot (the PVM) inside your computer:

  1. The robot receives a list of simple instructions (bytecode)
  2. The robot has a notepad (the stack) where it writes intermediate values
  3. The robot reads one instruction at a time, performs it, and moves to the next
  4. The robot manages memory: creating objects when needed, cleaning them up when unused

This robot runs roughly 10-100x slower than native machine code because every instruction must be dispatched by the eval loop, every operand is a heap-allocated object whose type is checked at runtime, and intermediate values are constantly pushed and popped from the stack.

Why this matters for this lecture:

When we write C++ code and compile it to machine code, we bypass the robot entirely. The CPU executes our instructions directlyβ€”no interpretation, no stack manipulation overhead, no runtime type checking. This is why C++ can be 10-100x faster for computation-heavy tasks.

The actual source code:

The PVM’s core is in Python/ceval.c β€” a giant switch statement with thousands of lines:

// Simplified view of ceval.c
for (;;) {
    switch (opcode) {
        case LOAD_CONST:
            value = constants[oparg];
            PUSH(value);
            break;
        case BINARY_ADD:
            right = POP();
            left = POP();
            result = PyNumber_Add(left, right);
            PUSH(result);
            break;
        // ... hundreds more cases ...
    }
}

This loop runs continuously while your Python program executes.

4.1.5 The Interpreter’s Steps

The interpreter does several things:

  1. Lexical analysis: Breaks source code into tokens
  2. Parsing: Builds an Abstract Syntax Tree (AST)
  3. Compilation to bytecode: Translates AST to bytecode instructions
  4. Execution: The PVM executes bytecode
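You can walk these stages by hand with the standard library: ast.parse performs steps 1-2, compile performs step 3, and eval hands the resulting code object to the PVM (step 4):

```python
import ast

source = "a + b"

tree = ast.parse(source, mode="eval")     # steps 1-2: tokenize and parse
print(type(tree.body).__name__)           # BinOp: the '+' expression node

code = compile(tree, "<demo>", "eval")    # step 3: AST -> bytecode (code object)
result = eval(code, {"a": 3, "b": 5})     # step 4: the PVM executes the bytecode
print(result)                             # 8
```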

4.1.6 Running Python with uv

In this course, we use uv to manage Python environments. When you run:

uv run main.py

Here’s what happens before Python even starts:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   uv run        β”‚  ← uv command
β”‚   main.py       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜
         β”‚
         β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ 1. uv reads pyproject.toml                          β”‚
β”‚ 2. uv checks if virtual environment exists          β”‚
β”‚ 3. uv creates/updates venv if needed                β”‚
β”‚ 4. uv installs missing dependencies                 β”‚
β”‚ 5. uv activates the virtual environment             β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
         β”‚
         β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ python main.py  β”‚  ← Now Python runs (same as above)
β”‚ (in venv)       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Key differences between python main.py and uv run main.py:

| Aspect | `python main.py` | `uv run main.py` |
|---|---|---|
| Environment | Uses whatever Python is in `PATH` | Uses the project's virtual environment |
| Dependencies | Must be installed manually | Automatically installed from `pyproject.toml` |
| Reproducibility | Depends on system state | Consistent across machines |
| Isolation | May conflict with system packages | Isolated virtual environment |
| First run | Fails if dependencies missing | Installs dependencies automatically |

Why this matters for multi-language projects:

When we add C++ code with pybind11, uv run ensures the compiled extension module is installed into the same isolated virtual environment as every other dependency, so builds stay reproducible across machines.

Key characteristics of interpreted execution: no separate build step, the same source runs anywhere an interpreter is available, but the interpreter must be present on the target machine and execution is slower than native code.

4.1.7 Deep Dive Resources

If you want to understand Python’s internals better, good starting points are the official CPython documentation and developer guide, and the CPython source itself, especially the evaluation loop in Python/ceval.c.

4.2 How C++ Executes Code

Prerequisites for this section

This section assumes you’ve written small C++ programs beforeβ€”perhaps a β€œHello World,” a simple loop, or a function that calculates something. We’re not teaching C++ syntax here; this course isn’t a replacement for a dedicated C++ programming course.

Instead, we’re exploring the mechanisms that make C++ fundamentally different from Python: how your source code becomes an executable program that runs directly on hardware. Understanding this process is essential before we can bridge Python and C++ in the next lecture.

4.2.1 What is a Compiler?

A compiler is a programβ€”yes, just another piece of softwareβ€”that translates source code written in a high-level programming language into machine code that a computer’s processor can execute directly.

The fundamental problem compilers solve:

Humans think in abstractions: variables, functions, loops, conditions. CPUs only understand binary instructions: β€œload this value from memory address X,” β€œadd these two registers,” β€œjump to instruction Y if the result is zero.”

Writing programs directly in machine code (or even assembly language) is tedious, error-prone, and architecture-specific. A program written for an Intel CPU won’t run on an ARM processor without being completely rewritten.

The compiler’s job is to bridge this gap:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                                                                     β”‚
β”‚   Human-readable code          Machine-executable code              β”‚
β”‚                                                                     β”‚
β”‚   int add(int a, int b) {      01001000 10001001 11111000           β”‚
β”‚       return a + b;      β†’     00001001 11110000                    β”‚
β”‚   }                            11000011                             β”‚
β”‚                                                                     β”‚
β”‚   (C++ source)                 (x86-64 machine code)                β”‚
β”‚                                                                     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Historical context: The first compiler was developed by Grace Hopper in the early 1950s. Before compilers existed, programmers wrote machine code by handβ€”an incredibly slow and error-prone process. Hopper’s insight was revolutionary: let a program do the translation automatically.

4.2.2 Different C++ Compilers

Unlike Python, where CPython is the dominant implementation, the C++ ecosystem has several major compilers:

| Compiler | Command | Platform | Notes |
|---|---|---|---|
| GCC (GNU Compiler Collection) | `g++` | Linux, macOS, Windows (MinGW) | Open source, widely used in academia and Linux development |
| Clang (LLVM) | `clang++` | Linux, macOS, Windows | Modern, excellent error messages, used by Apple/Google |
| MSVC (Microsoft Visual C++) | `cl.exe` | Windows | Integrated with Visual Studio, dominant on Windows |

Why does this matter?

  1. Portability: Well-written C++ code compiles with any standard-compliant compiler
  2. Behavior differences: Compilers may have slightly different defaults or extensions
  3. Error messages: Clang is known for clearer, more helpful error messages than GCC
  4. Optimization: Different compilers may produce differently optimized code
  5. Platform constraints: Your target platform may dictate the compiler (embedded systems often use specific toolchains)

In this course, we’ll use GCC (g++) because it’s available on all major platforms and is the default on most Linux systems. The concepts apply equally to Clang and MSVC.

4.2.3 The Build Process Overview

When you build and run a C++ program, you use two separate commands:

g++ -o main main.cpp   # Step 1: Compile (creates executable)
./main                  # Step 2: Run (executes the program)

Step 1: The compile command

| Part | Meaning |
|---|---|
| `g++` | The compiler command (GCC's C++ compiler). You could also use `clang++` for Clang. |
| `-o main` | The output flag (`-o`) followed by the desired executable name (`main`). Without this, GCC creates a file called `a.out` by default. |
| `main.cpp` | The source file to compile. This is your C++ code. |

Step 2: Running the executable

After compilation succeeds, you have a new file called main (or main.exe on Windows). This is a standalone executableβ€”native machine code that runs directly on your CPU.

| Part | Meaning |
|---|---|
| `./main` | Run the executable. The `./` prefix tells the shell to look in the current directory (Linux/macOS). On Windows, you'd just type `main.exe` or `.\main.exe`. |

Important: Unlike Python, where you run python script.py every time, a compiled C++ program doesn’t need the compiler to run. Once compiled, you can copy main to another machine (with the same OS/architecture) and run it directlyβ€”no compiler or development tools required.

But wait—what about header files?

If you’ve written C++ before, you know that projects typically have header files (.hpp or .h) that declare functions, classes, and types. Where do they fit in? They never appear in the compile command: the preprocessor pastes their contents into each .cpp file that #includes them, so headers are compiled as part of the source files that use them (more on this in Section 5.1).

There are many more compiler options:

The simple command above hides a lot of complexity. In real projects, you’ll use additional flags such as -Wall -Wextra (enable warnings), -std=c++20 (select the language standard), -O2 (optimize), -g (include debug information), and -I (add include directories).
For now, let’s focus on the big picture. This single command actually triggers a multi-stage process:

πŸ“„
main.cpp
Your source code (text file)
β–Ό
πŸ“‹
Preprocessor
Handles #include, #define (text substitution)
β–Ό
⚑
Compiler
Converts C++ to machine code (g++, clang)
β–Ό
πŸ“¦
Object Files
Binary code for each source file (.o, .obj)
β–Ό
πŸ”—
Linker
Combines object files, resolves references
β–Ό
πŸš€
Executable
Native machine code (runs directly on CPU)
β–Ό
βœ…
Results

Key characteristics:

4.3 Why This Matters for Embedded Systems

| Aspect | Python | C++ |
|---|---|---|
| Runtime requirements | Python interpreter (10-50 MB) | None (standalone binary) |
| Memory overhead | High (objects have metadata) | Low (raw data) |
| Execution speed | 10-100x slower | Native speed |
| Startup time | Slow (interpreter init) | Fast (immediate execution) |
| Predictability | GC pauses can cause jitter | Deterministic timing |

For the 200 MHz ECU with 256 KB RAM, Python simply won’t fit. But a compiled C++ program can run in kilobytes of RAM with microsecond-level timing precision.


5. The Build Process: From Source to Executable

A note on scope

This is a software engineering courseβ€”not a course on compilers or computer architecture. We’re not trying to turn you into compiler engineers or teach you how to write your own compiler.

Instead, this section consolidates knowledge you’ve likely encountered in other courses or through self-study. The goal is to establish a common mental model so that when we discuss multi-language projects, build systems, and integration challenges, everyone is on the same page.

If some of this is review, greatβ€”use it to reinforce your understanding. If it’s new, don’t worry about memorizing every detail. Focus on the big picture: how source code becomes an executable, and why that matters when combining Python and C++.

Let’s walk through each step of the C++ build process.

5.1 Preprocessing

The preprocessor handles directives that start with #:

// main.cpp
#include <iostream>           // ← Include standard library header
#include "geometry.hpp"       // ← Include our project header

#define PI 3.14159265359      // ← Text substitution

int main() {
    std::cout << "Pi = " << PI << std::endl;
    return 0;
}

The preprocessor:

  1. Copies the entire content of <iostream> into your file
  2. Copies the entire content of geometry.hpp into your file
  3. Replaces every PI with 3.14159265359

The output is a single, expanded file with no #include or #define left.
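The text-substitution idea can be sketched in a few lines. This is a toy model that only handles `#include "..."` and simple `#define` directives; the real preprocessor also handles macros with arguments, conditionals, include guards, and much more:

```python
import re

def preprocess(source, headers):
    """Toy preprocessor: paste in quoted includes, apply #define substitutions."""
    defines = {}
    output = []
    for line in source.splitlines():
        if m := re.match(r'#include\s+"(.+)"', line):
            output.append(headers[m.group(1)])      # paste header text in place
        elif m := re.match(r"#define\s+(\w+)\s+(.+)", line):
            defines[m.group(1)] = m.group(2)        # record a text substitution
        else:
            for name, value in defines.items():
                line = re.sub(rf"\b{name}\b", value, line)
            output.append(line)
    return "\n".join(output)

headers = {"geometry.hpp": "double ray_line(double a, double b);"}
source = '#include "geometry.hpp"\n#define PI 3.14159265359\ndouble tau = 2 * PI;'
print(preprocess(source, headers))
```

The output contains no `#include` or `#define` lines, only the expanded text, which is exactly what the compiler proper receives.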

5.2 Compilation

Recall from Section 4.2.3: The Build Process Overview that compilation is one stage in a three-stage process: preprocessing β†’ compilation β†’ linking. Here we examine the compilation stage more closely.

The compiler converts preprocessed C++ code to object codeβ€”machine instructions for a specific processor architecture (x86-64, ARM, etc.).

The compile-only command:

g++ -c main.cpp -o main.o

The -c flag tells the compiler: β€œcompile only, don’t link.” This produces an object file (main.o) rather than an executable.

What the compiler does internally:

  1. Lexical analysis: Breaks source code into tokens (keywords, identifiers, operators)
  2. Parsing: Builds an Abstract Syntax Tree (AST) representing the code structure
  3. Semantic analysis: Checks types, resolves names, enforces language rules
  4. Optimization: Improves performance (if -O1, -O2, or -O3 flags are used)
  5. Code generation: Outputs machine instructions for the target architecture

This is more complex than Python’s bytecode compilation because the output must run directly on the CPUβ€”there’s no virtual machine to interpret it.

What’s inside an object file (.o):

This produces main.o, a binary file containing:

  1. Machine code for every function defined in the file
  2. A symbol table: the names this file defines and the names it references but does not define
  3. Relocation records, so the linker can patch in final memory addresses later

Why object files are NOT executable:

Object files may reference functions defined in other files. For example, main.o might call calculate_ray_line() which is defined in geometry.o. The compiler doesn’t know where that function will be in memoryβ€”only the linker resolves these cross-file references.

πŸ”— Before Linking: Unresolved References
πŸ“„ main.o
main()
call ??? (unknown)
πŸ“„ geometry.o
calculate_ray_line()
βœ“ defined here
β–Ό
πŸ”§ Linker resolves "???" to actual address

This separation allows incremental builds: if you change geometry.cpp, only geometry.o needs to be recompiled. The unchanged main.o is reused.

5.3 Linking

After compiling each source file to an object file (using g++ -c), the linker combines them into a single executable.

The complete manual workflow:

Here’s how you build a multi-file project step by step:

# Step 1: Compile each source file to an object file
g++ -c main.cpp -o main.o           # Creates main.o
g++ -c geometry.cpp -o geometry.o   # Creates geometry.o

# Step 2: Link all object files into an executable
g++ main.o geometry.o -o main       # Creates executable 'main'

# Step 3: Run the executable
./main                              # Execute the program

Notice that in Step 2, we use g++ again but without the -c flag. When you pass .o files to g++, it knows to invoke the linker rather than the compiler.

What the linker does:

  1. Reads all object files: Loads machine code and symbol tables from each .o file
  2. Resolves symbol references: Matches function calls to their definitions
    • main.o calls calculate_ray_line() β†’ linker finds it in geometry.o
  3. Assigns final memory addresses: Decides where each function and variable will live in memory
  4. Writes the executable: Combines everything into a single binary file
βš™οΈ Linking: Combining Object Files
πŸ“„ main.o
main()
calls calculate_ray()
β†’
πŸ“„ geometry.o
calculate_ray()
β–Ό
🎯
main (executable)
βœ“ All code linked βœ“ All addresses resolved

Common linker errors:

When linking fails, the error messages come from the linker, not the compiler. Here are the most common:

| Error | Meaning | Typical Cause |
|---|---|---|
| `undefined reference to 'function_name'` | Linker can't find the function's definition | Forgot to compile/link the `.cpp` file that defines it, or misspelled the function name |
| `multiple definition of 'function_name'` | Same function defined in multiple object files | Function defined in a header file (should be declared only), or same `.cpp` linked twice |
| `undefined reference to 'main'` | No `main()` function found | Forgot to include the file with `main()`, or misspelled `main` |
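Symbol resolution, and the origin of the first two errors above, can be modeled with a small sketch. The object-file contents here are invented for illustration:

```python
def link(object_files):
    """Toy linker: match every referenced symbol to exactly one definition."""
    defined = {}
    for obj_name, obj in object_files.items():
        for symbol in obj["defines"]:
            if symbol in defined:
                raise RuntimeError(f"multiple definition of '{symbol}'")
            defined[symbol] = obj_name
    for obj_name, obj in object_files.items():
        for symbol in obj["references"]:
            if symbol not in defined:
                raise RuntimeError(f"undefined reference to '{symbol}'")
    return defined

objects = {
    "main.o":     {"defines": ["main"], "references": ["calculate_ray_line"]},
    "geometry.o": {"defines": ["calculate_ray_line"], "references": []},
}
print(link(objects))  # maps each symbol to the object file defining it
```

Remove `geometry.o` from the dictionary and the sketch fails exactly like the real linker: `undefined reference to 'calculate_ray_line'`.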

Why use separate compile and link steps?

In section 5.2, we mentioned incremental builds. Here’s the practical benefit:

# Initial build: compile everything
g++ -c main.cpp -o main.o           # 2 seconds
g++ -c geometry.cpp -o geometry.o   # 2 seconds
g++ main.o geometry.o -o main       # 0.5 seconds
# Total: 4.5 seconds

# After changing ONLY geometry.cpp:
g++ -c geometry.cpp -o geometry.o   # 2 seconds (recompile changed file)
g++ main.o geometry.o -o main       # 0.5 seconds (relink)
# Total: 2.5 seconds (main.o reused!)

For large projects with hundreds of files, this saves enormous amounts of time. Build systems like Make and CMake automate this dependency tracking.

The shortcut (for small projects):

For simple projects, you can skip the manual steps and let g++ handle everything:

# This does preprocessing, compilation, AND linking in one command
g++ main.cpp geometry.cpp -o main

This is convenient but doesn’t give you incremental buildsβ€”every file is recompiled every time.

5.4 Deep Dive Resources

The build process is a rich topic that spans compilers, operating systems, and computer architecture. If you want to go deeper, the GCC manual and the GNU binutils documentation (which covers nm and objdump, used below) are solid starting points.

Understanding Object File Formats:

Different operating systems use different executable formats:

| Format | Platform | Tools to Inspect |
|---|---|---|
| ELF (Executable and Linkable Format) | Linux, BSD, embedded systems | `readelf`, `objdump`, `nm` |
| Mach-O | macOS, iOS | `otool`, `nm`, `lipo` |
| PE/COFF (Portable Executable) | Windows | `dumpbin` (MSVC), `objdump` (MinGW) |

Try it yourself:

# Compile with debug info
g++ -c -g main.cpp -o main.o

# List symbols in the object file
nm main.o

# Show section headers and sizes
objdump -h main.o

# Disassemble to see the actual machine code
objdump -d main.o

These commands let you peek inside object files and see exactly what the compiler produced.


6. Make and Makefiles: The Traditional Build Tool

Before we introduce CMake, let’s understand why build automation tools were invented in the first place. This context helps you appreciate what CMake doesβ€”and why you shouldn’t write Makefiles by hand for new projects.

Note on build systems: CMake is one of the most widely adopted build systems in the C++ ecosystem today, used by major projects like LLVM, Qt, and OpenCV. However, it’s not the only optionβ€”alternatives like Bazel (Google), Meson, and xmake are gaining traction. We focus on CMake because of its widespread industry adoption and excellent tooling support.

6.1 The Problem: Manual Compilation Doesn’t Scale

In section 5.3, we showed the manual workflow for a two-file project:

g++ -c main.cpp -o main.o
g++ -c geometry.cpp -o geometry.o
g++ main.o geometry.o -o main

Three commands. Manageable. But what happens as your project grows?

A realistic small project (10 files):

g++ -c -Wall -Wextra -std=c++20 -I./include src/main.cpp -o build/main.o
g++ -c -Wall -Wextra -std=c++20 -I./include src/geometry.cpp -o build/geometry.o
g++ -c -Wall -Wextra -std=c++20 -I./include src/road_profile.cpp -o build/road_profile.o
g++ -c -Wall -Wextra -std=c++20 -I./include src/camera.cpp -o build/camera.o
g++ -c -Wall -Wextra -std=c++20 -I./include src/ray_tracer.cpp -o build/ray_tracer.o
g++ -c -Wall -Wextra -std=c++20 -I./include src/visualization.cpp -o build/visualization.o
g++ -c -Wall -Wextra -std=c++20 -I./include src/config.cpp -o build/config.o
g++ -c -Wall -Wextra -std=c++20 -I./include src/utils.cpp -o build/utils.o
g++ -c -Wall -Wextra -std=c++20 -I./include src/file_io.cpp -o build/file_io.o
g++ -c -Wall -Wextra -std=c++20 -I./include src/math_helpers.cpp -o build/math_helpers.o
g++ build/main.o build/geometry.o build/road_profile.o build/camera.o \
    build/ray_tracer.o build/visualization.o build/config.o build/utils.o \
    build/file_io.o build/math_helpers.o -o build/road_profile_viewer

That’s 11 commands you need to type every time you rebuild. And we haven’t even added external libraries, separate debug and release configurations, or test executables.

The real nightmare: dependency tracking

Suppose you change geometry.hpp. Which files need recompilation? Every .cpp file that includes itβ€”directly or indirectly. Can you remember which ones? Can you trust yourself to recompile all of them and only them?

geometry.hpp is included by:
β”œβ”€β”€ geometry.cpp (direct)
β”œβ”€β”€ ray_tracer.cpp (direct)
β”œβ”€β”€ visualization.cpp (includes ray_tracer.hpp, which includes geometry.hpp)
└── main.cpp (includes visualization.hpp, which includes...)

If you forget to recompile visualization.cpp, your program might crash, silently produce wrong results, or exhibit undefined behavior, because the stale object file still assumes the old layout of whatever geometry.hpp declares.

This is why Make was invented.

6.2 What is Make?

make is a build automation tool created in 1976 at Bell Labs. It solves the two fundamental problems of manual compilation:

  1. Automation: You define the build rules once, then run a single command
  2. Incremental builds: Make tracks file modification times and only rebuilds what changed

The core insight: Make treats building software as a dependency graph. Each file depends on other files. When a file changes, everything that depends on it must be rebuilt.

              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
              β”‚    main     β”‚  (executable)
              β”‚  (target)   β”‚
              β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜
                     β”‚ depends on
         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
         β”‚           β”‚           β”‚
         β–Ό           β–Ό           β–Ό
    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚ main.o  β”‚ β”‚geometry.oβ”‚ β”‚ utils.o β”‚  (object files)
    β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜
         β”‚           β”‚           β”‚
         β–Ό           β–Ό           β–Ό
    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚main.cpp β”‚ β”‚geometry β”‚ β”‚utils.cppβ”‚  (source files)
    β”‚         β”‚ β”‚.cpp/.hppβ”‚ β”‚         β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

When geometry.hpp changes:

  1. Make sees geometry.o depends on it β†’ recompile geometry.cpp
  2. Make sees main depends on geometry.o β†’ relink
  3. Make sees utils.o doesn’t depend on it β†’ skip (time saved!)
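Make’s rebuild decision boils down to a timestamp comparison, which we can sketch in a few lines of Python (the file names are illustrative, and we set modification times explicitly so the example is deterministic):

```python
import os
import pathlib
import tempfile
import time

def needs_rebuild(target, dependencies):
    """Make's core rule: rebuild if the target is missing or any
    dependency has a newer modification time than the target."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(dep) > target_mtime for dep in dependencies)

tmp = pathlib.Path(tempfile.mkdtemp())
src = tmp / "geometry.cpp"
obj = tmp / "geometry.o"
src.write_text("// source")
obj.write_text("// object")

now = time.time()
os.utime(src, (now, now))
os.utime(obj, (now + 10, now + 10))    # object newer than source: up to date
print(needs_rebuild(obj, [src]))       # False

os.utime(src, (now + 20, now + 20))    # "editing" the source file
print(needs_rebuild(obj, [src]))       # True
```

Everything else Make does (reading rules, walking the graph, running commands) is built on this one check.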

6.3 Basic Makefile Structure

# Makefile

# Compiler settings
CXX = g++
CXXFLAGS = -Wall -Wextra -std=c++20

# Target: dependencies
#     command to build target

main: main.o geometry.o
	$(CXX) $(CXXFLAGS) main.o geometry.o -o main

main.o: main.cpp geometry.hpp
	$(CXX) $(CXXFLAGS) -c main.cpp -o main.o

geometry.o: geometry.cpp geometry.hpp
	$(CXX) $(CXXFLAGS) -c geometry.cpp -o geometry.o

clean:
	rm -f *.o main

How it works:

  1. You run make main
  2. Make checks if main is older than main.o or geometry.o
  3. If so, it recursively checks those dependencies
  4. It rebuilds only what’s necessary

6.4 Why Makefiles Are Not Enough

Make was revolutionary in 1976. But software development has changed dramatically since then. Here’s why writing Makefiles by hand is problematic for modern projects:

Problem 1: Not portable across operating systems

Make uses shell commands directly. This Makefile works on Linux/macOS:

clean:
	rm -f *.o main

But on Windows, there’s no rm command. You’d need:

clean:
	del /Q *.o main.exe

Now you need two different Makefiles, or complex conditional logic. And that’s just for a simple clean target.

Problem 2: Manual header dependency tracking

Remember our rule?

geometry.o: geometry.cpp geometry.hpp
	$(CXX) $(CXXFLAGS) -c geometry.cpp -o geometry.o

What if geometry.hpp includes math_types.hpp? You need to update the rule:

geometry.o: geometry.cpp geometry.hpp math_types.hpp
	$(CXX) $(CXXFLAGS) -c geometry.cpp -o geometry.o

Now imagine 50 source files, each including 5-10 headers, some of which include other headers. You must manually track every transitive dependency. If you forget one, changing a header won’t trigger recompilation, and you’ll get mysterious bugs.

(There are workarounds using GCC’s -MMD flag to auto-generate dependencies, but they’re awkward and require additional Makefile complexity.)
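A minimal sketch of that workaround, assuming GCC or Clang: -MMD makes the compiler write a .d file listing every header each object actually depends on, -MP adds phony targets so deleted headers don’t break the build, and -include pulls the generated files back into the Makefile:

```makefile
CXX      = g++
CXXFLAGS = -Wall -Wextra -std=c++20 -MMD -MP

SRCS = main.cpp geometry.cpp
OBJS = $(SRCS:.cpp=.o)
DEPS = $(OBJS:.o=.d)

main: $(OBJS)
	$(CXX) $(CXXFLAGS) $(OBJS) -o main

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@

clean:
	rm -f $(OBJS) $(DEPS) main

# Pull in the auto-generated header dependencies (silently ignored if missing)
-include $(DEPS)
```

The compiler now maintains the header lists for you, but at the cost of extra pattern-rule and -include machinery.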

Problem 3: Verbose and repetitive

Our simple Makefile has explicit rules for each source file. A real project with 100 source files would need… 100 rules. You can use β€œpattern rules” to reduce repetition:

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@

But now you lose explicit dependency tracking. And pattern rules have subtle gotchas that confuse even experienced developers.

Problem 4: Different compilers need different flags

What if your project needs to support both GCC and Clang on Linux, plus MSVC on Windows?

# This gets ugly fast
ifeq ($(CXX),g++)
    WARNINGS = -Wall -Wextra
else ifeq ($(CXX),clang++)
    WARNINGS = -Wall -Wextra
else ifeq ($(CXX),cl)
    WARNINGS = /W4
endif

Now multiply this by every compiler flag category (optimization, debugging, standards compliance…).

Problem 5: No standard way to find external libraries

Want to use OpenCV in your project? On Linux, the headers might be in /usr/include/opencv4. On macOS with Homebrew, they’re in /opt/homebrew/include/opencv4. On Windows, who knows?

# Hardcoded paths that break on other machines
OPENCV_INCLUDE = /usr/include/opencv4
OPENCV_LIBS = -lopencv_core -lopencv_imgproc

This Makefile works on your machine but fails on your colleague’s laptop.

Problem 6: No IDE integration

Modern IDEs (VS Code, CLion, Visual Studio) can’t read Makefiles to understand your project structure. They can’t provide:

  • Accurate code completion
  • Go-to-definition across files
  • Inline error and warning highlighting
  • Project-wide refactoring

The bottom line:

Make solved the 1976 problem brilliantly. But modern C++ development needs:

  • Portability across operating systems
  • Automatic header dependency tracking
  • Concise, declarative project descriptions
  • Compiler-independent configuration
  • A standard way to find external libraries
  • IDE integration

This is why CMake was createdβ€”and why virtually all modern C++ projects use it.

6.5 Further Reading

If you want to understand Make more deeply (useful for maintaining legacy projects):

Enter CMake.


7. CMake: Cross-Platform Build Configuration

In Section 6.4, we identified six fundamental problems with Makefiles:

  1. Not portable across operating systems
  2. Manual header dependency tracking
  3. Verbose and repetitive
  4. Different compilers need different flags
  5. No standard way to find external libraries
  6. No IDE integration

CMake was designed specifically to solve these problems. Let’s see how.

7.1 What is CMake?

CMake is a meta-build system. It doesn’t compile your code directlyβ€”instead, it generates build files for other tools:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                        CMakeLists.txt                               β”‚
β”‚              (one file, works on all platforms)                     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                          β”‚
                          β”‚  cmake  (reads CMakeLists.txt)
                          β”‚
        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
        β”‚                 β”‚                 β”‚
        β–Ό                 β–Ό                 β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   Makefile    β”‚ β”‚  Ninja files  β”‚ β”‚ Visual Studio β”‚
β”‚   (Linux)     β”‚ β”‚  (Linux/Mac)  β”‚ β”‚   project     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜
        β”‚                 β”‚                 β”‚
        β–Ό                 β–Ό                 β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚     make      β”‚ β”‚    ninja      β”‚ β”‚   MSBuild     β”‚
β”‚  (compiles)   β”‚ β”‚  (compiles)   β”‚ β”‚  (compiles)   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

What is Ninja?

You’ll notice β€œNinja” in the diagram. Ninja is a small, fast build system created by a Google engineer in 2012. While Make dates back to 1976 and has many features, Ninja focuses on one thing: speed.

| Aspect | Make | Ninja |
|---|---|---|
| Design goal | General-purpose build automation | Maximum build speed |
| Human-readable files | Yes (Makefiles are meant to be edited) | No (Ninja files are generated, not hand-written) |
| Startup time | ~100-500 ms for large projects | ~10-50 ms (roughly 10x faster) |
| Parallel builds | Requires the make -j flag | Parallel by default |
| Typical use | Standalone or with CMake | Always with a generator (CMake, Meson, gn) |

Why does speed matter? In large projects (millions of lines of code), Make can take seconds just to determine that nothing needs rebuilding. Ninja does this in milliseconds. For iterative developmentβ€”change one line, rebuild, testβ€”this adds up quickly.

Recommendation: For new projects, use Ninja as your CMake generator when available. On Linux/macOS: cmake -G Ninja ... On Windows with Visual Studio, the default generator is usually fine.

You write one CMakeLists.txt file, and CMake generates the appropriate build system for whatever platform you’re on.

7.2 How CMake Solves Our Problems

For Python Developers: CMakeLists.txt is Like pyproject.toml

If you’re coming from Python, you already understand the concept of a project definition file. Here’s how the road-profile-viewer’s pyproject.toml maps to an equivalent C++ CMakeLists.txt:

| Concept | pyproject.toml (Python) | CMakeLists.txt (C++) |
|---|---|---|
| Project metadata | `[project]` with `name = "road-profile-viewer"`, `version = "0.1.0"` | `project(road_profile_viewer VERSION 1.0.0 LANGUAGES CXX)` |
| Language version | `requires-python = ">=3.12"` | `set(CMAKE_CXX_STANDARD 20)` plus `set(CMAKE_CXX_STANDARD_REQUIRED ON)` |
| Dependencies | `dependencies = ["numpy>=1.26.0", "dash>=2.14.0"]` | `find_package(OpenCV REQUIRED)` or `FetchContent_Declare(json ...)` |
| Build system | `[build-system]` with `requires = ["uv_build"]`, `build-backend = "uv_build"` | `cmake_minimum_required(VERSION 3.16)` (CMake generates Makefiles, Ninja, VS projects) |
| Entry point | `[project.scripts]` with `road-profile-viewer = "..."` | `add_executable(main src/main.cpp)` |
| Code quality tools | `[tool.ruff]`, `[tool.pyright]` | `.clang-format`, `.clang-tidy` (separate config files) |
| Dev dependencies | `[dependency-groups]` with `dev = ["pytest", "ruff"]` | `FetchContent_Declare(googletest ...)` (often conditionally included) |

The key insight: CMakeLists.txt serves the same purpose as pyproject.tomlβ€”it’s the single source of truth for how your project is built, what it depends on, and how it should be configured. The syntax is different, but the concepts map directly.

Now let’s see how CMake solves each of the Make problems we identified in Section 6.4.

Problem 1: Not portable across operating systems

Remember our Makefile clean target?

# Makefile (Linux/macOS)           # Makefile (Windows)
clean:                              clean:
    rm -f *.o main                      del /Q *.o main.exe

With CMake, you describe what to build, not how to build it:

# CMakeLists.txt (works everywhere!)
add_executable(main src/main.cpp src/geometry.cpp)

CMake knows that on Windows it should generate main.exe, use cl.exe for compilation, and create a Visual Studio solution. On Linux, it generates a Makefile that uses g++ and creates main. You never write platform-specific commands.

Problem 2: Manual header dependency tracking

In Make, we had to manually list every header:

# Makefile - YOU must track all headers manually
geometry.o: geometry.cpp geometry.hpp math_types.hpp utils.hpp
    $(CXX) $(CXXFLAGS) -c geometry.cpp -o geometry.o

CMake automatically scans your source files and tracks dependencies:

# CMakeLists.txt - CMake handles dependencies automatically
add_library(geometry src/geometry.cpp)
target_include_directories(geometry PUBLIC include)

When you change math_types.hpp, CMake (via the generated build system) knows exactly which files need recompilation. You never manually maintain dependency lists.

Problem 3: Verbose and repetitive

In Section 6.1, our 10-file Makefile required 11 explicit commands. Even for our simpler road-profile-viewer C++ port (currently just geometry.cpp and main.cpp), the Makefile approach is verbose:

# Makefile for road-profile-viewer
CXX = g++
CXXFLAGS = -Wall -Wextra -std=c++20 -I./include

geometry.o: src/geometry.cpp include/geometry.hpp
	$(CXX) $(CXXFLAGS) -c src/geometry.cpp -o geometry.o

main.o: src/main.cpp include/geometry.hpp
	$(CXX) $(CXXFLAGS) -c src/main.cpp -o main.o

main: main.o geometry.o
	$(CXX) main.o geometry.o -o main

clean:
	rm -f *.o main

With CMake, the same project is described declaratively:

# CMakeLists.txt - same project, much cleaner
add_library(geometry_core src/geometry.cpp)
target_include_directories(geometry_core PUBLIC include)

add_executable(main src/main.cpp)
target_link_libraries(main PRIVATE geometry_core)

Four lines instead of twelve. And as your project grows to 10, 50, or 100 files, CMake scales gracefullyβ€”you just add files to the appropriate add_library() or add_executable() command.
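For instance, if the port later gains a hypothetical src/ray_tracer.cpp, the CMake side grows by exactly one line, with no new rules and no dependency edits:

```cmake
add_library(geometry_core
    src/geometry.cpp
    src/ray_tracer.cpp  # hypothetical new file: a one-line change
)
```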

Problem 4: Different compilers need different flags

Remember the ugly conditionals for GCC vs. Clang vs. MSVC?

# Makefile - manual compiler detection
ifeq ($(CXX),g++)
    WARNINGS = -Wall -Wextra
else ifeq ($(CXX),cl)
    WARNINGS = /W4
endif

CMake provides β€œgenerator expressions” that adapt to the compiler automatically:

# CMakeLists.txt - CMake handles compiler differences
target_compile_options(geometry PRIVATE
    $<$<CXX_COMPILER_ID:GNU,Clang>:-Wall -Wextra -Wpedantic>
    $<$<CXX_COMPILER_ID:MSVC>:/W4>
)

Or even simpler, CMake’s modern approach uses abstract properties:

# Let CMake choose appropriate flags for the standard
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

CMake knows that GCC needs -std=c++20, Clang needs -std=c++20, and MSVC needs /std:c++20.

Problem 5: No standard way to find external libraries

In Make, we hardcoded paths that broke on other machines:

# Makefile - hardcoded paths (breaks on colleague's machine)
OPENCV_INCLUDE = /usr/include/opencv4
OPENCV_LIBS = -lopencv_core -lopencv_imgproc

CMake has a package discovery system:

# CMakeLists.txt - works on any machine with OpenCV installed
find_package(OpenCV REQUIRED)
target_link_libraries(my_app PRIVATE ${OpenCV_LIBS})

find_package() searches standard locations on each platform:

  • Linux: /usr/lib, /usr/local/lib, pkg-config paths
  • macOS: /opt/homebrew/lib, /usr/local/lib, framework paths
  • Windows: Program Files, vcpkg registry, environment variables

Your colleague runs cmake .. and it just works.
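find_package() can also enforce a minimum version and request only specific components. A sketch, assuming a project that needs OpenCV’s core and imgproc modules (the version number is illustrative):

```cmake
# Fail at configure time if OpenCV older than 4.5 (or none at all) is found
find_package(OpenCV 4.5 REQUIRED COMPONENTS core imgproc)

add_executable(my_app src/main.cpp)
target_link_libraries(my_app PRIVATE ${OpenCV_LIBS})
```

Failing early at configure time beats a cryptic linker error halfway through the build.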

Problem 6: No IDE integration

Modern IDEs understand CMake natively:

  • VS Code: via the CMake Tools extension
  • CLion: uses CMake as its native project model
  • Visual Studio: opens CMake projects directly ("Open Folder")

This is because CMake can generate a compile_commands.json file that tells IDEs:

  • Which compiler flags each source file is built with
  • Which include directories to search
  • Which preprocessor definitions are active

# Generate compile_commands.json for IDE integration
cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON ..

Putting It All Together: The Complete CMakeLists.txt

Now that we’ve seen how CMake solves each problem, here’s the complete CMakeLists.txt for our road-profile-viewer C++ port. This is the file you’ll create in your project:

# CMakeLists.txt for road-profile-viewer C++ port
cmake_minimum_required(VERSION 3.16)
project(road_profile_viewer_cpp VERSION 1.0.0 LANGUAGES CXX)

# C++ standard settings
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)

# Create a library from geometry code
add_library(geometry_core
    src/geometry.cpp
)

# Specify where to find header files
target_include_directories(geometry_core PUBLIC
    ${CMAKE_CURRENT_SOURCE_DIR}/include
)

# Create the main executable
add_executable(main
    src/main.cpp
)

# Link the library to the executable
target_link_libraries(main PRIVATE geometry_core)

Compare this to what we would need in Make:

| Aspect | Makefile | CMakeLists.txt |
|---|---|---|
| Lines of code | ~15 lines (with clean target) | ~15 lines, but more declarative |
| Adding a new file | Add rule, add dependencies, update link step | Add to `add_library()` |
| Changing compiler | Update flags manually | CMake adapts automatically |
| Cross-platform | Write separate Makefiles | Same file works everywhere |
| IDE support | Manual configuration | Automatic via compile_commands.json |

This CMakeLists.txt does everything our Makefile did, plus handles dependencies, cross-platform builds, and IDE integrationβ€”all from one file that will grow gracefully as the project expands.

7.3 The Workflow: Make vs. CMake

Let’s visualize the difference in developer experience:

With Make (the old way):

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Developer changes geometry.hpp                                     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                          β”‚
                          β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  "Which files depend on geometry.hpp?"                              β”‚
β”‚  Developer must MANUALLY remember or check                          β”‚
β”‚  - geometry.cpp? Yes                                                β”‚
β”‚  - ray_tracer.cpp? Probably...                                      β”‚
β”‚  - main.cpp? Maybe indirectly?                                      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                          β”‚
                          β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Hope you remembered correctly, or face mysterious runtime bugs     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

With CMake (the modern way):

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Developer changes geometry.hpp                                     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                          β”‚
                          β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Developer runs: cmake --build build                                β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                          β”‚
                          β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  CMake AUTOMATICALLY:                                               β”‚
β”‚  βœ“ Detects geometry.hpp changed                                     β”‚
β”‚  βœ“ Finds all files that include it (directly or transitively)      β”‚
β”‚  βœ“ Recompiles only those files                                      β”‚
β”‚  βœ“ Re-links the executable                                          β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The mental load shifts from β€œwhat do I need to rebuild?” to simply β€œrebuild” and trusting the tool.

7.4 CMakeLists.txt Structure

Here’s a basic CMake configuration:

# CMakeLists.txt

# Minimum CMake version required
cmake_minimum_required(VERSION 3.16)

# Project name and language
project(road_profile_viewer_cpp VERSION 1.0.0 LANGUAGES CXX)

# C++ standard
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)

# Create a library from our source files
add_library(geometry_core
    src/geometry.cpp
)

# Specify where to find header files
target_include_directories(geometry_core PUBLIC
    ${CMAKE_CURRENT_SOURCE_DIR}/include
)

# Create an executable
add_executable(main
    src/main.cpp
)

# Link the library to the executable
target_link_libraries(main PRIVATE geometry_core)

7.5 CMake Commands Explained

Let’s break down the key CMake commands:

cmake_minimum_required(VERSION 3.16)

  • Declares the oldest CMake version this file supports. Older CMake stops with a clear error instead of failing mysteriously partway through.

project(name VERSION x.y.z LANGUAGES CXX)

  • Names the project, records its version, and enables the C++ toolchain (CXX means C++).

set(CMAKE_CXX_STANDARD 20)

  • Requests C++20. Combined with set(CMAKE_CXX_STANDARD_REQUIRED ON), configuration fails if the compiler cannot provide it.

add_library(name source_files...)

  • Defines a library target built from the listed source files.

target_include_directories(target PUBLIC dirs...)

  • Adds header search paths to a target. PUBLIC means targets that link against this one inherit the paths.

add_executable(name source_files...)

  • Defines an executable target.

target_link_libraries(target PRIVATE libs...)

  • Links libraries into a target. PRIVATE keeps the dependency out of the target’s public interface.

7.6 Building with CMake

The typical CMake workflow:

# 1. Create a build directory (out-of-source build)
mkdir build
cd build

# 2. Generate build files
cmake ..

# 3. Build the project
cmake --build .

# Or, on Unix with Make:
make

# Or, with Ninja (faster):
cmake -G Ninja ..
ninja

Why out-of-source builds?

  • Compiled artifacts stay out of your source tree (you can .gitignore the entire build/ directory)
  • Deleting build/ gives you a guaranteed clean rebuild
  • You can keep several build directories side by side (e.g., Debug and Release)

7.7 CMake Resources

CMake has excellent documentation and a large ecosystem. Here are carefully selected resources to go deeper:

Official Documentation:

Books:

Video Resources:

Modern CMake Best Practices:

The CMake ecosystem has evolved significantly. β€œModern CMake” (3.0+) emphasizes:

| Old CMake (avoid) | Modern CMake (prefer) |
|---|---|
| `include_directories()` | `target_include_directories()` |
| `add_definitions()` | `target_compile_definitions()` |
| `link_libraries()` | `target_link_libraries()` |
| Global variables | Target properties (PUBLIC/PRIVATE/INTERFACE) |

The key insight: always use target_* commands. They make dependencies explicit and help CMake understand your project’s structure.

Ninja Resources:


8. Handling External Libraries

In Section 6.4, we identified Problem 5: No standard way to find external libraries. Remember the pain?

# Makefile - hardcoded paths that break on colleague's machine
OPENCV_INCLUDE = /usr/include/opencv4
OPENCV_LIBS = -lopencv_core -lopencv_imgproc

Now let’s see how CMake solves this properlyβ€”and why it matters for our road-profile-viewer.

8.1 The Problem: Your Project Needs External Code

Our Python road-profile-viewer uses three external libraries:

# pyproject.toml
dependencies = [
    "dash>=2.14.0",
    "plotly>=5.18.0",
    "numpy>=1.26.0",
]

When you run uv sync, these are downloaded automatically. But in C++, there’s no single package manager like PyPI. External code comes from:

  1. System libraries β€” Installed via apt, brew, or vcpkg
  2. Header-only libraries β€” Just #include and go
  3. Source dependencies β€” Downloaded and compiled with your project

CMake handles all three. Let’s see each one.

8.2 FetchContent: CMake’s Dependency Manager

Before diving into specific solutions, let’s introduce the key tool that changed C++ dependency management: FetchContent.

The History

Before CMake 3.11 (released March 2018), C++ developers had few good options for dependencies:

  • Copy library source into your repository ("vendoring") and update it by hand
  • Use git submodules, which are notoriously easy to leave in an inconsistent state
  • Require every developer to install the right library versions system-wide

In March 2018, Kitware (the company maintaining CMake since 2000) introduced FetchContent as part of CMake 3.11. The motivation was clear:

β€œWe need a way to declare dependencies that are downloaded at configure time (not build time), integrated seamlessly as if they were part of your project.”

What FetchContent Does

FetchContent is CMake’s answer to uv sync. It downloads and integrates dependencies automatically during the cmake .. configuration step:

# CMakeLists.txt
include(FetchContent)  # Load the module

# Declare what you need (like adding to pyproject.toml)
FetchContent_Declare(
    dependency_name
    GIT_REPOSITORY https://github.com/example/library.git
    GIT_TAG v1.0.0  # Pin to specific version
)

# Download and make available (like running uv sync)
FetchContent_MakeAvailable(dependency_name)

The FetchContent workflow:

First cmake run:
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  cmake ..                                                        β”‚
β”‚    β”‚                                                             β”‚
β”‚    β”œβ”€β”€ Reads CMakeLists.txt                                     β”‚
β”‚    β”œβ”€β”€ Sees FetchContent_Declare(...)                           β”‚
β”‚    β”œβ”€β”€ Downloads from GitHub to build/_deps/<name>-src/         β”‚
β”‚    β”œβ”€β”€ Configures dependency as a subproject                    β”‚
β”‚    └── Makes targets available (like library::library)          β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Subsequent cmake runs:
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  cmake ..                                                        β”‚
β”‚    β”‚                                                             β”‚
β”‚    └── Already downloaded, skips fetch (fast!)                  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Why FetchContent matters:

  • Reproducibility: GIT_TAG pins an exact version, so every machine builds the same code
  • Zero setup for contributors: cloning the repo and running cmake .. is enough
  • Full integration: fetched targets behave exactly like targets you defined yourself

When FetchContent is NOT ideal: CI/CD Considerations

While FetchContent is excellent for learning and small projects, many production CI/CD environments avoid it for several reasons:

| Issue | Problem in CI | Alternative |
|---|---|---|
| Network dependency | Every build requires GitHub access. If GitHub is down, your CI fails. | Pre-built binaries in an artifact repository |
| No binary caching | FetchContent downloads source and recompiles. With 100+ builds/day, this wastes hours. | vcpkg/Conan with binary caching |
| Docker layer inefficiency | FetchContent runs at configure time, inside your build. Dependencies can't be cached in base image layers. | Install dependencies in a Dockerfile base layer |
| Security/compliance | Some organizations require all dependencies to pass security scanning before use. FetchContent fetches directly from source. | Artifact repository (Artifactory, Nexus) with approved packages |
| Air-gapped environments | Some CI environments have no internet access (government, finance). | Vendored dependencies or internal mirrors |

Example: Docker build with FetchContent problem

# ❌ Inefficient: Dependencies downloaded EVERY build
FROM ubuntu:22.04
COPY . /app
WORKDIR /app/build
RUN cmake .. && cmake --build .  # FetchContent runs here, no caching

# βœ… Better: Dependencies in base image layer (cached)
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y libgtest-dev  # Cached layer
COPY . /app
WORKDIR /app/build
RUN cmake .. && cmake --build .  # Uses system gtest, fast rebuild

Bottom line: FetchContent is perfect for:

  • Learning projects and coursework (like ours)
  • Small teams without artifact-repository infrastructure
  • Dependencies you want pinned and built from source

For production CI/CD with many daily builds, consider vcpkg or Conan with binary caching, or pre-install dependencies in your Docker base images.

Now let’s see how FetchContent (and other tools) handle different types of dependencies.

8.3 System Libraries with find_package()

Not all libraries should be fetched. Some are pre-installed on the systemβ€”large frameworks like OpenCV, Qt, or Boost that you install once via your package manager.

CMake’s find_package() locates these system-installed libraries automatically.

Example: Using OpenCV for image processing

# CMakeLists.txt
find_package(OpenCV REQUIRED)

add_executable(image_processor src/main.cpp)
target_link_libraries(image_processor PRIVATE ${OpenCV_LIBS})
target_include_directories(image_processor PRIVATE ${OpenCV_INCLUDE_DIRS})

What happens behind the scenes:

  1. CMake searches standard locations for OpenCV:
    • Linux: /usr/lib, /usr/local/lib, pkg-config paths
    • macOS: /opt/homebrew/lib, /usr/local/lib, framework paths
    • Windows: Program Files, vcpkg registry, environment variables
  2. CMake sets variables like OpenCV_LIBS and OpenCV_INCLUDE_DIRS

  3. Your colleague runs cmake .. and it just worksβ€”no hardcoded paths

Compare with Make:

# Makefile - breaks on different machines
OPENCV_INCLUDE = /usr/include/opencv4          # My machine
# OPENCV_INCLUDE = /opt/homebrew/include/opencv4  # Colleague's Mac

target:
    g++ -I$(OPENCV_INCLUDE) main.cpp -lopencv_core

When to use find_package() vs FetchContent:

| Use find_package() | Use FetchContent |
|---|---|
| Large frameworks (OpenCV, Qt, Boost) | Smaller libraries |
| System dependencies | Test frameworks (Google Test) |
| Pre-compiled binaries | Header-only libraries |
| Libraries with complex builds | Libraries you want version-pinned |

8.4 Example: Header-Only Libraries with FetchContent

Some modern C++ libraries are β€œheader-only”—no compilation needed, just include and use. FetchContent makes these trivial to integrate.

Example: nlohmann/json for JSON parsing

Our road-profile-viewer might need to read configuration files. Instead of parsing JSON manually, we can use a popular header-only library:

# CMakeLists.txt
include(FetchContent)

FetchContent_Declare(
    json
    GIT_REPOSITORY https://github.com/nlohmann/json.git
    GIT_TAG v3.11.2
)
FetchContent_MakeAvailable(json)

add_executable(main src/main.cpp)
target_link_libraries(main PRIVATE nlohmann_json::nlohmann_json)

Using it in code:

#include <nlohmann/json.hpp>
#include <fstream>

int main() {
    std::ifstream config_file("config.json");
    nlohmann::json config = nlohmann::json::parse(config_file);

    double camera_height = config["camera"]["height"];
    // ...
}

Why header-only libraries are convenient:

  • Nothing to compile separately: the entire library lives in its headers
  • No extra link step or binary artifacts to manage
  • Version pinning via GIT_TAG keeps every machine on the same release

8.5 Example: Test Frameworks with FetchContent

For our road-profile-viewer, we need Google Test. This is a perfect FetchContent use caseβ€”we want the same test framework version across all developer machines and CI.

# CMakeLists.txt
include(FetchContent)

# Declare Google Test dependency
FetchContent_Declare(
    googletest
    GIT_REPOSITORY https://github.com/google/googletest.git
    GIT_TAG v1.14.0
)

# Download and integrate
FetchContent_MakeAvailable(googletest)

# Use it in your test target
add_executable(geometry_tests tests/test_geometry.cpp)
target_link_libraries(geometry_tests PRIVATE
    geometry_core
    GTest::gtest_main
)

Multiple dependencies together:

include(FetchContent)

# Declare all dependencies
FetchContent_Declare(
    googletest
    GIT_REPOSITORY https://github.com/google/googletest.git
    GIT_TAG v1.14.0
)

FetchContent_Declare(
    json
    GIT_REPOSITORY https://github.com/nlohmann/json.git
    GIT_TAG v3.11.2
)

# Fetch all at once (efficient!)
FetchContent_MakeAvailable(googletest json)

8.6 Comparison: Python vs C++ Dependency Management

| Aspect | Python (uv/pip) | C++ (CMake) |
|---|---|---|
| Declare dependencies | pyproject.toml | `FetchContent_Declare()` |
| Install/fetch | `uv sync` | `cmake ..` (auto-fetches) |
| Central repository | PyPI | None (GitHub, vcpkg, Conan) |
| Version pinning | `numpy>=1.26.0` | `GIT_TAG v1.14.0` |
| Lock file | uv.lock | No standard equivalent (CMake presets emerging) |

Key insight: C++ dependency management is more manual than Python’s, but CMake’s FetchContent brings it closer to the modern Python experience.

8.7 Dependency Management Resources

C++ dependency management is a rapidly evolving area. Here are resources to stay current:

Official Documentation:

Package Managers (alternatives to FetchContent):

When to use each:

| Tool | Best For | Trade-offs |
|---|---|---|
| FetchContent | Simple projects, educational use, full source control | Slow initial build, no binary caching |
| vcpkg | Windows development, Microsoft ecosystem | Requires manifest mode for reproducibility |
| Conan | Large projects, binary caching, CI/CD | Steeper learning curve, Python dependency |
| CPM.cmake | FetchContent with caching | Third-party tool, adds complexity |

Articles and Tutorials:

FetchContent vs ExternalProject:

Understanding why FetchContent replaced ExternalProject helps you appreciate its design:

| Aspect | ExternalProject (old) | FetchContent (modern) |
|---|---|---|
| When it runs | Build time | Configure time |
| Target visibility | Targets not visible to main project | Targets fully integrated |
| Typical use | Superbuild patterns | Direct dependency inclusion |
| First appeared | CMake 2.8 (2009) | CMake 3.11 (2018) |

9. Organizing Your Code: Headers and Source Files

In Section 6.4, we identified Problem 2: Manual header dependency tracking. Remember?

# You had to manually list every header a file depends on
geometry.o: geometry.cpp geometry.hpp math_types.hpp utils.hpp

CMake solves the tracking, but there’s a deeper question: why does C++ split code into header files and source files at all? Understanding this connects back to everything we learned about compilation and linking.

9.1 Why Two File Types? The Compilation Model

Recall from Section 5.2: the compiler processes one source file at a time, producing one object file:

geometry.cpp  ──→  g++ -c  ──→  geometry.o
main.cpp      ──→  g++ -c  ──→  main.o

But main.cpp needs to call functions from geometry.cpp. How does the compiler know those functions exist?

The header file solves this: It contains declarations that tell the compiler β€œtrust me, this function exists somewhere.”

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  When compiling main.cpp:                                        β”‚
β”‚                                                                  β”‚
β”‚  1. Compiler reads main.cpp                                      β”‚
β”‚  2. Sees: #include "geometry.hpp"                               β”‚
β”‚  3. Reads geometry.hpp β†’ learns calculate_ray_line() exists     β”‚
β”‚  4. Compiles main.cpp, trusting the function will be linked     β”‚
β”‚  5. Later, linker connects the actual function from geometry.o  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Python doesn’t need this because it’s interpretedβ€”the interpreter can look up functions at runtime. C++ must resolve everything at compile time.

9.2 Header Files: The Public Interface

Header files (.hpp or .h) contain declarationsβ€”they describe what exists, not how it works:

// geometry.hpp - The PUBLIC interface of our geometry module
#ifndef ROAD_PROFILE_VIEWER_GEOMETRY_HPP
#define ROAD_PROFILE_VIEWER_GEOMETRY_HPP

#include <vector>
#include <optional>

namespace road_profile_viewer {

// Data structures (fully defined - users need to know the layout)
struct RayLine {
    std::vector<double> x;
    std::vector<double> y;
};

struct IntersectionResult {
    double x;
    double y;
    double distance;
};

// Function DECLARATIONS (no implementation here!)
RayLine calculate_ray_line(
    double angle_degrees,
    double camera_x = 0.0,
    double camera_y = 2.0,
    double x_max = 80.0
);

std::optional<IntersectionResult> find_intersection(
    const std::vector<double>& x_road,
    const std::vector<double>& y_road,
    double angle_degrees,
    double camera_x = 0.0,
    double camera_y = 1.5
);

}  // namespace road_profile_viewer

#endif  // ROAD_PROFILE_VIEWER_GEOMETRY_HPP

Key elements explained:

1. Include guards (#ifndef, #define, #endif)

#ifndef ROAD_PROFILE_VIEWER_GEOMETRY_HPP  // If not already defined...
#define ROAD_PROFILE_VIEWER_GEOMETRY_HPP  // ...define it now

// ... content ...

#endif  // End of guard

Without guards, if two files both #include "geometry.hpp", the compiler sees the declarations twice and reports β€œredefinition” errors. Guards ensure the content is processed only once.

2. Namespace (namespace road_profile_viewer)

Groups related code and prevents name collisions. If another library also has a calculate_ray_line() function, namespaces keep them separate:

road_profile_viewer::calculate_ray_line(45.0);  // Ours
other_library::calculate_ray_line(45.0);         // Theirs

3. Declarations only β€” no function bodies

The header says β€œthis function exists with this signature” but doesn’t show the implementation. This is intentional: callers only need the signature to compile their calls, each .cpp file can be compiled independently against the same interface, and the implementation in geometry.cpp can change without forcing every file that includes the header to change.

9.3 Source Files: The Implementation

Source files (.cpp) contain definitionsβ€”the actual code:

// geometry.cpp - The PRIVATE implementation
#include "geometry.hpp"
#include <cmath>
#include <numbers>

namespace road_profile_viewer {

RayLine calculate_ray_line(
    double angle_degrees,
    double camera_x,
    double camera_y,
    double x_max
) {
    // Convert degrees to radians
    const double angle_rad = -angle_degrees * std::numbers::pi / 180.0;

    // Calculate ray endpoint
    double x_end, y_end;

    if (std::abs(std::cos(angle_rad)) < 1e-10) {
        // Vertical ray (looking straight down)
        x_end = camera_x;
        y_end = 0.0;
    } else {
        // Ray intersects ground at y = 0
        const double tan_angle = std::tan(angle_rad);
        x_end = camera_x + camera_y / tan_angle;
        y_end = 0.0;

        // Clamp to x_max
        if (x_end > x_max) {
            x_end = x_max;
            y_end = camera_y - (x_end - camera_x) * tan_angle;
        }
    }

    return RayLine{
        .x = {camera_x, x_end},
        .y = {camera_y, y_end}
    };
}

// ... find_intersection implementation ...

}  // namespace road_profile_viewer

Notice:

- The source file includes its own header first, so the compiler verifies that declaration and definition actually match.
- The default argument values (camera_x = 0.0, etc.) appear only in the header; repeating them in the definition would be a compile error.
- Implementation-only includes (&lt;cmath&gt;, &lt;numbers&gt;) stay in the .cpp, hidden from users of the header.

9.4 How This Connects to the Build System

Now the CMakeLists.txt makes more sense:

# Create a library from the SOURCE file (not the header!)
add_library(geometry_core
    src/geometry.cpp    # <-- The implementation
)

# Tell CMake where headers are (for #include to work)
target_include_directories(geometry_core PUBLIC
    ${CMAKE_CURRENT_SOURCE_DIR}/include
)

# The executable only needs to link against the library
add_executable(main src/main.cpp)
target_link_libraries(main PRIVATE geometry_core)

The flow:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ cmake --build .                                                       β”‚
β”‚                                                                       β”‚
β”‚  1. Compile geometry.cpp β†’ geometry_core library                     β”‚
β”‚     (Header geometry.hpp is read during compilation)                  β”‚
β”‚                                                                       β”‚
β”‚  2. Compile main.cpp β†’ main.o                                        β”‚
β”‚     (main.cpp does #include "geometry.hpp" to know the API)          β”‚
β”‚                                                                       β”‚
β”‚  3. Link main.o + geometry_core β†’ main executable                    β”‚
β”‚     (Linker connects main's calls to geometry's implementations)     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

9.5 Python vs C++ Modules

| Aspect | Python Module | C++ Module |
|---|---|---|
| Files | Single .py file | .hpp (interface) + .cpp (implementation) |
| Interface | Implicit (whatever is defined) | Explicit (header declares public API) |
| Import | from geometry import func | #include "geometry.hpp" |
| Namespace | File name is namespace | Explicit namespace keyword |
| Visibility | _private convention | Header only exposes public API |
| Compile impact | N/A (interpreted) | Header changes trigger recompilation of all includers |

The tradeoff: C++ requires more files and explicit structure, but gains compile-time checking, faster execution, and explicit interfaces.

9.6 Project Structure for road-profile-viewer C++

Putting it all together, here’s how our C++ port is organized:

road-profile-viewer/
β”œβ”€β”€ cpp/
β”‚   β”œβ”€β”€ CMakeLists.txt           # Build configuration
β”‚   β”œβ”€β”€ include/
β”‚   β”‚   └── geometry.hpp         # Public interface (declarations)
β”‚   β”œβ”€β”€ src/
β”‚   β”‚   β”œβ”€β”€ geometry.cpp         # Implementation (definitions)
β”‚   β”‚   └── main.cpp             # Entry point
β”‚   └── tests/
β”‚       └── test_geometry.cpp    # Unit tests

This structure keeps the public interface (include/), the implementation (src/), and the tests (tests/) in separate directories, matching the targets declared in CMakeLists.txt.
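The tests/ directory plugs into the same CMakeLists.txt as one more target. A sketch, assuming GoogleTest has been made available (for example via FetchContent); the target name is an assumption:

```cmake
# Test executable: reuses the geometry_core library, just like main does
add_executable(test_geometry tests/test_geometry.cpp)
target_link_libraries(test_geometry PRIVATE
    geometry_core
    GTest::gtest_main   # provides main() and the gtest framework
)

# Register each TEST() case with CTest so `ctest` can run them
include(GoogleTest)
gtest_discover_tests(test_geometry)
```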

9.7 Header and Module Resources

Understanding C++ code organization is essential for larger projects. Here are resources to deepen your knowledge:

Official Documentation:

The Future: C++20 Modules

C++20 introduced a new module system that may eventually replace the header/source split:

// math.cppm (module interface)
export module math;

export int add(int a, int b) {
    return a + b;
}

// main.cpp
import math;

int main() {
    return add(1, 2);
}

Why we still teach headers:

| Headers (traditional) | Modules (C++20) |
|---|---|
| Universal compiler support | Limited compiler support (improving) |
| Works with all existing code | Requires migration effort |
| CMake fully supports | CMake support experimental |
| All tutorials/books use headers | Few learning resources yet |

Modules will become standard eventually, but for now (2024-2026), headers remain the practical choice for most projects.

Articles and Tutorials:

Books for deeper study:

Common Header Mistakes:

| Mistake | Problem | Solution |
|---|---|---|
| Missing include guards | Redefinition errors | Always use #ifndef or #pragma once |
| Putting implementations in headers | Slow compilation, code bloat | Only declarations in .hpp |
| Including unnecessary headers | Slow compilation, hidden dependencies | Forward declarations, IWYU tool |
| Circular includes | Compilation fails | Forward declarations, restructure code |

10. Summary

In this lecture, you learned the fundamentals of C++ developmentβ€”the foundation you’ll need to integrate C++ with your Python projects.

10.1 Key Takeaways

  1. C++ is compiled, Python is interpreted: This fundamental difference explains performance gaps (10-100x faster) and deployment constraints (no runtime needed).

  2. The build process has three stages: Preprocessing, compilation, and linking transform source code into native executables.

  3. CMake is the cross-platform build configuration tool: It generates platform-specific build files (Makefiles, Ninja, Visual Studio projects) from a single configuration.

  4. Dependency management evolved with CMake: From manual downloads to find_package() to modern FetchContent, managing external libraries has become increasingly automated.

  5. Header/source file separation: Unlike Python’s single .py files, C++ separates declarations (.hpp) from implementations (.cpp) to enable separate compilation.

  6. The compilation model requires declarations: When compiling one file, the compiler needs to know what functions exist in other filesβ€”headers provide this information.

10.2 What You Can Now Do

- Explain why compiled C++ can run where interpreted Python cannot
- Trace source code through the three build stages: preprocessing, compilation, and linking
- Write a CMakeLists.txt that builds libraries and executables, and pull in dependencies with FetchContent
- Organize a C++ module into a header (public declarations) and a source file (private definitions)

11. Reflection Questions

  1. Why does C++ require explicit type declarations while Python allows dynamic typing? What are the trade-offs?

  2. The Python interpreter includes garbage collection. C++ doesn’t. What challenges does this create for C++ developers?

  3. CMake generates Makefiles (or Ninja files). Why use a β€œmeta-build system” instead of writing Makefiles directly?

  4. FetchContent downloads dependencies at configure time. When might this be problematic? What alternatives exist?

  5. Why do C++ projects separate declarations (headers) from implementations (source files)? Python doesn’t do thisβ€”what’s different?


12. What’s Next

In the next lecture (Part 2), we will bridge the two worlds: you’ll apply everything you learned today to make your Road Profile Viewer’s geometry functions callable from both Python and C++.


13. Further Reading

13.1 C++ Fundamentals

13.2 Build Process Deep Dive

13.3 Dependency Management

© 2026 Dominik Mueller