
Build Guide

This guide covers building Micromegas from source and setting up a development environment.

Prerequisites

  • Rust - Latest stable version
  • Python 3.8+
  • Docker - For running PostgreSQL
  • Git
  • Build tools - C/C++ compiler and linker (required for Rust compilation)
      • Linux: sudo apt-get install build-essential clang mold
      • macOS: xcode-select --install
      • Windows: Install Visual Studio Build Tools
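
A quick way to confirm the toolchain is on PATH before building (a convenience sketch; the tool list simply mirrors the prerequisites above):

```shell
#!/bin/sh
# Report which of the build prerequisites are installed.
check_tools() {
    for tool in "$@"; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "ok: $tool"
        else
            echo "missing: $tool"
        fi
    done
}

check_tools rustc cargo python3 docker git
```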

mold linker requirement

On Linux, the project requires the mold linker as configured in .cargo/config.toml. This provides faster linking for large projects.
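
A typical mold setup in .cargo/config.toml looks roughly like this (an illustrative sketch; check the repository for the actual contents):

```toml
# Illustrative only -- the repository's .cargo/config.toml is authoritative.
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```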

Additional CI Tools

For running the full CI pipeline locally, you'll need:

# Install cargo-machete for unused dependency checking
cargo install cargo-machete

Rust Development

Clone and Build

git clone https://github.com/madesroches/micromegas.git
cd micromegas/rust

# Build all components
cargo build

# Build with optimizations
cargo build --release

# Build specific component
cargo build -p telemetry-ingestion-srv

Testing

# Run all tests
cargo test

# Run tests with output
cargo test -- --nocapture

# Run tests for a specific crate
cargo test -p micromegas-tracing

Format and Lint

# Format code (required before commits)
cargo fmt

# Run linter
cargo clippy --workspace -- -D warnings

# Run full CI pipeline
python3 ../build/rust_ci.py

Advanced Builds

# Clean build
cargo clean && cargo build

# Release with debug symbols for profiling
cargo build --profile release-debug

# Profiling build
cargo build --profile profiling

# Cross-compile for Windows (the GNU target needs a mingw-w64 toolchain on Linux)
rustup target add x86_64-pc-windows-gnu
cargo build --target x86_64-pc-windows-gnu

Python Development

cd python/micromegas

# Install dependencies
poetry install

# Run tests
pytest

# Format code (required before commits)
black .

Documentation

# Install dependencies
pip install -r mkdocs/docs-requirements.txt

# Start development server
cd mkdocs
mkdocs serve

# Build static site
mkdocs build

Self-Hosted CI Runner

Developer workstations can contribute to CI builds using a Docker-based self-hosted GitHub Actions runner. Builds from the repo owner route to the dev worker when it's online, falling back to GitHub-hosted runners when it's not.

Prerequisites

  • Docker
  • A fine-grained GitHub PAT with Administration: Read and write scoped to madesroches/micromegas

Setup

Store the PAT locally (choose one):

# Option 1: environment variable
export MICROMEGAS_RUNNER_PAT=ghp_xxx

# Option 2: file (recommended for persistent use)
mkdir -p ~/.config/micromegas
echo "ghp_xxx" > ~/.config/micromegas/runner-pat
chmod 600 ~/.config/micromegas/runner-pat
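
A worker script can resolve the PAT from either location with logic along these lines (an illustrative sketch; the actual lookup in build/dev_worker.py may differ):

```shell
#!/bin/sh
# Resolve the runner PAT: prefer the environment variable,
# fall back to the config file (paths as described above).
resolve_runner_pat() {
    if [ -n "${MICROMEGAS_RUNNER_PAT:-}" ]; then
        printf '%s\n' "$MICROMEGAS_RUNNER_PAT"
    elif [ -r "$HOME/.config/micromegas/runner-pat" ]; then
        cat "$HOME/.config/micromegas/runner-pat"
    else
        echo "error: no runner PAT found" >&2
        return 1
    fi
}
```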

The same PAT must be stored as the repository secret RUNNER_PAT:

gh secret set RUNNER_PAT

Usage

# Start the worker (runs until Ctrl+C)
python3 build/dev_worker.py

# With resource limits
python3 build/dev_worker.py --cpus 8 --memory 16g

# Build the container image without starting the worker
python3 build/dev_worker.py --build-image

# Clear the build cache
python3 build/dev_worker.py --clear-cache

# Rotate cache: clear, restart, trigger a warming build on main
python3 build/dev_worker.py --rotate-cache

Nightly Cache Rotation

Use --rotate-at to automatically wipe and warm the cache each night:

# Start with nightly rotation at 03:00 local time
python3 build/dev_worker.py --rotate-at 3

At the scheduled hour, the worker restarts the container (clearing the cache) and triggers a full build on main, so daytime builds hit a warm cache. No cron job is needed.
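
The scheduling itself is simple; computing the next rotation timestamp for a given hour can be sketched like this (illustrative only, not the actual dev_worker.py logic):

```python
from datetime import datetime, timedelta

def next_rotation(now: datetime, hour: int) -> datetime:
    """Return the next occurrence of `hour`:00 strictly after `now`."""
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate

# Example: worker started at 22:15 with --rotate-at 3
start = datetime(2024, 5, 1, 22, 15)
print(next_rotation(start, 3))  # 2024-05-02 03:00:00
```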

How It Works

Each workflow has a check-runner job that runs on ubuntu-latest and decides where the real jobs run:

  1. If the build author is the repo owner and a dev worker is online, jobs route to dev-worker
  2. Otherwise, jobs run on ubuntu-latest (existing behavior)

The runner container is persistent and handles multiple jobs back-to-back. The build cache (cargo registry, target directories) lives on the container filesystem and stays warm as long as the container is running.
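
In workflow terms, the routing pattern looks roughly like this (an illustrative GitHub Actions sketch, not the project's actual workflow file):

```yaml
jobs:
  check-runner:
    runs-on: ubuntu-latest
    outputs:
      runner: ${{ steps.pick.outputs.runner }}
    steps:
      - id: pick
        # Query the repo's self-hosted runners; fall back to GitHub-hosted
        # runners if no dev worker is online.
        run: |
          online=$(gh api repos/${{ github.repository }}/actions/runners \
            --jq '[.runners[] | select(.status == "online")] | length')
          if [ "${{ github.actor }}" = "${{ github.repository_owner }}" ] && [ "$online" -gt 0 ]; then
            echo "runner=dev-worker" >> "$GITHUB_OUTPUT"
          else
            echo "runner=ubuntu-latest" >> "$GITHUB_OUTPUT"
          fi
        env:
          GH_TOKEN: ${{ secrets.RUNNER_PAT }}

  build:
    needs: check-runner
    runs-on: ${{ needs.check-runner.outputs.runner }}
```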

See tasks/container_based_dev_worker_plan.md for the full design.

Next Steps