Part 12 of 12 in the Linux From Scratch series.
Previous: Compiling the Kernel


We're done.

Not "done" as in "close enough." Done as in: power on the VM, GRUB loads, the kernel boots, init starts, bootscripts run, a login prompt appears. Type the root password. Get a shell. Run gcc --version. It works. Run uname -r. It says 6.16.1. Run cat /etc/os-release. It says LFS.

A complete, bootable Linux operating system built entirely from source code.

Let's take inventory.

The Release Files

Three files identify our system:

/etc/os-release

NAME="Linux From Scratch"
VERSION="12.3"
ID=lfs
PRETTY_NAME="Linux From Scratch 12.3"
VERSION_CODENAME="<your name here>"
HOME_URL="https://www.linuxfromscratch.org/"

This is the modern standard. hostnamectl, neofetch, and other tools read this file. Any program that needs to know "what distro is this?" checks here first.
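Because os-release is a plain KEY=value file that happens to be valid shell syntax, reading it takes one line. A minimal sketch, using a temp copy of the fields above so it runs on any system, not only an LFS box:

```shell
# Copy the fields shown above into a temp file so this runs anywhere.
# /etc/os-release itself works exactly the same way.
cat > /tmp/os-release.sample <<'EOF'
NAME="Linux From Scratch"
VERSION="12.3"
ID=lfs
PRETTY_NAME="Linux From Scratch 12.3"
EOF

# The file is valid shell (KEY=value pairs), so it can simply be sourced.
. /tmp/os-release.sample
echo "$PRETTY_NAME"   # → Linux From Scratch 12.3
```

This is why so many tools can agree on the format: "parse os-release" is barely parsing at all.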

/etc/lfs-release

12.3

LFS-specific. One line. The version number.

/etc/lsb-release

DISTRIB_ID="Linux From Scratch"
DISTRIB_RELEASE="12.3"
DISTRIB_CODENAME="<your name here>"
DISTRIB_DESCRIPTION="Linux From Scratch"

Linux Standard Base format. Older tools read this. Redundant with os-release, but some legacy scripts expect it.

Three files that say the same thing three ways. Welcome to Linux standardization.

System Inventory

Let's count what we built.

In /usr/bin: ~600 binaries. Everything from ar to zstd. The entire GNU coreutils suite (ls, cp, mv, rm, cat, chmod, chown...), text processing (grep, sed, awk, sort, cut, tr), compression (gzip, bzip2, xz, zstd), development (gcc, g++, cpp, ld, as, make, autoconf, automake), scripting (bash, perl, python3), editors (vim), networking (iproute2), and more.

In /usr/sbin: ~88 binaries. System administration tools: fsck, mkfs.ext4, grub-install, init, agetty, syslogd, udevd, chroot, useradd, groupadd.

In /usr/lib: Shared libraries. libc.so.6 (Glibc — the C library everything links against), libstdc++.so (C++ standard library), libssl.so and libcrypto.so (OpenSSL), libpython3.13.so, libncurses.so, libreadline.so, libz.so, and hundreds more.
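To see what "links against" means concretely, ldd prints the shared libraries the dynamic linker would load for a given binary. The exact list varies by system; on a glibc system you will typically see libc.so.6 and the ld-linux interpreter itself:

```shell
# Which shared libraries does a binary need? Ask the dynamic linker.
# Output differs per system; /bin/sh is just a convenient example binary.
ldd /bin/sh
```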

In /usr/lib/modules/6.16.1: Loadable kernel modules. Drivers compiled as modules rather than built into the kernel.

In /boot: The kernel (vmlinuz-6.16.1-lfs-12.3, ~14MB), System.map, kernel config, GRUB files.

The kernel: Linux 6.16.1. Compiled with virtio drivers for our VM, ext4, and devtmpfs built in.

That's a real operating system. Not a minimal rescue environment. Not a container base image. A system with a compiler, a kernel, a bootloader, networking, process management, file utilities, text editors, scripting languages, and cryptographic libraries.

What We Actually Did

Let me retrace the entire journey. Because from the inside, building 80+ packages feels like a blur. From the outside, it's an engineering sequence with real logic.

Phase 1: Foundation (Posts 1-3)

We started by understanding why. Not as an academic exercise — because knowing why Linux works the way it does makes you a better engineer. Then we set up the build environment: a host Linux system, a dedicated partition, directory structure, and a clean lfs user with a controlled environment. No pollution from the host.

Phase 2: Building (Posts 4-6)

We built a cross-compiler. GCC and Glibc, targeting our new system but running on the host. Then we used that cross-compiler to build temporary tools — a minimal set of utilities that don't depend on the host at all. Then we chroot'd into the new system. From that moment, the host system ceased to exist from our perspective.

Phase 3: The System (Posts 7-9)

Inside the chroot, we rebuilt everything properly. GCC, Glibc (the final versions), then every package the system needs: Bash, Coreutils, Findutils, Grep, Sed, Gawk, Make, Tar, Gzip, Xz, Zstd, Vim, Python, Perl, OpenSSL, procps-ng, SysVinit, udev, e2fsprogs, GRUB, and dozens more. Each package configured, compiled, tested (where tests exist), and installed.

Phase 4: Boot (Posts 10-12)

We installed bootscripts and configured the system: init, runlevels, network, shell environment. We wrote /etc/fstab. We compiled the Linux kernel — the actual operating system — and installed GRUB to load it.

Total: 80+ packages compiled from source. Every dependency resolved manually. Every configure flag understood (or at least considered).
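Every one of those packages followed the same three-beat rhythm: configure, make, make install. A runnable sketch, assuming make is available, with a hypothetical one-file "package" standing in so the steps work anywhere:

```shell
# Hypothetical package "hello-0.1". In a real LFS build, ./configure would
# generate this Makefile; here we write a stand-in by hand (beat 1).
mkdir -p /tmp/hello-0.1 && cd /tmp/hello-0.1
printf 'all:\n\techo built > program\ninstall:\n\tmkdir -p $(DESTDIR)/usr/bin\n\tcp program $(DESTDIR)/usr/bin/hello\n' > Makefile

make                               # beat 2: compile
make DESTDIR=/tmp/stage install    # beat 3: install (staged, not into /)
ls /tmp/stage/usr/bin/hello
```

DESTDIR is the convention that makes staged installs possible: the package thinks it is installing to /usr/bin, but everything lands under /tmp/stage, ready to inspect or package.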

The Stripping Disaster

A story from the build. Because things don't always go smoothly, and the mistakes teach you as much as the successes.

During the cleanup phase, we strip debug symbols from binaries and libraries to save space. Standard practice. The LFS book has you run strip across /usr/lib and /usr/bin.

But here's the trap. The dynamic linker — ld-linux-x86-64.so.2 — lives in /usr/lib. And you're running inside a chroot where every dynamically-linked program depends on it. If your strip command damages or destroys the dynamic linker, nothing works anymore. Not ls. Not bash. Not strip itself.

That's exactly what happened. The dynamic linker got corrupted. Every command returned "No such file or directory" — the kernel's confusing error message when it can't find the ELF interpreter (the dynamic linker).

The fix: use a statically-linked binary to copy a fresh dynamic linker into place. Or, if you have a working statically-linked bash, use it to bootstrap repairs. The LFS book actually warns about this and has you save a statically-linked ld-linux before stripping. If you didn't save one... you're re-entering the chroot from the host and rebuilding Glibc.

The lesson: understand what you're operating on. When you run a command that modifies system libraries, you're modifying the ground you're standing on. strip /usr/lib/* is a power tool near a load-bearing wall. Know which walls are load-bearing.
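The defensive pattern can be sketched in a few lines, assuming binutils' strip is available. The stand-in file and destination path here are hypothetical; the point is that the in-use file is never modified mid-operation:

```shell
# Never strip a load-bearing file in place. Copy it out, strip the copy,
# then swap the stripped copy in. /bin/true stands in for a real library,
# and /tmp/libdemo.installed stands in for the real destination path.
cp /bin/true /tmp/libdemo
strip --strip-unneeded /tmp/libdemo          # operate on the copy only
install -m755 /tmp/libdemo /tmp/libdemo.installed
ls -l /tmp/libdemo.installed
```

The original stays valid at every instant; if strip mangles the copy, you find out before anything depends on it.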

What LFS Teaches You

After building LFS, certain things click that no amount of reading can replicate:

Package dependencies aren't magic. You know exactly why python3 needs libffi, why gcc needs gmp, mpfr, and mpc, why perl needs gdbm. You resolved these by hand. When apt or dnf shows a dependency tree, you understand what it means.

The kernel is just a program. A big, complex, privileged program — but a program. It has a main() function (well, start_kernel()), it allocates memory, it manages data structures. You configured it, compiled it, and installed it like any other software.

init is just a program. PID 1 is special because the kernel starts it. After that, it's a process like any other. It reads config files, forks children, waits for signals. There's no magic in SysV init or systemd — just software making system calls.

Everything is files and processes. Devices are files (/dev/). Kernel state is files (/proc/, /sys/). Configuration is files (/etc/). Running programs are processes. That's the entire model. Once you internalize it, Linux stops being mysterious.
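That model is directly observable. A few reads of /proc, no special API needed (paths assume a Linux /proc mount):

```shell
# Kernel state, exposed as ordinary files: read them like any other file.
cat /proc/version              # the running kernel's build string
head -1 /proc/meminfo          # total memory, as one line of text
ls /proc/self/fd               # open file descriptors of this very shell
```

Tools like ps, free, and uptime are, under the hood, mostly formatted reads of these files.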

When something breaks on a "real" distro, you know what's underneath. Kernel panic? You've configured and compiled a kernel. Boot failure? You've written grub.cfg. Network not working? You've manually configured interfaces. Service won't start? You've written init scripts. The abstractions that distros add are just layers on top of what you built here.

What's Next

LFS gives you a base system. It boots, you get a shell, you can compile code. But there's no:

  • Graphical desktop (X11/Wayland, window managers)
  • Web browser
  • Audio/video playback
  • Package manager
  • DHCP client (we used static IPs)
  • Wireless networking
  • Printing

That's where Beyond Linux From Scratch (BLFS) comes in. Same philosophy — build from source, understand what you're building — but for the software that makes a system usable day-to-day. X Window System, Mesa (GPU drivers), PulseAudio/PipeWire, Firefox, LibreOffice. Hundreds of packages, each with its own dependencies.

Or you can take what you learned and apply it elsewhere:

  • Gentoo — a distro built on the same principles, but with a package manager (Portage) that automates what you did manually. Understanding LFS makes Gentoo's emerge system transparent.
  • Arch Linux — doesn't compile from source, but its minimal, hands-on approach shares LFS's philosophy. You'll understand Arch's wiki at a deeper level.
  • NixOS — radically different approach (functional package management), but the underlying Linux concepts are the same ones you learned here.
  • Distro packaging — knowing how software is built from source is exactly what distro package maintainers need. Debian, Fedora, Alpine — they all need people who understand configure && make && make install and how to turn that into a package.

The Series

Twelve posts. One system. Here's where we've been:

Foundation

  1. Why Build Your Own Linux — The case for building LFS
  2. How Linux Actually Works — Kernel, userspace, and everything in between
  3. Preparing the Build Environment — Partition, directories, clean environment

Building

  4. Building the Cross-Toolchain — GCC and Glibc, the bootstrap
  5. Temporary Tools — Minimal utilities for independence
  6. Into the Chroot — Leaving the host behind

The System

  7. Building the Core — GCC, Glibc — for real this time
  8. System Plumbing — Shells to security
  9. The Final Packages — Init, filesystems, cleanup

Boot

  10. Bootscripts and System Configuration — Making it start
  11. Compiling the Kernel — The heart of the system
  12. The Finish Line — You are here

Final Thoughts

We started with an empty virtual disk and a question: what does it actually take to build a working Linux system?

The answer: about 80 packages, a cross-compiler, a lot of configure && make && make install, a kernel, a bootloader, and more patience than most people think they have.

Every byte on this system came from source code we downloaded, inspected (well, some of us), and compiled. No binary blobs. No pre-built packages. No trust-us-it-works distro installer. Source code → compiler → machine code → working system.

That's not normal. Most people will never do this. But if you followed along — even just reading, not building — you now have a mental model of Linux that most professionals lack. You know where the abstractions are, because you built the thing the abstractions abstract.

That's worth something.


Series complete. 🏁

All posts: Linux From Scratch

Compiled by AI. Proofread by caffeine. ☕

Built from source, obviously.