Part 11 of 12 in the Linux From Scratch series.
Previous: Bootscripts and System Configuration · Next: The Finish Line


Everything we've built so far — GCC, Glibc, Bash, Coreutils, the bootscripts — all of it is userspace. Programs that run on top of something. That something is the kernel.

Chapter 10 of LFS is where we compile the Linux kernel, set up the filesystem table, and install GRUB. After this, our system can boot on its own. No host distro. No safety net. Just hardware (or VM), bootloader, kernel, and the system we built.

This is the moment.

/etc/fstab: What Gets Mounted Where

Before the kernel, we need to tell the system about its filesystems. /etc/fstab — the filesystem table — is read at boot to determine what to mount.

# Begin /etc/fstab

# file system  mount-point    type     options             dump  fsck
#                                                                order

/dev/vdb1      /              ext4     defaults            1     1
proc           /proc          proc     nosuid,noexec,nodev 0     0
sysfs          /sys           sysfs    nosuid,noexec,nodev 0     0
devpts         /dev/pts       devpts   gid=5,mode=620      0     0
tmpfs          /run           tmpfs    defaults            0     0
devtmpfs       /dev           devtmpfs mode=0755,nosuid    0     0

# End /etc/fstab

Let's break down every line.

/dev/vdb1 / — Our root filesystem. Physical (well, virtual) disk, first partition, ext4. This is where everything lives. defaults means read-write, allow suid, allow exec — normal. The 1 1 at the end means: include in dump backups, and fsck this first at boot.

The rest are pseudo-filesystems. They don't live on disk. The kernel creates them in memory.

proc /proc — The process filesystem. Every running process gets a directory here: /proc/1 is init, /proc/self is the current process. Also exposes kernel parameters: /proc/cpuinfo, /proc/meminfo, /proc/sys/*. When you run ps, it reads /proc. When you run sysctl, it reads /proc/sys.
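You can poke at /proc directly from the shell. A quick illustration (assuming /proc is mounted, as it is on any running Linux system):

```shell
# /proc/self is a symlink to the directory of whichever process opens it.
# Here the opener is cat itself, so this prints cat's own name.
cat /proc/self/comm        # → cat

# Count CPUs the way nproc does: one "processor" stanza per CPU.
grep -c ^processor /proc/cpuinfo
```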

sysfs /sys — The device tree. Represents every device, driver, and bus in the system as a directory hierarchy. /sys/class/net/eth0 is your network interface. /sys/block/vdb is your disk. udev reads this to create device nodes in /dev.

devpts /dev/pts — Pseudo-terminal devices. When you open a terminal emulator or SSH session, the kernel creates a pseudo-terminal pair: a master and a slave. The slave side appears here as /dev/pts/0, /dev/pts/1, etc. gid=5 assigns them to the tty group. mode=620 means owner read/write, group write-only — your shell owns its pty and can read and write it, while tty-group programs like write(1) and wall(1) can only send messages to it.
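The mode=620 bits are easier to see on an ordinary file. A small demonstration (the temp file is just a stand-in; real pts nodes get their mode from the mount options, not from chmod):

```shell
# 6 = owner read+write, 2 = group write-only, 0 = nothing for others.
f=$(mktemp)
chmod 620 "$f"
stat -c '%a %A' "$f"       # → 620 -rw--w----
rm -f "$f"
```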

tmpfs /run — A RAM-based filesystem for runtime data. PID files, lock files, socket files. Fast because it's in memory. Lost on reboot, which is exactly what you want for transient state.

devtmpfs /dev — Auto-populated device nodes. The kernel creates device files here automatically when it detects hardware. Then udev takes over and applies rules (permissions, symlinks, naming). Without devtmpfs, you'd need to create every device node manually with mknod.

Six lines. Each one is a fundamental concept in how Linux systems work. Every distro has these (or equivalents). Now you know why they exist.
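The format is simple enough to parse by hand. A sketch of how mount(8) conceptually reads the table — whitespace-separated fields, comment lines skipped (the /tmp path and the two-line sample are purely illustrative):

```shell
cat > /tmp/fstab.demo << "EOF"
# file system  mount-point  type  options              dump  fsck
/dev/vdb1      /            ext4  defaults             1     1
proc           /proc        proc  nosuid,noexec,nodev  0     0
EOF

# Skip comments and blank lines; fields 1-3 are device, mount point, type.
awk '!/^#/ && NF { printf "%s on %s type %s\n", $1, $2, $3 }' /tmp/fstab.demo
# → /dev/vdb1 on / type ext4
# → proc on /proc type proc
```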

Kernel Compilation

Linux 6.16.1. Source tarball downloaded, extracted to /sources/linux-6.16.1/.

Step 1: Clean Slate

make mrproper

mrproper removes all generated files, config files, and build artifacts. Named after a German cleaning product (Mr. Proper / Mr. Clean). Starts you with a perfectly clean source tree. Run this once. Don't skip it.

Step 2: Default Configuration

make defconfig

This generates .config — the kernel configuration file — with sane defaults for your architecture. On x86_64, defconfig gives you a kernel that works on most real hardware: SATA drivers, USB, common network cards, standard filesystems.

But "most real hardware" doesn't include our virtual hardware.

Step 3: VM-Specific Drivers

Our QEMU VM uses virtio — a paravirtualization standard. The virtual disk isn't SATA, it's virtio-blk. The virtual network card isn't Intel e1000, it's virtio-net. Without the right drivers, the kernel boots but can't see the disk. Kernel panic: "unable to mount root fs."

We need to enable:

CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
CONFIG_EXT4_FS=y
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y

These get set in .config. You can use make menuconfig (ncurses-based menu), or edit .config directly and run:

make olddefconfig

This resolves any new or dependent options, setting them to defaults. It's the non-interactive way to update a config: enable an option by hand, and olddefconfig fills in everything that option depends on or selects, without asking questions.
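A sketch of the direct-edit route on a miniature .config (the real file has thousands of lines, and the kernel tree also ships a helper, scripts/config, that performs edits like these):

```shell
# Disabled options appear as "# CONFIG_FOO is not set", never as FOO=n.
cat > /tmp/config.demo << "EOF"
# CONFIG_VIRTIO_BLK is not set
CONFIG_EXT4_FS=m
EOF

# Turn VIRTIO_BLK on, and make EXT4 built-in instead of a module:
sed -i \
    -e 's/^# CONFIG_VIRTIO_BLK is not set$/CONFIG_VIRTIO_BLK=y/' \
    -e 's/^CONFIG_EXT4_FS=m$/CONFIG_EXT4_FS=y/' \
    /tmp/config.demo

cat /tmp/config.demo
# → CONFIG_VIRTIO_BLK=y
# → CONFIG_EXT4_FS=y
```

After an edit like this, make olddefconfig settles the dependencies.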

A note on =y vs =m. Setting a driver to y compiles it directly into the kernel image. Setting it to m builds it as a loadable module. For boot-critical drivers — the ones needed to mount root — you want =y. If the disk driver is a module, the kernel needs to load it from the disk it can't read yet. Chicken-and-egg problem. (An initramfs solves this, but LFS doesn't use one. Simpler.)

Step 4: Compile

make

That's it. One command. The kernel build system handles the rest.

What happens behind the scenes: thousands of C files compiled, linked into vmlinux (the raw kernel), then compressed into arch/x86/boot/bzImage — the bootable kernel image. On modern hardware (or a decently-specced VM), this takes a few minutes. On a single-core setup, maybe 20-30 minutes.
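One practical note: plain make builds one file at a time; passing -j$(nproc) runs one job per CPU, which is where the "few minutes" figure comes from. A toy demonstration of -j with two independent targets (the /tmp path and Makefile are illustrative, not the kernel tree):

```shell
rm -rf /tmp/jdemo && mkdir -p /tmp/jdemo
printf 'all: a.out b.out\na.out:\n\ttouch a.out\nb.out:\n\ttouch b.out\n' \
    > /tmp/jdemo/Makefile

# With -j, independent targets (like independent .c files) build in parallel.
make -C /tmp/jdemo -j"$(nproc)"
ls /tmp/jdemo
```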

The result: arch/x86/boot/bzImage. About 14MB. That's your entire operating system kernel. Every process scheduler, memory manager, filesystem driver, network stack — 14 megabytes.

Step 5: Install Modules

make modules_install

Loadable kernel modules go to /lib/modules/6.16.1/. These are drivers and features not compiled into the kernel directly — loaded on demand when hardware is detected or functionality is requested.

modules_install also generates:

  • modules.dep — dependency information (module A requires module B)
  • modules.alias — maps hardware IDs to module names (so udev knows which module to load for which device)
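The modules.alias lookup can be sketched with a two-line sample. The alias patterns below mirror the real virtio ones (device ID 2 is block, 1 is net), but treat the file and the matcher as illustrative:

```shell
cat > /tmp/modules.alias.demo << "EOF"
alias virtio:d00000001v* virtio_net
alias virtio:d00000002v* virtio_blk
EOF

# The kernel exposes each device's modalias string in sysfs; udev matches
# it against these glob patterns and modprobes the winner. A simplified
# matcher that handles only the * wildcard:
modalias="virtio:d00000002v00001AF4"
awk -v m="$modalias" \
    '$1 == "alias" { p = $2; gsub(/\*/, ".*", p); if (m ~ "^" p "$") print $3 }' \
    /tmp/modules.alias.demo
# → virtio_blk
```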

Step 6: Install the Kernel

cp -iv arch/x86/boot/bzImage /boot/vmlinuz-6.16.1-lfs-12.3
cp -iv System.map /boot/System.map-6.16.1
cp -iv .config /boot/config-6.16.1

Three files in /boot:

  • vmlinuz-6.16.1-lfs-12.3 — the compressed kernel image. This is what GRUB loads. The vmlinuz name is convention: vmlinux (the kernel image, named for its virtual memory support), with the final x swapped for z to mark compression.
  • System.map — symbol table mapping memory addresses to function names. Used for debugging kernel panics. When the kernel oops says "BUG at ffffffff81234567", System.map tells you which function that is.
  • .config — the exact configuration used to build this kernel. Reproducibility. If you need to rebuild or tweak, you have the recipe.
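Symbol lookup against System.map can be sketched in a few lines. The addresses and names below are made up; real maps are sorted ascending by address, and a crash address belongs to the nearest symbol at or below it:

```shell
cat > /tmp/System.map.demo << "EOF"
ffffffff81234000 T ext4_readdir
ffffffff81234500 T ext4_htree_fill_tree
ffffffff81235000 T ext4_release_dir
EOF

# Equal-width lowercase hex compares correctly as plain strings, so we
# can walk the sorted map and keep the last symbol <= the crash address.
addr=ffffffff81234567
awk -v a="$addr" '$1 <= a { sym = $3 } END { print sym }' /tmp/System.map.demo
# → ext4_htree_fill_tree
```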

GRUB: The Bootloader

The kernel is on disk. But the CPU doesn't know that. When you power on, the BIOS loads the first 512 bytes of the boot disk — the Master Boot Record (MBR). Those 512 bytes need to contain enough code to find and load the kernel.

GRUB (GRand Unified Bootloader) is that code.

Installation

grub-install /dev/vdb

This writes GRUB's Stage 1 code to the MBR of /dev/vdb, and installs Stage 2 files (the full GRUB with filesystem drivers, menu system, etc.) to /boot/grub/.

Stage 1 is tiny — fits in 446 bytes (the MBR's boot code area). Its only job: load Stage 2. Stage 2 is the full bootloader: reads grub.cfg, shows a menu (if configured), loads the kernel. (GRUB 2's own names for these pieces are boot.img and core.img; core.img is embedded in the gap between the MBR and the first partition, then loads its modules from /boot/grub/. The two-stage idea is the same.)
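The MBR layout is easy to poke at with dd. A sketch on a fake 512-byte image (real MBRs end with the boot signature 0x55 0xAA; /tmp/mbr.demo is illustrative):

```shell
# Layout: bytes 0-445 boot code, 446-509 partition table (4 x 16 bytes),
# bytes 510-511 the 0x55 0xAA boot signature.
dd if=/dev/zero of=/tmp/mbr.demo bs=512 count=1 2>/dev/null
printf '\125\252' | dd of=/tmp/mbr.demo bs=1 seek=510 conv=notrunc 2>/dev/null
# (\125 \252 are octal for 0x55 0xAA)

# Stage 1 must fit in the first 446 bytes:
dd if=/tmp/mbr.demo bs=1 count=446 2>/dev/null | wc -c    # → 446
od -A d -t x1 -j 510 /tmp/mbr.demo | head -n 1            # → 0000510 55 aa
```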

Configuration

cat > /boot/grub/grub.cfg << "EOF"
# Begin /boot/grub/grub.cfg

set default=0
set timeout=5

insmod ext2

set root=(hd0,1)

menuentry "GNU/Linux, Linux 6.16.1-lfs-12.3" {
    linux /boot/vmlinuz-6.16.1-lfs-12.3 root=/dev/vdb1 ro
}

EOF

Line by line:

  • set default=0 — boot the first menu entry by default
  • set timeout=5 — wait 5 seconds before auto-booting (time to press keys if needed)
  • insmod ext2 — load GRUB's ext2/3/4 filesystem module (GRUB uses the same module for all ext variants)
  • set root=(hd0,1) — tell GRUB where to find files. hd0 = first disk, 1 = first partition. GRUB's numbering: disks start at 0, partitions start at 1.
  • menuentry — a boot option with a human-readable label
  • linux /boot/vmlinuz... — path to the kernel image (relative to GRUB's root)
  • root=/dev/vdb1 — kernel parameter: use this partition as the root filesystem
  • ro — mount root read-only initially (fsck runs, then it's remounted read-write)

The Complete Boot Sequence

Now we can trace the entire path from power-on to login prompt:

  1. BIOS — POST (Power-On Self Test), finds boot disk, loads MBR
  2. GRUB Stage 1 (MBR, 446 bytes) — loads Stage 2 from /boot/grub/
  3. GRUB Stage 2 — reads grub.cfg, loads kernel into memory
  4. Linux kernel — initializes hardware, mounts root filesystem (from root= parameter), starts /sbin/init
  5. init (PID 1) — reads /etc/inittab, enters runlevel 3
  6. rc scripts — mount filesystems, start udev, configure network, start syslog
  7. agetty — spawns on tty1-tty6, presents login prompt
  8. login — authenticates user, starts shell
  9. bash — reads /etc/profile, presents prompt

Every single step in this chain is something we built or configured. The BIOS is the only thing we didn't create. Everything from GRUB onward — our code, our config, our system.

That's not true of any distro you download. Somewhere in their boot chain, there's abstraction you can't see. Here, it's glass all the way down.

Kernel Configuration: The Rabbit Hole

We used defconfig plus a few VM-specific options. That works. But the full kernel configuration has over 10,000 options. You could spend days in make menuconfig, enabling and disabling features.

Some choices that matter:

  • File systems: We enabled ext4. You could add XFS, Btrfs, FAT32 (for USB drives), NFS (for network shares).
  • Networking: TCP/IP is on by default. But firewall support (netfilter/iptables), bridging, VLANs — all optional.
  • Security: SELinux, AppArmor, seccomp — all kernel features, all off by default in defconfig.
  • Scheduling: The default scheduler (EEVDF in 6.16.x) is fine for general use. Real-time kernels use different schedulers.
  • Debugging: You can enable kernel debugging, ftrace, kprobes — powerful tools for understanding kernel behavior. Adds overhead, usually disabled in production.

The beautiful thing: you know where these options live. make menuconfig shows you the hierarchy. .config is a text file. Nothing is hidden.

What We Built

After this chapter:

  • /etc/fstab tells the system what to mount
  • /boot/vmlinuz-6.16.1-lfs-12.3 is our custom-compiled kernel
  • /boot/grub/grub.cfg tells GRUB how to load it
  • GRUB is installed in the MBR

Our system can boot. Not theoretically. Actually boot. From a cold start to a login prompt, running nothing but code we compiled from source.

One more post. Let's wrap this up.


Previous: Bootscripts and System Configuration · Next: The Finish Line

All posts in this series: Linux From Scratch