Karan Sharma

State of My Homelab 2025

9 minutes (2153 words)

🔗Introduction

For the past five years, I have maintained a homelab in various configurations. This journey has served as a practical exploration of different technologies, from Raspberry Pi clusters running K3s to a hybrid cloud setup and eventually a cloud-based Nomad setup. Each iteration provided valuable lessons, consistently highlighting the operational benefits of simplicity.

This article details the current state of my homelab. A primary motivation for this build was to dip my toes into "actual" homelabbing, that is, maintaining a physical server at home. The main design goal was to build a dedicated, reliable, and performant server that is easy to maintain. This led me to move away from complex container orchestrators like Kubernetes in favor of a more straightforward Docker Compose workflow. I will cover the hardware build, software architecture, and the rationale behind the key decisions.

🔗Hardware Configuration

After considerable research, I selected components to balance performance, power efficiency, and cost. The server is designed for 24/7 operation in a home environment, making noise and power consumption important considerations.

🔗The Build

Component   | Choice                               | Price
------------|--------------------------------------|----------
CPU         | AMD Ryzen 5 7600X (6-core, 4.7 GHz)  | $167.58
CPU Cooler  | ARCTIC Liquid Freezer III Pro 360    | $89.99
Motherboard | MSI B650M Gaming Plus WiFi           | $225.83
RAM         | Kingston FURY Beast 32GB DDR5-6000   | $136.99
Boot Drive  | WD Blue SN580 500GB NVMe             | $88.76
Storage 1   | WD Red Plus 4TB (5400 RPM)           | $99.99
Storage 2   | Seagate IronWolf Pro 4TB (7200 RPM)  | $150.00
Case        | ASUS Prime AP201 MicroATX            | $89.99
PSU         | Corsair SF750 (80+ Platinum)         | $169.99
Total       |                                      | $1,219.12
[Images: homelab build in progress; completed build with RGB and storage drives; MSI BIOS showing system information]

🔗Component Rationale

🔗System Architecture & Deployment

My previous setups involved Kubernetes and Nomad, but the operational overhead proved unnecessary for my use case. I have since standardized on a Git-based, Docker Compose workflow that prioritizes simplicity and transparency.

🔗Directory Structure and "Stacks"

The core of the system is a Git repository that holds all configurations. Each service is defined as a self-contained "stack" in its own directory. The structure is organized by machine, making it easy to manage multiple environments:

homelab/
├── deploy.sh                 # Main deployment script
├── justfile                  # Task runner for common commands
└── machines/
    ├── floyd-homelab-1/      # Primary home server
    │   ├── config.sh         # SSH and deployment settings
    │   └── stacks/
    │       ├── immich/
    │       │   └── docker-compose.yml
    │       └── paperless/
    │           └── docker-compose.yml
    └── floyd-pub-1/          # Public-facing VPS
        ├── config.sh
        └── stacks/
            ├── caddy/
            └── ntfy/

This modular approach allows me to manage each application's configuration, including its docker-compose.yml and any related files, as an independent unit.

🔗Deployment Workflow

Deployments are handled by a custom deploy.sh script, with a justfile providing a convenient command-runner interface. The process is fundamentally simple:

  1. Sync: rsync copies the specified stack's directory from the local Git repository to a REMOTE_BASE_PATH (e.g., /opt/homelab) on the target machine.
  2. Execute: ssh runs the appropriate docker compose command on the remote machine.

Each machine's connection settings (SSH_HOST, SSH_USER, REMOTE_BASE_PATH) are defined in its machines/<name>/config.sh file. This file can also contain pre_deploy and post_deploy hooks for custom actions.
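
A machine's config.sh is just a small shell file that the deploy script sources. A simplified example, with illustrative values and the hooks sketched here as shell functions:

# machines/floyd-homelab-1/config.sh
SSH_HOST="192.168.1.50"            # address is a placeholder
SSH_USER="karan"
REMOTE_BASE_PATH="/opt/homelab"

# Optional hooks the deploy script runs around each deployment
pre_deploy() {
    echo "Starting deployment to ${SSH_HOST}"
}

post_deploy() {
    echo "Deployment to ${SSH_HOST} finished"
}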

The justfile makes daily operations trivial:

# Deploy a single stack to a machine
just deploy-stack floyd-homelab-1 immich

# View the logs for a stack
just logs floyd-homelab-1 immich

# Test a deployment without making changes
just dry-run floyd-homelab-1

[Image: deployment workflow demonstration]

This system provides fine-grained control over deployments, with support for actions like up, down, restart, pull, and recreate (which also removes persistent volumes).
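
Stripped to its essentials, the sync-and-execute flow looks something like this (a sketch, not the full script; MACHINE, STACK, and ACTION stand in for the script's arguments):

STACK_DIR="machines/${MACHINE}/stacks/${STACK}"
REMOTE_STACK_DIR="${REMOTE_BASE_PATH}/${STACK}"

# 1. Sync the stack directory to the target machine
rsync -az --delete "${STACK_DIR}/" "${SSH_USER}@${SSH_HOST}:${REMOTE_STACK_DIR}/"

# 2. Run the requested Docker Compose action remotely
case "${ACTION}" in
    up)       CMD="docker compose up -d" ;;
    down)     CMD="docker compose down" ;;
    restart)  CMD="docker compose restart" ;;
    pull)     CMD="docker compose pull && docker compose up -d" ;;
    recreate) CMD="docker compose down -v && docker compose up -d --force-recreate" ;;  # -v drops volumes
esac

ssh "${SSH_USER}@${SSH_HOST}" "cd ${REMOTE_STACK_DIR} && ${CMD}"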

🔗Container & Configuration Patterns

To keep the system consistent, I follow a few key patterns: every service is a self-contained stack with its own docker-compose.yml, all configuration is tracked in the Git repository and deployed through the same script, and persistent application data lives on the storage array at /mnt/storage.

🔗Multi-Machine Topology

The homelab comprises three distinct machines to provide isolation and redundancy.

This distributed setup isolates my home network from the public internet and ensures that critical public services remain online even if the home server is down for maintenance.

🔗Hosted Services

The following is a breakdown of the services, or "stacks," running on each machine. A few key services that are central to the homelab are detailed further in the next section.

🔗floyd-homelab-1 (Primary Server)

🔗floyd-monitor-public (Monitoring VPS)

🔗floyd-pub-1 (Public VPS)

🔗Service Highlights

🔗Technitium: A Powerful DNS Server

I came across Technitium DNS after seeing a recommendation from @oddtazz, and it has been a revelation. For anyone who wants more than just basic ad blocking from their DNS server, it's a game-changer. It serves as both a recursive and authoritative server, meaning I don't need a separate tool like unbound to resolve from root hints. The level of configuration is incredible, from DNSSEC, custom zones, and SOA records to fine-grained caching control.

The UI is a bit dated, but that's a minor point for me given the raw power it provides. It is a vastly underrated tool for any homelabber who wants to go beyond Pi-hole or AdGuard Home.
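
A quick way to sanity-check it from any machine on the LAN is a query with dig; the server address below is a placeholder, and the ad flag in the response indicates the answer passed DNSSEC validation:

# Recursive lookup with DNSSEC records, answered by the local Technitium instance
dig @192.168.1.53 example.com A +dnssec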

[Image: Technitium DNS Server UI]

🔗Beszel: Lightweight Monitoring

For a long time, I felt that monitoring a homelab meant spinning up a full Prometheus and Grafana stack. Beszel is the perfect antidote to that complexity. It provides exactly what I need for basic node monitoring (CPU, memory, disk, and network usage) in a simple, lightweight package.

It's incredibly easy to set up and provides a clean, real-time view of my servers without the overhead of a more complex system. For a simple homelab monitoring setup, it's hard to beat.

[Image: Beszel Monitoring UI]

🔗Gatus: External Health Checks

While Beszel monitors the servers from the inside, Gatus watches them from the outside. Running on an independent Hetzner VM, its job is to ensure my services are reachable from the public internet. It validates HTTP status codes, response times, and more.

This separation is crucial; if my entire home network goes down, Gatus is still online to send an alert to my phone. It's the final piece of the puzzle for robust monitoring, ensuring I know when things are broken even if the monitoring service itself is part of the outage.

[Image: Gatus Health Checks UI]

🔗Storage and Backup Strategy

Data integrity and recoverability are critical. My strategy is built on layers of redundancy and encryption.

🔗Storage: BTRFS RAID 1 + LUKS Encryption

I chose BTRFS for its modern features: native multi-device RAID, transparent zstd compression, and built-in data checksumming.

The two 4TB drives are mirrored in a RAID 1 array, providing redundancy against a single drive failure. The entire array is encrypted using LUKS2, with the key stored on the boot SSD for automatic mounting. This protects data at rest in case of physical theft or drive disposal.

Mount options in /etc/fstab:

/dev/mapper/crypt-sda /mnt/storage btrfs defaults,noatime,compress=zstd 0 2
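
For reference, creating this layout boils down to the following; the device names are placeholders (only crypt-sda appears in my fstab), and the keyfile/crypttab wiring is omitted:

# Encrypt both data drives with LUKS2
cryptsetup luksFormat --type luks2 /dev/sda
cryptsetup luksFormat --type luks2 /dev/sdb

# Unlock them; at boot, /etc/crypttab plus a keyfile on the boot SSD does this automatically
cryptsetup open /dev/sda crypt-sda
cryptsetup open /dev/sdb crypt-sdb

# Mirror both data and metadata across the two drives
mkfs.btrfs -L storage -d raid1 -m raid1 /dev/mapper/crypt-sda /dev/mapper/crypt-sdb

mount -o noatime,compress=zstd /dev/mapper/crypt-sda /mnt/storage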

🔗Backup: Restic + Cloudflare R2

RAID does not protect against accidental deletion, file corruption, or catastrophic failure. My backup strategy follows the 3-2-1 rule.

Daily, automated backups are managed by systemd timers running restic. Backups are encrypted and sent to Cloudflare R2, providing an off-site copy. R2 was chosen for its zero-cost egress, which is a significant advantage for restores.

The backup script covers critical application data and the Docker Compose configurations:

BACKUP_PATHS=(
    "/mnt/storage"        # All application data
    "/home/karan/stacks"  # Docker Compose configs
)

Each backup run reports its status to a healthchecks.io endpoint, which sends a push notification on failure. Its generous free tier is more than sufficient for my needs.
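
Stripped down, the script behind the systemd timer looks roughly like this; the repository URL, credential paths, and ping UUID are placeholders:

#!/usr/bin/env bash
set -euo pipefail

# Cloudflare R2 is S3-compatible, so restic talks to it via its s3 backend
export RESTIC_REPOSITORY="s3:https://<account-id>.r2.cloudflarestorage.com/homelab-backups"
export RESTIC_PASSWORD_FILE="/etc/restic/password"
export AWS_ACCESS_KEY_ID="$(cat /etc/restic/r2-access-key)"
export AWS_SECRET_ACCESS_KEY="$(cat /etc/restic/r2-secret-key)"

BACKUP_PATHS=(
    "/mnt/storage"        # All application data
    "/home/karan/stacks"  # Docker Compose configs
)

PING_URL="https://hc-ping.com/<uuid>"

# Encrypted, incremental backup; report the outcome to healthchecks.io
if restic backup "${BACKUP_PATHS[@]}"; then
    curl -fsS --retry 3 "${PING_URL}" > /dev/null
else
    curl -fsS --retry 3 "${PING_URL}/fail" > /dev/null
fi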

[Image: healthchecks.io backup monitoring dashboard]

🔗Conclusion

This homelab represents a shift in philosophy from exploring complexity to valuing simplicity and reliability. The upfront hardware investment of ~$1,200 is offset by eliminating recurring cloud hosting costs and providing complete control over my data and services.

For those considering a homelab, my primary recommendation is to start with a simple, well-understood foundation. A reliable machine with a solid backup strategy is more valuable than a complex, hard-to-maintain cluster. The goal is to build a system that serves your needs, not one that you serve.


Tags: #Homelab #Docker #Self-hosting #Infrastructure