The hardest part of running a home-lab long-term isn't setting it up — it's setting it up in a way that still makes sense six months later when you've changed things, broken things, and forgotten why you made the decisions you made. Most guides optimize for getting something working. This is about keeping it working through your own future interventions.
My current setup is built around a Raspberry Pi 3B+ and an Arch Linux main machine. The Pi handles always-on services; the Arch box handles everything else. The philosophy that's kept it stable: every service should be independently reachable, and nothing should depend on remembering a port number or an IP.
The Pi stack
The Pi runs three things that I actually use constantly:
- NAS over SSH: SSHFS mounts from the main machine when I need it. Simple, no Samba complexity, no random Windows compatibility issues. The Pi's SSH config is locked down: key-only auth, passwords disabled, an explicit AllowUsers list.
- WireGuard via PiVPN: the single best decision I made for this setup. WireGuard's handshake is fast enough that I leave it on all the time on my phone. Split tunneling routes only home-network traffic through the tunnel, so I'm not throttling everything else.
- DuckDNS: dynamic DNS so I can reach the Pi by name regardless of what my ISP has assigned me. A cron job updates the record every five minutes. The domain is stable; the IP is ephemeral and I never have to think about it.
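For the NAS bullet, the lockdown amounts to a few sshd_config lines on the Pi plus an on-demand mount from the main machine. A minimal sketch — the username, hostname, and paths here are placeholders, not my actual values:

```
# /etc/ssh/sshd_config on the Pi -- relevant hardening lines only
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
AllowUsers nas-user          # placeholder: list real users explicitly
```

```
# On the main machine: mount on demand, unmount when done
sshfs nas-user@pi.example.duckdns.org:/srv/storage /mnt/nas
fusermount -u /mnt/nas
```

Remember to restart sshd after editing the config, and test key-only login from a second terminal before closing your working session.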
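Split tunneling in WireGuard is just the client-side AllowedIPs setting: list only the home subnet instead of 0.0.0.0/0. A sketch of the phone's peer section, assuming a 192.168.1.0/24 LAN — the subnet, endpoint, and key are placeholders:

```ini
[Peer]
PublicKey = <server-public-key>
Endpoint = example.duckdns.org:51820
# Route only the home LAN through the tunnel;
# all other traffic takes the normal path.
AllowedIPs = 192.168.1.0/24
PersistentKeepalive = 25
```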
The WireGuard + DuckDNS combination is what actually makes the lab useful remotely. Without stable naming, you're either memorizing IPs or updating config files every time your ISP rotates your address. The cron updater is four lines and runs forever.
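The updater really is tiny. DuckDNS's update endpoint takes the subdomain and token as query parameters and infers your public IP when `ip=` is left empty. A sketch, with the domain and token passed as environment variables (the variable names and paths are my own invention for illustration):

```shell
# Write the updater script (shown in /tmp for illustration;
# a real install would live somewhere like /usr/local/bin).
cat > /tmp/duckdns-update.sh <<'EOF'
#!/bin/sh
# Empty ip= lets DuckDNS detect the caller's public address.
curl -fsS "https://www.duckdns.org/update?domains=${DUCKDNS_DOMAIN}&token=${DUCKDNS_TOKEN}&ip=" \
  -o /tmp/duckdns-last.log
EOF
chmod +x /tmp/duckdns-update.sh

# Crontab entry (crontab -e), every five minutes:
# */5 * * * * DUCKDNS_DOMAIN=mylab DUCKDNS_TOKEN=xxxx /tmp/duckdns-update.sh
```

The log file gives you a trivial health check: DuckDNS writes `OK` or `KO` into it on each run.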
Arch as a daily driver
I've been on Arch long enough that the "btw" joke has stopped being funny and started being just accurate. The appeal isn't masochism — it's that the system does exactly what I told it to do, and nothing else. No background services I didn't ask for, no package manager second-guessing my decisions, no distribution-level opinions about how I should configure things.
The tradeoff is maintenance overhead. Arch updates break things occasionally, and when they do, it's usually because something changed upstream that I didn't notice. The practices that keep this manageable:
- Read the Arch Linux news feed before updating. Not always, but before any big update batch. Most breakage is announced in advance.
- Never run a full system update right before you need the machine to work. Friday afternoon before a weekend project is not the time.
- Keep a working Timeshift snapshot. Btrfs snapshots are cheap and have saved me more than once. The barrier to rolling back needs to be zero.
- Document your AUR packages separately. They aren't tracked in pacman's sync databases, and they drift in ways that official packages don't.
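The list above condenses into a small pre-update script. A sketch, assuming Timeshift's btrfs backend; the path and output filename are illustrative, not prescriptive:

```shell
# Write a pre-update checklist script (saved to /tmp for illustration).
cat > /tmp/pre-update.sh <<'EOF'
#!/bin/sh
set -e
# 1. Snapshot first -- rolling back must cost nothing.
timeshift --create --comments "pre-update $(date +%F)"
# 2. Record foreign packages: pacman -Qm lists everything not in the
#    sync repos, which is where AUR packages show up.
pacman -Qm > "$HOME/aur-packages-$(date +%F).txt"
# 3. Only then run the full update.
pacman -Syu
EOF
chmod +x /tmp/pre-update.sh
```

This doesn't replace reading the news feed, but it makes the mechanical part of "update safely" a single command instead of a ritual you have to remember.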
The principle that holds it together
Everything in the lab follows one rule: if I came back to it after three months away, could I figure out what it's doing and why from the files alone? If not, I'm building something fragile.
In practice this means comments in config files, a plain-text notes file in each service directory, and no "I'll remember what this does" shortcuts. The future version of me has a bad memory and is annoyed at the past version of me for assuming otherwise.
The lab has paid off in a way I didn't fully anticipate: the operational experience transfers directly to work. Thinking about network segmentation, managing SSH keys, running WireGuard — these aren't abstract concepts when you've set them up yourself and debugged them at 11pm because something stopped working. The hands-on reps matter, and the home-lab is where I get them at a pace and scope I control.