100 days of homelab Activity Log
I’ll just log mundane things here and update this post. Cool stuff will get a dedicated post. Do I go top to bottom, or bottom to top? I guess we’ll find out.
day 002 20250306
- made this blog post (baby steps okay?)
- logged into one of my ingress boxes and looked at logs. 99% bot traffic. no surprise.
- one of my lanecloud hypervisor hosts has 550 days of uptime… that’s something I guess.
- started on an ansible role to add custom startup scripts to clammy-ng (rough sketch after this list)
- got distracted with some hugo theme tweaks that I couldn’t quite make work
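I’ve only just started that role, so here’s a rough sketch of the shape I have in mind — placeholder file names, not what clammy-ng will actually use:

```yaml
# roles/custom_startup/tasks/main.yml (hypothetical layout and names)
- name: Install custom startup script
  ansible.builtin.copy:
    src: custom-startup.sh
    dest: /usr/local/sbin/custom-startup.sh
    owner: root
    group: root
    mode: "0755"

- name: Install systemd unit for the startup script
  ansible.builtin.template:
    src: custom-startup.service.j2
    dest: /etc/systemd/system/custom-startup.service
    mode: "0644"

- name: Enable the startup service
  ansible.builtin.systemd:
    name: custom-startup.service
    enabled: true
    daemon_reload: true
```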
day 003 20250308
Ended up being kind of an Armbian Day
- submitted the removal of an RK3568 bugfix patch since it got merged into LTS kernels and was causing build conflicts.
- Ran a bunch of glmark2 and sbc-bench runs on boards, comparing kernels. No real conclusions other than I think an RK3588 DMA patch someone added caused a slight performance regression.
day 004 20250309
My Synology backups that go to my eSATA drive have been dead for a month. Got the drive online… now it says it’s full. Really need to add back push notifications for that. More work to be done to get this resolved.
Made a new playbook for my Provision KVM guest ansible role. It just uses vars_prompt to make it easy to spin up a few VMs in a hurry. I like it.
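For reference, the vars_prompt pattern is about this simple; the role and variable names below are illustrative, not the actual ones from my role:

```yaml
# provision-kvm-guest.yml (illustrative names)
- name: Provision a KVM guest interactively
  hosts: kvm_hypervisors
  vars_prompt:
    - name: guest_name
      prompt: "Name of the new VM"
      private: false
    - name: guest_memory_mb
      prompt: "Memory (MB)"
      default: "2048"
      private: false
    - name: guest_vcpus
      prompt: "vCPUs"
      default: "2"
      private: false
  roles:
    - role: provision_kvm_guest
      vars:
        vm_name: "{{ guest_name }}"
        vm_memory_mb: "{{ guest_memory_mb }}"
        vm_vcpus: "{{ guest_vcpus }}"
```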
Wasted a lot of time trying to debug performance issues with my librespeed instance. Part of the problem is simply that VM performance on Synology is poor, even with the appropriate acceleration enabled. Another part is that it seems to be a known issue that nginx / ingress-nginx introduces some overhead when it proxies librespeed. I managed to do some tuning that made it better, but it’s still unable to hit wire speed on gigabit. I’m extra amused that librespeed is more performant on a VM running on my Rock 5B than it is running on my Synology DS1821+.
Here’s some annotations for the service’s ingress that sort of helped with nginx… but it needs more work TBH:

```yaml
annotations:
  nginx.ingress.kubernetes.io/proxy-body-size: "21M"
  nginx.ingress.kubernetes.io/proxy-buffering: "on"
  nginx.ingress.kubernetes.io/proxy-buffers-number: "50"
```
day 005 20250310
Just kind of did my rounds and looked at some things.
- Verified Synology backups are happy now that I cleaned up my external drive
- Updated Netbox
- deleted retired LET deal VMs
- updated renewal date field on remaining LET deal VMs
- Verified db dumps and borgmatic backups are running on my Netbox server
- Looked at some healthchecks.io helm charts. Would be nice to deploy that in the future and integrate it with borgmatic (sketch of that hook below)
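On the borgmatic side, the integration should just be its healthchecks hook; a sketch of what I believe that looks like on recent borgmatic versions (the URL is a placeholder, and older versions nest this under hooks:):

```yaml
# borgmatic config excerpt (1.8+ style; placeholder URL)
healthchecks:
  ping_url: https://healthchecks.example.internal/ping/<check-uuid>
```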
day 006 20250311
Pretty beat today so just did some fiddling. I fired up some temporary Debian pods, one on my Synology monocluster VM and one on my Rock 5B k3s cluster, and ran quick yabs.sh benchmarks in them. The pod on the 6-core ARM VM was faster in both single-threaded and multi-threaded tests than the one on the 4-core VM on my Synology DS1821+. Between this and the librespeed benchmarks, I think I’ve convinced myself the monocluster is moving off the Synology and onto a Rock 5B.
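For the curious, a throwaway benchmark pod like that is basically a one-liner; the pod name and image tag here are just examples:

```bash
# Temporary Debian pod that removes itself when the shell exits
kubectl run yabs-test --rm -it --restart=Never --image=debian:bookworm -- bash

# inside the pod:
apt-get update && apt-get install -y curl
curl -sL https://yabs.sh | bash
```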
WTF is the monocluster
I have 2 k3s deployments at home. The first one is the “armlab” cluster. It’s 6 VMs running on 3 Rock 5B boards, and it’s meant to be torn down and recreated a lot. It has an HA control plane via kube-vip, plus MetalLB, Cilium, and some basic services deployed via flux. I really wanted to figure out using BGP in the HA cluster before deploying to it, but I decided I should do that later and first just focus on getting my nomad cluster.
The monocluster was my KISS approach: just focus on a “best practices” IaC setup for k8s with fluxcd, and on bootstrapping all the extra things like external-dns, external-secrets, etc. One detail is that I was also trying to design it as a solution I could run on a single-node VPS. It took me a while to solve getting nginx to listen on ports 80/443 as node ports, but I did it. I’ve been redeploying it like crazy while I add functionality, but it’s time to set up its final (temporary) home… and move stuff over to it.
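I won’t paste my exact setup here, but the general shape of the node port trick on k3s is: widen the apiserver’s node port range, then pin ingress-nginx’s NodePort service to 80/443. Roughly this (values are illustrative):

```yaml
# /etc/rancher/k3s/config.yaml — allow low node ports
kube-apiserver-arg:
  - "service-node-port-range=80-32767"
```

```yaml
# ingress-nginx Helm values — pin the controller service to 80/443
controller:
  service:
    type: NodePort
    nodePorts:
      http: 80
      https: 443
```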
day 007 20250313
I found one of my Rock 5B boards that I used as an Armbian build node a year or so ago with the Googalator kernel. It has a nice NVMe in it, making it a great fit to be the home of my monocluster. So I decided to build an image and get it installed… Naturally I hit some unexpected curveballs:
- My other Synology cache drive alerted that it had 1% lifespan left… I had to replace it. Now I’m running my cache on a pair of $20 NVMe drives… RIP Crucial P3.
- I booted edge kernel 6.14.0-rc4… it was weird… the NVMe was intermittent and eventually disappeared. I went back and forth a few times and then decided to build -current, aka LTS kernel 6.12.y.
- That worked better… but something with armbian-install or U-Boot on SPI flash isn’t booting from NVMe, and I don’t feel like debugging it.
- Just using the SD card for the bootloader for now. Seems to work.
day 008 20250314
I’m surprised by my burst of productivity given I’ve been on like 4 hours of sleep today.
- The NVMe boot problem seems to partially have something to do with the NVMe drive I was using… it would init fine on a reboot, but not on a cold boot. Storage controllers and SBCs can be picky. I pulled a 512GB SK hynix out of an external enclosure and now it’s happy.
- I re-installed monocluster as bare-metal k3s on the above-mentioned Rock 5B (install sketch after the speed test output below).
- I also tested librespeed… looks good!
```
root@ronny-1:~# librespeed-cli --skip-cert-verify --local-json librespeed.json --concurrent 2 --server 1
Using local JSON server list: librespeed.json
Selected server: mtest [librespeed.mtest]
You're testing from: {"processedString":"172.17.20.115 - private IPv4 access","rawIspInfo":""}
Ping: 0.64 ms Jitter: 0.38 ms
Download rate: 925.08 Mbps
Upload rate: 953.28 Mbps
```
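The k3s install itself was nothing exotic; something along these lines, give or take flags (the --disable choices here are illustrative rather than my exact command, since I run ingress-nginx instead of the bundled Traefik):

```bash
# Single-node k3s server, skipping the bundled Traefik and ServiceLB
curl -sfL https://get.k3s.io | sh -s - server \
  --disable traefik \
  --disable servicelb
```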
day 009 20250315
Time to start getting some actual workloads moved over… or in my case, test some workloads.
I focused on some basic nginx servers that are just directory indexes for data on an NFS mount. Was a great starting point.
I lost a lot of time trying to use app-template to do the things. It’s cool, but also extremely overkill and ended up being a bad fit for my use case.
I dumbed things down to a unified manifest file for each server + service. I took advantage of kustomize’s configMapGenerator (via flux) to get my configuration hashed and cause pods to redeploy when I update it. Works well.
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: web-static
resources:
  - ./namespace.yaml
  - ./nginx-linuxmirror.yaml
  - ./nginx-otherserver.yaml
configMapGenerator:
  - name: nginx-linuxmirror
    files:
      - default.conf=nginx-configs/default-linuxmirror.conf
  - name: nginx-default
    files:
      - default.conf=nginx-configs/default.conf
```
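And for completeness, a trimmed-down sketch of what one of those unified manifests looks like conceptually; the image, NFS server, and paths are placeholders, not my real ones. Kustomize rewrites the configMap reference to its hashed name (nginx-linuxmirror-<hash>), which is what forces the rollout when the config changes. The generated default.conf is basically just a server block pointing at the NFS mount with autoindex on, plus a matching Service (omitted here).

```yaml
# nginx-linuxmirror.yaml (illustrative sketch, Deployment only)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-linuxmirror
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-linuxmirror
  template:
    metadata:
      labels:
        app: nginx-linuxmirror
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/conf.d
            - name: mirror-data
              mountPath: /srv/mirror
              readOnly: true
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-linuxmirror   # rewritten to the hashed name by kustomize
        - name: mirror-data
          nfs:
            server: 172.17.0.10       # placeholder NFS server
            path: /volume1/mirror     # placeholder export
```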