The homelab has not undergone any major changes in recent months due to personal reasons, but now is a good time to get back on track. Today I want to share how I managed to connect a completely remote server to my Kubernetes cluster, and how that server is able to write directly to my home NAS, with the help of Tailscale. This might come in handy for those who are thinking of setting up their own VPS to connect to their K8s cluster, whether it is hosted at home or in the cloud, or for those who simply want to connect two remote devices sitting on different networks behind firewalls....
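For a rough idea of what that looks like in practice, here is a minimal sketch of joining a remote node to a tailnet and reaching the NAS over the mesh; the auth key, hostname, and NFS export path are placeholders, not the actual values from my setup.

```bash
# On the remote VPS: join the tailnet with a pre-generated auth key (placeholder value).
sudo tailscale up --authkey tskey-auth-XXXXXXXXXX --hostname vps-node

# Once both ends are on the tailnet, the NAS is reachable by its Tailscale IP or MagicDNS name.
# Hypothetical NFS export mounted over the mesh:
sudo mount -t nfs nas.tailnet-name.ts.net:/export/backups /mnt/nas
```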
K8s with external Ceph, disaster recovery, and StorageClass migration
In the past couple of weeks I was able to source matching mini USFF PCs, upgrading the mini homelab from 14 CPU cores to 18! Along with this I decided to attach a 2.5GbE NIC and a 1TB NVMe drive to each device to be used for Ceph, allowing for hyper-converged infrastructure. Ceph on its own is a huge topic. It has so many moving parts: monitors, metadata servers, OSDs, and placement groups, to name a few....
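For a quick look at those moving parts on a running cluster, a couple of read-only commands go a long way; this is just a sketch, not the deployment steps from the post.

```bash
# Overall health plus counts of monitors, managers, OSDs, and placement groups.
ceph status

# How the per-node NVMe OSDs map onto hosts in the CRUSH hierarchy.
ceph osd tree
```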
Low-power NAS with HP T630 and Openmediavault
Retro post from October 2023. I used to run my k3s cluster on top of my OPNsense box, an HP T630 thin client, and a Lenovo Mini PC. I've recently replaced the first two with two more Mini PCs, also from HP, running 6th-gen Intel i5 CPUs. So I was again sitting on the fence about whether I should just sell the T630 or try to find some other use case for it....
Fixing Longhorn error FailedMount - exit status 32
A couple of days ago I started facing Longhorn issues after rebooting all three nodes. For some reason my adguard deployment was stuck trying to mount the PV. I run the adguard deployment with RWX volumes, which means they are mounted over NFS.

    Events:
      Type     Reason       Age                 From     Message
      ----     ------       ----                ----     -------
      Warning  FailedMount  45m (x3 over 56m)   kubelet  Unable to attach or mount volumes: unmounted volumes=[adguard-conf-pv adguard-work-pv], unattached volumes=[], failed to process volumes=[]: timed out waiting for the condition
      Warning  FailedMount  16m (x23 over 61m)  kubelet  MountVolume....
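Since exit status 32 ultimately comes from mount(8) failing on the node, a sensible first check is whether the node can act as an NFS client at all and whether the RWX share-manager pod is healthy. This is a generic troubleshooting sketch, not necessarily the fix described in the post.

```bash
# Longhorn serves RWX volumes over NFS, so the nodes need NFS client utilities installed.
dpkg -l | grep nfs-common        # Debian/Ubuntu nodes; rpm -qa | grep nfs-utils on RHEL-like nodes

# Check that the share-manager pod backing the RWX volume is up and serving.
kubectl -n longhorn-system get pods | grep share-manager
```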
Security and observability with Cilium on my 5G network
Last June I shared a post about deploying a 4G core network, exploring the containerization of 4G telco applications at home with GNS3. At the time, GNS3 was acting as another layer of virtualization, since it was running as a VM on top of my NAS. This time I've decided to convert my main server from NAS-first equipment to a hypervisor-first solution, allowing me to spin up VMs faster and more efficiently with the help of Terraform and Ansible....
Secure HTTP access to services with Traefik, Cert-manager and Cloudflare
Before I started working on spinning up my 3-node K3s cluster, I was under the impression that Traefik would be one of the easiest services to migrate from my Docker setup, since it already had some kind of native integration with Kubernetes in the form of available custom resources. Unfortunately, that wasn't my experience. Reading through the custom values YAML file and referring to the available documentation to figure out how the Docker configuration maps to a Kubernetes deployment wasn't as straightforward as I expected....
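For context, the end state in Kubernetes revolves around Traefik's custom resources rather than Docker labels. Below is a hedged sketch of an IngressRoute; the hostname, namespace, service, and TLS secret are all placeholder names, not the values from my cluster.

```yaml
# Hypothetical IngressRoute: route HTTPS traffic for a placeholder hostname to a ClusterIP service,
# terminating TLS with a certificate that cert-manager has stored in a Secret.
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`whoami.example.com`)
      kind: Rule
      services:
        - name: whoami
          port: 80
  tls:
    secretName: whoami-example-com-tls
```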
Troubleshooting low throughput on Proxmox
When I initially spun up my k8s cluster, I got everything working but I kept experiencing network disconnects. It turns out it was due to my NIC. It was quite difficult to notice this, or maybe I just never doubted my hypervisor and the hardware. Eventually I thought of checking the dmesg logs from within Proxmox.

    Sep 29 14:54:58 pve1 kernel: e1000e 0000:00:1f.6 eno1: Detected Hardware Unit Hang:
      TDH                  <22>
      TDT                  <bb>
      next_to_use          <bb>
      next_to_clean        <22>
    buffer_info[next_to_clean]:
      time_stamp           <100cb7acb>
      next_to_watch        <23>
      jiffies              <100cb7d99>
      next_to_watch....
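One commonly reported workaround for e1000e "Detected Hardware Unit Hang" errors (shown here as a generic sketch for eno1, and not necessarily the exact fix this post lands on) is to disable segmentation offloads on the affected interface.

```bash
# Turn off TCP/generic segmentation offload and generic receive offload on eno1.
ethtool -K eno1 tso off gso off gro off

# On Proxmox this can be made persistent with a post-up hook in /etc/network/interfaces, e.g.:
#   post-up /sbin/ethtool -K eno1 tso off gso off gro off
```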
Re-engineering the Homelab with IaC and Kubernetes: An overview
In relation to my previous post, where I mentioned that I would be starting a new journey learning IaC, or Infrastructure-as-Code, today I am very happy to record the milestone of finally achieving a stable Kubernetes cluster created with the help of Ansible and Terraform. At the time of writing, only two services have been migrated from the Docker environment into the new K8s cluster. One of those is my DNS, which has also now been replaced by AdGuard Home (sorry Pihole!...
State of the Network — the first 120 days
The past months have been crazy since the inception of my homelab. There has been so much reading here and there, and the bulk of what used to be my idle time has since been allotted to technical research and self-development. My writing has not been able to keep up either, because of all the changes and modifications I've been making from the get-go. The network just reached the four-month mark and it's already about to undergo a somewhat major re-design....
Keepalived with Pihole for DNS HA
In my previous post about my DNS, I mentioned that I migrated Pihole from Unraid to the Proxmox host that runs my router. But in fact, on top of that, I left an instance of Pihole on Unraid running inside an LXC container. Together with that, I also configured keepalived for high availability. Hosting the DNS on the same hypervisor as my router should already be sufficient (I'd say my OPNsense VM is more likely to face issues than an LXC container), but I still wanted to try out a use case for keepalived....
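For reference, the core of a keepalived setup like this is a single VRRP instance that floats a virtual IP between the two DNS hosts; the interface name, virtual IP, and priorities below are illustrative placeholders rather than my actual values.

```
# Sketch of /etc/keepalived/keepalived.conf on the primary DNS host.
vrrp_instance DNS_VIP {
    state MASTER            # the backup host would use state BACKUP
    interface eth0          # placeholder interface name
    virtual_router_id 53
    priority 150            # backup host gets a lower priority, e.g. 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.53/24     # placeholder virtual IP that clients point to as their DNS server
    }
}
```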