Over several years I set up home automation / monitoring across multiple sites. I recently rebuilt the system in order to simplify and maintain consistency between sites. What I describe below isn’t necessarily the best or simplest way, but I currently find it works best for my needs.
Situation #
We have three different “sites” with the following common requirements:

- I don’t particularly trust my outdoor security cameras, and therefore wanted to isolate them so that they could only communicate with internal resources. In other words: the cameras shouldn’t be directly accessible from the Internet, and shouldn’t be able to phone home.
- Motion sensors triggering lights in certain locations. For example: kitchen under-cabinet lighting, and front-door external lighting at night.
- Easy access to live camera feeds.
- An outage at one site shouldn’t affect the other sites.
- Image and text notifications to multiple Android and iOS mobile devices.
Each site currently has an OPNsense router, and the routers are configured with a site-to-site VPN so that we can access the internal resources of each network directly.
Other details:

- One site has/had a spotty Internet connection. We used to have DSL and a terrible satellite backup, but have since switched to a more reliable satellite Internet provider. This site also occasionally has power outages.
- The other two sites have faster wired connections.
Options #
Over the years I’ve used a mix of commercial “DVR” security camera hardware and software, but ultimately a few years ago settled on Home Assistant and Frigate. This combination is very versatile, and can be as simple or as complex to install as one desires.
Home Assistant has several deployment options, a few key ones:
- Home Assistant Operating System is perhaps the simplest to deploy, operate, and maintain.
- Container simply packages Home Assistant up in an OCI (Docker-compatible) container.
- Home Assistant Green is a hardware-based solution which has HA already installed.
My (Current) Solution #
I opted for a more complex deployment, based around Home Assistant’s Container deployment option.
Regarding locking down my security cameras (and other IoT devices): I have a VLAN and a wifi SSID at each site which is configured to explicitly block Internet access. Devices on these VLANs and wifi networks can only communicate with each other and with other local network devices.
At each site I use a relatively cheap x86-64 laptop, whose built-in battery rides out short power outages. Each laptop runs NixOS. The benefit of NixOS for this project is that I can maintain largely the same configuration across the machines and deploy to all of them from one place using the nixops deployment tool.
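A nixops deployment along these lines can be sketched as a `network.nix` that imports a shared module per site. This is illustrative only; the host names, addresses, and file names below are placeholders, not my actual configuration:

```nix
# network.nix — hypothetical nixops 1.x deployment spanning the sites.
{
  network.description = "Home automation sites";

  site1 = { config, pkgs, ... }: {
    imports = [ ./common.nix ];           # shared HA/Frigate/nginx config
    deployment.targetHost = "10.0.1.10";  # reachable over the site-to-site VPN
  };

  site2 = { config, pkgs, ... }: {
    imports = [ ./common.nix ];
    deployment.targetHost = "10.0.2.10";
  };
}
```

With this in place, `nixops deploy -d home` pushes the shared configuration to every site in one step.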
Each NixOS instance is configured to run a Home Assistant OCI container, a Frigate OCI container, and a couple of other services. Home Assistant does not officially support NixOS, which is why I opted for the OCI container approach.
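NixOS can declare such containers with its `virtualisation.oci-containers` module. A minimal sketch of the two containers, assuming standard upstream images (the host paths, timezone, device node, and ports are placeholders to adapt):

```nix
# common.nix fragment — a sketch, not my exact configuration.
{
  virtualisation.oci-containers = {
    backend = "docker";
    containers = {
      homeassistant = {
        image = "ghcr.io/home-assistant/home-assistant:stable";
        volumes = [ "/var/lib/hass:/config" ];
        environment.TZ = "America/New_York";
        extraOptions = [ "--network=host" ];  # simplest for device discovery
      };
      frigate = {
        image = "ghcr.io/blakeblackshear/frigate:stable";
        volumes = [
          "/var/lib/frigate:/config"
          "/var/lib/frigate/media:/media/frigate"
        ];
        ports = [ "5000:5000" "8554:8554" ];  # web UI, RTSP restream
        extraOptions = [ "--device=/dev/dri/renderD128" ];  # optional hw accel
      };
    };
  };
}
```

The container definitions become systemd services, so they restart automatically after the power blips mentioned above.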
Home Assistant at each site manages automations locally, but to provide a centralized view I use Remote Home-Assistant. The site with the most reliable Internet and power serves as the primary site. The other sites feed into it, so users can simply go to the primary Home Assistant instance to see and interact with everything.
For notifications I use the Signal Messenger integration, which is a bit of a pain to use. It requires running a Dockerized Signal REST API instance, which can occasionally lose its authentication with Signal. But once set up it’s very convenient. I have a Signal group for each site, and all important notifications, including security camera images, are broadcast to each group as appropriate. This lets us discuss events in real time in the group feed, and provides an easy-to-browse history of events in an interface which feels natural. It works so much better than the more traditional “ephemeral” notifications apps typically use.
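The Signal REST API can also be declared as an OCI container alongside the others. A sketch, assuming the commonly used `bbernhard/signal-cli-rest-api` image (verify the image tag and volume path against its documentation before relying on this):

```nix
# Sketch of the Signal REST API container; paths and ports are placeholders.
{
  virtualisation.oci-containers.containers.signal-api = {
    image = "bbernhard/signal-cli-rest-api:latest";
    ports = [ "127.0.0.1:8080:8080" ];  # bind to loopback: HA-only access
    volumes = [ "/var/lib/signal-api:/home/.local/share/signal-cli" ];
  };
}
```

Home Assistant’s `signal_messenger` notify platform is then pointed at `http://127.0.0.1:8080`.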
One other important element to this setup is access. I have a cheap cloud virtual server running nginx on a domain name, and SSH tunnels are automatically established from each site’s laptop to this cloud server, exposing each site’s local nginx proxy. The cloud server then routes traffic over these SSH tunnels depending on the desired site and service.
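On NixOS, a persistent reverse tunnel like this can be kept alive with a small systemd service. A sketch; the host, user, ports, and key path are all placeholders, and `ExitOnForwardFailure` plus `Restart=always` is what keeps the tunnel re-establishing itself after connectivity drops:

```nix
# Persistent reverse SSH tunnel from a site laptop to the cloud server.
{ pkgs, ... }:
{
  systemd.services.cloud-tunnel = {
    description = "Reverse SSH tunnel to cloud proxy";
    wantedBy = [ "multi-user.target" ];
    after = [ "network-online.target" ];
    wants = [ "network-online.target" ];
    serviceConfig = {
      Restart = "always";
      RestartSec = "10s";
      # Expose this site's local nginx (port 443) on the cloud server's
      # loopback at port 8441, where the cloud nginx can proxy to it.
      ExecStart = "${pkgs.openssh}/bin/ssh -N -o ServerAliveInterval=30 -o ExitOnForwardFailure=yes -i /var/lib/tunnel/id_ed25519 -R 127.0.0.1:8441:127.0.0.1:443 tunnel@cloud.example.com";
    };
  };
}
```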
Each laptop at each site runs nginx as a reverse proxy, using the requested host name to determine which service to serve. I use the following convention:
https://SERVICE-SITE.MYDOMAIN.TLD
The reason I use a dash rather than a dot is to simplify my use of SSL certificates. I use a wildcard certificate that is shared with each site. A wildcard only matches a single label (one “dot” level deep), so a certificate for *.MYDOMAIN.TLD won’t match SERVICE.SITE.MYDOMAIN.TLD.
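In NixOS this naming convention maps naturally onto per-service virtual hosts. A sketch of one such host, with placeholder names and certificate paths:

```nix
# One virtual host per service, following the SERVICE-SITE naming scheme.
{
  services.nginx = {
    enable = true;
    virtualHosts."ha-site1.example.com" = {
      forceSSL = true;
      sslCertificate    = "/var/lib/certs/wildcard.pem";  # *.example.com
      sslCertificateKey = "/var/lib/certs/wildcard.key";
      locations."/" = {
        proxyPass = "http://127.0.0.1:8123";  # Home Assistant
        proxyWebsockets = true;               # the HA frontend needs websockets
      };
    };
  };
}
```

A matching `frigate-site1.example.com` host would proxy to Frigate’s port instead, and all of them are covered by the single wildcard certificate.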
Within the VPN mesh, I override the DNS records for my domain so that the internal IP addresses are used instead of the public ones. This way, within my VPN mesh, access to Home Assistant and other services stays “within” the network instead of routing out to the public Internet and back through my cloud server and down the SSH tunnels.
Future #
The way I manage the TLS certificate is rather clunky, and I recently learned about Cloudflare Tunnel (cloudflared), which sounds like a good alternative to my use of SSH tunnels with the cloud server. Alternatively, I’m considering doing away with direct Internet access entirely and just putting in place a client VPN for external access.