[-] Nomad64@lemmy.world 3 points 9 months ago

Each network is different. I did this for my network which has multiple subnets and internal DNS servers sitting on the "server" subnet. The "server" subnet is excluded, since devices in there are more tightly controlled (and it would create a routing loop).

Granted, it may not be the best way, but here is how I did it:

  • Create Firewall Alias group (type: hosts) with IP addresses of internal DNS servers (PiHoles, in my case).
  • Create Firewall Alias group (type: URL Table IPs) for external DNS-over-HTTPS servers (content: https://raw.githubusercontent.com/jpgpi250/piholemanual/master/DOHipv4.txt)
  • Create a NAT Port Forward rule for each network to route all port 53 traffic to the alias (TCP/UDP, source: network, destination: !network on port 53, redirect target: DNS alias, redirect port: 53)
  • Each network (except the "server" network) gets the rule set below (order is important; a quick way to test the drops is sketched after this list)
    • Allow TCP/UDP 53 to DNS alias
    • Drop all TCP/UDP 53
    • Drop all TCP/UDP 853
    • Drop all TCP/UDP 443 traffic to the external DNS-over-HTTPS alias group
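
A quick way to sanity-check those drop rules from a client on one of these networks is a simple connectivity probe. This is a hypothetical sketch in Python, not part of my firewall config, and the resolver IP is just a well-known public example:

```python
import socket

def tcp_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Port 53 should still "work" (the NAT rule redirects it to the PiHoles);
# port 853 (DNS over TLS) should be dropped by the firewall and time out.
for host, port, expected in [("9.9.9.9", 53, True), ("9.9.9.9", 853, False)]:
    result = tcp_open(host, port)
    verdict = "OK" if result == expected else "UNEXPECTED"
    print(f"{host}:{port} reachable={result} expected={expected} -> {verdict}")
```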

Since NAT port forward rules are processed before interface/network rules, any device using port 53 for DNS (regardless of which DNS server it has configured) is automatically and transparently redirected to my PiHole servers. The drops are in place to block devices that try other common DNS methods; generally, those devices then fall back to the DHCP-provided DNS servers.
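
To see the redirect in action, here is a minimal sketch (assuming the third-party dnspython package) that asks a public resolver for pi.hole, a name only Pi-hole itself answers. Getting an answer back means the query never actually left the network:

```python
import dns.exception
import dns.message
import dns.query

# "pi.hole" is served by Pi-hole itself; a real public resolver would
# return an empty answer (NXDOMAIN) for it.
query = dns.message.make_query("pi.hole", "A")
try:
    response = dns.query.udp(query, "8.8.8.8", timeout=3)
    answers = [rr.to_text() for rrset in response.answer for rr in rrset]
    if answers:
        print("Redirect working; a PiHole answered:", answers)
    else:
        print("Empty answer; the query likely reached the real 8.8.8.8")
except dns.exception.Timeout:
    print("Timed out; port 53 may be dropped rather than redirected")
```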

I have been running this config for a few years and have found a few downsides:

  • You can't visit websites hosted at the same address as their DNS service, e.g. https://1.1.1.1
  • Although https://github.com/jpgpi250/piholemanual is updated regularly, it has contained the odd false positive (GitHub Pages had a weird overlap at one point), breaking legitimate HTTPS traffic; a quick lookup script is sketched after this list
  • My PiHole servers are configured to allow queries from all origins (theoretical security risk)
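
For the false-positive case, a small helper like this can tell you whether a broken site's address is currently on the list. It's a sketch, not part of the firewall config, and it assumes the file stays one IPv4 address per line with # comments:

```python
import sys
import urllib.request

LIST_URL = "https://raw.githubusercontent.com/jpgpi250/piholemanual/master/DOHipv4.txt"

def fetch_doh_ips(url: str = LIST_URL) -> set[str]:
    """Download the DoH list; skip blank lines and # comments."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    return {ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")}

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "1.1.1.1"
    ips = fetch_doh_ips()
    print(f"{target} {'IS' if target in ips else 'is not'} in the DoH list "
          f"({len(ips)} entries total)")
```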

Hope this helps! And remember to be careful when messing with DNS and clear those caches when testing.

[-] Nomad64@lemmy.world 9 points 10 months ago

I second Paperless NGX. I have been using it for a few years, and it has been working great!

[-] Nomad64@lemmy.world 13 points 11 months ago

I was excited for the Roborock map. Unfortunately it isn't live and doesn't appear to be interactive. A good step in the right direction, though!

[-] Nomad64@lemmy.world 2 points 1 year ago

Great post with lots of detail! I have had MyQ for years and have hated it pretty much since first use. Back then, they were asking for a monthly fee for the "privilege" of integrating it with Google.

Since the MyQ integration for HA is now dead, I have ordered a Ratgdo and am patiently waiting for it to arrive. I tried the Anthom.tech opener, but that does not work with most Chamberlain/LiftMaster openers made after 2011.

[-] Nomad64@lemmy.world 1 point 1 year ago

The primary reason to virtualize is to maximize the "bang for your buck" on your hardware. Containers are great, but have their limits.

So long as you have a desire to learn it (and the budget), I say dive in with Proxmox and see how you can put that hardware to use. VMware ESXi is more common in a business/enterprise setting, but costs money for anything beyond basic functionality once the evaluation period ends.
