
Started off by

  1. Enable unattended updates
  2. Enable SSH login with keys only
  3. Create a user with sudo privileges
  4. Disable root login
  5. Enable ufw with only the necessary ports
  6. Disable ping
  7. Change the SSH default port 21 to something else.
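Concretely, steps 2, 4, and 7 boil down to three sshd_config directives. Here is a minimal sketch applied to a scratch copy of the file — port 2222 and the scratch path are arbitrary examples, not from the post; on a real host you would edit /etc/ssh/sshd_config and restart sshd, and handle step 5 with `ufw default deny incoming` plus an allow rule for the new port.

```shell
# Sketch of steps 2, 4 and 7 on a scratch copy of sshd_config.
# Real host: edit /etc/ssh/sshd_config, then `systemctl restart ssh`.
# Port 2222 is an arbitrary example, not a recommendation.
cfg=/tmp/sshd_config.demo
cat > "$cfg" <<'EOF'
#Port 22
#PermitRootLogin prohibit-password
#PasswordAuthentication yes
EOF

# Uncomment-and-set each directive in place.
sed -i \
  -e 's/^#\{0,1\}Port .*/Port 2222/' \
  -e 's/^#\{0,1\}PermitRootLogin.*/PermitRootLogin no/' \
  -e 's/^#\{0,1\}PasswordAuthentication.*/PasswordAuthentication no/' "$cfg"

cat "$cfg"
```

On a live box, run `sshd -t` before restarting so a typo in the config doesn't lock you out of your only session.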

Got the ideas from NetworkChuck.

Did this on the proxmox host as well as all VMs.

Any suggestions?

[-] Zerafiall@alien.top 3 points 11 months ago
  1. Don’t bother with disabling ICMP. You’ll use it way more than it’s worth disabling, and something like nmap -Pn -p- X.X.X.0/24 will find all your servers anyway (the same can be said for SSH and port 22, but moving that does stop some bots)

  2. As long as you're not exposing anything to the global internet, you really don’t need a lot. The firewall should already deny all inbound traffic.

The next step is monitoring. It's one thing to think your stuff is safe and locked down. It's another thing to know your stuff is safe. Something like Observium, Nagios, Zabbix, or similar is a great way to make sure everything stays up, as well as giving you insight into what everything is doing. Even Uptime Kuma is a good start. Then something like Wazuh to watch for security events, and OpenVAS or Nessus to look for holes. I'd even throw in CrowdSec for host-based intrusion detection. (Warning: this will quickly send you down the rabbit hole of being a SOC analyst for your own home)

[-] Internet-of-cruft@alien.top 2 points 11 months ago

Block outbound traffic too.

Open up just what you need.

Segment internally and restrict access. You don't need more than SSH to a Linux server, or perhaps access to its web interface for an application running on it.

[-] NevarroGuildsman@alien.top 1 points 11 months ago

I just set up Wazuh at work and pointed it at a non-domain, vanilla Windows 11 machine to test and it came back with over 300 events immediately. Not trying to scare anyone off as I think it's a great tool, more just a heads up that the rabbit hole runs very deep.

[-] jmartin72@alien.top 2 points 11 months ago

Don't expose anything to the outside world. If you do, use something like Cloudflare tunnels or Tailscale.

[-] umbrella@lemmy.ml 2 points 11 months ago* (last edited 11 months ago)

Or host a VPN on it and get in through that. Many of these microservices are insecure, and the real risk comes from opening them up to the Internet. This is important.

Also set permissions properly if applicable

[-] sysadminafterdark@alien.top 2 points 11 months ago

Take a look at CIS benchmarks and DoD STIGs. Many companies are starting to harden their infrastructure using these standards, depending on the requirements of the environment. Once you get the hang of it, then automate deployment. DO NOT blow in ALL of the rules at once. You WILL break shit. Every environment has security exceptions. If you’re running Active Directory, run Ping Castle and remediate any issues. Audit often, make sure everything is being monitored.

[-] EugeneBelford1995@alien.top 2 points 11 months ago

Honestly, between the home lab being behind a RTR, NATed, patched & updated, and given the lack of users clicking on random crap and plugging in thumb drives from God Only Knows Where ... I'd go out on a limb and say it's already more secure than most PCs.

There's also no data besides what I already put on Medium and GitHub, so it's not a very attractive target.

[-] tango_suckah@alien.top 2 points 11 months ago

I watch networkchuck on occasion, but some of his ideas are... questionable I think. Not necessarily wrong, but not the "YOU MUST DO THIS" that his titles suggest (I get it, get clicks, no hate).

Of the ideas you mentioned, (2), (3), (4), and (5) are somewhere between "reasonable" and "definitely". The rest are either iffy (unattended updates) or security theater (disable ICMP, change ports).

Something to keep in mind for step (2), securing SSH login with a key: this is only as secure as your key. If your own machine, or any machine or service that stores your key, is compromised then your entire network is compromised. Granted, this is kind of obvious, but just making it clear.

As for security theater, specifically step (6). Don't disable ping. It adds nothing to security and makes it harder to troubleshoot. If I am an attacker in a position for ping to get to an internal resource in the first place, then I'm just going to listen for ARP broadcasts (on same subnet) or let an internal router do it for me ("request timed out" == host is there but not responding).

[-] darthrater78@alien.top 1 points 11 months ago

By only having it on when I need it.

People who have theirs on 24/7... why? I used Home Assistant to automate mine so I can bring it up remotely if needed.

[-] gwicksted@alien.top 1 points 11 months ago

I have a camera outside, I’m a pretty big guy, and my rack was built inside my office so it can’t be moved quickly.

Oh, you mean digital security? Lol I have a lot of subnets and don’t forward in much traffic. The WiFi password I give out gets you on my kids' network. Plus I run DPI and IDS. I use Cloudflare DNS (sometimes operating an internal Pi-hole too). And I don’t browse social media on PCs, only on mobile. The only holes punched from WiFi to internal are for printing. And even the wired clients are segregated from my work network.

[-] tabortsenare@alien.top 1 points 11 months ago

Internet > Firewall, IP whitelist, IPS/IDS yada yada > DMZ / VLAN > Proxmox w/ FW:$true (rule only for game ports) > GameServer > deny all traffic from GameServer to go anywhere but the internet

Proxmox server has firewall, all the hosts on proxmox have firewall enabled (in proxmox). Only allow my main device to access. No VLAN crosstalk permitted.

I don't bother with anything else internally, if they're inside they deserve to SSH with my default root / password credentials

[-] Comfortable-Cause-81@alien.top 1 points 11 months ago

ssh default port is 22.

Really, unless I'm trying to learn security (valid) or have something to protect, I just do the basic best practices.

Real security is an offline backup.

[-] PreppyAndrew@alien.top 1 points 11 months ago

SSH port really doesn't matter. If it is an exposed SSH port, it will probably get picked up whether it's on 69 or 22.

[-] mss-cyclist@alien.top 1 points 11 months ago

Unattended updates can be tricky.

Think of config changes which need manual adjustment, or a broken update. This is something you would probably not like to happen at night without notice. Could easily break your vital systems (e.g. homeassistant, authentication, vaults...)

[-] Daniel15@alien.top 1 points 11 months ago

+1

Use unattended updates ONLY for bug and security fixes, not for minor or major releases. Ensure you configure your auto-updaters properly!

Debian unattended-upgrades only upgrades packages from the main and security repos by default, so it should be fine since no major updates are performed within a particular Debian version.
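As an illustrative example of the configuration being described, restricting unattended-upgrades to the Debian security origin looks roughly like this — written to a scratch path here; on a real host the file belongs in /etc/apt/apt.conf.d/ alongside the stock 50unattended-upgrades:

```shell
# Restrict unattended-upgrades to the Debian security origin only.
# Scratch path for illustration; on a real host drop this in
# /etc/apt/apt.conf.d/ and preview with `unattended-upgrade --dry-run`.
cat > /tmp/52unattended-upgrades-local <<'EOF'
Unattended-Upgrade::Origins-Pattern {
    "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
};
EOF
cat /tmp/52unattended-upgrades-local
```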

[-] LAKnerd@alien.top 1 points 11 months ago

Air gapped, no Internet access. I don't use Internet services for any of my stuff though so I can get away without direct Internet access

[-] Professional-Bug2305@alien.top 1 points 11 months ago

Don't worry about it, no one wants to hack your plex server xD just don't expose things directly to the internet and you'll be fine.

[-] blentdragoons@alien.top 1 points 11 months ago

automatic updates are a great strategy for breaking the system

[-] theRealNilz02@alien.top 1 points 11 months ago

Unattended updates are a recipe for trouble. I'd never enable that.

I have no public services apart from 2 OpenVPN servers. To access everything else I connect to one of the OpenVPNs and use the services through the VPN routings.

The VPN can only be accessed if you possess a cert and key. I could even implement 2fa but for now SSL auth works securely enough.

[-] phein4242@alien.top 1 points 11 months ago

I run unattended-upgrades on all the Debian/Ubuntu deployments I manage. One of the deployments even has automatic reboots enabled. I still do major upgrades by hand/Terraform, but the process itself works flawlessly in my experience.

[-] wallacebrf@alien.top 1 points 11 months ago
  1. strict 3-2-1 backup policy
  2. VLANs. all VLANs are controlled by my Fortigate FWF-61E (soon to be replaced by a FG-91G). the VLANs have strict access permissions on a per-device basis on what they can and cannot access.
    1. CORE network where the NAS live
      1. only specific devices can access this VLAN, and most only have access to the SMB ports for data access. even fewer devices have access to the NAS management ports
      2. this network has restrictions on how it accesses the internet
      3. I have strict IPS, web-filtering, DNS filtering, network level fortigate AV, deep SSL inspection, and intrusion protection activities
      4. everything is logged, any and all incoming and outgoing connections both to/from the internet but also any LAN based local communications.
    2. Guest wifi
      1. can ONLY access the internet
      2. has very restrictive web and DNS filtering
      3. I have strict IPS, web-filtering, DNS filtering, network level fortigate AV, basic SSL inspection, and intrusion protection activities
    3. APC Network Management Cards
      1. can ONLY access my SMTP2GO email client so it can send email notifications
      2. it does have some access to the CORE network (NTP, SYSLOG, SNMP)
      3. very select few devices can access the management ports of these cards
      4. I have strict IPS, web-filtering, DNS filtering, network level fortigate AV, basic SSL inspection, and intrusion protection activities
    4. Ethernet Switch / WiFi-AP management
      1. very select few devices can access the management ports of the switches
      2. ZERO internet access allowed
    5. ROKUs
      1. restrictive web and DNS filtering to prevent ads and tracking. Love seeing the space where ads SHOULD be and seeing a blank box.
      2. can access ONLY the IP of my PLEX server on the CORE network, on ONLY the PLEX port for the services PLEX requires.
    6. IoT devices
      1. Internet access ONLY except for a few devices like my IoTaWatt that needs CORE network access to my NAS on ONLY the port required for InfluxDB logging.
    7. Wife's computer
      1. because of HIPAA due to her job, I have ZERO logging and no SSL inspection, but do have some web and DNS filtering.
    8. print server
      1. zero internet access, and only the machines that need to print can access.
  3. as already indicated i have a fortigate router which has next generation firewall abilities to protect my network
  4. while i do not have automatic updates i am notified when updates are available for my router, my NAS, the switches, and APC network cards. i always like to look at the release notes and ensure there are no known issues that can negatively impact my operations. I do have most of my docker containers auto-update using watchtower.
  5. i keep SSH disabled and only enable when i ACTUALLY need it, and when i do, i use certificate based authentication
  6. i have disabled the default admin account on ALL devices and made custom admin/root users but also have "normal" users and use those normal users for everything UNLESS i need to perform some kind of activity that requires root/admin rights.
  7. on all devices that have their own internal firewall, i have enabled it to only allow access from VLAN subnets that i allow, and go even further by restricting which IPs on those VLANS can access the device
  8. changing default ports is fairly useless in my opinion as once someone is on your network it is trivial to perform a port scan and find the new ports.
  9. all windows based endpoint machines
    1. have strict endpoint control using Fortigate's FortiGuard software with EMS server. This allows me to enforce that machines meet minimum specifications.
    2. i use group policy to enforce restrictive user environments to prevent installation of programs, making system changes, accessing the C: drive etc as this prevents a decent amount of malware from executing
    3. antivirus must be enabled and active or the endpoint becomes quarantined.
    4. if the system has unusual behavior it is automatically quarantined and i am notified to take a look
    5. even though the fortigate router blocks all ads and trackers i also use a combination of UBlock Origin to prevent ads and trackers from running in the browser as ADs are now one of the most common points of entry for malware
    6. i use ESET antivirus which also performs and ties into the fortiguard endpoint protection to ensure everything on the machines is OK
  10. for all phones/tablets i have Adguard installed which blocks all ads and malicious web sites and tracking at the phones level

this is not even all of it.

the big take away is i try to layer things. the endpoint devices are most important to protect and monitor as those are the foot hold something needs to then move through the network.

i then use network level protections to secure the remaining portions of the network from other portions of the network.

[-] supercamlabs@alien.top 2 points 11 months ago

Messy...just messy

[-] zR0B3ry2VAiH@alien.top 1 points 11 months ago

Replace Fortinet with pfSense (+ Suricata/Snort) for non-proprietary. (I have a Fortinet firewall and I can't bring myself to pay for their packages.) One thing I'd recommend for you, as I host a lot of stuff, is DNS proxying through Cloudflare, so the services I'm hosting are not pointing at my origin IP.

[-] radiantxero@alien.top 1 points 11 months ago

Anything that has internet access, like your IoT, can be command-and-controlled utilizing stateful connections. An outbound socket is built, and reflected traffic can come back in. Your IoT devices especially should not be exposed to the internet. They can't even have an antivirus agent installed on them.

[-] wallacebrf@alien.top 1 points 11 months ago

True, and 100% agree except I forgot to mention

1.) The Fortigate has a known list of botnet command and control servers that are blocked

2.) I only allow them to access their home-server domain names, solely for the purpose of firmware updates. They are not capable of accessing any other domains or IPs

[-] jjaAK3eG@alien.top 1 points 11 months ago

Hosted reverse proxy and VPN servers. I have no open ports on my home network.

[-] calinet6@alien.top 1 points 11 months ago

UDM’s regular built in threat filtering, good firewall rules, updated services, and not opening up unnecessarily to the internet. And be vigilant but don’t worry too much about it. That’s it.

[-] Adventurous-Mud-5508@alien.top 1 points 11 months ago

My security is basically if they get past an updated opnsense firewall I could be highly inconvenienced, but everything irreplaceable is backed up in the cloud and offline in my basement.

[-] FluffyBunny-6546@alien.top 1 points 11 months ago

Armed guards at every entrance.

[-] limecardy@alien.top 1 points 11 months ago

SSH shouldn’t be internet accessible. Changing an SSH port won’t stop someone for more than 15 seconds. Disabling ping is security through obscurity at best.

[-] massimog1@alien.top 1 points 11 months ago

Originally I'd change the SSH port, obviously only allow pubkey based auth.

Now however, I do everything over WireGuard. Every device has WireGuard access and, depending on that, different rules for what it can access.

[-] PolicyArtistic8545@alien.top 1 points 11 months ago

Automatic updates and strong passwords. I know that automatic updates can break a system, but I’ve never had them break anything super critical in my home that couldn’t be fixed with 10 minutes of effort. I can think of three things that have broken and required fixing in the last 5 years of auto-updating software. I’d much rather have a broken piece of software than a security breach. To those who manually update: how fast after the patch notice are you patching? One day, two days, one week, monthly? What if you are sick or on vacation? I can guarantee mine updates within 24 hours every time.

[-] avdept@alien.top 1 points 11 months ago

If your homelab is local-only, all of these are unnecessary if you're the only one who uses it. If you want to expose your homelab to the internet, you can pretty much use a VPN to connect to it without exposing the whole homelab. Just a port to connect to the VPN.

Do not over complicate things

[-] AdderallBuyersClub2@alien.top 1 points 11 months ago

Rat traps… damn mice.

[-] radiantxero@alien.top 1 points 11 months ago
  1. Snort on perimeter inbound and outbound.
  2. ntopng on perimeter.
  3. Heavy VLAN segmentation. Like with like.
  4. Inter-VLAN ACLs on core switch. This is a stateless firewall. Some VLANs with certain device types have inbound and outbound. Trusted devices only have inbound.
  5. SPAN to Security Onion for all internal traffic.
  6. SNMPv3 monitoring on all devices.
  7. MAC Sticky on all camera ports because the cabling extends outside of the physical structure of the house. I am going to implement Dot1X at some point.
  8. VRFs for sensitive infrastructure to prevent outbound routing completely.
  9. A VRF for devices to be forced through an external VPN (Mullvad). Used for devices that do not support a VPN agent.
  10. No antivirus. All antivirus is a botnet.
  11. All server infrastructure is Devuan using OpenRC instead of systemd.
  12. Gaming PC is Artix.
  13. DNS blackhole.
  14. Public DNS is a Swiss no-logging provider which I use DoT to send my queries to.
  15. LibreWolf or Brave Browser on everything.
  16. Only hole into the network is a 4096 bit encrypted Wireguard instance operating in a container using an uncommon port. I wrote a custom script that can reach into the container and pull from the API in order to show active sessions, GeoIP, browser fingerprints, length of time the socket has been open, etc.
  17. I use geofencing for inbound connections to the Wireguard instance. I only allow my immediate area cellular ISPs IANA address spaces to touch my network. Same goes for the geographic area surrounding my parents house.
  18. Unattended updates using custom scripting for my servers, including rebuilding the Wireguard container every single night, updating the server, and I also fire Nessus at it every night. If in the morning there is a CVE of note on that server, the NAT rule allowing traffic to the VPN is disabled at the perimeter until a sufficient patch is released.
  19. I run STIGs on everything, within reason and where infrastructure allows, in my suite.
  20. LibreSSL over OpenSSL.
[-] WildestPotato@alien.top 1 points 11 months ago

Why has no one mentioned CIS hardening?

[-] Brilliant_Sound_5565@alien.top 1 points 11 months ago

Easy, i keep it up to date, i have nothing exposed to the internet, and i lock the door :)

[-] Illustrious_Poet5957@alien.top 1 points 11 months ago

Lock and key, shotgun by the door

[-] ellie288@alien.top 1 points 11 months ago

Also consider TCP Wrappers (hosts.allow/hosts.deny) and DenyHosts/fail2ban.
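A minimal sketch of the allow/deny pattern the comment above refers to — the subnet is an example, and note that many modern distros build sshd without libwrap, so verify support before relying on it. Written to scratch paths here rather than the real /etc/hosts.allow and /etc/hosts.deny:

```shell
# hosts.allow / hosts.deny sketch: permit SSH from one LAN, deny the rest.
# Scratch paths for illustration; the real files live in /etc, and sshd
# must be linked against libwrap for TCP Wrappers to apply at all.
cat > /tmp/hosts.allow <<'EOF'
sshd: 192.168.1.0/255.255.255.0
EOF
cat > /tmp/hosts.deny <<'EOF'
sshd: ALL
EOF
cat /tmp/hosts.allow /tmp/hosts.deny
```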

[-] daronhudson@alien.top 1 points 11 months ago

By not opening anything to the wider internet.

[-] cylemmulo@alien.top 1 points 11 months ago

Yeah I’m confused if this is all on some Ubuntu server open to the internet or what. I just vpn into my home when I’m gone, keep it simple

[-] Impressive-Cap1140@alien.top 1 points 11 months ago

Is there really any security benefit to not using default ports? Especially if the service is not open externally? I cannot find any official documentation that states you should be doing that.

[-] Snoo68775@alien.top 1 points 11 months ago

Disable ICMP? The network team sends their regards 🐴

[-] Accomplished-Lack721@alien.top 1 points 11 months ago

Only expose applications to the Internet if you have a good need to. Otherwise, use a VPN to access your home network and get to your applications that way.

If you are exposing them to the internet, take precautions. Use a reverse proxy. Use 2FA if the app supports it. Always use good, long passwords. Login as a limited user whenever possible, and disable admin users for services whenever possible. Consider an alternative solution for authentication, like Authentik. Consider using Fail2ban or Crowdsec to help mitigate the risks of brute force attacks or attacks by known bad actors. Consider the use of Cloudflare tunnels (there are plusses and minuses) to help mitigate the risk of DDOS attacks or to implement other security enhancements that can sit in front of the service.

What might be a good reason for exposing an application to the Internet? Perhaps you want to make it available to multiple people who you don't expect to all install VPN clients. Perhaps you want to use it from devices where you can't install one yourself, like a work desktop. This is why my Nextcloud and Calibre Web installs, plus an instance of Immich I'm test-driving, are reachable online.

But if the application only needs to be accessed by you, with devices you control, use a VPN. There are a number of ways to do this. I run a Wireguard server directly on my router, and it only took a few clicks to enable and configure in tandem with the router company's DDNS service. Tailscale makes VPN setup very easy with minimal setup as well. My NAS administration has no reason to be accessible over the internet. Neither does my Portainer instance. Or any device on my network I might want to SSH into. For all of that, I connect with the VPN first, and then connect to the service.
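For the VPN-first approach described above, a WireGuard client config has roughly this shape. Every value below is a placeholder (generate real keys with `wg genkey` and `wg pubkey`); it is written to a scratch path here rather than /etc/wireguard/wg0.conf:

```shell
# Shape of a WireGuard client config for VPN-first remote access.
# All keys, addresses and the endpoint are placeholders.
cat > /tmp/wg0.conf <<'EOF'
[Interface]
PrivateKey = <client-private-key>
Address    = 10.8.0.2/32
DNS        = 10.8.0.1

[Peer]
PublicKey  = <server-public-key>
Endpoint   = vpn.example.home:51820
AllowedIPs = 192.168.1.0/24   # route only the home LAN through the tunnel
EOF
cat /tmp/wg0.conf
```

Keeping AllowedIPs to the home subnet (rather than 0.0.0.0/0) means only home-bound traffic uses the tunnel, which is usually what you want for reaching a NAS or SSH.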

[-] electromage@alien.top 1 points 11 months ago

Well your host management interfaces shouldn't be exposed to the internet. Use a VPN if you need to access it remotely.

[-] theniwo@alien.top 1 points 11 months ago

Enabling unattended updates -> Hell no. Regular Patchdays
Enable only ssh login with key -> yes
Create user with sudo privileges -> yes
Disable root login -> no
Enable ufw with necessary ports -> Basic iptables, but not on all hosts. But fail2ban
Disable ping -> nope
Change ssh default port 21 to something else. -> nope

[-] lack_of_reserves@alien.top 1 points 11 months ago

Remember to configure fail2ban, the defaults are silly.

Also, these days I prefer crowdsec to fail2ban.
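Tightening the defaults the comment calls silly usually means a jail.local override like the following — the values are illustrative examples, not canonical recommendations, and it is written to a scratch path here instead of /etc/fail2ban/jail.local:

```shell
# Illustrative fail2ban jail.local tightening the stock sshd jail.
# Values are examples; scratch path instead of /etc/fail2ban/jail.local.
cat > /tmp/jail.local <<'EOF'
[sshd]
enabled  = true
maxretry = 3
findtime = 10m
bantime  = 1h
EOF
cat /tmp/jail.local
```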

[-] dinosaurdynasty@alien.top 1 points 11 months ago

Honestly I just use a good firewall and forward_auth/authelia in caddy (so authentication happens before any apps) and it works well.

I also don't expose SSH to the public internet anymore (more laziness than anything, have it semi-exposed in yggdrasil and wireguard) (mostly because the SSH logs get annoying for journalctl -f)
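The forward_auth arrangement described above looks roughly like this in a Caddyfile. The hostnames, ports, and verify path are assumptions based on a typical Authelia setup, not taken from the comment; written to a scratch path for illustration:

```shell
# Sketch of a Caddyfile that sends every request through Authelia's
# verify endpoint before the app sees it. Hostnames/ports are placeholders.
cat > /tmp/Caddyfile <<'EOF'
app.example.home {
    forward_auth authelia:9091 {
        uri /api/verify?rd=https://auth.example.home
        copy_headers Remote-User Remote-Email
    }
    reverse_proxy app:8080
}
EOF
cat /tmp/Caddyfile
```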

this post was submitted on 22 Nov 2023

Homelab
