submitted 9 months ago* (last edited 9 months ago) by 7Sea_Sailor@lemmy.dbzer0.com to c/selfhosted@lemmy.world


Mid 2022, a friend of mine helped me set up a selfhosted Vaultwarden instance. Since then, my "infrastructure" has not stopped growing, and I've been learning each and every day about how services work, how they communicate and how I can move data from one place to another. It's truly incredible, and my favorite hobby by a long shot.

Here's a map of what I've built so far. Right now, I'm mostly done, but surely time will bring more ideas. I've also left out a bunch of "technically relevant" connections like DNS resolution through the AdGuard instance, firewalls and CrowdSec on the main VPS.

Looking at the setups that others have posted, I don't think this is super incredible - but if you have input or questions about the setup, I'll do my best to explain it all. None of my peers really understand what it takes to construct something like this, so I am in need of people who understand my excitement and pride :)

Edit: the image was compressed a bit too much, so here's the full res image for the curious: https://files.catbox.moe/iyq5vx.png And a dark version for the night owls: https://files.catbox.moe/hy713z.png

[-] 7Sea_Sailor@lemmy.dbzer0.com 27 points 9 months ago

Hey! I'm also running my homelab on unraid! :D

The reverse proxy basically allows you to open only one port on your machine for generic web traffic, instead of opening (and exposing) a port for each app individually. You then address each app by a certain hostname or domain path, so either something like movies.myhomelab.com or myhomelab.com/movies.
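
As a rough sketch (the hostnames and upstream addresses here are made up, not my actual config), the Caddy config for the subdomain variant would look something like this:

    # Caddyfile: one public port for everything, many apps behind it
    movies.myhomelab.com {
        reverse_proxy 192.168.1.20:8096    # e.g. a Jellyfin container
    }

    cloud.myhomelab.com {
        reverse_proxy 192.168.1.20:8080    # e.g. a Nextcloud container
    }

Caddy then takes care of the HTTPS certificates for each hostname on its own.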

The issue is that you'd have to point your domain directly at your home IP, which means that whenever you share a link to an app on your homelab, you also indirectly leak your home location (to the degree that IP geolocation allows). I'm simply not comfortable with that. The easy solution is running the traffic through Cloudflare (this can be set up in 15 minutes), but they impose traffic restrictions on free plans, so it's out of the question for media or cloud apps.

That's what my proxy VPS is for. Basically Cloudflare Tunnels, rebuilt: an encrypted, direct tunnel between my homelab and a remote server in a datacenter, meaning I expose no port at home, and visitors connect to that datacenter IP instead of my home one. There is also no one in between my two servers, so I don't give up any privacy. It comes with near-zero bandwidth loss in both directions too! And it requires near-zero computational power, so it's all running on a machine costing me 3,50 a month.
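
On the VPS side it's conceptually nothing more than a reverse proxy pointing at the homelab's tunnel address instead of a public IP. A minimal sketch (the tailnet IP and port are placeholders, assuming a WireGuard/Tailscale-style tunnel):

    # Caddyfile on the VPS: visitors hit the datacenter IP, and the
    # request travels over the encrypted tunnel to the homelab
    movies.mydomain.com {
        reverse_proxy 100.64.0.2:8096    # homelab's tunnel IP (placeholder)
    }

The only thing the homelab has to do is keep the tunnel up; no inbound port is ever opened on the home router.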

I appreciate this thoughtful reply. I read it a few times, I think I understand the goal. Basically you're systematically closing off points that leak private information or constitute a security weakness. The IP address and the ports.

For the VPS to have no bandwidth loss, does that mean it's only used for domain resolution, with clients actually connecting directly to your own server? If not, and all data has to pass through a data center, I'd assume that makes the service less reliable?

[-] 7Sea_Sailor@lemmy.dbzer0.com 4 points 9 months ago

Your first paragraph hits the nail on the head. From what I've read, bots all over the net will find any openly exposed port in no time and start attacking it blindly, putting strain on your router and posing a general risk to your home network.

Regarding bandwidth: 100% of the traffic via the domain name (not the local network) runs through the proxy server. But these datacenters have 1 to 10 gigabit uplinks, so the slowest link in the chain is usually your home internet connection, which, in my case, is 500 mbit down and 50 mbit up. And that's easily saturated in both directions by the tunnel and VPS. Plus, streaming a 4K Blu-ray remux usually only requires between 35 and 40 mbit of upload speed, so speed is rarely a worry.

[-] atzanteol@sh.itjust.works 1 points 9 months ago

bots all over the net will find any openly exposed port in no time and start attacking it blindly,

True.

putting strain on your router

I guess? Not more than it can handle, mind. But sure, there will be a bit of traffic. This is also kinda true whether you expose ports or not, though; the scanning is relentless.

and posing a general risk to your home network.

Well... If your proxy forwards traffic to your home network, you're still effectively exposing your home network to the internet. There's just a hop in between. Scans that attack the web applications mostly don't know or care about your proxy. If I hacked a service through the proxy, I'd still gain access to your home network.

That said, having CrowdSec add a layer of protection here is a good thing, to potentially catch something you didn't know about (e.g. a forgotten default admin password). But having it on a different network over a VPN doesn't seem to add any value here?

[-] 7Sea_Sailor@lemmy.dbzer0.com 2 points 9 months ago

You make a good point. But I still find that directly exposing a port on my home network feels more dangerous than doing so on a remote server. I want to prevent attackers from sidestepping the proxy and directly accessing the server itself, which feels more likely to allow circumventing the isolation provided by Docker in case of a breach.

Judging from a couple of articles I read online, if I wanted to publicly expose a port on my home network, I should also isolate the publicly reachable server from the rest of the LAN with a VLAN. For that I'd first need to replace my router and learn a whole lot more about networking. Doing it this way, which is basically a homemade Cloudflare Tunnel, lets me rest easier at night.

[-] atzanteol@sh.itjust.works 4 points 9 months ago

You make a good point. But I still find that directly exposing a port on my home network feels more dangerous than doing so on a remote server.

You do what makes you feel comfortable, but understand that it's not a lot safer. It's not useless, though, so I wouldn't say don't do it. It just feels like a bit too much effort for too little gain to me. And it maybe isn't providing the security you think it is.

It's not "where the port is opened" that matters - it's "what is exposed to the internet" that matter. When you direct traffic to your home network then your home network is exposed to the internet. Whether though VPN or not.

The proxy server is likely the least vulnerable part of your stack, though I don't know if Caddy has a good security reputation. I prefer Apache and nginx, as they're tried and true and used by large corporations in production environments for that reason. Your applications are the primary target: default passwords, vulnerable plugins, known application server vulnerabilities, SQL injections, etc. are what bots are looking for. And your proxy will forward those requests whether it's on a different network or not. That's where I do like that you have something that will block such "suspect" requests to slow that scanning down.

Your VPS only really makes any sense if you have a firewall in 'homelab' that restricts traffic between the VPN and specific servers on specific ports. I'm not sure if that's what's indicated by the arrows in and out of the "tailscale" box? Otherwise an attacker with local root on that box will just use your VPN like the proxy does.
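
To be concrete about what I mean by "restricts traffic", something along these lines on the homelab host (illustrative ufw commands; the interface name and port are assumptions about your setup):

    # allow the tunnel to reach only the proxied app, e.g. Jellyfin
    ufw allow in on tailscale0 to any port 8096 proto tcp
    # drop everything else arriving over the tunnel
    ufw deny in on tailscale0

Without something like that, whoever controls the VPS end can poke at anything the tunnel can reach.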

So you're already exposing your applications to the internet. If I compromise your Jellyfin server (through the VPS proxy and VPN), what good is your VPS doing? The first thing an attacker would want to do is set up a bot that reaches out to the internet, establishing back-channel communication direct to your server anyway.

Judging from a couple of articles I read online, if I wanted to publicly expose a port on my home network, I should also isolate the publicly reachable server from the rest of the LAN with a VLAN.

It's not "exposing a port that matters" - it's "providing access to a server." Which you've done. In this case you're exposing servers on your home network - they're the targets. So if you want to follow that advice then you should have your servers in a VLAN now.

The reason for separating servers onto their own VLAN is to limit the reach an attacker would have should they compromise a server, e.g. so they can't connect to your other home computers. You would create two different networks (e.g. 10.0.10.0/24 and 10.0.20.0/24) and route data between them with a firewall that restricts access. For example, 10.0.20.0/24 can't connect to 10.0.10.0/24, but you can connect the other way 'round. That firewall would then stop a compromised server from connecting to systems on the other network (like your laptop, your Chromecast, etc.).
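
If it helps to picture it, that policy boils down to something like this (illustrative nftables on the router; a real firewall UI would generate the equivalent for you):

    table inet filter {
        chain forward {
            type filter hook forward priority 0; policy drop;
            ct state established,related accept
            # the trusted LAN may open connections to the server VLAN...
            ip saddr 10.0.10.0/24 ip daddr 10.0.20.0/24 accept
            # ...but there is no rule letting 10.0.20.0/24 initiate
            # connections back, so a compromised server stays contained
        }
    }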

I don't do that because it's kind of a big bother. It's certainly better that way, but I think it's acceptable not to. I wouldn't die on that hill, though.

I want to be careful to say that I'm not saying that anything you're doing is necessarily wrong or bad. I just don't want you to misunderstand your security posture.

[-] dan@upvote.au 1 points 9 months ago

it’s all running on a machine costing me 3,50 a month.

You could use a cheaper VPS (like a $15/year one) and it should be fine for this use case :)

[-] 7Sea_Sailor@lemmy.dbzer0.com 1 points 9 months ago

Very true! For me, that specific server was a chance to try out ARM-based servers. Also, I initially wanted to spin up something billed by the hour for testing, and it worked so quickly that I just left it running.

But I'll keep an eye out for some low-spec, yearly-billed servers and move sooner or later.

[-] dan@upvote.au 3 points 9 months ago* (last edited 9 months ago)

One of my favourite hosts (GreenCloudVPS) has some cheap yearly deals: https://greencloudvps.com/billing/store/budget-kvm-sale. RackNerd has some too: https://racknerdtracker.com/ (a third-party site that tracks which of their deals are still active).

(I'm not affiliated with either company)
