submitted 10 months ago* (last edited 10 months ago) by 7Sea_Sailor@lemmy.dbzer0.com to c/selfhosted@lemmy.world


Mid 2022, a friend of mine helped me set up a selfhosted Vaultwarden instance. Since then, my "infrastructure" has not stopped growing, and I've been learning each and every day about how services work, how they communicate and how I can move data from one place to another. It's truly incredible, and my favorite hobby by a long shot.

Here's a map of what I've built so far. Right now, I'm mostly done, but surely time will bring more ideas. I've also left out a bunch of "technically relevant" connections like DNS resolution through the AdGuard instance, firewalls and CrowdSec on the main VPS.

Looking at the setups that others have posted, I don't think this is super incredible - but if you have input or questions about the setup, I'll do my best to explain it all. None of my peers really understand what it takes to construct something like this, so I am in need of people who understand my excitement and pride :)

Edit: the image was compressed a bit too much, so here's the full res image for the curious: https://files.catbox.moe/iyq5vx.png And a dark version for the night owls: https://files.catbox.moe/hy713z.png

[-] xantoxis@lemmy.world 6 points 10 months ago

I can answer this one, but mainly only in reference to the other popular solutions:

  • nginx. Solid, reliable, uncomplicated, but: reverse proxy semantics have a weird dependency on manually setting up a DNS resolver (why??), and you have to restart the instance if your upstream gets replaced.
  • traefik. I am literally a cloud software engineer, I've been doing Linux networking since 1994 and I've made 3 separate attempts to configure traefik to work according to its promises. It has never worked correctly. Traefik's main selling point to me is its automatic docker proxying via labels, but this doesn't even help you if you also have multiple VMs. Basically a non-starter due to poor docs and complexity.
  • caddy. Solid, reliable, uncomplicated. It will do acme cert provisioning out of the box for you if you want (I don't use that feature because I have a wildcard cert, but it seems nice). Also doesn't suffer from the problems I've listed above.
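
To make the comparison concrete, here's a minimal Caddyfile sketch along the lines described above (hostnames, ports and cert paths are made up): the first site uses Caddy's out-of-the-box ACME provisioning, the second shows the bring-your-own-wildcard-cert setup the commenter mentions.

```caddyfile
# Automatic HTTPS: Caddy obtains and renews the cert itself
vault.example.com {
    reverse_proxy 127.0.0.1:8080
}

# Wildcard-cert variant: supply your own cert and route by host
*.example.com {
    tls /etc/caddy/certs/wildcard.pem /etc/caddy/certs/wildcard.key

    @vault host vault.example.com
    handle @vault {
        reverse_proxy 127.0.0.1:8080
    }
}
```

Either way, upstream changes only need a `caddy reload`, not a restart.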
[-] oh_gosh_its_osh@lemmy.ml 2 points 10 months ago

Fully agree with this summary. traefik also gave me a hard time initially, but once you have the quirks worked out, it works as promised.

Caddy is absolutely on my list as an alternative, but the lack of docker label support is currently the main roadblock for me.

[-] 7Sea_Sailor@lemmy.dbzer0.com 2 points 10 months ago* (last edited 10 months ago)

May I present to you: Caddy but for docker and with labels so kind of like traefik but the labels are shorter 👏 https://github.com/lucaslorentz/caddy-docker-proxy

Jokes aside, I did actually use this for a while and it worked great. The concept of having my reverse proxy config at the same place as my docker container config is intriguing. But managing labels is horrible on unraid, so I moved to classic caddy instead.
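
For the curious, the label syntax from that project looks roughly like this (a compose sketch with made-up service names and ports; `{{upstreams 80}}` is the project's own placeholder that expands to the container's address):

```yaml
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # caddy-docker-proxy watches the docker socket for labeled containers
      - /var/run/docker.sock:/var/run/docker.sock
      - caddy_data:/data

  vaultwarden:
    image: vaultwarden/server
    labels:
      caddy: vault.example.com
      caddy.reverse_proxy: "{{upstreams 80}}"

volumes:
  caddy_data:
```

The labels are translated into Caddyfile config on the fly, so the proxy reconfigures itself whenever a labeled container starts or stops.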

[-] oh_gosh_its_osh@lemmy.ml 1 points 10 months ago

Nice catch and thanks for sharing. Will definitely check it out.

[-] 0xZogG@hachyderm.io 2 points 10 months ago

@oh_gosh_its_osh @xantoxis for #k8s solution though I think traefik has advantage of providing configuration via CRDs, no?
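
For context, a traefik `IngressRoute` CRD looks roughly like this (a sketch with hypothetical host and service names; older traefik versions use the `traefik.containo.us/v1alpha1` apiVersion instead):

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`whoami.example.com`)
      kind: Rule
      services:
        - name: whoami
          port: 80
  tls:
    certResolver: letsencrypt
```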

[-] jkrtn@lemmy.ml 2 points 10 months ago

I feel so relieved reading that about traefik. I briefly set that up as a k8s ingress controller for educational purposes. It's unnecessarily confusing, brittle, and the documentation didn't help. If it's a pain for people in the industry that makes me feel better. My next attempt at trying out k8s I'll give Kong a shot.

I really like solid, reliable, and uncomplicated. The fun part is running the containers and VMs, not spending hours on a config to make them accessible.

[-] notfromhere@lemmy.ml 2 points 10 months ago

I have traefik running on my kubernetes cluster as an ingress controller and it works well enough for me after finagling it a bit. Fully automated through ansible and templated manifests.
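
A templated-manifest task in that style might look something like this (a sketch; the file names and the template are hypothetical):

```yaml
- name: Render the ingress manifest from a jinja2 template
  template:
    src: ingressroute.yaml.j2
    dest: /tmp/ingressroute.yaml

- name: Apply the rendered manifest
  command: kubectl apply -f /tmp/ingressroute.yaml
```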

[-] xantoxis@lemmy.world 1 points 10 months ago

Heh. I am, as I said, a cloud sw eng, which is why I would never touch any solution that mentioned ansible, outside of the work I am required to do professionally. Too many scars. It's like owning a pet raccoon, you can maybe get it to do clever things if you give it enough treats, but it will eventually kill your dog.

[-] notfromhere@lemmy.ml 1 points 10 months ago

Care to share some war stories? I have it set up where I can completely destroy and rebuild my bare metal k3s cluster. If I start with configured hosts, it takes about 10 minutes to install k3s and get all my services back up.

[-] xantoxis@lemmy.world 2 points 10 months ago* (last edited 10 months ago)

Sure, I mean, we could talk about

  • dynamic inventory on AWS means the ansible interpreter will end up with three completely separate sets of hostnames for your architecture, not even including the actual DNS name. if you also need dynamic inventory on GCP, that's another three completely different sets of hostnames, i.e. they are derived from different properties of the instances than the AWS names.
  • btw, those names are exposed to the ansible runtime graph via different names i.e. ansible_inventory vs some other thing, based on who even fuckin knows, but sometimes the way you access the name will completely change from one role to the next.
  • ansible-vault's semantics for when things can be decrypted and when they can't leads to completely nonsense solutions like a yaml file with normal contents where individual strings are encrypted and base64-encoded inline within the yaml, and others are not. This syntax doesn't work everywhere. The opaque contents of the encrypted strings can sometimes be treated as traversible yaml and sometimes cannot be.
  • ansible uses the system python interpreter, so if you need it to do anything that uses a different Python interpreter (because that's where your apps are installed), you have to force it to switch back and forth between interpreters. Also, the python setting in ansible is global to the interpreter meaning you could end up leaking the wrong interpreter into the role that follows the one you were trying to tweak, causing almost invisible problems.
  • ansible output and error reporting is just a goddamn mess. I mean look at this shit. Care to guess which one of those gives you a stream which is parseable as json? Just kidding, none of them do, because ansible always prefixes each line.
  • tags are a joke. do you want to run just part of a playbook? --start-at. But oops, because not every single task in your playbook is idempotent, that will not work, ever, because something was supposed to happen earlier on that didn't. So if you start at a particular tag, or run only the tasks that have a particular tag, your playbook will fail. Or worse, it will work, but it will work completely differently than in production because of some value that leaked into the role you were skipping into.
  • Last but not least, using ansible in production means your engineers will keep building onto it, making it more and more complex, "just one more task bro". The bigger it gets, the more fragile it gets, and the more all of these problems rear their heads.
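
To illustrate the vault point above: an otherwise plain vars file with a single inline-encrypted string looks like this (a sketch; the ciphertext payload here is made up):

```yaml
db_host: db.internal
db_user: app
db_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  62313365396662343061393464336163383764373764613633653634306231386433626436623361
  6134333665353966363534333632666535333761666131620a663537646436643839616531643561
```

The `!vault` tag is opaque to anything that isn't ansible, which is exactly why tooling that wants to treat the file as normal yaml chokes on it.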
[-] notfromhere@lemmy.ml 2 points 10 months ago* (last edited 10 months ago)
  • Dynamic inventory. I haven’t used it on a cloud api before but I have used it against kube API and it was manageable. Are you saying through kubectl the node names are different depending on which cloud and it’s not uniform? Edit: Oh you’re talking about the VMs doh

  • I’ve tried ansible vault and didn’t make it very far… I agree that thing is a mess.

  • Thank god I haven’t ran into interpreter issues, that sounds like hell.

  • Ansible output is terrible, no argument there.

  • I don’t remember the name for it, but I use parameterized template tasks. That might help with this? Edit: include_tasks.

  • I think this is due to not a very good IDE for including the whole scope of the playbook, which could be a condemnation of ansible or just needing better abstraction layers for this complex thing we are trying to manage the unmanageable with.
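
The parameterized-template pattern mentioned above looks roughly like this (a sketch; the task file and service names are hypothetical):

```yaml
- name: Deploy each service from a shared task file
  include_tasks: deploy_service.yml
  vars:
    service_name: "{{ item }}"
  loop:
    - vaultwarden
    - adguard
```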

[-] xantoxis@lemmy.world 2 points 10 months ago

Really all of these have solutions, but they're constantly biting you and slowing down development and requiring people to be constantly trained on the gotchas. So it's not that you can't make it work, it's that the cost of keeping it working eats away at all the productive things you can be doing, and that problem accelerates.

The last bullet is perhaps unfair; any decent system would be a maintainable system, and any unmaintainable system becomes less maintainable the bigger your investment in it. Still, it's why I urge teams to stop using it as soon as they can, because the problem only gets worse.

[-] notfromhere@lemmy.ml 1 points 10 months ago

You urge teams to stop using it [ansible?] as soon as they can? What do you recommend to use instead?

[-] xantoxis@lemmy.world 2 points 10 months ago

Well people use ansible for a wide variety of things so there's no straightforward answer. It's a Python program, it can in theory do anything, and you'll find people trying to do anything with it. That said, some common ways to replace it include

  • you need terraform or pulumi or something for provisioning infrastructure anyway, so a ton of stuff can be done that way instead of using ansible. Infra tools aren't really the same thing, but there are definitely a few neat tricks you can do with them that might save you from reaching for ansible.
  • Kubernetes + helm is a big bear to wrestle, but if your company is also a big bear, it's worth doing. K8s will also solve a lot of the same problems as ansible in a more maintainable way.
  • Containerization of components is great even if you don't use kubernetes.
  • if you're working at the VM level instead of the container level, cloud-init can allow you to take your generic multipurpose image and make it configure itself into whatever you need at boot. Teams sometimes use ansible in the cloud-init architecture, but it's usually doing only a tiny amount of localhost work and no dynamic inventory in that role, so it's a lot nicer there.
  • maybe just write a Python program or even a shell script? If your team has development skills at all, a simple bespoke tool to solve a specific problem can be way nicer.
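
For a taste of the cloud-init option above, a user-data file that turns a generic image into a configured host at first boot might look like this (a sketch; the package, file path and service are illustrative):

```yaml
#cloud-config
package_update: true
packages:
  - podman

write_files:
  - path: /etc/myapp/config.env
    content: |
      ROLE=web

runcmd:
  - systemctl enable --now podman.socket
```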
[-] notfromhere@lemmy.ml 2 points 10 months ago

Very insightful. I definitely need to check out cloud-init as that is one thing you mentioned I have practically no experience with. Side note, I hate other people’s helm with a passion. No consistency in what is exposed, anything not cookie cutter and you’re customizing the helm chart to the point it’s probably easier to start with a custom template to begin with, which is what I started doing!

this post was submitted on 01 Feb 2024
637 points (98.2% liked)