submitted 1 year ago* (last edited 1 year ago) by brownmustardminion@lemmy.ml to c/selfhost@lemmy.ml

Consider a wireguard network of many clients which all interact with each other through a central hub server on a cloud VPS. One of the clients is a desktop used for SSHing into the other various clients--again, through the central hub. If the "terminal" client connects to another client through the wireguard hub using SSH public/private key authentication, what if any information within that SSH tunnel gets exposed or leaked to the "hub" server?
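For concreteness, a minimal sketch (hostnames, WireGuard addresses, and usernames are hypothetical) of the kind of per-client SSH config the desktop would use in this setup, where each client is addressed by its WireGuard IP and the hub only routes the packets:

```
# Hypothetical ~/.ssh/config on the desktop client
# 10.0.0.1 = hub VPS, 10.0.0.3 = one of the other clients (WireGuard addresses)
Host client-a
    HostName 10.0.0.3            # peer's WireGuard IP, routed via the hub
    User admin
    IdentityFile ~/.ssh/id_ed25519
    IdentitiesOnly yes
```

With that in place, `ssh client-a` does its key authentication directly against client-a's sshd over the WireGuard tunnel; the hub is only forwarding the packets.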

My threat model is the VPS ever getting compromised. I previously SSH'd into the hub VPS server, and from there I would SSH into any of the other clients with a password. Horrible security, I know.

My new setup is as mentioned above. Only the single desktop client has key authentication to SSH into the various clients. But I want to be sure none of that data gets exposed to the VPS hub just in case.

jakob@lemmy.schuerz.at 1 point 1 year ago

Should the services be able to talk to each other via SSH?

Or do you have groups of servers?

How many are we talking about?

Are they all virtual servers?

Where is the hub located?

In our company we have many services and many servers; we're talking about hundreds of them. And they are very secure.

The servers run on big ESXi hosts (more than one) across 3 datacenters.

There is one jumphost (highly available, several instances). A direct connection from our workstations to a server is not possible; we have to go through this jumphost. Logging in on the jumphost itself is not possible, it's only for jumping (SSH option -J).
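For anyone unfamiliar with -J: a minimal sketch (hostnames and usernames are made up) of a workstation-side config that forces every server connection through such a jumphost:

```
# Hypothetical ~/.ssh/config on a workstation: servers are only reachable via the jumphost
Host jump
    HostName jump.example.internal
    User alice
    IdentityFile ~/.ssh/id_ed25519_sk   # private key lives on the hardware token

Host srv-*
    User alice
    ProxyJump jump
```

`ssh srv-db01` is then equivalent to `ssh -J alice@jump.example.internal srv-db01`; the jumphost only relays the connection.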

On the jumphost, each user's authorized_keys file holds the public key from a hardware token (YubiKey, eToken, Nitrokey, you name it). Only that one pubkey.
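One way to produce such a hardware-backed key, assuming a FIDO2-capable token and OpenSSH 8.2 or newer (paths, usernames, and hostnames are just examples):

```
# Generate a key whose private part stays on the token
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk -C "alice@workstation"

# Install only the public part into the jumphost account's authorized_keys
ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub alice@jump.example.internal
```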

So you cannot jump through the jumphost to a server without a valid hardware token.

A NAT rule gives each user an individual source IP...

Then the audit log on each server shows who did what, even after a sudo su, because the source IP is individual to each user.
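A sketch of such a NAT rule in nftables, assuming the jumphost's sshd opens the onward connections as the respective user (usernames and addresses are made up):

```
# Hypothetical nftables rules on the jumphost: each user's onward connections
# leave with their own source IP, so target servers can tell users apart
table ip nat {
  chain postrouting {
    type nat hook postrouting priority srcnat; policy accept;
    meta skuid "alice" snat to 10.10.0.101
    meta skuid "bob"   snat to 10.10.0.102
  }
}
```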

And services run in different subnets and VLANs with no connection to each other, so only the services that actually must talk to each other can.

Another server is an Ansible machine. It can also connect to every single server and do good (and really bad) things, so this Ansible machine and the jumphost sit in a physically secured zone in the datacenter.

You need extra permission and an extra physical key to get to these machines...

And if one service gets compromised, at most the servers in the same VLAN or subnet can be affected too, plus any servers that were given an extra firewall hole.

So... if you are afraid of using ssh in your environment...

- Use hardware keys for the SSH private key. No software keys!
- If machines need to talk to each other via SSH, build the smallest possible jails around them with subnets or VLANs.
- Think about allowed commands in the SSH config / authorized_keys file (see the sketch below)!
- Think about a jumphost, and allow each user only the machines they need.
- Think about physical protection for the jumphost.
- Think about server-initiated backups...
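As an illustration of the "allowed commands / restricted forwarding" idea, a hypothetical authorized_keys entry on a jumphost that accepts one hardware-token key and only lets that user forward to the two hosts they actually need (key type, key material, and addresses are examples):

```
# Hypothetical ~/.ssh/authorized_keys on the jumphost for user alice
restrict,port-forwarding,permitopen="10.20.1.5:22",permitopen="10.20.1.6:22" sk-ssh-ed25519@openssh.com AAAA... alice@workstation
```

`restrict` turns off everything (PTY, forwarding, X11, agent), and `port-forwarding` plus `permitopen` then re-allow only the forwarding to the permitted jump targets.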

๐Ÿ‘

brownmustardminion@lemmy.ml 2 points 1 year ago

Interesting. I hadn't considered using a hardware key for SSH. I'm essentially using my desktop machine as a hardware key in a way, but obviously a dedicated hardware key would be best.
