There should be no leaking through SSH. SSH connects directly to your target host. From the hub you can only see that the communication protocol is SSH, but not what is transmitted.
If the host you're connecting to is already in your known_hosts, a malicious network can't do anything but break the connection. If it tries to MITM the SSH connection, you'll get the warning that someone could be "doing something nasty".
Information leakage: anything between you and the SSH server will be able to see that you're connecting to an SSH server and how much data you transfer, but not what the data actually is.
Should the services be able to talk to each other via SSH?
Or do you have groups of servers?
How many are we talking about?
Are they all virtual servers?
Where is the hub located?
In our company we have many services and many servers. We are talking about hundreds of services and servers. And they are very secure.
So we have the servers on big ESXi hosts (more than one) in 3 datacenters.
There is one jumphost (highly available... several instances). A direct connection from our workstations to a server is not possible. We have to use this jumphost. Logging in on the jumphost itself is not possible, it is only for jumping (SSH option -J).
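From the workstation side, a setup like this can be used either ad hoc or via the SSH client config. The hostnames and usernames below are made up for illustration:

```shell
# one-off: reach an internal server through the jumphost
ssh -J alice@jumphost.example.com alice@server01.internal

# or persist it in ~/.ssh/config so every internal host goes via the jump:
# Host *.internal
#     ProxyJump alice@jumphost.example.com
```

With the `ProxyJump` entry in place, a plain `ssh server01.internal` is tunneled through the jumphost automatically.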
On the jumphost, each user's authorized_keys file contains the public key from a hardware token (YubiKey, eToken, Nitrokey, you name it). Only one pubkey.
So you are not able to jump over the jumphost to a server without a valid hardware token.
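As a sketch, a "jump only" entry on the jumphost might look like this (key material and names are hypothetical and truncated; exact options depend on your OpenSSH version):

```shell
# /home/alice/.ssh/authorized_keys on the jumphost -- exactly one hardware-backed key.
# "restrict" turns off pty, agent/X11 forwarding etc.; "port-forwarding" re-enables
# the TCP forwarding that -J needs; command= replaces any shell the user would get.
restrict,port-forwarding,command="echo 'jump only'" sk-ssh-ed25519@openssh.com AAAAGnNrLXNzaC1l... alice@token
```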
A NAT rule gives each user an individual source IP...
Then you can see in the audit log on each server who did the shit... even if he did sudo su... the source IP is individualized per user.
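One way to get per-user source IPs like this is a SNAT rule on the jumphost keyed on the Unix account that generated the traffic. The IPs and interface name here are invented for the example:

```shell
# on the jumphost: outgoing packets from each user's sessions leave
# with that user's own source address (requires root to apply)
iptables -t nat -A POSTROUTING -o eth1 -m owner --uid-owner alice -j SNAT --to-source 10.20.0.101
iptables -t nat -A POSTROUTING -o eth1 -m owner --uid-owner bob   -j SNAT --to-source 10.20.0.102
```

The `owner` match only works for locally generated traffic, which is exactly what jumphost SSH sessions are.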
And services run in different subnets and VLANs without connections to each other. So only the services that must talk to each other can.
Another server is an Ansible machine. It can connect to every single server too and do good and really bad things... so this Ansible machine and the jumphost are in a physically secured zone in the datacenter.
You need an extra permission and an extra physical key to get to these machines...
And if one service gets compromised, at most the servers in the same VLAN or subnet can be affected too, plus any servers that got an extra firewall hole.
So... if you are afraid of using ssh in your environment...
- Use hardware keys for the SSH private key. No software keys!
- If machines need to talk to each other via SSH, build the smallest possible jails around them with subnets or VLANs.
- Think about allowed commands in sshd_config / authorized_keys files!!!
- Think about a jumphost, and allow each user only the machines they need.
- Think about physical protection for the jumphost.
- Think about server-initiated backups...
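For the hardware-key part: OpenSSH (8.2 and later) can generate key pairs whose private half never leaves a FIDO2 token. A minimal sketch, assuming a token is plugged in:

```shell
# creates ~/.ssh/id_ed25519_sk and id_ed25519_sk.pub; the secret stays on the token
# and each authentication requires a physical touch of the key
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
# then distribute the .pub into authorized_keys on the targets as usual
```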
👍
Interesting. I hadn't considered using a hardware key for SSH. I'm essentially using my desktop machine as a hardware key in a way, but obviously a dedicated hardware key would be best.