this post was submitted on 09 Nov 2023
Homelab
I'm more familiar with this on Linux, but the answer is bridged networking. It must be, since you've surpassed the NIC's 2.5 Gbit theoretical limit, and that's before packet overheads. Basically, your traffic never leaves the "virtual switch" to reach the real NIC, so it can be a lot faster.
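To put a number on "before packet overheads": a rough sketch of the best-case TCP goodput on a 2.5GbE link, using standard Ethernet/TCP framing constants (these figures are textbook values, not from the thread):

```python
# Back-of-the-envelope: maximum TCP goodput on a 2.5GbE link.
# Any iperf3 result above this figure cannot have crossed the physical NIC.

LINE_RATE = 2.5e9                # bits/s, 2.5GbE line rate
MTU = 1500                       # bytes of Ethernet payload
ETH_OVERHEAD = 8 + 14 + 4 + 12   # preamble+SFD, header, FCS, inter-frame gap
IP_TCP_HEADERS = 20 + 20         # IPv4 + TCP headers, no options

frame_on_wire = MTU + ETH_OVERHEAD      # 1538 bytes occupy the wire per frame
tcp_payload = MTU - IP_TCP_HEADERS      # 1460 bytes of application data

max_goodput = LINE_RATE * tcp_payload / frame_on_wire
print(f"max TCP goodput: {max_goodput / 1e9:.2f} Gbit/s")  # ~2.37 Gbit/s
```

So anything meaningfully above ~2.37 Gbit/s between those VMs is proof the traffic stayed inside the host.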
In one instance, I saw around 35-40 Gbps over iperf3 on a beefy PC with around 100 GB of free RAM and a proper Gen4 NVMe SSD. I think it was because the free RAM could hold every packet sent/received in cache; it probably couldn't be replicated in prod.
So, test again with another LAN-connected machine (2.5 Gbps if possible) and you should be bound by the laws of physics once again ; )
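You can see the same host-local effect without any VMs at all: traffic over loopback never touches the NIC, so it routinely beats line rate. A quick-and-dirty sketch (plain Python sockets, not a substitute for iperf3):

```python
# Measure raw TCP throughput over loopback. Because packets never reach
# the physical NIC, the result is bounded by CPU/memory, not link speed.
import socket
import threading
import time

PAYLOAD = b"x" * 65536          # 64 KiB per send
TOTAL = 256 * 1024 * 1024       # push 256 MiB in total

def server(listener, result):
    conn, _ = listener.accept()
    received = 0
    while True:
        chunk = conn.recv(65536)
        if not chunk:
            break
        received += len(chunk)
    conn.close()
    result.append(received)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

result = []
t = threading.Thread(target=server, args=(listener, result))
t.start()

client = socket.create_connection(("127.0.0.1", port))
start = time.perf_counter()
sent = 0
while sent < TOTAL:
    client.sendall(PAYLOAD)
    sent += len(PAYLOAD)
client.close()
t.join()
elapsed = time.perf_counter() - start

gbps = result[0] * 8 / elapsed / 1e9
print(f"loopback throughput: {gbps:.1f} Gbit/s")
```

On most modern machines this prints a multiple of any consumer NIC's line rate, which is exactly what a VM-to-host iperf3 run over a virtual switch is measuring.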
Yes, testing iperf from those VMs to another LAN machine unsurprisingly never exceeds 1 Gbps (my other LAN machine doesn't support 2.5), but VMware is still slower. Maybe it's due to Workstation using the 13700K E-cores, as someone else commented.
The thing is, since my Win11 PC is hosting those 2 VMs, I'd expect VM-to-host/host-to-VM network transfers to be faster. Switching from bridged to NAT does improve the transfer speeds, yes, but VMware is still behind VirtualBox, even with vmxnet3 instead of e1000.
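For anyone following along who wants to reproduce the vmxnet3 comparison: in Workstation the adapter type can be set per-NIC in the VM's `.vmx` file (the guest needs VMware Tools installed for the vmxnet3 driver). A minimal fragment, assuming the first virtual NIC:

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
```

Removing the `virtualDev` line (or setting it to `"e1000"`) falls back to the emulated Intel adapter, which is what makes an apples-to-apples speed test possible.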
Anyway, thanks for the reply. It might well come down to being a "Windows thing"; I never had these inconsistencies on a Proxmox host, for example.
OK, a bit of trivia to get off my chest first [source][1]:
I thought those two were inseparable for some time; turns out they're not.
This seems irrelevant, but VMXNET3 can be paravirtualized without being hardware-assisted (unlike the emulated E1000).
Meanwhile, [PVRDMA][2] (which allows shared memory between VMs that both have PVRDMA) does something similar to what Linux bridges do by default (iptables forwarding without emulation): hardware-assisted paravirtualization :D
This has the potential to run above 1 Gbit; could you try?
1 2
Linux bridges use ebtables, not iptables, as they operate at layer 2, not layer 3.
TIL, thx.