My home lab attempts to duplicate much of what I use and support at work. As such, I have many VMs, each with a particular purpose or running a particular piece of software. So, for me, it's a combination of more cores and decent per-core speed. That said, in virtualization it's common to run out of RAM in a host before you run out of processing power.
Often, when ESXi doesn't recognize the built-in network controller, adding a controller it does recognize is the easiest way around the problem.
Most anything Intel, Broadcom, or Mellanox/Nvidia will do. Check VMware's hardware compatibility list for the version of ESXi you're running.
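If you want to see what the hypervisor actually picked up, the standard check is esxcli network nic list, which prints one row per NIC the vmkernel has a driver bound to. Here's a minimal sketch that wraps that command in Python, assuming you run it on the host itself (ESXi includes a Python interpreter) with esxcli on the PATH:

```python
# Minimal sketch: list the NICs this ESXi host actually recognized.
# Assumes it runs on the host itself and that esxcli is on the PATH;
# adjust accordingly if you run it remotely.
import subprocess

def recognized_nics() -> str:
    # "esxcli network nic list" prints one row per NIC the vmkernel
    # has a driver for. Anything missing from this list is what you'd
    # work around with a supported add-in card.
    result = subprocess.run(
        ["esxcli", "network", "nic", "list"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(recognized_nics())
```

If the built-in controller doesn't show up there, that's when an add-in card from the compatibility list is the quick fix.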
You wouldn't need a crossover cable. 10G ports, like most everything else these days, are auto-MDIX: they detect when they should be in crossover mode and switch automatically.
I've got four HPs clustered for VMware vSAN and three more little systems -- 2 HPs and a Dell -- clustered as a VMware "remote site" to my PowerEdge primary site.
After you install ESXi to the 256GB SSD, there may or may not be any space left over for the installer to create a datastore. In other words, the SSD is small enough that ESXi may take the whole thing for its own system partitions and leave you no usable space for VMs.
You'll have to install and see.
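For a rough feel before installing, here's a back-of-the-envelope sketch. The partition sizes are my assumptions based on the ESXi 7/8 default layout (a small system boot partition, two boot-banks, and ESX-OSData capped at 128 GB); the installer's actual behavior varies with the boot media, so treat the number as illustrative, not definitive.

```python
# Back-of-the-envelope: how much of a small boot SSD might be left
# for a VMFS datastore after an ESXi 7/8 install. The sizes below are
# assumptions based on the documented default layout, not guarantees.
GIB = 1024 ** 3

def rough_leftover(ssd_bytes: int,
                   system_boot: float = 0.1 * GIB,  # ~100 MB system boot partition
                   boot_bank: float = 4 * GIB,      # two boot-banks, ~4 GiB each
                   osdata_cap: float = 128 * GIB):  # ESX-OSData, capped
    used = system_boot + 2 * boot_bank + osdata_cap
    return max(0.0, ssd_bytes - used)

# A "256 GB" SSD is sold in decimal gigabytes, i.e. roughly 238 GiB.
ssd = 256 * 1000 ** 3
print(f"Rough space left for a datastore: {rough_leftover(ssd) / GIB:.0f} GiB")
```

On paper that leaves something on the order of 100 GiB, but how the installer sizes ESX-OSData on a given device is exactly why "install and see" is the honest answer.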
I have the four-port version of this hub in a box o'switches, somewhere in a closet.
I bought a G3 from China, using a coupon code from a YouTube reviewer. 99%. Bare bones.
I added a 32GB DIMM (the machine takes only one) and a 512GB SSD. The machine came without an OS installed.
I first installed ESXi 8.0U2, added the host to vCenter, and moved a Windows 10 VM onto the NucBox. It ran pretty well.
I next created a Windows To Go boot USB with Windows 11 on it. That runs pretty well also.
It's a cute machine and worth what it costs. It should run a good few lightweight applications, either as VMs or containers.