Why We Never Dedicate a NIC Port to a VM
Posted by Reprinted Article on 13 September 2013 09:39 AM
We never dedicate a NIC port to a VM. We always _team_ NIC ports. Generally there are two teams in standalone and cluster setups.
Team 0: Management (Port 0 on NICs 0 and 1)
Team 1: vSwitch (Ports 1+ on NICs 0 and 1) – Dedicated
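As a rough sketch, that two-team layout can be built with the teaming cmdlets that ship in Windows Server 2012 and newer. The adapter names below ("NIC0-Port0" and so on) are placeholders; substitute the names shown by Get-NetAdapter on your host:

```powershell
# Team 0: management - first port on each physical NIC
New-NetLbfoTeam -Name "Team0" -TeamMembers "NIC0-Port0","NIC1-Port0" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Team 1: vSwitch - remaining ports on both NICs, dedicated to VM traffic.
# HyperVPort spreads VM traffic across team members per virtual switch port.
New-NetLbfoTeam -Name "Team1" `
    -TeamMembers "NIC0-Port1","NIC0-Port2","NIC0-Port3","NIC1-Port1","NIC1-Port2","NIC1-Port3" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
```

Teaming modes and load-balancing choices vary by switch environment, so treat these as a starting point, not a prescription.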
I understand the logic of dedicating a NIC port to a VM. However, the whole purpose of virtualization is to separate the guest operating system from the hardware, so one needs to break from that mindset.
There is no reason why our dual Intel quad-port configurations (8 ports total, with 6 dedicated to the vSwitch) would have a problem with the 20+ VMs that in some cases run on a host.
The exception to this rule is CAD/CAM or other high-bandwidth needs:
In that case we dedicate a pair of ports to the high-bandwidth VM or VMs, and we keep VM density on Team 1 at two or three maximum.
BTW, in a disaster recovery scenario having things teamed makes recovery a lot simpler. Trying to keep track of all of those vSwitch names mapped to individual VMs would be a real PITA when things were tense, and getting all of that reconfigured would be that much more time wasted getting things back. Keep It Simple, Sir.
Oh, and one more thing: Why would one use a dedicated physical port on each node in a cluster for a highly available guest hosted on that cluster?
That leaves a single point of failure, and yet it is quite common to see NIC teaming go unused.
With NIC teaming built into Windows Server 2012 RTM and newer, there is no real reason not to team NICs or NIC port groups and eliminate that single point of failure.
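A minimal sketch of wiring the dedicated team to Hyper-V, assuming a team named "Team1" as above (the switch name is an example):

```powershell
# Bind the VM team to a single external Hyper-V virtual switch.
# -AllowManagementOS $false keeps host/management traffic off the VM team,
# so Team 0 carries management and Team 1 carries only guest traffic.
New-VMSwitch -Name "vSwitch" -NetAdapterName "Team1" -AllowManagementOS $false
```

One team, one virtual switch: every VM connects to "vSwitch" and inherits the redundancy of the whole team, with no per-VM port mappings to track during a recovery.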
So, when architecting a cluster setup please use NIC Teaming.
Chef de partie in the SMBKitchen