The issue that a lot of people run into when building ESXi home labs is getting enough network cards to properly simulate a production environment, so that they can segregate all of their network traffic into proper VLANs. Consumer-level hardware doesn't normally come with two NICs onboard, and although adding additional NICs is possible, getting to four physical NICs while still keeping expansion slots free for other things can be difficult.

intel-pro-1000-mt-dual-gigabit-nic-esxi

To solve this, I've started using Intel Pro/1000 MT Dual Gigabit NIC PCI-X cards. It's the same as sticking two PCI gigabit NICs on your board (remember that a PCI bus shares its bandwidth anyway, unlike PCI-e, which has dedicated lanes), except that (a) you free up a PCI slot for something like a PCI video card you can devote to the ESXi host, (b) you get a solid, proven chipset for ESXi in the Intel Pro/1000, and (c) it's actually cheaper. I've been picking up these dual-NIC cards off eBay for under $10 each with free shipping, and you can't beat that for dual NICs.

ESXi Networking Configuration: 4 NICs, Two Switches

So why four NICs or more per host? In modern business environments, you'll usually see four gigabit NICs per host (in server environments running 10 GbE, four NICs is about all each host will be allowed). This lets you segregate your Management, Storage, Fault Tolerance, vMotion, and VM traffic into proper VLANs. It also allows you to simulate a production environment and to learn about networking and VLANs along the way. From here, you'll attach to two physical switches. The diagram below shows a good setup for 4 NICs:
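To make the idea concrete, here's a small sketch of the kind of layout the diagram describes. The vmnic names, VLAN IDs, switch names, and active/standby pairings here are hypothetical examples for illustration, not a prescribed configuration; the point is simply that each traffic type should have uplinks landing on both physical switches so it survives a switch failure.

```python
# Hypothetical 4-NIC lab layout: each traffic type gets a VLAN and a
# pair of NIC uplinks. All names and IDs below are made-up examples.
PLAN = {
    "Management": {"vlan": 10, "nics": ("vmnic0", "vmnic1")},
    "vMotion":    {"vlan": 20, "nics": ("vmnic1", "vmnic0")},
    "Storage":    {"vlan": 30, "nics": ("vmnic2", "vmnic3")},
    "VM Traffic": {"vlan": 40, "nics": ("vmnic3", "vmnic2")},
}

# Example wiring: which physical switch each NIC uplinks to.
SWITCH = {"vmnic0": "switch-A", "vmnic1": "switch-B",
          "vmnic2": "switch-A", "vmnic3": "switch-B"}

def redundant(plan, switch_map):
    """True if every traffic type has uplinks on two different switches."""
    return all(
        len({switch_map[nic] for nic in cfg["nics"]}) == 2
        for cfg in plan.values()
    )

print(redundant(PLAN, SWITCH))  # True: each type survives one switch failure
```

A check like this is just a planning aid; in ESXi itself, the same layout would be expressed as port groups with VLAN IDs and NIC teaming on a vSwitch.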

blog_4_vmnic-2

I'll make a post later on ESXi networking configuration specifically, but this should give you the basic idea. Please note that you will need VLAN-aware switches to do this properly, which means a managed or smart switch. Personally, I use smart switches, simply because they are cheaper; a smart switch is basically a managed switch with a cut-down, GUI-driven feature set. For my home lab, I was able to pick up two 24-port gigabit D-Link switches (D-Link DGS-1124t) for around $75 each, and they worked just fine. You could get away with 16-port switches if you wanted.

PCI vs PCI-X

PCI has had a long and illustrious history, starting out with PCI v1.0 running at 33 MHz on 5 volts, and later moving to PCI v2.1, running at 66 MHz on 3.3 volts. A 64-bit slot was also made for high-end networking. So, you could end up with up to four different card types in your computer, as shown below. Modern consumer-grade motherboards use the 3.3V 32-bit slots. PCI-X cards run on a 64-bit slot, and the last version, PCI-X v2.0, ran the bus at 266 MHz or 533 MHz.

pci-pcix

A PCI-X card will fit in a 3.3V 32-bit PCI slot. But it's longer! It will still fit; it simply hangs out the back of the slot. This is fine, and the card runs, just at a slower speed.

intel-pro-1000-mt-dual-gigabit-nic-esxi-asrock-970-extreme3

Note that PCI-X 5V cards specifically have a tab configuration that will NOT allow them to be put into a 3.3V slot. If yours doesn’t fit, DON’T force it.

PCI-X Gigabit Network Card in a PCI Slot: Performance

PCI bus bandwidth can be calculated with the following formula: frequency × bus width = bandwidth. PCI buses operate at the following bandwidths (note that the 32-bit slots are what consumer-grade motherboards use, running at 66 MHz):

  • PCI 32-bit, 33 MHz: 133 MB/s (1,067 Mbit/s)
  • PCI 32-bit, 66 MHz: 266 MB/s
  • PCI 64-bit, 33 MHz: 266 MB/s
  • PCI 64-bit, 66 MHz: 533 MB/s
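The formula above is easy to check directly. A minimal sketch (the function name is my own; note the PCI clock is nominally 33.33 / 66.66 MHz, which is where the 133.33 MB/s figure comes from):

```python
def pci_bandwidth_mbps(freq_mhz, bus_bits):
    """Theoretical PCI bandwidth in MB/s: clock (MHz) * bus width in bytes."""
    return freq_mhz * bus_bits / 8

print(pci_bandwidth_mbps(33.33, 32))  # ~133 MB/s (32-bit, 33 MHz)
print(pci_bandwidth_mbps(66.66, 32))  # ~267 MB/s (32-bit, 66 MHz)
print(pci_bandwidth_mbps(66.66, 64))  # ~533 MB/s (64-bit, 66 MHz)
```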

And here are some other data bandwidths for examples and comparisons:

  • SATA 1 (SATA-150): 150 MB/s
  • SATA 2 (SATA-300): 300 MB/s
  • SATA 3 (SATA-600): 600 MB/s
  • Fast Ethernet (100base-X): 12.5 MB/s
  • Gig-E (1000base-X): 125 MB/s

So, a PCI-X dual gigabit NIC card running in a 66 MHz, 32-bit PCI slot should be able to max out both NICs running at full speed, with a bit left over. Of course, that is a theoretical maximum for gigabit NICs, and on most networks you won't see it hit. In addition, your home lab would rarely, if ever, hit this maximum throughput, and not for long when it does (I'm thinking vMotion here, but then you're limited by the speed of the datastore). So, in answer to the question of whether a PCI-X dual gigabit NIC card will work in a PCI slot: yes, and it will still leave some bandwidth on the PCI bus for something like a video card.
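The headroom claim can be checked with the numbers above. A quick sketch, using theoretical figures throughout (real-world throughput will be lower on both sides):

```python
# Two gigabit ports at theoretical line rate vs. a 66 MHz, 32-bit PCI bus.
GIGE_MBPS = 1000 / 8        # one gigabit port: 125 MB/s
bus = 66.66 * 32 / 8        # 66 MHz, 32-bit PCI: ~267 MB/s
demand = 2 * GIGE_MBPS      # both NIC ports saturated: 250 MB/s

print(bus - demand)  # ~17 MB/s of bus bandwidth to spare
```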