We have an issue with a Dell switch and Dell hosts that may be a switch problem or possibly something odd with VMware in our cluster. It's taken me a while to isolate what actually happens.
Basically, we have 10Gb twinax uplinks that stop working normally after a host reboots. There are other situations that seem to trigger it, but it's reliably repeatable with a simple reboot. Three physical interfaces are assigned to a vSwitch. After the reboot, some VLANs become unusable. That may mean the entire host appears to be down because the management VLAN is down, or sometimes it's a VLAN carrying some of the VMs. The problem is resolved by going to our Dell S4112F-ON and removing and re-adding the VLANs on each affected interface.
It's become my standard procedure after rebooting a host to log into the switch, run "no switchport trunk allowed vlan x-x,x,x" followed by "switchport trunk allowed vlan x-x,x,x" on the affected interface, and functionality resumes normally. Removing a physical NIC from a virtual switch and adding it back can sometimes trigger the same problem, though it's much less common.
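In case it helps anyone else, the full workaround sequence on the S4112F-ON (running OS10) looks roughly like this. The interface and VLAN IDs below are just placeholders, not our real ones, so substitute whatever uplink and VLAN list is affected:

    configure terminal
    ! the 10Gb twinax uplink going to the rebooted host
    interface ethernet 1/1/5
    ! strip the trunked VLANs from the port...
    no switchport trunk allowed vlan 10,20,100-110
    ! ...then immediately re-add the exact same list
    switchport trunk allowed vlan 10,20,100-110
    end

Nothing else on the port changes and the switch isn't restarted; as soon as the VLANs are re-added, traffic on the affected VLANs starts flowing again.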
I assume this is either a switch firmware issue or some kind of physical or virtual switch misconfiguration.
We have several 1Gb Ethernet links per host going to Cisco equipment in the exact same fashion, and those don't appear to have any issues.