Category Archives: HP Virtual Connect
HP has released new firmware for the HP Virtual Connect devices. In June 2013 they released firmware version 4.01, a major release with many improvements and fixes.
You can download the firmware version 4.01 for Windows (if you are performing the upgrade from a Windows OS).
You need to install the latest “HP BladeSystem c-Class Virtual Connect Support Utility” on the Windows machine from which you will perform the upgrade. You can download it from the link below.
Once installed, just run the utility and you will see the screen below.
Run the HP firmware exe; it will extract a binary file (*.bin).
Open the “Virtual Connect Support Utility”, enter the command “update”, then provide the OA IP address, user name and password, and accept the default values.
It will then ask for the Virtual Connect user name and password; provide those.
Once you have given the Virtual Connect credentials and typed “YES”, the upgrade process starts and you can watch the progress percentage.
Once completed you will see the output below. The interesting part is that we don’t need to reboot the VC manually; it reboots automatically.
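The interactive steps above can also be issued as a single VCSU command line. A minimal sketch, assuming the standard VCSU update syntax; the IP address, credentials and firmware filename are placeholders, not values from this environment:

```shell
# Non-interactive VCSU firmware update (all values below are placeholders).
# Run from the folder containing the .bin extracted from the HP firmware exe.
vcsu -a update -i 192.168.0.100 -u Administrator -p '<OA-password>' \
     -vcu Administrator -vcp '<VC-password>' -l vcfw-401.bin
```

VCSU will then activate the new firmware and reboot the VC modules on its own, as described above.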
Here we are going to discuss a scenario where I needed to divide the entire network traffic flowing through the HP blades and HP VC in an HP BladeSystem c7000.
– HP Enclosure = BladeSystem C7000 Enclosure G2
– HP Blades = BL680c G7
– HP Virtual Connect FlexFabric
– Dual Port FlexFabric 10Gb Converged Network Adapter
– Dual Port FlexFabric 10Gb Network Adapter
Network Traffic Details
– VMware vCenter
– FT and vMotion
– Oracle RAC Cluster
– Production Application servers
– DMZ1 (Production Web Traffic)
– DMZ2 (Production Database Traffic)
– Corporate Servers
With the above network traffic classification, these need to be separated due to the heavy network load and also for security reasons. This is one of the scenarios I came across while designing vSphere 5 with HP 3PAR and an HP c-Class BladeSystem for a leading bank.
So there are no hard and fast rules; you can divide the traffic based on your own requirements.
Table-1 shows the network traffic division.
Each blade has 3 x dual-port 10Gb FlexFabric adapters on board, giving a total of 6 x 10Gb ports. These are called LOM (LAN on Motherboard) ports, LOM1 to LOM6. Each LOM is internally divided further into 4 adapters, and these 4 adapters share a common bandwidth, i.e. a maximum of 10Gb between them. We can divide the traffic inside each LOM however we need; that is the beauty of the FlexFabric adapters.
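To illustrate that carving, the sketch below checks that an example split of one LOM port into four FlexNICs stays within the 10Gb shared budget. The per-adapter values are illustrative assumptions, not the actual allocations from Table-1:

```shell
# One 10Gb LOM port carved into 4 FlexNICs that share the port's bandwidth.
# The split values below are illustrative, not taken from the design table.
port_bw=10000                      # Mb/s available on one LOM port
flexnics="500 2000 2500 5000"      # e.g. mgmt, vMotion, FCoE, VM traffic
sum=0
for bw in $flexnics; do
  sum=$((sum + bw))
done
echo "allocated ${sum} of ${port_bw} Mb/s"
```

Whatever split you choose, the four allocations must add up to no more than the 10Gb the LOM port provides.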
Here LOM1 to LOM4 are 10Gb FlexFabric converged adapters (FCoE), so each of these LOMs has one FC port, which is used for the SAN traffic. LOM5 and LOM6 are regular 10Gb FlexFabric adapters.
There are 2 HP Virtual Connect (VC) modules in the enclosure, in Bay 1 and Bay 2. For redundancy, LOM1, LOM3 and LOM5 are internally connected to Bay 1, and LOM2, LOM4 and LOM6 to Bay 2. Each VC has one uplink to the network and one uplink to the FC switch (SAN), and both VCs run in Active/Active mode. So each traffic class gets at least 2 adapters, one connected to each bay, which provides redundancy, HA and load balancing.
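The odd/even LOM-to-bay wiring described above can be encoded as a small helper, so a profile check can confirm that a redundant pair really lands on both bays. The helper name is hypothetical; the mapping is the one stated in the text:

```shell
# Odd-numbered LOMs connect internally to VC Bay 1, even-numbered to Bay 2
# (mapping as described in the text; the helper name is hypothetical).
bay_for_lom() {
  if [ $(( $1 % 2 )) -eq 1 ]; then
    echo 1
  else
    echo 2
  fi
}

# A redundant pair for one traffic class, e.g. LOM1 + LOM2:
echo "LOM1 -> Bay $(bay_for_lom 1), LOM2 -> Bay $(bay_for_lom 2)"
```

Pairing any odd LOM with any even LOM guarantees one path through each VC module, which is what gives the Active/Active redundancy.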
The VC is simply a Layer 2 network device; it won’t do any routing.
Here the vMotion and FT traffic flows stay inside the blade enclosure backplane itself and do not go out to the VC uplinks or the external core switch.
This is a specific scenario: all the blades inside the enclosure are configured together as one ESXi cluster, so there is no need for vMotion or FT traffic to leave the enclosure. The advantage is that vMotion and FT traffic won’t overload the VC uplinks or the core switch.