Monthly Archives: January 2013

VMware vCenter Upgrade from 4.1 to 5.0 : Part-1

There are many ways to upgrade vCenter from 4.1 to 5.0, and customers have different scenarios in their environments. I would like to discuss the common ones.

OBJECTIVE: Upgrade the vCenter 4.1 to vCenter 5.0

SCENARIO : 

A- vCenter 4.1 and its database on the same server; the database is MSSQL 2005 Express Edition (the bundled MSSQL database from the vCenter 4.1 DVD).

B- vCenter 4.1 and its database on the same server; the database is MSSQL 2005 Standard or Enterprise.

END RESULT :

A – vCenter 5.0 and the database remain on the same server; the database stays MSSQL 2005 Express Edition, but its schema is upgraded.

B – vCenter 5.0 and the database remain on the same server; the database stays the same MSSQL 2005 Standard or Enterprise instance, but its schema is upgraded.

MIGRATION STEPS :

1- Install SQL Management Studio and take a full backup of the vCenter Server database.

image

image

Click Add and specify a location to save the database backup, or use the defaults. A command-line alternative is shown below.
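If you prefer the command line, the same full backup can be taken with sqlcmd (installed with the SQL Server client tools). The instance name SQLEXP_VIM and database name VIM_VCDB below are the usual defaults for the bundled SQL Express install, and the backup path is only an example; verify the real names in your ODBC DSN before running it.

sqlcmd -S .\SQLEXP_VIM -E -Q "BACKUP DATABASE [VIM_VCDB] TO DISK = 'C:\Backup\VIM_VCDB_pre-upgrade.bak' WITH INIT"

Here -E uses Windows authentication, and WITH INIT overwrites any older backup already in that file.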

2- Check the existing System DSN for vCenter 4.1 in the ODBC Data Source Administrator, and test the connection settings.

image

image
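The System DSNs can also be listed without opening the GUI; vCenter 4.1 uses a 64-bit DSN, which lives under the 64-bit ODBC registry hive. The second command assumes a DSN called "vCenter" purely as an example; use your own DSN name.

reg query "HKLM\SOFTWARE\ODBC\ODBC.INI\ODBC Data Sources"
reg query "HKLM\SOFTWARE\ODBC\ODBC.INI\vCenter"

The second query shows the SQL Server, database and driver details that the DSN points to.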

3- Back up the SSL certificates on the VirtualCenter or vCenter Server system before you upgrade to vCenter Server 5.0. The default location of the SSL certificates is %ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\SSL.
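A plain file copy of that folder to a safe location is enough; for example (the destination path is just an example):

xcopy "%ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\SSL" "D:\Backup\vCenter-SSL" /E /I /Y

/E copies subdirectories (including empty ones), /I treats the destination as a directory, and /Y suppresses overwrite prompts.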

4- Stop the VMware VirtualCenter Server service.

image
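The service can be stopped from the Services console (as in the screenshot) or from an elevated command prompt; vpxd is the service name behind "VMware VirtualCenter Server", and vctomcat (the Management Webservices) can be stopped the same way if you want everything quiesced:

net stop vpxd
net stop vctomcat

Both can be started again with net start if you need to back out before running the installer.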

5- From the vSphere 5 DVD (or its extracted contents), run AUTORUN.EXE and choose to install "vCenter Server". Also ensure all the prerequisites are installed.

image

6- The installer wizard will detect that a previous version of vCenter is already present on the server. Click Next, accept the EULA, and enter the license key (it can also be added later).

image

7- The existing DSN is detected automatically; click Next.

image

8- Select the options shown below; the database schema must be upgraded to accommodate vCenter 5.0.

“A dialog box might appear warning you that the DSN points to an older version of a repository that must be upgraded. If you click Yes, the installer upgrades the database schema, making the database irreversibly incompatible with previous VirtualCenter versions. See the vSphere Upgrade documentation”

image
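As an optional sanity check before and after the schema upgrade, the vCenter database normally records its schema version in the VPX_VERSION table; the instance and database names below are the same assumed defaults as in step 1:

sqlcmd -S .\SQLEXP_VIM -E -d VIM_VCDB -Q "SELECT * FROM VPX_VERSION"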

9- The vCenter agents on the ESXi hosts can be updated automatically or manually; either option is fine.

image

10- Enter the credentials and use the FQDN of the existing vCenter server; otherwise the wizard warns that the FQDN cannot be resolved and some vCenter features may not work. The warning can be ignored, but it is better to use the FQDN.

image

11- Use the default installation location for the binaries, or specify a custom path if needed.

image

12- It is better to keep the default vCenter ports, though they can be changed if needed. The next few steps are self-explanatory; choose according to your environment.

13- If you are using a Distributed Switch with a large number of port groups and virtual machines, select the option below only if you are using ephemeral ports. Static binding is the recommended setting on the vDS, in which case this option is not needed.

image

14- Click the Install button. That's it!

 

My next post will cover how to upgrade vCenter and the database when they are on separate servers.


Magic Effect of SIOC (VMware Storage Input Output Control) in vSphere 5 : A practical study.

Recently I got a chance to implement and test SIOC, an Enterprise Plus feature in vSphere. It is really an amazing feature.

Theory

Application performance can be impacted when servers contend for I/O resources in a shared storage environment. There is a crucial need to isolate the performance of critical applications from other, less critical workloads by appropriately prioritizing access to shared I/O resources. Storage I/O Control (SIOC), a feature introduced in VMware vSphere 4.1 and enhanced in vSphere 5, provides a dynamic control mechanism for managing I/O resources across VMs in a cluster.

Datacenters based on VMware's virtualization products often employ a shared storage infrastructure to service clusters of vSphere hosts. iSCSI and Fibre Channel storage area networks (SANs) expose logical storage devices (LUNs), and NFS servers expose exports, that can be shared across a cluster of ESX hosts. Consolidating VMs' virtual disks onto a single VMFS or NFS datastore backed by a larger number of disks has several advantages: ease of management, better resource utilization, and higher performance (when storage is not bottlenecked).

With vSphere 4.1, SIOC can be used only with FC/iSCSI datastores; with vSphere 5 it also supports NFS datastores.
However, there are instances when a higher than expected number of I/O-intensive VMs that share the same storage device become active at the same time. During this period of peak load, VMs contend with each other for storage resources. In such situations, lack of a control mechanism can lead to performance degradation of the VMs running critical workloads as they compete for storage resources with VMs running less critical workloads.
Storage I/O Control (SIOC), provides a fine-grained storage control mechanism by dynamically allocating portions of hosts’ I/O queues to VMs running on the vSphere hosts based on shares assigned to the VMs. Using SIOC, vSphere administrators can mitigate the performance loss of critical workloads during peak load periods by setting higher I/O priority (by means of disk shares) to those VMs running them. Setting I/O priorities for VMs results in better performance during periods of congestion.

So what is the advantage for a VM administrator or an organization? Nowadays, with vSphere 5, a single LUN can be as large as 64 TB, and with high-end SANs backed by flash we achieve much higher consolidation. That's great, but when we do this, just as CPU/memory resource pools ensure the compute SLA, SIOC ensures the virtual disk storage SLA and its response time.

In short, the benefits are:

– SIOC prioritizes VMs’ access to shared I/O resources based on disk shares assigned to them. During the periods of I/O congestion, VMs are allowed to use only a fraction of the shared I/O resources in proportion to their relative priority, which is determined by the disk shares.
– If the VMs do not fully utilize their portion of the allocated I/O resources on a shared datastore, SIOC redistributes the unutilized resources to those VMs that need them in proportion to VMs’ disk shares. This results in a fair allocation of storage resources without any loss in their utilization.
– SIOC minimizes the fluctuations in performance of a critical workload during periods of I/O congestion, delivering as much as a 26% performance benefit compared to an unmanaged scenario.

How Storage I/O Control Works
SIOC monitors the latency of I/Os to datastores at each ESX host sharing that device. When the average normalized datastore latency exceeds a set threshold (30ms by default), the datastore is considered to be congested, and SIOC kicks in to distribute the available storage resources to virtual machines in proportion to their shares. This is to ensure that low-priority workloads do not monopolize or reduce I/O bandwidth for high-priority workloads. SIOC accomplishes this by throttling back the storage access of the low-priority virtual machines by reducing the number of I/O queue slots available to them. Depending on the mix of virtual machines running on each ESX server and the relative I/O shares they have, SIOC may need to reduce the number of device queue slots that are available on a given ESX server.
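To see the latency figures SIOC reacts to, run esxtop from an ESXi shell on any host sharing the datastore and press 'u' for the disk device view; DAVG/cmd is the device (array) latency, KAVG/cmd the kernel queuing latency, GAVG/cmd what the guest sees, and QUED the commands held in the queue. The device queue depth that SIOC throttles within can be checked with esxcli (the naa identifier below is only an example):

~ # esxcli storage core device list -d naa.600508b1001030394330313737300300 | grep -i queue

The output includes the "Device Max Queue Depth" value for that LUN.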

It is important to understand the way queuing works in the VMware virtualized storage stack to have a clear understanding of how SIOC functions. SIOC leverages the existing host device queue to control I/O prioritization. Prior to vSphere 4.1, the ESX server device queues were static and virtual-machine storage access was controlled only within the context of the storage traffic on a single ESX host. With vSphere 4.1 and 5, SIOC provides datastore-wide disk scheduling that responds to congestion at the array, not just at the host-side HBA. This provides the ability to monitor and dynamically modify the size of the device queues of each ESX host based on storage traffic and the priorities of all the virtual machines accessing the shared datastore. An example of a local host-level disk scheduler follows:

Figure 1 shows the local scheduler influencing ESX host-level prioritization as two virtual machines are running on the same ESX server with a single virtual disk on each.

image

Figure 1. I/O Shares for Two Virtual Machines on a Single ESX Server (Host-Level Disk Scheduler)

When the I/O shares for the virtual disks (VMDKs) of each of those virtual machines are set to different values, it is the local scheduler that prioritizes the I/O traffic, and only if the local HBA becomes congested. This host-level capability existed in ESX Server for several years prior to vSphere 4.1 and 5. It is this local host-level disk scheduler that also enforces the limits set for a given virtual-machine disk. If a limit is set for a given VMDK, the I/O is controlled by the local disk scheduler so as to not exceed the defined amount of I/O per second.

From vSphere 4.1 onwards, SIOC adds two key capabilities: (1) the enforcement of I/O prioritization across all ESX servers that share a common datastore, and (2) detection of array-side bottlenecks. These are accomplished by way of a datastore-wide distributed disk scheduler that uses I/O shares per virtual machine to determine whether device queues need to be throttled back on a given ESX server to allow a higher-priority workload to get better performance.

The datastore-wide disk scheduler totals up the disk shares for all the VMDKs that a virtual machine has on the given datastore. The scheduler then calculates what percentage of the shares the virtual machine has compared to the total number of shares of all the virtual machines running on the datastore. As described before, SIOC engages only after a certain device-level latency is detected on the datastore. Once engaged, it begins to assign fewer I/O queue slots to virtual machines with lower shares and more I/O queue slots to virtual machines with higher shares. It throttles back the I/O for the lower-priority virtual machines, those with fewer shares, in exchange for the higher-priority virtual machines getting more access to issue I/O traffic.
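A quick illustration with assumed numbers (these are not from the test described later in this post): suppose VM-A has 1500 disk shares and VM-B has 500 shares on the same datastore, and congestion is detected. VM-A is entitled to 1500 / (1500 + 500) = 75% and VM-B to 25%; with a host device queue depth of 32, SIOC would let VM-A issue I/O through roughly 24 queue slots and VM-B through roughly 8, until the latency drops back under the threshold.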

However, it is important to understand that the maximum number of I/O queue slots that can be used by the virtual machines on a given host cannot exceed the maximum device-queue depth for the device queue of that ESX host.

What are the conditions required for SIOC to work?
– A datastore with many VMs on it, shared between multiple ESX hosts.
– Disk shares set for the VMs on that datastore.
– SIOC enabled, with the congestion threshold chosen based on the LUN/NFS backing (SSD, SAS, SATA).
– SIOC then monitors the datastore IOPS usage, per-VM latency, and overall storage array latency.

SIOC does not account for the following:

Latency generated by other physical systems using the same array, that is, when the array is also shared with physical hosts, backup applications/jobs, and so on. In that case it can raise alarms such as "VMware vCenter – Alarm Non-VI workload detected on the datastore: An unmanaged I/O workload is detected on a SIOC-enabled datastore".

Tricky question: will it work for a single ESXi host? Simple answer: SIOC is designed for multiple ESXi hosts sharing a datastore that contains many VMs. SIOC can be enabled for a single host too if you have the license, but that is not the case it is intended for.

So what is the solution for a single ESXi host? Simple: just set shares on the VMDKs, and the ESXi host's local host-level disk scheduler will enforce the disk SLA.

So is there any real advantage to SIOC in a real-world scenario? The experience below proves it.

My scenario: I have 4 ESXi 5 hosts in a cluster with many FC LUNs mounted from HP 3PAR storage arrays. On one of the LUNs we placed around 16 VMs, including our vCenter and its database, along with a less critical VM (the Analytics engine from the vCOPS appliance) that consumes a lot of IOPS from the storage and eventually affects the other, critical VMs. We enabled SIOC, monitored for 10 days, and then compared the VMDK performance and latency during peak hours.

Below is the datastore with many VMs, shared across the 4 hosts.

image

With SIOC disabled.

image

image

With SIOC enabled.

image

image

So what is the takeaway with SIOC?

Better I/O response time, reduced latency for critical VMs, and an assured disk SLA for critical VMs. The bottom line is that critical VMs won't be affected during storage contention, even in a highly consolidated environment.

 

REFERENCES:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1020651

http://blogs.vmware.com/vsphere/2011/12/using-both-storage-io-control-network-io-control-for-nfs.html

http://blogs.vmware.com/vsphere/2012/03/debunking-storage-io-control-myths.html

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1022091

http://blogs.vmware.com/vsphere/2011/09/storage-io-control-enhancements.html

http://www.vmware.com/files/pdf/techpaper/VMW-vSphere41-SIOC.pdf

http://blogs.vmware.com/performance/2010/07/sioc.html

Partition alignment in VMware vSphere 5, a Deep Dive, Part-2

In continuation of Part-1, we will see how virtual machine disk alignment affects the virtualized world and, eventually, performance.

Here we need to know how to align disk partitions for operating systems like Windows 2003, XP, RHEL 5.x, etc. on a VMware vSphere VMFS 5 datastore.

Let us assume the VMFS resides on top of a RAID volume or a LUN from a storage array; in either case the RAID stripe size will be from 4 KB to 256 KB depending on the array and RAID level. Now suppose we deploy the above-mentioned operating systems on that VMFS.

What is the issue with guest OS disk misalignment?

This issue applies to both the physical and the virtual world, and it is not caused by the VMFS layer or anything else in the hypervisor. It is a limitation of these operating systems and how they partition the disk they are given. Just as in the physical world, in the virtual world the hypervisor presents an HDD, only it is a virtual one; the OS does not know whether it is a virtual or a physical HDD, and it ends up creating the partition without correct alignment.

Leaving the virtual world aside and looking at the physical world first: assume the guest filesystem block/cluster size is 4 KB and it reads or writes 4 KB to the hard disk, which in turn maps to a 4 KB physical sector on the HDD. Because the partition is not aligned, the operation touches 2 physical 4 KB sectors; the amount of data read or written is the same, but 2 physical sectors are involved.

image

So the hard disk head needs to do more work to move data to and from the physical sectors. For a single I/O of the same 4 KB of data from the OS, the HDD or array has to touch 2 physical sectors, so I/O response time and latency suffer and we get poor performance. The same applies in the virtual world.

VMFS3 and VMFS5 themselves are already aligned to the underlying storage. To see how a VMFS volume is aligned, open an ESXi shell or SSH session and type the following:

~ # partedUtil get /vmfs/devices/disks/naa.600508b1001030394330313737300300
71380 255 63 1146734896

1 2048 1146719699 0 0
~ #

The first line displayed is disk geometry information (cylinders, heads, sectors per track and LBA [Logical Block Address] count). The second line is information about the partitions. There is only 1 partition; it starts at LBA 2048 and ends at LBA 1146719699. That's something else to be aware of: newly created VMFS-5 partitions start at LBA 2048. This is different from previous versions of VMFS:

  • VMFS-2 created on ESX 2.x; starting LBA 63
  • VMFS-3 created on ESX 3 & 4; starting LBA 128
  • VMFS-5 created on ESXi 5; starting LBA 2048

So VMFS is aligned to the 1 MB boundary (it starts at LBA 2048), and as we all know the VMFS 5 block size is 1 MB, so VMFS and the underlying storage are already aligned. When a misaligned guest OS sends a read/write request down the stack, from the VMDK to the VM's SCSI controller to VMFS and finally to the storage, the array has to touch more than one physical sector or chunk. That is overhead for the storage, and also for the VMkernel, because the VMkernel has to wait until the array completes the task.

image

If the guest OS is aligned, a single 4 KB read/write uses only one chunk on the storage, which gives good response time and low latency for the I/O operation.

image
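You can also check a guest's partition start sector without logging in to the guest at all, by pointing partedUtil at the VM's flat VMDK from the ESXi shell (the path below is only an example); a first partition starting at sector 63 is misaligned, while 2048 means it is aligned to the 1 MB boundary:

~ # partedUtil getptbl /vmfs/volumes/datastore1/MyVM/MyVM-flat.vmdk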

Now, how to align the guest OS:

For Windows, follow the steps below.

1- Add the required virtual HDD to the Guest OS

2- Verify the HDD is visible in the OS

image

image

3- Open a command prompt and use the command-line syntax below:

C:\>diskpart

DISKPART> list disk

DISKPART> select disk 2

Disk 2 is now the selected disk.

DISKPART> create partition primary align=1024

image

image

 

After this, format the partition. For Windows MSSQL database and Exchange servers it is recommended to format with an NTFS cluster size of 64K (64 kilobytes); for other general-purpose servers 32K can be used. A command-line example follows the screenshot below.

image
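Still inside diskpart you can assign a drive letter and then format from a normal command prompt with the allocation unit size you want; the drive letter and label below are examples:

DISKPART> assign letter=E
DISKPART> exit

C:\>format E: /FS:NTFS /A:64K /V:DATA /Q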

In general, to check whether a partition is aligned, use the command below and then refer to Part-1 of this blog to do the math.

wmic partition get BlockSize, StartingOffset, Name, Index

image

In my case this shows that both disks have partitions aligned to 1024 KB, i.e. 1 MB, i.e. sector 2048.

(1048576 bytes)/(512 bytes/sector) = 2048 sector

To check the file allocation unit size, run this command for each drive:

fsutil fsinfo ntfsinfo c:

image

Steps to align the partition for Linux.

To check that your existing partitions are aligned, issue the command:

fdisk -lu

The output is similar to:

image

1. Enter fdisk /dev/sd<x> where <x> is the device suffix.

2. Type n to create a new partition.

3. Type p to create a primary partition.

4. Type 1 to create partition No. 1.

5. Select the defaults to use the complete disk.

image

1. Type t to set the partition’s system ID.

2. Type 8e to set the partition system ID to 8e (Linux LVM)

3. Type x to go into expert mode.

4. Type b to adjust the starting block number.

5. Type 1 to choose partition 1.

6. Type 2048 to set it to start with the sector 2048.

7. Type w to write label and partition information to disk.

image

To check that your existing partitions are aligned, issue the command:

fdisk -lu

image

Now you can see the partitions are aligned, that is, they start at sector 2048, the 1 MB boundary.

NOTE:

Now, on the internet there are many methods to automate this process during template deployment. One method is to add a few 1 GB VMDKs, do the partition alignment, and make a template; after the template is deployed, the VMDKs are grown for the guest OS. This works fine for thin and lazy-zeroed disks, but if you do the same with an eager zeroed disk we all know the outcome: once you grow an eager zeroed disk it becomes lazy zeroed, which is a problem for FT, Windows clusters, Oracle clusters, and so on. Otherwise, after growing the VMDK you need to use vmkfstools to convert it from lazy zeroed back to eager zeroed, which is again a management overhead, so it's your decision!
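If you do go that route, the conversion back to eager zeroed can be done in place from the ESXi shell with vmkfstools; the -k (eagerzero) option zeroes out the remaining blocks of a lazy-zeroed disk without cloning it. The VM must be powered off, and the path below is only an example:

~ # vmkfstools -k /vmfs/volumes/datastore1/MyVM/MyVM.vmdk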

 

REFERENCE:

 

http://blogs.vmware.com/vsphere/2011/08/vsphere-50-storage-features-part-7-gpt.html

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1036609

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003565

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2003813

Partition alignment in VMware vSphere 5, a Deep Dive, Part-1

This topic has been discussed at length in the virtualization domain for a long time, and I would like to add a few insights. With the vSphere 5.x release there have been a lot of changes in the VMFS filesystem, and with Windows 2008, 2008 R2 and 2012 and RHEL 6.x, Ubuntu 12.x there is no need to do partition alignment manually, but it is still good to know how these operating systems partition and handle disk volumes.

Legacy operating systems like Windows 2003 and RHEL 5.x still need partition alignment with vSphere 5 and the VMFS 5 filesystem. In short, this topic applies to physical servers, other hypervisor vendors, and VMware alike.

There are many outdated articles on the web; the information below should give a good insight into the topic.

Theory & History

As we all know, a physical server or storage array needs physical HDDs; with a virtual machine it is a virtual HDD. The image below shows the hard disk geometry.

clip_image001

In the case of early IDE/ATA hard disks, the BIOS provided access to the disk through an addressing mode called Cylinder-Head-Sector (CHS), an early method of giving an address to each physical block of data on a hard disk drive. CHS addressing identifies individual sectors on a disk by their position in a track, where the track is determined by the head and cylinder numbers.

In old computer systems the maximum amount of addressable data was very limited, due to limitations in both the BIOS and the hard disk interface. Legacy operating systems like NT, DOS, etc. used this method.

Modern hard disks use a recent version of the ATA standard, such as ATA-7. These disks are accessed using a different addressing mode called Logical Block Addressing (LBA), which addresses sectors in a completely different way. Instead of referring to a cylinder, head and sector number, each sector is assigned a unique sector number; in essence, the sectors are numbered 0, 1, 2, etc. up to (N-1), where N is the number of sectors on the disk. All modern disk drives are accessed using the LBA scheme, where sectors are simply addressed linearly from 0 to some maximum value and disk partition boundaries are defined by start and end LBA addresses. In the LBA addressing system each cylinder is standardized to 255 heads, each head has one track with 63 logical blocks or sectors, and each sector is 512 bytes. You can see this information in Linux:

clip_image002

All modern operating systems use the LBA method to read/write data to the hard disk.

Historically the physical sector of an HDD has been 512 bytes, and this was the standard for over 30 years. That physical sector size matches the size of one logical block or sector of the OS (512 bytes), so there is no issue. In 2009, IDEMA (The International Disk Drive Equipment and Materials Association) and leading data storage companies introduced Advanced Format (AF) technology, in which the physical sector size is 4K (4,096 bytes). Disk drives with larger physical sectors allow enhanced data protection and correction algorithms, which provide increased data reliability. Larger physical sectors also enable greater format efficiencies, freeing up space for additional user data.

One of the problems of introducing this change in the media format is the potential for introducing compatibility issues with existing software and hardware. As a temporary compatibility solution, the storage industry is initially introducing disks that emulate a regular 512-byte sector disk, but make available info about the true sector size through standard ATA and SCSI commands. As a result of this emulation, there are, in essence, two sector sizes:

Logical sector: The unit that is used for logical block addressing for the media. We can also think of it as the smallest unit of write that the storage can accept. This is the “emulation.”

Physical sector: The unit for which read and write operations to the device are completed in a single operation. That is the Actual physical sector size of storage data on a disk.

Initial types of large sector media

The storage industry is quickly ramping up efforts to transition to this new Advanced Format type of storage for media having a 4 KB physical sector size. Two types of media will be released to the market:

4 KB native: Disks that directly report a 4 KB logical sector size and have a physical sector size of 4 KB. The disk accepts only 4 KB I/Os; however, the software stack can provide 512-byte logical sector support through RMW (read-modify-write). This media has no emulation layer and directly exposes 4 KB as its logical and physical sector size. The overall issue with this new type of media is that the majority of applications and operating systems do not query for and align I/Os to the physical sector size, which can result in unexpected failed I/Os.

512-byte emulation (512e): Disks that directly report a 512-byte logical sector size but have a physical sector size of 4 KB; the firmware translates 512-byte writes into 4K read-modify-write (RMW) operations. In today's drives this translation introduces a performance penalty. This media has an emulation layer as discussed in the previous section and exposes 512 bytes as its logical sector size (similar to a regular disk today), but makes its physical sector size information (4 KB) available.

Overall Windows support for large sector (4KB) media

This table documents the official Microsoft support policy for various media and their resulting reported sector sizes. See this KB article for details.

clip_image004

Windows 8, Windows Server 2012, and Linux starting with kernel version 2.6.34 have full support for reading and writing 4K logical blocks. Operating systems like Windows 7, 2008 and 2008 R2 still use 512 bytes for each logical block or sector, so one 4K physical sector contains 8 logical sectors of 512 bytes.

clip_image006

512-byte emulation at the drive interface

clip_image008

clip_image010

To maintain compatibility, Western Digital, Hitachi and Toshiba emulate a 512-byte device by maintaining a 512-byte sector at the drive interface, that is, the firmware inside the HDD controller does the conversion; Seagate uses its SmartAlign technology for the same firmware-level emulation. These drives are also called Advanced Format 512e. Let's see how this works.

512-byte Read

When the host requests to read a single 512-byte logical block, the hard drive will actually read the entire 4K physical sector containing the requested 512 bytes. The 512-byte block is extracted and sent to the host. This can be done very quickly.

clip_image012

512-byte Write (Read-Modify-Write)

When the host attempts to write a single 512-byte logical block, the hard drive will first read the 4K physical sector containing the 512 bytes that are to be overwritten. Next, it will insert the 512 bytes of new data and write the entire 4K block of data back to the media. This process is called a “Read-Modify-Write”. The drive must read the existing data, modify a subset, and then write the data back to the disk. This process can require additional revolutions of the hard disk.

clip_image014

How Does Advanced Format Technology Maintain Performance?

In order to maintain top performance, it is important to ensure that writes to the disk are aligned. Ideally, writes should be done in 4K blocks, and each block will then be written to a physical 4K sector on the drive. This can be accomplished by ensuring that the OS and applications write data in 4K blocks, and that the drive is partitioned correctly. Most modern operating systems use a file system that allocates storage in 4K blocks or clusters (NTFS Cluster/File allocation unit or EXT3/4 file system block size). In a traditional hard drive, the 4K block is made up of eight 512-byte sectors (see Figure 4: 512-byte Emulated Device Sector Size).

clip_image016

In production the RAID layer also comes into the picture: these 4 KB physical sectors are combined into a RAID stripe, whose size may vary from 4 KB to 256 KB depending on the RAID level. The same applies in a storage array. The RAID controller (PERC, Intel, LSI, Smart Array, etc.) handles this and presents a RAID volume to the OS. In both cases the OS partition needs to be aligned.

Since most modern operating systems will write in 4K blocks, it is important that each 4K logical block is aligned to a physical 4K block on the disk (see Figure 5). This is especially important because the 512e feature of the drive cannot prevent a partitioning utility from creating a misaligned partition. When misalignment occurs, a logical 4K block will reside on two physical sectors.

clip_image018

In this case, a single read or write of a 4K block will result in a read/write of two physical sectors. The impact of a “read” is minimal, whereas a single write will cause two “Read-Modify-Writes” to occur, potentially impacting performance (see Figure 6).

clip_image020

So what happens with operating systems like XP, Windows 2003, RHEL 5.x, etc., and why do we need disk alignment in the physical world or in the virtual world?

As I mentioned, physical disks have 4 KB physical sectors, a RAID volume or a LUN from a storage array has a 4 KB to 256 KB stripe size (a multiple of 4 KB), and the operating system uses 512-byte logical sectors. When we install an operating system like Windows 2003 or RHEL 5.x on an HDD or a LUN, these operating systems reserve the first 63 sectors (the first track) of the disk as a boot area, hidden from the OS.

That is, sectors 0 to 62 are reserved (hidden); the master boot record (MBR) resides within these hidden sectors, using the first sector of the first track (LBA 0) for the MBR data. The first partition then begins right after the first track, at logical block address 63. You can see this below:

clip_image022

Here, in RHEL 5.x and older Linux versions, we can see the first partition starts at sector/LBA 63, and if you add another HDD or LUN to this host and create a partition, the partitioning tool of these Linux versions again creates it starting at sector 63.

clip_image024

In the above output from Windows 2003, the first partition starts at offset 32256. Windows does not show the LBA/sector number; instead it shows the value in bytes. An offset of 32256 means 32256 / 512 = 63, so the partition starts at sector 63. Below is a more detailed way of confirming this.

Essential Correlations: Partition Offset, File Allocation Unit Size, and Stripe Unit Size

Use the information in this section to confirm the integrity of disk partition alignment configuration for existing partitions and new implementations.

There are two correlations which when satisfied are a fundamental precondition for optimal disk I/O performance. The results of the following calculations must result in an integer value:

Partition_Offset ÷ Stripe_Unit_Size (disk physical sector size or RAID stripe size)

Stripe_Unit_Size ÷ File_Allocation_Unit_Size

Of the two, the first is by far the most important for optimal performance. The following demonstrates a common misalignment scenario: given a starting partition offset of 32,256 bytes (31.5 KB) and a stripe unit size of 4,096 bytes (4 KB), the result is 32,256 ÷ 4,096 = 7.875. This is not an integer; therefore the offset and stripe unit size are not correlated.

In the second calculation, the NTFS cluster size (file allocation unit) defaults to 4,096 bytes, and we can choose 32 KB, 64 KB, etc. For MSSQL and Exchange it is recommended to choose 64 KB at format time; this value is not an issue and the result will be an integer. But the first calculation is the crucial one!

So we have misalignment and we need to realign the partition; the diagrams below show this pictorially.

clip_image026

Windows 7, 8, 2008, 2008 R2 and 2012, as well as RHEL 6, Debian 6, Ubuntu 10/11/12 and SUSE 11 onwards, automatically align partitions during installation. You can see this below:

WINDOWS 7

clip_image028

WINDOWS 2008R2

clip_image030

In the Windows case, partition alignment defaults to the 1024 KB (1 MB) boundary (that is, a starting offset of 1,048,576 bytes = 1024 KB). It correlates well (as described in the previous section, 1024 KB / 4 KB = 256, an integer) with common stripe unit sizes such as 4 KB, 64 KB, 128 KB and 256 KB, as well as the less frequently used values of 512 KB and 1024 KB. Put simply, the Windows partitioning tool begins the first partition at LBA/sector 2048 (1,048,576 / 512 = 2048). So no manual alignment is needed here, and if we add another disk it will be auto-aligned as well.

clip_image032

RHEL 6

clip_image034

In RHEL 6 and other recent Linux distributions, the first partition starts at sector 2048, that is, LBA 0 to 2047 are reserved. So the OS is aligned to the 1 MB boundary; doing the math, sector 2048 sits at offset 1,048,576 bytes (1,048,576 / 512 = 2048), and if we add another HDD or LUN it will be aligned automatically as well.

My next post will discuss how to do disk alignment in vSphere or any other hypervisor.

References:

http://en.wikipedia.org/wiki/Advanced_Format

http://www.tech-juice.org/2011/08/08/an-introduction-to-hard-disk-geometry/

http://www.seagate.com/tech-insights/advanced-format-4k-sector-hard-drives-master-ti/

http://en.wikipedia.org/wiki/Cylinder-head-sector

http://en.wikipedia.org/wiki/Logical_Block_Addressing

http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/

http://wiki.hetzner.de/index.php/Partition_Alignment/en

http://blogs.technet.com/b/askcore/archive/2011/09/26/alignment-changes-in-windows-2008-and-2008-r2.aspx

http://msdn.microsoft.com/en-us/library/dd758814%28v=sql.100%29.aspx

http://frankdenneman.nl/2009/05/20/windows-2008-disk-alignment/

http://support.microsoft.com/kb/2510009

http://support.microsoft.com/kb/2515143

http://technet.microsoft.com/en-us/library/ee832792.aspx#Phys

http://blogs.msdn.com/b/psssql/archive/2011/01/13/sql-server-new-drives-use-4k-sector-size.aspx

http://www.idema.org/?page_id=1936

http://wdc.custhelp.com/app/answers/detail/a_id/5655/~/how-to-install-a-wd-advanced-format-drive-on-a-non-windows-operating-system

http://msdn.microsoft.com/en-us/library/windows/desktop/hh848035%28v=vs.85%29.aspx

http://blogs.technet.com/b/filecab/archive/2011/04/26/using-4k-sector-and-advanced-format-drives-in-windows-hotfix-and-support-info-for-windows-server-2008-r2-and-windows-7.aspx

http://storage.toshiba.com/docs/services-support-documents/toshiba_4kwhitepaper.pdf

http://www.hgst.com/tech/techlib.nsf/techdocs/3D2E8D174ACEA749882577AE006F3F05/$file/AFtechbrief.pdf

http://www.seagate.com/files/docs/pdf/datasheet/disc/ds_momentus_5400_6.pdf

http://www.wdc.com/wdproducts/library/WhitePapers/ENG/2579-771430.pdf

http://www.seagate.com/docs/pdf/whitepaper/mb_smartalign_technology_faq.pdf

http://www-03.ibm.com/systems/resources/systems_i_advantages_integratedserver_pdf_vmware_storage_alignment.pdf

 

Multi-NIC vMotion Speed and Performance in vSphere 5.x – Optimum bandwidth allocation

During a recent vSphere 5 implementation for a client, I tested a few cases regarding vMotion speed and network bandwidth utilization.

I got some very interesting results.

Test Environment Specifications

Hardware

a) Enclosure: HP BladeSystem c7000 Enclosure G2
b) Server: ProLiant BL680c G7 Blade
c) HP VC FlexFabric 10Gb/24-Port Module
d) HP NC553i 10Gb 2-port FlexFabric Converged Network Adapter (Emulex OneConnect OCe111000 10GbE, FCoE UCNA)
e) Brocade BR-DCX4S-0001-A, DCX-4S Backbone Fibre Channel Switch
f) HP 3PAR T400 Storage System
g) CPU: Intel Xeon E7-4820, 2 GHz
h) RAM: 512 GB

Software

a) VMware vSphere 5 Enterprise Plus license
b) VMware ESXi 5.0.0 build-623860
c) VMware vCenter 5 Standard

Test Input

a) A Windows 2008 R2 64-bit virtual machine with 2 vCPUs and 4 GB RAM
b) A Windows 2008 R2 64-bit virtual machine with 2 vCPUs and 8 GB RAM
c) A Windows 2008 R2 64-bit virtual machine with 2 vCPUs and 16 GB RAM
d) A Windows 2008 R2 64-bit virtual machine with 2 vCPUs and 32 GB RAM
e) VMware Standard Switch with 2 x 1GbE pNIC uplinks (CASE-1)
f) VMware Standard Switch with 2 x 2GbE pNIC uplinks (CASE-2)
g) VMware Distributed Switch with 2 x 2GbE pNIC uplinks (CASE-3)

Test Procedure

Steps:

1. Configure Multi-NIC vMotion on the ESXi hosts, as per http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007467

2. In the HP VC, set the speed of the pNICs to 1GbE and vMotion 3 virtual machines from one host to another (CASE-1).

3. In the HP VC, set the speed of the pNICs to 2GbE and vMotion 3 virtual machines from one host to another (CASE-2).

4. Configure a VMware Distributed Switch with Multi-NIC vMotion.

5. vMotion 3 virtual machines from one host to another (CASE-3).

Normally in an ESXi cluster there are multiple vMotions happening in the background, for example when putting an ESXi host into maintenance mode; in these situations, if each vMotion completes quickly, the host enters maintenance mode much faster. For monster VMs with 32 GB or 64 GB of RAM this is also a great relief during live migration.

Now the tricky part: what pNIC link speed should we use for vMotion, 1GbE, 10GbE, or something in between? There are 3 cases to consider:

1 – With conventional server adapters we get 1GbE pNICs.

2 – With 10GbE cards, we normally use NetIOC to partition the card for ESXi traffic types such as vMotion, FT, management and VM traffic, setting policies and bandwidth for each.

3 – With HP blades we use FlexNICs, FlexFabric Converged Network Adapters and Flex networking in HP Virtual Connect.

From my understanding (and some Googling), below are the findings.

Issue the command tail -f /var/log/vmkernel.log and then initiate a vMotion. You should see entries like this:

Below is the result I got with 1GbE pNICs.

VMotion with 1G speed (vSS)

clip_image002[4]

image

2012-07-15T04:31:29.171Z cpu54:9720)Config: 346: “SIOControlFlag2” = 1, Old Value: 0, (Status: 0x0)

2012-07-15T04:31:31.306Z cpu0:16528)Migrate: vm 16529: 3234: Setting VMOTION info: Source ts = 1342326689209133, src ip = <2.2.2.1> dest ip = <2.2.2.3> Dest wid = 152453 using SHARED swap

2012-07-15T04:31:31.309Z cpu0:16528)Tcpip_Vmk: 1059: Affinitizing 2.2.2.1 to world 155824, Success

2012-07-15T04:31:31.309Z cpu0:16528)VMotion: 2425: 1342326689209133 S: Set ip address ‘2.2.2.1’ worldlet affinity to send World ID 155824

2012-07-15T04:31:31.310Z cpu0:155824)MigrateNet: 1158: 1342326689209133 S: Successfully bound connection to vmknic ‘2.2.2.1’

2012-07-15T04:31:31.311Z cpu0:155824)MigrateNet: 1158: 1342326689209133 S: Successfully bound connection to vmknic ‘2.2.2.1’

2012-07-15T04:31:31.311Z cpu2:8992)MigrateNet: vm 8992: 1982: Accepted connection from <2.2.2.3>

2012-07-15T04:31:31.311Z cpu2:8992)MigrateNet: vm 8992: 2052: dataSocket 0x4100368ec610 receive buffer size is 563272

2012-07-15T04:31:31.311Z cpu0:155824)VMotionUtil: 3087: 1342326689209133 S: Stream connection 1 added.

2012-07-15T04:31:31.311Z cpu0:155824)MigrateNet: 1158: 1342326689209133 S: Successfully bound connection to vmknic ‘2.2.2.2’

2012-07-15T04:31:31.312Z cpu0:155824)VMotionUtil: 3087: 1342326689209133 S: Stream connection 2 added.

2012-07-15T04:31:39.081Z cpu4:16529)VMotion: 3878: 1342326689209133 S: Stopping pre-copy: only 2280 pages left to send, which can be sent within the switchover time goal of 0.500 seconds (network bandwidth ~175.458 MB/s, 955747% t2d)

2012-07-15T04:31:39.129Z cpu6:16529)NetPort: 1427: disabled port 0x100000b

2012-07-15T04:31:39.129Z cpu6:16688)VSCSI: 6226: handle 8199(vscsi0:0):Destroying Device for world 16529 (pendCom 0)

2012-07-15T04:31:39.405Z cpu46:155824)VMotionSend: 3508: 1342326689209133 S: Sent all modified pages to destination (network bandwidth ~234.162 MB/s)

2012-07-15T04:31:39.649Z cpu30:8287)Net: 2195: disconnected client from port 0x100000b

Below is the result I got with 2GbE pNICs on the standard vSwitch.

Vmotion with 2G speed (vSS)

image

 

2012-07-15T13:30:18.467Z cpu10:11400)Migrate: vm 11401: 3234: Setting VMOTION info: Source ts = 1342359016928867, src ip = <2.2.2.1> dest ip = <2.2.2.3> Dest wid = 21083 using SHARED swap

2012-07-15T13:30:18.470Z cpu59:12009)MigrateNet: 1158: 1342359016928867 S: Successfully bound connection to vmknic ‘2.2.2.1’

2012-07-15T13:30:18.471Z cpu10:11400)Tcpip_Vmk: 1059: Affinitizing 2.2.2.1 to world 12009, Success

2012-07-15T13:30:18.471Z cpu10:11400)VMotion: 2425: 1342359016928867 S: Set ip address ‘2.2.2.1’ worldlet affinity to send World ID 12009

2012-07-15T13:30:18.471Z cpu37:8992)MigrateNet: vm 8992: 1982: Accepted connection from <2.2.2.3>

2012-07-15T13:30:18.471Z cpu37:8992)MigrateNet: vm 8992: 2052: dataSocket 0x410036859910 receive buffer size is 563272

2012-07-15T13:30:18.471Z cpu63:8255)NetSched: 4357: hol queue 3 reserved for fifo scheduler on port 0x6000002

2012-07-15T13:30:18.472Z cpu59:12009)MigrateNet: 1158: 1342359016928867 S: Successfully bound connection to vmknic ‘2.2.2.1’

2012-07-15T13:30:18.472Z cpu59:12009)VMotionUtil: 3087: 1342359016928867 S: Stream connection 1 added.

2012-07-15T13:30:18.472Z cpu59:12009)MigrateNet: 1158: 1342359016928867 S: Successfully bound connection to vmknic ‘2.2.2.2’

2012-07-15T13:30:18.472Z cpu59:12009)VMotionUtil: 3087: 1342359016928867 S: Stream connection 2 added.

2012-07-15T13:30:21.838Z cpu12:11401)VMotion: 3878: 1342359016928867 S: Stopping pre-copy: only 1476 pages left to send, which can be sent within the switchover time goal of 0.500 seconds (network bandwidth ~455.910 MB/s, 967822% t2d)

2012-07-15T13:30:21.862Z cpu12:11401)NetPort: 1427: disabled port 0x1000008

2012-07-15T13:30:21.863Z cpu12:11648)VSCSI: 6226: handle 8196(vscsi0:0):Destroying Device for world 11401 (pendCom 0)

2012-07-15T13:30:22.014Z cpu62:12009)VMotionSend: 3508: 1342359016928867 S: Sent all modified pages to destination (network bandwidth ~442.297 MB/s)

2012-07-15T13:30:22.332Z cpu16:8287)Net: 2195: disconnected client from port 0x1000008

Below is the result I got with 2GbE pNICs on the Distributed Switch.

Vmotion with 2G speed (vDS)

clip_image002[6]

2012-08-05T09:23:11.960Z cpu12:10134)Config: 346: “SIOControlFlag2” = 1, Old Value: 0, (Status: 0x0)

2012-08-05T09:23:12.857Z cpu0:207996)Migrate: vm 207997: 3234: Setting VMOTION info: Source ts = 1344158592033676, src ip = <2.2.2.11> dest ip = <2.2.2.13> Dest wid = 208920 using SHARED swap

2012-08-05T09:23:16.857Z cpu8:207997)VMotion: 3878: 1344158592033676 S: Stopping pre-copy: only 1754 pages left to send, which can be sent within the switchover time goal of 0.500 seconds (network bandwidth ~310.281 MB/s, 444548% t2d)

2012-08-05T09:23:17.053Z cpu50:209304)VMotionSend: 3508: 1344158592033676 S: Sent all modified pages to destination (network bandwidth ~413.803 MB/s)

2012-08-05T09:23:22.022Z cpu58:10132)Config: 346: “SIOControlFlag2” = 1, Old Value: 0, (Status: 0x0)

2012-08-05T09:23:23.685Z cpu13:208163)Migrate: vm 208164: 3234: Setting VMOTION info: Source ts = 1344158602063839, src ip = <2.2.2.11> dest ip = <2.2.2.13> Dest wid = 209100 using SHARED swap

2012-08-05T09:23:31.213Z cpu14:208164)VMotion: 3878: 1344158602063839 S: Stopping pre-copy: only 1468 pages left to send, which can be sent within the switchover time goal of 0.500 seconds (network bandwidth ~279.111 MB/s, 8217349% t2d)

2012-08-05T09:23:31.525Z cpu32:209313)VMotionSend: 3508: 1344158602063839 S: Sent all modified pages to destination (network bandwidth ~299.067 MB/s)

2012-08-05T09:23:22.022Z cpu58:10132)Config: 346: “SIOControlFlag2” = 1, Old Value: 0, (Status: 0x0)

2012-08-05T09:23:37.105Z cpu4:208338)Migrate: vm 208339: 3234: Setting VMOTION info: Source ts = 1344158615837668, src ip = <2.2.2.11> dest ip = <2.2.2.13> Dest wid = 209303 using SHARED swap

2012-08-05T09:23:58.429Z cpu8:208339)VMotion: 3878: 1344158615837668 S: Stopping pre-copy: only 2530 pages left to send, which can be sent within the switchover time goal of 0.500 seconds (network bandwidth ~360.240 MB/s, 1768884% t2d)

2012-08-05T09:23:58.769Z cpu50:209505)VMotionSend: 3508: 1344158615837668 S: Sent all modified pages to destination (network bandwidth ~302.745 MB/s)

image

 

From the above results: with 1GbE we see roughly 250 to 350 MB/s, and from 2GbE up to 10GbE we see no more than about 600 MB/s.

So why are we not getting the full bandwidth of the pNICs, and why does vMotion not use the full 1GbE (or faster) link?

During vMotion, ESXi 5.x checks the link speed of the pNICs attached to the vSwitch and adjusts the receive buffer size accordingly, up to 550 KB (the 550 KB ceiling is hardcoded), so theoretically we can get a maximum of about 600 MB/s. ESX/ESXi 4.x vMotion uses a buffer size of 263,536 bytes, which is about 256 KB; I did not test on vSphere 4.x.

As we know from the configuration maximums, a host can run 4 concurrent vMotions on 1GbE pNICs and 8 concurrent vMotions on 10GbE pNICs. So with 1GbE we get around 300 MB/s and 4 concurrent vMotions, and with 2GbE and above we get only up to 600 MB/s but 8 concurrent vMotions; it's simple math.

VMware may simply have coded it this way, and that could be the reason we are not getting the full bandwidth of the pNICs; there is a relation between the buffer size, the link speed and the final transfer rate.

In short, even if we give vMotion 10GbE pNICs, it won't use that full capacity; only the number of concurrent vMotions we can perform increases.

So what is the advantage of giving the vMotion pNICs more than 1Gb of bandwidth?

Because of the larger buffer size and the use of Multi-NIC vMotion in vSphere 5, we get better utilization of the pNICs, as vMotion traffic is distributed across all of them. So from the above tests, I believe that when we use 10GbE, CNAs and FlexNICs, we should give the vMotion network a 2GbE link speed. This improves vMotion throughput, migrations take less time to complete, and the number of concurrent vMotions increases.

Moreover, in a well-balanced ESX cluster that is not oversubscribed, very few vMotions happen inside the cluster, so dedicating a 10GbE card to vMotion is a waste.

So the optimum link speed and bandwidth allocation for the vMotion network is 2GbE.

Multi-NIC vMotion in vSphere 5.x

As always, VMware brings another feature in this release; there is a huge improvement in vMotion performance compared to vSphere 4.x.

With this feature, vMotion traffic can now span multiple pNICs attached to the vMotion vSwitch.

To configure it, follow the steps below.

 

image

To set up Multi-NIC vMotion in vSphere 5.x on a Standard vSwitch:

  1. Log into the vSphere Client and select the host from the inventory panel.
  2. Click the Configuration tab and select Networking.
  3. Click Add Networking and choose VMkernel as the Connection Type.
  4. Click Next.
  5. Add two or more NICs to the required standard switch.
    Note: You can create a new vSphere standard switch or use an existing vSwitch.
  6. Name the VMkernel portgroup (for example, vMotion-01), and assign a VLAN ID as required.
  7. Click Use this port group for vMotion, then click Next.
  8. Configure the IP address and subnet mask, then click Next.
  9. Click the Properties tab of the vSwitch, select the vMotion-01 portgroup, and click Edit.
  10. Click the NIC Teaming tab.
  11. Under Failover Order, select Override switch failover order.
  12. Configure the first adapter (for example, vmnic1) as active and move the second adapter (for example, vmnic3) to standby.
  13. Click OK.
  14. Under the vSwitch Properties, click Add to create a second VMkernel portgroup.
  15. Name the VMkernel portgroup (for example, vMotion-02), and assign a VLAN ID as required.
    Note: Ensure that both VMkernel interfaces participating in the vMotion have the IP address from the same IP subnet.
  16. Click Use this port group for vMotion, then click Next.
  17. Configure the IP address and subnet mask, then click Next.
  18. Click the Properties tab of the vSwitch, select the vMotion-02 portgroup, and click Edit.
  19. Click the NIC Teaming tab.
  20. Under Failover Order, select Override switch failover order.
  21. Configure the second adapter (for example, vmnic3) as active and move the first adapter (for example, vmnic1) to standby.
  22. On the Properties tab of the vSwitch, select each vMotion portgroup in turn and confirm that the active and standby adapters are the reverse of each other.

Once configured, it looks like this:

image
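A quick way to double-check from the ESXi shell that both vMotion VMkernel interfaces exist and sit in the same subnet is to list the vmknics; the interface names in the output (for example vmk1 and vmk2) depend on your configuration:

~ # esxcfg-vmknic -l

The listing shows each vmk interface with its port group, IP address, netmask and MTU.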

 

 

To set up Multi-NIC vMotion in vSphere 5.x on a Distributed vSwitch:

  1. Log into the vSphere Client and click the Networking inventory.
  2. Click New vSphere Distributed Switch and choose version 5.0.0.
  3. Name the Distributed switch (for example, Multi-NIC-vMotion).
  4. Assign two uplink ports to the switch, then click Next.
  5. Select physical adapters to each of the hosts, then click Next and Finish.
  6. Expand the Distributed switch you just created, click the dvPortGroup and click Edit Settings.
  7. Name the dvPortgroup (for example, vMotion-01).
  8. Click VLAN and assign a VLAN ID as required.
  9. Click the Teaming and Failover tab, configure dvUplink1 as Active Uplink and move dvUplink2 to Standby Uplink.
  10. Right-click the Distributed vswitch, then click New Port Group.
  11. Name the dvPortgroup (for example, vMotion-02).
  12. Click VLAN and assign a VLAN ID as required, then click Next and Finish.
  13. Select the second portgroup created, then click the Teaming and Failover tab.
  14. Configure dvUplink2 as Active Uplink and move dvUplink1 to Standby Uplink.
  15. Go to the Hosts and Clusters Inventory tab, select a host's Networking, and click vSphere Distributed Switch.
  16. Click Manage Virtual Adapters and click Add to add new virtual adapter.
  17. Choose VMkernel as the Virtual Adapter Type.
  18. Select the vMotion-01 portgroup, click Use this port group for vMotion, then click Next.
  19. Configure the IP address and subnet mask, then click Next and Finish.
  20. Add another virtual adapter, then select the vMotion-02 portgroup.
  21. On the Distributed vSwitch, select each dvportgroup on VMKernel Port vmk1 and vmk2 in turn, and confirm that the active and standby uplinks are the reverse of each other.

Once configured, it looks like this:

 

image 

 

    imageimage 

 

image   image

 

image

 

image image

 

My test results; I used Network I/O Control to test with different bandwidth allocations for the pNICs.

image

 

Notes:

There is no need to change the teaming policy of the vMotion vSS or vDS switch.

There is no need to create EtherChannel, LACP, LAG, etc. for the vMotion network; ESXi spreads the vMotion traffic across the pNICs even for a single VM. With multiple uplinks we also get redundancy, since in each vMotion port group the other pNIC of the team is set to standby.

Set the "Failback" option to "No" in the vSwitch teaming policy. This is recommended because during a physical switch reboot the switch ports come up before the switch is ready to forward traffic, and with failback enabled the ESXi traffic would immediately fail back and try to send traffic before the switch can handle it.

For a vSS, there is no need to change the other teaming policy settings on the port group; only the pNIC failover order needs to be changed.

For a vDS, set the port group "Failback" option to "No" and change the pNIC failover order.

For a vDS, use static binding.

For 10GbE cards, it is good to use Network I/O Control and set bandwidth and shares for the vMotion network.

 

REF:-

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007467
