NEW! IBM SAN Volume Controller, IBM Storwize V7000, V5000, V3500 and V3700 V7.5.0.0 Now Generally Available

I’m pleased to announce that V7.5.0.0 is now available to download:

SAN Volume Controller V7.5.0.0

IBM Storwize V7000 V7.5.0.0

IBM Storwize V5000 V7.5.0.0

IBM Storwize V3700 V7.5.0.0

IBM Storwize V3500 V7.5.0.0


vSphere Storage Migrations of VMs with Raw Device Mappings (RDM)

I was running into a sea of confusion over the expected behaviour when performing Storage Migrations on VMs that have Raw Device Mappings (in both Physical and Virtual compatibility modes).

After a bit of Googling I came across a VMware blog post which covers the scenarios quite nicely.

To quote:

This is what I observed, testing with both pRDMs and vRDMs.

VM with Physical (Pass-Thru) RDMs (Powered On – Storage vMotion):

  • If I try to change the format to thin or thick, then no Storage vMotion is allowed.
  • If I chose not to do any conversion, only the pRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN.

VM with Virtual (non Pass-Thru) RDMs (Powered On – Storage vMotion):

  • On a migrate, if I chose to convert the format in the advanced view, the vRDM is converted to a VMDK on the destination VMFS datastore.
  • If I chose not to do any conversion, only the vRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN (same behaviour as pRDM).

VM with Physical (Pass-Thru) RDMs (Powered Off – Cold Migration):

  • On a migrate, if I chose to change the format (via the advanced view), the pRDM is converted to a VMDK on the destination VMFS datastore.
  • If I chose not to do any conversion, only the pRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN

VM with Virtual (non Pass-Thru) RDMs (Powered Off – Cold Migration):

  • On a migrate, if I chose to convert the format in the advanced view, the vRDM is converted to a VMDK on the destination VMFS datastore.
  • If I chose not to do any conversion, only the vRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN (same behaviour as pRDM).

As you can see, there are three occasions on which an RDM could be converted to a VMDK. Perhaps the most surprising is that a pRDM can be converted to a VMDK when a cold migration of the VM is performed and the format is changed.
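Before attempting any of the migrations above, it can be useful to know which VMs actually carry RDMs and in which compatibility mode. A minimal PowerCLI sketch, assuming an existing Connect-VIServer session (the cmdlets and the DiskType values are standard PowerCLI; the property selection is just illustrative):

```powershell
# List every RDM disk in the inventory along with its compatibility mode.
# DiskType 'RawPhysical' = pRDM, 'RawVirtual' = vRDM; 'Flat' disks are excluded.
Get-VM | Get-HardDisk |
    Where-Object { $_.DiskType -like 'Raw*' } |
    Select-Object Parent, Name, DiskType, ScsiCanonicalName, CapacityGB
```

The ScsiCanonicalName column is handy here, since it identifies the backing LUN that stays in place when only the mapping file is moved.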

PowerCLI to get host ESXi version & Serial Number

Here is a handy bit of PowerCLI to return the host name, product, ESXi build, and serial number of all hosts in vCenter:

Get-VMHost | Get-View | Sort-Object Name | Select Name, @{N='Product';E={$_.Config.Product.FullName}}, @{N='Build';E={$_.Config.Product.Build}}, @{Name='Serial Number'; Expression={($_.Hardware.SystemInfo.OtherIdentifyingInfo | where {$_.IdentifierType.Key -eq … Continue reading
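The original post is cut off mid-expression, so the tail of that filter is not shown. A plausible reconstruction, assuming the filter matches on the ServiceTag identifier (that key, and the final .IdentifierValue selection, are assumptions on my part, not from the original):

```powershell
# Hedged reconstruction of the truncated snippet above.
# 'ServiceTag' as the identifier key is an assumption; the original is cut off before it.
Get-VMHost | Get-View | Sort-Object Name | Select-Object Name,
    @{N='Product';E={$_.Config.Product.FullName}},
    @{N='Build';E={$_.Config.Product.Build}},
    @{N='Serial Number';E={($_.Hardware.SystemInfo.OtherIdentifyingInfo |
        Where-Object {$_.IdentifierType.Key -eq 'ServiceTag'}).IdentifierValue}}
```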

vSphere Best Practices with IBM SVC/Storwize family

I often get asked “What is the recommended multipathing configuration with vSphere and SVC?” and “Are there recommended LUN sizes for VMFS datastores?”.

Luckily there is a white paper that covers a lot of Best Practices when using vSphere with SAN Volume Controller and/or Storwize products.

Similarly, for larger SAN design considerations there is a Redbook that covers a large number of settings specifically regarding vSphere.
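On the multipathing question specifically, Round Robin is the path policy generally recommended for SVC/Storwize volumes, and PowerCLI can report on it across the cluster. A sketch, assuming the IBM volumes report a Vendor of 'IBM' (verify the policy against the white paper before changing anything):

```powershell
# Report any IBM (SVC/Storwize) LUNs that are not using Round Robin multipathing.
# The Vendor match is illustrative; confirm the recommended policy first.
Get-VMHost | Get-ScsiLun -LunType disk |
    Where-Object { $_.Vendor -eq 'IBM' -and $_.MultipathPolicy -ne 'RoundRobin' } |
    Select-Object VMHost, CanonicalName, MultipathPolicy

# To apply the change (ideally during a maintenance window), pipe the same
# selection into: Set-ScsiLun -MultipathPolicy RoundRobin
```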

IBM SVC Warm-Swap Node Redbook

The IBM SAN Volume Controller is a fantastic storage solution for virtualizing storage arrays from different vendors, offering performance improvements, replication, compression, encryption, and a huge interoperability support matrix.

However, one of the annoyances of the IBM SAN Volume Controller architecture is that if a single node has a hardware error or requires maintenance, you’re left with only a single node in the IO group.

This issue has now been alleviated by introducing the option of a hot-swap node for when a node is offline for an extended period.

An IBM Redbook has been published here which describes the process.

This function is to be extended in future releases of SVC code to allow an automated hot-swap node during a code upgrade. Historically, in a 2-node SVC cluster you’d be left with a single node in the IO group, which would be vulnerable as a single point of failure for prolonged periods of time. With this hot-swap node feature, the spare node would be upgraded first, before replacing each down-level node as it goes offline for upgrade. This should significantly improve the reliability of the SVC cluster during the upgrade process.

PowerCLI 6.0r1 available to download


For those who don’t know, all vCenter tasks can be controlled, triggered, and automated via a Windows PowerShell command-line interface. After importing the VMware modules into your normal PowerShell session, you can manage your entire vSphere environment from scripts or the console.

The new release of PowerCLI has been made available, coinciding with the release of vSphere 6.0; however, there don’t appear to be any cmdlets for vVols yet. Download link
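For anyone new to PowerCLI, a typical session starts by connecting to vCenter and then piping inventory cmdlets together. A minimal sketch (the server name is a placeholder, not from the original post):

```powershell
# Connect to vCenter; 'vcenter.example.com' is a placeholder for your own server.
Connect-VIServer -Server vcenter.example.com

# Example one-liner: list powered-on VMs with their CPU and memory allocations.
Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' } |
    Select-Object Name, NumCpu, MemoryGB
```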

VMware also have a nice habit of publishing a PDF poster which covers many of the cmdlets. They haven’t released a new version yet (older version), but the more comprehensive user guide is available here.

My script editor of choice is PowerGUI, which offers some nice touches that make writing scripts and general admin tasks a little easier.