Storage


Upgrade to vSphere 5.1 – Part 6 – Upgrading datastores

Monday, November 12th, 2012

vSphere 5 introduced a new file system version: VMFS-5. It brings a lot of improvements in scalability and performance (details can be found here), so if you haven’t done it already, this is how you can upgrade your VMFS-3 datastores to VMFS-5.

The operation is non-disruptive for VMs hosted on the datastore but cannot be rolled back. To upgrade a datastore successfully, all hosts with access to this datastore must be ESXi 5.0 or higher.
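
If you have more datastores and hosts, checking this by hand gets tedious. Below is a minimal pyVmomi (vSphere Python SDK) sketch that reports, for a given datastore, its current VMFS version and block size plus the product version of every host that mounts it. The vCenter address, credentials and datastore name are placeholders, so treat it as a starting point rather than a finished tool.

    # check_vmfs_upgrade_prereqs.py - rough pyVmomi sketch (placeholder credentials)
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    DS_NAME = "datastore1"                          # the VMFS-3 datastore to upgrade

    ctx = ssl._create_unverified_context()          # lab only - no certificate checks
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.Datastore], True)
    ds = next(d for d in view.view if d.name == DS_NAME)

    # VMFS details - the version and block size survive an in-place upgrade
    vmfs = ds.info.vmfs
    print("Datastore %s: VMFS %s, block size %d MB"
          % (ds.name, vmfs.version, vmfs.blockSizeMb))

    # every host mounting the datastore must run ESXi 5.0 or later
    for mount in ds.host:
        print("  mounted by %s (%s)" % (mount.key.name, mount.key.config.product.fullName))

    Disconnect(si)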

Go to the Hosts and Clusters inventory view and select a host that can access the datastore to be upgraded. Go to the Configuration tab and, under Hardware, select Storage. This will show all datastores the selected host has access to:

VMFS datastores

Select the VMFS-3 datastore. Below you will see a link Upgrade to VMFS-5. Click it.

Upgrade link

A warning will appear reminding you that all hosts connected to the datastore must be running ESXi 5.0 or higher. Click OK.

All hosts must be ESXi 5 or higher

That’s it. The datastore will be upgraded and you will be able to use new features such as GPT and the unified block size.

What about block size and partition table on upgraded datastores?

When creating a VMFS-3 datastore you had an option to choose a block size for the new partition:

Supported block sizes

This is no longer an option, as VMFS-5 uses only one, unified block size – 1 MB. However, when a VMFS-3 datastore is upgraded to VMFS-5, it will retain all of its partition characteristics with one exception: MBR will be converted (seamlessly) to GPT the moment the datastore’s size exceeds 2TB.

Because the rest of the partition characteristics (such as the original block size) remain the same, VMware suggests temporarily moving the VMs to another datastore with Storage vMotion and recreating the emptied datastore as VMFS-5 from scratch.

Types of swapping on ESX hosts

Friday, November 9th, 2012

One of the most important features of virtualization, and the reason why virtualization is so cool, is the ability to overcommit resources. In terms of RAM it means that we can configure virtual machines with more RAM than is physically installed in a host. Once configured, we rely on the ESX server to automatically manage the resources so that unneeded memory is taken away (reclaimed) from one VM and allocated to another VM in need.

In this article I am not going into the details of memory reclamation techniques, and I will assume you are familiar with terms such as ballooning.

There are three types of swapping that can take place on an ESXi host, and here’s a short description of each of them. Types 2 and 3 are usually confused, so here is how they work and how they differ.

1. VMX swapping

Let’s start with VMX swapping as it is the simplest one to describe. The ESX host allocates memory not only for the VM’s operating system but also for other components such as the virtual machine monitor and virtual devices. In an overcommitted cluster this memory can be swapped out in order to free more physical memory for VMs that need it. According to VMware this feature can reduce the overhead memory usage from about 50MB per VM to about 10MB without noticeably impacting the VM’s performance.

It is enabled by default and VMX swap files are kept in the VM folder (the location can be changed with the sched.swap.vmxSwapDir parameter). VMX swapping can also be disabled on a VM with sched.swap.vmxSwapEnabled = FALSE.
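
If you prefer to script these advanced settings instead of editing every VM by hand, a pyVmomi sketch along the following lines should do it – it simply pushes the two sched.swap.* keys mentioned above into the VM’s extraConfig (the VM name and connection details are placeholders):

    # set_vmx_swap_options.py - sketch for the sched.swap.* keys mentioned above
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab only
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "test-vm")      # placeholder VM name

    spec = vim.vm.ConfigSpec(extraConfig=[
        # disable VMX swapping for this VM...
        vim.option.OptionValue(key="sched.swap.vmxSwapEnabled", value="FALSE"),
        # ...or redirect the VMX swap file instead:
        # vim.option.OptionValue(key="sched.swap.vmxSwapDir", value="/vmfs/volumes/fast-ds"),
    ])
    vm.ReconfigVM_Task(spec)                        # takes effect at the next power-on
    Disconnect(si)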

2. Virtual machine swapping

This is what the operating system installed in a virtual machine will do when it realizes that memory is low – it will swap. Please note that this has nothing to do with the host’s physical or virtual memory. For example, a machine configured with 4GB of RAM can have that RAM backed by the host’s physical RAM or by swap files (see point 3) – the VM does not care and will treat the whole 4GB as if it were physical memory, simply because it has no idea that this RAM is virtualized.

When the VM wants to use more than 4GB of memory, it will use the swapping technique of its operating system: Windows will swap to the pagefile, Linux will use its swap partition, and so on. The swapped data therefore goes directly to the VM’s virtual disk, i.e. to the vmdk files on the storage (when you think about it, this is logical, as the pagefile or swap partition is part of the VM’s virtual drives).

This is exactly what happens when the balloon driver is in use. When it inflates, it makes the guest operating system allocate VM physical memory to it, and if the guest has no free RAM at that moment it will start swapping (the balloon driver makes sure it receives real, unswappable RAM). And this is fine, as the guest operating system knows best which data can be swapped from RAM to disk.

3. Host swapping

When you realize that your host is swapping a lot, it means that the cluster is highly overcommitted and all the other memory reclamation techniques (ballooning, page sharing and memory compression) were not enough. The host is in the Hard state (1%-2% memory free) or in the Low state (1% or less). This condition will severely impact performance and should be avoided at all costs. Another problem with host swapping is that you have no control over what is swapped – unlike VM swapping, the host does not know which data in memory is important to the VMs.

So where does the memory get swapped this time? When you browse a datastore you will notice a .vswp file.

.vswp file in a VM directory

This is true only for a powered-on VM – at power-on the host creates the file and removes it when the machine is powered off. Look at its size. The size is equal to the VM’s configured memory minus its reservation. This is because the host must assure that there is memory available for the VM when it is switched on (regardless of whether it is real, physical RAM or virtual memory swapped to disk). The reservation assures that the given amount of physical RAM is available, but what about the rest? The host cannot just inform the machine that there is simply no RAM for it, see ya later. Hence the .vswp file.
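
The “configured memory minus reservation” rule is easy to check in code. Here is a small pyVmomi sketch (connection details are placeholders) that prints the expected .vswp size for every powered-on VM, which you can compare with what you see in the datastore browser:

    # expected_vswp_size.py - sketch: configured memory minus reservation, in MB
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab only
    si = SmartConnect(host="esxi.lab.local", user="root",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.runtime.powerState != vim.VirtualMachine.PowerState.poweredOn:
            continue                                # no .vswp for powered-off VMs
        configured = vm.config.hardware.memoryMB
        reservation = vm.config.memoryAllocation.reservation or 0      # in MB
        print("%-25s expected .vswp: %d MB" % (vm.name, configured - reservation))

    Disconnect(si)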

Look at this example. I have a powered-off VM configured with 4GB of RAM, but the datastore has only 1.8GB free. What happens when we try to start the VM?

No space for the swap file = no fun

So what can we do now? Well, we can free up some space on the datastore of course, but if that is not possible, here is what can be done instead. By default the swap file is kept in the VM directory. This can be changed – if the host is standalone, go to Configuration > Software > Virtual Machine Swapfile Location. If it is part of a cluster, have a look at the cluster settings first to change the policy to “Store the swapfile in the datastore specified by the host” and then go to the host configuration shown below:

Swap file location in cluster settings

Host settings - swapfile location

I will choose a datastore with enough space and try to start the VM and, sure enough, it starts fine.
The location of swap files can also be modified at the VM level. Just open the VM settings, go to the Options tab and choose the option you like:

 

Virtual machine settings - swapfile location
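
The per-VM option in the screenshot corresponds to the swapPlacement property of the VM configuration. A hedged pyVmomi sketch (the VM name is a placeholder; as far as I know the accepted values are inherit, vmDirectory and hostLocal):

    # set_vm_swapfile_placement.py - sketch of the per-VM swap file location option
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab only
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "test-vm")      # placeholder VM name

    # "hostLocal" = store the swap file in the datastore specified by the host,
    # "vmDirectory" = keep it with the VM, "inherit" = follow the cluster setting
    vm.ReconfigVM_Task(vim.vm.ConfigSpec(swapPlacement="hostLocal"))
    Disconnect(si)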

Another reason why you may want to change the swap file location is performance – you can put swap files on a different storage tier: a slower/cheaper one if you are sure there will be almost no swapping (to save the better-performing storage), or, if the cluster is overcommitted and there will be swapping, a faster tier to improve performance.

Definitive guide to iSCSI with FreeNAS on ESX(i) – Part 4 – Configuring iSCSI on ESXi 5.1 (standard vswitch)

Wednesday, October 31st, 2012

In this part of the guide we will have a look at the iSCSI configuration under ESXi, in my example ESXi 5.1. The initial configuration of the host is very similar to what we saw in the previous part. There is one vSwitch and the host is equipped with 3 network interfaces.

Initial network configuration

Network interfaces (uplinks)

Just like before, we will configure the networking part first. Go to your host, click on the Configuration tab and go to Networking. Click on Add Networking…

Selecting connection type for the new portgroup

Select VMkernel as your connection type. You may notice that where ESX gave us 3 connection types to choose from (Service Console, VMkernel, virtual machine networking), in ESXi the management networking has been merged into the VMkernel stack, which is now responsible for management, iSCSI connectivity, vMotion and FT logging. Select one of your NICs, specifying that a new vSwitch should be created. Click Next.

Available uplinks

Here you can see what I was talking about. Select what this VMkernel portgroup will be used for. We will use it for iSCSI traffic only (remember: traffic separation) so we don’t have to select anything. Give your portgroup a meaningful label and click Next.

Configuration for the new VMkernel port

Insert a correct network configuration and click Next and then Finish.

IP network settings

The networking configuration is ready.

vSwitch for iSCSI configured
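
For completeness, the same vSwitch and VMkernel port can be created programmatically. The pyVmomi sketch below mirrors the steps above; the uplink name, portgroup label and IP settings are examples matching my lab, so adjust them to your environment.

    # create_iscsi_vmkernel.py - rough pyVmomi sketch of the networking steps above
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab only
    si = SmartConnect(host="esxi51.lab.local", user="root",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True).view[0]
    netsys = host.configManager.networkSystem

    # 1. a new vSwitch with one uplink dedicated to iSCSI
    vss_spec = vim.host.VirtualSwitch.Specification(
        numPorts=64,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1"]))
    netsys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)

    # 2. a portgroup for the VMkernel port
    pg_spec = vim.host.PortGroup.Specification(
        name="iSCSI-1", vlanId=0, vswitchName="vSwitch1",
        policy=vim.host.NetworkPolicy())
    netsys.AddPortGroup(portgrp=pg_spec)

    # 3. the VMkernel NIC itself, with a static IP in the storage subnet
    vnic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress="10.0.0.51",
                             subnetMask="255.255.255.0"))
    netsys.AddVirtualNic(portgroup="iSCSI-1", nic=vnic_spec)

    Disconnect(si)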

Click on Storage Adapters. If the iSCSI Software Adapter is not installed, click on Add… and install it.

Software iSCSI Adapter

Right click on the iSCSI Software Adapter and select Properties.

Software iSCSI Adapter properties

As you can see, it is already enabled, so there is no need to do that. Click on the Dynamic Discovery tab and click on Add. Enter the IP and port of your FreeNAS portal (check Part 2 for FreeNAS installation and configuration guidance).

Dynamic Discovery

Click OK and Close. When asked to rescan the HBA, select Yes. If everything went fine, you should see a device – the LUN presented by your FreeNAS server.

Presented LUN
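
If you prefer scripting over clicking, the Dynamic Discovery entry and the rescan can also be done through the host’s storage system. A hedged pyVmomi sketch (the FreeNAS address and port are placeholders, and the software adapter is looked up rather than hard-coded):

    # add_send_target_and_rescan.py - sketch of the Dynamic Discovery step above
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab only
    si = SmartConnect(host="esxi51.lab.local", user="root",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True).view[0]
    storsys = host.configManager.storageSystem

    # find the software iSCSI adapter (typically vmhba33 or similar)
    hba = next(a for a in storsys.storageDeviceInfo.hostBusAdapter
               if isinstance(a, vim.host.InternetScsiHba) and a.isSoftwareBased)

    # equivalent of Dynamic Discovery > Add: point the initiator at the FreeNAS portal
    target = vim.host.InternetScsiHba.SendTarget(address="10.0.0.10", port=3260)
    storsys.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[target])

    # rescan the adapter so the LUN shows up, then rescan for VMFS volumes too
    storsys.RescanHba(hba.device)
    storsys.RescanVmfs()
    Disconnect(si)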

If you looked carefully during iSCSI Software Adapter configuration you might have noticed that there is an additional tab called Network Configuration.

Network configuration and port binding? Hm...

However, we haven’t even touched it and the iSCSI storage seems to be working fine, so what’s the big deal? Well, imagine that you want (and usually you do) to use more than one NIC for iSCSI storage, for multipathing and failover; in that case you will need to bind vmknics (virtual adapters) to vmnics (physical adapters) in a 1:1 manner. Go back to Networking and click on vSwitch1 Properties. Click on the Network Adapters tab and then on Add…

Adding some redundancy

Select the unused vmnic, click Next twice and then Finish. Close the vSwitch properties. Now our network configuration looks like this:

Second uplink added

Now we will create another VMkernel portgroup on vSwitch1. The procedure is very similar to the first one, except this time we add a new portgroup to the existing vSwitch. Click on vSwitch1 Properties and, on the Ports tab, click on Add… Here’s the result:

2 uplinks, two vmkernel portgroups

Let’s go back to Storage Adapters and to Software iSCSI Adapter properties. Click on Network Configuration tab and click on Add…

VMkernel ports are not compliant

Oops, there is no VMkernel adapter available except the Management Network, and if you select anything else you see the following message:
The selected physical network adapter is not associated with VMkernel with compliant teaming and failover policy. VMkernel network adapter must have exactly one active uplink and no standby uplinks to be eligible for binding to the iSCSI HBA.

Why is that? That’s why. Basically, like I mentioned before, you need to bind each VMkernel port to a physical uplink (vmnic) 1:1, and if you go back to Configuration > Networking, click on vSwitch1 Properties and then edit one of the two VMkernel ports, you will see (on the NIC Teaming tab) that each of them has two NICs selected as active.

Two active uplinks for iSCSI vmkernel port? A no-no.

This is not a supported configuration. What we need to do is select Override switch failover order and mark one of the uplinks as unused. Then we need to do the same on the second VMkernel port, with the other uplink of course.

That's better...
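
This teaming override can also be applied programmatically. The sketch below (pyVmomi; the portgroup and vmnic names come from my example setup) rewrites each iSCSI portgroup so that it has exactly one active uplink and no standby uplinks, which is what the binding check expects – the second uplink is simply left out, i.e. unused.

    # iscsi_teaming_override.py - sketch: one active uplink per iSCSI portgroup
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab only
    si = SmartConnect(host="esxi51.lab.local", user="root",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True).view[0]
    netsys = host.configManager.networkSystem

    # portgroup -> the single vmnic that should stay active on it
    mapping = {"iSCSI-1": "vmnic1", "iSCSI-2": "vmnic2"}

    for pg_name, active_nic in mapping.items():
        policy = vim.host.NetworkPolicy(
            nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
                nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                    activeNic=[active_nic],         # exactly one active uplink
                    standbyNic=[])))                # no standby uplinks allowed
        spec = vim.host.PortGroup.Specification(
            name=pg_name, vlanId=0, vswitchName="vSwitch1", policy=policy)
        netsys.UpdatePortGroup(pgName=pg_name, portgrp=spec)

    Disconnect(si)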

Then go back to iSCSI Software Adapter’s properties, to Network configuration tab. Click on Add… and you will see both iSCSI VMkernel ports.

vmkernel ports are now compliant

Add them both; you will see they are compliant. If they are not, these are the possible reasons:

  • The VMkernel network adapter is not connected to an active physical network adapter or it is connected to
    more than one physical network adapter.
  • The VMkernel network adapter is connected to standby physical network adapters.
  • The active physical network adapter got changed.

If you’d like to know how to configure it from CLI, here’s your documentation.
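
From the CLI the binding is done with esxcli iscsi networkportal add; through the vSphere API I believe the same operation is exposed by the host’s iscsiManager, so a (heavily hedged) pyVmomi sketch would look roughly like this – vmk1/vmk2 are the example VMkernel interfaces created above:

    # bind_vmknics_to_iscsi.py - hedged sketch of port binding via the API
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab only
    si = SmartConnect(host="esxi51.lab.local", user="root",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True).view[0]

    storsys = host.configManager.storageSystem
    iscsi_mgr = host.configManager.iscsiManager

    # the software iSCSI HBA we configured earlier
    hba = next(a for a in storsys.storageDeviceInfo.hostBusAdapter
               if isinstance(a, vim.host.InternetScsiHba) and a.isSoftwareBased)

    # bind both iSCSI VMkernel ports to the software adapter (1:1 with their uplinks)
    for vmk in ("vmk1", "vmk2"):
        iscsi_mgr.BindVnic(iScsiHbaName=hba.device, vnicDevice=vmk)

    Disconnect(si)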

Now if you right-click on the LUN presented by your FreeNAS and select Manage Paths… you will see that you can choose different path management policies for iSCSI storage.

Changing PSP for iSCSI storage is as simple as that
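
Changing the policy for many LUNs at once is also scriptable. The sketch below switches every LUN reached over iSCSI to round robin; VMW_PSP_RR is the standard NMP plugin name, but the exact method and LUN selection here are my best guess at the API, so verify against the documentation before relying on it.

    # set_psp_round_robin.py - hedged sketch: switch iSCSI LUNs to VMW_PSP_RR
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab only
    si = SmartConnect(host="esxi51.lab.local", user="root",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True).view[0]
    storsys = host.configManager.storageSystem

    rr = vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR")

    for lun in storsys.storageDeviceInfo.multipathInfo.lun:
        # only touch LUNs that are reached over at least one iSCSI path
        if any(isinstance(p.transport, vim.host.InternetScsiTargetTransport)
               for p in lun.path):
            print("Setting round robin on", lun.id)
            storsys.SetMultipathLunPolicy(lunId=lun.id, policy=rr)

    Disconnect(si)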

VMware PVSCSI overview

Wednesday, October 24th, 2012

VMware Paravirtual SCSI (PVSCSI) has been part of vSphere since version 4.0. With versions 4.1 and 5.0 its limitations were largely removed, so this type of adapter can now be broadly used.

The VMware Paravirtual SCSI adapter goes hand in hand with the vmxnet3 paravirtualized NIC and offers better performance for I/O-intensive disks. Our old friends – the BusLogic and LSI Logic adapters – emulate the physical devices they are based on. That means that most modern operating systems ship with the correct drivers and will work with these SCSI controllers out of the box. PVSCSI, on the other hand, requires drivers and has some limitations (or used to have, as we will see), but works with guest systems in a much more efficient manner, providing better performance while using fewer CPU resources.

When introduced in vSphere 4.0, PVSCSI presented some problems. This is a short list of things you need to consider before you deploy a VM with PVSCSI on vSphere 4.0:

  • Not supported for VMs protected with VMware Fault Tolerance,
  • Worse results on disks with low I/O intensity (fewer than 2000 IOPS),
  • A severe problem on Windows Server 2008 R2 and Windows 7 virtual machines (see this excellent post from Michael Webster for more details and where to find fixes),
  • And last but not least – it was not possible to use the PVSCSI adapter for boot disks.

So, as one can see, the PVSCSI adapter was designed with the following in mind: “Use the LSI Logic adapter for your boot and OS disk and introduce PVSCSI for the read/write-intensive disks of solutions such as MS SQL Server or Exchange.” Fortunately, all the problems stated above have already been fixed, and in vSphere 5.0 we can enjoy VMware Paravirtual SCSI under all circumstances.
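
To illustrate the “PVSCSI for the data disks” approach, here is a hedged pyVmomi sketch that adds a second SCSI controller of the paravirtual type to an existing VM (the VM name and bus number are placeholders). The I/O-intensive data disks can then be attached to this controller while the boot disk stays on the LSI Logic one.

    # add_pvscsi_controller.py - sketch: a second, paravirtual SCSI controller for data disks
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab only
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "sql-server-vm")    # placeholder VM name

    controller = vim.vm.device.ParaVirtualSCSIController(
        busNumber=1,        # bus 0 keeps the LSI Logic controller with the boot disk
        sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing,
        key=-101)           # temporary negative key, resolved on reconfigure

    dev_spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=controller)

    vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[dev_spec]))
    Disconnect(si)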

For details check this excellent article by Scott Lowe on PVSCSI limitations and his update on the fixes introduced in vSphere 4.1.

More resources:

VMware KB: Configuring disks to use VMware Paravirtual SCSI (PVSCSI) adapters

Definitive guide to iSCSI with FreeNAS on ESX(i) – Part 3 – Configuring iSCSI on ESX 3.5 (standard vswitch)

Friday, October 19th, 2012

This is Part 3 of the definitive guide to iSCSI with FreeNAS on ESX(i). Make sure to check out the other parts as well.

1. Introduction
2. FreeNAS installation and configuration
3. Configuring iSCSI on ESX 3.5 (standard vswitch)
4. Configuring iSCSI on ESXi 5.1 (standard vswitch)
5. Configuring iSCSI on a distributed vswitch
6. Migrating iSCSI from a standard to a distributed vswitch

 

3. Configuring iSCSI on ESX 3.5 (standard vswitch)

Make sure your ESX server has at least two NICs – if you’re doing this in a nested, virtual environment and you have created a virtual ESX with only one NIC, shut it down, add a new network card and start the server again. We will use one NIC for management and the other for iSCSI. You will probably want to use more NICs anyway (for VM traffic, VMotion, redundancy, etc.), but for this demonstration we will use only two – vmnic0 (management) and vmnic1 (iSCSI).

After you have installed ESX 3.5, go to the Configuration tab and, under Hardware, click on Networking. All you will see is one virtual switch (vSwitch0). First of all we will create a new vSwitch for iSCSI traffic. In the upper-right corner click on ‘Add Networking…’.

Select VMkernel and click Next.

Select VMkernel

Select ‘Create a virtual switch’ (this option is available if you have at least one unassigned NIC) and select an uplink NIC to be used with your new vswitch. Click Next.

Adding an uplink to the vSwitch

Now you will create a new portgroup on your new vSwitch. Give it a meaningful name (label), like ‘iSCSI’ for example. Configure the VLAN if necessary; if not, leave this field empty.

Setting IP configuration for VMkernel port

Click Next and then Finish. A new virtual switch called vSwitch1 is created.

New virtual switch

Under Hardware click on Storage Adapters and select the iSCSI Software Adapter. Click on Properties.

Software iSCSI initiator

The software initiator is disabled by default so click on Configure and enable it.

Enabling software adapter

You will see the following message:

Service Console needs to be able to communicate with iSCSI storage too

Here’s the thing – target discovery is done by the Service Console. That means that both ports – the VMkernel port (the one we created) and the Service Console port (already created on vSwitch0) – must be able to communicate with the iSCSI target (on the FreeNAS server). If you look at the screenshot above you will see that both the Service Console and the VMkernel port for IP storage are in the same subnet (as is the FreeNAS server, of course), so they will both be able to communicate with the target. However, if your Service Console is in a different network (and it should be in a production environment, in order to separate the traffic), you will need to create a second Service Console – on vSwitch1 – and assign it an IP address allowing it to discover the FreeNAS iSCSI target. In our case it is not necessary.

Let’s continue with the software initiator’s configuration. Click on the Dynamic Discovery tab and click Add… Enter the IP address and port you configured on the FreeNAS target.

IP and port of FreeNAS server iSCSI service (remember the portal in Part 2?)

You will be asked to rescan the HBAs. Click on Yes.

Rescan the host's HBAs

If all goes well, you will see the target device:

LUN presented on FreeNAS to ESX host

Now you can go on and create new storage for the ESX server on the LUN (VMFS or RDM).
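
If you need to repeat this on several hosts, the enable-and-discover sequence can be scripted as well. A hedged pyVmomi sketch (the FreeNAS portal address is a placeholder, and older ESX releases may need slightly different connection handling):

    # enable_sw_iscsi_and_discover.py - hedged sketch of the steps above
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab only
    si = SmartConnect(host="esx35.lab.local", user="root",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True).view[0]
    storsys = host.configManager.storageSystem

    # equivalent of Configure > Enable on the software initiator
    storsys.UpdateSoftwareInternetScsiEnabled(True)

    # Dynamic Discovery entry pointing at the FreeNAS portal, then rescan
    hba = next(a for a in storsys.storageDeviceInfo.hostBusAdapter
               if isinstance(a, vim.host.InternetScsiHba))
    storsys.AddInternetScsiSendTargets(
        iScsiHbaDevice=hba.device,
        targets=[vim.host.InternetScsiHba.SendTarget(address="10.0.0.10", port=3260)])
    storsys.RescanHba(hba.device)
    storsys.RescanVmfs()
    Disconnect(si)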

FreeNAS and ESXi5 – iSCSI problems

Tuesday, August 28th, 2012

Having nested ESXi servers running on a physical host is a great way to test vSphere with limited resources. I remembered that FreeNAS was really good at providing an iSCSI service for hosts, so I installed 2 nested ESXi 5.0 hosts and one FreeNAS 8.2.0 appliance. I configured the iSCSI service on the filer, rescanned the HBA and… nothing. The target was found and connected correctly but no device was seen.

Hmm, I definitely remember that it used to work with 0.7.1. I downloaded the ISO, installed another appliance – same problem. I quickly deployed another host, 4.1 U2 this time. The LUN was seen fine. Interesting.

I googled a bit and, voila!, this is what I found: http://support.freenas.org/ticket/690. According to the ticket the problem should be solved in 8.0.1-RC2. Nice, but that is a much earlier version than the one I use. A participant in this thread reports positive results with FreeNAS 0.7.2 Sabanda (revision 7529) from the old branch – here it says “Update istgt to 20110902 which provides compatibility with ESXi 5.0”. Sadly this branch does not seem to be available for download at http://sourceforge.net/projects/freenas/files/. At the same time I noticed Beta1 of FreeNAS 8.3.0. I decided to give it a try.

The result was unfortunately the same as before. ESX 4.1 can see the LUN, 5.0 cannot. I opened a ticket with FreeNAS: http://support.freenas.org/ticket/1761

[edit 30/08/2012]

I managed to present LUNs to my ESXi 5.0 with OpenFiler 2.99. With FreeNAS still no go.