Networking


Installing ESXi 5.x on DELL 12th generation server – “No network adapters were detected”

Thursday, December 6th, 2012

So you have to install the hypervisor on a DELL R720 server. Let’s have a look at VMware’s HCL:

Checked! You boot the server from a DVD and you are warmly welcomed with the following message:

Really? The network card is a Broadcom 5720 QP 1Gb Network Daughter Card. Here it is:

Let’s check the DELL website. A quick search and here’s the solution. What DELL says is: “The reason for this is because the VMware native ESXi 5.0 image does not contain the required version(s) of network drivers for Network Daughter Cards (NDC) or Network adapters used in Dell 12G Servers.”

Fortunately, a modified ESXi image is available on support.dell.com, so there is no need to play with the Image Builder.

iSCSI target on Windows Server 2012? Why not!

Monday, November 26th, 2012

I have recently described how to install, configure and use an iSCSI target on a FreeNAS virtual server. This time let’s have a look at how to present an iSCSI target on Windows Server 2012.

The iSCSI target is a role service of File and Storage Services. Go to Add Roles and Features, expand File and Storage Services > File and iSCSI Services and select iSCSI Target Server.

Click Next and finish the role installation. No configuration is needed during the installation.

When it is installed, open Server Manager’s dashboard, click on File and Storage Services on the left and select iSCSI. On the right, click on the Tasks menu and select New iSCSI Virtual Disk…

Create a new LUN... ahem... virtual disk

Select a location for your new virtual disk (the disk that will be presented under the iSCSI target – you can call it a LUN if you want, Microsoft prefers to call it a virtual disk). Give it a meaningful name and choose a size.

Now we’ll create a new target:

Creating a new target

Give it a name and add a server:

Click on Add...

Select the method used to identify the initiator. You can use IQN with a wildcard (*) as the value to allow all initiators.

On the next screen you can enable CHAP if you want.

That’s it. You can now use the LUN presented by Windows Server 2012 as a datastore on your ESX server :-)
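By the way, the whole GUI flow above can also be scripted with the PowerShell cmdlets that ship with the iSCSI Target Server role. A rough sketch – the path, target name and size below are just examples, not anything prescribed:

```shell
# Create the virtual disk (the "LUN") - path and size are examples
New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\lun1.vhd" -SizeBytes 40GB

# Create a target and allow any initiator via an IQN wildcard
New-IscsiServerTarget -TargetName "esx-target" -InitiatorIds "IQN:*"

# Map the virtual disk to the target
Add-IscsiVirtualDiskTargetMapping -TargetName "esx-target" -Path "C:\iSCSIVirtualDisks\lun1.vhd"
```

Handy if you need to present a number of LUNs in one go.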

Backup your distributed virtual switch

Tuesday, November 6th, 2012

I was going to write a tutorial on how to back up a distributed switch configuration in vSphere 5.1, but I found a terrific article written by Chris Wahl, so I decided to simply put a link to his site, as he has done a really good job explaining step by step why and how to back up the dvs configuration. The post is here.

Windows Server 2012 Hyper-V networking: what’s new?

Friday, November 2nd, 2012

With the new server OS from Microsoft comes a new Hyper-V. What new features does Hyper-V 2012 bring in the networking field, and how does it differ from what is in VMware’s basket?

First of all, it brings more security and isolation for multiple guests by providing PVLANs. Nothing really new here for virtualization administrators, except one thing – on vSphere you need a virtual distributed switch to enable PVLANs, and that comes with the Enterprise Plus license. Hyper-V offers it out of the box.

Then we have another cool feature that would cost us some additional money on vSphere – with Hyper-V 2012 it is possible to configure Access Control Lists on virtual ports. They are quite simple, matching on source and destination addresses, but on vSphere you would need the Cisco Nexus 1000V for ACLs. Another point for Microsoft.
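For a taste of how simple they are, port ACLs are managed with the Add-VMNetworkAdapterAcl cmdlet – the VM name and subnet below are made-up examples:

```shell
# Block all traffic between the VM and the 10.0.0.0/8 range, in both directions
Add-VMNetworkAdapterAcl -VMName "WebVM" -RemoteIPAddress "10.0.0.0/8" -Direction Both -Action Deny

# Review the ACLs configured on the VM's adapters
Get-VMNetworkAdapterAcl -VMName "WebVM"
```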

The next feature introduced by Hyper-V 2012 is VLAN trunking for virtual machines. This has been in vSphere for a long time, except that it applies to whole portgroups. Other new features include ARP poisoning/spoofing protection, DHCP guard and such “revolutionary” things as live storage migration or an API for the virtual switch (called, for that reason, the Extensible Switch) that should have been introduced a long time ago.
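Per-VM trunking is configured from PowerShell as well; a sketch with an example VM name and example VLAN IDs:

```shell
# Turn the VM's virtual NIC into a trunk carrying VLANs 10-20;
# untagged frames are treated as belonging to the native VLAN 10
Set-VMNetworkAdapterVlan -VMName "RouterVM" -Trunk -AllowedVlanIdList "10-20" -NativeVlanId 10
```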

In an attempt to virtualize the network (à la VXLAN), Hyper-V uses NVGRE as an encapsulation protocol for network virtualization and also introduces IP rewrite. In addition, Hyper-V can benefit from the native NIC teaming that has at last appeared in Windows Server.
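The new native teaming is driven by the NetLbfo cmdlets; a minimal sketch, assuming two physical adapters named NIC1 and NIC2:

```shell
# Team two physical NICs in switch-independent mode (no switch-side configuration needed)
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
```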

Here’s a nice Hyper-V feature comparison between Windows Server 2008 R2 and Windows Server 2012, including the networking part.

Definitive guide to iSCSI with FreeNAS on ESX(i) – Part 4 – Configuring iSCSI on ESXi 5.1 (standard vswitch)

Wednesday, October 31st, 2012

In this part of the guide we will have a look at the iSCSI configuration under ESXi, in my example ESXi 5.1. The initial configuration of the host is very similar to what we saw in the previous part. There is one vSwitch and the host is equipped with 3 network interfaces.

Initial network configuration

Network interfaces (uplinks)

Just like before, we will first configure the networking part. Go to your host’s Configuration tab and then to Networking. Click on Add Networking…

Selecting connection type for the new portgroup

Select VMkernel as your connection type. You may notice that where in ESX we had 3 options to choose from (management, VMkernel, virtual machine networking), here in ESXi management networking has been merged into the VMkernel stack, which is now responsible for management, iSCSI connectivity, VMotion and FT logging. Select one of your NICs, specifying that a new vSwitch should be created. Click Next.

Available uplinks

Here you can see what I was talking about. Select what this VMkernel portgroup will be used for. We will use it for iSCSI traffic only (remember: traffic separation), so we don’t have to select anything. Give your portgroup a meaningful label and click Next.

Configuration for the new VMkernel port

Insert the correct network configuration, click Next and then Finish.

IP network settings

The networking configuration is ready.

vSwitch for iSCSI configured

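If you prefer the command line, the same networking setup can be sketched with esxcli – the switch, portgroup and interface names and the addresses below are examples matching my lab, so adjust them to yours:

```shell
# Create the vSwitch and attach an uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1

# Add a VMkernel portgroup and a vmknic with a static IP
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-1 --vswitch-name=vSwitch1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=10.0.0.11 --netmask=255.255.255.0
```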
Click on Storage Adapters. If the iSCSI Software Adapter is not installed, click on Add… and install it.

Software iSCSI Adapter

Right click on the iSCSI Software Adapter and select Properties.

Software iSCSI Adapter properties

As you can see, it is already enabled, so there is no need to do it. Click on the Dynamic Discovery tab and click on Add. Insert the IP and port of your FreeNAS portal (check Part 2 for FreeNAS installation and configuration guidance).

Dynamic Discovery

Click OK and Close. When asked to rescan the HBA, select Yes. If everything went fine, you should see a device – a LUN presented by your FreeNAS server.
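The discovery step looks more or less like this from the CLI – vmhba33 is the usual name of the software iSCSI adapter and the portal IP is an example, so check both on your host first:

```shell
# Make sure the software iSCSI adapter is enabled
esxcli iscsi software set --enabled=true

# Add the FreeNAS portal as a Send Targets (dynamic discovery) address - IP is an example
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.100:3260

# Rescan the adapter to pick up the LUN
esxcli storage core adapter rescan --adapter=vmhba33
```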

Presented LUN

If you looked carefully during iSCSI Software Adapter configuration you might have noticed that there is an additional tab called Network Configuration.

Network configuration and port binding? Hm...

However, we haven’t even touched it, and the iSCSI storage seems to be working fine, so what’s the big deal? Well, imagine that you want (and usually you do) to use more than one NIC for iSCSI storage, for multipathing and failover. In that case you will need to bind vmknics (virtual adapters) to vmnics (physical adapters) in a 1:1 manner. Go back to Networking and click on vSwitch1 Properties. Click on the Network Adapters tab and then on Add…

Adding some redundancy

Select the unused vmnic, click Next twice and then Finish. Close the vSwitch properties. Now our network configuration looks like this:

Second uplink added

Now we will create another VMkernel portgroup on vSwitch1. The procedure is very similar to the first one, except that this time we add a new portgroup to the existing vSwitch. Click on vSwitch1 Properties and, on the Ports tab, click on Add… Here’s the result:

Two uplinks, two VMkernel portgroups

Let’s go back to Storage Adapters and to Software iSCSI Adapter properties. Click on Network Configuration tab and click on Add…

VMkernel ports are not compliant

Oops, there is no VMkernel adapter available except the Management Network, and if you select anything else you see the following message:
The selected physical network adapter is not associated with VMkernel with compliant teaming and failover policy.  VMkernel network adapter must have exactly one active uplink and no standby uplinks to be eligible for binding to the iSCSI HBA.

Why is that? That’s why. Basically, as I mentioned before, you need to bind VMkernel ports to physical uplinks (vmnics) 1:1, and if you go back to Configuration > Networking, click on vSwitch1 Properties and edit one of the two VMkernel ports, you will see (on the NIC Teaming tab) that each of them has two NICs selected as active.

Two active uplinks for iSCSI vmkernel port? A no-no.

This is not a supported configuration. What we need to do is select Override switch failover order and mark one of the uplinks as unused. Then, on the second VMkernel port, we need to do the same – with the other uplink, of course.

That's better...

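The same override can be done with esxcli; a sketch, assuming the portgroups are named iSCSI-1/iSCSI-2 and the uplinks are vmnic1/vmnic2:

```shell
# Pin each iSCSI portgroup to exactly one active uplink
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-2 --active-uplinks=vmnic2
```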
Then go back to iSCSI Software Adapter’s properties, to Network configuration tab. Click on Add… and you will see both iSCSI VMkernel ports.

VMkernel ports are now compliant

Add them both – you will see they are compliant. If they are not, these are the possible reasons:

  • The VMkernel network adapter is not connected to an active physical network adapter, or it is connected to more than one physical network adapter.
  • The VMkernel network adapter is connected to standby physical network adapters.
  • The active physical network adapter got changed.

If you’d like to know how to configure it from the CLI, here’s your documentation.
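In short, the binding itself boils down to a couple of esxcli calls – the adapter and vmknic names below are examples, verify yours first:

```shell
# Bind both iSCSI vmknics to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Verify the bindings and their compliance status
esxcli iscsi networkportal list --adapter=vmhba33
```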

Now, if you right-click on the LUN presented by your FreeNAS and select Manage Paths…, you will see that you can choose different path management policies for the iSCSI storage.

Changing the PSP for iSCSI storage is as simple as that
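The same change can be made from the CLI, using the device ID reported by the list command (the naa ID below is only a placeholder):

```shell
# List devices and their current path selection policy
esxcli storage nmp device list

# Switch the LUN to Round Robin (replace the placeholder device ID with yours)
esxcli storage nmp device set --device=naa.600144f0... --psp=VMW_PSP_RR
```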

Definitive guide to iSCSI with FreeNAS on ESX(i) – Part 3 – Configuring iSCSI on ESX 3.5 (standard vswitch)

Friday, October 19th, 2012

This is Part 3 of the definitive guide to iSCSI with FreeNAS on ESX(i). Make sure to check out the other parts as well.

1. Introduction
2. FreeNAS installation and configuration
3. Configuring iSCSI on ESX 3.5 (standard vswitch)
4. Configuring iSCSI on ESXi 5.1 (standard vswitch)
5. Configuring iSCSI on a distributed vswitch
6. Migrating iSCSI from a standard to a distributed vswitch

 

3. Configuring iSCSI on ESX 3.5 (standard vswitch)

Make sure your ESX server has at least two NICs – if you’re doing this in a nested, virtual environment and you have created a virtual ESX with only one NIC, shut it down, add a new network card and start the server again. We will use one NIC for management and the second for iSCSI. You will probably want to use more NICs anyway (for VM traffic, VMotion, redundancy, etc.), but for this demonstration we will use only two: vmnic0 (management) and vmnic1 (iSCSI).

After you have installed ESX 3.5, go to the Configuration tab and, under Hardware, click on Networking. All you will see is one virtual switch (vSwitch0). First of all we will create a new vSwitch for iSCSI traffic. In the upper-right corner click on ‘Add Networking…’.

Select VMkernel and click Next.

Select VMkernel

Select ‘Create a virtual switch’ (this option is available if you have at least one unassigned NIC) and select an uplink NIC to be used with your new vSwitch. Click Next.

Adding an uplink to the vSwitch

Now you will create a new portgroup on your new vSwitch. Give it a meaningful name (label) – ‘iSCSI’, for example. Configure a VLAN if necessary; if not, leave this field empty.

Setting IP configuration for VMkernel port

Click Next and then Finish. A new virtual switch called vSwitch1 is created.

New virtual switch

Under Hardware, click on Storage Adapters and select the iSCSI Software Adapter. Click on Properties.

Software iSCSI initiator

The software initiator is disabled by default, so click on Configure and enable it.

Enabling software adapter

You will see the following message:

Service Console needs to be able to communicate with iSCSI storage too

Here’s the thing – target discovery is done by the Service Console. That means that both ports, the VMkernel port (the one we created) and the Service Console (already created on vSwitch0), must be able to communicate with the iSCSI target (on the FreeNAS server). If you look at the screenshot above, you will see that both the Service Console and the VMkernel port for IP storage are in the same subnet (as is the FreeNAS server, of course), so they will both be able to communicate with the target. However, if your Service Console is in a different network (and it should be, in a production environment, in order to introduce traffic separation), you will need to create a second Service Console – on vSwitch1 – and assign it an IP address allowing it to discover the FreeNAS iSCSI target. In our case it is not necessary.
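If you ever need that second Service Console, on ESX classic it can also be created from the console with the esxcfg commands; a sketch with example names and an example address:

```shell
# Add a Service Console portgroup to the iSCSI vSwitch
esxcfg-vswitch -A "iSCSI-SC" vSwitch1

# Create a vswif interface on it with an IP in the iSCSI subnet
esxcfg-vswif -a vswif1 -p "iSCSI-SC" -i 10.0.0.12 -n 255.255.255.0
```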

Let’s continue with the software initiator’s configuration. Click on the Dynamic Discovery tab and click Add… Enter the IP address and port you configured on the FreeNAS target.

IP and port of FreeNAS server iSCSI service (remember the portal in Part 2?)

You will be asked to rescan the HBAs. Click on Yes.

Rescan the host's HBAs

If all goes well, you will see the target device:

LUN presented on FreeNAS to ESX host

Now you can go on and create new storage for the ESX server on the LUN (VMFS or RDM).

Definitive guide to iSCSI with FreeNAS on ESX(i) – Part 1 – Introduction

Friday, October 12th, 2012

This is Part 1 of the definitive guide to iSCSI with FreeNAS on ESX(i). Make sure to check out the other parts as well.

1. Introduction
2. FreeNAS installation and configuration
3. Configuring iSCSI on ESX 3.5 (standard vswitch)
4. Configuring iSCSI on ESXi 5.1 (standard vswitch)
5. Configuring iSCSI on a distributed vswitch
6. Migrating iSCSI from a standard to a distributed vswitch

 

1. Introduction

Since many search queries on my blog lead to this article, where I struggled with FreeNAS configuration on a nested ESXi 5.0, I decided to write a guide on how to configure FreeNAS to work with ESX and ESXi, both in “regular” and “nested” environments, on standard and distributed switches. By “nested” I mean ESX servers installed as virtual machines on a physical ESX(i) host or on VMware Workstation.

iSCSI is configured on ESX and ESXi in slightly different ways, so they will be covered separately. I would also like to show you how to configure an iSCSI connection on a distributed virtual switch and how to migrate a working connection from a standard vswitch to a distributed one.

When writing this article I assume that you’ve got:

  • basic understanding of VI / vSphere networking concepts
  • working knowledge of the vSphere client

Have fun reading this guide, and please leave me a comment or contact me directly if something’s not clear or if you have additional questions/issues/doubts.

Distributed virtual switch and the .dvsData folder

Monday, October 8th, 2012

If you use a distributed virtual switch, you may notice that on some of your datastores there is a folder called .dvsData. Why is it created and what is it used for? Here you will find a description of this folder’s role.

As you can see below, a virtual machine called Workstation 1 is connected to the “Virtual machines internal” dvportgroup, which in turn is on the vDS called simply “ds”.

Workstation 1 connected to the distributed portgroup on the distributed virtual switch

Virtual switches view

The VM is stored on the VMFS I datastore:

Datastore view

where one can also see a folder called .dvsData. Let’s have a look inside:

A look inside .dvsData folder

In the .dvsData folder there is a sub-folder whose name matches the UUID of the distributed switch:

Checking vds's UUID with the esxcli command

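For reference, on ESXi 5.x the UUID can be read like this:

```shell
# List the distributed switches the host participates in, including the VDS ID (UUID)
esxcli network vswitch dvs vmware list
```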
In this subfolder you will find one or more files, and their number will match the number of ports the virtual machine(s) are connected to:

File listing

The file name matches the port number

That means that .dvsData will appear on the datastore where your VM’s configuration file (vmx) is stored. If you open the configuration file, you will find the same information:

VM's configuration file networking part

VM's configuration file networking part

This information is synchronized (from the vmx file to the .dvsData folder) by the host every 5 minutes. So now you know what this folder is and why the subfolder and file names are so strange. But what is the purpose of the .dvsData folder and its content?

It is used by HA. Imagine the situation where your host fails and HA starts a VM on another physical server. It needs to know which port it should connect the machine to. I did a very simple test – I removed the file called 264 from the datastore and stopped the ESXi server the VM Workstation 1 was running on. A few seconds later HA detected a possible failure of the host. However, this was the result on the VM:

Oops, the failover failed...

Lesson learned: don’t touch the .dvsData folder unless you really know what you are doing.

PS. As you can see, all operations in this note were done using the new Web Client. More on this client soon.

VMotion failing when VLANs are configured

Friday, September 7th, 2012

You set up a VMkernel port for vMotion on a standard vSwitch, or you connect a virtual adapter to a vMotion dvportgroup on a distributed vSwitch. If you lack physical uplinks, you may want to segregate the network traffic using VLANs. So here is what you do: you configure a VLAN ID on the VMkernel port (vSS) or on the dvportgroup (vDS), depending on which type of vSwitch you use.

An example of adding a VLAN tag on a virtual distributed portgroup

Then you try VMotion, but it fails at 9% with the following error:

A general system error occurred: The VMotion failed because the ESX hosts were not able to connect over the VMotion network. Please check your VMotion network settings and physical network configuration.

Well, something’s wrong with the network configuration, so you strip the VLAN ID and it works again.

The solution is very simple – make sure the vmknics (the virtual adapters you configured for vMotion) on your hosts are in a different subnet than the service console (or management vmknic). If they are not, the VMotion vmknic will try to communicate through the default gateway, and since the default gateway is not tagged with the VLAN ID you chose for VMotion, the whole operation fails.

Setting IP on VMotion vmknic - make sure VMotion vmknic and Service Console (or Management vmknic) are in different subnets

Once vmknics for VMotion are put in a different subnet, everything starts working fine.
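A quick way to double-check the vmknic addressing from the CLI (ESXi 5.x syntax; the interface name and peer IP below are examples):

```shell
# Show IPv4 settings of all VMkernel interfaces
esxcli network ip interface ipv4 get

# Test vMotion connectivity from a specific vmknic
vmkping -I vmk2 192.168.50.12
```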

Tip: if you do this on nested ESXi servers (virtual ones), remember to edit the properties of the Virtual Machine Port Group your nested ESX(i) hosts are connected to on your physical ESX(i) and set the VLAN ID to 4095 (enabling trunking – all VLAN IDs will be allowed). Otherwise your tagged traffic will be stopped there.