vSphere / Lab For Beginners: Part 4 – Virtual Distributed Switches & Migrating Networking (VSS to vDS)

Where are We?

By this point we should be in the position of having our lab cluster up and running, configured for storage and able to authenticate against a real domain, with all the control that gives us.  We haven't yet enabled any of the advanced features like vMotion, though, as that would have required network configuration that we would only be removing again at this stage.

What’s Next?

VMware provides two different ways to configure networking: Virtual Standard Switches (VSS) and Virtual Distributed Switches (vDS).  So, what's the difference?  Standard switches are simple, easily configurable switches that have to be configured individually on every ESXi host you have.  They also don't need vCenter to work.  However, for features like vMotion to work they must be configured identically across all hosts.  This is a management pain, and it doesn't scale!

A Virtual Distributed Switch is a centralised switch that hosts can be members of.  It is managed from vCenter and provides unified management of the estate's networking, plus advanced features not available in VSS (such as private VLAN tagging).
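
If you're curious what this difference looks like from a host's point of view, you can list both kinds of switch from an SSH session on an ESXi host.  These are standard esxcli commands run purely as a read-only check (output will obviously depend on your own setup):

  # Standard switches configured locally on this host
  esxcli network vswitch standard list

  # Distributed switches this host has been joined to (empty until we add the host to a vDS later)
  esxcli network vswitch dvs vmware list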

So, in this part of our tutorial we are going to do a few things.

  1. Create a new Virtual Distributed Switch
  2. Migrate the initial VSS configuration and virtual machine networking over to the vDS
  3. Create a vDS VMkernel Port Group and enable vMotion (because we haven't set this up in VSS yet).

A small note: in a real environment you would be running with network uplink redundancy and would be able to do this in two stages.  In this example we only have 3 NICs, so we will have to migrate components one at a time using the ‘spare’ physical NIC.  This means that there is a lot of repetition in this part of the blog.  It'll help you understand the process!

Why Do I Want to Do This?

Simply because in the real world you're unlikely to encounter many enterprises using VSS configurations.  vDS setups are more flexible and more widely in use.  From a lab perspective it also means you get to play with more advanced features once you're familiar with vSphere, so you may as well enable that functionality now.

Step 1: Create a new Virtual Distributed Switch

First we have to create the actual switch within vCenter.  So, log on to the vCenter Web Client as before with administrator rights and switch to the Networking tab on the left.  Select your Data Center and then click the Actions dropdown.  Expand Distributed Switch and select New Distributed Switch.

screen-shot-2016-10-21-at-20-54-55

This brings up a familiar looking wizard.  Give your switch a friendly name (it’s good practice to denote that it’s a distributed switch in the name). Click Next.

screen-shot-2016-10-21-at-20-55-48

You can now select the version (feature level) of your vDS.  In this example we're going for the newest to enable all features.  In the real world you may want to select an older version if you are integrating with an older vSphere suite.  Select the newest version. Click Next.

screen-shot-2016-10-21-at-20-56-03

We now get to choose the number of uplinks we want to assign to the switch.  Uplinks map to physical network adapters.  The default is 4 and we are going to go with this (even though the lab in this example only has 3 physical NICs).  You can have more uplinks than physical NICs without a problem (they just won't do anything).

We also get the option to create a default Port Group (a port group is analogous to a set of network ports you'd plug wires into, grouped together for a similar task).  This first Port Group is the one you'd probably use for connecting Virtual Machine vNICs to (to enable communication).  Give it a friendly name and click Next.

screen-shot-2016-10-21-at-20-56-31

You now get a summary page detailing what has happened and, interestingly, what your next actions should be.  Click Finish.

screen-shot-2016-10-21-at-20-57-25

So, we’ve now created a basic Distributed Switch, created a set of uplinks for it (as yet NOT assigned to a Physical NIC) and created a default Purt Group which we shall use for VM connectivity.

Step 2: Add Hosts to the vDS

Now the vDS is created, we have to assign our ESXi hosts to the switch and create the additional port groups we are going to need (for Storage and vMotion in our case).  To do this, navigate to the Networking tab in the Web Client, select the distributed switch we created above, select the Manage tab (Configure in version 6.5), select Settings and then Topology.  You'll now need to click on the screen-shot-2016-10-22-at-16-18-09 icon.

screen-shot-2016-10-22-at-16-17-17

This will bring up the  Add and Manage Hosts configuration Wizard.  This is a Wizard we will keep returning to whenever we make a change to the vDS.

Firstly we will need to add our hosts to the vDS.  Select Add Hosts and click  Next.

screen-shot-2016-10-21-at-21-18-07

Here you’ll need to select all the hosts you have in your lab and click OK.

screen-shot-2016-10-21-at-21-18-31

You’re shown a confirmation screen. Click Next.  Continue to the end of the Wizard and  Finish (without altering any configuration). Remember, we’re just connecting the hosts at this point, taking it step by step.

screen-shot-2016-10-21-at-21-18-41
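
If you want to double-check that the join worked, an SSH session onto either host should now show the new switch (the name and uplink count will match whatever you chose in Step 1):

  # Lists any distributed switches this host is a member of,
  # including the vDS name, configured uplinks and MTU
  esxcli network vswitch dvs vmware list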

Step 3: Create the Other Port Groups

Now we're going to create the remaining Port Groups we will need for the lab.  These include:

  • A portgroup for iSCSI storage that we will migrate our ‘storage’ VSS to.
  • A portgroup for vMotion to enable this feature.

Each portgroup will have a dedicated uplink associated with it (and each uplink will have a dedicated physical NIC).

So, from the vSphere Web Client, navigate to the Networking tab, select the  Distributed Switch  we have created and right click on it.  Select Distributed Port Group and then  New Distributed Port Group.

screen-shot-2016-10-21-at-21-23-24

You’ll now be presented with a simple wizard:

Give the Port Group a friendly, descriptive name. Click  Next.

screen-shot-2016-10-22-at-16-41-06

Keep the default options for the switch (we can cover the configuration in detail another time).  Click Next.

screen-shot-2016-10-22-at-16-41-20

The Summary screen is shown.  Click  Finish.

screen-shot-2016-10-22-at-16-41-32

Run through this wizard for each Port Group you need.  I have ended up with three Port Groups called:

  • StorageDPG (for iSCSI traffic and access to storage).
  • VMNetworkDPG (for Management and VM communication) [the renamed default Port Group from Step 1].
  • vMotionDPG (for vMotion traffic).

At the end of the process you should have something like this.

screen-shot-2016-10-21-at-22-11-41

Back in the Topology view for the vDS you should now see something like this.  It shows the Distributed Switch with its uplinks (notice there are still no physical NICs associated with them).  You can also see the port groups on the left (currently with no details or items assigned to them).

Now, we have to add Physical NICs to the Uplinks.

Step 4: Add Physical NICS to Uplinks

Click on the screen-shot-2016-10-22-at-16-18-09 icon from the topology view to initiate the Add and Manage Hosts wizard again.

screen-shot-2016-10-21-at-22-13-27

Click the Green plus symbol labelled  Attached hosts.

screen-shot-2016-10-21-at-22-13-48

Select all the hosts in the lab cluster (all of the ones shown below in this example).

screen-shot-2016-10-21-at-22-18-05

The confirmation will be shown as below. Click  Next.

screen-shot-2016-10-21-at-22-14-09

On the next wizard screen select the  Manage Host Networking  option and click  Next.

screen-shot-2016-10-21-at-22-17-51

Now ensure only  Manage Physical Adapters is selected and click  Next.   In this step we are only going to add the spare adapter.

screen-shot-2016-10-21-at-22-18-23

Select the currently unused (or extra) physical NIC and click the Assign Uplink button. Assign it to Uplink 1.  Note: in the example below, if we tried to assign one of the NICs from vSwitch0 or Storage we would end up disconnecting the physical link from those switches BEFORE migrating the networking over to the vDS.  This would mean that either Management+VM networking or (worse) storage to the running VMs (including this vCSA) would die.  This causes a horrible mess and is why you should really run dual NICs per switch in reality (so we could connect half to the new vDS, leave half where they were, and do a seamless switchover).

As mentioned above, this lab example doesn't have that luxury, so we have to perform a rolling migration with our currently unassigned NIC.  If you're messing around in a lab that has multiple physical NICs but no spare (and vMotion has already been configured), then use the NIC assigned to the vMotion interface as the ‘spare’, as it isn't a critical component in keeping a VM alive.

Check everything is assigned to the correct (free) NIC.  Click Next.

screen-shot-2016-10-21-at-22-18-55

The next screen shows an impact summary and should alert you if you're about to do anything stupid.  We're not.  Click Next.

screen-shot-2016-10-21-at-22-19-10

Click Finish and the process should complete momentarily.  Back at the Topology screen you should notice that the Uplinks section of the diagram now shows adapters assigned to Uplink 1.  In this example, two: one for each host.

screen-shot-2016-10-21-at-22-19-24
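
As a quick sanity check from the host side, you can confirm which physical NIC has just been handed to the vDS (vmnic numbering will differ per host; this is just a read-only check):

  # Shows each physical NIC (vmnic) with its link state and driver
  esxcli network nic list

  # The vDS entry should now list the newly assigned vmnic under its uplinks
  esxcli network vswitch dvs vmware list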

Step 5: Migrate Networking

Again, click the screen-shot-2016-10-22-at-16-18-09 Add and Manage Hosts button from the topology view and ensure, this time, that just Manage host networking is selected.

screen-shot-2016-10-21-at-22-19-34

Select both hosts again.

screen-shot-2016-10-21-at-22-19-49

Now ensure the Manage VMkernel adapters and Migrate virtual machine networking options are selected.

screen-shot-2016-10-21-at-22-20-04

Now select the VMK0 adapter currently assigned to vSwitch0 (Management Network) and select the Assign Port Group button.

screen-shot-2016-10-21-at-22-20-26

Assign this to the newly created VMNetworkDPG vDS port group and ensure the same is done for the second (and any other additional) hosts in your environment.  Click Next. Leave the storage adapter alone for the moment.

screen-shot-2016-10-21-at-22-20-36

Check that nothing will be broken in the Analyze Impact window.

screen-shot-2016-10-21-at-22-20-49

Now, on the Migrate VM networking window, expand the list and ensure all the VMs currently in the lab are migrated over to the new Port Group.  In the example below you can see the three VMs already in my lab (including this vCSA) ready to migrate from the VM Network VSS port group to the VMNetworkDPG vDS port group.

screen-shot-2016-10-21-at-22-21-25

Review the settings to ensure everything is as it should be.  Finish the wizard.

screen-shot-2016-10-21-at-22-21-34

You should now see, in the topology view, the three VMs attached to Uplink 1 and, crucially, you should still have network connectivity to the vCSA web interface.

screen-shot-2016-10-21-at-22-22-01
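
You can also confirm from an SSH session that the management VMkernel port now sits on the vDS and still has its address (purely a verification step, nothing is changed here):

  # Each VMkernel interface with its port group / vDS association
  esxcli network ip interface list

  # Current IPv4 addresses, to confirm management connectivity is intact
  esxcli network ip interface ipv4 get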

Next, restart the Add and Manage Hosts wizard to move the next set of items over.

screen-shot-2016-10-21-at-22-24-24

Select all the hosts in the lab.

screen-shot-2016-10-21-at-22-24-37

Select  Manage physical adapters  and  Manage VMkernel adapters.

screen-shot-2016-10-21-at-22-24-55

Now assign the NIC in use by vSwitch0 (which we migrated the networking OFF of in the last step of the wizard) to Uplink 2.  Do this for all hosts in the environment.

screen-shot-2016-10-21-at-22-25-22

Now click the Assign Port Group button and ensure that the VMkernel port currently used for storage in the VSS is migrated to the StorageDPG.  Notice how we are rotating the next VSS switch over to the vDS to free up the final adapter for the next step.

screen-shot-2016-10-21-at-22-25-43

A final check on the Analyse Impact screen shows a warning.  In this instance it is simply telling us that we are switching physical NICs in this operation.  We know this to be the case, as we're having to shuffle non-resilient connections.

screen-shot-2016-10-21-at-22-26-04

Check the summary screen and click Finish.

screen-shot-2016-10-21-at-22-26-19

Once complete we should, again, still have access to our VMs (the storage is still connected), and the StorageDPG port group and vmk ports are connected to Uplink 2.

screen-shot-2016-10-21-at-22-27-21
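
If you want extra reassurance that the storage path survived the move, vmkping lets you test connectivity out of a specific VMkernel port.  The interface name (vmk1) and target address (192.168.2.200) below are this lab's values, so substitute your own:

  # Ping the iSCSI target over the storage VMkernel port
  vmkping -I vmk1 192.168.2.200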

One final time: restart the wizard and select Manage host networking.

screen-shot-2016-10-21-at-22-27-40

Add all the hosts from the environment.

 

screen-shot-2016-10-21-at-22-27-51

 

Ensure the Manage physical adapters  and  Manage VMkernel adapters options are selected.

screen-shot-2016-10-21-at-22-28-07

Assign the final unused NIC from the VSS to Uplink 3.  This should be the NIC assigned to the Storage switch in the old networking.

screen-shot-2016-10-21-at-22-28-27

On the Manage VMkernel adapters screen click the New adapter button.

Screen Shot 2016-10-24 at 21.09.30.png

On the Select target device screen, click Browse to select an existing network.

screen-shot-2016-10-21-at-22-28-58

Now select the vMotionDPG port group that was created right back at the start of this stage of the guide.  Note that in the screenshot below the WRONG network is highlighted…

screen-shot-2016-10-21-at-22-29-08

For the Port properties, tick vMotion traffic under Enable services.

screen-shot-2016-10-21-at-22-29-26

Assign the new VMkernel port for vMotion an IP address and appropriate subnet.

screen-shot-2016-10-21-at-22-31-08

Now assign this new VMK port to the vMotionDPG distributed port group on all hosts. NOTE: In the picture below I got it wrong for host esxi01.  Host esxi02 is CORRECT.

screen-shot-2016-10-21-at-22-32-21

One final Analyse Impact screen is shown.  Move on to the Summary screen and complete the wizard.

screen-shot-2016-10-21-at-22-32-32

Like before, you should be able to see the two new VMkernel ports assigned to the vMotionDPG port group.

screen-shot-2016-10-21-at-22-33-52
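
A quick way to prove the vMotion network actually works is to ping between the new VMkernel ports.  The interface name (vmk2) and the peer address below are examples only; use whatever your second host's vMotion IP ended up as:

  # From esxi01, ping esxi02's new vMotion address over the vMotion VMkernel port
  vmkping -I vmk2 192.168.3.12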

That’s it.  We have migrated all the networking from VSS to vDS and created a final DPG and VMkernel port for vMotion capabilities.  We now have centrally managed networking from within vCenter, with the ability to migrate VMs across hosts.  We also have the storage and regular network traffic controlled from the same place.

Step 6: Cleanup

Now everything is controlled by the vDS, we just need to clean up the older VSS configuration.  To do this from the Web Client, select the first host in the Hosts and Clusters view, then the Manage tab, Networking and then Virtual switches.  This will list the vDS and the two (obsolete) standard switches (vSwitch0 and Storage).  Select the first VSS and click the red ‘x’ to delete it.  Now do the same for the final VSS. NOTE: In version 6.5 select the switch, click Actions and then select Remove.

Remember that you will have to do this for every host you have, as VSS configuration is per host and not centrally controlled.

screen-shot-2016-10-24-at-21-37-33
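
The same cleanup can be done from each host's shell if you prefer.  The switch names below are the ones used in this lab; check with esxcli network vswitch standard list first if yours differ:

  # Remove the now-empty standard switches (run on each host)
  esxcli network vswitch standard remove --vswitch-name=vSwitch0
  esxcli network vswitch standard remove --vswitch-name=Storage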

What’s Next?

Next we will roll through some of the features in vCenter such as HA, DRS and vMotion.  This will be in Part 5 of this beginners series.


vSphere / Lab For Beginners: Part 1 – Installing ESXI To USB

This post is all about how to install the first part of any vSphere / home lab setup: the basic ESXi hypervisor.  It's intended for beginners who haven't used vSphere before, or those who know a little but are installing it on their own for the first time.

Assumptions

This guide assumes that you're installing on to a physical piece of hardware that will boot from a USB key (although the process is pretty much the same for SD cards, local storage, etc.).  It also assumes that you're installing from an ISO image with no optical drive.

All software used in this guide can be obtained with a free trial licence, so you can get going quickly and have a limited play around.

For advice on choosing hardware suitable for a lab or test environment, check out the Open Home Lab project (community run) or ‘Part 0’ of this series for information on the kit this lab was built on.

Note: This guide was based on vSphere 6.0; however, the process is the same for vSphere 6.5-based installs of ESXi.  Differences are called out and noted.

What You’ll Need

For this part of the guide you’ll need:

  • A blank USB key (8GB or more, 16GB recommended [for logs])
  • An installed copy of VMware Workstation (Windows) or Fusion (Mac)
  • An ESXi install .iso image
  • Some shared storage (iSCSI via a NAS shown in this guide).
    • Any NFS share would also be viable, but is not shown here.
  • 2 x ‘computers’ to act as hosts for ESXi and run our workloads
    • These will run ESXi from USB.

Trial versions of the software can be downloaded from VMware’s website.

What Are We Going To Do?

The aim here is to install ESXi 6.0 (the beating heart of vSphere) on to a USB stick and then get our hosts to boot  ESXi from that.  We’re then going to set the hosts up so they can communicate and have access to some shared storage.  Then they will be ready to run VMs.

Step 1: Create a Blank VM In Workstation

Open up VMware Workstation and create a new VM from File > New Virtual Machine.  This brings up a handy wizard.

SnapCrab_NoName_2016-6-9_16-38-43_No-00

Workstation provides an option to attach an ISO to a new VM and boot straight to it when it's created.  This is perfect for installing ESXi as, with a few clicks, we'll have the installer booted and ready to go without any faff!

Select Installer disc image (.iso): as the option and then browse to the ESXi .iso file you have downloaded from VMware. It’ll probably have an unfriendly name. e.g. VMware-VMvisor-Installer-6.0.0.update02-3620759.x86_64.iso. Click Next.

SnapCrab_NoName_2016-6-9_16-39-2_No-00

Now name the VM something friendly (it's not being kept, so don't worry too much) and, if you can, make the location somewhere fast and local to speed up install times.  Click Next.

SnapCrab_NoName_2016-6-9_16-39-30_No-00

This VM isn’t going to do much apart from boot an ISO image (we’re installing to USB remember) so make the disk size 2GB and click Next.

SnapCrab_NoName_2016-6-9_16-39-43_No-00

Make sure you tick the Power on this VM after creation option and click Finish.

SnapCrab_NoName_2016-6-9_16-39-54_No-00

The VM used to boot the ESXi installer will now be created, turn itself on, and then load the installer program.  It probably takes less than 30 seconds!

Step 2: Install ESXi to USB Stick

When you open a console to your VM you should see something like this.  Notice there is a countdown timer so, if you've been a bit slow, the default option will already have been selected for you…

SnapCrab_NoName_2016-5-24_11-37-58_No-00

Once you’ve selected you want to install ESXi you’ll be presented with a chance to back out.  Don’t! Go forth and install (Enter).

SnapCrab_NoName_2016-5-24_11-40-19_No-00

Before you go any further you'll want to ensure that you've plugged in the USB key you wish to install ESXi on to and connected it to the VM via VM > Removable Devices > [Your USB Device] > Connect.

USB Connect

Accept the EULA, you know there’s nothing important contained in it right? (Press F11).

SnapCrab_NoName_2016-5-24_11-40-36_No-00

Now you have to start paying attention.  Use the cursor keys to select the USB key from the list.  If it's not shown, check that you've connected the key to the VM via the VM menu.  Press Enter.

SnapCrab_NoName_2016-5-24_11-40-59_No-00

The installer will now scan the disk to see if it’s blank or already has something on it.  Wait a few moments.

SnapCrab_NoName_2016-5-24_11-41-18_No-00

In my case I already had ESXi installed on this USB stick, so I got the warning shown below (sorry about that).  I chose Install as I wanted to show this as if it were a blank drive going forward.

SnapCrab_NoName_2016-5-24_11-41-55_No-00

You should now select a keyboard layout.  Ensure you get this right as it’s a total pain if you set a password down the line and then change the keyboard layout. Press Enter.

SnapCrab_NoName_2016-5-24_11-42-13_No-00

Now enter a password for the root account.  This should be secure as it gives total access to ESXi. Press Enter.

SnapCrab_NoName_2016-5-24_11-42-36_No-00

ESXi will now do some checks to work out what it needs to configure during the install. Just wait a moment.

SnapCrab_NoName_2016-5-24_11-42-51_No-00

Now is your final chance to back out.  Check it's going to install to the correct device (you memorised the HBA number from earlier, right?).  Press F11 to begin the install.

SnapCrab_NoName_2016-5-24_11-44-19_No-00

The install will begin and progress will be shown.  It only takes about 10 minutes to a normal (slow) USB stick.

SnapCrab_NoName_2016-5-24_11-44-29_No-00

At the end of the process you’ll be greeted with a success screen as shown below. Remove your USB key and turn off the VM in workstation.  You don’t need to press Enter to reboot as we’re done with the VM now.  We just care about the contents of the USB stick.

SnapCrab_NoName_2016-5-24_11-51-9_No-00

Step 3: Booting ESXi and Initial Configuration

NOTE: Going forward I'm using a host with no monitor attached.  Instead I have an Intel vPro CPU installed, allowing me to use Intel AMT KVM to view the server's boot process.  If you're installing to a regular computer, ensure you can see the server's output and have a working keyboard to hand before continuing.

NOTE: Most systems are not set to boot from USB by default.  You should change the boot priority in your system's BIOS / UEFI at this point.

Insert the USB key in to your server / computer / host / PC and power it on.  ESXi will load (this takes about 10 minutes) and will then present you with a screen as shown below.

SnapCrab_NoName_2016-5-24_14-53-25_No-00

The first thing you must do after installing ESXi is get the basic management network configured.  This is the initial IP and NIC assignment that ESXi uses to send all traffic between hosts, VMs and your system.  By default it's set to DHCP, and you don't want your IP address changing all the time!

Press F2 to bring up the logon prompt.  Enter root as the username and the password you set in Step 2. Press Enter.  If your logon was successful, nothing will appear to happen (yes, really).  Press F2 again.

SnapCrab_NoName_2016-5-24_14-53-41_No-00

The System Customization screen will now be displayed.  This is the area that, in the event of a massive SNAFU in configuration, you will always come back to in order to fix things (generally networking).

At this stage we are interested in Configure Management Network.  So, select this option and press Enter.

SnapCrab_NoName_2016-5-24_14-57-9_No-00

This shows the Configure Management Network screen.  We'll need to configure all of these options but, to start, select Network Adapters. Press Enter.

SnapCrab_NoName_2016-6-13_13-30-45_No-00

This is where you can select the NIC that you want to use for the basic management network.  You can select more than one for failover if required, but advanced configuration is far easier from within Virtual Center (covered later).

In this example there are three NICs in my host (onboard LAN and an Intel dual-port PT adapter [the ones labelled “J6B2…”]).  Select the most appropriate one for your system. Press Enter to return to the Configure Management Network screen.

SnapCrab_NoName_2016-5-24_15-1-40_No-00

Now select IPv4 Configuration. Press Enter.  This brings up the network settings screen for the NIC assigned to the management network (previous step).  As noted when we booted the host, this will be set to DHCP by default.  It is recommended to change this to static and then configure the network settings based on your environment.

The example below shows my setup.  Press Enter. You will return to the Configure Management Network screen.

SnapCrab_NoName_2016-5-24_15-1-55_No-00

Select IPv6 Configuration and disable IPv6 (restart required).  I'm doing this to simplify things later on and to remove long-format IPv6 addresses from troubleshooting steps.  If you want to use IPv6 there is no reason why you can't leave it on. Press Enter. You will return to the Configure Management Network screen.

SnapCrab_NoName_2016-5-24_15-2-5_No-00

Select DNS Configuration and enter information relevant for your network.  In the example below the primary and secondary DNS entries are my Active Directory servers.  It's crucial that the primary DNS server actually EXISTS at this point.  So, in your environment this may be your internet router.  You should also set the Hostname at this point.  Press Enter.  You will return to the Configure Management Network screen.

SnapCrab_NoName_2016-5-24_15-2-24_No-00

Select Custom DNS Suffixes and enter the suffix you are creating for your lab.  This doesn’t have to exist at the moment but if you’re planning on building a domain on the lab enter here what you’re calling the domain.  In my case lab.local. Press Enter.  You will return to the Configure Management Network screen.

SnapCrab_NoName_2016-5-24_15-2-57_No-00
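
As an aside, once you have shell or SSH access to the host, the same management network settings can be applied (or corrected later) from the command line with esxcli.  The addresses, hostname and domain below are this lab's example values, so adjust them to suit:

  # Static IPv4 on the management VMkernel port (vmk0) - example addresses
  esxcli network ip interface ipv4 set --interface-name=vmk0 --type=static --ipv4=192.168.1.21 --netmask=255.255.255.0
  esxcli network ip route ipv4 add --gateway=192.168.1.1 --network=default

  # DNS servers, search suffix and hostname
  esxcli network ip dns server add --server=192.168.1.10
  esxcli network ip dns search add --domain=lab.local
  esxcli system hostname set --host=esxi01 --domain=lab.local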

Now we have finished configuring the Management Network. Press Escape and the following confirmation should appear.  Press Y to reboot the ESXi host.

NOTE: If you chose to leave IPv6 ENABLED you will simply be asked to restart the Management Network.  Again, press Y and wait a second.

SnapCrab_NoName_2016-5-24_15-3-13_No-00

The host will now restart (a process that takes about 10 minutes).

SnapCrab_NoName_2016-5-24_15-3-23_No-00

Once the host has rebooted you will have to log in again to be presented with the main options screen.  We're going to skip over some of the options here as they relate to tests or service restarts.  Select Troubleshooting Options. Press Enter.

SnapCrab_NoName_2016-5-24_14-57-9_No-00

This displays the Troubleshooting Mode Options screen.  Select Enable SSH and press Enter. This allows us to connect to the ESXi host using PuTTY or similar (iTerm on Mac), which is handy in a lab as it enables cut and paste of commands.

NOTE: This is only being enabled here because we're building a lab and it's useful.  This should obviously not be enabled in a production environment unless there is actually a problem.

Press Esc to return to the main menu and log out.

SnapCrab_NoName_2016-5-24_16-3-6_No-00

That's the basic configuration of ESXi done.  It will now be reachable via https://<IP Address>.  From here you can download the vSphere Client for Windows to gain access and install Virtual Center.  However, this is useless if you're on a Mac, and the Windows client is going to be replaced soon.  There is a better way…

Step 4: Install the ESX UI Utility

That better way is the ESXi Embedded Host Client.  This is an HTML5-based management component that installs directly on to the host and allows management and configuration of the ESXi host from any modern web browser.

NOTE: As of vSphere 6.0 U2 this is included as part of the main install and the following step is not technically required.  However, I would always install the latest version, and I even came across a bundled version that would not allow me to configure iSCSI until I had upgraded.

Download it here: VMware Embedded Host Client

Essentially, this is a plug-in for ESXi.  These are known as “VIBs” (vSphere Installation Bundles).  Once you've downloaded the file and extracted it, we need to install it.  The easiest way to do this is to copy the VIB over to the ESXi host using WinSCP.  Place it in a simple-to-get-to location (such as /tmp/).

Now, as we enabled SSH in the previous step, we can open a PuTTY session to the ESXi host and install the UI utility.

I ran the command esxcli software vib install -v /tmp/esxui-signed-3843236.vib

SnapCrab_NoName_2016-5-24_16-15-24_No-00

Output from that command should look something like the screenshot below.

SnapCrab_NoName_2016-5-24_16-16-3_No-00
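
If you want to confirm the VIB actually landed, you can list the installed packages from the same SSH session (the exact package name may vary slightly by build):

  # Confirm the host client VIB is installed
  esxcli software vib list | grep -i esx-ui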

Once finished you can enter the URL https://<IP of ESXi>/ui/ and you'll get the lovely new HTML5 interface.  VMware have intimated that this is the way everything is going in the next version of vSphere but, for the moment, this remains an unsupported method of connection.  IMO it works and is FAR better than the old method.

SnapCrab_NoName_2016-5-24_16-20-24_No-00

You’ll want to log in at this point.  Use the username root and the password you set up earlier. Click Login.

SnapCrab_NoName_2016-5-24_16-16-37_No-00

Welcome to ESXi!

SnapCrab_NoName_2016-6-13_16-29-24_No-00.png

Step 5: Configure Time Synchronisation

ESXi and the rest of the vSphere infrastructure rely heavily on time synchronisation for proper and reliable operation.  Because of this it should be configured now, before anything else.  This needs to be done per installed host.

Click Manage in the left pane under the host, select the System tab and then the Time & date option.  Select Edit settings.

Screen Shot 2016-07-26 at 22.00.20

Select the Use Network Time Protocol option.  Change the NTP service startup policy and NTP servers as shown below and click Save.

Screen Shot 2016-07-26 at 22.00.55

Back in the main area select the Actions button and expand the NTP service option.  Select Start.

Screen Shot 2016-07-26 at 22.01.19

NTP will now start and time will be configured on the ESXi server.  Repeat for all installed ESXi servers you have.

Step 6: Configure Storage

Once we’re at this point we have a functioning ESXi system with networking but we are still missing one crucial piece of the puzzle. Storage!

Note: vSphere shines and is most useful with shared storage (it's a requirement for anything vaguely real world), but there is nothing to stop you playing around with one host and local storage.  You just won't be able to do much.

For the lab to be useful we'll have to configure some shared storage.  You can use a SAN, NFS shares or iSCSI without issue.  For this lab I'm going to be demonstrating iSCSI running from a Synology NAS (DS1513+).  However, if you don't have iSCSI capability, use NFS from whatever share you feel like (I'll write an NFS section later).  I'm not going to go over how to set up your storage as that is generally device specific.  We are going to start from within the ESXi Host UI and configure from there.

Example Setup Details

Going forward, my example setup consists of 4 iSCSI targets, each representing a datastore.  These are called Datastore1, Datastore2, Datastore3 and ISO Store.  They reside on a Synology NAS presenting iSCSI over 192.168.2.200 (note the different subnet to the management network).  This is to ensure segregation of storage traffic from data traffic.  It also allows me to monitor my system more easily.

iSCSI Configuration Process

Log on to the ESXi UI via the URL https://<IP Address of ESXi>/ui/ and log in as the root user with the password you set earlier.  In the left pane, select Storage.

SnapCrab_NoName_2016-5-24_16-51-17_No-00

In the right hand pane select the Adapters tab and notice that there is only one adapter listed.  This is the USB adapter (if you have a host with a physical HBA this will probably be listed here at this stage, I don’t).  Click the Configure iSCSI item.

SnapCrab_NoName_2016-5-24_16-51-23_No-00

This brings up the screen to configure a new Adapter for iSCSI.  For now Enable iSCSI and click the Save Configuration button.

SnapCrab_NoName_2016-5-24_16-51-29_No-00

Notice that this now adds another adapter to the list.

SnapCrab_NoName_2016-6-13_17-30-43_No-00

iSCSI requires a network connection over a VMkernel port to function correctly and, as mentioned at the start, I am running iSCSI on a separate subnet.  This requires a little network configuration before we start.  From the left pane, select Networking.

SnapCrab_NoName_2016-6-14_9-55-6_No-00

Select the Virtual Switches tab and then click the Add Virtual Standard Switch item.

SnapCrab_NoName_2016-6-14_9-55-15_No-00

Call it something relevant (such as Storage) and select an uplink (NIC).  I've chosen the 2nd NIC in my system.  Leave everything else as standard. Click Add.

SnapCrab_NoName_2016-5-24_16-52-48_No-00

Switch to the Port Groups tab select the new vSwitch and click the Add Port Group item.

SnapCrab_NoName_2016-6-14_9-57-19_No-00

Call this Storage and assign it to the Storage virtual switch.  Click Add.  This, essentially, binds the uplink, port group and switch together to create a dedicated pathway for storage traffic.

SnapCrab_NoName_2016-5-24_16-54-32_No-00

Finally, we need to create a VMkernel NIC.  VMware uses these to pass certain types of traffic within the system.  There is already one created for management by default (called vmk0), but we need to create one for storage traffic.  Select the VMkernel NICs tab and select the Add VMkernel NIC item.

SnapCrab_NoName_2016-5-24_16-57-13_No-00

Select the Storage Port Group and change the IPv4 Settings to Static.  You’ll need to click the little arrow to actually show the fields to enter the address. Now add in the networking information for the port.  You will need an IP address on the same subnet as the iSCSI storage as well as the subnet information and gateway.  You do not need to specify the type of traffic for the kernel port when configuring for storage.  Click Create.

SnapCrab_NoName_2016-6-14_10-15-22_No-00

Now head back to the storage information by selecting Storage from the left pane.  Select the Adapters tab and select the Configure iSCSI item to bring back up the configuration screen.

SnapCrab_NoName_2016-6-14_10-23-19_No-00

Click the Add Port Binding item in the Network Port Bindings  section and select the storage (vmk1) interface we just created.

SnapCrab_NoName_2016-6-14_10-32-50_No-00

Now select Add Dynamic Target from the Dynamic targets section.  Add in the IP address of the iSCSI server and click Save Configuration.  In my case this is the IP address of the network port on my NAS which handles iSCSI traffic.  The port defaults to 3260 unless you've configured your iSCSI server with something different.

SnapCrab_NoName_2016-6-14_10-34-18_No-00

Click Save Configuration. VMware should rescan all your adapters and, if configured correctly, you should see your iSCSI LUNs listed in the Devices tab.

SnapCrab_NoName_2016-5-24_17-1-38_No-00

Finally, select the datastores tab and click refresh.  This should refresh the screen and show that there are now datastores available to the ESXi Host.

SnapCrab_NoName_2016-6-14_10-42-0_No-00
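
If you'd rather do (or script) the whole iSCSI setup from the command line, a roughly equivalent sequence of esxcli commands is sketched below.  The NIC (vmnic1), VMkernel IP, software iSCSI adapter name (vmhba33) and target address are example values from this lab, so check and substitute your own:

  # Storage vSwitch, port group and VMkernel port
  esxcli network vswitch standard add --vswitch-name=Storage
  esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=Storage
  esxcli network vswitch standard portgroup add --portgroup-name=Storage --vswitch-name=Storage
  esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Storage
  esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=192.168.2.21 --netmask=255.255.255.0

  # Software iSCSI: enable it, bind the VMkernel port, add the dynamic target and rescan
  esxcli iscsi software set --enabled=true
  esxcli iscsi adapter list                     # note the software adapter name, e.g. vmhba33
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.2.200:3260
  esxcli storage core adapter rescan --all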

Wrap Up

That’s it.  You now have an ESXi host ready to be used for creating VMs and your lab.  At this point I would recommend repeating the steps above for all the other physical hosts you have.  Then you are in the position where you can install Virtual Center and really start to use the software's power.  I've got a section on how to install the VCSA in Part 2 of this beginners guide.