
VMware Cloud Foundation

Bring Your Own Network with VCF 3.0 Pre-Requisites

Switches, we don’t need no stinkin’ switches… Well, yes, we still need switches.  That said, we no longer require specific switches in VMware Cloud Foundation 3.0.  This article is not a replacement for the User Guide or the Release Notes.


In VMware Cloud Foundation 3.0 (VCF 3.0), we have moved to a Bring Your Own Network (BYON) model, allowing customers to choose their favorite network vendor and switch model and provide the underlay network that VCF will use.  This gives customers the flexibility to use the fabric manager of their choice to control their underlay network.


Now that BYON is in place and you have selected your switching vendor of choice, or incorporated VCF into your existing network topology, there are a few prerequisites required from your networking team.  Those prerequisites are as follows:

  1. Each port to be used by a physical server will need to be a trunk port; however, that trunk port can be limited to the specific VLANs required by VCF and your vSphere environment
  2. Creation of a management network to support the SDDC Manager, PSCs, vCenters, NSX Managers, NSX Controllers, and the vRealize Log Insight cluster, plus the additional vCenters, NSX Managers, and NSX Controllers for additional workload domains, as well as any additional services to be deployed.  This will be the default network used for all of these services when Cloud Builder brings up the environment. (more on that in a later post)
  3. While not explicitly required, an ESXi management network is also nice to have, assuming it has a route to the management network above.  This way your physical hosts are logically separated on another VLAN in your environment.  Your hosts’ management (in-band) IPs can also reside on the management network above (optional)
  4. VCF 3.0 introduces a new construct called a Network Pool.  A Network Pool simply consists of two networks, ensuring isolation of your vMotion and vSAN traffic in your environment.  VCF 3.0 also supports multi-cluster workload domains, so plan accordingly for whether those should have shared or segregated vMotion and vSAN traffic.
    • vMotion network per Workload Domain: this ensures vMotion isolation between workload domains or clusters.  Please keep in mind this is not a requirement but rather a best practice in VCF.
    • vSAN network per Workload Domain: this ensures data isolation between workload domains or clusters.  Please keep in mind this is not a requirement but rather a best practice in VCF.
  5. VTEP network with DHCP provided.  Please keep in mind that in VCF 3.0 we configure a Load Based Teaming NIC policy, which in turn requires two (2) IP addresses per physical host in the environment.
  6. While not explicitly required, a network allowing out-of-band access to the BMC ports of the servers is also nice to have, assuming it can route to the management network above.  This allows you to access the Baseboard Management Controller (iLO, iDRAC, IMC, etc.) remotely as needed. (optional)
  7. When planning all of your IP requirements, allocate a subnet with ample capacity and then use inclusion ranges to limit the use of that subnet.  Keep in mind that when using network pools, an overlapping subnet cannot be reutilized.
  8. Ensure that your network team is not utilizing any Ethernet trunking technology (LAG/vPC/LACP) for connections to the ESXi servers.
  9. Example of this schema in action:
    As you can see in this example, I would be sharing the management network for the ESXi servers as well as the PSCs, vCenters, NSX Managers, NSX Controllers, and vRealize Log Insight.  My additional Workload Domain, though, would receive its own dedicated vMotion and vSAN network pool.  Based upon the VXLAN/VTEP DHCP network being a /24, I could in theory support up to 126 ESXi hosts in this environment after removing .1 for the gateway and .255 for broadcast.  Also note that the management network can have an MTU of 1500 or higher.  Jumbo frames (1600+) are required for all other networks, and an MTU of 9000 is recommended, as illustrated in this graphic.  All of the above networks also require gateways, and these need to be routable for the bring-up portion of the environment.  Lastly, VLAN tagging through 802.1Q should be utilized, jumbo frames enabled, and an IGMP snooping querier configured.
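The capacity math and the no-overlapping-subnets rule above can be sanity-checked with a short script.  Here is a minimal sketch using Python’s standard ipaddress module, assuming a /24 VTEP subnet and two IPs per host per the Load Based Teaming note; the subnets shown are placeholders, not values from any real VCF deployment:

```python
import ipaddress

def vtep_host_capacity(cidr: str, ips_per_host: int = 2, reserved: int = 1) -> int:
    """Max ESXi hosts a VTEP DHCP subnet can support.

    ips_per_host=2 reflects the Load Based Teaming NIC policy in VCF 3.0;
    reserved=1 accounts for the .1 gateway.  Network and broadcast
    addresses are subtracted separately.
    """
    net = ipaddress.ip_network(cidr)
    usable = net.num_addresses - 2 - reserved  # drop network, broadcast, gateway
    return usable // ips_per_host

def pools_overlap(cidr_a: str, cidr_b: str) -> bool:
    """Network pools may not reuse overlapping subnets."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(vtep_host_capacity("172.16.0.0/24"))            # 126, matching the example
print(pools_overlap("10.0.4.0/24", "10.0.4.128/25"))  # True -> not allowed
```

The 126-host figure falls out directly: 256 addresses, minus network, broadcast, and gateway, halved for the two VTEP IPs each host consumes.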


Now that the networking has been plumbed to the environment and the host ports are set appropriately, the next topic is DNS.  VCF now supports custom naming of attributes; in fact, it manages ZERO of the DNS going forward.  To ensure your environment is ready for VCF, forward and reverse DNS entries need to be established for every item to be spun up in VCF.  One note here: the DNS server to be used needs to be an authoritative DNS server, therefore services like unbound will not allow this to function properly.  Below is a table illustrating what is required, in addition to the ESXi hosts having appropriate forward and reverse DNS:
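One way to pre-validate those entries before bring-up is to check that each hostname resolves to an address that resolves back to the same name.  A minimal sketch, assuming nothing about your naming scheme (the hostnames below are hypothetical, and the resolver arguments default to Python’s socket functions so fakes can be injected for testing):

```python
import socket

def dns_consistent(fqdn, forward=None, reverse=None):
    """Return (ok, detail) for a forward/reverse DNS round trip.

    forward: fqdn -> IP (defaults to socket.gethostbyname)
    reverse: IP -> fqdn (defaults to a socket.gethostbyaddr wrapper)
    """
    forward = forward or socket.gethostbyname
    reverse = reverse or (lambda ip: socket.gethostbyaddr(ip)[0])
    try:
        ip = forward(fqdn)
        name = reverse(ip)
    except OSError as exc:  # gaierror/herror are OSError subclasses
        return False, f"lookup failed: {exc}"
    ok = name.lower().rstrip(".") == fqdn.lower().rstrip(".")
    return ok, f"{fqdn} -> {ip} -> {name}"

# Hypothetical VCF inventory names -- replace with your own.
for host in ["sddc-manager.corp.local", "vcenter-mgmt.corp.local"]:
    print(dns_consistent(host))
```

Running this against your full inventory list before handing it to Cloud Builder catches missing PTR records early, which is exactly the class of failure the authoritative-DNS requirement tends to surface.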


In VCF 3.0 the support matrix for physical servers has been dramatically expanded.  In fact, we support almost every vSAN ReadyNode from this list.  A couple of things to keep in mind:

  1. Minimum cache disk size requirement is 200 GB per disk
  2. Minimum capacity disk size requirement is 1 TB per disk
  3. A minimum of two 10GbE NICs; however, VCF 3.0 supports 25GbE and 40GbE, or any other NIC that has been qualified for ESXi and is IOVP certified.  This allows for the use of 100GbE IOVP NICs as well.
  4. Configure BIOS and FW levels to VMware and manufacturer recommendations for vSAN (NICs are certified on the ESXi qualifying hardware and HBAs are the most important things to confirm and set before proceeding)
  5. Disable all ethernet ports except the two that will be used in VCF (currently VCF only supports two physical NICs, in addition to BMC, and they must be presented in vSphere as vmnic0 and vmnic1 respectively)
  6. Install a fresh copy of ESXi 6.5U2c/EP8 with a password defined.  Use the vendor build if possible; if not, patch the VIBs for NICs and disk controllers as stated above.
    * If you are using Dell servers you are in luck, as 6.5U2c/EP8 is directly installable with Dell’s branded ISO located here
    ** If you are using HP servers you are in luck, as 6.5U2c/EP8 is directly installable with HP’s branded ISO located here
    *** Here is a link to additional custom ISOs
  7. Enable SSH and NTP to start/stop with host
  8. Configure DNS server IP on host.
  9. What you Can (and Cannot) Change in a vSAN Ready Node
  10. In an all-flash configuration, your hosts’ capacity SSDs need to be marked as capacity.  Chris Mutchler (@chrismutchler) wrote a script to automate this; it would be run on each ESXi host participating in VCF:

    ** Please note the Size would need to be updated with the size of your capacity disk
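Since the script itself isn’t reproduced here, the gist can be sketched as follows: find the SSDs matching your capacity disk size and tag each one as capacityFlash via `esxcli vsan storage tag add`.  This Python sketch only builds the command lines; the device names and sizes are made-up stand-ins for what `esxcli storage core device list` would report on your hosts:

```python
def capacity_tag_commands(disks, capacity_gb):
    """Build esxcli commands to mark SSDs of a given size as capacity tier.

    disks: iterable of (device_name, size_gb) tuples.
    capacity_gb: the size of your capacity disks -- update to match,
    per the note above about the Size value.
    """
    return [
        f"esxcli vsan storage tag add -d {dev} -t capacityFlash"
        for dev, size in disks
        if size == capacity_gb
    ]

# Hypothetical inventory: one 200 GB cache SSD, two 1024 GB capacity SSDs.
disks = [("naa.cache01", 200), ("naa.cap01", 1024), ("naa.cap02", 1024)]
for cmd in capacity_tag_commands(disks, capacity_gb=1024):
    print(cmd)
```

Filtering on size keeps the smaller cache-tier SSDs untagged, which is the whole point of the exercise.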


VCF 3.0, as mentioned above, has the ability to instantiate multiple clusters in a single workload domain.  To make this easier in the planning phase, one should think about whether multi-cluster or another workload domain makes more sense.  To discuss this, let’s lay out the features and use cases for each to compare and contrast.

Multi Cluster:

  1. Hardware
    • GPUs for VDI
    • Storage dense cluster
    • High Memory cluster
    • Machine learning is a great example of a workload that may need GPUs for processing
  2. Licensing
    • Segregation of Web vs SQL components
    • Another licensing related component could be OS related, for example a Windows cluster and a Linux cluster (insert your distro of choice).
  3. Failover Zones
    • Ability to create disparate storage for a separate failover zone for an application.
  4. Patch Consistency
    • A multi-cluster environment allows for patch conformity amongst all participating hosts.
  5. Vendor Requirements
    • Pivotal actually requires multiple clusters

New Workload Domain:

  1. Patching
    • Ability to run patches in test/dev before running them in production
  2. Multi Tenancy
    • Role-Based Access Control to a separate vCenter
  3. Isolation
    • Full isolation of NSX networks and policies
    • Full isolation of vSAN data controlled by a distinct vCenter server and storage policy based management
  4. Licensing
    • Oracle… Would be a great use case to give them their own vCenter to appease their licensing requests.

In-Rack Imaging for Cloud Foundation

So you have your VMware Cloud Foundation environment humming along and you need to add some additional capacity; how do you add that?  Following the VCF Administrator’s Guide and the VIA Guide, it looks like you need to set this up on a laptop and plug into port 48 on the management switch; however, I believe I have an easier option that doesn’t require you to sit in the datacenter for hours on end.  Let’s look at how to set up In-Rack Imaging.


  1. This applies to VCF 2.2.x and 2.3.x
  2. Download the VCF Bundle ISO as well as the MD5SUM.txt from your entitled MyVMware account
  3. Download the VIA OVA from your entitled MyVMware account
  4. You have completed the Jump VM setup detailed here (coming soon)

Now let’s get started:

  1. Ensure the servers to be imaged have been cabled to the wiremap (document available under downloads for Cloud Foundation)
  2. If Imaging with VCF 2.2: Ensure the servers to be imaged have iDRAC/BMC static IP information set to
    IP Range:
    Subnet Mask:
    **Failure to set these IP addresses will result in a failure during the Add Host wizard later in this process.
  3. Ensure appropriate BIOS settings have been set and PXE boot is enabled on the 10GbE NICs
  4. Deploy the VIA OVA in your Management Workload Domain
  5. Ensure vsanDatastore is selected for storage location
  6. Select the vRack-DPortGroup-NonRoutable portgroup
  7. Select defaults on all other screens and click finish
  8. Before booting the VIA Appliance we will need to edit settings of the VM
  9. Map the CD-ROM drive to Datastore ISO
  10. If the system was imaged at the same version it is running now, you can point to the bundle-iso folder and select the latest version.  If it was imaged at an older version and you want to bring the hosts in at a newer version, upload the ISO and select it.
  11. Click OK and then Power On VIA
  12. If you are on version 2.2 or 2.2.1 and have All Flash R730 or R730xd follow this post before proceeding
  13. Now connect to your jump host that has access to the and subnets in your environment (post coming soon)
  14. Load the VIA interface
  15. You should see CD Mounted Successfully, if not click Refresh if VIA was booted before mounting CD-ROM/ISO
  16. Click Browse under ‘Bundle Hash’, locate your MD5SUM.txt for the respective imaging version, and then click Upload Bundle

  17. Now that the bundle has been uploaded click Activate Bundle
  18. Select the Imaging Tab at the top
  19. Fill in an appropriate name, in our case EUC Workload Domain
  20. Description is optional
  21. Change Deployment type to ‘Cloud Foundation Individual Deployment’
  22. Ensure Device Type is set to ESXI_SERVER
  23. Under Server ensure the ‘All Flash’ is Selected and the appropriate number of servers to image based upon the quantity you are adding
  24. Ensure the Vendor and Model are correct
  25. The default IPs should be acceptable as this is considered a temporary range; however, if you have two sets of servers you are imaging, make sure they don’t overlap before completing the ‘Add Host’ wizard towards the end of this post.
  26. Confirm all settings are correct before clicking Start Imaging
  27. Click Start Imaging
  28. One thing to note here is that if the servers had ESXi on them before, you will need to manually reboot them to get them to PXE boot.  Otherwise, if they are new servers, they should be looping through PXE boot
  29. As servers PXE boot, they will show up with a progress bar
  30. Looking at the console of a server denoted to be in progress, we can see that it is loading ESXi
  31. Clicking on one of the servers denoted to be in progress will show the task list of what has been completed and what is remaining:
  32. Imaging process for all servers will take about 1 hour total
  33. As servers finish Imaging, their progress bar will turn into a green check
  34. Once Imaging has completed it will proceed onto Verify
  35. After which it will go to the Finish task, please wait until this screen shows completed
  36. Now click the Inventory tab, and then click Download at the end of the Run ID that corresponds to the name given in step 19.  Save this file to a location that has access to the SDDC Manager interface.  If this is the Jump VM, saving to any local drive should be acceptable.
  37. If this is your only imaging operation, it is best practice to shut down the VIA appliance, as it runs a PXE boot server while in the imaging process.  Also note: don’t reboot any other server in the rack during imaging, to ensure it isn’t picked up by VIA.

The imaging process is now complete; however, SDDC Manager does not know about the new inventory.  We will now need to log in to the SDDC Manager UI to complete the task.

  1. After logging into SDDC Manager click Settings on the left, then Add Host in the ribbon bar.
  2. Select Rack-1 if you are in a single rack configuration, or if you have multi-rack select the appropriate physical location
  3. Click Browse and point to the vcf-imaging-details manifest file that we downloaded above in step 36
  4. Confirm the details and click Add Host
  5. After a few minutes you should see something similar
  6. If the Continue button doesn’t light up blue after 3 minutes, click Refresh in the browser and something like this should be displayed, allowing you to click Continue
  7. Now we see the final steps in Add Host, this will take about 30 minutes to complete depending upon the number of hosts

  8. Once host bring-up completes, the following should be displayed; you can click OK at this point
  9. Clicking on dashboard shows that our environment now has 14 Hosts, when we started with 10 Hosts

VCF Imaging VIA Doesn’t Display My Server!

When going through imaging, or when adding capacity, on Cloud Foundation 2.2 or 2.2.1 you may notice that your server choice is unavailable.

Example: When selecting All Flash, the only Dell option is Dell R630

This is due to the JSON file having all flash set to false.  To resolve this and add, for example, the Dell R730 or R730xd as an option, we will need to make a REST POST to add it as an option.  First, save the below code as: via-manifest-vcf-bundle-2.2.1-7236974.json

I will assume you know how to deploy the VIA appliance, or you can follow my post (coming soon) on setting up in-rack imaging in Cloud Foundation 2.2/2.3 environments.

It is easiest to start with an ‘empty’ VIA appliance; given that this is a small VM, it would be worthwhile to delete the existing VIA appliance and re-deploy the OVA.  Once VIA has booted for the first time, point your browser at the VIA interface

  1. Ensure that the VCF Bundle ISO has been mounted as the CD-ROM for the VIA VM.
  2. Ensure the MD5SUM.txt file for the specified bundle has been downloaded and is saved.
  3. Saving the above JSON file will enable the ‘All Flash’ option for the Dell R730 and R730xd
    ** Important note: editing this file to add a server that does not appear on the VCF Hardware Compatibility List will not make it work, as the imaging process adds specific VIBs for supported servers.
  4. Now, with the ISO mounted and the MD5SUM and JSON files saved on a machine with Postman (or a similar REST client) installed, we will need to perform the following operation:
    1. URL:
    2. Request Type: POST
    3. Request Body Type: form-data
    4. Parameters:
    5. bundleHashType: <Text> MD5
    6. txtBundleHashFile: <FILE> C:\files\MD5SUM.txt
    7. txtInventoryJson: <FILE> C:\files\via-manifest-vcf-bundle-2.2.1-7236974.json
  5. Click the Send button and you should receive a “status”: “Success” message, as seen below.
  6. Switching back to the VIA interface at this point you will see the upload is in progress
  7. Once the upload is complete, the R730/R730xd is now available to image:
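If you’d rather script this than click through Postman, the same form-data request can be assembled with Python’s standard library.  This is only a sketch: the VIA endpoint URL was not reproduced above, so it remains a placeholder, and the file contents here are stand-ins for your real MD5SUM.txt and manifest JSON:

```python
import io
import urllib.request
import uuid

def build_multipart(fields, files):
    """Encode text fields and file attachments as multipart/form-data.

    fields: {name: text value}; files: {name: (filename, content_bytes)}.
    Returns (body_bytes, content_type_header_value).
    """
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for name, value in fields.items():
        buf.write((f'--{boundary}\r\nContent-Disposition: form-data; '
                   f'name="{name}"\r\n\r\n{value}\r\n').encode())
    for name, (fname, data) in files.items():
        buf.write((f'--{boundary}\r\nContent-Disposition: form-data; '
                   f'name="{name}"; filename="{fname}"\r\n'
                   'Content-Type: application/octet-stream\r\n\r\n').encode())
        buf.write(data + b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())
    return buf.getvalue(), f"multipart/form-data; boundary={boundary}"

# Stand-in contents; read your real MD5SUM.txt and manifest JSON instead.
body, ctype = build_multipart(
    {"bundleHashType": "MD5"},
    {"txtBundleHashFile": ("MD5SUM.txt", b"<md5 contents>"),
     "txtInventoryJson": ("via-manifest-vcf-bundle-2.2.1-7236974.json",
                          b"<manifest contents>")},
)
# The VIA URL is a placeholder -- fill in your appliance's endpoint:
# req = urllib.request.Request("<VIA URL>", data=body,
#                              headers={"Content-Type": ctype}, method="POST")
# urllib.request.urlopen(req)
```

The parameter names mirror the Postman form fields listed in step 4, so a successful send should return the same “status”: “Success” response.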