Category Archives: VCF

VMware Cloud Foundation

VLC-Build it for me, VRSLCM deployment

After getting your external access up and running, I’m sure you’re ready to start deploying some additional solutions! Let’s start with the vRealize suite, and that all begins with downloading and deploying VRSLCM (vRealize Suite Lifecycle Manager). Go ahead and get that queued up and downloading; it’s about 3 GB in size and should be available under the Repository -> Bundles page. Click the Download Now button next to the vRealize Suite Lifecycle Manager bundle.

Continue reading VLC-Build it for me, VRSLCM deployment

Make a Local VCF Depot

VMware Cloud Foundation (VCF) communicates periodically through the internet with a hosted web service in order to check for and retrieve software updates and bundles. What if you don’t have internet access? Today, I’m going to demonstrate how you can build out your own software repository for VCF.

Before we begin, it’s important to note that this process is not supported for production environments. To understand why, I need to give you a short overview of what I will be showing you today.
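
The full post walks through the actual procedure, but as an extremely rough, lab-only illustration of the serving side (the directory, port, and the way you redirect the depot hostname to your server are all assumptions here), a mirrored bundle directory can be fronted by something as simple as a basic web server:

    # Lab-only sketch: serve a mirrored bundle directory over HTTP
    # (/opt/vcf-depot and port 8080 are placeholders, not the real depot layout)
    cd /opt/vcf-depot
    python3 -m http.server 8080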

Continue reading Make a Local VCF Depot

VLC – Build it for me, External Access

So you got through all the BGP fun and have a fully deployed VCF instance, congrats! Of course, now you want to add some functionality and get your FULL SDDC on. Thankfully, there are only a few more steps to go and you’re already an expert at this.

The long and short of it is that SDDC Manager will need access to https://depot.vmware.com. That means you’ll need outbound network connectivity and DNS resolution. Let’s talk about the outbound network connectivity first.
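
A quick way to confirm both pieces from the SDDC Manager appliance itself (or from any host on the management network) before going further; if nslookup isn’t present on the appliance, the curl call alone will still surface DNS failures:

    nslookup depot.vmware.com          # confirm DNS resolution of the depot hostname
    curl -Ik https://depot.vmware.com  # any HTTP response, even a 4xx, proves outbound HTTPS reachability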

Continue reading VLC – Build it for me, External Access

VCF Bringup with Multiple Physical NICs

I’ve talked about how to use hosts that have multiple physical NICs to create NSX-V and NSX-T backed workload domains and even how to expand a cluster in one of these workload domains. But what if you want to do an initial installation of VMware Cloud Foundation (VCF) using hosts with multiple physical NICs?

As I’ve mentioned before, the support for multiple physical NICs with VCF is new with VCF 3.9.1. All of the operations we performed previously relied on the VCF API. This worked well for our intended use, but bringup is a different animal.

Continue reading VCF Bringup with Multiple Physical NICs

NSX-T Backed Workload Domains with Multiple Physical NICs

Last week, I shared an example of how to create a new workload domain with VMware Cloud Foundation (VCF) using hosts with more than two physical NICs. In that example, I used NSX-V in the creation of the workload domain. Today, I’d like to provide an example of how you would do this for an NSX-T backed workload domain.

As support for more than two physical NICs is new with the VCF 3.9.1 release, doing this requires the use of the VCF APIs.

Continue reading NSX-T Backed Workload Domains with Multiple Physical NICs

Multiple Physical NICs in VCF

With the VCF 3.9.1 release, support for hosts with multiple physical NICs has been added. This allows you to dedicate specific traffic types to specific physical NICs to conform to your best practices. Let’s take a quick look at how this is configured…

By default, VCF will use the first two physical NICs (vmnic0 and vmnic1) on a host for all traffic. When working with a host with multiple physical NICs, you will need to define what the physical NICs are connected to (VDS or N-VDS). The VDS or N-VDS will need to exist, of course.
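
If you’re not sure how the extra NICs enumerate on a given host, it’s worth checking on the ESXi host itself before mapping them to a VDS or N-VDS. For example:

    # Lists physical NICs as ESXi enumerates them (vmnic0, vmnic1, ...),
    # along with driver, link state, and speed
    esxcli network nic list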

Continue reading Multiple Physical NICs in VCF

Manually Uploading VCF Bundles

Although VMware Cloud Foundation (VCF) will automatically download the software bundles once connected to the Internet, there are times that you may wish to manually inject the software bundles into the SDDC Manager using the API.

After the initial deployment of VCF is completed, additional bundles may be required to enable optional functionality. For example, the bundles for Horizon, PKS, and vRealize Automation are optional and would be downloaded and then installed as needed. Because these bundles can be quite large and VCF downloads them in a serial fashion, it can be faster at times to inject the bundles into VCF manually through the API.
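
As a rough illustration of the flow (the endpoints below come from the public SDDC Manager API, but treat the exact paths and payloads as assumptions and verify them against the API reference for your VCF release), you would authenticate and then work with the bundles resource, for example to see which bundles SDDC Manager already knows about:

    # Request an API token from SDDC Manager (endpoint and payload are assumptions; verify for your release)
    curl -k -X POST https://sddc-manager.example.com/v1/tokens \
      -H "Content-Type: application/json" \
      -d '{"username": "administrator@vsphere.local", "password": "********"}'

    # List the bundles SDDC Manager knows about, using the access token returned above
    curl -k -H "Authorization: Bearer <accessToken>" \
      https://sddc-manager.example.com/v1/bundles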

Continue reading Manually Uploading VCF Bundles

Resizing the LCM Volume group on SDDC Manager

One of the users of the VLC (VCF Lab Constructor) had an issue with drive space when attempting to upgrade from VCF 3.9 -> 3.9.1. This has been a problem in previous releases at times as well, so I thought it’d be a good opportunity to post about it. That and I don’t post nearly as often as I want to!

SDDC Manager uses LVM for several of its critical mount points. Coupled with the EXT4 filesystem, this allows those mounts to be grown flexibly and non-disruptively.
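
The full post walks through the SDDC Manager specifics, but the underlying LVM workflow is the standard one. A minimal sketch, assuming the backing disk has already been grown and using placeholder names (sdX, vg_name, and lv_name are not the actual SDDC Manager volume group or logical volume; substitute the ones backing the LCM mount on your appliance):

    pvresize /dev/sdX                      # placeholder device backing the volume group
    vgdisplay vg_name                      # confirm "Free PE / Size" shows the new space
    lvextend -L +20G /dev/vg_name/lv_name  # grow the logical volume (adjust the size to taste)
    resize2fs /dev/vg_name/lv_name         # grow the EXT4 filesystem online, no unmount needed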

Continue reading Resizing the LCM Volume group on SDDC Manager

Single SSO Domain With Multi VCF Instances

So, you have adopted VMware Cloud Foundation (VCF), or maybe you have spent some time reviewing the VMware Validated Design (VVD) and found that you would like to deploy a single SSO domain.  In the VVD architecture, two regions share a single SSO domain, but natively the VCF deployment process expects a greenfield SSO domain.

From the VCF 3.8 release notes:

Provides the ability to link between the SSOs (PSCs) of two or more VMware Cloud Foundation instances so that the management and the VI workload domains are visible in each of the instances.

What does this mean exactly?  It translates into the ability for Region B, as per the VVD, to join the SSO instance of Region A.  This allows VCF to align with the VVD from a design perspective and share SSO domains where it makes sense, based upon the Enhanced Linked Mode 150ms RTT limitation.  To facilitate this, the Deploy Parameters tab of the Excel deployment workbook has been updated (shown below) and allows you to enter the Region A SSO domain, PSC IP address, and SSO credentials.  During the bringup process, Cloud Builder will still deploy two PSCs for that region, but they will be joined to Region A.  This provides Enhanced Linked Mode in vCenter and allows you to manage Role Based Access Control and VMs for two VCF environments from a single login.

The ability to join SSO domains does come with some limitations though:

  1. VCF can only join an SSO domain of another VCF instance. The first VCF deployed in your environment still needs to be greenfield.
  2. The ELM limit of 15 vCenters applies, and it is now shared between the two VCF instances. Instead of a single VCF instance being allowed 14 workload domains plus management, only 13 workload domains could be created in a shared deployment, since a minimum of two vCenters would be used for management.
  3. The ELM limitation of 150ms round-trip latency still applies, which means sharing an SSO domain between, say, New York and Sydney will likely not be supported or advisable.
  4. Patches need to stay consistent between deployments, especially for PSCs and vCenters. Patch all PSCs in both VCF instances before patching vCenters, and then patch all vCenters in a timely manner.
  5. SDDC Manager in ‘Region A’ cannot see the Workload Domains created by the SDDC Manager in ‘Region B’. We are looking to address this in the future.
  6. NSX-T cannot be shared between ‘Region A’ and ‘Region B’ deployments.

This is great news for customers that are looking to align their VCF environments with the best practices in the VVD, and it also allows for a unified vCenter support experience for their admins.  In addition, this will allow for easier migrations on Day X, as both environments will reside in the same vSphere Client, potentially as easy as a drag and drop to Region B, depending upon networking and the overall customer architecture.


Interested in deploying VMware Cloud Foundation in your Home Lab?  Get the info here: http://tiny.cc/getVLC and on slack at http://tiny.cc/getVLCSlack.

Bring Your Own Network with VCF 3.0 Pre-Requisites

Switches, we don’t need no stinkin’ switches…. Well, yes, we still need switches.  That being said, we no longer require specific switches in VMware Cloud Foundation 3.0.  This article is not a replacement for the User Guide or the Release Notes.

WHAT DOES THAT MEAN???

In VMware Cloud Foundation 3.0 (VCF 3.0), we have moved to a Bring Your Own Network (BYON) model to allow customers to choose their favorite network vendor and switch model and provide the underlay network that VCF will use.  This gives customers the flexibility to use their fabric manager of choice to control the underlay network.

THIS SOUNDS GREAT! MY NETWORKING GUY WILL BE SO HAPPY! WHAT DO I NEED TO ASK HIM NICELY TO CONFIGURE?

Now that BYON is in place and you have selected your switching vendor of choice, or incorporated VCF into your existing network topology, there are a few pre-requisites required from your networking team.  Those pre-requisites are as follows:

  1. Each port to be used by a physical server will need to be a trunk port; however, that trunk port can be limited to the specific VLANs required by VCF and your vSphere environment.
  2. Creation of a management network to support the SDDC Manager, PSCs, vCenters, NSX Managers, NSX Controllers, and the vRealize Log Insight cluster, in addition to the vCenters, NSX Managers, and NSX Controllers for any additional workload domains, as well as any additional services to be deployed.  This will be the default network used for all of these services when Cloud Builder brings up the environment. (More on that in a later post.)
  3. While not explicitly required, an ESXi management network is also nice to have, assuming it has a route to the management network above.  This way your physical hosts are logically separated on another VLAN in your environment.  Your hosts’ management (in-band) IPs can also reside on the management network above. (optional)
  4. VCF 3.0 introduces a new construct called a Network Pool.  A Network Pool simply consists of two networks, ensuring isolation of your vMotion and vSAN traffic in your environment.  VCF 3.0 also supports multi-cluster workload domains, so plan accordingly as to whether those should have shared or segregated vMotion and vSAN traffic.
    • A vMotion network per workload domain, to ensure isolation within workload domains or clusters.  Please keep in mind this is not a requirement but rather a best practice in VCF.
    • A vSAN network per workload domain, to ensure data isolation within workload domains or clusters.  Please keep in mind this is not a requirement but rather a best practice in VCF.
  5. A VTEP network with DHCP provided.  Please keep in mind that VCF 3.0 configures a Load Based Teaming NIC policy, which in turn requires two (2) IP addresses per physical host in the environment.
  6. While not explicitly required, a network allowing out-of-band access to the BMC ports of the servers is also nice to have, assuming it can route to the management network above.  This allows you to access the Baseboard Management Controller (iLO, iDRAC, IMC, etc.) remotely as needed. (optional)
  7. When planning all of your IP requirements, allocate a subnet with ample capacity and then use inclusion ranges to limit the use of that subnet.  Keep in mind that when using network pools, overlapping subnets cannot be reused (e.g. 192.168.5.0/24 and 192.168.5.128/25).
  8. Ensure that your networking guy is not utilizing any Ethernet link aggregation technology (LAG/vPC/LACP) on the connections to the ESXi servers.
  9. Example of this schema in action:
    As you can see in this example, I would be sharing the management network for the ESXi servers as well as the PSCs, vCenters, NSX Managers, NSX Controllers, and vRealize Log Insight.  My additional workload domain, though, would receive its own dedicated vMotion and vSAN network pool.  Based upon the VXLAN/VTEP DHCP network being 192.168.5.0/24, I could in theory support up to 126 ESXi hosts in this environment (two VTEP IPs per host, after removing .1 for the gateway and .255 for broadcast).  One other thing to note is that the management network can have an MTU of 1500 or higher.  Jumbo frames (1600+) are required for all other networks, and an MTU of 9000 is recommended, as illustrated in this graphic.  All of the above networks also require gateways, and these need to be routable for the bringup portion of the environment.  Lastly, VLAN tagging via 802.1Q should be utilized, jumbo frames enabled, and an IGMP snooping querier configured.  (A quick way to verify jumbo frames end to end is shown just after this list.)
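
On the jumbo frame point in the example above, once the switch ports and port groups are configured you can verify that large frames actually pass end to end from an ESXi host. A minimal sketch (vmk1 and the target IP are placeholders for your vMotion or vSAN vmkernel interface and a peer host on that network):

    # 8972-byte payload + IP/ICMP headers = 9000, with the don't-fragment bit set
    vmkping -I vmk1 -d -s 8972 192.168.4.12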

I HAVE COMPLETED ALL THAT AND GIVEN MY NETWORKING GUY COPIOUS AMOUNTS OF COFFEE, WHAT’S NEXT?

Now that the networking has been plumbed to the environment and the host ports are set appropriately, the next topic is DNS.  VCF now supports custom naming of components; in fact, it manages ZERO of the DNS going forward.  To ensure your environment is ready for VCF, forward and reverse DNS entries need to be established for every item to be spun up by VCF.  One note here: the DNS server to be used needs to be authoritative, so services like unbound will not allow this to function properly.  Below is a table illustrating what is required in addition to the ESXi hosts having appropriate forward and reverse DNS:
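
A simple way to sanity-check the entries before bringup is to query both directions for every FQDN you plan to use, against the DNS server you will hand to Cloud Builder. A minimal sketch (the hostname and IP below are placeholders):

    nslookup sddc-manager.corp.example.com   # forward lookup: FQDN -> IP
    nslookup 192.168.1.50                    # reverse lookup: IP -> the same FQDN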

OK, WE HAVE COMPLETED DNS AND ARE READY TO PROCEED. WHAT IS NEXT?

In VCF 3.0, the support matrix for physical servers has been dramatically expanded.  In fact, we support almost every vSAN ReadyNode from this list.  A couple of things to keep in mind:

  1. Minimum cache disk size requirement is 200 GB per disk
  2. Minimum capacity disk size requirement is 1 TB per disk
  3. Minimum of two 10GbE NICs; however, VCF 3.0 supports 25GbE and 40GbE or any other NIC that has been qualified for ESXi and is IOVP certified.  This allows for the use of 100GbE IOVP NICs as well.
  4. Configure BIOS and firmware levels to VMware and manufacturer recommendations for vSAN (NICs certified against the qualifying ESXi hardware and HBAs are the most important things to confirm and set before proceeding)
  5. Disable all Ethernet ports except the two that will be used in VCF (currently VCF only supports two physical NICs, in addition to the BMC, and they must be presented in vSphere as vmnic0 and vmnic1 respectively)
  6. Install a fresh copy of ESXi 6.5 U2c/EP8 with a password defined.  Use the vendor build if possible; if not, patch the VIBs for the NICs and disk controllers as stated above.
    * If you are using Dell servers you are in luck, as 6.5 U2c/EP8 is directly installable with Dell’s branded ISO located here
    ** If you are using HP servers you are in luck, as 6.5 U2c/EP8 is directly installable with HP’s branded ISO located here
    *** Here is a link to additional custom ISOs
  7. Enable SSH and NTP to start/stop with the host
  8. Configure the DNS server IP on the host (a command-line sketch for items 7 and 8 follows this list)
  9. What you Can (and Cannot) Change in a vSAN Ready Node
  10. In an all-flash configuration, your hosts’ capacity SSDs need to be tagged as capacity devices.  Chris Mutchler (@chrismutchler) wrote this script to automate that; it would be run on each ESXi host participating in VCF:
    esxcli storage core device list | grep -B 3 -e "Size: 3662830" | grep ^naa > /tmp/capacitydisks; for i in `cat /tmp/capacitydisks`; do esxcli vsan storage tag add -d $i -t capacityFlash;  vdq -q -d $i; done

    ** Please note the Size would need to be updated with the size of your capacity disk
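
For items 7 and 8 above, the vSphere Client works fine, but both are also scriptable from the ESXi shell if you are prepping a lot of hosts. A minimal sketch (the DNS server IP is a placeholder; NTP is left to the host client or your kickstart, since the config file handling varies by build):

    vim-cmd hostsvc/enable_ssh                               # set SSH to start and stop with the host
    vim-cmd hostsvc/start_ssh                                # start SSH now
    esxcli network ip dns server add --server=192.168.1.10   # placeholder DNS server IP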

ONE LAST STEP: SHOULD I CREATE A NEW WORKLOAD DOMAIN OR USE THE NEW MULTI-CLUSTER SUPPORT IN VCF 3.0?

As mentioned above, VCF 3.0 has the ability to instantiate multiple clusters in a single workload domain.  To make this easier in the planning phase, one should think about whether multi-cluster or another workload domain makes more sense.  To discuss this, I think we need to lay out the features and use cases for each one to compare and contrast.

Multi Cluster:

  1. Hardware
    • GPUs for VDI
    • Storage dense cluster
    • High Memory cluster
    • Machine Learning is a great example that may need GPUs for processing
  2. Licensing
    • Segregation of Web vs SQL components
    • Another licensing related component could be OS related, for example a Windows cluster and a Linux cluster (insert your distro of choice).
  3. Failover Zones
    • Ability to create disparate storage for a separate failover zone for an application.
  4. Patch Consistency
    • In a multi-cluster environment, it allows for patch conformity amongst all participating hosts.
  5. Vendor Requirements
    • Pivotal actually requires multiple clusters

New Workload Domain:

  1. Patching
    • Be able to run patches to test/dev before running them in production
  2. Multi Tenancy
    • Role Based Access Control to a separate vCenter
  3. Isolation
    • Full isolation of NSX networks and policies
    • Full isolation of vSAN data controlled by a distinct vCenter server and storage policy based management
  4. Licensing
    • Oracle… would be a great use case to give them their own vCenter to appease their licensing requests.