Last week, I shared an example of how to create a new workload domain with VMware Cloud Foundation (VCF) using hosts with more than two physical NICs. In that example, I used NSX-V in the creation of the workload domain. Today, I'd like to provide an example of how you would do this for an NSX-T backed workload domain.
Because support for more than two physical NICs is new with the VCF 3.9.1 release, creating such a domain requires the use of the VCF APIs.
For this example, we are going to create a new workload domain called 'WLD-1'. Referencing the documentation here, we see that one supported configuration is to use four physical NICs divided between one vDS and one N-VDS. In keeping with this, our example workload domain will contain three hosts, each with four physical NICs. We will evenly divide the physical NICs between the switches, with two (vmnic0 and vmnic1) on the vDS and two (vmnic2 and vmnic3) on the N-VDS.
To start, first you need to commission the hosts. This is the same process you would use for commissioning any host with VCF.
After this, make sure you have the NSX-T bundle available in the SDDC Manager. Now is also a good time to ensure that you have added an NSX-T license key to the SDDC Manager.
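If you'd rather confirm both of these from the command line, the SDDC Manager API can list them as well. A quick sketch, assuming your VCF release exposes the /v1/bundles and /v1/license-keys endpoints:

curl -k https://sddc-manager.vcf.sddc.local/v1/bundles -u 'admin:VMware123!' -X GET -H 'accept: application/json' | json_pp
curl -k https://sddc-manager.vcf.sddc.local/v1/license-keys -u 'admin:VMware123!' -X GET -H 'accept: application/json' | json_pp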

Next, we need to craft the payload that we are going to send via the API to perform the validation of the workload domain spec and initiate the workload domain creation workflow. I prefer to reference a file when doing this, as it’s easier for me to edit. The following is the complete file that we’ll need:
{ "domainCreationSpec":{ "domainName":"WLD-1", "vcenterSpec":{ "name":"wld-vcenter", "networkDetailsSpec":{ "ipAddress":"10.0.0.30", "dnsName":"wld-vcenter.vcf.sddc.local", "gateway":"10.0.0.1", "subnetMask":"255.255.255.0" }, "rootPassword":"VMware123!", "datacenterName":"WLD-1-DC", "vmSize":"tiny" }, "computeSpec":{ "clusterSpecs":[ { "name":"Cluster-01", "hostSpecs":[ { "id":"5df23efc-5483-452d-99f6-d57633090c44", "license":"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX", "hostNetworkSpec":{ "vmNics":[ { "id":"vmnic0", "vdsName":"SDDC-Dswitch-Private1", "moveToNvds":false }, { "id":"vmnic1", "vdsName":"SDDC-Dswitch-Private1", "moveToNvds":false }, { "id":"vmnic2", "vdsName":"SDDC-NVDS-Private2", "moveToNvds":true }, { "id":"vmnic3", "vdsName":"SDDC-NVDS-Private2", "moveToNvds":true } ] } }, { "id":"8a51a87c-e142-4442-855d-edd58ec47b21", "license":"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX", "hostNetworkSpec":{ "vmNics":[ { "id":"vmnic0", "vdsName":"SDDC-Dswitch-Private1", "moveToNvds":false }, { "id":"vmnic1", "vdsName":"SDDC-Dswitch-Private1", "moveToNvds":false }, { "id":"vmnic2", "vdsName":"SDDC-NVDS-Private2", "moveToNvds":true }, { "id":"vmnic3", "vdsName":"SDDC-NVDS-Private2", "moveToNvds":true } ] } }, { "id":"54f7ea1f-1338-44b9-b203-86383b4c8b39", "license":"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX", "hostNetworkSpec":{ "vmNics":[ { "id":"vmnic0", "vdsName":"SDDC-Dswitch-Private1", "moveToNvds":false }, { "id":"vmnic1", "vdsName":"SDDC-Dswitch-Private1", "moveToNvds":false }, { "id":"vmnic2", "vdsName":"SDDC-NVDS-Private2", "moveToNvds":true }, { "id":"vmnic3", "vdsName":"SDDC-NVDS-Private2", "moveToNvds":true } ] } } ], "datastoreSpec":{ "vsanDatastoreSpec":{ "failuresToTolerate":0, "licenseKey":"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX", "datastoreName":"vSanDatastore" } }, "networkSpec":{ "vdsSpecs":[ { "name":"SDDC-Dswitch-Private1", "portGroupSpecs":[ { "name":"SDDC-DPortGroup-Mgmt", "transportType":"MANAGEMENT" }, { "name":"SDDC-DPortGroup-VSAN", "transportType":"VSAN" }, { "name":"SDDC-DPortGroup-vMotion", "transportType":"VMOTION" } ] } ], "nsxClusterSpec":{ "nsxTClusterSpec":{ "geneveVlanId":0 } } } } ] }, "nsxTSpec":{ "nsxManagerSpecs":[ { "name":"nsxt-1", "networkDetailsSpec":{ "ipAddress":"10.0.0.35", "dnsName":"nsxt-1.vcf.sddc.local", "gateway":"10.0.0.1", "subnetMask":"255.255.255.0" } }, { "name":"nsxt-2", "networkDetailsSpec":{ "ipAddress":"10.0.0.36", "dnsName":"nsxt-2.vcf.sddc.local", "gateway":"10.0.0.1", "subnetMask":"255.255.255.0" } }, { "name":"nsxt-3", "networkDetailsSpec":{ "ipAddress":"10.0.0.37", "dnsName":"nsxt-3.vcf.sddc.local", "gateway":"10.0.0.1", "subnetMask":"255.255.255.0" } } ], "vip":"10.0.0.34", "vipFqdn":"nsxt.vcf.sddc.local", "licenseKey":"XXXXX-XXXXX-XXXXX-XXXXX-XXXXX", "nsxManagerAdminPassword":"VMware123!" } } }
Assuming you copy this file, the first thing you want to do is replace the host IDs with the actual IDs for your hosts. To do this, you need to execute a command like this on the SDDC Manager:
curl -k 'https://sddc-manager.vcf.sddc.local/v1/hosts?status=UNASSIGNED_USEABLE' -u 'admin:VMware123!' -X GET -H 'accept: application/json' | json_pp
This will display the information for all of the unassigned hosts, which is where your newly commissioned hosts should be at this point. Just locate the ID for each host and substitute it into the example.
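If you have jq available, you can trim that output down to just the IDs and FQDNs. This is a convenience sketch, assuming the response wraps the hosts in an elements array the way it does in my environment:

curl -k 'https://sddc-manager.vcf.sddc.local/v1/hosts?status=UNASSIGNED_USEABLE' -u 'admin:VMware123!' -H 'accept: application/json' | jq -r '.elements[] | .id + "  " + .fqdn'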
I'd like to draw your attention to the host definition in our example file. Under it, you will see each of the four physical NICs on the host assigned to a switch. Note that the "moveToNvds" argument is set to false for the physical NICs that will stay on the VDS and true for the physical NICs that will be migrated to the N-VDS.
"vmNics":[ { "id":"vmnic0", "vdsName":"SDDC-Dswitch-Private1", "moveToNvds":false }, { "id":"vmnic1", "vdsName":"SDDC-Dswitch-Private1", "moveToNvds":false }, { "id":"vmnic2", "vdsName":"SDDC-NVDS-Private2", "moveToNvds":true }, { "id":"vmnic3", "vdsName":"SDDC-NVDS-Private2", "moveToNvds":true } ] }
Another change you will notice from our NSX-V backed example is that I've only defined three portgroups: Management, vSAN, and vMotion.
"portGroupSpecs":[ { "name":"SDDC-DPortGroup-Mgmt", "transportType":"MANAGEMENT" }, { "name":"SDDC-DPortGroup-VSAN", "transportType":"VSAN" }, { "name":"SDDC-DPortGroup-vMotion", "transportType":"VMOTION" }
You'll also notice that the nsxClusterSpec is now defined for NSX-T, with no NSX-V entry at all.
"nsxClusterSpec":{ "nsxTClusterSpec":{ "geneveVlanId":0 } }
Lastly, you will notice the NSX-T spec at the end of the file. As the architecture of NSX-T differs a bit from that of NSX-V, some different attributes are used. Here, we provide a cluster of three NSX-T Managers that will be configured behind a VIP.
As you create your file, ensure that the spelling and syntax are correct throughout. What I like to do is paste the whole JSON file into a JSON validator, just to make sure that I haven't accidentally messed up the syntax.
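You can also do this right from the shell. Either of these will fail with an error if the JSON is malformed, and json_pp is already present on the SDDC Manager:

json_pp < /root/mk_wld_3host_mpnic-mod3.json > /dev/null
python -m json.tool /root/mk_wld_3host_mpnic-mod3.json > /dev/null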

Also make sure that the license keys that you use in the JSON match the ones that you have already added to the SDDC Manager.
Now we’re ready to rock!
First we have to validate the workload domain spec. You’ll do this through the API with a command similar to this:
curl -k https://sddc-manager.vcf.sddc.local/v1/domains/validations -i -u 'admin:VMware123!' -X POST -H 'Content-Type: application/json' -d @/root/mk_wld_3host_mpnic-mod3.json
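The validation runs asynchronously, and the response body includes an ID for it. Assuming your release also supports the GET form of this endpoint, you can poll until the execution status reports COMPLETED; <validation-id> below is the ID returned by the POST:

curl -k https://sddc-manager.vcf.sddc.local/v1/domains/validations/<validation-id> -u 'admin:VMware123!' -X GET -H 'accept: application/json' | json_pp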
If this completes successfully, we will then need to trigger the workload domain creation. Before doing this, however, we need to first edit our file, as the calls to validate and to create a workload domain expect slightly different payloads. All we need to do is make a copy of our JSON and remove the line at the top that says:
"domainCreationSpec":{
And remove the corresponding bracket at the end of the file.
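If jq is handy, you can generate the second file instead of hand-editing it; this one-liner pulls out just the inner object:

jq '.domainCreationSpec' /root/mk_wld_3host_mpnic-mod3.json > /root/mk_wld_3host_mpnic-mod3-t2.json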
Now, use the API to trigger the workload domain creation workflow using a command like this:
curl -k https://sddc-manager.vcf.sddc.local/v1/domains -i -u 'admin:VMware123!' -X POST -H 'Content-Type: application/json' -d @/root/mk_wld_3host_mpnic-mod3-t2.json
If everything is correct, the response will show that the workload domain creation request has been accepted.
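The response should also include a task ID, so if you prefer to watch progress from the command line instead of the UI, you can poll the task. A sketch, assuming the /v1/tasks endpoint in your release, with <task-id> being the ID from the response:

curl -k https://sddc-manager.vcf.sddc.local/v1/tasks/<task-id> -u 'admin:VMware123!' -X GET -H 'accept: application/json' | json_pp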

Looking at the SDDC Manager UI, you should see the workload domain creation workflow task in progress. When this completes, you can take a look at your systems and you should see the NICs assigned to the VDS and N-VDS, as planned.

Looking at the VDS, you’ll see the portgroups that we defined have been created automatically by the workflow.

That’s all there is to it!
As a side note, you can do this in a lab environment using the VMware Lab Constructor (VLC). Just add some additional hosts using the Expansion Pack option. After the hosts have been created, simply power off the nested hosts and add two (or more) NICs to each VM, setting the NICs to use the VMXNET3 adapter type.
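If you'd rather script those NIC additions than click through the vSphere Client, govc can do it. A sketch, assuming a nested host VM named esxi-5 and a lab port group named VLC-A-PG; both names are placeholders for whatever your VLC environment actually uses:

govc vm.network.add -vm esxi-5 -net VLC-A-PG -net.adapter vmxnet3
govc vm.network.add -vm esxi-5 -net VLC-A-PG -net.adapter vmxnet3

Run it once per NIC you want to add.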
