After getting your external access up and running, I’m sure you’re ready to start deploying some additional solutions! Let’s start with the vRealize Suite, and that all begins with downloading and deploying vRealize Suite Lifecycle Manager (VRSLCM). Go ahead and get that queued up and downloading; it’s about 3GB in size. It should be available under the Repository -> Bundles page, so click the Download Now button next to the vRealize Suite Lifecycle Manager bundle.
I apologize for the monster post, but know that the work we do here puts in place the several pieces necessary for deploying vRealize Operations and vRealize Automation, so it’s worth the effort if those are an ultimate goal. While VRSLCM is downloading, there are a few more things we need to do. VRSLCM is installed on an Application Virtual Network (AVN), specifically the xRegion AVN. While I won’t go into all the details here, I will point you to the VVD documentation. What this means for us is that VRSLCM lives on a different subnet (192.168.11.x/24) than our management components (10.0.0.x/24), so we’ll need to put an additional route in place. Then we’ll need to add the new subnet to our reverse DNS resolver and to the forward DNS access list.
Since this is a “Build it For Me” deployment, CloudBuilder takes care of all the required infrastructure services, including the L3 router into and out of the environment and the BGP route server for the AVNs. This means it’s the gateway for all the internal nested components that got deployed. In the illustration below you can see all the services CloudBuilder provides, along with all its IP addresses, the NSX Edges that make up the front end of the AVN construct, the Logical Router, the Logical Switches, and finally the workloads.
The first place we’ll add a route is CloudBuilder. Let’s take a look at CloudBuilder’s kernel routing table and its BGP routing information base (RIB).
gobgp global rib
netstat -rn

Adding the route to CloudBuilder
In the BGP routing information base we can see that the route for 192.168.11.x already exists; however, it is missing from the kernel routing table. It exists in the BGP RIB because the Logical Router (DLR) is already advertising it, but the gobgp daemon on CloudBuilder does not update the kernel routing table. Let’s update the kernel routing table ourselves and test connectivity.
route add -net 192.168.11.0/24 gw 172.27.11.2 eth0
netstat -rn
ping 192.168.11.1

Above we pointed CloudBuilder (and everything else that uses CloudBuilder as its gateway) at only one of the ECMP edges (172.27.11.2). This is fine for a lab, but normally you’d want to point at both edges in case one fails. You may also notice that there is another subnet (192.168.31.x) that is configured similarly. If you look back at the network illustration you’ll see that this subnet has vRealize Log Insight deployed on it; the route for it in CloudBuilder was set up automatically by VLC during bringup.
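If you do want both edges in the path, Linux supports equal-cost multipath routes via the iproute2 ip command. Here’s a minimal sketch; note that the second edge address (172.27.11.3) is only a guess for illustration, so check your environment for its real address first.

route del -net 192.168.11.0/24
ip route add 192.168.11.0/24 nexthop via 172.27.11.2 dev eth0 nexthop via 172.27.11.3 dev eth0

Either way, keep in mind that routes added from the shell don’t survive a reboot, so you’ll need to re-add them if you restart the CloudBuilder appliance.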
Adding a Route to the Jump Box
Unless you have CloudBuilder set as the gateway for your Jump box, you will need to add routes to reach the vRealize components. As we’ve already discussed, the vRealize Suite components are deployed on two different subnets, and the gateway for those subnets is CloudBuilder. Your Jump host is deployed with an IP address that is either on, or routable to, the management network, so as long as we can communicate with CloudBuilder, we’ll be able to communicate with the vRealize components. To add the routes we need to open a Command Prompt as administrator: I typically click the Start ribbon, type cmd, then right-click the Command Prompt icon and click Run as administrator.

To see the current routes and add the new ones, type:
route print -4
route add 192.168.11.0 mask 255.255.255.0 10.0.0.221
route add 192.168.31.0 mask 255.255.255.0 10.0.0.221
You should now be able to ping all the way to the Logical Router on both subnets from your Jump box.
ping 192.168.31.1
ping 192.168.11.1
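One note on these: routes added this way disappear when the Jump box reboots. The Windows route command has a -p (persistent) flag that stores the route in the registry so it survives reboots; the same two routes would look like this.

route -p add 192.168.11.0 mask 255.255.255.0 10.0.0.221
route -p add 192.168.31.0 mask 255.255.255.0 10.0.0.221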
Great! We have set up and verified network connectivity between all the components, and now we need to update DNS. In your deployment, forward DNS is handled by the maradns service, while reverse DNS is handled by the maradns.deadwood service. We will need to do the following things:
- Add the new FQDN and IP address to the domain zone file (/etc/maradns/db.vcf.sddc.local)
- Add the new reverse subnet to the recursive server config file (/etc/dwood3rc)
- Add the new subnet to the ACL in the recursive server config file (/etc/dwood3rc)
- Restart the DNS services
Configuring DNS for the VRSLCM deployment on CloudBuilder
Make a backup copy of the /etc/maradns/db.vcf.sddc.local file, then open it with your favorite editor, add this line to the bottom of the file, and save it.
vrslcm.vcf.sddc.local. FQDN4 192.168.11.10
MaraDNS is fairly particular about formatting, so make sure you include the trailing “.” after the FQDN.
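If you’d rather make the change from the shell, here’s a minimal sketch of the backup and the append, using the same file and record as above.

cp /etc/maradns/db.vcf.sddc.local /etc/maradns/db.vcf.sddc.local.bak
echo 'vrslcm.vcf.sddc.local. FQDN4 192.168.11.10' >> /etc/maradns/db.vcf.sddc.local

The FQDN4 record type is convenient here because MaraDNS uses it to generate both the forward (A) record and the matching reverse (PTR) record from a single line.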
Next, make a backup copy of the /etc/dwood3rc file, then open it with your favorite editor. Add the following to tell the DNS services that reverse lookups for the 192.168.11.x subnet are handled locally, and that it’s OK to answer requests from the 192.168.11.x subnet. Save the file.
upstream_servers["11.168.192.in-addr.arpa."] = "127.0.0.1"
Then append ,192.168.11.0/24 to the recursive_acl line in the same file.
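For reference, the relevant portion of /etc/dwood3rc might end up looking something like this. The entries besides the two 192.168.11.x additions are placeholders, so keep whatever is already in your file and just add the new subnet. Also note that Deadwood requires upstream_servers = {} to be declared once before any entries are added, which should already be the case in this file.

recursive_acl = "10.0.0.0/24,192.168.31.0/24,192.168.11.0/24"
upstream_servers = {}
upstream_servers["31.168.192.in-addr.arpa."] = "127.0.0.1"
upstream_servers["11.168.192.in-addr.arpa."] = "127.0.0.1"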
Then you’ll need to restart the maradns services and check to make sure that they are up and running.
systemctl restart maradns
systemctl restart maradns.deadwood
systemctl status maradns
systemctl status maradns.deadwood
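With the services back up, it’s worth verifying both directions of resolution before moving on. From the Jump box, something like this should return the new record both forward and in reverse (assuming 10.0.0.221, the CloudBuilder address we routed through earlier, is also the DNS server in your deployment).

nslookup vrslcm.vcf.sddc.local 10.0.0.221
nslookup 192.168.11.10 10.0.0.221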
Deploying vRealize Suite Lifecycle Manager
Alright, the hard part is done! Now it’s time to use SDDC Manager to deploy VRSLCM, so get logged in to SDDC Manager and make sure your bundle is done downloading. You should see a “Task” down at the bottom of the dashboard indicating this.
Navigate to the “vRealize Suite” menu item and then click the Deploy button under the vRealize Suite Lifecycle Manager.
The wizard will pop up and you’ll need to click through a few screens that don’t require much input. On the first screen you can select the “Select All” checkbox, since the prerequisites it lists are all steps we completed before this point! Then click Begin.
The Network Settings screen requires no input; it simply confirms everything we already know about the network settings. Click Next.
The Appliance Settings screen will simply require the FQDN that we added to DNS earlier, as well as passwords for the administrator and SSH root accounts that conform to the password complexity requirements.
After filling that out, click Next, then Finish on the review summary screen, and we’re off to the races. You’ll be able to check the progress in the tasks window at the bottom of the dashboard.

Typically it takes around an hour to deploy VRSLCM. Once the main task comes back as successful, you’ll see a green checkmark and a hyperlink under vRealize Suite -> vRealize Suite Lifecycle Manager.
Click the link for vRealize Suite Lifecycle Manager and log in as admin@localhost with the administrator password that you set earlier.
Once you get logged in you’ll be able to see that VRSLCM has already been configured with the existing vRealize Log Insight environment. A word of caution: *don’t* do anything with the vRLI environment inside of VRSLCM; it is SDDC Manager’s job to update it, change passwords, and manage certificates.
This completes the installation of VRSLCM in your VCF nested environment. Thanks for reading!