So you got through all the BGP fun and have a fully deployed VCF instance, congrats! Of course, now you want to add some functionality and get your FULL SDDC on. Thankfully, there are only a few more steps to go, and you’re already an expert at this.
The long and short of it is that SDDC Manager will need access to https://depot.vmware.com. That means you’ll need outbound network connectivity and DNS resolution. Let’s talk about the outbound network connectivity first.
Upon setup, CloudBuilder is the default gateway for all of the deployed components of VCF, and its own default gateway points at itself. That isn’t going to do us any good if we need to send traffic outside of our nested deployment, so we’ll need a real gateway that can route traffic. In my lab I use pfSense to accomplish this, but VyOS, NSX Edge, and other virtual routers would work just as well. The main things it needs to do are route between at least two interfaces, one for the WAN (external) side and one for the LAN (internal) side, and NAT internal addresses. Let’s check out the interfaces on my virtual router:
You’ll need to ensure that the NICs of your virtual router are plugged into the correct portgroups. In my case, on my ESXi server, the WAN is the “VM Network” portgroup and the LAN is the “VCF” portgroup.
The easiest way I’ve found to ensure I’ve “plugged it in right” is to match up the MAC addresses on the VM NICs in ESXi with the config on the VM; in the two images above you can see that this is the case.
Next you’ll need to assign some IP addresses to the interfaces. The WAN interface could be DHCP from your internet router, or a static IP on your upstream network that will let you connect to the internet.
The LAN interface will need an IP address on the management subnet of your VCF deployment. This is the same subnet that SDDC Manager, vCenter, and NSX Manager are deployed on, so make sure the IP you choose is not in use.
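If you went the DIY route with a plain Linux VM instead of an appliance like pfSense, the LAN-side setup boils down to a couple of commands. This is only a sketch; it assumes eth1 is the NIC in the VCF portgroup and that 10.0.0.1 is a free address on the management subnet (substitute your own interface names and addresses):

```shell
# Sketch: LAN-side setup on a plain Linux VM acting as the virtual router.
# Assumes eth1 is the NIC in the VCF portgroup and 10.0.0.1 is unused on
# the management subnet; the WAN NIC (eth0 here) gets DHCP or a static IP
# from the upstream network as described above.

# Give the LAN interface an IP on the VCF management subnet
ip addr add 10.0.0.1/24 dev eth1

# Let the kernel forward packets between the WAN and LAN interfaces
sysctl -w net.ipv4.ip_forward=1
```

These changes are not persistent across reboots; an appliance like pfSense handles that for you, which is one reason I prefer it in the lab.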
Once you have this done, SSH to CloudBuilder. In all the following screens I am using the root user; after logging in to CloudBuilder as admin, issue sudo su - to become root.
At this point, if you try pinging the WAN interface of your virtual router from CloudBuilder, it will fail; the reason is that CloudBuilder doesn’t have a route to get there.
Let’s take a look at the routing table before making any changes so we have a starting place; type netstat -rn at the prompt. You’ll see that the default gateway (0.0.0.0) is set to 10.0.0.221, which is the IP address of the CloudBuilder itself.
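If you’d rather not eyeball the whole table, the default gateway can be pulled out with a little awk. Here’s a sketch run against a captured routing table (the 10.0.0.221 address is CloudBuilder’s own IP, as in my lab); on the real box you would pipe netstat -rn straight into the awk command:

```shell
# Extract the default gateway from a routing table dump.
# The sample below mimics `netstat -rn` output on CloudBuilder.
routes='Destination     Gateway         Genmask         Flags   Iface
0.0.0.0         10.0.0.221      0.0.0.0         UG      eth0
10.0.0.0        0.0.0.0         255.255.255.0   U       eth0'

# The default route has destination 0.0.0.0; print its gateway column
echo "$routes" | awk '$1 == "0.0.0.0" && $2 != "0.0.0.0" { print $2 }'
# Prints: 10.0.0.221
```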
To make communication with the rest of the world possible, we’ll need to re-configure the default gateway on CloudBuilder to be the LAN IP address of our virtual router.
First we’ll remove the current default gateway, then we’ll add our new default gateway pointed at the virtual router and finally take a look at the updated routing table.
route delete default gw 10.0.0.221 eth0
route add default gw 10.0.0.1 eth0
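A quick note: on newer Linux builds the legacy route command may be missing, and the same change can be made with the ip command in a single step. This is a sketch using the same addresses as above; like the route commands, it only changes the running kernel and won’t survive a reboot of CloudBuilder:

```shell
# Replace the default route in one step (equivalent to the
# route delete / route add pair above). 10.0.0.1 is the virtual
# router's LAN IP from my lab; substitute your own.
ip route replace default via 10.0.0.1 dev eth0

# Confirm the change
ip route show default
```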
Once this is done you should be able to ping the WAN side of your router.
The final step on the networking side of things is to enable NAT on the WAN interface. This allows the virtual router to translate internal IP addresses to its WAN interface IP address and back again as traffic flows in and out. The method for doing this varies from router to router, but the concept is the same; on pfSense it’s just a radio button and a click of Save.
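If you built your own Linux router rather than using an appliance, the equivalent of that radio button is a single iptables masquerade rule on the WAN interface. A sketch, assuming eth0 is the WAN-side NIC as before:

```shell
# NAT everything leaving via the WAN interface: internal source
# addresses are rewritten to the router's WAN IP on the way out,
# and reply traffic is translated back automatically.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```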
Once this is done you should be able to ping external addresses. I’ve used one of Google’s public DNS server IPs for the test below.
So, can we get to depot.vmware.com at this point? There are two possible outcomes… Let’s try it!
If you see the above, you should be done! Go ahead and log in to SDDC Manager, put in your MyVMware credentials, and start downloading bundles and patches! The rest of this article covers further DNS troubleshooting.
Oh snap, if this happened to you keep reading!
What is likely happening here is that your ISP or company has blocked access to external DNS servers. By default, CloudBuilder’s maradns.deadwood server is configured to use Google’s public DNS servers. If you are unable to resolve external DNS names, you will need to change these to internal servers.
The file you’ll want to change is /etc/dwood3rc, in particular the line starting with upstream_servers["."].
Open the file in your favorite text editor (I like vi) and change the IP addresses on that line to your internal DNS servers. If you only have one, that’s fine; just be sure to remove the comma as well. My internal DNS server is 192.168.10.1; it’s actually my WAN router, but it acts as a DNS forwarder. Here is my edited file and a successful ping of depot.vmware.com!
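If you prefer a one-liner to opening an editor, the same change can be scripted with sed. The exact contents of the line can vary by CloudBuilder version, so this is only a sketch against deadwood’s usual format, shown here on a sample line; on CloudBuilder you’d run sed -i against /etc/dwood3rc (back the file up first!). 192.168.10.1 is my internal DNS server, so substitute your own:

```shell
# Rewrite the upstream_servers line to point at an internal DNS server.
# Sample line in deadwood's usual format, standing in for /etc/dwood3rc.
line='upstream_servers["."]="8.8.8.8, 8.8.4.4"'

# Swap Google's public DNS pair for a single internal server
echo "$line" | sed 's/"8\.8\.8\.8, 8\.8\.4\.4"/"192.168.10.1"/'
# Prints: upstream_servers["."]="192.168.10.1"
```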
At this point you should be able to log in to SDDC Manager, input your MyVMware Credentials under Administration -> Repository Settings and get your green check!
If you’re still having difficulty, there could be other culprits, such as firewalls or MyVMware account problems, that are harder to troubleshoot. Consider using the LCM offline bundle utility in cases like these! Thanks for reading, I hope this helps!