I wanted to summarize a few things I’ve found while working with VMware vCloud Availability 3.0 (vCAv) over the past few months that will be helpful to providers and tenants. I’m sure there are others, but these are the ones that come to mind.
Tunnel (Public API) Endpoint
This is a very important step for production deployments – setting the public URL endpoint ensures proper cloud access from tenants and other vCD instances.
The public API endpoint can be configured from the Cloud Replication Management (CRM) under Configuration:
Or can be configured directly on the tunnel:
When you configure it from the CRM, the change is pushed to the Tunnel appliance over port 8047 (the internal communication port between the CRM and the Tunnel appliance).
Note that you will need a proper DNS name or public IP address along with the DNAT port used (such as 443). Any configuration or reconfiguration requires a service restart on the CRM and Tunnel appliances –
The above table summarizes all of the ports required for proper vCAv management. While that covers ingress communication to the vCAv appliances, the combined appliance presents a different path for configuration. Below are the explicit ports for configuring each role inside a single appliance:
- vApp Replication Manager + Portal – https://appliance-ip:8046/ui/admin
- Provides the main interface for the cloud-to-cloud replication operations. It understands the vCloud Director level concepts and works with vApps and virtual machines.
- Replication Manager – https://appliance-ip:8044/ui/admin
- A management service operating on the vCenter Server level. It understands the vCenter Server level concepts for starting the replication workflow for the virtual machines.
- Replicator – https://appliance-ip:8043/ui/admin
- Exposes the low-level HBR primitives as REST APIs.
- Tunnel – https://appliance-ip:8047/ui/admin
- Simplifies provider networking setup by channeling all incoming and outgoing traffic for a site through a single point.
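As a quick sanity check, you can confirm each of these admin endpoints is reachable from a management host. Here's a minimal sketch – the `APPLIANCE` value is a placeholder for your combined appliance's IP/FQDN, and the actual probe (curl against the self-signed certs) is left commented so you can adapt it:

```shell
#!/bin/sh
# Placeholder - replace with your combined appliance IP or FQDN.
APPLIANCE="appliance-ip"

# Role-to-port mapping from the list above.
for entry in \
  "vApp-Replication-Manager:8046" \
  "Replication-Manager:8044" \
  "Replicator:8043" \
  "Tunnel:8047"
do
  role="${entry%:*}"
  port="${entry#*:}"
  url="https://${APPLIANCE}:${port}/ui/admin"
  echo "${role} -> ${url}"
  # In practice, probe it (-k skips validation of the self-signed cert):
  #   curl -ks -o /dev/null -w '%{http_code}\n' "$url"
done
```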
In my lab, I’ve rebuilt vCAv many times, and in some scenarios I’ve forgotten that I had an active replication/protection on a VM. When re-enabling protection, you receive an error stating “This VM or vApp is already replicated” and the protect operation fails.
This is because the host-based replication (HBR) process is still enabled on the VM. To disable it, the provider needs to locate the VM and SSH to its ESXi host. From there, we can use the “vim-cmd” utility with the hbrsvc scope.
There are two commands you’ll need to know:
- vim-cmd hbrsvc/getallvms
- This lists the VM IDs of replicated VMs, which you’ll use in the next command.
- vim-cmd hbrsvc/vmreplica disable <VM_ID>
- Disables HBR on the specified VM.
From there, one can successfully re-enable protection for this VM.
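Putting the two commands together, a provider could script the lookup of the VM ID. A rough sketch – the column layout assumed by the awk parsing (first column = VM ID, second = name) and the sample output below are assumptions for illustration, so verify against the actual `vim-cmd hbrsvc/getallvms` output on your host first:

```shell
#!/bin/sh
# Extract the numeric VM ID for a given VM name from
# "vim-cmd hbrsvc/getallvms"-style output.
# NOTE: the column layout here is an assumption - confirm on your host.
get_vm_id() {
  vm_name="$1"
  awk -v name="$vm_name" '$2 == name { print $1 }'
}

# Hypothetical sample output, for illustration only:
SAMPLE="Vmid   Name         File
12     tenant-web   [datastore1] tenant-web/tenant-web.vmx
15     tenant-db    [datastore1] tenant-db/tenant-db.vmx"

VM_ID=$(printf '%s\n' "$SAMPLE" | get_vm_id "tenant-db")
echo "$VM_ID"   # -> 15

# On the ESXi host you would then run:
#   vim-cmd hbrsvc/vmreplica disable "$VM_ID"
```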
There are two things I want to discuss:
- Plugin Visibility
- Changes to the vCAv Configuration and how it relates to the Availability Plugin
When the provider configures vCAv for the first time and connects the vCD instance, vCAv immediately makes an API call to push the Availability plugin. The plugin is available to all tenants by default –
You have two options for configuring visibility of the Availability plugin: 1) in vCD 9.7, use the Customize Portal plugin, or 2) use API calls to restrict access.
For 9.7 installs, it’s fairly easy – go to Customize Portal -> select vCloud Availability -> Publish and remove the selected tenants:
Pre-9.7, we need to use the API to manage accessibility. My esteemed peer Chris Johnson recently did a writeup on managing access to vCAv 3.0, but my older vCAv C2C 1.5 article also applies.
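For reference, this is roughly what the API flow looks like using vCD's UI plugin extensibility endpoints. All of the values below (endpoint, token, plugin ID, org URN, API version header) are placeholders, and you should confirm the exact endpoints against your vCD version's API documentation before using them:

```shell
#!/bin/sh
# Hypothetical values - substitute your vCD endpoint, session token,
# plugin id (from the plugin listing call), and tenant org URN.
VCD="https://vcd.example.com"
TOKEN="your-session-token"
PLUGIN_ID="plugin-id-guid"
ORG_ID="urn:vcloud:org:yyyy"

# Body for the unpublish call: a JSON array of org references.
BODY=$(printf '[{"name":"tenant-org","id":"%s"}]' "$ORG_ID")
echo "$BODY"

# 1) List installed UI plugins to find the Availability plugin's id:
#   curl -k -H "x-vcloud-authorization: $TOKEN" \
#        -H "Accept: application/json;version=31.0" \
#        "$VCD/cloudapi/extensions/ui"
#
# 2) Remove the plugin from a specific tenant:
#   curl -k -X POST -H "x-vcloud-authorization: $TOKEN" \
#        -H "Accept: application/json;version=31.0" \
#        -H "Content-Type: application/json" -d "$BODY" \
#        "$VCD/cloudapi/extensions/ui/$PLUGIN_ID/tenants/unpublish"
```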
Changes to the vCAv Configuration
When the provider changes a vCAv service/system configuration, the plugin must also be updated with the new information. This is important, as the change could include a new tunnel address.
This is very easy – all we need to do is re-register the vCD instance from vCAv. When we re-authenticate, vCAv pushes the updated plugin to vCD.
Provider and Tenant Diagrams
Below are two diagrams that depict port communication for provider and tenant environments. These are very helpful for understanding what is required from an ingress and egress perspective –
Tenant – as discussed before, there is no need for a DNAT rule, as all traffic originates from the on-premises tunnel when communicating with the provider vCAv environment.
That’s all for now.