The intent of these videos is to discuss setting up Cross-VDC networking in vCloud Director while also having a candid conversation about what we’ve learned along the way from working with it. Quite frankly, it was an open discussion among the team on the inner workings of vCD/NSX and what our development team has done on the backend.
In the first video, we discuss the prerequisites that must be in place before we can start configuring vCloud Director for Cross-VDC networking. In essence, the assumption is that cross-vCenter NSX has already been established and that the primary and secondary NSX Managers are registered.
Next, we review the concept of creating a Datacenter group and the different egress options. This is very important, as it explicitly controls how traffic exits the overlay environment.
Here, we discuss how BGP weights control our active/passive egress points and what vCD automates in the backend. The key point is that none of this requires provider/tenant configuration – vCD automates the process.
As a final wrap-up of the BGP weights, we review creation of the stretched networks inside of vCloud Director along with operational management inside of the vCD H5 UI.
Last of all, we demonstrate testing of Cross-VDC networking and failover of my “Daniel-App” between the two sites. What’s interesting is the ability to migrate egress points without any loss of connectivity. Unplanned failover is managed by BGP weights; the default BGP timer is 60 seconds and can be revised if required.
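Conceptually, the weight-based egress selection can be sketched as follows. This is an illustrative model only, not actual NSX or vCD code; the site names and weight values are hypothetical:

```python
# Illustrative model of weight-based active/passive egress:
# the path with the highest BGP weight wins, and if the active
# path is withdrawn, traffic fails over to the next-highest weight.

def select_egress(paths):
    """Return the egress site with the highest BGP weight among available paths."""
    available = {site: weight for site, weight in paths.items() if weight is not None}
    return max(available, key=available.get)

# Hypothetical weights: Site-A active (higher weight), Site-B passive.
paths = {"Site-A": 200, "Site-B": 100}
assert select_egress(paths) == "Site-A"

# Withdraw Site-A (simulated failover): Site-B becomes the egress point.
paths["Site-A"] = None
assert select_egress(paths) == "Site-B"
```

This also illustrates why planned egress migration is lossless: the weights can be swapped while both paths are still advertised, whereas unplanned failover must wait out the BGP timer mentioned above.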
As stated before, this shows the requirement of having a mirror Edge configuration, especially for NAT configuration and failover testing between sites.
This was a fun experience with the team, reviewing and having open discussions on Cross-VDC networking. We hope these are valuable for those of you who are interested in bringing this to market as a new service inside of vCloud Director.
Recently, I received a request from one of our aggregators regarding how Equal Cost Multipathing (ECMP) is metered within the VMware Cloud Provider Program (VCPP) – specifically, Tom Fojta’s recommendation on architecting provider-managed NSX Edges and a Distributed Logical Router (DLR) in ECMP mode, illustrated by this diagram from Architecting a VMware vCloud Director Solution –
As shown in the diagram above – how does Usage Meter bill these tenant virtual machines (VMs) when we have a provider NSX architecture that utilizes ECMP?
For you TL;DR readers – any VM connected to a tenant Edge or direct network that has ECMP enabled northbound will be charged for NSX Advanced. Read on if you want to learn how this is done.
First off, let’s talk about why this matters. Per the Usage Meter Product Detection whitepaper (available on VMware Partner Central), we can see how Usage Meter detects specific NSX features based on the pattern of usage. Regarding dynamic ECMP, it is metered by the “Edge gateway,” which could be a little ambiguous. The upshot: if one utilizes ECMP, they are metered for NSX Advanced within VCPP.
One of the scenarios from the whitepaper does show ECMP-enabled Edges but not an Edge that is abstracted away from the provider environment –
My initial reaction was that Usage Meter would not look at the northbound provider configuration and the interconnectivity to vCloud Director. However, I was not confident and wanted to verify this explicit configuration and expected metering. Let’s review my findings.
In the above diagram, you can see that I created a similar provider-managed NSX configuration with ECMP enabled from the DLR to the two Provider Edges, with dynamic routing (BGP) enabled. From there, I expose a LIF/Logical Switch named “ECMP-External-Network” to vCloud Director, which is then exposed to my T2 organization as a new External Network.
From there, I created a dedicated Tenant Edge named “T2-ECMP-ESG” that will be attached to this newly created network along with a VM named “T2-ECMP-VM.” The goal is to verify how T2-ECMP-VM and T2-TestVM are metered by Usage Meter with this newly created Tenant Edge.
My Edges are set up for BGP and reporting the correct routes from the southbound connected DLR (and tenant Edges) –
From the DLR, we can see that I have two active paths to my Provider Edges (Provider-Edge-1 and 2) –
Next, my T2-ECMP-ESG is operational and attached to the newly created ECMP External Network –
Last of all, I have my VMs created and powered on (remember, Usage Meter only meters powered-on VMs). We can see T2-ECMP-VM is attached to an org routed network from T2-ECMP-ESG named “T2-ECMP-Network” –
Let’s work from north to south – starting with the Provider Edges, showing how Usage Meter detects and bills them.
Note – I have vROps Enterprise in my lab environment, so we will see Usage Meter picking up vROps registration and placing it in the appropriate bundle.
Provider Edges / DLR
As expected, the Provider Edges and DLR are detected, along with their registration to vROps. By design, NSX Edges are charged under the Advanced SP Bundle, as they are metered as a management component (minimum Advanced bundle / 7-point). However, in my case, we see detection and then registration to vROps Enterprise. Therefore, since it reports a bundle ID (BND) of 12, this correlates to the Advanced Bundle with Management (10-point) –
Tenant Edge – T2-ECMP-ESG
Just like the Provider Edges and DLR, we see T2-ECMP-ESG register to Usage Meter along with its vROps Enterprise registration. Same billing model as above.
Tenant VM – T2-TestVM
I would not expect any change for this VM, but I wanted to showcase that a VM behind a separate Edge with standard networking (i.e. no ECMP) bills at the NSX SP Base level. As expected, Usage Meter handled T2-TestVM just as anticipated – we can see registration, NSX SP Base usage, and registration to vROps Enterprise –
Tenant VM – T2-ECMP-VM
Finally, let’s take a look at my T2-ECMP-VM – as discussed before, this is wired to a Tenant Edge that is connected to the ECMP-enabled DLR via an External Network.
We see initial registration, registration to vROps Enterprise, then NSX Advanced usage! This would be metered at Advanced Bundle with Networking and Management due to the NSX Advanced usage (12-point).
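The decision logic inferred from these findings can be summarized in a short sketch. This is a simplification for illustration – the real Usage Meter logic is more involved – but the bundle names and point values are the ones observed above:

```python
# Point values observed in this walkthrough (per VCPP bundle definitions).
BUNDLE_POINTS = {
    "Advanced SP Bundle": 7,
    "Advanced Bundle with Management": 10,                  # vROps registration
    "Advanced Bundle with Networking and Management": 12,   # NSX Advanced + vROps
}

def metered_bundle(uses_vrops, uses_nsx_advanced):
    """Pick the bundle from detected feature usage (simplified inference)."""
    if uses_nsx_advanced and uses_vrops:
        return "Advanced Bundle with Networking and Management"
    if uses_vrops:
        return "Advanced Bundle with Management"
    return "Advanced SP Bundle"

# T2-ECMP-VM: vROps + NSX Advanced (ECMP northbound) -> 12-point bundle.
assert BUNDLE_POINTS[metered_bundle(True, True)] == 12
# A vROps-registered workload on standard networking -> 10-point bundle.
assert BUNDLE_POINTS[metered_bundle(True, False)] == 10
```

The takeaway is that the ECMP flag northbound, not anything configured on the tenant Edge itself, is what pushes a VM into the 12-point bundle.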
Summary of Findings
Here’s what we learned:
Edge/DLR Control VMs are not charged for NSX usage, since Usage Meter handles them as a management component. If you are using vROps, Usage Meter will place them in the most cost-effective bundle.
Utilizing ECMP at the provider level DOES impact the billing of any southbound connected VM, even if an Edge sits between the ECMP-enabled configuration and the tenant VM. Per the findings, NSX Advanced will be metered.
Therefore, be aware of how any NSX provider architecture decision and use of NSX-specific features affects tenant billing.
Again, this shows the logic inside of Usage Meter and how it relates to metering for tenant workloads. Cheers!
I was recently asked by a colleague if we have any existing collateral on VMware vCloud Director (vCD) that maps to the VMware Cloud Provider Program (VCPP) NSX levels that are currently available to partners. Well, there wasn’t, until now. 🙂
First, let’s talk about the NSX bundles inside of VCPP –
There are three levels identified within VCPP:
NSX-SP Base – this is your fundamental level of NSX. It includes the normal Edge services: Edge Firewall, NAT, Load Balancing, Dynamic/Static Routing, IPSEC/SSL VPN+, and Distributed Routing and Switching. This is typically referred to as “vCNS” mode (a callout to the old vCD days) but does use NSX.
NSX-SP Advanced – this includes Base plus ECMP and Distributed Firewall functionality. Service Insertion, AD Integrated Firewall, etc. are additional functions the provider can consume from backend management.
NSX-SP Enterprise – this includes Advanced plus HW VTEP integration, cross-vCenter NSX functionality, and the L2VPN (Remote Gateway) solution. The new addition here is vCD 9.5 Multi-Site and Cross-VDC capability.
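The three levels above are cumulative, which makes them easy to express as a lookup table. A quick sketch (feature names taken from the list above; the lookup helper is mine):

```python
# VCPP NSX levels and their headline features, as summarized above.
NSX_LEVELS = {
    "NSX-SP Base": [
        "Edge Firewall", "NAT", "Load Balancing",
        "Dynamic/Static Routing", "IPSEC/SSL VPN+",
        "Distributed Routing and Switching",
    ],
    "NSX-SP Advanced": [
        "ECMP", "Distributed Firewall",
        "Service Insertion", "AD Integrated Firewall",
    ],
    "NSX-SP Enterprise": [
        "HW VTEP integration", "Cross-vCenter NSX",
        "L2VPN (Remote Gateway)", "vCD 9.5 Multi-Site / Cross-VDC",
    ],
}

def minimum_level(feature):
    """Return the lowest VCPP level that includes a feature
    (levels are cumulative: Advanced includes Base, and so on)."""
    for level in ("NSX-SP Base", "NSX-SP Advanced", "NSX-SP Enterprise"):
        if feature in NSX_LEVELS[level]:
            return level
    raise KeyError(feature)

assert minimum_level("ECMP") == "NSX-SP Advanced"
assert minimum_level("Cross-vCenter NSX") == "NSX-SP Enterprise"
```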
Last of all, the “Convert to Advanced Gateway” inside of vCloud Director for organization Edges DOES NOT mean you will be using NSX Advanced right away! This is just a change in how vCD presents the Edge UI (with Advanced, it’ll show the H5) along with the API rights available. I demonstrate this in the above post too.
So let’s talk about the NSX levels and how they can pertain to vCD rights and role. I worked up the following roles in my vCD environment:
NSX SP-Base Rights
NSX SP-Advanced Rights
NSX SP-Enterprise Rights
So what does one gain when using these rights? Well, they are now aligned to the VCPP NSX bundles and can be utilized as a starting point for monetizing NSX inside of a vCD environment.
Now, in my experience, these specific vCD permissions map to the VCPP NSX levels as stated above. The big thing I found in my testing is that ECMP can be toggled alongside Static Routing, so I set all routing capabilities to “View Only” for SP-Base.
If you are using vCD 9.5, you could also create a rights bundle that is published to an organization and utilize Global Roles to make this *much* easier.
The steps for this would be: Creation of Rights Bundle(s) -> Publish to org(s) -> Creation of Global Roles -> Publish to org(s) -> Apply role to user(s)
Alright, here are the three exports for these rights. This is not comprehensive of all vCloud permissions required, but it gives you an idea of what to append to your existing roles (which can easily be done via the REST API). Note that the exports below reference my vCD instance (vcd-01a.corp.local) along with my org UUID, so replace these if you are doing a POST.
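As a rough sketch of what such a POST looks like when assembled programmatically – the endpoint path and payload shape below are assumptions for illustration only (verify them against the API guide for your vCD version), and the right names and org UUID are placeholders:

```python
# Hedged sketch: assembling a hypothetical role-creation request for vCD.
# Nothing here is an official payload format -- it only shows the pattern
# of substituting your own host and org UUID into an exported role.

def build_role_request(vcd_host, org_uuid, role_name, right_names):
    """Assemble a hypothetical POST for creating a role under an org."""
    return {
        "method": "POST",
        # Assumed admin API path; replace host and org UUID with your own.
        "url": f"https://{vcd_host}/api/admin/org/{org_uuid}/roles",
        "body": {"name": role_name, "rights": right_names},
    }

req = build_role_request(
    "vcd-01a.corp.local",                    # my lab instance -- replace
    "00000000-0000-0000-0000-000000000000",  # placeholder org UUID
    "NSX SP-Base Rights",
    ["Organization vDC Gateway: View"],      # illustrative right name
)
assert req["method"] == "POST" and "roles" in req["url"]
```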
In this post, I will be reviewing the necessary steps to support Cross-VDC networking inside of VMware vCloud Director 9.5. These are fairly straightforward, since they align to the standard requirements set forth by Cross-vCenter NSX.
Multi-Site management must be configured between the vCloud Director instances. I will try to add a post on establishing this at a later time.
Ensure you have a unique vCloud Director installation ID. If you have duplicate IDs, this can lead to MAC address conflicts. Fojta did a blog post on updating your ID – please accomplish this before continuing.
Cross-vCenter NSX Configuration
vCD 9.5 does require a standard Cross-vCenter NSX configuration implemented between the resource/payload vCenters before we can do any configuration at the vCloud Director level.
This can be a single or multi-SSO domain topology. In my environment, here’s what I’ve configured between my two sites: Site-A and Site-B.
From the Networking and Security plugin, I’ve assigned my Site-A NSX Manager while linking Site-B NSX Manager as the secondary instance –
From there, I need to establish my Universal Segment ID pool and Transport Zone.
Keep in mind you do not want to overlap with an existing Segment ID pool, so pick a number that’s high enough (or out of reach from other pools) –
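A quick way to sanity-check a proposed pool before committing it is a simple range-overlap test. The pool ranges below are hypothetical examples, not recommendations:

```python
# Sanity check that a proposed Universal Segment ID pool does not
# overlap any existing local pools (ranges are inclusive and hypothetical).

def overlaps(pool_a, pool_b):
    """Return True if two inclusive (start, end) segment ID ranges overlap."""
    return pool_a[0] <= pool_b[1] and pool_b[0] <= pool_a[1]

existing_pools = [(5000, 5999), (6000, 6999)]   # example local Segment ID pools
universal_pool = (900000, 909999)               # proposed universal pool

assert not any(overlaps(universal_pool, p) for p in existing_pools)
```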
From the Transport Zone screen, I’ve created my new Transport Zone named “Universal-TZ” –
Now I’m ready to connect it to my respective clusters on Site-A and Site-B. Keep in mind you need to use the NSX Manager drop-down and attach the respective cluster at your secondary (or additional) location.
That’s it! Onto the next configuration which is at the vCloud Director level.
vCloud Director Initial Provider Setup
In this step, we need to assign the correlated NSX Manager to each vCenter instance that’s participating in the Cross-VDC networking solution. I will be showing how I’ve done this for my two sites, Site-A and Site-B, while establishing a fault domain.
From my Site-A, navigate to System -> Manage & Monitor -> vSphere Resources -> vCenters –
We are going to right click, go to Properties of this vCenter –
From there, we need to go to the NSX Manager tab. This is where we populate the following:
Host/IP of NSX Manager
Control VM Resource Pool vCenter Path – this can be either the MOref object ID OR the ‘Cluster/RP’ path – I chose the former.
Control VM Datastore Name – full name
Control VM Management Interface Name – again, full name
Network Provider Scope – this is where we establish a fault domain. A Network Provider Scope can cover one or many vCenters in a single vCloud instance. However, when we establish the vdc-Group, we must have a minimum of two different/unique fault domains (i.e. Network Provider Scopes) inside the created vdc-Group.
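That last constraint is worth checking up front when planning scopes. A minimal sketch of the rule (site and scope names are the ones used in my lab; the validation helper is mine, not a vCD API):

```python
# Illustrative check: a vdc-Group needs at least two unique
# Network Provider Scopes (fault domains) across its participating sites.

def validate_vdc_group(site_scopes):
    """site_scopes maps site name -> Network Provider Scope string."""
    unique_scopes = set(site_scopes.values())
    if len(unique_scopes) < 2:
        raise ValueError("vdc-Group requires at least two unique fault domains")
    return unique_scopes

scopes = {"Site-A": "region-a", "Site-B": "region-b"}
assert validate_vdc_group(scopes) == {"region-a", "region-b"}
```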
Now, on my Site-B, I will configure my respective properties along with a Network Provider Scope of “region-b” –
Great! Next step is to add the Universal Transport Zone as a new network pool on each vCD instance. This is purely importing the created Universal-TZ and moving on, so very easy –
That’s it – now we are ready to enable a specific orgVDC for cross-VDC networking.
Enabling an orgVDC for Cross-VDC Networking
This is a very simple process – really just enable it on a per orgVDC basis.
Go to your orgVDCs and right click on the orgVDC you want to enable cross-VDC Networking on. For example, I am enabling this on my Daniel oVDC’s –
Click on the Network Pool and Services sub-tab and you’ll see a new checkbox below the Network Pool that states ‘Enable Cross VDC Networking (Using Network Pool “<Universal-TZ-Pool>”)’. Check this box.
This still allows for local oVDC network creation using the traditional network pool as stated in the screenshot above – this is not a complete conversion to the Universal Transport Zone.
Now, enabling this on my Site-B –
Permissions/Rights required for Cross-VDC Networking
As discussed in the previous blog post, there are specific rights and roles required for Cross-VDC networking that are not enabled by default for the organization administrator. Please review these before the tenant utilizes Cross-VDC networking.