This turned into a two-part vCD series since it was longer than I expected!
I had a question come in from a Cloud Provider about the actual key differences between a standard Edge Gateway and an Advanced Edge Gateway inside of the vCloud Director (vCD) User Interface (UI). While I could explain a few things on my own, I decided to do a little legwork to confirm my suspicions. Some of you may already know the following, but I thought this was an interesting exercise and wanted to share my results.
Before I get to that, I’m sure everyone is aware vCloud Director started off with vCloud Networking and Security (vCNS) as its network backing before NSX. In recent versions of vCloud Director, everything is backed by NSX.
With that said, the Advanced Gateway experience is what VMware will eventually migrate everything to. Therefore, get used to the nice, intuitive, and speedy HTML5 UI! 🙂
In my vCD 9.x instance, I have two edges deployed:
SiteB-T1-ESG is my advanced edge. I can verify this by right-clicking on the edge and seeing that I do not have the option to Convert to Advanced Gateway.
Moreover, you can see I am running version 9.x of vCD – I can convert it to a Distributed Logical Router!
However, with my SiteB-T1-ESG-2, I can see it’s not an Advanced Gateway since I’m still able to convert it.
Let’s get to the comparisons now. Again, this is in the context of the UI – I’m not going to talk about the API right now. I’ll state which gateway has the advantage for each service in its heading.
Firewall Services – Advantage: Advanced Gateway
I can create granular firewall rules using grouping objects associated with the HTML5 interface.
This provides a very similar experience to NSX within vCenter. To be honest, anyone that has used NSX should be able to figure this out very quickly.
From the standard interface, I can only create rules from an IP/CIDR and keywords such as “any,” “internal,” and “external.”
Pretty limited to say the least.
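To make the gap concrete, here is a minimal sketch of the two rule styles. The field names and validation are my own illustrative assumptions, not the literal vCD schema – the point is only that a Standard Gateway rule can reference just an IP/CIDR or a keyword, while an Advanced Gateway rule can reference named grouping objects such as IP Sets.

```python
# Illustrative sketch only – field names are assumptions, not the vCD API schema.
STANDARD_KEYWORDS = {"any", "internal", "external"}

def standard_rule(source, destination, action="accept"):
    """Standard Gateway style: only IP/CIDR strings or keywords are allowed."""
    for endpoint in (source, destination):
        if endpoint not in STANDARD_KEYWORDS and "/" not in endpoint:
            raise ValueError(f"Standard rules need a keyword or CIDR, got: {endpoint}")
    return {"source": source, "destination": destination, "action": action}

def advanced_rule(source_objects, destination_objects, action="accept"):
    """Advanced Gateway style: rules can point at named grouping objects (e.g. IP Sets)."""
    return {"source": source_objects, "destination": destination_objects, "action": action}

print(standard_rule("10.0.0.0/24", "external"))
print(advanced_rule(["ipset-web-tier"], ["ipset-db-tier"], action="deny"))
```

Trying to pass a grouping-object name like `ipset-web-tier` into the standard-style rule fails validation, which mirrors what the standard UI lets you type.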
DHCP Services – Advantage: Advanced Gateway
From the DHCP subtab, I am able to establish pools, bindings, and relay configurations, and I can also configure IP Sets and DHCP Relay Agents.
On the Standard Gateway, we have the ability to add a DHCP pool that’s applied to an internal network connected to this ESG. Pretty basic capabilities, but it works.
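A quick sketch of the kind of pool you would attach to an internal network on the edge. The field names are illustrative assumptions rather than the exact vCD/NSX DHCP schema; the `ipaddress` check just mirrors the sanity-checking the UI does on a range.

```python
import ipaddress

# Hedged sketch – keys are illustrative, not the real vCD/NSX DHCP schema.
def dhcp_pool(network_cidr, range_start, range_end, default_gateway, lease_seconds=86400):
    """Build a DHCP pool for an internal network, validating the addresses fit the subnet."""
    net = ipaddress.ip_network(network_cidr)
    for ip in (range_start, range_end, default_gateway):
        if ipaddress.ip_address(ip) not in net:
            raise ValueError(f"{ip} is outside {network_cidr}")
    return {
        "ipRange": f"{range_start}-{range_end}",
        "defaultGateway": default_gateway,
        "leaseTime": lease_seconds,
    }

pool = dhcp_pool("192.168.100.0/24", "192.168.100.50", "192.168.100.99", "192.168.100.1")
print(pool)
```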
NAT Services – Advantage: Tie
Both gateways offer the ability to establish Destination or Source NATs. I see the same options between the Advanced and the Standard gateway, so it’s hard to call an advantage either way.
As with the Advanced Gateway, I have the ability to establish a DNAT or SNAT on the Standard Gateway. Seems like the same options to me.
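Since the options look identical on both gateway types, one builder covers them. Again, a hedged sketch – the keys are my own illustrative names, not the vCD NAT schema.

```python
# Illustrative sketch – field names are assumptions, not the vCD API schema.
def nat_rule(rule_type, original_address, translated_address,
             original_port="any", translated_port="any", protocol="any"):
    """Build a NAT rule; the same shape applies to Standard and Advanced gateways."""
    if rule_type not in ("DNAT", "SNAT"):
        raise ValueError("rule_type must be 'DNAT' or 'SNAT'")
    return {
        "ruleType": rule_type,
        "originalAddress": original_address,
        "translatedAddress": translated_address,
        "originalPort": original_port,
        "translatedPort": translated_port,
        "protocol": protocol,
    }

# DNAT: publish an internal web server; SNAT: let the internal subnet egress.
dnat = nat_rule("DNAT", "203.0.113.10", "172.16.20.5",
                original_port=80, translated_port=8080, protocol="tcp")
snat = nat_rule("SNAT", "172.16.20.0/24", "203.0.113.10")
print(dnat["ruleType"], snat["ruleType"])
```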
Routing Services – Advantage: Advanced Gateway
This is a night and day difference in routing options. I’m able to get an NSX-like experience from an HTML5 interface (which has been around for over a year now!).
I have the ability to set ECMP, Router IDs, and utilize OSPF, BGP, and Route Redistribution with prefixes to boot.
If you’re used to NSX and applying routing configurations to an Edge, this is a very similar experience.
On the Standard Gateway? How do static routes sound to you? That’s all I can apply here from the UI.
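The gap is easy to show side by side: a static route is the whole story on the Standard Gateway, while the Advanced Gateway adds dynamic routing on top. Field names below are illustrative assumptions, not the real edge configuration schema.

```python
# Hedged sketch of the routing gap – keys are illustrative, not the NSX/vCD schema.
def static_route(network, next_hop):
    """All the Standard Gateway UI offers: destination prefix plus next hop."""
    return {"network": network, "nextHop": next_hop}

def bgp_config(local_as, router_id, neighbors, ecmp=True, redistribute=("connected",)):
    """Advanced Gateway style: BGP neighbors, ECMP, and route redistribution."""
    return {
        "localAS": local_as,
        "routerId": router_id,
        "neighbors": [{"ipAddress": ip, "remoteAS": asn} for ip, asn in neighbors],
        "ecmp": ecmp,
        "redistribution": list(redistribute),
    }

print(static_route("0.0.0.0/0", "10.10.10.1"))
print(bgp_config(65001, "10.10.10.2", [("10.10.10.1", 65000)]))
```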
Load Balancing Services – Advantage: Advanced Gateway
The Advanced Gateway is very similar to what we see in NSX – just in an HTML5 format.
We get to see our Global Configuration, Application Profiles, Monitoring, Rules, Pools and Virtual Servers.
I also see additional algorithms available from an LB perspective. I wouldn’t say it’s a stark difference between Advanced and Standard, but it is more comprehensive than the Standard Gateway.
The Standard Gateway has very similar options to the Advanced UI, just in a different format. As stated above, we don’t have UDP available as a type and have fewer algorithms for the Pool configuration. With that said, it’s very comparable, but I’m giving a slight advantage to Advanced for some of the other options available.
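A small sketch of the “fewer algorithms” point. The algorithm sets below are my own illustrative assumptions (NSX-style names), meant only to show the Advanced Gateway exposing a larger set than the Standard Gateway – not an authoritative list of either.

```python
# Hedged sketch – algorithm sets are assumptions, not an authoritative list.
STANDARD_LB_ALGORITHMS = {"round-robin", "ip-hash", "leastconn"}
ADVANCED_LB_ALGORITHMS = STANDARD_LB_ALGORITHMS | {"uri", "httpheader", "url"}

def lb_pool(name, algorithm, members, advanced=True):
    """Build an LB pool, rejecting algorithms the gateway type doesn't expose."""
    allowed = ADVANCED_LB_ALGORITHMS if advanced else STANDARD_LB_ALGORITHMS
    if algorithm not in allowed:
        raise ValueError(f"{algorithm!r} not available on this gateway type")
    return {"name": name, "algorithm": algorithm, "members": list(members)}

pool = lb_pool("web-pool", "uri", ["172.16.20.5:80", "172.16.20.6:80"])
print(pool["algorithm"])
```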
So the VMware Cloud Provider BU has dropped the next release of VMware vCloud Director – version 9.1.
Release notes are posted here, but I’d like to summarize some of the great additions to vCD. I’ll probably miss a few things, but the below is what’s very interesting and shows the power of vCloud Director as we expand the platform.
Continued HTML5 transition – while this is a multi-phased approach, the following has been accomplished in this release, much of it on the tenant side. The next release will focus on finishing the Provider side of vCD management.
No more Client Integration Plugin (CIP) needed for upload management – yay!!! Native ability to upload OVF/OVAs from the HTML5 UI.
Multi-Site Navigation Portal – check it out. Very clean looking. I can also provide the organization association through the portal.
Create VM or vApp – nice simple workflows
Standalone VMRC Availability – a great addition compared to the previous console access, which was always a pain. 9.0 introduced the HTML5 VM Console, and now we have standalone VM Remote Console support in 9.1. Again, no need for the vSphere CIP anymore with the HTML5 portal.
vRealize Orchestrator Integration – in my opinion, a great addition to the vCD platform. We can now provide direct vRO integration with vCD to kick off workflows, all done through the Content/Service Library.
Python SDK and vcd-cli – embracing the automation community. The SDK supports automation with Python, and the CLI enables Providers and tenants to integrate and operate services within vCD. All open-sourced. Check it out here: http://vmware.github.io/pyvcloud/
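Under the hood, both pyvcloud and vcd-cli speak the versioned vCloud REST API. As a hedged, standard-library-only sketch, here is roughly what a session login request looks like – the host, org, and credentials are placeholders, and the request is only constructed, never sent. (vCD 9.1 corresponds to API version 30.0, which ties in with the API deprecations noted below.)

```python
import base64
import urllib.request

# Placeholders – substitute your own environment; pyvcloud/vcd-cli wrap all of this.
VCD_HOST = "vcd.example.com"
USER, ORG, PASSWORD = "admin", "acme", "secret"

# vCD authenticates sessions with HTTP Basic auth as user@org.
token = base64.b64encode(f"{USER}@{ORG}:{PASSWORD}".encode()).decode()

# The Accept header pins the API version; 30.0 maps to vCD 9.1.
req = urllib.request.Request(
    f"https://{VCD_HOST}/api/sessions",
    method="POST",
    headers={
        "Accept": "application/*+xml;version=30.0",
        "Authorization": f"Basic {token}",
    },
)
print(req.get_method(), req.full_url)
```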
Container Service Extension – vCD now supports lifecycle management of K8s clusters through the Container Service Extension (CSE). K8s cluster nodes are treated the same way as VMs – one platform for both VMs and containers. It’s documented on the GitHub page here: https://vmware.github.io/container-service-extension/
Support for SR-IOV / NFV Requests – this is a big item for our NFV friends, especially for guaranteeing network resources for low-latency workloads. To add to this, we also added support for Large Page VMs and guaranteed latency sensitivity for specific VMs.
FIPS Mode for NSX – FIPS was introduced in NSX 6.3.0, and now we have the ability to toggle it within the vCD UI on a per-edge-gateway basis. Obviously, you must be running NSX 6.3.x or later for this to work.
Topics of Interest
Moving to Cassandra 3.x for metric data – legacy upgrades using KairosDB still have Cassandra 2.2.6 support. Be aware of this for new installations.
End of Support for Oracle Database – 9.1 will be the last release to support Oracle databases. I don’t see Oracle that often, but be aware of this for future releases!
I would also advise all of my Providers to get used to PostgreSQL as the database option. We are trying to simplify vCD further… hint, hint.
End of Support for vCloud API 1.5 and 5.1 – if you are using the 1.5 or 5.1 API for any API calls, it will not work in 9.1. Ensure you are changing any code before upgrading to vCD 9.1. Moreover, any API versions earlier than 20.0 will not be supported in future releases, so plan accordingly.
Note that the SP Admin HTML5 UI is still underway. You will continue to use the Flex UI for everything except vRO registration and content library creation.
There will be a patch release for Usage Meter and vCloud Director Extender shortly to support this release. Please be aware of this before any upgrades.
Another solid release from our team. I look forward to seeing this in production at our Cloud Providers!
I recently created a vCloud Availability demonstration video and wanted to share this out with others.
vCloud Availability (vCAv) for vCloud Director is a very powerful solution that provides Disaster Recovery as a Service (DRaaS) built on top of vCloud Director. What’s great about vCAv is the ability to protect, migrate, and fail over workloads from a tenant environment directly from vCenter.
vCAv utilizes vSphere Replication as its replication engine, while our Cloud Provider Business Unit built the architecture around vCD to provide scalable multi-tenancy. Granular, per-VM selection is possible, with protected VMs correlating to a DR-enabled VDC.
For VMware Cloud Providers that are interested in further details, here are some good links to start on vCAv:
I’ve been rebuilding my vCloud Director (vCD) lab and running through a few connectivity scenarios. Moreover, I wanted to write and share my findings on orchestrating an on-prem NSX environment connecting to a vCD/Provider environment using vCloud Director Extender (VXLAN to VXLAN). In vCD Extender, this is also known as a DC Extension.
To back up, let’s talk about how NSX provides flexible architecture as it relates to Provider scalability and connectivity. My esteemed colleagues wrote some great papers for our vCloud Architecture Toolkit (vCAT):
Before I get started, I also think this is a good guide for planning out VXLAN <–> VXLAN VPN connectivity.
Let’s look at my lab design from a network / logical perspective –
As you can see above, I have my Acme-Cloud organization available on the left with a single VM in the 172.16.20.x/24 network that’s running on NSX/VXLAN.
On the right, we have “Acme DC” that’s also using NSX and has a logical switch named “Acme-Tier” with the same network subnet.
The orange Extender Deployed L2VPN Client is what’s deployed by vCD Extender on tunnel creation. We’re going to walk through how Extender creates this L2VPN tunnel within an on-prem NSX environment.
This is very similar to my warm migration setup, so I’m going to try not to duplicate material.
I have my Acme-Cloud-Tier Org VDC Network that was converted to a Subinterface inside of vCD:
We can see in the Edge Services, my L2VPN Server has been set up with a default site configuration. However, vCD Extender creates its own site configuration –
Extender generates a unique ID for the username and password. This is done when the DC Extension is executed by the tenant. I also established the egress optimization gateway address for local traffic.
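A hedged sketch of what Extender appears to do at this step: mint a unique per-site credential pair and register a site configuration on the L2VPN server, with the egress optimization gateway keeping local traffic local. The name prefix and field names here are hypothetical placeholders, not Extender’s actual identifiers.

```python
import secrets

# Hedged sketch – the "mcxt-site-" prefix and keys are hypothetical, not
# Extender's real naming; only the shape of the workflow is from the post.
def l2vpn_site_config(tenant, egress_gateway):
    """Generate a per-tenant L2VPN site config with unique credentials."""
    site_id = secrets.token_hex(8)
    return {
        "name": f"mcxt-site-{tenant}-{site_id}",
        "user": f"user-{site_id}",
        "password": secrets.token_urlsafe(16),
        # egress optimization: route traffic for this gateway out the local interface
        "egressOptimizationGateway": egress_gateway,
    }

site = l2vpn_site_config("acme", "172.16.20.1")
print(site["name"])
```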
Tenant – vCD Extender Setup
Before we can create a Data Center Extension, we need two required fields for NSX deployments.
First, we need to give the required information to successfully deploy a standalone Edge that will be running our L2VPN client service. This would include the uplink network along with an IP address, gateway, and VMware-specific host/storage information.
Second, we need to provide the required NSX Manager information. I’m sure this is to make the API calls required to deploy the Edge device(s) to the specified vCenter.
Once the DC Extension has been created, we see a new Edge device under Networking & Security.
Tenant – DC Extension (L2VPN) Execution
So what happens when we attempt to create a new DC Extension (or L2VPN Connection)? A few things:
Creation of our trunk port for our specified subinterface
Deployment of the new Edge device that will act as the L2VPN Client
Reconfiguration of the trunk port (uses mcxt-tpg-l2vpn-vxlan-** prefix)
Allowing NSX to do its magic along with L2VPN
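The four steps above can be sketched as an ordered workflow. The step descriptions are placeholders for the NSX API calls vCD Extender makes on the tenant’s behalf – not real endpoints.

```python
# Hedged sketch – each step would be an NSX REST call in the real workflow.
def create_dc_extension(subinterface, logical_switch):
    """Replay the DC Extension creation sequence in order."""
    steps = [
        f"create trunk port for subinterface {subinterface}",
        "deploy standalone edge as L2VPN client",
        "reconfigure trunk port (mcxt-tpg-l2vpn-vxlan-** prefix)",
        f"establish L2VPN tunnel carrying {logical_switch}",
    ]
    completed = []
    for step in steps:
        completed.append(step)  # the real workflow records these as vCD tasks
    return completed

tasks = create_dc_extension("Acme-Cloud-Tier", "Acme-Tier")
print(len(tasks))
```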
We can see within my task console what happened –
Voilà, we have a connected L2VPN tunnel. As you can see, the blue “E” indicates that we have a local egress IP set. I did this since I wanted to route traffic to its local interface for traffic optimization.
So, what happens in the background? Well, let’s take a look at the Edge device. We can see the trunk interface was created while the subinterface is configured to point to my logical switch “Acme-Tier.”
Last of all, the L2VPN configuration was completed and established as the Client. We can now see the tunnel is alive too.
From the main vCD Extender page, we can also see traffic utilization over the tunnel. Pretty nice graph!
Just a quick ping test, yep! WebVM can access WebVM2.
In summary, NSX-to-NSX DC Extensions within vCD Extender work much like Provider/VXLAN to On-Prem/VLAN. The key difference is that the on-prem vCD Extender deploys the embedded Standalone Edge to vCenter.