vCD 10.0 continues to demonstrate and focus on the following objectives:
A Service Delivery Platform – rapidly deploy and integrate services into a multi-tenant, scalable platform alongside ecosystem partners.
Multi-Workload and Extensible Capabilities – it’s not about traditional VMs anymore. Distinct additions such as native PKS integration and next-generation virtual network services are integrated out of the box.
Intuitive User Experience – while the underlying technology can be complex, vCD orchestrates and automates many of the backend infrastructure tasks needed to operationalize services. With the new H5 interface, this is very evident.
I’m going to summarize a few items that are pertinent to this release. However, this is not a comprehensive list by any means. The goal of this post is to provide an overview of why each of these additions/updates matters to our service providers and consumers.
Disclaimer: while vCD 10.0 is announced as of today, features might change before the general availability (GA) date. I’ve been working with a release candidate build for my review; therefore, this is subject to change.
Recently, I’ve been spending time reviewing new functionality inside of VMware vCloud Director (vCD) 9.7, specifically Edge Clusters. Edge Clusters provide distinct capabilities to control tenant Edge placement while achieving a higher level of availability. While Edges are a distinct function of NSX that controls traffic entering and leaving the NSX environment, vCD can provide a significant level of additional functionality.
Abhinav Mishra and I have spent some time writing about the rationale, implementation, migration, and design decisions in regards to Edge Clusters in version 9.7. Below are the links to each of these respective blog posts:
Currently, I am working on some overall design content for Edge Clusters inside of VMware vCloud Director 9.7. However, I wanted to share a step-by-step guide on establishing an Edge Cluster inside of vCD. I will have much more to share on our corporate blog shortly, but this should start some thoughtful discussions.
Quick Intro to Edge Clusters
So what’s the deal with Edge Clusters? Edge Clusters now give a provider discrete control over tenant Edge placement. Previously, this was rather limited and controlled only at the Provider Virtual Data Center (pVDC) layer. With Edge Clusters, we can now establish this on a per-org VDC basis. In essence, the three main value points of Edge Clusters are:
Consumption of dedicated Edge Clusters for North/South traffic – optimized traffic flow while minimizing the span of Layer 2 broadcast domains
A higher level of availability for Edge nodes, which can fail independently between two clusters
The ability to balance organization Edge services between multiple Edge Clusters – I do not have to use the same Primary and Secondary Edge Cluster for every org VDC; this can be configured on a per-org VDC basis
Below is an overall high-level design of Edge Clusters from a physical and logical layer –
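To make the placement model concrete, here is a minimal Python sketch of the per-org VDC Primary/Secondary assignment described above. This is an illustration only – the class and field names are hypothetical, not actual vCD 9.7 API objects – but it shows how each org VDC can carry its own Edge Cluster pair and how placement falls back to the Secondary when the Primary fails.

```python
# Hypothetical model of per-org VDC Edge Cluster placement.
# Names are illustrative only; this is not the vCD 9.7 API.
from dataclasses import dataclass

@dataclass
class EdgeClusterPlan:
    org_vdc: str
    primary: str    # Edge Cluster used for North/South traffic by default
    secondary: str  # Edge Cluster used if the primary cluster fails

    def placement(self, primary_healthy: bool) -> str:
        """Return the Edge Cluster a tenant Edge would be placed on."""
        return self.primary if primary_healthy else self.secondary

# Each org VDC can use a different Primary/Secondary pair:
plans = [
    EdgeClusterPlan("ovdc-daniel",  primary="edge-cluster-1", secondary="edge-cluster-2"),
    EdgeClusterPlan("ovdc-tenant2", primary="edge-cluster-2", secondary="edge-cluster-1"),
]

for plan in plans:
    print(plan.org_vdc, "->", plan.placement(primary_healthy=True))
```

Note how the two org VDCs invert their Primary/Secondary pair – that is the balancing point above: providers are not forced to use the same pair for every org VDC.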
I get this question quite a bit due to the new vCloud Director 9.5 Cross-VDC networking functionality – does vCloud Availability for Cloud-to-Cloud 1.5 (C2C) work with stretched networking inside of Cross-VDC networking?
The answer is: yes!
This is another great addition for recoverability considerations as one could fail over between vCloud Director instances without modifying the guest OS IP address. Furthermore, based on the application architecture, one could have active-active applications and ensure replication/failover in the event of a disaster.
Let’s go through my example high-level design I’ve worked up in my lab –
In the above diagram, we can see I have two active vCloud Director instances, Site-A and Site-B, and two organizations: “Daniel,” which resides in Site-A, and “Daniel-B,” which resides in Site-B.
C2C is deployed on each site in the combined model, and I have multi-site pairing completed so I can easily manage replication between my two sites –
Within my Cross-VDC networking setup, I currently have my active egress setup to Site-A as depicted in the diagram above.
Last of all, I ran a protected workflow from Site-A to Site-B for my Stretched-vApp-VM –
From there, one can either migrate or fail over the workload without any guest OS IP changes. I am going to do a video shortly, but here’s a short GIF I created that shows the ease of failing over between my Site-A and Site-B –
After failover, I can then access Stretched-vApp-VM from the new NAT address on Site-B.
An Organization Administrator could also configure active/active or active/passive egress points for additional resiliency. This provides further capability inside of vCloud Director, especially with stretched networking and a complementary availability solution.
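As a rough illustration of those egress choices (again, illustrative Python only, with hypothetical names – not vCD code), an active/passive configuration routes all North/South traffic through one preferred site until it fails, while active/active allows egress from every healthy site:

```python
# Illustrative-only model of Cross-VDC egress selection; not the vCD API.
def select_egress(sites: dict, mode: str) -> list:
    """Return the site(s) currently allowed to egress North/South traffic.

    sites: mapping of site name -> True if healthy (preferred site first)
    mode:  "active-passive" uses the first healthy site in preference order;
           "active-active" uses every healthy site.
    """
    healthy = [name for name, up in sites.items() if up]
    if mode == "active-active":
        return healthy
    return healthy[:1]  # active/passive: one egress point at a time

# Site-A is the preferred egress; after a Site-A failure, Site-B takes over.
print(select_egress({"Site-A": True,  "Site-B": True}, "active-passive"))  # ['Site-A']
print(select_egress({"Site-A": False, "Site-B": True}, "active-passive"))  # ['Site-B']
print(select_egress({"Site-A": True,  "Site-B": True}, "active-active"))   # ['Site-A', 'Site-B']
```

The second call mirrors the failover walked through above: once Site-A is unavailable, egress (and the NAT address the workload is reached on) moves to Site-B without touching the guest OS IP.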