VMware Cloud Director 10.1 is now out! I’d like to highlight ten (technically, 10.1 🙂 ) items that showcase some of the new functionality within this release. This is by no means a complete list; these are simply the top-of-mind items I’ve interacted with or reviewed.
New Name!
Let’s cover the “.1” of my list – the name change. When logging into VCD, the little “v” is dropped from vCloud. Going forward, we are not using vCloud Director; it will be known as Cloud Director:
This also aligns naming to our upcoming VMware Cloud Director Service that will be available shortly.
Since we are all fans of acronyms, I plan on using VCD to refer to a provider-managed instance, while VCDS refers to the service instance.
HTML5 User Interface
That’s it for the Flex UI; it’s gone. The code has been removed from the 10.1 release and cannot be enabled or re-added.
First, we are now at what I consider functional parity with the Flash interface. The Provider interface has been enhanced for a more intuitive experience as well. We can see that Cloud Resources and the backing vSphere Resources are under a sub-menu:
Within a specified object, we can see that sub-menu options are also embedded as a vertical list:
Many of the wizard workflows have also been rethought to ensure one can understand VCD concepts –
From a Tenant perspective, one can see significant changes to the vApp (or Virtual Applications) and VM sub-menu:
The card layout makes it very intuitive to see what’s available and carry out typical service operations. From a UI extensibility perspective, Badges are available in 10.1 –
This is a great way of categorizing specific workloads. I’m looking forward to further integration from our solutions and ecosystem partners on what’s possible here.
There’s also a multi-select option for VMs or vApps, which is an awesome addition. This works in the card or list view:
From the vApp sub-menu perspective, we can also see that a vApp Network Diagram is available –
Last of all, there’s much more I’m not covering here. However, these were the pertinent items I wanted to call out.
vApp and Importing User Enhancements
This was a missing option from the 10.0.x H5 menu – the ability to import an existing VM or vApp from a backed vCenter. While one could import using the “Adopt-a-VM” behavior, this is a more intuitive and direct method.
From the UI, we have the ability to “Import from vCenter” vApp Templates or an existing vApp –
Within the import operation, the provider administrator can select the VM along with its target oVDC and Storage Policy –
NSX-V to NSX-T VCD Migration Tool
Next, let’s discuss the first iteration of the NSX-V to NSX-T migration tool for VCD. I’m sure Tomas Fojta will have more on this as he worked closely with the team developing this migration capability.
This initial tool allows a provider to migrate from a -V backed oVDC/pVDC to a -T backed oVDC/pVDC destination. This is a scripted migration and utilizes an NSX-T edge cluster that operates in bridge mode for the migration scenario. Below is a diagram that depicts the workflow –
This migration is done through a YAML input that includes the source and destination properties as shown below –
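To give a rough sense of its shape, a migration input file might look something like the sketch below. The key names here are hypothetical placeholders, not the tool’s actual schema; consult the migration tool’s documentation for the real format.

```yaml
# Hypothetical sketch of a V-to-T migration input file.
# All key names are illustrative only; check the official tool docs.
VCloudDirector:
  ipAddress: vcd.example.com          # source VCD instance
  username: administrator
NSXV:
  ipAddress: nsxv-mgr.example.com     # source NSX-V manager
NSXT:
  ipAddress: nsxt-mgr.example.com     # destination NSX-T manager
  edgeClusterName: bridge-edge-cluster  # edge cluster used in bridge mode
source:
  orgVDC: AcmeOrgVDC-V
destination:
  providerVDC: AcmePVDC-T
```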
We also have the ability to “rollback” to the original -V backed state in the event there’s an issue. There are many considerations with this initial release of the NSX migration tool, so definitely review the notes and understand the current supported (and unsupported) VCD networking topologies.
NSX-T UI Additions
Progress has been made in VCD 10.1 with regard to NSX-T user interface and API capability. The goal of this release was to have “base” NSX-T functionality within the user interface. This consists of:
- IPsec VPN UI support (API was available in 10.0)
- Dedicated External Network
- BGP and Route Advertisement
- NSX-T Edge Cluster Manageability
- Attach and Detach Networks
There are several other enhancements, but I will focus on the following list for now.
IPsec VPN
First, let’s discuss IPsec VPN. Within 10.1, we now have the ability to configure an IPsec VPN tunnel via the UI. Within the enhanced UI, we can see it in the sub-menu –
VPN tunnels will terminate on the T1. We also have the ability to create a split tunnel configuration. This is done via a NAT on the T1, or orgEdge, instance.
Dedicated External Network
The next addition is what we call the Dedicated External Network capability. In NSX-T terms, this is the ability to dedicate a T0 to a specific organization and make it part of that tenant’s entire networking stack.
The value of a Dedicated External Network is the ability to configure a multitude of upstream or northbound connectivity options. This could range from a dedicated MPLS connection to a remote data center interconnect. NSX-T design considerations are different than with -V, hence the transition to dedicating T0s.
Let’s start off with a diagram of what a shared model looks like.
As one can see, the T0 router is shared between organization VDCs, with the T0-to-T1 routing auto-plumbed. From there, organizations can create routed networks that route to their T1.
Within a dedicated model, we are exposing the T0 directly to the T1 –
We can see that Acme and Bravo are dedicated to a specific External Network, or Tier-0. From there, the provider can configure any northbound requirements that each tenant (organization) might require.
The provider instantiates this by creating a T0 within NSX-T and adding it as an External Network. From there, the provider could convert an existing org Edge (T1) to this new dedicated T0, or create a new org Edge.
In my environment, I have a toggle available to dedicate an existing Edge to this External Network –
Or we now have the ability to dedicate it during the new Edge Gateway creation process –
Under External Networks, the provider sees which External Networks are configured as “Dedicated” by the middle column –
BGP and Route Advertisement
Next, BGP peering is now available within the UI for self-serviceability. This requires a dedicated T0, or Dedicated External Network, for one to see this new UI functionality.
Within my lab environment, I created a new edge that was dedicated to a newly created T0 –
From the Routing section, we now have the ability to configure BGP peering, neighbors, and IP Prefix Lists.
We can also advertise routed networks. These advertisements propagate up to the T0, which allows for a fully routed network; this is especially helpful when connecting to a northbound MPLS circuit. You also have the ability to specify specific CIDRs –
One can also view advertisements within the Networks pane (see Public-Network) –
NSX-T Edge Cluster Manageability
Within the Edge Gateway configuration, the provider has the ability to explicitly select a NSX-T Edge Cluster that’s available to the northbound T0 –
This is a nice addition for flexible connectivity options.
Attach and Detach Networks
This is a great addition – the ability to detach (or attach) a network from an Edge Gateway. This functionality now exists within -T and -V backed Edges –
VCD Appliance Changes
Let’s talk about my favorite topic – the VCD appliance! Three things I am going to discuss:
- Automatic Recovery
- API Endpoint
- Appliance UI Enhancements
Automatic Recovery
Within this release, we now have the ability to failover to a standby instance. From the documentation, there’s a great diagram that reviews the capability between Manual and Automatic:
By default, the initial failover mode is set to manual. The provider will need to configure the automatic mode using the Appliance API, which we will cover next.
API Endpoint
There is now an API available from the VCD appliance. This is listed under:
https://vcd-fqdn:5480/api/1.0.0/
This uses standard basic authentication with the root account.
Doing a GET to /nodes, I can see I have my single instance in manual mode –
I haven’t gone through the API extensively, but the provider can use the API to promote a standby instance, review storage stats, and so forth.
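To illustrate, a GET against the /nodes endpoint can be sketched with Python’s standard library. The hostname and password below are placeholders; the port, path, and root-account basic auth follow what’s described above.

```python
import base64
import urllib.request


def build_nodes_request(appliance_fqdn: str, password: str) -> urllib.request.Request:
    """Build a GET request for the appliance API /nodes endpoint,
    authenticating as the root account via HTTP basic auth."""
    url = f"https://{appliance_fqdn}:5480/api/1.0.0/nodes"
    token = base64.b64encode(f"root:{password}".encode()).decode()
    req = urllib.request.Request(url, method="GET")
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/json")
    return req


# Sending the request requires a reachable appliance (and, with a
# self-signed certificate, an SSL context that trusts it):
# with urllib.request.urlopen(build_nodes_request("vcd.example.com", "secret")) as resp:
#     print(resp.read().decode())
```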
Appliance UI Enhancements
From the VAMI UI, we can see that we now have the status of the database along with services within this specific instance –
Encryption Enhancements
Next, let’s discuss encryption capabilities now with VCD 10.1. While VCD has been able to consume underlying storage policies, this could be problematic when using vCenter-aware encryption capabilities.
Within this release, we now have the capability to encrypt VMs and disks from a supported vCenter instance. This new capability allows for the following profile offerings:
- Provider-Managed VM Encryption – all org workloads are encrypted by the selected storage profile which meets internal security/regulatory requirements
- Provider-Managed Tenant Selectable VM Encryption – the provider can publish a multitude of storage policies, including an encrypted policy. Organization users can then select the encryption policy that meets their requirements.
- Dedicated vCenter/pVDC Encryption – Provider can instantiate a dedicated vCenter and work with the tenant on management of the Key Management System (KMS) that is required for vCenter-level encryption.
This is an area I need to set up further, but I did see how the VM-encryption shows up within the VCD UI when I reviewed one of our videos:
Global Rights and Roles Enhancements
I realize this might be minor, but I consider this a major addition – we now have the ability to clone roles, Global Roles, and Rights Bundles!
During the Cloning operation, the user has the ability to modify the selected rights before creation –
If you’re new to Rights Bundles, please check out Jeff’s post on this topic on our corporate site.
Central Point of Management Additions
Next, let’s discuss Central Point of Management, or CPoM. With this release, we are awaiting approval to publish this new extension to the Google Chrome marketplace.
The premise of the browser extension is to improve the tenant experience as it relates to the proxy configuration. When using this Google Chrome extension, one does not need to configure the proxy details manually.
This extension provides the capability to retrieve the proxy and tenant login information, provide the optimal configuration based on the H5 UI, and interoperate with multi-site deployments and different versions of VCD.
I loaded up the current build of the extension on my Google Chrome browser. One can see a different certificate message (I am using a self-signed cert for this VCSA) along with the “cloud” icon in the top right corner –
Once I trusted the instance, the blue bar was removed –
One can also see that there’s no proxy configuration anymore as we are using this extension –
Awesome addition to allow for secure connectivity to a dedicated vCenter instance.
Local Data Center Group Capability
This is a nice new addition to Cross-VDC networking capability with NSX-V.
Previously, Cross-VDC networking required two distinct fault domains while utilizing Cross-vCenter NSX. While VCD would automate the creation of the universal objects within this context, Cross-vCenter NSX has additional requirements that might not meet provider or tenant requirements.
With that, let’s review the diagram I created to depict this –
The team is introducing this “Local Group” concept to provide further network availability between organization VDCs within a single vCenter. This is a great addition for metro-clusters that constitute one or many clusters within the same vCenter instance.
As stated above, this is only available within NSX-V as of today.
High-Level Setup
This will probably require a larger post at a later time, but the setup instructions are fairly similar to Cross-VDC networking, just without the Universal setup.
One still needs to set up the Network Provider Scope and select the Cross-VDC Networking switch. From there, we require the resource pool path, datastore name, and management interface for the control VM –
From there, the provider (or tenant) can create the local VDC group by following the wizard –
Once the local VDC group has been established, we can now configure our egress points –
I will now select my first edge as the active egress point –
While I did see new tasks stating the creation of universal objects, that was not actually the case…
I can see new -V networks created within my backed vCenter –
Along with a DLR Control VM –
From there, we are ready to create our first “stretched” network –
I can see this depicted within my backed vCenter –

Along with being available in my joined oVDCs –
Definitely more to come here.
SSL and Certificate Management
Last of all, Certificate management is changing in 10.1 to make this more intuitive and have a central store going forward. I’ve written quite a bit on this change in this post, but the goal is to make certificates easier to manage going forward with VCD.
Conclusion
Finally, one can see that this was a jam-packed release by our team. I didn’t even discuss the new App Launchpad, new additions to Container Service Extension, Terraform Provider 2.7, and so forth. I look forward to seeing this release in production for our global VMware Cloud Providers.
Last comment – I couldn’t have done all of this testing without Timo Sugliani and the automation he’s built for us. Big thanks to him for making our lives easier each day.
-Daniel
Thanks for the good reading.
After years struggling with the transition, feature parity with Flex is for sure good news.
Conversely, any NSX-T migration gimmick is useless while the NSX-T ‘Edge’ is missing basic functionality like SSL-VPN and load balancing. How many more years, in your opinion, would it take VMware to reach NSX feature parity?
Best, e.
Hi Emmanuele –
Thanks for the reply, and I understand it. However, just like the H5 transition, it took a few phases to complete this. I expect us to have iterations of the NSX-V to -T migration for VCD as time progresses. Moreover, NSX-T 3.0 introduces several multi-tenant features and scale improvements, such as VRF-lite, that will be pertinent for Cloud Providers. As for the load balancer, we are transitioning to Avi Networks, so this is taking additional investment compared to traditional LBs.
Not trying to make excuses, but it comes down to prioritization.
Cheers,
-Daniel
I wouldn’t wait around for SSL vpn. I don’t believe it’s even on the roadmap.
It isn’t….for now. 🙂
It should be.
With the current ̶p̶i̶t̶i̶f̶u̶l̶ reduced set of functionalities in NSX-T edge, using it for customers doesn’t have so much appeal.
I don’t quite understand the reasoning behind not providing feature parity to the T edges from instant zero; no reasoning at all, I guess 🙂
Best, e.
We’re running a VCD environment with NSX-T under the hood. It’s great as long as you’re ok with the limitations. I’d probably be more annoyed if I had a big NSX-V based environment that I started with. But I don’t =)
The 2-tier routing in NSX-T is worth the price of admission if you ask me. And then with VRF-lite in 3.0, I’m set. There are like a billion ways to do SSL VPN; I don’t really think it’s necessary for VMware to integrate it.
Just build an NSX-T version and run it in parallel to V and use the environment that makes sense.
I couldn’t have said it better, Danny. Nice job.
Hi There, does anyone know how to import multiple vms into a vapp in cloud director 10.x ?
my workflow in 9.x is as follows:
– create vapp
– import 8 vms [nested esxi / other vms] into vapp from vcenter
any ideas?
Hi Dave, if you’re looking to batch import into a specific vApp, there’s no way to do that today with the adopt-a-VM behavior. You can import each respective VM and move it to the selected vApp once imported.
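The two-step workaround above can be sketched as a simple loop. The `import_vm_from_vcenter` and `move_vm_to_vapp` helpers here are hypothetical placeholders standing in for whatever UI steps or API calls you use; VCD does not ship functions with these names.

```python
# Hypothetical sketch of the "import each VM, then move it" workaround.
# The two callables stand in for the real import and move operations.

def batch_import_into_vapp(vm_names, target_vapp,
                           import_vm_from_vcenter, move_vm_to_vapp):
    """Import each VM individually, then move it into the target vApp."""
    imported = []
    for name in vm_names:
        vm = import_vm_from_vcenter(name)   # step 1: import from the backed vCenter
        move_vm_to_vapp(vm, target_vapp)    # step 2: move into the desired vApp
        imported.append(vm)
    return imported
```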