I've integrated vCAC and NSX and am noticing that the Edge Service Router (ESR) deployed as part of a multi-machine blueprint receives 2 IPs on its single "uplink" interface from the External network profile. Below is my setup and what is happening; any help understanding why would be appreciated. This is a learning lab, so it's not a huge deal, but before I implement this in a production environment I need to know whether this is expected behavior or something is wrong, since it effectively cuts the number of possible deployable networks in half. Again, not a big deal because this "transport network" (the segment between the manually deployed Edge Gateway and the dynamic Edge Service Routers) exists entirely within the vSphere environment and can be as large as a class A network if needed; however, this is a huge waste of IP space which I'd like to resolve if possible.
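To put a number on the waste: the halving follows directly from each ESR consuming two uplink addresses instead of one. A quick sketch of the math (the /24 range and the reserved-address count are made-up examples, not my actual profile):

```python
import ipaddress

def max_esr_deployments(cidr: str, ips_per_esr: int, reserved: int = 2) -> int:
    """Usable host IPs in the external profile's subnet, minus addresses
    reserved for the upstream Edge Gateway etc., divided by the number of
    IPs each dynamically deployed ESR consumes on its uplink."""
    net = ipaddress.ip_network(cidr)
    usable = net.num_addresses - 2  # drop network and broadcast addresses
    return (usable - reserved) // ips_per_esr

# Hypothetical /24 External network profile:
print(max_esr_deployments("192.168.13.0/24", ips_per_esr=1))  # 252 (expected behavior)
print(max_esr_deployments("192.168.13.0/24", ips_per_esr=2))  # 126 (what I'm seeing)
```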
Topology:
I have an NSX Edge and virtual wire (NSX L2 switch) already deployed in the environment. Within vCAC I have:
- a reservation that is linked to the dvPortGroup created by the NSX L2 switch
- an External network profile used to configure the uplink port of dynamically deployed NSX Edge Service Routers, connecting them to the LAN segment between the dynamic ESR and the already-deployed NSX Edge/L2 switch
- a 1-Many NAT network profile that is used to configure the virtual machines deployed from vCAC blueprints
- a vCAC vSphere VM blueprint pointing to a snapshot of a VM within the vSphere environment (linked clone deployment)
- a vCAC Multi-Machine blueprint that contains the above blueprint, assigns a network interface to the VM, and uses the 1-Many NAT network profile to configure the IP settings on the VM. The MM blueprint contains only a single VM for the purpose of testing the dynamic network creation and IP assignment features/integration between vCAC and NSX.
Resulting Topology once VM is deployed:
NSX Edge Gateway (manually deployed)
v
v
NSX L2 Switch/Virtual Wire (manually deployed)
v
v
NSX Edge Service Router (deployed as part of vCAC blueprint deployment)
v
v
Virtual Machine (deployed as part of vCAC blueprint deployment)
In theory, what should happen when I request a resource from the MM blueprint:
1. The ESR is deployed with 2 interfaces: one uplink on the External network configured with an available IP from the corresponding subnet, and one on the internal NAT network configured with the default gateway IP defined in the NAT network profile.
2. NAT and traffic-handling rules are automatically configured within the ESR.
3. The VM is deployed with a NIC configured with the appropriate IP settings as specified in the NAT network profile.
What actually happens:
1. The ESR gets deployed with 2 NICs: one uplink NIC on the External network which gets 2 IPs from the 13 subnet (instead of 1), and one NIC for the default gateway of the NAT'd network configured with the NAT network profile's gateway IP. Steps 2 and 3 still occur as expected.
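One way I've been confirming the duplicate assignment outside the vSphere UI is to pull the ESR's vNIC configuration from NSX Manager's REST API (in NSX-v, GET /api/4.0/edges/{edge-id}/vnics, if I have the path right) and check whether a secondaryAddresses element shows up next to the primaryAddress in the uplink's addressGroup. A rough sketch of parsing that response (the XML below is a hand-made sample modeled on what I see, not real API output):

```python
import xml.etree.ElementTree as ET

# Hand-made sample modeled on an NSX-v vnic addressGroup; the real
# response carries many more fields per vnic.
SAMPLE = """
<vnics>
  <vnic>
    <index>0</index>
    <name>uplink</name>
    <addressGroups>
      <addressGroup>
        <primaryAddress>192.168.13.21</primaryAddress>
        <secondaryAddresses>
          <ipAddress>192.168.13.22</ipAddress>
        </secondaryAddresses>
        <subnetMask>255.255.255.0</subnetMask>
      </addressGroup>
    </addressGroups>
  </vnic>
</vnics>
"""

def ips_per_vnic(xml_text: str) -> dict:
    """Map each vnic name to all IPs (primary + secondary)
    configured across its addressGroups."""
    result = {}
    for vnic in ET.fromstring(xml_text).findall("vnic"):
        name = vnic.findtext("name")
        ips = [ag.findtext("primaryAddress") for ag in vnic.iter("addressGroup")]
        ips += [ip.text for ip in vnic.iter("ipAddress")]
        result[name] = [ip for ip in ips if ip]
    return result

print(ips_per_vnic(SAMPLE))  # {'uplink': ['192.168.13.21', '192.168.13.22']}
```

Any vnic whose list is longer than one IP is exhibiting the behavior described above.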