Get your nerd hats on! We’re freaking pushing labels over our DMVPN network, like a boss. As you might have gathered thus far, I’m a little excited. The only downer here is your label switch path has to be hub-to-spoke, so no more spoke-to-spoke tunnels. If you want labels between your spokes, per Cisco documentation, traffic flow absolutely has to be spoke-hub-spoke. Calm down, dry those tears sunshine… because this is still awesome. I can hear you all now: “But Jon! One of the best things about DMVPN is building dynamic tunnels between spokes!” Shut up, Debbie Downer. We do lose dynamic tunnels, but we gain full-blown PEs connected only via DMVPN.
Ok, enough build up. How does this work? Surprisingly easy, if you’ve configured MPLS before… this isn’t going to be super exciting. First things first, here’s our topology:
All spokes are connected back to the hub via Serial links in a 192.168.zy.x/30 space (where z = lower router number and y = higher router number). For example, the link between R1-Hub and R2-Spoke is 192.168.12.0/30. Then we have Loopback0 configured on each router in the 192.168.x.x/32 space; this is our tunnel source. All traffic supporting DMVPN backhaul is routed via OSPF. Finally, for routing within the DMVPN cloud we’re using good old reliable EIGRP. Here are our base DMVPN configurations.
R1-Hub
interface Tunnel100
ip address 10.10.100.1 255.255.255.0
no ip redirects
no ip split-horizon eigrp 100
ip nhrp map multicast dynamic
ip nhrp network-id 100
mpls ip
tunnel source Loopback0
tunnel mode gre multipoint
!
!
interface Loopback100
description BGP peering over DMVPN
ip address 10.10.1.1 255.255.255.255
!
router eigrp 100
network 10.0.0.0
R2/R3/R4 (substitute each router’s number for x below)
interface Tunnel100
ip address 10.10.100.x 255.255.255.0
no ip redirects
ip nhrp map multicast 192.168.1.1
ip nhrp map 10.10.100.1 192.168.1.1
ip nhrp network-id 100
ip nhrp nhs 10.10.100.1
mpls ip
tunnel source Loopback0
tunnel mode gre multipoint
!
interface Loopback100
description BGP peering over DMVPN
ip address 10.10.x.x 255.255.255.255
!
router eigrp 100
network 10.0.0.0
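Before we layer labels on top, it’s worth a quick sanity check that the overlay and underlay are healthy. A few handy commands (output omitted here):

show dmvpn
show ip eigrp neighbors
show ip route ospf

If the spokes don’t show up as NHRP registrations on the hub and as EIGRP neighbors over Tunnel100, fix that first; labels won’t save a broken overlay.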
Pretty simple so far, right? Alright, let’s get some labels in here.
All Routers
mpls ip
mpls ldp router-id lo100
!
int tun100
mpls ip
!
I know what you’re thinking: “Jon, here’s $5… because you just blew my mind.” Well, thank you, and I do accept donations. So let’s check the output on R1:
*Aug 21 01:21:12.009: %LDP-5-NBRCHG: LDP Neighbor 10.10.2.2:0 (1) is UP
*Aug 21 01:21:13.005: %LDP-5-NBRCHG: LDP Neighbor 10.10.3.3:0 (2) is UP
*Aug 21 01:21:14.106: %LDP-5-NBRCHG: LDP Neighbor 10.10.4.4:0 (3) is UP
Sweet sweet success, but do we have labels? Best place to check is on one of the spokes, I’ll look at R4 (he seems lonely).
R4-MPLS#show mpls forwarding-table | ex No Label
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop
Label      Label      or Tunnel Id     Switched      interface
16         Pop Label  10.10.1.1/32     0             Tu100      10.10.100.1
17         16         10.10.2.2/32     0             Tu100      10.10.100.1
18         17         10.10.3.3/32     0             Tu100      10.10.100.1
Awesome! Don’t ignore the next hop; remember, that’s the secret sauce here. Because we left “no ip next-hop-self eigrp 100” out of our hub config, the hub advertises itself as the next hop, forcing all traffic between spokes to route through it. As I demonstrate in the video, if we allow the dynamic tunnels this all breaks. So it would seem we have a functioning LSP between spokes; let’s get a VRF running and go ping crazy! You don’t have to configure BGP on the hub, but I am, and I’ll configure the spokes as route-reflector clients to minimize spoke configuration.
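If you want to see the hub inserting itself as the next hop, check a spoke’s routing table for another spoke’s loopback; it should point at the hub’s tunnel address (10.10.100.1), not the remote spoke:

show ip route 10.10.2.2
show ip cef 10.10.2.2 detail

(Run from R4 here; swap in whichever spoke loopback you like.)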
R1-Hub
router bgp 65000
bgp log-neighbor-changes
neighbor 10.10.2.2 remote-as 65000
neighbor 10.10.2.2 update-source Loopback100
neighbor 10.10.2.2 send-community both
neighbor 10.10.3.3 remote-as 65000
neighbor 10.10.3.3 update-source Loopback100
neighbor 10.10.3.3 send-community both
neighbor 10.10.4.4 remote-as 65000
neighbor 10.10.4.4 update-source Loopback100
neighbor 10.10.4.4 send-community both
!
address-family vpnv4
neighbor 10.10.2.2 activate
neighbor 10.10.2.2 send-community extended
neighbor 10.10.2.2 route-reflector-client
neighbor 10.10.3.3 activate
neighbor 10.10.3.3 send-community extended
neighbor 10.10.3.3 route-reflector-client
neighbor 10.10.4.4 activate
neighbor 10.10.4.4 send-community extended
neighbor 10.10.4.4 route-reflector-client
exit-address-family
Spokes
router bgp 65000
bgp log-neighbor-changes
neighbor 10.10.1.1 remote-as 65000
neighbor 10.10.1.1 update-source Loopback100
neighbor 10.10.1.1 send-community both
!
address-family vpnv4
neighbor 10.10.1.1 activate
neighbor 10.10.1.1 send-community extended
exit-address-family
!
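Before moving on, confirm the vpnv4 sessions actually came up (output omitted):

show bgp vpnv4 unicast all summary

Each spoke should show an established session with 10.10.1.1, and the hub should show all three spokes.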
Now that we have BGP up and running, we’ll configure a basic VRF, assign a loopback to said VRF, and redistribute connected routes under the VRF’s ipv4 address-family.
All Spokes
ip vrf MPLS
rd 65000:1
route-target export 65000:65000
route-target import 65000:65000
!
router bgp 65000
address-family ipv4 vrf MPLS
redistribute connected
exit-address-family
R2
int lo1001
ip vrf forwarding MPLS
ip address 172.16.2.1 255.255.255.0
!
R3
int lo1001
ip vrf forwarding MPLS
ip address 172.16.3.1 255.255.255.0
!
R4
int lo1001
ip vrf forwarding MPLS
ip address 172.16.4.1 255.255.255.0
!
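A couple of quick checks that the VRF took and the loopbacks landed in it:

show ip vrf interfaces MPLS
show ip route vrf MPLS connected

You should see Loopback1001 listed under the VRF on each spoke with its 172.16.x.1 address.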
Last but not least, let’s test from R3.
R3-MPLS#show ip bgp vpnv4 vrf MPLS | b Route
Route Distinguisher: 65000:1 (default for vrf MPLS)
*>i 172.16.2.0/24 10.10.2.2 0 100 0 ?
*> 172.16.3.0/24 0.0.0.0 0 32768 ?
*>i 172.16.4.0/24 10.10.4.4 0 100 0 ?
!
R3-MPLS#show ip route vrf MPLS bgp | b Gateway
Gateway of last resort is not set
172.16.0.0/16 is variably subnetted, 4 subnets, 2 masks
B 172.16.2.0/24 [200/0] via 10.10.2.2, 00:31:16
B 172.16.4.0/24 [200/0] via 10.10.4.4, 00:31:11
!
R3-MPLS#ping vrf MPLS 172.16.2.1 source lo1001
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.2.1, timeout is 2 seconds:
Packet sent with a source address of 172.16.3.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 19/19/20 ms
!
R3-MPLS#traceroute vrf MPLS 172.16.2.1
Type escape sequence to abort.
Tracing the route to 172.16.2.1
VRF info: (vrf in name/id, vrf out name/id)
1 10.10.100.1 [MPLS: Labels 16/24 Exp 0] 20 msec 20 msec 20 msec
2 172.16.2.1 20 msec 19 msec 20 msec
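Quick note on that label stack: 16 is the LDP transport label toward R2’s loopback (10.10.2.2/32), and 24 is the VPN label BGP assigned to 172.16.2.0/24. You can correlate both yourself:

show mpls forwarding-table 10.10.2.2 32
show ip bgp vpnv4 vrf MPLS labels

Label values will vary in your lab, so don’t expect 16/24 exactly.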
Well, that’s it everybody! MPLSoDMVPN! See the attached video if you want to hear me talk really fast about doing everything you just read.
Cisco has the spoke-to-spoke stuff now with the MPLS labels.
Hi Jon,
Great post.
Hi Fish, please, could you share this new stuff ?
Thanks and Regards,
Jerome
+1 on spoke to spoke labels. I'd love to see that in place.
Hi folks,
Spoke-spoke support with MPLS over DMVPN has been there on IOS-XE routers since XE 3.11, and the equivalent release for ISR-G2 routers (those that support MPLS) as well. The routing design stays pretty much the same; the only config that would change is replacing 'mpls ip' on the tunnel interfaces with 'mpls nhrp'. That takes care of all the magic and also doesn't need LDP, so you can disable LDP over the tunnel if you're using that. That's all!
I read up on that as well! (Keep in mind this post is over a year old now, lol.) Having said that, last I tried to lab this out, mpls nhrp does absolutely nothing. I get no label exchange over the tunnel interfaces after removing LDP. So it's good the syntax is there, but it appears to be a work in progress.
Or at least a work in progress on CSR 1000v.
Please any doc, design guide for Spoke-spoke support with MPLS over DMVPN I don't find any valid cisco doc
There is support, but it's kind of flaky in my opinion. So far, it only seems to work if your spokes are PEs. I'll write a post about it, but the skinny is:
1. Replace 'mpls ip' with 'mpls nhrp' on the tunnel interfaces.
2. Change vpnv4 peering from loopbacks to Tunnel interface IPs.
What will happen is, you'll forward traffic ONLY with its vpn label (no transport label). So, if your PEs are DMVPN spokes then it works wonderfully. However, if your spokes are just LSRs it totally breaks MPLS lol.
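In config terms, those two steps would look roughly like this on a spoke (an untested sketch; tunnel and neighbor addresses from the lab above):

interface Tunnel100
 no mpls ip
 mpls nhrp
!
router bgp 65000
 neighbor 10.10.100.1 remote-as 65000
 !
 address-family vpnv4
  neighbor 10.10.100.1 activate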
Hi
Is there any progress in this case?
This is exactly what happens in my lab and I really need the feature.
Unfortunately no, mpls nhrp only seems to work if your DMVPN routers are PEs.
For me, to change ospf network type to point-to-multipoint did it.
Big thanks to Cisco TAC
We use this heavily in our environment and mpls nhrp does indeed work as the solution for running mpls over dmvpn while still permitting spoke to spoke traffic.