Sunday, February 13, 2011

Label Switched Multicast - P2MP TE

In a typical MPLS cloud, unicast traffic is label switched while multicast traffic is not. Such deployments require multicast to be enabled in the SP core in order to provide multicast service to SP customers: multicast traffic from a customer is encapsulated in GRE with a multicast destination address within the SP cloud, and is flooded downstream to the required PE devices.

A recent enhancement allows providing multicast service to end customers without enabling multicast in the SP core. This is done by using Multipoint LSPs. Below are the 2 protocols that can be used to build Multipoint LSPs:



1.       Multicast LDP (mLDP)
2.       Point-to-Multipoint Traffic Engineering tunnel (P2MP TE)

There are 2 types of Multipoint LSPs:

1.       Point to Multipoint LSP
2.       Multipoint to Multipoint LSP


Point to Multipoint LSP:

A Point to Multipoint LSP is a unidirectional LSP, supported by both mLDP and P2MP TE. It can be used in applications like IPTV video distribution.

Multipoint to Multipoint LSP:

This is a bidirectional LSP, currently supported only by mLDP. It can be used in applications like video conferencing, which require bidirectional communication.

LSM lets us bring MPLS features like FRR to multicast traffic as well. It also lets us predefine the path (using MPLS TE) that a particular multicast stream should take.

In this document, we will discuss the basics of P2MP TE and the related configuration.

P2MP TE tunnel:

A P2MP tunnel comprises an LSP with a single ingress and multiple egress points, and can be used for services that deliver data from a single source to multiple receivers (for example, multicast traffic). Incoming MPLS labeled packets can be replicated to different outgoing interfaces with different labels. The tunnel consists of multiple sub-LSPs that connect the source to the multiple destinations.

Below are a few elements that are part of a P2MP LSP:

Ingress LSR - The head-end router where the P2MP LSP originates. It is responsible for initiating the signaling messages on the control plane to set up the Multipoint LSP.

Egress LSR/Leaf LSR - One of the tail-end routers where the Multipoint LSP terminates.

Bud LSR - An LSR that is an egress LSR but also has downstream LSRs connected. This LSR replicates the incoming multicast packet.

Branch LSR - An LSR that has more than one downstream LSR connected. It is responsible for replicating the incoming labeled packet and sending copies to multiple downstream routers with different labels.

 
RSVP Extensions:

As RSVP is the key protocol that makes MPLS TE work, it has been extended with new and modified objects for P2MP TE signaling. Two new Class Types (C-Types) are defined for the SESSION object, as below:

13 → P2MP_LSP_TUNNEL_IPv4
14 → P2MP_LSP_TUNNEL_IPv6
 
The “Destination ID” in the SESSION object is replaced by the “P2MP ID”. The combination of P2MP ID, Tunnel ID, and Extended Tunnel ID provides a globally unique identifier for the P2MP tunnel.

A new object, S2L_SUB_LSP (IPv4 C-Type, Class number 50), has been introduced to carry the tunnel destination (tail-end address).

How does RSVP signaling work?
The head-end LSR sends an RSVP PATH message to the predefined tail-end routers. The PATH message for a P2MP tunnel carries the below objects:

SESSION object with a unique (P2MP ID, Tunnel ID, Extended Tunnel ID)
HOP object with next-hop details to reach the tail end
ERO object with path details to reach the tail end (with strict/loose sub-object information)
LABEL_REQUEST object
SESSION_ATTRIBUTE object with Setup and Hold priorities
SENDER_TEMPLATE object, together with the S2L_SUB_LSP object carrying the sub-LSP destination address, which is the tail-end address

For each destination (leaf LSR) that is part of the P2MP tunnel, one PATH message is unicast with the “Router Alert” option set, carrying the above-mentioned objects.
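As a side note, the resulting PATH and RESV state can be inspected on the head end with the standard IOS RSVP show commands. These commands were not part of the original capture, so they are shown here without output:

```
! Not from the lab capture - standard IOS commands to inspect RSVP state
show ip rsvp sender detail       ! PATH state, including the sub-LSP destinations
show ip rsvp reservation detail  ! RESV state and labels allocated per sub-LSP
```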

P2MP Configuration Example:

This configuration assumes that basic routing and mpls traffic-eng configuration are already in place. No special configuration is required on any midpoint router. Only the TE head end and tail end require special configuration, as below.

HeadEnd device (R1) Configuration:

1.       Tunnel mode should be configured as “tunnel mode mpls traffic-eng point-to-multipoint”.
2.       The tunnel destination should be a destination list containing all leaf node addresses.
3.       IP multicast should be enabled with PIM mode as Source Specific Multicast (SSM).
4.       The tunnel interface should be enabled with “ip pim passive”.
5.       An IGMP static group should be configured for each multicast group that will be sent out of the P2MP tunnel interface.
 

hostname R1
!
ip cef   
!
mpls traffic-eng tunnels
!
mpls traffic-eng destination list name P2MP_DESTINATION
ip 10.1.3.3 path-option 1 dynamic
 ip 10.1.4.4 path-option 1 dynamic
 ip 10.1.5.5 path-option 1 dynamic
!
interface Loopback0
 ip address 10.1.1.1 255.255.255.255
!        
interface Tunnel100
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng point-to-multipoint
 tunnel destination list mpls traffic-eng name P2MP_DESTINATION
 tunnel mpls traffic-eng priority 7 7
 tunnel mpls traffic-eng bandwidth 1000
!
interface Ethernet0/0
 no ip address
!
interface Ethernet0/0.12
 encapsulation dot1Q 12
 ip address 10.1.12.1 255.255.255.0
 mpls traffic-eng tunnels
 ip rsvp bandwidth 5000
!
router ospf 1
 router-id 10.1.1.1
 log-adjacency-changes
 network 10.1.0.0 0.0.255.255 area 0
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng area 0
!
end
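Note that the R1 configuration above shows only the TE portion and omits the multicast-specific lines called out in steps 3-5. A minimal sketch of those lines, assuming the same SSM group 232.1.1.1 and source 192.168.1.1 used on the tail end, would be:

```
! Sketch only - multicast lines implied by steps 3-5, not part of the captured config
ip multicast-routing
ip pim ssm default
!
interface Tunnel100
 ip pim passive
 ip igmp static-group 232.1.1.1 source 192.168.1.1
```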
  
TailEnd device (R5) Configuration:

1.       IP multicast should be enabled with PIM mode as Source Specific Multicast (SSM).
2.       A static mroute should be configured for each multicast source address, with the next hop set to the tunnel source (head-end) address.

hostname R5
!
ip cef
!
!        
ip multicast-routing
ip multicast mpls traffic-eng
!
!
no ipv6 cef
multilink bundle-name authenticated
mpls traffic-eng tunnels
!
interface Loopback0
 ip address 10.1.5.5 255.255.255.255
 ip pim sparse-mode
 ip igmp join-group 232.1.1.1 source 192.168.1.1
!
interface Ethernet0/0
 no ip address
!        
interface Ethernet0/0.5
 encapsulation dot1Q 5
 ip address 10.1.55.5 255.255.255.0
 ip pim sparse-mode
 ip igmp static-group 232.1.1.1 source 192.168.1.1
!
interface Ethernet0/0.35
 encapsulation dot1Q 35
 ip address 10.1.35.5 255.255.255.0
 mpls traffic-eng tunnels
 ip rsvp bandwidth 5000
!
router ospf 1
 router-id 10.1.5.5
 log-adjacency-changes
 network 10.1.0.0 0.0.255.255 area 0
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng area 0
 mpls traffic-eng multicast-intact
!
ip pim ssm default
ip mroute 192.168.1.1 255.255.255.255 10.1.1.1
!
end 

Verification:

Below is the output of a multicast ping triggered from a CE device connected to the tunnel head end:

CE#ping 232.1.1.1                  

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 232.1.1.1, timeout is 2 seconds:

Reply to request 0 from 10.1.3.3, 4 ms
Reply to request 0 from 10.1.4.4, 4 ms
Reply to request 0 from 10.1.5.5, 4 ms
CE#

On the head-end device, below are a few commands that can be helpful for troubleshooting.

1. "show ip mroute" and "show ip mfib" show the incoming and outgoing interfaces for each group along with packet counters. From the capture below, it can be observed that the incoming interface for (192.168.1.1, 232.1.1.1) is E0/0.16 and the outgoing interface is Tunnel100.


R1#show ip mfib 232.1.1.1
Entry Flags:    C - Directly Connected, S - Signal, IA - Inherit A flag,
                ET - Data Rate Exceeds Threshold, K - Keepalive
                DDE - Data Driven Event, HW - Hardware Installed
I/O Item Flags: IC - Internal Copy, NP - Not platform switched,
                NS - Negate Signalling, SP - Signal Present,
                A - Accept, F - Forward, RA - MRIB Accept, RF - MRIB Forward
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kbits per second
Other counts:      Total/RPF failed/Other drops
I/O Item Counts:   FS Pkt Count/PS Pkt Count
Default
 (192.168.1.1,232.1.1.1) Flags:
   SW Forwarding: 1560/0/100/0, Other: 15/0/15
   Ethernet0/0.16 Flags: A
   Tunnel100 Flags: F NS
     Pkts: 786/0

R1#


2. "show mpls traffic-eng tunnels tunnel <number>" shows the outgoing physical interface and the label used for each sub-LSP, as below:

R1#show mpls traffic-eng tunnels tunnel 100

Tunnel100   (p2mp),  Admin: up, Oper: up
  Name: R1_t100  

  Tunnel100 Destinations Information:

    Destination     State SLSP UpTime 
    10.1.3.3        Up    00:00:44  
    10.1.4.4        Up    01:19:17  
    10.1.5.5        Up    01:19:17  

    Summary: Destinations: 3 [Up: 3, Proceeding: 0, Down: 0 ]
      [destination list name: P2MP_DESTINATION]

  History:
    Tunnel:
      Time since created: 2 hours, 19 minutes
      Time since path change: 1 hours, 19 minutes
      Number of LSP IDs (Tun_Instances) used: 5
    Current LSP: [ID: 5]
      Uptime: 1 hours, 19 minutes
    Prior LSP: [ID: 3]
      Removal Trigger: tunnel shutdown
         
  Tunnel100 LSP Information:
    Configured LSP Parameters:
      Bandwidth: 1000     kbps (Global)  Priority: 7  7   Affinity: 0x0/0xFFFF
      Metric Type: TE (default)

  Session Information
    Source: 10.1.1.1, TunID: 100

    LSPs
      ID: 5 (Current), Path-Set ID: 0xDD000004
        Sub-LSPs: 3, Up: 3, Proceeding: 0, Down: 0

      Total LSPs: 1

P2MP SUB-LSPS:

 LSP: Source: 10.1.1.1, TunID: 100, LSPID: 5
     P2MP ID: 100, Subgroup Originator: 10.1.1.1
     Name: R1_t100
     Bandwidth: 1000, Global Pool

  Sub-LSP to 10.1.4.4, P2MP Subgroup ID: 1, Role: head
    Path-Set ID: 0xDD000004
    OutLabel : Ethernet0/0.12, 17
    Next Hop : 10.1.12.2
    Explicit Route: 10.1.12.2 10.1.23.2 10.1.23.3 10.1.34.3
                    10.1.34.4 10.1.4.4
    Record   Route (Path):  NONE
    Record   Route (Resv):  NONE

  Sub-LSP to 10.1.5.5, P2MP Subgroup ID: 2, Role: head
    Path-Set ID: 0xDD000004
    OutLabel : Ethernet0/0.12, 17
    Next Hop : 10.1.12.2
    Explicit Route: 10.1.12.2 10.1.23.2 10.1.23.3 10.1.35.3
                    10.1.35.5 10.1.5.5
    Record   Route (Path):  NONE
    Record   Route (Resv):  NONE

  Sub-LSP to 10.1.3.3, P2MP Subgroup ID: 3, Role: head
    Path-Set ID: 0xDD000004
    OutLabel : Ethernet0/0.12, 17
    Next Hop : 10.1.12.2
    Explicit Route: 10.1.12.2 10.1.23.2 10.1.23.3 10.1.3.3
    Record   Route (Path):  NONE
    Record   Route (Resv):  NONE
R1#


On a midpoint device, the below command can be used for troubleshooting.

1. Use "show mpls forwarding-table labels <label> detail" to check the egress label and interface details:
R2#sh mpls forwarding-table labels 17 detail
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop   
Label      Label      or Tunnel Id     Switched      interface             
17         17         10.1.1.1 100 [5] 103578        Et0/0.23   10.1.23.3  
        MAC/Encaps=18/22, MRU=1500, Label Stack{17}
        AABBCC03EB00AABBCC03EA00810000178847 00011000
        No output feature configured
    Broadcast
R2#


On bud routers and tail-end devices, the below commands can be used for troubleshooting. On bud router R3, note that the incoming label is replicated to two downstream interfaces and is also delivered locally (via Lspvif0) for the attached receiver:

R3#show mpls forwarding-table labels 17 detail
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop   
Label      Label      or Tunnel Id     Switched      interface             
17         17         10.1.1.1 100 [5] 105774        Et0/0.34   10.1.34.4  
        MAC/Encaps=18/22, MRU=1500, Label Stack{17}
        AABBCC03EC00AABBCC03EB00810000228847 00011000
        No output feature configured
    Broadcast
           17         10.1.1.1 100 [5] 105774        Et0/0.35   10.1.35.5  
        MAC/Encaps=18/22, MRU=1500, Label Stack{17}
        AABBCC03ED00AABBCC03EB00810000238847 00011000
        No output feature configured
    Broadcast
           No Label   10.1.1.1 100 [5] 0             aggregate 
        MAC/Encaps=0/0, MRU=0, Label Stack{}, via Ls0
        No output feature configured
    Broadcast
R3#

R3#show ip mroute 232.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(192.168.1.1, 232.1.1.1), 00:01:20/00:01:39, flags: sLTI
  Incoming interface: Lspvif0, RPF nbr 10.1.1.1, Mroute
  Outgoing interface list:
    Ethernet0/0.3, Forward/Sparse, 00:01:20/00:01:39

R3#


Points to Remember:



  • P2MP TE can be used only for source-driven multicast applications. It currently does not support receiver-driven applications.
  • Only PIM SSM can be used; other PIM modes are not supported.
  • On tail-end devices, a new virtual interface (Lspvif) is created and used as the incoming interface for LSM traffic.
  • On the head-end router, the tunnel mode must be point-to-multipoint traffic-eng (use "tunnel mode mpls traffic-eng point-to-multipoint").
  • On the head-end router, the tunnel destination is a list of sub-LSP tail-end addresses.
