
CCNP ROUTE - EIGRP part 2

Configuring and Verifying EIGRP in an Enterprise WAN

EIGRP over Frame Relay
By default, a Frame Relay network is an NBMA network. In an NBMA environment all routers are on the same subnet, but broadcast (and multicast) packets cannot be sent just once as they are in a broadcast environment such as Ethernet.

To emulate the LAN broadcast capability that is required by IP routing protocols (EIGRP hello or update packets to all neighbors reachable over an IP subnet), the Cisco IOS implements pseudo-broadcasting, in which the router creates a copy of the broadcast or multicast packet for each neighbor reachable through the WAN media, and sends it over the appropriate PVC for that neighbor.

Pseudo-broadcasting can be controlled with the broadcast option on static maps in a Frame Relay configuration. However, pseudo-broadcasting cannot be controlled for neighbors reachable through dynamic maps created via Frame Relay Inverse Address Resolution Protocol (Inverse ARP).

Dynamic maps always allow pseudo-broadcasting.

EIGRP on a Physical Frame Relay Interface
EIGRP on a Physical Frame Relay Interface with Dynamic Mapping
Deploying EIGRP over a physical Frame Relay interface using Inverse ARP dynamic mapping is easy, because dynamic mapping is the default behavior.
Split horizon is disabled by default on Frame Relay physical interfaces.
Inverse ARP is on by default and will automatically map the IP addresses of the devices at the other ends of the PVCs to the local DLCI number. 
Inverse ARP does not provide dynamic mapping for the communication between Routers R2 and R3 because they are not connected with a PVC. You must configure this mapping manually.
Router R1 forms adjacencies with Routers R2 and R3 over its Serial 0/0 physical interface.
No EIGRP relationship exists between Routers R2 and R3.
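As a minimal sketch of the hub side under these defaults (the addressing and EIGRP AS number are taken from the static-mapping example that follows; no frame-relay map statements are configured, because Inverse ARP builds the mappings dynamically):
R1:
interface Serial0/0
encapsulation frame-relay
ip address 192.168.1.101 255.255.255.0
!
router eigrp 110
network 192.168.1.0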

EIGRP on a Frame Relay Physical Interface with Static Mapping
When static mapping is used (which disables Inverse ARP), no changes are needed to the basic EIGRP configuration.
R1#interface Serial0/0
encapsulation frame-relay
ip address 192.168.1.101 255.255.255.0
frame-relay map ip 192.168.1.101 101
! Frame Relay map to its own IP address so that the Serial 0/0 local IP address can be pinged from Router R1 itself
frame-relay map ip 192.168.1.102 102 broadcast
frame-relay map ip 192.168.1.103 103 broadcast
!
R3# interface Serial0/0
encapsulation frame-relay
ip address 192.168.1.103 255.255.255.0
frame-relay map ip 192.168.1.101 130 broadcast
!
router eigrp 110
network 192.168.1.0
* broadcast (Optional) - Allows broadcasts and multicasts over the VC, permitting the use of dynamic routing protocols over the VC.
Routers R2 and R3 can also form an EIGRP adjacency to each other if the IP-to-DLCI mapping for that connectivity is provided.
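A sketch of that additional mapping: each spoke maps the other spoke's IP address to its own DLCI toward the hub, so the traffic between them is relayed through Router R1. R3's DLCI 130 is taken from the example above, while R2's local DLCI (120 here) is an assumption, since it is not given.
R2:
frame-relay map ip 192.168.1.103 120 broadcast
!
R3:
frame-relay map ip 192.168.1.102 130 broadcast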

EIGRP over Frame Relay Multipoint Subinterfaces
You can create one or several multipoint subinterfaces over a single Frame Relay physical interface. These multipoint subinterfaces are logical interfaces emulating a multiaccess network.
They act like an NBMA physical interface, and therefore use a single subnet, preserving the IP address space.
Frame Relay multipoint is applicable to partial-mesh and full-mesh topologies.
Partial mesh Frame Relay networks must deal with split-horizon issues, which prevent routing updates from being retransmitted on the same interface on which they were received.
Split horizon is enabled by default on Frame Relay multipoint subinterfaces.

On these multipoint interfaces, the default EIGRP timers are 60 seconds for the hello timer and 180 seconds for the hold timer (the low-speed NBMA defaults, which apply to links at T1 speed and slower).
On Frame Relay multipoint subinterfaces, all of the PVCs attached to the subinterface must be lost for the subinterface to be declared down.

For Frame Relay, the IP address-to-DLCI mapping on multipoint subinterfaces is done by
either specifying the local DLCI value (using the frame-relay interface-dlci <dlci> command) and relying on Inverse ARP,
or using manual IP address-to-DLCI mapping.
interface Serial0/0
no ip address
encapsulation frame-relay
!
interface Serial0/0.1 multipoint
ip address 192.168.1.101 255.255.255.0
no ip split-horizon eigrp 110
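! split horizon is disabled on the hub so that routes learned from one spoke are re-advertised to the other spokes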
frame-relay map ip 192.168.1.101 101
frame-relay map ip 192.168.1.102 102 broadcast
frame-relay map ip 192.168.1.103 103 broadcast
!
router eigrp 110
network 172.16.1.0 0.0.0.255
network 192.168.1.0
EIGRP Unicast Neighbors (static neighbor)
neighbor {ip-address | ipv6-address} <interface-type> <interface-number>
This router configuration command defines a neighboring router with which to exchange EIGRP routing information. Instead of using multicast packets, EIGRP exchanges routing information with the specified neighbor using unicast packets.
The router does not process any EIGRP multicast packets received on that interface, and it stops sending EIGRP multicast packets on that interface.
EIGRP on a Multipoint Frame Relay Subinterface
Because Router R3 is not using the neighbor command, it tries to communicate using multicast packets on its Serial 0/0.1 subinterface.
However, no neighborship is established, because neither Router R1 nor Router R2 accepts multicast packets.

router eigrp 110
network 172.16.1.0 0.0.0.255
network 192.168.1.0
neighbor 192.168.1.102 serial0/0.1
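Because a router configured with a static neighbor statement stops sending and processing multicast EIGRP packets on that interface, the other end must also be configured with a matching neighbor statement. A sketch of the corresponding configuration on Router R2 (its subinterface name is an assumption):
R2:
router eigrp 110
network 192.168.1.0
neighbor 192.168.1.101 serial0/0.1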
EIGRP over Frame Relay Point-to-Point Subinterfaces
One or several point-to-point subinterfaces can be created over a single Frame Relay physical interface. These point-to-point subinterfaces are logical interfaces that emulate a leased-line network and provide routing behavior equivalent to that of point-to-point physical interfaces.
As with physical point-to-point interfaces, each subinterface requires its own subnet. Frame Relay point-to-point is applicable to hub-and-spoke topologies.

The EIGRP hello timer and hold timer are identical to the values used on point-to-point physical links (5 seconds for the hello timer and 15 seconds for the hold timer).

EIGRP on Frame Relay Point-to-Point Subinterfaces
interface serial <number>.<subinterface number> point-to-point
IP address-to-DLCI mapping on point-to-point subinterfaces is done by specifying the local DLCI value, using the frame-relay interface-dlci <dlci> command.
interface Serial0/0
no ip address
encapsulation frame-relay
!
interface Serial0/0.2 point-to-point
ip address 192.168.2.101 255.255.255.0
frame-relay interface-dlci 102
!
interface Serial0/0.3 point-to-point
ip address 192.168.3.101 255.255.255.0
frame-relay interface-dlci 103
!
router eigrp 110
network 172.16.1.0 0.0.0.255
network 192.168.2.0
network 192.168.3.0
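For completeness, a sketch of the matching spoke configuration on Router R2; only the hub is shown above, so the spoke's IP address (192.168.2.102) and local DLCI (201) are assumptions:
R2:
interface Serial0/0
no ip address
encapsulation frame-relay
!
interface Serial0/0.1 point-to-point
ip address 192.168.2.102 255.255.255.0
frame-relay interface-dlci 201
!
router eigrp 110
network 192.168.2.0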

EIGRP over MPLS

With MPLS, short fixed-length labels are assigned to each packet at the edge of the network. Rather than examining the IP packet header information, MPLS nodes use this label to determine how to process the data.

The MPLS standards evolved from the efforts of many companies, including Cisco’s tag-switching technology.

MPLS enables scalable VPNs, end-to-end quality of service (QoS), and other IP services that allow efficient utilization of existing networks with simpler configuration, management, and quicker fault correction.

A label identifies a flow of packets (for example, voice traffic between two nodes), also called a forwarding equivalence class (FEC). An FEC is a grouping of packets. Packets belonging to the same FEC receive the same treatment in the network.

An FEC can define the flow's QoS requirements and the appropriate queuing and discard policies.

The MPLS network nodes, called label-switched routers (LSRs), use the label to determine the next hop for the packet. The LSRs do not need to examine the packet's IP header; rather, they forward it based on the label.
After a path has been established, packets destined to the same endpoint with the same requirements can be forwarded based on these labels without a routing decision at every hop. Labels usually correspond to Layer 3 destination prefixes, which makes MPLS equivalent to destination-based routing.

A label-switched path (LSP) must be defined for each FEC before packets can be sent. It is important to note that labels are locally significant to each MPLS node only. Therefore, the nodes must communicate what label to use for each FEC. One of two protocols is used for this communication: the Label Distribution Protocol (LDP) or an enhanced version of the Resource Reservation Protocol (RSVP). An interior routing protocol, such as OSPF or EIGRP, is also used within the MPLS network to exchange routing information.
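As a hedged illustration of that control plane on a provider core router (the interface, addressing, and the choice of OSPF as the IGP are assumptions): the IGP advertises the backbone prefixes, and LDP distributes a label for each of them.
ip cef
mpls label protocol ldp
!
interface GigabitEthernet0/0
ip address 10.0.0.1 255.255.255.252
mpls ip
!
router ospf 1
network 10.0.0.0 0.0.0.255 area 0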

An MPLS label is a 32-bit field placed between a packet’s data link layer header and its IP header.
As each LSR receives a labeled packet, it removes the label, locates the label in its table, applies the appropriate outgoing label, and forwards the packet to the next LSR in the LSP.

Initial VPNs were built using leased lines with PPP and HDLC encapsulations. Later, service providers offered Layer 2 VPNs based on point-to-point data link layer connectivity, using ATM or Frame Relay virtual circuits.

MPLS VPNs were introduced to provide a unified network for Layer 3 VPN services. For customers still wanting Layer 2 connections, ISPs could deploy Ethernet VLAN extensions across a metropolitan area or ATM services. Any Transport over MPLS (AToM) was introduced to facilitate this Layer 2 connectivity across an MPLS backbone.

AToM unifies Layer 2 and Layer 3 offerings over a common MPLS infrastructure. In AToM, VCs represent Layer 2 links, and MPLS labels identify VCs.

The Layer 2 MPLS VPN provides a Layer 2 service across the backbone, where customer routers are connected together on the same IP subnet.

The Layer 3 MPLS VPN provides a Layer 3 service across the backbone, where customer routers are connected to ISP edge routers. On each side, a separate IP subnet is used.

MPLS Terminology
The network is divided into the customer-controlled part (the C-network) and the provider-controlled part (the P-network).
Contiguous portions of C-network are called sites and are linked to the P-network via customer edge routers (CE routers). The CE routers are connected to the provider edge (PE) routers, which serve as the edge devices of the Provider network. The core devices in the Provider network (the P-routers) provide the transport across the provider backbone and do not carry customer routes. The service provider connects multiple customers over a common MPLS backbone using MPLS VPNs.

Each customer is assigned an independent routing table—the virtual routing and forwarding (VRF) table in the PE router—that corresponds to the dedicated PE router in a traditional peer-to-peer model.
PE routers carry a separate set of routes for each customer, isolating each customer from other customers.

The internal topology of the MPLS backbone is transparent to the customer. The internal P-routers are hidden from the customer’s view and the CE routers are unaware of the MPLS VPN.

EIGRP over Layer 3 MPLS VPN
 The customer has to agree on the EIGRP parameters (such as the autonomous system number, authentication password, and so on) with the SP to ensure connectivity. The SP often governs these parameters.
The PE routers receive routing updates from the CE routers and install these updates in the appropriate VRF table. This part of the configuration and operation is the SP’s responsibility.
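A hedged sketch of what that PE-side configuration can look like in classic IOS (the VRF name, RD/RT values, interface, and PE-CE addressing are illustrative; the EIGRP AS 110 matches the customer examples above):
ip vrf CUST_A
 rd 65000:110
 route-target export 65000:110
 route-target import 65000:110
!
interface Serial0/1
 ip vrf forwarding CUST_A
 ip address 10.10.10.1 255.255.255.252
!
router eigrp 1
 address-family ipv4 vrf CUST_A
  autonomous-system 110
  network 10.10.10.0 0.0.0.3
  no auto-summary
 exit-address-family
Redistribution between this VRF EIGRP instance and the provider's MP-BGP (not shown) carries the customer routes across the MPLS backbone.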

Layer 2 MPLS VPNs
AToM and EoMPLS (Ethernet over MPLS) do not include any MAC layer address learning and filtering.
Therefore, routers PE1 and PE2 do not filter any frames based on MAC addresses and do not run the Spanning Tree Protocol (STP). A service provider can use LAN switches in conjunction with AToM and EoMPLS to provide these features (Layer 2 loop protection).

When deploying EIGRP over EoMPLS, there are no changes to the EIGRP configuration from the customer perspective.


EIGRP Load Balancing

EIGRP supports equal-cost and unequal-cost load balancing.

EIGRP Equal-Cost Load Balancing
Equal-cost load balancing is a router's capability to distribute traffic over all of its routes that have the same metric for the destination address. All IP routing protocols on Cisco routers can perform equal-cost load balancing.

Notice that the terminology is equal cost even though the metric used in the routing protocol may not be called cost (as is the case for EIGRP).

By default, the Cisco IOS balances between a maximum of four equal-cost paths for IP.
Using the maximum-paths <maximum-path> router configuration command, you can request that up to 16 equally good routes be kept in the routing table.
Set the <maximum-path> parameter to 1 to disable load balancing.
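A minimal sketch (the AS number 110 comes from the earlier examples; the value 6 is only an illustration):
router eigrp 110
maximum-paths 6
! up to six equal-cost routes per destination; maximum-paths 1 disables load balancing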

When packets are process switched, load balancing over equal-cost paths occurs on a per-packet basis.
When packets are fast switched or CEF switched, load balancing over equal-cost paths occurs on a per-destination basis by default.

Load balancing is performed only on traffic that passes through the router, not traffic generated by the router.

To test load balancing, use ping or traceroute. These packets are generated by the router's CPU and are process switched, so the router balances them on a per-packet basis instead of through CEF, which balances per destination.
Traceroute sends three probes for each TTL value, so with default settings it can discover up to three parallel paths between two router hops.
This number may be equal to or less than the actual number of links between the two routers.
Important: traceroute-generated traffic is process switched and is not handled by CEF.
Each traceroute probe is a UDP packet with a very high destination port (greater than 30000) so that the receiving device returns an ICMP unreachable message.
Because they are process switched, traceroute probes can discover multiple equal-cost paths.
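A quick way to observe this (the prefix 172.16.2.0 is borrowed from the variance example below, and 172.16.2.1 is an assumed host in it; no sample output is shown):
R1# show ip route 172.16.2.0
! lists every equal-cost entry installed for the prefix
R1# traceroute 172.16.2.1
! the process-switched probes may reveal up to three parallel paths per hop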


EIGRP Unequal-Cost Load Balancing
EIGRP can also balance traffic across multiple routes that have different metrics—this is called unequal-cost load balancing.

The degree to which EIGRP performs load balancing is controlled by the router configuration command:
variance <multiplier>
The multiplier is a variance value, between 1 and 128, used for load balancing.
The default is 1, which means equal-cost load balancing.
The multiplier defines the range of metric values that are accepted for load balancing.
Setting a variance value greater than 1 allows EIGRP to install multiple loop-free routes with unequal cost in the routing table.

The variance allows feasible successors to also be installed in the routing table.

Only paths that are feasible can be used for load balancing, and the routing table only includes feasible paths. The two feasibility conditions are as follows:
■ The route must be loop free. As noted earlier, this means that the best metric (the AD) learned from the next router must be less than the local best metric (the current FD). In other words, the next router in the path must be closer to the destination than the current router.
■ The metric of the entire path (the FD of the alternative route) must be lower than the variance multiplied by the local best metric (the current FD). In other words, the metric for the entire alternate path must be within the variance.

EIGRP itself does not load-share between multiple routes. It only installs the routes in the local routing table. The local routing table enables the router’s switching hardware or software to load share between the multiple paths.

If the variance is set to 2, any feasible (loop-free) EIGRP-learned route with a metric less than two times the successor metric will be installed in the local routing table.
Paths to 172.16.2.0/24 with "variance 2" (successor FD = 20, so the limit is 2 x 20 = 40):
        AD    FD    Feasibility condition (AD < 20)?      Load-balanced (FD < 40)?
path1:  10    30    Yes, feasible successor (10 < 20)     Yes, 30 < 40
path2:  10    20    Successor (best route)                Yes
path3:  25    45    No, 25 > 20 (not loop free)           No
path4:  10    50    Yes, feasible successor (10 < 20)     No, 50 > 40
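The configuration that matches this example is simply (a sketch, assuming the same AS 110):
router eigrp 110
variance 2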

To control how traffic is distributed among routes when multiple routes exist for the same destination network and they have different metrics, use the router configuration command:
traffic-share {balanced | min across-interfaces}
With the balanced keyword (the default behavior), the router distributes traffic proportionately to the ratios of the metrics associated with the different routes. With the min across-interfaces option, the router uses only the routes that have minimum costs; in other words, all routes that are feasible and within the variance are kept in the routing table, but only those with the minimum cost are used. This latter option keeps feasible backup routes in the routing table at all times, so they can take over if the primary route becomes unavailable and is removed from the routing table.
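A sketch of the latter option, again assuming AS 110:
router eigrp 110
variance 2
traffic-share min across-interfaces
! feasible routes within the variance stay in the routing table, but only the minimum-cost routes carry traffic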


EIGRP Bandwidth Use Across WAN Links
In EIGRP operation, the default configuration of WAN connections might not be optimal.
A solid understanding of EIGRP operation coupled with knowledge of link speeds can yield an efficient, reliable, scalable router configuration.

EIGRP Link Utilization
By default, EIGRP uses up to 50 percent of the bandwidth declared on an interface or subinterface. EIGRP uses the bandwidth of the link set by the bandwidth command, or the link’s default bandwidth if none is configured, when calculating how much bandwidth to use.

You can adjust this percentage on an interface or subinterface with the interface configuration command:
ip bandwidth-percent eigrp <as-number> <percent>
The percent parameter is the percentage of the configured bandwidth that EIGRP can use.
You can set the percentage to a value greater than 100, which might be useful if the bandwidth is configured artificially low for routing policy reasons.
Router(config)#interface serial0/0/0
Router(config-if)#bandwidth 20
Router(config-if)#ip bandwidth-percent eigrp 1 200
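! EIGRP may now use up to 40 kbps (200 percent of the 20-kbps configured bandwidth) on this link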
When configuring point-to-point Frame Relay subinterfaces, set the bandwidth to match the contracted CIR of each PVC.
When configuring multipoint interfaces (especially for Frame Relay, but also for ATM and ISDN PRI), remember that the bandwidth is shared equally by all neighbors.

EIGRP uses the bandwidth command on the physical interface divided by the number of Frame Relay neighbors connected on that physical interface to get the bandwidth attributed to each neighbor.

When configuring multipoint interfaces, configure the bandwidth to represent the minimum CIR multiplied by the number of circuits (for example, with 4 VCs and a lowest per-VC CIR of 56 kbps, the bandwidth would be 56 * 4 = 224 kbps).
Bandwidth = lowest CIR * number of VCs (the lowest-speed connection multiplied by the number of circuits)

This approach might not fully use the higher-speed circuits, but it ensures that the circuits with the lowest CIR will not be overdriven.
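A sketch of that rule on a multipoint subinterface (the subinterface and values follow the example above: 4 VCs with a lowest CIR of 56 kbps):
interface Serial0/0.1 multipoint
bandwidth 224
! 4 VCs x 56 kbps; the bandwidth is split across the 4 neighbors (56 kbps each), and EIGRP uses at most 50 percent of that by default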