Limitless Networking

Author: Rick Mur

CCIE Data Center

The long-rumored, highly anticipated and much-desired track has finally been released, and it's a beast!

Since the release of the Nexus platform there has been talk about when these platforms would be introduced in a CCIE track. With the introduction of UCS in 2009 the demand became even stronger, especially since UCS really took off in sales. When I started my CCIE Storage studies in 2010 I wrote an article for IPexpert with my predictions for the CCIE DC (http://blog.ipexpert.com/2010/01/13/storage-and-datacenter-ccie/). Most of them were very easy guesses, but those also became reality in the track, though with the new hardware that is now available (2 years later).

You might have already read most of this information on other blogs, but I'm trying to consolidate it here. During the coming weeks/months more and more information will become available, and during Cisco Live in June there will be a huge amount of information and questions during the 4-hour Techtorial (TECCCIE-9544).

The scope of the exam is pretty much based on the usual suspects, so in summary you should know:

  • UCS B-series blade systems
  • UCS C-series rackmount systems connected to UCS Manager via FEX
  • Virtual Interface Cards (virtualized NICs and HBAs) in all servers
  • Nexus 7000 with all features like VDC, OTV, FabricPath, etc.
  • Nexus 5500 with all features like FCoE, FEX
  • Nexus 2000 connected to either the 5k or the 7k
  • Nexus 1000V distributed virtual switch in ESX
    • There is no mention of any VMware product in the blueprint, so expect ESX and vCenter to be pre-installed on the UCS blades and FC boot to pre-configured disks
  • MDS 9222i for connecting FC storage to UCS
  • ACE appliance
  • DCNM management software

Availability

The written exam is available in beta from May 1st all the way up to June 15. They specifically mentioned that the beta test will be available during Cisco Live, which is also when I'm going to take it. The live exam will be available from September 1st.

Currently there are no dates for when the lab will be available.

Written exam

The written exam has an extensive blueprint published on the Cisco Learning Network (CLN), including a reading list. As mentioned before, the beta version of the CCIE Data Center written exam will be available for scheduling and testing at all worldwide Cisco-authorized Pearson VUE testing centers from May 1 through June 15, 2012. The beta test will also be offered during the Cisco Live San Diego event, June 10-14, 2012; candidates may schedule and take the exam on the same day. The beta exam will be offered at a discounted price of US$50, with full recertification or lab qualification credit granted to all passing candidates.

The current published reading list:

Data Center Fundamentals (ISBN-10: 1-58705-023-4)

NX-OS and Cisco Nexus Switching (ISBN-10: 1-58705-892-8)

Cisco Unified Computing System (UCS) (ISBN-10: 1-58714-193-0)

I/O Consolidation in the Data Center (ISBN-10: 1-58705-888-X)

Storage Networking Fundamentals (ISBN-10: 1-58705-162-1)

Please find the extensive blueprint published by Cisco at the bottom of this blog post.

Lab exam

There is not much information available regarding the lab exam, and availability has not been announced. There is, however, information regarding the hardware, and it is an immense list of expensive equipment:

Hardware blueprint:

Cisco Catalyst Switch 3750 = Switch for management connections
Cisco 2511 Terminal Server = Lab terminal server
MDS 9222i
Nexus 7009
– (1) Sup
– (1) 32 Port 10Gb (F1 Module)
– (1) 32 Port 10Gb (M1 Module)
Nexus 5548
Nexus 2232
Nexus 1000V
UCS C200 Series Server
– VIC card for c-series
UCS-6248 Fabric Interconnects
UCS-5108 Blade Chassis
– B200 M2 Blade Servers
– Palo mezzanine card (VIC card)
– Emulex mezzanine card (2 vNICs and 2 vHBAs)
Cisco Application Control Engine Appliance – ACE4710
Dual attached JBODs (prepare for pre-configured disks)

Software Versions
NXOS v6.0(2) on Nexus 7000 Switches
NXOS v5.1(3) on Nexus 5000 Switches
NXOS v4.2(1) on Nexus 1000V
NXOS v5.2(2) on MDS 9222i Switches
UCS Software release 2.0(1x) for UCS-6248 Fabric Interconnect and all UCS systems
Software Release A5(1.0) on ACE4710
Cisco Data Center Network Manager (DCNM) software v5.2(2)

How do I get my hands on this?

Now this is a huge list. I'm fortunate to work for Telindus-ISIT, a Cisco Gold partner with a strong focus on Nexus and UCS, so we already have most of this available in our lab! Cisco knows that not everybody will be able to purchase a lab, and that even lab rental companies can't afford this. Therefore they confirmed at Cisco Live Melbourne that Cisco will start offering rack rentals for the CCIE Data Center, probably through the Cisco 360 program.

Other available information

During the session at Cisco Live Melbourne, more information was provided than had previously been available. Some important topics are:

CCIE Storage?

There are currently NO plans to replace CCIE Storage with CCIE Data Center. Because of this, there will not be a large focus on MDS/FC configuration, as there is another track for that.

What about P and A tracks?

A CCNA Data Center and CCNP Data Center will be released soon!

Troubleshooting

Troubleshooting will be a big part of the exam, which is also pretty clear from the blueprint. There is no confirmation yet on how this will be introduced: either through trouble tickets as in the CCIE R&S, or simply through broken pre-configuration in the lab. I can imagine a pre-configured, broken Nexus 1000V on an ESX installation on one of the JBODs. More information on how troubleshooting will be handled should become available during other Q&A sessions.

Written Blueprint

Cisco Data Center Architecture

  • Describe the Cisco Data Center Architecture
  • Describe the products used in the Cisco Data Center Architecture
  • Describe Cisco unified I/O solution in access layer
  • Determine which platform to select for use in the different data center layers

Cisco Data Center Infrastructure—NX-OS

  • Describe NX-OS features
    Describe the architecture of NX-OS
    Describe NX-OS Process Recovery
    Describe NX-OS Supervisor Redundancy
    Describe NX-OS Systems file management
    Describe Virtual Output Queuing (VoQ)
    Describe Virtual Device Contexts
    Configure and Troubleshoot VDCs
    Describe fabric extension via the nexus family
  • Design and implement NX-OS Layer 2 and Layer 3 functionality
    Describe VLANs
    Describe PVLANs
    Describe Spanning-Tree Protocols
    Describe Port-Channels and Virtual Port Channels
    Compare and contrast VPC options
    Describe basic features of routing protocols in a data center environment
    Implement jumbo frames end-to-end in a data center
    Describe FabricPath
    Describe VRF lite in a data center environment
    Validate configurations and troubleshoot problems and failures using command line, show and debug commands.
  • Describe Multicast
    Describe Multicast Operation in a data center environment
    Describe Basic PIM configuration
    Describe IGMP operation and configuration on the Nexus Platform
    Validate Configurations and troubleshoot problems and failures using command line, show and debug commands
  • Describe basic NX-OS Security features
    AAA Services
    RBAC, SSH, and SNMPv3
    Control Plane Protection and Hardware Rate Limiting
    IP ACLs, MAC ACLs, and VLAN ACLs
    Port Security
    DHCP Snooping, Dynamic ARP Inspection, and IP Source Guard
    Validate configurations and troubleshoot problems and failures using command line, show and debug commands
  • Implement NX-OS high availability features
    Describe First-Hop Routing Protocols
    Describe Graceful Restart and nonstop forwarding
    Describe OTV
    Describe the ISSU process
    Validate configurations and troubleshoot problems and failures using command line, show and debug commands
  • Implement NX-OS management
    Describe DCNM LAN features
    Implement SPAN and ERSPAN
    Implement embedded Ethernet analyzer and Netflow
    Describe XML for network management and monitoring
    Describe SNMP for network management and monitoring
    Describe and implement Embedded Event Management
    Describe configuration management in Data Center Network Manager
    Describe Smart Call Home
    Detail connectivity and credentials required for Data Center Network Manager
    Validate configurations and troubleshoot problems and failures using command line, show and debug commands

Cisco Storage Networking

  • Describe Standard-based SAN Protocols
    Describe Fiber Channel Standards and protocols
    Describe SCSI standards and protocols
    Describe iSCSI standards and protocols
    Describe FCIP standards and protocols
  • Implement Fiber Channel Protocols features
    Describe Port Channel, ISL, trunking and VSANs
    Design basic and enhanced zoning
    Describe FC domain parameters
    Describe Cisco Fabric services and benefits
    Design and implement proper oversubscription in an FC environment
    Validate proper configuration of FC storage based solutions
  • Implement IP Storage based solution
    Implement FC over IP (FCIP)
    Describe iSCSI and its features
    Validate proper configuration of IP Storage based solutions
  • Design and describe NX-OS Unified Fabric features
    Describe Fiber Channel features in the NX-OS environment
    Describe Fiber Channel over Ethernet Protocol and technology
    Design and implement data center bridging protocol and lossless Ethernet
    Design and implement QoS features
    Describe NPV and NPIV features in a Unified Fabric environment
    Describe FCoE NPV features
    Describe Unified Fabric Switch different modes of operations
    Describe multihop FCoE
    Describe and configure universal ports
    Validate configurations and troubleshoot problems and failures using command line, show and debug commands
  • Design high availability features in a standalone server environment
    Describe server-side high availability in the Cisco Unified I/O environment
    Describe Converged Network Adapter used in FCoE topologies
    Configuring NIC teaming
  • Implement SAN management
    Describe Device Manager for element management
    Describe configuration management in Data Center Network Manager
    Describe connectivity and credentials required for DCNM-SAN
    Describe how to monitor and trend utilization with DCNM Dashboard

Cisco Data Center Virtualization

  • Implement Data Center Virtualization with Nexus1000v
    Describe the Cisco Nexus1000v and its role in a virtual server network environment
    Describe Virtual Ethernet Module (VEM) on Nexus1000v
    Describe Virtual Supervisor Module (VSM)
    Describe the Cisco Nexus 1010 physical appliance and components
    Describe Port Profiles and use cases in Nexus1000v
    Describe QoS, Traffic Flow and IGMP Snooping in Nexus1000v
    Describe Network monitoring on Nexus1000v
    Explain the benefits of DHCP snooping in a VDI environment
    Describe how to intercept traffic using Vpath and its benefits
    Describe and implement Nexus1000v port channels
    Describe Virtual Service Domain
    Validate configurations and troubleshoot problems and failures using command line, show and debug commands

Cisco Unified Computing

  • Unified Computing System components and architecture
    Describe Cisco Unified Computing System components and architecture
    Describe the Cisco Unified Computing server deployment and implementation model
    Describe Cisco UCS Management features
    Describe Cisco UCS Connectivity from both LAN and SAN perspective
    Describe Cisco UCS High Availability
    Describe what the capability catalog is and how it is used
    Describe Cisco UCS C Series Integration
    Describe the functional differences between physical and virtual adaptors
  • Describe LAN connectivity in a Cisco Unified Computing environment
    Describe Fabric Interconnect for LAN connectivity
    Implement server and uplink ports
    Describe End Host Mode
    Implement Ethernet Switching Mode
    Implement VLANs and port channels
    Implement Pinning and PIN groups
    Describe Disjoint Layer 2 and design consideration
    Describe Quality of Service (QoS) options and configuration restrictions
    Design and verify scalable Cisco Unified computing systems
  • Describe and implement SAN connectivity in a Cisco Unified Computing environment
    Describe Fabric Interconnect for SAN connectivity
    Describe End Host Mode
    Implement NPIV
    Implement FC Switch mode
    Implement FC ports for SAN connectivity
    Implement Virtual HBA (vHBA)
    Implement VSANs
    Implement SAN port channels
    Describe and implement direct attach Storage connectivity options
    Describe and implement FC trunking and SAN pinning
  • Describe Cisco Unified Computing Server resources
    Describe Service Profiles in Cisco UCS including templates and contrast with cloning
    Describe Server Resource Pools
    Implement updating and initial templates
    Describe Boot From remote storage
    Detail best practices for creating pooled objects
    Explain how to use the Cisco UCS KVM with Vmedia and session management
    Describe local disk options and configuration protection
    Describe power control policies and their effects
  • Describe role-based Access Control Management Groups
    Understand Cisco UCS Management Hierarchy using ORG and RBAC
    Describe roles and privileges
    Implement integrated authentication
  • Cisco Unified Computing troubleshooting and maintenance
    Understand backup and restore procedures in a unified computing environment
    Manage high availability in a Cisco Unified Computing environment
    Describe monitoring and analysis of system events
    Implement External Management Protocols
    Analyze statistical information
    Understand Cisco Unified Computing components system upgrade procedure
    Describe how to manage BIOS settings
    Describe memory extension technology

Cisco Application Networking Services—ANS

  • Data center application high availability and load balancing
    Describe standard ACE features for load balancing
    Describe different Server Load Balancing Algorithm
    Describe health monitoring and use cases
    Describe Layer 7 load balancing
    Describe sticky connections
    Understand SSL offload in SLB environment
    Describe Protocol Optimization
    Describe Route Health Injection (RHI)
    Describe Server load balancing Virtual Context and HA
    Describe Server load balancing management options
  • Global load balancing
    Describe basic DNS resolution process
    Describe the benefits of the Cisco Global Load Balancing Solution
    Describe how the Cisco Global Load Balancing Solution integrate with local Cisco load balancers
    Implement a Cisco Global Load Balancing Solution into an existing network infrastructure

Lab Blueprint

Cisco Data Center Infrastructure – NXOS

  • Implement NXOS L2 functionality
    Implement VLANs and PVLANs
    Implement Spanning-Tree Protocols
    Implement Port-Channels
    Implement Unidirectional Link Detection (UDLD)
    Implement Fabric Extension via the Nexus family
  • Implement NXOS L3 functionality
    Implement Basic EIGRP in Data Center Environment
    Implement Basic OSPF in Data Center Environment
    Implement BFD for Dynamic Routing protocols
    Implement ECMP
    Implement FabricPath
  • Implement Basic NXOS Security Features
    Implement AAA Services
    Implement SNMPv3
    Configure IP ACLs, MAC ACLs and VLAN ACLs
    Configure Port Security
    Configure DHCP Snooping
    Configure Dynamic ARP Inspection
    Configure IP Source Guard
    Configure Cisco TrustSec
  • Implement NXOS High Availability Features
    Implement First-Hop Routing Protocols
    Implement Graceful Restart
    Implement nonstop forwarding
    Implement Port-channels
    Implement vPC and VPC+
    Implement Overlay Transport Virtualization (OTV)
  • Implement NXOS Management
    Implement SPAN and ERSPAN
    Implement NetFlow
    Implement Smart Call Home
    Manage System Files
    Implement NTP, PTP
    Configure and Verify DCNM Functionality
  • NXOS Troubleshooting
    Utilize SPAN, ERSPAN and EthAnalyzer to troubleshoot a Cisco Nexus problem
    Utilize NetFlow to troubleshoot a Cisco Nexus problem
    Given an OTV problem, identify the problem and potential fix
    Given a VDC problem, identify the problem and potential fix
    Given a vPC problem, identify the problem and potential fix
    Given an Layer 2 problem, identify the problem and potential fix
    Given an Layer 3 problem, identify the problem and potential fix
    Given a multicast problem, identify the problem and potential fix
    Given a FabricPath problem, identify the problem and potential fix
    Given a Unified Fabric problem, identify the problem and potential fix

Cisco Storage Networking

  • Implement Fiber Channel Protocols Features
    Implement Port Channel, ISL and Trunking
    Implement VSANs
    Implement Basic and Enhanced Zoning
    Implement FC Domain Parameters
    Implement Fiber Channel Security Features
    Implement Proper Oversubscription in an FC environment
  • Implement IP Storage Based Solution
    Implement IP Features including high availability
    Implement iSCSI including advanced features
    Implement SAN Extension tuner
    Implement FCIP and Security Features
    Implement iSCSI security features
    Validate proper configuration of IP Storage based solutions
  • Implement NXOS Unified Fabric Features
    Implement basic FC in NXOS environment
    Implement Fiber channel over Ethernet (FCoE)
    Implement NPV and NPIV features
    Implement Unified Fabric Switch different modes of operation
    Implement QoS Features
    Implement FCoE NPV features
    Implement multihop FCoE
    Validate Configurations and Troubleshoot problems and failures using Command Line, show and debug commands.

Cisco Data Center Virtualization

  • Manage Data Center Virtualization with Nexus1000v
    Implement QoS, Traffic Flow and IGMP Snooping
    Implement Network monitoring on Nexus 1000v
    Implement n1kv portchannels
    Troubleshoot Nexus 1000V in a virtual environment
    Configure VLANs
    Configure PortProfiles
  • Implement Nexus1000v Security Features
    DHCP Snooping
    Dynamic ARP Inspection
    IP Source Guard
    Port Security
    Access Control Lists
    Private VLANs

Cisco Unified Computing

  • Implement LAN Connectivity in a Unified Computing Environment
    Configure different Port types
    Implement Ethernet end Host Mode
    Implement VLANs and Port Channels.
    Implement Pinning and PIN Groups
    Implement Disjoint Layer 2
  • Implement SAN Connectivity in a Unified Computing Environment
    Implement FC ports for SAN Connectivity
    Implement VSANs
    Implement FC Port Channels
    Implement FC Trunking and SAN pinning
  • Implement Unified Computing Server Resources
    Create and Implement Service Profiles
    Create and Implement Policies
    Create and Implement Server Resource Pools
    Implement Updating and Initial Templates
    Implement Boot From remote storage
    Implement Fabric Failover
  • Implement UCS Management tasks
    Implement Unified Computing Management Hierarchy using ORG and RBAC
    Configure RBAC Groups
    Configure Remote RBAC Configuration
    Configure Roles and Privileges
    Create and Configure Users
    Implement Backup and restore procedures in a unified computing environment
    Implement system wide policies
  • Unified Computing Troubleshooting and Maintenance
    Manage High Availability in a Unified Computing environment
    Configure Monitoring and analysis of system events
    Implement External Management Protocols
    Collect Statistical Information
    Firmware management
    Collect TAC specific information
    Implement Server recovery tasks

Cisco Application Networking Services – ANS

  • Implement Data Center application high availability and load balancing
    Implement standard ACE features for load balancing
    Configuring Server Load Balancing Algorithm
    Configure different SLB deployment modes
    Implement Health Monitoring
    Configure Sticky Connections
    Implement Server load balancing in HA mode


Happy studying!

Fast Restoration on IP – MPLS Fast ReRoute

Service providers that carry a lot of real-time traffic through their network, like mobile network operators (MNOs), are very keen on fast restoration of service once a failure occurs. In the past many networks were based on SDH/SONET transport networks, which took care of fast (50 ms) failovers. Nowadays Ethernet is THE standard for transport within a service provider network. This introduces an issue, as Ethernet has no built-in mechanism for automatic failover when links or nodes fail.

There are many ways to solve this, and I want to dig deeper into these technologies in several posts. I will discuss various protocols that can solve the fast restoration requirement in different ways. Some are used in local situations (failover to a local neighbor, like a twin sibling) and others can be used between sites or as end-to-end protection for certain traffic.

The posts are broken down as follows:

  1. MPLS Fast ReRoute (this post)
  2. IP Loop Free Alternate
  3. BGP PIC Core/Edge
  4. Hierarchical Forwarding

Please be aware that these technologies are all related to fast restoration of the layer 3 forwarding path, and therefore of the MPLS forwarding path, which may be used for layer 2 forwarding as well. What these posts do not cover is fast restoration at layer 2. With the current "cloud" initiatives and next-generation data center networks we have some extensive options for layer 2 failover.

I can (and probably will :)) write another blog post series on those kinds of failover mechanisms.

The current posts focus on core service provider routing, offering resilient paths through the layer 3 or MPLS core of the service provider network.

MPLS Fast Reroute introduction

When MPLS was invented, the first application apart from fast packet switching was creating dedicated 'circuit-like' connections through the network. This is done using the RSVP protocol, which signals a PATH message through the network; each hop reports a label back, creating an end-to-end label switched path (LSP) along a pre-defined route through the network.

Once this initial (unidirectional) path is set up through the network, all traffic can be sent through it. In case of a failure we want to protect this primary path. The path is signaled with either static next-hops, or the ingress node can use the IGP database to calculate the path.

Be aware that your IGP needs support for this, and that it needs to be a link-state protocol (OSPF or IS-IS), as then every router has a full overview of the connections in the network. I will not go into very much detail on how RSVP works and how it utilizes the IGP database to perform a C-SPF calculation. If you want, I can write another blog post about this. Just leave a comment :).
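
As a minimal sketch of what that IGP support looks like on a Cisco IOS router (a hedged example; exact syntax can vary per release), MPLS TE is enabled globally, on each core interface, and inside OSPF so that TE attributes are flooded and the C-SPF calculation has something to work with:

mpls traffic-eng tunnels
!
interface GigabitEthernet0/0
 mpls traffic-eng tunnels
 ip rsvp bandwidth
!
router ospf 1
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng area 0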

Now that we have a path we can use for our traffic, we want some protection. MPLS Fast Reroute (FRR) is a technique that ensures this RSVP-signaled path is protected. There are a couple of ways to do this.

Protection

There are three ways to protect the path:

  • Link protection
  • Node protection
  • LSP protection / end-to-end protection

Which one you use very much depends on your network topology and on what you want to accomplish in terms of path protection. There are then two ways of setting up the protection. One is manual, where the backup path is manually configured and signaled as an additional tunnel through the network. The second is automatic, where the router figures out which links to use for the protection and automatically signals those paths through the network.

Why do we need it? The technology was introduced to ensure failover times equal to those of SDH/SONET transmission networks. When using an LDP network, you need to wait for IGP convergence before the new path is ready for traffic. During tests I found that this takes around 300-400 ms on core routing platforms (Juniper MX, Cisco ASR9k). When using MPLS FRR you reduce this to around 50 ms, as the routers already have a backup path ready that should already be programmed in the relevant ASICs.

Link protection

In smaller networks I usually see link protection used. For node protection you need a larger topology, so it is not always possible, or when possible not very useful. Link protection ensures each link is secured using a backup path, as the following drawing illustrates:

The primary tunnel follows the path R1-R2-R3-R5-R6 using MPLS labels according to the drawing. To protect the link between R2 and R3, a backup tunnel is signaled in advance by R2 to R3, around the protected link. When the link breaks, R2 pushes an additional label on top of the label stack and sends the packet to R4. R4 then pops off this label (PHP behavior) and R3 sees the standard label 15, as it usually expects.
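
To make the label operations concrete, here is a small worked sketch of the failure case above; label 15 comes from the drawing, while the backup label value (99) is an assumed example:

Primary LSP  : R1 -> R2 -> R3 -> R5 -> R6
Backup tunnel: R2 -> R4 -> R3   (pre-signaled around the link R2-R3)

Normal forwarding: R2 swaps the incoming label and forwards [15] to R3.
Link R2-R3 fails : R2 still swaps to 15, then pushes the backup label
                   and forwards [99][15] to R4.
At R4            : the backup label 99 is popped (PHP) and [15] goes to R3.
At R3            : label 15 arrives exactly as expected.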

Node protection (or link-node-protection)

Node protection is used in larger environments to protect both the link and the node in case of failures. As the name already says, this is the same technology as link protection, but the backup path is signaled completely around the node instead of just around the link. As you can see in the previous example, R3 is still used in transit and just its link to R2 is protected. In the following drawing you can see that LSR3 is fully protected, as the backup path terminates on LSR4.

LSP / end-to-end protection

From what I've seen, Juniper is the only vendor that actually implements this. I'm sure it's possible on Cisco as well, but when configuring the 'fast-reroute' command on a Juniper LSP it will signal a backup path through the network that fully excludes any node/link the primary path travels through. This sounds pretty rigorous, and it is :), but it makes sense in a square-based (ladder) design, as seen in the drawing below.

The orange path from R1 to R3 is the primary tunnel and the red-large-dashed tunnel from R1 through R4, R5 and R6 is the back-up path that Juniper routers automatically signal when fast-reroute is enabled.

In smaller topologies with just a couple of PEs this is doable, but when your topology grows you require a backup path for every LSP, and that can be hundreds or thousands in larger deployments, making it very difficult to troubleshoot.

The other protections, like link and node protection, create a backup path around a specific link or node, and all LSPs that travel through those routers can use the same backup path in case of failures.

So when you have a specific case where you want end-to-end protection of your LSP, this is the way to go, but under normal circumstances I would recommend using link or node protection, which scales much better!

Interoperability

Vendor interoperability is very important when it comes to fast reroute. In the beginning, when this was being developed, several drafts were published that all used different objects in RSVP (DETOUR, BYPASS, etc.). Therefore some people might tell you that Cisco and Juniper FRR don't work together.

This is long gone! But you have to configure it correctly. On Cisco, configuring fast-reroute on an LSP means the LSP will use a backup tunnel when one is available (manually configured). You require additional commands to create the backups automatically (auto-tunnel), where you also configure whether you want link or node protection.

When you configure fast-reroute under a Juniper LSP it will signal an end-to-end protected path, which might not be what you want. You need to configure link or node-link protection under the Juniper LSP to advertise the desired protection. RSVP then needs to be configured on each router to support link and/or node protection, by enabling this under the interfaces configured in RSVP.

When configured correctly, they interoperate perfectly!

RFC 4090 (http://tools.ietf.org/html/rfc4090) defines the finalized Fast Reroute standard, which is based on a draft by Avici. All vendors have implemented this RFC and, as I said, when configured with the correct commands they interoperate perfectly with each other.

Configuration

Below are some configuration examples. The first is an example for a Cisco IOS router. You see a tunnel configured with fast-reroute requested, and auto-tunnel backup enabled to signal the backup path automatically for link protection. Keep in mind that the auto-tunnel backup command needs to be configured on every node where you want link protection. The 'nhop-only' keyword restricts the automatic backups to next-hop (link) protection; without it, 'nnhop' backup tunnels providing node protection are created as well.

mpls traffic-eng auto-tunnel backup nhop-only 
interface Tunnel1
 ip unnumbered loopback0
 tunnel destination x.x.x.x
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng path-option 1 dynamic
 tunnel mpls traffic-eng fast-reroute
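
For node protection, a minimal sketch (verify the exact syntax on your IOS release) simply drops the 'nhop-only' keyword, so NNHOP backup tunnels are created as well:

mpls traffic-eng auto-tunnel backup

And to verify which LSPs are protected and by which backup tunnel:

show mpls traffic-eng tunnels backup
show mpls traffic-eng fast-reroute database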

The following example is for Juniper JUNOS routers. You see the same type of protection configured, including the automatic protection of links. This is done using the link-protection statement under the RSVP protocol. Additionally, the same statement needs to be configured under the LSP configuration.

[edit protocols]
mpls {
      label-switched-path lsp-name {
            to x.x.x.x;
            link-protection;
      }
}
rsvp {
      interface interface-name {
            link-protection;
      }
}
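
For the node-protection variant on Junos, here is a minimal sketch with the same placeholder names as above: 'node-link-protection' under the LSP requests a bypass around the next node where possible, falling back to link protection, while the RSVP interface configuration stays the same:

[edit protocols]
mpls {
      label-switched-path lsp-name {
            to x.x.x.x;
            node-link-protection;
      }
}

And in operational mode, the protection state can be verified with:

show mpls lsp extensive
show rsvp session detail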


Summary

I hope I was able to give you a quick overview of the different ways of protecting traffic-engineering tunnels in MPLS networks. This was only one way of protecting traffic. It is currently the technology most commonly used by service providers, but others are on the rise that don't require as much configuration, although they do require tuning and sometimes specific network designs.

Stay tuned for the next blogpost about IP Loop Free Alternate!

Rick

JNCIE-ENT lab set-up

As I'm preparing for the various exams (up to the Expert lab) of the Juniper Enterprise Routing & Switching track, I needed a lab to support this. In this blog post I explain my choice of hardware and software and how I'm going to use this set-up to prepare for the written exams and the lab exam.

Hardware and Software

Based on the blueprint, available on the Juniper website (http://www.juniper.net/us/en/training/certification/resources_jncieent.html), I needed to select hardware and software. The current software version used in the lab exam is JUNOS 10.4. On the various communities I heard that Juniper wants to move this to a JUNOS 11.x track (probably 11.4, which is a long-term-support release) somewhere this year, but until that time I chose the latest version of 10.4. At the time of writing this is JUNOS 10.4R9.

The official blueprint gives no real indication of which hardware is used on the lab exam, but after finding my way through the community sites, and with the help of some community friends (special thanks to Chris 😉), I decided to use the SRX100H as router and the EX4200 as layer 3 switch.

The SRX and EX platforms are the platforms of choice for enterprise deployments. They are extensively used in the classroom trainings offered by Juniper and, according to the community, are used in the lab exam itself as well. The advantage of the branch SRX platform is that, in terms of features, all branch-office SRX devices are pretty much equal. I chose the lightest model with high memory (SRX100H) for these reasons:

  • All features supported! (including MPLS, clustering, etc.)
  • Two units fit into one rackmount kit, saving space
  • Enough connectivity (no GigE, but who cares in a lab?)
  • High memory version to run multiple virtual routers with large routing tables
  • Very low cost!

For the switching layer I chose the EX4200, as virtual chassis technology is on the blueprint and the only 1G fixed switch supporting it is the EX4200. I chose the smallest model, offering 24 GigE ports of which 8 are PoE enabled. The EX4200 is a full layer 3 switch and even capable of some MPLS features.

As the number of routers and switches on the exam is unknown (and under NDA, of course), I chose a set-up in which I can practice anything. With two EX4200s I can do everything, as you can disable the virtual-chassis ports on the back from the CLI; this lets me use the switches individually when necessary, for example to practice spanning-tree. For the routers I chose six. You should be able to practice all kinds of routing and multicast scenarios with 4 routers, but you also need backbone devices to inject routes or to act as multicast receiver or source. This is also why I chose the high-memory version of the SRX100: to ensure there is enough memory for multiple virtual routers (routing-instances) with large routing tables. According to the Juniper specifications the SRX100 should only be capable of running 3 virtual routers, but I already tested up to 10, so I guess it will keep going until the memory is full, as there is no fixed limitation. The same goes for other 'advanced' features like BGP. On other SRX devices you need a license for features like route reflection, but on the SRX100H this seems to work flawlessly!
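
To illustrate how I use these virtual routers as backbone devices, here is a minimal sketch; the instance name (BB1), the sub-interface and the prefixes are hypothetical examples, and the discard routes simply inject test prefixes that can then be advertised in BGP or redistributed into the IGP:

routing-instances {
    BB1 {
        instance-type virtual-router;
        interface fe-0/0/1.100;
        routing-options {
            static {
                route 10.100.0.0/24 discard;
                route 10.100.1.0/24 discard;
            }
        }
    }
}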

One feature that isn't available on the SRX100H is logical systems. This is a way to spawn a new routing protocol daemon with a separate configuration file and run multiple truly separated routers. Unfortunately the branch SRX doesn't support this, but I'm in the luxury position of also having two fully packed MX480 routers in my lab :).

Below is a picture of the physical lab set-up. I have the advantage that I can use the lab facilities of my employer, but this set-up is actually pretty silent. The SRXs have external power supplies; the EX switches are the noisiest, but still quite manageable in a home environment when only used for labs.

The big advantage of the SRX100 is that the rack mount kit (a separate item to order) can hold two units, including a special space for the external power supply. I think this is very nicely done and creates an ultimate lab experience. On the SRX all connections, including console, are made on the front, so access to the back is not necessary. The EX switches, however, have console and management Ethernet ports on the back, as well as the virtual chassis ports (VCPs). Although not shown in the picture, I connected the virtual chassis ports so I can practice virtual chassis technology. During the real lab you will have more switches, but for a practice lab you just need to learn how virtual chassis works and how multi-chassis LAGs and related features work. After you have practiced that, you can disable the VCP ports using CLI commands and use the switches independently.
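
A quick sketch of the CLI side of this: 'show virtual-chassis vc-port' lists the exact identifiers of the VCPs, which you then reference to remove them (the pic-slot/port values below are made-up examples, so check the vc-port output on your own switches first):

show virtual-chassis
show virtual-chassis vc-port
request virtual-chassis vc-port delete pic-slot 1 port 0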

Study material

A tough part of studying, especially for lab exercises, is finding the right study materials. The only official Juniper training material is based on instructor-led courses, and you require multiple courses to cover all material of a certain exam. You can order the books of these courses online, but there is no option to rent the lab environment used in them. You do get the lab guides with those print-outs, so together with this SRX and EX topology you should be able to do all the labs that are taught in the courses. This might require some re-cabling, but on the other hand, as you will see below, my set-up offers a lot of virtualization options that you can use to create your own logical topology on top of this single physical topology.

These kinds of set-ups are usually used in labs that are offered for rent, as you don’t want to be re-cabling your lab every time, especially not when it’s hosted overseas :-).

You can order the books of the courses by following this link (requires Juniper website credentials): http://www.onfulfillment.com/JuniperTrainingPublic/WelcomePublic.aspx?sid=323

The more publicly available materials are the books published by O'Reilly. These books are not officially linked to Juniper, but they are developed with close attention to the platform and contain a lot of specific information. There are multiple books available, but the ones of interest to the –ENT track are Junos Enterprise Routing and Junos Enterprise Switching.

When read carefully, these books should be enough to prepare you for all the exams in the –ENT track, which consists of the following exams:

  1. JNCIA-Junos (JN0-101)
  2. JNCIS-ENT (JN0-343)
  3. JNCIP-ENT (JN0-643)
  4. JNCIE-ENT (JPR-943)

The first three are written exams that can be taken at Prometric testing centers around the world. The last exam (JNCIE-ENT) is an 8-hour proctored lab exam that is available at a few Juniper offices around the world. Especially for the JNCIP-ENT and JNCIE-ENT you will need a lot of CLI experience and will need to do hands-on labs! Even though the JNCIP-ENT exam is a written test, you will be exposed to a lot of show and configuration output from the CLI, where you will need to identify what's wrong/correct/configured/etc. Therefore you really need a lot of exposure to the CLI and all of its possible quirks. My experience with Juniper exams, though, is that they are straightforward and will not test you on exotic features; they really want you to know what is used in day-to-day networks and what you will see when working with this equipment in an enterprise environment.

There is one company that offers custom JNCIE training: Proteus Networks (http://www.proteus.net), which offers excellent boot camps and labs! I already used their proctored practice labs for my JNCIP-M and JNCIE-M exams, and knowing what to expect on the lab was a huge advantage.

Currently they only offer remote proctored labs and a self-paced workbook for the JNCIE-SP exam, but they confirmed the same offering would become available for JNCIE-ENT very soon (2012)!

(Hint: When you like them on Facebook, you will get discount on your first purchase!)

For the written exams I will use the O'Reilly books and will practice all the technologies on my rack by simply trying them out. This should prepare you more than enough to pass them. The combined use of the O'Reilly books and the soon-to-be-released self-paced and proctored labs from Proteus will prepare you well enough for the JNCIE-ENT lab exam! In the meantime you can use the labs from the instructor-led courses offered by Juniper, or, if you are creative, build labs yourself by coming up with a decent logical topology and testing the more exotic features like multicast.

Finally there are the communities, where you can ask questions and get answers from some very intelligent and helpful people. I use the following communities to ask my Juniper-related questions:

  • J-Net forums (http://forums.juniper.net)
    • This is my primary source for asking questions. Quite a few Juniper employees are very active on these forums. You can subscribe to threads and receive e-mails when replies are posted.
  • The Champion Community (http://www.thechampioncommunity.com)
    • Very new, but very promising!
  • GroupStudy Juniper mailing list (http://www.groupstudy.com)
    • Usually pretty silent, but there are some very intelligent people subscribed to this mailing list who will answer your queries

Topology

As I don't want to re-cable my lab while doing exercises, I came up with a topology that offers a lot of flexibility in creating all the logical topologies I need. I connected a cable from every router to both switches: interface 1 on each router connects to switch 1, where the port number corresponds to the router number, and interface 2 on each router connects to switch 2. Additionally I connected pairs of routers directly to each other, both to test router-to-router interlinks and to test the clustering functionality of the SRX (not a blueprint item for the –ENT track).

As I don't want to use the console port all the time, but rather have an SSH session to my devices, I use a dedicated interface on every device, connected to a third switch that is used solely for access to the rest of the network and to the internet. To ensure the management access (and the required interface and routing configuration) does not interfere with the rest of the configuration, I created a virtual-router routing-instance on each device to keep the management routing separated from the global routing table.

Configuration example:

system {
    services {
        ssh;
    }
}
interfaces {
    fe-0/0/0 {
        unit 0 {
            family inet {
                address <MGMT_IP>/24;
            }
        }
    }
}
routing-instances {
    MGMT {
        instance-type virtual-router;
        interface fe-0/0/0.0;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop <DefGW>;
            }
        }
    }
}

This connectivity ensures flexibility, as ports on the switch can be configured as access, trunk or routed ports. Depending on the lab exercise, I configure either a single IP address on a router interface or tagged sub-interfaces, so I'm able to create tons of interfaces whenever necessary.

When configuring routing-instances, it is possible to connect just a sub-interface to the instance, so this doesn't require additional physical interfaces either.
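
To make this concrete, here is a minimal sketch of one such logical link (the VLAN number, names and addresses are made-up examples): a tagged sub-interface on an SRX paired with a trunk port on an EX.

On the SRX:

interfaces {
    fe-0/0/1 {
        vlan-tagging;
        unit 10 {
            vlan-id 10;
            family inet {
                address 10.0.10.1/24;
            }
        }
    }
}

On the EX:

vlans {
    VLAN10 {
        vlan-id 10;
    }
}
interfaces {
    ge-0/0/1 {
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan {
                    members VLAN10;
                }
            }
        }
    }
}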

One important configuration item not to forget is enabling packet-mode forwarding on the SRX devices. Within the exams and labs the SRX is used as an enterprise router instead of a security device, so the default flow mode should be disabled.

You can do this with the following configuration followed by a reboot:

security {
    forwarding-options {
        family {
            inet6 {
                mode packet-based;
            }
            mpls {
                mode packet-based;
            }
        }
    }
}
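
After the reboot you can verify the forwarding mode from operational mode (a quick check; the exact output differs per Junos release):

show security flow status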

Summary of connections per SRX:

  • fe-0/0/0 connects to management switch
  • fe-0/0/1 connects to SW1 ge-0/0/x
  • fe-0/0/2 connects to SW2 ge-0/0/x
  • fe-0/0/7 connects to fe-0/0/7 on another SRX according to the following mapping:
    • R1 <-> R2
    • R3 <-> R4
    • R5 <-> R6

Summary of connections per EX:

  • ge-0/0/1 connects to R1 fe-0/0/<1-2>
  • ge-0/0/2 connects to R2 fe-0/0/<1-2>
  • ge-0/0/3 connects to R3 fe-0/0/<1-2>
  • ge-0/0/4 connects to R4 fe-0/0/<1-2>
  • ge-0/0/5 connects to R5 fe-0/0/<1-2>
  • ge-0/0/6 connects to R6 fe-0/0/<1-2>
  • ge-0/0/20 connects to SPsw<1-2> Gi1/0/14
  • ge-0/0/22 connects to SW<1-2> ge-0/0/22
  • ge-0/0/23 connects to SW<1-2> ge-0/0/23
  • me0 connects to management switch

The following diagram illustrates how all physical connections are made:


Summary

I hope I was able to give you an insight in how I built my JNCIE-ENT lab set-up and how I’m going to prepare for the written and practical exam(s). If you have any questions please don’t hesitate to comment on this post or ask questions on the community websites that I tipped in an earlier paragraph.

You will find me being active on those community websites as well!

Finally I wish you the best of luck in all of your current and future endeavors!

Stay hungry, stay foolish! 
