VXLAN, MBGP EVPN with ingress replication – Part 2 – Configure VXLAN L2 VNI on a single POD

This is Part 2; Part 1 is here

The example configurations here are based on Cisco Nexus 9K.

The configurations are straightforward and simple. As I said earlier – if you have ever configured MP-BGP address families, this will be super easy for you. This doco describes the L2 VNI only – there will be another doco covering the L3 VNI.

An L2 VNI uses the Type-2 route within EVPN VXLAN – the “MAC with IP advertisement route”.

An L3 VNI uses the Type-5 route within EVPN VXLAN – the “IP prefix route”.
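
For quick reference – on Cisco NX-OS you can filter the EVPN table by route type. A minimal example (standard show commands, assuming the MP-BGP EVPN configuration described later in this doco):

show bgp l2vpn evpn route-type 2
show bgp l2vpn evpn route-type 5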

Following is the network topology design diagram used in this reference doco.

Notes regarding the network topology-

  • 2 x SPINE switches (IP 172.16.0.1 and 172.16.0.2)
  • 2 x LEAF switches (IP 172.16.0.3 and 172.16.0.4)
  • IP address “unnumbered” configured on the interfaces connected between SPINE and LEAF
  • 2 x VXLAN NVE VTEP interfaces on LEAF switches (source IP 172.16.0.5 and 172.16.0.6)
  • SPINE switches are on the OSPF Area 0
  • LEAF switches are on the OSPF Area 1
  • iBGP peering between “SPINE to LEAF” – mesh iBGP peering topology
  • Both SPINE switches are “route reflectors” to the LEAF switches
  • Physical layer connectivity is “SPINE to LEAF” only. NO “LEAF to LEAF” or “SPINE to SPINE” links are required. However, “SPINE to SPINE” links are optional in a single POD and can be leveraged later in a multi-POD or multi-site EVPN design.
  • VNI ID for this POD-1 is 1xxxx; VNI ID for POD-2 will be 2xxxx. VNI IDs will be mapped to VLAN IDs. Example – VLAN ID 10 mapped to VNI 10010, VLAN ID 20 mapped to VNI 10020.
  • “route-distinguisher (RD)” ID number for this POD-1 is 100; RD ID for POD-2 will be 200.

POD-Underlay-Overlay-SinglePOD-Diag1.2

Before we move to the “step by step” configuration – enable the following features on the Cisco Nexus switches.

Features to be enabled on the SPINE switches –

!
nv overlay evpn
feature ospf
feature bgp
!

Features to be enabled on the LEAF switches –

!
nv overlay evpn
feature ospf
feature bgp
feature interface-vlan
feature vn-segment-vlan-based
feature nv overlay
!

Step 1: Setup Loopback IPs on all the SPINE and LEAF switches

We need the “loopback 0” IP address as the router ID for both OSPF and BGP. We will also use this loopback IP address as the “ip unnumbered” source address on the SPINE-LEAF links.

Note: you need to configure OSPF routing first.
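
If the OSPF process does not exist yet, a minimal sketch is below (the process name VXLAN-UNDERLAY matches the rest of this doco and “feature ospf” was already enabled above; the router-id shown is SPINE-01's – adjust per switch; the full process settings are covered in Step 2):

!
router ospf VXLAN-UNDERLAY
  router-id 172.16.0.1
!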

SPINE switches-

!---SPINE-01
!
interface loopback 0
description “Underlay – Interfaces and Router ID”
ip address 172.16.0.1/32
ip router ospf VXLAN-UNDERLAY area 0.0.0.0
!
!

!---SPINE-02
!
interface loopback 0
description “Underlay – Interfaces and Router ID”
ip address 172.16.0.2/32
ip router ospf VXLAN-UNDERLAY area 0.0.0.0
!

LEAF switches-

!---LEAF-01
!
interface loopback 0
description “Underlay – Interfaces and Router ID”
ip address 172.16.0.3/32
ip router ospf VXLAN-UNDERLAY area 1.1.1.1
!
!

!---LEAF-02
!
interface loopback 0
description “Underlay – Interfaces and Router ID”
ip address 172.16.0.4/32
ip router ospf VXLAN-UNDERLAY area 1.1.1.1
!

Step 2: Setup OSPF routing and Ethernet interfaces IP address

A SPINE-LEAF VXLAN fabric is a “full mesh” network – it is hard to track interface IP addresses if every link is configured with a unique address; that is too many IPs and too many IP subnets!! IP address “unnumbered” is a nice way to avoid all those IPs and subnets.

I will be using the same “loopback 0” IP address on all the interconnect interfaces between the SPINE & LEAF switches with the “ip unnumbered” feature.

Interfaces E1/1 & E1/2 connect SPINE and LEAF to each other; the SPINE-to-SPINE connection uses interfaces E1/10 & E1/11 on both switches (as per the above network design diagram).

!---SPINE-01 and SPINE-02
!
interface e1/1,e1/2
no switchport
description “connected to LEAF switches – IP Fabric”
mtu 9216
medium p2p
ip unnumbered loopback0
ip ospf authentication message-digest
ip ospf message-digest-key 0 md5 3 yourOSPFsecret
ip ospf network point-to-point
no ip ospf passive-interface
ip router ospf VXLAN-UNDERLAY area 1.1.1.1
no shutdown
!
!---LEAF-01 and LEAF-02
!
interface e1/1,e1/2
no switchport
description “connected to SPINE switches – IP Fabric”
mtu 9216
medium p2p
ip unnumbered loopback0
ip ospf authentication message-digest
ip ospf message-digest-key 0 md5 3 yourOSPFsecret
ip ospf network point-to-point
no ip ospf passive-interface
ip router ospf VXLAN-UNDERLAY area 1.1.1.1
no shutdown
!
!---SPINE-01 and SPINE-02
!---This is NOT required on a single-POD only solution
!
interface e1/10,e1/11
no switchport
description “connected to SPINE switches – back-to-back IP Fabric”
mtu 9216
medium p2p
ip unnumbered loopback0
ip ospf authentication message-digest
ip ospf message-digest-key 0 md5 3 yourOSPFsecret
ip ospf network point-to-point
no ip ospf passive-interface
ip router ospf VXLAN-UNDERLAY area 0.0.0.0
no shutdown
!
!---all the SPINE & LEAF; adjust the loopback0 IP address for each sw
!
router ospf VXLAN-UNDERLAY
router-id 172.16.0.LOOPBACK0-IP
passive-interface default
!

At this stage – OSPF adjacencies should be formed between the SPINE and LEAF switches and between the SPINE switches.

Verify your OSPF configuration and connectivity between switches-

show ip ospf neighbors
show ip ospf route
show ip ospf database
show ip ospf interface

Make sure you are able to ping between all the SPINE and LEAF switches – for example, sourcing the ping from loopback0 as shown below.
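
For example, from SPINE-01 (IP addresses as per this topology) –

ping 172.16.0.3 source 172.16.0.1
ping 172.16.0.4 source 172.16.0.1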

(Screenshot – “show ip ospf neighbor” from SPINE-01)

OSPF-Neighbors

Step 3: Setup MP-BGP peering across all the SPINE and LEAF switches

I will now configure MP-BGP with the “l2vpn evpn” address family on all the switches; we will have to enable “send-community extended” on all the BGP peers so that the extended communities (route targets) carrying the L2/L3 EVPN routes are exchanged.

  • iBGP peering between SPINE-01 and SPINE-02;
  • iBGP peering from all SPINEs to all LEAFs – the SPINE switches are route reflector servers to the LEAF switches
  • NO LEAF to LEAF BGP peering
!
!---SPINE switches MP-BGP config SPINE-01
!---adjust the RID IP address for SPINE-02
!---iBGP to LEAF switches
!
router bgp 65501
  router-id 172.16.0.1
  log-neighbor-changes
  address-family l2vpn evpn
    retain route-target all
  !
  neighbor 172.16.0.3
    remote-as 65501
    description "LEAF-01 - iBGP peer - RR client"
    password 3 ef6a8875f8447eac
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
      route-reflector-client
  !
  neighbor 172.16.0.4
    remote-as 65501
    description "LEAF-02 - iBGP peer - RR client"
    password 3 ef6a8875f8447eac
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
      route-reflector-client
!
!---LEAF switch LEAF-01
!---adjust the RID IP address for LEAF-02
!---iBGP peering to SPINEs in the same POD only
!
router bgp 65501
  router-id 172.16.0.3
  address-family l2vpn evpn
    retain route-target all
  !
  neighbor 172.16.0.1
    remote-as 65501
    description "SPINE-01 - iBGP peer - RR server"
    password 3 ef6a8875f8447eac
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
  !
  neighbor 172.16.0.2
    remote-as 65501
    description "SPINE-02 - iBGP peer - RR server"
    password 3 ef6a8875f8447eac
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
!

BGP Verification –

show bgp all summary
show bgp all neighbors 172.16.0.1
show bgp all neighbors 172.16.0.2
show bgp all neighbors 172.16.0.3
show bgp all neighbors 172.16.0.4

Make sure BGP peering status is “Established” for all.
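
You can also check the l2vpn evpn address family specifically; for example, on SPINE-01 (neighbor IP as per this topology) –

show bgp l2vpn evpn summary
show bgp l2vpn evpn neighbors 172.16.0.3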

(Screenshot – “show bgp all summary” from SPINE-01)

BGP-all-summary-1.1

As of now, all of the above is typical routing & interface configuration. The next sections describe the VTEP, NVE, VNI & EVPN configuration.

Step 4: Setup NVE interface on the LEAF switches only

The NVE interface is the VXLAN VTEP. It only needs to be configured on the LEAF switches. VXLAN encapsulation happens on the NVE interfaces.

The VXLAN encapsulation/de-encapsulation concept is very similar to MPLS “PE” and “P” routers: a source “PE” router encapsulates traffic with MPLS labels and transports it over the “P” routers to the destination “PE” router. Likewise, the source LEAF switch's NVE VTEP interface encapsulates packets into VXLAN and transports them over the SPINE switches to the destination LEAF switch's NVE VTEP interface.

The NVE interface requires a dedicated loopback interface; we will set up “loopback 1” on both LEAF switches and bind it to the “nve1” interface.

!---LEAF-01
!
interface loopback 1
description “Underlay – NVE VTEP source IP”
ip address 172.16.0.5/32
ip router ospf VXLAN-UNDERLAY area 1.1.1.1
!

!---LEAF-02
!
interface loopback 1
description “Underlay – NVE VTEP source IP”
ip address 172.16.0.6/32
ip router ospf VXLAN-UNDERLAY area 1.1.1.1
!

!---both LEAF-01 and LEAF-02
!
interface nve1
  no shutdown
  description "VXLAN - VTEP interface"
  host-reachability protocol bgp
  source-interface loopback1
!

show-nve-interface

Verification –

>ping the loopback1 IP addresses between LEAF-01 and LEAF-02; make sure they are reachable

show interface nve1; make sure nve1 interface is UP

Step 5: Create VLAN, VNI and configure EVPN

We will create traditional VLAN IDs and names, then associate a unique VNI with each VLAN ID; finally, we will configure the VNIs under the NVE interface and under EVPN.

!---on both the LEAF-01 and LEAF-02
!
vlan 10
name VLAN10-TEST
vn-segment 10010
!
vlan 20
name VLAN20-TEST
vn-segment 10020
!

Now add the above VLAN10 and VLAN20 onto NVE1 and EVPN on both the switches-

!---on both the LEAF-01 and LEAF-02
!
interface nve1
  member vni 10010
    ingress-replication protocol bgp
  member vni 10020
    ingress-replication protocol bgp
!

!---both the LEAF-01 and LEAF-02
!
evpn
  !
  vni 10010 l2
    rd auto
    route-target import 100:10010
    route-target export 100:10010
  !
  vni 10020 l2
    rd auto
    route-target import 100:10020
    route-target export 100:10020
!

At this stage MP-BGP EVPN starts exchanging EVPN routes, and the peer LEAF NVE VTEPs can exchange VXLAN-encapsulated packets.

Verification –

show nve peers; make sure it is showing the remote peer IP & the status is UP
show nve vni; make sure the status is UP

show nve interface nve1
show nve vxlan-params

NVE-PIC-01

The following screenshot shows the NVE port number on Cisco Nexus – UDP/4789 –

VXLAN-port-number

Step 6: Verification – MAC Learning, VLAN, VNI ID, VXLAN, BGP EVPN

This is the final part.

As part of the end-to-end L2 VNI VLAN test & verification – we have configured interface “E1/15” on both LEAF switches as a “trunk”, connected a router to each, and configured VLAN10 on them.

Router-01's interface MAC address is “50:00:00:05:00:00”; it is connected to LEAF-01, so this MAC is local to LEAF-01; the IP address configured here is 192.168.10.1/24.

Router-02's interface MAC address is “50:00:00:06:00:00”; it is connected to LEAF-02, so this MAC is local to LEAF-02; the IP address configured here is 192.168.10.2/24.

Let’s verify MAC address learning on LEAF-01 and LEAF-02 for VLAN10 –

>ping 192.168.10.2 from Router-01; same way ping 192.168.10.1 from Router-02

show l2route mac all

show-l2route-mac-all-LEAF-01

show mac address-table (on the NXOS 9000v this command is >show system internal l2fwder mac)

show-mac-address-table

These commands on LEAF-01 return the following output –

This shows the local MAC “50:00:00:05:00:00” as “Local” on port “E1/15”.

This shows the remote MAC “50:00:00:06:00:00” as “BGP” learned, and the port is “nve-peer1”.

These commands on LEAF-02 return the following output –

show-l2route-mac-all-LEAF-02

show-mac-address-table-LEAF-02

This shows the local MAC “50:00:00:06:00:00” as “Local” on port “E1/15”.

This shows the remote MAC “50:00:00:05:00:00” as “BGP” learned, and the port is “nve-peer1”.

The following BGP l2vpn command will show the EVPN MAC, VNI ID & route type –

show bgp l2vpn evpn vni-id 10010

show-bgp-l2vpn-evpn-vniid-LEAF-01

The above command returns the BGP EVPN details for VNI 10010 (VLAN 10); note the first part “*>l[2]” – this specifies the route type, which is Type-2 for the L2 VNI.

The above verification clearly shows remote “Layer 2 MAC address” learning over BGP – MAC learning over a Layer 3 routing protocol!!

That's all.

VXLAN, MBGP EVPN with Ingress Replication – Part 1 – Basic Facts, Design Considerations and Security

I found too many reference docs on VXLAN; most of them cover early solutions that do not use MP-BGP EVPN and instead handle BUM traffic (broadcast, unknown unicast and multicast) via multicast. I know people who do not want to run mcast in their network!

Here, I focus on VXLAN with MP-BGP EVPN with ingress replication to manage BUM traffic (VXLAN + MPBGP EVPN + Ingress Replication).

So, What Is “Ingress Replication” Compared To “Multicast” Based VXLAN Solution?

The answer is – ingress replication, also called head-end replication, performs unicast delivery of VXLAN-encapsulated packets to the remote VTEPs. Unicast replication requires the source VTEP to deliver the same data to every single remote VTEP in a “one-to-one” fashion – whereas with multicast a rendezvous point (preferably a PIM-SM RP) is defined and all the VTEPs join the group to receive delivery of the VXLAN-encapsulated data in a “one-to-many” fashion. Multicast has lower overhead and can provide faster delivery compared to unicast replication; however, multicast is less secure.
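
To make the difference concrete – on Cisco NX-OS, BUM handling is selected per VNI under the NVE interface. Below is a minimal sketch of both options (the BGP-based ingress replication used throughout this series, and the multicast alternative); the VNI and multicast group values are examples only:

!--- ingress replication (head-end replication) via BGP EVPN
interface nve1
  member vni 10010
    ingress-replication protocol bgp
!
!--- multicast-based BUM handling (requires PIM in the underlay)
interface nve1
  member vni 10010
    mcast-group 239.1.1.10
!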

MP-BGP EVPN is the next generation solution becoming widely popular in Data Center networks (VXLAN EVPN) and Service Provider networks (MPLS PBB-EVPN).

My plan is to create the following step-by-step reference documents for VXLAN EVPN with ingress replication.

  • VXLAN, MBGP EVPN with ingress replication – Part 1 – Basic Facts, Design Considerations and Security
  • VXLAN, MBGP EVPN with ingress replication – Part 2 – Configure VXLAN on a single POD – L2 VNI – here
  • VXLAN, MBGP EVPN with ingress replication – Part 3 – Configure VXLAN on multi PODs – L2 VNI
  • VXLAN, MBGP EVPN with ingress replication – Part 4 – Configure VXLAN on multi PODs – L3 VNI
  • VXLAN, MBGP EVPN with ingress replication – Part 5 – Configure VXLAN on multi PODs including a collapsed POD besides Spine and Leaf PODs

This is the Part 1.

Let’s Get Some Basic Facts About VXLAN

  • the initial specification of VXLAN is described in RFC 7348; it describes the need for overlay networks within virtualized data centers accommodating high-density multi-tenancy (4096+ segments), as traditional VLAN-based segmentation maxes out at 4096
  • so, based on RFC 7348, VXLAN is the solution to get rid of classical ethernet (CE) in a data center and extend VLAN boundaries beyond 4096; ah! NO more VLAN and STP!
  • similar alternatives are TRILL, NVGRE and Cisco OTV; however, none are widely accepted except VXLAN
  • the VXLAN header is 8 bytes; this includes a layer 2 virtual network identifier (VNI), which is 24 bits long
  • a VNI represents a broadcast domain; traditional VLANs are associated with a unique VNI number; you often see the term “bridge-domain”, which is in a sense similar to a VLAN (or multiple VLANs/subnets of a tenant); a VNI can represent a bridge-domain
  • since the maximum number of IEEE 802.1Q VLANs is 4096 – you can have a max of 4096 VNIs per POD (point of delivery); yes – you still use VLANs for segregation!
  • in a large DC environment you have multiple PODs inter-connected per data center or across multiple data centers; thus you can have a high-density multi-tenancy network that goes beyond 4096! Here you need VXLAN VNIs, which give you a 24-bit segment ID (16,777,216 unique network segments)
  • so, VLANs are still there! ah! YES, they are! but VLANs are now local to a POD and/or a switch only; VLAN extension between switches and between PODs is done via VXLAN VNIs; switch-to-switch connections are L3 for VXLAN instead of L2 as in a typical VLAN-based network
  • VXLAN uses UDP instead of TCP, with destination port number 4789
  • L3 connectivity leverages equal-cost multi-path (ECMP) and uses all inter-connect links, which provides maximum throughput and redundancy compared to L2 STP, which does not forward traffic over all links because of STP blocked ports
  • VXLAN encapsulation adds extra overhead on top of the traditional 1500-byte MTU; a VXLAN frame is roughly “1500-byte payload with the original headers + 14-byte outer Ethernet header + 20-byte outer IP header + 8-byte UDP header + 8-byte VXLAN header” – about 50 bytes of extra overhead (see the jumbo-frame snippet after this list)
  • in the real world – VXLAN deployments use a 9K MTU end-to-end; nobody uses 1500 MTU
  • VXLAN requires end-to-end L3 reachability in the underlay network; underlay reachability is provided by an IGP, in most cases OSPF or IS-IS
  • VXLAN encapsulates MAC frames into IP packets and transports them over the L3 network
  • VXLAN is the “overlay networking” that runs on top of the underlay and uses local “VXLAN Tunnel End Point – VTEP” interfaces to encapsulate packets into VXLAN
  • the VTEP is the interface where VXLAN traffic encapsulation and de-encapsulation happen (origination and termination of VXLAN traffic)
  • a VTEP can be hardware based – a dedicated network device capable of encapsulation and de-encapsulation of VXLAN packets; Cisco Nexus/Juniper QFX/Arista are good examples
  • a VTEP can be software based – VXLAN encapsulation and de-encapsulation happen on a software-based virtual network appliance within a virtualization “hypervisor” server; the underlying physical network is totally unaware of VXLAN; VMware NSX is an example of a software-based VXLAN VTEP
  • a software-based VTEP handles only the traffic that traverses its hypervisor host machine; a hardware-based VTEP can handle VXLAN traffic in a much broader space
  • VXLAN EVPN involves a “control plane” that handles MAC address learning (and BUM traffic)
  • VXLAN EVPN supports “ARP suppression”, which can reduce ARP flooding for “silent” hosts/clients (most hosts send GARP/RARP to the network when they come online; silent hosts don't do that)
  • a VXLAN L3 VNI requires an “anycast gateway” on the Leaf switches, which is a shared IP address across all the participating Leaf switches; very similar to other FHRPs (VRRP/HSRP/etc…)
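
As noted in the MTU bullet above – because of the roughly 50-byte encapsulation overhead, VXLAN fabrics are normally built with jumbo frames end to end. A minimal NX-OS sketch (the interface range is an example only; Part 2 of this series already sets mtu 9216 on the fabric links):

!
interface e1/1,e1/2
  mtu 9216
!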

vxlan-header-cisco-com

(VXLAN header details – picture copied from cisco.com)

Security in VXLAN MP-BGP EVPN based VTEP

  • previous multicast-based VTEP peer discovery didn’t have a mechanism for authenticating VTEP peers; in plain English there was “NO” whitelist for VTEP peers!
  • the above limitation presents a major security risk in real-world VXLAN deployments because it allows insertion of a rogue VTEP into a VNI segment!
  • if a rogue VTEP has been inserted into the segment, it can send and receive VXLAN traffic! ah! gotcha!
  • MP-BGP EVPN based VTEP peers are pre-authenticated and whitelisted by BGP; BGP sessions must be established first for a VTEP device to discover remote VTEP peers
  • in addition to the established-BGP-session requirement – BGP session authentication (MD5; stored encrypted in the config) can be added to the BGP peers
  • in addition to BGP session security – IGP security (i.e. authentication) can be added to the “underlay” routing protocol

Few Quick Notes on VXLAN Network Design

  • VXLAN network design doesn’t follow traditional “three layers” network design approach (core – dist – access)
  • VXLAN network design typically has two tiers – Spine and Leaf; this design can grow horizontally “pay as you go”; you can add more Spine and Leaf anytime! No more fixed number of switch ports per POD!
  • you can have “super spine” on the top of Spine switches
  • Leaf switches are connected to Spine switches within the same POD
  • Spine to Spine direct network connections are “not” necessary but they “can be” connected
  • the underlay IGP ensures end-to-end L3 connectivity between the Leaf and Spine switches
  • clients are connected to the Leaf switches (servers, hypervisors, routers etc…)
  • in a multi-POD DC scenario – Spine switches need to be inter-connected (same EVPN control plane across multi-POD); intra-site DCI
  • in a multi-site data centre interconnect (DCI) scenario, “segmented” VXLAN “control planes” are deployed to minimise BUM per data center; inter-DC traffic is handled by VXLAN Border Gateway (BGW) routers
  • in a multi-site DCI scenario, the Border Gateway (BGW) can be configured on the Spine switches (there are many other connectivity models/scenarios available for BGW); in this case Spine DC-x to Spine DC-y are connected back to back over an L3 link, which is very similar to multi-POD Spine-to-Spine connectivity

Typical VXLAN Design Diag

VXLAN traffic flow diagram – inter-switch VLAN traffic follows the L3 path.

VXLAN-VLAN-Path-Diag

Few Notes While Configuring VXLAN on Cisco Nexus NXOS

  • VXLAN EVPN is based on MP-BGP; it is just an extension to MP-BGP, very similar to MPLS VPNv4 or VPLS l2vpn
  • if you have configured MP-BGP MPLS before – you will find the VXLAN EVPN configuration super easy
  • VXLAN VTEP switches are much like the “PE” routers in a typical MPLS network

 

Cisco Nexus vPC and non-vPC VLANs together on the same platform (Nexus hybrid setup) – NXOS v7.0(3)I4(1)

Cisco discontinued “spanning-tree pseudo-information” starting from NX-OS version 7.0(3)I4(1) on the 9000 platforms. So what is the solution for Nexus vPC and non-vPC VLANs on the same platform (hybrid)? Is it no longer going to be supported on NX-OS/9000 platforms?

Although hybrid is not the recommended vPC design for the “aggregation layer”, you find a lot of scenarios where you need both vPC and non-vPC VLANs on the same platform (mostly in mid-size data centres, where for a lot of reasons you can’t deploy traditional Ethernet switches and have therefore considered the Cisco Nexus platforms).

If you carry everything (both vPC VLANs and non-vPC VLANs) over the vPC peer-link (yes, vPC carries orphaned VLANs as well!) and an issue on the vPC peer-link stops vPC from working, the downstream switches connected to the Nexus via non-vPC links/non-vPC VLANs will stop forwarding frames due to STP blocking (any STP – STP/RSTP/PVSTP/MSTP), because they have no information about what happened between the vPC peer Nexus switches in a vPC peer-failure scenario.

Experts suggest that you should have an additional Layer 2 trunk port-channel alongside your vPC peer-link; this Layer 2 port-channel will carry the non-vPC VLANs in case vPC stops working (for whatever reason – it could be a Nexus reboot during maintenance!).

Well, now you have set up a separate Layer 2 trunk port-channel for the non-vPC VLANs and shut down the vPC peer-link to test it – but you find it is still not working as expected! STP is blocking the Layer 2 trunk link – what's the problem? Cisco used to have a solution for this called “spanning-tree pseudo-information”; as mentioned at the beginning, Cisco discontinued this starting from version 7.0(3)I4(1) on the Nexus 9000 platform. So what is your option here? Should you stop using Nexus if you don’t follow a spine-and-leaf design?

Yes – there is an answer to this; there is a small trick to make this work! By discontinuing pseudo-information, Cisco basically made it even easier to configure; you need to do the following –

i. First, set the STP root priority for the VLANs to a lower priority number on one of the Nexus switches (the default priority is 32768, lower is preferred; this should be done on the primary vPC role Nexus switch) and leave STP untouched on the other Nexus switch (the vPC secondary role switch).

ii. Second, on the non-vPC trunk port-channel set “spanning-tree port type normal” on “both” the switches; (“spanning-tree port type network” is recommended for vPC peer-link).

Here is an example config –

On the vPC primary role switch, apply the following -

(config)#spanning-tree vlan 10,20,30,40-50 priority 8192

On both the switches, apply the following -

(config)#interface port-channel10
(config-if)#description "non-vPC trunk port" 
(config-if)#switchport 
(config-if)#switchport mode trunk 
(config-if)#switchport trunk allowed vlan 10,20,30,40-50 
(config-if)#spanning-tree port type normal

Without the above configuration applied, you will find STP blocking the non-vPC Layer 2 trunk link even when the vPC peer-link is shut down. Also, in this example the vPC primary role switch will be the STP “root bridge” because of the lower priority configured (8192).

To test your configuration – shutdown the vPC peer-link, run “show spanning-tree vlan xxx” – you should see STP put the L2 trunk interfaces in forwarding state immediately.
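
A hedged example of the test (the peer-link port-channel number below is hypothetical – use your own, and do this in a maintenance window):

(config)#interface port-channel1
(config-if)#shutdown
(config-if)#end
#show spanning-tree vlan 10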

Here is a vPC and non-vPC VLANs on the same platform diagram –

CiscoNexus-Hybrid

 

Cisco UCS Platform Emulator – UCSPE 3.1(2ePE1)

Cisco UCS Platforms are expensive kit to play with. A UCS Platform is not just a standalone router, switch or firewall that could easily be emulated on a PC – it is a bunch of kit interconnected together to deliver the UCS platform. UCS is not a server – it’s a system or a platform compared to traditional computing; UCS brings unified connectivity (LAN/SAN/FC/FCoE/HBA/others) and stateless computing together.

Cisco has come up with a solution to help engineers to get their hands dirty on UCS Platforms – the solution is called Cisco UCS Platform Emulator (UCSPE). The latest UCSPE is version 3.1(2ePE1). Cisco made UCSPE available to everyone – you only need to have a Cisco login.

The whole UCSPE comes with the following-

i. 2 x Fabric Interconnect (Model: UCS-FI-6332-16UP)
ii. 2 x FEX (Model: Nexus 2348UPQ N2K-C2348UPQ)
iii. 3 x UCS 5108 Chassis with 12 x different model blades
iv. 1 x UCSC C3X60 server (Model: UCSC-C3X60)
v. 7 x UCS C series servers (220/240/460)

The above is enough to emulate a complete, decent-size UCS Platform!

Yes! You can create a bunch of full-fledged “Service Profiles” with different configuration settings and apply them to the servers; you can also configure the fabric interconnect ports with different options (LAN uplink/Server uplink/FC/FCoE/NAS/Port Channels etc.)

UCSPE comes in “OVA” and “VMware VMX/VMDK” format (in a ZIP file); you can run it on VMware Workstation or Fusion (I use Fusion).

The pre-defined OVA/VMX requires only one (01) vCPU, 1024MB memory and 3 x vNICs.

You need 3 x IP addresses (from DHCP or static) to make it accessible – one IP address for Fabric Interconnect “A”, the second for Fabric Interconnect “B”, and the third for the VIP of the FI-A/FI-B cluster.

That's all! It's a great tool for candidates studying towards CCNA/CCNP/CCIE Data Centre certifications as well.

UCSPE download link is the following:
https://communities.cisco.com/docs/DOC-71877

Following are few screenshots:

UCSPE-VM-1

[UCSPE VM console]

UCSPE-Platform-01

[UCSPE devices list. They covered a lot devices in here! The red arrow is showing the icon of UCSM – you need to click on this to launch UCSM]

UCSPE-Platform-02

[UCSM – Login Screen]

UCSPE-Platform-Topology-01

[UCSM- The Topology]

UCSPE-Platform-Fabric-A-01

[UCSM- FI “A”]

UCSPE-Platform-01-5108-Chassis

[UCSM – UCS 5108 Chassis]

 

Cisco UCS 5108 Chassis Power Policy Options and Redundancy Demystify

The Cisco UCS server chassis 5108 comes with three (03) different Power Policy options; UCS power management is efficient and energy-saving by default, but NONE of the policy option explanations say which PSUs will be fully ON and which PSUs will be put into Power Saving Mode. This is what the official Cisco document says about these options:

  • Non Redundant – All installed power supplies are turned on and the load is evenly balanced. Only smaller configurations (requiring less than 2500W) can be powered by a single power supply.
  • N+1 – The total number of power supplies to satisfy non-redundancy, plus one additional power supply for redundancy, are turned on and equally share the power load for the chassis. If any additional power supplies are installed, Cisco UCS Manager sets them to a “turned-off” state.
  • Grid – Two power sources are turned on, or the chassis requires greater than N+1 redundancy. If one source fails (which causes a loss of power to one or two power supplies), the surviving power supplies on the other power circuit continue to provide power to the chassis.

Probably the above definitions applied to older UCS versions and not the latest. Also, I couldn’t find details on what exactly happens with power redundancy when you have 2 x or 4 x PSUs installed and your power load is not very high because not all the blades are installed and functional.

Cisco-UCS-PowerPolicy-1

Following is what I captured regarding which PSUs will be ON and which PSUs will be put into Power-Saving Mode when you set the different “Power Policy” options – this was done on a four (04) PSU server chassis. Changing the Power Policy takes effect immediately (there might be a 10-20 second delay to refresh the Web GUI) and it doesn’t require any system reboot.

The assumption is that all four (04) PSUs are installed and connected to power sockets; also the chassis is running 2 blades.

When the Power Policy is set to N+1, this is what happens:

PSU1 – ON
PSU2 – ON
PSU3 – OFF (Power Saving Mode)
PSU4 – OFF (Power Saving Mode)

When the Power Policy is set to GRID, this is what happens:

PSU1 – ON
PSU2 – OFF (Power Saving Mode)
PSU3 – ON
PSU4 – OFF (Power Saving Mode)

When the Power Policy is set to Non-Redundant, this is what happens (only one PSU is ON! not ALL):

PSU1 – ON
PSU2 – OFF (Power Saving Mode)
PSU3 – OFF (Power Saving Mode)
PSU4 – OFF (Power Saving Mode)

Data centre racks are equipped with two power rails – A (left-hand side) & B (right-hand side) – for redundancy. Now the interesting thing is – your physical power connections must be in line with the UCS Power Policy option, otherwise your blades can be rebooted if there is a power issue on “power rail A”.

You can have only two different combinations of power connections to connect all four (04) PSUs to power rails A & B. The combinations are as follows –

Option 1:
PSU1/PSU2 (first two) connected to > power rail A
PSU3/PSU4 (last two) connected to > power rail B

Option 2:
PSU1/PSU3 (odd numbers) connected to > power rail A
PSU2/PSU4 (even numbers) connected to > power rail B

I had N+1 configured with “Option 1” and power issue due to maintenance on rail A rebooted my blades!

This is what happens – during a power failure on rail A, either “you are saved” or “you are not saved”!

GRID Mode with (Option 1) PSU1/PSU2 to A & PSU3/PSU4 to B; you are saved!
PSU1 – A ON
PSU2 – A OFF (Power Saving Mode)
PSU3 – B ON
PSU4 – B OFF (Power Saving Mode)

GRID Mode with (Option 2) PSU1/PSU3 to A & PSU2/PSU4 to B; you are NOT saved
PSU1 – A ON
PSU2 – B OFF (Power Saving Mode)
PSU3 – A ON
PSU4 – B OFF (Power Saving Mode)

Screenshot of GRID mode following –

Cisco-UCS-PSU-GRID-Pic1

N+1 Mode (Option 1) PSU1/PSU2 to A & PSU3/PSU4 to B; you are NOT saved!
PSU1 – A ON
PSU2 – A ON
PSU3 – B OFF (Power Saving Mode)
PSU4 – B OFF (Power Saving Mode)

N+1 Mode (Option 2) PSU1/PSU3 to A and PSU2/PSU4 to B; you are saved!
PSU1 – A ON
PSU2 – B ON
PSU3 – A OFF (Power Saving Mode)
PSU4 – B OFF (Power Saving Mode)

Screenshot of N+1 mode following –

Cisco-UCS-PSU-N1-Pic1

Non-Redundant Mode (Option 1) PSU1/PSU2 to A & PSU3/PSU4 to B; NOT saved!
PSU1 – A ON
PSU2 – A OFF (Power Saving Mode)
PSU3 – B OFF (Power Saving Mode)
PSU4 – B OFF (Power Saving Mode)

Non-Redundant Mode (Option 2) PSU1/PSU3 to A & PSU2/PSU4 to B; NOT saved!
PSU1 – A ON
PSU2 – B OFF (Power Saving Mode)
PSU3 – A OFF (Power Saving Mode)
PSU4 – B OFF (Power Saving Mode)

Screenshot of Non Redundant mode following –

Cisco-UCS-PSU-NonRedundant-1.jpg

Cisco IOS Events to Splunk – Track IOS Command Execution History

Cisco IOS event details can be sent to an external system via “syslog”. Both the Splunk server itself and the Splunk Universal Forwarder can act as a syslog server to accept logs from Cisco IOS devices.

To add more cream to the Splunk log consolidation solution for Cisco IOS devices – there are a few Splunk plugins already available on the Splunk App store! These plugins display IOS events on nice, colorful dashboards with graphs & charts.

Let’s talk about how we can get this solution in place.

The technical dependencies for this solution are the following –

1. Cisco IOS devices (routers, switches, wlc, asa) configured to send IOS event to Splunk via “syslog”
2. Splunk Indexer (actually this is the Splunk server)
3. (optional) to get nice dashboards it needs two Splunk Apps – (i)Cisco Networks Add-on (TA-cisco_ios) (ii)Cisco Networks (cisco_ios)

Regarding the solution design, there are two options, as follows –

1. Send logs to Splunk via the Splunk Universal Forwarder; this design suits a large infrastructure very well. The Splunk Universal Forwarder can act as the local “syslog” server for the IOS devices; picture below –

splunk-uf-pic-1

2. Send logs directly to the Splunk server –

splunk-server-pic-1

The installation procedure is as follows –

Step 1: Configure Cisco IOS to Send Logs to Splunk “syslog”

Following is an example configuration on a Cisco router –

router# config t
router(config)# logging trap notifications
router(config)# logging 1.1.1.1   ;IP address of the Splunk syslog server – if syslog is running on a port other than UDP 514, it needs to be specified here (example below)
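
If your Splunk syslog listener runs on a port other than UDP 514, the port can be set in the logging command; a hedged example (port 5514 is only an illustration) –

router(config)# logging host 1.1.1.1 transport udp port 5514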

The following commands will send Cisco IOS command execution history to syslog –

router(config)# archive
router(config-archive)# log config
router(config-archive-log-cfg)# logging enable
router(config-archive-log-cfg)# logging size 1000
router(config-archive-log-cfg)# hidekeys ;this will not send passwords to syslog
router(config-archive-log-cfg)# notify syslog
router(config-archive-log-cfg)#exit

Step 2: Configure Splunk or Splunk Universal Forwarder to Accept Logs on UDP://514

There are multiple ways to do this. Adding a new listener & sourcetype to “inputs.conf” works for both the universal forwarder and a Splunk server running on any platform.

On Linux/Unix the default location of this file is – $SPLUNK_INSTALLATION_DIR/etc/system/local/

On Windows the default location of this file is – x:\Program Files\SplunkUniversalForwarder\etc\system\local\

Add the following to the “inputs.conf” file –

[udp://514]
sourcetype = cisco:ios

Restart the “splunk” service or the “SplunkUniversalForwarder” service to make this change take effect.
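
For example (paths and service names assume a default installation – adjust for your environment) –

# Linux/Unix
$SPLUNK_INSTALLATION_DIR/bin/splunk restart

# Windows (Universal Forwarder service)
net stop SplunkForwarder
net start SplunkForwarder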

If you add “sourcetype = syslog” – this will also work. The “Cisco Network Add-on (TA_cisco-ios)” transforms Cisco syslog to “cisco:ios” sourcetype automatically.

At this stage you should start getting logs into Splunk. Execute some random commands on a Cisco IOS device and search for sourcetype=”cisco:ios” on the Splunk search tab – you should see logs similar to the following –

splunk-search-ciscoios-2

Step 3 (optional): Install Splunk Cisco Apps to Display IOS Events on Dashboards

Install the following two Apps from “Apps > Find More Apps > search Cisco” –

  1. Cisco Network Add-on (TA-cisco_ios)
  2. Cisco Networks (cisco_ios)

Installation is very straight forward – just click on the icon to install it.

If you are still not seeing any logs on the Cisco Networks dashboard – this might be an incorrect “sourcetype” issue where “TA-cisco_ios” is not doing the sourcetype transformation – in this case change your sourcetype to “cisco:ios” manually, or log a support case with Splunk support to get TA-cisco_ios fixed for you.

You should be able to see the following on Dashboards –

(the main dashboard)
splunk-cisco-dashboard

(command execution history – who has done what?)
splunk-cisco-exechistory

There is a lot more you can find on this dashboard – explore it.

Cisco New Aironet 1700 Series & WLC Software Compatibility Matrix

Cisco recently released Aironet 1700 series access points which support high speed WiFi IEEE 802.11ac. One of the key specifications of this 1700 series – they “only” work with Cisco Unified Wireless Network Software Release 8.0 or later.

It’s now time to upgrade your WLC to version 8.x to get this work for you.

Here is the latest APs & WLCs software compatibility matrix (as of December 15, 2014) –

cisco-aironet-wlc-software

Details @ http://www.cisco.com/c/en/us/td/docs/wireless/compatibility/matrix/compatibility-matrix.html

Cisco AIP SSM Email Alert – Cisco IPS Manager Express (IME)

Cisco AIP SSM modules are pluggable hardware modules that add advanced intrusion prevention security services (IDS/IPS) to Cisco ASA 5500 series firewalls.

Although lots of AIP-SSM configuration parameters can be set via ASDM > IDM (Cisco IPS Device Manager) or via the CLI – there is no option in IDM or the CLI to send security events or security reports via email.

So, how do I know what is happening on the IDS/IPS? Is the IPS device capable of detecting threats? Is the IPS blocking attacker IP addresses?

The answer is – there is a separate piece of software called Cisco IPS Manager Express (IME) to manage, configure and send email alert notifications for AIP-SSM modules. This software needs to be installed on a Windows machine. As of today the latest version is 7.2.7. Supported Windows platforms are – Windows Vista Business+/XP Pro/Windows 8+/Windows 2003 R2/Windows 2008 and above.

Apart from all ASA AIP-SSM modules, the IME software also supports the following Cisco IPS hardware platforms – 4240, 4255, 4260, 4270-20, 4345, 4360, 4510 and 4520.

Here is the download URL (you need valid Cisco login),

https://software.cisco.com/download/type.html?mdfid=282052550&catid=null

Installation is very straightforward; start the installation > follow next, next and finish.

Once the IME installation is finished, add all of your AIP-SSMs or IPS devices to the IME console via IP address. Make sure the IME Windows machine is able to communicate with the AIP-SSM or IPS device's management interface IP address. You can have a bunch of IPS devices under one IME.

Setting Up Email Notification

This is a very easy task. Open the IME console > click on “Tools” > click on “Preferences”; enter your SMTP server details under the “Email Setup” tab; screenshot –

IME-EMailSetup

You should send a test email to confirm that IME can send email OK.

Click on the next tab “Notifications” for IDS/IPS security events – configure your preferred notification parameters here; screenshot-

IME-Notifications

Lastly, you might want to see consolidated security events in a report – such as what happened in the last 24 hours, last 7 days or last 30 days; go to the next tab called “Reports” – all the report parameters are here; this will send a PDF report with a colorful presentation of the data in graphs and charts; screenshot –

IME-Reports

Questions again –

i. You have configured all the email notification parameters – do you need to keep IME running on the desktop, or can you close the IME console and log off?

Answer: Yes – you can close the IME console and log off from the Windows computer; IME keeps running in the background as a Windows service.

ii. You have added 4 IPS devices to your IME – is email alert notification working for ALL of them?

Answer: Yes – email notification is a global setting within IME that applies to ALL IPS devices that have been added to the console. There is no option to configure email notifications for an individual IPS device within the same IME console.

Cisco IOS Site-to-site IPSec VPN with VRF-lite

Just a few years ago, if there was a requirement to connect to destinations with the same IP network addresses, or for low-level network segregation – the solution was to get separate network devices. These days the same can be done on a single hardware platform using VRF (VRF-lite).

On the server platform it's virtualization everywhere these days; why not VRF-lite in networking then! I have seen lots of routers that never use more than 50% of their capacity! This saves us the following –

i. Buying new router hardware
ii. Less power consumption, less power outlet
iii. Less number of switch ports required
iv. Overall high gain total cost of ownership

That’s why I have started implementing VRF-lite on all my new implementations! Why “all” – because if any new requirement comes into the picture I can still use the same device; no need to reconfigure the existing platform or buy new devices.

In my experience so far – Cisco IOS supports all IP features with VRF-lite, such as static routing, dynamic routing, BGP, site-to-site VPN, NAT and packet-filtering firewall. On the HP Comware5 platform (A-Series, 5xxx) VRF-lite doesn’t support Layer-3 packet filtering – other than that it supports most IP services.

Let’s talk about IPSec site-to-site VPN with VRF-lite. Following are the key configurable components of a site to site IPsec VPN –

  1. Remote peer with secret keys
  2. IKE Phase 1 security details
  3. IKE Phase 2 security details
  4. Crypto map
  5. NAT
  6. Access List

In a VRF environment – the whole VPN concept and the commands remain the same, except for the following items where we specify network addresses –

1. Remote peer & keys – the remote peer is reachable via a specific VRF domain; instead of the global “key” we need to configure a “keyring” with the specific VRF domain name.
5. NAT – the internal source address belongs to a specific VRF domain; we need to specify the VRF domain name in the NAT rules.
6. Although the “access-list” contains IP addresses – no VRF name needs to be specified here.

Following are the command syntaxes for remote peer with VRF details –

(config)#crypto keyring tunnelkey vrf my-vrf-A
  (config-keyring)#pre-shared-key address 10.100.200.1 key 6 mysecretkey
 
 (config)#crypto keyring tunnelkey vrf my-vrf-B
  (config-keyring)#pre-shared-key address 20.100.200.1 key 6 mysecretkey

Without VRF the syntax is (this is called “global key”)-
(config)#crypto isakmp key mysecretkey address 10.100.200.1
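
If you use ISAKMP profiles, the keyring can be tied to the VRF there as well; a minimal sketch under stated assumptions (the profile name is illustrative, and the inside and front-door VRF are assumed to be the same, which is typical for VRF-lite) –

(config)#crypto isakmp profile ike-prof-vrf-A
  vrf my-vrf-A
  keyring tunnelkey
  match identity address 10.100.200.1 255.255.255.255 my-vrf-A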

Following are the command syntaxes for NAT rules with VRF domain name –
(config)#ip nat inside source static my_src_ip  my_nat_inside_global_ip vrf my-vrf-A

(config)#ip nat inside source static 192.168.1.10 10.100.200.10 vrf my-vrf-A

Show configuration commands for the above are –

#show crypto isakmp key
#show ip nat translations vrf my-vrf-A

If the above are not specified correctly – you might see errors like the following in the router log:

No pre-shared key with 10.x.x.x!
Encryption algorithm offered does not match policy!
atts are not acceptable. Next payload is 3
phase 1 SA policy not acceptable!
deleting SA reason “Phase1 SA policy proposal not accepted” state (R) MM_NO_STATE (peer 10.x.x.x)

 

Cisco ASA 5500 series recommended IOS upgrade path

You might encounter problems while attempting to upgrade a Cisco Adaptive Security Appliance (ASA) to Version 8.4.7 or later, or to Version 9.1.3 or later. Below is the recommended upgrade path that resolves the problem.

ASA-firmware

Pre-8.3 ASA NAT rule syntax is different from version 8.4 and later. This upgrade path will automatically convert pre-8.3 NAT rule syntax to the 8.4-and-later format.
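
As an illustration of the syntax change (a hedged sketch – the object name and addresses are examples only), a pre-8.3 static NAT such as

static (inside,outside) 203.0.113.10 192.168.1.10 netmask 255.255.255.255

gets converted into the 8.3-and-later object-based form, roughly –

object network obj-192.168.1.10
 host 192.168.1.10
 nat (inside,outside) static 203.0.113.10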

A few well-known errors are “No Cfg structure found in downloaded image file” and “no NAT rule found after the upgrade”.