Network Integration
[Figure: ISR4K with a UCS-E SM-X module. The router's route/forwarding processor (IOS-XE control and data planes, 3x1 GE WAN interfaces) and the NIM/SM module slots attach to an internal MultiGigabit Fabric. The UCS-E module attaches to the same fabric through 2x1 GE backplane links into its hypervisor/vSwitch and a 1 GE link to its BMC (CIMC); the module's own x86 processor hosts apps and VNFs and exposes 2x1 GE external ports.]
Cisco UCS E-Series server
Network options to steer VM traffic
• Route traffic to and from VMs via the backplane or the external ports
• Best practice depends on the application:
  − Network functions (e.g., WAAS, FireSIGHT) are typically in the path of the router WAN/LAN interfaces: use the backplane
  − Applications (POS, AD/DNS, print/file) only need access to the LAN: use the external ports
ISR4000 backplane config examples

Using IP unnumbered (basic config):

interface g0/0/0
 ip address 10.0.0.1 255.0.0.0
!
ucse subslot 1/0
 imc access-port shared-lom console
 imc ip address 10.0.0.2 255.0.0.0 default-gateway 10.0.0.1
!
interface ucse 1/0/0
 ip unnumbered g0/0/0
!
ip route 10.0.0.2 255.255.255.255 ucse 1/0/0
Using an SVI:

interface vlan 1
 ip address 10.0.0.1 255.0.0.0
!
ucse subslot 1/0
 imc access-port shared-lom console
 imc ip address 10.0.0.2 255.0.0.0 default-gateway 10.0.0.1
 platform switchport 0 svi
 ! Enabling/disabling the SVI on the UCSE needs an OIR or router reload
!
interface ucse1/0/0
 switchport mode trunk
!
! Best practice: spanning-tree enablement and config
spanning-tree mode rapid-pvst
spanning-tree vlan 1 priority 24576
Using a dedicated subnet:

ucse subslot 1/0
 imc access-port shared-lom console
 imc ip address 10.0.0.2 255.255.255.0 default-gateway 10.0.0.1
!
interface ucse 1/0/0
 ip address 10.0.0.1 255.255.255.0
!
end
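A quick way to validate any of these variants is to check the backplane interface and ping the CIMC from the router; a minimal sketch, reusing the addresses from the examples above:

! Verify the backplane interface is up and the CIMC answers
show interfaces ucse1/0/0
ping 10.0.0.2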
Cisco UCS E-Series server
How server interfaces map to virtual NICs

[Figure: the same server interfaces as seen from Cisco IOS/IOS-XE, from the CIMC CLI/GUI, and in the VMware ESXi networking settings, showing how each maps to a vmnic.]
Note: a double-wide UCS E-Series server will have a fourth interface, labeled ge3. This is an external-facing interface that maps to vmnic3 on the virtual network side.
• The backplane network interfaces can be monitored via the installed OS and via router monitoring features
• The backplane interfaces support router BDI, sub-interface, VLAN, and SVI configuration, along with other IOS/IOS-XE features (see the sketch below)
• The external front-facing network interfaces are only accessible by the server and can only be monitored by the installed OS
• On a double-wide server you can configure NIC teaming using the two front-facing interfaces to create redundancy or increase bandwidth
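Where VM traffic needs to be segmented over the backplane, dot1q sub-interfaces on the ucse interface are the usual construct (the service-chaining example later in this section uses the same pattern). A minimal sketch; the VLAN IDs and addresses are illustrative, not taken from the original:

! Hypothetical VLAN IDs and subnets, for illustration only
interface ucse1/0/0.10
 description VM traffic, VLAN 10, over the backplane
 encapsulation dot1Q 10
 ip address 192.168.10.1 255.255.255.0
!
interface ucse1/0/0.20
 description VM traffic, VLAN 20, over the backplane
 encapsulation dot1Q 20
 ip address 192.168.20.1 255.255.255.0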
Service Chaining Applications

[Figure: service chain across the Cisco® ISR chassis and the UCS-E server module. WAN traffic enters the router at GE 0/0/0; backplane interfaces UCSE1/0/0 (BDI 10) and UCS-E1/0/1 (BDI 20) map to vmnic0 and vmnic1 on the ESX host. vWAAS hangs off vSwitch0, vASA bridges its outside vNIC on vSwitch1 to its inside vNIC on vSwitch2, and vWSA and vWLC sit on vSwitch2; vmnic2 maps to the external GE 2 port toward the LAN switch.]

1. Ingress WAN traffic from the ISR WAN port is redirected to vWAAS running on the UCS®-E (see the WCCP sketch after these steps)
2. vWAAS redirects the traffic back to the ISR router
3. Standard routing carries traffic from vWAAS to BDI/VLAN 20 on the UCS-E blade
4. Traffic is routed to the vASA outside interface, which sits on its own internal switch
5. Traffic is filtered and only authorized traffic is allowed out to the vASA inside network
6. vWSA and miscellaneous LAN apps are installed behind the firewall so they are accessible to LAN devices
7. All LAN traffic accesses the LAN apps through the physical external GE 2 port on the UCS-E module
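Steps 1 and 2 imply WCCP redirection between the router and vWAAS. A minimal sketch of what that could look like: service group 61 matches the EEM example later in this section, but the 61/62 pairing, interface names, and addresses are assumptions based on common WAAS practice, not spelled out in the original:

! Sketch only: 61/62 is the conventional WAAS pairing; addresses illustrative
ip wccp 61
ip wccp 62
!
interface GigabitEthernet0/0/0
 description WAN intfc
 ip wccp 62 redirect in
!
interface GigabitEthernet0/0/2
 description LAN intfc
 ip wccp 61 redirect in
!
interface ucse1/0/0.10
 description wccp WAAS (BDI/VLAN 10)
 encapsulation dot1Q 10
 ip address 10.10.10.1 255.255.255.0
 ip wccp redirect exclude in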
Configuration Example: vWAAS + vNGIPS

[Figure: WAN interface GE 0/0/0 (description "WAN intfc") and interface ucse 1/0/0.10 (description "wccp WAAS", toward vWAAS) sit in the ISR global route table. The SourceFire IPS bridges the global route table side (interface ucse 1/0/0.100, description "GRT LAN", 192.168.24.1/30) to the "inside" VRF side (192.168.24.2/30); interface ucse 1/0/1.200 (description "VRF inside LAN", 192.168.25.1/24) serves the LAN access switch, 10.0.1.0/16.]
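A sketch of the router-side interfaces the figure implies; the VRF name "inside" and the addressing come from the figure and the EEM example below, while the VLAN IDs (taken from the sub-interface numbers) are an assumption:

! Sketch: VLAN IDs assumed to match the sub-interface numbers
ip vrf inside
!
interface ucse 1/0/0.100
 description GRT LAN
 encapsulation dot1Q 100
 ip address 192.168.24.1 255.255.255.252
!
interface ucse 1/0/1.200
 description VRF inside LAN
 encapsulation dot1Q 200
 ip vrf forwarding inside
 ip address 192.168.25.1 255.255.255.0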
Using IP SLA and an EEM script to provide fail-open backup if the IPS service fails
• IP SLA continuously monitors the connection across FirePOWER
• If connectivity fails, the EEM script moves the LAN-facing router GE interface into the global route table and sends a "fail" email notification
• During an IPS failure, LAN devices can still reach the outside, but have no IPS/IDS protection
• Once the IPS is back online, the IP SLA ping succeeds and activates a second EEM script
• The second EEM script reconfigures the LAN-facing router GE interface back into the "vrf inside" to force traffic across FirePOWER
Cisco IP SLA ping and EEM script
IP SLA ping config:

track 1 ip sla 1
 delay down 3
!
ip sla 1
 icmp-echo 192.168.24.1 source-ip 192.168.24.2
 vrf inside
 threshold 500
 timeout 1000
 frequency 2
ip sla schedule 1 life forever start-time now
!
end

IPS down EEM script config:

event manager environment _email_to [email protected]
!
event manager applet ipsla_ping-down
 event syslog pattern "1 ip sla 1 state Up -> Down"
 action 1.0 cli command "enable"
 action 1.5 cli command "config term"
 action 2.0 cli command "interface g0/0/2"
 action 2.5 cli command "no ip vrf forwarding"
 action 2.6 cli command "ip address 192.168.25.1 255.255.255.0"
 action 2.7 cli command "ip nat inside"
 action 2.8 cli command "ip wccp 61 redirect in"
 action 3.0 cli command "end"
 action 3.1 cli command "wr mem"
 action 4.0 mail server "$_email_server" to "$_email_to" from "$_email_from" subject "$_event_pub_time: IPS down!" body "$_syslog_msg"
 action 4.1 syslog priority notifications msg "priority" facility "state Up -> Down - Mail Sent"

IPS up EEM script config:

event manager applet ipsla_ping-up
 event syslog pattern "1 ip sla 1 state Down -> Up"
 action 1.0 cli command "enable"
 action 1.5 cli command "config term"
 action 2.0 cli command "interface g0/0/2"
 action 2.5 cli command "ip vrf forwarding inside"
 action 2.6 cli command "ip address 192.168.25.1 255.255.255.0"
 action 2.7 cli command "no ip nat inside"
 action 2.8 cli command "no ip wccp 61 redirect in"
 action 3.0 cli command "end"
 action 3.1 cli command "wr mem"
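Once the track, the SLA probe, and the applets are in place, standard show commands confirm they are wired up; a minimal sketch:

show ip sla statistics 1
show track 1
show event manager policy registered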
Server Performance
vWAAS and FirePOWER tests
• UCS-E M2 with 16 GB memory and 2 x SAS900 drives (no RAID)
• Throughput numbers are aggregate (in + out)
• Traffic profile is the SFR traffic profile (mixed traffic), about 1/3 : 2/3
• WAN condition: 1G, 20 ms RTT delay, 0.01% loss
• In the "No Service" test case, interface speed limited the throughput
• FirePOWER in IDS mode: the router replicates traffic, which is quite CPU intensive

UCSE 140S-M2              LAN (Mbps)   WAN (Mbps)   Router DP CPU %   UCS-E CPU %
No Service                1200         1180         12                n/a
vWAAS                     617          308          29                97
FirePOWER (IDS)           648          600          98                96
vWAAS + FirePOWER (IDS)   365          190          89                100

Example: vWAAS test
[Figure: an ISR4451 (2 Gbit/s) between LAN and WAN; the diagram shows 435 Mbit/s and 182 Mbit/s on the LAN side against 180 Mbit/s and 128 Mbit/s on the WAN side.]
High Availability
Cisco UCS E-Series and Cisco ISRs
Power & cooling: the E-Series are "power-sucking aliens"
• A soft reload of the ISR router does not affect the UCS E-Series (SM-Xs, EHWICs & NIMs!)
• A hard reset or power-down of the router will cause the E-Series to power down
• ISR router dual power supplies:
  − 3900 and 4400 Series can have a built-in dual power supply
  − 2900 Series ISRs can have an external RPS 2300 power supply
  − 1900 and 4300 Series ISRs have no power supply redundancy
Online Insertion and Removal (OIR):
• Supported on 3900 and all 4000 Series ISR platforms
• Hard drives on the UCS E-Series can be removed and installed without powering down the blade or the router (note: RAID disks would have to be rebuilt)
Disaster Recovery
• To recover after a disaster you need to back up your storage to the datacenter as well
  − Use technologies like VMware vSphere replication to set up automatic backup of data between E-Series servers and the data center
  − Backup is asynchronous, often done nightly after hours
Redundant Storage with StorMagic
A one-box, two-server solution
• Requires a 2-server cluster; a centralized vCenter can act as tiebreaker between two out-of-sync servers
• Uses direct-attached HDDs/SSDs to create a shared-storage iSCSI target
• Virtual machine files (.vmdk, .vmx, .nvram, etc.) are mirrored across servers
• Leverages the UCS E-Series backplane interfaces to carry management, mirroring, and iSCSI network traffic (see the sketch below)
• If one server fails, the VMs survive, running on the available server
• When the failed server is recovered, SvSAN communicates with the neutral storage host to determine which host contains the most up-to-date data, and begins to resynchronize
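One way to separate the management, mirroring, and iSCSI traffic over the backplane is a VLAN/sub-interface per traffic class on the ucse interface. A minimal sketch; the VLAN IDs and subnets are hypothetical, since the original does not specify them:

! Hypothetical VLAN IDs and subnets, for illustration only
interface ucse1/0/0.30
 description SvSAN management
 encapsulation dot1Q 30
 ip address 172.16.30.1 255.255.255.0
!
interface ucse1/0/0.40
 description SvSAN mirroring and iSCSI
 encapsulation dot1Q 40
 ip address 172.16.40.1 255.255.255.0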
Cisco UCS E-Series application survivability: box-to-box redundancy
Should be used if:
− 2 routers are available for redundancy
− Double-wide servers are used
− Switch-module SM-Xs are used
Each Cisco ISR can host a Cisco UCS® E-Series server:
− Network connectivity between UCS E-Series servers is done using the front-facing GE interfaces for data replication (mirror) traffic
− Each E-Series server runs the SvSAN VM with data mirroring
− Both network and application survivability are delivered
How StorMagic works from a high level

[Figure: two UCS-E modules, each running ESXi with its VMs and a StorMagic VSA, with mgmt links on each host and a StorMagic mirror between them; a VMware StorMagic plugin manages the pair, and a tiebreaker resolves dual-active scenarios.]
SSD caching
The current version is a "write-back" cache:
• Improves overall I/O write performance significantly
• Delivers low-latency access, improving application response times
• Reduces the number of I/Os going directly to disk

• All new data is written to SSD
  − Efficient and flexible: data is written as variable-sized extents
• Extents are merged and coalesced in the background
  − Data in cache is flushed to hard disk regularly in small bursts
• Data is retained in cache until space is required
  − Enables data previously written to be read from cache, improving read performance
• In ROBO environments the amount of data written per day is relatively low
  − Ranging from a few tens of gigabytes to hundreds of gigabytes
  − A 250 GB SSD could cache a day's worth of data

[Figure: write-back cache flow. 1. Data is written directly to the cache. 2. The write is acknowledged; data in cache is "dirty". 3. Data is flushed from cache to persistent storage. 4. Flush complete; data in cache is marked "clean".]