Michael McNamara | https://blog.michaelfmcnamara.com | technology, networking, virtualization and IP telephony

Avaya Contact Center Agent Desktop Display Quietly Crashing
https://blog.michaelfmcnamara.com/2013/05/avaya-contact-center-agent-desktop-display-quietly-crashing/
Fri, 17 May 2013

I thought I would share this story… it’s another tale of “it’s the network’s fault” when in reality the problem has nothing to do with the network, yet it falls to the network engineers and consultants to prove the point beyond a reasonable doubt.

I can’t tell you how it irks me to hear people say “it’s the network’s fault” when they have absolutely no clue how anything works and no data to support their wild claims. I would think a lot more of them if they just said, “I’m sorry, I haven’t got a frigging clue what’s happening here, but can you help me?” And of course the problem always needs to be resolved yesterday, as if the building itself were on fire.

We have multiple Avaya Aura Contact Center (formerly Nortel Symposium) installations. At one of these locations we began receiving trouble tickets reporting that the Agent Desktop Display (ADD), a small application that listens to a Multicast stream and displays a ticker-tape banner of the contact center queue details, was quietly closing after only a few minutes of running on the local desktop/laptop. The local telecom technician verified that the problem only occurred on a specific floor; users on the other floors had no issues. A quick check of the core Avaya Ethernet Routing Switch 8600 and the edge Avaya Ethernet Routing Switch 5520s indicated that IGMP and PIM were configured and working properly.

Note: A few years back I detailed how to configure IGMP, DVMRP and PIM for Multicast routing.

I asked the local telecom technician to perform a packet trace so I could see what was happening on the wire. The trace showed the desktop/laptop issuing an IGMP leave request and closing the HTTP/TCP socket it had open to the web server; that was proof enough for me that the application was silently crashing and the operating system was cleaning up the open ports and IGMP memberships.

6376 2013-05-16 07:47:46.052281 10.1.46.144 10.1.38.55 TCP     54   3317 > 80 [RST, ACK] Seq=4467 Ack=10430 Win=0 Len=0
6377 2013-05-16 07:47:46.052595 10.1.46.144 224.0.0.2  IGMPv2  46   Leave Group 230.0.0.2
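The sequence in frames 6376 and 6377 is exactly what the operating system does when a process holding a multicast socket dies. A minimal Python sketch of an ADD-style receiver makes the mechanics visible (the group and port are taken from the trace; everything else is illustrative):

```python
import socket
import struct

GROUP = "230.0.0.2"   # multicast group seen in the trace
PORT = 7040           # one of the ticker ports seen in the trace

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))  # receive datagrams addressed to this port

# Joining the group is what makes the OS emit an IGMP Membership Report,
# which the switches (with IGMP snooping) use to forward the stream here.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError:
    pass  # hosts without a multicast-capable route will refuse the join

# ... the real application would loop on sock.recv(2048) here ...

# When the process exits -- cleanly or by crashing -- the OS tears the
# socket down and emits the IGMPv2 "Leave Group" seen in frame 6377.
sock.close()
```

In other words, the leave request in the trace wasn't the application politely unsubscribing; it was the kernel cleaning up after a dead process.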

The actual Multicast stream from the application/web server was fine;

6353	2013-05-16 07:47:43.995183	10.1.38.55	230.0.0.2	UDP	511	Source port: 1031  Destination port: 7040
6354	2013-05-16 07:47:43.995502	10.1.38.55	230.0.0.2	UDP	502	Source port: 1025  Destination port: 7050
6355	2013-05-16 07:47:43.995885	10.1.38.55	230.0.0.2	UDP	813	Source port: 1026  Destination port: 7030
6356	2013-05-16 07:47:43.996301	10.1.38.55	230.0.0.2	UDP	860	Source port: 1032  Destination port: 7020
6357	2013-05-16 07:47:43.996505	10.1.38.55	230.0.0.2	UDP	343	Source port: 1033  Destination port: 7060
6358	2013-05-16 07:47:43.996726	10.1.38.55	230.0.0.2	UDP	331	Source port: 1027  Destination port: 7070
6359	2013-05-16 07:47:43.996886	10.1.38.55	230.0.0.2	UDP	153	Source port: 1028  Destination port: 7110
6360	2013-05-16 07:47:43.997048	10.1.38.55	230.0.0.2	UDP	153	Source port: 1034  Destination port: 7100
6361	2013-05-16 07:47:43.997199	10.1.38.55	230.0.0.2	UDP	135	Source port: 1030  Destination port: 7090
6362	2013-05-16 07:47:43.997371	10.1.38.55	230.0.0.2	UDP	135	Source port: 1036  Destination port: 7080
6363	2013-05-16 07:47:43.997525	10.1.38.55	230.0.0.2	UDP	127	Source port: 1035  Destination port: 7120
6364	2013-05-16 07:47:43.997647	10.1.38.55	230.0.0.2	UDP	127	Source port: 1029  Destination port: 7130

The packet trace did show some odd UDP broadcast traffic from one specific desktop that happened to be running GE’s Centricity Perinatal (CPN). This is a software application used to monitor Labor & Delivery, the Nursery and the NICU; we use it to monitor, chart and graph the strips put out by the fetal monitors. There’s a component of the GE CPN solution called B-Relay, which is the piece of software that floods the VLAN with all those UDP broadcasts. Unfortunately this UDP flooding is by design and is required for the application to function properly.

6205	2013-05-16 07:47:26.710685	10.1.47.210	10.1.47.255	UDP	251	Source port: 1759  Destination port: 7005
6206	2013-05-16 07:47:26.853810	10.1.47.210	10.1.47.255	UDP	822	Source port: 1760  Destination port: 7043
6211	2013-05-16 07:47:28.215486	10.1.47.210	10.1.47.255	UDP	60	Source port: 1783  Destination port: 7013

Looking at the packet traces I quickly noticed that, while there were multiple destination ports, they all fell in the 7001 to 7999 range, overlapping the ports the ADD software listens on. My theory was that the GE CPN software was eventually hitting a UDP port that the ADD software was listening on; since it was a broadcast packet, ADD tried to process the data and quietly choked and crashed. I shut down the Ethernet port connecting the GE CPN desktop and had the local telecom technician run his test again. He called back about 30 minutes later to let me know that everything was working fine and that whatever I had done had fixed the problem. Well, it wasn’t really fixed, because now I had to figure out how to get both applications to co-exist.
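The collision theory is easy to demonstrate: a UDP socket bound to a port receives every datagram addressed to that port, regardless of whether the destination address was a multicast group, a subnet broadcast or unicast. A small sketch (the payloads and the format check are hypothetical, purely to illustrate the failure mode):

```python
import socket

PORT = 7040  # one of the UDP ports both applications happened to share

# "ADD-like" receiver: bound to the wildcard address, it receives EVERY
# datagram sent to this port on the host.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
rx.settimeout(2)

# "B-Relay-like" sender aimed at the same port. Loopback unicast stands in
# for the broadcast here; the real sender would set SO_BROADCAST and target
# 10.1.47.255, but delivery to the port-bound socket works the same way.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"CPN-chart-data", ("127.0.0.1", PORT))

data, _ = rx.recvfrom(2048)

# A receiver that blindly assumes its own wire format fails on the foreign
# payload -- the theorized cause of ADD quietly crashing.
def parse_ticker(payload: bytes) -> str:
    if not payload.startswith(b"ADD"):  # hypothetical format check
        raise ValueError("unexpected payload on ticker port")
    return payload.decode()

try:
    parse_ticker(data)
    crashed = False
except ValueError:
    crashed = True  # a foreign datagram hit the port the app listens on

rx.close()
tx.close()
```

Whether ADD actually died in its parser or elsewhere is speculation, but the trace is consistent with it receiving data it never expected.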

The solution was to isolate the GE CPN desktops on their own VLAN so that the UDP broadcasts wouldn’t hit the closet VLAN where the Contact Center users resided. Another possible solution might have been to change the UDP ports that either the GE CPN or the Avaya ADD software used, but that change would probably have taken weeks if not months. I was able to spin up a new VLAN in about 30 minutes and get everyone back up and running.

Have you got a story to share? I’d love to hear it!

Cheers!

PIM-SM on Avaya Ethernet Routing Switch 5000
https://blog.michaelfmcnamara.com/2011/06/pim-sm-on-avaya-ethernet-routing-switch-5000/
Fri, 24 Jun 2011

There was yet another question recently on the discussion forums (I almost never have to search too hard for ideas to write about) concerning how to configure PIM-SM on the Avaya Ethernet Routing Switch 5000 series. While I’ve written in the past about DVMRP and PIM-SM on the Ethernet Routing Switch 8600, I’ve never written about running PIM-SM on any of the stackable Ethernet Routing Switches (the 4500 or 5000 series). It honestly took me longer to figure out how to configure VLC (with all the changes it’s gone through) than it took to configure the Ethernet Routing Switch 5520 or set up the two Windows XP clients. I downloaded VLC v1.1.10 and configured one Windows XP desktop (192.168.200.10) to act as the streaming Multicast server while the other Windows XP laptop (192.168.100.10) would act as the Multicast receiver. I used a Multicast address of 239.255.1.1 for this test and made sure to set the TTL for the UDP stream greater than 1.

While running through the initial configuration I realized that you must have an Advanced License to enable PIM-SM on the Ethernet Routing Switch 5000 series. Since I don’t have any “spare” Advanced Licenses I downloaded the evaluation license from Avaya’s support website and loaded it on my test switch.

Here’s the configuration I used for the Ethernet Routing Switch 5520;

interface vlan 100
ip address 192.168.100.1 255.255.255.0 2
ip pim enable
interface vlan 200
ip address 192.168.200.1 255.255.255.0 3
ip pim enable
exit
ip pim enable
ip pim static-rp
ip pim static-rp 239.255.1.1/32 192.168.200.1

With PIM-SM configured, I set up VLC on the Windows XP desktop (192.168.200.10) to Multicast the video stream to 239.255.1.1. I then set up the Windows XP laptop (192.168.100.10) to receive the Multicast stream at udp://@239.255.1.1:1234. It took me a few minutes to work through some of the new menus in VLC, but I eventually got it working.
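The same send/receive test can be scripted without VLC. The sketch below shows the two socket details that matter for routed multicast: the sender's TTL must be greater than 1 (or the first Layer 3 hop drops the stream), and the receiver's group join is what generates the IGMP report that PIM-SM turns into a join toward the RP. The addresses match the lab; everything else is illustrative:

```python
import socket
import struct

GROUP, PORT = "239.255.1.1", 1234  # test group and VLC's default UDP port

# Sender (the role VLC on 192.168.200.10 played). The critical detail is a
# multicast TTL greater than 1; with the default of 1 the ERS 5520 would
# drop the stream at the first routed hop instead of forwarding it.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)
ttl = tx.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL)  # confirm > 1

# Receiver (the role the laptop on 192.168.100.10 played): joining the
# group triggers the IGMP Membership Report.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    tx.sendto(b"stream payload", (GROUP, PORT))
except OSError:
    pass  # single-homed hosts without a multicast route may refuse this

tx.close()
rx.close()
```

This is a handy sanity check when you want to rule out the streaming application and test only the multicast routing path.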

I was able to confirm everything was working properly with the “show ip pim mroute” command.

5520-48T-PWR(config)#show ip pim
PIM Admin Status:  Enabled
PIM Oper Status:  Enabled
PIM Boot Strap Period:  60
PIM C-RP-Adv Message Send Interval:  60
PIM Discard Data Timeout:  60
PIM Join Prune Interval:  60
PIM Register Suppression Timer:  60
PIM Uni Route Change Timeout:  5
PIM Mode:  Sparse
PIM Static-RP:  Enabled
Forward Cache Timeout:  210

5520-48T-PWR(config)#show ip pim static-rp
Group Address   Group Mask      RP Address      Status
--------------- --------------- --------------- -------
239.255.1.1     255.255.255.255 192.168.200.1   Valid

5520-48T-PWR(config)#show ip pim mroute
 Src: 0.0.0.0       Grp: 239.255.1.1  RP: 192.168.200.1 Upstream: NULL
 Flags: WC RP
 Incoming  Port: Vlan200-null,
 Outgoing Ports: Vlan100-21
 Joined   Ports:
 Pruned   Ports:
 Leaf     Ports: Vlan100-21
 Asserted Ports:
 Prune Pending Ports:
 Assert Winner Ifs:
 Assert Loser Ifs:
TIMERS:
  Entry   JP   RS  Assert
    178    0    0       0
 VLAN-Id:   100   200
  Join-P:     0     0
  Assert:     0     0
  Src: 192.168.200.10  Grp: 239.255.1.1  RP: 192.168.200.1 Upstream: NULL
 Flags: SPT CACHE SG
 Incoming  Port: Vlan200-31,
 Outgoing Ports: Vlan100-21
 Joined   Ports:
 Pruned   Ports:
 Leaf     Ports: Vlan100-21
 Asserted Ports:
 Prune Pending Ports:
 Assert Winner Ifs:
 Assert Loser Ifs:
TIMERS:
  Entry   JP   RS  Assert
    179    0    0       0
 VLAN-Id:   100   200
  Join-P:     0     0
  Assert:     0     0

Total Num of Entries Displayed 2
Flags Legend:
        SPT = Shortest path tree
        WC = (*,Grp) entry
        RP = Rendezvous Point tree
        CACHE = Kernel Cache
        ASSERTED = Asserted
        SG = (Src,Grp) entry
        FWD_TO_RP = Forwarding to RP
        FWD_TO_DR = Forwarding to DR
        SG_NODATA = SG Due to Join
        IPMC_ERR = IPMC Add Failed

Cheers!

IST Instability in large Multicast networks
https://blog.michaelfmcnamara.com/2010/10/ist-instability-in-large-multicast-networks/
Fri, 08 Oct 2010

Avaya has released a technical support bulletin detailing an issue that can impact IST stability in a large Multicast network. I know a number of readers have had issues with Multicast support in extremely large networks.

In large campus networks with SMLT topologies where multicast routing protocols (such as PIM) have been provisioned and scaled to large amounts of multicast senders and receivers, it has been observed that high CPU utilization (sometimes combined with high CPU buffer utilization) leading to IST instability may occur during re-convergence of the multicast routing protocols after failures.

Additional information;

Release 5.1.3.0 has been modified with changes that were originally introduced in release 7.0.0.0. These changes allow IST protocol messages to be processed even under high CPU utilization. This is achieved by checking to see if IST control messages are queued up (but not yet processed) before deciding that the IST session has timed out and needs to be brought down. Each line card recognizes and counts IST control messages when they arrive and before they are sent to the CP, and the IST message processing logic on the CP will check for outstanding IST control messages before deciding the IST needs to be brought down due to inactivity.
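The fix described above amounts to one extra check in the keepalive logic: before declaring the peer dead on timeout, look at how many control messages have arrived but not yet been processed. An illustrative sketch only; the class and method names are hypothetical and this is in no way Avaya's actual code:

```python
import time

class IstSession:
    """Keepalive session that checks for queued-but-unprocessed control
    messages before declaring the peer dead (the 5.1.3.0 behavior)."""

    def __init__(self, hold_time: float):
        self.hold_time = hold_time
        self.last_processed = time.monotonic()
        self.queued_msgs = 0  # incremented by line cards on arrival

    def on_message_arrived(self) -> None:
        # Arrival is counted BEFORE the message reaches the busy CP.
        self.queued_msgs += 1

    def on_message_processed(self) -> None:
        self.queued_msgs -= 1
        self.last_processed = time.monotonic()

    def should_bring_down(self) -> bool:
        if time.monotonic() - self.last_processed < self.hold_time:
            return False
        # Old behavior: timeout alone tears the IST down.
        # New behavior: an outstanding queued message means the peer is
        # alive and the CP is merely busy, so keep the session up.
        return self.queued_msgs == 0
```

The effect is that a CPU pegged by PIM re-convergence no longer starves the IST keepalives into a false timeout.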

Cheers!

Multicast Routing Protocol (Part 2)
https://blog.michaelfmcnamara.com/2008/04/multicast-routing-protocol-part-2/
Tue, 29 Apr 2008

In part 1 of this post I looked at how to configure DVMRP to facilitate inter-VLAN Multicast communications on a single switch. In this post I’ll look at how to configure PIM to facilitate inter-VLAN Multicast communications across multiple switches and routers (Layer 3 switches).

I took a few minutes and threw together a quick diagram to help lay out the topology (a picture is truly worth a thousand words). There are two core ERS 8600 switches (a switch cluster, as Nortel likes to call it these days). There are three VLANs bridged across all four switches in the diagram: VLAN 55, 56 and 200. There is a fourth VLAN, 57, that is routed from ERS 8600 C. The ERS 5520 in the diagram will only be used as a Layer 2 switch even though it could potentially be used as a Layer 3 device (router).


I’m going to review two possible configurations. The first scenario will be for a client device (VLC Client A) in a VLAN routed by the core ERS 8600s. The second scenario will be for a client device (VLC Client B) in a VLAN routed by a closet ERS 8600.

Let’s get on with configuring some ERS 8600 switches. First let’s enable PIM globally;

ERS8600-A# config ip pim enable
ERS8600-A# config ip pim fast-joinprune enable

Then we’ll enable PIM on the specific VLANs;

ERS8600-A# config vlan 55 ip pim enable
ERS8600-A# config vlan 56 ip pim enable
ERS8600-A# config vlan 200 ip pim enable

We need to create a CLIP interface to use for PIM routing; we don’t want to tie the PIM routing to a physical interface in case that interface goes down for whatever reason. We’re already using CLIP 1 for our OSPF router ID of 10.1.0.5/32.

ERS8600-A# config ip circuitless-ip-int 2 create 10.1.0.15/255.255.255.255
ERS8600-A# config ip circuitless-ip-int 2 ospf enable
ERS8600-A# config ip circuitless-ip-int 2 pim enable

We need to add a candidate Rendezvous Point Router (RP) pointing it to our CLIP address.

ERS8600-A# config ip pim candrp add grp 239.255.1.1 mask 255.255.255.255 rp 10.1.0.15

We need to set the priority of the Bootstrap Router (BSR) for dynamic PIM routing.

ERS8600-A# config ip pim interface 10.1.0.15 cbsrpreference 100

Then on the second core ERS 8600 switch;

ERS8600-B# config ip pim enable
ERS8600-B# config ip pim fast-joinprune enable
ERS8600-B# config vlan 55 ip pim enable
ERS8600-B# config vlan 56 ip pim enable
ERS8600-B# config vlan 200 ip pim enable
ERS8600-B# config ip circuitless-ip-int 2 create 10.1.0.16/255.255.255.255
ERS8600-B# config ip circuitless-ip-int 2 ospf enable
ERS8600-B# config ip circuitless-ip-int 2 pim enable
ERS8600-B# config ip pim candrp add grp 239.255.1.1 mask 255.255.255.255 rp 10.1.0.16
ERS8600-B# config ip pim interface 10.1.0.16 cbsrpreference 50

That’s really all there is to configure with the two core ERS 8600 switches.

ERS5520 Switch (Edge)
In the case of the ERS 5520 switch there really isn’t anything you need to configure per se. You could enable IGMP snooping (generally disabled by default) to filter multicast traffic away from ports that aren’t subscribed to any multicast groups. Since the ERS 8600s are performing the routing, the ERS 5520 acts just like a Layer 2 switch.

VLC Client A (10.1.56.50) should now be able to connect to the multicast group 239.255.1.1 from the ERS 5520 which will be sourced from the VLC Server (10.1.55.50).

ERS8600 C Switch (Edge)
In the case of the edge ERS 8600 switch you need to configure and enable PIM. We’ll be using VLAN 200 to interface with the upstream ERS 8600 switches.

ERS8600-C:5# config ip pim enable
ERS8600-C:5# config vlan 57 ip pim enable
ERS8600-C:5# config vlan 57 ip pim interface-type passive
ERS8600-C:5# config vlan 200 ip pim enable

Since there won’t be any other Layer 3 PIM switches on VLAN 57, we set the PIM interface to passive (much like the OSPF equivalent of a passive interface).

VLC Client B (10.1.57.50) should now be able to connect to the multicast group 239.255.1.1 from the ERS 8600 C which will be sourced from the VLC Server (10.1.55.50).

We can dump the multicast (PIM) routing table with the following command from the edge ERS8600 switch;

ERS8600-C:5# show ip pim mroute

================================================================================
Pim Multicast Route
================================================================================
Src: 0.0.0.0 Grp: 230.0.0.2 RP: 10.1.0.5 Upstream: 10.1.200.5
Flags: WC RP CACHE
Incoming Port: Vlan200-1/1,
Outgoing Ports: Vlan127-2/42,
Joined Ports:
Pruned Ports:
Leaf Ports: Vlan127-2/42,
Asserted Ports:
Prune Pending Ports:
Assert Winner Ifs:
Assert Loser Ifs:
TIMERS:
Entry JP RS Assert
151 1 0 0
VLAN-Id: 200
Join-P: 0
Assert: 0
--------------------------------------------------------------------------------
Src: 10.1.233.30 Grp: 230.0.0.2 RP: 10.1.0.5 Upstream: 10.1.200.5
Flags:
SPT CACHE SG
Incoming Port: Vlan200-1/1,
Outgoing Ports: Vlan127-2/42,
Joined Ports:
Pruned Ports:
Leaf Ports: Vlan127-2/42,
Asserted Ports:
Prune Pending Ports:
Assert Winner Ifs:
Assert Loser Ifs:
TIMERS:
Entry JP RS Assert
64 4 0 0
VLAN-Id: 200
Join-P: 0
Assert: 0
--------------------------------------------------------------------------------

Total Num of Entries Displayed 2
Flags Legend:
SPT = Shortest path tree, WC=(*,Grp) entry, RP=Rendezvous Point tree, CACHE=Kernel Cache, ASSERTED=Asserted, SG=(Src,Grp) entry, PMBR=(*,*,RP) entry, FWD_TO_RP=Forwarding to RP, FWD_TO_DR=Forwarding to DR, SG_NODATA=SG Due to Join, CP_TO_CPU=Copy to CPU, STATIC_MROUTE=Static Mroute, MRTF_SMLT_PEER_SG=Peer SG On Non-DR For SMLT
--------------------------------------------------------------------------------

Troubleshooting

Here are some basic commands that should help you troubleshoot any PIM issues;

ERS8600-A:5# show ip pim neighbor

================================================================================
Pim Neighbor
================================================================================
INTERFACE ADDRESS UPTIME EXPIRE
--------------------------------------------------------------------------------
Vlan55 10.1.55.6 31 day(s), 00:09:53 0 day(s), 00:01:40
Vlan56 10.1.56.6 31 day(s), 00:09:53 0 day(s), 00:01:40
Vlan200 10.1.200.6 31 day(s), 00:09:53 0 day(s), 00:01:34

Total PIM Neighbors = 3

We can see that all three VLAN interfaces have PIM neighbors with the ERS 8600 B switch. Let’s just check the RPs and make sure we have the correct multicast groups (addresses).

ERS8600-A:5# show ip pim rp-set

================================================================================
Pim RPSet
================================================================================
GRPADDRESS GRPMASK ADDRESS HOLDTIME EXPTIME
--------------------------------------------------------------------------------
230.0.0.1 255.255.255.255 10.1.0.15 150 137
230.0.0.2 255.255.255.255 10.1.0.15 150 137
239.255.1.1 255.255.255.255 10.1.0.15 150 137

The multicast addresses 230.0.0.1 and 230.0.0.2 listed above are used by Nortel’s Contact Center (formerly Symposium Call Center) software. Here’s how we can list the candidate RPs;

ERS8600-A:5# show ip pim candidate-rp

================================================================================
Pim Candidate RP Table
================================================================================
GRPADDR GRPMASK RPADDR
--------------------------------------------------------------------------------
230.0.0.1 255.255.255.255 10.1.0.15
230.0.0.2 255.255.255.255 10.1.0.15
239.255.1.1 255.255.255.255 10.1.0.15

If we’re dynamically choosing an RP we need to make sure that there is a BSR active;

ERS8600-A:5# show ip pim bsr

================================================================================
Current BootStrap Router Info
================================================================================

Current BSR address: 10.1.0.15
Current BSR priority: 100
Current BSR HashMask: 255.255.255.252
Current BSR Fragment Tag: 44590
Pim Bootstrap Timer : 31

I may need to update this article to make it cleaner and clearer.

Cheers!
