Michael McNamara – technology, networking, virtualization and IP telephony – https://blog.michaelfmcnamara.com

How to troubleshoot Facebook, Instagram, WhatsApp outages? https://blog.michaelfmcnamara.com/2021/10/how-to-troubleshoot-faceook-instagram-whatsapp-outages/ Mon, 04 Oct 2021

Things certainly went south for Facebook today in a spectacular way as Reddit and other forums lit up with posts about Facebook, Instagram and WhatsApp being down and unreachable. Someone asked me a simple question: how do you troubleshoot an outage like that? We’re obviously limited as “outsiders”, but even as regular netizens we can do a bit of investigative troubleshooting to get some idea of what’s going on at Facebook.

If you tried to visit Facebook earlier today you would have likely seen this message in your web browser.

This site can’t be reached
www.facebook.com’s server IP address could not be found.

Let’s start with the basics…. DNS resolution.

[root@woodstock ~]# dig facebook.com +short
[root@woodstock ~]#

That’s not good… we can’t get an IP address for facebook.com, let’s try www.facebook.com as well.

[root@woodstock ~]# dig www.facebook.com +short
[root@woodstock ~]#

Ok, equally bad… let’s try to find the authoritative DNS servers for the domain facebook.com. We know from experience that a.gtld-servers.net. is one of the top-level DNS servers for the .com TLD, but let’s confirm it’s still in the list of servers. (I’ll trim the output below to save space and focus our attention.)

[root@woodstock ~]# dig ns com

;; ANSWER SECTION:
com. 170780 IN NS b.gtld-servers.net.
com. 170780 IN NS i.gtld-servers.net.
com. 170780 IN NS m.gtld-servers.net.
com. 170780 IN NS j.gtld-servers.net.
com. 170780 IN NS l.gtld-servers.net.
com. 170780 IN NS e.gtld-servers.net.
com. 170780 IN NS k.gtld-servers.net.
com. 170780 IN NS h.gtld-servers.net.
com. 170780 IN NS g.gtld-servers.net.
com. 170780 IN NS d.gtld-servers.net.
com. 170780 IN NS c.gtld-servers.net.
com. 170780 IN NS a.gtld-servers.net.
com. 170780 IN NS f.gtld-servers.net.

;; ADDITIONAL SECTION:
a.gtld-servers.net. 69518 IN A 192.5.6.30
b.gtld-servers.net. 82780 IN A 192.33.14.30
c.gtld-servers.net. 84678 IN A 192.26.92.30
d.gtld-servers.net. 84679 IN A 192.31.80.30
e.gtld-servers.net. 84678 IN A 192.12.94.30
f.gtld-servers.net. 84138 IN A 192.35.51.30
g.gtld-servers.net. 84679 IN A 192.42.93.30
h.gtld-servers.net. 84678 IN A 192.54.112.30
i.gtld-servers.net. 84679 IN A 192.43.172.30
j.gtld-servers.net. 82780 IN A 192.48.79.30
k.gtld-servers.net. 84679 IN A 192.52.178.30
l.gtld-servers.net. 84138 IN A 192.41.162.30
m.gtld-servers.net. 84679 IN A 192.55.83.30
a.gtld-servers.net. 81113 IN AAAA 2001:503:a83e::2:30

Ok, so a.gtld-servers.net is still in there… so let’s ask that DNS server who are the DNS servers for the domain facebook.com.

[root@woodstock ~]# dig @a.gtld-servers.net. ns facebook.com

;; QUESTION SECTION:
;facebook.com. IN NS

;; AUTHORITY SECTION:
facebook.com. 172800 IN NS a.ns.facebook.com.
facebook.com. 172800 IN NS b.ns.facebook.com.
facebook.com. 172800 IN NS c.ns.facebook.com.
facebook.com. 172800 IN NS d.ns.facebook.com.

;; ADDITIONAL SECTION:
a.ns.facebook.com. 172800 IN A 129.134.30.12
a.ns.facebook.com. 172800 IN AAAA 2a03:2880:f0fc:c:face:b00c:0:35
b.ns.facebook.com. 172800 IN A 129.134.31.12
b.ns.facebook.com. 172800 IN AAAA 2a03:2880:f0fd:c:face:b00c:0:35
c.ns.facebook.com. 172800 IN A 185.89.218.12
c.ns.facebook.com. 172800 IN AAAA 2a03:2880:f1fc:c:face:b00c:0:35
d.ns.facebook.com. 172800 IN A 185.89.219.12
d.ns.facebook.com. 172800 IN AAAA 2a03:2880:f1fd:c:face:b00c:0:35

There are the DNS servers for the domain facebook.com, so let’s see if we can communicate with any of them.
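(Incidentally, dig can perform that entire delegation walk in a single command – it iterates down from the root servers itself rather than relying on the local resolver’s cache, which is handy when the resolver is the thing you don’t trust:

[root@woodstock ~]# dig +trace www.facebook.com

I’ve walked it by hand above so we can examine each step along the way.)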

Let’s start by pinging the servers (for brevity I’m only going to go through the first server above… but they all were having issues today)

[root@woodstock ~]# ping a.ns.facebook.com -c 5 -q
PING a.ns.facebook.com (129.134.30.12) 56(84) bytes of data.

--- a.ns.facebook.com ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 3999ms

That’s not completely unexpected, as many networks block ICMP traffic to blunt DoS attacks, so let’s try a simple DNS query to that server.

[root@woodstock ~]# dig @a.ns.facebook.com ns facebook.com

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.5 <<>> @a.ns.facebook.com ns facebook.com
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

That’s definitely not good, so we can assume at this point that we’re unable to communicate with the DNS servers for the facebook.com domain – hence the error message we’re getting in the web browser. But let’s dig a little deeper to see if the IP networks associated with those DNS servers are “online” and reachable. We can do that by checking a BGP looking glass or a full BGP routing table to see if the prefix is being advertised, and we can also try a traceroute to the IP address in question to see if we can reach the Facebook network.
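As a quick aside, Team Cymru runs an IP-to-ASN whois service that maps an IP address to its origin AS and the covering BGP prefix – when one is actually being advertised. A sketch (the quoting matters so the -v flag reaches the server rather than the local whois client):

[root@woodstock ~]# whois -h whois.cymru.com " -v 129.134.30.12"

In normal times this would show origin AS 32934 – Facebook’s ASN, which also appears at the end of the AS_PATH in the looking glass output below – along with the covering prefix; during the outage you’d expect no covering prefix for any withdrawn routes.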

Let’s use WHOIS to see what network that IP address is a member of (again I’ve cut out some of the output below).

[root@woodstock ~]# whois 129.134.30.12
[Querying whois.arin.net]
[whois.arin.net]

NetRange: 129.134.0.0 - 129.134.255.255
CIDR: 129.134.0.0/16
NetName: THEFA-3
NetHandle: NET-129-134-0-0-1
Parent: NET129 (NET-129-0-0-0-0)
NetType: Direct Assignment
OriginAS:
Organization: Facebook, Inc. (THEFA-3)
RegDate: 2015-05-13
Updated: 2015-05-13
Ref: https://rdap.arin.net/registry/ip/129.134.0.0

Ok, so the original netblock assigned to Facebook from ARIN was 129.134.0.0/16, but Facebook could have subnetted that, so we need to be mindful that the advertised prefix could be smaller than the /16 we see allocated above.

There was a mention in some of the forums that all BGP peers to Facebook were down, so let’s check that. Let’s look at Hurricane Electric’s network looking glass using the IP address 129.134.30.12. That shows us the following (as of 5:00PM EDT Monday October 4, 2021).

core1.mnz1.he.net> show ip bgp routes detail 129.134.30.12
Number of BGP Routes matching display condition : 2
S:SUPPRESSED F:FILTERED s:STALE x:BEST-EXTERNAL
1 Prefix: 129.134.0.0/17, Rx path-id:0x00000000, Tx path-id:0x00000001, rank:0x00000001, Status: BI, Age: 28d7h21m27s
NEXT_HOP: 65.49.109.182, Metric: 1486, Learned from Peer: 216.218.252.172 (6939)
LOCAL_PREF: 100, MED: 0, ORIGIN: igp, Weight: 0, GROUP_BEST: 1
AS_PATH: 3491 32934
COMMUNITIES: 6939:1111 6939:7039 6939:8392 6939:9003
2 Prefix: 129.134.0.0/17, Rx path-id:0x00000000, Tx path-id:0x00040001, rank:0x00000002, Status: Ex, Age: 86d22h8m40s
NEXT_HOP: 62.115.42.144, Metric: 0, Learned from Peer: 62.115.42.144 (1299)
LOCAL_PREF: 70, MED: 48, ORIGIN: igp, Weight: 0, GROUP_BEST: 1
AS_PATH: 1299 32934
COMMUNITIES: 6939:2000 6939:7297 6939:8840 6939:9001
Last update to IP routing table: 2d3h2m25s

Entry cached for another 60 seconds.

So it would appear that the routes are in the Internet BGP tables for that first server… I’m going to guess that Facebook is in recovery mode and slowly restoring their network – assuming it’s not a DoS attack or something similar.

Let’s try a traceroute using ICMP packets; again we need to be mindful that some organizations will block all ICMP traffic to protect themselves against miscreants and to better conceal their network topology.

[root@woodstock~]# traceroute -I 129.134.30.12
traceroute to 129.134.30.12 (129.134.30.12), 30 hops max, 60 byte packets
1 107.170.19.254 (107.170.19.254) 4.061 ms 4.040 ms 4.037 ms
2 138.197.248.154 (138.197.248.154) 1.545 ms 1.558 ms 1.558 ms
3 157.240.71.232 (157.240.71.232) 41.384 ms 41.345 ms 41.380 ms
4 157.240.42.70 (157.240.42.70) 1.893 ms 1.911 ms 1.913 ms
5 157.240.40.230 (157.240.40.230) 3.552 ms 3.529 ms 3.538 ms
6 129.134.47.188 (129.134.47.188) 8.797 ms 7.276 ms 7.229 ms
7 * * *
8 * * *
9 * * *
10 * * *
11 * * *
12 * * *

Ok, so we’re definitely reaching parts of the Facebook network, as 129.134.47.188 is on the same advertised network as a.ns.facebook.com (129.134.30.12).

Unfortunately that’s about as far as we can take it from here – we’ll need to wait for news from Facebook itself.

Cheers!

LastPass – Internet Upheaval https://blog.michaelfmcnamara.com/2021/03/lastpass-internet-upheaval/ Mon, 08 Mar 2021

It seems that everyone and anyone wants to talk about LastPass since their announcement on February 16th that they were going to limit their free tier product offering. The vast majority of videos and articles haven’t been kind to LastPass or their current owners, LogMeIn.

I haven’t really mentioned LastPass since I first talked about them in December of 2014. I’ve been a paying LastPass customer since 2013. At the time a LastPass premium account was $12/year – a small cost for any IT professional who values their time (and productivity) and security over trying to keep the passwords for every application they use or every system they manage in their head. I currently have 763 passwords in my vault.

It seems that anytime a vendor takes away something that was free, the Internet masses take to their media of choice to rail against the injustice. A large number of tech-savvy users already scowl at the mention of LogMeIn. The company eliminated its free account offering of the popular remote control application by the same name in 2014. In 2016 the company acquired GoToMyPC, the largest competitor to LogMeIn, and subsequently raised the pricing on that service.

I’m no fan of LogMeIn, but I support paying for products that provide a value and service in my day to day life. As an Information Technology professional a Password Manager should be an essential part of your kit. Thankfully there are plenty to choose from and they all have their own strengths and weaknesses.

I believe prior to the LogMeIn acquisition you needed a Premium LastPass account to use the mobile application on either Android or iOS – someone feel free to correct me in the comments below. I’m not sure where or when that change was made, but somewhere along the line they started allowing non-Premium users to use the mobile app. The timing here is important because it does feel like a potential bait and switch play: opening the mobile app to everyone for a few years and then squeezing that group in hopes of getting some percentage to switch to a Premium account.

If I had to choose a password manager today I wouldn’t necessarily jump at spending $36/year – the current pricing for new LastPass Premium customers. However, I might be convinced to purchase their new LastPass Families plan for 6 family members at $48/year. That said, I’ve been pretty happy with LastPass to date.

What password manager are you using? Hopefully you are using a password manager!

Cheers!

Troubleshooting Application Performance and Monitoring with Selenium https://blog.michaelfmcnamara.com/2021/01/troubleshooting-application-performance-and-monitoring-with-selenium/ Fri, 29 Jan 2021

It was yet another exciting week…

When Cloud or SaaS application performance starts impacting user productivity, how do you go about troubleshooting? Performance can be extremely subjective… what is fast to some people is slow to others and vice versa. How do you even measure performance? Invariably people want to blame the network because that’s the simplest answer. However, it can take a lot of effort and due diligence to dig down and find the actual culprit.

In this specific case we had ~ 8,000 miles between the users and the server infrastructure, so I was expecting additional challenges from the extreme round trip times (220ms) – latency that could play some role in any possible issue or issues.

Let’s try to frame the issue;

  • Is the issue persistent or intermittent? Intermittent
  • Is the issue occurring with any regularity? Yes, 11:00AM – 12:30PM local time daily
  • Is the issue impacting every user or just specific users? Multiple users, not clear if every user is impacted but a majority of users
  • Is there anything common among the impacted users? They are all using the same VPN and proxy server infrastructure, they are all located in the same country.
  • When did the problem start? Users have been working for 3+ months without issue, but this problem is fresh within the past 2 weeks.

The last point is likely key… so what changed in the past 2 weeks that’s causing this issue? We’ll get to that later, but those simple facts are key in driving your investigation.

We start with the simple baseline network tests;

  • ping – good with minimal packet loss
  • traceroute (mtr) – the path traverses multiple ISPs (see the sample mtr invocation below)
  • speed tests – generally good
  • packet capture – in general looks good, some out of order packets, some dupe ACKs, likely the result of the ~ 8,000 miles between the endpoints.
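For the mtr runs, a report-style invocation along these lines produces per-hop loss and latency numbers that are easy to paste into a ticket (the hostname is illustrative):

$ mtr --report --report-cycles 60 app.example.com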

In the baseline results there are no smoking guns but there are some suspect data points in there, although we need to remember that this isn’t a LAN based application. This is an Internet based application with 8,000 miles between the endpoints so there is going to be some noise in the packet trace.

Note: I’ve seen all sorts of interesting Internet issues since March 2020 when the pandemic lock-down first kicked off here in the US, and again recently at the beginning of September 2020 when the majority of US school students returned to remote learning. I observed a large number of my US users had better latency to our UK VPN gateways than to our local US VPN gateways. Ultimately we found a number of Internet peering points between the different Internet Service Providers (I’m being nice here and not naming names) were getting completely blasted and were adding 75-125ms to every packet. Eventually the providers addressed this problem with additional peering, but it was a painful couple of weeks.

Now what we need are some additional data points that can be collected during the issue;

  • HAR (HTTP Archive) from Chrome web browser collected from user experiencing issue – this was a key piece of data that helped move the issue forward
  • packet capture – wasn’t able to be captured due to locked down computers

What can we do to monitor the performance of the cloud application?

  • ping – We set up ping monitors from a number of data centers globally to monitor for basic availability
  • curl – We set up some simple HTTP/HTTPS monitoring using cURL (see the timing example below)
  • selenium – At the recommendation of the application provider we set up ThousandEyes and a transaction monitor to generate synthetic transactions by logging into the application and working through a few different functions which themselves have dependencies on external REST and SOAP APIs.
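For the cURL checks, the -w option will break a request down into name lookup, connect, TLS and time-to-first-byte components, which makes it easy to spot which stage is drifting; a minimal sketch with an illustrative URL:

$ curl -s -o /dev/null -w 'dns=%{time_namelookup} connect=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n' https://app.example.com/login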

The application itself has a number of dependencies on external microservices, so initially we were concerned that these external services might be having performance issues themselves which might be impacting the application. So we had to set up additional monitoring to try and validate the performance of those REST and SOAP APIs during the reported timeframes.

This was my first foray into working with Selenium and ThousandEyes, but I was able to kludge my way through the solution after about 2 days. I did run into a few problems with the application website using dynamic Class IDs, but eventually I got some basic tests working properly. The solution itself worked fairly well… we had some decent “front door” statistics within hours, and the synthetic transaction data gave us a good idea that the application was performing properly during the reported timeframes when users were experiencing issues.

The application vendor was extremely helpful in examining the HAR data, and quickly determined from the HAR and their own internal logs that HTTP/HTTPS requests from the clients were being queued up and delayed from reaching their back-end infrastructure (Chrome only allows 6 concurrent connections to a single hostname). Within the HAR data the vendor observed some fairly aggressive custom polling within the application that was making unconditional Javascript calls every 2 seconds, each resulting in a 12Kb data set being transferred to the client. The initial theory was that some Internet slowdown was causing the client requests to slow down and eventually fall behind, which coupled with the unconditional Javascript calls and the six connection limit in Chrome led to an extremely poor user experience.

We eventually learned that the infrastructure the users were riding on had recently switched Internet Service Providers two weeks earlier. Hmmm… hadn’t the issues started 2 weeks earlier? Yes they had! Ultimately we determined that there was enough occasional packet loss and retransmission over this new Internet link to impact this specific application. The infrastructure was switched back to the original Internet link and the issue hasn’t been observed since.

My Thoughts?

In this specific case the intermittent packet loss and retransmissions were causing the application to fall behind in its communications with the backend infrastructure, which resulted in an extremely poor user experience. It’s relatively safe to argue that if the application code weren’t as aggressive in its polling, it could potentially “tolerate” a certain amount of packet loss and retransmissions.

I personally believe as a network engineer it’s invaluable to learn why something doesn’t work instead of just accepting that it doesn’t work. Inevitably there will be things that we can’t explain, but I’m a huge advocate of spending the effort to make sure you understand the vast majority of them – it’s really the only way you’ll make the environment around you better and ultimately more resilient.

Cheers!

CenturyLink/Level 3 Internet meltdown followed by Reddit moderator madness https://blog.michaelfmcnamara.com/2020/08/centurylink-level-3-internet-meltdown-followed-by-reddit-moderator-madness/ Sun, 30 Aug 2020

It was another exciting morning around the Internet. It seems that CenturyLink (Level 3) had a meltdown that caused all sorts of issues for ~ 5 hours this morning, starting around 6:04AM EDT and lasting until around 11:12AM EDT.

It started as it always does with reports of DNS issues, then CDN issues (Cloudflare) and eventually CenturyLink was identified as the culprit, or to be more precise any packets traversing the CenturyLink (Level3) network.

Thankfully Reddit was a great community resource and reports quickly started rolling in on these two threads;

For reasons that still aren’t 100% clear the moderators for r/networking decided to delete the first thread. So the refugees from r/networking went to r/sysadmin to escape the persecution only to have the moderators of r/networking admit their mistake sometime later and un-delete the post.

I’ll admit I was floored when I found the original thread was deleted. There were hundreds of us struggling to source what was actually going on and trying to understand how we could mitigate the impact to our employers and some moderator deletes the thread?!? @$%#

The refugees eventually made their feelings known in a thread titled, META: I guess major news-worthy outages are off topic here?

Cheers!

COVID-19 The War Waged by Information Technology Professionals https://blog.michaelfmcnamara.com/2020/03/covid-19-the-war-waged-by-information-technology-professionals/ Fri, 27 Mar 2020

The past few weeks have been extremely exhausting both professionally and personally. Coronavirus (COVID-19) has taken the world by storm and is literally upending people’s daily lives and ruining businesses large and small. Let’s not forget the large number of people that have lost their lives to this virus. My thoughts and prayers are with all those who have lost loved ones. My thanks and admiration go to all those medical professionals on the front lines treating the sick.

While very few of us have planned and organized days, these past few weeks have been unlike anything I’ve ever experienced – running from one fire to another, one disaster to another. Whether it’s a power failure in a data center or someone deciding to water the potted plant they hung over the network switch, there’s always some new emergency or problem that requires IT to jump in and save the day. This event was no different, but the scale and duration were a whole new experience for everyone.

We started mobilizing our disaster preparedness plan around the middle of February. The initial request from the leadership team was pretty straightforward: “How do we prepare to have our home office employees and call center agents work remotely?”. Like most large to medium-sized enterprises we have a couple of hundred people working remotely every day; however, we were talking about going from 200-300 daily remote users to potentially 3,000-4,000 daily remote users in a very short time span. And a significant portion of those users still had desktop devices.

In the span of a week we had ordered, imaged, configured and deployed (shipped or handed out) over 400 laptops to over 400 employees and call center agents. We also spun up a new Virtual Private Network (VPN) solution using Palo Alto Network’s GlobalProtect to help supplement our existing Pulse Secure and Microsoft Direct Access solutions.

I should note that I reached out to Pulse Secure and they offered us a temporary 60 day license to help us cope with the additional users – kudos to Pulse Secure.

Like everyone we’re in the middle of our second week and the Internet itself is starting to show its cracks. This past Monday and Tuesday we experienced connectivity issues across 30 stores in and around London, UK for ~ 45-60 minutes at a time. We later learned that Monday was the first day in the UK with all schools closed and British Telecom (BT) wasn’t handling the strain well. I’m sure it’s not helping BT that Disney+ just launched in the UK and Ireland on Wednesday.

We’ve had a number of issues with Microsoft, Slack and Zoom over the past two weeks and expect those issues will likely continue as more and more people around the nation and globe transition to working remotely.

Nobody’s really sure what the future holds… hopefully things will start to improve as we work to flatten the curve.

Thanks to all the IT folks that are continuing to carry on the struggle, be it onsite or from the confines of your own home… we know what you’re going through and we appreciate your efforts!

If you have a story to share, let us know below.

Stay safe! Cheers!

Story – Packet Loss and Failing 10Gbps SFP+ Optic https://blog.michaelfmcnamara.com/2019/07/story-packet-loss-and-failing-10gbps-sfp-optic/ Sat, 06 Jul 2019

Here’s an old story that I never published… and seeing that I haven’t been writing much lately, I’m going to take the easy route and just publish it now.

It’s been another interesting weekend… and by interesting I actually mean another weekend of working through yet another challenging issue.

Summary

It started back on Thursday with more than a few alerts from my own custom-built monitoring solution. A few years back I wrote a Bash script to help monitor the Internet facing infrastructure and numerous VIPs that we host in our Data Centers. That script has worked well over the years, helping validate application availability against network availability. With everything else going on I purposely ignored the alerts, assuming there was some DoS attack or other malady that the Internet was suffering from and it would soon fix itself.

By late Friday afternoon I could no longer ignore the alerts as they were piling up in my Inbox by the hundreds, and it was long past time to roll up the sleeves and figure out what had broken where. I initially assumed that I would find some issue or problem with either the hosting company or an Internet Service Provider. A cursory review of the Internet border routers revealed that a few 10Gbps Internet links had bounced within the past 30 days, but everything was running clean from the Internet Service Provider through our border routers, switches and firewalls up to our Internet facing load balancers. Initially I thought there was an issue with either AT&T or NTT, as a number of the monitoring servers were traversing those ISPs, but after a number of tests I found that packet loss across either of those ISPs was generally less than 0.4%, which isn’t all that bad. If the plumbing was looking good then why were the alerts firing? I looked at the alerts again and noticed that the messages read “socket timeout” and not “socket connection failure”.

In any event I ran a quick packet trace using tcpdump from one of the monitoring servers and found that there was traffic flowing, although there was a significant amount of retransmissions and missing packets. It looked like the health checks were timing out at the default of 10 seconds. I increased the timeout to 20 seconds and bingo – the majority of health checks were now returning successfully. I’m not sure I agree with the verbiage of “socket timeout” since the socket was exchanging information between the client and server; it was more of an overall application timeout, since the request was not completed within the specified timeout value.
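For what it’s worth, the essence of that kind of health check can be reproduced with curl, where the TCP connect timeout and the overall completion timeout are deliberately separate knobs (the VIP hostname and thresholds here are illustrative, not my actual script):

$ curl -s -o /dev/null --connect-timeout 5 --max-time 20 -w '%{http_code} %{time_total}s\n' https://vip.example.com/healthcheck

A connection that’s accepted quickly but whose response dribbles in past --max-time is exactly the “socket timeout” vs. “socket connection failure” distinction above.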

Data Analysis

Now the $1,000,000 question: what had changed that I needed to increase the timeout?

Thankfully I’ve been logging this data for the past 3+ years, so I was able to import a few of the data points since Sept 2017 (207K rows – 1 every 60 seconds) into Excel, and using the quick chart shortcut (Alt-F1) I was able to quickly visualize the data, which provided some interesting results. The amount of time it was taking the health checks to complete had risen significantly in the past few weeks.

With that data it was now clear that the health checks were failing because they were hitting the 10 second default timeout. But what had happened that it was now taking on average longer than 10 seconds for the backend to return the result to the client? Was the backend slower to respond than it had previously been? Was the Internet slower than it had previously been? Was there enough packet loss and retransmissions to impact the timing? Was the size of the data being returned changing?

In short the answer appears to be a little bit of everything above.

  • Was the backend slower to respond than it had previously been? Yes
  • Was the Internet slower than it had previously been? Yes (I always assume the Internet is getting more and more congested)
  • Was there enough packet loss and retransmissions to impact the timing? Yes (especially with 3K+ miles between the endpoints)
  • Was the size of the data being returned changing? Yes (the size of the HTML had increased, causing more data to be transferred)

An interesting but logical side effect: the monitoring servers that were farthest from the Data Center in question had a greater number of errors. This is logical because they have greater latency to reach that specific Data Center, so any packet loss or retransmissions would cause additional delay. This explains why some monitoring servers were reporting no issues or problems and others were reporting all sorts of them. The increased physical distance between the Data Center and the monitoring server was exacerbating the timing because of the inherent packet loss and retransmissions on the Internet, further compounded by the growing size of the HTML being transferred across those vast distances and the increased time it was taking the backend to ultimately serve up the response.

This is a great example of why you can’t always just blame the network, even though it’s the easiest thing to do.

Resolution

In the end I found a failing 10Gbps SFP+ optic in the Internet facing load-balancers that needed to be replaced. I placed a monitoring probe on the local network and found the same amount of packet loss and retransmissions, which confirmed that the problem was local to my Data Center. I failed over between the primary and secondary Internet facing load-balancers and the problem disappeared, so the issue was with the primary Internet facing load-balancer.
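As an aside, many platforms will let you read the digital optical monitoring (DOM) data right off an SFP/SFP+, which can flag a marginal optic (low or erratic RX/TX power, temperature, and so on) without swapping hardware. On a Linux host with a supported NIC it looks something like this; most switches and load-balancers have an equivalent “show transceiver” style command:

$ ethtool -m eth0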

Cheers!

YouTube TV – cutting the cord with Roku https://blog.michaelfmcnamara.com/2018/02/youtube-tv-cutting-the-cord-with-roku/ Mon, 05 Feb 2018

Like many folks before me I’m looking to cut the cord on traditional cable TV. I picked up a Roku Streaming Stick+ and enrolled in the 7-day trial for YouTube TV since it’s available in the Philadelphia market. I’ll hopefully be able to drop Verizon FiOS TV, keep the Verizon FiOS Internet, and significantly reduce my $200/month Internet and cable TV bill.

YouTube TV has Nat Geo and Nat Geo Wild which are a requirement from the family.

The next big question… should I go with Verizon Gigabit Internet?

Anyone with any recommendations?

Verizon FiOS Internet – Juniper Private VLANs https://blog.michaelfmcnamara.com/2017/09/verizon-fios-internet-juniper-private-vlans/ Wed, 20 Sep 2017

I recently stumbled over an interesting problem with Verizon’s FiOS Internet service while doing some consulting. In an effort to protect the innocent and prevent any ass hattery, I’ve changed the IP addressing to use something from RFC5737.

A client had two physical sites about 1 mile apart which were connected to the Internet by separate Verizon FiOS broadband connections and which were assigned the following static IP addresses;

Site A:

IP Network: 198.51.100.226/28
Subnet Mask: 255.255.255.0
Default Gateway: 198.51.100.1
Usable IP Addresses: 198.51.100.226 – 198.51.100.238

Site B:

IP Network: 198.51.100.50/28
Subnet Mask: 255.255.255.0
Default Gateway: 198.51.100.1
Usable IP Addresses: 198.51.100.50 – 198.51.100.63

Let me be the first to admit that the information above isn’t quite right… there is no IP address block 198.51.100.226/28; it should be 198.51.100.224/28. I believe that’s Verizon trying to avoid having customers accidentally use the network address or the first address in the IP address block, which is likely reserved for the actual Verizon Actiontec router.
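You can sanity check that math quickly with ipcalc (this is the CentOS/RHEL flavor of the tool – other versions format their output differently), which gives something like:

$ ipcalc -bn 198.51.100.226/28
NETWORK=198.51.100.224
BROADCAST=198.51.100.239

So 198.51.100.226 lives inside 198.51.100.224/28, just as you’d expect.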

The client was trying to establish a VPN tunnel between the two sites and was running into difficulties. The issue was with the IP addressing provided by Verizon and its likely implementation of private VLANs on the Juniper hardware. I’m assuming that Verizon is using PVLANs to isolate traffic between individual customers to minimize the number of IP subnets they need to create. Instead of creating 16 /28 IP networks they are using a single /24 network and then isolating the traffic between customers using PVLANs. The issue in the example above is pretty obvious – the individual client devices are attempting to communicate with each other directly on the local subnet, believing that there’s no need to signal the upstream router because the netmask indicates that the remote site is in the same IP network. While the remote site is indeed in the same IP network, the implementation of PVLANs is blocking communication between the client devices.

Anyone have any experience with Verizon FiOS using PVLANs?

I believe I heard years ago that Verizon chose Juniper for their FiOS implementation.

Cheers!

Reference: Juniper – Understanding Private VLANs on EX Series Switches

Net Neutrality and the Future of the Internet https://blog.michaelfmcnamara.com/2017/07/net-neutrality-and-the-future-of-the-internet/ Tue, 11 Jul 2017

If you have been under a rock for the past 6+ months you might need to take notice.

On July 12th this blog will be participating in an “INTERNET-WIDE DAY OF ACTION TO SAVE NET NEUTRALITY” in order to help raise awareness and spur action on a part of the masses.

Cheers!

Retail Holiday Peak 2016 https://blog.michaelfmcnamara.com/2016/11/retail-holiday-peak-2016/ Sat, 19 Nov 2016

It’s that time of year again… the holidays are just around the corner and every retailer is gearing up for Black Friday and Cyber Monday. My employer kicked off the holiday shopping season last night with one brand having its yearly 4-hour sale. Thankfully there were no surprises and our infrastructure and application stack were able to handle the additional load without issue. I did stumble upon an instrumentation issue between PRTG and a Cisco FirePOWER 4110 firewall – perhaps I’ll share more about that problem in another post. It’s a challenge every year to try and forecast the potential load and then meet the surge in demand – and let’s not forget about all the email marketing campaigns and app push notifications that the brands want to hit their customers with. It can be a very challenging time for many Information Technology teams.

Now we wait for Thanksgiving and the four days to follow… confident that we’ve taken all the correct steps and everything is ready.  Only time will tell the true story.

Cheers!

It’s the networks fault #18 https://blog.michaelfmcnamara.com/2016/01/its-the-networks-fault-18/ Mon, 04 Jan 2016

Here’s a look at a few different articles and posts that caught my eye over the past few weeks…

Articles

Network Field Day #NFD11 by Dominik Pickhardt – Dominik will be attending Network Field Day 11 this January 2016 in San Jose, CA. It just happens that I’ve also been invited to join the gang in Silicon Valley on January 19th – 22nd. You can find more information over on the Tech Field Day website.

US House okays making internet tax exemptions permanent by Shaun Nichols – We’ll need to see how HR 644 fares in the Senate now that it includes a provision to prevent states from collecting sales tax from Internet retailers for out-of-state customers.

IP leak affecting VPN providers with port forwarding by Perfect Privacy – The team over at Perfect Privacy have revealed how an attacker can reveal a VPN user’s real IP address given a few specific conditions.

A free, almost foolproof way to check for malware by Roger A. Grimes – A great article describing how to easily test a Windows client to see if it’s infected with some malware. I’ve recently found myself doing quite a bit of security forensics analyzing various systems and images.

Will Let’s Encrypt threaten commercial certificate authorities? by Larry Seltzer – Let’s Encrypt is a new free Certificate Authority looking to make publicly signed certificates available for free to anyone. The stated goal of the organization is to help secure the Internet by offering free SSL certificates to anyone. The certificates are only valid for 90 days, a significant caveat and differentiator with the commercial certificate authorities.

Cheers!

Dear Internet – Family Fun https://blog.michaelfmcnamara.com/2014/12/dear-internet-family-fun/ Fri, 19 Dec 2014

I asked my three daughters and the loving wife if they had anything they wanted to share on my blog – any bits of interesting news or observations. I thought for sure they would write about Minecraft or Roblox or something along those lines. Interestingly enough Margaret, my second oldest at the age of 10, volunteered the following piece.

[Margaret’s handwritten note]

If I loosely translate, I believe it reads as follows; “Dear Internet, I alway[s] wonder why my dad always comes home and says “My belly is in my backbone, what’s for dinner”. Plus ever time he comes up from his so called lair I mean has he even eaten lunch?”

Thanks for sharing your thoughts Margaret. I’m usually running during the morning and afternoon and sometimes dinner is my first meal of the day.

Cheers!

Note: This is a series of posts made under the Network Engineer in Retail 30 Days of Peak, this is post number 25 of 30. Special credit goes to Margaret for helping provide the content for this post! Although I’m not sure this one counts. All the posts can be viewed from the 30in30 tag.

Image Credit Cécile Graat

BGP Multihomed Internet Data Center https://blog.michaelfmcnamara.com/2014/12/bgp-multihomed-internet-data-center/ Mon, 15 Dec 2014

It’s both loved and loathed in the network engineering community, but BGP came through for us in the past 24 hours.

We utilize BGP to provide dynamic routing between the many Internet Service Providers we peer with, across the many Data Centers and circuits over which we peer. This past weekend we had an issue with our primary Internet Service Provider (AT&T), but BGP did its job and dutifully detected the dead router and re-routed traffic to the remaining Internet Service Providers. The actual outage time was less than 60 seconds. Even though it occurred around 1:30AM EST, we’re hosting websites that need to be accessible in every timezone around the world. While it was 1:30AM on the East coast it was only 10:30PM on the West coast, where shoppers were still busy picking through the online goods and placing orders. And while it might have been a little too early for our friends in the UK (6:30AM GMT), we could have shoppers online from either France or Germany (7:30AM GMT+1).

Dec 14 2014 01:27:20.337: %BGP-5-ADJCHANGE: neighbor 12.251.xxx.xxx Down BGP Notification sent
Dec 14 2014 01:27:20.337: %BGP-3-NOTIFICATION: sent to neighbor 12.251.xxx.xxx 4/0 (hold time expired) 0 bytes
Dec 14 2014 01:27:22.650: %BGP_SESSION-5-ADJCHANGE: neighbor 12.251.xxx.xxx IPv4 Unicast topology base removed from session  BGP Notification sent
Dec 14 2014 01:33:25.052: %BGP-5-ADJCHANGE: neighbor 12.251.xxx.xxx Up

We also utilize BGP internally in combination with BFD (Bidirectional Forwarding Detection) to help reduce the failover time on the internal network. We’ve actually had BFD accidentally trip a number of times because it can be too sensitive, which can create just as many issues, with routes flapping back and forth between multiple paths.
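For reference, the BGP-with-BFD pairing on Cisco IOS looks roughly like the following sketch (the interface, addresses, ASNs and timers are illustrative, not our production values):

interface GigabitEthernet0/1
 bfd interval 300 min_rx 300 multiplier 3
!
router bgp 64512
 neighbor 192.0.2.2 remote-as 64513
 neighbor 192.0.2.2 fall-over bfd

The multiplier is the sensitivity knob – with 300ms hello intervals and a multiplier of 3, the neighbor is declared down after roughly 900ms of silence, which is exactly how an overly aggressive setting ends up tripping on transient congestion.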

As of this writing I have ~ 511,000 IP routes in my BGP routing tables.

Looking at a peering point on the East coast of the United States;

511499 network entries using 132989740 bytes of memory
2550193 path entries using 244818528 bytes of memory
836860/82048 BGP path/bestpath attribute entries using 187456640 bytes of memory
293120 BGP AS-PATH entries using 13504896 bytes of memory
12459 BGP community entries using 1489256 bytes of memory
51 BGP route-map cache entries using 3264 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 580262324 total bytes of memory
BGP activity 11331307/10819807 prefixes, 150321731/147771538 paths, scan interval 60 secs

Here’s a look at a peering point on the West coast of the United States;

511029 network entries using 132867540 bytes of memory
1021218 path entries using 98036928 bytes of memory
246998/81716 BGP path/bestpath attribute entries using 55327552 bytes of memory
145392 BGP AS-PATH entries using 6562258 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 292794278 total bytes of memory
BGP activity 4607568/4096503 prefixes, 24245483/23224265 paths, scan interval 60 secs

The delta in path entries between the two is a result of the number of BGP peers I have on each router.

East = 2,550,193 (roughly 5 paths per prefix)
West = 1,021,218 (roughly 2 paths per prefix)

As you can guess, I have a number of additional peers on the East coast compared to the West coast – plans are in the works to resolve that next calendar year.

You can see the dramatic growth in the number of BGP routes being advertised over the Internet from http://bgp.potaroo.net/.

Cheers!

Note: This is a series of posts made under the Network Engineer in Retail 30 Days of Peak, this is post number 21 of 30. All the posts can be viewed from the 30in30 tag.

Virtual Desktop – patches, patches and more patches https://blog.michaelfmcnamara.com/2012/09/virtual-desktop-patches-patches-and-more-patches/ Mon, 03 Sep 2012

I fired up my virtual desktop (Windows XP) named DUMBO this morning for the first time in a few weeks.

This is the machine I generally use to remotely connect to customer networks when I’m consulting – I don’t use my personal desktop for a number of reasons. The virtual desktop runs on an HP ProLiant DL360 G5 running CentOS v6.3 with KVM, along with a number of other test and development guest machines.

Anyway I had to spend the better part of 60 minutes patching the machine.

  • Microsoft Security Updates (6)
  • Mozilla Firefox (v15.0)
  • Mozilla Thunderbird (v12.01)
  • Adobe Flash Update (v11.4.402.265)
  • Adobe Reader Update (v10.1.4.38)
  • Oracle Java Update (SE 6 Update 35)
  • LibreOffice (v3.5.6)

Obviously it’s critical that my desktop be clean of any unscrupulous software, especially since I usually have complete access to the entire network and occasionally I’ll connect to an Active Directory resource as a Domain Administrator. I personally rely on a defense-in-depth approach, making sure that all my software is up-to-date and employing a reputable Internet Security/Antivirus program. I’ve been using Kaspersky Internet Security for the past 3 years and it’s actually saved me on a number of occasions, usually from unscrupulous ad networks trying to exploit known vulnerabilities in Microsoft’s Internet Explorer or Mozilla’s Firefox.

The most recent security headline grabber was the zero-day vulnerability in Oracle’s Java software – along with the fix and patch. Many security experts are advising people to disable or uninstall Java if they don’t need it – the problem is that users typically won’t really know whether they need or use Java.

In February 2010 and January 2011 I wrote about a number of security threats and the alarming number of machines I was finding from neighbors and friends that were operating on the edge with either out-dated or missing Internet Security/Antivirus software. I’m sorry to say the trend hasn’t diminished at all. I’m seeing the same or worse in business and corporate networks where IT staffs are struggling to keep up with the “do more with less” mantra while security takes a back seat.

You only need to read the article entitled Inside a ‘Reveton’ Ransomware Operation by Brian Krebs and ponder the criminal possibilities.

There are a great many of us using our personal computers for electronic banking. I personally love the convenience and can’t remember the last time I was actually in a bank branch. However, with that convenience comes a lot of danger and added responsibility. If you have young kids using your personal computer I would strongly urge you to set up accounts for them without administrative access; many operating systems also have parental controls to help monitor your child’s activity.

Here’s my yearly reminder to everyone: spend a few minutes and make sure that the software on your laptop/desktop is up-to-date and that your Internet Security/Antivirus software is running properly. The few minutes (or few $$$$ renewing your Internet Security/Antivirus subscription) you spend now will likely save you from hours and days of frustration and heartache down the road.

Cheers!

References:

Secunia Personal Inspector
Secunia Online Software Inspector (requires Java)

NCAA March Madness – How’s your Internet link handling the madness? https://blog.michaelfmcnamara.com/2012/03/ncaa-march-madness-hows-your-internet-link-handling-the-madness/ Mon, 12 Mar 2012

It’s March again – a time for putting down fertilizer on the lawn, a time for celebrating St. Patrick’s Day, and a time to watch your Internet utilization spike through the roof.

I’m a Blue Coat ProxySG and Websense customer, so I have some options at my disposal to help stem the flood from both my public/guest (WiFi) networks and my internal networks. However, even with those tools available it can be a real challenge these days to filter just the unwanted content out of the network, especially if you’re charged with blocking only the streaming content while keeping basic site access working. So there’s no blocking ncaa.com/* because that would block basic site access.

I currently have about 15,000 devices on my internal network and I average around 3,000 public devices daily on my public/guest networks. The public/guest networks routinely consume around 50Mbps of Internet traffic and the bulk of the public/guest networks are set up on our internal 802.11b/g wireless networks. So I need to be concerned about the performance of the wireless networks themselves and not just the Internet gateway/firewall.

I’m sure there are going to be dozens if not hundreds of different ways for users to find the content. I’ve already spotted a few users trying to connect via Slingbox and there are multiple apps on Google Play and the Apple Store that offer to stream the games to your mobile device over WiFi (our public/guest networks).

Here’s the list of URLs I’m starting with – a rough CPL sketch follows the list. I’m hoping this should help curb 50%-75% of the traffic; I’ll need to evaluate whether it’s worth the effort to go looking for the remainder.

  • *.turner.ncaa.com
  • www.ncaa.com/mml
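In Blue Coat CPL terms the starting point would be roughly the following – an untested sketch from memory, so check it against the CPL reference before deploying anything:

define condition MarchMadnessStreams
    url.domain=turner.ncaa.com
    url=www.ncaa.com/mml
end condition MarchMadnessStreams

<Proxy>
    condition=MarchMadnessStreams deny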

If you are a smaller organization you might want to have a look at OpenDNS. It’s very easy to implement and is very cost effective.

I’m curious what other people are doing, if anything? Do you already have your network locked down so this isn’t an issue? If you have a public/guest network do you allow access? Do you have any challenges based on the size of your network?

Cheers!

Web Goes on Strike! https://blog.michaelfmcnamara.com/2012/01/web-goes-on-strike/ Wed, 18 Jan 2012

I know this is short notice but this site and the discussion forums will not be available on January 18th between 8AM and 8PM (GMT -5). We will be participating in the online protest to stop the Internet censorship bills, SOPA & PIPA. We’ll be joining some big name Internet sites such as Wikipedia, Reddit, Cheezburger Network, WordPress, Mozilla, Destructoid, Gog.com, Namecheap, Imgur, Electronic Frontier Foundation and thousands of blogs and web sites.

I apologize in advance for the inconvenience but we need to put a stop to this legislation.

Visit here for more information…

Cheers!

We’re IPv6 ready and accessible! https://blog.michaelfmcnamara.com/2011/12/were-ipv6-ready-and-accessible/ Mon, 12 Dec 2011

The server that hosts this site is now IPv6 ready and accessible.

I’m not sure how many users are actually using IPv6, but I was doing some research regarding securing an IPv6 allocation from ARIN (American Registry for Internet Numbers) and decided to enable IPv6 on my own personal server.

I must admit the majority of the work was completed by our hosting provider Linode. I only had to make a few small CentOS Linux configuration changes and we were up and running.
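If you’re curious whether a site (this one included) publishes an IPv6 address, a quick AAAA lookup will tell you:

$ dig blog.michaelfmcnamara.com aaaa +short

An empty answer means there’s no AAAA record and the site is reachable over IPv4 only.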

Anyone else IPv6 ready on their public Internet facing servers?

Cheers!

I’m going to be rich! https://blog.michaelfmcnamara.com/2011/03/im-going-to-be-rich/ Tue, 08 Mar 2011

I haven’t received one of these fraudulent email messages in quite a while, so I thought I would share my good fortune with everyone. Anyone know what I should spend my new-found fortune on first? Perhaps a Motorola Xoom or an Apple iPad 2, or maybe a new PC?

ATTN:MCNAMARA,

I am a Trustee and Executor of the estate of a deceased client(Dr P. MCNAMARA) in Budapest, Hungary. I have sat on a 5 year forgotten financial inheritance. In few weeks time, this fund will be transferred to the state as required by law since there’s no claim made.  We can both collaborate and share the proceeds 60/40. Your part would be to receive the funds as the beneficiary , since you have the same last name as my late client, and I will prepare the required documents and have it released to you in just days. Please reply this mail stating full name, phone and fax number details if interested. So I can start
the claims process as we build a mutual trust.

Many thanks in advance as I look forward to our partnership and trust.

Regards,
Douglas Wild

The message originated from Yahoo China from a Douglas Wild (jd.zainnwild@yahoo.com.cn) with Yahoo user account of (X-RocketYMMF:) jdwild1@att.net.

Cheers!

Smithsonian Channel: System Crash https://blog.michaelfmcnamara.com/2010/01/smithsonian-channel-system-crash/ Thu, 28 Jan 2010

The Smithsonian Channel has put together a very insightful show entitled System Crash chronicling the dangers of our growing digital world.

Unbelievable…and unstable. Unlimited…and unreliable. See how our growing dependence on modern technology, now running everything from transportation to energy to finance to communications, has made life a whole lot easier…and infinitely, sometimes tragically, more complicated.

Take a disturbing trip to the dark side of the Internet, where cyber crooks pose a constant threat to our finances, privacy, even our national security. Discover how hackers can attack major corporations and bring entire countries to a standstill, and what, if anything, we can do to stop them.

Its intended audience is the everyday casual Internet user, not the security or network engineer. I thought it did a very good job of articulating the growing dangers and the peril that many Internet users are completely unaware of today.


Cheers!

Save the Internet – Two Million Strong for Net Neutrality https://blog.michaelfmcnamara.com/2009/10/save-the-internet-two-million-strong-for-net-neutrality/ Wed, 28 Oct 2009

I thought it was well past time for me to write about this topic. I’m just going to post some links and ask everyone to make up their own minds, or at the very least to consider what Net Neutrality means to you. I will tell you that I’m a huge Net Neutrality advocate. I can only imagine what companies like Verizon, Comcast and AT&T might try doing if allowed.

I would urge anyone interested in voicing their support to visit Save the Internet and sign the petition.

I found this great graphic over on DVICE that should really help put it into perspective for the common Internet user.


There’s also a great article from Paul Venezia over on InfoWorld: http://www.infoworld.com/d/hardware/net-neutrality-stupid-stupid-does-179

Cheers!

Internet Utilization at 99.9% Arrgghhh! https://blog.michaelfmcnamara.com/2009/06/internet-utilization-at-99-9-arrgghhh/ Thu, 25 Jun 2009

I thought I would just share this short story with you all… it’s a classic case of what can happen even with the best of plans and intentions. We recently deployed Adobe Acrobat Reader 9.1.2 via Microsoft Active Directory Group Policy.

We rushed the deployment in order to address some of the recent Acrobat vulnerabilities that were being actively exploited in the wild by Nine-Ball and other trojans/malware. We noticed an unusual uptick in Internet utilization almost immediately after the package had been deployed. When we examined our Websense logs we found an extreme number of HTTP requests to swupd.adobe.com. We determined that these requests were coming from Adobe software products that were attempting to check for an update via Adobe’s auto-update feature.

The HTTP requests were being denied by our Blue Coat ProxySG appliances because we require user authentication to access the Internet. While the Adobe auto-update component was able to read the PAC file configured within Internet Explorer, it was not able to provide authentication when challenged with a 407 response. We originally thought the sheer number of clients making requests was putting an undue burden on the system, so we added some CPL code to our Blue Coat ProxySG appliances to allow non-authenticated access to *.adobe.com. Within minutes of that change the wheels on the bus came flying off – literally.

We just happen to have two 50Mbps Ethernet links to the Internet served by two Blue Coat ProxySG appliances with about 5,500 client PCs. Within minutes both ProxySG appliances went to 96% CPU utilization and both Internet links went to 99.9% utilization. We had literally let the cat out of the bag and it was off and running… the number of client PCs trying to download updates from Adobe surged and they started to choke our two Internet connections.

Thankfully the Blue Coat ProxySG appliances support bandwidth classes. We created a 1Mbps class and added some CPL code to bandwidth-restrict access to *.adobe.com. While that proved to be the quick fix, we’re also deploying an update via Group Policy to disable the auto-update feature per Adobe’s knowledgebase article.
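The CPL involved was along these lines – a from-memory sketch, with the bandwidth class itself (adobe_1mbps is just a name I’ve made up here) defined separately under the appliance’s bandwidth management configuration:

define condition AdobeUpdates
    url.domain=adobe.com
end condition AdobeUpdates

<Proxy>
    condition=AdobeUpdates authenticate(no) limit_bandwidth.client.inbound(adobe_1mbps)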

Cheers!

VoIP USB Adapter MagicJack https://blog.michaelfmcnamara.com/2008/08/voip-usb-adapter-magicjack/ Sun, 17 Aug 2008

A few weeks ago a few friends approached me about Internet based VoIP solutions for their home phone. They were fed up with the $100.00+ phone bills and weren’t really excited about giving any more money to the local cable television company (Comcast). I’ve been an AT&T CallVantage VoIP customer for the past 2-3 years, so I was obviously ready to recommend AT&T CallVantage until I discovered that they are no longer accepting new customers.

While I wasn’t ready to recommend Vonage or any of the other solutions out there, I did mention the recent buzz around a product called MagicJack. The MagicJack USB adapter itself costs approximately $39.95 and includes the first year of service free, while subsequent years are $19.95 a year (yes, you read that right – $19.95/year). The solution requires a Windows XP or MacOS desktop/laptop and utilizes your broadband Internet connection. I personally know of two folks that are currently using the solution and they absolutely love it, and they are admittedly not very technical or computer savvy. They both estimate that it’s saving them between $75 and $100 a month in long distance phone charges. The solution has scored numerous product awards including PC Magazine’s Editor’s Choice award.

So while I’m not exactly sure what I’ll do myself, since it’s probably only a matter of time until AT&T pulls the plug on CallVantage, it seems like MagicJack could be a great solution for those teenagers heading off to college. They’d no longer have an excuse for not calling home every once-n-while. :)

Cheers!
