How would you plan a network migration?
Michael McNamara - Sun, 07 May 2017

You can usually measure the success of any project by the amount of planning, research and testing that's been invested in it. I've done dozens of network migrations, from complete forklifts to gradual side-by-side migrations, and all of them required a significant amount of planning, research and testing prior to the actual execution. It's that planning, research and testing that I directly credit for the success of all those projects. Here are the general steps I go through when migrating a network (replacing or upgrading the physical hardware or equipment):

  1. Cleanup
  2. Documentation
  3. Research
  4. Testing pre-migration
  5. Execution
  6. Testing post-migration
  7. Turnover

Let’s go through each of those steps and I’ll explain what I’m talking about.

Step 1. Cleanup

This step is usually overlooked but can be the one that provides the biggest bang for the buck. Say we have a core switch with 240 ports, 110 of which are completely idle. Why not clean up and remove those 110 ports and save yourself from having to worry about migrating them? The same goes for the actual configuration. Say we have 8 port-channels of which only 5 are active; the other 3 have been decommissioned, but no one ever cleaned up the configuration or cabling. Let's clean up the configuration prior to any migration so we only need to worry about what's actually in use.
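A quick way to put this cleanup step into practice is a small script that flags candidate ports for decommissioning. This is only a sketch: the field names (`port`, `last_input_days`, `in_packets`) are hypothetical stand-ins for whatever your switch's interface counters export.

```python
# Sketch: flag switch ports with no traffic so they can be cleaned up
# before a migration. The input format is hypothetical; adapt it to
# your vendor's "show interface" counters or SNMP export.

def idle_ports(port_stats, min_idle_days=90):
    """Return ports with no input traffic, or none for at least min_idle_days."""
    return [p["port"] for p in port_stats
            if p["in_packets"] == 0 or p["last_input_days"] >= min_idle_days]

stats = [
    {"port": "Gi1/0/1", "last_input_days": 2,   "in_packets": 10_500},
    {"port": "Gi1/0/2", "last_input_days": 180, "in_packets": 0},
    {"port": "Gi1/0/3", "last_input_days": 365, "in_packets": 12},
]
print(idle_ports(stats))  # the two ports idle for 90+ days
```

Anything the script flags still deserves a human sanity check (a port can be idle because its server only wakes up at quarter-end), but it shrinks the migration scope fast.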

Step 2. Documentation

I generally like to document all the switch ports, not just the uplinks and downlinks, by dumping the MAC/FDB and ARP tables and recording what's connected to every port. You'd be surprised how often this proves helpful, either during the migration or in post-migration troubleshooting.
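Joining those two dumps is mechanical enough to script. Here's a minimal sketch, assuming you've already parsed the FDB dump into (port, MAC) pairs and the ARP dump into (IP, MAC) pairs; real vendor output will need its own parsing first.

```python
# Sketch: join a dumped MAC/FDB table with an ARP table to document
# what is plugged into each switch port. Input tuples are hypothetical
# parsed forms; raw dump formats vary by vendor.

def port_inventory(fdb, arp):
    """Map switch port -> list of (MAC, IP) pairs seen on it."""
    mac_to_ip = {mac: ip for ip, mac in arp}
    inventory = {}
    for port, mac in fdb:
        inventory.setdefault(port, []).append((mac, mac_to_ip.get(mac, "unknown")))
    return inventory

fdb = [("Gi1/0/1", "00:11:22:33:44:55"), ("Gi1/0/2", "66:77:88:99:aa:bb")]
arp = [("10.1.1.10", "00:11:22:33:44:55")]
print(port_inventory(fdb, arp))
```

MACs with no ARP entry show up as "unknown", which is itself useful documentation: those are the ports worth chasing down before the cutover, not after.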

Step 3. Research

It's really important to do the research and understand what caveats you could run into. In most cases you won't be the first person building this wheel; plenty of other folks have done it already and have discussed their issues, problems and experiences online somewhere. It's equally important to understand how you should configure the new gear and how you're going to reach the final goal. And let's not forget the logistics of any implementation: is there enough space, power and cooling? Is the power 120V or 220V? Are the PDU and UPS sized properly? Do I need 5-15P or C14 power cords?
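The power side of those logistics questions is simple arithmetic worth writing down. A rough sketch follows; the device wattages are made-up placeholders, so substitute your vendor's published figures.

```python
# Sketch: rough power-budget check for the logistics questions above.
# Wattages here are hypothetical placeholders; use your vendor's specs.

def amps_needed(total_watts, volts, derate=0.8):
    """Amps required on a circuit, with the common 80% continuous-load derate."""
    return total_watts / volts / derate

devices = {"core switch": 715, "edge switch": 435, "firewall": 250}
total = sum(devices.values())             # 1400 W in this example
print(round(amps_needed(total, 120), 1))  # on a 120V circuit
print(round(amps_needed(total, 208), 1))  # on a 208V circuit
```

Running the numbers at both voltages makes the PDU and receptacle decision obvious before anything ships, rather than on the loading dock.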

Step 4. Testing Pre-Migration

No one wants to jump off a cliff without knowing, with a high degree of certainty, that the parachute is going to open. This is the phase where all the planning and research bears fruit. If you have a test plan, make sure you execute it pre-migration. You'd be surprised how many times I run into people telling me that X or Y isn't working after a network change, only to find out that X or Y hadn't worked for quite some time.
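Even a tiny scripted baseline beats memory here: run the same checks before and after the change, and you can prove whether X or Y ever worked. A minimal sketch, using plain TCP connects against hypothetical targets as a stand-in for a fuller test plan:

```python
# Sketch: a tiny pre/post-migration reachability check. Run it before
# the change to baseline what actually works, then again afterwards
# and diff the results. Targets below are hypothetical examples.

import socket

def check(targets, timeout=2.0):
    """Return {(host, port): True/False} for TCP reachability."""
    results = {}
    for host, port in targets:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[(host, port)] = True
        except OSError:
            results[(host, port)] = False
    return results

targets = [("127.0.0.1", 22), ("10.9.9.9", 443)]
for (host, port), ok in check(targets).items():
    print(f"{host}:{port} -> {'OK' if ok else 'FAIL'}")
```

Save the pre-migration output with the change ticket; a failing check that was already failing beforehand is no longer your migration's problem.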

Step 5. Execution

Here's where the rubber meets the road. Whether it's an overnight forklift or a side-by-side migration, this is what you've been planning for. It's time to get the job done.

Step 6. Testing Post-Migration

Let’s make sure that everything is still working properly… before the users start calling on Monday morning.

Step 7. Turnover

The final hurdle: updating the documentation and putting some type of monitoring and management solution in place.

Let me know what’s been your largest or most challenging upgrade or migration in the past few years.

Cheers!

Image Credit: sanja gjenero

ISC BIND 9.10.2-P3 Forwarding Caching Only Nameserver
Mon, 03 Aug 2015

I recently had to migrate a large DNS environment from about 23 Microsoft Domain Controllers to Infoblox DNS. I could have just deleted all the zones and set up forwarding on the Microsoft DNS servers, but I wanted to leave the Microsoft DNS configuration and data in place to provide a quick backout option in the unlikely event that it was needed (it was needed, and the second time around the named.conf file below was the charm).

I ended up deploying ISC BIND 9.10.2-P3 across a mix of Windows 2003 and Windows 2008 domain controller servers, some 32-bit and some 64-bit.

As I alluded to above, I originally had issues running BIND: after only a few hours of running the service, clients failed to get name resolution and I saw error messages such as the following.

27-Jul-2015 19:15:04.575 general: error: ..\client.c:2108: unexpected error:
27-Jul-2015 19:15:04.575 general: error: failed to get request's destination: failure
27-Jul-2015 19:15:04.981 general: error: ..\client.c:2108: unexpected error:
27-Jul-2015 19:15:04.981 general: error: failed to get request's destination: failure
27-Jul-2015 19:15:20.971 general: error: ..\client.c:2108: unexpected error:
27-Jul-2015 19:15:20.971 general: error: failed to get request's destination: failure

There were also a few other errors that appeared to be related to the anti-DDoS mechanisms built into BIND:

27-Jul-2015 19:50:02.369 resolver: notice: clients-per-query increased to 15

So I went back and recrafted the named.conf file and came up with the following, which seems to be working well for me now, almost 5 days after the Infoblox DNS migration.

You'll notice that I commented out the localhost zone and the 127.0.0.1 reverse zone as well. I didn't think BIND would run without them, but sure enough it does. I also enabled query logging so I could see what type of abuse the DNS servers were getting. I found a couple of servers querying more than 40,000 times a minute for a management platform that had been retired more than 5 years earlier.

options {
  directory "c:\program files\isc bind 9\bin";
 
  // here are the servers we'll send all our queries to
  forwarders {10.1.1.1; 10.2.2.2;};
  forward only;

  auth-nxdomain no;

  // need to include allow-query at a minimum
  allow-recursion { "any"; };
  allow-query { "any"; };
  allow-transfer { "none"; };

  // lets leave IPv6 off for now less to worry about
  listen-on-v6 { "none"; };

  // standard stuff
  version none;
  minimal-responses yes;
 
  // cache positive and negative results for only 5 minutes
  max-cache-ttl 300;
  max-ncache-ttl 300;

  // disable DDoS mechanisms in BIND
  clients-per-query 0;
  max-clients-per-query 0;

};

logging{
   channel example_log{
    file "C:\program files\isc bind 9\log\named.log" versions 3 size 250k;
    severity info;
    print-severity yes;
    print-time yes;
    print-category yes;
  };

  channel queries_file {
    file "c:\program files\isc bind 9\log\queries.log" versions 10 size 10m;
    severity dynamic;
    print-time yes;
  };

  category default{ example_log; };
  category queries { queries_file; };

};

//zone "localhost" in{
//  type master;
//  file "pri.localhost";
//  allow-update{none;};
//};

//zone "0.0.127.in-addr.arpa" in{
//  type master;
//  file "localhost.rev";
//  allow-update{none;};
//};
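With query logging enabled as above, finding abusers like those retired-platform servers is a one-liner's worth of log crunching. A minimal sketch, assuming each query-log line contains BIND's usual "client <ip>#<port>" token; adjust the regex if your log format differs.

```python
# Sketch: tally the noisiest clients in a BIND query log to find
# abusive talkers. Sample lines below are hypothetical but follow
# the "client <ip>#<port>" shape BIND writes to the queries channel.

import re
from collections import Counter

CLIENT_RE = re.compile(r"client (\d{1,3}(?:\.\d{1,3}){3})#\d+")

def top_talkers(lines, n=5):
    """Return the n client IPs issuing the most queries."""
    counts = Counter()
    for line in lines:
        m = CLIENT_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts.most_common(n)

sample = [
    "27-Jul-2015 19:50:01.101 client 10.1.1.50#53123: query: oldmgmt.example.com IN A +",
    "27-Jul-2015 19:50:01.102 client 10.1.1.50#53124: query: oldmgmt.example.com IN A +",
    "27-Jul-2015 19:50:01.103 client 10.2.2.7#41000: query: www.example.com IN A +",
]
print(top_talkers(sample))
```

Point it at the queries.log files the config rotates (10 versions at 10 MB each) and the 40,000-queries-a-minute offenders surface immediately.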

I set up my first nameserver running BIND 4.x back in 1995, more than 20 years ago, while working at Manhattan College. While I'm pretty familiar with BIND, a lot has changed since then, so I had to do a fair bit of research to arrive at the configuration above.

Hopefully someone else will find it helpful.

Cheers!
