Michael McNamara
https://blog.michaelfmcnamara.com
Technology, networking, virtualization and IP telephony

How to install and setup Ansible to manage Junos on CentOS
https://blog.michaelfmcnamara.com/2020/07/how-to-install-and-setup-ansible-to-manage-junos-on-centos/
Fri, 03 Jul 2020

If you Google “Ansible” and “Junos” you’ll find literally hundreds of articles, posts and videos… some covering pre-2.0 Ansible, some covering Ansible 2.5, 2.6 or later, and almost all of them are completely different; a great many of the instructions no longer work!

I recently wanted to test out the Ansible Junos modules put out by Juniper, but first I had to spend a good hour figuring out all the interdependencies to get everything working on a CentOS 7 server. Juniper's Day One: Automating Junos with Ansible, written by Sean Sawtell, is a great starting point, but I ran into problems just getting my local environment running. The hundreds if not thousands of posts and videos were extremely confusing and I quickly grew frustrated.

What follows is a quick guide on how to get everything working on a minimal CentOS 7 server. Depending on your requirements, it might be more advisable to look at running a fully prepared Docker container, where all the needed software is ready to run. You just need to provide the Ansible configuration and playbooks.

Here’s what you need to do from root, or from a root-equivalent account using sudo. Since I built this test VM on a VMware ESXi 6.5 server, I wanted to install the open-source VMware tools and perform any updates.

yum install open-vm-tools
yum update
init 6

yum install epel-release
yum install python3 jxmlease

pip3 install ncclient
pip3 install junos-eznc
pip3 install ansible

ansible-galaxy install Juniper.junos
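
With the role installed, a quick smoke test proves the whole chain (Python 3, ncclient, junos-eznc and the Juniper.junos role) is working. This is only a minimal sketch: the group name, hostname and username below are hypothetical placeholders, and the switch needs NETCONF enabled (set system services netconf ssh) before Ansible can reach it.

inventory.ini

[junos]
switch1.example.com

facts.yml

---
- name: Verify Ansible can reach Junos
  hosts: junos
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  tasks:
    - name: Gather device facts over NETCONF
      juniper_junos_facts:
      register: result

    - name: Display what came back
      debug:
        var: result

Run it with something like ansible-playbook -i inventory.ini facts.yml -u admin -k and, if everything installed cleanly, you should get a wall of device facts back.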

That’s all you need and you are ready to go. If you want to play around with Netmiko or NAPALM, you only need to use pip to install those Python modules.

pip3 install netmiko
pip3 install napalm

Cheers!

Hosting Provider – Digital Ocean
https://blog.michaelfmcnamara.com/2013/11/hosting-provider-digital-ocean/
Sun, 10 Nov 2013

I was still feeling irked with Linode after I discovered the performance of my Linode VPS had decreased significantly after receiving a number of “free” upgrades, so with my Linode 2G coming up for renewal in December I recently started digging around to see if there were any other hosting providers that might be worthwhile. That’s when I stumbled across Digital Ocean.

I fired up a $10 Droplet (the name of a virtual guest server at Digital Ocean) and ran some performance benchmarks, comparing my Linode 2G to Digital Ocean. The results were very exciting: I found my Linode 2G (2 GB) turned in a score of 202.0 while the Droplet (1 GB) turned in a score of 842.9.
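
For anyone who wants to reproduce these numbers, the UnixBench run itself is straightforward. This is a rough sketch; the 5.1.3 tarball lived on Google Code at the time (it's now mirrored on GitHub under kdlucas/byte-unixbench), so the download step is left out.

yum install gcc make perl perl-Time-HiRes     # build prerequisites
tar xzf UnixBench5.1.3.tgz
cd UnixBench
./Run                                         # full suite, takes roughly 30 minutes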

Here are the actual statistics from UnixBench v5.1.3;

Linode 2G

(2 GB, 48 GB, 4 TB, 8 cores (2x priority), $40 / mo)

========================================================================
   BYTE UNIX Benchmarks (Version 5.1.3)

   System: earth.michaelfmcnamara.com: GNU/Linux
   OS: GNU/Linux -- 3.8.4-linode50 -- #1 SMP Mon Mar 25 15:50:29 EDT 2013
   Machine: i686 (i386)
   Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
   CPU 0: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
          Hyper-Threading, MMX, Physical Address Ext
   CPU 1: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
          Hyper-Threading, MMX, Physical Address Ext
   CPU 2: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
          Hyper-Threading, MMX, Physical Address Ext
   CPU 3: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
          Hyper-Threading, MMX, Physical Address Ext
   CPU 4: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
          Hyper-Threading, MMX, Physical Address Ext
   CPU 5: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
          Hyper-Threading, MMX, Physical Address Ext
   CPU 6: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
          Hyper-Threading, MMX, Physical Address Ext
   CPU 7: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
          Hyper-Threading, MMX, Physical Address Ext
   15:33:12 up 187 days,  8:58,  1 user,  load average: 0.01, 0.03, 0.05; runlevel 3

------------------------------------------------------------------------
Benchmark Run: Sat Nov 09 2013 15:33:12 - 16:01:11
8 CPUs in system; running 1 parallel copy of tests

Dhrystone 2 using register variables        9047350.8 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     1674.1 MWIPS (10.2 s, 7 samples)
Execl Throughput                                824.2 lps   (30.0 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks         62735.0 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks           16425.8 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks        256839.3 KBps  (30.0 s, 2 samples)
Pipe Throughput                               77091.1 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                   9260.1 lps   (10.0 s, 7 samples)
Process Creation                               1427.0 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   1716.2 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    721.3 lpm   (60.0 s, 2 samples)
System Call Overhead                         277578.9 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0    9047350.8    775.3
Double-Precision Whetstone                       55.0       1674.1    304.4
Execl Throughput                                 43.0        824.2    191.7
File Copy 1024 bufsize 2000 maxblocks          3960.0      62735.0    158.4
File Copy 256 bufsize 500 maxblocks            1655.0      16425.8     99.2
File Copy 4096 bufsize 8000 maxblocks          5800.0     256839.3    442.8
Pipe Throughput                               12440.0      77091.1     62.0
Pipe-based Context Switching                   4000.0       9260.1     23.2
Process Creation                                126.0       1427.0    113.3
Shell Scripts (1 concurrent)                     42.4       1716.2    404.8
Shell Scripts (8 concurrent)                      6.0        721.3   1202.2
System Call Overhead                          15000.0     277578.9    185.1
                                                                   ========
System Benchmarks Index Score                                         202.0

Droplet

(2GB, 40GB SSD, 3 TB, 2 cores, $20 / mo)

========================================================================
   BYTE UNIX Benchmarks (Version 5.1.3)

   System: moon.michaelfmcnamara.com: GNU/Linux
   OS: GNU/Linux -- 2.6.32-358.6.2.el6.i686 -- #1 SMP Thu May 16 18:12:13 UTC 2013
   Machine: i686 (i386)
   Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
   CPU 0: QEMU Virtual CPU version 1.0 (4600.0 bogomips)
          x86-64, MMX, Physical Address Ext, SYSCALL/SYSRET, Intel virtualization
   20:33:02 up 5 days, 38 min,  2 users,  load average: 0.00, 0.01, 0.03; runlevel 3

------------------------------------------------------------------------
Benchmark Run: Sat Nov 09 2013 20:33:02 - 21:01:06
1 CPU in system; running 1 parallel copy of tests

Dhrystone 2 using register variables       16269312.5 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     2547.8 MWIPS (8.8 s, 7 samples)
Execl Throughput                               3643.5 lps   (29.7 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        470232.0 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks          133863.4 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks       1146234.7 KBps  (30.0 s, 2 samples)
Pipe Throughput                              937630.7 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                 193152.1 lps   (10.0 s, 7 samples)
Process Creation                              11101.8 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   3889.5 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    491.7 lpm   (60.1 s, 2 samples)
System Call Overhead                         770778.6 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   16269312.5   1394.1
Double-Precision Whetstone                       55.0       2547.8    463.2
Execl Throughput                                 43.0       3643.5    847.3
File Copy 1024 bufsize 2000 maxblocks          3960.0     470232.0   1187.5
File Copy 256 bufsize 500 maxblocks            1655.0     133863.4    808.8
File Copy 4096 bufsize 8000 maxblocks          5800.0    1146234.7   1976.3
Pipe Throughput                               12440.0     937630.7    753.7
Pipe-based Context Switching                   4000.0     193152.1    482.9
Process Creation                                126.0      11101.8    881.1
Shell Scripts (1 concurrent)                     42.4       3889.5    917.3
Shell Scripts (8 concurrent)                      6.0        491.7    819.5
System Call Overhead                          15000.0     770778.6    513.9
                                                                   ========
System Benchmarks Index Score                                         842.9

It should be noted that there are a number of differentiators between Linode and Digital Ocean. Linode utilizes Xen while Digital Ocean utilizes KVM. Linode utilizes traditional hard disks while Digital Ocean utilizes SSDs (solid-state drives). It’s pretty well known that SSDs are much faster than traditional hard disks, although SSDs come with their own reliability concerns. And sometimes the biggest differentiator is price: while that 2GB VPS with Linode will cost you $40/month, a 2GB Droplet with Digital Ocean will only set you back $20/month.

I was impressed enough that I’m moving the majority of my workloads to Digital Ocean. I’ll probably end up with two Droplets, a 1 GB and a 2 GB. Only time will tell if Digital Ocean will be as reliable as Linode, but I’ll be here to let you know.

Cheers!

Update: Sunday November 17, 2013 – You can find a related post and follow-up here, Hosting Provider – Digital Ocean (Part 2)

Linode Upgrades – Which hosting provider do you use?
https://blog.michaelfmcnamara.com/2013/05/linode-upgrades-which-hosting-provider-do-you-use/
Sun, 19 May 2013

There are dozens if not hundreds of hosting providers out there these days, so how do you go about choosing the right one for you?

In the early days of my foray into blogging I utilized Google’s Blogger for the first six months. I then decided to move to GoDaddy’s (shared) managed hosting, which wasn’t as bad as some reviews would have you believe. A year later I decided to leave GoDaddy for RIMU Hosting, leaving behind managed hosting for an un-managed CentOS Linux VPS (Virtual Private Server). Although I was a former IBM AIX System Administrator and a Linux enthusiast, I wasn’t quite prepared for the effort required to set up and manage a simple Linux web server. While I enjoyed the challenge, it took me quite some time to get everything automated. As the traffic to my blog and the discussion forums grew I started running into the memory and bandwidth limitations of the plan I was using from RIMU, so I decided to switch to Linode after reading some positive reviews (such as this one). I’m happy to say I’ve been using Linode for almost 18 months, since October 2011, and have never had any issues or problems.

There have been a number of significant upgrades at Linode over the past few months, so much so that I thought I would take a second to detail them here and shamelessly plug my referral link at the same time.

Linode NextGen: RAM Upgrade

Linode literally left the best for last, since most virtual workloads are memory constrained. They are essentially bumping everyone up one level: a Linode 512 becomes a Linode 1G, a Linode 1024 becomes a Linode 2G, and so on and so forth. It should be noted that they are also increasing their pricing by $0.05 per month; for example, the Linode 1G is $20.00 rather than $19.95.

Plan RAM Disk XFER CPU Price
Linode 1G 1 GB 24 GB 2 TB 8 cores (1x priority) $20 / mo
Linode 2G 2 GB 48 GB 4 TB 8 cores (2x priority) $40 / mo
Linode 4G 4 GB 96 GB 8 TB 8 cores (4x priority) $80 / mo
Linode 8G 8 GB 192 GB 16 TB 8 cores (8x priority) $160 / mo
Linode 16G 16 GB 384 GB 20 TB 8 cores (16x priority) $320 / mo
Linode 24G 24 GB 576 GB 20 TB 8 cores (24x priority) $480 / mo
Linode 32G 32 GB 768 GB 20 TB 8 cores (32x priority) $640 / mo
Linode 40G 40 GB 960 GB 20 TB 8 cores (40x priority) $800 / mo

Linode NextGen: The Hardware

Linode has upgraded their hosts with two Intel Sandy Bridge E5-2670 processors. The E5-2670 is at the high end of the power-price-performance ratio and each E5-2670 enjoys 20 MB of cache and has 8 cores running at 2.6 GHz. There’s a lot of processing power behind that virtual server depending on your needs.

Linode NextGen: The Network

Linode has deployed a new Cisco Nexus 7000 and 5000 topology (very similar to the topology that I personally use) in their data centers. “To top things off we’ve increased the amount of outbound transfer included with all plans by 1,000%.  That’s right, 10 times the included transfer!”

Linode 512 upgraded from 200GB to 2000GB (2TB)
Linode 1G upgraded from 400GB to 4000GB (4TB)
Linode 2G upgraded from 800GB to 8000GB (8TB)
Linode 4G upgraded from 1600GB to 16000GB (16TB)
Linode 8G upgraded from 2000GB to 20000GB (20TB)

Storage increased by 20%

Linode 512 goes from 20GB to 24GB
Linode 1GB goes from 40GB to 48GB
Linode 2GB goes from 80GB to 96GB
Linode 4GB goes from 160GB to 192GB
Linode 8GB goes from 320GB to 384GB
Linode 12GB goes from 480GB to 576GB
Linode 20GB goes from 800GB to 960GB

My Thoughts

There have definitely been quite a few changes over at Linode, so I wondered what those changes might have done to performance. On the surface it certainly appears that the average Linode customer is now getting more: 100% more memory, 20% more storage and 1,000% more bandwidth. So what sort of performance increase can we expect in processing?

Well, I decided to run some UnixBench tests and compare them with some previous results I posted in an article entitled Linode VPS Hosting back in October 2011.

I started writing this article back in April 2013. That was before the Linode Manager password reset, which was explained by the security breach that was disclosed shortly thereafter. Throughout that time I’ve struggled to get performance numbers anywhere near what I captured in October 2011. I even attempted to engage Linode support, and while they were cordial they gave me the typical ‘we can move you to a new host’ response without really engaging in an in-depth discussion around the horrendous performance numbers. I would write four or five paragraphs, to which they would respond with one or two lines.

October 2011 Hardware

System: li366-32: GNU/Linux
OS: GNU/Linux — 3.0.4-linode38 — #1 SMP Thu Sep 22 14:59:08 EDT 2011
Machine: i686: i386
Language: en_US.utf8 (charmap=”UTF-8″, collate=”UTF-8″)
CPUs: 0: Intel(R) Xeon(R) CPU L5520 @ 2.27GHz (4522.0 bogomips)
Hyper-Threading, MMX, Physical Address Ext
1: Intel(R) Xeon(R) CPU L5520 @ 2.27GHz (4522.0 bogomips)
Hyper-Threading, MMX, Physical Address Ext
2: Intel(R) Xeon(R) CPU L5520 @ 2.27GHz (4522.0 bogomips)
Hyper-Threading, MMX, Physical Address Ext
3: Intel(R) Xeon(R) CPU L5520 @ 2.27GHz (4522.0 bogomips)
Hyper-Threading, MMX, Physical Address Ext
Uptime: 11:06:54 up 14 min, 1 user, load average: 0.05, 0.04, 0.05; runlevel 3

May 2013 Hardware

System: earth.michaelfmcnamara.com: GNU/Linux
OS: GNU/Linux — 3.8.4-linode50 — #1 SMP Mon Mar 25 15:50:29 EDT 2013
Machine: i686: i386
Language: en_US.utf8 (charmap=”UTF-8″, collate=”UTF-8″)
CPUs: 0: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
Hyper-Threading, MMX, Physical Address Ext
1: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
Hyper-Threading, MMX, Physical Address Ext
2: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
Hyper-Threading, MMX, Physical Address Ext
3: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
Hyper-Threading, MMX, Physical Address Ext
4: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
Hyper-Threading, MMX, Physical Address Ext
5: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
Hyper-Threading, MMX, Physical Address Ext
6: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
Hyper-Threading, MMX, Physical Address Ext
7: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips)
Hyper-Threading, MMX, Physical Address Ext
Uptime: 21:42:19 up 9 days, 14:07, 1 user, load average: 0.18, 0.10, 0.06; runlevel 3

Here are the performance numbers of each side by side;

Test                                       Score        Unit    Time     Iters.   Baseline    Oct 2011   May 2013
Dhrystone 2 using register variables       16345243.3   lps     10.0 s   7        116700.0      1400.6      827.5
Double-Precision Whetstone                 2455.5       MWIPS   10.1 s   7            55.0       446.5      301.2
Execl Throughput                           1179.1       lps     30.0 s   2            43.0       274.2      184.7
File Copy 1024 bufsize 2000 maxblocks      342283.0     KBps    30.0 s   2          3960.0       864.4      139.6
File Copy 256 bufsize 500 maxblocks        87956.8      KBps    30.0 s   2          1655.0       531.5       91.9
File Copy 4096 bufsize 8000 maxblocks      958654.2     KBps    30.0 s   2          5800.0      1652.9      341.2
Pipe Throughput                            488607.7     lps     10.0 s   7         12440.0       392.8       58.3
Pipe-based Context Switching               32606.8      lps     10.0 s   7          4000.0        81.5       23.1
Process Creation                           2233.1       lps     30.0 s   2           126.0       177.2      108.3
Shell Scripts (1 concurrent)               2560.1       lpm     60.0 s   2            42.4       603.8      402.4
Shell Scripts (8 concurrent)               970.0        lpm     60.0 s   2             6.0      1616.7     1115.7
System Call Overhead                       451501.4     lps     10.0 s   7         15000.0       301.0      185.4
System Benchmarks Index Score                                                                    495.1      191.6

You can find the actual HTML results file online for October 2011 and May 2013.

It’s obvious that quite a few things have changed since I first tested Linode back in October 2011. The original testing was performed on a Linode 512 with 4 Intel Xeon L5520 @ 2.27GHz (4522.0 bogomips). The most recent testing was performed on a Linode 2048 with 8 Intel Xeon CPU E5-2630L 0 @ 2.00GHz (4000.1 bogomips). While the original hardware configuration offered 4 cores the latest hardware offering provides 8 cores. I’ve been using the 1 parallel process testing numbers to help gauge the performance of a single core. The disk IO numbers look very poor but when I perform a basic disk IO test everything seems pretty good.

[root@earth ~]# dd if=/dev/zero of=test bs=64k count=48k conv=fdatasync
49152+0 records in
49152+0 records out
3221225472 bytes (3.2 GB) copied, 45.4488 s, 70.9 MB/s

Perhaps the original data I collected in October 2011 was flawed, or perhaps I was the only user on that physical server and now, years later, the Linode environment has become much more crowded. It reminds me of Comcast cable modem Internet: it worked great the first few years, but after everyone in the neighborhood started subscribing the performance really tanked.

I could probably use tools like Bonnie++ and Nbench to help validate my results but I wouldn’t be able to compare them against any previous results. I’d probably only use these tools if I was going to find a new hosting provider and wanted to benchmark their environments against what I have available today. I even went as far as to download UnixBench v5.1.3 and re-ran my tests only to score a 149.2 compared to the original result of 191.6.
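
If I ever do reach for Bonnie++, the invocation would look something like the line below. It's only a sketch; the target directory, size and user are placeholders, and the file size should be at least twice the VPS's RAM so the page cache doesn't mask the disk numbers.

bonnie++ -d /tmp -s 4096 -n 0 -u nobody       # 4 GB sequential test, skip the small-file phase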

With all that said, the server and websites appear to be running fine. The web page performance tests for this site are pretty decent: 2.944 seconds (first view) and 1.623 seconds (second view). Perhaps the performance numbers will change when my server gets migrated to a host with the new Intel Xeon E5-2670 CPUs.

In summary I’m not sure what to say… I had thought this article would be an easy post to write but the performance numbers followed by the security incident have left me wondering if Linode is the hosting provider for me. Performance benchmarking within a virtual environment is really difficult given all the different components and the ever changing workloads.

Cheers!

IPv6 and Nginx
https://blog.michaelfmcnamara.com/2013/01/ipv6-and-nginx/
Wed, 09 Jan 2013

When I recently migrated from Apache to Nginx I temporarily broke the ability of IPv6 clients to reach my website. Last night I spent a few minutes updating the Nginx configuration to support IPv6. Since I’m using virtual hosts on a single CentOS 6.3 server I had to make a few configuration tweaks to the Nginx config files. Thankfully Linode fully supports IPv6, so there was very little to do outside of the actual web server configuration.

The adoption of IPv6 is growing slowly but promises to accelerate significantly with the depletion of the IPv4 address pools. You can check out Google’s statistics, which indicate that more than 1% of all Internet traffic to Google is now IPv6 as opposed to the legacy IPv4. In June 2011 Chandler Harris from Information Week didn’t believe that IPv4 exhaustion was spurring IPv6 adoption. I think that Chandler was just a little too early in his predictions: while all the IPv4 address pools had been allocated, the pools themselves weren’t actually exhausted. Looking at the Google statistics you can quickly see that Romania is leading the IPv6 adoption race at 8.91%, followed by France at 5.1% and the United States at 2.17%.

Here are the steps I took to enable IPv6 support within Nginx.

You first need to confirm that Nginx was compiled with IPv6 support. You can do that with the following command; nginx -V. If Nginx was compiled with IPv6 support you should find --with-ipv6 in the output.

In the default Nginx configuration file (/etc/nginx/conf.d/default.conf) I replaced the listen 80 default_server; with listen [::]:80 default_server; 

I also had to modify each of my virtual host configuration files adding listen [::]:80; to each.

In the end I ended up with files that look like this;

default.conf

#
# The default server
#
server {
    listen      [::]:80 default_server;
    server_name  _;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    error_page  404              /404.html;
    location = /404.html {
        root   /usr/share/nginx/html;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

blog.michaelfmcnamara.com.conf

server {

    # Tell nginx to handle requests for the www.yoursite.com domain
    listen              [::]:80;
    server_name         blog.michaelfmcnamara.com mirror.michaelfmcnamara.com;
    index               index.php index.html index.htm;
    ...
    ...
}
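
After making those changes it's worth a quick syntax check and a verification that Nginx is actually bound to the IPv6 socket. A minimal check on CentOS 6 looks something like this (assuming the stock init script):

nginx -t                        # syntax check the updated configuration
service nginx reload            # pick up the new listen directives
netstat -plnt | grep nginx      # should now show nginx listening on :::80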

You can test your website at IPv4 and IPv6 availability at ipv6-test.com.

Cheers!

Apache2 mod_php vs Nginx PHP-FPM
https://blog.michaelfmcnamara.com/2012/11/apache2-mod_php-vs-nginx-php-fpm/
Wed, 28 Nov 2012

Our society places a great deal of value on speed; whether that’s good or not is debatable. It’s clear, though, that on the Internet speed is king. I’m not just talking about the speed of your Internet connection but also the speed of the website you are communicating with. In 2010 Google announced that it would include website speed, or latency, in ranking webpages. I recently started looking (again) at the performance and speed of my personal website. For the past 4 years I’ve been running the typical LAMP stack comprised of Linux, Apache, MySQL and PHP to power my WordPress and Simple Machines Forum websites. And while the LAMP stack is very stable and well established among Internet web servers, it’s not exactly known for breaking any speed records.

How can I do more without more?

With traffic growing year after year and the website feeling slow (anything beyond 3 seconds is slow to me) I picked up the torch again hoping to finally answer the question of How can I do more without more? How can I squeeze more performance out of the same hardware (VPS) that I currently have?

For the past three weeks I’ve been exploring the Apache (LAMP) vs Nginx (LEMP) debate, trying to understand the pros and cons and whether the performance gains would be applicable to my specific needs in running a website that processes under 100,000 page views monthly. It’s clear that Nginx can really help a website scale, but I was curious what Nginx could do for me. It wasn’t just Nginx that I was testing but also PHP-FPM as opposed to using mod_php for Apache. I decided that the only way I could figure out the answer was to spend the time doing some actual testing and benchmarking.

I spun up a Linode512 instance and deployed CentOS 6.2, which I subsequently upgraded to release 6.3 via yum. I installed MySQL along with Nginx, PHP-FPM and APC from the REMI repository. Once that was all done I loaded an XML backup of my blog into the test server and attempted to duplicate my production website (blog) as closely as possible. The numbers showed an improvement but they didn’t really justify the effort needed to actually migrate the site. It wasn’t until I enabled W3 Total Cache (W3TC) along with some custom Nginx configurations that I observed a very significant improvement in performance. While there was a performance gain utilizing PHP-FPM over mod_php, the real performance gain came from Nginx serving up static HTML files as compared to Apache. While WordPress is a PHP application, the plug-in W3 Total Cache (W3TC) provides the ability to serve up static HTML files, which allows small servers such as mine (Linode1024) to not only handle thousands of users but to serve them quickly via cached data and static HTML as opposed to actually running PHP for every request/session.

Last week I migrated this website to the Linode512 instance, upgraded the Linode1024 instance and migrated the site back without any downtime. I had to spend some time scouring Google and testing Nginx configurations, but the effort was well worth it.

This website now loads in 1-2 seconds as opposed to previously loading in 6-7 seconds.

Benchmarks

I ran a number of different benchmarks including web-based applications such as WebPageTest and command line tools such as ab (Apache Bench). Here are the before and after benchmarks from the same Linode1024 instance.
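
For reference, a typical ab invocation looks like the line below; the request count and concurrency are arbitrary values I'd pick for a small VPS rather than anything scientific.

ab -n 500 -c 10 http://blog.michaelfmcnamara.com/      # 500 requests, 10 at a time, against the front page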

blog.michaelfmcnamara.com (Apache/mod_php)

From: Dulles, VA – IE 8 – Cable
Thursday, November 15, 2012 9:00:52 PM

Performance Results (Median Run)

                      Load Time   First Byte   Start Render   DOM Elements   Document Complete          Fully Loaded
First View (Run 5)    4.451s      0.255s       1.138s         593            4.451s, 46 req, 834 KB     6.400s, 48 req, 857 KB
Repeat View (Run 4)   1.981s      0.343s       0.634s         593            1.981s, 8 req, 19 KB       2.027s, 8 req, 19 KB

blog.michaelfmcnamara.com (Nginx/PHP-FPM/W3TC)

From: Dulles, VA – IE 8 – Cable
Saturday, November 24, 2012 12:04:14 PM

Performance Results (Median Run)

                      Load Time   First Byte   Start Render   DOM Elements   Document Complete          Fully Loaded
First View (Run 4)    1.524s      0.113s       0.583s         668            1.524s, 37 req, 396 KB     3.462s, 50 req, 766 KB
Repeat View (Run 4)   1.422s      0.115s       0.401s         668            1.422s, 8 req, 5 KB        1.422s, 8 req, 5 KB

Looking at the numbers you can quickly see that we went from an average of 6-7 seconds to 1-2 seconds, which is an incredible performance boost. I had tried a few different times over the years to get the TTFB (time to first byte) under 1.0 second, but it wasn’t until I started utilizing some caching and static HTML (W3TC) that I was able to accomplish that goal. It’s clear now that PHP was creating the large TTFB value as it was processing the code (WordPress).

Tweaks

Here are a few of the tweaks I needed to get everything running properly on my CentOS 6.3 server with Nginx and PHP-FPM with APC caching running WordPress and Simple Machines Forum.

/etc/nginx/conf.d/www.acme.com.conf

server {

    # Tell nginx to handle requests for the www.yoursite.com domain
    server_name         www.acme.com;
    index               index.php index.html index.htm;
    root                /srv/www/www.acme.com/html;
    access_log          /srv/www/www.acme.com/logs/access.log;
    error_log           /srv/www/www.acme.com/logs/error.log;

    # Allow uploads of 20M in size
    client_max_body_size 20M;

    # Use gzip compression
    # gzip_static       on;  # Uncomment if you compiled Nginx using --with-http_gzip_static_module
    gzip                on;
    gzip_disable        "msie6";
    gzip_vary           on;
    gzip_proxied        any;
    gzip_comp_level     5;
    gzip_buffers        16 8k;
    gzip_http_version   1.0;
    gzip_types          text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript image/png image/gif image/jpeg;

    # Rewrite minified CSS and JS files
    location ~* \.(css|js) {
        expires 30d;
        add_header Pragma public;
        add_header Cache-Control "public";
        if (!-f $request_filename) {
            rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last;
         }
    }
    # Set a variable to work around the lack of nested conditionals
    set $cache_uri $request_uri;

    # POST requests and urls with a query string should always go to PHP
    if ($request_method = POST) {
        set $cache_uri 'no cache';
    }
    if ($query_string != "") {
        set $cache_uri 'no cache';
    }

    # Don't cache uris containing the following segments
    if ($request_uri ~* "(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|wp-.*\.php|index\.php|wp\-comments\-popup\.php|wp\-links\-opml\.php|wp\-locations\.php)") {
        set $cache_uri "no cache";
    }

    # Don't use the cache for logged in users or recent commenters
    if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp\-postpass|wordpress_logged_in") {
        set $cache_uri 'no cache';
    }

    # Use cached or actual file if they exists, otherwise pass request to WordPress
    location / {
        try_files /wp-content/w3tc/pgcache/$cache_uri/_index.html $uri $uri/ /index.php?q=$uri&$args;
    }

    # Cache static files for as long as possible
    location ~* \.(xml|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
        try_files       $uri =404;
        expires         max;
        access_log      off;
    }

    # Deny access to hidden files
    location ~* /\.ht {
        deny            all;
        access_log      off;
        log_not_found   off;
    }

    # Pass PHP scripts on to PHP-FPM
    location ~* \.php$ {
        try_files       $uri /index.php;
        fastcgi_index   index.php;
        fastcgi_buffers 8 256k;
        fastcgi_buffer_size 128k;
        fastcgi_intercept_errors on;
        fastcgi_pass    unix:/tmp/php5-fpm.sock;
        include         fastcgi_params;
        fastcgi_param   SCRIPT_FILENAME    $document_root$fastcgi_script_name;
        fastcgi_param   SCRIPT_NAME        $fastcgi_script_name;
    }

}
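
For completeness, the PHP-FPM side of that fastcgi_pass line looks roughly like the pool excerpt below. This is a sketch based on a stock php-fpm install from REMI; the file name and the process-manager numbers are assumptions you would tune for your own memory budget.

/etc/php-fpm.d/www.conf (excerpt)

[www]
listen = /tmp/php5-fpm.sock     ; must match the fastcgi_pass socket in the Nginx config
listen.owner = nginx
listen.group = nginx
user = nginx
group = nginx
pm = dynamic
pm.max_children = 10            ; size these to the RAM you actually have
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 4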

I did have some issues with Nginx and PHP tracking sessions in SMF. I would constantly get a “Session verification failed” error when trying to log out of the forums or log in to the administrative portal. That turned out to be an issue with the default value of session.save_path in the php.ini file, so I modified the path to use /tmp and made sure that directory was accessible to all.

/etc/php.ini

session.save_path = "/var/lib/php/session"
to
session.save_path = "/tmp"

There were a few other tweaks but those were the ones that took me the longest to resolve/assemble.

Overall I’m really happy with the performance gains. If you’re running a WordPress website and you’re looking for load times in the 1-2 second range, you should definitely check out Nginx with PHP-FPM and APC, and don’t forget W3 Total Cache.

I also had to add the following to my Nginx configuration file to facilitate the RSS redirect to Google’s FeedBurner service.

    # FeedBurner RSS Redirect - replace URLs below with your values

    if ($http_user_agent !~ FeedBurner) {
      rewrite ^/comment/feed/ http://feeds.feedburner.com/CommentsForMichaelFMcnamara last;
      rewrite ^/feed/ http://feeds.feedburner.com/michaelfmcnamara last;
    }

Cheers!

CentOS 6.2 KVM – VirtIO paravirtualized drivers for Windows
https://blog.michaelfmcnamara.com/2012/05/centos-6-2-kvm-virtio-paravirtualized-drivers-for-windows/
Tue, 01 May 2012

I’ve just recently been playing around with KVM on an HP DL360 running CentOS 6.2 x64. I had a very difficult time finding the VirtIO paravirtualized drivers for Windows in a virtual floppy format (vfd). I was looking for the vfd format so I could easily install the drivers in a Windows XP guest I was building and testing.

I’m going to post a link to the file here; I’m not quite sure why it was pulled from RedHat’s site.

virtio-win-1.1.16.vfd (MD5SUM: 7437f5d81fc43e8da3be01802fa4e9fb)
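
If it helps anyone, here's roughly how that virtual floppy can be handed to a new Windows XP guest with virt-install so the VirtIO storage driver can be loaded during setup. Everything here except the device=floppy disk is a hypothetical placeholder (names, sizes and paths).

virt-install --name winxp --ram 1024 --vcpus 1 \
  --cdrom /var/lib/libvirt/images/winxp.iso \
  --disk path=/var/lib/libvirt/images/winxp.img,size=20,bus=virtio \
  --disk path=/var/lib/libvirt/images/virtio-win-1.1.16.vfd,device=floppy \
  --network bridge=br0,model=virtio \
  --os-variant winxp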

Cheers!

We systematically reject ‘apache@…’ Huh?
https://blog.michaelfmcnamara.com/2011/02/we-systematically-reject-apache-huh/
Sat, 12 Feb 2011

I’m continually amazed by how much hands-on effort it takes to run even a small blog or community these days. The SPAM bots are continually spewing their useless garbage everywhere, the hackers and script kiddies are continually trying to break down the front door, and somewhere in there is the appreciative reader in search of an answer to his/her question or just genuinely interested in the topic at hand.

Every now and then a genuine (system administration) issue or problem surfaces that deserves some time and effort. Since I’m utilizing a virtual private server (VPS) running CentOS 5.5, I’m responsible for administering and managing the server myself. I was an IBM AIX (long live SMIT) and Solaris System Administrator in a previous life, so it’s not a big challenge, but it can be a time-consuming task. The benefits of managing my own server are still significant enough for me, and I’ve learned so much about Linux, MySQL, PHP, Perl, etc. that the experience has been well worth the investment in my view.

I recently noticed that I was getting a lot of bounced email messages on the server from a number of readers that had subscribed to posts on my blog. Here’s a quick snippet of the bounced error message;

Action: failed
Status: 5.1.7
Remote-MTA: dns; mx.acme.org
Diagnostic-Code: smtp; 550 5.1.7 ... We
    systematically reject 'apache@...'

It seems that a few domains (the example above is acme.org, changed to protect identity) were rejecting any email message with the Return-Path set to apache@hostname. In my case the Return-Path was set to apache@michaelfmcnamara.com although the From address was set to noreply@michaelfmcnamara.com. Unfortunately you can’t set the Return-Path (not to my knowledge anyway) from within the WordPress administration portal. You need to manually edit wp-includes/class-phpmailer.php and set the variable $Sender to the same email address you set up within WordPress to use as your From address.

/**
* Sets the Sender email (Return-Path) of the message.  If not empty,
* will be sent via -f to sendmail or as 'MAIL FROM' in smtp mode.
* @var string
*/
var $Sender            = 'noreply@michaelfmcnamara.com';

With that change complete I can see from the server logs (/var/log/maillog) that the Return-Path is now being properly set.

Feb 12 08:29:56 michaelfmcnamara postfix/pickup[9770]: 2B8FD2C3BB: uid=48 from=<noreply@michaelfmcnamara.com>
Feb 12 08:29:56 michaelfmcnamara postfix/cleanup[11068]: 2B8FD2C3BB: message-id=<67fa95dc7fd22d7c6cfd481d506bfd87@blog.michaelfmcnamara.com>
Feb 12 08:29:56 michaelfmcnamara postfix/qmgr[2647]: 2B8FD2C3BB: from=<noreply@michaelfmcnamara.com>, size=1729, nrcpt=1 (queue active)
Feb 12 08:29:56 michaelfmcnamara postfix/local[11070]: 2B8FD2C3BB: to=<whowhatwhen@michaelfmcnamara.com>, relay=local, delay=0.07, delays=0.04/0.01/0/0.02, dsn=2.0.0, status=sent (forwarded as 321C72C37A)
Feb 12 08:29:56 michaelfmcnamara postfix/qmgr[2647]: 2B8FD2C3BB: removed

With that change those domains that were rejecting email from my server are now accepting them again. Just another day where I’ve learned something new.

Cheers!

Update: Thursday February 24, 2011

It seems the upgrade to WordPress 3.1 has overwritten the change I made in the file… had to update the file again!

Update: Friday April 22, 2011

It seems the upgrade to WordPress 3.1.1 has overwritten the change I made in the file again!
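
Since WordPress core files get overwritten by every upgrade, a more durable approach is a tiny must-use plugin that sets the sender through the phpmailer_init hook WordPress provides. This is a sketch rather than something from the original post, but dropping a file like this into wp-content/mu-plugins/ survives upgrades:

<?php
/*
 * wp-content/mu-plugins/fix-return-path.php
 * Set the envelope sender (Return-Path) on every outgoing message
 * so it no longer defaults to apache@hostname.
 */
function mfm_fix_return_path( $phpmailer ) {
    $phpmailer->Sender = 'noreply@michaelfmcnamara.com';
}
add_action( 'phpmailer_init', 'mfm_fix_return_path' );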

CentOS Linux – error updating rpm
https://blog.michaelfmcnamara.com/2010/07/centos-linux-error-updating-rpm/
Thu, 29 Jul 2010

I came across an interesting problem recently with the CentOS Linux server I use to host this blog while upgrading from CentOS 5.4 to 5.5. I had performed this upgrade on a fair number of physical servers without issue, but this was the first time I was upgrading a Xen-based VPS running CentOS.

Every package upgraded without an issue except for rpm itself, which errored out as detailed below;

[root@michaelfmcnamara sysconfig]# yum update rpm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* addons: yum.singlehop.com
* base: yum.singlehop.com
* extras: mirror.raystedman.net
* updates: mirror.skiplink.com
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package rpm.i386 0:4.4.2.3-18.el5 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================================
Package                 Arch                     Version                              Repository                Size
======================================================================================================================
Updating:
rpm                     i386                     4.4.2.3-18.el5                       base                     1.2 M

Transaction Summary
======================================================================================================================
Install       0 Package(s)
Upgrade       1 Package(s)

Total download size: 1.2 M
Is this ok [y/N]: y
Downloading Packages:
rpm-4.4.2.3-18.el5.i386.rpm                                                                    | 1.2 MB     00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Updating       :rpm                                                                    1/2
Error unpacking rpm package rpm-4.4.2.3-18.el5.i386
error: unpacking of archive failed on file /etc/cron.daily/rpm: cpio: rename

Failed:
rpm.i386 0:4.4.2.3-18.el5

Complete!

So what was wrong with /etc/cron.daily/rpm that caused the package to fail to install?

Wouldn’t you know that the immutable attribute had been set on the file?

[root@michaelfmcnamara ~]# lsattr /etc/cron.daily/rpm
----i-------- /etc/cron.daily/rpm

Let’s remove that attribute;

[root@michaelfmcnamara ~]# chattr -i /etc/cron.daily/rpm
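
A quick re-check with lsattr (before trying yum again) should show the 'i' flag is gone;

lsattr /etc/cron.daily/rpm      # the 'i' flag should no longer be listed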

With that fixed let’s try running that update again;

[root@michaelfmcnamara cron.daily]# yum update rpm
Loaded plugins: fastestmirror
Determining fastest mirrors
* addons: yum.singlehop.com
* base: yum.singlehop.com
* extras: mirror.raystedman.net
* updates: mirrors.netdna.com
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package rpm.i386 0:4.4.2.3-18.el5 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================================
Package                 Arch                     Version                              Repository                Size
======================================================================================================================
Updating:
rpm                     i386                     4.4.2.3-18.el5                       base                     1.2 M

Transaction Summary
======================================================================================================================
Install       0 Package(s)
Upgrade       1 Package(s)

Total download size: 1.2 M
Is this ok [y/N]: y
Downloading Packages:
rpm-4.4.2.3-18.el5.i386.rpm                                                                    | 1.2 MB     00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Updating       : rpm                                                                                            1/2
Cleanup        : rpm                                                                                            2/2

Updated:
rpm.i386 0:4.4.2.3-18.el5

Complete!

Success!

Domain Name Server patch
https://blog.michaelfmcnamara.com/2008/07/domain-name-server-patch/
Sun, 13 Jul 2008

Last week there was a flurry of information revolving around a new security flaw in the Domain Name System, the software that acts as the central nervous system for the entire Internet.

On Tuesday, July 8, 2008 a number of vendors including Microsoft, Cisco, Juniper and RedHat released patches and/or acknowledged the flaw existed. The Internet Systems Consortium, the group responsible for development of the popular Berkeley Internet Name Domain (BIND) server on which nearly all DNS offshoots are based, also acknowledged the flaw and released a patch.

I personally spent about 90 minutes last Wednesday updating several internal and external systems, including numerous CentOS v5.2 servers and Windows 2003 Service Pack 2 servers. I was unable to find any mention of the DNS flaw on the Alcatel-Lucent website, so I’ll probably need to place a call concerning Alcatel-Lucent’s VitalQIP product.

I used yum to patch the CentOS Linux servers [“yum update”] and then just restarted the named process [“service named restart”]. On the Windows 2003 Service Pack 2 servers I used Windows Update to download and install KB941672 after which I rebooted the servers.

Here are some references:

http://www.theregister.co.uk/2008/07/09/dns_fix_alliance/
http://www.networkworld.com/news/2008/071008-patch-domain-name-servers-now.html
http://www.networkworld.com/news/2008/070808-dns-flaw-disrupts-internet.html

http://www.networkworld.com/podcasts/newsmaker/2008/071108nmw-dns.html

http://www.us-cert.gov/cas/techalerts/TA08-190B.html
http://www.microsoft.com/technet/security/bulletin/MS07-062.mspx

I would strongly suggest that all network administrators start looking into patching their DNS servers as soon as possible.

Cheers!

UPDATE: July 14, 2008

Here’s an update from RedHat concerning the configuration (named.conf) of BIND;

We have updated the Enterprise Linux 5 packages in this advisory. The default and sample caching-nameserver configuration files have been updated so that they do not specify a fixed query-source port. Administrators wishing to take advantage of randomized UDP source ports should check their configuration file to ensure they have not specified fixed query-source ports.

It seems that a check of the configuration file would be in order. Let me throw in a quick warning though: if your DNS server is sitting behind a firewall, you may need to check with the firewall administrator to understand how the firewall will behave if you randomize your source ports. I believe there are quite a few firewalls out there that only expect to see DNS traffic sourced from a DNS server on UDP/53.
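
Two quick checks are worth running after patching. This is a sketch; 192.168.1.53 below is a placeholder for your own resolver, and the config file locations vary by distribution.

grep -ri "query-source" /etc/named.conf /etc/named.caching-nameserver.conf    # look for a fixed source port
dig @192.168.1.53 +short porttest.dns-oarc.net TXT                            # DNS-OARC rates your source-port randomness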

Good Luck!

CentOS v5.2 is available!
https://blog.michaelfmcnamara.com/2008/07/centos-v52-is-available/
Sat, 05 Jul 2008

The folks over at CentOS released v5.2 on Tuesday June 24, 2008. I’ve been running six different HP ProLiant DL360s over the past 24 months acting as public WiFi hotspot portal servers. The solution has met all my expectations and almost manages itself entirely (I still need to apply patches and security updates). CentOS 5.2 adds the same functionality that RHEL 5.2 adds, including the latest virtualization support. If you’re looking for a Linux distribution for that brand new server hardware and you don’t have the budget to afford RedHat, then CentOS is for you. CentOS is essentially a clone of RedHat Enterprise Linux, compiled from the RHEL source files provided under GPL licensing terms. If you’re looking for a Linux distribution to run on that brand new laptop/desktop, then I don’t think CentOS is for you; I would probably suggest Ubuntu as a solution for any laptop/desktop.

Just visit the current Mirrors list to start downloading today.

Note: Just be warned that if you’re running CentOS v5.0 or v5.1 you will be upgraded to CentOS v5.2 when you issue a “yum update“. I believe the release notes indicate you need to issue a “yum upgrade” in order to upgrade, but that wasn’t my experience.
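
A quick way to confirm where you actually landed, before and after patching, is the stock release file;

cat /etc/redhat-release     # e.g. "CentOS release 5.1 (Final)" before the update
yum clean all               # flush any stale repository metadata first
yum update                  # on a 5.0/5.1 box this will carry you to 5.2
cat /etc/redhat-release     # should now read "CentOS release 5.2 (Final)"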

Cheers!
