Nov 21

What is: msocsp.com?

MS-OCSP stands for “Online Certificate Status Protocol (OCSP) Extensions” from Microsoft. Although that might seem a bit daunting, the plain English version is: Microsoft publishes Open Specifications documentation for protocols, file formats, languages, and standards, as well as overviews of the interaction among these technologies, and this particular protocol extension is how Microsoft clients check on certificate status. Performing a whois on the domain name confirms this is a domain owned by Microsoft, so it is considered safe. If your firewall software is notifying you of connections to msocsp.com, you can relax. I am including below the whois information as of the date of publishing, confirming it is a Microsoft domain.
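If you want to verify this yourself, you can run the lookup from any terminal (the grep pattern below is just illustrative, to trim the output to the interesting fields):

whois msocsp.com | grep -iE "registrant|name server"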

Some of the URLs you might encounter include:…


Whois for MSOCSP.Com:

Domain Name: MSOCSP.COM
Whois Server:
Referral URL:
Name Server: NS1.MSFT.NET
Name Server: NS2.MSFT.NET
Name Server: NS3.MSFT.NET
Name Server: NS4.MSFT.NET
Name Server: NS5.MSFT.NET
Status: clientDeleteProhibited
Status: clientTransferProhibited
Status: clientUpdateProhibited
Updated Date: 20-jan-2014
Creation Date: 20-jan-2014
Expiration Date: 20-jan-2015

Domain Name: MSOCSP.COM
Registry Domain ID: 1843565191_DOMAIN_COM-VRSN
Registrar WHOIS Server:
Registrar URL:
Updated Date: 2014-10-15T07:09:26-0700
Creation Date: 2014-01-20T10:25:22-0800
Registrar Registration Expiration Date: 2015-01-20T10:25:22-0800
Registrar: MarkMonitor, Inc.
Registrar IANA ID: 292
Registrar Abuse Contact Email: [email protected]
Registrar Abuse Contact Phone: +1.2083895740
Domain Status: clientUpdateProhibited
Domain Status: clientTransferProhibited
Domain Status: clientDeleteProhibited
Registry Registrant ID:
Registrant Name: Domain Administrator
Registrant Organization: Microsoft Corporation
Registrant Street: One Microsoft Way,
Registrant City: Redmond
Registrant State/Province: WA
Registrant Postal Code: 98052
Registrant Country: US
Registrant Phone: +1.4258828080
Registrant Phone Ext:
Registrant Fax: +1.4259367329
Registrant Fax Ext:
Registrant Email: [email protected]
Registry Admin ID:
Admin Name: Domain Administrator
Admin Organization: Microsoft Corporation
Admin Street: One Microsoft Way,
Admin City: Redmond
Admin State/Province: WA
Admin Postal Code: 98052
Admin Country: US
Admin Phone: +1.4258828080
Admin Phone Ext:
Admin Fax: +1.4259367329
Admin Fax Ext:
Admin Email: [email protected]
Registry Tech ID:
Tech Name: MSN Hostmaster
Tech Organization: Microsoft Corporation
Tech Street: One Microsoft Way,
Tech City: Redmond
Tech State/Province: WA
Tech Postal Code: 98052
Tech Country: US
Tech Phone: +1.4258828080
Tech Phone Ext:
Tech Fax: +1.4259367329
Tech Fax Ext:
Tech Email: [email protected]
Name Server: NS1.MSFT.NET
Name Server: NS2.MSFT.NET
Name Server: NS3.MSFT.NET
Name Server: NS4.MSFT.NET
Name Server: NS5.MSFT.NET

Nov 19

Ubuntu 14.04 and above in a Generation 2 Hyper-V Virtual Machine (VM)

As most of you know, a Generation 2 Hyper-V Virtual Machine is generally reserved for Windows Server 2012 or 64-bit versions of Windows 8, as the New Virtual Machine Wizard specifies:

Generation 2

This virtual machine generation provides support for features such as Secure Boot, SCSI boot, and PXE boot using a standard network adapter. Guest operating systems must be running at least Windows Server 2012 or 64-bit versions of Windows 8.

This was true until recently, when Ubuntu released version 14.04 (currently 14.10 is also available). Ubuntu 14.04 is the first Linux release to support running inside a Generation 2 Virtual Machine. Needless to say, you are going to need the 64-bit version.

The key to getting this running is disabling Secure Boot. This needs to be done before you commence the installation of the OS in the VM:

  • Go into the VM Settings
    • Hardware
      • Firmware
        • On the top there is a “Secure Boot” option. Uncheck “Enable Secure Boot”

As the window indicates:

Secure Boot is a feature that helps prevent unauthorized code from running at boot time. It is recommended that you enable this setting.

Just disregard the recommendation; as mentioned, this is required to run Ubuntu on a Generation 2 VM. If you are running Windows Server 2012 or Windows 8 and above, you should follow the recommendation (and default setting) of using Secure Boot.
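If you prefer to script this, the same switch can be flipped with the Hyper-V PowerShell module. A minimal sketch, assuming your VM is named “Ubuntu-1404” (the name is a placeholder) and is powered off:

# Disable Secure Boot on the Generation 2 VM
Set-VMFirmware -VMName "Ubuntu-1404" -EnableSecureBoot Off

You can verify the change afterwards with Get-VMFirmware -VMName "Ubuntu-1404".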

The Integration Services offered by Ubuntu have improved as well: not only will you be able to enjoy some of the Gen 2 improvements, but also features like Dynamic Memory being available during the installation process and online backup. It’s pretty nice that Ubuntu is supporting Hyper-V features, making it a viable option when deploying Linux VMs on Hyper-V.

Nov 17

Resolved: Reverse mapping checking getaddrinfo for {Reverse DNS hostname [IP Address]} failed – POSSIBLE BREAK-IN ATTEMPT!

If you start looking at the SSH log (/var/log/auth.log) you might come across a lot of “Reverse mapping checking getaddrinfo for {Reverse DNS hostname [IP Address]} failed – POSSIBLE BREAK-IN ATTEMPT!” messages. In my case I only had login attempts from my own IP address, so I wasn’t too worried, but I wanted to understand why this message was appearing.

After reading up online, things made more sense. The message references a “reverse mapping check”, and the message itself contains the reverse DNS hostname of my IP address. What is going on is that sshd performs a circular DNS check: it takes your IP address, does a reverse DNS lookup on it (for most ISPs the reverse DNS is one they set), and then tries to resolve that DNS hostname back to an IP address. For most people, their ISP won’t point the reverse DNS at a hostname they control, so the reverse mapping check fails. Forward and reverse DNS need to be configured correctly and coherently.



The underlying cause is that there is no reverse DNS set up for the connecting IP address, or the reverse DNS does not resolve to a hostname whose forward DNS resolves back to that IP address. To resolve this issue you need to set up your PTR record and make sure the entire circular DNS validation checks out:

  • Set up a PTR record on the DNS server in use, pointing at a hostname that forward-resolves to your IP address (you can verify the round trip as shown below).
  • As an alternative to the above option, you can put “UseDNS no” in /etc/ssh/sshd_config on the server and restart sshd, which disables the check altogether.
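To check whether the round trip holds for a given client, you can walk the same path sshd does. A quick sketch, using an illustrative client IP and hostname:

# Step 1: reverse (PTR) lookup of the client IP, which is what sshd does first
dig -x 203.0.113.5 +short
# Step 2: forward-resolve the hostname returned above; its answers
# must include 203.0.113.5 or sshd logs the warning
dig host5.isp.example +short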

Nov 17

How to: Override a Location directive on NginX

Sometimes when coding you need to reuse a lot of logic across different systems, but find yourself needing to override some of that functionality for a special case. In this particular scenario I needed to override a Location directive on NginX. What do I mean by that? Well, I do a lot of includes, as a lot of functionality is shared across several sites, but from time to time I have a special request/need where I must implement custom behavior. For that reason I was defining a Location directive twice on NginX. I could have dropped an include, but that would mean manually maintaining a copy of some directives especially for this site. To avoid that, I was looking for a way to define a Location directive twice and have one of them take preference. For those using rtCamp’s EasyEngine, you might find yourself trying to override some of the default functionality/configuration but come across issues when trying to define the same location elsewhere.

When you declare the same Location directive on NginX you get an error indicating:

nginx: [emerg] duplicate location “/wp-login.php” in /etc/somepath/common.conf:25

In order to achieve this functionality I relied on NginX’s ability to stop processing further location directives. You need to change the declaration of one of the Location directives to use a different operator. For example, if you were using the = operator, have the new definition use the ^~ operator. Below is an example:

location = /wp-login.php


location ^~ /wp-login.php

If you read NginX’s documentation on the location directive, you’ll find the following:

This directive allows different configurations depending on the URI. It can be configured using both literal strings and regular expressions. To use regular expressions, you must use a prefix:

  1. “~” for case sensitive matching
  2. “~*” for case insensitive matching
  3. there is no syntax for NOT matching a regular expression. Instead, match the target regular expression and assign an empty block, then use location / to match anything else.

The order in which location directives are checked is as follows:

  1. Directives with the “=” prefix that match the query exactly (literal string). If found, searching stops
  2. All remaining directives with conventional strings. If this match used the “^~” prefix, searching stops.
  3. Regular expressions, in the order they are defined in the configuration file.
  4. If #3 yielded a match, that result is used. Otherwise, the match from #2 is used.

So all the operators where searching stops can help you completely override a Location directive. As I mentioned, you need to leverage the fact that NginX considers a query with a different operator an entirely different Location directive, avoiding the “duplicate location” error message when running “nginx -t” to verify your configuration. Keep the matching order in mind, though: an exact (=) match is checked before a ^~ prefix match, so for the exact URI the = block is the one that wins. The sketch below shows one arrangement.
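Here is a sketch of how the shared include and a site-specific override can coexist (the paths and block bodies are placeholders, not the actual configuration from this site):

# /etc/somepath/common.conf (shared across sites)
location ^~ /wp-login.php {
    # shared default handling
}

# site-specific configuration in the same server block
location = /wp-login.php {
    # custom handling for this one site; the exact (=) match is
    # checked first so this block takes precedence, and the
    # differing operators avoid the "duplicate location" error
}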

Nov 15

How to: Create an SSH connection using Terminal on Mac OS X & save the configuration for later use / shortcut

So now that I bought myself a new Mac, I decided I would try to avoid, by all means, installing Windows on it. I use a lot of applications (and some games) that either only run on Windows or work better on Windows. Sadly, one of them is PuTTY (an SSH client). I really liked how I could save different profiles for the different servers I connect to, and overall the GUI to control tunnels, duplicate settings, etc. Clearly, all I knew how to do on a Mac was ssh ServerName and boom! So finally I took some time to learn how to use all the SSH goodies on a Mac:

I. The Command Line

There are a few basic things you should know before we get into the cooler stuff. I say this because you always want to know how to perform any task two different ways; it helps you troubleshoot in case one does not work. In my case the configuration/shortcut file was in the wrong location, and because the command line worked I knew right away that the problem was in the config file. But I digress. The ssh command line has several parameters, but we’ll cover just a few to get you started:


-i is used to indicate the identity file location. This is the location where you save the key file used to authenticate to the server. One thing to keep in mind is that this file needs to be somewhat secured for OS X to even let you use it. Here is an example:

Permissions 0777 for ‘/Users/CloudIngenium/Documents/.SSL/Open SSH Key.txt’ are too open.

It is required that your private key files are NOT accessible by others.

This private key will be ignored.

bad permissions: ignore key: /Users/CloudIngenium/Documents/.SSL/Open SSH Key.txt

What’s recommended in this case is that you chmod the file (if not the entire directory if you use it exclusively for that purpose) to 600 or 400.


-l is used to specify the user you want to connect as. Think ‘root’ or your actual username.


-p is used to specify the port. If your server does not run on the default port 22, you can specify another one here.

So in summary, you can use something like this:

ssh -i deployment_ssh_key.cert -l root -p 12345 <IP>

II. Saved Configuration / Shortcut method

Now on to the really exciting part.
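The heart of this method is the ~/.ssh/config file, where each Host block acts as a saved profile. A minimal sketch (the alias, address, port, and key path below are placeholders matching the command-line example above):

# ~/.ssh/config
Host myserver
    HostName 203.0.113.10
    User root
    Port 12345
    IdentityFile ~/.ssh/deployment_ssh_key.cert

With that saved (and the file chmod’ed to 600), typing ssh myserver is equivalent to the full command line from section I. You can add one Host block per server, which covers most of what PuTTY’s saved profiles do.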

Nov 15

What is a privileged port on a Mac / Darwin?

Recently I was trying to connect to a remote SSH server using my MacBook. At one point when trying to configure a Tunnel I got an error saying:

Privileged ports can only be forwarded by root

My first instinct was to make sure I had root access on both the remote server and the local one. I realized I needed to use sudo to launch SSH to avoid this error, but clearly that was not the best workaround.

I did a little digging to see what these “privileged ports” were and to learn more about them. It turns out Mac OS X, like other Unix-derived systems, restricts binding ports below 1024 to the root user. I am guessing this is because a long time ago every application that mattered grabbed some of those ports (think 80 and 443 for the web, FTP, SMTP, etc.), so allowing any user to bind them opens room for security issues. Regardless, I simply decided to bind a port above 1024, which avoided the need for root access. If you have the flexibility to change the local binding port, then that’s probably the best route.
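To illustrate, assuming a tunnel to a hypothetical remote.example.com:

# Fails for a regular user: local port 443 is privileged (below 1024)
ssh -L 443:remote.example.com:443 user@server
# Works for a regular user: local port 8443 is above 1024
ssh -L 8443:remote.example.com:443 user@server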

Oct 26

How to: Move your NginX website to HTTPs / SSL

It comes as no surprise that a lot of people are looking into moving their sites to HTTPs due to recent events: Google’s decision to give ranking points to sites that use SSL / HTTPs, and eavesdropping by governments worldwide. There are a number of considerations before taking this step, especially for people who have not yet deployed HTTPs / SSL on their web servers.

There are a number of things you need to start doing and fixing in order to have a functional site that performs well. Because most sites were not designed with SSL in mind, you’ll find that there might be problems when accessing your site over https instead of http. Below are a few tips to consider, although it is not an exhaustive list:

  1. Internal links: If you are using WordPress, you will find that a lot of your links include not only http:// but also the domain name. This is a problem for two reasons: 1) you are hard coding the scheme (in this case http), so even if the page is served over a different scheme the browser will be forced to use the hard coded one, and 2) you are hard coding your site’s URL. Imagine you change your domain name… now you have to change all your URLs. For these reasons I generally recommend people use relative URLs (they start with / instead of including the scheme or domain name).
  2. Infrastructure concerns: Make sure your CDN supports SSL, as well as any other part of your infrastructure (reverse proxies, firewalls, etc). Consider using SPDY when using SSL for added performance benefits. I really like using an online SPDY checker to test for SPDY readiness; it lets you know whether you have deployed SPDY correctly and offers other tips to get the most out of it. We will go over the checklist for this in a minute.
  3. NginX specific configuration: We will cover the configuration needed to implement an SSL site, as well as performance considerations.

So let’s get to it!

I. Use SSL cache

As you can imagine, hosting a site using SSL requires additional work from the CPU. We need to make sure we don’t saturate the CPU, otherwise our site will load much slower. For example, I once tried using a certificate with a 16k-bit key… my CPU would die depending on the number of requests it had to handle. For that reason I’ve joined the 99% of people in using a certificate with a 2048-bit key for my site. For that same reason we need to turn on the SSL session cache on NginX to save CPU resources:

ssl_session_cache shared:SSL:20m;
ssl_session_timeout 10m;

II. Use the right ciphers

This was also a surprise to me, but you can configure the ciphers NginX uses. This has two major impacts: 1) Certain ciphers are weak or have had vulnerabilities discovered in them; using them does not guarantee much security or, worst case, compromises the security of your entire site. 2) Certain ciphers are more CPU intensive than others. Surely you want to be helpful to the poor guy who uses that strange CPU-intensive cipher, but you’re just asking for trouble down the road. I wrote an article on CloudFlare’s recommended configuration (How to: Improve SSL performance on NginX) which you can refer to, as well as this article: Hardening Your Web Server’s SSL Ciphers.

Below is the recommended configuration as of the date of publishing of this post (note that SSLv3 should no longer be offered following the POODLE vulnerability):

ssl_prefer_server_ciphers On;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

III. SPDY readiness

So now we are ready to use SPDY! We’ll go step by step through some recommended configurations:

a) Enable SSL with SPDY on the default port 443

listen 443 spdy ssl;

b) Use a valid X.509 Certificate

As long as you are using a valid certificate from a public key certificate authority you’re more likely than not fine.

c) Server Hello Contains NPN Extension

If you are using NginX you’re probably fine. Make sure you are running the latest version.

d) Enable HTTP Fallback

All this means is that your site is also available via plain HTTP, just in case a client cannot use SPDY:

listen 80 default_server;

listen 443 default_server spdy ssl;

e) HTTP Redirects to HTTPs / SSL / SPDY

If a visitor arrives via HTTP, have your webserver redirect them to the HTTPs version of your site. If you have SPDY enabled they will probably have a faster loading experience.

server {
listen 80 default_server;
server_name _;

return 301 https://$host$request_uri;
}

f) Use HTTP Strict Transport Security (HSTS)

All this does is tell the client that future requests for your site should use SSL. I set the max-age to one day, as I am starting to test site-wide SSL deployment. If you feel very comfortable with this, I have observed most people use a max-age of one year. Be that as it may, the important thing is to enable it and, as with caching, use a reasonable time in case you are forced to go back to a non-SSL site.

# This forces every request after this one to be over HTTPS for one day
add_header Strict-Transport-Security "max-age=86400; includeSubdomains";

Don’t forget that the includeSubdomains should only be used if you are deploying Strict Transport Security to your subdomains as well. You can remove it if you don’t want your subdomains to use HSTS.

UPDATE: SSL Labs recommends using at least 180 days for your HSTS max-age. So you should use 15552000 once you’re done testing and find the performance of your site acceptable.

g) Enable OCSP Stapling

If you are not sure what OCSP Stapling is, I recommend reading the CloudFlare article: OCSP Stapling: How CloudFlare Just Made SSL 30% Faster. Pretty much what this does is remove a large portion of the SSL overhead by doing some work on your server. This is good as your visitors will enjoy a faster experience on your site.

First you need to make sure your certificate includes the entire certificate chain. I’ll look up a previous post I had on how to concatenate certificates to achieve this (pretty much you have your certificate and attach to it the certificates of each public certification authority above it.)
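As a quick sketch of that concatenation (the file names are placeholders for your own certificate and your CA’s intermediate and root certificates):

# Order matters: your certificate first, then each CA certificate above it
cat your_domain.crt intermediate.crt root.crt > your_domain.chained.crt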

Here is the code needed on NginX:

ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 10s;

Do keep in mind we are using Google’s resolvers for this. You can use other DNS resolvers if you prefer.


Here is a sample report from the SPDY checker:

Report Details

  • Network Server on 443: Nice, this host has a network service listening on port 443. SPDY works over SSL/TLS, which usually listens on port 443.
  • SSL/TLS Detected: Good, this host is speaking SSL/TLS. SPDY piggybacks on top of SSL/TLS, so a website needs SSL/TLS to use SPDY.
  • Valid X.509 Certificate: This website is responding with a valid X.509 certificate. X.509 certificate errors can cause the browser to display warning messages and to stop speaking with the website, so using a valid certificate is an essential step to supporting SPDY.
  • ServerHello Contains NPN Extension: Nice, this server includes the NPN Extension during the SSL/TLS handshake. The NPN Extension is an additional part of the SSL/TLS ServerHello message which allows the web server to tell the browser it supports additional protocols, like SPDY.
  • Success! SPDY is Enabled!: Hurray, this website is using SPDY! The following protocols are supported: spdy/3.1, http/1.1.
  • HTTP Fallback Detected: This website is using SPDY, but it also supports traditional HTTP over SSL. This ensures that older web browsers can still access this site using HTTP.
  • HTTP Redirects to SPDY: Pretty sexy! Accessing this website via HTTP automatically redirects the user to access the website via SSL/TLS and SPDY. This means all of the website’s visitors that can browse the site with SPDY do browse the site using SPDY.
  • Strict-Transport-Security Supported: Excellent! This website is using HSTS, also known as Strict Transport Security. This tells the browser to always use SSL when talking to this website, allowing more of your visitors the opportunity to both be secure and use SPDY. The server is sending the header Strict-Transport-Security: max-age=86400; includeSubdomains, which tells the web browser to always use SSL to access this website for the next 1 day.

Oct 26

How to: Create a Self Signed Certificate in Ubuntu

Many times during the initial phases of a web server deployment we need to test secure communications (and their configuration) using a certificate. Unfortunately, we might not have a valid certificate from a certification authority at the time. There are also other scenarios where you can’t get, or don’t really need, a full-fledged certificate, so you need an alternative.

A Self Signed Certificate, as its name implies, is a certificate which you create yourself. The benefits are clear: you control the entire process and can create a certificate with whatever characteristics you want. The downside is that because it is self-signed, in other words not signed by a valid and trusted certification authority, most browsers/clients will show security warnings or deny the user access to the site. Clearly this solution works well for testing scenarios or personal sites where you don’t need to establish public trust. If, however, you are creating an online store and you’re going into production, then you more likely than not need a certificate from a certification authority. I personally recommend StartSSL as their costs are most competitive.

Now that we have cleared why you may want to create a self signed certificate, let’s focus on the actual how:

Step One — Create or choose a location for your SSL Certificate

Depending on your personal taste and needs, you might want to place your SSL certificate in a number of places. I recommend you dedicate a place to store them as you probably will want to secure access to it somehow.

Apache people like using /etc/apache2/ssl while NginX people like using /etc/nginx/conf.ssl from what I’ve seen. But clearly this is a matter of choice. Because I use shared storage, I have my SSL store on a shared mount close to my www files. Again, this is mostly a decision based on your personal/corporate taste and system requirements.

Step Two — Create your Self Signed SSL Certificate

We are going to go ahead now and create both our public certificate and private key. The public certificate is used to identify yourself and lets the client/browser know how to communicate securely with your server. As the name implies, it is for public eyes, so you don’t need to protect it zealously. Your private key, again as the name implies, needs to remain private. It contains the secret for decrypting the communication, and it is needed by your server to read the encrypted traffic. If someone gains access to it, they can eavesdrop on the “secured” traffic to your site.

Now that we have cleared that, let’s go ahead and create our certificate:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/conf.ssl/selfsigned.key -out /etc/nginx/conf.ssl/selfsigned.crt

Let’s go step by step over the parameters we used on this command:

  • sudo: You need admin credentials in order to run the command.
  • openssl: This is the command tool for OpenSSL which is used to create and manage certificates, keys, signing requests, etc.
  • req: This subcommand is used to specify the parameters for the request.
  • -x509: This option specifies that we want to make a self-signed certificate file instead of generating a certificate request. X.509 is a public key infrastructure standard that SSL adheres to for its key and certificate management.
  • -nodes: This option tells OpenSSL that we do not wish to secure our key file with a passphrase. The benefit of securing your private key file with a passphrase is that it can’t be used without the passphrase. In the case of a web server, it means that every time you start the web service you’ll need to manually provide the password. Most people won’t use a passphrase for a web server because of this.
  • -days 365: This specifies how many days the certificate will be valid for. The accepted standard nowadays is one to two years for added security. I believe 3 years has been set as the maximum allowed, and 5-year certificates are considered insecure. I recommend using a validity of 1 year for most certificates. Remember, you can always get a new one before it expires.
  • -newkey rsa:2048: This option creates the certificate request and a new private key at the same time. Following that, we indicate we want an RSA key and that it should be 2048 bits long. 2048 bits is the minimum size accepted as secure these days. The most common key sizes go from 2k all the way up to 16k (2k, 4k, 8k and 16k). I used to use a 16k key, but my experience is that once you start handling a lot of SSL traffic it consumes a lot of CPU power. That’s the reason you’ll see 2k is the standard. You can also choose the signature digest with an option such as -sha256.
  • -keyout: This parameter names the output file for the private key file that is being created.
  • -out: This option names the output file for the certificate that we are generating.

When you hit “ENTER”, you will need to provide additional information that will go into the certificate. You can also specify an answer file which I will cover later on. This answer file is useful when you’re trying to secure multiple domains with one certificate (using a SAN – Subject Alternative Name).

Most of this information can be considered cosmetic. Your end users will see it only when they take a peek at the certificate, which less than 1% of people will do if I had to guess. Regardless, the key piece of information is the Common Name. The Common Name is the domain name that the certificate will be used for.

Common Names are key. Using example.com as a stand-in for your own domain:

  • a CN of example.com only works for example.com
  • a CN of www.example.com only works for www.example.com
  • a CN of *.example.com works for www.example.com or blog.example.com, but it does not work for example.com

So if you are trying to secure, let’s say, your main domain name and all of its subdomains, you need to use a SAN certificate. You need to include both the naked domain name (example.com is a naked domain name as it has no subdomain; www.example.com is technically a subdomain) and the wildcard domain name for all subdomains (*.example.com).

Also, if you are using a static IP address and it is dedicated to your web server you can consider including the IP address as an alternative SAN. If you are going to access the site via IP address then you use the IP address as the common name.

Below is the additional information you are going to have to provide. If you want to use SAN then you need to use an answer file as the wizard won’t ask you for alternative names:

Country Name (2 letter code) [AU]: 
State or Province Name (full name) [Some-State]: 
Locality Name (eg, city) []: New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]: Your Company's name LLC
Organizational Unit Name (eg, section) []: IT
Common Name (e.g. server FQDN or YOUR name) []: www.example.com
Email Address []: admin@example.com

The private key and public certificate will be created and placed in your /etc/nginx/conf.ssl directory.

Step 2.1 — Create your Self Signed SSL Certificate using multiple Subject Alternative Names

As I mentioned above, a regular certificate is only good for a site with no subsites (subdomains). Here we are going to explore how to create SAN certificates, so one certificate can work with multiple domain names and subdomains.

In order to configure OpenSSL to use v3 requirements (Subject Alternative Names) we need to edit the openssl.cnf file. On Debian and Ubuntu systems this file can be found at /usr/lib/ssl/openssl.cnf; on CentOS and Fedora it is at /etc/pki/tls/openssl.cnf.

I. Insert the following line immediately before the “HOME” entry:


SAN="email:support@example.com"

You should use your server’s fully qualified domain name (FQDN) under your own domain, as well as a valid support email address (support@example.com above is a placeholder). OpenSSL will append this default support email address to the SAN field of new SSL certificates whenever you don’t provide a SAN variable; this is required because we cannot leave the subjectAltName parameter empty.

II. Next, add the following line immediately after the [ v3_req ] and [ v3_ca ] section markers:

subjectAltName=${ENV::SAN}
III. Configure your “SAN” environment variable

At the shell prompt you will issue a command to manually configure the SAN environment variable. The variable will be read to obtain a list of alternate DNS names that should be considered valid for new certificates. Remember, if you don’t specify the variable, the only SAN included will be your support email address, so these changes won’t cause issues if you forget to set it.
At the shell prompt type:

export SAN="DNS:*.example.com, DNS:example.com"

Substitute your own domain names, adding additional domains delimited by commas (each prefixed with DNS:). Remember, you need to add each subdomain and domain you want to protect, and the wildcard subdomain (*.example.com) does not include the naked domain name (example.com).

Now you are ready to request your certificate again!
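Putting it together, a sketch of the full flow using the placeholders above (note sudo -E, so the exported SAN variable survives sudo):

export SAN="DNS:*.example.com, DNS:example.com"
sudo -E openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/conf.ssl/selfsigned.key -out /etc/nginx/conf.ssl/selfsigned.crt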

Sep 18

Resolved: Blank pages when using NginX with php-fpm

As of late I have been a bit busy, so keeping up with updates to the web server has been pretty much neglected. Because of that, I decided to switch to the supported distribution to get the latest updates, even though that meant a distribution without any of the extra plugins I’ve come to use. I decided this was an acceptable compromise, due primarily to time constraints. That is pretty much where my problems began. I had a fully functional installation, and all of a sudden after I switched distributions I got a blank screen. I tried everything: setting permissions again, monitoring the logs for errors, reconfiguring NginX and PHP-FPM. Everything checked out: the status pages, the ping stubs, the logs showed no errors… so this was quite the mystery. There were no indications as to what would result in blank pages being shown, yet no errors from NginX or PHP-FPM.

Fast forward a few hours and I finally found the issue: the fastcgi.conf file was missing the definition of PATH_TRANSLATED. Apparently the previous distribution I was using had included this definition, but this distribution ships two fastcgi configuration files (fastcgi_params and fastcgi.conf) and the one in use was missing it. It was just a matter of adding the following line and you have yourself a functioning PHP site once again:

fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name;

I hope this helps others. It seems like a really silly thing, but without it PHP-FPM does not have the configuration required to function properly.

Sep 14

How to: Increase the memory being allocated to PHP

As part of a series of posts regarding how to improve performance when using WordPress, this particular one will focus on the key aspect of memory available to PHP.

As most readers would agree, available memory is an important aspect of any application and its performance. WordPress is no different. Out of the box, most installations have the default 64 MB available to PHP and WordPress. Not all hosts are configured the same, but if you run your own server these are generally the default settings. Depending on the load you have, the number of PHP processes, available memory, etc., tweaking these settings will have an impact on your performance.

Most of the tweaking has to be done on a case-by-case basis (generally you want to provide as much memory as possible, but within reason). Regardless, the key is to know where you need to modify these settings so you can enjoy the benefits of having more memory available to your WordPress installation:

I. Increase memory_limit in PHP.ini

The first thing you want to consider is increasing your memory limit in PHP. The memory_limit directive specifies the maximum amount of memory a script may consume (64 MB by default). If you have an old installation or a shared host you might find yourself with a lower value, so taking a look at this setting should be your first step. Remember that the memory limit is assigned per process: if you run PHP-FPM with 10 processes, you are potentially going to need up to 640 MB of available RAM. Another thing to keep in mind is that this value impacts all your PHP applications (or you can segregate the setting per pool on PHP-FPM, but if applications share a pool then they share the setting).
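For example, in php.ini (the path varies by distribution; /etc/php5/fpm/php.ini is typical for PHP-FPM on Ubuntu, and the value here is illustrative):

memory_limit = 128M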

If you are using a shared host you might not be able to modify the php.ini file. In that case adding the following line to your .htaccess file might do the trick:

php_value memory_limit 64M

II. Increase WP_MEMORY_LIMIT in wp-config.php

The next step is increasing the WP_MEMORY_LIMIT value in your wp-config.php file. As you can probably guess by now, raising memory_limit in php.ini alone won’t help if you don’t also increase WP_MEMORY_LIMIT in wp-config.php: starting with version 2.5, WordPress uses the WP_MEMORY_LIMIT option to precisely limit the maximum amount of memory PHP may consume when running WordPress. What WordPress does is attempt to raise the memory allocated to PHP, on the fly, only when running WordPress. So, theoretically, if your host/configuration allows it, your memory_limit could be set to 32M while your WP_MEMORY_LIMIT is 64M, and WordPress will try to change the memory allocation for its processes to 64 megabytes.
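In wp-config.php that looks like this (the value is illustrative):

define('WP_MEMORY_LIMIT', '64M');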

III. Increase WP_MAX_MEMORY_LIMIT in wp-config.php

This one is my favorite setting, and also one that I haven’t seen much written about. WordPress realizes, as most of us have with time, that the administration area is more resource intensive than the public-facing website. And it makes sense: after all, pulling up all posts is more demanding than displaying only one. What many of us ended up doing was allocating 128 or even 256 megabytes to PHP’s memory_limit and wp-config.php’s WP_MEMORY_LIMIT, which improved the performance of the administration area but caused excessive consumption in the other processes handling regular visitor requests. To address this issue, WordPress introduced the WP_MAX_MEMORY_LIMIT option, which establishes the maximum memory that can be used in the administration area. Ironically, you could also provide a value lower than WP_MEMORY_LIMIT, effectively decreasing the memory assigned when in the administration area. Pretty cool, eh?
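Again in wp-config.php (value illustrative), so the admin area gets more headroom than the front end:

define('WP_MAX_MEMORY_LIMIT', '256M');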


Be sure to provide your WordPress installation with sufficient resources to handle the incoming traffic from your visitors. WordPress recommends at least 40 MB allocated to PHP. Obviously the more the better, but within reason. Be sure to configure all three settings that govern memory allocation so that one of them doesn’t restrict the others, effectively nullifying what you’re trying to accomplish.
