Being in my mid-forties, I remember when we didn’t have the Internet. We didn’t have mobile phones. We weren’t so stressed by the overload of information and accessibility. The Internet broke at home. More specifically, the router had intermittent problems for four days, then crashed badly yesterday.
My wife, working from her home office, was having a meltdown of epic proportions.
We’re at opposite ends of the spectrum. She uses it for social life and work; I despise it, yet certainly use it. Hell, I’m posting this, aren’t I? This is more a tool to get things off my chest, out of my head and into that nether region of cyberspace which nobody really understands. Pretty sure nobody really understands the cloud, either.
So with my wife in total meltdown mode, of course my life must then shift into crisis mode. The problem is, I’m supposed to care, but I totally love it when the Internet stops. I like being alone. I like not having to bother about what anyone else is doing (my wife, obviously, the exception). I like disconnecting from the 24/7 on-button our lives seem to have become.
Between my wife pressuring me and taking her frustration out around me, I became so stressed that somehow the problem became mine to fix. So off I went to the shops last night to buy a new router, the conclusion being that our current one was broken. It took two hours of tech support dicking me around to reach that outcome.
A little laugh, though: I did tear the guy a new arsehole when he stated that we must sign up for another six months in order for them to replace their router. Wait… they supply a router so we can get the service we already pay for, and you want me to commit to six months because your equipment broke? WTF? I told them to fuck off.
They’re sending a new router anyway, no strings attached. Funny, that. When you pay for a service, you kind of want it to work without bullshit side negotiations, especially as a long-time customer.
I went and got a better router than the cheap shit they provide and whatever will arrive in the mail. Another phone call to tech support this morning to undo some things the guy screwed with last night, and voilà: working Internet once again. Well… that, and the nice lady this morning was a bit more on the ball, and discovered there was also an account issue with the password. The system was trying to use two, thus failing. Their system had failed during a password update, which helped the drop-out problem along.
Damn! Happy wife — I wish it would break permanently.
Mail servers are technical beasts. Even the experts recommend you employ a professional mail service over attempting to do it yourself. Sure, it’s easy to get Postfix and Dovecot doing basic mail sending, but add a database, mail management, an online mail reader, insecure and secure ports all working, and mail sending and receiving correctly? Think again.
You could pay for cPanel, Plesk or similar commercial software that works out of the box as an all-in-one mail / web server. Look closer at those solutions, though: they don’t implement correct mail protocols either, nor do they use the best or latest web server technologies. They pass the basics test of sending and receiving email and launching websites. <b>My problem</b>… they don’t work with standalone NGINX.
Enter iRedMail – a free, expert solution with an optional paid Pro version. A complete free email solution that caters for NGINX as the sole web server whilst using the latest technologies. Don’t get me wrong, iRedMail needs minor tweaking out of the box, but the web server essentials are easily upgraded, and that’s what makes it so much better than most solutions.
iRedMail requires a fresh server to install on. If you plan on using iRedMail’s anti-spam / virus-scanning components, you need a server with a minimum of 2GB RAM. I turn all that off, as anti-virus already runs on our own computers, where it’s often better placed. Blocking spam and email misuse is easily achieved with SPF, DMARC and DKIM, which we can include without RAM-hungry software.
With the noise disabled, a cheap single-core VPS (Linode or DigitalOcean) with 1GB RAM and an SSD will suffice to run 2,000 daily users, email, NGINX and the latest database and PHP versions.
DNS requires an A and an MX record for your FQDN, set well in advance to allow time to propagate. You need to set SPF for your FQDN and for each email domain you add, to improve mail deliverability. Your FQDN requires DMARC too (follow the link). Both will dramatically increase delivery to inboxes. You can also approve your FQDN (the mailing server) for Gmail via Google Postmaster.
MX your-domain.com, value = my.fqdn.com
Set an SPF for each hosted domain using the IPs of the FQDN:
TXT mydomain.com, value = v=spf1 ip4:123.456.789.0 ip6:2404:5874:3:0:576:3eff:hge5:ba47 -all
To reduce the risk of email abuse, add an SPF record for every non-mailing version of your domain (including the www version, if you don’t mail from it) so that any mail sent from it is rejected as fraudulent spam.
TXT www, value = v=spf1 -all
Add a HELO SPF for the FQDN:
TXT my.fqdn.com, value = v=spf1 a -all
Add an MX SPF for your qualified domain:
TXT fqdn.com, value = v=spf1 mx -all
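Once the records have propagated, you can sanity-check them from any machine with `dig`. A rough sketch using the placeholder names from above:

```shell
# Verify DNS has propagated (names are the placeholders used above)
dig +short MX your-domain.com      # expect: my.fqdn.com
dig +short TXT mydomain.com        # expect the v=spf1 ... record
dig +short TXT my.fqdn.com         # expect the HELO SPF
dig +short A my.fqdn.com           # the A record mail is validated against
dig +short -x 123.456.789.123      # reverse DNS should return my.fqdn.com
```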
DMARC is somewhat different, yet easy. Read the previous link. Start by sending reports to your email using p=none; to identify any issues. If all is well, work towards p=reject; as the default setting to stop all hijack attempts using your mailing addresses.
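For illustration, the two ends of that progression as DNS records (the `rua` reporting address is an assumption; use your own):

```
TXT _dmarc.mydomain.com, value = v=DMARC1; p=none; rua=mailto:postmaster@mydomain.com
TXT _dmarc.mydomain.com, value = v=DMARC1; p=reject; rua=mailto:postmaster@mydomain.com
```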
Reverse DNS typically takes the longest to propagate and is a MUST for a mail server; yours won’t work without reverse DNS fully in place. I would honestly leave the following steps until 24 hours after you have all DNS in place, especially reverse DNS. Otherwise you will install everything, mail won’t work, you’ll wonder why, and you’ll screw with things that aren’t broken, when it all comes back to DNS for a mail server.
You must set a Fully Qualified Domain Name (FQDN), so whatever you choose, add it to your /etc/hosts and set the hostname too. Your hostname is a unique name, and for mail it is also what you use as the FQDN sub-domain. So if you call your computer “my”, we make the FQDN sub-domain “my” too. The order in /etc/hosts is: IP FQDN hostname.
123.456.789.123 my.fqdn.com my
hostnamectl set-hostname my
Check that both are set correctly:
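For example, using the placeholder names from above:

```shell
hostname                      # should print: my
hostname -f                   # should print: my.fqdn.com
grep my.fqdn.com /etc/hosts   # should show: 123.456.789.123 my.fqdn.com my
```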
Depending on your CentOS install, SELinux may need to be disabled. Check its status first:
If it returns “disabled”, do nothing. If enabled, you need to disable it. Change “enforcing” to either “permissive” or “disabled” in /etc/selinux/config to ensure compatibility:
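A sketch of both steps (check with `sestatus`, then edit the persistent config; a reboot is needed for “disabled” to take full effect):

```shell
sestatus                      # prints the current SELinux status
# switch enforcing -> permissive in the persistent config
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
setenforce 0                  # apply permissive immediately, no reboot needed
```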
# MariaDB 10.2 CentOS repository list - created 2017-07-22 09:57 UTC
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.2/centos7-amd64
gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1
SAVE & CLOSE
Now we update OpenSSL to the latest version for HTTP2 compatibility and install some needed software:
yum -y install wget bzip2 yum-utils gcc git make perl
tar -zxf openssl-1.0.2-latest.tar.gz
cd openssl-1.0.2*/
./config shared
make && make install
mv /usr/bin/openssl /root/
ln -s /usr/local/ssl/bin/openssl /usr/bin/openssl
Download the iRedMail release (check the iRedMail website for versions). I have chosen the specific version below over the latest, as the developer has done some super weird stuff with the NGINX file structure in newer releases, which sucks. This requires an extra step prior to running the bash installer:
tar xjf iRedMail-0.9.6.tar.bz2
rm -rf iRedMail-0.9.6.tar.bz2
Add to and save: export status_check_new_iredmail=DONE
We must update our pool file to work with iRedAdmin. I’ve included a full version you can copy and paste straight in (delete the existing content), or you can work through it and make your own changes to your file:
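For reference, a minimal sketch of the kind of pool settings involved (the file path, socket location and pm numbers are assumptions; match them to your own install):

```ini
; /etc/php-fpm.d/www.conf (sketch only)
[www]
user = nginx
group = nginx
listen = /var/run/php-fpm/php-fpm.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660
pm = dynamic
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
```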
All done. You now have the latest stable PHP 7.1.x running under PHP-FPM.
Install Let's Encrypt
To avoid certificate problems when connecting mail clients, let’s install, automate and use Let's Encrypt if you want a free SSL solution. Please read the notes below, as Let's Encrypt is not considered a trusted certificate authority for signing email, which can cause bounces for legitimate email requiring a trusted TLS connection.
git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
cd /opt/letsencrypt
./letsencrypt-auto certonly --standalone
FOLLOW THE PROMPTS
The certificates you need will be located at:
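With the stock client, the live certificates normally land under `/etc/letsencrypt/live/<your-fqdn>/` (the `my.fqdn.com` name here is the placeholder from earlier):

```
/etc/letsencrypt/live/my.fqdn.com/fullchain.pem
/etc/letsencrypt/live/my.fqdn.com/privkey.pem
```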
You need to swap the old paths for the new in the following locations. Before restarting NGINX, always test the config first, to avoid downtime for public services:
All done. Let's Encrypt will automatically check certificate currency on the first of each month and renew if needed. There are many different ways to run cron, so use whatever suits you; I like doing things at specific times and dates for management and control.
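As an example, a root crontab entry for the first of each month (the time of day is arbitrary; `--quiet` keeps it silent unless a renewal actually runs):

```
30 1 1 * * /opt/letsencrypt/letsencrypt-auto renew --quiet && systemctl reload nginx
```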
Any email provider that requires a trusted TLS certificate in order to send email to your server will fail when using Let's Encrypt. You will see “Untrusted TLS connection established to…” in your mail log, and some legitimate email sent to you will bounce. Let's Encrypt is not advised for professional services; instead, purchase at minimum a PositiveSSL certificate by Comodo for your FQDN (US$10 per annum via NameCheap.com), as you will require the CA bundle to become a trusted, secure mail server.
After installing your new key, PositiveSSL cert and ca-bundle onto the server, ensure you add the PositiveSSL ca-bundle to NGINX and update its configuration:
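The idea, sketched for NGINX (file names are assumptions; NGINX wants the certificate and CA bundle concatenated into a single file, certificate first):

```shell
# concatenate cert + ca-bundle, certificate first
cat my_fqdn_com.crt my_fqdn_com.ca-bundle > /etc/ssl/my.fqdn.com.chained.crt

# then in the NGINX server block:
#   ssl_certificate     /etc/ssl/my.fqdn.com.chained.crt;
#   ssl_certificate_key /etc/ssl/my.fqdn.com.key;
```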
iRedMail also adds all SSL locations to the my.cnf database file, so you can change that too with any new locations.
RESTART all three services.
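Assuming the three services are Postfix, Dovecot and NGINX:

```shell
systemctl restart postfix dovecot nginx
```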
If you don’t change the server_name, you will get a new mail log error attempting to connect to the top level domain.
This will not work using the Let's Encrypt ca-bundle; again, it is not a trusted certificate authority.
When using PositiveSSL with the ca-bundle, a sender’s mail log will show “Trusted TLS connection established to…” when sending mail to you.
As Amavis is so intertwined with SpamAssassin and ClamAV (both RAM hungry), it’s easier to disable them and install OpenDKIM for email signing instead. DKIM will get your email past the toughest spam filter of them all, Microsoft’s Outlook system.
Because I host many domains, I’m going to use a per-domain folder to segregate keys.
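A sketch using `opendkim-genkey` (the paths and the `default` selector are assumptions; adjust to your own layout):

```shell
# one folder per domain, keyed by a "default" selector
mkdir -p /var/lib/dkim/mydomain.com
opendkim-genkey -D /var/lib/dkim/mydomain.com -d mydomain.com -s default
chown -R opendkim:opendkim /var/lib/dkim
# default.private is the signing key; default.txt holds the DNS TXT value
```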
Now grab your public key and enter it into your domain’s DNS as a TXT record. Ensure you get the full key and enter it correctly for your DNS provider. Remember that DKIM is validated against your domain, so a valid A record is needed.
Name = default._domainkey
Value = v=DKIM1; k=rsa; p=MIGfMA0….
A final task is to ensure your firewall allows TCP port 8891, which is what OpenDKIM uses to sign outgoing mail; mail will fail to send without it. For the default iRedMail install using Firewalld:
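With Firewalld, that is:

```shell
firewall-cmd --permanent --add-port=8891/tcp
firewall-cmd --reload
```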
DKIM failure usually occurs for two primary reasons: permission errors and DNS errors. The above is tried, tested and proven, and your outcome upon testing should look similar to the below. To prove a point, using default cPanel email I get the following positive spam score, which, whilst it claims not to be marked as spam, places email from my account into an Outlook spam-box:
SpamAssassin Score: 0.11
Message is NOT marked as spam
0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily valid
0.0 T_DKIM_INVALID DKIM-Signature header exists but is not valid
Using my version, you get a negative spam score, which places mail into the Outlook inbox, Microsoft being the toughest filter to get through:
SpamAssassin Score: -0.101
Message is NOT marked as spam
-0.0 SPF_HELO_PASS SPF: HELO matches SPF record
0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily valid
-0.1 DKIM_VALID Message has at least one valid DKIM or DK signature
-0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author’s domain
Enjoy. If you can improve it, please comment with your recommendations.
If you read the NGINX admin guide on building NGINX from source, you may believe you should build PCRE, zlib and OpenSSL from source too. CentOS already has these libraries installed; all you need is their -devel packages, which add the build tools for each.
That said, if you are not running the latest stable OpenSSL 1.0.2 branch, you should be, for HTTP2 compatibility. The current 1.0.2e in YUM is not HTTP2 compatible. The 1.1 branch is a step too far, and not entirely CentOS 7 compatible.
Both Zlib and PCRE devels will be installed automatically with ngx_pagespeed.
bash <(curl -f -L -sS https://ngxpagespeed.com/install) --nginx-version latest
When completed, it will output the add-module directive with the location it installed to, i.e. --add-module=/root/ngx_pagespeed-latest-stable. You will need to add that to your configure arguments.
Move & Remove NGINX
Copy the configure arguments your current NGINX install is using and paste them within an editor.
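You can print the current arguments with (`nginx -V` writes to stderr, hence the redirect):

```shell
nginx -V 2>&1 | grep 'configure arguments'
```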
Add your ngx_pagespeed module output to that argument. You will have something like:
yum -y install openssl-devel
wget http://nginx.org/download/nginx-1.11.13.tar.gz
tar zxf nginx-1.11.13.tar.gz
cd nginx-1.11.13
./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx ... < your configure argument >
make && make install
cp -R /root/nginx /etc
Say yes (y) to each file overwrite. You can compare and amend if you wish, file by file. If you used the same version, then overwriting should not be a problem.
Because we no longer use an RPM, we need to manually create our systemd unit. The below is a copy of the official NGINX version; check it for accuracy.
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID

[Install]
WantedBy=multi-user.target

PASTE, SAVE AND EXIT
All done. Enable NGINX to start at boot, check the syntax first (see known errors below) and, if OK, start NGINX:
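That is:

```shell
systemctl daemon-reload
systemctl enable nginx
nginx -t              # syntax check first
systemctl start nginx
```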
PageSpeed is far from set-and-forget. The basics in the ngx_pagespeed configuration docs will help you get a fundamental cache running, combined with one important basic component: URL restriction (what is and is not to be cached). The full docs outline far more configuration options, which I don’t cover here, as you need to set them based on what is relevant to your virtual host / server.
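As a starting point, a minimal sketch of the kind of http-block configuration involved (the cache path is an assumption; the Disallow lines are where URL restriction happens):

```
pagespeed on;
pagespeed FileCachePath /var/ngx_pagespeed_cache;
pagespeed RewriteLevel CoreFilters;
# restrict what gets rewritten and cached:
pagespeed Disallow "*/admin/*";
```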
Be super careful with settings beyond the basics, as they can cause unintended complications depending on the software you run on your server, and can work differently between browsers. Example: inlining Google Font CSS has no negative effect on WordPress, yet totally screws with XenForo. Even preserve-URL-relativity, a basic setting, can break parts of your site between browsers. Be careful, roll out one thing at a time and get your users to feed back issues.
When you make a change, drop every cache from your server through to the browser, and disable every cache beyond your server, so you are testing the effect of the PageSpeed settings alone. PageSpeed can very quickly render your server useless if you go in heavy adding options. The image options specifically can be very CPU-intensive and just not worth it. That said, when you delve into the settings you have a lot of control to stop PageSpeed rendering your server useless by performing too many optimisations at once.
Thinking longer term, and depending on how frequently your static content changes, you can implement quite a long-lived cache that progressively improves until it is up to date, with only new content left to optimise.
How Do I Know It’s Working?
Open a website on your server, right-click and view the page source, then search for “pagespeed”; you should find where PageSpeed is being applied. If you open the network tab in the inspector and reload the page, you should see a pagespeed header.
When using PageSpeed with PHP-FPM sockets, you will likely get unintended consequences and stall PHP-FPM, shutting down all your sites. If using sockets, remove any keep-alive components: disable them in both the HTTP socket and server PHP config.
You will need to monitor your NGINX, PHP-FPM and database logs, even mail logs, for sudden errors caused by a wrong setting somewhere. Trust me, PageSpeed is amazing, but tedious to set up and get right. Approach with caution and monitor for days, even a week, after changing a setting. I’ve found negative impacts from PageSpeed settings across most primary services, all with easy enough solutions, but these will vary depending on your setup and the settings used.
Start basic, monitor for a week and fix the basic issues that arise first. Use the time to read and understand potential settings you may want to implement, and work out viable solutions based on your server specs and software application.
Flush Pagespeed Cache
Just run the following to flush the ngx_pagespeed cache:
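Assuming the cache path used above of `/var/ngx_pagespeed_cache` (match your own `FileCachePath`), touching the flush file tells PageSpeed to discard its cache:

```shell
touch /var/ngx_pagespeed_cache/cache.flush
```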