# src
date:2015-10-28 15:04:24
url:/src/about
**If you're here because Google sent you to one of the articles I deleted and then you got redirected here, have a look at the [Internet Archive](https://web.archive.org/web/*/https://longhandpixels.net/blog/), which preserved those pages.**
For a while I had another blog at the URL longhandpixels.net. I made a few half-hearted attempts to make money with it, which I refuse to do here.
I felt uncomfortable with the marketing it required and a little bit dirty about the whole thing. I don't want to spend my life writing things whose purpose is to draw people in to buy my book. Honestly, I don't care about selling the book (at this point, 2018, it's enough out of date that I pulled it completely).
What I want to do is write what I want to write, whether the topic is [life on the road with my family, traveling in a restored 1969 Dodge Travco RV](/) (which is what most of this site is about), fiction or technology. I don't really care if anyone else is interested or not. Long story short: I shut down longhandpixels, ported over a small portion of articles that I liked and deleted the rest, redirecting them all to this page, hence the message at the top.
So, there you go. Now if I were you I'd close this browser window right now and go somewhere with fresh air and sunshine, but if you're not up for that, I really do hope you enjoy `src`, which is what I call this code/tech-centric portion of luxagraf.
### Acknowledgements
`src` and the rest of this site would not be possible without the following software, many thanks to the creators:
* [Git](http://git-scm.com/) -- pretty much everything I write is stored in Git for version control purposes. I host my own repos privately.
* [Nginx](http://nginx.org/) -- This site is served by a custom build of Nginx. You can read more about how I set up Nginx in the tutorial I wrote, *[Install Nginx on Debian/Ubuntu](/src/install-nginx-debian)*
* [Python](https://www.python.org/) and [Django](https://www.djangoproject.com/) -- This site consists primarily of flat HTML files generated by a custom Django application I wrote.
* [Arch Linux](https://www.archlinux.org/) -- Way down at the bottom of the stack there is Arch, which is my preferred operating system, server or otherwise. Currently I run Arch on a small VPS instance at [Vultr.com](http://www.vultr.com/?ref=6825229) (affiliate link, costs you nothing, but helps cover my hosting).
# Switching from LastPass to Pass
date:2015-10-28 15:02:09
url:/src/pass
I used to keep all my passwords in my head. I kept track of them using some memory tricks based on my very, very limited understanding of what memory champions like [Ed Cooke][1] do. Basically I would generate strings using [pwgen][2] and then memorize them.
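For reference, generating a password that way is a one-liner; the second command is a rough coreutils approximation for machines without pwgen:

~~~~console
# one random, fully random ("secure") 20-character password
pwgen -s 20 1

# roughly equivalent without pwgen
head -c 1000 /dev/urandom | LC_ALL=C tr -dc 'A-Za-z0-9' | head -c 20; echo
~~~~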
As you might imagine, this did not scale well.
Or rather it led to me getting lazy. It used to be that hardly any sites required you to log in, so it was no big deal to memorize a few passwords. Now pretty much every time you buy something you have to create an account, and I don't want to memorize a new strong password for some one-off site I'll probably never visit again. So I ended up using a weaker password for those. Worse, I'd re-use that password at multiple sites.
My really important passwords (email and financial sites) are still only in my head, but recognizing that re-using the same simple password for the one-offs was a bad idea, I started using LastPass for those sorts of things. But I never really liked using LastPass. It bothered me that my passwords were stored on a third-party server. But LastPass was just *so* easy.
Then LogMeIn bought LastPass and suddenly I was motivated to move on.
As I outlined in a [brief piece][3] for The Register, there are lots of replacement services out there -- I like [Dashlane][4], despite the price -- but I didn't want my password data on a third party server any more. I wanted to be in total control.
I can't remember how I ran across [pass][5], but I've been meaning to switch over to it for a while now. It's exactly what I wanted in a password tool -- a simple, secure, command-line-based system built on tested tools like GnuPG. There's also a [Firefox add-on][6] and [an Android app][7] to make life a bit easier. So far though, I'm not using either.
So I cleaned up my LastPass account, exported everything to CSV and imported it all into pass with this [Ruby script][8].
Once you have the basics installed there are two ways to run pass, with Git and without. I can't tell you how many times Git has saved my ass, so naturally I went with a Git-based setup that I host on a private server. That, combined with regular syncing to my Debian machine, my wife's Mac, rsyncing to a storage server, and routine backups to Amazon S3 means my encrypted password files are backed up on six different physical machines. Moderately insane, but sufficiently redundant that I don't worry about losing anything.
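If you're wondering what day-to-day use looks like, here's a rough sketch; the GPG id, entry names and remote URL are all placeholders:

~~~~console
# one-time setup: point the store at your GPG key and put it under git
pass init "you@example.com"
pass git init
pass git remote add origin git@yourserver.example.com:password-store.git

# day-to-day use
pass generate shopping/one-off-site 20   # create and store a random password
pass show shopping/one-off-site          # decrypt and print it
pass git push                            # sync to the private server
~~~~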
If you go this route there's one other thing you need to back up -- your GPG keys. The public key is easy, but the private one is a bit harder. I got some good ideas from [here][9]. On one hand you could be paranoid-level secure and make a paper printout of your key. I suggest encoding it as a barcode or QR code, printing it on card stock, laminating it for protection from the elements and then storing it in a secure location like a safe deposit box. I may do this at some point, but for now I went with the less secure plan B -- I simply encrypted my private key with a passphrase.
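In case it's useful, the encrypt-the-private-key route looks roughly like this; the key id is a placeholder and gpg will prompt you for the passphrase:

~~~~console
# export the private key in ASCII armor
gpg --export-secret-keys --armor you@example.com > private-key.asc

# wrap it in symmetric encryption, then destroy the plaintext copy
gpg --symmetric --cipher-algo AES256 private-key.asc   # writes private-key.asc.gpg
shred -u private-key.asc
~~~~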
Yes, that essentially negates at least some of the benefit of using a key instead of a passphrase in the first place. However, since, as noted above, I don't store any passwords that would, so to speak, give you the keys to my kingdom, I'm not terribly worried about it. Besides, if you really wanted these passwords it would be far easier to just take my laptop and [hit me with a $5 wrench][10] until I told you the passphrase for gnome-keyring.
The more realistic thing to worry about is how other, potentially far less tech-savvy people can access these passwords should something happen to you. No one in my immediate family knows how to use GPG. Yet. So until I teach my kids how to use it, I periodically print out my important passwords and store that printout in a secure place along with a will, advance directive and so on.
[1]: https://twitter.com/tedcooke
[2]: https://packages.debian.org/search?keywords=pwgen
[3]: tk
[4]: http://dashlane.com/
[5]: http://www.passwordstore.org/
[6]: https://github.com/jvenant/passff#readme
[7]: https://github.com/zeapo/Android-Password-Store
[8]: http://git.zx2c4.com/password-store/tree/contrib/importers/lastpass2pass.rb
[9]: http://security.stackexchange.com/questions/51771/where-do-you-store-your-personal-private-gpg-key
[10]: https://www.xkcd.com/538/
# Setup And Secure Your First VPS
date:2015-03-31 20:45:50
url:/src/setup-and-secure-vps
Let's talk about your server hosting situation. I know a lot of you are still using a shared web host. The thing is, it's 2015; shared hosting only makes sense if you really want unexplained site outages and over-crowded servers that slow to a crawl.
It's time to break free of those shared hosting chains. It's time to stop accepting the software stack you're handed. It's time to stop settling for whatever outdated server software and configurations some shared hosting company sticks you with.
You need a VPS. Seriously.
What? Virtual Private Servers? Those are expensive and complicated... don't I need to know Linux or something?
No, no and not really.
Thanks to an increasingly competitive market you can pick up a very capable VPS for $5 a month. Setting up your VPS *is* a little more complicated than using a shared host, but most VPS providers these days have one-click installers that will set up a Rails, Django or even WordPress environment for you.
As for Linux, knowing your way around the command line certainly won't hurt, but these tutorials will teach you everything you really need to know. We'll also automate everything so that critical security updates for your server are applied automatically without you lifting a finger.
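For the record, the automatic-updates piece is just a couple of commands on Debian/Ubuntu; this sketch assumes the stock unattended-upgrades package, which by default applies only security updates:

~~~~console
# install the updater and enable the daily security-update job
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
~~~~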
## Pick a VPS Provider
There are hundreds, possibly thousands of VPS providers these days. You can nerd out comparing all of them on [serverbear.com](http://blog.serverbear.com/) if you want. When you're starting out I suggest sticking with what I call the big three: Linode, Digital Ocean or Vultr.
Linode would be my choice for mission-critical hosting. I use it for client projects, but Vultr and Digital Ocean are cheaper and perfect for personal projects and experiments. Both offer $5-a-month servers, which gets you 512MB of RAM, plenty of bandwidth and 20-30GB of SSD-based storage. Vultr actually gives you a little more RAM, which is helpful if you're setting up a Rails or Django environment (i.e. a long-running process that requires more memory), but I've been hosting a Django-based site on a 512MB Digital Ocean instance for 18 months and have never run out of memory.
Also note that all these plans start off charging by the hour so you can spin up a new server, play around with it and then destroy it and you'll have only spent a few pennies.
Which one is better? They're both good. I've been using Vultr more these days, but Digital Ocean has a nicer, somewhat slicker control panel. There are also many others I haven't named. Just pick one.
Here's a link that will get you a $10 credit at [Vultr](http://www.vultr.com/?ref=6825229) and here's one that will get you a $10 credit at [Digital Ocean](https://www.digitalocean.com/?refcode=3bda91345045) (both of those are affiliate links and help cover the cost of hosting this site *and* get you some free VPS time).
For simplicity's sake, and because it offers more one-click installers, I'll use Digital Ocean for the rest of this tutorial.
## Create Your First VPS
In Digital Ocean you'll create a "Droplet". It's a three step process: pick a plan (stick with the $5 a month plan for starters), pick a location (stick with the defaults) and then install a bare OS or go with a one-click installer. Let's get WordPress up and running, so select WordPress on 14.04 under the Applications tab.
If you want automatic backups, and you do, check that box. Backups are not free, but generally won't add more than about $1 to your monthly bill -- it's money well spent.
The last thing we need to do is add an SSH key to our account. If we don't, Digital Ocean will send our root password in a plain-text email. Yikes.
If you need to generate some SSH keys, here's a short guide, [How to Generate SSH keys](/src/ssh-keys-secure-logins). You can skip step 3 in that guide. Once you've got your keys set up on your local machine you just need to add them to your droplet.
If you're on OS X, you can use this command to copy your public key to the clipboard:
~~~~console
pbcopy < ~/.ssh/id_rsa.pub
~~~~
Otherwise you can use cat to print it out and copy it:
~~~~console
cat ~/.ssh/id_rsa.pub
~~~~
Now click the button to "add an SSH key". Then paste the contents of your clipboard into the box. Hit "add SSH Key" and you're done.
Now just click the giant "Create Droplet" button.
Congratulations, you just deployed your first VPS.
## Secure Your VPS
Now we can log in to our new VPS with this code:
~~~~console
ssh root@12.34.56.78
~~~~
That will cause SSH to ask if you want to add the server to the list of known hosts. Say yes, and on OS X you'll get a dialog asking for the passphrase you created a minute ago when you generated your SSH key. Enter it and check the box to save it to your keychain so you don't have to enter it again.
And you're now logged in to your VPS as root. That's not how we want to log in though since root is a very privileged user that can wreak all sorts of havoc. The first thing we'll do is change the password of the root user. To do that, just enter:
~~~~console
passwd
~~~~
And type a new password.
Now let's create a new user:
~~~~console
adduser myusername
~~~~
Give your username a secure password and then enter this command:
~~~~console
visudo
~~~~
If you get an error saying there is no such app installed, you'll need to install sudo first (`apt-get install sudo` -- Debian does not ship with it). The visudo command opens the sudoers file. Use the arrow keys to move the cursor down to the line that reads:
~~~~vim
root ALL=(ALL:ALL) ALL
~~~~
Now add this line:
~~~~vim
myusername ALL=(ALL:ALL) ALL
~~~~
Where myusername is the username you created just a minute ago. Now we need to save the file. To do that hit Control-X, type a Y and then hit return.
Now, **WITHOUT LOGGING OUT OF YOUR CURRENT ROOT SESSION** open another terminal window and make sure you can login with your new user:
~~~~console
ssh myusername@12.34.56.78
~~~~
You'll be asked for the password that we created just a minute ago on the server (not the one for our SSH key). Enter that password and you should be logged in. To make sure we can get root access when we need it, try entering this command:
~~~~console
sudo apt-get update
~~~~
That should ask for your password again and then spit out a bunch of information, all of which you can ignore for now.
Okay, now you can log out of your root terminal window. To do that just hit Control-D.
## Finishing Up
What about actually accessing our VPS on the web? Where's WordPress? Just point your browser to the bare IP address you used to log in and you should get the first screen of the WordPress installer.
We now have a VPS deployed and we've taken some very basic steps to secure it. There's a lot more we can do to make things secure, but I've covered that in a separate article.
One last thing: the user we created does not have access to our SSH keys, so we need to add them. First make sure you're logged out of the server (type Control-D and you'll get a message telling you the connection has been closed). Now, on your local machine, paste this command:
~~~~console
cat ~/.ssh/id_rsa.pub | ssh myusername@12.34.56.78 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
~~~~
You'll have to put in your password one last time, but from now on you can log in with your SSH key.
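One quick hardening step worth doing as soon as your key-based login works: tell the SSH daemon to refuse root logins and password logins entirely. These two lines go in `/etc/ssh/sshd_config` on the server (restart SSH afterwards, e.g. `sudo service ssh restart` on Ubuntu 14.04):

~~~~ini
PermitRootLogin no
PasswordAuthentication no
~~~~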
## Next Steps
Congratulations, you made it past the first hurdle; you're well on your way to taking control of your server. Kick back, relax and write some blog posts.
Write down any problems you had with this tutorial and send me a link so I can check out your blog (I'll try to help figure out what went wrong too).
Because we used a pre-built image from Digital Ocean, though, we're really not much better off than if we'd gone with shared hosting. That's okay -- you have to start somewhere. Next up we'll do the same thing, but this time starting from a bare OS, which will serve as the basis for a custom-built version of Nginx that's highly optimized and way faster than any stock server.
# Setup SSH Keys for Secure Logins
date:2015-03-21 20:49:26
url:/src/ssh-keys-secure-logins
SSH keys are an easier, more secure way of logging into your virtual private server via SSH. Passwords are vulnerable to brute force attacks and just plain guessing. Key-based authentication is (currently) much more difficult to brute force and, when combined with a password on the key, provides a secure way of accessing your VPS instances from anywhere.
Key-based authentication uses two keys, the first is the "public" key that anyone is allowed to see. The second is the "private" key that only you ever see. So to log in to a VPS using keys we need to create a pair -- a private key and a public key that matches it -- and then securely upload the public key to our VPS instance. We'll further protect our private key by adding a password to it.
Open up your terminal application. On OS X that's Terminal, which is in the Applications >> Utilities folder. If you're using Linux I'll assume you know where the terminal app is, and Windows fans can follow along after installing [Cygwin](http://cygwin.com/).
Here's how to generate SSH keys in three simple steps.
## Setup SSH for More Secure Logins
### Step 1: Check for SSH Keys
Cut and paste this line into your terminal to check and see if you already have any SSH keys:
~~~~console
ls -al ~/.ssh
~~~~
If you see output like this, then skip to Step 3:
~~~~console
id_dsa.pub
id_ecdsa.pub
id_ed25519.pub
id_rsa.pub
~~~~
### Step 2: Generate an SSH Key
Here's the command to create a new SSH key. Just cut and paste, but be sure to put in your own email address in quotes:
~~~~console
ssh-keygen -t rsa -C "your_email@example.com"
~~~~
This will start a series of questions, just hit enter to accept the default choice for all of them, including the last one which asks where to save the file.
Then it will ask for a passphrase; pick a good, long one. And don't worry, you won't need to enter it every time -- `ssh-agent` will ask for your passphrase once and then store it for the duration of your session (i.e. until you restart your computer).
~~~~console
Enter passphrase (empty for no passphrase): [Type a passphrase]
Enter same passphrase again: [Type passphrase again]
~~~~
Once you've put in the passphrase, SSH will spit out a "fingerprint" that looks a bit like this:
~~~~console
# Your identification has been saved in /Users/you/.ssh/id_rsa.
# Your public key has been saved in /Users/you/.ssh/id_rsa.pub.
# The key fingerprint is:
# d3:50:dc:0f:f4:65:29:93:dd:53:c2:d6:85:51:e5:a2 scott@longhandpixels.net
~~~~
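Speaking of `ssh-agent`: if you ever need to start the agent and load your key by hand (on a headless Linux box, say), it looks roughly like this:

~~~~console
# start an agent for this shell session and hand it your key;
# you'll type the passphrase once and the agent remembers it
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
ssh-add -l    # list the keys the agent is currently holding
~~~~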
### Step 3: Copy Your Public Key to Your VPS
If you have ssh-copy-id installed on your system you can use this line to transfer your keys:
~~~~console
ssh-copy-id user@12.34.56.78
~~~~
If that doesn't work, you can paste in the keys using SSH:
~~~~console
cat ~/.ssh/id_rsa.pub | ssh user@12.34.56.78 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
~~~~
Whichever method you use, you should get a message like this:
~~~~console
The authenticity of host '12.34.56.78 (12.34.56.78)' can’t be established.
RSA key fingerprint is 01:3b:ca:85:d6:35:4d:5f:f0:a2:cd:c0:c4:48:86:12.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '12.34.56.78' (RSA) to the list of known hosts.
username@12.34.56.78's password:
~~~~
Now try logging into the machine with `ssh username@12.34.56.78` and check the contents of:
~~~~console
~/.ssh/authorized_keys
~~~~
to make sure no keys were added that you weren't expecting.
Now log in to your VPS with ssh like so:
~~~~console
ssh username@12.34.56.78
~~~~
And you won't be prompted for a password by the server. You will, however, be prompted for the passphrase you used to encrypt your SSH key. You'll need to enter that passphrase to unlock your SSH key, but ssh-agent should store that for you so you only need to re-enter it when you logout or restart your computer.
And there you have it, secure, key-based log-ins for your VPS.
### Bonus: SSH config
If you'd rather not type `ssh myuser@12.34.56.78` all the time you can add that host to your SSH config file and refer to it by hostname.
The SSH config file lives in `~/.ssh/config`. This command will either open that file if it exists or create it if it doesn't:
~~~~console
nano ~/.ssh/config
~~~~
Now we need to create a host entry. Here's what mine looks like:
~~~~ini
Host myname
Hostname 12.34.56.78
User myvpsusername
#Port 24857 #if you set a non-standard port uncomment this line
CheckHostIP yes
TCPKeepAlive yes
~~~~
Then to log in all I need to do is type `ssh myname`. This is even more helpful when using `scp`, since you can skip the whole username@server bit and just type `scp myname:/home/myuser/somefile.txt .` to copy a file.
# How My Two-Year-Old Twins Made Me a Better Programmer
date:2014-08-05 20:55:13
url:/src/better
TL;DR version: "information != knowledge; knowledge != wisdom; wisdom != experience;"
I have two-year-old twins. Every day I watch them figure out more about the world around them. Whether that's how to climb a little higher, how to put on a t-shirt, where to put something when you're done with it, or what to do with these crazy strings hanging off your shoes.
It can be incredibly frustrating to watch them struggle with something new and fail. They're your children so your instinct is to step in and help. But if you step in and do everything for them they never figure out how to do any of it on their own. I've learned to wait until they ask for help.
Watching them struggle and learn has made me realize that I don't let myself struggle enough and my skills are stagnating because of it. I'm happy to let Google step in and solve all my problems for me. I get work done, true, but at the expense of learning new things.
I've started to think of this as the Stack Overflow problem, not because I actually blame Stack Overflow -- it's a great resource, the problem is mine -- but because it's emblematic of a problem. I use Stack Overflow, and Google more generally, as a crutch, as a way to quickly solve problems with some bit of information rather than digging deeper and turning information into actual knowledge.
On one hand quick solutions can be a great thing. Searching the web lets me solve my problem and move on to the next (potentially more interesting) one.
On the other hand, information (the solution to the problem at hand) is not as useful as knowledge. Snippets of code and other tiny bits of information are not going to land you a job, nor will they help you when you want to write a tutorial or a book about something. This sort of "let's just solve the problem" approach begins and ends in the task at hand. The information you get out of that is useful for the task you're doing, but knowledge is much larger than that. And I don't know about you, but I want to be more than something that's useful for finishing tasks.
Information is useless to me if it isn't synthesized into personal knowledge somehow. And, for me at least, that information only becomes knowledge when I stop, back up and try to understand the *why* rather than just the *how*. Good answers on Stack Overflow explain the why, but more often than not that doesn't happen.
For example, today I wanted a simple way to get Python's `os.listdir` to ignore directories. I knew that I could loop through all the returned elements and test whether they were directories, but I thought perhaps there was a more elegant way of doing that (short answer: not really). The details of my problem aren't the point though. The point is that the question had barely formed in my mind and I noticed my fingers already headed for command-tab, ready to jump to the browser and cut and paste some solution from Stack Overflow.
This time though I stopped myself before I pulled up my browser. I thought about my daughters in the next room. I knew that I would likely have the answer to my question in 10 seconds, and also knew I would forget it and move on in 20. I was about to let easy answers step in and solve my problem for me. I was about to avoid learning something new. Sometimes that's fine, but do it too much and I worry I might end up more of a successful cut-and-paster than a programmer.
Sometimes it's good to take a few minutes to read the actual docs, pull up the man pages, type `:help` or whatever, and learn. It's going to take a few extra minutes. You might even take an unexpected detour from the task at hand. That might mean you learn something you didn't expect to learn. Yes, it might mean you lose a few minutes of "work" to learning. It might even mean that you fail. Sometimes the docs don't help. Then, sure, Google. The important part of learning is the struggle, applying your energy to the problem rather than just finding the solution.
Sometimes you need to struggle with your shoelaces for hours, otherwise you'll never figure out how to tie them.
In my specific case I decided to permanently reduce my dependency on Stack Overflow and Google. Instead of flipping to the browser I fired up the Python interpreter and typed `help(os.listdir)`. Did you know the Python interpreter has a built-in help function called, appropriately enough, `help()`? The `help()` function takes either an object or a keyword (the latter needs to be in quotes like "keyword"). If you're having trouble I wrote a quick guide to [making Python's built-in `help()` function work][1].
Now, I could have learned what I wanted to know in 2 seconds using Google. Instead it took me 20 minutes[^1] to figure out. But now I understand how to do what I wanted to do and, more importantly, I understand *why* it will work. I have a new piece of knowledge, and next time I encounter the same situation I can draw on that knowledge rather than turning to Google again. It's not exactly wisdom or experience yet, but it's getting closer. And once you're done solving all the little problems of day-to-day coding, that's really the point -- improving your skill, learning and getting better at what you do every day.
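For what it's worth, the same documentation is also reachable straight from the shell via `pydoc`, and once you understand why it works, the filtering itself is a one-liner (shown here with python3):

~~~~console
# read the docs without even opening an interpreter
python3 -m pydoc os.listdir

# list only the files (not directories) in the current directory
python3 -c "import os; print(sorted(e for e in os.listdir('.') if not os.path.isdir(e)))"
~~~~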
[^1]: Most of that time was spent figuring out where OS X stores Python docs, which [I won't have to do again][1]. Note to self, I gotta switch back to Linux.
[1]: /src/python-help
# Get Smarter with Python's Built-In Help
date:2014-08-01 20:56:57
url:/src/python-help
One of my favorite things about Python is the `help()` function. Fire up the standard Python interpreter, import `help` from `pydoc`, and you can search Python's official documentation from within the interpreter. Reading the f'ing manual from the interpreter. As it should be[^1].
The `help()` function takes either an object or a keyword. The former must be imported first while the latter needs to be a string like "keyword". Whichever you use Python will pull up the standard Python docs for that object or keyword. Type `help()` without anything and you'll start an interactive help session.
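A couple of quick examples, run non-interactively here so you can see the shape of it (recent Pythons bundle the keyword topics; older ones need the docs installed as described below):

~~~~console
# help on an object -- import it first
python3 -c "import os; help(os.getcwd)"

# help on a keyword, passed as a string
python3 -c "help('for')"

# or list every keyword help() knows about
python3 -c "help('keywords')"
~~~~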
The `help()` function is awesome, but there's one little catch.
In order for this to work properly you need to make sure you have the `PYTHONDOCS` environment variable set on your system. On a sane operating system this will likely be `/usr/share/doc/pythonX.X/html`. In mostly sane OSes like Debian (and probably Ubuntu/Mint, et al) you might have to explicitly install the docs with `apt-get install python-doc`, which will put them in `/usr/share/doc/pythonX.X-doc/html/`.
If you're using OS X's built-in Python, the path to Python's docs would be:
~~~~console
/System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/Resources/English.lproj/Documentation/
~~~~
Note the 2.6 in that path. As far as I can tell OS X Mavericks does not ship with docs for Python 2.7, which is weird and annoying (like most things in Mavericks). If it's there and you've found it, please enlighten me in the comments below.
Once you've found the documentation you can add that variable to your bashrc/zshrc like so:
~~~~console
export PYTHONDOCS=/System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/Resources/English.lproj/Documentation/
~~~~
Now fire up IPython, type `help()` and start learning rather than always hobbling along with [Google, Stack Overflow and other crutches](/src/better).
Also, PSA: if you do anything with Python, you really need to check out [IPython](http://ipython.org/). It will save you loads of time, has more awesome features than a Veg-O-Matic, and [notebooks](http://ipython.org/notebook.html) -- don't even get me started on notebooks. And in IPython you don't even have to import help; it's already there, ready to go from the minute it starts.
[^1]: The Python docs are pretty good too. Not Vim-level good, but close.
# Protect Your Online Privacy with Ghostery
date:2014-05-29 21:00:40
url:/src/protect-your-online-privacy-ghostery
[**Update 12-11-2015** While everything in this tutorial still works, I should note that I don't actually use Ghostery anymore. Instead I've found [uBlock Origin](https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/) for Chromium and Firefox to be far more robust, customizable and powerful. It's also [open source](https://github.com/gorhill/uBlock). For most people I would continue to suggest Ghostery, but for the particularly tech savvy, check out uBlock.]
There's an invisible web that lies just below the web you see every day. That invisible web tracks the sites you visit, the pages you read, the things you like and the things you favorite, and it collates all that data into a portrait of things you are likely to purchase. And all this happens without anyone asking your consent.
Not much has changed since [I wrote about online tracking years ago on Webmonkey][1]. Back then visiting five websites meant "somewhere between 21 and 47 other websites learn about your visit to those five". That number just continues to grow.
If that doesn't bother you, and you could not care less who is tracking you, then this is not the tutorial for you.
However, if the extent of online tracking bothers you and you want to do something about it, there is some good news. In fact it's not that hard to stop all that tracking.
To protect your privacy online you'll just need to add a tool like [Ghostery][2] or [Do Not Track Plus][3] to your web browser. Both will work, but I happen to use Ghostery so that's what I'm going to show you how to set up.
## Install and Setup Ghostery in Firefox, Chrome/Chromium, Opera and Safari
The first step is to install the Ghostery extension for your web browser. To do that, just head over to the [Ghostery downloads page][4] and click the install button that's highlighted for your browser.
Some browsers will ask you if you want to allow the add-on to be installed. In Firefox just click "Allow" and then click "Install Now" when the installation window opens up.
[![Installing add-ons in Firefox][5]](/media/src/images/2014/gh-firefox-install01.png "View Image 1")
: In Firefox click Allow...
[![Installing add-ons in Firefox 2][6]](/media/src/images/2014/gh-firefox-install02.png "View Image 2")
: ...and then Install Now
If you're using Chrome just click the Add button.
[![Installing extensions in Chrome/Chromium][7]](/media/src/images/2014/gh-chrome-install01.jpg "View Image 3")
: Installing extensions in Chrome/Chromium
Ghostery is now installed, but out of the box Ghostery doesn't actually block anything. That's why, once you have it installed, Ghostery should have opened a new window or tab that looks like this:
[![The Ghostery install wizard][8]](/media/src/images/2014/gh-first-screen.jpg "View Image 4")
: The Ghostery install wizard
This is the series of screens that walk you through the process of setting up Ghostery to block sites that would like to track you.
Before I dive into setting up Ghostery, it's important to understand that some of what Ghostery can block will limit what you see on the web. For example, Disqus is a very popular third-party comment system. It happens to track you as well. If you block that tracking though you won't see comments on a lot of sites.
There are two ways around this. One is to decide that you trust Disqus and allow it to run on any site. The second is to only allow Disqus on sites where you want to read the comments. I'll show you how to set up both options.
## Configuring Ghostery
First we have to configure Ghostery. Click the right arrow on that first screen to get started. That will lead you to this screen:
[![The Ghostery install wizard, page 2][9]](/media/src/images/2014/gh-second-screen.jpg "View Image 5")
: The Ghostery install wizard, page 2
If you want to help Ghostery get better you can check this box. Then click the right arrow again and you'll see a page asking if you want to enable the Alert Bubble.
[![The Ghostery install wizard, page 3][10]](/media/src/images/2014/gh-third-screen.jpg "View Image 6")
: The Ghostery install wizard, page 3
This is Ghostery's little alert box that comes up when you visit a new page. It will show you all the trackers that are blocked. Think of this as a little window into the invisible web. I enable this, though I change the default settings a little bit. We'll get to that in just a second.
The next screen is the core of Ghostery. This is where we decide which trackers to block and which to allow.
[![The Ghostery install wizard -- blocking trackers][11]](/media/src/images/2014/gh-main-01.jpg "View Image 7")
: The Ghostery install wizard -- blocking trackers
Out of the box Ghostery blocks nothing. Let's change that. I start by blocking everything:
[![Ghostery set to block all known trackers][12]](/media/src/images/2014/gh-main-02.jpg "View Image 8")
: Ghostery set to block all known trackers
Ghostery will also ask if you want to block new trackers as it learns about them. I go with yes.
Now chances are the setup we currently have is going to limit your ability to use some websites. To stick with the earlier example, this will mean Disqus comments are never loaded. The easiest way to fix this is to search for Disqus and enable it:
[![Ghostery set to block everything but Disqus][13]](/media/src/images/2014/gh-main-03.jpg "View Image 9")
: Ghostery set to block everything but Disqus
Note that along the top of the tracker list there are some buttons. These make it easy to enable, for example, not just Disqus but every commenting system. If you'd like to do that, click the "Commenting System" button and uncheck all the options:
[![Filtering Ghostery by type of tracker][14]](/media/src/images/2014/gh-main-04.jpg "View Image 10")
: Filtering Ghostery by type of tracker
Another category you might want to allow is music players, like those from SoundCloud. To learn more about a particular service, just click the link next to the item and Ghostery will show you what it knows, including any industry affiliations.
[![Ghostery showing details on Disqus][15]](/media/src/images/2014/gh-main-05.jpg "View Image 11")
: Ghostery showing details on Disqus
Now you may be thinking, wait, how do I know which companies I want to allow and which I don't? Well, you don't really need to know all of them because you can enable them as you go too.
Let's save what we have and test Ghostery out on a site. Click the right arrow one last time and check to make sure that the Ghostery icon is in your toolbar. If it isn't, click the "Add Button" button.
## Ghostery in Action
Okay, Ghostery is installed and blocking almost everything it knows about. But that might limit what we can do. For example, let's go visit arstechnica.com. You can see down here at the bottom of the screen there's a list of everything that's blocked.
[![Ghostery showing all the trackers no longer tracking you][16]](/media/src/images/2014/gh-example-01.jpg "View Image 12")
: Ghostery showing all the trackers no longer tracking you
You can see in that list that right now the Twitter button is blocked. So if you scroll down to the bottom of the article and look at the author bio (which should have a Twitter button), you'll see this little Ghostery icon:
[![Ghostery replaces elements it has blocked with the Ghostery icon.][17]](/media/src/images/2014/gh-example-02.jpg "View Image 13")
: Ghostery replaces elements it has blocked with the Ghostery icon.
That's how you will know that Ghostery has blocked something. If you were to click on that element Ghostery would load the blocked script and you'd see a Twitter button. But what if you always want to see the Twitter button? To do that we'll come up to the toolbar and click on the Ghostery icon which will reveal the blocking menu:
[![The Ghostery panel.][18]](/media/src/images/2014/gh-example-03.jpg "View Image 14")
: The Ghostery panel.
Just slide the Twitter button to the left and Twitter's button (and accompanying tracking beacons) will be allowed after you reload the page. Whenever you return to Ars, the Twitter button will load. As I mentioned before, you can do this on a per-site basis if there are just a few sites you want to allow. To enable the Twitter button on every site, click the little check box button to the right of the slider. Realize, though, that enabling it globally means Twitter can track you everywhere you go.
[![Enabling trackers from the Ghostery panel.][19]](/media/src/images/2014/gh-example-04.jpg "view image 15")
: Enabling trackers from the Ghostery panel.
This panel is essentially doing the same thing as the setup page we used earlier. In fact, we can get back to the settings page by clicking the gear icon and then the "Options" button:
[![Getting back to the Ghostery setting page.][20]](/media/src/images/2014/gh-example-05.jpg "view image 16")
: Getting back to the Ghostery setting page.
Now, you may have noticed that the little purple panel showing you what was blocked hung around for quite a while, fifteen seconds to be exact, which is a bit long in my opinion. We can change that by clicking the Advanced tab on the Ghostery options page:
[![The Ghostery Advanced settings tab.][21]](/media/src/images/2014/gh-example-06.jpg "view image 17")
: The Ghostery Advanced settings tab.
The first option in the list is whether or not to show the alert bubble at all, followed by the length of time it's shown. I like to set this to the minimum, 3 seconds. Other than this I leave the advanced settings at their defaults.
Scroll to the bottom of the settings page, click save, and you're done setting up Ghostery.
## Conclusion
Now you can browse the web with a much greater degree of privacy, only allowing those companies *you* approve of to know what you're up to. And remember, any time a site isn't working the way you think it should, you can temporarily disable Ghostery by clicking the icon in the toolbar and hitting the pause blocking button down at the bottom of the Ghostery panel:
[![Temporarily disable Ghostery.][22]](/media/src/images/2014/gh-example-07.jpg "view image 18")
: Temporarily disable Ghostery.
Also note that there is an iOS version of Ghostery, though, due to Apple's restrictions on iOS, it's an entirely separate web browser, not a plugin for Mobile Safari. If you use Firefox for Android there is a plugin available.
## Further reading
* [How To Install Ghostery (Internet Explorer)][23] -- Ghostery's guide to installing it in Internet Explorer.
* [Secure Your Browser: Add-Ons to Stop Web Tracking][24] -- A piece I wrote for Webmonkey a few years ago that gives some more background on tracking and some other options you can use besides Ghostery.
* [Tracking our online trackers][25] -- TED talk by Gary Kovacs, CEO of Mozilla Corp, covering online behavior tracking more generally.
* This sort of tracking is [coming to the real world too][26], so there's that to look forward to.
[1]: http://www.webmonkey.com/2012/02/secure-your-browser-add-ons-to-stop-web-tracking/
[2]: https://www.ghostery.com/
[3]: https://www.abine.com/index.html
[4]: https://www.ghostery.com/en/download
[5]: /media/src/images/2014/gh-firefox-install01-tn.jpg
[6]: /media/src/images/2014/gh-firefox-install02-tn.jpg
[7]: /media/src/images/2014/gh-chrome-install01-tn.jpg
[8]: /media/src/images/2014/gh-first-screen-tn.jpg
[9]: /media/src/images/2014/gh-second-screen-tn.jpg
[10]: /media/src/images/2014/gh-third-screen-tn.jpg
[11]: /media/src/images/2014/gh-main-01-tn.jpg
[12]: /media/src/images/2014/gh-main-02-tn.jpg
[13]: /media/src/images/2014/gh-main-03-tn.jpg
[14]: /media/src/images/2014/gh-main-04-tn.jpg
[15]: /media/src/images/2014/gh-main-05-tn.jpg
[16]: /media/src/images/2014/gh-example-01-tn.jpg
[17]: /media/src/images/2014/gh-example-02-tn.jpg
[18]: /media/src/images/2014/gh-example-03-tn.jpg
[19]: /media/src/images/2014/gh-example-04-tn.jpg
[20]: /media/src/images/2014/gh-example-05-tn.jpg
[21]: /media/src/images/2014/gh-example-06-tn.jpg
[22]: /media/src/images/2014/gh-example-07-tn.jpg
[23]: https://www.youtube.com/watch?v=NaI17dSfPRg
[24]: http://www.webmonkey.com/2012/02/secure-your-browser-add-ons-to-stop-web-tracking/
[25]: http://www.ted.com/talks/gary_kovacs_tracking_the_trackers
[26]: http://business.financialpost.com/2014/02/01/its-creepy-location-based-marketing-is-following-you-whether-you-like-it-or-not/?__lsa=e48c-7542
# Scaling Responsive Images in CSS
date:2014-02-27 20:43:23
url:/src/scaling-responsive-images-css
It's pretty easy to handle images responsively with CSS. Just use `@media` queries to swap images at various breakpoints in your design.
It's slightly trickier to get those images to be fluid and scale in between breakpoints. Or rather, it's not hard to get them to scale horizontally, but what about vertical scaling?
Imagine this scenario. You have a div with a paragraph inside it and you want to add a background using the `:before` pseudo element -- just a decorative image behind some text. You can set the max-width to 100% to get the image to fluidly scale in width, but what about scaling the height?
That's a bit trickier, or at least it tripped me up for a minute the other day. I started with this:
~~~~css
.wrapper--image:before {
content: "";
display: block;
max-width: 100%;
height: auto;
background-color: #f3f;
background-image: url('bg.jpg');
background-repeat: no-repeat;
background-size: 100%;
}
~~~~
Do that and you'll see... nothing. Okay, I expected that. Setting height to auto doesn't work because the pseudo element has no real content, which means its default height is zero. Okay, how do I fix that?
You might try setting the height to the height of your background image. That works whenever the div is the size of, or larger than, the image. But the minute your image scales down at all you'll have blank space at the bottom of your div, because the div has a fixed height with an image inside that's shorter than that fixed height. Try re-sizing [this demo](/demos/css-bg-image-scaling/no-vertical-scaling.html) to see what I'm talking about, make the window less than 800px and you'll see the box no longer scales with the image.
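For reference, that intermediate fixed-height attempt looks something like this (a sketch assuming the same 800px-wide, 443px-tall image as the demo):

~~~~css
/* Second attempt: hard-code the image's natural height.
   Looks right at full width, but once the element scales
   below 800px wide the image shrinks while the box stays
   443px tall, leaving blank space at the bottom. */
.wrapper--image:before {
    content: "";
    display: block;
    max-width: 100%;
    height: 443px; /* the image's natural height */
    background-color: #f3f;
    background-image: url('bg.jpg');
    background-repeat: no-repeat;
    background-size: 100%;
}
~~~~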
To get around this we can borrow a trick from Thierry Koblentz's technique for [creating intrinsic ratios for video](http://alistapart.com/article/creating-intrinsic-ratios-for-video/) to create a box that maintains the ratio of our background image.
We'll leave everything the way it is, but add one line:
~~~~css
.wrapper--image:before {
content: "";
display: block;
max-width: 100%;
background-color: #f3f;
background-image: url('bg.jpg');
background-repeat: no-repeat;
background-size: 100%;
padding-top: 55.375%;
}
~~~~
We've added padding to the top of the element, which forces the element to have a height (at least visually). But where did I get that number? That's the ratio of the dimensions of the background image: I simply divided the height of the image by the width of the image. In this case my image was 443px tall and 800px wide, and 443 ÷ 800 gives us 55.375%.
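If you'd rather not do the division by hand, a quick one-liner computes the padding percentage for any image (swap in your own height and width):

~~~~shell
# padding-top percentage = image height / image width * 100
awk 'BEGIN { printf "%.3f%%\n", 443 / 800 * 100 }'
# prints 55.375%
~~~~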
Here's a [working demo](/demos/css-bg-image-scaling/vertical-scaling.html).
And there you have it, properly scaling CSS background images on `:before` or other "empty" elements, pseudo or otherwise.
The only real problem with this technique is that requires you to know the dimensions of your image ahead of time. That won't be possible in every scenario, but if it is, this will work.
# Install Nginx on Debian/Ubuntu
date:2014-02-10 21:03:23
url:/src/install-nginx-debian
I recently helped a friend set up his first Nginx server and in the process realized I didn't have a good working reference for how I set up Nginx.
So, for myself, my friend and anyone else looking to get started with Nginx, here's my somewhat opinionated guide to installing and configuring Nginx to serve static files. Which is to say, this is how I install and set up Nginx to serve my own and my clients' static files whether those files are simply stylesheets, images and JavaScript or full static sites like this one. What follows is what I believe are the best practices of Nginx[^1]; if you know better, please correct me in the comments.
[This post was last updated 30 October 2015]
## Nginx Beats Apache for Static Content[^2]
Apache is overkill. Unlike Apache, which is a jack-of-all-trades server, Nginx was really designed to do just a few things well, one of which is to offer a simple, fast, lightweight server for static files. And Nginx is really, really good at serving static files. In fact, in my experience Nginx with PageSpeed, gzip, far future expires headers and a couple other extras I'll mention is faster than serving static files from Amazon S3[^3] (potentially even faster in the future if Verizon and its ilk [really do](http://netneutralitytest.com/) start [throttling cloud-based services](http://davesblog.com/blog/2014/02/05/verizon-using-recent-net-neutrality-victory-to-wage-war-against-netflix/)).
## Nginx is Different from Apache
In its quest to be lightweight and fast, Nginx takes a different approach to modules than you're probably familiar with in Apache. In Apache you can dynamically load various features using modules. You just add something like `LoadModule alias_module modules/mod_alias.so` to your Apache config files and just like that Apache loads the alias module.
Unlike Apache, Nginx can not dynamically load modules. Nginx has available what it has available when you install it.
That means if you really want to customize and tweak it, it's best to install Nginx from source. You don't *have* to install it from source. But if you really want a screaming fast server, I suggest compiling Nginx yourself, enabling and disabling exactly the modules you need. Installing Nginx from source allows you to add some third-party tools, most notably Google's PageSpeed module, which has some fantastic tools for speeding up your site.
Luckily, installing Nginx from source isn't too difficult. Even if you've never compiled any software from source, you can install Nginx. The remainder of this post will show you exactly how.
## My Ideal Nginx Setup for Static Sites
Before we start installing, let's go over the things we'll be using to build a fast, lightweight server with Nginx.
* [Nginx](http://nginx.org).
* [SPDY](http://www.chromium.org/spdy/spdy-protocol) -- Nginx offers "experimental support for SPDY", but it's not enabled by default. We're going to enable it when we install Nginx. In my testing SPDY support has worked without a hitch, experimental or otherwise.
* [Google Page Speed](https://developers.google.com/speed/pagespeed/module) -- Part of Google's effort to make the web faster, the Page Speed Nginx module "automatically applies web performance best practices to pages and associated assets".
* [Headers More](https://github.com/agentzh/headers-more-nginx-module/) -- This isn't really necessary from a speed standpoint, but I often like to set custom headers and hide some headers (like which version of Nginx your server is running). Headers More makes that very easy.
* [Naxsi](https://github.com/nbs-system/naxsi) -- Naxsi is a "Web Application Firewall module for Nginx". It's not really all that important for a server limited to static files, but it adds an extra layer of security should you decide to use Nginx as a proxy server down the road.
So we're going to install Nginx with SPDY support and three third-party modules.
Okay, here's the step-by-step process to installing Nginx on a Debian 8 (or Ubuntu) server. If you're looking for a good, cheap VPS host I've been happy with [Vultr.com](http://www.vultr.com/?ref=6825229) (that's an affiliate link that will help support luxagraf; if you prefer, here's a non-affiliate link: [link](http://www.vultr.com/))
The first step is to make sure you're installing the latest release of Nginx. To do that, check the [Nginx download page](http://nginx.org/en/download.html) for the latest version (at the time of writing that's 1.7.7).
Okay, SSH into your server and let's get started.
While these instructions will work on just about any server, the one thing that will be different is how you install the various prerequisites needed to compile Nginx.
On a Debian/Ubuntu server you'd do this:
~~~~console
sudo apt-get -y install build-essential zlib1g-dev libpcre3 libpcre3-dev libbz2-dev libssl-dev tar unzip
~~~~
If you're using RHEL/Cent/Fedora you'll want these packages:
~~~~console
sudo yum install gcc-c++ pcre-dev pcre-devel zlib-devel make
~~~~
After you have the prerequisites installed it's time to grab the latest version of Google's Pagespeed module. Google's [Nginx PageSpeed installation instructions](https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source) are pretty good, so I'll reproduce them here.
First grab the latest version of PageSpeed, which is currently 1.9.32.2, but check the sources since it updates frequently and change this first variable to match the latest version.
~~~~console
NPS_VERSION=1.9.32.2
wget https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}-beta.zip
unzip release-${NPS_VERSION}-beta.zip
~~~~
Now, before we compile PageSpeed we need to grab `psol`, which PageSpeed needs to function properly. So, let's `cd` into the `ngx_pagespeed-release-${NPS_VERSION}-beta` folder and grab `psol`:
~~~~console
cd ngx_pagespeed-release-${NPS_VERSION}-beta/
wget https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz
tar -xzvf ${NPS_VERSION}.tar.gz
cd ../
~~~~
Alright, so the `ngx_pagespeed` module is all set up and ready to install. All we have to do at this point is tell Nginx where to find it.
Now let's grab the Headers More and Naxsi modules as well. Again, check the [Headers More](https://github.com/agentzh/headers-more-nginx-module/) and [Naxsi](https://github.com/nbs-system/naxsi) pages to see what the latest stable version is and adjust the version numbers in the following accordingly.
~~~~console
HM_VERSION=v0.25
wget https://github.com/agentzh/headers-more-nginx-module/archive/${HM_VERSION}.tar.gz
tar -xvzf ${HM_VERSION}.tar.gz
NAX_VERSION=0.53-2
wget https://github.com/nbs-system/naxsi/archive/${NAX_VERSION}.tar.gz
tar -xvzf ${NAX_VERSION}.tar.gz
~~~~
Now that we have all three third-party modules ready to go, the last thing to grab is a copy of Nginx itself:
~~~~console
NGINX_VERSION=1.7.7
wget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz
tar -xvzf nginx-${NGINX_VERSION}.tar.gz
~~~~
Then we `cd` into the Nginx folder and compile. So, first:
~~~~console
cd nginx-${NGINX_VERSION}/
~~~~
So now we're inside the Nginx folder; let's configure our installation. We'll add in all our extras and turn off a few things we don't need. Or at least they're things I don't need; if you need the mail modules, delete those lines. If you don't need SSL, you might want to skip that as well. Here's the configure command I use (note: all paths are for Debian servers, you'll have to adjust them accordingly for RHEL/CentOS/Fedora servers):
~~~~console
./configure \
--add-module=$HOME/naxsi-${NAX_VERSION}/naxsi_src \
--prefix=/usr/share/nginx \
--sbin-path=/usr/sbin/nginx \
--conf-path=/etc/nginx/nginx.conf \
--pid-path=/var/run/nginx.pid \
--lock-path=/var/lock/nginx.lock \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/access.log \
--user=www-data \
--group=www-data \
--without-mail_pop3_module \
--without-mail_imap_module \
--without-mail_smtp_module \
--with-http_stub_status_module \
--with-http_ssl_module \
--with-http_spdy_module \
--with-http_gzip_static_module \
--add-module=$HOME/ngx_pagespeed-release-${NPS_VERSION}-beta \
--add-module=$HOME/headers-more-nginx-module-${HM_VERSION}
~~~~
There are a few things worth noting here. First off make sure that Naxsi is first. Here's what the [Naxsi wiki page](https://github.com/nbs-system/naxsi/wiki/installation) has to say on that score: "Nginx will decide the order of modules according the order of the module's directive in Nginx's ./configure. So, no matter what (except if you really know what you are doing) put Naxsi first in your ./configure. If you don't do so, you might run into various problems, from random/unpredictable behaviors to non-effective WAF." The last thing you want is to think you have a web application firewall running when in fact you don't, so stick with Naxsi first.
There are a couple other things you might want to add to this configuration. If you're going to be serving large files, larger than your average 1.5MB HTML page, consider adding `--with-file-aio`, which is apparently faster than the stock `sendfile` option. See [here](https://calomel.org/nginx.html) for more details. There are quite a few other modules available; a [full list of the default modules](http://wiki.nginx.org/Modules) can be found on the Nginx site. Read through that and if there's another module you need, you can add it to the configure command.
Okay, we've told Nginx what to do, now let's actually install it:
~~~~console
make
sudo make install
~~~~
Once `make install` finishes doing its thing you'll have Nginx all set up.
Congrats! You made it.
The next step is to add Nginx to the list of things your server starts up automatically whenever it reboots. Since we installed Nginx from scratch we need to tell the underlying system what we did.
## Make it Autostart
Since we compiled from source rather than using Debian/Ubuntu's package management tools, the underlying system isn't aware of Nginx's existence. That means it won't automatically start it up when the system boots. In order to ensure that Nginx does start on boot we'll have to manually add Nginx to our server's list of startup services. That way, should we need to reboot, Nginx will automatically restart when the server does.
**Note: I have embraced systemd so this is out of date, see below for systemd version**
To do that I use the [Debian init script](https://github.com/MovLib/www/blob/master/bin/init-nginx.sh) listed in the [Nginx InitScripts page](http://wiki.nginx.org/InitScripts):
If that works for you, grab the raw version:
~~~~console
wget https://raw.githubusercontent.com/MovLib/www/develop/etc/init.d/nginx.sh
# I had to edit the DAEMON var to point to nginx
# change line 63 in the file to:
DAEMON=/usr/sbin/nginx
# then move it to /etc/init.d/nginx
sudo mv nginx.sh /etc/init.d/nginx
# make it executable:
sudo chmod +x /etc/init.d/nginx
# then just:
sudo service nginx start #also restart, reload, stop etc
~~~~
## Updated Systemd Scripts
Yeah I went and did it. I kind of like systemd actually. Anyway, here's what I use to stop and start my custom compiled nginx with systemd...
First we need to create and edit an nginx.service file.
~~~~console
sudo vim /lib/systemd/system/nginx.service #this is for debian
~~~~
Then I use this script, which I believe I got from the Nginx wiki.
~~~~ini
# Stop dance for nginx
# =======================
#
# ExecStop sends SIGQUIT (graceful stop) to the nginx process.
# If, after 5s (--retry QUIT/5) nginx is still running, systemd takes control
# and sends SIGTERM (fast shutdown) to the main process.
# After another 5s (TimeoutStopSec=5), and if nginx is alive, systemd sends
# SIGKILL to all the remaining processes in the process group (KillMode=mixed).
#
# nginx signals reference doc:
# http://nginx.org/en/docs/control.html
#
[Unit]
Description=A high performance web server and a reverse proxy server
After=network.target
[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid
TimeoutStopSec=5
KillMode=mixed
[Install]
WantedBy=multi-user.target
~~~~
Save that file, exit your text editor. Now we just need to tell systemd about our script and then we can stop and start via our service file. To do that...
~~~~console
sudo systemctl enable nginx.service
sudo systemctl start nginx.service
sudo systemctl status nginx.service
~~~~
I suggest taking the last bit and turning it into an alias in your `bashrc` or `zshrc` file so that you can quickly restart/reload the server when you need it. Here's what I use:
~~~~console
alias xrestart="sudo systemctl restart nginx.service"
~~~~
If you're using systemd, congrats, you're done. If you're looking for a way to get autostart to work on older or non-systemd servers, read on...
**End systemd update**
Okay, the initialization script is all set up; now let's make Nginx start on reboot. In theory this should do it:
~~~~console
sudo update-rc.d -f nginx defaults
~~~~
But that didn't work for me with my Digital Ocean Debian 7 x64 droplet (which complained that "`insserv rejected the script header`"). I didn't really feel like troubleshooting that at the time; I was feeling lazy so I decided to use chkconfig instead. To do that I just installed chkconfig and added Nginx:
~~~~console
sudo apt-get install chkconfig
sudo chkconfig --add nginx
sudo chkconfig nginx on
~~~~
So there we have it, everything you need to get Nginx installed with SPDY, PageSpeed, Headers More and Naxsi. A blazing fast server for static files.
After that it's just a matter of configuring Nginx, which is entirely dependent on how you're using it. For static setups like this my configuration is pretty minimal.
Before we get to that, though, here's the first thing I do: edit `/etc/nginx/nginx.conf` down to something pretty simple. This is the root config, so I keep it limited to an `http` block that turns on a few things I want globally, plus an include statement that loads site-specific config files. Something a bit like this:
~~~~nginx
user www-data;
events {
worker_connections 1024;
}
http {
include mime.types;
include /etc/nginx/naxsi_core.rules;
default_type application/octet-stream;
types_hash_bucket_size 64;
server_names_hash_bucket_size 128;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
more_set_headers "Server: My Custom Server";
keepalive_timeout 65;
gzip on;
pagespeed on;
pagespeed FileCachePath /var/ngx_pagespeed_cache;
include /etc/nginx/sites-enabled/*.conf;
}
~~~~
A few things to note. I've included the core rules file from the Naxsi source. To make sure that file exists, we need to copy it over to `/etc/nginx/`.
~~~~console
sudo cp naxsi-0.53-2/naxsi_config/naxsi_core.rules /etc/nginx/
~~~~
Now let's restart the server so it picks up these changes:
~~~~console
sudo service nginx restart
~~~~
Or, if you took my suggestion of creating an alias, you can type: `xrestart` and Nginx will restart itself.
With this configuration we have a good basic setup and any `.conf` files you add to the folder `/etc/nginx/sites-enabled/` will be included automatically. So if you want to create a conf file for mydomain.com, you'd create the file `/etc/nginx/sites-enabled/mydomain.conf` and put the configuration for that domain in that file.
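To give you a head start on that follow-up, a bare-bones conf file for such a domain might look something like this (a minimal sketch; `mydomain.com` and the root path are placeholders, adjust to suit):

~~~~nginx
server {
    listen 80;
    server_name mydomain.com www.mydomain.com;
    root /var/www/mydomain.com;
    index index.html;

    # Serve static files directly; return 404 if nothing matches
    location / {
        try_files $uri $uri/ =404;
    }
}
~~~~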
I'm going to post a follow up on how I configure Nginx very soon. In the meantime here's a pretty comprehensive [guide to configuring Nginx](https://calomel.org/nginx.html) in a variety of scenarios. And remember, if you want some more helpful tips and tricks for web developers, sign up for the mailing list below.
[^1]: If you're more experienced with Nginx and I'm totally bass-akward about something in this guide, please let me know.
[^2]: In my experience anyway. Probably Apache can be tuned to get pretty close to Nginx's performance with static files, but it's going to take quite a bit of work. One is not necessarily better, but there are better tools for different jobs.
[^3]: That said, obviously a CDN service like Cloudfront will, in most cases, be much faster than Nginx or any other server.
# Tools for Writing an Ebook
date:2014-01-24 20:05:17
url:/src/ebook-writing-tools
It never really occurred to me to research which tools I would need to create an ebook because I knew I was going to use Markdown, which could then be translated into pretty much any format using [Pandoc](http://johnmacfarlane.net/pandoc/). But since a few people have [asked](https://twitter.com/situjapan/status/549935669129142272) for more details on *exactly* which tools I used, here's a quick rundown:
1. I write books as single text files lightly marked up with Pandoc-flavored Markdown.
2. Then I run Pandoc, passing in custom templates, CSS files, fonts I bought and so on. Pretty much as [detailed here in the Pandoc documentation](http://johnmacfarlane.net/pandoc/epub.html). I run these commands often enough that I write a shell script for each project so I don't have to type in all the flags and file paths each time.
3. Pandoc outputs an ePub file and an HTML file. The latter is then used with [Weasyprint](http://weasyprint.org/) to generate the PDF version of the ebook. Then I use the ePub file and the [Kindle command line tool](http://www.amazon.com/gp/feature.html?ie=UTF8&docId=1000765211) to create a .mobi file.
4. All of the formatting and design is just CSS, which I am already comfortable working with (though ePub supports only a subset of CSS, and reader support is somewhat akin to building websites in 1998 -- who knows if it's gonna work? The PDF is what I consider the reference copy.)
In the end I get the book in TXT, HTML, PDF, ePub and .mobi formats, which covers pretty much every digital reader I'm aware of. Out of those I actually include the PDF, ePub and Mobi files when you [buy the book](/src/books/).
## FAQs and Notes.
**Why not use InDesign or iBook Author or \_\_\_\_\_\_\_?**
I wanted to use open source software, which offers me more control over the process than I could get with monolithic tools like visual layout editors.
The above tools are, for me anyway, the simplest possible workflow which outputs the highest quality product.
**What about Prince?**
What does The Purple One have to do with writing books? Oh, that [Prince](http://www.princexml.com/). Actually I really like Prince and it can do a few things that WeasyPrint cannot (like execute JavaScript which is handy for code highlighting or allow for `@font-face` font embedding), but it's not free and in the end, I decided, not worth the money.
**Can you share your shell script?**
Here's the basic idea, adjust file paths to suit your working habits.
~~~~bash
#!/bin/sh
#Update PDF:
pandoc --toc --toc-depth=2 --smart --template=lib/template.html5 --include-before-body=lib/header.html -t html5 -o rwd.html draft.txt && weasyprint rwd.html rwd.pdf
#Update epub:
pandoc -S -s --smart -t epub3 --include-before-body=lib/header.html --template=lib/template_epub.html --epub-metadata=lib/epub-metadata.xml --epub-stylesheet=lib/print-epub.css --epub-cover-image=lib/covers/cover-portrait.png --toc --toc-depth=2 -o rwd.epub draft.txt
#update Mobi:
pandoc -S -s --smart -t epub3 --include-before-body=lib/header.html --template=lib/template_epub.html --epub-metadata=lib/epub-metadata.xml --epub-stylesheet=lib/print-kindle.css --epub-cover-image=lib/covers/cover-portrait.png --toc --toc-depth=2 -o kindle.epub Draft.txt && kindlegen kindle.epub -o rwd.mobi
~~~~
I just run this script and bang, all my files are updated.
**What advice do you have for people trying to write an ebook?**
At the risk of sounding trite, just do it.
Writing a book is not easy, or rather the writing is never easy, but I don't think it's ever been this easy to *produce* a book. It took me two afternoons to come up with a workflow that involves all free, open source software and allows me to publish literally any text file on my hard drive as a book that can then be read by millions. I type two keystrokes and I have a book. Even if millions don't ever read your book (and, for the record, millions have most definitely not read my books), that is still f'ing amazing.
Now go make something cool (and be sure to tell me about it).
# Whatever Happened to Webmonkey.com?
date:2013-09-20 21:04:57
url:/src/whatever-happened-webmonkey
[Update 02/2019: If you're looking for a good resource, similar to Webmonkey, I suggest Mozilla's [Developer Docs site](https://developer.mozilla.org/en-US/). It lacks Webmonkey's sense of humor and fun, and it doesn't cover everything Webmonkey covered, but it does have some good tutorials and documentation on HTML, CSS and JavaScript]
People on Twitter have been asking what's up with [Webmonkey.com][1]. Originally I wanted to get this up on Webmonkey, but I got locked out of the CMS before I managed to do that, so I'm putting it here.
Earlier this year Wired decided to stop producing new content for Webmonkey. [**Update 07/2016**: The domain has been shut down and now redirects to wired.com. I told you they were serious this time.]
For those keeping track at home, this is the fourth, and I suspect final, time Webmonkey has been shut down (previously it was shut down in 1999, 2004 and 2006).
I've been writing for Webmonkey.com since 2000, full time since 2006 (when it came back from the dead for a third run). And for the last two years I have been the sole writer, editor and producer of the site.
Like so many of you, I learned how to build websites from Webmonkey. But it was more than just great tutorials and how-tos. Part of what made Webmonkey great was that it was opinionated and rough around the edges. Webmonkey was not the product of professional writers; it was written and created by the web nerds building Wired's websites. It was written by people like us, for people like us.
I'll miss Webmonkey not just because it was my job for many years, but because at this point it feels like a family dog to me: it's always been there, and suddenly it's not. Sniff. I'll miss you, Webmonkey.
Quite a few people have asked me why it was shut down, but unfortunately I don't have many details to share. I've always been a remote employee, not in San Francisco at all in fact, and consequently somewhat out of the loop. What I can say is that Webmonkey's return to Wired in 2006 was the doing of long-time Wired editor Evan Hansen ([now at Medium][2]). Evan was a tireless champion of Webmonkey and saved it from the Conde Nast ax several times. He was also one of the few at Wired who "got" Webmonkey. When Evan left Wired earlier this year I knew Webmonkey's days were numbered.
I don't begrudge Wired for shutting Webmonkey down. While I have a certain nostalgia for its heyday, even I know it's been a long time since Webmonkey was leading the way in web design. I had neither the staff nor the funding to make Webmonkey anything like its early 2000s self. In that sense I'm glad it was shut down rather than simply fading further into obscurity.
I am very happy that Wired has left the site in place. As far as I know Webmonkey (and its ever-popular cheat sheets, which still get a truly astounding amount of traffic) will remain available on the web. [**Update 07/2016**: so much for that, domain and all content are gone now.] That said, note to the [Archive Team][3], it wouldn't hurt to create a backup. Sadly, many of the very earliest writings have already been lost in the various CMS transitions over the years and even much of what's there now has incorrect bylines. Still, at least most of it's there. For now.
If you have any questions or want more details use the comments box below.
In closing, I'd like to thank some people at Wired -- thank you to my editors over the years, especially [Michael Calore][5], [Evan Hansen][6] and [Leander Kahney][7] who all made me a much better writer. Also thanks to Louise for always making sure I got paid. And finally, to everyone who read Webmonkey and contributed over the years, whether with articles or even just a comment, thank you.
Cheers and, yes, thanks for all the bananas.
[1]: http://www.webmonkey.com/
[2]: https://medium.com/@evanatmedium
[3]: http://www.archiveteam.org/index.php?title=Main_Page
[4]: https://twitter.com/LongHandPixels
[5]: http://snackfight.com/
[6]: https://twitter.com/evanatmedium
[7]: http://www.cultofmac.com/about/
# New Adventures in HiFi Text
date:2005-02-12 11:01:49
url:/src/new-adventures-in-hifi-text
I sometimes bitch about Microsoft Word in this piece, but let me be clear that I do not hate Windows or Microsoft, nor am I a rabid fan of Apple. In fact, prior to the advent of OS X, I was ready to ditch the platform altogether. I could list as many crappy things about Mac OS 7.x-9.x as I can about Windows. Maybe even more. But OS X changed that for me; it's everything I was looking for in an OS and very little I wasn't. But I also don't think Microsoft is inherently evil and Windows is their plan to exploit the vulnerable masses. I mean really, do you think [this guy][2] or anything he might do could be *evil*? Of course not. I happen to much prefer OS X, but that's just personal preference and computing needs. I use Windows all the time at work and I don't hate it, it just lacks a certain *je ne sais quoi*. [2014 update: These days I use Arch Linux because it just works better than any other OS I have used.]
###In Praise of Plain Text
That said, I have never really liked Microsoft Word on any platform. It does all sorts of things I would prefer that it didn't, such as capitalize URLs while I'm typing or automatically convert email addresses to live links. Probably you can turn these sorts of things on and off in the preferences, but that's not the point. I dislike the way Word approaches me, [assuming that I want every bell and whistle possible][10], including a shifty looking paperclip with Great Gatsbyesque eyes watching my every move.
Word has too many features and yet fails to implement any of them with much success. Since I don't work in an office environment, I really don't have any need for Word (did I mention it's expensive and crashes with alarming frequency?). I write for a couple of magazines here and there, post things on this site, and slave away at the mediocre American novel, none of which requires me to use MS Word or the .doc format. In short, I don't *need* Word.
Yet for years I used it anyway. I still have the copy I purchased in college, and I even upgraded when it became available for OS X. But I used it mainly out of ignorance of the alternatives, rather than any usefulness of the software. I can now say I have tried pretty much every office/word processing program that's available for OS X and I only like one of them -- [Mellel][11]. But aside from that one, I've concluded I just don't like word processors (including [Apple's new Pages program][12]).
These days I do my writing in a text editor, usually BBEdit. Since I've always used BBEdit to write code, it was open and ready to go. Over time I noticed that when I wanted to jot down some random idea I turned to BBEdit rather than opening up Word. It started as a convenience thing and just sort of snowballed from there. Now I'm really attached to writing in plain text.
In terms of archival storage, plain text is an excellent way to write. If BareBones, the makers of BBEdit, went bankrupt tomorrow I wouldn't miss a beat, because just about any program out there can read my files. As a file storage format, plain text is almost totally platform independent (I'm sure someone has a text editor running on their PS2 by now), which makes plain text fairly future proof (and if it's not, then we have bigger issues to deal with). Plain text is also easy to mark up for web display: a couple of tags, maybe a link here and there, and we're on our way.
###In Praise of Formatted Text
But there are some drawbacks to writing in plain text -- it sucks for physical documents. No one wants to read printed plain text. Because plain text must be single spaced, printing renders some pretty small text with no room to make corrections -- less than ideal for editing purposes. Sure, I could adjust the font size and whatnot from within BBEdit's preferences, but I can't get the double spacing, which is indispensable for editing but a waste of space when I'm actually writing.
Of course this may be peculiar to me. It may be hard for some people to write without the double-spaced screen display. Most people probably look at what they're writing while they write it. I do not. I look at my hands. Not to find the keys, but rather with a sort of abstract fascination. My hands seem to know where to go without me having to think about it; it's kind of amazing and I like to watch it happen. I could well be thinking about something entirely different from what I'm typing, and staring down at my hands produces a strange realization -- wow, look at those fingers go, I wonder how they know what they're doing? I'm thinking about the miraculous way they seem to know what they're doing, rather than what they're actually doing.
It's highly likely that this is my own freakishness, but it eliminates the need for nicely spaced screen output (and introduces the need for intense editing). But wait, let's go back to that earlier part where I said it's easy to mark up plain text for the web -- what if it were possible to mark up plain text for print? Now that would be something.
###The Best of Both Worlds (Maybe)
In fact there is a markup language for print documents. Unfortunately it's pretty clunky. It goes by the name TeX, the terseness of which should make you think -- ah, Unix. But TeX is actually really wonderful. It gives you the ability to write in plain text and use an, albeit esoteric and awkward, syntax to mark it up. TeX can then convert your document into something very classy and beautiful. Now prior to the advent of Adobe's ubiquitous PDF format I have no idea what sort of things TeX produced, nor do I care, because PDF exists and TeX can leverage it to render printable, distributable, cross-platform, open standard and, most importantly, really good looking documents.
But first let's deal with the basics. TeX is convoluted, ugly, impossibly verbose and generally useless to anyone without a computer science degree. Recognizing this, some reasonable folks came along and said, hey, what if we wrote some simple macros to access this impossibly verbose, difficult to comprehend language? That would be marvelous. And so some people did, and called the result LaTeX, because they were nerd/geeks and loved puns and the shift key. Actually I am told that LaTeX is pronounced Lah Tech, and that TeX should not be thought of as tex, but rather the Greek letters tau, epsilon and chi. This is all good and well if you want to convince people you're using a markup language rather than checking out fetish websites, but the word is spelled latex and will be pronounced laytex as long as I'm the one saying it. (Note to Bono: your name is actually pronounced bo know. Sorry, that's just how it is in my world.)
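To make the macro idea concrete, here's a small sketch of my own (not from the article): LaTeX's `\section` command is, underneath, just plain TeX font-and-spacing commands bundled into one macro.

~~~{.latex}
% What you'd hand-roll in plain TeX for every heading:
%   \vskip 1em \noindent {\bf 1. A Heading} \par \nobreak
% What LaTeX's macro package lets you write instead:
\documentclass{article}
\begin{document}
\section{A Heading}
Some text under the heading.
\end{document}
~~~

Same typesetting engine either way; LaTeX just spares you the bookkeeping.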
So, while TeX may do the actual work of formatting your plain text document, what you actually use to mark up your documents is called LaTeX. I'm not entirely certain, but I assume that the packages that comprise LaTeX are simple interfaces that take basic input shortcuts and then tell TeX what they mean. Sort of like what Markdown does in converting text to HTML. Hmmm. More on that later.
###Installation and RTFM suggestions
So I went through the whole unixy rigamarole of installing packages in usr/bin/ and other weird directories that I try to ignore, and got a lovely little Mac OS X-native front end called [TeXShop][3]. Here is a link to the [detailed instructions for the LaTeX/TeX setup I installed][4]. The process was awkward, but not painful. The instructions comprise only four steps, not as bad as say, um, well, okay, it's not drag-n-drop, but it's pretty easy.
I also went a step further, because LaTeX in most of its incarnations is pretty picky about what fonts it will work with. If this seems idiotic to you, you are not alone. I thought, hey, I have all these great fonts, I should be able to use any of them in a LaTeX document, but no, it's not that easy. Without delving too deep into the mysterious world of fonts, it seems that, in order to render text as well as it does, TeX needs special fonts -- particularly fonts that have specific ligatures included in them. Luckily a very nice gentleman by the name of Jonathan Kew has already solved this problem for those of us using Mac OS X. So I [downloaded and installed XeTeX][13], which is actually a totally different macro processor that runs semi-separately from a standard LaTeX installation (at least I think it is, feel free to correct me if I'm wrong). This link offers [more information on XeTeX][5]. So then [I read the fucking manual][6] and [the other fucking manual][7] (which should be on your list of best practices when dealing with new software or programming languages).
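For the curious, a minimal sketch of my own (not from the article) of what plain XeTeX input looks like: it calls a system font by name, which is exactly what standard TeX won't let you do. "Bell MT" is an assumption; substitute any installed font, and run the file with `xetex` rather than `latex`.

~~~{.latex}
% Minimal plain-XeTeX sketch: use any installed system font directly.
% "Bell MT" is an assumption; substitute any font on your machine.
\font\body="Bell MT" at 12pt
\body
Hello from a system font, ligatures and all: fi fl ffi.
\bye
~~~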
After an hour or so of tinkering with pre-made templates developed by others, and consulting the aforementioned manuals, I was actually able to generate some decent looking documents. But the syntax for LaTeX is awkward and verbose (remember -- it was written to avoid having to know an awkward and verbose syntax known as TeX). Would you rather write this:
~~~{.latex}
\section{Heading}
\font\a="Bell MT" at 12pt
\a some text some text some text some text, for the love of god I will
not use latin sample text because damnit I am not roman and do not like
fiddling.
\href{some link text}{http://www.linkaddress.com} to demonstrate what a
link looks like in XeTeX.
\verb#here is a line of code# to show what inline code looks like in
XeTeX some more text because I still won't stoop, yes I said stoop, to
Latin.
~~~
Or this:
~~~{.markdown}
###Heading
Some text some text some text some text, for the love of god I will not
use latin sample text because damnit I am not roman and do not like
fiddling.
[some link text][99] to demonstrate what a link looks like in Markdown.
`here is a line of code` to show what inline code looks like in
Markdown. And some more text because I still won't stoop, yes I said
stoop, to Latin.
~~~
In simple terms of readability, [John Gruber's Markdown][8] (the second sample) is a stroke of true brilliance. I can honestly say that nothing has changed my writing style as much since my parents bought one of these newfangled computer thingys back in the late 80's. So, with no more inane hyperbole, let's just say I like Markdown. LaTeX, on the other hand, shows its age like the denture-baring ladies of a burlesque revival show. It ain't sexy. And believe me, my sample is the tip of the iceberg in terms of markup.
###Using Perl and AppleScript to Generate XeTeX
Here's where I get vague, beg personal preferences, hint at a vast undivulged knowledge of AppleScript (not true, I just use the "start recording" feature in BBEdit) and simply say that, with a limited knowledge of Perl, I was able to rewrite Markdown, combine that with some AppleScripts to call various grep patterns (LaTeX must escape certain characters, most notably `$` and `&`) and create a BBEdit Textfactory which combines the first two elements to generate LaTeX markup from a Markdown syntax plain text document. And no, I haven't been reading Proust, I just like long, parenthetically-aside sentences.
Yes, all of the convolution of the preceding sentence allows me to, in one step, convert this document to a LaTeX document and output it as a PDF file. Don't believe me? [Download this article as a PDF produced using LaTeX][9]. In fact it's so easy I'm going to batch process all my entries and make them into nice looking PDFs which will be available at the bottom of the page.
###Technical Details
I first proposed this idea of using Markdown to generate LaTeX on the BBEdit mailing list and was informed that it would be counter-productive to the whole purpose and philosophy of LaTeX. While I sort of understand this guidance, I disagree. I already have a ton of documents written with Markdown syntax. Markdown is the most minimal syntax I've found for generating HTML. Why not adapt my existing workflow to generate some basic LaTeX? See, I don't want to write LaTeX documents; I want to write text documents with Markdown syntax in them and generate HTML and PDF from the same initial document. Then I want to revert the initial document back to its original form and stash it away on my hard drive. I simply wanted a one step method of processing a Markdown syntax text file into XeTeX to complement the one step method I already have for turning the same document into HTML. Here's how I do it.
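A toy version of the Markdown-to-LaTeX conversion described above, sketched in sed -- purely illustrative, since the real tool was a modified Markdown.pl; this handles only `###headings` and inline code spans:

~~~~bash
#!/bin/sh
# Illustrative sketch only: convert two bits of Markdown syntax to the
# LaTeX equivalents. The author's actual tool was a modified Markdown.pl.
md_to_latex() {
    sed -e 's/^###\(.*\)$/\\section{\1}/' \
        -e 's/`\([^`]*\)`/\\verb#\1#/g'
}

printf '%s\n' '###Heading' 'Some `code` here' | md_to_latex
# prints:
#   \section{Heading}
#   Some \verb#code# here
~~~~

Every other Markdown construct (lists, links, block quotes) would get a similar substitution rule; the hard cases are the ones where order matters, as the next section explains.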
I modified Markdown to generate what LaTeX markup I need, i.e. specific definitions for list elements, headings, quotes, code blocks etc. This was actually pretty easy, and keep in mind that I have never gotten beyond a "hello world" script in Perl. Kudos to John Gruber for copious comments and very logical, easy to read code.
That's all good and well, but then there are some other things I needed to do to get a usable TeX version of my document. For instance, certain characters need to be escaped, like the entities mentioned above. Now if I were more knowledgeable about Perl I would have just added these to the Markdown file, but rather than wrestle with Perl I elected to use grep via BBEdit. So I crafted an AppleScript that first parsed out things like `&mdash;` and replaced them with the unicode equivalent, which is necessary to get an em-dash in XeTeX (in a normal LaTeX environment you would use `---` to generate an em-dash). Other things like quote marks, curly brackets and ampersands are similarly replaced with their XeTeX equivalents (for some it's unicode, others like `{` or `}` must be escaped like so: `\{`).
Next I created a BBEdit Textfactory to call these scripts in the right order (for instance, I need to replace quote marks after running my modified Markdown script, since Markdown will use quotes to identify things like url title tags, which my version simply discards). Then I created an AppleScript that calls the Textfactory and then applies a BBEdit glossary item to the resulting (selected) text, which adds all the preamble TeX definitions I use and then passes that whole code block off to XeTeX via TeXShop and outputs the result in Preview. Convoluted? Yes. But now that it's done and assigned a shortcut key, it takes less than two seconds to generate a PDF of really good looking (double spaced) text. The best part is that if I want to change things around, the only file I have to adjust is the BBEdit glossary item that creates the preamble.
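The escaping passes described above could equally be sketched as a small shell filter (illustrative only -- the real workflow used BBEdit grep patterns, and running against actual Markdown output complicates the ordering). Note the entity conversion must run before literal ampersands get escaped:

~~~~bash
#!/bin/sh
# Illustrative sketch of the escaping passes: convert the &mdash; entity
# to a literal em dash first, then escape LaTeX-special characters.
escape_for_xetex() {
    sed -e 's/&mdash;/—/g' \
        -e 's/{/\\{/g' \
        -e 's/}/\\}/g' \
        -e 's/\$/\\$/g' \
        -e 's/&/\\&/g'
}

printf '%s\n' 'costs {$5} &mdash; cheap & cheerful' | escape_for_xetex
# prints: costs \{\$5\} — cheap \& cheerful
~~~~

Swap the first two rules and every converted em dash would survive, but every remaining `&` inside an entity would get a stray backslash -- which is exactly the ordering problem the Textfactory solves.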
The only downside is that to undo the various manipulations wrought on the original text file I have to hit the undo command five times. At some point I'll sit down and figure out how to do everything using Perl, and then it will be a one step undo just like regular Markdown. In the meantime I just wrote a quick AppleScript that calls undo five times :)
###Am I insane?
I don't know. I'm willing to admit to esoteric, and when pressed will concede stupid, but damnit I like it. And from initial install to completed workflow we're only talking about six hours, most of which was spent poring over LaTeX manuals. Okay, yes, I'm insane. I went to all this effort just to avoid an animated paperclip. But seriously, that thing is creepy.
Note of course that my LaTeX needs are limited and fairly simple. I wanted one version of my process to output a pretty simple double spaced document for editing. Then I whipped up another version for actual reading by others (single spaced, nice margins and header etc). I'm a humanities type; I'm not doing complex math equations, inline images, or typesetting an entire book with table of contents and bibliography. Of course even if I were, the only real change I would need to make is to the LaTeX preamble template. Everything else would remain the same, which is pretty future proof. And if BBEdit disappears and Apple goes belly up, well, I still have plain text files to edit on my PS457.
[1]: http://www.luxagraf.com/archives/flash/software_sucks "Why Software sucks. Sometimes."
[2]: http://www.snopes.com/photos/people/gates.asp "Bill Gates gets sexy for the teens--yes, that is a mac in the background"
[3]: http://www.uoregon.edu/~koch/texshop/texshop.html "TeXShop for Mac OS X"
[4]: http://www.mecheng.adelaide.edu.au/~will/texstart/ "TeX on Mac OS X: The most simple beginner's guide"
[5]: http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&item_id=xetex_texshop "Using XeTeX with TexShop"
[6]: http://www.math.hkbu.edu.hk/TeX/ "online LaTeX manual"
[7]: http://www.math.hkbu.edu.hk/TeX/lshort.pdf "Not so Short introduction to LaTeX"
[8]: http://daringfireball.net/projects/markdown/ "Markdown"
[9]: http://www.luxagraf.com/pdf/hifitext.pdf "this article as an XeTeX generated pdf"
[10]: http://www1.appstate.edu/~clarkne/hatemicro.html "Microsoft Word Suicide Note help"
[11]: http://www.redlers.com/ "Mellel, great software and a bargain at $39"
[12]: http://www.apple.com/iwork/pages/ "Apple's Pages, part of the new iWork suite"
[13]: http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&item_id=xetex&_sc=1 "The XeTeX typesetting system"