[{"model": "src.topic", "pk": 1, "fields": {"name": "Command Line", "slug": "command-line", "pluralized_name": "the Command Line"}}, {"model": "src.topic", "pk": 2, "fields": {"name": "Security", "slug": "security", "pluralized_name": "Security"}}, {"model": "src.topic", "pk": 3, "fields": {"name": "Web Servers", "slug": "web-servers", "pluralized_name": "Web Servers"}}, {"model": "src.topic", "pk": 4, "fields": {"name": "Python", "slug": "python", "pluralized_name": "Python"}}, {"model": "src.topic", "pk": 5, "fields": {"name": "Privacy", "slug": "privacy", "pluralized_name": "Privacy"}}, {"model": "src.topic", "pk": 6, "fields": {"name": "IndieWeb", "slug": "indieweb", "pluralized_name": "the Indieweb"}}, {"model": "src.entry", "pk": 1, "fields": {"title": "Switching from LastPass to Pass", "slug": "pass", "body_html": "

I used to keep all my passwords in my head. I kept track of them using some memory tricks based on my very, very limited understanding of what memory champions like Ed Cooke do. Basically I would generate strings using pwgen and then memorize them.

\n
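
As a rough example -- and the flags vary a bit between pwgen versions -- generating three 16-character passwords with symbols looks something like this:

\n
pwgen -sy 16 3\n
\n\n\n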

As you might imagine, this did not scale well.

\n

Or rather it led to me getting lazy. It used to be that hardly any sites required you to log in so it was no big deal to memorize a few passwords. Now pretty much every time you buy something you have to create an account and I don't want to memorize a new strong password for some one-off site I'll probably never visit again. So I ended up using a less strong password for those. Worse, I'd re-use that password at multiple sites.

\n

My really important passwords (email and financial sites) are still only in my head, but recognizing that re-using the same simple password for the one-offs was a bad idea, I started using LastPass for those sorts of things. But I never really liked using LastPass. It bothered me that my passwords were stored on a third-party server. But LastPass was just so easy.

\n

Then LogMeIn bought LastPass and suddenly I was motivated to move on.

\n

As I outlined in a brief piece for The Register, there are lots of replacement services out there -- I like Dashlane, despite the price -- but I didn't want my password data on a third-party server any more. I wanted to be in total control.

\n

I can't remember how I ran across pass, but I've been meaning to switch over to it for a while now. It's exactly what I wanted in a password tool -- a simple, secure, command-line-based system using tested tools like GnuPG. There's also a Firefox add-on and an Android app to make life a bit easier. So far though, I'm not using either.

\n

So I cleaned up my LastPass account, exported everything to CSV and imported it all into pass with this Ruby script.

\n
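
If you've never touched pass before, the basic commands are worth a quick look before importing anything. This is just a sketch -- the GPG ID and entry names are placeholders:

\n
pass init "Your GPG Key ID"\npass insert web/example.com\npass show web/example.com\n
\n\n\n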

Once you have the basics installed there are two ways to run pass, with Git and without. I can't tell you how many times Git has saved my ass, so naturally I went with a Git-based setup that I host on a private server. That, combined with regular syncing to my Debian machine, my wife's Mac, rsyncing to a storage server, and routine backups to Amazon S3 means my encrypted password files are backed up on six different physical machines. Moderately insane, but sufficiently redundant that I don't worry about losing anything.

\n
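
Pass handles the Git side itself through the pass git subcommand; pointing the store at a private remote looks roughly like this (the remote URL is a placeholder for wherever you host your repo):

\n
pass git init\npass git remote add origin git@example.com:password-store.git\npass git push -u origin master\n
\n\n\n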

If you go this route there's one other thing you need to back up -- your GPG keys. The public key is easy, but the private one is a bit harder. I got some good ideas from here. On one hand you could be paranoid-level secure and make a paper printout of your key. I suggest using a barcode or QR code, printing it on card stock, laminating it for protection from the elements and then storing it in a secure location like a safe deposit box. I may do this at some point, but for now I went with the less secure plan B -- I simply encrypted my private key with a passphrase.

\n
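
For reference, exporting both keys for backup looks something like this (the email address is a placeholder for your key's ID; the secret-key export may prompt for your passphrase):

\n
gpg --export --armor you@example.com > public-key.asc\ngpg --export-secret-keys --armor you@example.com > private-key.asc\n
\n\n\n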

Yes, that essentially negates at least some of the benefit of using a key instead of a passphrase in the first place. However, since, as noted above, I don't store any passwords that would, so to speak, give you the keys to my kingdom, I'm not terribly worried about it. Besides, if you really wanted these passwords it would be far easier to just take my laptop and hit me with a $5 wrench until I told you the passphrase for gnome-keyring.

\n

The more realistic thing to worry about is how other, potentially far less tech-savvy people can access these passwords should something happen to you. No one in my immediate family knows how to use GPG. Yet. So, in case something happens to me before I teach my kids how to use it, I periodically print out my important passwords and store that file in a secure place along with a will, advance directive and so on.

", "body_markdown": "I used to keep all my passwords in my head. I kept track of them using some memory tricks based my very, very limited understanding of what memory champions like [Ed Cooke][1] do. Basically I would generate strings using [pwgen][2] and then memorized them. \r\n\r\nAs you might imagine, this did not scale well. \r\n\r\nOr rather it led to me getting lazy. It used to be that hardly any sites required you to log in so it was no big deal to memorize a few passwords. Now pretty much every time you buy something you have to create an account and I don't want to memorize a new strong password for some one-off site I'll probably never visit again. So I ended up using a less strong password for those. Worse, I'd re-use that password at multiple sites.\r\n\r\nMy really important passwords (email and financial sites), are still only in my head, but recognizing that re-using the same simple password for the one-offs was a bad idea, I started using LastPass for those sorts of things. But I never really liked using LastPass. It bothered me that my passwords were stored on a third-party server. But LastPass was just *so* easy.\r\n\r\nThen LogMeIn bought LastPass and suddenly I was motivated to move on. \r\n\r\nAs I outlined in a [brief piece][3] for The Register, there are lots of replacement services out there -- I like [Dashlane][4], despite the price -- but I didn't want my password data on a third party server any more. I wanted to be in total control.\r\n\r\nI can't remember how I ran across [pass][5], but I've been meaning to switch over to it for a while now. It exactly what I wanted in a password tool -- a simple, secure, command line based system using tested tools like GnuPG. There's also [Firefox add-on][6] and [an Android app][7] to make life a bit easier. So far though, I'm not using either.\r\n\r\nSo I cleaned up my LastPass account, exported everything to CSV and imported it all into pass with this [Ruby script][8]. \r\n\r\nOnce you have the basics installed there are two ways to run pass, with Git and without. I can't tell you how many times Git has saved my ass, so naturally I went with a Git-based setup that I host on a private server. That, combined with regular syncing to my Debian machine, my wife's Mac, rsyncing to a storage server, and routine backups to Amazon S3 means my encrypted password files are backed up on six different physical machines. Moderately insane, but sufficiently redundant that I don't worry about losing anything.\r\n\r\nIf you go this route there's one other thing you need to backup -- your GPG keys. The public key is easy, but the private one is a bit harder. I got some good ideas from [here][9]. On one hand you could be paranoid-level secure and make a paper print out of your key. I suggest using a barcode or QR code, and then printing on card stock which you laminate for protection from the elements and then store it in a secure location like a safe deposit box. I may do this at some point, but for now I went with the less secure plan B -- I simply encrypted my private key with a passphrase. \r\n\r\nYes, that essentially negates at least some of the benefit of using a key instead of passphrase in the first place. However, since, as noted above, I don't store any passwords that would, so to speak, give you the keys to my kingdom, I'm not terribly worried about it. 
Besides, if you really want to get these passwords it would be far easier to just take my laptop and [hit me with a $5 wrench][10] until I told you the passphrase for gnome-keyring.\r\n\r\nThe more realistic thing to worry about is how other, potentially far less tech-savvy people can access these passwords should something happen to you. No one in my immediate family knows how to use GPG. Yet. So should something happen to me before I teach my kids how to use it, I periodically print out my important passwords and store that file in a secure place along with a will, advance directive and so on.\r\n\r\n\r\n[1]: https://twitter.com/tedcooke\r\n[2]: https://packages.debian.org/search?keywords=pwgen\r\n[3]: tk\r\n[4]: http://dashlane.com/\r\n[5]: http://www.passwordstore.org/\r\n[6]: https://github.com/jvenant/passff#readme\r\n[7]: https://github.com/zeapo/Android-Password-Store\r\n[8]: http://git.zx2c4.com/password-store/tree/contrib/importers/lastpass2pass.rb\r\n[9]: http://security.stackexchange.com/questions/51771/where-do-you-store-your-personal-private-gpg-key\r\n[10]: https://www.xkcd.com/538/\r\n", "pub_date": "2015-10-28T15:02:09", "last_updated": "2015-10-29T21:38:15.134", "enable_comments": true, "has_code": false, "status": 1, "meta_description": "I used to keep all my passwords in my head. As you might imagine, this did not scale well.", "template_name": 0, "topics": [1, 2]}}, {"model": "src.entry", "pk": 2, "fields": {"title": "About src", "slug": "about", "body_html": "

If you're here because Google sent you to one of the articles I deleted and then you got redirected here, have a look at the Internet Archive, which preserved those pages.

\n

For a while I had another blog at the URL longhandpixels.net. I made a few half-hearted attempts to make money with it, which I refuse to do here.

\n

I felt uncomfortable with the marketing that required and a little bit dirty about the whole thing. I don't want to spend my life writing things that will draw in people to buy my book. Honestly, I don't care about selling the book.

\n

If you want what I think is the best resource on the web to learn about responsive design then buy it, otherwise, don't. That's all the marketing I can do.

\n

What I want to do is write what I want to write, whether the topic is travel and life (what most of this site is about), fiction or technology. I don't really care if anyone else is interested or not. Long story short: I shut down longhandpixels. I ported over a small portion of articles that I liked and deleted the rest, redirecting them all to this page, hence the message at the top.

\n

So, there you go. Now if I were you I'd close this browser window right now and go somewhere with fresh air and sunshine, but if you're not up for that, I really do hope you enjoy src, which is what I call this code/tech centric portion of luxagraf.

\n

Acknowledgements

\n

src and the rest of this site would not be possible without the following software, many thanks to the creators:

\n", "body_markdown": "**If you're here because Google sent you to one of the articles I deleted and then you got redirected here, have a look at the [Internet Archive](https://web.archive.org/web/*/https://longhandpixels.net/blog/), which preserved those pages.**\r\n\r\nFor a while I had another blog at the URL longhandpixels.net. I made a few half-hearted attempts to make money with it, which I refuse to do here. \r\n\r\nI felt uncomfortable with the marketing that required and a little bit dirty about the whole thing. I don't want to spend my life writing things that will draw in people to buy my book. Honestly, I don't care about selling the book.\r\n\r\nIf you want what I think is the best resource on the web to learn about responsive design then [buy it](/src/books/responsive-web-design), otherwise, don't. That's all the marketing I can do.\r\n\r\nWhat I want to do is write what I want to write, whether the topic is travel and life (what most of this site is about), fiction or technology. I don't really care if anyone else is interested or not. Long story short; I shut down longhandpixels. I ported over a small portion of articles that I liked and deleted the rest, redirecting them all to this page, hence the message at the top.\r\n\r\nSo, there you go. Now if I were you I'd close this browser window right now and go somewhere with fresh air and sunshine, but if you're not up for that, I really do hope you enjoy `src`, which is what I call this code/tech centric portion of luxagraf. \r\n\r\n###Acknowledgements\r\n\r\n`src` and the rest of this site would not be possible without the following software, many thanks to the creators:\r\n\r\n* [Pandoc](http://johnmacfarlane.net/pandoc/) -- Everything I write is just a plain text file with a bit of markdown. To turn that into HTML (and sometimes epub) I use a magical tool called Pandoc.\r\n\r\n* [WeasyPrint](http://weasyprint.org/) -- I use WeasyPrint to generate PDFs.\r\n\r\n* [Git](http://git-scm.com/) -- pretty much everything I write is stored in Git for version control purposes. I host my own repos privately.\r\n\r\n* [Nginx](http://nginx.org/) -- This site is served by a custom build of Nginx. You can read more about how I set up Nginx in the tutorial I wrote, *[Install Nginx on Debian/Ubuntu](/src/install-nginx-debian)*\r\n\r\n* [Python](https://www.python.org/) and [Django](https://www.djangoproject.com/) -- This site consists primarily of flat HTML files generated by a custom Django application I wrote.\r\n\r\n* [Debian Linux](https://www.debian.org/) -- Way down at the bottom of stack there is Debian, which is my preferred operating system, server or otherwise. Currently I run Debian on a small VPS instance at [Vultr.com](http://www.vultr.com/?ref=6825229) (affiliate link, costs you nothing, but helps cover my hosting).", "pub_date": "2015-10-28T15:04:24", "last_updated": "2015-11-26T20:15:41.028", "enable_comments": false, "has_code": false, "status": 1, "meta_description": "about luxagraf:src", "template_name": 0, "topics": []}}, {"model": "src.entry", "pk": 3, "fields": {"title": "Setup And Secure Your First VPS", "slug": "setup-and-secure-vps", "body_html": "

Let's talk about your server hosting situation. I know a lot of you are still using a shared web host. The thing is, it's 2015; shared hosting is only necessary if you really want unexplained site outages and over-crowded servers that slow to a crawl.

\n

It's time to break free of those shared hosting chains. It's time to stop accepting the software stack you're handed. It's time to stop settling for whatever outdated server software and configurations some shared hosting company sticks you with.

\n

You need a VPS. Seriously.

\n

What? Virtual Private Servers? Those are expensive and complicated... don't I need to know Linux or something?

\n

No, no and not really.

\n

Thanks to an increasingly competitive market you can pick up a very capable VPS for $5 a month. Setting up your VPS is a little more complicated than using a shared host, but most VPS providers these days have one-click installers that will set up a Rails, Django or even WordPress environment for you.

\n

As for Linux, knowing your way around the command line certainly won't hurt, but these tutorials will teach you everything you really need to know. We'll also automate everything so that critical security updates for your server are applied automatically without you lifting a finger.

\n
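
As a preview of that automation, one common way to handle automatic security updates on Debian and Ubuntu is the unattended-upgrades package; a minimal setup (run as root, or with sudo once that's configured) looks like this:

\n
apt-get install unattended-upgrades\ndpkg-reconfigure -plow unattended-upgrades\n
\n\n\n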

Pick a VPS Provider

\n

There are hundreds, possibly thousands of VPS providers these days. You can nerd out comparing all of them on serverbear.com if you want. When you're starting out I suggest sticking with what I call the big three: Linode, Digital Ocean or Vultr.

\n

Linode would be my choice for mission critical hosting. I use it for client projects, but Vultr and Digital Ocean are cheaper and perfect for personal projects and experiments. Both offer $5 a month servers, which get you 0.5GB of RAM, plenty of bandwidth and 20-30GB of SSD-based storage. Vultr actually gives you a little more RAM, which is helpful if you're setting up a Rails or Django environment (i.e. a long-running process that requires more memory), but I've been hosting a Django-based site on a 512MB Digital Ocean instance for 18 months and have never run out of memory.

\n

Also note that all these plans start off charging by the hour so you can spin up a new server, play around with it and then destroy it and you'll have only spent a few pennies.

\n

Which one is better? They're both good. I've been using Vultr more these days, but Digital Ocean has a nicer, somewhat slicker control panel. There are also many others I haven't named. Just pick one.

\n

Here's a link that will get you a $10 credit at Vultr and here's one that will get you a $10 credit at Digital Ocean (both of those are affiliate links and help cover the cost of hosting this site and get you some free VPS time).

\n

For simplicity's sake, and because it offers more one-click installers, I'll use Digital Ocean for the rest of this tutorial.

\n

Create Your First VPS

\n

In Digital Ocean you'll create a \"Droplet\". It's a three-step process: pick a plan (stick with the $5 a month plan for starters), pick a location (stick with the defaults) and then install a bare OS or go with a one-click installer. Let's get WordPress up and running, so select WordPress on 14.04 under the Applications tab.

\n

If you want automatic backups, and you do, check that box. Backups are not free, but generally won't add more than about $1 to your monthly bill -- it's money well spent.

\n

The last thing we need to do is add an SSH key to our account. If we don't, Digital Ocean will email us our root password in plain text. Yikes.

\n

If you need to generate some SSH keys, here's a short guide, How to Generate SSH keys. You can skip step 3 in that guide. Once you've got your keys set up on your local machine you just need to add them to your droplet.

\n

If you're on OS X, you can use this command to copy your public key to the clipboard:

\n
pbcopy < ~/.ssh/id_rsa.pub\n
\n\n\n

Otherwise you can use cat to print it out and copy it:

\n
cat ~/.ssh/id_rsa.pub\n
\n\n\n

Now click the button to \"add an SSH key\". Then paste the contents of your clipboard into the box. Hit \"add SSH Key\" and you're done.

\n

Now just click the giant \"Create Droplet\" button.

\n

Congratulations, you just deployed your first VPS.

\n

Secure Your VPS

\n

Now we can log in to our new VPS with this command:

\n
ssh root@127.87.87.87\n
\n\n\n

That will cause SSH to ask if you want to add the server to the list of known hosts. Say yes, and on OS X you'll get a dialog asking for the passphrase you created a minute ago when you generated your SSH key. Enter it and check the box to save it to your keychain so you don't have to enter it again.

\n

And you're now logged in to your VPS as root. That's not how we want to log in, though, since root is a very privileged user that can wreak all sorts of havoc. The first thing we'll do is change the password of the root user. To do that, just enter:

\n
passwd\n
\n\n\n

And type a new password.

\n

Now let's create a new user:

\n
adduser myusername\n
\n\n\n

Give your new user a secure password and then enter this command:

\n
visudo\n
\n\n\n

If you get an error saying the command can't be found, you'll need to first install sudo (apt-get install sudo on Debian, which does not ship with sudo). Once visudo runs, it will open the sudoers file. Use the arrow keys to move the cursor down to the line that reads:

\n
root ALL=(ALL:ALL) ALL\n
\n\n\n

Now add this line:

\n
myusername ALL=(ALL:ALL) ALL\n
\n\n\n

Where myusername is the username you created just a minute ago. Now we need to save the file. To do that hit Control-X, type a Y and then hit return.

\n
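
As an aside, on Debian and Ubuntu you can skip editing sudoers entirely and instead add the user to the sudo group (run as root, and assuming the sudo package is installed):

\n
usermod -aG sudo myusername\n
\n\n\n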

Now, WITHOUT LOGGING OUT OF YOUR CURRENT ROOT SESSION, open another terminal window and make sure you can log in with your new user:

\n
ssh myusername@12.34.56.78\n
\n\n\n

You'll be asked for the password that we created just a minute ago on the server (not the one for our SSH key). Enter that password and you should be logged in. To make sure we can get root access when we need it, try entering this command:

\n
sudo apt-get update\n
\n\n\n

That should ask for your password again and then spit out a bunch of information, all of which you can ignore for now.

\n

Okay, now you can log out of your root terminal window. To do that just hit Control-D.

\n

Finishing Up

\n

What about actually accessing our VPS on the web? Where's WordPress? Just point your browser to the bare IP address you used to log in and you should get the first screen of the WordPress installer.

\n

We now have a VPS deployed and we've taken some very basic steps to secure it. We can do a lot more to make things more secure, but I've covered that in a separate article.

\n

One last thing: the user we created does not have access to our SSH keys, so we need to add them. First make sure you're logged out of the server (type Control-D and you'll get a message telling you the connection has been closed). Now, on your local machine, paste this command:

\n
cat ~/.ssh/id_rsa.pub | ssh myusername@45.63.48.114 "mkdir -p ~/.ssh && cat >>  ~/.ssh/authorized_keys"\n
\n\n\n
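
If you have ssh-copy-id available on your local machine, the same thing can be done in one shorter step (using the same example IP address):

\n
ssh-copy-id myusername@45.63.48.114\n
\n\n\n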

You'll have to put in your password one last time, but from now on you can log in with your SSH key instead.

\n

Next Steps

\n

Congratulations, you made it past the first hurdle; you're well on your way to taking control of your server. Kick back, relax and write some blog posts.

\n

Write down any problems you had with this tutorial and send me a link so I can check out your blog (I'll try to help figure out what went wrong too).

\n

Because we used a pre-built image from Digital Ocean, though, we're really not much better off than if we'd gone with shared hosting. That's okay; you have to start somewhere. Next up we'll do the same things, but this time start from a bare OS, which will serve as the basis for a custom-built version of Nginx that's highly optimized and way faster than any stock server.

", "body_markdown": "Let's talk about your server hosting situation. I know a lot of you are still using a shared web host. The thing is, it's 2015, shared hosting is only necessary if you really want unexplained site outages and over-crowded servers that slow to a crawl.\r\n\r\nIt's time to break free of those shared hosting chains. It time to stop accepting the software stack you're handed. It's time to stop settling for whatever outdated server software and configurations some shared hosting company sticks you with.\r\n\r\nYou need a VPS. Seriously.\r\n\r\nWhat? Virtual Private Servers? Those are expensive and complicated... don't I need to know Linux or something?\r\n\r\nNo, no and not really.\r\n\r\nThanks to an increasingly competitive market you can pick up a very capable VPS for $5 a month. Setting up your VPS *is* a little more complicated than using a shared host, but most VPS's these days have one-click installers that will set up a Rails, Django or even WordPress environment for you.\r\n\r\nAs for Linux, knowing your way around the command line certainly won't hurt, but these tutorials will teach you everything you really need to know. We'll also automate everything so that critical security updates for your server are applied automatically without you lifting a finger.\r\n\r\n## Pick a VPS Provider\r\n\r\nThere are hundreds, possibly thousands of VPS providers these days. You can nerd out comparing all of them on [serverbear.com](http://serverbear.com/) if you want. When you're starting out I suggest sticking with what I call the big three: Linode, Digital Ocean or Vultr.\r\n\r\nLinode would be my choice for mission critical hosting. I use it for client projects, but Vultr and Digital Ocean are cheaper and perfect for personal projects and experiments. Both offer $5 a month servers, which gets you .5 GB of RAM, plenty of bandwidth and 20-30GB of a SSD-based storage space. Vultr actually gives you a little more RAM, which is helpful if you're setting up a Rails or Django environment (i.e. a long running process that requires more memory), but I've been hosting a Django-based site on a 512MB Digital Ocean instance for 18 months and have never run out of memory.\r\n\r\nAlso note that all these plans start off charging by the hour so you can spin up a new server, play around with it and then destroy it and you'll have only spent a few pennies.\r\n\r\nWhich one is better? They're both good. I've been using Vultr more these days, but Digital Ocean has a nicer, somewhat slicker control panel. There are also many others I haven't named. Just pick one.\r\n\r\nHere's a link that will get you a $10 credit at [Vultr](http://www.vultr.com/?ref=6825229) and here's one that will get you a $10 credit at [Digital Ocean](https://www.digitalocean.com/?refcode=3bda91345045) (both of those are affiliate links and help cover the cost of hosting this site *and* get you some free VPS time).\r\n\r\nFor simplicity's sake, and because it offers more one-click installers, I'll use Digital Ocean for the rest of this tutorial.\r\n\r\n## Create Your First VPS\r\n\r\nIn Digital Ocean you'll create a \"Droplet\". It's a three step process: pick a plan (stick with the $5 a month plan for starters), pick a location (stick with the defaults) and then install a bare OS or go with a one-click installer. Let's get WordPress up and running, so select WordPress on 14.04 under the Applications tab.\r\n\r\nIf you want automatic backups, and you do, check that box. 
Backups are not free, but generally won't add more than about $1 to your monthly bill -- it's money well spent.\r\n\r\nThe last thing we need to do is add an SSH key to our account. If we don't Digital Ocean will email our root password in a plain text email. Yikes.\r\n\r\nIf you need to generate some SSH keys, here's a short guide, [How to Generate SSH keys](/blog/2015/03/set-up-ssh-keys-secure-logins). You can skip step 3 in that guide. Once you've got your keys set up on your local machine you just need to add them to your droplet.\r\n\r\nIf you're on OS X, you can use this command to copy your public key to the clipboard:\r\n\r\n~~~~console\r\npbcopy < ~/.ssh/id_rsa.pub\r\n~~~~\r\n\r\nOtherwise you can use cat to print it out and copy it:\r\n\r\n~~~~console\r\ncat ~/.ssh/id_rsa.pub\r\n~~~~\r\n\r\nNow click the button to \"add an SSH key\". Then paste the contents of your clipboard into the box. Hit \"add SSH Key\" and you're done.\r\n\r\nNow just click the giant \"Create Droplet\".\r\n\r\nCongratulations you just deployed your first VPS server.\r\n\r\n## Secure Your VPS\r\n\r\nNow we can log in to our new VPS with this code:\r\n\r\n~~~~console\r\nssh root@127.87.87.87\r\n~~~~\r\n\r\nThat will cause SSH to ask if you want to add the server to list of known hosts. Say yes and then on OS X you'll get a dialog asking for the passphrase you created a minute ago when you generate your SSH key. Enter it, check the box to save it to your keychain so you don't have to enter it again.\r\n\r\nAnd you're now logged in to your VPS as root. That's not how we want to log in though since root is a very privileged user that can wreak all sorts of havoc. The first thing we'll do is change the password of the root user. To do that, just enter:\r\n\r\n~~~~console\r\npasswd\r\n~~~~\r\n\r\nAnd type a new password.\r\n\r\nNow let's create a new user:\r\n\r\n~~~~console\r\nadduser myusername\r\n~~~~\r\n\r\nGive your username a secure password and then enter this command:\r\n\r\n~~~~console\r\nvisudo\r\n~~~~\r\n\r\nIf you get an error saying that there is no app installed, you'll need to first install sudo (`apt-get install sudo` on Debian, which does not ship with sudo). That will open a file. Use the arrow key to move the cursor down to the line that reads:\r\n\r\n~~~~vim\r\nroot ALL=(ALL:ALL) ALL\r\n~~~~\r\n\r\nNow add this line:\r\n\r\n~~~~vim\r\nmyusername ALL=(ALL:ALL) ALL\r\n~~~~\r\n\r\nWhere myusername is the username you created just a minute ago. Now we need to save the file. To do that hit Control-X, type a Y and then hit return.\r\n\r\nNow, **WITHOUT LOGGING OUT OF YOUR CURRENT ROOT SESSION** open another terminal window and make sure you can login with your new user:\r\n\r\n~~~~console\r\nssh myusername@12.34.56.78\r\n~~~~\r\n\r\nYou'll be asked for the password that we created just a minute ago on the server (not the one for our SSH key). Enter that password and you should be logged in. To make sure we can get root access when we need it, try entering this command:\r\n\r\n~~~~console\r\nsudo apt-get update\r\n~~~~\r\n\r\nThat should ask for your password again and then spit out a bunch of information, all of which you can ignore for now.\r\n\r\nOkay, now you can log out of your root terminal window. To do that just hit Control-D.\r\n\r\n## Finishing Up\r\n\r\nWhat about actually accessing our VPS on the web? Where's WordPress? 
Just point your browser to the bare IP address you used to log in and you should get the first screen of the WordPress installer.\r\n\r\nWe now have a VPS deployed and we've taken some very basic steps to secure it. We can do a lot more to make things more secure, but I've covered that in a separate article:\r\n\r\nOne last thing: the user we created does not have access to our SSH keys, we need to add them. First make sure you're logged out of the server (type Control-D and you'll get a message telling you the connection has been closed). Now, on your local machine paste this command:\r\n\r\n~~~~console\r\ncat ~/.ssh/id_rsa.pub | ssh myusername@45.63.48.114 \"mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys\"\r\n~~~~\r\n\r\nYou'll have to put in your password one last time, but from now on you can login via SSH.\r\n\r\n## Next Steps\r\n\r\nCongratulations you made it past the first hurdle, you're well on your way to taking control over your server. Kick back, relax and write some blog posts.\r\n\r\nWrite down any problems you had with this tutorial and send me a link so I can check out your blog (I'll try to help figure out what went wrong too).\r\n\r\nBecause we used a pre-built image from Digital Ocean though we're really not much better off than if we went with shared hosting, but that's okay, you have to start somewhere. Next up we'll do the same things, but this time create a bare OS which will serve as the basis for a custom built version of Nginx that's highly optimized and way faster than any stock server.\r\n\r\n", "pub_date": "2015-03-31T20:45:50", "last_updated": "2015-10-31T09:29:49.569", "enable_comments": true, "has_code": true, "status": 1, "meta_description": "Still using shared hosting? It's 2015, time to set up your own VPS. Here's a guide to launching your first VPS on Digital Ocean or Vultr.", "template_name": 0, "topics": [3]}}, {"model": "src.entry", "pk": 4, "fields": {"title": "Setup SSH Keys for Secure Logins", "slug": "ssh-keys-secure-logins", "body_html": "

SSH keys are an easier, more secure way of logging into your virtual private server via SSH. Passwords are vulnerable to brute force attacks and just plain guessing. Key-based authentication is (currently) much more difficult to brute force and, when combined with a password on the key, provides a secure way of accessing your VPS instances from anywhere.

\n

Key-based authentication uses two keys, the first is the \"public\" key that anyone is allowed to see. The second is the \"private\" key that only you ever see. So to log in to a VPS using keys we need to create a pair -- a private key and a public key that matches it -- and then securely upload the public key to our VPS instance. We'll further protect our private key by adding a password to it.

\n

Open up your terminal application. On OS X, that's Terminal, which is in the Applications >> Utilities folder. If you're using Linux I'll assume you know where the terminal app is, and Windows fans can follow along after installing Cygwin.

\n

Here's how to generate SSH keys in three simple steps.

\n

Setup SSH for More Secure Logins

\n

Step 1: Check for SSH Keys

\n

Cut and paste this line into your terminal to check and see if you already have any SSH keys:

\n
ls -al ~/.ssh\n
\n\n\n

If you see output like this, then skip to Step 3:

\n
id_dsa.pub\nid_ecdsa.pub\nid_ed25519.pub\nid_rsa.pub\n
\n\n\n

Step 2: Generate an SSH Key

\n

Here's the command to create a new SSH key. Just cut and paste, but be sure to put in your own email address in quotes:

\n
ssh-keygen -t rsa -C "your_email@example.com"\n
\n\n\n

This will start a series of questions; just hit enter to accept the default choice for all of them, including the last one, which asks where to save the file.

\n

Then it will ask for a passphrase; pick a good, long one. And don't worry, you won't need to enter this every time; there's something called ssh-agent that will ask for your passphrase and then store it for you for the duration of your session (i.e. until you restart your computer).

\n
Enter passphrase (empty for no passphrase): [Type a passphrase]\nEnter same passphrase again: [Type passphrase again]\n
\n\n\n

Once you've put in the passphrase, SSH will spit out a \"fingerprint\" that looks a bit like this:

\n
# Your identification has been saved in /Users/you/.ssh/id_rsa.\n# Your public key has been saved in /Users/you/.ssh/id_rsa.pub.\n# The key fingerprint is:\n# d3:50:dc:0f:f4:65:29:93:dd:53:c2:d6:85:51:e5:a2 scott@longhandpixels.net\n
\n\n\n
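
If your desktop environment doesn't start ssh-agent for you, you can load the key into an agent manually; a rough sketch, assuming the default key location:

\n
eval "$(ssh-agent -s)"\nssh-add ~/.ssh/id_rsa\n
\n\n\n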

Step 3: Copy Your Public Key to Your VPS

\n

If you have ssh-copy-id installed on your system you can use this line to transfer your keys:

\n
ssh-copy-id user@123.45.56.78\n
\n\n\n

If that doesn't work, you can paste in the keys using SSH:

\n
cat ~/.ssh/id_rsa.pub | ssh user@12.34.56.78 "mkdir -p ~/.ssh && cat >>  ~/.ssh/authorized_keys"\n
\n\n\n

Whichever you use, you should get a message like this:

\n
The authenticity of host '12.34.56.78 (12.34.56.78)' can\u2019t be established.\nRSA key fingerprint is 01:3b:ca:85:d6:35:4d:5f:f0:a2:cd:c0:c4:48:86:12.\nAre you sure you want to continue connecting (yes/no)? yes\nWarning: Permanently added '12.34.56.78' (RSA) to the list of known hosts.\nusername@12.34.56.78's password: \n
\n\n\n

Now try logging into the machine with ssh username@12.34.56.78, and check the contents of:

\n
~/.ssh/authorized_keys\n
\n\n\n

to make sure no keys you weren't expecting have been added.

\n

Now log in to your VPS with ssh like so:

\n
ssh username@12.34.56.78\n
\n\n\n

And you won't be prompted for a password by the server. You will, however, be prompted for the passphrase you used to encrypt your SSH key. You'll need to enter that passphrase to unlock your SSH key, but ssh-agent should store that for you so you only need to re-enter it when you logout or restart your computer.

\n

And there you have it: secure, key-based logins for your VPS.

\n

Bonus: SSH config

\n

If you'd rather not type ssh myuser@12.34.56.78 all the time you can add that host to your SSH config file and refer to it by hostname.

\n

The SSH config file lives in ~/.ssh/config. This command will either open that file if it exists or create it if it doesn't:

\n
nano ~/.ssh/config\n
\n\n\n

Now we need to create a host entry. Here's what mine looks like:

\n
Host myname  \n  Hostname 12.34.56.78\n  user myvpsusername\n  #Port 24857 #if you set a non-standard port uncomment this line\n  CheckHostIP yes\n  TCPKeepAlive yes\n
\n\n\n

Then to log in all I need to do is type ssh myname. This is even more helpful when using scp since you can skip the whole username@server and just type scp myname:/home/myuser/somefile.txt . to copy a file.

", "body_markdown": "SSH keys are an easier, more secure way of logging into your virtual private server via SSH. Passwords are vulnerable to brute force attacks and just plain guessing. Key-based authentication is (currently) much more difficult to brute force and, when combined with a password on the key, provides a secure way of accessing your VPS instances from anywhere.\r\n\r\nKey-based authentication uses two keys, the first is the \"public\" key that anyone is allowed to see. The second is the \"private\" key that only you ever see. So to log in to a VPS using keys we need to create a pair -- a private key and a public key that matches it -- and then securely upload the public key to our VPS instance. We'll further protect our private key by adding a password to it.\r\n\r\nOpen up your terminal application. On OS X, that's Terminal, which is in Applications >> Utilities folder. If you're using Linux I'll assume you know where the terminal app is and Windows fans can follow along after installing [Cygwin](http://cygwin.com/).\r\n\r\nHere's how to generate SSH keys in three simple steps.\r\n\r\n\r\n## Setup SSH for More Secure Logins\r\n\r\n### Step 1: Check for SSH Keys\r\n\r\nCut and paste this line into your terminal to check and see if you already have any SSH keys:\r\n\r\n~~~~console\r\nls -al ~/.ssh\r\n~~~~\r\n\r\nIf you see output like this, then skip to Step 3:\r\n\r\n~~~~console\r\nid_dsa.pub\r\nid_ecdsa.pub\r\nid_ed25519.pub\r\nid_rsa.pub\r\n~~~~\r\n\r\n### Step 2: Generate an SSH Key\r\n\r\nHere's the command to create a new SSH key. Just cut and paste, but be sure to put in your own email address in quotes:\r\n\r\n~~~~console\r\nssh-keygen -t rsa -C \"your_email@example.com\"\r\n~~~~\r\n\r\nThis will start a series of questions, just hit enter to accept the default choice for all of them, including the last one which asks where to save the file.\r\n\r\nThen it will ask for a passphrase, pick a good long one. And don't worry you won't need to enter this every time, there's something called `ssh-agent` that will ask for your passphrase and then store it for you for the duration of your session (i.e. until you restart your computer).\r\n\r\n~~~~console\r\nEnter passphrase (empty for no passphrase): [Type a passphrase]\r\nEnter same passphrase again: [Type passphrase again]\r\n~~~~\r\n\r\nOnce you've put in the passphrase, SSH will spit out a \"fingerprint\" that looks a bit like this:\r\n\r\n~~~~console\r\n# Your identification has been saved in /Users/you/.ssh/id_rsa.\r\n# Your public key has been saved in /Users/you/.ssh/id_rsa.pub.\r\n# The key fingerprint is:\r\n# d3:50:dc:0f:f4:65:29:93:dd:53:c2:d6:85:51:e5:a2 scott@longhandpixels.net\r\n~~~~\r\n\r\n### Step 3 Copy Your Public Key to your VPS\r\n\r\nIf you have ssh-copy-id installed on your system you can use this line to transfer your keys:\r\n\r\n~~~~console\r\nssh-copy-id user@123.45.56.78\r\n~~~~\r\n\r\nIf that doesn't work, you can paste in the keys using SSH:\r\n\r\n~~~.language-bash\r\ncat ~/.ssh/id_rsa.pub | ssh user@12.34.56.78 \"mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys\"\r\n~~~\r\n\r\nWhichever you use you should get a message like this:\r\n\r\n~~~~console\r\nThe authenticity of host '12.34.56.78 (12.34.56.78)' can\u2019t be established.\r\nRSA key fingerprint is 01:3b:ca:85:d6:35:4d:5f:f0:a2:cd:c0:c4:48:86:12.\r\nAre you sure you want to continue connecting (yes/no)? 
yes\r\nWarning: Permanently added '12.34.56.78' (RSA) to the list of known hosts.\r\nusername@12.34.56.78's password: \r\n~~~~\r\n\r\nNow try logging into the machine, with `ssh username@12.34.56.78`, and check in:\r\n\r\n~~~~console\r\n~/.ssh/authorized_keys\r\n~~~~\r\n\r\nto make sure we haven't added extra keys that you weren't expecting.\r\n\r\nNow log in to your VPS with ssh like so:\r\n\r\n~~~~console\r\nssh username@12.34.56.78\r\n~~~~\r\n\r\nAnd you won't be prompted for a password by the server. You will, however, be prompted for the passphrase you used to encrypt your SSH key. You'll need to enter that passphrase to unlock your SSH key, but ssh-agent should store that for you so you only need to re-enter it when you logout or restart your computer.\r\n\r\nAnd there you have it, secure, key-based log-ins for your VPS.\r\n\r\n### Bonus: SSH config\r\n\r\nIf you'd rather not type `ssh myuser@12.34.56.78` all the time you can add that host to your SSH config file and refer to it by hostname. \r\n\r\nThe SSH config file lives in `~/.ssh/config`. This command will either open that file if it exists or create it if it doesn't:\r\n\r\n~~~~console\r\nnano ~/.ssh/config\r\n~~~~\r\n\r\nNow we need to create a host entry. Here's what mine looks like:\r\n\r\n~~~~ini\r\nHost myname \r\n Hostname 12.34.56.78\r\n user myvpsusername\r\n #Port 24857 #if you set a non-standard port uncomment this line\r\n CheckHostIP yes\r\n TCPKeepAlive yes\r\n~~~~\r\n\r\nThen to login all I need to do is type `ssh myname`. This is even more helpful when using `scp` since you can skip the whole username@server and just type: `scp myname:/home/myuser/somefile.txt .` to copy a file.\r\n", "pub_date": "2015-03-21T20:49:26", "last_updated": "2015-10-31T09:27:11.481", "enable_comments": true, "has_code": true, "status": 1, "meta_description": "How to set up SSH keys for more secure logins to your VPS.", "template_name": 0, "topics": [3]}}, {"model": "src.entry", "pk": 5, "fields": {"title": "How My Two-Year-Old Twins Made Me a Better Programmer", "slug": "better", "body_html": "

TL;DR version: \"information != knowledge; knowledge != wisdom; wisdom != experience;\"

\n

I have two-year-old twins. Every day I watch them figure out more about the world around them. Whether that's how to climb a little higher, how to put on a t-shirt, where to put something when you're done with it, or what to do with these crazy strings hanging off your shoes.

\n

It can be incredibly frustrating to watch them struggle with something new and fail. They're your children so your instinct is to step in and help. But if you step in and do everything for them they never figure out how to do any of it on their own. I've learned to wait until they ask for help.

\n

Watching them struggle and learn has made me realize that I don't let myself struggle enough and my skills are stagnating because of it. I'm happy to let Google step in and solve all my problems for me. I get work done, true, but at the expense of learning new things.

\n

I've started to think of this as the Stack Overflow problem, not because I actually blame Stack Overflow -- it's a great resource, the problem is mine -- but because it's emblematic of a problem. I use StackOverflow, and Google more generally, as a crutch, as a way to quickly solve problems with some bit of information rather than digging deeper and turning information into actual knowledge.

\n

On one hand quick solutions can be a great thing. Searching the web lets me solve my problem and move on to the next (potentially more interesting) one.

\n

On the other hand, information (the solution to the problem at hand) is not as useful as knowledge. Snippets of code and other tiny bits of information are not going to land you a job, nor will they help you when you want to write a tutorial or a book about something. This sort of \"let's just solve the problem\" approach begins and ends in the task at hand. The information you get out of that is useful for the task you're doing, but knowledge is much larger than that. And I don't know about you, but I want to be more than something that's useful for finishing tasks.

\n

Information is useless to me if it isn't synthesized into personal knowledge somehow. And, for me at least, that information only becomes knowledge when I stop, back up and try to understand the why rather than just the how. Good answers on Stack Overflow explain the why, but more often than not this doesn't happen.

\n

For example, today I wanted a simple way to get Python's os.listdir to ignore directories. I knew that I could loop through all the returned elements and test if they were directories, but I thought perhaps there was a more elegant way of doing that (short answer: not really). The details of my problem aren't the point though; the point is that the question had barely formed in my mind and I noticed my fingers already headed for command-tab, ready to jump to the browser and cut and paste some solution from Stack Overflow.

\n
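
For what it's worth, the looping approach I already knew about only takes a couple of lines; something along these lines, with a placeholder path:

\n
import os\n\npath = "/some/example/path"\n# keep only regular files, skipping directories\nfiles = [f for f in os.listdir(path) if not os.path.isdir(os.path.join(path, f))]\n
\n\n\n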

This time though I stopped myself before I pulled up my browser. I thought about my daughters in the next room. I knew that I would likely have the answer to my question in 10 seconds and also knew I would forget it and move on in 20. I was about to let easy answers step in and solve my problem for me. I was about to avoid learning something new. Sometimes that's fine, but do it too much and I'm worried I might become more of a successful cut-and-paster than a struggling programmer.

\n

Sometimes it's good to take a few minutes to read the actual docs, pull up the man pages, type :help or whatever and learn. It's going to take a few extra minutes. You might even take an unexpected detour from the task at hand. That might mean you learn something you didn't expect to learn. Yes, it might mean you lose a few minutes of \"work\" to learn. It might even mean that you fail. Sometimes the docs don't help. Then, sure, Google. The important part of learning is to struggle, to apply your energy to the problem rather than just finding the solution.

\n

Sometimes you need to struggle with your shoelaces for hours, otherwise you'll never figure out how to tie them.

\n

In my specific case I decided to permanently reduce my dependency on Stack Overflow and Google. Instead of flipping to the browser I fired up the Python interpreter and typed help(os.listdir). Did you know the Python interpreter has a built-in help function called, appropriately enough, help()? The help() function takes either an object or a keyword (the latter needs to be in quotes like \"keyword\"). If you're having trouble I wrote a quick guide to making Python's built-in help() function work.

\n

Now, I could have learned what I wanted to know in 2 seconds using Google. Instead it took me 20 minutes[1] to figure out. But now I understand how to do what I wanted to do and, more importantly, I understand why it will work. I have a new piece of knowledge and next time I encounter the same situation I can draw on my knowledge rather than turning to Google again. It's not exactly wisdom or experience yet, but it's getting closer. And when you're done solving all the little problems of day-to-day coding that's really the point -- improving your skill, learning and getting better at what you do every day.

\n
\n
\n
    \n
  1. \n

    Most of that time was spent figuring out where OS X stores Python docs, which I won't have to do again. Note to self, I gotta switch back to Linux. 

    \n
  2. \n
\n
", "body_markdown": "\r\nTL;DR version: \"information != knowledge; knowledge != wisdom; wisdom != experience;\"\r\n\r\nI have two-year-old twins. Every day I watch them figure out more about the world around them. Whether that's how to climb a little higher, how to put on a t-shirt, where to put something when you're done with it, or what to do with these crazy strings hanging off your shoes.\r\n\r\nIt can be incredibly frustrating to watch them struggle with something new and fail. They're your children so your instinct is to step in and help. But if you step in and do everything for them they never figure out how to do any of it on their own. I've learned to wait until they ask for help.\r\n\r\nWatching them struggle and learn has made me realize that I don't let myself struggle enough and my skills are stagnating because of it. I'm happy to let Google step in and solve all my problems for me. I get work done, true, but at the expense of learning new things.\r\n\r\nI've started to think of this as the Stack Overflow problem, not because I actually blame Stack Overflow -- it's a great resource, the problem is mine -- but because it's emblematic of a problem. I use StackOverflow, and Google more generally, as a crutch, as a way to quickly solve problems with some bit of information rather than digging deeper and turning information into actual knowledge.\r\n\r\nOn one hand quick solutions can be a great thing. Searching the web lets me solve my problem and move on to the next (potentially more interesting) one.\r\n\r\nOn the other hand, information (the solution to the problem at hand) is not as useful as knowledge. Snippets of code and other tiny bits of information are not going to land you job, nor will they help you when you want to write a tutorial or a book about something. This sort of \"let's just solve the problem\" approach begins and ends in the task at hand. The information you get out of that is useful for the task you're doing, but knowledge is much larger than that. And I don't know about you, but I want to be more than something that's useful for finishing tasks.\r\n\r\nInformation is useless to me if it isn't synthesized into personal knowledge somehow. And, for me at least, that information only becomes knowledge when I stop, back up and try to understand the *why* rather than than just the *how*. Good answers on Stack Overflow explain the why, but more often than not this doesn't happen.\r\n\r\nFor example, today I wanted a simple way to get python's `os.listdir` to ignore directories. I knew that I could loop through all the returned elements and test if they were directories, but I thought perhaps there was a more elegant way to doing that (short answer, not really). The details of my problem aren't the point though, the point is that the question had barely formed in my mind and I noticed my fingers already headed for command tab, ready to jump the browser and cut and paste some solution from Stack Overflow.\r\n\r\nThis time though I stopped myself before I pulled up my browser. I thought about my daughters in the next room. I knew that I would likely have the answer to my question in 10 seconds and also knew I would forget it and move on in 20. I was about to let easy answers step in and solve my problem for me. I was about to avoid learning something new. 
Sometimes that's fine, but do it too much and I'm worried I might be more of a successful cut-and-paster than struggling programmer.\r\n\r\nSometimes it's good to take a few minutes to read the actual docs, pull up the man pages, type `:help` or whatever and learn. It's going to take a few extra minutes. You might even take an unexpected detour from the task at hand. That might mean you learn something you didn't expect to learn. Yes, it might mean you lose a few minutes of \"work\" to learn. It might even mean that you fail. Sometimes the docs don't help. The sure, Google. The important part of learning is to struggle, to apply your energy to the problem rather than finding to solution.\r\n\r\nSometimes you need to struggle with your shoelaces for hours, otherwise you'll never figure out how to tie them.\r\n\r\nIn my specific case I decided to permanently reduce my dependency on Stack Overflow and Google. Instead of flipping to the browser I fired up the Python interpreter and typed `help(os.listdir)`. Did you know the Python interpreter has a built-in help function called, appropriately enough, `help()`? The `help()` function takes either an object or a keyword (the latter needs to be in quotes like \"keyword\"). If you're having trouble I wrote a quick guide to [making Python's built-in `help()` function work][1].\r\n\r\nNow, I could have learned what I wanted to know in 2 seconds using Google. Instead it took me 20 minutes[^1] to figure out. But now I understand how to do what I wanted to do and, more importantly, I understand *why* it will work. I have a new piece of knowledge and next time I encounter the same situation I can draw on my knowledge rather than turning to Google again. It's not exactly wisdom or experience yet, but it's getting closer. And when you're done solving all the little problems of day-to-day coding that's really the point -- improving your skill, learning and getting better at what you do every day.\r\n\r\n[^1]: Most of that time was spent figuring out where OS X stores Python docs, which [I won't have to do again][1]. Note to self, I gotta switch back to Linux.\r\n\r\n[1]: /src/get-smarter-pythons-built-in-help\r\n", "pub_date": "2014-08-05T20:55:13", "last_updated": "2015-10-29T21:40:54.900", "enable_comments": true, "has_code": false, "status": 1, "meta_description": "To get better at something you have to struggle. Without struggle you'll never turn information into knowledge or understanding.", "template_name": 0, "topics": [4]}}, {"model": "src.entry", "pk": 6, "fields": {"title": "Get Smarter with Python's Built-In Help", "slug": "python-help", "body_html": "

One of my favorite things about Python is the help() function. Fire up the standard Python interpreter, import help from pydoc, and you can search Python's official documentation from within the interpreter. Reading the f'ing manual from the interpreter. As it should be[1].

\n

The help() function takes either an object or a keyword. The former must be imported first while the latter needs to be a string like \"keyword\". Whichever you use Python will pull up the standard Python docs for that object or keyword. Type help() without anything and you'll start an interactive help session.

\n
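
A quick sketch of both forms, using the os module as an example:

\n
>>> import os\n>>> help(os.listdir)   # pass an object\n>>> help("if")         # pass a keyword as a string\n>>> help()             # start an interactive help session\n
\n\n\n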

The help() function is awesome, but there's one little catch.

\n

In order for this to work properly you need to make sure you have the PYTHONDOCS environment variable set on your system. On a sane operating system this will likely be in '/usr/share/doc/pythonX.X/html'. In mostly sane OSes like Debian (and probably Ubuntu/Mint, et al) you might have to explicitly install the docs with apt-get install python-doc, which will put the docs in /usr/share/doc/pythonX.X-doc/html/.

\n

If you're using OS X's built-in Python, the path to Python's docs would be:

\n
/System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/Resources/English.lproj/Documentation/\n
\n\n\n

Note the 2.6 in that path. As far as I can tell OS X Mavericks does not ship with docs for Python 2.7, which is weird and annoying (like most things in Mavericks). If it's there and you've found it, please enlighten me in the comments below.

\n

Once you've found the documentation you can add that variable to your bash/zshrc like so:

\n
export PYTHONDOCS=/System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/Resources/English.lproj/Documentation/\n
\n\n\n

Now fire up iPython, type help() and start learning rather than always hobbling along with Google, Stack Overflow and other crutches.

\n

Also, PSA. If you do anything with Python, you really need to check out iPython. It will save you loads of time, has more awesome features than a Veg-O-Matic and notebooks, don't even get me started on notebooks. And in iPython you don't even have to import help, it's already there, ready to go from the minute it starts.

\n
\n
\n
    \n
  1. \n

    The Python docs are pretty good too. Not Vim-level good, but close. 

    \n
  2. \n
\n
", "body_markdown": "\r\nOne of my favorite things about Python is the `help()` function. Fire up the standard Python interpreter, and import `help` from `pydoc` and you can search Python's official documentation from within the interpreter. Reading the f'ing manual from the interpreter. As it should be[^1].\r\n\r\nThe `help()` function takes either an object or a keyword. The former must be imported first while the latter needs to be a string like \"keyword\". Whichever you use Python will pull up the standard Python docs for that object or keyword. Type `help()` without anything and you'll start an interactive help session.\r\n\r\nThe `help()` function is awesome, but there's one little catch.\r\n\r\nIn order for this to work properly you need to make sure you have the `PYTHONDOCS` environment variable set on your system. On a sane operating system this will likely be in '/usr/share/doc/pythonX.X/html'. In mostly sane OSes like Debian (and probably Ubuntu/Mint, et al) you might have to explicitly install the docs with `apt-get install python-doc`, which will put the docs in `/usr/share/doc/pythonX.X-doc/html/`.\r\n\r\nIf you're using OS X's built-in Python, the path to Python's docs would be:\r\n\r\n~~~~console\r\n/System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/Resources/English.lproj/Documentation/\r\n~~~~\r\n\r\nNote the 2.6 in that path. As far as I can tell OS X Mavericks does not ship with docs for Python 2.7, which is weird and annoying (like most things in Mavericks). If it's there and you've found it, please enlighten me in the comments below.\r\n\r\nOnce you've found the documentation you can add that variable to your bash/zshrc like so:\r\n\r\n~~~~console\r\nexport PYTHONDOCS=/System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/Resources/English.lproj/Documentation/\r\n~~~~\r\n\r\nNow fire up iPython, type `help()` and start learning rather than always hobbling along with [Google, Stack Overflow and other crutches](/src/better).\r\n\r\nAlso, PSA. If you do anything with Python, you really need to check out [iPython](http://ipython.org/). It will save you loads of time, has more awesome features than a Veg-O-Matic and [notebooks](http://ipython.org/notebook.html), don't even get me started on notebooks. And in iPython you don't even have to import help, it's already there, ready to go from the minute it starts.\r\n\r\n[^1]: The Python docs are pretty good too. Not Vim-level good, but close.\r\n", "pub_date": "2014-08-01T20:56:57", "last_updated": "2015-10-31T09:25:24.757", "enable_comments": true, "has_code": true, "status": 1, "meta_description": "Python has great docs, here's how to use them.", "template_name": 0, "topics": [4]}}, {"model": "src.entry", "pk": 7, "fields": {"title": "Protect Your Online Privacy with Ghostery", "slug": "protect-your-online-privacy-ghostery", "body_html": "

There's an invisible web that lies just below the web you see every day. That invisible web is tracking the sites you visit, the pages you read, the things you like, the things you favorite, and collating all that data into a portrait of things you are likely to purchase. And all this happens without anyone asking your consent.

\n

Not much has changed since I wrote about online tracking years ago on Webmonkey. Back then visiting five websites meant \"somewhere between 21 and 47 other websites learn about your visit to those five\". That number just continues to grow.

\n

If that doesn't bother you, and you could not care less who is tracking you, then this is not the tutorial for you.

\n

However, if the extent of online tracking bothers you and you want to do something about it, there is some good news. In fact it's not that hard to stop all that tracking.

\n

To protect your privacy online you'll just need to add a tool like Ghostery or Do Not Track Plus to your web browser. Both will work, but I happen to use Ghostery so that's what I'm going to show you how to set up.

\n

Install and Set Up Ghostery in Firefox, Chrome/Chromium, Opera and Safari.

\n

The first step is to install the Ghostery extension for your web browser. To do that, just head over to the Ghostery downloads page and click the install button that's highlighted for your browser.

\n

Some browsers will ask you if you want to allow the add-on to be installed. In Firefox just click \"Allow\" and then click \"Install Now\" when the installation window opens up.

\n
\n
\"Installing
\n
In Firefox click Allow...
\n
\"Installing
\n
...and then Install Now
\n
\n

If you're using Chrome just click the Add button.

\n
\n
\"Installing
\n
Installing extensions in Chrome/Chromium
\n
\n

Ghostery is now installed, but out of the box Ghostery doesn't actually block anything. That's why, once you have it installed, Ghostery should have opened a new window or tab that looks like this:

\n
\n
\"The
\n
The Ghostery install wizard
\n
\n

This is the series of screens that walk you through the process of setting up Ghostery to block sites that would like to track you.

\n

Before I dive into setting up Ghostery, it's important to understand that some of what Ghostery can block will limit what you see on the web. For example, Disqus is a very popular third-party comment system. It happens to track you as well. If you block that tracking though you won't see comments on a lot of sites.

\n

There are two ways around this. One is to decide that you trust Disqus and allow it to run on any site. The second is to only allow Disqus on sites where you want to read the comments. I'll show you how to set up both options.

\n

Configuring Ghostery

\n

First we have to configure Ghostery. Click the right arrow on that first screen to get started. That will lead you to this screen:

\n
\n
\"The
\n
The Ghostery install wizard, page 2
\n
\n

If you want to help Ghostery get better you can check this box. Then click the right arrow again and you'll see a page asking if you want to enable the Alert Bubble.

\n
\n
\"The
\n
The Ghostery install wizard, page 3
\n
\n

This is Ghostery's little alert box that comes up when you visit a new page. It will show you all the trackers that are blocked. Think of this as a little window into the invisible web. I enable this, though I change the default settings a little bit. We'll get to that in just a second.

\n

The next screen is the core of Ghostery. This is where we decide which trackers to block and which to allow.

\n
\n
\"The
\n
The Ghostery install wizard -- blocking trackers
\n
\n

Out of the box Ghostery blocks nothing. Let's change that. I start by blocking everything:

\n
\n
\"Ghostery
\n
Ghostery set to block all known trackers
\n
\n

Ghostery will also ask if you want to block new trackers as it learns about them. I go with yes.

\n

Now chances are the setup we currently have is going to limit your ability to use some websites. To stick with the earlier example, this will mean Disqus comments are never loaded. The easiest way to fix this is to search for Disqus and enable it:

\n
\n
\"Ghostery
\n
Ghostery set to block everything but Disqus
\n
\n

Note that along the top of the tracker list there are some buttons. These make it easy to enable, for example, not just Disqus but every commenting system. If you'd like to do that, click the \"Commenting System\" button and uncheck all the options:

\n
\n
\"Filtering
\n
Filtering Ghostery by type of tracker
\n
\n

Another category of things you might want to allow is music players like those from SoundCloud. To learn more about a particular service, just click the link next to the item and Ghostery will show you what it knows, including any industry affiliations.

\n
\n
\"Ghostery
\n
Ghostery showing details on Disqus
\n
\n

Now you may be thinking, wait, how do I know which companies I want to allow and which I don't? Well, you don't really need to know all of them because you can enable them as you go too.

\n

Let's save what we have and test Ghostery out on a site. Click the right arrow one last time and check to make sure that the Ghostery icon is in your toolbar. If it isn't, you can click the \"Add Button\" button.

\n

Ghostery in Action

\n

Okay, Ghostery is installed and blocking almost everything it knows about. But that might limit what we can do. For example, let's go visit arstechnica.com. You can see down here at the bottom of the screen there's a list of everything that's blocked.

\n
\n
\"Ghostery
\n
Ghostery showing all the trackers no longer tracking you
\n
\n

You can see in that list that right now the Twitter button is blocked. So if you scroll down to the bottom of the article and look at the author bio (which should have a Twitter button) you'll see this little Ghostery icon:

\n
\n
\"Ghostery
\n
Ghostery replaces elements it has blocked with the Ghostery icon.
\n
\n

That's how you will know that Ghostery has blocked something. If you were to click on that element Ghostery would load the blocked script and you'd see a Twitter button. But what if you always want to see the Twitter button? To do that we'll come up to the toolbar and click on the Ghostery icon which will reveal the blocking menu:

\n
\n
\"The
\n
The Ghostery panel.
\n
\n

Just slide the Twitter button to the left and Twitter's button (and accompanying tracking beacons) will be allowed after you reload the page. Whenever you return to Ars, the Twitter button will load. As I mentioned before, you can do this on a per-site basis if there are just a few sites you want to allow. To enable the Twitter button on every site, click the little check box button to the right of the slider. Realize, though, that enabling it globally will mean Twitter can track you everywhere you go.

\n
\n
\"Enabling
\n
Enabling trackers from the Ghostery panel.
\n
\n

This panel is essentially doing the same thing as the setup page we used earlier. In fact, we can get back to the settings page by clicking the gear icon and then the \"Options\" button:

\n
\n
\"Getting
\n
Getting back to the Ghostery setting page.
\n
\n

Now, you may have noticed that the little purple panel showing you what was blocked hung around for quite a while, fifteen seconds to be exact, which is a bit long in my opinion. We can change that by clicking the Advanced tab on the Ghostery options page:

\n
\n
\"Getting
\n
Getting back to the Ghostery setting page.
\n
\n

The first option in the list is whether or not to show the alert bubble at all, followed by the length of time it's shown. I like to set this to the minimum, 3 seconds. Other than this I leave the advanced settings at their defaults.

\n

Scroll to the bottom of the settings page, click save, and you're done setting up Ghostery.

\n

Conclusion

\n

Now you can browse the web with a much greater degree of privacy, only allowing those companies you approve of to know what you're up to. And remember, any time a site isn't working the way you think it should, you can temporarily disable Ghostery by clicking the icon in the toolbar and hitting the pause blocking button down at the bottom of the Ghostery panel:

\n
\n
\"Temporarily
\n
Temporarily disable Ghostery.
\n
\n

Also note that there is an iOS version of Ghostery, though, due to Apple's restrictions on iOS, it's an entirely separate web browser, not a plugin for Mobile Safari. If you use Firefox for Android there is a plugin available.

\n

Further reading:

\n", "body_markdown": "There's an invisible web that lies just below the web you see everyday. That invisible web is tracking the sites you visit, the pages you read, the things you like, the things you favorite and collating all that data into a portrait of things you are likely to purchase. And all this happens without anyone asking your consent.\r\n\r\nNot much has changed since [I wrote about online tracking years ago on Webmonkey][1]. Back then visiting five websites meant \"somewhere between 21 and 47 other websites learn about your visit to those five\". That number just continues to grow.\r\n\r\nIf that doesn't bother you, and you could not care less who is tracking you, then this is not the tutorial for you.\r\n\r\nHowever, if the extent of online tracking bothers you and you want to do something about it, there is some good news. In fact it's not that hard to stop all that tracking.\r\n\r\nTo protect your privacy online you'll just need to add a tool like [Ghostery][2] or [Do Not Track Plus][3] to your web browser. Both will work, but I happen to use Ghostery so that's what I'm going to show you how to set up. \r\n\r\n## Install and Setup Ghostery in Firefox, Chrome/Chromium, Opera and Safari.\r\n\r\nThe first step is to install the Ghostery extension for your web browser. To do that, just head over to the [Ghostery downloads page][4] and click the install button that's highlighted for your browser.\r\n\r\nSome browsers will ask you if you want to allow the add-on to be installed. In Firefox just click \"Allow\" and then click \"Install Now\" when the installation window opens up.\r\n\r\n[![Installing add-ons in Firefox][5]](/media/src/images/2014/gh-firefox-install01.png \"View Image 1\")\r\n: In Firefox click Allow...\r\n\r\n[![Installing add-ons in Firefox 2][6]](/media/src/images/2014/gh-firefox-install02.png \"View Image 2\")\r\n: ...and then Install Now\r\n\r\nIf you're using Chrome just click the Add button. \r\n\r\n[![Installing extensions in Chrome/Chromium][7]](/media/src/images/2014/gh-chrome-install01.jpg \"View Image 3\")\r\n: Installing extensions in Chrome/Chromium\r\n\r\nGhostery is now installed, but out of the box Ghostery doesn't actually block anything. That's why, once you have it installed, Ghostery should have opened a new window or tab that looks like this:\r\n\r\n[![The Ghostery install wizard][8]](/media/src/images/2014/gh-first-screen.jpg \"View Image 4\")\r\n: The Ghostery install wizard\r\n\r\nThis is the series of screens that walk you through the process of setting up Ghostery to block sites that would like to track you. \r\n\r\nBefore I dive into setting up Ghostery, it's important to understand that some of what Ghostery can block will limit what you see on the web. For example, Disqus is a very popular third-party comment system. It happens to track you as well. If you block that tracking though you won't see comments on a lot of sites. \r\n\r\nThere are two ways around this. One is to decide that you trust Disqus and allow it to run on any site. The second is to only allow Disqus on sites where you want to read the comments. I'll show you how to set up both options.\r\n\r\n## Configuring Ghostery\r\n\r\nFirst we have to configure Ghostery. Click the right arrow on that first screen to get started. 
That will lead you to this screen:\r\n\r\n[![The Ghostery install wizard, page 2][9]](/media/src/images/2014/gh-second-screen.jpg \"View Image 5\")\r\n: The Ghostery install wizard, page 2\r\n\r\nIf you want to help Ghostery get better you can check this box. Then click the right arrow again and you'll see a page asking if you want to enable the Alert Bubble.\r\n\r\n[![The Ghostery install wizard, page 3][10]](/media/src/images/2014/gh-third-screen.jpg \"View Image 6\")\r\n: The Ghostery install wizard, page 3\r\n\r\nThis is Ghostery's little alert box that comes up when you visit a new page. It will show you all the trackers that are blocked. Think of this as a little window into the invisible web. I enable this, though I change the default settings a little bit. We'll get to that in just a second.\r\n\r\nThe next screen is the core of Ghostery. This is where we decide which trackers to block and which to allow. \r\n\r\n[![The Ghostery install wizard -- blocking trackers][11]](/media/src/images/2014/gh-main-01.jpg \"View Image 7\")\r\n: The Ghostery install wizard -- blocking trackers\r\n\r\nOut of the box Ghostery blocks nothing. Let's change that. I start by blocking everything:\r\n\r\n[![Ghostery set to block all known trackers][12]](/media/src/images/2014/gh-main-02.jpg \"View Image 8\")\r\n: Ghostery set to block all known trackers\r\n\r\nGhostery will also ask if you want to block new trackers as it learns about them. I go with yes.\r\n\r\nNow chances are the setup we currently have is going to limit your ability to use some websites. To stick with the earlier example, this will mean Disqus comments are never loaded. The easiest way to fix this is to search for Disqus and enable it:\r\n\r\n[![Ghostery set to block everything but Disqus][13]](/media/src/images/2014/gh-main-03.jpg \"View Image 9\")\r\n: Ghostery set to block everything but Disqus\r\n\r\nNote that, along the top of the tracker list there are some buttons. This makes it easy to enable, for example, not just Disqus but every commenting system. If you'd like to do that click the \"Commenting System\" button and uncheck all the options:\r\n\r\n[![Filtering Ghostery by type of tracker][14]](/media/src/images/2014/gh-main-04.jpg \"View Image 10\")\r\n: Filtering Ghostery by type of tracker\r\n\r\nAnother category of things you might want to allow are music players like those from SoundCloud. To learn more about a particular service, just click the link next to the item and Ghostery will show you what it knows, including any industry affiliations.\r\n\r\n[![Ghostery showing details on Disqus][15]](/media/src/images/2014/gh-main-05.jpg \"View Image 11\")\r\n: Ghostery showing details on Disqus\r\n\r\nNow you may be thinking, wait, how do I know which companies I want to allow and which I don't? Well, you don't really need to know all of them because you can enable them as you go too. \r\n\r\nLet's save what we have and test Ghostery out on a site. Click the right arrow one last time and check to make sure that the Ghostery icon is in your toolbar. If it isn't you can click the button \"Add Button\".\r\n\r\n## Ghostery in Action\r\n\r\nOkay, Ghostery is installed and blocking almost everything it knows about. But that might limit what we can do. For example, let's go visit arstechnica.com. You can see down here at the bottom of the screen there's a list of everything that's blocked. 
\r\n\r\n[![Ghostery showing all the trackers no longer tracking you][16]](/media/src/images/2014/gh-example-01.jpg \"View Image 12\")\r\n: Ghostery showing all the trackers no longer tracking you\r\n\r\nYou can see in that list that right now the Twitter button is blocked. So if you scroll down the bottom of the article and look at the author bio (which should have a twitter button) you'll see this little Ghostery icon:\r\n\r\n[![Ghostery replaces elements it has blocked with the Ghostery icon.][17]](/media/src/images/2014/gh-example-02.jpg \"View Image 13\")\r\n: Ghostery replaces elements it has blocked with the Ghostery icon.\r\n\r\nThat's how you will know that Ghostery has blocked something. If you were to click on that element Ghostery would load the blocked script and you'd see a Twitter button. But what if you always want to see the Twitter button? To do that we'll come up to the toolbar and click on the Ghostery icon which will reveal the blocking menu:\r\n\r\n[![The Ghostery panel.][18]](/media/src/images/2014/gh-example-03.jpg \"View Image 14\")\r\n: The Ghostery panel.\r\n\r\nJust slide the Twitter button to the left and Twitter's button (and accompanying tracking beacons) will be allowed after you reload the page. Whenever you return to Ars, the Twitter button will load. As I mentioned before, you can do this on a per-site basis if there are just a few sites you want to allow. To enable the Twitter button on every site, click the little check box button the right of the slider. Realize though, that enabling it globally will mean Twitter can track you everywhere you go.\r\n\r\n[![Enabling trackers from the Ghostery panel.][19]](/media/src/images/2014/gh-example-04.jpg \"view image 15\")\r\n: Enabling trackers from the Ghostery panel.\r\n\r\nThis panel is essentially doing the same thing as the setup page we used earlier. In fact, we can get back the setting page by click the gear icon and then the \"Options\" button:\r\n\r\n[![Getting back to the Ghostery setting page.][20]](/media/src/images/2014/gh-example-05.jpg \"view image 16\")\r\n: Getting back to the Ghostery setting page.\r\n\r\nNow, you may have noticed that the little purple panel showing you what was blocked hung around for quite a while, fifteen seconds to be exact, which is a bit long in my opinion. We can change that by clicking the Advanced tab on the Ghostery options page:\r\n\r\n\r\n[![Getting back to the Ghostery setting page.][21]](/media/src/images/2014/gh-example-06.jpg \"view image 17\")\r\n: Getting back to the Ghostery setting page.\r\n\r\nThe first option in the list is whether or not to show the alert bubble at all, followed by the length of time it's shown. I like to set this to the minimum, 3 seconds. Other than this I leave the advanced settings at their defaults. \r\n\r\nScroll to the bottom of the settings page, click save, and you're done setting up Ghostery.\r\n\r\n## Conclusion\r\n\r\nNow you can browse the web with a much greater degree of privacy, only allowing those companies *you* approve of to know what you're up to. 
And remember, any time a site isn't working the way you think you should, you can temporarily disable Ghostery by clicking the icon in the toolbar and hitting the pause blocking button down at the bottom of the Ghostery panel:\r\n\r\n[![Temporarily disable Ghostery.][22]](/media/src/images/2014/gh-example-07.jpg \"view image 18\")\r\n: Temporarily disable Ghostery.\r\n\r\nAlso note that there is an iOS version of Ghostery, though, due to Apple's restrictions on iOS, it's an entirely separate web browser, not a plugin for Mobile Safari. If you use Firefox for Android there is a plugin available. \r\n\r\n##Further reading:\r\n\r\n\r\n* [How To Install Ghostery (Internet Explorer)][23] -- Ghostery's guide to installing it in Internet Explorer.\r\n* [Secure Your Browser: Add-Ons to Stop Web Tracking][24] -- A piece I wrote for Webmonkey a few years ago that gives some more background on tracking and some other options you can use besides Ghostery. \r\n* [Tracking our online trackers][25] -- TED talk by Gary Kovacs, CEO of Mozilla Corp, covering online behavior tracking more generally. \r\n* This sort of tracking is [coming to the real world too][26], so there's that to look forward to. \r\n\r\n\r\n\r\n\r\n[1]: http://www.webmonkey.com/2012/02/secure-your-browser-add-ons-to-stop-web-tracking/\r\n[2]: https://www.ghostery.com/\r\n[3]: https://www.abine.com/index.html\r\n[4]: https://www.ghostery.com/en/download\r\n[5]: /media/src/images/2014/gh-firefox-install01-tn.jpg\r\n[6]: /media/src/images/2014/gh-firefox-install02-tn.jpg\r\n[7]: /media/src/images/2014/gh-chrome-install01-tn.jpg\r\n[8]: /media/src/images/2014/gh-first-screen-tn.jpg\r\n[9]: /media/src/images/2014/gh-second-screen-tn.jpg\r\n[10]: /media/src/images/2014/gh-third-screen-tn.jpg\r\n[11]: /media/src/images/2014/gh-main-01-tn.jpg\r\n[12]: /media/src/images/2014/gh-main-02-tn.jpg\r\n[13]: /media/src/images/2014/gh-main-03-tn.jpg\r\n[14]: /media/src/images/2014/gh-main-04-tn.jpg\r\n[15]: /media/src/images/2014/gh-main-05-tn.jpg\r\n[16]: /media/src/images/2014/gh-example-01-tn.jpg\r\n[17]: /media/src/images/2014/gh-example-02-tn.jpg\r\n[18]: /media/src/images/2014/gh-example-03-tn.jpg\r\n[19]: /media/src/images/2014/gh-example-04-tn.jpg\r\n[20]: /media/src/images/2014/gh-example-05-tn.jpg\r\n[21]: /media/src/images/2014/gh-example-06-tn.jpg\r\n[22]: /media/src/images/2014/gh-example-07-tn.jpg\r\n[23]: https://www.youtube.com/watch?v=NaI17dSfPRg\r\n[24]: http://www.webmonkey.com/2012/02/secure-your-browser-add-ons-to-stop-web-tracking/\r\n[25]: http://www.ted.com/talks/gary_kovacs_tracking_the_trackers\r\n[26]: http://business.financialpost.com/2014/02/01/its-creepy-location-based-marketing-is-following-you-whether-you-like-it-or-not/?__lsa=e48c-7542\r\n", "pub_date": "2014-05-29T21:00:40", "last_updated": "2015-10-31T09:23:54.160", "enable_comments": true, "has_code": true, "status": 1, "meta_description": "How to install and configure the Ghostery browser add-on for maximum online privacy", "template_name": 0, "topics": [2, 5]}}, {"model": "src.entry", "pk": 8, "fields": {"title": "Install Nginx on Debian/Ubuntu", "slug": "install-nginx-debian", "body_html": "

I recently helped a friend set up his first Nginx server and in the process realized I didn't have a good working reference for how I set up Nginx.

\n

So, for myself, my friend and anyone else looking to get started with Nginx, here's my somewhat opinionated guide to installing and configuring Nginx to serve static files. Which is to say, this is how I install and set up Nginx to serve my own and my clients' static files whether those files are simply stylesheets, images and JavaScript or full static sites like this one. What follows is what I believe are the best practices of Nginx1; if you know better, please correct me in the comments.

\n

[This post was last updated 30 October 2015]

\n

Nginx Beats Apache for Static Content2

\n

Apache is overkill. Unlike Apache, which is a jack-of-all-trades server, Nginx was really designed to do just a few things well, one of which is to offer a simple, fast, lightweight server for static files. And Nginx is really, really good at serving static files. In fact, in my experience Nginx with PageSpeed, gzip, far future expires headers and a couple other extras I'll mention is faster than serving static files from Amazon S33 (potentially even faster in the future if Verizon and its ilk really do start throttling cloud-based services).

\n

Nginx is Different from Apache

\n

In its quest to be lightweight and fast, Nginx takes a different approach to modules than you're probably familiar with in Apache. In Apache you can dynamically load various features using modules. You just add something like LoadModule alias_module modules/mod_alias.so to your Apache config files and just like that Apache loads the alias module.

\n

Unlike Apache, Nginx cannot dynamically load modules. Nginx has available only what was built in when you installed it.

\n
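To make the contrast concrete, here's roughly what that difference looks like in practice (a sketch assuming a stock Debian/Ubuntu Apache install, where the a2enmod helper is available):
\n
# Apache: flip a module on at runtime, no rebuild needed\nsudo a2enmod headers\nsudo service apache2 reload\n# Nginx: that choice is made once, at compile time, e.g.\n#   ./configure --with-http_gzip_static_module\n
\n\n\n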

That means if you really want to customize and tweak it, it's best to install Nginx from source. You don't have to install it from source. But if you really want a screaming fast server, I suggest compiling Nginx yourself, enabling and disabling exactly the modules you need. Installing Nginx from source allows you to add some third-party tools, most notably Google's PageSpeed module, which has some fantastic tools for speeding up your site.

\n

Luckily, installing Nginx from source isn't too difficult. Even if you've never compiled any software from source, you can install Nginx. The remainder of this post will show you exactly how.

\n

My Ideal Nginx Setup for Static Sites

\n

Before we start installing, let's go over the things we'll be using to build a fast, lightweight server with Nginx.

\n\n

So we're going to install Nginx with SPDY support and three third-party modules.

\n

Okay, here's the step-by-step process to installing Nginx on a Debian 8 (or Ubuntu) server. If you're looking for a good, cheap VPS host I've been happy with Vultr.com (that's an affiliate link that will help support luxagraf; if you prefer, here's a non-affiliate link: link)

\n

The first step is to make sure you're installing the latest release of Nginx. To do that, check the Nginx download page for the latest version of Nginx (at the time of writing that's 1.7.7, which is what we'll use below).

\n

Okay, SSH into your server and let's get started.

\n

While these instructions will work on just about any server, the one thing that will be different is how you install the various prerequisites needed to compile Nginx.

\n

On a Debian/Ubuntu server you'd do this:

\n
sudo apt-get -y install build-essential zlib1g-dev libpcre3 libpcre3-dev libbz2-dev libssl-dev tar unzip\n
\n\n\n

If you're using RHEL/Cent/Fedora you'll want these packages:

\n
sudo yum install gcc-c++ pcre-dev pcre-devel zlib-devel make\n
\n\n\n

After you have the prerequisites installed it's time to grab the latest version of Google's Pagespeed module. Google's Nginx PageSpeed installation instructions are pretty good, so I'll reproduce them here.

\n

First grab the latest version of PageSpeed, which is currently 1.9.32.2, but check the sources since it updates frequently and change this first variable to match the latest version.

\n
NPS_VERSION=1.9.32.2\nwget https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}-beta.zip\nunzip release-${NPS_VERSION}-beta.zip\n
\n\n\n

Now, before we compile PageSpeed we need to grab psol, which PageSpeed needs to function properly. So, let's cd into the ngx_pagespeed-release-${NPS_VERSION}-beta folder we just unzipped and grab psol:

\n
cd ngx_pagespeed-release-${NPS_VERSION}-beta/\nwget https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz\ntar -xzvf ${NPS_VERSION}.tar.gz\ncd ../\n
\n\n\n

Alright, so the ngx_pagespeed module is all set up and ready to install. All we have to do at this point is tell Nginx where to find it.

\n

Now let's grab the Headers More and Naxsi modules as well. Again, check the Headers More and Naxsi pages to see what the latest stable version is and adjust the version numbers in the following accordingly.

\n
HM_VERSION=v0.25\nwget https://github.com/agentzh/headers-more-nginx-module/archive/${HM_VERSION}.tar.gz\ntar -xvzf ${HM_VERSION}.tar.gz\nNAX_VERSION=0.53-2\nwget https://github.com/nbs-system/naxsi/archive/${NAX_VERSION}.tar.gz\ntar -xvzf ${NAX_VERSION}.tar.gz\n
\n\n\n

Now that we have all three third-party modules ready to go, the last thing we'll grab is a copy of Nginx itself:

\n
NGINX_VERSION=1.7.7\nwget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz\ntar -xvzf nginx-${NGINX_VERSION}.tar.gz\n
\n\n\n

Then we cd into the Nginx folder and compile. So, first:

\n
cd nginx-${NGINX_VERSION}/\n
\n\n\n

Now that we're inside the Nginx folder, let's configure our installation. We'll add in all our extras and turn off a few things we don't need. Or at least they're things I don't need; if you need the mail modules, then delete those lines. If you don't need SSL, you might want to skip that as well. Here are the configure settings I use (Note: all paths are for Debian servers; you'll have to adjust the various paths accordingly for RHEL/Cent/Fedora servers):

\n
./configure \n        --add-module=$HOME/naxsi-${NAX_VERSION}/naxsi_src \n        --prefix=/usr/share/nginx \n        --sbin-path=/usr/sbin/nginx \n        --conf-path=/etc/nginx/nginx.conf \n        --pid-path=/var/run/nginx.pid \n        --lock-path=/var/lock/nginx.lock \n        --error-log-path=/var/log/nginx/error.log \n        --http-log-path=/var/log/access.log \n        --user=www-data \n        --group=www-data \n        --without-mail_pop3_module \n        --without-mail_imap_module \n        --without-mail_smtp_module \n        --with-http_stub_status_module \n        --with-http_ssl_module \n        --with-http_spdy_module \n        --with-http_gzip_static_module \n        --add-module=$HOME/ngx_pagespeed-release-${NPS_VERSION}-beta \n        --add-module=$HOME/headers-more-nginx-module-${HM_VERSION}\n
\n\n\n

There are a few things worth noting here. First off make sure that Naxsi is first. Here's what the Naxsi wiki page has to say on that score: \"Nginx will decide the order of modules according the order of the module's directive in Nginx's ./configure. So, no matter what (except if you really know what you are doing) put Naxsi first in your ./configure. If you don't do so, you might run into various problems, from random/unpredictable behaviors to non-effective WAF.\" The last thing you want is to think you have a web application firewall running when in fact you don't, so stick with Naxsi first.

\n

There are a couple other things you might want to add to this configuration. If you're going to be serving large files, larger than your average 1.5MB HTML page, consider adding the line: --with-file-aio, which is apparently faster than the stock sendfile option. See here for more details. There are quite a few other modules available. A full list of the default modules can be found on the Nginx site. Read through that and if there's another module you need, you can add it to that config list.

\n
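Any of these extras ends up as just one more flag appended to the configure command above (purely illustrative; the '...' stands in for the full option list shown earlier):
\n
./configure ... --with-file-aio   # '...' = the full option list from above\n
\n\n\n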

Okay, we've told Nginx what to do, now let's actually install it:

\n
make\nsudo make install\n
\n\n\n

Once make install finishes doing its thing you'll have Nginx all set up.

\n

Congrats! You made it.

\n

The next step is to add Nginx to the list of things your server starts up automatically whenever it reboots. Since we installed Nginx from scratch we need to tell the underlying system what we did.

\n

Make it Autostart

\n

Since we compiled from source rather than using Debian/Ubuntu's package management tools, the underlying system isn't aware of Nginx's existence. That means it won't automatically start it up when the system boots. In order to ensure that Nginx does start on boot we'll have to manually add Nginx to our server's list of startup services. That way, should we need to reboot, Nginx will automatically restart when the server does.

\n

Note: I have embraced systemd so this is out of date, see below for systemd version

\n

To do that I use the Debian init script listed in the Nginx InitScripts page:

\n

If that works for you, grab the raw version:

\n
wget https://raw.githubusercontent.com/MovLib/www/develop/etc/init.d/nginx.sh\n# I had to edit the DAEMON var to point to nginx\n# change line 63 in the file to:\nDAEMON=/usr/sbin/nginx\n# then move it to /etc/init.d/nginx\nsudo mv nginx.sh /etc/init.d/nginx\n# make it executable:\nsudo chmod +x /etc/init.d/nginx\n# then just:\nsudo service nginx start #also restart, reload, stop etc\n
\n\n\n

Updated Systemd scripts

\n

Yeah I went and did it. I kind of like systemd actually. Anyway, here's what I use to stop and start my custom compiled nginx with systemd...

\n

First we need to create and edit an nginx.service file.

\n
sudo vim /lib/systemd/system/nginx.service #this is for debian\n
\n\n\n

Then I use this script, which I believe I got from the nginx wiki.

\n
# Stop dance for nginx\n# =======================\n#\n# ExecStop sends SIGSTOP (graceful stop) to the nginx process.\n# If, after 5s (--retry QUIT/5) nginx is still running, systemd takes control\n# and sends SIGTERM (fast shutdown) to the main process.\n# After another 5s (TimeoutStopSec=5), and if nginx is alive, systemd sends\n# SIGKILL to all the remaining processes in the process group (KillMode=mixed).\n#\n# nginx signals reference doc:\n# http://nginx.org/en/docs/control.html\n#\n[Unit]\nDescription=A high performance web server and a reverse proxy server\nAfter=network.target\n\n[Service]\nType=forking\nPIDFile=/run/nginx.pid\nExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'\nExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'\nExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload\nExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid\nTimeoutStopSec=5\nKillMode=mixed\n\n[Install]\nWantedBy=multi-user.target\n
\n\n\n

Save that file, exit your text editor. Now we just need to tell systemd about our script and then we can stop and start via our service file. To do that...

\n
sudo systemctl enable nginx.service\nsudo systemctl start nginx.service\nsudo systemctl status nginx.service\n
\n\n\n

I suggest taking the last bit and turning it into an alias in your bashrc or zshrc file so that you can quickly restart/reload the server when you need it. Here's what I use:

\n
alias xrestart="sudo systemctl restart nginx.service"\n
\n\n\n

If you're using systemd, congrats, you're done. If you're looking for a way to get autostart to work on older or non-systemd servers, read on...

\n

End systemd update

\n

Okay so we now have the initialization script all set up, now let's make Nginx start up on reboot. In theory this should do it:

\n
update-rc.d -f nginx defaults\n
\n\n\n

But that didn't work for me with my Digital Ocean Debian 7 x64 droplet (which complained that \"insserv rejected the script header\"). I didn't really feel like troubleshooting that at the time; I was feeling lazy so I decided to use chkconfig instead. To do that I just installed chkconfig and added Nginx:

\n
sudo apt-get install chkconfig\nsudo chkconfig --add nginx\nsudo chkconfig nginx on\n
\n\n\n

So there we have it, everything you need to get Nginx installed with SPDY, PageSpeed, Headers More and Naxsi. A blazing fast server for static files.

\n

After that it's just a matter of configuring Nginx, which is entirely dependent on how you're using it. For static setups like this my configuration is pretty minimal.

\n

Before we get to that though, here's the first thing I do: edit /etc/nginx/nginx.conf down to something pretty simple. This is the root config, so I keep it limited to an http block that turns on a few things I want globally and an include statement that loads site-specific config files. Something a bit like this:

\n
user  www-data;\nevents {\n    worker_connections  1024;\n}\nhttp {\n    include mime.types;\n    include /etc/nginx/naxsi_core.rules;\n    default_type  application/octet-stream;\n    types_hash_bucket_size 64;\n    server_names_hash_bucket_size 128;\n    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '\n                      '$status $body_bytes_sent "$http_referer" '\n                      '"$http_user_agent" "$http_x_forwarded_for"';\n\n    access_log  logs/access.log  main;\n    more_set_headers "Server: My Custom Server";\n    keepalive_timeout  65;\n    gzip  on;\n    pagespeed on;\n    pagespeed FileCachePath /var/ngx_pagespeed_cache;\n    include /etc/nginx/sites-enabled/*.conf;\n}\n
\n\n\n

A few things to note. I've included the core rules file from the Naxsi source. To make sure that file exists, we need to copy it over to /etc/nginx/.

\n
sudo cp naxsi-0.53-2/naxsi_config/naxsi_core.rules /etc/nginx\n
\n\n\n

Now let's restart the server so it picks up these changes:

\n
sudo service nginx restart\n
\n\n\n

Or, if you took my suggestion of creating an alias, you can type: xrestart and Nginx will restart itself.

\n

With this configuration we have a good basic setup and any .conf files you add to the folder /etc/nginx/sites-enabled/ will be included automatically. So if you want to create a conf file for mydomain.com, you'd create the file /etc/nginx/sites-enabled/mydomain.conf and put the configuration for that domain in that file.

\n
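To give you a feel for what goes in one of those files, here's a minimal sketch of a static-site config (mydomain.com, the paths and the cache times are placeholders to adapt, not a recommendation):
\n
server {\n    listen 80;\n    server_name mydomain.com www.mydomain.com;\n    root /var/www/mydomain.com;\n    index index.html;\n    location /static/ {\n        # far-future expires headers for versioned assets\n        expires 1y;\n        add_header Cache-Control public;\n    }\n    location / {\n        try_files $uri $uri/ =404;\n    }\n}\n
\n\n\n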

I'm going to post a follow-up on how I configure Nginx very soon. In the meantime here's a pretty comprehensive guide to configuring Nginx in a variety of scenarios. And remember, if you want some more helpful tips and tricks for web developers, sign up for the mailing list below.

\n
\n
\n
    \n
  1. \n

    If you're more experienced with Nginx and I'm totally bass-akward about something in this guide, please let me know. 

    \n
  2. \n

    In my experience anyway. Probably Apache can be tuned to get pretty close to Nginx's performance with static files, but it's going to take quite a bit of work. One is not necessarily better, but there are better tools for different jobs. 

    \n
  3. \n

    That said, obviously a CDN service like Cloudfront will, in most cases, be much faster than Nginx or any other server. 

    \n
\n
", "body_markdown": "\r\nI recently helped a friend set up his first Nginx server and in the process realized I didn't have a good working reference for how I set up Nginx.\r\n\r\nSo, for myself, my friend and anyone else looking to get started with Nginx, here's my somewhat opinionated guide to installing and configuring Nginx to serve static files. Which is to say, this is how I install and set up Nginx to serve my own and my clients' static files whether those files are simply stylesheets, images and JavaScript or full static sites like this one. What follows is what I believe are the best practices of Nginx[^1]; if you know better, please correct me in the comments.\r\n\r\n[This post was last updated 30 October 2015]\r\n\r\n## Nginx Beats Apache for Static Content[^2]\r\n\r\nApache is overkill. Unlike Apache, which is a jack-of-all-trades server, Nginx was really designed to do just a few things well, one of which is to offer a simple, fast, lightweight server for static files. And Nginx is really, really good at serving static files. In fact, in my experience Nginx with PageSpeed, gzip, far future expires headers and a couple other extras I'll mention is faster than serving static files from Amazon S3[^3] (potentially even faster in the future if Verizon and its ilk [really do](http://netneutralitytest.com/) start [throttling cloud-based services](http://davesblog.com/blog/2014/02/05/verizon-using-recent-net-neutrality-victory-to-wage-war-against-netflix/)).\r\n\r\n## Nginx is Different from Apache\r\n\r\nIn its quest to be lightweight and fast, Nginx takes a different approach to modules than you're probably familiar with in Apache. In Apache you can dynamically load various features using modules. You just add something like `LoadModule alias_module modules/mod_alias.so` to your Apache config files and just like that Apache loads the alias module.\r\n\r\nUnlike Apache, Nginx can not dynamically load modules. Nginx has available what it has available when you install it.\r\n\r\nThat means if you really want to customize and tweak it, it's best to install Nginx from source. You don't *have* to install it from source. But if you really want a screaming fast server, I suggest compiling Nginx yourself, enabling and disabling exactly the modules you need. Installing Nginx from source allows you to add some third-party tools, most notably Google's PageSpeed module, which has some fantastic tools for speeding up your site.\r\n\r\nLuckily, installing Nginx from source isn't too difficult. Even if you've never compiled any software from source, you can install Nginx. The remainder of this post will show you exactly how.\r\n\r\n## My Ideal Nginx Setup for Static Sites\r\n\r\nBefore we start installing, let's go over the things we'll be using to build a fast, lightweight server with Nginx.\r\n\r\n* [Nginx](http://nginx.org).\r\n* [SPDY](http://www.chromium.org/spdy/spdy-protocol) -- Nginx offers \"experimental support for SPDY\", but it's not enabled by default. We're going to enable it when we install Nginx. 
In my testing SPDY support has worked without a hitch, experimental or otherwise.\r\n* [Google Page Speed](https://developers.google.com/speed/pagespeed/module) -- Part of Google's effort to make the web faster, the Page Speed Nginx module \"automatically applies web performance best practices to pages and associated assets\".\r\n* [Headers More](https://github.com/agentzh/headers-more-nginx-module/) -- This isn't really necessary from a speed standpoint, but I often like to set custom headers and hide some headers (like which version of Nginx your server is running). Headers More makes that very easy.\r\n* [Naxsi](https://github.com/nbs-system/naxsi) -- Naxsi is a \"Web Application Firewall module for Nginx\". It's not really all that important for a server limited to static files, but it adds an extra layer of security should you decided to use Nginx as a proxy server down the road.\r\n\r\nSo we're going to install Nginx with SPDY support and three third-party modules.\r\n\r\nOkay, here's the step-by-step process to installing Nginx on a Debian 8 (or Ubuntu) server. If you're looking for a good, cheap VPS host I've been happy with [Vultr.com](http://www.vultr.com/?ref=6825229) (that's an affiliate link that will help support luxagraf; if you prefer, here's a non-affiliate link: [link](http://www.vultr.com/))\r\n\r\nThe first step is to make sure you're installing the latest release of Nginx. To do that check the [Nginx download page](http://nginx.org/en/download.html) for the latest version of Nginx (at the time of writing that's 1.5.10).\r\n\r\nOkay, SSH into your server and let's get started.\r\n\r\nWhile these instructions will work on just about any server, the one thing that will be different is how you install the various prerequisites needed to compile Nginx.\r\n\r\nOn a Debian/Ubuntu server you'd do this:\r\n\r\n~~~~console\r\nsudo apt-get -y install build-essential zlib1g-dev libpcre3 libpcre3-dev libbz2-dev libssl-dev tar unzip\r\n~~~~\r\n\r\n\r\n\r\nIf you're using RHEL/Cent/Fedora you'll want these packages:\r\n\r\n~~~~console\r\nsudo yum install gcc-c++ pcre-dev pcre-devel zlib-devel make\r\n~~~~\r\n\r\nAfter you have the prerequisites installed it's time to grab the latest version of Google's Pagespeed module. Google's [Nginx PageSpeed installation instructions](https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source) are pretty good, so I'll reproduce them here.\r\n\r\nFirst grab the latest version of PageSpeed, which is currently 1.9.32.2, but check the sources since it updates frequently and change this first variable to match the latest version.\r\n\r\n~~~~console\r\nNPS_VERSION=1.9.32.2\r\nwget https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}-beta.zip\r\nunzip release-${NPS_VERSION}-beta.zip\r\n~~~~\r\n\r\nNow, before we compile pagespeed we need to grab `psol`, which PageSpeed needs to function properly. So, let's `cd` into the `ngx_pagespeed-release-1.8.31.4-beta` folder and grab `psol`:\r\n\r\n~~~~console\r\ncd ngx_pagespeed-release-${NPS_VERSION}-beta/\r\nwget https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz\r\ntar -xzvf ${NPS_VERSION}.tar.gz\r\ncd ../\r\n~~~~\r\n\r\nAlright, so the `ngx_pagespeed` module is all setup and ready to install. All we have to do at this point is tell Nginx where to find it.\r\n\r\nNow let's grab the Headers More and Naxsi modules as well. 
Again, check the [Headers More](https://github.com/agentzh/headers-more-nginx-module/) and [Naxsi](https://github.com/nbs-system/naxsi) pages to see what the latest stable version is and adjust the version numbers in the following accordingly.\r\n\r\n~~~~console\r\nHM_VERSION =v0.25\r\nwget https://github.com/agentzh/headers-more-nginx-module/archive/${HM_VERSION}.tar.gz\r\ntar -xvzf ${HM_VERSION}.tar.gz\r\nNAX_VERSION=0.53-2\r\nwget https://github.com/nbs-system/naxsi/archive/${NAX_VERSION}.tar.gz\r\ntar -xvzf ${NAX_VERSION}.tar.gz\r\n~~~~\r\n\r\nNow we have all three third-party modules ready to go, the last thing we'll grab is a copy of Nginx itself:\r\n\r\n~~~~console\r\nNGINX_VERSION=1.7.7\r\nwget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz\r\ntar -xvzf nginx-${NGINX_VERSION}.tar.gz\r\n~~~~\r\n\r\nThen we `cd` into the Nginx folder and compile. So, first:\r\n\r\n~~~~console\r\ncd nginx-${NGINX_VERSION}/\r\n~~~~\r\n\r\nSo now we're inside the Nginx folder, let's configure our installation. We'll add in all our extras and turn off a few things we don't need. Or at least they're things I don't need, if you need the mail modules, then delete those lines. If you don't need SSL, you might want to skip that as well. Here's the config setting I use (Note: all paths are for Debian servers, you'll have to adjust the various paths accordingly for RHEL/Cent/Fedora/ servers):\r\n\r\n\r\n~~~~console\r\n./configure \r\n --add-module=$HOME/naxsi-${NAX_VERSION}/naxsi_src \r\n --prefix=/usr/share/nginx \r\n --sbin-path=/usr/sbin/nginx \r\n --conf-path=/etc/nginx/nginx.conf \r\n --pid-path=/var/run/nginx.pid \r\n --lock-path=/var/lock/nginx.lock \r\n --error-log-path=/var/log/nginx/error.log \r\n --http-log-path=/var/log/access.log \r\n --user=www-data \r\n --group=www-data \r\n --without-mail_pop3_module \r\n --without-mail_imap_module \r\n --without-mail_smtp_module \r\n --with-http_stub_status_module \r\n --with-http_ssl_module \r\n --with-http_spdy_module \r\n --with-http_gzip_static_module \r\n --add-module=$HOME/ngx_pagespeed-release-${NPS_VERSION}-beta \r\n --add-module=$HOME/headers-more-nginx-module-${HM_VERSION}\r\n~~~~\r\n\r\nThere are a few things worth noting here. First off make sure that Naxsi is first. Here's what the [Naxsi wiki page](https://github.com/nbs-system/naxsi/wiki/installation) has to say on that score: \"Nginx will decide the order of modules according the order of the module's directive in Nginx's ./configure. So, no matter what (except if you really know what you are doing) put Naxsi first in your ./configure. If you don't do so, you might run into various problems, from random/unpredictable behaviors to non-effective WAF.\" The last thing you want is to think you have a web application firewall running when in fact you don't, so stick with Naxsi first.\r\n\r\nThere are a couple other things you might want to add to this configuration. If you're going to be serving large files, larger than your average 1.5MB HTML page, consider adding the line: `--with-file-aio `, which is apparently faster than the stock `sendfile` option. See [here](https://calomel.org/nginx.html) for more details. There are quite a few other modules available. A [full list of the default modules](http://wiki.nginx.org/Modules) can be found on the Nginx site. 
Read through that and if there's another module you need, you can add it to that config list.\r\n\r\nOkay, we've told Nginx what to do, now let's actually install it:\r\n\r\n~~~~console\r\nmake\r\nsudo make install\r\n~~~~\r\n\r\nOnce `make install` finishes doing its thing you'll have Nginx all set up.\r\n\r\nCongrats! You made it.\r\n\r\nThe next step is to add Nginx to the list of things your server starts up automatically whenever it reboots. Since we installed Nginx from scratch we need to tell the underlying system what we did.\r\n\r\n## Make it Autostart\r\n\r\nSince we compiled from source rather than using Debian/Ubuntu's package management tools, the underlying stystem isn't aware of Nginx's existence. That means it won't automatically start it up when the system boots. In order to ensure that Nginx does start on boot we'll have to manually add Nginx to our server's list of startup services. That way, should we need to reboot, Nginx will automatically restart when the server does.\r\n\r\n**Note: I have embraced systemd so this is out of date, see below for systemd version**\r\n\r\nTo do that I use the [Debian init script](https://github.com/MovLib/www/blob/master/bin/init-nginx.sh) listed in the [Nginx InitScripts page](http://wiki.nginx.org/InitScripts):\r\n\r\nIf that works for you, grab the raw version:\r\n\r\n~~~~console\r\nwget https://raw.githubusercontent.com/MovLib/www/develop/etc/init.d/nginx.sh\r\n# I had to edit the DAEMON var to point to nginx\r\n# change line 63 in the file to:\r\nDAEMON=/usr/sbin/nginx\r\n# then move it to /etc/init.d/nginx\r\nsudo mv nginx.sh /etc/init.d/nginx\r\n# make it executable:\r\nsudo chmod +x /etc/init.d/nginx\r\n# then just:\r\nsudo service nginx start #also restart, reload, stop etc\r\n~~~~\r\n\r\n##Updated Systemd scripts\r\n\r\nYeah I went and did it. I kind of like systemd actually. Anyway, here's what I use to stop and start my custom compiled nginx with systemd...\r\n\r\nFirst we need to create and edit an nginx.service file.\r\n\r\n~~~~console\r\nsudo vim /lib/systemd/system/nginx.service #this is for debian\r\n~~~~\r\n\r\nThen I use this script which I got from the nginx wiki I believe.\r\n\r\n~~~~ini\r\n# Stop dance for nginx\r\n# =======================\r\n#\r\n# ExecStop sends SIGSTOP (graceful stop) to the nginx process.\r\n# If, after 5s (--retry QUIT/5) nginx is still running, systemd takes control\r\n# and sends SIGTERM (fast shutdown) to the main process.\r\n# After another 5s (TimeoutStopSec=5), and if nginx is alive, systemd sends\r\n# SIGKILL to all the remaining processes in the process group (KillMode=mixed).\r\n#\r\n# nginx signals reference doc:\r\n# http://nginx.org/en/docs/control.html\r\n#\r\n[Unit]\r\nDescription=A high performance web server and a reverse proxy server\r\nAfter=network.target\r\n\r\n[Service]\r\nType=forking\r\nPIDFile=/run/nginx.pid\r\nExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'\r\nExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'\r\nExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload\r\nExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid\r\nTimeoutStopSec=5\r\nKillMode=mixed\r\n\r\n[Install]\r\nWantedBy=multi-user.target\r\n~~~~\r\n\r\nSave that file, exit your text editor. Now we just need to tell systemd about our script and then we can stop and start via our service file. 
To do that...\r\n\r\n~~~~console\r\nsudo systemctl enable nginx.service\r\nsudo systemctl start nginx.service\r\nsudo systemctl status nginx.service\r\n~~~~\r\n\r\nI suggest taking the last bit and turning it into an alias in your `bashrc` or `zshrc` file so that you can quickly restart/reload the server when you need it. Here's what I use:\r\n\r\n~~~~ini\r\nalias xrestart=\"sudo systemctl restart nginx.service\"\r\n~~~~\r\n\r\n\r\nIf you're using systemd, congrats, you're done. If you're looking for a way to get autostart to work on older or non-systemd servers, read on...\r\n\r\n**End systemd update**\r\n\r\nOkay so we now have the initialization script all set up, now let's make Nginx start up on reboot. In theory this should do it:\r\n\r\n~~~~console\r\nupdate-rc.d -f nginx defaults\r\n~~~~\r\n\r\nBut that didn't work for me with my Digital Ocean Debian 7 x64 droplet (which complained that \"`insserv rejected the script header`\"). I didn't really feel like troubleshooting that at the time; I was feeling lazy so I decided to use chkconfig instead. To do that I just installed chkconfig and added Nginx:\r\n\r\n~~~~console\r\nsudo apt-get install chkconfig\r\nsudo chkconfig --add nginx\r\nsudo chkconfig nginx on\r\n~~~~\r\n\r\nSo there we have it, everything you need to get Nginx installed with SPDY, PageSpeed, Headers More and Naxsi. A blazing fast server for static files.\r\n\r\nAfter that it's just a matter of configuring Nginx, which is entirely dependent on how you're using it. For static setups like this my configuration is pretty minimal.\r\n\r\nBefore we get to that though, there's the first thing I do: edit `/etc/nginx/nginx.conf` down to something pretty simple. This is the root config so I keep it limited to a `http` block that turns on a few things I want globally and an include statement that loads site-specific config files. Something a bit like this:\r\n\r\n~~~~nginx\r\nuser www-data;\r\nevents {\r\n worker_connections 1024;\r\n}\r\nhttp {\r\n include mime.types;\r\n include /etc/nginx/naxsi_core.rules;\r\n default_type application/octet-stream;\r\n types_hash_bucket_size 64;\r\n server_names_hash_bucket_size 128;\r\n log_format main '$remote_addr - $remote_user [$time_local] \"$request\" '\r\n '$status $body_bytes_sent \"$http_referer\" '\r\n '\"$http_user_agent\" \"$http_x_forwarded_for\"';\r\n\r\n access_log logs/access.log main;\r\n more_set_headers \"Server: My Custom Server\";\r\n keepalive_timeout 65;\r\n gzip on;\r\n pagespeed on;\r\n pagespeed FileCachePath /var/ngx_pagespeed_cache;\r\n include /etc/nginx/sites-enabled/*.conf;\r\n}\r\n~~~~\r\n\r\nA few things to note. I've include the core rules file from the Naxsi source. To make sure that file exists, we need to copy it over to `/etc/nginx/`.\r\n\r\n~~~~console\r\nsudo cp naxsi-0.53-2/naxci_config/naxsi_core.rule /etc/nginx\r\n~~~~\r\n\r\nNow let's restart the server so it picks up these changes:\r\n\r\n~~~~console\r\nsudo service nginx restart\r\n~~~~\r\n\r\nOr, if you took my suggestion of creating an alias, you can type: `xrestart` and Nginx will restart itself.\r\n\r\nWith this configuration we have a good basic setup and any `.conf` files you add to the folder `/etc/nginx/sites-enabled/` will be included automatically. So if you want to create a conf file for mydomain.com, you'd create the file `/etc/nginx/sites-enabled/mydomain.conf` and put the configuration for that domain in that file.\r\n\r\nI'm going to post a follow up on how I configure Nginx very soon. 
In the mean time here's a pretty comprehensive [guide to configuring Nginx](https://calomel.org/nginx.html) in a variety of scenarios. And remember, if you want to some more helpful tips and tricks for web developers, sign up for the mailing list below.\r\n\r\n[^1]: If you're more experienced with Nginx and I'm totally bass-akward about something in this guide, please let me know.\r\n[^2]: In my experience anyway. Probably Apache can be tuned to get pretty close to Nginx's performance with static files, but it's going to take quite a bit of work. One is not necessarily better, but there are better tools for different jobs.\r\n[^3]: That said, obviously a CDN service like Cloudfront will, in most cases, be much faster than Nginx or any other server.\r\n", "pub_date": "2014-02-10T21:03:23", "last_updated": "2015-10-30T23:31:31.145", "enable_comments": true, "has_code": true, "status": 1, "meta_description": "A complete guide to installing and configuring Nginx to serve static files for lightning fast websites.", "template_name": 0, "topics": [3]}}, {"model": "src.entry", "pk": 9, "fields": {"title": "Whatever Happened to Webmonkey.com?", "slug": "whatever-happened-webmonkey", "body_html": "

People on Twitter have been asking what's up with Webmonkey.com. Originally I wanted to get this up on Webmonkey, but I got locked out of the CMS before I managed to do that, so I'm putting it here.

\n

Earlier this year Wired decided to stop producing new content for Webmonkey.

\n

For those keeping track at home, this is the fourth, and I suspect final, time Webmonkey has been shut down (previously it was shut down in 1999, 2004 and 2006).

\n

I've been writing for Webmonkey.com since 2000, full time since 2006 (when it came back from the dead for a third run). And for the last two years I have been the sole writer, editor and producer of the site.

\n

Like so many of you, I learned how to build websites from Webmonkey. But it was more than just great tutorials and how-tos. Part of what made Webmonkey great was that it was opinionated and rough around the edges. Webmonkey was not the product of professional writers; it was written and created by the web nerds building Wired's websites. It was written by people like us, for people like us.

\n

I'll miss Webmonkey not just because it was my job for many years, but because at this point it feels like a family dog to me, it's always been there and suddenly it's not. Sniff. I'll miss you Webmonkey.

\n

Quite a few people have asked me why it was shut down, but unfortunately I don't have many details to share. I've always been a remote employee, not in San Francisco at all in fact, and consequently somewhat out of the loop. What I can say is that Webmonkey's return to Wired in 2006 was the doing of long-time Wired editor Evan Hansen (now at Medium). Evan was a tireless champion of Webmonkey and saved it from the Conde Nast ax several times. He was also one of the few at Wired who \"got\" Webmonkey. When Evan left Wired earlier this year I knew Webmonkey's days were numbered.

\n

I don't begrudge Wired for shutting Webmonkey down. While I have certain nostalgia for its heyday, even I know it's been a long time since Webmonkey was leading the way in web design. I had neither the staff nor the funding to make Webmonkey anything like its early 2000s self. In that sense I'm glad it was shut down rather than simply fading further into obscurity.

\n

I am very happy that Wired has left the site in place. As far as I know Webmonkey (and its ever-popular cheat sheets, which still get a truly astounding amount of traffic) will remain available on the web. That said, note to the Archive Team, it wouldn't hurt to create a backup. Sadly, many of the very earliest writings have already been lost in the various CMS transitions over the years and even much of what's there now has incorrect bylines. Still, at least most of it's there. For now.

\n

As for me, I've decided to go back to what I enjoyed most about the early days of Webmonkey: teaching people how to make cool stuff for the web.

\n

To that end I'm currently working on a book about responsive design, which I'm hoping to make available by the end of October. If you're interested drop your email in the box below and I'll let you know when it's out (alternately you can follow @LongHandPixels on Twitter).

\n

If you have any questions or want more details, use the comments box below.

\n

In closing, I'd like to thank some people at Wired -- thank you to my editors over the years, especially Michael Calore, Evan Hansen and Leander Kahney who all made me a much better writer. Also thanks to Louise for always making sure I got paid. And finally, to everyone who read Webmonkey and contributed over the years, whether with articles or even just a comment, thank you.

\n

Cheers and, yes, thanks for all the bananas.

", "body_markdown": "\r\nPeople on Twitter have been asking what's up with [Webmonkey.com][1]. Originally I wanted to get this up on Webmonkey, but I got locked out of the CMS before I managed to do that, so I'm putting it here.\r\n\r\nEarlier this year Wired decided to stop producing new content for Webmonkey.\r\n\r\nFor those keeping track at home, this is the fourth, and I suspect final, time Webmonkey has been shut down (previously it was shut down in 1999, 2004 and 2006).\r\n\r\nI've been writing for Webmonkey.com since 2000, full time since 2006 (when it came back from the dead for a third run). And for the last two years I have been the sole writer, editor and producer of the site.\r\n\r\nLike so many of you, I learned how to build websites from Webmonkey. But it was more than just great tutorials and how tos. Part of what made Webmonkey great was that it was opinionated and rough around the edges. Webmonkey was not the product of professional writers, it was written and created by the web nerds building Wired's websites. It was written by people like us, for people like us.\r\n\r\nI'll miss Webmonkey not just because it was my job for many years, but because at this point it feels like a family dog to me, it's always been there and suddenly it's not. Sniff. I'll miss you Webmonkey.\r\n\r\nQuite a few people have asked me why it was shut down, but unfortunately I don't have many details to share. I've always been a remote employee, not in San Francisco at all in fact, and consequently somewhat out of the loop. What I can say is that Webmonkey's return to Wired in 2006 was the doing of long-time Wired editor Evan Hansen ([now at Medium][2]). Evan was a tireless champion of Webmonkey and saved it from the Conde Nast ax several times. He was also one of the few at Wired who \"got\" Webmonkey. When Evan left Wired earlier this year I knew Webmonkey's days were numbered.\r\n\r\nI don't begrudge Wired for shutting Webmonkey down. While I have certain nostalgia for its heyday, even I know it's been a long time since Webmonkey was leading the way in web design. I had neither the staff nor the funding to make Webmonkey anything like its early 2000s self. In that sense I'm glad it was shut down rather than simply fading further into obscurity.\r\n\r\nI am very happy that Wired has left the site in place. As far as I know Webmonkey (and its ever-popular cheat sheets, which still get a truly astounding amount of traffic) will remain available on the web. That said, note to the [Archive Team][3], it wouldn't hurt to create a backup. Sadly, many of the very earliest writings have already been lost in the various CMS transitions over the years and even much of what's there now has incorrect bylines. Still, at least most of it's there. For now.\r\n\r\nAs for me, I've decided to go back to what I enjoyed most about the early days of Webmonkey: teaching people how to make cool stuff for the web.\r\n\r\nTo that end I'm currently working on a book about responsive design, which I'm hoping to make available by the end of October. If you're interested drop your email in the box below and I'll let you know when it's out (alternately you can follow [@LongHandPixels][4] on Twitter).\r\n\r\nIf you have any questions or want more details use the comments box below.\r\n\r\nIn closing, I'd like to thank some people at Wired -- thank you to my editors over the years, especially [Michael Calore][5], [Evan Hansen][6] and [Leander Kahney][7] who all made me a much better writer. 
Also thanks to Louise for always making sure I got paid. And finally, to everyone who read Webmonkey and contributed over the years, whether with articles or even just a comment, thank you.\r\n\r\nCheers and, yes, thanks for all the bananas.\r\n\r\n[1]: http://www.webmonkey.com/\r\n[2]: https://medium.com/@evanatmedium\r\n[3]: http://www.archiveteam.org/index.php?title=Main_Page\r\n[4]: https://twitter.com/LongHandPixels\r\n[5]: http://snackfight.com/\r\n[6]: https://twitter.com/evanatmedium\r\n[7]: http://www.cultofmac.com/about/\r\n", "pub_date": "2013-09-20T21:04:57", "last_updated": "2015-10-29T21:43:01.674", "enable_comments": true, "has_code": false, "status": 1, "meta_description": "This monkey's gone to heaven. Again. Wired shut down Webmonkey.com for the fourth and likely final time.", "template_name": 0, "topics": []}}, {"model": "src.entry", "pk": 10, "fields": {"title": "Tools for Writing an Ebook", "slug": "ebook-writing-tools", "body_html": "

It never really occurred to me to research which tools I would need to create an ebook because I knew I was going to use Markdown, which could then be translated into pretty much any format using Pandoc. But since a few people have asked for more details on exactly which tools I used, here's a quick rundown:

\n
    \n
  1. I write books as single text files lightly marked up with Pandoc-flavored Markdown.
  2. Then I run Pandoc, passing in custom templates, CSS files, fonts I bought and so on. Pretty much as detailed here in the Pandoc documentation. I run these commands often enough that I write a shell script for each project so I don't have to type in all the flags and file paths each time.
  3. Pandoc outputs an ePub file and an HTML file. The latter is then used with Weasyprint to generate the PDF version of the ebook. Then I use the ePub file and the Kindle command line tool to create a .mobi file.
  4. All of the formatting and design is just CSS, which I am already comfortable working with (though ePub supports only a subset of CSS and reader support is somewhat akin to building a website in 1998 -- who knows if it's gonna work? The PDF is what I consider the reference copy.) There's a rough sketch of this kind of CSS just after this list.
\n
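To give a rough idea of what "just CSS" means here, below is a minimal sketch of the kind of rules involved. The sizes and font choices are placeholders rather than my actual stylesheet, and the @page rule only matters for the WeasyPrint-generated PDF, since ePub readers generally ignore paged-media rules.

~~~~css
/* Minimal sketch of an ebook stylesheet -- all values are placeholders. */

/* Page size and margins for the PDF output (honored by WeasyPrint,
   ignored by most ePub readers). */
@page {
    size: 6in 9in;
    margin: 0.75in 0.65in;
}

body {
    font-family: Georgia, serif;
    font-size: 11pt;
    line-height: 1.5;
}

h1, h2 {
    font-family: Helvetica, Arial, sans-serif;
    page-break-after: avoid; /* keep headings attached to the text below */
}

pre, code {
    font-family: "Courier New", monospace;
    font-size: 0.85em;
}
~~~~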

In the end I get the book in TXT, HTML, PDF, ePub and .mobi formats, which covers pretty much every digital reader I'm aware of. Out of those I actually include the PDF, ePub and Mobi files when you buy the book.

\n

FAQs and Notes.

\n

Why not use InDesign or iBook Author or _______?

\n

I wanted to use open source software, which offers me more control over the process than I could get with monolithic tools like visual layout editors.

\n

The above tools are, for me anyway, the simplest possible workflow that outputs the highest-quality product.

\n

What about Prince?

\n

What does The Purple One have to do with writing books? Oh, that Prince. Actually I really like Prince and it can do a few things that WeasyPrint cannot (like execute JavaScript, which is handy for code highlighting, or allow for @font-face font embedding), but it's not free and, in the end, I decided it wasn't worth the money.
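
For anyone who hasn't used it, @font-face is the standard CSS mechanism for shipping a font along with the document. Here's a generic sketch, with a made-up family name and file path:

~~~~css
/* Generic @font-face declaration -- the family name and path are placeholders. */
@font-face {
    font-family: "Book Serif";
    src: url("fonts/book-serif.woff") format("woff");
    font-weight: normal;
    font-style: normal;
}

body {
    font-family: "Book Serif", Georgia, serif;
}
~~~~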

\n

Can you share your shell script?

\n

Here's the basic idea; adjust file paths to suit your working habits.

\n
#!/bin/sh\n#Update PDF:\npandoc --toc --toc-depth=2 --smart --template=lib/template.html5 --include-before-body=lib/header.html -t html5 -o rwd.html draft.txt && weasyprint rwd.html rwd.pdf\n\n\n#Update epub:\npandoc -S -s --smart -t epub3 --include-before-body=lib/header.html --template=lib/template_epub.html --epub-metadata=lib/epub-metadata.xml --epub-stylesheet=lib/print-epub.css --epub-cover-image=lib/covers/cover-portrait.png --toc --toc-depth=2 -o rwd.epub draft.txt\n\n#Update Mobi:\npandoc -S -s --smart -t epub3 --include-before-body=lib/header.html --template=lib/template_epub.html --epub-metadata=lib/epub-metadata.xml --epub-stylesheet=lib/print-kindle.css --epub-cover-image=lib/covers/cover-portrait.png --toc --toc-depth=2 -o kindle.epub draft.txt && kindlegen kindle.epub -o rwd.mobi\n
\n\n\n

I just run this script and bang, all my files are updated.

\n

What advice do you have for people trying to write an ebook?

\n

At the risk of sounding trite, just do it.

\n

Writing a book is not easy, or rather the writing is never easy, but I don't think it's ever been this easy to produce a book. It took me two afternoons to come up with a workflow that involves all free, open source software and allows me to publish literally any text file on my hard drive as a book that can then be read by millions. I type two keystrokes and I have a book. Even if millions don't ever read your book (and, for the record, millions have most definitely not read my books), that is still f'ing amazing.

\n

Now go make something cool (and be sure to tell me about it).

", "body_markdown": "It never really occurred to me to research which tools I would need to create an ebook because I knew I was going to use Markdown, which could then be translated into pretty much any format using [Pandoc](http://johnmacfarlane.net/pandoc/). Bu since a few people have [asked](https://twitter.com/situjapan/status/549935669129142272) for more details on *exactly* which tools I used, here's a quick rundown:\r\n\r\n1. I write books as single text files lightly marked up with Pandoc-flavored Markdown.\r\n2. Then I run Pandoc, passing in custom templates, CSS files, fonts I bought and so on. Pretty much as [detailed here in the Pandoc documentation](http://johnmacfarlane.net/pandoc/epub.html). I run these commands often enough that I write a shell script for each project so I don't have to type in all the flags and file paths each time.\r\n3. Pandoc outputs an ePub file and an HTML file. The latter is then used with [Weasyprint](http://weasyprint.org/) to generate the PDF version of the ebook. Then I used the ePub file and the [Kindle command line tool](http://www.amazon.com/gp/feature.html?ie=UTF8&docId=1000765211) to create a .mobi file.\r\n4. All of the formatting and design is just CSS, which I am already comfortable working with (though ePub is only a subset of CSS and reader support is somewhat akin to building website in 1998 -- who knows if it's gonna work? The PDF is what I consider the reference copy.)\r\n\r\nIn the end I get the book in TXT, HTML, PDF, ePub and .mobi formats, which covers pretty much every digital reader I'm aware of. Out of those I actually include the PDF, ePub and Mobi files when you [buy the book](/src/books/).\r\n\r\n## FAQs and Notes.\r\n\r\n**Why not use InDesign or iBook Author or \\_\\_\\_\\_\\_\\_\\_?**\r\n\r\nI wanted to use open source software, which offers me more control over the process than I could get with monolithic tools like visual layout editors. \r\n\r\nThe above tools are, for me anyway, the simplest possible workflow which outputs the highest quality product. \r\n\r\n**What about Prince?**\r\n\r\nWhat does The Purple One have to do with writing books? Oh, that [Prince](http://www.princexml.com/). Actually I really like Prince and it can do a few things that WeasyPrint cannot (like execute JavaScript which is handy for code highlighting or allow for `@font-face` font embedding), but it's not free and in the end, I decided, not worth the money.\r\n\r\n**Can you share your shell script?**\r\n\r\nHere's the basic idea, adjust file paths to suit your working habits.\r\n\r\n~~~~bash\r\n#!/bin/sh\r\n#Update PDF:\r\npandoc --toc --toc-depth=2 --smart --template=lib/template.html5 --include-before-body=lib/header.html -t html5 -o rwd.html draft.txt && weasyprint rwd.html rwd.pdf\r\n\r\n\r\n#Update epub:\r\npandoc -S -s --smart -t epub3 --include-before-body=lib/header.html --template=lib/template_epub.html --epub-metadata=lib/epub-metadata.xml --epub-stylesheet=lib/print-epub.css --epub-cover-image=lib/covers/cover-portrait.png --toc --toc-depth=2 -o rwd.epub draft.txt\r\n\r\n#update Mobi:\r\npandoc -S -s --smart -t epub3 --include-before-body=lib/header.html --template=lib/template_epub.html --epub-metadata=lib/epub-metadata.xml --epub-stylesheet=lib/print-kindle.css --epub-cover-image=lib/covers/cover-portrait.png --toc --toc-depth=2 -o kindle.epub Draft.txt && kindlegen kindle.epub -o rwd.mobi\r\n~~~~\r\n\r\nI just run this script and bang, all my files are updated. 
\r\n\r\nWhat advice do you have for people trying to write an ebook?\r\n\r\nAt the risk of sounding trite, just do it. \r\n\r\nWriting a book is not easy, or rather the writing is never easy, but I don't think it's ever been this easy to *produce* a book. It took me two afternoons to come up with a workflow that involves all free, open source software and allows me to publish literally any text file on my hard drive as a book that can then be read by millions. I type two key strokes and I have a book. Even if millions don't ever read your book (and, for the record, millions have most definitely not read my books), that is still f'ing amazing. \r\n\r\nNow go make something cool (and be sure to tell me about it).\r\n", "pub_date": "2014-01-24T20:05:17", "last_updated": "2015-10-31T09:22:12.082", "enable_comments": true, "has_code": true, "status": 1, "meta_description": "The tools I use to write and publish ebooks. All free and open source.", "template_name": 0, "topics": []}}, {"model": "src.entry", "pk": 11, "fields": {"title": "Scaling Responsive Images in CSS", "slug": "scaling-responsive-images-css", "body_html": "

It's pretty easy to handle images responsively with CSS. Just use @media queries to swap images at various breakpoints in your design.
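
For example, swapping a decorative background image at a couple of breakpoints might look something like the sketch below; the class name, image files and breakpoint widths are invented for illustration.

~~~~css
/* Sketch of swapping a background image at breakpoints --
   the class name, image files and widths are made up. */
.hero {
    background-image: url('hero-small.jpg');
    background-repeat: no-repeat;
    background-size: 100%;
}

@media (min-width: 600px) {
    .hero {
        background-image: url('hero-medium.jpg');
    }
}

@media (min-width: 1024px) {
    .hero {
        background-image: url('hero-large.jpg');
    }
}
~~~~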

\n

It's slightly trickier to get those images to be fluid and scale in between breakpoints. Or rather, it's not hard to get them to scale horizontally, but what about vertical scaling?

\n

Imagine this scenario. You have a div with a paragraph inside it and you want to add a background using the :before pseudo element -- just a decorative image behind some text. You can set the max-width to 100% to get the image to fluidly scale in width, but what about scaling the height?

\n

That's a bit trickier, or at least it tripped me up for a minute the other day. I started with this:

\n
.wrapper--image:before {\n    content: "";\n    display: block;\n    max-width: 100%;\n    height: auto;\n    background-color: #f3f;\n    background-image: url('bg.jpg');\n    background-repeat: no-repeat;\n    background-size: 100%;\n }\n
\n\n\n

Do that and you'll see... nothing. Okay, I expected that. Setting height to auto doesn't work because the pseudo element has no real content, which means its default height is zero. Okay, how do I fix that?

\n

You might try setting the height to the height of your background image. That works whenever the div is the size of, or larger than, the image. But the minute your image scales down at all you'll have blank space at the bottom of your div, because the div has a fixed height with an image inside that's shorter than that fixed height. Try resizing this demo to see what I'm talking about: make the window less than 800px wide and you'll see the box no longer scales with the image.

\n

To get around this we can borrow a trick from Thierry Koblentz's technique for creating intrinsic ratios for video to create a box that maintains the ratio of our background image.

\n

We'll leave everything the way it is, but add one line:

\n
.wrapper--image:before {\n    content: "";\n    display: block;\n    max-width: 100%;\n    background-color: #f3f;\n    background-image: url('bg.jpg');\n    background-repeat: no-repeat;\n    background-size: 100%;\n    padding-top: 55.375%;\n}\n
\n\n\n

We've added padding to the top of the element, which forces the element to have a height (at least visually). But where did I get that number? That's the ratio of the dimensions of the background image. I simply divided the height of the image by the width of the image. In this case my image was 443px tall and 800px wide: 443 / 800 = 0.55375, which gives us 55.375%.
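
If you'd rather not do that division by hand, you can let the browser do the arithmetic inside calc(). This is purely an optional convenience, and it assumes calc() support in the browsers you care about:

~~~~css
.wrapper--image:before {
    /* 443 / 800 evaluates to 0.55375, so this resolves to the same 55.375% */
    padding-top: calc(443 / 800 * 100%);
}
~~~~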

\n

Here's a working demo.

\n

And there you have it, properly scaling CSS background images on :before or other \"empty\" elements, pseudo or otherwise.

\n

The only real problem with this technique is that it requires you to know the dimensions of your image ahead of time. That won't be possible in every scenario, but if it is, this will work.

", "body_markdown": "It's pretty easy to handle images responsively with CSS. Just use `@media` queries to swap images at various breakpoints in your design.\r\n\r\nIt's slightly trickier to get those images to be fluid and scale in between breakpoints. Or rather, it's not hard to get them to scale horizontally, but what about vertical scaling?\r\n\r\nImagine this scenario. You have a div with a paragraph inside it and you want to add a background using the `:before` pseudo element -- just a decorative image behind some text. You can set the max-width to 100% to get the image to fluidly scale in width, but what about scaling the height?\r\n\r\nThat's a bit trickier, or at least it tripped me up for a minute the other day. I started with this:\r\n\r\n~~~~css\r\n.wrapper--image:before {\r\n content: \"\";\r\n display: block;\r\n max-width: 100%;\r\n height: 443px;\r\n background-color: #f3f;\r\n background-image: url('bg.jpg');\r\n background-repeat: no-repeat;\r\n background-size: 100%;\r\n }\r\n~~~~\r\n\r\nDo that and you'll see... nothing. Okay, I expected that. Setting height to auto doesn't work because the pseudo element has no real content, which means its default height is zero. Okay, how do I fix that?\r\n\r\nYou might try setting the height to the height of your background image. That works whenever the div is the size of, or larger than, the image. But the minute your image scales down at all you'll have blank space at the bottom of your div, because the div has a fixed height with an image inside that's shorter than that fixed height. Try re-sizing [this demo](/demos/css-bg-image-scaling/no-vertical-scaling.html) to see what I'm talking about, make the window less than 800px and you'll see the box no longer scales with the image.\r\n\r\nTo get around this we can borrow a trick from Thierry Koblentz's technique for [creating intrinsic ratios for video](http://alistapart.com/article/creating-intrinsic-ratios-for-video/) to create a box that maintains the ratio of our background image. \r\n\r\nWe'll leave everything the way it is, but add one line:\r\n\r\n~~~~css\r\n.wrapper--image:before {\r\n content: \"\";\r\n display: block;\r\n max-width: 100%;\r\n background-color: #f3f;\r\n background-image: url('bg.jpg');\r\n background-repeat: no-repeat;\r\n background-size: 100%;\r\n padding-top: 55.375%;\r\n}\r\n\r\n~~~~\r\n\r\nWe've added padding to the top of the element, which forces the element to have a height (at least visually). But where did I get that number? That's the ratio of the dimensions of the background image. I simply divided the height of the image by the width of the image. In this case my image was 443px tall and 800px wide, which gives us 53.375%.\r\n\r\nHere's a [working demo](/demos/css-bg-image-scaling/vertical-scaling.html).\r\n\r\nAnd there you have it, properly scaling CSS background images on `:before` or other \"empty\" elements, pseudo or otherwise.\r\n\r\nThe only real problem with this technique is that requires you to know the dimensions of your image ahead of time. 
That won't be possible in every scenario, but if it is, this will work.\r\n", "pub_date": "2014-02-27T20:43:23", "last_updated": "2015-10-31T09:22:57.705", "enable_comments": true, "has_code": true, "status": 1, "meta_description": "CSS @media make responsive images easy, but if you want your responsive images to scale between breakpoints things get a bit trickier.", "template_name": 0, "topics": []}}, {"model": "src.entry", "pk": 12, "fields": {"title": " How Google\u2019s AMP project speeds up the Web\u2014by sandblasting HTML", "slug": "how-googles-amp-project-speeds-web-sandblasting-ht", "body_html": "

[This story originally appeared on Ars Technica, to comment and enjoy the full reading experience with images (including a TRS-80 browsing the web) you should read it over there.]

\n

There's a story going around today that the Web is too slow, especially over mobile networks. It's a pretty good story\u2014and it's a perpetual story. The Web, while certainly improved from the days of 14.4k modems, has never been as fast as we want it to be, which is to say that the Web has never been instantaneous.

\n

Curiously, rather than a focus on possible cures, like increasing network speeds, finding ways to decrease network latency, or even speeding up Web browsers, the latest version of the \"Web is too slow\" story pins the blame on the Web itself. And, perhaps more pointedly, this blame falls directly on the people who make it.

\n

The average webpage has increased in size at a terrific rate. In January 2012, the average page tracked by HTTPArchive transferred 1,239kB and made 86 requests. Fast forward to September 2015, and the average page loads 2,162kB of data and makes 103 requests. These numbers don't directly correlate to longer page load-and-render times, of course, especially if download speeds are also increasing. But these figures are one indicator of how quickly webpages are bulking up.

\n

Native mobile applications, on the other hand, are getting faster. Mobile devices get more powerful with every release cycle, and native apps take better advantage of that power.

\n

So as the story goes, apps get faster, the Web gets slower. This is allegedly why Facebook must invent Facebook Instant Articles, why Apple News must be built, and why Google must now create Accelerated Mobile Pages (AMP). Google is late to the game, but AMP has the same goal as Facebook's and Apple's efforts\u2014making the Web feel like a native application on mobile devices. (It's worth noting that all three solutions focus exclusively on mobile content.)

\n

For AMP, two things in particular stand in the way of a lean, mean browsing experience: JavaScript... and advertisements that use JavaScript. The AMP story is compelling. It has good guys (Google) and bad guys (everyone not using Google Ads), and it's true to most of our experiences. But this narrative has some fundamental problems. For example, Google owns the largest ad server network on the Web. If ads are such a problem, why doesn't Google get to work speeding up the ads?

\n

There are other potential issues looming with the AMP initiative as well, some as big as the state of the open Web itself. But to think through the possible ramifications of AMP, first you need to understand Google's new offering itself.

\n

What is AMP?

\n

To understand AMP, you first need to understand Facebook's Instant Articles. Instant Articles use RSS and standard HTML tags to create an optimized, slightly stripped-down version of an article. Facebook then allows for some extra rich content like auto-playing video or audio clips. Despite this, Facebook claims that Instant Articles are up to 10 times faster than their siblings on the open Web. Some of that speed comes from stripping things out, while some likely comes from aggressive caching.

\n

But the key is that Instant Articles are only available via Facebook's mobile apps\u2014and only to established publishers who sign a deal with Facebook. That means reading articles from Facebook's Instant Article partners like National Geographic, BBC, and Buzzfeed is a faster, richer experience than reading those same articles when they appear on the publisher's site. Apple News appears to work roughly the same way, taking RSS feeds from publishers and then optimizing the content for delivery within Apple's application.

\n

All this app-based content delivery cuts out the Web. That's a problem for the Web and, by extension, for Google, which leads us to Google's Accelerated Mobile Pages project.

\n

Unlike Facebook Articles and Apple News, AMP eschews standards like RSS and HTML in favor of its own little modified subset of HTML. AMP HTML looks a lot like HTML without the bells and whistles. In fact, if you head over to the AMP project announcement, you'll see an AMP page rendered in your browser. It looks like any other page on the Web.

\n

AMP markup uses an extremely limited set of tags. Form tags? Nope. Audio or video tags? Nope. Embed tags? Certainly not. Script tags? Nope. There's a very short list of the HTML tags allowed in AMP documents available over on the project page. There's also no JavaScript allowed. Those ads and tracking scripts will never be part of AMP documents (but don't worry, Google will still be tracking you).

\n

AMP defines several of its own tags, things like amp-youtube, amp-ad, or amp-pixel. The extra tags are part of what's known as Web components, which will likely become a Web standard (or it might turn out to be \"ActiveX part 2,\" only the future knows for sure).

\n

So far AMP probably sounds like a pretty good idea\u2014faster pages, no tracking scripts, no JavaScript at all (and so no overlay ads about signing up for newsletters). However, there are some problematic design choices in AMP. (At least, they're problematic if you like the open Web and current HTML standards.)

\n

AMP re-invents the wheel for images by using the custom component amp-img instead of HTML's img tag, and it does the same thing with amp-audio and amp-video rather than use the HTML standard audio and video. AMP developers argue that this allows AMP to serve images only when required, which isn't possible with the HTML img tag. That, however, is a limitation of Web browsers, not HTML itself. AMP has also very clearly treated accessibility as an afterthought. You lose more than just a few HTML tags with AMP.

\n

In other words, AMP is technically half baked at best. (There are dozens of open issues calling out some of the most egregious decisions in AMP's technical design.) The good news is that AMP developers are listening. One of the worst things about AMP's initial code was the decision to disable pinch-and-zoom on articles, and thankfully, Google has reversed course and eliminated the tag that prevented pinch and zoom.

\n

But AMP's markup language is really just one part of the picture. After all, if all AMP really wanted to do was strip out all the enhancements and just present the content of a page, there are existing ways to do that. Speeding things up for users is a nice side benefit, but the point of AMP, as with Facebook Articles, looks to be more about locking in users to a particular site/format/service. In this case, though, the \"users\" aren't you and I as readers; the \"users\" are the publishers putting content on the Web.

\n

It's the ads, stupid

\n

The goal of Facebook Instant Articles is to keep you on Facebook. No need to explore the larger Web when it's all right there in Facebook, especially when it loads so much faster in the Facebook app than it does in a browser.

\n

Google seems to have recognized what a threat Facebook Instant Articles could be to Google's ability to serve ads. This is why Google's project is called Accelerated Mobile Pages. Sorry, desktop users, Google already knows how to get ads to you.

\n

If you watch the AMP demo, which shows how AMP might work when it's integrated into search results next year, you'll notice that the viewer effectively never leaves Google. AMP pages are laid over the Google search page in much the same way that outside webpages are loaded in native applications on most mobile platforms. The experience from the user's point of view is just like the experience of using a mobile app.

\n

Google needs the Web to be on par with the speeds in mobile apps. And to its credit, the company has some of the smartest engineers working on the problem. Google has made one of the fastest Web browsers (if not the fastest) by building Chrome, and in doing so the company has pushed other vendors to speed up their browsers as well. Since Chrome debuted, browsers have become faster and better at an astonishing rate. Score one for Google.

\n

The company has also been touting the benefits of mobile-friendly pages, first by labeling them as such in search results on mobile devices and then later by ranking mobile-friendly pages above not-so-friendly ones when other factors are equal. Google has been quick to adopt speed-improving new HTML standards like the responsive images effort, which was first supported by Chrome. Score another one for Google.

\n

But pages keep growing faster than network speeds, and the Web slows down. In other words, Google has tried just about everything within its considerable power as a search behemoth to get Web developers and publishers large and small to speed up their pages. It just isn't working.

\n

One increasingly popular reaction to slow webpages has been the use of content blockers, typically browser add-ons that stop pages from loading anything but the primary content of the page. Content blockers have been around for over a decade now (NoScript first appeared for Firefox in 2005), but their use has largely been limited to the desktop. That changed in Apple's iOS 9, which for the first time put simple content-blocking tools in the hands of millions of mobile users.

\n

Combine all the eyeballs that are using iOS with content blockers, reading Facebook Instant Articles, and perusing Apple News, and you suddenly have a whole lot of eyeballs that will never see any Google ads. That's a problem for Google, one that AMP is designed to fix.

\n

Static pages that require Google's JavaScript

\n

The most basic thing you can do on the Web is create a flat HTML file that sits on a server and contains some basic tags. This type of page will always be lightning fast. It's also insanely simple. This is literally all you need to do to put information on the Web. There's no need for JavaScript, no need even for CSS.

\n

This is more or less the sort of page AMP wants you to create (AMP doesn't care if your pages are actually static or\u2014more likely\u2014generated from a database. The point is what's rendered is static). But then AMP wants to turn around and require that each page include a third-party script in order to load. AMP deliberately sets the opacity of the entire page to 0 until this script loads. Only then is the page revealed.
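
Conceptually, the required boilerplate amounts to hiding the entire document with CSS until the AMP runtime un-hides it. Here is a simplified sketch of the idea, not the exact rules AMP ships:

~~~~css
/* Simplified sketch of the AMP boilerplate idea -- not the exact rules AMP ships. */

/* The page starts out invisible... */
body {
    opacity: 0;
}

/* ...and stays invisible until the AMP JavaScript runtime loads and reveals it.
   A noscript fallback restores visibility when JavaScript is disabled, but a
   script that simply fails to load gets no such rescue. */
~~~~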

\n

This is a little odd; as developer Justin Avery writes, \"Surely the document itself is going to be faster than loading a library to try and make it load faster.\"

\n

Pinboard.in creator Maciej Ceg\u0142owski put that claim to the test, putting together a demo page that duplicates the AMP project homepage (itself an AMP page) without that JavaScript. Over a 3G connection, Ceg\u0142owski's page fills the viewport in 1.9 seconds. The AMP homepage takes 9.2 seconds. JavaScript slows down page load times, even when that JavaScript is part of Google's plan to speed up the Web.

\n

Ironically, for something that is ostensibly trying to encourage better behavior from developers and publishers, this means that pages using progressive enhancement, keeping scripts to a minimum and aggressively caching content\u2014in other words sites following best practices and trying to do things right\u2014may be slower in AMP.

\n

In the end, developers and publishers who have been following best practices for Web development and don't rely on dozens of tracking networks and ads have little to gain from AMP. Unfortunately, the publishers building their sites like that right now are few and far between. Most publishers have much to gain from generating AMP pages\u2014at least in terms of speed. Google says that AMP can improve page speed index scores by between 15 and 85 percent. That huge range is likely a direct result of how many third-party scripts are being loaded on some sites.

\n

The dependency on JavaScript has another detrimental effect. AMP documents depend on JavaScript, which is to say that if their (albeit small) script fails to load for some reason\u2014say, you're going through a tunnel on a train or only have a flaky one-bar connection at the beach\u2014the AMP page is completely blank. When an AMP page fails, it fails spectacularly.

\n

Google knows better than this. Even Gmail still offers a pure HTML-based fallback version of itself.

\n

AMP for publishers

\n

Under the AMP bargain, all big media has to do is give up its ad networks. And interactive maps. And data visualizations. And comment systems.

\n

Your WordPress blog can get in on the stripped-down AMP action as well. Given that WordPress powers roughly 24 percent of all sites on the Web, having an easy way to generate AMP documents from WordPress means a huge boost in adoption for AMP. It's certainly possible to build fast websites using WordPress, but it's also easy to do the opposite. WordPress plugins often have a dramatic (negative) impact on load times. It isn't uncommon to see a WordPress site loading not just one but several external JavaScript libraries because the user installed three plugins that each use a different library. AMP neatly solves that problem by stripping everything out.

\n

So why would publishers want to use AMP? Google, while its influence has dipped a tad across industries (as Facebook and Twitter continue to drive more traffic), remains a powerful driver of traffic. When Google promises more eyeballs on their stories, big media listens.

\n

AMP isn't trying to get rid of the Web as we know it; it just wants to create a parallel one. Under this system, publishers would not stop generating regular pages, but they would also start generating AMP files, usually (judging by the early adopter examples) by appending /amp to the end of the URL. The AMP page and the canonical page would reference each other through standard HTML tags. User agents could then pick and choose between them. That is, Google's Web crawler might grab the AMP page, but desktop Firefox might hit the AMP page and redirect to the canonical URL.

\n

On one hand, what this amounts to is that after years of telling the Web to stop making m-dot mobile-specific websites, Google is telling the Web to make /amp-specific mobile pages. On the other hand, this nudges publishers toward an idea that's big in the IndieWeb movement: Publish (on your) Own Site, Syndicate Elsewhere (or POSSE for short).

\n

The idea is to own the canonical copy of the content on your own site but then to send that content everywhere you can. Or rather, everywhere you want to reach your readers. Facebook Instant Article? Sure, hook up the RSS feed. Apple News? Send the feed over there, too. AMP? Sure, generate an AMP page. No need to stop there\u2014tap the new Medium API and half a dozen others as well.

\n

Reading is a fragmented experience. Some people will love reading on the Web, some via RSS in their favorite reader, some in Facebook Instant Articles, some via AMP pages on Twitter, some via Lynx in their terminal running on a restored TRS-80 (seriously, it can be done. See below). The beauty of the POSSE approach is that you can reach them all from a single, canonical source.

\n

AMP and the open Web

\n

While AMP has problems and just might be designed to lock publishers into a Google-controlled format, so far it does seem friendlier to the open Web than Facebook Instant Articles.

\n

In fact, if you want to be optimistic, you could look at AMP as the carrot that Google has been looking for in its effort to speed up the Web. As noted Web developer (and AMP optimist) Jeremy Keith writes in a piece on AMP, \"My hope is that the current will flow in both directions. As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask 'Why can't our regular pages be this fast?' By showing that there is life beyond big bloated invasive webpages, perhaps the AMP project will work as a demo of what the whole Web could be.\"

\n

Not everyone is that optimistic about AMP, though. Developer and Author Tim Kadlec writes, \"[AMP] doesn't feel like something helping the open Web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the Web... Using a very specific tool to build a tailored version of my page in order to 'reach everyone' doesn't fit any definition of the 'open Web' that I've ever heard.\"

\n

There's one other important aspect to AMP that helps speed up their pages: Google will cache your pages on its CDN for free. \"AMP is caching... You can use their caching if you conform to certain rules,\" writes Dave Winer, developer and creator of RSS, in a post on AMP. \"If you don't, you can use your own caching. I can't imagine there's a lot of difference unless Google weighs search results based on whether you use their code.\"

", "body_markdown": "[**This story originally appeared on Ars Technica, to comment and enjoy the full reading experience with images (including a TRS-80 browsing the web) you should read it over there.**]\r\n\r\nThere's a story going around today that the Web is too slow, especially over mobile networks. It's a pretty good story\u2014and it's a perpetual story. The Web, while certainly improved from the days of 14.4k modems, has never been as fast as we want it to be, which is to say that the Web has never been instantaneous.\r\n\r\nCuriously, rather than a focus on possible cures, like increasing network speeds, finding ways to decrease network latency, or even speeding up Web browsers, the latest version of the \"Web is too slow\" story pins the blame on the Web itself. And, perhaps more pointedly, this blame falls directly on the people who make it.\r\n\r\nThe average webpage has increased in size at a terrific rate. In January 2012, the average page tracked by HTTPArchive [transferred 1,239kB and made 86 requests](http://httparchive.org/trends.php?s=All&minlabel=Oct+1+2012&maxlabel=Oct+1+2015#bytesTotal&reqTotal). Fast forward to September 2015, and the average page loads 2,162kB of data and makes 103 requests. These numbers don't directly correlate to longer page load-and-render times, of course, especially if download speeds are also increasing. But these figures are one indicator of how quickly webpages are bulking up.\r\n\r\nNative mobile applications, on the other hand, are getting faster. Mobile devices get more powerful with every release cycle, and native apps take better advantage of that power.\r\n\r\nSo as the story goes, apps get faster, the Web gets slower. This is allegedly why Facebook must invent Facebook Instant Articles, why Apple News must be built, and why Google must now create [Accelerated Mobile Pages](http://arstechnica.com/information-technology/2015/10/googles-new-amp-html-spec-wants-to-make-mobile-websites-load-instantly/) (AMP). Google is late to the game, but AMP has the same goal as Facebook's and Apple's efforts\u2014making the Web feel like a native application on mobile devices. (It's worth noting that all three solutions focus exclusively on mobile content.)\r\n\r\nFor AMP, two things in particular stand in the way of a lean, mean browsing experience: JavaScript... and advertisements that use JavaScript. The AMP story is compelling. It has good guys (Google) and bad guys (everyone not using Google Ads), and it's true to most of our experiences. But this narrative has some fundamental problems. For example, Google owns the largest ad server network on the Web. If ads are such a problem, why doesn't Google get to work speeding up the ads?\r\n\r\nThere are other potential issues looming with the AMP initiative as well, some as big as the state of the open Web itself. But to think through the possible ramifications of AMP, first you need to understand Google's new offering itself.\r\n\r\n## What is AMP?\r\n\r\nTo understand AMP, you first need to understand Facebook's Instant Articles. Instant Articles use RSS and standard HTML tags to create an optimized, slightly stripped-down version of an article. Facebook then allows for some extra rich content like auto-playing video or audio clips. Despite this, Facebook claims that Instant Articles are up to 10 times faster than their siblings on the open Web. 
Some of that speed comes from stripping things out, while some likely comes from aggressive caching.\r\n\r\nBut the key is that Instant Articles are only available via Facebook's mobile apps\u2014and only to established publishers who sign a deal with Facebook. That means reading articles from Facebook's Instant Article partners like National Geographic, BBC, and Buzzfeed is a faster, richer experience than reading those same articles when they appear on the publisher's site. Apple News appears to work roughly the same way, taking RSS feeds from publishers and then optimizing the content for delivery within Apple's application.\r\n\r\nAll this app-based content delivery cuts out the Web. That's a problem for the Web and, by extension, for Google, which leads us to Google's Accelerated Mobile Pages project.\r\n\r\nUnlike Facebook Articles and Apple News, AMP eschews standards like RSS and HTML in favor of its own little modified subset of HTML. AMP HTML looks a lot like HTML without the bells and whistles. In fact, if you head over to the [AMP project announcement](https://www.ampproject.org/how-it-works/), you'll see an AMP page rendered in your browser. It looks like any other page on the Web.\r\n\r\nAMP markup uses an extremely limited set of tags. Form tags? Nope. Audio or video tags? Nope. Embed tags? Certainly not. Script tags? Nope. There's a very short list of the HTML tags allowed in AMP documents available over on the [project page](https://github.com/ampproject/amphtml/blob/master/spec/amp-html-format.md). There's also no JavaScript allowed. Those ads and tracking scripts will never be part of AMP documents (but don't worry, Google will still be tracking you).\r\n\r\nAMP defines several of its own tags, things like amp-youtube, amp-ad, or amp-pixel. The extra tags are part of what's known as [Web components](http://www.w3.org/TR/components-intro/), which will likely become a Web standard (or it might turn out to be \"ActiveX part 2,\" only the future knows for sure).\r\n\r\nSo far AMP probably sounds like a pretty good idea\u2014faster pages, no tracking scripts, no JavaScript at all (and so no overlay ads about signing up for newsletters). However, there are some problematic design choices in AMP. (At least, they're problematic if you like the open Web and current HTML standards.)\r\n\r\nAMP re-invents the wheel for images by using the custom component amp-img instead of HTML's img tag, and it does the same thing with amp-audio and amp-video rather than use the HTML standard audio and video. AMP developers argue that this allows AMP to serve images only when required, which isn't possible with the HTML img tag. That, however, is a limitation of Web browsers, not HTML itself. AMP has also very clearly treated [accessibility](https://en.wikipedia.org/wiki/Computer_accessibility) as an afterthought. You lose more than just a few HTML tags with AMP.\r\n\r\nIn other words, AMP is technically half baked at best. (There are dozens of open issues calling out some of the [most](https://github.com/ampproject/amphtml/issues/517) [egregious](https://github.com/ampproject/amphtml/issues/481) [decisions](https://github.com/ampproject/amphtml/issues/545) in AMP's technical design.) The good news is that AMP developers are listening. 
One of the worst things about AMP's initial code was the decision to disable pinch-and-zoom on articles, and thankfully, Google has reversed course and [eliminated the tag that prevented pinch and zoom](https://github.com/ampproject/amphtml/issues/592).\r\n\r\nBut AMP's markup language is really just one part of the picture. After all, if all AMP really wanted to do was strip out all the enhancements and just present the content of a page, there are existing ways to do that. Speeding things up for users is a nice side benefit, but the point of AMP, as with Facebook Articles, looks to be more about locking in users to a particular site/format/service. In this case, though, the \"users\" aren't you and I as readers; the \"users\" are the publishers putting content on the Web.\r\n\r\n## It's the ads, stupid\r\n\r\nThe goal of Facebook Instant Articles is to keep you on Facebook. No need to explore the larger Web when it's all right there in Facebook, especially when it loads so much faster in the Facebook app than it does in a browser.\r\n\r\nGoogle seems to have recognized what a threat Facebook Instant Articles could be to Google's ability to serve ads. This is why Google's project is called Accelerated Mobile Pages. Sorry, desktop users, Google already knows how to get ads to you.\r\n\r\nIf you watch the [AMP demo](https://googleblog.blogspot.com/2015/10/introducing-accelerated-mobile-pages.html), which shows how AMP might work when it's integrated into search results next year, you'll notice that the viewer effectively never leaves Google. AMP pages are laid over the Google search page in much the same way that outside webpages are loaded in native applications on most mobile platforms. The experience from the user's point of view is just like the experience of using a mobile app.\r\n\r\nGoogle needs the Web to be on par with the speeds in mobile apps. And to its credit, the company has some of the smartest engineers working on the problem. Google has made one of the fastest Web browsers (if not the fastest) by building Chrome, and in doing so the company has pushed other vendors to speed up their browsers as well. Since Chrome debuted, browsers have become faster and better at an astonishing rate. Score one for Google.\r\n\r\nThe company has also been touting the benefits of mobile-friendly pages, first by labeling them as such in search results on mobile devices and then later by ranking mobile friendly pages above not-so-friendly ones when other factors are equal. Google has been quick to adopt speed-improving new HTML standards like the responsive images effort, which was first supported by Chrome. Score another one for Google.\r\n\r\nBut pages keep growing faster than network speeds, and the Web slows down. In other words, Google has tried just about everything within its considerable power as a search behemoth to get Web developers and publishers large and small to speed up their pages. It just isn't working.\r\n\r\nOne increasingly popular reaction to slow webpages has been the use of content blockers, typically browser add-ons that stop pages from loading anything but the primary content of the page. Content blockers have been around for over a decade now (No Script first appeared for Firefox in 2005), but their use has largely been limited to the desktop. 
That changed in Apple's iOS 9, which for the first time put simple content-blocking tools in the hands of millions of mobile users.\r\n\r\nCombine all the eyeballs that are using iOS with content blockers, reading Facebook Instant Articles, and perusing Apple News, and you suddenly have a whole lot of eyeballs that will never see any Google ads. That's a problem for Google, one that AMP is designed to fix.\r\n\r\n## Static pages that require Google's JavaScript\r\n\r\nThe most basic thing you can do on the Web is create a flat HTML file that sits on a server and contains some basic tags. This type of page will always be lightning fast. It's also insanely simple. This is literally all you need to do to put information on the Web. There's no need for JavaScript, no need even for CSS.\r\n\r\nThis is more or less the sort of page AMP wants you to create (AMP doesn't care if your pages are actually static or\u2014more likely\u2014generated from a database. The point is what's rendered is static). But then AMP wants to turn around and require that each page include a third-party script in order to load. AMP deliberately sets the opacity of the entire page to 0 until this script loads. Only then is the page revealed.\r\n\r\nThis is a little odd; as developer Justin Avery [writes](https://responsivedesign.is/articles/whats-the-deal-with-accelerated-mobile-pages-amp), \"Surely the document itself is going to be faster than loading a library to try and make it load faster.\"\r\n\r\nPinboard.in creator Maciej Ceg\u0142owski did just that, putting together a demo page that duplicates the AMP-based AMP homepage without that JavaScript. Over a 3G connection, Ceg\u0142owski's page fills the viewport in [1.9 seconds](http://www.webpagetest.org/result/151016_RF_VNE/). The AMP homepage takes [9.2 seconds](http://www.webpagetest.org/result/151016_9J_VNN/). JavaScript slows down page load times, even when that JavaScript is part of Google's plan to speed up the Web.\r\n\r\nIronically, for something that is ostensibly trying to encourage better behavior from developers and publishers, this means that pages using progressive enhancement, keeping scripts to a minimum and aggressively caching content\u2014in other words sites following best practices and trying to do things right\u2014may be slower in AMP.\r\n\r\nIn the end, developers and publishers who have been following best practices for Web development and don't rely on dozens of tracking networks and ads have little to gain from AMP. Unfortunately, the publishers building their sites like that right now are few and far between. Most publishers have much to gain from generating AMP pages\u2014at least in terms of speed. Google says that AMP can improve page speed index scores by between 15 to 85 percent. That huge range is likely a direct result of how many third-party scripts are being loaded on some sites.\r\n\r\nThe dependency on JavaScript has another detrimental effect. AMP documents depend on JavaScript, which is to say that if their (albeit small) script fails to load for some reason\u2014say, you're going through a tunnel on a train or only have a flaky one-bar connection at the beach\u2014the AMP page is completely blank. When an AMP page fails, it fails spectacularly.\r\n\r\nGoogle knows better than this. Even Gmail still offers a pure HTML-based fallback version of itself.\r\n\r\n## AMP for publishers\r\n\r\nUnder the AMP bargain, all big media has to do is give up its ad networks. And interactive maps. And data visualizations. 
And comment systems.\r\n\r\nYour WordPress blog can get in on the stripped-down AMP action as well. Given that WordPress powers roughly 24 percent of all sites on the Web, having an easy way to generate AMP documents from WordPress means a huge boost in adoption for AMP. It's certainly possible to build fast websites using WordPress, but it's also easy to do the opposite. WordPress plugins often have a dramatic (negative) impact on load times. It isn't uncommon to see a WordPress site loading not just one but several external JavaScript libraries because the user installed three plugins that each use a different library. AMP neatly solves that problem by stripping everything out.\r\n\r\nSo why would publishers want to use AMP? Google, while its influence has dipped a tad across industries (as Facebook and Twitter continue to drive more traffic), remains a powerful driver of traffic. When Google promises more eyeballs on their stories, big media listens.\r\n\r\nAMP isn't trying to get rid of the Web as we know it; it just wants to create a parallel one. Under this system, publishers would not stop generating regular pages, but they would also start generating AMP files, usually (judging by the early adopter examples) by appending /amp to the end of the URL. The AMP page and the canonical page would reference each other through standard HTML tags. User agents could then pick and choose between them. That is, Google's Web crawler might grab the AMP page, but desktop Firefox might hit the AMP page and redirect to the canonical URL.\r\n\r\nOn one hand, what this amounts to is that after years of telling the Web to stop making m. mobile-specific websites, Google is telling the Web to make /amp-specific mobile pages. On the other hand, this nudges publishers toward an idea that's big in the [IndieWeb movement](http://indiewebcamp.com/): Publish (on your) Own Site, Syndicate Elsewhere (or [POSSE](http://indiewebcamp.com/POSSE) for short).\r\n\r\nThe idea is to own the canonical copy of the content on your own site but then to send that content everywhere you can. Or rather, everywhere you want to reach your readers. Facebook Instant Article? Sure, hook up the RSS feed. Apple News? Send the feed over there, too. AMP? Sure, generate an AMP page. No need to stop there\u2014tap the new Medium API and half a dozen others as well.\r\n\r\nReading is a fragmented experience. Some people will love reading on the Web, some via RSS in their favorite reader, some in Facebook Instant Articles, some via AMP pages on Twitter, some via Lynx in their terminal running on a [restored TRS-80](http://arstechnica.com/information-technology/2015/08/surfing-the-internet-from-my-trs-80-model-100/) (seriously, it can be done. See below). The beauty of the POSSE approach is that you can reach them all from a single, canonical source.\r\n\r\n## AMP and the open Web\r\n\r\nWhile AMP has problems and just might be designed to lock publishers into a Google-controlled format, so far it does seem friendlier to the open Web than Facebook Instant Articles.\r\n\r\nIn fact, if you want to be optimistic, you could look at AMP as the carrot that Google has been looking for in its effort to speed up the Web. As noted Web developer (and AMP optimist) Jeremy Keith [writes](https://adactio.com/journal/9646) in a piece on AMP, \"My hope is that the current will flow in both directions. 
As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask 'Why can't our regular pages be this fast?' By showing that there is life beyond big bloated invasive webpages, perhaps the AMP project will work as a demo of what the whole Web could be.\"\r\n\r\nNot everyone is that optimistic about AMP, though. Developer and Author Tim Kadlec [writes](https://timkadlec.com/2015/10/amp-and-incentives/), \"[AMP] doesn't feel like something helping the open Web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the Web... Using a very specific tool to build a tailored version of my page in order to 'reach everyone' doesn't fit any definition of the 'open Web' that I've ever heard.\"\r\n\r\nThere's one other important aspect to AMP that helps speed up their pages: Google will cache your pages on its CDN for free. \"AMP is caching... You can use their caching if you conform to certain rules,\" writes Dave Winer, developer and creator of RSS, [in a post on AMP](http://scripting.com/2015/10/10/supportingStandardsWithoutAllThatNastyInterop.html). \"If you don't, you can use your own caching. I can't imagine there's a lot of difference unless Google weighs search results based on whether you use their code.\"\r\n", "pub_date": "2015-11-05T16:42:44", "last_updated": "2015-11-07T08:30:05.033", "enable_comments": false, "has_code": false, "status": 1, "meta_description": "A story I wrote for Ars Technica about Google's AMP project and why it's a bad idea", "template_name": 0, "topics": [6]}}, {"model": "src.book", "pk": 1, "fields": {"title": "Build A Better Web with Responsive Design", "image": "images/src/book-cover_wcyhPCG.png", "slug": "responsive-web-design", "body_html": "

It\u2019s time to stop fighting the web\u2019s flexible nature. Discover how simple tools like CSS media queries, fluid layouts and flexible media can transform your website from fixed-width failure to responsive success.

\n

Build websites like Bruce Lee. Kung Fu legend Bruce Lee told students to \u201cbe like water\u201d. Lee was no web developer, but his words also describe the core of responsive design \u2013 your sites are water, flowing into the mobile future. As Lee said, \u201cYou put water into a bottle it becomes the bottle; you put it in a teapot it becomes the teapot.\u201d

\n

In Build a Better Web with Responsive Web Design I\u2019ll teach you how to build websites that flow like water. You\u2019ll learn everything you need to know to handle the multi-device world of today. I\u2019ll even show you how to be \u201cfuture-friendly\u201d so your sites will work not just on the devices of today, but also the hot new devices five years from now.

\n
\n\n

What You\u2019ll Get Out of This Book

\n\n
\n\n
\n

Don't Believe Me?

\n

Go on then, read a sample chapter right now. No mailing list to join, no hoops to jump through. Just click the link to the PDF file and see if you like it. If you do, you can buy a copy below for less than you'll probably pay for dinner tonight.

\n

And note that all payment processing is done by Gumroad. They're great, much better than Paypal. And naturally this site and Gumroad are both served over a secure connection so all your data is safe. Unless the NSA have already broken into your house and hacked your hardware — can't help you there. But hey, you'll still get a great book about responsive design.

\n
", "body_markdown": "It\u2019s time to stop fighting the web\u2019s flexible nature. Discover how simple tools like CSS media queries, fluid layouts and flexible media can transform your website from fixed-width failure to responsive success.\r\n\r\nBuild websites like Bruce Lee. Kung Fu legend Bruce Lee told students to \u201cbe like water\u201d. Lee was no web developer, but his words also describe the core of responsive design \u2013 your sites are water, flowing into the mobile future. As Lee said, \u201cYou put water into a bottle it becomes the bottle; you put it in a teapot it becomes the teapot.\u201d\r\n\r\nIn Build a Better Web with Responsive Web Design I\u2019ll teach you how to build websites that flow like water. You\u2019ll learn everything you need to know to handle the multi-device world of today. I\u2019ll even show you how to be \u201cfuture-friendly\u201d so your sites will work not just on the devices of today, but also the hot new devices five years from now.\r\n\r\n
\r\n\r\n

What You\u2019ll Get Out of This Book

\r\n\r\n
\r\n
\r\n

Don't Believe Me?

\r\n

Go on then, read a sample chapter right now. No mailing list to join, no hoops to jump through. Just click the link to the PDF file and see if you like it. If you do, you can buy a copy below for less than you'll probably pay for dinner tonight.

\r\n

And note that all payment processing is done by Gumroad. They're great, much better than Paypal. And naturally this site and Gumroad are both served over a secure connection so all your data is safe. Unless the NSA have already broken into your house and hacked your hardware — can't help you there. But hey, you'll still get a great book about responsive design.

\r\n
\r\n\r\n", "pub_date": "2014-03-07T08:31:15", "last_updated": "2015-11-26T20:07:11.727", "status": 1, "price": 29.0, "price_sale": 17.0, "meta_description": "", "pages": "248", "template_name": "details/src_book.html"}}]