Diffstat (limited to 'src/published')
-rw-r--r-- src/published/2013-09-20_whatever-happened-to-webmonkey.txt 43
-rw-r--r-- src/published/2014-02-07_html5-placeholder-label-search-forms.txt 102
-rw-r--r-- src/published/2014-02-10_install-nginx-debian-ubuntu.txt 250
-rw-r--r-- src/published/2014-02-27_scaling-responsive-images-css.txt 60
-rw-r--r-- src/published/2014-06-10_protect-your-privacy-with-ghostery.txt 148
-rw-r--r-- src/published/2014-08-02_get-smarter-pythons-built-in-help.txt 37
-rw-r--r-- src/published/2014-08-05_how-my-two-year-old-twins-made-me-a-better-programmer.txt 40
-rw-r--r-- src/published/2014-08-11_building-faster-responsive-websites-webpagetest.txt 124
-rw-r--r-- src/published/2014-09-04_history-of-picture-element.txt 199
-rw-r--r-- src/published/2015-01-24_how-to-write-ebook.txt 58
-rw-r--r-- src/published/2015-04-02_complete-guide-ssh-keys.txt 128
-rw-r--r-- src/published/2015-04-03_set-up-secure-first-vps.txt 147
-rw-r--r-- src/published/2015-10-28_pass.txt 37
-rw-r--r-- src/published/2015-11-05_how-googles-amp-project-speeds-web-sandblasting-ht.txt 107
-rw-r--r-- src/published/2019-04-07_why-and-how-ditch-vagrant-for-lxd.txt 216
-rw-r--r-- src/published/arch-philosophy.txt 24
-rw-r--r-- src/published/backup-2.txt 23
-rw-r--r-- src/published/command-line-searchable-text-snippets.txt 116
-rw-r--r-- src/published/technology.txt 56
19 files changed, 0 insertions, 1915 deletions
diff --git a/src/published/2013-09-20_whatever-happened-to-webmonkey.txt b/src/published/2013-09-20_whatever-happened-to-webmonkey.txt
deleted file mode 100644
index 112ce46..0000000
--- a/src/published/2013-09-20_whatever-happened-to-webmonkey.txt
+++ /dev/null
@@ -1,43 +0,0 @@
----
-title: Whatever Happened to Webmonkey.com?
-pub_date: 2013-09-20 21:12:31
-slug: /blog/2013/09/whatever-happened-to-webmonkey
-metadesc: Wired has shut down Webmonkey.com for the fourth and probably final time.
-
----
-
-People on Twitter have been asking what's up with [Webmonkey.com][1]. Originally I wanted to get this up on Webmonkey, but I got locked out of the CMS before I managed to do that, so I'm putting it here.
-
-Earlier this year Wired decided to stop producing new content for Webmonkey.
-
-For those keeping track at home, this is the fourth, and I suspect final, time Webmonkey has been shut down (previously it was shut down in 1999, 2004 and 2006).
-
-I've been writing for Webmonkey.com since 2000, full time since 2006 (when it came back from the dead for a third run). And for the last two years I have been the sole writer, editor and producer of the site.
-
-Like so many of you, I learned how to build websites from Webmonkey. But it was more than just great tutorials and how-tos. Part of what made Webmonkey great was that it was opinionated and rough around the edges. Webmonkey was not the product of professional writers; it was written and created by the web nerds building Wired's websites. It was written by people like us, for people like us.
-
-I'll miss Webmonkey not just because it was my job for many years, but because at this point it feels like a family dog to me, it's always been there and suddenly it's not. Sniff. I'll miss you Webmonkey.
-
-Quite a few people have asked me why it was shut down, but unfortunately I don't have many details to share. I've always been a remote employee, not in San Francisco at all in fact, and consequently somewhat out of the loop. What I can say is that Webmonkey's return to Wired in 2006 was the doing of long-time Wired editor Evan Hansen ([now at Medium][2]). Evan was a tireless champion of Webmonkey and saved it from the Conde Nast ax several times. He was also one of the few at Wired who "got" Webmonkey. When Evan left Wired earlier this year I knew Webmonkey's days were numbered.
-
-I don't begrudge Wired for shutting Webmonkey down. While I have a certain nostalgia for its heyday, even I know it's been a long time since Webmonkey was leading the way in web design. I had neither the staff nor the funding to make Webmonkey anything like its early 2000s self. In that sense I'm glad it was shut down rather than simply fading further into obscurity.
-
-I am very happy that Wired has left the site in place. As far as I know Webmonkey (and its ever-popular cheat sheets, which still get a truly astounding amount of traffic) will remain available on the web. That said, note to the [Archive Team][3], it wouldn't hurt to create a backup. Sadly, many of the very earliest writings have already been lost in the various CMS transitions over the years and even much of what's there now has incorrect bylines. Still, at least most of it's there. For now.
-
-As for me, I've decided to go back to what I enjoyed most about the early days of Webmonkey: teaching people how to make cool stuff for the web.
-
-To that end I'm currently working on a book about responsive design, which I'm hoping to make available by the end of October. If you're interested drop your email in the box below and I'll let you know when it's out (alternately you can follow [@LongHandPixels][4] on Twitter).
-
-If you have any questions or want more details use the comments box below.
-
-In closing, I'd like to thank some people at Wired -- thank you to my editors over the years, especially [Michael Calore][5], [Evan Hansen][6] and [Leander Kahney][7] who all made me a much better writer. Also thanks to Louise for always making sure I got paid. And finally, to everyone who read Webmonkey and contributed over the years, whether with articles or even just a comment, thank you.
-
-Cheers and, yes, thanks for all the bananas.
-
-[1]: http://www.webmonkey.com/
-[2]: https://medium.com/@evanatmedium
-[3]: http://www.archiveteam.org/index.php?title=Main_Page
-[4]: https://twitter.com/LongHandPixels
-[5]: http://snackfight.com/
-[6]: https://twitter.com/evanatmedium
-[7]: http://www.cultofmac.com/about/
diff --git a/src/published/2014-02-07_html5-placeholder-label-search-forms.txt b/src/published/2014-02-07_html5-placeholder-label-search-forms.txt
deleted file mode 100644
index b1c083e..0000000
--- a/src/published/2014-02-07_html5-placeholder-label-search-forms.txt
+++ /dev/null
@@ -1,102 +0,0 @@
----
-title: HTML5 Placeholder as a Label in Search Forms
-pub_date: 2014-02-07 14:38:20
-slug: /blog/2014/02/html5-placeholder-label-search-forms
-metadesc: Using HTML5's placeholder attribute instead of a label is never a good idea. Except when maybe it is.
-tags: Best Practices
-code: True
-tutorial: True
----
-
-The HTML5 form input attribute `placeholder` is a tempting replacement for the good old `<label>` form element.
-
-In fact the web is littered with sites that use `placeholder` instead of labels (or worse, JavaScript to make the `value` attribute act like `label`).
-
-Just because a practice is widespread does not make it a *best* practice though. Remember "skip intro"? I rest my case. Similarly, **you should most definitely not use `placeholder` as a substitute for form labels**. It may be a pattern on today's web, but it's a shitty pattern.
-
-Labels help users complete forms. There are [mountains](http://rosenfeldmedia.com/books/web-form-design/) of [data](http://css-tricks.com/label-placement-on-forms/) and [eye tracking studies](http://www.uxmatters.com/mt/archives/2006/07/label-placement-in-forms.php) to back this up. If you want people to actually fill out your forms (as opposed, I guess, to your forms just looking "so clean, so elegant") then you want to use labels. The best forms, from a usability standpoint, are forms with non-bold, left-aligned labels above the field they label.
-
-Again, **using placeholder as a substitute for labels is a horrible UI pattern that you should (almost) never use.**
-
-Is that dogmatic enough for you? Oh wait, *almost* never. Yes, I think there is one specific case where maybe this pattern makes sense: search forms.
-
-Search forms are so ubiquitous and so well understood at this point that it may be redundant to have a label that says "search", a placeholder that also says "search" and a button that says "search" as well. I think just two of those would be fine.
-
-We could skip the placeholder text, which should really be more of a hint anyway -- e.g. "Jane Doe" rather than "Your Name" -- but what if we want to dispense with the label to save a bit of screen real estate, which can be at a premium on smaller viewports?
-
-The label should still be part of the actual HTML, whether your average sighted user actually sees it or not. We need it there for accessibility. But with search forms, well, maybe you can tuck that label away, out of sight.
-
-Progressive enhancement dictates that the labels should most definitely be there though. Let's consider a simple search form example:
-
-~~~.language-markup
-<form action="/search" method="get">
- <label id="search-label" for="search">Search:</label>
- <input type="text" name="search" id="query" value="" placeholder="Search LongHandPixels">
- <input class="btn" type="submit" value="Search">
-</form>
-~~~
-
-Here we have our `<label>` tag and use the `for` attribute to bind it with the text input that is our search field. So far, so good for best practices.
-
-Here's what I think is the progressive enhancement route for search forms: use the HTML above and then use JavaScript and CSS to hide away the label when it makes sense to do so. In other words, don't just hide the label in CSS.
-
-Hiding the label with something like `label {visibility: hidden;}` is a bad idea. That, and its evil cousin `display: none;`, hide elements from screen readers and other assistive devices. Instead we'd want to do something like this:
-
-~~~.language-css
-.search-form-label-class {
- position: absolute;
- left: -999em;
-}
-~~~
-
-Check out Aaron Gustafson's ALA article <cite>[Now You See Me](http://alistapart.com/article/now-you-see-me)</cite> for more details on the various ways to hide things visually without hiding them from people who may need them the most.
-
-So this code is better, our label is off-screen and the placeholder text combined with descriptive button text serve the same purpose and still make the function of the form clear. The main problem we have right now is we've hidden the label in every browser, even browsers that won't display the `placeholder` attribute. That's not so great.
-
-In this case you might argue that the button still makes the form function relatively clear, but I think we can do better. Instead of adding a rule to our stylesheet, let's use a bit of JavaScript to apply our CSS only if the browser understands the `placeholder` attribute. Here's a bit of code to do that:
-
-~~~.language-javascript
-<script>
-if (("placeholder" in document.createElement("input"))) {
- document.getElementById("search-label").style.position= 'absolute';
- document.getElementById("search-label").style.left= '-999em';
-}
- </script>
-~~~
-
-This is just plain JavaScript; if your site already has jQuery or some other library running then by all means use its functions to select your elements and apply the CSS. The point is the `if` statement, which tests whether the current browser supports the `placeholder` attribute. If it does, then we hide the label off-screen; if it doesn't, nothing happens. Either way the element remains accessible to screen readers.
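-
-For example, if jQuery is already on the page, the same test and hiding could look something like this (just a sketch; it assumes the same `search-label` id used above):
-
-~~~.language-javascript
-if ("placeholder" in document.createElement("input")) {
-  // Move the label off-screen only when placeholder is supported
-  $("#search-label").css({position: "absolute", left: "-999em"});
-}
-~~~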
-
-If you'd like to see it in action, here's a working demo: [HTML5 placeholder as a label in search form](https://longhandpixels.net/demos/html5-placeholder/)
-
-So is this a good idea? Honestly, I don't know. It might be splitting hairs. I think it's okay for search forms or other single field forms where there's less chance users will be confused when the placeholder text disappears.
-
-Pros:
-
-* Saves space (no label, which can be a big help on small viewports)
-* Still offers good accessibility
-
-Cons:
-
-* **Technically this is wrong**. I'm essentially using JavaScript to make `placeholder` take the place of a label, which is not what `placeholder` is for.
-* **Placeholders can be confusing**. Some people won't start typing a search term because they're waiting for the field to clear. Others will think that the field is filled and the form can be submitted. See Chris Coyier's CSS-Tricks site for some ideas on how [you can make it apparent that the field](http://css-tricks.com/hang-on-placeholders/) is ready for input.
-* **Not good for longer forms**. Again, multi-field forms need labels. Placeholders disappear when you start typing. If you forget which field you're in after you start typing, placeholders are no help. Despite the fact that the web is littered with forms that do this, please don't. Use labels on longer forms.
-
-I'm doing this with the search form on this site. I started to do the same with my new mailing list sign up form (which isn't live yet), which is what got me writing this, thinking aloud as it were. My newsletter sign up form will have two fields, email and name (only email is required), and after thinking about this some more I deleted the JavaScript and left the labels.
-
-If I shorten the form to just email, which I may, depending on some A/B testing I'm doing, then I may use this technique there too (and A/B test again). As of now though I don't think it's a good idea for that form for the reasons mentioned above.
-
-I'm curious to hear from users, what do you think of this pattern? Is this okay? Unnecessary? Bad idea? Let me know what you think.
-
-## Further Reading:
-
-* The W3C's Web Platform Docs on the [placeholder](http://docs.webplatform.org/wiki/html/attributes/placeholder) attribute.
-* Luke Wroblewski's book <cite>[Web Form Design](http://rosenfeldmedia.com/books/web-form-design/)</cite> is the Bible of building good forms.
-* A little taste of Wroblewski's book over on his blog: [Web Application Form Design](http://www.lukew.com/ff/entry.asp?1502).
-* UXMatter's once did some [eyeball tracking studies](http://www.uxmatters.com/mt/archives/2006/07/label-placement-in-forms.php) based on Wroblewski's book.
-* Aaron Gustafson's ALA article [Now You See Me](http://alistapart.com/article/now-you-see-me), which talks about best practices for hiding elements with JavaScript.
-* CSS-Tricks: [Hang On Placeholders](http://css-tricks.com/hang-on-placeholders/).
-* [Jeremy Keith on `placeholder`](http://adactio.com/journal/6147/).
-* CSS-Tricks: [Places It's Tempting To Use Display: None; But Don't](http://css-tricks.com/places-its-tempting-to-use-display-none-but-dont/). Seriously, don't.
-{^ .list--indented }
-
-
diff --git a/src/published/2014-02-10_install-nginx-debian-ubuntu.txt b/src/published/2014-02-10_install-nginx-debian-ubuntu.txt
deleted file mode 100644
index e6cc835..0000000
--- a/src/published/2014-02-10_install-nginx-debian-ubuntu.txt
+++ /dev/null
@@ -1,250 +0,0 @@
----
-title: Install Nginx on Debian/Ubuntu
-pub_date: 2014-02-10 11:35:31
-slug: /blog/2014/02/install-nginx-debian-ubuntu
-tags: Web Servers
-metadesc: A complete guide to installing and configuring Nginx to serve static files (on a Digital Ocean or similar VPS)
-code: True
-tutorial: True
-
----
-
-I recently helped a friend set up his first Nginx server and in the process realized I didn't have a good working reference for how I set up Nginx.
-
-So, for myself, my friend and anyone else looking to get started with Nginx, here's my somewhat opinionated guide to installing and configuring Nginx to serve static files. Which is to say, this is how I install and set up Nginx to serve my own and my clients' static files, whether those files are simply stylesheets, images and JavaScript or full static sites like this one. What follows is what I believe are best practices for Nginx[^1]; if you know better, please correct me in the comments.
-
-[This post was last updated <span class="dt-updated updated" datetime="2014-11-05T12:04:25" itemprop="datePublished"><span>05 November 2014</span></span>]
-
-## Nginx Beats Apache for Static Content[^2]
-
-I've written before about how static website generators like [Jekyll](http://jekyllrb.com), [Pelican](http://blog.getpelican.com) and [Cactus](https://github.com/koenbok/Cactus) are a great way to prototype websites in a hurry. They're also great tools for actually managing sites, not just "blogs". There are in fact some very large websites powered by these "blogging" engines. President Obama's very successful fundraising website [ran on Jekyll](http://kylerush.net/blog/meet-the-obama-campaigns-250-million-fundraising-platform/).
-
-Whether you're just building a quick live prototype or running an actual live website of static files, you'll need a good server. So why not use Apache? Simply put, Apache is overkill.
-
-Unlike Apache, which is a jack-of-all-trades server, Nginx was really designed to do just a few things well, one of which is to offer a simple, fast, lightweight server for static files. And Nginx is really, really good at serving static files. In fact, in my experience Nginx with PageSpeed, gzip, far future expires headers and a couple other extras I'll mention is faster than serving static files from Amazon S3[^3] (potentially even faster in the future if Verizon and its ilk [really do](http://netneutralitytest.com/) start [throttling cloud-based services](http://davesblog.com/blog/2014/02/05/verizon-using-recent-net-neutrality-victory-to-wage-war-against-netflix/)).
-
-## Nginx is Different from Apache
-
-In its quest to be lightweight and fast, Nginx takes a different approach to modules than you're probably familiar with in Apache. In Apache you can dynamically load various features using modules. You just add something like `LoadModule alias_module modules/mod_alias.so` to your Apache config files and just like that Apache loads the alias module.
-
-Unlike Apache, Nginx cannot dynamically load modules. Nginx has available what it has available when you install it.
-
-That means if you really want to customize and tweak it, it's best to install Nginx from source. You don't *have* to install it from source. But if you really want a screaming fast server, I suggest compiling Nginx yourself, enabling and disabling exactly the modules you need. Installing Nginx from source allows you to add some third-party tools, most notably Google's PageSpeed module, which has some fantastic tools for speeding up your site.
-
-Luckily, installing Nginx from source isn't too difficult. Even if you've never compiled any software from source, you can install Nginx. The remainder of this post will show you exactly how.
-
-## My Ideal Nginx Setup for Static Sites
-
-Before we start installing, let's go over the things we'll be using to build a fast, lightweight server with Nginx.
-
-* [Nginx](http://nginx.org).
-* [SPDY](http://www.chromium.org/spdy/spdy-protocol) -- Nginx offers "experimental support for SPDY", but it's not enabled by default. We're going to enable it when we install Nginx. In my testing SPDY support has worked without a hitch, experimental or otherwise.
-* [Google Page Speed](https://developers.google.com/speed/pagespeed/module) -- Part of Google's effort to make the web faster, the Page Speed Nginx module "automatically applies web performance best practices to pages and associated assets".
-* [Headers More](https://github.com/agentzh/headers-more-nginx-module/) -- This isn't really necessary from a speed standpoint, but I often like to set custom headers and hide some headers (like which version of Nginx your server is running). Headers More makes that very easy.
-* [Naxsi](https://github.com/nbs-system/naxsi) -- Naxsi is a "Web Application Firewall module for Nginx". It's not really all that important for a server limited to static files, but it adds an extra layer of security should you decided to use Nginx as a proxy server down the road.
-
-So we're going to install Nginx with SPDY support and three third-party modules.
-
-Okay, here's the step-by-step process to installing Nginx on a Debian 7 (or Ubuntu) server. If you're looking for a good, cheap VPS host I've been happy with [Digital Ocean](https://www.digitalocean.com/?refcode=3bda91345045) (that's an affiliate link that will help support LongHandPixels; if you prefer, here's a non-affiliate link: [link](https://www.digitalocean.com/))
-
-The first step is to make sure you're installing the latest release of Nginx. To do that check the [Nginx download page](http://nginx.org/en/download.html) for the latest version of Nginx (at the time of writing that's 1.7.7).
-
-Okay, SSH into your server and let's get started.
-
-While these instructions will work on just about any server, the one thing that will be different is how you install the various prerequisites needed to compile Nginx.
-
-On a Debian/Ubuntu server you'd do this:
-
-~~~.language-bash
-$ sudo apt-get -y install build-essential zlib1g-dev libpcre3 libpcre3-dev libbz2-dev libssl-dev tar unzip
-~~~
-
-If you're using RHEL/Cent/Fedora you'll want these packages:
-
-~~~.language-bash
-$ sudo yum install gcc-c++ pcre-dev pcre-devel zlib-devel make
-~~~
-
-After you have the prerequisites installed it's time to grab the latest version of Google's Pagespeed module. Google's [Nginx PageSpeed installation instructions](https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source) are pretty good, so I'll reproduce them here.
-
-First grab the latest version of PageSpeed, which is currently 1.9.32.2, but check the sources since it updates frequently and change this first variable to match the latest version.
-
-~~~.language-bash
-NPS_VERSION=1.9.32.2
-wget https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}-beta.zip
-unzip release-${NPS_VERSION}-beta.zip
-~~~
-
-Now, before we compile PageSpeed we need to grab `psol`, which PageSpeed needs to function properly. So, let's `cd` into the `ngx_pagespeed-release-${NPS_VERSION}-beta` folder and grab `psol`:
-
-~~~.language-bash
-cd ngx_pagespeed-release-${NPS_VERSION}-beta/
-wget https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz
-tar -xzvf ${NPS_VERSION}.tar.gz
-cd ../
-~~~
-
-Alright, so the `ngx_pagespeed` module is all set up and ready to install. All we have to do at this point is tell Nginx where to find it.
-
-Now let's grab the Headers More and Naxsi modules as well. Again, check the [Headers More](https://github.com/agentzh/headers-more-nginx-module/) and [Naxsi](https://github.com/nbs-system/naxsi) pages to see what the latest stable version is and adjust the version numbers in the following accordingly.
-
-~~~.language-bash
-HM_VERSION=0.25
-wget https://github.com/agentzh/headers-more-nginx-module/archive/v${HM_VERSION}.tar.gz
-tar -xvzf v${HM_VERSION}.tar.gz
-NAX_VERSION=0.53-2
-wget https://github.com/nbs-system/naxsi/archive/${NAX_VERSION}.tar.gz
-tar -xvzf ${NAX_VERSION}.tar.gz
-~~~
-
-Now that we have all three third-party modules ready to go, the last thing we'll grab is a copy of Nginx itself:
-
-~~~.language-bash
-NGINX_VERSION=1.7.7
-wget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz
-tar -xvzf nginx-${NGINX_VERSION}.tar.gz
-~~~
-
-Then we `cd` into the Nginx folder and compile. So, first:
-
-~~~.language-bash
-cd nginx-${NGINX_VERSION}/
-~~~
-
-So now we're inside the Nginx folder, let's configure our installation. We'll add in all our extras and turn off a few things we don't need. Or at least they're things I don't need; if you need the mail modules, then delete those lines. If you don't need SSL, you might want to skip that as well. Here's the configure command I use (Note: all paths are for Debian servers, you'll have to adjust the various paths accordingly for RHEL/Cent/Fedora servers):
-
-
-~~~.language-bash
-./configure \
- --add-module=$HOME/naxsi-${NAX_VERSION}/naxsi_src \
- --prefix=/usr/share/nginx \
- --sbin-path=/usr/sbin/nginx \
- --conf-path=/etc/nginx/nginx.conf \
- --pid-path=/var/run/nginx.pid \
- --lock-path=/var/lock/nginx.lock \
- --error-log-path=/var/log/nginx/error.log \
- --http-log-path=/var/log/access.log \
- --user=www-data \
- --group=www-data \
- --without-mail_pop3_module \
- --without-mail_imap_module \
- --without-mail_smtp_module \
- --with-http_stub_status_module \
- --with-http_ssl_module \
- --with-http_spdy_module \
- --with-http_gzip_static_module \
- --add-module=$HOME/ngx_pagespeed-release-${NPS_VERSION}-beta \
-    --add-module=$HOME/headers-more-nginx-module-${HM_VERSION}
-~~~
-
-There are a few things worth noting here. First off make sure that Naxsi is first. Here's what the [Naxsi wiki page](https://github.com/nbs-system/naxsi/wiki/installation) has to say on that score: "Nginx will decide the order of modules according the order of the module's directive in Nginx's ./configure. So, no matter what (except if you really know what you are doing) put Naxsi first in your ./configure. If you don't do so, you might run into various problems, from random/unpredictable behaviors to non-effective WAF." The last thing you want is to think you have a web application firewall running when in fact you don't, so stick with Naxsi first.
-
-There are a couple other things you might want to add to this configuration. If you're going to be serving large files, larger than your average 1.5MB HTML page, consider adding the line: `--with-file-aio \`, which is apparently faster than the stock `sendfile` option. See [here](https://calomel.org/nginx.html) for more details. There are quite a few other modules available. A [full list of the default modules](http://wiki.nginx.org/Modules) can be found on the Nginx site. Read through that and if there's another module you need, you can add it to that config list.
-
-Okay, we've told Nginx what to do, now let's actually install it:
-
-~~~.language-bash
-make
-sudo make install
-~~~
-
-Once `make install` finishes doing its thing you'll have Nginx all set up.
-
-Congrats! You made it.
-
-The next step is to add Nginx to the list of things your server starts up automatically whenever it reboots. Since we installed Nginx from scratch we need to tell the underlying system what we did.
-
-## Make it Autostart
-
-Since we compiled from source rather than using Debian/Ubuntu's package management tools, the underlying system isn't aware of Nginx's existence. That means it won't automatically start it up when the system boots. In order to ensure that Nginx does start on boot we'll have to manually add Nginx to our server's list of startup services. That way, should we need to reboot, Nginx will automatically restart when the server does.
-
-To do that I use the [Debian init script](https://github.com/MovLib/www/blob/master/bin/init-nginx.sh) listed in the [Nginx InitScripts page](http://wiki.nginx.org/InitScripts):
-
-If that works for you, grab the raw version:
-
-~~~.language-bash
-wget https://raw.githubusercontent.com/MovLib/www/develop/etc/init.d/nginx.sh
-# I had to edit the DAEMON var to point to nginx
-# change line 63 in the file to:
-DAEMON=/usr/sbin/nginx
-# then move it to /etc/init.d/nginx
-sudo mv nginx.sh /etc/init.d/nginx
-# make it executable:
-sudo chmod +x /etc/init.d/nginx
-# then just:
-sudo service nginx start #also restart, reload, stop etc
-~~~
-
-I suggest taking the last bit and turning it into an alias in your `bashrc` or `zshrc` file so that you can quickly restart/reload the server when you need it. Here's what I use:
-
-~~~.language-bash
-alias xrestart="sudo service nginx restart"
-alias xreload="sudo service nginx reload"
-~~~
-
-Okay, so we now have the initialization script all set up; now let's make Nginx start up on reboot. In theory this should do it:
-
-~~~.language-bash
-sudo update-rc.d -f nginx defaults
-~~~
-
-But that didn't work for me with my Digital Ocean Debian 7 x64 droplet (which complained that "`insserv rejected the script header`"). I didn't really feel like troubleshooting that at the time; I was feeling lazy so I decided to use chkconfig instead. To do that I just installed chkconfig and added Nginx:
-
-~~~.language-bash
-sudo apt-get install chkconfig
-sudo chkconfig --add nginx
-sudo chkconfig nginx on
-~~~
-
-So there we have it, everything you need to get Nginx installed with SPDY, PageSpeed, Headers More and Naxsi. A blazing fast server for static files.
-
-After that it's just a matter of configuring Nginx, which is entirely dependent on how you're using it. For static setups like this my configuration is pretty minimal.
-
-Before we get to that though, here's the first thing I do: edit `/etc/nginx/nginx.conf` down to something pretty simple. This is the root config so I keep it limited to an `http` block that turns on a few things I want globally and an include statement that loads site-specific config files. Something a bit like this:
-
-~~~.language-bash
-user www-data;
-events {
- worker_connections 1024;
-}
-http {
- include mime.types;
- include /etc/nginx/naxsi_core.rules;
- default_type application/octet-stream;
- types_hash_bucket_size 64;
- server_names_hash_bucket_size 128;
- log_format main '$remote_addr - $remote_user [$time_local] "$request" '
- '$status $body_bytes_sent "$http_referer" '
- '"$http_user_agent" "$http_x_forwarded_for"';
-
- access_log logs/access.log main;
- more_set_headers "Server: My Custom Server";
- keepalive_timeout 65;
- gzip on;
- pagespeed on;
- pagespeed FileCachePath /var/ngx_pagespeed_cache;
- include /etc/nginx/sites-enabled/*.conf;
-}
-~~~
-
-A few things to note. I've included the core rules file from the Naxsi source. To make sure that file exists, we need to copy it over to `/etc/nginx/`.
-
-~~~.language-bash
-sudo cp naxsi-0.53-2/naxsi_config/naxsi_core.rules /etc/nginx/
-~~~
-
-Now let's restart the server so it picks up these changes:
-
-~~~.language-bash
-sudo service nginx restart
-~~~
-
-Or, if you took my suggestion of creating an alias, you can type: `xrestart` and Nginx will restart itself.
-
-With this configuration we have a good basic setup and any `.conf` files you add to the folder `/etc/nginx/sites-enabled/` will be included automatically. So if you want to create a conf file for mydomain.com, you'd create the file `/etc/nginx/sites-enabled/mydomain.conf` and put the configuration for that domain in that file.
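-
-For example, a minimal static-file config for that domain might look something like this (a sketch only -- the domain and root path are placeholders you'd swap for your own):
-
-~~~.language-bash
-server {
-    listen 80;
-    server_name mydomain.com;
-    root /var/www/mydomain.com;
-    index index.html;
-
-    location / {
-        # Serve the file if it exists, otherwise return a 404
-        try_files $uri $uri/ =404;
-    }
-}
-~~~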
-
-I'm going to post a follow-up on how I configure Nginx very soon. In the meantime, here's a pretty comprehensive [guide to configuring Nginx](https://calomel.org/nginx.html) in a variety of scenarios. And remember, if you want some more helpful tips and tricks for web developers, sign up for the mailing list below.
-
-[^1]: If you're more experienced with Nginx and I'm totally bass-akward about something in this guide, please let me know.
-[^2]: In my experience anyway. Probably Apache can be tuned to get pretty close to Nginx's performance with static files, but it's going to take quite a bit of work. One is not necessarily better, but there are better tools for different jobs.
-[^3]: That said, obviously a CDN service like Cloudfront will, in most cases, be much faster than Nginx or any other server.
diff --git a/src/published/2014-02-27_scaling-responsive-images-css.txt b/src/published/2014-02-27_scaling-responsive-images-css.txt
deleted file mode 100644
index 15fc129..0000000
--- a/src/published/2014-02-27_scaling-responsive-images-css.txt
+++ /dev/null
@@ -1,60 +0,0 @@
----
-title: Scaling Responsive Images in CSS
-pub_date: 2014-02-27 12:04:25
-slug: /blog/2014/02/scaling-responsive-images-css
-tags: Responsive Web Design, Responsive Images
-metadesc: Media queries make responsive images a snap in CSS, but if you want your responsive images to scale between breakpoints things get a bit trickier.
-code: True
-tutorial: True
----
-
-It's pretty easy to handle images responsively with CSS. Just use `@media` queries to swap images at various breakpoints in your design.
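-
-For example, something along these lines swaps in a smaller background image below an arbitrary 600px breakpoint (the class name, file names and breakpoint here are all placeholders):
-
-~~~.language-css
-.hero {
-    background-image: url('photo-large.jpg');
-}
-
-@media (max-width: 600px) {
-    .hero {
-        background-image: url('photo-small.jpg');
-    }
-}
-~~~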
-
-It's slightly trickier to get those images to be fluid and scale in between breakpoints. Or rather, it's not hard to get them to scale horizontally, but what about vertical scaling?
-
-Imagine this scenario. You have a div with a paragraph inside it and you want to add a background using the `:before` pseudo element -- just a decorative image behind some text. You can set the max-width to 100% to get the image to fluidly scale in width, but what about scaling the height?
-
-That's a bit trickier, or at least it tripped me up for a minute the other day. I started with this:
-
-~~~.language-css
-.wrapper--image:before {
- content: "";
- display: block;
- max-width: 100%;
-    height: auto;
- background-color: #f3f;
- background-image: url('bg.jpg');
- background-repeat: no-repeat;
- background-size: 100%;
- }
-~~~
-
-Do that and you'll see... nothing. Okay, I expected that. Setting height to auto doesn't work because the pseudo element has no real content, which means its default height is zero. Okay, how do I fix that?
-
-You might try setting the height to the height of your background image. That works whenever the div is the size of, or larger than, the image. But the minute your image scales down at all you'll have blank space at the bottom of your div, because the div has a fixed height with an image inside that's shorter than that fixed height. Try re-sizing [this demo](/demos/css-bg-image-scaling/no-vertical-scaling.html) to see what I'm talking about, make the window less than 800px and you'll see the box no longer scales with the image.
-
-To get around this we can borrow a trick from Thierry Koblentz's technique for [creating intrinsic ratios for video](http://alistapart.com/article/creating-intrinsic-ratios-for-video/) to create a box that maintains the ratio of our background image.
-
-We'll leave everything the way it is, but add one line:
-
-~~~.language-css
-.wrapper--image:before {
- content: "";
- display: block;
- max-width: 100%;
- background-color: #f3f;
- background-image: url('bg.jpg');
- background-repeat: no-repeat;
- background-size: 100%;
- padding-top: 55.375%;
-}
-~~~
-
-We've added padding to the top of the element, which forces the element to have a height (at least visually). But where did I get that number? That's the ratio of the dimensions of the background image. I simply divided the height of the image by the width of the image. In this case my image was 443px tall and 800px wide, and 443 / 800 = 0.55375, which gives us 55.375%.
-
-Here's a [working demo](/demos/css-bg-image-scaling/vertical-scaling.html).
-
-And there you have it, properly scaling CSS background images on `:before` or other "empty" elements, pseudo or otherwise.
-
-The only real problem with this technique is that it requires you to know the dimensions of your image ahead of time. That won't be possible in every scenario, but if it is, this will work.
diff --git a/src/published/2014-06-10_protect-your-privacy-with-ghostery.txt b/src/published/2014-06-10_protect-your-privacy-with-ghostery.txt
deleted file mode 100644
index 983936c..0000000
--- a/src/published/2014-06-10_protect-your-privacy-with-ghostery.txt
+++ /dev/null
@@ -1,148 +0,0 @@
----
-title: How to Protect Your Online Privacy with Ghostery
-pub_date: 2014-05-29 12:04:25
-slug: /blog/2014/05/protect-your-privacy-ghostery
-metadesc: How to install and configure the Ghostery browser add-on for maximum online privacy
-tutorial: True
-
----
-
-There's an invisible web that lies just below the web you see everyday. That invisible web is tracking the sites you visit, the pages you read, the things you like, the things you favorite and collating all that data into a portrait of things you are likely to purchase. And all this happens without anyone asking your consent.
-
-Not much has changed since [I wrote about online tracking years ago on Webmonkey][1]. Back then visiting five websites meant "somewhere between 21 and 47 other websites learn about your visit to those five". That number just continues to grow.
-
-If that doesn't bother you, and you could not care less who is tracking you, then this is not the tutorial for you.
-
-However, if the extent of online tracking bothers you and you want to do something about it, there is some good news. In fact it's not that hard to stop all that tracking.
-
-To protect your privacy online you'll just need to add a tool like [Ghostery](https://www.ghostery.com/) or [Do Not Track Plus](https://www.abine.com/index.html) to your web browser. Both will work, but I happen to use Ghostery so that's what I'm going to show you how to set up.
-
-## Install and Set Up Ghostery in Firefox, Chrome/Chromium, Opera and Safari.
-
-The first step is to install the Ghostery extension for your web browser. To do that, just head over to the [Ghostery downloads page](https://www.ghostery.com/en/download) and click the install button that's highlighted for your browser.
-
-Some browsers will ask you if you want to allow the add-on to be installed. In Firefox just click "Allow" and then click "Install Now" when the installation window opens up.
-
-[![Installing add-ons in Firefox](/media/images/2014/gh-firefox-install01-tn.jpg)](/media/images/2014/gh-firefox-install01.png "View Image 1")
-: In Firefox click Allow...
-
-[![Installing add-ons in Firefox 2](/media/images/2014/gh-firefox-install02-tn.jpg)](/media/images/2014/gh-firefox-install02.png "View Image 2")
-: ...and then Install Now
-
-If you're using Chrome just click the Add button.
-
-[![Installing extensions in Chrome/Chromium](/media/images/2014/gh-chrome-install01-tn.jpg)](/media/images/2014/gh-chrome-install01.jpg "View Image 3")
-: Installing extensions in Chrome/Chromium
-
-Ghostery is now installed, but out of the box Ghostery doesn't actually block anything. That's why, once you have it installed, Ghostery should have opened a new window or tab that looks like this:
-
-[![The Ghostery install wizard](/media/images/2014/gh-first-screen-tn.jpg)](/media/images/2014/gh-first-screen.jpg "View Image 4")
-: The Ghostery install wizard
-
-This is the series of screens that walk you through the process of setting up Ghostery to block sites that would like to track you.
-
-Before I dive into setting up Ghostery, it's important to understand that some of what Ghostery can block will limit what you see on the web. For example, Disqus is a very popular third-party comment system. It happens to track you as well. If you block that tracking though you won't see comments on a lot of sites.
-
-There are two ways around this. One is to decide that you trust Disqus and allow it to run on any site. The second is to only allow Disqus on sites where you want to read the comments. I'll show you how to set up both options.
-
-## Configuring Ghostery
-
-First we have to configure Ghostery. Click the right arrow on that first screen to get started. That will lead you to this screen:
-
-[![The Ghostery install wizard, page 2](/media/images/2014/gh-second-screen-tn.jpg)](/media/images/2014/gh-second-screen.jpg "View Image 5")
-: The Ghostery install wizard, page 2
-
-If you want to help Ghostery get better you can check this box. Then click the right arrow again and you'll see a page asking if you want to enable the Alert Bubble.
-
-[![The Ghostery install wizard, page 3](/media/images/2014/gh-third-screen-tn.jpg)](/media/images/2014/gh-third-screen.jpg "View Image 6")
-: The Ghostery install wizard, page 3
-
-This is Ghostery's little alert box that comes up when you visit a new page. It will show you all the trackers that are blocked. Think of this as a little window into the invisible web. I enable this, though I change the default settings a little bit. We'll get to that in just a second.
-
-The next screen is the core of Ghostery. This is where we decide which trackers to block and which to allow.
-
-[![The Ghostery install wizard -- blocking trackers](/media/images/2014/gh-main-01-tn.jpg)](/media/images/2014/gh-main-01.jpg "View Image 7")
-: The Ghostery install wizard -- blocking trackers
-
-Out of the box Ghostery blocks nothing. Let's change that. I start by blocking everything:
-
-[![Ghostery set to block all known trackers](/media/images/2014/gh-main-02-tn.jpg)](/media/images/2014/gh-main-02.jpg "View Image 8")
-: Ghostery set to block all known trackers
-
-Ghostery will also ask if you want to block new trackers as it learns about them. I go with yes.
-
-Now chances are the setup we currently have is going to limit your ability to use some websites. To stick with the earlier example, this will mean Disqus comments are never loaded. The easiest way to fix this is to search for Disqus and enable it:
-
-[![Ghostery set to block everything but Disqus](/media/images/2014/gh-main-03-tn.jpg)](/media/images/2014/gh-main-03.jpg "View Image 9")
-: Ghostery set to block everything but Disqus
-
-Note that, along the top of the tracker list there are some buttons. This makes it easy to enable, for example, not just Disqus but every commenting system. If you'd like to do that click the "Commenting System" button and uncheck all the options:
-
-[![Filtering Ghostery by type of tracker](/media/images/2014/gh-main-04-tn.jpg)](/media/images/2014/gh-main-04.jpg "View Image 10")
-: Filtering Ghostery by type of tracker
-
-Another category of things you might want to allow is music players like those from SoundCloud. To learn more about a particular service, just click the link next to the item and Ghostery will show you what it knows, including any industry affiliations.
-
-[![Ghostery showing details on Disqus](/media/images/2014/gh-main-05-tn.jpg)](/media/images/2014/gh-main-05.jpg "View Image 11")
-: Ghostery showing details on Disqus
-
-Now you may be thinking, wait, how do I know which companies I want to allow and which I don't? Well, you don't really need to know all of them because you can enable them as you go too.
-
-Let's save what we have and test Ghostery out on a site. Click the right arrow one last time and check to make sure that the Ghostery icon is in your toolbar. If it isn't, you can click the "Add Button" button.
-
-## Ghostery in Action
-
-Okay, Ghostery is installed and blocking almost everything it knows about. But that might limit what we can do. For example, let's go visit arstechnica.com. You can see down here at the bottom of the screen there's a list of everything that's blocked.
-
-[![Ghostery showing all the trackers no longer tracking you](/media/images/2014/gh-example-01-tn.jpg)](/media/images/2014/gh-example-01.jpg "View Image 12")
-: Ghostery showing all the trackers no longer tracking you
-
-You can see in that list that right now the Twitter button is blocked. So if you scroll down to the bottom of the article and look at the author bio (which should have a Twitter button) you'll see this little Ghostery icon:
-
-[![Ghostery replaces elements it has blocked with the Ghostery icon.](/media/images/2014/gh-example-02-tn.jpg)](/media/images/2014/gh-example-02.jpg "View Image 13")
-: Ghostery replaces elements it has blocked with the Ghostery icon.
-
-That's how you will know that Ghostery has blocked something. If you were to click on that element Ghostery would load the blocked script and you'd see a Twitter button. But what if you always want to see the Twitter button? To do that we'll come up to the toolbar and click on the Ghostery icon which will reveal the blocking menu:
-
-[![The Ghostery panel.](/media/images/2014/gh-example-03-tn.jpg)](/media/images/2014/gh-example-03.jpg "View Image 14")
-: The Ghostery panel.
-
-Just slide the Twitter button to the left and Twitter's button (and accompanying tracking beacons) will be allowed after you reload the page. Whenever you return to Ars, the Twitter button will load. As I mentioned before, you can do this on a per-site basis if there are just a few sites you want to allow. To enable the Twitter button on every site, click the little check box button to the right of the slider. Realize, though, that enabling it globally will mean Twitter can track you everywhere you go.
-
-[![Enabling trackers from the Ghostery panel.](/media/images/2014/gh-example-04-tn.jpg)](/media/images/2014/gh-example-04.jpg "view image 15")
-: Enabling trackers from the Ghostery panel.
-
-This panel is essentially doing the same thing as the setup page we used earlier. In fact, we can get back to the settings page by clicking the gear icon and then the "Options" button:
-
-[![Getting back to the Ghostery setting page.](/media/images/2014/gh-example-05-tn.jpg)](/media/images/2014/gh-example-05.jpg "view image 16")
-: Getting back to the Ghostery setting page.
-
-Now, you may have noticed that the little purple panel showing you what was blocked hung around for quite a while, fifteen seconds to be exact, which is a bit long in my opinion. We can change that by clicking the Advanced tab on the Ghostery options page:
-
-
-[![The Ghostery advanced settings tab.](/media/images/2014/gh-example-06-tn.jpg)](/media/images/2014/gh-example-06.jpg "view image 17")
-: The Ghostery advanced settings tab.
-
-The first option in the list is whether or not to show the alert bubble at all, followed by the length of time it's shown. I like to set this to the minimum, 3 seconds. Other than this I leave the advanced settings at their defaults.
-
-Scroll to the bottom of the settings page, click save, and you're done setting up Ghostery.
-
-## Conclusion
-
-Now you can browse the web with a much greater degree of privacy, only allowing those companies *you* approve of to know what you're up to. And remember, any time a site isn't working the way you think it should, you can temporarily disable Ghostery by clicking the icon in the toolbar and hitting the pause blocking button down at the bottom of the Ghostery panel:
-
-[![Temporarily disable Ghostery.](/media/images/2014/gh-example-07-tn.jpg)](/media/images/2014/gh-example-07.jpg "view image 18")
-: Temporarily disable Ghostery.
-
-Also note that there is an iOS version of Ghostery, though, due to Apple's restrictions on iOS, it's an entirely separate web browser, not a plugin for Mobile Safari. If you use Firefox for Android there is a plugin available.
-
-## Further Reading:
-
-* [How To Install Ghostery (Internet Explorer)](https://www.youtube.com/watch?v=NaI17dSfPRg) -- Ghostery's guide to installing it in Internet Explorer.
-* [Secure Your Browser: Add-Ons to Stop Web Tracking][1] -- A piece I wrote for Webmonkey a few years ago that gives some more background on tracking and some other options you can use besides Ghostery.
-* [Tracking our online trackers](http://www.ted.com/talks/gary_kovacs_tracking_the_trackers) -- TED talk by Gary Kovacs, CEO of Mozilla Corp, covering online behavior tracking more generally.
-* This sort of tracking is [coming to the real world too](http://business.financialpost.com/2014/02/01/its-creepy-location-based-marketing-is-following-you-whether-you-like-it-or-not/?__lsa=e48c-7542), so there's that to look forward to.
-{^ .list--indented }
-
-
-[1]: http://www.webmonkey.com/2012/02/secure-your-browser-add-ons-to-stop-web-tracking/
diff --git a/src/published/2014-08-02_get-smarter-pythons-built-in-help.txt b/src/published/2014-08-02_get-smarter-pythons-built-in-help.txt
deleted file mode 100644
index cb9c807..0000000
--- a/src/published/2014-08-02_get-smarter-pythons-built-in-help.txt
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Get Smarter with Python's Built-In Help
-pub_date: 2014-08-01 12:04:25
-slug: /blog/2014/08/get-smarter-pythons-built-in-help
-metadesc: Sometimes you have to put down the Stack Overflow, step away from the Google and go straight to the docs. Python has great docs, here's how to use them.
-tags: Python
-code: True
-
----
-
-One of my favorite things about Python is the `help()` function. Fire up the standard Python interpreter, and import `help` from `pydoc` and you can search Python's official documentation from within the interpreter. Reading the f'ing manual from the interpreter. As it should be[^1].
-
-The `help()` function takes either an object or a keyword. The former must be imported first while the latter needs to be a string like "keyword". Whichever you use Python will pull up the standard Python docs for that object or keyword. Type `help()` without anything and you'll start an interactive help session.
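-
-Here's a quick sketch of what that looks like from the interpreter (`os.listdir` and the "lambda" keyword are just arbitrary examples):
-
-~~~.language-python
->>> from pydoc import help  # as described above
->>> import os
->>> help(os.listdir)   # an object: import it first, then pass it in
->>> help("lambda")     # a keyword: pass it as a string
->>> help()             # no argument: start an interactive help session
-~~~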
-
-The `help()` function is awesome, but there's one little catch.
-
-In order for this to work properly you need to make sure you have the `PYTHONDOCS` environment variable set on your system. On a sane operating system this will likely be in '/usr/share/doc/pythonX.X/html'. In mostly sane OSes like Debian (and probably Ubuntu/Mint, et al) you might have to explicitly install the docs with `apt-get install python-doc`, which will put the docs in `/usr/share/doc/pythonX.X-doc/html/`.
-
-If you're using OS X's built-in Python, the path to Python's docs would be:
-
-~~~.language-bash
-/System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/Resources/English.lproj/Documentation/
-~~~
-
-Note the 2.6 in that path. As far as I can tell OS X Mavericks does not ship with docs for Python 2.7, which is weird and annoying (like most things in Mavericks). If it's there and you've found it, please enlighten me in the comments below.
-
-Once you've found the documentation you can add that variable to your bash/zshrc like so:
-
-~~~.language-bash
-export PYTHONDOCS=/System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/Resources/English.lproj/Documentation/
-~~~
-
-Now fire up IPython, type `help()` and start learning rather than always hobbling along with [Google, Stack Overflow and other crutches](/blog/2014/08/how-my-two-year-old-twins-made-me-a-better-programmer).
-
-Also, PSA. If you do anything with Python, you really need to check out [IPython](http://ipython.org/). It will save you loads of time, has more awesome features than a Veg-O-Matic and [notebooks](http://ipython.org/notebook.html), don't even get me started on notebooks. And in IPython you don't even have to import help; it's already there, ready to go from the minute it starts.
-
-[^1]: The Python docs are pretty good too. Not Vim-level good, but close.
diff --git a/src/published/2014-08-05_how-my-two-year-old-twins-made-me-a-better-programmer.txt b/src/published/2014-08-05_how-my-two-year-old-twins-made-me-a-better-programmer.txt
deleted file mode 100644
index 4838814..0000000
--- a/src/published/2014-08-05_how-my-two-year-old-twins-made-me-a-better-programmer.txt
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: How My Two-Year-Old Twins Made Me a Better Programmer
-pub_date: 2014-08-05 12:04:25
-slug: /blog/2014/08/how-my-two-year-old-twins-made-me-a-better-programmer
-metadesc: To get better at programming you have to struggle. Sometimes you have to put down the Stack Overflow, step away from the Google and go straight to the docs. Open a manpage, pull up the help files, dig a little deeper and turn information into knowledge.
-tags: Python
-
----
-
-TL;DR version: "information != knowledge; knowledge != wisdom; wisdom != experience;"
-
-I have two-year-old twin girls. Every day I watch them figure out more about the world around them. Whether that's how to climb a little higher, how to put on a t-shirt, where to put something when you're done with it, or what to do with these crazy strings hanging off your shoes.
-
-It can be incredibly frustrating to watch them struggle with something new and fail. They're your children so your instinct is to step in and help. But if you step in and do everything for them they never figure out how to do any of it on their own. I've learned to wait until they ask for help.
-
-Watching them struggle and learn has made me realize that I don't let myself struggle enough and my skills are stagnating because of it. I'm happy to let Google step in and solve all my problems for me. I get work done, true, but at the expense of learning new things.
-
-I've started to think of this as the Stack Overflow problem, not because I actually blame Stack Overflow -- it's a great resource; the problem is mine -- but because it's emblematic of a problem. I use Stack Overflow, and Google more generally, as a crutch, as a way to quickly solve problems with some bit of information rather than digging deeper and turning information into actual knowledge.
-
-On one hand quick solutions can be a great thing. Searching the web lets me solve my problem and move on to the next (potentially more interesting) one.
-
-On the other hand, information (the solution to the problem at hand) is not as useful as knowledge. Snippets of code and other tiny bits of information are not going to land you a job, nor will they help you when you want to write a tutorial or a book about something. This sort of "let's just solve the problem" approach begins and ends in the task at hand. The information you get out of that is useful for the task you're doing, but knowledge is much larger than that. And I don't know about you, but I want to be more than something that's useful for finishing tasks.
-
-Information is useless to me if it isn't synthesized into personal knowledge somehow. And, for me at least, that information only becomes knowledge when I stop, back up and try to understand the *why* rather than just the *how*. Good answers on Stack Overflow explain the why, but more often than not this doesn't happen.
-
-For example, today I wanted a simple way to get Python's `os.listdir` to ignore directories. I knew that I could loop through all the returned elements and test if they were directories, but I thought perhaps there was a more elegant way of doing that (short answer, not really). The details of my problem aren't the point though; the point is that the question had barely formed in my mind and I noticed my fingers already headed for command-tab, ready to jump to the browser and cut and paste some solution from Stack Overflow.
-
-This time though I stopped myself before I pulled up my browser. I thought about my daughters in the next room. I knew that I would likely have the answer to my question in 10 seconds and also knew I would forget it and move on in 20. I was about to let easy answers step in and solve my problem for me. I was about to avoid learning something new. Sometimes that's fine, but do it too much and I'm worried I might become more of a successful cut-and-paster than a struggling programmer.
-
-Sometimes it's good to take a few minutes to read the actual docs, pull up the man pages, type `:help` or whatever and learn. It's going to take a few extra minutes. You might even take an unexpected detour from the task at hand. That might mean you learn something you didn't expect to learn. Yes, it might mean you lose a few minutes of "work" to learn. It might even mean that you fail. Sometimes the docs don't help. Then sure, Google. The important part of learning is to struggle, to apply your energy to the problem rather than just finding the solution.
-
-Sometimes you need to struggle with your shoelaces for hours, otherwise you'll never figure out how to tie them.
-
-In my specific case I decided to permanently reduce my dependency on Stack Overflow and Google. Instead of flipping to the browser I fired up the Python interpreter and typed `help(os.listdir)`. Did you know the Python interpreter has a built-in help function called, appropriately enough, `help()`? The `help()` function takes either an object or a keyword (the latter needs to be in quotes like "keyword"). If you're having trouble I wrote a quick guide to [making Python's built-in `help()` function work][1].
-
-Now, I could have learned what I wanted to know in 2 seconds using Google. Instead it took me 20 minutes[^1] to figure out. But now I understand how to do what I wanted to do and, more importantly, I understand *why* it will work. I have a new piece of knowledge and next time I encounter the same situation I can draw on my knowledge rather than turning to Google again. It's not exactly wisdom or experience yet, but it's getting closer. And when you're done solving all the little problems of day-to-day coding that's really the point -- improving your skill, learning and getting better at what you do every day.
-
-[^1]: Most of that time was spent figuring out where OS X stores Python docs, which [I won't have to do again][1]. Note to self, I gotta switch back to Linux.
-
-[1]: /blog/2014/08/get-smarter-pythons-built-in-help
diff --git a/src/published/2014-08-11_building-faster-responsive-websites-webpagetest.txt b/src/published/2014-08-11_building-faster-responsive-websites-webpagetest.txt
deleted file mode 100644
index fa82f90..0000000
--- a/src/published/2014-08-11_building-faster-responsive-websites-webpagetest.txt
+++ /dev/null
@@ -1,124 +0,0 @@
----
-title: Building Faster Responsive Websites with Webpagetest
-pub_date: 2014-08-11 12:04:25
-slug: /blog/2014/08/building-faster-responsive-sites-with-webpagetest
-metadesc: All the usual performance best practices apply to responsive web design, but there are some extra things you can do to speed up your responsive websites.
-tags: Responsive Web Design, Best Practices
-code: True
-tutorial: True
-
----
-
-All the normal best practices for speeding up your website apply to responsive web design. That is, optimize your database and backend tools first, eliminate bottlenecks, cache queries, etc. Then move on to your server where you should focus on compressing and caching everything you can.
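-
-To give a concrete flavor of the compress-and-cache advice, here's a minimal Nginx sketch (a fragment of a server block; the values are illustrative, not tuned recommendations):
-
-~~~.language-bash
-# Compress text-based responses:
-gzip on;
-gzip_types text/css application/javascript image/svg+xml;
-
-# Let browsers cache static assets for a while:
-location ~* \.(css|js|jpg|png|svg|woff)$ {
-    expires 30d;
-    add_header Cache-Control "public";
-}
-~~~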
-
-It makes no sense to optimize front end code like we're about to do if the real bottlenecks are complex database queries or other back end issues. Because those sorts of things are way beyond the scope of this article, I'll assume that you, or the team responsible for it, have already optimized the back end and everything is cached as much as possible.
-
-Before we dive into how you can use Webpagetest to speed up your responsive web sites let's clarify what we mean by "speed up".
-
-What we really care about when we're trying to speed up a page is the *time to first render*. That is, the time it takes to get something visible on the screen. The overall page load time is secondary to getting *something* -- ideally the most important content -- on the screen as fast as possible.
-
-Give the viewer something; the rest of the page can load in the background.
-
-The first step is to do some basic front-end optimization -- eliminate blocking scripts, compress images, minify files, turn on cache headers, use a CDN for static assets and all the other well established best practices for speeding up pages. Run your site through [Google PageSpeed Insights](https://developers.google.com/speed/pagespeed/insights/) and read through Yahoo's [Best Practices for Speeding Up Your Web Site](http://developer.yahoo.com/performance/rules.html) (it's old but it's still a great reference).
-
-Remember that the single biggest win for most sites will be reducing image size.
-
-There are many tools for testing your site and running performance audits that look at specific areas of potential optimization. There's desktop software that simulates all kinds of network conditions and web browsers, but for now we'll stick with [Webpagetest.org](http://www.webpagetest.org/) (hereafter, WPT), a free, online testing tool.
-
-When you first visit WPT, you'll see a screen that looks like this:
-
-[![Webpagetest.org](/media/images/2014/wpt-01-tn.jpg)](/media/images/2014/wpt-01.jpg "View Image 1")
-: Webpagetest basics
-
-Before you do anything, expand the advanced options. This is where the good stuff is hidden. The first thing to do here is change the number of tests to run. I like to go with 5. If you want you can use a higher number (the max is 9), but it will take longer. Use odd numbers here since WPT uses the median for results.
-
-I like to set it to First View Only and check Capture Video so I can see the page load for myself. Here's what this would look like (note that I've also set it to emulate a 3G connection and the device is set to iPhone 4):
-
-[![Webpagetest advanced config options](/media/images/2014/wpt-02-tn.jpg)](/media/images/2014/wpt-02.jpg "View Image 2")
-: Webpagetest advanced config options
-
-Before we actually run the test I want to point out a slightly hidden, but super cool feature. Click the scripts tab to the right. See the empty box? What the? Well, click the documentation link and have a look at all the possibilities. There's a ton of stuff you can do here, but let me give you an example of one very powerful feature -- the `setDnsName` option.
-
-You can use this to test how much your CDN is helping your page load times. To do that you'd enter your overrides in the format: `setDnsName <name-to-override> <override-name>`. If your CDN served files from say `www.domain.com` and your origin was `origin.domain.com` you'd enter this in the box:
-
-~~~.language-bash
-setDnsName www.domain.com origin.domain.com
-~~~
-
-That way you can run the test both with and without your CDN and compare the results without altering the code on your actual site.
-
-Right now we'll keep it simple and run an ordinary test, so hit the yellow button. Depending on how many times you told it to run, this may take a little while. Eventually you'll get a page that looks like this:
-
-[![The test results overview page](/media/images/2014/wpt-03-tn.jpg)](/media/images/2014/wpt-03.jpg "View Image 3")
-: The test results overview page
-
-The main results there at the top are pulled from the median run. In this case that's run 3. Notice the Plot Full Results link below the main table. Follow that link to see a breakdown of all your tests, which can be useful to see if there are any anomalies in load times. Since there weren't any anomalous results for this page, let's take a closer look at the median, run 3.
-
-So what jumps out here? Well, the Fully Loaded time is almost 15 seconds. I consider that terrible, but it actually passes for reasonably fast over 3G these days. Why is it so bad? Well, I picked this URL for a reason: it has nearly a dozen images, which take a while to load.
-
-But I'm not really interested in total load times, I want to get a sense of how long it takes to get something on the page. The number we want to look at for that information is the Start Render time. Here you can see it's about 2.5 seconds for the median run.
-
-In this case though I'm going to focus on the worst test, which is run 1, where nothing appears on the screen for 5 seconds. That's terrible, though it is a lot better than 15 seconds.
-
-Now I know there's a lot I can do to improve the overall load time of this page, but what I'm most interested in is shaving down that 5 seconds before anything shows up. To get a better idea of what might be causing that delay I'll go back to the test results page and click on the waterfall view.
-
-[![The test results waterfall view](/media/images/2014/wpt-04-tn.jpg)](/media/images/2014/wpt-04.jpg "View Image 4")
-: The test results waterfall view
-
-Here I can see that there are some redirects going on. I recently switched from http to https and haven't updated this post, so I need to do that. Then there's the images themselves, which are most likely the bottleneck. I ran the same test conditions on another page on my site that doesn't have any images at all and, as you can see from this filmstrip image (yes, you can export WPT filmstrips as images -- look for the link that says "Export filmstrip as an image..."), the above the fold content is visible in 1 second:
-
-![The filmstrip view of a more typical Longhandpixels URL](/media/images/2014/wpt-05.jpg)
-
-So the problem with the first URL is likely three-fold: the size of the images, the number of images and how they're loaded.
-
-I was in a hurry to get that post up, so I just let my `max-width: 100%;` rule for responsive images handle scaling down the very large images. In short I did what your clients will likely do -- be lazy and skip image optimization. I really need to automate an image optimization setup on my server, but in lieu of that, I manually resized the images, ran them through [ImageOptim](https://imageoptim.com/) (if you want a Linux equivalent check out [Trimage](http://trimage.org/), which hasn't been updated in years, but runs just fine for me in Debian stable and testing) and reran the tests:
-
-[![Test results showing 3 second load time](/media/images/2014/wpt-06-tn.jpg)](/media/images/2014/wpt-06.jpg "View Image 6")
-: Things are getting better
-
-That's a little better. We're down to a worst case scenario of a 3 second load time over a 3G connection. So it looks as if my hunch is right: the images are the bottleneck.
-
-I'm convinced, but suppose this were a client site and I wanted to show them why they need to optimize their images. You know what makes a powerful argument for image optimization? Making your client sit through those painfully slow load times. So go back to your main test results page and click the link on the right that says "Watch Video."
-
-It will take a minute for WPT to generate your video, but when it does, scroll to the bottom and grab the embed code. Here are the two videos from the WPT results I've run so far, embedded below for your viewing pain:
-
-<iframe src="https://www.webpagetest.org/video/view.php?id=140806_XB_Z5T.1.0&embed=1&width=332&height=716" width="332" height="716"></iframe>
-
-<iframe src="https://www.webpagetest.org/video/view.php?id=140808_0G_RZW.2.0&embed=1&width=332&height=716" width="332" height="716"></iframe>
-
-Convincing, no?
-
-Now I'm going to keep testing and trying to speed up my page. My next step is going to be tweaking the server. Yeah I know I said at the beginning that you should start here and I didn't. Neither, I'd be willing to bet, did you. That's okay, we'll do it now.
-
-I use Nginx to serve this site and I compile it myself with quite a few speed-oriented extras, but the main tool I use is the [Nginx Pagespeed module](https://developers.google.com/speed/pagespeed/module). For more on how I set up Nginx and how you can do the same, see my post: [Install Nginx on Debian/Ubuntu](https://longhandpixels.net/blog/2014/02/install-nginx-debian-ubuntu).
-
-I'm going to turn on a very cool feature in the `nginx_pagespeed_module` that I haven't been using until now, something called the [`lazyload_images` filter](https://developers.google.com/speed/pagespeed/module/filter-lazyload-images). Here's the line I'll add to my configuration file:
-
-~~~.language-bash
-pagespeed EnableFilters lazyload_images;
-~~~
-
-This will tell PageSpeed to delay loading images on the page unless they're visible in the viewport. Even better, as of version 1.8.31.2, this filter will force images to download after the page's `onload` event fires. That means you won't get the janky scrolling effect you see when images are fetched as they enter the viewport, which is what happens on a lot of websites that do this with JavaScript.
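-
-For context, here's roughly where that line lives, sketched on the assumption that the module is compiled in as described in that post (the cache path is a placeholder):
-
-~~~.language-bash
-# Inside the http or server block of nginx.conf:
-pagespeed on;
-pagespeed FileCachePath /var/ngx_pagespeed_cache;
-pagespeed EnableFilters lazyload_images;
-~~~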
-
-So I turned on the `lazyload_images` filter on my server and reran the tests to find that...
-
-[![Test results showing 1 second load time over 3G](/media/images/2014/wpt-07-tn.jpg)](/media/images/2014/wpt-07.jpg "View Image 7")
-: That's more like it -- Test results showing 1 second load time over 3G.
-
-The page is now filling the initial viewport in about 1 second over 3G. I can live with that, but honestly it could probably be better. For example, some sort of responsive image solution would reduce the size of images on mobile screens and bring down the total page load time (not to mention saving a bunch of bandwidth).
-
-I could also do some other little optimizations, including combining the prism.css code highlighting file with my main CSS file to save a request. I could probably ditch the web fonts, create a single SVG that holds the logo and icons and then position everything with CSS.
-
-That would eliminate a request and probably reduce the overall size of the page as well. And I could put everything behind a CDN, which would probably have more impact than everything else I just mentioned combined, but that costs money and frankly, 1 second over 3G is fine for now.
-
-Hopefully this has given you some idea of how the tools available through Webpagetest can help you speed up your responsive website (or even your non-responsive site). It's true that I didn't really do anything here you can't do with the Firefox or Chrome developer tools, but I find -- particularly with clients who need a little convincing -- that WPT's filmstrips and videos are invaluable. And I should note that there are plenty of things WPT can do that your favorite developer tools cannot, but I'll save those for another post.
-
-While Webpagetest and its ilk are great tools, you should always also test on real devices in the real world. Ideally you'll test your site on an actual, slower network and see what it feels like to wait. Three seconds might sound fine, but actually sitting through it might inspire you to dig a little deeper and see what else you can optimize.
-
-If you don't have access to a slow network, <strike>then come to the U.S., they're everywhere</strike> then simulators will have to do. If you want to do some live testing over constrained network simulations there are some great dedicated tools like [Slowy](http://slowyapp.com/), [Throttle](https://github.com/dmolsen/Throttle) or even the Network Conditioner tool which is part of more recent versions of Apple's OS X Developer Tools (see developer Matt Gemmel's helpful overview of [how to set up Network Conditioner](http://mattgemmell.com/2011/07/25/network-link-conditioner-in-lion/) if you're using Mac OS X 10.7 or higher).
-
-## Further Reading
-
-If you'd like to learn more I recommend you start at the beginning. First learn [how to read the waterfall charts](http://www.webperformancetoday.com/2010/07/09/waterfalls-101/) that these services generate. Then I suggest you read Steve Souders' [High Performance Web Sites](http://stevesouders.com/) and the [Web Performance Today](http://www.webperformancetoday.com/) blog, both excellent resources for anyone interested in speeding up their site. Finally, few people know as much about [optimizing massive web applications](http://www.igvita.com/2013/01/15/faster-websites-crash-course-on-web-performance/) and sites as Google's Ilya Grigorik, who's part of the company's Make The Web Fast team. Subscribe to Ilya's blog and [follow him on Twitter](https://twitter.com/igrigorik) for a steady stream of speed-related links and tips.
-
-For some more details on all the cool stuff in Webpagetest, check out [Patrick Meenan's blog](http://blog.patrickmeenan.com/) (he's the creator of Webpagetest) and especially [this short video](https://www.youtube.com/watch?v=AEAj-HSfYSA).
diff --git a/src/published/2014-09-04_history-of-picture-element.txt b/src/published/2014-09-04_history-of-picture-element.txt
deleted file mode 100644
index d3e6fb2..0000000
--- a/src/published/2014-09-04_history-of-picture-element.txt
+++ /dev/null
@@ -1,199 +0,0 @@
----
-title: A Brief History of the Picture Element
-pub_date: 2014-09-01 09:30:23
-slug: /blog/2014/09/brief-history-picture-element
-tags: Responsive Images, Responsive Web Design
-metadesc: The story behind the new HTML picture element and how a few dedicated web developers made the web better for everyone.
-code: True
-
----
-
-[**Note**: This article was originally written for Ars Technica and there's a nice, [slightly edited version of it over on Ars][1] that you should probably read. It's also got some artwork and screenshots not included here. But I'm re-publishing this here as well for posterity]
-
-The web is going to get faster in the very near future. Sadly, this is rare enough to be news.
-
-The speed bump won't be because our devices are getting faster, though they are. Nor will it be because some giant company created something great, though they probably have.
-
-The web will be getting faster very soon because a small group of web developers saw a problem and decided to solve it for all of us.
-
-The problem is images.
-
-As of August 2014 the [size of the average page in the top 1,000 sites on the web][2] is 1.7MB. Images account for almost 1MB of that 1.7MB.
-
-If you've got a nice fast fiber connection that image payload isn't such a big deal. But, if you're on a mobile network, that huge image payload is not just slowing you down, it's using up your limited bandwidth and, depending on your mobile data plan, might well be costing you money.
-
-What makes that image payload doubly annoying when you're using a mobile device is that you're getting images intended for giant monitors and they're being loaded on a screen little bigger than your palm. It's a waste of bandwidth delivering pixels you don't need.
-
-Web developers recognized this problem very early on in the growth of what was called the "mobile" web back then.
-
-More recently a few of them banded together to do something that web developers have never done before -- create a new HTML element.
-
-## In the Beginning Was the "Mobile Web"
-
-Browsing the web on your phone hasn't always been what it is today. Even browsing the web on the first iPhone, one of the first phones with a real web browser, was still pretty terrible.
-
-Browsing the web on a small screen back then required constant tapping to zoom in on content that had been optimized for much larger screens. Images took forever to load over the iPhone's slow EDGE network connection and then there was all that Flash content, which didn't load at all. And that was the iPhone. Browsing the web with the crippled mobile browsers on Blackberry and other OSes was even worse.
-
-It wasn't necessarily the devices' fault, though mobile browsers did, and in many cases still do, lag well behind their desktop brethren. Most of the problem though was the fault of web developers. The web is inherently flexible, but web developers had made it fixed by optimizing sites for large desktop monitors.
-
-To fix this a lot of sites started building a second site. It sounds crazy now, but just a few years ago the going solution for handling new devices like the Blackberry, the then-new iPhone and some of the first Android phones was to use server-side device detection scripts and redirect users to a dedicated site for mobile devices, typically a URL like m.domain.com.
-
-These dedicated mobile URLs -- often referred to as M-dot sites -- typically lacked many features found on their "real" desktop counterparts and often didn't even redirect properly, leaving you on the homepage when you wanted a specific article.
-
-M-dot websites are a fine example of developers encountering a problem and figuring out a way to make it even worse.
-
-Luckily for us, most web developers did not jump on the m-dot bandwagon because something much better came along.
-
-## Responsive Design Killed the M-Dot Star
-
-In 2010 web developer Ethan Marcotte wrote a little article about something he called [Responsive Web Design][3].
-
-Marcotte suggested that with the proliferation of mobile devices and the pain of building these dedicated m-dot sites, it might make more sense to embrace the inherently fluid nature of the web and build websites that were flexible. Sites that used relative widths to fit any screen and worked well no matter what device was accessing it.
-
-Marcotte's vision gave web developers a way to build sites that flex and rearrange their content based on the size and characteristics of the device in your hand.
-
-Responsive web design isn't a panacea, perhaps, but it's pretty close.
-
-Responsive design started with a few of the more prominent developers making their personal sites responsive, but it quickly took off when Marcotte and the developers at the Filament Group redesigned the [Boston Globe][4] website to make it responsive. The Globe redesign showed that responsive design worked for more than developer portfolios and blogs. The Globe redesign showed that responsive design was the way of the future.
-
-While the Globe redesign was successful from a user standpoint, Marcotte and the Filament Group did run into some problems behind the scenes, particularly with images.
-
-Marcotte's original article dealt with images by scaling them down using CSS. That made them fit smaller screens and preserved the layout of content, but it also meant mobile devices were loading huge images that would never be displayed at full resolution.
-
-For the most part this is still what happens on nearly every site you visit on a small screen. Web developers know, as the developers building the Globe site knew, that this is a problem, but solving it is not as easy as it seems at first glance.
-
-In fact solving this problem would require adding a brand new element to HTML.
-
-## Introducing the Picture Element
-
-The Picture element story begins with the developers working on the Boston Globe, including Mat Marquis, who would eventually co-author the Picture element specification.
-
-In the beginning though, no one working on the Globe site was thinking about creating new HTML elements. Marquis and the other developers just wanted to build a site that loaded faster on mobile devices.
-
-As Marquis explains, they thought they had a solution. "We started with an image for mobile and then selectively enhanced it up from there. It was a hack using cookies and JavaScript. It worked up until about a week before the site launched."
-
-Around this time both Firefox and Chrome were updating their prefetching capabilities and the new image prefetching tools broke the method used on the Globe prototypes.
-
-Browser prefetching was more than just a problem for the solution originally planned for the Globe site. It's actually the crux of what's so difficult about responsive images.
-
-When a server sends a page to your browser the browser first downloads all the HTML on the page and then parses it. Or at least that's what used to happen. Modern web browsers attempt to speed up page load times by downloading images *before* parsing the page's body. The browser starts downloading the image long before it knows where that image will be in the page layout or how big it will need to be.
-
-This is simultaneously a very good thing -- it means images load faster -- and a very tricky thing -- it means using JavaScript to manipulate images can actually slow down your page even when your JavaScript is trying to load smaller images (because you end up fighting the prefetcher and downloading two images).
-
-Marquis and the rest of the developers working on the site had to scrap their original plan and go back to the drawing board. "We started trying to hash out some solution that we could use going forward... but nothing really materialized." However, they started [writing about the problem][5] and other developers joined the conversation. They quickly learned they were not alone in struggling with responsive images.
-
-"By this time," Marquis says, "we have 10 or 15 developers and nobody has come up with anything."
-
-The Globe site ended up launching with no solution -- mobile devices were stuck downloading huge images.
-
-Soon other prominent developers outside the Globe project started to weigh in with possible solutions, including Google's Paul Irish and Opera's Bruce Lawson. Still, no one was able to craft a solution that covered [all the possible use cases][6] developers had identified.
-
-"We soon realized," says Marquis, "that, even if we were able to solve this with a clever bit of JavaScript we would be working around browser-level optimizations rather than working with them." In other words, using JavaScript meant fighting the browser's built-in image prefetching.
-
-Talk then moved to lower-level solutions, including a new HTML element that might somehow get around the image prefetching problems in a way that JavaScript never would. It was Bruce Lawson of Opera who first suggested that a new `<picture>` element might be in order. Though they did not know it at the time, a picture element had been proposed once before, but it never went anywhere.
-
-## Welcome to Standards Jungle
-
-It is one thing to decide a new HTML element is needed. It's quite another thing to actually navigate the stratified, labyrinthine world of web standards. Especially if no one on your team has ever done such a thing.
-
-Perhaps the best thing about being naive though is that you tend to plow forward without the hesitation that attends someone who *knows* how difficult the road ahead will be.
-
-And so the developers working on the picture element took their ideas to the WHATWG, one of two groups that oversee the development of HTML. The WHATWG is made up primarily of browser vendors, which makes it a good place to gauge how likely it is that browsers will actually ship your ideas.
-
-To paraphrase Tolstoy, every standards body is unhappy in its own way, but, as Marquis was about to learn, the WHATWG is perhaps most unhappy when people outside it make suggestions about what it ought to do. Suffice to say that Marquis and the rest of the developers involved did not get the WHATWG interested in a new HTML element.
-
-Right around this time the W3C -- home of the HTML WG, the second group that oversees HTML -- launched a new idea: community groups. Community groups are the W3C's attempt to get outsiders involved in the standards process, a place to propose problems and work on solutions.
-
-After being shot down by the WHATWG, someone suggested that the developers start a community group and the [Responsive Images Community Group][7] (RICG) was born.
-
-The only problem with community groups is that no one in the actual working groups paid any attention to them. Or at least they didn't in 2011.
-
-Blissfully unaware of this, Marquis and hundreds of other developers hashed out a responsive image solution in the community group.
-
-Much of that effort was thanks to Marcos Caceres, now at Mozilla, who, unlike the rest of the group members, had some experience with writing web standards. That experience allowed Caceres to span the divide between two worlds -- web development and standards development. Caceres organized the RICG's efforts and helped the group produce the kind of use cases and tests that standards bodies are looking for. As Marquis puts it, "Marcos saw us flailing around in IRC and helped get everything organized."
-
-"I tried to herd all the cats," Caceres jokes. And herd he did. He set up the Github repos to get everything in one place, set up a space for the responsive images site and helped bring everything together into the first use cases document. "This played a really critical role for me and for the community," says Caceres, "because it forced us to articulate what the actual problem was... and to set priorities."
-
-After months of effort, the RICG brought their ideas to the WHATWG IRC. This also did not go well. As Caceres puts it, "standards bodies like to say 'oh we want a lot of input for developers', but then when developers come it ends in tears. Or it used to."
-
-If you read the WHATWG IRC logs from that time you'll see that the WHATWG members fall into a classic "not invented here" trap. Not only did they reject the input from developers, they turned around and, without considering the RICG's work at all, [proposed their own solution][8]: `srcset`, an attribute that solved only one of the many use cases Marquis and company had already identified.
-
-Developers were, understandably, miffed.
-
-With developers pushing Picture and browser makers and standards bodies favoring the far more limited and very confusing (albeit still useful) `srcset` proposal, it looked like nothing would ever actually come of the RICG's work.
-
-As Paul Irish put it in the [WHATWG IRC channel][9], "[Marquis] corralled and led a group of the best mobile web developers, created a CG, isolated a solution (from many), fought for and won consensus within the group, wrote a draft spec and proposed it. Basically he's done the thing standards folks really want "authors" to do. Which is why this feels so defeating."
-
-Irish was not alone. The developer outcry surrounding the WHATWG's counter proposal was quite vocal, vocal enough that some entirely new proposals surfaced, but browser makers failed to agree on anything. Mozilla killed the WHATWG's idea of `srcset` on `img`. And Chrome refused to implement Picture as it was defined at the time.
-
-If this all sounds like a bad soap opera, well, it was. This process is, believe it or not, how the web you're using right now gets made.
-
-## Invented Here.
-
-To the credit of the WHATWG, the group did eventually overcome their not-invented-here syndrome. Or at least partially overcame it.
-
-Compromises started to happen. The RICG rolled support for many of the ideas in `srcset` into their proposal. That wasn't enough to convince the WHATWG, but it got some members working together with Marquis and the RICG. The WHATWG still didn't like Picture, but they didn't outright reject it anymore either.
-
-To an outsider the revision process looks a bit like a game of Ping Pong, except that every time someone hits the ball it changes shape.
-
-The big breakthrough for Picture came from Opera's Simon Pieters and Google's Tab Atkins. They made a simple, but powerful, suggestion -- make picture a wrapper for `img`. That way there would not be two separate elements for images on the web (which was rightly considered confusing), but there would still be a new way to control which image the browser displays.
-
-This is exactly the approach used in the final version of the Picture spec.
-
-When the browser encounters a Picture element, it first evaluates any rules that the web developer might specify. Opera's developer site has a good article on [all the possibilities Picture offers][10]. Then, after evaluating the various rules, the browser picks the best image based on its own criteria. This is another nice feature since the browser's criteria can include your settings. For example, future browsers might offer an option to stop high-res images from loading over 3G, regardless of what any Picture element on the page might say. Once the browser knows which image is the best choice it actually loads and displays that image in a good old `img` element.
-
-This solves two big problems: the browser prefetching problem (prefetching still works and there's no performance penalty) and the problem of what to do when the browser doesn't understand Picture (it falls back to whatever is in the `img` tag).
-
-So, in the final proposal, what happens is Picture wraps an `img` tag and if the browser is too old to know what to make of a `<picture>` element then it loads the fallback `img` tag. All the accessibility benefits remain since the alt attribute is still on the `img` element.
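-
-In markup, the final shape looks roughly like this (a sketch; the filenames and breakpoints are placeholders):
-
-~~~.language-markup
-<picture>
-  <source media="(min-width: 45em)" srcset="large.jpg">
-  <source media="(min-width: 30em)" srcset="medium.jpg">
-  <img src="small.jpg" alt="A description of the image">
-</picture>
-~~~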
-
-Everyone is happy and the web wins.
-
-## Nice Theory, but Show Me the Browser
-
-The web only wins if browsers actually support a proposed standard. And at this time last year no browser on the web actually supported Picture.
-
-While Firefox and Chrome had both committed to supporting it, it might have been years before it became a priority for either, making Picture little more than a nice theory.
-
-Enter Yoav Weiss, a rare developer who spans the worlds of web development and C++ development. Weiss was an independent contractor who wanted Picture to become a part of the web. Weiss knew C++, the language most browsers are written in, but had never worked on a web browser before.
-
-Still, like Caceres, Weiss was able to bridge a gap, in this case between the worlds of web developers and C++ developers, putting him in a unique position to know what Picture needed to do and how to make it happen. So, after talking it over with other Chromium developers, Weiss started hacking on Blink, the rendering engine that powers Google's Chrome browser.
-
-Implementing Picture was no small task. "Getting Picture into Blink required some infrastructure that wasn't there," says Weiss. "I had two options: either wait for the infrastructure to happen naturally over the course of the next two years, or make it happen myself."
-
-Weiss, who, incidentally, has three young children and, presumably, not much in the way of free time, quickly realized that working nights and weekends wasn't going to cut it. Weiss needed to turn his work on Picture into a contract job. So he, Marquis and others involved in the community group set up a <a href="https://www.indiegogo.com/projects/picture-element-implementation-in-blink">crowd funding campaign on Indiegogo</a>.
-
-On the face of it, it sounds like a doomed proposition -- why would developers fund a feature that will ultimately end up in a web browser they otherwise have no control over?
-
-Then something amazing happened. The campaign didn't just meet its goal, it went way over it. Web developers wanted Picture badly enough to spend their money on the cause.
-
-It could have been the t-shirts. It could have been the novelty of it. Or it could have been that web developers saw how important a solution to the image problem was in a way that the browser makers and standards bodies didn't. Most likely it was some combination of all these and more.
-
-In the end enough money was raised to not only implement Picture in Blink, but to also port Weiss' work back to WebKit so WebKit browsers (including Apple's iOS version of Safari) can use it as well. At the same time Marcos Caceres started work at Mozilla and has helped drive Firefox's support for Picture.
-
-As of today the Picture element is set to be available in Chrome and Firefox by the end of the year. It's available now in Chrome's dev channel and Firefox 34+ (in Firefox you'll need to enable it in `about:config`). Here's a test page showing the new [Picture element in action][11].
-
-Apple appears to be adding support to Safari, though the backport to WebKit wasn't finished in time for the upcoming Safari 8. Microsoft has likewise been supportive and is considering Picture for the next release of IE.
-
-## The Future of the Web
-
-The story of the Picture element isn't just an interesting tale of web developers working together to make the web a better place. It's also a glimpse at the future of the web. The separation between those who build the web and those who create web standards is disappearing. The W3C's community groups are growing and sites like [Move the Web Forward][12] aim to help bridge the gap between developer ideas and standards bodies.
-
-There's even a site devoted to what it calls "[specifiction][13]" -- giving web developers a place to suggest tools they need, discuss possible solutions and then find the relevant W3C working group to make it happen.
-
-Picture may be almost finished, but the RICG isn't going away. In fact it's renaming itself and taking on a new project -- [Element Queries][14]. Coming soon to a browser near you.
-
-
-[1]: http://arstechnica.com/information-technology/2014/09/how-a-new-html-element-will-make-the-web-faster/
-[2]: http://httparchive.org/interesting.php?a=All&l=Aug%2015%202014&s=Top1000
-[3]: http://alistapart.com/article/responsive-web-design
-[4]: http://www.bostonglobe.com/
-[5]: http://blog.cloudfour.com/responsive-imgs/
-[6]: http://usecases.responsiveimages.org/
-[7]: http://responsiveimages.org/
-[8]: http://www.w3.org/community/respimg/2012/05/11/respimg-proposal/
-[9]: http://krijnhoetmer.nl/irc-logs/whatwg/20120510#l-747
-[10]: http://dev.opera.com/articles/native-responsive-images/
-[11]: https://longhandpixels.net/2014/08/picture-test
-[12]: http://movethewebforward.org/
-[13]: http://specifiction.org/
-[14]: http://responsiveimagescg.github.io/eq-usecases/
diff --git a/src/published/2015-01-24_how-to-write-ebook.txt b/src/published/2015-01-24_how-to-write-ebook.txt
deleted file mode 100644
index 7163dad..0000000
--- a/src/published/2015-01-24_how-to-write-ebook.txt
+++ /dev/null
@@ -1,58 +0,0 @@
----
-title: How to Write an Ebook
-pub_date: 2015-01-24 12:52:53
-slug: /blog/2015/01/how-to-write-ebook
-metadesc: The tools I use to write and publish ebooks. All free and open source.
-
----
-
-When I set out to write a book I had little more than an outline in Markdown. Just a few headers and bullet points on each of what became the major chapters of my [book on responsive web design](https://longhandpixels.net/books/responsive-web-design).
-
-It never really occurred to me to research which tools I would need to create a book because I knew I was going to use Markdown, which could then be translated into pretty much any format using [Pandoc](http://johnmacfarlane.net/pandoc/).
-
-Since quite a few people have [asked](https://twitter.com/situjapan/status/549935669129142272) for more details on exactly which tools I used, here's a quick rundown:
-
-1. I write books as single text files lightly marked up with Pandoc-flavored Markdown.
-2. Then I run Pandoc, passing in custom templates, CSS files, fonts I bought and so on. Pretty much as [detailed here in the Pandoc documentation](http://johnmacfarlane.net/pandoc/epub.html). I run these commands often enough that I write a shell script for each project so I don't have to type in all the flags and file paths each time.
-3. Pandoc outputs an ePub file and an HTML file. The latter is then used with [Weasyprint](http://weasyprint.org/) to generate the PDF version of the ebook. Then I used the ePub file and the [Kindle command line tool](http://www.amazon.com/gp/feature.html?ie=UTF8&docId=1000765211) to create a .mobi file.
-4. All of the formatting and design is just CSS, which I am already comfortable working with (though ePub supports only a subset of CSS and reader support is somewhat akin to building a website in 1998 -- who knows if it's gonna work? The PDF is what I consider the reference copy.)
-
-In the end I get the book in TXT, HTML, PDF, ePub and .mobi formats, which covers pretty much every digital reader I'm aware of. Out of those I actually include the PDF, ePub and Mobi files when you [buy the book](https://longhandpixels.net/books/responsive-web-design).
-
-### FAQs and Notes.
-
-<strong>Why Not Use iBook Author?</strong>
-
-I don't want my book tied to a company's software which may or may not continue to exist. Plus I wanted to use open source software. And I wanted more control over the process than I could get with monolithic tools like visual layout editors.
-
-The above tools are, for me anyway, the simplest possible workflow which outputs the highest quality product.
-
-<strong>What about Prince?</strong>
-
-What does The Purple One have to do with writing books? Oh, that [Prince](http://www.princexml.com/). Actually I really like Prince and it can do a few things that WeasyPrint cannot (like execute JavaScript which is handy for code highlighting or allow for `@font-face` font embedding), but it's not free and in the end, I decided, not worth the money.
-
-<strong>Can you share your shell script?</strong>
-
-Here's the basic idea; adjust file paths to suit your working habits.
-
-~~~.language-bash
-#!/bin/sh
-#Update PDF:
-pandoc --toc --toc-depth=2 --smart --template=lib/template.html5 --include-before-body=lib/header.html -t html5 -o rwd.html draft.txt && weasyprint rwd.html rwd.pdf
-
-#Update epub:
-pandoc -S -s --smart -t epub3 --include-before-body=lib/header.html --template=lib/template_epub.html --epub-metadata=lib/epub-metadata.xml --epub-stylesheet=lib/print-epub.css --epub-cover-image=lib/covers/cover-portrait.png --toc --toc-depth=2 -o rwd.epub draft.txt
-
-#update Mobi:
-pandoc -S -s --smart -t epub3 --include-before-body=lib/header.html --template=lib/template_epub.html --epub-metadata=lib/epub-metadata.xml --epub-stylesheet=lib/print-kindle.css --epub-cover-image=lib/covers/cover-portrait.png --toc --toc-depth=2 -o kindle.epub draft.txt && kindlegen kindle.epub -o rwd.mobi
-~~~
-
-I just run this script and bang, all my files are updated.
-
-<strong>What Advice can you Offer for People Wanting to Write an Ebook?</strong>
-
-At the risk of sounding trite, just do it.
-
-Writing a book is not easy, or rather the writing is never easy, but I don't think it's ever been this easy to *produce* a book. It took me two afternoons to come up with a workflow that involves all free, open source software and allows me to publish literally any text file on my hard drive as a book that can then be read by millions. I type two keystrokes and I have a book. Even if millions don't ever read your book (and, for the record, millions have most definitely not read my books), that is still f'ing amazing.
-
-Now go make something cool (and be sure to tell me about it).
diff --git a/src/published/2015-04-02_complete-guide-ssh-keys.txt b/src/published/2015-04-02_complete-guide-ssh-keys.txt
deleted file mode 100644
index 25d646c..0000000
--- a/src/published/2015-04-02_complete-guide-ssh-keys.txt
+++ /dev/null
@@ -1,128 +0,0 @@
----
-title: How to Setup SSH Keys for Secure Logins
-pub_date: 2015-03-21 12:52:53
-slug: /blog/2015/03/set-up-ssh-keys-secure-logins
-metadesc: How to set up SSH keys for more secure logins to your VPS.
-code: True
-
----
-
-SSH keys are an easier, more secure way of logging into your virtual private server via SSH. Passwords are vulnerable to brute force attacks and just plain guessing. Key-based authentication is (currently) much more difficult to brute force and, when combined with a password on the key, provides a secure way of accessing your VPS instances from anywhere.
-
-Key-based authentication uses two keys, the first is the "public" key that anyone is allowed to see. The second is the "private" key that only you ever see. So to log in to a VPS using keys we need to create a pair -- a private key and a public key that matches it -- and then securely upload the public key to our VPS instance. We'll further protect our private key by adding a password to it.
-
-Open up your terminal application. On OS X, that's Terminal, which is in the Applications >> Utilities folder. If you're using Linux I'll assume you know where the terminal app is, and Windows fans can follow along after installing [Cygwin](http://cygwin.com/).
-
-Here's how to generate SSH keys in three simple steps.
-
-
-## Setup SSH for More Secure Logins
-
-### Step 1: Check for SSH Keys
-
-Cut and paste this line into your terminal to check and see if you already have any SSH keys:
-
-~~~.language-bash
-ls -al ~/.ssh
-~~~
-
-If you see output like this, then skip to Step 3:
-
-~~~.language-bash
-id_dsa.pub
-id_ecdsa.pub
-id_ed25519.pub
-id_rsa.pub
-~~~
-
-### Step 2: Generate an SSH Key
-
-Here's the command to create a new SSH key. Just cut and paste, but be sure to put in your own email address in quotes:
-
-~~~.language-bash
-ssh-keygen -t rsa -C "your_email@example.com"
-~~~
-
-This will start a series of questions; just hit enter to accept the default choice for each of them, including the first one, which asks where to save the file.
-
-Then it will ask for a passphrase; pick a good long one. And don't worry, you won't need to enter this every time. There's something called `ssh-agent` that will ask for your passphrase and then store it for you for the duration of your session (i.e. until you restart your computer).
-
-~~~.language-bash
-Enter passphrase (empty for no passphrase): [Type a passphrase]
-Enter same passphrase again: [Type passphrase again]
-~~~
-
-Once you've put in the passphrase, SSH will spit out a "fingerprint" that looks a bit like this:
-
-~~~.language-bash
-# Your identification has been saved in /Users/you/.ssh/id_rsa.
-# Your public key has been saved in /Users/you/.ssh/id_rsa.pub.
-# The key fingerprint is:
-# d3:50:dc:0f:f4:65:29:93:dd:53:c2:d6:85:51:e5:a2 scott@longhandpixels.net
-~~~
-
-### Step 3: Copy Your Public Key to Your VPS
-
-If you have ssh-copy-id installed on your system you can use this line to transfer your keys:
-
-~~~.language-bash
-ssh-copy-id user@12.34.56.78
-~~~
-
-If that doesn't work, you can paste in the keys using SSH:
-
-~~~.language-bash
-cat ~/.ssh/id_rsa.pub | ssh user@12.34.56.78 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
-~~~
-
-Whichever you use, you should get a message like this:
-
-
-~~~.language-bash
-The authenticity of host '12.34.56.78 (12.34.56.78)' can't be established.
-RSA key fingerprint is 01:3b:ca:85:d6:35:4d:5f:f0:a2:cd:c0:c4:48:86:12.
-Are you sure you want to continue connecting (yes/no)? yes
-Warning: Permanently added '12.34.56.78' (RSA) to the list of known hosts.
-username@12.34.56.78's password:
-~~~
-
-Now try logging into the machine with `ssh user@12.34.56.78`, and check:
-
-~~~.language-bash
-~/.ssh/authorized_keys
-~~~
-
-to make sure we haven't added extra keys that you weren't expecting.
-
-Now log in to your VPS with ssh like so:
-
-~~~.language-bash
- ssh username@12.34.56.78
-~~~
-
-And you won't be prompted for a password by the server. You will, however, be prompted for the passphrase you used to encrypt your SSH key. You'll need to enter that passphrase to unlock your SSH key, but ssh-agent should store that for you so you only need to re-enter it when you log out or restart your computer.
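-
-If your system isn't caching the passphrase for you, you can hand the key to ssh-agent yourself. A quick sketch (most desktop environments handle this automatically):
-
-~~~.language-bash
-# Start an agent for this shell session and load your key into it:
-eval "$(ssh-agent -s)"
-ssh-add ~/.ssh/id_rsa
-~~~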
-
-And there you have it, secure, key-based log-ins for your VPS.
-
-### Bonus: SSH config
-
-If you'd rather not type `ssh myuser@12.34.56.78` all the time you can add that host to your SSH config file and refer to it by hostname.
-
-The SSH config file lives in `~/.ssh/config`. This command will either open that file if it exists or create it if it doesn't:
-
-~~~.language-bash
-nano ~/.ssh/config
-~~~
-
-Now we need to create a host entry. Here's what mine looks like:
-
-~~~.language-bash
-Host myname
- HostName 12.34.56.78
- User myvpsusername
- #Port 24857 #if you set a non-standard port uncomment this line
- CheckHostIP yes
- TCPKeepAlive yes
-~~~
-
-Then to log in, all I need to do is type `ssh myname`. This is even more helpful when using `scp` since you can skip the whole username@server and just type: `scp myname:/home/myuser/somefile.txt .` to copy a file.
diff --git a/src/published/2015-04-03_set-up-secure-first-vps.txt b/src/published/2015-04-03_set-up-secure-first-vps.txt
deleted file mode 100644
index ebb9b30..0000000
--- a/src/published/2015-04-03_set-up-secure-first-vps.txt
+++ /dev/null
@@ -1,147 +0,0 @@
----
-title: How to Setup And Secure Your First VPS
-pub_date: 2015-03-31 12:52:53
-slug: /blog/2015/03/set-up-secure-first-vps
-metadesc: Still using shared hosting? It's 2015, time to set up your own VPS. Here's a complete guide to launching your first VPS on Digital Ocean or Vultr.
-code: True
-
----
-
-Let's talk about your server hosting situation. I know a lot of you are still using a shared web host. The thing is, it's 2015, shared hosting is only necessary if you really want unexplained site outages and over-crowded servers that slow to a crawl.
-
-It's time to break free of those shared hosting chains. It's time to stop accepting the software stack you're handed. It's time to stop settling for whatever outdated server software and configurations some shared hosting company sticks you with.
-
-**It's time to take charge of your server; you need a VPS**
-
-What? Virtual Private Servers? Those are expensive and complicated... don't I need to know Linux or something?
-
-No, no and not really.
-
-Thanks to an increasingly competitive market you can pick up a very capable VPS for $5 a month. Setting up your VPS *is* a little more complicated than using a shared host, but most VPS providers these days have one-click installers that will set up a Rails, Django or even WordPress environment for you.
-
-As for Linux, knowing your way around the command line certainly won't hurt, but these tutorials will teach you everything you really need to know. We'll also automate everything so that critical security updates for your server are applied automatically without you lifting a finger.
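-
-If you're curious what that automation looks like, here's a sketch for Debian/Ubuntu (a starting point, not a complete hardening guide):
-
-~~~.language-bash
-# Install and enable automatic security updates:
-sudo apt-get install unattended-upgrades
-sudo dpkg-reconfigure --priority=low unattended-upgrades
-~~~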
-
-## Pick a VPS Provider
-
-There are hundreds, possibly thousands of VPS providers these days. You can nerd out comparing all of them on [serverbear.com](http://serverbear.com/) if you want. When you're starting out I suggest sticking with what I call the big three: Linode, Digital Ocean or Vultr.
-
-Linode would be my choice for mission critical hosting. I use it for client projects, but Vultr and Digital Ocean are cheaper and perfect for personal projects and experiments. Both offer $5 a month servers, which gets you 0.5GB of RAM, plenty of bandwidth and 20-30GB of SSD-based storage space. Vultr actually gives you a little more RAM, which is helpful if you're setting up a Rails or Django environment (i.e. a long running process that requires more memory), but I've been hosting a Django-based site on a 512MB Digital Ocean instance for 18 months and have never run out of memory.
-
-Also note that all these plans start off charging by the hour so you can spin up a new server, play around with it and then destroy it and you'll have only spent a few pennies.
-
-Which one is better? They're both good. I've been using Vultr more these days, but Digital Ocean has a nicer, somewhat slicker control panel. There are also many others I haven't named. Just pick one.
-
-Here's a link that will get you a $10 credit at [Vultr](http://www.vultr.com/?ref=6825229) and here's one that will get you a $10 credit at [Digital Ocean](https://www.digitalocean.com/?refcode=3bda91345045) (both of those are affiliate links and help cover the cost of hosting this site *and* get you some free VPS time).
-
-For simplicity's sake, and because it offers more one-click installers, I'll use Digital Ocean for the rest of this tutorial.
-
-## Create Your First VPS
-
-In Digital Ocean you'll create a "Droplet". It's a three step process: pick a plan (stick with the $5 a month plan for starters), pick a location (stick with the defaults) and then install a bare OS or go with a one-click installer. Let's get WordPress up and running, so select WordPress on 14.04 under the Applications tab.
-
-If you want automatic backups, and you do, check that box. Backups are not free, but generally won't add more than about $1 to your monthly bill -- it's money well spent.
-
-The last thing we need to do is add an SSH key to our account. If we don't, Digital Ocean will email us our root password in a plain text email. Yikes.
-
-If you need to generate some SSH keys, here's a short guide, [How to Generate SSH keys](/blog/2015/03/set-up-ssh-keys-secure-logins). You can skip step 3 in that guide. Once you've got your keys set up on your local machine you just need to add them to your droplet.
-
-If you're on OS X, you can use this command to copy your public key to the clipboard:
-
-~~~.language-bash
-pbcopy < ~/.ssh/id_rsa.pub
-~~~
-
-Otherwise you can use cat to print it out and copy it:
-
-~~~.language-bash
-cat ~/.ssh/id_rsa.pub
-~~~
-
-Now click the button to "add an SSH key". Then paste the contents of your clipboard into the box. Hit "add SSH Key" and you're done.
-
-Now just click the giant "Create Droplet" button.
-
-Congratulations, you just deployed your first VPS server.
-
-## Secure Your VPS
-
-Now we can log in to our new VPS with this command:
-
-~~~.language-bash
-ssh root@12.34.56.78
-~~~
-
-That will cause SSH to ask if you want to add the server to the list of known hosts. Say yes, and then on OS X you'll get a dialog asking for the passphrase you created a minute ago when you generated your SSH key. Enter it and check the box to save it to your keychain so you don't have to enter it again.
-
-And you're now logged in to your VPS as root. That's not how we want to log in though since root is a very privileged user that can wreak all sorts of havoc. The first thing we'll do is change the password of the root user. To do that, just enter:
-
-~~~.language-bash
-passwd
-~~~
-
-And type a new password.
-
-Now let's create a new user:
-
-~~~.language-bash
-adduser myusername
-~~~
-
-Give your username a secure password and then enter this command:
-
-~~~.language-bash
-visudo
-~~~
-
-If you get an error saying that there is no such app installed, you'll need to first install sudo (`apt-get install sudo` on Debian, which does not ship with sudo). visudo will open a file. Use the arrow keys to move the cursor down to the line that reads:
-
-~~~.language-bash
-root ALL=(ALL:ALL) ALL
-~~~
-
-Now add this line just below it:
-
-~~~.language-bash
-myusername ALL=(ALL:ALL) ALL
-~~~
-
-Where myusername is the username you created just a minute ago. Now we need to save the file. To do that hit Control-X, type a Y and then hit return.
-
-Now, **WITHOUT LOGGING OUT OF YOUR CURRENT ROOT SESSION** open another terminal window and make sure you can login with your new user:
-
-~~~.language-bash
-ssh myusername@12.34.56.78
-~~~
-
-You'll be asked for the password that we created just a minute ago on the server (not the one for our SSH key). Enter that password and you should be logged in. To make sure we can get root access when we need it, try entering this command:
-
-~~~.language-bash
-sudo apt-get update
-~~~
-
-That should ask for your password again and then spit out a bunch of information, all of which you can ignore for now.
-
-Okay, now you can log out of your root terminal window. To do that just hit Control-D.
-
-## Finishing Up
-
-What about actually accessing our VPS on the web? Where's WordPress? Just point your browser to the bare IP address you used to log in and you should get the first screen of the WordPress installer.
-
-We now have a VPS deployed and we've taken some very basic steps to secure it. We can do a lot more to make things more secure, but I've covered that in a separate article.
-
-One last thing: the user we created does not have access to our SSH keys, so we need to add them. First make sure you're logged out of the server (type Control-D and you'll get a message telling you the connection has been closed). Now, on your local machine, paste this command:
-
-~~~.language-bash
-cat ~/.ssh/id_rsa.pub | ssh myusername@12.34.56.78 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
-~~~
-
-You'll have to put in your password one last time, but from now on you can log in with your SSH key.
-
-## Next Steps
-
-Congratulations you made it past the first hurdle, you're well on your way to taking control over your server. Kick back, relax and write some blog posts.
-
-Write down any problems you had with this tutorial and send me a link so I can check out your blog (I'll try to help figure out what went wrong too).
-
-Because we used a pre-built image from Digital Ocean, though, we're really not much better off than if we'd gone with shared hosting. But that's okay, you have to start somewhere. Next up we'll do the same thing, but this time with a bare OS, which will serve as the basis for a custom-built version of Nginx that's highly optimized and way faster than any stock server.
-
diff --git a/src/published/2015-10-28_pass.txt b/src/published/2015-10-28_pass.txt
deleted file mode 100644
index f02998c..0000000
--- a/src/published/2015-10-28_pass.txt
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Switching from LastPass to Pass
-pub_date: 2015-10-28 12:04:25
-slug: /src/pass
-tags: command line, security
-
----
-
-I never used to use a password manager. I kept all my passwords in my head and used some tricks I learned from my very, very limited understanding of what memory champions like [Ed Cooke][1] do to keep track of them. I generated strings using [pwgen][2] and then memorized them. As you might imagine, this did not scale well. Or rather it led to me getting lazy. I don't want to memorize a new strong password for some one-off site I'll probably never log in to again. So I would use a less strong password for those. Worse, I'd re-use that password at multiple sites.
-
-Recognizing that this was a bad idea, I gave up at some point and started using LastPass for these sorts of things. But my really important passwords (email and financial sites) are still only in my head. I never particularly liked that my passwords were stored on a third-party server, but LastPass was just *so* easy. Then LogMeIn bought LastPass and suddenly I was motivated to move on.
-
-As I outlined in a [brief piece][3] for The Register, there are lots of replacement services out there -- I like [Dashlane][4], despite the price -- but I didn't want my password data on a third party server any more. I wanted to be in total control.
-
-I can't remember how I ran across [pass][5], but I've been meaning to switch over to it for a while now. It's exactly what I wanted in a password tool -- a simple, secure, command line based system using tested tools like GnuPG. There's also a [Firefox add-on][6] and [an Android app][7] to make life a bit easier. So far though, I'm not using either.
-
-So I cleaned up my LastPass account, exported everything to CSV and imported it all into pass with this [Ruby script][8].
-
-Once you have the basics installed there are two ways to run pass, with Git and without. I can't tell you how many times Git has saved my ass, so naturally I went with a Git-based setup that I host on a private server. That, combined with regular syncing to my Debian machine, my wife's Mac, rsyncing to a storage server, and routine backups to Amazon S3 means my encrypted password files are backed up on six different physical machines. Moderately insane, but sufficiently redundant that I don't worry about losing anything.
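-
-If you want to try the same setup, the first few commands look roughly like this (a sketch; the GPG ID and git remote are placeholders for your own):
-
-~~~.language-bash
-pass init "My-GPG-Key-ID"        # create the store, encrypted to your key
-pass git init                    # turn the store into a git repo
-pass git remote add origin myserver:password-store.git
-pass insert web/example.com      # add a password, encrypted with GPG
-~~~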
-
-If you go this route there's one other thing you need to back up -- your GPG keys. The public key is easy, but the private one is a bit harder. I got some good ideas from [here][9]. On one hand you could be paranoid-level secure and make a paper printout of your key. I suggest using a barcode or QR code, and then printing on card stock, which you laminate for protection from the elements and then store in a secure location like a safe deposit box. I may do this at some point, but for now I went with the less secure plan B -- I simply encrypted my private key with a passphrase.
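-
-In practice plan B looks something like this (a sketch; KEYID is a placeholder for your own key's ID):
-
-~~~.language-bash
-gpg --export-secret-keys --armor KEYID > private-key.asc
-gpg --symmetric private-key.asc    # writes private-key.asc.gpg, protected by a passphrase
-shred -u private-key.asc           # remove the unencrypted copy
-~~~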
-
-Yes, that essentially negates at least some of the benefit of using a key instead of a passphrase in the first place. However, since, as noted above, I don't store any passwords that would, so to speak, give you the keys to my kingdom, I'm not terribly worried about it. Besides, if you really want these passwords it would be far easier to just take my laptop and [hit me with a $5 wrench][10] until I told you the passphrase for gnome-keyring.
-
-The more realistic thing to worry about is how other, potentially far less tech-savvy people can access these passwords should something happen to you. No one in my immediate family knows how to use GPG. Yet. So, until I teach my kids how to use it, I periodically print out my important passwords and store that printout in a secure place along with a will, advance directive and so on.
-
-
-[1]: https://twitter.com/tedcooke
-[2]: https://packages.debian.org/search?keywords=pwgen
-[3]: tk
-[4]: http://dashlane.com/
-[5]: http://www.passwordstore.org/
-[6]: https://github.com/jvenant/passff#readme
-[7]: https://github.com/zeapo/Android-Password-Store
-[8]: http://git.zx2c4.com/password-store/tree/contrib/importers/lastpass2pass.rb
-[9]: http://security.stackexchange.com/questions/51771/where-do-you-store-your-personal-private-gpg-key
-[10]: https://www.xkcd.com/538/
diff --git a/src/published/2015-11-05_how-googles-amp-project-speeds-web-sandblasting-ht.txt b/src/published/2015-11-05_how-googles-amp-project-speeds-web-sandblasting-ht.txt
deleted file mode 100644
index 5c443da..0000000
--- a/src/published/2015-11-05_how-googles-amp-project-speeds-web-sandblasting-ht.txt
+++ /dev/null
@@ -1,107 +0,0 @@
----
-title: How Google’s AMP project speeds up the Web—by sandblasting HTML
-pub_date: 2015-11-05 12:04:25
-slug: /src/how-googles-amp-project-speeds-web-sandblasting-ht
-tags: IndieWeb
-
----
-
-[**This story originally appeared on <a href="http://arstechnica.com/information-technology/2015/11/googles-amp-an-internet-giant-tackles-the-old-myth-of-the-web-is-too-slow/" rel="me">Ars Technica</a>, to comment and enjoy the full reading experience with images (including a TRS-80 browsing the web) you should read it over there.**]
-
-There's a story going around today that the Web is too slow, especially over mobile networks. It's a pretty good story—and it's a perpetual story. The Web, while certainly improved from the days of 14.4k modems, has never been as fast as we want it to be, which is to say that the Web has never been instantaneous.
-
-Curiously, rather than a focus on possible cures, like increasing network speeds, finding ways to decrease network latency, or even speeding up Web browsers, the latest version of the "Web is too slow" story pins the blame on the Web itself. And, perhaps more pointedly, this blame falls directly on the people who make it.
-
-The average webpage has increased in size at a terrific rate. In January 2012, the average page tracked by HTTPArchive [transferred 1,239kB and made 86 requests](http://httparchive.org/trends.php?s=All&minlabel=Oct+1+2012&maxlabel=Oct+1+2015#bytesTotal&reqTotal). Fast forward to September 2015, and the average page loads 2,162kB of data and makes 103 requests. These numbers don't directly correlate to longer page load-and-render times, of course, especially if download speeds are also increasing. But these figures are one indicator of how quickly webpages are bulking up.
-
-Native mobile applications, on the other hand, are getting faster. Mobile devices get more powerful with every release cycle, and native apps take better advantage of that power.
-
-So as the story goes, apps get faster, the Web gets slower. This is allegedly why Facebook must invent Facebook Instant Articles, why Apple News must be built, and why Google must now create [Accelerated Mobile Pages](http://arstechnica.com/information-technology/2015/10/googles-new-amp-html-spec-wants-to-make-mobile-websites-load-instantly/) (AMP). Google is late to the game, but AMP has the same goal as Facebook's and Apple's efforts—making the Web feel like a native application on mobile devices. (It's worth noting that all three solutions focus exclusively on mobile content.)
-
-For AMP, two things in particular stand in the way of a lean, mean browsing experience: JavaScript... and advertisements that use JavaScript. The AMP story is compelling. It has good guys (Google) and bad guys (everyone not using Google Ads), and it's true to most of our experiences. But this narrative has some fundamental problems. For example, Google owns the largest ad server network on the Web. If ads are such a problem, why doesn't Google get to work speeding up the ads?
-
-There are other potential issues looming with the AMP initiative as well, some as big as the state of the open Web itself. But to think through the possible ramifications of AMP, first you need to understand Google's new offering itself.
-
-## What is AMP?
-
-To understand AMP, you first need to understand Facebook's Instant Articles. Instant Articles use RSS and standard HTML tags to create an optimized, slightly stripped-down version of an article. Facebook then allows for some extra rich content like auto-playing video or audio clips. Despite this, Facebook claims that Instant Articles are up to 10 times faster than their siblings on the open Web. Some of that speed comes from stripping things out, while some likely comes from aggressive caching.
-
-But the key is that Instant Articles are only available via Facebook's mobile apps—and only to established publishers who sign a deal with Facebook. That means reading articles from Facebook's Instant Article partners like National Geographic, BBC, and Buzzfeed is a faster, richer experience than reading those same articles when they appear on the publisher's site. Apple News appears to work roughly the same way, taking RSS feeds from publishers and then optimizing the content for delivery within Apple's application.
-
-All this app-based content delivery cuts out the Web. That's a problem for the Web and, by extension, for Google, which leads us to Google's Accelerated Mobile Pages project.
-
-Unlike Facebook Articles and Apple News, AMP eschews standards like RSS and HTML in favor of its own little modified subset of HTML. AMP HTML looks a lot like HTML without the bells and whistles. In fact, if you head over to the [AMP project announcement](https://www.ampproject.org/how-it-works/), you'll see an AMP page rendered in your browser. It looks like any other page on the Web.
-
-AMP markup uses an extremely limited set of tags. Form tags? Nope. Audio or video tags? Nope. Embed tags? Certainly not. Script tags? Nope. There's a very short list of the HTML tags allowed in AMP documents available over on the [project page](https://github.com/ampproject/amphtml/blob/master/spec/amp-html-format.md). There's also no JavaScript allowed. Those ads and tracking scripts will never be part of AMP documents (but don't worry, Google will still be tracking you).
-
-AMP defines several of its own tags, things like amp-youtube, amp-ad, or amp-pixel. The extra tags are part of what's known as [Web components](http://www.w3.org/TR/components-intro/), which will likely become a Web standard (or it might turn out to be "ActiveX part 2," only the future knows for sure).
-
-So far AMP probably sounds like a pretty good idea—faster pages, no tracking scripts, no JavaScript at all (and so no overlay ads about signing up for newsletters). However, there are some problematic design choices in AMP. (At least, they're problematic if you like the open Web and current HTML standards.)
-
-AMP re-invents the wheel for images by using the custom component amp-img instead of HTML's img tag, and it does the same thing with amp-audio and amp-video rather than use the HTML standard audio and video. AMP developers argue that this allows AMP to serve images only when required, which isn't possible with the HTML img tag. That, however, is a limitation of Web browsers, not HTML itself. AMP has also very clearly treated [accessibility](https://en.wikipedia.org/wiki/Computer_accessibility) as an afterthought. You lose more than just a few HTML tags with AMP.
-
-In other words, AMP is technically half baked at best. (There are dozens of open issues calling out some of the [most](https://github.com/ampproject/amphtml/issues/517) [egregious](https://github.com/ampproject/amphtml/issues/481) [decisions](https://github.com/ampproject/amphtml/issues/545) in AMP's technical design.) The good news is that AMP developers are listening. One of the worst things about AMP's initial code was the decision to disable pinch-and-zoom on articles, and thankfully, Google has reversed course and [eliminated the tag that prevented pinch and zoom](https://github.com/ampproject/amphtml/issues/592).
-
-But AMP's markup language is really just one part of the picture. After all, if all AMP really wanted to do was strip out all the enhancements and just present the content of a page, there are existing ways to do that. Speeding things up for users is a nice side benefit, but the point of AMP, as with Facebook Articles, looks to be more about locking in users to a particular site/format/service. In this case, though, the "users" aren't you and I as readers; the "users" are the publishers putting content on the Web.
-
-## It's the ads, stupid
-
-The goal of Facebook Instant Articles is to keep you on Facebook. No need to explore the larger Web when it's all right there in Facebook, especially when it loads so much faster in the Facebook app than it does in a browser.
-
-Google seems to have recognized what a threat Facebook Instant Articles could be to Google's ability to serve ads. This is why Google's project is called Accelerated Mobile Pages. Sorry, desktop users, Google already knows how to get ads to you.
-
-If you watch the [AMP demo](https://googleblog.blogspot.com/2015/10/introducing-accelerated-mobile-pages.html), which shows how AMP might work when it's integrated into search results next year, you'll notice that the viewer effectively never leaves Google. AMP pages are laid over the Google search page in much the same way that outside webpages are loaded in native applications on most mobile platforms. The experience from the user's point of view is just like the experience of using a mobile app.
-
-Google needs the Web to be on par with the speeds in mobile apps. And to its credit, the company has some of the smartest engineers working on the problem. Google has made one of the fastest Web browsers (if not the fastest) by building Chrome, and in doing so the company has pushed other vendors to speed up their browsers as well. Since Chrome debuted, browsers have become faster and better at an astonishing rate. Score one for Google.
-
-The company has also been touting the benefits of mobile-friendly pages, first by labeling them as such in search results on mobile devices and then later by ranking mobile friendly pages above not-so-friendly ones when other factors are equal. Google has been quick to adopt speed-improving new HTML standards like the responsive images effort, which was first supported by Chrome. Score another one for Google.
-
-But pages keep growing faster than network speeds, and the Web slows down. In other words, Google has tried just about everything within its considerable power as a search behemoth to get Web developers and publishers large and small to speed up their pages. It just isn't working.
-
-One increasingly popular reaction to slow webpages has been the use of content blockers, typically browser add-ons that stop pages from loading anything but the primary content of the page. Content blockers have been around for over a decade now (NoScript first appeared for Firefox in 2005), but their use has largely been limited to the desktop. That changed in Apple's iOS 9, which for the first time put simple content-blocking tools in the hands of millions of mobile users.
-
-Combine all the eyeballs that are using iOS with content blockers, reading Facebook Instant Articles, and perusing Apple News, and you suddenly have a whole lot of eyeballs that will never see any Google ads. That's a problem for Google, one that AMP is designed to fix.
-
-## Static pages that require Google's JavaScript
-
-The most basic thing you can do on the Web is create a flat HTML file that sits on a server and contains some basic tags. This type of page will always be lightning fast. It's also insanely simple. This is literally all you need to do to put information on the Web. There's no need for JavaScript, no need even for CSS.
-
-This is more or less the sort of page AMP wants you to create (AMP doesn't care if your pages are actually static or—more likely—generated from a database. The point is what's rendered is static). But then AMP wants to turn around and require that each page include a third-party script in order to load. AMP deliberately sets the opacity of the entire page to 0 until this script loads. Only then is the page revealed.
-
-This is a little odd; as developer Justin Avery [writes](https://responsivedesign.is/articles/whats-the-deal-with-accelerated-mobile-pages-amp), "Surely the document itself is going to be faster than loading a library to try and make it load faster."
-
-Pinboard.in creator Maciej Cegłowski did just that, putting together a demo page that duplicates the AMP-based AMP homepage without that JavaScript. Over a 3G connection, Cegłowski's page fills the viewport in [1.9 seconds](http://www.webpagetest.org/result/151016_RF_VNE/). The AMP homepage takes [9.2 seconds](http://www.webpagetest.org/result/151016_9J_VNN/). JavaScript slows down page load times, even when that JavaScript is part of Google's plan to speed up the Web.
-
-Ironically, for something that is ostensibly trying to encourage better behavior from developers and publishers, this means that pages using progressive enhancement, keeping scripts to a minimum and aggressively caching content—in other words sites following best practices and trying to do things right—may be slower in AMP.
-
-In the end, developers and publishers who have been following best practices for Web development and don't rely on dozens of tracking networks and ads have little to gain from AMP. Unfortunately, the publishers building their sites like that right now are few and far between. Most publishers have much to gain from generating AMP pages -- at least in terms of speed. Google says that AMP can improve page speed index scores by between 15 and 85 percent. That huge range is likely a direct result of how many third-party scripts are being loaded on some sites.
-
-The dependency on JavaScript has another detrimental effect: if AMP's (albeit small) script fails to load for some reason -- say, you're going through a tunnel on a train or only have a flaky one-bar connection at the beach -- the AMP page is completely blank. When an AMP page fails, it fails spectacularly.
-
-Google knows better than this. Even Gmail still offers a pure HTML-based fallback version of itself.
-
-## AMP for publishers
-
-Under the AMP bargain, all big media has to do is give up its ad networks. And interactive maps. And data visualizations. And comment systems.
-
-Your WordPress blog can get in on the stripped-down AMP action as well. Given that WordPress powers roughly 24 percent of all sites on the Web, having an easy way to generate AMP documents from WordPress means a huge boost in adoption for AMP. It's certainly possible to build fast websites using WordPress, but it's also easy to do the opposite. WordPress plugins often have a dramatic (negative) impact on load times. It isn't uncommon to see a WordPress site loading not just one but several external JavaScript libraries because the user installed three plugins that each use a different library. AMP neatly solves that problem by stripping everything out.
-
-So why would publishers want to use AMP? Google, while its influence has dipped a tad across industries (as Facebook and Twitter continue to drive more traffic), remains a powerful driver of traffic. When Google promises more eyeballs on their stories, big media listens.
-
-AMP isn't trying to get rid of the Web as we know it; it just wants to create a parallel one. Under this system, publishers would not stop generating regular pages, but they would also start generating AMP files, usually (judging by the early adopter examples) by appending /amp to the end of the URL. The AMP page and the canonical page would reference each other through standard HTML tags. User agents could then pick and choose between them. That is, Google's Web crawler might grab the AMP page, but desktop Firefox might hit the AMP page and redirect to the canonical URL.
-
-On one hand, what this amounts to is that after years of telling the Web to stop making m-dot mobile-specific websites, Google is telling the Web to make /amp-specific mobile pages. On the other hand, this nudges publishers toward an idea that's big in the [IndieWeb movement](http://indiewebcamp.com/): Publish (on your) Own Site, Syndicate Elsewhere (or [POSSE](http://indiewebcamp.com/POSSE) for short).
-
-The idea is to own the canonical copy of the content on your own site but then to send that content everywhere you can. Or rather, everywhere you want to reach your readers. Facebook Instant Article? Sure, hook up the RSS feed. Apple News? Send the feed over there, too. AMP? Sure, generate an AMP page. No need to stop there—tap the new Medium API and half a dozen others as well.
-
-Reading is a fragmented experience. Some people will love reading on the Web, some via RSS in their favorite reader, some in Facebook Instant Articles, some via AMP pages on Twitter, some via Lynx in their terminal running on a [restored TRS-80](http://arstechnica.com/information-technology/2015/08/surfing-the-internet-from-my-trs-80-model-100/) (seriously, it can be done). The beauty of the POSSE approach is that you can reach them all from a single, canonical source.
-
-## AMP and the open Web
-
-While AMP has problems and just might be designed to lock publishers into a Google-controlled format, so far it does seem friendlier to the open Web than Facebook Instant Articles.
-
-In fact, if you want to be optimistic, you could look at AMP as the carrot that Google has been looking for in its effort to speed up the Web. As noted Web developer (and AMP optimist) Jeremy Keith [writes](https://adactio.com/journal/9646) in a piece on AMP, "My hope is that the current will flow in both directions. As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask 'Why can't our regular pages be this fast?' By showing that there is life beyond big bloated invasive webpages, perhaps the AMP project will work as a demo of what the whole Web could be."
-
-Not everyone is that optimistic about AMP, though. Developer and Author Tim Kadlec [writes](https://timkadlec.com/2015/10/amp-and-incentives/), "[AMP] doesn't feel like something helping the open Web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the Web... Using a very specific tool to build a tailored version of my page in order to 'reach everyone' doesn't fit any definition of the 'open Web' that I've ever heard."
-
-There's one other important aspect to AMP that helps speed up their pages: Google will cache your pages on its CDN for free. "AMP is caching... You can use their caching if you conform to certain rules," writes Dave Winer, developer and creator of RSS, [in a post on AMP](http://scripting.com/2015/10/10/supportingStandardsWithoutAllThatNastyInterop.html). "If you don't, you can use your own caching. I can't imagine there's a lot of difference unless Google weighs search results based on whether you use their code."
diff --git a/src/published/2019-04-07_why-and-how-ditch-vagrant-for-lxd.txt b/src/published/2019-04-07_why-and-how-ditch-vagrant-for-lxd.txt
deleted file mode 100644
index e83d8e3..0000000
--- a/src/published/2019-04-07_why-and-how-ditch-vagrant-for-lxd.txt
+++ /dev/null
@@ -1,216 +0,0 @@
-* **Updated July 2022**: This was getting a bit out of date in some places so I've fixed a few things. More importantly, I've run into some issues with cgroups and lxc on Arch and added some notes below under the [special note to Arch users](#arch).*
-
-I've used Vagrant to manage my local development environment for quite some time. The developers I used to work with used it and, while I have no particular love for it, it works well enough. Eventually I got comfortable enough with Vagrant that I started using it in my own projects. I even wrote about [setting up a custom Debian 9 Vagrant box](/src/create-custom-debian-9-vagrant-box) to mirror the server running this site.
-
-The problem with Vagrant is that I have to run a huge memory-hungry virtual machine when all I really want to do is run Django's built-in dev server.
-
-My laptop only has 8GB of RAM. My browser is usually taking around 2GB, which means if I start two Vagrant machines, I'm pretty much maxed out. Django's dev server is also painfully slow to reload when anything changes.
-
-Recently I was talking with one of Canonical's [MAAS](https://maas.io/) developers and the topic of containers came up. When I mentioned I really didn't like Docker, but hadn't tried anything else, he told me I really needed to try LXD. Later that day I began reading through the [LinuxContainers](https://linuxcontainers.org/) site and tinkering with LXD. Now, a few days later, there's not a Vagrant machine left on my laptop.
-
-Since it's just me, I don't care that LXC only runs on Linux. LXC/LXD is blazing fast, lightweight, and dead simple. To quote Canonical's [Michael Iatrou](https://blog.ubuntu.com/2018/01/26/lxd-5-easy-pieces), LXC "liberates your laptop from the tyranny of heavyweight virtualization and simplifies experimentation."
-
-Here's how I'm using LXD to manage containers for Django development on Arch Linux. I've also included instructions and commands for Ubuntu since I set it up there as well.
-
-### What's the difference between LXC, LXD and `lxc`?
-
-I wrote this guide in part because I've been hearing about LXC for ages, but it seemed unapproachable, overwhelming, too enterprisey you might say. It's really not, though; in fact I found it easier to understand than Vagrant or Docker.
-
-So what is an LXC container, what's LXD, and how is either different from, say, a VM or, for that matter, Docker?
-
-* LXC - the low-level tools and library used to create and manage containers; powerful, but complicated.
-* LXD - a daemon that provides a REST API to drive LXC containers; much more user-friendly.
-* `lxc` - the command line client for LXD.
-
-In LXC parlance a container is essentially a virtual machine (if you want to get pedantic, see Stéphane Graber's post on the [various components that make up LXD](https://stgraber.org/2016/03/11/lxd-2-0-introduction-to-lxd-112/)). For the most part though, interacting with an LXC container is like interacting with a VM. You say ssh, LXD says socket. Potato, potahto. Mostly.
-
-An LXC container is not a container in the same sense that Docker talks about containers. Think of it more as a VM that only uses the resources it needs to do whatever it's doing. Running this site in an LXC container uses very little RAM. Running it in Vagrant uses 2GB of RAM because that's what I allocated to the VM -- that's what it uses even if it doesn't need it. LXC is much smarter than that.
-
-Now what about LXD? LXC is the low-level tool; you don't really need to go there. Instead you interact with your LXC containers via the LXD API, which uses YAML config files and the command line tool `lxc`.
-
-That's the basic stack; let's install it.
-
-### Install LXD
-
-On Arch I used the version of [LXD in the AUR](https://aur.archlinux.org/packages/lxd/). Ubuntu users should go with the Snap package. The other thing you'll want is your distro's Btrfs or ZFS tools.
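-
-On Arch that boils down to something like this (assuming an AUR helper like yay; you can build the AUR package by hand instead):
-
-~~~~console
-yay -S lxd
-sudo pacman -S btrfs-progs
-sudo systemctl enable --now lxd
-~~~~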
-
-Part of LXC's magic relies on either Btrfs or ZFS to read a virtual disk not as a file, the way Virtualbox and others do, but as a block device. Both file systems also offer copy-on-write cloning and snapshot features, which makes it simple and fast to spin up new containers. It takes about 6 seconds to install and boot a complete and fully functional LXC container on my laptop, and most of that time is downloading the image file from the remote server. It takes about 3 seconds to clone that fully provisioned base container into a new container.
-
-In the end I set up my Arch machine using Btrfs and my Ubuntu machine using ZFS to see if I could spot any difference (so far, that would be no; the only difference I've run across in my research is that Btrfs can run LXC containers inside LXC containers. Turtles all the way down).
-
-Assuming you have Snap packages set up already, Debian and Ubuntu users can get everything they need to install and run LXD with these commands:
-
-~~~~console
-apt install zfsutils-linux
-~~~~
-
-And then install the snap version of lxd with:
-
-~~~~console
-snap install lxd
-~~~~
-
-Once that's done we need to initialize LXD. I went with the defaults for everything. I've printed out the entire init command output so you can see what will happen:
-
-~~~~console
-sudo lxd init
-Would you like to use LXD clustering? (yes/no) [default=no]:
-Do you want to configure a new storage pool? (yes/no) [default=yes]:
-Name of the new storage pool [default=default]:
-Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
-Create a new BTRFS pool? (yes/no) [default=yes]:
-Would you like to use an existing block device? (yes/no) [default=no]:
-Size in GB of the new loop device (1GB minimum) [default=15GB]:
-Would you like to connect to a MAAS server? (yes/no) [default=no]:
-Would you like to create a new local network bridge? (yes/no) [default=yes]:
-What should the new bridge be called? [default=lxdbr0]:
-What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
-What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
-Would you like LXD to be available over the network? (yes/no) [default=no]:
-Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
-Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
-~~~~
-
-LXD will then spit out the contents of the profile you just created. It's a YAML file and you can edit it as you see fit after the fact. You can also create more than one profile if you like. To see all installed profiles use:
-
-~~~~console
-lxc profile list
-~~~~
-
-To view the contents of a profile use:
-
-~~~~console
-lxc profile show <profilename>
-~~~~
-
-To edit a profile use:
-
-~~~~console
-lxc profile edit <profilename>
-~~~~
-
-So far I haven't needed to edit a profile by hand. I've also been happy with all the defaults, although when I do this again I will probably enlarge the storage pool, and maybe partition off some dedicated disk space for it. But for now I'm just trying to figure things out, so defaults it is.
-
-The last step in our setup is to add our user to the lxd group. By default LXD's control socket is owned by the lxd group, so to interact with containers we'll need to make our user part of that group.
-
-~~~~console
-sudo usermod -a -G lxd yourusername
-~~~~
-
-#####Special note for Arch users. {:#arch }
-
-To run unprivileged containers as your own user, you'll need to jump through a couple extra hoops. As usual, the [Arch User Wiki](https://wiki.archlinux.org/index.php/Linux_Containers#Enable_support_to_run_unprivileged_containers_(optional)) has you covered. Read through and follow those instructions and then reboot and everything below should work as you'd expect.
-
-Or at least it did until about June of 2022 when something changed with cgroups and I stopped being able to run my lxc containers. I kept getting errors like:
-
-~~~~console
-Failed to create cgroup at_mnt 24()
-lxc debian-base 20220713145726.259 ERROR conf - ../src/lxc/conf.c:lxc_mount_auto_mounts:851 - No such file or directory - Failed to mount "/sys/fs/cgroup"
-~~~~
-
-I tried debugging, and reading through all the bug reports I could find over the course of a couple of days and got nowhere. No one else seems to have this problem. I gave up and decided I'd skip virtualization and develop directly on Arch. I installed PostgreSQL... and it wouldn't start, also throwing an error about cgroups. That is when I dug deeper into cgroups and found a way to revert to the older behavior. I added this line to my boot params (in my case that's in /boot/loader/entries/arch.conf):
-
-~~~~console
-systemd.unified_cgroup_hierarchy=0
-~~~~
-
-That fixed all the issues for me. If anyone can explain *why* I'd be interested to hear from you in the comments.
-
-### Create Your First LXC Container
-
-Let's create our first container. This website runs on a Debian VM currently hosted on Vultr.com so I'm going to spin up a Debian container to mirror this environment for local development and testing.
-
-To create a new LXC container we use the `launch` command of the `lxc` tool.
-
-There are four sources you can get LXC containers from: local (meaning a container base you've already downloaded), images (which come from [https://images.linuxcontainers.org/](https://images.linuxcontainers.org/)), ubuntu (release versions of Ubuntu), and ubuntu-daily (daily Ubuntu images). The images on linuxcontainers.org are unofficial, but the Debian image I used worked perfectly. There's also Alpine, Arch, CentOS, Fedora, openSUSE, Oracle, Plamo, Sabayon and lots of Ubuntu images. Pretty much every architecture you could imagine is in there too.
-
-I created a Debian 9 Stretch container with the amd64 image. To create an LXC container from one of the remote images the basic syntax is `lxc launch images:distroname/version/architecture containername`. For example:
-
-~~~~console
-lxc launch images:debian/stretch/amd64 debian-base
-Creating debian-base
-Starting debian-base
-~~~~
-
-That will grab the amd64 image of Debian 9 Stretch and create a container out of it and then launch it. Now if we look at the list of installed containers we should see something like this:
-
-~~~~console
-lxc list
-+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
-| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
-+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
-| debian-base | RUNNING | 10.171.188.236 (eth0) | fd42:e406:d1eb:e790:216:3eff:fe9f:ad9b (eth0) | PERSISTENT | |
-+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+
-~~~~
-
-Now what? This is what I love about LXC: we can interact with our container pretty much the same way we'd interact with a VM. Let's connect to the root shell:
-
-~~~~console
-lxc exec debian-base -- /bin/bash
-~~~~
-
-Look at your prompt and you'll notice it says `root@nameofcontainer`. Now you can install everything you need on your container. For me, setting up a Django dev environment, that means Postgres, Python, Virtualenv, and, for this site, all the Geodjango requirements (Postgis, GDAL, etc), along with a few other odds and ends.
-
-You don't have to do it from inside the container though. Part of LXD's charm is being able to run commands without logging into anything. Instead you can do this:
-
-~~~~console
-lxc exec debian-base -- apt update
-lxc exec debian-base -- apt install postgresql postgis virtualenv
-~~~~
-
-LXD will output the results of your command as if you were SSHed into a VM. Not being one for typing, I created a bash alias that looks like this: `alias luxdev='lxc exec debian-base -- '` so that all I need to type is `luxdev <command>`.
-
-What I haven't figured out is how to chain commands; this does not work:
-
-~~~~console
-lxc exec debian-base -- su - lxf && cd site && source venv/bin/activate && ./manage.py runserver 0.0.0.0:8000
-~~~~
-
-According to [a bug report](https://github.com/lxc/lxd/issues/2057), it should work in quotes, but it doesn't for me. Something must have changed since then, or I'm doing something wrong.
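-
-For reference, the quoted form the bug report describes looks something like this -- treat it as a sketch, since it doesn't work for me:
-
-~~~~console
-lxc exec debian-base -- bash -c "su - lxf -c 'cd site && source venv/bin/activate && ./manage.py runserver 0.0.0.0:8000'"
-~~~~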
-
-The next thing I wanted to do was mount a directory on my host machine in the LXC instance. To do that you'll need to edit `/etc/subuid` and `/etc/subgid` to add your user id. Use the `id` command to get your user and group id (it's probably 1000 but if not, adjust the commands below). Once you have your user id, add it to the files with this one liner I got from the [Ubuntu blog](https://blog.ubuntu.com/2016/12/08/mounting-your-home-directory-in-lxd):
-
-~~~~console
-echo 'root:1000:1' | sudo tee -a /etc/subuid /etc/subgid
-~~~~
-
-Then you need to configure your LXC instance to use the same uid:
-
-~~~~console
-lxc config set debian-base raw.idmap 'both 1000 1000'
-~~~~
-
-The last step is to add a device to your config file so LXC will mount it. You'll need to stop and start the container for the changes to take effect.
-
-~~~~console
-lxc config device add debian-base sitedir disk source=/path/to/your/directory path=/path/to/where/you/want/folder/in/lxc
-lxc stop debian-base
-lxc start debian-base
-~~~~
-
-That replicates my setup in Vagrant, but we've really just scratched the surface of what you can do with LXD. For example you'll notice I named the initial container "debian-base". That's because this is the base image (fully set up for Django dev) which I clone whenever I start a new project. To clone a container, first take a snapshot of your base container, then copy that snapshot to create a new container:
-
-~~~~console
-lxc snapshot debian-base debian-base-configured
-lxc copy debian-base/debian-base-configured mycontainer
-~~~~
-
-Now you've got a new container named mycontainer. If you'd like to tweak anything, for example mount a different folder specific to this new project you're starting, you can edit the config file like this:
-
-~~~~console
-lxc config edit mycontainer
-~~~~
-
-I highly suggest reading through Stéphane Graber's 12 part series on LXD to get a better idea of other things you can do, how to manage resources, manage local images, migrate containers, or connect LXD with Juju, Openstack or yes, even Docker.
-
-#####Shoulders stood upon
-
-* [Stéphane Graber's 12 part series on lxd 2.0](https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/) - Graber wrote LXC and LXD, this is the best resource I found and highly recommend reading it all.
-* [Mounting your home directory in LXD](https://blog.ubuntu.com/2016/12/08/mounting-your-home-directory-in-lxd)
-* [Official how to](https://linuxcontainers.org/lxd/getting-started-cli/)
-* [Linux Containers Discourse site](https://discuss.linuxcontainers.org/t/deploying-django-applications/996)
-* [LXD networking: lxdbr0 explained](https://blog.ubuntu.com/2016/04/07/lxd-networking-lxdbr0-explained)
-
-
-[^1]: To be fair, I didn't need to get rid of Vagrant. You can use Vagrant to manage LXC containers, but I don't know why you'd bother. LXD's management tools and config system work great, so why add yet another tool to the mix? Unless you're working with developers who use Windows, in which case LXC, which is short for *Linux Container*, is not for you.
diff --git a/src/published/arch-philosophy.txt b/src/published/arch-philosophy.txt
deleted file mode 100644
index ff18521..0000000
--- a/src/published/arch-philosophy.txt
+++ /dev/null
@@ -1,24 +0,0 @@
-Everyone seems to have a post about why they ended up with Arch. This is mine.
-
-I recently made the switch to Arch Linux for my primary desktop and it's been great. Arch very much feels like the end of the line for me -- the bottom of the rabbit hole as it were. Once you have a system that does everything you need it to do effortlessly, why bother with anything else? Some of it might be a pain at times -- hand partitioning, hand mounting, generating your own fstab files -- but it teaches you a lot. It pulls back the curtain so you can see that you are in fact the person behind the curtain, you just didn't realize it.
-
-<img src="images/2020/desktop042020_uAICE8n.png" id="image-2325" class="picwide caption" />
-
-**[Updated July 2021: Still running Arch. Still happy about it. I did switch back to Openbox instead of i3, but otherwise my setup is unchanged]**
-
-Why bother? Control. Simplicity. Stubbornness. The good old DIY ethos, which is born out of the realization that if you don't do things yourself you'll have to accept the mediocrity that capitalism has produced. You never learn; you never grow. That's no way to live.
-
-I used to be a devoted Debian fan. I still agree with the Debian manifesto, such as it is. In practice, however, I found myself too often having to futz with things.
-
-I came to Arch for the AUR, though the truth is these days I don't use it much. Then for a while I [ran Sway](/src/guide-to-switching-i3-to-sway), which was really only practical on Arch. Since then though I went back to X.org. Sorry Wayland, but much as I love Sway, I did not love wrestling with MIDI controller drivers, JACK, and all the other elements of an audio/video workflow in Wayland. It can be done, but it’s more work, and I don’t want to work at getting software to work. I’m too old for that shit. I want to plug in a microphone, open Audacity, and record. If it’s any more complicated than that -- and it was for me in Wayland with the mics I own -- I will find something else. I really don’t care what my software stack is, so long as I can create what I want to create with it.
-
-Wayland was smoother, less graphically glitchy, but meh, whatever. Ninety percent of the time I’m writing in Vim in a Urxvt window. I need smooth scrolling and transitions like I need a hole in my head. I also set up Openbox to behave very much like Sway, so I still have the same shortcuts and honestly, aside from the fact that Tint2 has more icons than Waybar, I can’t tell the difference. Well, that’s not true. Vim works fine with the clipboard again, no need for Neovim.
-
-My Arch setup these days is minimalist: [Openbox](http://openbox.org/wiki/Main_Page) with [tint2](https://gitlab.com/o9000/tint2). I open apps with [dmenu](http://tools.suckless.org/dmenu/) and do most of my file system tasks from the terminal using bash (or [Ranger](http://nongnu.org/ranger/) if I want something fancier). Currently my setup uses about 200MB of RAM with no apps open. Arch doesn't have quite the software selection of Debian, but it has most of the software you'd ever want. My needs are simple: bash, vim, tmux, mutt, newsboat, mpd, mpv, git, feh, gimp, darktable and dev stuff like python3, postgis, etc. Every distro has this stuff.
-
-All of which means I have no need to spend more than $400 on a laptop.
-
-
-Arch's real strength though is how amazingly easy it is to package your own software. Because even Debian's epically oversized repos can't hold everything. The Debian repos pale next to the Arch User Repository (AUR), which has just about every piece of software available for Linux. And it's up-to-date. So up-to-date that half the AUR packages have a -git variant that's pulled straight from the project's git repo. The best part is there are tools to manage and update all these out-of-repo packages. I strongly suggest you learn to package and install AUR packages by hand, but once you've done that a few times and you know what's happening I suggest installing [yay](https://github.com/Jguer/yay) to simplify managing all those AUR installs.
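-
-Doing it by hand really is just three commands (using yay itself as the example package):
-
-~~~~console
-git clone https://aur.archlinux.org/yay.git
-cd yay
-makepkg -si   # build the package, then install it with pacman
-~~~~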
-
-I've installed Arch on dozens of machines at this point. I started with my Macbook Pro, which I've since sold (no need for high-end hardware with my setup), but it ran Arch like a champ (what a relief to not need OS X). Currently I use a Lenovo x270 that I picked up off eBay for $300. I added a larger hard drive, a second hard drive, and 32 gigabytes of RAM. It too runs Arch like a champ and gives me all I could ever want in a laptop. Okay, a graphics card would be nice for my occasional bouts of video editing, but otherwise it's more than enough.
diff --git a/src/published/backup-2.txt b/src/published/backup-2.txt
deleted file mode 100644
index 4012668..0000000
--- a/src/published/backup-2.txt
+++ /dev/null
@@ -1,23 +0,0 @@
-I wrote previously about how I [back up database files](/src/automatic-offsite-postgresql-backups) automatically. The key word there being "automatically". If I have to remember to make a backup the odds of it happening drop to zero. So I automate as I described in that piece, but that's not the only backup I have.
-
-The point for me as a writer is that I don't want to lose these words.
-
-Part of the answer is backing up databases, but part of my solution is also creating workflows which automatically spawn backups.
-
-This is actually my preferred backup method because it's not just a backup, it's future proofing. PostgreSQL may not be around ten years from now (I hope it is, because it's pretty awesome, but it may not be), so it can't be my only backup.
-
-In fact I've got at least half a dozen backups of these words and I haven't even finished this piece yet. Right now I'm typing these words in Vim and will save the file in a Git repo that will get pushed to a server. That's two backups. Later the containing folder will be backed up on S3 (weekly), as well as two local drives (one daily, one weekly, both [rsync](https://rsync.samba.org/) copies).
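-
-The rsync copies are nothing fancy, just a one-liner on a schedule -- something like this, with made-up paths:
-
-~~~~console
-rsync -a --delete ~/writing/ /mnt/backup/writing/
-~~~~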
-
-None of that really requires any effort on my part. I do have to add this file to the git repo and then commit and push it to the remote server, but [Vim Fugitive](https://github.com/tpope/vim-fugitive) makes that ridiculously simple.
-
-That's not the end of the backups though. Once I'm done writing I'll cut and paste this piece into my Django app and hit a publish button that will write the results out to the flat HTML file you're actually reading right now (this file is another backup). I also output a plain text version (just append `.txt` to any luxagraf URL to see a plain text version of the page).
-
-The end result is that all this makes it very unlikely I will lose these words outright.
-
-However, when I plugged these words into the database I gave this article a relationship with other objects in that database. So even though the redundant backups built into my workflow make a total data loss unlikely, without the database I would lose the relationships I've created. That's why I have [a solid PostgreSQL backup strategy](/src/automatic-offsite-postgresql-backups), but what if Postgres does disappear?
-
-I could and occasionally do output all the data in the database to flat files with JSON or YAML versions of the metadata attached. Or at least some of it. It's hard to output massive amounts of geodata in a text file (for example the shapefiles of [national parks](https://luxagraf.net/projects/national-parks/) aren't particularly useful as text data).
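-
-When I do, one way to get there is Django's built-in dumpdata command -- roughly this, with a placeholder app label:
-
-~~~~console
-./manage.py dumpdata blog --format=json --indent=2 > blog-backup.json
-~~~~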
-
-I'm not sure what the answer is really, but lately I've been thinking that maybe the answer is just to let it go? The words are the story, that's what my family, my kids, my friends, and whatever few readers I have really want. I'm the only one that cares about the larger story that includes the metadata, the relationships between the stories. Maybe I don't need that. Maybe that it's here today at all is remarkable enough on its own.
-
-The web is after all an ephemeral thing. It depends on our continued ability to do so many things we won't be able to do forever, like burn fossil fuels. In the end the most lasting backup I have may well be the 8.5x11 sheets of paper I've recently taken to printing out. Everything else depends on so much.
diff --git a/src/published/command-line-searchable-text-snippets.txt b/src/published/command-line-searchable-text-snippets.txt
deleted file mode 100644
index 7bb149b..0000000
--- a/src/published/command-line-searchable-text-snippets.txt
+++ /dev/null
@@ -1,116 +0,0 @@
-Snippets are bits of text you use frequently. Boilerplate email responses, code blocks, and whatever else you regularly need to type. My general rule is, if I type it more than twice, I save it as a snippet.
-
-I have a lot of little snippets of text and code from years of doing this. When I used the i3 desktop (and X11) I used [Autokey](https://github.com/autokey/autokey) to invoke shortcuts and paste these snippets where I need them. In Autokey you define a shortcut for your longer chunk of text, and then whenever you type that shortcut Autokey "expands" it to your longer text.
-
-It's a great app, but I [switched to a Wayland-based desktop](/src/guide-to-switching-i3-to-sway) ([Sway](https://swaywm.org/)) and Autokey doesn't work in Wayland yet. It's unclear to me whether it's even possible to have an Autokey-like app work within Wayland's security model ([Hawck](https://github.com/snyball/Hawck) claims to, but I have not tested it).
-
-Instead, after giving it some thought, I came up with a way to do everything I need -- in a way I like even better -- using tools that I already have installed.
-
-###Rolling Your Own Text Snippet Manager
-
-Autokey is modeled on the idea of typing shortcuts and having them replaced with a larger chunk of text. It works to a point, but has the mental overhead of needing to remember all those keystroke combos.
-
-Dedicating memory to digital stuff feels like we're doing it wrong. Why not *search* for a snippet instead of trying to remember some key combo? If the searching is fast and seamless there's no loss of "flow," or switching contexts, and no need to remember some obtuse shortcut.
-
-To work, though, the search must be *fast*. Fortunately there's a great little command line app that offers lightning-fast search: [`fzf`](https://github.com/junegunn/fzf), a command line "fuzzy" finder. `fzf` is a find-as-you-type search interface that's incredibly fast, especially when you pair it with [`ripgrep`](https://github.com/BurntSushi/ripgrep) instead of `find`.
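-
-For file searches, that pairing is a single environment variable in your shell config, something like:
-
-~~~.bash
-# have fzf use ripgrep to list files instead of find
-export FZF_DEFAULT_COMMAND='rg --files --hidden --glob "!.git/*"'
-~~~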
-
-I already use `fzf` as a DIY application launcher, so I thought why not use it to search for snippets? This way I can keep my snippets in a simple text file, parse them into an array, pass that to `fzf`, search, and then pass the selected result on to the clipboard.
-
-I combined Alacritty, a Python script, `fzf`, `sed`, and some Sway shortcuts to make a snippet manager I can call up and search through with a single keystroke.
-
-###Python
-
-It may be possible to do this entirely in a bash script, but I'm not that great at bash scripting so I did the text parsing in Python, which I know well enough.
-
-I wanted to keep all my snippets in a single text file, with the option to do multiline snippets for readability (in other words I didn't want to be writing `\n` characters just because that's easier to parse). I picked `---` as a delimiter because... no reason really.
-
-The other thing I wanted was the ability to use tags to simplify searching. Tags become a way of filtering searches. For example, all the snippets I use writing for Wired can be tagged wired and I can see them all in one view by typing "wired" in `fzf`.
-
-So my snippet files looks something like this:
-
-````
-<div class="cluster">
- <span class="row-2">
- </span>
-</div>
-tags:html cluster code
-
----
-```python
-
-```
-tags: python code
-
----
-````
-
-Another goal, which you may notice above, is that I didn't want any format constraints. The snippets can take just about any ASCII character. The tags line can have spaces or no spaces, commas, semicolons -- it doesn't matter, because either way `fzf` can search it, and the tags will be stripped out before the snippet hits the clipboard.
-
-Here's the script I cobbled together to parse this text file into an array I can pass to `fzf`:
-
-~~~python
-import os
-import re
-
-# expanduser so the ~ in the path actually resolves; open() won't do it for us
-with open(os.path.expanduser('~/.textsnippets.txt'), 'r') as f:
-    data = f.read()
-snips = re.split("---", data)
-for snip in snips:
-    # drop the first and last lines of each chunk (the blank padding around ---)
-    s = '\n'.join(snip.split('\n')[1:-1])
-    # repr() preserves newlines as literal \n; strip the quotes it wraps around the string
-    print(repr(s.strip()).strip('\''))
-~~~
-
-All this script does is open a file, read the contents into a variable, split those contents on `---`, strip any extra space and then return the results to stdout.
-
-The only tricky part is the last line. We need to preserve the linebreaks and to do that I used [`repr`](https://docs.python.org/3.8/library/functions.html#repr), but that means Python literally prints the string, with the single quotes wrapping it. So the last `.strip('\'')` gets rid of those.
-
-I saved that file to `~/bin` which is already on my `$PATH`.
-
-###Shell Scripting
-
-The next thing we need to do is call this script, and pass the results to `fzf` so we can search them.
-
-To do that I just wrote a bash script.
-
-~~~.bash
-#!/usr/bin/env bash
-selected="$(python ~/bin/snippet.py | fzf -i -e)"
-# strip the tags line and any trailing blank line before sending to wl-copy
-echo -e "$selected" | sed -e 's/tags:.*$//' -e '$d' | wl-copy
-~~~
-
-What happens here is the Python script gets called, parses the snippets file into chunks of text, and then that is passed to `fzf`. After experimenting with some `fzf` options I settled on case-insensitive, exact match (`-i -e`) searching as the most efficient means of finding what I want.
-
-Once I search for and find the snippet I want, that selected bit of text is stored in a variable called, creatively, `selected`. The next line prints that variable, passes it to `sed` to strip out the tags, along with any space after that, and then sends that snippet of text to the clipboard via wl-copy.
-
-I saved this file in a folder on my `PATH` (`~/bin`) and called it `fzsnip`. At this point I can run `fzsnip` in a terminal and everything works as I'd expect. As a bonus I have my snippets in a plain text file I can access to copy and paste snippets on my phone, tablet, and any other device where I can run [NextCloud](https://nextcloud.com/).
-
-That's cool, but on my laptop I don't want to have to switch to the terminal every time I need to access a snippet. Instead I invoke a small terminal window wherever I am. To do that, I set up a keybinding in my Sway config file like this:
-
-~~~.bash
-bindsym $mod+s exec alacritty --class 'smsearch' --command bash -c 'fzsnip | xargs -r swaymsg -t command exec'
-~~~
-
-This is very similar to how I launch apps and search passwords, which I detailed in my post on [switching from i3 to Sway](/src/guide-to-switching-i3-to-sway). The basic idea is whatever virtual desktop I happen to be on, launch a new instance of [Alacritty](https://github.com/alacritty/alacritty), with the class `smsearch`. Assigning that class gives the new instance some styling I'll show below. The rest of the line fires off that shell script `fzsnip`. This allows me to hit `Alt+s` and get a small terminal window with a list of my snippets displayed. I search for the name of the snippet, hit return, the Alacritty window closes and the snippet is on my clipboard, ready to paste wherever I need it.
-
-This line in my Sway config file styles the window class `smsearch`:
-
-~~~.bash
-for_window [app_id="^smsearch$"] floating enable, border none, resize set width 80 ppt height 60 ppt, move position 0 px 0 px
-~~~
-
-That puts the window in the upper left corner of the screen and sizes it relative to the screen (80 percent of the width and 60 percent of the height as written above). You can adjust those numbers to suit your tastes.
-
-If you don't use Alacritty, adjust the command to use the terminal app you prefer. If you don't use Sway, you'll need to use whatever system-wide shortcut tool your window manager or desktop environment offers. Another possibility is [Guake](https://github.com/Guake/guake), which might be able to do this for GNOME users, but I've never used it.
-
-###Conclusion
-
-I hope this gives anyone searching for a way to replace Autokey on Wayland some ideas. If you have any questions or run into problems, don't hesitate to drop a comment below.
-
-Is it as nice as Autokey? I actually like this far better now. I often had trouble remembering my Autokey shortcuts, now I can search instead.
-
-As I said above, if I were a better bash scripter I'd get rid of the Python file and just use a bash loop. That would make it easy to wrap the whole thing in a neat package and distribute it, but as it is it has too many moving parts to be more than some cut-and-paste code.
-
-####Shoulders Stood Upon
-
-- [Using `fzf` instead of `dmenu`](https://medium.com/njiuko/using-fzf-instead-of-dmenu-2780d184753f) -- This is the post that got me thinking about ways I could use tools I already use (`fzf`, Alacritty) to accomplish more tasks.
diff --git a/src/published/technology.txt b/src/published/technology.txt
deleted file mode 100644
index cb613cb..0000000
--- a/src/published/technology.txt
+++ /dev/null
@@ -1,56 +0,0 @@
-Sometimes people email me to ask how I make luxagraf. Here's how I do it: I write, take pictures and combine them into stories.
-
-I recognize that this is not particularly helpful. Or it is, I think, but it's not why people email me. They want to know about the tools I use. Which is fine. I guess. Consumerism! Yeah! Anyway, I decided to make a page I can just point people to, this one. There are no affiliate links and I'd really prefer it if you didn't buy any of this stuff because you don't need it. I don't need it. I could get by with less. I should get by with less.
-
-Still, for better or worse, here are the tools I use.
-
-### Notebook and Pen
-
-My primary "device" is my notebook. I don't have a fancy notebook. I do have several notebooks though. One is in my pocket at all times and is filled with illegible scribbles that I attempt to decipher later. The other is larger and is my sort of captain's log, though I don't write in it with the kind of regularity captains do. Or that I imagine captains do. Then I have other notebooks for specific purposes, meditation journal, etc.
-
-I'm not all that picky about notebooks; if they have paper in them I'm happy enough. But I could devote thousands and thousands of words to pens. For what seems like forever I was religiously devoted to the Uniball Roller Stick Pen in micro point, which I used to swipe from my dad's desk drawer back in high school. It's a lovely pen; I was gratified to note it was the pen of choice at the lawyer's office where we finalized the sale of our house. And yes, I totally took one.
-
-Once I bought a fancy pen from Japan that takes Parker ink refills, and it's my pen of choice. I can't remember the brand or anything, which sucks, because I'd love to get another.
-
-When that's not handy I use Uniball Vision pens, which also fill my two primary requirements in a pen: 1) it writes well 2) I can buy it almost anywhere for next to nothing.
-
-### Camera
-
-This is what everyone wants to know about. I use a Sony A7ii. It's a full frame mirrorless camera that happens to make it easy to use legacy film lenses. I bought it specifically because it's the only full frame digital camera available that lets me use the old lenses that I love. Without the old lenses I find the Sony's output to be a little digital for my tastes, though the RAW files from the A7ii have wonderful dynamic range, which was the other selling point for me.
-
-That said, it's not a cheap camera. You should not buy one. The Sony a6000 is very nearly as good and costs $500 ($400 on eBay). In fact, having tested dozens of cameras for Wired over the years I can say with some authority that the a6000 is the best value for money on the market, period -- doubly so if you want a cheap way to test out some older lenses.
-
-All of my lenses are old and manual focus, which I prefer to autofocus lenses. I like the fact that they're cheap too, but really the main appeal for me with old lenses was the far superior focusing rings. I grew up using all-manual-focus cameras. Autofocus was probably around by the time I picked up a camera, but I never had it. My father had (still has) a screw mount Pentax. I bought a Minolta with money from a high school job. Eventually I upgraded to a Nikon F3, which was my primary camera until 2004. While there are advantages to autofocus, and certainly modern lenses are much sharper in most cases, neither autofocus nor perfect edge-to-edge sharpness is significant for the type of photos I like to make.
-
-####lenses
-
-One thing about shooting manual lenses is that there are a ton of cheap manual lenses out there. I have seen amazing photos produced with $10 lenses. Learning to manually focus a lens is like opening a door into a secret world. A secret world where lenses are cheap. The net result of my foray into this world is that I have a ridiculous collection of lenses. And we live in a bus; lord knows what I'd have if we had more space.
-
-That said, about 90% of the time I have a very fast, relatively lightweight Canon FD 50 f1.4 on the camera. I love this lens. I love love love it. The other fifty I love love love is my Minolta 50 f/2, which is the slow one in the Minolta 50 family, but man is it a great lens. I bought it for $20.
-
-At the wide end of the spectrum I have another Canon, the FD 20mm f2.8. For portraits I use the Minolta MD 100 f2 and an Olympus M Zuiko 100 f/2.8. I also have this crazy Russian fisheye thing I bought one night on eBay after I'd been drinking. It's pretty hilariously bad at anything less than f8, but it's useful for shooting in small spaces, like the inside of the bus.
-
-I also have, cough, a few other lenses that I don't use very often or that I use for a while and pass along via eBay. Right now I have a Minolta 58 f/1.4 that I really like, a Pentax 28 f/3.5 that doesn't do much for me (28mm just isn't how I see the world) and a Canon 35 f/1.8 that I like a lot but that won't mount on any adapter I have. I need to get it serviced.
-
-
-### laptop
-
-My laptop is a Lenovo x250 I bought off eBay for $300. I upgraded the hard drives and put in an HD screen, which brought the total outlay to $550. That's really way too much to spend on a computer these days, but my excuse is that I make money using it.
-
-Why this particular laptop? It's small and the battery lasts quite a while (like 15 hours when I'm writing, more like 12 when editing photos). It also has a removable battery and can be upgraded by the user. I packed in almost 3TB of disk storage, which is nice. It does make a high-pitched whining noise that drives me crazy whenever I'm in a quiet room with it, but since I mostly use it outdoors, sitting around our camps, this is rarely an issue.
-
-Still, like I said, I could get by with less. I should get by with less.
-
-The laptop runs Linux because everything else sucks a lot more than Linux. Which isn't to say that I love Linux; it could use some work too. But it sucks a whole lot less than the rest. I run Arch Linux, which I have written about elsewhere. The main appeal of Arch for me is that once I set it up I never have to think about it again. Because I test software for a living I also have a partition that hosts a revolving door of other Linux distros that I use from time to time, but never when I want to get work done. When I want to get work done, I use Arch.
-
-Because I am hopelessly bored with technology, I stick mainly with simple, text-based applications. Almost everything I do is done inside a single terminal (urxvt) window running tmux, which gives me four tabs. I write in Vim. For email I use mutt. I read RSS feeds with newsbeuter and I listen to music via mpd. I also have a command line calculator and a locally-hosted dictionary that I use pretty regularly.
-
-I do use a few GUI apps: Tor for browsing the web, Darktable and GIMP for editing photos, Stellarium for learning more about the night sky, and LibreOffice Calc for spreadsheets. That's about it.
-
-### ithing/tablet/drone/wrist tracking device thingy
-
-Yeah I don't have any of those. I'm one of those people. I pay for everything in cash too. Fucking weirdo is what I am. I told you you didn't want to know how I make stuff.
-
-<hr />
-
-So there you have it, my technology stack. I am of course always looking for ways to get by with less technology, but I think, after years of getting rid of stuff, I've reached something close to ideal.