diff options
Diffstat (limited to 'src')
58 files changed, 0 insertions, 5274 deletions
diff --git a/src/backup-2.txt b/src/backup-2.txt deleted file mode 100644 index 15ac8c9..0000000 --- a/src/backup-2.txt +++ /dev/null @@ -1,15 +0,0 @@ -I wrote previously about how I back up database files automatically. The key word there being "automatically". If I have to remember to make a backup the odds of it happening drop to zero. So I automate as I described, but that's not really what making backups entails, or at least that's not the point for me as a writer. - -The point for me as a writer is that I don't want to lose these words. - -In some cases "automate" can mean building workflows that spawn redundant copies. For example, right now I'm typing these words in Vim and will save the file in a Git repo that will get pushed to a server. Later the containing folder will be backed up on S3 plus a couple of local drives. - -It's unlikely I will lose these words outright. - -However, once I'm done writing I'll cut and paste this piece into my Django app and hit a publish button that will write the results out to the flat HTML file you're actually reading right now (this file is another backup). When I plugged it into the database I gave this article a relationship with other objects in that database. So even though the redundant backups built into my workflow make a total data loss unlikely, without the database I would lose the relationships I've created. - -Which is just to illustrate what you already know: database backups are important and need to happen regularly. - - diff --git a/src/console-based-web-browsing-w3m.txt b/src/console-based-web-browsing-w3m.txt deleted file mode 100644 index f548555..0000000 --- a/src/console-based-web-browsing-w3m.txt +++ /dev/null @@ -1,17 +0,0 @@ -Console-Based Web Browsing With W3M - -I've been browsing the web with a 27-year-old, text-only browser for a couple of months now and it has made me like the web again. I don't ever want to go back to a graphical browser.
- -The web is a steaming pile of JavaShit though, so I do from time to time have to open pages in a graphical browser. But I much prefer w3m and I always start there now. If the page I'm after works, I am happy; if it doesn’t I get to decide: begrudgingly open it in a graphical browser or just skip it. It’s remarkable how often the second option is the one I choose. It’s made me question what all I do on the web; most of it turns out to be unimportant and unnecessary. - -But it isn't the lack of JavaScript that makes browsing with w3m great. That does help clear up clutter, but it's really an entirely different experience, one I love more the more I use it. - -With w3m I find myself focused on a single task in a way that I am not in Vivaldi. With w3m I get the information I want faster. I can open an entire rendered page in Vim with a single keystroke, and then I can copy and paste things to my notes or just save the whole page as text. When I'm done I quit and move on to something different. Opening w3m is so fast I don't keep it open. I use it when I need it and then I close it. - -This, I've come to think, is the key to eliminating distractions, staying focused and getting actual work done: close the browser when you don't need it. You don't think of an open web browser as multitasking, but it is, and that's a recipe for distraction. Unitasking is the way forward most of the time: when you're done with the page, close the browser. - -This is very cumbersome with a graphical browser, which has to boot up a ton of stuff and then load all those open tabs you have, and it ends up taking long enough that only a crazy person would close it when they were done with a single task. It'd be like shutting off your laptop every time you closed the lid. - -With w3m though this is exactly what I do and I swear I waste less time because of it. Often I even close out the terminal window that it was in, because foot, my terminal emulator, is pretty speedy too.
Then I find myself staring at my desktop, which happens to be a somber image I took a long time ago in the swamps of Florida, and it always makes me want to close my laptop and go outside, which is why I use it as my desktop background. - -What does this have to do with w3m? Very little I suppose, other than to say: if you're finding yourself wasting time browsing the internet for hours, try using w3m. You might like it, and I can almost guarantee you'll save yourself some time that you'd otherwise waste on pointless internet things. Go make something instead. Or give someone a hug or a high five. diff --git a/src/getting-started-maas.txt b/src/getting-started-maas.txt deleted file mode 100644 index 74622a5..0000000 --- a/src/getting-started-maas.txt +++ /dev/null @@ -1,48 +0,0 @@ -Canonical's Metal As A Service (MAAS) allows you to deploy and manage physical hardware in the same way you can deploy and manage virtual machines. This means you can configure, deploy and manage bare metal servers just like you would VMs running on Amazon AWS or Microsoft Azure. MAAS gives you the management tools that have made the cloud popular, but with the additional benefits of physical hardware. - -To use MAAS you'll need a server to run the management software and at least one server which can be managed with a BMC (once MAAS is installed you can select different BMC power types according to your hardware setup). - -Canonical recommends letting the MAAS server handle DHCP for the network the managed machines are connected to, but if your current infrastructure requires a different approach to DHCP there are other options. The MAAS documentation has more details on [how DHCP works in MAAS](https://docs.maas.io/2.6/en/installconfig-network-dhcp) and how you can adapt it to your current setup. - -To install MAAS, first download Ubuntu Server 18.04 LTS and follow the step-by-step installation instructions to set up Ubuntu on your server.
Once you have Ubuntu 18.04 up and running, you can install MAAS. - -To get the latest development release of MAAS, use the [maas/next PPA](https://launchpad.net/~maas/+archive/ubuntu/next). First add the PPA, then update and install. - -~~~~console -sudo add-apt-repository ppa:maas/next -sudo apt update -sudo apt install maas -~~~~ - -Once MAAS is installed, you'll need to initialize it and create an admin user. - -~~~~console -sudo maas init -~~~~ - -The init command will ask you to create a username and password for the web-based GUI. You can optionally import your SSH keys as well. - -Once the installation is done you can log in to the web-based MAAS GUI by pointing your browser to http://<your.maas.ip>:5240/MAAS/. - -<img src="maas-01.png" alt="MAAS web UI login screen" /> - -Once you log in to the MAAS web UI you'll be presented with the MAAS configuration panel where you can set the region name, configure a DNS forwarder for domains not managed by MAAS, as well as configure the images and architectures you want available for MAAS-managed machines. - -<img src="maas-02.png" alt="MAAS web UI initial setup screen" /> - -For now you can accept the defaults and click continue. If you did not add your SSH keys in the initialization step, you'll need to upload them now. Then click "Go to Dashboard" to continue. - -<img src="maas-04.png" alt="MAAS web UI SSH keys screen" /> - -The last step is to configure DHCP. When the MAAS Dashboard loads it will alert you that "DHCP is not enabled on any VLAN." To set up DHCP, click the "Subnets" menu item and then click the VLAN where you want to enable DHCP. - -<img src="maas-07.png" alt="MAAS web UI Subnet screen" /> - -This will bring up a new page where you can configure your DHCP subnet, start and end IP addresses, and Gateway IP. You can also decide how MAAS handles DHCP, whether directly from the rack controller or relayed to another VLAN. If you don't want MAAS to manage DHCP you can disable it here.
- -<img src="maas-08.png" alt="MAAS web UI Subnet screen" /> - -To set up your first MAAS instances with MAAS handling DHCP, click the "Configure MAAS-managed DHCP" button. - -<img src="maas-09.png" alt="MAAS web UI Subnet screen" /> - diff --git a/src/gitea.txt b/src/gitea.txt deleted file mode 100644 index d45c469..0000000 --- a/src/gitea.txt +++ /dev/null @@ -1,229 +0,0 @@ -I've never liked hosting my git repos on someone else's servers. GitHub especially is not a company I'd do business with, ever. I do have a repo or two hosted over at GitLab because those are projects I want to be easily available to anyone. But I store almost everything in git -- notes, my whole documents folder, all my code projects, all my writing, pretty much everything is in git -- and I like to keep all that private and on my own server. - -For years I used [Gitlist](http://gitlist.org/) because it was clean, simple, and did 95 percent of what I needed in a web-based interface for my repos. But Gitlist is abandonware at this point and broken if you're using PHP 7.2. There are a few forks that [patch it](https://github.com/patrikx3/gitlist), but it's copyrighted to the original dev and I don't want to depend on illegitimate forks for something so critical to my workflow. Then there's Gitlab, which I like, but the system requirements are ridiculous. - -Some searching eventually led me to Gitea, which is lightweight, written in Go and has everything I need. - -Here's a quick guide to getting Gitea up and running on your Ubuntu 18.04 -- or similar -- VPS. - -### Set up Gitea - -The first thing we're going to do is isolate Gitea from the rest of our server; running it under a different user seems to be the standard practice. Installing Gitea via the Arch User Repository will create a `git` user, so that's what I used on Ubuntu 18.04 as well.
- -Here's a shell command to do that: - -~~~~console -sudo adduser --system --shell /bin/bash --group --disabled-password --home /home/git git -~~~~ - -This is pretty much a standard adduser command like you'd use when setting up a new VPS; the only difference is that we've added the `--disabled-password` flag so you can't actually log in with it. That's a bit more secure, and while we will use this user to authenticate over SSH, we'll do so with a key, not a password. - -Now we need to grab the latest Gitea binary. At the time of writing that's version 1.5.2, but be sure to check the [Gitea downloads page](https://dl.gitea.io/gitea/) for the latest version and adjust the commands below to work with that version number. Let's download the Gitea binary and then we'll verify the signing key. Verifying keys is very important when working with binaries since you can't see the code behind them[^1]. - -~~~~console -wget -O gitea https://dl.gitea.io/gitea/1.5.2/gitea-1.5.2-linux-amd64 -gpg --keyserver pgp.mit.edu --recv 0x2D9AE806EC1592E2 -wget https://dl.gitea.io/gitea/1.5.2/gitea-1.5.2-linux-amd64.asc -gpg --verify gitea-1.5.2-linux-amd64.asc gitea -~~~~ - -A couple of notes here: GPG should say the keys match, but then it should also warn that "this key is not certified with a trusted signature!" That means, essentially, that this binary could have been signed by anybody. That should make you nervous, but at least we know it wasn't tampered with in transit[^1]. - -Now let's make the binary executable and test it to make sure it's working: - -~~~~console -chmod +x gitea -./gitea web -~~~~ - -You can stop Gitea with `Ctrl+C`. Let's move the binary to a more traditional location: - -~~~~console -sudo cp gitea /usr/local/bin/gitea -~~~~ - -The next thing we're going to do is create all the directories we need.
- -~~~~console -sudo mkdir -p /var/lib/gitea/{custom,data,indexers,public,log} -sudo chown git:git /var/lib/gitea/{data,indexers,log} -sudo chmod 750 /var/lib/gitea/{data,indexers,log} -sudo mkdir /etc/gitea -sudo chown root:git /etc/gitea -sudo chmod 770 /etc/gitea -~~~~ - -That last line should make you nervous; that's too permissive for a configuration directory. But don't worry: as soon as we're done setting up Gitea we'll change the permissions on that directory and the config file inside it. - -Before we do that though let's create a systemd service file to start and stop Gitea. The Gitea project has a service file that will work well for our purposes, so let's grab it, make a couple of changes and then we'll add it to our system: - -~~~~console -wget https://raw.githubusercontent.com/go-gitea/gitea/master/contrib/systemd/gitea.service -~~~~ - -Now open that file and uncomment the line `After=postgresql.service` so that Gitea starts after PostgreSQL is running. The resulting config file should look like this: - -~~~~ini -[Unit] -Description=Gitea (Git with a cup of tea) -After=syslog.target -After=network.target -#After=mysqld.service -After=postgresql.service -#After=memcached.service -#After=redis.service - -[Service] -# Modify these two values and uncomment them if you have -# repos with lots of files and get an HTTP error 500 because -# of that -### -#LimitMEMLOCK=infinity -#LimitNOFILE=65535 -RestartSec=2s -Type=simple -User=git -Group=git -WorkingDirectory=/var/lib/gitea/ -ExecStart=/usr/local/bin/gitea web -c /etc/gitea/app.ini -Restart=always -Environment=USER=git HOME=/home/git GITEA_WORK_DIR=/var/lib/gitea -# If you want to bind Gitea to a port below 1024 uncomment -# the two values below -### -#CapabilityBoundingSet=CAP_NET_BIND_SERVICE -#AmbientCapabilities=CAP_NET_BIND_SERVICE - -[Install] -WantedBy=multi-user.target -~~~~ - -Now we need to move the service file to somewhere systemd expects it and then start and enable the service so Gitea will launch
automatically when the server boots up. - -~~~~console -sudo cp gitea.service /etc/systemd/system/ -sudo systemctl enable gitea -sudo systemctl start gitea -~~~~ - -There you have it: Gitea is installed, running, and will start automatically whenever we restart the server. Now we need to set up PostgreSQL and then Nginx to serve up our Gitea site to the world. Or at least to us. - -### Set up PostgreSQL and Nginx - -Gitea needs a database to store all our data in; I use PostgreSQL. You can also use MySQL, but you're on your own there. Install PostgreSQL if you haven't already: - -~~~~console -sudo apt install postgresql -~~~~ - -Now let's create a new user and database for Gitea (the `--pwprompt` flag makes createuser ask for a password, which the Gitea installer will need later): - -~~~~console -sudo su postgres -createuser --pwprompt gitea -createdb gitea -O gitea -~~~~ - -Exit the postgres user shell by hitting `Ctrl+D`. Now let's set up Nginx to serve our Gitea site. - -For the next part you'll need a domain name. I use a subdomain, git.mydomain.com, but for simplicity's sake I'll refer to `mydomain.com` for the rest of this tutorial. Replace `mydomain.com` in all the instructions below with your actual domain name. - -~~~~console -sudo apt update -sudo apt install nginx -~~~~ - -Now we need to create a config file for this domain. By default Nginx will look for config files in `/etc/nginx/sites-enabled/`, so the config file we'll create is: - -~~~~console -sudo nano /etc/nginx/sites-enabled/mydomain.com.conf -~~~~ - -Here's what that file looks like: - -~~~~nginx -server { - listen 80; - listen [::]:80; - server_name mydomain.com; - - location / { - proxy_pass http://localhost:3000; - proxy_set_header X-Real-IP $remote_addr; - } -} -~~~~ - -The main line here is the `proxy_pass` bit, which takes all requests and sends them to Gitea, which is listening on `localhost:3000` by default. You can change that if you have something else that conflicts with it, but you'll need to change it here and in the service file that we used to start Gitea.
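One addition worth considering here (my assumption about a common failure mode, not something the original setup requires): nginx's default request body limit is 1 MB, so pushing a large commit over HTTPS can fail with a 413 error. Raising `client_max_body_size` in the server block avoids that; the `50m` below is an arbitrary example value.

~~~~nginx
server {
    # ...the listen/server_name/location directives from above...

    # Allow larger git pushes through the proxy (nginx defaults to 1m).
    client_max_body_size 50m;
}
~~~~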
- -The last step is to add an SSL cert to our site so we can clone over https (and SSH if you keep reading). I have another tutorial on setting up [Certbot for Nginx on Ubuntu](/src/certbot-nginx-ubuntu-1804). You can use that to get Certbot installed and auto-renewing certs. Then all you need to do is run: - -~~~~console -sudo certbot --nginx -~~~~ - -Select your Gitea domain, follow the prompts, and when you're done you'll be ready to set up Gitea. - -### Setting up Gitea - -Point your browser to https://mydomain.com/install and go through the Gitea setup process. That screen looks like this, and you can use these values, except for the domain name (and be sure to enter the password you used when we created the `gitea` user for PostgreSQL). - -One note: I strongly recommend checking the "disable self registration" box, which means you'll need to create an administrator account at the bottom of the page, but will stop anyone else from being able to sign up. - -<img src="images/2018/gitea-install_FAW0kIJ.jpg" id="image-1706" class="picwide" /> - -Okay, now that we've got Gitea initialized it's time to go back and change the permissions on the directories that we set up earlier. - -~~~~console -sudo chmod 750 /etc/gitea -sudo chmod 644 /etc/gitea/app.ini -~~~~ -Okay, now you can create your first repo. Click the little button next to the repositories menu on the right side of your Gitea dashboard and that'll walk you through creating your first repo.
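An aside before cloning (my addition; `mydomain.com` and `giteausername/reponame` are placeholders): if you have an existing local repo that you want to point at a Gitea repo, `git remote set-url` does the job without hand-editing any files.

~~~~console
# Point an existing repo's "origin" remote at the new Gitea server.
git remote set-url origin https://mydomain.com/giteausername/reponame.git

# Verify the remote now points where you expect:
git remote -v
~~~~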
Once that's done you can clone that repo with: - -~~~~console -git clone https://mydomain.com/giteausername/reponame.git -~~~~ - -Now if you have an existing repo that you want to push to your new Gitea repo, just edit the `.git/config` file to make your Gitea repo the new URL, e.g.: - -~~~~ini -[remote "origin"] - url = https://mydomain.com/giteausername/reponame.git - fetch = +refs/heads/*:refs/remotes/origin/* -~~~~ - -Now do this: - -~~~~console -git push origin master -~~~~ - -### Setting up SSH - -Working with git over https is pretty good, but I prefer the more secure method of SSH. To get that working we'll need to add our SSH key to Gitea. That means you'll need an SSH key. If you don't have one already, open the terminal on your local machine and issue this command: - -~~~~console -ssh-keygen -o -a 100 -t ed25519 -~~~~ - -That will create a key named id_ed25519 in the directory `.ssh/`. If you want to know where that command comes from, read [this article](https://blog.g3rt.nl/upgrade-your-ssh-keys.html). - -Now we need to add that key to Gitea. First open the file `.ssh/id_ed25519.pub` and copy the contents to your clipboard. Now in the Gitea web interface, click on the user menu link at the upper right and select "settings". Then across the top you'll see a bunch of tabs. Click the one that reads "SSH / GPG Keys". Click the add key button, give your key a name and paste in the contents of the key. - -Depending on how your VPS was set up, you may need to add the `git` user to your sshd config. Open `/etc/ssh/sshd_config` and look for a line that reads something like this: - -~~~~console -AllowUsers myuser myotheruser git -~~~~ - -Add git to the list so you'll be able to authenticate with the git user over SSH.
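One optional convenience before testing (my addition, and the host name and key path are assumptions): an entry in your local `~/.ssh/config` makes sure the right key is offered and shortens clone URLs.

~~~~
Host gitea
    HostName mydomain.com
    User git
    IdentityFile ~/.ssh/id_ed25519
~~~~

With that in place, `git clone gitea:giteausername/reponame.git` should work the same as the full `ssh://` URL.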
Now test SSH cloning with this line, substituting your SSH clone URL: - -~~~~console -git clone ssh://git@mydomain.com/giteausername/reponame.git -~~~~ - -Assuming that works, you're all set: Gitea is working and you can create all the repos you need. If you have any problems you can drop a comment in the form below and I'll do my best to help you out. - -[^1]: You can compile Gitea yourself if you like, there are [instructions on the Gitea site](https://docs.gitea.io/en-us/install-from-source/), but be forewarned it uses quite a bit of RAM to build. diff --git a/src/how-use-websters-1913-dictionary-linux-edition.txt b/src/how-use-websters-1913-dictionary-linux-edition.txt deleted file mode 100644 index 12898f3..0000000 --- a/src/how-use-websters-1913-dictionary-linux-edition.txt +++ /dev/null @@ -1,39 +0,0 @@ -I suspect the overlap of Linux users and writers who care about the Webster's 1913 dictionary is vanishingly small. Quite possibly just me. But in case there are others, I am committing these words to the internet. Plus I will need them in the future when I forget how I set this up. - -Here is how you install, set up, and configure the command line app `sdcv` so that you too can have the one true dictionary at your fingertips in the command line app of your choosing. - -But first, about the one true dictionary. - -The one true dictionary is debatable I suppose. Feel free to debate. I have a "compact" version of the Oxford English Dictionary sitting on my desk and it is weighty both literally and figuratively in ways that the Webster's 1913 is not, but any dictionary that deserves consideration as your one true dictionary ought to do more than spit out dry, banal collections of words.
- -John McPhee writes eloquently about the power of a dictionary in his famous New Yorker essay, *[Draft No 4](https://www.newyorker.com/magazine/2013/04/29/draft-no-4)*, which you can find in paper in [the compilation of essays by the same name](https://bookshop.org/books/draft-no-4-on-the-writing-process/9780374537975). Fellow New Yorker writer James Somers has [a brilliant essay on the genius of McPhee's dictionary](http://jsomers.net/blog/dictionary) that also explains how you can get it installed on your Mac. - -Remarkably, the copy of the Webster's 1913 that Somers put up is still available. So go grab that. - -However, while his instructions are great for macOS users, they don't work on Linux and moreover they don't offer access from the shell. I write in Vim, in a tmux session, so I wanted an easy way to look things up without switching apps. - -The answer is named `sdcv`. It is, in the words of its man page, "a simple, cross-platform text-based utility for working with dictionaries in StarDict format." That last bit is key, because the Webster's 1913 file you downloaded from Somers is in StarDict format. I installed `sdcv` from the Arch Community repository, but it's in Debian and Ubuntu's official repos as well. - -Once `sdcv` is installed you need to unzip that dictionary.zip file you should have grabbed from Somers' post. That will give you four files. All we need to do now is move them somewhere `sdcv` can find them. By default that's `$XDG_DATA_HOME/stardict/dic`, although you can customize that by adding the environment variable `STARDICT_DATA_DIR` to your `.bashrc`. I keep my dictionaries in a `~/bin/dict` folder so I just drop this in `.bashrc`: - -~~~bash -export STARDICT_DATA_DIR="$HOME/bin/dict" -~~~ - -### How to Look Up Words in Webster's 1913 from the Command Line - -To use your new one true dictionary, all you need to do is type `sdcv` and the word you'd like to look up.
Add a leading '/' before the word and `sdcv` will use a fuzzy search algorithm, which is handy if you're unsure of the spelling. Search strings can use `?` and `*` for wildcard matching. I have never used either. - -My use is very simple. I wrote a little Bash function that looks like this: - -~~~bash -function d() { - sdcv "$1" | less -} -~~~ - -With this I type `d search_term` and get a paged view of the Webster's 1913 entry for that word. Since I always write in a tmux split, I just move my cursor to the blank split, type my search term and I can page through and read it while considering the context in the document in front of me. - -### But I Want a GUI - -Check out [StarDict](http://www.huzheng.org/stardict/); there are versions for Linux, Windows, and macOS, as well as source code. diff --git a/src/indie-web-co.txt b/src/indie-web-co.txt deleted file mode 100644 index 9924aae..0000000 --- a/src/indie-web-co.txt +++ /dev/null @@ -1,36 +0,0 @@ -Here's a disturbing factoid: **the world’s ten richest men have made $540 billion so far during the pandemic.** Amazon founder Jeff Bezos' worth went up so much between March and September 2020 that he could afford to give all 876,000 Amazon employees a $105k bonus and still have as much money as he had before the pandemic started ([source](https://oxfamilibrary.openrepository.com/bitstream/handle/10546/621149/bp-the-inequality-virus-summ-250121-en.pdf)). - -What does that have to do with code? Well, some of my code used to run on Amazon services. Some of my money is in Jeff Bezos' pocket. I was contributing to the economic inequality that Amazon enables. I decided I did not want to do that. - -But more than not wanting to contribute to Amazon's bottom line, I *wanted* to contribute to someone's bottom line, the emphasis being on *someone*. I wanted to redirect the money I was already spending to small businesses, businesses that need the revenue. - -We can help each other instead of Silicon Valley billionaires.
- -Late last year at [work](https://www.wired.com/author/scott-gilbertson/) we started showcasing some smaller, local businesses in affiliate links. It was a pretty simple idea: find some small companies in our communities making worthwhile things and support them by telling others. - -One woman whose company I linked to called it "life-changing." It's so strange to me that an act as simple as pasting some HTML into the right text box can change someone's life. That's amazing. I bring this up not to toot my own horn, but to say that every day there are ways in which you can use the money you spend to help real people trying to make a living. If you've ever charged a little for a web service you probably know how big a deal even one more customer is. I want to be that one more customer for someone. - -My online expenses aren't much: just email, web hosting, storage space, and domain registration. I wanted to find some small business replacements for the megacorps I was using. - -I did a ton of research. Web hosting and email servers are tricky; these are critical things that run my business and my wife's business. It's great to support small businesses, but above all the services have to *work*. Luckily for us the forums over at [Low End Talk](https://www.lowendtalk.com/) are full of ideas and long-term reviews of exactly these sorts of businesses -- small companies offering cheap web hosting, email hosting, and domain registration. - -After a few late nights digging through threads, finding the highlights, and then more research elsewhere on the web, I settled on [BuyVM](https://buyvm.net/) for my web hosting. The owner Francisco is very active on Low End Talk and, in my experience for the last three months, is providing a great service *for less* than I was paying at Vultr. It was so much less I was able to get a much larger block storage disk and have more room for my backups, which eliminated my need for Amazon S3/Glacier as well[^2].
I highly recommend BuyVM for your VPS needs. - -For email hosting I actually was already using a small company, [Migadu](https://www.migadu.com/). I liked their service, and I still recommend them if the pricing works for you, but they discontinued the plan I was on and I would have had to move to a more expensive plan to retain the same functionality. - -I jumped ship from Migadu during Black Friday because another small email provider I had heard good things about was having a deal: $100 for life. At that price, so long as it stays in business for 2 years, I won't lose any money. I moved my email to [MxRoute](https://mxroute.com/) and it has been excellent. I liked it so much I bought my parents a domain and freed them from Google. Highly recommend MxRoute. - -That left just one element of my web stack at Amazon: domain registration. I'll confess I gave up here. Domain registration is not a space filled with small companies (which to me means 2-8 people). I gave up and complained to a friend, who said: try harder. So I did and discovered [Porkbun](https://porkbun.com/), the best domain registrar I've used in the past two decades. I moved my small collection of domains over at the beginning of the year and it was a seamless, super-smooth transition. It lives up to its slogan: "an oddly satisfying experience." - -And those are my recommendations for small businesses you can support *and* still have a great technology stack: [Porkbun](https://porkbun.com/) (domain registration), [MxRoute](https://mxroute.com/) (email hosting), and [BuyVM](https://buyvm.net/) (VPS hosting). - -The thing I didn't replace was AWS CloudFront. I don't have enough traffic to warrant a CDN, so I just dropped it. If I ever change my mind about that, based on my research, I'll go with [KeyCDN](https://www.keycdn.com/pricing), or possibly [Hostry](https://hostry.com/products/cdn/). - -I also haven't found a reliable replacement for SES, which I use to send my newsletters.
I wish Sendgrid would spin off a company for non-transactional email, but I don't see that happening. I could write another 5,000 words on how the big email providers totally, purposefully fucked up the best distributed communication system around. But I will spare you. - -The point is, these are three small companies providing useful services we developers need. If you're feeling like you'd rather your money went to people trying to make cool, useful stuff, rather than massive corporations, give them a try. If you have other suggestions drop them in the comments and maybe I can put together some sort of larger list. - -[Note: none of these links are affiliate links, just services I actually use and therefore recommend.] - -[^1]: This is something I'd like to do more; unfortunately there are no cottage industries for most of the things I write about (cameras, laptops, etc.). Still, you do what you can I guess. -[^2]: I have a second cloud-based backup stored in Backblaze's B2 system. Backblaze is not a small company by any means, but it's one that, from the research I've been able to do, seems ethically run and about as decent as a corporation can be these days. diff --git a/src/kindle-hacking.txt b/src/kindle-hacking.txt deleted file mode 100644 index 064c700..0000000 --- a/src/kindle-hacking.txt +++ /dev/null @@ -1,16 +0,0 @@ -links: - -[Installing ADB and Fastboot on Linux & Device Detection "Drivers"](https://forum.xda-developers.com/android/general/guide-installing-adb-fastboot-linux-adb-t3478678) - -You need to be on 6.3.1.2 firmware: -[Fire HD 8 2018 (karnak) amonet-3](https://forum.xda-developers.com/hd8-hd10/orig-development/unlock-fire-hd-8-2018-karnak-amonet-3-t3963496/page52) - -[Download 6.3.1.2 firmware](https://fireos-tablet-src.s3.amazonaws.com/LlO8A9g4Q6ugQCylaeqWBWxYBb/update-kindle-Fire_HD8_8th_Gen-NS6312_user_1852_0002517056644.bin) - -2. Download the Amazon firmware above and keep it where you can flash it. -3.
Boot into recovery (Volume Down + Power at the same time) -4. Select "adb sideload" or whatever it says using your volume keys and press the power button to select -5. Now adb sideload <firmware>.bin - -https://www.youtube.com/watch?v=sN6PphcI6XQ - diff --git a/src/mutt-help.txt b/src/mutt-help.txt deleted file mode 100644 index bddfb8e..0000000 --- a/src/mutt-help.txt +++ /dev/null @@ -1,5 +0,0 @@ -To delete messages matching a pattern: `D <text>` - -That will mark them for deletion; then you press tab to actually move them to the trash (or delete them, depending on how you have mutt set up). - - diff --git a/src/old-no-longer-pub/2013-09-25_writing-in-the-open.txt b/src/old-no-longer-pub/2013-09-25_writing-in-the-open.txt deleted file mode 100644 index 6a9e33f..0000000 --- a/src/old-no-longer-pub/2013-09-25_writing-in-the-open.txt +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: Writing in the Open -pub_date: 2013-09-25 14:14:55 -slug: /blog/2013/09/writing-in-the-open - ---- - -Whew! Crazy weekend. I wasn't really prepared for how the web would react to learning that [Webmonkey is no more][9]. My inbox blew up. Clearly Webmonkey will be missed. Thanks to everyone who sent me their thoughts on Webmonkey shutting down and all the stories about learning HTML (DHTML natch), CSS or JavaScript from the site. Also, glad to hear that there are apparently so many Webmonkey beanies out there. Anyway, if I'm a little slow responding to you, I apologize, but rest assured I have read everyone's email and I will get back to you all in the next few days. - -In the meantime, you should go read Brad Frost's recent post on [Designing in the Open][1]. - -Frost and his wife, Melissa, are redesigning the Greater Pittsburgh Community Food Bank's website and have decided to do everything in the open so the rest of us can see the process. There's a [very cool looking timeline][2] and a [post about their process so far][3].
In that post Melissa quotes a comment Josh Long wrote some time ago on Chris Coyier's [Working in Public][4] post (itself a good example) about why you would want to risk the potential embarrassment or ridicule or whatever else you're afraid of to work publicly: - -> 1. It makes you think clearly and directly. -> 2. It forces you to know what the hell you're talking about. -> 3. It shows people how much you put into your work. -> 4. It's a great way to document your work. -> 5. It's a great way to give back and teach others. - -To that I would add a couple more: it teaches you what you know (and don't), and it's fun, because really, if this isn't fun you shouldn't do it. - -The main reason we don't show our processes more or work in public is fear. Fear that, as Melissa Frost says, you'll embarrass yourself or that you'll be seen as a fraud or <your fear here>. Fear is self-created though. Fear is _our_ reaction to an imaginary negative event _we've_ projected into the future. It's something that might happen, but hasn't. Fear can be helpful. For example, you're afraid you'll forget about an important meeting and that fear prompts you to write it down on your calendar. But more often than not fear is not helpful. More often than not fear - that imaginary projection into the future - ends up inhibiting us in the present. It stops you from sharing the stuff you've made for instance. I put off launching this site for months, at least partly out of fear. - -So fuck fear. I don't have a cool looking timeline like the Frosts', nor do I have a super cool working new/old design split for this site like [Sparkbox's open redesign site][5]. But, in lieu of anything else, I do have a screenshot of my responsive web design book in progress: - - - -Does that count as working in the open? Probably not. But that's the best I can do at the moment. One of the early issues of McSweeneys had a line drawing of a bird with one wing, below it was the caption "trying, trying, trying". 
That's how I feel. - -If there's interest I could write more about what it's like to write a book (which I could title, how I tricked myself into writing 40,000 words in three weeks). At some point I'd like to take working in the open even further and put the "source" of the book in a public Git repository to make it easy for other people to fix typos, contribute resource links or, well - who knows what else people might end up doing? - -I think that's probably the best reason to do anything "in the open" - it opens more doors for other people. Yeah some of them will be jerks and trolls. But in my experience most of them are not. - -And opening the door to others opens the door to serendipity. And serendipity often leads to magic. - -When you put things out in the world the world takes them, plays with them - sometimes nicely, sometimes not - and unexpected things start to happen. In my experience these things tend to be good. Random @replies morph into friends, ideas spark others' imaginations. I find there ends up being a lot of synchronicity - ideas colliding in interesting ways, what Robert Anton Wilson called "[Coincidance][8]". - -I've never tried designing in the open, but I'd like to do more writing in the open. If you've got any good ideas on how to do that, please let me know. - -Okay, back to work. - -[1]: http://bradfrostweb.com/blog/post/designing-in-the-open/ (bradfrostweb.com, Designing in the Open) -[2]: http://foodbank.bradfrostweb.com/ -[3]: http://melissafrostdesign.com/post/pittsburgh-food-bank-open-redesign/ -[4]: http://chriscoyier.net/2012/09/23/working-in-public/ -[5]: http://building.seesparkbox.com/ -[6]: https://longhandpixels.net/media/images/2013/xrwdbook-inprogress-sm.jpg. 
-[7]: /media/images/2013/rwdbook-inprogress.jpg (view larger image of responsive design book in progress) -[8]: http://www.amazon.com/Coincidance-Head-Robert-Anton-Wilson/dp/1561840041 -[9]: /blog/2013/09/whatever-happened-to-webmonkey diff --git a/src/old-no-longer-pub/2013-09-30_responsive-images-srcset.txt b/src/old-no-longer-pub/2013-09-30_responsive-images-srcset.txt deleted file mode 100644 index 3b3e02b..0000000 --- a/src/old-no-longer-pub/2013-09-30_responsive-images-srcset.txt +++ /dev/null @@ -1,181 +0,0 @@ ---- -title: Responsive Images & `srcset` -pub_date: 2013-09-30 20:08:57 -slug: /blog/2013/09/responsive-images-srcset -metadesc: A comprehensive overview of responsive images and the srcset attribute. Regularly updated to keep up with changing standards proposals. -code: True -tags: Responsive Images ---- - -[Note: This post is superseded by the fact that the picture element now exists. Picture supports srcset as well, so you can do everything I mention below, you just do it within `<picture>`. See my [complete guide to the picture element](https://longhandpixels.net/blog/2014/02/complete-guide-picture-element) for more details.] - -There are, in my experience, three pain points in any responsive design -- tabular data, advertising and images. The latter is the most interesting problem to me since it's not something you can design or engineer your way around. True, there are ways you can coerce today's browsers into doing roughly what you want -- serve large images to large screens and small ones to small screens -- but these are really hacks, sometimes very clever hacks, but still hacks. - -Nathan Ford has put together a nice [Responsive Images Mega-List][1] in an attempt to catalog all the ways you can handle images in a responsive site today. - -That's a great list of resources for handling images today, but what I've been obsessing over lately is the future, when we won't need all these workarounds. 
- -Just like we hacked video into the web using Flash and eventually got the `<video>` tag, we're hacking responsive images into the web and eventually the web is going to give us a native solution. In fact, there's one just around the browser update bend. - -## How `srcset` Simplifies Responsive Images - -The exciting thing is that there are not just one, but two responsive image solutions already in the works. The W3C is working on a couple of new tools aimed at making responsive images less complex. The first new feature that's likely to make it to a browser near you is the new `srcset` attribute for the `<img>` tag. - -[**Update 27 Feb 2014**: Chrome has [added support](http://blog.chromium.org/2014/02/chrome-34-responsive-images-and_9316.html) for `srcset`, which means it will land in Opera soon as well. The Chrome implementation mirrors what's described below for WebKit. At the same time the `srcset` attribute has been added to the proposed `<picture>` element which you can read about in my [Complete Guide to the <Picture> Element](https://longhandpixels.net/blog/2014/02/complete-guide-picture-element).] - -The proposed `srcset` attribute has a controversial history, which I wrote about [several][2] [times][3] on Webmonkey. I don't think any of that matters at this point though. It's true that `srcset` doesn't address everything about responsive images, but it looks to me like it covers the 80 percent use case and, more importantly, there is [already some browser support][4] (and more on the way). - -Here's how `srcset` works: - -~~~.language-markup -<img alt="Image Desc" - src="image.jpg" - srcset="image-HD.jpg 2x, image-phone.jpg 320w, image-phone-HD.jpg 320w 2x"> -~~~ - -As you can see, the `srcset` attribute takes a comma-separated list of values. Within each item in the list you have three variables. 
First there's the URL to the image, then there's a maximum viewport dimension (optional) and then an optional pixel density (for targeting higher resolution screens). So the `srcset` value `image-HD.jpg 2x` tells the browser, roughly, if you're on a display with a high-res screen then grab this high-res image. Pretty simple. You can of course make it much more complex by adding several other images to the list, for example, to target various screen widths. - -There are two major drawbacks to `srcset`. First, **you can only specify screen width (or height) in pixels**. The reason has to do with how browsers pre-fetch content, which happens long before there's enough info to calculate the value of a percentage or em width/height. See this [thread on the W3C mailing list][5] for details. The bottom line is, flexible units for `srcset` are a no-go. - -The other major drawback is that **you can only specify the equivalent of max-width** when defining the viewport dimensions. There is no min-width or orientation support like you'd use in CSS @media queries. That means you may not be able to line your `srcset` breakpoints up with your CSS breakpoints. - -There's some other stuff in the spec worth noting as well. For instance, "if the viewport dimensions or pixel density changes, the user agent can replace the image data with a new image on the fly." That means (I think) that, while there's no equivalent to CSS 3 @media's `orientation` query, you could get the same effect because the viewport dimensions change on rotation, triggering larger images to load (though to make that work you'd end up targeting specific device widths, which is not [future-friendly][6]). It's hard to imagine a scenario in which the pixel density would change, but hey, why not I guess? - -There is one very cool part of the spec though: it puts the ultimate decision about which images are served in the hands of the user. 
- -No browser supports it, but the spec says that the higher res images specified in `srcset` are just candidates. Here's the [relevant bit][7]: - -> Optionally, return the URL of an entry in *candidates* chosen by the user agent, and that entry's associated pixel density, and then abort these steps. The user agent may apply any algorithm or heuristic in its selection of an entry for the purposes of this step. - -So in theory the browser gets the final say. This means the browser can check the available network and make a decision about whether or not to actually obey `srcset`. For instance it might reject the high-res images on 3G, but accept them over wifi. Even better, mobile browsers could add a user preference so users can say (as they can today with native apps), "I only want high-res images over wifi". Or all the time or whatever. The user is in control. - -I think that's probably the best way to handle what's possibly a user-developer conflict. For example, I want my images to look good on your retina iPad, but you might want to save your (possibly) expensive bandwidth for other things. I think the user should trump the developer in that scenario. With `srcset` the browser can give the user the power to make that decision. - -## Testing `srcset` Today - -This is all largely academic right now. Only one browser supports `srcset` and even that's just the nightly builds of Apple's WebKit. - -If you want to see it in action, go grab the [latest WebKit nightly][8]. 
Here's a live demo: - -<img src="/media/images/demos/srcsetdemo-fallback.jpg" srcset="/media/images/demos/srcsetdemo-2x.jpg 2x" alt="demo of srcset in action" width="660"/> -This first test is for retina displays, which looks like this: - -~~~.language-markup -<img alt="demo of srcset in action" - src="/media/images/demos/srcsetdemo-fallback.jpg" - srcset="/media/images/demos/srcsetdemo-2x.jpg 2x" /> -~~~ - -<img src="/media/images/demos/srcsetdemo-fallback.jpg" srcset="/media/images/demos/srcsetdemo-widthquery.jpg 420w" alt="demo of srcset in action" /> -This test is for mobile screens with a maximum viewport of 420px, here’s the code: - -~~~.language-markup -<img alt="demo of srcset in action" - src="/media/images/demos/srcsetdemo-fallback.jpg" - srcset="/media/images/demos/srcsetdemo-widthquery.jpg 420w" /> -~~~ - -<img src="/media/images/demos/srcsetdemo-fallback.jpg" srcset="/media/images/demos/srcsetdemo-mobile2x.jpg 420w x2" alt="demo of srcset in action" /> -The last test is for mobile high res screens and uses this code: - -~~~.language-markup -<img alt="demo of srcset in action" - src="/media/images/demos/srcsetdemo-fallback.jpg" - srcset="/media/images/demos/srcsetdemo-mobile2x.jpg 420w x2" /> -~~~ - -<img src="/media/images/demos/srcsetdemo-fallback.jpg" srcset="/media/images/demos/srcsetdemo-superwidequery.jpg 9000w" alt="demo of srcset in action" /> -This final test is designed to check WebKit's current implementation, which does not yet support specifying a width. It's the same query as above, but with a much wider max-width which should trigger it to load in desktop WebKit Nightly. - -~~~.language-markup -<img alt="demo of srcset in action" - src="/media/images/demos/srcsetdemo-fallback.jpg" - srcset="/media/images/demos/srcsetdemo-superwidequery.jpg 9000w" /> -~~~ - - -As of September 30, 2013, using the latest WebKit Nightly (v8536.30.1, 538+) only the first test works. 
WebKit only supports the pixel density queries, not the max viewport width query. - -## Which Web Browsers Support `srcset`? - -Eventually caniuse.org will probably [add][8] `srcset` (I think they require at least one shipping version of the feature before they'll track it), but for now I threw together a table for keeping track of which web browsers support `srcset`. - -Here's the list as of November 15, 2013: - -<div class="longtable"> -<table> -<colgroup> -<col style="text-align:left;"/> -<col style="text-align:left;"/> -</colgroup> - -<thead> -<tr> - <th style="text-align:left;">Browser</th> - <th style="text-align:left;"><code>srcset</code> support</th> -</tr> -</thead> - -<tbody> -<tr> - <td style="text-align:left;">WebKit Nightly</td> - <td class="yes" style="text-align:left;">yes</td> -</tr> -<tr> - <td style="text-align:left;">Safari 7</td> - <td class="no" style="text-align:left;">no</td> -</tr> -<tr> - <td style="text-align:left;">Firefox 30</td> - <td class="no" style="text-align:left;">no</td> -</tr> -<tr> - <td style="text-align:left;">Chrome 34+</td> - <td class="yes" style="text-align:left;">yes</td> -</tr> -<tr> - <td style="text-align:left;">Opera 16</td> - <td class="no" style="text-align:left;">no</td> -</tr> -<tr> - <td style="text-align:left;">IE 11</td> - <td class="no" style="text-align:left;">no</td> -</tr> -<tr> - <td style="text-align:left;">Mobile Safari 7</td> - <td class="no" style="text-align:left;">no</td> -</tr> -<tr> - <td style="text-align:left;">Opera Mini 7</td> - <td class="no" style="text-align:left;">no</td> -</tr> -<tr> - <td style="text-align:left;">Opera Mobile 14</td> - <td class="no" style="text-align:left;">no</td> -</tr> -<tr> - <td style="text-align:left;">Android Default 4.2</td> - <td class="no" style="text-align:left;">no</td> -</tr> -<tr> - <td style="text-align:left;">Chrome for Android</td> - <td class="no" style="text-align:left;">no</td> -</tr> -<tr> - <td style="text-align:left;">Firefox for Android</td> - 
<td class="no" style="text-align:left;">no </td> -</tr> -</tbody> -</table> -</div> - -Yes, it's a slightly ridiculous table, <strike>but with any luck Chrome will be joining the list of <code>srcset</code> supporters in the very near future</strike>. [**Update 2014-02-27**: Chrome 34 and higher [now support](http://blog.chromium.org/2014/02/chrome-34-responsive-images-and_9316.html) `srcset`, which also means Opera will soon as well]. My contacts at Mozilla tell me that Firefox is also working on support. So things are looking pretty good for the future. That doesn't help today though, so if you need something now, remember to check out Nathan Ford's [Responsive Images Mega-List][1] for a complete collection of responsive image solutions that work today. - -[1]: http://artequalswork.com/posts/responsive-images/ (Responsive Images Mega-List) -[2]: http://www.webmonkey.com/2012/05/ready-or-not-adaptive-image-solution-is-now-part-of-html/ (Ready or Not, Adaptive-Image Solution Is Now Part of HTML) -[3]: http://www.webmonkey.com/2012/05/browsers-at-odds-with-web-developers-over-adaptive-images/ (Browsers at Odds With Web Developers Over 'Adaptive Images') -[4]: http://mobile.smashingmagazine.com/2013/08/21/webkit-implements-srcset-and-why-its-a-good-thing/ -[5]: http://lists.w3.org/Archives/Public/public-whatwg-archive/2012May/0310.html -[6]: http://futurefriend.ly/ -[7]: http://www.w3.org/html/wg/drafts/srcset/w3c-srcset/#processing-the-image-candidates -[8]: http://nightly.webkit.org/ diff --git a/src/old-no-longer-pub/2013-11-08_easiest-way-get-started-designing-browser.txt b/src/old-no-longer-pub/2013-11-08_easiest-way-get-started-designing-browser.txt deleted file mode 100644 index f6b4b30..0000000 --- a/src/old-no-longer-pub/2013-11-08_easiest-way-get-started-designing-browser.txt +++ /dev/null @@ -1,116 +0,0 @@ ----
-title: The Easiest Way to Get Started 'Designing in the Browser'
-pub_date: 2013-11-08 16:02:19
-slug: /blog/2013/11/easiest-way-get-started-designing-browser
-metadesc: Developing and designing directly in the browser has greatly simplified my workflow. Skipping intermediary tools like Photoshop means I'm able to accomplish more in less time, which in turn means I can say yes to more projects.
-tags: Responsive Web Design, Building Smarter Workflows
-code: True
-tutorial: True
-
----
-
-*If you've ever struggled to "design in the web browser" this post is for you.*
-
-You've probably heard that phrase before, "designing in the browser", especially when it comes to building responsive websites. Some people even go so far as to say you shouldn't use Photoshop at all, but should build everything right in the browser.
-
-I think you should choose whatever tool works the best for you, but since I switched to developing directly in the browser a few years ago I've been able to greatly simplify my workflow. Skipping intermediary tools like Photoshop means I'm able to accomplish more in less time, which in turn means I can say yes to more projects.
-
-Sounds awesome, right? But where do you start?
-
-## How to Simplify "Designing in the Browser"
-
-Start with the content. Get your content from the client, make your sketches, your wireframes, whatever other preliminary things are already part of your workflow. Then, when that's done, instead of opening Photoshop, Illustrator or other layout apps, you convert that content to HTML and start structuring it to match your wireframes.
-
-But how? I start up a web browser (I use Firefox), point it to my local mockup files (just HTML files in my project folder) and start editing. That's it; that's my workflow: edit, refresh; edit, refresh. It's simple and it makes the feedback loop of design and development immediate and simple.
-
-And to do that I didn't spend hours setting up some complex development environment, nor did I have to buy some expensive GUI server software package. In fact, I use just one line of code to pull this off.
-
-Here's how you can simplify your responsive design workflow, and start "designing in the browser" with what I call "the Python web server trick". I've been doing this for so long I often assume everyone knows about this, but I keep meeting people who don't. So... if you don't, here's a dead simple way to serve files locally with just one line of code.
-
-## The Best Web Server is the One That's Already Installed
-
-The key to designing in the browser is to have **a quick and easy way to serve up files locally**. You don't need anything fancy at this stage, just a basic web server.
-
-The easiest way I have found to set up this workflow is to use a very simple command line tool that creates a web server wherever I want it, whenever I want it.
-
-I know, I know, the command line is antiquated, mysterious and a bit frightening for many people. I know that because it was that way for me too. But I kept noticing how much faster I could do things compared to visual apps. And I found that every time I used the terminal, it got a little less intimidating. I learned how to do one little thing that sped up my overall workflow. Then I learned another. And another. Today I use the terminal more than any other application. You don't have to go that far, but don't let it intimidate you. Just take it slow. Start with one thing that simplifies your life, like this web server trick.
-
-If you've ever tried to set up a local development environment with Apache, PHP and the like you probably know what a headache that can be. Well, it turns out, if all you want is a simple web server, there's a much easier way.
-
-Here's how to **turn any local folder on your Mac or Linux machine into a web server** (Windows users can do the very same thing, though first you'll need to install Python. Follow [these instructions](http://www.anthonydebarros.com/2011/10/15/setting-up-python-in-windows-7/) to get Python installed and then come back and follow along).
-
-Before we start our web server we need a folder to hold all our files. This folder will be the "root" of our web server. I divide my time between OS X and Linux, and on both I keep a "Sites" folder in my home folder. I use this folder to store all my projects. If you prefer to store things elsewhere just adjust all the path names in the code that follows. Open the `Sites` folder and create a new folder inside it named `myproject`. Then create a new file named `mydemo.html`. Open that file in your favorite editor and just type "Hello World".
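If you'd rather do that setup from the terminal you're about to open anyway, the same two steps are two commands (the paths match the example above; adjust them if you keep your projects elsewhere):

```shell
# Create the project folder (-p also creates ~/Sites if it doesn't exist yet)
mkdir -p ~/Sites/myproject

# Create the test file containing our "Hello World" content
echo "Hello World" > ~/Sites/myproject/mydemo.html
```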
-
-That gives us something to test with. The next step is to open up your terminal application. In OS X that means you head to the `Applications` folder, open the `Utilities` folder and then double-click the Terminal application (Windows users head to the Start Menu, click Run and then type `cmd`; if you're on Linux I'll assume you know how to open a terminal window). In the new Terminal window type/paste this line:
-
-~~~.language-bash
-cd ~/Sites/myproject
-~~~
-
-The command `cd` just means "change directory". So we've changed from our home user folder to the `myproject` directory. Okay, you're now inside the folder we created just a minute ago. Now type this line:
-
-~~~.language-bash
-python -m SimpleHTTPServer 8080
-~~~
-
-Now open your favorite web browser and head to the URL: `localhost:8080/`. You should now see a directory listing with a link to your `mydemo.html` file. Click that and you should see "Hello World". Go back to your text editor and change the `mydemo.html` file to read "Hello World, it's nice to meet you". Jump back to the browser and reload the page. You should now see the message "Hello World, it's nice to meet you".
-
-Congratulations! You created a web server. You now have a simple and fast way to serve up HTML files locally. You can edit, refresh and mock up live HTML files right in the browser.
-
-All we're doing here is taking advantage of the fact that the [Python programming language](http://www.python.org/) ships with a built-in web server. Since Python is built into OS X and Linux, it's always there, ready to serve up files (as noted above, if you're on Windows you'll need to install Python. I also suggest installing [Cygwin](http://cygwin.com/), it will make everything you do on the command line easier).
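One note worth adding: `SimpleHTTPServer` is the Python 2 module name. If your machine has Python 3 instead, the equivalent command is `python3 -m http.server 8080`. The same server can also be started from a short script, which is a handy way to confirm it really serves files. Here's a minimal sketch (it binds to a random free port, fetches the directory listing, then shuts down):

```python
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Port 0 asks the OS for any free port -- handy for a quick smoke test.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]

# serve_forever() blocks, so run the server on a background thread.
threading.Thread(target=server.serve_forever, daemon=True).start()

# Request the auto-generated directory listing for the current folder.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as response:
    status = response.status

server.shutdown()
print(status)  # 200
```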
-
-## Improving the Script
-
-So we have a very basic way to serve files locally. There are various ways to make this more sophisticated, but this basic method will work when you're first getting started.
-
-If you don't mind another quick trip to the Terminal you can even automate the process some more. To make it even simpler we can add an alias to what's known as a "profile", the configuration file that loads every time we start up a new terminal window. Most operating systems these days ship with the [Bash shell](http://en.wikipedia.org/wiki/Bash_%28Unix_shell%29). Assuming that's what you have (OS X uses Bash by default, as do most Linux distros), open a new terminal window and type this:
-
-~~~.language-bash
-nano ~/.bash_profile
-~~~
-
-Now paste this line into the window:
-
-~~~.language-bash
-alias serve='cd ~/Sites/myproject && python -m SimpleHTTPServer 8080'
-~~~
-
-Hit `control-x`, type a "y" and hit return to save the file. Now quit your terminal app and restart it.
-
-Now to turn on the server all we need to do is open a new terminal window and type "serve". Note that if your folder is in a different location, or if you move the folder you'll need to adjust your alias accordingly.
-
-If you've got a home network running and you'd like to be able to see your website on all your devices (handy for testing on phones, tablets and whatnot), you can alter this code slightly so other local devices can connect to your server. It's a little more complicated, but can still be a one-liner.
-
-For example, if your machine's local network address is 192.168.1.5, you could run this command:
-
-~~~.language-bash
-python -c "import BaseHTTPServer as bhs, SimpleHTTPServer as shs; bhs.HTTPServer(('192.168.1.5', 8080), shs.SimpleHTTPRequestHandler).serve_forever()"
-~~~
-
-Now, instead of `localhost`, open the URL `192.168.1.5:8080` in your web browser and you'll see the same page, but now you can point your phone to that URL and it will load there as well. Ditto your tablet, Kindle and any other devices connected to your local network.
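An aside for Python 3 users: the renamed module accepts a `--bind` flag, so the long one-liner becomes simply `python3 -m http.server 8080 --bind 0.0.0.0`, where `0.0.0.0` means "listen on every network interface" and saves you hard-coding your machine's IP. The same idea looks like this programmatically (port 0 is used here only so the snippet never collides with a running server; use 8080 to match the commands above):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# "0.0.0.0" accepts connections from any device on the local network,
# not just this machine. Port 0 picks a free port for this demo.
server = HTTPServer(("0.0.0.0", 0), SimpleHTTPRequestHandler)
print(server.server_address[0])  # 0.0.0.0

# server.serve_forever()  # uncomment to actually serve; blocks until stopped
server.server_close()
```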
-
-Obviously that's tough to remember so let's create an alias. To do that we'll just add another alias to the .bash_profile file we edited earlier. To open that up again just enter:
-
-~~~.language-bash
-nano ~/.bash_profile
-~~~
-
-Now paste this line into the window:
-
-~~~.language-bash
-alias serve_all="python -c 'import BaseHTTPServer as bhs, SimpleHTTPServer as shs; bhs.HTTPServer(('\''192.168.1.5'\'', 8080), shs.SimpleHTTPRequestHandler).serve_forever()'"
-~~~
-
-Now you can `cd` into any directory, type `serve_all` and run a web server that you can use to test your sites on any device.
-
-That's all there is to it. A live web server whenever you want it, wherever you want it.
-
-That's how I "design in the browser".
-
-The next step is to take that nice content the client gave us and put it into our mockup files so we have something more useful than "Hello World" in our web browser. I do this using plain text files, [Markdown](http://daringfireball.net/projects/markdown/) and [Pandoc](http://johnmacfarlane.net/pandoc/), which I cover in more detail in this follow-up post: [Work Smarter: The Plain Text Workflow](/blog/2014/02/work-smarter-plain-text-workflow).
-
-I hope this simple Python server trick proves helpful, and, if you have any questions, drop them in the comments below.
-
-If you want to learn some more handy tips and tricks for improving your responsive design workflows check out my book, [Build a Better Web With Responsive Web Design](https://longhandpixels.net/books/responsive-web-design) and the accompanying videos.
-
-
diff --git a/src/old-no-longer-pub/2014-02-02_work-smarter-plain-text-workflow.txt b/src/old-no-longer-pub/2014-02-02_work-smarter-plain-text-workflow.txt deleted file mode 100644 index 551f195..0000000 --- a/src/old-no-longer-pub/2014-02-02_work-smarter-plain-text-workflow.txt +++ /dev/null @@ -1,250 +0,0 @@ ----
-title: "Work Smarter: The Plain Text Workflow"
-pub_date: 2014-02-02 19:37:07
-slug: /blog/2014/02/work-smarter-plain-text-workflow
-metadesc: A guide to smarter responsive design workflows. If you're still designing websites in Photoshop you're doing too much work.
-tags: Responsive Web Design, Building Smarter Workflows
-code: True
-tutorial: True
----
-
-*If you've ever struggled building responsive websites, this post is for you. It's part of a series on responsive design and smarter workflows, all pulled from my book, [Responsive Web Design](https://longhandpixels.net/books/responsive-web-design). If you find this excerpt useful, and want even more ideas on how responsive design can help you create amazing websites, sign up for the newsletter below and you'll get a discount when the book is released.*
-
-[TOC]
-
-If your current workflow looks anything like mine used to, you probably start most of your design work in a graphics editor like Photoshop. There's nothing "wrong" with that per se, nor is Photoshop some tool for evil, but I'm here to tell you there is a better, more productive way to work.
-
-Here's the thing: if you're designing websites in Photoshop you're doing too much work.
-
-If you're building responsive websites and you're not working directly with the code in a web browser you're wasting time and energy using the wrong tool for the job. There's nothing wrong with Photoshop, but it's not the best tool for developing responsive websites.
-
-Why?
-
-Photoshop has a fixed canvas size; the web has an infinitely flexible canvas. Responsive design means embracing that flexible canvas and you can't do that when you're working in a graphics editor.
-
-The web browser is the only tool I know of that offers the same fluid, flexible canvas of the web. **If you want to simplify and optimize your responsive workflow, the place to work is in the browser**.
-
-Designing in the browser means working with the web rather than against it, and that's the first step toward a simpler, faster responsive design workflow.
-
-## Faster, Easier Web Development? Sign Me Up!
-
-For a long time I ignored the advice to design in the browser because often the people giving it stopped with the advice. Go design in the browser! It's awesome! And, unicorns! Whereas I would just sit there thinking, *Uuuuuuuh, okay, but what the heck does that mean? Design in the browser...? Grumble, well, I've got work to do and I know how to use Photoshop...*
-
-Designing directly in the web browser makes sense at a theoretical level, but what does it mean in practical terms? What does this workflow look like and where do you start?
-
-That's why I wrote this, to outline how I finally figured out I could save tremendous time and effort working directly in the browser rather than prototyping everything in Photoshop. I'm going to share my workflow in hopes it will prove useful to you.
-
-Before we dive in, realize that this is just my workflow. It's not THE workflow by any means. Whether you're part of a small team, a large team or working on your own, you have your own idiosyncrasies and tics to consider. A good workflow will work with you, not against you. I can't tell you how you should work. I can, however, tell you how I work and hopefully that will give you some ideas you can test in your own workflow. Take what you need, skip what you don't.
-
-In the [first part of this responsive design workflow series](https://longhandpixels.net/blog/2013/11/easiest-way-get-started-designing-browser) we looked at a super simple way to serve files on our local development machines. After all, you can't design in the browser if you don't have a server for your web browser to talk to. So I'll assume you've completed that tutorial (or have your own web server setup already).
-
-## Everything Starts with the Content
-
-The first thing to do with any tough question is to break it down into smaller questions. So instead of asking what does designing in the browser look like, let's start with a more basic question: What do we need before we can display a webpage?
-
-The simple answer is: the content.
-
-Before we can do anything with HTML or CSS we need the contents of our site. Everything starts with content. No content, no HTML. No HTML, no CSS.
-
-The foundation of any design has to be the content. I know what you're thinking; you're thinking I'll just grab some Lorem Ipsum. Or, if you prefer some snark with your dev work, [Samuel L. Ipsum](http://slipsum.com). But that's not going to cut it.
-
-Without the content we're designing in the dark. I can't remember where I first saw a graphic like this, but it's what turned the light bulb on for me.... Simply put, there's no point in designing a layout with this:
-
-[](https://longhandpixels.net/media/images/2014/contentfirst1.png "View Image 1")
-: The content you think you're going to get.
-
-When what you actually need to display looks like this:
-
-[](https://longhandpixels.net/media/images/2014/contentfirst1.png "View Image 2")
-: The content you actually got.
-
-
-As Jeffrey Zeldman [writes](http://www.zeldman.com/2008/05/06/content-precedes-design/), "design in the absence of content is not design, it's decoration." We're not decorating websites, we're designing them, which means we need actual content.
-
-In terms of workflow, this means you need to get the content as soon as you can. No content, no work to flow. It means that when you're meeting with clients you need to emphasize the importance of getting actual content -- not placeholders, but the real thing. Without the content you can't make the client's vision a reality. You need real content to build real websites.
-
-If you're doing a redesign of existing content, you're all set, just dump the site using a tool like [curl](http://curl.haxx.se) or wget[^1] and then do a content inventory -- list all the contents and make a special note of any patterns, repeated content and outlier content you find.
-
-Once you have the content you can move on to sketching and wireframing -- figuring out the structure of the pages and overall site. I tend to do this stage mostly on paper because I find that to be the fastest way to get my ideas recorded, but I've seen other developers who use iPads with styluses, mind mapping tools, even one who used spreadsheets to create wireframes (yeah, he was a little strange).
-
-Once I have a good idea of the hierarchy and structure of the content on the site, I immediately jump into the browser and start working with live HTML. At this point I'm just stacking text and creating structure; there's no CSS yet.
-
-There are two necessary components in this workflow: the web server and the files.
-
-## Tool #1: The Web Server
-
-For the full details on how you can start a web server in any folder on your computer, see the first part of this series. To quickly recap where we left off, recall that we set up our example project in `~/Sites/myproject/`. We then open up our terminal application and type:
-
-~~~language-bash
-cd ~/Sites/myproject/ && serve
-~~~
-
-Assuming you set up the aliases outlined in the first post, this will start up a server in the `myproject` folder. Now point your browser to [http://localhost:8080](http://localhost:8080) and you should see a directory listing of... nothing! That's okay, the important thing is that we've got a server up and running.
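-
-(If you skipped the first post, or just want a stand-in, a one-line alias along these lines in your `~/.bashrc` or `~/.bash_profile` does the same job. This is only a sketch -- it assumes you have Python 3 installed, and the alias from the first post may differ in its details.)
-
-~~~{.language-bash}
-# Hypothetical stand-in for the 'serve' alias from part one:
-# serve the current directory over HTTP on port 8080
-alias serve='python3 -m http.server 8080'
-~~~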
-
-Now we're ready to turn our content into actual HTML files.
-
-## Tool #2: Plain Text
-
-Up until now I've been using a phrase I don't particularly like -- "the content". That makes it sound like it's just a pile of stuff. But it's not. It's words, phrases, sentences, pitches, headlines, subheadlines, outlines, lists, tables, buttons, forms, charts, illustrations, images, videos. "The content" is generic; the contents of the site you're building are not. This isn't just semantics; it's your first clue in how to get your workflow in line with the web itself.
-
-Notice, too, that almost everything in that list is either text, an image or a video.
-
-At its core this is what the web is made of -- text and images. This is why starting your work in a graphics editor doesn't make sense. The web is mostly text. Even fancy landing pages and ultra-slick web apps are ultimately about serving up text and images in some way.
-
-And within that core, for most sites, "the content" is text. So when we're designing in the browser we'll start where the web does, with text. Then we'll add structure to that text. Then we can actually **design** pages optimized to display that structured text rather than just decorating some filler and hoping for the best.
-
-### The Power of Markdown
-
-Starting with just the text makes it much easier to see the structure of your content and mark it up accordingly.
-
-To my mind the best way to mark up your documents at this stage is to use John Gruber's [Markdown](http://daringfireball.net/projects/markdown/). Markdown is a text-to-HTML conversion tool which allows you to mark up text using an easy-to-read, easy-to-write format and then convert it to structurally valid HTML. While it won't let you mark up everything you might need in the end, it's perfect for generating quick prototypes like this. You can read through the [Markdown documentation over on Daring Fireball](http://daringfireball.net/projects/markdown/syntax).
-
-Best of all, the syntax of Markdown is extremely simple: you can pick it up in an afternoon and reach the level of mastery we need in a day or two.
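-
-As a taste, here are a few of the basics beyond headers and lists (these come straight from the Markdown syntax docs; the link URL is just an example):
-
-~~~language-markup
-*emphasis* and **strong emphasis**
-
-[a link](http://example.com/)
-
-> A blockquote
-~~~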
-
-To mark up our content we just dive into our plain text files and add a bit of structure. Let's say you had some text that looks like this:
-
-~~~language-markup
-Acme Widgets
-
-We're Awesome Widget Makers
-
-Crank your widgets faster with ACME Widgets
-
-Our Widgets are the best in the business. You can rely on ACME widgets day in and day out. The toughest, most dependable widgets out there.
-
-What can ACME Widgets do for you?
-
-We can help you build better subwidgets
-We can make your life widgetier
-We can solve all your widget headaches
-~~~
-
-Let's use Markdown to add some structure. The result might look something like this:
-
-~~~language-markup
-#We're Awesome Widget Makers
-
-##Crank your widgets faster with ACME Widgets
-
-Our Widgets are the best in the business. You can rely on ACME widgets day in and day out. The toughest, most dependable widgets out there.
-
-###What can ACME Widgets do for you?
-
-* We can help you build better subwidgets
-* We can make your life widgetier
-* We can solve all your widget headaches
-~~~
-
-You can refer to the [Markdown docs](http://daringfireball.net/projects/markdown/syntax) for more details on the syntax, but the main thing to know is that `#` will be converted to `<h1>`, `##` to `<h2>` and so on. The asterisks denote an unordered list. The sentences in the middle will automatically be wrapped in `<p>` tags. We've added structure to the text, but kept things readable.
-
-Why bother? Why not go straight to HTML? Well, in this case the content is simple enough that sure, you might as well, but with much more complex, real-world content, marking everything up in pure HTML is going to make it very difficult to read. Remember, this is the prototyping stage; things will be changing and you'll likely need to edit, rearrange and change your content many times. The more readable it is, the easier it is to make those structural changes.
-
-### Tools for Converting to Plain Text
-
-Unfortunately, it's rare that a client gives you plain text files. Most clients deliver content in MS Word files or PDFs or something even stranger. I had one client who sent over all their content as a series of PowerPoint presentations.
-
-The good news is that almost any file can be reduced to plain text. Regardless of how the client delivers the content, I convert it to plain text. For Word files I just open them and save as plain text. That won't preserve any formatting, but you can keep the original around just to make sure you get the hierarchy and structure right.
-
-For PDFs I use a tool called [pdftotext](http://www.foolabs.com/xpdf/download.html). OS X users can grab a handy installer from [Carsten Blüm](http://www.bluem.net/en/mac/packages/). There are also numerous free online PDF-to-text converters, as well as OCR software available. If the client hands you content in PowerPoint slides you can open it in PowerPoint, save it as a Rich Text document and then open the Rich Text document in TextEdit or similar and save as plain text.
-
-The point is to get your content in plain text form.
-
-### Pandoc for Fame and Fortune
-
-The next step is to convert our Markdown-formatted file into an HTML file we can view in the browser using the server we set up above. There's a nearly unlimited number of ways we can convert from Markdown to actual HTML, but my favorite is [Pandoc](http://johnmacfarlane.net/pandoc/).
-
-Installing Pandoc is simple, just head over to the [Pandoc download page](http://code.google.com/p/pandoc/downloads/list) and grab the installer for your platform (for OS X grab the .dmg file, for Windows grab the .msi). Then double click the installer and follow the directions.
-
-Once you have Pandoc installed you just need to run it on your markdown files. To do that fire up a terminal application. On OS X that would be Applications >> Utilities >> Terminal. On Windows you'll need [Cygwin](http://x.cygwin.com).
-
-I know, the command line is antiquated, mysterious and a bit frightening for many people.
-
-I know that because it was that way for me too. But I kept noticing how much faster I could do things compared to visual apps. And I found that every time I used the terminal, it got a little less intimidating. I learned how to do one little thing that sped up my overall workflow. Then I learned another. And another. Today I use the terminal more than any other application. You don't have to go that far, but don't let it intimidate you. Just take it slow. Start with one thing that simplifies your life, like [the web server trick](https://longhandpixels.net/blog/2013/11/easiest-way-get-started-designing-browser).
-
-With that one already under your belt you're ready for Pandoc.
-
-Open your terminal and navigate to your project folder. To stick with the previous tutorial in this series we'll say our project files are in `~/Sites/myproject`:
-
-~~~{.language-bash}
-cd ~/Sites/myproject
-~~~
-
-Now that we're in the right directory we just need to invoke Pandoc:
-
-~~~{.language-bash}
-pandoc -s --smart -t html5 -o about.html about.txt
-~~~
-
-This line says, take the file `about.txt`, convert it from Markdown to HTML5 (that's the `-t html5` bit) and save the results in a new file named `about.html`. The `-s` flag at the beginning of the line tells Pandoc that we want this to be a standalone conversion, which means Pandoc will add `<html>`, `<head>`, `<body>` and a few other tags so that we have an actual valid HTML file rather than just a fragment of HTML. Pandoc even adds a link to the [HTML5shiv](https://code.google.com/p/html5shiv/) for IE.
-
-The `--smart` flag turns on one little extra feature that converts straight quotes into actual curly quotes (it also handles things like dashes and ellipses).
-
-Point your web browser to <http://localhost:8080/about.html> and you should see the results. View source and you'll notice that Pandoc has done a bunch of stuff:
-
-~~~language-markup
-<!DOCTYPE html>
-<html>
-<head>
- <meta charset="utf-8">
- <meta name="generator" content="pandoc">
- <title></title>
- <style type="text/css">code{white-space: pre;}</style>
- <!--[if lt IE 9]>
- <script src="http://html5shim.googlecode.com/svn/trunk/html5.js"></script>
- <![endif]-->
-</head>
-<body>
- <h1 id="were-awesome-widget-makers">We're Awesome Widget Makers</h1>
- <h2 id="crank-your-widgets-faster-with-acme-widgets">Crank your widgets faster with ACME Widgets</h2>
- <p>Our Widgets are the best in the business. You can rely on ACME widgets day in and day out. The toughest, most dependable widgets out there.</p>
- <h3 id="what-can-acme-widgets-do-for-you">What can ACME Widgets do for you?</h3>
- <ul>
- <li>We can help you build better subwidgets</li>
- <li>We can make your life widgetier</li>
- <li>We can solve all your widget headaches</li>
- </ul>
-</body>
-</html>
-~~~
-Pandoc has converted all our Markdown formatting into actual HTML. Our hashes change to header tags and our list gets marked up as an unordered list. Don't worry too much about the IDs on all the header elements. Those can be changed or deleted when you move to working with templates in an actual content management system.
-
-As you can see we have a nicely structured HTML file which can serve as the basis of our templates or undergo further editing to add things like a site-wide navigation menu, header, logo and the like.
-
-Hmm. Maybe that list at the end should be an ordered list (probably not, but for example's sake, go with me here). Well, that's easy to change. Just open up the `about.txt` file and change the Markdown to look like this:
-
-~~~language-markup
-#We're Awesome Widget Makers
-
-##Crank your widgets faster with ACME Widgets
-
-Our Widgets are the best in the business. You can rely on ACME widgets day in and day out. The toughest, most dependable widgets out there.
-
-###What can ACME Widgets do for you?
-
-1. We can help you build better subwidgets
-2. We can make your life widgetier
-3. We can solve all your widget headaches
-~~~
-
-Run the same Pandoc command (**tip**: if you hit the up arrow in your terminal app it will bring up the last-used command. Assuming you've done nothing else in the meantime, that will be the Pandoc command we ran before).
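-
-Run it, and the list fragment in `about.html` becomes an ordered list -- roughly this, though depending on your Pandoc version it may add IDs or inline style attributes:
-
-~~~language-markup
-<ol>
-<li>We can help you build better subwidgets</li>
-<li>We can make your life widgetier</li>
-<li>We can solve all your widget headaches</li>
-</ol>
-~~~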
-
-Refresh your browser and you should see that the list of things ACME widgets can do for you is now an ordered list. Hmm, maybe that header should be an H2? Maybe the client just called, they're sending over some updates for the page. None of that's a problem any more, you just update your text file, run Pandoc and see the results. Simple.
-
-## Further
-
-So far we've established a very basic, but fast workflow. We take our client-provided content, convert it to text and, with just two lines of code, create HTML files and serve them locally for prototyping and structuring.
-
-All this does is give you quick and dirty HTML you can use for prototyping. Why is that useful?
-
-As Stephen Hay has [said repeatedly](http://www.the-haystack.com/) (and [written a book about](http://www.responsivedesignworkflow.com/), which you should read), starting with raw, unstyled HTML forces you to focus and prioritize. Hay suggests asking yourself, "what is the message that needs to be communicated if I was only able to provide them with unstyled HTML?" Start there, with the content -- the most important content -- and design everything around that.
-
-We've got that basic unstyled HTML. What if you want to get a little bit fancier with Pandoc? Well, you certainly can. I do.
-
-In the next installment in this series we'll look at some advanced ways to use Pandoc, including customizing the HTML template it uses and adding site-wide elements like navigation, headers and footers, as well as the part most designers are waiting for -- adding an actual stylesheet.
-
-So stay tuned. In the meantime, you can head over to the [Pandoc documentation](http://johnmacfarlane.net/pandoc/README.html) if you'd like to get a head start.
-
-[^1]: If you prefer a graphical download tool, check out [Sitesucker](http://www.sitesucker.us/mac/mac.html) for OS X or [HTTrack](http://www.httrack.com) for Windows.
-
-If you want to learn some more handy tips and tricks for improving your responsive design workflows, I'm writing a book to teach you exactly that (and a whole lot more). Sign up for the mailing list below to hear more about the book and get a discount when it's released.
-
diff --git a/src/old-no-longer-pub/2014-02-12_what-is-responsive-web-design.txt b/src/old-no-longer-pub/2014-02-12_what-is-responsive-web-design.txt
deleted file mode 100644
index 9d63bf5..0000000
--- a/src/old-no-longer-pub/2014-02-12_what-is-responsive-web-design.txt
+++ /dev/null
@@ -1,75 +0,0 @@
----
-title: What is Responsive Web Design?
-pub_date: 2014-02-12 12:04:25
-slug: /blog/2014/02/what-is-responsive-web-design
-metadesc: A gentle introduction to responsive web design... hint, it's more than flexible sites with some media queries
-tags: Responsive Web Design
-
----
-
-[*Note: This is an excerpt from my book, [Build a Better Web With Responsive Web Design](https://longhandpixels.net/books/responsive-web-design). This starts out pretty simple, but even if you're already familiar with the basic concept of responsive design you might learn a few new things. My definition of responsive web design is very broad, encompassing ideas like taking a mobile-first approach and using progressive enhancement to make sure your responsive web site works everywhere.*]
-
-The phrase responsive design was coined by Ethan Marcotte in 2009 in an *A List Apart* article entitled, appropriately enough, *[Responsive Web Design](http://www.alistapart.com/articles/responsive-web-design/)*. At the most basic level Marcotte used the phrase to mean building websites that respond to users' needs.
-
-The specific case Marcotte wrote about was making websites flow into different layouts for a variety of screens, especially mobile. When Marcotte's article first appeared in 2009 the iPhone was just starting to be truly ubiquitous. Clients were asking for "iPhone websites". But it wasn't just the iPhone; the handheld device market was just about to explode. There was a fear that the web would devolve into a multitude of separate websites, each tailored to a specific device, which, as Marcotte and others recognized, would be insane.
-
-As Marcotte writes, "can we really continue to commit to supporting each new user agent with its own bespoke experience? At some point, this starts to feel like a zero sum game. But how can we -- and our designs -- adapt?"
-
-The answer was two-fold. First there were some new tools available to help us out, namely the @media query in CSS 3. Then there was the second and more powerful part, which meant going back to the web's origins and rediscovering something many of us lost along the way -- the web is an inherently flexible medium.
-
-In practice responsive design means creating websites that look good no matter which screen they might be on. Building a responsive website means making sure that the site is easy to read and navigate with a minimum of resizing, scrolling or panning. Building a responsive website means building a site your users and customers will recognize and enjoy regardless of which device they might be using -- mobile phone, tablet, laptop, desktop or even the internet-enabled toaster of the future.
-
-It sounds wonderful at first blush -- who doesn't want their website to look great and work well on any screen? Even those we don't know about yet? Stop and think about it for a bit, though, and suddenly responsive design starts to sound unfathomably complex.
-
-How in the world do you make your site look good no matter where it's being served up -- phone, tablet, ebook reader, laptop, desktop and more?
-
-To complicate matters, even those nice clean divisions -- mobile, tablet and desktop -- are fast disappearing. There are phones with tablet-like 7-inch HD screens, desktop-size monitors that run Android 4.0 and hybrid devices like Ubuntu's mobile OS, which can dock to a monitor -- is that mobile? Is it a desktop? What if the answer to both questions is yes?
-
-Also consider that the "screens" of the future might not be screens at all. Google Glass is already in the wild and while it's still a "screen" of sorts, it's certainly different from what most of us are used to. Several years ago Microsoft showed off a prototype projection device dubbed the "OmniTouch" which essentially put the "screen" anywhere -- the desk in front of you, the wall during a presentation, your hand, anywhere. Other "screens" somewhat more likely to actually make it to market include "smart" glass for windows, which can pull all sorts of tricks, including turning opaque to become a display. At the other end of the spectrum Sony is hard at work on displays that behave like and are no thicker than a single sheet of paper.
-
-[Google Glass.](images/google-glass.png)
-
-Some of these ideas will become part of our reality, some will not. Other ideas we can't even imagine right now will also be created.
-
-The "screen" of the future might be your sunglasses, your hand, the back of the seat in front of you on the bus or the window by your bed when you wake up in the morning. Which of these is mobile, which is a tablet and which is a desktop? To build a future-proof web we need to stop focusing so heavily on individual devices, yes, but why stop there? While we're at it we might as well get rid of the notion of device categories as well, since the hardware has already started doing exactly that. In the end even thinking of "screens" will soon seem antiquated. For this reason most web standards rarely use the word "screen", opting instead to talk about "viewports".
-
-Ultimately it makes no sense to frame our discussions by device type or even context. All of these devices, these screens, these viewports, these *things* are just portals into the world of the web. The more portals that can see into the web, the more people you can bring into your world.
-
-The future of the web is a chaos and confusion of portals. Building a tricked-out site for each of them is an insane idea today and it will be even more insane two years, five years, twenty years from now.
-
-So what do we do? Well, we could wait and see. Keep building sites that target individual devices until we collapse under the weight of the work (or price ourselves out of anything clients would be willing to pay). Or we can take what developers like Brad Frost call a "future-friendly" approach. That is, we can do the best we can with the tools we have available today and make decisions on which tools to use based on which are the most likely to work in the future.
-
-Let's step back in time for a minute to 2007 and consider the developer's dilemma when the iPhone first launched. You need to embed a movie in your page. The do-nothing approach would dictate you just embed a Flash player and call it a day. No Flash? Too bad. Apple will come around because, well, everyone has Flash. Except that we all know how that turned out.
-
-If instead we took a more future-friendly approach we might embed the movie using HTML5's video tag, while offering a Flash fallback for browsers that don't support modern web standards. This would have meant a bit more work at the time since there's a bit more code to write and we would have had to figure out exactly how to do it.
-
-But the easier, "do-nothing" approach would have meant more work down the road when you had to convert your site to HTML5 video anyway, since even Adobe has abandoned the idea of including Flash on mobile devices.
-
-What can we learn from this little example? Well, first and foremost we need to embrace solutions that work today. There's no point in designing *only* for the future. In this case going with only HTML5 video tags would probably have been a bad choice; in 2007 you still needed a Flash-based fallback.
-
-We need to make our sites work well with the web as it is, but we should also keep an ear cocked toward the future and embrace those tools and design patterns that are most likely to work with the devices of the future as well. Brad Frost put it quite well when he said, "We don't know what will be under Christmas trees two years from now, but that's what we need to design for today."
-
-Right now that means using the responsive design tools and best practices that follow to build websites.
-
-Let's start with the three basic tools of responsive design -- fluid layouts, media queries and flexible media.
-
-* *Fluid Layouts*: By defining our content grids in mathematical proportions rather than pixels our content will fit any screen. So instead of having a 750px main column and a 250px sidebar, the columns would be defined as 75% and 25% respectively.
-* *Media Queries*: Media queries are a CSS feature that allow styles to be applied conditionally, based on criteria such as screen width and pixel density. With as little as 3 or 4 lines of code you can resize your entire website and re-flow content to fit different screen sizes.
-* *Flexible Media*: Dimensions of images, video, and animations should be flexible and adapt to suit different screen sizes, similar to how grids should be fluid.
-
-To these three core principles of responsive design I am adding two more -- mobile-first design and progressive enhancement.
-
-* *Mobile-first Design*: Start by making sure your site and its content work on the least capable devices your visitors are using. By all means build as fancy and JavaScripty a site as you want; just do it on basic, solid foundations.
-* *Progressive Enhancement*: Don't *stop* with that most basic version of your site; start there. Then layer in complexity and more advanced features, progressively enhancing the site as the devices become more capable.
-
-That may sound like a lot of stuff to keep track of, but guess how many of the things in this list are actually new?
-
-Just one: @media queries. Everything else is almost as old as the web. That means you don't really have to learn anything new; you just need to shift your approach in some subtle, but profound ways.
-
-Implementing responsive design is simple, but, as they say, the devil is in the details.
-
-That's where this book comes in. You bought this book, which means you're open to new ideas and new workflows. That's good, because I'm going to challenge some long-held assumptions behind many of the sites you've probably built. I'm also going to tell you that, most likely, you've been doing it wrong, as they say. That's okay, I did it wrong for years too. Sure, the sites we built worked; after all, the biggest challenge we had was making things work in IE 6. And work they did, but we were still working from flawed premises and it's time to change that.
-
-It's time to step back from our toolkits and workflows and question everything, because those fixed-width sites designed for desktop screens don't work well on smartphones and probably don't work at all on feature phones. They probably won't look all that great in Google Glass or projected directly onto your retina either. As William Gibson said, the future is already here, it's just unevenly distributed. And that's what we need to develop: websites and web apps capable of handling the uneven distribution of the future.
diff --git a/src/old-no-longer-pub/2014-02-19_complete-guide-picture-element.txt b/src/old-no-longer-pub/2014-02-19_complete-guide-picture-element.txt
deleted file mode 100644
index 8f1c18b..0000000
--- a/src/old-no-longer-pub/2014-02-19_complete-guide-picture-element.txt
+++ /dev/null
@@ -1,235 +0,0 @@
----
-title: A Complete Guide to the `<Picture>` Element
-pub_date: 2014-02-19 09:30:23
-slug: /blog/2014/02/complete-guide-picture-element
-tags: Responsive Images, Responsive Web Design
-metadesc: Everything you ever wanted to know about the proposed <picture> element. Just don't use it quite yet.
-code: True
-tutorial: True
-
----
-
-*If you've ever struggled building responsive websites, this post is for you. It's part of a series on responsive design, in particular responsive images, pulled from my book, [Responsive Web Design](https://longhandpixels.net/books/responsive-web-design). If you find this excerpt useful, and want even more ideas on how responsive design can help you create amazing websites, pick up a copy today.*
-
-[**Last Update: 08/20/2014**]
-
-I also wrote up the back story of the `<picture>` element and all the hard work that made it possible for Ars Technica. If you want to know not just how to use it, but how a small group of people created it, be sure to <a href="http://arstechnica.com/information-technology/2014/09/how-a-new-html-element-will-make-the-web-faster/" rel="me">check that out</a> as well.
-
-[TOC]
-
-Most people who've never heard the phrase before think that "responsive design" refers to building websites that are, well, responsive. That is, fast pages that respond to user input with no lag or discernible load times.
-
-Of course that's not exactly what the phrase "responsive design" refers to in most web development contexts, but I think the web might be better off if it were. I don't think we need to throw out Ethan Marcotte's original definition of responsive design -- fluid grids, flexible images and @media queries -- but perhaps we could add another criterion to our definition: responsive websites should, above all else, be **really, really fast**.
-
-There are many, many ways to speed up websites, responsive or otherwise, but few things will lighten the load like reducing image size. If you've done nothing yet to optimize the front-end portion of your site, images are almost always the best place to start. Even without responsive images, I managed to shave several seconds off this site's load time using some very simple, basic optimizations.
-
-Nothing, however, is going to speed up mobile page load times like responsive images. And the good news is, thanks to a lot of hard work from some dedicated developers, responsive images are here.
-
-## The `<picture>` Element
-
-Following the development of `<picture>` was a bit like listening to [Statler and Waldorf](https://en.wikipedia.org/wiki/Statler_and_Waldorf) in the [balcony of the Muppets' theatre](http://www.youtube.com/watch?v=NpYEJx7PkWE): "I love it!" "It's terrible!" "It's brilliant!" "It's okay" "It could be better" "It's awful!" "I love it!" And so on as developers and browser makers hashed out the details.
-
-In short, it was a soap opera. But in the end sanity prevailed and it got done. We have a draft specification for a new HTML element -- `<picture>`.
-
-As of right now `<picture>` is available in the dev channel of Chrome and in Firefox 34+. In both cases you'll need to enable it. In Firefox, head to `about:config` and search for "dom.image.picture.enabled". In Chrome you'll need to go to [chrome://flags/#enable-experimental-web-platform-features](chrome://flags/#enable-experimental-web-platform-features), enable that feature and restart.
-
-By the end of the year `<picture>` support should be on by default in the stable versions of both Chrome and Firefox. More importantly for those of us taking a mobile-first approach to development, `<picture>` support will be available in the mobile versions as well.
-
-What about other browsers? Opera is based on Blink and will support `<picture>` (hopefully) when Chrome does. Apple's Safari supports the `srcset` portion of `<picture>`, which we'll discuss in a minute. WebKit, which powers Safari, will soon have support for the rest of `<picture>`, but Apple won't likely ship it in Safari until the next major update. According to Microsoft's new [Status.Modern.IE](http://status.modern.ie/pictureelement) site, `<picture>` support is "under consideration" for a future release.
-
-Fortunately for us, browsers that don't understand `<picture>` have a fallback -- the good old `<img>` element.
-
-That means there's nothing to stop you from using `<picture>` right now. If you need a solution that works everywhere today, there's Picturefill, a JavaScript-based polyfill -- though requiring JavaScript may not be right for every project. On the plus side, Picturefill only kicks in when the browser doesn't have native support. Personally, I'm going ahead with straight `<picture>` for most clients.
-
-## Digging Into Picture
-
-The `<picture>` element looks a bit like the HTML5 `<audio>` and `<video>` tags. The actual `<picture>` tag acts as a container for `<source>` elements, which then point to the actual images you want to load.
-
-The big difference is that `<picture>` doesn't actually load your image. For that you need an `<img>` tag. So the browser evaluates all the various attributes you've specified in your `<picture>` block, picks the best image and then loads the image into the `<img>` inside your `<picture>` tag.
-
-In other words, `<picture>` doesn't actually display your image, it just tells the browser which image to display. Think of it as a way to filter possibilities for the `img` tag inside it.
-
-To help the browser pick the best image you have three major components to work with, all attributes of `<source>` elements within the `<picture>` tag -- except for `srcset`, which can be an attribute of `<img>` as well.
-
-The three attributes are:
-
-* `srcset`: Yes, the same `srcset` that was [originally proposed for the `<img>` tag](https://longhandpixels.net/blog/2013/09/responsive-images-srcset). The `srcset` attribute gives the browser a list of possible images, along with some (optional) "hints" about the screen resolution and screen size that correspond with each image source.
-
-* `media`: The `media` attribute is where you would put your `@media` query information. When the media query evaluates to true, the browser moves on to the associated `srcset`.
-
-* `sizes`: The `sizes` attribute allows you to specify a set of intrinsic sizes for the images described in the `srcset` attribute. This one is a little tricky at first, but basically it allows you to tell the browser how much of the viewport the image should take up. This will become clearer in the example below.
-{^ .list--indented }
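-
-Purely as a sketch of the syntax (the file names here are made up; real uses are covered below), a single `<source>` can carry any combination of the three attributes:
-
-~~~{.language-markup}
-<picture>
- <source media="(min-width: 45em)" sizes="50%" srcset="photo.jpg 640w, photo-big.jpg 1280w">
- <img src="photo.jpg" alt="example photo">
-</picture>
-~~~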
-
-To get a better understanding of how each attribute works, let's dive into some code.
-
-## Using the `<picture>` Element for High Resolution Images
-
-Let's start with the first use case in the [responsive images use case list](http://usecases.responsiveimages.org): "[resolution-based selection](http://usecases.responsiveimages.org/#resolution-based-selection)". Essentially we want to serve high-resolution images to high-res devices while allowing low-res devices to avoid the bandwidth penalty of overly-large files.
-
-Here's how you would use `<picture>` to give Hi-DPI screens high-res images and regular screens regular images.
-
-Let's say we're trying to build a more responsive version of my [Responsive Web Design book page](/books/responsive-web-design). We have two book cover images -- `cover1x.jpg`, which is a normal resolution image, and `cover2x.jpg`, which is the same image at a much higher resolution.
-
-Let's go ahead and make things [future-friendly](http://futurefriendlyweb.com) by adding a third image, `cover4x.jpg`, to handle those 4K+ monitors that are just a few years away from being on every desktop. So with three images at three resolutions our `<picture>` code would look like this:
-
-~~~{.language-markup}
-<picture>
- <source srcset="cover1x.jpg 1x, cover2x.jpg 2x, cover4x.jpg 4x">
- <img src="cover1x.jpg" alt="Responsive Web Design cover">
-</picture>
-~~~
-
-Here we have a simple `<picture>` tag with one `<source>` tag and an `<img>` tag, which doubles as a fallback for older browsers. Within the `<source>` tag we've used the `srcset` attribute to tell the browser (or "user-agent" in spec-speak): if the screen pixel density is 1x, load `cover1x.jpg`; if the screen density is 2x, load the higher-resolution `cover2x.jpg`; and finally, if the screen density is 4x, grab `cover4x.jpg`.
-
-What happens if the resolution is somewhere in between these values? Well, you could add in other resolutions (e.g. 1.3x, 1.6x and so on) and URLs if you want to be explicit. Remember though that which image to choose is entirely up to the browser. The `srcset` values we've given are described in the spec as "hints". It may be that, despite having a high-res screen, the user has explicitly instructed the browser (through a preference setting) not to download large images over 3G.
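-
-For example, if you wanted to be explicit about an intermediate density, you could simply extend the `srcset` list (the 1.5x file name here is hypothetical):
-
-~~~{.language-markup}
-<picture>
- <source srcset="cover1x.jpg 1x, cover15x.jpg 1.5x, cover2x.jpg 2x, cover4x.jpg 4x">
- <img src="cover1x.jpg" alt="Responsive Web Design cover">
-</picture>
-~~~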
-
-Screen resolution is after all just one factor in deciding on the appropriate image to download. As developers we don't (and never will) have all the information that the browser does, which is why the final decision lies with the browser. As I've said before, this is a good thing; this is exactly the way it should be.
-
-Here's our resolution-based query in action. Provided you've got a high-res display and are running a browser with `<picture>` support, this should display the high-res image:
-
-<picture>
- <source srcset="/media/images/demos/srcsetdemo-2x.jpg 2x" />
- <img src="/media/images/demos/srcsetdemo-fallback.jpg" alt="demo of srcset in action" />
-</picture>
-
-That's how you would handle the simple resolution-based selection scenario. Before we move on though, let's look at another value you can add to `srcset` declarations: width.
-
-Consider this scenario: we have roughly the same situation, but we'll limit it to two images this time, one high-res, one not. This time, though, we don't know how wide the image is going to be on the user's screen. Say our normal-res image is 640px wide. On a high-res screen that happens to be only 320 effective pixels wide, a 640px image would actually qualify as a high-res image. The situation is more nuanced than a simple 1x vs 2x screen: always sending the larger image to 2x screens might still waste bandwidth because we're not accounting for the size of the screen/image.
-
-Here's how you can handle this scenario with `<picture>`. Let's stick with the same assumptions as in the last scenario, but be a little more specific this time: `cover1x.jpg` is 640px wide and `cover2x.jpg` is 1280px wide. Here's what the code would look like:
-
-~~~{.language-markup}
-<picture>
- <source sizes="100%" srcset="cover1x.jpg 640w, cover2x.jpg 1280w">
- <img src="cover1x.jpg" alt="Responsive Web Design cover">
-</picture>
-~~~
-
-Now our `srcset` values are based on width and the browser gets to select the best image based on another `<source>` attribute, `sizes`. In this case we've told the browser that the final image selected will be as wide as the entire viewport. Later we'll see how you can use this with other values.
-
-The final result will be as wide as the viewport, so if the user is on a device that is effectively 320px wide, but at 2x density, the browser would, barring other conflicting info like user settings, pick `cover1x.jpg` (its 640 pixels of width are exactly enough for a 320px viewport at 2x). If the user's viewport happened to be 640px wide, but the density was only 1x, `cover1x.jpg` would again be used. On the other hand, if the viewport happened to be 640px wide and the density was 2x, `cover2x.jpg` would be used.
-
-## Different Image Sizes Based on Viewport Width
-
-When you think of responsive images, this is probably the use case you think of -- serving smaller images to smaller screens, larger ones to larger screens. Later we'll see how you can combine this with the pixel density stuff above for even more control.
-
-First, here's how `<picture>` can be used to serve up different images based on viewport width.
-
-For the following examples, let's say we have three images, `small.jpg`, `medium.jpg` and `large.jpg` and we want to serve them to the corresponding viewport sizes. Let's make one more assumption: that we're taking a mobile-first approach and our fallback will also be the smallest image.
-
-Here's what that code would look like:
-
-~~~{.language-markup}
-<picture>
- <source media="(min-width: 45em)" srcset="large.jpg">
- <source media="(min-width: 18em)" srcset="medium.jpg">
- <img src="small.jpg" alt="Robert Anton Wilson laughing">
-</picture>
-~~~
-
-This time we've used the `media` attribute to write a couple of queries that work just like CSS `@media` queries. Our mobile-first approach here means any viewport larger than 45em gets `large.jpg`, any viewport between 18em and 45em gets `medium.jpg` and anything smaller than 18em gets `small.jpg`.
-
-Notice that here our smallest image is in the `<img>` tag, not a `<source>` tag. While we could add a third `<source>` tag with a `srcset` pointing to `small.jpg`, there's no need to do that since, as I mentioned earlier, `<picture>` and `<source>` are not the tags that actually load images. The `<picture>` element must contain an `<img>` element for the browser to actually display your image. Browsers that understand `<picture>` will first parse through all your rules, pick an image and then swap that image into the `src` attribute on the `<img>` tag.
-
-In this example not only is the `<img>` tag a fallback for older browsers, its `src` value also becomes the image used by `<picture>` savvy browsers if neither media query evaluates to true.
-
-Here's the above example in action (wrapped in a `<figure>` tag):
-
-<figure>
-<picture>
- <source media="(min-width: 45em)" srcset="/media/images/2014/wilson-large.jpg">
- <source media="(min-width: 28em)" srcset="/media/images/2014/wilson-medium.jpg">
- <img src="/media/images/2014/wilson-small.jpg" alt="Robert Anton Wilson laughing">
-</picture>
-<figcaption>Robert Anton Wilson. Image from Wikicommons</figcaption>
-</figure>
-
-## Different Image Size and Resolution Based on Viewport Width
-
-Now let's combine both of the previous examples and use `<picture>` to serve up different size and resolution images based on viewport width and device pixel density. To do that we'll need six images -- `small.jpg`, `small-hd.jpg`, `medium.jpg`, `medium-hd.jpg`, `large.jpg` and `large-hd.jpg` (side note: in the future you'll want a CMS that's good at generating tons of image options from the one you actually upload. Otherwise, plan on going insane while resizing images in Photoshop).
-
-Okay, let's put all those images into a `<picture>` tag:
-
-~~~{.language-markup}
-<picture>
- <source media="(min-width: 45em)" srcset="large.jpg, large-hd.jpg 2x">
- <source media="(min-width: 18em)" srcset="medium.jpg, medium-hd.jpg 2x">
- <source srcset="small.jpg, small-hd.jpg 2x">
- <img src="small.jpg" alt="Robert Anton Wilson laughing" >
-</picture>
-~~~
-
-This looks just like the previous example except that now our `srcset` includes a second image and the 2x value to indicate that our `-hd.jpg` images are for high resolution screens.
-
-Also note that this time we did use a third `<source>` tag, since small screen devices may still be high resolution. That is, while we don't need a `media` attribute for the smallest screens, we do want to check their resolution, which is what that bare `<source>` with its `srcset` handles.
-
-## Solving the Art Direction Conundrum
-
-Here's a common responsive design problem: you have an image that, at full size on large screens, easily conveys its information. However, when that image is scaled down to fit on a small screen it becomes difficult to understand. For example, consider an image of the President shaking hands with Robert Anton Wilson. At full size the image might show both men and some background, but when shrunk down you would barely be able to make out that it's two men shaking hands, let alone have any clue who the men might be.
-
-In situations like this it makes sense to crop the image rather than just scaling it down. In the example above that might mean cropping the image to be just the President and Robert Anton Wilson's heads. You no longer know they're shaking hands, but most of the time it's more important to know who they are than what they're doing.
-
-Frankly, handling this scenario is really more a problem for your CMS than the `<picture>` element. But assuming you have a way to generate the cropped image (or images if you're doing both normal and high-res) then the code would look something like this:
-
-~~~{.language-markup}
-<picture>
- <source media="(min-width: 45em)" srcset="original.jpg, original-hd.jpg 2x">
- <source media="(min-width: 18em)" srcset="cropped-medium.jpg, cropped-medium-hd.jpg 2x">
- <source srcset="cropped-small.jpg, cropped-small-hd.jpg 2x">
- <img src="cropped-small.jpg" alt="The President shaking hands with Robert Anton Wilson" >
-</picture>
-~~~
-
-Here we're assuming there are two crops that make sense for the viewports they're targeting. In this case that means a crop that fits viewports between 18em and 45em and another (presumably tighter) crop for smaller screens. We're also assuming we have both regular and high-resolution versions of the image.
-
-See what I mean about having a CMS that makes it really easy to generate a ton of different images from a single source? Having to do something like this by hand would suck for even the smallest of blogs.
-
-There are other possible scenarios that fit the art direction problem, for example, providing a black and white version of a color pie chart for monochrome screens.
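-
-As a rough sketch of that last idea (the file names here are hypothetical), you would pair a `(monochrome)` media query with a black and white source:
-
-~~~{.language-markup}
-<picture>
- <source media="(monochrome)" srcset="chart-bw.png">
- <img src="chart-color.png" alt="pie chart of browser market share">
-</picture>
-~~~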
-
-## Handling More Complex Scenarios
-
-So far we've looked at pretty easy-to-grok scenarios using `<picture>`, but the new element addresses some more complex situations as well. For example, we might have a responsive layout where images morph depending on viewport width (and thus there may not always be a one-to-one correlation between viewport width and image size).
-
-The `<picture>` element can handle this scenario as well, but this is where the syntax starts to get complicated (as things tend to do when you want them to be very flexible).
-
-Imagine you have a storefront with three breakpoints, one for phone-ish devices, another for tablet-ish and a desktop layout. You build the site using a mobile-first approach, so you start with a single-column layout with images that span the full width of the viewport. At the first breakpoint the images switch to a two-column layout and may be a bit smaller than the full-width, single-column images just before the breakpoint (even though the viewport is larger now). Finally, on the larger layout the images move to a three-column grid and start off at the same size as the two-column layout but then scale up to be as large or larger than the images in the single-column layout.
-
-[](https://longhandpixels.net/media/images/2014/picture_element_illustration.png "View Image 1")
-: The very common image grid scenario. In this example we're using a single column (100% width) on small screens, two columns (50% width) on medium screens and three columns (roughly 33%, but with some additional padding) on large screens.
-
-So what do we do with this scenario? Again, the first thing you'll need is a CMS that generates, let's say, six images to fit this scenario. Assuming the images are in place, the code is actually not that bad, albeit a little verbose. Here's some example code pulled directly from the responsive images spec:
-
-~~~{.language-markup}
-<picture>
-<source sizes="(max-width: 30em) 100%, (max-width: 50em) 50%, calc(33% - 100px)"
- srcset="pic100.jpg 100w, pic200.jpg 200w, pic400.jpg 400w,
- pic800.jpg 800w, pic1600.jpg 1600w, pic3200.jpg 3200w">
-<img src="pic400.jpg" alt="Robert Anton Wilson laughing">
-</picture>
-~~~
-
-Believe it or not, this is actually the terse way to write this out. You *could* write this out as six different `<source>` elements each with the entire `srcset` above, though I have no idea why you would want to do that.
-
-Let's step through the code line by line. The first thing we do is set up a series of breakpoints along with the size of the image relative to the viewport width at each of those breakpoints. So `(max-width: 30em) 100%` covers our smaller screen where the layout is single column and the image is full width. Then `(max-width: 50em) 50%` covers our medium layout which happens between 30em and 50em, where images are now 50% the width of the viewport.
-
-For the last argument in `sizes` things are a little trickier. There's no max-width; this one just applies to everything over 50em. It uses `calc()` to say images are 1/3 the viewport width, minus 100px to account for padding. You may have heard that you should avoid using `calc()` in CSS since it tends to slow things down. Is the same thing true here? I actually don't know; if you do, chime in in the comments.
-
-Once we have the image-to-viewport ratio set up for each of our layout possibilities, we use `srcset` to point the browser to our six image sizes, adding a width specification to help the browser pick the best one. In the end the browser will pick the optimal image based on the current image-to-viewport ratio, viewport size and screen density.
-
-Complicated though this may be, it's actually pretty awesome. You've got the ability to serve the right image to the right screen based on the actual size the image will be on the screen. That's far more effective and powerful than just saying send a small image to a small screen and a big one to a big screen.
-
-## Further Reading
-
-The `<picture>` element is awesome, but sprawling in scope. It's probably the single most potentially confusing element in HTML, but fortunately the basic use cases are simple.
-
-Still, it never hurts to have more info. With that in mind, here's a list of tutorials and write-ups that you should check out as well.
-
-* [Native Responsive Images](https://dev.opera.com/articles/native-responsive-images/) -- Yoav Weiss writing for the Opera Dev center on how to use the `<picture>` element. Weiss wrote the code for Blink/WebKit's `<picture>` support; very few people understand `<picture>` as well as he does.
-
-* [The official Responsive Images Community Group website](http://responsiveimages.org/) -- Lots of good stuff here, including some [demos](http://responsiveimages.org/demos/).
-
-* [Responsive Images Use Cases](http://usecases.responsiveimages.org/) -- This covers the use cases and rationale behind the `<picture>` element.
-
-* [The picture Element](http://www.w3.org/html/wg/drafts/html/master/embedded-content.html#the-picture-element) -- the HTML spec document.
-{^ .list--indented }
-
diff --git a/src/old-no-longer-pub/2014-02-20_live-editing-sass-firefox-vim-keybindings.txt b/src/old-no-longer-pub/2014-02-20_live-editing-sass-firefox-vim-keybindings.txt deleted file mode 100644 index a133705..0000000 --- a/src/old-no-longer-pub/2014-02-20_live-editing-sass-firefox-vim-keybindings.txt +++ /dev/null @@ -1,118 +0,0 @@ ----
-title: Live Editing Sass in Firefox with Vim Keybindings
-pub_date: 2014-02-20 10:08:45
-slug: /blog/2014/02/live-editing-sass-firefox-vim-keybindings
-tags: Building Smarter Workflows
-metadesc: The Firefox developer tools now support live editing Sass files right in the browser and Vim keyboard shortcuts
-code: True
-tutorial: True
-
----
-
-[TOC]
-
-The Firefox developer tools now support [live editing Sass files right in the browser](https://hacks.mozilla.org/2014/02/live-editing-sass-and-less-in-the-firefox-developer-tools/).
-
-This is, like a lot of what Mozilla has been doing lately, a case of Firefox playing catch-up with the competition -- Chrome has had similar features for quite some time.
-
-On the other hand, Firefox is ahead of Chrome in another area: [Vim and Emacs keybindings](https://developer.mozilla.org/en-US/docs/Tools/Using_the_Source_Editor#Alternative_key_mappings) (which I believe is because the editor in Firefox is based on [CodeMirror](http://codemirror.net/)).
-
-If that means nothing to you then stick with Chrome. If, however, you're loath to abandon the power of Vim or Emacs for editing files in the browser, this means you can have the best of both worlds -- live editing Sass files in the browser *and* Vim or Emacs keybindings.
-
-Because live editing [Sass files](http://sass-lang.com/) in the browser with Vim keybindings? That's some awesome sauce right there.
-
-If you prefer, here's a screencast walking you through the process. Otherwise, read on.
-
-<div class="embed-container-960">
-<div class='embed-container'>
- <iframe src='https://player.vimeo.com/video/87289985' frameborder='0' webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe>
-</div>
-</div>
-
-## Set Up Sass Editing in Firefox
-
-The Mozilla Hacks blog posted a [quick overview](https://hacks.mozilla.org/2014/02/live-editing-sass-and-less-in-the-firefox-developer-tools/) on how to set things up, but it assumes you've already got Sass set up to compile sourcemaps. In case you don't, here are more complete instructions on how to set up Sass and Firefox[^1].
-
-### Firefox
-
-First off you need a pre-release version of Firefox. Both the Sass sourcemaps support and the Vim/Emacs keybindings are available starting with Firefox 29. I use the [Nightly channel](http://nightly.mozilla.org/) (currently Firefox 30a1), but the [Aurora channel](http://www.mozilla.org/en-US/firefox/aurora/) (currently Firefox 29a2) will work as well.
-
-### Sass and Compass
-
-The next step is to set up Sass such that it will output a sourcemap file in addition to your CSS.
-
-What in the world is a sourcemap? Simply put, sourcemaps are a way to map compiled code back to its pre-compiled source. The reason any of this is possible at all is because Sass recently added support for CSS sourcemaps. That means that when it turns your Sass code into CSS, Sass also outputs a map of what it did. Firefox can then look at the map and connect the rendered CSS back to your source Sass. The sourcemap support is brand new and currently only found in the pre-release versions of Sass.
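-
-If you're curious, a sourcemap is just a small JSON file that sits alongside your CSS. A simplified, hand-written illustration (not real Sass output; in practice the `mappings` value is a long encoded blob) looks something like this:
-
-~~~.language-javascript
-{
-  "version": 3,
-  "file": "screen.css",
-  "sources": ["screen.scss"],
-  "names": [],
-  "mappings": "AAAA;EACE..."
-}
-~~~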
-
-I happen to like the Sass offshoot [Compass](http://compass-style.org/) better than vanilla Sass as Compass provides some very handy extras like CSS 3 prefixing tools. As with Sass, only the pre-release versions of Compass support sourcemaps (and even those are [not quite there](https://github.com/chriseppstein/compass/issues/1108)).
-
-Fortunately we can get the pre-release versions of both Sass and Compass with a single command.
-
-~~~.language-bash
-$ sudo gem install compass --pre
-~~~
-
-This command tells the Ruby gem system that we want the pre-release version of Compass (that's what the `--pre` flag is for). In the process we'll also get the latest version of Sass that works with Compass.
-
-You're probably used to starting Compass with something like `compass watch`. Eventually you'll be able to do that, but for now the only way I've been able to get Sass, Compass and sourcemaps working together is by invoking `sass` directly like so:
-
-~~~.language-bash
-sass --compass --poll --sourcemap --watch sass/screen.scss:screen.css
-~~~
-
-To break that down:
-
-1. `sass --compass` -- this bit starts Sass and includes Compass so that we have access to all our extras.
-2. `--poll` -- this shouldn't be necessary, but it currently gets around a super annoying permissions bug. Alternately you can start `sass` with `sudo`.
-3. `--watch` -- this tells Sass to watch for changes rather than just compiling once.
-4. `--sourcemap sass/screen.scss:screen.css` -- This is the important part. The `--sourcemap` flag tells Sass we want a sourcemap, and `sass/screen.scss:screen.css` defines the mapping: the file `screen.scss` inside the `sass` folder compiles to the `screen.css` output file. This is what Firefox will use to map CSS rules to Sass rules.
-
-Sass is now running, watching our files for changes. Now we just need a local server of some kind. I typically use [my python server trick](https://longhandpixels.net/blog/2013/11/easiest-way-get-started-designing-browser) for quick prototyping, but anything will work -- local Apache, Nginx, the Django development server, whatever development server RoR offers.
-
-### Putting it All Together
-
-Now let's go back to Firefox and let it know about our sourcemap.
-
-Here's where things are significantly different than Chrome. I'm not sure which is "better" but if you're used to the Chrome approach, the Firefox approach may seem strange.
-
-{: class="img-right"}First load your localhost URL. Then open the developer tools and inspect an element you want to change. All you need to do is right-click in the CSS section of the Inspector tab and choose "Show original sources". Then click the little link next to the style rules and Firefox will open your Sass file in the Style Editor.
-
-Now just hit save, either via the link next to the file list or with CMD-S (CTRL-S). The first time you save a file you have to tell Firefox where the file is on disk -- navigate to the folder with your Sass files and hit save, overwriting the original. You'll need to do this once for each Sass partial you have, which is annoying if you've got a lot. I happen to prefer the Chrome method, which maps a local folder to a local URL. It's a bit more work to set up, but you only have to do it once.
-
-Either way, though, that's it -- you're done. Edit Sass live in the browser and see your changes update almost instantly with no refresh.
-
-Here's what it looks like:
-
-[](https://longhandpixels.net/media/images/2014/smaller-ff-tools.gif "View Image 1")
-: Just right-click to change the view from compiled CSS to SCSS. Pretty cool. Note that I've already saved the file once so Firefox knows where it is.
-
-
-## Vim or Emacs Keybindings
-
-I know, I know, you were only in it for the keybindings. How the heck do you get those?
-
-Pretty simple. Open `about:config` and search for `devtools.editor.keymap`. Right-click the "value" field and enter either "vim" or "emacs". I had to restart Firefox for the changes to take effect.
-
-Now you have a way to edit Sass right in the browser and still get the benefit of all the keyboard shortcuts you've committed to muscle memory over the years.
-
-There's one annoying bug (at least I think it's a bug) for Vim users, `:w` (the Vim save command) does not work like CMD-S (CTRL-S); it will always open the file save dialog box rather than just writing to disk. It's annoying, but I haven't found a workaround yet.
-
-## Shortcomings Compared to Chrome
-
-While Firefox's combo of live editing Sass and Vim keybindings is awesome, there are a couple of things that I think could be improved.
-
-In Chrome if you CMD-click (or CTRL-click) on an individual CSS rule in the styles tab Chrome jumps you to the relevant file and moves your cursor right to that rule. It even highlights the line in yellow for a second or two so you know where the cursor is in the file. It's very slick and very useful. CMD-click a rule in Firefox and nothing special will happen. Bummer.
-
-The other thing that's troubling me with Firefox is the need to "Save As" the first time you edit a Sass file. It feels janky to me and frankly it's a pain when your project has dozens and dozens of Sass partials. I much prefer Chrome's (admittedly perhaps more confusing at first) approach of associating a folder with a URL.
-
-Still, the Vim keybindings make me more productive than I can be in Chrome without them, so I'm back to Firefox.
-
-## Further Reading
-
-* Mozilla Hacks Blog: [Live Editing Sass and Less in the Firefox Developer Tools](https://hacks.mozilla.org/2014/02/live-editing-sass-and-less-in-the-firefox-developer-tools/)
-* Tutsplus: [Developing With Sass and Chrome DevTools](http://code.tutsplus.com/tutorials/developing-with-sass-and-chrome-devtools--net-32805)
-* Ben Frain: [Faster Sass debugging and style iteration with source maps, Chrome Web Developer Tools and Grunt](http://benfrain.com/add-sass-compass-debug-info-for-chrome-web-developer-tools/)
-* Google Developer Docs: [Working with CSS Preprocessors](https://developers.google.com/chrome-developer-tools/docs/css-preprocessors)
-* HTML5Rocks: [Sass/CSS Source Map debugging](http://www.html5rocks.com/en/tutorials/developertools/revolutions2013/#toc-key-performance)
-{^ .list--indented }
-
-[^1]: At least, here's how you make it happen for Sass; I know nothing about Less, much less Less with Grunt. If you prefer Less, head over to the [Mozilla Hacks blog post](https://hacks.mozilla.org/2014/02/live-editing-sass-and-less-in-the-firefox-developer-tools/) and check out their instructions.
diff --git a/src/old-no-longer-pub/2014-03-04_how-to-build-responsive-websites-like-bruce-lee.txt b/src/old-no-longer-pub/2014-03-04_how-to-build-responsive-websites-like-bruce-lee.txt deleted file mode 100644 index c6e9036..0000000 --- a/src/old-no-longer-pub/2014-03-04_how-to-build-responsive-websites-like-bruce-lee.txt +++ /dev/null @@ -1,54 +0,0 @@ ---- -title: Why Responsive Design, or How to Build Websites Like Bruce Lee -pub_date: 2014-03-04 12:04:25 -slug: /blog/2014/03/how-to-build-responsive-websites-like-bruce-lee -metadesc: Embracing the fluid nature of responsive web design means you can start building websites like Bruce Lee. -tags: Responsive Web Design - ---- - -[*Note: This is an excerpt from my book, [Build a Better Web With Responsive Web Design](https://longhandpixels.net/books/responsive-web-design). If you like what you read here, pick up a copy of the book and be sure to sign up for the newsletter over there in the sidebar.*] - -Spend enough time emersed in the world of web development and you may notice that the phrase ***responsive web design*** has taken on an almost religious status. - -I grew believing that you should question everything, especially your unchallenged premises. So, why is responsive design a good idea? - -There are many reasons, but to me the most compelling is also the simplest -- the web already is responsive. - -Every time you build a website that is not fluid by nature, with content that flows gracefully onto any screen, you are fighting the essential nature of the web. That fighting is going to make your life as a developer more difficult, as anyone who's ever tried to achieve pixel-perfect precision across browsers can attest. - -The web is naturally fluid. To show you what I mean, let's play with an example page. - -Head to the `sample_files` folder that came with this book and open up the folder `responsive basics`. Now open the file basic.html in your favorite web browser. Pretty boring right? 
Just some black text on a white background. The text isn't even centered or constrained at all, just long lines stretching across the screen. But let's play with this most basic of web pages for a minute. Grab the right edge of your browser window and resize it, making it narrower. What happens? The text re-wraps and re-flows to fit the new window size. - -Hmm, so the page responded to our input (resizing the screen). That's responsive design in a nutshell. And yes, the web is responsive right out of the box. Now no one but web developers ever drags their window around to watch how designs reflow, so don't get hung up on that. The point isn't that text re-flows as you drag, but that the text can flow onto any screen. - -If you've ever built a fixed-width website (960px wide by chance?) you've broken one of the web's greatest features -- it's naturally fluid. Web browsers have always enabled nearly everything now labeled "responsive design" right out of the box, including the most fundamental element you're looking at here -- fluid layouts. - -Okay, sweet, done. - -Well, there is a little more to it, but this really is the core and I find it helpful to frame the question "why responsive design?" this way because it gets to the basic truth of the web -- everything is flexible, everything is fluid. When we embrace the web's inherent fluidity we're actually freeing ourselves from our own constraints. I find that a very liberating feeling, one that has implications well beyond just the web. - -Kung Fu legend Bruce Lee's teacher once said to him. "Preserve yourself by following the natural bends of things and don't interfere." Lee meditated on this idea and eventually came up with his now famous quote that he must have "a mind like water". - ->"Don't get set into one form, adapt it and build your own, and let it grow, be like water. Empty your mind, be formless, shapeless -- like water. 
Now you put water in a cup, it becomes the cup; You put water into a bottle it becomes the bottle; You put it in a teapot it becomes the teapot." -- Bruce Lee - -Lee's metaphor works for nearly anything, but it's especially apt for anyone building responsive websites. It perfectly captures what we're striving for -- content and design that flow like water from one screen to the next -- as well as the approach we need to take -- keeping an open mind, letting go of our preconceptions and learning to embrace the flow of the web. - -If all that sounds a little hokey to you, consider another thought, equally rooted in Asian philosophy, but a little more concrete, John Allsopp's *[A Dao of Web Design](http://alistapart.com/article/dao)*. - -Allsopp's famous article is perhaps the best thing ever written about web development. What's perhaps most astounding about *A Dao of Web Design* is that it was written not last year, not even right after the iPhone was released and changed the mobile device landscape forever, but way back in 2000 when the web development community was still trying to sort out how to separate form and function with cascading stylesheets. - -Before iPads, before Google Glasses, before there even was a mobile web, Allsopp hit the nail on the head: "It is the nature of the web to be flexible, and it should be our role as designers and developers to embrace this flexibility, and produce pages which, by being flexible, are accessible to all." - -I highly recommend you read [the entire article](http://alistapart.com/article/dao). Go ahead, I'll wait. - -Sure, some of Allsopp's examples are a bit outdated, like his argument against the font tag. No developer worth their salt would even think to use a font tag these days. In fact, that's one of the upsides of the web today -- much of Allsopp's advice in the article has since become part of most web developer's best practices, part of the standard web developer toolkit. 
For example, avoiding HTML for presentation and using relative font sizing (percentages or ems) to ensure pages scale properly.

But sadly most of us did not heed Allsopp's bigger-picture advice to embrace the inherently flexible nature of the web. Here's Allsopp's original suggestion for working with the flexibility of the web rather than fighting it:

>Make pages which are accessible, regardless of the browser, platform or screen that your reader chooses or must use to access your pages. This means pages which are legible regardless of screen resolution or size, or number of colors (and remember too that pages may be printed, or read aloud by reading software, or read using braille browsers). This means pages which adapt to the needs of a reader, whose eyesight is less than perfect, and who wishes to read pages with a very large font size.

Unfortunately Allsopp's advice was largely ignored, even by some of the best-known developers on the web. We at Webmonkey did not start embracing responsive development best practices until the second round of interest, when, like everyone else, the iPhone and other mobile devices forced us to rethink our designs.

That's okay though; it's never too late to embrace something new.

diff --git a/src/old-no-longer-pub/2014-03-12_zen-art-responsive-workflow.txt b/src/old-no-longer-pub/2014-03-12_zen-art-responsive-workflow.txt
deleted file mode 100644
index f9e023f..0000000
--- a/src/old-no-longer-pub/2014-03-12_zen-art-responsive-workflow.txt
+++ /dev/null
@@ -1,86 +0,0 @@
---
title: Zen and the Art of the Responsive Web Design Workflow
pub_date: 2014-03-12 10:08:57
slug: /blog/2014/03/zen-art-responsive-design-workflow
metadesc: Embracing the fluid nature of responsive web design means you can start building websites like Bruce Lee.
tags: Responsive Web Design
tutorials: True

---

[*Note: This is an excerpt from my book, [Build a Better Web With Responsive Web Design](https://longhandpixels.net/books/responsive-web-design). If you like what you read here, pick up a copy of the book and be sure to sign up for the newsletter over there in the sidebar.*]

[TOC]

If your current workflow is anything like mine and that of other designers I have worked with, it probably goes something like this: wireframe content -> sketch designs -> convert those to Photoshop comps -> pass to client for approval -> convert to HTML/CSS/JavaScript. I've left out a few steps, but the basic idea is pretty simple: Photoshop is the first place you go after you have the skeleton of the design on paper.

Perhaps not coincidentally, that's almost exactly the workflow (albeit a greatly simplified version) of Wired magazine's print edition. It's the workflow that web designers inherited from print and one that's still taught in all but the most forward-thinking web design programs. It's taught because for a long time it worked. Sure, people might resize their browser or text, but for the most part you could mock up something in Photoshop and reliably convert it to HTML and CSS with maybe a bit of JavaScript thrown in.

To embrace a responsive approach and stick with this workflow means you're going to have a very hard time not just thinking responsively but also mocking things up. How will you know where your breakpoints need to be without putting your designs in a live browser and resizing them? Once you have those breakpoints you're going to have to go back to Photoshop and create what, 4 mockups? 6? 10? That's a lot of extra work.

[](/media/images/2014/workflows-the-hard-way.jpg "View Image 1")
: Mockups the hard way. Why create tons of different size and state mockups in Photoshop when you can actually build it, live, fully-functional in the browser?
Just as we've shrugged off so much of the print design legacy in other ways -- flexible canvases, fluid designs and flowing text -- it's time to shrug off the print design workflow. It's time to put down the Photoshop, step away from the pixels and rethink our approach from the ground up.

That doesn't mean we shouldn't learn from print. A lot of web developers talk about starting from the content and working your way out, which is exactly what print designers do as well. The content should always be the starting point, but the way we think about the content and the way we figure out how it will behave and appear to the user needs to change.

## Working with the Responsive Flow

Quite frankly, coming up with a responsive workflow that really works is going to take some experimentation on your part. Whether you're part of a small team, a large team or working on your own, you have your idiosyncrasies and tics to consider. A good workflow will work with, not against, your unique style.

While you should embrace your idiosyncrasies and tics, there are some general guidelines that have emerged from those of us who have worked on responsive sites big and small. I'm going to offer some methods and tools in the hope that they prove beneficial to you and your team.

The biggest change I suggest making when -- or even before -- you attempt your first responsive design project is to get your designers and developers working together. Too often developers are brought in much later in the process, after the design work is already done. That might work when your designs have only one, set-in-stone width, but for responsive design you'll likely benefit from bringing in developers right at the outset.

When your design and development teams work together you can start to get the kind of feedback loop you need for effective responsive workflows. Collaboration means there's immediate feedback for both teams.
Developers can point out scenarios where designs fail and designers can go back and handle those. You decrease the likelihood of anything falling through the cracks. For example, what happens when the user denies access to geo-location data? What happens when the JavaScript fails and the ads don't load? What happens when the third-party fonts fail?

Developers tend to be better at imagining all the ways in which a design can fail. Let them in early so they can better inform the process. They can point out problems early, before they require a massive effort to solve.

The other side of the coin comes later, when you're building out the actual site. Keep designers involved and they can help sort out unforeseen problems. Developers might not know the why behind a decision, but if your designers are working alongside them they can help guide the development process by offering insight into decisions and helping to ensure the original, client-approved vision is maintained.

This sort of workflow necessarily requires good communication skills, but more than anything it requires rethinking your approach. Gone are the days when designers handed their work to developers and moved on. Gone too are the days when developers could just sit back and wait for finalized mockups to convert to HTML and CSS. If such days ever existed at all, they are certainly behind us now.

If your design and development teams aren't working together, get them working together posthaste.

What does this look like on a practical level?

Well, as I've written elsewhere, [start with content](https://longhandpixels.net/blog/2014/02/work-smarter-plain-text-workflow#everything-starts-with-the-content) when you can.

Once you have the content you can move on to sketching and wireframing -- figuring out the structure of the page and overall site.
I tend to do this stage mostly on paper because I find it the fastest way to get my ideas recorded, but I've seen other developers who use iPads with styluses, mind-mapping tools, even one who used spreadsheets to create wireframes (yeah, he was a little strange).

If you haven't before, embrace what designer Jason Santa Maria once termed the "[gray box method](http://v3.jasonsantamaria.com/archive/2004/05/24/grey_box_method.php)". That is, wireframe the basic elements of your site as a series of gray boxes you can then shuffle and rearrange as you figure out which bits of content need to go where on various layouts and screen sizes. You can even take this literally, cutting out gray shapes and taping them to something like a whiteboard in various configurations. The point of using gray is to stop, in Santa Maria's words, "worrying about color and imagery choices and allow myself to focus on the site’s structure and hierarchy." This is particularly helpful in responsive design because it forces you to consider from the outset how you'll need to reshuffle content as the screen size changes.

Along with wireframes, I sketch user flows and step-by-step scenarios based on how the client imagines their visitors interacting with the site. This helps with design obviously, but it also helps tremendously when you get to the testing stage by providing an ideal against which to measure how visitors *actually* interact with the site. The gap between those two things becomes the basis of the next iteration.

## Designing in the Browser

Once I have a good idea of how I want to arrange the content on the page I immediately jump into the browser and start working with live HTML. At this point I'm just stacking text and creating structure; there's no CSS yet.

How I work in the browser is probably the most idiosyncratic part of my workflow.
I've written a bit before about how I work with [plain text files and Pandoc to generate live HTML mockups](https://longhandpixels.net/blog/2014/02/work-smarter-plain-text-workflow#tool-2-plain-text). I then serve those mockups with a simple webserver, which I think is the [easiest way to get started "designing in the browser"](https://longhandpixels.net/blog/2013/11/easiest-way-get-started-designing-browser). From there I start editing and refining my mockups directly in the browser, using tools like the Chrome or [Firefox Developer Tools](https://longhandpixels.net/blog/2014/02/live-editing-sass-firefox-vim-keybindings).

It's worth pointing out that this process of sketching, wireframing, outlining user flows and building mockups in the browser is not all that different from what I've always done, responsive or otherwise, but there is one major difference -- responsive design requires a more circular flow.

In other words, this is not a straight linear progression from sketch to wireframe to live demo. It's more like a series of loops, something like: look at the content, sketch out basic ideas, create a wireframe, consider user flows, go back to content, rearrange slightly, tweak wireframe, create mockups, revisit user flows, re-tweak wireframe and so on.

## Rinse and Repeat

Once I've got a basic live site served up from a local folder, with at least one version of all the different pages the site is likely to have, the next step is to start doing all the things that make up building a responsive site. For me that means starting with my small-screen layout, which is also typically the least capable device I'm supporting, and then working my way up the line. I create a basic mobile layout and then start stretching it until it looks like crap. I mark that as a breakpoint and head back to my text editor to start a new media query. Then I rinse and repeat with all the various pages until everything has been handled.
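In CSS terms, that loop produces a stylesheet shaped roughly like this -- base styles first, then a new `min-width` query at each point where the layout broke. To be clear, the class name and em values below are placeholders for illustration, not recommendations:

~~~.language-css
/* The basic mobile layout -- no media query needed */
.page {
    padding: 1em;
}

/* First breakpoint: wherever the stretched mobile layout started to look bad */
@media (min-width: 38em) {
    .page {
        max-width: 36em;
        margin: 0 auto;
    }
}

/* Second breakpoint: enough room for a wider measure */
@media (min-width: 60em) {
    .page {
        max-width: 55em;
    }
}
~~~

The point is that the breakpoints come from watching the content break, not from a list of device widths.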
All of this will be ported over to template files at some point; the mockup process is still very loose and experimental, and I shift back and forth between text editor and Firefox/Chrome, experimenting to see what makes the most sense.

This essentially begins the looping workflow I talked about above -- iterating through various pages, encountering issues and solving them, whether that's something I do myself or something slightly more structured when I'm working with a team.

One of the advantages of this workflow is the fact that you can take a live mockup to the client. In my experience this is usually incredibly helpful. It allows the client to interact with a mockup and see where the problems are before the site is in a more finalized form.

## Conclusion and Further Reading

As I wrote at the start of this piece, there is no one-size-fits-all responsive design workflow. That said, I think it is helpful to distill the basics down to these three general principles:

* Start with content (when possible)
* Work with code directly as much as possible -- test everything in the browser.
* Iterate.
{^ .list--indented }

If you'd like more details on how I do all that and the tools I use, be sure to look at my previous post on my [plain-text based workflow](https://longhandpixels.net/blog/2014/02/work-smarter-plain-text-workflow). You might also want to pick up a copy of my book, which has over 100 pages and dozens of videos devoted to various tools and [responsive design workflows](https://longhandpixels.net/books/responsive-web-design).
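A postscript on the "simple webserver" used to serve the local mockup folder: it can be a single command. Here's a minimal sketch, assuming Python 3 is installed (the folder path is just an example -- any static file server works equally well):

~~~.language-bash
# Serve the current folder of HTML mockups at http://localhost:8000/
cd ~/projects/mockup-site   # example path -- use your own mockup folder
python3 -m http.server 8000
~~~

Stop it with Ctrl-C when you're done; there's nothing to configure or install.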
diff --git a/src/old-no-longer-pub/2014-03-19_better-link-underlines-css-background-image.txt b/src/old-no-longer-pub/2014-03-19_better-link-underlines-css-background-image.txt
deleted file mode 100644
index 85a929c..0000000
--- a/src/old-no-longer-pub/2014-03-19_better-link-underlines-css-background-image.txt
+++ /dev/null
@@ -1,58 +0,0 @@
---
title: Better Link Underlines with CSS `background-image`
pub_date: 2014-03-19 12:04:25
slug: /blog/2014/03/better-link-underlines-css-background-image
tags: CSS Tips & Tricks
metadesc: Using CSS background-image to get (nearly) perfect link underlines.
code: True
tutorial: True

---

Medium's Marcin Wichary posted a fascinating look at his [quest](https://medium.com/p/7c03a9274f9) to create better-looking link underlines on the site. It's a great read if you're at all obsessive about typography on the web.

>How hard could it be to draw a horizontal line on a screen? ... This is a story on how a quick evening project to fix the appearance of underlined Medium links turned into a month-long endeavour.

I highly suggest having a look at [Wichary's post](https://medium.com/p/7c03a9274f9), but Medium doesn't seem big on tutorials -- Wichary never actually posts the code he's using to achieve the underlines.

I was curious because, while Google may be abandoning link underlines, I still like them, especially when the links are in bodies of text. I was also curious because this site has the same problem Medium's old link styles had in Chrome: ugly, thick underlines.
The version of Chrome in the Canary channel fixes this, but the current shipping version still looks bad:

[](https://longhandpixels.net/media/images/2014/chrome-underlines-bad.png "View Image 1")
: Link underlines in Chrome 33 and below

Canary channel:

[](https://longhandpixels.net/media/images/2014/chrome-underlines-better.png "View Image 2")
: Link underlines in Chrome 35+

Still, I was curious what Wichary had come up with, so I opened the dev tools, poked around a bit and found that these are the key styles:

~~~.language-css
a {
    text-decoration: none;
    background-image: linear-gradient(to bottom, white 75%, #333332 75%);
    background-size: 2px 2px;
    background-position: 0px 24px;
    background-repeat: repeat-x;
}
~~~

You'll need to adjust the colors to fit your site's color scheme, and bear in mind that the background size and position may need to be adjusted based on your font size and line-height.

I happen to like the underline slightly heavier than what Medium is using, so after playing with this technique a bit, here's what I'm planning to roll out here at some point:

~~~.language-css
    background-image: linear-gradient(to bottom, white 50%, #DE4D00 75%);
    background-size: 2px 2px;
    background-position: 0 20px;
    background-repeat: repeat-x;
~~~

Here's what it looks like:

[](https://longhandpixels.net/media/images/2014/fancy-underlines.png "View Image 3")
: Link underlines using `background-image`

So far as I can tell there are no accessibility or other downsides to this technique, but if you know better let me know in the comments. It degrades pretty well too, since you can just use a good old `text-decoration: underline;` for older browsers.

Now if only the Medium developers would give the site's URLs this level of attention.
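If you want to be explicit about that fallback, one possible pattern is to keep the plain underline as the default and only swap in the gradient where the browser supports it. This is a sketch, not the styles this site actually ships, and `@supports` is a newer feature than the technique itself:

~~~.language-css
a {
    /* Default for older browsers: a plain underline */
    text-decoration: underline;
}

@supports (background-image: linear-gradient(to bottom, white, black)) {
    a {
        /* Capable browsers drop the default underline for the gradient version */
        text-decoration: none;
        background-image: linear-gradient(to bottom, white 75%, #333332 75%);
        background-size: 2px 2px;
        background-position: 0 24px;
        background-repeat: repeat-x;
    }
}
~~~

Without the `@supports` gate, a browser that ignores the gradient would still apply `text-decoration: none;` and end up with no underline at all.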
diff --git a/src/old-no-longer-pub/2014-03-20_look-responsive-design.txt b/src/old-no-longer-pub/2014-03-20_look-responsive-design.txt
deleted file mode 100644
index b4553af..0000000
--- a/src/old-no-longer-pub/2014-03-20_look-responsive-design.txt
+++ /dev/null
@@ -1,36 +0,0 @@
---
title: The Look of Responsive Design
pub_date: 2014-03-20 12:04:25
slug: /blog/2014/03/look-responsive-design
metadesc: Does responsive design have a 'look'?
tags: Responsive Web Design
---

Mark Boulton posted something to Twitter yesterday that I think deserves more attention than it has seen thus far. Boulton [writes](https://twitter.com/markboulton/status/445943150247702528):

>"I wonder if #RWD looks the way it does because so many projects aren't being run by designers, but by front-end dev teams."

I suppose you could take that as a dig against front-end developers, but I don't think it was meant that way and that, to me anyway, is not the interesting part of the question.

I'm also not interested in defending responsive design. If you don't want/like/care about responsive design that's fine. Carry on.

What I've been thinking about since I read Boulton's thought is not the roles of designers and developers in responsive projects[^1]. What got me thinking is this notion that "responsive web design looks the way it does".

The reason I've been thinking about it is because when I first saw this in my Twitter feed I had a kind of gut reaction -- *yeah, I know what he means there, responsive web design does look a certain way*. But the more I've thought about this, the less I agree with myself and my initial gut reaction.

I'm actually pretty sure I have no idea what responsive design looks like.

What I'm really hoping is that Boulton will blog about what he meant, because he's a talented designer and I am not, so it's very possible I'm missing the obvious.
Tim Kadlec offers some reasons [Why RWD Looks Like RWD](http://timkadlec.com/2014/03/why-rwd-looks-like-rwd/), and goes a little further, writing, "to be fair, a pretty large number of responsive sites do tend to share similar aesthetics."

I agree with him, but I don't know that responsive web design has anything to do with it. I think a pretty large number of WordPress sites [look like WordPress sites](http://ma.tt/2014/01/techmeme-100/). I think you could back up out of categories entirely and simply say "a large number of *websites* share similar aesthetics".

Sure, the popularity of responsive design coincides with some other design trends: blocky, very consciously grid-oriented layouts, Pinterest-inspired layouts, Medium-popularized big-image layouts and so on. And at the same time libraries like Bootstrap and its ilk have homogenized design details to some degree[^2]. Then there's the ubiquitous "hamburger" menu and other shared mobile design patterns, but most of those patterns are purloined from mobile applications rather than something responsive web designers came up with.

There's also the fact that newsy websites (Boston Globe, BBC, The Guardian, et al.) have been some of the earliest and most prominent adopters of responsive design. They all have similar visual designs, but I think that's owing more to the similarity in [the structure of news content](http://www.markboulton.co.uk/journal/structure-first-content-always) than to responsive web design.

Responsive design imposes new constraints that are hard to solve. And whenever there's new territory, early maps all tend to borrow from each other and end up looking similar. When someone solves a problem -- for example, pull-to-refresh -- dozens of others rush to adopt the same solution, which leads to common design patterns, but there's nothing about that that's unique to responsive design.
[^1]: I suspect that was more the point and it's probably a better topic, but it's not the one that got me thinking.
[^2]: Which is not to say that's a bad thing. It is just a thing.

diff --git a/src/old-no-longer-pub/2014-03-26_shell-code-snippets-done-right.txt b/src/old-no-longer-pub/2014-03-26_shell-code-snippets-done-right.txt
deleted file mode 100644
index 038b841..0000000
--- a/src/old-no-longer-pub/2014-03-26_shell-code-snippets-done-right.txt
+++ /dev/null
@@ -1,48 +0,0 @@
---
title: Shell Code Snippets Done Right
pub_date: 2014-03-26 12:04:25
slug: /blog/2014/03/shell-code-snippets-done-right
metadesc: A better way to post shell code samples on your site.
tags: CSS Tips & Tricks
code: True
tutorial: True

---

If you post tutorials online, particularly anything involving shell code, you've inevitably run into the problem where people copy and paste the code and end up including the `$` prompt by mistake.

The code won't work when they do that, so your tutorial becomes less helpful. Sometimes people will post a comment wondering what happened, but they're just as likely to decide you don't know what you're talking about and move on.

Even users who know not to copy the `$` have to be careful when they select your code to avoid accidentally copying it.

You could leave out the `$` to avoid this, but then it can be hard to tell what's a wrapped line versus what's two separate lines of code.

Turns out there's a better way -- you can display the prompt and make sure no one ever selects it, thanks to the CSS pseudo-element `:before`.
I found [this screenshot][1][^1] in an open tab this morning and thought I'd share what a subtle but helpful stroke of genius Github has created here:

[![Screenshot of Github source code showing how to use `:before` to add a prompt before shell code][2]](https://longhandpixels.net/media/images/2014/github-prompt-before.jpg "View Image 1")
: Github's ingenious use of `:before` to display a prompt before shell code snippets

Github is using a pseudo-element to prepend command-line code with the `$` prompt. It's great attention to detail -- you get the prompt, but because it's a pseudo-element it can't be copied when users copy and paste the code. In most browsers it can't even be selected, which means you don't have to worry when you're highlighting code to copy it.

This site currently uses [Prismjs][3] for code highlighting, so Github's exact syntax won't work here, but here's a snippet of CSS to make this work with Prismjs:

~~~.language-css
code.language-bash:before {
    content: "$ ";
}
~~~

The markdown processor I use adds `language-bash` to both the `pre` and `code` tags, which is why I've used the more specific `code` selector.

And there you go, a better way to display snippets of shell code on your site.

The main downside I can see is that if/when I roll this out I'll have to go back and delete all the `$` in my posts.

[^1]: I have no idea where I found this or how it came to open in my browser. Probably Twitter, but I've been unable to track down who did it since Twitter's search features suck. If you know, post a comment.
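If you're not using Prismjs, the same trick works with whatever class your markup puts on shell blocks. A generic sketch, assuming a hypothetical `pre > code.shell` structure (adjust the selector to match your own HTML):

~~~.language-css
/* Prepend a prompt that can't be copied: generated content
   is not part of the document text, so copy-and-paste skips it */
pre > code.shell:before {
    content: "$ ";
}
~~~

The selector is the only part that varies; the `content` rule is the whole technique.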
[1]: http://cl.ly/Ub7p
[2]: https://longhandpixels.net/media/images/2014/github-prompt-before.jpg
[3]: http://prismjs.com/

diff --git a/src/old-no-longer-pub/2014-04-05_why-mobile-first-responsive-design.txt b/src/old-no-longer-pub/2014-04-05_why-mobile-first-responsive-design.txt
deleted file mode 100644
index 4728863..0000000
--- a/src/old-no-longer-pub/2014-04-05_why-mobile-first-responsive-design.txt
+++ /dev/null
@@ -1,52 +0,0 @@
---
title: Why Mobile-First is the Right Way to Build Responsive Websites
pub_date: 2014-04-05 12:04:25
slug: /blog/2014/04/why-mobile-first-responsive-design
metadesc: Why taking a mobile-first approach is the best way to build responsive websites.
tags: Responsive Web Design

---
[*Note: This is an excerpt from my book, [Build a Better Web With Responsive Web Design](https://longhandpixels.net/books/responsive-web-design). If you like what you read here, pick up a copy of the book and be sure to sign up for the newsletter over there in the sidebar.*]

The goal of responsive web design is to create websites that look great and function well on any device that accesses them.

Assuming you're already familiar with the flexible foundations of fluid layouts and typography, along with @media queries, you're ready to dive into the world of responsive web design.

In fact, you're probably itching to dive in and start writing @media queries for tablet-size devices and others for mobile devices to trick out your current fixed-width sites in shiny new responsive duds.

That approach works. You can approach responsive design as something you tack on to a desktop site. That is, you can build out your designs for larger screens and then use media queries to make sure things look good on tablets, phones and all the other myriad devices out there.
Indeed, this is often the only practical approach to take when you're adding on to an existing design for clients who have a tight budget or timeline and don't want a completely redesigned website.

In other words, there's nothing wrong with building a desktop site and then adding some @media queries to handle other screens. In fact, I guarantee that no matter how much you want to embrace the approach I'm about to outline, there will be cases where it just isn't an option.

That said, there is, to my mind, a far better place to start your responsive designs -- with the least capable device.

Developer Luke Wroblewski, who, among other things, was the Chief Design Architect at Yahoo, has helped popularize what he calls a "mobile-first" approach to design. In fact, he's written a book on the subject entitled, appropriately, *[Mobile First](http://www.abookapart.com/products/mobile-first "buy Mobile First")*. I'm going to briefly discuss why I think what Wroblewski calls mobile-first is the best approach for responsive design. For a deeper look at why a mobile-first approach makes the most sense, be sure to pick up a copy of Wroblewski's *Mobile First*.

Why does Wroblewski think you should start designing for mobile devices first? Here's what he [writes on his blog](http://www.lukew.com/ff/entry.asp?1333 "Luke Wroblewski on why mobile-first is the right approach"):

>We start with the value of our service and our content. The constraints in mobile devices force us to focus on what really matters so we only include and emphasize the most valuable parts of our offering first and foremost.

In other words, the constraints of mobile devices -- often, though not always, limited bandwidth, small screens, touch interfaces, etc. -- help you focus on what's really important in your designs. That means cutting the cruft and getting the content your users are after front and center.
For example, it might force you to consider, given the limited screen sizes of mobile devices -- often less than 480px in height -- whether that 100px-tall logo is really worth the screen real estate it requires. Ditto menus, sharing buttons and all the other cruft that can accumulate around the actual content users want. Some of that "cruft" might be important, some might not. The point is that starting with mobile devices helps to create constraints, which in turn creates focus.

While I like Wroblewski's mobile-first approach, I have a slightly different way of looking at it. As I already mentioned, I like to think not just mobile-first, but *least capable device first*[^1].

Clearly I'm no good at coining a phrase, but I prefer to think of it this way because most of the constraints of mobile devices will be gone in the next few years -- networks are getting faster, and so are mobile processors. However, while the phones of tomorrow may be as powerful as the laptops of today, there will be an entirely new class of devices we haven't even thought of yet connecting to the web that will have constraints. In other words, don't get hung up on the "mobile" part of mobile-first; think ***least capable device first***.

Naturally, *least capable* is very open-ended. Does that mean you have to make sure your site renders in Lynx running on a BBC Micro? No, it just means starting with the basics and layering on complexity rather than starting with the complex desktop layout and trying to strip it back.

In fact, the first step is determining the least capable device you want to support. Sometimes that might be a feature-phone browser that doesn't understand @media queries. Other clients might not care about reaching that market and just want something that works well on the mobile devices most popular in the U.S. and Europe.

Whatever the case, first determine where you're going to start. Once you have that least capable device to target you can start building.
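Least-capable-device-first maps neatly onto how stylesheets cascade: everything outside a @media query is the baseline every browser gets, and each query layers on an enhancement. A rough sketch -- the selectors and widths here are placeholders, not recommendations:

~~~.language-css
/* Baseline: simple, single-column, readable everywhere --
   even in browsers that ignore @media entirely */
body {
    font-family: sans-serif;
    line-height: 1.5;
    padding: 1em;
}

/* Enhancement: only browsers that understand media queries
   get the wider two-column layout */
@media (min-width: 48em) {
    .content {
        float: left;
        width: 65%;
    }
    .sidebar {
        float: right;
        width: 30%;
    }
}
~~~

A browser that doesn't understand media queries simply never sees the second block, so it falls back to the single-column baseline rather than a broken layout.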
So what are the practical components of a least-capable-device mindset? What does this mean for our actual code?

It means we start by building a site that works on these least capable devices. No media queries, no fancy JavaScript. Just a basic site optimized for the least capable device you're targeting.

Start simple. Build your site so it looks good without many bells or whistles. This has two advantages. The first is practical: it supports older phones and mobile web browsers that don't understand @media queries (as noted earlier, it might even mean obscurities like Lynx work as well). The second advantage of the approach is that it forces you to focus on what's important from the get-go.

Then, once you have a baseline experience, you can add the bells and whistles.

You wouldn't start building a massive skyscraper without first laying a strong foundation. That's precisely what this bare-bones experience provides: a strong foundation to support all the rest of your progressively more complex enhancements.

[^1]: I'm pretty sure Wroblewski also means least capable device when he says mobile-first; I just like to be explicit about it.

diff --git a/src/old-no-longer-pub/2014-04-23_learn-web-development-today.txt b/src/old-no-longer-pub/2014-04-23_learn-web-development-today.txt
deleted file mode 100644
index 47144c4..0000000
--- a/src/old-no-longer-pub/2014-04-23_learn-web-development-today.txt
+++ /dev/null
@@ -1,89 +0,0 @@
---
title: How to Learn Web Development Today
pub_date: 2014-04-13 12:04:25
slug: /blog/2014/04/learn-web-development-today
metadesc: How to learn web development on the modern web, or why you don't need Webmonkey, A List Apart, Smashing Magazine or anyone else.

---

I launched my book today.
If you're interested in learning how to build fast, user-friendly, mobile-first, progressively-enhanced responsive websites, grab a copy of the awkwardly titled ***[Build a Better Web with Responsive Web Design](https://longhandpixels.net/books/responsive-web-design?utm_source=lhp&utm_medium=blog&utm_campaign=lhp)***. There's a free 60-page excerpt you can download to try it out before you buy.

It's been a long, and lately very hectic, road from [running Webmonkey](https://longhandpixels.net/blog/2013/09/whatever-happened-to-webmonkey) to self-publishing, but along the way I got an idea stuck in my head and it won't go away: we need to embrace the post-Webmonkey world.

To put it another way, we need to embrace the post-megasite web. Webmonkey happens to be the megasite that I identify with, but there are others as well.

I've been thinking about this in part because I've been embracing a post-megasite world on a personal level and also because several dozen people have emailed me over the past year to ask, "now that Webmonkey is gone, what publications should I follow to keep up with web development?"

Most of these people will mention A List Apart, Smashing Magazine, <strike>.Net</strike>, maybe one or two more megasites and then want to know what else is out there that they don't know about yet.

These are all great publications, but they're all, like Webmonkey, a dying breed. .Net already died, in fact. There's just not much money in selling ads alongside daily web development news, and web-savvy audiences like yourself tend to have ad blockers installed, which means even less money for these sites than for, say, CNN.

## What to Do Instead

Learning how to build websites and finding answers to your questions has become a very distributed thing in the last ten years. While there are still some Webmonkey-esque things around, none of them can possibly keep track of everything.
There's simply too much stuff happening too fast for any centralized resource to keep track of. Except for maybe CSS Tricks, which seems to show up in the results for pretty much every web development question I've ever had. Not sure when Chris Coyier sleeps. - -For the most part, though, web development has long since moved past the top-down, hierarchical knowledge structures that tend to define new endeavors, when only a few people know how to do something, to a distributed model where there are so many blog posts, so many answers on Stack Overflow, so many helpful tips and tricks around the web that there is no single resource capable of tracking them all. Except Google. - -This has some tremendous advantages over the world in which Webmonkey was born. - -First, it makes our knowledge more distributed, which means it's more resilient. Fewer silos means less risk of catastrophic loss of knowledge. Even the loss of major things like Mark Pilgrim's wonderful HTML5 book can be recovered and distributed around so it won't disappear again. - -This sort of resilience is good, but the far better part of the more distributed knowledge base is that there are so many more people to learn from. So many people facing the same situations you are and solving the same problems you have means more solutions to be found. This has a two-fold effect. First, it makes your life easier because it's easier to find answers to your questions. Second, it frees up your time to contribute back different, possibly novel solutions to the problems you can't easily answer with a bit of web searching. - -So my response to people asking that question "now that Webmonkey is gone, what publications should I follow to keep up with web development?" has two parts. - -## Write What You Know - -First, and most importantly, start keeping a public journal of your work. I don't mean a portfolio; I mean a record of the problems you encounter and the ways you solve them.
Seriously, go right now and download and install WordPress or set up GitHub Pages or something. You're a web developer. If you don't know how to do that yet, you need to learn how to do it now. - -Now you have a website. When you get stumped by some problem start by searching to see what other people have done to solve it. Then write up the problem and post a link to the solution you find. Your future self will thank you for this; I will thank you for this. - -Not only will you have a reference point should the same question come up again, you're more likely to remember your solution because you wrote it down. - -You'll also be helping the rest of us, since the more links that point to good solutions the more those solutions rise to the top of search engines. Everyone wins and you have something to refer back to in six months when you encounter the same problem again and can't remember what you did. - -For example, I don't particularly want to remember how to set up Nginx on Debian. I want to know how to do it, but I don't really want to waste actual memory on it. There are too many far more interesting things to keep track of. So, when I did sit down and figure out how to do it, I wrote a post called [Install Nginx on Debian/Ubuntu](https://longhandpixels.net/blog/2014/02/install-nginx-debian-ubuntu). Now whenever I need to know how to install Nginx I just refer to that. - -## Find People Smarter Than You - -My other suggestion to people looking for a Webmonkey replacement is to embrace Noam Chomsky's notion of a star system, AKA the blogroll. Remember the blogroll? That thing where people used to list the people they admired, followed, learned from? Bring that back. List the people you learn from, the people you admire, the people that are smarter than you. As they say, if you're the smartest person in the room, you're in the wrong room. - -I'll get you started; here's a list of people that are way smarter than me.
Follow these people on Twitter, subscribe to their RSS feeds, watch what they're doing and don't be afraid to ask them for advice. Most of them won't bite. The worst thing that will happen is they'll ignore you, which is not the end of the world. - -* [@anna_debenham](https://twitter.com/anna_debenham) -* [@yoavweiss](https://twitter.com/yoavweiss) -* [@jenville](https://twitter.com/jenville) -* [@wilto](https://twitter.com/wilto) -* [@MattWilcox](https://twitter.com/MattWilcox) -* [@aworkinglibrary](https://twitter.com/aworkinglibrary) -* [@igrigorik](https://twitter.com/igrigorik) -* [@JennLukas](https://twitter.com/JennLukas) -* [@karenmcgrane](https://twitter.com/karenmcgrane) -* [@paul_irish](https://twitter.com/paul_irish) -* [@meyerweb](https://twitter.com/meyerweb) -* [@mollydotcom](https://twitter.com/mollydotcom) -* [@lukew](https://twitter.com/lukew) -* [@susanjrobertson](https://twitter.com/susanjrobertson) -* [@adactio](https://twitter.com/adactio) -* [@Souders](https://twitter.com/Souders) -* [@lyzadanger](https://twitter.com/lyzadanger) -* [@beep](https://twitter.com/beep) -* [@tkadlec](https://twitter.com/tkadlec) -* [@brucel](https://twitter.com/brucel) -* [@johnallsopp](https://twitter.com/johnallsopp) -* [@scottjehl](https://twitter.com/scottjehl) -* [@jensimmons](https://twitter.com/jensimmons) -* [@sara_ann_marie](https://twitter.com/sara_ann_marie) -* [@samanthatoy](https://twitter.com/samanthatoy) -* [@swissmiss](https://twitter.com/swissmiss) -* [@LeaVerou](https://twitter.com/LeaVerou) -* [@stubbornella](https://twitter.com/stubbornella) -* [@brad_frost](https://twitter.com/brad_frost) -* [@chriscoyier](https://twitter.com/chriscoyier) -* [@RWD](https://twitter.com/RWD) - -For simplicity's sake these are all Twitter links, but have a look at the bio section in each of those links; most of them have blogs you can add to your feed reader as well.
- -This list is far from complete; there are thousands and thousands of people I have not listed here, but hopefully this will get you started. Then look and see who these people are following and keep expanding your network. - -Okay, now go build something you love. And don't forget to tell us how you did it. diff --git a/src/old-no-longer-pub/2014-04-26_create-email-courses-mailchimp.txt b/src/old-no-longer-pub/2014-04-26_create-email-courses-mailchimp.txt deleted file mode 100644 index 7beeb91..0000000 --- a/src/old-no-longer-pub/2014-04-26_create-email-courses-mailchimp.txt +++ /dev/null @@ -1,205 +0,0 @@ ---- -title: Create Email Courses With MailChimp -pub_date: 2014-04-25 12:04:25 -slug: /blog/2014/04/create-email-courses-mailchimp -metadesc: How to set up email courses or drip campaigns using MailChimp -tutorial: True - ---- - -I like to teach people how to do things, whether that's a simple tutorial on this site or a [big ol' book](https://longhandpixels.net/books/responsive-web-design). Lately I've been playing with the idea of a "course" delivered via email. - -I use MailChimp for my mailing lists and it turns out you can also use MailChimp to provide a timed series of tutorials as well, a bit like an email-based course. Basically I wanted a way to take sequential and otherwise related tutorials from this site and package them up in something that can be delivered via email. - -While MailChimp can do this, it's not really the primary use case and it was not immediately obvious to me how to do it. MailChimp has great documentation, but it doesn't really cover this use case in detail. The MailChimp admin interface also appears to have changed a lot in the last couple of years, so many of the tutorials out there on the web are no longer completely accurate when it comes to menu locations and such. - -This is how I ended up accomplishing what I wanted to do. - -Everything that follows assumes you've used MailChimp before to at least send a newsletter.
You'll also need to have a paid account because we're going to create a series of autoresponders, which (at the time of writing) are only available as part of paid accounts. - -Before we dive in, here's how I want this to work: When a person signs up for the longhandpixels newsletter there will be a checkbox that says "Yes, I'd also like to receive a free 7-day course on responsive web design" or similar. When someone checks that box I want to 1) add them to the main mailing list and 2) start a series of emails that get delivered once a day for the next seven days. If they don't check that box then they should just be signed up for the main mailing list. - -Note that you can do this in some sleazy ways that amount to tricking people into getting a bunch of marketing spam they don't want. Please don't do that. Life is short, don't waste people's time. It will just make people hate you and whatever you're offering. But if you have content people actually want, I think this is a great way to get it to them. - -## Create the Form - -The first step is to create a sign up form with a custom field that offers the option to also get the email course. To do that click the Lists menu item to the left of the page in the MailChimp admin. - -Select the mailing list you want subscribers to end up on (or create a new list). That will take you to the list page where, along the top, you'll see a series of tabs; click on "Signup forms". You should see three options: General forms, Embedded forms and Form integrations. Select the middle option, Embedded forms. - -{: class="img-right"} You should now be on the form customization page. Here you can build your form however you'd like; the thing we need to do is add an extra field so our potential subscribers can take the 7-day course. To do that we'll use MailChimp's form builder, which you can get to via the link under the Form options menu.
- -Now we need to add a new field, so double-click the Check Boxes option on the right, which will add a new checkbox field to your form. You can change the field label to something that makes sense. I ended up not displaying this in the actual form, but it's how this segment of subscribers will be identified in the next step, so put something in there that makes sense. I chose "rwd_course". - -You can then set the text in the options box to whatever you'd like. I use "Yes, I'd also like to receive a free 7-day course on responsive web design". - -[](https://longhandpixels.net/media/images/2014/mailchimp-form-builder.png "View Image 2") -: Adding a checkbox using MailChimp's form builder - -Alright, now go back to the initial form page and grab the embed code. This is curiously difficult to do. I may be missing something, but the only way I know of to get out of the form builder is to click Signup forms again and then once again select the middle option, Embedded forms. From here you can grab the embed code and customize it to your liking. You can see the example at the bottom of this page to see what mine looks like. - -## Set up the Email Course - -Now we have a way for people to join our mailing list and let us know they want the course. The next step is to set up the actual course. - -To do that we'll need to create a series of autoresponders. Click the autoresponders link on the left side of the page and then click the Create Autoresponder button at the top of the page. - -This will take you to the first step in the autoresponder setup process, which is essentially the same as setting up your regular mailings; it just has a few extra options that allow you to control when it is sent out. - -The first step is to select whichever list you're using. Then we want to narrow the list by selecting the "Send to a new segment" option. Click the drop-down menu and look under Merge fields for the checkbox option we created earlier.
Select "rwd_course" (or whatever you named it), "one of" and the only option available. - -[](https://longhandpixels.net/media/images/2014/mailchimp-segment-options.png "View Image 3") -: Setting up the list segment in MailChimp - -Hit the Next button and you'll see the screen that sets what event will trigger the autoresponder and when the mail will be sent. The event we want to start things off with is "subscription to the list". Typically I send a welcome email to everyone who subscribes so I consider this to be the first lesson in the course and thus send it one day from when someone signs up. You could, however, use this first autoresponder to send a kind of "thanks for signing up for the course, the first lesson will be here tomorrow" sort of thing, in which case I would set it to send "within the hour" after sign up (which is the shortest time MailChimp offers for autoresponders). - -Other options here include the ability to limit which days of the week the mail is sent, for example perhaps not sending your course on weekends. To be honest I haven't touched this yet, but it might be worth tweaking delivery days down the road, based on your response rates. - -The next screens are the same as setting up any other campaign in MailChimp, allowing you to customize the mail subject, email, name and other variables. There's plenty of [good documentation](http://kb.mailchimp.com/article/how-do-i-create-a-new-campaign/) on this stuff so I won't repeat it here. See the [MailChimp knowledge base](http://kb.mailchimp.com/home) if you're completely new to MailChimp. - -The only thing I'll mention here is that I use a very specific campaign naming structure so I can see the order of emails sent at a glance. Since this will be the first lesson I would name it "RWD Course - Lesson 01". Without the sequential naming scheme it's impossible to look at a list of your autoresponders and know which one is which lesson.
- -From here you just add your first lesson, pick a nice theme and start up the autoresponder. - -So you have the form set up, and anyone who signs up will get lesson one delivered to their inbox a day later. - -Now you just need to rinse and repeat. Like I said, this is a little outside the typical MailChimp use case so unfortunately there's no simpler way to do it. Luckily you can save a bit of time if you go back to the Autoresponders list page (by clicking the autoresponder link in the left-hand menu) and click on the Edit drop down button where you'll see an option to "Replicate". - -This will essentially clone your autoresponder, keeping everything the same so all you have to change is the subject line, the autoresponder name (now "RWD Course - Lesson 02" or similar) and the actual content. Don't forget to force the plain text content to update as well since it won't, for some reason, always auto-update when you change the HTML content. - -Then replicate again. And again. And so on. You can start to see why Nathan Berry is doing well with [ConvertKit](http://convertkit.com/) (which greatly streamlines this process). I don't mind the extra steps, and I like to keep everything in one place so I'm sticking with MailChimp even if my attempts to create email courses are a little outside the usual use case.
- -Be sure to let me know if you have any questions and if you'd like to see my responsive design course in action, here's the signup form: - -<div class="form--wrapper"> -<style> -.input--button, .package--button { - display: block; - background-color: #d05c1c; - background-image: -moz-linear-gradient(bottom, #d05c1c, #e26f30); - background-image: -webkit-linear-gradient(bottom, #d05c1c, #e26f30); - background-image: linear-gradient(to top, #d05c1c, #e26f30); - -ms-filter: "progid:DXImageTransform.Microsoft.gradient (GradientType=0, startColorstr=#d05c1c, endColorstr=#e26f30)"; - padding: 15px 25px; - color: #fff !important; - margin-bottom: 10px; - font-weight: normal; - border: 1px solid #c14b0b; - border-radius: 5px; - text-decoration: none; - box-shadow: 0px 0px 3px rgba(0, 0, 0, 0.19), 0px 1px 0px rgba(255, 255, 255, 0.34) inset, 0px 0px 10px rgba(0, 0, 0, 0.05) inset; } -.form--wrapper { - padding: 2em 0; } - -.form--wrapper form { - margin: 0 auto; - max-width: 18.75em; -font-family: "HelveticaNeue-Light", "Helvetica Neue Light", "Helvetica Neue", Helvetica, Arial, "Lucida Grande", sans-serif; - clear: both; } - @media (min-width: 32em) { - .form--wrapper form { - float: right; - clear: none; - max-width: 15em; } } - @media (min-width: 47em) { - .form--wrapper form { - max-width: 23.75em; } } - -.form--fieldgroup { - margin-bottom: 1em; } - -label { - display: block; - font-size: 12px; - font-size: 0.75rem; - font-weight: 600; } - -.label--inline { - display: inline-block; - margin-left: 8px; - max-width: 21.875em; } - -.input--check { - float: left; - margin: 0; } - -.input--textfield { - width: 15.3125em; } - @media (min-width: 32em) { - .input--textfield { - max-width: 13.125em; } } - @media (min-width: 47em) { - .input--textfield { - max-width: 17.1875em; } } - -.input--button { - max-width: 250px; } - @media (min-width: 32em) { - .input--button { - max-width: 15em; } } - @media (min-width: 47em) { - .input--button { - max-width: 250px; } } - 
-.input--button__sample { - max-width: 280px; } - -.form--small { - font-size: 90%; } - -.img-open-book { - margin: 0 auto 2em; - display: block; } - @media (min-width: 32em) { - .img-open-book { - float: left; - width: 48%; - margin-right: 1em; } } -.form--wrapper:before, -.form--wrapper:after { - content:""; - display:table; -} -.form--wrapper:after { - clear:both; -} -.form--wrapper { - zoom:1; /* For IE 6/7 (trigger hasLayout) */ -} -.form--small { -font-size: 73% !important;} -</style> -<img alt="Build A Better Web With Responsive Design by Scott Gilbertson" src="https://longhandpixels.net/books/media/sample-chapter-image.png" class="img-open-book"> - <!-- Begin MailChimp Signup Form --> -<form action="https://longhandpixels.us7.list-manage.com/subscribe/post?u=f56776029b67b1c8c712eee00&id=040927f84d" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank" novalidate=""> - -<div class="form--fieldgroup"> -<label for="mce-FNAME">First Name </label> <input type="text" value="" name="FNAME" class="input--textfield" placeholder="Jane Doe" id="mce-FNAME"> -</div> -<div class="form--fieldgroup"> -<label for="mce-EMAIL">Email Address</label> <input type="email" autocapitalize="off" autocorrect="off" value="" placeholder="jane@doe.com" name="EMAIL" class="input--textfield" id="mce-EMAIL"> -</div> -<div 
class="form--fieldgroup"> -<label for="mce-group[5529]-5529-0" class="label--inline">Yes, I’d also like to receive a free 7-day course on responsive web design.</label><input type="checkbox" value="1" name="group[5529][1]" id="mce-group[5529]-5529-0" class="input--check" checked=""> -</div> -<div style="position: absolute; left: -5000px;"> -<input type="text" name="b_f56776029b67b1c8c712eee00_040927f84d" value=""> -</div> -<input type="submit" value="Send Me the Sample Chapter" name="subscribe" id="mc-embedded-subscribe" class="input--button input--button__sample"> -<p class="form--small"> -<small>We won’t send you spam. Unsubscribe at any time.</small> -</p> -</form> -</div> - -And seriously, don't create spammy drip campaigns no one wants to read. - -## Further Reading - -* [MailChimp.com - Getting Started Autoresponders: An Experiment in Empowerment](http://blog.mailchimp.com/getting-started-autoresponders-an-experiment-in-empowerment/) -* [MailChimp.com - Advanced customization of MailChimp's signup forms](http://kb.mailchimp.com/article/advanced-customization-of-mailchimps-sign-up-forms) -* [MailChimp.com - Drip Email Campaigns – Setting Expectations](https://blog.mailchimp.com/drip-email-campaigns-setting-expectations/) -* [Smashing Magazine - How To Create A Self-Paced Email Course](http://www.smashingmagazine.com/2014/02/10/how-to-create-a-self-paced-email-course/) -* [ryandoom.com - Setting up a drip email campaign with MailChimp](http://www.ryandoom.com/Blog/tabid/91/ID/26/Setting-up-a-drip-email-campaign-with-MailChimp.aspx) -{^ .list--indented } - diff --git a/src/old-no-longer-pub/2014-05-16_should-you-wrap-headers-images-and-text-links.txt b/src/old-no-longer-pub/2014-05-16_should-you-wrap-headers-images-and-text-links.txt deleted file mode 100644 index 853275f..0000000 --- a/src/old-no-longer-pub/2014-05-16_should-you-wrap-headers-images-and-text-links.txt +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: Should You Wrap Headers, Images and Text Together in 
Links? -pub_date: 2014-05-16 12:04:25 -slug: /blog/2014/05/should-you-wrap-headers-images-and-text-links -metadesc: What happens when you wrap large amounts of text and images in link tags? -code: True - ---- - -Question marks in headlines are a pet peeve of mine because the answer is [almost always](https://twitter.com/YourTitleSucks), "no". In this case though, I'm genuinely not sure. - -[Update 2014-07-16: I lean toward no for situations with a large amount of text since it makes selecting that text really hard on mobile devices] - -Matt Wilcox asked an interesting [question on Twitter](https://twitter.com/MattWilcox/status/467251442307567616) today: - -> A11y question: we now often wrap headers and paras and images all in one `<a/>`... Makes sense, but how does that impact screen reader users? - -And [the follow up](https://twitter.com/MattWilcox/status/467251958387331072): - -> This is a good way to avoid having to link an image, header, and ‘read more’ with three identical URLs, but how are wrapped links read out? - -Just in case you didn't know, HTML5 allows you to put just about anything inside an `<a>`, which means you can do things like this: - -~~~language-markup -<a href="#"> - <article> - <h1>My Article Title</h1> - <img src="" alt="some related image" /> - <time datetime="2014-05-15T19:43:23">05/15/14</time> - <p>Paragraph text. Could be multiple 'grafs, could be dozens. Could be just about anything really.</p> - </article> -</a> -~~~ - -This eliminates the need to have three separate links around the title, image and perhaps a "read more" link. - -But as Wilcox points out, there might be unanticipated consequences that make this not such a good idea in terms of accessibility, and even usability (not sure there's a real distinction there). For example, any text becomes <strike>impossible</strike> very difficult to select, which makes me hesitate to use this technique in this particular case. There may be cases where that isn't a major problem.
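- -For contrast, the older pattern this replaces repeats the same URL three times, something like this (a hypothetical sketch, not markup from Wilcox): - -~~~language-markup -<article> - <h1><a href="#">My Article Title</a></h1> - <a href="#"><img src="" alt="some related image" /></a> - <p>Paragraph text. Could be multiple 'grafs.</p> - <a href="#">Read more</a> -</article> -~~~ - -Selecting text works normally in this version, but a screen reader announces three separate links that all lead to the same place, which is exactly the redundancy Wilcox is trying to avoid.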
- -I threw together a [test page](https://longhandpixels.net/demos/large-text-a11y/) that I can run through [ChromeVox](http://www.chromevox.com/) and OS X's VoiceOver (the two screen readers I have access to) and see what happens. But I don't use a screen reader on a regular basis so it's hard to know what works and what doesn't, other than obviously very annoying stuff. If you use a screen reader or have a means to test this scenario on other screen readers, I'd love to hear from you. diff --git a/src/old-no-longer-pub/2014-10-03_using-picture-vs-img.txt b/src/old-no-longer-pub/2014-10-03_using-picture-vs-img.txt deleted file mode 100644 index 3c2d39c..0000000 --- a/src/old-no-longer-pub/2014-10-03_using-picture-vs-img.txt +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: "`<picture>` Is Not the Element You're Looking For" -pub_date: 2014-09-30 09:30:23 -slug: /blog/2014/09/picture-not-the-element-youre-looking-for -tags: Responsive Images, Responsive Web Design -metadesc: Most of the time the <picture> element is not what you want, just use img. Your users will thank you. -code: True -tutorial: True - ---- - -The `<picture>` element will be supported in Chromium/Chrome/Opera stable in a few weeks. Later this year it will be part of Firefox. Some of it is already available in Safari and Mobile Safari. It's also at the top of IE12's to-do list. - -The bottom line is that you can start using `<picture>`. You can polyfill it with [Picturefill][1] if you want, but since `<picture>` degrades reasonably gracefully I haven't even been doing that. I've been using `<picture>`. - -Except that, as Jason Grigsby recently pointed out, you probably [don't need `<picture>`][2]. - -See, two of the most useful attributes for the `<picture>` element also work on the good old `<img>` element. Those two attributes are `srcset` and `sizes`. - -In other words, this markup (from [A Complete Guide to the `<picture>` Element][3])... 
- -~~~{.language-markup} -<picture> - <source media="(min-width: 48em)" srcset="pic-large1x.jpg, pic-large2x.jpg 2x"> - <source media="(min-width: 37.25em)" srcset="pic-med1x.jpg, pic-med2x.jpg 2x"> - <source srcset="pic1x.jpg, pic2x.jpg 2x"> - <img src="pic1x.jpg" alt="Responsive Web Design cover"> -</picture> -~~~ - -...will do nearly the same things as this markup that doesn't use `<picture>`: - -~~~{.language-markup} -<img sizes="(min-width: 48em) 100vw, - (min-width: 37.25em) 50vw, - calc(33vw - 100px)" - srcset="pic-large1x.jpg 800w, - pic-large2x.jpg 1600w, - pic-med1x.jpg 400w, - pic-med2x.jpg 800w, - pic2x.jpg 400w, - pic1x.jpg 200w" - src="pic1x.jpg" alt="Responsive Web Design cover"> -~~~ - -(Note that each image candidate in a `srcset` takes either a width descriptor like `800w` or a density descriptor like `2x`, never both at once.) - -The main difference between these two lies in the browser algorithm that ends up picking which image to display. - -In the first case, using the `<source>` tag means the browser **MUST** use the first source that has a rule that matches. That's [part of the `<picture>` specification][4]. - -In the second bit of code, using the `sizes` and `srcset` attributes on the `<img>` tag means the browser gets to decide which image it thinks is best. **When you use `<img>` the browser can pick the picture as it sees fit**. Avoiding the `<source>` tag allows the browser to pick the image via its own algorithms. - -That means the browser can respect the wishes of your visitor, for example, not downloading a high resolution image over 3G networks. It also allows the browser to be smarter, for example, downloading the lowest resolution image until the person zooms in, at which point the browser might grab a higher resolution image. - -Generally speaking, the only time you need to use `<picture>` is when you're handling [the "art direction" use case][5], which, according to `<picture>` guru Yoav Weiss, is [currently around 25 percent of the time][6]. - -The rest of the time, the majority of the time, stick with `<img>`; your users will thank you.
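- -If it helps, here's a rough sketch of that art direction case: a differently cropped image at each breakpoint, where you *do* want the browser to obey your media queries exactly (the filenames here are hypothetical): - -~~~{.language-markup} -<picture> - <source media="(min-width: 48em)" srcset="stage-wide.jpg"> - <source media="(min-width: 37.25em)" srcset="stage-square.jpg"> - <img src="stage-closeup.jpg" alt="Performer on stage"> -</picture> -~~~ - -Because each file is a different crop rather than a different resolution, the mandatory first-match behavior of `<source>` is a feature here, not a limitation.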
- -## Further Reading - -* [Don't use `<picture>` (most of the time)](http://blog.cloudfour.com/dont-use-picture-most-of-the-time/) -* [A Complete Guide to the `<Picture>` Element](https://longhandpixels.net/blog/2014/02/complete-guide-picture-element) -* [Native Responsive Images](https://dev.opera.com/articles/native-responsive-images/) -* [Srcset and sizes](http://ericportis.com/posts/2014/srcset-sizes/) -{^ .list--indented } - - -[1]: http://scottjehl.github.io/picturefill/ -[2]: http://blog.cloudfour.com/dont-use-picture-most-of-the-time/ -[3]: https://longhandpixels.net/blog/2014/02/complete-guide-picture-element -[4]: https://html.spec.whatwg.org/multipage/embedded-content.html#select-an-image-source -[5]: http://usecases.responsiveimages.org/#h3_art-direction -[6]: http://blog.yoav.ws/2013/05/How-Big-Is-Art-Direction diff --git a/src/old-no-longer-pub/2014-10-10_lenovo-chromebook-review.txt b/src/old-no-longer-pub/2014-10-10_lenovo-chromebook-review.txt deleted file mode 100644 index e4b6bca..0000000 --- a/src/old-no-longer-pub/2014-10-10_lenovo-chromebook-review.txt +++ /dev/null @@ -1,86 +0,0 @@ ---- -title: "Review: The Lenovo n20p Chromebook" -pub_date: 2014-10-10 09:30:23 -slug: /blog/2014/10/lenovo-n20p-chromebook-review -metadesc: A follow up to my Wired review, aimed at those curious about running Linux on Chromebooks - ---- - -I've been testing some Chromebooks for Wired. My first review is the [Lenovo n20p][1]. - -Here's the part I can't fit into the allotted 750 words: I'm not at all interested in these devices when they're running Chrome OS[^1]. - -I'm interested in Chromebooks because they're cheap and they're hackable. I'm interested in them because for $200-$300 you can get what amounts to an underpowered ultrabook. My ideal laptop is cheap, light, and long lasting. Chromebooks deliver on all three counts, which means I can take the $800-$1000 I would have spent on an actual ultrabook and put it to better use in the world.
- -Chrome OS, however, is not for me. I need a terminal. I want to run shell scripts and apps like Vim, Mutt, Python, Pandoc, etc. I also want a RAW image editor. And I want to use the browser of my choice. - -In other words, I want Linux on my Chromebook, otherwise all bets are off, which is why I'm putting up this little addendum to my Wired review -- for anyone else who's curious to hear what Linux is like on a $200 laptop. - -Spoiler alert: It's surprisingly awesome, though you need to make sure you get the right hardware for what you want to do. - -## The Lenovo n20p - -I hope it's clear from the Wired review that I do not recommend you buy the Lenovo n20p unless you really, really want the touchscreen features. Otherwise the screen is a deal breaker. It's also overpriced relative to other Chromebooks and short on RAM. - -Let me reiterate that first point here: the screen is awful. Completely, utterly awful. Sure it's a touch screen, which is good, because it's not much of a visual screen. There are far better screens on Chromebooks that retail for $100 less. I'm typing this right now in Vim, using the [Solarized][2] dark color scheme and I can barely see the words on the screen. It's that bad. - -Still, this is the first Chromebook I've had to experiment with so I installed and have been using Debian GNU/Linux on it. I opted for the Xfce desktop since it's somewhat lighter than GNOME[^2]. - -Debian runs really well on this machine when booted through the [Crouton project][3]. Crouton is not secure and I don't recommend actually using it full time, but it's not bad for testing. The better option would be to dual boot through the [SeaBIOS firmware][4] (or ditch Chrome OS entirely), but I'm not sure SeaBIOS is included with this hardware. I'm sending this device back to Lenovo; I don't want to brick a loaner machine installing alternate firmware.
- -## The Strange Story of Performance - -In the Wired review I mentioned that with Chrome OS I got about 6 hours of battery life doing normal stuff -- browsing the web, streaming audio and video, writing and so on. With Xfce on top of Debian on the same machine I get 8, sometimes 9, hours. I have no explanation for this other than perhaps Linux has better power management...? If that's true, Chrome OS is really lagging here because Linux has pretty shitty power management in my experience. Whatever the case, Xfce routinely outdid Chrome OS in battery life. - -But wait, it gets weirder. In the Wired review I mentioned that the Lenovo bogs down considerably when you open more than 10 or so tabs (sad for 2GB of RAM, but that's just how it is). For example, open a few YouTube videos, Google Docs, a web-based email app and a few dozen more and scrolling starts to stutter and switching tabs is noticeably sluggish. Do the very same thing in Chromium running on Debian and you get nothing of the sort. - -I have no explanation for this either, especially because Crouton means Chrome OS is still running alongside Debian. While performance should be on par, exceeding Chrome OS is difficult to explain. But hey, good news for those who want to run Linux on a Chromebook. - -So, what's it like to try something moderately taxing, in my case, editing RAW images in Darktable? And can you record a decent screencast? How about editing video? The answers are: it's not that bad; yes; and yes, but it will take longer. Which is to say that I found the Lenovo to be plenty snappy considering it has only 2GB of RAM and a Celeron processor. - -I did not load my entire photo library (remember there's only a 16GB SSD), but I did throw about 5 GB of photos in it and Darktable wasn't much slower than on my MacBook Pro, which has 16GB of RAM.
Browsing through large folders of images sometimes caused jumpy scrolling and thumbnails took longer to generate and come into view, but it was not nearly as horrible as I expected. In fact it was totally usable, other than the screen. - -I didn't really understand how much the screen on the n20p sucks until I tried to edit a photo. It was a washed out joke. Whole tonal ranges that Darktable was offering to adjust simply didn't show up in the image on the screen. - -I record a lot of screencasts and edit video a fair bit. I tested both on the Lenovo and am happy to report that it's possible, though exporting your edited HD video of any length will be something you're better off starting shortly before you head to bed. Which is to say it works fine, it just takes longer. - -## Hardware Problems - -Overall I was quite happy with the n20p's hardware, save the screen. The keyboard is one of the best I've used on a small device. There is a proprietary power jack, which is annoying if you want to have two -- there's no borrowing a compatible cable from another device. - -Another potential annoyance for Linux users is that the SD card doesn't sit flush, which means you can't leave it in and use it as a second drive. Regrettably this seems to be the overall trend in current laptop design, so it's not like Lenovo is the only one doing this, but it still sucks. - -I made a half-hearted attempt to crack open the case and see what the RAM/SSD upgrade potential is, but the case did not open easily and since I have to get it back to Lenovo in one piece I didn't push it. Also, the screen. Deal breaker. - -## Software Problems - -I couldn't get Xmodmap to remap the search key (what would be the caps lock key on most laptops) to control. I have no doubt it can be done, but my first attempt did not work and I didn't feel like spending the time to debug it.
I doubt this is a hardware problem though, since others have managed to get it working on Chromebooks; just a warning that your current .Xmodmap file might not work on a Chromebook. - -I also encountered some problems getting Unicode characters to display properly. I've never had this problem with Xfce before, but I doubt it was hardware related. I also somehow ended up with xfce4-mixer needing to run as root to work, which can be fixed by uninstalling and reinstalling it. - -Most of these things can probably be attributed more to my own ignorance than to anything directly related to the Chromebook/Linux experience. - -## Conclusion - -Don't buy the Lenovo n20p as a hackable Linux Chromebook. - -That said, Linux on a Chromebook is awesome. Or at least it has the potential to be. - -I want this to be my only computer. For that I want an HD IPS screen. There are a couple of Chromebooks on the market now with better screens, which is encouraging. I'd also want a Chromebook with an upgradable SSD (like the Acer 720 line). - -In an ideal world there would be a way to upgrade to 8GB of RAM, but it seems that soldered RAM is becoming more common. On the plus side, I now know I can get by with 4GB and this revolutionary new technology called patience. I'd also love to see an SD card slot that accommodates the entire card so it can act as a second drive, but this seems to be the thing least likely to actually happen. - -Currently all these things exist, but not in a single machine. Like I said in the Wired article, at this point the best Chromebook is an impossible Frankenbook. That means there has to be some compromise or some patience. I'm opting for the latter. I'd be willing to bet, though, that when CES rolls around next year there will be some very tempting Chromebooks available. - -I'll follow this up with more thoughts when the next review is up on Wired (which will likely be the 13in NVIDIA Tegra-based Acer).
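For the record, here's roughly what the search-key remap mentioned above looks like. This is a hypothetical ~/.Xmodmap sketch, not the exact file I used, and the keycode varies by machine (run `xev` and press the key to find yours):

```
! Hypothetical ~/.Xmodmap: turn the Chromebook search key into Control.
! 133 is a common keycode for Super_L; confirm it with xev first.
remove mod4 = Super_L
keycode 133 = Control_L
add control = Control_L
```

Load it with `xmodmap ~/.Xmodmap`. As I said, exactly this sort of thing failed to take on my machine, so treat it as a starting point, not a recipe.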
- - -[^1]: I don't believe the deal Google offers -- all your data in exchange for free, useful services -- is a good exchange. They get more than I do out of that. But I am privileged to know how to host things myself and I can afford to pay for services like Fastmail. Most people, unfortunately, are not as privileged, something I try to be mindful of when suggesting whether or not you should use a particular technology. - -[^2]: Something like Openbox or Xmonad would be even lighter, but requires a bit more work to install through Crouton. I went with the lightest of the easy-to-install options available in Crouton. - - -[1]: http://www.wired.com/2014/10/lenovo-n20p-chromebook/ -[2]: http://ethanschoonover.com/solarized -[3]: https://github.com/dnschneid/crouton/ -[4]: http://www.coreboot.org/SeaBIOS diff --git a/src/old-no-longer-pub/2014-10-28_google-progressive-enhancement.txt b/src/old-no-longer-pub/2014-10-28_google-progressive-enhancement.txt deleted file mode 100644 index 8008e41..0000000 --- a/src/old-no-longer-pub/2014-10-28_google-progressive-enhancement.txt +++ /dev/null @@ -1,31 +0,0 @@ ---- -title: Google Recommends Progressive Enhancement -pub_date: 2014-10-28 09:30:23 -slug: /blog/2014/10/google-progressive-enhancement -metadesc: Google has updated its webmaster tools guidelines to suggest developers use progressive enhancement. - ---- - -Just in case you don't subscribe to the [Longhandpixels newsletter][1] (for shame, sign up in the footer :-)), Google recently [updated its Webmaster Guidelines][2] to suggest that developers use progressive enhancement. - -Here's the quote: - ->Just like modern browsers, our rendering engine might not support all of the technologies a page uses. Make sure your web design adheres to the principles of progressive enhancement as this helps our systems (and a wider range of browsers) see usable content and basic functionality when certain web design features are not yet supported. 
- -As a fan of progressive enhancement ever since the term first appeared on Webmonkey eons ago, I would like to say thank you to Google. Thank you for throwing your weight behind progressive enhancement. - -I consider progressive enhancement the cornerstone on which all web design is built (literally and figuratively), but I also know from painful experience that not every developer agrees. I also know nothing shuts up a web developer faster than saying, *well, Google says...* - -What makes Google's announcement even more interesting is that it comes in the middle of a post that's actually telling developers that Google's spiders will now render CSS and JavaScript. Even as Google's spiders get smarter and better at rendering pages, the company still thinks progressive enhancement is important. - -If you're scratching your head wondering what progressive enhancement is, well, you can go read Steve Champeon's [original Webmonkey article][3] for some background and then check out Aaron Gustafson's [ALA article from 2008][4]. - -If you're interested in how progressive enhancement works in conjunction with responsive design, pick up a copy of my book, <cite>[Build a Better Web with Responsive Design][5]</cite>. I published an excerpt that covers some aspects of progressive enhancement, responsive design and [Why mobile-first is the right way to build responsive websites][6], if you'd like to get a feel for what I'm talking about.
- - -[1]: https://longhandpixels.net/newsletter/ -[2]: http://googlewebmastercentral.blogspot.com/2014/10/updating-our-technical-webmaster.html -[3]: http://hesketh.com/publications/progressive_enhancement_and_the_future_of_web_design.html -[4]: http://alistapart.com/article/understandingprogressiveenhancement -[5]: https://longhandpixels.net/books/responsive-web-design -[6]: https://longhandpixels.net/blog/2014/04/why-mobile-first-responsive-design diff --git a/src/old-no-longer-pub/2015-01-15_google-mobile-friendly-label.txt b/src/old-no-longer-pub/2015-01-15_google-mobile-friendly-label.txt deleted file mode 100644 index 58aa108..0000000 --- a/src/old-no-longer-pub/2015-01-15_google-mobile-friendly-label.txt +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: Google Calls Out 'Mobile-Friendly' Sites -pub_date: 2015-01-15 09:30:23 -slug: /blog/2015/01/google-mobile-friendly-label -metadesc: Google is now labeling mobile-friendly sites in search results and mobile-unfriendliness may soon hurt your search engine rankings - ---- - -Google has started adding a "mobile-friendly" label next to search results on mobile devices. - -<img src="/media/images/2015/google-mobile-friendly.png" alt="Google's new mobile friendly label"> - -As a user I love this [^1], and I'll be honest, I'd be pretty unlikely to click through to a site that doesn't get this label -- what's the point? Even if the site is the sole source of whatever I want there's a good chance I won't be able to view it on my phone. - -As a web developer I want to make sure my sites get that little "mobile-friendly" label. Here's Google's criteria for what makes a website "mobile-friendly": - ->* Avoids software that is not common on mobile devices, like Flash ->* Uses text that is readable without zooming ->* Sizes content to the screen so users don't have to scroll horizontally or zoom ->* Places links far enough apart so that the correct one can be easily tapped - -The first is hopefully obvious at this point. 
The third item is just basic responsive design 101. The others, though, are things that don't get nearly enough attention in responsive web design. Text big enough to read and links you can tap *should* be key elements of any good responsive design, but sadly, at least in my mobile browsing experience, they aren't. - -I'm happy to see Google start to label sites that fail on these counts and I hope it motivates developers to start taking the smaller, but still very important, aspects of responsive design -- like typography and white space -- more seriously. - -I'm also happy that the mobile-friendliness of a site may become a factor in how it ranks in search results. According to [the Google Webmaster Blog](http://googlewebmastercentral.blogspot.com/2014/11/helping-users-find-mobile-friendly-pages.html), Google sees the "mobile-friendly" labels as "a first step in helping mobile users to have a better mobile web experience. ***We are also experimenting with using the mobile-friendly criteria as a ranking signal***." (emphasis mine). - -<img src="/media/images/2015/google-mobile-friendly-lhp.jpg" alt="Longhandpixels gets the mobile friendly label"> - -There's a [test page](https://www.google.com/webmasters/tools/mobile-friendly/) where you can see how your sites do. This one gets the mobile-friendly label, natch. - -<div class="callout"><h3>Need help getting to "mobile-friendly"?</h3> <img src="/media/rwd-cover.png" alt="book cover" />The book <a href="https://longhandpixels.net/books/responsive-web-design">Build a Better Web with Responsive Web Design</a> is the fastest way to get from here to responsive. You'll learn the best ways to scale fonts with screen size and make sure your tap targets are big enough for comfortable browsing. Read the book, follow the suggestions and your site will be "mobile-friendly".</div> - -[^1]: I really wish DuckDuckGo would do this as well since I use it quite a bit more than Google these days.
diff --git a/src/old-no-longer-pub/2015-10-03_wordpress-1.txt deleted file mode 100644 index 846f751..0000000 --- a/src/old-no-longer-pub/2015-10-03_wordpress-1.txt +++ /dev/null @@ -1,48 +0,0 @@ -WordPress is pretty universally reviled among programmers. It's more or less the Walmart of code. A lot of the core code is very poorly written and in a language no one likes. That's never bothered me. What I don't like is that it encourages mixing presentation code and logic. It also encourages terrible behavior by users[^1] and is at least partly responsible for how damn slow the web has become[^2]. - -WordPress is also often the best choice for clients. - -Why is something so awful often the best choice? Because clients have budgets and paying me to set up and lightly customize WordPress is dramatically cheaper than paying me to write you something totally custom (and really awesome) using Django or Rails. Budgets are part of reality and with WordPress -- in spite of all that's wrong with it -- the client's money goes further[^3]. - -WordPress is also fantastically good at some things. The admin is generally very usable and thoughtfully created; the comment system is hands down the best on the web; and it has helped millions, possibly billions, of people get their ideas on the web. - -That does not, however, explain how *I* was able to get past my dislike of WordPress. My dislike of WordPress centers around the way it mixes logic -- PHP code in this case -- and presentation (HTML). It's a terrible way to work and a terrible thing to try to maintain. - -I want all my code in one file, all my HTML in another. The code file uses logic to get the information needed and put it in variables. It then passes those variables to the HTML, which displays them, and all is well. That's how pretty much everything designed for the web since about 1999 has worked. Except for WordPress.
- -I've built dozens of WordPress sites over the years and for most of that time I've hated every minute of it, which, trust me, is no way to make a living. - -All that changed about 4 months ago when I discovered [Timber](https://github.com/jarednova/timber). Timber is a plugin that lets you develop WordPress themes using object-oriented code and the [Twig template engine](http://twig.sensiolabs.org/). - -Timber takes all that is bad about WordPress's coding style and replaces it with the kind of clean, sane separation of code web developers are used to. It makes WordPress behave a bit more like Rails or Django. - -More importantly, it lets me build WordPress sites without hating every minute of it. I actually like building WordPress sites using Timber. It's not as much fun as using Django, but it's close enough that I ported this site over to WordPress (from a custom Python-based static publishing system). - -A couple of things to note if this sounds good to you. First, yes, Timber is a WordPress plugin. No, from my testing, it won't slow your site down. But yes, it does introduce a dependency on which all your code will hang. Porting from Timber to non-Timber will be non-trivial. Make sure you're okay with that before you dive in. - -To give you some sense of what it's like to work with Twig, here's what your template code looks like: - -~~~.language-markup -{% extends "base.twig" %} -{% block content %} -<article class="h-entry"> -<h1 class="p-name">{{post.title}}</h1> -<img src="{{post.thumbnail.src}}" alt="image alt text" /> -<div class="body"> - {{post.content}} -</div> -</article> -{% endblock %} -~~~ - -Yes, that looks a lot like Django template code. The Twig templating system is based on Django's (so is nearly every other templating project I've come across lately; Django is not perfect, but its template system is clearly pretty close). - -You still create .php files with Timber, they're just all logic.
You instantiate Timber objects and then query for whatever posts/pages/custom post type data you want. Then you pass that on to the template file. Again, that will probably sound familiar to most non-WordPress developers. - -Who cares? Well, if you hate WordPress because of the way it forces you to mix logic and presentation code then Timber might make your life a bit brighter. If you're perfectly happy with WordPress as is, then why the hell are you still reading this? - - -[^1]: For a long time the WordPress Codex actually said that your wp-content folder was "intended by Developers to be completely writable by all (owner/user, group, and public)." Yup, seriously. It's since been changed, so congrats to the WordPress team on cracking open [Internet Security for Dummies](http://www.amazon.com/Norton-Internet-Security-Dummies-Computers/dp/0764575775), but here's the [Internet Archive page](https://web.archive.org/web/20110325073349/http://codex.wordpress.org/Hardening_WordPress) of that advice in case you don't believe me. - -[^2]: WordPress supposedly powers about a 1/3 of the web so that alone makes it responsible. But search the web for almost any problem with WordPress and the first bit of advice you'll get is "just install ______ plugin". That's great on one hand, because you don't need to know any code, but because of how WordPress bootstraps plugins it also tends to slow sites to a crawl (and load dozens of external scripts in many cases). - -[^3]: WordPress is also more widely used and therefore there are more people capable of maintaining it. That means clients have a more future-proof solution than a customized system written in Python or Ruby. diff --git a/src/pg-backup.txt b/src/pg-backup.txt deleted file mode 100644 index e1b6b10..0000000 --- a/src/pg-backup.txt +++ /dev/null @@ -1,191 +0,0 @@ -When it comes to backups I'm paranoid and lazy. That means I need to automate the process of making redundant backups. 
- -Pretty much everything to do with luxagraf lives in a single PostgreSQL database that gets backed up every night. To make sure I have plenty of copies of those backup files I download them to various other machines and servers around the web. That way I have copies of my database files on this server, another backup server, my local machine, several local backup hard drives, in Amazon S3 and Amazon Glacier. Yes, that's overkill, but it's all so ridiculously easy, why not? - -Here's how I do it. - -## Make Nightly Backups of PostgreSQL with `pg_dump` and `cron` - -The first step is to regularly dump your database. To do that PostgreSQL provides the handy `pg_dump` command. If you want a good overview of `pg_dump` check out the excellent [PostgreSQL manual]. Here's the basic syntax: - -~~~~console -pg_dump -U user -hhostname database_name > backup_file.sql -~~~~ - -So, if you had a database named mydb and you wanted to back it up to a file that starts with the name of the database and then includes today's date, you could do something like this: - -~~~~console -pg_dump -U user -hlocalhost mydb > mydb.`date '+%Y%m%d'`.sql -~~~~ - -That's pretty useful, but it's also potentially quite a big file. Thankfully we can just pipe the results to gzip to compress them: - -~~~~console -pg_dump -U user -hlocalhost mydb | gzip -c > mydb.`date '+%Y%m%d'`.sql.gz -~~~~ - -That's pretty good. In fact for many scenarios that's all you'll need. Plug that into your cron file by typing `crontab -e` and adding this line to make a backup every night at midnight (note the backslashes: `%` has special meaning in crontab entries and must be escaped): - -~~~~bash -0 0 * * * pg_dump -U user -hlocalhost mydb | gzip -c > mydb.`date '+\%Y\%m\%d'`.sql.gz -~~~~ - -For a long time that was all I did. But then I started running a few other apps that used PostgreSQL databases (like a version of [Tiny Tiny RSS](https://tt-rss.org/gitlab/fox/tt-rss/wikis/home)), so I needed to have quite a few lines in there.
Plus I wanted to perform a [VACUUM](http://www.postgresql.org/docs/current/static/sql-vacuum.html) on my main database every so often. So I whipped up a shell script. As you do. - -Actually most of this I cobbled together from sources I've unfortunately lost track of since. Which is to say I didn't write this from scratch. Anyway here's the script I use: - -~~~~bash -#!/bin/bash -# -# Daily PostgreSQL maintenance: vacuuming and backups. -# -## -set -e -for DB in $(psql -l -t -U postgres -hlocalhost |awk '{ print $1}' |grep -vE '^-|:|^List|^Name|template[0|1]|postgres|\|'); do - echo "[`date`] Maintaining $DB" - echo 'VACUUM' | psql -U postgres -hlocalhost -d $DB - DUMP="/path/to/backup/dir/$DB.`date '+%Y%m%d'`.sql.gz" - pg_dump -U postgres -hlocalhost $DB | gzip -c > $DUMP - PREV="/path/to/backup/dir/$DB.`date -d'1 day ago' '+%Y%m%d'`.sql.gz" - md5sum -b $DUMP > $DUMP.md5 - if [ -f $PREV.md5 ] && [ "`cut -d' ' -f1 $PREV.md5`" = "`cut -d' ' -f1 $DUMP.md5`" ]; then - rm $DUMP $DUMP.md5 - fi -done -~~~~ - -Copy that code and save it in a file named psqlback.sh. Then make it executable: - -~~~~console -chmod u+x psqlback.sh -~~~~ - -Now before you run it, let's take a look at what's going on. - -First we're creating a loop so we can back up all our databases. - -~~~~bash -for DB in $(psql -l -t -U postgres -hlocalhost |awk '{ print $1}' |grep -vE '^-|:|^List|^Name|template[0|1]|postgres|\|'); do -~~~~ - -This looks complicated because we're using `awk` and `grep` to parse some output, but basically all it's doing is querying PostgreSQL to get a list of all the databases (using the `postgres` user so we can access all of them). Then we pipe that to `awk` and `grep` to extract the name of each database and ignore a bunch of stuff we don't want. - -Then we store the name of the database in the variable `DB` for the duration of the loop. - -Once we have the name of the database, the script outputs a basic logging message that says it's maintaining the database and then runs VACUUM.
- -The next two lines should look familiar: - -~~~~bash -DUMP="/path/to/backup/dir/$DB.`date '+%Y%m%d'`.sql.gz" -pg_dump -U postgres -hlocalhost $DB | gzip -c > $DUMP -~~~~ - -That's very similar to what we did above; I just stored the file path in a variable because it gets used again. The next thing we do is build the name of yesterday's file: - -~~~~bash -PREV="/path/to/backup/dir/$DB.`date -d'1 day ago' '+%Y%m%d'`.sql.gz" -~~~~ - -Then we calculate the md5sum of our dump: - -~~~~bash -md5sum -b $DUMP > $DUMP.md5 -~~~~ - -Then we compare that to yesterday's sum -- comparing just the hash field, since the .md5 files also contain the date-stamped filenames, which will never match -- and if they're the same we delete our dump since we already have a copy. - -~~~~bash - if [ -f $PREV.md5 ] && [ "`cut -d' ' -f1 $PREV.md5`" = "`cut -d' ' -f1 $DUMP.md5`" ]; then - rm $DUMP $DUMP.md5 - fi -~~~~ - -Why? Well, there's no need to store a new backup if it matches the previous one exactly. Since sometimes nothing changes on this site for a few days, weeks, months even, this can save a good bit of space. - -Okay now that you know what it does, let's run it: - -~~~~console -./psqlback.sh -~~~~ - -If everything went well it should have asked you for a password and then printed out a couple of messages about maintaining various databases. That's all well and good for running it by hand, but who is going to put in the password when cron is the one running it? - -### Automate Your Backups with `cron` - -First let's set up cron to run this script every night around midnight. Open up crontab: - -~~~~console -crontab -e -~~~~ - -Then add a line to call the script every night at 11:30PM: - -~~~~console -30 23 * * * /home/myuser/bin/psqlback.sh > psqlbak.log -~~~~ - -You'll need to adjust the path to match your server, but otherwise that's all you need (if you'd like to run it less frequently or at a different time, you can read up on the syntax in the cron manual). - -But what happens when we're not there to type in the password? Well, the script fails. - -There are a variety of ways we can get around this.
In fact the [PostgreSQL docs](http://www.postgresql.org/docs/current/static/auth-methods.html) cover everything from LDAP auth to peer auth. The latter is actually quite useful, though a tad bit complicated. I generally use the easiest method -- a password file. The trick to making it work for cron jobs is to create a file in your user's home directory called `.pgpass`. - -Inside that file you can provide login credentials for any database and user on any host and port. The format looks like this: - -~~~~vim -hostname:port:database:username:password -~~~~ - -You can use * as a wildcard if you need it. Here's what I use: - -~~~~vim -localhost:*:*:postgres:passwordhere -~~~~ - -I hate storing a password in a plain text file, but I haven't found a better way to do this. - -To be fair, assuming your server security is fine, the `.pgpass` method should be fine. Also note that Postgres will ignore this file if its permissions are any broader than 600 (that is, only the file's owner can read and write it). Let's set that: - -~~~~console -chmod 600 .pgpass -~~~~ - -Now we're all set. Cron will run our script every night at 11:30 PM and we'll have a compressed backup file of all our PostgreSQL data. - -## Automatically Moving It Offsite - -Now we have our database backed up to a file. That's a start. That saves us if PostgreSQL does something wrong or somehow becomes corrupted. But we still have a single point of failure -- what if the whole server crashes and can't be recovered? We're screwed. - -To solve that problem we need to get our data off this server and store it somewhere else. - -There's quite a few ways we could do this and I have done most of them. For example we could install [s3cmd](http://s3tools.org/s3cmd) and send them over to an Amazon S3 bucket. I actually do that, but it requires you pay for S3. In case you don't want to do that, I'm going to stick with something that's free -- Dropbox.
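If you'd rather take the s3cmd route mentioned above, it's a one-liner once you've run `s3cmd --configure` with your AWS keys. The bucket and file names here are hypothetical:

```shell
# Push last night's dump to S3 (bucket name is made up; use your own).
s3cmd put /path/to/backup/dir/mydb.20141028.sql.gz s3://my-backup-bucket/postgres/
```

Drop a line like that into the same nightly cron job and you have an offsite copy with no Dropbox involved -- it just isn't free.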
- -Head over to the Dropbox site and follow their instructions for [installing Dropbox on a headless Linux server](https://www.dropbox.com/en/install?os=lnx). It's just one line of cut and pasting, though you will need to authorize Dropbox with your account. - -**BUT WAIT** - -Before you authorize the server to use your account, well, don't. Go create a second account solely for this server. Do that, then authorize that new account for this server. - -Now go back to your server and symlink the backup folder you used in the script above into the Dropbox folder. - -~~~~console -cd ~/Dropbox -ln -s ~/path/to/pgbackup/directory . -~~~~ - -Then go back to Dropbox, log in to the second account, find that database backup folder you just symlinked in and share it with your main Dropbox account. - -This way, should something go wrong and the Dropbox folder on your server become compromised, at least the bad guys only get your database backups and not the entirety of your documents folder or whatever might be in your normal Dropbox account. - -Credit to [Dan Benjamin](http://hivelogic.com/), who is the first person I heard mention this dual-account idea. - -The main thing to note about this method is that you're limited to 2GB of storage (the max for a free Dropbox account). That's plenty of space in most cases. Luxagraf has been running for more than 10 years, stores massive amounts of geodata in PostgreSQL, along with close to 1000 posts of various kinds, and a full compressed DB dump is still only about 35MB. So I can store well over 60 days' worth of backups, which is plenty for my purposes (in fact I only store about half that). - -So create your second account, link your server installation to that and then share that folder with your main Dropbox account. - -The last thing I suggest you do, because Dropbox is not a backup service, but a syncing service, is **copy** the backup files out of the Dropbox folder on your local machine to somewhere else on that machine.
Not move, but **copy**. So leave a copy in Dropbox and make a second copy on your local machine outside of the Dropbox folder. - -If you dislike Dropbox (I don't blame you, I no longer actually use it for anything other than this) there are other ways to accomplish the same thing. The already mentioned s3cmd could move your backups to Amazon S3, good old `scp` could move them to another server and of course you can always download them to your local machine using `scp` or `rsync` (or SFTP, but then that wouldn't be automated). - -Naturally I recommend you do all these things. I sync my nightly backups to my local machine with Dropbox and `scp` those to a storage server. Then I use s3cmd to send weekly backups to S3. That gives me three offsite backups, which is enough even for my paranoid, digitally distrustful self. diff --git a/src/piwik.txt deleted file mode 100644 index 65e43fa..0000000 --- a/src/piwik.txt +++ /dev/null @@ -1,25 +0,0 @@ -If you follow tech circles at all you've probably read something lately about how ad-blockers are going to destroy the web. Or, more humorously to my mind, that they're "immoral". I think they're neither, but they are most definitely not going away. - -Curiously, the browser add-on at the center of the controversy is Ghostery, which I've written about before not as an ad-blocker, which it really isn't, but as a privacy tool. - -To my mind that pretty much nails the debate. If you see Ghostery as a tool for preserving your privacy and blocking attempts to track you, you'll be a supporter. If you see Ghostery as a tool to block ads you'll probably be opposed to it. - -It should be obvious, since I wrote a tutorial about how to install and use it, that I think Ghostery is great. I wouldn't use a browser without it. - -That said, I think that as an erstwhile publisher, or perhaps just as a participant in the open web, I have an obligation to explore all the ways in which I can make Ghostery unnecessary for you.
So I sat down and looked over this site and my personal site (luxagraf.net) to see what I could do to protect my readers from being tracked. I serve all my sites over HTTPS, which I guess is good, though sometimes I worry it's already been compromised and therefore creates a false sense of security. - -And, while I don't set or use any cookies that track you, I was loading a tracker via Google Analytics. I don't have a particular problem with Google Analytics, but that's not the point. The point is that you might have a problem with it. All you've really agreed to in following a link here is to see what information I might have. You didn't also agree to let Google know what you're doing and, by extension, anyone Google wants to share that info with. - -Not that Google is evil, but Google is beyond our control. Neither you nor I have any control over the data it collects. In the case of Analytics that means I can't, for example, delete all the data in it that's more than 6 months old. Nor do I have any control over what Google might do with all that data it's collected about what you do here. Yes, it's supposedly anonymous data, but I truly hope that by now no one still believes any tracking data can truly be anonymous. - -I decided that I could not in good conscience continue to expose my readers to a script that tracks them and stores information about them while at the same time advocating a tool like Ghostery. - -That kind of hypocrisy doesn't sit well with me. So I deleted the Google Analytics script from all my sites (I'd already made a similarly inspired decision to pull my mailing list out of MailChimp). - -However, I found that I missed that analytics data. The web always feels a little like screaming into a black hole; the data we get from tools like Google Analytics makes it a little less so. I could see that people did indeed find my [tutorial on setting up Nginx on Debian]() useful or that almost no one ever visits my [Ghostery tutorial]().
It also helps me see connections. Without it I would have no idea that several posts here are referenced on Stack Overflow or linked to from other articles around the web. - -Without analytics the web feels less friendly, less collaborative and more like futile shouting into a black hole. I didn't like it. - -Now, I could analyze my server logs with Webalizer, and I have set that up in the past, but let's face it, it's pretty fugly. Fortunately there's Piwik, a self-hosted analytics package that offers everything I liked about Google Analytics, but keeps me in charge and even lets me turn off cookie-based tracking. So I can see who's coming here, where they're coming from, what they seem to be enjoying **and that's it**. I have no idea where you go from here, no idea what you do next and, most importantly, neither does anyone else. diff --git a/src/published/2013-09-20_whatever-happened-to-webmonkey.txt deleted file mode 100644 index 112ce46..0000000 --- a/src/published/2013-09-20_whatever-happened-to-webmonkey.txt +++ /dev/null @@ -1,43 +0,0 @@ ----
-title: Whatever Happened to Webmonkey.com?
-pub_date: 2013-09-20 21:12:31
-slug: /blog/2013/09/whatever-happened-to-webmonkey
-metadesc: Wired has shut down Webmonkey.com for the fourth and probably final time.
-
----
-
-People on Twitter have been asking what's up with [Webmonkey.com][1]. Originally I wanted to get this up on Webmonkey, but I got locked out of the CMS before I managed to do that, so I'm putting it here.
-
-Earlier this year Wired decided to stop producing new content for Webmonkey.
-
-For those keeping track at home, this is the fourth, and I suspect final, time Webmonkey has been shut down (previously it was shut down in 1999, 2004 and 2006).
-
-I've been writing for Webmonkey.com since 2000, full time since 2006 (when it came back from the dead for a third run). And for the last two years I have been the sole writer, editor and producer of the site.
-
-Like so many of you, I learned how to build websites from Webmonkey. But it was more than just great tutorials and how-tos. Part of what made Webmonkey great was that it was opinionated and rough around the edges. Webmonkey was not the product of professional writers; it was written and created by the web nerds building Wired's websites. It was written by people like us, for people like us.
-
-I'll miss Webmonkey not just because it was my job for many years, but because at this point it feels like a family dog to me: it's always been there and suddenly it's not. Sniff. I'll miss you Webmonkey.
-
-Quite a few people have asked me why it was shut down, but unfortunately I don't have many details to share. I've always been a remote employee, not in San Francisco at all in fact, and consequently somewhat out of the loop. What I can say is that Webmonkey's return to Wired in 2006 was the doing of long-time Wired editor Evan Hansen ([now at Medium][2]). Evan was a tireless champion of Webmonkey and saved it from the Conde Nast ax several times. He was also one of the few at Wired who "got" Webmonkey. When Evan left Wired earlier this year I knew Webmonkey's days were numbered.
-
-I don't begrudge Wired for shutting Webmonkey down. While I have a certain nostalgia for its heyday, even I know it's been a long time since Webmonkey was leading the way in web design. I had neither the staff nor the funding to make Webmonkey anything like its early 2000s self. In that sense I'm glad it was shut down rather than simply fading further into obscurity.
-
-I am very happy that Wired has left the site in place. As far as I know Webmonkey (and its ever-popular cheat sheets, which still get a truly astounding amount of traffic) will remain available on the web. That said, note to the [Archive Team][3], it wouldn't hurt to create a backup. Sadly, many of the very earliest writings have already been lost in the various CMS transitions over the years and even much of what's there now has incorrect bylines. Still, at least most of it's there. For now.
-
-As for me, I've decided to go back to what I enjoyed most about the early days of Webmonkey: teaching people how to make cool stuff for the web.
-
-To that end I'm currently working on a book about responsive design, which I'm hoping to make available by the end of October. If you're interested drop your email in the box below and I'll let you know when it's out (alternately you can follow [@LongHandPixels][4] on Twitter).
-
-If you have any questions or want more details use the comments box below.
-
-In closing, I'd like to thank some people at Wired -- thank you to my editors over the years, especially [Michael Calore][5], [Evan Hansen][6] and [Leander Kahney][7] who all made me a much better writer. Also thanks to Louise for always making sure I got paid. And finally, to everyone who read Webmonkey and contributed over the years, whether with articles or even just a comment, thank you.
-
-Cheers and, yes, thanks for all the bananas.
-
-[1]: http://www.webmonkey.com/
-[2]: https://medium.com/@evanatmedium
-[3]: http://www.archiveteam.org/index.php?title=Main_Page
-[4]: https://twitter.com/LongHandPixels
-[5]: http://snackfight.com/
-[6]: https://twitter.com/evanatmedium
-[7]: http://www.cultofmac.com/about/
diff --git a/src/published/2014-02-07_html5-placeholder-label-search-forms.txt b/src/published/2014-02-07_html5-placeholder-label-search-forms.txt deleted file mode 100644 index b1c083e..0000000 --- a/src/published/2014-02-07_html5-placeholder-label-search-forms.txt +++ /dev/null @@ -1,102 +0,0 @@ ----
-title: HTML5 Placeholder as a Label in Search Forms
-pub_date: 2014-02-07 14:38:20
-slug: /blog/2014/02/html5-placeholder-label-search-forms
-metadesc: Using HTML5's placeholder attribute instead of a label is never a good idea. Except when maybe it is.
-tags: Best Practices
-code: True
-tutorial: True
----
-
-The HTML5 form input attribute `placeholder` is a tempting replacement for the good old `<label>` form element.
-
-In fact the web is littered with sites that use `placeholder` instead of labels (or worse, JavaScript to make the `value` attribute act like `label`).
-
-Just because a practice is widespread does not make it a *best* practice though. Remember "skip intro"? I rest my case. Similarly, **you should most definitely not use `placeholder` as a substitute for form labels**. It may be a pattern on today's web, but it's a shitty pattern.
-
-Labels help users complete forms. There are [mountains](http://rosenfeldmedia.com/books/web-form-design/) of [data](http://css-tricks.com/label-placement-on-forms/) and [eye tracking studies](http://www.uxmatters.com/mt/archives/2006/07/label-placement-in-forms.php) to back this up. If you want people to actually fill out your forms (as opposed, I guess, to your forms just looking "so clean, so elegant") then you want to use labels. The best forms, from a usability standpoint, are forms with non-bold, left aligned labels above the field they label.
-
-Again, **using placeholder as a substitute for labels is a horrible UI pattern that you should (almost) never use.**
-
-Is that dogmatic enough for you? Oh wait, *almost* never. Yes, I think there is one specific case where maybe this pattern makes sense: search forms.
-
-Search forms are so ubiquitous and so well understood at this point that it may be redundant to have a label that says "search", a placeholder that also says "search" and a button that says "search" as well. I think just two of those would be fine.
-
-We could skip the placeholder text, which should really be more of a hint anyway -- e.g. "Jane Doe" rather than "Your Name" -- but what if we want to dispense with the label to save a bit of screen real estate, which can be at a premium on smaller viewports?
-
-The label should still be part of the actual HTML, whether your average sighted user actually sees it or not. We need it there for accessibility. But with search forms, well, maybe you can tuck that label away, out of sight.
-
-Progressive enhancement dictates that the labels should most definitely be there though. Let's consider a simple search form example:
-
-~~~.language-markup
-<form action="/search" method="get">
- <label id="search-label" for="search">Search:</label>
- <input type="text" name="search" id="search" value="" placeholder="Search LongHandPixels">
- <input class="btn" type="submit" value="Search">
-</form>
-~~~
-
-Here we have our `<label>` tag and use the `for` attribute to bind it with the text input that is our search field. So far, so good for best practices.
-
-Here's what I think is the progressive enhancement route for search forms: use the HTML above and then use JavaScript and CSS to hide away the label when it makes sense to do so. In other words, don't just hide the label in CSS.
-
-Hiding the label with something like `label {visibility: hidden;}` is a bad idea. That, and its evil cousin `display: none;` hide elements from screen readers and other assistive devices. Instead we'd want to do something like this:
-
-~~~.language-css
-.search-form-label-class {
- position: absolute;
- left: -999em;
-}
-~~~
-
-Check out Aaron Gustafson's ALA article <cite>[Now You See Me](http://alistapart.com/article/now-you-see-me)</cite> for more details on the various ways to hide things visually without hiding them from people who may need them the most.
-
-So this code is better, our label is off-screen and the placeholder text combined with descriptive button text serve the same purpose and still make the function of the form clear. The main problem we have right now is we've hidden the label in every browser, even browsers that won't display the `placeholder` attribute. That's not so great.
-
-In this case you might argue that the button still makes the form function relatively clear, but I think we can do better. Instead of adding a rule to our stylesheet, let's use a bit of JavaScript to apply our CSS only if the browser understands the `placeholder` attribute. Here's a bit of code to do that:
-
-~~~.language-javascript
-<script>
-if (("placeholder" in document.createElement("input"))) {
- document.getElementById("search-label").style.position= 'absolute';
- document.getElementById("search-label").style.left= '-999em';
-}
-</script>
-~~~
-
-This is just plain JavaScript; if your site already has `jQuery` or some other library running then by all means use its functions to select your elements and apply the CSS. The point is the `if` statement, which tests to see if the current browser supports the `placeholder` attribute. If it does then we hide the label off-screen; if it doesn't then nothing happens. Either way the element remains accessible to screen readers.
-
-If you'd like to see it in action, here's a working demo: [HTML5 placeholder as a label in search form](https://longhandpixels.net/demos/html5-placeholder/)
-
-So is this a good idea? Honestly, I don't know. It might be splitting hairs. I think it's okay for search forms or other single field forms where there's less chance users will be confused when the placeholder text disappears.
-
-Pros:
-
-* Saves space (no label, which can be a big help on small viewports)
-* Still offers good accessibility
-
-Cons:
-
-* **Technically this is wrong**. I'm essentially using JavaScript to make `placeholder` take the place of a label, which is not what `placeholder` is for.
-* **Placeholders can be confusing**. Some people won't start typing a search term because they're waiting for the field to clear. Others will think that the field is filled and the form can be submitted. See Chris Coyier's CSS-Tricks site for some ideas on how [you can make it apparent that the field](http://css-tricks.com/hang-on-placeholders/) is ready for input.
-* **Not good for longer forms**. Again, multi-field forms need labels. Placeholders disappear when you start typing. If you forget which field you're in after you start typing, placeholders are no help. Despite the fact that the web is littered with forms that do this, please don't. Use labels on longer forms.
-
-I'm doing this with the search form on this site. I started to do the same with my new mailing list sign up form (which isn't live yet), which is what got me writing this, thinking aloud as it were. My newsletter sign up form will have two fields, email and name (only email is required), and after thinking about this some more I deleted the JavaScript and left the labels.
-
-If I shorten the form to just email, which I may, depending on some A/B testing I'm doing, then I may use this technique there too (and A/B test again). As of now though I don't think it's a good idea for that form for the reasons mentioned above.
-
-I'm curious to hear from users, what do you think of this pattern? Is this okay? Unnecessary? Bad idea? Let me know what you think.
-
-## Further Reading:
-
-* The W3C's Web Platform Docs on the [placeholder](http://docs.webplatform.org/wiki/html/attributes/placeholder) attribute.
-* Luke Wroblewski's book <cite>[Web Form Design](http://rosenfeldmedia.com/books/web-form-design/)</cite> is the Bible of building good forms.
-* A little taste of Wroblewski's book over on his blog: [Web Application Form Design](http://www.lukew.com/ff/entry.asp?1502).
-* UXMatter's once did some [eyeball tracking studies](http://www.uxmatters.com/mt/archives/2006/07/label-placement-in-forms.php) based on Wroblewski's book.
-* Aaron Gustafson's ALA article [Now You See Me](http://alistapart.com/article/now-you-see-me), which talks about best practices for hiding elements with JavaScript.
-* CSS-Tricks: [Hang On Placeholders](http://css-tricks.com/hang-on-placeholders/).
-* [Jeremy Keith on `placeholder`](http://adactio.com/journal/6147/).
-* CSS-Tricks: [Places It's Tempting To Use Display: None; But Don't](http://css-tricks.com/places-its-tempting-to-use-display-none-but-dont/). Seriously, don't.
-{^ .list--indented }
-
-
diff --git a/src/published/2014-02-10_install-nginx-debian-ubuntu.txt b/src/published/2014-02-10_install-nginx-debian-ubuntu.txt deleted file mode 100644 index e6cc835..0000000 --- a/src/published/2014-02-10_install-nginx-debian-ubuntu.txt +++ /dev/null @@ -1,250 +0,0 @@ ----
-title: Install Nginx on Debian/Ubuntu
-pub_date: 2014-02-10 11:35:31
-slug: /blog/2014/02/install-nginx-debian-ubuntu
-tags: Web Servers
-metadesc: A complete guide to installing and configuring Nginx to serve static files (on a Digital Ocean or similar VPS)
-code: True
-tutorial: True
-
----
-
-I recently helped a friend set up his first Nginx server and in the process realized I didn't have a good working reference for how I set up Nginx.
-
-So, for myself, my friend and anyone else looking to get started with Nginx, here's my somewhat opinionated guide to installing and configuring Nginx to serve static files. Which is to say, this is how I install and set up Nginx to serve my own and my clients' static files, whether those files are simply stylesheets, images and JavaScript or full static sites like this one. What follows is what I believe to be best practices for Nginx[^1]; if you know better, please correct me in the comments.
-
-[This post was last updated <span class="dt-updated updated" datetime="2014-11-05T12:04:25" itemprop="datePublished"><span>05 November 2014</span></span>]
-
-## Nginx Beats Apache for Static Content[^2]
-
-I've written before about how static website generators like [Jekyll](http://jekyllrb.com), [Pelican](http://blog.getpelican.com) and [Cactus](https://github.com/koenbok/Cactus) are a great way to prototype websites in a hurry. They're also great tools for actually managing sites, not just "blogs". There are in fact some very large websites powered by these "blogging" engines. President Obama's very successful fundraising website [ran on Jekyll](http://kylerush.net/blog/meet-the-obama-campaigns-250-million-fundraising-platform/).
-
-Whether you're just building a quick live prototype or running an actual live website of static files, you'll need a good server. So why not use Apache? Simply put, Apache is overkill.
-
-Unlike Apache, which is a jack-of-all-trades server, Nginx was really designed to do just a few things well, one of which is to offer a simple, fast, lightweight server for static files. And Nginx is really, really good at serving static files. In fact, in my experience Nginx with PageSpeed, gzip, far future expires headers and a couple other extras I'll mention is faster than serving static files from Amazon S3[^3] (potentially even faster in the future if Verizon and its ilk [really do](http://netneutralitytest.com/) start [throttling cloud-based services](http://davesblog.com/blog/2014/02/05/verizon-using-recent-net-neutrality-victory-to-wage-war-against-netflix/)).
-
-## Nginx is Different from Apache
-
-In its quest to be lightweight and fast, Nginx takes a different approach to modules than you're probably familiar with in Apache. In Apache you can dynamically load various features using modules. You just add something like `LoadModule alias_module modules/mod_alias.so` to your Apache config files and just like that Apache loads the alias module.
-
-Unlike Apache, Nginx cannot dynamically load modules. Nginx has available only what was compiled in when you installed it.
-
-That means if you really want to customize and tweak it, it's best to install Nginx from source. You don't *have* to install it from source. But if you really want a screaming fast server, I suggest compiling Nginx yourself, enabling and disabling exactly the modules you need. Installing Nginx from source allows you to add some third-party tools, most notably Google's PageSpeed module, which has some fantastic tools for speeding up your site.
-
-Luckily, installing Nginx from source isn't too difficult. Even if you've never compiled any software from source, you can install Nginx. The remainder of this post will show you exactly how.
-
-## My Ideal Nginx Setup for Static Sites
-
-Before we start installing, let's go over the things we'll be using to build a fast, lightweight server with Nginx.
-
-* [Nginx](http://nginx.org).
-* [SPDY](http://www.chromium.org/spdy/spdy-protocol) -- Nginx offers "experimental support for SPDY", but it's not enabled by default. We're going to enable it when we install Nginx. In my testing SPDY support has worked without a hitch, experimental or otherwise.
-* [Google Page Speed](https://developers.google.com/speed/pagespeed/module) -- Part of Google's effort to make the web faster, the Page Speed Nginx module "automatically applies web performance best practices to pages and associated assets".
-* [Headers More](https://github.com/agentzh/headers-more-nginx-module/) -- This isn't really necessary from a speed standpoint, but I often like to set custom headers and hide some headers (like which version of Nginx your server is running). Headers More makes that very easy.
-* [Naxsi](https://github.com/nbs-system/naxsi) -- Naxsi is a "Web Application Firewall module for Nginx". It's not really all that important for a server limited to static files, but it adds an extra layer of security should you decide to use Nginx as a proxy server down the road.
-
-So we're going to install Nginx with SPDY support and three third-party modules.
-
-Okay, here's the step-by-step process to installing Nginx on a Debian 7 (or Ubuntu) server. If you're looking for a good, cheap VPS host I've been happy with [Digital Ocean](https://www.digitalocean.com/?refcode=3bda91345045) (that's an affiliate link that will help support LongHandPixels; if you prefer, here's a non-affiliate link: [link](https://www.digitalocean.com/))
-
-The first step is to make sure you're installing the latest release of Nginx. To do that check the [Nginx download page](http://nginx.org/en/download.html) for the latest version of Nginx (at the time of this update that's 1.7.7, the version used in the examples below).
-
-Okay, SSH into your server and let's get started.
-
-While these instructions will work on just about any server, the one thing that will be different is how you install the various prerequisites needed to compile Nginx.
-
-On a Debian/Ubuntu server you'd do this:
-
-~~~.language-bash
-$ sudo apt-get -y install build-essential zlib1g-dev libpcre3 libpcre3-dev libbz2-dev libssl-dev tar unzip
-~~~
-
-If you're using RHEL/Cent/Fedora you'll want these packages:
-
-~~~.language-bash
-$ sudo yum install gcc-c++ pcre-devel zlib-devel openssl-devel make
-~~~
-
-After you have the prerequisites installed it's time to grab the latest version of Google's Pagespeed module. Google's [Nginx PageSpeed installation instructions](https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source) are pretty good, so I'll reproduce them here.
-
-First grab the latest version of PageSpeed, which is currently 1.9.32.2, but check the sources since it updates frequently and change this first variable to match the latest version.
-
-~~~.language-bash
-NPS_VERSION=1.9.32.2
-wget https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}-beta.zip
-unzip release-${NPS_VERSION}-beta.zip
-~~~
-
-Now, before we compile PageSpeed we need to grab `psol`, which PageSpeed needs to function properly. So, let's `cd` into the `ngx_pagespeed-release-${NPS_VERSION}-beta` folder and grab `psol`:
-
-~~~.language-bash
-cd ngx_pagespeed-release-${NPS_VERSION}-beta/
-wget https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz
-tar -xzvf ${NPS_VERSION}.tar.gz
-cd ../
-~~~
-
-Alright, so the `ngx_pagespeed` module is all set up and ready to install. All we have to do at this point is tell Nginx where to find it.
-
-Now let's grab the Headers More and Naxsi modules as well. Again, check the [Headers More](https://github.com/agentzh/headers-more-nginx-module/) and [Naxsi](https://github.com/nbs-system/naxsi) pages to see what the latest stable version is and adjust the version numbers in the following accordingly.
-
-~~~.language-bash
-HM_VERSION=0.25
-wget https://github.com/agentzh/headers-more-nginx-module/archive/v${HM_VERSION}.tar.gz
-tar -xvzf v${HM_VERSION}.tar.gz
-NAX_VERSION=0.53-2
-wget https://github.com/nbs-system/naxsi/archive/${NAX_VERSION}.tar.gz
-tar -xvzf ${NAX_VERSION}.tar.gz
-~~~
-
-Now we have all three third-party modules ready to go, the last thing we'll grab is a copy of Nginx itself:
-
-~~~.language-bash
-NGINX_VERSION=1.7.7
-wget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz
-tar -xvzf nginx-${NGINX_VERSION}.tar.gz
-~~~
-
-Then we `cd` into the Nginx folder and compile. So, first:
-
-~~~.language-bash
-cd nginx-${NGINX_VERSION}/
-~~~
-
-So now we're inside the Nginx folder, let's configure our installation. We'll add in all our extras and turn off a few things we don't need. Or at least they're things I don't need, if you need the mail modules, then delete those lines. If you don't need SSL, you might want to skip that as well. Here's the config setting I use (Note: all paths are for Debian servers, you'll have to adjust the various paths accordingly for RHEL/Cent/Fedora/ servers):
-
-
-~~~.language-bash
-./configure \
- --add-module=$HOME/naxsi-${NAX_VERSION}/naxsi_src \
- --prefix=/usr/share/nginx \
- --sbin-path=/usr/sbin/nginx \
- --conf-path=/etc/nginx/nginx.conf \
- --pid-path=/var/run/nginx.pid \
- --lock-path=/var/lock/nginx.lock \
- --error-log-path=/var/log/nginx/error.log \
- --http-log-path=/var/log/access.log \
- --user=www-data \
- --group=www-data \
- --without-mail_pop3_module \
- --without-mail_imap_module \
- --without-mail_smtp_module \
- --with-http_stub_status_module \
- --with-http_ssl_module \
- --with-http_spdy_module \
- --with-http_gzip_static_module \
- --add-module=$HOME/ngx_pagespeed-release-${NPS_VERSION}-beta \
- --add-module=$HOME/headers-more-nginx-module-${HM_VERSION}
-~~~
-
-There are a few things worth noting here. First off make sure that Naxsi is first. Here's what the [Naxsi wiki page](https://github.com/nbs-system/naxsi/wiki/installation) has to say on that score: "Nginx will decide the order of modules according the order of the module's directive in Nginx's ./configure. So, no matter what (except if you really know what you are doing) put Naxsi first in your ./configure. If you don't do so, you might run into various problems, from random/unpredictable behaviors to non-effective WAF." The last thing you want is to think you have a web application firewall running when in fact you don't, so stick with Naxsi first.
-
-There are a couple other things you might want to add to this configuration. If you're going to be serving large files, larger than your average 1.5MB HTML page, consider adding the line: `--with-file-aio \`, which is apparently faster than the stock `sendfile` option. See [here](https://calomel.org/nginx.html) for more details. There are quite a few other modules available. A [full list of the default modules](http://wiki.nginx.org/Modules) can be found on the Nginx site. Read through that and if there's another module you need, you can add it to that config list.
-
-Okay, we've told Nginx what to do, now let's actually install it:
-
-~~~.language-bash
-make
-sudo make install
-~~~
-
-Once `make install` finishes doing its thing you'll have Nginx all set up.
-
-Congrats! You made it.
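Before moving on, it's worth a quick sanity check that the binary was actually built with the modules requested above. The snippet below assumes the `--sbin-path` of `/usr/sbin/nginx` from the configure step; adjust if yours differs.

```shell
# Print nginx's configure arguments and pick out the modules we care about.
# If the build worked you should see spdy, pagespeed, headers-more and naxsi.
NGINX_BIN=/usr/sbin/nginx
if [ -x "$NGINX_BIN" ]; then
    "$NGINX_BIN" -V 2>&1 | tr ' ' '\n' | grep -E 'spdy|pagespeed|headers-more|naxsi'
else
    echo "nginx binary not found at $NGINX_BIN"
fi
```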
-
-The next step is to add Nginx to the list of things your server starts up automatically whenever it reboots. Since we installed Nginx from scratch we need to tell the underlying system what we did.
-
-## Make it Autostart
-
-Since we compiled from source rather than using Debian/Ubuntu's package management tools, the underlying system isn't aware of Nginx's existence. That means it won't automatically start it up when the system boots. In order to ensure that Nginx does start on boot we'll have to manually add Nginx to our server's list of startup services. That way, should we need to reboot, Nginx will automatically restart when the server does.
-
-To do that I use the [Debian init script](https://github.com/MovLib/www/blob/master/bin/init-nginx.sh) listed in the [Nginx InitScripts page](http://wiki.nginx.org/InitScripts):
-
-If that works for you, grab the raw version:
-
-~~~.language-bash
-wget https://raw.githubusercontent.com/MovLib/www/develop/etc/init.d/nginx.sh
-# I had to edit the DAEMON var to point to nginx
-# change line 63 in the file to:
-DAEMON=/usr/sbin/nginx
-# then move it to /etc/init.d/nginx
-sudo mv nginx.sh /etc/init.d/nginx
-# make it executable:
-sudo chmod +x /etc/init.d/nginx
-# then just:
-sudo service nginx start #also restart, reload, stop etc
-~~~
-
-I suggest taking the last bit and turning it into an alias in your `bashrc` or `zshrc` file so that you can quickly restart/reload the server when you need it. Here's what I use:
-
-~~~.language-bash
-alias xrestart="sudo service nginx restart"
-alias xreload="sudo service nginx reload"
-~~~
-
-Okay so we now have the initialization script all set up, now let's make Nginx start up on reboot. In theory this should do it:
-
-~~~.language-bash
-update-rc.d -f nginx defaults
-~~~
-
-But that didn't work for me with my Digital Ocean Debian 7 x64 droplet (which complained that "`insserv rejected the script header`"). I didn't really feel like troubleshooting that at the time; I was feeling lazy so I decided to use chkconfig instead. To do that I just installed chkconfig and added Nginx:
-
-~~~.language-bash
-sudo apt-get install chkconfig
-sudo chkconfig --add nginx
-sudo chkconfig nginx on
-~~~
-
-So there we have it, everything you need to get Nginx installed with SPDY, PageSpeed, Headers More and Naxsi. A blazing fast server for static files.
-
-After that it's just a matter of configuring Nginx, which is entirely dependent on how you're using it. For static setups like this my configuration is pretty minimal.
-
-Before we get to that though, here's the first thing I do: edit `/etc/nginx/nginx.conf` down to something pretty simple. This is the root config so I keep it limited to an `http` block that turns on a few things I want globally and an include statement that loads site-specific config files. Something a bit like this:
-
-~~~.language-bash
-user www-data;
-events {
- worker_connections 1024;
-}
-http {
- include mime.types;
- include /etc/nginx/naxsi_core.rules;
- default_type application/octet-stream;
- types_hash_bucket_size 64;
- server_names_hash_bucket_size 128;
- log_format main '$remote_addr - $remote_user [$time_local] "$request" '
- '$status $body_bytes_sent "$http_referer" '
- '"$http_user_agent" "$http_x_forwarded_for"';
-
- access_log logs/access.log main;
- more_set_headers "Server: My Custom Server";
- keepalive_timeout 65;
- gzip on;
- pagespeed on;
- pagespeed FileCachePath /var/ngx_pagespeed_cache;
- include /etc/nginx/sites-enabled/*.conf;
-}
-~~~
-
-A few things to note. I've included the core rules file from the Naxsi source. To make sure that file exists, we need to copy it over to `/etc/nginx/`.
-
-~~~.language-bash
-sudo cp naxsi-${NAX_VERSION}/naxsi_config/naxsi_core.rules /etc/nginx/
-~~~
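Whenever you edit the config it's a good idea to validate it before restarting; nginx can check syntax without touching the running server. This again assumes the `/usr/sbin/nginx` path from the configure step:

```shell
# Test the configuration files for syntax errors before restarting.
NGINX_BIN=/usr/sbin/nginx
if [ -x "$NGINX_BIN" ]; then
    sudo "$NGINX_BIN" -t
else
    echo "nginx binary not found at $NGINX_BIN"
fi
```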
-
-Now let's restart the server so it picks up these changes:
-
-~~~.language-bash
-sudo service nginx restart
-~~~
-
-Or, if you took my suggestion of creating an alias, you can type: `xrestart` and Nginx will restart itself.
-
-With this configuration we have a good basic setup and any `.conf` files you add to the folder `/etc/nginx/sites-enabled/` will be included automatically. So if you want to create a conf file for mydomain.com, you'd create the file `/etc/nginx/sites-enabled/mydomain.conf` and put the configuration for that domain in that file.
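As a sketch, a minimal per-site file for a hypothetical static site at mydomain.com might look like the following (the `server_name` and `root` path here are placeholders, not from the original setup -- adjust them to your own domain and document root):

```nginx
# /etc/nginx/sites-enabled/mydomain.conf -- hypothetical example
server {
    listen 80;
    server_name mydomain.com www.mydomain.com;
    root /var/www/mydomain;   # wherever your static files live
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```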
-
-I'm going to post a follow-up on how I configure Nginx very soon. In the meantime here's a pretty comprehensive [guide to configuring Nginx](https://calomel.org/nginx.html) in a variety of scenarios. And remember, if you want some more helpful tips and tricks for web developers, sign up for the mailing list below.
-
-[^1]: If you're more experienced with Nginx and I'm totally bass-akward about something in this guide, please let me know.
-[^2]: In my experience anyway. Probably Apache can be tuned to get pretty close to Nginx's performance with static files, but it's going to take quite a bit of work. One is not necessarily better, but there are better tools for different jobs.
-[^3]: That said, obviously a CDN service like Cloudfront will, in most cases, be much faster than Nginx or any other server.
diff --git a/src/published/2014-02-27_scaling-responsive-images-css.txt b/src/published/2014-02-27_scaling-responsive-images-css.txt deleted file mode 100644 index 15fc129..0000000 --- a/src/published/2014-02-27_scaling-responsive-images-css.txt +++ /dev/null @@ -1,60 +0,0 @@ ---- -title: Scaling Responsive Images in CSS -pub_date: 2014-02-27 12:04:25 -slug: /blog/2014/02/scaling-responsive-images-css -tags: Responsive Web Design, Responsive Images -metadesc: Media queries make responsive images a snap in CSS, but if you want your responsive images to scale between breakpoints things get a bit trickier. -code: True -tutorial: True ---- - -It's pretty easy to handle images responsively with CSS. Just use `@media` queries to swap images at various breakpoints in your design. - -It's slightly trickier to get those images to be fluid and scale in between breakpoints. Or rather, it's not hard to get them to scale horizontally, but what about vertical scaling? - -Imagine this scenario. You have a div with a paragraph inside it and you want to add a background using the `:before` pseudo element -- just a decorative image behind some text. You can set the max-width to 100% to get the image to fluidly scale in width, but what about scaling the height? - -That's a bit trickier, or at least it tripped me up for a minute the other day. I started with this: - -~~~.language-css -.wrapper--image:before { - content: ""; - display: block; - max-width: 100%; - height: 443px; - background-color: #f3f; - background-image: url('bg.jpg'); - background-repeat: no-repeat; - background-size: 100%; - } -~~~ - -Do that and you'll see... nothing. Okay, I expected that. Setting height to auto doesn't work because the pseudo element has no real content, which means its default height is zero. Okay, how do I fix that? - -You might try setting the height to the height of your background image. That works whenever the div is the size of, or larger than, the image. 
But the minute your image scales down at all you'll have blank space at the bottom of your div, because the div has a fixed height with an image inside that's shorter than that fixed height. Try re-sizing [this demo](/demos/css-bg-image-scaling/no-vertical-scaling.html) to see what I'm talking about: make the window less than 800px and you'll see the box no longer scales with the image. - -To get around this we can borrow a trick from Thierry Koblentz's technique for [creating intrinsic ratios for video](http://alistapart.com/article/creating-intrinsic-ratios-for-video/) to create a box that maintains the ratio of our background image. - -We'll leave everything the way it is, but add one line: - -~~~.language-css -.wrapper--image:before { - content: ""; - display: block; - max-width: 100%; - background-color: #f3f; - background-image: url('bg.jpg'); - background-repeat: no-repeat; - background-size: 100%; - padding-top: 55.375%; -} - -~~~ - -We've added padding to the top of the element, which forces the element to have a height (at least visually). But where did I get that number? That's the ratio of the dimensions of the background image. I simply divided the height of the image by the width of the image. In this case my image was 443px tall and 800px wide, which gives us 55.375%. - -Here's a [working demo](/demos/css-bg-image-scaling/vertical-scaling.html). - -And there you have it, properly scaling CSS background images on `:before` or other "empty" elements, pseudo or otherwise. - -The only real problem with this technique is that it requires you to know the dimensions of your image ahead of time. That won't be possible in every scenario, but if it is, this will work. 
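Since the padding-top value is just height divided by width times 100, you can compute it for any image instead of doing the math by hand. A quick sketch in shell (using awk for the floating-point division; the 443 by 800 dimensions are the ones from the example above, so swap in your own):

```shell
# Intrinsic-ratio padding for a background image: (height / width) * 100
height=443
width=800
ratio=$(awk "BEGIN { printf \"%.3f\", ($height / $width) * 100 }")
echo "padding-top: ${ratio}%"
# -> padding-top: 55.375%
```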
diff --git a/src/published/2014-06-10_protect-your-privacy-with-ghostery.txt b/src/published/2014-06-10_protect-your-privacy-with-ghostery.txt deleted file mode 100644 index 983936c..0000000 --- a/src/published/2014-06-10_protect-your-privacy-with-ghostery.txt +++ /dev/null @@ -1,148 +0,0 @@ ---- -title: How to Protect Your Online Privacy with Ghostery -pub_date: 2014-05-29 12:04:25 -slug: /blog/2014/05/protect-your-privacy-ghostery -metadesc: How to install and configure the Ghostery browser add-on for maximum online privacy -tutorial: True - ---- - -There's an invisible web that lies just below the web you see every day. That invisible web is tracking the sites you visit, the pages you read, the things you like and the things you favorite, collating all that data into a portrait of things you are likely to purchase. And all this happens without anyone asking your consent. - -Not much has changed since [I wrote about online tracking years ago on Webmonkey][1]. Back then visiting five websites meant "somewhere between 21 and 47 other websites learn about your visit to those five". That number just continues to grow. - -If that doesn't bother you, and you could not care less who is tracking you, then this is not the tutorial for you. - -However, if the extent of online tracking bothers you and you want to do something about it, there is some good news. In fact it's not that hard to stop all that tracking. - -To protect your privacy online you'll just need to add a tool like [Ghostery](https://www.ghostery.com/) or [Do Not Track Plus](https://www.abine.com/index.html) to your web browser. Both will work, but I happen to use Ghostery so that's what I'm going to show you how to set up. - -## Install and Set Up Ghostery in Firefox, Chrome/Chromium, Opera and Safari - -The first step is to install the Ghostery extension for your web browser.
To do that, just head over to the [Ghostery downloads page](https://www.ghostery.com/en/download) and click the install button that's highlighted for your browser. - -Some browsers will ask you if you want to allow the add-on to be installed. In Firefox just click "Allow" and then click "Install Now" when the installation window opens up. - -[](/media/images/2014/gh-firefox-install01.png "View Image 1") -: In Firefox click Allow... - -[](/media/images/2014/gh-firefox-install02.png "View Image 2") -: ...and then Install Now - -If you're using Chrome just click the Add button. - -[](/media/images/2014/gh-chrome-install01.jpg "View Image 3") -: Installing extensions in Chrome/Chromium - -Ghostery is now installed, but out of the box Ghostery doesn't actually block anything. That's why, once you have it installed, Ghostery should have opened a new window or tab that looks like this: - -[](/media/images/2014/gh-first-screen.jpg "View Image 4") -: The Ghostery install wizard - -This is the series of screens that walk you through the process of setting up Ghostery to block sites that would like to track you. - -Before I dive into setting up Ghostery, it's important to understand that some of what Ghostery can block will limit what you see on the web. For example, Disqus is a very popular third-party comment system. It happens to track you as well. If you block that tracking though you won't see comments on a lot of sites. - -There are two ways around this. One is to decide that you trust Disqus and allow it to run on any site. The second is to only allow Disqus on sites where you want to read the comments. I'll show you how to set up both options. - -## Configuring Ghostery - -First we have to configure Ghostery. Click the right arrow on that first screen to get started. That will lead you to this screen: - -[](/media/images/2014/gh-second-screen.jpg "View Image 5") -: The Ghostery install wizard, page 2 - -If you want to help Ghostery get better you can check this box. 
Then click the right arrow again and you'll see a page asking if you want to enable the Alert Bubble. - -[](/media/images/2014/gh-third-screen.jpg "View Image 6") -: The Ghostery install wizard, page 3 - -This is Ghostery's little alert box that comes up when you visit a new page. It will show you all the trackers that are blocked. Think of this as a little window into the invisible web. I enable this, though I change the default settings a little bit. We'll get to that in just a second. - -The next screen is the core of Ghostery. This is where we decide which trackers to block and which to allow. - -[](/media/images/2014/gh-main-01.jpg "View Image 7") -: The Ghostery install wizard -- blocking trackers - -Out of the box Ghostery blocks nothing. Let's change that. I start by blocking everything: - -[](/media/images/2014/gh-main-02.jpg "View Image 8") -: Ghostery set to block all known trackers - -Ghostery will also ask if you want to block new trackers as it learns about them. I go with yes. - -Now chances are the setup we currently have is going to limit your ability to use some websites. To stick with the earlier example, this will mean Disqus comments are never loaded. The easiest way to fix this is to search for Disqus and enable it: - -[](/media/images/2014/gh-main-03.jpg "View Image 9") -: Ghostery set to block everything but Disqus - -Note that, along the top of the tracker list there are some buttons. This makes it easy to enable, for example, not just Disqus but every commenting system. If you'd like to do that click the "Commenting System" button and uncheck all the options: - -[](/media/images/2014/gh-main-04.jpg "View Image 10") -: Filtering Ghostery by type of tracker - -Another category of things you might want to allow are music players like those from SoundCloud. To learn more about a particular service, just click the link next to the item and Ghostery will show you what it knows, including any industry affiliations. 
- -[](/media/images/2014/gh-main-05.jpg "View Image 11") -: Ghostery showing details on Disqus - -Now you may be thinking, wait, how do I know which companies I want to allow and which I don't? Well, you don't really need to know all of them because you can enable them as you go too. - -Let's save what we have and test Ghostery out on a site. Click the right arrow one last time and check to make sure that the Ghostery icon is in your toolbar. If it isn't, you can click the "Add Button" button. - -## Ghostery in Action - -Okay, Ghostery is installed and blocking almost everything it knows about. But that might limit what we can do. For example, let's go visit arstechnica.com. You can see down here at the bottom of the screen there's a list of everything that's blocked. - -[](/media/images/2014/gh-example-01.jpg "View Image 12") -: Ghostery showing all the trackers no longer tracking you - -You can see in that list that right now the Twitter button is blocked. So if you scroll down to the bottom of the article and look at the author bio (which should have a Twitter button) you'll see this little Ghostery icon: - -[](/media/images/2014/gh-example-02.jpg "View Image 13") -: Ghostery replaces elements it has blocked with the Ghostery icon. - -That's how you will know that Ghostery has blocked something. If you were to click on that element Ghostery would load the blocked script and you'd see a Twitter button. But what if you always want to see the Twitter button? To do that we'll come up to the toolbar and click on the Ghostery icon, which will reveal the blocking menu: - -[](/media/images/2014/gh-example-03.jpg "View Image 14") -: The Ghostery panel. - -Just slide the Twitter button to the left and Twitter's button (and accompanying tracking beacons) will be allowed after you reload the page. Whenever you return to Ars, the Twitter button will load. As I mentioned before, you can do this on a per-site basis if there are just a few sites you want to allow.
To enable the Twitter button on every site, click the little check box button to the right of the slider. Realize though, that enabling it globally will mean Twitter can track you everywhere you go. - -[](/media/images/2014/gh-example-04.jpg "view image 15") -: Enabling trackers from the Ghostery panel. - -This panel is essentially doing the same thing as the setup page we used earlier. In fact, we can get back to the settings page by clicking the gear icon and then the "Options" button: - -[](/media/images/2014/gh-example-05.jpg "view image 16") -: Getting back to the Ghostery settings page. - -Now, you may have noticed that the little purple panel showing you what was blocked hung around for quite a while, fifteen seconds to be exact, which is a bit long in my opinion. We can change that by clicking the Advanced tab on the Ghostery options page: - -[](/media/images/2014/gh-example-06.jpg "view image 17") -: Getting back to the Ghostery settings page. - -The first option in the list is whether or not to show the alert bubble at all, followed by the length of time it's shown. I like to set this to the minimum, 3 seconds. Other than this I leave the advanced settings at their defaults. - -Scroll to the bottom of the settings page, click save, and you're done setting up Ghostery. - -## Conclusion - -Now you can browse the web with a much greater degree of privacy, only allowing those companies *you* approve of to know what you're up to. And remember, any time a site isn't working the way you think it should, you can temporarily disable Ghostery by clicking the icon in the toolbar and hitting the pause blocking button down at the bottom of the Ghostery panel: - -[](/media/images/2014/gh-example-07.jpg "view image 18") -: Temporarily disable Ghostery. - -Also note that there is an iOS version of Ghostery, though, due to Apple's restrictions on iOS, it's an entirely separate web browser, not a plugin for Mobile Safari. If you use Firefox for Android there is a plugin available.
- -## Further reading: - -* [How To Install Ghostery (Internet Explorer)](https://www.youtube.com/watch?v=NaI17dSfPRg) -- Ghostery's guide to installing it in Internet Explorer. -* [Secure Your Browser: Add-Ons to Stop Web Tracking][1] -- A piece I wrote for Webmonkey a few years ago that gives some more background on tracking and some other options you can use besides Ghostery. -* [Tracking our online trackers](http://www.ted.com/talks/gary_kovacs_tracking_the_trackers) -- TED talk by Gary Kovacs, CEO of Mozilla Corp, covering online behavior tracking more generally. -* This sort of tracking is [coming to the real world too](http://business.financialpost.com/2014/02/01/its-creepy-location-based-marketing-is-following-you-whether-you-like-it-or-not/?__lsa=e48c-7542), so there's that to look forward to. -{^ .list--indented } - - -[1]: http://www.webmonkey.com/2012/02/secure-your-browser-add-ons-to-stop-web-tracking/ diff --git a/src/published/2014-08-02_get-smarter-pythons-built-in-help.txt b/src/published/2014-08-02_get-smarter-pythons-built-in-help.txt deleted file mode 100644 index cb9c807..0000000 --- a/src/published/2014-08-02_get-smarter-pythons-built-in-help.txt +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Get Smarter with Python's Built-In Help -pub_date: 2014-08-01 12:04:25 -slug: /blog/2014/08/get-smarter-pythons-built-in-help -metadesc: Sometimes you have to put down the Stack Overflow, step away from the Google and go straight to the docs. Python has great docs, here's how to use them. -tags: Python -code: True - ---- - -One of my favorite things about Python is the `help()` function. Fire up the standard Python interpreter, import `help` from `pydoc`, and you can search Python's official documentation from within the interpreter. Reading the f'ing manual from the interpreter. As it should be[^1]. - -The `help()` function takes either an object or a keyword. The former must be imported first while the latter needs to be a string like "keyword".
Whichever you use, Python will pull up the standard Python docs for that object or keyword. Type `help()` without anything and you'll start an interactive help session. - -The `help()` function is awesome, but there's one little catch. - -In order for this to work properly you need to make sure you have the `PYTHONDOCS` environment variable set on your system. On a sane operating system this will likely be in '/usr/share/doc/pythonX.X/html'. In mostly sane OSes like Debian (and probably Ubuntu/Mint, et al) you might have to explicitly install the docs with `apt-get install python-doc`, which will put the docs in `/usr/share/doc/pythonX.X-doc/html/`. - -If you're using OS X's built-in Python, the path to Python's docs would be: - -~~~.language-bash -/System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/Resources/English.lproj/Documentation/ -~~~ - -Note the 2.6 in that path. As far as I can tell OS X Mavericks does not ship with docs for Python 2.7, which is weird and annoying (like most things in Mavericks). If it's there and you've found it, please enlighten me in the comments below. - -Once you've found the documentation, you can add that variable to your bash/zshrc like so: - -~~~.language-bash -export PYTHONDOCS=/System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/Resources/English.lproj/Documentation/ -~~~ - -Now fire up iPython, type `help()` and start learning rather than always hobbling along with [Google, Stack Overflow and other crutches](/blog/2014/08/how-my-two-year-old-twins-made-me-a-better-programmer). - -Also, PSA. If you do anything with Python, you really need to check out [iPython](http://ipython.org/). It will save you loads of time, has more awesome features than a Veg-O-Matic, and as for [notebooks](http://ipython.org/notebook.html), don't even get me started on notebooks. And in iPython you don't even have to import help, it's already there, ready to go from the minute it starts.
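If you ever want the documentation as a string -- to grep it, say -- rather than paging through it interactively, the `pydoc` module that powers `help()` can render it directly. A small sketch (my own example, nothing from the official docs):

```python
import pydoc

# render_doc() produces the same text help() pages through interactively;
# the plaintext renderer strips the terminal bold/underline control characters.
doc = pydoc.render_doc(len, renderer=pydoc.plaintext)
print(doc.splitlines()[0])
```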
- -[^1]: The Python docs are pretty good too. Not Vim-level good, but close. diff --git a/src/published/2014-08-05_how-my-two-year-old-twins-made-me-a-better-programmer.txt b/src/published/2014-08-05_how-my-two-year-old-twins-made-me-a-better-programmer.txt deleted file mode 100644 index 4838814..0000000 --- a/src/published/2014-08-05_how-my-two-year-old-twins-made-me-a-better-programmer.txt +++ /dev/null @@ -1,40 +0,0 @@ ---- -title: How My Two-Year-Old Twins Made Me a Better Programmer -pub_date: 2014-08-05 12:04:25 -slug: /blog/2014/08/how-my-two-year-old-twins-made-me-a-better-programmer -metadesc: To get better at programming you have to struggle. Sometimes you have to put down the Stack Overflow, step away from the Google and go straight to the docs. Open a manpage, pull up the help files, dig a little deeper and turn information into knowledge. -tags: Python - ---- - -TL;DR version: "information != knowledge; knowledge != wisdom; wisdom != experience;" - -I have two-year-old twin girls. Every day I watch them figure out more about the world around them. Whether that's how to climb a little higher, how to put on a t-shirt, where to put something when you're done with it, or what to do with these crazy strings hanging off your shoes. - -It can be incredibly frustrating to watch them struggle with something new and fail. They're your children so your instinct is to step in and help. But if you step in and do everything for them they never figure out how to do any of it on their own. I've learned to wait until they ask for help. - -Watching them struggle and learn has made me realize that I don't let myself struggle enough and my skills are stagnating because of it. I'm happy to let Google step in and solve all my problems for me. I get work done, true, but at the expense of learning new things. 
- -I've started to think of this as the Stack Overflow problem, not because I actually blame Stack Overflow -- it's a great resource, the problem is mine -- but because it's emblematic of a problem. I use Stack Overflow, and Google more generally, as a crutch, as a way to quickly solve problems with some bit of information rather than digging deeper and turning information into actual knowledge. - -On one hand quick solutions can be a great thing. Searching the web lets me solve my problem and move on to the next (potentially more interesting) one. - -On the other hand, information (the solution to the problem at hand) is not as useful as knowledge. Snippets of code and other tiny bits of information are not going to land you a job, nor will they help you when you want to write a tutorial or a book about something. This sort of "let's just solve the problem" approach begins and ends in the task at hand. The information you get out of that is useful for the task you're doing, but knowledge is much larger than that. And I don't know about you, but I want to be more than something that's useful for finishing tasks. - -Information is useless to me if it isn't synthesized into personal knowledge somehow. And, for me at least, that information only becomes knowledge when I stop, back up and try to understand the *why* rather than just the *how*. Good answers on Stack Overflow explain the why, but more often than not this doesn't happen. - -For example, today I wanted a simple way to get Python's `os.listdir` to ignore directories. I knew that I could loop through all the returned elements and test if they were directories, but I thought perhaps there was a more elegant way of doing that (short answer, not really). The details of my problem aren't the point though, the point is that the question had barely formed in my mind and I noticed my fingers already headed for command-tab, ready to jump to the browser and cut and paste some solution from Stack Overflow.
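For the record, the loop-and-test approach I already knew is only a couple of lines. Here's a sketch (my own hypothetical names, nothing from the docs):

```python
import os
import tempfile

def files_only(path):
    """Everything os.listdir() returns except directories."""
    return [name for name in os.listdir(path)
            if not os.path.isdir(os.path.join(path, name))]

# Quick demonstration in a throwaway directory:
demo = tempfile.mkdtemp()
open(os.path.join(demo, "notes.txt"), "w").close()
os.mkdir(os.path.join(demo, "subdir"))
print(files_only(demo))  # ['notes.txt']
```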
- -This time though I stopped myself before I pulled up my browser. I thought about my daughters in the next room. I knew that I would likely have the answer to my question in 10 seconds and also knew I would forget it and move on in 20. I was about to let easy answers step in and solve my problem for me. I was about to avoid learning something new. Sometimes that's fine, but do it too much and I'm worried I might be more of a successful cut-and-paster than a struggling programmer. - -Sometimes it's good to take a few minutes to read the actual docs, pull up the man pages, type `:help` or whatever and learn. It's going to take a few extra minutes. You might even take an unexpected detour from the task at hand. That might mean you learn something you didn't expect to learn. Yes, it might mean you lose a few minutes of "work" to learn. It might even mean that you fail. Sometimes the docs don't help. Then, sure, Google. The important part of learning is to struggle, to apply your energy to the problem rather than just finding the solution. - -Sometimes you need to struggle with your shoelaces for hours, otherwise you'll never figure out how to tie them. - -In my specific case I decided to permanently reduce my dependency on Stack Overflow and Google. Instead of flipping to the browser I fired up the Python interpreter and typed `help(os.listdir)`. Did you know the Python interpreter has a built-in help function called, appropriately enough, `help()`? The `help()` function takes either an object or a keyword (the latter needs to be in quotes like "keyword"). If you're having trouble I wrote a quick guide to [making Python's built-in `help()` function work][1]. - -Now, I could have learned what I wanted to know in 2 seconds using Google. Instead it took me 20 minutes[^1] to figure out. But now I understand how to do what I wanted to do and, more importantly, I understand *why* it will work.
I have a new piece of knowledge and next time I encounter the same situation I can draw on my knowledge rather than turning to Google again. It's not exactly wisdom or experience yet, but it's getting closer. And that's really the point of solving all the little problems of day-to-day coding -- improving your skill, learning and getting better at what you do every day. - -[^1]: Most of that time was spent figuring out where OS X stores Python docs, which [I won't have to do again][1]. Note to self, I gotta switch back to Linux. - -[1]: /blog/2014/08/get-smarter-pythons-built-in-help diff --git a/src/published/2014-08-11_building-faster-responsive-websites-webpagetest.txt b/src/published/2014-08-11_building-faster-responsive-websites-webpagetest.txt deleted file mode 100644 index fa82f90..0000000 --- a/src/published/2014-08-11_building-faster-responsive-websites-webpagetest.txt +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: Building Faster Responsive Websites with Webpagetest -pub_date: 2014-08-11 12:04:25 -slug: /blog/2014/08/building-faster-responsive-sites-with-webpagetest -metadesc: All the usual performance best practices apply to responsive web design, but there are some extra things you can do to speed up your responsive websites. -tags: Responsive Web Design, Best Practices -code: True -tutorial: True ---- - -All the normal best practices for speeding up your website apply to responsive web design. That is, optimize your database and backend tools first, eliminate bottlenecks, cache queries, etc. Then move on to your server where you should focus on compressing and caching everything you can. - -It makes no sense to optimize front end code like we're about to do if the real bottlenecks are complex database queries or other back end issues. Because those sorts of things are way beyond the scope of this article, I'll assume that your back end code has already been optimized (by you or the team responsible for it) and everything is cached as much as possible.
- -Before we dive into how you can use Webpagetest to speed up your responsive websites, let's clarify what we mean by "speed up". - -What we really care about when we're trying to speed up a page is the *time to first render*. That is, the time it takes to get something visible on the screen. The overall page load time is secondary to getting *something* -- ideally the most important content -- on the screen as fast as possible. - -Give the viewer something; the rest of the page can load in the background. - -The first step is to do some basic front-end optimization -- eliminate blocking scripts, compress images, minify files, turn on cache headers, use a CDN for static assets and all the other well-established best practices for speeding up pages. Run your site through [Google PageSpeed Insights](https://developers.google.com/speed/pagespeed/insights/) and read through Yahoo's [Best Practices for Speeding Up Your Web Site](http://developer.yahoo.com/performance/rules.html) (it's old but it's still a great reference). - -Remember that the single biggest win for most sites will be reducing image size. - -There are many tools for testing your site and running performance audits that look at specific areas of potential optimization. There's desktop software that simulates all kinds of network conditions and web browsers, but for now we'll stick with [Webpagetest.org](http://www.webpagetest.org/) (hereafter, WPT), a free, online testing tool. - -When you first visit WPT, you'll see a screen that looks like this: - -[](/media/images/2014/wpt-01.jpg "View Image 1") -: Webpagetest basics - -Before you do anything, expand the advanced options. This is where the good stuff is hidden. The first thing to do here is change the number of tests to run. I like to go with 5. If you want you can use a higher number (the max is 9), but it will take longer. Use odd numbers here since WPT uses the median for results.
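Why odd numbers? With an odd number of runs the median is always one of the actual runs, so WPT can show you that specific run's full waterfall rather than splitting the difference between two runs. A toy illustration with made-up timings:

```python
import statistics

# Hypothetical load times (in seconds) from five WPT runs:
runs = [2.8, 3.4, 3.1, 5.0, 2.9]
median_run = statistics.median(runs)
print(median_run)  # 3.1 -- an actual run, waterfall and all
```

With an even count the median would be the average of the two middle runs, a number that corresponds to no real test.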
- -I like to set it to First View Only and check Capture Video so I can see the page load for myself. Here's what this would look like (note that I've also set it to emulate a 3G connection and the device is set to iPhone 4): - -[](/media/images/2014/wpt-02.jpg "View Image 2") -: Webpagetest advanced config options - -Before we actually run the test I want to point out a slightly hidden but super cool feature. Click the scripts tab to the right. See the empty box? What the? Well, click the documentation link and have a look at all the possibilities. There's a ton of stuff you can do here, but let me give you an example of one very powerful feature -- the `setDnsName` option. - -You can use this to test how much your CDN is helping your page load times. To do that you'd enter your overrides in the format: `setDnsName <name-to-override> <override-name>`. If your CDN served files from, say, `www.domain.com` and your origin was `origin.domain.com`, you'd enter this in the box: - -~~~.language-bash -setDnsName www.domain.com origin.domain.com -~~~ - -That way you can run the test both with and without your CDN and compare the results without altering the code on your actual site. - -Right now we'll keep it simple and run an ordinary test, so hit the yellow button. Depending on how many times you told it to run, this may take a little while. Eventually you'll get a page that looks like this: - -[](/media/images/2014/wpt-03.jpg "View Image 3") -: The test results overview page - -The main results there at the top are pulled from the median run. In this case that's run 3. Notice the Plot Full Results link below the main table. Follow that link to see a breakdown of all your tests, which can be useful to see if there are any anomalies in load times. Since there weren't any anomalous results for this page, let's take a closer look at the median, run 3. - -So what jumps out here? Well, the Fully Loaded time is almost 15 seconds.
I consider that terrible, but it actually passes for reasonably fast over 3G these days. Why is it so bad? Well, I picked this URL for a reason: it has nearly a dozen images, which take a while to load. - -But I'm not really interested in total load times; I want to get a sense of how long it takes to get something on the page. The number we want to look at for that information is the Start Render time. Here you can see it's about 2.5 seconds for the median run. - -In this case though I'm going to focus on the worst test, which is run 1, where nothing appears on the screen for 5 seconds. That's terrible, though it is a lot better than 15 seconds. - -Now I know there's a lot I can do to improve the overall load time of this page, but what I'm most interested in is shaving down that 5 seconds before anything shows up. To get a better idea of what might be causing that delay I'll go back to the test results page and click on the waterfall view. - -[](/media/images/2014/wpt-04.jpg "View Image 4") -: The test results waterfall view - -Here I can see that there are some redirects going on. I recently switched from http to https and haven't updated this post. So I need to do that. Then there are the images themselves, which are most likely the bottleneck. I ran the same test conditions on another page on my site that doesn't have any images at all and, as you can see from this filmstrip image (yes, you can export WPT filmstrips as images, look for the link that says "Export filmstrip as an image...") the above-the-fold content is visible in 1 second: - - - -So the problem with the first URL is likely three-fold: the size of the images, the number of images and how they're loaded. - -I was in a hurry to get that post up, so I just let my `max-width: 100%;` rule for responsive images handle scaling down the very large images. In short I did what your clients will likely do -- be lazy and skip image optimization.
I really need to automate an image optimization setup on my server, but in lieu of that, I manually resized the images, ran them through [ImageOptim](https://imageoptim.com/) (if you want a Linux equivalent check out [Trimage](http://trimage.org/), which hasn't been updated in years, but runs just fine for me in Debian stable and testing) and reran the tests: - -[](/media/images/2014/wpt-06.jpg "View Image 6") -: Things are getting better - -That's a little better. We're down to a worst-case scenario of a 3-second load time over a 3G connection. So it looks as if my hunch is right: the images are the bottleneck. - -I'm convinced, but suppose this were a client site and I wanted to show them why they need to optimize their images. You know what makes a powerful argument for image optimization? Making your client sit through those painfully slow load times. So go back to your main test results page and click the link on the right that says "Watch Video." - -It will take a minute for WPT to generate your video, but when it does scroll to the bottom and grab the embed code. Here are the two videos from the WPT results I've run so far, embedded below for your viewing pain: - -<iframe src="https://www.webpagetest.org/video/view.php?id=140806_XB_Z5T.1.0&embed=1&width=332&height=716" width="332" height="716"></iframe> - -<iframe src="https://www.webpagetest.org/video/view.php?id=140808_0G_RZW.2.0&embed=1&width=332&height=716" width="332" height="716"></iframe> - -Convincing, no? - -Now I'm going to keep testing and trying to speed up my page. My next step is going to be tweaking the server. Yeah I know I said at the beginning that you should start here and I didn't. Neither, I'd be willing to bet, did you. That's okay, we'll do it now. - -I use Nginx to serve this site and I compile it myself with quite a few speed-oriented extras, but the main tool I use is the [Nginx Pagespeed module](https://developers.google.com/speed/pagespeed/module).
For more on how I set up Nginx and how you can do the same, see my post: [Install Nginx on Debian/Ubuntu](https://longhandpixels.net/blog/2014/02/install-nginx-debian-ubuntu). - -I'm going to turn on a very cool feature in the `nginx_pagespeed_module` that I haven't been using until now, something called the [`lazyload_images` filter](https://developers.google.com/speed/pagespeed/module/filter-lazyload-images). Here's the line I'll add to my configuration file: - -~~~.language-bash -pagespeed EnableFilters lazyload_images; -~~~ - -This will tell PageSpeed to delay loading images on the page unless they're visible in the viewport. Even better, as of version 1.8.31.2, this filter will force images to download after the page's `onload` event fires. That means you won't get that janky scrolling effect that happens when images are fetched as they enter the viewport, as with a lot of websites that do this with JavaScript. - -So I turned on the `lazyload_images` filter on my server and reran the tests to find that... - -[](/media/images/2014/wpt-07.jpg "View Image 7") -: That's more like it -- Test results showing 1 second load time over 3G. - -The page is now filling the initial viewport in about 1 second over 3G. I can live with that, but honestly it could probably be better. For example, some sort of responsive image solution would reduce the size of images on mobile screens and bring down the total page load time (not to mention saving a bunch of bandwidth). - -I could also do some other little optimizations, including combining the prism.css code highlighting file with my main CSS file to save a request. I could probably ditch the web fonts, create a single SVG that holds the logo and icons and then position everything with CSS.
And I could put everything behind a CDN, which would probably have more impact than everything else I just mentioned combined, but that costs money and frankly, 1 second over 3G is fine for now. - -Hopefully this has given you some idea of how the tools available through Webpagetest can help you speed up your responsive website (or even your non-responsive site). It's true that I didn't really do anything here you can't do with the Firefox or Chrome developer tools, but I find -- particularly with clients who need a little convincing -- that WPT's filmstrips and videos are invaluable. And I should note that there are plenty of things WPT can do that your favorite developer tools cannot, but I'll save those for another post. - -While Webpagetest and its ilk are great tools, you should always also test on real devices in the real world. Ideally you'll test your site on an actual, slower network and see what it feels like to wait. Three seconds might sound fine, but actually sitting through it might inspire you to dig a little deeper and see what else you can optimize. - -If you don't have access to a slow network, <strike>then come to the U.S., they're everywhere</strike> then simulators will have to do. If you want to do some live testing over constrained network simulations there are some great dedicated tools like [Slowy](http://slowyapp.com/), [Throttle](https://github.com/dmolsen/Throttle) or even the Network Conditioner tool which is part of more recent versions of Apple's OS X Developer Tools (see developer Matt Gemmel's helpful overview of [how to set up Network Conditioner](http://mattgemmell.com/2011/07/25/network-link-conditioner-in-lion/) if you're using Mac OS X 10.7 or higher). - -## Further Reading - -If you'd like to learn more I recommend you start at the beginning. First learn [how to read the waterfall charts](http://www.webperformancetoday.com/2010/07/09/waterfalls-101/) that these services generate. 
Then I suggest you read Steve Souders' [High Performance Web Sites](http://stevesouders.com/) and the [Web Performance Today](http://www.webperformancetoday.com/) blog, both excellent resources for anyone interested in speeding up their site. Finally, few people know as much about [optimizing massive web applications](http://www.igvita.com/2013/01/15/faster-websites-crash-course-on-web-performance/) and sites as Google's Ilya Grigorik, who's part of the company's Make The Web Fast team. Subscribe to Ilya's blog and [follow him on Twitter](https://twitter.com/igrigorik) for a steady stream of speed-related links and tips. - -For some more details on all the cool stuff in Webpagetest, check out [Patrick Meenan's blog](http://blog.patrickmeenan.com/) (he's the creator of Webpagetest) and especially [this short video](https://www.youtube.com/watch?v=AEAj-HSfYSA). diff --git a/src/published/2014-09-04_history-of-picture-element.txt b/src/published/2014-09-04_history-of-picture-element.txt deleted file mode 100644 index d3e6fb2..0000000 --- a/src/published/2014-09-04_history-of-picture-element.txt +++ /dev/null @@ -1,199 +0,0 @@ ---- -title: A Brief History of the Picture Element -pub_date: 2014-09-01 09:30:23 -slug: /blog/2014/09/brief-history-picture-element -tags: Responsive Images, Responsive Web Design -metadesc: The story behind the new HTML picture element and how a few dedicated web developers made the web better for everyone. -code: True - ---- - -[**Note**: This article was originally written for Ars Technica and there's a nice, [slightly edited version of it over on Ars][1] that you should probably read. It's also got some artwork and screenshots not included here. But I'm re-publishing this here as well for posterity.] - -The web is going to get faster in the very near future. Sadly, this is rare enough to be news. - -The speed bump won't be because our devices are getting faster, though they are.
Nor will it be because some giant company created something great, though they probably have. - -The web will be getting faster very soon because a small group of web developers saw a problem and decided to solve it for all of us. - -The problem is images. - -As of August 2014 the [size of the average page in the top 1,000 sites on the web][2] is 1.7MB. Images account for almost 1MB of that 1.7MB. - -If you've got a nice fast fiber connection that image payload isn't such a big deal. But, if you're on a mobile network, that huge image payload is not just slowing you down, it's using up your limited bandwidth and, depending on your mobile data plan, might well be costing you money. - -What makes that image payload doubly annoying when you're using a mobile device is that you're getting images intended for giant monitors and they're being loaded on a screen little bigger than your palm. It's a waste of bandwidth delivering pixels you don't need. - -Web developers recognized this problem very early on in the growth of what was called the "mobile" web back then. - -More recently a few of them banded together to do something that web developers have never done before -- create a new HTML element. - -## In the Beginning Was the "Mobile Web" - -Browsing the web on your phone hasn't always been what it is today. Even browsing the web on the first iPhone, one of the first phones with a real web browser, was still pretty terrible. - -Browsing the web on a small screen back then required constant tapping to zoom in on content that had been optimized for much larger screens. Images took forever to load over the iPhone's slow EDGE network connection and then there was all that Flash content, which didn't load at all. And that was the iPhone. Browsing the web with the crippled browsers on BlackBerry and other mobile OSes was even worse. - -It wasn't necessarily the devices' fault, though mobile browsers did, and in many cases still do, lag well behind their desktop brethren.
Most of the problem though was the fault of web developers. The web is inherently flexible, but web developers had made it fixed by optimizing sites for large desktop monitors. - -To fix this a lot of sites started building a second site. It sounds crazy now, but just a few years ago the going solution for handling new devices like the BlackBerry, the then-new iPhone and some of the first Android phones was to use server-side device detection scripts and redirect users to a dedicated site for mobile devices, typically a URL like m.domain.com. - -These dedicated mobile URLs -- often referred to as M-dot sites -- typically lacked many features found on their "real" desktop counterparts and often didn't even redirect properly, leaving you on the homepage when you wanted a specific article. - -M-dot websites are a fine example of developers encountering a problem and figuring out a way to make it even worse. - -Luckily for us, most web developers did not jump on the m-dot bandwagon because something much better came along. - -## Responsive Design Killed the M-Dot Star - -In 2010 web developer Ethan Marcotte wrote a little article about something he called [Responsive Web Design][3]. - -Marcotte suggested that with the proliferation of mobile devices and the pain of building these dedicated m-dot sites, it might make more sense to embrace the inherently fluid nature of the web and build websites that were flexible. Sites that used relative widths to fit any screen and worked well no matter what device was accessing it. - -Marcotte's vision gave web developers a way to build sites that flex and rearrange their content based on the size and characteristics of the device in your hand. - -Responsive web design perhaps isn't a panacea, but it's pretty close.
- -Responsive design started with a few of the more prominent developers making their personal sites responsive, but it quickly took off when Marcotte and the developers at the Filament Group redesigned the [Boston Globe][4] website to make it responsive. The Globe redesign showed that responsive design worked for more than developer portfolios and blogs. The Globe redesign showed that responsive design was the way of the future. - -While the Globe redesign was successful from a user standpoint, Marcotte and the Filament Group did run into some problems behind the scenes, particularly with images. - -Marcotte's original article dealt with images by scaling them down using CSS. That made them fit smaller screens and preserve the layout of content, but it also meant mobile devices were loading huge images that would never be displayed at full resolution. - -For the most part this is still what happens on nearly every site you visit on a small screen. Web developers know, as the developers building the Globe site knew, that this is a problem, but solving it is not as easy as it seems at first glance. - -In fact solving this problem would require adding a brand new element to HTML. - -## Introducing the Picture Element - -The Picture element story begins with the developers working on the Boston Globe, including Mat Marquis, who would eventually co-author the HTML specification. - -In the beginning though, no one working on the Globe site was thinking about creating new HTML elements. Marquis and the other developers just wanted to build a site that loaded faster on mobile devices. - -As Marquis explains, they thought they had a solution. "We started with an image for mobile and then selectively enhanced it up from there. It was a hack using cookies and JavaScript. It worked up until about a week before the site launched."
- -Around this time both Firefox and Chrome were updating their prefetching capabilities and the new image prefetching tools broke the method used on the Globe prototypes. - -Browser prefetching was more than just a problem for the solution originally planned for the Globe site. It's actually the crux of what's so difficult about responsive images. - -When a server sends a page to your browser the browser first downloads all the HTML on the page and then parses it. Or at least that's what used to happen. Modern web browsers attempt to speed up page load times by downloading images *before* parsing the page's body. The browser starts downloading the image long before it knows where that image will be in the page layout or how big it will need to be. - -This is simultaneously a very good thing -- it means images load faster -- and a very tricky thing -- it means using JavaScript to manipulate images can actually slow down your page even when your JavaScript is trying to load smaller images (because you end up fighting the prefetcher and downloading two images). - -Marquis and the rest of the developers working on the site had to scrap their original plan and go back to the drawing board. "We started trying to hash out some solution that we could use going forward... but nothing really materialized." However, they started [writing about the problem][5] and other developers joined the conversation. They quickly learned they were not alone in struggling with responsive images. - -"By this time," Marquis says, "we have 10 or 15 developers and nobody has come up with anything." - -The Globe site ended up launching with no solution -- mobile devices were stuck downloading huge images. - -Soon other prominent developers outside the Globe project started to weigh in with possible solutions, including Google's Paul Irish and Opera's Bruce Lawson. Still, no one was able to craft a solution that covered [all the possible use cases][6] developers had identified.
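To make the prefetcher problem concrete, here's a hypothetical sketch of the kind of script-based swap that fights the preload scanner. The file names and breakpoint are made up for illustration; this is not the Globe's actual code:

~~~.language-markup
<!-- Hypothetical sketch; file names and breakpoint are placeholders. -->
<img id="hero" src="huge-desktop.jpg" alt="Hero image">
<script>
  // By the time this script runs, the browser's preload scanner has
  // usually already started fetching huge-desktop.jpg. Swapping the
  // src now just queues up a *second* download on small screens.
  if (screen.width < 480) {
    document.getElementById('hero').src = 'small-mobile.jpg';
  }
</script>
~~~

The small screen ends up paying for both images, which is exactly the opposite of what the script intended.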
- -"We soon realized," says Marquis, "that, even if we were able to solve this with a clever bit of JavaScript we would be working around browser-level optimizations rather than working with them." In other words, using JavaScript meant fighting the browser's built-in image prefetching. - -Talk then moved to lower-level solutions, including a new HTML element that might somehow get around the image prefetching problems in a way that JavaScript never would. It was Bruce Lawson of Opera who first suggested that a new `<picture>` element might be in order. Though they did not know it at the time, a picture element had been proposed once before, but it never went anywhere. - -## Welcome to the Standards Jungle - -It is one thing to decide a new HTML element is needed. It's quite another thing to actually navigate the stratified, labyrinthine world of web standards. Especially if no one on your team has ever done such a thing. - -Perhaps the best thing about being naive though is that you tend to plow forward without the hesitation that attends someone who *knows* how difficult the road ahead will be. - -And so the developers working on the picture element took their ideas to the WHATWG, one of two groups that oversee the development of HTML. The WHATWG is made up primarily of browser vendors, which makes it a good place to gauge how likely it is that browsers will actually ship your ideas. - -To paraphrase Tolstoy, every standards body is unhappy in its own way, but, as Marquis was about to learn, the WHATWG is perhaps most unhappy when people outside it make suggestions about what it ought to do. Suffice to say that Marquis and the rest of the developers involved did not get the WHATWG interested in a new HTML element. - -Right around this time the W3C -- home of the second group that oversees HTML, the HTML WG -- launched a new idea: community groups.
Community groups are the W3C's attempt to get outsiders involved in the standards process, a place to propose problems and work on solutions. - -After being shot down by the WHATWG, someone suggested that the developers start a community group and the [Responsive Images Community Group][7] (RICG) was born. - -The only problem with community groups is that no one in the actual working groups pays any attention to community groups. Or at least they didn't in 2011. - -Blissfully unaware of this, Marquis and hundreds of other developers hashed out a responsive image solution in the community group. - -Much of that effort was thanks to Marcos Caceres, now at Mozilla, who, unlike the rest of the group members, had some experience with writing web standards. That experience allowed Caceres to span the divide between two worlds -- web development and standards development. Caceres organized the RICG's efforts and helped the group produce the kind of use cases and tests that standards bodies are looking for. As Marquis puts it, "Marcos saw us flailing around in IRC and helped get everything organized." - -"I tried to herd all the cats," Caceres jokes. And herd he did. He set up the Github repos to get everything in one place, set up a space for the responsive images site and helped bring everything together into the first use cases document. "This played a really critical role for me and for the community," says Caceres, "because it forced us to articulate what the actual problem was... and to set priorities." - -After months of effort, the RICG brought their ideas to the WHATWG IRC. This also did not go well. As Caceres puts it, "standards bodies like to say 'oh we want a lot of input for developers', but then when developers come it ends in tears. Or it used to." - -If you read the WHATWG IRC logs from that time you'll see that the WHATWG members fall into a classic "not invented here" trap. 
Not only did they reject the input from developers, they turned around and, without considering the RICG's work at all, [proposed their own solution][8], something called `set`, an attribute that solved only one of the many use cases Marquis and company had already identified. - -Developers were, understandably, miffed. - -With developers pushing Picture and browser makers and standards bodies favoring the far more limited and very confusing (albeit still useful) `set` proposal, it looked like nothing would ever actually come of the RICG's work. - -As Paul Irish put it in the [WHATWG IRC channel][9], "[Marquis] corralled and led a group of the best mobile web developers, created a CG, isolated a solution (from many), fought for and won consensus within the group, wrote a draft spec and proposed it. Basically he's done the thing standards folks really want "authors" to do. Which is why this feels so defeating." - -Irish was not alone. The developer outcry surrounding the WHATWG's counter proposal was quite vocal, vocal enough that some entirely new proposals surfaced, but browser makers failed to agree on anything. Mozilla killed the WHATWG's idea of `set` on `img`. And Chrome refused to implement Picture as it was defined at the time. - -If this all sounds like a bad soap opera, well, it was. This process is, believe it or not, how the web you're using right now gets made. - -## Invented Here - -To the credit of the WHATWG, the group did eventually overcome their not-invented-here syndrome. Or at least partially overcame it. - -Compromises started to happen. The RICG rolled support for many of the ideas in `set` into their proposal. That wasn't enough to convince the WHATWG, but it got some members working together with Marquis and the RICG. The WHATWG still didn't like Picture, but they didn't outright reject it anymore either.
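The attribute-based approach lives on today as `srcset`, which lets the browser choose between versions of the same image at different pixel densities. A minimal sketch of that syntax, with placeholder file names:

~~~.language-markup
<!-- File names are placeholders. The browser picks medium.jpg on a
     1.5x display, large.jpg on a 2x display, and small.jpg otherwise. -->
<img src="small.jpg"
     srcset="medium.jpg 1.5x, large.jpg 2x"
     alt="A description of the image">
~~~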
- -To an outsider the revision process looks a bit like a game of Ping Pong, except that every time someone hits the ball it changes shape. - -The big breakthrough for Picture came from Opera's Simon Pieters and Google's Tab Atkins. They made a simple, but powerful, suggestion -- make picture a wrapper for `img`. That way there would not be two separate elements for images on the web (which was rightly considered confusing), but there would still be a new way to control which image the browser displays. - -This is exactly the approach used in the final version of the Picture spec. - -When the browser encounters a Picture element, it first evaluates any rules that the web developer might specify. Opera's developer site has a good article on [all the possibilities Picture offers][10]. Then, after evaluating the various rules, the browser picks the best image based on its own criteria. This is another nice feature since the browser's criteria can include your settings. For example, future browsers might offer an option to stop high-res images from loading over 3G, regardless of what any Picture element on the page might say. Once the browser knows which image is the best choice it actually loads and displays that image in a good old `img` element. - -This solves two big problems: the browser prefetching problem (prefetching still works and there's no performance penalty) and the problem of what to do when the browser doesn't understand picture (it falls back to whatever is in the `img` tag). - -So, in the final proposal, what happens is Picture wraps an `img` tag and if the browser is too old to know what to make of a `<picture>` element then it loads the fallback `img` tag. All the accessibility benefits remain since the alt attribute is still on the `img` element. - -Everyone is happy and the web wins. - -## Nice Theory, but Show Me the Browser - -The web only wins if browsers actually support a proposed standard.
And at this time last year no browser on the web actually supported Picture. - -While Firefox and Chrome had both committed to supporting it, it might be years before it became a priority for either, making Picture little more than a nice theory. - -Enter Yoav Weiss, a rare developer who spans the worlds of web development and C++ development. Weiss was an independent contractor who wanted Picture to become a part of the web. Weiss knew C++, the language most browsers are written in, but had never worked on a web browser before. - -Still, like Caceres, Weiss was able to bridge a gap, in this case the world of web developers and C++ developers, putting him in a unique position to be able to know what Picture needed to do and how to make it happen. So, after talking it over with other Chromium developers, Weiss started hacking on Blink, the rendering engine that powers Google's Chrome browser. - -Implementing Picture was no small task. "Getting Picture into Blink required some infrastructure that wasn't there," says Weiss. "I had two options: either wait for the infrastructure to happen naturally over the course of the next two years, or make it happen myself." - -Weiss, who, incidentally, has three young children and, presumably, not much in the way of free time, quickly realized that working nights and weekends wasn't going to cut it. Weiss needed to turn his work on Picture into a contract job. So he, Marquis, and others involved in the community group set up a <a href="https://www.indiegogo.com/projects/picture-element-implementation-in-blink">crowd funding campaign on Indiegogo</a>. - -On the face of it, it sounds like a doomed proposition -- why would developers fund a feature that will ultimately end up in a web browser they otherwise have no control over? - -Then something amazing happened. The campaign didn't just meet its goal, it went way over it. Web developers wanted Picture bad enough to spend their money on the cause. - -It could have been the t-shirts.
It could have been the novelty of it. Or it could have been that web developers saw how important a solution to the image problem was in a way that the browser makers and standards bodies didn't. Most likely it was some combination of all these and more. - -In the end enough money was raised to not only implement Picture in Blink, but to also port Weiss' work back to WebKit so WebKit browsers (including Apple's iOS version of Safari) can use it as well. At the same time Marcos Caceres started work at Mozilla and has helped drive Firefox's support for Picture. - -As of today the Picture element is on track to be available in Chrome and Firefox by the end of the year. It's available now in Chrome's dev channel and Firefox 34+ (in Firefox you'll need to enable it in `about:config`). Here's a test page showing the new [Picture element in action][11]. - -Apple appears to be adding support to Safari though the backport to WebKit wasn't finished in time for the upcoming Safari 8. Microsoft has likewise been supportive and is considering Picture for the next release of IE. - -## The Future of the Web - -The story of the Picture element isn't just an interesting tale of web developers working together to make the web a better place. It's also a glimpse at the future of the web. The separation between those who build the web and those who create web standards is disappearing. The W3C's community groups are growing and sites like [Move the Web Forward][12] aim to help bridge the gap between developer ideas and standards bodies. - -There's even a site devoted to what it calls "[specifiction][13]" -- giving web developers a place to suggest tools they need, discuss possible solutions and then find the relevant W3C working group to make it happen. - -Picture may be almost finished, but the RICG isn't going away. In fact it's renaming itself and taking on a new project -- [Element Queries][14]. Coming soon to a browser near you.
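For reference, the markup pattern that came out of all this work looks something like the following sketch. The file names and breakpoints are placeholders:

~~~.language-markup
<!-- The browser evaluates the <source> rules, picks the best match
     and loads it into the <img>. Browsers that don't understand
     <picture> simply load the fallback <img>, alt text and all. -->
<picture>
  <source media="(min-width: 800px)" srcset="large.jpg">
  <source media="(min-width: 400px)" srcset="medium.jpg">
  <img src="small.jpg" alt="A description of the image">
</picture>
~~~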
- - -[1]: http://arstechnica.com/information-technology/2014/09/how-a-new-html-element-will-make-the-web-faster/ -[2]: http://httparchive.org/interesting.php?a=All&l=Aug%2015%202014&s=Top1000 -[3]: http://alistapart.com/article/responsive-web-design -[4]: http://www.bostonglobe.com/ -[5]: http://blog.cloudfour.com/responsive-imgs/ -[6]: http://usecases.responsiveimages.org/ -[7]: http://responsiveimages.org/ -[8]: http://www.w3.org/community/respimg/2012/05/11/respimg-proposal/ -[9]: http://krijnhoetmer.nl/irc-logs/whatwg/20120510#l-747 -[10]: http://dev.opera.com/articles/native-responsive-images/ -[11]: https://longhandpixels.net/2014/08/picture-test -[12]: http://movethewebforward.org/ -[13]: http://specifiction.org/ -[14]: http://responsiveimagescg.github.io/eq-usecases/ diff --git a/src/published/2015-01-24_how-to-write-ebook.txt b/src/published/2015-01-24_how-to-write-ebook.txt deleted file mode 100644 index 7163dad..0000000 --- a/src/published/2015-01-24_how-to-write-ebook.txt +++ /dev/null @@ -1,58 +0,0 @@ ---- -title: How to Write an Ebook -pub_date: 2015-01-24 12:52:53 -slug: /blog/2015/01/how-to-write-ebook -metadesc: The tools I use to write and publish ebooks. All free and open source. - ---- - -When I set out to write a book I had little more than an outline in Markdown. Just a few headers and bullet points on each of what became the major chapters of my [book on responsive web design](https://longhandpixels.net/books/responsive-web-design). - -It never really occurred to me to research which tools I would need to create a book because I knew I was going to use Markdown, which could then be translated into pretty much any format using [Pandoc](http://johnmacfarlane.net/pandoc/). - -Since quite a few people have [asked](https://twitter.com/situjapan/status/549935669129142272) for more details on exactly which tools I used, here's a quick rundown: - -1. I write books as single text files lightly marked up with Pandoc-flavored Markdown. -2. 
Then I run Pandoc, passing in custom templates, CSS files, fonts I bought and so on. Pretty much as [detailed here in the Pandoc documentation](http://johnmacfarlane.net/pandoc/epub.html). I run these commands often enough that I write a shell script for each project so I don't have to type in all the flags and file paths each time. -3. Pandoc outputs an ePub file and an HTML file. The latter is then used with [Weasyprint](http://weasyprint.org/) to generate the PDF version of the ebook. Then I use the ePub file and the [Kindle command line tool](http://www.amazon.com/gp/feature.html?ie=UTF8&docId=1000765211) to create a .mobi file. -4. All of the formatting and design is just CSS, which I am already comfortable working with (though ePub supports only a subset of CSS and reader support is somewhat akin to building a website in 1998 -- who knows if it's gonna work? The PDF is what I consider the reference copy.) - -In the end I get the book in TXT, HTML, PDF, ePub and .mobi formats, which covers pretty much every digital reader I'm aware of. Out of those I actually include the PDF, ePub and Mobi files when you [buy the book](https://longhandpixels.net/books/responsive-web-design). - -### FAQs and Notes - -<strong>Why Not Use iBooks Author?</strong> - -I don't want my book tied to a company's software which may or may not continue to exist. Plus I wanted to use open source software. And I wanted more control over the process than I could get with monolithic tools like visual layout editors. - -The above tools are, for me anyway, the simplest possible workflow that outputs the highest quality product. - -<strong>What about Prince?</strong> - -What does The Purple One have to do with writing books? Oh, that [Prince](http://www.princexml.com/).
Actually I really like Prince and it can do a few things that WeasyPrint cannot (like execute JavaScript, which is handy for code highlighting, or allow for `@font-face` font embedding), but it's not free and, in the end, I decided it wasn't worth the money. - -<strong>Can you share your shell script?</strong> - -Here's the basic idea; adjust file paths to suit your working habits. - -~~~.language-bash -#!/bin/sh -#Update PDF: -pandoc --toc --toc-depth=2 --smart --template=lib/template.html5 --include-before-body=lib/header.html -t html5 -o rwd.html draft.txt && weasyprint rwd.html rwd.pdf - -#Update epub: -pandoc -S -s --smart -t epub3 --include-before-body=lib/header.html --template=lib/template_epub.html --epub-metadata=lib/epub-metadata.xml --epub-stylesheet=lib/print-epub.css --epub-cover-image=lib/covers/cover-portrait.png --toc --toc-depth=2 -o rwd.epub draft.txt - -#Update Mobi: -pandoc -S -s --smart -t epub3 --include-before-body=lib/header.html --template=lib/template_epub.html --epub-metadata=lib/epub-metadata.xml --epub-stylesheet=lib/print-kindle.css --epub-cover-image=lib/covers/cover-portrait.png --toc --toc-depth=2 -o kindle.epub draft.txt && kindlegen kindle.epub -o rwd.mobi -~~~ - -I just run this script and bang, all my files are updated. - -<strong>What Advice can you Offer for People Wanting to Write an Ebook?</strong> - -At the risk of sounding trite, just do it. - -Writing a book is not easy, or rather the writing is never easy, but I don't think it's ever been this easy to *produce* a book. It took me two afternoons to come up with a workflow that involves all free, open source software and allows me to publish literally any text file on my hard drive as a book that can then be read by millions. I type two keystrokes and I have a book. Even if millions don't ever read your book (and, for the record, millions have most definitely not read my books), that is still f'ing amazing. - -Now go make something cool (and be sure to tell me about it).
diff --git a/src/published/2015-04-02_complete-guide-ssh-keys.txt b/src/published/2015-04-02_complete-guide-ssh-keys.txt deleted file mode 100644 index 25d646c..0000000 --- a/src/published/2015-04-02_complete-guide-ssh-keys.txt +++ /dev/null @@ -1,128 +0,0 @@ ---- -title: How to Setup SSH Keys for Secure Logins -pub_date: 2015-03-21 12:52:53 -slug: /blog/2015/03/set-up-ssh-keys-secure-logins -metadesc: How to set up SSH keys for more secure logins to your VPS. -code: True - ---- - -SSH keys are an easier, more secure way of logging into your virtual private server via SSH. Passwords are vulnerable to brute force attacks and just plain guessing. Key-based authentication is (currently) much more difficult to brute force and, when combined with a password on the key, provides a secure way of accessing your VPS instances from anywhere. - -Key-based authentication uses two keys: the first is the "public" key that anyone is allowed to see. The second is the "private" key that only you ever see. So to log in to a VPS using keys we need to create a pair -- a private key and a public key that matches it -- and then securely upload the public key to our VPS instance. We'll further protect our private key by adding a password to it. - -Open up your terminal application. On OS X, that's Terminal, which is in the Applications >> Utilities folder. If you're using Linux I'll assume you know where the terminal app is, and Windows fans can follow along after installing [Cygwin](http://cygwin.com/). - -Here's how to generate SSH keys in three simple steps. - - -## Setup SSH for More Secure Logins - -### Step 1: Check for SSH Keys - -Cut and paste this line into your terminal to check and see if you already have any SSH keys: - -~~~.language-bash -ls -al ~/.ssh -~~~ - -If you see output like this, then skip to Step 3: - -~~~.language-bash -id_dsa.pub -id_ecdsa.pub -id_ed25519.pub -id_rsa.pub -~~~ - -### Step 2: Generate an SSH Key - -Here's the command to create a new SSH key.
Just cut and paste, but be sure to put in your own email address in quotes: - -~~~.language-bash -ssh-keygen -t rsa -C "your_email@example.com" -~~~ - -This will start a series of questions; just hit enter to accept the default choice for all of them, including the last one, which asks where to save the file. - -Then it will ask for a passphrase; pick a good long one. And don't worry, you won't need to enter this every time: there's something called `ssh-agent` that will ask for your passphrase and then store it for you for the duration of your session (i.e. until you restart your computer). - -~~~.language-bash -Enter passphrase (empty for no passphrase): [Type a passphrase] -Enter same passphrase again: [Type passphrase again] -~~~ - -Once you've put in the passphrase, SSH will spit out a "fingerprint" that looks a bit like this: - -~~~.language-bash -# Your identification has been saved in /Users/you/.ssh/id_rsa. -# Your public key has been saved in /Users/you/.ssh/id_rsa.pub. -# The key fingerprint is: -# d3:50:dc:0f:f4:65:29:93:dd:53:c2:d6:85:51:e5:a2 scott@longhandpixels.net -~~~ - -### Step 3: Copy Your Public Key to Your VPS - -If you have ssh-copy-id installed on your system you can use this line to transfer your keys: - -~~~.language-bash -ssh-copy-id user@12.34.56.78 -~~~ - -If that doesn't work, you can paste in the keys using SSH: - -~~~.language-bash -cat ~/.ssh/id_rsa.pub | ssh user@12.34.56.78 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys" -~~~ - -Whichever you use, you should get a message like this: - - -~~~.language-bash -The authenticity of host '12.34.56.78 (12.34.56.78)' can't be established. -RSA key fingerprint is 01:3b:ca:85:d6:35:4d:5f:f0:a2:cd:c0:c4:48:86:12. -Are you sure you want to continue connecting (yes/no)? yes -Warning: Permanently added '12.34.56.78' (RSA) to the list of known hosts.
-username@12.34.56.78's password: -~~~ - - Now try logging into the machine, with "ssh 'user@12.34.56.78'", and check in: - -~~~.language-bash -~/.ssh/authorized_keys -~~~ - -to make sure we haven't added extra keys that you weren't expecting. - -Now log in to your VPS with ssh like so: - -~~~.language-bash - ssh username@12.34.56.78 -~~~ - -And you won't be prompted for a password by the server. You will, however, be prompted for the passphrase you used to encrypt your SSH key. You'll need to enter that passphrase to unlock your SSH key, but ssh-agent should store that for you so you only need to re-enter it when you logout or restart your computer. - -And there you have it, secure, key-based log-ins for your VPS. - -### Bonus: SSH config - -If you'd rather not type `ssh myuser@12.34.56.78` all the time you can add that host to your SSH config file and refer to it by hostname. - -The SSH config file lives in `~/.ssh/config`. This command will either open that file if it exists or create it if it doesn't: - -~~~.language-bash -nano ~/.ssh/config -~~~ - -Now we need to create a host entry. Here's what mine looks like: - -~~~.language-bash -Host myname - Hostname 12.34.56.78 - user myvpsusername - #Port 24857 #if you set a non-standard port uncomment this line - CheckHostIP yes - TCPKeepAlive yes -~~~ - -Then to login all I need to do is type `ssh myname`. This is even more helpful when using `scp` since you can skip the whole username@server and just type: `scp myname:/home/myuser/somefile.txt .` to copy a file. diff --git a/src/published/2015-04-03_set-up-secure-first-vps.txt b/src/published/2015-04-03_set-up-secure-first-vps.txt deleted file mode 100644 index ebb9b30..0000000 --- a/src/published/2015-04-03_set-up-secure-first-vps.txt +++ /dev/null @@ -1,147 +0,0 @@ ---- -title: How to Setup And Secure Your First VPS -pub_date: 2015-03-31 12:52:53 -slug: /blog/2015/03/set-up-secure-first-vps -metadesc: Still using shared hosting? 
It's 2015, time to set up your own VPS. Here's a complete guide to launching your first VPS on Digital Ocean or Vultr.
code: True

---

Let's talk about your server hosting situation. I know a lot of you are still using a shared web host. The thing is, it's 2015; shared hosting is only necessary if you really want unexplained site outages and over-crowded servers that slow to a crawl.

It's time to break free of those shared hosting chains. It's time to stop accepting the software stack you're handed. It's time to stop settling for whatever outdated server software and configurations some shared hosting company sticks you with.

**It's time to take charge of your server; you need a VPS.**

What? Virtual Private Servers? Those are expensive and complicated... don't I need to know Linux or something?

No, no, and not really.

Thanks to an increasingly competitive market you can pick up a very capable VPS for $5 a month. Setting up your VPS *is* a little more complicated than using a shared host, but most VPS providers these days have one-click installers that will set up a Rails, Django or even WordPress environment for you.

As for Linux, knowing your way around the command line certainly won't hurt, but these tutorials will teach you everything you really need to know. We'll also automate everything so that critical security updates for your server are applied automatically without you lifting a finger.

## Pick a VPS Provider

There are hundreds, possibly thousands, of VPS providers these days. You can nerd out comparing all of them on [serverbear.com](http://serverbear.com/) if you want. When you're starting out I suggest sticking with what I call the big three: Linode, Digital Ocean or Vultr.

Linode would be my choice for mission-critical hosting. I use it for client projects, but Vultr and Digital Ocean are cheaper and perfect for personal projects and experiments.
Both offer $5-a-month servers, which get you 512MB of RAM, plenty of bandwidth and 20-30GB of SSD-based storage. Vultr actually gives you a little more RAM, which is helpful if you're setting up a Rails or Django environment (i.e. a long-running process that requires more memory), but I've been hosting a Django-based site on a 512MB Digital Ocean instance for 18 months and have never run out of memory.

Also note that all these plans start off charging by the hour, so you can spin up a new server, play around with it and then destroy it, and you'll have only spent a few pennies.

Which one is better? They're both good. I've been using Vultr more these days, but Digital Ocean has a nicer, somewhat slicker control panel. There are also many others I haven't named. Just pick one.

Here's a link that will get you a $10 credit at [Vultr](http://www.vultr.com/?ref=6825229) and here's one that will get you a $10 credit at [Digital Ocean](https://www.digitalocean.com/?refcode=3bda91345045) (both of those are affiliate links and help cover the cost of hosting this site *and* get you some free VPS time).

For simplicity's sake, and because it offers more one-click installers, I'll use Digital Ocean for the rest of this tutorial.

## Create Your First VPS

In Digital Ocean you'll create a "Droplet". It's a three-step process: pick a plan (stick with the $5 a month plan for starters), pick a location (stick with the defaults) and then install a bare OS or go with a one-click installer. Let's get WordPress up and running, so select WordPress on 14.04 under the Applications tab.

If you want automatic backups, and you do, check that box. Backups are not free, but generally won't add more than about $1 to your monthly bill -- it's money well spent.

The last thing we need to do is add an SSH key to our account. If we don't, Digital Ocean will email our root password in a plain-text email. Yikes.
If you need to generate some SSH keys, here's a short guide, [How to Generate SSH keys](/blog/2015/03/set-up-ssh-keys-secure-logins). You can skip step 3 in that guide. Once you've got your keys set up on your local machine you just need to add them to your droplet.

If you're on OS X, you can use this command to copy your public key to the clipboard:

~~~.language-bash
pbcopy < ~/.ssh/id_rsa.pub
~~~

Otherwise you can use cat to print it out and copy it:

~~~.language-bash
cat ~/.ssh/id_rsa.pub
~~~

Now click the button to "add an SSH key". Then paste the contents of your clipboard into the box. Hit "add SSH Key" and you're done.

Now just click the giant "Create Droplet" button.

Congratulations, you just deployed your first VPS server.

## Secure Your VPS

Now we can log in to our new VPS with this code:

~~~.language-bash
ssh root@12.34.56.78
~~~

That will cause SSH to ask if you want to add the server to the list of known hosts. Say yes, and then on OS X you'll get a dialog asking for the passphrase you created a minute ago when you generated your SSH key. Enter it and check the box to save it to your keychain so you don't have to enter it again.

And you're now logged in to your VPS as root. That's not how we want to log in, though, since root is a very privileged user that can wreak all sorts of havoc. The first thing we'll do is change the password of the root user. To do that, just enter:

~~~.language-bash
passwd
~~~

And type a new password.

Now let's create a new user:

~~~.language-bash
adduser myusername
~~~

Give your new user a secure password and then enter this command:

~~~.language-bash
visudo
~~~

(If you get an error saying the command isn't installed, you'll need to install sudo first: `apt-get install sudo`. Debian does not ship with sudo.) Visudo will open a file.
Use the arrow keys to move the cursor down to the line that reads:

~~~.language-bash
root ALL=(ALL:ALL) ALL
~~~

Now add this line:

~~~.language-bash
myusername ALL=(ALL:ALL) ALL
~~~

Where myusername is the username you created just a minute ago. Now we need to save the file. To do that hit Control-X, type a Y and then hit return.

Now, **WITHOUT LOGGING OUT OF YOUR CURRENT ROOT SESSION**, open another terminal window and make sure you can log in with your new user:

~~~.language-bash
ssh myusername@12.34.56.78
~~~

You'll be asked for the password that we created just a minute ago on the server (not the one for our SSH key). Enter that password and you should be logged in. To make sure we can get root access when we need it, try entering this command:

~~~.language-bash
sudo apt-get update
~~~

That should ask for your password again and then spit out a bunch of information, all of which you can ignore for now.

Okay, now you can log out of your root terminal window. To do that just hit Control-D.

## Finishing Up

What about actually accessing our VPS on the web? Where's WordPress? Just point your browser to the bare IP address you used to log in and you should get the first screen of the WordPress installer.

We now have a VPS deployed and we've taken some very basic steps to secure it. We can do a lot more to make things more secure, but I've covered that in a separate article.

One last thing: the user we created does not have access to our SSH keys; we need to add them. First make sure you're logged out of the server (type Control-D and you'll get a message telling you the connection has been closed). Now, on your local machine, paste this command:

~~~.language-bash
cat ~/.ssh/id_rsa.pub | ssh myusername@12.34.56.78 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
~~~

You'll have to put in your password one last time, but from now on you can log in via SSH.
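If you're curious what the remote half of that `cat | ssh` one-liner actually does, here's a local sketch you can run safely on your own machine (a temporary directory stands in for the server's home directory, and the key text is a made-up placeholder). The point: `mkdir -p` only creates `~/.ssh` if it's missing, and `>>` appends, so adding a key from a second machine later won't clobber the first one.

~~~.language-bash
# Local demonstration of the server-side half of the pipeline above.
# $fake_home stands in for the remote user's home directory.
fake_home=$(mktemp -d)

# First key (placeholder text, not a real key):
echo "ssh-rsa AAAA...key-from-laptop" | (mkdir -p "$fake_home/.ssh" && cat >> "$fake_home/.ssh/authorized_keys")

# Running the same command with a second key appends rather than overwrites:
echo "ssh-rsa AAAA...key-from-desktop" | (mkdir -p "$fake_home/.ssh" && cat >> "$fake_home/.ssh/authorized_keys")

wc -l < "$fake_home/.ssh/authorized_keys"   # prints 2 -- both keys survived

rm -rf "$fake_home"
~~~

That append behavior is why you can run the one-liner once per machine you own and end up with one line per key in `authorized_keys`.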
## Next Steps

Congratulations, you made it past the first hurdle; you're well on your way to taking control of your server. Kick back, relax and write some blog posts.

Write down any problems you had with this tutorial and send me a link so I can check out your blog (I'll try to help figure out what went wrong too).

Because we used a pre-built image from Digital Ocean, though, we're really not much better off than if we'd gone with shared hosting. But that's okay, you have to start somewhere. Next up we'll do the same thing, but this time with a bare OS, which will serve as the basis for a custom-built version of Nginx that's highly optimized and way faster than any stock server.

diff --git a/src/published/2015-10-28_pass.txt b/src/published/2015-10-28_pass.txt
deleted file mode 100644
index f02998c..0000000
--- a/src/published/2015-10-28_pass.txt
+++ /dev/null
@@ -1,37 +0,0 @@
---
title: Switching from LastPass to Pass
pub_date: 2015-10-28 12:04:25
slug: /src/pass
tags: command line, security

---

I never used to use a password manager. I kept all my passwords in my head, using some tricks I learned from my very, very limited understanding of what memory champions like [Ed Cooke][1] do to keep track of them. I generated strings using [pwgen][2] and then memorized them. As you might imagine, this did not scale well. Or rather it led to me getting lazy. I don't want to memorize a new strong password for some one-off site I'll probably never log in to again. So I would use a less strong password for those. Worse, I'd re-use that password at multiple sites.

Recognizing that this was a bad idea, I gave up at some point and started using LastPass for these sorts of things. But my really important passwords (email and financial sites) are still only in my head. I never particularly liked that my passwords were stored on a third-party server, but LastPass was just *so* easy. Then LogMeIn bought LastPass and suddenly I was motivated to move on.
As I outlined in a [brief piece][3] for The Register, there are lots of replacement services out there -- I like [Dashlane][4], despite the price -- but I didn't want my password data on a third-party server any more. I wanted to be in total control.

I can't remember how I ran across [pass][5], but I've been meaning to switch over to it for a while now. It's exactly what I wanted in a password tool -- a simple, secure, command-line-based system using tested tools like GnuPG. There's also a [Firefox add-on][6] and [an Android app][7] to make life a bit easier. So far though, I'm not using either.

So I cleaned up my LastPass account, exported everything to CSV and imported it all into pass with this [Ruby script][8].

Once you have the basics installed there are two ways to run pass: with Git and without. I can't tell you how many times Git has saved my ass, so naturally I went with a Git-based setup that I host on a private server. That, combined with regular syncing to my Debian machine, my wife's Mac, rsyncing to a storage server, and routine backups to Amazon S3, means my encrypted password files are backed up on six different physical machines. Moderately insane, but sufficiently redundant that I don't worry about losing anything.

If you go this route there's one other thing you need to back up -- your GPG keys. The public key is easy, but the private one is a bit harder. I got some good ideas from [here][9]. On one hand you could be paranoid-level secure and make a paper printout of your key. I suggest using a barcode or QR code, and then printing on card stock which you laminate for protection from the elements and then store it in a secure location like a safe deposit box. I may do this at some point, but for now I went with the less secure plan B -- I simply encrypted my private key with a passphrase.
However, since, as noted above, I don't store any passwords that would, so to speak, give you the keys to my kingdom, I'm not terribly worried about it. Besides, if you really want to get these passwords it would be far easier to just take my laptop and [hit me with a $5 wrench][10] until I told you the passphrase for gnome-keyring. - -The more realistic thing to worry about is how other, potentially far less tech-savvy people can access these passwords should something happen to you. No one in my immediate family knows how to use GPG. Yet. So should something happen to me before I teach my kids how to use it, I periodically print out my important passwords and store that file in a secure place along with a will, advance directive and so on. - - -[1]: https://twitter.com/tedcooke -[2]: https://packages.debian.org/search?keywords=pwgen -[3]: tk -[4]: http://dashlane.com/ -[5]: http://www.passwordstore.org/ -[6]: https://github.com/jvenant/passff#readme -[7]: https://github.com/zeapo/Android-Password-Store -[8]: http://git.zx2c4.com/password-store/tree/contrib/importers/lastpass2pass.rb -[9]: http://security.stackexchange.com/questions/51771/where-do-you-store-your-personal-private-gpg-key -[10]: https://www.xkcd.com/538/ diff --git a/src/published/2015-11-05_how-googles-amp-project-speeds-web-sandblasting-ht.txt b/src/published/2015-11-05_how-googles-amp-project-speeds-web-sandblasting-ht.txt deleted file mode 100644 index 5c443da..0000000 --- a/src/published/2015-11-05_how-googles-amp-project-speeds-web-sandblasting-ht.txt +++ /dev/null @@ -1,107 +0,0 @@ ---- -title: How Google’s AMP project speeds up the Web—by sandblasting HTML -pub_date: 2015-11-05 12:04:25 -slug: /src/how-googles-amp-project-speeds-web-sandblasting-ht -tags: IndieWeb - ---- - -[**This story originally appeared on <a href="http://arstechnica.com/information-technology/2015/11/googles-amp-an-internet-giant-tackles-the-old-myth-of-the-web-is-too-slow/" rel="me">Ars Technica</a>, to comment and 
enjoy the full reading experience with images (including a TRS-80 browsing the web) you should read it over there.**] - -There's a story going around today that the Web is too slow, especially over mobile networks. It's a pretty good story—and it's a perpetual story. The Web, while certainly improved from the days of 14.4k modems, has never been as fast as we want it to be, which is to say that the Web has never been instantaneous. - -Curiously, rather than a focus on possible cures, like increasing network speeds, finding ways to decrease network latency, or even speeding up Web browsers, the latest version of the "Web is too slow" story pins the blame on the Web itself. And, perhaps more pointedly, this blame falls directly on the people who make it. - -The average webpage has increased in size at a terrific rate. In January 2012, the average page tracked by HTTPArchive [transferred 1,239kB and made 86 requests](http://httparchive.org/trends.php?s=All&minlabel=Oct+1+2012&maxlabel=Oct+1+2015#bytesTotal&reqTotal). Fast forward to September 2015, and the average page loads 2,162kB of data and makes 103 requests. These numbers don't directly correlate to longer page load-and-render times, of course, especially if download speeds are also increasing. But these figures are one indicator of how quickly webpages are bulking up. - -Native mobile applications, on the other hand, are getting faster. Mobile devices get more powerful with every release cycle, and native apps take better advantage of that power. - -So as the story goes, apps get faster, the Web gets slower. This is allegedly why Facebook must invent Facebook Instant Articles, why Apple News must be built, and why Google must now create [Accelerated Mobile Pages](http://arstechnica.com/information-technology/2015/10/googles-new-amp-html-spec-wants-to-make-mobile-websites-load-instantly/) (AMP). 
Google is late to the game, but AMP has the same goal as Facebook's and Apple's efforts—making the Web feel like a native application on mobile devices. (It's worth noting that all three solutions focus exclusively on mobile content.) - -For AMP, two things in particular stand in the way of a lean, mean browsing experience: JavaScript... and advertisements that use JavaScript. The AMP story is compelling. It has good guys (Google) and bad guys (everyone not using Google Ads), and it's true to most of our experiences. But this narrative has some fundamental problems. For example, Google owns the largest ad server network on the Web. If ads are such a problem, why doesn't Google get to work speeding up the ads? - -There are other potential issues looming with the AMP initiative as well, some as big as the state of the open Web itself. But to think through the possible ramifications of AMP, first you need to understand Google's new offering itself. - -## What is AMP? - -To understand AMP, you first need to understand Facebook's Instant Articles. Instant Articles use RSS and standard HTML tags to create an optimized, slightly stripped-down version of an article. Facebook then allows for some extra rich content like auto-playing video or audio clips. Despite this, Facebook claims that Instant Articles are up to 10 times faster than their siblings on the open Web. Some of that speed comes from stripping things out, while some likely comes from aggressive caching. - -But the key is that Instant Articles are only available via Facebook's mobile apps—and only to established publishers who sign a deal with Facebook. That means reading articles from Facebook's Instant Article partners like National Geographic, BBC, and Buzzfeed is a faster, richer experience than reading those same articles when they appear on the publisher's site. 
Apple News appears to work roughly the same way, taking RSS feeds from publishers and then optimizing the content for delivery within Apple's application. - -All this app-based content delivery cuts out the Web. That's a problem for the Web and, by extension, for Google, which leads us to Google's Accelerated Mobile Pages project. - -Unlike Facebook Articles and Apple News, AMP eschews standards like RSS and HTML in favor of its own little modified subset of HTML. AMP HTML looks a lot like HTML without the bells and whistles. In fact, if you head over to the [AMP project announcement](https://www.ampproject.org/how-it-works/), you'll see an AMP page rendered in your browser. It looks like any other page on the Web. - -AMP markup uses an extremely limited set of tags. Form tags? Nope. Audio or video tags? Nope. Embed tags? Certainly not. Script tags? Nope. There's a very short list of the HTML tags allowed in AMP documents available over on the [project page](https://github.com/ampproject/amphtml/blob/master/spec/amp-html-format.md). There's also no JavaScript allowed. Those ads and tracking scripts will never be part of AMP documents (but don't worry, Google will still be tracking you). - -AMP defines several of its own tags, things like amp-youtube, amp-ad, or amp-pixel. The extra tags are part of what's known as [Web components](http://www.w3.org/TR/components-intro/), which will likely become a Web standard (or it might turn out to be "ActiveX part 2," only the future knows for sure). - -So far AMP probably sounds like a pretty good idea—faster pages, no tracking scripts, no JavaScript at all (and so no overlay ads about signing up for newsletters). However, there are some problematic design choices in AMP. (At least, they're problematic if you like the open Web and current HTML standards.) 
- -AMP re-invents the wheel for images by using the custom component amp-img instead of HTML's img tag, and it does the same thing with amp-audio and amp-video rather than use the HTML standard audio and video. AMP developers argue that this allows AMP to serve images only when required, which isn't possible with the HTML img tag. That, however, is a limitation of Web browsers, not HTML itself. AMP has also very clearly treated [accessibility](https://en.wikipedia.org/wiki/Computer_accessibility) as an afterthought. You lose more than just a few HTML tags with AMP. - -In other words, AMP is technically half baked at best. (There are dozens of open issues calling out some of the [most](https://github.com/ampproject/amphtml/issues/517) [egregious](https://github.com/ampproject/amphtml/issues/481) [decisions](https://github.com/ampproject/amphtml/issues/545) in AMP's technical design.) The good news is that AMP developers are listening. One of the worst things about AMP's initial code was the decision to disable pinch-and-zoom on articles, and thankfully, Google has reversed course and [eliminated the tag that prevented pinch and zoom](https://github.com/ampproject/amphtml/issues/592). - -But AMP's markup language is really just one part of the picture. After all, if all AMP really wanted to do was strip out all the enhancements and just present the content of a page, there are existing ways to do that. Speeding things up for users is a nice side benefit, but the point of AMP, as with Facebook Articles, looks to be more about locking in users to a particular site/format/service. In this case, though, the "users" aren't you and I as readers; the "users" are the publishers putting content on the Web. - -## It's the ads, stupid - -The goal of Facebook Instant Articles is to keep you on Facebook. No need to explore the larger Web when it's all right there in Facebook, especially when it loads so much faster in the Facebook app than it does in a browser. 
- -Google seems to have recognized what a threat Facebook Instant Articles could be to Google's ability to serve ads. This is why Google's project is called Accelerated Mobile Pages. Sorry, desktop users, Google already knows how to get ads to you. - -If you watch the [AMP demo](https://googleblog.blogspot.com/2015/10/introducing-accelerated-mobile-pages.html), which shows how AMP might work when it's integrated into search results next year, you'll notice that the viewer effectively never leaves Google. AMP pages are laid over the Google search page in much the same way that outside webpages are loaded in native applications on most mobile platforms. The experience from the user's point of view is just like the experience of using a mobile app. - -Google needs the Web to be on par with the speeds in mobile apps. And to its credit, the company has some of the smartest engineers working on the problem. Google has made one of the fastest Web browsers (if not the fastest) by building Chrome, and in doing so the company has pushed other vendors to speed up their browsers as well. Since Chrome debuted, browsers have become faster and better at an astonishing rate. Score one for Google. - -The company has also been touting the benefits of mobile-friendly pages, first by labeling them as such in search results on mobile devices and then later by ranking mobile friendly pages above not-so-friendly ones when other factors are equal. Google has been quick to adopt speed-improving new HTML standards like the responsive images effort, which was first supported by Chrome. Score another one for Google. - -But pages keep growing faster than network speeds, and the Web slows down. In other words, Google has tried just about everything within its considerable power as a search behemoth to get Web developers and publishers large and small to speed up their pages. It just isn't working. 
- -One increasingly popular reaction to slow webpages has been the use of content blockers, typically browser add-ons that stop pages from loading anything but the primary content of the page. Content blockers have been around for over a decade now (No Script first appeared for Firefox in 2005), but their use has largely been limited to the desktop. That changed in Apple's iOS 9, which for the first time put simple content-blocking tools in the hands of millions of mobile users. - -Combine all the eyeballs that are using iOS with content blockers, reading Facebook Instant Articles, and perusing Apple News, and you suddenly have a whole lot of eyeballs that will never see any Google ads. That's a problem for Google, one that AMP is designed to fix. - -## Static pages that require Google's JavaScript - -The most basic thing you can do on the Web is create a flat HTML file that sits on a server and contains some basic tags. This type of page will always be lightning fast. It's also insanely simple. This is literally all you need to do to put information on the Web. There's no need for JavaScript, no need even for CSS. - -This is more or less the sort of page AMP wants you to create (AMP doesn't care if your pages are actually static or—more likely—generated from a database. The point is what's rendered is static). But then AMP wants to turn around and require that each page include a third-party script in order to load. AMP deliberately sets the opacity of the entire page to 0 until this script loads. Only then is the page revealed. - -This is a little odd; as developer Justin Avery [writes](https://responsivedesign.is/articles/whats-the-deal-with-accelerated-mobile-pages-amp), "Surely the document itself is going to be faster than loading a library to try and make it load faster." - -Pinboard.in creator Maciej Cegłowski did just that, putting together a demo page that duplicates the AMP-based AMP homepage without that JavaScript. 
Over a 3G connection, Cegłowski's page fills the viewport in [1.9 seconds](http://www.webpagetest.org/result/151016_RF_VNE/). The AMP homepage takes [9.2 seconds](http://www.webpagetest.org/result/151016_9J_VNN/). JavaScript slows down page load times, even when that JavaScript is part of Google's plan to speed up the Web. - -Ironically, for something that is ostensibly trying to encourage better behavior from developers and publishers, this means that pages using progressive enhancement, keeping scripts to a minimum and aggressively caching content—in other words sites following best practices and trying to do things right—may be slower in AMP. - -In the end, developers and publishers who have been following best practices for Web development and don't rely on dozens of tracking networks and ads have little to gain from AMP. Unfortunately, the publishers building their sites like that right now are few and far between. Most publishers have much to gain from generating AMP pages—at least in terms of speed. Google says that AMP can improve page speed index scores by between 15 to 85 percent. That huge range is likely a direct result of how many third-party scripts are being loaded on some sites. - -The dependency on JavaScript has another detrimental effect. AMP documents depend on JavaScript, which is to say that if their (albeit small) script fails to load for some reason—say, you're going through a tunnel on a train or only have a flaky one-bar connection at the beach—the AMP page is completely blank. When an AMP page fails, it fails spectacularly. - -Google knows better than this. Even Gmail still offers a pure HTML-based fallback version of itself. - -## AMP for publishers - -Under the AMP bargain, all big media has to do is give up its ad networks. And interactive maps. And data visualizations. And comment systems. - -Your WordPress blog can get in on the stripped-down AMP action as well. 
Given that WordPress powers roughly 24 percent of all sites on the Web, having an easy way to generate AMP documents from WordPress means a huge boost in adoption for AMP. It's certainly possible to build fast websites using WordPress, but it's also easy to do the opposite. WordPress plugins often have a dramatic (negative) impact on load times. It isn't uncommon to see a WordPress site loading not just one but several external JavaScript libraries because the user installed three plugins that each use a different library. AMP neatly solves that problem by stripping everything out. - -So why would publishers want to use AMP? Google, while its influence has dipped a tad across industries (as Facebook and Twitter continue to drive more traffic), remains a powerful driver of traffic. When Google promises more eyeballs on their stories, big media listens. - -AMP isn't trying to get rid of the Web as we know it; it just wants to create a parallel one. Under this system, publishers would not stop generating regular pages, but they would also start generating AMP files, usually (judging by the early adopter examples) by appending /amp to the end of the URL. The AMP page and the canonical page would reference each other through standard HTML tags. User agents could then pick and choose between them. That is, Google's Web crawler might grab the AMP page, but desktop Firefox might hit the AMP page and redirect to the canonical URL. - -On one hand, what this amounts to is that after years of telling the Web to stop making m. mobile-specific websites, Google is telling the Web to make /amp-specific mobile pages. On the other hand, this nudges publishers toward an idea that's big in the [IndieWeb movement](http://indiewebcamp.com/): Publish (on your) Own Site, Syndicate Elsewhere (or [POSSE](http://indiewebcamp.com/POSSE) for short). - -The idea is to own the canonical copy of the content on your own site but then to send that content everywhere you can. 
Or rather, everywhere you want to reach your readers. Facebook Instant Article? Sure, hook up the RSS feed. Apple News? Send the feed over there, too. AMP? Sure, generate an AMP page. No need to stop there—tap the new Medium API and half a dozen others as well. - -Reading is a fragmented experience. Some people will love reading on the Web, some via RSS in their favorite reader, some in Facebook Instant Articles, some via AMP pages on Twitter, some via Lynx in their terminal running on a [restored TRS-80](http://arstechnica.com/information-technology/2015/08/surfing-the-internet-from-my-trs-80-model-100/) (seriously, it can be done. See below). The beauty of the POSSE approach is that you can reach them all from a single, canonical source. - -## AMP and the open Web - -While AMP has problems and just might be designed to lock publishers into a Google-controlled format, so far it does seem friendlier to the open Web than Facebook Instant Articles. - -In fact, if you want to be optimistic, you could look at AMP as the carrot that Google has been looking for in its effort to speed up the Web. As noted Web developer (and AMP optimist) Jeremy Keith [writes](https://adactio.com/journal/9646) in a piece on AMP, "My hope is that the current will flow in both directions. As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask 'Why can't our regular pages be this fast?' By showing that there is life beyond big bloated invasive webpages, perhaps the AMP project will work as a demo of what the whole Web could be." - -Not everyone is that optimistic about AMP, though. Developer and Author Tim Kadlec [writes](https://timkadlec.com/2015/10/amp-and-incentives/), "[AMP] doesn't feel like something helping the open Web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the Web... 
Using a very specific tool to build a tailored version of my page in order to 'reach everyone' doesn't fit any definition of the 'open Web' that I've ever heard."

There's one other important aspect to AMP that helps speed up their pages: Google will cache your pages on its CDN for free. "AMP is caching... You can use their caching if you conform to certain rules," writes Dave Winer, developer and creator of RSS, [in a post on AMP](http://scripting.com/2015/10/10/supportingStandardsWithoutAllThatNastyInterop.html). "If you don't, you can use your own caching. I can't imagine there's a lot of difference unless Google weighs search results based on whether you use their code."

diff --git a/src/published/2019-04-07_why-and-how-ditch-vagrant-for-lxd.txt b/src/published/2019-04-07_why-and-how-ditch-vagrant-for-lxd.txt
deleted file mode 100644
index e83d8e3..0000000
--- a/src/published/2019-04-07_why-and-how-ditch-vagrant-for-lxd.txt
+++ /dev/null
@@ -1,216 +0,0 @@
* **Updated July 2022**: This was getting a bit out of date in some places so I've fixed a few things. More importantly, I've run into some issues with cgroups and lxc on Arch and added some notes below under the [special note to Arch users](#arch)*

I've used Vagrant to manage my local development environment for quite some time. The developers I used to work with used it and, while I have no particular love for it, it works well enough. Eventually I got comfortable enough with Vagrant that I started using it in my own projects. I even wrote about [setting up a custom Debian 9 Vagrant box](/src/create-custom-debian-9-vagrant-box) to mirror the server running this site.

The problem with Vagrant is that I have to run a huge memory-hungry virtual machine when all I really want to do is run Django's built-in dev server.

My laptop only has 8GB of RAM. My browser is usually taking around 2GB, which means if I start two Vagrant machines, I'm pretty much maxed out.
Django's dev server is also painfully slow to reload when anything changes. - -Recently I was talking with one of Canonical's [MAAS](https://maas.io/) developers and the topic of containers came up. When I mentioned I really didn't like Docker, but hadn't tried anything else, he told me I really needed to try LXD. Later that day I began reading through the [LinuxContainers](https://linuxcontainers.org/) site and tinkering with LXD. Now, a few days later, there's not a Vagrant machine left on my laptop. - -Since it's just me, I don't care that LXC only runs on Linux. LXC/LXD is blazing fast, lightweight, and dead simple. To quote Canonical's [Michael Iatrou](https://blog.ubuntu.com/2018/01/26/lxd-5-easy-pieces), LXC "liberates your laptop from the tyranny of heavyweight virtualization and simplifies experimentation." - -Here's how I'm using LXD to manage containers for Django development on Arch Linux. I've also included instructions and commands for Ubuntu since I set it up there as well. - -### What's the difference between LXC, LXD and `lxc` - -I wrote this guide in part because I've been hearing about LXC for ages, but it seemed unapproachable, overwhelming, too enterprisey you might say. It's really not though; in fact I found it easier to understand than Vagrant or Docker. - -So what is an LXC container, what's LXD, and how is either different than, say, a VM or, for that matter, Docker? - -* LXC - low-level tools and a library to create and manage containers; powerful, but complicated. -* LXD - a daemon that provides a REST API to drive LXC containers; much more user-friendly. -* `lxc` - the command line client for LXD. - -In LXC parlance a container is essentially a virtual machine; if you want to get pedantic, see Stéphane Graber's post on the [various components that make up LXD](https://stgraber.org/2016/03/11/lxd-2-0-introduction-to-lxd-112/). For the most part though, interacting with an LXC container is like interacting with a VM. 
You say ssh, LXD says socket, potato, potahto. Mostly. - -An LXC container is not a container in the same sense that Docker talks about containers. Think of it more as a VM that only uses the resources it needs to do whatever it's doing. Running this site in an LXC container uses very little RAM. Running it in Vagrant uses 2GB of RAM because that's what I allocated to the VM -- that's what it uses even if it doesn't need it. LXC is much smarter than that. - -Now what about LXD? LXC is the low-level tool; you don't really need to go there. Instead you interact with your LXC container via the LXD API. It uses YAML config files and a command line tool `lxc`. - -That's the basic stack, let's install it. - -### Install LXD - -On Arch I used the version of [LXD in the AUR](https://aur.archlinux.org/packages/lxd/). Ubuntu users should go with the Snap package. The other thing you'll want is your distro's Btrfs or ZFS tools. - -Part of LXC's magic relies on either Btrfs or ZFS to read a virtual disk not as a file the way VirtualBox and others do, but as a block device. Both file systems also offer copy-on-write cloning and snapshot features, which makes it simple and fast to spin up new containers. It takes about 6 seconds to install and boot a complete and fully functional LXC container on my laptop, and most of that time is downloading the image file from the remote server. It takes about 3 seconds to clone that fully provisioned base container into a new container. - -In the end I set up my Arch machine using Btrfs and my Ubuntu machine using ZFS to see if I could see any difference (so far, that would be no; the only difference I've run across in my research is that Btrfs can run LXC containers inside LXC containers. Turtles all the way down). 
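If you're not sure which filesystem a machine is already running (relevant when picking between Btrfs and ZFS for the storage pool), GNU coreutils can tell you. This is a generic check of mine, not something from the original post:

```shell
# Print the filesystem type backing / (e.g. "btrfs", "ext2/ext3", "zfs").
# If it's already btrfs or zfs, LXD can lean on that filesystem's
# snapshot features; otherwise lxd init will offer a loop device instead.
stat -f -c %T /
```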
- -Assuming you have Snap packages set up already, Debian and Ubuntu users can get everything they need to install and run LXD with these commands: - -~~~~console -apt install zfsutils-linux -~~~~ - -And then install the snap version of lxd with: - -~~~~console -snap install lxd -~~~~ - -Once that's done we need to initialize LXD. I went with the defaults for everything. I've printed out the entire init command output so you can see what will happen: - -~~~~console -sudo lxd init -Create a new BTRFS pool? (yes/no) [default=yes]: -would you like to use LXD clustering? (yes/no) [default=no]: -Do you want to configure a new storage pool? (yes/no) [default=yes]: -Name of the new storage pool [default=default]: -Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: -Create a new BTRFS pool? (yes/no) [default=yes]: -Would you like to use an existing block device? (yes/no) [default=no]: -Size in GB of the new loop device (1GB minimum) [default=15GB]: -Would you like to connect to a MAAS server? (yes/no) [default=no]: -Would you like to create a new local network bridge? (yes/no) [default=yes]: -What should the new bridge be called? [default=lxdbr0]: -What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: -What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: -Would you like LXD to be available over the network? (yes/no) [default=no]: -Would you like stale cached images to be updated automatically? (yes/no) [default=yes] -Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes -~~~~ - -LXD will then spit out the contents of the profile you just created. It's a YAML file and you can edit it as you see fit after the fact. You can also create more than one profile if you like. 
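For reference, the preseed it prints looks roughly like this. This is an abridged reconstruction from memory, not verbatim output, and the exact keys vary by LXD version:

```yaml
# Abridged sketch of a default "lxd init" preseed (not verbatim output).
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
storage_pools:
- name: default
  driver: btrfs
  config:
    size: 15GB
profiles:
- name: default
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
```

If you save that YAML, the same answers can be replayed non-interactively on another machine with `cat preseed.yaml | sudo lxd init --preseed`.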
To see all installed profiles use: - -~~~~console -lxc profile list -~~~~ - -To view the contents of a profile use: - -~~~~console -lxc profile show <profilename> -~~~~ - -To edit a profile use: - -~~~~console -lxc profile edit <profilename> -~~~~ - -So far I haven't needed to edit a profile by hand. I've also been happy with all the defaults although, when I do this again, I will probably enlarge the storage pool, and maybe partition off some dedicated disk space for it. But for now I'm just trying to figure things out so defaults it is. - -The last step in our setup is to add our user to the lxd group. By default LXD runs as the lxd group, so to interact with containers we'll need to make our user part of that group. - -~~~~console -sudo usermod -a -G lxd yourusername -~~~~ - -#####Special note for Arch users. {:#arch } - -To run unprivileged containers as your own user, you'll need to jump through a couple extra hoops. As usual, the [Arch User Wiki](https://wiki.archlinux.org/index.php/Linux_Containers#Enable_support_to_run_unprivileged_containers_(optional)) has you covered. Read through and follow those instructions and then reboot and everything below should work as you'd expect. - -Or at least it did until about June of 2022 when something changed with cgroups and I stopped being able to run my lxc containers. I kept getting errors like: - -~~~~console -Failed to create cgroup at_mnt 24() -lxc debian-base 20220713145726.259 ERROR conf - ../src/lxc/conf.c:lxc_mount_auto_mounts:851 - No such file or directory - Failed to mount "/sys/fs/cgroup" -~~~~ - -I tried debugging, and reading through all the bug reports I could find over the course of a couple of days and got nowhere. No one else seems to have this problem. I gave up and decided I'd skip virtualization and develop directly on Arch. I installed PostgreSQL... and it wouldn't start, also throwing an error about cgroups. That is when I dug deeper into cgroups and found a way to revert to the older behavior. 
I added this line to my boot params (in my case that's in /boot/loader/entries/arch.conf): - -~~~~console -systemd.unified_cgroup_hierarchy=0 -~~~~ - -That fixed all the issues for me. If anyone can explain *why* I'd be interested to hear from you in the comments. - -### Create Your First LXC Container - -Let's create our first container. This website runs on a Debian VM currently hosted on Vultr.com, so I'm going to spin up a Debian container to mirror this environment for local development and testing. - -To create a new LXC container we use the `launch` command of the `lxc` tool. - -There are four ways you can get LXC containers: local (meaning a container base you've downloaded), images (which come from [https://images.linuxcontainers.org/](https://images.linuxcontainers.org/)), ubuntu (release versions of Ubuntu), and ubuntu-daily (daily images). The images on linuxcontainers are unofficial, but the Debian image I used worked perfectly. There's also Alpine, Arch, CentOS, Fedora, openSUSE, Oracle, Plamo, Sabayon and lots of Ubuntu images. Pretty much every architecture you could imagine is in there too. - -I created a Debian 9 Stretch container with the amd64 image. To create an LXC container from one of the remote images the basic syntax is `lxc launch images:distroname/version/architecture containername`. For example: - -~~~~console -lxc launch images:debian/stretch/amd64 debian-base -Creating debian-base -Starting debian-base -~~~~ - -That will grab the amd64 image of Debian 9 Stretch and create a container out of it and then launch it. 
Now if we look at the list of installed containers we should see something like this: - -~~~~console -lxc list -+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+ -| debian-base | RUNNING | 10.171.188.236 (eth0) | fd42:e406:d1eb:e790:216:3eff:fe9f:ad9b (eth0) | PERSISTENT | | -+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+ -~~~~ - -Now what? This is what I love about LXC, we can interact with our container pretty much the same way we'd interact with a VM. Let's connect to the root shell: - -~~~~console -lxc exec debian-base -- /bin/bash -~~~~ - -Look at your prompt and you'll notice it says `root@nameofcontainer`. Now you can install everything you need on your container. For me, setting up a Django dev environment, that means Postgres, Python, Virtualenv, and, for this site, all the Geodjango requirements (Postgis, GDAL, etc), along with a few other odds and ends. - -You don't have to do it from inside the container though. Part of LXD's charm is to be able to run commands without logging into anything. Instead you can do this: - -~~~~console -lxc exec debian-base -- apt update -lxc exec debian-base -- apt install postgresql postgis virtualenv -~~~~ - -LXD will output the results of your command as if you were SSHed into a VM. Not being one for typing, I created a bash alias that looks like this: `alias luxdev='lxc exec debian-base -- '` so that all I need to type is `luxdev <command>`. 
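One subtlety worth calling out with `lxc exec` (and the alias above): operators like `&&` are parsed by the host shell before `lxc` ever sees them, so a chain has to reach a shell inside the container as a single quoted string. The `lxc` line below is illustrative only (the paths and venv name are made up); the plain `bash` line demonstrates the quoting mechanics and actually runs:

```shell
# Unquoted, the host shell splits the command at '&&': only the first part
# reaches the container and the rest runs on the host. Quoting the whole
# chain and handing it to bash -c keeps it together (illustrative only):
#
#   lxc exec debian-base -- bash -c 'cd /srv/site && source venv/bin/activate && ./manage.py runserver 0.0.0.0:8000'
#
# The same quoting rule, runnable on any host:
bash -c 'cd /tmp && pwd'   # prints /tmp
```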
- -What I haven't figured out is how to chain commands; this does not work: - -~~~~console -lxc exec debian-base -- su - lxf && cd site && source venv/bin/activate && ./manage.py runserver 0.0.0.0:8000 -~~~~ - -According to [a bug report](https://github.com/lxc/lxd/issues/2057), it should work in quotes, but it doesn't for me. Something must have changed since then, or I'm doing something wrong. - -The next thing I wanted to do was mount a directory on my host machine in the LXC instance. To do that you'll need to edit `/etc/subuid` and `/etc/subgid` to add your user id. Use the `id` command to get your user and group id (it's probably 1000 but if not, adjust the commands below). Once you have your user id, add it to the files with this one-liner I got from the [Ubuntu blog](https://blog.ubuntu.com/2016/12/08/mounting-your-home-directory-in-lxd): - -~~~~console -echo 'root:1000:1' | sudo tee -a /etc/subuid /etc/subgid -~~~~ - -Then you need to configure your LXC instance to use the same uid: - -~~~~console -lxc config set debian-base raw.idmap 'both 1000 1000' -~~~~ - -The last step is to add a device to your config file so LXC will mount it. You'll need to stop and start the container for the changes to take effect. - -~~~~console -lxc config device add debian-base sitedir disk source=/path/to/your/directory path=/path/to/where/you/want/folder/in/lxc -lxc stop debian-base -lxc start debian-base -~~~~ - -That replicates my setup in Vagrant, but we've really just scratched the surface of what you can do with LXD. For example, you'll notice I named the initial container "debian-base". That's because this is the base image (fully set up for Django dev) which I clone whenever I start a new project. 
To clone a container, first take a snapshot of your base container, then copy that snapshot to create a new container: - -~~~~console -lxc snapshot debian-base debian-base-configured -lxc copy debian-base/debian-base-configured mycontainer -~~~~ - -Now you've got a new container named mycontainer. If you'd like to tweak anything, for example mount a different folder specific to this new project you're starting, you can edit the config file like this: - -~~~~console -lxc config edit mycontainer -~~~~ - -I highly suggest reading through Stéphane Graber's 12 part series on LXD to get a better idea of other things you can do, how to manage resources, manage local images, migrate containers, or connect LXD with Juju, OpenStack or yes, even Docker. - -#####Shoulders stood upon - -* [Stéphane Graber's 12 part series on lxd 2.0](https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/) - Graber wrote LXC and LXD; this is the best resource I found and I highly recommend reading it all. -* [Mounting your home directory in LXD](https://blog.ubuntu.com/2016/12/08/mounting-your-home-directory-in-lxd) -* [Official how to](https://linuxcontainers.org/lxd/getting-started-cli/) -* [Linux Containers Discourse site](https://discuss.linuxcontainers.org/t/deploying-django-applications/996) -* [LXD networking: lxdbr0 explained](https://blog.ubuntu.com/2016/04/07/lxd-networking-lxdbr0-explained) - - -[^1]: To be fair, I didn't need to get rid of Vagrant. You can use Vagrant to manage LXC containers, but I don't know why you'd bother. LXD's management tools and config system work great; why add yet another tool to the mix? Unless you're working with developers who use Windows, in which case LXC, which is short for *Linux Containers*, is not for you. 
diff --git a/src/published/arch-philosophy.txt b/src/published/arch-philosophy.txt deleted file mode 100644 index ff18521..0000000 --- a/src/published/arch-philosophy.txt +++ /dev/null @@ -1,24 +0,0 @@ -Everyone seems to have a post about why they ended up with Arch. This is mine. - -I recently made the switch to Arch Linux for my primary desktop and it's been great. Arch very much feels like the end of the line for me --the bottom of the rabbit hole as it were. Once you have a system that does everything you need it to do effortlessly, why bother with anything else? Some of it might be a pain at times, hand partitioning, hand mounting and generating your own fstab files, but it teaches you a lot. It pulls back the curtain so you can see that you are in fact the person behind the curtain, you just didn't realize it. - -<img src="images/2020/desktop042020_uAICE8n.png" id="image-2325" class="picwide caption" /> - -**[Updated July 2021: Still running Arch. Still happy about it. I did switch back to Openbox instead of i3, but otherwise my setup is unchanged]** - -Why bother? Control. Simplicity. Stubbornness. The good old DIY ethos, which is born out of the realization that if you don't do things yourself you'll have to accept the mediocrity that capitalism has produced. You never learn; you never grow. That's no way to live. - -I used to be a devoted Debian fan. I still agree with the Debian manifesto, such as it is. In practice however I found myself too often having to futz with things. - -I came to Arch for the AUR, though the truth is these days I don't use it much. Then for a while I [ran Sway](/src/guide-to-switching-i3-to-sway), which was really only practical on Arch. Since then though I went back to X.org. Sorry Wayland, but much as I love Sway, I did not love wrestling with MIDI controller drivers, JACK, and all the other elements of an audio/video workflow in Wayland. It can be done, but it’s more work, and I don’t want to work at getting software to work. 
I’m too old for that shit. I want to plug in a microphone, open Audacity, and record. If it’s any more complicated than that -- and it was for me in Wayland with the mics I own -- I will find something else. I really don’t care what my software stack is, so long as I can create what I want to create with it. - -Wayland was smoother, less graphically glitchy, but meh, whatever. Ninety percent of the time I’m writing in Vim in a Urxvt window. I need smooth scrolling and transitions like I need a hole in my head. I also set up Openbox to behave very much like Sway, so I still have the same shortcuts and honestly, aside from the fact that Tint2 has more icons than Waybar, I can’t tell the difference. Well, that’s not true. Vim works fine with the clipboard again, no need for Neovim. - -My Arch setup these days is minimalist: [Openbox](http://openbox.org/wiki/Main_Page) with [tint2](https://gitlab.com/o9000/tint2). I open apps with [dmenu](http://tools.suckless.org/dmenu/) and do most of my file system tasks from the terminal using bash (or [Ranger](http://nongnu.org/ranger/) if I want something fancier). Currently my setup uses about 200MB of RAM with no apps open. Arch doesn't have quite the software selection of Debian, but it has most of the software you'd ever want. My needs are simple: bash, vim, tmux, mutt, newsboat, mpd, mpv, git, feh, gimp, darktable and dev stuff like python3, postgis, etc. Every distro has this stuff. - -All of which means I have no need to spend more than $400 on a laptop. - - -Arch's real strength though is how amazingly easy it is to package your own software. Because even Debian's epically oversized repos can't hold everything. The Debian repos pale next to the Arch User Repository (AUR), which has just about every piece of software available for Linux. And it's up-to-date. So up-to-date that half the AUR packages have a -git variant that's pulled straight from the project's git repo. 
The best part is there are tools to manage and update all these out of repo packages. I strongly suggest you learn to package and install AUR repos by hand, but once you've done that a few times and you know what's happening I suggest installing [yay](https://github.com/Jguer/yay) to simplify managing all those AUR installs. - -I've installed Arch on dozens of machines at this point. I started with my Macbook Pro, which I've since sold (no need for high end hardware with my setup), but it ran Arch like a champ (what a relief to not need OS X). Currently I use a Lenovo x270 that I picked up off eBay for $300. I added a larger hard drive, a second hard drive, and 32-gigabytes of RAM. It runs Arch like a champ and gives me all I could ever want in a laptop. Okay, a graphics card would be nice for my occasional bouts of video editing, but otherwise it's more than enough. diff --git a/src/published/backup-2.txt b/src/published/backup-2.txt deleted file mode 100644 index 4012668..0000000 --- a/src/published/backup-2.txt +++ /dev/null @@ -1,23 +0,0 @@ -I wrote previously about how I [backup database files](/src/automatic-offsite-postgresql-backups) automatically. The key word there being "automatically". If I have to remember to make a backup the odds of it happening drop to zero. So I automate as I described in that piece, but that's not the only backup I have. - -The point for me as a writer is that I don't want to lose these words. - -Part of the answer is backing up databases, but part of my solution is also creating workflows which automatically spawn backups. - -This is actually my preferred backup method because it's not just a backup, it's future proofing. PostgreSQL may not be around ten years from now (I hope it is, because it's pretty awesome, but it may not be), but it's not my only backup. - -In fact I've got at least half a dozen backups of these words and I haven't even finished this piece yet. 
Right now I'm typing these words in Vim and will save the file in a Git repo that will get pushed to a server. That's two backups. Later the containing folder will be backed up on S3 (weekly), as well as two local drives (one daily, one weekly, both [rsync](https://rsync.samba.org/) copies). - -None of that really requires any effort on my part. I do have to add this file to the git repo and then commit and push it to the remote server, but [Vim Fugitive](https://github.com/tpope/vim-fugitive) makes that ridiculously simple. - -That's not the end of the backups though. Once I'm done writing I'll cut and paste this piece into my Django app and hit a publish button that will write the results out to the flat HTML file you're actually reading right now (this file is another backup). I also output a plain text version (just append `.txt` to any luxagraf URL to see a plain text version of the page). - -The end result is that all this makes it very unlikely I will lose these words outright. - -However, when I plugged these words into the database I gave this article a relationship with other objects in that database. So even though the redundant backups built into my workflow make a total data loss unlikely, without the database I will lose the relationships I've created. That's why I keep [a solid PostgreSQL backup strategy](/src/automatic-offsite-postgresql-backups), but what if Postgres does disappear? - -I could and occasionally do output all the data in the database to flat files with JSON or YAML versions of the metadata attached. Or at least some of it. It's hard to output massive amounts of geodata to a text file (for example the shapefiles of [national parks](https://luxagraf.net/projects/national-parks/) aren't particularly useful as text data). - -I'm not sure what the answer is really, but lately I've been thinking that maybe the answer is just to let it go? 
The words are the story, that's what my family, my kids, my friends, and whatever few readers I have really want. I'm the only one that cares about the larger story that includes the metadata, the relationships between the stories. Maybe I don't need that. Maybe that it's here today at all is remarkable enough on its own. - -The web is after all an ephemeral thing. It depends on our continued ability to do so many things we won't be able to do forever, like burn fossil fuels. In the end the most lasting backup I have may well be the 8.5x11 sheets of paper I've recently taken to printing out. Everything else depends on so much. diff --git a/src/published/command-line-searchable-text-snippets.txt b/src/published/command-line-searchable-text-snippets.txt deleted file mode 100644 index 7bb149b..0000000 --- a/src/published/command-line-searchable-text-snippets.txt +++ /dev/null @@ -1,116 +0,0 @@ -Snippets are bits of text you use frequently. Boilerplate email responses, code blocks, and whatever else you regularly need to type. My general rule is, if I type it more than twice, I save it as a snippet. - -I have a lot of little snippets of text and code from years of doing this. When I used the i3 desktop (and X11) I used [Autokey](https://github.com/autokey/autokey) to invoke shortcuts and paste these snippets where I need them. In Autokey you define a shortcut for your longer chunk of text, and then whenever you type that shortcut Autokey "expands" it to your longer text. - -It's a great app, but I [switched to a Wayland-based desktop](/src/guide-to-switching-i3-to-sway) ([Sway](https://swaywm.org/)) and Autokey doesn't work in Wayland yet. It's unclear to me whether it's even possible to have an Autokey-like app work within Wayland's security model ([Hawck](https://github.com/snyball/Hawck) claims to, but I have not tested it). 
- -Instead, after giving it some thought, I came up with a way to do everything I need in a way I like even better, using tools that I already have installed. - -###Rolling Your Own Text Snippet Manager - -Autokey is modeled on the idea of typing shortcuts and having them replaced with a larger chunk of text. It works to a point, but has the mental overhead of needing to remember all those keystroke combos. - -Dedicating memory to digital stuff feels like we're doing it wrong. Why not *search* for a snippet instead of trying to remember some key combo? If the searching is fast and seamless there's no loss of "flow," or switching contexts, and no need to remember some obtuse shortcut. - -To work though, the search must be *fast*. Fortunately there's a great little command line app that offers lightning-fast search: [`fzf`](https://github.com/junegunn/fzf), a command line "fuzzy" finder. `fzf` is a find-as-you-type search interface that's incredibly fast, especially when you pair it with [`ripgrep`](https://github.com/BurntSushi/ripgrep) instead of `find`. - -I already use `fzf` as a DIY application launcher, so I thought why not use it to search for snippets? This way I can keep my snippets in a simple text file, parse them into an array, pass that to `fzf`, search, and then pass the selected result on to the clipboard. - -I combined Alacritty, a Python script, `fzf`, `sed`, and some Sway shortcuts to make a snippet manager I can call up and search through with a single keystroke. - -###Python - -It may be possible to do this entirely in a bash script, but I'm not that great at bash scripting so I did the text parsing in Python, which I know well enough. - -I wanted to keep all my snippets in a single text file, with the option to do multiline snippets for readability (in other words I didn't want to be writing `\n` characters just because that's easier to parse). I picked `---` as a delimiter because... no reason really. 
- -The other thing I wanted was the ability to use tags to simplify searching. Tags become a way of filtering searches. For example, all the snippets I use writing for Wired can be tagged wired and I can see them all in one view by typing "wired" in `fzf`. - -So my snippet file looks something like this: - -```` -<div class="cluster"> - <span class="row-2"> - </span> -</div> -tags:html cluster code - ---- -```python - -``` -tags: python code - ---- -```` - -Another goal, which you may notice above, is that I didn't want any format constraints. The snippets can take just about any ASCII character. The tags line can have spaces or no spaces, commas, semicolons -- it doesn't matter, because either way `fzf` can search it, and the tags will be stripped out before it hits the clipboard. - -Here's the script I cobbled together to parse this text file into an array I can pass to `fzf`: - -~~~python -import os -import re -# expand ~ since open() won't do it on its own -with open(os.path.expanduser('~/.textsnippets.txt'), 'r') as f: - data = f.read() -snips = re.split("---", data) -for snip in snips: - # strip the blank line at the end - s = '\n'.join(snip.split('\n')[1:-1]) - #make sure we output the newlines, but no string wrapping single quotes - print(repr(s.strip()).strip('\'')) -~~~ - -All this script does is open a file, read the contents into a variable, split those contents on `---`, strip any extra space and then return the results to stdout. - -The only tricky part is the last line. We need to preserve the linebreaks and to do that I used [`repr`](https://docs.python.org/3.8/library/functions.html#repr), but that means Python literally prints the string, with the single quotes wrapping it. So the last `.strip('\'')` gets rid of those. - -I saved that file to `~/bin` which is already on my `$PATH`. - -###Shell Scripting - -The next thing we need to do is call this script, and pass the results to `fzf` so we can search them. - -To do that I just wrote a bash script. 
- -~~~.bash -#!/usr/bin/env bash -selected="$(python ~/bin/snippet.py | fzf -i -e )" -#strip tags and any trailing space before sending to wl-copy -echo -e "$selected" | sed -e 's/tags:.*$//;$d' | wl-copy -~~~ - -What happens here is the Python script gets called, parses the snippets file into chunks of text, and then that is passed to `fzf`. After experimenting with some `fzf` options I settled on case-insensitive, exact match (`-i -e`) searching as the most efficient means of finding what I want. - -Once I search for and find the snippet I want, that selected bit of text is stored in a variable called, creatively, `selected`. The next line prints that variable, passes it to `sed` to strip out the tags, along with any space after that, and then sends that snippet of text to the clipboard via wl-copy. - -I saved this file in a folder on my `PATH` (`~/bin`) and called it `fzsnip`. At this point I can run `fzsnip` in a terminal and everything works as I'd expect. As a bonus I have my snippets in a plain text file I can access to copy and paste snippets on my phone, tablet, and any other device where I can run [NextCloud](https://nextcloud.com/). - -That's cool, but on my laptop I don't want to have to switch to the terminal every time I need to access a snippet. Instead I invoke a small terminal window wherever I am. To do that, I set up a keybinding in my Sway config file like this: - -~~~.bash -bindsym $mod+s exec alacritty --class 'smsearch' --command bash -c 'fzsnip | xargs -r swaymsg -t command exec' -~~~ - -This is very similar to how I launch apps and search passwords, which I detailed in my post on [switching from i3 to Sway](/src/guide-to-switching-i3-to-sway). The basic idea is whatever virtual desktop I happen to be on, launch a new instance of [Alacritty](https://github.com/alacritty/alacritty), with the class `smsearch`. Assigning that class gives the new instance some styling I'll show below. The rest of the line fires off that shell script `fzsnip`. 
This allows me to hit `Alt+s` and get a small terminal window with a list of my snippets displayed. I search for the name of the snippet, hit return, the Alacritty window closes and the snippet is on my clipboard, ready to paste wherever I need it. - -This line in my Sway config file styles the window class `smsearch`: - -~~~.bash -for_window [app_id="^smsearch$"] floating enable, border none, resize set width 80 ppt height 60 ppt, move position 0 px 0 px -~~~ - -That puts the window in the upper left corner of the screen and makes it about 1/3 the width of my screen. You can adjust the width and height to suit your tastes. - -If you don't use Alacritty, adjust the command to use the terminal app you prefer. If you don't use Sway, you'll need to use whatever system-wide shortcut tool your window manager or desktop environment offers. Another possibility is using [Guake](https://github.com/Guake/guake), which might be able to do this for GNOME users, but I've never used it. - -###Conclusion - -I hope this gives anyone searching for a way to replace Autokey on Wayland some ideas. If you have any questions or run into problems, don't hesitate to drop a comment below. - -Is it as nice as Autokey? I actually like this far better now. I often had trouble remembering my Autokey shortcuts; now I can search instead. - -As I said above, if I were a better bash scripter I could get rid of the Python file and just use a bash loop. That would make it easy to wrap it in a neat package and distribute it, but as it is it has too many moving parts to make it more than some cut and paste code. - -####Shoulders Stood Upon - -- [Using `fzf` instead of `dmenu`](https://medium.com/njiuko/using-fzf-instead-of-dmenu-2780d184753f) -- This is the post that got me thinking about ways I could use tools I already use (`fzf`, Alacritty) to accomplish more tasks. 
diff --git a/src/published/technology.txt b/src/published/technology.txt deleted file mode 100644 index cb613cb..0000000 --- a/src/published/technology.txt +++ /dev/null @@ -1,56 +0,0 @@ -Sometimes people email me to ask how I make luxagraf. Here's how I do it: I write, take pictures and combine them into stories. - -I recognize that this is not particularly helpful. Or it is, I think, but it's not why people email me. They want to know about the tools I use. Which is fine. I guess. Consumerism! Yeah! Anyway, I decided to make a page I can just point people to, this one. There are no affiliate links and I'd really prefer it if you didn't buy any of this stuff because you don't need it. I don't need it. I could get by with less. I should get by with less. - -Still, for better or worse, here are the tools I use. - -### Notebook and Pen - -My primary "device" is my notebook. I don't have a fancy notebook. I do have several notebooks though. One is in my pocket at all times and is filled with illegible scribbles that I attempt to decipher later. The other is larger and it's my sort of captain's log, though I don't write in it with the kind of regularity captains do. Or that I imagine captains do. Then I have other notebooks for specific purposes, meditation journal, etc. - -I'm not all that picky about notebooks, if they have paper in them I'm happy enough, but I could devote thousands and thousands of words to pens. For what seems like forever I was religiously devoted to the Uniball Roller Stick Pen in micro point, which I used to swipe from my dad's desk drawer back in high school. It's a lovely pen, I was gratified to note it was the pen of choice at the lawyer's office where we finalized the sale of our house. And yes, I totally took one. - -Once I bought a fancy pen from Japan that takes Parker ink refills, and it's my pen of choice. I can't remember the brand or anything which sucks because I'd love to get another. 
- -When that's not handy I use Uniball Vision pens, which also fill my two primary requirements in a pen: 1) they write well, and 2) I can buy them almost anywhere for next to nothing. - -### Camera - -This is what everyone wants to know about. I use a Sony A7ii. It's a full frame mirrorless camera that happens to make it easy to use legacy film lenses. I bought it specifically because it's the only full frame digital camera available that lets me use the old lenses that I love. Without the old lenses I find the Sony's output to be a little digital for my tastes, though the RAW files from the A7ii have wonderful dynamic range, which was the other selling point for me. - -That said, it's not a cheap camera. You should not buy one. The Sony a6000 is very nearly as good and costs $500 ($400 on eBay). In fact, having tested dozens of cameras for Wired over the years I can say with some authority that the a6000 is the best value for money on the market, period, but doubly so if you want a cheap way to test out some older lenses. - -All of my lenses are old and manual focus, which I prefer to autofocus lenses. I like the fact that they're cheap too, but really the main appeal for me with old lenses was the far superior focusing rings. I grew up using all manual focus cameras. Autofocus was probably around by the time I picked up a camera, but I never had it. My father had (still has) a screw mount Pentax. I bought a Minolta with money from a high school job. Eventually I upgraded to a Nikon F3, which was my primary camera until 2004. While there are advantages to autofocus, and certainly modern lenses are much sharper in most cases, neither autofocus nor perfect edge-to-edge sharpness is significant for the type of photos I like to make. - -####Lenses - -One thing about shooting manual lenses is that there are a ton of cheap manual lenses out there. I have seen amazing photos produced with $10 lenses. Learning to manually focus a lens is like opening a door into a secret world.
A secret world where lenses are cheap. The net result of my foray into this world is that I have a ridiculous collection of lenses. And we live in a bus; lord knows what I'd have if we had more space. - -That said, about 90% of the time I have a very fast, relatively lightweight Canon FD 50 f/1.4 mounted. I love this lens. I love love love it. The other fifty I love love love is my Minolta 50 f/2, which is the slow one in the Minolta 50 family, but man, is it a great lens. I bought it for $20. - -At the wide end of the spectrum I have another Canon, the FD 20mm f/2.8. For portraits I use the Minolta MD 100 f/2 and an Olympus M Zuiko 100 f/2.8. I also have this crazy Russian fisheye thing I bought one night on eBay after I'd been drinking. It's hilariously bad at anything less than f/8, but it's useful for shooting in small spaces, like the inside of the bus. - -I also have, cough, a few other lenses that I don't use very often or that I use for a while and pass along via eBay. Right now I have a Minolta 58 f/1.4 that I really like, a Pentax 28 f/3.5 that doesn't do much for me (28mm just isn't how I see the world), and a Canon 35 f/1.8 that I like a lot but won't mount on any adapter I have. I need to get it serviced. - - -### Laptop - -My laptop is a Lenovo x250 I bought off eBay for $300. I upgraded the hard drives and put in an HD screen, which brought the total outlay to $550. That's really way too much to spend on a computer these days, but my excuse is that I make money using it. - -Why this particular laptop? It's small and the battery lasts quite a while (like 15 hrs when I'm writing, more like 12 when editing photos). It also has a removable battery and can be upgraded by the user. I packed in almost 3TB of disk storage, which is nice. It does make a high-pitched whining noise that drives me crazy whenever I'm in a quiet room with it, but since I mostly use it outdoors, sitting around our camps, this is rarely an issue. - -Still, like I said, I could get by with less.
I should get by with less. - -The laptop runs Linux because everything else sucks a lot more than Linux. Which isn't to say that I love Linux; it could use some work too. But it sucks a whole lot less than the rest. I run Arch Linux, which I have written about elsewhere. The main appeal of Arch for me is that once I set it up I never have to think about it again. Because I test software for a living I also have a partition that hosts a revolving door of other Linux distros that I use from time to time, but never when I want to get work done. When I want to get work done, I use Arch. - -Because I am hopelessly bored with technology, I stick mainly with simple, text-based applications. Almost everything I do is done inside a single terminal (urxvt) window running tmux, which gives me four tabs. I write in Vim. For email I use mutt. I read RSS feeds with newsbeuter and I listen to music via mpd. I also have a command line calculator and a locally-hosted dictionary that I use pretty regularly. - -I do use a few GUI apps: Tor for browsing the web, Darktable and GIMP for editing photos, Stellarium for learning more about the night sky, and LibreOffice Calc for spreadsheets. That's about it. - -### ithing/tablet/drone/wrist tracking device thingy - -Yeah I don't have any of those. I'm one of those people. I pay for everything in cash too. Fucking weirdo is what I am. I told you you didn't want to know how I make stuff. - -<hr /> - -So there you have it, my technology stack. I am of course always looking for ways to get by with less technology, but I think, after years of getting rid of stuff, I've reached something close to ideal.
diff --git a/src/qutebrowser-notes.txt deleted file mode 100644 index 584a47a..0000000 --- a/src/qutebrowser-notes.txt +++ /dev/null @@ -1,18 +0,0 @@ -handy commands: - :download - -## shortcuts - -xo - open url in background tab -go - edit current url -gO - edit current url and open result in new tab -gf - view source -;y - yank hinted url -;i - hint only images -;b - open hint in background tab -;d - download hinted url -PP - Open URL from selection in new tab -ctrl+a Increment no. in URL -ctrl+x Decrement no. in URL - -Solarized theme: https://bitbucket.org/kartikynwa/dotty2hotty/src/1a9ba9b80f07e1f63b740da5e6970dc5a97f1037/qutebrowser.py?at=master&fileviewer=file-view-default diff --git a/src/src-guide-to-switching-i3-to-sway.txt deleted file mode 100644 index 20cff23..0000000 --- a/src/src-guide-to-switching-i3-to-sway.txt +++ /dev/null @@ -1,161 +0,0 @@ -[*Updated April 2021: I added some solutions I've found to a few of the issues below. And yes, I continue to use Sway.*] - -I recently made the switch from the [i3 tiling window manager](https://i3wm.org/) to [Sway](https://swaywm.org/), a Wayland-based i3 clone. I still [run Arch Linux on my personal machine](/src/why-i-switched-arch-linux), so all of this is within the context of Arch. - -I made the switch for a variety of reasons. There's the practical: Sway/Wayland gives me much better battery life on my laptop. And the more philosophical: Sway's lead developer Drew Devault's take on code is similar to mine[^1] (e.g. [avoid traumatic changes](https://drewdevault.com/2019/11/26/Avoid-traumatic-changes.html) or [avoid dependencies](https://drewdevault.com//2020/02/06/Dependencies-and-maintainers.html)), and after a year of reading his blog, he's someone whose software I trust. - -I know some people would think this reason ridiculous, but it's important to me that the software I rely on be made by people I like and trust.
Software is made by humans, for humans. The humans are important. And yes, it goes the other way too. I'm not going to name names, but there is some theoretically good software out there that I refuse to use because I do not like or trust the people who make it. - -When I find great software made by people who seem trustworthy, I use it. So I switched to Sway and it's been a good experience. - -Sway and Wayland have been very stable in my use. I get about 20 percent more out of my laptop battery. That seems insane to me, but as someone who [lives almost entirely off solar power](/1969-dodge-travco-motorhome) it's a huge win I can't ignore. - -### Before You Begin - -I did not blindly switch to Sway. Or rather I did and that did not go well. I switched back after a few hours and started doing some serious searching, both the search engine variety and the broader, what am I really trying to do here, variety. - -The latter led me to change a few tools, replace some things, and try some new workflows. Not all of it was good. I could never get imv to do the things I can with feh, for instance, but mostly it was good. - -One thing I really wanted to do was avoid XWayland (which allows apps that need X11 to run under Wayland). Wherever I could I've opted for applications that run natively under Wayland. There's nothing wrong with XWayland; that was just a personal goal, for fun. - -Here are my notes on making the transition to Wayland, along with the applications I use most frequently. - -##### Terminal - -I do almost everything in the terminal. I write in Vim, email with mutt, read RSS feeds with newsboat, listen to music with mpd, and browse files with ranger. - -I tested quite a few Wayland-native terminals and I really like [Alacritty](https://github.com/alacritty/alacritty). Highly recommended. [Kitty](https://github.com/kovidgoyal/kitty) is another option to consider. - -<span class="strike">That said, I am sticking with urxvt for now.
There are two problems for me with Alacritty. First off, Vim doesn't play well with the Wayland clipboard in Alacritty. Second, Ranger will not show image previews in Alacritty.</span> - -*Update April 2021:* I have never really solved either of these issues, but I switched to Alacritty anyway. I use Neovim instead of Vim, which was a mostly transparent switch, and Neovim supports the Wayland clipboard. As for previews in Ranger... I forgot about those. They were nice. But I guess I don't miss them that much. - - -##### Launcher - -I've always used dmenu to launch apps and grab passwords from pass. It's simple and fast. Unfortunately dmenu is probably never going to run natively in Wayland. - -I tested rofi, wofi, and other potential replacements, but I did not like any of them. Somewhere in my search for a replacement launcher I ran across [this post](https://medium.com/njiuko/using-fzf-instead-of-dmenu-2780d184753f), which suggested just calling up a small terminal window and piping a list of applications to [fzf](https://github.com/junegunn/fzf), a blazing fast search tool. - -That's what I've done and it works great. I created a keybinding to launch a new instance of Alacritty with a class name that I use to resize the window. Then within that small Alacritty window I call `compgen` to get a list of executables, then sort it to eliminate duplicates, and pass the results to fzf. Here's the code in my Sway config file: - -~~~bash -bindsym $mod+Space exec alacritty --class 'launcher' --command bash -c 'compgen -c | sort -u | fzf | xargs -r swaymsg -t command exec' - -for_window [app_id="^launcher$"] floating enable, border none, resize set width 25 ppt height 20 ppt, move position 0 px 0 px -~~~ - -These lines together will open a small terminal window in the upper left corner of the screen with an fzf search interface. I type, for example, "dar" and Darktable comes up. I hit return, the terminal window closes, and Darktable launches.
It's as simple as dmenu and requires no extra applications (since I was already using fzf in Vim). - -If you don't want to go that route, Bemenu is a dmenu-like launcher that runs natively in Wayland. - -##### Browsers - -I mainly use [qutebrowser](https://qutebrowser.org/), supplemented by [Vivaldi](https://vivaldi.com/)[^2], because having split screen tabs is brilliant for research. I also use [Firefox Developer Edition](https://www.mozilla.org/en-US/firefox/developer/) for any web development work, because the Firefox dev tools are far superior to anything else. - -All three work great under Wayland. In the case of qutebrowser, though, you'll need to set a few shell variables to get it to start under Wayland; out of the box it launches with XWayland for some reason. Here's what I added to `.bashrc` to get it to work: - -~~~bash -export XDG_SESSION_TYPE=wayland -export GDK_BACKEND=wayland -~~~ - -One thing to bear in mind if you still have a lot of X11 apps: with this in your shell you'll need to reset `GDK_BACKEND` to x11 or those apps won't launch. Instead you'll get an error, `Gtk-WARNING **: cannot open display: :0`. To fix that error you'll need to reset `GDK_BACKEND=x11`, then launch your X11 app. - -There are several ways you can do this, but I prefer to override apps in `~/bin` (which is on my $PATH). So, for example, I have a file named `xkdenlive` in `~/bin` that looks like this: - -~~~bash -#! /bin/sh -GDK_BACKEND=x11 kdenlive -~~~ - -Note that for me this is easier, because the only apps I'm using that need X11 are Kdenlive and Slack. If you have a lot of X11 apps, you're probably better off making qutebrowser the special case by launching it like this: - -~~~bash -GDK_BACKEND=wayland qutebrowser -~~~ - -##### Clipboard - -I can't work without a clipboard manager; I keep the last 200 things I've copied, and I like to have things permanently stored as well. - -Clipman does a good job of saving clipboard history.
- -You need to have wl-clipboard installed since Clipman reads and writes to and from that. I also use wofi instead of the default dmenu for viewing and searching clipboard history. Here's how I set up clipman in my Sway config file: - -~~~bash -exec wl-paste -t text --watch clipman store --max-items=60 --histpath="~/.local/share/clipman.json" -bindsym $mod+h exec clipman pick --tool="wofi" --max-items=30 --histpath="~/.local/share/clipman.json" -~~~ - -Clipman does not, however, have a way to permanently store bits of text. That's fine. Permanently stored bits of frequently used text are really not all that closely related to clipboard items, and lumping them together in a single tool isn't a very Unix-y approach. Do one thing, do it well. - -For snippets I ended up bending [pet](https://github.com/knqyf263/pet), the "command line snippet manager," a little and combining it with the small launcher-style window idea above. So I store snippets in pet, mostly just `printf "my string of text"`, call up an Alacritty window, search, and hit return to inject the pet snippet into the clipboard. Then I paste it where I need it. - -##### Volume Controls - -Sway handles volume controls with pactl. Drop this in your Sway config file and you should be good: - -~~~bash -bindsym XF86AudioRaiseVolume exec pactl set-sink-volume @DEFAULT_SINK@ +5% -bindsym XF86AudioLowerVolume exec pactl set-sink-volume @DEFAULT_SINK@ -5% -bindsym XF86AudioMute exec pactl set-sink-mute @DEFAULT_SINK@ toggle -bindsym XF86AudioMicMute exec pactl set-source-mute @DEFAULT_SOURCE@ toggle -~~~ - -##### Brightness - -I like [light](https://github.com/haikarainen/light) for brightness.
Once it's installed, these lines from my Sway config file assign it to my brightness keys: - -~~~bash -bindsym --locked XF86MonBrightnessUp exec --no-startup-id light -A 10 -bindsym --locked XF86MonBrightnessDown exec --no-startup-id light -U 10 -~~~ - -### Quirks, Annoyances And Things I Haven't Fixed - -There have been surprisingly few of these, save the Vim and Ranger issues mentioned above. - -<span class="strike">I haven't found a working replacement for xcape. The only thing I used xcape for was to make my Caps Lock key dual-function: press generates Esc, hold generates Control. So far I have not found a way to do this in Wayland. There is ostensibly [caps2esc](https://gitlab.com/interception/linux/plugins/caps2esc), but it's poorly documented and all I've been able to reliably do with it is crash Wayland.</span> - -*Update April 2021*: I managed to get caps2esc working. First you need to install it; for Arch that's something like: - -~~~bash -yay -S interception-caps2esc -~~~ - -Once it's installed you need to create the config file. I keep mine at `/etc/interception/udevmon.d/caps2esc.yaml`. Open that up and paste in these lines: - -~~~yaml -- JOB: "intercept -g $DEVNODE | caps2esc | uinput -d $DEVNODE" - DEVICE: - EVENTS: - EV_KEY: [KEY_CAPSLOCK, KEY_ESC] -~~~ - -Then you need to start and enable the `udevmon` service unit, which is what runs the caps2esc code: - -~~~bash -sudo systemctl start udevmon -sudo systemctl enable udevmon -~~~ - -The last thing to do is restart. Once you've rebooted you should be able to hold down Caps Lock and have it behave like control, but a quick press will give you escape instead. This is incredibly useful if you're a Vim user. - -The only other problem I've run into is the limited range of screen recording options -- there's wf-recorder and that's about it. It works well enough though for what I do. - -I've been using Sway exclusively for a year and a half now and I have no reason or desire to ever go back to anything else.
The rest of my family isn't fond of the tiling aspect of Sway, so I do still run a couple of laptops with Openbox. I'd love to see a Wayland Openbox clone that's usable. I've played with [labwc](https://github.com/johanmalm/labwc), which is promising, but lacks a tint2-style launcher, which is really what I need (i.e., a system tray with launcher buttons, which Waybar does not have). Anyway, I am keeping an eye on labwc because it looks like a good project. - -That's how I did it. But I am just one person. If you run into snags, feel free to drop a comment below and I'll see if I can help. - -### Helpful pages: - -- **[Sway Wiki](https://github.com/swaywm/sway/wiki)**: A good overview of Sway, config examples (how to replicate things from i3), and application replacement tips for i3 users (like this fork of [redshift](https://github.com/minus7/redshift/tree/wayland) with support for Wayland). -- **[Arch Wiki Sway Page](https://wiki.archlinux.org/index.php/Sway)**: Another good Sway resource with solutions to a lot of common stuff: set wallpaper, take screenshots, HiDPI, etc. -- **[Sway Reddit](https://old.reddit.com/r/swaywm/)**: There's some useful info here, worth searching if you run into issues. Also quite a few good tips and tricks from fellow Sway users with more experience. -- **[Drew Devault's Blog](https://drewdevault.com/)**: He doesn't always write about Sway, but he does give updates on what he's working on, which sometimes has details on Sway updates. - - -[^1]: That's not to imply there's anything wrong with the i3 developers. - -[^2]: Vivaldi would be another good example of me trusting a developer. I've been interviewing Jon von Tetzchner for many years, all the way back to when he was at Opera. I don't always see eye to eye with him (I wish Vivaldi were open source) but I trust him, so I use Vivaldi. It's the only software I use that's not open source (not including work, which requires quite a few closed source crap apps).
diff --git a/src/src-ranger.txt deleted file mode 100644 index f66a707..0000000 --- a/src/src-ranger.txt +++ /dev/null @@ -1,63 +0,0 @@ -[Ranger](http://nongnu.org/ranger/) is a terminal-based file browser with Vim-style keybindings. It uses ncurses and can hook into all sorts of other command line apps to create an incredibly powerful file manager. - -If you prefer a graphical experience, more power to you. I'm lazy. Since I'm already using the terminal for 90 percent of what I do, it makes sense not to leave it just because I want to browse files. - -The keyword here for me is "browse." I do lots of things to files without using Ranger. Moving, copying, creating, things like that I tend to do directly with `cp`, `mv`, `touch`, `mkdir` and so on. But sometimes you want to *browse* files, and in those cases Ranger is the best option I've used. - -That said, Ranger is something of a labyrinth of commands and keeping track of them all can be overwhelming. If I had a dollar for every time I've searched "show hidden files in Ranger" I could buy you a couple beers (the answer, fellow searchers, is `zh`). - -I'm going to assume you're familiar with the basics of movement in Ranger like `h`, `j`, `k`, `l`, `gg`, and `G`. Likewise that you're comfortable with `yy`, `dd`, `pp`, and other copy, cut, and paste commands. If you're not, if you're brand new to Ranger, check out [the official documentation](https://github.com/ranger/ranger/wiki/Official-user-guide), which has a pretty good overview of how to do all the basic stuff you'll want to do with a file browser. - -Here are a few less obvious shortcuts I use all the time.
Despite some overlap with Vim, I do not find these particularly intuitive, and had a difficult time remembering them at first: - -- `zh`: toggle hidden files -- `gh`: go home (`cd ~/`) -- `oc`: order by create date (newest at top) -- `7j`: jump down seven lines (any number followed by `j` or `k` will jump that many lines) -- `7G`: jump to line 7 (any number followed by `G` will jump to that line) -- `.d`: show only directories -- `.f`: show only files -- `.c`: clear any filters (such as either of the previous two commands) - -Those are handy, but if you really want to speed up Ranger and bend it to the way you work, the config file is your friend. What follows are a few things I've done to tweak Ranger's config file to make my life easier. - -###Ranger Power User Recommendations - -Enabling line numbers was a revelation for me. Open `~/.config/ranger/rc.conf`, search for `set line_numbers`, and change the value to either `absolute` or `relative`. The `absolute` option numbers lines from the top no matter what; the `relative` option sets numbers relative to the cursor. I can't stand relative, but absolute works great for me, YMMV. - -Another big leap forward in my Ranger productivity came when I discovered local folder sorting options. As noted above, typing `oc` changes the sort order within a folder to sort by date created[^1]. While typing `oc` is pretty easy, there are some folders that I *always* want sorted by date modified. That's easily done with Ranger's `setlocal` config option. - -Here are a couple of lines from my `rc.conf` file as an example: - -~~~bash -setlocal path=~/notes sort mtime -setlocal path=~/notes/reading sort mtime -~~~ - -This means that every time I open `~/notes` or `~/notes/reading` the files I've worked with most recently are at the top, right where I can find them (and note that you can also use `sort_reverse` instead of `sort`).
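The same `setlocal` trick works for any folder and any of Ranger's sort keys. A couple of hypothetical variations (these paths are examples, not lines from my actual config):

~~~bash
# Hypothetical examples; point the paths at your own folders.
setlocal path=~/downloads sort ctime    # by date created, like `oc`
setlocal path=~/projects sort size      # by file size
~~~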
- -Having my most recent notes at the top of the pane is great, but what makes it even more useful is having line-wrapped file previews so I don't even need to open the file to read it. To get that I currently use the latest Git version of Ranger, which I installed via [Arch Linux's AUR](https://aur.archlinux.org/packages/ranger-git/). - -This feature, which is invaluable to me since one of my common use cases for Ranger is quickly scanning a bunch of text files, has been [merged to master](https://github.com/ranger/ranger/pull/1322), but not released yet. If you don't [use Arch Linux](/src/why-i-switched-arch-linux) you can always build from source, or you can wait for the next release, which should include an option to line wrap your previews. - -###Bookmarks - -Part of what makes Ranger incredibly fast is bookmarks. With two keystrokes I can jump between folders, move/copy files, and so on. - -To set a bookmark, navigate to the directory, then hit `m` and whatever letter you want to serve as the bookmark. Once you've bookmarked it, type ``<letter>` to jump straight to that directory. I try to use Vim-like mnemonics for my bookmarks, e.g. ``d` takes me to documents, ``n` takes me to `~/notes`, ``l` takes me to the dev folder for this site, and so on. As with the other commands, typing just ``` will bring up a list of your bookmarks. - -###Conclusion - -Ranger is incredibly powerful and almost infinitely customizable. In fact I don't think I really appreciated how customizable it was until I wrote this and dug a little deeper into all the ways you can map shell scripts to one or two character shortcuts. It can end up being a lot to keep track of though. I suggest learning maybe one or two new shortcuts a week. When you no longer have to think about them, move on to the next couple. - -Or you can do what I do: wait until there's something you want to do but don't know how, figure out how to do it, then write it down so you remember it.
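To give a flavor of the shell-script mapping I mentioned, here are a couple of hypothetical `map` lines for `rc.conf`. The shortcuts and commands are invented for illustration; `%s` expands to the currently selected files, and `shell -w` waits for a keypress so you can read the output:

~~~bash
# Hypothetical mappings; pick your own keys and commands.
map bk shell cp -r %s ~/backups/
map wl shell -w wc -l %s
~~~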
- -####Shoulders Stood Upon - -* [Dquinton's Ranger setup details](http://dquinton.github.io/debian-install/config/ranger.html) - I have no idea who this person is, but their Ranger setup and detailed notes were hugely helpful. -* [Ranger documentation](https://ranger.github.io/ranger.1.html) - The docs have a pretty good overview of the options available, though sometimes it's challenging to translate that into real world use cases. -* [Arch Wiki Ranger page](https://wiki.archlinux.org/index.php/Ranger) - Where would we be without the Arch Wiki? - - - -[^1]: In fact, just type `o` and you'll get a list of other sorting options (and if you know what `normal` means, drop me a comment below; I'm still trying to figure out what that means). diff --git a/src/src-solving-common-nextcloud-problems.txt deleted file mode 100644 index b32a629..0000000 --- a/src/src-solving-common-nextcloud-problems.txt +++ /dev/null @@ -1,92 +0,0 @@ -I love [NextCloud](https://nextcloud.com). Nextcloud allows me to have all the convenience of Dropbox, but hosted by me, controlled by me, and customized to suit my needs. I mainly use the file syncing, calendar, and contacts features, but Nextcloud can do a crazy amount of things. - -The problem with NextCloud, and maybe you could argue that this is the price you pay for the freedom and control, is that I find it requires a bit of maintenance to keep it running smoothly. Nextcloud does some decidedly odd things from time to time, and knowing how to deal with them can save you some disk space and maybe avoid syncing headaches. - -I should note that while I call these problems, I **have never lost data** using Nextcloud. These are really more annoyances, and some ways to prevent them, than *problems*.
- -### How to Get Rid of Huge Thumbnails in Nextcloud - -If Nextcloud is taking up more disk space than you think it should, or your Nextcloud storage space is just running low, the first thing to check is the image thumbnails directory. - -At one point I poked around in the Nextcloud `data` directory and found 11 gigabytes' worth of image previews for only 6 gigabytes' worth of actual images stored. That is crazy. That should never happen. - -Nextcloud's image thumbnail defaults err on the side of "make it look good in the browser," whereas I prefer to err on the side of keeping it really small. - -I did some research and came up with a few solutions. First, it looks like my runaway 11-gigabyte problem might have been due to a bug in older versions of Nextcloud. Ideally I will not hit that issue again. But I don't admin servers with hope and optimism, so I figured out how to tell Nextcloud to generate smaller image previews. I almost never look at the images within the web UI, so I really don't care about the previews at all. I made them much, much smaller than the defaults. Here are the values I use: - -~~~bash -occ config:app:set previewgenerator squareSizes --value="32 256" -occ config:app:set previewgenerator widthSizes --value="256 384" -occ config:app:set previewgenerator heightSizes --value="256" -occ config:system:set preview_max_x --value 500 -occ config:system:set preview_max_y --value 500 -occ config:system:set jpeg_quality --value 60 -occ config:app:set preview jpeg_quality --value="60" -~~~ - -Just ssh into your Nextcloud server and run all these commands. If you followed the basic Nextcloud install instructions you'll want to run these as your web server user.
For me, with NextCloud running on Debian 10, the full command looks like this: - -~~~bash -sudo -u www-data php /var/www/nextcloud/occ config:app:set previewgenerator squareSizes --value="32 256" -sudo -u www-data php /var/www/nextcloud/occ config:app:set previewgenerator widthSizes --value="256 384" -# and so on, running all the commands listed above -~~~ - -This assumes you installed Nextcloud into the directory `/var/www/nextcloud`; if you installed it somewhere else, adjust the path to the Nextcloud command line tool `occ`. - -That will stop Nextcloud from generating huge preview files. So far so good. I deleted the existing previews and reclaimed 11 gigabytes. Sweet. You can pre-generate previews, which will make the web UI faster if you browse images in it. I do not, so I didn't generate any previews ahead of time. - -### How to Solve `File is Locked` Issues in Nextcloud - -No matter what I do, I always end up with locked file syncing issues. Researching this led me to try using Redis to cache things, but that didn't help. I don't know why this happens. I blame PHP. When in doubt, blame PHP. - -Thankfully it doesn't happen very often, but every six months or so I'll see an error, then two, then they start piling up. Here's how to fix it. - -First, put Nextcloud in maintenance mode (again, assuming Debian 10, with Nextcloud in the `/var/www/nextcloud` directory): - -~~~bash -sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on -~~~ - -Now we're going directly into the database. For me that's PostgreSQL. If you use MySQL or MariaDB, you may need to adjust the syntax a little. - -~~~bash -psql -U yournextclouddbuser -hlocalhost -d yournextclouddbname -password: -nextclouddbname=> DELETE FROM oc_file_locks WHERE True; -~~~ - -That should get rid of all the locked file problems. For a while anyway.
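Since the fix is always the same three steps, ending with maintenance mode back off, you could wrap the whole thing in a little script. This is just a sketch under the same assumptions as above (Nextcloud in `/var/www/nextcloud`, PostgreSQL, placeholder database user and name), not something I promise is battle-tested:

```bash
#!/usr/bin/env bash
# Sketch: clear Nextcloud's stale file locks in one shot.
# Assumes Nextcloud in /var/www/nextcloud and a PostgreSQL database.
# The database user and name below are placeholders.
occ="sudo -u www-data php /var/www/nextcloud/occ"

clear_locks() {
    $occ maintenance:mode --on
    psql -U yournextclouddbuser -h localhost -d yournextclouddbname \
        -c 'DELETE FROM oc_file_locks WHERE True;'
    $occ maintenance:mode --off
}

# Require an explicit flag so a stray run does nothing.
if [ "${1:-}" = "--run" ]; then
    clear_locks
fi
```

The explicit `--run` flag is there so you can't accidentally put a live server into maintenance mode by tab-completing the wrong script.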
- -Don't forget to turn maintenance mode off: - -~~~bash -sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off -~~~ - -### Force a File Re-Scan - -If you frequently add and remove folders from Nextcloud, you may sometimes run into issues. I usually add a folder at the start of a new project, and then delete it when the project is finished. Mostly this just works, even with shared folders on the rare occasions I've used them, but sometimes Nextcloud won't delete a folder. I have no idea why. It just throws an unhelpful error in the web admin and refuses to delete the folder from the server. - -I end up manually deleting it on the server using: `rm -rf path/to/storage/folder`. Nextcloud, however, doesn't always seem to notice that the folder is gone, and still shows it in the web and sync client interfaces. The solution is to force Nextcloud to rescan all its files with these commands: - -~~~bash -sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on -sudo -u www-data php /var/www/nextcloud/occ files:scan --path="yournextcloudusername/files/NameOfYourExternalStorage" -sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off -~~~ - -Beware that on large data directories this can take some time. It takes about 30 seconds to scan my roughly 30GB of files. - -### Mostly Though, Nextcloud is Awesome - -Those are three annoyances I've hit with Nextcloud over the years and the little tricks I've used to solve them. Lest anyone think I am complaining, I am not. Not really anyway. The image thumbnail thing is pretty egregious for a piece of software that aims to be enterprise grade, but mostly Nextcloud is pretty awesome. - -I rely on Nextcloud for file syncing, calendar and contact hosting, and keeping my notes synced across devices. Aside from these three things, I have never had a problem.
- -####Shoulders Stood Upon - -* [Nextcloud's documentation](https://docs.nextcloud.com) isn't the best, but can help get you pointed in the right direction. -* I tried a few different solutions to the thumbnail problem; especially helpful was this post on [Understanding and Improving Nextcloud Previews](https://ownyourbits.com/2019/06/29/understanding-and-improving-nextcloud-previews/) by nachoparker. -* The [file lock solution](https://help.nextcloud.com/t/file-is-locked-how-to-unlock/1883) comes from the Nextcloud forums. -* The solution to scanning external storages comes from the [Nextcloud forums](https://help.nextcloud.com/t/automate-occ-filescan/35282/4). diff --git a/src/src-why-i-built-my-own-mailing-list-software.txt deleted file mode 100644 index b9877cf..0000000 --- a/src/src-why-i-built-my-own-mailing-list-software.txt +++ /dev/null @@ -1,52 +0,0 @@ -This is not a tutorial. If you don't already know how to write the code you need to run a mailing list, you probably shouldn't try to do it yourself. Still, I wanted to outline the reasons I built my own mailing list software in 2020, when there are dozens of commercial and open source projects that I could have used. - -At the core of my otherwise questionable decision is the notion that we ought to completely understand the core infrastructures in our lives. Why? Because it adds value and meaning to your life in the form of understanding. And that understanding doesn't stop with the thing you understand, either; it becomes part of you, and you will find other places where it helps you. - -It's also just not that hard to do things yourself. It makes maintaining the system easier, and it often saves time (or money) in the long term. - -The only way to really understand a thing is to either build it yourself from scratch or completely disassemble it and put it back together. - -This is true for software as well as the rest of the world.
I ripped all the electrical, propane, plumbing, and engine systems out of my home ([a 1969 RV](/1969-dodge-travco-motorhome)) because I needed to know how every single piece works, and how they all work together. - -I understand those systems now because I built them myself (with expert help when needed), and that makes maintaining them much easier. Otherwise I would always be dependent on someone else to keep my home running, and that's no way to live. - -The same is true with software. If the software you're considering is a core part of your personal or business infrastructure, you need to understand every single part of it and how all those parts fit together. - -The question is, should you deconstruct an existing project or write your own from scratch? The answer depends on the situation; the right choice won't always be the same in every case. I do a mix of both and I'm sure most other people do too. There's no one right answer, which means you have to think things through in detail ahead of time. - -When I decided I wanted to [start a mailing list](/jrnl/2020/11/invitation), I looked around at the software that was available and very quickly realized that my goals were different from those of most mailing list software. That's when you should write your own. - -The available commercial software did not respect users' privacy and did not allow me any control. There are some services that do provide a modicum of privacy for your subscribers, but you're going to be working against the software to enable it. - -*If you know of a dead simple commercial mailing list software that's built with user privacy in mind, please post a link in the comments; I'd love to have somewhere to point people.* - -I also wanted to be in complete control of the data. I host my own publishing systems. I consider myself a writer first, but publisher is a close second.
What sort of publisher doesn't control their own publishing system?[^1] What makes email such a wonderful distributed publishing system is that no one owns the protocols that dictate how it works. That's great. I don't want to control the delivery mechanism, just the product at either end. - -Email is more or less the inverse of the web. You send a single copy to many readers, rather than many readers coming to a single copy as with a web page. The point is, there's no reason I can't create and host the original email here and send out the copies myself. The hard part -- creating the protocols and low-level tools that power email -- was taken care of decades ago. - -With that goal in mind I started looking at open source solutions. I use [Django](https://www.djangoproject.com) to publish what you're reading here, so I looked at some Django-based mailing list software. The two I considered most seriously were [Django Newsletter](https://django-newsletter.readthedocs.io/en/latest/) and [Emencia Django Newsletter](https://github.com/emencia/emencia-django-newsletter). I found a few other smaller projects as well, but those seem to be the big two in what's left of the Django universe. - -Those two, and some others, influenced what I ended up writing in various ways, but none of them were quite what I wanted out of the box. Most of them still used some kind of tracking, whether a pixel embedded in the email or wrapping links with individual identifiers. I didn't want either of those things, and stripping them out while staying up-to-date with upstream changes would have been cumbersome. So, DIY then. - -But running a mail server is... difficult, risky, and probably going to keep you up at night. I tried it, briefly.
- -One of the big problems with email is that, despite email being an open protocol, Google and other big corps are able to gain some control by using spam as a reason to tightly control who gets to send email.[^2] That means if I just spin up a VPS at Vultr and try to send some emails with Postfix, they're probably all going to end up in, best case, your Spam folder, but more likely they'd never be delivered. - -So while I wrote the publishing tools myself, host the newsletter archive myself, and designed everything about it myself, I handed off the sending to [Amazon's SES](https://aws.amazon.com/ses/), which has been around long enough, and is used by enough big names, that mail sent through it isn't automatically deleted. It may still end up in some Spam folders, but for the most part in my early testing (thank you to all my friends who helped out with that) that hasn't been an issue. - -In the end what I have is a fairly robust, loosely-joined system where I have control over the key elements, and it's easy to swap out the sending mechanism down the road should I have problems, or just find something better (preferably something not owned by Amazon). - -###Was it Worth It? - -So far, absolutely not. But I knew that when I started. - -I could have signed up for Mailchimp, picked some pre-made template, and spent the last year sending out newsletters to subscribers, and who knows, maybe I'd have tons of those by now. But that's okay; that was never the goal. - -I am and always have been playing a very long game when it comes to publishing. I am building a thing that I want to last the rest of my life and beyond if I can manage it. - -I am patient. I am not looking for a ton of readers, I am looking for the right readers. The sort of people who are in short supply these days, the sort of people who end up on a piece like this and actually read the whole thing. The people for whom signing up for Mailchimp would be too easy, too boring.
- -I am looking for those who want some adventure in everything they do, the DIYers, the curious, the explorers, the misfits. There's more of us than most of us realize. If you're interested, feel free to [join our club](/newsletter/friends). - -[^1]: Sadly, these days almost no publisher retains any control over their systems. They're all beholden to Google AMP, Facebook News, and whatever the flavor of the year happens to be. A few of them are slowly coming around to the idea that it might be better to build their own audiences, which somehow passes for revolutionary in publishing today. But I digress. -[^2]: Not to go too conspiracy theory here, but I suspect that Google and its ilk generate a fair bit of the spam themselves, and do nothing to prevent the rest precisely because it allows for this control. Which is not to say spam isn't a problem, just that it's a very *convenient* problem. diff --git a/src/switching-to-lxc-lxd-for-django-dev-work-cuts.txt b/src/switching-to-lxc-lxd-for-django-dev-work-cuts.txt deleted file mode 100644 index d146c3f..0000000 --- a/src/switching-to-lxc-lxd-for-django-dev-work-cuts.txt +++ /dev/null @@ -1,201 +0,0 @@ -***Updated July 2022**: I've run into some issues with cgroups and lxc on Arch and added some notes below under the [special note to Arch users](#arch)* - -I've used Vagrant to manage my local development environment for quite some time. The developers I used to work with used it and, while I have no particular love for it, it works well enough. Eventually I got comfortable enough with Vagrant that I started using it in my own projects. I even wrote about [setting up a custom Debian 9 Vagrant box](/src/create-custom-debian-9-vagrant-box) to mirror the server running this site. - -The problem with Vagrant is that I have to run a huge, memory-hungry virtual machine when all I really want to do is run Django's built-in dev server. - -My laptop only has 8GB of RAM.
My browser is usually taking around 2GB, which means if I start two Vagrant machines, I'm pretty much maxed out. Django's dev server is also painfully slow to reload when anything changes. - -Recently I was talking with one of Canonical's [MAAS](https://maas.io/) developers and the topic of containers came up. When I mentioned I really didn't like Docker, but hadn't tried anything else, he told me I really needed to try LXD. Later that day I began reading through the [LinuxContainers](https://linuxcontainers.org/) site and tinkering with LXD. Now, a few days later, there's not a Vagrant machine left on my laptop. - -Since it's just me, I don't care that LXC only runs on Linux. LXC/LXD is blazing fast, lightweight, and dead simple. To quote Canonical's [Michael Iatrou](https://blog.ubuntu.com/2018/01/26/lxd-5-easy-pieces), LXC "liberates your laptop from the tyranny of heavyweight virtualization and simplifies experimentation." - -Here's how I'm using LXD to manage containers for Django development on Arch Linux. I've also included instructions and commands for Ubuntu since I set it up there as well. - -### What's the difference between LXC, LXD and `lxc` - -I wrote this guide in part because I've been hearing about LXC for ages, but it seemed unapproachable, overwhelming, too enterprisey you might say. It's really not though; in fact I found it easier to understand than Vagrant or Docker. - -So what is an LXC container, what's LXD, and how is either different from, say, a VM or, for that matter, Docker? - -* LXC - low-level tools and a library to create and manage containers; powerful, but complicated. -* LXD - a daemon that provides a REST API to drive LXC containers; much more user-friendly. -* `lxc` - the command line client for LXD. - -In LXC parlance a container is essentially a virtual machine; if you want to get pedantic, see Stéphane Graber's post on the [various components that make up LXD](https://stgraber.org/2016/03/11/lxd-2-0-introduction-to-lxd-112/).
For the most part though, interacting with an LXC container is like interacting with a VM. You say ssh, LXD says socket, potato, potahto. Mostly. - -An LXC container is not a container in the same sense that Docker talks about containers. Think of it more as a VM that only uses the resources it needs to do whatever it's doing. Running this site in an LXC container uses very little RAM. Running it in Vagrant uses 2GB of RAM because that's what I allocated to the VM -- that's what it uses even if it doesn't need it. LXC is much smarter than that. - -Now what about LXD? LXC is the low-level tool; you don't really need to go there. Instead you interact with your LXC containers via the LXD API. It uses YAML config files and a command line tool, `lxc`. - -That's the basic stack; let's install it. - -### Install LXD - -On Arch I used the version of [LXD in the AUR](https://aur.archlinux.org/packages/lxd/). Ubuntu users should go with the Snap package. The other thing you'll want is your distro's Btrfs or ZFS tools. - -Part of LXC's magic relies on either Btrfs or ZFS to read a virtual disk not as a file, the way Virtualbox and others do, but as a block device. Both file systems also offer copy-on-write cloning and snapshot features, which makes it simple and fast to spin up new containers. It takes about 6 seconds to install and boot a complete and fully functional LXC container on my laptop, and most of that time is downloading the image file from the remote server. It takes about 3 seconds to clone that fully provisioned base container into a new container. - -In the end I set up my Arch machine using Btrfs and my Ubuntu machine using ZFS to see if I could spot any difference (so far, that would be no; the only difference I've run across in my research is that Btrfs can run LXC containers inside LXC containers. LXC turtles all the way down).
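If you're not sure which filesystem a machine is already using, you can check before picking a storage backend. This is a generic check, nothing LXD-specific; it assumes util-linux's `findmnt` is installed (it is on virtually every modern distro):

~~~~console
findmnt -no FSTYPE /
~~~~

If that prints `btrfs` or `zfs` you can hand LXD the existing filesystem; anything else and the loop-device default offered during `lxd init` is the path of least resistance.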
- -Assuming you have Snap packages set up already, Debian and Ubuntu users can get everything they need to install and run LXD with these commands: - -~~~~console -apt install zfsutils-linux -~~~~ - -And then install the snap version of lxd with: - -~~~~console -snap install lxd -~~~~ - -Once that's done we need to initialize LXD. I went with the defaults for everything. I've printed out the entire init command output so you can see what will happen: - -~~~~console -sudo lxd init -Would you like to use LXD clustering? (yes/no) [default=no]: -Do you want to configure a new storage pool? (yes/no) [default=yes]: -Name of the new storage pool [default=default]: -Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: -Create a new BTRFS pool? (yes/no) [default=yes]: -Would you like to use an existing block device? (yes/no) [default=no]: -Size in GB of the new loop device (1GB minimum) [default=15GB]: -Would you like to connect to a MAAS server? (yes/no) [default=no]: -Would you like to create a new local network bridge? (yes/no) [default=yes]: -What should the new bridge be called? [default=lxdbr0]: -What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: -What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: -Would you like LXD to be available over the network? (yes/no) [default=no]: -Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: -Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes -~~~~ - -LXD will then spit out the contents of the profile you just created. It's a YAML file and you can edit it as you see fit after the fact. You can also create more than one profile if you like.
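Since I answered yes to the preseed question, here's roughly the shape of what gets printed. Treat this as an illustrative sketch rather than gospel -- the exact keys vary by LXD version and by your answers (the `btrfs` driver and `15GB` size here just mirror the defaults above):

~~~~yaml
config: {}
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
storage_pools:
- name: default
  driver: btrfs
  config:
    size: 15GB
profiles:
- name: default
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
~~~~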
To see all installed profiles use: - -~~~~console -lxc profile list -~~~~ - -To view the contents of a profile use: - -~~~~console -lxc profile show <profilename> -~~~~ - -To edit a profile use: - -~~~~console -lxc profile edit <profilename> -~~~~ - -So far I haven't needed to edit a profile by hand. I've also been happy with all the defaults, although, when I do this again, I will probably enlarge the storage pool, and maybe partition off some dedicated disk space for it. But for now I'm just trying to figure things out, so defaults it is. - -The last step in our setup is to add our user to the lxd group. By default LXD runs as the lxd group, so to interact with containers we'll need to make our user part of that group. - -~~~~console -sudo usermod -a -G lxd yourusername -~~~~ - -#####Special note for Arch users. {:#arch } - -To run unprivileged containers as your own user, you'll need to jump through a couple extra hoops. As usual, the [Arch User Wiki](https://wiki.archlinux.org/index.php/Linux_Containers#Enable_support_to_run_unprivileged_containers_(optional)) has you covered. Read through and follow those instructions, then reboot, and everything below should work as you'd expect. - -### Create Your First LXC Container - -Let's create our first container. This website runs on a Debian VM currently hosted on Vultr.com, so I'm going to spin up a Debian container to mirror this environment for local development and testing. - -To create a new LXC container we use the `launch` command of the `lxc` tool. - -There are four sources for LXC containers: local (meaning a container base you've downloaded), images (which come from [https://images.linuxcontainers.org/](https://images.linuxcontainers.org/)), ubuntu (release versions of Ubuntu), and ubuntu-daily (daily images). The images on linuxcontainers are unofficial, but the Debian image I used worked perfectly. There's also Alpine, Arch, CentOS, Fedora, openSUSE, Oracle, Plamo, Sabayon and lots of Ubuntu images.
Pretty much every architecture you could imagine is in there too. - -I created a Debian 9 Stretch container with the amd64 image. To create an LXC container from one of the remote images the basic syntax is `lxc launch images:distroname/version/architecture containername`. For example: - -~~~~console -lxc launch images:debian/stretch/amd64 debian-base -Creating debian-base -Starting debian-base -~~~~ - -That will grab the amd64 image of Debian 9 Stretch and create a container out of it and then launch it. Now if we look at the list of installed containers we should see something like this: - -~~~~console -lxc list -+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+ -| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | -+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+ -| debian-base | RUNNING | 10.171.188.236 (eth0) | fd42:e406:d1eb:e790:216:3eff:fe9f:ad9b (eth0) | PERSISTENT | | -+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+ -~~~~ - -Now what? This is what I love about LXC, we can interact with our container pretty much the same way we'd interact with a VM. Let's connect to the root shell: - -~~~~console -lxc exec debian-base -- /bin/bash -~~~~ - -Look at your prompt and you'll notice it says `root@nameofcontainer`. Now you can install everything you need on your container. For me, setting up a Django dev environment, that means Postgres, Python, Virtualenv, and, for this site, all the Geodjango requirements (Postgis, GDAL, etc), along with a few other odds and ends. - -You don't have to do it from inside the container though. Part of LXD's charm is to be able to run commands without logging into anything. 
Instead you can do this: - -~~~~console -lxc exec debian-base -- apt update -lxc exec debian-base -- apt install postgresql postgis virtualenv -~~~~ - -LXD will output the results of your command as if you were SSHed into a VM. Not being one for typing, I created a bash alias that looks like this: `alias luxdev='lxc exec debian-base -- '` so that all I need to type is `luxdev <command>`. - -What I haven't figured out is how to chain commands; this does not work: - -~~~~console -lxc exec debian-base -- su - lxf && cd site && source venv/bin/activate && ./manage.py runserver 0.0.0.0:8000 -~~~~ - -According to [a bug report](https://github.com/lxc/lxd/issues/2057), it should work in quotes, but it doesn't for me. Something must have changed since then, or I'm doing something wrong. Handing the whole chain to a single shell inside the container -- something like `lxc exec debian-base -- sh -c 'cd site && ./manage.py runserver'` -- may be worth a try, since otherwise your host shell swallows everything after the first `&&`. - -The next thing I wanted to do was mount a directory on my host machine in the LXC instance. To do that you'll need to edit `/etc/subuid` and `/etc/subgid` to add your user id. Use the `id` command to get your user and group id (it's probably 1000 but if not, adjust the commands below). Once you have your user id, add it to the files with this one-liner I got from the [Ubuntu blog](https://blog.ubuntu.com/2016/12/08/mounting-your-home-directory-in-lxd): - -~~~~console -echo 'root:1000:1' | sudo tee -a /etc/subuid /etc/subgid -~~~~ - -Then you need to configure your LXC instance to use the same uid: - -~~~~console -lxc config set debian-base raw.idmap 'both 1000 1000' -~~~~ - -The last step is to add a device to your config file so LXC will mount it. You'll need to stop and start the container for the changes to take effect. - -~~~~console -lxc config device add debian-base sitedir disk source=/path/to/your/directory path=/path/to/where/you/want/folder/in/lxc -lxc stop debian-base -lxc start debian-base -~~~~ - -That replicates my setup in Vagrant, but we've really just scratched the surface of what you can do with LXD. For example you'll notice I named the initial container "debian-base".
That's because this is the base image (fully set up for Django dev) which I clone whenever I start a new project. To clone a container, first take a snapshot of your base container, then copy that snapshot to create a new container: - -~~~~console -lxc snapshot debian-base debian-base-configured -lxc copy debian-base/debian-base-configured mycontainer -~~~~ - -Now you've got a new container named mycontainer. If you'd like to tweak anything, for example mount a different folder specific to this new project you're starting, you can edit the config file like this: - -~~~~console -lxc config edit mycontainer -~~~~ - -I highly suggest reading through Stéphane Graber's 12 part series on LXD to get a better idea of other things you can do, how to manage resources, manage local images, migrate containers, or connect LXD with Juju, OpenStack or yes, even Docker. - -#####Shoulders stood upon - -* [Stéphane Graber's 12 part series on lxd 2.0](https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/) - Graber wrote LXC and LXD; this is the best resource I found and I highly recommend reading it all. -* [Mounting your home directory in LXD](https://blog.ubuntu.com/2016/12/08/mounting-your-home-directory-in-lxd) -* [Official how to](https://linuxcontainers.org/lxd/getting-started-cli/) -* [Linux Containers Discourse site](https://discuss.linuxcontainers.org/t/deploying-django-applications/996) -* [LXD networking: lxdbr0 explained](https://blog.ubuntu.com/2016/04/07/lxd-networking-lxdbr0-explained) - - -[^1]: To be fair, I didn't need to get rid of Vagrant. You can use Vagrant to manage LXC containers, but I don't know why you'd bother. LXD's management tools and config system work great; why add yet another tool to the mix? Unless you're working with developers who use Windows, in which case LXC, which is short for *Linux Containers*, is not for you.
diff --git a/src/vagrant-custom-box.txt b/src/vagrant-custom-box.txt deleted file mode 100644 index d73019d..0000000 --- a/src/vagrant-custom-box.txt +++ /dev/null @@ -1,171 +0,0 @@ -I'm a little old-fashioned with my love of Vagrant. I should probably keep up with the kids, dig into Docker and containers, but I like managing servers. I like to have the whole VM at my disposal. - -Why Vagrant? Well, I run Arch Linux on my laptop, but I usually deploy sites to either Debian, preferably v9, "Stretch", or (if a client is using AWS) Ubuntu, which means I need a virtual machine to develop and test in. Vagrant is the easiest way I've found to manage that workflow. - -When I'm deploying to Ubuntu-based machines I develop using the [Canonical-provided Vagrant box](https://app.vagrantup.com/ubuntu/boxes/bionic64) available through Vagrant's [cloud site](https://app.vagrantup.com/boxes/search). There is, however, no official Debian box provided by Debian. Worse, the most popular Debian 9 box on the Vagrant site has only 512MB of RAM. I prefer to have 1 or 2GB of RAM to mirror the cheap, but surprisingly powerful, [Vultr VPS instances](https://www.vultr.com/?ref=6825229) I generally use (you can use them too; in my experience they're faster and slightly cheaper than Digital Ocean. Here's a referral link that will get you [$50 in credit](https://www.vultr.com/?ref=7857293-4F)). - -That means I get to build my own Debian Vagrant box. - -Building a Vagrant base box from Debian 9 "Stretch" isn't hard, but most tutorials I found were outdated or relied on third-party tools like Packer. Why you'd want to install, set up, and configure a tool like Packer to build one base box is a mystery to me. It's far faster to do it yourself by hand (which is not to slag Packer; it *is* useful when you're building an image for AWS or Digital Ocean or another provider). - -Here's my guide to building a Debian 9 "Stretch" Vagrant Box.
- -### Create a Debian 9 Virtual Machine in Virtualbox - -We're going to use Virtualbox as our Vagrant provider because, while I prefer qemu for its speed, I run into more compatibility issues with qemu. Virtualbox seems to work everywhere. - -First install Virtualbox, either by [downloading an image](https://www.virtualbox.org/wiki/Downloads) or, preferably, using your package manager/app store. We'll also need the latest version of Debian 9's netinst CD image, which you can [grab from the Debian project](https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/) (scroll to the bottom of that page for the actual downloads). - -Once you've got a Debian CD, fire up Virtualbox and create a new virtual machine. In the screenshot below I've selected Expert Mode so I can go ahead and up the RAM (in the screenshot version I went with 1GB). - -<img src="images/2019/debian9-vagrant-base-box-virtualmachine.jpg" id="image-1859" class="picfull" /> - -Click "Create" and Virtualbox will ask you about the hard drive, I stick with the default type, but bump the size to 40GB, which matches the VPS instances I use. - -<img src="images/2019/debian9-vagrant-base-box-virtualdisk.jpg" id="image-1860" class="picfull" /> - -Click "Create" and then go to the main Virtualbox screen, select your new machine and click "Settings". Head to the audio tab and uncheck the Enable Audio option. Next go to the USB tab and disable USB. - -<img src="images/2019/debian9-vagrant-base-box-no-audio.jpg" id="image-1855" class="picfull" /> -<img src="images/2019/debian9-vagrant-base-box-no-usb.jpg" id="image-1856" class="picfull" /> - -Now click the network tab and make sure Network Adapter 1 is set to NAT. Click the "Advanced" arrow and then click the button that says Port Forwarding. Add a port forwarding rule. I call mine SSH, but the name isn't important. The important part is that the protocol is TCP, the Host and Guest IP address fields are blank, the Host port is 2222, the Guest port is 22. 
- -<img src="images/2019/debian9-vagrant-base-box-port-forward_EqGwcg4.jpg" id="image-1858" class="picfull" /> - -Hit okay to save your changes on both of those screens and now we're ready to boot Debian. - -### Install Debian - -To get Debian installed first click the start button for your new VM and Virtualbox will boot it up and ask you for the install CD. Navigate to wherever you saved the Debian netinst CD we downloaded earlier and select that. - -That should boot you to the Debian install screen. The most important thing here is to make sure you choose the second option, "Install", rather than "Graphical Install". Since we disabled USB, we won't have access to the mouse and the Debian graphical installer won't work. Stick with plain "Install". - -<img src="images/2019/debian9-vagrant-base-box-vm-install.jpg" id="image-1861" class="picfull" /> - -From here it's just a standard Debian install. Select the appropriate language, keyboard layout, hostname (doesn't matter), and network name (also doesn't matter). Set the root password to something you'll remember. Debian will then ask you to create a user. Create a user named "vagrant" (I used "vagrant" for the fullname and username) and set the password to "vagrant". - -Tip: to select (or unselect) a check box in the Debian installer, hit the space bar. - -Then Debian will get the network time, ask what timezone you're in and start setting up the disk. I go with the defaults all the way through. Next Debian will install the base system, which takes a minute or two. - -Since we're using the netinst CD, Debian will ask if we want to insert any other CDs (no), and then it will ask you to choose which mirrors to download packages from. I went with the defaults. Debian will then install Linux, udev and some other basic components. At some point it will ask if you want to participate in the Debian package survey. 
I always go with no because I feel like a virtual machine might skew the results in unhelpful ways, but I don't know, maybe I'm wrong on that. - -After that you can install your software. For now I uncheck everything except standard system utils (remember, you can select and unselect items by hitting the space bar). Debian will then go off and install everything, ask if you want to install Grub (you do -- select your virtual disk as the location for grub), and congratulations, you're done installing Debian. - -Now let's build a Debian 9 base box for Vagrant. - -### Set up Debian 9 Vagrant base box - -Since we've gone to the trouble of building our own Debian 9 base box, we may as well customize it. - -The first thing to do after you boot into the new system is to install sudo and set up our vagrant user as a passwordless superuser. Login to your new virtual machine as the root user and install sudo. You may as well add ssh while you're at it: - -~~~~console -apt install sudo ssh -~~~~ - -Now we need to add our vagrant user to the sudoers list. To do that we need to create and edit the file: - -~~~~console -visudo -f /etc/sudoers.d/vagrant -~~~~ - -That will open a new file where you can add this line: - -~~~~console -vagrant ALL=(ALL) NOPASSWD:ALL -~~~~ - -Hit control-x, then "y" and return to save the file and exit nano. Now logout of the root account by typing `exit` and login as the vagrant user. Double check that you can run commands with `sudo` without a password by typing `sudo ls /etc/` or similar. If you didn't get asked for a password then everything is working. - -Now we can install the vagrant insecure SSH key. Vagrant sends commands from the host machine over SSH using what the Vagrant project calls an insecure key, which is so called because everyone has it. We could in theory, all hack each other's Vagrant boxes. 
If this concerns you, it's not that complicated to set up your own more secure key, but I suggest doing that in your Vagrant instance, not the base box. For the base box, use the insecure key. - -Make sure you're logged in as the vagrant user and then use these commands to set up the insecure SSH key: - -~~~~console -mkdir ~/.ssh -chmod 0700 ~/.ssh -wget https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub -O ~/.ssh/authorized_keys -chmod 0600 ~/.ssh/authorized_keys -chown -R vagrant ~/.ssh -~~~~ - -Confirm that the key is in fact in the `authorized_keys` file by typing `cat ~/.ssh/authorized_keys`, which should print out the key for you. Now we need to set up SSH to allow our vagrant user to sign in: - -~~~~console -sudo nano /etc/ssh/sshd_config -~~~~ - -Uncomment the line `AuthorizedKeysFile ~/.ssh/authorized_keys ~/.ssh/authorized_keys2` and hit `control-x`, `y` and `enter` to save the file. Now restart SSH with this command: - -~~~~console -sudo systemctl restart ssh -~~~~ - -### Install Virtualbox Guest Additions - -The Virtualbox Guest Additions allow for nice extras like shared folders, as well as a performance boost. Since the Guest Additions require a compiler and Linux header files, let's first get the prerequisites installed: - -~~~~console -sudo apt install gcc build-essential linux-headers-amd64 -~~~~ - -Now head to the VirtualBox window menu and click the "Devices" option and choose "Insert Guest Additions CD Image" (note that you should download the latest version if Virtualbox asks[^1]). That will insert an ISO of the Guest Additions into our virtual machine's CDROM drive. We just need to mount it and run the Guest Additions installer: - -~~~~console -sudo mount /dev/cdrom /mnt -cd /mnt -sudo ./VBoxLinuxAdditions.run -~~~~ - -Assuming that finishes without error, you're done. Congratulations. Now you can add any extras you want your Debian 9 Vagrant base box to include.
I primarily build things in Python with Django and Postgresql, so I always install packages like `postgresql`, `python3-dev`, `python3-pip`, `virtualenv`, and some other software I can't live without. I also edit the .bashrc file to create some aliases and helper scripts. Whatever you want all your future Vagrant boxes to have, now is the time to install it. - -### Packaging your Debian 9 Vagrant Box - -Before we package the box, we're going to zero out the drive to save a little space when we compress it down the road. Here are the commands to zero it out: - -~~~~console -sudo dd if=/dev/zero of=/zeroed bs=1M -sudo rm -f /zeroed -~~~~ - -Once that's done we can package up our box with this command: - -~~~~console -vagrant package --base debian9-64base -==> debian9-64base: Attempting graceful shutdown of VM... -==> debian9-64base: Clearing any previously set forwarded ports... -==> debian9-64base: Exporting VM... -==> debian9-64base: Compressing package to: /home/lxf/vms/package.box -~~~~ - -As you can see from the output, I keep my Vagrant boxes in a folder called `vms`; you can put yours wherever you like. Wherever you decide to keep it, move it there now and cd into that folder so you can add the box.
Sticking with the `vms` folder I use, the commands would look like this: - -~~~console -cd vms -vagrant box add debian9-64 package.box -~~~ - -Now when you want to create a new vagrant box from this base box, all you need to do is add this to your Vagrantfile: - -~~~~console -Vagrant.configure("2") do |config| - config.vm.box = "debian9-64" -end -~~~~ - -Then you start up the box as you always would: - -~~~~console -vagrant up -vagrant ssh -~~~~ - -#####Shoulders stood upon - -* [Vagrant docs](https://www.vagrantup.com/docs/virtualbox/boxes.html) -* [Engineyard's guide to Ubuntu](https://www.engineyard.com/blog/building-a-vagrant-box-from-start-to-finish) -* [Customizing an existing box](https://scotch.io/tutorials/how-to-create-a-vagrant-base-box-from-an-existing-one) - Good for when you don't need more RAM/disk space, just some software pre-installed. - -[^1]: On Arch, using Virtualbox 6.x I have had problems downloading the Guest Additions. Instead I've been using the package `virtualbox-guest-iso`. Note that after you install that, you'll need to reboot to get Virtualbox to find it. diff --git a/src/w3m-guide.txt b/src/w3m-guide.txt deleted file mode 100644 index 2b008ed..0000000 --- a/src/w3m-guide.txt +++ /dev/null @@ -1,17 +0,0 @@ -Ah, the irony of the number of websites that want to tell you how to use w3m, but don't themselves load in w3m because they are totally JS dependent. - -How do you open a link in a new tab? Meh, you don't really need to; just hit "s" for the buffer selection window, which has your whole browsing history. - -Okay: back is shift-b, s to list buffers, esc-e to edit. That seems to be the basics.
- -Meta-u to get the equivalent of Ctrl-l (select URL bar), then bash shortcuts work: - -Ctrl-u to delete everything behind the cursor -Ctrl-a Move cursor to beginning of line -- doesn't work -Ctrl-e Move cursor to end of line -Ctrl-b Move cursor back one letter -Ctrl-f Move cursor forward one letter - -Need to figure out how to save current buffers to file - -You can bookmark with esc-a to add, esc-b to view
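Most of these keys can be rebound in ~/.w3m/keymap. A hedged sketch, since I'm going from memory of w3m's default bindings -- the function names (TAB_LINK, GOTO, ADD_BOOKMARK, VIEW_BOOKMARK) should be double-checked against the help screen (shift-h) in your build:

~~~
# ~/.w3m/keymap -- each line is: keymap <key> <function>
# open the link under the cursor in a new tab (the new-tab question above)
keymap C-t TAB_LINK
# make Ctrl-l prompt for a URL, browser-style
keymap C-l GOTO
# the default bookmark bindings, shown here for the syntax
keymap M-a ADD_BOOKMARK
keymap M-b VIEW_BOOKMARK
~~~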