Diffstat (limited to 'cmd')
-rw-r--r--  cmd/.gitignore                                                                              2
-rw-r--r--  cmd/Screen Shot 2015-04-01 at 1.34.13 PM.png                                                bin 0 -> 408641 bytes
-rw-r--r--  cmd/Screen Shot 2015-04-01 at 1.34.35 PM.png                                                bin 0 -> 419873 bytes
-rw-r--r--  cmd/adv-autojump.txt                                                                        5
-rw-r--r--  cmd/basics.txt                                                                              196
-rw-r--r--  cmd/changing-shells.txt                                                                     73
-rw-r--r--  cmd/draft_edits.txt                                                                         35
-rw-r--r--  cmd/homebrew.txt                                                                            0
-rw-r--r--  cmd/intro.txt                                                                               0
-rw-r--r--  cmd/notes.txt                                                                               2
-rw-r--r--  cmd/read-docs.txt                                                                           55
-rw-r--r--  cmd/ref/a whirlwind tour of web developer tools: command line utilities - wern ancheta.txt  1215
-rw-r--r--  cmd/ref/bash image tools for web designers - brettterpstra.com.txt                          53
-rw-r--r--  cmd/ref/invaluable command line tools for web developers.txt                                89
-rw-r--r--  cmd/ref/life on the command line.txt                                                        81
-rw-r--r--  cmd/ref/oliver | an introduction to unix.txt                                                1913
-rw-r--r--  cmd/ref/some command line tips for the web developer.txt                                    105
-rw-r--r--  cmd/setup-vps-server.txt                                                                    116
-rw-r--r--  cmd/ssh-keys.txt                                                                            94
19 files changed, 4034 insertions, 0 deletions
diff --git a/cmd/.gitignore b/cmd/.gitignore
new file mode 100644
index 0000000..fd35fbf
--- /dev/null
+++ b/cmd/.gitignore
@@ -0,0 +1,2 @@
+*.pdf
+lib/
diff --git a/cmd/Screen Shot 2015-04-01 at 1.34.13 PM.png b/cmd/Screen Shot 2015-04-01 at 1.34.13 PM.png
new file mode 100644
index 0000000..b8723d1
--- /dev/null
+++ b/cmd/Screen Shot 2015-04-01 at 1.34.13 PM.png
Binary files differ
diff --git a/cmd/Screen Shot 2015-04-01 at 1.34.35 PM.png b/cmd/Screen Shot 2015-04-01 at 1.34.35 PM.png
new file mode 100644
index 0000000..bb96cc5
--- /dev/null
+++ b/cmd/Screen Shot 2015-04-01 at 1.34.35 PM.png
Binary files differ
diff --git a/cmd/adv-autojump.txt b/cmd/adv-autojump.txt
new file mode 100644
index 0000000..dc39756
--- /dev/null
+++ b/cmd/adv-autojump.txt
@@ -0,0 +1,5 @@
+Autojump for advanced navigation:
+
+https://github.com/joelthelion/autojump
+
+
diff --git a/cmd/basics.txt b/cmd/basics.txt
new file mode 100644
index 0000000..9415829
--- /dev/null
+++ b/cmd/basics.txt
@@ -0,0 +1,196 @@
+Test this with an article for CSSTricks and a sign up for a tips mailing list. Just write the article, a week's worth of tips, and pitch it to see what happens.
+
+
+
+
+The reason the command line comes across as pure wizardry much of the time is that popular tutorials and books teach commands, tools and skills that web developers don't necessarily need.
+
+Unless you're the rare web developer / sysadmin you don't really need to know how to tail log files, search with grep and awk, pipe things between apps or write shell scripts. All of those things can come in handy, but they're not essential for web developers.
+
+For web developers like us most command line books fail the basic "how will this help me?" test.
+
+I know because I've read a bunch of them and, while they all eventually did prove useful down the road, none of them helped me automate tasks with Grunt or got me started with Sass or helped improve my development environment and deployment tools.
+
+All that I had to figure out myself with man pages, scattered tutorials and more than a healthy dose of trial and error. More like trial and error, error, error, trial, error, error, error, error, error, error.
+
+**** tk I want to spare you that rather lengthy process and get you started using command line tools. This book is not cumulative. That is each chapter is designed to stand alone. Scan the table of contents here, find one thing you want to learn how to do and go read that chapter.***
+
+**** Once you've accomplished something then come back here and you can learn a little more foundational stuff.***
+
+## The Basics: Open Your Terminal Application
+
+If you're on OS X the built in application is Terminal, which lives in Applications >> Utilities. While Terminal will work, I highly suggest you grab iTerm instead. It's free, much faster and has a boatload of useful features for when you're more comfortable.
+
+If you're on Windows grab [Cygwin][1] and follow the setup instructions.
+
+If you're on Linux you probably didn't buy this book. But on the outside chance you did, your distro most likely came with some sort of terminal application installed. Search for "terminal" or "command line" and see what you can find. If you want an everything and kitchen sink terminal, check out [Terminology][2].
+
+Okay so you've got a window open in some kind of terminal emulator.
+
+Let's start with some basics, like why is it called an emulator?
+
+Well back in the day a terminal was just a screen and a keyboard that connected to a mainframe via a serial port. Seriously. The terminal was literally at the end of the line. You sat down and typed your commands into the terminal and the terminal sent them off to the actual computer. So a terminal emulator is just emulating that basic interface.
+
+Oh my gawd, why the @#$% would I want to go back to those days?
+
+I don't know, why do you drive a car with a 12,000 year old invention like the wheel still primitively strapped to the bottom of it?
+
+Because it works.
+
+So a terminal emulator opens up and then it loads a shell. Now a shell is an extra layer between you and that metaphorical mainframe off in the distance. To stick with the wheel analogy, a shell is like a rubber wheel vs the stone wheel of a bare terminal. While wheels are just as useful as ever after all these years, most of us don't use the stone ones anymore. Same for the terminal. The shell is much nicer than a bare terminal.
+
+By default most operating systems these days will load the Bash Shell. Another popular shell you'll sometimes see referenced is the Z Shell, or more commonly Zsh. The two are pretty much indistinguishable in the beginning, though Zsh has some very powerful autocomplete features Bash does not. For now we'll stick with Bash. In the next chapter we'll kick off the Bash training wheels and start using Zsh.
+
+## Basics: Figure Out Where You Are
+
+Okay, you've probably got a screen that looks something like this:
+
+![tk screen of terminal no frills]
+
+Go ahead and open up the preferences for your terminal and pick a color scheme that's a little easier on the eyes. I happen to like [Solarized Dark][3] which is the blueish color you'll see in all the screenshots. So when I open a terminal it looks like this:
+
+![tk iTerm win]
+
+Ah, that's so much better. Okay, now where are we? It turns out our terminal is telling us the answer to that question, we just need to figure out how to read it. Here's the basic line from the screenshots above:
+
+~~~
+Last login: <date> on ttys018
+You have mail.
+iMac:~ sng$
+~~~
+
+So first there's a line about the last time I logged in, which you can pretty much ignore unless you're a sysadmin, in which case email me for a refund. Then there's a line about me having mail because I have a cron job running that sometimes errors out and sends me a message that it failed. As you can see I just ignore this message.
+
+Then we have the stuff that actually matters, this bit:
+
+~~~
+iMac:~ sng$
+~~~
+
+Yours will be a little different because your machine name may well be more creative than "iMac" and your username will not be `sng`. But there is something else in there to note... See that `~` character after the colon and before the username?
+
+That's where we are. It just so happens that by default terminal windows open up in the user's home directory, another throwback to the old mainframe days, but one that still makes sense. After all there's a really good chance you want to be in your home directory since that's where all your projects and files are.
+
+If you want to know where you are, you can always find out just by typing `pwd`. The `pwd` command just tells you the absolute path to the current working directory.
+
+~~~
+~ » pwd
+/Users/sng
+~ »
+~~~
+
+Note that for the rest of the book I'm going to use a simplified prompt in all the example code. I'll omit the machine name (iMac in the above example) and use the » symbol instead of the $. So everything will look like this:
+
+~~~
+~ » command
+~~~
+
+Nice and clean and simple. In a few chapters I'll show you how to modify what yours looks like.
+
+Okay, so we know we're in our home directory, let's see what's in this mysterious directory, type `ls` and hit return:
+
+~~~
+~ » ls
+Downloads Library Public
+Dropbox Movies Pictures
+Documents Music Sites
+~~~
+
+The exact spacing of the list and the folders within it will depend on what's in your home directory. But if you go open up your file browser (that's the Finder on OS X and Windows Explorer in Windows) you should see the exact same list of directories or, as your file browser calls them, folders.
+
+So what is `ls`? Well the short answer is that it's the list command, it lists the contents of the current directory.
+
+When you type `ls` and nothing else it will show you the content of the current directory. Chances are, regardless of what platform you're on, there's a Documents folder in your home directory. Let's see what's in it:
+
+~~~
+~ » ls Documents
+~~~
+
+Chances are that produced a very long, jumbled and difficult to read list of all the stuff that's in your Documents folder. To make it a bit prettier, let's use what are called flags or options; in this case we'll add `-lh`:
+
+~~~
+~ » ls -lh Documents
+drwxr-xr-x@ 4 sng staff 136B Mar 5 09:08 Virtual Machines.localized
+drwxr-xr-x@ 20 sng staff 680B Jan 12 20:57 archive
+drwxr-xr-x@ 31 sng staff 1.0K Feb 19 17:27 bak
+drwxr-xr-x@ 368 sng staff 12K Oct 29 11:25 bookmarks
+drwxr-xr-x@ 42 sng staff 1.4K Mar 9 21:36 dotfiles
+drwxr-xr-x@ 12 sng staff 408B Sep 24 14:22 misc
+drwxr-xr-x@ 6 sng staff 204B Oct 20 11:39 reading notes
+drwxr-xr-x@ 85 sng staff 2.8K Dec 30 09:33 recipes
+drwxr-xr-x@ 8 sng staff 272B Sep 1 2014 reference
+drwxr-xr-x@ 15 sng staff 510B Oct 13 10:49 writing
+drwxr-xr-x@ 36 sng staff 1.2K Mar 9 19:56 writing luxagraf
+drwxr-xr-x@ 18 sng staff 612B Feb 19 19:12 yellowstone
+~~~
+
+Now what's all that crap? Well, ignore the first column for now, those are the permissions flags. Then comes the number of links (for a directory that's roughly the number of items inside it), then the owner, then the group name, then the file size. Because we added the `-h` flag we get human-readable file sizes; in the example above everything is either kilobytes or bytes, but if you have larger files you'll see megabytes (M) or gigabytes (G). Then there's the last modified date and time and finally the name of the directory or file.
+
+Now you know how to see the contents of any directory on your machine using the command line. Or at least all the stuff you'd see in a graphical program. There may be hidden files though. To see the hidden files you can add the `-a` flag to the `ls` command. Try running this in your home directory to see some files you might have never known were there (they'll be the ones that start with a dot)
+
+~~~
+~ » ls -lah
+~~~
+
+Now you may have noticed that there's no easy way to tell directories from files. Some files might have extensions, like .txt or .pdf, but they don't have to have extensions, so how do you tell them apart? The most common way is with either color or bold text. Typically directories will be bold or a different color.
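+
+If your terminal isn't making that distinction clear, one trick that works in every `ls` I've used is the `-F` flag, which appends a `/` to directory names (the listing below is just an illustration, yours will differ):
+
+~~~
+~ » ls -F
+Documents/    Downloads/    Music/    Sites/    notes.txt
+~~~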
+
+So now we know how to figure out where we are (just type `pwd`) and what's in any folder (just type `ls -lh path/to/directory`), let's figure out how to move to another directory so we can work in it.
+
+## Basics: How To Move Around
+
+Let's start by creating a (possibly) new folder in our home directory. We'll create a "Sites" folder where we can keep all our web dev projects. To do that we use the `mkdir` command, which is short for "make directory".
+
+~~~
+~ » mkdir Sites
+mkdir: Sites: File exists
+~~~
+
+As you can see I already have a Sites folder so `mkdir` did nothing, but if there were no folder there named Sites there would be now. Now let's create a sample project inside the Sites folder named "my-project". We'll use mkdir again, but this time we'll add the `-p` flag:
+
+~~~
+~ » mkdir -p Sites/my-project
+~~~
+
+Now go to your file browser (Finder/Windows Explorer) and see for yourself. There's a new folder inside your Sites folder.
+
+The `-p` flag tells mkdir to make every directory in the specified path all at once. So if we wanted to create a folder at the path `Sites/my-project/design/sass/partials/` we could do it in a single step just by typing:
+
+~~~
+~ » mkdir -p Sites/my-project/design/sass/partials
+~~~
+
+Okay, let's now move to the my-project folder inside the Sites folder.
+
+~~~
+~ » cd Sites/my-project
+Sites/my-project »
+~~~
+
+Now notice the prompt has changed, the `~` is gone and we now see the path to where we are is Sites/my-project. Cool.
+
+Quick question though, did you type out "Sites" and "my-project"? I'd wager you did, but you didn't need to.
+
+Let's go back to our home directory, just type `cd` and hit return.
+
+Okay, we're home. Now type `cd S` and hit tab. Did that S magically turn into Sites? Very cool. Now type "m" and hit tab again to see "my-project" also auto-complete. Learn to love the tab key, it will autocomplete just about everything. You should never really need to type more than a few letters of whatever you're looking for before tab fills in the rest.
+
+Okay hit return and we're once again in our Sites/my-project folder. You can verify that you're in that folder by looking at your prompt.
+
+## Conclusion
+
+That was a lot, but now you know the basics of the command line.
+
+You can see where you are, list what files and folders are on your machine, create new folders when you need one and change to any folder you want. The only thing we didn't cover is how to create files; for that you can use `touch`. From your my-project folder type this:
+
+~~~
+Sites/my-project » touch my-first-file.txt
+~~~
+
+Now you can type `ls` and you'll see the file you just created.
+
+That's the basics of navigating the file system of your computer (or a remote computer you've logged into) from the command line.
+
+
+[1]: http://cygwin.com/
+[2]: http://www.enlightenment.org/p.php?p=about/terminology
+[3]:
diff --git a/cmd/changing-shells.txt b/cmd/changing-shells.txt
new file mode 100644
index 0000000..47921b9
--- /dev/null
+++ b/cmd/changing-shells.txt
@@ -0,0 +1,73 @@
+## Why You're Going to Change Your Shell to Zsh
+
+Wait, we just got the Terminal app open and now you want to go and change the shell, WTF?
+
+The reason many people give up on the command line isn't because the command line is overly difficult, it's because they're slower using it than they are using GUI apps.
+
+You've been using Photoshop for so long you know all the helpful keyboard shortcuts you need to work efficiently. The same is probably true of your favorite text editor and everything else you use regularly.
+
+In order to get you working much more quickly in the terminal we need to get a shell that has the equivalent of powerful keyboard shortcuts.
+
+We need Zsh.
+
+Zsh is going to make navigating the command line much faster and easier, especially when you're first starting out.
+
+The biggest help Zsh provides is auto-complete on steroids.
+
+The most time consuming thing you'll do on the command line is type out paths and commands. Sure, `cd` is much faster to type than `change-directory`, but you still need to type the full path to the directory you want to change to. Unless you have Zsh.
+
+Here's an animated gif showing me navigating from my home directory to /var/www where I keep the sites on my server:
+
+[! tk animated gif of var www change]
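+
+If the gif doesn't come through, here's roughly the same thing as text (the paths are from my server; yours will differ). In Zsh you can type an abbreviated path and tab expands every segment at once:
+
+    ~ » cd /v/w<TAB>
+    ~ » cd /var/www/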
+
+There isn't much Zsh can't auto-complete. If you install the pre-built package Oh-My-Zsh -- which I'll show you how to do in the next section -- you'll be able to auto-complete everything from paths to command options, even git commands.
+
+For example, I tend to have a lot of symlinks on my servers and I like `ls` to show me the actual path when there's a symlink, but I can never remember what the flag is for that, so I just type `ls -` and then hit `tab` and Zsh helpfully gives me a list of options (in this case the flag I want is `-L`):
+
+![tk screen of ls - flag complete]
+
+Another great feature of Zsh is spelling correction. We'll be typing commands like `ls` (list all the files and folders in the current working directory) and `cd` (change directory, AKA move to a new folder) a lot and you will inevitably accidentally type `dc` or `sl` followed by some long path. Rather than make you retype the whole thing, Zsh will simply ask, did you mean `ls`? Type 'y' for yes and Zsh will run the command as `ls`.
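+
+Here's roughly what that looks like, assuming the correction option is turned on (you can enable it yourself with `setopt correct` in your ~/.zshrc, and Oh-My-Zsh can turn it on for you):
+
+    ~ » sl -lh Documents
+    zsh: correct 'sl' to 'ls' [nyae]?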
+
+All of the examples in this book and screenshots are done in Zsh. The Z shell is perhaps best thought of as Bash improved. Technically it was derived from the Korn Shell, but for the most part it's a drop in replacement for Bash. There are differences, but for now you can ignore them -- if it works in Bash, it'll work in Zsh.
+
+## Install and Change to Zsh
+
+Okay let's install Zsh and make it our default shell. If you're on OS X and you've already got homebrew installed just type:
+
+ brew install zsh
+
+Ubuntu/Debian users can use:
+
+ apt-get install zsh
+
+If you're on Windows, using Cygwin:
+
+ apt-cyg install zsh
+
+That gets Zsh installed, now we need to make it the default so that we always start in a Z shell when we open a new terminal window. On OS X and Linux you can use this command:
+
+ chsh -s $(which zsh)
+
+For Windows/Cygwin head to C:/cygwin/Cygwin.bat and open that file in a text editor. Look for the line:
+
+ cygwin\bin\bash --login -i
+
+and change it to:
+
+ cygwin\bin\zsh -l -i
+
+Now you just need to log out and log back in and you'll have Zsh set up.
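+
+If you want to double check that the change took, open a new terminal window and ask the shell to identify itself:
+
+    echo $SHELL
+    zsh --version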
+
+## Installing Oh-My-Zsh
+
+Zsh is our default shell now. That's a good start, but we're going to go a step further and set up a wonderful set of tools that goes by the name [Oh-My-Zsh](https://github.com/robbyrussell/oh-my-zsh).
+
+Oh-My-Zsh is, in its own words, "community-driven framework for managing your zsh configuration. Includes 180+ optional plugins (rails, git, OSX, hub, capistrano, brew, ant, php, python, etc), over 120 themes to spice up your morning, and an auto-update tool so that makes it easy to keep up with the latest updates from the community."
+
+Basically it gives you a lot of powerful tools along with some sane configuration files.
+
+To install Oh-My-Zsh you can just paste this line in your terminal:
+
+ wget https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh -O - | sh
+
+If you'd like to play around with Oh-My-Zsh and try some different configurations, be sure to read through [the documentation](https://github.com/robbyrussell/oh-my-zsh) on GitHub. For now we'll just stick with the defaults that Oh-My-Zsh set up for us.
diff --git a/cmd/draft_edits.txt b/cmd/draft_edits.txt
new file mode 100644
index 0000000..eb88a61
--- /dev/null
+++ b/cmd/draft_edits.txt
@@ -0,0 +1,35 @@
+
+
+
+
+
+Linux comes in many flavors. You may have heard of Ubuntu, it's not a bad choice, but I prefer something very similar called Debian. Debian is actually the base that Ubuntu is built on. Since we don't need a fancy desktop or any of that stuff we'll just stick with Debian. It's the most popular server OS on the web by a wide margin so we're not really going out on a limb here.
+
+If you picked Digital Ocean, here's what the setup screen looks like:
+
+
+
+
+Wait, you said we'd install a fully custom server...
+
+Patience my friend. Let's start with something basic, like getting WordPress up and running. Do I like WordPress? Not really, but it's popular
+
+
+
+
+Start with the problem
+Then the dream
+Then the solution
+
+With Bash if we want to auto-complete when we're changing directories with `cd` we have to type the first few letters of the directory name before tab auto-completes anything. It offers a list of all the directories available, but it doesn't actually complete anything. With zsh you can hit tab twice and it will auto-complete the first directory in the directory you're in. In other words hit tab twice in Zsh and it will auto-complete the first thing you'd see if you typed `ls`. Hit tab again and it moves to the next directory and so on.
+
+Zsh also autocompletes things like git commands and even flags, for example, here I just
+
+spelling set autocorrections
+
+syntax highlighting (valid commands are green)
+Zsh goes one better. You can type part of a command and press <UP>
+
+
+
+It finds the last command we typed starting with ‘ls’. We could continue pressing up to cycle if we wanted.
diff --git a/cmd/homebrew.txt b/cmd/homebrew.txt
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/cmd/homebrew.txt
diff --git a/cmd/intro.txt b/cmd/intro.txt
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/cmd/intro.txt
diff --git a/cmd/notes.txt b/cmd/notes.txt
new file mode 100644
index 0000000..49ffcb5
--- /dev/null
+++ b/cmd/notes.txt
@@ -0,0 +1,2 @@
+As we navigate through the filesystem, there are some conventions. The current working directory (cwd)—whatever directory we happen to be in at the moment—is specified by a dot:
+.
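+
+For example (the filename is just an illustration), this copies a file into whatever directory you happen to be in:
+
+    cp ~/Downloads/logo.png .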
diff --git a/cmd/read-docs.txt b/cmd/read-docs.txt
new file mode 100644
index 0000000..a0c4cd3
--- /dev/null
+++ b/cmd/read-docs.txt
@@ -0,0 +1,55 @@
+It is impossible to over-emphasize the importance of Reading The Fucking Manual. There are two ways to get better at stuff, reading and doing. The more you do the first, the better you get at the second. And of course the more you do, the better you get.
+
+RTFM can be a little bit of a confusing notion for web developers though. Unlike programmers, who have the Python documentation or RubyDoc files they can read through, HTML and CSS, the primary tools of our trade, do not really have a manual, at least not in the traditional sense. There's the specification, and while reading specs is both as boring as you would expect and far more useful than you might expect (I encourage you to do it), it's not a place you'd normally turn to to answer a quick question.
+
+At the same time typing "how to use css transform" into Google is a crap shoot. You might get something useful, but it might be outdated, browsers might have changed things, the spec might have been re-written for some reason.
+
+The best docs I know of and what I use for HTML is the [Mozilla Developer Network](https://developer.mozilla.org), though it's rare that I visit the actual site. I do my development work in Vim and on the command line, switching to the browser, opening a new tab and then typing the URL followed by whatever I want to look up... fuck that. Sure, it doesn't sound like much, it's not much from a pure time perspective, but it's a huge context switch, it pulls my brain out of what I'm doing and directs it somewhere else from which it may not return immediately.
+
+Perhaps I just have some sort of attention deficit disorder, but I can't tell you how many times I have tabbed over to Chromium to find the answer to some question and found myself browsing 1969 Yellowstone trailer images (I'm restoring one if you must know) on obscure forums half an hour later. Context switching kills your ability to concentrate.
+
+That's where Dash for OS X and Zeal for Windows/Linux come in. Both applications are offline browsers for documentation. You can download the MDN docs for HTML and CSS, as well as more traditional documentation sources like JavaScript, Ruby, Python or hundreds of others.
+
+This way I can minimize the potential distractions. It's still a context switch -- another app opens, though using Dash's HUD view my full screen terminal stays where it is in the background -- but there's no internet there to disappear into for hours.
+
+As an added bonus I can get answers even when I'm working somewhere I don't have access to wifi. This is especially useful since I like to block my time and actually work for half, sometimes a whole day without allowing myself access to the web. I sit down and write, just me the command line and the task at hand. No distractions.
+
+In fact Dash/Zeal may be the most effective focus/productivity tools I use and yet I never really think of them that way. Still, they can save you from yourself so go grab whichever one you need, follow the installation instructions, download some "docsets" (Dash/Zeal's name for documentation) and then come back and I'll tell you how I use them.
+
+Here's a little secret, I don't actually even have to open Dash or Zeal (I work on both an OS X machine and a Linux machine so I use both). Open an app? How primitive. I've got plugins for my text editor of choice (Vim), so all I do is position my cursor over a word, hit a two-key shortcut combo and Dash/Zeal opens an overlay window with the results.
+
+That's great when you're in the editor, but sometimes my cursor is just there on the command line, no editor or anything else, and I want an easy way to search Dash/Zeal right where I am. So I wrote a very simple shell function that looks things up straight from the command line.
+
+Here's what that function looks like with Dash on OS X:
+
+~~~.
+function lu() {
+ open "dash://$*"
+}
+~~~
+
+Here's what the Zeal version for Linux/Windows with Cygwin looks like:
+
+~~~.
+function lu() {
+tk
+}
+~~~
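+
+Until I fill that in, here's a minimal sketch of what the Zeal version could look like, assuming your build of Zeal accepts a "docset:query" string as a command line argument (check `zeal --help` to be sure):
+
+~~~.
+function lu() {
+    # hand the query string straight to Zeal and get back to the prompt
+    zeal "$*" &
+}
+~~~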
+
+I call it "lu", which is short for "look up", because it doesn't conflict with any existing tools I'm aware of (at least nothing I've ever used). With these functions I can do this:
+
+~~~.
+lu html:picture
+~~~
+
+And instantly get a window with the MDN article on the HTML5 Picture element. If I want to look up the CSS transition article it would look like this:
+
+~~~.
+lu css:transition
+~~~
+
+And here's what it looks like when I do that, note that this chapter is still there in the background:
+
+![tk pic ]
+
+When I'm done I just hit Esc twice and that pop up window goes away and I'm back to writing this sentence. Thanks to my function and Dash's keyboard shortcuts I never had to take my fingers off the keys and I never got distracted by anything shiny. I got in, got educated and got out.
diff --git a/cmd/ref/a whirlwind tour of web developer tools: command line utilities - wern ancheta.txt b/cmd/ref/a whirlwind tour of web developer tools: command line utilities - wern ancheta.txt
new file mode 100644
index 0000000..5a201a3
--- /dev/null
+++ b/cmd/ref/a whirlwind tour of web developer tools: command line utilities - wern ancheta.txt
@@ -0,0 +1,1215 @@
+---
+title: A Whirlwind Tour of Web Developer Tools: Command Line Utilities
+date: 2014-12-30T18:57:01Z
+source: http://wern-ancheta.com/blog/2014/03/08/a-whirlwind-tour-of-web-developer-tools-command-line-utilities/
+tags: #cmdline
+
+---
+
+In part five of the series A Whirlwind Tour of Web Developer Tools I'll walk you through some of the tools that you can use in the command line. But before we dive in to some of the tools let's first define what a command line is. According to [Wikipedia][1]:
+
+> A command-line interface (CLI), also known as command-line user interface, console user interface, and character user interface (CUI), is a means of interacting with a computer program where the user (or client) issues commands to the program in the form of successive lines of text (command lines).
+
+So the command line is basically an interface where you can type in a bunch of commands to interact with the computer.
+
+### Command Line Basics
+
+Before we jump into the tools it's important that we first understand the basics of using the command line. To access the command line in Linux press `ctrl + alt + t` on your keyboard. For Mac just look for the terminal in your menu. And for Windows just press `windows + r` and then type in `cmd` then press `enter`.
+
+#### Commonly used Commands
+
+Here are some of the commands that you'll commonly use on a day to day basis:
+
+* **cd** – change directory
+* **mkdir** – create a new directory
+* **rmdir** – delete an existing directory
+* **touch** – create an empty file
+* **pushd** – push directory
+* **popd** – pop directory
+* **ls** – list files in a specific directory
+* **grep** – find specific text inside files
+* **man** – read a manual page
+* **apropos** – lists out commands that do a specific action
+* **cat** – print out all the contents of a file
+* **less** – view the contents of a file (with pagination)
+* **sudo** – execute command as super user
+* **chmod** – modify the file permissions
+* **chown** – change file ownership
+* **find** – find files from a specific directory
+* **pwd** – print working directory
+* **history** – returns a list of the commands that you have previously executed
+* **tar** – creates a tar archive from a list of files
+
+If you are on Windows some commands might not be available to you. The solution would be to either switch to Linux (I definitely recommend Linux Mint or Ubuntu if you're planning to switch) or, if you want to stick with Windows, install [Cygwin][2] or the [GNU utilities][3] for Windows.
+
+I won't go ahead and provide you with a tutorial on how to use the commands above. There's tons of tutorials out there so use Google to your advantage. You also have the `man` command to help you out. Here's how to use the `man` command:
+
+    man cd
+
+This will output all the information related to the `cd` command and how to use it. The `man` command is useful if you already know the name of the command. But in case you do not already know it you also have access to the `apropos` command which lists out commands that match a specific action. Here's how to use it:
+
+Executing the command above produces an output similar to the following:
+
+![apropos][4]
+
+As you can see you can pretty much scan through the results and determine the command that you need to use based on the description provided. So if you want to delete a file you can just call the `unlink` command.
+
+#### Aliases
+
+Once you've gotten comfortable with the default commands you can start using shortcuts in order to make typing commands faster and easier. You can add aliases by creating a `.bash_aliases` file inside your home directory then add contents similar to the following:
+
+    alias subl='/usr/bin/subl'
+    alias c='clear'
+    alias install='sudo apt-get install'
+    alias cp='cp -iv'
+    alias mv='mv -iv'
+    alias md='mkdir'
+    alias t='touch'
+    alias rm='rm -i'
+    alias la='ls -alh'
+    alias web-dir='cd ~/web_files'
+    alias e='exit'
+    alias s='sudo'
+    alias a='echo "------------Your aliases------------";alias'
+    alias ni='sudo npm install'
+    alias snemo='sudo nemo'
+    alias gi='git init'
+    alias ga='git add'
+    alias gc='git commit -m'
+    alias gca='git commit --amend -m'
+    alias gu='git push'
+    alias gd='git pull'
+    alias gs='git status'
+    alias gl='git log'
+
+As you can see from the example above, to add an alias you simply put `alias`, followed by the shortcut that you want to use, then `=`, followed by the command or path to the executable wrapped in quotes. If you do not know the path to the executable file you can use the `which` command followed by the command that you usually use. For example, for the `less` command:
+
+    which less
+
+It will then output the path to the executable file:
+
+    /usr/bin/less
+
+This is the path that you can add in the specific alias.
+
+### Command Line Tools
+
+#### Wget
+
+Useful for pulling files from a server. For example you can use this to download a specific library or asset for your project into your current working directory:
+
+    wget http://cdnjs.cloudflare.com/ajax/libs/angular.js/1.2.10/angular.min.js
+
+The command above will pull the file from the URL that you specified and copy it into the directory where your current terminal window is opened.
+
+#### Curl
+
+Curl is used for making HTTP requests. I'd like to describe it as a browser, but for the command line. You can do all sorts of stuff with Curl. For example you can use it to request a specific page from the web:
+
+    curl http://anchetawern.github.io
+
+##### Basic HTTP Authentication
+
+If the page uses basic HTTP authentication you can also specify a user name and a password. In the example below I am using Curl to request my recently bookmarked links from the delicious API:
+
+    curl -u username:password https://api.del.icio.us/v1/posts/recent
+
+##### Saving the Results to a File
+
+This will return an XML string. If you want to copy the result to a file you can simply redirect the output to a file:
+
+    curl -u username:password https://api.del.icio.us/v1/posts/recent > recent-bookmarks.xml
+
+##### Getting Header Information
+
+If you only want to get the header information from a specific request you can add the `-I` option:
+
+    curl -I http://google.com
+
+This will output a result similar to the following:
+
+    Location: http://www.google.com/
+    Content-Type: text/html; charset=UTF-8
+    Date: Fri, 21 Feb 2014 10:16:19 GMT
+    Expires: Sun, 23 Mar 2014 10:16:19 GMT
+    Cache-Control: public, max-age=2592000
+    Server: gws
+    Content-Length: 219
+    X-XSS-Protection: 1; mode=block
+    X-Frame-Options: SAMEORIGIN
+    Alternate-Protocol: 80:quic
+
+This is the same as the one that you see under the network tab in Chrome Developer Tools under the headers section.
+
+##### Interacting with Forms
+
+You can also perform actions on forms. So for example if you have the following form from a web page somewhere:
+
+    <form action="form.php" method="GET">
+        <input type="text" name="query">
+        <input type="submit">
+    </form>
+
+You can fill up the form and perform the action as if you're in a browser by simply getting the required inputs and supplying them from your command:
+
+    curl http://localhost/tester/curl/form.php?query=dogs
+
+For forms which have their method set to `POST` you can also make the request using curl. All you have to do is add a `--data` option followed by the name-value pair, with the name being the name assigned to the input and the value being the value that you want to supply:
+
+    curl --data "query=cats" http://localhost/tester/curl/form.php
+
+##### Spoofing the HTTP referrer
+
+You can also spoof the http-referrer when making a request:
+
+    curl --referer http://somesite.com http://anothersite.com
+
+This reminds us that using the HTTP referrer as a means of checking whether to perform a specific action or not is really useless as it can be easily spoofed.
+
+##### Follow Redirects
+
+Curl also allows you to follow redirects. So for example if you're accessing a page which has a redirect like this:
+
+    <?php
+    header('Location: anotherfile.php');
+    echo 'zup yo!';
+    ?>
+
+Simply using the following command will result in the execution of the `echo` statement below the redirect:
+
+    curl http://localhost/tester/curl/file.php
+
+But if you add the `--location` option curl will follow the page that is specified in the redirect:
+
+    curl --location http://localhost/tester/curl/file.php
+
+So the output of the command above will be the contents of the `anotherfile.php`.
+
+##### Cookies
+
+You can also supply cookie information on the requests that you make. So for example you are requesting a page which uses cookies as a means of determining if a user is logged in or not (note: you shouldn't use this kind of code in production):
+
+    <?php
+    $name = $_COOKIE["name"];
+    $db->query("SELECT id FROM tbl_users WHERE name = '$name'");
+    if($db->num_rows > 0){
+        echo 'logged in!';
+    }else{
+        echo 'sorry user does not exist';
+    }
+    ?>
+
+To request from the page above just add the `--cookie` option followed by the cookies that the page needs:
+
+    curl --cookie "name=fred" http://localhost/tester/curl/cookie.php
+
+If you need to specify more than one cookie simply separate them with a semi-colon:
+
+    curl --cookie "name=fred;age=22" http://localhost/tester/curl/cookie.php
+
+#### jq
+
+If you normally work with web APIs in your job, you might find the jq utility useful. What it does is format JSON strings; it also adds syntax highlighting so they become more readable. To install jq all you have to do is download the `jq` file from the [downloads page][5] and then move it into your `bin` folder:
+
+    mv ~/Downloads/jq /bin/jq
+
+After that you can start using jq to process JSON strings that come from curl requests by simply piping the output to the `jq` command. For example, say we are making a request to the following file:
+
+    <?php
+    $names = array(
+        array(
+            'fname' => 'Gon',
+            'lname' => 'Freecs',
+            'nen_type' => 'enhancement',
+            'abilities' => array(
+                'rock', 'paper', 'scissors'
+            )
+        ),
+        array(
+            'fname' => 'Killua',
+            'lname' => 'Zoldyc',
+            'nen_type' => 'transmutation',
+            'abilities' => array(
+                'lightning bolt',
+                'thunderbolt',
+                'godspeed'
+            )
+        ),
+        array(
+            'fname' => 'Kurapika',
+            'lname' => '',
+            'nen_type' => array('conjuration', 'specialization'),
+            'abilities' => array(
+                'holy chain',
+                'dowsing chain',
+                'chain jail',
+                'judgement chain',
+                'emperor time'
+            )
+        ),
+        array(
+            'fname' => 'Isaac',
+            'lname' => 'Netero',
+            'nen_type' => 'enhancement',
+            'abilities' => array(
+                '100-Type Guanyin Bodhisattva',
+                'First Hand',
+                'Third Hand',
+                'Ninety-Ninth Hand'
+            )
+        ),
+        array(
+            'fname' => 'Neferpitou',
+            'lname' => '',
+            'nen_type' => 'specialization',
+            'abilities' => array(
+                'Terpsichora',
+                'Doctor Blythe',
+                'Puppeteering'
+            )
+        )
+    );
+    echo json_encode($names);
+
+Normally we would do something like this:
+
+    curl http://localhost/tester/curl/json.php
+
+But this returns a result that looks like this:
+
+![json string][6]
+
+Piping the result to `jq`:
+
+    curl http://localhost/tester/curl/json.php | jq "."
+
+We get a result similar to the following:
+
+![jq formatted][7]
+
+Pretty sweet! But you can do much more than that, check out the [manual page][8] for the jq project for more information.
+
+#### Vim
+
+Vim is a text-editor based on Vi, a text-editor that's pre-installed on common Linux distributions. But hey, you might say, the main topic of this blog post is command-line tools, why are we suddenly talking about text-editors here? Well it's because Vim is tightly coupled with the terminal. It's like a terminal-text editor crossbreed. You can both execute commands and write code with it.
+
+You can download Vim from the [Vim downloads page][9], simply select the version that's applicable to the operating system that you're currently using. But if you're on Linux Mint, Ubuntu or other Linux distributions that use `apt-get` then you can simply execute the following command from the terminal:
+
+    sudo apt-get install vim
+
+There are lots of tutorials on the web that can help you with learning vim (I'll link to them later). But for now I'm going to give you a quick tutorial to get you started.
+
+First thing that you need to know is how to open up files with vim. You can do it by executing the following command:
+
+    vim file_that_you_want_to_edit.txt
+
+You can also open up more than one file:
+
+    vim file1.txt file2.txt file3.txt
+
+You can then switch between the files while in command mode. First list out the files that are currently opened in vim:
+
+    :ls
+
+This will give you an output of the list of files with an id that you can use to refer to them when switching:
+
+![list of files][10]
+
+To switch to `file2.txt` you can use the `:b` command followed by its id:
+
+    :b 2
+
+An alternative would be to use the file name itself:
+
+    :b file2.txt
+
+Or you can also just switch to the next file:
+
+    :bn
+
+Or switch to the previous file:
+
+    :bp
+
+Second thing that you need to know is that vim has 3 modes:
+
+* **command** – used for telling vim to do things. This is the default mode that vim is in when you open it. If you are on another mode other than the command mode you can press on `esc` to go back to the command mode.
+* **insert** – used for inserting things on the current file that you're working on. This is basically the text editor mode. You can only get to this mode when you are currently on the command mode. To get to this mode press the `i` key.
+* **visual** – used for selecting text. Just like the insert mode you can only get to this mode when you are in command mode. To get to this mode press the `v` key.
+
+**Basic Commands**
+
+Here are some of the basic commands that you would commonly use when working with a file. Note that you can only type in these commands while you are in command mode.
+
+* `:w` – save file
+* `:wq` – save file and quit
+* `:q!` – quit vim without saving the changes
+* `u` – undo last action
+* `ctrl + r` – redo
+* `x` – delete character under the cursor
+* `dd` – delete current line
+* `D` – delete to the end of the line. The main difference between this and the `dd` command is that the `dd` command deletes even the line breaks but the `D` command simply deletes to the end of the line leaving the line break behind.
+
+**Basic Navigation**
+
+You can navigate a file while you're in the command mode or insert mode by pressing the arrow keys. You can also use the following keys for navigating but only when you are in command mode:
+
+* `h` – left
+* `l` – right
+* `j` – down
+* `k` – up
+* `0` – move to the beginning of the line
+* `$` – move to the end of the line
+* `w` – move forward by one word
+* `b` – move backward by one word
+* `gg` – move to the first line of the file
+* `G` – move to the last line of the file
+* `line_numberG` – move to a specific line number (for example `42G` jumps to line 42)
+
+**Searching Text**
+
+You can search for a specific text while you are in command mode by pressing the `/` key and entering the text that you want to search for and then press enter to execute the command. Vim will then highlight each instance of the text. You can move to the next instance by pressing the `n` key or `N` to go back to the previous instance.
+
+**Modifying Text**
+
+You can modify text by switching to insert mode. You can switch to insert mode by first going to command mode (`esc` key) and then pressing the `i` key. Once you are in insert mode you can start typing text just like you would in a normal text editor. If you want to select specific text to copy, press the `esc` key to go back to command mode and then press the `v` key to switch to visual mode. From visual mode you can select the text with the movement keys. With the text selected, press the `y` key to copy (yank) it. To paste the copied text press the `p` key. You can do the same thing when you want to cut and paste; simply use the `d` key instead of the `y` key.
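+
+A compressed version of that copy/paste sequence, keystrokes only, starting from command mode (the `"` lines are just notes, not part of what you type):
+
+    v      " enter visual mode, then select text with the movement keys
+    y      " yank (copy) the selection
+    p      " paste after the cursor
+    d      " use in place of y to cut instead of copy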
+
+**Vim Configuration**
+
+You can use the `.vimrc` file to configure vim settings. It doesn't exist by default so you have to create it under the home directory:
+
+Some of the most common configuration that you would want to add:
+
+    syntax on
+    set number
+    set wrap
+    set tabstop=2
+
+Here's a description of what each option does:
+
+* **syntax on** – this enables syntax highlighting
+* **set number** – this enables line numbers
+* **set wrap** – this tells vim to word wrap visually
+* **set tabstop** – you can use this to specify the tab width. In the example above I've set it to `2` so a tab character displays as 2 columns wide
+
+**Resources for learning Vim**
+
+Be sure to check out the resources below to learn more about Vim. Learning Vim is a painful process since you have to memorize a whole bunch of commands and practice them like you're practicing the piano. Lots of practice is required before you can get productive with it. You can easily get away with just using a regular text-editor when writing code, but if you want a productivity boost then take the time to really learn Vim, even if it is painful. Here are some resources for learning Vim:
+
+#### Siege
+
+Siege is an HTTP load testing and benchmarking utility. You can mainly use this tool to stress test your web project with a bunch of requests to see how it holds up. Execute the following command to install siege:
+
+    sudo apt-get install siege
+
+To use it you can execute:
+
+    siege -b -t60S -c30 http://url-of-the-web-project-that-you-want-to-test
+
+The `-b` option tells siege to run the tests without delay. By default siege runs the test with a one second delay between each requests. Adding the `-b` option makes sure that the requests are made concurrently.
+
+The `-t60S` option tells siege to run the tests in 60 seconds (60S). If you want to run it for 30 minutes you can do `30M`. Or `1H` for an hour.
+
+The `-c30` option tells siege to have 30 concurrent connections.
+
+The last part of the command is the url that you want to test. If you only want to test out one url you can directly specify it in the command. But if you want to test out more than one url then you can create a new text file with the urls that you want to test out (one url per line) and then add the `-f` option followed by the path to the text file that you created to tell siege that you want to make use of a file:
+
+    siege -b -t60S -c30 -f ~/test/urls.text
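+
+The urls file itself is nothing special, just one url per line, something like this (the addresses here are made up):
+
+    http://example.com/
+    http://example.com/about
+    http://example.com/blog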
+
+Here's an example usage of siege:
+
+![siege][11]
+
+Interpreting the results above:
+
+* **transactions** – the total number of hits to the server.
+* **availability** – This is the availability of your web project to users. Ideally you would want the availability to be 100%. Anything below it would mean that some users accessing your web project won't be able to access it because of the load.
+* **elapsed time** – this is the time you specified in your options when you executed siege. It wouldn't be perfect though, as you can see from the results above we only got 59.37 seconds but we specified 60 seconds.
+* **data transferred** – the size of transferred data for each request
+* **response time** – the average response time for each request
+* **transaction rate** – the number of hits to the server per second
+* **throughput** – the average number of bytes transferred every second from the server to all the simulated users
+* **concurrency** – the average number of simultaneous connections
+* **successful transactions** – the number of successful transactions
+* **failed transactions** – the number of failed transactions
+* **longest transaction** – the total number of seconds the longest transaction took to finish
+* **shortest transaction** – the total number of seconds the shortest transaction took to finish
+
+#### Sed
+
+Sed is a tool for automatically modifying files. You can basically use this for writing scripts that do search and replace on multiple files. A common use case for developers would be writing scripts that automatically format source code according to a specific [coding standard][12].
+
+Yes you can do this sort of task using the built-in search and replace utility on text-editors like Sublime Text. But if you want something that lets you specify a lot of options and offers a lot of flexibility then sed is the tool for the job. Sed is pre-installed on most Linux distributions and also on Mac OS so you won't really have to do any installation. For windows users there's also [Sed for Windows][13] which you can install.
+
+Here's an example on how to use sed. For example you have the following file (`sed-test.php`):
+
+    <?php
+    $superStars = array();
+    $rockStars = array();
+    $keyboardNinjas = array();
+    ?>
+
+And you want to modify all variable declarations to be all in lowercase. You would do something like:
+
+    sed 's/\$\([A-Za-z]*\)/$\L\1/' sed-test.php
+
+Sed will then output the following result in the terminal screen:
+
+    <?php
+    $superstars = array();
+    $rockstars = array();
+    $keyboardninjas = array();
+    ?>
+
+To save the changes to the same file you need to do a little bit of a trick if your version of sed can't edit the input file in place (GNU sed has the `-i` flag for that). The trick would be to temporarily save the results to a new file (`sed-test.new.php`) and then use `mv` to rename the new file (`sed-test.new.php`) to the old file name (`sed-test.php`):
+
+    sed 's/\$\([A-Za-z]*\)/$\L\1/' sed-test.php > sed-test.new.php
+    mv sed-test.new.php sed-test.php
+
+If you want to learn more about sed check out the following resources:
+
+* [Sed – An Introduction and Tutorial][14]
+* [Getting Started with Sed][15]
+
+You can also check out the following related tools:
+
+#### Ruby Gems
+
+There's also lots of command line tools in the Ruby world. And you can have access to those tools by installing Ruby.
+
+In Linux and in Mac OS you can install Ruby by using RVM (Ruby Version Manager). First make sure that all the packages are up to date by executing the following command:
+
+    sudo apt-get update
+
+We will get RVM by using Curl so we also have to install it:
+
+    sudo apt-get install curl
+
+Once curl is installed, download rvm using curl and then pipe it to `bash` so we can use it immediately right after the download is finished:
+
+    curl -L https://get.rvm.io | bash -s stable
+
+Install Ruby version `1.9.3` using rvm. For this step you don't really have to stick with version `1.9.3`. If there is already a later and stable version available you can use that as well:
+
+    rvm install 1.9.3
+
+Tell rvm to use Ruby version `1.9.3`:
+
+    rvm use 1.9.3
+
+You can then install the latest version of `rubygems`:
+
+For Windows users you can just use the [ruby installer for Windows][16].
+
+Once ruby gems is installed you can now install gems like there's no tomorrow. Here's a starting point: [Ruby Gems for Command-line Apps][17]. On the next section there's a gem called `tmuxinator` that you can install to manage tmux projects easily.
+
+#### Tmux
+
+Tmux or terminal multiplexer is an application that allows you to multiplex several terminal windows. It basically makes it easier to work on several related terminal windows. In Linux you can install tmux from the terminal by executing the following command:
+
+    sudo apt-get install tmux
+
+For Mac OS you can install tmux through brew:
+
+    brew install tmux
+
+And on Windows tmux is not really directly supported. You first have to install [cygwin][18] and then add [this patch][19] to install tmux. Or if you don't want to go through all the trouble you can install [console2][20] which is a tmux alternative for Windows.
+
+Once you're done installing tmux you can now go ahead and play with it. To start tmux first create a new named session:
+
+    tmux new -s name_of_session
+
+This will create a new tmux session with the name that you supplied: ![tmux session][21]
+
+You can then execute commands just like you do with a normal terminal window. If you want to create a new window press `ctrl + b` then release and then press `c`. This will create a new window under the current session:
+
+![tmux new window][22]
+
+As you can see from the screenshot above we now have two windows (see the text highlighted in green on the lower part of the terminal window on the left side). One is named `0:bash` and the other is `1:bash*`. The one with the `*` is the current window.
+
+You can rename the current window by pressing `ctrl + b` then release and then `,`. This will prompt you to enter a new name for the window. You can just press enter once you're done renaming it:
+
+![tmux rename window][23]
+
+To switch between the windows you can either press `ctrl + b` then release and then the index of the window that you want to switch to. You can determine the index by looking at the lower left part of the terminal screen. So if you have only two windows opened the index can either be 0 or 1. You can also press `ctrl + b` then release and then `p` for previous or `n` for next window.
+
+You can also further divide each window into multiple panes by pressing `ctrl + b` then release and then the `%` key to divide the current window vertically or the `"` key to divide it horizontally. This will give you a screen similar to the following:
+
+![tmux panes][24]
+
+You can then switch between those panes by pressing `ctrl + b` then release and then the `o` key.
+
+What's good about tmux is that it allows you to keep multiple terminal sessions around and you'll be able to get back to them even after you close the terminal window. To list out available sessions you can execute the following command:
+
+    tmux ls
+
+This will list out all the sessions that you created using the `tmux new -s` command or simply `tmux`. You can then open up the specific session by executing the following command:
+
+    tmux attach -t name_of_session
+
+If you no longer want to work with a particular session you can just do the following:
+
+    tmux kill-session -t name_of_session
+
+Or if you want to kill all sessions:
+
+    tmux kill-server
+
+There's also this ruby gem called [tmuxinator][25] which allows you to create and manage complex tmux sessions easily. You can install it via ruby gems:
+
+    gem install tmuxinator
+
+Or if you're like me and you installed Ruby via RVM:
+
+    rvmsudo gem install tmuxinator
+
+You can then create project-based tmux sessions. To create a new project you can do:
+
+    tmuxinator open name_of_project
+
+This will create a `name_of_project.yml` file under the `~/.tmuxinator` directory. You can then open up this file and modify the default configuration. For me, I simply deleted the commented lines (except for the first one, which is the path to the current file) and then specified the project path. In my case it's the `octopress` directory under the home directory. Then under `windows` the `layout` is `main-vertical`; this means that the panes I specify will be divided vertically. There are 2 panes, one empty so I can type in whatever commands I wish to execute and the other running `rake preview`, which is the command for previewing an octopress blog locally:
+
+    # ~/.tmuxinator/blog.yml
+
+    name: blog
+    root: ~/octopress
+
+    windows:
+      - editor:
+          layout: main-vertical
+          panes:
+            - #empty
+            - rake preview
+
+To open up the project at a later time you execute the following:
+
+    tmuxinator start name_of_project
+
+If you do not know the name of a specific project, you can list out all projects using the following command:
+
+    tmuxinator list
+
+If you no longer wish to work with a project in the future:
+
+    tmuxinator delete name_of_project
+
+#### SSH
+
+SSH can be used to login to remote servers. SSH is pre-installed on both Linux and Mac OS. But for Windows you can use the alternative which is [open SSH][26] since SSH isn't installed on Windows by default.
+
+##### Logging in to remote server
+
+Once you have SSH installed you can now log in to a remote server by executing the following command:
+
+    ssh username@hostname
+
+Where `username` is the username given to you by your web host, and `hostname` can be a domain name, a public DNS name or an IP address. For [Openshift][27] it's something like:
+
+    xxxxxxxxxxxxxxxxxxxxxxxxxxxx@somesite-username.rhcloud.com
+
+Where `x` is a random string of numbers and letters.
+
+Executing the `ssh` command with the correct username and hostname combination will prompt you to enter your password. Again, the password here is the password given to you by your web host.
+
+##### SSH Keys
+
+You can also make use of SSH keys to authenticate yourself to a remote server. This will allow you to login without entering your password.
+
+To set up an SSH key, first navigate to the `.ssh` directory:
+
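+    cd ~/.ssh
+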
+If you don't already have one, create it by executing the following command:
+
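+    mkdir ~/.ssh && cd ~/.ssh
+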
+Once you're done with that, check if you already have a private and public key pair in the `~/.ssh` directory:
+
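+    ls ~/.ssh
+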
+It would look something like `id_rsa` and `id_rsa.pub`. If you don't already have those two files, generate them by executing:
+
+    ssh-keygen -t rsa -C "your_email@provider.com"
+
+This generates the `id_rsa` and `id_rsa.pub` files using your email address as the label. You can also use other information as the label.
+
+Next copy the public key (`id_rsa.pub`) to the remote server using secure copy (`scp`). Note the trailing colon in the command below; it tells `scp` that the destination is your home directory on the remote machine:
+
+    scp -p id_rsa.pub username@hostname:
+
+Now open up a new terminal window and log in to the remote server.
+
+Check if the `id_rsa.pub` has indeed been copied by using the following command:
+
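+    ls ~
+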
+If it returns "No such file or directory", return to the other terminal window (your local machine) and execute the `scp` command again.
+
+Once that's done, the next step is to append the contents of the `id_rsa.pub` file to the `authorized_keys` file inside the `~/.ssh` directory on the server (appending with `>>` rather than overwriting with `>` keeps any keys that are already there):
+
+    cat id_rsa.pub >> ~/.ssh/authorized_keys
+
+Next, update the `/etc/ssh/sshd_config` file using either `vi` or `nano` (editing it usually requires root privileges, so you may need to prefix the command with `sudo`):
+
+    vi /etc/ssh/sshd_config
+
+Uncomment the line that says `# AuthorizedKeysFile`; to uncomment it, all you have to do is remove the `#` symbol at the start of the line. Vi is basically like Vim, so the keystrokes are pretty much the same: place the cursor on the `#` symbol and press `x` to delete it, then press the `esc` key to go back to command mode and type `:wq` to save and quit the file:
+
+    AuthorizedKeysFile %h/.ssh/authorized_keys
+
+Just make sure the path it's pointing to matches the file we updated earlier. The `%h` token expands to the home directory of the user logging in, so it's effectively the same as saying `~/.ssh/authorized_keys`.
+
+Once all of that is done you can test it out by logging in once again. You may be asked for your password one more time right after the change, but subsequent logins should no longer prompt for it. If the key still isn't picked up, you may need to restart the SSH service on the server for the configuration change to take effect.
+
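+Depending on the distribution running on the server, restarting SSH is usually something like one of the following (the service may be named `ssh` or `sshd` depending on the distro):
+
+    sudo service ssh restart
+    sudo systemctl restart sshd
+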
+### Conclusion
+
+That's it! The command line is a must-use tool for every developer. In this blog post we've covered the essentials of using the command line along with some tools that can help you become more productive with it. There are a lot more command line tools that I haven't covered in this blog post; I believe those tools deserve a blog post of their own, so I'll be covering each of them in a future part of this series. For now I recommend that you check out the resources below for more command-line ninja skills.
+
+### Resources
+
+[1]: http://en.wikipedia.org/wiki/Command-line_interface
+[2]: http://www.cygwin.com/
+[3]: http://unxutils.sourceforge.net/
+[4]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/apropos.png
+[5]: http://stedolan.github.io/jq/download/
+[6]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/json-string.png
+[7]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/jq.png
+[8]: http://stedolan.github.io/jq/manual/
+[9]: http://www.vim.org/download.php
+[10]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/ls.png
+[11]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/siege.png
+[12]: http://en.wikipedia.org/wiki/Coding_conventions
+[13]: http://gnuwin32.sourceforge.net/packages/sed.htm
+[14]: http://www.grymoire.com/Unix/Sed.html
+[15]: http://sed.sourceforge.net/local/docs/An_introduction_to_sed.html
+[16]: http://rubyinstaller.org/
+[17]: http://www.awesomecommandlineapps.com/gems.html
+[18]: http://cygwin.org/
+[19]: http://sourceforge.net/mailarchive/message.php?msg_id=30850840
+[20]: http://sourceforge.net/projects/console/files/
+[21]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/tmux.png
+[22]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/tmux-new-window.png
+[23]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/tmux-rename-window.png
+[24]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/tmux-panes.png
+[25]: http://rubygems.org/gems/tmuxinator
+[26]: http://sshwindows.sourceforge.net/
+[27]: https://www.openshift.com/
diff --git a/cmd/ref/bash image tools for web designers - brettterpstra.com.txt b/cmd/ref/bash image tools for web designers - brettterpstra.com.txt
new file mode 100644
index 0000000..6eeff84
--- /dev/null
+++ b/cmd/ref/bash image tools for web designers - brettterpstra.com.txt
@@ -0,0 +1,53 @@
+---
+title: Bash image tools for web designers
+date: 2015-04-21T18:41:00Z
+source: http://brettterpstra.com/2013/07/24/bash-image-tools-for-web-designers/
+tags: #lhp, #cmdline
+
+---
+
+![][1]
+
+Here are a couple of my Bash functions for people who work with images in CSS or HTML. Nothing elaborate, just things that supplement my typical workflow.
+
+### Image dimensions
+
+First, a shell function for quickly getting the pixel dimensions of an image without leaving the shell. This trick can be incorporated into more complex functions in editors or other shell scripts. For example, when I add an image to my blog, a similar trick automatically includes the dimensions in the Jekyll (Liquid) image tag I use.
+
+Add this to your `.bash_profile` to be able to run `imgsize /path/to/image.jpg` and get back `600 x 343`.
+
+ # Quickly get image dimensions from the command line
+ function imgsize() {
+ local width height
+ if [[ -f $1 ]]; then
+ height=$(sips -g pixelHeight "$1"|tail -n 1|awk '{print $2}')
+ width=$(sips -g pixelWidth "$1"|tail -n 1|awk '{print $2}')
+ echo "${width} x ${height}"
+ else
+ echo "File not found"
+ fi
+ }
+
+You can, of course, take the $height and $width variables it creates and modify the output any way you like. You could output a full image tag using `<img src="$1" width="$width" height="$height">`, too.
+
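+Building on that idea, here's a minimal sketch of a hypothetical `imgtag` helper: it's just `imgsize` with a different `echo` line, so the same caveats apply:
+
+    # Print a full <img> tag with the width and height filled in
+    function imgtag() {
+        local width height
+        if [[ -f $1 ]]; then
+            height=$(sips -g pixelHeight "$1"|tail -n 1|awk '{print $2}')
+            width=$(sips -g pixelWidth "$1"|tail -n 1|awk '{print $2}')
+            echo "<img src=\"$1\" width=\"${width}\" height=\"${height}\" alt=\"\">"
+        else
+            echo "File not found"
+        fi
+    }
+
+Running `imgtag images/hero.jpg` (a made-up path) would print something like `<img src="images/hero.jpg" width="600" height="343" alt="">`.
+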
+### Base 64 encoding
+
+I often use data-uri encoding to embed images in my CSS file, both for speed and convenience when distributing. The following function will take an image file as the only argument and place a full CSS background property with a Base64-encoded image in your clipboard, ready for pasting into a CSS file.
+
+ # encode a given image file as base64 and output css background property to clipboard
+ function 64enc() {
+ openssl base64 -in $1 | awk -v ext="${1#*.}" '{ str1=str1 $0 }END{ print "background:url(data:image/"ext";base64,"str1");" }'|pbcopy
+ echo "$1 encoded to clipboard"
+ }
+
+You can also do the same with fonts. I use this to embed a woff file. With a little alteration you can make versions for other formats, but usually when I'm embedding fonts it's because the stylesheet is being used in a particular context with a predictable browser.
+
+    function 64font() {
+        openssl base64 -in $1 | awk -v ext="${1#*.}" '{ str1=str1 $0 }END{ print "src:url(\"data:font/"ext";base64,"str1"\") format(\"woff\");" }'|pbcopy
+        echo "$1 encoded as font and copied to clipboard"
+    }
+
+Ryan Irelan has produced a series of [shell trick videos][2] based on BrettTerpstra.com posts. Readers can get 10% off using the coupon code `TERPSTRA`.
+
+[1]: http://cdn3.brettterpstra.com/images/grey.gif
+[2]: https://mijingo.com/products/screencasts/os-x-shell-tricks/ "Shell Tricks video series"
diff --git a/cmd/ref/invaluable command line tools for web developers.txt b/cmd/ref/invaluable command line tools for web developers.txt
new file mode 100644
index 0000000..c3c2b85
--- /dev/null
+++ b/cmd/ref/invaluable command line tools for web developers.txt
@@ -0,0 +1,89 @@
+---
+title: Invaluable command line tools for web developers
+date: 2014-12-30T18:57:50Z
+source: http://www.coderholic.com/invaluable-command-line-tools-for-web-developers/
+tags: #cmdline
+
+---
+
+Life as a web developer can be hard when things start going wrong. The problem could be in any number of places. Is there a problem with the request you're sending, is the problem with the response, is there a problem with a request in a third party library you're using, is an external API failing? There are lots of different tools that can make our lives a little bit easier. Here are some command line tools that I've found to be invaluable.
+
+**Curl** Curl is a network transfer tool that's very similar to wget, the main difference being that by default wget saves to a file, and curl outputs to the command line. This makes it really simple to see the contents of a website. Here, for example, we can get our current IP from the [ipinfo.io][1] website:
+
+ $ curl ipinfo.io/ip
+ 93.96.141.93
+
+Curl's `-i` (show headers) and `-I` (show only headers) option make it a great tool for debugging HTTP responses and finding out exactly what a server is sending to you:
+
+ $ curl -I news.ycombinator.com
+ HTTP/1.1 200 OK
+ Content-Type: text/html; charset=utf-8
+ Cache-Control: private
+ Connection: close
+
+The `-L` option is handy, and makes Curl automatically follow redirects. Curl has support for HTTP Basic Auth, cookies, manually setting headers, and much much more.
+
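+As a quick illustration (the URL and credentials here are just placeholders), a few of those options combined might look like this:
+
+    $ curl -i -L -u user:password -b "session=abc123" -H "Accept: application/json" http://example.com/api/status
+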
+**Siege**
+
+Siege is an HTTP benchmarking tool. In addition to the load testing features it has a handy `-g` option that is very similar to `curl -iL` except it also shows you the request headers. Here's an example with www.google.com (I've removed some headers for brevity):
+
+ $ siege -g www.google.com
+ GET / HTTP/1.1
+ Host: www.google.com
+ User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.70)
+ Connection: close
+
+ HTTP/1.1 302 Found
+ Location: http://www.google.co.uk/
+ Content-Type: text/html; charset=UTF-8
+ Server: gws
+ Content-Length: 221
+ Connection: close
+
+ GET / HTTP/1.1
+ Host: www.google.co.uk
+ User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.70)
+ Connection: close
+
+ HTTP/1.1 200 OK
+ Content-Type: text/html; charset=ISO-8859-1
+ X-XSS-Protection: 1; mode=block
+ Connection: close
+
+What siege is really great at is server load testing. Just like `ab` (apache benchmark tool) you can send a number of concurrent requests to a site, and see how it handles the traffic. With the following command we test google with 20 concurrent connections for 30 seconds, and then get a nice report at the end:
+
+ $ siege -c20 www.google.co.uk -b -t30s
+ ...
+ Lifting the server siege... done.
+ Transactions: 1400 hits
+ Availability: 100.00 %
+ Elapsed time: 29.22 secs
+ Data transferred: 13.32 MB
+ Response time: 0.41 secs
+ Transaction rate: 47.91 trans/sec
+ Throughput: 0.46 MB/sec
+ Concurrency: 19.53
+ Successful transactions: 1400
+ Failed transactions: 0
+ Longest transaction: 4.08
+ Shortest transaction: 0.08
+
+One of the most useful features of siege is that it can take a url file as input, and hit those urls rather than just a single page. This is great for load testing, because you can replay real traffic against your site and see how it performs, rather than just hitting the same URL again and again. Here's how you would use siege to replay your apache logs against another server to load test it with:
+
+ $ cut -d ' ' -f7 /var/log/apache2/access.log > urls.txt
+    $ siege -c<concurrency rate> -b -f urls.txt
+
+**Ngrep**
+
+For serious network packet analysis there's [Wireshark][2], with its thousands of settings, filters and different configuration options. There's also a command line version, tshark. For simple tasks I find wireshark can be overkill, so unless I need something more powerful, ngrep is my tool of choice. It allows you to do with network packets what grep does with files.
+
+For web traffic you almost always want the `-W byline` option, which preserves linebreaks, and `-q` is a useful argument which suppresses some additional output about non-matching packets. Here's an example that captures all packets that contain GET or POST:
+
+ ngrep -q -W byline "^(GET|POST) .*"
+
+You can also pass in additional packet filter options, such as limiting the matched packets to a certain host, IP or port. Here we filter all traffic going to or coming from google.com, port 80, and that contains the term "search".
+
+ ngrep -q -W byline "search" host www.google.com and port 80
+
+[1]: http://ipinfo.io
+[2]: http://www.wireshark.org/
diff --git a/cmd/ref/life on the command line.txt b/cmd/ref/life on the command line.txt
new file mode 100644
index 0000000..005ee19
--- /dev/null
+++ b/cmd/ref/life on the command line.txt
@@ -0,0 +1,81 @@
+---
+title: Life on the Command Line
+date: 2014-12-30T18:58:10Z
+source: http://stephenramsay.us/2011/04/09/life-on-the-command-line/
+tags: #cmdline
+
+---
+
+A few weeks ago, I realized that I no longer use graphical applications.
+
+That's right. I don't do anything with gui apps anymore, except surf the Web. And what's interesting about that is that I rarely use cloudy, ajaxy replacements for desktop applications. Just about everything I do, I do exclusively on the command line. And I do what everyone else does: manage email, write things, listen to music, manage my todo list, keep track of my schedule, and chat with people. I also do a few things that most people don't, including writing software, analyzing data, and keeping track of students and their grades. But whatever the case, I do all of it on the lowly command line. I literally go for months without opening a single graphical desktop application. In fact, I don't — strictly speaking — have a desktop on my computer.
+
+I think this is a wonderful way to work. I won't say that _everything_ can be done on the command line, but most things can, and in general, I find the cli to be faster, easier to understand, easier to integrate, more scalable, more portable, more sustainable, more consistent, and many, many times more flexible than even the most well-thought-out graphical apps.
+
+I realize that's a bold series of claims. I also realize that such matters are always open to the charge that it's "just me" and the way I work, think, and view the world. That might be true, but I've seldom heard a usability expert end a discourse on human factors by acknowledging that graphical systems are only really the "best" solution for a certain group of people or a particular set of tasks. Most take the graphical desktop as ground truth — it's just the way we do things.
+
+I also don't do this out of some perverse hipster desire for retro-computing. I have work to do. If my system didn't work, I'd abandon it tomorrow. In a way, the cli reminds me of a bike courier's bicycle. Some might think there's something "hardcore" and cool about a bike that has one gear, no logos, and looks like it flew with the Luftwaffe, but the bike is not that way for style. It's that way because the bells and whistles (i.e. "features") that make a bike attractive in the store get in the way when you have to use it for work. I find it interesting that after bike couriers started paring down their rides years ago, we soon after witnessed a revival of the fixed-gear, fat-tire, coaster-brake bike for adults. It's tempting to say that that was a good thing because "people didn't need" bikes inspired by lightweight racing bikes for what they wanted to do. But I think you could go further and say that lightweight racing bikes were getting in the way. Ironically, they were slowing people down.
+
+I've spent plenty of time with graphical systems. I'm just barely old enough to remember computers without graphical desktops, and like most people, I spent years taking it for granted that for a computer to be usable, it had to have windows, and icons, and wallpapers, and toolbars, and dancing paper clips, and whatever else. Over the course of the last ten years, all of that has fallen away. Now, when I try to go back, I feel as if I'm swimming through a syrupy sea of eye candy in which all the fish speak in incommensurable metaphors.
+
+I should say right away that I am talking about Linux/Unix. I don't know that I could have made the change successfully on a different platform. It's undoubtedly the case that what makes the cli work is very much about the way Unix works. So perhaps this is a plea not for the cli so much as for the cli as it has been imagined by Unix and its descendants. So be it.
+
+I'd like this to be the first of a series of short essays about my system. Essentially, I'd like to run through the things I (and most people) do, and show what it's like to run your life on the command line.
+
+First up . . .
+
+**Email**
+
+I think most email programs really suck. And that's a problem, because most people spend insane amounts of time in their email programs. Why, for starters, do they:
+
+_Take so long to load_
+
+Unless you keep the app open all the time (I'm assuming you do that because you have the focus of a guided missile), this is a program that you open and close several times a day. So why, oh why, does it take so much time to load?
+
+What? It's only a few seconds? Brothers and sisters, this is a _computer._ It should open _instantaneously._ You should be able to flit in and out of it with no delay at all. Boom, it's here. Boom, it's gone. Not, "Switch to the workplace that has the Web browser running, open a new tab, go to gmail, and watch a company with more programming power than any other organization on planet earth give you a…progress bar." And we won't even discuss Apple Mail, Outlook, or (people . . .) Lotus Notes.
+
+_Integrate so poorly with the rest of the system?_
+
+We want to organize our email messages, and most apps do a passable job of that with folders and whatnot. But they suck when it comes to organizing the content of email messages within the larger organizational scheme of your system.
+
+Some email messages contain things that other people want you to do. Some email messages have pictures that someone sent you from their vacation. Some email messages contain relevant information for performing some task. Some email messages have documents that need to be placed in particular project folders. Some messages are read-it-later.
+
+Nearly every email app tries to help you with this, but they do so in an extremely inconsistent and inflexible manner. Gmail gives you "Tasks," but it's a threadbare parody of the kind of todo lists most people actually need. Apple mail tries to integrate things with their Calendar app, but now you're tied to that calendar. So people sign up for Evernote, or Remember the Milk, or they buy OmniFocus (maybe all three). Or they go add a bump to the forum for company X in the hope that they'll write whatever glue is necessary to connect _x_ email program with _y_ task list manager.
+
+I think that you should be able to use _any_ app with _any_ other app in the context of _any_ organizational system. Go to any LifeHacker-style board and you'll see the same conversation over and over: "I tried OmniOrgMe, but it just seemed too complicated. I love EternalTask, but it isn't integrated with FragMail . . ." The idea that the "cloud" solves this is probably one of the bigger cons in modern computing.
+
+Problem 1 is immediately solved when you switch to a console-based email program. Pick any one of them. Type pine or mutt (for example), and your mail is before your eyes in the time it takes a graphical user to move their mouse to the envelope icon. Type q, and it's gone.
+
+Such programs tend to integrate well with the general command-line ecosystem, but I will admit that I didn't have problem 2 completely cracked until I switched to an email program that is now over twenty years old: [nmh][1].
+
+I've [written elsewhere][2] about nmh, so allow me to excerpt (a slightly modified) version of that:
+
+> The "n" in nmh stands for "new," but there's really nothing new about the program at all. In fact, it was originally developed at the rand Corporation decades ago.
+
+> We're talking old school. Type "inc" and it sends a numbered list of email subject lines to the screen, and returns you to the prompt. Type "show" and it will display the first message (in _any_ editor you like). You could then refile the message (with "refile") to another mailbox, or archive it, or forward it, and so on. There are thirty-nine separate commands in the nmh toolset, with names like "scan," "show," "mark," "sort," and "repl." On a day-to-day basis, you use maybe three or four.
+>
+> I've been using it for over a year. It is — hands down — the best email program I have ever used.
+>
+> Why? Because the dead simple things you need to do with mail are dead simple. Because there is no mail client in the world that is as fast. Because it never takes over your life (every time you do something, you're immediately back at the command prompt ready to do something else). Because everything — from the mailboxes to the mail itself — is just an ordinary plain text file ready to be munged. But most of all, because you can combine the nmh commands with ordinary unix commands to create things that would be difficult if not impossible to do with the gui clients.
+>
+> I now have a dozen little scripts that do nifty things with mail. I have scripts that archive old mail based on highly idiosyncratic aspects of my email usage. I have scripts that perform dynamic search queries based on analysis of past subject lines. I have scripts that mail todo list items and logs based on cron strings. I have scripts that save attachments to various places based on what's in my build files. None of these things are "features" of nmh. They're just little scripts that I hacked together with grep, sed, awk, and the shell. And every time I write one, I feel like a genius. The whole system just delights me. I want everything in my life to work like this program.
+
+Okay, I know what you're thinking: "Scripting. Isn't that, like, _programming?_ I don't want/know how to do that." This objection is going to keep re-appearing, so let me say something about it right away.
+
+The programming we're talking about for this kind of thing is very simple — so simple, that the skills necessary to carry it off could easily be part of the ordinary skillset of anyone who uses a computer on a regular basis. An entire industry has risen up around the notion that no user should ever do anything that looks remotely like giving coded instructions to a machine. I think that's another big con, and some day, I'll prove it to you by writing a tutorial that will turn you into a fearsome shell hacker. You'll be stunned at how easy it is.
+
+For now, I just want to make the point that once you move to the command line, everything is trivially connected to everything else, and so you are mostly freed from being locked in to any particular type of tool. You can use a todo list program that makes Omnifocus look like Notepad. You can use one that makes Gmail Tasks look like the U.N. Charter. Once we're in text land, the output of any program can in principle become the input to any other, and that changes everything.
+
+In the next installment, I'll demonstrate.
+
+[Greetings ProfHacker fans. Yes, this post is a little rantish; the conversation continues in a more sober, expansive vein with [The Mythical Man-Finger][3] and [The Man-Finger Aftermath][4]. Thanks to one and all for the many comments, which have deepened my thinking on all of this considerably.]
+
+[1]: http://www.nongnu.org/nmh/
+[2]: http://ra.tapor.ualberta.ca/~dayofdh2010/stephenramsay/2010/03/14/hello-world/
+[3]: /2011/07/25/the-mythical-man-finger/
+[4]: /2011/08/05/the-man-finger-aftermath/
+
+ [*rand]: Research ANd Development
+ [*gui]: Graphical User Interface
+ [*ajax]: Asynchronous JavaScript and XML
+ [*cli]: Command-Line Interface \ No newline at end of file
diff --git a/cmd/ref/oliver | an introduction to unix.txt b/cmd/ref/oliver | an introduction to unix.txt
new file mode 100644
index 0000000..302c268
--- /dev/null
+++ b/cmd/ref/oliver | an introduction to unix.txt
@@ -0,0 +1,1913 @@
+---
+title: Oliver | An Introduction to Unix
+date: 2015-03-13T14:08:55Z
+source: http://www.oliverelliott.org/article/computing/tut_unix/#100UsefulUnixCommands
+tags: #cmdline
+
+---
+
+**Everybody Knows How to Use a Computer, but Not Everyone Knows How to Use the Command Line. Yet This is the Gateway to Doing Anything and Everything Sophisticated with a Computer and the Most Natural Starting Place to Learn Programming**
+_by Oliver; Jan. 13, 2014_
+                     
+
+
+## Introduction
+
+I took programming in high school, but I never took to it. This, I strongly believe, is because it wasn't taught right—and teaching it right means starting at the beginning, with unix. The reason for this is three-fold: _(1)_ it gives you a deeper sense of how a high-level computer works (which a glossy front, like Windows, conceals); _(2)_ it's the most natural port of entry into all other programming languages; and _(3)_ it's super-useful in its own right. If you don't know unix and start programming, some things will forever remain hazy and mysterious, even if you can't put your finger on exactly what they are. If you already know a lot about computers, the point is moot; if you don't, then by all means start your programming education by learning unix!
+
+A word about terminology here: I'm in the habit of horrendously confusing and misusing all of the precisely defined words "[Unix][1]", "[Linux][2]", "[The Command Line][3]", "[The Terminal][4]", "[Shell Scripting][5]", and "[Bash][6]." Properly speaking, _unix_ is an operating system while _linux_ refers to a closely-related family of unix-based operating systems, which includes commercial and non-commercial distributions [1]. (Unix was not free under its developer, AT&T, which caused the unix-linux schism.) The _command line_, as [Wikipedia][3] says, is:
+
+> ... a means of interacting with a computer program where the user issues commands to the program in the form of successive lines of text (command lines) ... The interface is usually implemented with a command line shell, which is a program that accepts commands as text input and converts commands to appropriate operating system functions.
+
+So what I mean when I proselytize for "unix" is simply that you learn how to punch commands in on the command line. The _terminal_ is your portal into this world. Here's what mine looks like:
+
+![image][7]
+
+
+There is a suite of commands to become familiar with—[The GNU Core Utilities][8] ([wiki entry][9])—and, in the course of learning them, you learn about computers. Unix is a foundational piece of a programming education.
+
+In terms of bang for the buck, it's also an excellent investment. You can gain powerful abilities by learning just a little. My coworker was fresh out of his introductory CS course, when he was given a small task by our boss. He wrote a full-fledged program, reading input streams and doing heavy parsing, and then sent an email to the boss that began, _"After 1.5 days of madly absorbing perl syntax, I completed the exercise..."_ He didn't know how to use the command-line at the time, and now [a print-out of that email hangs on his wall][10] as a joke—and as a monument to the power of the terminal.
+
+You can find ringing endorsements for learning the command line from all corners of the internet. For instance, in the excellent course [Startup Engineering (Stanford/Coursera)][11] Balaji Srinivasan writes:
+
+> A command line interface (CLI) is a way to control your computer by typing in commands rather than clicking on buttons in a graphical user interface (GUI). Most computer users are only doing basic things like clicking on links, watching movies, and playing video games, and GUIs are fine for such purposes.
+>
+> But to do industrial strength programming - to analyze large datasets, ship a webapp, or build a software startup - you will need an intimate familiarity with the CLI. Not only can many daily tasks be done more quickly at the command line, many others can only be done at the command line, especially in non-Windows environments. You can understand this from an information transmission perspective: while a standard keyboard has 50+ keys that can be hit very precisely in quick succession, achieving the same speed in a GUI is impossible as it would require rapidly moving a mouse cursor over a profusion of 50 buttons. It is for this reason that expert computer users prefer command-line and keyboard-driven interfaces.
+
+To provide foreshadowing, here are some things you can do in unix:
+
+* make or rename 100 folders or files _en masse_
+* find all files of a given extension or any file that was created within the last week
+* log onto a computer remotely and access its files with ssh
+* copy files to your computer directly over the network (no external hard drive necessary!) with [rsync][12]
+* run a [Perl][13] or [Python][14] script
+* run one of the many programs that are only available on the command line
+* see all processes running on your computer or the space occupied by your folders
+* see or change the permissions on a file
+* parse a text file in any way imaginable (count lines, swap columns, replace words, etc.)
+* soundly encrypt your files or communications with [gpg2][15]
+* run your own web server on the [Amazon cloud][16] with [nginx][17]
+What do all of these have in common? All are hard to do in the [GUI][18], but easy to do on the command line. It would be remiss not to mention that unix is not for everything. In fact, it's not for a lot of things. Knowing which language to use for what is usually a matter of common sense. However, when you start messing about with computers in any serious capacity, you'll bump into unix very quickly—and that's why it's our starting point.
+
+Is this the truth, the whole truth, and nothing but the truth, [so help my white ass][19]? I believe it is, but also my trajectory through the world of computing began with unix. So perhaps I instinctively want to push this on other people: do it the way I did it. And, because it occupies a bunch of my neuronal real estate at the moment, I could be considered brainwashed :-)
+
+* * *
+
+
+[1] Still confused about unix vs linux? [Refer to the full family tree][20] and these more precise definitions from Wikipedia:
+**_unix_**: _a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, developed in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others_
+**_linux_**: _a Unix-like and mostly POSIX-compliant computer operating system assembled under the model of free and open-source software development and distribution [whose] defining component ... is the Linux kernel, an operating system kernel first released [in] 1991 by Linus Torvalds_ ↑
+
+## 100 Useful Unix Commands
+
+This article is an introduction to unix. It aims to teach the basic principles and neglects to mention many of the utilities that give unix superpowers. To learn about those, see [100 Useful Unix Commands][21].
+
+## Getting Started: Opening the Terminal
+
+If you have a Mac, navigate to Applications > Utilities and open the application named "Terminal":
+
+![image][22]
+
+If you have a PC, _abandon all hope, ye who enter here_! Just kidding—partially. None of the native Windows shells, such as [cmd.exe][23] or [PowerShell][24], are unix-like. Instead, they're marked with hideous deformities that betray their ignoble origin as grandchildren of the MS-DOS command interpreter. If you didn't have a compelling reason until now to quit using PCs, here you are [1]. Typically, my misguided PC friends don't use the command line on their local machines; instead, they have to `ssh` into some remote server running Linux. (You can do this with an ssh client like [PuTTY][25], [Chrome's Terminal Emulator][26], or [MobaXterm][27], but don't ask me how.) On Macintosh you can start practicing on the command line right away without having to install a Linux distribution [2] (the Mac-flavored unix is called [Darwin][28]).
+
+For both Mac and PC users who want a bona fide Linux command line, one easy way to get it is in the cloud with [Amazon EC2][16] via the [AWS Free Tier][29]. If you want to go whole hog, you can download and install a Linux distribution—[Ubuntu][30], [Mint][31], [Fedora][32], and [CentOs][33] are popular choices—but this is asking a lot of non-hardcore-nerds (less drastically, you could boot Linux off of a USB drive or run it in a virtual box).
+
+* * *
+
+
+[1] I should admit, you can and should get around this by downloading something like [Cygwin][34], whose homepage states: _"Get that Linux feeling - on Windows"_ ↑
+[2] However, if you're using Mac OS rather than Linux, note that OS does not come with the [GNU coreutils][9], which are the gold standard. [You should download them][35] ↑
+
+## The Definitive Guides to Unix, Bash, and the Coreutils
+
+Before going any further, it's only fair to plug the authoritative guides which, unsurprisingly, can be found right on your command line:
+
+ $ man bash
+ $ info coreutils
+
+(The $ at the beginning of the line represents the terminal's prompt.) These are good references, but too overwhelming to serve as a starting point. There are also great resources online, although these guides, too, are exponentially more useful once you have a small foundation to build on.
+
+## The Unix Filestructure
+
+All the files and _directories_ (a fancy word for "folder") on your computer are stored in a hierarchical tree. Picture a tree in your backyard upside-down, so the trunk is on the top. If you proceed downward, you get to big branches, which then give way to smaller branches, and so on. The trunk contains everything in the sense that everything is connected to it. This is the way it looks on the computer, too, and the trunk is called the _root directory_. In unix it's represented with a slash:
+
+/
+
+The root contains directories, which contain other directories, and so on, just like our tree. To get to any particular file or directory, we need to specify the _path_, which is a slash-delimited address:
+
+/dir1/dir2/dir3/some_file
+
+Note that a full path always starts with the root, because the root contains everything. As we'll see below, this won't necessarily be the case if we specify the address in a _relative_ way, with respect to our current location in the filesystem.
+
+Let's examine the directory structure on our Macintosh. We'll go to the root directory and look down just one level with the unix command tree. (If we tried to look at the whole thing, we'd print out every file and directory on our computer!) We have:
+
+![image][36]
+
+
+While we're at it, let's also look at the directory /Users/username, which is the specially designated [_home directory][37]_ on the Macintosh:
+
+![image][38]
+
+
+One thing we notice right away is that the Desktop, which holds such a revered spot in the GUI, is just another directory—simply the first one we see when we turn on our computer.
+
+If you're on Linux rather than Mac OS, the directory tree might look less like the screenshot above and more like this:
+
+![image][39]
+
+
+The naming of these folders is not intuitive, but you can read about the role of each one [here][40]. I've arbitrarily traced out the path to /var/log, a location where some programs store their log files.
+
+If the syntax of a unix path looks familiar, it is. A webpage's [URL][41], with its telltale forward slashes, looks like a unix path with a domain prepended to it. This is not a coincidence! For a simple static website, its structure on the web is determined by its underlying directory structure on the server, so navigating to:
+
+http://www.example.com/abc/xyz
+
+will serve you content in the folder _websitepath/abc/xyz_ on the host's computer (i.e., the one owned by _example.com_). Modern dynamic websites are more sophisticated than this, but it's neat to reflect that the whole world has learned this unix syntax without knowing it.
+
+To learn more, see the [O'Reilly discussion of the unix file structure][42].
+
+## The Great Trailing Slash Debate
+
+Sometimes you'll see directories written with a trailing slash, as in:
+
+dir1/
+
+This helpfully reminds you that the entity is a directory rather than a file, but on the command line using the more compact _dir1_ is sufficient. There are a handful of unix commands which behave slightly differently if you leave the trailing slash on, but this sort of extreme pedantry isn't worth worrying about.
+
+## Where Are You? - Your _Path_ and How to Navigate through the Filesystem
+
+When you open up the terminal to browse through your filesystem, run a program, or do anything, _you're always somewhere_. Where? You start out in the designated _home directory_ when you open up the terminal. The home directory's path is preset by a global variable called HOME. Again, it's /Users/username on a Mac.
+
+As we navigate through the filesystem, there are some conventions. The _current working directory (cwd)_—whatever directory we happen to be in at the moment—is specified by a dot:
+
+.
+
+Sometimes it's convenient to write this as:
+
+./
+
+which is not to be confused with the root directory:
+
+/
+
+When a program is run in the cwd, you often see the syntax:
+
+ $ ./myprogram
+
+which emphasizes that you're executing a program from the current directory. The directory one above the cwd is specified by two dots:
+
+..
+
+With the trailing slash syntax, that's:
+
+../
+
+A tilde is shorthand for the home directory:
+
+~
+
+or:
+
+~/
+
+To see where we are, we can _print working directory_:
+
+ $ pwd
+
+To move around, we can _change directory_:
+
+ $ cd /some/path
+
+By convention, if we leave off the argument and just type:
+
+ $ cd
+
+we will go home. To _make directory_—i.e., create a new folder—we use:
+
+ $ mkdir
+
+As an example, suppose we're in our home directory, /Users/username, and want to get one back to /Users. We can do this two ways:
+
+ $ cd /Users
+
+or:
+
+ $ cd ..
+
+This illustrates the difference between an _absolute path_ and a _relative path_. In the former case, we specify the complete address, while in the latter we give the address with respect to our _cwd_. We could even accomplish this with:
+
+ $ cd /Users/username/..
+
+or maniacally seesawing back and forth:
+
+ $ cd /Users/username/../username/..
+
+if our primary goal were obfuscation. This distinction between the two ways to specify a path may seem pedantic, but it's not. Many scripting errors are caused by programs expecting an absolute path and receiving a relative one instead or vice versa. Use relative paths if you can because they're more portable: if the whole directory structure gets moved, they'll still work.
+
+Let's mess around. We know cd with no arguments takes us home, so try the following experiment:
+
+ $ echo $HOME # print the variable HOME
+ /Users/username
+ $ cd # cd is equivalent to cd $HOME
+ $ pwd # print working directory shows us where we are
+ /Users/username
+
+ $ unset HOME # unset HOME erases its value
+ $ echo $HOME
+
+ $ cd /some/path # cd into /some/path
+ $ cd # take us HOME?
+ $ pwd
+ /some/path
+
+What happened? We stayed in /some/path rather than returning to /Users/username. The point? There's nothing magical about _home_—it's merely set by the variable HOME. More about variables soon!
+
+## Gently Wading In - The Top 10 Indispensable Unix Commands
+
+Now that we've dipped one toe into the water, let's make a list of the 10 most important unix commands in the universe:
+
+1. pwd
+2. ls
+3. cd
+4. mkdir
+5. echo
+6. cat
+7. cp
+8. mv
+9. rm
+10. man
+Every command has a help or _manual_ page, which can be summoned by typing man. To see more information about pwd, for example, we enter:
+
+ $ man pwd
+
+But pwd isn't particularly interesting and its man page is barely worth reading. A better example is afforded by one of the most fundamental commands of all, ls, which lists the contents of the _cwd_ or of whatever directories we give it as arguments:
+
+ $ man ls
+
+The man pages tend to give TMI (too much information) but the most important point is that commands have _flags_ which usually come in a _one-dash-one-letter_ or _two-dashes-one-word_ flavor:
+
+ command -f
+ command --flag
+
+and the docs will tell us what each option does. You can even try:
+
+ $ man man
+
+Below we'll discuss the commands in the top 10 list in more depth.
+
+## ls
+
+Let's go HOME and try out [ls][43] with various flags:
+
+ $ cd
+ $ ls
+ $ ls -1
+ $ ls -hl
+ $ ls -al
+
+Some screen shots:
+
+![image][44]
+
+
+
+![image][45]
+
+
+First, vanilla ls. We see our files—no surprises. And ls -1 merely displays our files in a column. To show the human-readable, long form we stack the -h and -l flags:
+
+ls -hl
+
+This is equivalent to:
+
+ls -h -l
+
+Screenshot:
+
+![image][46]
+
+
+This lists the owner of the file; the group to which he belongs (_staff_); the date the file was created; and the file size in human-readable form, which means bytes will be rounded to kilobytes, gigabytes, etc. The column on the left shows _permissions_. If you'll indulge mild hyperbole, this simple command is already revealing secrets that are well-hidden by the GUI and known only to unix users. In unix there are three spheres of permission—_user_, _group_, and _other/world_—as well as three particular types for each sphere—_read_, _write_, and _execute_. Everyone with an account on the computer is a unique _user_ and, although you may not realize it, can be part of various groups, such as a particular lab within a university or team in a company. To see yourself and what groups you belong to, try:
+
+ $ whoami
+ $ groups
+
+(To see more information about a user, [finger][47] his username.) A string of dashes displays permission:
+
+ ---------
+ rwx------
+ rwxrwx---
+ rwxrwxrwx
+
+This means, respectively: no permission for anybody; read, write, execute permission for only the user; _rwx_ permission for the user and anyone in the group; and _rwx_ permission for the user, group, and everybody else. Permission is especially important in a shared computing environment. You should internalize now that two of the most common errors in computing stem from the two _P_ words we've already learned: _paths and permissions_. The command chmod, which we'll learn later, governs permission.
+
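+To make that concrete, here is a hypothetical line from a long listing (the file name, owner, and size are invented for illustration):
+
+    -rwxr-x---  1 username  staff   1.2K Jan 13 10:00 myscript.sh
+
+Reading the permission string from left to right: the owner can read, write, and execute myscript.sh; members of the group staff can read and execute it; and everyone else has no access at all.
+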
+If you look at the screenshot above, you see a tenth letter prepended to the permission string, e.g.:
+
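+    drwxr-xr-x
+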
+This has nothing to do with permissions and instead tells you about the type of entity in the directory: _d_ stands for directory, _l_ stands for symbolic link, and a plain dash denotes a file.
+
+The -a option in:
+
+ls -al
+
+lists _all_ files in the directory, including [_dotfiles][48]_. These are files that begin with a dot and are hidden in the GUI. They're often system files—more about them later. Screenshot:
+
+![image][49]
+
+
+Note that, in contrast to ls -hl, the file sizes are in pure bytes, which makes them a little hard to read.
+
+A general point about unix commands: they're often robust. For example, with ls you can use an arbitrary number of arguments and it obeys the convention that an asterisk matches anything (this is known as [file _globbing_][50], and I think of it as the prequel to _regular expressions_). Take:
+
+ $ ls . dir1 .. dir2/*.txt dir3/A*.html
+
+This monstrosity would list anything in the _cwd_; anything in directory _dir1_; anything in the directory one above us; anything in directory _dir2_ that ends with _.txt_; and anything in directory _dir3_ that starts with _A_ and ends with _.html_. You get the point.
+
+## Single Line Comments in Unix
+
+Anything prefaced with a # —that's _pound-space_—is a comment and will not be executed:
+
+ $ # This is a comment.
+ $ # If we put the pound sign in front of a command, it won't do anything:
+ $ # ls -hl
+
+Suppose you write a line of code on the command line and decided you don't want to execute it. You have two choices. The first is pressing _Cntrl-c_, which serves as an "abort mission." The second is jumping to the beginning of the line (_Cntrl-a_) and adding the pound character. This has an advantage over the first method that the line will be saved in bash history (discussed below) and can thus be retrieved and modified later.
+
+In a script, pound-special-character (like _#!_) is sometimes interpreted (see below), so take note and include a space after # to be safe.
+
+## The Primacy of Text Files, Text Editors
+
+As we get deeper into unix, we'll frequently be using text editors to edit code, and viewing either data or code in text files. When I got my hands on a computer as a child, I remember text editors seemed like the most boring programs in the world (compared to, say, 1992 [Prince of Persia][51]). And text files were on the bottom of my food chain. But the years have changed me and now I like nothing better than a clean, unformatted _.txt_ file. It's all you need! If you store your data, your code, your correspondence, your book, or almost anything in _.txt_ files with a systematic structure, they can be parsed on the command line to reveal information from many facets. Here's some advice: do all of your text-related work in a good text editor. Open up clunky [Microsoft Word][52], and you've unwittingly spoken a demonic incantation and summoned the beast. Are these the words of a lone lunatic dispensing [hateration][53]? No, because on the command line you can count the words in a text file, search it with [grep][54], input it into a Python program, et cetera. However, a file in Microsoft Word's proprietary and unknown formatting is utterly unusable.
+
+Because text editors are extremely important, some people develop deep relationships with them. My co-worker, who is a [Vim][55] aficionado, turned to me not long ago and said, "You know how you should think about editing in Vim? _As if you're talking to it._" On the terminal, a ubiquitous and simple editor is [nano][56]. If you're more advanced, try [Vim][55] or [Emacs][57]. Not immune to my co-worker's proselytizing, I've converted to Vim. Although it's sprawling and the learning curve can be harsh—Vim is like a programming language in itself—you can do a zillion things with it. I put together a quick and dirty Vim wiki [here][58].
+
+On the GUI, there are many choices: [Sublime][59], [Aquamacs][60], [Smultron][61], etc. I used to use Smultron until I found, unforgivably, that the spacing of documents when you looked at them in the editor and on the terminal was different. I hear good things about Sublime and Aquamacs.
+
+_Exercise_: Let's try making a text file with nano. Type:
+
+ $ nano file.txt
+
+and make the following three-row two-column file: (It's _Cntrl-o_ to save and _Cntrl-x_ to exit.)
+
+## _echo_ and _cat_
+
+More essential commands: echo prints the _string_ passed to it as an argument, while cat prints the _contents_ of files passed to it as arguments. For example:
+
+ $ echo joe
+ $ echo "joe"
+
+would both print _joe_, while:
+
+ $ cat file.txt
+
+would print the contents of _file.txt_. Entering:
+
+ $ cat file.txt file2.txt
+
+would print out the contents of both _file.txt_ and _file2.txt_ concatenated together, which is where this command gets its slightly confusing name.
+
+Finally, a couple of nice flags for these commands:
+
+ $ echo -n "joe" # suppress newline
+    $ echo -e "joe\tjoe\njoe" # interpret special chars ( \t is tab, \n newline )
+ $ cat -n file.txt # print file with line numbers
+
+## _cp_, _mv_, and _rm_
+
+Finishing off our top 10 list we have cp, mv, and rm. The command to make a copy of a file is cp:
+
+ $ cp file1 file2
+ $ cp -R dir1 dir2
+
+The first line would make an identical copy of _file1_ named _file2_, while the second would do the same thing for directories. Notice that for directories we use the -R flag (for _recursive_). The directory and everything inside it are copied.
+
+_Question_: what would the following do?
+
+ $ cp -R dir1 ../../
+
+_Answer_: it would make a copy of _dir1_ up two levels from our current working directory.
+
+To rename a file or directory we use mv:
+
+ $ mv file1 file2
+
+In a sense, this command also moves files, because we can rename a file into a different path. For example:
+
+ $ mv file1 dir1/dir2/file2
+
+would move _file1_ into _dir1/dir2/_ and change its name to _file2_, while:
+
+ $ mv file1 dir1/dir2/
+
+would simply move _file1_ into _dir1/dir2/_ or, if you like, rename _./file1_ as _./dir1/dir2/file1_.
+
+Finally, rm removes a file or directory:
+
+ $ rm file # removes a file
+ $ rm -r dir # removes a file or directory
+ $ rm -rf dir # force removal of a file or directory
+ # (i.e., ignore warnings)
+
+## Variables in Unix
+
+To declare something as a variable use an equals sign, with no spaces. Let's declare _a_ to be a variable:
+
+ $ a=3 # This syntax is right (no whitespace)
+ $ a = 3 # This syntax is wrong (whitespace)
+ -bash: a: command not found
+
+Once we've declared something as a variable, we need to use _$_ to access its value (and to let bash know it's a variable). For example:
+
+ $ a=3
+ $ echo a
+ a
+ $ echo $a
+ 3
+
+So, with no _$_ sign, bash thinks we just want to echo the string _a_. With a _$_ sign, however, it knows we want to access what the variable _a_ is storing, which is the value _3_. Variables in unix are loosely-typed, meaning you don't have to declare something as a string or an integer.
+
+ $ a=3 # a can be an integer
+ $ echo $a
+ 3
+
+ $ a=joe # or a can be a string
+ $ echo $a
+ joe
+
+ $ a="joe joe" # Use quotes if you want a string with spaces
+ $ echo $a
+ joe joe
+
+We can declare and echo two variables at the same time, and generally play fast and loose, as we're used to doing on the command line:
+
+ $ a=3; b=4
+ $ echo $a $b
+ 3 4
+ $ echo $a$b # mesh variables together as you like
+ 34
+ $ echo "$a$b" # use quotes if you like
+ 34
+    $ echo -e "$a\t$b" # the -e flag tells echo to interpret \t as a tab
+ 3 4
+
+You should also be aware of how bash treats double vs single quotes. As we've seen, if you want to use a string with spaces, you use double quotes. If you use double quotes, any variable inside them will be expanded, the same as in Perl. If you use single quotes, everything is taken literally and variables are not expanded. Here's an example:
+
+ $ var=5
+ $ joe=hello $var
+ -bash: 5: command not found
+
+ $ joe="hello $var"
+ $ echo $joe
+ hello 5
+
+ $ joe='hello $var'
+ $ echo $joe
+ hello $var
+
+An important note is that often we use variables to store _paths_ in unix. Once we do this, we can use all of our familiar directory commands on the variable:
+
+ $ d=dir1/dir2/dir3
+ $ ls $d
+ $ cd $d
+
+ $ d=.. # this variable stores the directory one above us (relative path)
+ $ cd $d/.. # cd two directories up
+
+## Escape Sequences
+
+[Escape sequences][62] are important in every language. When bash reads _$a_ it interprets it as whatever's stored in the variable _a_. What if we actually want to echo the string _$a_? To do this, we use the backslash ( \ ) as an escape character:
+
+    $ a=3
+    $ echo $a
+    3
+    $ echo \$a
+    $a
+    $ echo "\$a" # use quotes if you like
+    $a
+
+What if we want to echo the backslash, too? Then we have to escape the escape character (using the escape character!):
+
+    $ echo \\\$a # escape the backslash and the dollar sign
+    \$a
+
+This really comes down to parsing. The backslash helps bash figure out if your text is a plain old string or a variable. It goes without saying that you should avoid special characters in your variable names. In unix we might occasionally fall into a parsing tar-pit trap. To avoid this, and make extra sure bash parses our variable right, we can use the syntax _${a}_ as in:
+
+ $ echo ${a}
+ 3
+
+When could this possibly be an issue? Later, when we discuss scripting, we'll learn that _$n_, where _n_ is a number, is the _n_th argument to our script. If you were crazy enough to write a script with 11 arguments, you'd discover that bash interprets a=$11 as a=$1 (the first argument) concatenated with the string 1 while a=${11} properly represents the eleventh argument. This is getting in the weeds, but FYI.
+
+Here's a more practical example:
+
+ $ a=3
+ $ echo $a # variable a equals 3
+ 3
+ $ echo $apple # variable apple is not set
+
+ $ echo ${a}pple # this describes the variable a plus the string "pple"
+ 3pple
+
+## Global Variables in Unix
+
+In general, it is the convention to use capital letters for global variables. We've already learned about one: HOME. We can see all the variables set in our shell by simply typing:
+
+ $ set
+
+Some basic variables deserve comment:
+
+* HOME
+* PS1
+* TMPDIR
+* EDITOR
+* DISPLAY
+HOME, as we've already seen, is the path to our home directory (preset to /Users/username on Macintosh). PS1 sets the shell's prompt. For example:
+
+ $ PS1=':-) '
+
+changes our prompt from a dollar-sign into an emoticon, as in:
+
+![image][63]
+
+
+On your computer there is a designated temporary directory and its path is stored in TMPDIR. Some commands, such as sort, which we'll learn later, surreptitiously make use of this directory to store intermediate files. At work, we have a shared computer system and occasionally this common directory $TMPDIR will run out of space, causing programs trying to write there to fail. One solution is to simply set TMPDIR to a different path where there's free space. EDITOR sets the default text editor (you can invoke it by pressing _Cntrl-x-e_). And DISPLAY is a variable related to the [X Window System][64].
+
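+For instance (the path is purely hypothetical, and we use export so that child programs such as sort see the new value):
+
+    $ export TMPDIR=/scratch/mytmp
+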
+Many programs rely on their own agreed-upon global variables. For example, if you're a Perl user, you may know that Perl looks for modules in the directory whose path is stored in PERL5LIB. Python looks for its modules in PYTHONPATH; R looks for packages in R_LIBS; Matlab uses MATLABPATH; awk uses AWKPATH; C++ looks for libraries in LD_LIBRARY_PATH; and so on. These variables don't exist in the shell by default. A program will make a system call and look for the variable. If the user has had the need or foresight to define it, the program can make use of it.
+
+## The _PATH_
+
+The most important global variable of all is the PATH. This is _the_ PATH, as distinct from _a_ path, a term we've already learned referring to a location in the filesystem. The PATH is a colon-delimited list of directories where unix will look for executable programs when you enter something on command line. If your program is in one of these directories, you can run it from any location by simply entering its name. If the program is not in one of these directories, you can still run it, of course, but you'll have to include its path.
+
+Let's revisit the idea of a command in unix. What's a command? It's nothing more than a program sitting in a directory somewhere. So, if ls is a program, where is it? Use the command which to see its path:
+
+ $ which ls # on my work computer
+ /bin/ls
+
+ $ which ls # on my home Mac
+ /usr/local/Cellar/coreutils/8.20/libexec/gnubin/ls
+
+For the sake of argument, let's say I download an updated version of the ls command, and then type ls in my terminal. What will happen—will the old ls or the new ls execute? The PATH comes into play here because it also determines priority. When you enter a command, unix will look for it in each directory of the PATH, from first to last, and execute the first instance it finds. For example, if:
+
+PATH=/bin/dir1:/bin/dir2:/bin/dir3
+
+and there's a command named _ls_ in both /bin/dir1 and /bin/dir2, the one in /bin/dir1 will be executed.
+
+Let's see what your PATH looks like. Enter:
+
+ $ echo $PATH
+
+For example, here's a screenshot of the default PATH on Ubuntu:
+
+![image][65]
+
+
+To emphasize the point again, all the programs in the directories specified by your PATH are all the programs that you can access on the command line by simply typing their names.
+
+The PATH is not immutable. You can set it to be anything you want, but in practice you'll want to augment, rather than overwrite, it. By default, it contains directories where unix expects executables, like:
+
+* /bin
+* /usr/bin
+* /usr/local/bin
+Let's say you have just written the command /mydir/newcommand. If you're not going to use the command very often, you can invoke it using its full path every time you need it:
+
+ $ /mydir/newcommand
+
+However, if you're going to be using it frequently, you can just add /mydir to the PATH and then invoke the command by name:
+
+ $ PATH=/mydir:$PATH # add /mydir to the front of PATH - highest priority
+ $ PATH=$PATH:/mydir # add /mydir to the back of PATH - lowest priority
+ $ newcommand # now invoking newcommand is this easy
+
+This is a frequent chore in unix. If you download some new program, you will often find yourself updating the PATH to include the directory containing its binaries. How can we avoid having to do this every time we open the terminal for a new session? We'll discuss this below when we learn about _.bashrc_.
+
+If you want to shoot yourself in the foot, you can vaporize the PATH:
+
+ $ unset PATH # not advisable
+ $ ls # now ls is not found
+ -bash: ls: No such file or directory
+
+but this is not advisable, save as a one-time educational experience.
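+
+If you ever do vaporize it by accident, you can either open a new terminal session or set a bare-bones PATH by hand:
+
+ $ PATH=/bin:/usr/bin:/usr/local/bin   # a minimal PATH to get ls and friends back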
+
+## Links
+
+While we're on the general subject of paths, let's talk about [_symbolic links_][66]. If you've ever used the _Make Alias_ command on a Macintosh (not to be confused with the unix command alias, discussed below), you've already developed intuition for what a link is. Suppose you have a file in one folder and you want that file to exist in another folder simultaneously. You could copy the file, but that would be wasteful. Moreover, if the file changes, you'll have to re-copy it—a huge ball-ache. Links solve this problem. A link to a file is a stand-in for the original file, often used to access the original file from an alternate file path. It's not a copy of the file but, rather, points to the file.
+
+To make a symbolic link, use the command [ln][67]:
+
+ $ ln -s /path/to/target/file mylink
+
+This produces:
+
+ mylink --> /path/to/target/file
+
+in the cwd, as ls -hl will show. Note that removing _mylink_:
+
+ $ rm mylink
+
+does not affect our original file.
+
+If we give the target (or source) path as the sole argument to ln, the name of the link will be the same as the source file's. So:
+
+ $ ln -s /path/to/target/file
+
+produces:
+
+ file --> /path/to/target/file
+
+Links are incredibly useful for all sorts of reasons—the primary one being, as we've already remarked, if you want a file to exist in multiple locations without having to make extraneous, space-consuming copies. You can make links to directories as well as files. Suppose you add a directory to your PATH that has a particular version of a program in it. If you install a newer version, you'll need to change the PATH to include the new directory. However, if you add a link to your PATH and keep the link always pointing to the most up-to-date directory, you won't need to keep fiddling with your PATH. The scenario could look like this:
+
+ $ ls -hl myprogram
+ current -> version3
+ version1
+ version2
+ version3
+
+(where I'm hiding some of the output in the long listing format.) In contrast to our other examples, the link is in the same directory as the target. Its purpose is to tell us which version, among the many crowding a directory, we should use.
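+
+Here's a sketch of how you might set up and later re-point such a link (exact flags can vary a little between GNU and BSD ln):
+
+ $ cd myprogram
+ $ ln -s version3 current    # create the "current" link
+ $ ln -sfn version4 current  # later: force-replace it so it points at the newer version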
+
+Another good practice is putting links in your home directory to folders you often use. This way, navigating to those folders is easy when you log in. If you make the link:
+
+ ~/MYLINK --> /some/long/and/complicated/path/to/an/often/used/directory
+
+then you need only type:
+
+ $ cd MYLINK
+
+rather than:
+
+ $ cd /some/long/and/complicated/path/to/an/often/used/directory
+
+Links are everywhere, so be glad you've made their acquaintance!
+
+## What is Scripting?
+
+By this point, you should be comfortable using basic utilities like echo, cat, mkdir, cd, and ls. Let's enter a series of commands, creating a directory with an empty file inside it, for no particular reason:
+
+ $ mkdir tmp
+ $ cd tmp
+ $ pwd
+ /Users/oliver/tmp
+ $ touch myfile.txt # the command touch creates an empty file
+ $ ls
+ myfile.txt
+ $ ls myfile_2.txt # purposely execute a command we know will fail
+ ls: cannot access myfile_2.txt: No such file or directory
+
+What if we want to repeat the exact same sequence of commands 5 minutes later? _Massive bombshell_—we can save all of these commands in a file! And then run them whenever we like! Try this:
+
+ $ nano myscript.sh
+
+and write the following:
+
+ # a first script
+ mkdir tmp
+ cd tmp
+ pwd
+ touch myfile.txt
+ ls
+ ls myfile_2.txt
+
+Gratuitous screenshot:
+
+![image][68]
+
+
+This file is called a _script_ (_.sh_ is a typical suffix for a shell script), and writing it constitutes our first step into the land of bona fide computer programming. In general usage, a script refers to a small program used to perform a niche task. What we've written is a recipe that says:
+
+* create a directory called "tmp"
+* go into that directory
+* print our current path in the file system
+* make a new file called "myfile.txt"
+* list the contents of the directory we're in
+* specifically list the file "myfile_2.txt" (which doesn't exist)
+
+This script, though silly and useless, teaches us the fundamental fact that all computer programs are ultimately just lists of commands.
+
+Let's run our program! Try:
+
+ $ ./myscript.sh
+ -bash: ./myscript.sh: Permission denied
+
+_WTF!_ It's dysfunctional. What's going on here is that the file permissions are not set properly. In unix, when you create a file, the default permission is _not executable_. You can think of this as a brake that's been engaged and must be released before we can go (and do something potentially dangerous). First, let's look at the file permissions:
+
+ $ ls -hl myscript.sh
+ -rw-r--r-- 1 oliver staff 75 Oct 12 11:43 myscript.sh
+
+Let's change the permissions with the command chmod and execute the script:
+
+ $ chmod u+x myscript.sh # add executable(x) permission for the user(u) only
+ $ ls -hl myscript.sh
+ -rwxr--r-- 1 oliver staff 75 Oct 12 11:43 myscript.sh
+
+ $ ./myscript.sh
+ /Users/oliver/tmp/tmp
+ myfile.txt
+ ls: cannot access myfile_2.txt: No such file or directory
+
+Not bad. Did it work? Yes, it did because it's printed stuff out and we see it's created tmp/myfile.txt:
+
+ $ ls
+ myfile.txt myscript.sh tmp
+ $ ls tmp
+ myfile.txt
+
+An important note is that even though there was a cd in our script, if we type:
+
+ $ pwd
+ /Users/oliver/tmp
+
+we see that we're still in the same directory as we were in when we ran the script. Even though the script entered /Users/oliver/tmp/tmp, and did its bidding, we stay in /Users/oliver/tmp. Scripts always work this way—where they go is independent of where we go.
+
+If you're wondering why anyone would write such a pointless script, you're right—it would be odd if we had occasion to repeat this combination of commands. There are some more realistic examples of scripting below.
+
+## File Suffixes in Unix
+
+As we begin to script it's worth following some file naming conventions. We should use common sense suffixes, like:
+
+* _.txt_ \- for text files
+* _.html_ \- for html files
+* _.sh_ \- for shell scripts
+* _.pl_ \- for Perl scripts
+* _.py_ \- for Python scripts
+* _.cpp_ \- for c++ code
+and so on.
+
+Adhering to this organizational practice will enable us to quickly scan our files, and make searching for particular file types easier [1]. As we saw above, commands like ls and find are particularly well-suited to use this kind of information. For example, list all text files in the cwd:
+
+ $ ls *.txt
+
+List all text files in the cwd and below (i.e., including child directories):
+
+ $ find . -name "*.txt"
+
+
+
+* * *
+
+
+[1] An astute reader noted that, for commands—as opposed to, say, html or text files—using suffixes is not the best practice because it violates the principle of encapsulation. The argument is that a user is neither supposed to know nor care about a program's internal implementation details, which the suffix advertises. You can imagine a program that starts out as a shell script called _mycommand.sh_, is upgraded to Python as _mycommand.py_, and then is rewritten in C for speed, becoming the binary _mycommand_. What if other programs depend on _mycommand_? Then each time _mycommand_'s suffix changes they have to be rewritten—a big problem. Although I make this sloppy mistake in this article, that doesn't excuse you! [Read the full argument][69] ↑
+
+## The Shebang
+
+We've left out one important detail about scripting. How does unix know we want to run a bash script, as opposed to, say, a Perl or Python script? There are two ways to do it. We'll illustrate with two simple scripts, a bash script and a Perl script:
+
+ $ cat myscript_1.sh # a bash script
+ echo "hello kitty"
+
+ $ cat myscript_1.pl # a Perl script
+ print "hello kitty\n";
+
+The first way to tell unix which program to use to interpret the script is simply to say so on the command line. For example, we can use bash to execute bash scripts:
+
+ $ bash ./myscript_1.sh # use bash for bash scripts
+ hello kitty
+
+and perl for Perl scripts:
+
+ $ perl ./myscript_1.pl # use Perl for Perl scripts
+ hello kitty
+
+But running the script directly, without naming an interpreter, won't work for the Perl script (the shell tries to read it as a shell script):
+
+ $ ./myscript_1.pl # this won't work
+ ./myscript_1.pl: line 1: print: command not found
+
+And if we purposefully specify the wrong language, we'll get errors:
+
+ $ bash ./myscript_1.pl # let's purposefully do it backwards
+ ./myscript_1.pl: line 1: print: command not found
+
+ $ perl ./myscript_1.sh
+ String found where operator expected at ./myscript_1.sh line 1,
+ near "echo "hello kitty""
+ (Do you need to predeclare echo?)
+ syntax error at ./myscript_1.sh line 1, near "echo "hello kitty""
+ Execution of ./myscript_1.sh aborted due to compilation errors.
+
+The second way to specify the proper interpreter—and the better way, which you should emulate—is to put it in the script itself using a [_shebang_][70]. To do this, let's remind ourselves where bash and perl reside on our system. On my computer, they're here:
+
+ $ which perl
+ /usr/bin/perl
+
+ $ which bash
+ /bin/bash
+
+although perl could be somewhere else on your machine (bash should be in /bin by convention). The _shebang_ specifies the language in which your script is interpreted according to the syntax #! followed by the path to the language. It should be the first line of your script. Note that it's not a comment even though it looks like one. Let's add shebangs to our two scripts:
+
+ $ cat myscript_1.sh
+ #!/bin/bash
+ echo "hello kitty"
+
+ $ cat myscript_1.pl
+ #!/usr/bin/perl
+ print "hello kitty\n";
+
+Now we can run them without specifying the interpreter in front:
+
+ $ ./myscript_1.sh
+ hello kitty
+ $ ./myscript_1.pl
+ hello kitty
+
+However, there's _still_ a lingering issue and it has to do with [portability][71], an important software principle. What if perl is in a different place on your machine than mine and you copy my scripts and try to run them? The path will be wrong and they won't work. The solution to this issue is courtesy of a neat trick using [env][72]. We can amend our script to be:
+
+ $ cat myscript_1.pl
+ #!/usr/bin/env perl
+ print "hello kitty\n";
+
+Of course, this assumes you have a copy of env in /usr/bin, but this is usually a correct assumption. What env does here is look up perl and run it, i.e., it runs whichever perl comes first in your PATH.
+
+This is a useful practice even if you're not sharing scripts. Suppose you've updated your version of perl and there's a newer copy than /usr/bin/perl. You've appropriately updated your PATH such that the directory containing the updated perl comes before /usr/bin. If you have env in your shebang, you're all set. However, if you've _hardwired_ the old path in your shebang, your script will run on the old perl [1].
+
+The question that the shebang resolves—which program will run your script?—reminds us of a more fundamental distinction between [interpreted languages][73] and [compiled languages][74]. The former are those like bash, Perl, and Python, where you can cat a script and look inside it. The latter, like C++, require [_compilation_][75], the process whereby code is translated into machine language (the result is sometimes called a _binary_). This can be done with a command line utility like [g++][76]:
+
+ $ g++ MyProgram.cpp -o MyProgram
+
+Compiled programs, such as the unix utilities themselves, tend to run faster. Don't try to cat a binary, such as ls, or it will spew out gibberish:
+
+ $ cat $( which ls ) # don't do this!
+
+
+
+* * *
+
+
+[1] Of course, depending on circumstances, you may very well want to stick with the old version of Perl or whatever's running your program. An update can have unforeseen consequences and this is the motivation for tools like [virtualenv][77] (Python), whose docs remind us: "_If an application works, any change in its libraries or the versions of those libraries can break the application_" ↑
+
+## _bash_
+
+We've thrown around the term _bash_ a few times but we haven't defined it. To do so, let's examine the special command, sh, which is more primitive than bash and came before it. To quote Wikipedia and the manual page:
+
+> The Bourne shell (sh) is a shell, or command-line interpreter, for computer operating systems. The shell is a command that reads lines from either a file or the terminal, interprets them, and generally executes other commands. It is the program that is running when a user logs into the system ... Commands can be typed directly to the running shell or can be put into a file and the file can be executed directly by the shell
+
+As it describes, sh is special because it's both a command interpreter and a command itself (usually found at /bin/sh). Put differently, you can run _myscript_ as:
+
+ $ sh ./myscript
+
+or you can simply type:
+
+ $ sh
+
+to start an interactive sh shell. If you're in this shell and run:
+
+ $ ./myscript
+
+without specifying an interpreter or using a shebang, your script will be interpreted by sh by default. On most computers, however, the default shell is no longer sh but bash (usually located at /bin/bash). To mash up Wikipedia and the manual page:
+
+> The **B**ourne-**A**gain **SH**ell (bash) is a Unix shell written by Brian Fox for the GNU Project as a free software replacement for the Bourne shell. bash is an sh-compatible command language interpreter that executes commands read from the standard input or from a file ... There are some subtle differences between bash and traditional versions of sh
+
+Like sh, bash is a command you can either invoke on a script or use to start an interactive bash shell. Read more on Stackoverflow: [Difference between sh and bash][78].
+
+Which shell are you using right now? Almost certainly bash, but if you want to double check, there's a neat command [given here][79] to display your shell type:
+
+ $ ps -p $$
+
+There are more exotic shells, like [Z shell][80] and [tcsh][81], but they're beyond the scope of this article.
+
+## _chmod_
+
+Let's take a closer look at how to use [chmod][82]. Remember the three domains:
+
+* _u_ \- user
+* _g_ \- group
+* _o_ \- other/world
+
+and the three types of permission:
+
+* _r_ \- read
+* _w_ \- write
+* _x_ \- execute
+
+We can mix and match these how we like, using a plus sign to grant permissions according to the syntax:
+
+chmod entity+permissiontype
+
+or a minus sign to remove permissions:
+
+chmod entity-permissiontype
+
+E.g.:
+
+ $ chmod u+x myfile # make executable for you
+ $ chmod g+rxw myfile # add read write execute permissions for the group
+ $ chmod go-wx myfile # remove write execute permissions for the group
+ # and for everyone else (excluding you, the user)
+
+You can also use _a_ for "all of the above", as in:
+
+ $ chmod a-rwx myfile # remove all permissions for you, the group,
+ # and the rest of the world
+
+If you find the above syntax cumbersome, there's a numerical shorthand you can use with chmod. The only two I have memorized are _777_ and _755_:
+
+ $ chmod 777 myfile # grant all permissions (rwxrwxrwx)
+ $ chmod 755 myfile # reserve write access for the user,
+ # but grant all other permissions (rwxr-xr-x)
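+
+The digits are easy to reconstruct if you remember that each one is the sum of r=4, w=2, and x=1, for the user, group, and world respectively. For example:
+
+ $ chmod 700 myfile   # 7 = 4+2+1 = rwx for you; nothing for anyone else (rwx------)
+ $ chmod 644 myfile   # 6 = 4+2 = rw for you; 4 = r for the group and the world (rw-r--r--)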
+
+Read more about the numeric code [here][83]. In general, it's a good practice to allow your files to be writable by you alone, unless you have a compelling reason to share access to them.
+
+## _ssh_
+
+In addition to chmod, there's another command it would be remiss not to mention. For many people, the first time they need to go to the command line, rather than the GUI, is to use the [Secure Shell (ssh)][84] protocol. Suppose you want to use a computer, but it's not the computer that's in front of you. It's a different computer in some other location—say, at your university, your company, or on the [Amazon cloud][16]. [ssh][85] is the command that allows you to log into a computer remotely over the network. Once you've sshed into a computer, you're in its shell and can run commands on it just as if it were your personal laptop. To ssh, you need to know your user name, the address of the host computer you want to log into, and the password [1]. The basic syntax is:
+
+ssh username@host
+
+For example:
+
+ $ ssh username@myhost.university.edu
+
+If you're trying to ssh into a private computer and don't know the hostname, use its IP address (_username@IP-address_).
+
+ssh also allows you to run a command on the remote server without logging in. For instance, to list the contents of your remote computer's home directory, you could run:
+
+ $ ssh username@myhost.university.edu "ls -hl"
+
+Cool, eh? Moreover, if you have ssh access to a machine, you can copy files to or from it with the utility [rsync][12]—a great way to move data without an external hard drive.
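+
+For example, here's a sketch of backing up a local folder to the remote machine (the paths are hypothetical; -a preserves file attributes and -v is verbose):
+
+ $ rsync -av mydata/ username@myhost.university.edu:/remote/backup/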
+
+The file:
+
+~/.ssh/config
+
+determines ssh's behavior and you can create it if it doesn't exist (the dot in the name _.ssh_ confers invisibility—[see the discussion about dotfiles below][86]). On your own private computer, you can ssh into selected servers without having to type in a password by updating this configuration file. To do this, generate [rsa][87] ssh [keys][88]:
+
+ $ mkdir -p ~/.ssh
+ $ cd ~/.ssh
+ $ ssh-keygen -t rsa -f localkey
+
+This will create two files on your computer, a public key:
+
+~/.ssh/localkey.pub
+
+and a private key:
+
+~/.ssh/localkey
+
+You can share your public key, but _do not give anyone your private key!_ Suppose you want to ssh into _myserver.com_. Normally, that's:
+
+ $ ssh myusername@myserver.com
+
+Instead of doing this, add these lines to your _~/.ssh/config_ file:
+
+ Host Myserver
+ HostName myserver.com
+ User myusername
+ IdentityFile ~/.ssh/localkey
+
+Next, cat your public key and paste it into:
+
+~/.ssh/authorized_keys
+
+on the remote machine (i.e., the _myserver.com_ computer). Now on your local computer, you can ssh into _myserver.com_ without a password:
+
+ $ ssh Myserver
+
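+If you'd rather not paste the key by hand, one way to append it from your local machine is a one-liner like this (a sketch; it assumes you can still ssh in with a password and that the usual remote paths apply):
+
+ $ cat ~/.ssh/localkey.pub | ssh myusername@myserver.com "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
+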
+You can also use this technique to push to [github.com][89] [2], without having to punch your password in each time, by pasting your public key into:
+
+Settings > SSH Keys > Add SSH Key
+
+on GitHub (read the [official tutorial][90]).
+
+If this is your first encounter with ssh, you'd be surprised how much of the work of the world is done by ssh. It's worth reading the extensive man page, which gets into matters of computer security and cryptography.
+
+* * *
+
+
+[1] The host also has to enable ssh access. On Macintosh, for example, it's disabled by default, but you can turn it on, as instructed [here][91] ↑
+[2] As you get deeper into the game, tracking your scripts and keeping a single, stable version of them becomes crucial. [Git][92], a vast subject for [another tutorial][93], is the neat solution to this problem and the industry standard for version control. On the web [GitHub][94] provides free hosting of script repositories and connects to the command line via the git interface ↑
+
+## Saving to a File; Stdout and Stderr
+
+To save to a file in unix, use an angle bracket:
+
+ $ echo joe > junk.txt # save to file
+ $ cat junk.txt
+ joe
+
+To append to the end of a pre-existing file, use a double angle bracket:
+
+ $ echo joe >> junk.txt # append to already-existing file
+ $ cat junk.txt
+ joe
+ joe
+
+Returning to our first script, [_myscript.sh_][95], let's save the output to a file:
+
+ $ ./myscript.sh > out.txt
+ mkdir: cannot create directory 'tmp': File exists
+ ls: cannot access myfile_2.txt: No such file or directory
+
+ $ cat out.txt
+ /Users/oliver/tmp/tmp
+ myfile.txt
+
+This is interesting: _out.txt_ has its output. However, not everything went into _out.txt_, because some error messages were echoed to the console. What's going on here is that there are actually two [output streams][96]: _stdout_ (standard out) and _stderr_ (standard error). Look at the following figure from Wikipedia:
+
+![image][97]
+
+(Image credit: [Wikipedia: Standard streams][96])
+
+Proper output goes into stdout while errors go into stderr. The syntax for saving stderr in unix is 2> as in:
+
+ $ # save the output into out.txt and the error into err.txt
+ $ ./myscript.sh > out.txt 2> err.txt
+ $ cat out.txt
+ /Users/oliver/tmp/tmp
+ myfile.txt
+ $ cat err.txt
+ mkdir: cannot create directory 'tmp': File exists
+ ls: cannot access myfile_2.txt: No such file or directory
+
+When you think about it, the fact that output and error are separated is supremely useful. At work, sometimes we parallelize heavily and run 1000 instances of a script. For each instance, the error and output are saved separately. The 758th job, for example, might look like this:
+
+./myjob --instance 758 > out758.o 2> out758.e
+
+(I'm in the habit of using the suffixes _.o_ for output and _.e_ for error.) With this technique we can quickly scan through all 1000 _.e_ files and check if their size is 0. If it is, we know there was no error; if not, we can re-run the failed jobs. Some programs are in the habit of echoing run statistics or other information to stderr. This is an unfortunate practice because it muddies the water and, as in the example above, would make it hard to tell if there was an actual error.
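+
+Here's a sketch of that scan, using a loop and the file test operator -s (both discussed below), which is true for files of non-zero size:
+
+ $ for f in *.e; do if [ -s $f ]; then echo "$f: job had errors"; fi; done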
+
+Output vs error is a distinction that many programming languages make. For example, in C++ writing to stdout and stderr is like this:
+
+ cout << "some output" << endl;
+ cerr << "some error" << endl;
+
+In Perl it's:
+
+ print STDOUT "some output\n";
+ print STDERR "some error\n";
+
+In Python it's:
+
+ import sys
+ sys.stdout.write("some output\n")
+ sys.stderr.write("some error\n")
+
+and so on.
+
+## More on Stdout and Stderr; Redirection
+
+For the sake of completeness, we should note that you can redirect standard error to standard output and vice versa. Let's make sure we get the syntax of all things pertaining to stdout and stderr right:
+
+ 1> # save stdout to a file (plain old > also works)
+ 2> # save stderr to a file
+
+as in:
+
+ $ ./myscript.sh 1> out.o 2> out.e
+ $ ./myscript.sh > out.o 2> out.e # these two lines are identical
+
+What if we want to choose where things will be printed from _within_ our script? Then we can use the following syntax:
+
+ &1 # standard out stream
+ &2 # standard error stream
+
+Let's examine five possible versions of our _Hello Kitty_ script:
+
+ #!/bin/bash
+ # version 1
+ echo "hello kitty"
+
+ #!/bin/bash
+ # version 2
+ echo "hello kitty" > somefile.txt
+
+ #!/bin/bash
+ # version 3
+ echo "hello kitty" >&1
+
+ #!/bin/bash
+ # version 4
+ echo "hello kitty" >&2
+
+ #!/bin/bash
+ # version 5
+ echo "hello kitty" > 1
+
+Here's how they work:
+
+* _version 1_ \- echo "hello kitty" to stdout
+* _version 2_ \- echo "hello kitty" to the file somefile.txt
+* _version 3_ \- same as version 1
+* _version 4_ \- echo "hello kitty" to stderr
+* _version 5_ \- echo "hello kitty" to _the file named 1_
+
+This illustrates the point of the ampersand syntax: it distinguishes between the output streams and files named _1_ or _2_. Let's try running script version 4 as a sanity check to make sure these scripts are working as expected:
+
+ $ # output saved to file but error printed to console
+ $ ./hellokitty.sh > junk.txt
+ hello kitty
+
+_hello kitty_ indeed went to stderr, because it's echoed to the console rather than saved into _junk.txt_.
+
+This syntax makes it easy to see how we could, e.g., redirect the standard error to standard output:
+
+ $ ./somescript.sh 2>&1 # redirect stderr to stdout
+
+I rarely have occasion to do this and, although it's not something you need in your introductory unix toolkit, it's good to know.
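+
+A common use of this redirection is dumping both streams into a single log file. Note that the order matters: redirect stdout to the file first, then point stderr at stdout:
+
+ $ ./myscript.sh > all.txt 2>&1   # both stdout and stderr end up in all.txt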
+
+## Conditional Logic
+
+Conditional Logic is a universal feature of programming languages. The basic idea is, _if_ this condition, _then_ do something. It can be made more complex: _if_ this condition, _then_ do something; _else if_ that condition, _then_ do another thing; _else_ (if any other condition), _then_ do yet another thing. Let's see how to implement this in bash:
+
+ $ a=joe
+ $ if [ $a == "joe" ]; then echo hello; fi
+ hello
+
+or:
+
+ $ a=joe
+ $ if [ $a == "joe" ]; then echo hello; echo hello; echo hello; fi
+ hello
+ hello
+ hello
+
+The structure is:
+
+if [ _condition_ ]; then ... ; fi
+
+Everything between the words then and fi (_if_ backwards in case you didn't notice) will execute if the condition is satisfied. In other languages, this block is often defined by curly brackets: _{ }_. For example, in a Perl script, the same code would be:
+
+ #!/usr/bin/env perl
+
+ my $a="joe";
+
+ if ( $a eq "joe" )
+ {
+ print "hello\n";
+ print "hello\n";
+ print "hello\n";
+ }
+
+In bash, _if_ is if, _else_ is else, and _else if_ is elif. In a script it would look like this:
+
+ #!/bin/bash
+
+ a=joe
+
+ if [ $a == "joe" ]; then
+ echo hello;
+ elif [ $a == "doe" ]; then
+ echo goodbye;
+ else
+ echo "ni hao";
+ fi
+
+You can also use a case statement to implement conditional logic. See an example of that [here][98].
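+
+For reference, here's a quick sketch of the same hello/goodbye logic written as a case statement:
+
+ #!/bin/bash
+
+ a=joe
+
+ case $a in
+     joe) echo hello ;;
+     doe) echo goodbye ;;
+     *) echo "ni hao" ;;
+ esac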
+
+Although I said in the intro that unix is the best place to start your computer science education, I have to admit that the syntax for _if-then_ logic is somewhat unwieldy—even unfriendly. Bash is a bad teaching language for conditional logic, [arrays][99], [hashes][100], etc. But that's only because its element is not heavy-duty programming with lots of functions, numerical operations, sophisticated data structures, and logic. Its mastery is over the quick and dirty, manipulating files and directories, and doing system stuff. I still maintain it's the proper starting point because of its wonderful tools, and because knowing its fundamentals is a great asset. Every language has its place in the programming ecosystem. Back in College, I stumbled on a physics book called [_The Tiger and the Shark: Empirical Roots of Wave-Particle Dualism_][101] by Bruce Wheaton. The book had a great epigraph:
+
+> It is like a struggle between a tiger and a shark,
+> each is supreme in his own element,
+> but helpless in that of the other.
+>
+> _J.J. Thomson, 1925_
+
+In our context, this would read: bash is supreme on the command line, but not inside of a script.
+
+## File Test Operators; Return or Exit Status
+
+_File Test Operators_ and _exit status_ are two completely different topics, but since they both go well with if statements, I'll discuss them here. File Test Operators are things you can stick in an if statement to give you information about a file. Two common problems are _(1)_ checking if your file exists and _(2)_ checking if it's of non-zero size. Let's create two files, one empty and one not:
+
+ $ touch emptyfile # create an empty file
+ $ echo joe > nonemptyfile # create a non-empty file
+
+The operator _-e_ tests for existence and _-s_ tests for non-zero-ness:
+
+ $ file=emptyfile
+ $ if [ -e $file ]; then echo "exists"; if [ -s $file ]; then echo "non-0"; fi; fi
+ exists
+
+ $ file=nonemptyfile
+ $ if [ -e $file ]; then echo "exists"; if [ -s $file ]; then echo "non-0"; fi; fi
+ exists
+ non-0
+
+Read The Linux Documentation Project's discussion of file test operators [here][102].
+
+Changing the subject altogether, you may be familiar with the idea of a return value in computer science. Functions can return a value upon completion. In unix, commands also have a return value or _exit code_, queryable with:
+
+$?
+
+This is usually employed to tell the user whether or not the command successfully executed. By convention, successful execution returns 0. For example:
+
+ $ echo joe
+ joe
+ $ echo $? # query exit code of previous command
+ 0
+
+Let's see how the exit code can be useful. We'll make a script, _test_exitcode.sh_, such that:
+
+ $ cat test_exitcode.sh
+ #!/bin/bash
+ sleep 10
+
+This script just pauses for 10 seconds. First, we'll let it run and then we'll interrupt it using _Cntrl-c_:
+
+ $ ./test_exitcode.sh; # let it run
+ $ echo $?
+ 0
+
+ $ ./test_exitcode.sh; # interrupt it
+ ^C
+ $ echo $?
+ 130
+
+The non-zero exit code tells us that it's failed. Now we'll try the same thing with an if statement:
+
+ $ ./test_exitcode.sh
+ $ if [ $? == 0 ]; then echo "program succeeded"; else echo "program failed"; fi
+ program succeeded
+
+ $ ./test_exitcode.sh;
+ ^C
+ $ if [ $? == 0 ]; then echo "program succeeded"; else echo "program failed"; fi
+ program failed
+
+In research, you might run hundreds of command-line programs in parallel. For each instance, there are two key questions: _(1)_ Did it finish? _(2)_ Did it run without error? Checking the exit status is the way to address the second point. Always check the documentation of the program you're running for information about its exit codes, since some programs use different conventions. Read The Linux Documentation Project's discussion of exit status [here][103].
+
+_Question_: What's going on here?
+
+ $ if echo joe; then echo joe; fi
+ joe
+ joe
+
+This is yet another example of bash allowing you to stretch syntax like silly putty. In this code snippet,
+
+echo joe
+
+is run, and its successful execution passes a _true_ return code to the if statement. So, the two _joe_s we see echoed to the console are from the statement to be evaluated and the statement inside the conditional. We can also invert this formula, doing something if our command fails:
+
+ $ outputdir=nonexistentdir # set output dir equal to a nonexistent dir
+ $ if ! cd $outputdir; then echo "couldnt cd into output dir"; fi
+ -bash: cd: nonexistentdir: No such file or directory
+ couldnt cd into output dir
+
+ $ mkdir existentdir # make a test directory
+ $ outputdir=existentdir
+ $ if ! cd $outputdir; then echo "couldnt cd into output dir"; fi
+ $ # no error - now we're in the directory existentdir
+
+Did you follow that? (! means logical NOT in unix.) The idea is, we try to cd but, if it's unsuccessful, we echo an error message. This is a particularly useful line to include in a script. If the user gives an output directory as an argument and the directory doesn't exist, we exit. If it does exist, we cd into it and it's business as usual:
+
+if ! cd $outputdir; then echo "[error] couldn't cd into output dir"; exit; fi
+
+Without this line, the script will run in whatever directory it's in if cd fails. Once in lab, I was running a script that didn't have this kind of protection. The output directory wasn't found and the script started making and deleting files in the wrong directory. It was powerfully uncool!
+
+We can implement similar constructions using the && and || operators rather than an if statement. Let's see how this works by making some test files:
+
+ $ touch file{1..4}
+ $ ls
+ file1 file2 file3 file4
+
+The && operator will chug through a chain of commands and keep on going until one of the commands fails, as in:
+
+ $ ( ls file1 ) && ( ls file2 ) && ( ls file3 ) && ( ls file4 )
+ file1
+ file2
+ file3
+ file4
+
+ $ ( ls file1 ) && ( ls file2 ) && ( ls fileX ) && ( ls file4 )
+ file1
+ file2
+ ls: cannot access fileX: No such file or directory
+
+In contrast, the || operator will proceed through the command chain and _stop_ after the first successful one, as in:
+
+ $ ( ls file1 ) || ( ls file2 ) || ( ls file3 ) || ( ls file4 )
+ file1
+
+ $ ( ls fileX ) || ( ls fileY ) || ( ls fileZ ) || ( ls file4 )
+ ls: cannot access fileX: No such file or directory
+ ls: cannot access fileY: No such file or directory
+ ls: cannot access fileZ: No such file or directory
+ file4
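+
+As an aside, the cd-protection line from the previous section can be written more tersely with ||, since the right-hand side runs only when the left-hand side fails. A sketch:
+
+cd $outputdir || { echo "[error] couldn't cd into output dir"; exit; }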
+
+## Basic Loops
+
+In programming, loops are a way of performing operations iteratively. Loops come in different flavors, but the _for loop_ and _while loop_ are the most basic. In bash, we can implement a for loop like this:
+
+ $ for i in 1 2 3; do echo $i; done
+ 1
+ 2
+ 3
+
+The structure is:
+
+for _variable_ in _list_; do ... ; done
+
+Put anything you like in the list:
+
+ $ for i in 1 2 hello; do echo $i; done
+ 1
+ 2
+ hello
+
+Many other languages wouldn't let you get away with combining data types in the iterations of a loop, but this is a recurrent bash theme: it's fast; it's loose; it's malleable.
+
+To count from 1 to 10, try:
+
+ $ for i in {1..10}; do echo -n "$i "; done; echo
+ 1 2 3 4 5 6 7 8 9 10
+
+But if we can just write:
+
+ $ echo {1..10}
+
+why do we need a loop here? Loops really come into their own in bash when—no surprise!—we're dealing with files, paths, and commands. For example, to loop through all of the text files in the cwd, use:
+
+ $ for i in *.txt; do echo $i; done
+
+Although this is nearly the same as:
+
+ $ ls *.txt
+
+the former construction has the advantage that we can stuff as much code as we like in the block between do and done. Let's make a random directory structure like so:
+
+ $ mkdir -p myfolder{1..3}/{X,Y}
+
+We can populate it with token files (fodder for our example) via a loop:
+
+ $ j=0; for i in myfolder*/*; do echo "*** "$i" ***"; touch ${i}/a_${j}.txt ${i}/b_${j}.txt; ((j++)); done
+
+In bash, ((j++)) is a way of incrementing j. We echo $i to get some visual feedback as the loop iterates. Now our directory structure looks like this:
+
+![image][104]
+
+
+To practice loops, suppose we want to find any file that begins with _b_ in any subfolder and make a symbolic link to it from the cwd:
+
+ $ for i in myfolder*/*/b*; do echo "*** "$i" ***"; ln -s $i; done
+
+As we learned above, a link is not a copy of a file but, rather, a kind of pointer that allows us to access a file from a path other than the one where it actually resides. Our loop yields the links:
+
+ b_0.txt -> myfolder1/X/b_0.txt
+ b_1.txt -> myfolder1/Y/b_1.txt
+ b_2.txt -> myfolder2/X/b_2.txt
+ b_3.txt -> myfolder2/Y/b_3.txt
+ b_4.txt -> myfolder3/X/b_4.txt
+ b_5.txt -> myfolder3/Y/b_5.txt
+
+allowing us to access the _b_ files from the cwd.
+
+I can't overstate all the heroic things you can do with loops in bash. Suppose we want to change the extension of any text file that begins with _a_ and resides in an _X_ subfolder from _.txt_ to _.html_:
+
+ $ for i in myfolder*/X/a*.txt; do echo "*** "$i" ***"; j=$( echo $i | sed 's|.txt|.html|' ); echo $j; mv $i $j; echo; done
+
+But I've jumped the gun! This example features three things we haven't learned yet: command substitution, piping, and sed. You should revisit it after reading those sections, but the idea is that the variable _j_ stores a path that looks like our file's but has the extension replaced. And you see that a knowledge of loops is like a stick of dynamite you can use to blow through large numbers of files.
+
+Here's another contrived example with these yet-to-be-discussed techniques:
+
+ $ for i in $( echo $PATH | tr ":" " " ); do echo "*** "$i" ***"; ls $i | head; echo; done | less
+
+Can you guess what this does? It shows the first ten commands in each folder in our PATH—not something you'd likely need to do, but a demonstration of the fluidity of these constructions.
+
+If we want to run a command or script in parallel, we can do that with loops, too. [gzip][105] is a utility to compress files, thereby saving hard drive space. To compress all text files in the cwd, in parallel, do:
+
+ $ for i in *.txt; do { echo $i; gzip $i & }; done
+
+But I've gotten ahead of myself again. We'll leave the discussion of this example to the section on processes.
+
+The structure of a while loop is:
+
+while _condition_; do ... ; done
+
+I use while loops much less than for loops, but here's an example:
+
+ $ x=1; while ((x <= 3)); do echo $x; ((x++)); done
+ 1
+ 2
+ 3
+
+The while loop can also take input from a file. Suppose there's a file _junk.txt_ such that:
+
+ $ cat junk.txt
+ 1
+ 2
+ 3
+
+You can iterate over this file as such:
+
+ $ while read x; do echo $x; done < junk.txt
+ 1
+ 2
+ 3
+
+## Arguments to a Script
+
+Now that we've covered basic [control flow][106], let's return to the subject of scripting. An important question is, how can we pass arguments to our script? Let's make a script called _hellokitty.sh_:
+
+ #!/bin/bash
+
+ echo hello
+
+Try running it:
+
+ $ chmod 755 hellokitty.sh
+ $ ./hellokitty.sh
+ hello
+
+We can change it to the following:
+
+ #!/bin/bash
+
+ echo hello $1
+
+Now:
+
+ $ ./hellokitty.sh kitty
+ hello kitty
+
+In bash $1 represents the first argument to the script, $2 the second, and so on. If our script is:
+
+ #!/bin/bash
+
+ echo $0
+ echo hello $1 $4
+
+Then:
+
+ $ ./hellokitty.sh my sweet kitty cat
+ ./hellokitty.sh
+ hello my cat
+
+In most programming languages, arguments passed in on the command line are stored as an array. Bash stores the _n_th element of this array in the variable $_n_. $0 is special and refers to the name of the script itself.
+
+For casual scripts this suits us well. However, as you go on to write more involved programs with many options, it becomes impractical to rely on the position of an argument to determine its function in your script. The proper way to do this is using _flags_ that can be deployed in arbitrary order, as in:
+
+command --flag1 1 --flag2 1 --flag3 5
+
+or, in short form:
+
+command -f1 1 -f2 1 -f3 5
+
+You can do this with the command [getopts][107], but it's sometimes easier just to write your own options parser. Here's a sample script called [_test_args_][108]. Although a case statement would be a good way to handle numerous conditions, I'll use an if statement:
+
+ #!/bin/bash
+
+ helpmessage="This script showcases how to read arguments"
+
+ ### get arguments
+ # while input array size greater than zero
+ while (($# > 0)); do
+ if [ "$1" == "-h" -o "$1" == "-help" -o "$1" == "--help" ]; then
+ shift;
+ echo "$helpmessage"
+ exit;
+ elif [ "$1" == "-f1" -o "$1" == "--flag1" ]; then
+ # store what's passed via flag1 in var1
+ shift; var1=$1; shift
+ elif [ "$1" == "-f2" -o "$1" == "--flag2" ]; then
+ shift; var2=$1; shift
+ elif [ "$1" == "-f3" -o "$1" == "--flag3" ]; then
+ shift; var3=$1; shift
+ # if unknown argument, just shift
+ else
+ shift
+ fi
+ done
+
+ ### main
+ # echo variable if not empty
+ if [ ! -z $var1 ]; then echo "flag1 passed "$var1; fi
+ if [ ! -z $var2 ]; then echo "flag2 passed "$var2; fi
+ if [ ! -z $var3 ]; then echo "flag3 passed "$var3; fi
+
+This has some things we haven't seen yet:
+
+* $# is the size of our input argument array
+* shift pops an element off of our array (the same as in Perl)
+* exit exits the script
+* -o is logical OR in unix
+* -z checks if a variable is empty
+
+The code loops through the argument array and keeps popping off elements until the array size is zero, whereupon it exits the loop. For example, one might run this script as:
+
+ $ ./test_args --flag1 x -f2 y --flag3 zzz
+ flag1 passed x
+ flag2 passed y
+ flag3 passed zzz
+
+To spell out how this works, the first argument is _\--flag1_. Since this matches one of our checks, we shift. This pops this element out of our array, so the first element, $1, becomes _x_. This is stored in the variable _var1_, then there's another shift and $1 becomes _-f2_, which matches another condition, and so on.
+
+The flags can come in any order:
+
+ $ ./test_args --flag3 x --flag1 zzz
+ flag1 passed zzz
+ flag3 passed x
+
+ $ ./test_args --flag2 asdf
+ flag2 passed asdf
+
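+
+For comparison, here's a minimal sketch of the same idea using the getopts builtin mentioned above. It handles short flags only, and the flag names here are made up; the value of each flag lands in the variable OPTARG:
+
+ #!/bin/bash
+
+ # parse -a and -b, each of which takes a value; -h prints usage
+ while getopts "a:b:h" opt; do
+     case $opt in
+         a) var1=$OPTARG ;;
+         b) var2=$OPTARG ;;
+         h) echo "usage: $0 [-a STRING] [-b STRING]"; exit ;;
+     esac
+ done
+
+ if [ ! -z $var1 ]; then echo "flag a passed "$var1; fi
+ if [ ! -z $var2 ]; then echo "flag b passed "$var2; fi
+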
+We're brushing up against the outer limits of bash here. My prejudice is that you usually shouldn't go this far with bash, because its limitations will come sharply into focus if you try to do too-involved scripting. Instead, use a more friendly language. In Perl, for example, the array containing inputs is @ARGV; in Python, it's sys.argv. Let's compare these common scripting languages:
+
+| **Bash** | **Perl** | **Python** | **Description** |
+| ----- | ----- | ----- | ----- |
+| $0 | $0 | sys.argv[0] | Name of Script Itself |
+| $* | | | String Containing All Input Arguments |
+| ("$@") | @ARGV | sys.argv | Array or List Containing All Input Arguments [1] |
+| $1 | $ARGV[0] | sys.argv[1] | First Argument |
+| $2 | $ARGV[1] | sys.argv[2] | Second Argument |
+
+Perl has a [Getopt][109] package that is convenient for reading arguments, and Python has an even better one called [argparse][110]. Their functionality is infinitely nicer than bash's, so steer clear of bash if you're going for a script with lots of options.
+
+* * *
+
+
+[1] The distinction between $* and $@ is knotty. Dive into these subtleties [on Stackoverflow][111] ↑
+
+## Multi-Line Comments, Multi-Line Strings in Bash
+
+Let's continue in the realm of scripting. You can do a multi-line comment in bash with an if statement:
+
+ # multi-line comment
+ if false; then
+ echo hello
+ echo hello
+ echo hello
+ fi
+
+(Yes, this is a bit of a hack!)
+
+Multi-line strings are handy for many things. For example, if you want a help section for your script, you can do it like this:
+
+ cat <<_EOF_
+
+ Usage:
+
+ $0 --flag1 STRING [--flag2 STRING] [--flag3 STRING]
+
+ Required Arguments:
+
+ --flag1 STRING This argument does this
+
+ Options:
+
+ --flag2 STRING This argument does that
+ --flag3 STRING This argument does another thing
+
+ _EOF_
+
+How does this syntax work? Everything between the _EOF_ tags comprises the string and is printed. This is called a [_Here Document_][112]. Read The Linux Documentation Project's discussion of Here Documents [here][113].
+
+## Source and Export
+
+_Question_: If we create some variables in a script and exit, what happens to those variables? Do they disappear? The answer is, yes, they do. Let's make a script called _test_src.sh_ such that:
+
+ $ cat ./test_src.sh
+ #!/bin/bash
+
+ myvariable=54
+ echo $myvariable
+
+If we run it and then check what happened to the variable on our command line, we get:
+
+ $ ./test_src.sh
+ 54
+ $ echo $myvariable
+
+The variable is undefined. The command [source][114] is for solving this problem. If we want the variable to persist, we run:
+
+ $ source ./test_src.sh
+ 54
+ $ echo $myvariable
+ 54
+
+and—voilà!—our variable exists in the shell. An equivalent syntax for sourcing uses a dot:
+
+ $ . ./test_src.sh # this is the same as "source ./test_src.sh"
+ 54
+
+But now observe the following. We'll make a new script, _test_src_2.sh_, such that:
+
+ $ cat ./test_src_2.sh
+ #!/bin/bash
+
+ echo $myvariable
+
+This script is also looking for _$myvariable_. Running it, we get:
+
+ $ ./test_src_2.sh
+
+Nothing! So _$myvariable_ is defined in the shell but, if we run another script, its existence is unknown. Let's amend our original script to add in an export:
+
+ $ cat ./test_src.sh
+ #!/bin/bash
+
+ export myvariable=54 # export this variable
+ echo $myvariable
+
+Now what happens?
+
+ $ ./test_src.sh
+ 54
+ $ ./test_src_2.sh
+
+Still nothing! Why? Because we didn't source _test_src.sh_. Trying again:
+
+ $ source ./test_src.sh
+ 54
+ $ ./test_src_2.sh
+ 54
+
+So, at last, we see how to do this. If we want access on the shell to a variable which is defined inside a script, we must source that script. If we want _other_ scripts to have access to that variable, we must source plus export.
+
+## Dotfiles (_.bashrc_ and _.bash_profile_)
+
+Dotfiles are simply files that begin with a dot. We can make a test one as follows:
+
+ $ touch .test
+
+Such a file will be invisible in the GUI and you won't see it with vanilla ls either. (This works the same way for directories.) The only way to see it is to use the list _all_ option:
+
+ls -al
+
+or to list it explicitly by name. This is useful for files that you generally want to keep hidden from the user or discourage tinkering with.
+
+Many programs, such as bash, [Vim][55], and [Git][92], are highly configurable. Each uses dotfiles to let the user add functionality, change options, switch key bindings, etc. For example, here are some of the dotfiles each program employs:
+
+* bash - _.bashrc_
+* vim - _.vimrc_
+* git - _.gitconfig_
+
+The most famous dotfile in my circle is _.bashrc_, which resides in HOME and configures your bash. Actually, let me retract that: let's say _.bash_profile_ instead of _.bashrc_ (read about the difference [here][115]). In any case, the idea is that this dotfile gets executed as soon as you open up the terminal and start a new session. It is therefore ideal for setting your PATH and other variables, adding functions ([like this one][116]), creating _aliases_ (discussed below), and doing any other setup-related chore. For example, suppose you download a new program into /some/path/to/prog and you want to add it to your PATH. Then in your _.bash_profile_ you'd add:
+
+ export PATH=/some/path/to/prog:$PATH
+
+Recalling how export works, this will allow any programs we run on the command line to have access to our amended PATH. Note that we're adding this to the front of our PATH (so, if the program exists in our PATH already, the existing copy will be superseded). Here's an example snippet of my setup file:
+
+ PATH=/apps/python/2.7.6/bin:$PATH # use this version of Python
+ PATH=/apps/R/3.1.2/bin:$PATH # use this version of R
+ PATH=/apps/gcc/4.6.0/bin/:$PATH # use this version of gcc
+ export PATH
+
+There is much ado about _.bashrc_ (read _.bash_profile_) and it inspired one of the greatest unix blog-post titles of all time: [_Pimp my .bashrc_][117]—although this blogger is only playing with his prompt, as it were. As you go on in unix and add things to your _.bash_profile_, it will evolve into a kind of fingerprint, optimizing bash in your own unique way (and potentially making it difficult for others to use).
+
+If you have multiple computers, you'll want to recycle much of your program configuration on all of them. My co-worker uses a nice system I've adopted where the local and global aspects of setup are separated. For example, if you wanted to use certain aliases across all your computers, you'd put them in a global settings file. However, changes to your PATH might be different on different machines, so you'd store this in a local settings file. Then any time you change computers you can simply copy the global files and get your familiar setup, saving lots of work. A convenient way to accomplish this goal of a unified shell environment across all the systems you work on is to put your dotfiles on a server you can access from anywhere, like [GitHub][94] or [Bitbucket][118]. This is exactly what I've done and you can [get the up-to-date versions of my dotfiles on GitHub][119].
+
+Here's a sketch of how this idea works: in HOME make a _.dotfiles/bash_ directory and populate it with your setup files, using a suffix of either _local_ or _share_:
+
+ $ ls -1 .dotfiles/bash/
+ bash_aliases_local
+ bash_aliases_share
+ bash_functions_share
+ bash_inirun_local
+ bash_paths_local
+ bash_settings_local
+ bash_settings_share
+ bash_welcome_local
+ bash_welcome_share
+
+When _.bash_profile_ is called at the startup of your session, it sources all these files:
+
+ # the directory where bash configuration files reside
+ INIT_DIR="${HOME}/.dotfiles/bash"
+
+ # to make local configurations, add these files into this directory:
+ # bash_aliases_local
+ # bash_paths_local
+ # bash_settings_local
+ # bash_welcome_local
+
+ # this line, e.g., protects the functionality of rsync by only turning on the below if the shell is in interactive mode
+ # In particular, rsync fails if things are echo-ed to the terminal
+ [[ "$-" != *i* ]] && return
+
+ # bash welcome
+ if [ -e "${INIT_DIR}/bash_welcome_local" ]; then
+ cat ${INIT_DIR}/bash_welcome_local
+ elif [ -e "${INIT_DIR}/bash_welcome_share" ]; then
+ cat ${INIT_DIR}/bash_welcome_share
+ fi
+
+ #--------------------LOCAL------------------------------
+ # aliases local
+ if [ -e "${INIT_DIR}/bash_aliases_local" ]; then
+ source "${INIT_DIR}/bash_aliases_local"
+ echo "bash_aliases_local loaded"
+ fi
+
+ # settings local
+ if [ -e "${INIT_DIR}/bash_settings_local" ]; then
+ source "${INIT_DIR}/bash_settings_local"
+ echo "bash_settings_local loaded"
+ fi
+
+ # paths local
+ if [ -e "${INIT_DIR}/bash_paths_local" ]; then
+ source "${INIT_DIR}/bash_paths_local"
+ echo "bash_paths_local loaded"
+ fi
+
+ #---------------SHARE-----------------------------
+ # aliases share
+ if [ -e "${INIT_DIR}/bash_aliases_share" ]; then
+ source "${INIT_DIR}/bash_aliases_share"
+ echo "bash_aliases_share loaded"
+ fi
+
+ # settings share
+ if [ -e "${INIT_DIR}/bash_settings_share" ]; then
+ source "${INIT_DIR}/bash_settings_share"
+ echo "bash_settings_share loaded"
+ fi
+
+ # functions share
+ if [ -e "${INIT_DIR}/bash_functions_share" ]; then
+ source "${INIT_DIR}/bash_functions_share"
+ echo "bash_functions_share loaded"
+ fi
+
+A word of caution: echoing things in your _.bash_profile_, as I'm doing here, can be dangerous and break the functionality of utilities like scp and rsync. However, we protect against this with the cryptic line near the top.
+
+Taking care of bash is the hard part. Other programs are less of a chore because, even if you have different programs in your PATH on your home and work computers, you probably want everything else to behave the same. To accomplish this, just drop all your other configuration files into your _.dotfiles_ repository and link to them from your home directory:
+
+ .gitconfig -> .dotfiles/.gitconfig
+ .vimrc -> .dotfiles/.vimrc
+
+## Working Faster with Readline Functions and Key Bindings
+
+If you've started using the terminal extensively, you might find that things are a bit slow. Perhaps you need some long command you wrote yesterday and you don't want to write the damn thing again. Or, if you want to jump to the end of a line, it's tiresome to move the cursor one character at a time. Failure to immediately solve these problems will push your productivity back into the stone age and you may end up swearing off the terminal as a Rube Goldberg-ian dystopia. So—enter keyboard shortcuts!
+
+The backstory about shortcuts is that there are two massively influential text editors, [Emacs][57] and [Vim][55], whose users—to be overdramatic—are divided into two warring camps. Each program has its own conventions for shortcuts, like jumping words with your cursor, and in bash they're Emacs-flavored by default. But you can toggle between either one:
+
+ $ set -o emacs # Set emacs-style key bindings (this is the default)
+ $ set -o vi # Set vi-style key bindings
+
+Although I prefer Vim as a text-editor, I use Emacs key bindings on the command line. The reason is that in Vim there are multiple modes (normal mode, insert mode, command mode). If you want to jump to the front of a line, you have to switch from insert mode to normal mode, which breaks up the flow a little. In Emacs there's no such complication. Emacs commands usually start with the _Control_ key or the _Meta_ key (usually _Esc_). Here are some things you can do:
+
+* _Cntrl-a_ \- jump cursor to beginning of line
+* _Cntrl-e_ \- jump cursor to end of line
+* _Cntrl-k_ \- delete to end of line
+* _Cntrl-u_ \- delete to beginning of line
+* _Cntrl-w_ \- delete back one word
+* _Cntrl-y_ \- paste (yank) what was deleted with the above shortcuts
+* _Cntrl-r_ \- reverse-search history for a given word
+* _Cntrl-c_ \- kill the process running in the foreground; don't execute current line on the command line
+* _Cntrl-z_ \- suspend the process running in the foreground
+* _Cntrl-l_ \- clear screen. (this has an advantage over the unix command clear in that it works in the Python, MySQL, and other shells)
+* _Cntrl-d_ \- [end of transmission][120] (in practice, often synonymous with quit - e.g., exiting the Python or MySQL shells)
+* _Cntrl-s_ \- freeze screen
+* _Cntrl-q_ \- un-freeze screen
+
+_These are supremely useful!_ I use these numerous times a day. (On the Mac, the first three even work in the Google search bar!) The first bunch of these fall under the umbrella of [_Readline Functions_][121] (read GNU's extensive documentation [here][122]). There are actually tons more, and you can see them all by entering:
+
+ $ bind -P # show all Readline Functions and their key bindings
+ $ bind -l # show all Readline Functions
+
+Four of the most excellent Readline Functions are:
+
+* _forward-word_ \- jump cursor forward a word
+* _backward-word_ \- jump cursor backward a word
+* _history-search-backward_ \- scroll through your bash history backward
+* _history-search-forward_ \- scroll through your bash history forward
+
+For the first two—which are absolutely indispensable—you can use the default Emacs way:
+
+* _Meta-f_ \- jump forward one word
+* _Meta-b_ \- jump backward one word
+
+However, reaching for the _Esc_ key is a royal pain in the ass—you have to re-position your hands on the keyboard. This is where _key-binding_ comes into play. Using the command bind, you can map a Readline Function to any key combination you like. Of course, you should be careful not to overwrite pre-existing key bindings that you want to use. I like to map the following keys to these Readline Functions:
+
+
+* _Cntrl-forward-arrow_ \- forward-word
+* _Cntrl-backward-arrow_ \- backward-word
+* _up-arrow_ \- history-search-backward
+* _down-arrow_ \- history-search-forward
+
+In my _.bash_profile_ (or, more accurately, in my global bash settings file) I use:
+
+ # make cursor jump over words
+ bind '"\e[5C": forward-word' # control+arrow_right
+ bind '"\e[5D": backward-word' # control+arrow_left
+
+ # make history searchable by entering the beginning of command
+ # and using up and down keys
+ bind '"\e[A": history-search-backward' # arrow_up
+ bind '"\e[B": history-search-forward' # arrow_down
+
+(although these may not work universally [1].) How does this cryptic symbology translate into these particular keybindings? There's a neat trick you can use, to be revealed in the next section.
+
+_Tip_: On Mac, you can move your cursor to any position on the line by holding down _Option_ and clicking your mouse there. I rarely use this, however, because it's faster to make your cursor jump via the keyboard.
+
+* * *
+
+
+[1] If you have trouble getting this to work in OS X's Terminal, try [iTerm2][123] instead, as described [here][124] ↑
+
+## More on Key Bindings, the ASCII Table, _Control-v_
+
+Before we get to the key binding conundrum, let's review [ASCII][125]. This is, simply, a way of mapping every character on your keyboard to a numeric code. As Wikipedia puts it:
+
+> The American Standard Code for Information Interchange (ASCII) is a character-encoding scheme originally based on the English alphabet that encodes 128 specified characters—the numbers 0-9, the letters a-z and A-Z, some basic punctuation symbols, some control codes that originated with Teletype machines, and a blank space—into the 7-bit binary integers.
+
+For example, the character _A_ is mapped to the number _65_, while _q_ is _113_. Of special interest are the _control characters_, which are the representations of things that cannot be printed like _return_ or _delete_. Again [from Wikipedia][126], here is the portion of the ASCII table for these control characters:
+
+| **Binary** | **Oct** | **Dec** | **Hex** | **Abbr** | **Symbol** | **Caret** | **Name** |
+| ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
+| 000 0000 | 000 | 0 | 00 | NUL | ␀ | ^@ | Null |
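+
+As an aside, you can look up a character's code right from the shell with printf, which treats an argument beginning with a quote as "give me the numeric value of the character that follows":
+
+ $ printf '%d\n' "'A"   # 65
+ $ printf '%d\n' "'q"   # 113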
+
+[1]: http://en.wikipedia.org/wiki/Unix
+[2]: http://en.wikipedia.org/wiki/Linux
+[3]: http://en.wikipedia.org/wiki/Command-line_interface
+[4]: http://en.wikipedia.org/wiki/Terminal_emulator
+[5]: http://en.wikipedia.org/wiki/Shell_script
+[6]: http://en.wikipedia.org/wiki/Bourne-again_shell
+[7]: http://www.oliverelliott.org/static/article/img/terminal_591.png
+[8]: http://www.gnu.org/software/coreutils/
+[9]: http://en.wikipedia.org/wiki/GNU_Core_Utilities
+[10]: /static/img/letter_600.jpg
+[11]: https://class.coursera.org/startup-001
+[12]: http://ss64.com/bash/rsync.html
+[13]: http://www.perl.org
+[14]: http://www.python.org
+[15]: https://www.gnupg.org/index.html
+[16]: http://aws.amazon.com/ec2/
+[17]: http://nginx.org/
+[18]: http://en.wikipedia.org/wiki/Graphical_user_interface
+[19]: http://www.youtube.com/watch?v=WiX7GTelTPM
+[20]: http://upload.wikimedia.org/wikipedia/commons/c/cd/Unix_timeline.en.svg
+[21]: /article/computing/ref_unix/
+[22]: http://www.oliverelliott.org/static/article/img/terminal_119.png
+[23]: http://en.wikipedia.org/wiki/Cmd.exe
+[24]: http://en.wikipedia.org/wiki/Windows_PowerShell
+[25]: http://www.putty.org
+[26]: https://chrome.google.com/webstore/detail/secure-shell/pnhechapfaindjhompbnflcldabbghjo?hl=en
+[27]: http://mobaxterm.mobatek.net
+[28]: http://en.wikipedia.org/wiki/Darwin_(operating_system)
+[29]: http://aws.amazon.com/free/
+[30]: http://www.ubuntu.com/download
+[31]: http://www.linuxmint.com
+[32]: https://getfedora.org/
+[33]: http://www.centos.org
+[34]: https://www.cygwin.com/
+[35]: /article/computing/tips_mac/#InstalltheGNUCoreutils
+[36]: http://www.oliverelliott.org/static/article/img/root_dir_structure.png
+[37]: http://en.wikipedia.org/wiki/Home_directory
+[38]: http://www.oliverelliott.org/static/article/img/home_dir_structure.png
+[39]: http://www.oliverelliott.org/static/article/img/dir_struct_1125.png
+[40]: http://www.thegeekstuff.com/2010/09/linux-file-system-structure/
+[41]: http://en.wikipedia.org/wiki/Uniform_resource_locator
+[42]: http://www.e-reading.biz/htmbook.php/orelly/unix2.1/lrnunix/ch03_01.htm
+[43]: http://ss64.com/bash/ls.html
+[44]: http://www.oliverelliott.org/static/article/img/ls.png
+[45]: http://www.oliverelliott.org/static/article/img/ls1.png
+[46]: http://www.oliverelliott.org/static/article/img/lshl.png
+[47]: http://unixhelp.ed.ac.uk/CGI/man-cgi?finger
+[48]: http://en.wikipedia.org/wiki/Hidden_file_and_hidden_directory
+[49]: http://www.oliverelliott.org/static/article/img/lsal.png
+[50]: http://en.wikipedia.org/wiki/Glob_%28programming%29
+[51]: http://macintoshgarden.org/games/prince-of-persia
+[52]: http://en.wikipedia.org/wiki/Microsoft_Word
+[53]: http://www.youtube.com/watch?v=znlFu_lemsU
+[54]: http://en.wikipedia.org/wiki/Grep
+[55]: http://www.vim.org
+[56]: http://www.nano-editor.org/
+[57]: http://www.gnu.org/software/emacs/
+[58]: /article/computing/wik_vim/
+[59]: http://www.sublimetext.com/
+[60]: http://aquamacs.org/
+[61]: http://www.peterborgapps.com/smultron/
+[62]: http://en.wikipedia.org/wiki/Escape_character
+[63]: http://www.oliverelliott.org/static/article/img/bash_prompt_426.png
+[64]: http://en.wikipedia.org/wiki/X_Window_System
+[65]: http://www.oliverelliott.org/static/article/img/thepath_410.png
+[66]: http://en.wikipedia.org/wiki/Symbolic_link
+[67]: http://ss64.com/bash/ln.html
+[68]: http://www.oliverelliott.org/static/article/img/myscript_634.png
+[69]: https://www.talisman.org/~erlkonig/documents/commandname-extensions-considered-harmful.shtml
+[70]: http://en.wikipedia.org/wiki/Shebang_(Unix)
+[71]: http://en.wikipedia.org/wiki/Software_portability
+[72]: http://en.wikipedia.org/wiki/Env
+[73]: http://en.wikipedia.org/wiki/Interpreted_language
+[74]: http://en.wikipedia.org/wiki/Compiled_language
+[75]: http://en.wikipedia.org/wiki/Compiler
+[76]: http://gcc.gnu.org
+[77]: https://virtualenv.pypa.io/en/latest/
+[78]: http://stackoverflow.com/questions/5725296/difference-between-sh-and-bash
+[79]: http://www.cyberciti.biz/tips/how-do-i-find-out-what-shell-im-using.html
+[80]: http://en.wikipedia.org/wiki/Z_shell
+[81]: http://en.wikipedia.org/wiki/Tcsh
+[82]: http://ss64.com/bash/chmod.html
+[83]: http://en.wikipedia.org/wiki/Chmod
+[84]: http://en.wikipedia.org/wiki/Secure_Shell
+[85]: http://www.ss64.com/bash/ssh.html
+[86]: /article/computing/tut_unix/#Dotfilesbashrcandbash_profile
+[87]: http://en.wikipedia.org/wiki/RSA_(cryptosystem)
+[88]: http://en.wikipedia.org/wiki/Public-key_cryptography
+[89]: https://github.com/
+[90]: https://help.github.com/articles/generating-ssh-keys/
+[91]: /article/computing/tips_mac/#sshintoYourMac
+[92]: http://git-scm.com/
+[93]: /article/computing/wik_git/
+[94]: https://github.com
+[95]: /static/article/example/myscript.html
+[96]: http://en.wikipedia.org/wiki/Standard_streams
+[97]: http://www.oliverelliott.org/static/article/img/Stdstreams-notitle.svg.png
+[98]: http://bash.cyberciti.biz/guide/The_case_statement
+[99]: http://en.wikipedia.org/wiki/Array_data_structure
+[100]: http://en.wikipedia.org/wiki/Hash_table
+[101]: http://www.amazon.com/The-Tiger-Shark-Empirical-Wave-Particle/dp/0521358922
+[102]: http://www.tldp.org/LDP/abs/html/fto.html
+[103]: http://tldp.org/LDP/abs/html/exit-status.html
+[104]: http://www.oliverelliott.org/static/article/img/lsdirtree_234.jpg
+[105]: http://ss64.com/bash/gzip.html
+[106]: http://en.wikipedia.org/wiki/Control_flow
+[107]: http://wiki.bash-hackers.org/howto/getopts_tutorial
+[108]: /static/article/example/test_args.html
+[109]: http://perldoc.perl.org/Getopt/Long.html
+[110]: https://docs.python.org/2/howto/argparse.html
+[111]: http://stackoverflow.com/questions/12314451/accessing-bash-command-line-args-vs
+[112]: http://en.wikipedia.org/wiki/Here_document
+[113]: http://www.tldp.org/LDP/abs/html/here-docs.html
+[114]: http://ss64.com/bash/source.html
+[115]: http://www.joshstaiger.org/archives/2005/07/bash_profile_vs.html
+[116]: http://www.virtualblueness.net/linux-gazette/109/marinov.html
+[117]: http://zxvf-linux.blogspot.com/2013/05/pimp-my-bashrc.html
+[118]: https://bitbucket.org
+[119]: https://github.com/gitliver/.dotfiles
+[120]: http://en.wikipedia.org/wiki/End-of-transmission_character
+[121]: http://en.wikipedia.org/wiki/GNU_Readline
+[122]: http://tiswww.case.edu/php/chet/readline/readline.html
+[123]: http://iterm2.com/
+[124]: /article/computing/tips_mac/#InstalliTerm2
+[125]: http://en.wikipedia.org/wiki/ASCII
+[126]: http://en.wikipedia.org/wiki/ASCII#ASCII_control_characters
diff --git a/cmd/ref/some command line tips for the web developer.txt b/cmd/ref/some command line tips for the web developer.txt
new file mode 100644
index 0000000..c881c4a
--- /dev/null
+++ b/cmd/ref/some command line tips for the web developer.txt
@@ -0,0 +1,105 @@
+---
+title: Some command line tips for the web developer
+date: 2015-04-21T18:40:43Z
+source: http://tosbourn.com/some-command-line-tips-for-the-web-developer/
+tags: #lhp, #cmdline
+
+---
+
+I wanted to share some tips I have collated over the years that are useful for web developers who occasionally need to roll their sleeves up and get their hands dirty working on a server. This is by no stretch of the imagination a complete list, nor do I get too specific. The tips below should work on the majority of Linux-based web servers and apply to the majority of setups.
+
+## Where to get more help
+
+The first thing I should point out is that if at any time you get stuck or need help, you should consult your sysadmin. If you have no sysadmin and need to consult the internet, I would suggest browsing and then asking on [ServerFault][1]; the people on that site seem to know their stuff and it is a vibrant enough community.
+
+## Using Tab
+
+When typing file or folder names you can hit Tab once to automatically complete the name. This can vastly speed up your time in the terminal: instead of typing `vi /home/username/longFolderName/MyFileNameIsLongToo.txt` you could type `vi /h TAB u TAB l TAB M TAB`.
+
+You can also double-tap the Tab key to see a list of the available options, which is useful when two filenames share a prefix and autocomplete doesn't know which one you mean.
+
+## Listing Files
+
+You probably already know the `ls` command, but did you know that by adding `-lah` after it you can greatly improve the detail you get back?
+
+`-l` lists everything out in a longer, nicer format, `-a` includes hidden (dot) files in the list and `-h` makes things like file sizes human-readable.
+
+The other thing some people don't know about `ls` is that you can pass in the location you want to look at, so instead of typing `cd /home/user/` and then `ls`, you can just type `ls /home/user/`.
+
+## Removing Files
+
+Again, most people know about using `rm` to remove files, but some don't realise you can pass multiple files to it, for example `rm file1 file2`.
+
+## Viewing Files
+
+`cat` is your friend for quickly viewing the contents of a file: just type `cat my_file.txt` instead of opening it in your text editor.
+
+For larger files you might only want to see the very top or the very bottom of the file. The quickest way to do that is `head my_file.txt` for the top or `tail my_file.txt` for the bottom.
+
+If you are constantly checking a debug file, you can monitor it with `tail -f my_file.txt`. This "follows" the file, so the output automatically updates as new lines are written to it.
+
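+For example, to keep an eye on a web server's error log (the log path here is just an illustration; yours may differ):
+
+    head -n 20 my_file.txt                    # just the first 20 lines
+    tail -n 50 -f /var/log/nginx/error.log    # last 50 lines, then keep following
+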
+## Drive Space
+
+If you need to know how much space is left on your drives, just type `df -h`. You will normally get information for drives you probably didn't think existed; don't worry about those, just focus on the ones that are clearly your main drives.
+
+## RAM
+
+If you need to know how much RAM is installed on your machine, just type `free -m`. The main column to pay attention to is 'total'.
+
+## History
+
+Did you just get shouted at because you forgot to prefix your command with sudo? Type `sudo !!` and it will run the last command as sudo.
+
+If you know you entered a command a couple of commands ago, hit the up arrow a few times and the terminal will fill it in for you.
+
+If you used a command a while ago and want to find it again, type `history` and you will see everything you have typed recently, each with a unique number. Take note of the number and type `![the number]` to run that command again.
+
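+A quick illustration (the history numbers are whatever your own history happens to contain):
+
+    $ history | tail -3
+      487  df -h
+      488  free -m
+      489  tail -f debug.log
+    $ !488        # re-runs "free -m"
+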
+## The Pipe Character
+
+The pipe character takes anything that would normally appear on the screen and passes it somewhere else, which is crazy handy for doing basic searches. So instead of typing `history` and scanning for every time you used sudo, type `history | grep sudo` and you will only get the entries with sudo in them.
+
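+A few more pipelines in the same spirit (the process and file names are only examples):
+
+    ps aux | grep nginx         # is nginx running, and as which user?
+    du -sh * | sort -h          # which folder here is eating the disk space?
+    history | grep ssh | tail   # the last few ssh commands you ran
+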
+## What is running
+
+Find out what is currently running by typing `top`. Pay attention to the load average; if it is high, something may be going wrong. Generally look for a full disk, a process that has gone mental or a load of traffic to the site.
+
+If you just care about the load average you can get it on one line by typing `uptime`.
+
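+The output looks something like this (the numbers here are only illustrative); the three load averages cover the last 1, 5 and 15 minutes:
+
+    $ uptime
+     18:40:43 up 12 days,  3:42,  1 user,  load average: 0.08, 0.12, 0.10
+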
+## Finding out More
+
+`man` + any command will give you the manual for that command, which is super handy for finding out what you need without resorting to Google. Also, just like the programs on your own computer, different versions could be installed on the server, which means the man page will sometimes be more relevant than what a Google search tells you.
+
+## Moving folder contents up a level quickly
+
+Move all contents of a folder into its parent with `mv child_dir/* ./`
+
+This means take everything in child_dir (but not the child_dir directory itself) and move it into the folder we are currently in. If you don't like the idea of typing `./` because it looks odd, you can always type the full path instead.
+
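+One caveat: `*` does not match hidden (dot) files, so things like `.htaccess` would get left behind. A sketch for grabbing those too, if there are any:
+
+    mv child_dir/* ./          # everything except dotfiles
+    mv child_dir/.[!.]* ./     # the dotfiles (the pattern skips . and ..)
+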
+## Alias Commands
+
+If there are long or complex commands that you type regularly, you should alias them. This means creating another name you can reference the command by.
+
+For example, `alias lsa='ls -lah'` would allow you to type `lsa` and what would actually run is `ls -lah`. You can even overwrite `ls` itself if you really want to, with `alias ls='ls -lah'`.
+
+If you want to pass parameters you can use a small function instead of an alias (just type it into the command line): `mkdir_ls() { mkdir $*; cd $*; }`. Then when you type `mkdir_ls new_folder` it will make a new folder and then move you into that folder. A slightly more defensive version is sketched below.
+
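+Here is that more defensive version, plus where to keep these so they survive a new session (`mkcd` is just an illustrative name):
+
+    # quoting handles folder names with spaces; && only cds if mkdir worked
+    mkcd() { mkdir -p "$1" && cd "$1"; }
+
+    # put aliases and functions in ~/.bashrc (or ~/.bash_profile on a Mac)
+    # so they are available in every new shell
+    alias lsa='ls -lah'
+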
+## In closing
+
+Hopefully you find some of these useful. If you have any to add, feel free to let me know on [Twitter][2].
+
+
+[1]: http://www.serverfault.com
+[2]: https://twitter.com/tosbourn "Some command line tips for the web developer"
diff --git a/cmd/setup-vps-server.txt b/cmd/setup-vps-server.txt
new file mode 100644
index 0000000..efee5d0
--- /dev/null
+++ b/cmd/setup-vps-server.txt
@@ -0,0 +1,116 @@
+Let's talk about your server hosting situation. I know a lot of you are still using a shared web host. The thing is, it's 2015, shared hosting is only necessary if you really want unexplained site outages and over-crowded servers that slow to a crawl.
+
+It's time to break free of those shared hosting chains. It's time to stop accepting the software stack you're handed. It's time to stop settling for whatever outdated server software and configurations some shared hosting company sticks you with.
+
+**It's time to take charge of your server; you need a VPS**
+
+What? Virtual Private Servers? Those are expensive and complicated... don't I need to know Linux or something?
+
+No, no and not really.
+
+Thanks to an increasingly competitive market you can pick up a very capable VPS for $5 a month. Setting up your VPS *is* a little more complicated than using a shared host, but most VPS providers these days offer one-click installers that will set up a Rails, Django or even WordPress environment for you.
+
+As for Linux, knowing your way around the command line certainly won't hurt, but these tutorials will teach you everything you really need to know. We'll also automate everything so that critical security updates for your server are applied automatically without you lifting a finger.
+
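+To give a flavour of that kind of automation, here is one common approach on Ubuntu/Debian, using the `unattended-upgrades` package (just a sketch; not necessarily the exact setup used later in this series):
+
+    # install and enable automatic security updates (Ubuntu/Debian)
+    sudo apt-get install unattended-upgrades
+    sudo dpkg-reconfigure -plow unattended-upgrades
+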
+## Pick a VPS Provider
+
+There are hundreds, possibly thousands of VPS providers these days. You can nerd out comparing all of them on [serverbear.com](http://serverbear.com/) if you want. When you're starting out I suggest sticking with what I call the big three: Linode, Digital Ocean or Vultr.
+
+Linode would be my choice for mission-critical hosting. I use it for client projects, but Vultr and Digital Ocean are cheaper and perfect for personal projects and experiments. Both offer $5-a-month servers, which get you 512MB of RAM, plenty of bandwidth and 20-30GB of SSD-based storage. Vultr actually gives you a little more RAM, which is helpful if you're setting up a Rails or Django environment (i.e. a long-running process that needs more memory), but I've been hosting a Django-based site on a 512MB Digital Ocean instance for 18 months and have never run out of memory.
+
+Also note that all of these plans charge by the hour, so you can spin up a new server, play around with it, then destroy it, and you'll only have spent a few pennies.
+
+Which one is better? They're both good. I've been using Vultr more these days, but Digital Ocean has a nicer, somewhat slicker control panel. There are also many others I haven't named. Just pick one.
+
+Here's a link that will get you a $10 credit at [Vultr](http://www.vultr.com/?ref=6825229) and here's one that will get you a $10 credit at [Digital Ocean](https://www.digitalocean.com/?refcode=3bda91345045) (both of those are affiliate links and help cover the cost of hosting this site *and* get you some free VPS time).
+
+For simplicity's sake, and because it offers more one-click installers, I'll use Digital Ocean for the rest of this tutorial.
+
+## Create Your First VPS
+
+In Digital Ocean you'll create a "Droplet". It's a three step process: pick a plan (stick with the $5 a month plan for starters), pick a location (stick with the defaults) and then install a bare OS or go with a one-click installer. Let's get WordPress up and running, so select WordPress on 14.04 under the Applications tab.
+
+If you want automatic backups, and you do, check that box. Backups are not free, but generally won't add more than about $1 to your monthly bill -- it's money well spent.
+
+The last thing we need to do is add an SSH key to our account. If we don't, Digital Ocean will email us our root password in plain text. Yikes.
+
+If you need to generate some SSH keys, here's a short guide, [How to Generate SSH keys](). You can skip step 3 in that guide. Once you've got your keys set up on your local machine you just need to add them to your droplet.
+
+If you're on OS X, you can use this command to copy your public key to the clipboard:
+
+ pbcopy < ~/.ssh/id_rsa.pub
+
+Otherwise you can use cat to print it out and copy it:
+
+ cat ~/.ssh/id_rsa.pub
+
+Now click the button to "add an SSH key". Then paste the contents of your clipboard into the box. Hit "add SSH Key" and you're done.
+
+Now just click the giant "Create Droplet" button.
+
+Congratulations, you just deployed your first VPS.
+
+## Secure Your VPS
+
+Now we can log in to our new VPS with this command:
+
+    ssh root@12.34.56.78
+
+That will cause SSH to ask if you want to add the server to the list of known hosts. Say yes, and then on OS X you'll get a dialog asking for the passphrase you created a minute ago when you generated your SSH key. Enter it and check the box to save it to your keychain so you don't have to enter it again.
+
+And you're now logged in to your VPS as root. That's not how we want to log in though since root is a very privileged user that can wreak all sorts of havoc. The first thing we'll do is change the password of the root user. To do that, just enter:
+
+ passwd
+
+And type a new password.
+
+Now let's create a new user:
+
+ adduser myusername
+
+Give your username a secure password and then enter this command:
+
+ visudo
+
+If you get an error saying there is no such program installed, you'll need to install sudo first (`apt-get install sudo` on Debian, which does not ship with sudo). Otherwise `visudo` will open a file. Use the arrow keys to move the cursor down to the line that reads:
+
+ root ALL=(ALL:ALL) ALL
+
+Now add this line:
+
+ myusername ALL=(ALL:ALL) ALL
+
+Where myusername is the username you created just a minute ago. Now we need to save the file. To do that hit Control-X, type a Y and then hit return.
+
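+As an aside, and not what we just did above: on Ubuntu images like this one, the default sudoers file already grants full rights to the `sudo` group, so an alternative is to add your user to that group instead (a sketch, assuming the stock Ubuntu sudoers):
+
+    # run as root; the "sudo" group is already listed in /etc/sudoers on Ubuntu
+    usermod -aG sudo myusername
+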
+Now, **WITHOUT LOGGING OUT OF YOUR CURRENT ROOT SESSION** open another terminal window and make sure you can login with your new user:
+
+ ssh myusername@12.34.56.78
+
+You'll be asked for the password that we created just a minute ago on the server (not the one for our SSH key). Enter that password and you should be logged in. To make sure we can get root access when we need it, try entering this command:
+
+ sudo apt-get update
+
+That should ask for your password again and then spit out a bunch of information, all of which you can ignore for now.
+
+Okay, now you can log out of your root terminal window. To do that just hit Control-D.
+
+## Finishing Up
+
+What about actually accessing our VPS on the web? Where's WordPress? Just point your browser to the bare IP address you used to log in and you should get the first screen of the WordPress installer.
+
+We now have a VPS deployed and we've taken some very basic steps to secure it. We can do a lot more to make things secure, but I've covered that in a separate article.
+
+One last thing: the user we created does not have access to our SSH keys, so we need to add them. First make sure you're logged out of the server (type Control-D and you'll get a message telling you the connection has been closed). Then, on your local machine, paste this command:
+
+    cat ~/.ssh/id_rsa.pub | ssh myusername@12.34.56.78 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
+
+You'll have to put in your password one last time, but from now on you can log in over SSH with your key.
+
+## Next Steps
+
+Congratulations, you made it past the first hurdle; you're well on your way to taking control of your server. Kick back, relax and write some blog posts.
+
+Write down any problems you had with this tutorial and send me a link so I can check out your blog (I'll try to help figure out what went wrong too).
+
+Because we used a pre-built image from Digital Ocean, we're really not much better off than if we had gone with shared hosting. That's okay, though; you have to start somewhere. Next up we'll do the same thing, but this time starting from a bare OS, which will serve as the basis for a custom-built version of Nginx that's highly optimized and way faster than any stock server.
+
diff --git a/cmd/ssh-keys.txt b/cmd/ssh-keys.txt
new file mode 100644
index 0000000..25c1b8c
--- /dev/null
+++ b/cmd/ssh-keys.txt
@@ -0,0 +1,94 @@
+SSH keys are an easier, more secure way of logging into your virtual private server via SSH. Passwords are vulnerable to brute force attacks and just plain guessing. Key-based authentication is (currently) much more difficult to brute force and, when combined with a password on the key, provides a secure way of accessing your VPS instances from anywhere.
+
+Key-based authentication uses two keys, the first is the "public" key that anyone is allowed to see. The second is the "private" key that only you ever see. So to log in to a VPS using keys we need to create a pair -- a private key and a public key that matches it -- and then securely upload the public key to our VPS instance. We'll further protect our private key by adding a password to it.
+
+Open up your terminal application. On OS X, that's Terminal, which is in the Applications >> Utilities folder. If you're using Linux I'll assume you know where the terminal app is, and Windows fans can follow along after installing [Cygwin](http://cygwin.com/).
+
+Here's how to generate SSH keys in three simple steps.
+
+## Setup SSH for More Secure Logins
+
+### Step 1: Check for SSH Keys
+
+Cut and paste this line into your terminal to check and see if you already have any SSH keys:
+
+ ls -al ~/.ssh
+
+If you see output like this, then skip to Step 3:
+
+ id_dsa.pub
+ id_ecdsa.pub
+ id_ed25519.pub
+ id_rsa.pub
+
+### Step 2: Generate an SSH Key
+
+Here's the command to create a new SSH key. Just cut and paste, but be sure to put in your own email address in quotes:
+
+ ssh-keygen -t rsa -C "your_email@example.com"
+
+This will start a series of questions; just hit enter to accept the default choice for each of them, including the one that asks where to save the file.
+
+Then it will ask for a passphrase; pick a good, long one. And don't worry, you won't need to enter it every time: there's something called `ssh-agent` that will ask for your passphrase once and then store it for the duration of your session (i.e. until you restart your computer).
+
+ Enter passphrase (empty for no passphrase): [Type a passphrase]
+ Enter same passphrase again: [Type passphrase again]
+
+Once you've put in the passphrase, SSH will spit out a "fingerprint" that looks a bit like this:
+
+ # Your identification has been saved in /Users/you/.ssh/id_rsa.
+ # Your public key has been saved in /Users/you/.ssh/id_rsa.pub.
+ # The key fingerprint is:
+ # d3:50:dc:0f:f4:65:29:93:dd:53:c2:d6:85:51:e5:a2 scott@longhandpixels.net
+
+### Step 3: Copy Your Public Key to Your VPS
+
+If you have `ssh-copy-id` installed on your system, you can use this line to transfer your keys:
+
+    ssh-copy-id user@12.34.56.78
+
+If that doesn't work, you can paste in the keys over SSH:
+
+    cat ~/.ssh/id_rsa.pub | ssh user@12.34.56.78 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
+
+Whichever you use, you should get a message like this:
+
+
+ The authenticity of host '12.34.56.78 (12.34.56.78)' can't be established.
+ RSA key fingerprint is 01:3b:ca:85:d6:35:4d:5f:f0:a2:cd:c0:c4:48:86:12.
+ Are you sure you want to continue connecting (yes/no)? yes
+ Warning: Permanently added '12.34.56.78' (RSA) to the list of known hosts.
+ username@12.34.56.78's password:
+
+ Now try logging into the machine, with "ssh 'user@12.34.56.78'", and check in:
+
+ ~/.ssh/authorized_keys
+
+ to make sure we haven't added extra keys that you weren't expecting.
+
+Now log in to your VPS with ssh like so:
+
+ ssh username@12.34.56.78
+
+And you won't be prompted for a password by the server. You will, however, be prompted for the passphrase that unlocks your SSH key; `ssh-agent` should store it for you, so you only need to re-enter it when you log out or restart your computer.
+
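+If the agent doesn't have your key loaded, or nothing ever prompts you (common on a bare Linux setup), you can add it by hand. A sketch; the `-K` flag for the OS X Keychain is an assumption about your OS X version:
+
+    # start an agent for this shell (if one isn't already running) and load the key
+    eval "$(ssh-agent -s)"
+    ssh-add ~/.ssh/id_rsa
+
+    # on OS X, also remember the passphrase in the Keychain
+    ssh-add -K ~/.ssh/id_rsa
+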
+And there you have it, secure, key-based log-ins for your VPS.
+
+### Bonus: SSH config
+
+If you'd rather not type `ssh myuser@12.34.56.78` all the time you can add that host to your SSH config file and refer to it by hostname.
+
+The SSH config file lives in `~/.ssh/config`. This command will either open that file if it exists or create it if it doesn't:
+
+ nano ~/.ssh/config
+
+Now we need to create a host entry. Here's what mine looks like:
+
+    Host myname
+        HostName 12.34.56.78
+        User myvpsusername
+        # if you set a non-standard port, uncomment the next line
+        # Port 24857
+        CheckHostIP yes
+        TCPKeepAlive yes
+
+Then to log in, all I need to do is type `ssh myname`. This is even more helpful when using `scp`, since you can skip the whole username@server bit and just type `scp myname:/home/myuser/somefile.txt .` to copy a file.