-rw-r--r--  bivouac/Bivouac.sit  bin  0 -> 192871737 bytes
-rw-r--r--  bivouac/lagsolo.zip  bin  0 -> 276406 bytes
-rw-r--r--  bivouac/laura's cover.sit  bin  0 -> 36116937 bytes
-rw-r--r--  bivouac/lauraSolomonPics.zip  bin  0 -> 234987 bytes
-rw-r--r--  budget travel/Travel_social_networking2.doc  bin  0 -> 73728 bytes
-rw-r--r--  budget travel/Travel_social_networking3-1.doc  bin  0 -> 73728 bytes
-rw-r--r--  budget travel/Travel_social_networking3.pdf  bin  0 -> 50672 bytes
-rw-r--r--  budget travel/Travel_social_networking3.rtf  40
-rw-r--r--  budget travel/Travel_social_networking4.doc  bin  0 -> 73728 bytes
-rw-r--r--  budget travel/budgettravel.txt  1
-rw-r--r--  budget travel/travelsocialnetworks.txt  1
-rw-r--r--  cmd/.gitignore  2
-rw-r--r--  cmd/Screen Shot 2015-04-01 at 1.34.13 PM.png  bin  0 -> 408641 bytes
-rw-r--r--  cmd/Screen Shot 2015-04-01 at 1.34.35 PM.png  bin  0 -> 419873 bytes
-rw-r--r--  cmd/adv-autojump.txt  5
-rw-r--r--  cmd/basics.txt  196
-rw-r--r--  cmd/changing-shells.txt  73
-rw-r--r--  cmd/draft_edits.txt  35
-rw-r--r--  cmd/homebrew.txt  0
-rw-r--r--  cmd/intro.txt  0
-rw-r--r--  cmd/notes.txt  2
-rw-r--r--  cmd/read-docs.txt  55
-rw-r--r--  cmd/ref/a whirlwind tour of web developer tools: command line utilities - wern ancheta.txt  1215
-rw-r--r--  cmd/ref/bash image tools for web designers - brettterpstra.com.txt  53
-rw-r--r--  cmd/ref/invaluable command line tools for web developers.txt  89
-rw-r--r--  cmd/ref/life on the command line.txt  81
-rw-r--r--  cmd/ref/oliver | an introduction to unix.txt  1913
-rw-r--r--  cmd/ref/some command line tips for the web developer.txt  105
-rw-r--r--  cmd/setup-vps-server.txt  116
-rw-r--r--  cmd/ssh-keys.txt  94
-rw-r--r--  consumer-digest-best-buys.txt  62
-rw-r--r--  consumer-digest-outline.txt  22
-rw-r--r--  consumer-digest.txt  81
-rw-r--r--  dentsu-invoice.pdf  bin  0 -> 80486 bytes
-rw-r--r--  invoice-wheels-hosting-2015.pdf  bin  0 -> 78897 bytes
-rw-r--r--  invoice_duke_update.pdf  bin  0 -> 82266 bytes
-rw-r--r--  paradiseherbs-inv068600.pdf  bin  0 -> 75605 bytes
-rw-r--r--  paradiseherbs-inv070812.pdf  bin  0 -> 52979 bytes
-rw-r--r--  pitfallsofrwd.txt  37
-rw-r--r--  postmarkapp/invoices/scott_gilbertson_invoice_01.odt  bin  0 -> 11427 bytes
-rw-r--r--  postmarkapp/invoices/scott_gilbertson_invoice_01.pdf  bin  0 -> 29845 bytes
-rw-r--r--  postmarkapp/invoices/scott_gilbertson_invoice_02.odt  bin  0 -> 11644 bytes
-rw-r--r--  postmarkapp/invoices/scott_gilbertson_invoice_02.pdf  bin  0 -> 29673 bytes
-rw-r--r--  postmarkapp/mailbrush.txt  80
-rw-r--r--  postmarkapp/monitoring-email-2.txt  17
-rw-r--r--  postmarkapp/monitoring-email.txt  31
-rw-r--r--  postmarkapp/separating-transactional-bulk.txt  22
-rw-r--r--  postmarkapp/separating-transactional-bulkv2.txt  22
-rw-r--r--  postmarkapp/tools-techniques-delivery.txt  36
-rw-r--r--  sifterapp/Diagram.jpg  bin  0 -> 239664 bytes
-rw-r--r--  sifterapp/choosing-bug-tracker.txt  16
-rw-r--r--  sifterapp/complete/browserstack-images.zip  bin  0 -> 865862 bytes
-rw-r--r--  sifterapp/complete/browserstack-images/browserstack01.jpg  bin  0 -> 282678 bytes
-rw-r--r--  sifterapp/complete/browserstack-images/browserstack02.jpg  bin  0 -> 206844 bytes
-rw-r--r--  sifterapp/complete/browserstack-images/browserstack03.jpg  bin  0 -> 28982 bytes
-rw-r--r--  sifterapp/complete/browserstack-images/browserstack04.jpg  bin  0 -> 189414 bytes
-rw-r--r--  sifterapp/complete/browserstack-images/browserstack05.jpg  bin  0 -> 216895 bytes
-rw-r--r--  sifterapp/complete/browserstack-images/browserstack06.jpg  bin  0 -> 121851 bytes
-rw-r--r--  sifterapp/complete/browserstack.txt  90
-rw-r--r--  sifterapp/complete/bugs-issues-notes.txt  18
-rw-r--r--  sifterapp/complete/bugs-issues.txt  52
-rw-r--r--  sifterapp/complete/forcing-responsibility-software.txt  29
-rw-r--r--  sifterapp/complete/how-to-respond-to-bug-reports.txt  29
-rw-r--r--  sifterapp/complete/issue-tracking-challenges.txt  63
-rw-r--r--  sifterapp/complete/private-issues.txt  28
-rw-r--r--  sifterapp/complete/sifter-pagespeed-after.png  bin  0 -> 553545 bytes
-rw-r--r--  sifterapp/complete/sifter-pagespeed-before.png  bin  0 -> 599783 bytes
-rw-r--r--  sifterapp/complete/states-vs-resolutions.txt  22
-rw-r--r--  sifterapp/complete/streamlining-issue-creation.txt  55
-rw-r--r--  sifterapp/complete/triaging.txt  62
-rw-r--r--  sifterapp/complete/webpagetest-notes.txt  35
-rw-r--r--  sifterapp/complete/webpagetestp1.txt  79
-rw-r--r--  sifterapp/complete/webpagetestp2.txt  85
-rw-r--r--  sifterapp/complete/yosemite-mail.txt  14
-rw-r--r--  sifterapp/ideal-sifter-workflow.txt  31
-rw-r--r--  sifterapp/invoices/scott_gilbertson_invoice_01.rtf  62
-rw-r--r--  sifterapp/invoices/scott_gilbertson_invoice_02.rtf  62
-rw-r--r--  sifterapp/invoices/scott_gilbertson_invoice_03.rtf  50
-rw-r--r--  sifterapp/invoices/scott_gilbertson_invoice_04.rtf  77
-rw-r--r--  sifterapp/invoices/scott_gilbertson_invoice_05.rtf  55
-rw-r--r--  sifterapp/invoices/scott_gilbertson_invoice_06.rtf  55
-rw-r--r--  sifterapp/invoices/scott_gilbertson_invoice_07.rtf  55
-rw-r--r--  sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.35 AM.png  bin  0 -> 1262819 bytes
-rw-r--r--  sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.39 AM.png  bin  0 -> 591440 bytes
-rw-r--r--  sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.45 AM.png  bin  0 -> 715947 bytes
-rw-r--r--  sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.49 AM.png  bin  0 -> 439351 bytes
-rw-r--r--  sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.52 AM.png  bin  0 -> 450804 bytes
-rw-r--r--  sifterapp/new-article.txt  62
-rw-r--r--  sifterapp/org-chaos.txt  33
-rw-r--r--  sifterapp/requiresments-vs-assumptions.txt  28
-rw-r--r--  sifterapp/streamlining-issue-benefits.txt  19
-rw-r--r--  sifterapp/zapier-announcement.txt  31
-rw-r--r--  wheelsinvoice121511.pdf  bin  0 -> 80515 bytes
-rw-r--r--  wheelsupdateinvoice.pdf  bin  0 -> 84525 bytes
94 files changed, 5783 insertions, 0 deletions
diff --git a/bivouac/Bivouac.sit b/bivouac/Bivouac.sit
new file mode 100644
index 0000000..5b79b2c
--- /dev/null
+++ b/bivouac/Bivouac.sit
Binary files differ
diff --git a/bivouac/lagsolo.zip b/bivouac/lagsolo.zip
new file mode 100644
index 0000000..782fd8e
--- /dev/null
+++ b/bivouac/lagsolo.zip
Binary files differ
diff --git a/bivouac/laura's cover.sit b/bivouac/laura's cover.sit
new file mode 100644
index 0000000..b53045e
--- /dev/null
+++ b/bivouac/laura's cover.sit
Binary files differ
diff --git a/bivouac/lauraSolomonPics.zip b/bivouac/lauraSolomonPics.zip
new file mode 100644
index 0000000..9244710
--- /dev/null
+++ b/bivouac/lauraSolomonPics.zip
Binary files differ
diff --git a/budget travel/Travel_social_networking2.doc b/budget travel/Travel_social_networking2.doc
new file mode 100644
index 0000000..11ad177
--- /dev/null
+++ b/budget travel/Travel_social_networking2.doc
Binary files differ
diff --git a/budget travel/Travel_social_networking3-1.doc b/budget travel/Travel_social_networking3-1.doc
new file mode 100644
index 0000000..b3c8632
--- /dev/null
+++ b/budget travel/Travel_social_networking3-1.doc
Binary files differ
diff --git a/budget travel/Travel_social_networking3.pdf b/budget travel/Travel_social_networking3.pdf
new file mode 100644
index 0000000..e0ae663
--- /dev/null
+++ b/budget travel/Travel_social_networking3.pdf
Binary files differ
diff --git a/budget travel/Travel_social_networking3.rtf b/budget travel/Travel_social_networking3.rtf
new file mode 100644
index 0000000..2efc21a
--- /dev/null
+++ b/budget travel/Travel_social_networking3.rtf
@@ -0,0 +1,40 @@
+{\rtf1\ansi\deff0\adeflang1025
+{\fonttbl{\f0\froman\fprq2\fcharset0 Times New Roman;}{\f1\froman\fprq2\fcharset0 Times New Roman;}{\f2\fnil\fprq2\fcharset0 Arial;}{\f3\froman\fprq2\fcharset0 Thorndale{\*\falt Times New Roman};}{\f4\fswiss\fprq2\fcharset0 Albany{\*\falt Arial};}{\f5\fnil\fprq0\fcharset0 Times New Roman;}{\f6\fnil\fprq2\fcharset0 HG Mincho Light J{\*\falt msmincho};}{\f7\fnil\fprq2\fcharset0 Arial Unicode MS;}}
+{\colortbl;\red0\green0\blue0;\red0\green0\blue128;\red255\green255\blue255;\red128\green128\blue128;}
+{\stylesheet{\s1\li86\ri86\lin86\rin86\fi0\sb86\sa86\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033\snext1 Normal;}
+{\s2\li86\ri86\lin86\rin86\fi0\sb240\sa283\keepn\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\rtlch\af7\afs28\lang255\ltrch\dbch\af6\langfe255\hich\f4\fs28\lang1033\loch\f4\fs28\lang1033\sbasedon1\snext3 Heading;}
+{\s3\li0\ri0\lin0\rin0\fi0\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033\sbasedon1\snext3 Body Text;}
+{\s4\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033\sbasedon3\snext4 List;}
+{\s5\li86\ri86\lin86\rin86\fi0\sb120\sa120\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\rtlch\af5\afs24\lang255\ai\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\i\loch\f5\fs24\lang1033\i\sbasedon1\snext5 caption;}
+{\s6\li86\ri86\lin86\rin86\fi0\sb86\sa86\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033\sbasedon1\snext6 Index;}
+{\s7\li86\ri86\lin86\rin86\fi0\sa283\brdrb\brdrdb\brdrw15\brdrcf4\brsp0{\*\brdrb\brdlncol4\brdlnin1\brdlnout1\brdlndist20}\brsp0\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs12\lang1033\loch\f5\fs12\lang1033\sbasedon1\snext3 Horizontal Line;}
+{\s8\li86\ri86\lin86\rin86\fi0\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\i\loch\f5\fs24\lang1033\i\sbasedon1\snext8 envelope return;}
+{\s9\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033\sbasedon3\snext9 Table Contents;}
+{\s10\li86\ri86\lin86\rin86\fi0\sb86\sa86\cf0{\*\tlswg8236}\tqc\tx4904{\*\tlswg8236}\tqr\tx9723{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033\sbasedon1\snext10 footer;}
+{\s11\li86\ri86\lin86\rin86\fi0\sb86\sa86\cf0{\*\tlswg8236}\tqc\tx4904{\*\tlswg8236}\tqr\tx9723{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033\sbasedon1\snext11 header;}
+{\s12\li86\ri86\lin86\rin86\fi0\sb240\sa283\keepn\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\rtlch\af7\afs48\lang255\ab\ltrch\dbch\af6\langfe255\hich\f3\fs48\lang1033\b\loch\f3\fs48\lang1033\b\sbasedon2\snext3{\*\soutlvl0} heading 1;}
+{\*\cs14\cf0\rtlch\af2\afs24\lang255\ltrch\dbch\af2\langfe255\hich\f0\fs24\lang1033\loch\f0\fs24\lang1033 Endnote Symbol;}
+{\*\cs15\cf0\rtlch\af2\afs24\lang255\ltrch\dbch\af2\langfe255\hich\f0\fs24\lang1033\loch\f0\fs24\lang1033 Footnote Symbol;}
+{\*\cs16\cf2\ul\ulc0\rtlch\af2\afs24\lang255\ltrch\dbch\af2\langfe255\hich\f0\fs24\lang1033\loch\f0\fs24\lang1033 Internet link;}
+}
+{\info{\creatim\yr2008\mo11\dy24\hr13\min2}{\revtim\yr1601\mo1\dy1\hr0\min0}{\printim\yr1601\mo1\dy1\hr0\min0}{\comment StarWriter}{\vern6800}}\deftab709
+{\*\pgdsctbl
+{\pgdsc0\pgdscuse195\pgwsxn12240\pghsxn15840\marglsxn1800\margrsxn1800\margtsxn1440\margbsxn1440\pgdscnxt0 Standard;}
+{\pgdsc1\pgdscuse195\pgwsxn12240\pghsxn15840\marglsxn1800\margrsxn1800\margtsxn1440\margbsxn1440\pgdscnxt1 Endnote;}
+{\pgdsc2\pgdscuse195\pgwsxn12240\pghsxn15840\marglsxn1134\margrsxn567\margtsxn567\margbsxn567{\cbpat3}\pgdscnxt2 HTML;}}
+{\*\pgdscno2}\paperh15840\paperw12240\margl1134\margr567\margt567\margb567\sectd\sbknone\pgwsxn12240\pghsxn15840\marglsxn1134\margrsxn567\margtsxn567\margbsxn567{\cbpat3}\ftnbj\ftnstart1\ftnrstcont\ftnnar\aenddoc\aftnrstcont\aftnstart1\aftnnrlc
+\pard\plain \ltrpar\s3\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\li0\ri0\lin0\rin0\fi0\sa283\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033{\rtlch \ltrch\loch\f5\fs24\lang1033\i0\b{\b RUBRIC: Websmart}}{\rtlch \ltrch\loch\f5\fs24\lang1033\i0\b0 }
+\par \pard\plain \ltrpar\s3\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\li0\ri0\lin0\rin0\fi0\sa283\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033{\rtlch \ltrch\loch\f5\fs24\lang1033\i0\b{\b HED: TKTKTKTKTK}}{\rtlch \ltrch\loch\f5\fs24\lang1033\i0\b0 \line \line {\b Matador Travel}\line Matador mixes professionally-written content with Facebook-style blogs where fellow travelers share stories and offer inside tips on your destination. Check out the community map, where quick links can show you map pins for
+member blogs, staff guides, eco organizations and more. Fancy yourself a travel writer? Matador is one of the few sites on the web that pay contributors. Just head to the "Bounty Board" and see what stories are up for grabs. The pay isn't enough to f
+inance that dream trip to Fiji, but it's better than most.\~\line \line {\b TripIt}}
+\par \pard\plain \ltrpar\s3\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\li0\ri0\lin0\rin0\fi0\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033 {\rtlch \ltrch\loch\f5\fs24\lang1033\i0\b0 One of the site\'92s coolest functions is the itinerary builder: Forward all of your e-mail confirmations from airlines, hotels, and travel agents to plans@tripit.com. There's no need to set up an account, the site will do that automatically, as well as compi
+le your confirmations into a downloadable schedule, along with weather forecasts, maps, and restaurant ideas. TripIt also has a strong community of travelers -- members often use the site to arrange carpools to and from the airport, serve as "hometown guid
+es" and more. }
+\par \pard\plain \ltrpar\s3\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\li0\ri0\lin0\rin0\fi0\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033
+\par \pard\plain \ltrpar\s3\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\li0\ri0\lin0\rin0\fi0\sa283\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033 {\rtlch \ltrch\loch\f5\fs24\lang1033\i0\b0 \line {\b GeckoGo} }
+\par \pard\plain \ltrpar\s3\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\li0\ri0\lin0\rin0\fi0\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033 {\rtlch \ltrch\loch\f5\fs24\lang1033\i0\b0 By opening up nearly all its content to member editing -- it works much like Wikipedia -- GeckoGo almost instantly attracted a thriving community full of trekkers supplying hotel and reviews, photos, comments, and suggestions. Be sure to check out the "ans
+wers" tab, which acts like a Yahoo Answers for travel. Post your questions about a place and members will offer their advice. Our query about boutique hotels in Paris had three replies inside of 48 hours.}
+\par \pard\plain \ltrpar\s3\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\li0\ri0\lin0\rin0\fi0\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033
+\par \pard\plain \ltrpar\s3\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\li0\ri0\lin0\rin0\fi0\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033
+\par \pard\plain \ltrpar\s3\cf0{\*\hyphen2\hyphlead2\hyphtrail2\hyphmax0}\li0\ri0\lin0\rin0\fi0\rtlch\af5\afs24\lang255\ltrch\dbch\af5\langfe255\hich\f5\fs24\lang1033\loch\f5\fs24\lang1033
+\par }
\ No newline at end of file
diff --git a/budget travel/Travel_social_networking4.doc b/budget travel/Travel_social_networking4.doc
new file mode 100644
index 0000000..0452a72
--- /dev/null
+++ b/budget travel/Travel_social_networking4.doc
Binary files differ
diff --git a/budget travel/budgettravel.txt b/budget travel/budgettravel.txt
new file mode 100644
index 0000000..8bf049e
--- /dev/null
+++ b/budget travel/budgettravel.txt
@@ -0,0 +1 @@
+Facebook keeps you in touch with your friends while you're traveling, but if you're looking to connect with fellow travelers or get the latest information on out-of-the-way destinations, travel-specific social networks are the answer. There are plenty of Facebook clones for travelers on the web, but these three rise above the rest. Matador Travel: A sprawling internet metropolis of travel, Matador Travel has a little bit of everything -- from professionally-written guides to socially-connected member blogs. Check out the trip reports for the inside scoop on your destination, use the planning tools to map your route and be sure to sign up so you can impart your own wisdom when you get home. TripIt: TripIt wants to organize all your confirmation e-mails from travel agents, airlines, hotels, etc., and turn them into an itinerary. Just forward the e-mail to TripIt and the site will categorize them for you -- you don't even need to sign up, TripIt will automatically create an account when you send the first e-mail. TripIt also boasts a burgeoning community of travelers -- organize an airport carpool, hook up with locals when you land or serve as a guide for fellow travelers headed to your area. GeckoGo: Newcomer GeckoGo has already attracted a thriving community full of travel guides, photos, comments and suggestions. GeckoGo caters to independent travelers looking to escape the package trip hordes. Be sure to check out the "answers" tab, which acts like a Yahoo Answers for travel. Post your questions and a travel expert, as well as the GeckoGo community, will chime in with answers.
\ No newline at end of file
diff --git a/budget travel/travelsocialnetworks.txt b/budget travel/travelsocialnetworks.txt
new file mode 100644
index 0000000..8bf049e
--- /dev/null
+++ b/budget travel/travelsocialnetworks.txt
@@ -0,0 +1 @@
+Facebook keeps you in touch with your friends while you're traveling, but if you're looking to connect with fellow travelers or get the latest information on out-of-the-way destinations, travel-specific social networks are the answer. There are plenty of Facebook clones for travelers on the web, but these three rise above the rest. Matador Travel: A sprawling internet metropolis of travel, Matador Travel has a little bit of everything -- from professionally-written guides to socially-connected member blogs. Check out the trip reports for the inside scoop on your destination, use the planning tools to map your route and be sure to sign up so you can impart your own wisdom when you get home. TripIt: TripIt wants to organize all your confirmation e-mails from travel agents, airlines, hotels, etc., and turn them into an itinerary. Just forward the e-mail to TripIt and the site will categorize them for you -- you don't even need to sign up, TripIt will automatically create an account when you send the first e-mail. TripIt also boasts a burgeoning community of travelers -- organize an airport carpool, hook up with locals when you land or serve as a guide for fellow travelers headed to your area. GeckoGo: Newcomer GeckoGo has already attracted a thriving community full of travel guides, photos, comments and suggestions. GeckoGo caters to independent travelers looking to escape the package trip hordes. Be sure to check out the "answers" tab, which acts like a Yahoo Answers for travel. Post your questions and a travel expert, as well as the GeckoGo community, will chime in with answers.
\ No newline at end of file
diff --git a/cmd/.gitignore b/cmd/.gitignore
new file mode 100644
index 0000000..fd35fbf
--- /dev/null
+++ b/cmd/.gitignore
@@ -0,0 +1,2 @@
+*.pdf
+lib/
diff --git a/cmd/Screen Shot 2015-04-01 at 1.34.13 PM.png b/cmd/Screen Shot 2015-04-01 at 1.34.13 PM.png
new file mode 100644
index 0000000..b8723d1
--- /dev/null
+++ b/cmd/Screen Shot 2015-04-01 at 1.34.13 PM.png
Binary files differ
diff --git a/cmd/Screen Shot 2015-04-01 at 1.34.35 PM.png b/cmd/Screen Shot 2015-04-01 at 1.34.35 PM.png
new file mode 100644
index 0000000..bb96cc5
--- /dev/null
+++ b/cmd/Screen Shot 2015-04-01 at 1.34.35 PM.png
Binary files differ
diff --git a/cmd/adv-autojump.txt b/cmd/adv-autojump.txt
new file mode 100644
index 0000000..dc39756
--- /dev/null
+++ b/cmd/adv-autojump.txt
@@ -0,0 +1,5 @@
+Autojump for advanced navigation:
+
+https://github.com/joelthelion/autojump
+
+
diff --git a/cmd/basics.txt b/cmd/basics.txt
new file mode 100644
index 0000000..9415829
--- /dev/null
+++ b/cmd/basics.txt
@@ -0,0 +1,196 @@
+Test this with an article for CSSTricks and a sign-up for a tips mailing list. Just write the article, a week's worth of tips, and pitch it to see what happens.
+
+
+
+
+The reason the command line comes across as pure wizardry much of the time is that popular tutorials and books teach commands, tools and skills that web developers don't necessarily need.
+
+Unless you're the rare web developer / sysadmin you don't really need to know how to tail log files, search with grep and awk, pipe things between apps or write shell scripts. All of those things can come in handy, but they're not essential for web developers.
+
+For web developers like us most command line books fail the basic "how will this help me?" test.
+
+I know because I've read a bunch of them and, while they all eventually did prove useful down the road, none of them helped me automate tasks with Grunt or got me started with Sass or helped improve my development environment and deployment tools.
+
+All that I had to figure out myself with man pages, scattered tutorials and more than a healthy dose of trial and error. More like trial and error, error, error, trial, error, error, error, error, error, error.
+
+**** tk I want to spare you that rather lengthy process and get you started using command line tools. This book is not cumulative. That is, each chapter is designed to stand alone. Scan the table of contents here, find one thing you want to learn how to do and go read that chapter.***
+
+**** Once you've accomplished something then come back here and you can learn a little more foundational stuff.***
+
+## The Basics: Open Your Terminal Application
+
+If you're on OS X the built-in application is Terminal, which lives in Applications >> Utilities. While Terminal will work, I highly suggest you grab iTerm instead. It's free, much faster and has a boatload of useful features for when you're more comfortable.
+
+If you're on Windows grab [Cygwin][1] and follow the setup instructions.
+
+If you're on Linux you probably didn't buy this book. But on the outside chance you did, your distro most likely came with some sort of terminal application installed. Search for "terminal" or "command line" and see what you can find. If you want an everything-and-the-kitchen-sink terminal, check out [Terminology][2].
+
+Okay so you've got a window open in some kind of terminal emulator.
+
+Let's start with some basics, like why is it called an emulator?
+
+Well back in the day a terminal was just a screen and a keyboard that connected to a mainframe via a serial port. Seriously. The terminal was literally at the end of the line. You sat down and typed your commands into the terminal and the terminal sent them off to the actual computer. So a terminal emulator is just emulating that basic interface.
+
+Oh my gawd, why the @#$% would I want to go back to those days?
+
+I don't know, why do you drive a car with a 12,000 year old invention like the wheel still primitively strapped to the bottom of it?
+
+Because it works.
+
+So a terminal emulator opens up and then it loads a shell. Now a shell is an extra layer between you and that metaphorical mainframe off in the distance. To stick with the wheel analogy, a shell is like a rubber wheel vs the stone wheel of a bare terminal. While wheels are just as useful as ever after all these years, most of us don't use the stone ones anymore. Same for the terminal. The shell is much nicer than a bare terminal.
+
+By default most operating systems these days will load the Bash shell. Another popular shell you'll sometimes see referenced is the Z Shell, or more commonly Zsh. The two are pretty much indistinguishable in the beginning, though Zsh has some very powerful autocomplete features Bash does not. For now we'll stick with Bash. In the next chapter we'll kick off the Bash training wheels and start using Zsh.
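+
+If you're ever unsure which shell a terminal window is running, this will usually tell you (a quick sanity check; a leading dash in the output just means it's a login shell):
+
+~~~
+~ » echo $0
+-bash
+~~~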
+
+## Basics: Figure Out Where You Are
+
+Okay, you've probably got a screen that looks something like this:
+
+![tk screen of terminal no frills]
+
+Go ahead and open up the preferences for your terminal and pick a color scheme that's a little easier on the eyes. I happen to like [Solarized Dark][3] which is the blueish color you'll see in all the screenshots. So when I open a terminal it looks like this:
+
+![tk iTerm win]
+
+Ah, that's so much better. Okay, now where are we? It turns out our terminal is telling us the answer to that question; we just need to figure out how to read it. Here's the basic line from the screenshots above:
+
+~~~
+Last login: <date> on ttys018
+You have mail.
+iMac:~ sng$
+~~~
+
+So first there's a line about the last time I logged in, which you can pretty much ignore unless you're a sysadmin, in which case email me for a refund. Then there's a line about me having mail because I have a cron job running that sometimes errors out and sends me a message that it failed. As you can see I just ignore this message.
+
+Then we have the stuff that actually matters, this bit:
+
+~~~
+iMac:~ sng$
+~~~
+
+Yours will be a little different because your machine name may well be more creative than "iMac" and your username will not be `sng`. But there is something else in there to note... See that `~` character after the colon and before the username?
+
+That's where we are. It just so happens that by default terminal windows open up in the user's home directory, another throwback to the old mainframe days, but one that still makes sense. After all there's a really good chance you want to be in your home directory since that's where all your projects and files are.
+
+If you want to know where you are, you can always find out just by typing `pwd`. The `pwd` command just tells you the absolute path to the current working directory.
+
+~~~
+~ » pwd
+/Users/sng
+~ »
+~~~
+
+Note that for the rest of the book I'm going to use a simplified prompt in all the example code. I'll omit the machine name (iMac in the above example) and use the » symbol instead of the $. So everything will look like this:
+
+~~~
+~ » command
+~~~
+
+Nice and clean and simple. In a few chapters I'll show you how to modify what yours looks like.
+
+Okay, so we know we're in our home directory, let's see what's in this mysterious directory, type `ls` and hit return:
+
+~~~
+~ » ls
+Downloads Library Public
+Dropbox Movies Pictures
+Documents Music Sites
+~~~
+
+The exact spacing of the list and the folders within it will depend on what's in your home directory. But if you go open up your file browser (that's the Finder on OS X and Windows Explorer in Windows) you should see the exact same list of directories or, as your file browser calls them, folders.
+
+So what is `ls`? Well the short answer is that it's the list command, it lists the contents of the current directory.
+
+When you type `ls` and nothing else it will show you the content of the current directory. Chances are, regardless of what platform you're on, there's a Documents folder in your home directory. Let's see what's in it:
+
+~~~
+~ » ls Documents
+~~~
+
+Chances are that produced a very long, jumbled and difficult-to-read list of all the stuff that's in your Documents folder. To make it a bit prettier, let's use what are called flags or options; in this case we'll add `-lh`:
+
+~~~
+~ » ls -lh Documents
+drwxr-xr-x@ 4 sng staff 136B Mar 5 09:08 Virtual Machines.localized
+drwxr-xr-x@ 20 sng staff 680B Jan 12 20:57 archive
+drwxr-xr-x@ 31 sng staff 1.0K Feb 19 17:27 bak
+drwxr-xr-x@ 368 sng staff 12K Oct 29 11:25 bookmarks
+drwxr-xr-x@ 42 sng staff 1.4K Mar 9 21:36 dotfiles
+drwxr-xr-x@ 12 sng staff 408B Sep 24 14:22 misc
+drwxr-xr-x@ 6 sng staff 204B Oct 20 11:39 reading notes
+drwxr-xr-x@ 85 sng staff 2.8K Dec 30 09:33 recipes
+drwxr-xr-x@ 8 sng staff 272B Sep 1 2014 reference
+drwxr-xr-x@ 15 sng staff 510B Oct 13 10:49 writing
+drwxr-xr-x@ 36 sng staff 1.2K Mar 9 19:56 writing luxagraf
+drwxr-xr-x@ 18 sng staff 612B Feb 19 19:12 yellowstone
+~~~
+
+Now what's all that crap? Well, ignore the first column for now, those are the permission flags. Then comes a link count (for a directory, roughly the number of items it contains), then the owner, then the group name, then the file size. Because we added the `-h` flag we get file sizes in human-readable units; in the example above you can see everything is either kilobytes or bytes, but if you have larger files you might see megabytes (MB) or gigabytes (GB). Then there's the last modified date and time and finally the name of the directory or file.
+
+Now you know how to see the contents of any directory on your machine using the command line. Or at least all the stuff you'd see in a graphical program. There may be hidden files though. To see the hidden files you can add the `-a` flag to the `ls` command. Try running this in your home directory to see some files you might never have known were there (they'll be the ones that start with a dot):
+
+~~~
+~ » ls -lah
+~~~
+
+Now you may have noticed that there's no easy way to tell directories from files. Some files might have extensions, like .txt or .pdf, but they don't have to have extensions, so how do you tell them apart? The most common way is with either color or bold text. Typically directories will be bold or a different color.
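+
+If your terminal doesn't show colors, most versions of `ls` also accept a `-F` flag, which appends a `/` to each directory name (the names below are just an example):
+
+~~~
+~ » ls -F
+Documents/ Downloads/ Sites/ notes.txt
+~~~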
+
+So now we know how to figure out where we are (just type `pwd`) and what's in any folder (just type `ls -lh path/to/directory`). Let's figure out how to move to another directory so we can work in it.
+
+## Basics: How To Move Around
+
+Let's start by creating a (possibly) new folder in our home directory. We'll create a "Sites" folder where we can keep all our web dev projects. To do that we use the `mkdir` command, which is short for "make directory".
+
+~~~
+~ » mkdir Sites
+mkdir: Sites: File exists
+~~~
+
+As you can see I already have a Sites folder so `mkdir` did nothing, but if there were no folder there named Sites there would be now. Now let's create a sample project inside the Sites folder named "my-project". We'll use mkdir again, but this time we'll add the `-p` flag:
+
+~~~
+~ » mkdir -p Sites/my-project
+~~~
+
+Now go to your file browser (Finder/Windows Explorer) and see for yourself. There's a new folder inside your Sites folder.
+
+The `-p` flag tells mkdir to make every directory in the specified path all at once. So if we wanted to create a folder at the path `Sites/my-project/design/sass/partials/` we could do it in a single step just by typing:
+
+~~~
+~ » mkdir -p Sites/my-project/design/sass/partials
+~~~
+
+Okay, let's now move to the my-project folder inside the Sites folder.
+
+~~~
+~ » cd Sites/my-project
+Sites/my-project »
+~~~
+
+Now notice the prompt has changed, the `~` is gone and we now see the path to where we are is Sites/my-project. Cool.
+
+Quick question though, did you type out "Sites" and "my-project"? I'd wager you did, but you didn't need to.
+
+Let's go back to our home directory, just type `cd` and hit return.
+
+Okay, we're home. Now type `cd S` and hit tab. Did that S magically turn into Sites? Very cool. Now type "m" and hit tab again to see "my-project" also auto-complete. Learn to love the tab key; it will autocomplete just about everything. You should never really need to type more than a few letters of whatever you're looking for before hitting tab to autocomplete the rest.
+
+Okay hit return and we're once again in our Sites/my-project folder. You can verify that you're in that folder by looking at your prompt.
+
+## Conclusion
+
+That was a lot, but now you know the basics of the command line.
+
+You can see where you are, list what files and folders are on your machine, create new folders when you need one and change to any folder you want. The only thing we didn't cover is how to create files; for that you can use `touch`. From your my-project folder type this:
+
+~~~
+~ » touch my-first-file.txt
+~~~
+
+Now you can type `ls` and you'll see the file you just created.
+
+That's the basics of navigating the file system of your computer (or a remote computer you've logged into) from the command line.
+
+
+[1]: http://cygwin.com/
+[2]: http://www.enlightenment.org/p.php?p=about/terminology
+[3]: http://ethanschoonover.com/solarized
diff --git a/cmd/changing-shells.txt b/cmd/changing-shells.txt
new file mode 100644
index 0000000..47921b9
--- /dev/null
+++ b/cmd/changing-shells.txt
@@ -0,0 +1,73 @@
+## Why You're Going to Change Your Shell to Zsh
+
+Wait, we just got the Terminal app open and now you want to go and change the shell, WTF?
+
+The reason many people give up on the command line isn't because the command line is overly difficult, it's because they're slower using it than they are using GUI apps.
+
+You've been using Photoshop for so long you know all the helpful keyboard shortcuts you need to work efficiently. The same is probably true of your favorite text editor and everything else you use regularly.
+
+In order to get you working much more quickly in the terminal we need to get a shell that has the equivalent of powerful keyboard shortcuts.
+
+We need Zsh.
+
+Zsh is going to make navigating the command line much faster and easier, especially when you're first starting out.
+
+The biggest help Zsh provides is auto-complete on steroids.
+
+The most time-consuming thing you'll do on the command line is type out paths and commands. Sure, `cd` is much faster to type than `change-directory`, but you still need to type the full path to the directory you want to change to. Unless you have Zsh.
+
+Here's an animated gif showing me navigating from my home directory to /var/www where I keep the sites on my server:
+
+[! tk animated gif of var www change]
+
+There isn't much Zsh can't auto-complete. If you install the pre-built package Oh-My-Zsh -- which I'll show you how to do in the next section -- you'll be able to auto-complete everything from paths to command options, even git commands.
+
+For example, I tend to have a lot of symlinks on my servers and I like `ls` to show me the actual path when there's a symlink, but I can never remember what the flag is for that so I just type `ls -` and then hit `tab` and Zsh helpfully gives me a list of options (in this case the flag I want is `-L`):
+
+![tk screen of ls - flag complete]
+
+Another great feature of Zsh is spelling correction. We'll be typing commands like `ls` (list all the files and folders in the current working directory) and `cd` (change directory, AKA move to a new folder) a lot, and you will inevitably accidentally type `dc` or `sl` followed by some long path. Rather than make you retype the whole thing, Zsh will simply ask, did you mean `ls`? Type 'y' for yes and Zsh will run the command as `ls`.
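+
+Here's roughly what that looks like (assuming spelling correction is turned on, which Oh-My-Zsh can handle for you):
+
+    ~ » sl
+    zsh: correct 'sl' to 'ls' [nyae]?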
+
+All of the examples and screenshots in this book are done in Zsh. The Z shell is perhaps best thought of as Bash improved. Technically it was derived from the Korn Shell, but for the most part it's a drop-in replacement for Bash. There are differences, but for now you can ignore them -- if it works in Bash, it'll work in Zsh.
+
+## Install and Change to Zsh
+
+Okay, let's install Zsh and make it our default shell. If you're on OS X and you've already got Homebrew installed, just type:
+
+ brew install zsh
+
+Ubuntu/Debian users can use:
+
+    sudo apt-get install zsh
+
+If you're on Windows, using Cygwin:
+
+ apt-cyg install zsh
+
+That gets Zsh installed, now we need to make it the default so that we always start in a Z shell when we open a new terminal window. On OS X and Linux you can use this command:
+
+ chsh -s $(which zsh)
+
+For Windows/Cygwin head to C:/cygwin/Cygwin.bat and open that file in a text editor. Look for the line:
+
+ cygwin\bin\bash --login -i
+
+and change it to:
+
+ cygwin\bin\zsh -l -i
+
+Now you just need to log out and log back in and you'll have Zsh set up.
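+
+Once you're back in, you can double-check that the change took effect (the exact path will vary depending on how you installed Zsh):
+
+    ~ » echo $SHELL
+    /bin/zsh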
+
+## Installing Oh-My-Zsh
+
+Zsh is our default shell now. That's a good start, but we're going to go a step further and set up a wonderful set of tools that goes by the name [Oh-My-Zsh](https://github.com/robbyrussell/oh-my-zsh).
+
+Oh-My-Zsh is, in its own words, a "community-driven framework for managing your zsh configuration. Includes 180+ optional plugins (rails, git, OSX, hub, capistrano, brew, ant, php, python, etc), over 120 themes to spice up your morning, and an auto-update tool so that makes it easy to keep up with the latest updates from the community."
+
+Basically it gives you a lot of powerful tools along with some sane configuration defaults.
+
+To install Oh-My-Zsh you can just paste this line in your terminal:
+
+ wget https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh -O - | sh
+
+If you'd like to play around with Oh-My-Zsh and try some different configurations, be sure to read through [the documentation](https://github.com/robbyrussell/oh-my-zsh) on GitHub. For now we'll just stick with the defaults that Oh-My-Zsh set up for us.
diff --git a/cmd/draft_edits.txt b/cmd/draft_edits.txt
new file mode 100644
index 0000000..eb88a61
--- /dev/null
+++ b/cmd/draft_edits.txt
@@ -0,0 +1,35 @@
+
+
+
+
+
+Linux comes in many flavors. You may have heard of Ubuntu; it's not a bad choice, but I prefer something very similar called Debian. Debian is actually the base that Ubuntu is built on. Since we don't need a fancy desktop or any of that stuff we'll just stick with Debian. It's the most popular server OS on the web by a wide margin so we're not really going out on a limb here.
+
+If you picked Digital Ocean, here's what the setup screen looks like:
+
+
+
+
+Wait, you said we'd install a fully custom server...
+
+Patience my friend. Let's start with something basic, like getting WordPress up and running. Do I like WordPress? Not really, but it's popular
+
+
+
+
+Start with the problem
+Then the dream
+Then the solution
+
+With Bash if we want to auto-complete when we're changing directories with `cd` we have to type the first few letters of the directory name before tab auto-completes anything. It offers a list of all the directories available, but it doesn't actually complete anything. With Zsh you can hit tab twice and it will auto-complete the first directory in the directory you're in. In other words, hit tab twice in Zsh and it will auto-complete the first thing you'd see if you typed `ls`. Hit tab again and it moves to the next directory and so on.
+
+Zsh also autocompletes things like git subcommands and even flags, for example, here I just
+
+spelling set autocorrections
+
+syntax highlighting (valid commands are green)
+Zsh goes one better. You can type part of a command and press <UP>
+
+
+
+It finds the last command we typed starting with ‘ls’. We could continue pressing up to cycle if we wanted.
diff --git a/cmd/homebrew.txt b/cmd/homebrew.txt
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/cmd/homebrew.txt
diff --git a/cmd/intro.txt b/cmd/intro.txt
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/cmd/intro.txt
diff --git a/cmd/notes.txt b/cmd/notes.txt
new file mode 100644
index 0000000..49ffcb5
--- /dev/null
+++ b/cmd/notes.txt
@@ -0,0 +1,2 @@
+As we navigate through the filesystem, there are some conventions. The current working directory (cwd)—whatever directory we happen to be in at the moment—is specified by a dot:
+.
diff --git a/cmd/read-docs.txt b/cmd/read-docs.txt
new file mode 100644
index 0000000..a0c4cd3
--- /dev/null
+++ b/cmd/read-docs.txt
@@ -0,0 +1,55 @@
+It is impossible to over-emphasize the importance of Reading The Fucking Manual. There are two ways to get better at stuff: reading and doing. The more you do the first, the better you get at the second. And of course the more you do, the better you get.
+
+RTFM can be a little bit of a confusing notion for web developers though. Unlike programmers, who have the Python documentation or RubyDoc files they can read through, HTML and CSS, the primary tools of our trade, do not really have a manual, at least not in the traditional sense. There's the specification, and while reading specs is both as boring as you would expect and far more useful than you might expect (I encourage you to do it), it's not a place you'd normally turn to to answer a quick question.
+
+At the same time typing "how to use css transform" into Google is a crap shoot. You might get something useful, but it might be outdated, browsers might have changed things, the spec might have been re-written for some reason.
+
+The best docs I know of, and what I use for HTML, are the [Mozilla Developer Network](https://developer.mozilla.org), though it's rare that I visit the actual site. I do my development work in Vim and on the command line, so switching to the browser, opening a new tab and then typing the URL followed by whatever I want to look up... fuck that. Sure, it doesn't sound like much, and it's not much from a pure time perspective, but it's a huge context switch; it pulls my brain out of what I'm doing and directs it somewhere else from which it may not return immediately.
+
+Perhaps I just have some sort of attention deficit disorder, but I can't tell you how many times I have tabbed over to Chromium to find the answer to some question and found myself browsing 1969 Yellowstone trailer images (I'm restoring one, if you must know) on obscure forums half an hour later. Context switching kills your ability to concentrate.
+
+That's where Dash for OS X and Zeal for Windows/Linux come in. Both applications are offline browsers for documentation. You can download the MDN docs for HTML and CSS, as well as more traditional documentation sources like JavaScript, Ruby, Python or hundreds of others.
+
+This way I can minimize the potential distractions. It's still a context switch -- another app opens, though using Dash's HUD view my full screen terminal stays where it is in the background -- but there's no internet there to disappear into for hours.
+
+As an added bonus I can get answers even when I'm working somewhere I don't have access to wifi. This is especially useful since I like to block my time and actually work for half, sometimes a whole day without allowing myself access to the web. I sit down and write, just me, the command line and the task at hand. No distractions.
+
+In fact Dash/Zeal may be the most effective focus/productivity tools I use and yet I never really think of them that way. Still, they can save you from yourself so go grab whichever one you need, follow the installation instructions, download some "docsets" (Dash/Zeal's name for documentation) and then come back and I'll tell you how I use them.
+
+Here's a little secret, I don't actually even have to open Dash or Zeal (I work on both an OS X machine and a Linux machine so I use both). Open an app? How primitive. I've got plugins for my text editor of choice (Vim); all I do is position my cursor over a word, hit a two-key shortcut combo and Dash/Zeal opens an overlay window with the results.
+
+That's great when you're in the editor, but sometimes my cursor is just there on the command line, no editor or anything else, and I want an easy way to search Dash/Zeal right where I am. So I wrote a very simple shell function that looks things up directly from the command line.
+
+Here's what that function looks like with Dash on OS X:
+
+~~~.
+function lu() {
+ open "dash://$*"
+}
+~~~
+
+Here's what the Zeal version for Linux/Windows with Cygwin looks like:
+
+~~~.
+function lu() {
+tk
+}
+~~~
+
+I call it "lu", which is short for "look up", because it doesn't conflict with any existing tools I'm aware of (at least nothing I've ever used). With these functions I can do this:
+
+~~~.
+lu html:picture
+~~~
+
+And instantly get a window with the MDN article on the HTML5 Picture element. If I want to look up the CSS transition article it would look like this:
+
+~~~.
+lu css:transition
+~~~
+
+And here's what it looks like when I do that, note that this chapter is still there in the background:
+
+![tk pic ]
+
+When I'm done I just hit Esc twice and that pop-up window goes away and I'm back to writing this sentence. Thanks to my function and Dash's keyboard shortcuts I never had to take my fingers off the keys and I never got distracted by anything shiny. I got in, got educated and got out.
diff --git a/cmd/ref/a whirlwind tour of web developer tools: command line utilities - wern ancheta.txt b/cmd/ref/a whirlwind tour of web developer tools: command line utilities - wern ancheta.txt
new file mode 100644
index 0000000..5a201a3
--- /dev/null
+++ b/cmd/ref/a whirlwind tour of web developer tools: command line utilities - wern ancheta.txt
@@ -0,0 +1,1215 @@
+---
+title: A Whirlwind Tour of Web Developer Tools: Command Line Utilities
+date: 2014-12-30T18:57:01Z
+source: http://wern-ancheta.com/blog/2014/03/08/a-whirlwind-tour-of-web-developer-tools-command-line-utilities/
+tags: #cmdline
+
+---
+
+In part five of the series A Whirlwind Tour of Web Developer Tools I'll walk you through some of the tools that you can use on the command line. But before we dive into the tools, let's first define what a command line is. According to [Wikipedia][1]:
+
+> A command-line interface (CLI), also known as command-line user interface, console user interface, and character user interface (CUI), is a means of interacting with a computer program where the user (or client) issues commands to the program in the form of successive lines of text (command lines).
+
+So the command line is basically an interface where you can type in a bunch of commands to interact with the computer.
+
+### Command Line Basics
+
+Before we jump into the tools it's important that we first understand the basics of using the command line. To access the command line in Linux press `ctrl + alt + t` on your keyboard. For Mac just look for the Terminal in your applications menu. And for Windows just press `windows + r`, type in `cmd`, then press `enter`.
+
+#### Commonly used Commands
+
+Here are some of the commands that you'll commonly use on a day-to-day basis:
+
+* **cd** – change directory
+* **mkdir** – create a new directory
+* **rmdir** – delete an existing directory
+* **touch** – create an empty file
+* **pushd** – push directory
+* **popd** – pop directory
+* **ls** – list files in a specific directory
+* **grep** – find specific text inside files
+* **man** – read a manual page
+* **apropos** – lists out commands that perform a specific action
+* **cat** – print out all the contents of a file
+* **less** – view the contents of a file (with pagination)
+* **sudo** – execute command as super user
+* **chmod** – modify the file permissions
+* **chown** – change file ownership
+* **find** – find files from a specific directory
+* **pwd** – print working directory
+* **history** – returns a list of the commands that you have previously executed
+* **tar** – creates a tar archive from a list of files
+
+If you are on Windows some commands might not be available to you. The solution would be to either switch to Linux (I definitely recommend Linux Mint or Ubuntu if you're planning to switch) or, if you want to stick with Windows, install [Cygwin][2] or the [GNU utilities][3] for Windows.
+
+I won't go ahead and provide you with a tutorial on how to use the commands above. There's tons of tutorials out there, so use Google to your advantage. You also have the `man` command to help you out. Here's how to use the `man` command:
+
+    man cd
+
+This will output all the information related to the `cd` command and how to use it. The `man` command is useful if you already know the name of the command. But in case you don't, you also have access to the `apropos` command, which lists out commands that match a specific action. Here's how to use it:
+
+    apropos delete
+
+Executing the command above produces an output similar to the following:
+
+![apropos][4]
+
+As you can see you can pretty much scan through the results and determine the command that you need to use based on the description provided. So if you want to delete a file you can just call the `unlink` command.
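+
+For example, to remove a file with it (the filename here is made up):
+
+    unlink old-notes.txt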
+
+#### Aliases
+
+Once you've gotten comfortable with the default commands you can start using shortcuts in order to make typing commands faster and easier. You can add aliases by creating a `.bash_aliases` file inside your home directory and adding contents similar to the following:
+
+    alias subl='/usr/bin/subl'
+    alias c='clear'
+    alias install='sudo apt-get install'
+    alias cp='cp -iv'
+    alias mv='mv -iv'
+    alias md='mkdir'
+    alias t='touch'
+    alias rm='rm -i'
+    alias la='ls -alh'
+    alias web-dir='cd ~/web_files'
+    alias e='exit'
+    alias s='sudo'
+    alias a='echo "------------Your aliases------------";alias'
+    alias ni='sudo npm install'
+    alias snemo='sudo nemo'
+    alias gi='git init'
+    alias ga='git add'
+    alias gc='git commit -m'
+    alias gca='git commit --amend -m'
+    alias gu='git push'
+    alias gd='git pull'
+    alias gs='git status'
+    alias gl='git log'
+
+As you can see from the example above, to add an alias simply put `alias` followed by the alias that you want to use, then `=` followed by the command wrapped in quotes. If you do not know the path to an executable you can use the `which` command followed by the command that you usually use. For example, for the `less` command:
+
+    which less
+
+It will then output the path to the executable file:
+
+    /usr/bin/less
+
+This is the path that you can use in the alias.
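+
+One thing to note: new aliases won't take effect in an already-open terminal until you reload the file (or open a new window):
+
+    source ~/.bash_aliases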
+
+### Command Line Tools
+
+#### Wget
+
+Useful for pulling files from a server. For example you can use this to download a specific library or asset for your project into your current working directory:
+
+    wget http://cdnjs.cloudflare.com/ajax/libs/angular.js/1.2.10/angular.min.js
+
+The command above will pull the file from the URL that you specified and copy it into the directory where your current terminal window is opened.
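+
+If you'd rather save the file under a different name, `wget` also accepts a `-O` flag for the output filename:
+
+    wget -O angular.js http://cdnjs.cloudflare.com/ajax/libs/angular.js/1.2.10/angular.min.js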
+
+#### Curl
+
+Curl is used for making HTTP requests. I'd like to describe it as a browser, but for the command line. You can do all sorts of stuff with Curl. For example you can use it to request a specific page from the web:
+
+    curl http://anchetawern.github.io
+
+##### Basic HTTP Authentication
+
+If the page uses basic HTTP authentication you can also specify a user name and a password. In the example below I am using Curl to request my recently bookmarked links from the delicious API:
+
+    curl -u username:password https://api.del.icio.us/v1/posts/recent
+
+##### Saving the Results to a File
+
+This will return an XML string. If you want to copy the result to a file you can simply redirect the output to a file:
+
+    curl -u username:password https://api.del.icio.us/v1/posts/recent > recent-bookmarks.xml
+
+##### Getting Header Information
+
+If you only want to get the header information from a specific request you can add the `-I` option:
+
+    curl -I http://google.com
+
+This will output a result similar to the following:
+
+    Location: http://www.google.com/
+    Content-Type: text/html; charset=UTF-8
+    Date: Fri, 21 Feb 2014 10:16:19 GMT
+    Expires: Sun, 23 Mar 2014 10:16:19 GMT
+    Cache-Control: public, max-age=2592000
+    Server: gws
+    Content-Length: 219
+    X-XSS-Protection: 1; mode=block
+    X-Frame-Options: SAMEORIGIN
+    Alternate-Protocol: 80:quic
+
+This is the same information that you see under the Network tab in Chrome Developer Tools, in the Headers section.
+
+##### Interacting with Forms
+
+You can also perform actions on forms. So for example if you have the following form from a web page somewhere:
+
+    <form action="form.php" method="GET">
+        <input type="text" name="query">
+        <input type="submit">
+    </form>
+
+You can fill out the form and perform the action as if you were in a browser by simply taking the required input names and supplying them in your command:
+
+    curl http://localhost/tester/curl/form.php?query=dogs
+
+For forms whose method is set to `POST`, you can also make the request using curl. All you have to do is add a `--data` option followed by a name-value pair, with the name being the name assigned to the input and the value being the value that you want to supply:
+
+    curl --data "query=cats" http://localhost/tester/curl/form.php
+
+##### Spoofing the HTTP referrer
+
+You can also spoof the http-referrer when making a request:
+
+    curl --referer http://somesite.com http://anothersite.com
+
+This reminds us that using the HTTP referrer as a means of checking whether to perform a specific action or not is really useless as it can be easily spoofed.
+
+##### Follow Redirects
+
+Curl also allows you to follow redirects. So for example if you're accessing a page which has a redirect like this:
+
+    <?php
+    header('Location: anotherfile.php');
+    echo 'zup yo!';
+    ?>
+
+Simply using the following command will result in the execution of the `echo` statement below the redirect:
+
+    curl http://localhost/tester/curl/file.php
+
+But if you add the `--location` option curl will follow the page that is specified in the redirect:
+
+    curl --location http://localhost/tester/curl/file.php
+
+So the output of the command above will be the contents of `anotherfile.php`.
+
+##### Cookies
+
+You can also supply cookie information on the requests that you make. So for example you are requesting a page which uses cookies as a means of determining if a user is logged in or not (note: you shouldn't use this kind of code in production):
+
+    <?php
+    $name = $_COOKIE["name"];
+    $db->query("SELECT id FROM tbl_users WHERE name = '$name'");
+    if($db->num_rows > 0){
+        echo 'logged in!';
+    }else{
+        echo 'sorry user does not exist';
+    }
+    ?>
+
+To request from the page above just add the `--cookie` option followed by the cookies that the page needs:
+
+    curl --cookie "name=fred" http://localhost/tester/curl/cookie.php
+
+If you need to specify more than one cookie simply separate them with a semi-colon:
+
+    curl --cookie "name=fred;age=22" http://localhost/tester/curl/cookie.php
+
+#### jq
+
+If you normally work with web APIs in your job, you might find the jq utility useful. What it does is format JSON strings, adding syntax highlighting so they become more readable. To install jq all you have to do is download the `jq` file from the [downloads page][5] and then move it into your `bin` folder:
+
+    mv ~/Downloads/jq /bin/jq
+
+After that you can start using jq to process JSON strings that come from curl requests by simply piping them to the `jq` command. For example, say we are making a request to the following file:
+
+    <?php
+    $names = array(
+        array(
+            'fname' => 'Gon',
+            'lname' => 'Freecs',
+            'nen_type' => 'enhancement',
+            'abilities' => array(
+                'rock', 'paper', 'scissors'
+            )
+        ),
+        array(
+            'fname' => 'Killua',
+            'lname' => 'Zoldyc',
+            'nen_type' => 'transmutation',
+            'abilities' => array(
+                'lightning bolt',
+                'thunderbolt',
+                'godspeed'
+            )
+        ),
+        array(
+            'fname' => 'Kurapika',
+            'lname' => '',
+            'nen_type' => array('conjuration', 'specialization'),
+            'abilities' => array(
+                'holy chain',
+                'dowsing chain',
+                'chain jail',
+                'judgement chain',
+                'emperor time'
+            )
+        ),
+        array(
+            'fname' => 'Isaac',
+            'lname' => 'Netero',
+            'nen_type' => 'enhancement',
+            'abilities' => array(
+                '100-Type Guanyin Bodhisattva',
+                'First Hand',
+                'Third Hand',
+                'Ninety-Ninth Hand'
+            )
+        ),
+        array(
+            'fname' => 'Neferpitou',
+            'lname' => '',
+            'nen_type' => 'specialization',
+            'abilities' => array(
+                'Terpsichora',
+                'Doctor Blythe',
+                'Puppeteering'
+            )
+        )
+    );
+    echo json_encode($names);
+
+Normally we would do something like this:
+
+    curl http://localhost/tester/curl/json.php
+
+But this returns a result that looks like this:
+
+![json string][6]
+
+Piping the result to `jq`:
+
+    curl http://localhost/tester/curl/json.php | jq "."
+
+We get a result similar to the following:
+
+![jq formatted][7]
+
+Pretty sweet! But you can do much more than that; check out the [manual page][8] for the jq project for more information.
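+
+For instance, given the JSON above, a filter like `.[].fname` prints just the first names:
+
+    curl http://localhost/tester/curl/json.php | jq '.[].fname'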
+
+#### Vim
+
+Vim is a text editor based on Vi, a text editor that comes pre-installed on most Linux distributions. But hey, you might say, the main topic of this blog post is command-line tools, so why are we suddenly talking about text editors? Well, it's because Vim is tightly coupled with the terminal. It's like a terminal/text-editor crossbreed: you can both execute commands and write code with it.
+
+You can download Vim from the [Vim downloads page][9]; simply select the version that's applicable to the operating system you're currently using. But if you're on Linux Mint, Ubuntu or other Linux distributions that use `apt-get`, then you can simply execute the following command from the terminal:
+
+    sudo apt-get install vim
+
+There are lots of tutorials on the web that can help you with learning vim (I'll link to them later). But for now I'm going to give you a quick tutorial to get you started.
+
+The first thing you need to know is how to open up files with vim. You can do it by executing the following command:
+
+    vim file_that_you_want_to_edit.txt
+
+You can also open up more than one file:
+
+    vim file1.txt file2.txt file3.txt
+
+You can then switch between the files while in command mode. First, list out the files that are currently open in vim:
+
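+    :ls
+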
+This will output the list of open files, each with an id that you can use to refer to it when switching:
+
+![list of files][10]
+
+To switch to `file2.txt` you can use the `:b` command followed by its id (say the id shown is 2):
+
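+    :b 2
+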
+An alternative would be to use the file name itself:
+
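+    :b file2.txt
+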
+Or you can also just switch to the next file:
+
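+    :bn
+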
+Or switch to the previous file:
+
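+    :bp
+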
+The second thing you need to know is that vim has 3 modes:
+
+* **command** – used for telling vim to do things. This is the default mode vim is in when you open it. If you are in another mode, press `esc` to go back to command mode.
+* **insert** – used for inserting text into the current file. This is basically the text-editor mode. You can only get to this mode from command mode, by pressing the `i` key.
+* **visual** – used for selecting text. Just like insert mode, you can only get to it from command mode, by pressing the `v` key.
+
+**Basic Commands**
+
+Here are some of the basic commands that you would commonly use when working with a file. Note that you can only type in these commands while you are in command mode.
+
+* `:w` – save file
+* `:wq` – save file and quit
+* `:q!` – quit vim without saving the changes
+* `u` – undo last action
+* `ctrl + r` – redo
+* `x` – delete character under the cursor
+* `dd` – delete current line
+* `D` – delete to the end of the line. The difference from `dd` is that `dd` deletes the line break as well, while `D` only deletes to the end of the line, leaving the line break behind.
+
+**Basic Navigation**
+
+You can navigate a file while you're in the command mode or insert mode by pressing the arrow keys. You can also use the following keys for navigating but only when you are in command mode:
+
+* `h` – left
+* `l` – right
+* `j` – down
+* `k` – up
+* `0` – move to the beginning of the line
+* `$` – move to the end of the line
+* `w` – move forward by one word
+* `b` – move backward by one word
+* `gg` – move to the first line of the file
+* `G` – move to the last line of the file
+* `line_numberG` – move to a specific line number (e.g. `10G` jumps to line 10)
+
+**Searching Text**
+
+You can search for specific text while in command mode by pressing the `/` key, entering the text you want to search for, and then pressing enter. Vim will then highlight each instance of the text. You can move to the next instance by pressing the `n` key, or `N` to go back to the previous instance.
+
+**Modifying Text**
+
+You can modify text by switching to insert mode: go to command mode (`esc` key), then press the `i` key. Once you are in insert mode you can start typing text just like you would in a normal text editor. If you want to select specific text to copy, press the `esc` key to go back to command mode and then press the `v` key to switch to visual mode. From visual mode you can then select the text. With the text selected, press the `y` key to copy (yank) it, which also drops you back into command mode. To paste the copied text press the `p` key. Cutting and pasting works the same way; simply use the `d` key instead of the `y` key.
+
+**Vim Configuration**
+
+You can use the `.vimrc` file to configure vim settings. It doesn't exist by default, so you have to create it in your home directory:
+
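+    touch ~/.vimrc
+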
+Here are some of the most common configuration options you might want to add:
+
+    syntax on
+    set number
+    set wrap
+    set tabstop=2
+
+Here's a description of what each option does:
+
+* **syntax on** – enables syntax highlighting
+* **set number** – enables line numbers
+* **set wrap** – tells vim to visually wrap long lines
+* **set tabstop** – specifies how many columns a tab occupies. In the example above I've set it to `2`. (If you also want vim to insert spaces instead of tab characters when you press tab, add `set expandtab`.)
+
+**Resources for learning Vim**
+
+Be sure to check out the resources below to learn more about Vim. Learning Vim can be a painful process: you have to memorize a whole bunch of commands and practice them like you're practicing the piano. You can easily get by with an ordinary text editor when writing code, but if you want a real productivity boost, take the time to properly learn Vim, even if it's painful at first.
+
+#### Siege
+
+Siege is an HTTP load testing and benchmarking utility. You can mainly use this tool to stress test your web project with a bunch of requests to see how it holds up. Execute the following command to install siege:
+
+    sudo apt-get install siege
+
+To use it you can execute:
+
+    siege -b -t60S -c30 http://url-of-the-web-project-that-you-want-to-test
+
+The `-b` option tells siege to run the tests without delay. By default siege waits one second between each request; the `-b` option removes that delay so requests are issued back to back.
+
+The `-t60S` option tells siege to run the test for 60 seconds (`60S`). If you want to run it for 30 minutes you can use `30M`, or `1H` for an hour.
+
+The `-c30` option tells siege to have 30 concurrent connections.
+
+The last part of the command is the url that you want to test. If you only want to test one url, you can specify it directly in the command. If you want to test more than one, create a text file with the urls you want to test (one url per line) and then pass the `-f` option followed by the path to that file to tell siege to use it:
+
+    siege -b -t60S -c30 -f ~/test/urls.text
+
+Here's an example usage of siege:
+
+![siege][11]
+
+Interpreting the results above:
+
+* **transactions** – the total number of hits to the server.
+* **availability** – the availability of your web project to its users. Ideally you want this to be 100%; anything below that means some requests failed because of the load.
+* **elapsed time** – the duration of the test, which you specified in the options when you executed siege. It won't be exact; in the results above we got 59.37 seconds even though we specified 60.
+* **data transferred** – the total amount of data transferred to all simulated users
+* **response time** – the average response time for each request
+* **transaction rate** – the number of hits to the server per second
+* **throughput** – the average number of bytes transferred every second from the server to all the simulated users
+* **concurrency** – the average number of simultaneous connections
+* **successful transactions** – the number of successful transactions
+* **failed transactions** – the number of failed transactions
+* **longest transaction** – the total number of seconds the longest transaction took to finish
+* **shortest transaction** – the total number of seconds the shortest transaction took to finish
+
+#### Sed
+
+Sed is a tool for automatically modifying files. You can use it to write scripts that do search and replace on multiple files. A common use case for developers is writing scripts that automatically format source code according to a specific [coding standard][12].
+
+Yes, you can do this sort of task using the built-in search-and-replace utility in text editors like Sublime Text. But if you want something that lets you specify a lot of options and offers a lot of flexibility, sed is the tool for the job. Sed is pre-installed on most Linux distributions and on Mac OS, so you won't really have to do any installation. For Windows users there's [Sed for Windows][13], which you can install.
+
+Here's an example of how to use sed. Say you have the following file (`sed-test.php`):
+
+    <?php
+    $superStars = array();
+    $rockStars = array();
+    $keyboardNinjas = array();
+    ?>
+
+And you want all variable names to be lowercase. You could do something like the following (this uses GNU sed's `\L`, which lowercases the rest of the replacement):
+
+    sed 's/\$\([A-Za-z_]*\)/$\L\1/' sed-test.php
+
+Sed will then print the following result to the terminal:
+
+    <?php
+    $superstars = array();
+    $rockstars = array();
+    $keyboardninjas = array();
+    ?>
+
+To save the changes to the same file you need a little trick, since classic sed can't commit the changes to its input file (though GNU sed does offer in-place editing via the `-i` flag). The trick is to temporarily save the results to a new file (`sed-test.new.php`) and then use `mv` to rename the new file over the old file name (`sed-test.php`):
+
+    sed 's/\$\([A-Za-z_]*\)/$\L\1/' sed-test.php > sed-test.new.php
+    mv sed-test.new.php sed-test.php
+
+If you want to learn more about sed, check out the following resources:
+
+* [Sed – An Introduction and Tutorial][14]
+* [Getting Started with Sed][15]
+
+You can also check out the following related tools:
+
+#### Ruby Gems
+
+There are also lots of command line tools in the Ruby world, and you can get access to those tools by installing Ruby.
+
+On Linux and Mac OS you can install Ruby by using RVM (Ruby Version Manager). First make sure all your packages are up to date by executing the following command (on Ubuntu-style systems):
+
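+    sudo apt-get update
+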
+We will get RVM by using curl, so we also have to install that:
+
+    sudo apt-get install curl
+
+Once curl is installed, download rvm using curl and pipe it straight to `bash` so it gets installed as soon as the download finishes:
+
+    curl -L https://get.rvm.io | bash -s stable
+
+Install Ruby version `1.9.3` using rvm. You don't really have to stick with version `1.9.3` for this step; if there is already a later stable version available, you can use that as well:
+
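+    rvm install 1.9.3
+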
+Tell rvm to use Ruby version `1.9.3`:
+
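+    rvm use 1.9.3
+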
+You can then install the latest version of `rubygems`; one way to do that via rvm is:
+
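+    rvm rubygems current
+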
+Windows users can just use the [ruby installer for Windows][16].
+
+Once rubygems is installed you can install gems like there's no tomorrow. Here's a starting point: [Ruby Gems for Command-line Apps][17]. In the next section there's a gem called `tmuxinator` that you can install to manage tmux projects easily.
+
+#### Tmux
+
+Tmux, or terminal multiplexer, is an application that allows you to multiplex several terminal windows, which basically makes it easier to work on several related terminal sessions at once. On Linux you can install tmux from the terminal by executing the following command:
+
+    sudo apt-get install tmux
+
+For Mac OS you can install tmux through brew:
+
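+    brew install tmux
+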
+And on Windows tmux is not really directly supported. You first have to install [cygwin][18] and then add [this patch][19] to install tmux. Or if you don't want to go through all the trouble you can install [console2][20] which is a tmux alternative for Windows.
+
+Once you're done installing tmux you can now go ahead and play with it. To start tmux first create a new named session:
+
+    tmux new -s name_of_session
+
+This will create a new tmux session with the name that you supplied: ![tmux session][21]
+
+You can then execute commands just like you do with a normal terminal window. If you want to create a new window press `ctrl + b` then release and then press `c`. This will create a new window under the current session:
+
+![tmux new window][22]
+
+As you can see from the screenshot above we now have two windows (see the text highlighted in green on the lower part of the terminal window on the left side). One is named `0:bash` and the other is `1:bash*`. The one with the `*` is the current window.
+
+You can rename the current window by pressing `ctrl + b` then release and then `,`. This will prompt you to enter a new name for the window. You can just press enter once you're done renaming it:
+
+![tmux rename window][23]
+
+To switch between windows, press `ctrl + b`, release, and then the index of the window you want to switch to. You can determine the index by looking at the lower left part of the terminal screen; if you have only two windows open, the index is either 0 or 1. Alternatively, press `ctrl + b`, release, and then `p` for the previous window or `n` for the next.
+
+You can also further divide each window into multiple panes by pressing `ctrl + b` then release and then the `%` key to divide the current window vertically or the `"` key to divide it horizontally. This will give you a screen similar to the following:
+
+![tmux panes][24]
+
+You can then switch between those panes by pressing `ctrl + b` then release and then the `o` key.
+
+What's good about tmux is that it keeps your sessions alive in the background, so you can detach and re-attach to them even after closing the terminal window (they won't survive a reboot, though, since the tmux server is just a regular process). To list out the available sessions you can execute the following command:
+
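+    tmux ls
+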
+This will list out all the sessions that you created using the `tmux new -s` command (or simply `tmux`). You can then open up a specific session by executing the following command:
+
+    tmux attach -t name_of_session
+
+If you no longer want to work with a particular session you can just do the following:
+
+    tmux kill-session -t name_of_session
+
+Or if you want to kill all sessions:
+
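+    tmux kill-server
+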
+There's also this ruby gem called [tmuxinator][25] which allows you to create and manage complex tmux sessions easily. You can install it via ruby gems:
+
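+    gem install tmuxinator
+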
+Or if you're like me and you installed Ruby via RVM:
+
+    rvmsudo gem install tmuxinator
+
+You can then create project-based tmux sessions. To create a new project you can do:
+
+    tmuxinator open name_of_project
+
+This will create a `name_of_project.yml` file under the `~/.tmuxinator` directory. You can then open up this file and modify the default configuration. In my case I simply deleted the commented lines (except for the first one, which is the path to the current file) and then specified the project path: the `octopress` directory under my home directory. Under `windows`, the `layout` is `main-vertical`, which means the panes I specify will be divided vertically. There are two panes: one is empty so I can type in whatever commands I wish to execute, and the other runs `rake preview`, the command for previewing an octopress blog locally:
+
+    # ~/.tmuxinator/blog.yml
+
+    name: blog
+    root: ~/octopress
+
+    windows:
+      - editor:
+          layout: main-vertical
+          panes:
+            - #empty
+            - rake preview
+
+To open up the project at a later time you execute the following:
+
+    tmuxinator start name_of_project
+
+If you do not know the name of a specific project, you can list out all projects using the following command:
+
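+    tmuxinator list
+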
+If you no longer wish to work with a project in the future:
+
+    tmuxinator delete name_of_project
+
+#### SSH
+
+SSH can be used to log in to remote servers. It's pre-installed on both Linux and Mac OS, but since it isn't installed on Windows by default, Windows users can use an alternative such as [OpenSSH for Windows][26].
+
+##### Logging in to remote server
+
+Once you have SSH installed you can log in to a remote server by executing the following command:
+
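+    ssh username@hostname
+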
+Here `username` is the username given to you by your web host, while `hostname` can be a domain name, a public DNS name, or an IP address. For [Openshift][27] it's something like:
+
+    xxxxxxxxxxxxxxxxxxxxxxxxxxxx@somesite-username.rhcloud.com
+
+where each `x` is a random letter or digit.
+
+Executing the `ssh` command with the correct username and hostname combination will prompt you to enter your password. Again, the password here is the password given to you by your web host.
+
+##### SSH Keys
+
+You can also make use of SSH keys to authenticate yourself to a remote server. This will allow you to log in without entering your password.
+
+To set up an ssh key, navigate to the `.ssh` directory:
+
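+    cd ~/.ssh
+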
+If you don't already have one, create it by executing the following command:
+
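+    mkdir ~/.ssh
+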
+Once you're done with that, check if you already have a private and public key pair in the `~/.ssh` directory:
+
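+    ls
+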
+The pair would look something like `id_rsa` and `id_rsa.pub`. If you don't already have those 2 files, generate them by executing:
+
+    ssh-keygen -t rsa -C "your_email@provider.com"
+
+This generates the `id_rsa` and `id_rsa.pub` files using your email address as the label. You can also use other information as the label.
+
+Next copy the public key (`id_rsa.pub`) onto the remote server using secure copy (`scp`). Note the trailing colon, which tells scp that the destination is on the remote machine:
+
+    scp -p id_rsa.pub username@hostname:
+
+Now open up a new terminal window and log in to the remote server.
+
+Check if the `id_rsa.pub` has indeed been copied by using the following command:
+
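+    ls ~/id_rsa.pub
+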
+If it returns "No such file or directory", return to the other terminal window (the local machine) and execute the `scp` command again.
+
+Once that's done, the next step is to append the contents of the `id_rsa.pub` file to the `authorized_keys` file inside the `~/.ssh` directory:
+
+    cat id_rsa.pub >> ~/.ssh/authorized_keys
+
+Next, update the `/etc/ssh/sshd_config` file using either `vi` or `nano`:
+
+    vi /etc/ssh/sshd_config
+
+Uncomment the line that says `# AuthorizedKeysFile`; to uncomment it, all you have to do is remove the `#` symbol in front of it. Vi is basically like vim, so the keystrokes are pretty much the same: place the cursor on the `#` symbol and press `x` to delete it, then press the `esc` key to go back to command mode and type `:wq` to save and quit:
+
+    AuthorizedKeysFile %h/.ssh/authorized_keys
+
+Just make sure the path it's pointing to is the same path as the file we updated earlier. The `%h` refers to the user's home directory, so it's basically the same as saying `~/.ssh/authorized_keys`.
+
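+Note that after editing `sshd_config` you usually need to restart the SSH daemon for the change to take effect; on Ubuntu-style systems that's typically:
+
+    sudo service ssh restart
+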
+Once all of that is done you can test it out by logging in once again. The first login after the update may still ask for your password, but subsequent logins will no longer prompt for it.
+
+### Conclusion
+
+That's it! The command line is a must-use tool for every developer. In this blog post we've covered the essentials of using the command line, along with some tools that can help you become more productive with it. There are a lot more command-line tools that I haven't covered in this blog post; I believe those tools deserve a blog post of their own, so I'll be covering each of them in a future part of this series. For now I recommend that you check out the resources below for more command-line ninja skills.
+
+### Resources
+
+[1]: http://en.wikipedia.org/wiki/Command-line_interface
+[2]: http://www.cygwin.com/
+[3]: http://unxutils.sourceforge.net/
+[4]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/apropos.png
+[5]: http://stedolan.github.io/jq/download/
+[6]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/json-string.png
+[7]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/jq.png
+[8]: http://stedolan.github.io/jq/manual/
+[9]: http://www.vim.org/download.php
+[10]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/ls.png
+[11]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/siege.png
+[12]: http://en.wikipedia.org/wiki/Coding_conventions
+[13]: http://gnuwin32.sourceforge.net/packages/sed.htm
+[14]: http://www.grymoire.com/Unix/Sed.html
+[15]: http://sed.sourceforge.net/local/docs/An_introduction_to_sed.html
+[16]: http://rubyinstaller.org/
+[17]: http://www.awesomecommandlineapps.com/gems.html
+[18]: http://cygwin.org/
+[19]: http://sourceforge.net/mailarchive/message.php?msg_id=30850840
+[20]: http://sourceforge.net/projects/console/files/
+[21]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/tmux.png
+[22]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/tmux-new-window.png
+[23]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/tmux-rename-window.png
+[24]: http://wern-ancheta.com/images/posts/2014-02-21-a-whirlwind-tour-of-web-developer-tools-command-line-utilities/tmux-panes.png
+[25]: http://rubygems.org/gems/tmuxinator
+[26]: http://sshwindows.sourceforge.net/
+[27]: https://www.openshift.com/
diff --git a/cmd/ref/bash image tools for web designers - brettterpstra.com.txt b/cmd/ref/bash image tools for web designers - brettterpstra.com.txt
new file mode 100644
index 0000000..6eeff84
--- /dev/null
+++ b/cmd/ref/bash image tools for web designers - brettterpstra.com.txt
@@ -0,0 +1,53 @@
+---
+title: Bash image tools for web designers
+date: 2015-04-21T18:41:00Z
+source: http://brettterpstra.com/2013/07/24/bash-image-tools-for-web-designers/
+tags: #lhp, #cmdline
+
+---
+
+![][1]
+
+Here are a couple of my Bash functions for people who work with images in CSS or HTML. Nothing elaborate, just things that supplement my typical workflow.
+
+### Image dimensions
+
+First, a shell function for quickly getting the pixel dimensions of an image without leaving the shell. This trick can be incorporated into more complex functions in editors or other shell scripts. For example, when I add an image to my blog, a similar trick automatically includes the dimensions in the Jekyll (Liquid) image tag I use.
+
+Add this to your `.bash_profile` to be able to run `imgsize /path/to/image.jpg` and get back `600 x 343`.
+
+ # Quickly get image dimensions from the command line
+ function imgsize() {
+ local width height
+ if [[ -f $1 ]]; then
+ height=$(sips -g pixelHeight "$1"|tail -n 1|awk '{print $2}')
+ width=$(sips -g pixelWidth "$1"|tail -n 1|awk '{print $2}')
+ echo "${width} x ${height}"
+ else
+ echo "File not found"
+ fi
+ }
+
+You can, of course, take the $height and $width variables it creates and modify the output any way you like. You could output a full image tag using `<img src="$1" width="$width" height="$height">`, too.
+
+### Base 64 encoding
+
+I often use data-uri encoding to embed images in my CSS file, both for speed and convenience when distributing. The following function will take an image file as the only argument and place a full CSS background property with a Base64-encoded image in your clipboard, ready for pasting into a CSS file.
+
+    # encode a given image file as base64 and output css background property to clipboard
+    function 64enc() {
+      openssl base64 -in "$1" | awk -v ext="${1#*.}" '{ str1=str1 $0 }END{ print "background:url(data:image/"ext";base64,"str1");" }'|pbcopy
+      echo "$1 encoded to clipboard"
+    }
+
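+For example, running `64enc logo.png` on a hypothetical `logo.png` in the current directory leaves something like `background:url(data:image/png;base64,...);` in your clipboard, ready to paste into a stylesheet.
+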
+You can also do the same with fonts. I use this to embed a woff file. With a little alteration you can make versions for other formats, but usually when I'm embedding fonts it's because the stylesheet is being used in a particular context with a predictable browser.
+
+    function 64font() {
+      openssl base64 -in "$1" | awk -v ext="${1#*.}" '{ str1=str1 $0 }END{ print "src:url(\"data:font/"ext";base64,"str1"\") format(\"woff\");" }'|pbcopy
+      echo "$1 encoded as font and copied to clipboard"
+    }
+
+Ryan Irelan has produced a series of [shell trick videos][2] based on BrettTerpstra.com posts. Readers can get 10% off using the coupon code `TERPSTRA`.
+
+[1]: http://cdn3.brettterpstra.com/images/grey.gif
+[2]: https://mijingo.com/products/screencasts/os-x-shell-tricks/ "Shell Tricks video series"
diff --git a/cmd/ref/invaluable command line tools for web developers.txt b/cmd/ref/invaluable command line tools for web developers.txt
new file mode 100644
index 0000000..c3c2b85
--- /dev/null
+++ b/cmd/ref/invaluable command line tools for web developers.txt
@@ -0,0 +1,89 @@
+---
+title: Invaluable command line tools for web developers
+date: 2014-12-30T18:57:50Z
+source: http://www.coderholic.com/invaluable-command-line-tools-for-web-developers/
+tags: #cmdline
+
+---
+
+Life as a web developer can be hard when things start going wrong. The problem could be in any number of places: is there a problem with the request you're sending? Is the problem with the response? Is a request made by a third-party library failing? Is an external API down? There are lots of different tools that can make our life a little bit easier. Here are some command line tools that I've found to be invaluable.
+
+**Curl** Curl is a network transfer tool that's very similar to wget, the main difference being that by default wget saves to file, and curl outputs to the command line. This makes it really simple to see the contents of a website. Here, for example, we can get our current IP from the [ipinfo.io][1] website:
+
+ $ curl ipinfo.io/ip
+ 93.96.141.93
+
+Curl's `-i` (show headers) and `-I` (show only headers) option make it a great tool for debugging HTTP responses and finding out exactly what a server is sending to you:
+
+ $ curl -I news.ycombinator.com
+ HTTP/1.1 200 OK
+ Content-Type: text/html; charset=utf-8
+ Cache-Control: private
+ Connection: close
+
+The `-L` option is handy, and makes Curl automatically follow redirects. Curl has support for HTTP Basic Auth, cookies, manually setting headers, and much much more.
+
+**Siege**
+
+Siege is an HTTP benchmarking tool. In addition to the load testing features it has a handy `-g` option that is very similar to `curl -iL`, except it also shows you the request headers. Here's an example with www.google.com (I've removed some headers for brevity):
+
+ $ siege -g www.google.com
+ GET / HTTP/1.1
+ Host: www.google.com
+ User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.70)
+ Connection: close
+
+ HTTP/1.1 302 Found
+ Location: http://www.google.co.uk/
+ Content-Type: text/html; charset=UTF-8
+ Server: gws
+ Content-Length: 221
+ Connection: close
+
+ GET / HTTP/1.1
+ Host: www.google.co.uk
+ User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.70)
+ Connection: close
+
+ HTTP/1.1 200 OK
+ Content-Type: text/html; charset=ISO-8859-1
+ X-XSS-Protection: 1; mode=block
+ Connection: close
+
+What siege is really great at is server load testing. Just like `ab` (apache benchmark tool) you can send a number of concurrent requests to a site, and see how it handles the traffic. With the following command we test google with 20 concurrent connections for 30 seconds, and then get a nice report at the end:
+
+ $ siege -c20 www.google.co.uk -b -t30s
+ ...
+ Lifting the server siege... done.
+ Transactions: 1400 hits
+ Availability: 100.00 %
+ Elapsed time: 29.22 secs
+ Data transferred: 13.32 MB
+ Response time: 0.41 secs
+ Transaction rate: 47.91 trans/sec
+ Throughput: 0.46 MB/sec
+ Concurrency: 19.53
+ Successful transactions: 1400
+ Failed transactions: 0
+ Longest transaction: 4.08
+ Shortest transaction: 0.08
+
+One of the most useful features of siege is that it can take a url file as input, and hit those urls rather than just a single page. This is great for load testing, because you can replay real traffic against your site and see how it performs, rather than just hitting the same URL again and again. Here's how you would use siege to replay your apache logs against another server to load test it with:
+
+    $ cut -d ' ' -f7 /var/log/apache2/access.log > urls.txt
+    $ siege -c<concurrency rate> -b -f urls.txt
+
+**Ngrep**
+
+For serious network packet analysis there's [Wireshark][2], with its thousands of settings, filters and different configuration options. There's also a command line version, tshark. For simple tasks I find wireshark can be overkill, so unless I need something more powerful, ngrep is my tool of choice. It allows you to do with network packets what grep does with files.
+
+For web traffic you almost always want the `-W byline` option, which preserves linebreaks, and `-q` is a useful argument which suppresses some additional output about non-matching packets. Here's an example that captures all packets that contain GET or POST:
+
+ ngrep -q -W byline "^(GET|POST) .*"
+
+You can also pass in additional packet filter options, such as limiting the matched packets to a certain host, IP or port. Here we filter all traffic going to or coming from google.com, port 80, and that contains the term "search".
+
+ ngrep -q -W byline "search" host www.google.com and port 80
+
+[1]: http://ipinfo.io
+[2]: http://www.wireshark.org/
diff --git a/cmd/ref/life on the command line.txt b/cmd/ref/life on the command line.txt
new file mode 100644
index 0000000..005ee19
--- /dev/null
+++ b/cmd/ref/life on the command line.txt
@@ -0,0 +1,81 @@
+---
+title: Life on the Command Line
+date: 2014-12-30T18:58:10Z
+source: http://stephenramsay.us/2011/04/09/life-on-the-command-line/
+tags: #cmdline
+
+---
+
+A few weeks ago, I realized that I no longer use graphical applications.
+
+That's right. I don't do anything with gui apps anymore, except surf the Web. And what's interesting about that is that I rarely use cloudy, ajaxy replacements for desktop applications. Just about everything I do, I do exclusively on the command line. And I do what everyone else does: manage email, write things, listen to music, manage my todo list, keep track of my schedule, and chat with people. I also do a few things that most people don't do, including writing software, analyzing data, and keeping track of students and their grades. But whatever the case, I do all of it on the lowly command line. I literally go for months without opening a single graphical desktop application. In fact, I don't — strictly speaking — have a desktop on my computer.
+
+I think this is a wonderful way to work. I won't say that _everything_ can be done on the command line, but most things can, and in general, I find the cli to be faster, easier to understand, easier to integrate, more scalable, more portable, more sustainable, more consistent, and many, many times more flexible than even the most well-thought-out graphical apps.
+
+I realize that's a bold series of claims. I also realize that such matters are always open to the charge that it's "just me" and the way I work, think, and view the world. That might be true, but I've seldom heard a usability expert end a discourse on human factors by acknowledging that graphical systems are only really the "best" solution for a certain group of people or a particular set of tasks. Most take the graphical desktop as ground truth — it's just the way we do things.
+
+I also don't do this out of some perverse hipster desire for retro-computing. I have work to do. If my system didn't work, I'd abandon it tomorrow. In a way, the cli reminds me of a bike courier's bicycle. Some might think there's something "hardcore" and cool about a bike that has one gear, no logos, and looks like it flew with the Luftwaffe, but the bike is not that way for style. It's that way because the bells and whistles (i.e. "features") that make a bike attractive in the store get in the way when you have to use it for work. I find it interesting that after bike couriers started paring down their rides years ago, we soon after witnessed a revival of the fixed-gear, fat-tire, coaster-brake bike for adults. It's tempting to say that that was a good thing because "people didn't need" bikes inspired by lightweight racing bikes for what they wanted to do. But I think you could go further and say that lightweight racing bikes were getting in the way. Ironically, they were slowing people down.
+
+I've spent plenty of time with graphical systems. I'm just barely old enough to remember computers without graphical desktops, and like most people, I spent years taking it for granted that for a computer to be usable, it had to have windows, and icons, and wallpapers, and toolbars, and dancing paper clips, and whatever else. Over the course of the last ten years, all of that has fallen away. Now, when I try to go back, I feel as if I'm swimming through a syrupy sea of eye candy in which all the fish speak in incommensurable metaphors.
+
+I should say right away that I am talking about Linux/Unix. I don't know that I could have made the change successfully on a different platform. It's undoubtedly the case that what makes the cli work is very much about the way Unix works. So perhaps this is a plea not for the cli so much as for the cli as it has been imagined by Unix and its descendants. So be it.
+
+I'd like this to be the first of a series of short essays about my system. Essentially, I'd like to run through the things I (and most people) do, and show what it's like to run your life on the command line.
+
+First up . . .
+
+**Email**
+
+I think most email programs really suck. And that's a problem, because most people spend insane amounts of time in their email programs. Why, for starters, do they:
+
+_Take so long to load_
+
+Unless you keep the app open all the time (I'm assuming you do that because you have the focus of a guided missile), this is a program that you open and close several times a day. So why, oh why, does it take so much time to load?
+
+What? It's only a few seconds? Brothers and sisters, this is a _computer._ It should open _instantaneously._ You should be able to flit in and out of it with no delay at all. Boom, it's here. Boom, it's gone. Not, "Switch to the workplace that has the Web browser running, open a new tab, go to gmail, and watch a company with more programming power than any other organization on planet earth give you a…progress bar." And we won't even discuss Apple Mail, Outlook, or (people . . .) Lotus Notes.
+
+_Integrate so poorly with the rest of the system?_
+
+We want to organize our email messages, and most apps do a passable job of that with folders and whatnot. But they suck when it comes to organizing the content of email messages within the larger organizational scheme of your system.
+
+Some email messages contain things that other people want you to do. Some email messages have pictures that someone sent you from their vacation. Some email messages contain relevant information for performing some task. Some email messages have documents that need to be placed in particular project folders. Some messages are read-it-later.
+
+Nearly every email app tries to help you with this, but they do so in an extremely inconsistent and inflexible manner. Gmail gives you "Tasks," but it's a threadbare parody of the kind of todo lists most people actually need. Apple mail tries to integrate things with their Calendar app, but now you're tied to that calendar. So people sign up for Evernote, or Remember the Milk, or they buy OmniFocus (maybe all three). Or they go add a bump to the forum for company X in the hope that they'll write whatever glue is necessary to connect _x_ email program with _y_ task list manager.
+
+I think that you should be able to use _any_ app with _any_ other app in the context of _any_ organizational system. Go to any LifeHacker-style board and you'll see the same conversation over and over: "I tried OmniOrgMe, but it just seemed too complicated. I love EternalTask, but it isn't integrated with FragMail . . ." The idea that the "cloud" solves this is probably one of the bigger cons in modern computing.
+
+Problem 1 is immediately solved when you switch to a console-based email program. Pick any one of them. Type pine or mutt (for example), and your mail is before your eyes in the time it takes a graphical user to move their mouse to the envelope icon. Type q, and it's gone.
+
+Such programs tend to integrate well with the general command-line ecosystem, but I will admit that I didn't have problem 2 completely cracked until I switched to an email program that is now over twenty years old: [nmh][1].
+
+I've [written elsewhere][2] about nmh, so allow me to excerpt (a slightly modified) version of that:
+
+> The "n" in nmh stands for "new," but there's really nothing new about the program at all. In fact, it was originally developed at the rand Corporation decades ago.
+
+> We're talking old school. Type "inc" and it sends a numbered list of email subject lines to the screen, and returns you to the prompt. Type "show" and it will display the first message (in _any_ editor you like). You could then refile the message (with "refile") to another mailbox, or archive it, or forward it, and so on. There are thirty-nine separate commands in the nmh toolset, with names like "scan," "show," "mark," "sort," and "repl." On a day-to-day basis, you use maybe three or four.
+>
+> I've been using it for over a year. It is — hands down — the best email program I have ever used.
+>
+> Why? Because the dead simple things you need to do with mail are dead simple. Because there is no mail client in the world that is as fast. Because it never takes over your life (every time you do something, you're immediately back at the command prompt ready to do something else). Because everything — from the mailboxes to the mail itself — is just an ordinary plain text file ready to be munged. But most of all, because you can combine the nmh commands with ordinary unix commands to create things that would be difficult if not impossible to do with the gui clients.
+>
+> I now have a dozen little scripts that do nifty things with mail. I have scripts that archive old mail based on highly idiosyncratic aspects of my email usage. I have scripts that perform dynamic search queries based on analysis of past subject lines. I have scripts that mail todo list items and logs based on cron strings. I have scripts that save attachments to various places based on what's in my build files. None of these things are "features" of nmh. They're just little scripts that I hacked together with grep, sed, awk, and the shell. And every time I write one, I feel like a genius. The whole system just delights me. I want everything in my life to work like this program.
+
+Okay, I know what you're thinking: "Scripting. Isn't that, like, _programming?_ I don't want/know how to do that." This objection is going to keep re-appearing, so let me say something about it right away.
+
+The programming we're talking about for this kind of thing is very simple — so simple, that the skills necessary to carry it off could easily be part of the ordinary skillset of anyone who uses a computer on a regular basis. An entire industry has risen up around the notion that no user should ever do anything that looks remotely like giving coded instructions to a machine. I think that's another big con, and some day, I'll prove it to you by writing a tutorial that will turn you into a fearsome shell hacker. You'll be stunned at how easy it is.
+
+For now, I just want to make the point that once you move to the command line, everything is trivially connected to everything else, and so you are mostly freed from being locked in to any particular type of tool. You can use a todo list program that makes Omnifocus look like Notepad. You can use one that makes Gmail Tasks look like the U.N. Charter. Once we're in text land, the output of any program can in principle become the input to any other, and that changes everything.
+
+In the next installment, I'll demonstrate.
+
+[Greetings ProfHacker fans. Yes, this post is a little rantish; the conversation continues in a more sober, expansive vein with [The Mythical Man-Finger][3] and [The Man-Finger Aftermath][4]. Thanks to one and all for the many comments, which have deepened my thinking on all of this considerably.]
+
+[1]: http://www.nongnu.org/nmh/
+[2]: http://ra.tapor.ualberta.ca/~dayofdh2010/stephenramsay/2010/03/14/hello-world/
+[3]: /2011/07/25/the-mythical-man-finger/
+[4]: /2011/08/05/the-man-finger-aftermath/
+
+ [*rand]: Research ANd Development
+ [*gui]: Graphical User Interface
+ [*ajax]: Asynchronous JavaScript and XML
+ [*cli]: Command-Line Interface \ No newline at end of file
diff --git a/cmd/ref/oliver | an introduction to unix.txt b/cmd/ref/oliver | an introduction to unix.txt
new file mode 100644
index 0000000..302c268
--- /dev/null
+++ b/cmd/ref/oliver | an introduction to unix.txt
@@ -0,0 +1,1913 @@
+---
+title: Oliver | An Introduction to Unix
+date: 2015-03-13T14:08:55Z
+source: http://www.oliverelliott.org/article/computing/tut_unix/#100UsefulUnixCommands
+tags: #cmdline
+
+---
+
+**Everybody Knows How to Use a Computer, but Not Everyone Knows How to Use the Command Line. Yet This is the Gateway to Doing Anything and Everything Sophisticated with a Computer and the Most Natural Starting Place to Learn Programming**
+_by Oliver; Jan. 13, 2014_
+
+
+## Introduction
+
+I took programming in high school, but I never took to it. This, I strongly believe, is because it wasn't taught right—and teaching it right means starting at the beginning, with unix. The reason for this is three-fold: _(1)_ it gives you a deeper sense of how a high-level computer works (which a glossy front, like Windows, conceals); _(2)_ it's the most natural port of entry into all other programming languages; and _(3)_ it's super-useful in its own right. If you don't know unix and start programming, some things will forever remain hazy and mysterious, even if you can't put your finger on exactly what they are. If you already know a lot about computers, the point is moot; if you don't, then by all means start your programming education by learning unix!
+
+A word about terminology here: I'm in the habit of horrendously confusing and misusing all of the precisely defined words "[Unix][1]", "[Linux][2]", "[The Command Line][3]", "[The Terminal][4]", "[Shell Scripting][5]", and "[Bash][6]." Properly speaking, _unix_ is an operating system while _linux_ refers to a closely-related family of unix-based operating systems, which includes commercial and non-commercial distributions [1]. (Unix was not free under its developer, AT&T, which caused the unix-linux schism.) The _command line_, as [Wikipedia][3] says, is:
+
+> ... a means of interacting with a computer program where the user issues commands to the program in the form of successive lines of text (command lines) ... The interface is usually implemented with a command line shell, which is a program that accepts commands as text input and converts commands to appropriate operating system functions.
+
+So what I mean when I proselytize for "unix", is simply that you learn how to punch commands in on the command line. The _terminal_ is your portal into this world. Here's what my mine looks like:
+
+![image][7]
+
+
+There is a suite of commands to become familiar with—[The GNU Core Utilities][8] ([wiki entry][9])—and, in the course of learning them, you learn about computers. Unix is a foundational piece of a programming education.
+
+In terms of bang for the buck, it's also an excellent investment. You can gain powerful abilities by learning just a little. My coworker was fresh out of his introductory CS course, when he was given a small task by our boss. He wrote a full-fledged program, reading input streams and doing heavy parsing, and then sent an email to the boss that began, _"After 1.5 days of madly absorbing perl syntax, I completed the exercise..."_ He didn't know how to use the command-line at the time, and now [a print-out of that email hangs on his wall][10] as a joke—and as a monument to the power of the terminal.
+
+You can find ringing endorsements for learning the command line from all corners of the internet. For instance, in the excellent course [Startup Engineering (Stanford/Coursera)][11] Balaji Srinivasan writes:
+
+> A command line interface (CLI) is a way to control your computer by typing in commands rather than clicking on buttons in a graphical user interface (GUI). Most computer users are only doing basic things like clicking on links, watching movies, and playing video games, and GUIs are fine for such purposes.
+>
+> But to do industrial strength programming - to analyze large datasets, ship a webapp, or build a software startup - you will need an intimate familiarity with the CLI. Not only can many daily tasks be done more quickly at the command line, many others can only be done at the command line, especially in non-Windows environments. You can understand this from an information transmission perspective: while a standard keyboard has 50+ keys that can be hit very precisely in quick succession, achieving the same speed in a GUI is impossible as it would require rapidly moving a mouse cursor over a profusion of 50 buttons. It is for this reason that expert computer users prefer command-line and keyboard-driven interfaces.
+
+To provide foreshadowing, here are some things you can do in unix:
+
+* make or rename 100 folders or files _en masse_
+* find all files of a given extension or any file that was created within the last week
+* log onto a computer remotely and access its files with ssh
+* copy files to your computer directly over the network (no external hard drive necessary!) with [rsync][12]
+* run a [Perl][13] or [Python][14] script
+* run one of the many programs that are only available on the command line
+* see all processes running on your computer or the space occupied by your folders
+* see or change the permissions on a file
+* parse a text file in any way imaginable (count lines, swap columns, replace words, etc.)
+* soundly encrypt your files or communications with [gpg2][15]
+* run your own web server on the [Amazon cloud][16] with [nginx][17]
+
+What do all of these have in common? All are hard to do in the [GUI][18], but easy to do on the command line. It would be remiss not to mention that unix is not for everything. In fact, it's not for a lot of things. Knowing which language to use for what is usually a matter of common sense. However, when you start messing about with computers in any serious capacity, you'll bump into unix very quickly—and that's why it's our starting point.
+
+Is this the truth, the whole truth, and nothing but the truth, [so help my white ass][19]? I believe it is, but also my trajectory through the world of computing began with unix. So perhaps I instinctively want to push this on other people: do it the way I did it. And, because it occupies a bunch of my neuronal real estate at the moment, I could be considered brainwashed :-)
+
+* * *
+
+
+[1] Still confused about unix vs linux? [Refer to the full family tree][20] and these more precise definitions from Wikipedia:
+**_unix_**: _a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, developed in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others_
+**_linux_**: _a Unix-like and mostly POSIX-compliant computer operating system assembled under the model of free and open-source software development and distribution [whose] defining component ... is the Linux kernel, an operating system kernel first released [in] 1991 by Linus Torvalds_ ↑
+
+## 100 Useful Unix Commands
+
+This article is an introduction to unix. It aims to teach the basic principles and neglects to mention many of the utilities that give unix superpowers. To learn about those, see [100 Useful Unix Commands][21].
+
+## Getting Started: Opening the Terminal
+
+If you have a Mac, navigate to Applications > Utilities and open the application named "Terminal":
+
+![image][22]
+
+If you have a PC, _abandon all hope, ye who enter here_! Just kidding—partially. None of the native Windows shells, such as [cmd.exe][23] or [PowerShell][24], are unix-like. Instead, they're marked with hideous deformities that betray their ignoble origin as grandchildren of the MS-DOS command interpreter. If you didn't have a compelling reason until now to quit using PCs, here you are [1]. Typically, my misguided PC friends don't use the command line on their local machines; instead, they have to `ssh` into some remote server running Linux. (You can do this with an ssh client like [PuTTY][25], [Chrome's Terminal Emulator][26], or [MobaXterm][27], but don't ask me how.) On Macintosh you can start practicing on the command line right away without having to install a Linux distribution [2] (the Mac-flavored unix is called [Darwin][28]).
+
+For both Mac and PC users who want a bona fide Linux command line, one easy way to get it is in the cloud with [Amazon EC2][16] via the [AWS Free Tier][29]. If you want to go whole hog, you can download and install a Linux distribution—[Ubuntu][30], [Mint][31], [Fedora][32], and [CentOs][33] are popular choices—but this is asking a lot of non-hardcore-nerds (less drastically, you could boot Linux off of a USB drive or run it in a virtual box).
+
+* * *
+
+
+[1] I should admit, you can and should get around this by downloading something like [Cygwin][34], whose homepage states: _"Get that Linux feeling - on Windows"_ ↑
+[2] However, if you're using Mac OS rather than Linux, note that OS does not come with the [GNU coreutils][9], which are the gold standard. [You should download them][35] ↑
+
+## The Definitive Guides to Unix, Bash, and the Coreutils
+
+Before going any further, it's only fair to plug the authoritative guides which, unsurprisingly, can be found right on your command line:
+
+ $ man bash
+ $ info coreutils
+
+(The $ at the beginning of the line represents the terminal's prompt.) These are good references, but overwhelming to serve as a starting point. There are also great resources online, although these guides, too, are exponentially more useful once you have a small foundation to build on.
+
+## The Unix Filestructure
+
+All the files and _directories_ (a fancy word for "folder") on your computer are stored in a hierarchical tree. Picture a tree in your backyard upside-down, so the trunk is on the top. If you proceed downward, you get to big branches, which then give way to smaller branches, and so on. The trunk contains everything in the sense that everything is connected to it. This is the way it looks on the computer, too, and the trunk is called the _root directory_. In unix it's represented with a slash:
+
+/
+
+The root contains directories, which contain other directories, and so on, just like our tree. To get to any particular file or directory, we need to specify the _path_, which is a slash-delimited address:
+
+/dir1/dir2/dir3/some_file
+
+Note that a full path always starts with the root, because the root contains everything. As we'll see below, this won't necessarily be the case if we specify the address in a _relative_ way, with respect to our current location in the filesystem.
+
+Let's examine the directory structure on our Macintosh. We'll go to the root directory and look down just one level with the unix command tree. (If we tried to look at the whole thing, we'd print out every file and directory on our computer!) We have:
+
+![image][36]
+
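+The command behind a listing like that would be something along these lines (the `-L 1` flag, which limits tree to one level, is my assumption here):
+
+    $ cd /
+    $ tree -L 1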
+
+While we're at it, let's also look at the directory /Users/username, which is the specially designated [home directory][37] on the Macintosh:
+
+![image][38]
+
+
+One thing we notice right away is that the Desktop, which holds such a revered spot in the GUI, is just another directory—simply the first one we see when we turn on our computer.
+
+If you're on Linux rather than Mac OS, the directory tree might look less like the screenshot above and more like this:
+
+![image][39]
+
+
+The naming of these folders is not intuitive, but you can read about the role of each one [here][40]. I've arbitrarily traced out the path to /var/log, a location where some programs store their log files.
+
+If the syntax of a unix path looks familiar, it is. A webpage's [URL][41], with its telltale forward slashes, looks like a unix path with a domain prepended to it. This is not a coincidence! For a simple static website, its structure on the web is determined by its underlying directory structure on the server, so navigating to:
+
+http://www.example.com/abc/xyz
+
+will serve you content in the folder _websitepath/abc/xyz_ on the host's computer (i.e., the one owned by _example.com_). Modern dynamic websites are more sophisticated than this, but it's neat to reflect that the whole world has learned this unix syntax without knowing it.
+
+To learn more, see the [O'Reilly discussion of the unix file structure][42].
+
+## The Great Trailing Slash Debate
+
+Sometimes you'll see directories written with a trailing slash, as in:
+
+dir1/
+
+This helpfully reminds you that the entity is a directory rather than a file, but on the command line using the more compact _dir1_ is sufficient. There are a handful of unix commands which behave slightly differently if you leave the trailing slash on, but this sort of extreme pedantry isn't worth worrying about.
+
+## Where Are You? - Your _Path_ and How to Navigate through the Filesystem
+
+When you open up the terminal to browse through your filesystem, run a program, or do anything, _you're always somewhere_. Where? You start out in the designated _home directory_ when you open up the terminal. The home directory's path is preset by a global variable called HOME. Again, it's /Users/username on a Mac.
+
+As we navigate through the filesystem, there are some conventions. The _current working directory (cwd)_—whatever directory we happen to be in at the moment—is specified by a dot:
+
+.
+
+Sometimes it's convenient to write this as:
+
+./
+
+which is not to be confused with the root directory:
+
+/
+
+When a program is run in the cwd, you often see the syntax:
+
+ $ ./myprogram
+
+which emphasizes that you're executing a program from the current directory. The directory one above the cwd is specified by two dots:
+
+..
+
+With the trailing slash syntax, that's:
+
+../
+
+A tilde is shorthand for the home directory:
+
+~
+
+or:
+
+~/
+
+To see where we are, we can _print working directory_:
+
+ $ pwd
+
+To move around, we can _change directory_:
+
+ $ cd /some/path
+
+By convention, if we leave off the argument and just type:
+
+ $ cd
+
+we will go home. To _make directory_—i.e., create a new folder—we use:
+
+ $ mkdir
+
+As an example, suppose we're in our home directory, /Users/username, and want to get one back to /Users. We can do this two ways:
+
+ $ cd /Users
+
+or:
+
+ $ cd ..
+
+This illustrates the difference between an _absolute path_ and a _relative path_. In the former case, we specify the complete address, while in the latter we give the address with respect to our _cwd_. We could even accomplish this with:
+
+ $ cd /Users/username/..
+
+or maniacally seesawing back and forth:
+
+ $ cd /Users/username/../username/..
+
+if our primary goal were obfuscation. This distinction between the two ways to specify a path may seem pedantic, but it's not. Many scripting errors are caused by programs expecting an absolute path and receiving a relative one instead or vice versa. Use relative paths if you can because they're more portable: if the whole directory structure gets moved, they'll still work.
+
+Let's mess around. We know cd with no arguments takes us home, so try the following experiment:
+
+ $ echo $HOME # print the variable HOME
+ /Users/username
+ $ cd # cd is equivalent to cd $HOME
+ $ pwd # print working directory shows us where we are
+ /Users/username
+
+ $ unset HOME # unset HOME erases its value
+ $ echo $HOME
+
+ $ cd /some/path # cd into /some/path
+ $ cd # take us HOME?
+ $ pwd
+ /some/path
+
+What happened? We stayed in /some/path rather than returning to /Users/username. The point? There's nothing magical about _home_—it's merely set by the variable HOME. More about variables soon!
+
+## Gently Wading In - The Top 10 Indispensable Unix Commands
+
+Now that we've dipped one toe into the water, let's make a list of the 10 most important unix commands in the universe:
+
+1. pwd
+2. ls
+3. cd
+4. mkdir
+5. echo
+6. cat
+7. cp
+8. mv
+9. rm
+10. man
+
+Every command has a help or _manual_ page, which can be summoned by typing man. To see more information about pwd, for example, we enter:
+
+ $ man pwd
+
+But pwd isn't particularly interesting and its man page is barely worth reading. A better example is afforded by one of the most fundamental commands of all, ls, which lists the contents of the _cwd_ or of whatever directories we give it as arguments:
+
+ $ man ls
+
+The man pages tend to give TMI (too much information) but the most important point is that commands have _flags_ which usually come in a _one-dash-one-letter_ or _two-dashes-one-word_ flavor:
+
+ command -f
+ command --flag
+
+and the docs will tell us what each option does. You can even try:
+
+ $ man man
+
+Below we'll discuss the commands in the top 10 list in more depth.
+
+## ls
+
+Let's go HOME and try out [ls][43] with various flags:
+
+ $ cd
+ $ ls
+ $ ls -1
+ $ ls -hl
+ $ ls -al
+
+Some screen shots:
+
+![image][44]
+
+
+
+![image][45]
+
+
+First, vanilla ls. We see our files—no surprises. And ls -1 merely displays our files in a column. To show the human-readable, long form we stack the -h and -l flags:
+
+ls -hl
+
+This is equivalent to:
+
+ls -h -l
+
+Screenshot:
+
+![image][46]
+
+
+This lists the owner of the file; the group to which he belongs (_staff_); the date the file was last modified; and the file size in human-readable form, which means bytes will be rounded to kilobytes, gigabytes, etc. The column on the left shows _permissions_. If you'll indulge mild hyperbole, this simple command is already revealing secrets that are well-hidden by the GUI and known only to unix users. In unix there are three spheres of permission—_user_, _group_, and _other/world_—as well as three particular types for each sphere—_read_, _write_, and _execute_. Everyone with an account on the computer is a unique _user_ and, although you may not realize it, can be part of various groups, such as a particular lab within a university or team in a company. To see yourself and what groups you belong to, try:
+
+ $ whoami
+ $ groups
+
+(To see more information about a user, [finger][47] his username.) A string of dashes displays permission:
+
+ ---------
+ rwx------
+ rwxrwx---
+ rwxrwxrwx
+
+This means, respectively: no permission for anybody; read, write, execute permission for only the user; _rwx_ permission for the user and anyone in the group; and _rwx_ permission for the user, group, and everybody else. Permission is especially important in a shared computing environment. You should internalize now that two of the most common errors in computing stem from the two _P_ words we've already learned: _paths and permissions_. The command chmod, which we'll learn later, governs permission.
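+
+To make this concrete, here's a made-up example of a single line of long-listing output:
+
+ $ ls -hl myscript.sh
+ -rwxr-x--- 1 oliver staff 75B Oct 12 11:43 myscript.sh
+
+Reading left to right: this hypothetical file is readable, writable, and executable by the user, readable and executable by the group, and off-limits to everyone else; it has one link; it's owned by user _oliver_ in group _staff_; and it's 75 bytes, last modified October 12 at 11:43.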
+
+If you look at the screenshot above, you see a tenth character prepended to the nine-character permission string, e.g.:
+
+ drwxr-xr-x
+ lrwxr-xr-x
+ -rw-r--r--
+
+This has nothing to do with permissions and instead tells you about the type of entity in the directory: _d_ stands for directory, _l_ stands for symbolic link, and a plain dash denotes a file.
+
+The -a option in:
+
+ls -al
+
+lists _all_ files in the directory, including [_dotfiles_][48]. These are files that begin with a dot and are hidden in the GUI. They're often system files—more about them later. Screenshot:
+
+![image][49]
+
+
+Note that, in contrast to ls -hl, the file sizes are in pure bytes, which makes them a little hard to read.
+
+A general point about unix commands: they're often robust. For example, with ls you can use an arbitrary number of arguments and it obeys the convention that an asterisk matches anything (this is known as [file _globbing_][50], and I think of it as the prequel to _regular expressions_). Take:
+
+ $ ls . dir1 .. dir2/*.txt dir3/A*.html
+
+This monstrosity would list anything in the _cwd_; anything in directory _dir1_; anything in the directory one above us; anything in directory _dir2_ that ends with _.txt_; and anything in directory _dir3_ that starts with _A_ and ends with _.html_. You get the point.
+
+## Single Line Comments in Unix
+
+Anything prefaced with a # —that's _pound-space_—is a comment and will not be executed:
+
+ $ # This is a comment.
+ $ # If we put the pound sign in front of a command, it won't do anything:
+ $ # ls -hl
+
+Suppose you write a line of code on the command line and decide you don't want to execute it. You have two choices. The first is pressing _Cntrl-c_, which serves as an "abort mission." The second is jumping to the beginning of the line (_Cntrl-a_) and adding the pound character. This has an advantage over the first method in that the line will be saved in bash history (discussed below) and can thus be retrieved and modified later.
+
+In a script, pound-special-character (like _#!_) is sometimes interpreted (see below), so take note and include a space after # to be safe.
+
+## The Primacy of Text Files, Text Editors
+
+As we get deeper into unix, we'll frequently be using text editors to edit code, and viewing either data or code in text files. When I got my hands on a computer as a child, I remember text editors seemed like the most boring programs in the world (compared to, say, 1992 [Prince of Persia][51]). And text files were on the bottom of my food chain. But the years have changed me and now I like nothing better than a clean, unformatted _.txt_ file. It's all you need! If you store your data, your code, your correspondence, your book, or almost anything in _.txt_ files with a systematic structure, they can be parsed on the command line to reveal information from many facets. Here's some advice: do all of your text-related work in a good text editor. Open up clunky [Microsoft Word][52], and you've unwittingly spoken a demonic incantation and summoned the beast. Are these the words of a lone lunatic dispensing [hateration][53]? No, because on the command line you can count the words in a text file, search it with [grep][54], input it into a Python program, et cetera. However, a file in Microsoft Word's proprietary and unknown formatting is utterly unusable.
+
+Because text editors are extremely important, some people develop deep relationships with them. My co-worker, who is a [Vim][55] aficionado, turned to me not long ago and said, "You know how you should think about editing in Vim? _As if you're talking to it._" On the terminal, a ubiquitous and simple editor is [nano][56]. If you're more advanced, try [Vim][55] or [Emacs][57]. Not immune to my co-worker's proselytizing, I've converted to Vim. Although it's sprawling and the learning curve can be harsh—Vim is like a programming language in itself—you can do a zillion things with it. I put together a quick and dirty Vim wiki [here][58].
+
+On the GUI, there are many choices: [Sublime][59], [Aquamacs][60], [Smultron][61], etc. I used to use Smultron until I found, unforgivably, that the spacing of documents when you looked at them in the editor and on the terminal was different. I hear good things about Sublime and Aquamacs.
+
+_Exercise_: Let's try making a text file with nano. Type:
+
+ $ nano file.txt
+
+and make a three-row, two-column file, as sketched below. (It's _Cntrl-o_ to save and _Cntrl-x_ to exit.)
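+
+The exact contents don't matter: any whitespace-separated pairs will do. For instance (an arbitrary choice):
+
+ 1 a
+ 2 b
+ 3 c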
+
+## _echo_ and _cat_
+
+More essential commands: echo prints the _string_ passed to it as an argument, while cat prints the _contents_ of files passed to it as arguments. For example:
+
+ $ echo joe
+ $ echo "joe"
+
+would both print _joe_, while:
+
+ $ cat file.txt
+
+would print the contents of _file.txt_. Entering:
+
+ $ cat file.txt file2.txt
+
+would print out the contents of both _file.txt_ and _file2.txt_ concatenated together, which is where this command gets its slightly confusing name.
+
+Finally, a couple of nice flags for these commands:
+
+ $ echo -n "joe" # suppress newline
+ $ echo -e "joe\tjoe\njoe" # interpret special chars ( \t is tab, \n newline )
+ $ cat -n file.txt # print file with line numbers
+
+## _cp_, _mv_, and _rm_
+
+Finishing off our top 10 list we have cp, mv, and rm. The command to make a copy of a file is cp:
+
+ $ cp file1 file2
+ $ cp -R dir1 dir2
+
+The first line would make an identical copy of _file1_ named _file2_, while the second would do the same thing for directories. Notice that for directories we use the -R flag (for _recursive_). The directory and everything inside it are copied.
+
+_Question_: what would the following do?
+
+ $ cp -R dir1 ../../
+
+_Answer_: it would make a copy of _dir1_ up two levels from our current working directory.
+
+To rename a file or directory we use mv:
+
+ $ mv file1 file2
+
+In a sense, this command also moves files, because we can rename a file into a different path. For example:
+
+ $ mv file1 dir1/dir2/file2
+
+would move _file1_ into _dir1/dir2/_ and change its name to _file2_, while:
+
+ $ mv file1 dir1/dir2/
+
+would simply move _file1_ into _dir1/dir2/_ or, if you like, rename _./file1_ as _./dir1/dir2/file1_.
+
+Finally, rm removes a file or directory:
+
+ $ rm file # removes a file
+ $ rm -r dir # recursively removes a directory and its contents
+ $ rm -rf dir # force removal of a file or directory
+ # (i.e., ignore warnings)
+
+## Variables in Unix
+
+To declare something as a variable use an equals sign, with no spaces. Let's declare _a_ to be a variable:
+
+ $ a=3 # This syntax is right (no whitespace)
+ $ a = 3 # This syntax is wrong (whitespace)
+ -bash: a: command not found
+
+Once we've declared something as a variable, we need to use _$_ to access its value (and to let bash know it's a variable). For example:
+
+ $ a=3
+ $ echo a
+ a
+ $ echo $a
+ 3
+
+So, with no _$_ sign, bash thinks we just want to echo the string _a_. With a _$_ sign, however, it knows we want to access what the variable _a_ is storing, which is the value _3_. Variables in unix are loosely-typed, meaning you don't have to declare something as a string or an integer.
+
+ $ a=3 # a can be an integer
+ $ echo $a
+ 3
+
+ $ a=joe # or a can be a string
+ $ echo $a
+ joe
+
+ $ a="joe joe" # Use quotes if you want a string with spaces
+ $ echo $a
+ joe joe
+
+We can declare and echo two variables at the same time, and generally play fast and loose, as we're used to doing on the command line:
+
+ $ a=3; b=4
+ $ echo $a $b
+ 3 4
+ $ echo $a$b # mesh variables together as you like
+ 34
+ $ echo "$a$b" # use quotes if you like
+ 34
+ $ echo -e "$a\t$b" # the -e flag tells echo to interpret \t as a tab
+ 3 4
+
+You should also be aware of how bash treats double vs single quotes. As we've seen, if you want to use a string with spaces, you use double quotes. If you use double quotes, any variable inside them will be expanded, the same as in Perl. If you use single quotes, everything is taken literally and variables are not expanded. Here's an example:
+
+ $ var=5
+ $ joe=hello $var
+ -bash: 5: command not found
+
+ $ joe="hello $var"
+ $ echo $joe
+ hello 5
+
+ $ joe='hello $var'
+ $ echo $joe
+ hello $var
+
+An important note is that often we use variables to store _paths_ in unix. Once we do this, we can use all of our familiar directory commands on the variable:
+
+ $ d=dir1/dir2/dir3
+ $ ls $d
+ $ cd $d
+
+ $ d=.. # this variable stores the directory one above us (relative path)
+ $ cd $d/.. # cd two directories up
+
+## Escape Sequences
+
+[Escape sequences][62] are important in every language. When bash reads _$a_ it interprets it as whatever's stored in the variable _a_. What if we actually want to echo the string _$a_? To do this, we use the backslash, \, as an escape character:
+
+ $ a=3
+ $ echo $a
+ 3
+ $ echo \$a
+ $a
+ $ echo "\$a" # use quotes if you like
+ $a
+
+What if we want to echo the backslash, too? Then we have to escape the escape character (using the escape character!):
+
+ $ echo \\\$a # escape the backslash and the dollar sign
+ \$a
+
+This really comes down to parsing. The backslash helps bash figure out if your text is a plain old string or a variable. It goes without saying that you should avoid special characters in your variable names. In unix we might occasionally fall into a parsing tar-pit trap. To avoid this, and make extra sure bash parses our variable right, we can use the syntax _${a}_ as in:
+
+ $ echo ${a}
+ 3
+
+When could this possibly be an issue? Later, when we discuss scripting, we'll learn that _$n_, where _n_ is a number, is the _n_th argument to our script. If you were crazy enough to write a script with 11 arguments, you'd discover that bash interprets a=$11 as a=$1 (the first argument) concatenated with the string 1 while a=${11} properly represents the eleventh argument. This is getting in the weeds, but FYI.
+
+Here's a more practical example:
+
+ $ a=3
+ $ echo $a # variable a equals 3
+ 3
+ $ echo $apple # variable apple is not set
+
+ $ echo ${a}pple # this describes the variable a plus the string "pple"
+ 3pple
+
+## Global Variables in Unix
+
+In general, it is the convention to use capital letters for global variables. We've already learned about one: HOME. We can see all the variables set in our shell by simply typing:
+
+ $ set
+
+Some basic variables deserve comment:
+
+* HOME
+* PS1
+* TMPDIR
+* EDITOR
+* DISPLAY
+
+HOME, as we've already seen, is the path to our home directory (preset to /Users/username on Macintosh). PS1 sets the shell's prompt. For example:
+
+ $ PS1=':-) '
+
+changes our prompt from a dollar-sign into an emoticon, as in:
+
+![image][63]
+
+
+On your computer there is a designated temporary directory and its path is stored in TMPDIR. Some commands, such as sort, which we'll learn later, surreptitiously make use of this directory to store intermediate files. At work, we have a shared computer system and occasionally this common directory $TMPDIR will run out of space, causing programs trying to write there to fail. One solution is to simply set TMPDIR to a different path where there's free space. EDITOR sets the default text editor (you can invoke it by pressing _Cntrl-x-e_). And DISPLAY is a variable related to the [X Window System][64].
+
+Many programs rely on their own agreed-upon global variables. For example, if you're a Perl user, you may know that Perl looks for modules in the directory whose path is stored in PERL5LIB. Python looks for its modules in PYTHONPATH; R looks for packages in R_LIBS; Matlab uses MATLABPATH; awk uses AWKPATH; C++ looks for libraries in LD_LIBRARY_PATH; and so on. These variables don't exist in the shell by default. A program will make a system call and look for the variable. If the user has had the need or foresight to define it, the program can make use of it.
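+
+For instance, to point Python at a personal module directory, you could define PYTHONPATH yourself (the path below is hypothetical; export, discussed below, makes the variable visible to programs the shell launches):
+
+ $ export PYTHONPATH=$HOME/mypythonmodules # hypothetical module directory
+ $ python myprogram.py # python can now import modules stored there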
+
+## The _PATH_
+
+The most important global variable of all is the PATH. This is _the_ PATH, as distinct from _a_ path, a term we've already learned referring to a location in the filesystem. The PATH is a colon-delimited list of directories where unix will look for executable programs when you enter something on the command line. If your program is in one of these directories, you can run it from any location by simply entering its name. If the program is not in one of these directories, you can still run it, of course, but you'll have to include its path.
+
+Let's revisit the idea of a command in unix. What's a command? It's nothing more than a program sitting in a directory somewhere. So, if ls is a program, where is it? Use the command which to see its path:
+
+ $ which ls # on my work computer
+ /bin/ls
+
+ $ which ls # on my home Mac
+ /usr/local/Cellar/coreutils/8.20/libexec/gnubin/ls
+
+For the sake of argument, let's say I download an updated version of the ls command, and then type ls in my terminal. What will happen—will the old ls or the new ls execute? The PATH comes into play here because it also determines priority. When you enter a command, unix will look for it in each directory of the PATH, from first to last, and execute the first instance it finds. For example, if:
+
+PATH=/bin/dir1:/bin/dir2:/bin/dir3
+
+and there's a command named _ls_ in both /bin/dir1 and /bin/dir2, the one in /bin/dir1 will be executed.
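+
+On many systems, the -a flag of which makes this priority visible by listing every match in PATH order (the paths shown here mirror the which examples above):
+
+ $ which -a ls
+ /usr/local/Cellar/coreutils/8.20/libexec/gnubin/ls
+ /bin/ls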
+
+Let's see what your PATH looks like. Enter:
+
+ $ echo $PATH
+
+For example, here's a screenshot of the default PATH on Ubuntu:
+
+![image][65]
+
+
+To emphasize the point again: the programs in the directories specified by your PATH are exactly the programs you can access on the command line by simply typing their names.
+
+The PATH is not immutable. You can set it to be anything you want, but in practice you'll want to augment, rather than overwrite, it. By default, it contains directories where unix expects executables, like:
+
+* /bin
+* /usr/bin
+* /usr/local/bin
+
+Let's say you have just written the command /mydir/newcommand. If you're not going to use the command very often, you can invoke it using its full path every time you need it:
+
+ $ /mydir/newcommand
+
+However, if you're going to be using it frequently, you can just add /mydir to the PATH and then invoke the command by name:
+
+ $ PATH=/mydir:$PATH # add /mydir to the front of PATH - highest priority
+ $ PATH=$PATH:/mydir # add /mydir to the back of PATH - lowest priority
+ $ newcommand # now invoking newcommand is this easy
+
+This is a frequent chore in unix. If you download some new program, you will often find yourself updating the PATH to include the directory containing its binaries. How can we avoid having to do this every time we open the terminal for a new session? We'll discuss this below when we learn about _.bashrc_.
+
+If you want to shoot yourself in the foot, you can vaporize the PATH:
+
+ $ unset PATH # not advisable
+ $ ls # now ls is not found
+ -bash: ls: No such file or directory
+
+but this is not advisable, save as a one-time educational experience.
+
+## Links
+
+While we're on the general subject of paths, let's talk about [_symbolic links_][66]. If you've ever used the _Make Alias_ command on a Macintosh (not to be confused with the unix command alias, discussed below), you've already developed intuition for what a link is. Suppose you have a file in one folder and you want that file to exist in another folder simultaneously. You could copy the file, but that would be wasteful. Moreover, if the file changes, you'll have to re-copy it—a huge ball-ache. Links solve this problem. A link to a file is a stand-in for the original file, often used to access the original file from an alternate file path. It's not a copy of the file but, rather, points to the file.
+
+To make a symbolic link, use the command [ln][67]:
+
+ $ ln -s /path/to/target/file mylink
+
+This produces:
+
+ mylink --> /path/to/target/file
+
+in the cwd, as ls -hl will show. Note that removing _mylink_:
+
+ $ rm mylink
+
+does not affect our original file.
+
+If we give the target (or source) path as the sole argument to ln, the name of the link will be the same as the source file's. So:
+
+ $ ln -s /path/to/target/file
+
+produces:
+
+ file --> /path/to/target/file
+
+Links are incredibly useful for all sorts of reasons—the primary one being, as we've already remarked, if you want a file to exist in multiple locations without having to make extraneous, space-consuming copies. You can make links to directories as well as files. Suppose you add a directory to your PATH that has a particular version of a program in it. If you install a newer version, you'll need to change the PATH to include the new directory. However, if you add a link to your PATH and keep the link always pointing to the most up-to-date directory, you won't need to keep fiddling with your PATH. The scenario could look like this:
+
+ $ ls -hl myprogram
+ current -> version3
+ version1
+ version2
+ version3
+
+(where I'm hiding some of the output in the long listing format.) In contrast to our other examples, the link is in the same directory as the target. Its purpose is to tell us which version, among the many crowding a directory, we should use.
+
+Another good practice is putting links in your home directory to folders you often use. This way, navigating to those folders is easy when you log in. If you make the link:
+
+ ~/MYLINK --> /some/long/and/complicated/path/to/an/often/used/directory
+
+then you need only type:
+
+ $ cd MYLINK
+
+rather than:
+
+ $ cd /some/long/and/complicated/path/to/an/often/used/directory
+
+Links are everywhere, so be glad you've made their acquaintance!
+
+## What is Scripting?
+
+By this point, you should be comfortable using basic utilities like echo, cat, mkdir, cd, and ls. Let's enter a series of commands, creating a directory with an empty file inside it, for no particular reason:
+
+ $ mkdir tmp
+ $ cd tmp
+ $ pwd
+ /Users/oliver/tmp
+ $ touch myfile.txt # the command touch creates an empty file
+ $ ls
+ myfile.txt
+ $ ls myfile_2.txt # purposely execute a command we know will fail
+ ls: cannot access myfile_2.txt: No such file or directory
+
+What if we want to repeat the exact same sequence of commands 5 minutes later? _Massive bombshell_—we can save all of these commands in a file! And then run them whenever we like! Try this:
+
+ $ nano myscript.sh
+
+and write the following:
+
+ # a first script
+ mkdir tmp
+ cd tmp
+ pwd
+ touch myfile.txt
+ ls
+ ls myfile_2.txt
+
+Gratuitous screenshot:
+
+![image][68]
+
+
+This file is called a _script_ (_.sh_ is a typical suffix for a shell script), and writing it constitutes our first step into the land of bona fide computer programming. In general usage, a script refers to a small program used to perform a niche task. What we've written is a recipe that says:
+
+* create a directory called "tmp"
+* go into that directory
+* print our current path in the file system
+* make a new file called "myfile.txt"
+* list the contents of the directory we're in
+* specifically list the file "myfile_2.txt" (which doesn't exist)
+
+This script, though silly and useless, teaches us the fundamental fact that all computer programs are ultimately just lists of commands.
+
+Let's run our program! Try:
+
+ $ ./myscript.sh
+ -bash: ./myscript.sh: Permission denied
+
+_WTF!_ It's dysfunctional. What's going on here is that the file permissions are not set properly. In unix, when you create a file, the default permission is _not executable_. You can think of this as a brake that's been engaged and must be released before we can go (and do something potentially dangerous). First, let's look at the file permissions:
+
+ $ ls -hl myscript.sh
+ -rw-r--r-- 1 oliver staff 75 Oct 12 11:43 myscript.sh
+
+Let's change the permissions with the command chmod and execute the script:
+
+ $ chmod u+x myscript.sh # add executable(x) permission for the user(u) only
+ $ ls -hl myscript.sh
+ -rwxr--r-- 1 oliver staff 75 Oct 12 11:43 myscript.sh
+
+ $ ./myscript.sh
+ /Users/oliver/tmp/tmp
+ myfile.txt
+ ls: cannot access myfile_2.txt: No such file or directory
+
+Not bad. Did it work? Yes, it did: it printed its output to the console, and we can see it created tmp/myfile.txt:
+
+ $ ls
+ myfile.txt myscript.sh tmp
+ $ ls tmp
+ myfile.txt
+
+An important note is that even though there was a cd in our script, if we type:
+
+ $ pwd
+ /Users/oliver/tmp
+
+we see that we're still in the same directory as we were in when we ran the script. Even though the script entered /Users/oliver/tmp/tmp, and did its bidding, we stay in /Users/oliver/tmp. Scripts always work this way—where they go is independent of where we go.
+
+If you're wondering why anyone would write such a pointless script, you're right—it would be odd if we had occasion to repeat this combination of commands. There are some more realistic examples of scripting below.
+
+## File Suffixes in Unix
+
+As we begin to script it's worth following some file naming conventions. We should use common sense suffixes, like:
+
+* _.txt_ \- for text files
+* _.html_ \- for html files
+* _.sh_ \- for shell scripts
+* _.pl_ \- for Perl scripts
+* _.py_ \- for Python scripts
+* _.cpp_ \- for c++ code
+and so on. Adhering to this organizational practice will enable us to quickly scan our files, and make searching for particular file types easier [1]. As we saw above, commands like ls and find are particularly well-suited to use this kind of information. For example, list all text files in the cwd:
+
+ $ ls *.txt
+
+List all text files in the cwd and below (i.e., including child directories):
+
+ $ find . -name "*.txt"
+
+
+
+* * *
+
+
+[1] An astute reader noted that, for commands—as opposed to, say, html or text files—using suffixes is not the best practice because it violates the principle of encapsulation. The argument is that a user is neither supposed to know nor care about a program's internal implementation details, which the suffix advertises. You can imagine a program that starts out as a shell script called _mycommand.sh_, is upgraded to Python as _mycommand.py_, and then is rewritten in C for speed, becoming the binary _mycommand_. What if other programs depend on _mycommand_? Then each time _mycommand_'s suffix changes they have to be rewritten—a big problem. Although I make this sloppy mistake in this article, that doesn't excuse you! [Read the full argument][69] ↑
+
+## The Shebang
+
+We've left out one important detail about scripting. How does unix know we want to run a bash script, as opposed to, say, a Perl or Python script? There are two ways to do it. We'll illustrate with two simple scripts, a bash script and a Perl script:
+
+ $ cat myscript_1.sh # a bash script
+ echo "hello kitty"
+
+ $ cat myscript_1.pl # a Perl script
+ print "hello kittyn";
+
+The first way to tell unix which program to use to interpret the script is simply to say so on the command line. For example, we can use bash to execute bash scripts:
+
+ $ bash ./myscript_1.sh # use bash for bash scripts
+ hello kitty
+
+and perl for Perl scripts:
+
+ $ perl ./myscript_1.pl # use Perl for Perl scripts
+ hello kitty
+
+But this won't work for a Perl script:
+
+ $ ./myscript_1.pl # this won't work
+ ./myscript_1.pl: line 1: print: command not found
+
+And if we purposefully specify the wrong language, we'll get errors:
+
+ $ bash ./myscript_1.pl # let's purposefully do it backwards
+ ./myscript_1.pl: line 1: print: command not found
+
+ $ perl ./myscript_1.sh
+ String found where operator expected at ./myscript_1.sh line 1,
+ near "echo "hello kitty""
+ (Do you need to predeclare echo?)
+ syntax error at ./myscript_1.sh line 1, near "echo "hello kitty""
+ Execution of ./myscript_1.sh aborted due to compilation errors.
+
+The second way to specify the proper interpreter—and the better way, which you should emulate—is to put it in the script itself using a [_shebang_][70]. To do this, let's remind ourselves where bash and perl reside on our system. On my computer, they're here:
+
+ $ which perl
+ /usr/bin/perl
+
+ $ which bash
+ /bin/bash
+
+although perl could be somewhere else on your machine (bash should be in /bin by convention). The _shebang_ specifies the language in which your script is interpreted according to the syntax #! followed by the path to the language. It should be the first line of your script. Note that it's not a comment even though it looks like one. Let's add shebangs to our two scripts:
+
+ $ cat myscript_1.sh
+ #!/bin/bash
+ echo "hello kitty"
+
+ $ cat myscript_1.pl
+ #!/usr/bin/perl
+ print "hello kittyn";
+
+Now we can run them without specifying the interpreter in front:
+
+ $ ./myscript_1.sh
+ hello kitty
+ $ ./myscript_1.pl
+ hello kitty
+
+However, there's _still_ a lingering issue and it has to do with [portability][71], an important software principle. What if perl is in a different place on your machine than mine and you copy my scripts and try to run them? The path will be wrong and they won't work. The solution to this issue is courtesy of a neat trick using [env][72]. We can amend our script to be:
+
+ $ cat myscript_1.pl
+ #!/usr/bin/env perl
+ print "hello kittyn";
+
+Of course, this assumes you have a copy of env in /usr/bin, but this is usually a correct assumption. What env does here is find and run the perl that comes first in your PATH, whatever your environment happens to provide.
+
+This is a useful practice even if you're not sharing scripts. Suppose you've updated your version of perl and there's a newer copy than /usr/bin/perl. You've appropriately updated your PATH such that the directory containing the updated perl comes before /usr/bin. If you have env in your shebang, you're all set. However, if you've _hardwired_ the old path in your shebang, your script will run on the old perl [1].
+
+The question that the shebang resolves—which program will run your script?—reminds us of a more fundamental distinction between [interpreted languages][73] and [compiled languages][74]. The former are those like bash, Perl, and Python, where you can cat a script and look inside it. The latter, like C++, require [_compilation_][75], the process whereby code is translated into machine language (the result is sometimes called a _binary_). This can be done with a command line utility like [g++][76]:
+
+ $ g++ MyProgram.cpp -o MyProgram
+
+Compiled programs, such as the unix utilities themselves, tend to run faster. Don't try to cat a binary, such as ls, or it will spew out gibberish:
+
+ $ cat $( which ls ) # don't do this!
+
+
+
+* * *
+
+
+[1] Of course, depending on circumstances, you may very well want to stick with the old version of Perl or whatever's running your program. An update can have unforeseen consequences and this is the motivation for tools like [virtualenv][77] (Python), whose docs remind us: "_If an application works, any change in its libraries or the versions of those libraries can break the application_" ↑
+
+## _bash_
+
+We've thrown around the term _bash_ a few times but we haven't defined it. To do so, let's examine the special command, sh, which is more primitive than bash and came before it. To quote Wikipedia and the manual page:
+
+> The Bourne shell (sh) is a shell, or command-line interpreter, for computer operating systems. The shell is a command that reads lines from either a file or the terminal, interprets them, and generally executes other commands. It is the program that is running when a user logs into the system ... Commands can be typed directly to the running shell or can be put into a file and the file can be executed directly by the shell
+
+As it describes, sh is special because it's both a command interpreter and a command itself (usually found at /bin/sh). Put differently, you can run _myscript_ as:
+
+ $ sh ./myscript
+
+or you can simply type:
+
+ $ sh
+
+to start an interactive sh shell. If you're in this shell and run:
+
+ $ ./myscript
+
+without specifying an interpreter or using a shebang, your script will be interpreted by sh by default. On most computers, however, the default shell is no longer sh but bash (usually located at /bin/bash). To mash up Wikipedia and the manual page:
+
+> The **B**ourne-**A**gain **SH**ell (bash) is a Unix shell written by Brian Fox for the GNU Project as a free software replacement for the Bourne shell. bash is an sh-compatible command language interpreter that executes commands read from the standard input or from a file ... There are some subtle differences between bash and traditional versions of sh
+
+Like sh, bash is a command you can either invoke on a script or use to start an interactive bash shell. Read more on Stackoverflow: [Difference between sh and bash][78].
+
+Which shell are you using right now? Almost certainly bash, but if you want to double check, there's a neat command [given here][79] to display your shell type (here $$ is a special variable holding the shell's process ID):
+
+ $ ps -p $$
+
+There are more exotic shells, like [Z shell][80] and [tcsh][81], but they're beyond the scope of this article.
+
+## _chmod_
+
+Let's take a closer look at how to use [chmod][82]. Remember the three domains:
+
+* _u_ \- user
+* _g_ \- group
+* _o_ \- other/world
+and the three types of permission:
+* _r_ \- read
+* _w_ \- write
+* _x_ \- execute
+
+We can mix and match these how we like, using a plus sign to grant permissions according to the syntax:
+
+chmod entity+permissiontype
+
+or a minus sign to remove permissions:
+
+chmod entity-permissiontype
+
+E.g.:
+
+ $ chmod u+x myfile # make executable for you
+ $ chmod g+rxw myfile # add read write execute permissions for the group
+ $ chmod go-wx myfile # remove write execute permissions for the group
+ # and for everyone else (excluding you, the user)
+
+You can also use _a_ for "all of the above", as in:
+
+ $ chmod a-rwx myfile # remove all permissions for you, the group,
+ # and the rest of the world
+
+If you find the above syntax cumbersome, there's a numerical shorthand you can use with chmod. The only two I have memorized are _777_ and _755_:
+
+ $ chmod 777 myfile # grant all permissions (rwxrwxrwx)
+ $ chmod 755 myfile # reserve write access for the user,
+ # but grant all other permissions (rwxr-xr-x)
+
+Read more about the numeric code [here][83]. (In brief, each digit is the sum of read = 4, write = 2, and execute = 1 for the user, group, and other spheres respectively, so 755 is rwxr-xr-x.) In general, it's a good practice to allow your files to be writable by you alone, unless you have a compelling reason to share access to them.
+
+## _ssh_
+
+In addition to chmod, there's another command it would be remiss not to mention. For many people, the first time they need to go to the command line, rather than the GUI, is to use the [Secure Shell (ssh)][84] protocol. Suppose you want to use a computer, but it's not the computer that's in front of you. It's a different computer in some other location—say, at your university, your company, or on the [Amazon cloud][16]. [ssh][85] is the command that allows you to log into a computer remotely over the network. Once you've sshed into a computer, you're in its shell and can run commands on it just as if it were your personal laptop. To ssh, you need to know your user name, the address of the host computer you want to log into, and the password [1]. The basic syntax is:
+
+ssh username@host
+
+For example:
+
+ $ ssh username@myhost.university.edu
+
+If you're trying to ssh into a private computer and don't know the hostname, use its IP address (_username@IP-address_).
+
+ssh also allows you to run a command on the remote server without logging in. For instance, to list the contents of your remote computer's home directory, you could run:
+
+ $ ssh username@myhost.university.edu "ls -hl"
+
+Cool, eh? Moreover, if you have ssh access to a machine, you can copy files to or from it with the utility [rsync][12]—a great way to move data without an external hard drive.
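+
+For example, a typical invocation (reusing the hypothetical hostname above) to mirror a local folder into your remote home directory might be:
+
+ $ rsync -av mydata/ username@myhost.university.edu:~/mydata/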
+
+The file:
+
+~/.ssh/config
+
+determines ssh's behavior and you can create it if it doesn't exist (the dot in the name _.ssh_ confers invisibility—[see the discussion about dotfiles below][86]). On your own private computer, you can ssh into selected servers without having to type in a password by updating this configuration file. To do this, generate [rsa][87] ssh [keys][88]:
+
+ $ mkdir -p ~/.ssh
+ $ cd ~/.ssh
+ $ ssh-keygen -t rsa -f localkey
+
+This will create two files on your computer, a public key:
+
+~/.ssh/localkey.pub
+
+and a private key:
+
+~/.ssh/localkey
+
+You can share your public key, but _do not give anyone your private key!_ Suppose you want to ssh into _myserver.com_. Normally, that's:
+
+ $ ssh myusername@myserver.com
+
+Instead of doing this, add these lines to your _~/.ssh/config_ file:
+
+ Host Myserver
+ HostName myserver.com
+ User myusername
+ IdentityFile ~/.ssh/localkey
+
+Next, cat your public key and paste it into:
+
+~/.ssh/authorized_keys
+
+on the remote machine (i.e., the _myserver.com_ computer). Now on your local computer, you can ssh into _myserver.com_ without a password:
+
+ $ ssh Myserver
+
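+Incidentally, many systems provide the helper ssh-copy-id, which appends your public key to the remote machine's _authorized_keys_ for you:
+
+ $ ssh-copy-id -i ~/.ssh/localkey.pub myusername@myserver.com
+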
+You can also use this technique to push to [github.com][89] [2], without having to punch your password in each time, by pasting your public key into:
+
+Settings > SSH Keys > Add SSH Key
+
+on GitHub (read the [official tutorial][90]).
+
+If this is your first encounter with ssh, you'd be surprised how much of the work of the world is done by ssh. It's worth reading the extensive man page, which gets into matters of computer security and cryptography.
+
+* * *
+
+
+[1] The host also has to enable ssh access. On Macintosh, for example, it's disabled by default, but you can turn it on, as instructed [here][91] ↑
+[2] As you get deeper into the game, tracking your scripts and keeping a single, stable version of them becomes crucial. [Git][92], a vast subject for [another tutorial][93], is the neat solution to this problem and the industry standard for version control. On the web [GitHub][94] provides free hosting of script repositories and connects to the command line via the git interface ↑
+
+## Saving to a File; Stdout and Stderr
+
+To save to a file in unix, use an angle bracket:
+
+ $ echo joe > junk.txt # save to file
+ $ cat junk.txt
+ joe
+
+To append to the end of a pre-existing file, use a double angle bracket:
+
+ $ echo joe >> junk.txt # append to already-existing file
+ $ cat junk.txt
+ joe
+ joe
+
+Returning to our first script, [_myscript.sh_][95], let's save the output to a file:
+
+ $ ./myscript.sh > out.txt
+ mkdir: cannot create directory 'tmp': File exists
+ ls: cannot access myfile_2.txt: No such file or directory
+
+ $ cat out.txt
+ /Users/oliver/tmp/tmp
+ myfile.txt
+
+This is interesting: _out.txt_ has its output. However, not everything went into _out.txt_, because some error messages were echoed to the console. What's going on here is that there are actually two [output streams][96]: _stdout_ (standard out) and _stderr_ (standard error). Look at the following figure from Wikipedia:
+
+![image][97]
+
+(Image credit: [Wikipedia: Standard streams][96])
+
+Proper output goes into stdout while errors go into stderr. The syntax for saving stderr in unix is 2> as in:
+
+ $ # save the output into out.txt and the error into err.txt
+ $ ./myscript.sh > out.txt 2> err.txt
+ $ cat out.txt
+ /Users/oliver/tmp/tmp
+ myfile.txt
+ $ cat err.txt
+ mkdir: cannot create directory 'tmp': File exists
+ ls: cannot access myfile_2.txt: No such file or directory
+
+When you think about it, the fact that output and error are separated is supremely useful. At work, sometimes we parallelize heavily and run 1000 instances of a script. For each instance, the error and output are saved separately. The 758th job, for example, might look like this:
+
+./myjob --instance 758 > out758.o 2> out758.e
+
+(I'm in the habit of using the suffixes _.o_ for output and _.e_ for error.) With this technique we can quickly scan through all 1000 _.e_ files and check if their size is 0. If it is, we know there was no error; if not, we can re-run the failed jobs. Some programs are in the habit of echoing run statistics or other information to stderr. This is an unfortunate practice because it muddies the water and, as in the example above, would make it hard to tell if there was an actual error.
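+
+A minimal sketch of that scan, using a loop and the -s file test operator (both covered below), might look like:
+
+ $ for f in *.e; do if [ -s $f ]; then echo "$f: nonempty error file"; fi; done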
+
+Output vs error is a distinction that many programming languages make. For example, in C++ writing to stdout and stderr is like this:
+
+ cout << "some output" << endl;
+ cerr << "some error" << endl;
+
+In Perl it's:
+
+ print STDOUT "some output\n";
+ print STDERR "some error\n";
+
+In Python it's:
+
+ import sys
+ sys.stdout.write("some output\n")
+ sys.stderr.write("some error\n")
+
+and so on.
+
+## More on Stdout and Stderr; Redirection
+
+For the sake of completeness, we should note that you can redirect standard error to standard output and vice versa. Let's make sure we get the syntax of all things pertaining to stdout and stderr right:
+
+ 1> # save stdout to a file (plain old > also works)
+ 2> # save stderr to a file
+
+as in:
+
+ $ ./myscript.sh 1> out.o 2> out.e
+ $ ./myscript.sh > out.o 2> out.e # these two lines are identical
+
+What if we want to choose where things will be printed from _within_ our script? Then we can use the following syntax:
+
+ >&1 # redirect to the standard out stream
+ >&2 # redirect to the standard error stream
+
+Let's examine five possible versions of our _Hello Kitty_ script:
+
+ #!/bin/bash
+ # version 1
+ echo "hello kitty"
+
+ #!/bin/bash
+ # version 2
+ echo "hello kitty" > somefile.txt
+
+ #!/bin/bash
+ # version 3
+ echo "hello kitty" > &1
+
+ #!/bin/bash
+ # version 4
+ echo "hello kitty" > &2
+
+ #!/bin/bash
+ # version 5
+ echo "hello kitty" > 1
+
+Here's how they work:
+
+* _version 1_ \- echo "hello kitty" to stdout
+* _version 2_ \- echo "hello kitty" to the file somefile.txt
+* _version 3_ \- same as version 1
+* _version 4_ \- echo "hello kitty" to stderr
+* _version 5_ \- echo "hello kitty" to _the file named 1_
+
+This illustrates the point of the ampersand syntax: it distinguishes between the output streams and files named _1_ or _2_. Let's try running script version 4 as a sanity check to make sure these scripts are working as expected:
+
+ $ # output saved to file but error printed to console
+ $ ./hellokitty.sh > junk.txt
+ hello kitty
+
+_hello kitty_ is indeed stderr because it's echoed to the console, not saved into _junk.txt_.
+
+This syntax makes it easy to see how we could, e.g., redirect the standard error to standard output:
+
+ $ ./somescript.sh 2>&1 # redirect stderr to stdout
+
+I rarely have occasion to do this and, although it's not something you need in your introductory unix toolkit, it's good to know.
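+
+The most common use of this kind of redirection is funneling both streams into a single log file:
+
+ $ ./myscript.sh > all.log 2>&1 # stdout and stderr both end up in all.log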
+
+## Conditional Logic
+
+Conditional Logic is a universal feature of programming languages. The basic idea is, _if_ this condition, _then_ do something. It can be made more complex: _if_ this condition, _then_ do something; _else if_ that condition, _then_ do another thing; _else_ (if any other condition), _then_ do yet another thing. Let's see how to implement this in bash:
+
+ $ a=joe
+ $ if [ $a == "joe" ]; then echo hello; fi
+ hello
+
+or:
+
+ $ a=joe
+ $ if [ $a == "joe" ]; then echo hello; echo hello; echo hello; fi
+ hello
+ hello
+ hello
+
+The structure is:
+
+if [ _condition_ ]; then ... ; fi
+
+Everything between the words then and fi (_if_ backwards in case you didn't notice) will execute if the condition is satisfied. In other languages, this block is often defined by curly brackets: _{ }_. For example, in a Perl script, the same code would be:
+
+ #!/usr/bin/env perl
+
+ my $a="joe";
+
+ if ( $a eq "joe" )
+ {
+ print "hellon";
+ print "hellon";
+ print "hellon";
+ }
+
+In bash, _if_ is if, _else_ is else, and _else if_ is elif. In a script it would look like this:
+
+ #!/bin/bash
+
+ a=joe
+
+ if [ $a == "joe" ]; then
+ echo hello;
+ elif [ $a == "doe" ]; then
+ echo goodbye;
+ else
+ echo "ni hao";
+ fi
+
+You can also use a case statement to implement conditional logic. See an example of that [here][98].
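+
+To give you a taste, here's a sketch of the same logic as our if/elif/else script, rewritten as a case statement:
+
+ #!/bin/bash
+
+ a=joe
+
+ case $a in
+     joe) echo hello;;
+     doe) echo goodbye;;
+     *) echo "ni hao";;
+ esac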
+
+Although I said in the intro that unix is the best place to start your computer science education, I have to admit that the syntax for _if-then_ logic is somewhat unwieldy—even unfriendly. Bash is a bad teaching language for conditional logic, [arrays][99], [hashes][100], etc. But that's only because its element is not heavy-duty programming with lots of functions, numerical operations, sophisticated data structures, and logic. Its mastery is over the quick and dirty, manipulating files and directories, and doing system stuff. I still maintain it's the proper starting point because of its wonderful tools, and because knowing its fundamentals is a great asset. Every language has its place in the programming ecosystem. Back in college, I stumbled on a physics book called [_The Tiger and the Shark: Empirical Roots of Wave-Particle Dualism_][101] by Bruce Wheaton. The book had a great epigraph:
+
+> It is like a struggle between a tiger and a shark,
+each is supreme in his own element,
+but helpless in that of the other.
+_J.J. Thomson, 1925_
+
+In our context, this would read: bash is supreme on the command line, but not inside of a script.
+
+## File Test Operators; Return or Exit Status
+
+_File Test Operators_ and _exit status_ are two completely different topics, but since they both go well with if statements, I'll discuss them here. File test operators are things you can stick in an if statement to give you information about a file. Two common problems are _(1)_ checking if your file exists and _(2)_ checking if it's non-zero size. Let's create two files, one empty and one not:
+
+ $ touch emptyfile # create an empty file
+ $ echo joe > nonemptyfile # create a non-empty file
+
+The operator _-e_ tests for existence and _-s_ tests for non-zero-ness:
+
+ $ file=emptyfile
+ $ if [ -e $file ]; then echo "exists"; if [ -s $file ]; then echo "non-0"; fi; fi
+ exists
+
+ $ file=nonemptyfile
+ $ if [ -e $file ]; then echo "exists"; if [ -s $file ]; then echo "non-0"; fi; fi
+ exists
+ non-0
+
+Read The Linux Documentation Project's discussion of file test operators [here][102].
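+
+Two more operators worth knowing are _-d_, which tests whether something is a directory, and _-f_, which tests whether it's a regular file:
+
+ $ mkdir somedir
+ $ if [ -d somedir ]; then echo "directory"; fi
+ directory
+ $ if [ -f somedir ]; then echo "regular file"; fi
+ $ # no output - somedir is not a regular file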
+
+Changing the subject altogether, you may be familiar with the idea of a return value in computer science. Functions can return a value upon completion. In unix, commands also have a return value or _exit code_, queryable with:
+
+$?
+
+This is usually employed to tell the user whether or not the command successfully executed. By convention, successful execution returns 0. For example:
+
+ $ echo joe
+ joe
+ $ echo $? # query exit code of previous command
+ 0
+
+Let's see how the exit code can be useful. We'll make a script, _test_exitcode.sh_, such that:
+
+ $ cat test_exitcode.sh
+ #!/bin/bash
+ sleep 10
+
+This script just pauses for 10 seconds. First, we'll let it run and then we'll interrupt it using _Cntrl-c_:
+
+ $ ./test_exitcode.sh; # let it run
+ $ echo $?
+ 0
+
+ $ ./test_exitcode.sh; # interrupt it
+ ^C
+ $ echo $?
+ 130
+
+The non-zero exit code tells us that it's failed. Now we'll try the same thing with an if statement:
+
+ $ ./test_exitcode.sh
+ $ if [ $? == 0 ]; then echo "program succeeded"; else echo "program failed"; fi
+ program succeeded
+
+ $ ./test_exitcode.sh;
+ ^C
+ $ if [ $? == 0 ]; then echo "program succeeded"; else echo "program failed"; fi
+ program failed
+
+In research, you might run hundreds of command-line programs in parallel. For each instance, there are two key questions: _(1)_ Did it finish? _(2)_ Did it run without error? Checking the exit status is the way to address the second point. You should always check the documentation of the program you're running for information about its exit codes, since some use different conventions. Read The Linux Documentation Project's discussion of exit status [here][103].
+
+_Question_: What's going on here?
+
+ $ if echo joe; then echo joe; fi
+ joe
+ joe
+
+This is yet another example of bash allowing you to stretch syntax like silly putty. In this code snippet,
+
+echo joe
+
+is run, and its successful execution passes a _true_ return code to the if statement. So, the two _joe_s we see echoed to the console are from the statement to be evaluated and the statement inside the conditional. We can also invert this formula, doing something if our command fails:
+
+ $ outputdir=nonexistentdir # set output dir equal to a nonexistent dir
+ $ if ! cd $outputdir; then echo "couldnt cd into output dir"; fi
+ -bash: cd: nonexistentdir: No such file or directory
+ couldnt cd into output dir
+
+ $ mkdir existentdir # make a test directory
+ $ outputdir=existentdir
+ $ if ! cd $outputdir; then echo "couldnt cd into output dir"; fi
+ $ # no error - now we're in the directory existentdir
+
+Did you follow that? (! means logical NOT in unix.) The idea is, we try to cd but, if it's unsuccessful, we echo an error message. This is a particularly useful line to include in a script. If the user gives an output directory as an argument and the directory doesn't exist, we exit. If it does exist, we cd into it and it's business as usual:
+
+if ! cd $outputdir; then echo "[error] couldn't cd into output dir"; exit; fi
+
+Without this line, the script will run in whatever directory it's in if cd fails. Once in lab, I was running a script that didn't have this kind of protection. The output directory wasn't found and the script started making and deleting files in the wrong directory. It was powerfully uncool!
+
+We can implement similar constructions using the && and || operators rather than an if statement. Let's see how this works by making some test files:
+
+ $ touch file{1..4}
+ $ ls
+ file1 file2 file3 file4
+
+The && operator will chug through a chain of commands and keep on going until one of the commands fails, as in:
+
+ $ ( ls file1 ) && ( ls file2 ) && ( ls file3 ) && ( ls file4 )
+ file1
+ file2
+ file3
+ file4
+
+ $ ( ls file1 ) && ( ls file2 ) && ( ls fileX ) && ( ls file4 )
+ file1
+ file2
+ ls: cannot access fileX: No such file or directory
+
+In contrast, the || operator will proceed through the command chain and _stop_ after the first successful one, as in:
+
+ $ ( ls file1 ) || ( ls file2 ) || ( ls file3 ) || ( ls file4 )
+ file1
+
+ $ ( ls fileX ) || ( ls fileY ) || ( ls fileZ ) || ( ls file4 )
+ ls: cannot access fileX: No such file or directory
+ ls: cannot access fileY: No such file or directory
+ ls: cannot access fileZ: No such file or directory
+ file4
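+
+These operators also give us a compact alternative to if statements for everyday error handling: run a follow-up command only on success, or print a message only on failure:
+
+ $ mkdir -p results && cd results # cd runs only if mkdir succeeded
+ $ cd nonexistentdir || echo "couldnt cd" # echo runs only if cd failed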
+
+## Basic Loops
+
+In programming, loops are a way of performing operations iteratively. Loops come in different flavors, but the _for loop_ and _while loop_ are the most basic. In bash, we can implement a for loop like this:
+
+ $ for i in 1 2 3; do echo $i; done
+ 1
+ 2
+ 3
+
+The structure is:
+
+for _variable_ in _list_; do ... ; done
+
+Put anything you like in the list:
+
+ $ for i in 1 2 hello; do echo $i; done
+ 1
+ 2
+ hello
+
+Many other languages wouldn't let you get away with combining data types in the iterations of a loop, but this is a recurrent bash theme: it's fast; it's loose; it's malleable.
+
+To count from 1 to 10, try:
+
+ $ for i in {1..10}; do echo -n "$i "; done; echo
+ 1 2 3 4 5 6 7 8 9 10
+
+But if we can just write:
+
+ $ echo {1..10}
+
+why do we need a loop here? Loops really come into their own in bash when—no surprise!—we're dealing with files, paths, and commands. For example, to loop through all of the text files in the cwd, use:
+
+ $ for i in *.txt; do echo $i; done
+
+Although this is nearly the same as:
+
+ $ ls *.txt
+
+the former construction has the advantage that we can stuff as much code as we like in the block between do and done. Let's make a random directory structure like so:
+
+ $ mkdir -p myfolder{1..3}/{X,Y}
+
+We can populate it with token files (fodder for our example) via a loop:
+
+ $ j=0; for i in myfolder*/*; do echo "*** "$i" ***"; touch ${i}/a_${j}.txt ${i}/b_${j}.txt; ((j++)); done
+
+In bash, ((j++)) is a way of incrementing j. We echo $i to get some visual feedback as the loop iterates. Now our directory structure looks like this:
+
+![image][104]
+
+
+To practice loops, suppose we want to find any file that begins with _b_ in any subfolder and make a symbolic link to it from the cwd:
+
+ $ for i in myfolder*/*/b*; do echo "*** "$i" ***"; ln -s $i; done
+
+As we learned above, a link is not a copy of a file but, rather, a kind of pointer that allows us to access a file from a path other than the one where it actually resides. Our loop yields the links:
+
+ b_0.txt -> myfolder1/X/b_0.txt
+ b_1.txt -> myfolder1/Y/b_1.txt
+ b_2.txt -> myfolder2/X/b_2.txt
+ b_3.txt -> myfolder2/Y/b_3.txt
+ b_4.txt -> myfolder3/X/b_4.txt
+ b_5.txt -> myfolder3/Y/b_5.txt
+
+allowing us to access the _b_ files from the cwd.
+
+I can't overstate all the heroic things you can do with loops in bash. Suppose we want to change the extension of any text file that begins with _a_ and resides in an _X_ subfolder from _.txt_ to _.html_:
+
+ $ for i in myfolder*/X/a*.txt; do echo "*** "$i" ***"; j=$( echo $i | sed 's|.txt|.html|' ); echo $j; mv $i $j; echo; done
+
+But I've jumped the gun! This example features three things we haven't learned yet: command substitution, piping, and sed. You should revisit it after reading those sections, but the idea is that the variable _j_ stores a path that looks like our file's but has the extension replaced. And you see that a knowledge of loops is like a stick of dynamite you can use to blow through large numbers of files.
+
+Here's another contrived example with these yet-to-be-discussed techniques:
+
+ $ for i in $( echo $PATH | tr ":" " " ); do echo "*** "$i" ***"; ls $i | head; echo; done | less
+
+Can you guess what this does? It shows the first ten commands in each folder in our PATH—not something you'd likely need to do, but a demonstration of the fluidity of these constructions.
+
+If we want to run a command or script in parallel, we can do that with loops, too. [gzip][105] is a utility to compress files, thereby saving hard drive space. To compress all text files in the cwd, in parallel, do:
+
+ $ for i in *.txt; do { echo $i; gzip $i & }; done
+
+But I've gotten ahead of myself again. We'll leave the discussion of this example to the section on processes.
+
+The structure of a while loop is:
+
+while _condition_; do ... ; done
+
+I use while loops much less than for loops, but here's an example:
+
+ $ x=1; while ((x <= 3)); do echo $x; ((x++)); done
+ 1
+ 2
+ 3
+
+The while loop can also take input from a file. Suppose there's a file _junk.txt_ such that:
+
+ $ cat junk.txt
+ 1
+ 2
+ 3
+
+You can iterate over this file as such:
+
+ $ while read x; do echo $x; done < junk.txt
+ 1
+ 2
+ 3
+
+## Arguments to a Script
+
+Now that we've covered basic [control flow][106], let's return to the subject of scripting. An important question is, how can we pass arguments to our script? Let's make a script called _hellokitty.sh_:
+
+ #!/bin/bash
+
+ echo hello
+
+Try running it:
+
+ $ chmod 755 hellokitty.sh
+ $ ./hellokitty.sh
+ hello
+
+We can change it to the following:
+
+ #!/bin/bash
+
+ echo hello $1
+
+Now:
+
+ $ ./hellokitty.sh kitty
+ hello kitty
+
+In bash $1 represents the first argument to the script, $2 the second, and so on. If our script is:
+
+ #!/bin/bash
+
+ echo $0
+ echo hello $1 $4
+
+Then:
+
+ $ ./hellokitty.sh my sweet kitty cat
+ ./hellokitty.sh
+ hello my cat
+
+In most programming languages, arguments passed in on the command line are stored as an array. Bash stores the _n_th element of this array in the variable $_n_. $0 is special and refers to the name of the script itself.
+
+For casual scripts this suits us well. However, as you go on to write more involved programs with many options, it becomes impractical to rely on the position of an argument to determine its function in your script. The proper way to do this is using _flags_ that can be deployed in arbitrary order, as in:
+
+command --flag1 1 --flag2 1 --flag3 5
+
+or, in short form:
+
+command -f1 1 -f2 1 -f3 5
+
+You can do this with the command [getopts][107], but it's sometimes easier just to write your own options parser. Here's a sample script called [_test_args][108]_. Although a case statement would be a good way to handle numerous conditions, I'll use an if statement:
+
+ #!/bin/bash
+
+ helpmessage="This script showcases how to read arguments"
+
+ ### get arguments
+ # while input array size greater than zero
+ while (($# > 0)); do
+ if [ "$1" == "-h" -o "$1" == "-help" -o "$1" == "--help" ]; then
+ shift;
+ echo "$helpmessage"
+ exit;
+ elif [ "$1" == "-f1" -o "$1" == "--flag1" ]; then
+ # store what's passed via flag1 in var1
+ shift; var1=$1; shift
+ elif [ "$1" == "-f2" -o "$1" == "--flag2" ]; then
+ shift; var2=$1; shift
+ elif [ "$1" == "-f3" -o "$1" == "--flag3" ]; then
+ shift; var3=$1; shift
+ # if unknown argument, just shift
+ else
+ shift
+ fi
+ done
+
+ ### main
+ # echo variable if not empty
+ if [ ! -z $var1 ]; then echo "flag1 passed "$var1; fi
+ if [ ! -z $var2 ]; then echo "flag2 passed "$var2; fi
+ if [ ! -z $var3 ]; then echo "flag3 passed "$var3; fi
+
+This has some things we haven't seen yet:
+
+* $# is the size of our input argument array
+* shift pops an element off of our array (the same as in Perl)
+* exit exits the script
+* -o is logical OR in unix
+* -z checks if a variable is empty
+
+The code loops through the argument array and keeps popping off elements until the array size is zero, whereupon it exits the loop. For example, one might run this script as:
+
+ $ ./test_args --flag1 x -f2 y --flag3 zzz
+ flag1 passed x
+ flag2 passed y
+ flag3 passed zzz
+
+To spell out how this works, the first argument is _\--flag1_. Since this matches one of our checks, we shift. This pops this element out of our array, so the first element, $1, becomes _x_. This is stored in the variable _var1_, then there's another shift and $1 becomes _-f2_, which matches another condition, and so on.
+
+The flags can come in any order:
+
+ $ ./test_args --flag3 x --flag1 zzz
+ flag1 passed zzz
+ flag3 passed x
+
+ $ ./test_args --flag2 asdf
+ flag2 passed asdf
+
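+For comparison, here's a minimal sketch of the same idea using the built-in getopts. Note that getopts only understands single-letter flags, so the long options above have no direct equivalent (here they're shortened to -a, -b, and -c):
+
+ #!/bin/bash
+
+ # parse -a, -b, and -c, each of which takes a value
+ while getopts "a:b:c:" opt; do
+     case $opt in
+         a) var1=$OPTARG ;;
+         b) var2=$OPTARG ;;
+         c) var3=$OPTARG ;;
+     esac
+ done
+
+ # echo variable if not empty, as above
+ if [ ! -z "$var1" ]; then echo "flag a passed "$var1; fi
+ if [ ! -z "$var2" ]; then echo "flag b passed "$var2; fi
+ if [ ! -z "$var3" ]; then echo "flag c passed "$var3; fi
+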
+We're brushing up against the outer limits of bash here. My prejudice is that you usually shouldn't go this far with bash, because its limitations will come sharply into focus if you try to do too-involved scripting. Instead, use a more friendly language. In Perl, for example, the array containing inputs is @ARGV; in Python, it's sys.argv. Let's compare these common scripting languages:
+
+| **Bash** | **Perl** | **Python** | **Description** |
+| ----- | ----- | ----- | ----- |
+| $0 | $0 | sys.argv[0] | Name of Script Itself |
+| $* | | | String Containing All Input Arguments |
+| ("$@") | @ARGV | sys.argv | Array or List Containing All Input Arguments [1] |
+| $1 | $ARGV[0] | sys.argv[1] | First Argument |
+| $2 | $ARGV[1] | sys.argv[2] | Second Argument |
+
+Perl has a [Getopt][109] package that is convenient for reading arguments, and Python has an even better one called [argparse][110]. Their functionality is infinitely nicer than bash's, so steer clear of bash if you're going for a script with lots of options.
+
+* * *
+
+
+[1] The distinction between $* and $@ is knotty. Dive into these subtleties [on Stackoverflow][111]. ↑
+
+## Multi-Line Comments, Multi-Line Strings in Bash
+
+Let's continue in the realm of scripting. You can do a multi-line comment in bash with an if statement:
+
+ # multi-line comment
+ if false; then
+ echo hello
+ echo hello
+ echo hello
+ fi
+
+(Yes, this is a bit of a hack!)
+
+Multi-line strings are handy for many things. For example, if you want a help section for your script, you can do it like this:
+
+ cat <<_EOF_
+
+ Usage:
+
+ $0 --flag1 STRING [--flag2 STRING] [--flag3 STRING]
+
+ Required Arguments:
+
+ --flag1 STRING This argument does this
+
+ Options:
+
+ --flag2 STRING This argument does that
+ --flag3 STRING This argument does another thing
+
+ _EOF_
+
+How does this syntax work? Everything between the _EOF_ tags comprises the string and is printed. This is called a [_Here Document][112]_. Read The Linux Documentation Project's discussion of Here Documents [here][113].
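+
+A Here Document can also be captured into a variable with command substitution, which is handy if you want to build a usage message once and print it later. A minimal sketch:
+
+ helpmessage=$(cat <<_EOF_
+ Usage: $0 --flag1 STRING [--flag2 STRING]
+ _EOF_
+ )
+ echo "$helpmessage"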
+
+## Source and Export
+
+_Question_: If we create some variables in a script and exit, what happens to those variables? Do they disappear? The answer is, yes, they do. Let's make a script called _test_src.sh_ such that:
+
+ $ cat ./test_src.sh
+ #!/bin/bash
+
+ myvariable=54
+ echo $myvariable
+
+If we run it and then check what happened to the variable on our command line, we get:
+
+ $ ./test_src.sh
+ 54
+ $ echo $myvariable
+
+The variable is undefined. The command [source][114] is for solving this problem. If we want the variable to persist, we run:
+
+ $ source ./test_src.sh
+ 54
+ $ echo $myvariable
+ 54
+
+and—voilà!—our variable exists in the shell. An equivalent syntax for sourcing uses a dot:
+
+ $ . ./test_src.sh # this is the same as "source ./test_src.sh"
+ 54
+
+But now observe the following. We'll make a new script, _test_src_2.sh_, such that:
+
+ $ cat ./test_src_2.sh
+ #!/bin/bash
+
+ echo $myvariable
+
+This script is also looking for _$myvariable_. Running it, we get:
+
+ $ ./test_src_2.sh
+
+Nothing! So _$myvariable_ is defined in the shell but, if we run another script, its existence is unknown. Let's amend our original script to add in an export:
+
+ $ cat ./test_src.sh
+ #!/bin/bash
+
+ export myvariable=54 # export this variable
+ echo $myvariable
+
+Now what happens?
+
+ $ ./test_src.sh
+ 54
+ $ ./test_src_2.sh
+
+Still nothing! Why? Because we didn't source _test_src.sh_. Trying again:
+
+ $ source ./test_src.sh
+ 54
+ $ ./test_src_2.sh
+ 54
+
+So, at last, we see how to do this. If we want access on the shell to a variable which is defined inside a script, we must source that script. If we want _other_ scripts to have access to that variable, we must source plus export.
+
+## Dotfiles (_.bashrc_ and _.bash_profile_)
+
+Dotfiles are simply files that begin with a dot. We can make a test one as follows:
+
+ $ touch .test
+
+Such a file will be invisible in the GUI and you won't see it with vanilla ls either. (This works the same way for directories.) The only way to see it is to use the list _all_ option:
+
+ ls -al
+
+or to list it explicitly by name. This is useful for files that you generally want to keep hidden from the user or discourage tinkering with.
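+
+To see the difference with the _.test_ file we just created (the owner, size, and date below are only illustrative):
+
+ $ ls | grep test # nothing: plain ls skips dotfiles
+ $ ls -al | grep test
+ -rw-r--r-- 1 user group 0 Jan 1 12:00 .test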
+
+Many programs, such as bash, [Vim][55], and [Git][92], are highly configurable. Each uses dotfiles to let the user add functionality, change options, switch key bindings, etc. For example, here are the dotfiles each program employs:
+
+* bash - _.bashrc_
+* vim - _.vimrc_
+* git - _.gitconfig_
+
+The most famous dotfile in my circle is _.bashrc_, which resides in HOME and configures your bash. Actually, let me retract that: let's say _.bash_profile_ instead of _.bashrc_ (read about the difference [here][115]). In any case, the idea is that this dotfile gets executed as soon as you open up the terminal and start a new session. It is therefore ideal for setting your PATH and other variables, adding functions ([like this one][116]), creating _aliases_ (discussed below), and doing any other setup-related chore. For example, suppose you download a new program into /some/path/to/prog and you want to add it to your PATH. Then in your _.bash_profile_ you'd add:
+
+ export PATH=/some/path/to/prog:$PATH
+
+Recalling how export works, this will allow any programs we run on the command line to have access to our amended PATH. Note that we're adding this to the front of our PATH (so, if the program exists in our PATH already, the existing copy will be superseded). Here's an example snippet of my setup file:
+
+ PATH=/apps/python/2.7.6/bin:$PATH # use this version of Python
+ PATH=/apps/R/3.1.2/bin:$PATH # use this version of R
+ PATH=/apps/gcc/4.6.0/bin/:$PATH # use this version of gcc
+ export PATH
+
+There is much ado about _.bashrc_ (read _.bash_profile_) and it inspired one of the greatest unix blog-post titles of all time: [_Pimp my .bashrc][117]_—although this blogger is only playing with his prompt, as it were. As you go on in unix and add things to your _.bash_profile_, it will evolve into a kind of fingerprint, optimizing bash in your own unique way (and potentially making it difficult for others to use).
+
+If you have multiple computers, you'll want to recycle much of your program configuration on all of them. My co-worker uses a nice system I've adopted where the local and global aspects of setup are separated. For example, if you wanted to use certain aliases across all your computers, you'd put them in a global settings file. However, changes to your PATH might be different on different machines, so you'd store those in a local settings file. Then any time you change computers you can simply copy the global files and get your familiar setup, saving lots of work. A convenient way to accomplish this goal of a unified shell environment across all the systems you work on is to put your dotfiles on a server you can access from anywhere, like [GitHub][94] or [Bitbucket][118]. This is exactly what I've done and you can [get the up-to-date versions of my dotfiles on GitHub][119].
+
+Here's a sketch of how this idea works: in HOME make a _.dotfiles/bash_ directory and populate it with your setup files, using a suffix of either _local_ or _share_:
+
+ $ ls -1 .dotfiles/bash/
+ bash_aliases_local
+ bash_aliases_share
+ bash_functions_share
+ bash_inirun_local
+ bash_paths_local
+ bash_settings_local
+ bash_settings_share
+ bash_welcome_local
+ bash_welcome_share
+
+When _.bash_profile_ is called at the startup of your session, it sources all these files:
+
+ # the directory where bash configuration files reside
+ INIT_DIR="${HOME}/.dotfiles/bash"
+
+ # to make local configurations, add these files into this directory:
+ # bash_aliases_local
+ # bash_paths_local
+ # bash_settings_local
+ # bash_welcome_local
+
+ # this line, e.g., protects the functionality of rsync by only turning on the below if the shell is in interactive mode
+ # In particular, rsync fails if things are echo-ed to the terminal
+ [[ "$-" != *i* ]] && return
+
+ # bash welcome
+ if [ -e "${INIT_DIR}/bash_welcome_local" ]; then
+ cat ${INIT_DIR}/bash_welcome_local
+ elif [ -e "${INIT_DIR}/bash_welcome_share" ]; then
+ cat ${INIT_DIR}/bash_welcome_share
+ fi
+
+ #--------------------LOCAL------------------------------
+ # aliases local
+ if [ -e "${INIT_DIR}/bash_aliases_local" ]; then
+ source "${INIT_DIR}/bash_aliases_local"
+ echo "bash_aliases_local loaded"
+ fi
+
+ # settings local
+ if [ -e "${INIT_DIR}/bash_settings_local" ]; then
+ source "${INIT_DIR}/bash_settings_local"
+ echo "bash_settings_local loaded"
+ fi
+
+ # paths local
+ if [ -e "${INIT_DIR}/bash_paths_local" ]; then
+ source "${INIT_DIR}/bash_paths_local"
+ echo "bash_paths_local loaded"
+ fi
+
+ #---------------SHARE-----------------------------
+ # aliases share
+ if [ -e "${INIT_DIR}/bash_aliases_share" ]; then
+ source "${INIT_DIR}/bash_aliases_share"
+ echo "bash_aliases_share loaded"
+ fi
+
+ # settings share
+ if [ -e "${INIT_DIR}/bash_settings_share" ]; then
+ source "${INIT_DIR}/bash_settings_share"
+ echo "bash_settings_share loaded"
+ fi
+
+ # functions share
+ if [ -e "${INIT_DIR}/bash_functions_share" ]; then
+ source "${INIT_DIR}/bash_functions_share"
+ echo "bash_functions_share loaded"
+ fi
+
+A word of caution: echoing things in your _.bash_profile_, as I'm doing here, can be dangerous and break the functionality of utilities like scp and rsync. However, we protect against this with the cryptic line near the top.
+
+Taking care of bash is the hard part. Other programs are less of a chore because, even if you have different programs in your PATH on your home and work computers, you probably want everything else to behave the same. To accomplish this, just drop all your other configuration files into your _.dotfiles_ repository and link to them from your home directory:
+
+ .gitconfig -> .dotfiles/.gitconfig
+ .vimrc -> .dotfiles/.vimrc
+
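+Assuming your dotfiles repository lives at _~/.dotfiles_, a minimal sketch of creating those links with [ln][67]:
+
+ $ cd ~
+ $ ln -s .dotfiles/.gitconfig .gitconfig
+ $ ln -s .dotfiles/.vimrc .vimrc
+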
+## Working Faster with Readline Functions and Key Bindings
+
+If you've started using the terminal extensively, you might find that things are a bit slow. Perhaps you need some long command you wrote yesterday and you don't want to write the damn thing again. Or, if you want to jump to the end of a line, it's tiresome to move the cursor one character at a time. Failure to immediately solve these problems will push your productivity back into the stone age and you may end up swearing off the terminal as a Rube Goldberg-ian dystopia. So—enter keyboard shortcuts!
+
+The backstory about shortcuts is that there are two massively influential text editors, [Emacs][57] and [Vim][55], whose users—to be overdramatic—are divided into two warring camps. Each program has its own conventions for shortcuts, like jumping words with your cursor, and in bash they're Emacs-flavored by default. But you can toggle between either one:
+
+ $ set -o emacs # Set emacs-style key bindings (this is the default)
+ $ set -o vi # Set vi-style key bindings
+
+Although I prefer Vim as a text-editor, I use Emacs key bindings on the command line. The reason is that in Vim there are multiple modes (normal mode, insert mode, command mode). If you want to jump to the front of a line, you have to switch from insert mode to normal mode, which breaks up the flow a little. In Emacs there's no such complication. Emacs commands usually start with the _Control_ key or the _Meta_ key (usually _Esc_). Here are some things you can do:
+
+* _Cntrl-a_ \- jump cursor to beginning of line
+* _Cntrl-e_ \- jump cursor to end of line
+* _Cntrl-k_ \- delete to end of line
+* _Cntrl-u_ \- delete to beginning of line
+* _Cntrl-w_ \- delete back one word
+* _Cntrl-y_ \- paste (yank) what was deleted with the above shortcuts
+* _Cntrl-r_ \- reverse-search history for a given word
+* _Cntrl-c_ \- kill the process running in the foreground; don't execute current line on the command line
+* _Cntrl-z_ \- suspend the process running in the foreground
+* _Cntrl-l_ \- clear screen (this has an advantage over the unix command clear in that it works in the Python, MySQL, and other shells)
+* _Cntrl-d_ \- [end of transmission][120] (in practice, often synonymous with quit - e.g., exiting the Python or MySQL shells)
+* _Cntrl-s_ \- freeze screen
+* _Cntrl-q_ \- un-freeze screen
+
+_These are supremely useful!_ I use these numerous times a day. (On the Mac, the first three even work in the Google search bar!) The first bunch of these fall under the umbrella of [_ReadLine Functions][121]_ (read GNU's extensive documentation [here][122]). There are actually tons more, and you can see them all by entering:
+
+ $ bind -P # show all Readline Functions and their key bindings
+ $ bind -l # show all Readline Functions
+
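+For instance, to check whether the history-search functions are bound to anything yet, you can grep for them (the exact wording varies with your bash version):
+
+ $ bind -P | grep history-search
+ history-search-backward is not bound to any keys
+ history-search-forward is not bound to any keys
+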
+Four of the most excellent Readline Functions are:
+
+* _forward-word_ \- jump cursor forward a word
+* _backward-word_ \- jump cursor backward a word
+* _history-search-backward_ \- scroll through your bash history backward
+* _history-search-forward_ \- scroll through your bash history forward
+
+For the first two—which are absolutely indispensable—you can use the default Emacs way:
+
+* _Meta-f_ \- jump forward one word
+* _Meta-b_ \- jump backward one word
+
+However, reaching for the _Esc_ key is a royal pain in the ass—you have to re-position your hands on the keyboard. This is where _key-binding_ comes into play. Using the command bind, you can map a Readline Function to any key combination you like. Of course, you should be careful not to overwrite pre-existing key bindings that you want to use. I like to map the following keys to these Readline Functions:
+
+
+* _Cntrl-forward-arrow_ \- forward-word
+* _Cntrl-backward-arrow_ \- backward-word
+* _up-arrow_ \- history-search-backward
+* _down-arrow_ \- history-search-forward
+
+In my _.bash_profile_ (or, more accurately, in my global bash settings file) I use:
+
+ # make cursor jump over words
+ bind '"e[5C": forward-word' # control+arrow_right
+ bind '"e[5D": backward-word' # control+arrow_left
+
+ # make history searchable by entering the beginning of command
+ # and using up and down keys
+ bind '"e[A": history-search-backward' # arrow_up
+ bind '"e[B": history-search-forward' # arrow_down
+
+(although these may not work universally [1]). How does this cryptic symbology translate into these particular key bindings? There's a neat trick you can use, to be revealed in the next section.
+
+_Tip_: On Mac, you can move your cursor to any position on the line by holding down _Option_ and clicking your mouse there. I rarely use this, however, because it's faster to make your cursor jump via the keyboard.
+
+* * *
+
+
+[1] If you have trouble getting this to work in OS X's Terminal, try [iTerm2][123] instead, as described [here][124]. ↑
+
+## More on Key Bindings, the ASCII Table, _Control-v_
+
+Before we get to the key binding conundrum, let's review [ASCII][125]. This is, simply, a way of mapping every character on your keyboard to a numeric code. As Wikipedia puts it:
+
+> The American Standard Code for Information Interchange (ASCII) is a character-encoding scheme originally based on the English alphabet that encodes 128 specified characters—the numbers 0-9, the letters a-z and A-Z, some basic punctuation symbols, some control codes that originated with Teletype machines, and a blank space—into the 7-bit binary integers.
+
+For example, the character _A_ is mapped to the number _65_, while _q_ is _113_. Of special interest are the _control characters_, which are the representations of things that cannot be printed like _return_ or _delete_. Again [from Wikipedia][126], here is the portion of the ASCII table for these control characters:
+
+| **Binary** | **Oct** | **Dec** | **Hex** | **Abbr** | **Unicode** | **Caret** | **C Escape** | **Name** |
+| ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
+| 000 0000 | 000 | 0 | 00 | NUL | ␀ | ^@ | \0 | Null |
+| 001 1011 | 033 | 27 | 1B | ESC | ␛ | ^[ | \e | Escape |
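+
+The trick hinges on the _Cntrl-v_ named in this section's title: bash's _quoted-insert_ function, bound to _Cntrl-v_ by default, inserts the next keystroke literally instead of interpreting it. Press _Cntrl-v_ and then, say, the up-arrow, and the terminal displays the key's raw escape sequence:
+
+ $ # press Cntrl-v then up-arrow and this appears:
+ ^[[A
+
+The ^[ is caret notation for _Esc_ (the \e in the bind commands above), so the up-arrow sends _Esc [ A_, exactly the string we bound to history-search-backward.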
+
+[1]: http://en.wikipedia.org/wiki/Unix
+[2]: http://en.wikipedia.org/wiki/Linux
+[3]: http://en.wikipedia.org/wiki/Command-line_interface
+[4]: http://en.wikipedia.org/wiki/Terminal_emulator
+[5]: http://en.wikipedia.org/wiki/Shell_script
+[6]: http://en.wikipedia.org/wiki/Bourne-again_shell
+[7]: http://www.oliverelliott.org/static/article/img/terminal_591.png
+[8]: http://www.gnu.org/software/coreutils/
+[9]: http://en.wikipedia.org/wiki/GNU_Core_Utilities
+[10]: /static/img/letter_600.jpg
+[11]: https://class.coursera.org/startup-001
+[12]: http://ss64.com/bash/rsync.html
+[13]: http://www.perl.org
+[14]: http://www.python.org
+[15]: https://www.gnupg.org/index.html
+[16]: http://aws.amazon.com/ec2/
+[17]: http://nginx.org/
+[18]: http://en.wikipedia.org/wiki/Graphical_user_interface
+[19]: http://www.youtube.com/watch?v=WiX7GTelTPM
+[20]: http://upload.wikimedia.org/wikipedia/commons/c/cd/Unix_timeline.en.svg
+[21]: /article/computing/ref_unix/
+[22]: http://www.oliverelliott.org/static/article/img/terminal_119.png
+[23]: http://en.wikipedia.org/wiki/Cmd.exe
+[24]: http://en.wikipedia.org/wiki/Windows_PowerShell
+[25]: http://www.putty.org
+[26]: https://chrome.google.com/webstore/detail/secure-shell/pnhechapfaindjhompbnflcldabbghjo?hl=en
+[27]: http://mobaxterm.mobatek.net
+[28]: http://en.wikipedia.org/wiki/Darwin_(operating_system)
+[29]: http://aws.amazon.com/free/
+[30]: http://www.ubuntu.com/download
+[31]: http://www.linuxmint.com
+[32]: https://getfedora.org/
+[33]: http://www.centos.org
+[34]: https://www.cygwin.com/
+[35]: /article/computing/tips_mac/#InstalltheGNUCoreutils
+[36]: http://www.oliverelliott.org/static/article/img/root_dir_structure.png
+[37]: http://en.wikipedia.org/wiki/Home_directory
+[38]: http://www.oliverelliott.org/static/article/img/home_dir_structure.png
+[39]: http://www.oliverelliott.org/static/article/img/dir_struct_1125.png
+[40]: http://www.thegeekstuff.com/2010/09/linux-file-system-structure/
+[41]: http://en.wikipedia.org/wiki/Uniform_resource_locator
+[42]: http://www.e-reading.biz/htmbook.php/orelly/unix2.1/lrnunix/ch03_01.htm
+[43]: http://ss64.com/bash/ls.html
+[44]: http://www.oliverelliott.org/static/article/img/ls.png
+[45]: http://www.oliverelliott.org/static/article/img/ls1.png
+[46]: http://www.oliverelliott.org/static/article/img/lshl.png
+[47]: http://unixhelp.ed.ac.uk/CGI/man-cgi?finger
+[48]: http://en.wikipedia.org/wiki/Hidden_file_and_hidden_directory
+[49]: http://www.oliverelliott.org/static/article/img/lsal.png
+[50]: http://en.wikipedia.org/wiki/Glob_%28programming%29
+[51]: http://macintoshgarden.org/games/prince-of-persia
+[52]: http://en.wikipedia.org/wiki/Microsoft_Word
+[53]: http://www.youtube.com/watch?v=znlFu_lemsU
+[54]: http://en.wikipedia.org/wiki/Grep
+[55]: http://www.vim.org
+[56]: http://www.nano-editor.org/
+[57]: http://www.gnu.org/software/emacs/
+[58]: /article/computing/wik_vim/
+[59]: http://www.sublimetext.com/
+[60]: http://aquamacs.org/
+[61]: http://www.peterborgapps.com/smultron/
+[62]: http://en.wikipedia.org/wiki/Escape_character
+[63]: http://www.oliverelliott.org/static/article/img/bash_prompt_426.png
+[64]: http://en.wikipedia.org/wiki/X_Window_System
+[65]: http://www.oliverelliott.org/static/article/img/thepath_410.png
+[66]: http://en.wikipedia.org/wiki/Symbolic_link
+[67]: http://ss64.com/bash/ln.html
+[68]: http://www.oliverelliott.org/static/article/img/myscript_634.png
+[69]: https://www.talisman.org/~erlkonig/documents/commandname-extensions-considered-harmful.shtml
+[70]: http://en.wikipedia.org/wiki/Shebang_(Unix)
+[71]: http://en.wikipedia.org/wiki/Software_portability
+[72]: http://en.wikipedia.org/wiki/Env
+[73]: http://en.wikipedia.org/wiki/Interpreted_language
+[74]: http://en.wikipedia.org/wiki/Compiled_language
+[75]: http://en.wikipedia.org/wiki/Compiler
+[76]: http://gcc.gnu.org
+[77]: https://virtualenv.pypa.io/en/latest/
+[78]: http://stackoverflow.com/questions/5725296/difference-between-sh-and-bash
+[79]: http://www.cyberciti.biz/tips/how-do-i-find-out-what-shell-im-using.html
+[80]: http://en.wikipedia.org/wiki/Z_shell
+[81]: http://en.wikipedia.org/wiki/Tcsh
+[82]: http://ss64.com/bash/chmod.html
+[83]: http://en.wikipedia.org/wiki/Chmod
+[84]: http://en.wikipedia.org/wiki/Secure_Shell
+[85]: http://www.ss64.com/bash/ssh.html
+[86]: /article/computing/tut_unix/#Dotfilesbashrcandbash_profile
+[87]: http://en.wikipedia.org/wiki/RSA_(cryptosystem)
+[88]: http://en.wikipedia.org/wiki/Public-key_cryptography
+[89]: https://github.com/
+[90]: https://help.github.com/articles/generating-ssh-keys/
+[91]: /article/computing/tips_mac/#sshintoYourMac
+[92]: http://git-scm.com/
+[93]: /article/computing/wik_git/
+[94]: https://github.com
+[95]: /static/article/example/myscript.html
+[96]: http://en.wikipedia.org/wiki/Standard_streams
+[97]: http://www.oliverelliott.org/static/article/img/Stdstreams-notitle.svg.png
+[98]: http://bash.cyberciti.biz/guide/The_case_statement
+[99]: http://en.wikipedia.org/wiki/Array_data_structure
+[100]: http://en.wikipedia.org/wiki/Hash_table
+[101]: http://www.amazon.com/The-Tiger-Shark-Empirical-Wave-Particle/dp/0521358922
+[102]: http://www.tldp.org/LDP/abs/html/fto.html
+[103]: http://tldp.org/LDP/abs/html/exit-status.html
+[104]: http://www.oliverelliott.org/static/article/img/lsdirtree_234.jpg
+[105]: http://ss64.com/bash/gzip.html
+[106]: http://en.wikipedia.org/wiki/Control_flow
+[107]: http://wiki.bash-hackers.org/howto/getopts_tutorial
+[108]: /static/article/example/test_args.html
+[109]: http://perldoc.perl.org/Getopt/Long.html
+[110]: https://docs.python.org/2/howto/argparse.html
+[111]: http://stackoverflow.com/questions/12314451/accessing-bash-command-line-args-vs
+[112]: http://en.wikipedia.org/wiki/Here_document
+[113]: http://www.tldp.org/LDP/abs/html/here-docs.html
+[114]: http://ss64.com/bash/source.html
+[115]: http://www.joshstaiger.org/archives/2005/07/bash_profile_vs.html
+[116]: http://www.virtualblueness.net/linux-gazette/109/marinov.html
+[117]: http://zxvf-linux.blogspot.com/2013/05/pimp-my-bashrc.html
+[118]: https://bitbucket.org
+[119]: https://github.com/gitliver/.dotfiles
+[120]: http://en.wikipedia.org/wiki/End-of-transmission_character
+[121]: http://en.wikipedia.org/wiki/GNU_Readline
+[122]: http://tiswww.case.edu/php/chet/readline/readline.html
+[123]: http://iterm2.com/
+[124]: /article/computing/tips_mac/#InstalliTerm2
+[125]: http://en.wikipedia.org/wiki/ASCII
+[126]: http://en.wikipedia.org/wiki/ASCII#ASCII_control_characters
diff --git a/cmd/ref/some command line tips for the web developer.txt b/cmd/ref/some command line tips for the web developer.txt
new file mode 100644
index 0000000..c881c4a
--- /dev/null
+++ b/cmd/ref/some command line tips for the web developer.txt
@@ -0,0 +1,105 @@
+---
+title: Some command line tips for the web developer
+date: 2015-04-21T18:40:43Z
+source: http://tosbourn.com/some-command-line-tips-for-the-web-developer/
+tags: #lhp, #cmdline
+
+---
+
+I wanted to share some tips I have collated over the years that would be useful for web developers who occasionally need to roll their sleeves up and get their hands dirty working on a server. These are by no stretch of the imagination a complete list, nor do I get too specific. The tips below should work on the majority of Linux-based web servers and should apply to the majority of setups.
+
+## Where to get more help
+
+The first thing I should point out is that if at any time you get stuck or need help, you should consult your sysadmin. If you have no sysadmin and need to consult the internet, I would suggest browsing and then asking on [ServerFault][1]; the people on that site seem to know their stuff and it is a vibrant enough community.
+
+## Using Tab
+
+When typing file or folder names, you can hit tab once to automatically complete the name. This can vastly speed up your time in the terminal: instead of typing `vi /home/username/longFolderName/MyFileNameIsLongToo.txt` you could type `vi /h TAB u TAB l TAB M TAB`.
+
+You can also double-tap the tab key to see a list of the options available. This is useful if you have two similar filenames and the tab autocomplete doesn't know which one you want.
+
+## Listing Files
+
+You probably already know the `ls` command, but did you know that by adding `-lah` after it you can greatly improve the detail you get back from it?
+
+`-l` lists everything out in a nicer format, `-a` includes hidden (dot) files in the list and `-h` makes things like sizes human-readable.
+
+The other thing some people don't know about `ls` is that you can pass in the location you want to look at. So instead of typing `cd /home/user/` and then `ls`, you can just type `ls /home/user/`.
+
+## Removing Files
+
+Again, most people know about using `rm` to remove files, but some don't realise you can pass multiple files to it, for example `rm file1 file2`.
+
+## Viewing Files
+
+`cat` is your friend for quickly viewing the contents of a file: you can just type `cat my_file.txt` instead of going into your text editor.
+
+For larger files you might only want to see the very top or the very bottom of the file. The quickest way to do that is `head my_file.txt` for the top or `tail my_file.txt` for the bottom.
+
+If you are constantly checking a debug file, you can monitor it by typing `tail -f my_file.txt`. This follows the file, so the output automatically updates as new lines are written to it.
+
+## Drive Space
+
+If you need to know how much space is left on your drives, just type `df -h`. You will normally get information for drives you probably didn't think existed; don't worry about those, just focus on the ones that are clearly your main drives.
+
+## RAM
+
+If you need to know how much RAM is installed on your machine, just type `free -m`. The main column you will want to look at is the 'total' one.
+
+## History
+
+Did you just get shouted at because you forgot to prefix your command with sudo? Type `sudo !!` and it will run the last command as sudo.
+
+If you know you entered a command a couple of commands ago, hit the up arrow a couple of times and the terminal will bring it back for you.
+
+If you used a command a while ago and want to recall it, type `history` and you will see everything you have typed recently, each entry with a unique number. Take note of the number and type `![the number]` and the command will run.
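+
+For example (the numbers will differ in your history; bash echoes the recalled command before running it):
+
+ $ history | tail -2
+   211  df -h
+   212  history | tail -2
+ $ !211
+ df -h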
+
+## The Pipe Character
+
+The pipe character takes anything that would normally appear on the screen and passes it somewhere else. This is crazy handy for doing basic searches: instead of typing `history` and scanning for every time you used sudo, type `history | grep sudo` and you will get only the items with sudo in them.
+
+## What is running
+
+Find out what is currently running by typing `top` — pay attention to the load average: if it is high, there may be something going wrong. Generally look for a full disk, a process that is going mental or a load of traffic to the site.
+
+If you just care about the load average you can get it on one line by typing `uptime`.
+
+## Finding out More
+
+`man` + any command will give you the manual for that command — super handy for finding out what you need without resorting to Google. Also, just like the programs on your own computer, different versions could be installed on the server, which means the man page will sometimes be more relevant than what Google searches tell you.
+
+## Moving folder contents up a level quickly
+
+Move all the contents of a folder into its parent with `mv child_dir/* ./`
+
+This means: take everything in child_dir (but not the child_dir directory itself) and move it to the folder we are in. If you don't like the idea of typing `./` because it looks odd, you can always type the full path instead.
+
+## Alias Commands
+
+If there are long or complex commands that you type regularly, you should alias them. This means creating another name you can reference the command by.
+
+For example, `alias lsa='ls -lah'` would allow you to type `lsa` and what would actually run is `ls -lah`. You can even overwrite `ls` itself if you really want, with `alias ls='ls -lah'`.
+
+If you want to pass parameters, you can create a function instead of an alias, like this (just type it into the command line): `mkdir_ls() { mkdir $*; cd $*; }`. Then when you type `mkdir_ls new_folder` it will make a new folder and then move you into that folder.
+
+## In closing
+
+Hopefully you find some of these useful, if you have any to add feel free to let me know on [Twitter][2].
+
+
+[1]: http://www.serverfault.com
+[2]: https://twitter.com/tosbourn "Some command line tips for the web developer"
+[3]: http://www.facebook.com/sharer/sharer.php?u=http://tosbourn.com/some-command-line-tips-for-the-web-developer&t=Some command line tips for the web developer
+[4]: https://twitter.com/intent/tweet?text=Some command line tips for the web developer&url=http://tosbourn.com/some-command-line-tips-for-the-web-developer
+[5]: https://plus.google.com/share?url=http://tosbourn.com/some-command-line-tips-for-the-web-developer
+[6]: http://www.linkedin.com/shareArticle?mini=true&url=http://tosbourn.com/some-command-line-tips-for-the-web-developer&title=Some command line tips for the web developer&summary=Some command line tips for the web developer&source=http://tosbourn.com/some-command-line-tips-for-the-web-developer
+[7]: http://www.reddit.com/submit?url=http://tosbourn.com/some-command-line-tips-for-the-web-developer
+[8]: http://news.ycombinator.com/submitlink?u=http://tosbourn.com/some-command-line-tips-for-the-web-developer&t=Some command line tips for the web developer
+[9]: /help-the-site/
+[10]: https://disqus.com/?ref_noscript
diff --git a/cmd/setup-vps-server.txt b/cmd/setup-vps-server.txt
new file mode 100644
index 0000000..efee5d0
--- /dev/null
+++ b/cmd/setup-vps-server.txt
@@ -0,0 +1,116 @@
+Let's talk about your server hosting situation. I know a lot of you are still using a shared web host. The thing is, it's 2015; shared hosting is only necessary if you really want unexplained site outages and over-crowded servers that slow to a crawl.
+
+It's time to break free of those shared hosting chains. It's time to stop accepting the software stack you're handed. It's time to stop settling for whatever outdated server software and configurations some shared hosting company sticks you with.
+
+**It's time to take charge of your server; you need a VPS**
+
+What? Virtual Private Servers? Those are expensive and complicated... don't I need to know Linux or something?
+
+No, no and not really.
+
+Thanks to an increasingly competitive market you can pick up a very capable VPS for $5 a month. Setting up your VPS *is* a little more complicated than using a shared host, but most VPS providers these days offer one-click installers that will set up a Rails, Django or even WordPress environment for you.
+
+As for Linux, knowing your way around the command line certainly won't hurt, but these tutorials will teach you everything you really need to know. We'll also automate everything so that critical security updates for your server are applied automatically without you lifting a finger.
+
+## Pick a VPS Provider
+
+There are hundreds, possibly thousands of VPS providers these days. You can nerd out comparing all of them on [serverbear.com](http://serverbear.com/) if you want. When you're starting out I suggest sticking with what I call the big three: Linode, Digital Ocean or Vultr.
+
+Linode would be my choice for mission-critical hosting. I use it for client projects, but Vultr and Digital Ocean are cheaper and perfect for personal projects and experiments. Both offer $5-a-month servers, which gets you .5 GB of RAM, plenty of bandwidth and 20-30GB of SSD-based storage space. Vultr actually gives you a little more RAM, which is helpful if you're setting up a Rails or Django environment (i.e. a long-running process that requires more memory), but I've been hosting a Django-based site on a 512MB Digital Ocean instance for 18 months and have never run out of memory.
+
+Also note that all these plans start off charging by the hour so you can spin up a new server, play around with it and then destroy it and you'll have only spent a few pennies.
+
+Which one is better? They're both good. I've been using Vultr more these days, but Digital Ocean has a nicer, somewhat slicker control panel. There are also many others I haven't named. Just pick one.
+
+Here's a link that will get you a $10 credit at [Vultr](http://www.vultr.com/?ref=6825229) and here's one that will get you a $10 credit at [Digital Ocean](https://www.digitalocean.com/?refcode=3bda91345045) (both of those are affiliate links and help cover the cost of hosting this site *and* get you some free VPS time).
+
+For simplicity's sake, and because it offers more one-click installers, I'll use Digital Ocean for the rest of this tutorial.
+
+## Create Your First VPS
+
+In Digital Ocean you'll create a "Droplet". It's a three step process: pick a plan (stick with the $5 a month plan for starters), pick a location (stick with the defaults) and then install a bare OS or go with a one-click installer. Let's get WordPress up and running, so select WordPress on 14.04 under the Applications tab.
+
+If you want automatic backups, and you do, check that box. Backups are not free, but generally won't add more than about $1 to your monthly bill -- it's money well spent.
+
+The last thing we need to do is add an SSH key to our account. If we don't, Digital Ocean will email us our root password in a plain-text email. Yikes.
+
+If you need to generate some SSH keys, here's a short guide, [How to Generate SSH keys](). You can skip step 3 in that guide. Once you've got your keys set up on your local machine you just need to add them to your droplet.
+
+If you're on OS X, you can use this command to copy your public key to the clipboard:
+
+ pbcopy < ~/.ssh/id_rsa.pub
+
+Otherwise you can use cat to print it out and copy it:
+
+ cat ~/.ssh/id_rsa.pub
+
+Now click the button to "add an SSH key". Then paste the contents of your clipboard into the box. Hit "add SSH Key" and you're done.
+
+Now just click the giant "Create Droplet" button.
+
+Congratulations, you just deployed your first VPS.
+
+## Secure Your VPS
+
+Now we can log in to our new VPS with this code:
+
+ ssh root@12.34.56.78
+
+That will cause SSH to ask if you want to add the server to the list of known hosts. Say yes, and then on OS X you'll get a dialog asking for the passphrase you created a minute ago when you generated your SSH key. Enter it and check the box to save it to your keychain so you don't have to enter it again.
+
+And you're now logged in to your VPS as root. That's not how we want to log in, though, since root is a very privileged user that can wreak all sorts of havoc. The first thing we'll do is change the password of the root user. To do that, just enter:
+
+ passwd
+
+And type a new password.
+
+Now let's create a new user:
+
+ adduser myusername
+
+Give your new user a secure password and then enter this command:
+
+ visudo
+
+If you get an error saying that there is no such app installed, you'll need to install sudo first (`apt-get install sudo` on Debian, which does not ship with sudo). visudo will open the sudoers file. Use the arrow keys to move the cursor down to the line that reads:
+
+ root ALL=(ALL:ALL) ALL
+
+Now add this line:
+
+ myusername ALL=(ALL:ALL) ALL
+
+Where myusername is the username you created just a minute ago. Now we need to save the file. To do that hit Control-X, type a Y and then hit return.
+
+Now, **WITHOUT LOGGING OUT OF YOUR CURRENT ROOT SESSION** open another terminal window and make sure you can login with your new user:
+
+ ssh myusername@12.34.56.78
+
+You'll be asked for the password that we created just a minute ago on the server (not the one for our SSH key). Enter that password and you should be logged in. To make sure we can get root access when we need it, try entering this command:
+
+ sudo apt-get update
+
+That should ask for your password again and then spit out a bunch of information, all of which you can ignore for now.
+
+Okay, now you can log out of your root terminal window. To do that just hit Control-D.
+
+## Finishing Up
+
+What about actually accessing our VPS on the web? Where's WordPress? Just point your browser to the bare IP address you used to log in and you should get the first screen of the WordPress installer.
+
+We now have a VPS deployed and we've taken some very basic steps to secure it. We can do a lot more to make things secure, but I've covered that in a separate article.
+
+One last thing: the user we created does not have access to our SSH keys, so we need to add them. First make sure you're logged out of the server (type Control-D and you'll get a message telling you the connection has been closed). Now, on your local machine, paste this command:
+
+ cat ~/.ssh/id_rsa.pub | ssh myusername@12.34.56.78 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
+
+You'll have to put in your password one last time, but from now on you can log in with your SSH key.
+
+## Next Steps
+
+Congratulations, you made it past the first hurdle; you're well on your way to taking control of your server. Kick back, relax and write some blog posts.
+
+Write down any problems you had with this tutorial and send me a link so I can check out your blog (I'll try to help figure out what went wrong too).
+
+Because we used a pre-built image from Digital Ocean, though, we're really not much better off than if we had gone with shared hosting. But that's okay -- you have to start somewhere. Next up we'll do the same thing, but this time with a bare OS, which will serve as the basis for a custom-built version of Nginx that's highly optimized and way faster than any stock server.
+
diff --git a/cmd/ssh-keys.txt b/cmd/ssh-keys.txt
new file mode 100644
index 0000000..25c1b8c
--- /dev/null
+++ b/cmd/ssh-keys.txt
@@ -0,0 +1,94 @@
+SSH keys are an easier, more secure way of logging into your virtual private server via SSH. Passwords are vulnerable to brute force attacks and just plain guessing. Key-based authentication is (currently) much more difficult to brute force and, when combined with a password on the key, provides a secure way of accessing your VPS instances from anywhere.
+
+Key-based authentication uses two keys: the first is the "public" key that anyone is allowed to see; the second is the "private" key that only you ever see. So to log in to a VPS using keys we need to create a pair -- a private key and a public key that matches it -- and then securely upload the public key to our VPS instance. We'll further protect our private key by adding a password to it.
+
+Open up your terminal application. On OS X, that's Terminal, which is in Applications >> Utilities folder. If you're using Linux I'll assume you know where the terminal app is and Windows fans can follow along after installing [Cygwin](http://cygwin.com/).
+
+Here's how to generate SSH keys in three simple steps.
+
+## Setup SSH for More Secure Logins
+
+### Step 1: Check for SSH Keys
+
+Cut and paste this line into your terminal to check and see if you already have any SSH keys:
+
+ ls -al ~/.ssh
+
+If you see output like this, then skip to Step 3:
+
+ id_dsa.pub
+ id_ecdsa.pub
+ id_ed25519.pub
+ id_rsa.pub
+
+### Step 2: Generate an SSH Key
+
+Here's the command to create a new SSH key. Just cut and paste, but be sure to put in your own email address in quotes:
+
+ ssh-keygen -t rsa -C "your_email@example.com"
+
+This will start a series of questions; just hit enter to accept the default choice for each of them, including the first one, which asks where to save the file.
+
+Then it will ask for a passphrase; pick a good long one. And don't worry, you won't need to enter this every time: there's something called `ssh-agent` that will ask for your passphrase once and then store it for you for the duration of your session (i.e. until you restart your computer).
+
+ Enter passphrase (empty for no passphrase): [Type a passphrase]
+ Enter same passphrase again: [Type passphrase again]
+
+Once you've put in the passphrase, SSH will spit out a "fingerprint" that looks a bit like this:
+
+ # Your identification has been saved in /Users/you/.ssh/id_rsa.
+ # Your public key has been saved in /Users/you/.ssh/id_rsa.pub.
+ # The key fingerprint is:
+ # d3:50:dc:0f:f4:65:29:93:dd:53:c2:d6:85:51:e5:a2 scott@longhandpixels.net
+
+### Step 3: Copy Your Public Key to Your VPS
+
+If you have ssh-copy-id installed on your system you can use this line to transfer your keys:
+
+ ssh-copy-id user@12.34.56.78
+
+If that doesn't work, you can paste in the keys using SSH:
+
+ cat ~/.ssh/id_rsa.pub | ssh user@12.34.56.78 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
+
+Whichever you use, you should get a message like this:
+
+
+ The authenticity of host '12.34.56.78 (12.34.56.78)' can't be established.
+ RSA key fingerprint is 01:3b:ca:85:d6:35:4d:5f:f0:a2:cd:c0:c4:48:86:12.
+ Are you sure you want to continue connecting (yes/no)? yes
+ Warning: Permanently added '12.34.56.78' (RSA) to the list of known hosts.
+ username@12.34.56.78's password:
+
+ Now try logging into the machine, with "ssh 'user@12.34.56.78'", and check in:
+
+ ~/.ssh/authorized_keys
+
+ to make sure we haven't added extra keys that you weren't expecting.
+
+Now log in to your VPS with ssh like so:
+
+ ssh username@12.34.56.78
+
+And you won't be prompted for a password by the server. You will, however, be prompted for the passphrase you used to encrypt your SSH key. You'll need to enter that passphrase to unlock your SSH key, but ssh-agent should store it for you so you only need to re-enter it when you log out or restart your computer.
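+
+If you ever want to hand your passphrase to ssh-agent by hand, or check which keys the agent is holding, `ssh-add` does the trick. A quick sketch, assuming the default key location:
+
+ ssh-add ~/.ssh/id_rsa    # prompts for your passphrase, then caches the key
+ ssh-add -l               # lists the keys the agent currently holds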
+
+And there you have it, secure, key-based log-ins for your VPS.
+
+### Bonus: SSH config
+
+If you'd rather not type `ssh myuser@12.34.56.78` all the time you can add that host to your SSH config file and refer to it by hostname.
+
+The SSH config file lives in `~/.ssh/config`. This command will either open that file if it exists or create it if it doesn't:
+
+ nano ~/.ssh/config
+
+Now we need to create a host entry. Here's what mine looks like:
+
+ Host myname
+ Hostname 12.34.56.78
+ user myvpsusername
+ #Port 24857 #if you set a non-standard port uncomment this line
+ CheckHostIP yes
+ TCPKeepAlive yes
+
+Then to log in all I need to do is type `ssh myname`. This is even more helpful when using `scp`, since you can skip the whole username@server bit and just type `scp myname:/home/myuser/somefile.txt .` to copy a file.
diff --git a/consumer-digest-best-buys.txt b/consumer-digest-best-buys.txt
new file mode 100644
index 0000000..b039379
--- /dev/null
+++ b/consumer-digest-best-buys.txt
@@ -0,0 +1,62 @@
+Best Buys in photo and video editing software are based on performance, features and ease of use.
+
+Apple users are well served by the iLife suite for both photo and video editing, so Apple's bundled applications have not been included in the Best Buys section.
+
+Mobile Photo Editors
+
+[E] Adobe Photoshop Express (free)
+Photoshop Express offers a simplified version of the same editing tools you'll find in more powerful desktop software and can quickly polish your images into something web-worthy.
+
+Features:
+versions: iOS, Android
+* Advanced editing tools like filters and effects
+* Ability to upload photos to popular sites like Facebook
+
+[E] PhotoForge2 ($1.99)
+PhotoForge2 takes the basics of Photoshop Express and adds extras like noise reduction and a wealth of image effects.
+
+Features:
+versions: iOS
+* Advanced noise reduction
+* Simulated HDR
+* Clone Stamp tool for repairing blemishes
+
+Desktop Photo Editors
+
+[E] Google Picasa (free)
+Picasa finds the sweet spot between organizer and editor, offering plenty of ways to retouch, crop and enhance your images along with advanced sorting and organizing tools like facial recognition and geo-location. It's tough to beat Picasa's features and it's free.
+
+Features:
+versions: Windows, Mac, Linux
+* Timeline view for sorting photos
+* Slide-show creator
+* Web uploader
+
+[M] Adobe Photoshop Elements 10 ($99)
+Deservedly one of the top selling photo editors on the market, Elements organizes, edits and uploads your images in a tightly integrated package. Editing tools that far exceed what's available in free options like Picasa make Elements well worth the money for consumers looking to do more with their images.
+
+Features:
+Versions: Windows, Mac
+* Photo organizer
+* Advanced editing tools
+* Slide-show creator
+
+Desktop Video Editors
+
+[M] Adobe Premiere Elements ($99)
+A video editor for the rest of us, Premiere Elements doesn't have everything you'll find in Premiere proper, but it doesn't require a film degree to understand either. When comparing features and price, Premiere Elements stands alone in the market.
+
+Features:
+Versions: Windows, Mac
+* Timeline-based editor
+* Burn DVDs
+* Export to web-friendly movie formats
+
+[M] Pinnacle Studio HD ($49)
+Unlike Adobe's application, Pinnacle takes a more traditional approach to video editing tools. It has more features and it's cheaper, but it's much more difficult to use. Pinnacle is made by the same company that makes the professional-grade Avid, and it shows -- both in good and bad ways. It's more powerful than Premiere Elements, but also more complex.
+
+Features:
+Versions: Windows
+* Timeline Editor
+* HD editing
+* Burn to DVD
diff --git a/consumer-digest-outline.txt b/consumer-digest-outline.txt
new file mode 100644
index 0000000..c46e14f
--- /dev/null
+++ b/consumer-digest-outline.txt
@@ -0,0 +1,22 @@
+1800-1900 for analysis
+600-800 for the sidebar picks
+touch on apps in the main but then dedicate a sidebar
+
+main bar -- freeware and shareware, Picasa or even apps that act like filters, apps that can do what Instagram can do.
+
+high end - 150+
+midrange - 75-100
+economy - <30
+
+nov 21
+
+===Outline===
+
+So, intro, then a bit about what editing software is for -- making your images better sure, but also organizing them, sorting them and helping you find the images you want when you want them.
+
+Then a look at free editors -- iPhoto, Picasa and what ships on your phone and Android (i.e. Photoshop mobile). Then do a few grafs on what you get for a bit of money, Photoshop Elements, perhaps another. Then Lightroom, Photoshop.
+
+Then we move to video, talk about formats for a second, look again at the free options, this time iMovie and Windows Movie Maker. Talk about how there is no good tool for organizing video. Perhaps the future of video editing is editing on your phone.
+
+Eventually the software that ships on your phone might be enough, but today limited processing power and limited export options make editing full-fledged video on your phone an exercise in frustration, best left to the professionals who prove the exception to the rule that editing movies on your phone is next to impossible.
+
diff --git a/consumer-digest.txt b/consumer-digest.txt
new file mode 100644
index 0000000..dc71865
--- /dev/null
+++ b/consumer-digest.txt
@@ -0,0 +1,81 @@
+Today's digital camera is no longer a digital camera. The lens has left the building and now tags along in our pockets, our purses, our overnight bags. Thanks to cellphones, tablets and laptops, cameras are embedded into our daily lives, and that means we're no longer just capturing life's milestones, we're capturing everyday moments -- the sidewalk on the way to the gym, the sunset at our children's soccer practices, even the food on our plate is fair game for today's pervasive camera lens.
+
+The ubiquity of the cameraphone means there's a good chance you've got a camera in your pocket right now. And the tiny, but powerful cameras in today's mobile phones mean that most of us have more pictures and movies than ever before, which has led to an explosion of new photo and video editing software options for the consumer.
+
+A camera in every pocket means that more than ever we're awash in digital images. Some images we want to share with friends on Twitter, some videos we might post for family on Facebook, and still other photos end up on the wall, framed in all their 8x10 glory. It's not always easy to organize and edit such vast libraries of images, but thankfully a new crop of photo and video editors has come along to help with the task. The best so-called editing software on the market today doesn't just fix red eye or crop close-ups, it organizes as well, helping you sort, group and filter through your photos to find the ones you'd like to edit.
+
+Just as what we think of as a camera has expanded, the range of editing software has also grown. You no longer need to choose between expensive, complex software aimed at professionals and very basic editors that can't do everything you'd like. Today's photo and video editing software options cover a wide spectrum of editing needs, from easy-to-use mobile apps for those who just want to crop and adjust an image before posting it to Facebook, to desktop software that's capable of creating high quality images suitable for framing. There's even a whole new category of sophisticated, but reasonably priced (sometimes even free) mobile apps that can edit your images right on your phone.
+
+Long-popular high-end editors like Adobe Photoshop CS5 or Apple's Final Cut Pro are still the place to turn when you want to go beyond the snapshot or YouTube movie, but much of what's new and exciting about today's photo and video editing software is found where the camera increasingly is -- on the small screen.
+
+==The rise of the mobile editor==
+
+You take your photos and capture video of life's little moments on your phone, so why not edit them there as well?
+
+Make no mistake, you aren't going to put together professional quality videos on your phone, nor will you be able to print large images -- phones and even tablets lack the processing power necessary to run full-fledged photo and video editing software -- but that doesn't mean you can't put together a fun little movie on your phone and share it with friends on the web.
+
+Apple sells a simplified, but still very capable, version of its flagship iMovie video editor for iOS. If you're shooting video from your iPhone and want a quick way to make a few edits and post the results to YouTube or Facebook then iMovie fits the bill. Using iMovie you can combine still images and videos, add musical soundtracks, create fades and add effects like the Ken Burns-style photo panning that Apple made famous in the desktop version of iMovie.
+
+If your phone runs Google's Android operating system you won't find anything quite like iMovie just yet, but Google does offer a video editing suite for the tablet version of Android.
+
+When it comes to photos the mobile story is even brighter. Photo editors on the small screen are quickly growing to rival their desktop cousins. Adobe even makes a version of Photoshop designed exclusively for mobile platforms like Apple's iOS or Google's Android. Photoshop Express, as the mobile version is known, can handle all the basics -- cropping, rotating and fixing red eye. If you want to go further with your images you can even adjust the exposure and saturation, apply filters such as soft focus and add effects and borders. As with any good editor you can always undo and redo changes, including stepping all the way back to the original photo. Once you're done, Photoshop Express can upload the results to Facebook and other online sharing services.
+
+There are, according to Apple, over half a million apps available in the Apple App Store. The Android Market is similarly chock-full of applications, and choosing among the myriad options can be tough. Among the things to look for in a good photo editor on mobile devices is a noise reduction filter. Always a problem with digital images, noise -- the small amounts of grain and speckling that sometimes mar your photos -- is even more of a concern with phone cameras. A quality editor will offer a way to smooth out those flaws and give your images a more polished look.
+
+But part of the appeal of editing on the small screen isn't fixing blemishes and striving for photo realism. In fact it's quite the opposite. Call it the playful nature of the medium, or just a way to hide the shortcomings of cellphone cameras: apps like Instagram (iOS), Camera360 (Android) and BubbleGum (Android) have popularized the deliberately lo-fi photo.
+
+These apps can, with a few touches of the finger, transform your images into something that looks like it just climbed out of a shoebox you haven't opened since 1974. Colors can be faded and washed out, light leaks added and scratches and blemishes tailored to suit the mood. Once you've got a suitably aged photo you can even add borders to make it look like an old Polaroid or film negative.
+
+The lo-fi photo trend is even spilling over into desktop software. Analog for the Mac and InstantRetro for Windows can give you that same lo-fi look on the desktop.
+
+== Desktop Photo Options ==
+
+The cellphone camera has the advantage of always being with you, but sometimes it's just not enough. Your child's first birthday, the dream vacation to Paris, some photos deserve more than just your phone. There's no denying that a phone can't match the results of digital SLRs, video cameras and even some point-and-shoot models. Today most of us take two types of photos, the semi-disposable ones that end up on Facebook and more thoughtful images using a traditional digital camera to capture something special. And something special deserves better software.
+
+Fear not, the photo and video editing world isn't all Instagram and Facebook photos. In fact there's a new middle ground of editors that often offer tools that can both polish your images for printing and create fun special effects photos for sharing online.
+
+All of the image editing software available handles the same set of basic tasks, such as cropping your photos or removing red-eye. What sets the good options above the rest are the organizing and searching tools they offer. The best photo editing and organizing hybrids can wrangle your tangled images into well-organized, easily searchable collections that make it easy to then edit the images you love.
+
+Today's photo editing software can organize your images with intelligent tools like facial recognition or geographic data. Facial recognition can group your photos by person. It's never perfect on its own, but with a few corrections on your part, the software can become almost alarmingly intelligent at sorting your family photos. Some software can also plot your vacation photos on a map using the geographic data recorded by your camera. For those without geographically-aware cameras there's typically a map where you can drag-and-drop photos to add the data yourself.
+
+Fortunately for consumers, the advanced features of today's best photo editors do not necessarily mean more expensive software.
+
+If you spent over $1000 on your camera and consider yourself a serious hobbyist then you might want to invest in a high-end editor like Adobe Photoshop CS5. However, if your camera costs less than Photoshop ($699 retail) it's tough to justify the hefty price tag when there are cheaper and even free options that perform nearly as well.
+
+Photoshop does offer some advanced editing options you won't find in less expensive alternatives, like the ability to edit different layers of a photo, fine-tune color settings and correct for lens distortion. But if you'd like to have many of the advanced features of Photoshop without the price tag, the awkwardly named GIMP image editor can do nearly everything Photoshop does and doesn't cost a thing.
+
+Most consumers will be better off skipping the high-priced software and sticking with free editing tools like iPhoto (Mac) and Google's Picasa (Mac, Windows). Picasa makes an excellent choice for consumers just jumping into the photo editing world; it will automatically scan your computer's hard drive and pull in all your images without you needing to lift a finger. Like iPhoto, Picasa offers facial recognition photo sorting so you can find all the photos of your kids in a single view. You can also drop your photos on the built-in map so you can search for them by location. In addition to these advanced features, Picasa offers a timeline-based view for sorting your shots by event or date and the ability to organize them into "film rolls." The application manages to wrap all these features in a simple, intuitive interface that doesn't take long to master.
+
+Picasa is tightly integrated with Google's sharing service of the same name, giving you a gigabyte's worth of free online storage for sharing your images with friends and family. For those who want to use other online services, plugins can be used to upload photos to Facebook or the online shutterbug favorite, Flickr.com. Picasa is not just about online images, though; the application is more than capable of creating high-quality prints suitable for framing.
+
+Apple users can find similar features in iPhoto, which comes free with a new Mac or is available as part of the iLife suite for $50. iPhoto can do everything Picasa does and also works with Apple's new iCloud service to back up and sync your photos across a wide range of Apple devices.
+
+If Picasa or iPhoto leave you wanting, you can step up to several different editors in the under-$100 range, the most notable of which is Adobe's Photoshop Elements 10. Elements 10 is a stripped-down version of its sibling, Photoshop CS5, but it still offers plenty of more advanced editing tools like lens correction, curves controls for toning, black and white conversions and dozens of easy-to-use filters and effects. Elements also integrates with Facebook, using the service's face-tagging feature to help organize your library. Elements is available for $99.
+
+Mac users have another option in the form of Acorn, a photo editor that's aimed at the middle ground between iPhoto and Photoshop. At $49 it's cheaper than Elements and supports many of the same features like masking, filters and even layers within your photos. As an added bonus, Acorn can import and export layered Photoshop images.
+
+If you do decide to spend the money on something more advanced, whether it's Acorn, Elements, Photoshop CS5 or something else, be sure to try before you buy. All of the applications listed offer a free trial period so you can evaluate how well the software meets your needs before forking over your hard-earned cash.
+
+== Video Editors ==
+
+Thanks to consumer demand created by services like YouTube, video is a must-have for today's point-and-shoot cameras. Even some high-end phones can ably shoot short movies, sometimes in high definition. Unfortunately, while the cameras are capable, editing videos is still no simple task. Video editing software is far less intuitive and requires more computer horsepower and disk space than photo editors.
+
+While turning your video clips into movies worthy of the big screen is more difficult than editing photos, improvements in software have made the process somewhat less painful. The free options available can handle simple tasks like combining movie clips, adding music and effects and then burning the results to DVD or uploading them to the web.
+
+Much of the complexity, and need for a powerful computer with a large hard drive, comes with more advanced and more expensive software.
+
+One important thing to keep a close eye on when you're shopping for video editing software is the file formats it supports. Not every editor can work with every camera's file format, especially when it comes to today's high definition video cameras. Be sure to find out what format your camera saves movies in and ensure that the software of your choice can edit it before you buy.
+
+Most consumers will be well served by the free options that ship with today's computers. The two most readily available free video editors are Apple's iMovie and Microsoft's Windows Movie Maker. Windows Movie Maker and iMovie both offer timeline-based editing where you can drag-and-drop video clips and arrange them into a longer movie. From there it's easy to add a soundtrack, polish transitions and fades or add built-in effects and titles.
+
+Another option for the budget-minded consumer is the wealth of online editors. Video-sharing websites like YouTube build free basic video editing tools right into the site. Just upload your movies, splice the tracks together and add some music. You won't find any fancy features, but the price is hard to beat. The downside is that you have to upload what are often very large movie files over the web. If your internet connection speeds are not up to snuff, online video editors can quickly become an exercise in frustration.
+
+Any budding auteur looking to do some more serious video editing will, unfortunately, need to reach for their wallet. Video editing software can be very expensive with options like Adobe Premiere selling for $799. Apple's once similarly priced Final Cut Pro was recently revamped and now sells for $299.
+
+As with Photoshop, Adobe makes an Elements version of Premiere for $99. Premiere Elements is again a stripped-down version of its sibling, but when it comes to video editing, stripped down is good news for newcomers. Premiere Elements still isn't the easiest software in the world to master, but if you've outgrown Windows Movie Maker or iMovie and aren't ready to commit to the more expensive software, Premiere Elements makes a less expensive step up.
+
+Fortunately 30-day trial versions are available for Premiere, Premiere Elements and Final Cut Pro so you can take them for a test drive before you commit.
+
+
+
+
diff --git a/dentsu-invoice.pdf b/dentsu-invoice.pdf
new file mode 100644
index 0000000..953ef69
--- /dev/null
+++ b/dentsu-invoice.pdf
Binary files differ
diff --git a/invoice-wheels-hosting-2015.pdf b/invoice-wheels-hosting-2015.pdf
new file mode 100644
index 0000000..e4a1156
--- /dev/null
+++ b/invoice-wheels-hosting-2015.pdf
Binary files differ
diff --git a/invoice_duke_update.pdf b/invoice_duke_update.pdf
new file mode 100644
index 0000000..6a63ff5
--- /dev/null
+++ b/invoice_duke_update.pdf
Binary files differ
diff --git a/paradiseherbs-inv068600.pdf b/paradiseherbs-inv068600.pdf
new file mode 100644
index 0000000..5633f40
--- /dev/null
+++ b/paradiseherbs-inv068600.pdf
Binary files differ
diff --git a/paradiseherbs-inv070812.pdf b/paradiseherbs-inv070812.pdf
new file mode 100644
index 0000000..dd9359b
--- /dev/null
+++ b/paradiseherbs-inv070812.pdf
Binary files differ
diff --git a/pitfallsofrwd.txt b/pitfallsofrwd.txt
new file mode 100644
index 0000000..38bb097
--- /dev/null
+++ b/pitfallsofrwd.txt
@@ -0,0 +1,37 @@
+There's a lot more to responsive design than fluid layouts and some @media queries. In fact, responsive design means a whole new way of approaching web design. With that in mind, here are some common pitfalls to watch out for in your next responsive design.
+
+## Device Dimensions
+
+Don't get hung up on the most common screen sizes of today. They'll change tomorrow. And again the day after that. To create a more <a href="http://futurefriendlyweb.com/">future-friendly</a> site, don't focus on the breakpoints where your design changes, but on what's happening between those breakpoints.
+
+If you take a <a href="https://longhandpixels.net/blog/2014/04/why-mobile-first-responsive-design">mobile-first approach</a>, you can start <a href="http://www.slideshare.net/stephenhay/responsive-design-workflow">building</a> your interface for the smallest screen you're going to support. Then just enlarge your browser window until your design falls apart. Insert a breakpoint there and fix things in a @media query. Rinse and repeat.
+
+Make sure you're delivering a great user experience on the screen sizes <em>between</em> today's most popular devices and you'll be sure to delight tomorrow's users as well.
+
+## Speed
+
+Responsive design is about creating an excellent experience for mobile users. If your site isn't fast, it doesn't matter how well your content fits a small screen or how gracefully your images scale, because no one is going to stick around to see it.
+
+Make your site fast above all else. Create a performance budget and stick to it. Use Webpagetest.org to make sure your site is fast even over 3G. Pay special attention to the "Speed Index," which tells you how long it takes before your users see content on the page.
+
+## Images
+
+Nothing will speed up your mobile site like reducing the size of your images. The new HTML <a href="https://longhandpixels.net/blog/2014/02/complete-guide-picture-element">Picture element</a> will be supported by several browsers later this year, and in the meantime there's a polyfill available in the form of <a href="https://github.com/scottjehl/picturefill">PictureFill</a>.
+
+I suggest you go with PictureFill so you can transition to just the <code>&lt;picture&gt;</code> element down the road when it's more widely supported, but there are other, older options as well, such as Adaptive Images.
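+
+If you do go the PictureFill route, one nice touch is to load the polyfill only when the browser lacks native <code>&lt;picture&gt;</code> support. A minimal sketch of the idea, checking for the standard <code>HTMLPictureElement</code> interface (the script path here is illustrative, not part of PictureFill itself):
+
+if (!window.HTMLPictureElement) {
+  // No native <picture> support, so pull in the polyfill
+  var script = document.createElement('script');
+  script.src = '/js/picturefill.js';
+  document.head.appendChild(script);
+}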
+
+Some kind of responsive images solution is a must.
+
+## Don't Hide Content
+
+It's hard to get everything the user wants on small screens. It might be tempting to hide some functionality on mobile devices -- don't! Mobile users visiting your site expect to be able to do everything they need to do. Don't penalize them just because they're using a device with a small screen.
+
+This is part of what makes responsive design difficult -- getting everything the user needs on such a small screen is a challenge. Don't just punt and start hiding things; check out established UI patterns for some inspiration and do plenty of testing to see how users are interacting with your site.
+
+It might take a bit more work, but giving your users what they're looking for turns visitors into happy customers.
+
+## Don't reinvent the wheel
+
+Responsive design is hard enough, don't make it harder by reinventing the wheel. Start from a solid base. Grab a responsive template from <a target="_blank" href="http://www.templatemonster.com/wordpress-themes.php">here</a> and be sure to read up on best practices and helpful <a href="https://bradfrost.github.io/this-is-responsive/patterns.html">pattern libraries</a>.
+
+
diff --git a/postmarkapp/invoices/scott_gilbertson_invoice_01.odt b/postmarkapp/invoices/scott_gilbertson_invoice_01.odt
new file mode 100644
index 0000000..37640c2
--- /dev/null
+++ b/postmarkapp/invoices/scott_gilbertson_invoice_01.odt
Binary files differ
diff --git a/postmarkapp/invoices/scott_gilbertson_invoice_01.pdf b/postmarkapp/invoices/scott_gilbertson_invoice_01.pdf
new file mode 100644
index 0000000..39bc4b7
--- /dev/null
+++ b/postmarkapp/invoices/scott_gilbertson_invoice_01.pdf
Binary files differ
diff --git a/postmarkapp/invoices/scott_gilbertson_invoice_02.odt b/postmarkapp/invoices/scott_gilbertson_invoice_02.odt
new file mode 100644
index 0000000..b8d4215
--- /dev/null
+++ b/postmarkapp/invoices/scott_gilbertson_invoice_02.odt
Binary files differ
diff --git a/postmarkapp/invoices/scott_gilbertson_invoice_02.pdf b/postmarkapp/invoices/scott_gilbertson_invoice_02.pdf
new file mode 100644
index 0000000..7a856a5
--- /dev/null
+++ b/postmarkapp/invoices/scott_gilbertson_invoice_02.pdf
Binary files differ
diff --git a/postmarkapp/mailbrush.txt b/postmarkapp/mailbrush.txt
new file mode 100644
index 0000000..0e1c558
--- /dev/null
+++ b/postmarkapp/mailbrush.txt
@@ -0,0 +1,80 @@
+Many apps these days require users to install some bit of code, and even small snippets of code are much easier to read and understand when they're properly syntax-highlighted. Unfortunately, most existing syntax highlighting libraries are huge combinations of CSS and JavaScript and don't work at all in most mail clients.
+
+That's why we created MailBrush. MailBrush lets you add syntax highlighting to code snippets so they can be used in your email templates.
+
+Instead of plain snippets in your email templates that look like this:
+
+{
+ "key": "value",
+ "key2": "value 2"
+}
+
+Your snippets will now look like this:
+
+tk Image here
+
+MailBrush has syntax highlighting support for HTML, CSS and JavaScript snippets, as well as JSON, PHP, HTTP and Bash. It allows for full customization of colors and styles so you can make the highlighting fit with the rest of your mail templates. Once you run MailBrush on your code, the resulting HTML snippet will work in all major email clients:
+
+* Desktop
+ - Apple Mail 8, 9, 10
+ - Outlook 2003, 2007, 2010, 2011, 2013, 2016
+ - Windows 10 Mail
+* Mobile
+ - Gmail App (Android)
+ - iPhones
+ - iPads
+* Web
+ - AOL
+ - Gmail
+ - Outlook.com
+ - Yahoo
+
+Convinced? Okay, let's set up MailBrush and start generating some HTML. The first step is to install Node.js. The easiest way to get Node is to [grab the installer](https://nodejs.org/en/download/) from the Node.js site. That will work for both OS X and Windows. If you're on Linux, you can also use the Node download page, though you may be better off installing Node from your distro's package repository.
+
+Once you've got Node installed, installing MailBrush is simple. Just open up your terminal application and enter this command:
+
+npm install mailbrush --save
+
+That's it, you've got MailBrush installed. Now let's run it on some code. There's some basic Node boilerplate code we need to write to make everything work, so here's a snippet you can use to get started:
+
+const mailbrush = require('mailbrush');
+
+// Specify options
+const options = {
+ language: 'json',
+ cssOptions: {
+ backgroundColor: 'pink'
+ }
+};
+
+// The code snippet you want to beautify
+const snippet = `{
+ "key": "value",
+ "key2": "value 2"
+}`;
+
+mailbrush.convert(snippet, options, (html) => {
+ // Returns HTML with inlined CSS for email client compatibility
+ console.log(html);
+});
+
+Save that snippet in a file named app.js (or whatever you'd like to call it) and then you can run it with this command:
+
+node app.js
+
+In this case that would output this code:
+
+lxf@maya:~$ node app.js
+<table cellpadding="0" cellspacing="0" style="background: white;"><tr><td style="background: white; color: #000; font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace; font-size: 13px; padding: 10px 15px;"><pre style="-moz-tab-size: 2; -ms-hyphens: none; -o-tab-size: 2; -webkit-hyphens: none; color: #000; font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace; font-size: 13px; hyphens: none; line-height: 1.5; overflow: auto; tab-size: 2; text-align: left; white-space: pre; word-break: break-all; word-spacing: normal; word-wrap: normal;"><span style="color: #999; font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace; font-size: 13px;">{</span>
+ <span style="color: #905; font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace; font-size: 13px;">"key"</span><span style="color: #a67f59; font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace; font-size: 13px;">:</span> <span style="color: #690; font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace; font-size: 13px;">"value"</span><span style="color: #999; font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace; font-size: 13px;">,</span>
+ <span style="color: #905; font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace; font-size: 13px;">"key2"</span><span style="color: #a67f59; font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace; font-size: 13px;">:</span> <span style="color: #690; font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace; font-size: 13px;">"value 2"</span>
+<span style="color: #999; font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace; font-size: 13px;">}</span></pre></td></tr></table>
+
+That's all the HTML (and inline CSS) you need to put that snippet into your mail template. This code will be highlighted and will look good in all the supported mail clients listed above.
+
+Now, to actually generate markup for your code you can change the script above. The `options` section is where you can change colors, fonts and highlighting. See the [MailBrush Github page](https://github.com/wildbit/mailbrush) for a full list of options.
+
+The other two things you'll want to change are the `language` value, which can be `markup`, `css`, `javascript`, `json`, `php`, `http`, or `bash`, and the `snippet` variable, which should hold the actual code you want highlighted.
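+
+For example, here's what the same boilerplate might look like for a JavaScript snippet instead of JSON (the `cssOptions` value is just an illustration; see the Github page for everything that's configurable):
+
+const mailbrush = require('mailbrush');
+
+// Highlight JavaScript this time, with a different background color
+const options = {
+  language: 'javascript',
+  cssOptions: {
+    backgroundColor: '#f5f5f5'
+  }
+};
+
+// The snippet to beautify
+const snippet = `function greet(name) {
+  return 'Hello, ' + name;
+}`;
+
+mailbrush.convert(snippet, options, (html) => {
+  // Paste the resulting HTML straight into your email template
+  console.log(html);
+});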
+
+And that's all there is to it. Happy highlighting!
+
diff --git a/postmarkapp/monitoring-email-2.txt b/postmarkapp/monitoring-email-2.txt
new file mode 100644
index 0000000..d2894c1
--- /dev/null
+++ b/postmarkapp/monitoring-email-2.txt
@@ -0,0 +1,17 @@
+If you've ever spent any time in the dashboard of your email service provider (ESP), you've probably focused on the open or click rate of your email. Why aren't people opening your emails? It's tempting to immediately start rewriting them, using stronger headlines perhaps, or maybe toning down your copy to sound less aggressive.
+
+While those certainly may be problems, there is often a much simpler answer. A lot of issues with low open/click rates are really just delivery issues. Your customers can’t open/click on an email they never receive. If you’re seeing unusually low open/click rates for your industry, it’s very possible that it’s caused by delivery issues.
+
+Instead of spending time and resources A/B testing subject lines or revamping the design of your CTA buttons, you might be better off making sure your emails are actually being received.
+
+In the last post we covered some ways to monitor common issues that might stop your email from being delivered. Here, by delivered we really mean accepted by your customer's mail server (or service). That's half the battle. Once it's on your customer's server, though, there are quite a few ways to monitor how that server delivers the email to the customer. That is, monitoring the so-called last mile -- does it get delivered to the inbox, sent to spam or hung up by some filter the customer never gets to see?
+
+Unfortunately, deliverability is hard to measure directly because you can't log in to everyone's mailbox to see if an email was delivered. Your ESP probably doesn't know either. When it says an email was "delivered" it could well mean that it was delivered to the spam folder or, in the case of Gmail, the promotions folder. There are, however, some services that can help you get a better idea of where your email is going.
+
+The two big services in this space are [250ok](https://250ok.com/) and [ReturnPath](https://returnpath.com/). Using these services can give you quantifiable data about your inbox delivery rate. In other words, you don't have to just take your ESP's word that delivery is great; you can measure and monitor it yourself.
+
+Once you've got a mailbox delivery monitoring system set up, you can start looking at the data. The big question will likely be how much of your email is ending up in the spam folder and why. Once you know how much is ending up there -- the industry average is somewhere around 20 percent, depending on who you ask, though we think you should aim for much lower than that -- the question becomes: how do you fix it?
+
+Before you start hiring copywriters and running textual analysis on your keywords, consider that it might be something much lower level, like the quality of your ESP. Now that you have the tools to monitor delivery, it's well worth your time to start comparing services. Just like you'd A/B test headlines, it's worth the time and effort to A/B test ESPs.
+
+We've already showcased a few examples of how monitoring delivery rates can help you migrate to a better ESP. One is Childcare.co.uk, which connects parents with childcare providers and tutors. When Childcare.co.uk switched from Amazon SES to Postmark, they saw their open rates [go up 11 percent](https://postmarkapp.com/customers/childcare-co-uk). That's a huge win, but it's one you can only make if you're monitoring where your mail ends up.
diff --git a/postmarkapp/monitoring-email.txt b/postmarkapp/monitoring-email.txt
new file mode 100644
index 0000000..ca2287b
--- /dev/null
+++ b/postmarkapp/monitoring-email.txt
@@ -0,0 +1,31 @@
+We usually take email delivery for granted. You hit send and assume the recipient will get it. That's usually a safe assumption for personal email, but when you move beyond single emails, delivery gets decidedly less certain.
+
+Fortunately, you don't have to sit around just wondering whether your business' emails are being delivered. The internet is awash in helpful tools and techniques for monitoring email delivery. One of the keys, though, is to move beyond depending on your email service provider (ESP). ESPs can monitor delivery for you and, in many cases, even tell you what, if anything, is going wrong. However, ESPs can't monitor everything about your system and may not always be able to diagnose your delivery problems.
+
+To help you start looking, here are a few services and techniques you can use to track and measure delivery rates and diagnose potential problems.
+
+The first thing to keep in mind with email problems is Occam's razor -- your delivery problems often have a very simple cause and solution. The cause usually isn't what many people assume: that the largely inscrutable spam filtering processes of ISPs or Google are out to get them.
+
+The problem is often much simpler than that. And even if an ISP really is out to get you, make sure you've eliminated the other possibilities first. Always start with the simple.
+
+### Monitor DNS Records
+
+DNS records? Yes, DNS. It's so simple and so obvious we often forget how important it is. When you're working at a company using several services, possibly several dozen domains, redirects and other complicated configurations, it's possible, even likely, that a correct DNS configuration will get screwed up when someone makes an update. If you're not monitoring the DNS end of your delivery system, you're going to end up with emails failing to be delivered. It's a simple problem with a simple fix, but one you won't know about if you aren't actively looking.
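+
+Monitoring here doesn't have to mean anything elaborate. As a rough sketch of the idea, a few lines of Node.js using the built-in dns module can confirm your domain still publishes its SPF record (yourdomain.com is a placeholder, and SPF is just one of several records worth watching):
+
+const dns = require('dns');
+
+// Pull the TXT records for the sending domain and look for the SPF policy
+dns.resolveTxt('yourdomain.com', (err, records) => {
+  if (err) throw err;
+  const hasSpf = records.some((chunks) => chunks.join('').startsWith('v=spf1'));
+  console.log(hasSpf ? 'SPF record found' : 'WARNING: no SPF record published');
+});
+
+Run a check like that on a schedule and a silent DNS mixup becomes an alert instead of a week of undelivered email.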
+
+
+### Monitor Blacklists
+
+Starting with the very simple, make sure that your domains and IPs (or your ESP's IPs) haven't ended up on a spam blacklist. There are dozens of services that will monitor this for you. With hundreds of blacklists to keep track of, you'll probably want to pay for a decent service that regularly scans all the public lists for any mention of your IPs and domains.
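+
+If you're curious how those services work, most DNS-based blacklists (DNSBLs) can be queried with a plain DNS lookup: reverse your IP's octets, append the blacklist's zone and resolve it. A minimal sketch in Node (the IP below is a documentation example, and zen.spamhaus.org is just one of the many public lists):
+
+const dns = require('dns');
+
+const ip = '203.0.113.7';        // your sending IP (example address)
+const zone = 'zen.spamhaus.org'; // one public DNSBL among hundreds
+const query = ip.split('.').reverse().join('.') + '.' + zone;
+
+dns.resolve4(query, (err) => {
+  if (err && err.code === 'ENOTFOUND') {
+    console.log(ip + ' is not listed on ' + zone); // NXDOMAIN means not listed
+  } else if (err) {
+    throw err; // DNS trouble, not a listing
+  } else {
+    console.log('WARNING: ' + ip + ' is listed on ' + zone);
+  }
+});
+
+A paid monitoring service is doing essentially this across hundreds of lists, on a schedule, with alerting on top.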
+
+### Track Delivery Time with MailHandler
+
+Once you know your DNS is set up correctly and you're not on any blacklists, the next place to look is delivery speed. How long does it take from the time you hit send to the time that email lands in your customer's mailbox? You're probably familiar with this problem from the other side -- nothing is more frustrating than a password reset form that claims it just emailed you when you know there's nothing in your inbox. That's a problem with delivery speed.
+
+To help you track and monitor delivery speed, Postmark built [MailHandler](https://github.com/wildbit/MailHandler), a set of tools you can use to send and retrieve emails and, at the same time, get details on how long those operations took. MailHandler is available as a Ruby gem and can send email through the Postmark API or via standard SMTP. More importantly, it can also check email delivery over IMAP.
+
+Using MailHandler you can begin to find and fix any delays in your delivery system.
+
+### Success! Delivered. Or Not.
+
+Once you've got a good system set up for monitoring your sending system and you know your emails are being sent in a timely manner, it's time to move on to the last mile, so to speak -- the customer's inbox. When we say an email is "delivered," what we really mean is that it was accepted by your customer's mail server (or service). Whether or not it was then placed in your customer's inbox is an entirely different problem, one we'll tackle in the next installment.
+
diff --git a/postmarkapp/separating-transactional-bulk.txt b/postmarkapp/separating-transactional-bulk.txt
new file mode 100644
index 0000000..9d275e2
--- /dev/null
+++ b/postmarkapp/separating-transactional-bulk.txt
@@ -0,0 +1,22 @@
+You've probably been told to keep your transactional emails separate from your bulk emails, but what does that mean and why should you listen to this advice?
+
+Transactional emails are emails that your customer triggers -- a welcome email after a customer signs up, an alert email a customer has set up in your app, an invoice email and a comment notification all qualify as transactional email. There are others as well; the exact nature of the transactional email you send will vary according to the type of app you run.
+
+A bulk email is an email that is sent to more than one person that contains the exact same content and is not triggered by an event. A bulk email would be anything a customer did not specifically trigger, for example a weekly newsletter, a marketing email or an announcement about your site's recent updates. Currently, Postmark does not send bulk emails like these.
+
+To understand why Postmark doesn't send bulk emails, let's go back to that advice you may have heard -- why keep these two types of email separate? In a nutshell: keeping your important transactional emails -- the ones your customers are expecting -- separate makes them much more likely to land in the customer's inbox.
+
+The reputation of the IP address, domain and email address all play a role in getting your email into your customer's inbox rather than their spam folder, or, in the case of Google, one of the inbox sub-categories like "Promotional". Email providers know that customers want and expect transactional emails, but it's not always easy for them to tell what's transactional and what's better classified as promotional.
+
+Using separate domains or email addresses for each kind of mail makes it much more likely that your important transactional email will get to your customers.
+
+If you use the same servers and email address to send both bulk and transactional email, filtering systems like Gmail's will likely classify it all as bulk email. That's why Gmail officially suggests that "if you send both promotional mail and transactional mail relating to your organization, we recommend separating mail by purpose as much as possible." Given the staggering number of Gmail users out there, adhering to this advice makes good business sense.
+
+Lost or even delayed transactional emails result in support requests, like the familiar "I tried to reset my password, but I never got a response from you," and more work for your organization. Then there's the potential loss of customer trust. Searching through a spam folder for a legitimate email is never fun; don't send your customers into their spam folders when there's an easy way to avoid it -- keep them separated.
+
+So how do you separate your mail? And what does "separate" mean? It means making sure your bulk email comes from one source and your transactional email from another, separating IP addresses, domains and possibly email addresses as well. These days reputation is increasingly shifting to domains because IPs are disposable.
+
+For example, you might send your bulk email from promo.yourdomain.com and your transactional email from trx.yourdomain.com. You could even use entirely different domains, for instance, mypromodomain.com and mydomain.com. Or you might be able to get by with just using promo@yourdomain.com for bulk and welcome@yourdomain.com for transactional.
+
+For us this isn't just a theory. Postmark sees the best inbox rates and delivery times in the industry because we only send transactional emails. That means our IPs have great reputations. We police them better than other providers, who tend to be more lax about it since they send bulk mail as well and rely on dedicated IPs when customers run into delivery issues on shared IPs. We know all this because a significant number of our customers are "refugees" from those other providers. They ran into delivery problems sending both bulk and transactional email from a single source and came to us to make sure their transactional emails get where they need to be -- in their customers' inboxes.
+
diff --git a/postmarkapp/separating-transactional-bulkv2.txt b/postmarkapp/separating-transactional-bulkv2.txt
new file mode 100644
index 0000000..9d275e2
--- /dev/null
+++ b/postmarkapp/separating-transactional-bulkv2.txt
@@ -0,0 +1,22 @@
+You've probably been told to keep your transactional emails separate from your bulk emails, but what does that mean and why should you listen to this advice?
+
+Transactional emails are emails that your customer triggers -- a welcome email after a customer signs up, an alert email a customer has set up in your app, an invoice email and a comment notification all qualify as transactional email. There are others as well; the exact nature of the transactional email you send will vary according to the type of app you run.
+
+A bulk email is an email that is sent to more than one person that contains the exact same content and is not triggered by an event. A bulk email would be anything a customer did not specifically trigger, for example a weekly newsletter, a marketing email or an announcement about your site's recent updates. Currently, Postmark does not send bulk emails like these.
+
+To understand why Postmark doesn't send bulk emails, let's go back to that advice you may have heard -- why keep these two types of email separate? In a nutshell: keeping your important transactional emails -- the ones your customers are expecting -- separate makes them much more likely to land in the customer's inbox.
+
+The reputation of the IP address, domain and email address all play a role in getting your email into your customer's inbox rather than their spam folder, or, in the case of Google, one of the inbox sub-categories like "Promotional". Email providers know that customers want and expect transactional emails, but it's not always easy for them to tell what's transactional and what's better classified as promotional.
+
+Using separate domains or email addresses for each kind of mail makes it much more likely that your important transactional email will get to your customers.
+
+If you use the same servers and email address to send both bulk and transactional email, filtering systems like Gmail's will likely classify it all as bulk email. That's why Gmail officially suggests that "if you send both promotional mail and transactional mail relating to your organization, we recommend separating mail by purpose as much as possible." Given the staggering number of Gmail users out there, adhering to this advice makes good business sense.
+
+Lost or even delayed transactional emails result in support requests, like the familiar "I tried to reset my password, but I never got a response from you," and more work for your organization. Then there's the potential loss of customer trust. Searching through a spam folder for a legitimate email is never fun; don't send your customers into their spam folders when there's an easy way to avoid it -- keep them separated.
+
+So how do you separate your mail? And what does "separate" mean? It means making sure your bulk email comes from one source and your transactional email from another, separating IP addresses, domains and possibly email addresses as well. These days reputation is increasingly shifting to domains because IPs are disposable.
+
+For example, you might send your bulk email from promo.yourdomain.com and your transactional email from trx.yourdomain.com. You could even use entirely different domains, for instance, mypromodomain.com and mydomain.com. Or you might be able to get by with just using promo@yourdomain.com for bulk and welcome@yourdomain.com for transactional.
+
+For us this isn't just a theory. Postmark sees the best inbox rates and delivery times in the industry because we only send transactional emails. That means our IPs have great reputations. We police them better than other providers, who tend to be more lax about it since they send bulk mail as well and rely on dedicated IPs when customers run into delivery issues on shared IPs. We know all this because a significant number of our customers are "refugees" from those other providers. They ran into delivery problems sending both bulk and transactional email from a single source and came to us to make sure their transactional emails get where they need to be -- in their customers' inboxes.
+
diff --git a/postmarkapp/tools-techniques-delivery.txt b/postmarkapp/tools-techniques-delivery.txt
new file mode 100644
index 0000000..174c20c
--- /dev/null
+++ b/postmarkapp/tools-techniques-delivery.txt
@@ -0,0 +1,36 @@
+
+Tools and techniques for monitoring delivery
+
+
+
+
+*Using Detailed Logs for Troubleshooting* The big idea here is to create a version of our Troubleshooting guide that's more Postmark-specific. In general, our guides are service-agnostic, but there are a lot of powerful tools in Postmark that can make troubleshooting easier. We'd want to draw attention to the guide, but this blog post should focus specifically on where to find the solutions to these problems within Postmark.
+
+*What are the standard problems when it comes to delivery?* Understanding these helps you understand the layers of troubleshooting -- i.e. don't start by researching the most unlikely problem.
+- Filtering (i.e. technically "delivered," but to spam, a promotions tab, etc.)
+- Bounces (i.e. typos or mailboxes that no longer exist)
+- Over-aggressive systems-level filters (i.e. overaggressive corporate firewalls)
+
+*Postmark Logs*
+- Events: Processed, Delivered, Bounced, Opened, Clicked
+- Content: reviewing the full content of an email often exposes content that may have triggered spam filters
+
+*Troubleshooting steps*
+- What can you look into from your end?
+  - Was the email sent? (Sometimes people expect an email when one shouldn't have been sent or hasn't been sent.)
+  - How long has it been? (Sometimes, delays happen. Either with ESP issues, receiving mail server issues, or something else entirely.)
+  - Was it "delivered"? (Really, "accepted by the mail server" is a better phrase. Delivery doesn't imply that it was passed to the inbox.)
+  - Did it go to spam or a promotions tab?
+  - Was there anything in the content that seems likely to trigger a spam filter? (Sometimes there are incredibly unlikely triggers like phone numbers or addresses, but that's less likely, so we investigate it later.) Postmark's spam check tool is often handy here.
+  - Were there attachments? Files like docx and other attack vectors are often blocked by default from unknown sources.
+  - Verify authentication (SPF, DKIM, DMARC).
+  - If all else fails, check Google Postmaster Tools.
+- What can the recipient look into from their end if you don't see any obvious problems?
+  - Have them check their spam folder and promotions tab if relevant.
+  - Have the recipient reach out to their IT department with the delivery details to have them investigate.
+  - If relevant, have the recipient ask their IT department to whitelist the address, domain, or IPs depending on how their firewall works.
diff --git a/sifterapp/Diagram.jpg b/sifterapp/Diagram.jpg
new file mode 100644
index 0000000..efb8f1e
--- /dev/null
+++ b/sifterapp/Diagram.jpg
Binary files differ
diff --git a/sifterapp/choosing-bug-tracker.txt b/sifterapp/choosing-bug-tracker.txt
new file mode 100644
index 0000000..7837b65
--- /dev/null
+++ b/sifterapp/choosing-bug-tracker.txt
@@ -0,0 +1,16 @@
+
+> I just had another idea inspired by this. "Why is it so difficult for your team to settle on a bug tracker?"
+>
+> The premise is the Goldilocks story. This bug tracker is too simple. This bug tracker is too complicated. This bug tracker is just right.
+>
+> Effectively, with bug and issue trackers, there’s a spectrum. On one end, you have todo lists/spreadsheets. Then you have things like GitHub Issues/Sifter/Trello somewhere in the middle, and then Jira/Fogbugz/etc. all of the way to the other end.
+>
+> Each of these tools makes tradeoffs. With Sifter, we keep it simple so small teams can actually have bug tracking because for them, they’d just as soon have no bug tracking as use Jira. It’s way too complicated for them.
+>
+> Other teams doing really complex things find Sifter laughably simple. Their complexity needs justify the investment in training and excluding non-technical users.
+>
+> This is all complicated by the fact that teams inevitably want ease-of-use and powerful customization, which, while not inherently exclusive of each other, are generally at odds with each other.
+>
+> To make matters worse, on a larger team, some portion of the team values simplicity more while other parts of the team value advanced configuration and control. Neither group is wrong, but it is incredibly difficult to find a tool with the balance that’s right for a team.
+>
+> I could probably expand on this more, but that should be enough to see if you think it’s a good topic or not.
diff --git a/sifterapp/complete/browserstack-images.zip b/sifterapp/complete/browserstack-images.zip
new file mode 100644
index 0000000..5aa9646
--- /dev/null
+++ b/sifterapp/complete/browserstack-images.zip
Binary files differ
diff --git a/sifterapp/complete/browserstack-images/browserstack01.jpg b/sifterapp/complete/browserstack-images/browserstack01.jpg
new file mode 100644
index 0000000..85da575
--- /dev/null
+++ b/sifterapp/complete/browserstack-images/browserstack01.jpg
Binary files differ
diff --git a/sifterapp/complete/browserstack-images/browserstack02.jpg b/sifterapp/complete/browserstack-images/browserstack02.jpg
new file mode 100644
index 0000000..d53c254
--- /dev/null
+++ b/sifterapp/complete/browserstack-images/browserstack02.jpg
Binary files differ
diff --git a/sifterapp/complete/browserstack-images/browserstack03.jpg b/sifterapp/complete/browserstack-images/browserstack03.jpg
new file mode 100644
index 0000000..ccdaf46
--- /dev/null
+++ b/sifterapp/complete/browserstack-images/browserstack03.jpg
Binary files differ
diff --git a/sifterapp/complete/browserstack-images/browserstack04.jpg b/sifterapp/complete/browserstack-images/browserstack04.jpg
new file mode 100644
index 0000000..d32ffc9
--- /dev/null
+++ b/sifterapp/complete/browserstack-images/browserstack04.jpg
Binary files differ
diff --git a/sifterapp/complete/browserstack-images/browserstack05.jpg b/sifterapp/complete/browserstack-images/browserstack05.jpg
new file mode 100644
index 0000000..ebe3b2d
--- /dev/null
+++ b/sifterapp/complete/browserstack-images/browserstack05.jpg
Binary files differ
diff --git a/sifterapp/complete/browserstack-images/browserstack06.jpg b/sifterapp/complete/browserstack-images/browserstack06.jpg
new file mode 100644
index 0000000..c11505f
--- /dev/null
+++ b/sifterapp/complete/browserstack-images/browserstack06.jpg
Binary files differ
diff --git a/sifterapp/complete/browserstack.txt b/sifterapp/complete/browserstack.txt
new file mode 100644
index 0000000..e23e224
--- /dev/null
+++ b/sifterapp/complete/browserstack.txt
@@ -0,0 +1,90 @@
+Testing websites across today's myriad browsers and devices can be overwhelming.
+
+There are roughly [18,700 unique Android devices](http://opensignal.com/reports/2014/android-fragmentation/) on the market. There are somewhere around 14 browsers, powered by several different rendering engines, available for those devices. Then there's iOS, Windows Mobile and scores of other platforms to consider.
+
+Trying to test everything is impossible at this point.
+
+The good news is that you don't need to worry about every device on the web. Don't make testing harder than it needs to be.
+
+Don't test more, test smarter.
+
+Before you dive into testing, consult your analytics and narrow the field based on what you know about your traffic. Find out which devices and browsers the majority of your visitors are using. Test on the platform/browsers that make up the top 80 percent of your visitors. Once you're confident you've got a great experience for the majority of your visitors you can move on to the more obscure cases.
+
+If you're launching something new you'll need to do some research about which devices your target audience favors. That way you'll know, for example, that your target market tends to favor iOS devices and you'll want to spend some extra time testing on various iPhone/iPad configurations.
+
+Once you know what your visitors are actually using, you can start testing smarter.
+
+Let's say you know that the majority of your visitors come from 8 different device/browser configurations. You also know from studying the trends in your analytics that you're making headway in some new markets overseas and traffic from Opera Mini is on the rise, so you want to pay special attention to Opera Mini.
+
+Armed with information like that, you're ready to start testing. Old-school developers might fire up the virtual machines at this point. There's nothing wrong with that, but these days there are better tools for testing your site.
+
+### Introducing BrowserStack
+
+[BrowserStack](http://www.browserstack.com/) is a browser-based virtualization service that puts nearly every operating system and browser combination under the sun at your fingertips.
+
+BrowserStack also offers mobile emulators for testing almost every version of iOS, Android and Opera Mobile (sadly, at the time of writing, there are no Windows Mobile emulators available on BrowserStack).
+
+You can also choose the screen resolution you'd like to test at, which makes it possible to test resolution-based CSS media queries if you're using them. BrowserStack also has a dedicated [Responsive Design testing service](http://www.browserstack.com/responsive).
+
+BrowserStack is not just a screenshot service. It launches a fully interactive virtual machine in your browser window. That means you can interact with your site just like you would if you were using a "real" virtual machine or had the actual device in your hand. It also means you can use the virtual browser's developer tools to debug any problems you encounter.
+
+### Getting Started With BrowserStack
+
+Getting started with BrowserStack is simple: just sign up for the service and then log in. A free trial account will get you 30 minutes of virtual machine time. Pick an OS/browser combination you want to test, enter your URL and start up your virtual machine.
+
+<figure>
+ <img src="browserstack01.jpg" alt="Configuring BrowserStack Virtual Machine">
+ <figcaption><b>1</b> Configuring your virtual machine.</figcaption>
+</figure>
+
+BrowserStack will then launch a virtual machine in your browser window. Now you have a real virtual machine running, in this case IE 10 on Windows 7.
+
+<figure>
+ <img src="browserstack02.jpg" alt="BrowserStack Virtual Machine">
+ <figcaption><b>2</b> Testing sifterapp.com using IE 10 on Windows 7.</figcaption>
+</figure>
+
+Quick tip: to grab a screenshot of a bug to share with your developers, just click the little gear icon, which will reveal a camera icon.
+
+<figure>
+ <img src="browserstack03.jpg" alt="Taking a screenshot in BrowserStack">
+ <figcaption><b>3</b> Taking a screenshot in BrowserStack.</figcaption>
+</figure>
+
+
+Click the camera and BrowserStack will generate a screenshot that you can annotate and share with your team. You could, for example, download it and add it to the relevant issue in your bug tracker.
+
+<figure>
+  <img src="browserstack04.jpg" alt="Screenshot annotations in BrowserStack">
+  <figcaption><b>4</b> Annotating screenshots in BrowserStack.</figcaption>
+</figure>
+
+### Local Testing with BrowserStack
+
+If you're building a brand new site or app, chances are you'll want to do your testing before everything is public. If you have a staging server you could point BrowserStack to that URL, but there's another very handy option -- just point BrowserStack to local files on your computer.
+
+To do this, BrowserStack needs to install a browser plugin, but once that's ready to go, testing a local site is no more difficult than testing any other URL.
+
+Start by clicking the "Start local testing" button in the sidebar at the left side of the screen. This will present you with a choice to use either a local server or a local folder.
+
+<figure>
+ <img src="browserstack05.jpg" alt="Setting up local testing in BrowserStack">
+ <figcaption><b>5</b> Setting up local testing in BrowserStack.</figcaption>
+</figure>
+
+If you've got a dynamic app, pick the local server option and point BrowserStack to your local URL. Alternatively, just point BrowserStack to a folder of files and it will serve them up for you.
+
+<figure>
+ <img src="browserstack06.jpg" alt="Testing a local folder in BrowserStack">
+ <figcaption><b>6</b> Testing a local folder of files with BrowserStack.</figcaption>
+</figure>
+
+That's it! Now you can edit files locally, make your changes and refresh BrowserStack's virtual machine to test across platforms without ever making your site public.
+
+### Beyond the Basics
+
+Once you start using BrowserStack you'll wonder how you ever did without it.
+
+There's also a good bit more to BrowserStack than can be covered in a short review like this, including [automated functional testing](https://www.browserstack.com/automate), a responsive design testing service that can show your site on multiple devices and browsers, an [automated screenshot service](http://www.browserstack.com/screenshots) and more. You can even [integrate it with Visual Studio](http://www.hanselman.com/blog/CrossBrowserDebuggingIntegratedIntoVisualStudioWithBrowserStack.aspx).
+
+BrowserStack offers a free trial with 30 minutes of virtual machine time, which you can use for testing. If you decide it's right for you there are a variety of reasonably priced plans starting at $49/month.
diff --git a/sifterapp/complete/bugs-issues-notes.txt b/sifterapp/complete/bugs-issues-notes.txt
new file mode 100644
index 0000000..120f2f4
--- /dev/null
+++ b/sifterapp/complete/bugs-issues-notes.txt
@@ -0,0 +1,18 @@
+What a task "is"
+Thus,
+
+Best-guess order of magnitude: 1 minute. 1 hour. 1 day. 1 week. 1 month. (Anything longer than a month, and the task hasn't been sufficiently broken down into smaller pieces.)
+
+
+
+> More often than not, the goal of classifying things along those lines isn't about *what* the individual issue is, but whether it's in or out of scope. Whether something is in or out of scope is a worthwhile facet for classification, but using bug vs. feature as a proxy for that confuses the issue. More often than not, in or out of scope is best handled through discussion, not simply reclassifying the issue. When that happens,
+ Think "separate but equal".
+>
diff --git a/sifterapp/complete/bugs-issues.txt b/sifterapp/complete/bugs-issues.txt
new file mode 100644
index 0000000..4b31cce
--- /dev/null
+++ b/sifterapp/complete/bugs-issues.txt
@@ -0,0 +1,52 @@
+Love it. The only thing that I think could be worked in is mentioning the "separate is inherently unequal" bit. The article does a great job of explaining how the dichotomy isn't necessarily helpful, but I think it could go even further, illustrating that it's even *potentially* harmful to create it.
+
+
+
+One of the hardest things you ever have to do is figure out what to do next. There's shelf after shelf of books in the self help section dedicated to helping you discover the answer to that question.
+
+We can't help you with the abstract version of that question, but it isn't just the abstract question that's hard. All of the software development teams we've talked to struggle with a version of this same question.
+
+Knowing which work to do next is the most difficult problem out there.
+
+Every development team's wish list is incredibly long. Every time you sit down at your screen there's a dizzying array of choices. This is part of what makes software development exciting, but it can also be overwhelming. What should you do next? Fix bugs? Ship new features? Improve security or performance? There's no perfect science for calculating which of these things is most important.
+
+One thing we can tell you won't help -- keeping track of everything in separate systems only makes the decision process more challenging.
+
+This can be counter-intuitive at first. For example, you're probably used to tools and systems that have some things you call "bugs," quantifiable shortcomings in software; some things you call "issues," potential problems that aren't directly related to code, like pricing decisions; and other things you call "features" or "enhancements," ideas that haven't been implemented yet.
+
+Organizing tasks by category like this offers a comforting way to break things down. It makes us feel like we've done something, even when we haven't. All we've really done is rename the problem. We still don't have any better idea of what to do next.
+
+If you've worked with such a system for long, you've probably seen it fall apart. The boundaries between these things -- bugs, issues, new features -- are actually quite vague, and trying to make your tasks fit into an arbitrary category rarely helps you figure out what to work on next. Worse, it forces you to classify everything into one of those categories even when the actual problem might be too complex to fit just one.
+
+We've found that it can be even worse than just "not helpful": divide your work into categories like this and some tasks will automatically become second-class citizens. There is no such thing as separate but equal; separate is inherently unequal. In this case, it's the bugs and issues that take a backseat to new features.
+
+It's time to take a different approach.
+
+We've found that the best technique for deciding what you should do next is not to classify what you need to do. That's a subtle form of procrastination.
+
+To actually decide between possibilities you need to figure out the priority of the task. To determine priority you need to look at all your possible next tasks and balance two factors: the positive impact on your customers and the effort it will take to complete that task. Priority isn't much help without considering both impact and effort.
+
+Establish a priority hierarchy for your tasks and you'll never wonder what you should do next again. You'll know.
+
+Sometimes finding that balance between impact on the customer and effort expended is easy. A bug that affects 20 percent of customers and takes 5 minutes to fix is a no-brainer -- you fix it, test and ship.
+
+Unfortunately, prioritizing your tasks will rarely be this black and white.
+
+What to do with a bug that only affects one customer (that you know of), but would take a full day to fix, is less immediately obvious.
+
+What if that customer is your biggest customer? What if that customer is a small, but very influential one? What if your next big feature could exacerbate the impact of that bug? What if your next big feature will eliminate the bug?
+
+There's no formula to definitively know which task will be the best use of your time. If the bugs are minor, but the next feature could help your customers be more successful by an order of magnitude, your customers might be more than willing to look the other way on a few bugs.
+
+There's only really one thing that's more or less the same with every decision: What a task is (bug, issue, feature) is much less important than the priority you assign it.
+
+The classifications that matter are "what percentage of customers does this affect?" and "how long will it take to do this task?"
+
+That does not mean you should dump all your categories. For instance, if you group tasks based on modules like "Login", "Filtering" or "Search", that grouping helps you find related tasks when you sit down to work on a given area. In that case the categories become helpful because they help your team focus.
+
+Some categories are useful, but whether something is a bug, issue or feature should have almost no bearing on whether the task in question is truly important.
+
+A bug might be classified as a bug, but it also means that a given feature is incomplete because it's not working correctly. It's a gray area where "bug" vs. "feature" doesn't help teams make decisions; it only lets us feel good about organizing issues. It's the path of least resistance and one that many teams choose, but it doesn't really help get work done.
+
+If you want to get real work done, focus your efforts on determining priority. Decide where the biggest wins are for your team by looking at the impact on your customers versus the time required to complete the task. There's no formula here, but prioritize your tasks rather than just organizing them and you'll never need to wonder what you should do next.
diff --git a/sifterapp/complete/forcing-responsibility-software.txt b/sifterapp/complete/forcing-responsibility-software.txt
new file mode 100644
index 0000000..611d602
--- /dev/null
+++ b/sifterapp/complete/forcing-responsibility-software.txt
@@ -0,0 +1,29 @@
+Stop Forcing Responsibility onto Software
+
+There's a common belief among programmers that automating tedious tasks is exactly the reason software was invented. The idea is that software can save us from all this drudgery by making simple decisions for us and removing the tedious things from our lives.
+
+This is often true. Think of all the automation in your life, from your thermostat to your automobile's service engine light, software *does* remove a tremendous number of tedious tasks from our lives.
+
+Perhaps the best example of this is the auto-save feature that runs in the background of most applications these days. Auto-save frees you from the tedious task of saving your document. You no longer need to pound CTRL-S every minute or two like an animal. Instead, your lovely TPS reports are automatically saved as you change them and you don't have to worry about it.
+
+Unfortunately, when you have a hammer as powerful as software, everything starts to look like a nail. Which is to say that just because a task is tedious does not mean it's something that can be offloaded to software.
+
+It's just as important to think about whether the task is something that software *can* be good at. For example, while auto-saving your TPS reports is definitely something software can be good at, actually writing the reports is probably something humans are better at.
+
+The temptation to automate away the difficult, sometimes tedious, tasks in our lives is particularly strong when it comes to prioritizing issues in your issue tracking software.
+
+Software is good at tracking issues, but sadly, most of the time software turns out to be terrible at prioritizing them. To understand why, consider the varying factors that go into prioritizing issues.
+
+At a minimum, prioritizing means weighing such disparate factors as resource availability, potential blockers and dependencies, customer impact, level of effort, date and calendar limitations, and more. We often think that by plugging all of this information into software we can automatically determine a priority, but that's just not the case.
+
+Plugging all that information into the software helps collate it all in one place where it's easy to see, but when it comes to actually making decisions about which issue to tackle next, a human is far more likely to make good decisions. Software helps you make more informed choices, but good decisions still require human understanding.
+
+When all those often conflicting factors surrounding prioritization are thrown together as a series of data points, which software is supposed to then parse and understand, what you'll most likely get back from your software is exactly what you've entered -- conflicts.
+
+It might not be the most exciting task in your day, but prioritizing issues is a management task, that is, it requires your management. You need to use intuition and understanding to make decisions based on what's most important *in this case* and assign a single simple priority accordingly.
+
+Consider two open issues you need to make a decision on. The first impacts ten customers. The second only impacts one customer directly, but indirectly impacts a feature that could help 1,000 customers. So to what degree is the second issue actually impacting customers? And which should you focus on? Algorithms designed to prioritize customer impact will pick the first, but is that really the right choice?
+
+These questions aren't black and white, and it's difficult for a software system to accurately classify/quantify them and take every possible variable into account.
+
+Perhaps in the AI-driven quantum computing future this will be something well-suited for software. In the meantime, though, tedious or not, human beings still make the best decisions about which issues should be a priority and what your team should tackle next.
diff --git a/sifterapp/complete/how-to-respond-to-bug-reports.txt b/sifterapp/complete/how-to-respond-to-bug-reports.txt
new file mode 100644
index 0000000..617e1ca
--- /dev/null
+++ b/sifterapp/complete/how-to-respond-to-bug-reports.txt
@@ -0,0 +1,29 @@
+If you look at the bug reports on many big open source software projects, it's almost as if the developers have a bug report Magic 8 Ball. Reports come in, the developers give the Magic 8 Ball a shake, and out comes the response. You'll see the same four or five terse answers over and over again: "working as designed", "won't fix", "need more info", "can't reproduce" and so on.
+
+At the same time, large software projects often have very detailed guidelines on how to *report* bugs. Developers know that the average user doesn't think like a developer, so they create guidelines, checklists and other tips designed to make their own lives easier.
+
+The one thing you almost never see is a set of guidelines for *responding* to bug reports. That side of the equation gets almost no attention at all; I've never seen an open source project with an explicit guide for developers on how to respond to bugs.
+
+If such a guide existed, projects would not be littered with Magic 8 Ball-style messages that not only discourage outsiders from collaborating, but showcase how out of touch the developers are with the users of their software.
+
+It's time to throw away the Magic 8 Ball of bug reports and get serious about improving your software.
+
+## Simple Rules for Responding to Bug Reports
+
+1. **Don't take bug reports personally**. The reporters are trying to help. They may be frustrated, they may even be rude, but remember they're upset and frustrated with a bug, not with you. They may not phrase it that way; they may even think they're upset with you. Part of your job as a developer is to negotiate that social gap between angry users and yourself. The first step is to stop taking bug reports personally. You are not your software.
+
+2. **Be specific in your responses**. Magic 8 Ball responses like "can't reproduce" and "need more info" aren't just rude, they're failures to communicate. Both may be true in many cases, but neither is helpful for the bug reporter. The bug reporter may not be providing helpful info, but by dropping in these one-liners you're not being helpful either.
+
+In the case of "need more info" take a few seconds to ask for what you actually need. Need to know the OS or browser version? Then ask for that. If you "can't reproduce" tell the user in detail what you did, what platform or browser you were using and any other specifics that might help them see what's different in their case. Be specific, ask "What browser were you using?" or "Can you send a screenshot of that page or copy and paste the URL so that I can see what you're seeing?" instead of "Need more info".
+
+3. **Be collaborative**. This is related to point one, but remember that the reporter is trying to help and the best way to let them help you is to, well, let them help you. Let them collaborate and be part of the process. If your project is open source remember that some of your best contributors will start off as prolific bug reporters. The more you make them part of the process the more helpful they'll become over time.
+
+4. **Avoid technical jargon**. For example, instead of "URL" say "the address from the web browser address bar". This can be tricky sometimes since what you think of as everyday speech may read like technical jargon to your users. When in doubt err on the side of simple, direct language.
+
+Along the same lines, don't assume too much technical knowledge on the part of bug reporters. If you're going to need a log file be sure to tell the user exactly how and where to find the information you need. Don't just say, "what's the output of `tail -f /var/log/syslog`?", tell them where their terminal application is, how to open it and then to cut and paste the command and results you want. A little bit of hand holding goes a long way.
+
+5. **Be patient**. Don't dismiss reports just because they will involve more effort and research. It's often said that there is no such thing as writing, just re-writing. The same is true of software development, fixing bugs *is* software development. The time and research it takes to adequately respond to bug reports isn't taking you away from "real" development, it is the real development.
+
+6. **Help them help you**. Think of this as the one rule to rule the previous five. Bug reports are like free software development training. Just because you're the developer doesn't mean your users don't have things to teach you, provided you're open to learning. Take everything as a teaching/learning opportunity and you'll find that not only do your bug reports make your software better, they make you a better software developer.
+
+It can be hard to remember all this when you have a pile of bugs you want to work through quickly. Resist the urge to burn through all the new bug reports before lunch or otherwise rush the process. It's often more effective to invest a few extra moments collaborating with the reporter to make sure that bugs are handled well.
diff --git a/sifterapp/complete/issue-tracking-challenges.txt b/sifterapp/complete/issue-tracking-challenges.txt
new file mode 100644
index 0000000..fc5e03e
--- /dev/null
+++ b/sifterapp/complete/issue-tracking-challenges.txt
@@ -0,0 +1,63 @@
+Tracking issues isn't easy. It's tough to find them, tough to keep them updated and accurate, and tougher still to actually resolve them.
+
+There are quite a few challenges that can weigh down a good issue tracking process, but fortunately there are some simple fixes for each. Here are a few we've discovered over the years and some ways to solve them.
+
+# Lack of participation
+
+The most basic problem is getting your team to actually use the software. Participation and collaboration are the cornerstones of a good issue tracking process, and that collaboration and teamwork run smoothest when everyone is using the same system.
+
+If everyone isn't working together, the process will fall apart before you even get started.
+
+If your team is using email or littering their desks with sticky notes, that's a pretty sure sign you have a problem.
+
+Solution: Get everyone using the same system and make sure everyone is comfortable with it.
+
+# Too difficult to report issues
+
+If it's too hard to actually report an issue -- for example, if there are too many required but irrelevant fields in your forms, if it's too hard to upload relevant files, or if it's just too difficult to log in and find the "new issue" link -- then no one will ever report issues in the first place. And unreported issues are unsolved issues.
+
+Solution: Keep your forms simple, offer drag-and-drop file uploading and, if all else fails, provide email integration.
+
+# Too difficult to find issues
+
+If you can't find an issue, you can't fix it.
+
+Typically the inability to find what you're looking for is the result of weak or minimal searching and filtering tools in your issue tracker.
+
+Sorting and filtering need to be powerful and yet simple. Overcomplicating these tools can mean you'll accidentally filter out half of your relevant issues and not even realize it.
+
+Solution: Simplify the process of searching, filtering and finding issues.
+
+# Over-engineering the process
+
+The best way to avoid over-engineering anything is to keep things as simple as possible. For example, try to solve your problems with your existing tools before you create new tools.
+
+One example of this we've discovered is having [too many possible statuses](https://sifterapp.com/blog/2012/08/the-challenges-with-custom-statuses/) for an issue.
+
+Keeping things simple -- we offer three choices, "Open", "Resolved" and "Closed" -- avoids both the paradox of choice and any overlap. If you have ten possibilities (or worse, completely custom statuses) you've added mental overhead to the process of choosing one. Keeping it simple means you don't have to waste time picking a status for a given issue.
+
+Too many statuses create crevices for issues to hide in and be forgotten when filtering. Overly detailed statuses can also confuse non-technical people, who will wonder, "What's the difference between accepted and in progress?" Good question. Avoid it by making statuses clear and simple.
+
+There are also clear, hard lines between each of these three statuses and no questions about what they mean.
+
+A related problem, and the reason some teams clamor for more status possibilities, is the tendency to conflate statuses with resolutions. For example, "working as designed" isn't a status; it's a resolution. Similarly, "can't reproduce" isn't a status; it's a resolution.
+
+Solution: Keep your status options simple and focus on truly different states of work with clear lines between them.
+
+
+# Over-reliance on software for process
+
+Software is a tool. Tools are wielded by people. The tool alone can only do so much, and without people to guide them even the best tools will fail.
+
+That's why you need to make people the most important part of your issue process.
+
+Make room for the human aspects of issue tracking, like regular testing sessions, consistent iteration and release cycles, and dedicated time for fixing bugs.
+
+Solution: Invest time and effort in the human processes that will pair with and support the software.
+
+# Conclusion
+
+Tracking issues isn't always easy, but you can make it easier by simplifying.
+
+Cut out the cruft. Make sure you have good software and good processes that help your team wield that software effectively. Let the software do the things software is good at and let your team fill in the parts of the process that people are good at.
diff --git a/sifterapp/complete/private-issues.txt b/sifterapp/complete/private-issues.txt
new file mode 100644
index 0000000..c2b033d
--- /dev/null
+++ b/sifterapp/complete/private-issues.txt
@@ -0,0 +1,28 @@
+Sifter does not offer "private" issues. Here's why.
+
+In most cases the reasons teams want private issues are the very reasons private issues are problematic. There seem to be three primary reasons teams want private issues. The first is so that clients don't see your "mistakes" and will somehow perceive the work as higher quality. This is highly flawed thinking, though. The idea that presenting a nice clean front will convince the client you're some kind of flawless machine is like trying to make unicorns out of rhinos.
+
+The far more likely outcome is that clients can't see half of what you're working on and end up thinking you aren't actually working. Or they might think you're ignoring their issues (which are public). If your highest priority work is all in private issues, the client is cut off from the development process and will never get the bigger picture. This can lead to all sorts of problems, including the client feeling like they're not in control. That's often the point at which clients will either pull the plug or want to step in and micromanage the project.
+
+What you end up doing when you use private issues this way is protecting your image at the client's expense. First and foremost remember that the work isn't about your image, it's about the client and what they want. Assuming you're doing quality work, your image isn't going to suffer just because you're doing that work in front of the client, warts and all. Rhinos have one huge evolutionary advantage over unicorns -- they're real.
+
+Keeping issues private to protect your image ends up skewing expectations the wrong way and can easily end up doing far more damage to your reputation with the client than showing them a few bugs in the software you're developing.
+
+Another reason some teams want private issues is related, but slightly different -- they want to shield the client from the technical stuff so they don't get distracted from the larger picture.
+
+This is indeed tempting, especially with the sort of client that likes to micromanage every step of the development process whether their input is needed or not (and seemingly more often when it is not). It's tempting, when faced with this sort of client, to resort to private issues as a way of avoiding conflict, or of avoiding the client entirely.
+
+However, as noted above, using private issues to make your development process partly or wholly opaque to the client is just as likely to make your client want to step in and micromanage as it is to prevent them from being able to do so.
+
+The problem is that you're trying to solve a human problem with software and that's never going to work. If your client is "in the way" then you need to help them understand what they're doing and how they can do it better.
+
+Even with clients that don't micromanage, it can be tempting to use private issues to shield the client from technical details you don't want to explain. But clients don't have to dig into (and aren't interested in) things not assigned to or related to them. People are great at tuning things out and they will if you give them the chance.
+
+The third use we've seen for private issues is as a kind of internal backchannel, a place your developers can discuss things without worrying that the client is watching. This is the most potentially disastrous way to use private issues. If the history of the internet has taught us anything, it's that "private" is rarely actually private.
+
+Backchannels backfire. Private conversations end up public. Clients get to see what your developers are saying in private and the results are often ugly.
+
+Backchannels also undermine a sense of collaboration by creating an environment in which not all speech is equal. The client has no say in the backchannel and never gets a chance to contribute to that portion of the project.
+
+It’s important to involve clients in the big picture and let them get as involved as they want to be. Private issues subvert this from the outset by setting up an us vs. them mentality that percolates out into other areas of development as well. The real challenge is not keeping clients separate, but getting them involved, and setting up fences and other tactics to prevent them from being fully integrated into the project hinders collaboration and produces sub-par work.
+
diff --git a/sifterapp/complete/sifter-pagespeed-after.png b/sifterapp/complete/sifter-pagespeed-after.png
new file mode 100644
index 0000000..6c35499
--- /dev/null
+++ b/sifterapp/complete/sifter-pagespeed-after.png
Binary files differ
diff --git a/sifterapp/complete/sifter-pagespeed-before.png b/sifterapp/complete/sifter-pagespeed-before.png
new file mode 100644
index 0000000..5c36514
--- /dev/null
+++ b/sifterapp/complete/sifter-pagespeed-before.png
Binary files differ
diff --git a/sifterapp/complete/states-vs-resolutions.txt b/sifterapp/complete/states-vs-resolutions.txt
new file mode 100644
index 0000000..8652c81
--- /dev/null
+++ b/sifterapp/complete/states-vs-resolutions.txt
@@ -0,0 +1,22 @@
+We've written before about why Sifter has only [three possible statuses](https://sifterapp.com/blog/2012/08/the-challenges-with-custom-statuses/) -- Open/Reopened, Resolved and Closed. The short answer is that more than that over-complicates the issue tracking process without adding any real value.
+
+Why? Well, there are projects for which this will not be enough, but provided your project's scope isn't quite as encompassing as, say, NASA's, there's a good chance these three, in conjunction with some supplementary tools, will not only be enough, but will speed up your workflow and help you close issues faster.
+
+Why are custom statuses unnecessary and how does using them over-complicate things? Much of the answer lies in how your team uses status messages -- are your statuses really describing the current state of an issue or are they trying to do more?
+
+One of the big reasons teams often want more status possibilities is that they're using status messages for far more than just setting the status of an issue. The most common example is using resolutions as status indicators -- that is, making the status of the issue a stand-in for its outcome.
+
+How many times have you tracked down an issue in your favorite software project only to encounter a terse resolution like "working as designed" or the dreaded "won't fix"? The problem with these statuses is that they don't describe the state the issue is in; they describe the outcome. In other words, they aren't statuses, they're resolutions.
+
+The status is not "won't fix", the status is closed. The *resolution* is that the issue won't be fixed.
+
+Trying to convey an outcome in a status message is like trying to fit the proverbial square peg in a round hole.
+
+Worse, in this case, you're missing an opportunity to provide a true resolution. What do you learn from these status-message "resolutions"? Nothing. What does the team working on that project learn when they revisit the issue a year from now? Nothing. That's a lost opportunity.
+
+This is part of why statuses do not make good resolutions. Resolutions are generally best captured in the description, where there's room to explain a bit more. Take a minute to explain why you aren't going to fix something, or why it's designed the way it is, and your users will thank you.
+
+Perhaps even more important, your future self will thank you when the same issue comes up again and you can refer back to your quick notes in the resolution to see why things are the way they are.
+
+Provided you use status messages solely for setting the status of an issue, there's rarely a need for more statuses than those Sifter offers -- Open, Resolved and Closed.
+
diff --git a/sifterapp/complete/streamlining-issue-creation.txt b/sifterapp/complete/streamlining-issue-creation.txt
new file mode 100644
index 0000000..543e420
--- /dev/null
+++ b/sifterapp/complete/streamlining-issue-creation.txt
@@ -0,0 +1,55 @@
+No one likes filling out forms. The internet is littered with user surveys that show reducing the number of fields in a form means far more people fill it out. Hubspot rather famously found that dropping just one field from forms [increased conversion][1] by almost 50%. People really hate forms.
+
+Who cares? Well, "people" includes you and your team. And "forms" include those you use to file issues and bugs in the software you're developing.
+
+Want to get more issues filed? Create simpler forms.
+
+At the same time you do need to capture certain bits of information. Take the typical web contact form. Dropping the email field might increase conversion rates significantly, but it means you don't have all the information you need. Forms need to be simple, not simplistic.
+
+And therein lies the rub -- which fields really need to be on your issue form?
+
+Let's break it down using a typical issue form, which might consist of a dozen or more fields. There will almost always be a field for the "status" of an item. Other typical fields include Resolution, Assignee, Opener, Creation Date, Due Date, Category, Type, Release, Priority, Severity, Impact, LOE (estimated), LOE (actual), Browser/OS, Relationships and possibly even more.
+
+All those fields create a huge cognitive overhead which quickly leads to "decision fatigue", a fancy name for "people have better things to do than fill out long overly detailed forms on their computers." Let's tackle these one by one.
+
+* Status -- We've [written previously][2] about how extra statuses are unnecessary. The short story is that an issue is either open or closed; there is no other status. Everything else is really a resolution. For example, the dreaded "won't fix" is not a status. The status is closed. The *resolution* is that the issue won't be fixed.
+
+* **Resolution** -- We need a spot to record what we've done, so keep this one.
+
+* Assignee -- Another necessary field, but it can be captured implicitly without adding another field to the form. Keep the data, but leave it off the issue form.
+
+* Opener -- Again, good info to have, but not info you should need to fill in. Lose the field and capture it behind the scenes.
+
+* Creation Date -- Like Opener, this should be captured automatically when the issue is created.
+
+* Due Date -- The due date of every issue is "yesterday"; there's no need to ask people to figure this out in the issue creation form. Figuring out the due date means [figuring out the priority of the issue][3], and that can't be done without an overview of the whole project. The issue creation form is the wrong place to determine priority, and thus due date.
+
+* **Category** -- Categories are good; they help classify the issue. Is it a feature request, a bug, or something else? Categories are also helpful when trying to determine the priority of an issue, so let's keep this one.
+
+* Type -- The type of issue is more or less the same as the category. No need to make a decision twice; keep it simple and lose the Type field. The same is true of "Tags" or any other variation on the categories theme.
+
+* **Release** -- Useful for planning, so keep it, though it won't need a value until after the issue is created.
+
+* **Priority** -- Setting the priority of the issue is important so we'll keep this one as well.
+
+* Severity -- The severity of an issue can and should be a factor in setting the priority, but it doesn't need its own field in the form. Keep severity as part of your decision-making process, but don't track it separately from what it's influencing, namely, the Priority field.
+
+* Impact -- Like Severity, the impact of an issue is part of what determines the priority, but again there's no need to track it separately.
+
+* Level of Effort (estimated) -- The level of effort necessary to fix any individual issue is nearly impossible to estimate and not all that useful even if you do happen to estimate correctly. All this field does is create cognitive overhead.
+
+* Level of Effort (actual) -- Again, you're just creating overhead and getting nothing in return; lose it.
+
+* Browser/OS -- This is useful information to have, but it doesn't apply directly to the issue. It's best captured in the comments or description field.
+
+After trimming our form down to the fields we actually need, we're left with, in addition to subject and description, a Resolution field, a field for Priority, another for Category and one for Release.
+
+With just six fields, three of which don't need to be filled out when the issue is created -- Resolution, Priority, Release -- our form is considerably smaller.
+
+What we've created is a form that's simple enough you don't need to train your team on how to use it. Open up the form, create a new issue, give it a name, a brief description and a category; hit Create and you're done.
+
+Streamlining the process of creating issues means that the workflow is simple enough that even the "non-techie" members of your team will be able to use it. That means every person on the team has the potential to become a valuable contributor to the system.
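+
+As a rough sketch of the result -- hypothetical field names, not Sifter's actual schema -- the trimmed-down issue record looks something like this in Python:
+
+```python
+from dataclasses import dataclass
+from typing import Optional
+
+@dataclass
+class Issue:
+    # Filled in when the issue is created
+    subject: str
+    description: str
+    category: str  # e.g. "bug" or "feature request"
+    # Filled in later, during triage and resolution
+    priority: Optional[str] = None
+    release: Optional[str] = None
+    resolution: Optional[str] = None
+```
+
+Three required fields at creation time; everything else can wait until triage.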
+
+[1]: http://blog.hubspot.com/blog/tabid/6307/bid/6746/Which-Types-of-Form-Fields-Lower-Landing-Page-Conversions.aspx
+[2]: link to status vs issue piece
+[3]: link to piece on setting priority
diff --git a/sifterapp/complete/triaging.txt b/sifterapp/complete/triaging.txt
new file mode 100644
index 0000000..7012687
--- /dev/null
+++ b/sifterapp/complete/triaging.txt
@@ -0,0 +1,62 @@
+Few elements in your development process are as important as testing your code. We've found that the best way to ensure you get the most out of your testing is to set a schedule and stick to it.
+
+Set aside time to test and fix the problems you encounter every week or two. The process of testing and fixing might look something like this:
+
+1. Code Freeze. Everybody stops coding and prepares for testing. Everyone.
+2. Everyone tests. Preferably not their own modules.
+3. Triage. This is where most testing strategies break down. Issues don't organize themselves, and many teams don't take the time to organize them.
+4. Fix. Testing is useless without time dedicated to fixing.
+
+Let's focus on an oft-overlooked part of the testing process -- triaging.
+
+## What is it?
+
+Triage means prioritizing. The word comes from the medical world, where it refers to the process of prioritizing patients based on the severity of their problems.
+
+When resources are limited -- and resources are almost always limited -- triaging is a way of helping as many people as possible and in the right order. In the ER, for example, that means treating the person having a heart attack before moving to the person with a broken toe.
+
+In application development triaging means sorting through all the issues you discover in testing and organizing the resulting work based on priority.
+
+Sometimes priority is obvious, sometimes it's not. That's why triaging takes time and usually involves a project manager and a key stakeholder or client.
+
+## Why do it?
+
+Testing creates a lot of work because it’s designed to find as many problems as possible. To stick with the medical analogy, testing is what brings your patients to the door.
+
+The result of a good testing phase will be an abundance of work to do, which can be overwhelming. Fail to properly triage your results and you won't know what to do or where to start; you'll simply be drowning in a sea of bugs and issues.
+
+Triaging is also a way to build consensus and agreement on the priority and organization of work.
+
+## How Does Triaging Work?
+
+Successful triaging depends on a consistent process. In the ER there is an intake process which assesses the severity of the problem and then sets the priority accordingly.
+
+The process is more or less the same in software development.
+
+The first step is setting the priority for each issue you discovered during the actual testing. Figuring out the priority of an issue is the most complex problem in the process, but you'll be more successful at this the more you can involve the client or primary stakeholder.
+
+Part of what makes prioritization difficult is determining scope. Many "bugs" will actually be enhancements. What qualifies as a bug and what's an enhancement will vary by project and client, which is why it's key to have the client involved in the decision process.
+
+Bringing the client into the triage process helps ensure that your prioritizing matches the client's expectations. By setting priorities with the client, disagreements can be caught before they become problems down the road.
+
+Another key part of setting priorities is acknowledging the trade-offs involved. Failing to account for, say, the time needed to fix something will turn your priorities into wishful thinking rather than accurate and manageable tasks for your team. Wishful thinking does not get things done; realistic, well-understood expectations and discrete lists do.
+
+Once you have the priorities established and you know what you want to work on next, you can move on to de-duplication. The DRY principle doesn't apply just to writing code. Be diligent when listing issues and make sure you don't have two issues noting the same problem. Before you assign any new bugs you've prioritized, make sure to de-duplicate. This often has the additional advantage of exposing an underlying problem behind several related bugs. If bugs routinely turn up in one module of code, that could be a sign the whole module needs to be rewritten rather than patched for the latest round of related bugs.
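+
+The first pass at spotting duplicate candidates can even be scripted. Here's a sketch -- assuming a hypothetical list of issue dictionaries, not any particular tracker's API -- that groups issues by normalized title so a human can review the likely duplicates:
+
+```python
+from collections import defaultdict
+
+def normalize(title):
+    # Lowercase and collapse whitespace so near-identical titles match.
+    return " ".join(title.lower().split())
+
+def likely_duplicates(issues):
+    groups = defaultdict(list)
+    for issue in issues:
+        groups[normalize(issue["title"])].append(issue["id"])
+    # Only groups with more than one issue are duplicate candidates.
+    return {t: ids for t, ids in groups.items() if len(ids) > 1}
+
+issues = [
+    {"id": 1, "title": "Submit button misaligned"},
+    {"id": 2, "title": "submit  button  misaligned"},
+    {"id": 3, "title": "Login fails on Safari"},
+]
+print(likely_duplicates(issues))  # {'submit button misaligned': [1, 2]}
+```
+
+A human still makes the final call -- two similar titles may describe genuinely different bugs -- but the grouping surfaces candidates quickly.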
+
+The final step in the triage process is assigning the work that needs to be done. Work may be assigned directly to individual designers and developers or passed on to team leads who can better identify who should do which aspect of the work.
+
+Mastering the triage process will make your team more efficient and productive. It will also get bug fixes and new features out to customers faster. While bugs will still be found outside of testing, triaging helps minimize the number of bugs that are mis-classified or incomplete. Triaging helps ensure your entire team knows what the problems are and when they're going to be fixed.
+
+## Weekly Stagnation Meetings
+
+The triage process isn't limited to new bugs you find in intensive testing sessions. You can also perform a second type of triage -- reviewing stagnant issues.
+
+If an issue hasn't been touched in a while, then something needs to be done.
+
+The exact definition of "a while" varies by team and project, but if an issue has been sitting longer than you think it should have, it's time to do something. Sometimes that means reassigning the work or raising its priority. Sometimes the fact that it hasn't been done is telling you it doesn't need to be done: close the issue and move on.
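+
+The check itself is easy to script. A sketch, assuming each issue carries a last-updated timestamp:
+
+```python
+from datetime import datetime, timedelta
+
+STALE_AFTER = timedelta(days=14)  # "a while" -- tune this per team and project
+
+def stagnant_issues(issues, now=None):
+    now = now or datetime.now()
+    # Anything untouched longer than the threshold needs a decision:
+    # reassign it, raise its priority, or close it and move on.
+    return [i for i in issues if now - i["updated_at"] > STALE_AFTER]
+```
+
+Finding the stale issues is the trivial part; deciding what to do with each one is the real work of the meeting.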
+
+Letting stale issues pile up can have a vampiric effect on a project, sucking some of the life out of it. Just knowing those unsolved problems are out there, not being worked on, adds a level of stress to your work that you don't need. Reassess, reassign and get back to work.
+
+## Conclusion
+
+Issues don't track themselves. Having good human processes around your tools will make a world of difference in their effectiveness. And don't forget to set aside time for fixing. Testing without fixing is pointless.
diff --git a/sifterapp/complete/webpagetest-notes.txt b/sifterapp/complete/webpagetest-notes.txt
new file mode 100644
index 0000000..0dbbe05
--- /dev/null
+++ b/sifterapp/complete/webpagetest-notes.txt
@@ -0,0 +1,35 @@
+
+> Performance audit...
+>
+> 0. Scope: SifterApp.com - Basically everything on our marketing site (not
+> on a subdomain) is static content created by Jekyll and served straight
+> through Nginx.
+>
+> 1. Context: Our marketing site used to live within the application
+> codebase, and so Rails and Capistrano handled most of the asset
+> optimization and serving. Now that it's all Jekyll, we're just tossing
+> files up via Nginx with little consideration for performance. We need to
+> fix that, especially for mobile. (And eventually, I even see the picture
+> element as being part of that.)
+>
+> 2. Front-end/back-end, nothing's off limits. I expect that we'll have room
+> for improvement in both areas. Just looking at the scores from the tests
+> and a cursory glance at the resulting advice, we'll need to make some
+> changes with all of it. The big thing is that I just don't have the
+> bandwidth to research it and understand the best solutions for us.
+>
+> 3. Structure of article. I think it should focus primarily on the tools and
+> the information that they provide and only use Sifter as examples. That way
+> it's about the tools instead of Sifter. My only fear is that if we're
+> already optimized in some areas, there won't be as much to share about what
+> the tools help you find. That is, our performance may suck but not bad
+> enough to show the full capabilities of the tools.
+>
+> I know there are countless tools/techniques that make sense, and I see them
+> in two distinct categories. 1.) Tools that help you see your problems. 2.)
+> Tools that help you fix your problems. I'd like to see us focus on the
+> tools that help you see the problems to focus on the "bug" finding aspect.
+> For each of the problems, I think we should link to relevant tools or
+> tutorials that can help solve the problem, but we should leave the
+> researching and choosing of those tools to the reader.
+>
diff --git a/sifterapp/complete/webpagetestp1.txt b/sifterapp/complete/webpagetestp1.txt
new file mode 100644
index 0000000..c8c09ba
--- /dev/null
+++ b/sifterapp/complete/webpagetestp1.txt
@@ -0,0 +1,79 @@
+The web is getting fatter and slower. Compare the [HTTPArchive][1] data for this month to six months ago. Chances are you'll find that overall page size has grown and page load times increased.
+
+This is bad news for the web at large, but it can be good news for your site. It means there's an easy way to stand out from the crowd -- build a blazing fast website.
+
+To do that you should make performance testing part of your design process. Plan for speed from the very beginning and you'll end up with a fast site. Didn't start your site off on the right foot? That's okay. We'll show you how you can speed up existing pages too.
+
+First let's talk about what we mean by performance. Performance is more than page load times, more than page "weight". These things matter, but not by themselves. Load times and download size only matter in relation to the most important part of performance -- how your visitors perceive your pages loading. Performance is ultimately a very subjective thing, despite being surrounded by some very objective data.
+
+Just remember, perception is what matters the most, not numbers.
+
+This is why performance is not necessarily a concrete target to aim for, but a spectrum.
+
+At one end of the spectrum you have what are known as ideal time targets. These were [popularized by Jakob Nielsen][2] in his book <cite>Usability Engineering</cite>. Even if you've never read the book, you've probably heard these times mentioned in the context of good user experience:
+
+> * 0.1 second is about the limit for having the user feel that the system is reacting instantaneously.
+> * 1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay.
+> * 10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done.
+
+The quick takeaway is that you have one second to get something rendered on the screen or your visitor will already be thinking about other things. By the time 10 seconds rolls around they're long gone.
+
+## Aim High, Fail High
+
+The one second rule makes a nice target to aim for, but let's face it, most sites don't meet that goal. Even Google's generally speedy pages don't load in less than one second. And Google still manages to run a multi-billion dollar business.
+
+That said, Google has been [very vocal][3] about the fact that it is trying to get its pages down below that magical one second mark. And [it wants you to aim for one second][4] as well. Remember, the higher you aim the higher you are when you fail.
+
+So how fast is fast enough? To answer that question you need to do some performance testing.
+
+You need to test your site to find out where you can improve, but you should test your competitors' sites as well. Why? They're the other end of the spectrum. At one end is the one second nirvana; at the other are your competitors' sites. The goal is to move your site from that end toward the one second end.
+
+If you can beat another site's page load times by 20 percent, people will perceive your site as faster. Even if you can't get to that nearly instantaneous goal of one second or less, you can still beat the competition. That means more conversions and more happy users.
+
+To figure out where you stand, and where your competition stands, you'll want to do a performance audit -- that is, figure out how fast your pages load and identify bottlenecks that can easily be eliminated.
+
+## Running a Performance Audit
+
+There are three tools that form a kind of triumvirate of performance testing -- [WebpageTest.org][5], a web-based performance testing tool, Google's PageSpeed tools, and the network panel of your browser's developer tools.
+
+These three will help you diagnose your performance problems and give you a good idea of where you can start improving.
+
+To show you how these tools can be combined, we're going to perform a basic performance audit on the [Sifter homepage][6]. We'll identify some performance "bugs" and show you how we found them.
+
+The first step in any performance audit is to see where you're starting from. To do that we use WebpageTest.org. Like BrowserStack, which we've [written about before][7], WebpageTest is a free tool built around virtual machines. You give it a URL and it will run various tests to see how your site performs under different conditions.
+
+## Testing with WebpageTest
+
+Head over to WebpageTest, drop the URL you want to test into the form and then click the yellow link that says "Advanced Settings". Here you can control the bandwidth being used, the number of tests to run and whether or not to capture video. There's also an option to keep the test results private if you're working with a site that isn't public.
+
+To establish a performance baseline we suggest running two separate tests -- one over a high speed connection (the default is fine) and one over a 3G network. For each test you'll want to run several passes -- you can run the test up to 9 times -- and let WebpageTest pick the median result. Be sure to use an odd number of runs so the median is a real run; we typically run 7. Also, make sure to check the box that says "Capture Video".
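+
+If you'd rather script these runs than click through the form, WebpageTest also offers an HTTP API. The sketch below is an assumption based on the public API -- check the WebpageTest documentation for the exact parameter names, and note that you'll need to request an API key:
+
+```python
+import requests  # third-party HTTP library: pip install requests
+
+# Assumed WebpageTest API usage -- verify the parameters against the docs.
+params = {
+    "url": "https://sifterapp.com/",
+    "runs": 7,           # an odd number, so the median is a real run
+    "video": 1,          # capture video of the page rendering
+    "f": "json",         # response format
+    "k": "YOUR_API_KEY",
+}
+resp = requests.get("http://www.webpagetest.org/runtest.php", params=params)
+print(resp.json())  # includes URLs for polling the test results
+```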
+
+Once your tests are done you'll see a page that looks something like this:
+
+![screenshot of initial test results page]
+
+There are a bunch of numbers here, but the main ones we want to track are the "Start Render" time and the "Speed Index". The latter is the more important of the two.
+
+What we really care about when we're trying to speed up a page is the time before the visitor sees something on the screen.
+
+The overall page load time is secondary to getting *something* -- anything really, but ideally the most important content -- on the screen as soon as possible. Give your visitors something to hold their interest or interact with and they'll perceive your page as loading faster than it actually is. People won't care (or even know) if the rest of the page is still loading in the background.
+
+The [Speed Index metric][8] represents the average time it takes to fill the initial viewport with, well, something. The number is in milliseconds and depends on the size of the viewport. A smaller viewport (a phone, for example) needs less on the screen to be filled than a massive HD desktop monitor.
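+
+The arithmetic behind the metric is simple: Speed Index is the area above the visual-progress curve, i.e. the sum of (1 - visual completeness) over time. Here's a simplified sketch with made-up sample data (the real calculation works from video frames and interpolates between them):
+
+```python
+def speed_index(samples):
+    """samples: (time_ms, visual_completeness) pairs, where completeness
+    runs from 0.0 (blank screen) to 1.0 (fully rendered viewport)."""
+    total = 0.0
+    for (t0, c0), (t1, _) in zip(samples, samples[1:]):
+        # Area of the "not yet complete" region over this interval.
+        total += (1.0 - c0) * (t1 - t0)
+    return total
+
+# Made-up timeline: blank until 2s, mostly rendered at 4s, done at 6s.
+samples = [(0, 0.0), (2000, 0.0), (4000, 0.85), (6000, 1.0)]
+print(speed_index(samples))  # 4300.0 -- a Speed Index of roughly 4300ms
+```
+
+Notice how heavily those blank first seconds weigh on the score. Getting *something* on the screen early is what moves the number.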
+
+For example, let's say your Speed Index is around 6000 milliseconds over a mobile connection. That sounds pretty good right? It's not one second, but it's better than most sites out there.
+
+Now go ahead and click the link to the video of your page rendering and force yourself to sit through it. Suddenly 6 seconds doesn't sound so fast does it? In fact it's a little painful to sit through isn't it? If you're like us you were probably fidgeting a little before the video was over.
+
+That's how your visitors *feel* every time they use your site.
+
+Now that you have some idea of how people are perceiving your site and what it feels like, it's time to go back to the numbers. In the next installment we'll take a look at some tools you can use to find, diagnose and fix problems in the HTML and improve the performance of your site.
+
+
+[1]: http://httparchive.org/
+[2]: http://www.nngroup.com/articles/response-times-3-important-limits/
+[3]: http://www.youtube.com/watch?v=Il4swGfTOSM
+[4]: http://googlewebmastercentral.blogspot.com/2013/08/making-smartphone-sites-load-fast.html
+[5]: http://www.webpagetest.org/
+[6]: https://sifterapp.com/
+[7]: link
+[8]: https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index
diff --git a/sifterapp/complete/webpagetestp2.txt b/sifterapp/complete/webpagetestp2.txt
new file mode 100644
index 0000000..5a28dd1
--- /dev/null
+++ b/sifterapp/complete/webpagetestp2.txt
@@ -0,0 +1,85 @@
+In the last installment we looked at how WebpageTest can be used to establish a performance baseline. Now it's time to dig a bit deeper and see what we can do about some common performance bottlenecks.
+
+To do that we'll turn to a new tool, [Google PageSpeed Insights][1].
+
+Before we dive in, recall what we said last time about the importance of the Speed Index -- that is, the time it takes to get something on the screen. This is different from the time it takes to fully load the page. Keep that in mind when you're picking and choosing what to optimize.
+
+For example, if PageSpeed Insights tells you "leverage browser caching" -- which means have your server set an Expires Header -- that's good advice in the broader sense, but it won't change your Speed Index number for first-time visitors.
+
+To start with, we suggest tackling the things that will get you the biggest wins on the Speed Index. That's what we'll focus on here.
+
+## Google PageSpeed Insights
+
+Now that we know how long it's taking to load the page, it's time to start finding the bottlenecks in that process. If you know how to read a waterfall chart then WebpageTest can tell you most of what you want to know, but Google's PageSpeed Insights tool offers a nicer interface and puts more of an emphasis on mobile performance improvements.
+
+There are two ways to use PageSpeed. You can use [the online service][2] and plug in your URL or you can install the [PageSpeed Insights add-on for Chrome][3], which will add a PageSpeed tab to the Chrome developer tools.
+
+The latter is very handy, but lacks some of the features found in the online tool, most notably checks on "[critical path][4]" performance (another name for Speed Index) and mobile user experience analysis. For that reason we suggest using both. The online service does a better job of suggesting fixes for mobile and offers a score you can use to gauge your improvements (though you should go back to WebpageTest and rerun your same tests to make sure that your Speed Index times have actually dropped).
+
+The browser add-on, on the other hand, will look at other network conditions, like redirects, which can hurt your Speed Index times as well.
+
+PageSpeed Insights fetches the page twice, once with a mobile user-agent and once with a desktop user-agent. It does not, however, simulate the constrained bandwidth of a mobile connection. For that you'll need to go back to WebpageTest. Complete details on what PageSpeed Insights does are available in Google's [developer documentation][5].
+
+When we ran PageSpeed Insights on the Sifter homepage the service made a number of suggestions:
+
+![Screenshot of initial run]
+
+Notice the color coding: red is high priority, yellow less so, and green is all the stuff you're already doing right. But those priorities are Google's suggestions, not hard and fast rules. As we mentioned above, one of the high priority suggestions is to add Expires Headers to our static assets. That's a good idea and it will help speed up the experience of visiting the site again or loading a second page that uses the same assets. But it won't help first time visitors and it won't change that Speed Index number for initial page loads.
+
+Enabling compression, on the other hand, will. Adding GZip compression to our stylesheet and SVG icons would shave 154.8KB off the total page size. Fewer kilobytes to download always means faster page load times. This is especially true for the stylesheet, since the browser stops rendering the page whenever it encounters a CSS file. It doesn't start rendering again until it has completely downloaded and parsed the CSS file, so anything we can do to decrease the size of the stylesheet will be a big win.
+
+Another suggestion, one the online tool doesn't consider high priority but which shows up in the browser add-on, is to minimize redirects.
+
+To see how redirects hurt your page load times, let's turn to the third tool for performance testing: your browser's network panel.
+
+## The Network Panel
+
+All modern web browsers have some form of developer tools and all of them offer a "Network" panel of some sort. For these examples we'll be using Chrome, but you can see the same thing in Firefox, Safari, Opera and IE.
+
+In this example you can see that the fonts.css file returned a 302 (temporarily moved) redirect:
+
+![Screenshot of Network Panel]
+
+To find out more about what this redirect is and why it's happening, we'll select it in the network panel and have a look at the actual response headers.
+
+![Screenshot of Network Panel response headers]
+
+In this case you can see that it redirected to another CSS file on our domain.
+
+This file is eating up time twice. First, it's on a different domain (our webfont provider's domain), which means there's another DNS lookup to perform. That's a big deal on mobile; [see this talk][6] from Google's Ilya Grigorik for an incredibly thorough explanation of why.
+
+The second time suck is the actual redirect, which forces the browser to try loading the same resource again from a different location (and keep in mind that this resource is a CSS file, so it's blocking rendering throughout these delays). The second time it succeeds, but there's definitely a performance hit.
+
+Given all that, why still serve up this file? Because it's an acceptable trade-off. Tungsten (the font being loaded) is an integral part of the design, and in this case there are other areas we can optimize -- like enabling server-side GZip compression -- that will get us some big wins. It may be that those get us close enough to the ideal one second end of the spectrum that we're okay with the font loading.
+
+This highlights what is perhaps one of the hardest aspects of improving performance -- nothing comes for free.
+
+When it comes to page load times there is no such thing as too fast, but there can be such a thing as over-optimization. If we ditch the font we might speed up the page load time a tiny bit, but we might also lose some of the less tangible aspects of the reading experience. We might get the page to our visitors 500ms faster, but they might also be less delighted with what we've given them. The right answer to the question of what stays and what goes is a case-by-case problem.
+
+For example, if you eliminate a JavaScript library to speed up your page, but without the library your app stops working, well, that would be silly. Moving that JavaScript library to a CDN and caching it with a far-future Expires Header? Now that's smart.
+
+Performance is always a series of trade-offs. CSS blocks the rendering of the page, but no one wants to see your site without its CSS. To speed up your site you don't get rid of your CSS, but you might consider inlining some of it. That is, move some of your critical CSS into the actual HTML document, enough that the initial viewport is rendered properly, and then load the stylesheet at the bottom of the page where it won't block rendering. Tools like Google's [PageSpeed Module][7] for Apache and Nginx can do this for you automatically.
+
+The answer to performance problems is rarely to move from one extreme to the other, but to find the middle ground where performance, functionality and great user experience meet.
+
+## What We Did
+
+After running Sifter through WebpageTest we identified the two biggest wins -- enabling GZip compression and setting Expires Headers. The first means users download less data, so the page loads faster. The second means repeat views will be even faster because common elements like stylesheets and fonts are already in the browser's cache.
+
+We also removed some analytics scripts which were really only necessary on particular pages we're testing, not the site as a whole.
+
+For us the change meant adding a few lines to our Nginx configuration. One gotcha for fellow Nginx users: you need to add your GZip and Expires configuration to your application *and* load balancing servers. Other than that snag, the changes hardly took any time at all.
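+
+For the curious, the directives involved look roughly like this. This is a sketch rather than our exact configuration -- the file types and cache lifetimes will vary from site to site:
+
+```nginx
+# Inside the server block. Remember: the load balancer needs this too.
+# Compress text-based assets; CSS is the big win since it blocks rendering.
+gzip on;
+gzip_types text/css application/javascript image/svg+xml;
+
+# Far-future Expires headers so repeat views come from the browser cache.
+location ~* \.(css|js|svg|woff)$ {
+    expires 30d;
+    add_header Cache-Control "public";
+}
+```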
+
+The result? Our initial page load times as measured by WebpageTest dropped to under 4 seconds over 3G. That's a two-second improvement for mobile users with very little work on our end. For those with high speed connections the Sifter homepage now gets very close to that magical one second mark.
+
+We were able to get there because we did the testing, identified the problems and targeted the biggest wins rather than trying to do it all. Remember: don't test more, test smarter.
+
+
+
+[1]: https://developers.google.com/speed/pagespeed/insights/
+[2]: https://developers.google.com/speed/pagespeed/insights/
+[3]: https://chrome.google.com/webstore/detail/pagespeed-insights-by-goo/gplegfbjlmmehdoakndmohflojccocli?hl=en
+[4]: https://developers.google.com/web/fundamentals/performance/critical-rendering-path/
+[5]: https://developers.google.com/speed/docs/insights/about
+[6]: https://www.youtube.com/watch?v=a4SbDZ9Y-I4#t=175
+[7]: https://developers.google.com/speed/pagespeed/module
diff --git a/sifterapp/complete/yosemite-mail.txt b/sifterapp/complete/yosemite-mail.txt
new file mode 100644
index 0000000..e713152
--- /dev/null
+++ b/sifterapp/complete/yosemite-mail.txt
@@ -0,0 +1,14 @@
+If you've updated to Apple's latest version of OS X, Yosemite, you have a powerful new tool for creating issues via email and you may not even know it.
+
+Yosemite debuts something Apple calls Extensions, which are little apps that run inside other apps. Extensions are available in both OS X and iOS, but since both are brand new, not a lot of applications are taking advantage of them just yet.
+
+The latest version of Apple Mail does, however, make use of Extensions through its new Markup feature. Markup is a tool for quickly adding simple notes and annotations to image and PDF files directly within Mail.
+
+Here's how it works: First you add an image or PDF to a mail message. Then you click on the file and an icon appears in the top-left corner of the file preview. Click the little icon and select Markup. The image will then zoom out and you'll see a toolbar above it with options to draw shapes and add text on top of it.
+
+Most demos we've seen of Markup show people adding arrows to maps to indicate where to meet and other things you'll probably never actually do, but this is a very powerful tool for software developers. It makes adding a little bit of visual help to your issues much easier.
+
+For example, your workflow might look like this: you discover a bug with some visual component -- let's say some CSS fails to line up the submit buttons on a form. You grab a screenshot (just press CMD-Shift-3 and OS X will take one), drag it into a new mail message, and annotate it with some arrows pointing to the problem and a quick note about how it should look. Then you send it off to your issue tracking software, which creates a new issue and attaches your screenshot complete with annotations.
+
+This way your designers don't have to wade through a bunch of prose trying to figure out what you mean by "doesn't line up". Instead they see the image with your notes and can jump straight into fixing the issue.
+
diff --git a/sifterapp/ideal-sifter-workflow.txt b/sifterapp/ideal-sifter-workflow.txt
new file mode 100644
index 0000000..4fdc269
--- /dev/null
+++ b/sifterapp/ideal-sifter-workflow.txt
@@ -0,0 +1,31 @@
+Streamlining Bug & Issue Tracking Workflow
+
+The quest for a perfect workflow leads to ambiguous and redundant statuses. Can an issue be resolved and not in progress at the same time? (E.g., the original tester is no longer around, so who retests it?) Can it be pending and resolved at the same time? If it's moved to pending, it becomes difficult to remember what state it was in previously.
+
+
+This would essentially help people visualize the implicit nature of frequently requested statuses that we don’t explicitly plan on adding.
+
+
+How do we suggest working with issues in Sifter?
+
+This would be a diagram illustrating both the explicit and virtual states that Sifter helps manage.
+
+
+> I’d create a diagram to illustrate it kind of like this with guidance on how to create/bookmark the corresponding filters for convenience.
+
+
+* New -> Open w/o milestone/assignee
+* Accepted -> Open & Assigned to a milestone.
+* Pending/On Hold -> Open & No milestone, no assignee
+* In Progress -> Open & Assigned to a user.
+* Open
+* Resolved
+* Closed
+* Rejected -> Closed w/ explanation.
+* Working as Designed -> Closed w/ explanation.
+* Duplicate -> Closed w/ explanation.
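+
+One way to make those virtual states concrete is to treat them as saved filters over the real statuses. A sketch in Python (the field names are assumptions mirroring the list above, not Sifter's API):
+
+```python
+# Meta-statuses expressed as filters over the primary statuses.
+# Hypothetical field names -- the point is the mapping, not the schema.
+meta_statuses = {
+    "New":         {"status": "Open",   "milestone": None,  "assignee": None},
+    "In Progress": {"status": "Open",   "assignee": "set"},
+    "Accepted":    {"status": "Open",   "milestone": "set"},
+    "Rejected":    {"status": "Closed", "note": "explanation required"},
+}
+```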
+
+The underlying challenge is that we often want machines to explicitly handle every potential scenario. In some cases, this is useful, but in others, we're pushing work off to a machine when humans are infinitely better at handling it. Trying to eliminate any ambiguity requires giving the system a lot of additional information.
+
+
+These "meta-statuses" shouldn't be treated the same as actual statuses. They add additional layers of meaning to a status, but they shouldn't live in parallel with the primary statuses.
diff --git a/sifterapp/invoices/scott_gilbertson_invoice_01.rtf b/sifterapp/invoices/scott_gilbertson_invoice_01.rtf
new file mode 100644
index 0000000..1e335fb
--- /dev/null
+++ b/sifterapp/invoices/scott_gilbertson_invoice_01.rtf
@@ -0,0 +1,62 @@
+{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf210
+{\fonttbl\f0\fswiss\fcharset0 ArialMT;\f1\froman\fcharset0 TimesNewRomanPSMT;}
+{\colortbl;\red255\green255\blue255;}
+{\info
+{\author None Yo}}\vieww12240\viewh15840\viewkind1
+\deftab720
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qr
+
+\f0\b\fs24 \cf0 Scott Gilbertson\
+412 Holman Ave\
+Athens, GA 30606\
+706 438 4297 \
+sng@luxagraf.net
+\f1\b0 \
+
+\f0\b 9/03/14
+\f1\b0 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qj
+
+\f0 \cf0 \
+\
+\
+\
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300
+\cf0 Invoice Number:
+\b 0001
+\f1\b0 \
+\pard\pardeftab720\ri0
+
+\f0 \cf0 Invoice Date: 09/03/14\
+Time Period: 08/01/14-09/30/14
+\f1 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\sa283
+\cf0 \
+\pard\pardeftab720\ri0
+
+\f0 \cf0 DESCRIPTION OF SERVICE: Freelance Writing\uc0\u8232 HOURLY RATE / DAILY RATE: 75/hr\
+TOTAL HOURS: 13\
+\
+\pard\pardeftab720\ri0
+
+\fs28 \cf0 TOTAL FOR INVOICE: $975\
+\pard\pardeftab720\ri0
+
+\f1\fs24 \cf0 \
+\
+\
+\
+\
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300
+
+\f0 \cf0 Bank: SchoolsFirstFCU\
+Address:
+\f1 P.O. Box 11547, Santa Ana, CA 92711-1547 \
+
+\f0 Account Name: Checking
+\f1 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qj
+
+\f0 \cf0 Routing: 322282001\
+Account: 0172510703
+\f1 } \ No newline at end of file
diff --git a/sifterapp/invoices/scott_gilbertson_invoice_02.rtf b/sifterapp/invoices/scott_gilbertson_invoice_02.rtf
new file mode 100644
index 0000000..ab48b28
--- /dev/null
+++ b/sifterapp/invoices/scott_gilbertson_invoice_02.rtf
@@ -0,0 +1,62 @@
+{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf210
+{\fonttbl\f0\fswiss\fcharset0 ArialMT;\f1\froman\fcharset0 TimesNewRomanPSMT;}
+{\colortbl;\red255\green255\blue255;}
+{\info
+{\author None Yo}}\vieww12240\viewh15840\viewkind1
+\deftab720
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qr
+
+\f0\b\fs24 \cf0 Scott Gilbertson\
+412 Holman Ave\
+Athens, GA 30606\
+706 438 4297 \
+sng@luxagraf.net
+\f1\b0 \
+
+\f0\b 10/02/14
+\f1\b0 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qj
+
+\f0 \cf0 \
+\
+\
+\
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300
+\cf0 Invoice Number:
+\b 0002
+\f1\b0 \
+\pard\pardeftab720\ri0
+
+\f0 \cf0 Invoice Date: 10/02/14\
+Time Period: 10/01/14-10/31/14
+\f1 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\sa283
+\cf0 \
+\pard\pardeftab720\ri0
+
+\f0 \cf0 DESCRIPTION OF SERVICE: Freelance Writing\uc0\u8232 HOURLY RATE / DAILY RATE: 75/hr\
+TOTAL HOURS: 10\
+\
+\pard\pardeftab720\ri0
+
+\fs28 \cf0 TOTAL FOR INVOICE: $750\
+\pard\pardeftab720\ri0
+
+\f1\fs24 \cf0 \
+\
+\
+\
+\
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300
+
+\f0 \cf0 Bank: SchoolsFirstFCU\
+Address:
+\f1 P.O. Box 11547, Santa Ana, CA 92711-1547 \
+
+\f0 Account Name: Checking
+\f1 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qj
+
+\f0 \cf0 Routing: 322282001\
+Account: 0172510703
+\f1 } \ No newline at end of file
diff --git a/sifterapp/invoices/scott_gilbertson_invoice_03.rtf b/sifterapp/invoices/scott_gilbertson_invoice_03.rtf
new file mode 100644
index 0000000..64d2ca1
--- /dev/null
+++ b/sifterapp/invoices/scott_gilbertson_invoice_03.rtf
@@ -0,0 +1,50 @@
+{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf210
+{\fonttbl\f0\fswiss\fcharset0 ArialMT;\f1\froman\fcharset0 TimesNewRomanPSMT;}
+{\colortbl;\red255\green255\blue255;}
+{\info
+{\author None Yo}}\vieww12240\viewh15840\viewkind1
+\deftab720
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qr
+
+\f0\b\fs24 \cf0 Scott Gilbertson\
+412 Holman Ave\
+Athens, GA 30606\
+706 438 4297 \
+sng@luxagraf.net
+\f1\b0 \
+
+\f0\b 11/07/14
+\f1\b0 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qj
+
+\f0 \cf0 \
+\
+\
+\
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300
+\cf0 Invoice Number:
+\b 0003
+\f1\b0 \
+\pard\pardeftab720\ri0
+
+\f0 \cf0 Invoice Date: 11/07/14\
+Time Period: 11/01/14-11/30/14
+\f1 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\sa283
+\cf0 \
+\pard\pardeftab720\ri0
+
+\f0 \cf0 DESCRIPTION OF SERVICE: Freelance Writing\uc0\u8232 HOURLY RATE / DAILY RATE: 75/hr\
+TOTAL HOURS: 10\
+\
+\pard\pardeftab720\ri0
+
+\fs28 \cf0 TOTAL FOR INVOICE: $750\
+\pard\pardeftab720\ri0
+
+\f1\fs24 \cf0 \
+\
+\
+\
+\
+} \ No newline at end of file
diff --git a/sifterapp/invoices/scott_gilbertson_invoice_04.rtf b/sifterapp/invoices/scott_gilbertson_invoice_04.rtf
new file mode 100644
index 0000000..eb04890
--- /dev/null
+++ b/sifterapp/invoices/scott_gilbertson_invoice_04.rtf
@@ -0,0 +1,77 @@
+{\rtf1\ansi\ansicpg1252\deff0\uc1
+{\fonttbl
+{\f0\fnil\fcharset0\fprq0\fttruetype ArialMT;}
+{\f1\fnil\fcharset0\fprq0\fttruetype TimesNewRomanPSMT;}
+{\f2\fnil\fcharset0\fprq0\fttruetype Liberation Sans;}
+{\f3\fnil\fcharset0\fprq0\fttruetype Liberation Serif;}
+{\f4\fnil\fcharset0\fprq0\fttruetype Courier New;}}
+{\colortbl
+\red0\green0\blue0;
+\red255\green255\blue255;
+\red255\green255\blue255;}
+{\stylesheet
+{\s6\fi-431\li720\sbasedon28\snext28 Contents 1;}
+{\s7\fi-431\li1440\sbasedon28\snext28 Contents 2;}
+{\s1\fi-431\li720 Arrowhead List;}
+{\s27\fi-431\li720\sbasedon28 Lower Roman List;}
+{\s29\tx431\sbasedon20\snext28 Numbered Heading 1;}
+{\s30\tx431\sbasedon21\snext28 Numbered Heading 2;}
+{\s12\fi-431\li720 Diamond List;}
+{\s9\fi-431\li2880\sbasedon28\snext28 Contents 4;}
+{\s8\fi-431\li2160\sbasedon28\snext28 Contents 3;}
+{\s31\tx431\sbasedon22\snext28 Numbered Heading 3;}
+{\s32\fi-431\li720 Numbered List;}
+{\s15\sbasedon28 Endnote Text;}
+{\*\cs14\fs20\super Endnote Reference;}
+{\s4\fi-431\li720 Bullet List;}
+{\s5\tx1584\sbasedon29\snext28 Chapter Heading;}
+{\s35\fi-431\li720 Square List;}
+{\s11\fi-431\li720 Dashed List;}
+{\s22\sb440\sa60\f2\fs24\b\sbasedon28\snext28 Heading 3;}
+{\s37\fi-431\li720 Tick List;}
+{\s24\fi-431\li720 Heart List;}
+{\s40\fi-431\li720\sbasedon32 Upper Roman List;}
+{\s39\fi-431\li720\sbasedon32 Upper Case List;}
+{\s16\fi-288\li288\fs20\sbasedon28 Footnote;}
+{\s19\fi-431\li720 Hand List;}
+{\s18\fs20\sbasedon28 Footnote Text;}
+{\s20\sb440\sa60\f2\fs34\b\sbasedon28\snext28 Heading 1;}
+{\s21\sb440\sa60\f2\fs28\b\sbasedon28\snext28 Heading 2;}
+{\s10\qc\sb240\sa120\f2\fs32\b\sbasedon28\snext28 Contents Header;}
+{\s23\sb440\sa60\f2\fs24\b\sbasedon28\snext28 Heading 4;}
+{\s28\f3\fs24 Normal;}
+{\s26\fi-431\li720\sbasedon32 Lower Case List;}
+{\s2\li1440\ri1440\sa120\sbasedon28 Block Text;}
+{\s33\f4\sbasedon28 Plain Text;}
+{\s34\tx1584\sbasedon29\snext28 Section Heading;}
+{\s25\fi-431\li720 Implies List;}
+{\s3\fi-431\li720 Box List;}
+{\s36\fi-431\li720 Star List;}
+{\*\cs17\fs20\super Footnote Reference;}
+{\s38\fi-431\li720 Triangle List;}
+{\s13\fi-288\li288\sbasedon28 Endnote;}}
+\kerning0\cf0\ftnbj\fet2\ftnstart1\ftnnar\aftnnar\ftnstart1\aftnstart1\aenddoc\revprop3{\*\rdf}{\info\uc1{\author None Yo}}\deftab720\viewkind1\paperw12240\paperh15840\margl1440\margr1440\widowctrl
+\sectd\sbknone\colsx0\marglsxn1800\margrsxn1800\pgncont\ltrsect
+\pard\plain\ltrpar\qr\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\abinodiroverride\ltrch Scott Gilbertson}{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qr\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\abinodiroverride\ltrch 412 Holman Ave}{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qr\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\abinodiroverride\ltrch Athens, GA 30606}{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qr\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\abinodiroverride\ltrch 706 438 4297 }{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qr\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\abinodiroverride\ltrch sng@luxagraf.net}{\f1\fs24\lang1033{\*\listtag0} }{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qr\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\abinodiroverride\ltrch 12/03/14}{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qj\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qj\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qj\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qj\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\lang1033{\*\listtag0}\abinodiroverride\ltrch Invoice Number: }{\f0\fs24\b\lang1033{\*\listtag0}0004}{\f0\fs24\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs24\lang1033{\*\listtag0}\abinodiroverride\ltrch Invoice Date: 12/03/14}{\f0\fs24\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs24\lang1033{\*\listtag0}\abinodiroverride\ltrch Time Period: 12/01/14-12/31/14}{\f0\fs24\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sa282\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs24\lang1033{\*\listtag0}\abinodiroverride\ltrch DESCRIPTION OF SERVICE: Freelance Writing\uc0\u8232 HOURLY RATE / DAILY RATE: 75/hr}{\f0\fs24\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs24\lang1033{\*\listtag0}\abinodiroverride\ltrch TOTAL HOURS: 10}{\f0\fs24\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs24\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs28\lang1033{\*\listtag0}\abinodiroverride\ltrch TOTAL FOR INVOICE: $750}{\f0\fs28\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs28\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs28\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs28\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs28\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs28\lang1033{\*\listtag0}\par}} \ No newline at end of file
diff --git a/sifterapp/invoices/scott_gilbertson_invoice_05.rtf b/sifterapp/invoices/scott_gilbertson_invoice_05.rtf
new file mode 100644
index 0000000..5bb5b2f
--- /dev/null
+++ b/sifterapp/invoices/scott_gilbertson_invoice_05.rtf
@@ -0,0 +1,55 @@
+{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf210
+{\fonttbl\f0\fswiss\fcharset0 ArialMT;\f1\froman\fcharset0 TimesNewRomanPSMT;}
+{\colortbl;\red255\green255\blue255;}
+{\info
+{\author None Yo}}\margl1440\margr1440\vieww12240\viewh15840\viewkind1
+\deftab720
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\qr
+
+\f0\b\fs24 \cf0 \expnd0\expndtw0\kerning0
+Scott Gilbertson\
+412 Holman Ave\
+Athens, GA 30606\
+706 438 4297 \
+sng@luxagraf.net
+\f1\b0 \expnd0\expndtw0\kerning0
+
+\f0\b \expnd0\expndtw0\kerning0
+\
+02/02/15\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\qj
+\cf0 \expnd0\expndtw0\kerning0
+\
+\
+\
+\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1
+
+\b0 \cf0 \expnd0\expndtw0\kerning0
+Invoice Number:
+\b \expnd0\expndtw0\kerning0
+0005
+\b0 \expnd0\expndtw0\kerning0
+\
+\pard\pardeftab720
+\cf0 \expnd0\expndtw0\kerning0
+Invoice Date: 02/02/15\
+Time Period: 02/01/15-02/28/15\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\sa282
+\cf0 \expnd0\expndtw0\kerning0
+\
+\pard\pardeftab720
+\cf0 \expnd0\expndtw0\kerning0
+DESCRIPTION OF SERVICE: Freelance Writing\uc0\u8232 HOURLY RATE / DAILY RATE: 75/hr\
+TOTAL HOURS: 10\
+\
+\pard\pardeftab720
+
+\fs28 \cf0 \expnd0\expndtw0\kerning0
+TOTAL FOR INVOICE: $750\
+\
+\
+\
+\
+\
+} \ No newline at end of file
diff --git a/sifterapp/invoices/scott_gilbertson_invoice_06.rtf b/sifterapp/invoices/scott_gilbertson_invoice_06.rtf
new file mode 100644
index 0000000..774cd77
--- /dev/null
+++ b/sifterapp/invoices/scott_gilbertson_invoice_06.rtf
@@ -0,0 +1,55 @@
+{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf210
+{\fonttbl\f0\fswiss\fcharset0 ArialMT;\f1\froman\fcharset0 TimesNewRomanPSMT;}
+{\colortbl;\red255\green255\blue255;}
+{\info
+{\author None Yo}}\margl1440\margr1440\vieww12240\viewh15840\viewkind1
+\deftab720
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\qr
+
+\f0\b\fs24 \cf0 \expnd0\expndtw0\kerning0
+Scott Gilbertson\
+412 Holman Ave\
+Athens, GA 30606\
+706 438 4297 \
+sng@luxagraf.net
+\f1\b0 \expnd0\expndtw0\kerning0
+
+\f0\b \expnd0\expndtw0\kerning0
+\
+03/02/15\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\qj
+\cf0 \expnd0\expndtw0\kerning0
+\
+\
+\
+\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1
+
+\b0 \cf0 \expnd0\expndtw0\kerning0
+Invoice Number:
+\b \expnd0\expndtw0\kerning0
+0006
+\b0 \expnd0\expndtw0\kerning0
+\
+\pard\pardeftab720
+\cf0 \expnd0\expndtw0\kerning0
+Invoice Date: 03/02/15\
+Time Period: 03/01/15-03/31/15\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\sa282
+\cf0 \expnd0\expndtw0\kerning0
+\
+\pard\pardeftab720
+\cf0 \expnd0\expndtw0\kerning0
+DESCRIPTION OF SERVICE: Freelance Writing\uc0\u8232 HOURLY RATE / DAILY RATE: 75/hr\
+TOTAL HOURS: 10\
+\
+\pard\pardeftab720
+
+\fs28 \cf0 \expnd0\expndtw0\kerning0
+TOTAL FOR INVOICE: $750\
+\
+\
+\
+\
+\
+} \ No newline at end of file
diff --git a/sifterapp/invoices/scott_gilbertson_invoice_07.rtf b/sifterapp/invoices/scott_gilbertson_invoice_07.rtf
new file mode 100644
index 0000000..538dcca
--- /dev/null
+++ b/sifterapp/invoices/scott_gilbertson_invoice_07.rtf
@@ -0,0 +1,55 @@
+{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf210
+{\fonttbl\f0\fswiss\fcharset0 ArialMT;\f1\froman\fcharset0 TimesNewRomanPSMT;}
+{\colortbl;\red255\green255\blue255;}
+{\info
+{\author None Yo}}\margl1440\margr1440\vieww12240\viewh15840\viewkind1
+\deftab720
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\qr
+
+\f0\b\fs24 \cf0 \expnd0\expndtw0\kerning0
+Scott Gilbertson\
+412 Holman Ave\
+Athens, GA 30606\
+706 438 4297 \
+sng@luxagraf.net
+\f1\b0 \expnd0\expndtw0\kerning0
+
+\f0\b \expnd0\expndtw0\kerning0
+\
+04/01/15\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\qj
+\cf0 \expnd0\expndtw0\kerning0
+\
+\
+\
+\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1
+
+\b0 \cf0 \expnd0\expndtw0\kerning0
+Invoice Number:
+\b \expnd0\expndtw0\kerning0
+0007
+\b0 \expnd0\expndtw0\kerning0
+\
+\pard\pardeftab720
+\cf0 \expnd0\expndtw0\kerning0
+Invoice Date: 04/01/15\
+Time Period: 04/01/15-04/30/15\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\sa282
+\cf0 \expnd0\expndtw0\kerning0
+\
+\pard\pardeftab720
+\cf0 \expnd0\expndtw0\kerning0
+DESCRIPTION OF SERVICE: Freelance Writing\uc0\u8232 HOURLY RATE / DAILY RATE: 75/hr\
+TOTAL HOURS: 10\
+\
+\pard\pardeftab720
+
+\fs28 \cf0 \expnd0\expndtw0\kerning0
+TOTAL FOR INVOICE: $750\
+\
+\
+\
+\
+\
+} \ No newline at end of file
diff --git a/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.35 AM.png b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.35 AM.png
new file mode 100644
index 0000000..af1d87e
--- /dev/null
+++ b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.35 AM.png
Binary files differ
diff --git a/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.39 AM.png b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.39 AM.png
new file mode 100644
index 0000000..799bbde
--- /dev/null
+++ b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.39 AM.png
Binary files differ
diff --git a/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.45 AM.png b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.45 AM.png
new file mode 100644
index 0000000..aaef516
--- /dev/null
+++ b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.45 AM.png
Binary files differ
diff --git a/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.49 AM.png b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.49 AM.png
new file mode 100644
index 0000000..9a24e2f
--- /dev/null
+++ b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.49 AM.png
Binary files differ
diff --git a/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.52 AM.png b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.52 AM.png
new file mode 100644
index 0000000..bc60ae6
--- /dev/null
+++ b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.52 AM.png
Binary files differ
diff --git a/sifterapp/new-article.txt b/sifterapp/new-article.txt
new file mode 100644
index 0000000..d8b6e22
--- /dev/null
+++ b/sifterapp/new-article.txt
@@ -0,0 +1,62 @@
+> I’m including the original brainstorm email for context/convenience at the bottom.
+>
+> > I know nothing about Jira and Fogbugz beyond the things I've seen on twitter where people complain about Jira.
+>
+> Jira is heavily customizable. The catch is that it creates the paradox-of-choice problem both at the configuration level and, if configured to do a lot of stuff, at the data entry and daily use level.
+>
+> For a developer, the configuration is fun. It can literally do anything. You can have different types of tasks: Bugs, New Features, Tasks, Enhancements, Stories, etc. And I believe you can even have the fields and workflow adapt depending on the type.
+>
+> The video on the home page does a good job of giving an overview. It’s just really flexible.
+>
+> The catch is that it looks great from a checklist standpoint, but the more custom configuration is created, the more confusing it can be for a non-developer. Say someone wants to report an issue. They can’t just file it. They have to decide if it’s a bug or a feature. Or should it be a story? Well, maybe this is just a task? The result is that people get confused, misfile things, or just feel like they must not belong there because they can’t understand it. Frequently, they’ll just email it to their favorite developer and let them handle it. And then the work slips through the cracks.
+>
+> That’s where things start to fall apart, but that’s not something a developer will see or acknowledge because the problem is largely invisible. If issues simply aren’t being reported or team members aren’t using a tool, it’s harder to recognize that there even is a problem.
+>
+> > Can you give me some concrete examples of things that exist at the more complex end of the spectrum and why a team might want them.
+>
+> Customizable workflow, issue types, and things like custom fields. There are some plans to begin adding some of this to Sifter.
+>
+> Customizable fields are a big one. People want to add a field for “Module” (where the bug was found), “User Agent” (for browser-related bugs), “Version Target”, or “Due Date”. The problem is that as you add fields, the likelihood of a field being relevant to all types of issues goes down, and you end up with a long form of fields that are only relevant a fraction of the time. For instance, people get tired of reporting User Agent when it’s only useful in 1 out of 10 bugs. That’s a lot of wasted effort. But as a developer, you want all of the info you can possibly get so you don’t have to go back and forth. For Microsoft, Adobe, etc., that makes sense. For the small team of 5 working on a WordPress install for a marketing site and blog, that’s just not important enough to justify a dedicated field.
+>
+> Also the fields can be customized based on the type of issue. So a bug might need user agent info, but a question wouldn’t. Or a Story might have the whole “As a _____ I want to ______ so I can ______.” And then it might have some other fields, but it won’t need a user agent field. So in order to solve the problem of having too many irrelevant fields, they add the “Type” option to drive the fields that are available.
+>
+> Now they simply have a new problem. With the “Type” field for classifying the issue, is it a “Bug”, a “Front-end Bug”, an “Enhancement”, or a “Story”? Clients don’t really care. Not only do they not care, but they’re thinking “WTF is a Story?” (or whatever other random type there is). They just want something fixed/changed. Forcing people to classify something doesn’t contribute to solving the problem on a small team. And if your client submits something as a bug and you have to change it to an enhancement, it’s not only creating overhead, but it might piss the client off. So it’s generally best to have that conversation in the comments instead of doing what most developers would do, which is reclassify it and move on without much of an explanation.
+>
+> > Beyond that, I really like the point that different people on the same team will want different things out of a bug tracker (or whatever; I'm working on a site right now with a new agency where that's true of something as simple as asset sharing). But anyway, is there a solution there? Or should I just frame that as one of those things that humans need to manage rather than chasing some ultimate tool? I.e., ultimately you'll never make everyone happy.
+>
+> Much of this truly is a human problem rather than a software problem. People are generally much better at adapting than software, but as developers, we all want to believe that software should always bend to our whims. Since it’s usually more technology-focused people making the purchase decision, they rarely bother to evaluate the usability or participation implications of choosing a more complex tool.
+>
+> For large teams, the complexity is generally worth it, but small teams often adopt the large, bulky solutions they used in the past without appreciating that at smaller sizes, that process suffocates more than it enables.
+>
+> For instance, with Sifter, there’s no classifying an issue type. When you’re working with maybe 100-200 issues per project, it doesn’t really matter if something is a bug, feature, or a task. It just needs to get done and shipped. So all that really matters is priority. But as with the Lego analogy, developers feel better if they organize all the things. For instance, if we have a team of 5 people collaborating on a project, 1 back-end dev, 1 designer, 1 front-end dev, and a couple of non-technical business people, there’s really no need for heavy workflow. It’s more important that it’s lightweight and easy to use. It’s entirely possible that a small team like that will waste more time designing and deciding on workflow than actually getting things done.
+>
+> For some more context, a lot of it is the ecosystem. Atlassian has a bunch of products, but at the end of the day, they’re really all just features of the “Atlassian Product”. Those features give you more power, but even their signup page is overwhelming with choices.
+> http://cl.ly/dJTJ
+>
+> Some people will see this and get excited at the options. Others will see it and get confused or overwhelmed at the decisions they need to make. That’s just kind of the nature of the beast.
+>
+> On the flip side, many developers will look at something like Sifter or Trello and just feel like it’s the equivalent of using Duplos and it’s for amateurs. But their business and non-technical people love it because they get it.
+>
+> I’m going to stop there. Hopefully that makes some sense of it. :)
+>
+> - G
+>
+>
+> > On Aug 2, 2015, at 9:44 AM, Garrett Dimon <garrett@nextupdate.com> wrote:
+> >
+> > I just had another idea inspired by this. “Why is it so difficult for your team to settle on a bug tracker?”
+> >
+> > The premise is the Goldilocks story. This bug tracker is too simple. This bug tracker is too complicated. This bug tracker is just right.
+> >
+> > Effectively, with bug and issue trackers, there’s a spectrum. On one end, you have todo lists/spreadsheets. Then you have things like GitHub Issues/Sifter/Trello somewhere in the middle, and then Jira/Fogbugz/etc. all of the way to the other end.
+> >
+> > Each of these tools makes tradeoffs. With Sifter, we keep it simple so small teams can actually have bug tracking because for them, they’d just as soon have no bug tracking as use Jira. It’s way too complicated for them.
+> >
+> > Other teams doing really complex things find Sifter laughably simple. Their complexity needs justify the investment in training and excluding non-technical users.
+> >
+> > This is all complicated by the fact that teams inevitably want ease-of-use and powerful customization, which, while not inherently exclusive, are generally at odds with each other.
+> >
+> > To make matters worse, on a larger team, some portion of the team values simplicity more while other parts of the team value advanced configuration and control. Neither group is wrong, but it is incredibly difficult to find a tool with the balance that’s right for a team.
+> >
+> > I could probably expand on this more, but that should be enough to see if you think it’s a good topic or not.
+>
diff --git a/sifterapp/org-chaos.txt b/sifterapp/org-chaos.txt
new file mode 100644
index 0000000..b695823
--- /dev/null
+++ b/sifterapp/org-chaos.txt
@@ -0,0 +1,33 @@
+Chaos vs. Organization
+
+Programmers love organization. Programming is organization, after all: organize electrons in a particular way and some result follows.
+
+Organization is an essential part of programming. Everyone may have their own *style* of organization -- just mention a style on your favorite social media site to cue up a firestorm over why you're right or wrong -- but few would argue that organization itself is bad.
+
+But why? What are you trying to achieve when you organize something? Most of the time the goal of organizing is to achieve a state of simplicity in which the next step is obvious.
+
+The only problem is that once you start organizing things there's a tendency to overdo it. More organization options do not always mean greater simplicity.
+
+Think of your issues and the todo list that grows out of them as a giant pile of Lego. If you wanted to organize a thousand Lego pieces, you might sort them by something like color. That would give you perhaps a dozen buckets to separate your Lego pieces into, which would help you find what you need when you need it. What you don't want is 1,000 buckets with one Lego piece each. That doesn't simplify anything.
+
+
+A little bit of metadata is incredibly powerful; without it we'd be starting over from scratch all the time. Your project needs some metadata -- which elements are essential will depend on the project -- but too much will quickly have the opposite effect. More fields and metadata only bring additional organization if your team is big enough to both value that organization and keep up with it.
+
+For example, categorizing 20 things into 20 categories creates more chaos than it removes, but categorizing those same 20 items into 4 categories can help you find what you need when you need it.
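+
+To make that arithmetic concrete, here's a minimal sketch in Python; the issue titles and categories are invented for illustration:
+
+    from collections import defaultdict
+
+    # Twenty hypothetical issues, each tagged with one of four broad
+    # categories (only a few shown here).
+    issues = [
+        ("Login button misaligned", "ui"),
+        ("Password reset email never arrives", "email"),
+        ("Search times out on large projects", "performance"),
+        ("Typo on pricing page", "content"),
+        # ...and sixteen more along the same lines
+    ]
+
+    # Four buckets are easy to scan and easy to keep current.
+    buckets = defaultdict(list)
+    for title, category in issues:
+        buckets[category].append(title)
+
+    for category, titles in sorted(buckets.items()):
+        print(f"{category}: {len(titles)} issue(s)")
+
+    # Twenty categories for twenty issues would just be the original
+    # pile with extra ceremony: one bucket per item tells you nothing
+    # the flat list didn't.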
+
+If you have a small team, trying to keep up with a lot of extra, ever-changing metadata will only bring chaos, not organization. You never want your metadata fields to outgrow your team’s ability to stay on top of them. In other words, you want just enough metadata.
+
+What’s really important to your project? Due date? Version number? Priority? Severity? Estimate? Level of effort? Category? Tags? How do you bring just the right amount of structure and organization?
+
+The problem is that too many ways of organizing, cross-filing, and slicing metadata create more chaos for most small teams. Too many buckets make it harder to find the Lego piece you need. Worse, complex organization systems have a way of becoming the focus. What you think will help you keep every idea at your fingertips ends up becoming so complicated it's the only idea you can focus on. You lose the bug-fixing forest in the proverbial trees of metadata.
+
+In other words, spend all your time filing your Lego into buckets and you'll never get anything built.
+
+Here's the truth of programming that only comes with years of experience: software bugs are inherently messy. They’re unpredictable and often unreproducible. Some are easy to fix and others border on impossible. The process is inherently chaotic and disorganized.
+
+But we're programmers, and at some fundamental level that chaos and unpredictability are simply unacceptable. So we slice and dice metadata about that chaos; we classify it all and spend our time imposing order rather than actually making things.
+
+Some organization is, of course, needed. Formal testing is about bringing a degree of order to the process, but as software developers, sometimes we want too much order. The result is too much focus on irrelevant details because they bring a semblance of order.
+
+Rocket ships and space shuttles need very detailed organization. A small web application, static website, or CMS site only needs a basic level of organization. Trying to track bugs like Boeing or NASA is overkill in these situations. It won't make your small team more productive and it won't get your projects completed. In fact, it will more than likely do exactly the opposite.
+
diff --git a/sifterapp/requiresments-vs-assumptions.txt b/sifterapp/requiresments-vs-assumptions.txt
new file mode 100644
index 0000000..b5287c2
--- /dev/null
+++ b/sifterapp/requiresments-vs-assumptions.txt
@@ -0,0 +1,28 @@
+One of the toughest things any team has to do is grow and change. Yet adapting to change is perhaps the most important part of growing your software business. There is, as they say, no constant but change. Projects change, goals change.
+
+Ultimately customers drive software development. And since most projects don't start with many customers, change is inevitable. Change can be difficult, but failing to adapt as your project grows is the surest way to doom it.
+
+The biggest risk many companies and development teams face is ending up working on the wrong things. When the project's needs change, your team needs to change with them; otherwise you end up wasting time and development effort on things that are better left undone -- things that are no longer necessary, but still feel necessary.
+
+To avoid this, and to maintain the sort of team and company culture that can adapt as the project's needs change, it helps to have no sacred cows. That is, let go of all the things you *think* you need to do; stop, listen to your customers, and figure out what you *really* need to do.
+
+Consider, for example, your project's list of "requirements". It's very likely that before you ever wrote a single line of code you wrote out some basic "requirements", things you thought your software needed to be able to do. In the beginning that's good, though be careful with the word "requirement".
+
+The things that we define as “requirements” at the beginning of a project are actually little more than assumptions based on incomplete information. Not a line of code has been written and you're already sure you know what your customers want? It can be humbling to realize, but the truth is you most likely have only a vague idea of what your customers want.
+
+To know what your customers really want, and what your requirements actually are, you need customers. If you get too rigid too early and define too many things as "requirements", then before you know it everyone down the chain of command believes those requirements are immutable simply because they're called requirements. It's too rigid a word. You end up wasting time on features and improvements your customers don't even want just because you labeled them "requirements".
+
+To give a real-world example, we thought an option to "theme" and brand Sifter was an absolute requirement, but after launching, we found that almost none of our customers cared. We've received maybe three half-hearted requests for an option to theme Sifter in 6+ years. What we thought was a requirement would have been a waste of time if we had kept pursuing it. The team at Basecamp also apparently decided that theming was no longer a requirement; the new version of Basecamp dropped support for custom themes. If customers don't want it, ditch it.
+
+On the other hand, new "requirements" may come up. Customers may clamor for something you never even considered. These are the features you want to invest your time in.
+
+So how do you find that balance? How do you know which features really should be requirements and which just feel like requirements because you've been calling them that for so long?
+
+One of the best ways is to step back and re-evaluate your requirements on a regular basis, perhaps when you sit down to set the priority of your issues or during weekly or monthly reviews. This alone will help change your team's outlook on "requirements" by making them a moving target. More often than not, when a team comes up with a list of requirements, they're more focused on checking off the items on that initial list than on stepping back to see if they’re really getting those features right, or if customers even want those features at all.
+
+Another helpful thing is to drop the whole concept of requirements, or at least use the word sparingly. When *everything* is called a "requirement", nothing is a requirement. Toss that word around too casually and it becomes disingenuous at best and unrealistic at worst.
+
+One of the best ways of changing your approach to requirements is to make features earn their way in. Is the feature something your customers are clamoring for? Or something you yourself desperately need? A lot of things that teams think are requirements before launch turn out to be entirely unimportant post-launch, once customers start making requests. If your customers haven't asked for something, there's a good chance it's not a requirement. It might still be a cool new feature or something that's nice to have, but it's not a requirement.
+
+The whole approach of viewing things as “mandatory” for success is often plain wrong. Many of a project's "requirements" turn out to be incorrect assumptions. The sooner you let go of those assumptions, the sooner your team can get that burden off their shoulders and get back to work on the things that really are important to your customers.
+
diff --git a/sifterapp/streamlining-issue-benefits.txt b/sifterapp/streamlining-issue-benefits.txt
new file mode 100644
index 0000000..969ed8b
--- /dev/null
+++ b/sifterapp/streamlining-issue-benefits.txt
@@ -0,0 +1,19 @@
+When it comes to workflows, complexity is bad. We've written previously about how we've streamlined the issue workflow by creating simple forms and eliminating confusing, distracting elements you don't need.
+
+The purpose of any filing system is to translate noise into signal. Perhaps surprisingly, one of the best ways to do that is to keep things simple.
+
+The simpler the workflow, the easier it is for work to flow through it. A simple workflow means there are fewer "buckets" for issues to get lost in. Eliminating extra steps translates into less ambiguity in your workflow. For example, by keeping statuses limited to open and closed, we eliminate the need to file and re-file issues as work progresses. Fewer steps in the workflow means less work.
+
+It also means there are fewer places to file issues, which means fewer places to lose them. You want to *track* issues after all, not just file them away in some chaotic system where they get lost amidst an overwhelming sea of issues.
+
+A simpler workflow can sometimes feel inflexible, though. The set-in-stone rules don't allow individuals to make decisions on a case-by-case basis. And that's the point. You don't want to make decisions on a case-by-case basis; you want the system to work for every case. The solution, then, is not to change the system, but to help your team understand the process and use the guidelines you've established to work within the system.
+
+tk example of how "guidelines restore flexibility"
+
+Stick to your system and you'll eliminate two of the biggest productivity roadblocks -- uncertainty and doubt. To stick with the statuses example, simply using open and closed means no one ever has to worry about the new issue that Jim assigned "super extra high priority" status, because that never happened. Priority is established during regular reviews, not assigned through a status message.
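+
+To make the two-status model concrete, here's a minimal sketch in Python -- a hypothetical data model for illustration, not Sifter's actual implementation:
+
+    from dataclasses import dataclass
+    from enum import Enum
+
+    class Status(Enum):
+        OPEN = "open"
+        CLOSED = "closed"   # no third, fourth, or "super extra high" state
+
+    @dataclass
+    class Issue:
+        title: str
+        priority: int = 2              # 1 = high, 3 = low
+        status: Status = Status.OPEN
+
+    def weekly_review(issues, new_priorities):
+        # Priority changes happen here, in one sitting, during the
+        # regular review -- not through ad hoc statuses.
+        for issue in issues:
+            if issue.title in new_priorities:
+                issue.priority = new_priorities[issue.title]
+
+    bugs = [Issue("Crash on login"), Issue("Misaligned footer")]
+    weekly_review(bugs, {"Crash on login": 1})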
+
+tk another example of benefits
+
+Filing bugs and creating new issues might make you feel like something is getting done -- and it is -- but let's not lose sight of the endgame: fixing those issues and shipping better software.
+
+When it comes to issues in software development, that means capturing all the noise and using regular triage to file it into a simple, discrete, and, most importantly, fixed number of "buckets". From there your team can prioritize and get to work.
diff --git a/sifterapp/zapier-announcement.txt b/sifterapp/zapier-announcement.txt
new file mode 100644
index 0000000..878ce14
--- /dev/null
+++ b/sifterapp/zapier-announcement.txt
@@ -0,0 +1,31 @@
+Bugs, feature requests, and issues don't exist in a vacuum. Wouldn't it be nice if you could create issues in Sifter and have them automatically link up with the other tools your team uses, whether it's HipChat, Dropbox, Gmail, or Twitter? Well, now you can, thanks to Zapier.com.
+
+Today we're excited to announce the official Sifter integration for Zapier. Zapier, which is like a matchmaker for web services, now supports Sifter, making it easier than ever to connect Sifter with more than 300 other web services.
+
+## What is Zapier?
+
+Zapier is a simple but powerful tool: the glue that binds together different web services. Zapier’s "About" page reads: "Zapier is for busy people who know their time is better spent selling, marketing, or coding. Instead of wasting valuable time coming up with complicated systems -- you can use Zapier to automate the web services you and your team are already using on a daily basis."
+
+Zapier automates the web by giving you the power to connect web apps. When you do something in one web service, Zapier notices and pushes that information to another service according to "Triggers" and "Actions" you define. That is, the first web service "triggers" an action that does something with the data from that service.
+
+Sound vague? That's partly because the possibilities are nearly unlimited.
+
+## What can Sifter and Zapier do together for me?
+
+It's probably easiest to understand Zapier with an example.
+
+By default, Sifter uses email notifications sparingly; we don't want to clutter your inbox, so we only email you if you're directly involved with an issue. That said, you can connect your Sifter account to Zapier to route notifications of new issues to email and other places. For instance, we push a notification to our team Slack account any time someone creates a new issue. That way, everyone stays abreast of the activity without cluttering their inbox.
+
+tk screenshot
+
+Our Zapier integration can also be used in more sophisticated and powerful ways if you tap into the possibilities of advanced notifications. Getting emails about new issues can be helpful, but in some cases you might want to go even further.
+
+Let's say you want to be notified by text message every time there's a new issue in the "Security" category for your project. Using Zapier, you simply create a trigger for new issues, filter by category so that only security issues come through, connect that to your mobile number, and bam! Instant SMS updates whenever someone opens a new security issue on your project.
+
+tk screenshot
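+
+If you prefer to think of it in code, the logic of that zap boils down to something like this sketch -- hypothetical field names and a plain Python stand-in, not Zapier's actual API:
+
+    # A sample payload stands in for what the trigger would deliver;
+    # "title" and "category" are invented field names for illustration.
+    def should_notify(issue):
+        # Filter: only issues in the Security category pass through.
+        return issue.get("category") == "Security"
+
+    def on_new_issue(issue, send_sms):
+        # Trigger: a new issue arrives. Action: send the SMS.
+        if should_notify(issue):
+            send_sms(f"New security issue: {issue['title']}")
+
+    # Using print as the "send SMS" action for demonstration.
+    on_new_issue({"title": "XSS in comments", "category": "Security"}, print)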
+
+## Getting Started
+
+If you don't have one yet, go create a Zapier account. Once you've got that set up, check out the [Sifter page](https://zapier.com/zapbook/sifter/) and start connecting your issues to the other services you use. Found a cool way to automate the tedious tasks in your workflow? Be sure to let us know.
+
+
diff --git a/wheelsinvoice121511.pdf b/wheelsinvoice121511.pdf
new file mode 100644
index 0000000..c48e26b
--- /dev/null
+++ b/wheelsinvoice121511.pdf
Binary files differ
diff --git a/wheelsupdateinvoice.pdf b/wheelsupdateinvoice.pdf
new file mode 100644
index 0000000..fb08ef8
--- /dev/null
+++ b/wheelsupdateinvoice.pdf
Binary files differ