Diffstat (limited to 'sifterapp')
-rw-r--r--  sifterapp/Diagram.jpg  bin 0 -> 239664 bytes
-rw-r--r--  sifterapp/choosing-bug-tracker.txt  16
-rw-r--r--  sifterapp/complete/browserstack-images.zip  bin 0 -> 865862 bytes
-rw-r--r--  sifterapp/complete/browserstack-images/browserstack01.jpg  bin 0 -> 282678 bytes
-rw-r--r--  sifterapp/complete/browserstack-images/browserstack02.jpg  bin 0 -> 206844 bytes
-rw-r--r--  sifterapp/complete/browserstack-images/browserstack03.jpg  bin 0 -> 28982 bytes
-rw-r--r--  sifterapp/complete/browserstack-images/browserstack04.jpg  bin 0 -> 189414 bytes
-rw-r--r--  sifterapp/complete/browserstack-images/browserstack05.jpg  bin 0 -> 216895 bytes
-rw-r--r--  sifterapp/complete/browserstack-images/browserstack06.jpg  bin 0 -> 121851 bytes
-rw-r--r--  sifterapp/complete/browserstack.txt  90
-rw-r--r--  sifterapp/complete/bugs-issues-notes.txt  18
-rw-r--r--  sifterapp/complete/bugs-issues.txt  52
-rw-r--r--  sifterapp/complete/forcing-responsibility-software.txt  29
-rw-r--r--  sifterapp/complete/how-to-respond-to-bug-reports.txt  29
-rw-r--r--  sifterapp/complete/issue-tracking-challenges.txt  63
-rw-r--r--  sifterapp/complete/private-issues.txt  28
-rw-r--r--  sifterapp/complete/sifter-pagespeed-after.png  bin 0 -> 553545 bytes
-rw-r--r--  sifterapp/complete/sifter-pagespeed-before.png  bin 0 -> 599783 bytes
-rw-r--r--  sifterapp/complete/states-vs-resolutions.txt  22
-rw-r--r--  sifterapp/complete/streamlining-issue-creation.txt  55
-rw-r--r--  sifterapp/complete/triaging.txt  62
-rw-r--r--  sifterapp/complete/webpagetest-notes.txt  35
-rw-r--r--  sifterapp/complete/webpagetestp1.txt  79
-rw-r--r--  sifterapp/complete/webpagetestp2.txt  85
-rw-r--r--  sifterapp/complete/yosemite-mail.txt  14
-rw-r--r--  sifterapp/ideal-sifter-workflow.txt  31
-rw-r--r--  sifterapp/invoices/scott_gilbertson_invoice_01.rtf  62
-rw-r--r--  sifterapp/invoices/scott_gilbertson_invoice_02.rtf  62
-rw-r--r--  sifterapp/invoices/scott_gilbertson_invoice_03.rtf  50
-rw-r--r--  sifterapp/invoices/scott_gilbertson_invoice_04.rtf  77
-rw-r--r--  sifterapp/invoices/scott_gilbertson_invoice_05.rtf  55
-rw-r--r--  sifterapp/invoices/scott_gilbertson_invoice_06.rtf  55
-rw-r--r--  sifterapp/invoices/scott_gilbertson_invoice_07.rtf  55
-rw-r--r--  sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.35 AM.png  bin 0 -> 1262819 bytes
-rw-r--r--  sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.39 AM.png  bin 0 -> 591440 bytes
-rw-r--r--  sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.45 AM.png  bin 0 -> 715947 bytes
-rw-r--r--  sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.49 AM.png  bin 0 -> 439351 bytes
-rw-r--r--  sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.52 AM.png  bin 0 -> 450804 bytes
-rw-r--r--  sifterapp/new-article.txt  62
-rw-r--r--  sifterapp/org-chaos.txt  33
-rw-r--r--  sifterapp/requiresments-vs-assumptions.txt  28
-rw-r--r--  sifterapp/streamlining-issue-benefits.txt  19
-rw-r--r--  sifterapp/zapier-announcement.txt  31
43 files changed, 1297 insertions, 0 deletions
diff --git a/sifterapp/Diagram.jpg b/sifterapp/Diagram.jpg
new file mode 100644
index 0000000..efb8f1e
--- /dev/null
+++ b/sifterapp/Diagram.jpg
Binary files differ
diff --git a/sifterapp/choosing-bug-tracker.txt b/sifterapp/choosing-bug-tracker.txt
new file mode 100644
index 0000000..7837b65
--- /dev/null
+++ b/sifterapp/choosing-bug-tracker.txt
@@ -0,0 +1,16 @@
+
+> I just had another idea inspired by this. “Why is it so difficult for your team to settle on a bug tracker?”
+>
+> The premise is the Goldilocks story. This bug tracker is too simple. This bug tracker is too complicated. This bug tracker is just right.
+>
+> Effectively, with bug and issue trackers, there’s a spectrum. On one end, you have todo lists/spreadsheets. Then you have things like GitHub Issues/Sifter/Trello somewhere in the middle, and then Jira/Fogbugz/etc. all of the way to the other end.
+>
+> Each of these tools makes tradeoffs. With Sifter, we keep it simple so small teams can actually have bug tracking because for them, they’d just as soon have no bug tracking as use Jira. It’s way too complicated for them.
+>
+> Other teams doing really complex things find Sifter laughably simple. Their complexity needs justify the investment in training and excluding non-technical users.
+>
+> This is all complicated by the fact that teams inevitably want ease-of-use and powerful customization, which, while not mutually exclusive, are generally at odds with each other.
+>
+> To make matters worse, on a larger team, some portion of the team values simplicity more while other parts of the team value advanced configuration and control. Neither group is wrong, but it is incredibly difficult to find a tool with the balance that’s right for a team.
+>
+> I could probably expand on this more, but that should be enough to see if you think it’s a good topic or not.
diff --git a/sifterapp/complete/browserstack-images.zip b/sifterapp/complete/browserstack-images.zip
new file mode 100644
index 0000000..5aa9646
--- /dev/null
+++ b/sifterapp/complete/browserstack-images.zip
Binary files differ
diff --git a/sifterapp/complete/browserstack-images/browserstack01.jpg b/sifterapp/complete/browserstack-images/browserstack01.jpg
new file mode 100644
index 0000000..85da575
--- /dev/null
+++ b/sifterapp/complete/browserstack-images/browserstack01.jpg
Binary files differ
diff --git a/sifterapp/complete/browserstack-images/browserstack02.jpg b/sifterapp/complete/browserstack-images/browserstack02.jpg
new file mode 100644
index 0000000..d53c254
--- /dev/null
+++ b/sifterapp/complete/browserstack-images/browserstack02.jpg
Binary files differ
diff --git a/sifterapp/complete/browserstack-images/browserstack03.jpg b/sifterapp/complete/browserstack-images/browserstack03.jpg
new file mode 100644
index 0000000..ccdaf46
--- /dev/null
+++ b/sifterapp/complete/browserstack-images/browserstack03.jpg
Binary files differ
diff --git a/sifterapp/complete/browserstack-images/browserstack04.jpg b/sifterapp/complete/browserstack-images/browserstack04.jpg
new file mode 100644
index 0000000..d32ffc9
--- /dev/null
+++ b/sifterapp/complete/browserstack-images/browserstack04.jpg
Binary files differ
diff --git a/sifterapp/complete/browserstack-images/browserstack05.jpg b/sifterapp/complete/browserstack-images/browserstack05.jpg
new file mode 100644
index 0000000..ebe3b2d
--- /dev/null
+++ b/sifterapp/complete/browserstack-images/browserstack05.jpg
Binary files differ
diff --git a/sifterapp/complete/browserstack-images/browserstack06.jpg b/sifterapp/complete/browserstack-images/browserstack06.jpg
new file mode 100644
index 0000000..c11505f
--- /dev/null
+++ b/sifterapp/complete/browserstack-images/browserstack06.jpg
Binary files differ
diff --git a/sifterapp/complete/browserstack.txt b/sifterapp/complete/browserstack.txt
new file mode 100644
index 0000000..e23e224
--- /dev/null
+++ b/sifterapp/complete/browserstack.txt
@@ -0,0 +1,90 @@
+Testing websites across today's myriad browsers and devices can be overwhelming.
+
+There are roughly [18,700 unique Android devices](http://opensignal.com/reports/2014/android-fragmentation/) on the market. There are somewhere around 14 browsers, powered by several different rendering engines, available for each of those devices. Then there's iOS, Windows Mobile and scores of other platforms to consider.
+
+Trying to test everything is impossible at this point.
+
+The good news is that you don't need to worry about every device on the web. Don't make testing harder than it needs to be.
+
+Don't test more, test smarter.
+
+Before you dive into testing, consult your analytics and narrow the field based on what you know about your traffic. Find out which devices and browsers the majority of your visitors are using. Test on the platform/browsers that make up the top 80 percent of your visitors. Once you're confident you've got a great experience for the majority of your visitors you can move on to the more obscure cases.
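+
+To make that 80 percent cut concrete, here's a quick sketch of the arithmetic: given visit counts per platform/browser combination (the names and numbers below are made up), keep the biggest segments until you've covered your target share of traffic.
+
+```ts
+interface Segment {
+  platform: string; // e.g. "iOS 8 / Safari"
+  visits: number;
+}
+
+// Return the smallest set of segments that covers the target share of traffic.
+function testingTargets(segments: Segment[], targetShare = 0.8): Segment[] {
+  const total = segments.reduce((sum, s) => sum + s.visits, 0);
+  const sorted = [...segments].sort((a, b) => b.visits - a.visits);
+  const targets: Segment[] = [];
+  let covered = 0;
+  for (const segment of sorted) {
+    if (covered / total >= targetShare) break;
+    targets.push(segment);
+    covered += segment.visits;
+  }
+  return targets;
+}
+
+// Hypothetical analytics export:
+console.log(testingTargets([
+  { platform: "Windows 7 / Chrome", visits: 4200 },
+  { platform: "iOS 8 / Safari", visits: 3100 },
+  { platform: "Windows 7 / IE 10", visits: 1400 },
+  { platform: "Android 4.4 / Chrome", visits: 900 },
+  { platform: "OS X / Firefox", visits: 400 },
+]));
+```
+
+Run against a real analytics export, that handful of rows is your primary test matrix; everything else can wait for a second pass.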
+
+If you're launching something new you'll need to do some research about which devices your target audience favors. That way you'll know, for example, that your target market tends to favor iOS devices and you'll want to spend some extra time testing on various iPhone/iPad configurations.
+
+Once you know what your visitors are actually using, you can start testing smarter.
+
+Let's say you know that the majority of your visitors come from 8 different device/browser configurations. You also know from studying the trends in your analytics that you're making headway in some new markets overseas and traffic from Opera Mini is on the rise, so you want to pay special attention to Opera Mini.
+
+Armed with information like that, you're ready to start testing. Old-school developers might fire up the virtual machines at this point. There's nothing wrong with that, but these days there are better tools for testing your site.
+
+### Introducing BrowserStack
+
+[BrowserStack](http://www.browserstack.com/) is a browser-based virtualization service that puts nearly every operating system and browser combination under the sun at your fingertips.
+
+BrowserStack also offers mobile emulators for testing almost every version of iOS, Android and Opera Mobile (sadly, at the time of writing, there are no Windows Mobile emulators available on BrowserStack).
+
+You can also choose the screen resolution you'd like to test at, which makes it possible to test resolution-based CSS media queries if you're using them. BrowserStack also has a dedicated [Responsive Design testing service](http://www.browserstack.com/responsive).
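+
+If you want a quick sanity check on which of your media queries actually apply at a given virtual resolution, the browser's standard `matchMedia` API can report that from the developer console. A minimal sketch -- the breakpoint values here are placeholders for your own stylesheet's, not anything BrowserStack-specific:
+
+```ts
+// Log which example breakpoints match at the current window size.
+const breakpoints: Record<string, string> = {
+  phone: "(max-width: 600px)",
+  tablet: "(min-width: 601px) and (max-width: 1024px)",
+  desktop: "(min-width: 1025px)",
+};
+
+for (const [name, query] of Object.entries(breakpoints)) {
+  console.log(`${name}: ${window.matchMedia(query).matches}`);
+}
+```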
+
+BrowserStack is not just a screenshot service. It launches a fully interactive virtual machine in your browser window. That means you can interact with your site just like you would if you were using a "real" virtual machine or had the actual device in your hand. It also means you can use the virtual browser's developer tools to debug any problems you encounter.
+
+### Getting Started With BrowserStack
+
+Getting started with BrowserStack is simple: sign up for the service and log in. A free trial account will get you 30 minutes of virtual machine time. Pick an OS/browser combination you want to test, enter your URL and start up your virtual machine.
+
+<figure>
+ <img src="browserstack01.jpg" alt="Configuring BrowserStack Virtual Machine">
+ <figcaption><b>1</b> Configuring your virtual machine.</figcaption>
+</figure>
+
+BrowserStack will then launch a real virtual machine in your browser window -- in this case, IE 10 on Windows 7.
+
+<figure>
+ <img src="browserstack02.jpg" alt="BrowserStack Virtual Machine">
+ <figcaption><b>2</b> Testing sifterapp.com using IE 10 on Windows 7.</figcaption>
+</figure>
+
+Quick tip: to grab a screenshot of a bug to share with your developers, just click the little gear icon, which will reveal a camera icon.
+
+<figure>
+ <img src="browserstack03.jpg" alt="Taking a screenshot in BrowserStack">
+ <figcaption><b>3</b> Taking a screenshot in BrowserStack.</figcaption>
+</figure>
+
+Click the camera and BrowserStack will generate a screenshot that you can annotate and share with your team. You could, for example, download it and add it to the relevant issue in your bug tracker.
+
+<figure>
+ <img src="browserstack04.jpg" alt="Screenshot annoations in BrowserStack">
+ <figcaption><b>4</b> Annotating screenshots in BrowserStack.</figcaption>
+</figure>
+
+### Local Testing with BrowserStack
+
+If you're building a brand new site or app, chances are you'll want to do your testing before everything is public. If you have a staging server you could point BrowserStack to that URL, but there's another very handy option -- just point BrowserStack to local files on your computer.
+
+To do this, BrowserStack needs to install a browser plugin, but once that's in place, testing a local site is no more difficult than testing any other URL.
+
+Start by clicking the "Start local testing" button in the sidebar on the left side of the screen. This will present you with a choice to use either a local server or a local folder.
+
+<figure>
+ <img src="browserstack05.jpg" alt="Setting up local testing in BrowserStack">
+ <figcaption><b>5</b> Setting up local testing in BrowserStack.</figcaption>
+</figure>
+
+If you've got a dynamic app, pick the local server option and point BrowserStack to your local URL. Alternatively, just point BrowserStack to a folder of files and it will serve them up for you.
+
+<figure>
+ <img src="browserstack06.jpg" alt="Testing a local folder in BrowserStack">
+ <figcaption><b>6</b> Testing a local folder of files with BrowserStack.</figcaption>
+</figure>
+
+That's it! Now you can edit files locally, make your changes and refresh BrowserStack's virtual machine to test across platforms without ever making your site public.
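+
+If you don't already have something serving your files, any bare-bones static server will do for the local server option. A minimal Node sketch in TypeScript -- the port and file-type list are arbitrary, and off-the-shelf tools like `python -m http.server` work just as well:
+
+```ts
+// serve.ts -- a bare-bones static file server for local testing.
+// No path sanitization or caching: local use only.
+// Run with: npx tsx serve.ts
+import { createServer } from "node:http";
+import { readFile } from "node:fs/promises";
+import { extname, join } from "node:path";
+
+const root = process.cwd(); // serve files from the current folder
+const types: Record<string, string> = {
+  ".html": "text/html",
+  ".css": "text/css",
+  ".js": "text/javascript",
+  ".jpg": "image/jpeg",
+  ".png": "image/png",
+};
+
+createServer(async (req, res) => {
+  const url = req.url ?? "/";
+  const path = join(root, url === "/" ? "index.html" : url);
+  try {
+    const body = await readFile(path);
+    res.writeHead(200, { "Content-Type": types[extname(path)] ?? "application/octet-stream" });
+    res.end(body);
+  } catch {
+    res.writeHead(404);
+    res.end("Not found");
+  }
+}).listen(8080, () => console.log(`Serving ${root} at http://localhost:8080`));
+```
+
+Point BrowserStack's local testing at http://localhost:8080 and you're testing the same files you're editing.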
+
+### Beyond the Basics
+
+Once you start using BrowserStack you'll wonder how you ever did without it.
+
+There's also far more to BrowserStack than can be covered in a short review like this, including [automated functional testing](https://www.browserstack.com/automate), a responsive design testing service that can show your site on multiple devices and browsers, an [automated screenshot service](http://www.browserstack.com/screenshots) and more. You can even [integrate it with Visual Studio](http://www.hanselman.com/blog/CrossBrowserDebuggingIntegratedIntoVisualStudioWithBrowserStack.aspx).
+
+BrowserStack offers a free trial with 30 minutes of virtual machine time, which you can use for testing. If you decide it's right for you there are a variety of reasonably priced plans starting at $49/month.
diff --git a/sifterapp/complete/bugs-issues-notes.txt b/sifterapp/complete/bugs-issues-notes.txt
new file mode 100644
index 0000000..120f2f4
--- /dev/null
+++ b/sifterapp/complete/bugs-issues-notes.txt
@@ -0,0 +1,18 @@
+What a task "is"
+Thus,
+
+Best-guess order of magnitude. 1 minute. 1 hour. 1 day. 1 week. 1 month. (Anything longer than
+a month, and the task hasn't been sufficiently broken down into smaller
+pieces.)
+
+
+
+> More often than not, the goal of classifying things along those lines isn't
+> about *what* the individual issue is, but whether it's in or out of
+> scope. Whether something is in out of scope is a worthwhile facet for
+> classification, but using bug vs. feature as a proxy for that confuses the
+> issue. More often than not, in or out of scope is best handled through
+> discussion, not simply reclassifying the issue. When that happens,
+
+ Think "separate but equal".
+>
diff --git a/sifterapp/complete/bugs-issues.txt b/sifterapp/complete/bugs-issues.txt
new file mode 100644
index 0000000..4b31cce
--- /dev/null
+++ b/sifterapp/complete/bugs-issues.txt
@@ -0,0 +1,52 @@
+Love it. The only thing that I think could be worked in is mentioning the "separate is inherently unequal" bit. The article does a great job of explaining how it's not necessarily helpful, but I think it could go even further illustrating that it's even *potentially* harmful to create that dichotomy.
+
+
+
+One of the hardest things you ever have to do is figure out what to do next. There's shelf after shelf of books in the self-help section dedicated to helping you discover the answer to that question.
+
+We can't help you with the abstract version of that question, but it isn't just the abstract question that's hard. All of the software development teams we've talked to struggle with a version of this same question.
+
+Knowing which work to do next is the most difficult problem out there.
+
+Every development team's wish list is incredibly long. Every time you sit down at your screen there's a dizzying array of choices. This is part of what makes software development exciting, but it can also be overwhelming. What should you do next? Fix bugs? Ship new features? Improve security or performance? There's no perfect science for calculating which of these things is most important.
+
+One thing we can tell you won't help -- keeping track of everything in separate systems only makes the decision process more challenging.
+
+This can be counter-intuitive at first. For example, you're probably used to tools and systems that have some things you call "bugs", quantifiable shortcomings in software, some you call "issues", potential problems that aren't directly related to code, for example pricing decisions, and other things you call "features" or "enhancements" for ideas that haven't been implemented yet.
+
+Organizing tasks by category like this offers a comforting way to break things down. It makes us feel like we've done something, even when we haven't. All we've really done is rename the problem. We still don't have any better idea of what to do next.
+
+If you've worked with such a system for long you've probably seen it fall apart. The boundaries between these things -- bugs, issues, new features -- are actually quite vague and trying to make your tasks fit into an arbitrary category rarely helps you figure out what to work on next. Worse, it forces you to classify everything into one of those categories even when the actual problem might be too complex to fit just one.
+
+We've found that it can be even worse than just "not helpful": divide your work into categories like this and some tasks will automatically become second-class citizens. There is no such thing as separate but equal; separate is inherently unequal. In this case, bugs and issues take a backseat to new features.
+
+It's time to take a different approach.
+
+We've found that the best technique for deciding what you should do next is not to classify what you need to do. That's a subtle form of procrastination.
+
+To actually decide between possibilities you need to figure out the priority of the task. To determine priority you need to look at all your possible next tasks and balance two factors: the positive impact on your customers and the effort it will take to complete that task. Priority isn't much help without considering both impact and effort.
+
+Establish a priority hierarchy for your tasks and you'll never wonder what you should do next again. You'll know.
+
+Sometimes finding that balance between impact on the customer and effort expended is easy. A bug that affects 20 percent of customers and takes 5 minutes to fix is a no-brainer -- you fix it, test and ship.
+
+Unfortunately, prioritizing your tasks will rarely be this black and white.
+
+What to do with a bug that only affects one customer (that you know of), but would take a full day to fix, is less immediately obvious.
+
+What if that customer is your biggest customer? What if that customer is a small, but very influential one? What if your next big feature could exacerbate the impact of that bug? What if your next big feature will eliminate the bug?
+
+There's no formula to definitively know which task will be the best use of your time. If the bugs are minor, but the next feature could help your customers be more successful by an order of magnitude, your customers might be more than willing to look the other way on a few bugs.
+
+There's only really one thing that's more or less the same with every decision: What a task is (bug, issue, feature) is much less important than the priority you assign it.
+
+The classification that matters is "what percentage of customers does this affect?" and "how long will it take to do this task?"
+
+That does not mean you should dump all your categories. For instance, if you group tasks based on modules like "Login", "Filtering" or "Search", that grouping helps you find related tasks when you sit down to work on a given area. In that case categories are useful because they help your team focus.
+
+Some categories are useful, but whether something is a bug, issue or feature should have almost no bearing on whether the task in question is truly important.
+
+A bug might be classified as a bug, but it also means that a given feature is incomplete because it's not working correctly. It's a gray area where "bug" vs. "feature" doesn't help teams make decisions; it only lets us feel good about organizing issues. It's the path of least resistance and one that many teams choose, but it doesn't really help get work done.
+
+If you want to get real work done, focus your efforts on determining priority. Decide where the biggest wins are for your team by looking at the impact on your customers versus the time required to complete the task. There's no formula here, but prioritize your tasks rather than just organizing them and you'll never need to wonder what to do next.
diff --git a/sifterapp/complete/forcing-responsibility-software.txt b/sifterapp/complete/forcing-responsibility-software.txt
new file mode 100644
index 0000000..611d602
--- /dev/null
+++ b/sifterapp/complete/forcing-responsibility-software.txt
@@ -0,0 +1,29 @@
+Stop Forcing Responsibility onto Software
+
+There's a common belief among programmers that automating tedious tasks is exactly the reason software was invented. The idea is that software can save us from all this drudgery by making simple decisions for us and removing the tedious things from our lives.
+
+This is often true. Think of all the automation in your life, from your thermostat to your automobile's service engine light, software *does* remove a tremendous number of tedious tasks from our lives.
+
+Perhaps the best example of this is the auto-save feature that runs in the background of most applications these days. Auto-save frees you from the tedious task of saving your document. You no longer need to pound CTRL-S every minute or two like an animal. Instead, your lovely TPS reports are automatically saved as you change them and you don't have to worry about it.
+
+Unfortunately, when you have a hammer as powerful as software, everything starts to look like a nail. Which is to say that just because a task is tedious does not mean it can be offloaded to software.
+
+It's just as important to think about whether the task is something that software *can* be good at. For example, while auto-saving your TPS reports is definitely something software can be good at, actually writing the reports is probably something humans are better at.
+
+This temptation to automate away the difficult, sometimes tedious, tasks in our lives is particularly strong when it comes to prioritizing issues in your issue tracking software.
+
+Software is good at tracking issues, but sadly, most of the time software turns out to be terrible at prioritizing them. To understand why, consider the varying factors that go into prioritizing issues.
+
+At a minimum, prioritizing means weighing such disparate factors as resource availability, other potential blockers and dependencies, customer impact, level of effort, date/calendar limitations, and more. We often think that by plugging all of this information into software, we can automatically determine a priority, but that's just not the case.
+
+Plugging all that information into the software helps collate it all in one place where it's easy to see, but when it comes to actually making decisions about which issue to tackle next, a human is far more likely to make good decisions. Software helps you make more informed choices, but good decisions still require human understanding.
+
+When all those often conflicting factors surrounding prioritization are thrown together as a series of data points, which software is supposed to then parse and understand, what you'll most likely get back from your software is exactly what you've entered -- conflicts.
+
+It might not be the most exciting task in your day, but prioritizing issues is a management task, that is, it requires your management. You need to use intuition and understanding to make decisions based on what's most important *in this case* and assign a single simple priority accordingly.
+
+Consider two open issues you need to make a decision on. The first impacts ten customers. The second only impacts one customer directly, but indirectly impacts a feature that could help 1,000 customers. So to what degree is the second issue actually impacting customers? And which should you focus on? Algorithms designed to prioritize customer impact will pick the first, but is that really the right choice?
+
+These questions aren't black and white, and it's difficult for a software system to accurately classify/quantify them and take every possible variable into account.
+
+Perhaps in the AI-driven quantum computing future this will be something well-suited for software. In the meantime, though, tedious or not, human beings still make the best decisions about which issues should be a priority and what your team should tackle next.
diff --git a/sifterapp/complete/how-to-respond-to-bug-reports.txt b/sifterapp/complete/how-to-respond-to-bug-reports.txt
new file mode 100644
index 0000000..617e1ca
--- /dev/null
+++ b/sifterapp/complete/how-to-respond-to-bug-reports.txt
@@ -0,0 +1,29 @@
+If you look at the bug reports on many big open source software projects, it's almost as if the developers have a bug report Magic 8 Ball. Reports come in, developers give the ball a shake, and out pops the response. You'll see the same four or five terse answers over and over again: "working as designed", "won't fix", "need more info", "can't reproduce" and so on.
+
+At the same time, large software projects often have very detailed guidelines on how to *report* bugs. Developers know that the average user doesn't think like a developer, so they create guidelines, checklists and other tips designed to make their lives easier.
+
+The one thing you almost never see is a set of guidelines for *responding* to bug reports. That side of the equation gets almost no attention at all; I've never seen an open source project with an explicit guide for developers on how to respond to bugs.
+
+If such a guide existed, projects would not be littered with Magic 8 Ball-style messages that not only discourage outsiders from collaborating, but showcase how out of touch the developers are with the users of their software.
+
+It's time to throw away the Magic 8 Ball of bug reports and get serious about improving your software.
+
+## Simple Rules for Responding to Bug Reports
+
+1. **Don't take bug reports personally**. The reporters are trying to help. They may be frustrated, they may even be rude, but remember they're upset and frustrated with a bug, not you. Now they may not phrase it that way; they may think they're upset with you. Part of your job as a developer is to negotiate that social gap between angry users and yourself. The first step is to stop taking bug reports personally. You are not your software.
+
+2. **Be specific in your responses**. Magic 8 Ball responses like "can't reproduce" and "need more info" aren't just rude, they're failures to communicate. Which is to say that both may be true in many cases, but neither is helpful for the bug reporter. The bug reporter may not be providing helpful info, but in dropping in these one-liners you're not being helpful either.
+
+In the case of "need more info", take a few seconds to ask for what you actually need. Need to know the OS or browser version? Then ask for that. If you "can't reproduce", tell the user in detail what you did, what platform or browser you were using and any other specifics that might help them see what's different in their case. Be specific, ask "What browser were you using?" or "Can you send a screenshot of that page or copy and paste the URL so that I can see what you're seeing?" instead of "Need more info".
+
+3. **Be collaborative**. This is related to point one, but remember that the reporter is trying to help and the best way to let them help you is to, well, let them help you. Let them collaborate and be part of the process. If your project is open source remember that some of your best contributors will start off as prolific bug reporters. The more you make them part of the process the more helpful they'll become over time.
+
+4. **Avoid technical jargon**. For example, instead of "URL" say "the address from the web browser address bar". This can be tricky sometimes since what you think of as everyday speech may read like technical jargon to your users. When in doubt err on the side of simple, direct language.
+
+Along the same lines, don't assume too much technical knowledge on the part of bug reporters. If you're going to need a log file, be sure to tell the user exactly how and where to find the information you need. Don't just say, "what's the output of `tail -f /var/log/syslog`?" Tell them where their terminal application is, how to open it, and how to copy and paste the command and its results. A little bit of hand holding goes a long way.
+
+5. **Be patient**. Don't dismiss reports just because they will involve more effort and research. It's often said that there is no such thing as writing, just re-writing. The same is true of software development, fixing bugs *is* software development. The time and research it takes to adequately respond to bug reports isn't taking you away from "real" development, it is the real development.
+
+6. **Help them help you**. Think of this as the one rule to rule the previous five. Bug reports are like free software development training. Just because you're the developer doesn't mean your users don't have things to teach you, provided you're open to learning. Take everything as a teaching/learning opportunity and you'll find that not only do your bug reports make your software better, they make you a better software developer.
+
+It can be hard to remember all this stuff when you have a pile of bugs you want to quickly work through. Try to resist the urge to clear all the new bug reports before lunch or otherwise rush it. Oftentimes it's more effective to invest a few extra moments collaborating with the reporter to make sure that bugs are handled well.
diff --git a/sifterapp/complete/issue-tracking-challenges.txt b/sifterapp/complete/issue-tracking-challenges.txt
new file mode 100644
index 0000000..fc5e03e
--- /dev/null
+++ b/sifterapp/complete/issue-tracking-challenges.txt
@@ -0,0 +1,63 @@
+Tracking issues isn't easy. It's tough to find them, tough to keep them updated and accurate, and tougher still to actually resolve them.
+
+There are quite a few challenges that can weigh down a good issue tracking process, but fortunately there are some simple fixes for each. Here are a few we've discovered over the years and some ways to solve them.
+
+# Lack of participation
+
+The most basic problem is getting your team to actually use the software. Participation and collaboration are the cornerstones of a good issue tracking process, and that collaboration runs smoothest when everyone is using the same system.
+
+If everyone isn't working together, the process will fall apart before you even get started.
+
+If your team is using email or littering their desks with sticky notes, that's a pretty sure sign you have a problem.
+
+Solution: Get everyone using the same system and make sure everyone is comfortable with it.
+
+# Too difficult to report issues
+
+If it's too hard to actually report an issue -- if there are too many required but irrelevant fields in your forms, if it's too hard to upload relevant files, or if it's just too difficult to log in and find the "new issue" link -- then no one will ever report issues in the first place. And unreported issues are unsolved issues.
+
+Solution: Keep your forms simple, offer drag-and-drop file uploading and, if all else fails, provide email integration.
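+
+The drag-and-drop half of that takes surprisingly little code in a modern browser. A minimal sketch using only standard DOM APIs -- the element ID and upload endpoint here are hypothetical:
+
+```ts
+// Turn an ordinary element into a drop target for issue attachments.
+const zone = document.getElementById("drop-zone")!; // hypothetical element
+
+zone.addEventListener("dragover", (event) => {
+  event.preventDefault(); // required, or the browser opens the file itself
+});
+
+zone.addEventListener("drop", (event) => {
+  event.preventDefault();
+  for (const file of Array.from(event.dataTransfer?.files ?? [])) {
+    const body = new FormData();
+    body.append("attachment", file);
+    // POST to your issue tracker's (hypothetical) upload endpoint.
+    fetch("/issues/attachments", { method: "POST", body });
+  }
+});
+```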
+
+# Too difficult to find issues
+
+If you can't find an issue, you can't fix it.
+
+Typically the inability to find what you're looking for is the result of weak or minimal searching and filtering tools in your issue tracker.
+
+Sorting and filtering need to be powerful and yet simple. Overcomplicating these tools can mean you'll accidentally filter out half of your relevant issues and not even realize it.
+
+Solution: Simplify the process of searching, filtering and finding issues.
+
+# Over-engineering the process
+
+The best way to avoid over-engineering anything is to keep things as simple as possible. For example, try to solve your problems with your existing tools before you create new tools.
+
+One example of this we've discovered is having [too many possible statuses](https://sifterapp.com/blog/2012/08/the-challenges-with-custom-statuses/) for an issue.
+
+Keeping things simple -- we have three choices, "Open", "Resolved", and "Closed" -- avoids both the paradox of choice and any overlap. If you have ten possibilities (or worse, completely custom statuses) you've added mental overhead to the process of choosing one. Keeping it simple means you don't have to waste time picking a status for a given issue.
+
+Too many statuses create crevices for issues to hide in and be forgotten when filtering issues. Overly detailed statuses can also confuse non-technical people who will wonder, "what's the difference between accepted and in progress?" Good question. Avoid it by making statuses clear and simple.
+
+There are also clear, hard lines between each of these three statuses and no questions about what they mean.
+
+A related problem, and the reason some teams will clamor for more status possibilities, is the tendency to conflate statuses with resolutions. For example, "working as designed" isn't a status; it's a resolution. Similarly, "can't reproduce" isn't a status; it's a resolution.
+
+Solution: Keep your status options simple and focus on truly different states of work with clear lines between them.
+
+
+# Over-reliance on software for process
+
+Software is a tool. Tools are wielded by people. The tool alone can only do so much. Without people to guide them even the best of tools will fail.
+
+That's why you need to make people the most important part of your issue process.
+
+Make room for the human aspects of issue tracking, like regular testing sessions, consistent iteration and release cycles, and dedicated time for fixing bugs.
+
+Solution: Invest time and effort in the human processes that will pair with and support the software.
+
+# Conclusion
+
+Tracking issues isn't always easy, but you can make it easier by simplifying.
+
+Cut out the cruft. Make sure you have good software and good processes that help your team wield that software effectively. Let the software do the things software is good at and let your team fill in the parts of the process that people are good at.
diff --git a/sifterapp/complete/private-issues.txt b/sifterapp/complete/private-issues.txt
new file mode 100644
index 0000000..c2b033d
--- /dev/null
+++ b/sifterapp/complete/private-issues.txt
@@ -0,0 +1,28 @@
+Sifter does not offer "private" issues. Here's why.
+
+In most cases the reasons teams want private issues are the very reasons private issues are problematic. There seem to be three primary reasons teams want them. The first is so that clients don't see your "mistakes" and will somehow perceive the work as higher quality. This is highly flawed thinking. The idea that presenting a nice clean front will convince the client you're some kind of flawless machine is like trying to make unicorns out of rhinos.
+
+The far more likely outcome is that clients can't see half of what you're working on and end up thinking you aren't actually working. Or they might think you're ignoring their issues (which are public). If your highest priority work is all in private issues, the client is cut off from the development process and will never get the bigger picture view. This can lead to all sorts of problems, including the client feeling like they're not in control. That's often the point at which clients will either pull the plug or want to step in and micromanage the project.
+
+What you end up doing when you use private issues this way is protecting your image at the client's expense. First and foremost, remember that the work isn't about your image; it's about the client and what they want. Assuming you're doing quality work, your image isn't going to suffer just because you're doing that work in front of the client, warts and all. Rhinos have one huge evolutionary advantage over unicorns -- they're real.
+
+Keeping issues private to protect your image ends up skewing expectations the wrong way and can easily end up doing far more damage to your reputation with the client than showing them a few bugs in the software you're developing.
+
+Another reason some teams want private issues is related, but slightly different -- they want to shield the client from the technical stuff so they don't get distracted from the larger picture.
+
+This is indeed tempting, especially with the sort of client that likes to micromanage every step of the development process whether their input is needed or not (and seemingly more often when it is not). It's tempting, when faced with this sort of client, to resort to private issues as a way of avoiding conflict, or avoiding them, period.
+
+However, as noted above, using private issues to make your development process partly or wholly opaque to the client is just as likely to make your client want to step in and micromanage as it is to prevent them from being able to do so.
+
+The problem is that you're trying to solve a human problem with software and that's never going to work. If your client is "in the way" then you need to help them understand what they're doing and how they can do it better.
+
+Even with clients that don't micromanage, it can be tempting to use private issues to shield the client from technical details you don't want to explain. But clients don't have to dig into (and aren't interested in) things not assigned to or related to them. People are great at tuning things out and they will if you give them the chance.
+
+The third use that we've seen for private issues is that they serve as a kind of internal backchannel, a place your developers can discuss things without worrying that the client is watching. This is the most potentially disastrous way to use private issues. If the history of the internet has taught us anything, it's that "private" is rarely actually private.
+
+Backchannels backfire. Private conversations end up public. Clients get to see what your developers are saying in private and the results are often ugly.
+
+Backchannels also undermine a sense of collaboration by creating an environment in which not all speech is equal. The client has no say in the backchannel and never gets a chance to contribute to that portion of the project.
+
+It’s important to involve clients in the big picture and let them get as involved as they want to be. Private issues subvert this from the outset by setting up an us vs. them mentality that percolates out into other areas of development as well. The real challenge is not keeping clients separate, but getting them involved, and setting up fences and other tactics to prevent them from being fully integrated into the project hinders collaboration and produces sub-par work.
+
diff --git a/sifterapp/complete/sifter-pagespeed-after.png b/sifterapp/complete/sifter-pagespeed-after.png
new file mode 100644
index 0000000..6c35499
--- /dev/null
+++ b/sifterapp/complete/sifter-pagespeed-after.png
Binary files differ
diff --git a/sifterapp/complete/sifter-pagespeed-before.png b/sifterapp/complete/sifter-pagespeed-before.png
new file mode 100644
index 0000000..5c36514
--- /dev/null
+++ b/sifterapp/complete/sifter-pagespeed-before.png
Binary files differ
diff --git a/sifterapp/complete/states-vs-resolutions.txt b/sifterapp/complete/states-vs-resolutions.txt
new file mode 100644
index 0000000..8652c81
--- /dev/null
+++ b/sifterapp/complete/states-vs-resolutions.txt
@@ -0,0 +1,22 @@
+We've written before about why Sifter has only [three possible statuses](https://sifterapp.com/blog/2012/08/the-challenges-with-custom-statuses/) -- Open/Reopened, Resolved and Closed. The short answer is that more than that over-complicates the issue tracking process without adding any real value.
+
+Why? Well, there are projects for which this will not be enough, but provided your project scope isn't quite as encompassing as, say, NASA's, there's a good chance these three, in conjunction with some supplementary tools, will not only be enough, but speed up your workflow and help you close issues faster.
+
+Why are custom statuses unnecessary and how does using them over-complicate things? Much of the answer lies in how your team uses status messages -- are your statuses really describing the current state of an issue or are they trying to do more?
+
+One of the big reasons that teams often want more status possibilities is that they're using status messages for far more than just setting the status of an issue. The most common example is using resolutions as status indicators -- that is, the status of the issue becomes a stand-in for its outcome.
+
+How many times have you tracked down an issue in your favorite software project only to encounter a terse resolution like "working as designed" or the dreaded "won't fix"? The problem with these statuses is that they don't describe the state the issue is in; they describe the outcome. In other words, they aren't statuses, they're resolutions.
+
+The status is not "won't fix", the status is closed. The *resolution* is that the issue won't be fixed.
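+
+In data-model terms the distinction is easy to see. A small sketch -- the field names are illustrative, not Sifter's actual schema:
+
+```ts
+// Status is a tiny, closed set describing where an issue is in its lifecycle.
+type Status = "open" | "resolved" | "closed";
+
+interface Issue {
+  subject: string;
+  status: Status;
+  // The outcome lives in its own free-text field, with room to explain *why*.
+  resolution?: string;
+}
+
+const wontFix: Issue = {
+  subject: "Export button misaligned in IE 8",
+  status: "closed",
+  resolution: "Won't fix: IE 8 usage is under 1% and the layout remains usable.",
+};
+```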
+
+Trying to convey an outcome in a status message is like trying to fit the proverbial square peg in a round hole.
+
+Worse, in this case, you're missing an opportunity to provide a true resolution. What do you learn from these status-message "resolutions"? Nothing. What does the team working on that project learn when they revisit the issue a year from now? Nothing. That's a lost opportunity.
+
+This is part of why statuses do not make good resolutions. Resolutions are generally best captured in the description where there's room to explain a bit more about why you aren't going to fix something. Take a minute to explain why you aren't going to fix something or why it's designed the way it is and your users will thank you.
+
+Perhaps even more important, your future self will thank you when the same issue comes up again and you can refer back to your quick notes in the resolution to see why things are the way they are.
+
+Provided you use status messages solely for setting the status of an issue, there's rarely a need for more statuses than those Sifter offers -- Open, Resolved and Closed.
+
diff --git a/sifterapp/complete/streamlining-issue-creation.txt b/sifterapp/complete/streamlining-issue-creation.txt
new file mode 100644
index 0000000..543e420
--- /dev/null
+++ b/sifterapp/complete/streamlining-issue-creation.txt
@@ -0,0 +1,55 @@
+No one likes filling out forms. The internet is littered with user surveys that show reducing the number of fields in a form means far more people fill it out. Hubspot rather famously found that dropping just one field from forms [increased conversion][1] by almost 50%. People really hate forms.
+
+Who cares? Well, "people" includes you and your team. And "forms" include those you use to file issues and bugs in the software you're developing.
+
+Want to get more issues filed? Create simpler forms.
+
+At the same time you do need to capture certain bits of information. Take the typical web contact form. Dropping the email field might increase conversion rates significantly, but it means you don't have all the information you need. Forms need to be simple, not simplistic.
+
+And therein lies the rub -- which fields really need to be on your issue form?
+
+Let's break it down using a typical issue form, which might consist of a dozen or more fields. There will almost always be a field for the "status" of an item. Other options typically include fields for Resolution, Assignee, Opener, Creation Date, Due Date, Category, Type, Release, Priority, Severity, Impact, LOE (estimated), LOE (actual), Browser/OS, Relationships and possibly even more.
+
+All those fields create a huge cognitive overhead which quickly leads to "decision fatigue", a fancy name for "people have better things to do than fill out long overly detailed forms on their computers." Let's tackle these one by one.
+
+* Status -- We've [written previously][2] about how extra statuses are unnecessary. The short story is that the status is either open or closed; everything else is really a resolution. For example, the dreaded "won't fix" status is not a status. The status is closed. The *resolution* is that the issue won't be fixed.
+
+* **Resolution** -- We need a spot to record what we've done so keep this one.
+
+* Assignee -- Another necessary field, but it can be captured implicitly without adding another field to the form. So keep this one but it won't be part of the issue form.
+
+* Opener -- Again, good info to have, but not info you should need to fill in. Lose the field and capture it behind the scenes.
+
+* Creation Date -- Like Opener, this should be captured automatically when the issue is created.
+
+* Due Date -- The due date of every issue is "yesterday", there's no need to ask people to figure this out in the issue creation form. Figuring out the due date means [figuring out the priority of the issue][3] and that can't be done without an overview of the whole project. The issue creation form is the wrong place to determine priority and thus due date.
+
+* **Category** -- Categories are good: they help classify the issue. Is it a feature request, a bug, something else? Categories are helpful when trying to determine the priority of an issue as well, so let's keep this one.
+
+* Type -- The type of issue is more or less the same as the category. No need to make a decision twice; keep it simple and lose the Type field. The same is true of "Tags" or any other variation on the categories theme.
+
+* **Release** -- Soon, but not yet. This one is useful for planning.
+
+* **Priority** -- Setting the priority of the issue is important so we'll keep this one as well.
+
+* Severity -- The severity of an issue can and should be a factor in setting the priority, but it doesn't need its own field in the form. Keep severity as part of your decision making process, but don't track it separately from what it's influencing, namely, the Priority field.
+
+* Impact -- Like Severity, the impact of an issue is part of what determines the priority, but again there's no need to track it separately.
+
+* Level of Effort (estimated) -- The level of effort necessary to fix any individual issue is nearly impossible to estimate and not all that useful even if you do happen to estimate correctly. All this field does is create cognitive overhead.
+
+* Level of Effort (actual) -- Again, you're just creating overhead and getting nothing in return, lose it.
+
+* Browser/OS -- This is useful information to have, but it doesn't apply directly to the issue. This is best captured in the comments or description field.
+
+After trimming our form down to the fields we actually need, we're left with, in addition to subject and description, a Resolution field, a field for Priority, another for Category and one for Release.
+
+With just six fields, three of which don't need to be filled out when the issue is created -- Resolution, Priority, Release -- our form is considerably smaller.
+
+What we've created is a form that's simple enough you don't need to train your team on how to use it. Open up the form, create a new issue, give it a name, a brief description and a category; hit Create and you're done.
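+
+As a sketch of what that trimmed-down issue looks like as data (the names are illustrative, not any particular tracker's schema), the form collects three values and everything else is captured automatically or filled in later:
+
+```ts
+type Category = "bug" | "feature" | "question";
+
+interface Issue {
+  subject: string;
+  description: string;
+  category: Category;
+  opener: string;      // captured from the logged-in user, not the form
+  createdAt: Date;     // captured automatically
+  priority?: number;   // set later, during triage
+  release?: string;    // set later, during planning
+  resolution?: string; // set when the issue is resolved
+}
+
+// The form itself only needs the first three values.
+function createIssue(
+  subject: string,
+  description: string,
+  category: Category,
+  currentUser: string,
+): Issue {
+  return { subject, description, category, opener: currentUser, createdAt: new Date() };
+}
+
+const issue = createIssue(
+  "Search ignores archived projects",
+  "Searching from the dashboard never returns results from archived projects.",
+  "bug",
+  "scott",
+);
+```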
+
+Streamlining the process of creating issues means that the workflow is simple enough that even the "non-techie" members of your team will be able to use it. That means every person on the team has the potential to become a valuable contributor to the system.
+
+[1]: http://blog.hubspot.com/blog/tabid/6307/bid/6746/Which-Types-of-Form-Fields-Lower-Landing-Page-Conversions.aspx
+[2]: link to status vs issue piece
+[3]: link to piece on setting priority
diff --git a/sifterapp/complete/triaging.txt b/sifterapp/complete/triaging.txt
new file mode 100644
index 0000000..7012687
--- /dev/null
+++ b/sifterapp/complete/triaging.txt
@@ -0,0 +1,62 @@
+Few elements in your development process are as important as testing your code. We've found that the best way to ensure you get the most out of your testing is to set a schedule and stick to it.
+
+Every week or two, set aside time to test and to fix the problems you find. The process of testing and fixing might look something like this:
+
+1. Code Freeze. Everybody stops coding and prepares for testing. Everyone.
+2. Everyone tests. Preferably not their own modules.
+3. Triage. This is where most testing strategies break down. Issues don't organize themselves and many teams don't take the time to organize them.
+4. Fix. Testing is useless without time dedicated to fixing.
+
+Let's focus on an oft-overlooked part of the testing process -- triaging.
+
+## What is it?
+
+Triage means prioritizing. The word comes from the medical world, where it refers to the process of prioritizing patients based on the severity of their problems.
+
+When resources are limited -- and resources are almost always limited -- triaging is a way of helping as many people as possible and in the right order. In the ER, for example, that means treating the person having a heart attack before moving to the person with a broken toe.
+
+In application development triaging means sorting through all the issues you discover in testing and organizing the resulting work based on priority.
+
+Sometimes priority is obvious; sometimes it's not. That's why triaging takes time and usually involves a project manager and key stakeholder or client.
+
+## Why do it?
+
+Testing creates a lot of work because it’s designed to find as many problems as possible. To stick with the medical analogy, testing is what brings your patients to the door.
+
+The result of a good testing phase will be an abundance of work to do, which can be overwhelming. Fail to properly triage your results and you won't know what to do or where to start; you'll simply be drowning in a sea of bugs and issues.
+
+Triaging is also a way to build consensus and agreement on the priority and organization of work.
+
+## How does Triaging Work?
+
+Successful triaging depends on a consistent process. In the ER there is an intake process which assesses the severity of the problem and then sets the priority accordingly.
+
+The process is more or less the same in software development.
+
+The first step is setting the priority for each issue you discovered during the actual testing. Figuring out the priority of an issue is the most complex problem in the process, but you'll be more successful at this the more you can involve the client or primary stakeholder.
+
+Part of what makes prioritization difficult is determining scope. Many "bugs" will actually be enhancements. What qualifies as a bug and what's an enhancement will vary by project and client, which is why it's key to have the client involved in the decision process.
+
+Bringing the client into the triage process helps ensure that your prioritizing matches the client's expectations. By setting priorities with the client, disagreements can be caught before they become problems down the road.
+
+Another key part of setting priorities is acknowledging the trade-offs involved. Failure to take into account, for instance, the time needed to fix something will turn your priorities into wishful thinking rather than accurate and manageable tasks for your team. Wishful thinking does not get things done; realistic, well-understood expectations and discrete lists get things done.
+
+Once you have the priorities established and you know what you want to work on next you can move on to de-duplication. The DRY principle doesn't apply just to writing code. Be diligent when listing issues and make sure you don't have two issues noting the same problem. Before you assign any new bugs you've prioritized, make sure to de-duplicate. Often this has the additional advantage of exposing an underlying problem behind several related bugs. If you routinely have bugs in one module of code, this could be a sign that the whole module needs to be rewritten rather than just patching the latest round of related bugs.
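+
+Exact duplicates are easy to spot; near-duplicates are where a cheap heuristic can surface candidates for a human to review. A toy sketch -- the word-overlap measure and the 0.6 threshold are arbitrary choices, a first pass rather than a verdict:
+
+```ts
+// Flag issue pairs whose titles share most of their words.
+function wordSet(title: string): Set<string> {
+  return new Set(title.toLowerCase().split(/\W+/).filter(Boolean));
+}
+
+function jaccard(a: Set<string>, b: Set<string>): number {
+  const intersection = [...a].filter((w) => b.has(w)).length;
+  const union = new Set([...a, ...b]).size;
+  return union === 0 ? 0 : intersection / union;
+}
+
+function likelyDuplicates(titles: string[], threshold = 0.6): [string, string][] {
+  const pairs: [string, string][] = [];
+  for (let i = 0; i < titles.length; i++) {
+    for (let j = i + 1; j < titles.length; j++) {
+      if (jaccard(wordSet(titles[i]), wordSet(titles[j])) >= threshold) {
+        pairs.push([titles[i], titles[j]]);
+      }
+    }
+  }
+  return pairs;
+}
+
+console.log(likelyDuplicates([
+  "Login fails with expired password",
+  "Login with expired password fails",
+  "Search results missing archived projects",
+]));
+```
+
+Anything the heuristic flags still gets a human look before you merge or close it.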
+
+The final step in the triage process is assigning the work that needs to be done. Work may be assigned directly to individual designers and developers or passed on to team leads who can better identify who should do which aspect of the work.
+
+Mastering the triage process will make your team more efficient and productive. It will also get bug fixes and new features out to customers faster. While bugs will still be found outside of testing, triaging helps minimize the number of bugs that are mis-classified or incomplete. Triaging helps ensure your entire team knows what the problems are and when they're going to be fixed.
+
+## Weekly Stagnation Meetings
+
+The triage process isn't limited to new bugs you find in intensive testing sessions. You can also perform a second type of triage -- reviewing stagnant issues.
+
+If an issue hasn't been touched in a while, then something needs to be done.
+
+The exact definition of "a while" varies by team and project, but if an issue has been sitting longer than you think it should have, it's time to do something. Sometimes that might mean reassigning the work or making it higher priority. Sometimes the fact that it hasn't been done is telling you that it doesn't need to be done: close the issue and move on.
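+
+Finding the stagnant issues is the easy, automatable half of this. A sketch, assuming each issue records when it was last touched -- the 30-day threshold is arbitrary:
+
+```ts
+interface TrackedIssue {
+  subject: string;
+  updatedAt: Date; // last comment, status change, or edit
+}
+
+// Return every issue untouched for longer than maxIdleDays.
+function stagnantIssues(issues: TrackedIssue[], maxIdleDays = 30, now = new Date()): TrackedIssue[] {
+  const msPerDay = 24 * 60 * 60 * 1000;
+  return issues.filter(
+    (issue) => (now.getTime() - issue.updatedAt.getTime()) / msPerDay > maxIdleDays,
+  );
+}
+```
+
+Deciding what to do with each flagged issue -- reassign, reprioritize, or close -- is still the human half.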
+
+Letting stale issues pile up can have a vampiric effect on a project, sucking some of the life out of it. Just knowing those unsolved problems are out there, not being worked on, adds a level of stress to your work that you don't need. Reassess, reassign and get back to work.
+
+## Conclusion
+
+Issues don't track themselves. Having good human processes around your tools will make a world of difference in their effectiveness. And don't forget to set aside time for fixing. Testing without fixing is pointless.
diff --git a/sifterapp/complete/webpagetest-notes.txt b/sifterapp/complete/webpagetest-notes.txt
new file mode 100644
index 0000000..0dbbe05
--- /dev/null
+++ b/sifterapp/complete/webpagetest-notes.txt
@@ -0,0 +1,35 @@
+
+> Performance audit...
+>
+> 0. Scope: SifterApp.com - Basically everything on our marketing site (not
+> on a subdomain) is static content created by Jekyll and served straight
+> through Nginx.
+>
+> 1. Context: Our marketing site used to live within the application
+> codebase, and so Rails and Capistrano handled most of the asset
+> optimization and serving. Now that it's all Jekyll, we're just tossing
+> files up via Nginx with little consideration for performance. We need to
+> fix that, especially for mobile. (And eventually, I even see the picture
+> element as being part of that.)
+>
+> 2. Front-end/back-end, nothing's off limits. I expect that we'll have room
+> for improvement in both areas. Just looking at the scores from the tests
+> and a cursory glance at the resulting advice, we'll need to make some
+> changes with all of it. The big thing is that I just don't have the
+> bandwidth to research it and understand the best solutions for us.
+>
+> 3. Structure of article. I think it should focus primarily on the tools and
+> the information that they provide and only use Sifter as examples. That way
+> it's about the tools instead of Sifter. My only fear is that if we're
+> already optimized in some areas, there won't be as much to share about what
+> the tools help you find. That is, our performance may suck but not bad
+> enough to show the full capabilities of the tools.
+>
+> I know there are countless tools/techniques that make sense, and I see them
+> in two distinct categories. 1.) Tools that help you see your problems. 2.)
+> Tools that help you fix your problems. I'd like to see us focus on the
+> tools that help you see the problems to focus on the "bug" finding aspect.
+> For each of the problems, I think we should link to relevant tools or
+> tutorials that can help solve the problem, but we should leave the
+> researching and choosing of those tools to the reader.
+>
diff --git a/sifterapp/complete/webpagetestp1.txt b/sifterapp/complete/webpagetestp1.txt
new file mode 100644
index 0000000..c8c09ba
--- /dev/null
+++ b/sifterapp/complete/webpagetestp1.txt
@@ -0,0 +1,79 @@
+The web is getting fatter and slower. Compare the [HTTPArchive][1] data for this month to six months ago. Chances are you'll find that overall page size has grown and page load times increased.
+
+This is bad news for the web at large, but it can be good news for your site. It means there's an easy way to stand out from the crowd -- build a blazing fast website.
+
+To do that you should make performance testing part of your design process. Plan for speed from the very beginning and you'll end up with a fast site. Didn't start your site off on the right foot? That's okay. We'll show you how you can speed up existing pages too.
+
+First let's talk about what we mean by performance. Performance is more than page load times, more than page "weight". These things matter, but not by themselves. Load times and download size only matter in relation to the most important part of performance -- how your visitors perceive your pages loading. Performance is ultimately a very subjective thing, despite being surrounded by some very objective data.
+
+Just remember, perception is what matters the most, not numbers.
+
+This is why performance is not necessarily a concrete target to aim for, but a spectrum.
+
+At one end of the spectrum you have what are known as ideal time targets. These were [popularized by Jakob Nielsen][2] in his book <cite>Usability Engineering</cite>. Even if you've never read the book, you've probably heard these times mentioned in the context of good user experience:
+
+> * 0.1 second is about the limit for having the user feel that the system is reacting instantaneously.
+> * 1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay.
+> * 10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done.
+
+The quick takeaway is that you have one second to get something rendered on the screen or your visitor will already be thinking about other things. By the time 10 seconds rolls around they're long gone.
+
+## Aim High, Fail High
+
+The one second rule makes a nice target to aim for, but let's face it, most sites don't meet that goal. Even Google's generally speedy pages don't load in less than one second. And Google still manages to run a multi-billion dollar business.
+
+That said, Google has been [very vocal][3] about the fact that it is trying to get its pages down below that magical one second mark. And [it wants you to aim for one second][4] as well. Remember, the higher you aim, the higher you are when you fail.
+
+So how fast is fast enough? To answer that question you need to do some performance testing.
+
+You need to test your site to find out where you can improve, but you should also test your competitors' sites as well. Why? It's the other end of the spectrum. At one end is the one second nirvana, at the other are your competitors' sites. The goal is to move your site from that end toward the one second end.
+
+If you can beat another site's page load times by 20 percent, people will perceive your site as faster. Even if you can't get to that nearly instantaneous goal of one second or less, you can still beat the competition. That means more conversions and more happy users.
+
+To figure out where you stand, and where your competition stands, you'll want to do a performance audit -- that is, figure out how fast your pages load and identify bottlenecks that can easily be eliminated.
+
+## Running a Performance Audit
+
+There are three tools that form a kind of triumvirate of performance testing -- [WebpageTest.org][5] (a web-based performance testing tool), Google's PageSpeed tools, and the network panel of your browser's developer tools.
+
+These three will help you diagnose your performance problems and give you a good idea of where you can start improving.
+
+To show you how these tools can be combined, we're going to perform a basic performance audit on the [Sifter homepage][6]. We'll identify some performance "bugs" and show you how we found them.
+
+The first step in any performance audit is to see where you're starting from. To do that we use WebpageTest.org. Like BrowserStack, which we've [written about before][7], WebpageTest is a free tool built around virtual machines. You give it a URL and it will run various tests to see how your site performs under different conditions.
+
+## Testing with WebpageTest
+
+Head over to WebpageTest, drop the URL you want to test into the form and then click the yellow link that says "Advanced Settings". Here you can control the bandwidth being used, the number of tests to run and whether or not to capture video. There's also an option to keep the test results private if you're working with a site that isn't public.
+
+To establish a performance baseline we suggest running two separate tests -- one over a high speed connection (the default is fine) and one over a 3G network. For each test you'll want to run several passes -- you can run the test up to 9 times -- and let WebpageTest pick the median result. We typically run 7 (be sure to use an odd number of runs so the median is an actual test). Also, make sure to check the box that says "Capture Video".
+
+Once your tests are done you'll see a page that looks something like this:
+
+![screenshot of initial test results page]
+
+There are a bunch of numbers here, but the main ones we want to track are the "Start Render" time and the "Speed Index". The latter is the more important of the two.
+
+What we really care about when we're trying to speed up a page is the time before the visitor sees something on the screen.
+
+The overall page load time is secondary to getting *something* -- anything really, but ideally the most important content -- on the screen as soon as possible. Give your visitors something to hold their interest or interact with and they'll perceive your page as loading faster than it actually is. People won't care (or even know) if the rest of the page is still loading in the background.
+
+The [Speed Index metric][8] represents the average time it takes to fill the initial viewport with, well, something. (Roughly speaking, it tracks how quickly the visible portion of the page renders over time -- the faster the viewport fills in, the lower the score.) The number is in milliseconds and depends on the size of the viewport. A smaller viewport (a phone, for example) needs less on the screen before it's filled than a massive HD desktop monitor.
+
+For example, let's say your Speed Index is around 6000 milliseconds over a mobile connection. That sounds pretty good, right? It's not one second, but it's better than most sites out there.
+
+Now go ahead and click the link to the video of your page rendering and force yourself to sit through it. Suddenly 6 seconds doesn't sound so fast, does it? In fact, it's a little painful to sit through. If you're like us, you were probably fidgeting a little before the video was over.
+
+That's how your visitors *feel* every time they use your site.
+
+Now that you have some idea of how people are perceiving your site and what it feels like, it's time to go back to the numbers. In the next installment we'll take a look at some tools you can use to find, diagnose and fix problems in the HTML and improve the performance of your site.
+
+
+[1]: http://httparchive.org/
+[2]: http://www.nngroup.com/articles/response-times-3-important-limits/
+[3]: http://www.youtube.com/watch?v=Il4swGfTOSM
+[4]: http://googlewebmastercentral.blogspot.com/2013/08/making-smartphone-sites-load-fast.html
+[5]: http://www.webpagetest.org/
+[6]: https://sifterapp.com/
+[7]: link
+[8]: https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index
diff --git a/sifterapp/complete/webpagetestp2.txt b/sifterapp/complete/webpagetestp2.txt
new file mode 100644
index 0000000..5a28dd1
--- /dev/null
+++ b/sifterapp/complete/webpagetestp2.txt
@@ -0,0 +1,85 @@
+In the last installment we looked at how WebpageTest can be used to establish a performance baseline. Now it's time to dig a bit deeper and see what we can do about some common performance bottlenecks.
+
+To do that we'll turn to a new tool, [Google PageSpeed Insights][1].
+
+Before we dive in, recall what we said last time about the importance of the Speed Index. That is, the time it takes to get something on the screen. This is different from the time it takes to fully load the page. Keep that in mind when you're picking and choosing what to optimize.
+
+For example, if PageSpeed Insights tells you "leverage browser caching" -- which means have your server set an Expires Header -- that's good advice in the broader sense, but it won't change your Speed Index number for first-time visitors.
+
+To start with, we suggest tackling the things that will get you the biggest wins on the Speed Index. That's what we'll focus on here.
+
+## Google PageSpeed Insights
+
+Now that we know how long it's taking to load the page, it's time to start finding the bottlenecks in that process. If you know how to read a waterfall chart then WebpageTest can tell you most of what you want to know, but Google's PageSpeed Insights tool offers a nicer interface and puts more of an emphasis on mobile performance improvements.
+
+There are two ways to use PageSpeed. You can use [the online service][2] and plug in your URL or you can install the [PageSpeed Insights add-on for Chrome][3], which will add a PageSpeed tab to the Chrome developer tools.
+
+The latter is very handy, but lacks some of the features found in the online tool, most notably checks on "[critical path][4]" performance (closely related to what Speed Index measures) and mobile user experience analysis. For that reason we suggest using both. The online service does a better job of suggesting fixes for mobile and offers a score you can use to gauge your improvements (though you should go back to WebpageTest and rerun the same tests to make sure your Speed Index times have actually dropped).
+
+The browser add-on, on the other hand, will look at other network-level issues, like redirects, which can hurt your Speed Index times as well.
+
+PageSpeed Insights fetches the page twice, once with a mobile user-agent, and once with a desktop user-agent. It does not, however, simulate the constrained bandwidth of a mobile connection. For that you'll need to go back to WebpageTest. Complete details on what PageSpeed Insights does are available in Google's [developer documentation][5].
+
+When we ran PageSpeed Insights on the Sifter homepage the service made a number of suggestions:
+
+![Screenshot of initial run]
+
+Notice the color coding: red is high priority, yellow less so, and green is all the stuff you're already doing right. But those priorities are Google's suggestions, not hard and fast rules. As we mentioned above, one of the high priority suggestions is to add Expires headers to our static assets. That's a good idea and it will help speed up the experience of visiting the site again or loading a second page that uses the same assets. But it won't help first-time visitors and it won't change that Speed Index number for initial page loads.
+
+Enabling compression, on the other hand, will. Adding GZip compression to our stylesheet and SVG icons would shave 154.8KB off the total page size. Fewer KBs to download always means faster page load times. This is especially true for the stylesheet, since the browser stops rendering the page whenever it encounters a CSS file. It doesn't start rendering again until it has completely downloaded and parsed that file, so anything we can do to decrease the size of the stylesheet will be a big win.
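+
+For Nginx users like us, enabling this takes only a few directives. Here's a minimal sketch -- the compression level and MIME types are illustrative, not our exact configuration:
+
+```nginx
+# Enable GZip compression (values here are illustrative).
+gzip on;
+gzip_comp_level 5;
+# text/html is always compressed once gzip is on; add CSS and SVG explicitly.
+gzip_types text/css image/svg+xml;
+```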
+
+Another suggestion -- one the online tool doesn't consider high priority, but that shows up in the browser add-on -- is to minimize redirects.
+
+To see exactly how redirects hurt your page load times, let's turn to the third tool for performance testing: your browser's Network panel.
+
+## The Network Panel
+
+All modern web browsers have some form of developer tools and all of them offer a "Network" panel of some sort. For these examples we'll be using Chrome, but you can see the same thing in Firefox, Safari, Opera and IE.
+
+In this example you can see that the fonts.css file returned a 302 (a temporary redirect):
+
+![Screenshot of Network Panel]
+
+To find out more about what this redirect is and why it's happening, we'll select it in the network panel and have a look at the actual response headers.
+
+![Screenshot of Network Panel response headers]
+
+In this case you can see that it redirected to another CSS file on our domain.
+
+This file is eating up time twice. First, it's on a different domain (our webfont provider's domain), which means there's another DNS lookup to perform. That's a big deal on mobile; [see this talk][6] from Google's Ilya Grigorik for an incredibly thorough explanation of why.
+
+The second time suck is the actual redirect, which forces the browser to load the same resource again from a different location (and keep in mind that this resource is a CSS file, so it's blocking rendering throughout these delays). The second attempt succeeds, but there's definitely a performance hit.
+
+Given all that, why still serve up this file? Because it's an acceptable trade-off. Tungsten (the font being loaded) is an integral part of the design, and in this case there are other areas we can optimize -- like enabling server-side GZip compression -- that will get us some big wins. We may be able to get close enough to the ideal one second end of the spectrum that we're okay with the cost of loading the font.
+
+This highlights what is perhaps one of the hardest aspects of improving performance -- nothing comes for free.
+
+When it comes to page load times there is no such thing as too fast, but there can be such a thing as over-optimization. If we ditch the font we might speed up the page load time a tiny bit, but we might also lose some of the less tangible aspects of the reading experience. We might get the page to our visitors 500ms faster, but they might also be less delighted with what we've given them. What stays and what goes is a case-by-case decision.
+
+For example, if you eliminate a JavaScript library to speed up your page, but without the library your app stops working, well, that would be silly. Moving that JavaScript library to a CDN and caching it with a far-future Expires header? Now that's smart.
+
+Performance is always a series of trade-offs. CSS blocks the rendering of the page, but no one wants to see your site without its CSS. To speed up your site you don't get rid of your CSS, but you might consider inlining some of it. That is, move some of your critical CSS into the actual HTML document -- enough that the initial viewport is rendered properly -- and then load the stylesheet at the bottom of the page where it won't block rendering. Tools like Google's [PageSpeed Module][7] for Apache and Nginx can do this for you automatically.
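+
+If you're running Nginx, turning that on looks something like the following sketch. Treat it as a starting point, not a production config -- the cache path and filter selection here are illustrative:
+
+```nginx
+# ngx_pagespeed needs to be compiled into Nginx; these directives then enable it.
+pagespeed on;
+pagespeed FileCachePath /var/ngx_pagespeed_cache;
+# Inline the CSS needed to render the initial viewport and defer the rest.
+pagespeed EnableFilters prioritize_critical_css;
+```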
+
+The answer to performance problems is rarely to move from one extreme to the other, but to find the middle ground where performance, functionality and great user experience meet.
+
+## What We Did
+
+After running Sifter through WebpageTest we identified the two biggest wins -- enabling GZip compression and setting Expires headers. The first means users download less data, so the page loads faster. The second means repeat views will be even faster because common elements like stylesheets and fonts are already in the browser's cache.
+
+We also removed some analytics scripts that were really only necessary on the particular pages we're testing, not the site as a whole.
+
+For us the change meant adding a few lines to Nginx. One gotcha for fellow Nginx users: you need to add your GZip and Expires configuration to your application servers *and* your load balancers. Other than that snag, the changes hardly took any time at all.
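+
+The Expires side of those "few lines" looks roughly like this sketch (the one-year lifetime and the list of file extensions are illustrative, not our exact values):
+
+```nginx
+# Far-future Expires headers for static assets (illustrative values).
+# Remember: this needs to live on the application servers *and* the load balancers.
+location ~* \.(css|js|svg|png|jpg|woff)$ {
+    expires 1y;
+    add_header Cache-Control "public";
+}
+```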
+
+The result? Our initial page load times as measured by WebpageTest dropped to under 4 seconds over 3G. That's a two second improvement for mobile users with very little work on our end. For those with high speed connections the Sifter homepage now gets very close to that magical one second mark.
+
+We got there because we did the testing, identified the problems and targeted the biggest wins rather than trying to do it all. Remember, don't test more, test smarter.
+
+
+
+[1]: https://developers.google.com/speed/pagespeed/insights/
+[2]: https://developers.google.com/speed/pagespeed/insights/
+[3]: https://chrome.google.com/webstore/detail/pagespeed-insights-by-goo/gplegfbjlmmehdoakndmohflojccocli?hl=en
+[4]: https://developers.google.com/web/fundamentals/performance/critical-rendering-path/
+[5]: https://developers.google.com/speed/docs/insights/about
+[6]: https://www.youtube.com/watch?v=a4SbDZ9Y-I4#t=175
+[7]: https://developers.google.com/speed/pagespeed/module
diff --git a/sifterapp/complete/yosemite-mail.txt b/sifterapp/complete/yosemite-mail.txt
new file mode 100644
index 0000000..e713152
--- /dev/null
+++ b/sifterapp/complete/yosemite-mail.txt
@@ -0,0 +1,14 @@
+If you've updated to Apple's latest version of OS X, Yosemite, you have a powerful new tool for creating issues via email and you may not even know it.
+
+Yosemite debuts something Apple calls Extensions, which are little apps that run inside other apps. Extensions are available in both OS X and iOS, but since they're brand new, not a lot of applications take advantage of them just yet.
+
+The latest version of Apple Mail does, however, make use of Extensions through its new Markup feature. Markup is a tool to quickly add simple notes and annotations to image and PDF files directly within Mail.
+
+Here's how it works: First you add an image or PDF to a mail message. Then you click on the file and an icon appears in the top-left corner of the file preview. Click the little icon and select Markup. The image will then zoom out and you'll see a toolbar above it with options to draw shapes and add text on top of it.
+
+Most demos we've seen of Markup show people adding arrows to maps to indicate where to meet and other things you'll probably never actually do, but this is a very powerful tool for software developers. It makes adding a little bit of visual help to your issues much easier.
+
+For example, your workflow might look like this: you discover a bug with some visual component to it -- let's say some CSS fails to line up the submit buttons on a form. So you grab a screenshot (just press CMD-Shift-3 and OS X will take one), drag it to a new mail message, annotate it with some arrows pointing to the problem and a quick note about how it should look. Then you send it off to your issue tracking software, which creates a new issue and attaches your screenshot, complete with annotations.
+
+This way your designers don't have to wade through a bunch of prose trying to figure out what you mean by "doesn't line up". Instead they see the image with your notes and can jump straight into fixing the issue.
+
diff --git a/sifterapp/ideal-sifter-workflow.txt b/sifterapp/ideal-sifter-workflow.txt
new file mode 100644
index 0000000..4fdc269
--- /dev/null
+++ b/sifterapp/ideal-sifter-workflow.txt
@@ -0,0 +1,31 @@
+Streamlining Bug & Issue Tracking Workflow
+
+The quest for a perfect workflow leads to ambiguous and redundant statuses. Can an issue be resolved and not in progress at the same time? (e.g. the original tester is no longer around, so who retests it?) Can it be pending and resolved at the same time? If it's moved to pending, it becomes difficult to remember what state it was in previously.
+
+
+This would essentially help people visualize the implicit nature of frequently requested statuses that we don’t explicitly plan on adding.
+
+
+How do we suggest working with issues in Sifter?
+
+This would be a diagram illustrating both the explicit and virtual states that Sifter helps manage.
+
+
+> I’d create a diagram to illustrate it kind of like this with guidance on how to create/bookmark the corresponding filters for convenience.
+
+
+* New -> Open w/o milestone/assignee
+* Accepted -> Open & Assigned to a milestone.
+* Pending/On Hold -> Open & No milestone, no assignee
+* In Progress -> Open & Assigned to a user.
+* Open
+* Resolved
+* Closed
+* Rejected -> Closed w/ explanation.
+* Working as Designed -> Closed w/ explanation.
+* Duplicate -> Closed w/ explanation.
+
+The underlying challenge is that we often want machines to explicitly handle every potential scenario. In some cases, this is useful, but in others, we're pushing work off to a machine when humans are infinitely better at handling it. Trying to eliminate any ambiguity requires giving the system a lot of additional information.
+
+
+These "meta-statuses" shouldn't be treated the same as actual statuses. They add additional layers of meaning to a status, but they shouldn't live in parallel with the primary statuses.
diff --git a/sifterapp/invoices/scott_gilbertson_invoice_01.rtf b/sifterapp/invoices/scott_gilbertson_invoice_01.rtf
new file mode 100644
index 0000000..1e335fb
--- /dev/null
+++ b/sifterapp/invoices/scott_gilbertson_invoice_01.rtf
@@ -0,0 +1,62 @@
+{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf210
+{\fonttbl\f0\fswiss\fcharset0 ArialMT;\f1\froman\fcharset0 TimesNewRomanPSMT;}
+{\colortbl;\red255\green255\blue255;}
+{\info
+{\author None Yo}}\vieww12240\viewh15840\viewkind1
+\deftab720
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qr
+
+\f0\b\fs24 \cf0 Scott Gilbertson\
+412 Holman Ave\
+Athens, GA 30606\
+706 438 4297 \
+sng@luxagraf.net
+\f1\b0 \
+
+\f0\b 9/03/14
+\f1\b0 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qj
+
+\f0 \cf0 \
+\
+\
+\
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300
+\cf0 Invoice Number:
+\b 0001
+\f1\b0 \
+\pard\pardeftab720\ri0
+
+\f0 \cf0 Invoice Date: 09/03/14\
+Time Period: 08/01/14-09/30/14
+\f1 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\sa283
+\cf0 \
+\pard\pardeftab720\ri0
+
+\f0 \cf0 DESCRIPTION OF SERVICE: Freelance Writing\uc0\u8232 HOURLY RATE / DAILY RATE: 75/hr\
+TOTAL HOURS: 13\
+\
+\pard\pardeftab720\ri0
+
+\fs28 \cf0 TOTAL FOR INVOICE: $975\
+\pard\pardeftab720\ri0
+
+\f1\fs24 \cf0 \
+\
+\
+\
+\
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300
+
+\f0 \cf0 Bank: SchoolsFirstFCU\
+Address:
+\f1 P.O. Box 11547, Santa Ana, CA 92711-1547 \
+
+\f0 Account Name: Checking
+\f1 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qj
+
+\f0 \cf0 Routing: 322282001\
+Account: 0172510703
+\f1 } \ No newline at end of file
diff --git a/sifterapp/invoices/scott_gilbertson_invoice_02.rtf b/sifterapp/invoices/scott_gilbertson_invoice_02.rtf
new file mode 100644
index 0000000..ab48b28
--- /dev/null
+++ b/sifterapp/invoices/scott_gilbertson_invoice_02.rtf
@@ -0,0 +1,62 @@
+{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf210
+{\fonttbl\f0\fswiss\fcharset0 ArialMT;\f1\froman\fcharset0 TimesNewRomanPSMT;}
+{\colortbl;\red255\green255\blue255;}
+{\info
+{\author None Yo}}\vieww12240\viewh15840\viewkind1
+\deftab720
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qr
+
+\f0\b\fs24 \cf0 Scott Gilbertson\
+412 Holman Ave\
+Athens, GA 30606\
+706 438 4297 \
+sng@luxagraf.net
+\f1\b0 \
+
+\f0\b 10/02/14
+\f1\b0 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qj
+
+\f0 \cf0 \
+\
+\
+\
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300
+\cf0 Invoice Number:
+\b 0002
+\f1\b0 \
+\pard\pardeftab720\ri0
+
+\f0 \cf0 Invoice Date: 10/02/14\
+Time Period: 10/01/14-10/31/14
+\f1 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\sa283
+\cf0 \
+\pard\pardeftab720\ri0
+
+\f0 \cf0 DESCRIPTION OF SERVICE: Freelance Writing\uc0\u8232 HOURLY RATE / DAILY RATE: 75/hr\
+TOTAL HOURS: 10\
+\
+\pard\pardeftab720\ri0
+
+\fs28 \cf0 TOTAL FOR INVOICE: $750\
+\pard\pardeftab720\ri0
+
+\f1\fs24 \cf0 \
+\
+\
+\
+\
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300
+
+\f0 \cf0 Bank: SchoolsFirstFCU\
+Address:
+\f1 P.O. Box 11547, Santa Ana, CA 92711-1547 \
+
+\f0 Account Name: Checking
+\f1 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qj
+
+\f0 \cf0 Routing: 322282001\
+Account: 0172510703
+\f1 } \ No newline at end of file
diff --git a/sifterapp/invoices/scott_gilbertson_invoice_03.rtf b/sifterapp/invoices/scott_gilbertson_invoice_03.rtf
new file mode 100644
index 0000000..64d2ca1
--- /dev/null
+++ b/sifterapp/invoices/scott_gilbertson_invoice_03.rtf
@@ -0,0 +1,50 @@
+{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf210
+{\fonttbl\f0\fswiss\fcharset0 ArialMT;\f1\froman\fcharset0 TimesNewRomanPSMT;}
+{\colortbl;\red255\green255\blue255;}
+{\info
+{\author None Yo}}\vieww12240\viewh15840\viewkind1
+\deftab720
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qr
+
+\f0\b\fs24 \cf0 Scott Gilbertson\
+412 Holman Ave\
+Athens, GA 30606\
+706 438 4297 \
+sng@luxagraf.net
+\f1\b0 \
+
+\f0\b 11/07/14
+\f1\b0 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\qj
+
+\f0 \cf0 \
+\
+\
+\
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300
+\cf0 Invoice Number:
+\b 0003
+\f1\b0 \
+\pard\pardeftab720\ri0
+
+\f0 \cf0 Invoice Date: 11/07/14\
+Time Period: 11/01/14-11/30/14
+\f1 \
+\pard\tx709\tx1418\tx2127\tx2836\tx3545\tx4254\tx4963\tx5672\tx6381\tx7090\tx7799\tx8508\tx9217\pardeftab720\ri0\sl300\sa283
+\cf0 \
+\pard\pardeftab720\ri0
+
+\f0 \cf0 DESCRIPTION OF SERVICE: Freelance Writing\uc0\u8232 HOURLY RATE / DAILY RATE: 75/hr\
+TOTAL HOURS: 10\
+\
+\pard\pardeftab720\ri0
+
+\fs28 \cf0 TOTAL FOR INVOICE: $750\
+\pard\pardeftab720\ri0
+
+\f1\fs24 \cf0 \
+\
+\
+\
+\
+} \ No newline at end of file
diff --git a/sifterapp/invoices/scott_gilbertson_invoice_04.rtf b/sifterapp/invoices/scott_gilbertson_invoice_04.rtf
new file mode 100644
index 0000000..eb04890
--- /dev/null
+++ b/sifterapp/invoices/scott_gilbertson_invoice_04.rtf
@@ -0,0 +1,77 @@
+{\rtf1\ansi\ansicpg1252\deff0\uc1
+{\fonttbl
+{\f0\fnil\fcharset0\fprq0\fttruetype ArialMT;}
+{\f1\fnil\fcharset0\fprq0\fttruetype TimesNewRomanPSMT;}
+{\f2\fnil\fcharset0\fprq0\fttruetype Liberation Sans;}
+{\f3\fnil\fcharset0\fprq0\fttruetype Liberation Serif;}
+{\f4\fnil\fcharset0\fprq0\fttruetype Courier New;}}
+{\colortbl
+\red0\green0\blue0;
+\red255\green255\blue255;
+\red255\green255\blue255;}
+{\stylesheet
+{\s6\fi-431\li720\sbasedon28\snext28 Contents 1;}
+{\s7\fi-431\li1440\sbasedon28\snext28 Contents 2;}
+{\s1\fi-431\li720 Arrowhead List;}
+{\s27\fi-431\li720\sbasedon28 Lower Roman List;}
+{\s29\tx431\sbasedon20\snext28 Numbered Heading 1;}
+{\s30\tx431\sbasedon21\snext28 Numbered Heading 2;}
+{\s12\fi-431\li720 Diamond List;}
+{\s9\fi-431\li2880\sbasedon28\snext28 Contents 4;}
+{\s8\fi-431\li2160\sbasedon28\snext28 Contents 3;}
+{\s31\tx431\sbasedon22\snext28 Numbered Heading 3;}
+{\s32\fi-431\li720 Numbered List;}
+{\s15\sbasedon28 Endnote Text;}
+{\*\cs14\fs20\super Endnote Reference;}
+{\s4\fi-431\li720 Bullet List;}
+{\s5\tx1584\sbasedon29\snext28 Chapter Heading;}
+{\s35\fi-431\li720 Square List;}
+{\s11\fi-431\li720 Dashed List;}
+{\s22\sb440\sa60\f2\fs24\b\sbasedon28\snext28 Heading 3;}
+{\s37\fi-431\li720 Tick List;}
+{\s24\fi-431\li720 Heart List;}
+{\s40\fi-431\li720\sbasedon32 Upper Roman List;}
+{\s39\fi-431\li720\sbasedon32 Upper Case List;}
+{\s16\fi-288\li288\fs20\sbasedon28 Footnote;}
+{\s19\fi-431\li720 Hand List;}
+{\s18\fs20\sbasedon28 Footnote Text;}
+{\s20\sb440\sa60\f2\fs34\b\sbasedon28\snext28 Heading 1;}
+{\s21\sb440\sa60\f2\fs28\b\sbasedon28\snext28 Heading 2;}
+{\s10\qc\sb240\sa120\f2\fs32\b\sbasedon28\snext28 Contents Header;}
+{\s23\sb440\sa60\f2\fs24\b\sbasedon28\snext28 Heading 4;}
+{\s28\f3\fs24 Normal;}
+{\s26\fi-431\li720\sbasedon32 Lower Case List;}
+{\s2\li1440\ri1440\sa120\sbasedon28 Block Text;}
+{\s33\f4\sbasedon28 Plain Text;}
+{\s34\tx1584\sbasedon29\snext28 Section Heading;}
+{\s25\fi-431\li720 Implies List;}
+{\s3\fi-431\li720 Box List;}
+{\s36\fi-431\li720 Star List;}
+{\*\cs17\fs20\super Footnote Reference;}
+{\s38\fi-431\li720 Triangle List;}
+{\s13\fi-288\li288\sbasedon28 Endnote;}}
+\kerning0\cf0\ftnbj\fet2\ftnstart1\ftnnar\aftnnar\ftnstart1\aftnstart1\aenddoc\revprop3{\*\rdf}{\info\uc1{\author None Yo}}\deftab720\viewkind1\paperw12240\paperh15840\margl1440\margr1440\widowctrl
+\sectd\sbknone\colsx0\marglsxn1800\margrsxn1800\pgncont\ltrsect
+\pard\plain\ltrpar\qr\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\abinodiroverride\ltrch Scott Gilbertson}{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qr\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\abinodiroverride\ltrch 412 Holman Ave}{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qr\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\abinodiroverride\ltrch Athens, GA 30606}{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qr\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\abinodiroverride\ltrch 706 438 4297 }{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qr\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\abinodiroverride\ltrch sng@luxagraf.net}{\f1\fs24\lang1033{\*\listtag0} }{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qr\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\abinodiroverride\ltrch 12/03/14}{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qj\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qj\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qj\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\qj\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\b\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\lang1033{\*\listtag0}\abinodiroverride\ltrch Invoice Number: }{\f0\fs24\b\lang1033{\*\listtag0}0004}{\f0\fs24\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs24\lang1033{\*\listtag0}\abinodiroverride\ltrch Invoice Date: 12/03/14}{\f0\fs24\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs24\lang1033{\*\listtag0}\abinodiroverride\ltrch Time Period: 12/01/14-12/31/14}{\f0\fs24\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sa282\sl300\slmult1\itap0\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216{\f0\fs24\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs24\lang1033{\*\listtag0}\abinodiroverride\ltrch DESCRIPTION OF SERVICE: Freelance Writing\uc0\u8232 HOURLY RATE / DAILY RATE: 75/hr}{\f0\fs24\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs24\lang1033{\*\listtag0}\abinodiroverride\ltrch TOTAL HOURS: 10}{\f0\fs24\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs24\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs28\lang1033{\*\listtag0}\abinodiroverride\ltrch TOTAL FOR INVOICE: $750}{\f0\fs28\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs28\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs28\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs28\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs28\lang1033{\*\listtag0}\par}
+\pard\plain\ltrpar\ql\sl240\slmult1\itap0{\f0\fs28\lang1033{\*\listtag0}\par}} \ No newline at end of file
diff --git a/sifterapp/invoices/scott_gilbertson_invoice_05.rtf b/sifterapp/invoices/scott_gilbertson_invoice_05.rtf
new file mode 100644
index 0000000..5bb5b2f
--- /dev/null
+++ b/sifterapp/invoices/scott_gilbertson_invoice_05.rtf
@@ -0,0 +1,55 @@
+{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf210
+{\fonttbl\f0\fswiss\fcharset0 ArialMT;\f1\froman\fcharset0 TimesNewRomanPSMT;}
+{\colortbl;\red255\green255\blue255;}
+{\info
+{\author None Yo}}\margl1440\margr1440\vieww12240\viewh15840\viewkind1
+\deftab720
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\qr
+
+\f0\b\fs24 \cf0 \expnd0\expndtw0\kerning0
+Scott Gilbertson\
+412 Holman Ave\
+Athens, GA 30606\
+706 438 4297 \
+sng@luxagraf.net
+\f1\b0 \expnd0\expndtw0\kerning0
+
+\f0\b \expnd0\expndtw0\kerning0
+\
+02/02/15\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\qj
+\cf0 \expnd0\expndtw0\kerning0
+\
+\
+\
+\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1
+
+\b0 \cf0 \expnd0\expndtw0\kerning0
+Invoice Number:
+\b \expnd0\expndtw0\kerning0
+0005
+\b0 \expnd0\expndtw0\kerning0
+\
+\pard\pardeftab720
+\cf0 \expnd0\expndtw0\kerning0
+Invoice Date: 02/02/15\
+Time Period: 02/01/15-02/28/15\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\sa282
+\cf0 \expnd0\expndtw0\kerning0
+\
+\pard\pardeftab720
+\cf0 \expnd0\expndtw0\kerning0
+DESCRIPTION OF SERVICE: Freelance Writing\uc0\u8232 HOURLY RATE / DAILY RATE: 75/hr\
+TOTAL HOURS: 10\
+\
+\pard\pardeftab720
+
+\fs28 \cf0 \expnd0\expndtw0\kerning0
+TOTAL FOR INVOICE: $750\
+\
+\
+\
+\
+\
+} \ No newline at end of file
diff --git a/sifterapp/invoices/scott_gilbertson_invoice_06.rtf b/sifterapp/invoices/scott_gilbertson_invoice_06.rtf
new file mode 100644
index 0000000..774cd77
--- /dev/null
+++ b/sifterapp/invoices/scott_gilbertson_invoice_06.rtf
@@ -0,0 +1,55 @@
+{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf210
+{\fonttbl\f0\fswiss\fcharset0 ArialMT;\f1\froman\fcharset0 TimesNewRomanPSMT;}
+{\colortbl;\red255\green255\blue255;}
+{\info
+{\author None Yo}}\margl1440\margr1440\vieww12240\viewh15840\viewkind1
+\deftab720
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\qr
+
+\f0\b\fs24 \cf0 \expnd0\expndtw0\kerning0
+Scott Gilbertson\
+412 Holman Ave\
+Athens, GA 30606\
+706 438 4297 \
+sng@luxagraf.net
+\f1\b0 \expnd0\expndtw0\kerning0
+
+\f0\b \expnd0\expndtw0\kerning0
+\
+03/02/15\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\qj
+\cf0 \expnd0\expndtw0\kerning0
+\
+\
+\
+\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1
+
+\b0 \cf0 \expnd0\expndtw0\kerning0
+Invoice Number:
+\b \expnd0\expndtw0\kerning0
+0006
+\b0 \expnd0\expndtw0\kerning0
+\
+\pard\pardeftab720
+\cf0 \expnd0\expndtw0\kerning0
+Invoice Date: 03/02/15\
+Time Period: 03/01/15-03/31/15\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\sa282
+\cf0 \expnd0\expndtw0\kerning0
+\
+\pard\pardeftab720
+\cf0 \expnd0\expndtw0\kerning0
+DESCRIPTION OF SERVICE: Freelance Writing\uc0\u8232 HOURLY RATE / DAILY RATE: 75/hr\
+TOTAL HOURS: 10\
+\
+\pard\pardeftab720
+
+\fs28 \cf0 \expnd0\expndtw0\kerning0
+TOTAL FOR INVOICE: $750\
+\
+\
+\
+\
+\
+} \ No newline at end of file
diff --git a/sifterapp/invoices/scott_gilbertson_invoice_07.rtf b/sifterapp/invoices/scott_gilbertson_invoice_07.rtf
new file mode 100644
index 0000000..538dcca
--- /dev/null
+++ b/sifterapp/invoices/scott_gilbertson_invoice_07.rtf
@@ -0,0 +1,55 @@
+{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf210
+{\fonttbl\f0\fswiss\fcharset0 ArialMT;\f1\froman\fcharset0 TimesNewRomanPSMT;}
+{\colortbl;\red255\green255\blue255;}
+{\info
+{\author None Yo}}\margl1440\margr1440\vieww12240\viewh15840\viewkind1
+\deftab720
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\qr
+
+\f0\b\fs24 \cf0 \expnd0\expndtw0\kerning0
+Scott Gilbertson\
+412 Holman Ave\
+Athens, GA 30606\
+706 438 4297 \
+sng@luxagraf.net
+\f1\b0 \expnd0\expndtw0\kerning0
+
+\f0\b \expnd0\expndtw0\kerning0
+\
+04/01/15\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\qj
+\cf0 \expnd0\expndtw0\kerning0
+\
+\
+\
+\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1
+
+\b0 \cf0 \expnd0\expndtw0\kerning0
+Invoice Number:
+\b \expnd0\expndtw0\kerning0
+0007
+\b0 \expnd0\expndtw0\kerning0
+\
+\pard\pardeftab720
+\cf0 \expnd0\expndtw0\kerning0
+Invoice Date: 04/01/15\
+Time Period: 04/01/15-04/30/15\
+\pard\tx708\tx1417\tx2126\tx2835\tx3545\tx4254\tx4963\tx5672\tx6381\tx7089\tx7798\tx8507\tx9216\pardeftab720\sl300\slmult1\sa282
+\cf0 \expnd0\expndtw0\kerning0
+\
+\pard\pardeftab720
+\cf0 \expnd0\expndtw0\kerning0
+DESCRIPTION OF SERVICE: Freelance Writing\uc0\u8232 HOURLY RATE / DAILY RATE: 75/hr\
+TOTAL HOURS: 10\
+\
+\pard\pardeftab720
+
+\fs28 \cf0 \expnd0\expndtw0\kerning0
+TOTAL FOR INVOICE: $750\
+\
+\
+\
+\
+\
+} \ No newline at end of file
diff --git a/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.35 AM.png b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.35 AM.png
new file mode 100644
index 0000000..af1d87e
--- /dev/null
+++ b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.35 AM.png
Binary files differ
diff --git a/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.39 AM.png b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.39 AM.png
new file mode 100644
index 0000000..799bbde
--- /dev/null
+++ b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.39 AM.png
Binary files differ
diff --git a/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.45 AM.png b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.45 AM.png
new file mode 100644
index 0000000..aaef516
--- /dev/null
+++ b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.45 AM.png
Binary files differ
diff --git a/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.49 AM.png b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.49 AM.png
new file mode 100644
index 0000000..9a24e2f
--- /dev/null
+++ b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.49 AM.png
Binary files differ
diff --git a/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.52 AM.png b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.52 AM.png
new file mode 100644
index 0000000..bc60ae6
--- /dev/null
+++ b/sifterapp/load-testing/Screen Shot 2014-11-05 at 10.10.52 AM.png
Binary files differ
diff --git a/sifterapp/new-article.txt b/sifterapp/new-article.txt
new file mode 100644
index 0000000..d8b6e22
--- /dev/null
+++ b/sifterapp/new-article.txt
@@ -0,0 +1,62 @@
+> I’m including the original brainstorm email for context/convenience at the bottom.
+>
+> > I know nothing about Jira and Fogbugz beyond the things I've seen on twitter where people complain about Jira.
+>
+> Jira is heavily customizable. The catch is that it creates the paradox of choice problem both at a configuration level, and, if configured to do a lot of stuff, at the data entry and daily use level.
+>
+> For a developer, the configuration is fun. It can literally do anything. You can have different types of tasks. Bugs, New Features, Tasks, Enhancements, Stories, etc. And I believe you can even have the fields and workflow adapt depending on the type.
+>
+> The video on the home page does a good job of giving an overview. It’s just really flexible.
+>
+> The catch is that it looks great from a checklist standpoint, but the more custom configuration is created, the more confusing it can be for a non-developer. Say someone wants to report an issue. They can’t just file it. They have to decide if it’s a bug or a feature. Or should it be a story? Well, maybe this is just a task? The result is that people get confused, misfile things, or just feel like they must not belong there because they can’t understand it. Frequently, they’ll just email it to their favorite developer and let them handle it. And the result is that the work slips through the cracks.
+>
+> That’s where things start to fall apart, but that’s not something a developer will see or acknowledge because the problem is largely invisible. If issues simply aren’t being reported or team members aren’t using a tool, it’s harder to recognize that there even is a problem.
+>
+> > Can you give me some concrete examples of things that exist at the more complex end of the spectrum and why a team might want them.
+>
+> Customizable workflow, issue types, and things like custom fields. There are some plans to begin adding some of this to Sifter.
+>
+> Customizable fields are a big one. People want to add a field for “Module” (where the bug was found), “User Agent” (for browser-related bugs), “Version Target”, or “Due Date”. The problem is that as you add fields, the likelihood of a field being relevant to all types of issues goes down and you end up with a long form of fields that are only relevant a fraction of the time. For instance, people get tired of reporting User Agent when it’s only useful in 1 out of 10 bugs. That’s a lot of wasted effort. But as a developer, you want all of the info you can possibly get so you don’t have to go back and forth. For Microsoft, Adobe, etc., that makes sense. For the small team of 5 working on a WordPress install for a marketing site and blog, that’s just not important enough to justify a dedicated field.
+>
+> Also the fields can be customized based on the type of issue. So a bug might need user agent info, but a question wouldn’t. Or a Story might have the whole “As a _____ I want to ______ so I can ______.” And then it might have some other fields, but it won’t need a user agent field. So in order to solve the problem of having too many irrelevant fields, they add the “Type” option to drive the fields that are available.
+>
+> Now they simply have a new problem. With the “Type” field for classifying the issue, is it a “Bug”, a “Front-end Bug”, an “Enhancement”, or a “Story”? Clients don’t really care. Not only do they not care, but they’re thinking “WTF is a Story?” (or whatever other random type there is). They just want something fixed/changed. Forcing people to classify something doesn’t contribute to solving the problem on a small team. And if your client submits something as a bug and you have to change it to an enhancement, it’s not only creating overhead, but it might piss the client off. So it’s generally best to have that conversation in the comments instead of doing what most developers would do, which is reclassify it and move on without much of an explanation.
+>
+> > Beyond that, I really like the point that different people on the same team will want different things out of a bug tracker (I'm working on a site right now with a new agency where that's true of something as simple as asset sharing). But anyway, is there a solution there? Or should I just frame that as one of those things that humans need to manage rather than chasing some ultimate tool? ie ultimately you'll never make everyone happy.
+>
+> Much of this truly is a human problem rather than a software problem. People are generally much better at adapting than software, but as developers, we all want to believe that software should always bend to our whims. Since it’s usually more technology-focused people making the purchase decision, they rarely bother to evaluate the usability or participation implications of choosing a more complex tool.
+>
+> For large teams, the complexity is generally worth it, but for a small team, they often adopt large bulky solutions that they used in the past without appreciating that at smaller sizes that process suffocates more than it enables.
+>
+> For instance, with Sifter, there’s no classifying an issue type. When you’re working with maybe 100-200 issues per project, it doesn’t really matter if something is a bug, feature, or a task. It just needs to get done and shipped. So all that really matters is priority. But like with the Lego analogy, developers feel better if they organize all the things. For instance, if we have a team of 5 people collaborating on a project -- 1 back-end dev, 1 designer, 1 front-end dev, and a couple of non-technical business people -- there’s really no need for heavy workflow. It’s more important that it’s lightweight and easy to use. It’s entirely possible for a small team like that to waste more time designing and deciding on workflow than actually getting things done.
+>
+> For some more context, a lot of it is the ecosystem. Atlassian has a bunch of products, but at the end of the day, they’re really all just features of the “Atlassian Product”. Those features give you more power, but even their signup page is overwhelming with choices.
+> http://cl.ly/dJTJ
+>
+> Some people will see this and get excited at the options. Others will see it and get confused or overwhelmed at the decisions they need to make. That’s just kind of the nature of the beast.
+>
+> On the flip side, many developers will look at something like Sifter or Trello and just feel like it’s the equivalent of using Duplos and it’s for amateurs. But their business and non-technical people love it because they get it.
+>
+> I’m going to stop there. Hopefully that makes some sense of it. :)
+>
+> - G
+>
+>
+> > On Aug 2, 2015, at 9:44 AM, Garrett Dimon <garrett@nextupdate.com> wrote:
+> >
+> > I just had another idea inspired by this. “Why it’s so difficult for your team to settle on a bug tracker?”
+> >
+> > The premise is the Goldilocks story. This bug tracker is too simple. This bug tracker is too complicated. This bug tracker is just right.
+> >
+> > Effectively, with bug and issue trackers, there’s a spectrum. On one end, you have todo lists/spreadsheets. Then you have things like GitHub Issues/Sifter/Trello somewhere in the middle, and then Jira/Fogbugz/etc. all of the way to the other end.
+> >
+> > Each of these tools makes tradeoffs. With Sifter, we keep it simple so small teams can actually have bug tracking because for them, they’d just as soon have no bug tracking as use Jira. It’s way too complicated for them.
+> >
+> > Other teams doing really complex things find Sifter laughably simple. Their complexity needs justify the investment in training and excluding non-technical users.
+> >
+> > This is all complicated by the fact that teams inevitably want ease-of-use and powerful customization, which, while not inherently exclusive of each other, are generally at odds with each other.
+> >
+> > To make matters worse, on a larger team, some portion of the team values simplicity more while other parts of the team value advanced configuration and control. Neither group is wrong, but it is incredibly difficult to find a tool with the balance that’s right for a team.
+> >
+> > I could probably expand on this more, but that should be enough to see if you think it’s a good topic or not.
+>
diff --git a/sifterapp/org-chaos.txt b/sifterapp/org-chaos.txt
new file mode 100644
index 0000000..b695823
--- /dev/null
+++ b/sifterapp/org-chaos.txt
@@ -0,0 +1,33 @@
+Chaos vs. Organization
+
+Programmers love organization. Programming is organization, after all: organize electrons in a particular way and some result follows.
+
+Organization is an essential part of programming. Everyone may have their own *style* of organization -- just mention a style on your favorite social media site to cue up a firestorm over why you're right or wrong -- but few would argue that organization itself is bad.
+
+But why? What are you trying to achieve when you organize something? Most of the time the goal of organizing is to achieve a state of simplicity in which the next step is obvious.
+
+The only problem is that once you start organizing things there's a tendency to overdo it. More organization options do not always mean greater simplicity.
+
+Think of your issues and the todo list that grows out of them as a giant pile of Lego. If you wanted to organize a thousand Lego pieces you might choose something like color. That would give you perhaps a dozen buckets to separate your Lego pieces into, and that would help you find what you need when you need it. What you don't want is a thousand buckets, each with one Lego piece in it. That doesn't simplify anything.
+
+
+A little bit of metadata is incredibly powerful; without it we'd be starting over from scratch all the time. Your project needs some metadata -- which elements are essential will depend on the project -- but too much will quickly have the opposite effect. More fields and metadata only bring additional organization if your team is big enough both to value the extra organization and to keep up with it.
+
+For example, categorizing 20 things into 20 categories creates more chaos than it removes, but categorizing those same 20 items into 4 categories can help you find what you need when you need it.
+
+If you have a small team, trying to keep up with a lot of extra, ever-changing metadata will only bring chaos, not organization. You never want your metadata fields to outgrow your team's ability to stay on top of them. In other words, you want just enough metadata.
+
+What’s really important to your project? Due date? Version number? Priority? Severity? Estimate? Level of effort? Category? Tags? How do you bring just the right amount of structure and organization?
+
+The problem is that too many ways of organizing, cross-filing and slicing metadata create more chaos for most small teams. Too many buckets make it harder to find the Lego you need. Worse, complex organization systems have a way of becoming the focus. What you think will help you keep every idea at your fingertips ends up becoming so complicated it's the only idea you can focus on. You lose the bug-fixing forest in the proverbial trees of metadata.
+
+In other words, spend all your time filing your Lego into buckets and you'll never get anything built.
+
+Here's a truth of programming that only comes with years of experience: software bugs are inherently messy. They're unpredictable and often unreproducible. Some are easy to fix and others border on impossible. The process is chaotic and disorganized by nature.
+
+But we're programmers, and at some fundamental level that chaos and unpredictability is simply unacceptable. So we slice and dice metadata about the chaos; we classify it all and spend our time imposing order rather than actually making things.
+
+Some organization is, of course, needed. Formal testing is about bringing a degree of order to the process, but as software developers we sometimes want too much order. The result is too much focus on irrelevant details because they bring a semblance of order.
+
+Rocket ships and space shuttles need very detailed organization. A small web application, static web site, or CMS site needs only a basic level of organization. Trying to track bugs like Boeing or NASA is overkill in these situations. It won't make your small team more productive and it won't get your projects completed. In fact, it will more than likely do exactly the opposite.
+
diff --git a/sifterapp/requiresments-vs-assumptions.txt b/sifterapp/requiresments-vs-assumptions.txt
new file mode 100644
index 0000000..b5287c2
--- /dev/null
+++ b/sifterapp/requiresments-vs-assumptions.txt
@@ -0,0 +1,28 @@
+One of the toughest things any team has to do is grow and change. Yet adapting to these changes is perhaps the most important part of growing your software business. There is, as they say, no constant but change. Projects change, goals change.
+
+Ultimately customers drive software development. And since most projects don't start with many customers, change is inevitable. Change can be difficult, but failing to adapt as your project grows is the surest way to doom it.
+
+The biggest risk many companies and development teams face is ending up working on the wrong things. When the project's needs change, your team needs to change with them; otherwise you end up wasting time and development effort on things that are better left undone -- things that are no longer necessary, but still feel necessary.
+
+To avoid this and keep a team and company able to adapt as the project's needs change, it helps to cultivate a culture in which there are no sacred cows. That is, let go of all the things you *think* you need to do; stop, listen to your customers, and figure out what you *really* need to do.
+
+Consider, for example, your project's list of "requirements". It's very likely that before you ever wrote a single line of code you wrote out some basic "requirements", things you thought your software needed to be able to do. In the beginning that's good, though be careful with the word "requirement".
+
+The things that we define as “requirements” at the beginning of a project are actually little more than assumptions based on incomplete information. Not a line of code has been written and you're already sure you know what your customers want? It can be humbling to realize, but the truth is you most likely have only a vague idea of what your customers want.
+
+To know what your customers really want, and what your requirements actually are, you need customers. If you get too rigid too early and define too many things as "requirements", before you know it everyone down the chain of command believes those requirements are immutable simply because of what they're called. It's too rigid a word. You end up wasting time on features and improvements your customers don't even want just because you labeled them "requirements".
+
+To give a real world example, we thought an option to "theme" and brand Sifter was an absolute requirement, but after launching, we found that almost none of our customers cared. We've received maybe three half-hearted requests for an option to theme Sifter in 6+ years. What we thought was a requirement would have been a waste of time if we had kept pursuing it. The team at Basecamp also apparently felt that theming was no longer a requirement; the new version of Basecamp dropped support for custom themes. If customers don't want it, ditch it.
+
+On the other hand, new "requirements" may come up. Customers may clamor for something you never even considered. These are the features you want to invest your time in.
+
+So how do you find that balance? How do you know which features really should be requirements and which just feel like requirements because you've been calling them that for so long?
+
+One of the best ways is to step back and re-evaluate your requirements on a regular basis, perhaps when you sit down to set the priority of your issues or during weekly/monthly reviews. This alone will help change your team's outlook on "requirements" by making them a moving target. More often than not, when a team comes up with a list of requirements they're more focused on checking off the items on that initial list than they are on stepping back to see whether they're really getting those features right or whether customers even want them at all.
+
+Another helpful thing is to drop the whole concept of requirements, or at least use the word sparingly. When *everything* is called a "requirement", nothing is a requirement. Toss that word around too casually and it becomes disingenuous at best and unrealistic at worst.
+
+Perhaps the best way to change your approach to requirements is to make features earn their way. Is the feature something your customers are clamoring for? Or something you yourself desperately need? A lot of things that teams think are requirements before launch turn out to be entirely unimportant post-launch, once customers start making requests. If your customers haven't asked for something, there's a good chance it's not a requirement. It might still be a cool new feature or something that's nice to have, but it's not a requirement.
+
+The whole approach of viewing things as "mandatory" for success is often plain wrong. Many of a project's "requirements" turn out to be incorrect assumptions. The sooner you let go of those assumptions, the sooner your team can get that burden off their shoulders and get back to work on the things that really are important to your customers.
+
diff --git a/sifterapp/streamlining-issue-benefits.txt b/sifterapp/streamlining-issue-benefits.txt
new file mode 100644
index 0000000..969ed8b
--- /dev/null
+++ b/sifterapp/streamlining-issue-benefits.txt
@@ -0,0 +1,19 @@
+When it comes to workflows, complexity is bad. We've written previously about how we've streamlined the issue workflow by creating simple forms and eliminating the confusing, distracting elements you don't need.
+
+The purpose of any filing system is to translate noise into signal. Perhaps surprisingly, one of the best ways to do that is to keep things simple.
+
+The simpler the workflow, the easier it is for work to flow through it. A simple workflow means there are fewer "buckets" for issues to get lost in, and eliminating extra steps translates into less ambiguity. For example, by keeping statuses limited to open and closed, we eliminate the need to file and re-file issues as work progresses. Fewer steps in the workflow means less work.
+
+It also means there are fewer places to file issues, which means fewer places to lose them. You want to *track* issues, after all, not just file them away in some chaotic system where they get lost amidst an overwhelming sea of issues.
+
+A simpler workflow can sometimes feel inflexible, though. The set-in-stone rules don't allow individuals to make decisions on a case-by-case basis. And that's the point. You don't want to make decisions on a case-by-case basis; you want the system to work for every case. The solution, then, is not to change the system, but to help your team understand the process and use the guidelines you've established to work within the system.
+
+tk example of how "guidelines restore flexibility"
+
+Stick to your system and you'll eliminate two of the biggest productivity roadblocks -- uncertainty and doubt. To stick with the statuses example, using only open and closed means no one ever has to worry about the new issue that Jim assigned "super extra high priority" status, because that never happened. Priority is established during regular reviews, not assigned through a status message.
+
+tk another example of benefits
+
+Filing bugs and creating new issues might make you feel like something is getting done -- and it is -- but let's not lose sight of the endgame: fixing those issues and shipping better software.
+
+When it comes to issues in software development, that means capturing all the noise and using regular triaging to sort it into a simple, discrete and, most importantly, fixed number of "buckets". From there your team can prioritize and get to work.
diff --git a/sifterapp/zapier-announcement.txt b/sifterapp/zapier-announcement.txt
new file mode 100644
index 0000000..878ce14
--- /dev/null
+++ b/sifterapp/zapier-announcement.txt
@@ -0,0 +1,31 @@
+Bugs, feature requests and issues don't exist in a vacuum. Wouldn't it be nice if you could create issues in Sifter and have them automatically link up with the other tools your team uses, whether it's HipChat, Dropbox, Gmail or Twitter? Well, now you can, thanks to Zapier.com.
+
+Today we're excited to announce the official Sifter integration for Zapier. Zapier, which is like a matchmaker for web services, now supports Sifter, making it easier than ever to connect Sifter to more than 300 other web services.
+
+## What is Zapier?
+
+Zapier is a simple but powerful tool: the glue that binds different web services together. Zapier's "About" page puts it this way: "Zapier is for busy people who know their time is better spent selling, marketing, or coding. Instead of wasting valuable time coming up with complicated systems -- you can use Zapier to automate the web services you and your team are already using on a daily basis."
+
+Zapier automates the web by giving you the power to connect web apps. When you do something in one web service, Zapier notices and pushes that information to another service according to "Triggers" and "Actions" you define. That is, an event in the first web service "triggers" an action that does something with data from that service.
+
+Sound vague? That's partly because the possibilities are nearly unlimited.
+
+## What can Sifter and Zapier do together for me?
+
+It's probably easiest to understand Zapier with an example.
+
+By default, Sifter uses email notifications sparingly. We don't want to clutter your inbox, so we only email you if you're directly involved with an issue. That said, you can connect your Sifter account to Zapier to route notifications of any new issues to email and other places. For instance, we push a notification to our team Slack account any time someone creates a new issue. This way, everyone stays abreast of the activity without cluttering their inbox.
+
+tk screenshot
+
+Our Zapier integration can also be used in more sophisticated and powerful ways if you tap into advanced notifications. Emails about new issues are helpful, but in some cases you'll want to go further.
+
+Let's say you want to be notified by text message every time there's a new issue in the "Security" category for your project. Using Zapier, you simply create a trigger for new issues, filter by category so that only security issues come through, connect that to your mobile number and bam! Instant SMS updates whenever someone opens a new security issue on your project.
+
+tk screenshot
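+
+For perspective, here's a rough sketch of the kind of glue code a single zap like this replaces. It's a hypothetical illustration, not Sifter's or Zapier's actual API -- the endpoint, payload fields and credentials are all assumptions -- written in Python, using Flask to receive a new-issue webhook and Twilio to send the text message:
+
+```python
+# Hypothetical glue code that a "new security issue -> SMS" zap replaces.
+# The payload fields below are assumptions for illustration, not Sifter's
+# actual webhook format; Zapier handles the real trigger data for you.
+from flask import Flask, request
+from twilio.rest import Client
+
+app = Flask(__name__)
+twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials
+
+@app.route("/new-issue", methods=["POST"])
+def new_issue():
+    issue = request.get_json()
+    # Filter: only act on issues in the "Security" category.
+    if issue.get("category") == "Security":
+        # Action: send an SMS about the new issue.
+        twilio.messages.create(
+            body="New security issue: " + issue.get("subject", ""),
+            from_="+15005550006",  # your Twilio number (placeholder)
+            to="+15005550001",     # your mobile number (placeholder)
+        )
+    return "", 204
+```
+
+And that's before hosting, monitoring and maintaining it yourself. With Zapier, the trigger, the filter and the SMS action are all configured in a few clicks, no code required.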
+
+## Getting Started
+
+If you don't have one yet, go create a Zapier account. Once you've got that set up, check out the [Sifter page](https://zapier.com/zapbook/sifter/) and start connecting your issues to the other services you use. Found a cool way to automate the tedious tasks in your workflow? Be sure to let us know.
+
+