Diffstat (limited to 'sifterapp/complete')
23 files changed, 661 insertions, 0 deletions
diff --git a/sifterapp/complete/browserstack-images.zip b/sifterapp/complete/browserstack-images.zip Binary files differnew file mode 100644 index 0000000..5aa9646 --- /dev/null +++ b/sifterapp/complete/browserstack-images.zip diff --git a/sifterapp/complete/browserstack-images/browserstack01.jpg b/sifterapp/complete/browserstack-images/browserstack01.jpg Binary files differnew file mode 100644 index 0000000..85da575 --- /dev/null +++ b/sifterapp/complete/browserstack-images/browserstack01.jpg diff --git a/sifterapp/complete/browserstack-images/browserstack02.jpg b/sifterapp/complete/browserstack-images/browserstack02.jpg Binary files differnew file mode 100644 index 0000000..d53c254 --- /dev/null +++ b/sifterapp/complete/browserstack-images/browserstack02.jpg diff --git a/sifterapp/complete/browserstack-images/browserstack03.jpg b/sifterapp/complete/browserstack-images/browserstack03.jpg Binary files differnew file mode 100644 index 0000000..ccdaf46 --- /dev/null +++ b/sifterapp/complete/browserstack-images/browserstack03.jpg diff --git a/sifterapp/complete/browserstack-images/browserstack04.jpg b/sifterapp/complete/browserstack-images/browserstack04.jpg Binary files differnew file mode 100644 index 0000000..d32ffc9 --- /dev/null +++ b/sifterapp/complete/browserstack-images/browserstack04.jpg diff --git a/sifterapp/complete/browserstack-images/browserstack05.jpg b/sifterapp/complete/browserstack-images/browserstack05.jpg Binary files differnew file mode 100644 index 0000000..ebe3b2d --- /dev/null +++ b/sifterapp/complete/browserstack-images/browserstack05.jpg diff --git a/sifterapp/complete/browserstack-images/browserstack06.jpg b/sifterapp/complete/browserstack-images/browserstack06.jpg Binary files differnew file mode 100644 index 0000000..c11505f --- /dev/null +++ b/sifterapp/complete/browserstack-images/browserstack06.jpg diff --git a/sifterapp/complete/browserstack.txt b/sifterapp/complete/browserstack.txt new file mode 100644 index 0000000..e23e224 --- /dev/null +++ b/sifterapp/complete/browserstack.txt @@ -0,0 +1,90 @@ +Testing websites across today's myriad browsers and devices can be overwhelming. + +There are roughly [18,700 unique Android devices](http://opensignal.com/reports/2014/android-fragmentation/) on the market. There are somewhere around 14 browsers, powered by several different rendering engines available for each of those devices. Then there's iOS, Windows Mobile and scores of other platforms to consider. + +Trying to test everything is impossible at this point. + +The good news is that you don't need to worry about every device on the web. Don't make testing harder than it needs to be. + +Don't test more, test smarter. + +Before you dive into testing, consult your analytics and narrow the field based on what you know about your traffic. Find out which devices and browsers the majority of your visitors are using. Test on the platform/browsers that make up the top 80 percent of your visitors. Once you're confident you've got a great experience for the majority of your visitors you can move on to the more obscure cases. + +If you're launching something new you'll need to do some research about which devices your target audience favors. That way you'll know, for example, that your target market tends to favor iOS devices and you'll want to spend some extra time testing on various iPhone/iPad configurations. + +Once you know what your visitors are actually using, you can start testing smarter. 
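If your analytics tool can export visits broken down by browser and device, finding that top 80 percent takes only a few lines of scripting. Here's a rough sketch in Python; the configurations and visit counts are invented for illustration, not real traffic data:

```python
def coverage_set(visits, target=0.80):
    """Return the smallest set of browser/device combos, most popular
    first, that together account for `target` share of total visits."""
    total = sum(count for _, count in visits)
    picked, covered = [], 0
    for name, count in sorted(visits, key=lambda pair: pair[1], reverse=True):
        picked.append(name)
        covered += count
        if covered / total >= target:
            break
    return picked

# Invented example numbers standing in for an analytics export
visits = [
    ("Chrome 35 / Windows 7", 4200),
    ("Safari 7 / iOS 7 (iPhone)", 3100),
    ("IE 10 / Windows 7", 2300),
    ("Firefox 30 / Windows 7", 1400),
    ("Chrome / Android 4.4", 1100),
    ("Opera Mini / Android", 400),
    ("IE 8 / Windows XP", 250),
]

print(coverage_set(visits))  # the short list to test first
```

The output is the handful of configurations worth testing first; everything below the cutoff can wait until those are solid.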
+ +Let's say you know that the majority of your visitors come from 8 different device/browser configurations. You also know from studying the trends in your analytics that you're making headway in some new markets overseas and traffic from Opera Mini is on the rise, so you want to pay special attention to Opera Mini. + +Armed with information like that you're ready to start testing. Old school developers might fire up the virtual machines at this point. There's nothing wrong with that, but these days there are better tools for testing your site. + +### Introducing BrowserStack + +[BrowserStack](http://www.browserstack.com/) is a browser-based virtualization service that puts nearly every operating system and browser combination under the sun at your fingertips. + +BrowserStack also offers mobile emulators for testing almost every version of iOS, Android and Opera Mobile (sadly, at the time of writing, there are no Windows Mobile emulators available on BrowserStack). + +You can also choose the screen resolution you'd like to test at, which makes possible to test resolution-based CSS media queries if you're using them. BrowserStack also has a dedicated [Responsive Design testing service](http://www.browserstack.com/responsive). + +BrowserStack is not just a screenshot service. It launches a fully interactive virtual machine in your browser window. That means you can interact with your site just like you would if you were using a "real" virtual machine or had the actual device in your hand. It also means you can use the virtual browser's developer tools to debug any problems you encounter. + +### Getting Started With BrowserStack + +Getting started with BrowserStack is simple, just sign up for the service and then log in. A free trial account will get you 30 minutes of virtual machine time. Pick an OS/browser combination you want to test, enter your URL and start up your virtual machine. + +<figure> + <img src="browserstack01.jpg" alt="Configuring BrowserStack Virtual Machine"> + <figcaption><b>1</b> Configuring your virtual machine.</figcaption> +</figure> + +BrowserStack will then launch a virtual machine in your browser window. Now you have a real virtual machine running, in this case IE 10 on Windows 7. + +<figure> + <img src="browserstack02.jpg" alt="BrowserStack Virtual Machine"> + <figcaption><b>2</b> Testing sifterapp.com using IE 10 on Windows 7.</figcaption> +</figure> + +Quick tip: to grab a screenshot of a bug to share with your developers, just click the little gear icon, which will reveal a camera icon. + +<figure> + <img src="browserstack03.jpg" alt="Taking a screenshot in BrowserStack"> + <figcaption><b>3</b> Taking a screenshot in BrowserStack.</figcaption> +</figure> + + +Click the camera and BrowserStack will generate a screenshot that you can annotate and share with your team. You could, for example, download it and add it to the relevant issue in your bug tracker. + +<figure> + <img src="browserstack04.jpg" alt="Screenshot annoations in BrowserStack"> + <figcaption><b>4</b> Annotating screenshots in BrowserStack.</figcaption> +</figure> + +### Local Testing with BrowserStack + +If you're building a brand new site or app, chances are you'll want to do your testing before everything is public. If you have a staging server you could point BrowserStack to that URL, but there's another very handy option -- just point BrowserStack to local files on your computer. 
+ +To do this BrowserStack will need to install a browser plugin, but once that's ready to go testing a local site is no more difficult than any other URL. + +Start by clicking the "Start local testing" button in the sidebar at the left side of the screen. This will present you with a choice to use either a local server or a local folder. + +<figure> + <img src="browserstack05.jpg" alt="Setting up local testing in BrowserStack"> + <figcaption><b>5</b> Setting up local testing in BrowserStack.</figcaption> +</figure> + +If you've got a dynamic app, pick the local server option and point BrowserStack to your local URL. Alternately, just point BrowserStack to a folder of files and it will serve them up for you. + +<figure> + <img src="browserstack06.jpg" alt="Testing a local folder in BrowserStack"> + <figcaption><b>6</b> Testing a local folder of files with BrowserStack.</figcaption> +</figure> + +That's it! Now you can edit files locally, make your changes and refresh BrowserStack's virtual machine to test across platforms without ever making your site public. + +### Beyond the Basics + +Once you start using BrowserStack you'll wonder how you ever did without it. + +There's also a good bit more than can be covered in a short review like this, including [automated functional testing](https://www.browserstack.com/automate), a responsive design testing service that can show your site on multiple devices and browsers, an [automated screenshot service](http://www.browserstack.com/screenshots) and more. You can even [integrate it with Visual Studio](http://www.hanselman.com/blog/CrossBrowserDebuggingIntegratedIntoVisualStudioWithBrowserStack.aspx). + +BrowserStack offers a free trial with 30 minutes of virtual machine time, which you can use for testing. If you decide it's right for you there are a variety of reasonably priced plans starting at $49/month. diff --git a/sifterapp/complete/bugs-issues-notes.txt b/sifterapp/complete/bugs-issues-notes.txt new file mode 100644 index 0000000..120f2f4 --- /dev/null +++ b/sifterapp/complete/bugs-issues-notes.txt @@ -0,0 +1,18 @@ +What a task "is" +Thus, + +Best-guess order of magnitude. 1 minute. 1 hour. 1 day. 1 week. 1 month. (Anything longer than +a month, and the task hasn't been sufficiently broken down into smaller +pieces.) + + + +> More often than not, the goal of classifying things along those lines isn't +> about *what* the individual issue is, but whether it's in or out of +> scope. Whether something is in out of scope is a worthwhile facet for +> classification, but using bug vs. feature as a proxy for that confuses the +> issue. More often than not, in or out of scope is best handled through +> discussion, not simply reclassifying the issue. When that happens, + + Think "separate but equal". +> diff --git a/sifterapp/complete/bugs-issues.txt b/sifterapp/complete/bugs-issues.txt new file mode 100644 index 0000000..4b31cce --- /dev/null +++ b/sifterapp/complete/bugs-issues.txt @@ -0,0 +1,52 @@ +Love it. The only thing that I think could be worked in is mentioning the (The separate is inherently unequal bit.) The article does a great job of explaining how it's not necessarily helpful, but I think it could go even further illustrating that it's even *potentially* harmful to create that dichotomy. + + + +One of the hardest things you ever have to do is figure out what to do next. There's shelf after shelf of books in the self help section dedicated to helping you discover the answer to that question. 
+ +We can't help you with the abstract version of that question, but it isn't just the abstract question that's hard. All of the software development teams we've talked to struggle with a version of this same question. + +Knowing which work to do next is the most difficult problem out there. + +Every development team's wish list is incredibly long. Every time you sit down at your screen there are a dizzying array of choices. This is part of what makes software development exciting, but it can also be overwhelming. What should you do next? Fix bugs? Ship new features? Improve security or performance? There's no perfect science for calculating which of these things is most important. + +One thing we can tell you won't help -- keeping track of everything in separate systems only makes the decision process more challenging. + +This can be counter-intuitive at first. For example, you're probably used to tools and systems that have some things you call "bugs", quantifiable shortcomings in software, some you call "issues", potential problems that aren't directly to related code, for example pricing decisions, and other things you call "features" or "enhancements" for ideas that haven't been implemented yet. + +Organizing tasks by category like this offers a comforting way to break things down. It makes us feel like we've done something, even when we haven't. All we've really done is rename the problem. We still don't have any better idea of what to do next. + +If you've worked with such a system for long you've probably seen it fall apart. The boundaries between these things --bugs, issues, new features -- are actually quite vague and trying to make your tasks fit into an arbitrary category rarely helps you figure out what to work on next. Worse it forces you to classify everything in one of those categories even when the actual problem might be too complex to fit just one. + +We've found that it can be even worse than just "not helpful", divide your work in to categories like this and some tasks will automatically become second class citizens. There is no such thing as separate but equal. Separate is inherently unequal. In this case bugs and issues that take a backseat to new features. + +It's time to take a different approach. + +We've found that the best technique for deciding what you should do next is not to classify what you need to do. That's a subtle form of procrastination. + +To actually decide between possibilities you need to figure out the priority of the task. To determine priority you need to look at all your possible next tasks and balance two factors: the positive impact on your customers and the effort it will take to complete that task. Priority isn't much help without considering both impact and effort. + +Establish a priority hierarchy for your tasks and you'll never wonder what you should do next again. You'll know. + +Sometimes finding that balance between impact on the customer and effort expended is easy. A bug that affects 20 percent of customers and takes 5 minutes to fix is a no-brainer -- you fix it, test and ship. + +Unfortunately, prioritizing your tasks will rarely be this black and white. + +What to do with a bug that only affects one customer (that you know of), but would take a full day to fix, is less immediately obvious. + +What if that customer is your biggest customer? What if that customer is a small, but very influential one? What if your next big feature could exacerbate the impact of that bug? What if your next big feature will eliminate the bug? 
+ +There's no formula to definitively know which task will be the best use of your time. If the bugs are minor, but the next feature could help your customers be more successful by an order of magnitude, your customers might be more than willing to look the other way on a few bugs. + +There's only really one thing that's more or less the same with every decision: What a task is (bug, issue, feature) is much less important than the priority you assign it. + +The classification that matters is "what percentage of customers does this +affect" and "how long will it take to do this task?" + +That does not mean you should dump all your categories. For instance, if you group tasks based on modules like "Login", "Filtering" or "Search", that grouping helps you find related tasks when you sit down to work on a given area. In that case the categories become helpful because they help your team focus. + +Some categories are useful, but whether something is a bug, issue or feature should have almost no bearing on whether the task in question is truly important. + +A bug might be classified as a bug, but it also means that a given feature is incomplete because it's not working correctly. It's a gray area where "bug" vs. "feature" doesn't help teams make decisions, it only lets us feel good about organizing issues. It's the path of least resistance and one that many teams choose, but it doesn't really help get work done. + +If you want to get real work done, focus your efforts on determining priority. Decide where the biggest wins are for your team by looking at the impact on your customers versus the time required to complete the task. There's no formula here, but prioritize your tasks rather than just organizing them and you'll never need to wonder, what should I do next. diff --git a/sifterapp/complete/forcing-responsibility-software.txt b/sifterapp/complete/forcing-responsibility-software.txt new file mode 100644 index 0000000..611d602 --- /dev/null +++ b/sifterapp/complete/forcing-responsibility-software.txt @@ -0,0 +1,29 @@ +Stop Forcing Responsibility onto Software + +There's a common belief among programmers that automating tedious tasks is exactly the reason software was invented. The idea is that software can save us from all this drudgery by making simple decisions for us and removing the tedious things from our lives. + +This is often true. Think of all the automation in your life, from your thermostat to your automobile's service engine light, software *does* remove a tremendous number of tedious tasks from our lives. + +Perhaps the best example of this is the auto-save feature that runs in the background of most applications these days. Auto-save frees you from the tedious task of saving your document. You no longer need to pound CTRL-S every minute or two like an animal. Instead, your lovely TPS reports are automatically saved as you change them and you don't have to worry about it. + +Unfortunately when you have a hammer as powerful as software everything starts to look like a nail. Which is to say that, just because a task is tedious, does not mean it's something that can be offloaded to software. + +It's just as important to think about whether the task is something that software *can* be good at. For example, while auto-saving your TPS reports is definitely something software can be good at, actually writing the reports is probably something humans are better at. 
+ +This temptation to automate away the difficult, sometimes tedious, tasks in our lives is particularly tempting when it comes to prioritizing issues in your issue tracking software. + +Software is good at tracking issues, but sadly, most of the time software turns out to be terrible at prioritizing them. To understand why, consider the varying factors that go into prioritizing issues. + +At a minimum prioritizing means weighing such disparate factors as resource availability, other potential blockers and dependencies, customer impact, level of effort, date/calendar limitations, and more. We often think that by plugging all of this information into software, we can automatically determine a priority, but that's just not the case. + +Plugging all that information into the software helps collate it all in one place where it's easy to see, but when it comes to actually making decisions about which issue to tackle next, a human is far more likely to make good decisions. Software helps you make more informed choices, but good decisions still require human understanding. + +When all those often conflicting factors surrounding prioritization are thrown together as a series of data points, which software is supposed to then parse and understand, what you'll most likely get back from your software is exactly what you've entered -- conflicts. + +It might not be the most exciting task in your day, but prioritizing issues is a management task, that is, it requires your management. You need to use intuition and understanding to make decisions based on what's most important *in this case* and assign a single simple priority accordingly. + +Consider two open issues you need to make a decision on. The first impacts ten customers. The second only impacts one customer directly, but indirectly impacts a feature that could help 1,000 customers. So to what degree is the second issue actually impacting customers? And which should you focus on? Algorithms designed to prioritize customer impact will pick the first, but is that really the right choice? + +These questions aren't black and white, and it's difficult for a software system to accurately classify/quantify them and take every possible variable into account. + +Perhaps in the AI-driven quantum computing future this will be something well-suited for software. In the mean time though, tedious or not, human beings still make the best decisions about which issues should be a priority and what your team should tackle next. diff --git a/sifterapp/complete/how-to-respond-to-bug-reports.txt b/sifterapp/complete/how-to-respond-to-bug-reports.txt new file mode 100644 index 0000000..617e1ca --- /dev/null +++ b/sifterapp/complete/how-to-respond-to-bug-reports.txt @@ -0,0 +1,29 @@ +If you look at the bug reports on many big open source software projects it's almost like the developers have a bug report Magic-8 Ball. Reports come in and developers just give the Magic-8 Ball a shake and spit out the response. You'll see the same four or five terse answers over and over again, "working as designed", "won't fix", "need more info", "can't reproduce" and so on. + +At the same time large software projects often have very detailed guidelines on how to *report* bugs. Developers know that the average user doesn't think like a developer so developers create guidelines, checklists and other tips designed to make their lives easier. + +The one thing you almost never see is a set of guidelines for *responding* to bug reports. 
The other side of the equation gets almost no attention at all. I've never seen an open source project that had an explicit guide for developers on how to respond to bugs. + +If such a guide existed projects would not be littered with Magic-8 Ball-style messages that not only discourage outsiders from collaborating, but showcase how out of touch the developers are with the users of their software. + +It's time to throw away the Magic 8 Ball of bug reports and get serious about improving your software. + +## Simple Rules for Responding to Bug Reports. + +1. **Don't take bug reports personally**. The reporters are trying to help. They may be frustrated, they may even be rude, but remember they're upset and frustrated with a bug, not you. Now they may not phrase it that way, they may think they're upset with you. Part of your job as a developer is to negotiate that social gap between angry users and yourself. The first step is to stop taking bug reports personally. You are not your software. + +2. **Be specific in your responses**. Magic 8 Ball responses like "can't reproduce" and "need more info" aren't just rude, they're failures to communicate. Which is to say that both may be true in many cases, but neither are helpful for the bug reporter. The bug reporter may not be providing helpful info, but in dropping in these one-liners you're not being helpful either. + +In the case of "need more info" take a few seconds to ask for what you actually need. Need to know the OS or browser version? Then ask for that. If you "can't reproduce" tell the user in detail what you did, what platform or browser you were using and any other specifics that might help them see what's different in their case. Be specific, ask "What browser were you using?" or "Can you send a screenshot of that page or copy and paste the URL so that I can see what you're seeing?" instead of "Need more info". + +3. **Be collaborative**. This is related to point one, but remember that the reporter is trying to help and the best way to let them help you is to, well, let them help you. Let them collaborate and be part of the process. If your project is open source remember that some of your best contributors will start off as prolific bug reporters. The more you make them part of the process the more helpful they'll become over time. + +4. **Avoid technical jargon**. For example, instead of "URL" say "the address from the web browser address bar". This can be tricky sometimes since what you think of as everyday speech may read like technical jargon to your users. When in doubt err on the side of simple, direct language. + +Along the same lines, don't assume too much technical knowledge on the part of bug reporters. If you're going to need a log file be sure to tell the user exactly how and where to find the information you need. Don't just say, "what's the output of `tail -f /var/log/syslog`?", tell them where their terminal application is, how to open it and then to cut and paste the command and results you want. A little bit of hand holding goes a long way. + +5. **Be patient**. Don't dismiss reports just because they will involve more effort and research. It's often said that there is no such thing as writing, just re-writing. The same is true of software development, fixing bugs *is* software development. The time and research it takes to adequately respond to bug reports isn't taking you away from "real" development, it is the real development. + +6. **Help them help you**. Think of this as the one rule to rule the previous five. 
Bug reports are like free software development training. Just because you're the developer doesn't mean your users don't have things to teach you, provided you're open to learning. Take everything as a teaching/learning opportunity and you'll find that not only do your bug reports make your software better, they make you a better software developer. + +It can be hard the remember all this stuff when you have a pile of bugs you want to quickly work through. Try to resist that urge to work through all the new bug reports before lunch or otherwise rush it. Often times it's more effective to invest a few extra moments collaborating with the reporter to make sure that bugs are handled well. diff --git a/sifterapp/complete/issue-tracking-challenges.txt b/sifterapp/complete/issue-tracking-challenges.txt new file mode 100644 index 0000000..fc5e03e --- /dev/null +++ b/sifterapp/complete/issue-tracking-challenges.txt @@ -0,0 +1,63 @@ +Tracking issues isn't easy. It's tough to find them, tough to keep them updated and accurate, and tougher still to actually resolve them. + +There are quite a few challenges that can weigh down a good issue tracking process, but fortunately there are some simple fixes for each. Here are a few we've discovered over the years and some ways to solve them. + +# Lack of participation. + +The most basic problem is getting your team to actually use the software. Participation and collaboration are the cornerstones of a good issue tracking process, but that collaboration and teamwork runs smoothest when everyone is using the same system. + +If everyone is not working together the process will fall apart before you even get started. + +If your team is using email or littering their desks with sticky notes that's a pretty sure sign you have a problem. + +Solution: Get everyone using the same system and make sure everyone is comfortable with it. + +# Too difficult to report issues. + +If it's too hard to actually report an issue, for example, if there are too many required, but irrelevant fields in your forms, or it's too hard to upload relevant files, or it's just too difficult to login and find the "new issue" link, then no one will ever report issues in the first place. And unreported issues are unsolved issues. + +Solution: Keep your forms simple, offer drag-and-drop file uploading, and, if all else fails, email integration. + +# Too difficult to find issues. + +If you can't find an issue, you can't fix it. + +Typically the inability to find what you're looking for is the result of weak or minimal searching and filtering tools in your issue tracker. + +Sorting and filtering needs to be powerful and yet simple. Overcomplicating these tools can mean you'll accidentally filter out half of your relevant issues and not even realize it. + +Solution: Simplify the process of searching, filtering and finding issues. + +# Over-engineering the process. + +The best way to avoid over-engineering anything is to keep things as simple as possible. For example, try to solve your problems with your existing tools before you create new tools. + +One example of this we've discovered is having [too many possible statuses](https://sifterapp.com/blog/2012/08/the-challenges-with-custom-statuses/) for an issue. + +Keeping things simple -- we have three choices, "On Hold", "Assigned", and "Accepted" -- avoids both the paradox of choice and any overlap. If you have ten possibilities (or worse, completely custom statuses) you've added mental overhead to the process of choosing one. 
Keeping it simple means you don't have to waste time picking a status for a given issue. + +Too many statuses creates crevices for issues to hide in and be forgotten when filtering issues. Overly detailed statuses can also confuse non-technical people who will wonder, "what's the difference between accepted and in progress?" Good question. Avoid it by making statuses clear and simple. + +There are also clear, hard lines between each of these three statuses and no questions about what they mean. + +A related problem, and the reason some teams will clamor of more status possibilities, is the tendency to conflate statuses with resolutions. For example, "working as designed" isn't a status; it's a resolution. Similarly, "can't reproduce" isn't a status; it's a resolution. + +Solution: Keep your status options simple and focus on truly different states of work with clear lines between them. + + +# Over-reliance on software for process. + +Software is a tool. Tools are wielded by people. The tool alone can only do so much. Without people to guide them even the best of tools will fail. + +That's why you need to make people the most important part of your issue process. + +Make room for the human aspects of issue tracking, like regular testing sessions, consistent iteration and release cycles, and dedicated time for fixing bugs. + +Solution: Invest time and effort in the human processes that will pair with +and support the software. + +# Conclusion + +Tracking issues isn't always easy, but you can make it easier by simplifying. + +Cut out the cruft. Make sure you have good software and good processes that help your team wield that software effectively. Let the software do the things software is good at and let your team fill in the parts of the process that people are good at. diff --git a/sifterapp/complete/private-issues.txt b/sifterapp/complete/private-issues.txt new file mode 100644 index 0000000..c2b033d --- /dev/null +++ b/sifterapp/complete/private-issues.txt @@ -0,0 +1,28 @@ +Sifter does not offer "private" issues. Here's why. + +In most cases the reasons teams want private issues are the very reason private issues are problematic. There seems to be three primary reasons teams want private issues. The first is so that clients don't see your "mistakes" and will somehow perceive the work as higher quality. Except that this is highly flawed thinking. The idea that presenting a nice clean front will convince the client you're some kind of flawless machine is like trying to make unicorns out of rhinos. + +The far more likely outcome is that clients can't see half of what you're working on and end up thinking you aren't actually working. Or they might think you're ignoring their issues (which are public). If your highest priority work is all in private issues the client is cut off from the development process and will never get the bigger picture view. This can lead to all sorts of problems including the client feeling like they're not in control. That's often the point at which clients will either pull the plug or want to step in and micromanage the project. + +What you end up doing when you use private issues this way is protecting your image at the client's expense. First and foremost remember that the work isn't about your image, it's about the client and what they want. Assuming you're doing quality work, your image isn't going to suffer just because you're doing that work in front of the client, warts and all. Rhinos have one huge evolutionary advantage over unicorns -- they're real. 
+ +Keeping issues private to protect your image ends up skewing expectations the wrong way and can easily end up doing far more damage to your reputation with the client than showing them a few bugs in the software you're developing. + +Another reason some teams want private issues is related, but slightly different -- they want to shield the client from the technical stuff so they don't get distracted from the larger picture. + +This is indeed tempting, especially with the sort of client that likes to micromanage every step of the development process whether their input is needed or not (and seemingly more often when it is not). It's tempting, when faced with this sort of client, to resort to using private issues as a way of avoiding conflict or avoiding them, period. + +However, as noted above, using private issues to make your development process partly or wholly opaque to the client is just as likely to make your client want to step in and micromanage as it is to prevent them from being able to do so. + +The problem is that you're trying to solve a human problem with software and that's never going to work. If your client is "in the way" then you need to help them understand what they're doing and how they can do it better. + +Even with clients that don't micromanage, it can be tempting to use private issues to shield the client from technical details you don't want to explain. But clients don't have to dig into (and aren't interested in) things not assigned to or related to them. People are great at tuning things out and they will if you give them the chance. + +The third use that we've seen for private issues is that they serve as a kind of internal backchannel, a place your developers can discuss things without worrying that the client is watching. This is the most potentially disastrous way to use private issues. If the history of the internet has taught us anything, it's that "private" is rarely actually private. + +Backchannels backfire. Private conversations end up public. Clients get to see what your developers are saying in private and the results are often ugly. + +Backchannels also undermine a sense of collaboration by creating an environment in which not all speech is equal. The client has no say in the backchannel and never gets a chance to contribute to that portion of the project. + +It's important to involve clients in the big picture and let them get as involved as they want to be. Private issues subvert this from the outset by setting up an us vs. them mentality that percolates out into other areas of development as well. The real challenge is not keeping clients separate, but getting them involved, and setting up fences and other tactics to prevent them from being fully integrated into the project hinders collaboration and produces sub-par work.
+ diff --git a/sifterapp/complete/sifter-pagespeed-after.png b/sifterapp/complete/sifter-pagespeed-after.png Binary files differnew file mode 100644 index 0000000..6c35499 --- /dev/null +++ b/sifterapp/complete/sifter-pagespeed-after.png diff --git a/sifterapp/complete/sifter-pagespeed-before.png b/sifterapp/complete/sifter-pagespeed-before.png Binary files differnew file mode 100644 index 0000000..5c36514 --- /dev/null +++ b/sifterapp/complete/sifter-pagespeed-before.png diff --git a/sifterapp/complete/states-vs-resolutions.txt b/sifterapp/complete/states-vs-resolutions.txt new file mode 100644 index 0000000..8652c81 --- /dev/null +++ b/sifterapp/complete/states-vs-resolutions.txt @@ -0,0 +1,22 @@ +We've written before about why Sifter has only [three possible statuses](https://sifterapp.com/blog/2012/08/the-challenges-with-custom-statuses/) -- Open/Reopened, Resolved and Closed. The short answer is that more than that over-complicates the issue tracking process without adding any real value. + +Why? Well, there are projects for which this will not be enough, but provided your project scope isn't quite as encompassing as say, NASA's, there's a good chance these three, in conjunction with some supplementary tools, will not only be enough, but speed up your workflow and help you close issues faster. + +Why are custom statuses unnecessary and how does using them over-complicate things? Much of the answer lies in how your team uses status messages -- are your statuses really describing the current state of an issue or are they trying to do more? + +One of the big reasons that teams often want more status possibilities is that they're using status messages for far more than just setting the status of an issue. For example, some teams use status to indicate resolutions. Probably the most common example of this is directly using resolutions as a status indicator. That is, the status of the issue is a stand-in for its outcome. + +How many times have you tracked down an issue in your favorite software project only to encounter a terse resolution like "working as designed" or the dreaded, "won't fix"? The problem with these statuses is that they don't describe state the issue is in, they describe the outcome. In other words they aren't statuses, they're resolutions. + +The status is not "won't fix", the status is closed. The *resolution* is that the issue won't be fixed. + +Trying to convey an outcome in a status message is like trying to fit the proverbial square peg in a round hole. + +Worse, in this case, you're missing an opportunity to provide a true resolution. What do you learn from these status-message "resolutions"? Nothing. What does the team working on that project learn when they revisit the issue a year from now? Nothing. That's a lost opportunity. + +This is part of why statuses do not make good resolutions. Resolutions are generally best captured in the description where there's room to explain a bit more about why you aren't going to fix something. Take a minute to explain why you aren't going to fix something or why it's designed the way it is and your users will thank you. + +Perhaps even more important, your future self will thank you when the same issue comes up again and you can refer back to your quick notes in the resolution to see why things are the way they are. + +Provided you use status messages solely for setting the status of an issue then there's rarely a need for more statuses than those Sifter offers -- Open, Resolved and Closed. 
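One way to picture the distinction is in the data model itself: status is a tiny, fixed set of states, while the resolution is free-form prose attached to the issue. A minimal sketch in Python -- an illustration of the idea, not Sifter's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    OPEN = "open"          # covers reopened issues too
    RESOLVED = "resolved"
    CLOSED = "closed"

@dataclass
class Issue:
    title: str
    status: Status = Status.OPEN
    resolution: str = ""   # the *why*, written out in plain language

# "Won't fix" is not a fourth status; it's a closed issue with a note
issue = Issue("Export button mislabeled in Safari")
issue.status = Status.CLOSED
issue.resolution = ("Won't fix for now: the label comes from the browser's "
                    "native control and overriding it caused other problems. "
                    "Revisit if that changes.")
```

The status stays searchable and filterable; the resolution carries the explanation your future self will want.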
+ diff --git a/sifterapp/complete/streamlining-issue-creation.txt b/sifterapp/complete/streamlining-issue-creation.txt new file mode 100644 index 0000000..543e420 --- /dev/null +++ b/sifterapp/complete/streamlining-issue-creation.txt @@ -0,0 +1,55 @@ +No one likes filling out forms. The internet is littered with user surveys that show reducing the number of fields in a form means far more people fill it out. Hubspot rather famously found that dropping just one field from forms [increased conversion][1] by almost 50%. People really hate forms. + +Who cares? Well, "people" includes you and your team. And "forms" include those you use to file issues and bugs in the software you're developing. + +Want to get more issues filed? Create simpler forms. + +At the same time you do need to capture certain bits of information. Take the typical web contact form. Dropping the email field might increase conversion rates significantly, but it means you don't have all the information you need. Forms need to be simple, not simplistic. + +And therein lies the rub -- which fields really need to be on your issue form? + +Let's break it down using a typical issue form which might consist of a dozen or more fields. There will almost always be a field for the "status" of an item. Other options typically include fields for Resolution, Assignee, Opener, Creation Date, Due Date, Category, Type, Release, Priority, Severity, Impact, LOE (estimated), LOE (actual), Browser/OS, Relationships and possibly even more. + +All those fields create a huge cognitive overhead which quickly leads to "decision fatigue", a fancy name for "people have better things to do than fill out long overly detailed forms on their computers." Let's tackle these one by one. + +* Status -- We've [written previously][2] about how status messages are unnecessary. The short story is that the status is either open or closed, there is no other status. Everything else is really a resolution. For example, the dreaded "wont fix" status is not a status. The status is closed. The *resolution* is that the issue won't be fixed. + +* **Resolution** -- We need a spot to record what we've done so keep this one. + +* Assignee -- Another necessary field, but it can be captured implicitly without adding another field to the form. So keep this one but it won't be part of the issue form. + +* Opener -- Again, good info to have, but not info you should need to fill in. Lose the field and capture it behind the scenes. + +* Creation Date -- Like Opener, this should be captured automatically when the issue is created. + +* Due Date -- The due date of every issue is "yesterday", there's no need to ask people to figure this out in the issue creation form. Figuring out the due date means [figuring out the priority of the issue][3] and that can't be done without an overview of the whole project. The issue creation form is the wrong place to determine priority and thus due date. + +* **Category** -- Categories are good, they help classify the issue -- is it a feature request, is it a bug? is it something else? Categories are helpful when trying to determine the priority of an issue as well so let's keep this one. + +* Type -- The type of issue is more or less the same as the category. No need to make a decision twice; keep it simple and lose the Type field. The same is true of "Tags" or any other variation on the categories theme. + +* **Release** -- Soon, but not yet. This one is useful for planning. 
+ +* **Priority** -- Setting the priority of the issue is important so we'll keep this one as well. + +* Severity -- The severity of an issue can and should be a factor in setting the priority, but it doesn't need its own field in the form. Keep severity as part of your decision making process, but don't track it separately from what's it's influencing, namely, the Priority field. + +* Impact -- Like Severity, the impact of an issue is part of what determines the priority, but again there's no need to track it separately. + +* Level of Effort (estimated) -- The level of effort necessary to fix any individual issue is nearly impossible to estimate and not all that useful even if you do happen to estimate correctly. All this field does is create cognitive overhead. + +* Level of Effort (actual) -- Again, you're just creating overhead and getting nothing in return, lose it. + +* Browser/OS - This is useful information to have, but it doesn't apply directly to the issue. This is best captured in the comments or description field. + +After trimming our form down to fields we actually need we're left with, in addition to subject and description, a Resolution field, a field for the Priority, another for Category and one for Release. + +With just six fields, three of which don't need to be filled out when the issue is created -- Resolution, Priority, Release -- our form is considerably smaller. + +What we've created is a form that's simple enough you don't need to train your team on how to use it. Open up the form, create a new issue, give it a name, a brief description and a category; hit Create and you're done. + +Streamlining the process of creating issues means that the workflow is simple enough that even the "non-techie" members of your team will be able to use it. That means every person on the team has the potential to become a valuable contributor to the system. + +[1]: http://blog.hubspot.com/blog/tabid/6307/bid/6746/Which-Types-of-Form-Fields-Lower-Landing-Page-Conversions.aspx +[2]: link to status vs issue piece +[3]: link to piece on setting priority diff --git a/sifterapp/complete/triaging.txt b/sifterapp/complete/triaging.txt new file mode 100644 index 0000000..7012687 --- /dev/null +++ b/sifterapp/complete/triaging.txt @@ -0,0 +1,62 @@ +Few elements in your development process are as important as testing your code. We've found that the best way to ensure you get the most out of your testing is to set a schedule and stick to it. + +Set aside time to test and fix the problems you encounter every week or two. The process of testing and fixing might look something like this: + +1. Code Freeze. Everybody stops coding and prepares for testing. Everyone. +2. Everyone tests. Preferably not their own modules. +3. Triage. This is where most testing strategies break down. Issues don't organize themselves and many teams don't take the time to organize. +4. Fix. Testing is useless without time dedicated to fixing. + +Let's focus on an oft overlooked part of the testing process -- triaging. + +## What is it? + +Triage means prioritizing. The word comes from the medical world, where it refers to the process of prioritizing patients based on the severity of their problems. + +When resources are limited -- and resources are almost always limited -- triaging is a way of helping as many people as possible and in the right order. In the ER, for example, that means treating the person having a heart attack before moving to the person with a broken toe. 
+ +In application development triaging means sorting through all the issues you discover in testing and organizing the resulting work based on priority. + +Sometimes priority is obvious; sometimes it's not. That's why triaging takes time and usually involves a project manager and key stakeholder or client. + +## Why do it? + +Testing creates a lot of work because it's designed to find as many problems as possible. To stick with the medical analogy, testing is what brings your patients to the door. + +The result of a good testing phase will be an abundance of work to do, which can be overwhelming. Fail to properly triage your results and you won't know what to do or where to start; you'll simply be drowning in a sea of bugs and issues. + +Triaging is also a way to build consensus and agreement on the priority and organization of work. + +## How does Triaging Work? + +Successful triaging depends on a consistent process. In the ER there is an intake process that assesses the severity of the problem and then sets the priority accordingly. + +The process is more or less the same in software development. + +The first step is setting the priority for each issue you discovered during the actual testing. Figuring out the priority of an issue is the most complex problem in the process, but you'll be more successful at this the more you can involve the client or primary stakeholder. + +Part of what makes prioritization difficult is determining scope. Many "bugs" will actually be enhancements. What qualifies as a bug and what's an enhancement will vary by project and client, which is why it's key to have the client involved in the decision process. + +Bringing the client into the triage process helps ensure that your prioritizing matches the client's expectations. By setting priorities with the client, disagreements can be caught before they become problems down the road. + +Another key part of setting priorities is acknowledging the trade-offs involved. Failure to take into account, for instance, the time needed to fix something will turn your priorities into wishful thinking rather than accurate and manageable tasks for your team. Wishful thinking does not get things done; realistic, well-understood expectations and discrete lists get things done. + +Once you have the priorities established and you know what you want to work on next you can move on to de-duplication. The DRY principle doesn't apply just to writing code. Be diligent when listing issues and make sure you don't have two issues noting the same problem. Before you assign any new bugs you've prioritized, make sure to de-duplicate (one quick way to surface likely duplicates is sketched at the end of this section). Often this has the additional advantage of exposing an underlying problem behind several related bugs. If you routinely have bugs in one module of code, this could be a sign that the whole module needs to be rewritten rather than just patching the latest round of related bugs. + +The final step in the triage process is assigning the work that needs to be done. Work may be assigned directly to individual designers and developers or passed on to team leads who can better identify who should do which aspect of the work. + +Mastering the triage process will make your team more efficient and productive. It will also get bug fixes and new features out to customers faster. While bugs will still be found outside of testing, triaging helps minimize the number of bugs that are misclassified or incomplete. Triaging helps ensure your entire team knows what the problems are and when they're going to be fixed.
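De-duplication is ultimately a judgment call, but a short script can shortlist the obvious suspects before your triage session. A rough sketch using Python's standard library; the issue titles and the 0.75 similarity threshold are arbitrary examples:

```python
from difflib import SequenceMatcher
from itertools import combinations

def likely_duplicates(titles, threshold=0.75):
    """Flag pairs of issue titles that look suspiciously similar.
    Only a first pass to surface candidates for a human to review --
    it won't catch duplicates that are worded very differently."""
    pairs = []
    for a, b in combinations(titles, 2):
        score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if score >= threshold:
            pairs.append((round(score, 2), a, b))
    return sorted(pairs, reverse=True)

titles = [
    "Login page throws 500 on empty password",
    "Login page throws 500 when password is empty",
    "Search results ignore the project filter",
]
for score, a, b in likely_duplicates(titles):
    print(score, "->", a, "|", b)
```

Anything it flags still gets a human decision; the script just keeps near-identical reports from slipping past a tired reviewer.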
+ +## Weekly Stagnation Meetings + +The triage process isn't limited to new bugs you find in intensive testing sessions. You can also perform a second type of triage -- reviewing stagnant issues. + +If an issue hasn't been touched in a while, then something needs to be done. + +The exact definition of "a while" varies by team and project, but if an issue has been sitting longer than you think it should have it's time to do something. Sometimes that might mean reassigning the work or making it higher priority. Sometimes the fact that it hasn't been done is telling you that it doesn't need to be done, close the issue and move on. + +Letting stale issues pile up can have a vampiric effect on a project, sucking some of the life out of it. Just knowing those unsolved problems are out there, not being worked on, adds a level of stress to your work that you don't need. Reassess, reassign and get back to work. + +## Conclusion + +Issues don't track themselves. Having good human processes around your tools will make a world of difference in their effectiveness. And don't forget to set aside time for fixing. Testing without fixing is pointless. diff --git a/sifterapp/complete/webpagetest-notes.txt b/sifterapp/complete/webpagetest-notes.txt new file mode 100644 index 0000000..0dbbe05 --- /dev/null +++ b/sifterapp/complete/webpagetest-notes.txt @@ -0,0 +1,35 @@ + +> Performance audit... +> +> 0. Scope: SifterApp.com - Basically everything on our marketing site (not +> on a subdomain) is static content created by Jekyll and served straight +> through Nginx. +> +> 1. Context: Our marketing site used to live within the application +> codebase, and so Rails and Capistrano handled most of the asset +> optimization and serving. Now that it's all Jekyll, we're just tossing +> files up via Nginx with little consideration for performance. We need to +> fix that, especially for mobile. (And eventually, I even see the picture +> element as being part of that.) +> +> 2. Front-end/back-end, nothing's off limits. I expect that we'll have room +> for improvement in both areas. Just looking at the scores from the tests +> and a cursory glance at the resulting advice, we'll need to make some +> changes with all of it. The big thing is that I just don't have the +> bandwidth to research it and understand the best solutions for us. +> +> 3. Structure of article. I think it should focus primarily on the tools and +> the information that they provide and only use Sifter as examples. That way +> it's about the tools instead of Sifter. My only fear is that if we're +> already optimized in some areas, there won't be as much to share about what +> the tools help you find. That is, our performance may suck but not bad +> enough to show the full capabilities of the tools. +> +> I know there are countless tools/techniques that make sense, and I see them +> in two distinct categories. 1.) Tools that help you see your problems. 2.) +> Tools that help you fix your problems. I'd like to see us focus on the +> tools that help you see the problems to focus on the "bug" finding aspect. +> For each of the problems, I think we should link to relevant tools or +> tutorials that can help solve the problem, but we should leave the +> researching and choosing of those tools to the reader. +> diff --git a/sifterapp/complete/webpagetestp1.txt b/sifterapp/complete/webpagetestp1.txt new file mode 100644 index 0000000..c8c09ba --- /dev/null +++ b/sifterapp/complete/webpagetestp1.txt @@ -0,0 +1,79 @@ +The web is getting fatter and slower. 
Compare the [HTTPArchive][1] data for this month to six months ago. Chances are you'll find that overall page size has grown and page load times increased. + +This is bad news for the web at large, but it can be good news for your site. It means there's an easy way to stand out from the crowd -- build a blazing fast website. + +To do that you should make performance testing part of your design process. Plan for speed from the very beginning and you'll end up with a fast site. Didn't start your site off on the right foot? That's okay. We'll show you how you can speed up existing pages too. + +First let's talk about what we mean by performance. Performance is more than page load times, more than page "weight". These things matter, but not by themselves. Load times and download size only matter in relation to the most important part of performance -- how your visitors perceive your pages loading. Performance is ultimately a very subjective thing, despite being surrounded by some very objective data. + +Just remember, perception is what matters the most, not numbers. + +This is why performance is not necessarily a concrete target to aim for, but a spectrum. + +At one end of the spectrum you have what are known as ideal time targets. These were [popularized by Jakob Nielsen][2] in his book <cite>Usability Engineering</cite>. Even if you've never read the book, you've probably heard these times mentioned in the context of good user experience: + +> * 0.1 second is about the limit for having the user feel that the system is reacting instantaneously. +> * 1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay. +> * 10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. + +The quick takeaway is that you have one second to get something rendered on the screen or your visitor will already be thinking about other things. By the time 10 seconds rolls around they're long gone. + +## Aim High, Fail High + +The one second rule makes a nice target to aim for, but let's face it, most sites don't meet that goal. Even Google's generally speedy pages don't load in less than one second. And Google still manages to run a multi-billion dollar business. + +That said, Google has been [very vocal][3] about the fact that it is trying to get its pages down below that magical one second mark. And [it wants you to aim for one second][4] as well. Remember, the higher you aim the higher you are when you fail. + +That said, how fast is fast enough? To answer that question you need to do some performance testing. + +You need to test your site to find out where you can improve, but you should also test your competitors' sites as well. Why? It's the other end of the spectrum. At one end is the one second nirvana, at the other are your competitors' sites. The goal is to move your site from that end toward the one second end. + +If you can beat another site's page load times by 20 percent people will perceive your site as faster. Even if you can't get to that nearly instantaneous goal of one second or less, you can still beat the competition. That means more conversions and more happy users. 
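To make that concrete, you can fold both ends of the spectrum into a single working target: the one-second ideal if you can reach it, otherwise at least 20 percent faster than your quickest competitor. A back-of-the-envelope sketch in Python, with invented competitor numbers:

```python
def target_speed_index(competitor_medians_ms, ideal_ms=1000):
    """A concrete goal: the one-second ideal if it's reachable,
    otherwise 20% faster than the fastest competitor."""
    fastest_rival = min(competitor_medians_ms)
    return max(ideal_ms, 0.8 * fastest_rival)

# Hypothetical Speed Index medians from testing three competitor homepages
print(target_speed_index([5800, 7200, 6400]))  # -> 4640.0 (milliseconds)
```

The numbers to feed it come from the audit described next.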
+ +To figure out where you stand, and where your competition stands, you'll want to do a performance audit -- that is, figure out how fast your pages load and identify bottlenecks that can easily be eliminated. + +## Running a Performance Audit + +There are three tools that form a kind of triumvirate of performance testing -- [WebpageTest.org][5], a web-based performance testing tool, Google's PageSpeed tools, and the network panel of your browser's developer tools. + +These three will help you diagnose your performance problems and give you a good idea of where you can to start improving. + +To show you how these tools can be combined, we're going to perform a basic performance audit on the [Sifter homepage][6]. We'll identify some performance "bugs" and show you how we found them. + +The first step in any performance audit is to see where you're starting from. To do that we use WebpageTest.org. Like BrowserStack, which we've [written about before][7], WebpageTest is a free tool built around virtual machines. You give it a URL and it will run various tests to see how your site performs under different conditions. + +## Testing with WebpageTest + +Head over to WebpageTest, drop the URL you want to test into the form and then click the yellow link that says "Advanced Settings". Here you can control the bandwidth being used, the number of tests to run and whether or not to capture video. There's also an option to keep the test results private if you're working with a site that isn't public. + +To establish a performance baseline we suggest running two separate tests -- one over a high speed connection (the default is fine) and one over a 3G network. For each test you'll want to run several passes -- you can run the test up to 9 times -- and let WebpageTest pick the median. We typically run 7 (WebpageTest will pick the median result so be sure to use an odd number of tests). Also, make sure to check the box that says "Capture Video". + +Once your tests are done you'll see a page that looks something like this: + +![screenshot of initial rest results page] + +There are a bunch of numbers here, but the main ones we want to track are the "Start Render" time and the "Speed Index". The latter is the most important of the two. + +What we really care about when we're trying to speed up a page is the time before the visitor sees something on the screen. + +The overall page load time is secondary to getting *something* -- anything really, but ideally the most important content -- on the screen as soon as possible. Give your visitors something to hold their interest or interact with and they'll perceive your page as loading faster than it actually is. People won't care (or even know) if the rest of the page is still loading in the background. + +The [Speed Index metric][8] represents the average time it takes to fill the initial viewport with, well, something. The number is in milliseconds and depends on size of the viewport. A smaller viewport (a phone for example) will need less on the screen before the viewport is filled than a massive HD desktop monitor. + +For example, let's say your Speed Index is around 6000 milliseconds over a mobile connection. That sounds pretty good right? It's not one second, but it's better than most sites out there. + +Now go ahead and click the link to the video of your page rendering and force yourself to sit through it. Suddenly 6 seconds doesn't sound so fast does it? In fact it's a little painful to sit through isn't it? 
+ +Now that you have some idea of how people are perceiving your site and what it feels like, it's time to go back to the numbers. In the next installment we'll take a look at some tools you can use to find, diagnose and fix problems in the HTML and improve the performance of your site. + + +[1]: http://httparchive.org/ +[2]: http://www.nngroup.com/articles/response-times-3-important-limits/ +[3]: http://www.youtube.com/watch?v=Il4swGfTOSM +[4]: http://googlewebmastercentral.blogspot.com/2013/08/making-smartphone-sites-load-fast.html +[5]: http://www.webpagetest.org/ +[6]: https://sifterapp.com/ +[7]: link +[8]: https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index diff --git a/sifterapp/complete/webpagetestp2.txt b/sifterapp/complete/webpagetestp2.txt new file mode 100644 index 0000000..5a28dd1 --- /dev/null +++ b/sifterapp/complete/webpagetestp2.txt @@ -0,0 +1,85 @@ +In the last installment we looked at how WebpageTest can be used to establish a performance baseline. Now it's time to dig a bit deeper and see what we can do about some common performance bottlenecks. + +To do that we'll turn to a new tool, [Google PageSpeed Insights][1]. + +Before we dive in, recall what we said last time about the importance of the Speed Index -- that is, the time it takes to get something on the screen. This is different from the time it takes to fully load the page. Keep that in mind when you're picking and choosing what to optimize. + +For example, if PageSpeed Insights tells you to "leverage browser caching" -- which means have your server set an Expires Header -- that's good advice in the broader sense, but it won't change your Speed Index number for first-time visitors. + +To start with, we suggest focusing on the things that will get you the biggest wins on the Speed Index. That's what we'll concentrate on here. + +## Google PageSpeed Insights + +Now that we know how long it's taking to load the page, it's time to start finding the bottlenecks in that process. If you know how to read a waterfall chart, WebpageTest can tell you most of what you want to know, but Google's PageSpeed Insights tool offers a nicer interface and puts more of an emphasis on mobile performance improvements. + +There are two ways to use PageSpeed Insights. You can use [the online service][2] and plug in your URL, or you can install the [PageSpeed Insights add-on for Chrome][3], which adds a PageSpeed tab to the Chrome developer tools. + +The latter is very handy, but lacks some of the features found in the online tool, most notably checks on "[critical path][4]" performance (the rendering work that determines your Speed Index) and mobile user experience analysis. For that reason we suggest using both. The online service does a better job of suggesting fixes for mobile and offers a score you can use to gauge your improvements (though you should go back to WebpageTest and rerun the same tests to make sure that your Speed Index times have actually dropped). + +The browser add-on, on the other hand, will look at other network conditions, like redirects, which can hurt your Speed Index times as well. + +PageSpeed Insights fetches the page twice, once with a mobile user-agent and once with a desktop user-agent. It does not, however, simulate the constrained bandwidth of a mobile connection. For that you'll need to go back to WebpageTest.
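+
+If you'd rather run these checks from a script, the same analysis is exposed through Google's PageSpeed Insights web API. Here's a minimal sketch of calling it from Python; the `runPagespeed` endpoint and the `url`/`strategy` parameters come from Google's API documentation, but the version segment in the URL and the layout of the response change over time, so check the current docs before relying on specific fields:
+
+```python
+# Sketch: querying PageSpeed Insights programmatically instead of using the
+# web form. Endpoint and parameters follow Google's runPagespeed method;
+# verify the current API version and response fields in Google's docs.
+import json
+import requests
+
+PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
+
+def run_pagespeed(url, strategy="mobile", api_key=None):
+    params = {"url": url, "strategy": strategy}
+    if api_key:  # a key is only needed for heavier automated use
+        params["key"] = api_key
+    response = requests.get(PSI_ENDPOINT, params=params, timeout=60)
+    response.raise_for_status()
+    return response.json()
+
+if __name__ == "__main__":
+    report = run_pagespeed("https://sifterapp.com/", strategy="mobile")
+    # Print the top-level sections of the report so you can see what's there to dig into.
+    print(json.dumps(sorted(report.keys()), indent=2))
+```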
+ +Complete details on what PageSpeed Insights does are available in Google's [developer documentation][5]. + +When we ran PageSpeed Insights on the Sifter homepage the service made a number of suggestions: + +![Screenshot of initial run] + +Notice the color coding: red is high priority, yellow is lower priority, and green is everything you're already doing right. But those priorities are Google's suggestions, not hard and fast rules. As we mentioned above, one of the high priority suggestions is to add Expires Headers to our static assets. That's a good idea, and it will speed up the experience of visiting the site again or loading a second page that uses the same assets. But it won't help first-time visitors and it won't change that Speed Index number for initial page loads. + +Enabling compression, on the other hand, will. Adding GZip compression to our stylesheet and SVG icons would shave 154.8KB off the total page size. Fewer kilobytes to download always means faster page load times. This is especially true for the stylesheet, since the browser stops rendering the page whenever it encounters a CSS file. It doesn't start rendering again until it has completely downloaded and parsed the CSS, so anything we can do to decrease the size of the stylesheet is a big win. + +Another suggestion is to minimize redirects. The online tool doesn't consider it high priority, but it does show up in the browser add-on. + +To see how redirects hurt your page load times, let's turn to the third tool for performance testing: your browser's network panel. + +## The Network Panel + +All modern web browsers include some form of developer tools, and all of them offer a "Network" panel of some sort. For these examples we'll be using Chrome, but you can see the same thing in Firefox, Safari, Opera and IE. + +In this example you can see that the fonts.css file returned a 302 (moved temporarily) response -- a redirect rather than the stylesheet itself: + +![Screenshot of Network Panel] + +To find out more about what this redirect is and why it's happening, we'll select it in the network panel and have a look at the actual response headers. + +![Screenshot of Network Panel response headers] + +In this case you can see that it redirected to another CSS file. + +This file is eating up time twice. First, it's on a different domain (our webfont provider's domain), which means there's another DNS lookup to perform. That's a big deal on mobile; [see this talk][6] from Google's Ilya Grigorik for an incredibly thorough explanation of why. + +The second time suck is the redirect itself, which forces the browser to request the same resource again from a different location (and keep in mind that this resource is a CSS file, so it's blocking rendering throughout these delays). The second request succeeds, but there's definitely a performance hit.
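+
+You can double-check the same things outside the browser with a small script. This is a rough sketch using the Python `requests` library -- the URL is a placeholder rather than the actual stylesheet from the screenshots above -- that follows the redirect chain and reports whether the final response was compressed and cacheable:
+
+```python
+# Sketch: reproduce the network panel checks from a script. The asset URL
+# below is a hypothetical placeholder, not Sifter's real stylesheet.
+import requests
+
+ASSET_URL = "https://example.com/css/fonts.css"
+
+response = requests.get(ASSET_URL, timeout=30)
+
+# response.history holds each redirect hop that led to the final response.
+for hop in response.history:
+    print(f"{hop.status_code} {hop.url}")
+    print(f"  -> redirected to {hop.headers.get('Location')}")
+
+print(f"{response.status_code} {response.url}")
+print(f"  Content-Encoding: {response.headers.get('Content-Encoding', 'none (not compressed)')}")
+print(f"  Cache-Control:    {response.headers.get('Cache-Control', 'not set')}")
+print(f"  Expires:          {response.headers.get('Expires', 'not set')}")
+```
+
+Every hop in that output is a round trip your visitors pay for before the CSS even starts downloading, and the header lines tell you at a glance whether compression and caching are switched on.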
+ +Given all that, why still serve up this file? Because it's an acceptable trade-off. Tungsten (the font being loaded) is an integral part of the design, and in this case there are other areas we can optimize -- like enabling server-side GZip compression -- that will get us some big wins. It may be that we can get close enough to the ideal one second end of the spectrum that we're okay with the font loading. + +This highlights what is perhaps one of the hardest aspects of improving performance -- nothing comes for free. + +When it comes to page load times there is no such thing as too fast, but there can be such a thing as over-optimization. If we ditch the font we might speed up the page load time a tiny bit, but we might also lose some of the less tangible aspects of the reading experience. We might get the page to our visitors 500ms faster, but they might also be less delighted with what we've given them. What stays and what goes has to be decided case by case. + +For example, if you eliminate a JavaScript library to speed up your page, but your app stops working without the library, well, that would be silly. Moving that JavaScript library to a CDN and caching it with a far-future Expires Header? Now that's smart. + +Performance is always a series of trade-offs. CSS blocks the rendering of the page, but no one wants to see your site without its CSS. To speed up your site you don't get rid of your CSS, but you might consider inlining some of it. That is, move some of your critical CSS into the actual HTML document, enough that the initial viewport renders properly, and then load the stylesheet at the bottom of the page where it won't block rendering. Tools like Google's [PageSpeed Module][7] for Apache and Nginx can do this for you automatically. + +The answer to performance problems is rarely to move from one extreme to the other, but to find the middle ground where performance, functionality and great user experience meet. + +## What We Did + +After running Sifter through WebpageTest we identified the two biggest wins -- enabling GZip compression and setting Expires Headers. The first means visitors download less data, so the page loads faster. The second means repeat views will be even faster because common elements like stylesheets and fonts are already in the browser's cache. + +We also removed some analytics scripts that were really only necessary on the particular pages we were testing, not the site as a whole. + +For us the change meant adding a few lines to our Nginx configuration. One gotcha for fellow Nginx users: you need to add your GZip and Expires configuration to both your application servers *and* your load balancing servers. Other than that snag, the changes hardly took any time at all. + +The result? Our initial page load times as measured by WebpageTest dropped to under 4 seconds over 3G. That's a two-second improvement for mobile users with very little work on our end. For visitors with high speed connections the Sifter homepage now gets very close to that magical one second mark. + +We were able to get there because we did the testing, identified the problems and targeted the biggest wins rather than trying to do it all. Remember, don't test more, test smarter. + + + +[1]: https://developers.google.com/speed/pagespeed/insights/ +[2]: https://developers.google.com/speed/pagespeed/insights/ +[3]: https://chrome.google.com/webstore/detail/pagespeed-insights-by-goo/gplegfbjlmmehdoakndmohflojccocli?hl=en +[4]: https://developers.google.com/web/fundamentals/performance/critical-rendering-path/ +[5]: https://developers.google.com/speed/docs/insights/about +[6]: https://www.youtube.com/watch?v=a4SbDZ9Y-I4#t=175 +[7]: https://developers.google.com/speed/pagespeed/module diff --git a/sifterapp/complete/yosemite-mail.txt b/sifterapp/complete/yosemite-mail.txt new file mode 100644 index 0000000..e713152 --- /dev/null +++ b/sifterapp/complete/yosemite-mail.txt @@ -0,0 +1,14 @@ +If you've updated to Apple's latest version of OS X, Yosemite, you have a powerful new tool for creating issues via email and you may not even know it.
+ +Yosemite debuts something Apple calls Extensions, which are little apps that run inside other apps. Extensions are available in both OS X and iOS, but since both are brand new, not a lot of applications are taking advantage of them just yet. + +The latest version of Apple Mail does, however, make use of Extensions through its new Markup feature. Markup is a tool to quickly add simple notes and annotations to image and PDF files directly within Mail. + +Here's how it works: first you add an image or PDF to a mail message. Then you click on the file and an icon appears in the top-left corner of the file preview. Click the little icon and select Markup. The image will then zoom out and you'll see a toolbar above it with options to draw shapes and add text on top of it. + +Most demos we've seen of Markup show people adding arrows to maps to indicate where to meet and other things you'll probably never actually do, but this is a very powerful tool for software developers. It makes adding a little bit of visual help to your issues much easier. + +For example, your workflow might look like this: you discover a bug with some visual component to it; let's say some CSS fails to line up the submit buttons on a form. So you grab a screenshot (just press CMD-Shift-3 and OS X will take a screenshot), drag it to a new mail message, and annotate it with some arrows pointing to the problem and a quick note about how it should look. Then you send it off to your issue tracking software, which creates a new issue and attaches your screenshot complete with annotations. + +This way your designers don't have to wade through a bunch of prose trying to figure out what you mean by "doesn't line up". Instead they see the image with your notes and can jump straight into fixing the issue.