[{"model": "resume.publisher", "pk": 1, "fields": {"name": "Ars Technica", "slug": "ars", "body_markdown": "", "body_html": "", "url": "http://arstechnica.com/", "payment_time": "4"}}, {"model": "resume.publisher", "pk": 2, "fields": {"name": "Wired", "slug": "wired", "body_markdown": "", "body_html": "", "url": "http://wired.com/", "payment_time": "3"}}, {"model": "resume.publisher", "pk": 3, "fields": {"name": "The Register", "slug": "the-register", "body_markdown": "", "body_html": "", "url": "http://theregister.co.uk/", "payment_time": "4"}}, {"model": "resume.publisher", "pk": 4, "fields": {"name": "Budget Travel", "slug": "budget-travel", "body_markdown": "", "body_html": "", "url": "http://www.budgettravel.com", "payment_time": "0"}}, {"model": "resume.pubitem", "pk": 1, "fields": {"title": " How Google\u2019s AMP project speeds up the Web\u2014by sandblasting HTML", "slug": "google-AMP-sandblasts-html", "body_markdown": "[**This story originally appeared on Ars Technica, to comment and enjoy the full reading experience with images (including a TRS-80 browsing the web) you should read it over there.**]\r\n\r\nThere's a story going around today that the Web is too slow, especially over mobile networks. It's a pretty good story\u2014and it's a perpetual story. The Web, while certainly improved from the days of 14.4k modems, has never been as fast as we want it to be, which is to say that the Web has never been instantaneous.\r\n\r\nCuriously, rather than a focus on possible cures, like increasing network speeds, finding ways to decrease network latency, or even speeding up Web browsers, the latest version of the \"Web is too slow\" story pins the blame on the Web itself. And, perhaps more pointedly, this blame falls directly on the people who make it.\r\n\r\nThe average webpage has increased in size at a terrific rate. 
In January 2012, the average page tracked by HTTPArchive [transferred 1,239kB and made 86 requests](http://httparchive.org/trends.php?s=All&minlabel=Oct+1+2012&maxlabel=Oct+1+2015#bytesTotal&reqTotal). Fast forward to September 2015, and the average page loads 2,162kB of data and makes 103 requests. These numbers don't directly correlate to longer page load-and-render times, of course, especially if download speeds are also increasing. But these figures are one indicator of how quickly webpages are bulking up.\r\n\r\nNative mobile applications, on the other hand, are getting faster. Mobile devices get more powerful with every release cycle, and native apps take better advantage of that power.\r\n\r\nSo as the story goes, apps get faster, the Web gets slower. This is allegedly why Facebook must invent Facebook Instant Articles, why Apple News must be built, and why Google must now create [Accelerated Mobile Pages](http://arstechnica.com/information-technology/2015/10/googles-new-amp-html-spec-wants-to-make-mobile-websites-load-instantly/) (AMP). Google is late to the game, but AMP has the same goal as Facebook's and Apple's efforts\u2014making the Web feel like a native application on mobile devices. (It's worth noting that all three solutions focus exclusively on mobile content.)\r\n\r\nFor AMP, two things in particular stand in the way of a lean, mean browsing experience: JavaScript... and advertisements that use JavaScript. The AMP story is compelling. It has good guys (Google) and bad guys (everyone not using Google Ads), and it's true to most of our experiences. But this narrative has some fundamental problems. For example, Google owns the largest ad server network on the Web. If ads are such a problem, why doesn't Google get to work speeding up the ads?\r\n\r\nThere are other potential issues looming with the AMP initiative as well, some as big as the state of the open Web itself. 
But to think through the possible ramifications of AMP, first you need to understand Google's new offering itself.\r\n\r\n## What is AMP?\r\n\r\nTo understand AMP, you first need to understand Facebook's Instant Articles. Instant Articles use RSS and standard HTML tags to create an optimized, slightly stripped-down version of an article. Facebook then allows for some extra rich content like auto-playing video or audio clips. Despite this, Facebook claims that Instant Articles are up to 10 times faster than their siblings on the open Web. Some of that speed comes from stripping things out, while some likely comes from aggressive caching.\r\n\r\nBut the key is that Instant Articles are only available via Facebook's mobile apps\u2014and only to established publishers who sign a deal with Facebook. That means reading articles from Facebook's Instant Article partners like National Geographic, BBC, and Buzzfeed is a faster, richer experience than reading those same articles when they appear on the publisher's site. Apple News appears to work roughly the same way, taking RSS feeds from publishers and then optimizing the content for delivery within Apple's application.\r\n\r\nAll this app-based content delivery cuts out the Web. That's a problem for the Web and, by extension, for Google, which leads us to Google's Accelerated Mobile Pages project.\r\n\r\nUnlike Facebook Articles and Apple News, AMP eschews standards like RSS and HTML in favor of its own little modified subset of HTML. AMP HTML looks a lot like HTML without the bells and whistles. In fact, if you head over to the [AMP project announcement](https://www.ampproject.org/how-it-works/), you'll see an AMP page rendered in your browser. It looks like any other page on the Web.\r\n\r\nAMP markup uses an extremely limited set of tags. Form tags? Nope. Audio or video tags? Nope. Embed tags? Certainly not. Script tags? Nope. 
There's a very short list of the HTML tags allowed in AMP documents available over on the [project page](https://github.com/ampproject/amphtml/blob/master/spec/amp-html-format.md). There's also no JavaScript allowed. Those ads and tracking scripts will never be part of AMP documents (but don't worry, Google will still be tracking you).\r\n\r\nAMP defines several of its own tags, things like amp-youtube, amp-ad, or amp-pixel. The extra tags are part of what's known as [Web components](http://www.w3.org/TR/components-intro/), which will likely become a Web standard (or it might turn out to be \"ActiveX part 2,\" only the future knows for sure).\r\n\r\nSo far AMP probably sounds like a pretty good idea\u2014faster pages, no tracking scripts, no JavaScript at all (and so no overlay ads about signing up for newsletters). However, there are some problematic design choices in AMP. (At least, they're problematic if you like the open Web and current HTML standards.)\r\n\r\nAMP re-invents the wheel for images by using the custom component amp-img instead of HTML's img tag, and it does the same thing with amp-audio and amp-video rather than use the HTML standard audio and video. AMP developers argue that this allows AMP to serve images only when required, which isn't possible with the HTML img tag. That, however, is a limitation of Web browsers, not HTML itself. AMP has also very clearly treated [accessibility](https://en.wikipedia.org/wiki/Computer_accessibility) as an afterthought. You lose more than just a few HTML tags with AMP.\r\n\r\nIn other words, AMP is technically half baked at best. (There are dozens of open issues calling out some of the [most](https://github.com/ampproject/amphtml/issues/517) [egregious](https://github.com/ampproject/amphtml/issues/481) [decisions](https://github.com/ampproject/amphtml/issues/545) in AMP's technical design.) The good news is that AMP developers are listening. 
One of the worst things about AMP's initial code was the decision to disable pinch-and-zoom on articles, and thankfully, Google has reversed course and [eliminated the tag that prevented pinch and zoom](https://github.com/ampproject/amphtml/issues/592).\r\n\r\nBut AMP's markup language is really just one part of the picture. After all, if all AMP really wanted to do was strip out all the enhancements and just present the content of a page, there are existing ways to do that. Speeding things up for users is a nice side benefit, but the point of AMP, as with Facebook Articles, looks to be more about locking in users to a particular site/format/service. In this case, though, the \"users\" aren't you and I as readers; the \"users\" are the publishers putting content on the Web.\r\n\r\n## It's the ads, stupid\r\n\r\nThe goal of Facebook Instant Articles is to keep you on Facebook. No need to explore the larger Web when it's all right there in Facebook, especially when it loads so much faster in the Facebook app than it does in a browser.\r\n\r\nGoogle seems to have recognized what a threat Facebook Instant Articles could be to Google's ability to serve ads. This is why Google's project is called Accelerated Mobile Pages. Sorry, desktop users, Google already knows how to get ads to you.\r\n\r\nIf you watch the [AMP demo](https://googleblog.blogspot.com/2015/10/introducing-accelerated-mobile-pages.html), which shows how AMP might work when it's integrated into search results next year, you'll notice that the viewer effectively never leaves Google. AMP pages are laid over the Google search page in much the same way that outside webpages are loaded in native applications on most mobile platforms. The experience from the user's point of view is just like the experience of using a mobile app.\r\n\r\nGoogle needs the Web to be on par with the speeds in mobile apps. And to its credit, the company has some of the smartest engineers working on the problem. 
Google has made one of the fastest Web browsers (if not the fastest) by building Chrome, and in doing so the company has pushed other vendors to speed up their browsers as well. Since Chrome debuted, browsers have become faster and better at an astonishing rate. Score one for Google.\r\n\r\nThe company has also been touting the benefits of mobile-friendly pages, first by labeling them as such in search results on mobile devices and then later by ranking mobile friendly pages above not-so-friendly ones when other factors are equal. Google has been quick to adopt speed-improving new HTML standards like the responsive images effort, which was first supported by Chrome. Score another one for Google.\r\n\r\nBut pages keep growing faster than network speeds, and the Web slows down. In other words, Google has tried just about everything within its considerable power as a search behemoth to get Web developers and publishers large and small to speed up their pages. It just isn't working.\r\n\r\nOne increasingly popular reaction to slow webpages has been the use of content blockers, typically browser add-ons that stop pages from loading anything but the primary content of the page. Content blockers have been around for over a decade now (No Script first appeared for Firefox in 2005), but their use has largely been limited to the desktop. That changed in Apple's iOS 9, which for the first time put simple content-blocking tools in the hands of millions of mobile users.\r\n\r\nCombine all the eyeballs that are using iOS with content blockers, reading Facebook Instant Articles, and perusing Apple News, and you suddenly have a whole lot of eyeballs that will never see any Google ads. That's a problem for Google, one that AMP is designed to fix.\r\n\r\n## Static pages that require Google's JavaScript\r\n\r\nThe most basic thing you can do on the Web is create a flat HTML file that sits on a server and contains some basic tags. This type of page will always be lightning fast. 
It's also insanely simple. This is literally all you need to do to put information on the Web. There's no need for JavaScript, no need even for CSS.\r\n\r\nThis is more or less the sort of page AMP wants you to create (AMP doesn't care if your pages are actually static or\u2014more likely\u2014generated from a database. The point is what's rendered is static). But then AMP wants to turn around and require that each page include a third-party script in order to load. AMP deliberately sets the opacity of the entire page to 0 until this script loads. Only then is the page revealed.\r\n\r\nThis is a little odd; as developer Justin Avery [writes](https://responsivedesign.is/articles/whats-the-deal-with-accelerated-mobile-pages-amp), \"Surely the document itself is going to be faster than loading a library to try and make it load faster.\"\r\n\r\nPinboard.in creator Maciej Ceg\u0142owski did just that, putting together a demo page that duplicates the AMP-based AMP homepage without that JavaScript. Over a 3G connection, Ceg\u0142owski's page fills the viewport in [1.9 seconds](http://www.webpagetest.org/result/151016_RF_VNE/). The AMP homepage takes [9.2 seconds](http://www.webpagetest.org/result/151016_9J_VNN/). JavaScript slows down page load times, even when that JavaScript is part of Google's plan to speed up the Web.\r\n\r\nIronically, for something that is ostensibly trying to encourage better behavior from developers and publishers, this means that pages using progressive enhancement, keeping scripts to a minimum and aggressively caching content\u2014in other words sites following best practices and trying to do things right\u2014may be slower in AMP.\r\n\r\nIn the end, developers and publishers who have been following best practices for Web development and don't rely on dozens of tracking networks and ads have little to gain from AMP. Unfortunately, the publishers building their sites like that right now are few and far between. 
Most publishers have much to gain from generating AMP pages\u2014at least in terms of speed. Google says that AMP can improve page speed index scores by between 15 to 85 percent. That huge range is likely a direct result of how many third-party scripts are being loaded on some sites.\r\n\r\nThe dependency on JavaScript has another detrimental effect. AMP documents depend on JavaScript, which is to say that if their (albeit small) script fails to load for some reason\u2014say, you're going through a tunnel on a train or only have a flaky one-bar connection at the beach\u2014the AMP page is completely blank. When an AMP page fails, it fails spectacularly.\r\n\r\nGoogle knows better than this. Even Gmail still offers a pure HTML-based fallback version of itself.\r\n\r\n## AMP for publishers\r\n\r\nUnder the AMP bargain, all big media has to do is give up its ad networks. And interactive maps. And data visualizations. And comment systems.\r\n\r\nYour WordPress blog can get in on the stripped-down AMP action as well. Given that WordPress powers roughly 24 percent of all sites on the Web, having an easy way to generate AMP documents from WordPress means a huge boost in adoption for AMP. It's certainly possible to build fast websites using WordPress, but it's also easy to do the opposite. WordPress plugins often have a dramatic (negative) impact on load times. It isn't uncommon to see a WordPress site loading not just one but several external JavaScript libraries because the user installed three plugins that each use a different library. AMP neatly solves that problem by stripping everything out.\r\n\r\nSo why would publishers want to use AMP? Google, while its influence has dipped a tad across industries (as Facebook and Twitter continue to drive more traffic), remains a powerful driver of traffic. When Google promises more eyeballs on their stories, big media listens.\r\n\r\nAMP isn't trying to get rid of the Web as we know it; it just wants to create a parallel one. 
Under this system, publishers would not stop generating regular pages, but they would also start generating AMP files, usually (judging by the early adopter examples) by appending /amp to the end of the URL. The AMP page and the canonical page would reference each other through standard HTML tags. User agents could then pick and choose between them. That is, Google's Web crawler might grab the AMP page, but desktop Firefox might hit the AMP page and redirect to the canonical URL.\r\n\r\nOn one hand, what this amounts to is that after years of telling the Web to stop making m. mobile-specific websites, Google is telling the Web to make /amp-specific mobile pages. On the other hand, this nudges publishers toward an idea that's big in the [IndieWeb movement](http://indiewebcamp.com/): Publish (on your) Own Site, Syndicate Elsewhere (or [POSSE](http://indiewebcamp.com/POSSE) for short).\r\n\r\nThe idea is to own the canonical copy of the content on your own site but then to send that content everywhere you can. Or rather, everywhere you want to reach your readers. Facebook Instant Article? Sure, hook up the RSS feed. Apple News? Send the feed over there, too. AMP? Sure, generate an AMP page. No need to stop there\u2014tap the new Medium API and half a dozen others as well.\r\n\r\nReading is a fragmented experience. Some people will love reading on the Web, some via RSS in their favorite reader, some in Facebook Instant Articles, some via AMP pages on Twitter, some via Lynx in their terminal running on a [restored TRS-80](http://arstechnica.com/information-technology/2015/08/surfing-the-internet-from-my-trs-80-model-100/) (seriously, it can be done. See below). 
The beauty of the POSSE approach is that you can reach them all from a single, canonical source.\r\n\r\n## AMP and the open Web\r\n\r\nWhile AMP has problems and just might be designed to lock publishers into a Google-controlled format, so far it does seem friendlier to the open Web than Facebook Instant Articles.\r\n\r\nIn fact, if you want to be optimistic, you could look at AMP as the carrot that Google has been looking for in its effort to speed up the Web. As noted Web developer (and AMP optimist) Jeremy Keith [writes](https://adactio.com/journal/9646) in a piece on AMP, \"My hope is that the current will flow in both directions. As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask 'Why can't our regular pages be this fast?' By showing that there is life beyond big bloated invasive webpages, perhaps the AMP project will work as a demo of what the whole Web could be.\"\r\n\r\nNot everyone is that optimistic about AMP, though. Developer and Author Tim Kadlec [writes](https://timkadlec.com/2015/10/amp-and-incentives/), \"[AMP] doesn't feel like something helping the open Web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the Web... Using a very specific tool to build a tailored version of my page in order to 'reach everyone' doesn't fit any definition of the 'open Web' that I've ever heard.\"\r\n\r\nThere's one other important aspect to AMP that helps speed up their pages: Google will cache your pages on its CDN for free. \"AMP is caching... You can use their caching if you conform to certain rules,\" writes Dave Winer, developer and creator of RSS, [in a post on AMP](http://scripting.com/2015/10/10/supportingStandardsWithoutAllThatNastyInterop.html). \"If you don't, you can use your own caching. I can't imagine there's a lot of difference unless Google weighs search results based on whether you use their code.\"\r\n", "body_html": "

[This story originally appeared on Ars Technica; to comment and enjoy the full reading experience, with images (including a TRS-80 browsing the web), you should read it over there.]


There's a story going around today that the Web is too slow, especially over mobile networks. It's a pretty good story\u2014and it's a perpetual story. The Web, while certainly improved from the days of 14.4k modems, has never been as fast as we want it to be, which is to say that the Web has never been instantaneous.


Curiously, rather than a focus on possible cures, like increasing network speeds, finding ways to decrease network latency, or even speeding up Web browsers, the latest version of the \"Web is too slow\" story pins the blame on the Web itself. And, perhaps more pointedly, this blame falls directly on the people who make it.


The average webpage has increased in size at a terrific rate. In January 2012, the average page tracked by HTTPArchive transferred 1,239kB and made 86 requests. Fast forward to September 2015, and the average page loads 2,162kB of data and makes 103 requests. These numbers don't directly correlate to longer page load-and-render times, of course, especially if download speeds are also increasing. But these figures are one indicator of how quickly webpages are bulking up.


Native mobile applications, on the other hand, are getting faster. Mobile devices get more powerful with every release cycle, and native apps take better advantage of that power.


So as the story goes, apps get faster, the Web gets slower. This is allegedly why Facebook must invent Facebook Instant Articles, why Apple News must be built, and why Google must now create Accelerated Mobile Pages (AMP). Google is late to the game, but AMP has the same goal as Facebook's and Apple's efforts\u2014making the Web feel like a native application on mobile devices. (It's worth noting that all three solutions focus exclusively on mobile content.)


For AMP, two things in particular stand in the way of a lean, mean browsing experience: JavaScript... and advertisements that use JavaScript. The AMP story is compelling. It has good guys (Google) and bad guys (everyone not using Google Ads), and it's true to most of our experiences. But this narrative has some fundamental problems. For example, Google owns the largest ad server network on the Web. If ads are such a problem, why doesn't Google get to work speeding up the ads?


There are other potential issues looming with the AMP initiative as well, some as big as the state of the open Web itself. But to think through the possible ramifications of AMP, first you need to understand Google's new offering itself.


What is AMP?


To understand AMP, you first need to understand Facebook's Instant Articles. Instant Articles use RSS and standard HTML tags to create an optimized, slightly stripped-down version of an article. Facebook then allows for some extra rich content like auto-playing video or audio clips. Despite this, Facebook claims that Instant Articles are up to 10 times faster than their siblings on the open Web. Some of that speed comes from stripping things out, while some likely comes from aggressive caching.


But the key is that Instant Articles are only available via Facebook's mobile apps\u2014and only to established publishers who sign a deal with Facebook. That means reading articles from Facebook's Instant Article partners like National Geographic, BBC, and Buzzfeed is a faster, richer experience than reading those same articles when they appear on the publisher's site. Apple News appears to work roughly the same way, taking RSS feeds from publishers and then optimizing the content for delivery within Apple's application.


All this app-based content delivery cuts out the Web. That's a problem for the Web and, by extension, for Google, which leads us to Google's Accelerated Mobile Pages project.


Unlike Facebook Articles and Apple News, AMP eschews standards like RSS and HTML in favor of its own little modified subset of HTML. AMP HTML looks a lot like HTML without the bells and whistles. In fact, if you head over to the AMP project announcement, you'll see an AMP page rendered in your browser. It looks like any other page on the Web.


AMP markup uses an extremely limited set of tags. Form tags? Nope. Audio or video tags? Nope. Embed tags? Certainly not. Script tags? Nope. There's a very short list of the HTML tags allowed in AMP documents available over on the project page. There's also no JavaScript allowed. Those ads and tracking scripts will never be part of AMP documents (but don't worry, Google will still be tracking you).
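To make the restriction concrete, here is a rough sketch of a minimal AMP document as the spec described it at launch (the exact boilerplate has evolved since, and the page URL is an illustrative placeholder; only the AMP runtime URL is the real one):

```html
<!doctype html>
<!-- the "amp" (or lightning-bolt) attribute marks this as an AMP document -->
<html amp>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width,minimum-scale=1">
  <!-- the AMP page must point at its canonical sibling -->
  <link rel="canonical" href="https://example.com/article.html">
  <!-- the one script AMP permits: Google's own runtime -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
</head>
<body>
  <h1>Hello, AMP</h1>
  <p>No form, embed, audio, video, or author script tags allowed here.</p>
</body>
</html>
```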


AMP defines several of its own tags, things like amp-youtube, amp-ad, or amp-pixel. The extra tags are part of what's known as Web components, which will likely become a Web standard (or it might turn out to be \"ActiveX part 2,\" only the future knows for sure).


So far AMP probably sounds like a pretty good idea\u2014faster pages, no tracking scripts, no JavaScript at all (and so no overlay ads about signing up for newsletters). However, there are some problematic design choices in AMP. (At least, they're problematic if you like the open Web and current HTML standards.)


AMP re-invents the wheel for images by using the custom component amp-img instead of HTML's img tag, and it does the same thing with amp-audio and amp-video rather than use the HTML standard audio and video. AMP developers argue that this allows AMP to serve images only when required, which isn't possible with the HTML img tag. That, however, is a limitation of Web browsers, not HTML itself. AMP has also very clearly treated accessibility as an afterthought. You lose more than just a few HTML tags with AMP.
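To see the divergence side by side, here is roughly how the same image would be marked up in each world (attribute details per the AMP docs of the time; amp-img demands explicit dimensions so the runtime can reserve layout space before deciding whether to fetch the file at all):

```html
<!-- standard HTML: the browser fetches the image as soon as it parses the tag -->
<img src="photo.jpg" alt="A mountain vista">

<!-- AMP: a custom element with required width/height, fetched (or not)
     at the AMP runtime's discretion -->
<amp-img src="photo.jpg" alt="A mountain vista"
         width="600" height="400" layout="responsive"></amp-img>
```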


In other words, AMP is technically half baked at best. (There are dozens of open issues calling out some of the most egregious decisions in AMP's technical design.) The good news is that AMP developers are listening. One of the worst things about AMP's initial code was the decision to disable pinch-and-zoom on articles, and thankfully, Google has reversed course and eliminated the tag that prevented pinch and zoom.


But AMP's markup language is really just one part of the picture. After all, if all AMP really wanted to do was strip out all the enhancements and just present the content of a page, there are existing ways to do that. Speeding things up for users is a nice side benefit, but the point of AMP, as with Facebook Articles, looks to be more about locking users into a particular site/format/service. In this case, though, the \"users\" aren't you and me as readers; the \"users\" are the publishers putting content on the Web.


It's the ads, stupid


The goal of Facebook Instant Articles is to keep you on Facebook. No need to explore the larger Web when it's all right there in Facebook, especially when it loads so much faster in the Facebook app than it does in a browser.


Google seems to have recognized what a threat Facebook Instant Articles could be to Google's ability to serve ads. This is why Google's project is called Accelerated Mobile Pages. Sorry, desktop users, Google already knows how to get ads to you.


If you watch the AMP demo, which shows how AMP might work when it's integrated into search results next year, you'll notice that the viewer effectively never leaves Google. AMP pages are laid over the Google search page in much the same way that outside webpages are loaded in native applications on most mobile platforms. The experience from the user's point of view is just like the experience of using a mobile app.


Google needs the Web to be on par with the speeds in mobile apps. And to its credit, the company has some of the smartest engineers working on the problem. Google has made one of the fastest Web browsers (if not the fastest) by building Chrome, and in doing so the company has pushed other vendors to speed up their browsers as well. Since Chrome debuted, browsers have become faster and better at an astonishing rate. Score one for Google.


The company has also been touting the benefits of mobile-friendly pages, first by labeling them as such in search results on mobile devices and then later by ranking mobile-friendly pages above not-so-friendly ones when other factors are equal. Google has been quick to adopt speed-improving new HTML standards like the responsive images effort, which was first supported by Chrome. Score another one for Google.


But pages keep growing faster than network speeds, and the Web slows down. In other words, Google has tried just about everything within its considerable power as a search behemoth to get Web developers and publishers large and small to speed up their pages. It just isn't working.


One increasingly popular reaction to slow webpages has been the use of content blockers, typically browser add-ons that stop pages from loading anything but the primary content of the page. Content blockers have been around for over a decade now (NoScript first appeared for Firefox in 2005), but their use has largely been limited to the desktop. That changed in Apple's iOS 9, which for the first time put simple content-blocking tools in the hands of millions of mobile users.


Combine all the eyeballs that are using iOS with content blockers, reading Facebook Instant Articles, and perusing Apple News, and you suddenly have a whole lot of eyeballs that will never see any Google ads. That's a problem for Google, one that AMP is designed to fix.


Static pages that require Google's JavaScript


The most basic thing you can do on the Web is create a flat HTML file that sits on a server and contains some basic tags. This type of page will always be lightning fast. It's also insanely simple. This is literally all you need to do to put information on the Web. There's no need for JavaScript, no need even for CSS.
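As a concrete illustration (a hypothetical hello-world page, not something from the article), the entire file could be:

```html
<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>A fast page</title>
</head>
<body>
  <p>Some information, published on the Web. That's it.</p>
</body>
</html>
```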


This is more or less the sort of page AMP wants you to create (AMP doesn't care if your pages are actually static or\u2014more likely\u2014generated from a database. The point is that what's rendered is static). But then AMP wants to turn around and require that each page include a third-party script in order to load. AMP deliberately sets the opacity of the entire page to 0 until this script loads. Only then is the page revealed.
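Simplified, the hiding mechanism amounts to something like the following (a sketch only; the production boilerplate is more elaborate, using a timed CSS animation rather than a bare opacity rule):

```html
<style>
  /* the page starts out invisible... */
  body { opacity: 0; }
</style>
<script async src="https://cdn.ampproject.org/v0.js"></script>
<!-- ...and only the AMP runtime, once it loads, makes the page visible again -->
```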


This is a little odd; as developer Justin Avery writes, \"Surely the document itself is going to be faster than loading a library to try and make it load faster.\"


Pinboard.in creator Maciej Ceg\u0142owski did just that, putting together a demo page that duplicates the AMP project homepage (itself built with AMP) without that JavaScript. Over a 3G connection, Ceg\u0142owski's page fills the viewport in 1.9 seconds. The AMP homepage takes 9.2 seconds. JavaScript slows down page load times, even when that JavaScript is part of Google's plan to speed up the Web.


Ironically, for something that is ostensibly trying to encourage better behavior from developers and publishers, this means that pages using progressive enhancement, keeping scripts to a minimum and aggressively caching content\u2014in other words sites following best practices and trying to do things right\u2014may be slower in AMP.


In the end, developers and publishers who have been following best practices for Web development and don't rely on dozens of tracking networks and ads have little to gain from AMP. Unfortunately, the publishers building their sites like that right now are few and far between. Most publishers have much to gain from generating AMP pages\u2014at least in terms of speed. Google says that AMP can improve page speed index scores by between 15 and 85 percent. That huge range is likely a direct result of how many third-party scripts are being loaded on some sites.


The dependency on JavaScript has another detrimental effect. AMP documents depend on JavaScript, which is to say that if their (albeit small) script fails to load for some reason\u2014say, you're going through a tunnel on a train or only have a flaky one-bar connection at the beach\u2014the AMP page is completely blank. When an AMP page fails, it fails spectacularly.


Google knows better than this. Even Gmail still offers a pure HTML-based fallback version of itself.


AMP for publishers


Under the AMP bargain, all big media has to do is give up its ad networks. And interactive maps. And data visualizations. And comment systems.


Your WordPress blog can get in on the stripped-down AMP action as well. Given that WordPress powers roughly 24 percent of all sites on the Web, having an easy way to generate AMP documents from WordPress means a huge boost in adoption for AMP. It's certainly possible to build fast websites using WordPress, but it's also easy to do the opposite. WordPress plugins often have a dramatic (negative) impact on load times. It isn't uncommon to see a WordPress site loading not just one but several external JavaScript libraries because the user installed three plugins that each use a different library. AMP neatly solves that problem by stripping everything out.

\n

So why would publishers want to use AMP? Google, while its influence has dipped a tad across industries (as Facebook and Twitter continue to drive more traffic), remains a powerful driver of traffic. When Google promises more eyeballs on their stories, big media listens.

\n

AMP isn't trying to get rid of the Web as we know it; it just wants to create a parallel one. Under this system, publishers would not stop generating regular pages, but they would also start generating AMP files, usually (judging by the early adopter examples) by appending /amp to the end of the URL. The AMP page and the canonical page would reference each other through standard HTML tags. User agents could then pick and choose between them. That is, Google's Web crawler might grab the AMP page, but desktop Firefox might hit the AMP page and redirect to the canonical URL.
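
\n

That cross-referencing is done with two standard link tags in each document's head, along the lines of this sketch (the URLs here are hypothetical examples):

```html
<!-- On the canonical page, advertising its AMP counterpart -->
<link rel="amphtml" href="https://example.com/article/amp/">

<!-- On the AMP page, pointing back at the canonical version -->
<link rel="canonical" href="https://example.com/article/">
```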

\n

On one hand, what this amounts to is that after years of telling the Web to stop making m.-style mobile-specific websites, Google is telling the Web to make /amp-specific mobile pages. On the other hand, this nudges publishers toward an idea that's big in the IndieWeb movement: Publish (on your) Own Site, Syndicate Elsewhere (or POSSE for short).

\n

The idea is to own the canonical copy of the content on your own site but then to send that content everywhere you can. Or rather, everywhere you want to reach your readers. Facebook Instant Article? Sure, hook up the RSS feed. Apple News? Send the feed over there, too. AMP? Sure, generate an AMP page. No need to stop there\u2014tap the new Medium API and half a dozen others as well.

\n

Reading is a fragmented experience. Some people will love reading on the Web, some via RSS in their favorite reader, some in Facebook Instant Articles, some via AMP pages on Twitter, some via Lynx in their terminal running on a restored TRS-80 (seriously, it can be done. See below). The beauty of the POSSE approach is that you can reach them all from a single, canonical source.

\n

AMP and the open Web

\n

While AMP has problems and just might be designed to lock publishers into a Google-controlled format, so far it does seem friendlier to the open Web than Facebook Instant Articles.

\n

In fact, if you want to be optimistic, you could look at AMP as the carrot that Google has been looking for in its effort to speed up the Web. As noted Web developer (and AMP optimist) Jeremy Keith writes in a piece on AMP, \"My hope is that the current will flow in both directions. As well as publishers creating AMP versions of their pages in order to appease Google, perhaps they will start to ask 'Why can't our regular pages be this fast?' By showing that there is life beyond big bloated invasive webpages, perhaps the AMP project will work as a demo of what the whole Web could be.\"

\n

Not everyone is that optimistic about AMP, though. Developer and author Tim Kadlec writes, \"[AMP] doesn't feel like something helping the open Web so much as it feels like something bringing a little bit of the walled garden mentality of native development onto the Web... Using a very specific tool to build a tailored version of my page in order to 'reach everyone' doesn't fit any definition of the 'open Web' that I've ever heard.\"

\n

There's one other important aspect of AMP that helps speed up pages: Google will cache your pages on its CDN for free. \"AMP is caching... You can use their caching if you conform to certain rules,\" writes Dave Winer, developer and creator of RSS, in a post on AMP. \"If you don't, you can use your own caching. I can't imagine there's a lot of difference unless Google weighs search results based on whether you use their code.\"

", "url": "http://arstechnica.com/information-technology/2015/11/googles-amp-an-internet-giant-tackles-the-old-myth-of-the-web-is-too-slow/", "pub_date": "2015-11-03T22:54:50", "publisher": 1}}, {"model": "resume.pubitem", "pk": 2, "fields": {"title": "Review: Dell Chromebook 13", "slug": "dell-chromebook-13", "body_markdown": "###The Dell Chromebook 13 Is Almost Perfect — And its Price Shows it\r\n\r\nAfter using close to a dozen different models, I\u2019ve come to the conclusion that there is no such thing as the perfect Chromebook. There is always something missing, always something that could be better. Even when there isn\u2019t, as in the case of Dell\u2019s new 13-inch Chromebook, you\u2019ll pay as much as you would for a low-end PC laptop that\u2019s capable of a lot more.\r\n\r\nPerfection is not something you\u2019ll find in a Chromebook; buying the one that\u2019s right for you consists of finding the right set of compromises. That said, if money is no object, the Dell Chromebook 13 is pretty close to perfect.\r\n\r\nBilled as \"business class,\" the Dell Chromebook 13 comes in a wide variety of configurations ranging from low end Celeron models (like the one I tested), to higher end models with up to 8GB of RAM, an Intel i5 processor, and a touchscreen. Prices range accordingly, from $400 all the way up to $900. Yes, Dell has joined Google in making it easy to drop nearly as much on a Chromebook as a low end ultrabook.\r\n\r\nThe first thing you\u2019ll notice about the Chromebook 13 is that it feels like a \"real\" laptop. The carbon fiber exterior and magnesium alloy chassis give it a look and build quality that has more in common with ultrabooks that the typical, all plastic construction of Chromebooks. It also has the sharpest, nicest IPS screen (1920\u00d71080 matte) this side of the Toshiba Chromebook 2. While the display is sharp and clear, the model I tested did have some noticeable light leaks on two edges. 
I haven\u2019t seen other reviewers mention anything of the sort though, so it could have just been my unit.\r\n\r\nThe keyboard is backlit and well-constructed with none of the sponginess that plagues some Chromebooks. The trackpad is similarly nice and I rarely had any trouble with errant taps while typing. The port setup is typical of Chromebooks in general, with a USB 2.0 port on the right and a USB 3.0 port, HDMI port, microSD card slot, and combo headphone/mic jack on the left. A nice touch: The microSD card slot allows the card to sit flush, so you can leave it plugged in for additional storage space.\r\n\r\nOne thing that hasn\u2019t really changed in over a year of Chromebook reviews is Chrome OS itself. Google pushes out regular updates, but it\u2019s more or less the same it was when it launched. While I still dislike the Google-centric universe that Chrome OS lives in, I\u2019ve come around to the actual experience of using it. Chrome OS, while limited to browser-based tasks (the list of which gets longer everyday) is more secure and much easier for most people to figure out that something like Windows 10.\r\n\r\nHopefully Google won\u2019t mess up Chrome\u2019s strong points\u2014security, simplicity, discoverability, ease-of-use, etc\u2014in the merger with its \u201cother\u201d OS, Android, which at this point feels more like a complicated way to deliver malware than an operating system anyone wants to use (yes, I use it, no, there\u2019s nothing better yet; that doesn\u2019t make it good though).\r\n\r\nDell has worked with Google to add Citrix-based virtualization for Chrome OS, which makes it moderately more \u201cbusiness class.\u201d This makes it possible to access your Windows desktop remotely through Chrome. Of course you can already do this with using Chrome Remote Desktop, but the Citrix version adds some security features (SonicWall VPN for instance). 
How useful this is will depend on what you do for work, honestly.\r\n\r\nDell\u2019s latest offering could be one of the speediest Chromebooks out there if you opt for the i5 chip and 8GB RAM. That will set you back $900 though, making it the second most expensive Chromebook out there. At the other end of the price spectrum is the Intel Celeron 3205U processor, 2GB of RAM, which will run you $399.\r\n\r\nThe model Dell sent WIRED was the next one up from the bottom, which has the same processor and hard drive but adds 2GB of RAM for a total of 4GB. This model sells for $429. Performance on this lower end machine was fine for everyday tasks, though if you do want to log in to your Windows machine and have a dozen tabs open at the same time, you should spend the money for one of the more powerful models.\r\n\r\nBattery life may well be even more important than raw power with Chromebooks, and the Dell is hard to beat in that department. Dell claims up to 12 hours, and in my testing that\u2019s pretty close to accurate. If you go on a Mr. Robot bender and stream hours of video you probably won\u2019t get to 12 hours, but for everyday use\u2014email, news reading, Web browsing, streaming music\u2014the Dell has very impressive battery life.\r\n\r\nThe Dell comes as close to the ideal Chromebook as anything I\u2019ve tested. The catch is that you\u2019ll pay for it. It\u2019s probably best compared directly to the only Chromebook that\u2019s more powerful and pricier\u2014the Pixel. If you want a high-end Chromebook and don\u2019t mind spending $900 for it, the Dell bests the Pixel in many ways, including battery life.", "body_html": "

The Dell Chromebook 13 Is Almost Perfect — And Its Price Shows It

\n

After using close to a dozen different models, I\u2019ve come to the conclusion that there is no such thing as the perfect Chromebook. There is always something missing, always something that could be better. Even when there isn\u2019t, as in the case of Dell\u2019s new 13-inch Chromebook, you\u2019ll pay as much as you would for a low-end PC laptop that\u2019s capable of a lot more.

\n

Perfection is not something you\u2019ll find in a Chromebook; buying the one that\u2019s right for you consists of finding the right set of compromises. That said, if money is no object, the Dell Chromebook 13 is pretty close to perfect.

\n

Billed as \"business class,\" the Dell Chromebook 13 comes in a wide variety of configurations, ranging from low-end Celeron models (like the one I tested) to higher-end models with up to 8GB of RAM, an Intel i5 processor, and a touchscreen. Prices range accordingly, from $400 all the way up to $900. Yes, Dell has joined Google in making it easy to drop nearly as much on a Chromebook as on a low-end ultrabook.

\n

The first thing you\u2019ll notice about the Chromebook 13 is that it feels like a \"real\" laptop. The carbon fiber exterior and magnesium alloy chassis give it a look and build quality that has more in common with ultrabooks than with the typical all-plastic construction of Chromebooks. It also has the sharpest, nicest IPS screen (1920\u00d71080 matte) this side of the Toshiba Chromebook 2. While the display is sharp and clear, the model I tested did have some noticeable light leaks on two edges. I haven\u2019t seen other reviewers mention anything of the sort though, so it could have just been my unit.

\n

The keyboard is backlit and well-constructed with none of the sponginess that plagues some Chromebooks. The trackpad is similarly nice and I rarely had any trouble with errant taps while typing. The port setup is typical of Chromebooks in general, with a USB 2.0 port on the right and a USB 3.0 port, HDMI port, microSD card slot, and combo headphone/mic jack on the left. A nice touch: The microSD card slot allows the card to sit flush, so you can leave it plugged in for additional storage space.

\n

One thing that hasn\u2019t really changed in over a year of Chromebook reviews is Chrome OS itself. Google pushes out regular updates, but it\u2019s more or less the same as it was when it launched. While I still dislike the Google-centric universe that Chrome OS lives in, I\u2019ve come around to the actual experience of using it. Chrome OS, while limited to browser-based tasks (the list of which gets longer every day), is more secure and much easier for most people to figure out than something like Windows 10.

\n

Hopefully Google won\u2019t mess up Chrome\u2019s strong points\u2014security, simplicity, discoverability, ease-of-use, etc\u2014in the merger with its \u201cother\u201d OS, Android, which at this point feels more like a complicated way to deliver malware than an operating system anyone wants to use (yes, I use it, no, there\u2019s nothing better yet; that doesn\u2019t make it good though).

\n

Dell has worked with Google to add Citrix-based virtualization for Chrome OS, which makes it moderately more \u201cbusiness class.\u201d This makes it possible to access your Windows desktop remotely through Chrome. Of course, you can already do this using Chrome Remote Desktop, but the Citrix version adds some security features (SonicWall VPN, for instance). How useful this is will depend on what you do for work, honestly.

\n

Dell\u2019s latest offering could be one of the speediest Chromebooks out there if you opt for the i5 chip and 8GB of RAM. That will set you back $900 though, making it the second most expensive Chromebook out there. At the other end of the price spectrum is a model with an Intel Celeron 3205U processor and 2GB of RAM, which will run you $399.

\n

The model Dell sent WIRED was the next one up from the bottom, which has the same processor and hard drive but adds 2GB of RAM for a total of 4GB. This model sells for $429. Performance on this lower end machine was fine for everyday tasks, though if you do want to log in to your Windows machine and have a dozen tabs open at the same time, you should spend the money for one of the more powerful models.

\n

Battery life may well be even more important than raw power with Chromebooks, and the Dell is hard to beat in that department. Dell claims up to 12 hours, and in my testing that\u2019s pretty close to accurate. If you go on a Mr. Robot bender and stream hours of video you probably won\u2019t get to 12 hours, but for everyday use\u2014email, news reading, Web browsing, streaming music\u2014the Dell has very impressive battery life.

\n

The Dell comes as close to the ideal Chromebook as anything I\u2019ve tested. The catch is that you\u2019ll pay for it. It\u2019s probably best compared directly to the only Chromebook that\u2019s more powerful and pricier\u2014the Pixel. If you want a high-end Chromebook and don\u2019t mind spending $900 for it, the Dell bests the Pixel in many ways, including battery life.

", "url": "http://www.wired.com/2015/11/review-dell-chromebook-13/", "pub_date": "2015-11-16T10:59:00", "publisher": 2}}, {"model": "resume.pubitem", "pk": 3, "fields": {"title": "Refined player: Fedora 23's workin' it like Monday morning ", "slug": "fedora-23-reg", "body_markdown": "Review OK, it was a slight delay \u2013 one week \u2013 but the latest Fedora, number 23, represents a significant update that was worth waiting for.\r\n\r\nThat\u2019s thanks not just to upstream projects like GNOME, now at 3.18, but also some impressive new features from team Fedora.\r\n\r\nLike its predecessor, this Fedora comes in three base configurations \u2013 Workstation, Server and Cloud. The former is the desktop release and the primary basis for my testing, though I also tested out the Server release this time around.\r\n\r\nThe default Fedora 23 live CD will install the GNOME desktop though there are plenty of spins available if you prefer something else. I opted for GNOME since a lot of what's new in GNOME, like much improved Wayland support is currently only really available through Fedora.\r\n\r\nI have been hard on Fedora's Anaconda installer in the past, but I am slowly coming around. The installation experience in Fedora 23 is hard to beat, particularly the way you don't need to visit sections if Fedora has guessed something right. For example, Anaconda correctly guessed my time zone so I can just skip that panel without even needing to click OK. It's a small thing, but it helps set a certain tone of feature completeness right from the start.\r\n\r\n\r\nI still think the button-based approach of Anaconda can sometimes make it hard to figure out what you've missed if it's your first time using the installer. But it's a little clearer in Fedora 23 because there's an additional orange bar across the bottom to tell you about whatever you missed.\r\n\r\nWhat's perhaps most encouraging about Anaconda is that Fedora keeps refining it. 
Having just installed and tested Ubuntu and openSUSE, I wouldn't hesitate to say Anaconda is a better experience than either. It's certainly faster thanks to the amount of stuff you can simply ignore.\r\n\r\nOnce you've got Fedora WorkStation installed the first thing you'll likely notice is GNOME 3.18. GNOME may be upstream from Fedora, but Fedora has long been where GNOME turns to showcase new features and Fedora 23 is no different.\r\n\r\nAmong the changes in GNOME 3.18 are faster searching, first-class support for integrating Google Drive in Nautilus, support for light sensors (handy on laptops since you can lower the back light setting and extend battery life) and improved Wayland support. More on Wayland in a minute, but some other new features in GNOME 3.18 deserve mention.\r\n\r\nGNOME Software now has support for firmware updates via fwupd. The firmware support means means you won't need any proprietary tools nor will you have to resort to pulling out the bootable DVDs. The catch is that the vendor for your hardware needs to upload the firmware to the Linux Vendor Firmware Service.\r\n\r\nAnother big new GNOME project that arrives \u2013 albeit in limited form \u2013 is the Xdg project. Xdg is a system for building, distributing and running sandboxed desktop applications.\r\n\r\nAside from the security gains of sandboxing, xdg-app also hopes to allow app developers to use a single package for multiple distros. The xdg support in Fedora 23 is still very experimental and none of the apps are actually packaged this way, but look for xdg support to continue expanding in Fedora and GNOME's futures.\r\n\r\nFedora has been an early adopter of Wayland, the X.org replacement that will eventually be the default option (coming perhaps as early as Fedora 24). 
If you'd like to play around with Wayland this release offers considerably more support than any other distro to date.\r\n\r\nIn fact, provided you have supported hardware, Wayland actually works quite well and, with a little extra effort installing some experimental repos can get you really nice features like full GTK 3 support for OpenOffice 5 \u2013 meaning support for HiDPI screens, among other things \u2013 and support for running monitors with DPI-independent resolution. That is, you can have hi-res and normal res monitors running off the same machine and it all just works. Reportedly, anyway.\r\n\r\nNot everything GNOME 3.18 is great, though. The GNOME project continues its curious take on usability by, once again, removing something that was genuinely useful. In this case, it's the file copy feedback message that was a small window with a progress bar. The window is gone and now you'll have to get by with a tiny icon in the Nautilus window that shows progress via a pie chart looking icon.\r\n\r\nI mention this not so much to poke fun at Nautilus's ever-declining usability, but because it is the only file copy feedback you'll get and unless you know it's there you'll probably keep dragging and dropping files, thinking they haven't copied, when in fact they have you just didn't notice. You silly user wanting feedback about an action you initiated. Sigh. By the time GNOME gets done with it Nautilus won't actually do anything anymore, it will just be a nice looking window you can use to view files.\r\n\r\n\r\nOn the plus side, the new Google Drive integration is quite nice. Once you enter your Google account details interacting with your Drive documents in indistinguishable from local documents (provided you have an internet connection that is, without one you'll be looking at a lot of documents you can't actually open).\r\n\r\nThere are some big changes afoot in the Server release of Fedora 23 as well. 
Fedora's Cockpit, a web-based management console that aims to make everyone a reasonably compliant sysadmin, has been updated again. You'd be hard pressed to find a simpler visual way to monitor and manage your Fedora server deployments. You can do everything from here, including searching for, installing and deploying Docker containers with a single click. Cockpit's greatest contribution to the server world isn't its ease of use, though: it's that its ease of use means more secure deployments.\r\n\r\nThis release continues to improve on security by adding support for SSH key authentication in Cockpit and support for configuring user accounts with authorized keys. Fedora 23 Server also gets a rolekit update with the addition of a new role for a cache server for web applications (powered by memcached).\r\n\r\nAll versions of Fedora 23 ship with Linux kernel 4.2, which is pretty close to the latest and greatest, adding new hardware support for Intel Skylake CPUs and AMD GPUs.\r\n\r\nFedora's new DNF package manager gets some more new powers in this release: it's now in charge of system upgrades. That's right, now more fedup, which frankly, didn't make the update process very smooth in my experience. The DNF update process is very simple, just a couple of commands. DNF also uses systemd's support for offline system updates and allows you to roll them back if necessary.\r\n\r\nThe new upgrade tools are a welcome change not just because upgrading is easier and safer (with the ability to roll back should things go awry), but because Fedora has no LTS style release. Fedora 23 will be supported for 12 months and then you'll need to move on to Fedora 24. That's a bit abrupt if you're coming from the Ubuntu (or especially Debian) world of LTS releases with two years of support.\r\n\r\nIf you want that in the Red Hat ecosystem then you need to turn to RHEL or CentOS. 
However, now that Fedora is capable of transactional updates with rollbacks the missing LTS release feels, well, less missing, since upgrading is less problematic. ", "body_html": "

Review OK, it was a slight delay \u2013 one week \u2013 but the latest Fedora, number 23, represents a significant update that was worth waiting for.

\n

That\u2019s thanks not just to upstream projects like GNOME, now at 3.18, but also some impressive new features from team Fedora.

\n

Like its predecessor, this Fedora comes in three base configurations \u2013 Workstation, Server and Cloud. The former is the desktop release and the primary basis for my testing, though I also tested out the Server release this time around.

\n

The default Fedora 23 live CD will install the GNOME desktop, though there are plenty of spins available if you prefer something else. I opted for GNOME since a lot of what's new in GNOME, like much-improved Wayland support, is currently only really available through Fedora.

\n

I have been hard on Fedora's Anaconda installer in the past, but I am slowly coming around. The installation experience in Fedora 23 is hard to beat, particularly the way you don't need to visit sections if Fedora has guessed something right. For example, Anaconda correctly guessed my time zone, so I could just skip that panel without even needing to click OK. It's a small thing, but it helps set a certain tone of feature completeness right from the start.

\n

I still think the button-based approach of Anaconda can sometimes make it hard to figure out what you've missed if it's your first time using the installer. But it's a little clearer in Fedora 23 because there's an additional orange bar across the bottom to tell you about whatever you missed.

\n

What's perhaps most encouraging about Anaconda is that Fedora keeps refining it. Having just installed and tested Ubuntu and openSUSE, I wouldn't hesitate to say Anaconda is a better experience than either. It's certainly faster thanks to the amount of stuff you can simply ignore.

\n

Once you've got Fedora Workstation installed, the first thing you'll likely notice is GNOME 3.18. GNOME may be upstream from Fedora, but Fedora has long been where GNOME turns to showcase new features, and Fedora 23 is no different.

\n

Among the changes in GNOME 3.18 are faster searching, first-class support for integrating Google Drive in Nautilus, support for light sensors (handy on laptops, since you can lower the backlight setting and extend battery life) and improved Wayland support. More on Wayland in a minute, but some other new features in GNOME 3.18 deserve mention.

\n

GNOME Software now has support for firmware updates via fwupd. The firmware support means you won't need any proprietary tools, nor will you have to resort to pulling out bootable DVDs. The catch is that the vendor for your hardware needs to upload the firmware to the Linux Vendor Firmware Service.
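
\n

From the command line, the same fwupd machinery is driven by the fwupdmgr client. A rough sketch of the flow (assuming the fwupd daemon is running and your vendor has actually published firmware to the service) looks like this:

```shell
# Pull the latest firmware metadata from the Linux Vendor Firmware Service
fwupdmgr refresh

# See which attached devices have updates available
fwupdmgr get-updates

# Apply the available updates (some devices need a reboot to finish)
fwupdmgr update
```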

\n

Another big new GNOME project that arrives \u2013 albeit in limited form \u2013 is xdg-app, a system for building, distributing and running sandboxed desktop applications.

\n

Aside from the security gains of sandboxing, xdg-app also hopes to allow app developers to use a single package for multiple distros. The xdg support in Fedora 23 is still very experimental and none of the apps are actually packaged this way, but look for xdg support to continue expanding in Fedora and GNOME's futures.

\n

Fedora has been an early adopter of Wayland, the X.org replacement that will eventually be the default option (coming perhaps as early as Fedora 24). If you'd like to play around with Wayland this release offers considerably more support than any other distro to date.

\n

In fact, provided you have supported hardware, Wayland actually works quite well and, with a little extra effort installing some experimental repos, can get you really nice features like full GTK 3 support for LibreOffice 5 \u2013 meaning support for HiDPI screens, among other things \u2013 and support for running monitors with DPI-independent resolution. That is, you can have hi-res and normal-res monitors running off the same machine and it all just works. Reportedly, anyway.

\n

Not everything in GNOME 3.18 is great, though. The GNOME project continues its curious take on usability by, once again, removing something that was genuinely useful. In this case, it's the file copy feedback message, which was a small window with a progress bar. The window is gone, and now you'll have to get by with a tiny pie-chart-like icon in the Nautilus window that shows progress.

\n

I mention this not so much to poke fun at Nautilus's ever-declining usability, but because it is the only file copy feedback you'll get, and unless you know it's there you'll probably keep dragging and dropping files, thinking they haven't copied, when in fact they have; you just didn't notice. You silly user, wanting feedback about an action you initiated. Sigh. By the time GNOME gets done with it, Nautilus won't actually do anything anymore; it will just be a nice-looking window you can use to view files.

\n

On the plus side, the new Google Drive integration is quite nice. Once you enter your Google account details, interacting with your Drive documents is indistinguishable from working with local documents (provided you have an internet connection, that is; without one you'll be looking at a lot of documents you can't actually open).

\n

There are some big changes afoot in the Server release of Fedora 23 as well. Fedora's Cockpit, a web-based management console that aims to make everyone a reasonably competent sysadmin, has been updated again. You'd be hard pressed to find a simpler visual way to monitor and manage your Fedora server deployments. You can do everything from here, including searching for, installing and deploying Docker containers with a single click. Cockpit's greatest contribution to the server world isn't its ease of use, though: it's that its ease of use means more secure deployments.

\n

This release continues to improve on security by adding support for SSH key authentication in Cockpit and support for configuring user accounts with authorized keys. Fedora 23 Server also gets a rolekit update with the addition of a new role for a cache server for web applications (powered by memcached).

\n

All versions of Fedora 23 ship with Linux kernel 4.2, which is pretty close to the latest and greatest, adding new hardware support for Intel Skylake CPUs and AMD GPUs.

\n

Fedora's new DNF package manager gets some more new powers in this release: it's now in charge of system upgrades. That's right, no more fedup, which, frankly, didn't make the update process very smooth in my experience. The DNF update process is very simple, just a couple of commands. DNF also uses systemd's support for offline system updates and allows you to roll them back if necessary.
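
\n

For the curious, the whole dance looks roughly like this (the release number is just an example, and the plugin packaging may differ slightly on your system):

```shell
# Install the system-upgrade plugin for DNF
sudo dnf install dnf-plugin-system-upgrade

# Download all the packages for the target release
sudo dnf system-upgrade download --releasever=23

# Reboot into the offline upgrade, handled by systemd
sudo dnf system-upgrade reboot
```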

\n

The new upgrade tools are a welcome change not just because upgrading is easier and safer (with the ability to roll back should things go awry), but because Fedora has no LTS style release. Fedora 23 will be supported for 12 months and then you'll need to move on to Fedora 24. That's a bit abrupt if you're coming from the Ubuntu (or especially Debian) world of LTS releases with two years of support.

\n

If you want that in the Red Hat ecosystem then you need to turn to RHEL or CentOS. However, now that Fedora is capable of transactional updates with rollbacks the missing LTS release feels, well, less missing, since upgrading is less problematic.

", "url": "http://www.theregister.co.uk/2015/11/17/fedora_23_review/", "pub_date": "2015-11-17T09:30:49", "publisher": 3}}, {"model": "resume.pubitem", "pk": 4, "fields": {"title": "Ubuntu 15.10 review: Wily Werewolf leaves scary experimentation for next year", "slug": "ars-ubuntu-1510", "body_markdown": "####Fall is usually Canonical's time for change, but 15.10 is merely the calm before Unity 8. \r\n\r\nFor many users, Unity 8 is a scarier proposition than some mythical human-wolf hybrid.\r\n\r\nCanonical recently released Ubuntu 15.10, nicknamed Wily Werewolf. In the past, an autumn release of Ubuntu Linux like this would have been more experimental, warranting some caution when updating. Such releases weren't quite update-at-your-own-risk rough, but they were often packed full of new features that were not fully baked. (For example, the now-shuttered Ubuntu One first debuted in 9.10. The Unity desktop became a default in 11.10, and the controversial Amazon search results in the Unity Dash made their debut in 12.10.) Especially compared to the spring .04 releases that tended to be stable (and every two years packaged as Long Term Support releases), autumn was Canonical's time to experiment.\r\nFurther Reading\r\n\r\nUnfortunately\u2014or fortunately, depending on how you feel about desktop experiments\u2014that's not the case with Wily Werewolf.\r\n\r\nThere are new features worth updating for in this release, but, on the whole, this is Canonical refining what it has already created. The organization is essentially getting ready for the next LTS release (Ubuntu 16.04, due toward the end of April 2016), which will also likely be the last LTS release based on Unity 7.\r\n\r\nSo by this time next year, 16.10 will bring back the experimental new features with an entirely different beast on the desktop. 
Look for Unity 8, Mir, and other big changes to return in next year's .10 release, re-establishing the fall as Canonical's playground.\r\n\r\nBut talk of Unity 8 and what it means for Ubuntu can wait\u2014we should first appreciate Ubuntu 15.10, which might be the very last of its kind for a little while. This is a stable, welcome update that doesn't require you to radically change your workflow or habits.\r\n\r\nVisually Ubuntu 15.10 looks a lot like previous releases.\r\n\r\nWhile Ubuntu 15.10 is unlikely to win any awards for innovation, the kernel update includes some very useful new features, a couple of UI changes for Unity, and plenty of application updates. All of these make the new release well worth an update.\r\n\r\nThe most notable UI changes are the scroll bars, which are now pulled straight from GNOME 3. Canonical has abandoned its little disappearing \"handle\"-style scrollbars in favor of GNOME's defaults (which have improved considerably since Ubuntu started work on its own version). The change appears to be based more on code maintenance and development effort than any strong aesthetic feelings from Canonical. Writing and maintaining your own scroll bar code may be more work than it's worth. The visual change is minor and solves quite a few bugs in Canonical's home-grown scroll bars, making it a win for users as well as the programmers once tasked with maintaining the old code base.\r\n\r\nThe old Ubuntu-created scroll bar is on the left, the new upstream version from GNOME on the right.\r\n\r\nAbandoning the homegrown scroll bars might also mean that Unity is able to integrate upstream GNOME updates faster than it has been lately. With this release, most of the GNOME suite of tools that powers much of Unity have finally been updated to 3.16, though a few holdouts like text-editor GEdit remain at much older versions.\r\n\r\nAside from the scroll bars, there aren't many visual changes to this release. 
Unity itself gets a slight version bump with some bug fixes and a couple of new features, including a new option to drag icons out of the Dash launcher and onto your desktop. If you were really missing the ability to clutter your desktop with something other than files, well, now you can throw some application launchers in there for good measure.\r\n\r\nOther notable bug fixes in Unity address an annoying problem with full-screen menu bars and the ability to access locally integrated menus\u2014that is, menus within the window rather than in the top bar\u2014on unfocused windows.\r\n\r\nWhile those are welcome fixes, most of what's interesting in this release is not directly from Canonical. The most exciting thing in Ubuntu 15.10 is probably the updated kernel, which is now based on the upstream Linux Kernel 4.2.\r\n\r\nThe 4.2 line brings support for recent Radeon GPUs and some new encryption options for ext4 disks. There's also support for Intel's new Broxton chips, which just might be finding their way into an Ubuntu Mobile device at some point. 15.10 marks the first time that the new live kernel patching has been available in Ubuntu, and this release adds a new kernel for the Raspberry Pi 2 as well.\r\n\r\nLinux game aficionados will be happy to hear that this release ships with support for the new Steam controller.\r\n\r\nDevelopers get some love in this release, too, with updates for Python and Ubuntu Make, Ubuntu's impressive suite of developer tools. If you're looking for a quick way to get, for example, a basic Android development environment setup, you'd be hard pressed to beat Ubuntu Make's simple umake android command.\r\n\r\nAnyone doing tech support from an Ubuntu machine will be happy to hear that Virtualbox has been updated with the latest version, which offers guest additions for Windows 10. The rest of Ubuntu's standard application suite has been updated as well, including the latest version of Firefox, Thunderbird, Chromium, and more. 
Even LibreOffice has been upgraded to version 5, a major update for LibreOffice users.\r\n\r\nIn testing, Ubuntu 15.10 has been rock solid. But that said, I had some trouble installing 15.10 via Chrubuntu on a new Dell 13 Chromebook primarily related to trackpad drivers. Chrubuntu is a bit of a hack, though it's probably not fair to hold it against Ubuntu. 15.10 has otherwise been very stable and wonderful to use on all the devices I've tested it on.\r\n\r\nThis is especially true on my old Eeepc, where Ubuntu offers something that gets very little press\u2014UI scaling. Typically HiDPI screens get all the attention, and, indeed, Unity looks great in high res, but Ubuntu also has some great scaling in the opposite direction. Using the slider under Settings >> displays, it's possible to downsize the entire UI, which gains you some precious real estate on smaller screens. It doesn't work everywhere (Firefox is my most-used exception), but it does make it easy to reclaim a few pixels on small screens.\r\n\r\n###Ubuntu 15.10 Flavors\r\n\r\nWhen most people refer to Ubuntu, they mean the Unity desktop version, but there are half a dozen other official Ubuntu \"flavors\" using just about every popular desktop available for Linux.\r\n\r\nThe release of Wily Werewolf brings updates for all of them, but perhaps none is as big or impressive as Kubuntu 15.10. Kubuntu has always been one of the nicer KDE-based distros, but this release is particularly impressive. With Kubuntu 15.04 earlier this year, Kubuntu made the leap to Plasma 5, the next generation of KDE. At the start, things were rough around the edges. Kubuntu 15.10 adds an impressive list of bug fixes and some UI polish that make it one of the best KDE desktops available right now (the other standout being openSUSE Leap). 
This update features Plasma 5.4 and KDE Applications 15.08, which means the latest set of stock KDE apps and underlying tools you can get in a KDE distro.\r\n\r\nKubuntu 15.10 with the new Breeze KDE theme.\r\n\r\nThe new Breeze desktop with its flat, colorful, high-contrast look is what KDE refers to as a modernized interface, and it has \"reduced visual clutter throughout the workspace.\" (For more details on what's new in Plasma 5, see Ars' earlier review.)\r\n\r\nUnfortunately Kubuntu 15.10 comes along with the news that the lead developer of Kubuntu is [leaving the project](https://kubuntu.org/news/jonathan-riddell-stands-down-as-release-manager-of-kubuntu/). The good news is that he'll still be actively involved in KDE, but the bad news is that the troubling accusations he made about Canonical's misuse of donations is the reason for his departure. Canonical has reportedly launched an internal audit to figure out what, if anything, went wrong.\r\n\r\nThe other notable update among the various Ubuntu flavors is an Ubuntu MATE release intended for the Raspberry Pi 2. The lightweight MATE desktop is a natural fit for the Pi, and the new tailored release makes it much easier to get it installed and up and running on your Raspberry Pi 2.\r\n\r\nDespite a healthy list of new features in Unity and quite a bit of change in some of the other flavors, many (including me) feel a certain sense of disappointment with 15.10.\r\n\r\nWhile there's something to be said for solid updates that don't rock the boat and let you keep getting work done, that's really what LTS releases were designed for. If you prize stability, stick with 14.04 (or use Debian stable). It would be nice to see Ubuntu's x.10 releases return to something a bit edgier and more experimental.\r\n\r\nThat said, you actually can get something very experimental in this release. 
In fact, it's so experimental that it isn't quite ready for even a .10 release, and you'll need to install it yourself\u2014Ubuntu running Unity 8.\r\n\r\nYes, this is the very thing that has made Ubuntu a tad boring lately\u2014seemingly all development effort has been focused on Ubuntu Mobile and the new Unity 8 desktop. While this setup is actually [relatively easy to install](https://wiki.ubuntu.com/Unity8inLXC) now, it's still very buggy. That's why the latest Unity iteration is available as an LXC container, which helps keep it fully isolated from your production machine.\r\n\r\nUnity 8 as a login option. This is likely the approach Canonical will take, at least for the first few releases\u2014Unity 8 as a separate login option.\r\n\r\nI took it for a spin and, well, here's the thing about Unity 8: it's buggy and unstable, but it's getting really close. Today, it's possible to experience what Canonical has in mind for the future, and it actually looks pretty great.\r\n\r\nThe really exciting part of Unity 8, though, isn't on the desktop but on Ubuntu Mobile and Canonical's vision of \"convergence.\" [Convergence](http://arstechnica.com/gadgets/2013/01/canonical-unveils-ubuntu-phone-os-that-doubles-as-a-full-pc/), for Canonical, means the mobile device becomes a full desktop PC (with the addition of a larger-screen monitor). To make this possible, Canonical has developed Unity 8, which will bring the same underlying code base to both the desktop and mobile versions of the OS.\r\n\r\nThe most impressive Unity 8 demo I've seen comes from Canonical engineers who have posted a couple of [video demos of GIMP](https://plus.google.com/+MichaelHall119/posts/HBRyD8npeJk) running on an Ubuntu Mobile device.\r\n\r\nThe point isn't that GIMP is on your phone; that's more of a novelty since the interface would be unusably small and, in the end, pointless beyond the \"hey look at that\" factor. 
The point is that you plug your phone into a monitor, and, all of a sudden, you have the full power of GIMP running on a device that fits in your pocket (and reverts to a mobile OS when you unplug it from the monitor). It sounds good, and, now, for the first time, it actually looks believable.\r\n\r\nWhat you can currently see in the desktop version is the opposite portion of Canonical's convergence. Mobile applications scale up to run on the desktop device, and some new visual splashes like the 3D app switcher and flatter visual look are showcased in the video below.\r\n\r\nIt won't be for everyone, but if you're underwhelmed by iOS' and Android's attempts to provide a desktop-quality experience with the applications you already use, Ubuntu Mobile is looking like it might finally deliver the goods.\r\n\r\nUbuntu Mobile is also the reason you have boring .10 releases like Wily Werewolf. Canonical is getting its ducks in a row for Unity 8. There will no doubt be growing pains involved with the transition, but a day will come soon when the minor, perhaps unremarkable, releases like 15.10 are a thing of long-lost memories.\r\n\r\nIf you want a desktop that's reliable, solid, but also pushing things forward\u2014which is to say, if you want the experience Unity has been providing for the last three or even four releases\u2014you will likely want to get the 16.04 LTS release coming next April. It will probably be the last Unity 7 release. But if you want to live on the edge, Unity 8 will be, if not the default, at least only a login screen away come this time next year.\r\n\r\nIn the meantime, enjoy your quiet days of Ubuntu 15.10. The days of such calm releases are limited.", "body_html": "

Fall is usually Canonical's time for change, but 15.10 is merely the calm before Unity 8.

\n

For many users, Unity 8 is a scarier proposition than some mythical human-wolf hybrid.

\n

Canonical recently released Ubuntu 15.10, nicknamed Wily Werewolf. In the past, an autumn release of Ubuntu Linux like this would have been more experimental, warranting some caution when updating. Such releases weren't quite update-at-your-own-risk rough, but they were often packed full of new features that were not fully baked. (For example, the now-shuttered Ubuntu One first debuted in 9.10. The Unity desktop became a default in 11.10, and the controversial Amazon search results in the Unity Dash made their debut in 12.10.) Especially compared to the spring .04 releases that tended to be stable (and every two years packaged as Long Term Support releases), autumn was Canonical's time to experiment.

\n

Unfortunately\u2014or fortunately, depending on how you feel about desktop experiments\u2014that's not the case with Wily Werewolf.

\n

There are new features worth updating for in this release, but, on the whole, this is Canonical refining what it has already created. The organization is essentially getting ready for the next LTS release (Ubuntu 16.04, due toward the end of April 2016), which will also likely be the last LTS release based on Unity 7.

\n

So by this time next year, 16.10 will bring back the experimental new features with an entirely different beast on the desktop. Look for Unity 8, Mir, and other big changes to return in next year's .10 release, re-establishing the fall as Canonical's playground.

\n

But talk of Unity 8 and what it means for Ubuntu can wait\u2014we should first appreciate Ubuntu 15.10, which might be the very last of its kind for a little while. This is a stable, welcome update that doesn't require you to radically change your workflow or habits.

\n

Visually Ubuntu 15.10 looks a lot like previous releases.

\n

While Ubuntu 15.10 is unlikely to win any awards for innovation, the kernel update includes some very useful new features, a couple of UI changes for Unity, and plenty of application updates. All of these make the new release well worth an update.

\n

The most notable UI changes are the scroll bars, which are now pulled straight from GNOME 3. Canonical has abandoned its little disappearing \"handle\"-style scroll bars in favor of GNOME's defaults (which have improved considerably since Ubuntu started work on its own version). The change appears to be based more on code maintenance and development effort than any strong aesthetic feelings from Canonical. Writing and maintaining your own scroll bar code may be more work than it's worth. The visual change is minor and solves quite a few bugs in Canonical's home-grown scroll bars, making it a win for users as well as the programmers once tasked with maintaining the old code base.

\n

The old Ubuntu-created scroll bar is on the left, the new upstream version from GNOME on the right.

\n

Abandoning the homegrown scroll bars might also mean that Unity is able to integrate upstream GNOME updates faster than it has been lately. With this release, most of the GNOME suite of tools that powers much of Unity has finally been updated to 3.16, though a few holdouts, like the text editor gedit, remain at much older versions.

\n

Aside from the scroll bars, there aren't many visual changes to this release. Unity itself gets a slight version bump with some bug fixes and a couple of new features, including a new option to drag icons out of the Dash launcher and onto your desktop. If you were really missing the ability to clutter your desktop with something other than files, well, now you can throw some application launchers in there for good measure.

\n

Other notable bug fixes in Unity address an annoying problem with full-screen menu bars and the ability to access locally integrated menus\u2014that is, menus within the window rather than in the top bar\u2014on unfocused windows.

\n

While those are welcome fixes, most of what's interesting in this release is not directly from Canonical. The most exciting thing in Ubuntu 15.10 is probably the updated kernel, which is now based on the upstream Linux Kernel 4.2.

\n

The 4.2 line brings support for recent Radeon GPUs and some new encryption options for ext4 disks. There's also support for Intel's new Broxton chips, which just might be finding their way into an Ubuntu Mobile device at some point. 15.10 marks the first time that the new live kernel patching has been available in Ubuntu, and this release adds a new kernel for the Raspberry Pi 2 as well.

\n

Linux game aficionados will be happy to hear that this release ships with support for the new Steam controller.

\n

Developers get some love in this release, too, with updates for Python and Ubuntu Make, Ubuntu's impressive suite of developer tools. If you're looking for a quick way to get, for example, a basic Android development environment set up, you'd be hard-pressed to beat Ubuntu Make's simple umake android command.

\n

Anyone doing tech support from an Ubuntu machine will be happy to hear that VirtualBox has been updated to the latest version, which offers guest additions for Windows 10. The rest of Ubuntu's standard application suite has been updated as well, including the latest versions of Firefox, Thunderbird, Chromium, and more. Even LibreOffice has been upgraded to version 5, a major update for LibreOffice users.

\n

In testing, Ubuntu 15.10 has been rock solid. That said, I had some trouble installing 15.10 via Chrubuntu on a new Dell Chromebook 13, primarily related to trackpad drivers. Chrubuntu is a bit of a hack, though, so it's probably not fair to hold that against Ubuntu. 15.10 has otherwise been very stable and wonderful to use on all the devices I've tested it on.

\n

This is especially true on my old Eee PC, where Ubuntu offers something that gets very little press\u2014UI scaling. Typically HiDPI screens get all the attention, and, indeed, Unity looks great in high res, but Ubuntu also has some great scaling in the opposite direction. Using the slider under Settings > Displays, it's possible to downsize the entire UI, which gains you some precious real estate on smaller screens. It doesn't work everywhere (Firefox is the most notable exception for me), but it does make it easy to reclaim a few pixels on small screens.

\n

Ubuntu 15.10 Flavors

\n

When most people refer to Ubuntu, they mean the Unity desktop version, but there are half a dozen other official Ubuntu \"flavors\" using just about every popular desktop available for Linux.

\n

The release of Wily Werewolf brings updates for all of them, but perhaps none is as big or impressive as Kubuntu 15.10. Kubuntu has always been one of the nicer KDE-based distros, but this release is particularly impressive. With Kubuntu 15.04 earlier this year, Kubuntu made the leap to Plasma 5, the next generation of KDE. At the start, things were rough around the edges. Kubuntu 15.10 adds an impressive list of bug fixes and some UI polish that make it one of the best KDE desktops available right now (the other standout being openSUSE Leap). This update features Plasma 5.4 and KDE Applications 15.08, which means the latest set of stock KDE apps and underlying tools you can get in a KDE distro.

\n

Kubuntu 15.10 with the new Breeze KDE theme.

\n

The new Breeze desktop with its flat, colorful, high-contrast look is what KDE refers to as a modernized interface, and it has \"reduced visual clutter throughout the workspace.\" (For more details on what's new in Plasma 5, see Ars' earlier review.)

\n

Unfortunately, Kubuntu 15.10 comes along with the news that the lead developer of Kubuntu is leaving the project. The good news is that he'll still be actively involved in KDE, but the bad news is that the troubling accusations he made about Canonical's misuse of donations are the reason for his departure. Canonical has reportedly launched an internal audit to figure out what, if anything, went wrong.

\n

The other notable update among the various Ubuntu flavors is an Ubuntu MATE release intended for the Raspberry Pi 2. The lightweight MATE desktop is a natural fit for the Pi, and the new tailored release makes it much easier to get it installed and up and running on your Raspberry Pi 2.

\n

Despite a healthy list of new features in Unity and quite a bit of change in some of the other flavors, many (including me) feel a certain sense of disappointment with 15.10.

\n

While there's something to be said for solid updates that don't rock the boat and let you keep getting work done, that's really what LTS releases were designed for. If you prize stability, stick with 14.04 (or use Debian stable). It would be nice to see Ubuntu's x.10 releases return to something a bit edgier and more experimental.

\n

That said, you actually can get something very experimental in this release. In fact, it's so experimental that it isn't quite ready for even a .10 release, and you'll need to install it yourself\u2014Ubuntu running Unity 8.

\n

Yes, this is the very thing that has made Ubuntu a tad boring lately\u2014seemingly all development effort has been focused on Ubuntu Mobile and the new Unity 8 desktop. While this setup is actually relatively easy to install now, it's still very buggy. That's why the latest Unity iteration is available as an LXC container, which helps keep it fully isolated from your production machine.

\n

Unity 8 as a login option. This is likely the approach Canonical will take, at least for the first few releases\u2014Unity 8 as a separate login option.

\n

I took it for a spin and, well, here's the thing about Unity 8: it's buggy and unstable, but it's getting really close. Today, it's possible to experience what Canonical has in mind for the future, and it actually looks pretty great.

\n

The really exciting part of Unity 8, though, isn't on the desktop but on Ubuntu Mobile and Canonical's vision of \"convergence.\" Convergence, for Canonical, means the mobile device becomes a full desktop PC (with the addition of a larger-screen monitor). To make this possible, Canonical has developed Unity 8, which will bring the same underlying code base to both the desktop and mobile versions of the OS.

\n

The most impressive Unity 8 demo I've seen comes from Canonical engineers who have posted a couple of video demos of GIMP running on an Ubuntu Mobile device.

\n

The point isn't that GIMP is on your phone; that's more of a novelty since the interface would be unusably small and, in the end, pointless beyond the \"hey look at that\" factor. The point is that you plug your phone into a monitor, and, all of a sudden, you have the full power of GIMP running on a device that fits in your pocket (and reverts to a mobile OS when you unplug it from the monitor). It sounds good, and, now, for the first time, it actually looks believable.

\n

What you can currently see in the desktop version is the opposite portion of Canonical's convergence. Mobile applications scale up to run on the desktop device, and some new visual splashes like the 3D app switcher and flatter visual look are showcased in the video below.

\n

It won't be for everyone, but if you're underwhelmed by iOS' and Android's attempts to provide a desktop-quality experience with the applications you already use, Ubuntu Mobile is looking like it might finally deliver the goods.

\n

Ubuntu Mobile is also the reason you have boring .10 releases like Wily Werewolf. Canonical is getting its ducks in a row for Unity 8. There will no doubt be growing pains involved with the transition, but a day will come soon when minor, perhaps unremarkable, releases like 15.10 are a distant memory.

\n

If you want a desktop that's reliable, solid, but also pushing things forward\u2014which is to say, if you want the experience Unity has been providing for the last three or even four releases\u2014you will likely want to get the 16.04 LTS release coming next April. It will probably be the last Unity 7 release. But if you want to live on the edge, Unity 8 will be, if not the default, at least only a login screen away come this time next year.

\n

In the meantime, enjoy your quiet days of Ubuntu 15.10. The days of such calm releases are numbered.

", "url": "http://arstechnica.com/information-technology/2015/11/ubuntu-15-10-review-wily-werewolf-leaves-scary-experimentation-for-next-year/", "pub_date": "2015-11-19T12:03:27", "publisher": 1}}, {"model": "resume.pubitem", "pk": 5, "fields": {"title": "All hail Firefox Dev Edition 44 \u2013 animations, memory and all", "slug": "firefox-dev-44", "body_markdown": "Review When Mozilla released the first Firefox Developer Edition there wasn't much difference from the regular Firefox release, but all that changed recently.\r\n\r\nFirefox DE 44, delivered in early November, packs in a wealth of new features and improvements, particularly for anyone working with HTML5 and CSS3 animation.\r\n\r\nThe Developer Edition's Page Inspector tool adds an animation panel that allows developers to step through animations node by node and easily scrub, pause, and inspect each animation on the webpage.\r\n\r\nThe animation panel also ties into the DOM inspector so it's easy to switch between global and detail views of your animation. The animation panel also offers a visual cubic-bezier editor for tweaking any easing you need to apply to animations.\r\n\r\nThe animation tools make it much easier to create and edit animations in HTML without ever needing to leave the browser, but what might be the single most useful item is the new memory profiling tools \u2013 useful not just for developers, but by proxy end users who will thank you for reducing your memory footprint.\r\n\r\nThe memory tool offers a snapshot of everything in memory on a per-tab basis and then breaks everything down by type. 
There are four types of memory objects to look at: Objects, which are JavaScript objects grouped by class name; Scripts; Strings; and a generic \"Other\" for everything that isn't one of the first three.\r\n\r\nThere's one other way to inspect memory, though you'll need to turn it on each time you need it since it uses quite a bit of memory itself, namely \"allocation stack.\" With that activated (look for the checkbox at the top of the Memory Panel) you can quickly reference the actual code that created each object in memory.\r\n\r\nThat means, if you've got some huge object you want to re-factor, you can quickly see exactly which lines of code in your app created it. It might be the smartest JavaScript memory debugging tool I've ever used. Be sure to check out the Mozilla Hacks blog for more details and a nice video of everything you can do with the Memory debugger.\r\n\r\nEqually useful, though rougher around the edges, is the new WebSocket Debugging API. This API allows for monitoring WebSocket frames. Eventually Mozilla plans to turn this into a new WebSocket inspection panel, but at the moment you'll need to install an \"experimental\" add-on created by a Firefox Developer Tools engineer.\r\n\r\nExperimental or not, I didn't have any problems using it, it's no less stable than the rest of the Developer Edition. Because it's built on top of a pre-release version of Firefox, the Developer Edition can be a bit less stable than traditional Firefox, but it does mean you get new features faster. For example, this release starts with support for Mozilla's Electrolysis project(AKA multi-process Firefox) enabled by default.\r\n\r\nThis release also adds a new CSS filter inspector that lets you visually create, re-order and edit CSS3 filters, viewing your changes in real-time on the page.\r\n\r\nThere are two new measurement tools in this version as well. The first is a set of pixel rulers along the page margins, so think the pixel rulers in Photoshop or Gimp. 
The second is the more useful new measurement tool. Click the icon in the developer panel tool bar and then just click and drag anywhere on the page to get a pixel-based measurement.\r\n\r\nThis is one of those incredibly simple tools that once you've used it you'll wonder how you ever did without it. It's a bit like what might be the single most useful tool for developers, namely the forget button. The forget button resides in the tool bar and when clicked will quickly wipe your cache, cookies and browsing history for the last five minutes.\r\n\r\nPart of the problem with the Firefox Developer Edition is that it doesn't come with a detailed manual (there is some great documentation, but it often lacks real-world \"recipes\" to show what you can do with all the various tools).\r\n\r\nTo address that, and showcase some potential ways the developer tools can be useful, web animation engineer Rachel Nabors helped Mozilla create DevTools Challenger, an interactive showcase that's both an example of how to use the various tools and an example of what you can build with them.\r\n\r\nWhen Firefox Developer Edition launched a year ago I was sceptical, especially given Mozilla's decision to abandon Mozilla Labs amid a string of failed projects.\r\n\r\nTwelve months on, Developer Edition has proved itself a valuable tool, with half a dozen features you won't find anywhere else. Even better, thus far, the development pace shows no signs of stagnating.", "body_html": "

Review When Mozilla released the first Firefox Developer Edition there wasn't much difference from the regular Firefox release, but all that changed recently.

\n

Firefox DE 44, delivered in early November, packs in a wealth of new features and improvements, particularly for anyone working with HTML5 and CSS3 animation.

\n

The Developer Edition's Page Inspector tool adds an animation panel that allows developers to step through animations node by node and easily scrub, pause, and inspect each animation on the webpage.

\n

The animation panel also ties into the DOM inspector so it's easy to switch between global and detail views of your animation. It also offers a visual cubic-bezier editor for tweaking any easing you need to apply to animations.
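To make concrete what that editor manipulates: a CSS cubic-bezier(x1, y1, x2, y2) timing function is a curve from (0, 0) to (1, 1) whose x axis is elapsed time and whose y axis is animation progress. Here is a minimal, illustrative Python sketch (not how Firefox implements it) that evaluates such a function by bisecting on the x axis:

```python
def cubic_bezier(x1, y1, x2, y2):
    # Return a timing function f(t), like CSS cubic-bezier(x1, y1, x2, y2).
    # The curve runs from (0, 0) to (1, 1); for CSS-valid curves,
    # x1 and x2 lie in [0, 1], so x(s) is monotonic and bisection works.

    def coord(a, b, s):
        # One axis of the cubic Bezier polynomial with endpoints 0 and 1.
        return 3 * a * s * (1 - s) ** 2 + 3 * b * s ** 2 * (1 - s) + s ** 3

    def f(t):
        lo, hi = 0.0, 1.0
        for _ in range(50):  # solve x(s) = t by bisection
            mid = (lo + hi) / 2
            if coord(x1, x2, mid) < t:
                lo = mid
            else:
                hi = mid
        return coord(y1, y2, (lo + hi) / 2)

    return f

ease = cubic_bezier(0.25, 0.1, 0.25, 1.0)  # the curve CSS calls 'ease'
print(round(ease(0.0), 3), round(ease(1.0), 3))  # -> 0.0 1.0
```

Dragging the control points in the DevTools editor is equivalent to changing x1, y1, x2, y2 here and watching the shape of f change.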

\n

The animation tools make it much easier to create and edit animations in HTML without ever needing to leave the browser, but what might be the single most useful addition is the new memory profiling tool \u2013 useful not just for developers, but by proxy for end users, who will thank you for reducing your memory footprint.

\n

The memory tool offers a snapshot of everything in memory on a per-tab basis and then breaks everything down by type. There are four types of memory objects to look at: Objects, which are JavaScript objects grouped by class name; Scripts; Strings; and a generic \"Other\" for everything that isn't one of the first three.

\n

There's one other way to inspect memory: the \"allocation stack\" view, though you'll need to turn it on each time you need it, since it uses quite a bit of memory itself. With it activated (look for the checkbox at the top of the Memory Panel), you can quickly reference the actual code that created each object in memory.

\n

That means, if you've got some huge object you want to re-factor, you can quickly see exactly which lines of code in your app created it. It might be the smartest JavaScript memory debugging tool I've ever used. Be sure to check out the Mozilla Hacks blog for more details and a nice video of everything you can do with the Memory debugger.
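The allocation-stack idea is not unique to Firefox; Python's standard-library tracemalloc module offers a rough analogue, mapping live allocations back to the source lines that made them. A minimal sketch (plain Python, nothing Firefox-specific):

```python
import tracemalloc

# Record up to 10 frames of traceback for every allocation.
tracemalloc.start(10)

# Allocate something big that we might later want to refactor away.
big = [str(n) * 10 for n in range(10_000)]

snapshot = tracemalloc.take_snapshot()

# Group live allocations by the source line that made them, biggest
# first; the list comprehension above should be at or near the top.
for stat in snapshot.statistics('lineno')[:3]:
    print(stat)
```

The workflow is the same as in the DevTools memory panel: find the largest live objects, then jump to the line of code responsible for them.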

\n

Equally useful, though rougher around the edges, is the new WebSocket Debugging API. This API allows for monitoring WebSocket frames. Eventually Mozilla plans to turn this into a new WebSocket inspection panel, but at the moment you'll need to install an \"experimental\" add-on created by a Firefox Developer Tools engineer.
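For readers unfamiliar with what a WebSocket frame actually is, here is a minimal Python sketch of the RFC 6455 wire format that such a monitor displays. It handles only short, unmasked frames and has no relation to Mozilla's add-on:

```python
def parse_frame(data):
    # Decode one short, unmasked WebSocket frame per RFC 6455.
    # Real traffic may be masked, fragmented, or use the extended
    # 16/64-bit payload lengths; this sketch rejects those cases.
    first, second = data[0], data[1]
    fin = bool(first & 0x80)       # final fragment of a message?
    opcode = first & 0x0F          # 0x1 = text, 0x2 = binary, 0x8 = close
    masked = bool(second & 0x80)   # client-to-server frames are masked
    length = second & 0x7F         # values 126/127 signal extended lengths
    if masked or length >= 126:
        raise NotImplementedError('sketch handles short unmasked frames only')
    return fin, opcode, data[2:2 + length]

# A server-to-client text frame carrying the five bytes of 'Hello':
frame = bytes([0x81, 0x05]) + b'Hello'
print(parse_frame(frame))  # -> (True, 1, b'Hello')
```

Each row in a frame monitor is essentially this tuple: direction, opcode, and payload, which is why it is so much more useful than watching opaque TCP traffic.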

\n

Experimental or not, I didn't have any problems using it; it's no less stable than the rest of the Developer Edition. Because it's built on top of a pre-release version of Firefox, the Developer Edition can be a bit less stable than traditional Firefox, but it does mean you get new features faster. For example, this release starts with support for Mozilla's Electrolysis project (AKA multi-process Firefox) enabled by default.

\n

This release also adds a new CSS filter inspector that lets you visually create, re-order and edit CSS3 filters, viewing your changes in real-time on the page.

\n

There are two new measurement tools in this version as well. The first is a set of pixel rulers along the page margins, much like the pixel rulers in Photoshop or GIMP. The second is the more useful new measurement tool. Click the icon in the developer panel toolbar and then just click and drag anywhere on the page to get a pixel-based measurement.

\n

This is one of those incredibly simple tools that, once you've used it, you'll wonder how you ever did without it. In that respect, it's a bit like what might be the single most useful tool for developers: the forget button. The forget button resides in the toolbar and, when clicked, will quickly wipe your cache, cookies, and browsing history for the last five minutes.

\n

Part of the problem with the Firefox Developer Edition is that it doesn't come with a detailed manual (there is some great documentation, but it often lacks real-world \"recipes\" to show what you can do with all the various tools).

\n

To address that, and showcase some potential ways the developer tools can be useful, web animation engineer Rachel Nabors helped Mozilla create DevTools Challenger, an interactive showcase that's both an example of how to use the various tools and an example of what you can build with them.

\n

When Firefox Developer Edition launched a year ago I was sceptical, especially given Mozilla's decision to abandon Mozilla Labs amid a string of failed projects.

\n

Twelve months on, Developer Edition has proved itself a valuable tool, with half a dozen features you won't find anywhere else. Even better, thus far, the development pace shows no signs of stagnating.

", "url": "http://www.theregister.co.uk/2015/11/30/firefox_developer_edition_44/", "pub_date": "2015-11-30T10:09:57", "publisher": 3}}, {"model": "resume.pubitem", "pk": 6, "fields": {"title": "Fedora 23 review: Skip if you want stability, stay to try Linux\u2019s bleeding edge", "slug": "fedora23-ars", "body_markdown": "###New default apps and Xdg support arrive, but stick to RHEL if you want long-term support.\r\n\r\nTwo releases ago, Fedora 21 introduced its namesake project's \"Fedora Next\" plan. The goal was simple\u2014bring the massive, sprawling entity that is Fedora into some neatly organized categories that would clearly define the project's aims. And since Next launched, Fedora has been busy doing just that. The results are impressive, and it feels like the distro has found a renewed sense of purpose.\r\n\r\nFedora.Next design brings some order to the chaos.\r\n\r\nFedora Next's structure is like a series of concentric rings where each ring is supported by the one inside it. At the center are the core components of the system, APIs that applications hook into, and so on. On the outside are the visible layers that users interact with, what Fedora calls \"Environments.\"\r\n\r\nFor the recently unveiled Fedora 23, these Environments consist of Workstation (Desktop), Server, and Cloud. It's the same Environment trio that Fedora offered in its two prior releases. While Cloud still has the feel of an also-ran, the Workstation and Server releases see quite a few new packages. That's especially true for the GNOME-based Workstation, and luckily the changes are largely welcome.\r\nFedora 23 Workstation\r\n\r\nThe biggest change in Fedora 23's default Workstation release comes in the form of GNOME 3.18. But before you get to enjoy what's new in GNOME 3.18, you have to get Fedora installed and do whatever you have to in order to make it through the dreaded Anaconda, Fedora's installation program.\r\n\r\nIn the Fedora 21 review, I gave Anaconda a hard time. 
Its button-based approach felt clunky compared to similar offerings from other distros, and sadly most of those criticisms stand with Fedora 23. For example, it still takes an extra click of the button to create a user account on the desktop when everyone installing Fedora 23 Workstation will need an account\u2014why not just present a screen to create one?\r\n\r\nThe user creation and root password screens hidden away behind buttons.\r\n\r\nTwo things in Fedora 23 make Anaconda a bit more tolerable, though. First, it's better at guessing defaults. For example, it successfully set my timezone and keyboard preferences with no user input at all. (That's one win for the button-based approach since there was no need to click those buttons.) Provided you stick with single partition, the default disk partitioning setup in Fedora 23 also may not require much input on your part. The second change that makes Anaconda a bit better this time is a new orange bar across the bottom, which helps call your attention to any unfinished business you may have in the installer. For example, it happily notified me when I needed to create that user account.\r\n\r\nIt's a marginal improvement over past releases, but ultimately I stand by my assessment. The best you can say about Fedora's installer is that you only have to use it once.\r\n\r\nGNOME 3.18\r\n\r\nOnce you get past Anaconda, Fedora 23 will land you in what might well be one of the nicest, and certainly one of the newest, GNOME desktops around.\r\n\r\nFedora 23 ships with the just-released GNOME 3.18, which is one of the best GNOME releases to date. It offers dozens of new features, better Wayland support, and a new option to update your firmware through GNOME software. But as all GNOME releases seem to, even GNOME 3.18 has a few steps backward.\r\n\r\nThe first thing you'll likely notice when you set up Fedora 23 Workstation is the new Google Drive integration in GNOME 3.18. 
Google Drive joins Facebook and Microsoft in the GNOME online accounts panel (along with what I like to hope is the more popular option for Linux users, ownCloud). The new Google Drive support finally makes all your Google documents into first class citizens on the GNOME desktop.\r\n\r\nGoogle Drive joins ownCloud, Microsoft, and Facebook in the GNOME online accounts dialog. Set up is just a matter of granting GNOME access to your account.\r\n\r\nTo set up Drive, all you need to do is follow the prompts to sign in to Google and authorize GNOME to access your account. In about 10 seconds, you'll have complete access to everything in your Drive within the GNOME Files app (aka, Nautilus). Your Google Drive account is displayed as a network share in the file browser sidebar, and interacting with your Google Drive documents is no different from local documents. You can set your documents to open in any application you like (by default they'll open in the Web editor), and creating new files and folders in Drive is just like it is for ordinary drives. Like the ownCloud integration, Google Drive in GNOME just works.\r\n\r\nInteracting with documents stored in Google Drive is just like interacting with any other file on your machine.\r\n\r\nThere's still no Google Drive client for Linux. For GNOME users anyway, GNOME's integration is good enough that you won't miss it. And if you're not a Google Drive user, the upside is that now that Drive support is done perhaps the GNOME team can move on to integrating other online sync services.\r\n\r\nSupport for Drive isn't the only thing new in the Files app, although it is the only thing that's new and good. The other change, while relatively minor, is yet another step backward for usability in GNOME. The file copy dialog has been moved to a tiny icon at the top right of the file browser window. 
An indicator circle animates large file copy operations, and clicking the icon reveals more details and a dropdown that looks roughly like the file copy dialog you'd see in most other applications. It works quite well enough if you know it's there. Otherwise, well, good luck finding any feedback on what your machine is doing when you drag and drop files.\r\n\r\nIf you know it's there, the new file copy dialog isn't so bad, but it's certainly not easy to discover.\r\n\r\nIf you're backing up, say, your photo folder with many gigabytes of data to an external drive, you might accidentally copy it three or perhaps even four times before you realize it. Despite the total absence of feedback, something is in fact happening. Don't ask me how I found out, just know that you will not suffer the same because now you know\u2014look for the tiny icon. At least GNOME is getting closer to its goal of making the command line look downright discoverable.\r\n\r\nThis release will also send you hunting for your network drives since those no longer appear in the sidebar by default. In the words of GNOME's announcement, this was done to \"reduce clutter.\" But those drives now require an extra click on the new \"Other Locations\" menu item, which will reveal all that unsightly clutter should you actually need to access those cluttered drives.\r\n\r\nNetworked drives, known as 'clutter' in GNOME parlance, are now hidden behind 'Other Locations' in the sidebar.\r\n\r\nThere is one other actual improvement of note in the UI of GNOME 3.18: you can now search by typing in open and save dialogs. (One step forward, two back.) Most of the other big changes in Fedora 23 and GNOME 3.18 are less visible though more welcome.\r\n\r\nFedora has long been an early adopter of Wayland, and Fedora 23 is no different, offering considerably more support than any other distro to date. 
In fact, the Wayland support is getting close enough to feature-complete that it appears Fedora 24 may boot to Wayland by default. By and large you won't notice much difference should you try out Wayland in Fedora 23 (just log out of your current session and select Wayland from the menu that drops down from the gear icon at the lower right side of the login dialog). This lack of noticeable difference is a good thing, since you really shouldn't need to know what your display manager is up to, but there are some new features available if you need them.\r\n\r\nThe most notable thing Wayland can do right now is run DPI-independent monitors. That is, if you have a normal resolution display and something more like a 4K display, Wayland can handle that scenario. Not having a high-res monitor, I haven't been able to test this one, but the GNOME forums are full of success reports. Other new Wayland-specific features include trackpad support for gestures like pinch-to-zoom, twirling to rotate, and four-finger swipes to switch workspaces. All of these gestures were previously available if you had a touchscreen, but they're now available to supported trackpads under Wayland. That said, I wasn't personally able to get them working in Fedora 23.\r\n\r\nFedora 23 does support GNOME 3.18's new \"automatic brightness\" support, which taps your laptop's integrated light sensor to automatically dim and brighten the screen based on the lighting around you. It saves fiddling with the brightness buttons and can help cut down on power use. However, if you're really trying to eke the last bit out of your battery, you'll probably want to disable automatic screen brightness in the power settings since it tends to err on the brighter side. Most of the time, though, this feature works well.\r\n\r\nThere are quite a few notable updates for GNOME's stock applications and two brand new applications\u2014Calendar and GNOME To Do. 
However, possibly the best addition is that GNOME Software now supports firmware updates via fwupd. That means you don't need any proprietary tools or original install DVDs just to update your firmware, provided of course that the firmware you need is available via the Linux Vendor Firmware Service.\r\n\r\nGNOME Software can now update firmware.\r\n\r\nAs a side note for Ubuntu users, take a good look at GNOME software because it's in your future. Canonical has decided to abandon its homegrown software center in favor of GNOME software for Ubuntu 16.04. First Upstart gave way to systemd, then Unity 8 moved to Qt, then the scrollbars went to stock GNOME, and now the Ubuntu Software Center is abandoned in favor of GNOME Software. It makes you wonder about Mir.\r\n\r\nBut let's get back to GNOME 3.18's two new default apps, Calendar and GNOME To Do. The lack of a good GUI calendar app for Linux has always been puzzling. There's Evolution, of course, but until now there hasn't really been a nice, simple stand-alone Calendar app. GNOME Calendar is that app, or rather, it's close to being that app. If you stick with the integrated GNOME online accounts (Google Calendar, ownCloud, etc), Calendar works as expected. Regrettably, I have not been able to get it working with any of my CalDav servers, including my primary calendar which resides on Fastmail's CalDav servers.\r\nEnlarge / GNOME Calendar, simple but functional\u2014provided your online calendar resides in one of GNOME's supported online account options.\r\n\r\nNot to be confused with the older, independent application launcher called GNOME-Do, GNOME To Do is just what the name suggests\u2014a to-do list manager. GNOME To Do is still a \"technical preview\" in GNOME 3.18, but it has most of what you'd want in a task manager application. You can enter new tasks, group them, add colors and priorities, and attach notes. Tasks also integrate and sync with, for example, Gmail's Tasks. 
It was perfectly stable in my testing (including syncing with Gmail), but bear in mind that it is still a preview release. To be safe, you might not want to trust your entire life schedule to it just yet.\r\n\r\nGNOME To Do, a nice, if still experimental, task manager.\r\n\r\nI should also note that while Fedora mentions both new GNOME apps in its release notes, in the case of the live CD I used to install Fedora 23, neither were installed by default. They're both in the repos, but, thanks to GNOME's helpful ability to search for apps not installed, they're easy enough to install on your own.\r\n\r\nThe last big update of note in Fedora 23's GNOME desktop is support for what's likely the biggest change coming soon to the GNOME world: the Xdg project. Xdg is a new effort designed to help developers build and distribute Linux applications. Ultimately, Xdg wants to be a kind of one-package-to-rule-them-all that developers can use to package apps across distros. Xdg will also add some much stricter application sandboxing.\r\n\r\nIn Fedora 23, Xdg is not much more than an outline. None of the apps that ship in Fedora's repos are packaged this way yet, but Xdg does indeed look to be part of the GNOME roadmap. This likely means Fedora will be an early adopter as Xdg expands.\r\n\r\nKernel\r\n\r\nLike the recently released Ubuntu 15.10, Fedora 23 ships with Linux Kernel 4.2. The biggest news in 4.2 is support for recent Radeon GPUs and Intel's new Broxton chips, though let's face it\u2014Fedora running on mobile chips is about as likely as this being the year of the Linux desktop.\r\n\r\nOn the more useful side, there are some new encryption options for ext4 disks and the new live kernel patching features. The encryption features should make using whole disk encryption a bit faster.\r\n\r\nOther under-the-hood changes in Fedora 23 include some improvements for Fedora's new DNF package manager, which replaced Yum a few releases ago (Yum is aliased to DNF now). 
With this release, DNF takes over from fedup, becoming the new way to perform system upgrades. Aside from the welcome unification of purpose\u2014that Fedora had to build a separate tool for system upgrades says something about Yum\u2014DNF's new upgrade support hooks into systemd's support for offline updates and allows you to easily roll back updates if necessary.\r\nServer\r\n\r\nFedora 23 Server includes everything found in the Workstation release (minus the desktop itself) and layers in some great tools for sysadmins, most notably Cockpit. Cockpit is Fedora's effort to bring the tools of the sysadmin into an interface anyone can use. Want to deploy a Docker container? Search for what you want, click install, and you're done.\r\n\r\nCockpit also feels a bit like a covert effort to build a more secure Web since it makes deploying secure servers something that anyone with a bit of Linux experience can figure out. Cockpit is really just a graphical interface layered on top of the often inscrutable tools sysadmins already use. It adds a welcome layer of abstraction. So while Cockpit isn't a substitute for experience, it can point you in the right direction.\r\n\r\nFedora 23 Server beefs up Cockpit security with support for SSH key authentication and the ability to configure user accounts with authorized keys.\r\n\r\nIn this release, Fedora rolekit gains the ability to deploy Server Roles as containerized applications. This allows better isolation of roles from the rest of the system and paves the way for roles to migrate into cloud-based systems like Fedora's Project Atomic.\r\nConclusion\r\n\r\nFedora 23 is such a strong release that it highlights what feels like Fedora's Achilles heel\u2014there's no Long Term Support release.\r\n\r\nIf you want an LTS release in the Red Hat world, it's RHEL you're after (or CentOS and other derivatives). Fedora is a bleeding edge, and as such Fedora 23 will, as always, be supported for 12 months. 
After that time, you'll need to upgrade.\r\n\r\nThe good news is that DNF's new upgrade tools with transactional updates and rollbacks temper the missing LTS release a bit. After all, if updating is simple, and you can roll back if something goes wrong, then there's less risk to updating. Still, what if you do need to rollback because something went wrong? What if that something isn't something you can quickly fix?\r\n\r\nThe lack of an LTS release isn't likely to stop desktop users, but it does make Fedora feel like a riskier bet on the server. In the end, that's probably how Red Hat likes things. If you want stable, RHEL is there. If you want the latest and greatest, Fedora 23 delivers.\r\n", "body_html": "

New default apps and xdg-app support arrive, but stick to RHEL if you want long-term support.

\n

Two releases ago, Fedora 21 introduced its namesake project's \"Fedora Next\" plan. The goal was simple\u2014bring the massive, sprawling entity that is Fedora into some neatly organized categories that would clearly define the project's aims. And since Next launched, Fedora has been busy doing just that. The results are impressive, and it feels like the distro has found a renewed sense of purpose.

\n

Fedora.Next design brings some order to the chaos.

\n

Fedora Next's structure is like a series of concentric rings where each ring is supported by the one inside it. At the center are the core components of the system, APIs that applications hook into, and so on. On the outside are the visible layers that users interact with, what Fedora calls \"Environments.\"

\n

For the recently unveiled Fedora 23, these Environments consist of Workstation (Desktop), Server, and Cloud. It's the same Environment trio that Fedora offered in its two prior releases. While Cloud still has the feel of an also-ran, the Workstation and Server releases see quite a few new packages. That's especially true for the GNOME-based Workstation, and luckily the changes are largely welcome.\nFedora 23 Workstation

\n

The biggest change in Fedora 23's default Workstation release comes in the form of GNOME 3.18. But before you get to enjoy what's new in GNOME 3.18, you have to get Fedora installed and do whatever you have to in order to make it through the dreaded Anaconda, Fedora's installation program.

\n

In the Fedora 21 review, I gave Anaconda a hard time. Its button-based approach felt clunky compared to similar offerings from other distros, and sadly most of those criticisms stand with Fedora 23. For example, it still takes an extra click of the button to create a user account on the desktop when everyone installing Fedora 23 Workstation will need an account\u2014why not just present a screen to create one?

\n

The user creation and root password screens hidden away behind buttons.

\n

Two things in Fedora 23 make Anaconda a bit more tolerable, though. First, it's better at guessing defaults. For example, it successfully set my timezone and keyboard preferences with no user input at all. (That's one win for the button-based approach since there was no need to click those buttons.) Provided you stick with a single partition, the default disk partitioning setup in Fedora 23 may not require much input on your part either. The second change that makes Anaconda a bit better this time is a new orange bar across the bottom, which helps call your attention to any unfinished business you may have in the installer. For example, it happily notified me when I needed to create that user account.

\n

It's a marginal improvement over past releases, but ultimately I stand by my assessment. The best you can say about Fedora's installer is that you only have to use it once.

\n

GNOME 3.18

\n

Once you get past Anaconda, Fedora 23 will land you in what might well be one of the nicest, and certainly one of the newest, GNOME desktops around.

\n

Fedora 23 ships with the just-released GNOME 3.18, which is one of the best GNOME releases to date. It offers dozens of new features, better Wayland support, and a new option to update your firmware through GNOME Software. But as all GNOME releases seem to, even GNOME 3.18 has a few steps backward.

\n

The first thing you'll likely notice when you set up Fedora 23 Workstation is the new Google Drive integration in GNOME 3.18. Google Drive joins Facebook and Microsoft in the GNOME online accounts panel (along with what I like to hope is the more popular option for Linux users, ownCloud). The new Google Drive support finally makes all your Google documents into first class citizens on the GNOME desktop.

\n

Google Drive joins ownCloud, Microsoft, and Facebook in the GNOME online accounts dialog. Set up is just a matter of granting GNOME access to your account.

\n

To set up Drive, all you need to do is follow the prompts to sign in to Google and authorize GNOME to access your account. In about 10 seconds, you'll have complete access to everything in your Drive within the GNOME Files app (aka, Nautilus). Your Google Drive account is displayed as a network share in the file browser sidebar, and interacting with your Google Drive documents is no different from local documents. You can set your documents to open in any application you like (by default they'll open in the Web editor), and creating new files and folders in Drive is just like it is for ordinary drives. Like the ownCloud integration, Google Drive in GNOME just works.
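
\n

Under the hood, GNOME exposes the Drive account through GVFS, so it's reachable from a terminal too. A quick sketch, assuming a logged-in GNOME session (the exact mount name depends on your account):

```shell
# GVFS exposes online-account mounts under the per-user runtime dir.
ls /run/user/$(id -u)/gvfs/
# A Drive mount shows up with a name along the lines of
# "google-drive:host=gmail.com,user=yourname" (varies by account).
```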

\n

Interacting with documents stored in Google Drive is just like interacting with any other file on your machine.

\n

There's still no official Google Drive client for Linux, but for GNOME users anyway, GNOME's integration is good enough that you won't miss it. And if you're not a Google Drive user, the upside is that now that Drive support is done, perhaps the GNOME team can move on to integrating other online sync services.

\n

Support for Drive isn't the only thing new in the Files app, although it is the only thing that's new and good. The other change, while relatively minor, is yet another step backward for usability in GNOME. The file copy dialog has been moved to a tiny icon at the top right of the file browser window. An indicator circle animates large file copy operations, and clicking the icon reveals more details and a dropdown that looks roughly like the file copy dialog you'd see in most other applications. It works well enough if you know it's there. Otherwise, well, good luck finding any feedback on what your machine is doing when you drag and drop files.

\n

If you know it's there, the new file copy dialog isn't so bad, but it's certainly not easy to discover.

\n

If you're backing up, say, your photo folder with many gigabytes of data to an external drive, you might accidentally copy it three or perhaps even four times before you realize it. Despite the total absence of feedback, something is in fact happening. Don't ask me how I found out, just know that you will not suffer the same because now you know\u2014look for the tiny icon. At least GNOME is getting closer to its goal of making the command line look downright discoverable.

\n

This release will also send you hunting for your network drives since those no longer appear in the sidebar by default. In the words of GNOME's announcement, this was done to \"reduce clutter.\" But those drives now require an extra click on the new \"Other Locations\" menu item, which will reveal all that unsightly clutter should you actually need to access those cluttered drives.

\n

Networked drives, known as 'clutter' in GNOME parlance, are now hidden behind 'Other Locations' in the sidebar.

\n

There is one other actual improvement of note in the UI of GNOME 3.18: you can now search by typing in open and save dialogs. (One step forward, two back.) Most of the other big changes in Fedora 23 and GNOME 3.18 are less visible though more welcome.

\n

Fedora has long been an early adopter of Wayland, and Fedora 23 is no different, offering considerably more support than any other distro to date. In fact, the Wayland support is getting close enough to feature-complete that it appears Fedora 24 may boot to Wayland by default. By and large you won't notice much difference should you try out Wayland in Fedora 23 (just log out of your current session and select Wayland from the menu that drops down from the gear icon at the lower right side of the login dialog). This lack of noticeable difference is a good thing, since you really shouldn't need to know what your display server is up to, but there are some new features available if you need them.
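
\n

If you want to confirm which session type you actually got, logind can tell you; a quick check from a terminal (this assumes a systemd/logind session, which Fedora uses):

```shell
# logind records the session type in the environment:
echo "$XDG_SESSION_TYPE"    # "wayland" under Wayland, "x11" under Xorg
# Or query logind directly about the current session:
loginctl show-session "$XDG_SESSION_ID" -p Type
```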

\n

The most notable thing Wayland can do right now is run DPI-independent monitors. That is, if you have a normal resolution display and something more like a 4K display, Wayland can handle that scenario. Not having a high-res monitor, I haven't been able to test this one, but the GNOME forums are full of success reports. Other new Wayland-specific features include trackpad support for gestures like pinch-to-zoom, twirling to rotate, and four-finger swipes to switch workspaces. All of these gestures were previously available if you had a touchscreen, but they're now available to supported trackpads under Wayland. That said, I wasn't personally able to get them working in Fedora 23.

\n

Fedora 23 does support GNOME 3.18's new \"automatic brightness\" support, which taps your laptop's integrated light sensor to automatically dim and brighten the screen based on the lighting around you. It saves fiddling with the brightness buttons and can help cut down on power use. However, if you're really trying to eke the last bit out of your battery, you'll probably want to disable automatic screen brightness in the power settings since it tends to err on the brighter side. Most of the time, though, this feature works well.
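
\n

The toggle also lives in gsettings if you'd rather script it. A sketch assuming GNOME 3.18's schema layout (the key has moved between GNOME releases, so check with `gsettings list-keys` first):

```shell
# Turn off ambient-light-based automatic brightness (GNOME 3.18 key).
gsettings set org.gnome.settings-daemon.plugins.power ambient-enabled false
# Verify the current value:
gsettings get org.gnome.settings-daemon.plugins.power ambient-enabled
```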

\n

There are quite a few notable updates for GNOME's stock applications and two brand new applications\u2014Calendar and GNOME To Do. However, possibly the best addition is that GNOME Software now supports firmware updates via fwupd. That means you don't need any proprietary tools or original install DVDs just to update your firmware, provided of course that the firmware you need is available via the Linux Vendor Firmware Service.
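
\n

The same mechanism is scriptable through fwupd's command-line client; a typical cycle looks something like this (what's actually updatable depends on what vendors have published to the LVFS):

```shell
# Refresh firmware metadata from the Linux Vendor Firmware Service,
# see which devices have updates, then apply them.
fwupdmgr refresh
fwupdmgr get-updates
fwupdmgr update
```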

\n

GNOME Software can now update firmware.

\n

As a side note for Ubuntu users, take a good look at GNOME Software because it's in your future. Canonical has decided to abandon its homegrown software center in favor of GNOME Software for Ubuntu 16.04. First Upstart gave way to systemd, then Unity 8 moved to Qt, then the scrollbars went to stock GNOME, and now the Ubuntu Software Center is abandoned in favor of GNOME Software. It makes you wonder about Mir.

\n

But let's get back to GNOME 3.18's two new default apps, Calendar and GNOME To Do. The lack of a good GUI calendar app for Linux has always been puzzling. There's Evolution, of course, but until now there hasn't really been a nice, simple stand-alone Calendar app. GNOME Calendar is that app, or rather, it's close to being that app. If you stick with the integrated GNOME online accounts (Google Calendar, ownCloud, etc.), Calendar works as expected. Regrettably, I have not been able to get it working with any of my CalDAV servers, including my primary calendar, which resides on Fastmail's CalDAV servers.\nGNOME Calendar, simple but functional\u2014provided your online calendar resides in one of GNOME's supported online account options.

\n

Not to be confused with the older, independent application launcher called GNOME-Do, GNOME To Do is just what the name suggests\u2014a to-do list manager. GNOME To Do is still a \"technical preview\" in GNOME 3.18, but it has most of what you'd want in a task manager application. You can enter new tasks, group them, add colors and priorities, and attach notes. Tasks also integrate and sync with, for example, Gmail's Tasks. It was perfectly stable in my testing (including syncing with Gmail), but bear in mind that it is still a preview release. To be safe, you might not want to trust your entire life schedule to it just yet.

\n

GNOME To Do, a nice, if still experimental, task manager.

\n

I should also note that while Fedora mentions both new GNOME apps in its release notes, in the case of the live CD I used to install Fedora 23, neither was installed by default. They're both in the repos, and thanks to GNOME Software's helpful ability to search for apps that aren't yet installed, they're easy enough to install on your own.
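
\n

They can also be pulled in with DNF directly; the package names here are as shipped in the Fedora 23 repos (they may differ in later releases):

```shell
# Install GNOME 3.18's two new apps from the Fedora repos.
sudo dnf install gnome-calendar gnome-todo
```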

\n

The last big update of note in Fedora 23's GNOME desktop is support for what's likely the biggest change coming soon to the GNOME world: the xdg-app project. Xdg-app is a new effort designed to help developers build and distribute Linux applications. Ultimately, xdg-app wants to be a kind of one-package-to-rule-them-all format that developers can use to package apps across distros. Xdg-app will also add much stricter application sandboxing.

\n

In Fedora 23, xdg-app is not much more than an outline. None of the apps that ship in Fedora's repos are packaged this way yet, but xdg-app does indeed look to be part of the GNOME roadmap. This likely means Fedora will be an early adopter as xdg-app expands.

\n

Kernel

\n

Like the recently released Ubuntu 15.10, Fedora 23 ships with Linux Kernel 4.2. The biggest news in 4.2 is support for recent Radeon GPUs and Intel's new Broxton chips, though let's face it\u2014Fedora running on mobile chips is about as likely as this being the year of the Linux desktop.

\n

On the more useful side, there are some new encryption options for ext4 disks and the new live kernel patching features. The encryption features should make using whole disk encryption a bit faster.

\n

Other under-the-hood changes in Fedora 23 include some improvements for Fedora's new DNF package manager, which replaced Yum a few releases ago (Yum is aliased to DNF now). With this release, DNF takes over from fedup, becoming the new way to perform system upgrades. Aside from the welcome unification of purpose\u2014that Fedora had to build a separate tool for system upgrades says something about Yum\u2014DNF's new upgrade support hooks into systemd's support for offline updates and allows you to easily roll back updates if necessary.\nServer

\n

Fedora 23 Server includes everything found in the Workstation release (minus the desktop itself) and layers in some great tools for sysadmins, most notably Cockpit. Cockpit is Fedora's effort to bring the tools of the sysadmin into an interface anyone can use. Want to deploy a Docker container? Search for what you want, click install, and you're done.
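
\n

Getting Cockpit up on a stock install is only a couple of commands; it serves its Web interface over HTTPS on port 9090 by default:

```shell
# Install Cockpit and enable its socket-activated service, then
# browse to https://<server>:9090 and log in with a system account.
sudo dnf install cockpit
sudo systemctl enable --now cockpit.socket
```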

\n

Cockpit also feels a bit like a covert effort to build a more secure Web since it makes deploying secure servers something that anyone with a bit of Linux experience can figure out. Cockpit is really just a graphical interface layered on top of the often inscrutable tools sysadmins already use. It adds a welcome layer of abstraction. So while Cockpit isn't a substitute for experience, it can point you in the right direction.

\n

Fedora 23 Server beefs up Cockpit security with support for SSH key authentication and the ability to configure user accounts with authorized keys.

\n

In this release, Fedora rolekit gains the ability to deploy Server Roles as containerized applications. This allows better isolation of roles from the rest of the system and paves the way for roles to migrate into cloud-based systems like Fedora's Project Atomic.\nConclusion

\n

Fedora 23 is such a strong release that it highlights what feels like Fedora's Achilles heel\u2014there's no Long Term Support release.

\n

If you want an LTS release in the Red Hat world, it's RHEL you're after (or CentOS and other derivatives). Fedora is the bleeding edge, and as such Fedora 23 will, as always, be supported for 12 months. After that time, you'll need to upgrade.
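
\n

That upgrade now runs through DNF itself. As a sketch, moving from Fedora 23 to the next release looks like this (the plugin package name is as shipped for Fedora 23):

```shell
# Install the system-upgrade plugin, download the target release,
# then reboot into the offline-update environment to apply it.
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=24
sudo dnf system-upgrade reboot
```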

\n

The good news is that DNF's new upgrade tools, with transactional updates and rollbacks, temper the missing LTS release a bit. After all, if updating is simple and you can roll back if something goes wrong, then there's less risk in updating. Still, what if you do need to roll back because something went wrong? What if that something isn't something you can quickly fix?
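
\n

Rollbacks work at the transaction level; in practice that looks like this (whether a given transaction undoes cleanly depends on the packages involved):

```shell
# List recent package transactions, then undo the most recent one.
sudo dnf history list
sudo dnf history undo last
```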

\n

The lack of an LTS release isn't likely to stop desktop users, but it does make Fedora feel like a riskier bet on the server. In the end, that's probably how Red Hat likes things. If you want stable, RHEL is there. If you want the latest and greatest, Fedora 23 delivers.

", "url": "http://arstechnica.com/information-technology/2015/12/fedora-23-review-skip-if-you-want-stability-stay-to-try-linuxs-bleeding-edge/", "pub_date": "2015-12-01T13:47:11", "publisher": 1}}]