Performance is a feature: speeding up Fusion.net

Attention and immediacy are critical in digital media.

As the team building out Fusion.net, our primary job is making the on-site experience enjoyable. In a mobile world, that ultimately comes down to speed. For Fusion’s journalistic work to be most effective, our audience has to be able to read and share the content they want quickly. Otherwise, it doesn’t spread.

For the past couple of weeks, we’ve focused on making this site faster. We wanted to make the experience of opening our links — especially on mobile devices — as painless and seamless as possible; to make it so that when you see a lil’ Fusion infusion in your newsfeed, you don’t hesitate to tap that story.

Here’s how we decided to move the needle.

Responsive images: We’re happy to say that we are now following the WHATWG spec for responsive images, using the srcset attribute on our images. This is a purely browser-native syntax for specifying alternate image sources to load based on the browser’s viewport dimensions. With this improvement, mobile users load only images pre-sized for their device, while desktop users can still appreciate all the work our art department puts into making huge animated hero images for posts.
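For illustration, a responsive hero image might be marked up something like this (the file names, widths, and sizes value below are placeholders, not our actual image crops):

    <img src="hero-640.jpg"
         srcset="hero-640.jpg 640w,
                 hero-1280.jpg 1280w,
                 hero-2560.jpg 2560w"
         sizes="(max-width: 640px) 100vw, 640px"
         alt="Hero image for the post">

The browser reads the width descriptors and the sizes hint and picks the smallest candidate that will still look sharp for the current viewport and pixel density.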

Because this solution doesn’t depend on Javascript, it also means that privacy-conscious visitors can read our posts while running Ghostery or AdBlock and still see the artwork. (But really, please don’t use ad blockers here unless you need to… our advertisers allow us to continue publishing!)

We’ve also been monitoring the size and format of all of our images. Large animated gifs, or resolution-critical images saved in lossless formats like .png, eat up our bandwidth budget very quickly, and the need to improve page speed always has to be balanced against the goal of eye-catching artwork to illustrate our stories. We’ve begun a discussion with our art department about these issues and are exploring possible solutions.

Preventing browser repaint events: Through profiling, we found that the fixed-position elements on our site (the sticky header at the top, and the forward/backward links on the left and right) were causing extremely heavy CPU load in some browsers when scrolling through the page. The issue is caused by Webkit (and other browsers) “doing a union of damaged regions” to repaint on scrolling. Because the header and the left/right navigation are on the same compositing layer and the browser thinks they need to be handled together, it repaints a single rectangle containing all three of those elements (an area almost the size of the entire viewport!) on every scroll event. We found that adding the otherwise-meaningless backface-visibility: hidden property to these elements forced the browser to treat them separately, by marking them as having 3D transforms applied.
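The fix itself is tiny; here is a minimal sketch in CSS, using illustrative selector names rather than our actual classes:

    /* Illustrative selectors, not our production class names.
       backface-visibility: hidden promotes each fixed element onto its own
       compositing layer, so scrolling no longer repaints one big rectangle
       covering all of them. */
    .site-header,
    .post-nav-prev,
    .post-nav-next {
        -webkit-backface-visibility: hidden; /* older Webkit builds */
        backface-visibility: hidden;
    }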

Before setting backface-visibility: hidden. Note the spikes in painting time, which were pushing our performance well below the 60fps budget while scrolling.

After setting backface-visibility: hidden. Note the much smaller spikes in paint time.

Improving liveblog load performance: Some of the most compelling content our Soccer Gods writers produce is their match-time liveblogs, which are a solidly curated mix of commentary, videos, pictures, and discussion from around Twitter and elsewhere. We’ve been using WordPress.com VIP’s Liveblog Add-On, which is conveniently available as an open source plugin. One weakness of this plugin is that, because it has no pagination features, long liveblogs with a large number of embeds can take a while to load. Joining a liveblog in progress in the second half of a match meant waiting up to ten seconds for the page to become responsive while scripts from Twitter and Vine marked up content all over the page.

We improved performance here by instituting lazy-loading on all liveblogs. When you visit a liveblog page, only enough entries to fill one screen are fetched at first; additional entries are processed as system resources become available, using the window.requestAnimationFrame method, which works in most modern browsers (in other browsers we fall back to setTimeout, which does a similar thing). Our work is available as a plugin on GitHub and will be released in the WordPress.org plugin repository soon.
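Stripped down, the approach looks something like the function below; the entries array and renderEntry callback are placeholders for the plugin’s own data and rendering code, not its actual API:

    // Sketch only: render one liveblog entry per animation frame, so the
    // main thread stays responsive while the backlog is worked through.
    // `entries` and `renderEntry` are hypothetical placeholders.
    function lazyRenderEntries(entries, renderEntry) {
        var queue = entries.slice(); // entries not yet on the page

        var scheduleNext = window.requestAnimationFrame
            ? window.requestAnimationFrame.bind(window)
            : function (callback) { setTimeout(callback, 16); }; // ~60fps fallback

        function processNextEntry() {
            if (queue.length === 0) {
                return; // everything is rendered
            }
            renderEntry(queue.shift()); // append one entry to the liveblog DOM
            scheduleNext(processNextEntry);
        }

        scheduleNext(processNextEntry);
    }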

Loading scripts and iframes asynchronously: Web page rendering is a long process with many milestones: when the first content is received, when the text is visible, when styling is rendered, and finally when page load is complete. A typical interactive page depends on the window’s onload event to indicate that all content is loaded and it’s safe to begin marking it up and attaching handlers to it with Javascript. The problem is that a window is not considered loaded until ALL content on it is loaded, and that includes all the social widgets, ads, trackers, and experiments which are not necessary to read an article. We started by analysing our third-party scripts: removing a few tracking scripts which were not necessary, and moving other scripts to load asynchronously.
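For the scripts that stayed, asynchronous loading can be as simple as injecting the script tag dynamically; a sketch, with a placeholder URL standing in for the real third-party script:

    // Placeholder URL, not one of our actual third-party scripts.
    // An async, dynamically injected script doesn't block HTML parsing
    // or rendering while it downloads and executes.
    var script = document.createElement('script');
    script.async = true;
    script.src = 'https://example.com/third-party-widget.js';
    document.head.appendChild(script);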

Most of the ads we display are served as iframes, which, compared to other techniques, are really easy to embed on the page and keep styling and context isolated, so that the ads aren’t affected by our stylesheet, and so that styling from the ads doesn’t break our page, which would be worse. The downside of iframes, however, is that they block the page’s `onload` event while an entirely new DOM context is built, set up, and rendered. We switched to using dynamic asynchronous iframes: replacing the <iframe> tag with a Javascript function which creates the iframe element and assigns its source dynamically, so that it loads once the browser has finished rendering the current page.
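A minimal sketch of that pattern, with a placeholder ad URL, dimensions, and container id:

    // Placeholder URL, dimensions, and container id; illustrative only.
    // Creating the iframe after our own page has loaded keeps the ad's
    // document from holding up our onload event.
    window.addEventListener('load', function () {
        var frame = document.createElement('iframe');
        frame.src = 'https://ads.example.com/slot';
        frame.width = '300';
        frame.height = '250';
        frame.setAttribute('frameborder', '0');
        document.getElementById('ad-slot').appendChild(frame);
    });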


Between these changes and a few smaller fixes, we’ve made solid performance gains that make fusion.net substantially more pleasant to visit and read. Some takeaways we found:

  • Look at the whole process, from the CMS to the browser. You’ll never know what’s hanging up performance until you measure it all. Some of the biggest wins we found were back-end improvements to query functions and cacheability, others were CSS and Javascript hacks, and some fell in between. There’s no single place to start looking: performance has to be part of an overall development strategy.
  • Profile continuously. We’ve begun to introduce performance budgets, and are experimenting with monitoring tools like Speedcurve to make sure that we keep within our “budget” for page load time, asset weight, and more. But each component also needs to be monitored in more detail, so we’re working on choosing the best way to do that.
  • Question everything. There are a lot of performance-related “best practices” and commonly held wisdom that might not apply in your case. Your site’s performance bottleneck today might not be the same as it was in the past, or as it will be in the future. Before making any changes, make a hypothesis and test it by measuring and comparing relevant performance metrics.

 
