Website Speed Test Optimization: GTmetrix, PageSpeed Insights & Lighthouse

by Karl Schellenberg

At 3magine we are obsessed with delivering products that perform at their best. From user experience and website design to web development best practices, accessibility, on-site optimization, and website performance, we make sure to put our best effort forward.

In this article we’ll show you how we achieved a top A (100%) grade in GTmetrix and PageSpeed Insights, as well as a perfect score of 100 in Google DevTools’ Lighthouse.


“Lighthouse is an open-source, automated tool for improving the quality of web pages. ... It has audits for performance, accessibility, progressive web apps, and more. You can run Lighthouse in Chrome DevTools, from the command line, or as a Node module.”

It is of utmost importance that your website loads as fast as possible, regardless of the visitor’s connection speed, because Google prioritizes faster-loading websites and rewards them with higher organic search rankings.

In turn, this results in higher user engagement, more page views, and improved conversions.

Website visitors care about how fast a site loads more than almost anything else, and if your website loads slowly you WILL experience a higher visitor drop-off rate.

In this article, we’ll go through some of the methods we’ve used to optimize the new 3magine website to load instantly on every connection and device and achieve the highest possible speed optimization score.

Remove Render-Blocking Resources

The first and probably the most complicated task is to “Eliminate render-blocking resources”.

When you visit a website, your browser initially shows only the top of the page; the rest is hidden “below the fold”, meaning you have to scroll to see it. Eliminating render-blocking resources means prioritizing the styles and scripts responsible for rendering the visible part of the page.


Modern websites typically utilize five types of technologies/resources:

HTML: code that defines the structure of the website.

CSS: code that defines how your website looks, including colors, fonts, font sizes, etc.

Images: photos, illustrations and other graphical elements.

Fonts: custom fonts for beautiful typography.

JavaScript: code that allows for dynamic and interactive web pages.

The challenge of eliminating render-blocking resources is to extract from the CSS only the rules responsible for rendering the “above-the-fold” content.

When optimizing 3magine we used Node.js and the open-source Critical package. It takes a bit of time to configure but allows for automatic extraction of the critical CSS.

Here’s a simplified version of a code example we used to achieve this:

const path = require('path');
const critical = require('critical');

// Resolve the project root from the current working directory.
const projectBasePath = path.resolve(process.env.PWD);

critical.generate({
    base: projectBasePath,
    src: 'index.html',
    // Viewport dimensions used to decide what counts as "above the fold".
    width: 1300,
    height: 900,
    // Inline the critical CSS directly into the HTML.
    inline: true,
    target: {
        html: path.resolve(projectBasePath, 'index.html'),
        // Write the remaining (non-critical) styles to a separate file.
        uncritical: path.resolve(projectBasePath, 'css/app.css'.replace('.css', '-uncritical.css'))
    },
    minify: true,
    // Remove the inlined critical rules from the uncritical stylesheet.
    extract: true,
    // Keep fonts out of the critical CSS so they don't block first paint.
    ignore: {
        atrule: ['@font-face'],
        rule: [/Body__font/],
    }
}).then(({ css, html, uncritical }) => {
    console.log('Critical Complete.');
});
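
With “inline: true”, Critical inlines the extracted styles into the page head and rewrites the stylesheet link to load asynchronously. The resulting HTML ends up looking roughly like this (file names follow the configuration above):

<head>
    <!-- Critical, above-the-fold styles inlined directly into the page -->
    <style>/* ...extracted critical rules... */</style>
    <!-- Non-critical styles loaded asynchronously via the preload pattern -->
    <link rel="preload" href="css/app-uncritical.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
    <noscript><link rel="stylesheet" href="css/app-uncritical.css"></noscript>
</head>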

One of the important configuration steps when extracting the critical CSS was to exclude fonts from the extraction process, deferring their download so the website gets a chance to render as soon as possible. The key here was to wrap the HTML content in a “div.Body__font” element and move the “font-family” CSS declaration from “body” to it:

<body>
  <div class="Body__font">
      content...
  </div>
</body>
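
The matching CSS is a one-line move (a minimal sketch; the font name here is just a placeholder). Because the “ignore” rule in the Critical configuration above excludes this selector, the custom font no longer blocks the first paint:

/* Before: a custom font-family on "body" would be pulled into the critical CSS. */
/* After: it lives on .Body__font, which Critical is configured to ignore. */
.Body__font {
  font-family: 'CustomWebFont', sans-serif; /* placeholder font name */
}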

The second important item for a fast-loading website is “Defer offscreen images”. When the browser finishes reading the HTML code, it finds all the images and starts downloading them. This prevents the browser from downloading other essential items needed to display the website as soon as possible. To counter that, we use a technique called lazy-loading: an image only gets loaded when it becomes visible in the browser’s viewport. The tricky part is to preserve the image’s dimensions and position on all screen sizes before the image loads. Without that, you would see a lot of content jumping around as images start to load.

The browser does not know the dimensions of an image until it loads, unless they are explicitly specified as attributes on the image tag. But even then, that does not help in displaying the same image correctly on different screen sizes. The trick we use is to wrap each image in two container elements: one responsible for position and width, the other responsible for maintaining the image’s aspect ratio. Here’s a code example:

<div class="Img__pos">
    <div class="Img__size">
        <picture>
            <source type="image/webp" data-srcset="img/final-photo@2x.webp 2x, img/final-photo@3x.webp 3x" />
            <img class="lazy" src="img/preload-photo.svg" data-src="img/final-photo.png" data-srcset="img/final-photo@2x.png 2x, img/final-photo@3x.png 3x" alt="..."/>
        </picture>
    </div>
</div>
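
The markup alone doesn’t load anything; a small script swaps the placeholder for the real image when it nears the viewport. Here’s a minimal sketch using IntersectionObserver (not our exact production loader), assuming the “lazy” class and “data-src”/“data-srcset” attributes shown above:

// Swap in the real image sources once an image approaches the viewport.
const lazyImages = document.querySelectorAll('img.lazy');

const observer = new IntersectionObserver((entries, obs) => {
    entries.forEach((entry) => {
        if (!entry.isIntersecting) return;
        const img = entry.target;
        // Promote the data-* attributes to real ones so the browser starts loading.
        const picture = img.closest('picture');
        if (picture) {
            picture.querySelectorAll('source[data-srcset]').forEach((source) => {
                source.srcset = source.dataset.srcset;
            });
        }
        if (img.dataset.srcset) img.srcset = img.dataset.srcset;
        img.src = img.dataset.src;
        img.classList.remove('lazy');
        obs.unobserve(img); // each image only needs to be upgraded once
    });
}, { rootMargin: '200px' }); // begin loading shortly before the image scrolls into view

lazyImages.forEach((img) => observer.observe(img));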

The “img” element is set to “position: absolute;” with “height” and “width” set to 100%. “.Img__size” has its height set to zero and its bottom padding set to the image’s height-to-width ratio as a percentage: (height / width × 100)%. So if the image dimensions are 400x300, the bottom padding is 75% ((300 / 400) × 100). Then “.Img__pos” can be resized and repositioned to anything and it will always preserve the image’s aspect ratio and position. Here’s an example of the styles we’ve used:

.Img__pos {
  max-width: 720px;
  margin-left: auto;
  margin-right: auto;
}
.Img__size {
  position: relative;
  /* Reserve the image's space before it loads: */
  height: 0;
  padding-bottom: 75%; /* (300 / 400) * 100 for a 400x300 image */
  img {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
  }
}

Also, for all the lazy-loaded images, we set the initial “src” attribute to a single optimized, animated SVG asset that is displayed to visitors while the image loads.

Another item related to image performance is “Serve Images in Next-Gen Formats”. We use the “webp” image format for browsers that support it. WebP provides superior lossless and lossy compression for images on the web, which allows us to achieve lower image file sizes and improved image quality with fewer artifacts. On average, we were able to save 70-75% in file size, which makes a big difference in how fast these images load.

On macOS, you can convert your images to “webp” format using the command-line tool cwebp. Here is our simple toWebP shell script with the optimal configuration we use to do just that.
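
At its core, the conversion boils down to a cwebp call like the following (the quality value and file names are illustrative, not the script’s exact configuration):

# Convert a PNG to WebP at quality 80 (0-100, higher = better quality).
cwebp -q 80 img/final-photo.png -o img/final-photo.webp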

Next up is “Defer unused CSS”. Most of the time when building websites, we utilize CSS libraries that were previously created and tested by thousands of other developers. This saves a ton of development time, but it comes at the cost of extra CSS file size from the parts of those libraries that are never used. To address this we use Purgecss, which essentially strips all unused CSS and leaves only what is needed. One thing to watch out for: make sure to whitelist any CSS classes that are used by JavaScript, as Purgecss is not aware of them, and removing them may cause issues with the website. On the 3magine website, we were able to reduce the CSS assets by 40%, and it all adds up in the end :). Here’s a code example:

const fs = require('fs');
const path = require('path');
const Purgecss = require('purgecss');

// Resolve the project root from the current working directory.
const projectBasePath = path.resolve(process.env.PWD);

const purgeCssObj = new Purgecss({
    // Scan the HTML for selectors that are actually used.
    content: [path.resolve(projectBasePath, 'index.html')],
    css: [path.resolve(projectBasePath, 'css/*.css')],
    // Whitelist classes that are only added by JavaScript at runtime,
    // since Purgecss cannot see them in the static HTML.
    whitelist: ['loaded', 'loading', 'fl-form', 'Submitted', 'fl-style-2',
        'fl-wrap', 'fl-wrap-input', 'fl-wrap-select',
        'fl-has-focus', 'fl-is-active',
        'fl-label', 'fl-input', 'fl-textarea', 'fl-select', 'fl-is-required', 'bouncy', 'is-bouncy',
        'svg', 'path', 'input', 'textarea', 'select',
        'Mobile__Nav--open', 'is-active',
        '-pre-pinned', '-is-pinned', '-passed-hero',
        'hamburger--slider', 'hamburger-inner'
    ]
});

// Overwrite each stylesheet with its purged version.
const result = purgeCssObj.purge();
for (const entry of result) {
    fs.writeFile(entry.file, entry.css, (err) => {
        if (err) {
            console.log('Error writing to file: ', err);
        }
    });
}

Aside from the methods described above, we also implemented many other speed optimizations that are not as complicated or tricky, but are still important. There are so many of them that I’ll only list a few worth mentioning.


Minify CSS & JavaScript. This process eliminates all the comments and unnecessary whitespace from the code, which in the case of our website decreased the file sizes by another 30%.
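
For example, with the terser and cssnano command-line tools (the tool choice here is illustrative; any minifier achieves the same result):

# Minify JavaScript: strip comments/whitespace, shorten variable names.
npx terser js/app.js --compress --mangle -o js/app.min.js

# Minify CSS with cssnano via the PostCSS CLI (requires cssnano installed).
npx postcss css/app.css --use cssnano -o css/app.min.css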

Use an efficient cache policy on static assets. We configure the web server to tell the browser to save a copy of all static assets for an optimal amount of time. When the visitor returns, the website loads instantly because the browser does not request any of these assets from the server and uses its local copy instead.
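
As a sketch, here is what such a policy can look like in an Nginx configuration (the server software and cache lifetime are illustrative; the article doesn’t specify 3magine’s exact setup):

# Cache static assets aggressively in the browser.
location ~* \.(css|js|png|webp|svg|woff2)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}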

Use a Content Delivery Network (CDN). A CDN is a group of web servers strategically distributed around the globe that host a copy of your website and serve it to your visitors. This decreases the distance between your visitors and your website, increasing load speed. For our website, we use the Cloudflare CDN.

Lastly, if you are not a developer and don’t want to dig into Chrome DevTools, you can use a free tool like PageSpeed Insights to measure the performance of your website.

If you have a project that needs special web speed optimization attention, let us know and we’ll be glad to help you get it up to 100%.

Karl Schellenberg is a full stack web developer and Technical Director at 3magine.