How to Get the Perfect Core Web Vitals Score and Its Correlations With SEO Results

Written by aleh | Published 2022/10/05
Tech Story Tags: core-web-vitals | pagespeed | seo | web-development | website-performance | web-performance | performance-monitoring | seo-optimization

TL;DR: At SEO PowerSuite, we struggled with declining organic traffic in 2021 and suspected that it could be related to poor Core Web Vitals scores. Here’s our little case study on how we fixed all types of performance issues, ended up with green scores, and what correlations we’ve observed so far.

Google introduced Core Web Vitals as a ranking factor in 2021. It became clear that these new metrics would greatly affect sites’ positions, so our team decided not to beat around the bush and started our page experience optimization journey.

Here I’m going to tell you how it started, and what results we have achieved.

What are Core Web Vitals?

Core Web Vitals, or CWV, are special metrics Google uses to evaluate the quality of a page’s performance. These metrics are the Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS).

The LCP metric evaluates the loading speed of a page. Basically, it measures the time it takes for the largest image or text block within the default viewport to become visible from the moment a page starts loading. Currently, Google’s benchmark for a good LCP stands at under 2.5 s.


The FID metric evaluates page responsiveness and interactivity. It measures the time it takes for a page to react to the user's first action (click, tap, or key press). An FID is considered good when it is less than 100 ms.


The CLS metric measures the visual stability of a page. If a webpage has any elements that shift while the page loads, there’s going to be a poor CLS score. A good CLS score should be equal to or less than 0.1.
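By the way, if you want to see what real users experience on your own pages, Google’s open-source web-vitals library reports all three metrics right in the browser console. A minimal sketch:

<script type="module">
  // Logs each metric object to the console as soon as it becomes available.
  import {onCLS, onFID, onLCP} from "https://unpkg.com/web-vitals@3?module";
  onLCP(console.log);
  onFID(console.log);
  onCLS(console.log);
</script>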


Site performance before CWV optimization

Before we actually started working on Core Web Vitals, we turned to Google Search Console to check how our site, https://www.link-assistant.com, was doing without any optimization procedures.

Well, the results left much to be desired, to put it mildly:


Then we checked our site with WebSite Auditor to see how the separate metrics of our pages were doing and what we should improve to make performance satisfactory. Just like in Google Search Console, the results of the bulk check were kind of a disappointment; still, we got some understanding of which aspects we should focus on first (LCP and CLS):


So, here’s where the technical part of our CWV optimization journey starts.

Step 1. Setting up geo-specific servers

Server response time is crucial when it comes to CWV optimization. Once a user lands on a page, the server should start delivering the page as soon as possible. Google uses a metric called Time to First Byte (or TTFB) to measure server response time. A good TTFB should be less than 600 ms.

Although TTFB is not a part of Core Web Vitals, it directly affects LCP, so making TTFB as low as possible is a crucial part of LCP optimization.
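By the way, TTFB is easy to spot-check for your own session: every modern browser exposes it through the standard Navigation Timing API, so a quick console check like this is enough:

<script>
  // Time from the start of navigation to the first byte of the response.
  const [nav] = performance.getEntriesByType("navigation");
  console.log("TTFB:", Math.round(nav.responseStart), "ms");
</script>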

As link-assistant.com serves people worldwide, just switching to a better hosting provider is not an option. So we decided to add some more servers in different parts of the world to make our pages load faster in all regions.

Before the Core Web Vitals update rolled out, we had one server located in the US. According to Google’s Real User Monitoring (RUM) data, the TTFB score was pretty nice for the US audience, but the truth is it was poor for other regions distant from the server location. So, those distant regions dragged the general TTFB score down:


So we added one more server in the US, one in Europe, and one in Asia. This actually helped us cut TTFB for European countries by up to 80%, but the problem of high TTFB for some Asian countries still remained.

After a series of tests, we discovered that the traffic flow from India was extremely high, and one server’s capacity was not enough to serve the whole region. So we decided to add two more servers — one in Osaka and one in Singapore — to spread the load. And the results were pretty satisfying.

The same problem with TTFB was observed in the Americas.


Just like in India, our American traffic is heavy, so two servers were probably not enough. What’s more, one of our American servers was on Wowrack and, in addition to its outdated technical characteristics, was located on the US West Coast, the so-called zone of the “oldest Internet”. All of that resulted in significant lags between users and the server.

As you may have guessed, we added one more AWS server in that region. And have already noticed improvements.

All in all, our efforts let us reach a worldwide TTFB of 241 ms, which is much more satisfying than before.

Note: Additional servers are a costly solution. If you do not serve a global audience, then simply changing your hosting provider may be enough.

Step 2. Deferring third-party JS

Third-party JavaScript consumes tons of rendering resources, plus it takes rendering priority: every time a browser comes across a piece of JS, it pauses HTML parsing and executes the script first. This obviously leads to a poor LCP score.

In order to understand which third-party scripts to remove or defer, we first had to scan our site and see what render-blocking JS we had on our pages in general.

Actually, third-party JS includes social media share buttons, Google Analytics trackers, Facebook comments blocks, YouTube embeds, etc. In our case, it was also Sleeknote that we used for pop-ups.

Most third-party JS was not a problem to get rid of, though removing it wasn’t even obligatory. At the same time, we needed to keep some JS pieces, like social sharing buttons, Sleeknote, and GA trackers, without harming LCP. So we moved those elements out of the critical rendering path with the help of the following attributes of the <script> tag:

  • Defer. This attribute tells the browser to download the script in the background and execute it only after the HTML has been parsed, at the end of the rendering path (social share buttons, etc.):

  • <script id="facebook-jssdk" src="https://connect.facebook.net/en_US/sdk.js" defer></script>

  • Async. This attribute tells the browser to download the script in parallel with parsing the page and execute it as soon as it’s ready, without pausing the parsing (for scripts sensitive to delayed loading, such as Google Analytics):

  • <script async src="//www.googletagmanager.com/gtm.js"></script>


Step 3. Minimizing web fonts

Web fonts make pages’ designs visually appealing but can take a while to load. This affects both the LCP and CLS Core Web Vitals.

As for LCP, the score is affected because a browser needs some time to fetch and load a third-party font.

The key trouble with CLS is that until a third-party font is loaded, a browser displays a system font instead. When a third-party font is finally loaded, the text may then take more screen real estate, which causes a layout shift and, eventually, a poor CLS score.
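One common mitigation for this swap-induced shift is the font-display descriptor, which tells the browser what to render while a web font is loading. A sketch, with a hypothetical font name and path:

<style>
  @font-face {
    font-family: "BrandFont";                        /* hypothetical name */
    src: url("/fonts/brand.woff2") format("woff2");  /* illustrative path */
    /* "optional" keeps the fallback font if the web font isn't ready in time,
       avoiding the late swap that causes the layout shift */
    font-display: optional;
  }
</style>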


Before CWV started making some noise, we had many fonts on a single page. Sometimes, these fonts were not even used but were still loaded. In addition, we used external fonts like Google Fonts, stored outside our server.

The solution lay on the surface — we just got rid of external fonts and switched to system ones. And I should say it worked really well.

Still, there were cases when we needed third-party fonts. To be on the safe side, we made the necessary fonts self-hosted on our servers and preloaded them in the <head> section of a page’s HTML.
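Such a preload looks like this (the path is illustrative; note that font preloads require the crossorigin attribute even for same-origin fonts):

<link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>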

Besides, we abandoned icon fonts and started using SVG images hosted on our server instead.

Step 4. Extracting critical CSS & JS

A browser cannot render a page until it has downloaded and parsed its CSS. So if CSS files are heavy, loading them may take quite a while, which will negatively affect the LCP score.

Before we started our CWV optimization, our site had one big CSS stylesheet for all pages, more than 70,000 lines long. That heavy CSS was loaded for every single page, even if most of it wasn’t used there.

The Coverage report in Chrome’s Dev Tools helped us a lot in understanding how our CSS was doing and how we could make it lighter:


So we decided to get rid of all the irrelevant CSS lines. In addition, we ran our files through CSS Minifier to compress them even more. JSCompress let us do the same for JS.

Also, we did not need to load the same huge CSS for each page every time, so we extracted the styles needed for the above-the-fold area of a specific page and inlined them in a <style> tag in the <head> section. As for the remaining CSS, we made it load asynchronously so it would not block rendering (the defer attribute only exists for scripts, so stylesheets need a non-blocking loading pattern instead).
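Here is a minimal sketch of the whole pattern, with illustrative file names; the media/onload swap shown below is one widely used way to load a stylesheet without blocking rendering:

<head>
  <style>
    /* critical, above-the-fold rules extracted for this page */
    .hero { min-height: 360px; }
  </style>
  <!-- requested as a non-blocking "print" stylesheet, then switched to all media -->
  <link rel="stylesheet" href="/css/full.css" media="print" onload="this.media='all'">
  <noscript><link rel="stylesheet" href="/css/full.css"></noscript>
</head>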

Step 5. Compressing HTTP content

Compressing HTTP content transferred between servers and the browser also helped us greatly in terms of improving LCP.


There are several algorithms for compressing HTTP content; gzip and Brotli are the most popular. Traditionally, we had used gzip here, but we decided to try Brotli, as it is considered more effective these days.

And really, Brotli proved to be more effective. At the same time, some browsers still didn’t support Brotli compression:


The solution was quite easy: we simply kept two compressed versions of each page and served the one the user’s browser supported, based on its Accept-Encoding request header.

Step 6. Optimizing images

Heavy, improperly-sized images are very likely to affect Core Web Vitals negatively:

  • If such an image is the LCP element, the LCP score has no chance of being any good. The LCP score will also suffer if all images are loaded simultaneously.

  • If an image doesn’t have its dimensions specified, it is the CLS score that suffers.

It’s clear that we could not get rid of images at all, so we took some steps to optimize them properly:

  • Compressing images. The Core Web Vitals report in WebSite Auditor helped us detect the problematic images. Then, we compressed those images with pngquant and TinyPNG.

  • Choosing the best image format. To be honest, we decided to keep PNG and JPEG as we had them because testing them against WebP did not show any positive shifts.

  • Setting image dimensions. If image dimensions are not specified in the code, the browser may take some time to size the image properly, which results in content layout shifts and leads to a poor CLS score. So we specified them within the <img> tag. In some cases, we had to set dimensions for image containers, too.

  • Deferring offscreen images. All the images that were not critical for the above-the-fold content were made lazy-loaded to prevent network contention for key resources and to secure a good LCP. This is also done within the <img> tag, like this:
    <img src="pillow.jpg" width="640" height="360" loading="lazy" alt="purple pillow with flower pattern"/>. Plus, all images visible within the default viewport were made to load at the beginning of the rendering path.

  • Getting rid of heavy background images. We decided to avoid heavy background images for mobile devices where possible, as they take a while to render and negatively affect the mobile LCP score (see the sketch after this list).

  • Serving responsive images. Responsive images prevent the unnecessary loading of variants that aren’t needed for a user’s viewport width. We did this with the help of the srcset attribute, which lets us list several resolutions of the same image so the browser can serve the one that best fits the size of a user’s screen. The implementation looks like this:

    <img src="image1.png"
         srcset="image2.png 100w, image3.png 200w, image4.png 300w"
         sizes="(max-width: 200px) 100px, (max-width: 300px) 200px, (max-width: 400px) 300px"
         width="500" height="380" alt="" decoding="async">
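As for the heavy-background point above, here is a sketch of how a background can be kept off mobile and applied only on wider viewports (the class name and image path are illustrative):

<style>
  .hero { background: #2b2b4d; }   /* cheap solid fallback for mobile */
  @media (min-width: 768px) {
    .hero { background: url("/img/hero-bg.jpg") center / cover no-repeat; }
  }
</style>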
    

Step 7. Final improvements

The things I’m talking about in this step are not that prominent (maybe), but they still helped us in our fight with Core Web Vitals. So here they are:

DOM size reduction

The Document Object Model (or DOM) is a tree-view representation of a web page, which is created for each page as it loads.

Each element of a page’s HTML has a DOM tree node. The logic is simple — the more elements a page has, the bigger its DOM tree size is. This negatively affects the page loading time and thus the LCP metric.

I cannot say that our pages are overloaded; still, we found some places where the DOM size could easily be reduced. Take our page with all of Rank Tracker’s updates, for example, where we left only 10 updates visible (instead of 500+). The rest was made to load on request, after clicking View older updates.
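A rough sketch of this load-on-request pattern (the endpoint URL and markup are illustrative, not our actual implementation):

<ul id="updates">
  <!-- ...only the first 10 update entries rendered in the initial HTML... -->
</ul>
<button id="view-older">View older updates</button>
<script>
  document.getElementById("view-older").addEventListener("click", async () => {
    // Fetch the remaining entries only when the user asks for them,
    // keeping the initial DOM small.
    const resp = await fetch("/rank-tracker/older-updates.html"); // assumed endpoint
    document.getElementById("updates").insertAdjacentHTML("beforeend", await resp.text());
    document.getElementById("view-older").remove();
  });
</script>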


Database requests optimization

Our site is a pretty old one, and older sites tend to accumulate database queries written long ago. Many of ours were outdated and needed optimization or complete refactoring. So that’s what we did: a complete refactoring of our SQL queries. In some cases, query execution time was cut by 95%. This led to a better TTFB and, eventually, better LCP scores.

Static elements caching

We enabled caching of all static elements such as images, CSS, and JS, so a browser did not need to load the same elements many times. This was a benefit to our LCP as well.

We also prolonged the lifetime of cached elements where possible to keep the LCP benefit for longer.


Eliminating bootstrap.css

Bootstrap is a popular CSS framework that helps build websites quickly. Still, everything comes at a price: along with developer convenience, Bootstrap brings a lot of extra CSS that slows pages down.

As for our site, most of the Bootstrap stylesheets turned out to be unused. So we extracted the necessary parts and got rid of the rest to bring our LCP to the safe side.


Pre-connecting to third-party resources

Establishing secure connections takes a lot of time. DNS lookups, SSL handshakes, secret key exchange, roundtrips to the final server that is responsible for the user's request — all of that doesn’t happen in one moment.

So we used rel="preconnect" for such resources as our CDN and Google Tag Manager to save as much time as possible.

The process is quite simple — we simply added a link tag to a page’s HTML:

<link href="https://cdn1.link-assistant.com" rel="preconnect">
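A common companion to preconnect (an extra suggestion on my part, not something from our original setup) is dns-prefetch, which at least resolves DNS early in browsers that don’t support preconnect:

<link href="https://cdn1.link-assistant.com" rel="dns-prefetch">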

Key resources preloading

That was the case when we could not get rid of background images but still needed a good LCP time.

Have a look at our Backlink API page and its beautiful, stylish gradient background image:


I cannot say that we did not try to optimize that image before — we lazy-loaded it via the <img> tag. But when we started preloading it using the <link rel="preload"> construction inside the <head> section, it really worked better.
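The preload itself is a one-liner in the <head>; the image URL here is illustrative:

<link rel="preload" as="image" href="/img/backlink-api-bg.jpg">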

The rel="preload" technique is also now used for the non-default fonts mentioned in Step 3.

Step 8. Profiling the code

(Yes. One more step after “final” improvements. I would not like this step to ever appear, but the course of Core Web Vitals optimization is full of surprises.)

The steps I described above took months. Imagine how disappointed our dev team was when they saw many, MANY obvious and awkward issues still present in the code. Unnoticed, those issues prevented us from getting good CWV scores in some cases.

The solution was quite obvious — we had to review the code once again. Well, fine, not once. Still, it was a necessary job to ensure no bugs were left. Google’s PageSpeed Insights helps a lot with this task.

Anyway, repeat this step regularly to find and fix any issues in a timely manner.

Site performance after CWV optimization

Well, here we are today, having survived months of hard work and a series of Google updates:


The activities I described above let us bring the majority of our pages to the “green” zone of the Core Web Vitals assessment.

For new and modern sites, this is hardly an achievement, to be honest. But for old websites, like a 17-year-old yours truly with 1K indexed pages, this is a pretty meaningful result.

The truth is that this work is never over. But now, unlike it was in October 2021, we know exactly what we have to keep an eye on and what to do to keep our Core Web Vitals all green.

CWV correlation with organic search results

Have we seen any correlations with SEO results? Well, it’s hard to say that there are any definite correlations, as correlation and causation are different things anyway. Still, we’ve spotted some positive outcomes:

  1. Growth of impressions


  2. Growth of clicks


    As for clicks, there was a drop in May 2022 after most of our optimization work was done. We investigated the issue and saw that our CWV got worse right before that.


When we fixed CWV back, clicks grew again.

Besides, all the Core Web Vitals optimizations we performed positively affected the user experience of our website. UX is crucial for today’s human-oriented SEO, so it is likely that it was UX that boosted our impressions and, consequently, clicks. But, once again, I cannot say it for certain; as SEOs like to say, “it depends”, and this case is no exception.

To sum it all up

Sure thing, many Core Web Vitals optimization methods I described here will suit any site. However, some of the solutions are specific to our website only. Always consult Chrome’s Lighthouse report, WebSite Auditor, or Google Search Console to spot the problems your site has and tailor your own strategy that will help with your specific issues.

By the way, have you succeeded with Core Web Vitals optimization? Share in the comments.


Written by aleh | Founder and CMO at SEO PowerSuite and Awario. Digital Marketer & Speaker at SMX, BrightonSEO.
Published by HackerNoon on 2022/10/05