
Roundup of Google updates from August 2020

September 17, 2020

As this year’s very unusual summer draws to a close, people spent its hottest month with plenty of thoughts about what comes next.

August’s Covid-19 restrictions were somewhat relaxed, many people went on holiday, and businesses continued planning their digital strategies for the uncharted waters of the months ahead.

Google has been particularly busy once again, developing tools and sharing insights with the community. SEO advocates have followed suit, helping businesses take advantage of any search engine updates and capabilities that could benefit them in the long run.

While people’s mobility seems to become more and more restricted, the tourism industry finds itself in a stalemate. This has inevitably given space to unprecedented growth in the ecommerce industry. Technology makes products fairly cheap to transport, and people still need to buy necessary and affordable products. Google is, of course, well aware of that, and around mid-August it gave retailers more options to control how their product information appears in search results. Apart from that, Google suffered an indexing glitch that shook up search quality across the globe, but it lasted no more than a few hours.

3rd Aug – Google’s “Search Off the Record” podcast

Be cautious when relying on third parties for your website content.

Google’s Martin Splitt drew webmasters’ attention to the potential downsides of relying on JavaScript content rendered by third parties, using blog comment sections as an example.

Search Off the Record - Google Podcast, logo

In that early August podcast session, Splitt opened up this topic against the backdrop of an issue that occurred the previous month, when Google wouldn’t index blog comments from a third-party content provider called Disqus.

Third-party providers such as Disqus deploy embedded content using JavaScript, which is then rendered on the client side.

Even though this incident was due to a glitch on Google’s side, it sparked a broader discussion about how to deal with critical JavaScript content rendered by third-party providers. 

Because the challenge is that you, as a website owner, don’t really have control over a third party.
And if you are using client side JavaScript to pull in content from the third party in the browser, things can go wrong.
They could robot their JavaScript API, and then we can’t make the request or maybe their servers are really under load. And then we decide not to make these requests to the third party because they are already experiencing high load situations.

Martin Splitt, Developer Advocate at Google

Splitt went on to say that this problem can be tackled by doing everything on the server side, explaining that if the third party has an API you can interact with on the client side, you can most likely interact with it on the server side as well.

Answering John Mueller’s question of whether it is a bad idea to rely on third parties, Splitt replied that “it’s more like an ‘okay’ idea to rely on third parties.” He added that we have little control over what happens in the browser, and even less control when we rely on Googlebot to do the heavy lifting rather than having our own server do the work.
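To illustrate the difference Splitt is describing (the markup below is a generic sketch, not Disqus’s actual embed code), client-side embedding leaves the comments dependent on Googlebot fetching and rendering third-party JavaScript, whereas server-side rendering ships the comment text in the HTML itself:

```html
<!-- Client-side embed: an empty container filled in by a third-party script.
     If the script is blocked or the third party is under load, Googlebot may
     never see the comments. (URL is a placeholder.) -->
<div id="comments"></div>
<script src="https://comments.thirdparty.example/embed.js" async></script>

<!-- Server-side alternative: the comments were fetched from the third party's
     API when the page was generated, so they arrive as plain HTML. -->
<section id="comments">
  <article class="comment">
    <p>Great write-up, thanks for sharing!</p>
  </article>
</section>
```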

4th Aug – Google’s search quality raters do not directly influence rankings

Google published a comprehensive explainer on how changes to search rankings are evaluated internally before being rolled out to users.

Google’s Public Liaison for Search, Danny Sullivan, explained the role of the so-called search quality raters, the people whose evaluations are taken into consideration for the well-known algorithm updates.

In plain words, Google’s researchers and raters talk to people from all around the globe and try to understand how Search can be improved and made more relevant to them.

This is why we have a research team whose job it is to talk to people all around the world to understand how Search can be more useful. We invite people to give us feedback on different iterations of our projects and we do field research to understand how people in different communities access information online.

Danny Sullivan, Google’s public liaison of search

Regarding how the search quality raters are guided, Danny Sullivan mentions:

We publish publicly available rater guidelines that describe in great detail how our systems intend to surface great content. These guidelines are more than 160 pages long, but if we have to boil it down to just a phrase, we like to say that Search is designed to return relevant results from the most reliable sources available.

Danny Sullivan, Google’s public liaison of search

It’s beyond doubt that the algorithm can pick up signals automatically, but relevance and trustworthiness do require human judgment. Hence, Google works with 10,000 search quality raters from all over the world to help the cause.

The raters assist Google in understanding how people experience search results. The ratings are based on Google’s guidelines and aim to capture real user sentiment as well as the type of information users really need.

Sullivan went on to explain how search quality raters work. Interestingly, Google conducts some sort of SERP A/B testing. Google assigns a set of queries to groups of raters, showing them two versions of the search result pages.

One is the current version and the other one is considered to be an improved version in the eyes of Google. Every result page is evaluated against the query, always based on the rater guidelines.

To evaluate E-A-T, which stands for expertise, authoritativeness and trustworthiness, raters conduct reputational research on the sources. Once the research is done, the raters provide a quality rating for each page.

Having said that, Sullivan did make clear that ratings do not impact rankings directly but rather help Google measure how well Search delivers content that aligns with its guidelines.

It’s important to note that this rating does not directly impact how this page or site ranks in Search. Nobody is deciding that any given source is “authoritative” or “trustworthy.” In particular, pages are not assigned ratings as a way to determine how well to rank them.

Danny Sullivan, Google’s public liaison of search

6th Aug – Martin Splitt explains how to solve mobile-first indexing problems in a Lightning Talks episode

In-person event restrictions forced Google to adapt to the new landscape, so Martin Splitt had to deliver this presentation alone rather than at the usual Google Webmaster Conferences.

In a COVID-proof Lightning Talks rollout, Martin Splitt went through the crucial topic of mobile-first indexing, explaining the most common issues and providing specific solutions.

Martin Splitt on Lightning Talks opening up the video introduction

It turns out that the most common problems that come with mobile-first indexing are:

1. Mobile crawling issues
2. Mobile page content issues

Interestingly enough, mobile crawling issues essentially happen when Google crawls with Googlebot’s mobile version.

In this case, a request can be handled differently by the server based on the user agent. When something like this happens, Google gets insufficient information from the pages, or no information at all, leaving it without enough signals to show those pages in the search results at all.

Mobile page content issues arise mainly when a site serves different content on its mobile and desktop versions. Less information means Google cannot determine whether a page is relevant, so the site cannot be ranked properly in the search results.

To avoid encountering such issues, Splitt suggests the following:

Don’t:

  • Block Googlebot from crawling with a ‘Disallow’ directive in robots.txt
  • Use a noindex meta tag
  • Block Googlebot from crawling mobile CSS
  • Block Googlebot from following internal links

Check:

  • Robots.txt directives
  • Noindex & nofollow tags
  • The server’s crawl capacity: the server should be able to handle as many mobile crawls as it previously handled desktop crawls
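As a minimal sketch of what those checks look like in practice (the page, file paths and values below are hypothetical, not taken from Splitt’s talk), the goal is simply that nothing in the page’s head or in robots.txt hides the mobile page from Googlebot:

```html
<!-- Hypothetical page <head>; a stray "noindex" here, or a robots.txt rule that
     disallows the page or its mobile CSS, would undermine mobile-first indexing. -->
<head>
  <title>Example product page</title>
  <!-- Indexing and link following allowed (this is also the default) -->
  <meta name="robots" content="index, follow">
  <!-- The mobile stylesheet must stay crawlable, so don't disallow /css/ in robots.txt -->
  <link rel="stylesheet" href="/css/mobile.css" media="(max-width: 768px)">
</head>
```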

Splitt pointed out that mobile page content should be identical to the desktop version. 

For example, if users on your mobile page see a “See More” button, whereas on the desktop version that button does not exist and all the content is visible on the page, this can pose a problem. Since Googlebot only crawls what it sees as visible and doesn’t interact with page elements, your mobile page content will not be ranked properly.

Main takeaway: keep your mobile and desktop content identical to avoid mobile-first indexing problems. Don’t keep anything out of Googlebot’s sight, including structured data and meta descriptions.

7th Aug – Meta descriptions function as a content summary, helping Google understand what is important about a page

One of the few things that seems to be widely accepted in the search community is that the meta description is not a ranking factor and carries little SEO value. But is that the case?

Replying to a Twitter thread, Google’s Martin Splitt gave us quite a valuable hint as to how Google understands meta descriptions.

Martin Splitt comments on Twitter on the use of meta descriptions

Splitt clearly says that the meta description HTML element helps Google understand what the content of the page is all about.
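For reference, the element in question is the ordinary meta description tag in a page’s head; the wording below is a made-up example rather than anything from the Twitter thread:

```html
<head>
  <title>Handmade leather boots | Example Shop</title>
  <!-- A concise summary of the page. Per Splitt, it helps Google understand
       what the page is about, even though it is not a ranking factor. -->
  <meta name="description"
        content="Browse handmade leather boots in UK sizes 4-12, with free returns and delivery within three working days.">
</head>
```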

John Mueller jumped in, making clear that meta descriptions are not a ranking factor.

John Mueller comments on Twitter on the use of meta descriptions

However, Mueller didn’t seem to add anything to Martin’s point.

What is interesting is that the title element has been widely perceived as a ranking factor, with the meta description acting as a companion to the title tag, summarising what a page is about.

From one point of view, Splitt seems to imply that the meta description plays a similar role to the title element, effectively providing a blurb detailing what a page is about. 

This is slightly different from what most of us in the search community had in mind regarding meta descriptions, and Mueller’s contribution to the discussion doesn’t seem to clarify much.

Time will tell whether Splitt was hinting at something that turns out to be a very valuable insight.

11th Aug – An alleged massive Google update turned out to be a massive indexing glitch

The search community thought a core update was unexpectedly unfolding in front of their eyes, but it was in fact nothing more than a glitch.

John Mueller comments on Twitter confirming major Google indexing glitch

A few hours later Google’s Webmasters Twitter account posted:

Google Webmasters account tweets confirming the indexing glitch

Google’s Gary Illyes jumped into the discussion, explaining how the Caffeine indexing system works with a list.

Tweet about the capabilities of Google’s Caffeine indexing system

A few years ago, Google developed a web crawling and indexing system called Caffeine. The objective was a comprehensive system that processes data faster, indexes the whole web in real time and scales over time.

However, Gary Illyes went further, tweeting about the following downsides:

Tweet about the capabilities of Google’s Caffeine indexing system and the ensuing discussion

A fuss built up around the issue, and it soon became apparent that the glitch’s impact was felt across the board.

Google’s search glitch was widespread, affecting all languages and all niches.

Poor search results and fluctuations in rankings were observed by everyone, from ecommerce sites to search experts.

Joe Youngblood tweeted:

 Comments on Twitter about the Google indexing glitch in August

WebmasterWorld member Whoa182 added to the wider discussion:

What the hell is going on?

Just noticed my articles have gone from page 1 to page +

Seems to have just happened in the past few hours! Quite a few of my competitors have all disappeared from the SERPs.

Edit: Okay, it’s just massive fluctuations in page positions. One minute it’s on page 1, next it’s page 7 or whatever, and then back again.

Whoa182, WebmasterWorld member

Google Webmaster experts Gary Illyes and John Mueller made clear that the glitch was related to Google’s indexing system. 

However, the glitch caused a massive shake-up in the search results that lasted a few hours before everything went back to normal.

Yael Consulting tweets that things went back to normal after big Google indexing glitch
Osanda Cooray tweets that things went back to normal after big Google indexing glitch

15th Aug – John Mueller: Heading Tags are a really strong signal

In that mid-August Webmaster Central Hangout, Mueller stated with confidence that heading tags are a strong signal in terms of ranking. 

And when it comes to text on a page, a heading is a really strong signal telling us this part of the page is about this topic.

…whether you put that into an H1 tag or an H2 tag or H5 or whatever, that doesn’t matter so much.

But rather kind of this general signal that you give us that says…this part of the page is about this topic. And this other part of the page is maybe about a different topic.

John Mueller, Senior Webmaster Trends Analyst at Google

John was asked whether a page without an H1 heading would still rank for keywords that appear in an H2 heading. He replied:

So headings on a page help us to better understand the content on the page.

Headings on the page are not the only ranking factor we have.

We look at the content on its own as well.

But sometimes having a clear heading on a page gives us a little bit more information on what that section is about.

John Mueller, Senior Webmaster Trends Analyst at Google

What is striking about Mueller’s words is that he calls headings a ranking factor, whereas in the past he has himself downplayed their importance as a ranking factor:

Your site is going to rank perfectly fine with no H1 tags or with five H1 tags.

So it’s not something you need to worry about.

Some SEO tools flag this as an issue and say like Oh you don’t have any H1 tags or you have two H1 tags… from our point of view that’s not a critical issue.

John Mueller, Senior Webmaster Trends Analyst at Google

Anyone who has conducted any sort of competitor analysis knows that pages can rank pretty well without an H1 heading.

Elaborating on the content side, Mueller left no room for doubt, confirming that a heading is a strong signal when it comes to giving information about the topic of the page.
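In markup terms (a generic illustration rather than an example Mueller gave), the takeaway is that headings label what each section of a page is about, and the exact heading level matters far less than the labelling itself:

```html
<body>
  <!-- Whether this is an h1 or an h2 matters less than the fact that it
       tells Google what the section below it is about. -->
  <h1>Growing tomatoes at home</h1>
  <p>General introduction to the topic…</p>

  <h2>Choosing a variety</h2>
  <p>Content about this sub-topic…</p>

  <h2>Watering and feeding</h2>
  <p>Content about a different sub-topic…</p>
</body>
```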

21st Aug – Google gives retailers options to customise product info in the search results

Google announced via the Webmaster Central blog that retailers will be offered options to control how product information appears in search results.

Google's Webmaster blog title on the 21st of August, 2020

Retailers can now leverage robots meta tags and HTML attributes to control the way their products show in Google search results.

These new capabilities allow retailers to mark up their product pages and customise search snippets as per their needs and preferences for free.

However, Google adds that it might also include content that has not been marked up. As a result of the frequent crawling of content, alternative attributes could be pulled through if they are deemed relevant. 

While the processes above are the best way to ensure that product information will appear in this Search experience, Google may also include content that has not been marked up using schema.org or submitted through Merchant Center when the content has been crawled and is related to retail. Google does this to ensure that users see a wide variety of products from a broad group of retailers when they search for information on Google.

Google

Google also provided ways to implement these controls:

  1. Using the “nosnippet” meta tag, retailers can prevent any textual or image snippet from being shown for the page in search results.

Here are two product pages, with and without the tag.

Google page results with “nosnippet” meta tag

  2. Using the “max-snippet:[number]” robots meta tag, retailers can specify a maximum snippet length in characters; if the structured data exceeds the maximum snippet length, the page will be removed from any free listing experience.

Google page results with "max-snippet:[number]" robots meta tag

  3. Using the “max-image-preview:[setting]” robots meta tag, retailers can specify the maximum image preview size to be shown for images on the page, using either “standard” or “large”.

Google page results with "max-image-preview:[setting]" robots meta tag

  4. Using the “data-nosnippet” HTML attribute, retailers can specify a section of content that should not be included in a snippet preview. When this attribute is applied to offer information (price, availability, ratings, image), it will remove the listing from any free listing experience.

Photos of page results with and without the “data-nosnippet” HTML attribute
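Taken together, these controls are standard robots meta tags plus one HTML attribute. Here is a minimal sketch of how they might sit on a product page (the page and values below are hypothetical, not taken from Google’s announcement):

```html
<!-- Hypothetical product page; values are illustrative only. -->
<head>
  <title>Trail running shoes – Example Store</title>
  <!-- Cap the text snippet at 150 characters and allow large image previews.
       Using "nosnippet" instead would suppress the snippet entirely. -->
  <meta name="robots" content="max-snippet:150, max-image-preview:large">
</head>
<body>
  <h1>Trail running shoes</h1>
  <!-- Keep this block out of any snippet preview -->
  <div data-nosnippet>Trade pricing available on request.</div>
  <p>Lightweight trail running shoes with a grippy outsole…</p>
</body>
```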

To conclude, the Google Webmaster Central blog provides some additional information on retailer preferences.

These instructions do not apply to information supplied via schema.org markup or product data submitted through the Google Merchant Center.

For detailed information, follow this thread.

21st Aug – The vast majority of websites don’t need to worry about crawl budget

Search Off the Record podcast on crawl budget. 21st August

In the Search Off the Record podcast, the Google Search Relations team said that the vast majority of sites shouldn’t worry about crawl budget.

Gary Illyes and his team discussed crawl budget, saying that a ‘substantial segment’ of sites does have to pay attention to the matter, but that it shouldn’t be a concern for the majority of sites.

We’ve been pushing back on the crawl budget, historically, typically telling people that you don’t have to care about it.

And I stand my ground and I still say that most people don’t have to care about it. We do think that there is a substantial segment of the ecosystem that has to care about it.

…but I still believe that – I’m trying to reinforce this here – that the vast majority of the people don’t have to care about it.

Gary Illyes, Webmaster Trends Analyst at Google

So, what is the exact threshold one needs to know in order to decide whether crawl budget is something to worry about?

Illyes replied:

… well, it’s not quite like that. It’s like you can do stupid stuff on your site, and then Googlebot will start crawling like crazy.

Or you can do other kinds of stupid stuff, and then Googlebot will just stop crawling altogether.

Gary Illyes, Webmaster Trends Analyst at Google

If he had to give a number, Gary said sites with fewer than a million URLs don’t need to worry about crawl budget.

For those sites that need to take crawl budget into consideration, there are two main factors to be considered:

1. Pages that haven’t been crawled in a long time or have never been crawled 

2. Refresh rates, whereby changes have been made to certain sections of the site but those sections have not been refreshed in the index for a long period of time.

On how to fix issues around crawl budget, Illyes suggests first removing non-essential pages, as an excessive amount of redundant content can result in important pages not getting crawled.

Like if you remove, if you chop, if you prune from your site stuff that is perhaps less useful for users in general, then Googlebot will have time to focus on higher quality pages that are actually good for users

Gary Illyes, Webmaster Trends Analyst at Google

As a second action, Gary Illyes suggests avoiding sending “back off” signals to Googlebot, which would make it stop crawling the site.

If you send us back off signals, then that will influence Googlebot crawl. So if your servers can handle it, then you want to make sure that you don’t send us like 429, 50X status codes and that your server responds snappy, fast.

Gary Illyes, Webmaster Trends Analyst at Google

26th Aug – Home activities rich results announced by Google 

Google announced that a new rich result corresponding to home activity events will be shown for relevant search queries.

The new rich results are limited to searches related to fitness and are only available on mobile devices. Websites have to add either Event or Video structured data in order to be considered for home activities. Event structured data is suitable for upcoming online events, and video structured data is suitable for already published videos.

Image of Google Home activities rich results

The eligibility criteria

Alt="Google Home activities rich results eligibility criteria list">

The Event structured data is relevant to both in-person and virtual events. Due to Covid-19 and the rise in home video consumption, there are additional properties and types that can be used.

Event organisers have to use the VirtualLocation type and set the eventAttendanceMode property to OnlineEventAttendanceMode.
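As a rough illustration of that Event route (the class name, dates and URLs below are invented for the example, not taken from Google’s documentation), an online fitness class marked up as a virtual event might look like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Event",
  "name": "Morning yoga flow (online class)",
  "startDate": "2020-09-20T09:00:00+01:00",
  "endDate": "2020-09-20T10:00:00+01:00",
  "eventAttendanceMode": "https://schema.org/OnlineEventAttendanceMode",
  "eventStatus": "https://schema.org/EventScheduled",
  "location": {
    "@type": "VirtualLocation",
    "url": "https://example.com/live/yoga-flow"
  },
  "organizer": {
    "@type": "Organization",
    "name": "Example Yoga Studio",
    "url": "https://example.com"
  }
}
</script>
```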

The LIVE badge guidelines set out the requirements, including rules about offensive language in the structured data, and require the use of the Indexing API for livestreams.

There is also documentation for the BroadcastEvent structured data in Google’s developer pages with the respective required properties.

Required properties are: publication (the livestream’s BroadcastEvent), publication.isLiveBroadcast, publication.startDate and publication.endDate.

Currently, these types of rich results only show for fitness-related queries. The time zone indicated in the structured data determines where these rich results are shown.

These fitness-related rich results might be useful for queries about local yoga services, nearby gyms, dance sessions and any other online fitness activities.

31st Aug – Google displays “Licensable” badges on images 

According to Google, images that come with licensing information will be marked with a “Licensable” badge. 

When an image is clicked, a link to the licence details provided by the image owner will be shown. Where available, there will also be an additional link that leads straight to the content owner’s or licence holder’s page, where the user can request and acquire the rights to use the image.

For images to display licensing information in search results, and eventually the “Licensable” badge, the licensor must use image license structured data.

Webmasters can use either structured data or IPTC photo metadata, and the metadata should be added to each licensable image on the site.

Correct markup implementation can be validated with Google Search Console and the Rich Results Test.

More information on code snippets and support for testing image license structured data can be found here and here.
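For orientation, the structured-data route marks each image up as an ImageObject with license and acquireLicensePage fields; the URLs below are placeholders rather than snippets from Google’s documentation:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ImageObject",
  "contentUrl": "https://example.com/photos/harbour-sunrise.jpg",
  "license": "https://example.com/image-licence-terms",
  "acquireLicensePage": "https://example.com/buy-image?id=harbour-sunrise"
}
</script>
```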

Finding licensable images in search results is quite easy.

Image with Google “Licensable” badges: how to find the option

You can simply filter image search by clicking Tools and, under the new Usage Rights dropdown, selecting either ‘Creative Commons licenses’ or ‘Commercial & other licenses’.

Having said that, please note that image license structured data markup is not a ranking factor and is entirely optional. As our SEO agency London team was anticipating, John Mueller explicitly confirmed this last February.

Google’s John Mueller confirms that the “Licensable” badge image markup is not a ranking factor

TAGS: Technology