Google Algorithm Changes in February - Part 3
Welcome back to what is literally the longest blog post ever written (even Michael Martinez over at SEO Theory isn't a patch on me). In this post I'll be going through changes 21-30. If you haven't caught this series of posts from the beginning, I'm working my way through the 40 changes that Google have made to their algorithms, indexing and search results in February, and you can find the first 10 changes discussed here and changes 11-20 here.
21. International launch of shopping rich snippets. [project codename "rich snippets"] Shopping rich snippets help you more quickly identify which sites are likely to have the most relevant product for your needs, highlighting product prices, availability, ratings and review counts. This month we expanded shopping rich snippets globally (they were previously only available in the US, Japan and Germany).
As these shopping rich snippets are easy to implement, Google rolling them out worldwide is something that any online retailer should jump on as an opportunity. Take a look at these three search results for "leather office chair":
There are two ways to get a rich snippet with additional shopping data:
- Submit a product feed to the Google Merchant Center. Providing the URLs displayed in the SERPs are the same ones that appear in your Merchant Center feed, Google will associate the two and potentially generate product rich snippets for you.
- Code your site with semantic HTML according to the guidelines from schema.org, specifically using the product schema.
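For option 1, a Merchant Center feed is typically an RSS 2.0 file using Google's `g:` namespace. The sketch below is abbreviated and every value in it (store name, URLs, prices) is invented for illustration; note that the `<link>` URL should be the same URL that ranks in the SERPs, per the point above:

```xml
<?xml version="1.0"?>
<rss version="2.0" xmlns:g="http://base.google.com/ns/1.0">
  <channel>
    <title>Example Store product feed</title>
    <link>http://www.example.com/</link>
    <item>
      <g:id>CHAIR-001</g:id>
      <title>Leather Office Chair</title>
      <description>High-back leather office chair with adjustable arms.</description>
      <link>http://www.example.com/chairs/leather-office-chair</link>
      <g:image_link>http://www.example.com/images/chair-001.jpg</g:image_link>
      <g:price>149.99 GBP</g:price>
      <g:availability>in stock</g:availability>
      <g:condition>new</g:condition>
    </item>
  </channel>
</rss>
```

A real feed would include further attributes (brand, GTIN/MPN and so on) depending on the product category.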
Of the two options, option 1, although more complex, is clearly the better choice, since if the feed is built and optimised correctly you also get the benefit of visibility in product search, something that is extremely important given that shopping results often take pride of place in the natural results for specific product queries such as this one:
Option 2 is the "easy" route, and should be a trivial development update for most retailers, as it typically involves changing the names of page elements that already exist at a template level.
Neither option is guaranteed to result in a product rich snippet, as whether one is shown or not is ultimately still at Google's discretion, but I'd recommend pursuing both options as Google isn't the only search engine that may interpret semantic mark-up (schema.org is also backed by Yahoo, Bing and Yandex among others). Also, telling Google twice (through a feed and semantic HTML) can't do any harm, right?
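As a rough sketch of option 2, the schema.org product mark-up might look like the following in microdata form. The product name, price and rating values here are invented for illustration:

```html
<!-- Hypothetical product page snippet using schema.org Product microdata -->
<div itemscope itemtype="http://schema.org/Product">
  <h1 itemprop="name">Leather Office Chair</h1>
  <div itemprop="aggregateRating" itemscope
       itemtype="http://schema.org/AggregateRating">
    <span itemprop="ratingValue">4.5</span> stars from
    <span itemprop="reviewCount">27</span> reviews
  </div>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <span itemprop="price" content="149.99">&pound;149.99</span>
    <meta itemprop="priceCurrency" content="GBP">
    <link itemprop="availability" href="http://schema.org/InStock"> In stock
  </div>
</div>
```

In most cases this really is just a matter of adding `itemprop` attributes to elements the page template already renders.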
22. Improvements to Korean spelling. This launch improves spelling corrections when the user performs a Korean query in the wrong keyboard mode (also known as an "IME", or input method editor). Specifically, this change helps users who mistakenly enter Hangul queries in Latin mode or vice-versa.
Google is now very good at handling misspellings. Furthermore, where they used to show a "did you mean...?" link, they now just show the results for the corrected spelling, making optimising for misspellings a thing of the past. I for one am glad of this. Some years ago, when it was a concern, it was very hard to convince a brand to deliberately spell something wrong, and impossible to do so in prominent, visible page copy, meaning you'd have to resort to slightly underhand methods like putting the misspellings in image alt text.
23. Improvements to freshness. [launch codename "iotfreshweb", project codename "Freshness"] We've applied new signals which help us surface fresh content in our results even more quickly than before.
Not content with updating the way they determine the queries that need fresh content (change number 14, described in my last post), Google has also updated the way they determine what content is fresh. I'm still not sure there are any "signals" of freshness beyond the fact that the content is... well... new. So, to me this update suggests that Google has simply got better at finding new content, potentially using a social network or part of a social network that they weren't previously using to find content.
24. Web History in 20 new countries. With Web History, you can browse and search over your search history and webpages you've visited. You will also get personalized search results that are more relevant to you, based on what you've searched for and which sites you've visited in the past. In order to deliver more relevant and personalized search results, we've launched Web History in Malaysia, Pakistan, Philippines, Morocco, Belarus, Kazakhstan, Estonia, Kuwait, Iraq, Sri Lanka, Tunisia, Nigeria, Lebanon, Luxembourg, Bosnia and Herzegovina, Azerbaijan, Jamaica, Trinidad and Tobago, Republic of Moldova, and Ghana. Web History is turned on only for people who have a Google Account and previously enabled Web History.
If you work for a company in one of these countries, on reading this update you may start to wonder whether personalised search is good or bad for your site. It's an obstinate problem: after all, surely personalised search is personal, and therefore impossible to track, given that rank checking software sees only depersonalised results?
Fortunately not. Since Google now provides rankings for your site in Webmaster Tools (and Google Analytics, if you've hooked the two platforms together), and these rankings are "in the wild", you can use this information and compare it with your rank checked rankings to see whether personalisation is improving your base rankings or causing them to drop.
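To make that comparison concrete, here's a minimal Python sketch of the idea. Everything in it is hypothetical: the keywords, positions and the `personalisation_delta` helper are invented for illustration, and in practice you'd export the "in the wild" average positions from Webmaster Tools rather than hard-code them.

```python
# Sketch: compare "in the wild" average positions reported by Webmaster Tools
# with depersonalised positions from a rank checker, to see whether
# personalisation is lifting or lowering your base rankings.
# All keyword data below is made up for illustration.

def personalisation_delta(gwt_avg_positions, rank_checker_positions):
    """Return {keyword: rank_checker - gwt} for keywords present in both.

    A positive delta means the average "in the wild" (personalised) position
    is better (a lower number) than the depersonalised ranking.
    """
    return {
        kw: rank_checker_positions[kw] - gwt_avg_positions[kw]
        for kw in gwt_avg_positions
        if kw in rank_checker_positions
    }

gwt = {"leather office chair": 4.2, "office chairs": 9.8}
checker = {"leather office chair": 6, "office chairs": 9}

deltas = personalisation_delta(gwt, checker)
for kw, delta in sorted(deltas.items()):
    verdict = "personalisation helps" if delta > 0 else "hurts or neutral"
    print(f"{kw}: delta {delta:+.1f} ({verdict})")
```

Averaged over enough keywords, the sign of the delta gives a rough answer to the "good or bad for my site" question above.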
25. Improved snippets for video channels. Some search results are links to channels with many different videos, whether on mtv.com, Hulu or YouTube. We've had a feature for a while now that displays snippets for these results including direct links to the videos in the channel, and this improvement increases quality and expands coverage of these rich "decorated" snippets. We've also made some improvements to our backends used to generate the snippets.
The development of universal search and the constant meddling with the SERPs is nothing new, but what I like about this is the subtle implication that Google ranks video sites other than YouTube regularly in its search results! This, of course, is a vanishingly remote occurrence these days. When it comes to video optimisation, as little as one year ago I think there might have been a genuine case to be made for hosting your own videos and trying to optimise them to appear in video search. Nowadays I think you just use YouTube.
26. Improvements to ranking for local search results. [launch codename "Venice"] This improvement improves the triggering of Local Universal results by relying more on the ranking of our main search results as a signal.
This is about taking one or more signals that affect the main search results and applying those to the local search rankings as well. So, a good way to deduce what these new factors are is to look at what local search rankings were previously determined by. To wit:
This chart is the result of an older study we conducted into local search ranking factors. Essentially what it shows is that there is a very strong correlation between how well a business ranks and the amount of content on the business's profile. To put it another way, this is pretty much "on-page SEO", the only exception being "citations".
I'd say the main thing missing from these signals, and something we know is a consideration in the main Google algorithm, is something akin to "trust" or "domain authority" based on links, or user signals like CTR, engagement time and bounce rates. Looking at these signals would also prevent random small businesses from gaming the local search results by lying about their location, which I've sometimes seen happening.
27. Improvements to English spell correction. [launch codename "Kamehameha"] This change improves spelling correction quality in English, especially for rare queries, by making one of our scoring functions more accurate.
See 22, above.
28. Improvements to coverage of News Universal. [launch codename "final destination"] We've fixed a bug that caused News Universal results not to appear in cases when our testing indicates they'd be very useful.
For most sites, being admitted as a news source in Google News is essentially impossible. The list of requirements and the application process are quite rigorous, involving among other things being able to demonstrate your journalistic credentials, that you have well known (or at least widely published) authors writing for you, and so on.
If that doesn't describe you, then you need to resort to one of the other routes to visibility in the natural search results.
29. Consolidation of signals for spiking topics. [launch codename "news deserving score", project codename "Freshness"] We use a number of signals to detect when a new topic is spiking in popularity. This change consolidates some of the signals so we can rely on signals we can compute in realtime, rather than signals that need to be processed offline. This eliminates redundancy in our systems and helps to ensure we can continue to detect spiking topics as quickly as possible.
For "consolidation" read "removing some of". This means Google has dropped one or more previously used sources in favour of focussing only on search demand and maybe one or more social networks - the "real time" data sources alluded to. I think this change is the same as, or related to, change number 14 ("disabling two old fresh query classifiers"), and, as described in the last post, the likely upshot is that blog and/or news coverage - which is less relevant and also harder to interpret quickly - is no longer considered a signal of whether a topic is "spiking" or "trending".
30. Better triggering for Turkish weather search feature. [launch codename "hava"] We've tuned the signals we use to decide when to present Turkish users with the weather search feature. The result is that we're able to provide our users with the weather forecast right on the results page with more frequency and accuracy.
It's striking how many of the updates in February are to do with universal search in one way or another. Although the prevalence of different result types can seem overwhelming or annoying at times, it's important to see them for what they are - namely, additional opportunities to gain visibility in the natural search results (although to be fair this isn't the case with the weather "one box" result, making my tenuous segue for this update even more tenuous). Further, it's usually much easier to optimise for universal search than for normal search, because the number of factors at play is much lower. To draw an analogy, most algorithms powering universal search today are akin to the main search algorithm in the year 2000.