How to define SERP intent and ‘resource type’ for better analysis

SERP analysis along with keyword research is an integral part of any modern SEO campaign.

Search intent analysis is already part of this process. But when it comes to SERP analysis, all too often I see reports that stop at classifying a result by its purpose – and that’s it.

We know that Google strives to provide a diverse results page for queries with multiple interpretations, and the results often differ by:

  • Purpose of the result (commercial, informational).
  • Type of business (national result, local result).
  • Aggregators and comparison sites.
  • Page type (static or blog).

When we plan content, we may develop a strategy based on Google ranking some informational results on page 1, so we create informational content as well.

We can also use a tool to “aggregate” first page metrics and create artificial keyword difficulty estimates.
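
To make that concrete, here is a rough, hypothetical sketch (in Python) of the kind of aggregation such tools perform. The metric names, numbers and weighting below are invented purely for illustration and don’t reflect any specific tool:

from statistics import mean

# Hypothetical page-1 data: every result reduced to a couple of link metrics,
# with no notion of what kind of site each result is.
page_one = [
    {"url": "vendor-a.com/product", "referring_domains": 410, "domain_rating": 74},
    {"url": "vendor-b.com/product", "referring_domains": 290, "domain_rating": 71},
    {"url": "review-hub.com/best-tools", "referring_domains": 120, "domain_rating": 80},
    {"url": "oss-registry.org/package", "referring_domains": 35, "domain_rating": 90},
    {"url": "vendor-c.com/blog/guide", "referring_domains": 60, "domain_rating": 68},
]

def naive_keyword_difficulty(results):
    # Averages link metrics across all page-1 results, regardless of source type.
    avg_rd = mean(r["referring_domains"] for r in results)
    avg_dr = mean(r["domain_rating"] for r in results)
    # Arbitrary blend scaled to a 0-100 style score.
    return round(min(100.0, 0.1 * avg_rd + 0.5 * avg_dr), 1)

print(naive_keyword_difficulty(page_one))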

This is where this strategy falls down, and in my opinion it will continue to show diminishing returns in the future.

This is because most of these pieces of analysis do not recognize or consider the source type. I personally believe this is because the Search Quality Evaluator Guidelines – which gave us E-A-T, YMYL and Page Quality, and which are becoming a major part of our daily work – don’t actually use the term “source type,” but instead talk about evaluating and analyzing sources for things like misinformation or bias.

As we begin to study resource types, we must also study and understand the concepts of quality thresholds and topical authority.

I’ve talked about quality thresholds and how they relate to indexing in previous articles I’ve written for Search Engine Land.

But when we combine this with SERP analysis, we can understand how and why Google chooses the sites and elements that make up the results page, and also get an idea of how feasible it is to rank for certain queries.

A better understanding of ranking feasibility helps predict potential traffic opportunities and then evaluate leads/revenue based on how your site converts.
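
As a rough sketch of that forecasting step – the position-based CTR curve, conversion rate and lead value below are placeholder assumptions, not benchmarks – the arithmetic looks something like this:

# Rough opportunity model: estimated traffic from an assumed position-based CTR,
# then leads/revenue from your own conversion data. All figures are illustrative.
monthly_search_volume = 1900
assumed_ctr_by_position = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}
expected_position = 3

conversion_rate = 0.02   # site-specific: sessions -> leads
value_per_lead = 250     # site-specific: average value of a lead

est_traffic = monthly_search_volume * assumed_ctr_by_position[expected_position]
est_leads = est_traffic * conversion_rate
est_revenue = est_leads * value_per_lead

print(f"~{est_traffic:.0f} visits, ~{est_leads:.1f} leads, ~${est_revenue:.0f}/month")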


Defining resource types

Defining resource types means going deeper than just classifying a site as informational or commercial, as Google goes deeper as well.

This is because Google compares websites based on their type and not just the content they produce. This is especially prevalent on results pages for mixed-intent queries that return both commercial and informational results.

If we look at the query [rotating proxy manager] we can see this in practice in the first 5 results:

#  | Website     | Purpose classification | Source type classification
1  | Oxylabs     | Commercial             | Commercial, lead generation
2  | Zyte        | Commercial             | Commercial, lead generation
3  | Geekflare   | Informational          | Informational, commercially neutral
4  | Npmjs       | Informational          | Open source, non-commercial
5  | Scraper API | Informational          | Informational, commercial bias
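
One way to keep this classification alongside your normal SERP analysis is simply to record both labels per result, as in this minimal Python sketch (the labels mirror the table above; the structure itself is just a suggestion):

from collections import Counter
from dataclasses import dataclass

@dataclass
class SerpResult:
    position: int
    website: str
    purpose: str       # e.g. "commercial" or "informational"
    source_type: str   # e.g. "commercial, lead generation"

serp = [
    SerpResult(1, "Oxylabs", "commercial", "commercial, lead generation"),
    SerpResult(2, "Zyte", "commercial", "commercial, lead generation"),
    SerpResult(3, "Geekflare", "informational", "informational, commercially neutral"),
]

# A quick view of the source type mix on the page.
print(Counter(r.source_type for r in serp).most_common())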

Quality thresholds are determined by the identity of the website and the general type of the domain as a whole (not just a subdomain or blog subfolder), and then by context.

When Google retrieves candidates to compile a search results page, it first compares websites within their resource type group. In the SERP above, this means Oxylabs and Zyte are compared against each other first, before the other resource types selected for inclusion, with the highest-ranking candidates chosen based on weights and attributions.

The SERP is then assembled from these rankings and overlaid with user data, SERP features and so on.
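
A toy model of that process, as I read it (this is my interpretation of the description above, not a claim about how Google’s systems actually work; the sites and scores are invented), might look like this:

from collections import defaultdict

# Candidates are first compared within their resource type group, then the page
# is assembled across groups. "score" stands in for whatever weights and
# attributions apply; all values here are invented.
candidates = [
    {"site": "vendor-a.com", "source_type": "commercial, lead generation", "score": 0.91},
    {"site": "vendor-b.com", "source_type": "commercial, lead generation", "score": 0.88},
    {"site": "review-hub.com", "source_type": "informational, commercially neutral", "score": 0.84},
    {"site": "oss-registry.org", "source_type": "open source, non-commercial", "score": 0.80},
]

groups = defaultdict(list)
for c in candidates:
    groups[c["source_type"]].append(c)

# Rank within each resource type group first...
for group in groups.values():
    group.sort(key=lambda c: c["score"], reverse=True)

# ...then interleave the groups to build a diverse page, before user data,
# SERP features, etc. are layered on top.
page = []
for i in range(max(len(g) for g in groups.values())):
    for group in groups.values():
        if i < len(group):
            page.append(group[i])

print([c["site"] for c in page])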

By understanding the resource types that Google chooses to display (and their rankings) for certain queries, we can work out whether a search term is an appropriate target given our own resource type.

This is also common in the SERPs for [x alternative] queries, where a company might want to rank for competitor + “alternative” compound terms.

For example, if we look at the top 10 blue link results for [pardot alternatives]:

#  | Website         | Purpose classification | Source type classification
1  | G2              | Informational          | Informational, non-commercial bias
2  | TrustRadius     | Informational          | Informational, non-commercial bias
3  | The Ascent      | Informational          | Informational, non-commercial bias
4  | Capterra (blog) | Informational          | Informational, non-commercial bias
5  | Jotform         | Informational          | Informational, non-commercial bias
6  | FinancesOnline  | Informational          | Informational, non-commercial bias
7  | Gartner         | Informational          | Informational, non-commercial bias
8  | GetApp          | Informational          | Informational, non-commercial bias
9  | Demodia         | Informational          | Informational, non-commercial bias
10 | SoftwareSuggest | Informational          | Informational, non-commercial bias

So if you’re Freshmarketer or ActiveCampaign, while the company may see this as a relevant search term to target that matches your product positioning, you’re unlikely to gain traction as a commercial source type.

This doesn’t mean that alternatives and comparison pages on your website aren’t important content for educating and converting users.

Different types of resources have different quality thresholds

Another important distinction is that different resource types have different thresholds.

This is why third-party tools that produce keyword difficulty scores from metrics such as backlinks across all page 1 results are problematic: not all resource types are held to the same threshold on most SERPs.

This means that to determine the “benchmark” your site and content will need to meet to be in a position to drive traffic, you need to compare it against other sites of the same resource type, and then against the type of content they rank with.
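
A hedged sketch of what that benchmarking could look like in practice – filtering page 1 down to results of your own resource type before averaging a link metric (the sites and numbers are invented):

from statistics import mean

page_one = [
    {"site": "review-hub.com", "source_type": "informational", "referring_domains": 5200},
    {"site": "review-radar.com", "source_type": "informational", "referring_domains": 3900},
    {"site": "vendor-a.com", "source_type": "commercial", "referring_domains": 310},
    {"site": "vendor-b.com", "source_type": "commercial", "referring_domains": 270},
]

def source_type_benchmark(results, my_source_type):
    # Average the link metric over results that share your resource type only.
    peers = [r for r in results if r["source_type"] == my_source_type]
    return mean(r["referring_domains"] for r in peers) if peers else None

print(source_type_benchmark(page_one, "commercial"))      # 290 - your actual bar
print(source_type_benchmark(page_one, "informational"))   # 4550 - a very different bar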

Topic clusters and frequency

Establishing good topic clusters and easy-to-follow information trees allows search engines to more easily understand your site’s resource type and “depth of usefulness.”

This is also why, in my opinion, you will often see sites like G2 and Capterra ranking for a wide range of queries in the same space (e.g., technology).

A search engine can return these sites to the SERP with a higher level of confidence, regardless of the specific software or technology being queried, because these sites have:

  • A high posting frequency.
  • A logical information tree.
  • A strong reputation for useful and accurate information.

In addition to semantics and good keyword research, it is important to understand the basics of natural language inference (NLI), especially the Stanford Natural Language Inference (SNLI) corpus, when developing websites within topic clusters.

The basics of this are that a hypothesis is tested against a piece of text, and the conclusion is that the text entails the hypothesis, contradicts it, or is neutral toward it.

For a search engine: if a website contradicts the hypothesis, it has low value and shouldn’t be retrieved or ranked. However, if the website entails or is neutral toward the query, it can be considered for ranking to provide an answer and a potentially unbiased perspective (depending on the query).
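
As a minimal, illustrative example of entailment classification – the model choice here is an assumption on my part; any SNLI/MNLI-trained cross-encoder from the transformers library behaves similarly and returns the same three labels:

# pip install transformers torch
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "roberta-large-mnli"  # assumed model; any NLI cross-encoder works similarly
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "This proxy manager rotates IP addresses automatically on every request."
hypothesis = "The tool can be used as a rotating proxy manager."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The label will be ENTAILMENT, NEUTRAL or CONTRADICTION.
print(model.config.id2label[logits.argmax(dim=-1).item()])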

We do this to some extent through content hubs/content clusters, which have become more popular over the last five years as ways to showcase E-A-T and create high-authority linkable assets for non-branded search terms.

We achieve this through good information architecture on the site and concise content in our topic groups and internal linking, making it easier for search engines to digest at scale.

Understanding the types of sources to inform your SEO strategy

By better understanding the types of resources that rank most prominently for targeted search queries, we can craft better strategies and predictions that deliver immediate results.

This is a better option than chasing terms we simply aren’t a good fit for, where we likely won’t see a return in traffic compared to the investment in assets.


About the author

Dan Taylor is the Head of Technical SEO at SALT.agency, a UK-based technical SEO specialist firm and winner of a 2022 Queens Award. Dan works with and oversees a team that works with companies ranging from technology and SaaS companies to enterprise e-commerce.
