
Addressing Google E-A-T Misconceptions

5 min read

Discover why Google doesn't use author bios and other presumed E-A-T elements for search ranking, and what criteria it may rely on instead. For a long time, I avoided writing about E-A-T.

Drawing on my experience as a former Google quality rater (nearly a decade ago), I quickly recognized what E-A-T really is: human-friendly terminology describing the algorithm's ultimate goal, which lets raters with no access to Google's internal data evaluate algorithms.

In light of recent clarifications affirming that E-A-T is not a ranking signal, factor, or system, I want to dig into a few key points.

 

Let’s begin with the question: What exactly is E-E-A-T?

 

As you might already know, E-E-A-T represents Experience, Expertise, Authoritativeness, and Trustworthiness. The addition of Experience is the most recent update. Initially, the concept was introduced as simply E-A-T.

There has been some debate about adding "Timeliness" to make it E-E-A-T-T. However, we could explore far more engaging acronyms if that were the case.

 

Where did E-E-A-T originate?

 

E-E-A-T comes from Google's Search Quality Rater Guidelines (QRG). It's essential to understand that the QRG isn't a list of ranking factors, systems, or signals. Instead, it serves as a reference manual for human raters performing various tasks.

These tasks may involve comparing sets of search results to determine superiority or evaluating pages for relevance to specific queries.

While rater data may inform the evaluation of proposed algorithm alterations or the development of test sets for internal assessments within Google, raters themselves do not directly influence actual ranking algorithms or penalties.

 

Why the focus on E-E-A-T now?

 

Recent alterations in the wording of the SEO starter guide, along with tweets from Google Search Liaison Danny Sullivan, have sparked inquiries on the subject. This prompted me to initiate a discussion thread, and subsequently, numerous individuals requested a blog post, leading us to this point.

It all began with a tweet from Sullivan addressing the elements of E-E-A-T that SEOs frequently discuss, debunking the notion that any of them serve as ranking factors.

For some time, SEOs have speculated about tactics purported to contribute to E-E-A-T, such as:

 

  • Including author bios and profiles on pages.
  • Ensuring experts review content claims.
  • Providing relevant contact information on the page.
  • Establishing links to or obtaining links from authoritative sources.

 

However, the reality is that these tactics don’t affect rankings because there is no such thing as an E-E-A-T score.

 

Why doesn’t Google utilize these elements?

 

The internet is vast and varied, with countless coding methods and potential errors, making extracting specific types of information from web pages challenging.

Search engines like Google and Bing introduced structured data, schema, and XML sitemaps to streamline their operations.

Recall the days when Google employed rel=author markup. Many SEOs exploited that feature extensively.

If you’ve ever attempted to build your own web crawler (which you should!), you understand how hard it is to extract data from a page, such as dates. With myriad formats, coding techniques, and potential locations, entire libraries exist solely for date estimation.
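To make the date problem concrete, here is a minimal sketch of the brute-force approach: try known formats until one parses. The format list and the `guess_date` helper are invented for illustration; production extractors handle far more formats, locales, and page locations.

```python
from datetime import datetime

# Hypothetical helper: try a handful of common date formats until one parses.
# Note that "01/04/2023" alone is ambiguous (Jan 4 vs. Apr 1) -- one reason
# date extraction at web scale is genuinely hard.
CANDIDATE_FORMATS = [
    "%Y-%m-%d",    # 2023-04-01
    "%B %d, %Y",   # April 1, 2023
    "%d %b %Y",    # 1 Apr 2023
    "%d/%m/%Y",    # 01/04/2023 (assumes day-first; a guess, not a certainty)
]

def guess_date(text: str):
    """Return the first datetime any known format yields, else None."""
    for fmt in CANDIDATE_FORMATS:
        try:
            return datetime.strptime(text.strip(), fmt)
        except ValueError:
            continue
    return None

print(guess_date("April 1, 2023"))  # parses
print(guess_date("next Tuesday"))   # None: free-form phrasing defeats format lists
```

The failure case is the point: any fixed list of formats misses countless real-world variants, which is why dedicated date-parsing libraries exist.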

The same complexity applies to authorship and contact information. Crawling and scraping data at web scale is even more complicated, so implementing the elements SEOs speculate Google uses, in a robust and scalable way, poses real challenges.

While Google could potentially develop solutions, they encounter a significant obstacle: the SEO dilemma. SEOs are inclined to manipulate such elements.

Once SEOs began promoting the idea that author profiles are essential for ranking (a reminder: they aren’t), black hat practitioners began fabricating fake authors and profiles for their AI-generated content. They asserted that content was reviewed by experts, among other claims.

Should these practices warrant a ranking boost? How can one differentiate between genuine and fabricated claims? Humans can discern this through research and critical thinking, but can bots? Should they?

If expertise and authority could be established simply by claims made on a webpage, the concepts themselves would be meaningless.

 

Beyond Words: Advancing Search Engine Signals

 

Search engines possess an array of signals that extend beyond taking your word about your E-E-A-T.

Before delving further, let’s clarify some nuances of terminology. While terms like token, factor, signal, and system often mean distinct things, they’re frequently used interchangeably in Google documentation.

For clarity:

 

  • Token: The smallest unit of data from a query or document.
  • Signal: Any attribute of a document, link, or query.
  • Factor: An element with weight used in ranking, derived from signals or systems.
  • System: Processes factors and signals to influence rankings.

 

E-E-A-T, under my definition, isn’t a signal, factor, or system.

So, if search engines rely on QRG for something other than the mentioned elements, what could they use?

The signals likely rewarding authoritative sites could resemble a version of PageRank (i.e., link authority) and aggregate click data from search logs, feeding into machine learning algorithms.
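For illustration, the link-authority component can be sketched as textbook PageRank power iteration on a toy graph. This is the classic published algorithm, not Google's actual production system, and the graph is invented:

```python
# Minimal PageRank sketch (power iteration) on a toy link graph.
DAMPING = 0.85  # standard damping factor from the original PageRank paper

def pagerank(links, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}       # start with uniform rank
    for _ in range(iters):
        new = {p: (1 - DAMPING) / n for p in pages}
        for p, outs in links.items():
            if not outs:                      # dangling page: spread rank evenly
                for q in pages:
                    new[q] += DAMPING * rank[p] / n
            else:                             # split rank across outlinks
                for q in outs:
                    new[q] += DAMPING * rank[p] / len(outs)
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "c": it receives links from both a and b
```

The page with the most (and best-sourced) inbound links accumulates the most authority, which is the intuition behind "link authority" as a signal.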

Aggregate click data entails analyzing vast volumes of clicks, not merely “for this query, users clicked this site.” Instead, it involves findings such as “across 100 million clicks, the most clicked-on results all had higher PageRank, included the keyword in the title, and shared 700 other factors…”

While domain-level metrics may play a role, they’re beyond the article’s scope.

Search engines can rely on user actions rather than taking your word on your authority. If your site is authoritative, it will naturally attract more links. However, links alone aren’t conclusive, as they can be manipulated. This is where aggregate click data becomes invaluable.

If your site is genuinely authoritative, users will click on it. This analysis occurs at the macro level, not based on individual clicks for specific queries.

Consider the SERP holistically. If one variant of the ranking algorithm draws more clicks to its higher-ranked results, it is probably doing a better job of rewarding trustworthy sites.

Machine learning algorithms can swiftly discern standard features among top-clicked sites. Search engines leverage this data to evaluate algorithms or refine rankings.

This analysis hinges on identifying shared characteristics among top-clicked sites, encompassing various content and link metrics.
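As a toy illustration of that idea — the feature names and click counts below are entirely invented — you can compare a feature's click-weighted average against its plain average across logged results. Features over-represented among heavily clicked results stand out:

```python
# Toy sketch of aggregate click analysis: which features are
# over-represented among the most-clicked results?
results = [
    # (clicks, {feature: value}) -- fabricated data for illustration only
    (900, {"keyword_in_title": 1, "link_authority": 0.8}),
    (700, {"keyword_in_title": 1, "link_authority": 0.6}),
    (100, {"keyword_in_title": 0, "link_authority": 0.2}),
    (50,  {"keyword_in_title": 0, "link_authority": 0.1}),
]

def click_weighted_average(feature):
    """Average feature value, weighting each result by its click count."""
    total_clicks = sum(c for c, _ in results)
    return sum(c * f[feature] for c, f in results) / total_clicks

def plain_average(feature):
    """Unweighted average feature value across all results."""
    return sum(f[feature] for _, f in results) / len(results)

for feat in ("keyword_in_title", "link_authority"):
    # A click-weighted average well above the plain average suggests the
    # feature correlates with what users actually click.
    print(feat, round(click_weighted_average(feat), 3), round(plain_average(feat), 3))
```

At scale, a machine learning pipeline would do this across hundreds of features and millions of clicks, but the underlying question is the same: what do the results users choose have in common?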

 

So, where does the QRG definition come into play?

 

Consider the role of raters. They:

 

  • Lack access to link data or click data.
  • Do not use machine learning outputs.
  • Cannot assess hundreds of signals for each site.
  • Do not directly impact any site’s ranking.
  • Do not train the algorithm.

 

Instead, they furnish consistent data for Google engineers to evaluate algorithmic adjustments.

To accomplish this, raters require human language delineating attributes aligned with expertise, authoritativeness, and trustworthiness.

Ideally, the algorithmic signals correspond with human assessments. If discrepancies arise, Google will continue refining its approach.

The reassuring aspect is that traditional E-E-A-T signals (such as author bios) are not fed into the machine learning algorithms. Consequently, there’s no need to incorporate or fabricate them for ranking purposes.

 

Elevating User Experience: The Role of E-E-A-T in Content Creation

 

Does my content need author bios, contact info, expert reviews, or other E-E-A-T-related elements?

If ranking is your sole concern, then no, you don’t necessarily need them.

However, most of us prioritize user experience, conversions, sales, and other metrics – and users appreciate these elements.

For many searches, users prefer content authored by genuine individuals. However, this doesn’t imply that every piece of content requires human author bios. Specific topics may not necessitate such personalization.

Similarly, individuals seeking medical information prefer content vetted by professionals. Yet, this doesn’t mandate doctor reviews for articles on unrelated subjects like tire recycling or treehouse construction.

Most practices SEOs recommend for E-E-A-T also enhance user experience – benefiting your audience. Therefore, incorporating these elements makes sense if they align with your users’ needs.

Enhanced user experiences increase the likelihood of user engagement, such as linking to your content, sharing it, or clicking on your results – actions that may positively impact your rankings.

Before investing heavily in unnecessary experts, ensure these elements resonate with your users. If you’re still undecided, consider exploring our monthly SEO packages, where experts can offer guidance tailored to your needs.

Shilpi Mathur
navyya.shilpi@gmail.com