An emphatic statement by Google’s VP of Search, Hyung-Jin Kim, at SMX Next in November 2022 has seemingly passed without substantial discussion within the SEO community.
While E-A-T and its derivative, E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness), are frequent topics of discussion among SEOs, the prevailing belief is that they are not directly integrated into Google’s ranking system. Statements from Google spokespeople reinforce this notion. Instead, they are viewed as quality concepts conveyed to human quality raters, who use the Search Quality Rater Guidelines to evaluate the effectiveness of Google’s ranking systems.
When Kim’s statement was shared in various forums and chat groups, it drew little engagement. That prompted me to reflect on how E-A-T could be applied to “every single query and every single result” if it is not formally part of Google’s ranking system.
One interpretation is that E-E-A-T serves as a post-delivery quality assurance process, potentially executed through steps like these (a rough sketch of the idea follows the list):
- An AI process continuously evaluates evidence of expertise, authoritativeness, trustworthiness, and possibly experience on indexed pages and their citations or backlinks.
- These assessments occur during site crawls, assigning each factor a dynamic numeric score.
- Each element of a search engine results page (SERP), such as snippets and URLs, is assigned corresponding scores, which are expected to surpass those of lower-ranked results.
- While separate ranking systems determine SERP results, E-E-A-T is a post-ranking quality check.
- Adverse trends trigger detailed analysis and potential adjustments to ranking systems or E-E-A-T factors, ensuring ongoing optimization without impeding SERP delivery.
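To make the hypothesis concrete, here is a minimal sketch of such a post-ranking check. It is purely illustrative: the factor weights, field names such as `eeat`, and the flagging rule are my assumptions, not anything Google has documented.

```python
from dataclasses import dataclass

# Hypothetical weights and field names; nothing here is documented Google
# behavior, just one way the speculated post-ranking check could work.
EEAT_WEIGHTS = {"experience": 0.2, "expertise": 0.3,
                "authoritativeness": 0.2, "trustworthiness": 0.3}

@dataclass
class SerpEntry:
    url: str
    position: int   # 1 = top result
    eeat: dict      # per-factor scores gathered at crawl time, each 0..1

def composite_eeat(entry: SerpEntry) -> float:
    """Collapse the per-factor scores into a single weighted number."""
    return sum(w * entry.eeat.get(f, 0.0) for f, w in EEAT_WEIGHTS.items())

def audit_serp(results: list[SerpEntry]) -> list[str]:
    """Post-ranking QA: flag results whose composite E-E-A-T score falls
    below that of a lower-ranked result. Ranking itself is untouched;
    the flags would only feed later analysis of the ranking systems."""
    ranked = sorted(results, key=lambda r: r.position)
    scores = [composite_eeat(r) for r in ranked]
    flags = []
    for i, entry in enumerate(ranked[:-1]):
        if scores[i] < max(scores[i + 1:]):
            flags.append(f"{entry.url} (position {entry.position}) scores below a lower-ranked result")
    return flags
```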
Even so, the topic drew little substantial discourse within the SEO community, suggesting varying levels of engagement or skepticism. Some responses provided valuable insights; others leaned on humor, sarcasm, or outright dismissal of Google’s statements.
Ultimately, this discussion highlights the complex dynamics within the SEO community and the diverse perspectives surrounding Google’s algorithmic processes.
Navigating the Path of SEO Curiosity
Despite my recent attempts to spark conversation on the topic, I’m astounded by the lack of follow-up discussion after such an uncommon statement from a Googler.
Where has the legendary SEO curiosity gone, the kind that used to drive speculation about the “200 ranking factors”? Authors would poll the SEO community to uncover and rank those factors, with each of us eager to contribute our insights.
Today, much energy and curiosity are channeled into building flashy tools with Python, often incorporating AI. Yet, some of this effort might be reinventing the wheel.
Discussions about SEO tools remain lively, with marketers seeking the best keyword research tool. But do these tools truly revolutionize our strategies, as they claim? Can AI writing tools genuinely benefit all facets of SEO?
The landscape is cluttered with self-proclaimed experts, enticing us to join their mailing lists and unlock their “secrets.” Yet, amidst this sea of information, misinformation often masquerades as fact.
We’ve seemingly lost the early explorers who meticulously dissected every search engine patent and sought correlations with SERP observations. I find myself longing for pioneers like Ted Ulle and Bill Slawski, who dissected algorithm updates and strategized ways to evade Google’s algorithms.
Cultivating Curiosity in SEO
SEO curiosity may not be extinct after all. For example, while many people in the E-E-A-T group assert that these factors aren’t integrated into Google’s ranking systems, it’s healthy to maintain a level of skepticism toward statements from search engine spokespeople.
Direct your curiosity toward investigation. You won’t always share your findings, but when they are significant, it’s worth doing so, as with Cyrus Shepard’s examination of 50 websites, which uncovered correlations between website features and Google algorithm updates.
Shepard’s findings highlighted “experience” as a feature prevalent on websites deemed “winners” by Google’s updates. This raises questions about the mantra that E-E-A-T is separate from ranking algorithms. While it may not be directly integrated, signals related to experience could positively influence ranking algorithms. Keeping an Experience algorithm separate from the core ranking algorithms, but letting its output inform them, makes logical sense, particularly for pages featuring product or place reviews.
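As a thought experiment, assuming nothing about Google’s internals, a separate experience scorer that merely contributes one signal to an otherwise independent ranking score might look like this:

```python
def experience_score(page: dict) -> float:
    """Standalone 'Experience' module: average first-hand experience cues (0..1 each)."""
    cues = ("original_photos", "author_used_product", "firsthand_anecdotes")
    return sum(page.get(cue, 0.0) for cue in cues) / len(cues)

def final_score(core_relevance: float, page: dict, is_review_page: bool) -> float:
    """The core ranking score stays independent; experience is just one extra
    signal, applied only where it plausibly matters (product or place reviews)."""
    bonus = 0.1 * experience_score(page) if is_review_page else 0.0
    return core_relevance + bonus
```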
Watching curious SEO professionals like Daniel K. Cheung build matrices of E-E-A-T attributes for page audits is enlightening. Cheung’s approach involves assigning numeric values to attributes, recognizing that some attributes may impact a page more significantly than others.
For instance, a video featuring the author using the reviewed product could carry more weight than a static image of the same scenario. While Google’s methods may be more nuanced, such curiosity sparks ideas for testing and exploration.
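Here is a rough sketch of that matrix idea, with my own hypothetical attributes and weights rather than Cheung’s actual worksheet:

```python
# Hypothetical attributes and weights, not Cheung's actual matrix: each
# attribute is weighted by how strongly it signals first-hand experience.
ATTRIBUTE_WEIGHTS = {
    "video_of_author_using_product": 3,
    "original_photos_of_product": 2,
    "hands_on_test_data": 3,
    "author_bio_with_credentials": 2,
    "stock_images_only": 0,
    "manufacturer_spec_rehash_only": -1,
}

def audit_page(attributes_present: set[str]) -> int:
    """Score a page by summing the weights of the attributes it exhibits."""
    return sum(ATTRIBUTE_WEIGHTS.get(a, 0) for a in attributes_present)

# A review with an author video and original photos outscores one built on
# stock imagery and rehashed specs (5 vs. -1 with these toy weights).
print(audit_page({"video_of_author_using_product", "original_photos_of_product"}))
print(audit_page({"stock_images_only", "manufacturer_spec_rehash_only"}))
```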
Embrace Skepticism in SEO
Some may question the validity of Shepard’s study, given its sample size of 50, and that’s a fair concern. A large SEO tool provider could use its crawlers to analyze a million websites and validate or refute the findings.
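At any scale, the shape of such a check is simple. The sketch below assumes a hypothetical sites.csv with one binary column per feature and a column for each site’s post-update visibility change; a real study would add significance testing and controls for confounders.

```python
import pandas as pd

# Hypothetical input: one row per site, a 0/1 column per feature
# (e.g., has_author_video) and the site's post-update visibility change.
df = pd.read_csv("sites.csv")

feature_cols = [c for c in df.columns if c.startswith("has_")]
for feature in feature_cols:
    # Simple correlation between a binary feature and the visibility change.
    r = df[feature].corr(df["visibility_change"])
    print(f"{feature}: r = {r:.2f}")
```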
However, rather than waiting for a tool company to undertake such a study, take matters into your own hands. Select 100 or more sites and conduct your own tests, repeating the process until you’re confident in your findings. And if navigating these complexities still proves challenging, consider exploring our monthly SEO packages. Let our experts lend their expertise to your endeavors.