
Possible criteria for an algorithmic EAT assessment

Posted: Thu Jan 30, 2025 8:56 am
by Reddi2
In summary, I would conclude from the findings that the following components have a significant influence on how sources such as authors and publishers are evaluated algorithmically according to EAT:

How long the author / publisher has been demonstrably producing content in a subject area
Degree of fame of the author / publisher
Ratings by users of the content published by the author / publisher
The number of articles published by the author / publisher on a topic
How often the author / publisher publishes content on the topic
Co-occurrences of the author / publisher with terms from the topic environment
Accuracy of the published information compared with the "common opinion" or scientific findings (Knowledge-Based Trust, KBT)
Frequent link proximity of the publisher's / author's content to seed sites
User signals such as the CTR of the publisher's / author's documents
Mentions of the author / publisher in best-of lists
Prizes and awards won by the author / publisher
Mood / sentiment regarding the company / publisher / author
These signals must be recognizable to the crawler and must lend themselves to algorithmic evaluation. As soon as an entity in the form of a company, publisher, author or product can be evaluated using these signals, the documents associated with that entity can be evaluated according to EAT. One way to picture such a scoring is sketched below.
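To make this more concrete, here is a minimal sketch in Python of how such an entity score could be computed. The signal names, the normalization to a 0..1 range, the hand-set weights and the simple weighted sum are all my own assumptions for illustration; Google's actual method is unknown:

    from dataclasses import dataclass

    @dataclass
    class EntitySignals:
        # Hypothetical signals for one author/publisher entity, each normalized to 0..1
        years_active: float          # how long the entity has demonstrably covered the topic
        fame: float                  # degree of fame of the author / publisher
        user_ratings: float          # user ratings of the entity's published content
        article_count: float         # number of articles published on the topic
        publish_frequency: float     # how often new content on the topic appears
        topic_cooccurrence: float    # co-occurrences with terms from the topic environment
        kbt_accuracy: float          # agreement with the "common opinion" / science (KBT)
        seed_link_proximity: float   # link proximity of the content to seed sites
        ctr: float                   # user signals such as CTR of the entity's documents
        best_of_mentions: float      # mentions in best-of lists
        awards: float                # prizes and awards won
        sentiment: float             # sentiment regarding the company / publisher / author

    # Assumed weights (sum to 1.0); in reality these would be learned, not hand-set.
    WEIGHTS = {
        "years_active": 0.10, "fame": 0.10, "user_ratings": 0.08,
        "article_count": 0.07, "publish_frequency": 0.05,
        "topic_cooccurrence": 0.12, "kbt_accuracy": 0.15,
        "seed_link_proximity": 0.12, "ctr": 0.08,
        "best_of_mentions": 0.05, "awards": 0.04, "sentiment": 0.04,
    }

    def eat_score(signals: EntitySignals) -> float:
        # Weighted sum of the normalized signals -> topical EAT score in 0..1
        return sum(w * getattr(signals, name) for name, w in WEIGHTS.items())

    # Example: an established author with strong accuracy and link signals
    author = EntitySignals(0.9, 0.6, 0.8, 0.7, 0.5, 0.8, 0.9, 0.7, 0.6, 0.3, 0.2, 0.7)
    print(f"EAT score: {eat_score(author):.2f}")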

Author Box and EAT
Since the author box is currently being discussed a lot, I would like to briefly address it here. An author box by itself will not have a major impact on rankings. It can help to link content to an entity, e.g. via structured data (see the sketch below). However, if the author is not yet recognized as an entity by Google, or does not have thematic credibility or authority that Google can recognize algorithmically, an author box will have no influence on the ranking.
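For illustration only: an author box could be backed by structured data along these lines, so that the document is explicitly tied to an author entity. The sketch uses real schema.org vocabulary (Article, Person, author, sameAs), but the name and URLs are placeholders:

    import json

    # Hypothetical JSON-LD tying an article to an author entity (placeholder values)
    article_markup = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": "Possible criteria for an algorithmic EAT assessment",
        "author": {
            "@type": "Person",
            "name": "Jane Example",                  # placeholder author name
            "sameAs": [
                "https://example.com/authors/jane",  # placeholder profile URL
                "https://www.wikidata.org/wiki/Q0",  # placeholder entity ID for disambiguation
            ],
        },
    }
    print(json.dumps(article_markup, indent=2))

Again, such markup only helps with the linking; it does not create credibility or authority by itself.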

Non-validated entities alongside the Knowledge Graph
I think Google has more entities on its radar than just those that are officially recorded in the Knowledge Graph. Since the Knowledge Vault or Natural Language Processing can analyze entities in search queries and in content of any kind, there will likely be a second, non-validated database alongside the Knowledge Graph. This database could contain all entities that were recognized as entities and assigned to a domain and an entity type, but are not socially relevant enough to be displayed in a Knowledge Panel.

Something like this would make sense for performance reasons, as such a repository would make it possible not to start from scratch on every crawl. I think it stores all entities whose information cannot (yet) be validated for accuracy; a minimal sketch of such a two-tier store follows below.
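Here is a minimal sketch in Python of what such a two-tier store could look like. The record layout, the validated flag and the promotion step are purely my assumptions:

    from dataclasses import dataclass, field

    @dataclass
    class EntityRecord:
        name: str
        entity_type: str                  # e.g. "Person", "Organization", "Product"
        domain: str                       # thematic domain the entity is assigned to
        validated: bool = False           # True once the entity's facts can be verified
        signals: dict = field(default_factory=dict)  # accumulating EAT signals

    # Two tiers: the validated graph and the non-validated candidate store
    knowledge_graph: dict[str, EntityRecord] = {}
    candidate_store: dict[str, EntityRecord] = {}

    def observe_entity(name: str, entity_type: str, domain: str) -> EntityRecord:
        # Reuse a known record instead of starting from scratch on every crawl
        record = knowledge_graph.get(name) or candidate_store.get(name)
        if record is None:
            record = EntityRecord(name, entity_type, domain)
            candidate_store[name] = record
        return record

    def promote_if_validated(name: str) -> None:
        # Move a candidate into the validated graph once its facts check out
        record = candidate_store.get(name)
        if record is not None and record.validated:
            knowledge_graph[name] = candidate_store.pop(name)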

This would give Google the opportunity to apply the signals explained above not only to entities recorded in the Knowledge Graph, but also to these non-validated entities, in order to perform EAT evaluations.