Supplemental Result

From Wikipedia, the free encyclopedia

A supplemental result is a URL residing in Google's supplemental index, a secondary database containing pages of less importance, as measured primarily by Google's PageRank algorithm.

The importance of a page is measured by the number and quality of links pointing to it.[1] The degree to which Google trusts a site's inbound links also influences a page's importance: if Google detects paid links, for example, it devalues or nullifies them so that no PageRank passes to the target page.[2]
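
The original PageRank formulation captures this idea: each page passes a share of its own score to the pages it links to, split evenly among its outbound links and scaled by a damping factor. The following sketch is a minimal, illustrative power-iteration implementation over a made-up four-page link graph; the graph, the damping factor, and the iteration count are assumptions chosen for demonstration, not Google's production values.

```python
# Minimal PageRank power iteration over a toy link graph.
# Illustrative only: the graph and parameters are invented for this example.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start from a uniform distribution
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                      # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share     # each outlink passes an equal share
        rank = new_rank
    return rank

if __name__ == "__main__":
    # "home" is linked from every other page, so it ends up with the highest score.
    toy_graph = {
        "home":     ["about", "products"],
        "about":    ["home"],
        "products": ["home", "about"],
        "orphan":   ["home"],
    }
    for page, score in sorted(pagerank(toy_graph).items(), key=lambda kv: -kv[1]):
        print(f"{page:8s} {score:.3f}")
```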

A supplemental page can still appear in search results, but only when the main index does not return enough pages for the query.
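
A minimal sketch of that fallback behaviour, assuming a hypothetical two-tier lookup in which supplemental documents are consulted only when the main index cannot fill the result set (the function, data layout, and the MIN_MAIN_RESULTS threshold are invented for illustration and do not describe Google's actual implementation):

```python
# Hypothetical two-tier lookup: supplemental pages are consulted only when
# the main index cannot fill the result set. Names and thresholds are invented.

MIN_MAIN_RESULTS = 10  # assumed page-one size, for illustration only

def search(query, main_index, supplemental_index):
    results = [doc for doc in main_index if query in doc["text"]]
    if len(results) < MIN_MAIN_RESULTS:
        # Not enough main-index matches: fall back to the supplemental tier.
        results += [doc for doc in supplemental_index if query in doc["text"]]
    return results

main_index = [{"url": "https://example.com/widgets", "text": "blue widgets for sale"}]
supplemental_index = [{"url": "https://example.com/widgets-old", "text": "archived blue widgets page"}]

print([d["url"] for d in search("widgets", main_index, supplemental_index)])
```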

Google formerly placed a "Supplemental result" label at the bottom of a search result to indicate that it came from the supplemental index; in July 2007, however, Google discontinued the label, and it is no longer possible to tell whether a result comes from the supplemental index or the main one.[1][2]

Causes of supplemental results

Duplicate content

Some people believe that the supplemental index is Google's way of filtering out duplicate content.[3] TITLE elements, META description tags, and navigational text that are similar or identical across pages can lead to duplicate or near-duplicate content.[4]
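
One simple way to flag this kind of near-duplication is to compare TITLE and META description strings with a similarity measure such as Jaccard similarity over word shingles. The sketch below only illustrates that general technique; the shingle length and the 0.8 threshold are arbitrary assumptions, and it does not describe Google's actual duplicate-detection pipeline.

```python
# Illustrative near-duplicate check on page metadata using Jaccard similarity
# over word 3-shingles. The 0.8 threshold is an arbitrary assumption.

def shingles(text, k=3):
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def looks_duplicate(page_a, page_b, threshold=0.8):
    """page_a and page_b are dicts with 'title' and 'meta_description' keys."""
    return (jaccard(page_a["title"], page_b["title"]) >= threshold
            and jaccard(page_a["meta_description"], page_b["meta_description"]) >= threshold)
```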

Low PageRank

However, duplicate content is a side effect of supplemental results rather than their cause; low PageRank is the primary cause.[5] Links pointing to multiple versions of the same page dilute PageRank across multiple URLs, increasing the chance that none of them reaches a minimum PageRank threshold. If a page's PageRank is too low, Google drops it from the main index, and the page appears in search results only as a supplemental result.
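
As a concrete illustration with invented numbers: if the links that would otherwise point at one canonical URL are split evenly across three duplicate URLs of the same page, each variant receives roughly a third of the link value and is far more likely to fall below an indexing threshold.

```python
# Made-up numbers illustrating how splitting links across duplicate URLs
# dilutes the rank signal each individual URL receives.

inbound_links = 90
link_value = 0.002          # assumed average value passed per inbound link
threshold = 0.10            # assumed minimum score to stay in the main index

single_url_score = inbound_links * link_value        # 0.18 -> stays in the main index
split_score = (inbound_links / 3) * link_value       # 0.06 per variant -> supplemental

print(single_url_score >= threshold)   # True
print(split_score >= threshold)        # False: each duplicate falls below the threshold
```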

Lack of trust

Manipulative linking practices can also reduce the flow of PageRank into a domain, thereby creating more supplemental pages.[6] Manipulative links include excessive reciprocal linking, link injections, and paid links.[7][8] Questionable outbound links can also lead to link devaluation.[9]

High page count

A large site with a high page count is also generally more vulnerable to the supplemental index than a small site, because inbound PageRank divided among several hundred thousand pages yields a far lower per-page value than the same PageRank divided among only a few dozen pages.[10]
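
The same arithmetic applies at the site level: a fixed amount of inbound PageRank spread over a very large number of pages leaves each page with a much smaller share. A back-of-the-envelope sketch with invented figures:

```python
# Invented numbers: the same inbound link equity spread over a large site
# versus a small one. Values are illustrative, not real PageRank units.

site_link_equity = 50.0      # assumed total PageRank flowing into each site

small_site_pages = 40
large_site_pages = 300_000

print(site_link_equity / small_site_pages)   # 1.25 per page
print(site_link_equity / large_site_pages)   # ~0.00017 per page -> far more likely to go supplemental
```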

Page freshness

Page freshness is also a factor in whether a page is placed in the supplemental index.[11]

Notes

  1. ^ "Google Technology".
  2. ^ "Search Engine Spam?". Matt Cutts: those links "... have not been trusted in terms of linkage for months and months."
  3. ^ "How to Fix Supplemental due to Duplicate Content".
  4. ^ "Deftly Dealing with Duplicate Content". Adam Lasnik explains how to tackle duplicate content issues.
  5. ^ "Buffy in Duplicate". Former Googler Vanessa Fox on why duplicate content is not the cause of supplemental results.
  6. ^ "Indexing Timeline". Matt Cutts, a Googler, explains how manipulative links can create supplemental results.
  7. ^ "Condemned To Google Hell". Forbes names a supplemental results victim.
  8. ^ "Being Condemned to Google Hell" and "Matt's Rebuttal". Matt Cutts names the site owner as the culprit.
  9. ^ "Indexing Timeline". Matt Cutts, a Googler, explains how a real estate site that linked out to mortgage sites, credit card sites, and exercise equipment sites went from 10K pages indexed down to 80 after the Big Daddy rollout.
  10. ^ "SMX Video - Matt Cutts Explains How to Get Out of Google's Supplemental Index". Matt Cutts explains the relationship between PageRank, site size, and supplemental results during a session at Seattle SMX 2007.
  11. ^ "Getting into Google". According to Jill Whalen, Dave Crow, Google's director of crawl systems, stated during SEMNE in July 2007 that page freshness is a factor in whether or not a page is put into the supplemental index.
