Wednesday, August 22, 2007

Trusting Google search results

There's an interesting new study in the Journal of Computer-Mediated Communication: "In Google We Trust: Users' Decisions on Rank, Position, and Relevance"

An eye tracking experiment revealed that college student users have substantial trust in Google's ability to rank results by their true relevance to the query. When the participants selected a link to follow from Google's result pages, their decisions were strongly biased towards links higher in position even if the abstracts themselves were less relevant. While the participants reacted to artificially reduced retrieval quality by greater scrutiny, they failed to achieve the same success rate.

This demonstrated trust in Google has implications for the search engine's tremendous potential influence on culture, society, and user traffic on the Web.

The press release from the College of Charleston announcing the study, "Users mistakenly trust higher-positioned results in Google searches," includes some quotes from the study's lead author, Bing Pan:

"Despite the popularity of search engines, most users are not aware of how they work and know little about the implications of their algorithms," says study author Bing Pan. "When websites rank highly in a search engine, they might not be authoritative, unbiased or trustworthy."

According to Pan, this has important long-term implications for search engine results, as this type of use, in turn, affects future rankings. "The way college students conduct online searches promotes a 'rich-get-richer' phenomenon, where popular sites get more hits regardless of relevance," says Pan. "This further cements the site's high rank, and makes it more difficult for lesser known sites to gain an audience."
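
The feedback loop Pan describes is easy to see in a toy simulation. The sketch below is my own illustration, not anything from the study: pages are ranked solely by accumulated clicks, users click positions according to an assumed top-heavy bias, and whichever page takes an early lead keeps the top slot regardless of its hidden relevance.

```python
import random

random.seed(42)

# Ten pages with a hidden "true relevance" and a click counter.
pages = [{"name": f"page{i}", "relevance": random.random(), "clicks": 0}
         for i in range(10)]

# Rough click-through rates by result position (assumed, top-heavy).
POSITION_BIAS = [0.42, 0.17, 0.10, 0.08, 0.06, 0.05, 0.04, 0.03, 0.03, 0.02]

for _ in range(10_000):  # simulated searches
    # Rank purely by accumulated popularity, not by relevance.
    ranked = sorted(pages, key=lambda p: p["clicks"], reverse=True)
    clicked = random.choices(ranked, weights=POSITION_BIAS, k=1)[0]
    clicked["clicks"] += 1

for p in sorted(pages, key=lambda p: p["clicks"], reverse=True):
    print(f"{p['name']}: clicks={p['clicks']:5d}  relevance={p['relevance']:.2f}")
```

Running it, the click counts settle into a steep hierarchy that is uncorrelated with the relevance values, which is exactly the lock-in Pan worries about.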

It's an interesting study for a number of reasons, and I encourage you to read it. It uses eye tracking to confirm what has been said about how people use search engines. In this instance the researchers included some deception to see if it changed behavior, which it appears to have done, and the results confirm their hypothesis that people trust Google. This is important stuff to know for anyone who prepares content for online delivery.

I am wondering, however, if Pan is jumping to the wrong conclusion in thinking this is such a bad thing. He seriously overrepresents the importance of "sites" in what a Google search returns. I think he may be waxing nostalgic for a time before Google (BG). Maybe the subjects were correct in trusting Google? What are the things that contribute to Google discoverability:

  • Titles that contain keywords that real people will use when trying to find said content in a search

  • An opening sentence and first paragraph that actually discuss what the article title says it's about

  • Links to other seminal works that readers would find useful -- with meaningful link anchor text

  • Other people finding the piece of content important and linking to it

  • Content that is unique -- not plagiarized, copied, or a minor derivative work

If these are the major items that make an individual piece of content discoverable by the Google algorithm, where are the negatives in trusting it? These signals all seem positive, and none of them were available BG. If anything, they disintermediate the old problem of a popular site preventing an important piece of content from being discovered. Good content has been freed from the tyranny of the site, and it can now stand on its own merits.
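
To make that argument concrete, here is a toy scoring function that combines the signals in the list above. Every field name and weight in it is invented for illustration -- Google's actual algorithm is unpublished -- but it shows the point: each input rewards qualities of the individual document rather than the popularity of its host site alone.

```python
def discoverability_score(doc):
    """Toy combination of the discoverability signals listed above.
    Weights and field names are hypothetical; this is not Google's
    actual (unpublished) ranking algorithm."""
    score = 0.0
    score += 3.0 * doc["title_query_overlap"]     # keyword-bearing title, 0..1
    score += 2.0 * doc["lead_topic_match"]        # opening paragraph on topic, 0..1
    score += 1.0 * doc["quality_outbound_links"]  # links to seminal works, 0..1
    score += 4.0 * doc["inbound_link_authority"]  # others linking in, 0..1
    score -= 5.0 * doc["duplication_ratio"]       # penalize copied text, 0..1
    return score

# A hypothetical article: strong title and lead, original text,
# but only modest inbound links -- it can still rank on its merits.
article = {
    "title_query_overlap": 0.9,
    "lead_topic_match": 0.8,
    "quality_outbound_links": 0.5,
    "inbound_link_authority": 0.3,
    "duplication_ratio": 0.0,
}
print(discoverability_score(article))  # 6.0
```

Notice that four of the five inputs are under the author's control; only inbound links depend on others, and even those reward the piece of content itself rather than the site hosting it.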

The study also seemed to overemphasize the importance of a click-through from a search result page. For the next generation of Web users, a click-through in this instance is a nearly effortless act. What is the price of a click? A fraction of a second? That the subjects didn't read the abstracts carefully is not such a surprise. Actually, this is a positive as well; they aren't really abstracts anyway. That the content wasn't always as relevant seems a trivial point. It tells me that the subjects were perhaps demonstrating more advanced information retrieval skills. What does looking at a page tell a person that the Google abstract does not? Lots! People have become very fast at evaluating the quality of a source, which accounts for the rapid use of the back button and the short amount of time spent on most pieces of content. Why might they click through to do their evaluation? Perhaps because the site contains all sorts of artifacts (metadata) that allow us to evaluate it more thoroughly and quickly. I would suggest that Pan was observing an emergent form of information literacy that should not be so quickly dismissed as wrong or unsophisticated.

Let me conclude with a seemingly minor point, but one I see too often when researchers discuss their findings. Nowhere in the study did I see any assessment of the subjects' actual knowledge (or lack thereof) concerning how the Google algorithm works. If it was a part of the study, it certainly isn't discussed in the research article. It is, however, a prominent part of the university's press release. This may seem insignificant, but I don't think we should be making these kinds of off-topic assumptions when discussing our research findings. Just the facts -- or perhaps highlight an area needing further investigation.
