Posted February 9, 2009, 11:03 am · 8 comments


By Peter Young

One of the best things about Google is the simplicity of its search results pages: clean, generally uncluttered and logical. Over the last couple of years these pages have seen some significant changes as Google brings new technologies to the table, with first Blended Search and more recently SearchWiki revolutionising the way we use the search pages.

It is therefore interesting to see some of Google’s own research into the impact of these changes on user behaviour, something I have previously only seen in studies such as those carried out by the likes of Enquiro.

In a post on the official Google blog, Google provided a glimpse of some of the research behind the pages you and I see every day. The testing, which appears to use the Bunnyfoot eye-tracking tool, plots where the eye falls on the search pages and thus how users potentially engage with the pages themselves. It is interesting to compare this research against previous work in this area. One of the first eye-tracking studies highlighted the well-known F-shaped eye scan across the initial results:

The Enquiro Golden Triangle


The post from Google reinforces the traditional ten-contextual-links behaviour, with a high density of eye fall on the initial three or so results. It should be noted, however, that the density varies throughout the results page itself, particularly when compared to results incorporating blended search, most notably those with visual assets such as PR or video.

One of the more interesting aspects of the post, however, was the reference to blended search:

We ran a series of eye-tracking studies where we compared how users scan the search results pages with and without thumbnail images. Our studies showed that the thumbnails did not strongly affect the order of scanning the results and seemed to make it easier for the participants to find the result they wanted.

The thumbnail image seemed to make results with thumbnails easy to notice when the users wanted them . . . and the thumbnails also seemed to make it easy for people to skip over the results with thumbnails when those results were not relevant to their search.

Much of this confirmed earlier studies (such as the joint Enquiro/Google study from 2008), which discussed behavioural traits such as chunking and fencing where blended search results were introduced. At the release of that report blended search was still a new concept, so it would be interesting to compare any subsequent research against the earlier studies to identify whether familiarity has indeed altered users’ behaviour on blended search pages.

Google did not say whether these results incorporated any aspect of personalisation (certainly the SearchWiki functionality is not prevalent), though previous research has shown that personalisation significantly affects not only the amount of time spent on the search pages but also the number of fixations and clicks on the page.

On a personal note, I feel that time spent is particularly important, as the search pages themselves effectively become the ‘new landing pages’. I am sure we will see a number of additional studies to establish how users interact with the search results as these pages continue to evolve.

Peter Young is the SEO Manager at MVi and also runs the Holistic Search Marketing blog offering tips and advice on SEO, PPC and other aspects of Online Marketing.

  • Yahoo and MSN show much stronger ad efficiency than Google. Is this because of a lack of useful results, or better visualization of ads?

    William’s last blog post..Nokia 5800 error “Expired certificate” solution

  • William, it is most probably because Google isn’t releasing all the info… it’s rather like comparing Yahoo links to Google links, where Yahoo gives the more accurate data, whilst Google only lists a fraction of a website’s links. What do you think?

  • Google is making assumptions about its results, which could render its findings useless. It needs to consider what people are actually doing on web pages (as we all do) with more care.

    See my response to the Google study at:

  • Having mastered SEO and SERPS and the like since 1997, I’ve come to the conclusion that the best way for attaining high visibility is simply by becoming an authority site (and that’s not too difficult to achieve either).

    No matter how whitehat your site might be, it takes only one tweak of the Google Dance to smoosh you from the results. Word-of-mouth traffic is far more valuable and not dependent upon any algorithm.

    Data points, Barbara

    Barbara Ling, Virtual Coach’s last blog post..SURVIVING the Circle of Love (and smashing your fears to teeny tiny bits)

  • Is it just me or does the heat map comparison show even more concentration in the top few results for Google than it does for Yahoo or MSN?

    It also seems as though the user is logged in, which means there could be some personalisation going on in the results? And also, I don’t see any AdWords above the natural results.

    Any thoughts?

    Alastair’s last blog post..Where were you when the page was empty?

  • I just love these “controlled” studies. When users willingly volunteer for focus groups or other usability research, DON’T take the results as fact and extrapolate to how a mass audience will behave. Make that bet and you will be wrong! Think about it: are you ready to bet there are no, or only small, differences between how users behave in a nice conference room, probably with nice computers, friendly people and a fake task to accomplish, and a real user who is most likely at home on a well-used PC working on a real task they want to accomplish? [And] of course don’t forget to add distractions like kids, friends’ IMs, etc.

    Don’t get me wrong, talking to real users and watching their behavior while early prototyping is a good thing to do, and I encourage all product managers, engineers and execs to do this. But there is nothing that can replace observing how the real world behaves. I don’t know what decision a PM could make from this article about SERP design (as someone pointed out earlier, Google are not disclosing everything – nor should you expect them to), but if you are working on some new layouts or want to test some new designs, do an A/B test with your real site and real visitors. Randomize the versions to the end users and run it for a couple of days (or until you get enough data). This type of testing takes into account all the different intents, scenarios and personal environments, and will result in a much better “real world” evaluation of your product.

    Sorry, I didn’t mean to rant on your article, Peter; this has just been a pet peeve of mine for years, and I’ve seen so many inexperienced product managers take this info as truth and implement full-scale designs off one or two focus groups. As I tell them – put it out in the wild and you might be surprised what you learn.

    twitter: retrevo_robb
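
    The randomised A/B approach described in the comment above can be sketched as a deterministic bucketing function. This is a minimal illustration only; the function name, experiment label and variant names are hypothetical, not from any particular testing framework:

    ```python
    import hashlib

    def ab_bucket(user_id: str, experiment: str, variants=("A", "B")) -> str:
        """Deterministically assign a user to a test variant.

        Hashing the user id together with the experiment name keeps the
        assignment stable across visits, so each visitor always sees the
        same layout for the duration of the test.
        """
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        index = int(digest, 16) % len(variants)
        return variants[index]

    # Example: route a visitor to one of two hypothetical SERP layouts
    layout = ab_bucket("visitor-42", "serp-thumbnails")
    ```

    Because the split is keyed on the experiment name, running a second test reshuffles users independently of the first, avoiding correlated cohorts between experiments.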

  • It’s hard to tell whether the data is correct. People generally optimize their websites for Google search, Yahoo! and MSN come in after that.

    Darren Tan’s last blog post..Music Review: ‘A Hundred Million Suns’ by Snow Patrol

  • Pingback: Mobile SERPs - Mobile Search Engine Results Page » In Search of the Perfect Page: Google Offers Insight into SERP Testing