According to a post on Google’s Webmaster Central blog, Google is now discovering websites by automatically scanning RSS and Atom feeds. This new process will help Google identify web pages more quickly and will allow users to find new content in search results as soon as it goes live. While not exactly “real-time,” using feeds to identify updates to websites is arguably faster than the traditional crawling techniques Google has used in the past. And Google may get even faster in the near future – the post also notes that the company may soon explore mechanisms like the real-time protocol PubSubHubbub to identify updated items.
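The blog post doesn’t say exactly how Google parses feeds, but the basic idea of feed-based discovery is simple enough to sketch: pull the `<link>` and `<pubDate>` out of each item in an RSS feed and treat those URLs as fresh pages to crawl. The feed below is a made-up example, and the function name is my own – this is just an illustration of the mechanism, not Google’s implementation.

```python
# Minimal sketch of feed-based URL discovery: parse an RSS 2.0 feed
# and collect each item's link and publication date. The sample feed
# and the function name are hypothetical.
import xml.etree.ElementTree as ET

def discover_urls(rss_xml: str):
    """Return (link, pubDate) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(rss_xml)
    results = []
    for item in root.iter("item"):
        link = item.findtext("link")
        pub_date = item.findtext("pubDate")
        if link:  # skip malformed items with no URL to crawl
            results.append((link, pub_date))
    return results

sample = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><link>http://example.com/post-1</link>
        <pubDate>Mon, 02 Nov 2009 10:00:00 GMT</pubDate></item>
  <item><link>http://example.com/post-2</link>
        <pubDate>Mon, 02 Nov 2009 12:00:00 GMT</pubDate></item>
</channel></rss>"""

print(discover_urls(sample))
```

The advantage over blind recrawling is obvious from the sketch: the feed hands the crawler a timestamped list of exactly what changed, instead of forcing it to re-fetch every page to find out. PubSubHubbub would go a step further by pushing those updates to subscribers the moment they’re published.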
This is pretty nifty. Of course, the obvious question is: how do you rank these new entries within whatever keyword-clustered group the page belongs to? Being the newest or freshest entry in a space by no means determines relevancy or quality.
In fact, one could argue that real time could be a real pain in the butt, because it could simply end up meaning that whoever is first is best. That’s not a good result. It’s kinda like saying that the kid in school who runs the 100-yard dash the fastest gets the prize for Best Science Project. There is no correlation between speed and quality. It happens on occasion, but it is rarer than one might think. Real time may be better suited for news than for research. It’s too early to tell, but these are questions that will be cropping up regularly moving forward.
The bottom line is that Google is going to be using all of its considerable resources to try to harness the new push toward real-time results. Once everything is gathered, however, the fun really begins.
If I could be so bold as to make a suggestion, I would like to see both a “real time” search option and a “traditional” one. I’m not sure there is a clean way to present real-time results alongside those that are actually the best results without making the SERPs a complete usability train wreck.
What are your thoughts on this one?