Posted September 22, 2006 4:19 pm, with 5 comments


Good grief, Reuters is reporting that a group of publishers is planning to spend $583,700 to find a way to prevent Google from indexing their content!

“Since search engine operators rely on robotic ‘spiders’ to manage their automated processes, publishers’ Web sites need to start speaking a language which the operators can teach their robots to understand,” according to a document seen by Reuters that outlines the publishers’ plans.

What the..?

Google already offers a way for publishers to have their content removed. Better yet, modify your robots.txt file to disallow the search engines, and you can donate all that money to a charity that helps paranoid publishers! 😉
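For anyone keeping score, here is the entirety of what half a million dollars would buy. A robots.txt file at the site root with these two lines tells every compliant crawler, Googlebot included, to stay out:

```
# Block all compliant crawlers from the entire site
User-agent: *
Disallow: /
```

And if the publishers only have it in for Google, they can name the bot directly with `User-agent: Googlebot` instead of the wildcard. Five seconds of work, tops.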

  • Wow, someone talked them into that deal?
    If people want to throw away money, I’ll take some.

  • I could’ve hooked them up for about $20 since it only takes a whopping 5 seconds to do.

    That’s ridiculous!

  • I guess the World Newspaper Association can’t even ask their own web guys, since they haven’t bothered with a robots.txt file either.
    Yet, amazingly, our local paper has one. The New York Times has one.
    Hell, even the Clarion Ledger in Mississippi has one! (though “# be nice.” isn’t a command I am familiar with 🙂 )

  • Pingback: I’m Going to Shoot Myself in the Foot | inter:digital strategies

  • $500,000?! Wow, I honestly don’t see what you could put in a robots.txt file worth that much.