It appears Facebook is built on a house of cards.
Cards that rely on more than 4,000 MySQL “shards.” Or should that be “sharts”?
According to GigaOm:
…Facebook has split its MySQL database into 4,000 shards in order to handle the site’s massive data volume, and is running 9,000 instances of memcached in order to keep up with the number of transactions the database must serve.
And, citing database guru Michael Stonebraker…
…Facebook is operating a huge, complex MySQL implementation equivalent to “a fate worse than death,” and the only way out is “bite the bullet and rewrite everything.”
That may sound like an exaggeration, but having lived through a smaller version of this nightmare myself, I think he may have a point.
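To see why Stonebraker calls it a fate worse than death, here's a minimal sketch of application-level sharding plus a memcached-style read-through cache, roughly the pattern the GigaOm piece describes. To be clear, this is my own illustration: the hashing scheme, the in-memory stand-ins, and every name in it are assumptions, not Facebook's actual code.

```python
import hashlib

NUM_SHARDS = 4000  # the shard count GigaOm reports; everything else is illustrative

def shard_for(user_id: int) -> int:
    """Map a user ID to a shard with a stable hash.

    The trap: change NUM_SHARDS and nearly every key maps to a
    different shard, which is why resharding a live site hurts.
    """
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Stand-ins for real infrastructure: in production these would be
# per-shard MySQL connections and a memcached client, not dicts.
shards = {n: {} for n in range(NUM_SHARDS)}
cache = {}

def get_user(user_id: int):
    """Read-through cache: check the cache first, fall back to the
    correct shard, then warm the cache for the next reader."""
    key = f"user:{user_id}"
    if key in cache:
        return cache[key]
    user = shards[shard_for(user_id)].get(user_id)  # i.e. SELECT ... WHERE id = %s on one shard
    cache[key] = user
    return user

def find_users_named(name: str):
    """The painful part: any query not keyed by user_id fans out
    across all 4,000 shards, because no single shard can answer it."""
    return [u for shard in shards.values()
            for u in shard.values()
            if u.get("name") == name]
```

Even the toy version shows the failure modes: any query that isn't keyed by user ID becomes a 4,000-way fan-out, changing the shard count moves nearly every key, and every cache write is a fresh chance for memcached and MySQL to disagree.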
Trackur started off on MySQL, and it worked absolutely wonderfully until we reached about 20,000 users and millions of entries. At that point, things started to groan, break, or conjure mysterious bugs out of thin air. In the end, we had to bite the bullet ourselves and move to a NoSQL platform better suited to the vast amount of data we have to index, search, and serve.
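For contrast, here's the general shape of that kind of lookup on a document store. I'm using MongoDB's Python driver purely as a stand-in, and the database and collection names are made up for illustration:

```python
from pymongo import MongoClient  # any document store with secondary indexes makes the same point

client = MongoClient("mongodb://localhost:27017")
entries = client.trackur_demo.entries  # hypothetical database/collection names

# One secondary index replaces the cross-shard fan-out: the store
# routes and parallelizes the lookup internally.
entries.create_index("keyword")

for entry in entries.find({"keyword": "acme"}).limit(50):
    print(entry["title"])
```

The point isn't that any one product is magic; it's that the data store, not your application code, takes on the job of distributing reads and writes.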
Those headaches were real. And we were at a much, much, MUCH smaller scale than Facebook. I can only imagine that the MySQL team at Facebook looks something like this on any given day: