The rotating blades database benchmark

(and before you ask, yes “rotating blades” comes from “become a fan”)

I’m forming the ideas here first and then we can go and implement it. Feedback is much appreciated.

Two tables.

Table one looks like this:

user_id BIGINT,
item_id BIGINT,
PRIMARY KEY (user_id, item_id),
INDEX (item_id)

That is, two columns, both 64-bit integers. The primary key covers both columns (a user cannot be a fan of something more than once) and can be used to look up all the things a user is a fan of. There is also an index over item_id so that you can find out which users are fans of an item.

The second table looks like this:

CREATE TABLE fan_count (
item_id BIGINT,
fans BIGINT,
PRIMARY KEY (item_id)
)

That is, one row per item holding the current number of fans; the column names follow from the queries below.

Both tables start empty.

You will have 1000, 2000, 4000 and 8000 concurrent clients attempting to run the queries. These concurrent clients must behave as if they could be coming from a web server. The spirit of the benchmark is to have 8000 threads (or processes) talking to the database server independently of each other.

The following set of queries will be run a total of 23,000,000 (twenty-three million) times. The my_user_id below is an incrementing ID per connection, allocated by partitioning 23,000,000 evenly between all the concurrent clients (e.g. for 1000 connections, each connection gets 23,000 sequential ids).
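The allocation scheme above can be sketched in a few lines (Python; `id_range` and the variable names are mine, not part of the benchmark spec):

```python
TOTAL_RUNS = 23_000_000

def id_range(client_index, num_clients, total=TOTAL_RUNS):
    """Return the half-open range of sequential user ids for one client.

    Assumes the total divides evenly among clients, as the benchmark
    specifies (e.g. 1000 clients * 23,000 ids each).
    """
    per_client = total // num_clients
    start = client_index * per_client
    return range(start, start + per_client)

# e.g. with 1000 clients, client 0 gets ids 0..22999,
# client 1 gets ids 23000..45999, and so on.
```

No locking or coordination is needed between clients: each one derives its own id range from its index.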

You must run the following queries.

  • How many fans are there of item 12345678 (e.g. SELECT fans FROM fan_count WHERE item_id=12345678)
  • Is my_user_id already a fan of item 12345678 (e.g. SELECT user_id FROM fan_of WHERE user_id=my_user_id AND item_id=12345678)
  • The next two queries MUST be in the same transaction:
    • my_user_id becomes a fan of item 12345678 (e.g. INSERT INTO fan_of (user_id, item_id) VALUES (my_user_id, 12345678))
    • increment count of fans (e.g. UPDATE fan_count SET fans=fans+1 WHERE item_id=12345678)

For the first query you are allowed to use a caching layer (such as memcached) but the expiry time must be 5 seconds or less.
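For illustration only, here is a toy read-through cache with the required hard expiry (plain Python standing in for memcached; `TTLCache` and `fetch` are hypothetical names, not part of the spec):

```python
import time

class TTLCache:
    """Toy read-through cache with a hard expiry, standing in for memcached.

    The benchmark allows caching the fan-count query only if entries
    expire within 5 seconds.
    """
    def __init__(self, ttl=5.0):
        self.ttl = ttl
        self.store = {}  # key -> (value, stored_at)

    def get(self, key, fetch):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]       # fresh enough: serve from cache
        value = fetch(key)        # expired or missing: hit the database
        self.store[key] = (value, now)
        return value
```

With memcached itself you would get the same effect by setting an expiration time of 5 or less when storing the count.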

You do not have to use SQL. You must however obey the transaction boundary above. The insert and the update must be part of the same transaction.
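As a concrete, non-normative illustration of that transaction boundary, here is a single-client sketch using Python's sqlite3 in place of a real server (the schema follows the post: fan_of(user_id, item_id) and fan_count(item_id, fans)):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fan_of (
        user_id INTEGER,
        item_id INTEGER,
        PRIMARY KEY (user_id, item_id)
    );
    CREATE INDEX idx_item ON fan_of (item_id);
    CREATE TABLE fan_count (item_id INTEGER PRIMARY KEY, fans INTEGER);
    INSERT INTO fan_count VALUES (12345678, 0);
""")

def become_fan(conn, user_id, item_id=12345678):
    # The INSERT and the UPDATE must commit (or roll back) together.
    with conn:  # one transaction: commits on success, rolls back on error
        conn.execute("INSERT INTO fan_of VALUES (?, ?)", (user_id, item_id))
        conn.execute("UPDATE fan_count SET fans = fans + 1 WHERE item_id = ?",
                     (item_id,))

become_fan(conn, 1)
```

If the INSERT fails (say, a duplicate primary key), the UPDATE never happens and the count stays consistent with the fan_of table.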

Results should include: min, avg, max response time for each query as well as the total time to execute the benchmark.
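A minimal sketch of the required per-query statistics (Python; `summarize` is a hypothetical helper, and how you collect the raw timings is up to your harness):

```python
def summarize(times_ms):
    """Reduce a list of per-query response times (ms) to min/avg/max."""
    return {
        "min": min(times_ms),
        "avg": sum(times_ms) / len(times_ms),
        "max": max(times_ms),
    }
```

You would keep one such list per query type, plus a wall-clock measurement of the whole run.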

Data must be durable to a machine being switched off and must still be available with that machine switched off. If committing to local disk, you must also replicate to another machine. If running asynchronous replication, the clock does not stop until all changes have been applied on the slave, and you must also record the replication delay throughout the entire test.

In the event of timeout or deadlock in doing the insert and update part, you must go back to the first query (how many fans) and retry. Having to retry does not count towards the 23,000,000 runs.
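The retry rule can be sketched like this (Python; `TransientDBError` and `attempt_txn` are stand-ins for whatever error types and transaction call your client actually uses):

```python
class TransientDBError(Exception):
    """Stand-in for a deadlock or lock-wait-timeout error."""

def run_once(my_user_id, attempt_txn):
    """One benchmark iteration.

    On deadlock/timeout in the insert+update transaction, restart from
    the first query; only the final successful pass counts toward the
    23,000,000 runs. Returns the number of attempts taken.
    """
    attempts = 0
    while True:
        attempts += 1
        # query 1: read the fan count (cacheable, <= 5 s expiry)
        # query 2: check whether my_user_id is already a fan
        try:
            attempt_txn(my_user_id)   # queries 3+4 in one transaction
            return attempts
        except TransientDBError:
            continue                  # go back to the first query
```

The important property is that the two read queries are re-run on every retry, not just the failed transaction.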

At the end of the benchmark, the query SELECT fans FROM fan_count WHERE item_id=12345678 should return 23,000,000.

Yes, this is a very evil benchmark. It seems fairly indicative of the kind of peak load that can be experienced by a bunch of Web 2.0 sites that have “like” or “become a fan” style buttons. I fully expect the following:

  • Pretty much all systems will nosedive in performance after 1000 concurrent clients
  • Transaction rollbacks due to deadlock detection or lock wait timeouts will be frequent.
  • Many existing systems and setups will not complete it in reasonable time.
  • A solution using Scale Stack to be an early winner (backed by MySQL or Drizzle)
  • Somebody influenced by Domas turning InnoDB deadlock detection off very quickly.
  • Somebody to call this benchmark “stupid” (that person will have a system that fails dismally at this benchmark)
  • Somebody who actually has any knowledge of modern large scale web apps to suggest improvements
  • Nobody even attempting to benchmark the Oracle database
  • Somebody submitting results with MySQL without waiting until the replication stream has finished applying.
  • Some NoSQL systems to suck considerably more than their SQL counterparts.

14 thoughts on “The rotating blades database benchmark”

  1. It’s been a while, but this benchmark seems like something MySQL Cluster (or rather NDB – no SQL nodes required) should be able to pull off quite nicely.

  2. I’m happy to test this on Oracle 11g (standalone and RAC) if you provide a benchmarking tool I can just run. :)

  3. As soon as I submit an awesome result for InnoDB, the Cluster team will publish something 10X to 100X better. It isn’t like they have anything else to do right now, as they are all in San Francisco. I won’t fall for that trick.

  4. I’m pretty sure there could be some rather awesome numbers out of InnoDB without too much trickery…

    I suspect the NDB numbers will be very different between NDBAPI and SQL.

  5. I love this benchmark, I want to be Facebook friends with this benchmark, I want to retweet this benchmark.

    But I must say that the following statement

    “Some NoSQL systems to suck considerably more than their SQL counterparts.”

    says very little. It came at the end of the post, so perhaps the editor failed to notice it.

    Wouldn’t NDBAPI be a NoSQL solution, not that some other NoSQL solutions will suck?

    Can I add another prediction to the list?
    Somebody who actually has similar requirements will tell you how they cheat, and the cheat will be something like estimating fan counts, or cheating on granularity, such as updating fan counts every 5 minutes.

    I mean, once you have many fans, does it matter exactly how many there are at exactly that moment? Perhaps some quantum principle of fandom will be developed, such as: “you can know exactly how many fans you had 5 minutes ago, or approximately how many fans you have now, but you cannot know both exactly at the same time.”

  6. At some point we can create a facebook group for it… bonus points if we get 23 million fans :)

    Part of the point of the benchmark is to not completely throw away consistency.

    As for the NoSQL systems… sure, NDB is an AlsoSQL system. I’m more thinking of the newer projects.

  7. It should be the rotating knives benchmark.

    “Excuse me, did you say knives?”

    “Rotating knives, yes.”

    “You are not proposing to slaughter our tenants, are you?”

  8. You know, this could probably be implemented without a lot of fuss using a Random Query Generator grammar.

    A new validator might have to be hacked to measure everything that you would like, but there are some validators that measure query time – such as comparing two different versions of the server to see if performance suffers.

  9. How is table fan_count populated?

    You write that:

    1. Both tables start empty.

    2. increment count of fans (e.g. UPDATE fan_count SET fans=fans+1 WHERE item_id=12345678)

    Apparently UPDATE won’t work, because updating an empty table will keep it empty. If we change UPDATE to insert-or-update, then the table will have only one row, with item_id=12345678. Isn’t that a bit unrealistic?

  10. Another odd thing is this:

    > The my_user_id below is an incrementing ID per connection allocated by partitioning 23,000,000 evenly between all the concurrent clients (e.g. for 1000 connections each connection gets 23,000 sequential ids)

    Could you elaborate on why one would need *sequential* ids? My understanding was that that’s only needed for accounting or banking apps?

  11. So I need to edit it to say that fan_count starts with a single row.

    Yes, it’s a bit unrealistic to only have one row in such a table. However, I don’t think it will make too much of a difference – this should be a nasty enough benchmark already.

    I don’t think that my_user_id needs to be sequential at all – I was just thinking of an easy algorithm to produce the ids without requiring any form of locking or complex math in the client.
