(and before you ask, yes “rotating blades” comes from “become a fan”)
I’m forming the ideas here first and then we can go and implement it. Feedback is much appreciated.
Table one looks like this:
CREATE TABLE fan_of (
  user_id BIGINT NOT NULL,
  item_id BIGINT NOT NULL,
  PRIMARY KEY (user_id, item_id),
  INDEX (item_id)
);
That is, two columns, both 64-bit integers. The primary key covers both columns (a user cannot be a fan of something more than once) and can be used to look up all things the user is a fan of. There is also an index over item_id so that you can find out which users are fans of an item.
The second table looks like this:
CREATE TABLE fan_count (
  item_id BIGINT PRIMARY KEY,
  fans BIGINT NOT NULL
);
Both tables start empty.
You will have 1000, 2000, 4000 and 8000 concurrent clients attempting to run the queries. These concurrent clients must behave as if they could be coming from a web server. The spirit of the benchmark is to have 8000 threads (or processes) talking to the database server independently of each other.
The following set of queries will be run a total of 23,000,000 (twenty-three million) times. The my_user_id below is an incrementing ID per connection, allocated by partitioning the 23,000,000 runs evenly between all the concurrent clients (e.g. for 1000 connections, each connection gets 23,000 sequential IDs).
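The even partitioning could be sketched like this (a minimal illustration; the function name is made up and not part of the benchmark spec):

```python
TOTAL_RUNS = 23_000_000

def id_range(conn_index, num_clients):
    # Each client gets an equal, contiguous slice of the 23,000,000 runs.
    per_client = TOTAL_RUNS // num_clients
    start = conn_index * per_client
    return range(start, start + per_client)

# With 1000 clients, client 0 gets IDs 0..22999 and client 1 starts at 23000.
```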
You must run the following queries.
- How many fans are there of item 12345678 (e.g. SELECT fans FROM fan_count WHERE item_id=12345678)
- Is my_user_id already a fan of item 12345678 (e.g. SELECT user_id FROM fan_of WHERE user_id=my_user_id AND item_id=12345678)
- The next two queries MUST be in the same transaction:
  - my_user_id becomes a fan of item 12345678 (e.g. INSERT INTO fan_of (user_id, item_id) VALUES (my_user_id, 12345678))
  - increment the count of fans (e.g. UPDATE fan_count SET fans=fans+1 WHERE item_id=12345678)
For the first query you are allowed to use a caching layer (such as memcached) but the expiry time must be 5 seconds or less.
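A caching layer for that first query could follow the usual cache-aside pattern; here is a minimal in-process sketch (a real deployment would use something like memcached, and fetch_from_db is a made-up placeholder for the actual SELECT):

```python
import time

CACHE_TTL = 5.0  # the benchmark caps cache expiry at 5 seconds

_cache = {}  # item_id -> (value, expiry timestamp)

def cached_fan_count(item_id, fetch_from_db):
    # Cache-aside lookup: serve from cache while fresh, else hit the DB.
    now = time.monotonic()
    entry = _cache.get(item_id)
    if entry is not None and entry[1] > now:
        return entry[0]
    value = fetch_from_db(item_id)
    _cache[item_id] = (value, now + CACHE_TTL)
    return value
```

Note that with a 5-second expiry the count you read back can lag the true value considerably under this kind of write load.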
You do not have to use SQL. You must however obey the transaction boundary above. The insert and the update must be part of the same transaction.
Results should include: min, avg, max response time for each query as well as the total time to execute the benchmark.
Data must be durable to a machine being switched off and must still be available with that machine switched off. If committing to local disk, you must also replicate to another machine. If running asynchronous replication, the clock does not stop until all changes have been applied on the slave, and you must record the replication delay throughout the entire test.
In the event of timeout or deadlock in doing the insert and update part, you must go back to the first query (how many fans) and retry. Having to retry does not count towards the 23,000,000 runs.
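The retry rule could be sketched like this in client-side pseudocode (the db object and its methods are made-up placeholders, not a real API):

```python
class DeadlockError(Exception):
    """Raised by the (hypothetical) db layer on deadlock or lock wait timeout."""

def run_one_iteration(db, my_user_id, item_id=12345678):
    # On deadlock or timeout, go back to the first query and start over;
    # the failed attempt does not count toward the 23,000,000 runs.
    while True:
        try:
            db.fan_count(item_id)               # query 1: how many fans
            db.is_fan(my_user_id, item_id)      # query 2: already a fan?
            with db.transaction():              # queries 3+4: one transaction
                db.insert_fan(my_user_id, item_id)
                db.increment_count(item_id)
            return
        except DeadlockError:
            continue
```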
At the end of the benchmark, the query SELECT fans FROM fan_count WHERE item_id=12345678 should return 23,000,000.
Yes, this is a very evil benchmark. It seems to be somewhat indicative of the kind of peak load that can be experienced by a bunch of Web 2.0 sites that have “like” or “become a fan” style buttons. I fully expect the following:
- Pretty much all systems will nosedive in performance after 1000 concurrent clients
- There will be a lot of transaction rollbacks due to deadlock detection or lock wait timeouts.
- Many existing systems and setups will not complete it in a reasonable time.
- A solution using Scale Stack to be an early winner (backed by MySQL or Drizzle)
- Somebody influenced by Domas turning InnoDB deadlock detection off very quickly.
- Somebody to call this benchmark “stupid” (that person will have a system that fails dismally at this benchmark)
- Somebody who actually has any knowledge of modern large scale web apps to suggest improvements
- Nobody even attempting to benchmark the Oracle database
- Somebody to submit results with MySQL without waiting until the replication stream has finished applying.
- Some NoSQL systems to suck considerably more than their SQL counterparts.