Comments:"Damien Katz: Dynamo Sure Works Hard"
URL:http://damienkatz.net/2013/05/dynamo_sure_works_hard.html
Dynamo Sure Works Hard
We tend to think of working hard as a good thing. We value a strong work ethic and determination in the face of adversity. But if you are working harder than you should to get the same results, then it's not a virtue, it's a waste of time and energy. If it's your business systems that are working harder than they should, it's a waste of your IT budget.
Dynamo-based systems work too hard. SimpleDB/DynamoDB, Riak, Cassandra and Voldemort are all based, at least in part, on the design first described publicly in the Amazon Dynamo Paper. It has some very interesting concepts, but ultimately fails to provide a good balance of reliability, performance and cost. It's pretty neat in that each transaction allows you to dial in the levels of redundancy and consistency to trade off performance and efficiency. It can be pretty fast and efficient if you don't need any consistency, but ultimately the more consistency you want, the more you have to pay for it via a lot of extra work.
Network Partitions are Rare, Server Failures are Not
... it is well known that when dealing with the possibility of network failures, strong consistency and high data availability cannot be achieved simultaneously. As such systems and applications need to be aware which properties can be achieved under which conditions. For systems prone to server and network failures, availability can be increased by using optimistic replication techniques, where changes are allowed to propagate to replicas in the background, and concurrent, disconnected work is tolerated. The challenge with this approach is that it can lead to conflicting changes which must be detected and resolved. This process of conflict resolution introduces two problems: when to resolve them and who resolves them. Dynamo is designed to be an eventually consistent data store; that is all updates reach all replicas eventually. - Amazon Dynamo Paper

The Dynamo system is a design that treats the probability of a network switch failure as being the same as the probability of a machine failure, and pays the cost with every single read. This is madness. Expensive madness.
Within a datacenter, the Mean Time To Failure (MTTF) for a network switch is one to two orders of magnitude higher than for servers, depending on the quality of the switch. This is according to data from Google about datacenter server failures and the published MTBF numbers for Cisco switches. (There is a subtle difference between MTBF and MTTF, but for our purposes we can treat them as the same.)
It is claimed that when W + R > N you can get consistency. But it's not true, because without distributed ACID transactions it's never possible to make a write to W > 1 replicas atomic.
Consider W=3, R=1 and N=3. If a network failure, or more likely a client/app-tier failure (hardware, OS or process crash), happens during the write, it's possible for only replica A to receive the write, with a lag until the cluster notices and syncs up. Another client with R=1 can then do two consecutive reads of the same key, getting newer data first from node A and older data next from node B. But you don't even need a failure or crash: once the first replica accepts a write there is always a lag before the next server(s) receive it, so a fast client can do the same read twice and get a newer version from one server and then an older version from another.
What is true is that if R > N / 2, you get a form of consistency where it's not possible to read a newer value and then have a subsequent read return an older value.
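Here is a minimal sketch of that scenario in Python; the replica map, key and version numbers are made up for illustration, not any real Dynamo client:

    # Minimal sketch: N=3 replicas, a write that reached only replica A before
    # the writer died, then two consecutive R=1 reads of the same key.
    replicas = {
        "A": {"cart:42": ("v2", 2)},  # got the new write: (value, version)
        "B": {"cart:42": ("v1", 1)},  # still holds the old version
        "C": {"cart:42": ("v1", 1)},
    }

    def read_r1(replica, key):
        # R=1: trust whatever the single contacted replica has
        return replicas[replica][key]

    print(read_r1("A", "cart:42"))  # ('v2', 2) -- newer value
    print(read_r1("B", "cart:42"))  # ('v1', 1) -- older value on the very next read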
For the vast majority of applications, it's okay for a failure to lead to temporary unavailability. Amazon believes capturing shopping cart writes is so important that it's worth the cost of quorum reads, or of inconsistency. Perhaps. But the problems and costs multiply. If you are doing extra reads to achieve high consistency, then you are putting extra load on each machine, requiring extra server hardware and extra networking infrastructure to provide the same baseline performance. All of this increases the frequency of component failures and increases operational costs (hardware, power, rack space and the personnel to maintain it all).
A Better Way?
What if a document had 1 master and N replicas to write to, but only a single master to read from? Clients know, based on the document key and a topology map, which machine serves as the master. That would make reads far cheaper and faster. All reads and writes for a document go to the same master, with writes replicated to the replicas (which also serve as masters for other documents; each machine is both a master and a replica).
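Here is a rough sketch of that client-side routing in Python, loosely in the spirit of a partition ("vbucket") map; the partition count, hash choice and node names are illustrative, not the actual Couchbase protocol:

    import zlib

    NUM_PARTITIONS = 64  # illustrative; real systems use many more

    # topology map: partition id -> [master, replica, replica]
    topology = {
        p: ["node%d" % (p % 3), "node%d" % ((p + 1) % 3), "node%d" % ((p + 2) % 3)]
        for p in range(NUM_PARTITIONS)
    }

    def partition_for(key):
        return zlib.crc32(key.encode()) % NUM_PARTITIONS

    def master_for(key):
        # all reads and writes for this key start at this one node
        return topology[partition_for(key)][0]

    def replicas_for(key):
        # writes are additionally copied to these nodes
        return topology[partition_for(key)][1:]

    print(master_for("user:1234"), replicas_for("user:1234"))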
But, you might ask, how do I achieve strong consistency if the master goes down or becomes unresponsive?
When that happens, the cluster notices the machine is unresponsive or too slow, removes it from the cluster, and fails over to a new master. Then the client tries again and gets a successful read.
But, you might ask, what if the client asks the wrong server for a read?
All machines in the cluster know their role, and only one machine in the cluster can be a document's master at any time. The cluster manager (a regular server node elected by Paxos consensus) makes sure to remove the old master, then assign the new master, then tell clients about the new topology. The client updates its topology map and retries at the new master.
But, you might ask, what if the topology has changed again, and the client again asks to read from the wrong server?
Then the wrong server will let the client know. The client will reload its topology map and re-request from the right server. If that server isn't really the right one anymore because of yet another topology change, the client will reload and retry again. It will do this as many times as necessary, but typically it happens only once.
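A sketch of that retry loop might look like this; the client object, its methods, and the status strings are hypothetical stand-ins for whatever a real smart client exposes:

    def read_with_retry(client, key, max_attempts=5):
        for _ in range(max_attempts):
            node = client.master_for(key)          # from the locally cached topology map
            resp = client.send_read(node, key)
            if resp.status == "OK":
                return resp.value
            if resp.status == "NOT_MY_PARTITION":  # node knows it is no longer the master
                client.reload_topology()           # fetch the new map from the cluster
                continue
            raise IOError("read failed: %s" % resp.status)
        raise TimeoutError("topology kept changing; giving up")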
But, you might ask, what if there is a network partition, and the client is on the wrong (smaller) side of the partition, and reads from a master server that doesn't know it's not a master server anymore?
Then it gets a stale read. But only for a little while, until the server itself realizes it's no longer in heartbeat contact with the majority of the cluster. And partitions like this are among the rarest forms of cluster failure: they require a network failure and for the client to end up on the wrong side of the partition.
But, you might ask, what if there is a network partition, and the client is on the wrong (smaller) side of the partition, and WRITES to a server that doesn't know it's not a master server anymore?
Then the write is lost. But if the client wanted true multi-node durability, the write wouldn't have succeeded in the first place (the client would time out waiting for the replica(s) to receive the update), and so the client wouldn't unknowingly lose data.
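A sketch of that durability check, with hypothetical names (replicate_to, replica_ack_count); the point is only that the client blocks until enough replicas acknowledge the write, and treats a timeout as a failed write rather than silently lost data:

    import time

    def durable_write(client, key, value, replicate_to=1, timeout_s=2.5):
        client.send_write(client.master_for(key), key, value)
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            if client.replica_ack_count(key) >= replicate_to:
                return True                      # observed on enough replicas
            time.sleep(0.05)
        # on the minority side of a partition the acks never arrive
        raise TimeoutError("write not replicated in time; treat it as failed")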
What I'm describing is the Couchbase clustering system.
Let's Run Some Numbers
Given the MTTF of a server, how much hardware do we need, and how quickly must the cluster fail over to a new master, to still meet our SLA requirements vs. a Dynamo-based system?
Let's start with some assumptions:
We want to achieve 4000 transactions/sec with a replication factor of 3. Our load mix is 75% reads/25% writes.
We want some consistency, so that we never read a newer value and then an older one, so for Dynamo:
R = 2, W = 2, N = 3
But for Couchbase:
R = 1, W = 2, N = 3
This means for a Dynamo-style cluster, the load will be:
Read transactions/sec: 9000 node-level reads (3000 client reads/sec, each sent to all 3 replicas)
Write transactions/sec: 3000 node-level writes (1000 client writes/sec, each sent to all 3 replicas)
This means for a Couchbase-style cluster, the load will be:
Read transactions/sec: 3000 node-level reads (each document is read only from its master node, with document masters spread evenly across the 3 nodes)
Write transactions/sec: 3000 node-level writes (1000 client writes/sec, each written to a master and 2 replicas)
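As a quick sanity check, here is the per-node arithmetic behind those numbers, assuming Dynamo fans every read and write out to all 3 replicas while Couchbase reads hit only the single master:

    CLIENT_OPS = 4000
    READS = int(CLIENT_OPS * 0.75)   # 3000 client reads/sec
    WRITES = CLIENT_OPS - READS      # 1000 client writes/sec
    NODES = 3

    # Dynamo-style: every read and write is sent to all 3 replicas
    dynamo_per_node = (READS * 3 + WRITES * 3) / NODES      # 4000 ops/sec per node

    # Couchbase-style: reads hit only the document's master; writes go to 3 copies
    couchbase_per_node = (READS * 1 + WRITES * 3) / NODES   # 2000 ops/sec per node

    print(dynamo_per_node, couchbase_per_node)

That factor of two in per-node throughput is where the comparison at the end of this post comes from.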
Let's assume both systems are equally reliable at the machine level. Google's research indicates that in their datacenters each server has an MTTF of 3141 hrs, or about 2.7 failures per year. Google also reports a rack MTTF (usually a power supply failure) of 10.2 years, roughly 30x as reliable as a server, so we'll ignore rack failures to keep the analysis simpler. (This is from Google's paper studying server failures here)
The MTBF of a Cisco network switch is published at 54,229 hrs on the low end to 1,023,027 hrs on the high end. For our purposes we'll ignore switch failures, since they affect the availability and consistency of both systems about equally, and they are 1 to 2 orders of magnitude rarer than server failures. (This is from a Cisco product spreadsheet here)
Assume we want to meet a latency SLA 99.9% of the time (the actual latency SLA threshold number doesn't matter here).
On Dynamo, that means each node can fail the SLA on up to 1.837% of requests. Each request queries 3 nodes but only uses the values from the first 2 to respond, and the chance of an SLA failure is the same on each node, so the formula only requires two of the three to meet the SLA:
0.001 = (3 - 2 * P) * P ^ 2
or:
P = 0.01837 (1.837%)
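To check that figure, here is a small sketch that solves P^2 * (3 - 2P) = 0.001 numerically (simple bisection; any root finder would do):

    def overall_sla_failure(p):
        # the request misses the SLA when at least 2 of the 3 queried nodes miss it
        return 3 * p**2 * (1 - p) + p**3   # == p**2 * (3 - 2*p)

    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if overall_sla_failure(mid) < 0.001:
            lo = mid
        else:
            hi = mid

    print(round(lo, 5))  # ~0.01837, i.e. each node may miss the SLA on ~1.837% of requests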
On Couchbase, if a master node fails, the cluster must recognize the failure and fail the node out. Using Google's MTTF figure above, assume the cluster can fail out a node in 30 secs and that it takes another 4.5 minutes for the new master to warm up its RAM cache. That's 2.7 failures/year with roughly 5 minutes of downtime each before failover fully completes, so queries will fail about 0.0026% of the time due to node failure.
For Couchbase to meet the same SLA:
0.001 = P(SlaFail) + P(NodeFail) - (P(SlaFail) * P(NodeFail))
0.001 = P(SlaFail) + 0.000026 - (P(SlaFail) * 0.000026)
0.001 ~= 0.000974 + 0.000026 - (0.000974 * 0.000026)
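And the same arithmetic for the Couchbase side, under the assumptions above (2.7 failures/year, roughly 5 minutes of downtime per failure, a 99.9% SLA):

    MINUTES_PER_YEAR = 365 * 24 * 60
    p_node_fail = 2.7 * 5 / MINUTES_PER_YEAR   # ~0.000026: fraction of time a master is down

    SLA_BUDGET = 0.001                         # 99.9% SLA
    p_sla_fail = (SLA_BUDGET - p_node_fail) / (1 - p_node_fail)

    print(round(p_node_fail, 6), round(p_sla_fail, 6))  # ~2.6e-05, ~0.000974 (~0.097%)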
Note: Some things I'm omitting from the analysis: when a Dynamo node fails, both remaining nodes must meet the SLA instead of 2 of 3, so the per-node budget drops from 1.837% to ~0.05%; and when a Couchbase server fails, the remaining servers take on extra work. Both are only temporary and go away once a new server is added back and initialized into the cluster, and they shouldn't change the numbers significantly. There is also the time to add in a new node and rebalance load onto it. At Couchbase we work very hard to make that as fast and efficient as possible. I'll assume Dynamo systems do the same, that the cost is the same, and omit it, though I think we are the leaders in rebalance performance.
With this analysis, a Couchbase node can only fail its SLA on roughly 0.1% of requests, while a Dynamo node can fail it on 1.837%. Sounds good for Dynamo, but it must do so at 2x the throughput per node, over 3x the data, and with 2x the total network traffic. And for very low latency response times (our customers often want sub-millisecond latency), meeting the SLA typically means a DBMS must keep a large amount of the relevant data and metadata in RAM, because random disk fetches carry a huge latency cost. With disk fetches 2 orders of magnitude slower than RAM on SSDs (100x), and 4 orders of magnitude slower on HDDs (10,000x), without much more RAM the disk accesses pile up fast, and so do the latencies.
So the fact that each Dynamo node gets a more forgiving SLA budget is a very small win when it still needs to keep nearly 3x the working set ready in memory, because each node serves reads for 3x the data at all times (the extra SLA headroom shaves that to about 2.97x the necessary RAM), and uses 2x the network capacity.
Damn Dynamo, you sure do work hard!
Now Couchbase isn't perfect either, far from it. Follow me on twitter @damienkatz. I'll be posting more about Couchbase's shortcomings, capabilities, and technical roadmap soon.
Posted May 3, 2013 10:06 AM