Introduction to distributed systems

What is a distributed system?

According to Wikipedia: A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages.

Parallel and distributed systems

  • Parallel systems use shared memory
    • Distributed parallel systems still use the notion of shared memory, but this is co-ordinated with special HW/SW that unifies memory accesses across multiple computers

Parallel system

  • Distributed systems use no shared components

Distributed system

Distributed system characteristics

  • Computational entities each with own memory
    • Need to synchronize distributed state
  • Entities communicate with message passing
  • Each entity maintains parts of the complete picture
  • Need to tolerate failure

Building distributed systems is hard

  • They fail often (and failure is difficult to spot!)
    • Split-brain scenarios
  • Maintaining order/consistency is hard
  • Coordination is hard
  • Partial operation must be possible
  • Testing is hard
  • Profiling is hard: “it’s slow” might be due to 1000s of factors

Fallacies of distributed systems

By Peter Deutsch

  • The network is reliable
  • Latency is zero
  • Bandwidth is infinite
  • The network is secure
  • Topology does not change
  • Transport cost is zero
  • The network is homogeneous

Four main problems

We will focus on the following four issues with distributed systems

  • Unreliable networks: As distributed systems need to share data, they have to co-ordinate over unreliable networks.
  • Unreliable time: Time is a universal co-ordination principle; distributed systems need time to determine the order of events.
  • No single source of truth: Distributed systems need to co-ordinate and agree upon a (version of) truth.
  • Different opinions: Distributed systems need to provide guarantees of consistency in query answers.

Unreliable networks


How can a network fail?

Most distributed systems use some form of asynchronous networking to communicate. While waiting for a response to a request, it is not possible to distinguish whether:

  1. the request was lost
  2. the remote node is down
  3. the response was lost

The usual remedy is to set timeouts and retry the request until it succeeds.
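
A minimal Java sketch of this remedy using the standard java.net.http client (the endpoint URL and the retry count are purely illustrative): each attempt gets a fixed timeout, and on a timeout or error we simply retry, because we cannot tell which of the three failures occurred.

    import java.io.IOException;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class RetryingClient {
        public static void main(String[] args) {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest
                    .newBuilder(URI.create("http://example.org/ping")) // illustrative endpoint
                    .timeout(Duration.ofSeconds(2))                    // give up on this attempt after 2s
                    .GET()
                    .build();

            for (int attempt = 1; attempt <= 5; attempt++) {
                try {
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println("Got answer: " + response.statusCode());
                    return; // success
                } catch (IOException | InterruptedException e) {
                    // We cannot tell whether the request was lost, the node is down,
                    // or the response was lost; all we observe is a timeout or error.
                    System.out.println("Attempt " + attempt + " failed: " + e);
                }
            }
            System.out.println("Giving up after 5 attempts");
        }
    }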

Network failures in practice

In a study of network failures by Microsoft Research[1], the authors found that:

  • 5 devices per day fail
  • 41 links per day fail
  • Load balancers have a >20% chance of failing within a year
  • MTTR (mean time to repair) is about 5 minutes
  • Redundancy is not very effective
  • Most failures are due to misconfiguration

The data is for professionally managed data centres run by a single company.

On the public cloud, failures may affect thousands of systems in parallel.

Timeouts

Timeouts are a fundamental design mechanism in asynchronous networks: Ethernet, TCP and most application protocols work with timeouts.

The problem with timeouts is that delays in asynchronous systems are unbounded. This can be due to:

  • Queueing of packets at the network level, due to high or spiking traffic
  • Queueing of requests at the application level, e.g. because the application is busy processing other requests

Queues also experience a snowball effect: on busy systems, queues keep getting bigger.

Timeouts usually follow an exponential back-off rule: we double the time we wait for an answer, up to an upper bound. More fine-grained approaches use the response times of successful requests to calibrate appropriate timeouts.
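
A small Java sketch of exponential back-off (the initial delay, the cap and the tryRequest stub are illustrative assumptions, not a real protocol):

    public class Backoff {
        public static void main(String[] args) throws InterruptedException {
            long delayMs = 100;              // initial timeout guess
            final long maxDelayMs = 10_000;  // upper bound

            for (int attempt = 1; attempt <= 6; attempt++) {
                if (tryRequest()) {          // hypothetical request stub
                    System.out.println("Succeeded on attempt " + attempt);
                    return;
                }
                System.out.println("Attempt " + attempt + " timed out; waiting " + delayMs + " ms");
                Thread.sleep(delayMs);
                delayMs = Math.min(delayMs * 2, maxDelayMs); // double, up to the bound
            }
        }

        // Stand-in for a network call that either answers within the timeout or not.
        private static boolean tryRequest() {
            return Math.random() < 0.3;
        }
    }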

Unreliable time

Soft Watch At The Moment Of First Explosion, by Salvador Dali

Time is essential

In a distributed system, time is the only global constant nodes can rely on to make distributed decisions on ordering problems.

Time in computers is kept in two ways:

  • “Real” time clocks (RTCs): Capture our intuition about time and are kept in sync with centralised servers via the NTP protocol (e.g. System.currentTimeMillis).
  • Monotonic clocks: These clocks only move forward (e.g. System.nanoTime).

Q: Which clock would you use to time a task? Why?
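
A: the monotonic clock. A minimal sketch using the JVM calls mentioned above shows why: the wall clock may be adjusted (e.g. by NTP) in the middle of the measurement, while the monotonic clock cannot.

    public class TimingTask {
        public static void main(String[] args) throws InterruptedException {
            long wallStart = System.currentTimeMillis(); // RTC: may jump if NTP adjusts the clock
            long monoStart = System.nanoTime();          // monotonic: only moves forward

            Thread.sleep(250); // the "task" being timed

            long wallElapsedMs = System.currentTimeMillis() - wallStart;
            long monoElapsedMs = (System.nanoTime() - monoStart) / 1_000_000;

            System.out.println("Wall clock:      " + wallElapsedMs + " ms (wrong if the clock was adjusted)");
            System.out.println("Monotonic clock: " + monoElapsedMs + " ms");
        }
    }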

The trouble with computer clocks

Monotonic clocks are maintained by the OS and rely on HW counters exposed by CPUs. They are (usually!) good for determining order within a node, but each node only has its own notion of time.

NTP can synchronize time across nodes with millisecond accuracy. A modern CPU can execute \(10^6\) instructions (\(\times\) number of cores) in a millisecond!

Moreover, leap seconds are introduced every now and then; minutes may occasionally last for 61 or 59 seconds.

Microsecond (\(\mu s\)) accuracy is possible with GPS-based clocks, but this is expensive.

Order

What is order? A way of arranging items so that the following properties hold.

  • Reflexivity: \(\forall a.\ a \le a\)
  • Transitivity: if \(a \le b\) and \(b \le c\) then \(a \le c\)
  • Antisymmetry: if \(a \le b\) and \(b \le a\) then \(a = b\)
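
As a worked example, the divisibility relation \(a \mid b\) on the positive integers satisfies all three properties, yet it is only a partial order: \(2\) and \(3\) are incomparable, since neither divides the other. The familiar \(\le\) on the integers is a total order, because any two elements are comparable (\(\forall a, b.\ a \le b \lor b \le a\)). This distinction between partial and total orders matters below: Lamport timestamps only give a partial causal order.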

When do we need to order?

  • Sequencing items in memory (e.g. with a mutex)
  • Encoding history (“happens before” relationships)
  • Mutual exclusion of access to a shared resource
  • Transactions in a database

Order and time

  • FIFO is enough to maintain order with a single sender
  • Time at the receiver end is enough to maintain order at the receiver end
  • When multiple senders/receivers are involved, we need an external ordering scheme
    • Total order: If our message rate is globally bounded (e.g. 1 msg/sec/receiver), and less fine-grained than our clock accuracy (e.g. ms range), then synchronized RTCs are enough to guarantee order.
    • Causal order: Otherwise, we need to rely on happens before (\(\rightarrow\)) relationships.

Encoding happens before

Lamport introduced the eponymous timestamps in 1978[2]. Lamport timestamps define a partial causal ordering of events.

Invariant: For two events \(a\) and \(b\), if \(a \rightarrow b\), then \(LT(a) < LT(b)\).

Q: The reverse is not true: \(LT(a) < LT(b)\) does not imply that \(a \rightarrow b\). Why?

Lamport timestamps: How they work

  • Each individual process \(p\) maintains a counter: \(LT(p)\).
  • When a process \(p\) performs an action, it increments \(LT(p)\).
  • When a process \(p\) sends a message, it includes \(LT(p)\) in the message.
  • When a process \(p\) receives a message from a process \(q\), that message includes the value of \(LT(q)\); \(p\) updates its \(LT(p)\) to \(\max(LT(p), LT(q)) + 1\)
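
A minimal Java sketch of these rules (class and method names are illustrative); the main method mirrors the A–C exchange (events E1–E4) shown below:

    import java.util.concurrent.atomic.AtomicLong;

    // Lamport clock for one process.
    public class LamportClock {
        private final AtomicLong counter = new AtomicLong(0);

        // A local action (including a send): increment the counter.
        public long onLocalEventOrSend() {
            return counter.incrementAndGet();
        }

        // On receiving a message carrying the sender's timestamp,
        // jump to max(local, remote) + 1.
        public long onReceive(long remoteTimestamp) {
            return counter.updateAndGet(local -> Math.max(local, remoteTimestamp) + 1);
        }

        public long current() {
            return counter.get();
        }

        public static void main(String[] args) {
            LamportClock a = new LamportClock();
            LamportClock c = new LamportClock();

            long msg = a.onLocalEventOrSend();   // E1: A sends,    LT(A) = 1
            c.onReceive(msg);                    // E2: C receives, LT(C) = max(0, 1) + 1 = 2
            long reply = c.onLocalEventOrSend(); // E3: C sends,    LT(C) = 3
            a.onReceive(reply);                  // E4: A receives, LT(A) = max(1, 3) + 1 = 4

            System.out.println("LT(A) = " + a.current() + ", LT(C) = " + c.current());
        }
    }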

Why is the LT invariant not symmetric?

4 nodes exchange events.

Initial state of timestamps: \([A(0), B(0), C(0), D(0)]\)

E1. \(A\) sends to \(C\): \([A(1), B(0), C(0), D(0)]\)

E2. \(C\) receives from \(A\): \([A(1), B(0), C(2), D(0)]\)

E3. \(C\) sends to \(A\): \([A(1), B(0), C(3), D(0)]\)

E4. \(A\) receives from \(C\): \([A(4), B(0), C(3), D(0)]\)

E5. \(B\) sends to \(D\): \([A(4), B(1), C(3), D(0)]\)

E6. \(D\) receives from \(B\): \([A(4), B(1), C(3), D(2)]\)

At this point, \(LT(E6) < LT(E4)\), but it does not mean that \(E6 \rightarrow E4\)! Events 4 and 6 are independent.

Vector clocks

Vector clocks[3] capture the causal order exactly: unlike Lamport timestamps, they can also tell when two events are concurrent.

On a system with \(N\) nodes, each node \(i\) maintains a vector \(V_i\) of size \(N\).

  • \(V_i[i]\) is the number of events that occurred at node \(i\)
  • \(V_i[j]\) is the number of events that node \(i\) knows occurred at node \(j\)

They are updated as follows:

  • Local events increment \(V_i[i]\)
  • When \(i\) sends a message to \(j\), it includes \(V_i\)
  • When \(j\) receives a message carrying \(V_i\), it updates every element to \(V_j[a] = \max(V_i[a], V_j[a])\) and then increments its own entry \(V_j[j]\) (the receive itself counts as an event)
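
A minimal Java sketch of these update rules, plus the comparison that defines the causal order (names are illustrative; the receive rule also counts the receive as a local event, as described above):

    import java.util.Arrays;

    // Vector clock for node `id` in a cluster of `n` nodes.
    public class VectorClock {
        private final int id;
        private final long[] v;

        public VectorClock(int id, int n) {
            this.id = id;
            this.v = new long[n];
        }

        // A local event at this node.
        public void tick() {
            v[id]++;
        }

        // Before sending: count the send as a local event and attach a copy of the vector.
        public long[] onSend() {
            v[id]++;
            return v.clone();
        }

        // On receive: element-wise maximum with the sender's vector,
        // then count the receive itself as a local event.
        public void onReceive(long[] remote) {
            for (int a = 0; a < v.length; a++) {
                v[a] = Math.max(v[a], remote[a]);
            }
            v[id]++;
        }

        // a happened-before b iff VC(a) <= VC(b) element-wise and VC(a) != VC(b).
        public static boolean happenedBefore(long[] a, long[] b) {
            boolean strictlyLess = false;
            for (int i = 0; i < a.length; i++) {
                if (a[i] > b[i]) return false;
                if (a[i] < b[i]) strictlyLess = true;
            }
            return strictlyLess;
        }

        public long[] snapshot() {
            return v.clone();
        }

        public static void main(String[] args) {
            VectorClock n0 = new VectorClock(0, 2);
            VectorClock n1 = new VectorClock(1, 2);

            long[] m = n0.onSend();  // node 0 is now [1, 0]
            n1.onReceive(m);         // node 1 is now [1, 1]

            System.out.println(Arrays.toString(n1.snapshot()));           // [1, 1]
            System.out.println(happenedBefore(m, n1.snapshot()));         // true: send -> receive
        }
    }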

Vector clocks guarantees

  • if \(a \rightarrow b\), then \(VC(a) < VC(b)\)
  • if \(VC(a) < VC(b)\), then \(a \rightarrow b\)
  • if \(VC(a) < VC(b)\) and \(VC(b) < VC(c)\), then \(a \rightarrow c\)
  • if \(VC(a) < VC(b)\), then \(RT(a) < RT(b)\), where \(RT\) is the real (clock) time of events \(a\) and \(b\)

Vector clocks are expensive to maintain: they require \(O(n)\) timestamps to be exchanged with each communication.

However, it has been shown[4] that we cannot do better than vector clocks!

Distributed decision making

What is true in a distributed setting?

  • Nodes in distributed systems cannot know anything for sure
    • Only make guesses
  • Individual nodes cannot rely on their own information
    • Clocks can be unsynchronized
    • Other nodes may be unresponsive when updating state
  • “Split-brain” scenarios: Parts of the system know a different version of the truth than the other part(s)

In distributed settings, the truth is determined by consensus, which is reached through voting mechanisms.

Consensus

Achieving a decision in the presence of faulty processes. Typical applications include:

  • Committing a transaction
  • Synchronizing state machines
  • Leader election
  • Atomic broadcasts

The 2 generals problem

The two generals problem setting

  • 2 armies camped on opposing hills (A1 and A2)
  • They are only able to communicate with messengers
  • They need to decide on a time to attack
  • The enemy (B) is camped between the two hills and can intercept the messengers at any time
  • How can the generals decide when to attack?

It turns out that it is impossible to make a reliable decision.

The 2 generals problem solution

  • The problem cannot be solved without loss of information
  • Approximately:
    • Pre-agree on timeouts
    • Send n labeled messages
    • The receiver counts the messages received within the time window, then decides how many acknowledgement messages to send back.

Consequence: we can only make reliable distributed decisions with either reliable communication or more than 2 parties.

The Byzantine generals problem

Formulated by Lamport et al.[5], the Byzantine generals problem shaped distributed systems research for the next 40 years.

The formulation is simple:

  • \(n\) generals need to reach a unanimous decision
  • they communicate with unreliable messages
  • \(m\) (\(n > m\)) generals vote suboptimally (i.e. lie) or do not vote

Byzantine Eagle

Consensus algorithms

Roles in a Raft cluster

Most consensus algorithms (e.g. Paxos[6] or Raft[7]) attempt to keep the log module of replicated state machines in sync. Basic properties include:

  • Safety: Never returning an incorrect result under non-Byzantine conditions.
  • Availability: Able to provide an answer if \(n/2 + 1\) servers are operational
  • No clocks: They do not depend on RTCs to work
  • Immune to stragglers: If \(n/2 + 1\) servers vote, then the result is considered safe

The Raft consensus algorithm

Raft defines the following roles:

  • Client: Creates log entries, asks queries
  • Leader: Accepts log entries from clients, replicates them on other servers, and tells servers when it is safe to apply log entries
  • Followers: Replicate the leader’s state machine.

Raft cluster states

  1. Leader election
    • Select one server to act as leader 
    • Detect crashes, choose new leader
  2. Log replication (normal operation)
    • Leader accepts commands from clients, appends to its log
    • Leader replicates its log to other servers (overwrites inconsistencies)

Raft ensures that logs are always consistent and that only servers with up-to-date logs can become leader.

Raft terms


  • One leader per term only
  • Some terms have no leader (failed election)
  • Each server maintains current term value (no global view) 
    • Exchanged in every RPC
    • Peer has later term? Update term, revert to follower

Terms are used to identify obsolete information
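
A hedged sketch of the term rule above (field and method names are illustrative, not taken from any particular Raft implementation): whenever an incoming RPC carries a later term, the server adopts it and reverts to follower.

    public class RaftServer {
        enum Role { FOLLOWER, CANDIDATE, LEADER }

        private long currentTerm = 0;
        private Role role = Role.FOLLOWER;

        // Called for every incoming RPC (vote request, append-entries, ...),
        // which always carries the sender's current term.
        void onRpcTerm(long peerTerm) {
            if (peerTerm > currentTerm) {
                currentTerm = peerTerm;   // our term is obsolete: adopt the newer one
                role = Role.FOLLOWER;     // and step down, whatever we were doing
            }
            // If peerTerm < currentTerm, the peer is the obsolete one and its
            // request would be rejected (not shown in this sketch).
        }

        public static void main(String[] args) {
            RaftServer s = new RaftServer();
            s.role = Role.CANDIDATE;      // pretend we were running an election
            s.onRpcTerm(3);               // an RPC arrives with a later term
            System.out.println("term=" + s.currentTerm + " role=" + s.role); // term=3 role=FOLLOWER
        }
    }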

Leader election process

Raft leader election

  • Raft elects only one leader per election
  • Raft ensures that one candidate will eventually win

See more details in this visualization

The FLP impossibility

Fischer, Lynch and Paterson (FLP) theorem[8]: In an asynchronous network, consensus cannot be guaranteed to be reached if even one node can fail.

A foundational result, proving the impossibility of distributed consensus. The system model the authors assume is fairly restrictive:

  • Asynchronous communication
  • No clocks or timeouts
  • No random number generators

In practice, however, we can mitigate the consequences, since real systems can and do use clocks and random numbers.

Byzantine fault tolerance

Raft and Paxos work by assuming that the exchanged messages are valid and true (i.e. non-Byzantine). In open distributed systems (e.g. Bitcoin) this assumption is not necessarily valid.

Most open distributed systems use public-key cryptography: nodes register before joining and sign their messages, to avoid man-in-the-middle (MITM) attacks.

Still, decisions require majority votes, so the \(n/2 + 1\) rule applies.

Consistency

The consistency guarantee

Consistency refers to the requirement that any given transaction must change affected data only in allowed ways.

  • strong: at any time, concurrent reads from any node return the same values
  • eventual: if writes stop, all reads will return the same value after a while
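
A toy Java illustration of the difference between the two guarantees (the two in-memory "replicas" and the method names are purely illustrative; real systems replicate over a network):

    public class ConsistencyDemo {
        static int replicaA = 0;
        static int replicaB = 0;

        // Eventually consistent write: update one replica now; the other catches up later.
        static void eventualWrite(int value) {
            replicaA = value;
            // replication to replicaB would happen asynchronously, after some delay
        }

        // Strongly consistent write: update all replicas before acknowledging.
        static void strongWrite(int value) {
            replicaA = value;
            replicaB = value;
        }

        public static void main(String[] args) {
            eventualWrite(1);
            System.out.println("A=" + replicaA + ", B=" + replicaB); // A=1, B=0: stale read possible

            strongWrite(2);
            System.out.println("A=" + replicaA + ", B=" + replicaB); // A=2, B=2: all reads agree
        }
    }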

Consensus is the basis upon which we build consistency

The CAP conjecture

By Eric Brewer[9]: A distributed system can only provide 2 of the following 3 guarantees

  • Consistency: all nodes see the same data at the same time
  • Availability: every request receives a response about whether it succeeded or failed
  • Partition tolerance: the system continues to operate despite arbitrary partitioning due to network failures

While widely cited, the conjecture is only indicative: when the network is working, systems can offer all 3 guarantees. The real trade-off is being either consistent or available when a partition occurs.

Linearisability

At any time, concurrent reads from any node return the same values. As soon as a write completes successfully, the result is immediately replicated to all nodes in an atomic manner and is made available to reads. In that sense, linearisability is a timing constraint[10].

A non-linearisable system

Q: Is the above system linearisable?

It is not: while the write operation is in flight, the system cannot return a consistent answer.

Linearisability primitives

All operations last for a time block called a transaction; this involves setting up a connection to a remote system and executing the command.

  • reads: must always return the latest value from storage.
  • writes: change the value of the shared memory; last-one-wins is the most common strategy to deal with multiple writers
  • compare and swap (atomic writes): compares the contents of a memory location with a given value and, only if they are the same, modifies the contents of that memory location to a new given value. Concurrent compare-and-swap operations whose expected value is stale fail.
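
A single-machine analogue of the compare-and-swap primitive, using java.util.concurrent; in a linearisable store the same check-and-update would have to be applied atomically across the whole cluster.

    import java.util.concurrent.atomic.AtomicInteger;

    public class CasExample {
        public static void main(String[] args) {
            AtomicInteger x = new AtomicInteger(1);

            boolean first  = x.compareAndSet(1, 2);  // expected 1, found 1 -> succeeds
            boolean second = x.compareAndSet(1, 3);  // expected 1, found 2 -> fails

            System.out.println("cas(x: 1 -> 2) = " + first  + ", x = " + x.get()); // true, 2
            System.out.println("cas(x: 1 -> 3) = " + second + ", x = " + x.get()); // false, 2
        }
    }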

Linearisability example

Linearisability Compare and Swap

Q: What does the \(cas(x,3)\) instruction do?

Return an error! Compare and swap instructions should run atomically across the cluster.

Linearisability Read after Write

Q: What should \(read(x)\) return for client B?

It should return 1. We assume that write transactions are committed immediately when they are executed, so A’s write overwrites D’s earlier write.

Linearisability read during conflict

Q: What should \(read(x)\) return for client C?

It should return 2. The compare-and-swap transaction started by B may conflict with the one started by D, but the system always maintains a consistent state.

General advice

  • Avoid implementing a distributed system
  • Avoid relying on a distributed system
  • If the above fail, only use existing, off-the-shelf solutions

Content credits

Bibliography

[1]
P. Gill, N. Jain, and N. Nagappan, “Understanding network failures in data centers: Measurement, analysis, and implications,” SIGCOMM Comput. Commun. Rev., vol. 41, no. 4, pp. 350–361, Aug. 2011.
[2]
L. Lamport, “Time, clocks, and the ordering of events in a distributed system,” Communications of the ACM, vol. 21, no. 7, pp. 558–565, 1978.
[3]
F. Mattern, “Virtual time and global states of distributed systems,” in PARALLEL AND DISTRIBUTED ALGORITHMS, 1988, pp. 215–226.
[4]
B. Charron-Bost, “Concerning the size of logical clocks in distributed systems,” Information Processing Letters, vol. 39, no. 1, pp. 11–16, 1991.
[5]
L. Lamport, R. Shostak, and M. Pease, “The Byzantine generals problem,” ACM Transactions on Programming Languages and Systems (TOPLAS), vol. 4, no. 3, pp. 382–401, 1982.
[6]
L. Lamport, “The part-time parliament,” ACM Trans. Comput. Syst., vol. 16, no. 2, pp. 133–169, May 1998.
[7]
D. Ongaro and J. K. Ousterhout, “In search of an understandable consensus algorithm.” in USENIX annual technical conference, 2014, pp. 305–319.
[8]
M. J. Fischer, N. A. Lynch, and M. S. Paterson, “Impossibility of distributed consensus with one faulty process,” J. ACM, vol. 32, no. 2, pp. 374–382, Apr. 1985.
[9]
E. Brewer, “CAP twelve years later: How the "rules" have changed,” Computer, vol. 45, no. 2, pp. 23–29, Feb. 2012.
[10]
M. P. Herlihy and J. M. Wing, “Linearizability: A correctness condition for concurrent objects,” ACM Trans. Program. Lang. Syst., vol. 12, no. 3, pp. 463–492, Jul. 1990.
[11]
M. Kleppmann, Designing data-intensive applications. O’Reilly Media, Inc., 2017.
[12]
K. Kingsbury, “Jepsen: On the perils of network partitions,” 2013. [Online]. Available: https://aphyr.com/tags/jepsen.
[13]
M. Castro and B. Liskov, “Practical Byzantine fault tolerance and proactive recovery,” ACM Trans. Comput. Syst., vol. 20, no. 4, pp. 398–461, Nov. 2002.