Saral Shiksha Yojna

Distributed Systems

CS3.401
Prof. Kishore Kothapalli · Monsoon 2025-26 · 4 credits

Lamport, Ricart-Agrawala, Maekawa, Suzuki-Kasami, Raymond — Complete Comparison

Unit 5 — Distributed Mutual Exclusion

The Critical Section Without A Computer

In a single machine, mutual exclusion is easy: lock a semaphore, do the work, unlock. The OS has shared memory and a kernel to enforce ordering.

In a distributed system there's no shared memory, no kernel, no central arbitrator. Just messages. So how do you make sure exactly one process is in a critical section at any time?

Five algorithms, two families.

Three Properties And Four Metrics

Every DME algorithm must satisfy:

  • Safety — at most one in CS at any instant.
  • Liveness — every requester eventually enters.
  • Fairness — served in timestamp order.

You measure them on four axes:

  • Message complexity — msgs per CS invocation.
  • Synchronisation delay (SD) — time after one site leaves CS before next enters.
  • Response time — wait between request and completion.
  • Throughput — 1/(SD + E), where E is the avg CS execution time.
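The throughput formula 1/(SD + E) is easy to sanity-check numerically. A minimal sketch, with illustrative (made-up) values for the message delay T and CS time E, using the SD values derived later in these notes:

```python
# Throughput = 1 / (SD + E): after each CS exit, SD time passes before the
# next site can enter, then that site spends E inside the CS.
T, E = 0.01, 0.05          # hypothetical: 10 ms message delay, 50 ms per CS

sd = {"Lamport": T, "Ricart-Agrawala": T, "Maekawa": 2 * T}
throughput = {name: 1 / (s + E) for name, s in sd.items()}
for name, tp in throughput.items():
    print(f"{name}: {tp:.1f} CS entries/sec")
```

Note how Maekawa's larger SD directly costs throughput even though it sends fewer messages.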

Family 1: Non-Token (Permission-Based)

Lamport (1978)

The original. Broadcast a request, wait for permissions, enter when conditions met. Needs FIFO channels.

Each site keeps a request queue. To enter CS:

  • Broadcast REQUEST(ts_i, i); place the request in your own queue.
  • Other sites: queue the request, send a timestamped REPLY.
  • L1: I've received a message timestamped later than ts_i from every other site.
  • L2: my own request is at the top of my queue.
  • Both L1 and L2 hold → enter.
  • On release: dequeue self + broadcast RELEASE.

Cost: 3(N−1) msgs per CS (N−1 each of REQUEST, REPLY, RELEASE). SD = T (one msg delay).
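The per-site state and the L1/L2 entry check can be sketched in Python. A minimal sketch: class, method, and field names are mine, message transport is simulated by the caller, and ties on timestamps break by site id as in Lamport's total order.

```python
import heapq

class LamportSite:
    """One site in Lamport's algorithm; the caller delivers the messages."""
    def __init__(self, site_id, n_sites):
        self.id, self.n = site_id, n_sites
        self.clock = 0
        self.queue = []                            # min-heap of (ts, site) requests
        self.last_ts = {j: 0 for j in range(n_sites) if j != site_id}

    def _tick(self, ts=0):
        self.clock = max(self.clock, ts) + 1       # Lamport clock update

    def request(self):                             # broadcast the result to all
        self._tick()
        heapq.heappush(self.queue, (self.clock, self.id))
        return ("REQUEST", self.clock, self.id)

    def on_request(self, ts, sender):              # queue it, send timestamped REPLY
        self._tick(ts)
        heapq.heappush(self.queue, (ts, sender))
        self.last_ts[sender] = max(self.last_ts[sender], ts)
        return ("REPLY", self.clock, self.id)

    def on_reply(self, ts, sender):
        self._tick(ts)
        self.last_ts[sender] = max(self.last_ts[sender], ts)

    def can_enter(self):
        if not self.queue or self.queue[0][1] != self.id:
            return False                           # L2: own request at queue head
        my_ts = self.queue[0][0]
        return all(ts > my_ts for ts in self.last_ts.values())  # L1: later msg from all

    def release(self):                             # broadcast the RELEASE
        heapq.heappop(self.queue)
        self._tick()
        return ("RELEASE", self.clock, self.id)

    def on_release(self, ts, sender):              # FIFO guarantees request arrived first
        self._tick(ts)
        self.last_ts[sender] = max(self.last_ts[sender], ts)
        self.queue = [e for e in self.queue if e[1] != sender]
        heapq.heapify(self.queue)

# Tiny run: site 0 requests, sites 1 and 2 reply, both entry conditions hold.
sites = [LamportSite(i, 3) for i in range(3)]
_, ts, src = sites[0].request()
for j in (1, 2):
    _, rts, rid = sites[j].on_request(ts, src)
    sites[0].on_reply(rts, rid)
```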

Ricart-Agrawala (1981)

Eliminate RELEASE messages by *deferring* replies. No FIFO needed.

  • Broadcast a timestamped REQUEST.
  • On receiving a REQUEST at site j: REPLY if j isn't requesting, or the incoming (ts, id) is smaller than j's own; else defer the REPLY until j leaves the CS.
  • Enter after a REPLY from all N−1 others.
  • On release: send all deferred REPLYs.

Cost: 2(N−1) msgs. SD = T. Strictly better than Lamport.
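The deferral rule is the whole algorithm, so it is worth seeing concretely. A minimal sketch (names are mine, transport simulated by the caller) in which two sites request concurrently and the (ts, id) tie-break decides who defers:

```python
class RASite:
    """Ricart-Agrawala site (sketch); the caller delivers the messages."""
    def __init__(self, site_id, n_sites):
        self.id, self.n = site_id, n_sites
        self.clock = 0
        self.req_ts = None        # timestamp of our outstanding request, else None
        self.in_cs = False
        self.deferred = []        # sites whose REPLY we are withholding
        self.replies = set()

    def request(self):            # broadcast to all other sites
        self.clock += 1
        self.req_ts = self.clock
        self.replies.clear()
        return ("REQUEST", self.req_ts, self.id)

    def on_request(self, ts, sender):
        self.clock = max(self.clock, ts) + 1
        mine = (self.req_ts, self.id) if self.req_ts is not None else None
        # Defer iff we're in the CS, or our own request has priority (smaller (ts, id))
        if self.in_cs or (mine is not None and mine < (ts, sender)):
            self.deferred.append(sender)
            return None
        return ("REPLY", self.id)

    def on_reply(self, sender):
        self.replies.add(sender)
        if len(self.replies) == self.n - 1:
            self.in_cs = True     # all N-1 permissions in hand

    def release(self):            # leave CS, flush every deferred REPLY
        self.in_cs, self.req_ts = False, None
        out, self.deferred = self.deferred, []
        return [("REPLY", self.id, dest) for dest in out]

# Sites 0 and 1 request with equal timestamps; 0 wins the (ts, id) tie-break.
s = [RASite(i, 3) for i in range(3)]
r0, r1 = s[0].request(), s[1].request()
assert s[0].on_request(r1[1], r1[2]) is None          # 0 defers its REPLY to 1
assert s[1].on_request(r0[1], r0[2]) == ("REPLY", 1)  # 1 must REPLY to 0
s[2].on_request(r0[1], r0[2]); s[2].on_request(r1[1], r1[2])  # idle: replies to both
s[0].on_reply(1); s[0].on_reply(2)                    # 0 enters
for _, _, dest in s[0].release():                     # deferred REPLY finally flows
    s[dest].on_reply(0)
s[1].on_reply(2)                                      # now 1 enters
```

No RELEASE message exists: site 1's turn is encoded entirely in the REPLY that site 0 withheld.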

Roucairol-Carvalho optimisation: once you have a REPLY from site j, don't re-request from j unless you've replied to j since. Message count becomes variable: 0 to 2(N−1) per CS.

Maekawa (1985)

Stop asking everyone — ask only a quorum R_i.

  • |R_i| = K for every i, R_i ∩ R_j ≠ ∅ for every pair i, j, S_i ∈ R_i, every node is in K quorums.
  • Optimum: K ≈ √N (any pair sharing one member is enough to arbitrate).

V1 protocol: 3√N msgs per CS. SD = 2T. But it can deadlock — cyclic locking, where e.g. three quorums each hold a lock another needs.

V2 fix adds three messages:

  • FAILED — sent by an arbiter to a requester when the arbiter has already locked for a higher-priority request.
  • INQUIRE — sent by an arbiter when a higher-priority request arrives after it already acked a lower-priority one; asks the previously-acked site "are you in CS?"
  • YIELD — sent in answer to INQUIRE by a site that has received a FAILED, relinquishing its lock so the higher-priority requester can proceed.

V2 messages: up to 5√N. SD still 2T.
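The pairwise-intersection property is easiest to see with the simple grid construction: a site's quorum is its row plus its column in a √N × √N grid. A minimal sketch (note this gives |R_i| = 2√N − 1, not Maekawa's optimal ≈ √N, which needs finite projective planes):

```python
import math
from itertools import combinations

def grid_quorums(n):
    """Quorum of site i = its row ∪ its column in a √n × √n grid of sites."""
    k = math.isqrt(n)
    assert k * k == n, "sketch assumes n is a perfect square"
    quorums = []
    for i in range(n):
        r, c = divmod(i, k)
        row = {r * k + j for j in range(k)}     # all sites in i's row
        col = {j * k + c for j in range(k)}     # all sites in i's column
        quorums.append(row | col)
    return quorums

qs = grid_quorums(16)
assert all(i in qs[i] for i in range(16))       # S_i ∈ R_i
assert all(len(q) == 2 * 4 - 1 for q in qs)     # |R_i| = 2√N − 1
assert all(qs[a] & qs[b] for a, b in combinations(range(16), 2))  # pairwise overlap
```

Any two row/column crosses must share at least the cell at (row of a, column of b), which is exactly why one common arbiter always exists.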

Family 2: Token-Based

Suzuki-Kasami (1985)

Single token grants entry. When you want CS, broadcast a REQUEST; the token-holder forwards the token if your request is fresh.

Token contents: FIFO queue Q of pending requesters + array LN[1..N], where LN[j] is the seq num of the last CS executed for site j.

**Per-site RN_i[1..N]**: RN_i[j] is the largest seq num ever seen in a REQUEST from site j.

To request: RN_i[i]++, broadcast REQUEST(i, RN_i[i]).

On receiving REQUEST(j, n) at site i: RN_i[j] = max(RN_i[j], n). If i has the token, is idle, and RN_i[j] = LN[j] + 1 (the request is *fresh*), send the token to j.

On release: LN[i] = RN_i[i]. Append any new fresh requesters (j with RN_i[j] = LN[j] + 1) to Q. Pop the head of Q, send the token there.

Cost: 0 msgs (already hold the token) or N msgs (N−1 broadcast REQUESTs + 1 token transfer). SD = 0 or T.

The freshness condition filters out stale duplicate requests from delayed messages.
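The RN/LN bookkeeping and the freshness test fit in a short sketch (names are mine; the caller delivers the messages):

```python
class SKSite:
    """Suzuki-Kasami site (sketch); the caller delivers the messages."""
    def __init__(self, site_id, n, has_token=False):
        self.id, self.n = site_id, n
        self.RN = [0] * n                      # highest request seq seen per site
        self.has_token = has_token
        self.token = {"Q": [], "LN": [0] * n} if has_token else None
        self.in_cs = False

    def request(self):
        self.RN[self.id] += 1
        if self.has_token:
            self.in_cs = True                  # 0-message case: already hold the token
            return None
        return ("REQUEST", self.id, self.RN[self.id])   # broadcast to all

    def on_request(self, j, seq):
        self.RN[j] = max(self.RN[j], seq)
        # Idle token-holder + fresh request (RN[j] == LN[j] + 1) → pass the token.
        if self.has_token and not self.in_cs and self.RN[j] == self.token["LN"][j] + 1:
            tok, self.token, self.has_token = self.token, None, False
            return ("TOKEN", j, tok)
        return None                            # stale duplicate or nothing to do

    def on_token(self, tok):
        self.has_token, self.token, self.in_cs = True, tok, True

    def release(self):
        self.in_cs = False
        self.token["LN"][self.id] = self.RN[self.id]   # record the completed CS
        for j in range(self.n):                        # enqueue newly fresh requests
            if j not in self.token["Q"] and self.RN[j] == self.token["LN"][j] + 1:
                self.token["Q"].append(j)
        if self.token["Q"]:
            j = self.token["Q"].pop(0)
            tok, self.token, self.has_token = self.token, None, False
            return ("TOKEN", j, tok)
        return None                            # keep the token, stay idle

# Site 0 starts with the token; site 1's fresh request pulls it over (N msgs total).
s = [SKSite(i, 3, has_token=(i == 0)) for i in range(3)]
req = s[1].request()
s[2].on_request(req[1], req[2])        # bystander just updates RN
tok = s[0].on_request(req[1], req[2])  # fresh → token handed to site 1
s[1].on_token(tok[2])
```

Replaying the same REQUEST at site 1's new token fails the freshness test, which is exactly how delayed duplicates are filtered.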

Raymond (1989)

Token-based on a logical tree, not broadcast.

Each node has a Holder pointer toward the current token-holder (root) and a FIFO queue of pending requests.

To request: push self onto the local queue; if not the token-holder and the queue was previously empty, send REQUEST to Holder.

Non-root receives REQUEST: queue it; if no prior REQUEST sent upward, forward REQUEST to Holder.

Root receives REQUEST: send token to requester; update Holder. Token migrates downward through Holder chain.

On receiving the token: pop the queue head; if it's self, enter CS; else forward the token there and update Holder. If the queue is still non-empty, send REQUEST toward the new Holder.

Cost: O(log N) msgs per CS in a balanced tree. SD ≈ (T log N)/2.

Trade-off: requests aggregate as they flow up — efficient. Bottleneck: root.
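The Holder pointer and request aggregation can be sketched as follows (node and method names are mine; each returned message is a (type, destination) pair the caller must deliver):

```python
class RaymondNode:
    """One node in Raymond's tree algorithm (sketch); caller delivers messages."""
    def __init__(self, node_id, holder):
        self.id = node_id
        self.holder = holder          # neighbour toward the token; self if holding it
        self.queue = []               # FIFO of requesters (self.id or a neighbour id)
        self.asked = False            # outstanding REQUEST toward holder?
        self.in_cs = False

    def request(self):                # local process wants the CS
        self.queue.append(self.id)
        return self._step()

    def on_request(self, sender):     # REQUEST arrived from a tree neighbour
        self.queue.append(sender)     # aggregation: one upward REQUEST covers all
        return self._step()

    def on_token(self):               # token arrived from the old holder
        self.holder, self.asked = self.id, False
        return self._step()

    def release(self):                # local process leaves the CS
        self.in_cs = False
        return self._step()

    def _step(self):
        msgs = []
        if self.holder == self.id and not self.in_cs and self.queue:
            head = self.queue.pop(0)
            if head == self.id:
                self.in_cs = True                 # our turn: enter the CS
            else:
                self.holder = head                # token moves toward the requester
                msgs.append(("TOKEN", head))
        if self.holder != self.id and self.queue and not self.asked:
            self.asked = True                     # re-request for remaining waiters
            msgs.append(("REQUEST", self.holder))
        return msgs

# Chain 0 — 1 — 2; node 0 holds the token, node 2 requests the CS.
n0, n1, n2 = RaymondNode(0, 0), RaymondNode(1, 0), RaymondNode(2, 1)
assert n2.request() == [("REQUEST", 1)]    # 2 asks its parent 1
assert n1.on_request(2) == [("REQUEST", 0)]  # 1 forwards one REQUEST upward
assert n0.on_request(1) == [("TOKEN", 1)]  # root sends the token down
assert n1.on_token() == [("TOKEN", 2)]     # token migrates along Holder chain
n2.on_token()
```

Each hop flips one Holder pointer, so after the run the tree is re-rooted at node 2.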

The Big Table

| Algo | Type | Msgs/CS | SD | Assumption |
|---|---|---|---|---|
| Lamport | Non-token | 3(N−1) | T | FIFO |
| Ricart-Agrawala | Non-token | 2(N−1) | T | None |
| Maekawa V1 | Quorum | 3√N | 2T | Deadlocks |
| Maekawa V2 | Quorum | up to 5√N | 2T | Deadlock-free |
| Suzuki-Kasami | Token (broadcast) | 0 or N | 0 or T | Token loss = problem |
| Raymond | Token (tree) | O(log N) | (T log N)/2 | Root bottleneck |

Memorise this table. Every DME question asks variations of these numbers.

Why Lamport Needs FIFO But R-A Doesn't

Lamport relies on REQUEST and RELEASE arriving in order — RELEASE removes a queue entry that REQUEST added. If RELEASE could overtake REQUEST (non-FIFO), you'd remove a non-existent entry.

R-A has no RELEASE at all. Deferred REPLYs encode the ordering implicitly — no FIFO needed.

What You Walk In Carrying

Three properties (safety, liveness, fairness) + four metrics. Both algorithm families. The five algorithms with their exact message counts, SD, assumptions. Why Lamport needs FIFO. Maekawa quorum requirements + optimality + V1's cyclic deadlock + V2's three new messages. Suzuki-Kasami's freshness condition. Raymond's Holder pointer + tree migration. The comparison table.