Architecture

Cassandra Architecture

Cassandra forgoes the widely used master-slave setup in favor of a peer-to-peer cluster. This gives Cassandra no single point of failure: there is no master server that, when overloaded or down, would render all of its slaves useless. Any number of commodity servers can be grouped into a Cassandra cluster.

This architecture is a lot more complex to implement behind the scenes, but we won't have to deal with that: the nice folks working on the Cassandra core bust their heads against the quirks of distributed systems so we don't have to.

Not having to distinguish between a Master and a Slave node allows you to add any number of machines to any cluster in any datacenter, without having to worry about what type of machine you need at the moment. Every server accepts requests from any client. Every server is equal.

Architecture Details

CAP theorem

The CAP theorem (Brewer) states that you have to pick two of Consistency, Availability, and Partition tolerance: you cannot have all three at the same time while keeping latency acceptable.

Cassandra values Availability and Partition tolerance (AP). Tradeoffs between consistency and latency are tunable in Cassandra: you can get strong consistency, at the cost of increased latency. But you cannot get row locking: that is a definite win for HBase.
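
For example, with the DataStax Python driver the consistency level can be chosen per query, so a stronger level such as QUORUM trades some latency for consistency. This is a minimal sketch; the contact point, keyspace, and table are placeholders, not part of the original text.

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])            # placeholder contact point
session = cluster.connect("demo_keyspace")  # placeholder keyspace

# QUORUM waits for a majority of replicas: stronger consistency, higher latency.
query = SimpleStatement(
    "SELECT * FROM users WHERE user_id = %s",
    consistency_level=ConsistencyLevel.QUORUM,
)
row = session.execute(query, ("alice",)).one()
```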

History and approaches

Two famous papers

  • Bigtable: A Distributed Storage System for Structured Data, 2006
  • Dynamo: Amazon's Highly Available Key-value Store, 2007

Two approaches

  • Bigtable: “How can we build a distributed db on top of GFS?”
  • Dynamo: “How can we build a distributed hash table appropriate for the data center?”

Cassandra 10,000 ft summary

  • Dynamo partitioning and replication
  • Log-structured ColumnFamily data model similar to Bigtable’s

Cassandra highlights

  • High availability
  • Incremental scalability
  • Eventually consistent
  • Tunable tradeoffs between consistency and latency
  • Minimal administration
  • No SPOF (single point of failure)

The peer-to-peer distribution model, which also drives the consistency model, means there is no single point of failure.

Key distribution and partitioning

Dynamo architecture & Lookup

In a ring of nodes A, B, C, D, E, F, and G, nodes B, C, and D store the keys in the range (a, b), including key k.

In Cassandra you can decide where on the ring a node sits, and therefore which range of keys it is responsible for, by setting the InitialToken parameter for your Partitioner.
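
To make the ring lookup concrete, here is a minimal Python sketch of consistent hashing with replication. It is not Cassandra code: the node names, the MD5-based token function, and the replication factor of 3 are assumptions for illustration. A key is hashed to a token, the node owning the next token on the ring is its primary, and the following nodes hold the replicas.

```python
import bisect
import hashlib

class TokenRing:
    """Toy consistent-hash ring: a node owns the token range ending at its
    own token, and replicas go to the next nodes clockwise on the ring."""

    def __init__(self, nodes, replication_factor=3):
        self.replication_factor = replication_factor
        # Derive each node's token from its name; a real cluster would use
        # the partitioner and an explicitly assigned InitialToken instead.
        self.ring = sorted((self._token(name), name) for name in nodes)

    @staticmethod
    def _token(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def replicas_for(self, key):
        """Return the nodes responsible for `key`: the node with the first
        token >= the key's token, plus the next replication_factor - 1 nodes."""
        tokens = [token for token, _ in self.ring]
        start = bisect.bisect_left(tokens, self._token(key)) % len(self.ring)
        return [self.ring[(start + i) % len(self.ring)][1]
                for i in range(self.replication_factor)]

ring = TokenRing(["A", "B", "C", "D", "E", "F", "G"])
# Prints three consecutive nodes on the ring; which ones depends on the hashes.
print(ring.replicas_for("k"))
```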

Architecture details

  • O(1) node lookup
  • Explicit replication
  • Eventually consistent

Architecture layers

  • Core Layer: Messaging service, Gossip, Failure detection, Cluster state, Partitioner, Replication
  • Middle Layer: Commit log, Memtable, SSTable, Indexes, Compaction
  • Top Layer: Tombstones, Hinted handoff, Read repair, Bootstrap, Monitoring, Admin tools

Writes

  • Any node
  • Partitioner
  • Commit log, Memtable
  • SSTable
  • Compaction
  • Wait for W responses

Write model:

There are two write modes:

  • Quorum write: blocks until quorum is reached
  • Async write: sends the request to any node. That node pushes the data to the appropriate nodes but returns to the client immediately

If a target node is down, the write goes to another node with a hint saying where it should eventually be written. A harvester process goes through the stored hints every 15 minutes and moves the data to the appropriate node.
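
The two write modes and hinted handoff can be sketched in a few lines of Python. Everything below is an in-memory stand-in (dicts instead of nodes and RPCs); the replica names and the harvester trigger are illustrative assumptions, not Cassandra's actual implementation.

```python
# Per-replica key/value stores; pretend replica D is currently down.
replica_data = {"B": {}, "C": {}, "D": {}}
alive = {"B", "C"}
hints = []   # hinted writes waiting to be replayed

def coordinator_write(key, value, replicas, w):
    """Send the write to every replica, store a hint for each replica that
    is down, and succeed only if at least `w` replicas acknowledged."""
    acks = 0
    for replica in replicas:
        if replica in alive:
            replica_data[replica][key] = value   # stands in for the real write RPC
            acks += 1
        else:
            # Hinted handoff: remember where the write should go so it can
            # be replayed once the replica comes back.
            hints.append((replica, key, value))
    if acks < w:
        raise RuntimeError(f"only {acks} of the required {w} replicas acknowledged")

def replay_hints():
    """What the harvester does periodically: deliver hints to recovered replicas."""
    for replica, key, value in list(hints):
        if replica in alive:
            replica_data[replica][key] = value
            hints.remove((replica, key, value))

coordinator_write("k", "v", replicas=["B", "C", "D"], w=2)   # quorum of 3 succeeds
alive.add("D")     # D comes back up
replay_hints()     # the hinted write is delivered to D
```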

Write path

At write time,

  • you first write to a disk commit log (sequential)
  • After the write to the log, the mutation is sent to the appropriate nodes
  • Each node receiving the write first records it in a local log, then updates the appropriate memtables (one for each column family). A memtable is Cassandra's in-memory representation of key/value pairs before the data gets flushed to disk as an SSTable.
  • Memtables are flushed to disk when:
    • Out of space
    • Too many keys (128 is default)
    • Time duration (client provided – no cluster clock)
  • When memtables are written out, two files go out:
    • Data File (SSTable). An SSTable (terminology borrowed from Google) stands for Sorted Strings Table and is a file of key/value string pairs, sorted by keys.
    • Index File (SSTable Index). (Similar to Hadoop MapFile / Tfile)
      • (Key, offset) pairs (points into data file)
      • Bloom filter (all keys in data file). A Bloom filter is a space-efficient probabilistic data structure used to test whether an element is a member of a set. False positives are possible, but false negatives are not. Cassandra uses Bloom filters to save I/O when performing a key lookup: each SSTable has a Bloom filter associated with it that Cassandra checks before doing any disk seeks, making queries for keys that don't exist almost free. Bloom filters are surprisingly simple: divide a memory area into buckets (one bit per bucket for a standard Bloom filter; more, typically four, for a counting Bloom filter). To insert a key, generate several hashes of the key and mark the bucket for each hash. To check whether a key is present, check each of those buckets; if any bucket is empty, the key was never inserted into the filter. If all buckets are non-empty, though, the key is only probably present, since other keys' hashes could have covered the same buckets. (A minimal sketch in code follows this list.)
  • When a commit log has had all its column families pushed to disk, it is deleted
  • Compaction: data files accumulate over time. Periodically, data files are merge-sorted into a new file (and a new index is created):
    • Merge keys
    • Combine columns
    • Discard tombstones
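
Here is the minimal Bloom filter sketch promised above (the standard one-bit-per-bucket variant). The bit-array size, number of hashes, and MD5-based hashing are arbitrary illustrative choices, not what Cassandra actually uses.

```python
import hashlib

class BloomFilter:
    """Standard Bloom filter: one bit per bucket, several hashes per key."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _buckets(self, key):
        # Derive `num_hashes` bucket indexes from the key.
        for i in range(self.num_hashes):
            digest = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, key):
        for bucket in self._buckets(key):
            self.bits[bucket] = True

    def might_contain(self, key):
        # False means "definitely absent"; True means "probably present".
        return all(self.bits[bucket] for bucket in self._buckets(key))

bf = BloomFilter()
bf.add("row-42")
print(bf.might_contain("row-42"))   # True
print(bf.might_contain("row-999"))  # almost certainly False: the disk seek is skipped
```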

Write properties

  • No reads
  • No seeks
  • Fast
  • Atomic within ColumnFamily
  • Always writable

Remove

A deletion marker (tombstone) is necessary to suppress data in older SSTables until compaction. Read repair complicates things a little; eventual consistency complicates things more. The solution is a configurable delay before tombstone GC, after which tombstones are not repaired.
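
A toy sketch of the compaction merge this relies on: several SSTables (modeled as dicts of key to (value, timestamp)) are merge-sorted so the newest write wins, and tombstones older than a GC grace delay are discarded. The data layout and the gc_grace_seconds value are assumptions for illustration.

```python
TOMBSTONE = "<tombstone>"   # stands in for Cassandra's deletion marker

def compact(sstables, now, gc_grace_seconds=864000):
    """Merge several SSTables: for each key keep the newest-timestamped value,
    then drop tombstones that are older than the GC grace delay."""
    merged = {}
    for sstable in sstables:
        for key, (value, timestamp) in sstable.items():
            if key not in merged or timestamp > merged[key][1]:
                merged[key] = (value, timestamp)
    return {
        key: (value, timestamp)
        for key, (value, timestamp) in sorted(merged.items())
        if not (value == TOMBSTONE and now - timestamp > gc_grace_seconds)
    }

old = {"a": ("1", 100), "b": ("2", 100)}
new = {"a": (TOMBSTONE, 200), "c": ("3", 200)}    # 'a' was deleted later
print(compact([old, new], now=250))               # tombstone for 'a' is still kept
print(compact([old, new], now=200 + 864001))      # tombstone for 'a' is discarded
```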

Read

Read path

  • Any node
  • Partitioner
  • Wait for R responses
  • Wait for N - R responses in the background and perform read repair
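
A toy sketch of that read path, with in-memory dicts standing in for replicas (names, values, and timestamps are illustrative): the coordinator takes the first R responses, returns the value with the newest timestamp, and pushes that value back to any replica holding a stale copy (read repair).

```python
# Each replica stores (value, timestamp) pairs; replica C holds a stale copy.
replicas = {
    "B": {"k": ("new", 200)},
    "C": {"k": ("old", 100)},
    "D": {"k": ("new", 200)},
}

def coordinator_read(key, r):
    """Answer from the first `r` responses, then repair stale replicas
    in the background with the newest value seen."""
    responses = [(name, store.get(key)) for name, store in replicas.items()]
    answered = responses[:r]
    newest = max((value for _, value in answered if value), key=lambda v: v[1])
    # Read repair: overwrite any copy that differs from the newest value.
    for name, value in responses:
        if value != newest:
            replicas[name][key] = newest
    return newest[0]

print(coordinator_read("k", r=2))   # 'new'
print(replicas["C"]["k"])           # ('new', 200): C was repaired
```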

Cassandra read properties

  • Read multiple SSTables
  • Slower than writes (but still fast)
  • Seeks can be mitigated with more RAM
  • Scales to billions of rows

Consistency

Consistency describes how and whether a system is left in a consistent state after an operation. In distributed data systems like Cassandra, this usually means that once a writer has written, all readers will see that write.

In contrast to the strong consistency used in most relational databases (ACID, for Atomicity, Consistency, Isolation, Durability), Cassandra sits at the other end of the spectrum (BASE, for Basically Available, Soft-state, Eventual consistency). Cassandra's weak consistency comes in the form of eventual consistency, which means the database eventually reaches a consistent state. As the data is replicated, the latest version of something is sitting on some node in the cluster while older versions are still out there on other nodes, but eventually all nodes will see the latest version.

More specifically, let R = read replica count, W = write replica count, N = replication factor, and Q = QUORUM (Q = N / 2 + 1).

  • If W + R > N, you will have consistency. For example:
    • W=1, R=N
    • W=N, R=1
    • W=Q, R=Q where Q = N / 2 + 1

Cassandra provides consistency when R + W > N (read replica count + write replica count > replication factor). A ConsistencyLevel of ONE means R or W is 1. A ConsistencyLevel of QUORUM means R or W is ceiling((N+1)/2). A ConsistencyLevel of ALL means R or W is N. So if you want to write with a ConsistencyLevel of ONE and then get the same data back when you read, you need to read with ConsistencyLevel ALL.
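
This arithmetic can be checked with a few lines of Python (a sketch of the rule above; the mapping from consistency level names to replica counts follows the text):

```python
import math

def replica_count(level, n):
    """Number of replicas a consistency level waits for, given replication factor n."""
    return {"ONE": 1, "QUORUM": math.ceil((n + 1) / 2), "ALL": n}[level]

def is_strongly_consistent(write_level, read_level, n):
    """Reads are guaranteed to see the latest write whenever W + R > N."""
    return replica_count(write_level, n) + replica_count(read_level, n) > n

n = 3
print(is_strongly_consistent("QUORUM", "QUORUM", n))  # True:  2 + 2 > 3
print(is_strongly_consistent("ONE", "QUORUM", n))     # False: 1 + 2 = 3
print(is_strongly_consistent("ONE", "ALL", n))        # True:  1 + 3 > 3
```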
