Reasons for an Unbalanced Cassandra Cluster

Sometimes an Apache Cassandra cluster can end up in an unbalanced state, that is, with data unevenly distributed across the cluster or across a node's locally configured data directories. This can happen for a number of reasons. In this blog post, I will cover two basic ones:

  1. Configuration
  2. Data Model and data distribution

Configuration

The main configuration option that affects data distribution is the num_tokens value set in cassandra.yaml. By default it is set to 256, but it can be changed before a node is bootstrapped. num_tokens is particularly useful when building a non-uniform Cassandra cluster, as it lets you weight the amount of data each node receives according to its capacity.
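For example, in a cluster with nodes of different sizes you could assign proportionally more tokens to the larger node. The values below are a sketch and purely illustrative, not taken from the post:

    # cassandra.yaml on a node with roughly double the capacity
    num_tokens: 512

    # cassandra.yaml on a smaller node
    num_tokens: 256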

Data Model and Data Distribution

Data can also end up unevenly distributed if your data model is not designed for your data profile. You need a good understanding of your domain, and your data model must fit it. In Cassandra, the partition key determines how data is distributed across the cluster, so it must be chosen carefully.

Example

The best way to understand the two points above is via an example. We are going to start off with a single-node cluster. This node is configured with a num_tokens value of 256 and has two data directories (/var/lib/cassandra/data and /var/lib/cassandra/data1) specified in the data_file_directories property.
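A minimal sketch of the relevant cassandra.yaml settings on this node (only the options discussed here are shown):

    num_tokens: 256
    data_file_directories:
        - /var/lib/cassandra/data
        - /var/lib/cassandra/data1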

In this example, we will:

  1. Bootstrap a single node.
  2. Add data to this node.
  3. Add a second node to the cluster and examine how this influences data distribution.

To carry out the test, I created the following schema.

The distribution_test keyspace has a replication factor of 1 and uses SimpleStrategy for replication. It contains a single table called dd_test, which has a composite primary key made up of a partition key and a clustering key, aptly named partition_key and cluster_key respectively.
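The schema itself appears as a screenshot in the original post; the CQL below is a sketch consistent with the description above (the column types and the extra value column are assumptions):

    CREATE KEYSPACE distribution_test
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

    CREATE TABLE distribution_test.dd_test (
        partition_key uuid,
        cluster_key   uuid,
        value         text,
        PRIMARY KEY (partition_key, cluster_key)
    );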

I have added data to the dd_test table using the following Python script.
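The script is embedded as an image in the original post, so the sketch below follows its description; the contact point and payload column are assumptions, while the row count of 100,000 comes from the post:

    from uuid import uuid4

    from cassandra.cluster import Cluster  # DataStax Python driver (pip install cassandra-driver)

    cluster = Cluster(['172.20.0.3'])          # contact point is an assumption
    session = cluster.connect('distribution_test')

    insert = session.prepare(
        "INSERT INTO dd_test (partition_key, cluster_key, value) VALUES (?, ?, ?)"
    )

    # A fresh partition_key on every insert spreads the rows across the whole token range.
    for i in range(100000):
        session.execute(insert, (uuid4(), uuid4(), 'payload-%d' % i))

    cluster.shutdown()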

Notice the insert statement: the partition key changes on every insert. As we will see later in the post, this is key to ensuring even data distribution.

After inserting 100,000 records, I ran nodetool status and got the following values:

As expected, all the data is owned by the single node. I also checked how the data is distributed across the two data directories: it is evenly spread out, as the screenshot below shows.

Bootstrapping A New Cassandra Node

Let's add a new node to this cluster. When adding a new node, ensure that the auto_bootstrap property is set to true. This ensures the new node automatically joins the cluster and streams its share of the data without manual intervention. On bootstrapping node2, I saw the following log output:

The auto bootstrap process:

  1. Added the node to the configured cluster.
  2. Calculated its tokens and set up the new token ranges across the two nodes.
  3. Streamed data from node1 to node2, i.e. the data that used to belong to node1 but now belongs to node2.
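For reference, the node2 settings that matter for joining the cluster might look like the sketch below; the cluster name is an assumption, and auto_bootstrap is shown explicitly here even though it defaults to true:

    cluster_name: 'Test Cluster'
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "172.20.0.3"
    num_tokens: 256
    auto_bootstrap: true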

After bootstrapping the node, I ran nodetool status and got the following output:

Note that node1 (172.20.0.3) still has 64.18 MB of data while node2 (172.20.0.4) has 31 MB. This seems a bit odd, as each node should now own roughly half of the cluster's data. The imbalance arises because node1 not only contains the data it is responsible for but also still holds the data that now belongs to node2. Running nodetool cleanup fixes this, as shown in the screenshot below:

How has the second node affected the spread of data across the two data directories on node1? As expected, the data in the two directories on node1 is still evenly spread, but each directory holds half of what it used to.

All in all, this is a happy situation as our Apache Cassandra cluster is in a balanced state.

Let's bootstrap node2 again, but this time configured with a num_tokens value of 10:
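The only change in node2's cassandra.yaml is the token count:

    # cassandra.yaml on node2 (node1 keeps the default of 256)
    num_tokens: 10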

Running nodetool status again, notice that node1 now holds over 95% of the data. This is expected given our num_tokens configuration: node2 claims far fewer token ranges, and therefore far less of the data, than node1.

Data Modelling Errors

This is the most frequent reason for an unbalanced cluster. We are again going to insert data into the dd_test table, but this time we will write every row to a single partition. The code used to insert this data is sketched below; the only difference from the code above is the insert loop, which no longer changes the partition key.
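As before, the original code is an image in the post; the sketch below mirrors the first script with the partition key held constant (the row count and payload are assumptions):

    from uuid import uuid4

    from cassandra.cluster import Cluster

    cluster = Cluster(['172.20.0.3'])
    session = cluster.connect('distribution_test')

    insert = session.prepare(
        "INSERT INTO dd_test (partition_key, cluster_key, value) VALUES (?, ?, ?)"
    )

    # The partition key is now fixed, so every row lands in the same partition.
    fixed_partition_key = uuid4()
    for i in range(1000000):
        session.execute(insert, (fixed_partition_key, uuid4(), 'payload-%d' % i))

    cluster.shutdown()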

The above program ends up inserting all the data into a single partition. On running nodetool status, we see the following:

We have inserted 485.63 MB of data into node1. Let's examine how this data is distributed across the two data directories.

Note that the data1 directory holds the majority of the data. This is because data is distributed across disks by evenly splitting the node's token ranges between them. Since all our data was inserted into a single partition, it all falls into one token range and therefore ends up on a single disk.

Let's add a new node to the cluster and see how this influences data distribution.

Note that adding a new node has resulted in a highly imbalanced data distribution. This is again because we inserted all the data into a single partition: a partition cannot be split across nodes, so all of it stays on the one node that owns that partition's token.

I hope this has helped you understand why we might end up with unbalanced Cassandra nodes.

Moral of the story: make sure you put effort into designing your schema correctly.

2 Responses to Reasons for an Unbalanced Cassandra Cluster

  1. Francisco Andrade February 28, 2018 at 6:30 pm

    Hi there, nice post!

    But there is an error in the inserts.
    You duplicated the partition_key column name.
    You misplaced the second one instead of using the cluster_key column.

    Thanks for sharing the knowledge.

    Regards

    • Akhil April 15, 2018 at 12:25 am

      Fixed. Thanks for pointing out the error.
