Apache Kafka Architecture and Its Components: The A-Z Guide


By ProjectPro

A detailed introduction to the architecture of Apache Kafka, one of the most popular messaging systems for distributed applications.

The first COVID-19 cases were reported in the United States in January 2020. By the end of the year, over 200,000 cases were reported per day, climbing to 250,000 cases in early 2021. Responding to a pandemic on such a large scale involves technical as well as public health challenges. One of those challenges was keeping track of data coming in from many streams in multiple formats. The CELR (COVID Electronic Lab Reporting) program of the Centers for Disease Control and Prevention (CDC) was established to validate, transform, and aggregate laboratory testing data submitted by public health departments and other partners. Kafka Streams and Kafka Connect were used to track the threat of the COVID-19 virus and analyze the data for a more thorough response at the local, state, and federal levels.


Kafka is an integral part of Netflix’s real-time monitoring and event-processing pipeline. Netflix’s Keystone data pipeline processes over 500 billion events a day. These events include error logs, data on user viewing activity, and troubleshooting events, among other valuable datasets.



At LinkedIn, Kafka is the backbone behind various products, including LinkedIn Newsfeed and LinkedIn Today. Spotify uses Kafka as part of its log delivery system.

Kafka is used by thousands of companies today, including over 60% of the Fortune 100, among them Box, Goldman Sachs, Target, Cisco, and Intuit. Apache Kafka is one of the most popular open-source distributed streaming platforms for processing large volumes of streaming data from real-time applications.

 


Why is Apache Kafka so popular?

So what makes Kafka such a popular choice for companies?

  • Scalability: A system’s scalability is determined by how well it maintains performance as application and processing demands grow. Apache Kafka’s distributed architecture can absorb growth in message volume and velocity, so Kafka scales without incurring downtime.

  • High Throughput: Apache Kafka can handle thousands of messages per second. Messages arriving at high volume, high velocity, or both do not degrade Kafka’s performance.

  • Low Latency: Latency refers to the amount of time taken for a system to process a single event. Kafka offers very low latency, as low as ten milliseconds.

  • Fault Tolerance: By using replication, Kafka can handle failures at nodes in a cluster without any data loss, and running processes can remain undisturbed. The replication factor determines the number of replicas for a partition: with a replication factor of ‘n’, Kafka guarantees fault tolerance for up to n-1 server failures in the cluster.

  • Reliability: Apache Kafka is a distributed platform with very high fault tolerance, making it a very reliable system to use.

  • Durability: Data written to the Kafka cluster is persisted to disk and replicated across brokers, so messages remain durable even if a broker fails.

  • Ability to handle real-time data: Kafka supports real-time data handling and is an excellent choice when data has to be processed in real-time.

Apache Kafka Architecture Explained: Overview of Kafka Components

Let’s look in detail at the architecture of Apache Kafka and the relationships between its architectural components to develop a deeper understanding of Kafka for distributed streaming. But before delving into the components, it is crucial to grasp the concept of a Kafka cluster.

What is a Kafka Cluster? 

A Kafka cluster is a distributed system composed of multiple Kafka brokers working together to handle the storage and processing of real-time streaming data. It provides fault tolerance, scalability, and high availability for efficient data streaming and messaging in large-scale applications.

Apache Kafka Components and Its Architectural Concepts


Topics

A stream of messages that are a part of a specific category or feed name is referred to as a Kafka topic. In Kafka, data is stored in the form of topics. Producers write their data to topics, and consumers read the data from these topics. 
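For illustration, here is a minimal sketch of creating a topic programmatically with Kafka’s Java AdminClient; the broker address, topic name, partition count, and replication factor below are assumed values for the example, not prescribed ones:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Any reachable broker works; the client discovers the rest of the cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Hypothetical topic: 3 partitions, each replicated to 2 brokers.
            NewTopic topic = new NewTopic("page-views", 3, (short) 2);
            admin.createTopics(List.of(topic)).all().get(); // block until created
        }
    }
}
```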


Brokers

A Kafka cluster comprises one or more servers known as brokers. In Kafka, a broker works as a container that holds multiple topics with their partitions. Each broker in the cluster is identified by a unique integer ID. Connecting to any one of the Kafka brokers implies a connection with the whole cluster. When a cluster has more than one broker, no single broker needs to hold the complete data for a particular topic, since a topic’s partitions are spread across brokers.

Consumers and Consumer Groups

Consumers read data from the Kafka cluster. Data is pulled from the broker when the consumer is ready to receive it. A consumer group in Kafka refers to a set of consumers that pull data from the same topic or set of topics; within a group, each partition is consumed by exactly one member, which is how Kafka parallelizes consumption.
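As a rough sketch of a consumer participating in a group (the broker address, topic name, and group id are assumptions for the example), the Java client subscribes to a topic and pulls records in a loop:

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ConsumerGroupExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Consumers sharing the same group.id split the topic's partitions among themselves.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "page-view-processors");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("page-views"));
            while (true) {
                // poll() pulls records from the broker; nothing is pushed to the consumer.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```

Running a second copy of this program with the same group.id would cause Kafka to rebalance the topic’s partitions between the two instances.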


Producers

Producers in Kafka publish messages to one or more topics; they send data to the Kafka cluster. Whenever a producer publishes a message, the broker receives it and appends it to a particular partition. Producers can also target a specific partition of their choosing.
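A minimal producer sketch using the Java client (the broker address, topic, keys, and values are illustrative assumptions):

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // With a key, every record for "user-42" lands on the same partition.
            producer.send(new ProducerRecord<>("page-views", "user-42", "viewed /home"));
            // A partition can also be chosen explicitly (partition 0 here).
            producer.send(new ProducerRecord<>("page-views", 0, "user-7", "viewed /search"));
        } // closing the producer flushes any buffered records
    }
}
```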

Partitions

Topics in Kafka are divided into a configurable number of parts known as partitions, which allow several consumers to read data from a particular topic in parallel. Each partition is an ordered, immutable sequence of records. The number of partitions is specified when a topic is created, and it can be increased later (though not decreased). The partitions comprising a topic are distributed across the servers in the Kafka cluster, with each server handling the data and requests for its share of partitions. Messages sent to the broker may carry a key, which determines the partition a message goes to: all messages with the same key go to the same partition. If no key is specified, the partition is decided in a round-robin fashion (newer client versions use a “sticky” strategy that fills one batch before moving to the next partition), as sketched below.
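The sketch below is illustrative only: Kafka’s actual default partitioner hashes the serialized key with murmur2, but the hash-then-modulo principle that keeps equal keys on the same partition is the same.

```java
public class PartitionSketch {
    // Illustrative only: the real default partitioner uses a murmur2 hash of the
    // serialized key, but the idea is identical: hash the key, then take it
    // modulo the partition count, so equal keys always map to the same partition.
    static int choosePartition(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions; // force non-negative
    }

    public static void main(String[] args) {
        // "user-42" maps to the same partition on every call.
        System.out.println(choosePartition("user-42", 3));
        System.out.println(choosePartition("user-42", 3));
    }
}
```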

Partition Offset

Messages or records in Kafka are assigned to partitions. To specify a record’s position within its partition, each record carries an offset; a record can be uniquely identified within its partition by that offset value. A partition offset carries meaning only within its own partition. Since records are appended to the end of a partition, older records have lower offset values.

Replicas

Replicas are like backups for partitions in Kafka, ensuring no data loss in the event of a failure or a planned shutdown. The partitions of a topic are replicated across multiple servers in the Kafka cluster, and these copies of a partition are known as replicas.

Leader and Follower

Every partition in Kafka will have one server that plays the role of a leader for that particular partition. The leader is responsible for performing all the read and write tasks for the partition. Each partition can have zero or more followers. The duty of the follower is to replicate the data of the leader. In the event of a failure in the leader for a particular partition, one of the follower nodes can take on the role of the leader.
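With recent versions of the Java AdminClient (allTopicNames requires client 3.1 or newer), the current leader, replicas, and in-sync replicas of each partition can be inspected; a sketch, with the topic name assumed:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

import java.util.List;
import java.util.Properties;

public class LeaderInfoExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(List.of("page-views"))
                    .allTopicNames().get().get("page-views");
            for (TopicPartitionInfo p : desc.partitions()) {
                // Each partition reports its current leader, all replicas,
                // and the subset of replicas that are in sync (the ISR).
                System.out.printf("partition=%d leader=%s replicas=%s isr=%s%n",
                        p.partition(), p.leader(), p.replicas(), p.isr());
            }
        }
    }
}
```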


Apache Kafka Event-Driven Workflow Orchestration


Kafka Producers

In Kafka, producers send data directly to the broker that is the leader for a given partition. To help producers do this, the nodes of the Kafka cluster answer metadata requests about which servers are alive and where the leaders of a topic’s partitions currently reside, so that the producer can direct its requests accordingly. The client decides which partition to publish its messages to, either arbitrarily or by using a partitioning key, in which case all messages with the same partition key are sent to the same partition.

Messages in Kafka are sent in batches, known as record batches. Producers accumulate messages in memory and send a batch either once a fixed number of messages has accumulated or once a fixed latency bound has elapsed, whichever comes first.

Kafka Brokers

A Kafka cluster usually contains multiple nodes, known as brokers, to maintain load balance. Brokers are stateless, so cluster state is maintained by ZooKeeper. A single Kafka broker can handle hundreds of thousands of reads and writes per second. For a given partition, one broker serves as the leader; the leader may have one or more followers, with the leader’s data replicated across those followers. The leader role for different partitions is distributed across the brokers in the cluster.

The nodes in a cluster send heartbeat messages to ZooKeeper to signal that they are alive. The followers must stay caught up with the leader’s data, and the leader keeps track of the followers that are “in sync” with it. If a follower dies or falls behind, it is removed from the list of in-sync replicas (ISRs) associated with that leader. If the leader dies, a new leader is selected from among the followers, an election coordinated by ZooKeeper.

Kafka Consumers

In Kafka, a consumer issues requests to brokers indicating the partitions it wants to consume. The consumer specifies its offset in the request and receives back a chunk of the log beginning at that offset. Since the consumer controls this position, it can re-consume data if required. Records remain in the log for a configurable time period known as the retention period, and a consumer may re-consume data for as long as it is present in the log.

In Kafka, the consumers work on a pull-based approach. This means that data is not immediately pushed onto the consumers from the brokers. The consumers have to send requests to the brokers to indicate that they are ready to consume the data. A pull-based system ensures that the consumer does not get overwhelmed with messages and can fall behind and catch up when it can. A pull-based system can also allow aggressive batching of data sent to the consumer since the consumer will pull all available messages after its current position in the log. In this manner, batching is performed without any unnecessary latency.
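As a sketch of this offset control (the broker address, topic, and partition are assumptions for the example), a consumer can assign itself a partition, rewind to the beginning of the retained log, and re-consume:

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ReplayExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition partition = new TopicPartition("page-views", 0);
            consumer.assign(List.of(partition));        // manual assignment, no group
            consumer.seekToBeginning(List.of(partition)); // rewind to the earliest retained offset
            ConsumerRecords<String, String> replayed = consumer.poll(Duration.ofSeconds(1));
            System.out.println("re-consumed " + replayed.count() + " records");
        }
    }
}
```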

End to End Batch Compression

To efficiently handle large volumes of data, Kafka compresses messages. Efficient compression means compressing multiple messages together rather than individually. Because Apache Kafka supports an efficient batching format, a batch of messages can be compressed together and sent to the server as one unit. The batch is written to the broker in compressed form and remains compressed in the log until the consumer extracts and decompresses it.
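On the Java producer, batch compression is a single configuration property; a minimal sketch (the choice of lz4 here is illustrative, not a recommendation):

```java
import org.apache.kafka.clients.producer.ProducerConfig;

import java.util.Properties;

public class CompressionConfig {
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Whole record batches are compressed on the producer; brokers store
        // them compressed and consumers decompress them on read.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // also: gzip, snappy, zstd
        return props;
    }
}
```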


The Role of ZooKeeper in Apache Kafka Architecture


Apache ZooKeeper is a centralized service used to maintain configuration data and provide flexible yet robust synchronization for distributed systems. In a Kafka deployment, ZooKeeper manages and coordinates the brokers in the cluster: it maintains a list of the brokers and notifies producers and consumers about the arrival of new brokers or the failure of existing ones, so clients can coordinate with another broker accordingly. ZooKeeper also stores information about the Kafka cluster and details about consumer clients. Finally, ZooKeeper is responsible for leader election for partitions: if a leader node fails, ZooKeeper coordinates the election of the next leader for the partition.

Up until Kafka 2.8.0, it was not possible to run a Kafka cluster without ZooKeeper. With the 2.8.0 release, however, the Kafka team introduced an alternative mode in which a cluster runs without ZooKeeper, using an internal implementation of the Raft consensus algorithm instead (known as KRaft mode). The changes are outlined in KIP-500 (Kafka Improvement Proposal 500). The goal is to move topic metadata and configurations out of ZooKeeper and into a new internal topic, named @metadata, which is managed by an internal Raft quorum of controllers and replicated to all brokers in the cluster.


Achieving Performance Tuning in Apache Kafka

Optimal performance involves two key measures: latency and throughput. Latency refers to the time taken to process one event, so lower latency means better performance. Throughput denotes the number of events that can be processed in a given amount of time, so the goal is higher throughput. Many systems optimize one at the expense of the other; Kafka is designed to balance the two.

Tuning Apache Kafka for optimal performance involves:

  • Tuning the Kafka Producer: Data that producers publish to the brokers is stored in a batch and sent only when the batch is ready. Two parameters are key to tuning producers (both appear in the config sketch after this list):

    • Batch Size: The batch size should be chosen based on the volume of messages the producer sends. Producers that send messages frequently work better with larger batch sizes, maximizing throughput without heavily compromising latency. For producers that send messages infrequently, a smaller batch size is preferred: a very large batch may never fill up, or take a long time to do so, which increases latency and hurts performance.

    • Linger Time: The linger time adds a small delay so that more records can fill up the batch, allowing larger batches to be sent. A longer linger time lets more messages be sent in one batch at the cost of latency; a shorter linger time sends fewer messages faster, giving lower latency but reduced throughput.

  • Tuning Kafka Brokers: Every partition has a leader associated with it and zero or more followers for the leader. While the Kafka cluster is running, due to failures in some of the brokers or due to reallocation of partitions, an imbalance may occur among the brokers in the cluster. Some brokers might be overworked compared to others. In such cases, it is important to monitor the brokers and ensure that the workload is balanced across the various brokers present in the cluster.

  • Tuning Kafka Consumers: When tuning consumers, keep in mind that a consumer can read from many partitions, but within a consumer group each partition is read by only one consumer. A good practice is to keep the number of consumers equal to or lower than the partition count. With fewer consumers than partitions, it helps if the partition count is an exact multiple of the number of consumers so that partitions divide evenly. More consumers than partitions will leave some consumers idle.
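As referenced above, a sketch of the two producer knobs in the Java client; the values shown are illustrative, not recommendations:

```java
import org.apache.kafka.clients.producer.ProducerConfig;

import java.util.Properties;

public class ProducerTuning {
    public static Properties tunedProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // batch.size: maximum bytes per batch per partition (default 16384).
        // Larger batches raise throughput for high-volume producers.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);
        // linger.ms: how long to wait for a batch to fill before sending
        // (default 0). A small delay trades a little latency for throughput.
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);
        return props;
    }
}
```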


Drawbacks of Apache Kafka

We have already seen some of the reasons that make Apache Kafka a popular tool for distributed streaming, but like every other big data tool, it has a few downsides:

  • Modifying messages in Kafka causes performance issues. Kafka is best suited for cases where messages do not have to be changed once written.

  • Kafka’s support for wildcard topic selection is limited: consumers can subscribe by regex pattern, but many operations require an exact topic-name match.

  • Certain message paradigms such as point-to-point queues and request/reply features are not supported by Kafka.

  • Large messages require compression and decompression, which affects Kafka’s throughput and performance.

Apache Kafka Use Cases 

Message Broker

Kafka serves as an excellent replacement for traditional message brokers. Compared to traditional message brokers, Apache Kafka provides better throughput and can handle a larger volume of messages. Kafka can be used as a publish-subscribe messaging service and is a good tool for large-scale message-processing applications.

Tracking Website Activities

Website activity, including page views, searches, and other actions users take, is published to central topics, with one topic per activity type. This data can then be used for real-time processing, real-time monitoring, or loading into the Hadoop ecosystem for later processing. Website activity usually involves a very high volume of data, since many messages are generated for a single user’s page views.

Monitoring Metrics

Kafka finds applications in monitoring the metrics associated with operational data. Statistics from distributed applications are consolidated into centralized feeds to monitor their metrics.

Stream Processing

A widespread use case for Kafka is processing data in multi-stage pipelines, where raw data is consumed from topics and then aggregated, enriched, or otherwise transformed into new topics to be consumed in another round of processing. These pipelines create channels of real-time data. From version 0.10.0.0 onwards, Kafka ships with a powerful stream processing library known as Kafka Streams for building such pipelines.
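A minimal Kafka Streams sketch of such a pipeline; the application id, topic names, and the uppercase transform are all assumptions for the example:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class StreamProcessingExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "page-view-enricher");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Read raw records from one topic, transform them, and write to another:
        // the classic consume -> process -> produce pipeline.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> raw = builder.stream("raw-page-views");
        raw.mapValues(value -> value.toUpperCase()).to("processed-page-views");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```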


Event Sourcing

Event sourcing refers to an application design in which state changes are logged as a time-ordered sequence of records. Kafka’s ability to store large logs makes it a great choice for event-sourcing applications.

Logging

Kafka can be used as an external commit-log for a distributed application. 

Kafka’s replication feature helps replicate data between multiple nodes and allows failed nodes to re-sync and restore their data when required. In addition, Kafka can serve as a centralized repository for log files from multiple data sources, and in cases of distributed data consumption, data can be collected from servers’ physical log files and from numerous other sources and made available in a single location.

To appreciate Apache Kafka’s powerful and versatile handling of streaming workloads, it helps to have hands-on experience working with the architecture in real life. Working on real-time Apache Kafka projects is an excellent way to build the big data skills and experience you need to nail your next big data job interview and land a top gig as a big data professional.


About the Author

ProjectPro

ProjectPro is the only online platform designed to help professionals gain practical, hands-on experience in big data, data engineering, data science, and machine learning related technologies, with over 270 reusable project templates in data science and big data, each with step-by-step walkthroughs.
