How to Set Up ACLs in Kafka

Updated: January 30, 2024 By: Guest Contributor

Introduction

Apache Kafka is a popular distributed event-streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, and data integration. With the widespread use of Kafka in critical applications, security becomes an essential aspect of any Kafka deployment. One key security feature of Kafka is Access Control Lists (ACLs), which allow you to manage permissions for resources within your Kafka cluster. In this tutorial, we will explore how to set up ACLs in Kafka, including code examples to help you start securing your Kafka environment.

Understanding Kafka ACLs

ACLs in Kafka are rules that define which principals (users or applications) are allowed to perform specific operations on specific Kafka resources, such as topics, consumer groups, or the cluster itself. Kafka uses a pluggable Authorizer that can store ACLs in ZooKeeper or any other secure and consistent storage backend.
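
For example, a single ACL ties together a principal, a host, an operation, and a resource. Using the kafka-acls.sh tool introduced later in this tutorial, a rule granting write access might look like the following sketch (the user and topic names are illustrative):

kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --allow-host '*' --operation Write --topic my-topic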

To use ACLs, you must enable an authorizer on the Kafka brokers. This tutorial uses the SimpleAclAuthorizer, which stores its ACLs in ZooKeeper; newer Kafka releases ship replacements for it, as noted below.

Enabling the Authorizer

The first step to setting up ACLs in Kafka is to configure the Kafka brokers to use an Authorizer. You can do this by adding the following settings to your server.properties file on each Kafka broker:

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin
allow.everyone.if.no.acl.found=false

With the above configuration, we have specified that the SimpleAclAuthorizer should be used, defined a super user with the name ‘admin’, and declared that if no ACL is found for a resource, access should be denied by default.
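
If you are running a newer Kafka release, the equivalent setting would use the replacement authorizers instead; a sketch of both variants is shown below (the rest of this tutorial sticks with the ZooKeeper-based tooling):

# Kafka 2.4+ (ZooKeeper-based clusters)
authorizer.class.name=kafka.security.authorizer.AclAuthorizer

# Kafka in KRaft mode
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer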

Creating and Managing ACLs

Once you have the authorizer configured, you can start creating ACLs using the Kafka command-line tools. Kafka provides a script called kafka-acls.sh that allows you to manage ACLs.

Adding an ACL

To allow a user to produce messages to a specific topic, you could use a command like this:

kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --producer --topic my-topic

This command grants permission to the user ‘alice’ to produce messages to ‘my-topic’. ACLs are additive, so if you want to allow another user to produce to the same topic, you would issue a separate command for that user.
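
For instance, to also let a second, hypothetical user ‘bob’ produce to the same topic, you would run:

kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:bob --producer --topic my-topic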

Removing an ACL

To remove an ACL, you can use the following command:

kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:alice --producer --topic my-topic

This command removes the permission for the user ‘alice’ to produce messages to ‘my-topic’.

Listing ACLs

To view all ACLs, you can use the --list option with the kafka-acls.sh script:

kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list

This will show you all ACLs that are currently set in the cluster.
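
To narrow the output to a single resource, you can combine --list with a resource option; for example, to show only the ACLs on ‘my-topic’:

kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic my-topic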

Advanced ACL Configuration

Complex authorization scenarios might require more granular control over ACLs, such as restricting access to specific operations or client IP addresses, or using wildcards to cover many topics at once.

Restricting based on Operations

With Kafka ACLs, you can restrict permissions to specific operations on a topic. For instance, you can grant a user read access to a topic by using the --allow-principal and --consumer options:

kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --consumer --topic my-topic --group my-group

Here, ‘alice’ is allowed to read from ‘my-topic’ and to use the consumer group ‘my-group’.
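
The --consumer flag is a convenience that bundles the individual operations a consumer needs. If you prefer to grant operations explicitly, a roughly equivalent sketch (the --consumer shorthand also grants Describe on the topic) would be:

kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --operation Read --topic my-topic
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --operation Read --group my-group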

Using Wildcards

Wildcards are another powerful feature when setting up ACLs in Kafka. If you want to grant a user access to all topics, you could use the ‘*’ wildcard:

kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --operation All --topic '*'

Notice the use of --operation All; this gives ‘alice’ permission to perform all operations on all topics. The wildcard is quoted so that the shell does not expand it before Kafka sees it.
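
On Kafka 2.0 and later, prefixed resource patterns offer a middle ground between a single topic and the global wildcard. The following sketch (the prefix ‘my-’ is illustrative) grants ‘alice’ read access to every topic whose name starts with that prefix:

kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --operation Read --resource-pattern-type prefixed --topic my-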

IP-Based ACLs

Kafka also supports IP-based ACLs, which allow you to restrict access based on the client’s IP address. To allow access to a topic only from a particular IP, use:

kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --producer --topic my-topic --allow-host 192.168.1.100

Now, user ‘alice’ can produce to ‘my-topic’ only if the connection comes from IP ‘192.168.1.100’.
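
Deny rules are also supported and take precedence over allow rules. For example, to block produce requests from one specific (illustrative) address while leaving other hosts unaffected:

kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --deny-principal User:alice --producer --topic my-topic --deny-host 192.168.1.200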

Integrating ACLs with Security Protocols

Kafka also allows the integration of ACLs with security protocols like SSL/TLS or SASL for authentication. This means you can enforce ACLs based on authenticated users over secure connections. By combining ACLs with SSL/TLS encryption, you not only control access but also secure the data in transit.

To integrate ACLs with SSL, for instance, you will need to generate keystores and truststores and configure your Kafka brokers and clients to use SSL. For user authentication with SASL, Kafka supports mechanisms like PLAIN, SCRAM, or GSSAPI (Kerberos).

Below is a step-by-step example.

Step 1: Generate Keystores and Truststores

First, generate the necessary keystores and truststores for SSL. This step involves creating a keystore for each Kafka broker and a truststore that all brokers and clients will use.

# Generate broker keystore
keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365 -genkey -keyalg RSA -storepass password -keypass password -dname "CN=localhost"

# Generate truststore
keytool -keystore kafka.server.truststore.jks -alias CARoot -validity 365 -import -file ca-cert -storepass password -noprompt

# Repeat similar steps for client keystores and truststores
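
The truststore import above assumes a CA certificate file named ca-cert already exists, and in a real deployment the broker certificate should also be signed by that CA. A minimal sketch of those steps (file names and passwords are illustrative) could look like this:

# Create a CA key and certificate
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 -subj "/CN=Kafka-CA" -passout pass:password

# Sign the broker certificate with the CA and import the chain into the keystore
keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-req -storepass password
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-req -out cert-signed -days 365 -CAcreateserial -passin pass:password
keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert -storepass password -noprompt
keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed -storepass password -noprompt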

Step 2: Configure Kafka Brokers for SSL

Configure each Kafka broker to use SSL by updating the server.properties file.

# server.properties
listeners=SSL://:9093
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=password
ssl.key.password=password
ssl.truststore.location=/path/to/kafka.server.truststore.jks
ssl.truststore.password=password
# Prefer modern TLS versions (TLSv1 and TLSv1.1 are deprecated); add TLSv1.3 if your Kafka and JVM support it
ssl.enabled.protocols=TLSv1.2
ssl.client.auth=required

Step 3: Configure Kafka Clients for SSL

Configure Kafka clients to use SSL in the client properties.

# client.properties
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.client.truststore.jks
ssl.truststore.password=password
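
With this file in place, you can point the standard console clients at the SSL listener. For example (the --bootstrap-server flag applies to recent Kafka versions; the topic name is illustrative):

kafka-console-producer.sh --bootstrap-server localhost:9093 --topic my-topic --producer.config client.properties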

Step 4: Configure SASL for Authentication

Choose a SASL mechanism (e.g., PLAIN) and configure it in the Kafka brokers. Because the configuration below uses SASL_SSL, the listeners line from Step 2 must also expose a SASL_SSL endpoint (for example, listeners=SSL://:9093,SASL_SSL://:9094).

# server.properties
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
security.inter.broker.protocol=SASL_SSL
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret" user_admin="admin-secret";
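
On the client side, a matching configuration for SASL_SSL with the PLAIN mechanism might look roughly like this (it reuses the ‘admin’ credentials defined in the broker’s JAAS entry above):

# client.properties
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
ssl.truststore.location=/path/to/kafka.client.truststore.jks
ssl.truststore.password=password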

Step 5: Configure ACLs

Create ACLs to control access based on authenticated users. This can be done using Kafka’s command-line tools.

# Create an ACL for a user
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:admin --operation Read --topic testTopic

Notes:

  • Replace placeholders (like /path/to/, password, admin-secret) with actual values.
  • Additional configuration may be required based on your specific Kafka version and setup.
  • Ensure your Kafka cluster and clients are properly secured according to best practices, especially when dealing with sensitive data.

This example provides a basic framework for integrating ACLs with SSL/TLS and SASL in Kafka. It’s a high-level overview and should be tailored to fit the specific requirements of your Kafka deployment.

Conclusion

In this tutorial, we have walked through the steps necessary to set up ACLs in Kafka from basic configurations to more advanced scenarios, with practical code examples. With the proper implementation of ACLs, you can significantly improve the security posture of your Kafka environment, ensuring that only authorized individuals and applications can publish and consume data within your ecosystem. As Kafka continues to play a pivotal role in event-driven architecture, understanding and leveraging ACLs remains crucial for secure and compliant data operations.