
Keycloak integration

Warning
In this document, Keycloak and Red Hat SSO are interchangeable: the latter is the product built from the upstream community project known as Keycloak. However, all the procedures were tested using Red Hat SSO.
Tip
This document was initially developed targeting the fish shell, but it strives to also provide the bash alternatives. As a general rule, which will not be repeated further: the command set -x ENV_VAR xxx must be replaced with export ENV_VAR=xxx

Prerequisites

This document expects:

  • The AMQ Streams operator deployed

  • A Kafka cluster named my-cluster already deployed in your environment

  • No authorization configured

Deploy and configure Keycloak

Create Keycloak in its own namespace:

oc new-project keycloak
oc apply -f k8s/keycloak/01-rhsso-operator-olm.yaml
oc apply -f k8s/keycloak/02-keycloak.yaml

Show the Keycloak admin user and password:

oc get secret credential-my-keycloak -o jsonpath='{.data.ADMIN_USERNAME}' |base64 -d
oc get secret credential-my-keycloak -o jsonpath='{.data.ADMIN_PASSWORD}' |base64 -d

Show the essential information about Keycloak:

oc describe keycloak my-keycloak

Create the Kafka realm:

oc apply -f k8s/keycloak/03-kafka-realm.yaml
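You can check that the operator has reconciled the realm (assuming the manifest above defines a KeycloakRealm custom resource from the RH-SSO operator):

oc get keycloakrealms -n keycloak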

A client is an application or service that interacts with Keycloak for authentication and authorization purposes.

Clients can be of different types, including web applications, mobile applications, single-page applications (SPAs), and service accounts. Each client in Keycloak is assigned a unique client ID and can have its own set of configuration settings, security protocols, and access permissions.

We need two clients, one for the consumer application and one for the producer:

  • client-id kafka-consumer

  • client-id kafka-producer

Both have to be configured as confidential, with Service Accounts Enabled.

In order to have predictable secrets and to streamline the rest of the configuration, import a realm base configuration:

  1. Select the Kafka realm

  2. From the side menu select Import

  3. Upload the json file available in this repository: docs/realm-no-pol.json

  4. At the question If a resource exists, choose Overwrite

In addition to the two clients, the import prepares some definitions that will be useful later:

  • The kafka-authz client, which holds the authorization configuration

  • Realm roles: topic-consumer and topic-producer

Kafka authentication

The following picture shows the authentication flow once the setup is complete:

keycloak kafka authentication

Switch to the kafka project and keep the Keycloak endpoint in a convenient environment variable:

oc project my-kafka
set -x KEYCLOAK_ROUTE (oc get route keycloak -n keycloak -o jsonpath='{.spec.host}')

The following command adds a new Kafka listener that grants access to applications presenting a JWT token issued by Keycloak:

oc patch kafka/my-cluster --type=merge --patch-file=(cat k8s/keycloak/05-kafka-listener.yaml.patch | envsubst | psub)
Tip
Bash alternative: oc patch kafka/my-cluster --type=merge -p "$(cat k8s/keycloak/05-kafka-listener.yaml.patch | envsubst)"

This is the listener configuration:

spec:
  kafka:
    listeners:
      - name: external
        port: 9094
        type: route
        tls: true
        authentication:
          type: oauth
          validIssuerUri: https://${KEYCLOAK_ROUTE}/auth/realms/kafka
          jwksEndpointUri: https://${KEYCLOAK_ROUTE}/auth/realms/kafka/protocol/openid-connect/certs
          checkIssuer: true
          checkAccessTokenType: true
          accessTokenIsJwt: true
          enableOauthBearer: true
          maxSecondsWithoutReauthentication: 3600
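The cluster operator performs a rolling update of the brokers to apply the new listener. Before moving on, you can wait for the cluster to become ready again:

oc wait kafka/my-cluster --for=condition=Ready --timeout=300s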

Client side configuration

In this section, you will configure the Kafka consumer in your local environment to connect to the remote Kafka cluster using the OAuth authentication mechanism.

The client application needs to establish two TLS connections: one to Keycloak and one to Kafka. In this example, the Kafka endpoint uses a self-signed CA, so we create a truststore to support it:

oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external")].certificates[0]}{"\n"}' > kafka-cluster-ca.crt
keytool -import -trustcacerts -alias root -file kafka-cluster-ca.crt -keystore truststore.jks -storepass password -noprompt
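You can verify the imported CA certificate:

keytool -list -keystore truststore.jks -storepass password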

Add application properties to enable the OAUTHBEARER authentication:

set -x KAFKA_ROUTE (oc get kafka my-cluster -o jsonpath='{.status.listeners[?(@.name=="external")].bootstrapServers}')
echo | cat - k8s/keycloak/06-application.properties | envsubst >> kafka-consumer/src/main/resources/application.properties

Update the password in application.properties to match the secret in the Keycloak web console.
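The template is not reproduced here, but the appended configuration plausibly looks like the following sketch, assuming the Quarkus convention of pass-through kafka.* client properties and using <client-secret> as a placeholder for the value you just updated (the repository file k8s/keycloak/06-application.properties is authoritative):

kafka.bootstrap.servers=${KAFKA_ROUTE}
kafka.security.protocol=SASL_SSL
kafka.sasl.mechanism=OAUTHBEARER
kafka.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.client.id="kafka-consumer" \
  oauth.client.secret="<client-secret>" \
  oauth.token.endpoint.uri="https://${KEYCLOAK_ROUTE}/auth/realms/kafka/protocol/openid-connect/token";
kafka.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
kafka.ssl.truststore.location=truststore.jks
kafka.ssl.truststore.password=password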

Make sure that the OAuth dependency is present in pom.xml:

<dependency>
  <groupId>io.strimzi</groupId>
  <artifactId>kafka-oauth-client</artifactId>
</dependency>

Run the Kafka consumer:

mvn -f kafka-consumer/pom.xml clean quarkus:dev

The following command adds the OAuth configuration to the producer, using the respective kafka-producer client:

echo | cat - k8s/keycloak/06-application.properties | envsubst | sed 's/consumer/producer/g' >> kafka-producer/src/main/resources/application.properties

Run the producer to check it’s working as expected:

mvn -f kafka-producer/pom.xml clean quarkus:dev

Enable OAuth for client applications in OpenShift

Once authorization is enabled at the Kafka level, client applications cannot access Kafka anonymously, even if the connection comes from an internal listener. For this reason, make sure that authentication is enabled on all your listeners.

The following command shows the environment variables that enable client OAuth authentication:

cat k8s/keycloak/09-configmap.template | envsubst
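As a sketch, assuming the MicroProfile Config convention that maps the kafka.* properties shown earlier to upper-case, underscore-separated environment variables, the entries plausibly look like this (the template file is authoritative, and the security protocol depends on how the internal listener is configured):

KAFKA_SASL_MECHANISM: OAUTHBEARER
KAFKA_SASL_JAAS_CONFIG: org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id="kafka-consumer" oauth.client.secret="<client-secret>" oauth.token.endpoint.uri="https://<keycloak-route>/auth/realms/kafka/protocol/openid-connect/token";
KAFKA_SASL_LOGIN_CALLBACK_HANDLER_CLASS: io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler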

Add the outcome of the previous command to the consumer configmap:

oc edit configmap kafka-consumer-config

Repeat the configuration for the producer using the following variables:

cat k8s/keycloak/09-configmap.template | envsubst | sed 's/consumer/producer/g'

Add the outcome of the previous command to the producer configmap:

oc edit configmap kafka-producer-config
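ConfigMap changes are not propagated to running pods. Assuming the applications run as Deployments named kafka-consumer and kafka-producer, restart them so they pick up the new variables:

oc rollout restart deployment/kafka-consumer
oc rollout restart deployment/kafka-producer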

Kafka Authorization

In this section, Kafka authorization is enabled and SSO is used to retrieve the permissions. The following picture shows the interactions between the parties:

keycloak kafka authz

Kafka Authorization model

Kafka operations: Read, Write, Create, Delete, Alter, Describe, ClusterAction, DescribeConfigs, AlterConfigs, IdempotentWrite, CreateTokens, DescribeTokens, All

Kafka resources:

  • Topic

  • Group: represents the consumer groups in the brokers

  • TransactionalId: represents actions related to transactions

  • DelegationToken: represents the delegation tokens in the cluster

  • User: the CreateTokens and DescribeTokens operations can be granted to User resources to allow creating and describing tokens for other users

An API key (protocol) is represented by a specific request and response pair. Some of the commonly used operations include:

  • Produce: The produce operation allows clients to send messages to Kafka brokers for storage and distribution. Clients send a produce request containing the messages they want to publish, and brokers respond with a produce response indicating the success or failure of the operation.

  • Fetch: The fetch operation allows clients to retrieve messages from Kafka brokers. Clients send a fetch request specifying the topic, partition, and offset they want to read from, and brokers respond with a fetch response containing the requested messages.

  • Metadata: The metadata operation retrieves metadata about topics, partitions, and brokers in the Kafka cluster. Clients can send a metadata request to obtain information such as the list of available topics, partition leaders, and replicas.

  • Offset Commit: The offset commit operation is used by consumer clients to inform Kafka brokers about the progress of consuming messages. Clients send an offset commit request to commit the offsets of consumed messages, and brokers respond with an offset commit response.

Privileges apply to specific tuples of protocol, operation, and resource, e.g.:

  • PRODUCE, Write, TransactionalId: a transactional producer which has its transactional.id set requires this privilege

  • PRODUCE, IdempotentWrite, Cluster: an idempotent produce action requires this privilege

  • PRODUCE, Write, Topic: this applies to a normal produce action

  • FETCH, Read, Topic: regular Kafka consumers need Read permission on each partition they are fetching

  • OFFSET_COMMIT, Read, Group: an offset can only be committed if it is authorized for the given group and the topic too

  • OFFSET_COMMIT, Read, Topic: since offset commit is part of the consuming process, it needs privileges for the read action

Further information: Security Authorization Primitives

Keycloak concepts

Clients are entities that interact with Keycloak to authenticate users and obtain tokens. Most often, clients are applications and services acting on behalf of users that provide a single sign-on experience to their users and access other services using the tokens issued by the server.

Permissions are the individual actions or operations that a user or client can perform on a specific resource. For example, permissions can include actions like "read," "write," "create," or "delete" on a particular resource.

Policies are the rules or conditions that determine whether a user or client is granted or denied access to perform those permissions on a resource. Policies evaluate the permissions requested by a user or client and make access control decisions accordingly.

A Role is a set of permissions or access rights that can be assigned to users or clients.

A permission associates the object being protected with the policies that must be evaluated to determine whether access is granted.

X CAN DO Y ON RESOURCE Z

where:

  • X represents one or more users, roles, or groups, or a combination of them. You can also use claims and context here.

  • Y represents an action to be performed, for example, write, view, and so on.

  • Z represents a protected resource, for example: a topic, a consumer group.

Scope-based Permission: use it when a set of one or more authorization scopes must be permitted on an object.

Resource-based Permission: defines a set of one or more resources to protect, using a set of one or more authorization policies.

An Authorization Service is a component of an identity and access management (IAM) system that handles the process of granting or denying access to protected resources based on predefined policies and rules. Any confidential client can provide the authorization service.

Mapping Kafka Authorization in Keycloak

This section shows how to create a client with the authorization services enabled, and then how to define, inside the client configuration:

  • roles

  • resources

  • permissions

Open the Keycloak route URL in the browser.

See the Deploy and configure Keycloak section for how to retrieve the Keycloak administration user and password.

After logging in, select the Kafka realm.

Important
If the import procedure worked without issues, you can jump to the Create Permissions section.

Create the client to host the Kafka authorization service:

oc apply -n keycloak -f k8s/keycloak/07-authz-client.yaml
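The manifest is expected to define a KeycloakClient custom resource along these lines (a sketch; the realm selector labels are assumptions, and the repository file is authoritative):

apiVersion: keycloak.org/v1alpha1
kind: KeycloakClient
metadata:
  name: kafka-authz
spec:
  realmSelector:
    matchLabels:
      app: sso   # assumption: must match the labels on the kafka realm
  client:
    clientId: kafka-authz
    publicClient: false              # confidential access type
    serviceAccountsEnabled: true
    authorizationServicesEnabled: true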

Alternatively, via the web console:

  1. Create kafka-authz client

  2. Set Access Type to confidential

  3. Switch on Service Account Enabled

  4. Switch on Authorization Enabled

  5. Save

Create and assign Roles

From the left menu, select Roles and add two roles: topic-consumer and topic-producer.

Select the Clients entry from the left menu:

  1. Select kafka-consumer

  2. Switch to the Service Account Roles tab

  3. Assign topic-consumer role

Repeat the previous steps for kafka-producer and topic-producer.
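As a hypothetical alternative to the web console, the same assignment can be done with the Keycloak admin CLI (kcadm.sh ships with RH-SSO, e.g. under /opt/eap/bin inside the server pod); service accounts follow the service-account-<clientId> naming convention:

kcadm.sh config credentials --server https://$KEYCLOAK_ROUTE/auth --realm master --user admin --password <admin-password>
kcadm.sh add-roles -r kafka --uusername service-account-kafka-consumer --rolename topic-consumer
kcadm.sh add-roles -r kafka --uusername service-account-kafka-producer --rolename topic-producer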

Decision Strategy

The Affirmative decision strategy means that at least one permission must evaluate to a positive decision.

Select the Clients entry from the left menu and open the kafka-authz client.

  1. Switch to the Authorization tab

  2. In the nested tabs line, select Settings

  3. Set Decision Strategy to Affirmative

  4. Save

Create Authorization Scopes

Select the Clients entry from the left menu and open the kafka-authz client.

  1. Switch to the Authorization tab

  2. In the nested tabs line, select Authorization Scopes

  3. Create the following scopes: Read, Write, Describe, IdempotentWrite

Create Resources

In Authorization > Resources:

  1. Delete the Default Resource

  2. Create the following resources:

    1. Topic:event and add all the available scopes

    2. Cluster:* and add IdempotentWrite as scope

Create Permissions

Prerequisites:

  • Roles are defined (at realm level)

  • Resources are defined (at client level)

  • Authorization Scopes are defined (at client level)

Grant permissions to consume from a topic (Scope-based Permission):

  1. Navigate in kafka-authz client, then Authorization tab

  2. In the second level of tabs select Permissions

    1. Delete the Default Permission if it exists

  3. From the Create Permission drop-down list select Scope-Based

    1. Enter a meaningful name: Topic consumers can read and describe topic:event

    2. In the Resource field select Topic:event

    3. In the Scope field enter: Read, Describe

    4. Create a new Policy, selecting Role Policy:

      1. Enter a meaningful name: topic consumer policy

      2. In Realm Roles select and add topic-consumer

      3. Save

    5. Save

Grant permissions to any consumer group (Resource-based Permission). From the Create Permission drop-down list select Resource-Based, then:

  1. Enter a meaningful name: Topic consumers can use any consumer group

  2. In the Resources field select Group:*

  3. Select an existing policy, e.g. topic consumer policy, or create a new one

Grant permissions to produce into a topic (Scope-based Permission). From the Create Permission drop-down list select Scope-Based, then:

  1. Enter a meaningful name: Topic producers can write and describe topic:event

  2. In the Resource field select Topic:event

  3. In the Scope field enter: Write, Describe

  4. Create a new Policy, selecting Role Policy:

    1. Enter a meaningful name: topic producer policy

    2. In Realm Roles select and add topic-producer

    3. Save

  5. Save

Grant the IdempotentWrite permission at Cluster level (Scope-based Permission). From the Create Permission drop-down list select Scope-Based, then:

  1. Enter a meaningful name: Topic producers have the IdempotentWrite grant at Cluster level

  2. In the Resource field select Cluster:*

  3. In the Scope field enter: IdempotentWrite

  4. Add topic producer policy

  5. Save

Configure Kafka Authorization

The following command sets up Kafka to delegate authorization to Keycloak:

oc patch kafka/my-cluster --type=merge --patch-file=(cat k8s/keycloak/08-kafka-authorization.yaml.patch | envsubst | psub)
Tip
Bash alternative: oc patch kafka/my-cluster --type=merge -p "$(cat k8s/keycloak/08-kafka-authorization.yaml.patch | envsubst)"
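
For reference, the patch is expected to add a keycloak authorizer along these lines (a sketch based on the Strimzi authorization schema; the repository file is authoritative):

spec:
  kafka:
    authorization:
      type: keycloak
      clientId: kafka-authz
      tokenEndpointUri: https://${KEYCLOAK_ROUTE}/auth/realms/kafka/protocol/openid-connect/token
      delegateToKafkaAcls: false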

If the Keycloak definitions are correct, you can execute the local consumer and producer and check the normal message flow.
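To sanity-check the credentials independently of Kafka, you can request a token directly from Keycloak (replace the client secret placeholder with the value from the Keycloak web console):

curl -sk https://$KEYCLOAK_ROUTE/auth/realms/kafka/protocol/openid-connect/token \
  -d grant_type=client_credentials \
  -d client_id=kafka-consumer \
  -d client_secret=<client-secret>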

If you get an authorization exception on the client side, you can enable the logging in Kafka to investigate the OAuth behavior.
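For example, with Strimzi inline logging you could raise the relevant loggers to DEBUG (a sketch; applying it to the Kafka custom resource triggers a broker rolling update):

spec:
  kafka:
    logging:
      type: inline
      loggers:
        log4j.logger.io.strimzi.kafka.oauth: DEBUG
        log4j.logger.kafka.authorizer.logger: DEBUG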

Further Information

For more information and troubleshooting tips, see the Appendix.

Clean up

Remove authentication and authorization from the cluster definition, or replace the entire configuration with the original one.

Reset application configuration:

oc replace -f kafka-consumer/src/main/kubernetes/openshift.yml
oc replace -f kafka-producer/src/main/kubernetes/openshift.yml