Apache Kafka
If you already have Apache Kafka running, skip this step and proceed to install Keycloak. See also the pre-installation tasks for the configuration values of your Kafka installation that you need to gather for the installation of IBM Financial Services Workbench.
Alternatively, you can install the Red Hat Integration - AMQ Streams (Apache Kafka) Operator in version 1.8.4 or higher.
Official Documentation
Red Hat AMQ: https://access.redhat.com/documentation/en-us/red_hat_amq
Apache Kafka: https://kafka.apache.org/20/documentation.html
Install the Red Hat Integration - AMQ Streams operator from the OperatorHub
The cluster admin role is required. As a cluster administrator, install the Red Hat Integration - AMQ Streams operator from the OperatorHub into the namespace foundation as follows (a CLI-based alternative is sketched after these steps):
Navigate in the OpenShift Web Console to the Operators → OperatorHub page
Filter by keyword: amq
Select the operator: Red Hat Integration - AMQ Streams provided by Red Hat
Read the information about the operator and click Install
On the Create Operator Subscription page:
Select the option A specific namespace on the cluster with the namespace foundation
Select an update channel (if more than one is available)
Select Automatic approval strategy
Click Subscribe
After the subscription's upgrade status is up-to-date, navigate in the web console to the Operators → Installed Operators page
Select the Red Hat Integration - AMQ Streams operator and verify that the content for the Overview tab of the Operators → Operator Details page is displayed
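If you prefer the command line over the web console, the operator can also be subscribed with oc. The following is a minimal sketch under a few assumptions: the OperatorGroup name foundation-operators is arbitrary, and the channel value must match one of the channels reported by oc get packagemanifest amq-streams -n openshift-marketplace for your cluster.
# Sketch: scope the operator to the namespace foundation and subscribe to the
# amq-streams package from the default Red Hat catalog.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: foundation-operators    # arbitrary name (assumption)
  namespace: foundation
spec:
  targetNamespaces:
    - foundation
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: foundation
spec:
  channel: stable               # assumption: verify the available channels first
  name: amq-streams
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
Save the definition to a file, apply it with oc apply -f <file>, and check the installation with oc -n foundation get csv.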
Create the Kafka instance
Create the Kafka CRD instance in the namespace foundation as follows:
Navigate in the OpenShift Web Console to the Operators → Installed Operators page
Select the Strimzi Apache Kafka Operator
Navigate to the Kafka tab of the Operators → Operator Details page
Click Create Kafka
On the Red Hat Integration - AMQ Streams Operator → Create Kafka page:
Enter the resource definition (see Example Kafka Configuration below)
Click Create
Verify that in the Kafka tab the newly created kafka CRD instance is displayed.
Example Kafka Configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: foundation
spec:
  kafka:
    replicas: 3
    listeners:
      - authentication:
          type: scram-sha-512
        name: scram
        port: 9092
        tls: true
        type: route
      - authentication:
          type: scram-sha-512
        name: tls
        port: 9093
        tls: true
        type: internal
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
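If you prefer to create the instance from the command line, save the example above to a file (e.g. kafka.yaml, an assumed file name), apply it, and wait for the cluster to report readiness:
# Create the Kafka CRD instance and block until the operator reports Ready
oc apply -f kafka.yaml
oc -n foundation wait kafka/kafka --for=condition=Ready --timeout=300s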
Create the Kafka User Instance
Create a KafkaUser CRD instance in the namespace foundation as follows:
Navigate in the web console to the Operators → Installed Operators page
Select the Strimzi Apache Kafka Operator
Navigate to the Kafka tab of the Operators → Operator Details page
Click Create Kafka User
On the Strimzi Apache Kafka Operator → Create KafkaUser page:
Enter the resource definition (see Example KafkaUser Configuration below)
Click Create
Verify that in the Kafka User tab the newly created kafka-user CRD instance is displayed.
Example KafkaUser Configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: kafka-user
  namespace: foundation
spec:
  authentication:
    type: scram-sha-512
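The same can be done from the command line: assuming the example is saved as kafka-user.yaml (an assumed file name), apply it and confirm that the User Operator has created the matching secret before continuing:
# Create the KafkaUser and verify the generated credentials secret
oc apply -f kafka-user.yaml
oc -n foundation get kafkauser kafka-user
oc -n foundation get secret kafka-user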
Retrieve the Credentials
You can retrieve the credentials for connecting to the Kafka broker from the Kubernetes secret that is named after the user you created (e.g. kafka-user):
oc -n foundation get secret kafka-user -o jsonpath='{.data.password}' | base64 -d; echo
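Besides the password, you typically also need the bootstrap addresses of the listeners. They are published in the status of the Kafka resource; the exact structure may vary slightly between AMQ Streams versions:
# Show the listener status, including bootstrap servers and advertised addresses
oc -n foundation get kafka kafka -o jsonpath='{.status.listeners}'; echo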
Retrieve the Certificates
Get the cluster CA certificate and key that you need during the installation of IBM Financial Services Workbench:
oc -n foundation get secret kafka-cluster-ca -o jsonpath='{.data.ca\.key}' | base64 -d > kafka.ca.key
oc -n foundation get secret kafka-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > kafka.ca.crt
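Before using the extracted files, you can optionally sanity-check the CA certificate, for example with openssl:
# Print the subject and expiry date of the extracted cluster CA certificate
openssl x509 -in kafka.ca.crt -noout -subject -enddate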