Red Hat Integration - AMQ Streams
If you already have Apache Kafka running, skip this step and proceed to install the Red Hat Single Sign-On Operator. See also the pre-installation tasks for the configuration values of your Kafka installation that you need to gather before installing IBM Industry Solutions Workbench.
Official Documentation
Install the Red Hat Integration - AMQ Streams operator from the OperatorHub
To complete this task, a user with the cluster-admin role is required.
As a cluster administrator, install the Red Hat Integration - AMQ Streams operator from the OperatorHub into the namespace foundation as follows:
- Navigate in the OpenShift Web Console to the Operators → OperatorHub page
- Filter by keyword: amq
- Select the operator: Red Hat Integration - AMQ Streams provided by Red Hat
- Read the information about the operator and click Install
- On the Create Operator Subscription page:
  - Select the option A specific namespace on the cluster with the namespace foundation
  - Select an update channel (if more than one is available)
  - Select the Automatic approval strategy
  - Click Subscribe
- After the subscription's upgrade status is up to date, navigate in the web console to the Operators → Installed Operators page
- Select the Red Hat Integration - AMQ Streams operator and verify that the content for the Overview tab of the Operators → Operator Details page is displayed
See OpenShift documentation on adding operators to a cluster (OpenShift 4.14) for further information on how to install an operator from the OperatorHub.
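The installation can also be verified from the command line. The following is a sketch, assuming the `oc` CLI is logged in to the cluster with sufficient permissions:

```shell
# List the AMQ Streams subscription created in the foundation namespace
oc -n foundation get subscriptions

# The operator's ClusterServiceVersion should report the phase "Succeeded"
oc -n foundation get csv
```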
Create the Kafka instance
Create the Kafka CRD instance in the namespace foundation as follows:
- Navigate in the OpenShift Web Console to the Operators → Installed Operators page
- Select the Red Hat Integration - AMQ Streams Operator
- Navigate to the Kafka tab of the Operators → Operator Details page
- Click Create Kafka
  - Enter the resource definition (see Example Kafka Configuration)
  - Click Create
- Verify that the newly created kafka CRD instance is displayed in the Kafka tab.
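As an alternative to checking the web console, you can wait for the Kafka instance to become ready from the command line. This is a sketch, assuming the cluster name kafka from the example configuration:

```shell
# Wait until the Kafka custom resource reports the Ready condition
oc -n foundation wait kafka/kafka --for=condition=Ready --timeout=300s

# List the broker and ZooKeeper pods created by the operator
oc -n foundation get pods -l strimzi.io/cluster=kafka
```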
Example Kafka Configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: foundation
spec:
  kafka:
    replicas: 3
    listeners:
      - authentication:
          type: scram-sha-512
        name: scram
        port: 9092
        tls: true
        type: route
      - authentication:
          type: scram-sha-512
        name: tls
        port: 9093
        tls: true
        type: internal
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
Create the Kafka User Instance
Create a KafkaUser CRD instance in the namespace foundation as follows:
- Navigate in the web console to the Operators → Installed Operators page
- Select the Red Hat Integration - AMQ Streams Operator
- Navigate to the Kafka User tab of the Operators → Operator Details page
- Click Create Kafka User
  - Enter the resource definition (see Example KafkaUser Configuration)
  - Click Create
- Verify that the newly created kafka-user CRD instance is displayed in the Kafka User tab.
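The same check can be done from the command line. A sketch, assuming the user name kafka-user from the example configuration:

```shell
# Check that the KafkaUser resource was created and reconciled
oc -n foundation get kafkauser kafka-user

# The User Operator generates a secret of the same name holding the credentials
oc -n foundation get secret kafka-user
```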
Example KafkaUser Configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  labels:
    strimzi.io/cluster: kafka
  name: kafka-user
  namespace: foundation
spec:
  authentication:
    type: scram-sha-512
Retrieve the Credentials
You can retrieve the credentials for connecting to the Kafka broker by looking for a Kubernetes secret named after the user you provided (e.g. kafka-user):
oc -n foundation get secret kafka-user -o jsonpath='{.data.password}' | base64 -d; echo
Retrieve the Certificates
Get the certificate that you need during the installation of IBM Industry Solutions Workbench:
oc -n foundation get secret kafka-cluster-ca -o jsonpath='{.data.ca\.key}' | base64 -d > kafka.ca.key
oc -n foundation get secret kafka-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > kafka.ca.crt
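As an optional sanity check, not required for the installation, you can inspect the extracted cluster CA certificate with openssl:

```shell
# Show the subject and validity window of the extracted cluster CA certificate
openssl x509 -in kafka.ca.crt -noout -subject -dates
```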