System requirements

Ready to install IBM Financial Services Workbench? Before you get started, use the information in this document to determine whether you have sufficient resources and the required software dependencies to install the service.

Hardware resources

Hardware | Number of servers                       | Available VPCs                    | Memory
x86-64   | 3+ worker nodes (able to schedule pods) | 16 virtual processor cores (VPCs) | 64 GB

A dynamic storage provisioner must be present on the cluster. The core product has only modest requirements in terms of PVC storage access modes: only ReadWriteOnce (RWO) is required. If you are installing other prerequisite software onto the same cluster, such as MongoDB, Kafka, or Keycloak, additional access modes might be required.

Note: Use a storage provider with 500 GB or more of available space for your cluster. Tested providers include AWS EBS volumes, NFS, Portworx, and OpenShift Container Storage. If you use NFS, squash must not be enabled on the NFS server, and all nodes in the cluster must be able to mount the NFS export with read/write access.
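
To confirm that dynamic provisioning with RWO works before installing, you can create a small test claim. The following is a minimal sketch; the claim name is hypothetical, and the cluster's default storage class is assumed:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: provisioner-smoke-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

If the claim reaches the Bound status (oc get pvc provisioner-smoke-test), dynamic RWO provisioning is available; delete the claim after the check.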

OpenShift setup

Please make sure you are running a supported OpenShift version; the instructions in this document assume OpenShift 4.6.

To set up OpenShift, follow the instructions in the OpenShift documentation.

Attention: Some preparatory steps require a user with the cluster-admin role. These steps are clearly marked. Once the product is installed, this role is no longer required for users working with IBM Financial Services Workbench.

Cloud Pak for Data

IBM Financial Services Workbench 3.0 runs as a service with IBM Cloud Pak for Data 3.5. Please make sure that the Cloud Pak for Data control plane is installed in the OpenShift cluster.

Detailed information on how to install the IBM Cloud Pak for Data control plane can be found here.

Required third-party components

IBM Financial Services Workbench requires the following third-party components at the specified versions. These components can be deployed either into the cluster or separately.

Component             | Required version | Minimum specifications for worker nodes | Minimum storage requirements
MongoDB               | 3.6              | Cores: 2, Memory: 4 GB                  | 20 GB
Apache Kafka*         | 1.0.2 or higher  | Cores: 12, Memory: 16 GB                | 20 GB
Keycloak              | 5.0 up to 14     | Cores: 1, Memory: 2 GB                  | --
Git provider          | See below        | --                                      | --
ArgoCD                | 2.1 (see below)  | Cores: 1, Memory: 2 GB                  | --
Helm chart repository | See below        | --                                      | --

* Besides the open-source version of Apache Kafka, commercial distributions such as Red Hat AMQ Streams 1.5 and cloud offerings such as Confluent Cloud or IBM Event Streams are supported as well.

Attention: Keycloak is supported only with HTTPS mode enabled, which requires a proper certificate and hostname setup.

Supported Git providers

Currently the following Git providers are supported:

  • GitLab Self-Managed

  • Bitbucket Server

  • GitHub Enterprise

ArgoCD Installation

There are two ways of installing ArgoCD on an OpenShift 4.6 cluster:

  • Installation via ArgoCD (Version 2.1)

  • Installation via Red Hat OpenShift GitOps Operator (Version 1.3.3)

Attention: Please note that the Red Hat OpenShift GitOps Operator is available in OpenShift 4.6, but Red Hat does not provide support for this tool (see Red Hat OpenShift GitOps Operator).

Installation via ArgoCD

Please follow the ArgoCD installation instructions (see Statement of support).

In addition to the official ArgoCD installation, the following step is necessary to enable a Route:

oc -n argocd patch deployment argocd-server -p '{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"argocd-server"}],"containers":[{"command":["argocd-server","--insecure","--staticassets","/shared/app"],"name":"argocd-server"}]}}}}'
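
The patch above runs argocd-server in insecure (HTTP) mode, so TLS must be terminated at the Route. One way to expose the server is an edge-terminated Route; the following is a minimal sketch, assuming the default service name argocd-server in the argocd namespace and leaving the hostname to the cluster default:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: argocd-server
  namespace: argocd
spec:
  to:
    kind: Service
    name: argocd-server
  port:
    targetPort: http
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect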

Installation via Red Hat OpenShift GitOps Operator

Please open the OperatorHub (Marketplace) in your OpenShift cluster, search for Red Hat OpenShift GitOps, and follow the instructions.

In addition to the installation via the Operator, the following two steps must be performed:

  • Adjust the Route via the argocd CR to use reencrypt instead of passthrough termination:

$ oc -n openshift-gitops patch argocd/openshift-gitops --type=merge -p='{"spec":{"server":{"route":{"enabled":true,"tls":{"insecureEdgeTerminationPolicy":"Redirect","termination":"reencrypt"}}}}}'
  • A ClusterRole and ClusterRoleBinding need to be created to give ArgoCD the needed permissions (save the following manifest to a file and apply it with oc apply -f):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata: 
  name: openshift-gitops-argocd-application-controller
rules: 
  - verbs: 
      - '*'
    apiGroups: 
      - '*'
    resources: 
      - '*'
  - verbs: 
      - '*'
    nonResourceURLs: 
      - '*'
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata: 
  name: openshift-gitops-argocd-application-controller-rb
subjects: 
  - kind: ServiceAccount
    name: openshift-gitops-argocd-application-controller
    namespace: openshift-gitops
roleRef: 
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: openshift-gitops-argocd-application-controller

Helm Chart Repository

A Helm chart repository is required to release your own service project (component) Helm charts. The repository must meet the following requirements:

  • The chart repository must not be Helm OCI-based

  • The chart repository must contain an index.yaml file, see Helm Chart Repository.

  • The chart repository must be reachable from the cluster via HTTPS and must have a proper certificate (not self-signed)
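
The index.yaml requirement can be illustrated with a minimal repository index; the chart name and download URL below are hypothetical:

```shell
# Minimal index.yaml, as served at <repo-url>/index.yaml by a Helm chart
# repository (hypothetical chart name and download URL).
cat > index.yaml <<'EOF'
apiVersion: v1
entries:
  my-service:
    - name: my-service
      version: 1.0.0
      urls:
        - https://charts.example.com/my-service-1.0.0.tgz
EOF
# A non-OCI chart repository serves this index over plain HTTPS GET;
# clients only need the index plus the listed .tgz chart archives.
grep -q '^apiVersion: v1' index.yaml && echo "index looks like a chart repository index"
```

Tools such as ChartMuseum or Nexus generate and serve this index automatically when charts are uploaded.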

Examples of supported Helm chart repositories are:

  • ChartMuseum Repository Server

  • GitLab Package Registry

  • Sonatype Nexus Helm Chart Repository