System requirements

Ready to install IBM Financial Services Workbench? Before you get started, use the information in this document to determine whether you have sufficient resources to install the service and the required software dependencies for the service.

Hardware resources

  • Hardware: x86-64

  • Number of servers: 3+ worker nodes; the nodes must be able to schedule pods

  • Available VPCs: 16 virtual processor cores (VPCs)

  • Memory: 64 GB
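A quick way to check whether the worker nodes provide enough capacity is to query their allocatable resources. This is a minimal sketch that assumes the default OpenShift worker node label; adjust the selector if your cluster uses custom node roles:

# List the worker nodes with their allocatable CPU and memory
oc get nodes -l node-role.kubernetes.io/worker -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory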

A dynamic storage provisioner must be present on the cluster. The core product has only modest requirements in terms of PVC storage access modes: only ReadWriteOnce (RWO) is required. If you install other prerequisite software onto the same cluster, for example MongoDB, Kafka, or Keycloak, additional access modes might be required.

Note: Use a storage provider with 500 GB or more of available space for your cluster. Tested providers include AWS EBS volumes, NFS, Portworx, and OpenShift Container Storage. If you use NFS, squash must not be enabled on the NFS server, and all nodes in the cluster must be able to mount the NFS server with read/write access.
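Whether a dynamic provisioner is available and whether ReadWriteOnce volumes can be created can be smoke-tested with a throwaway claim. This is only a sketch: the storage class name managed-nfs-storage is a placeholder for one of the classes reported by the first command, and depending on the binding mode of the class the claim may stay Pending until a pod consumes it.

# List the storage classes on the cluster; one of them should be marked as (default)
oc get storageclass

# Request a small ReadWriteOnce test volume in the current project
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsw-rwo-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Gi
EOF

# Check that the claim gets bound, then clean up
oc get pvc fsw-rwo-test
oc delete pvc fsw-rwo-test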

OpenShift setup

Please make sure that you are running a supported OpenShift version (the ArgoCD instructions below assume OpenShift 4.8).

To set up OpenShift, please follow the instructions in the OpenShift documentation.
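The version that the cluster is running can be checked from the command line, for example:

# Report the oc client and OpenShift server versions
oc version
# Show the cluster version and update status (requires sufficient read permissions)
oc get clusterversion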

Attention: For some preparatory steps, a user with the cluster-admin role is required. These steps are clearly marked. Once the product is installed, this role is no longer required for the users working with IBM Financial Services Workbench.

Cloud Pak for Data

IBM Financial Services Workbench 3.1-Preview runs as a service with IBM Cloud Pak for Data 3.5. Please make sure that the Cloud Pak for Data control plane is installed in the OpenShift cluster.

Detailed information on how to install the IBM Cloud Pak for Data control plane can be found in the IBM Cloud Pak for Data documentation.
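A simple way to confirm that the control plane is up is to inspect its project. The project name zen is only an assumption (it depends on how Cloud Pak for Data was installed); replace it with the project used in your installation:

# All control plane pods should be Running or Completed
oc -n zen get pods
# The Cloud Pak for Data web client should be exposed via a route
oc -n zen get route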

Required third-party components

IBM Financial Services Workbench requires the following third-party components in the specified versions. These components can be deployed either into the cluster or separately.

  • MongoDB: version 3.6; minimum worker node specifications: 2 cores, 4 GB memory; minimum storage: 20 GB

  • Apache Kafka*: version 1.0.2 or higher; minimum worker node specifications: 12 cores, 16 GB memory; minimum storage: 20 GB

  • Keycloak: version 5.0 up to version 14; minimum worker node specifications: 1 core, 2 GB memory

  • Git provider: see Supported Git providers below

  • ArgoCD: version 2.1, see ArgoCD Installation below; minimum worker node specifications: 1 core, 2 GB memory

  • Helm chart repository: see Helm chart repository below

Besides the open-source version of Apache Kafka, commercial distributions such as Red Hat AMQ Streams or cloud offerings such as Confluent Cloud or IBM Event Streams are supported as well.

Attention: Keycloak is only supported with HTTPS mode enabled. It needs a proper certificate and hostname setup.
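The certificate and hostname setup can be checked from any machine that can reach the Keycloak instance. The hostname keycloak.example.com and the master realm path are placeholders for your own installation:

# -v prints the TLS handshake, including the certificate chain presented by Keycloak
curl -v https://keycloak.example.com/auth/realms/master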

Supported Git providers

Currently, the following Git providers are supported:

  • GitLab self-managed

  • Bitbucket Server

  • GitHub Enterprise
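Regardless of the provider you choose, it must be reachable over HTTPS from the cluster. A quick connectivity check from a workstation or a pod might look as follows; the repository URL is a placeholder:

# A successful call lists the refs of the remote repository
git ls-remote https://gitlab.example.com/my-group/my-repo.git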

ArgoCD Installation

There are two ways of installing ArgoCD in an OpenShift 4.8 cluster:

  • Installation via ArgoCD (version 2.1)

  • Installation via the Red Hat OpenShift GitOps operator (version 1.4.5)

Attention: Please note that the Red Hat OpenShift GitOps operator is available in OpenShift 4.8, but Red Hat does not provide support for this tool (see Red Hat OpenShift GitOps Operator).

Installation via ArgoCD

Please follow the official ArgoCD installation instructions; see also the ArgoCD statement of support.
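As a rough sketch, the upstream installation into the argocd namespace (the namespace used by the patch command below) could look like this; the pinned manifest URL is an assumption, so check the ArgoCD releases for the exact 2.1 tag you want to install:

# Create the target namespace and apply the upstream installation manifest
oc create namespace argocd
oc -n argocd apply -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.1.0/manifests/install.yaml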

In addition to the official installation of ArgoCD, the following step is necessary to enable a Route:

oc -n argocd patch deployment argocd-server -p '{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"argocd-server"}],"containers":[{"command":["argocd-server","--insecure","--staticassets","/shared/app"],"name":"argocd-server"}]}}}}'
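The patch switches argocd-server to insecure mode so that TLS can be terminated at the OpenShift router. A Route can then be created, for example as follows; the route name and TLS policy are suggestions rather than values mandated by the product:

# Expose argocd-server through an edge-terminated route that redirects plain HTTP
oc -n argocd create route edge argocd-server --service=argocd-server --port=http --insecure-policy=Redirect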

Installation via Red Hat OpenShift GitOps operator

Please open the OperatorHub (Marketplace) in your OpenShift cluster, search for Red Hat OpenShift GitOps, and follow the instructions.

The following two steps must be performed in addition to the installation via the operator:

  • Adjust the route via an ArgoCD Custom Resource (CR) to use re-encryption instead of passthrough:

$ oc -n openshift-gitops patch argocd/openshift-gitops --type=merge -p='{"spec":{"server":{"route":{"enabled":true,"tls":{"insecureEdgeTerminationPolicy":"Redirect","termination":"reencrypt"}}}}}'
  • A ClusterRole and a ClusterRoleBinding need to be created to give ArgoCD the required permissions:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openshift-gitops-argocd-application-controller
rules:
  - verbs:
      - '*'
    apiGroups:
      - '*'
    resources:
      - '*'
  - verbs:
      - '*'
    nonResourceURLs:
      - '*'
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openshift-gitops-argocd-application-controller-rb
subjects:
  - kind: ServiceAccount
    name: openshift-gitops-argocd-application-controller
    namespace: openshift-gitops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: openshift-gitops-argocd-application-controller
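Assuming the manifest above is saved in a file named argocd-rbac.yaml (the file name is arbitrary), it can be applied and the result verified as follows; the route name is expected to follow the <instance>-server pattern created by the operator for the default openshift-gitops instance:

# Apply the ClusterRole and ClusterRoleBinding shown above
oc apply -f argocd-rbac.yaml

# Verify that the Argo CD server route uses re-encrypt termination
oc -n openshift-gitops get route openshift-gitops-server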

Helm chart repository

A Helm chart repository is required for releasing the Helm charts of your service projects in order to create components. The chart repository must meet the following requirements (a quick verification sketch follows the list):

  • The chart repository must not be helm-oci based

  • The chart repository must contain an index.yaml file, see Helm Chart Repository.

  • The chart repository must be reachable from the cluster via HTTPS and must have a proper certificate (not self-signed)
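A quick way to verify these requirements from a workstation is sketched below. The repository URL https://charts.example.com and the repository name fsw-charts are placeholders; add credentials with --username and --password if your repository requires authentication:

# The index.yaml must be served over HTTPS with a certificate the cluster trusts
curl -fsS https://charts.example.com/index.yaml | head

# Adding the repository to a local Helm client checks URL, certificate, and index in one step
helm repo add fsw-charts https://charts.example.com
helm repo update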

Examples of supported Helm chart repositories include Sonatype Nexus.

Attention: Please note that Sonatype Nexus, when used as the Helm chart repository, does not support the descriptions and icons that Solution Designer uses to display components.