Installation Process
Detailed information on how to install IBM Financial Services Workbench.
This topic guides you through the installation process of IBM Financial Services Workbench.
After the installation has finished successfully, you will have:
- a running Solution Designer
- a running Solution Hub
Roles in the installation process
Red Hat OpenShift cluster administrator
The cluster administrator is responsible for:
- Creating projects (namespaces)
- Creating roles, rolebindings and service accounts
- Installing Custom Resource Definitions
- Configuring the OpenShift image-registry
Project administrator
The project administrator is responsible for:
- Installing Solution Hub and Solution Designer
- Installing Envoys on prepared namespaces
- Providing necessary configuration data
- Pushing images to the image-registry
- Executing helm after the installation
Before you begin
In order to install IBM Financial Services Workbench, the following requirements must be met on the machine from which the installer is executed:
- You are logged in to the OpenShift cluster as a user with sufficient rights for the task at hand:
$ oc login
- A local docker daemon is installed and running:
$ systemctl start docker
- The tar utility and bash are available, and the installer package was downloaded and unpacked:
$ tar xvzf ssob-2.1.0-ibm-openshift-4.3-cpd-installations.tgz
- Your current working directory is set to the ssob-install directory of the unpacked installer package:
$ cd ssob-install
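The tool requirements above can be checked up front with a small shell loop (a sketch; the tool list mirrors the prerequisites in this section):

```shell
# Check that the required command-line tools are available on this machine.
for tool in oc docker tar bash; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any tool reported as MISSING must be installed before proceeding.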
Step 1: Setup cluster-wide resources
Introduction
Cluster-wide resources are set up by executing the script ssob_admin_setup.sh, which is part of the installer. It must be executed as a user with cluster-admin permissions in the OpenShift cluster (role: Red Hat OpenShift cluster administrator). It can be configured via command-line arguments, which are described below.
Description
The ssob_admin_setup.sh script will create and configure the following:
- Custom Resource Definitions:
  - solutions.sol.rt.cp.knowis.de
  - envoys.env.rt.cp.knowis.de
  - k5clients.k5.project.operator
  - k5dashboards.k5.project.operator
  - k5pipelinemanagers.k5.project.operator
  - k5projects.k5.project.operator
  - k5realms.k5.project.operator
  - k5topics.k5.project.operator
- Instances of the Custom Resource ConsoleYAMLSample
- Optionally: a route for the OpenShift docker registry
- Roles:
  - k5-aggregate-edit (provides edit access to the Custom Resource Definitions above and aggregates it to the cluster roles admin and edit)
  - k5-aggregate-view (provides view access to the Custom Resource Definitions above and aggregates it to the cluster role view)
  - cpd-admin-role-additional (provides admin access to routes and pods/portforward)
- ServiceAccounts:
  - k5-operator-sa
  - k5-s3-storage
- Rolebindings (associate permissions with service accounts in the namespace):
  - Service account k5-operator-sa: cpd-admin-role, cpd-viewer-role, edit
  - Service account cpd-viewer-sa: view
  - Service account cpd-admin-sa: cpd-admin-role-additional
  - Service account k5-s3-storage: scc anyuid
Parameters
The ssob_admin_setup.sh script has the following parameters:

Variable | Description | Example | Default
---|---|---|---
--accept_license | Set this flag to confirm that the license agreement has been read and accepted (license files can be found in the installation package folder ./ssob-install/deployments/LICENSES) | - | -
--cpd_namespace | Name of the cpd project in the OpenShift cluster | zen | -
--external_address_image_registry | External address of the internal docker registry for creating the required route (without namespace); if this address is not set, the route must be created manually later | image-registry.apps.ocp43.ibm.com | -
--image_registry_namespace | Namespace of the internal docker registry | openshift-image-registry | openshift-image-registry
--image_registry_service | Service of the internal docker registry | image-registry | image-registry

The route is only created if --external_address_image_registry is set. An external route to the cluster image registry is needed in order to upload the container images that IBM Financial Services Workbench is comprised of to the cluster registry. The existence of those images in the cluster registry is a prerequisite of the following step. To check whether the internal container registry is already exposed via a route, use the following command:
$ oc -n openshift-image-registry get routes
Example
$ cd deployments
$ chmod +x ssob_admin_setup.sh
$ ./ssob_admin_setup.sh \
--accept_license \
--cpd_namespace=zen \
--external_address_image_registry=image-registry.apps.ocp43.ibm.com
The installer writes its log to /tmp/wdp.install.log.
Step 2: Installation of IBM Financial Services Workbench
Introduction
The installation of IBM Financial Services Workbench is done by executing the script ssob_install.sh. It must be executed as a user with OpenShift project admin permissions in the chosen OpenShift namespace, e.g. zen. The installation script requires a set of command-line arguments.
Description
The installer will configure and install the following components via helm charts into the already existing cpd namespace:
- solution-designer
- solution-hub
Without any further configuration, the components will reuse the existing cpd service accounts and the newly created ones, respectively.
The installer will create all default secrets that are necessary for an envoy project. It creates all secrets containing sensitive data (needed for the deployments and jobs) outside of the helm charts in order to avoid a breach of secrecy.
Each package contains all used docker images and helm charts. During the installation, all provided and used docker images are pushed into the configured registry; for convenience this should be the internal OpenShift registry.
After the installation of each helm chart, the helm tests will be executed. The installation will fail if one or more helm tests fail. By default the helm tests are enabled and the helm test pods are removed automatically; both steps can be skipped. In case of failing helm tests, the pod description and, if available, the pod logs will be printed.
The installation script (ssob_install.sh) creates the following required k8s secrets (containing all needed sensitive data) using the oc CLI and the install_init_data.yaml file:
- default mongodb binding for solution-envoy
- default kafka binding for solution-envoy
- default iam binding for solution-envoy
- mongodb binding for the solution-designer
- token master key for encryption of git tokens stored in the solution-designer
- optional access credentials for the s3 storage binding of the solution-designer (local marketplace)
- secret containing the admin credentials for the identity server
Parametrization
The installer is parametrized and adapted to your particular environment via CLI parameters and three parameter or value files.
There are three different types of parameters:
- Those that you will always have to adapt to your environment. They are given as CLI parameters to the ssob_install.sh script.
- Those that contain sensitive data and need to be handled with care. These are configured via the values file install_init_data.yaml.
- Those that are likely to be used in their default form. These are configured via the yaml files solution-hub-values.yaml and solution-designer-values.yaml.
You can find these files in the ssob-install/deployments/configs directory of the unpacked installation package.
Backup the following files:
- install_init_data.yaml
- install.yaml
- solution-designer-values.yaml
- solution-hub-values.yaml
For example:
$ cd ./SSOB_2.1.0_download/ssob-install/deployments
$ mv configs configs.bak
$ mkdir configs
$ cp configs.bak/* configs
Connect via scp to the server host and download the files to your local computer, for example:
# scp user@remote_host:remote_file ~/remote_directory
$ scp root@11.11.11.11:/root/SSOB_2.1.3_download/ssob-install/deployments/configs/install_init_data.yaml ~/install
Locally edit your files and then connect via scp to the server host and upload each edited file, for example:
# scp local_file user@remote_host:remote_file
$ scp install_init_data.yaml root@11.11.11.11:/root/SSOB_2.1.3_download/ssob-install/deployments/configs/install_init_data.yaml
Parameters
The ssob_install.sh script has the following parameters:
- See the previous section on how to find out the external URL of the cluster image registry.
- Run the following to check for the image registry service (look for the service running on port 5000):
$ oc get services -n openshift-image-registry
- To obtain the TLS certificate data (created by the IBM Cloud Pak for Data installation), run the following commands:
$ oc get secret helm-secret -n zen -o jsonpath='{.data.ca\.cert\.pem}' | base64 -d > /root/SSOB_2.1.0_download/zenhelm/ca.cert.pem
$ oc get secret helm-secret -n zen -o jsonpath='{.data.helm\.cert\.pem}' | base64 -d > /root/SSOB_2.1.0_download/zenhelm/helm.cert.pem
$ oc get secret helm-secret -n zen -o jsonpath='{.data.helm\.key\.pem}' | base64 -d > /root/SSOB_2.1.0_download/zenhelm/helm.key.pem
The value for external_address_image_registry can be obtained by viewing the route of the service image-registry in the namespace openshift-image-registry:
$ oc -n "openshift-image-registry" get route image-registry
The value for internal_address_image_registry is composed like {image-registry service}.openshift-image-registry.svc:{port}. You can find the port by searching for the service named image-registry in the list of all services in the openshift-image-registry namespace:
$ oc -n "openshift-image-registry" get services
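The composition rule above can be illustrated with a small shell sketch (the service name, namespace, and port are the defaults from this guide; verify them with the oc commands above):

```shell
# Compose the value for --internal_address_image_registry from its parts:
# {image-registry service}.{registry namespace}.svc:{port}
REGISTRY_SERVICE="image-registry"
REGISTRY_NAMESPACE="openshift-image-registry"
REGISTRY_PORT="5000"
INTERNAL_ADDRESS="${REGISTRY_SERVICE}.${REGISTRY_NAMESPACE}.svc:${REGISTRY_PORT}"
echo "${INTERNAL_ADDRESS}"
# prints: image-registry.openshift-image-registry.svc:5000
```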
Variable | Description | Example
---|---|---
--accept_license | Set this flag to confirm that the license agreement has been read and accepted (license files can be found in the installation package folder ./ssob-install/deployments/LICENSES) |
--external_address_image_registry | Docker registry for pushing the images into the used registry (normally the externally available address, without namespace) | image-registry.apps.openshift-cluster.mydomain.cloud
--internal_address_image_registry | Docker registry for pulling images in OpenShift (usually the internal docker registry) | image-registry.openshift-image-registry.svc:5000
--cpd_namespace | Name of the cpd project in the OpenShift cluster | zen
--identity_provider_host | URL of the identity provider | https://identity.apps.openshift-cluster.mydomain.cloud
--host_domain | Hostname of the OpenShift cluster | apps.openshift-cluster.mydomain.cloud
--only_solution_designer | Optional: install only a new solution-designer (default is to install both components at once) |
--only_solution_hub | Optional: install only a new solution-hub (default is to install both components at once) |
--skip_docker_login | Optional: skip the docker login (can be used if you are already logged in) |
--start_docker_daemon | Optional: start the docker daemon via the script; default value is true | true
--skip_helm_tls | Optional: disable TLS for helm requests; default value is true | true
--helm-tls-ca-cert | Optional: path to the TLS CA certificate file for helm commands (default "$HELM_HOME/ca.pem") | /path/to/my/ca.cert.pem
--helm-tls-cert | Optional: path to the TLS certificate file for helm commands (default "$HELM_HOME/cert.pem") | /path/to/my/cert.pem
--helm-tls-key | Optional: path to the TLS key file for helm commands (default "$HELM_HOME/key.pem") | /path/to/my/helm.key.pem
--skip_helm_test | Optional: skip helm test execution |
--skip_helm_test_cleanup | Optional: skip cleanup of the helm test pod after execution of the helm test |
Configurations
In the configuration files, placeholders are enclosed in # characters and are resolved with appropriate values, e.g. registry: "#DOCKER_REGISTRY#/#NAMESPACE#" will be resolved to registry: "image-registry.openshift-image-registry.svc:5000/zen".
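The placeholder resolution can be reproduced manually to preview a value, e.g. with sed (a sketch; the placeholder names and example values are the ones used in this guide):

```shell
# Resolve the #DOCKER_REGISTRY# and #NAMESPACE# placeholders the way the
# installer does, using example values from this guide:
DOCKER_REGISTRY="image-registry.openshift-image-registry.svc:5000"
NAMESPACE="zen"
echo 'registry: "#DOCKER_REGISTRY#/#NAMESPACE#"' \
  | sed -e "s|#DOCKER_REGISTRY#|${DOCKER_REGISTRY}|g" -e "s|#NAMESPACE#|${NAMESPACE}|g"
# prints: registry: "image-registry.openshift-image-registry.svc:5000/zen"
```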
YAML file install_init_data.yaml
- The credentials global.k5-s3-storage.access.accesskey/secretkey are used for the integrated s3 storage service that stores local marketplace data.
- The MongoDB credentials admin:password@mongodb for the admin user can be retrieved by looking for a Kubernetes secret named mongodb:
$ oc -n foundation get secret mongodb -o jsonpath='{.data.database-admin-password}' | base64 -d; echo
The following configuration yaml file can be found in the directory ../ssob-install/deployments/configs and needs to be adjusted:
Variable | Description | Example
---|---|---
global.identity.adminUser | Username of a Keycloak admin | admin
global.identity.adminPassword | Password of a Keycloak admin |
global.mongodb.designer.connectionString | Mongo database connection string that will be used for the Solution Designer | mongodb://admin:password@mongodb.foundation.svc.cluster.local:27017/admin?ssl=false
global.mongodb.solutions.connectionString | Mongo database connection string that will be used as a default for Solution Envoy | mongodb://admin:password@mongodb.foundation.svc.cluster.local:27017/admin?ssl=false
global.messagehub.brokersSasl | MessageHub bootstrap server address array | ["kafka-cluster-kafka-bootstrap.foundation.svc.cluster.local:9093"]
global.messagehub.user | Username for the MessageHub service user | kafka-user
global.messagehub.password | Password of the user for the MessageHub service | secret123
global.designer.tokenMasterKey.password | Password for encryption of git tokens in the Solution Designer | secret123
global.k5-s3-storage.access.accesskey | The accesskey for accessing the s3 storage endpoint. It will be created only the first time with random data, but not updated if the secret already exists. If this value is set and the corresponding secretkey is also set, then the resulting secret will be created or updated. | ICnh47lt3YmwSmy801Dfobi9Wf82oK6gHxJzA9LbxgiFcAW5pSPPcJPjxiy8YylN
global.k5-s3-storage.access.secretkey | The secretkey for accessing the s3 storage endpoint. It will be created only the first time with random data, but not updated if the secret already exists. If this value is set and the corresponding accesskey is also set, then the resulting secret will be created or updated. | BGh6SkSi35egd7mkMxKj#QFB6UnpQyyXw5remhMLwwrwiiQVVmIgKzchc3CuEY5xL
YAML file solution-hub-values.yaml
The following configuration yaml file can be found in the directory ../ssob-install/deployments/configs and needs to be adjusted:

Variable | Description | Example
---|---|---
global.identity.realm | Used realm in the identity provider. This entry must correspond to the entry global.identity.realm in solution-designer-values.yaml. | ssob (default value)
global.hosts.configurationManagement.name | Hostname for the configuration management (without protocol) | ssob-config.{{ .Values.global.hosts.domain }} (default value)
global.hosts.k5query.name | Hostname for the k5-query component (without protocol) | ssob-sdo.{{ .Values.global.hosts.domain }} (default value)
global.truststore.trustMap | List of certificates that is used for proper communication over SSL. The list should include all certificates in PEM ASCII format for the communication with external systems such as the identity provider. | See Format Example
Format Example global.truststore.trustMap
trustMap:
  identity: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  mycert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
To retrieve the certificate chain of a server (e.g. the Keycloak identity provider), you can use:
$ echo | openssl s_client -showcerts -connect #KEYCLOAK#DOMAIN#:443 2>/dev/null
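To copy only the certificate blocks from the openssl output into the trustMap, the PEM sections can be filtered out, for example with awk (a sketch; sample text stands in here for the real openssl output):

```shell
# Keep only the -----BEGIN/END CERTIFICATE----- blocks from mixed output.
# In practice, pipe the openssl s_client output into the awk filter.
printf '%s\n' \
  'depth=1 CN = example-ca' \
  '-----BEGIN CERTIFICATE-----' \
  'MIIB...' \
  '-----END CERTIFICATE-----' \
  'verify return:1' \
| awk '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/'
```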
YAML file solution-designer-values.yaml
The following configuration yaml file can be found in the directory ../ssob-install/deployments/configs and needs to be adjusted:

Variable | Description | Example
---|---|---
global.environment.name | Provide a name for the solution designer installation | designer (default value)
global.domain | Domain for the solution designer (without protocol) | #DOMAIN# (default value)
global.endpoints.designer.host | Hostname for the solution designer (without protocol) | ssob-sc.#DOMAIN# (default value)
global.endpoints.gitIntegrationController.host | Hostname for the git integration controller (without protocol) | ssob-git-integration-controller.#DOMAIN# (default value)
global.endpoints.solutionController.host | Hostname for the solution controller (without protocol) | ssob-sdo.#DOMAIN# (default value)
global.endpoints.codeGenerationProvider.host | Hostname for the code generation provider (without protocol) | ssob-code-gen.#DOMAIN# (default value)
global.endpoints.cliProvider.host | Hostname for the cli provider (without protocol) | ssob-cli-provider.#DOMAIN# (default value)
global.endpoints.queryService.host | Hostname for the query service (without protocol) | ssob-k5-query-service.#DOMAIN# (default value)
global.openshiftConsole.url | URL for the OpenShift console | https://console-openshift-console.#DOMAIN#
global.imageTest.repository | The image repository for the general helm test docker image | #DOCKER_REGISTRY#/#NAMESPACE#/ubi8-testing (default value)
global.truststore.secretName | Secret name for the truststore | k5-hub-truststore
global.identity.url | URL for the identity provider | #IDENTITY_PROVIDER_HOST# (default value)
global.identity.realm | Used realm in the identity provider. This entry must correspond to the entry global.identity.realm in solution-hub-values.yaml. | ssob (default value)
global.identity.adminCredentialsSecretName | Name of the secret in which admin credentials for the identity provider are stored | iam-secret (default value)
global.identity.job.image.repository | Docker image to run the scripts in k8s | #DOCKER_REGISTRY#/#NAMESPACE#/knowis-jcn-util (default value)
cp-dt-backend.image.repository | The image repository for the designer backend | #DOCKER_REGISTRY#/#NAMESPACE#/backend (default value)
cp-dt-backend.mongoDb.secretName | Name of the secret where access credentials of the Mongo database used by the Solution Designer are defined | cp-dt-backend-mongodb-secret (default value)
cp-dt-backend.mongoDb.dbName | Name of the Mongo database for the solution designer | ssob-sc (default value)
cp-dt-backend.migration.db.gic.mongoDb.secretName | MongoDB secret of the git integration controller db; use the same value as at git-integration-controller.mongoDb.secretName | k5-wb-gic (default value)
cp-dt-backend.migration.db.gic.mongoDb.dbName | Name of the git integration controller MongoDB; use the same value as at git-integration-controller.mongoDb.dbName |
cp-dt-backend.migration.solutionProviderMigration.runMigration | Boolean value to run the migration for the solution provider | false (default value)
cp-dt-backend.migration.solutionProviderMigration.image.repository | The image repository of the solution provider migration | #DOCKER_REGISTRY#/#NAMESPACE#/solution-git-provider-migration (default value)
cp-dt-backend.migration.solutionProviderMigration.defaultProviderData.baseUrl | URL of the old gitlab system; only needed if runMigration is true |
cp-dt-frontend.image.repository | The image repository for the frontend | #DOCKER_REGISTRY#/#NAMESPACE#/frontend (default value)
git-integration-controller.image.repository | The image repository for the git integration controller | #DOCKER_REGISTRY#/#NAMESPACE#/git-integration-controller (default value)
git-integration-controller.mongoDb.secretName | Name of the secret where access credentials of the Mongo database used by the git integration controller are defined | k5-wb-gic (default value)
git-integration-controller.mongoDb.dbName | Name of the Mongo database for the git integration controller | k5-wb-gic (default value)
git-integration-controller.tokenEncryptionMasterKey.secretName | Name of the secret providing the master key for encryption of all tokens stored in the database. See Format Example | k5-token-encryption-master-key (default value)
code-generation-provider.image.repository | Repository for the code generation provider | #DOCKER_REGISTRY#/#NAMESPACE#/code-generation-provider (default value)
cli-provider.image.repository | Repository for the cli provider | #DOCKER_REGISTRY#/#NAMESPACE#/cli-provider (default value)
solution-controller.image.solc.repository | Repository for the solution controller | #DOCKER_REGISTRY#/#NAMESPACE#/solution-controller (default value)
solution-controller.image.solutionTemplates.repository | Repository for the solution templates | #DOCKER_REGISTRY#/#NAMESPACE#/solution-template-image (default value)
solution-controller.marketplace.storage.secretName | Name of the secret containing the credentials that are used to access the s3 storage. In the default case the same secret can be used as for k5-s3-storage.secretName. See Format Example | k5-s3-storage-access (default value)
k5-s3-storage.secretName | Name of the secret containing the credentials that allow access to the k5 s3 storage. This secret will be created automatically by the installer if it does not exist; in that case the accesskey and the secretkey will be generated with random values. See Format Example | k5-s3-storage-access (default value)
k5-s3-storage.rbac.serviceAccountName | ServiceAccount name of the k5 s3 storage | k5-s3-storage (default value)
k5-s3-storage.image.repository | The image repository for the k5 s3 storage | #DOCKER_REGISTRY#/#NAMESPACE#/k5-s3-storage (default value)
k5-s3-storage.imageTest.repository | The image repository for the k5 s3 storage helm test | #DOCKER_REGISTRY#/#NAMESPACE#/k5-s3-storage-helm-test (default value)
k5-s3-storage.volumePermissions.image.repository | The image repository for the k5 s3 storage volume permissions | #DOCKER_REGISTRY#/#NAMESPACE#/k5-s3-storage-volume-permission (default value)
cp-dt-backend.codeGenerationProvider.url | URL to the code generation provider service. It must correspond to the URL that was specified at global.hosts.codeGenerationProvider.name in the file solution-hub-values.yaml. | https://ssob-code-gen.#DOMAIN# (default value)
cp-dt-backend.cliProvider.url | URL to the cli provider service. It must correspond to the URL that was specified at global.hosts.cliProvider.name in the file solution-hub-values.yaml. | https://ssob-cli-provider.#DOMAIN# (default value)
Format Example git-integration-controller.tokenEncryptionMasterKey.secretName
apiVersion: v1
kind: Secret
type: Opaque
data:
key: BASE64_ENCODED_ACCESSKEY
Format Example solution-controller.marketplace.storage.secretName
apiVersion: v1
kind: Secret
type: Opaque
data:
accesskey: BASE64_ENCODED_ACCESSKEY
secretkey: BASE64_ENCODED_SECRETKEY
Format Example k5-s3-storage.secretName
apiVersion: v1
kind: Secret
type: Opaque
data:
accesskey: BASE64_ENCODED_ACCESSKEY
secretkey: BASE64_ENCODED_SECRETKEY
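The data fields in the secret examples above must be base64 encoded. A minimal sketch of producing such a value (the key value is hypothetical):

```shell
# Kubernetes Secret 'data' values are base64 encoded strings.
ACCESSKEY="my-access-key"   # hypothetical value; use your real key
printf '%s' "${ACCESSKEY}" | base64
# Alternatively, oc can create the secret and do the encoding for you
# (requires cluster access; secret name taken from this guide):
# oc -n zen create secret generic k5-s3-storage-access \
#   --from-literal=accesskey="${ACCESSKEY}" \
#   --from-literal=secretkey="..."
```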
Example
$ cd deployments
$ chmod +x ssob_install.sh
$ ./ssob_install.sh \
--accept_license \
--external_address_image_registry=image-registry.apps.ocp43.ibm.com \
--internal_address_image_registry=image-registry.openshift-image-registry.svc:5000 \
--cpd_namespace=zen \
--identity_provider_host=https://keycloak.apps.ocp43.ibm.com \
--host_domain=apps.ocp43.ibm.com \
--helm-tls-ca-cert=/root/.helm/ca.pem \
--helm-tls-cert=/root/.helm/cert.pem \
--helm-tls-key=/root/.helm/key.pem
The installer writes its log to /tmp/wdp.install.log.
Validate the installation
To validate the results of the previous installation step, do the following:
- Check the deployments by running this command and compare your output to the output provided here:
$ oc get deployments -n zen
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
auto-devops-mvn-dependencies   2/2     2            2           25d
cli-provider                   3/3     3            3           25d
code-generation-provider       3/3     3            3           25d
configuration-management       2/2     2            2           25d
cp-dt-backend                  3/3     3            3           25d
cp-dt-frontend                 3/3     3            3           25d
git-integration-controller     3/3     3            3           25d
k5-query                       2/2     2            2           25d
k5-s3-storage                  1/1     1            1           25d
k5projectoperator              1/1     1            1           25d
solution-controller            2/2     2            2           25d
- Open the Solution Designer webapp in your browser:
  - Retrieve the URL to point your browser to:
$ oc -n zen get routes | grep cp-dt-frontend-service
cp-dt-frontend-route   designer.apps.{your.cluster.domain}   /   cp-dt-frontend-service   https   reencrypt/Redirect   None
  - Point your browser to https://ssob-sc.apps.{your.cluster.domain}/. As a result you should see a login form.
Troubleshooting
Docker daemon
For the above commands to work, you need a running docker daemon on the host you are running them on (your bastion host). Otherwise you may get an error message like the following:
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Start docker like this (system dependent):
$ systemctl start docker
$ systemctl enable docker
Docker and self-signed certificates
If your container registry uses self-signed certificates, you might see something like the following:
Extracting modules
checking structure
structure check skip for now
docker login https://image-registry.apps.ocp43.tec.uk.ibm.com
Error response from daemon: Get https://image-registry.apps.ocp43.tec.uk.ibm.com/v1/users/: x509: certificate signed by unknown authority
To fix issues with self-signed certificates, please follow the instructions on https://docs.docker.com/registry/insecure/ and perform the following steps:
- retrieve the CA certificate from the image registry
- add it to the local trust store
- reload the trusted certificates
$ oc -n openshift-ingress get secret router-certs-default -o jsonpath='{.data.tls\.crt}' | base64 -d > /etc/pki/ca-trust/source/anchors/image-registry.apps.ocp43.tec.uk.ibm.com.crt
$ update-ca-trust
$ systemctl restart docker
$ docker login -u admin -p passw0rd https://image-registry.apps.ocp43.tec.uk.ibm.com