Installation Process

Detailed information about how to install IBM Financial Services Workbench.

This document guides you through the installation process of IBM Financial Services Workbench.

After the installation has finished successfully, you will have:

  • a running Solution Designer

  • a running Solution Hub

Roles in the installation process

Red Hat OpenShift cluster administrator

The cluster administrator is responsible for:

  • Creating projects (namespaces)

  • Creating roles, role bindings and service-accounts

  • Installing Custom Resource (CR) definitions

  • Configuring the OpenShift image-registry

Project administrator

The project administrator is responsible for:

  • Installing Solution Hub and Designer

  • Installing Envoys on prepared namespaces

  • Providing necessary configuration data

  • Pushing images to the image-registry

  • Executing Helm after installation

Before you begin

Attention: Please make sure that all System Requirements are met, especially the OpenShift setup including OpenShift Pipelines and the Cloud Pak for Data installation.

To install IBM Financial Services Workbench, the following requirements must be met on the machine from which the installer is executed:

  • You are logged in to the OpenShift cluster as a user with sufficient rights for the task at hand

$ oc login
  • A local docker daemon is installed and running

$ systemctl start docker
  • The tar utility and bash are available, and the installer package has been downloaded and unpacked

$ tar xvzf ssob-2.3.0-ibm-openshift-4.3-cpd-installations.tgz
  • Your current working directory is set to the directory ssob-install of the unpacked installer package

$ cd ssob-install 
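The checks above can be sketched as a small shell script (a hypothetical helper, not part of the installer package):

```shell
#!/usr/bin/env bash
# Sketch: verify that the tools required by the installer are available.
# This is an illustrative helper, not part of the installer package.
set -u

check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: ok"
  else
    echo "$1: MISSING" >&2
  fi
}

check bash
check tar
check oc      # OpenShift CLI; run 'oc login' before starting the installation
check docker  # the docker daemon must also be running ('systemctl start docker')
```

Note that this only checks for the presence of the tools; whether the docker daemon is actually running must still be verified separately (e.g. with docker info).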

Step 1: Setup cluster-wide resources

Attention: To complete this task a user in the cluster admin role is required.

Introduction

Cluster-wide resources are set up by executing the script ssob_admin_setup.sh, which is part of the installer. It must be executed as a user with cluster-admin permissions in the OpenShift cluster (role: Red Hat OpenShift cluster administrator). It can be configured via command-line arguments, which are described below.

Description

The ssob_admin_setup.sh script creates and configures the following:

  • Create Custom Resource Definitions

    • solutions.sol.rt.cp.knowis.de

    • envoys.env.rt.cp.knowis.de

    • k5clients.k5.project.operator

    • k5dashboards.k5.project.operator

    • k5pipelinemanagers.k5.project.operator

    • k5projects.k5.project.operator

    • k5realms.k5.project.operator

    • k5topics.k5.project.operator

  • Create instances of Custom Resource ConsoleYAMLSample

  • Optionally: creates a route for the OpenShift docker registry

  • Create Roles

    • k5-aggregate-edit (provides edit access to the created Custom Resource Definitions above and aggregates it to the cluster roles admin and edit)

    • k5-aggregate-view (provides view access to the created Custom Resource Definitions above and aggregates it to the cluster role view)

    • cpd-admin-additional-role (provides admin access to routes and pods/portforward)

  • Create ServiceAccounts

    • k5-operator-sa

  • Create role bindings (associating permissions with service accounts in the namespace)

    • Service Account k5-operator-sa: cpd-admin-role, cpd-viewer-role, edit

    • Service Account cpd-viewer-sa: view

    • Service Account cpd-admin-sa-rolebinding: cpd-admin-additional-role
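For illustration, a role binding like the one that associates the edit cluster role with the k5-operator-sa service account could look as follows (a sketch with assumed resource name and namespace, not the exact manifest the script creates):

```yaml
# Illustrative RoleBinding; the metadata name and namespace are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k5-operator-sa-edit
  namespace: zen
subjects:
  - kind: ServiceAccount
    name: k5-operator-sa
    namespace: zen
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```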

Parameters

The ssob_admin_setup.sh script has the following parameters:

Variable Description Example Default
--accept_license Set flag to confirm that the license agreement has been read and accepted (license files can be found in the installation package folder ./ssob-install/deployments/LICENSES ) - -
--cpd_namespace Name of the CPD project in the OpenShift cluster zen -
--external_address_image_registry External address of the internal docker registry for creating the required route (without namespace); if this address is not set, the route must otherwise be created manually later image-registry.apps.{your.cluster.domain} -
--image_registry_namespace Namespace of internal docker registry openshift-image-registry openshift-image-registry
--image_registry_service Service of internal docker registry image-registry image-registry
Note: The route for the OpenShift image registry will only be created if the parameter --external_address_image_registry is set. An external route to the cluster image registry is needed in order to upload the container images that IBM Financial Services Workbench is comprised of to the cluster registry. The existence of those images in the cluster registry is a prerequisite for the following step.

To check whether the internal container registry is already exposed via a route use the following command:

$ oc -n openshift-image-registry get routes

Example

$ cd deployments
$ chmod +x ssob_admin_setup.sh
$ ./ssob_admin_setup.sh  \
  --accept_license  \
  --cpd_namespace=zen \
  --external_address_image_registry=image-registry.apps.{your.cluster.domain}
Trouble: If the above command fails, check the logs for more detailed information about the cause of the error. You can find the logs at /tmp/wdp.install.log.

Step 2: Installation of IBM Financial Services Workbench

Introduction

The installation of IBM Financial Services Workbench is done by executing the script ssob_install.sh. It must be executed as a user with OpenShift project admin permissions in the chosen OpenShift namespace, e.g. zen. The installation script requires a set of command-line arguments.

Description

The installer will configure and install the following components via Helm charts into the already existing namespace of CPD:

  • solution-designer

  • solution-hub

Without further configuration, the components reuse the existing CPD service accounts or, where applicable, the ones created in Step 1.

The installer will create all default secrets that are necessary for an Envoy project. Secrets containing sensitive data (needed for the deployments and jobs) are deliberately created outside of the Helm charts in order to avoid exposing sensitive values.

Each package contains all used docker images and Helm charts. During the installation, all provided docker images are pushed into the configured registry; typically this is the internal OpenShift registry.

After the installation of each Helm chart, the Helm tests are executed. The installation fails if one or more Helm tests fail. By default, the Helm tests are enabled and the Helm test pods are removed automatically; both steps can be skipped. If Helm tests fail, the pod description and, if available, the pod logs are printed.

The installation script ( ssob_install.sh ) creates the following required k8s secrets (containing all needed sensitive data) using the oc cli and the install_init_data.yaml file:

  • default mongodb binding for solution-envoy

  • default kafka binding for solution-envoy

  • default iam binding for solution-envoy

  • mongodb binding for the solution-designer

  • token master key for encryption of git tokens stored in the solution-designer

  • optional access credentials for the s3 storage binding of the solution-designer (local marketplace)

  • secret containing the admin credentials for the identity server

Parametrization

The installer is parametrized and adapted to your particular environment via CLI parameters and three parameter/values files.

There are three different types of parameters:

  • Those that you will always have to adapt to your environment. They are given as CLI parameters to the ssob_install.sh script.

  • Those that contain sensitive data and need to be handled with care. These are configured via the values file install_init_data.yaml

  • Those that are likely to be used in their default form. These are configured via the yaml files solution-hub-values.yaml and solution-designer-values.yaml

You can find these files in the ssob-install/deployments/configs directory of the unpacked installation package.

Customize the configuration files

Tip: You can edit the YAML files with any text editor that supports YAML mode or YAML syntax highlighting. You will probably have to transfer the YAML files temporarily to your local computer for editing and back to the host server afterwards.

Connect via scp to the server host and download the files to your local computer, for example:

$ scp user@remote_host:remote_file ~/remote_directory 
$ scp root@11.11.11.11:/root/install/ssob_2.4.0/ssob-install/deployments/configs/install_init_data.yaml ~/install

Locally edit your files and then connect via scp to the server host and upload each edited file, for example:

$ scp local_file user@remote_host:remote_file
$ scp install_init_data.yaml root@11.11.11.11:/root/install/ssob_2.4.0/ssob-install/deployments/configs/install_init_data.yaml

Backup the configuration files

CAUTION: The YAML files are processed and modified by the installation script. Therefore, after you have customized the files to your needs, you must back up those files before executing the installation script. This is especially important because these files must still be accessible for Installing an Upgrade or Hotfix after this installation.

Backup the following files:

  • install_init_data.yaml

  • install.yaml

  • solution-designer-values.yaml

  • solution-hub-values.yaml

For example:

$ cd ~/install/ssob_2.4.0/ssob-install/deployments
$ mv configs configs.bak
$ mkdir configs
$ cp ./configs.bak/* configs
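Alternatively, a timestamped copy avoids overwriting an older backup when the procedure is repeated (a sketch; the directory names are assumptions):

```shell
#!/usr/bin/env bash
# Sketch: keep a timestamped backup of the configs directory before the
# installation script modifies the files in place.
set -eu

backup_configs() {
  local src="$1" stamp
  stamp="$(date +%Y%m%d-%H%M%S)"
  cp -r "$src" "${src}.bak-${stamp}"
  echo "${src}.bak-${stamp}"   # print the backup path for later reference
}
```

For example, backup_configs configs copies configs to configs.bak-{timestamp} and prints the new directory name.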

Parameters

The ssob_install.sh script has the following parameters:

Note:
  • See the previous section for how to determine the external URL of the cluster image registry

  • Run the following to check for the image registry service (look for the service running on port 5000):

$ oc get services -n openshift-image-registry
  • To obtain the TLS certificate data (created by the IBM Cloud Pak for Data installation) run the following commands:

$ oc get secret helm-secret -n zen -o jsonpath='{.data.ca\.cert\.pem}' | base64 -d > /root/install/ssob_2.4.0/zenhelm/ca.cert.pem
$ oc get secret helm-secret -n zen -o jsonpath='{.data.helm\.cert\.pem}' | base64 -d > /root/install/ssob_2.4.0/zenhelm/helm.cert.pem
$ oc get secret helm-secret -n zen -o jsonpath='{.data.helm\.key\.pem}' | base64 -d > /root/install/ssob_2.4.0/zenhelm/helm.key.pem    
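All three commands follow the same pattern: read one field of the secret via a jsonpath query and base64-decode it into a PEM file. The decoding step can be illustrated with dummy data (no cluster required):

```shell
#!/usr/bin/env bash
# Sketch: values under .data.<key> in a Kubernetes secret are base64-encoded;
# 'base64 -d' restores the original PEM content. Dummy data stands in for
# the output of the 'oc ... -o jsonpath' query.
set -eu

pem='-----BEGIN CERTIFICATE-----'
encoded="$(printf '%s' "$pem" | base64)"      # what the jsonpath query returns
printf '%s' "$encoded" | base64 -d > /tmp/ca.cert.pem
```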
Note: If the OpenShift image repository is used, the value for the parameter external_address_image_registry can be obtained by viewing the route of the service image-registry in the namespace openshift-image-registry:
$ oc -n "openshift-image-registry" get route image-registry    
Note: The value for the parameter internal_address_image_registry is composed as {image-registry-service}.openshift-image-registry.svc:{port}. You can find the port by searching for the service named image-registry in the list of all services in the openshift-image-registry namespace:
$ oc -n "openshift-image-registry" get services    
Variable Description Example
--accept_license Set flag to confirm that the license agreement has been read and accepted (license files can be found in the installation package folder ./ssob-install/deployments/LICENSES )
--external_address_image_registry Docker registry for pushing the images into the used registry (normally the external available address) (without namespace) image-registry.apps.openshift-cluster.{your.domain.cloud}
--internal_address_image_registry Docker registry for pulling images in OpenShift (usually the internal docker registry) image-registry.openshift-image-registry.svc:5000
--cpd_namespace Name of the cpd project in the OpenShift cluster zen
--identity_provider_host URL of the identity provider https://identity.apps.openshift-cluster.{your.domain.cloud}
--host_domain Hostname of the OpenShift cluster apps.openshift-cluster.{your.domain.cloud}
--only_solution_designer Optional: install only a new solution-designer (default is to install both components at once)
--only_solution_hub Optional: install only a new solution-hub (default is to install both components at once)
--skip_docker_login Optional: skip the docker login (can be used if you are already logged in)
--start_docker_daemon Optional: start the docker daemon via the script, default value is true true
--skip_helm_tls Optional: disable TLS for Helm requests; default value is true true
--helm-tls-ca-cert Optional: path to the TLS CA certificate file (default "$HELM_HOME/ca.pem") for helm commands /path/to/my/ca.cert.pem
--helm-tls-cert Optional: path to TLS certificate file (default "$HELM_HOME/cert.pem") for Helm commands /path/to/my/cert.pem
--helm-tls-key Optional: path to TLS key file (default "$HELM_HOME/key.pem") for Helm commands /path/to/my/helm.key.pem
--skip_helm_test Optional: skip Helm test execution
--skip_helm_test_cleanup Optional: skip clean up of Helm test pod after execution of Helm test

Configuration

Note: The installer replaces values that are surrounded by # with appropriate values, e.g. registry: "#DOCKER_REGISTRY#/#NAMESPACE#" will be resolved to registry: "image-registry.openshift-image-registry.svc:5000/zen" .
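This substitution can be reproduced with sed to preview what a placeholder will resolve to (a sketch; the installer's actual replacement mechanism is not specified here):

```shell
#!/usr/bin/env bash
# Sketch: resolve the #...# placeholders the way the installer does.
# The registry address and namespace are the example values from this guide.
set -eu

DOCKER_REGISTRY="image-registry.openshift-image-registry.svc:5000"
NAMESPACE="zen"

resolve() {
  sed -e "s|#DOCKER_REGISTRY#|${DOCKER_REGISTRY}|g" \
      -e "s|#NAMESPACE#|${NAMESPACE}|g"
}

echo 'registry: "#DOCKER_REGISTRY#/#NAMESPACE#"' | resolve
```

This prints registry: "image-registry.openshift-image-registry.svc:5000/zen", matching the resolved value shown in the note above.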

YAML file install_init_data.yaml

Attention: This file contains settings for creating secrets with sensitive data (credentials, bindings, TLS certificates and keys)
Note:
  • The credentials global.k5-s3-storage.access.accesskey/secretkey are used for the integrated s3 storage service that stores the local marketplace data.
  • The MongoDB credentials admin:password@mongodb for the admin user can be retrieved by looking for a Kubernetes secret named mongodb:
    $ oc -n foundation get secret mongodb -o jsonpath='{.data.database-admin-password}' | base64 -d; echo

The following configuration yaml file can be found in the directory ../ssob-install/deployments/configs and needs to be adjusted:

Variable
Description
Example
global.identity.adminUser Username of a Keycloak admin admin
global.identity.adminPassword Password of a Keycloak admin
global.mongodb.designer.connectionString Mongo database connection string, that will be used for the Solution Designer mongodb://admin:password@mongodb.foundation.svc.cluster.local:27017/admin?ssl=false
global.mongodb.designer.secretName Name of secret where credentials for the Mongo database connection of the Solution Designer are stored k5-designer-backend-mongodb-secret
global.mongodb.solutions.connectionString Mongo database connection string, that will be used as a default for Solution Envoy mongodb://admin:password@mongodb.foundation.svc.cluster.local:27017/admin?ssl=false
global.mongodb.solutions.secretName Name of secret where credentials for the Mongo database connection of the Solution Envoy are stored k5-default-document-storage-service-binding (default value)
global.messagehub.brokersSasl Array containing the MessageHub bootstrap service address ["kafka-kafka-bootstrap.foundation.svc.cluster.local:9093"]
global.messagehub.user Username for the MessageHub service user kafka-user
global.messagehub.password Password of the user for the MessageHub service secret123
global.messagehub.secretName Name of secret where credentials for the MessageHub service are stored k5-default-message-service-binding (default value)
global.designer.tokenMasterKey.password Password for the encryption of git tokens in the Solution Designer secret123
global.designer.tokenMasterKey.secretName Name of secret where credentials for the encryption of git tokens in the Solution Designer are stored k5-token-encryption-master-key (default value)
global.k5-s3-storage.access.accesskey The accesskey for accessing the s3 storage endpoint. It will be created only the first time with random data, but not updated if the secret already exists. If this value is set and the corresponding secretkey is also set, then the resulting secret will be created or updated. ICnh47lt3YmwSmy801Dfobi9Wf82oK6gHxJzA9LbxgiFcAW5pSPPcJPjxiy8YylN
global.k5-s3-storage.access.secretkey The secretkey for accessing the s3 storage endpoint. It will be created only the first time with random data, but not updated if the secret already exists. If this value is set and the corresponding accesskey is also set, then the resulting secret will be created or updated. BGh6SkSi35egd7mkMxKj#QFB6UnpQyyXw5remhMLwwrwiiQVVmIgKzchc3CuEY5xL
Note: The service address value for the parameter global.messagehub.brokersSasl is structured as "{kafka-bootstrap-service}.#NAMESPACE#.svc:{port}". You can find the port by searching the list of services in the namespace where Kafka is installed (e.g. foundation) for the service named like kafka-bootstrap:
$ oc -n "foundation" get services   
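The composition rule from the note can be sketched as follows (service name, namespace and port are the example values shown in the table above):

```shell
#!/usr/bin/env bash
# Sketch: compose the brokersSasl address from service name, namespace
# and port, as described in the note above.
set -eu

bootstrap_service="kafka-kafka-bootstrap"
namespace="foundation"
port="9093"    # look this up with 'oc -n foundation get services'

broker="${bootstrap_service}.${namespace}.svc.cluster.local:${port}"
echo "$broker"
```

This prints kafka-kafka-bootstrap.foundation.svc.cluster.local:9093, which matches the brokersSasl example in the table.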

YAML file solution-hub-values.yaml

Note: This file contains settings for Helm deployment of the Solution Hub. It contains settings for the configuration-management, k5-query component and the k5-project-operator.

The following configuration yaml file can be found in the directory ../ssob-install/deployments/resources and needs to be adjusted:

Variable Description Example
global.image.registry Image repository for the solution hub #DOCKER_REGISTRY#/#NAMESPACE# (default value)
global.domain Domain for the solution hub (without protocol) #DOMAIN# (default value)
global.hosts.configurationManagement.name Hostname for the configuration management (without protocol) ssob-config.#DOMAIN# (default value)
global.hosts.hub.name Hostname for the solution hub (without protocol) k5-hub.#DOMAIN# (default value)
global.hosts.k5query.name Hostname for the k5-query component (without protocol) k5query.#DOMAIN# (default value)
global.hosts.k5plantumlserver.name Hostname for the PlantUML server (without protocol) k5-plantuml-server.#DOMAIN# (default value)
global.identity.url URL for the identity provider #IDENTITY_PROVIDER_HOST# (default value)
global.identity.realm Used realm in the identity provider. This entry must correspond to the entry global.identity.realm in solution-designer-values.yaml . ssob (default value)
global.truststore.trustMap.identity List of certificates used for communication over SSL/TLS. The list should include all certificates in PEM ASCII format for the communication with
  • the identity server
  • the storage service of the marketplace (used in Solution Designer)
  • the git repository (used in Solution Designer)
  • the mongo database (used in Solution Designer)
This truststore is also used in the Solution Designer.
Attention: Please note that it is necessary to add the certificate of the Root CA to this trust map. If the Root CA certificate is not added, there will be communication issues between the components.
See Format Example
k5projectoperator.watch.namespaces.blacklist List of namespaces that will never be watched (exact matches)
k5projectoperator.watch.namespaces.whitelist List of namespaces that will be watched (starts-with matches). If empty, all namespaces with permissions are considered.
k5-hub-backend.endpoints.openshiftConsole.url URL for the openshift console https://console-openshift-console.#DOMAIN# (default value)

Format Example global.truststore.trustMap.identity

trustMap:
  identity: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  mycert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
Note: The certificates contained in the trustMap are the TLS certificates of the Keycloak endpoint. In general, and especially if you followed Installing Third-Party Components, you can obtain these certificates by checking which certificates the server presents to the client via OpenSSL:
$ echo | openssl s_client -showcerts -connect #KEYCLOAK#DOMAIN#:443 2>/dev/null
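The openssl output also contains handshake details; only the PEM blocks belong in the trustMap. They can be cut out with sed, sketched here with dummy input standing in for the server response:

```shell
#!/usr/bin/env bash
# Sketch: keep only the PEM certificate blocks from 'openssl s_client
# -showcerts' output, e.g. for pasting into the trustMap.
set -eu

extract_certs() {
  sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p'
}

# Dummy input standing in for the openssl output:
printf '%s\n' 'depth=1 CN = example-ca' \
  '-----BEGIN CERTIFICATE-----' 'MIIB...' '-----END CERTIFICATE-----' \
  'verify return:1' | extract_certs
```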

YAML file solution-designer-values.yaml

Note: This file contains settings for Helm deployment of the Solution Designer.

The following configuration yaml file can be found in the directory ../ssob-install/deployments/configs and needs to be adjusted:

Variable
Description
Example
global.environment.name Provide a name for the solution designer installation designer (default value)
global.image.registry Image repository for the solution designer #DOCKER_REGISTRY#/#NAMESPACE# (default value)
global.domain Domain for the solution designer (without protocol) #DOMAIN# (default value)
global.endpoints.designer.host Hostname for the solution designer (without protocol) ssob-sc.#DOMAIN# (default value)
global.endpoints.gitIntegrationController.host Hostname for the git integration controller (without protocol) ssob-git-integration-controller.#DOMAIN# (default value)
global.endpoints.solutionController.host Hostname for the solution controller (without protocol) ssob-sdo.#DOMAIN# (default value)
global.endpoints.codeGenerationProvider.host Hostname for the code generation provider (without protocol) ssob-code-gen.#DOMAIN# (default value)
global.endpoints.cliProvider.host Hostname for the cli provider (without protocol) ssob-cli-provider.#DOMAIN# (default value)
global.endpoints.queryService.host Hostname for the query service (without protocol) ssob-k5-query-service.#DOMAIN# (default value)
global.endpoints.queryService.internalUrl Internal URL for the query service https://k5-query.#NAMESPACE#.svc.cluster.local (default value)
global.endpoints.umlRenderer.internalUrl Internal URL for the PlantUML server https://k5-plantuml-server.#NAMESPACE#.svc.cluster.local (default value)
global.endpoints.localMarketplaceController.host Hostname for the local marketplace controller (without protocol) ssob-local-marketplace-controller.#DOMAIN# (default value)
global.openshiftConsole.url URL for the openshift console https://console-openshift-console.#DOMAIN# (default value)
global.truststore.secretName Secret name for the truststore k5-hub-truststore (default value)
global.identity.url URL for the identity provider #IDENTITY_PROVIDER_HOST# (default value)
global.identity.realm Used realm in the identity provider. This entry must correspond to the entry global.identity.realm in solution-hub-values.yaml . ssob (default value)
global.identity.adminCredentialsSecretName Name of secret in which admin credentials for the identity provider are stored iam-secret (default value)
k5-designer-backend.mongoDb.secretName Name of the secret, where access credentials of the Mongo database used by the Solution Designer are defined k5-designer-backend-mongodb-secret (default value)
k5-designer-backend.mongoDb.dbName Name of the Mongo database for the solution designer ssob-sc (default value)
k5-designer-backend.migration.db.gic.mongoDb.secretName MongoDB secret of git integration controller db - use same value as at git-integration-controller.mongoDb.secretName k5-wb-gic (default value)
k5-designer-backend.migration.db.gic.mongoDb.dbName Name of git integration controller MongoDB - use same value as at git-integration-controller.mongoDb.dbName
k5-designer-backend.migration.solutionProviderMigration.runMigration Boolean value to run migration for solution provider false (default value)
k5-designer-backend.solutionProviderMigration.defaultProviderData.baseUrl URL of the old GitLab system; only needed if runMigration is true
k5-documentable-artifacts-migration.mongoDb.secretName Name of the secret, where access credentials of the Mongo database used by the migration of documentable artifacts are defined k5-designer-backend-mongodb-secret (default value)
k5-documentable-artifacts-migration.mongoDb.dbName Name of the Mongo database for the migration of documentable artifacts ssob-sc (default value)
k5-git-integration-controller.mongoDb.secretName Name of the secret, where access credentials of the Mongo database used by the git integration controller are defined k5-wb-gic (default value)
k5-git-integration-controller.mongoDb.dbName Name of the Mongo database for the git integration controller k5-wb-gic (default value)
k5-git-integration-controller.tokenEncryptionMasterKey.secretName Name of the secret for providing the encryption of all the tokens which are stored in the database. See Format Example k5-token-encryption-master-key (default value)
k5-solution-controller.marketplace.storage.secretName Name of the secret containing the credentials, which are used to access the s3 storage. In default case the same secret can be used as for k5-s3-storage.secretName. See Format Example k5-s3-storage-access (default value)
k5-local-marketplace-controller.marketplace.storage.secretName Name of the secret containing the credentials, which are used to access the s3 storage. In default case the same secret can be used as for k5-s3-storage.secretName. See Format Example k5-s3-storage-access (default value)
k5-s3-storage.secretName Name of the secret containing the credentials which are used to allow access to the k5 s3 storage. This secret will be automatically created by the installer if it does not exist; in this case the accesskey and secretkey will also be generated with random values. See Format Example k5-s3-storage-access (default value)

Format Example git-integration-controller.tokenEncryptionMasterKey.secretName

apiVersion: v1
kind: Secret
type: Opaque
data:
  key: BASE64_ENCODED_KEY

Format Example solution-controller.marketplace.storage.secretName

apiVersion: v1
kind: Secret
type: Opaque
data:
  accesskey: BASE64_ENCODED_ACCESSKEY
  secretkey: BASE64_ENCODED_SECRETKEY

Format Example k5-local-marketplace-controller.marketplace.storage.secretName

apiVersion: v1
kind: Secret
type: Opaque
data:
  accesskey: BASE64_ENCODED_ACCESSKEY
  secretkey: BASE64_ENCODED_SECRETKEY

Format Example k5-s3-storage.secretName

apiVersion: v1
kind: Secret
type: Opaque
data:
  accesskey: BASE64_ENCODED_ACCESSKEY
  secretkey: BASE64_ENCODED_SECRETKEY
Attention: The certificates in the truststore should always contain the complete certificate chain. Certificates for services that are called with a cluster-based URL (e.g. myservice.namespace.svc.cluster.local) may be omitted from truststores if the services use certificates issued by the internal cluster certificate authority.
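The accesskey/secretkey values in the format examples above are the raw credentials base64-encoded. A manifest of that shape can be rendered as follows (a sketch with dummy key values; by default the installer creates this secret itself):

```shell
#!/usr/bin/env bash
# Sketch: render a Secret manifest of the format shown above.
# The accesskey/secretkey values are dummies.
set -eu

render_secret() {
  local accesskey="$1" secretkey="$2"
  cat <<EOF
apiVersion: v1
kind: Secret
type: Opaque
data:
  accesskey: $(printf '%s' "$accesskey" | base64)
  secretkey: $(printf '%s' "$secretkey" | base64)
EOF
}

render_secret "my-access-key" "my-secret-key"
```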

Example

$ cd deployments
$ chmod +x ssob_install.sh
$ ./ssob_install.sh \
  --accept_license \
  --external_address_image_registry=image-registry.apps.{your.cluster.domain} \
  --internal_address_image_registry=image-registry.openshift-image-registry.svc:5000 \
  --cpd_namespace=zen \
  --identity_provider_host=https://keycloak.apps.{your.cluster.domain} \
  --host_domain=apps.{your.cluster.domain} \
  --helm-tls-ca-cert=/root/.helm/ca.pem \
  --helm-tls-cert=/root/.helm/cert.pem \
  --helm-tls-key=/root/.helm/key.pem
Trouble: If the above command fails, check the logs for more detailed information about the cause of the error. You can find the logs at /tmp/wdp.install.log

Validate the installation

To validate the results of the previous installation step, do the following:

  1. Check the deployment versions by running this command and make sure that the expected versions of Solution Designer and Solution Hub are deployed as listed in section Deployment Versions:
    $ helm ls --tls --tiller-namespace zen \
    --tls-cert=/root/.helm/cert.pem \
    --tls-key=/root/.helm/key.pem \
    | grep release-ssob-solution
    NAME                            STATUS    CHART                     APP VERSION
    release-ssob-solution-designer  DEPLOYED  k5-designer-bundle-2.9.5  2.9.5      
    release-ssob-solution-hub       DEPLOYED  ssob-solution-hub-2.3.3   2.3.3      
  2. Check the deployments by running this command and compare your output to the output provided here

    $ oc get deployments -n zen
    NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
    auto-devops-mvn-dependencies   2/2     2            2           1d
    configuration-management       2/2     2            2           1d
    k5-cli-provider                3/3     3            3           1d
    k5-code-generation-provider    3/3     3            3           1d
    k5-designer-backend            3/3     3            3           1d
    k5-designer-frontend           3/3     3            3           1d
    k5-git-integration-controller  3/3     3            3           1d
    k5-hub-backend                 3/3     3            3           1d
    k5-hub-frontend                3/3     3            3           1d
    k5-plantuml-server             2/2     2            2           1d
    k5-query                       2/2     2            2           1d
    k5-s3-storage                  1/1     1            1           1d
    k5-solution-controller         2/2     2            2           1d
    k5projectoperator              1/1     1            1           1d
    tiller-deploy                  1/1     1            1           1d
  3. Open the Solution Designer Web application in your browser
    1. Retrieve the URL to point your browser to:
      $ oc -n zen get routes | grep k5-designer-frontend-service
      k5-designer-frontend-route ssob-sc.apps.{your.cluster.domain} / k5-designer-frontend-service https reencrypt/Redirect None
    2. Point your browser to the retrieved URL, e.g. https://ssob-sc.apps.{your.cluster.domain}/. As a result, you should see a login form.

Troubleshooting

Docker daemon

For the above commands to work, you need a running docker daemon on the host where you execute them (your bastion host). If you get an error message like the following:

$ Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Start docker like this (system dependent):

$ systemctl start docker
$ systemctl enable docker

Docker and self-signed certificates

If your container registry uses self-signed certificates, you might see something like the following:

Extracting modules
checking structure
structure check skip for now
docker login https://image-registry.apps.{your.cluster.domain}
Error response from daemon: Get https://image-registry.apps.{your.cluster.domain}/v1/users/: x509: certificate signed by unknown authority

To fix issues with self-signed certificates, please follow the instructions on https://docs.docker.com/registry/insecure/ and perform the following steps:

  • Retrieve the CA certificate from the image registry

  • Add it to the local trust store

  • Reload the trusted certificates

$ oc -n openshift-ingress get secret router-certs-default -o jsonpath='{.data.tls\.crt}' | base64 -d  > /etc/pki/ca-trust/source/anchors/image-registry.apps.{your.cluster.domain}.crt
$ update-ca-trust
$ systemctl restart docker  
$ docker login -u admin -p {password} https://image-registry.{your.cluster.domain}

k5-s3-storage with NFS persistent volume fails due to insufficient permissions

If k5-s3-storage fails with the following log output

ERROR Unable to initialize backend: Insufficient permissions to access path
> Please ensure the specified path can be accessed

And if the storage class is nfs

> oc get pvc k5-s3-storage -o yaml | grep "storageClassName"
  storageClassName: nfs

the permissions must be repaired.

To do so, please follow these steps:

1. Get the user ID of the k5-s3-storage pod

> oc get pod  | grep k5-s3-storage
k5-s3-storage-775db6d5fc-sjlqn                                  1/1     Running     0          6h6m
> oc get pod k5-s3-storage-775db6d5fc-sjlqn -o yaml | grep runAsUser
      runAsUser: 1000320900

2. Log in to your NFS mount

3. Check the current permissions

> ls -la *
total 336
drwxr-xr-x. 2 1001 root   4096 17. Jun 19:27 .
drwxrwxrwx. 4 root root     53 23. Jun 13:46 ..
-rw-r--r--. 1 1001 root    462 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-meta.yml
-rw-r--r--. 1 1001 root   1269 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-preview.json
-rw-r--r--. 1 1001 root 144790 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb.zip
-rw-r--r--. 1 1001 root    960 17. Jun 19:27 index.yml

4. Change owner

> chown -R 1000320900:1000320900 .

5. Check result

> ls -la *
total 336
drwxr-xr-x. 2 1000320900 1000320900   4096 17. Jun 19:27 .
drwxrwxrwx. 4 1000320900 1000320900     53 23. Jun 13:48 ..
-rw-r--r--. 1 1000320900 1000320900    462 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-meta.yml
-rw-r--r--. 1 1000320900 1000320900   1269 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-preview.json
-rw-r--r--. 1 1000320900 1000320900 144790 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb.zip
-rw-r--r--. 1 1000320900 1000320900    960 17. Jun 19:27 index.yml

6. Change permission

> chmod -R g+rwx .

7. Check result

> ls -la *
total 336
drwxrwxr-x. 2 1000320900 1000320900   4096 17. Jun 19:27 .
drwxrwxrwx. 4 1000320900 1000320900     53 23. Jun 13:48 ..
-rw-rwxr--. 1 1000320900 1000320900    462 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-meta.yml
-rw-rwxr--. 1 1000320900 1000320900   1269 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-preview.json
-rw-rwxr--. 1 1000320900 1000320900 144790 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb.zip
-rw-rwxr--. 1 1000320900 1000320900    960 17. Jun 19:27 index.yml
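
Steps 4 and 6 can be combined into one helper, sketched here (the user ID and export path are examples; changing ownership to a foreign uid requires root on the NFS host):

```shell
#!/usr/bin/env bash
# Sketch of steps 4 and 6: give the s3-storage uid ownership of the NFS
# export and grant the group read/write/execute on everything below it.
set -eu

repair_perms() {
  local owner="$1" dir="$2"
  chown -R "$owner" "$dir"   # step 4: change owner (needs root for a foreign uid)
  chmod -R g+rwx "$dir"      # step 6: change permission
}
```

For example, repair_perms 1000320900:1000320900 /path/to/nfs/export, with the user ID retrieved in step 1.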