Installation process

This section guides you through the installation of IBM Financial Services Workbench 3.1-Preview.

After the installation has finished successfully, Solution Hub and Solution Designer are installed on your cluster.

Attention: Please note that both components are not fully configured after the installation of the binaries alone; additional configuration is still required.
Note: The FSW installer (ssob_admin_setup.sh) automatically uses the builder service account from the chosen namespace cpd_namespace to log in to the internal docker container registry of the cluster and push images into it.

Roles in the installation process

Red Hat OpenShift cluster administrator

The cluster administrator is responsible for:

  • Creating projects (namespaces)

  • Creating roles, role bindings and service-accounts

  • Installing Custom Resource (CR) definitions

  • Configuring the OpenShift image-registry

Project administrator

The project administrator is responsible for:

  • Installing Solution Hub and Designer

  • Installing Envoys on prepared namespaces

  • Providing necessary configuration data

  • Pushing images to the image-registry

  • Executing Helm after installation

Before you begin

Attention: Please make sure that all system requirements are met, especially the OpenShift setup including OpenShift Pipelines and the Cloud Pak for Data installation.

In order to install IBM Financial Services Workbench, the following requirements must be met on the machine from which the installer is executed:

  • You are logged in to the OpenShift cluster as a user with sufficient rights for the task at hand

    oc login
  • A local docker daemon is installed and running

    systemctl start docker
  • A tar utility and bash are available, and the installer package was downloaded and unpacked

    tar xvzf ssob-3.1.0-preview-ibm-openshift-4.8-cpd-installations.tgz
  • Your current working directory is set to the directory ssob-install of the unpacked installer package

    cd ssob-install
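
The prerequisites above can be checked with a short pre-flight script. This is only a sketch; adjust the tool list to your environment.

```shell
#!/bin/sh
# Pre-flight sketch: report whether the tools required by the installer
# are available on this machine. Reports only; always exits 0.
check_tools() {
  for tool in oc docker tar bash; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "OK:      $tool"
    else
      echo "MISSING: $tool"
    fi
  done
}
check_tools
```

If any tool is reported as MISSING, install it before proceeding with the steps below.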

Step 1: Setup cluster-wide resources

Attention: To complete this task a user in the cluster admin role is required.

Introduction

Cluster-wide resources are set up by executing the script ssob_admin_setup.sh, which is part of the installer. It must be executed as a user with cluster-admin permissions in the OpenShift cluster (role: Red Hat OpenShift cluster administrator). It can be configured via command line arguments, which are described below.

Description

Executing the ssob_admin_setup.sh script will create and configure the following:

  • Custom Resource definitions

    • solutions.sol.rt.cp.knowis.de

    • envoys.env.rt.cp.knowis.de

    • k5clients.k5.project.operator

    • k5dashboards.k5.project.operator

    • k5pipelinemanagers.k5.project.operator

    • k5projects.k5.project.operator

    • k5realms.k5.project.operator

    • k5topics.k5.project.operator

  • Instances of Custom Resource ConsoleYAMLSample

  • Optionally: a route for the OpenShift docker registry

  • Roles

    • k5-aggregate-edit (provides edit access to the created Custom Resource Definitions above and aggregates it to the cluster roles admin and edit)

    • k5-aggregate-view (provides view access to the created Custom Resource Definitions above and aggregates it to the cluster role view)

    • cpd-admin-additional-role (provides admin access to routes and pods/portforward)

    • cpd-webhook-role (provides admin access to webhooks integration)

  • ServiceAccounts

    • k5-operator-sa

  • Role bindings (associating permissions to service-accounts in the namespace)

    • Service Account k5-operator-sa: cpd-admin-role, cpd-viewer-role, edit

    • Service Account cpd-viewer-sa: view

    • Service Account cpd-admin-sa-rolebinding: cpd-admin-additional-role

    • Service Account cpd-admin-sa-webhook-rolebinding: cpd-webhook-role

Parameters

The ssob_admin_setup.sh script has the following parameters:

Variable | Description | Example | Default
--accept_license | Set flag to confirm that the license agreement has been read and accepted (license files can be found in the installation package folder ./ssob-install/deployments/LICENSES) | - | -
--cpd_namespace | Name of the CPD project in the OpenShift cluster | zen | -
--external_address_image_registry | External address of the internal docker registry for creating the required route (without namespace); if this address is not set, the route must be created manually later | image-registry.apps.{your.cluster.domain} | -
--image_registry_namespace | Namespace of the internal docker registry | openshift-image-registry | openshift-image-registry
--image_registry_service | Service of the internal docker registry | image-registry | image-registry
Note: The route for the OpenShift image registry will only be created if the parameter --external_address_image_registry is set. An external route to the cluster image registry is needed in order to upload the container images of IBM Financial Services Workbench to the cluster registry. The existence of those images in the cluster registry is a prerequisite for the following step.

To check whether the internal container registry is already exposed via a route, use the following command:

oc -n openshift-image-registry get routes

Example:

$ cd deployments
$ chmod +x ssob_admin_setup.sh
$ ./ssob_admin_setup.sh  \
  --accept_license  \
  --cpd_namespace=zen \
  --external_address_image_registry=image-registry.apps.{your.cluster.domain}
Tip: If the above command fails, check the logs for more detailed information about the error cause. You can find the logs at /tmp/wdp.install.log.

Step 2: Installation of IBM Financial Services Workbench

Introduction

The installation of IBM Financial Services Workbench is done by executing the script ssob_install.sh. It must be executed as a user with OpenShift project admin permissions in the chosen OpenShift namespace, e.g. zen. The installation script requires a set of command line arguments.

Description

The installer will configure and install the following components via Helm charts into the already existing CPD namespace:

  • Solution Hub

  • Solution Designer

Without any further configuration the components will reuse the existing CPD service accounts and the newly created ones, respectively.

Each package contains all required docker images and Helm charts. During the installation, all provided docker images are pushed into the configured registry; for convenience, this should be the internal OpenShift registry.

After the installation of each Helm chart, Helm tests are executed. The installation fails if one or more Helm tests fail. By default the Helm tests are enabled and the Helm test pods are removed automatically; both steps can be skipped (see --skip_helm_test and --skip_helm_test_cleanup). If a Helm test fails, the pod description and, if available, the pod logs are printed.

The installer is parametrized and adapted to your particular environment via CLI parameters and two value files.

There are two different types of parameters:

  • Those that you will always have to adapt to your environment. They are given as CLI parameters to the ssob_install.sh script.

  • Those that are likely to be used in their default form. These are configured via the yaml files solution-hub-values.yaml and solution-designer-values.yaml

You can find these files in the ssob-install/deployments/configs directory of the unpacked installation package.

Customize the configuration files

Tip: Edit the YAML files with a text editor that supports YAML mode or YAML syntax highlighting. You may have to transfer the YAML files temporarily to your local computer for editing and back to the host server afterwards.

Connect via scp to the server host and download the files to your local computer, for example:

scp user@remote_host:remote_file ~/remote_directory
scp root@11.11.11.11:/root/install/ssob_3.0/ssob-install/deployments/configs/install_init_data.yaml ~/install

Locally edit your files and then connect via scp to the server host and upload each edited file, for example:

scp local_file user@remote_host:remote_file
scp install_init_data.yaml root@11.11.11.11:/root/install/ssob_3.0/ssob-install/deployments/configs/install_init_data.yaml

Backup the configuration files

Warning: The YAML files are processed and modified by the installation script. Therefore, after you have customized the files to your needs, you must back up those files before executing the installation script. This is especially important because these files must still be accessible when upgrading after this installation.

Backup the following files:

  • install.yaml

  • solution-designer-values.yaml

  • solution-hub-values.yaml

For example:

cd ~/install/ssob_3.0/ssob-install/deployments
mv configs configs.bak
mkdir configs
cp ./configs.bak/* configs
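
As an alternative to copying the directory, the customized files can be archived into a timestamped tarball. This is a sketch, not part of the installer; run it from the deployments directory shown above.

```shell
#!/bin/sh
# Sketch: archive a configuration directory into a timestamped tarball
# before running the installation script.
backup_configs() {
  dir=$1  # directory containing the customized YAML files, e.g. configs
  tar czf "${dir}-backup-$(date +%Y%m%d%H%M%S).tgz" "$dir"
}
# Usage (from ssob-install/deployments):
#   backup_configs configs
```

The timestamp in the archive name keeps backups from successive installation or upgrade runs from overwriting each other.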

Parameters

The ssob_install.sh script has the following parameters:

Note: See the previous section for how to determine the external URL of the cluster image registry.

Run the following to check for the image registry service (look for the service running on port 5000):

oc get services -n openshift-image-registry

To obtain the TLS certificate data (created by the IBM Cloud Pak for Data installation) run the following commands:

oc get secret helm-secret -n zen -o jsonpath='{.data.ca\.cert\.pem}' | base64 -d > /root/install/ssob_3.0/zenhelm/ca.cert.pem
oc get secret helm-secret -n zen -o jsonpath='{.data.helm\.cert\.pem}' | base64 -d > /root/install/ssob_3.0/zenhelm/helm.cert.pem
oc get secret helm-secret -n zen -o jsonpath='{.data.helm\.key\.pem}' | base64 -d > /root/install/ssob_3.0/zenhelm/helm.key.pem

If the OpenShift image repository is used, the value for the parameter external_address_image_registry can be obtained by viewing the route of the service image-registry in the namespace openshift-image-registry:

oc -n "openshift-image-registry" get route image-registry

The value for the parameter internal_address_image_registry is composed like {image-registry service}.openshift-image-registry.svc:{port}. You can find the port by looking for the image-registry service in the list of all services in the openshift-image-registry namespace:

oc -n "openshift-image-registry" get services
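
The composition rule above can be expressed as a small helper. This is illustrative only; the values used are the defaults from the parameter table below.

```shell
#!/bin/sh
# Sketch: compose the internal image registry address from service name,
# namespace and port, following the pattern described above.
internal_registry_address() {
  service=$1; namespace=$2; port=$3
  printf '%s.%s.svc:%s\n' "$service" "$namespace" "$port"
}
internal_registry_address image-registry openshift-image-registry 5000
# prints: image-registry.openshift-image-registry.svc:5000
```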
Variable | Description | Example
--accept_license | Set flag to confirm that the license agreement has been read and accepted (license files can be found in the installation package folder ./ssob-install/deployments/LICENSES) |
--external_address_image_registry | Docker registry for pushing the images (normally the externally available address, without namespace) | image-registry.apps.openshift-cluster.{your.domain.cloud}
--internal_address_image_registry | Docker registry for pulling images in OpenShift (usually the internal docker registry) | image-registry.openshift-image-registry.svc:5000
--cpd_namespace | Name of the CPD project in the OpenShift cluster | zen
--host_domain | Hostname of the OpenShift cluster | apps.openshift-cluster.{your.domain.cloud}
--only_designer | Optional: install only a new Solution Designer (default is to install both components at once) |
--only_hub | Optional: install only a new Solution Hub (default is to install both components at once) |
--skip_docker_login | Optional: skip the docker login (can be used if you are already logged in) |
--start_docker_daemon | Optional: start the docker daemon via the script; default value is true | true
--skip_helm_tls | Optional: disable TLS for Helm requests; default value is true | true
--helm-tls-ca-cert | Optional: path to the TLS CA certificate file for Helm commands (default "$HELM_HOME/ca.pem") | /path/to/my/ca.cert.pem
--helm-tls-cert | Optional: path to the TLS certificate file for Helm commands (default "$HELM_HOME/cert.pem") | /path/to/my/cert.pem
--helm-tls-key | Optional: path to the TLS key file for Helm commands (default "$HELM_HOME/key.pem") | /path/to/my/helm.key.pem
--skip_helm_test | Optional: skip Helm test execution |
--skip_helm_test_cleanup | Optional: skip clean-up of the Helm test pod after execution of the Helm test |
--skip_set_fsgroup | Optional: skip setting of fsGroup for k5-s3-storage |

YAML files for the installation:

The installer replaces values that are surrounded by # with appropriate values, e.g. registry: #DOCKER_REGISTRY#/#NAMESPACE# will be resolved to registry: image-registry.openshift-image-registry.svc:5000/zen.
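
This substitution can be illustrated as follows. This is a sketch of the behavior described above, not the installer's actual implementation, and the domain value is an assumed example.

```shell
#!/bin/sh
# Sketch: resolve the #...# placeholders the way the installer does,
# using example values for a cpd_namespace of zen.
DOMAIN="apps.openshift-cluster.example.com"                        # assumed example
DOCKER_REGISTRY="image-registry.openshift-image-registry.svc:5000"
NAMESPACE="zen"
resolve_placeholders() {
  sed -e "s|#DOMAIN#|$DOMAIN|g" \
      -e "s|#DOCKER_REGISTRY#|$DOCKER_REGISTRY|g" \
      -e "s|#NAMESPACE#|$NAMESPACE|g"
}
echo 'registry: "#DOCKER_REGISTRY#/#NAMESPACE#"' | resolve_placeholders
# prints: registry: "image-registry.openshift-image-registry.svc:5000/zen"
```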

Optional: YAML file solution-hub-values.yaml

This file contains settings for Helm deployment of the Solution Hub. It contains settings for the Configuration Management, k5-query component and the k5-project-operator. The solution-hub-values.yaml file is located in the directory ../ssob-install/deployments/configs and only needs to be adjusted if the default settings do not fit.

Variable | Description | Example
global.domain | Domain for the Solution Hub (without protocol) | #DOMAIN# (default value)
global.image.registry | Image repository for the Solution Hub | #DOCKER_REGISTRY#/#NAMESPACE# (default value)
global.hosts.configurationManagement.name | Hostname for the Configuration Management (without protocol) | k5-configuration.#DOMAIN# (default value)
global.hosts.k5configurator.name | Hostname for the k5-configurator (without protocol) | k5-configurator.#DOMAIN# (default value)
global.hosts.hub.name | Hostname for the Solution Hub (without protocol) | k5-hub.#DOMAIN# (default value)
global.hosts.k5pipelinemanager.name | Hostname for the k5-pipeline-manager component (without protocol) | k5-pipeline-manager.#DOMAIN# (default value)
global.networkPolicy.enabled | Enable/disable the network policy | true (default value)
k5projectoperator.watch.namespaces.blacklist | List of namespaces that will never be watched (exact matches) |
k5projectoperator.watch.namespaces.whitelist | List of namespaces that will be watched (starts-with matches); if empty, all namespaces with permissions are considered |
k5-hub-backend.endpoints.openshiftConsole.url | URL for the OpenShift console | https://console-openshift-console.#DOMAIN# (default value)

Format Example of solution-hub-values.yaml:

## Global (potentially shared) settings
global:
  hosts:
    domain: "#DOMAIN#"
    configurationManagement:
      name: "k5-configuration.#DOMAIN#"
    k5configurator:
      name: "k5-configurator.#DOMAIN#"
    hub:
      name: "k5-hub.#DOMAIN#"
    k5pipelinemanager:
      name: "k5-pipeline-manager.#DOMAIN#"
  image:
    registry: "#DOCKER_REGISTRY#/#NAMESPACE#"
## Operator namespace settings
k5projectoperator:
  watch:
    namespaces:
      blacklist: [ ]
      whitelist: [ ]
k5-hub-backend:
  endpoints:
    openshiftConsole:
      url: "https://console-openshift-console.#DOMAIN#"

Optional: YAML file solution-designer-values.yaml:

Note: This file contains settings for Helm deployment of the Solution Designer.

The solution-designer-values.yaml file is located in the directory ../ssob-install/deployments/configs and only needs to be adjusted if the default settings do not fit.

Variable | Description | Example
global.domain | Domain for the Solution Designer (without protocol) | #DOMAIN# (default value)
global.image.registry | Image repository for the Solution Designer | #DOCKER_REGISTRY#/#NAMESPACE# (default value)
global.environment.name | Name for the Solution Designer installation | designer (default value)
global.endpoints.designer.host | Hostname for the Solution Designer (without protocol) | k5-designer.#DOMAIN# (default value)
global.endpoints.gitIntegrationController.host | Hostname for the git integration controller (without protocol) | k5-git-integration-controller.#DOMAIN# (default value)
global.endpoints.solutionController.host | Hostname for the solution controller (without protocol) | k5-solution-controller.#DOMAIN# (default value)
global.endpoints.codeGenerationProvider.host | Hostname for the code generation provider (without protocol) | k5-code-generation-provider.#DOMAIN# (default value)
global.endpoints.cliProvider.host | Hostname for the CLI provider (without protocol) | k5-cli-provider.#DOMAIN# (default value)
global.endpoints.queryService.host | Hostname for the query service (without protocol) | k5-query.#DOMAIN# (default value)
global.endpoints.s3Storage.host | Hostname for the S3 storage (without protocol) | k5-s3-storage.#DOMAIN# (default value)
global.endpoints.localMarketplaceController.host | Hostname for the local marketplace controller (without protocol) | k5-local-marketplace-controller.#DOMAIN# (default value)

Format Example solution-designer-values.yaml:

## Global (potentially shared) settings
global:
  domain: "#DOMAIN#"
  hosts:
    domain: "#DOMAIN#"
  image:
    registry: "#DOCKER_REGISTRY#/#NAMESPACE#"
  ## Environment information
  environment:
    ## Optionally provide a name for the Solution Designer installation
    name: "designer"
  ## Hostnames of the sub components, for optionally overriding the default endpoints:
  endpoints:
    designer:
      host: "k5-designer.#DOMAIN#"
    gitIntegrationController:
      host: "k5-git-integration-controller.#DOMAIN#"
    solutionController:
      host: "k5-solution-controller.#DOMAIN#"
    codeGenerationProvider:
      host: "k5-code-generation-provider.#DOMAIN#"
    cliProvider:
      host: "k5-cli-provider.#DOMAIN#"
    queryService:
      host: "k5-query.#DOMAIN#"
    localMarketplaceController:
      host: "k5-local-marketplace-controller.#DOMAIN#"
    s3Storage:
      host: "k5-s3-storage.#DOMAIN#"

Example call of ssob_install.sh:

cd deployments
chmod +x ssob_install.sh
./ssob_install.sh \
  --accept_license \
  --external_address_image_registry=image-registry.apps.{your.cluster.domain} \
  --internal_address_image_registry=image-registry.openshift-image-registry.svc:5000 \
  --cpd_namespace=zen \
  --host_domain=apps.{your.cluster.domain} \
  --helm-tls-ca-cert=/root/.helm/ca.pem \
  --helm-tls-cert=/root/.helm/cert.pem \
  --helm-tls-key=/root/.helm/key.pem
Attention: The certificates in the truststore should always contain the complete certificate chain. Certificates for services that are called with a cluster-based URL (e.g. myservice.namespace.svc.cluster.local) may be omitted from truststores if the services use certificates issued by the internal cluster certificate authority.
Note: If the above command fails, check the logs for more detailed information about the error cause. You can find the logs at /tmp/wdp.install.log.

Step 3: Configuration of the installation

During the plain installation in Step 2, only minimal settings (e.g. the namespace in which CPD is installed) had to be provided. To use Solution Designer and Solution Hub and to enable secure communication, additional configuration data must be provided in this step via the k5-configurator API. Using this API, you specify the environment-specific values for this installation.

The k5-configurator API provides REST services for reading and updating configurations for Solution Hub and Solution Designer. The k5-configurator API is part of the Solution Hub installation and its Helm chart.

Note: For using the k5-configurator a valid OpenShift token is needed. An OpenShift token can be retrieved by logging in to the OpenShift WebConsole --> Copy login command. The required permissions depend on the API call used and can be found within this documentation.

The configurations can be done easily with the provided Swagger UI or any other tool for calling APIs (like curl or Postman).

Tip: The Swagger UI for the k5-configurator is accessible, for example, at https://k5-configurator.apps.openshift-cluster.mydomain.cloud. The exact URL can be found in the route named k5-configurator and can be retrieved by issuing

oc get route k5-configurator -n <namespace>

where <namespace> is the namespace in which Solution Hub is installed.

Configuration of IBM Financial Services Workbench

For a new installation, at least the following configuration must be provided:

  • ArgoCD: Configures the properties to access ArgoCD service

  • HelmRepo: Configures the properties to access the Helm repository

  • IAM: Configures the properties to access the Identity and Access Management system (IAM), respectively Keycloak

  • MasterKey: Configures the master key required by the REST API to encrypt some sensitive user data, such as git-tokens or api-keys.

  • MongoDB: Configures the connection to the Mongo database, which is used by the Solution Designer

  • S3Storage: Configures properties to access a S3Storage, which is used as a persistence layer for the k5-marketplace

  • Truststore: Set the truststore, which holds the certificates for external TLS/SSL handshake that should be trusted within IBM Financial Services Workbench.

Note: The line length of the certificates must comply with the standard for PEM messages: each line must contain exactly 64 printable characters, except the last line, which may contain 64 or fewer.
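
A certificate whose body lines violate this rule can be rewrapped, for example like this. This is a sketch assuming a single PEM block on stdin; it is not part of the installer.

```shell
#!/bin/sh
# Sketch: re-wrap the base64 body of a single PEM block to 64 characters
# per line, keeping the BEGIN/END marker lines untouched.
rewrap_pem() {
  awk '
    /^-----BEGIN / { print; next }
    /^-----END / {
      while (length(body) > 64) { print substr(body, 1, 64); body = substr(body, 65) }
      if (length(body) > 0) print body
      body = ""
      print; next
    }
    { body = body $0 }  # collect body lines, dropping the old line breaks
  '
}
# Usage: rewrap_pem < broken.crt > fixed.crt
```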
Attention: Even though the request parameters of the configuration API are optional, all values must be configured in the end.

For a detailed description of all values and in-depth explanation, please see configuring IBM Financial Services Workbench

Step 4: Validate the installation

To validate the results of the previous installation steps do the following:

  1. Check the deployment versions by running this command and make sure that the expected versions of Solution Designer and Solution Hub are deployed as listed in section upgrading:

    $ helm ls --tls --tiller-namespace zen \
    --tls-cert=/root/.helm/cert.pem \
    --tls-key=/root/.helm/key.pem \
    | grep release-ssob-solution
    
    NAME                            STATUS    CHART                              APP VERSION
    release-ssob-solution-designer  DEPLOYED  fsw-solution-designer-prod-3.22.5  3.22.5
    release-ssob-solution-hub       DEPLOYED  fsw-solution-hub-prod-3.1.5        3.1.5
  2. Check the deployments by running this command and compare your output to the output provided here

    $ oc get deployments -n zen
     NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
     k5-application-manager            2/2     2            2         1d
     k5-audit-common-service           3/3     3            3         1d
     k5-cli-provider                   3/3     3            3         1d
     k5-code-generation-provider       3/3     3            3         1d
     k5-configuration-management       2/2     2            2         1d
     k5-configurator                   2/2     2            2         1d
     k5-designer-backend               3/3     3            3         1d
     k5-designer-frontend              3/3     3            3         1d
     k5-external-secrets               1/1     1            1         1d
     k5-git-integration-controller     3/3     3            3         1d
     k5-helm-repository-controller     2/2     2            2         1d
     k5-hub-backend                    3/3     3            3         1d
     k5-hub-frontend                   3/3     3            3         1d
     k5-iam-operator                   1/1     1            1         1d
     k5-local-marketplace-controller   3/3     3            3         1d
     k5-mvn-dependencies               2/2     2            2         1d
     k5-pipeline-manager               5/5     5            5         1d
     k5-plantuml-server                2/2     2            2         1d
     k5-query                          2/2     2            2         1d
     k5-rollout-config                 1/1     1            1         1d
     k5-s3-storage                     1/1     1            1         1d
     k5-secret-manager                 2/2     2            2         1d
     k5-solution-controller            2/2     2            2         1d
     k5-topic-management               3/3     3            3         1d
     k5projectoperator                 1/1     1            1         1d
  3. Verify the created CRDs

Execute the following command to verify that all required CRDs have been installed:

oc get crd -o name | grep k5

The following CRDs must be present in the cluster after the installation:

customresourcedefinition.apiextensions.k8s.io/k5clients.k5.project.operator
customresourcedefinition.apiextensions.k8s.io/k5dashboards.k5.project.operator
customresourcedefinition.apiextensions.k8s.io/k5externalsecrets.k5.config
customresourcedefinition.apiextensions.k8s.io/k5pipelinemanagers.k5.project.operator
customresourcedefinition.apiextensions.k8s.io/k5projects.k5.project.operator
customresourcedefinition.apiextensions.k8s.io/k5realms.k5.project.operator
customresourcedefinition.apiextensions.k8s.io/k5topics.k5.project.operator
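
The comparison against this list can be scripted, for example as follows. This is a sketch; pipe the output of oc get crd -o name into the function.

```shell
#!/bin/sh
# Sketch: read installed CRD names from stdin and report any of the
# required IBM Financial Services Workbench CRDs that are missing.
required_crds="k5clients.k5.project.operator
k5dashboards.k5.project.operator
k5externalsecrets.k5.config
k5pipelinemanagers.k5.project.operator
k5projects.k5.project.operator
k5realms.k5.project.operator
k5topics.k5.project.operator"

check_crds() {
  installed=$(cat)  # capture stdin once so every CRD can be checked
  status=0
  for crd in $required_crds; do
    case "$installed" in
      *"$crd"*) ;;
      *) echo "missing: $crd"; status=1 ;;
    esac
  done
  return $status
}
# Usage: oc get crd -o name | check_crds
```

The function returns a non-zero exit code if any CRD is missing, so it can also be used in automation.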

If CRDs are missing, they can also be applied after the installation. The definitions can be found in the installation package folder ./resources/crd/ and created if needed as follows:

oc apply -f ./resources/crd/crd-k5client.yaml
oc apply -f ./resources/crd/crd-k5dashboard.yaml
oc apply -f ./resources/crd/crd-k5externalsecrets.yaml
oc apply -f ./resources/crd/crd-k5pipelinemanager.yaml
oc apply -f ./resources/crd/crd-k5project.yaml
oc apply -f ./resources/crd/crd-k5realm.yaml
oc apply -f ./resources/crd/crd-k5topic.yaml
  4. Open the Solution Designer Web application in your browser

    • Retrieve the URL to point your browser to:

    $ oc -n zen get routes | grep k5-designer-frontend-service
    k5-designer-frontend-route designer.apps.{your.cluster.domain} / k5-designer-frontend-service https reencrypt/Redirect None
    • Point your browser to the retrieved URL, e.g. https://designer.apps.{your.cluster.domain}/. As a result you should see a login form.

Troubleshooting

Docker daemon

For the docker commands above to work, you need a running docker daemon on the host where you execute them (your bastion host). If you get an error message like the following:

Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Start docker like this (system dependent):

systemctl start docker
systemctl enable docker

Docker and self-signed certificates

If your container registry uses self-signed certificates, you might see something like the following:

Extracting modules
checking structure
structure check skip for now
docker login https://image-registry.apps.{your.cluster.domain}
Error response from daemon: Get https://image-registry.apps.{your.cluster.domain}/v1/users/: x509: certificate signed by unknown authority

To fix the issues with self-signed certificates, please follow the instructions on https://docs.docker.com/registry/insecure/ and perform the following steps:

  • retrieve the CA Cert from the image registry

  • add it to the local trust store

  • reload the trusted certificates

$ oc -n openshift-ingress get secret router-certs-default -o jsonpath='{.data.tls\.crt}' | base64 -d  > /etc/pki/ca-trust/source/anchors/image-registry.apps.{your.cluster.domain}.crt
$ update-ca-trust
$ systemctl restart docker
$ docker login -u admin -p {password} https://image-registry.apps.{your.cluster.domain}

k5-s3-storage with nfs persistent volume fails due to insufficient permissions

If k5-s3-storage fails with the following log

ERROR Unable to initialize backend: Insufficient permissions to access path
Please ensure the specified path can be accessed

and the storage class is nfs

$ oc get pvc k5-s3-storage -o yaml | grep "storageClassName"
storageClassName: nfs

the permissions must be repaired.

To do so, please follow these steps:

  1. Get the user ID of the k5-s3-storage pod

    oc get pod  | grep k5-s3-storage
    k5-s3-storage-775db6d5fc-sjlqn                                  1/1     Running     0          6h6m
    oc get pod k5-s3-storage-775db6d5fc-sjlqn -o yaml | grep runAsUser
        runAsUser: 1000320900
  2. Login to your nfs mount

  3. Check the current permissions

    ls -la *
    total 336
    drwxr-xr-x. 2 1001 root   4096 17. Jun 19:27 .
    drwxrwxrwx. 4 root root     53 23. Jun 13:46 ..
    -rw-r--r--. 1 1001 root    462 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-meta.yml
    -rw-r--r--. 1 1001 root   1269 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-preview.json
    -rw-r--r--. 1 1001 root 144790 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb.zip
    -rw-r--r--. 1 1001 root    960 17. Jun 19:27 index.yml
  4. Change owner

    chown -R 1000320900:1000320900 .
  5. Check result

    ls -la *
    total 336
    drwxr-xr-x. 2 1000320900 1000320900   4096 17. Jun 19:27 .
    drwxrwxrwx. 4 1000320900 1000320900     53 23. Jun 13:48 ..
    -rw-r--r--. 1 1000320900 1000320900    462 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-meta.yml
    -rw-r--r--. 1 1000320900 1000320900   1269 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-preview.json
    -rw-r--r--. 1 1000320900 1000320900 144790 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb.zip
    -rw-r--r--. 1 1000320900 1000320900    960 17. Jun 19:27 index.yml
  6. Change permission

    chmod g+rwx . -R
  7. Check result

    ls -la *
    total 336
    drwxrwxr-x. 2 1000320900 1000320900   4096 17. Jun 19:27 .
    drwxrwxrwx. 4 1000320900 1000320900     53 23. Jun 13:48 ..
    -rw-rwxr--. 1 1000320900 1000320900    462 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-meta.yml
    -rw-rwxr--. 1 1000320900 1000320900   1269 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-preview.json
    -rw-rwxr--. 1 1000320900 1000320900 144790 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb.zip
    -rw-rwxr--. 1 1000320900 1000320900    960 17. Jun 19:27 index.yml
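
Steps 4 and 6 above can be combined into a small helper. This is a sketch; the mount path and owner are placeholders you must replace with your NFS mount and the runAsUser of your k5-s3-storage pod.

```shell
#!/bin/sh
# Sketch: repair ownership and group permissions on the NFS export used
# by k5-s3-storage. Run it on the NFS host against the exported path.
fix_s3_storage_perms() {
  mount_dir=$1  # e.g. /exports/k5-s3-storage (placeholder)
  owner=$2      # e.g. 1000320900:1000320900 (the pod's runAsUser)
  chown -R "$owner" "$mount_dir"
  chmod -R g+rwx "$mount_dir"
}
# Usage: fix_s3_storage_perms /exports/k5-s3-storage 1000320900:1000320900
```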

fss clone is not working on macOS (base64 issue)

If the fss clone command fails on macOS like this:

fss clone -s MYSOLUTION -p "my-git"
========= Cloning Solution to filesystem =================================================
--------- > Authenticating ---------------------------------------------------------------
--------- > Cloning Solution from Solution Git Repository --------------------------------
Cloning into '/dev/MYSOLUTION'...
fatal: unable to access 'https://my-git/MYSOLUTION.git/': error setting certificate verify locations:
CAfile: /Users/MyUser/.knowis/fss-cli/default/designtime.ca.crt
CApath: /Users/MyUser/.knowis/fss-cli/default

[ERROR] Cloning failed, removing directory: /dev/MYSOLUTION

Then verify that the file /Users/MyUser/.knowis/fss-cli/default/designtime.ca.crt contains properly base64-encoded values only. To do so, open the file and check that no line between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- exceeds 64 characters. For a manual, local fix you can re-break the lines after 64 characters and verify that this resolves the issue.

To fix it permanently, the value of global.truststore.trustMap.identity must be adjusted in the same way. Afterwards, reset the fss setup by downloading designtime.config.json and executing fss setup --file ./cli-config.json.
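
The overlong lines can be re-broken with fold, for example as follows. This is a sketch; run it against a copy of the file, and note that fold only splits lines longer than the limit, while the PEM marker lines (shorter than 64 characters) pass through unchanged.

```shell
#!/bin/sh
# Sketch: fold breaks overlong base64 lines after 64 characters; lines
# already shorter than 64 characters are left unchanged.
# On the real file:  fold -w 64 designtime.ca.crt > designtime.ca.crt.fixed
long_line=$(head -c 100 /dev/zero | tr '\0' 'A')   # stand-in for an overlong base64 line
printf '%s\n' "$long_line" | fold -w 64
```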

How to analyze JWT in case of unauthorized responses

If a request is rejected and the response contains invalid_token, it is helpful to decode the JWT itself, for example with jwt.io. This makes it easier to see whether the JWT can be decoded at all, what content it has, and what might cause the unexpected rejection.
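
If you prefer the command line over jwt.io, the payload can be decoded locally. This is a sketch; JWTs use unpadded base64url, so the padding must be restored before decoding.

```shell
#!/bin/sh
# Sketch: decode the payload (second segment) of a JWT so claims such
# as iss can be inspected.
decode_jwt_payload() {
  # convert base64url to base64 and restore the stripped padding
  payload=$(printf '%s' "$1" | cut -d '.' -f 2 | tr '_-' '/+')
  case $(( ${#payload} % 4 )) in
    2) payload="${payload}==" ;;
    3) payload="${payload}=" ;;
  esac
  printf '%s' "$payload" | base64 -d
}
# Usage: decode_jwt_payload "$TOKEN"   # then inspect the iss claim
```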

Understanding the reason for The iss claim is not valid

If a request is rejected and the response contains invalid_token in combination with The iss claim is not valid, then the JWT was created by an OIDC provider using a different issuer URL than the one configured.

It is helpful to decode the JWT itself, for example with jwt.io, and check the value of iss. It must match the value configured as described in configuring OIDC provider for solutions and configuring deployment targets.