Installation Process
Detailed information about how to install IBM Financial Services Workbench.
This guide walks you through the installation process of IBM Financial Services Workbench.
After the installation has finished successfully, you will have:
- a running Solution Designer
- a running Solution Hub
Roles in the installation process
Red Hat OpenShift cluster administrator
The cluster administrator is responsible for:
- Creating projects (namespaces)
- Creating roles, role bindings and service accounts
- Installing Custom Resource Definitions (CRDs)
- Configuring the OpenShift image registry
Project administrator
The project administrator is responsible for:
- Installing Solution Hub and Solution Designer
- Installing Envoys on prepared namespaces
- Providing the necessary configuration data
- Pushing images to the image registry
- Executing Helm after the installation
Before you begin
In order to install IBM Financial Services Workbench, the following requirements must be met on the machine from which the installer is executed:
- You are logged in to the OpenShift cluster as a user with sufficient rights for the task at hand:
  $ oc login
- A local Docker daemon is installed and running:
  $ systemctl start docker
- The tar utility and bash are available, and the installer package has been downloaded and unpacked:
  $ tar xvzf ssob-2.7.0-ibm-openshift-4.5-cpd-installations.tgz
- Your current working directory is set to the ssob-install directory of the unpacked installer package:
  $ cd ssob-install
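The prerequisites above can be checked with a short pre-flight script before starting. This is an illustrative sketch only; the warning messages are not produced by the installer:

```shell
# Pre-flight check for the prerequisites listed above (illustrative sketch)
oc whoami >/dev/null 2>&1 \
  || echo "WARN: not logged in to the OpenShift cluster -- run 'oc login' first"
docker info >/dev/null 2>&1 \
  || echo "WARN: Docker daemon not reachable -- try 'systemctl start docker'"
command -v tar >/dev/null 2>&1 || echo "WARN: tar utility is missing"
command -v bash >/dev/null 2>&1 || echo "WARN: bash is missing"
```

If the script prints no warnings, all prerequisites are in place.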
Step 1: Setup cluster-wide resources
Introduction
Cluster-wide resources are set up by executing the script ssob_admin_setup.sh, which is part of the installer. It must be executed as a user with cluster-admin permissions in the OpenShift cluster (role: Red Hat OpenShift cluster administrator). The script can be configured via command line arguments, which are described below.
Description
The ssob_admin_setup.sh script will create and configure the following:
- Custom Resource Definitions:
  - solutions.sol.rt.cp.knowis.de
  - envoys.env.rt.cp.knowis.de
  - k5clients.k5.project.operator
  - k5dashboards.k5.project.operator
  - k5pipelinemanagers.k5.project.operator
  - k5projects.k5.project.operator
  - k5realms.k5.project.operator
  - k5topics.k5.project.operator
- Instances of the Custom Resource ConsoleYAMLSample
- Optionally: a route for the OpenShift docker registry
- Roles:
  - k5-aggregate-edit (provides edit access to the Custom Resource Definitions created above and aggregates it to the cluster roles admin and edit)
  - k5-aggregate-view (provides view access to the Custom Resource Definitions created above and aggregates it to the cluster role view)
  - cpd-admin-additional-role (provides admin access to routes and pods/portforward)
  - cpd-webhook-role (provides admin access to webhooks integration)
- ServiceAccounts:
  - k5-operator-sa
- Role bindings (associate permissions with service accounts in the namespace):
  - Service account k5-operator-sa: cpd-admin-role, cpd-viewer-role, edit
  - Service account cpd-viewer-sa: view
  - cpd-admin-sa-rolebinding: cpd-admin-additional-role
  - cpd-admin-sa-webhook-rolebinding
Parameters
The ssob_admin_setup.sh script has the following parameters:

Variable | Description | Example | Default |
---|---|---|---|
--accept_license | Set this flag to confirm that the license agreement has been read and accepted (license files can be found in the installation package folder ./ssob-install/deployments/LICENSES) | - | - |
--cpd_namespace | Name of the CPD project in the OpenShift cluster | zen | - |
--external_address_image_registry | External address of the internal docker registry for creating the required route (without namespace); if this address is not set, the route must be created manually later | image-registry.apps.{your.cluster.domain} | - |
--image_registry_namespace | Namespace of the internal docker registry | openshift-image-registry | openshift-image-registry |
--image_registry_service | Service of the internal docker registry | image-registry | image-registry |

Note: The route is only created if --external_address_image_registry is set. An external route to the cluster image registry is needed in order to upload the container images that IBM Financial Services Workbench is comprised of to the cluster registry. The existence of those images in the cluster registry is a prerequisite for the following step. To check whether the internal container registry is already exposed via a route, use the following command:
$ oc -n openshift-image-registry get routes
Example
$ cd deployments
$ chmod +x ssob_admin_setup.sh
$ ./ssob_admin_setup.sh \
  --accept_license \
  --cpd_namespace=zen \
  --external_address_image_registry=image-registry.apps.{your.cluster.domain}
The installation log can be found at /tmp/wdp.install.log.
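Whether step 1 succeeded can be verified by checking that the Custom Resource Definitions listed above now exist. A sketch (requires oc and a cluster login; CRD names are taken from the list in the Description section):

```shell
# Check that the CRDs created by ssob_admin_setup.sh are present
for crd in solutions.sol.rt.cp.knowis.de envoys.env.rt.cp.knowis.de \
           k5clients.k5.project.operator k5dashboards.k5.project.operator \
           k5pipelinemanagers.k5.project.operator k5projects.k5.project.operator \
           k5realms.k5.project.operator k5topics.k5.project.operator; do
  if oc get crd "$crd" >/dev/null 2>&1; then
    echo "ok:      $crd"
  else
    echo "MISSING: $crd"
  fi
done
```

Every line should report "ok"; a "MISSING" line indicates that step 1 did not complete.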
Step 2: Installation of IBM Financial Services Workbench
Introduction
The installation of IBM Financial Services Workbench is done by executing the script ssob_install.sh. It must be executed as a user with OpenShift project admin permissions in the chosen OpenShift namespace, e.g. zen. The installation script requires a set of command line arguments.
Description
The installer will configure and install the following components via Helm charts into the already existing CPD namespace:
- solution-designer
- solution-hub
Without any further configuration, the components will reuse the existing CPD service accounts and the ones created in step 1.
Each package contains all required docker images and Helm charts. During the installation, all provided docker images are pushed into the configured registry. For convenience, this should be the internal OpenShift registry.
After the installation of each Helm chart, a number of Helm tests are executed. The installation will fail if one or more Helm tests fail. By default, the Helm tests are enabled and the Helm test pods are removed automatically; both steps can be skipped. If Helm tests fail, the pod description and, if available, the pod logs are printed.
The installer is parametrized and adapted to your particular environment via CLI parameters and two value files.
There are two different types of parameters:
- Those that you will always have to adapt to your environment. They are given as CLI parameters to the ssob_install.sh script.
- Those that are likely to be used in their default form. These are configured via the YAML files solution-hub-values.yaml and solution-designer-values.yaml. You can find these files in the ssob-install/deployments/configs directory of the unpacked installation package.
Customize the configuration files
Download the files from the server host to your local computer via scp, for example:
$ scp user@remote_host:remote_file ~/remote_directory
$ scp root@11.11.11.11:/root/install/ssob_2.7/ssob-install/deployments/configs/install_init_data.yaml ~/install
Edit the files locally, then upload each edited file back to the server host via scp, for example:
$ scp local_file user@remote_host:remote_file
$ scp install_init_data.yaml root@11.11.11.11:/root/install/ssob_2.7/ssob-install/deployments/configs/install_init_data.yaml
Backup the configuration files
Backup the following files:
- install.yaml
- solution-designer-values.yaml
- solution-hub-values.yaml
For example:
$ cd ~/install/ssob_2.7/ssob-install/deployments
$ mv configs configs.bak
$ mkdir configs
$ cp ./configs.bak/* configs
Parameters
The ssob_install.sh script has the following parameters:
- See the previous section on how to find out the external URL of the cluster image registry.
- Run the following to check for the image registry service (look for the service running on port 5000):
  $ oc get services -n openshift-image-registry
- To obtain the TLS certificate data (created by the IBM Cloud Pak for Data installation), run the following commands:
  $ oc get secret helm-secret -n zen -o jsonpath='{.data.ca\.cert\.pem}' | base64 -d > /root/install/ssob_2.7/zenhelm/ca.cert.pem
  $ oc get secret helm-secret -n zen -o jsonpath='{.data.helm\.cert\.pem}' | base64 -d > /root/install/ssob_2.7/zenhelm/helm.cert.pem
  $ oc get secret helm-secret -n zen -o jsonpath='{.data.helm\.key\.pem}' | base64 -d > /root/install/ssob_2.7/zenhelm/helm.key.pem
The value for external_address_image_registry can be obtained by viewing the route of the service image-registry in the namespace openshift-image-registry:
$ oc -n "openshift-image-registry" get route image-registry
The value for internal_address_image_registry is composed like {image-registry service}.openshift-image-registry.svc:{port}. You can find the port by looking for the service named image-registry in the list of all services in the openshift-image-registry namespace:
$ oc -n "openshift-image-registry" get services
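The composition of internal_address_image_registry described above can also be scripted. A sketch, under the assumption that the registry service exposes its registry port as the first port entry:

```shell
# Derive internal_address_image_registry = {service}.{namespace}.svc:{port}
SVC=image-registry
NS=openshift-image-registry
# Query the service port from the cluster (falls back to 5000 if 'oc' is unavailable)
PORT=$(oc -n "$NS" get service "$SVC" -o jsonpath='{.spec.ports[0].port}' 2>/dev/null)
PORT=${PORT:-5000}
echo "internal_address_image_registry=${SVC}.${NS}.svc:${PORT}"
```

With the defaults above this yields image-registry.openshift-image-registry.svc:5000, matching the example in the parameter table.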
Variable | Description | Example |
---|---|---|
--accept_license | Set this flag to confirm that the license agreement has been read and accepted (license files can be found in the installation package folder ./ssob-install/deployments/LICENSES) | |
--external_address_image_registry | Docker registry for pushing the images into the used registry (normally the externally available address, without namespace) | image-registry.apps.openshift-cluster.{your.domain.cloud} |
--internal_address_image_registry | Docker registry for pulling images in OpenShift (usually the internal docker registry) | image-registry.openshift-image-registry.svc:5000 |
--cpd_namespace | Name of the CPD project in the OpenShift cluster | zen |
--host_domain | Hostname of the OpenShift cluster | apps.openshift-cluster.{your.domain.cloud} |
--only_solution_designer | Optional: install only a new solution-designer (default is to install both components at once) | |
--only_solution_hub | Optional: install only a new solution-hub (default is to install both components at once) | |
--skip_docker_login | Optional: skip the docker login (can be used if you are already logged in) | |
--start_docker_daemon | Optional: start the docker daemon via the script; default value is true | true |
--skip_helm_tls | Optional: disable TLS for Helm requests; default value is true | true |
--helm-tls-ca-cert | Optional: path to the TLS CA certificate file for Helm commands (default "$HELM_HOME/ca.pem") | /path/to/my/ca.cert.pem |
--helm-tls-cert | Optional: path to the TLS certificate file for Helm commands (default "$HELM_HOME/cert.pem") | /path/to/my/cert.pem |
--helm-tls-key | Optional: path to the TLS key file for Helm commands (default "$HELM_HOME/key.pem") | /path/to/my/helm.key.pem |
--skip_helm_test | Optional: skip Helm test execution | |
--skip_helm_test_cleanup | Optional: skip clean-up of the Helm test pods after execution of the Helm tests | |
YAML files for the installation
During the installation, the placeholders enclosed in # characters in the YAML files are replaced with the appropriate values, e.g. registry: "#DOCKER_REGISTRY#/#NAMESPACE#" will be resolved to registry: "image-registry.openshift-image-registry.svc:5000/zen".
Optional: YAML file solution-hub-values.yaml
The solution-hub-values.yaml file is located in the directory ../ssob-install/deployments/configs and only needs to be adjusted if the default settings do not fit.
Variable | Description | Example |
---|---|---|
global.hosts.domain | Domain for the solution hub (without protocol) | #DOMAIN# (default value) |
global.image.registry | Image repository for the solution hub | #DOCKER_REGISTRY#/#NAMESPACE# (default value) |
global.hosts.configurationManagement.name | Hostname for the Configuration Management (without protocol) | k5-configuration.#DOMAIN# (default value) |
global.hosts.k5configurator.name | Hostname for the k5-configurator (without protocol) | k5-configurator.#DOMAIN# (default value) |
global.hosts.hub.name | Hostname for the solution hub (without protocol) | k5-hub.#DOMAIN# (default value) |
global.hosts.k5query.name | Hostname for the k5-query component (without protocol) | k5-query.#DOMAIN# (default value) |
global.hosts.k5plantumlserver.name | Hostname for the PlantUML server (without protocol) | k5-plantuml-server.#DOMAIN# (default value) |
global.networkPolicy.enabled | Enable/disable the network policy | true (default value) |
k5projectoperator.watch.namespaces.blacklist | List of namespaces that will never be watched (exact matches) | |
k5projectoperator.watch.namespaces.whitelist | List of namespaces that will be watched (starts-with matches); if empty, all namespaces with permissions are considered | |
k5-hub-backend.endpoints.openshiftConsole.url | URL for the OpenShift console | https://console-openshift-console.#DOMAIN# (default value) |
Format Example solution-hub-values.yaml
## Global (potentially shared) settings
global:
  hosts:
    domain: "#DOMAIN#"
    configurationManagement:
      name: "k5-configuration.#DOMAIN#"
    k5configurator:
      name: "k5-configurator.#DOMAIN#"
    hub:
      name: "k5-hub.#DOMAIN#"
    k5query:
      name: "k5-query.#DOMAIN#"
    k5plantumlserver:
      name: "k5-plantuml-server.#DOMAIN#"
  image:
    registry: "#DOCKER_REGISTRY#/#NAMESPACE#"
## Operator namespace settings
k5projectoperator:
  watch:
    namespaces:
      blacklist: []
      whitelist: []
k5-hub-backend:
  endpoints:
    openshiftConsole:
      url: "https://console-openshift-console.#DOMAIN#"
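The placeholder resolution described above is performed by the installer itself. As an illustration only (file name and values are examples, not installer internals), the effect corresponds to a substitution like this:

```shell
# Illustrative only: the installer resolves #...# placeholders roughly like this
DOMAIN=apps.openshift-cluster.example.cloud
DOCKER_REGISTRY=image-registry.openshift-image-registry.svc:5000
NAMESPACE=zen
# A sample line as found in solution-hub-values.yaml
printf 'registry: "#DOCKER_REGISTRY#/#NAMESPACE#"\n' > sample-values.yaml
sed -e "s|#DOMAIN#|${DOMAIN}|g" \
    -e "s|#DOCKER_REGISTRY#|${DOCKER_REGISTRY}|g" \
    -e "s|#NAMESPACE#|${NAMESPACE}|g" sample-values.yaml
# prints: registry: "image-registry.openshift-image-registry.svc:5000/zen"
```

You do not need to run such a substitution yourself; leaving the placeholders in place is the intended usage.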
Optional: YAML file solution-designer-values.yaml
The solution-designer-values.yaml file is located in the directory ../ssob-install/deployments/configs and only needs to be adjusted if the default settings do not fit.
Variable | Description | Example |
---|---|---|
global.domain | Domain for the solution designer (without protocol) | #DOMAIN# (default value) |
global.hosts.domain | Domain for the solution designer (without protocol) | #DOMAIN# (default value) |
global.image.registry | Image repository for the solution designer | #DOCKER_REGISTRY#/#NAMESPACE# (default value) |
global.environment.name | Name for the solution designer installation | designer (default value) |
global.endpoints.designer.host | Hostname for the solution designer (without protocol) | k5-designer.#DOMAIN# (default value) |
global.endpoints.gitIntegrationController.host | Hostname for the git integration controller (without protocol) | k5-git-integration-controller.#DOMAIN# (default value) |
global.endpoints.solutionController.host | Hostname for the solution controller (without protocol) | k5-solution-controller.#DOMAIN# (default value) |
global.endpoints.codeGenerationProvider.host | Hostname for the code generation provider (without protocol) | k5-code-generation-provider.#DOMAIN# (default value) |
global.endpoints.cliProvider.host | Hostname for the CLI provider (without protocol) | k5-cli-provider.#DOMAIN# (default value) |
global.endpoints.queryService.host | Hostname for the query service (without protocol) | k5-query.#DOMAIN# (default value) |
global.endpoints.s3Storage.host | Hostname for the S3 storage (without protocol) | k5-s3-storage.#DOMAIN# (default value) |
global.endpoints.localMarketplaceController.host | Hostname for the local marketplace controller (without protocol) | k5-local-marketplace-controller.#DOMAIN# (default value) |
Format Example solution-designer-values.yaml
## Global (potentially shared) settings
global:
  domain: "#DOMAIN#"
  hosts:
    domain: "#DOMAIN#"
  image:
    registry: "#DOCKER_REGISTRY#/#NAMESPACE#"
  ## Environment information
  environment:
    ## Optionally provide a name for the Solution Designer installation
    name: "designer"
  ## Hostnames of the sub components, for optionally overriding the default endpoints:
  endpoints:
    designer:
      host: "k5-designer.#DOMAIN#"
    gitIntegrationController:
      host: "k5-git-integration-controller.#DOMAIN#"
    solutionController:
      host: "k5-solution-controller.#DOMAIN#"
    codeGenerationProvider:
      host: "k5-code-generation-provider.#DOMAIN#"
    cliProvider:
      host: "k5-cli-provider.#DOMAIN#"
    queryService:
      host: "k5-query.#DOMAIN#"
    localMarketplaceController:
      host: "k5-local-marketplace-controller.#DOMAIN#"
    s3Storage:
      host: "k5-s3-storage.#DOMAIN#"
Example call of ssob_install.sh
$ cd deployments
$ chmod +x ssob_install.sh
$ ./ssob_install.sh \
  --accept_license \
  --external_address_image_registry=image-registry.apps.{your.cluster.domain} \
  --internal_address_image_registry=image-registry.openshift-image-registry.svc:5000 \
  --cpd_namespace=zen \
  --identity_provider_host=https://keycloak.apps.{your.cluster.domain} \
  --host_domain=apps.{your.cluster.domain} \
  --helm-tls-ca-cert=/root/.helm/ca.pem \
  --helm-tls-cert=/root/.helm/cert.pem \
  --helm-tls-key=/root/.helm/key.pem
The installation log can be found at /tmp/wdp.install.log.
Step 3: Configuration of the installation
At the time of the pure installation in step 2, only minimal settings (e.g. the namespace in which CPD is installed) have to be made. To use the Solution Designer and Solution Hub and to enable secure communication, additional configuration data must be provided in this step via the k5-configurator API. Using this API, you must specify the environment-specific values for this installation.
The k5-configurator API provides REST services for reading and updating configurations for the Solution Hub and the Solution Designer. The k5-configurator API is part of the Solution Hub installation and its Helm chart.
An OpenShift login token is required to call the API; it can be obtained via OpenShift WebConsole --> Copy login command. The required permissions depend on the API call used and can be found within this documentation. The API can be called with any REST client (e.g. curl, postman). The k5-configurator is accessible for example at https://k5-configurator.apps.openshift-cluster.mydomain.cloud. The exact URL can be found within the route named k5-configurator. It can easily be retrieved by issuing oc get route k5-configurator -n <namespace>, where <namespace> is the namespace in which the Solution Hub is installed.
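Putting the above together, the base URL and a bearer token for calling the k5-configurator API can be retrieved as follows. This is a sketch: the namespace zen is an example, and the concrete endpoint paths are documented elsewhere and deliberately not shown here:

```shell
# Retrieve the k5-configurator route host and an OpenShift login token
NAMESPACE=zen   # example: namespace where the Solution Hub is installed
HOST=$(oc get route k5-configurator -n "$NAMESPACE" -o jsonpath='{.spec.host}' 2>/dev/null)
TOKEN=$(oc whoami -t 2>/dev/null)
echo "k5-configurator base URL: https://${HOST}"
# A REST call would then look like (replace <documented-endpoint> with a path
# from the product documentation):
# curl -H "Authorization: Bearer ${TOKEN}" "https://${HOST}/<documented-endpoint>"
```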
Configuration of Financial Services Workbench
For a new installation, at least the following configuration must be provided:
- Iam: Configures the properties to access the Identity and Access Management system (IAM), i.e. Keycloak
- MasterKey: Configures the master key required by the REST API to encrypt sensitive user data, such as git tokens or API keys
- MongoDb: Configures the connection to the Mongo database, which is used by the Solution Designer
- S3Storage: Configures the properties to access an S3 storage, which is used as a persistence layer for the k5-marketplace
- Truststore: Sets the truststore, which holds the certificates for external TLS/SSL handshakes that should be trusted within IBM Financial Services Workbench. Note: The line length of the certificates must comply with the standard for PEM messages: each line must contain exactly 64 printable characters, except the last line, which may contain 64 or fewer.
For a detailed description of all values and an in-depth explanation, please see Product Configuration.
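If a certificate violates the 64-character PEM rule above, re-emitting it with openssl restores the standard wrapping. A sketch using a throwaway self-signed certificate for demonstration (assumes openssl is available; file names are examples):

```shell
# Create a throwaway certificate for demonstration purposes
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
    -subj "/CN=pem-demo" -days 1 2>/dev/null
# Simulate a mis-wrapped PEM body (lines longer than 64 characters)
{ head -1 demo.crt
  sed '1d;$d' demo.crt | tr -d '\n' \
    | awk '{ while (length($0) > 76) { print substr($0,1,76); $0 = substr($0,77) }
             if (length($0) > 0) print $0 }'
  tail -1 demo.crt
} > broken.crt
# openssl re-emits the certificate with the standard 64-character line wrapping
openssl x509 -in broken.crt -out fixed.crt
```

For your real truststore entries you would only run the last command against the offending certificate file.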
Step 4: Validate the installation
To validate the results of the previous installation steps, do the following:
- Check the deployment versions by running this command and make sure that the expected versions of Solution Designer and Solution Hub are deployed as listed in section Deployment Versions:
$ helm ls --tls --tiller-namespace zen \
    --tls-cert=/root/.helm/cert.pem \
    --tls-key=/root/.helm/key.pem \
    | grep release-ssob-solution
NAME                            STATUS    CHART                      APP VERSION
release-ssob-solution-designer  DEPLOYED  k5-designer-bundle-3.7.11  3.7.11
release-ssob-solution-hub       DEPLOYED  ssob-solution-hub-2.5.5    2.5.5
- Check the deployments by running this command and compare your output to the output provided here:
$ oc get deployments -n zen
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
k5-mvn-dependencies            2/2     2            2           1d
k5-configurator                1/1     1            1           1d
k5-rollout-config              1/1     1            1           1d
k5-iam-operator                1/1     1            1           1d
k5-configuration-management    2/2     2            2           1d
k5-cli-provider                3/3     3            3           1d
k5-code-generation-provider    3/3     3            3           1d
k5-designer-backend            3/3     3            3           1d
k5-designer-frontend           3/3     3            3           1d
k5-git-integration-controller  3/3     3            3           1d
k5-hub-backend                 3/3     3            3           1d
k5-hub-frontend                3/3     3            3           1d
k5-plantuml-server             2/2     2            2           1d
k5-query                       2/2     2            2           1d
k5-s3-storage                  1/1     1            1           1d
k5-solution-controller         2/2     2            2           1d
k5projectoperator              1/1     1            1           1d
- Open the Solution Designer web application in your browser:
  - Retrieve the URL to point your browser to:
$ oc -n zen get routes | grep k5-designer-frontend-service
k5-designer-frontend-route   designer.apps.{your.cluster.domain}   /   k5-designer-frontend-service   https   reencrypt/Redirect   None
  - Point your browser to the retrieved URL, e.g. https://designer.apps.{your.cluster.domain}/. As a result, you should see a login form.
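The deployment check above can be automated. A sketch that flags any deployment whose READY count does not match its desired count (namespace zen as in the example output; requires oc and a cluster login):

```shell
# Flag deployments that are not fully ready
oc get deployments -n zen --no-headers 2>/dev/null | awk '
  { split($2, r, "/"); if (r[1] != r[2]) { print "not ready: " $1; bad = 1 } }
  END { if (!bad) print "all deployments ready" }'
```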
Troubleshooting
Docker daemon
For the above commands to work, you need a running docker daemon on the host you are running them on (your bastion host). Otherwise you may get an error message like the following:
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Start docker like this (system dependent):
$ systemctl start docker
$ systemctl enable docker
Docker and self-signed certificates
If your container registry uses self-signed certificates, you might see something like the following:
Extracting modules
checking structure
structure check skip for now
docker login https://image-registry.apps.{your.cluster.domain}
Error response from daemon: Get https://image-registry.apps.{your.cluster.domain}/v1/users/: x509: certificate signed by unknown authority
To fix issues with self-signed certificates, please follow the instructions on https://docs.docker.com/registry/insecure/ and perform the following steps:
- retrieve the CA certificate from the image registry
- add it to the local trust store
- reload the trusted certificates
$ oc -n openshift-ingress get secret router-certs-default -o jsonpath='{.data.tls\.crt}' | base64 -d > /etc/pki/ca-trust/source/anchors/image-registry.apps.{your.cluster.domain}.crt
$ update-ca-trust
$ systemctl restart docker
$ docker login -u admin -p {password} https://image-registry.{your.cluster.domain}
k5-s3-storage with NFS persistent volume fails due to insufficient permissions
If k5-s3-storage is failing with this log output:
ERROR Unable to initialize backend: Insufficient permissions to access path > Please ensure the specified path can be accessed
and the storage class is nfs:
> oc get pvc k5-s3-storage -o yaml | grep "storageClassName"
  storageClassName: nfs
the permissions must be repaired. To do so, please follow these steps:
1. Get the user id of the k5-s3-storage pod:
> oc get pod | grep k5-s3-storage
k5-s3-storage-775db6d5fc-sjlqn   1/1   Running   0   6h6m
> oc get pod k5-s3-storage-775db6d5fc-sjlqn -o yaml | grep runAsUser
  runAsUser: 1000320900
2. Log in to your NFS mount.
3. Check the current permissions:
> ls -la *
total 336
drwxr-xr-x. 2 1001 root   4096 17. Jun 19:27 .
drwxrwxrwx. 4 root root     53 23. Jun 13:46 ..
-rw-r--r--. 1 1001 root    462 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-meta.yml
-rw-r--r--. 1 1001 root   1269 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-preview.json
-rw-r--r--. 1 1001 root 144790 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb.zip
-rw-r--r--. 1 1001 root    960 17. Jun 19:27 index.yml
4. Change the owner:
> chown -R 1000320900:1000320900 .
5. Check the result:
> ls -la *
total 336
drwxr-xr-x. 2 1000320900 1000320900   4096 17. Jun 19:27 .
drwxrwxrwx. 4 1000320900 1000320900     53 23. Jun 13:48 ..
-rw-r--r--. 1 1000320900 1000320900    462 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-meta.yml
-rw-r--r--. 1 1000320900 1000320900   1269 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-preview.json
-rw-r--r--. 1 1000320900 1000320900 144790 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb.zip
-rw-r--r--. 1 1000320900 1000320900    960 17. Jun 19:27 index.yml
6. Change the permissions:
> chmod -R g+rwx .
7. Check the result:
> ls -la *
total 336
drwxrwxr-x. 2 1000320900 1000320900   4096 17. Jun 19:27 .
drwxrwxrwx. 4 1000320900 1000320900     53 23. Jun 13:48 ..
-rw-rwxr--. 1 1000320900 1000320900    462 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-meta.yml
-rw-rwxr--. 1 1000320900 1000320900   1269 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb-preview.json
-rw-rwxr--. 1 1000320900 1000320900 144790 17. Jun 19:22 c48a5afe-3d1b-4c18-ad61-2dd9c5d078eb.zip
-rw-rwxr--. 1 1000320900 1000320900    960 17. Jun 19:27 index.yml
fss clone is not working on macOS (base64 issue)
If the fss clone command is failing on macOS like this:
> fss clone -s MYSOLUTION -p "my-git"
========= Cloning Solution to filesystem =================================================
--------- > Authenticating ---------------------------------------------------------------
--------- > Cloning Solution from Solution Git Repository --------------------------------
Cloning into '/dev/MYSOLUTION'...
fatal: unable to access 'https://my-git/MYSOLUTION.git/': error setting certificate verify locations:
  CAfile: /Users/MyUser/.knowis/fss-cli/default/designtime.ca.crt
  CApath: /Users/MyUser/.knowis/fss-cli/default
[ERROR] Cloning failed, removing directory: /dev/MYSOLUTION
then please verify that the file /Users/MyUser/.knowis/fss-cli/default/designtime.ca.crt contains properly base64-encoded values only. To do so, open /Users/MyUser/.knowis/fss-cli/default/designtime.ca.crt and verify that all lines between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- do not exceed 64 characters.
For a manual and local fix, you can adjust the lines by breaking them after 64 characters, and verify that this solves the experienced issue.
To fix it generally, the value of global.truststore.trustMap.identity must be adjusted in a similar way. Afterwards, the setup of fss must be reset by downloading the designtime.config.json and executing fss setup --file /cli-config.json.
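The manual line-length check described above can be scripted. A sketch that prints every body line of the certificate file exceeding 64 characters (the path is taken from the error message above; adjust it to your user):

```shell
# Report PEM body lines longer than 64 characters
CRT="$HOME/.knowis/fss-cli/default/designtime.ca.crt"   # adjust to your environment
if [ -f "$CRT" ]; then
  awk '/-----BEGIN CERTIFICATE-----|-----END CERTIFICATE-----/ { next }
       length($0) > 64 { printf "line %d: %d characters\n", NR, length($0) }' "$CRT"
else
  echo "file not found: $CRT"
fi
```

No output (apart from a possible "file not found") means all body lines are within the 64-character limit.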
How to analyze a JWT in case of unauthorized responses
If a request is rejected and the response contains invalid_token, it is helpful to decode the JWT itself, for example using JWT.io. This makes it easier to see whether the JWT is decodable, what content it carries, and what might cause the unexpected rejections.
Understanding the reason for "The iss claim is not valid"
If a request is rejected and the response contains invalid_token in combination with The iss claim is not valid, then the JWT was created by an OIDC provider using a different issuer URL than the one configured. It is helpful to decode the JWT, for example using JWT.io, and check the value of iss. It must match the configured value as described in Configuring OIDC Provider for solutions and Run Time Configuration.