Upgrade Notes

The upgrade steps to move from version 1.0 to version 2.2.2 are described below.

High level overview

  • Prepare new OpenShift 4.3 Cluster
  • Install CPD
  • Decide whether to keep using your current GitLab instance or to move to a new GitLab installation (migration using non-GitLab Git servers is not supported)
  • If you switch to a new GitLab instance: back up and restore all your FSW-related repositories in the managed-solutions group to the new GitLab instance, using the same group managed-solutions as the target
  • Decide whether to keep your MongoDB instance or to move to a new instance
  • If you switch to a new MongoDB instance: back up and restore the relevant databases
  • Install IBM Financial Services Workbench 2.2 on the new cluster, providing the connection parameters for the new GitLab and MongoDB instances
  • Have all your users (developers and analysts) set up their GitLab access tokens in the Solution Designer
  • Have your developers check out the solutions they need to work on using the FSW CLI tools (fss)

Details

Install OpenShift 4.3

To set up OpenShift, please follow the instructions in the OpenShift documentation.

Install IBM Cloud Pak for Data

Detailed information on how to install the IBM Cloud Pak for Data control plane can be found here.

Designer Migration

To migrate all solutions from the Designer to the new OpenShift cluster, perform the following steps:

Prepare for Migration

Ensure that all developers who implemented something locally have pushed their changes to the Git repository using `fss push`.

Note: All changes to the implementation code will be lost if they are not pushed. Changes made in the Solution Designer MUST NOT be pushed - they are preserved automatically.

All developers should now delete the directories cloned with the fss CLI.
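
The two preparation steps above can be sketched as a short shell session. The clone path below is a placeholder, not a value from this document; set RUN=1 to actually execute the commands.

```shell
# Sketch of the pre-migration cleanup per developer workspace.
# SOLUTION_DIR is an assumed placeholder for a locally cloned solution.
SOLUTION_DIR="$HOME/workspace/my-solution"   # assumption: local clone path
if [ "${RUN:-0}" = "1" ]; then
  cd "${SOLUTION_DIR}"
  fss push                    # push pending implementation changes to Git
  cd ..
  rm -rf "${SOLUTION_DIR}"    # delete the directory cloned with the fss CLI
else
  echo "Dry run: would push and then remove ${SOLUTION_DIR}"
fi
```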

Migrate Git projects
  • Go to your existing GitLab
  • Export GitLab projects
  • Import GitLab projects to your new GitLab into group managed-solutions
  • Alternatively: use GitLab backup and restore functionality to move the data
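
The export/import steps above can also be driven through the GitLab REST API instead of the web UI. The hosts, token, project id and target path below are placeholders and assumptions, not values from this document; set RUN=1 to actually execute the calls.

```shell
# Hypothetical sketch of a project export/import via the GitLab API.
OLD_GITLAB="https://gitlab-old.example.com"   # assumption: old instance
NEW_GITLAB="https://gitlab-new.example.com"   # assumption: new instance
TOKEN="<YOUR_TOKEN>"                          # personal access token
PROJECT_ID="42"                               # assumption: project to move
EXPORT_URL="${OLD_GITLAB}/api/v4/projects/${PROJECT_ID}/export"
if [ "${RUN:-0}" = "1" ]; then
  # Trigger the export, then (once it has finished) download the archive
  curl -X POST -H "PRIVATE-TOKEN: ${TOKEN}" "${EXPORT_URL}"
  curl -H "PRIVATE-TOKEN: ${TOKEN}" "${EXPORT_URL}/download" -o project_export.tar.gz
  # Import the archive into the managed-solutions group on the new instance
  curl -X POST -H "PRIVATE-TOKEN: ${TOKEN}" \
    -F "file=@project_export.tar.gz" \
    -F "path=my-solution" \
    -F "namespace=managed-solutions" \
    "${NEW_GITLAB}/api/v4/projects/import"
else
  echo "Dry run: would export project ${PROJECT_ID} via ${EXPORT_URL}"
fi
```

Repeat for each repository in the managed-solutions group, or use the GitLab backup/restore alternative instead.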
Migrate Mongo DB (optional)
  • Prepare new MongoDB instance
  • Restore data from a backup obtained from old instance
  • Note new connection details
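
The MongoDB migration steps above can be sketched with the standard MongoDB database tools. The host names and database name below are placeholders, not values from this document; set RUN=1 to actually execute.

```shell
# Hypothetical sketch of the MongoDB backup/restore between instances.
# Requires the MongoDB database tools (mongodump/mongorestore).
OLD_HOST="old-mongo.example.com"   # assumption: old instance
NEW_HOST="new-mongo.example.com"   # assumption: new instance
DB="fsw"                           # assumption: a relevant database name
if [ "${RUN:-0}" = "1" ]; then
  # Dump the database from the old instance into ./dump/<DB>
  mongodump --host "${OLD_HOST}" --port 27017 --db "${DB}" --out ./dump
  # Restore it into the new instance
  mongorestore --host "${NEW_HOST}" --port 27017 --db "${DB}" "./dump/${DB}"
else
  echo "Dry run: would migrate database '${DB}' from ${OLD_HOST} to ${NEW_HOST}"
fi
```

Repeat the dump/restore pair for every relevant database, and note the new connection details for the FSW installation.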

Install new version of IBM Financial Services Workbench

Note: After the installation, a new Git Provider with the name GitLab and the base URL of your GitLab is expected to have been created automatically. All solutions are connected to this Git Provider.

Setup for Version 2.2

To have all users properly set up on version 2.2, some additional steps are required:
  • Each user (both developers and analysts) needs to create a Git Token for the current Git repository in the User Settings
  • The Git Provider admin needs to ensure that users have sufficient access on the Git system:
    • to create new solutions: at least developer permission on the group must be granted
    • to edit the solution content: at least developer permission (on the group or the repository) must be granted
    • to view the solution: at least guest or reporter permission (on the group or the repository) must be granted
    • to delete a solution: at least owner permission on the group must be granted
  • Setting up the developer workspaces
    • Run fss upgrade-cli
    • Download the latest version of the cli-config of a Solution from the Solution Designer
    • Run fss setup using the newly downloaded file
    • Run fss clone for all relevant solutions
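
The workspace setup steps above can be sketched as a short shell session. The config file name and solution name are placeholders; whether fss setup takes the file as an argument should be verified against the fss CLI help. Set RUN=1 to actually execute.

```shell
# Sketch of the developer workspace setup listed above.
CLI_CONFIG="cli-config.json"   # assumption: file downloaded from the Solution Designer
SOLUTION="my-solution"         # assumption: one relevant solution
if [ "${RUN:-0}" = "1" ]; then
  fss upgrade-cli              # upgrade the CLI itself
  fss setup "${CLI_CONFIG}"    # set up using the newly downloaded file
  fss clone "${SOLUTION}"      # repeat for all relevant solutions
else
  echo "Dry run: would set up with ${CLI_CONFIG} and clone ${SOLUTION}"
fi
```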

Backup and Restore of the Local Marketplace

Note: If you have already used the Local Marketplace of IBM Financial Services Workbench, an additional step before and after the upgrade is necessary. To back up and restore the existing Design Packs, you can choose one of the alternatives described below.
  • Using the Solution Controller API:
    • Before you upgrade: Download the template
      • Open https://<NAMESPACE>/swagger-ui.html -> Local Marketplace Controller API
        • Use GET /api/marketplace/v2/templates to retrieve all available templates
        • Download each template using GET /api/marketplace/v2/templates/ {templateId}
    • After the upgrade: Upload the templates
      • Open https://<NAMESPACE>/swagger-ui.html -> Local Marketplace Controller API
        • Use POST /api/marketplace/v2/templates/- to upload each template
  • Direct copy of s3 bucket from s3 storage to s3 storage
    • Requirement: S3 Client e.g. https://github.com/minio/mc
    • Before upgrade
      # Get credentials from secret e.g. ssob-sdo-minio-secret
      > oc project <HUB NAMESPACE>
      > export GITLAB_SECRET=ssob-sdo-minio-secret
      > export GITLAB_ACCESS_KEY=$(oc get secret ${GITLAB_SECRET} -o jsonpath="{.data.accesskey}" | base64 -d)
      > export GITLAB_SECRET_KEY=$(oc get secret ${GITLAB_SECRET} -o jsonpath="{.data.secretkey}" | base64 -d)
      > export GITLAB_ENDPOINT=$(oc get cm solution-controller-application -o 'go-template={{index .data "marketplace.storage.endpoint"}}')
      > export GITLAB_PORT=$(oc get cm solution-controller-application -o 'go-template={{index .data "marketplace.storage.port"}}')
      
      # Open port forward, if the endpoint is internal like http://<SERVICE>.<NAMESPACE>.svc.cluster.local e.g. http://gitlab-minio-svc.gitlab.svc.cluster.local
      > echo ${GITLAB_ENDPOINT}
      > kubectl -n <NAMESPACE> port-forward svc/<SERVICE> ${GITLAB_PORT}:${GITLAB_PORT}
      > export GITLAB_ENDPOINT="http://localhost"
      
      # Retrieve data
      > mkdir -p data
      > set +o history
      > mc config host add gitlab ${GITLAB_ENDPOINT}:${GITLAB_PORT} ${GITLAB_ACCESS_KEY} ${GITLAB_SECRET_KEY}
      > set -o history
      > mc ls gitlab/marketplace-templates/
      > mc cp --recursive gitlab/marketplace-templates/ data
      > ls -la data/
      
    • After upgrade
      # Get credentials from secret e.g. k5-s3-storage-access
      > oc project <DESIGNER NAMESPACE>
      > export K5S3STORAGE_SECRET=k5-s3-storage-access
      > export K5S3STORAGE_ACCESS_KEY=$(oc get secret ${K5S3STORAGE_SECRET} -o jsonpath="{.data.accesskey}" | base64 -d)
      > export K5S3STORAGE_SECRET_KEY=$(oc get secret ${K5S3STORAGE_SECRET} -o jsonpath="{.data.secretkey}" | base64 -d)
      > export K5S3STORAGE_ENDPOINT=$(oc get cm solution-controller-application -o 'go-template={{index .data "marketplace.storage.endpoint"}}')
      > export K5S3STORAGE_PORT=$(oc get cm solution-controller-application -o 'go-template={{index .data "marketplace.storage.port"}}')
      
      # Open port forward, if the endpoint is internal like https://<SERVICE>.<NAMESPACE>.svc.cluster.local e.g. https://k5-s3-storage.designer.svc.cluster.local
      > echo ${K5S3STORAGE_ENDPOINT}
      > kubectl -n <NAMESPACE> port-forward svc/<SERVICE> ${K5S3STORAGE_PORT}:${K5S3STORAGE_PORT}
      # e.g. kubectl -n dev-designer port-forward svc/k5-s3-storage 9000:9000
      > export K5S3STORAGE_ENDPOINT="https://localhost"
      
      # Upload data (--insecure is required, as the mc will use localhost and the certificate is created for the service hostname)
      > set +o history
      > mc config host add k5s3storage ${K5S3STORAGE_ENDPOINT}:${K5S3STORAGE_PORT} ${K5S3STORAGE_ACCESS_KEY} ${K5S3STORAGE_SECRET_KEY} --insecure
      > set -o history
      > cd data
      > mc --insecure ls k5s3storage/marketplace-templates --recursive
      > mc ls --recursive .
      > mc --insecure cp --recursive . k5s3storage/marketplace-templates
      > mc --insecure ls k5s3storage/marketplace-templates --recursive
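
The Solution Controller API alternative above can be sketched with curl. The endpoints are the ones listed in the steps; the host, token and template id are placeholders, and bearer-token authentication is an assumption. Set RUN=1 to actually execute.

```shell
# Hypothetical sketch of the Local Marketplace template backup/restore
# via the Solution Controller API endpoints listed above.
HOST="https://<NAMESPACE>"      # base URL, as in the Swagger UI link above
TOKEN="<YOUR_TOKEN>"            # assumption: bearer token for the API
TEMPLATE_ID="<templateId>"      # one id from the list call
LIST_URL="${HOST}/api/marketplace/v2/templates"
if [ "${RUN:-0}" = "1" ]; then
  # Before the upgrade: list all templates, then download each one
  curl -H "Authorization: Bearer ${TOKEN}" "${LIST_URL}"
  curl -H "Authorization: Bearer ${TOKEN}" "${LIST_URL}/${TEMPLATE_ID}" -o "template.zip"
  # After the upgrade: upload each downloaded template
  curl -X POST -H "Authorization: Bearer ${TOKEN}" -F "file=@template.zip" "${LIST_URL}/-"
else
  echo "Dry run: would back up templates from ${LIST_URL}"
fi
```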

Migration

During the installation, the existing data of the Documentation Sections is automatically migrated to the new Markdown documentation, which can be accessed within the instance of the related artefact.

Migration from version 2.x to version 2.2

If you have already used a version 2.x on OpenShift 4.3, the steps above are not necessary. Please proceed with the steps described below.

Upgrade your CLI

  • Run fss upgrade-cli

Upgrade Lowcode TypeScript solutions

Lowcode TypeScript solutions created or exported with a version earlier than 2.2.0 need a TSLint configuration file (tslint.json) to perform linting. For solutions that were created earlier, the tslint.json file was added to the .gitignore file. To enable a custom linting configuration, the tslint.json entry must be removed from .gitignore.

To do this, please execute the following steps:

  • Go to your cloned solution
  • Open the .gitignore file
  • Remove the line containing tslint.json
  • Save the .gitignore file
  • Run fss push
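
The steps above can also be scripted. The sketch below creates a sample .gitignore in a temporary directory purely for illustration; in a real solution, run only the grep/mv lines inside your clone and then push with fss.

```shell
# Demonstration of removing the tslint.json entry from .gitignore.
# The temp directory and sample file contents are for illustration only.
cd "$(mktemp -d)"
printf 'node_modules\ntslint.json\ndist\n' > .gitignore
# Drop the tslint.json line and write the file back
grep -v '^tslint\.json$' .gitignore > .gitignore.tmp && mv .gitignore.tmp .gitignore
cat .gitignore
```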

Migration from version 2.2.x to version 2.2.2

If you have already used FSW version 2.2.0 on OpenShift 4.3, the steps above are not necessary. Please proceed with the steps described below.

Upgrade OpenShift Pipelines Operator to 1.0.1

With version 2.2.2, an upgrade to OpenShift Pipelines Operator version 1.0.1 is needed. Please see the changed System Requirements.

If you are already subscribed to the community version of the OpenShift Pipelines Operator, please uninstall the community version before subscribing to this operator.
Warning: Please note that uninstalling the community operator will delete all Pipelines, Tasks, PipelineResources and PipelineRuns.
To uninstall the community operator:
  • Backup your Pipeline resources if necessary (see section Backup and Restore Pipeline resources)
  • Delete the instance (name: cluster) of the CRD config.operator.tekton.dev
  • Uninstall the community version of this operator from OperatorHub
  • Install the OpenShift Pipelines Operator in version 1.0.1 using the OperatorHub
  • Restore your Pipeline resources if necessary (see section Backup and Restore Pipeline resources)
Note: Please also see the description/overview for the installation of the OpenShift Pipeline Operator in the OperatorHub.
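
The CRD cleanup step in the list above can be sketched with oc. The resource kind and the instance name "cluster" come from the instructions; the command is guarded so nothing is deleted unless explicitly enabled, and you should back up your Pipeline resources first.

```shell
# Sketch of deleting the "cluster" instance of the community operator's CRD.
# Set RUN=1 only after backing up your Pipeline resources.
CRD="config.operator.tekton.dev"
INSTANCE="cluster"
if [ "${RUN:-0}" = "1" ]; then
  oc delete "${CRD}" "${INSTANCE}"
else
  echo "Dry run: would delete instance '${INSTANCE}' of CRD '${CRD}'"
fi
```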

Backup and Restore Pipeline resources

To back up and restore all needed Pipeline resources (Pipelines, PipelineResources and PipelineRuns), you can use the following example bash scripts. Please check and adjust the scripts to your needs before executing them. The two scripts simply save all Pipelines and PipelineResources of the cluster, restore them after upgrading to the new Pipeline Operator version (1.0.1), and then trigger every restored Pipeline to recreate a PipelineRun.

Note: Please be aware that existing Pipeline Git commit triggers (the Auto-deploy option of Build Pipelines) are not restored. To enable the Git commit trigger again, the Pipeline has to be recreated.

The following example script will create the backup of the pipeline resources:

Note: To execute the scripts, the oc client must be installed on your machine and you need to be logged in to the cluster as admin.
#!/bin/bash
rm -f pipelines_backup.yaml
rm -f pipelines_resources_backup.yaml
rm -f pipelines_list
rm -f pipelines_list_resources
exec &> back_up_logfile.txt
echo "START....................................................."
echo "Getting all pipelines and namespaces"
echo ""
# Skip the single header line of the oc output; keep namespace and name
oc get pipelines --all-namespaces | awk 'NR>1 {print $1, $2}' > pipelines_list
echo "---" >> pipelines_backup.yaml
echo "Extracting every pipeline from all the namespaces"
echo ""
while read -r namespace pipeline_name
do
  echo "Back-up of pipeline started: ${pipeline_name}"
  oc get pipeline "${pipeline_name}" -o yaml -n "${namespace}" >> pipelines_backup.yaml
  echo "---" >> pipelines_backup.yaml
done < pipelines_list
echo "Back-up of pipelines in all the namespaces is completed"
echo ""
echo "Back-up of pipeline resources in all the namespaces is started"
echo ""
oc get pipelineresources --all-namespaces | awk 'NR>1 {print $1, $2}' > pipelines_list_resources
echo "---" >> pipelines_resources_backup.yaml
echo "Extracting every pipeline resource from all the namespaces"
echo ""
while read -r namespace pipeline_resource_name
do
  echo "Back-up of pipeline resource started: ${pipeline_resource_name}"
  oc get pipelineresources "${pipeline_resource_name}" -o yaml -n "${namespace}" >> pipelines_resources_backup.yaml
  echo "---" >> pipelines_resources_backup.yaml
done < pipelines_list_resources
echo "Back-up of pipeline resources in all the namespaces is completed"
echo ""
echo "End....................................................."
            

The following example script will restore the pipeline resources:

Note: To execute the restore script, the following two parameters need to be provided:
  • OC_DOMAIN_URL: Domain URL of your OpenShift cluster (e.g. apps.openshift.my.cloud)
  • OC_TOKEN: OpenShift API token
#!/bin/bash
exec &> restore_logfile.txt
echo "START....................................................."
echo "Checking for backup files"
pipeline_backup="pipelines_backup.yaml"
pipelines_resources_backup="pipelines_resources_backup.yaml"
pipelines_list="pipelines_list"
OC_DOMAIN_URL=$1
OC_TOKEN=$2

if [ ! -f "$pipeline_backup" ]
then
    echo "File '${pipeline_backup}' not found."
    exit 1
elif [ ! -f "$pipelines_resources_backup" ]
then
    echo "File '${pipelines_resources_backup}' not found."
    exit 2
elif [ -z "$OC_DOMAIN_URL" ]
then
    echo "OpenShift URL is not provided."
    echo "Usage: sh restore.sh <Openshift-url-except-namespace> <openshift-token>"
    exit 3
elif [ -z "$OC_TOKEN" ]
then
    echo "OpenShift token is not provided."
    echo "Usage: sh restore.sh <Openshift-url-except-namespace> <openshift-token>"
    exit 4
fi
echo ""
echo "Started restoring pipelines"
oc apply -f "$pipeline_backup"
echo "Restoring pipelines completed"
echo ""
echo "Started restoring pipeline resources"
oc apply -f "$pipelines_resources_backup"
echo "Restoring pipeline resources completed"
echo ""
echo "Triggering all pipelines to create pipeline runs."
echo ""
while read -r namespace pipeline_name
do
  echo "Triggering pipeline run for the pipeline: ${pipeline_name}"
  curl -X POST "https://${namespace}.${OC_DOMAIN_URL}/-/pipelines/v1/${pipeline_name}" -H "accept: */*" -H "Authorization: Bearer ${OC_TOKEN}" -d "" &
done < "$pipelines_list"
echo ""
echo "Triggering all pipelines completed"
echo ""
echo "End....................................................."