Upgrading from 3.1 (TP) to 4.0

How to upgrade

Compared to previous versions, IBM Industry Solutions Workbench now provides an operator-based installation mechanism that makes future installations and upgrades easier. When upgrading from 3.1 (TP) to 4.0, some migration steps are necessary to switch from the existing installation; they are explained in the following.

Obtain the installation package

To obtain files required for upgrading IBM Industry Solutions Workbench:

  • Go to Passport Advantage Online

  • Search for IBM Industry Solutions Workbench

  • Download the new version you want to upgrade to, e.g. isw-4.0-ibm-openshift-4.8-installations.tgz

Unpack the installation file on the computer from which the installation commands are to be executed:

INSTALLDIR=~/install/
mkdir ${INSTALLDIR}
cd ${INSTALLDIR}
tar xzvf ibm-industry-solutions-workbench-4.0.0-ocp-4.8.tgz
cd ${INSTALLDIR}/ibm-industry-solutions-workbench-4.0.0-ocp-4.8
Attention: Make sure that the configuration files from your previous installation in the ssob-install/deployments/configs folder are saved; this information is needed to keep the installation configuration unchanged.
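
For example, the folder can be copied to a backup location before the upgrade (the target path is only an example):

cp -r ssob-install/deployments/configs ~/isw-3.1-config-backup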

Uninstall previous installation

Before installing IBM Industry Solutions Workbench 4.0, the current installation must be uninstalled because the installation mechanism has changed. After the upgrade to IBM Industry Solutions Workbench 4.0, this step will not be necessary again. If you follow the upgrade instructions, no data will be lost.

Backup the configuration from your cluster

Backup the following secrets:

  • Log in to your cluster and switch to your installation namespace

oc login
oc project namespace
  • Save the default configurations (for the runtime MongoDB, the Kafka instance and the OIDC provider)

 oc get secret k5-default-document-storage-service-binding -o=yaml > k5-default-document-storage-service-binding.yaml
 oc get secret k5-default-message-service-binding -o=yaml > k5-default-message-service-binding.yaml
 oc get secret k5-default-iam-service-binding -o=yaml > k5-default-iam-service-binding.yaml
 oc get secret --selector='k5-configurable=true' -o=yaml > backup-config-secrets.yaml
  • At least the following secrets must be saved so that they can be re-created in case they get deleted in the cluster by mistake:

oc get secret --selector='k5-configurable=true' -o=name
secret/k5-argocd-binding
secret/k5-designer-mongodb
secret/k5-encryption-master-key
secret/k5-hub-truststore
secret/k5-iam-secret
secret/k5-iam-settings
Tip: These configuration secrets should not be deleted during the upgrade process, but we recommend a backup.
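
Should one of these secrets nevertheless be deleted by mistake, it can be restored from the YAML backups created above, for example:

oc apply -f backup-config-secrets.yaml
oc apply -f k5-default-document-storage-service-binding.yaml

The exported YAML may contain cluster-specific metadata such as resourceVersion, which can be removed before re-applying.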

Uninstall current installation

The uninstallation needs to be done via the Helm CLI (version 2). There are two options to do this:

  • Option 1: Use a local Helm tool (on your machine); in this case you need to get the Tiller/Helm certificates from the instance running in your installation namespace

    • To obtain the TLS certificate data (created by the IBM Cloud Pak for Data installation) run the following commands:

oc project namespace
oc get secret helm-secret -o jsonpath='{.data.ca\.cert\.pem}' | base64 -d > ./ca.cert.pem
oc get secret helm-secret -o jsonpath='{.data.helm\.cert\.pem}' | base64 -d > ./helm.cert.pem
oc get secret helm-secret -o jsonpath='{.data.helm\.key\.pem}' | base64 -d > ./helm.key.pem
  • The saved certificates can then be used for the following commands by adding the flags --tls-ca-cert ./ca.cert.pem --tls-cert ./helm.cert.pem --tls-key ./helm.key.pem (see the example after this list)

  • Option 2: Connect to the running Tiller pod in your cluster and execute the following commands in this pod

    • This can be done by searching for the Tiller pod in the web console in your installation namespace

    • Then open the Tiller pod and switch to the Terminal

    • Use the helm tool from the Tiller pod (can be tested with /helm list --tls)
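
For Option 1, a minimal sketch of a Helm 2 command using the extracted certificates could look as follows (depending on your setup, the local client may additionally need to be pointed at the Tiller instance, for example via the HELM_HOST environment variable):

helm list --tls --tls-ca-cert ./ca.cert.pem --tls-cert ./helm.cert.pem --tls-key ./helm.key.pem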

The following commands need to be executed for the uninstallation:

  • List all installed helm deployments

helm list --tls
  • Delete/uninstall all helm deployments of Cloud Pak for Data, Solution-Hub and Solution-Designer

helm delete RELEASE_NAME --purge --tls

For example:

helm delete 0010-infra --purge --tls
helm delete 0015-setup --purge --tls
helm delete 0020-core --purge --tls
helm delete namespace-ssob-solution-hub --purge --tls
helm delete namespace-ssob-solution-designer --purge --tls
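
Afterwards, list the releases again to verify that nothing is left before starting the new installation (when using a local Helm client, add the TLS certificate flags shown above):

helm list --tls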

Install the Upgrade

Follow the installation process. The installation of the new version during the upgrade is essentially the same as the initial installation.

Installation configuration parameters

All installation configurations you may have set in one of the following files in your previous installation can now also be set during the new operator-based installation in the ISW Custom Resource, see also Configure ISW Custom Resource:

  • solution-designer-values.yaml

  • solution-hub-values.yaml

Especially if you have used specific URLs or database names, you need to add these parameters to your ISW Custom Resource; see in particular spec.values.global.endpoints, spec.values.service-builder.k5-designer-backend.mongoDb.dbName and spec.values.service-builder.k5-git-integration-controller.mongoDb.dbName in the following example:

apiVersion: k5.ibm.com/v1beta1
kind: ISW
metadata:
  name: k5-tools
  namespace: k5-tools
spec:
  designer:
    enabled: true
  domain: apps.openshift.my.cloud
  license:
    accept: true
  values:
    global:
      endpoints:
        assetManager:
          host: k5-asset-manager.apps.openshift.my.cloud
        configurationManagement:
          host: k5-configuration-management.apps.openshift.my.cloud
        designer:
          host: k5-designer.apps.openshift.my.cloud
        hub:
          host: k5-hub.apps.openshift.my.cloud
        query:
          host: k5-query.apps.openshift.my.cloud
    service-builder:
      k5-designer-backend:
        mongoDb:
          dbName: my-designer-database
      k5-git-integration-controller:
        mongoDb:
          dbName: my-gic-database
Tip: Please be aware that the YAML structure of the parameters in the solution-hub-values.yaml has been aligned with the parameters in the solution-designer-values.yaml.

Breaking changes

IBM Industry Solutions Workbench 4.0 also brings some technical alignments that may cause existing projects to stop working. This section gives you an overview of the breaking changes and how to solve issues related to them.

Java pro-code

Dependency update

Java pro-code uses several functionalities provided by the dependency "cp-framework-sdk". This dependency has been updated to include new fixes and new versions of the underlying dependencies.

All existing Java pro-code projects are required to download and migrate to this new version.

Migration steps
  1. Open the Solution Envoy dashboard (you can find the link in Solution Designer on the CI/CD page after you have set up at least one deploy pipeline or inside Solution Hub after selecting a k5-project and opening the tab "Service Deployments")

  2. Navigate to the Infrastructure page and download all the dependencies listed under Maven Dependencies (Java).

  3. Install the dependencies in the local Maven repository. Below is an example of how the dependencies can be installed via Maven, where "-x.y.z" is replaced by the version of the downloaded dependencies.

    mvn install:install-file -Dfile=cp-framework-sdk-autoconfiguration-x.y.z.jar -DpomFile=cp-framework-sdk-autoconfiguration-x.y.z.pom
    mvn install:install-file -Dfile=cp-framework-sdk-parent-x.y.z.pom -DpomFile=cp-framework-sdk-parent-x.y.z.pom
  4. Clone / Open your pro-code service project.

  5. Navigate to pom.xml file

  6. Search for the parent attribute and change it to match the new "cp-framework-sdk" version, see the following snippet example:

    <parent>
        <groupId>de.knowis.cp.sdk</groupId>
        <artifactId>cp-framework-sdk-parent</artifactId>
        <version>4.0.122</version>
        <relativePath>
            ./.framework/repo/de/knowis/cp/sdk/cp-framework-sdk-parent/4.0.122/cp-framework-sdk-parent-4.0.122.pom
        </relativePath>
    </parent>

NOTE The latest version of "cp-framework-sdk" shipped with IBM Industry Solutions Workbench 4.0 is 4.0.122.

Finally, push your adjustments and run the pipeline to make sure the project deploys successfully.
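
For example, assuming the deploy pipeline is triggered by a push to the project repository, this could look like:

git add pom.xml
git commit -m "Update cp-framework-sdk parent to 4.0.122"
git push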

Java low-code

Date property generation alignment

Within a Domain namespace, a "Date" property used to be generated with the "OffsetDateTime" data type. However, customers use "Date" properties only to store dates, hence a "Date" property is now generated with the "LocalDate" data type.

Migration steps
  1. Clone your service project; if it contains a Date property in a domain namespace, you will get compilation errors.

  2. Adjust your logic to convert to / use "LocalDate" instead of "OffsetDateTime". See the different examples below.

  3. Push your adjustments and run the pipeline to make sure the project deploys successfully.

Examples
  • Converting an OffsetDateTime to a LocalDate

// Converting an OffsetDateTime to a LocalDate
LocalDate migratedDteProperty = offsetDateTimeProp.toLocalDate();
  • Directly pass your string representation of a date and create a LocalDate instance.

// Create date time formatter
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("d-MMM-yyyy");
String date = "16-Aug-2016";

// Convert to local date
LocalDate localDate = LocalDate.parse(date, formatter);
  • Directly using data types from an API Namespace schema, as the API Namespace generates schema date properties as "LocalDate"

// Get hiring date from schema
LocalDate hiringDate = employeeSchema.getHiringDate();

// Directly set date to an entity
employeeEntity.setHiringDate(hiringDate);

Api namespace schema setter/getter naming convention

Api namespace schema setters/getters were not following proper camelCase naming.

For example:

  • For a property named "myProperty", getters / setters were generated as setmyProperty() and getmyProperty()

  • Now this is aligned to be setMyProperty() and getMyProperty()

Migration steps
  1. Clone your service project; you will get compilation errors (if you were using getters / setters within an Api namespace).

  2. Adjust the usage of the affected getters / setters.

// Before change for a schema property named "hiringDate"
LocalDate hiringDate = employeeSchema.gethiringDate();

// After change
LocalDate hiringDate = employeeSchema.getHiringDate();

Finally, push your adjustments and run the pipeline to make sure the project deploys successfully.

Dependency Update

Java low-code uses several functionalities provided by the dependency "cp-framework-sdk". This dependency has been updated to include new fixes and new versions of the underlying dependencies.

All existing Java low-code projects are required to migrate to this new version.

Migration steps
  1. Clone / Open your low-code service project.

  2. Navigate to pom.xml file

  3. Search for the parent attribute and change it to match the new "cp-framework-sdk" version, see the following snippet example:

    <parent>
        <groupId>de.knowis.cp.sdk</groupId>
        <artifactId>cp-framework-managed-sdk-parent</artifactId>
        <version>4.0.122</version>
        <relativePath>
            ./.framework/repo/de/knowis/cp/sdk/cp-framework-managed-sdk-parent/4.0.122/cp-framework-managed-sdk-parent-4.0.122.pom
        </relativePath>
    </parent>

NOTE The latest version of "cp-framework-sdk" shipped with IBM Industry Solutions Workbench 4.0 is 4.0.122.

Finally, push your adjustments and run the pipeline to make sure the project deploys successfully.

Base constructor updates for services, commands and agents

Previously, services, commands and agents used to get the "DomainEventBuilder" and "EventProducerService" classes injected as constructor parameters, even if they were not modelled to fire business events.

Now only services, commands and agents that are modelled to fire events will get these injected constructor parameters.

If you have compilation errors because these parameters were removed, remove them from your implementation class.

// Before change constructor getting DomainEventBuilder and EventProducerService
public class FailedCompensationAgent extends FailedCompensationAgentBase {

  private static Logger log = LoggerFactory.getLogger(FailedCompensationAgent.class);

  public FailedCompensationAgent(
    DomainEntityBuilder entityBuilder,
    DomainEventBuilder eventBuilder,
    EventProducerService eventProducer, 
    Repository repo) {
    super(entityBuilder, eventBuilder, eventProducer, repo);
  }
}

// After the change, the base class constructor no longer has DomainEventBuilder and EventProducerService
public class FailedCompensationAgent extends FailedCompensationAgentBase {

  private static Logger log = LoggerFactory.getLogger(FailedCompensationAgent.class);

  public FailedCompensationAgent(
    DomainEntityBuilder entityBuilder,
    Repository repo) {
    super(entityBuilder, repo);
  }
}

Typescript low-code

Data types

Integer, Decimal and Date used to be of string type. Now Integer has the native JavaScript BigInt type, Decimal has the BigNumber type from bignumber.js and Date has the native JavaScript Date type.

Tip: It is recommended to clone/pull all your low-code projects to fix the usage of those types where needed. Most of the affected parts can be easily identified by running k5 compile; check the reported problems one by one and fix the type usage accordingly. Additionally, conversion functions for decimal, integer or date properties like parseInt(), parseFloat() or others should no longer be needed.
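
For example, the check can be run with the Solution CLI from inside a cloned low-code service project (a minimal sketch; adjust to your workflow):

k5 pull       # pull the latest project contents
k5 compile    # reports the places where the changed Integer, Decimal and Date types are used incorrectly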

Api namespace

IBM Industry Solutions Workbench 4.0 completely changes how the operation file is generated and structured, to be more aligned with TypeScript and the Express controller style, thus allowing it to support more complex API designs.

In order to do a successful migration, we first need to understand the structure and elements of an operation file, so we can properly migrate the logic of each operation.

IBM Industry Solutions Workbench 3.1 generated operation

In 3.1, operation files were generated with the following structure:

export default class extends operations.admin_CreateReferenceInformation {
  public async execute (): Promise<void> {
    const log = this.util.log
    log.debug('admin_CreateReferenceInformation.execute()')

    // Access parameters
    const rootIdParam = this.request.path['root-id'];
    const sortParam = this.request.query.sort;

    // Access Request Body
    const requestBody = this.request.body;

    // Setting response
    this.response.body = this.factory.schema.admin.ResourceId()
    
    this.response.status = 201

  }

   /**
   * This function is automatically called when an error occurs within the execution flow of operation admin_CreateReferenceInformation
   * @param error Operation execution error that occurred.
   */
  public async handleError(error: Error): Promise<void> {
    const log = this.util.log;
    log.debug("admin_CreateReferenceInformation.handleError()");
    // Add Error handling logic below and set this.response that will be returned as operation admin_CreateReferenceInformation response

    // Set 500 internal server error code
    this.response.status = 500
  }

}

From above we can see that:

  1. Each Operation has a class that extends a base class from solution-framework with an execute() method that holds all logic.

  2. Each Operation has an error handling method handleError() to implement your operation error handling and return the response accordingly.

  3. Access to path and query parameters was through this.request.path and this.request.query respectively.

  4. Access to Request body was through this.request.body.

  5. Setting Response status was through this.response.status.

  6. Setting Response Body was through this.response.body.

In addition:

  1. A global ErrorMiddleware file had to be implemented for each api namespace.

  2. Default access to the isInstanceOf custom operator was removed from base operation classes; however, the class itself is still part of the solution-framework.

/**
 * This class is responsible for providing a centralized place for common error handling logic independent from API Operation Context,
 * API operation response set within this class via (this.response) might end up being an un-documented API response for the executed operation.
 */
export default class extends middleware.admin_ErrorMiddleware {

  // This script gets executed if any error happens that is not yet handled in an operation / within operation handleError()
  public async handleError(error: Error): Promise<void> {
    // Handle the general errors that may occur throughout the implementation
    const log = this.util.log;
    log.debug('admin_ErrorMiddleware.execute()');
  }
}
IBM Industry Solutions Workbench 4.0 generated operation

IBM Industry Solutions Workbench 4.0 operations are generated with the following structure:

/**
 * InitiateApi
 */
export class InitiateApi extends V1BaseControllers.InitiateApiBase {
  constructor(requestContext: RequestContext) {
    super(requestContext);
  }

  /**
    * @param request Request.InitiateRequest
    * initiate logic
  */
  initiate = async (request: Request.InitiateRequest) => {
    // TODO: add your implementation logic
    this.util.log.info('start initiate execution');

    // Access original express request (for example headers)
    this.util.log.trace(`Custom Header ${request._request.headers['x-custom-header']}`);

    // Access Request Body
    const requestBody = request.body;
   
    // Access Path Parameters
    const sessionId = request.path.sessionId;

    // Access Query Parameters
     const audit = request.query.audit;

    // Setting the response body can be done in two ways

    // 1. Use the default imported schema type (TypeScript native way)
    const schema: Schema.InitiateCorrespondenceOperatingSessionResponse = {
      CorrespondenceOperatingSession: {
        CorrespondenceServiceSessionDate: '27-11-2022',
        CorrespondenceServiceSessionStatistics: 'valid'
      }
    };
    this.response.body = schema;

    // 2. Directly setting the response body in a TypeScript style
    this.response.body = {
      CorrespondenceOperatingSession: {
        CorrespondenceServiceSessionDate: '27-11-2022',
        CorrespondenceServiceSessionStatistics: 'Valid'
      }
    };

    // Setting the status code
    this.response.statusCode = 200;
    
  }

  /**
    * @param request Request.InitiateRequest
    * @param error TypeError
    * initiateErrorHandler logic
  */
  initiateErrorHandler = async (error: TypeError, request: Request.InitiateRequest) => {
    // TODO: error handling should go here
    this.util.log.info('start initiateErrorHandler execution');

    this.util.log.error(`An Error ${error.name} occurred in operation ${error.message}`);
   
    this.response.statusCode = 500;

  }
}

From above we can see that:

  1. Each Operation has a class that extends a base class from solution-framework with an arrow function named after the operationId; this is where all operation logic is implemented.

  2. Each Operation has an error handling method named using the naming convention operationIdErrorHandler() to implement your operation error handling and return the response accordingly.

  3. Each Operation has a request object (a generated interface for that operation) that is always passed to that arrow function; it provides access to:

  • Path Parameters

  • Query Parameters

  • Request Body

  • Original Express request that contains all the request headers.

  4. Setting the response status code is through this.response.statusCode.

  5. Setting the response body is through this.response.body.

Migration steps

Now that we have seen how the generated operation files have changed, note that the generated types (Parameters, Request, Response) have changed as well.

Before we can start migrating we need to:

  1. Preferably back up your service project.

  2. Pull using the CLI command k5 pull.

  3. You will get new files (two files per operation: one for the operation logic and one for testing); the new files follow the naming conventions operationIdApi.ts and operationIdApi.test.ts.
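
A minimal sketch of this preparation, assuming a Git-based backup via a dedicated branch (the branch name is an example):

git checkout -b backup-before-4.0-migration   # back up the current state on a separate branch
git push -u origin backup-before-4.0-migration
git checkout -                                # switch back to your working branch
k5 pull                                       # fetch the regenerated operation and test files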

Operation logic
  1. Copy the old operation and change the access to parameters, request and response as below

Logic | Old Way | New Way
Path Parameter | const rootIdParam = this.request.path.rootId; | const rootIdParam = request.path.rootId;
Query Parameter | const sortParam = this.request.query.sort; | const sortParam = request.query.sort;
Request Body | const requestBody = this.request.body; | const requestBody = request.body; (request is passed as an operation argument)
Setting Response Status Code | this.response.status = 201; | this.response.statusCode = 201;
Setting Response Body | this.response.body = responseSchema; | this.response.body = responseSchema;

Note that setting the response looks identical; however, the underlying type (response body) is completely different, and so is creating the response body schema / setting values on it.

The example below illustrates how you can set values on the response body:

    // 1. Use the default imported schema type (TypeScript native way)
    const schema: Schema.InitiateCorrespondenceOperatingSessionResponse = {
      CorrespondenceOperatingSession: {
        CorrespondenceServiceSessionDate: '27-11-2022',
        CorrespondenceServiceSessionStatistics: 'valid'
      }
    };
    this.response.body = schema;

    // 2. Directly setting the response body in a TypeScript style
    this.response.body = {
      CorrespondenceOperatingSession: {
        CorrespondenceServiceSessionDate: '27-11-2022',
        CorrespondenceServiceSessionStatistics: 'Valid'
      }
    };
  2. If the implementation was using the isInstanceOf checker that was accessible via this, like in the example below:

 if (this.isInstanceOf.error.AggregateNotFoundError(error)) {
        const errorSchema = this.factory.schema.admin.GenericError();
        errorSchema.code = 'E404';
        errorSchema.description = JSON.stringify(error.responseDetails, null, 2);
        this.response.body = errorSchema;
        this.response.status = 500;
      }

It can still be used; however, it needs to be imported and initialized as in the illustrative example below:

//1. Import directly from solution-framework
import { SolutionInstanceChecker } from 'solution-framework/sdk/v1/solution/instanceChecker/SolutionInstanceChecker';

export class InitiateApi extends V1BaseControllers.InitiateApiBase {

  // 2. Declare it as a variable  
  private isInstanceOf: SolutionInstanceChecker;

  constructor(requestContext: RequestContext) {
    super(requestContext);
    // 3. Initialize it
    this.isInstanceOf = new SolutionInstanceChecker();
  }

  /**
    * @param request Request.InitiateRequest
    * initiate logic
  */
  initiate = async (request: Request.InitiateRequest) => {
   
    this.util.log.info('start initiate execution');
     try {
      // Some Action
    } catch(e) {
      //4. Use it as previously  
      if (this.isInstanceOf.error.GeneralError(e)){
          // Set response status code to 500
          this.response.statusCode = 500;
      }
    }
  }
}
  3. If an ErrorMiddleware class located within the folder /api/apiNamespaceAcronym/middleware was being used, this class can be removed or re-purposed as a general generic error handling utility.

Below is an illustrative example of a class ApiGenericErrorHandler that has a function handleError which takes an error argument (similar to old error middleware class) and returns a generic response for it.

Note: This generic error handler is not called automatically and needs to be triggered explicitly inside the errorHandler methods of the API operations.
export class ApiGenericErrorHandler {

  public async handleError(error: Error): Promise<BadRequestErrorResponse | InternalServerErrorResponse> {
    // Handle the general errors that may occur throughout the implementation
    // Optionally log the error here

    // 1. Move the old error middleware logic to this place
    // 2. Examine the error and return a response accordingly

    return {} as InternalServerErrorResponse;
  }

}
  4. If you were using a Mapper class (to map a schema to an entity and vice versa) and need to create an instance of it, note that the Mapper class constructor requires a special context "RequestContext", which is already passed to your operation controller class. You can store it as a class variable and use it to construct mapper instances.

See illustrative example below:


// 1. Directly create a mapper within the operation controller constructor
export class ExecuteApi extends V1BaseControllers.ExecuteApiBase {
  private mapper: Mapper;
  constructor(requestContext: RequestContext) {
    super(requestContext);
    this.mapper = new Mapper(requestContext);
  }
}  

//2. Store request context and use it later
export class ExecuteApi extends V1BaseControllers.ExecuteApiBase {
  private context: RequestContext;
  constructor(requestContext: RequestContext) {
    super(requestContext);
    this.context = requestContext;
  }

  /**
    * @param request Request.ExecuteRequest
    * execute logic
  */
  execute = async (request: Request.ExecuteRequest) => {
    // TODO: add your implementation logic
    this.util.log.info('start execute execution');
  
     // Create mapper instance
     const mapper:Mapper  = new Mapper(this.context);
  }
}  

export class Mapper extends mappers.BaseMapper {

  constructor(context: Context) {
    super(context);
  }

  public mapSchemaToEntity(schema: ObjectSchemaObject): Entity {
    const log = this.log;
    log.debug('mapSchemaToEntity()');
    // Add mapping implementation
    return null;
  }

  public mapEntityToSchema(entity: Entity): ObjectSchemaObject {
    const log = this.log;
    log.debug('mapEntityToSchema()');
    // Add mapping implementation
    return null;
  }
}

Test files

For test files, the change follows a migration similar to the operation logic.

Below is an example of a test file that illustrates calling the operation and accessing the response.

There are two test functions, one for the operation logic and one for the error handling:


describe('RetrieveOutboundApi', () => {
  const testEnvironment = new TestEnvironment();
  let requestContext: DebugRequestContext;
  before(async () => {
    // This block will run automatically before all tests.
    // Alternatively, use beforeEach() to define what should automatically happen before each test.
    // This is an optional block.
    requestContext = loadAndPrepareDebugConfig().requestContext;

  });
  after(async () => {
    // This block will run automatically after all tests.
    // Alternatively, use afterEach() to define what should automatically happen after each test.
    // This is an optional block.

    // Recommended: remove all instances that were created
    // await testEnvironment.cleanup();
  });

  it('retrieveOutbound', async () => {
    const retrieveOutboundApiInstance = new RetrieveOutboundApi(requestContext);

    // Declaring the request
    const request = {} as Request.RetrieveOutboundRequest;

    // Initializing request path parameters
    request.path = {
      inboundId: 'some id',
      correspondenceId : 'some id'
    };

    // Initializing request query parameters
    request.query = {
        from: '1',
        to: '20',
    };

    // Initializing request body
    request.body = {
        property1: 'property 1',
        property2: 'property 2',
    };

    // Calling operation
    await retrieveOutboundApiInstance.retrieveOutbound(request);
   
    // Accessing the response status code
    expect(retrieveOutboundApiInstance.response.statusCode).to.equal(200);

    // Accessing the response body
    expect(retrieveOutboundApiInstance.response.body.myProperty).to.equal('some property');
  });

  it('retrieveOutboundErrorHandler', async () => {
    const retrieveOutboundApiInstance = new RetrieveOutboundApi(requestContext);

    const request = {} as Request.RetrieveOutboundRequest;
    const error = new TypeError('Some error');

    // Initialize the request & error you want to test against
    await retrieveOutboundApiInstance.retrieveOutboundErrorHandler(error, request);
  
    // Accessing the response status code
    expect(retrieveOutboundApiInstance.response.statusCode).to.equal(500);

    // Accessing the response body
    expect(retrieveOutboundApiInstance.response.body.customErrorMessage).to.equal('An error happened');
  });

});

Integration namespace

  1. Only one API dependency is allowed per integration namespace. If you are working with a version that supported multiple dependencies in an integration namespace, you need to create additional integration namespaces and upload these dependencies to them.

  2. Calling an API dependency operation is different:

    • It's directly under this.apis.integrationNamespaceAcronym

    • The return type is different: all API dependency operations now return the full axios response; the response body is mapped to the data property.

    • Schemas are all grouped under integrationSchema.integrationNamespaceAcronymSchema

See the illustrative comparison below between IBM Industry Solutions Workbench 3.1 and IBM Industry Solutions Workbench 4.0.

IBM Industry Solutions Workbench 3.1

// 1. Accessing types from API dependency where
// - apis is a grouping for all api dependencies 
// - email_simemail is "integrationNamespaceAcronym_dependencyName"
// - EmailMessage schema we want to get its type
 const requestBody: apis.email_simemail.EmailMessage = {
      to: [
        {
          adress: this.input.custEmail
        }
      ],
      subject: `Order Confirmation of Order #${this.input.orderId}`,
      sender: 'pizza.fsw@ISW.email',
      message: message,
      messageFormat: 'HTML'
    };

    // Making a call where
    // 1. apis is the grouping for all api dependencies
    // 2. simemail is the dependency name
    // 3. postEmail is the operation we want to call
    await this.apis.simemail.postEmail(requestBody);
IBM Industry Solutions Workbench 4.0

// 1. Accessing types from API dependency where
// - integrationSchema is a grouping for all api dependencies schemas
// - emailSchema is a grouping for schemas in integration namespace with acronym "email"
// - EmailMessage schema we want to get its type

const requestBody: integrationSchema.emailSchema.EmailMessage= {
      to: [
        {
          adress: 'some address'
        }
      ],
      subject: `Order Confirmation of Order #12345`,
      sender: 'pizza.fsw@ISW.email',
      message: 'Some message',
      messageFormat: 'HTML'
    };


// 1. Calling api dependency operation and getting response body
const integrationCallResult =  (await this.apis.email.postEmail(requestBody)).data;


// 2. Also we have full access to response returned as a result of an underlying axios call
const axiosResponse = await this.apis.email.postEmail(requestBody);

// Getting response body
const responseBody = axiosResponse.data;

// Getting response headers
const headers = axiosResponse.headers;

//3. Another variation
const { headers , data} =  await this.apis.email.postEmail(requestBody);
Migration steps
Logic | Old Way | New Way
Calling an API Dependency | await this.apis.simemail.postEmail(requestBody); | const response = (await this.apis.email.postEmail(requestBody)).data;

NOTE In the above example the operation only had a request body; however, if it has additional path or query parameters, they need to be passed as well, see the example below:

const integrationCallResult =  (await this.apis.cnr.acceptOffer(requestBody, 'Path Param1', 'Query Param1')).data

API Modeling

If API parameters have been modeled with the location path, dots or dashes are no longer allowed in the name attribute.

Dashes or dots in the path caused the path not to be resolved correctly in the running service, and the API call returned only an HTTP 404 Not Found. If dashes or dots were used in an existing project, you will see a warning in the Problems section.

In this case, you only need to remove them from the Name attribute (for example, rename a path parameter "root-id" to "rootId") and adjust the source code if necessary.

Migrate Projects from local marketplace to asset catalog

Users of IBM Industry Solutions Workbench 3.0 and prior had the functionality to create project templates from solutions and export them to the local marketplace. As of IBM Industry Solutions Workbench 4.0, the local marketplace is replaced by asset catalogs, so all users who have existing project templates need to migrate them into an asset catalog. This migration can be done in two ways, each involving multiple steps:

Approach 1: Import all project templates as projects in the designer

This approach involves two steps: one before upgrading to IBM Industry Solutions Workbench 4.0 and another after upgrading to IBM Industry Solutions Workbench 4.0.

  • Before upgrading to IBM Industry Solutions Workbench 4.0: Users need to go to the designer and

    1. Import all project templates (one by one) as projects into the designer.

    2. Create an initial commit from the designer.

  • After upgrading to IBM Industry Solutions Workbench 4.0: For each of the projects created in the step above, users have to share those projects as assets and make them available in an asset catalog. The asset catalog needs to be configured first, see Configuring Asset Catalogs and Assets Overview.

Approach 2: Using bash scripts to download and upload project templates

Note: Only templates which have a corresponding service project with the same acronym available in the designer can be migrated with this approach. In order to migrate templates which do not have a corresponding service project in the designer, users can create a new service project in the designer with the same acronym. After this service project is created, the migration for these templates should work.

This approach involves three steps:

  1. Download all existing templates to the local machine (to be performed before upgrading to IBM Industry Solutions Workbench 4.0)

  2. Transform all templates to the correct format (can be done before or after the upgrade once the templates are downloaded)

  3. Upload the templates to the designer (to be performed after upgrading to IBM Industry Solutions Workbench 4.0)

Step 1: This task has to be performed before upgrading to the new release.

Requirements:

  1. A machine to execute the task.

  2. The machine should have access to the cluster.

  3. A user with valid Keycloak user credentials.

Outcome: All the templates available in the designer are now available on the customer's local machine.

Steps:

  1. Copy the bash script below into a get-templates.bash file on the user's machine.

  2. Add the missing values for the configuration in the bash script:

    1. Add the values for the user credentials (Keycloak/designer username and password).

    2. Keycloak details: configured in the authentication binding for a project configuration management

      1. KEYCLOAK_BASE_URL= {{hostname}}

      2. CLIENT_ID= {{client.id}}

      3. CLIENT_SECRET= {{client.secret}}

      4. DESIGNER_REALM= {{realm}}

    3. Destination folder on the local machine where the templates can be saved.

    4. External route URL for the local marketplace controller service.

  3. Open a terminal in the directory where the bash script was copied and run the command bash get-templates.bash.

  4. The bash script will start downloading the templates one by one and store them in the directory specified as DESTINATION.

  5. The bash script will print logs in the terminal and also any error message (in case of an error).

  6. Once all the templates are downloaded, the script will stop.

#!/bin/bash

#Configuration for keycloak client
# https://auth.apps.openshift.example.cloud
KEYCLOAK_BASE_URL=  
CLIENT_ID=
CLIENT_SECRET=
USER_NAME=
PASSWORD=
DESIGNER_REALM=

# Provide template download location
DESTINATION="./templates"


# provide the url for the local marketplace controller api https://local-marketplace-controller.apps.openshift.example.cloud
LOCAL_MARKETPLACE_CONTROLLER_HOST={{LOCAL_MARKETPLACE_CONTROLLER_URL}}


# Get access token
TOKEN_STATUS_CODE=$(curl -X POST -o token.json -w "%{http_code}" -d "client_id=$CLIENT_ID" -d "client_secret=$CLIENT_SECRET" -d "grant_type=password" -d "username=$USER_NAME" -d "password=$PASSWORD" "$KEYCLOAK_BASE_URL/auth/realms/$DESIGNER_REALM/protocol/openid-connect/token")

TOKEN_RESPONSE=`cat ./token.json`
if [ $TOKEN_STATUS_CODE == "200" ]; then
  TOKEN=$(echo $TOKEN_RESPONSE | jq -r '.access_token')
  # # Get list of all templates available in marketplace
  RESPONSE_CODE=$(curl -X GET  -w "%{http_code}"  -H "Accept: application/json" -H "Authorization: Bearer $TOKEN" "$LOCAL_MARKETPLACE_CONTROLLER_HOST/api/v1/templates" -o ./templates.json)

  # read templates json
  TEMPLATES=`cat ./templates.json`
else
    echo "Failed to get token, please check keycloak config"
    echo $TOKEN_RESPONSE
fi

# check if the api call was successful, otherwise log the error
if [[ $RESPONSE_CODE != "200" ]]; then
      echo "invalid response from local marketplace api"
      echo $TEMPLATES
      rm ./templates.json
      exit 1
fi


#iterate through the Templates array and read template id of all the templates
templateids=$(echo "$TEMPLATES" | jq -c -r '.[].id')


# download each template into a zip file using the local marketplace controller api
for templateid in ${templateids}
 do
  echo $templateid
  # zip file path
  FILE=$DESTINATION/$templateid-export.zip
  # check if template zip file already exists in the destination folder
  if [[ ! -f "$FILE" ]]
  then
    echo "$FILE does not exist on your filesystem."
    echo "downloading template zip"
  # download the template
  STATUS=$(curl -X GET -w "%{http_code}" -H "accept: application/octet-stream" -H "Authorization: Bearer $TOKEN"  $LOCAL_MARKETPLACE_CONTROLLER_HOST/api/v1/templates/$templateid --output $FILE)
    # get a new access_token if the token has expired 
    if [ $STATUS == "401" ]; then
      STATUS_CODE=$(curl -X POST -o token.json -w "%{http_code}" -d "client_id=$CLIENT_ID" -d "client_secret=$CLIENT_SECRET" -d "grant_type=password" -d "username=$USER_NAME" -d "password=$PASSWORD" "$KEYCLOAK_BASE_URL/auth/realms/$DESIGNER_REALM/protocol/openid-connect/token")
      TOKEN_RESPONSE=`cat ./token.json`
      TOKEN=$(echo $TOKEN_RESPONSE | jq -r '.access_token')      
      # retry for last template
      STATUS=$(curl -X GET -w "%{http_code}" -H "accept: application/octet-stream" -H "Authorization: Bearer $TOKEN"  $LOCAL_MARKETPLACE_CONTROLLER_HOST/api/v1/templates/$templateid --output $FILE)
    # print error and remove error file if response is not 200
    elif [ $STATUS != "200" ]; then
      echo "Failed to download template"
      cat $FILE
      rm $FILE
    fi
  fi
  sleep 2s
done
#  remove token.json from file system
rm ./token.json

Step 2: Transform the templates to get them ready for upload.

Requirements:

  1. A machine to execute the task.

  2. The machine should already have the downloaded templates.

Outcome: All the templates are transformed and available in a new directory on the customer's local machine.

Steps:

  1. Copy the bash script below into a transform-templates.bash file on the user's machine.

  2. Add the missing values for the configuration in the bash script:

    1. Directory path where the templates were saved in step 1.

    2. Destination directory path for the transformed templates.

  3. Open a terminal in the directory where the bash script was copied and run the command bash transform-templates.bash.

  4. The bash script will start transforming the templates and save the transformed templates to the transformed destination.

  5. Once all the templates are transformed, the script will stop.

#!/bin/bash

# directory where template zip files are stored
TEMPLATE_DIRECTORY=

# directory used to store the templates after transformation
TRANSFORMED_TEMPLATES=

# traverse through template directory and transform to update the preview.json for all templates
for entry in "$TEMPLATE_DIRECTORY"/*
do
  echo "$entry"
  TemplateZipName=$(echo "${entry##*/}")
  echo "$TemplateZipName"
  TemplateDirectoryName=$(echo $TemplateZipName| cut  -d'.' -f 1); 

  # unzip the export zip
  unzip -d ./$TemplateDirectoryName/ $entry
  SolutionZipPath=$(find ./$TemplateDirectoryName/* -name "*.zip")
  SolutionZipName=$(echo "${SolutionZipPath##*/}")
  SolutionDirectoryName=$(echo $SolutionZipName| cut  -d'.' -f 1); 

  echo $SolutionDirectoryName
  #  unzip the inner template zip
  unzip -d ./$TemplateDirectoryName/$SolutionDirectoryName $SolutionZipPath
  #  read preview json from root folder
  previewJson=$(find ./$TemplateDirectoryName/ -name "*-preview.json")
  # copy preview json from root folder to template data folder
  cp $previewJson ./$TemplateDirectoryName/$SolutionDirectoryName/preview.json
  rm $SolutionZipPath
  cd $TemplateDirectoryName
  cd $SolutionDirectoryName
  zip -r $SolutionDirectoryName *
  mv $SolutionZipName ../$SolutionZipName
  cd ..
  rm -rf ./$SolutionDirectoryName
  zip -r $TemplateDirectoryName *
  mv $TemplateZipName .$TRANSFORMED_TEMPLATES
  cd ..
  rm -rf ./$TemplateDirectoryName
done

Step 3: This task has to be performed after upgrading to the new release.

Requirements:

  1. A machine to execute the task.

  2. The machine should have access to the cluster.

  3. A user with valid Keycloak user credentials.

  4. The user should have access (developer or above) to all the service projects in the designer.

  5. The asset catalog is configured to publish assets, see Configuring Asset Catalogs and Assets Overview.

  6. The external route for Asset Manager needs to be enabled by updating the configuration in Extended configuration.

Outcome: All the templates are now available in the designer and in the configured asset catalog.

Steps:

  1. Copy the bash script below into an upload-templates.bash file on the user's machine.

  2. Add the missing values for the configuration in the bash script:

    1. Add the values for the user credentials (Keycloak/designer username and password).

    2. Keycloak details: configured in the authentication binding for a project configuration management

      1. KEYCLOAK_BASE_URL= {{hostname}}

      2. CLIENT_ID= {{client.id}}

      3. CLIENT_SECRET= {{client.secret}}

      4. DESIGNER_REALM= {{realm}}

    3. Catalog alias of the asset catalog where all assets need to be published.

    4. Directory path where the transformed templates are available.

    5. External route URL for the K5 Asset Manager Service.

  3. Open a terminal in the directory where the bash script was copied and run the command bash upload-templates.bash.

  4. The bash script will start uploading the templates one by one.

  5. The bash script will print logs in the terminal and also any error message (in case of an error).

  6. Once all the templates are uploaded, the script will stop.

  7. Users can see all the uploaded assets by browsing the asset catalog in the designer.

Once a template is uploaded as a new asset, it is removed from the local directory, so make a copy of the templates in case they are needed in the future.
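
For example, a copy can be created before running the upload script (assuming TEMPLATE_DIRECTORY points to the folder with the transformed templates; the backup path is only an example):

cp -r "$TEMPLATE_DIRECTORY" ./templates-backup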

#!/bin/bash

#Configuration for keycloak client
# https://auth.apps.openshift.example.cloud
KEYCLOAK_BASE_URL=  
CLIENT_ID=
CLIENT_SECRET=
USER_NAME=
PASSWORD=
DESIGNER_REALM=

# URL for asset manager service https://k5-asset-manager.apps.openshift.example.cloud
ASSET_MANAGER_HOST={{ASSET_MANAGER_URL}}

# Catalog alias where new assets should be created
ASSET_CATALOG={{CATALOG_ALIAS}}

# Directory where transformed template zip files are stored
TEMPLATE_DIRECTORY=

# Get access token
STATUS_CODE=$(curl -X POST -o token.json -w "%{http_code}" -d "client_id=$CLIENT_ID" -d "client_secret=$CLIENT_SECRET" -d "grant_type=password" -d "username=$USER_NAME" -d "password=$PASSWORD" "$KEYCLOAK_BASE_URL/auth/realms/$DESIGNER_REALM/protocol/openid-connect/token")
TOKEN_RESPONSE=`cat ./token.json`
if [ $STATUS_CODE == "200" ]; then
  TOKEN=$(echo $TOKEN_RESPONSE | jq -r '.access_token')
else
    echo "Failed to get token, please check keycloak config"
    echo $TOKEN_RESPONSE
fi


# traverse through template directory and call /convertAsset api for each template
for entry in "$TEMPLATE_DIRECTORY"/*
do
  echo "$entry"

# upload template to convert it to new asset
STATUS=$(curl -X POST -s -w "%{http_code}" -H "Accept: application/json" -H "Content-Type: multipart/form-data" -H "Authorization: Bearer $TOKEN" -F "file=@$entry;type=application/zip" "$ASSET_MANAGER_HOST/api/v1/$ASSET_CATALOG/assets/importTemplate" --output ./output.json)
echo $STATUS
#  check if successfully uploaded
if [ $STATUS == "201" ]; then
    # remove zip file from directory
    rm $entry
# get a new access_token if the token has expired 
elif [ $STATUS == "401" ]; then
      STATUS_CODE=$(curl -X POST -o token.json -w "%{http_code}" -d "client_id=$CLIENT_ID" -d "client_secret=$CLIENT_SECRET" -d "grant_type=password" -d "username=$USER_NAME" -d "password=$PASSWORD" "$KEYCLOAK_BASE_URL/auth/realms/$DESIGNER_REALM/protocol/openid-connect/token")
      TOKEN_RESPONSE=`cat ./token.json`
      if [ $STATUS_CODE == "200" ]; then
        TOKEN=$(echo $TOKEN_RESPONSE | jq -r '.access_token')
        # retry for last template
        STATUS=$(curl -X POST -s -w "%{http_code}" -H "Accept: application/json" -H "Content-Type: multipart/form-data" -H "Authorization: Bearer $TOKEN" -F "file=@$entry;type=application/zip" "$ASSET_MANAGER_HOST/api/v1/$ASSET_CATALOG/assets/importTemplate" --output ./output.json)
      else
          echo "Failed to get token, please check keycloak config"
          echo $TOKEN_RESPONSE
      fi
else
    echo "Failed to convert template"
    # print error response body
    cat output.json
    # remove output file
    rm output.json
fi
sleep 2s
done

Deprecated Local Marketplace User Roles

As the Local Marketplace is no longer used and has been replaced by the Asset Catalog, the following two user roles are deprecated and can be deleted from the Keycloak installation after the migration of projects from the local marketplace to the new asset catalog is done:

Name | Description
mp_user (deprecated) | Can publish projects to the local marketplace.
mp_manager (deprecated) | Can delete project templates from the local marketplace.

For the new Asset Catalog feature the privileges are delegated to the Git provider in use. Please see also Configure users - Roles overview.

Upgrade Solution CLI

  • Before upgrading to IBM Industry Solutions Workbench 4.0:

Upgrade the Solution CLI to the latest version (6.1.5).

fss upgrade-cli

NOTE Alternatively, you can uninstall the CLI and download and install the new version again from the designer after upgrading to IBM Industry Solutions Workbench 4.0.

npm uninstall -g @knowis/designer-cli
  • After upgrading to IBM Industry Solutions Workbench 4.0:

The binary name of the Solution CLI has been changed from 'fss' to 'k5'. You therefore now start the commands of the CLI with:

k5 [command]

In addition, the following ENV variable names have changed:

Old Name | New Name | Description
FSS_CLI_CONFIG_DIR | K5_CLI_CONFIG_DIR | Alternative path to local configuration
FSS_CLI_LOGIN_USERNAME | K5_CLI_LOGIN_USERNAME | Username for login
FSS_CLI_LOGIN_PASSWORD | K5_CLI_LOGIN_PASSWORD | Password for login
FSS_CLI_LOGIN_ENVOY_USERNAME | K5_CLI_LOGIN_ENVOY_USERNAME | Username for login to Solution Envoy
FSS_CLI_LOGIN_ENVOY_PASSWORD | K5_CLI_LOGIN_ENVOY_PASSWORD | Password for login to Solution Envoy
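
For example, scripts or CI configurations that exported the old variables need to switch to the new names (the values below are placeholders):

export K5_CLI_CONFIG_DIR="/path/to/k5-cli-config"
export K5_CLI_LOGIN_USERNAME="my-user"
export K5_CLI_LOGIN_PASSWORD="my-password"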

The directory for the local configuration files has changed from '~/.knowis/fss-cli' to '~/.k5/k5-cli'. At the first start of version 6.1.5 of the CLI, the files are copied to the new directory. However, the files can also be copied by hand at any time. After the automatic migration, the existing directory ('~/.knowis/fss-cli') remains; we recommend deleting it by hand.
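
A manual copy could look like this (a sketch, assuming the default locations in the user's home directory):

mkdir -p ~/.k5/k5-cli
cp -r ~/.knowis/fss-cli/. ~/.k5/k5-cli/
# after verifying the copy, the old directory can be removed
rm -rf ~/.knowis/fss-cli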

Documentation

  • Before upgrading to IBM Industry Solutions Workbench 4.0:

The command word 'fsw' was used to visualize the modeled content.

  • After upgrading to IBM Industry Solutions Workbench 4.0:

The command word 'k5' is now used to visualize the modeled contents.

When loading the documents, the old keyword is automatically replaced. If the replacement of the old keyword does not work, this can be done manually in the editor. After that, the documentation should be saved.

User Settings

The User Settings menu item is now under Account and no longer under Settings.