Debugging Implementations

Getting started

The .test.ts files are used to locally execute services, commands, external entities, operations and agents. Through the runner you can provide the input values for the execution of a service, command or operation, and access its output after the script has been executed.

For all services, commands, external entities and operations the debugging process works in a similar way. The runner component gives you access to the input and the output object. The input object is used to provide the values of the input properties of the service or command.

For operations, the input properties belong to the request parameters and the request body. The output object, in turn, is used to read the values of the output (or, for operations, response) properties after await runner.run() has completed successfully. The await runner.run() line is what executes the service, command or operation. The run() function either takes no input at all or, in the case of instance commands, takes the instance id as input.
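The input/output pattern described above can be sketched with a minimal stand-in runner. The ExampleRunner class below is purely illustrative and not part of the generated SDK; real projects use the generated runner classes (e.g. cptest_Command1Runner) whose input and output properties match the modelled service or command.

```typescript
// Hypothetical stand-in that mimics the generated runner pattern:
// set input properties, await run(), then read the output object.
class ExampleRunner {
  input: { property1?: string } = {};
  output: { result?: string } = {};

  async run(instanceId?: string): Promise<void> {
    // A real runner would execute the service/command implementation here.
    this.output.result =
      `${this.input.property1} processed` +
      (instanceId ? ` for ${instanceId}` : '');
  }
}

async function main() {
  const runner = new ExampleRunner();
  runner.input.property1 = 'value1'; // provide input before run()
  await runner.run();                // execute (no id for services/factory commands)
  console.log(runner.output.result); // read output after run()
}
main();
```

For an instance command, the only difference is that the instance id is passed to run(), e.g. await runner.run(createdEntity._id).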

In order to debug the scripts, a default launch configuration for VSCode is provided. The steps that must be followed are:

  1. Open a *.test.ts file (it won't work with implementation files).
  2. Set some breakpoints in the implementation files.
  3. Navigate to the debug section on the left menu bar.
  4. Launch the "Current Test" debug configuration in VSCode.
  5. Trace variables in the sidebar while stepping through the code.

Test Environment

With TestEnvironment you can create new instances of entities and then run one or more test scenarios on the created instances. After the tests have been executed, the created instances can be deleted at once using the cleanUp() method.

Example:

describe('solution:Command1', () => {
  // We create a new instance of TestEnvironment so that it is
  // accessible in all test blocks.
  // Each instance can handle its own test data.

  const testEnvironment = new TestEnvironment();

  // we define the created entity so that it is accessible in the test blocks
  let createdEntity;
  before(async () =>
  {
    createdEntity = testEnvironment.factory.entity.cptest.RootEntity1();
    // We set values to each property
    createdEntity.property1 = "value1";
    createdEntity.property2 = "value2";

    // We create the entity in the database
    await createdEntity.persist();
  });

  // This block will define what will happen after all tests are executed.
  after(async () =>
  {
    // Delete everything we've created
    // through the testEnvironment in this test session
    await testEnvironment.cleanUp();
  });

  // 1: Create and delete the entity in the actual test.
  // In this case you do not need the before() and after() blocks
  it('works on a RootEntity2', async () =>
  {

    // Initialize the entity
    const rootEntity2 = testEnvironment.factory.entity.cptest.RootEntity2();

    // Set values for the properties of the entity
    rootEntity2.property1 = "value1";

    // Create the entity in the database
    await rootEntity2.persist();

    const runner = new cptest_Command1Runner();
    // Run the test on the instance we created before
    await runner.run(rootEntity2._id);

    console.warn('No tests available');
    expect(true).to.equal(true);

    // Delete the instance created before
    await rootEntity2.delete();
  });

  // 1: Use the find() function to search for a specific entity that was created in before() block
  // do not need to delete manually, after() block will do it for you
  it('works on a rootEntity1', async () =>
  {

    // The before() block will run automatically before this test, provided it was implemented

    // Find an instance that already exists
    const foundEntity = await testEnvironment.repo.cptest.RootEntity1.find(true, 'myFilter');

    const runner = new cptest_Command1Runner();
    // Run the test on the instance that already exists
    await runner.run(foundEntity._id);

    console.warn('No tests available'); expect(true).to.equal(true);

    // The after() block will run automatically
  });
});

Debugging factory commands

Debug a factory command by following the structure below:


it('works on an existing rootEntity1 that we find', async () => { 

    // The before() block will run automatically before this test, provided it was implemented

    const runner = new cptest_FactoryCommand1Runner(); 

    // Give input to factory command
    runner.input = testEnvironment.factory.entity.ns.FactoryCommandIdentifier_Input();
    runner.input.property1 = "value1";
    runner.input.property2 = "value2";
    
    // This will return the created instance of the root entity
    const factory_output = await runner.run(); 

    console.warn('No tests available'); 
    expect(true).to.equal(true);
});

Debugging Instance commands

Debug an instance command by following the structure below:


it('works on an existing rootEntity1 that we find', async () => { 

    // The before() block will run automatically before this test, provided it was implemented

    const runner = new cptest_Command1Runner();

    // Give input to the instance command
    runner.input = testEnvironment.factory.entity.ns.InstanceCommandIdentifier_Input();
    runner.input.property1 = "value1";
    runner.input.property2 = "value2";

 
    // Use the Id of the created entity
    // This will return the modified instance of the root entity
    const instance_output =  await runner.run(createdEntity._id); 

    console.warn('No tests available'); 
    expect(true).to.equal(true); 
    
    // Access the output properties of the modified instance
    const id = instance_output._id;
    const prop1 = instance_output.prop1;
    const prop2 = instance_output.prop2;
});

Debugging services

Debug a service by following the structure below:


it('works on an existing rootEntity1 that we find', async () => { 

     // The before() block will run automatically before this test, provided it was implemented

     const runner = new cptest_Service1Runner(); 

     // Give input to the service
     runner.input = testEnvironment.factory.entity.ns.Service1Identifier_Input();
     runner.input.property1 = "value1";
     runner.input.property2 = "value2";
     
     // This returns the output entity 
     const service_output = await runner.run(); 

     console.warn('No tests available'); 
     expect(true).to.equal(true);
          
     // Get the output of the service and store it in local variables
     const prop1 = service_output.prop1;
     const prop2 = service_output.prop2;
});

Debugging external entities

Debug an external entity by following the structure below:


describe('ns:ExternalEntityId', () => {

  const testEnvironment = new TestEnvironment();
  before(async () => {
    // This block will run automatically before all tests.
    // Alternatively, use beforeEach() to define what should automatically happen before each test.
    // This is an optional block.
  });
  after(async () => {
    // This block will run automatically after all tests.
    // Alternatively, use afterEach() to define what should automatically happen after each test.
    // This is an optional block.

    // Recommended: remove all instances that were created
    // await testEnvironment.cleanUp();
  });

  describe('create', () => {
    it('works', async () => {
     // const runner = new externalEntityRunners.ns_ExternalEntityIdConstructorRunner();
     // await runner.run();
      console.warn('No tests available');
      expect(true).to.equal(true);
    });
  });

  describe('load', () => {
    it('works', async () => {
      // const runner = new externalEntityRunners.ns_ExternalEntityIdLoaderRunner();
      // await runner.run();
      console.warn('No tests available');
      expect(true).to.equal(true);
    });
  });

  describe('validate', () => {
    it('works', async () => {
       // const runner = new externalEntityRunners.ns_ExternalEntityIdValidatorRunner();
       // await runner.run(false);
      console.warn('No tests available');
      expect(true).to.equal(true);
    });
  });
});
            

Debugging operations

Debug an operation by following the structure below:


it('works on an existing rootEntity1 that we find', async () => { 

    // The before() block will run automatically before this test, provided it was implemented

    const runner = new apitest_Operation1Runner();

    // Initialize the path parameters of the operation
    runner.request.path.parameter1 = "value1";

    // Initialize the query parameters of the operation
    runner.request.query.parameter1 = "value2";

    // Initialize the request body and its properties,
    // if the body is a complex type
    runner.request.body = testEnvironment.factory.schema.nspacrnm.SchemaIdentifier();
    runner.request.body.property1 = "value1";
    runner.request.body.property2 = "value2";

    await runner.run(); 

    console.warn('No tests available'); 
    expect(true).to.equal(true);
          
    // Get the output of the operation and store it in a local variable
    const operation_response = runner.response; 
});
       

Debugging error middleware

Debug an error middleware by following the structure below:


describe('apitest_ErrorMiddleware', () => {
  const testEnvironment = new TestEnvironment();
  before(async () => {
    // This block will run automatically before all tests.
    // Alternatively, use beforeEach() to define what should automatically happen before each test.
    // This is an optional block.
  });
  after(async () => {
    // This block will run automatically after all tests.
    // Alternatively, use afterEach() to define what should automatically happen after each test.
    // This is an optional block.

    // Recommended: remove all instances that were created
    // await testEnvironment.cleanUp();
  });
  it('works', async () => {

    // Create runner instance
    const runner = new errorMiddlewareRunners.apitest_ErrorMiddlewareRunner();

    // Assign an error object that will be passed to error middleware
    runner.error = new cptest_SomeCustomError();

    // Execute error middleware
    await runner.run();

    // Assert against the response returned from the error middleware
    expect(runner.response.status).to.equal(500);
    expect(runner.response.body.code).to.equal('E19001');
    expect(runner.response.body.message).to.equal('An error occurred');
  });

});

       

How to change the default log level for a deployed Low-Code-Solution

Change deploy configuration

Adjust either the solution default configuration or a solution specific configuration in the configuration management with the following value:

configmap:
  extraConfiguration:
    de.knowis.cp.ds.action.loglevel.defaultLevel: INFO

The log level can be changed here as needed to INFO, DEBUG, TRACE, ERROR or WARN.

How to configure different log levels for a local Low-Code-Solution

Prerequisites

1. Create a JSON file named log-config.json in your solution root directory.

2. Add an entry in .gitignore file for log-config.json so it is not pushed to your repository.

3. Adjust your VSCode launch configuration to allow output display from std:
   - Open .vscode/launch.json
   - In the configurations array add "outputCapture": "std"
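Assuming the default launch configuration shipped with the solution, the adjusted entry could look like the sketch below. The "type" and "request" fields stand in for whatever the provided configuration already contains; the only required addition is the "outputCapture" line.

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Current Test",
      "type": "node",
      "request": "launch",
      "outputCapture": "std"
    }
  ]
}
```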

Supported log levels

The supported log levels are: error, warn, info, debug and trace.

Examples

Use the examples below to configure different log levels:
  • Configure log levels using module names
    • Configure solution-framework log level:
      The example below configures the solution-framework to be at error log level. This is achieved by placing an entry in the log-config.json file with the key <solution-framework> and the desired log level, in this example error:
      {
        "solution-framework" : "error"
      }
    • Configure solution implementation files:
      The example below configures all src-impl files within the solution (including test files) to be at debug log level. This is achieved by placing an entry in the log-config.json file with the key <your-solution-acronym> and the desired log level, in this example debug:
      {
        "ORDERS" : "debug"
      }
  • Configure using specific paths:

    In the example below:

    1. Every file under the path "/src-impl/api/apitest/operations" in your solution will be configured to log level debug.

    2. Test file "/src-impl/api/apitest/operations/addDate.test" will be configured to log level warn.

    3. File "/src-impl/api/apitest/operations/addDate" will be configured to log level trace.

    4. All sdk files under "/sdk/v1" will be configured to log level error.

    5. All sdk files under "/sdk/v1/handler" will be configured to log level trace.

    {
     "/src-impl/api/apitest/operations/*" : "debug",
     "/src-impl/api/apitest/operations/addDate.test" : "warn",
     "/src-impl/api/apitest/operations/addDate" : "trace",
     "/sdk/v1/*" : "error",
     "/sdk/v1/handler/*" : "trace"
    }
    Note:

    1. When using paths, a path always starts with a forward slash '/' and does not include the file extension (.js or .ts).

    2. If no log-config.json is available, the default log level is info.

    3. A specific file log level will always take precedence over its parent folder log level, same with sub-folders and their parent folders.
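    As a minimal illustration of the precedence rule, in the following log-config.json the specific file entry wins over the entry for its parent folder:

    ```json
    {
     "/src-impl/api/apitest/operations/*": "debug",
     "/src-impl/api/apitest/operations/addDate": "trace"
    }
    ```

    Here addDate logs at trace even though every other file in its folder logs at debug.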

Attention: In order for debugging to work, the implementation folder must be opened directly in VSCode.
Attention: If trace log level is configured, the logs might contain sensitive user information such as ID Token.