Testing and debugging
Introduction
The auto-generated *.test.ts
files are used for local debugging and testing of services, commands, external entities, operations and agents
against the cluster (integration tests). Through the runner you provide the input values for the execution of a
service, command or operation and access its output after the script has run. The debugging process works the same
way for all services, commands, external entities and operations: from the runner component you can access the
input and output objects.
The input object is used to provide the values of the input properties of the service or command; for operations,
the input properties belong to the request parameters and the request body. The output object, in contrast, is used
to read the values of the output properties (or the response, for operations) after await runner.run() has been
executed successfully. The line await runner.run() is what executes the service, command or operation. The run()
function either takes no input at all or, in the case of instance commands, takes the instance id as input.
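As a rough sketch, the basic pattern inside a generated *.test.ts file looks like the lines below; the runner class and property names are taken from the examples further down and stand in for the names generated for your own project:
// Placeholder names: use the runner generated for your own service
const runner = new cptest_Service1Runner();
// Provide values for the input properties
runner.input = testEnvironment.factory.entity.ns.Service1Identifier_Input();
runner.input.property1 = 'value1';
// Execute the service; run() resolves once the execution has finished
const service_output = await runner.run();
// Read the output properties after the execution
service_output.property2;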
If you want the pipeline to execute unit tests ("Enable unit test execution" set to true), you need to use the
*.unit.test.ts files. Only files named with this pattern will be recognized. By default, "Enable unit test execution"
is set to false, since no unit tests are generated by the system.
Preparation
Before you can run and debug the *.test.ts files you need to set up a connection first. See the instructions below.
Prepare the CLI
You can download the required *-cli-config.json
file from the Solution Envoy of the runtime (stage) you want to connect with. To do so:
open Solution Designer
go to CI/CD
in the Pipeline Configurations table search for a "deploy" pipeline for your desired deployment target
click on the link provided in the Solution Envoy column
inside Solution Envoy click on Infrastructure
click on Download in the Solution CLI Setup window to download the config file
This file is prefixed with the name of the stage and ends with -cli-config.json.
Then run the following commands:
fss setup-envoy -f <path/to/your/{name of your stage}-cli-config.json>
fss prepare-debug
The CLI will prompt you to authenticate yourself.
Setup of local bindings
Create a local-bindings.json
file and place it in your project's root directory. Make sure it's added to
the .gitignore
file, so it will not get uploaded to your git repository.
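For reference, the corresponding .gitignore entry is just the file name:
# keep local debugging configuration out of version control
local-bindings.json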
Depending on your project, you might have to set up event topic bindings (if it uses events) or API bindings (for an
API dependency that you modelled within an integration namespace). If your project uses both, you have to configure
both in the local-bindings.json file.
Topic bindings
For services, commands or agents that use events, you need to provide a local configuration for the topic binding(s).
Install and configure your local Kafka broker (see https://kafka.apache.org/quickstart).
For each event topic in use, add a topic binding configuration to the local-bindings.json file; do the same for each
Kafka binding in use, as shown below.
Example of a local-bindings.json file:
{
  "topicBindings": {
    "<Topic Binding Name>": {
      "topicName": "<Kafka Topic Name>",
      "kafkaBinding": "<Kafka Binding Name>"
    },
    "<Topic Binding Name>": {
      "topicName": "<Kafka Topic Name>",
      "kafkaBinding": "<Kafka Binding Name>"
    }
  },
  "kafkaBindings": {
    "<Kafka Binding Name>": {
      "kafka_brokers_sasl": [
        "<Kafka Broker sasl>"
      ],
      "user": "<Username>",
      "password": "<Password>"
    },
    "<Kafka Binding Name>": {
      "kafka_brokers_sasl": [
        "<Kafka Broker sasl>"
      ],
      "user": "<Username>",
      "password": "<Password>"
    }
  }
}
Explanation:
To find out which topic bindings this project uses, open the project in Solution Designer. You can find the relevant information on the topic bindings in Solution Hub > Topic Bindings.
<Topic Binding Name>: the name of the event topic binding in Solution Designer
topicName: the name of the topic on the Kafka cluster
kafkaBinding: must match one of the Kafka bindings listed in the "kafkaBindings" configuration
Both topicBindings and kafkaBindings are mandatory keys for local debugging.
API bindings
In case you created an API binding for an API Dependency
of your project you need to add a configuration for API bindings to the local-bindings.json
file.
Example of a local-bindings.json file:
{
  "apiBindings": {
    "<API Binding Name>": {
      "url": "example1.com",
      "k5_propagate_security_token": true,
      "ca_cert": "<PEM-formatted certificate string>"
    },
    "<API Binding Name>": {
      "url": "example2.com",
      "k5_propagate_security_token": true
    },
    "<API Binding Name>": {
      "url": "example3.com",
      "k5_propagate_security_token": true,
      "custom_key": "custom_value"
    }
  }
}
Explanation:
<API Binding Name>: the name of the API binding (see API bindings on how to get this information)
url: the URL of the external API that you want to call
k5_propagate_security_token: a boolean that determines if the JWT will be forwarded automatically
ca_cert: (optional) the certificate as a PEM-formatted string
custom_key: (optional) this can be any key/value pair that you added to this API binding (see API bindings on how to add custom keys)
Setup your IDE
The steps that must be followed are:
Open a *.test.ts file (it won't work with implementation files)
Set some breakpoints in the implementation files
Navigate to the debug section on the left menu bar
Launch the debug in VS Code on "Current Test"
On the sidebar it is possible to trace variables
Test environment
With TestEnvironment one is able to create new instances of entities and then perform one or more test
scenarios on the created instances. After the tests have been executed, the created instances can be deleted at once
using the cleanUp()
method.
Example:
describe('solution:Command1', () => {
  // We define the testEnvironment so that it is accessible in the test blocks.
  // We need to create a new instance of our TestEnvironment;
  // each instance of it can handle its own test data.
  const testEnvironment = new TestEnvironment();
  // We define the created entity so that it is accessible in the test blocks
  let createdEntity;

  before(async () => {
    createdEntity = testEnvironment.factory.entity.cptest.RootEntity1();
    // We set values for each property
    createdEntity.property1 = "value1";
    createdEntity.property2 = "value2";
    // We create the entity in the database
    await createdEntity.persist();
  });

  // This block defines what happens after all tests have been executed.
  after(async () => {
    // Delete everything we've created
    // through the testEnvironment in this test session
    await testEnvironment.cleanUp();
  });

  // 1: Create and delete the entity in the actual test.
  // In this case you do not need the before() and after() blocks.
  it('works on a RootEntity2', async () => {
    // Initialize the entity
    const rootEntity2 = testEnvironment.factory.entity.cptest.RootEntity2();
    // Set values for the properties of the entity
    rootEntity2.property1 = "value1";
    // Create the entity in the database
    await rootEntity2.persist();
    const runner = new cptest_Command1Runner();
    // Run the test on the instance we created before
    await runner.run(rootEntity2._id);
    console.warn('No tests available');
    expect(true).to.equal(true);
    // Delete the instance created before
    await rootEntity2.delete();
  });

  // 2: Use the find() function to search for a specific entity that was created in the before() block.
  // There is no need to delete it manually, the after() block will do it for you.
  it('works on a rootEntity1', async () => {
    // The before() block will run automatically before this test, provided it was implemented
    // Find an instance that already exists
    const foundEntity = await testEnvironment.repo.cptest.RootEntity1.find(true, 'myFilter');
    const runner = new cptest_Command1Runner();
    // Run the test on the instance that already exists
    await runner.run(foundEntity._id);
    console.warn('No tests available');
    expect(true).to.equal(true);
    // The after() block will run automatically
  });
});
Debug factory commands
Debug a Factory Command by following the structure below:
it('works on an existing rootEntity1 that we find', async () => {
  // The before() block will run automatically before this test, provided it was implemented
  const runner = new cptest_FactoryCommand1Runner();
  // Give input to the factory command
  runner.input = testEnvironment.factory.entity.ns.FactoryCommandIdentifier_Input();
  runner.input.property1 = value1;
  runner.input.property2 = value2;
  // This will return the created instance of the root entity
  const factory_output = await runner.run();
  console.warn('No tests available');
  expect(true).to.equal(true);
});
Debug instance commands
Debug an Instance Command by following the structure below:
it('works on an existing rootEntity1 that we find', async () => {
  // Create the runner for the instance command
  // (the runner class name below is a placeholder for your generated instance command runner)
  const runner = new ns_InstanceCommand1Runner();
  // Give input to the instance command
  runner.input = testEnvironment.factory.entity.ns.InstanceCommandIdentifier_Input();
  runner.input.property1 = value1;
  runner.input.property2 = value2;
  // Use the id of the created entity.
  // This will return the modified instance of the root entity
  const instance_output = await runner.run(createdEntity._id);
  console.warn('No tests available');
  expect(true).to.equal(true);
  // Access the properties of the returned instance
  instance_output._id;
  instance_output.prop1;
  instance_output.prop2;
});
Debug services
Debug a Service by following the structure below:
it('works on an existing rootEntity1 that we find', async () => {
  // The before() block will run automatically before this test, provided it was implemented
  const runner = new cptest_Service1Runner();
  // Give input to the service
  runner.input = testEnvironment.factory.entity.ns.Service1Identifier_Input();
  runner.input.property1 = value1;
  runner.input.property2 = value2;
  // This returns the output entity
  const service_output = await runner.run();
  console.warn('No tests available');
  expect(true).to.equal(true);
  // Read the output of the service from the returned object
  service_output.prop1;
  service_output.prop2;
});
Debug external entities
Debug an External Entity by following the structure below:
describe('ns:ExternalEntityId', () => {
  const testEnvironment = new TestEnvironment();
  before(async () => {
    // This block will run automatically before all tests.
    // Alternatively, use beforeEach() to define what should automatically happen before each test.
    // This is an optional block.
  });
  after(async () => {
    // This block will run automatically after all tests.
    // Alternatively, use afterEach() to define what should automatically happen after each test.
    // This is an optional block.
    // Recommended: remove all instances that were created
    // await testEnvironment.cleanUp();
  });
  describe('create', () => {
    it('works', async () => {
      // const runner = new externalEntityRunners.ns_ExternalEntityIdConstructorRunner();
      // await runner.run();
      console.warn('No tests available');
      expect(true).to.equal(true);
    });
  });
  describe('load', () => {
    it('works', async () => {
      // const runner = new externalEntityRunners.ns_ExternalEntityIdLoaderRunner();
      // await runner.run();
      console.warn('No tests available');
      expect(true).to.equal(true);
    });
  });
  describe('validate', () => {
    it('works', async () => {
      // const runner = new externalEntityRunners.ns_ExternalEntityIdValidatorRunner();
      // await runner.run(false);
      console.warn('No tests available');
      expect(true).to.equal(true);
    });
  });
});
Debug operations
Debug an Operation by following the structure below:
it('works on an existing rootEntity1 that we find', async () => {
  // The before() block will run automatically before this test, provided it was implemented
  // (the runner class name below is a placeholder for your generated operation runner)
  const runner = new apitest_Operation1Runner();
  // Initialize the path parameters of the operation
  runner.request.path.parameter1 = value1;
  // Initialize the query parameters of the operation
  runner.request.query.parameter1 = value2;
  // Initialize the request body and its properties
  // (use the schema factory if the body is a complex type)
  runner.request.body = testEnvironment.factory.schema.nspacrnm.SchemaIdentifier();
  runner.request.body.property1 = value1;
  runner.request.body.property2 = value2;
  await runner.run();
  console.warn('No tests available');
  expect(true).to.equal(true);
  // Get the response of the operation and store it in a local variable
  const operation_response = runner.response;
});
Debug error middleware
Debug an Error Middleware by following the structure below:
describe('apitest_ErrorMiddleware', () => {
  const testEnvironment = new TestEnvironment();
  before(async () => {
    // This block will run automatically before all tests.
    // Alternatively, use beforeEach() to define what should automatically happen before each test.
    // This is an optional block.
  });
  after(async () => {
    // This block will run automatically after all tests.
    // Alternatively, use afterEach() to define what should automatically happen after each test.
    // This is an optional block.
    // Recommended: remove all instances that were created
    // await testEnvironment.cleanUp();
  });
  it('works', async () => {
    // Create the runner instance
    const runner = new errorMiddlewareRunners.apitest_ErrorMiddlewareRunner();
    // Assign an error object that will be passed to the error middleware
    runner.error = new cptest_SomeCustomError();
    // Execute the error middleware
    await runner.run();
    // Have some expectations against the response returned from the error middleware
    expect(runner.response.status).to.equal(500);
    expect(runner.response.body.code).to.equal('E19001');
    expect(runner.response.body.message).to.equal('An error occurred');
  });
});
Change default log level
Either adjust the Project Configuration or the Solution-Specific Configuration in the Configuration Management with the following value:
configmap:
  extraConfiguration:
    de.knowis.cp.ds.action.loglevel.defaultLevel: INFO
The log level can be changed here as needed either to INFO, DEBUG, TRACE, ERROR or WARN.
Configure different log levels
Prerequisites
Create a JSON file named log-config.json in your project's root directory
Add an entry in the .gitignore file for log-config.json so it is not pushed to your repository
Adjust your VS Code launch configuration to allow output display from std: open .vscode/launch.json and add "outputCapture": "std" to the configurations
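As a sketch, a configuration in .vscode/launch.json might then look like the following; only the "outputCapture" line is the addition described above, the other fields stand in for whatever your generated launch configuration already contains:
{
  "configurations": [
    {
      // existing generated fields (name, type, program, ...) stay as they are
      "name": "Current Test",
      "type": "node",
      "request": "launch",
      "outputCapture": "std"
    }
  ]
}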
Supported log levels
The supported log levels are:
error
warn
info
debug
trace
Configure log levels using module names
Configure solution-framework log level
The example below configures the solution-framework to be at the error log level. This is achieved by placing an
entry in the log-config.json file with the key "solution-framework" and the desired log level, in this example error:
{
  "solution-framework": "error"
}
Configure project implementation files
The example below sets the log level for all files within the project's src-impl folder (including test files) to
debug. This is achieved by placing an entry in the log-config.json file with the key "your-solution-acronym" and the
desired log level, in this example debug:
{
  "ORDERS": "debug"
}
Configure using specific paths
In the example below:
Every file under the path "/src-impl/api/apitest/operations" in your project will be configured to log level debug
Test file "/src-impl/api/apitest/operations/addDate.test" will be configured to log level warn
File "/src-impl/api/apitest/operations/addDate" will be configured to log level trace
All sdk files under "/sdk/v1" will be configured to log level error
All sdk files under "/sdk/v1/handler" will be configured to log level trace
{
  "/src-impl/api/apitest/operations/*": "debug",
  "/src-impl/api/apitest/operations/addDate.test": "warn",
  "/src-impl/api/apitest/operations/addDate": "trace",
  "/sdk/v1/*": "error",
  "/sdk/v1/handler/*": "trace"
}
If no log-config.json is available, the default log level is info. A specific file log level will always take
precedence over its parent folder log level; the same applies to sub-folders and their parent folders.
If the trace log level is configured, the logs might contain sensitive user information such as the ID token.
Unit testing for TypeScript / JavaScript projects
The *.unit.test.ts
or *.unit.test.js
files are used to unit test the TypeScript / JavaScript code that is explicitly
added by the user.
If the user enables the Enable unit test execution flag while creating the pipeline, the pipeline runs the command
npm run test:unit, which looks for the test:unit script in package.json. For example, in a low-code project in
TypeScript, the package.json contains something like this:
{
  "scripts": {
    "test:unit": "./node_modules/.bin/mocha --require ./node_modules/ts-node/register/transpile-only -c --recursive test/*.unit.test.ts"
  }
}
If the Enable unit test execution flag in a pipeline is enabled, the pipeline will run the script test:unit and look
for *.unit.test.ts files inside the test folder. If there are no *.unit.test.ts files, the pipeline is expected to
throw the error shown below.
Error: No test files found: "test/*.unit.test.ts"
Enable the Enable unit test execution flag only when the user has added unit tests explicitly.
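For orientation, a minimal unit test file (for example test/example.unit.test.ts) could look like the sketch below; the add helper and the chai import are illustrative assumptions, not generated code:
import { expect } from 'chai';

// Hypothetical helper under test; in a real project this is code you added yourself
function add(a: number, b: number): number {
  return a + b;
}

describe('add (unit test)', () => {
  it('adds two numbers without talking to the cluster', () => {
    expect(add(2, 3)).to.equal(5);
  });
});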