Debugging Implementations
Getting started
The .test.ts files are used to locally execute services, commands, operations, and agents. Through the runner you provide the input values for the execution of a service, command, or operation, and you access their output after the script has been executed.
Debugging works in a similar way for all services, commands, and operations. The runner component exposes an input and an output object. The input object is used to provide the values of the input properties of the service or command; for operations, the input properties are the request parameters and the request body. The output object is used to read the values of the output properties (or the response, in the case of operations) after await runner.run() has completed successfully. The await runner.run() call is the line that executes the service, command, or operation. The run() function either takes no input at all or, in the case of instance commands, takes the instance id as input.
In order to debug the scripts, a default launch configuration for VSCode is provided. The steps that must be followed are:
- Open a *.test.ts file (it won't work with implementation files).
- Set some breakpoints in the implementation files.
- Navigate to the debug section on the left menu bar.
- Launch the debug configuration "Current Test" in VSCode.
- On the sidebar it is possible to track variables.
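The launch configuration itself ships with the workspace, so you normally do not need to write it. As a rough sketch only (every field value here is an assumption; check the .vscode/launch.json provided in your workspace for the real entry), a Mocha-based "Current Test" entry might look like:

```json
{
  "type": "node",
  "request": "launch",
  "name": "Current Test",
  "program": "${workspaceFolder}/node_modules/mocha/bin/_mocha",
  "args": ["--require", "ts-node/register", "${file}"],
  "internalConsoleOptions": "openOnSessionStart"
}
```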
Test environment
With TestEnvironment one is able to create new instances of entities and then perform one or more test scenarios on the created instances. After the tests have been executed, the created instances can be deleted all at once using the cleanUp method.
```ts
describe('solution:Command1', () => {
  // We create a new instance of TestEnvironment so that it is accessible
  // in the test blocks; each instance handles its own test data
  const testEnvironment = new TestEnvironment();
  // We define the created entity so that it is accessible in the test blocks
  let createdEntity;
  before(async () => {
    createdEntity = testEnvironment.factory.entity.cptest.RootEntity1();
    // We set values for each property
    createdEntity.property1 = "value1";
    createdEntity.property2 = "value2";
    // We create the entity in the database
    await createdEntity.persist();
  });
  // This block defines what will happen after all tests are executed
  after(async () => {
    // Delete everything we've created
    // through the testEnvironment in this test session
    await testEnvironment.cleanUp();
  });
  // Scenario 1: Create and delete the entity in the actual test.
  // In this case you do not need the before() and after() blocks
  it('works on a RootEntity2', async () => {
    // Initialize the entity
    const rootEntity2 = testEnvironment.factory.entity.cptest.RootEntity2();
    // Set values for the properties of the entity
    rootEntity2.property1 = "value1";
    // Create the entity in the database
    await rootEntity2.persist();
    const runner = new cptest_Command1Runner();
    // Run the test on the instance we created before
    await runner.run(rootEntity2._id);
    console.warn('No tests available');
    expect(true).to.equal(true);
    // Delete the instance created before
    await rootEntity2.delete();
  });
  // Scenario 2: Use the find() function to search for a specific entity
  // that was created in the before() block. There is no need to delete it
  // manually; the after() block will do it for you
  it('works on a rootEntity1', async () => {
    // The before() block runs automatically before this test, provided it was implemented
    // Find an instance that already exists
    const foundEntity = await testEnvironment.repo.cptest.RootEntity1.find(true, 'myFilter');
    const runner = new cptest_Command1Runner();
    // Run the test on the instance that already exists
    await runner.run(foundEntity._id);
    console.warn('No tests available');
    expect(true).to.equal(true);
    // The after() block will run automatically
  });
});
```
Debugging factory commands
```ts
it('works with FactoryCommand1', async () => {
  // The before() block runs automatically before this test, provided it was implemented
  const runner = new cptest_FactoryCommand1Runner();
  // Provide input to the factory command
  runner.input = runner.factory.entity.ns.FactoryCommandIdentifier_Input();
  runner.input.property1 = "value1";
  runner.input.property2 = "value2";
  // This will return the created instance of the root entity
  const factory_output = await runner.run();
  console.warn('No tests available');
  expect(true).to.equal(true);
});
```
Debugging Instance commands
```ts
it('works on an existing rootEntity1', async () => {
  const runner = new cptest_Command1Runner();
  // Provide input to the instance command
  runner.input = runner.factory.entity.ns.InstanceCommandIdentifier_Input();
  runner.input.property1 = "value1";
  runner.input.property2 = "value2";
  // Use the id of the created entity;
  // this will return the modified instance of the root entity
  const instance_output = await runner.run(createdEntity._id);
  console.warn('No tests available');
  expect(true).to.equal(true);
  // Access the output properties of the modified instance
  instance_output._id;
  instance_output.prop1;
  instance_output.prop2;
});
```
Debugging services
```ts
it('works with Service1', async () => {
  // The before() block runs automatically before this test, provided it was implemented
  const runner = new cptest_Service1Runner();
  // Provide input to the service
  runner.input = runner.factory.entity.ns.Service1Identifier_Input();
  runner.input.property1 = "value1";
  runner.input.property2 = "value2";
  // This returns the output entity
  const service_output = await runner.run();
  console.warn('No tests available');
  expect(true).to.equal(true);
  // Read the output properties of the service
  service_output.prop1;
  service_output.prop2;
});
```
Debugging operations
```ts
it('works with an operation of Service1', async () => {
  // The before() block runs automatically before this test, provided it was implemented
  const runner = new cptest_Service1Runner();
  // Initialize the path parameters of the operation
  runner.request.path.parameter1 = "value1";
  // Initialize the query parameters of the operation
  runner.request.query.parameter1 = "value2";
  // Initialize the request body and its properties
  // (if the body is a complex schema)
  runner.request.body = runner.factory.schema.nspacrnm.SchemaIdentifier();
  runner.request.body.property1 = "value1";
  runner.request.body.property2 = "value2";
  await runner.run();
  console.warn('No tests available');
  expect(true).to.equal(true);
  // Read the response of the operation and store it in a local variable
  const operation_response = runner.response;
});
```
How to change the default log level for a DDD solution
Change deploy configuration
```yaml
configmap:
  extraConfiguration:
    de.knowis.cp.ds.action.loglevel.defaultLevel: INFO
```
The log level can be changed here as needed to INFO, DEBUG, TRACE, ERROR, or WARN.
How to retrieve logs on OpenShift from a solution
In general, all logs of a solution are printed out to the console of each pod.
Retrieve logs via CLI
Identify the pod of the solution and show its logs:
```sh
oc get pods -n=runtime-dev
oc logs <pod>
```
Available options:
- To get the logs of the function sidecar, where operations are run: `oc logs <pod> -c function-sidecar`
- To get the logs of the solution itself: `oc logs <pod> -c solution`
Retrieve logs via log aggregation on OpenShift (EFK)
If log aggregation is available on OpenShift:
- Open the OpenShift Management Platform
- Open Monitoring
- Open Logging
Helpful links
https://docs.openshift.com/container-platform/3.11/install_config/aggregate_logging.html