
Test Scenarios

KATE supports running tests in the backend, focused on Query, ListCube, Drill Down, Transformation, Table, and Report testing. Unlike the Test Scenarios in the frontend, the backend Test Scenarios are predefined and contain Test Run IDs.

...


Each Test Scenario focuses on a different aspect of testing. Currently, seven Test Scenarios are defined to test Queries, DB tables, Reports, DTP loads/Transformations, and InfoProviders using the ListCube functionality.

Test Run IDs

The Run ID is one instance of a Test Scenario and is executed step by step, from the top down.
To add a new Test Run ID, click on the icon 'Create new Run' and fill in the name and description.

...


The Test Scenario specifies the type of the created run (i.e. whether it is for a Query testing scenario – EQS, ListCube testing scenario – EQS_LIST, Drill Down testing scenario – EQS_DRILL, DTP testing scenario – EQS_DTP, SLO ListCube testing scenario – EQS_SLO_L, or Report testing scenario – EQS_ERP). This field is automatically prefilled based on what is selected from the list of all runs. Another way to create a new run is by right clicking on one of the parent nodes that defines the scenario type and selecting the 'Create Test Run ID' option.

Entering Test Run

After a new Test Run is created, you are taken to the execution run view where the individual steps can be executed. You can leave the execution view by pressing any of the standard navigational buttons 'Back', 'Exit' or 'Cancel'. To re-enter the view, double click on the desired Test Run in the test run tree, or right click on it and choose the 'Display Test Run ID' option.

Deleting Test Run

To delete a Test Run, right click on the Test Run ID in the Test Run overview window (Figure 192) and choose the 'Delete Test Run ID' option; the Test Run and all of its data are then removed from the system.
Note: deleting a Test Run deletes all of the saved image data linked to the run, but not the test variants specified in it. The variants stay in the system and can be reused in other Test Runs. To delete variants, use the Variant Editor available via the KATE Dashboard.



Figure 194: Reset of all Test Steps of Test Run 

Resetting Test steps

To reset a Test Step, right click on it and choose the 'Reset task state' function. This is especially useful if you want to go back to modify and retest tasks. Some restrictions apply when resetting test steps: a test step should not be reset if subsequent steps of the Test Scenario have already finished. The 'Set task state' functionality is not recommended, as it can leave certain steps with an inconsistent status. Some examples of when you can reset a step status are provided below:

...


When the selection step (1st step) status is reset, you do not lose information about the selected Test Variants. When you double click on a selection step, the previously selected variants (if any) are loaded and act as the predefined variants for the specific run; these can either be removed or a new set of variants added.

Query Based Testing

Query based testing uses the query output as the test data: a before image of the query output is compared with an after image of the same output.
The actions that lead to the creation of both images can vary based on the purpose of each scenario. KATE currently supports one testing scenario with this type of testing. When queries are executed through KATE, the cache is not used; this yields runtime values that are as precise as possible for both the before and after image.
Recommendations:

  • A maximum of 1000 Query Variants should be used in each created run of the Query Testing scenario.
  • Processing time for each step depends on the queries that are processed; e.g. queries resulting in a huge output (more than 100 000 rows) may take longer to process than queries with a smaller output.

Test Scenario EQS (Query Testing)



Figure 197: Query Testing scenario steps

...

You can navigate and execute all steps by double clicking on them.

Select Query Variants

You can choose which Queries/Bookmarks are to be used for testing. The definition and selection of query/bookmark variants is necessary and can be done in the main window of this step, displayed in (Figure 198).

...


All of the Query Variants added to this list are used in the subsequent steps of the Query Testing scenario. There are multiple ways to add or create Query Variants in the list.
Create New Query Variant - Please refer to the 7.1.1 ('Create new Query Variant') section for a detailed description of Query Variant creation. The only difference is that in variant creation the confirmation button is not the 'Continue' button (Enter) but the 'Save' button (Enter).
By double clicking on a created Query Variant you can re-enter and edit the Variant properties. Remember, it is not possible to edit variant properties if the same variant is already in use in another Test Run.
Create New Bookmark Variant - Please refer to the 7.1.2 ('Create new Bookmark Variant') section for a more detailed description, as the screen for defining a Bookmark Variant directly in a run is the same. The only difference is the variant creation confirmation button, which in this case is not the 'Continue' button (Enter) but the 'Save' button (Enter).
Add existing Query Variant - A list of all the existing Query Variants in the system is displayed and you can select a set of variants to be added by using the 'Copy' button (Enter). Please note that each Query Variant can only be used once in each Test Run.
Add Query Variant from Test Run ID - By clicking the 'Add Query Variant from Test Run ID' button a list of all the existing Query Testing Runs is displayed. You can select a Test Run and add all variants used in it to the current Test Run.
Copy Query Variant of Run - By clicking the 'Copy Query Variant of Run' button a list of all the existing Query Testing Runs is displayed. You can select a Test Run and add all variants used in another run to the current Test Run as copies.
Generate Query Variants from HM (Optional) – You can automatically generate new Query Variants; to use this option you must have the HeatMap tool present in the system.
Add Variants of Test Case – By clicking the 'Add Variants of Test Case' button a list of all the existing Query Test Cases is displayed. You can select a Test Case and add all variants used in it to the current Test Run.
Copy Variants of Test Case – By clicking the 'Copy Variants of Test Case' button a list of all the existing Query Test Cases is displayed. You can select a Test Case and add all variants used in it to the current Test Run as copies.
For a detailed description of the Generate Query Variants functionality please refer to 7.1.3 ('Generate Query Variants from HeatMap Statistics').
Create Based on Web Template - Click on the 'Create Based on Web Template' button (Shift + F7). Please refer to 7.1.4 ('Create Variants of Web Templates').
Copy Variants - Please refer to the 7.1.14 ("Copy Query Variants") section for function details.

Generate tasks for before image

This step is executed by double clicking on it. Afterwards, the executable tasks for each Query/Bookmark Variant defined in the 'Select Query Variants' step are generated and visible in the following step 'Create before image'.

Create before image

By double clicking on this step, you first define the number of background jobs to be used for the execution of tasks. After each task is completed a before image of the corresponding query output is created and saved for later comparison.

Generate tasks for after image

This step is executed by double clicking on it; it generates the tasks for each Query/Bookmark Variant defined in the 'Select Query Variants' step. These are then executed in the following step 'Create after image'.

Create after image

By double clicking on this step, you first define the number of background jobs to be used for the execution of tasks. After each task is completed an after image of the corresponding query output is created and saved for later comparison.

Generate tasks for comparison

This step is executed by double clicking on it. The tasks for each Query/Bookmark Variant defined in the 'Select Query Variants' step are generated and can be found under the following step 'Compare before and after image'.

Compare before and after image

By double clicking on this step, you define the number of background jobs to be used for the execution of tasks. Each task compares the before image data with the after image data for one Query Variant.

Display results

Here you can view the Query Variant execution outputs and their comparison results. Please refer to the ('Results overview') chapter for more details.

ListCube based testing

Currently there are two Test Scenarios using the ListCube functionality outputs. The ListCube testing uses the same functionality as found in the standard RSA1 'Display Data' to collect the data from the InfoProviders that is to be tested. 
Recommendations:

  • For one ListCube testing run a maximum of 5000 ListCube variants should be used.
  • It is highly recommended to use the 'Use DB Aggregation' parameter when creating the ListCube Variants. When there are multiple lines with the same key in the image, performance decreases during the comparison phase of this test scenario.

Test Scenario EQS_LIST (ListCube testing)



Figure 199: ListCube Testing Scenario

...

  1. Selecting ListCube Variants
  2. Generation of before image tasks
  3. Creation of before images
  4. Performing specific tasks in system outside KATE (like conversion, migration, etc.)
  5. Generation of after image tasks
  6. Creation of after images
  7. Generation of comparison tasks
  8. Execution of comparison tasks
  9. Display of results


Select ListCube Variants

By double clicking on the first Test Step 'Select ListCube Variants' you can define which ListCube variants are to be used for testing in the specified run, or create new ListCube Variants. The list consists of all the ListCube Variants selected for this Test Run.
Once you have the variants selected for the Test Run, you need to save the selection by clicking on the 'Save' button (Ctrl + s) in the main screen of the ListCube variants selection.
You can add ListCube Variants in the following ways:
Create New ListCube Variant – You can create a new ListCube Variant by clicking on the 'Create New ListCube Variant' button (Shift + F1). For more information please refer to the 7.2.1 ('Create new ListCube Variant') chapter.
Add Existing ListCube Variant – You can display all of the existing ListCube variants in the system by clicking on the 'Add Existing ListCube Variant' button (SHIFT + F4) and select one or more variants to be added into the Test Run.
Add ListCube Variants of Run – By clicking on the 'Add ListCube Variants of Run' button (SHIFT + F5) you can select another (distinct from current) run and add all of the ListCube Variants it uses into the current Test Run.
Copy ListCube Variants of Run – By clicking on the 'Copy ListCube Variants of Run' button (SHIFT + F8) you can choose another (distinct from current) run and copy its ListCube Variants into the current Test Run.
Generate ListCube Variants – By clicking on the 'Generate ListCube Variants' button (SHIFT + F2) a dialog for the ListCube variants generation is displayed. Please refer to the 7.2.2 ('Generate ListCube Variants') chapter.
Generate RFC ListCube Variants – By clicking on the 'Generate RFC ListCube Variants' button a dialog for generating ListCube variants with an RFC destination is displayed. Please refer to 7.2.3 ('Generate RFC ListCube Variants').
Add Variants of Test Case – By clicking on the 'Add Variants of Test Case' button (SHIFT + F11) you can select from the existing ListCube Test Cases and add its ListCube Variants into the current Test Run.
Copy Variants of Test Case – By clicking on the 'Copy Variants of Test Case' button (SHIFT + F12) you can select from the existing ListCube Test Cases and add all of its ListCube Variants into the current Test Run as copies.
Copy Variants – Please refer to the 7.2.11 ("Copy ListCube Variants") section for function details.

Generate tasks for before image

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step, 'Create before images'.

Create before images

By double clicking on this step, you first define the number of background jobs to be used to create the before image for each of the specified ListCube variants. Image creation for some ListCube variants can fail due to too many rows being read. This safety limit can be adjusted by changing the setting 'Max. ListCube image row size'. See the Settings section for more details on how to change this setting.
If the KATE setting 'ListCube additional aggreg.' is set to 'X', the image returned from an InfoProvider during task execution is aggregated again. This functionality can be used for cases when a ListCube returns multiple rows with the same key because of unsupported functions.
KATE is capable of handling numeric aggregation overflows. Automatic overflow handling is executed when 'ListCube additional aggreg.' is set to 'X' and the aggregation of results fails because of a numeric overflow. In this case, the problematic overflowing key figure is set to 0 for all returned rows; if multiple key figures overflow, all of them are set to 0. Overflow handling is always logged when it is used. This feature enables you to create images for InfoProviders that would otherwise not be possible because of an overflow error at DB level. By turning off "DB Aggregation" in ListCube variants, all aggregation is performed in KATE and all non-overflowing key figures are correctly saved in the image. "DB Aggregation" is always preferred for performance reasons; additional aggregation should be used for handling special situations.
During additional aggregation the whole data image of rows returned from the InfoProvider needs to be loaded into memory at once. Memory restrictions of work processes can cause the task execution to fail if these thresholds are reached during additional aggregation.
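
To make the behavior concrete, the following is a minimal Python sketch of additional aggregation with overflow handling. It illustrates the logic described above under assumed row and field structures; it is not KATE's actual implementation (which runs in ABAP), and the overflow threshold is a stand-in.

```python
from collections import defaultdict

# Illustrative only: rows are dicts, key_cols name the characteristic
# columns, kf_cols the key figures. MAX_ABS stands in for the numeric
# overflow threshold, which in reality depends on the key figure type.
MAX_ABS = 10 ** 15

def aggregate_image(rows, key_cols, kf_cols):
    """Re-aggregate rows sharing the same key part, zeroing overflows."""
    totals = defaultdict(lambda: {kf: 0 for kf in kf_cols})
    overflowed = set()
    for row in rows:
        key = tuple(row[c] for c in key_cols)
        for kf in kf_cols:
            totals[key][kf] += row[kf]
            if abs(totals[key][kf]) > MAX_ABS:  # numeric overflow detected
                overflowed.add(kf)
    for kf in overflowed:
        # the overflowing key figure is set to 0 for ALL returned rows,
        # and the overflow handling is logged
        print(f"overflow handled: key figure {kf} set to 0")
        for key in totals:
            totals[key][kf] = 0
    return [dict(zip(key_cols, key), **kfs) for key, kfs in totals.items()]
```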

Generate tasks for after image

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step, 'Create after images'.

Create after images

You define the number of background jobs to be used in the creation of the after image for each of the specified ListCube variants. Image creation for some ListCube variants can fail due to too many rows being read. This safety limit can be adjusted by changing the setting 'Max. ListCube image row size'. See the Settings section for more details on how to change this setting.
If the KATE setting 'ListCube additional aggreg.' is set to 'X', the image returned from an InfoProvider during task execution is aggregated again, with the same overflow handling as described for the before images: overflowing key figures are set to 0 for all returned rows and the handling is logged. "DB Aggregation" is always preferred for performance reasons; additional aggregation should be used for handling special situations.
During additional aggregation the whole data image of rows returned from the InfoProvider needs to be loaded into memory at once. Memory restrictions of work processes can cause the task execution to fail if these thresholds are reached during additional aggregation.

Generate tasks for comparison

You can specify key figure InfoObjects that are ignored during the comparison. These ignored key figure columns are not visible in the 'Display Results' step. Generation of tasks is executed by clicking on the 'Create Tasks' (F8) button.
You can enable Automated Root Cause Analysis for comparison tasks. Refer to the Automated Root Cause Analysis section for details.

...


 It is not possible to ignore characteristic columns as those are used as unique keys for record matching during comparison.

Compare before and after images

You define the number of background jobs to be used for task execution. Each task compares the before image data with the after image data for one ListCube Variant; please refer to the ('Comparison logic') chapter.

Display results

You can view the outputs and their comparison results for the ListCube Variant executions; please refer to the ('Results overview') chapter.

Test Scenario EQS_SLO_L (SLO ListCube testing)

The SLO ListCube scenario shares most of its steps with the standard ListCube scenario; the difference is the additional functions that create converted images based on the defined mappings.

...

  1. Selecting ListCube Variants
  2. Generation of before image tasks
  3. Creation of before images
  4. Defining mapping
  5. Generation of convert tasks
  6. Creation of converted images
  7. Performing specific tasks in system outside KATE (conversion, etc.)
  8. Generation of after image tasks
  9. Creation of after images
  10. Generation of comparison tasks
  11. Execution of comparisons tasks
  12. Display of results

Select ListCube Variants

By double clicking on the first Test Step 'Select ListCube Variants' you can define which ListCube variants are to be used for testing in the specified run, or create new ListCube Variants. The list consists of all the ListCube Variants selected for this Test Run.
Once you have the variants selected for the Test Run, you need to save the selection by clicking on the 'Save' button (Ctrl + s) in the main screen of the ListCube variants selection.
You can add ListCube Variants in the following ways:
Create New ListCube Variant – You can create a new ListCube Variant by clicking on the 'Create New ListCube Variant' button (Shift + F1). For more information please refer to the 7.2.1 ('Create new ListCube Variant') chapter.
Add Existing ListCube Variant – You can display all of the existing ListCube variants in the system by clicking on the 'Add Existing ListCube Variant' button (SHIFT + F4) and select one or more variants to be added into the Test Run.
Add ListCube Variants of Run – By clicking on the 'Add ListCube Variants of Run' button (SHIFT + F5) you can select another (distinct from current) run and add all of the ListCube Variants it uses into the current Test Run.
Copy ListCube Variants of Run – By clicking on the 'Copy ListCube Variants of Run' button (SHIFT + F8) you can select another (distinct from current) run and add all of the ListCube Variants it uses into the current Test Run as copies.
Generate ListCube Variants – By clicking on the 'Generate ListCube Variants' button (SHIFT + F2) a dialog for the ListCube variants generation is displayed. Please refer to the 7.2.2 ('Generate ListCube Variants') chapter.
Generate RFC ListCube Variants – By clicking on the 'Generate RFC ListCube Variants' button a dialog for generating ListCube variants with an RFC destination is displayed. Please refer to 7.2.3 ('Generate RFC ListCube Variants').
Add Variants of Test Case – By clicking on the 'Add Variants of Test Case' button (SHIFT + F11) you can select from the existing ListCube Test Cases and add its ListCube Variants into the current Test Run.
Copy Variants of Test Case – By clicking on the 'Copy Variants of Test Case' button (SHIFT + F12) you can select from the existing ListCube Test Cases and add all of its ListCube Variants into the current Test Run as copies.
Copy Variants – Please refer to the 7.2.11 ("Copy ListCube Variants") section for function details.

Generate tasks for before image

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step, 'Create before images'.

Create before images

By double clicking on this step, you first define the number of background jobs to be used to create the before image for each of the specified ListCube variants. Image creation for some SLO ListCube variants can fail due to too many rows being read. This is a safety check; the behavior can be adjusted by changing the setting 'Max. ListCube image row size'. See the Settings section for more details on how to change this setting.

Define mapping

By clicking on this step you can define the mapping that should be used to convert before images and create converted images. Currently there are two types of mapping that you can use/create.

...


You can define the separator character used in the CSV file and the number of lines that should be ignored at the top of the file. The data structure in the CSV file has to adhere to the rules described in the help text available under the 'Information' button.

Generate tasks for conversion

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step, 'Convert before images'.

Convert before images

By double clicking on this step, you first define the number of background jobs to be used to create the converted image for each of the specified ListCube variant before images.

Generate tasks for after image

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step, 'Create after images'.

Create after images

You define the number of background jobs to be used in the creation of the after image of the specified ListCube variants. Image creation for some SLO ListCube variants can fail due to too many rows being read. This is a safety check; the behavior can be adjusted by changing the setting 'Max. ListCube image row size'. See the Settings section for more details on how to change this setting.

Generate tasks for comparison

By clicking on this task you can define which already created image you would like to compare with the after image. You can either select the standard before image or a converted image created by use of the specified mapping.

...


After clicking on the button, comparison tasks are generated in the next scenario step, 'Compare Images'.

Display results

You can view the outputs and their comparison results for the ListCube Variant executions; please refer to the ('Results overview') chapter.

Drill Down testing

The KATE Drill Down Test Scenario uses the ListCube functionality in a similar way as in the ListCube test scenario and uses the RSA1 'Display Data' functionality to get the data from the InfoProviders to be tested. 
Recommendations:

  • For one DrillDown testing run, a maximum of 1000 Drill Down variants should be used.
  • It is recommended to set the KATE setting 'DrillDown maximum records' to a value under 100 in order to reduce the amount of data to process and reduce the chance of errors.

Test Scenario EQS_DRILL (Drill Down testing)



Figure 207: Drill Down Testing Scenario 

...

  1. Selecting Drill Down Variants
  2. Generation of execution tasks
  3. Execution of Drill Down Tasks
  4. Display of results

Select Drill Down Variants

You can select which of the Drill Down Variants are to be used, or create new Drill Down Variants to be added into the Test Run. When the Test Run is generated through the KATE Test Management, the variants are already preselected for the Drill Down Test Case.
Once the variants are selected for the Test Run, you can save the selection by clicking on the 'Save' button (Ctrl + s).
You can add Drill Down Variants to the list in the following ways:
Create New Drill Down Variant - (SHIFT + F1). For detailed information about the creation of a Drill Down variant please refer to the 7.4.1 ('Create new Drill Down Variant') section.
Add Existing Drill Down Variant - (SHIFT + F4) displays all of the existing Drill Down Variants in the system, and you can select one or more variants to be added into the current Test Run.
Add Drill Down Variants of Run - (SHIFT + F5) lets you select the Variants used in another (distinct from current) run and add them into the current run as well.
Generate Drill Down Variants - (SHIFT + F6) please refer to the 7.4.2 ('Generate Drill Down Variants') section for more information.
Copy Variants - Please refer to the 7.4.7 ("Copy DrillDown Variants") section for function details.

Generate Tasks for Drill Down

The tasks are generated for the following step 'Execute Drill Down Tasks'. You can specify the key figure InfoObjects to be ignored during the comparison of the Drill Down Variants specified for this run. These ignored key figure columns are not visible in the 'Display Results' step. The generation of the tasks is executed by clicking on the 'Create Tasks' (F8) button.
You can enable Automated Root Cause Analysis for comparison tasks. Refer to Automated Root Cause Analysis section for details. 

...


 It is not possible to ignore characteristic columns as those are used as unique keys for record matching during comparison.

Execute Drill Down Tasks

Double click to execute; first you define the number of background jobs to be used for task execution. Each task executes a Drill Down test for one Drill Down Variant. Drill Down scenario testing is explained below.
A Drill Down test variant compares the data from two different InfoProviders; in most cases it is the same InfoProvider on different systems. Based on the Drill Down characteristics selected in the variant definition, the execution starts by adding the first specified Drill Down characteristic to a ListCube read, which is then performed on both InfoProviders. The ListCube outputs are read and immediately compared. For two out of three scenarios the Drill Down test execution ends here.

...

In the third scenario, some or all data returned by the InfoProviders is not the same. In this case a new test cycle begins and the erroneous data is checked. Using the KATE setting 'DrillDown maximum records', up to X distinct values belonging to the erroneous records are selected for the first added characteristic (the one added at the start of the first cycle). These values act as a filter for this characteristic in the next test cycle (drilling down into the erroneous records).
The next characteristic in the order of the specified Drill Down characteristics (if no more are available, the execution stops here) is added to the ListCube read. The ListCube reads are repeated for both InfoProviders with the new settings and the data is compared again. Depending on the result of the comparison, either a new test cycle begins using the same logic just described, or the execution ends here.
If the KATE setting 'DrillDown additional aggr.' is set to 'X', the data returned in each cycle when the InfoProviders are read is aggregated again. This functionality can be used for cases when a ListCube returns multiple rows with the same key because of unsupported functions.
During additional aggregation the whole data image of rows returned from the InfoProvider needs to be loaded into memory. Memory restrictions of work processes can cause the task execution to fail if these thresholds are reached during additional aggregation.
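
The cycle described above can be summarized with a short, hedged Python sketch. The helper names (read_listcube, compare) and the data shapes are hypothetical illustrations; KATE's real implementation runs in the ABAP backend.

```python
# Hypothetical sketch of the Drill Down test cycle; reads are assumed to
# return {key_tuple: data_part} dicts, one key element per active characteristic.
MAX_RECORDS = 100   # mirrors the 'DrillDown maximum records' setting

def compare(rows_a, rows_b):
    """Return the key tuples whose data parts differ between both reads."""
    return [k for k in rows_a.keys() | rows_b.keys()
            if rows_a.get(k) != rows_b.get(k)]

def drill_down_test(provider_a, provider_b, drill_chars, read_listcube):
    filters = {}                       # filter values accumulated per cycle
    active = []                        # characteristics added so far
    for char in drill_chars:           # each cycle adds one characteristic
        active.append(char)
        rows_a = read_listcube(provider_a, active, filters)
        rows_b = read_listcube(provider_b, active, filters)
        bad_keys = compare(rows_a, rows_b)
        if not bad_keys:
            return "OK"                # outputs match: execution ends
        # Drill down into the erroneous records: take up to MAX_RECORDS
        # distinct values of the characteristic added this cycle (the last
        # element of each key tuple) and filter on them in the next cycle.
        filters[char] = sorted({k[-1] for k in bad_keys})[:MAX_RECORDS]
    return "ERROR"                     # no more characteristics available
```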

Display results

By double clicking on this step, the DrillDown Variant execution outputs and their comparison results are displayed. Please refer to the ('Results overview') chapter for more details.

Transformation testing

Transformations are used during the execution of a Data Transfer Process (DTP). This scenario is used to test changes/updates to the logic used in the transformations (lookups, rules, routines).

Test Scenario EQS_DTP (Transformations testing)



Figure 209: Transformation Testing Scenario

...

  1. Select DTP Variants
  2. Generate tasks for DTP generation
  3. Generate DTPs
  4. Generate tasks for before DTP load
  5. Load DTP for Before Image
  6. Generate tasks for before DTP images
  7. Create Before Images
  8. Performing some specific task in system outside KATE (like conversion, archiving, migration, etc.)
  9. Generate tasks for after DTP load
  10. Load DTP for After Image
  11. Generate tasks for after DTP images
  12. Create DTP for After Image
  13. Generate tasks for comparison
  14. Compare before and after images
  15. Display Results
  16. Display Lookup Results
  17. Generate tasks for clean up
  18. Clean up
  19. Clean up DTP


Select DTP variants

First in the list is a screen for defining the DTP variants to be tested during the run.

...

Extraction from – Active Table (Without Archive) – can be changed with KATE settings (see Settings chapter).
Note: You can always modify the generated DTP settings and filters through the standard RSA1 transaction and the generated DTPs are also visible there.
Note: DTPs generated by the KATE tool have their description generated in the same way as in standard DTPs; the only difference is an added prefix to the beginning. This Prefix can be changed in the KATE settings and can be up to 8 characters long. Please refer to Settings chapter.

Generate tasks for before DTP load

Generates tasks for the following step 'Load DTP for Before Image' for all DTPs in the Test Run.

Load DTP for Before Image

This step is a group task that can be executed using multiple background jobs. Each task executes one generated DTP (if an RFC Destination was specified for the DTP Variant, the DTP load is executed on the specified target system).
In KATE, a maximum waiting time can be specified for each task (parameter 'DTP load wait time'). If this time is exceeded (the load takes too long), you are informed via the log and should check the status of the load manually. It is necessary to wait for all loads to finish before you continue to the next scenario steps.
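
The waiting behavior can be pictured with a small Python sketch, assuming a hypothetical get_status() helper that queries the load status; the polling interval and status values are illustrative, not KATE's actual internals.

```python
import time

# Hypothetical sketch of the 'DTP load wait time' behavior: poll the
# load status until the load finishes or the configured time is exceeded.
def wait_for_load(get_status, wait_time_s, poll_s=30):
    deadline = time.time() + wait_time_s
    while time.time() < deadline:
        status = get_status()           # e.g. 'RUNNING', 'FINISHED', 'ERROR'
        if status in ("FINISHED", "ERROR"):
            return status
        time.sleep(poll_s)
    return "TIMEOUT"  # load took too long: check its status manually
```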

Generate tasks for before DTP images

Generates the tasks in following step 'Create Before Images' for all DTP variants in the Test Run.

Create Before Images

Each task creates a snapshot of request data loaded for one DTP from the previous steps and all loaded data of the request is stored.

Generate tasks for after DTP load

Generates the tasks for the following step 'Load DTP for After Image' for all DTP variants in the Test Run.

Load DTP for After Image

Each task executes one DTP (if an RFC Destination was specified for the DTP Variant, the DTP load is executed on the specified target system). If a different after image DTP Variant is specified, then its generated DTP is used for loading.
In KATE, a maximum waiting time can be specified for each task (parameter 'DTP load wait time'). If this time is exceeded (the load takes too long), you are informed via the log and should check the status of the load manually. It is necessary to wait for all loads to finish before you continue to the next scenario steps.

Generate tasks for after DTP images

This step automatically generates the tasks for the DTP variants that are going to be used in the 'Create After Images'.

Create After Images

Each task creates a snapshot of the request data loaded by a DTP and all loaded data of request is stored.

Generate tasks for comparison

You can select InfoObjects that are ignored during the comparison for the DTP Variants of this run; these ignored InfoObject columns are not visible when you 'Display Results'. Generation of tasks is executed by clicking on the 'Create Tasks' (F8) button.



Figure 211: Selection of ignored key figure columns

Compare before and after images

These tasks compare the before image with after image of the DTP Variants that were created previously.

Display Results

Double click to view the results of testing; refer to the ('Results overview') chapter for more information.

Display Lookup Results (Optional)

Here the lookup data testing results are displayed. When the DVD Lookup Translator is installed in the system and configured to provide KATE with pre and post lookup data, KATE also creates images of the provided data and compares them in the 'Compare before and after images' step.
 Please note that Lookup data testing enhancement is not supported for RFC DTP Test Cases.

Generate tasks for cleanup

This step automatically generates the tasks for the following steps 'Clean up requests' and 'Clean up DTP' for all DTPs in the Test Run.

Clean up requests

Each task looks into the saved parameters of a variant, finds the executed before/after image load requests, and then deletes them from the system.
The saved image data and comparison results are preserved after the clean up steps have finished their execution. If the DTP Test Case is defined with an RFC Destination, these requests are deleted in the appropriate system. By executing this step after the 'Create Before Images' step, you can separately clear test loads from the systems once the before image is taken. The status can also be reset after the 'Create After Images' step and the step rerun to also clear the after image loads.

Clean up DTP

Each task looks into the saved parameters of one variant, searches for the generated DTPs, and deletes them from the system. If the DTP Test Case is defined with an RFC Destination, the DTPs are deleted on the appropriate system. The saved image data and comparison results are preserved after these steps finish their execution.

Table based testing

Table based testing uses data directly from the database tables. Based on the user specification, the table columns are read from the database table and used to create the before and after images. These images are then compared.
Recommendations:

  • For one Table testing run a maximum of 1000 Table variants should be used.

Test Scenario EQS_TABLE (Table testing)



Figure 212: Table Testing Scenario

...

  1. Selecting Table Variants
  2. Generation of before image tasks
  3. Creation of before images
  4. Performing specific tasks in system outside KATE (like conversion, migration, etc.)
  5. Generation of after image tasks
  6. Creation of after images
  7. Generation of comparison tasks
  8. Execution of comparison tasks
  9. Display of results

Select Table Variants

By double clicking on the first Test Step 'Select Table Variants' you can define which Table variants are to be used in this Test Run for testing, or you can simply create new Table variants.
Once you have the variants selected for the Test Run, you need to save the selection by clicking on the 'Save' button (Ctrl + s) in main screen of the Table variants selection.
You can add Table Variants in the following ways:
Create New Table Variant – You can create a new Table Variant by clicking on the 'Create New Table Variant' button (Shift + F1). For more information please refer to the 7.5.1 ('Creating new Table Variant') chapter.
Add Existing Table Variant – You can display all of the existing Table variants in the system by clicking on the 'Add Existing Table Variant' button (Shift + F2). You can then select one or more variants to be added into this Test Run.
Add Table Variants of Run – By clicking on the 'Add Table Variants of Run' button (Shift + F4) you can select another (distinct from current) run and add all of the Table Variants of the selected run into the current Test Run.
Copy Table Variants of Run – By clicking on the 'Copy Table Variants of Run' button (Shift + F8) you can select another (distinct from current) run and add all of the Table Variants of the selected run into the current Test Run as copies.
Add Variants of Test Case – By clicking on the 'Add Variants of Test Case' button (Shift + F11) you can select an existing Test Case and add the Table Variants of the selected Test Case into the current Test Run.
Copy Variants of Test Case – By clicking on the 'Copy Variants of Test Case' button (Shift + F12) you can select an existing Test Case and add all of the Table Variants of the selected Test Case into the current Test Run as copies.
Copy Variants – Please refer to the 7.5.10 ("Copy Table Variants") section for function details.

Generate tasks for before image

To generate the tasks, you can double click the Test Step 'Generate tasks for before image' to execute it. This will prepare the tasks for the following step 'Create before images'.

Create before images

By double clicking on this step, you first define the number of background jobs to be used to create the before image for each of the specified Table variants. Image creation for some Table variants can fail due to too many rows being read. This safety limit can be adjusted by changing the setting 'Max. Table image row size'. See the Settings section for more details on how to change this setting.
KATE is capable of handling numeric aggregation overflows. Automatic overflow handling is executed when the 'Table own aggregation' setting is set to 'X' and the aggregation of results fails because of a numeric overflow. In this case, the problematic overflowing key figure is set to 0 for all returned rows; if multiple key figures overflow, all of them are set to 0. Overflow handling is always logged when it is used. This feature enables you to create images for Tables that would otherwise not be possible because of an overflow error at DB level. DB aggregation is always preferred for performance reasons; own aggregation should be used for handling special situations.

Generate tasks for after image

To generate the tasks, you can double click the Test Step 'Generate tasks for after image' to execute it. This will prepare the tasks for the following step 'Create after images'.

Create after images

By double clicking on this step, you first define the number of background jobs to be used to create the after image for each of the specified Table variants. Image creation for some Table variants can fail due to too many rows being read. This safety limit can be adjusted by changing the setting 'Max. Table image row size'. See the Settings section for more details on how to change this setting.
KATE is capable of handling numeric aggregation overflows. Automatic overflow handling is executed when the 'Table own aggregation' setting is set to 'X' and the aggregation of results fails because of a numeric overflow. In this case, the problematic overflowing key figure is set to 0 for all returned rows; if multiple key figures overflow, all of them are set to 0. Overflow handling is always logged when it is used. This feature enables you to create images for Tables that would otherwise not be possible because of an overflow error at DB level. DB aggregation is always preferred for performance reasons; own aggregation should be used for handling special situations.

Generate tasks for comparison

To generate the tasks, you can double click the Test Step 'Generate tasks for comparison' to execute it. This will prepare the tasks for the following step 'Compare before and after images'.

Compare before and after images

You define the number of background jobs to be used for task execution. Each task compares the before image data with the after image data for one Table Variant; please refer to the ('Comparison logic') chapter.

Display results

You can view the outputs and their comparison results for the Table Variant executions; please refer to the ('Results overview') chapter.

Report based testing

Report based testing executes standard ERP reports, with or without variants. The reports are executed in background jobs and their output is saved to the spool. The outputs are then read after execution and compared based on your preferred settings.
Recommendations:

  • A maximum of 1000 Report variants should be used for one Report testing run.

Test scenario EQS_ERP (Report testing)



Figure 213: Report Testing Scenario

...

  1. Selecting Report Variants
  2. Generation of before image tasks
  3. Creation of before images
  4. Performing specific task in system outside KATE
  5. Generation of after image tasks
  6. Specifying comparison settings
  7. Generation of comparison tasks
  8. Execution of comparison tasks
  9. Display of results

Select Report Variants

By double clicking on the first Test Step 'Select Report Variants' you can define which Report variants are to be used for testing in the specified run, or create new Report Variants. The list consists of all the Report Variants selected for this Test Run.
Once you have the variants selected for the Test Run, you need to save the selection by clicking on the 'Save' button (Ctrl + s) in the main screen of the Report variants selection.
You can add Report Variants in the following ways:
Create New Report Variant – You can create a new Report Variant by clicking on the 'Create New Report Variant' button (Shift + F4). For more information please refer to the 7.6.1 ('Create New Report Variant') chapter.
Add Existing Report Variant – You can display all of the existing Report variants in the system by clicking on the 'Add Existing Report Variant' button (SHIFT + F5) and select one or more variants to be added into the Test Run.
Add Report Variants of Run – By clicking on the 'Add Report Variants of Run' button (SHIFT + F6) you can select another (distinct from current) run and add all of the Report Variants it uses into the current Test Run.
Copy Report Variants of Run – By clicking on the 'Copy Report Variants of Run' button (SHIFT + F8) you can select another (distinct from current) run and add all of the Report Variants it uses into the current Test Run as copies.
Add Variants of Test Case – By clicking on the 'Add Variants of Test Case' button (SHIFT + F11) you can select an existing Test Case and add all of its Report Variants into the current Test Run.
Copy Variants of Test Case – By clicking on the 'Copy Variants of Test Case' button (SHIFT + F12) you can select an existing Test Case and add all of its Report Variants into the current Test Run as copies.
Copy Variants – Please refer to the 7.6.5 ("Copy Report Variants") section for function details.

Generate tasks for before image

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step, 'Create before images'.

Create before images

By double clicking on this step, you first define the number of background jobs to be used to create the before image for each of the specified Report variants.

Generate tasks for after image

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step, 'Create after images'.

Create after images

You define the number of background jobs to be used in the creation of the after image of the specified Report variants.

Set Comparison settings

By double clicking on this step you can define the standard settings that should be used during the comparison of the report outputs.

...

You save the specified compare settings by clicking on the 'Enter' button.

Generate tasks for comparison

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step, 'Compare before and after images'.

Display of results

By clicking on this step you can review the output and comparison results of the reports executed in the previous steps. There are some differences between the report testing scenario and the other scenarios; these are specified in more detail in the section Reports Display Results.

Comparison logic

All Test Scenarios use this comparison logic to validate whether the data in the after image differs from the data in the before image. This section describes how data is compared in the test scenarios.

Comparison of query results

Each query output is separated into rows, with each row split into a key part and a data part. The key part is defined by all the characteristic columns on the left side of the output, while all of the other columns define the data part (see Figure 215).
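
The row-level comparison can be sketched in a few lines of Python. This is an illustration of the logic described above, not KATE's implementation; the column names and row structures are assumptions.

```python
# Illustration of the key/data split and the before/after comparison.
def split_row(row, char_cols):
    key = tuple(row[c] for c in char_cols)               # key part
    data = {c: v for c, v in row.items() if c not in char_cols}  # data part
    return key, data

def compare_images(before_rows, after_rows, char_cols):
    before = dict(split_row(r, char_cols) for r in before_rows)
    after = dict(split_row(r, char_cols) for r in after_rows)
    errors = []
    for key, b_data in before.items():
        a_data = after.get(key)
        if a_data is None:
            errors.append((key, "missing in after image"))
        elif a_data != b_data:
            errors.append((key, "data parts differ"))
    return errors
```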

...

Figure 217: Incorrect data in After image

Comparison of ListCube/Drill Down results

The logic for ListCube comparison is similar to query comparison. Each row of the ListCube output is split into Key and Data parts. The Key part for ListCube output is also visually distinguished in the output table for better orientation. 

...

  • If the Key part structure is not the same in both the Before and After image, the ListCube outputs cannot be compared.
  • If a Data part column is missing or added in the Before/After image, this column is not checked and is not treated as an error (see the sketch below).
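
A hedged sketch of these two structure rules, under the same illustrative assumptions as the previous snippet:

```python
# Key columns must match exactly; data columns are compared only where
# they are present in both images (missing/added columns are skipped).
def comparable_columns(before_cols, after_cols, key_cols):
    before_keys = [c for c in before_cols if c in key_cols]
    after_keys = [c for c in after_cols if c in key_cols]
    if before_keys != after_keys:
        raise ValueError("Key part structure differs: images cannot be compared")
    return [c for c in before_cols
            if c in after_cols and c not in key_cols]
```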

Automated Root Cause Analysis

For ListCube, Drill Down and Table comparison tasks, an automated root cause analysis can be enabled. Automated root cause analysis works in the following way:
Before the last Drill Down/ListCube/Table comparison, a table of keys to be ignored is created. This table is created for both the A and B InfoProviders in the Drill Down scenario, and for the before and after images of the ListCube InfoProviders. For Table variants the table is created only for the appropriate InfoProviders, if the changelog table of a standard DSO or the active table of a write-optimized DSO is compared.
Each InfoProvider is checked for the last delta loading requests of all DTPs and InfoPackage executions. The data of these last loaded delta requests is searched and the ignored keys table is created. The ignored keys table contains all combinations of keys (with the same structure as the compared DrillDown/ListCube key part) that were present in the last delta loads.
When an error is found during the comparison, the key part of the erroneous row is checked against the ignored keys table. If the key is found, the data of this row was affected by the last delta load requests and the error is ignored. Such ignored erroneous data is colorized yellow to distinguish it from real errors.

...


For MultiProviders the ignored keys table is constructed as the union of the ignored keys tables of all part providers; for SPOs it is constructed as the union of the ignored keys tables of all semantic partitions.
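
As an illustration, assuming the errors list produced by the earlier comparison sketch, the filter might look like this (all names are hypothetical):

```python
# Hypothetical filter applying the ignored keys table to comparison errors.
def classify_errors(errors, ignored_keys):
    """Split errors into real ones and ignored (yellow) ones."""
    real, ignored = [], []
    for key, reason in errors:
        (ignored if key in ignored_keys else real).append((key, reason))
    return real, ignored

def multiprovider_ignored_keys(part_provider_key_sets):
    """Union of the ignored keys tables of all part providers/partitions."""
    result = set()
    for keys in part_provider_key_sets:
        result |= keys
    return result
```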

Comparison of Table results

The logic of the Table comparison is the same as the logic for ListCube/DrillDown results comparison, with the following exceptions:

  • Automated Root Cause Analysis cannot be used for table comparison.
  • Key figure columns cannot be ignored in table comparison.

Comparison of DTP load results

Transformation testing often requires checking huge volumes of lines to see if the transformation logic changed during the test scenario. When the before and after images are compared, the rows to be mutually checked are selected by their row position. To speed up the comparison, only hashed values of these lines are compared. The hashes are calculated using all of the fields in the line. Lines are colorized red if there is a difference in any of the cells of the compared lines.
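
A minimal Python sketch of this positional, hash-based comparison; the hash algorithm and field separator are assumptions, not KATE's actual choices.

```python
import hashlib

def line_hash(row):
    """Hash a row using all of its fields."""
    payload = "|".join(str(v) for v in row)
    return hashlib.sha256(payload.encode()).hexdigest()

def compare_dtp_images(before_rows, after_rows):
    """Pair rows by position and flag any line whose hash differs."""
    mismatches = []
    for pos, (b, a) in enumerate(zip(before_rows, after_rows)):
        if line_hash(b) != line_hash(a):
            mismatches.append(pos)   # line would be colorized red
    return mismatches
```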

Lookup data comparison

If lookup data testing is included in the comparison, the following logic is applied during the data comparison. Direct lookup data includes the lookup source data coming into the lookup and the result data returned from the lookup.
The data provided by the DVD Lookup Translator comes in packages, because packages are how the data is processed in the transformation. KATE executes the comparison only when the DVD Lookup Translator provides the same number of packages for the before and after image.
Each package is then compared independently of the other packages. For each package, the lookup source data is compared first. If the source data of the before/after package is not the same, the result data is not compared, as lookup correctness can only be tested on the same input. If the source data is the same, the result data of the before/after package is compared.
Rows of the before/after images are matched by row number; when saving the lookup source and result data, KATE sorts them by all fields to prevent sorting problems.
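
Sketched in Python under illustrative assumptions (packages as dicts with 'source' and 'result' row lists; rows already sorted on save):

```python
# Illustrative package-wise lookup comparison, not KATE's implementation.
def compare_lookup_images(before_pkgs, after_pkgs):
    if len(before_pkgs) != len(after_pkgs):
        return None  # package counts differ: comparison is not executed
    statuses = []
    for b, a in zip(before_pkgs, after_pkgs):
        # rows are sorted by all fields when saved, so row order matches
        src_ok = b["source"] == a["source"]
        # result data is only compared when the source data was identical
        res_ok = (b["result"] == a["result"]) if src_ok else None
        statuses.append((src_ok, res_ok))
    return statuses
```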

Results overview

When displaying the results of Query, ListCube, SLO ListCube, Drill Down or Transformation testing in the 'Display results' step, a list of all Test Variants of the run is shown on the left side of the screen. Basic information about each Test Variant (depending on the Test Variant type, i.e. Query/ListCube/DTP/Drill Down) is displayed together with the following information:

...



Figure 226: Erroneous display in Transformations testing scenario

Reports Display Results

The Display Results screen for the ERP Report scenario is very similar to the other Display Results screens, but there are the following differences.

...



Figure 228: Example of the HTML report output view

Navigation in outputs

For performance reasons, only 5000 lines (configurable via settings) are displayed at once in the 'Display results' for each variant output. When any of the variant images has more than 5000 lines of output, the navigational buttons become active and you can page through the results. You can navigate through the Before and After image outputs independently by using the navigational buttons.



Figure 229: Output buttons and position

Full screen mode

Sometimes it is required to display the Before/After image outputs in full screen. You can click on the 'Full Screen' button in the application toolbar. To activate full screen mode, the results must already be displayed in the standard view. It is possible to navigate through the output result pages (if any) directly in full screen mode.



Figure 230: Full screen mode of variant output

Union screen mode

The outputs of the ListCube, Transformation and Drill Down test scenarios can be displayed together in union mode. This is accessible by clicking on the 'Union Screen Results Display' button in the toolbar and lets you display two different outputs on a single screen (Figure 231).

...

For all scenarios the rows are paired so that they can be compared together. The pairing differs based on the scenario, i.e. for the Transformation testing scenario row numbers are used, while for the ListCube and Drill Down scenarios the appropriate row keys are matched.
*Important Note: In the current version of KATE the Query testing scenario does not support Union display mode.

'Missing' column

For the Drill Down and ListCube Test Scenarios a column named 'Missing' is added to the output of the InfoProviders. If a line is not correct in one of the Before/After images because no record with the same key was found in the After image, an icon is added to the column. This helps you differentiate between erroneous and missing records; you can also use this column for filtering the results. This column is visible in all three types of results display.
*Important Note: Using standard ALV filtering functionality on the output tables only influences the actually displayed page records and does not influence records of other pages. 



Figure 232: Missing Column

Reporting Status

It is possible to display the reporting status column in the table with the list of tested variants for all of the Backend scenarios by changing the layout of the result table. By default this status is always set to the same value as the comparison result in KATE. The reporting status is used to define the statuses of individual Test Cases for reporting. You can set the reporting status and override the compare status set by KATE by clicking on the appropriate Variant result row and then selecting the 'Set Reporting Status' option (Figure 233).

...



Figure 236: Report Statuses Customizing

Lookup Results

During the Transformation testing scenario, when KATE is provided with data from the DVD Lookup Translator, the 'Display Lookup Results' step contains this data and the comparison results. The structure of this screen is very similar to the standard 'Display Results' screen; however, there are some differences:
In the 'Display Lookup Results' screen there are two comparison statuses for each line of the tested DTP variant, and there can be multiple lines for each variant in a run. The number of lines depends on the number of data packages processed during the load for each variant. The first comparison status reflects the comparison of the lookup source data, while the second reflects the comparison of the result data returned from the lookup.

...


As displayed in Figure 237, when the source data comparison fails, no comparison is performed on the data returned by the lookup.
The right side of the screen displays the before/after image data as it normally would be in the standard 'Display Results' screen. When you double click on the appropriate variant lookup package, the data is displayed. By default, when you display the actual data this way, the returned lookup data is displayed. To switch between the display of the source data and the result data, click on the 'Source Data' button (Shift + F7) or the 'Result Data' button (Shift + F8).

Test Run ID Locking

To prevent executing a Test Run by mistake and overwriting current results, you can use the lock option. When a Test Run is locked, it is not possible to change the task state, execute or reset image creation or comparison, or change variants or the reporting status until it is unlocked again. Mass Execution and scheduling will not run on locked test runs. Deletion in the Test Plan stops at the first locked Test Run. You are informed about a locked Test Run by a lock icon in the Test Plan tab of Test Management and by a message with a description and user name.
You can change the lock state of a Test Run by right clicking on the Test Case name in the Test Plan (or on the root task in the Test Case) and selecting 'Lock Run' (or 'Unlock Run') from the context menu.

...

  • Query
  • ListCube
  • Table
  • DrillDown
  • ERP Report
  • DTP
  • SLO ListCube



Step by Step: Creating and executing Test Run ID in Backend Testing

This example is a step-by-step guide on how to create and execute a Test Run and display its results in Backend Testing.

...