(KATE-146) Backend Testing

Test Scenarios

KATE supports running tests in the backend, focused on Query, ListCube, Drill Down, Transformation, Table and Report testing. Unlike the Test Scenarios in the frontend, the backend Test Scenarios are predefined and contain Test Run IDs.


 
Figure 192: KATE Backend Testing


Each Test Scenario focuses on a different aspect of the testing. Currently, there are seven Test Scenarios defined to test Queries, DB tables, Reports, DTP loads/Transformations and InfoProviders using the ListCube functionality.

Test Run IDs

The Run ID is one instance of a Test Scenario and is executed step by step, from the top down.
To add a new Test Run ID, click on the 'Create new Run' icon and fill in the name and description.



Figure 193: Creation of a new Test Run ID


The Test Scenario specifies the type of the created run (i.e. whether it is a Query testing scenario – EQS, ListCube testing scenario – EQS_LIST, Drill Down testing scenario – EQS_DRILL, DTP testing scenario – EQS_DTP, SLO ListCube testing scenario – EQS_SLO_L or Report testing scenario – EQS_ERP). This field is automatically prefilled based on what is selected from the list of all runs. Another way to create a new run is by right-clicking on one of the parent nodes that defines the scenario type and selecting the 'Create Test Run ID' option.

Entering Test Run

After a new Test Run is created, you are taken to the execution run view where the individual steps can be executed. You can leave the execution view by pressing any of the standard navigation buttons 'Back', 'Exit' or 'Cancel'. To re-enter the view, double click on the desired Test Run in the test run tree, or right-click on it and choose the 'Display Test Run ID' option.

Deleting Test Run

To delete a Test Run, right-click on the Test Run ID in the Test Run overview window (Figure 192) and choose the 'Delete Test Run ID' option. The Test Run and all of its data are removed from the system.
 Note that deleting a Test Run deletes all of the saved image data linked to the run, but does not delete the test variants specified in the run. The variants stay in the system and can be reused in other Test Runs. To delete the variants, use the Variant Editor available via the KATE Dashboard.



Figure 194: Reset of all Test Steps of Test Run 

Resetting Test steps

To reset a Test Step, right-click on the test step and choose the 'Reset task state' function. This functionality is especially useful if you want to go back to modify and retest tasks. Some restrictions apply when resetting test steps: a test step should not be reset if test steps that follow it in the Test Scenario have already finished. The 'Set task state' functionality is not recommended as it could leave certain steps with an inconsistent status. Below are some examples of when you can reset the step status:

  1. It is recommended to reset the statuses of all Test Steps (Tasks) by right-clicking on the topmost item in the Test Scenario hierarchy.
  2. It is recommended to reset the status of a single Test Step (Task) only when all of the following Test Steps in the Test Scenario are in an initial state (i.e. have not been executed yet).


Figure 195: Reset Test Step state

  3. It is possible, but not recommended, to reset the state of a Test Step if any following Test Steps have already been executed.


Figure 196: Reset Test Step state (not recommended)


When the selection step (1st step) status is reset, you do not lose information about the selected Test Variants. When you double-click on a selection step, the previously selected variants (if any) are loaded and act as the predefined variants for the specific run; these can either be removed or a new set of variants added.

Query Based Testing

Query based testing uses the query output as the test data: a before image of the query output is compared with an after image of the query output.
The actions that lead to the creation of these two images can vary based on the purpose of each scenario. KATE currently supports one testing scenario with this type of testing. When queries are executed using KATE, the cache is not used; this is to obtain runtime values that are as precise as possible for both the before and after image.
Recommendations:

  • A maximum of 1 000 Query Variants should be used in each created run of Query Testing scenario.
  • Processing time for each step depends on the queries that are processed, e.g. if there are many queries resulting in a huge output (more than 100 000 rows), the processing time may be longer than for queries with a smaller output.

Test Scenario EQS (Query Testing)



Figure 197: Query Testing scenario steps


The Query Testing Scenario contains the following steps:

  1. Select Query Variants
  2. Create the Before Image
  3. Performing some specific task in system outside KATE (like archiving, migration, etc.)
  4. Create the After Image
  5. Compare Before and After Image
  6. Display of results

You can navigate and execute all steps by double clicking on them.

Select Query Variants

You can choose which Queries/Bookmarks are to be used for testing. The definition and selection of query/bookmark variants is necessary and can be done in the main window of this step, displayed in Figure 198.



Figure 198: Selection of Query Variants


All of the Query Variants that are added to this list are then used in the subsequent steps of the Query Testing scenario. There are multiple ways to add/create Query Variants into the list.
Create New Query Variant - Please refer to section 7.1.1 ('Create new Query Variant') for a detailed description of Query Variant creation. The only difference is that in variant creation the confirmation button is not the 'Continue' button (Enter) but the 'Save' button (Enter).
By double clicking on a created Query Variant you can re-enter and edit the Variant properties. Remember, it is not possible to edit variant properties if the same variant is already in use in another Test Run.
Create New Bookmark Variant - Please refer to section 7.1.2 ('Create new Bookmark Variant') for a more detailed description of the screen for defining a Bookmark Variant directly in the run. The only difference is the variant creation confirmation button, which in this case is not the 'Continue' button (Enter) but the 'Save' button (Enter).
Add existing Query Variant - A list of all the existing Query Variants in the system is displayed and you can select a set of variants to be added by using the 'Copy' button (Enter). Please note that each Query Variant can only be used once in each Test Run.
Add Query Variant from Test Run ID - By clicking the 'Add Query Variant from Test Run ID' button a list of all the existing Query Testing Runs is displayed. You can select a Test Run and add all variants used in it to the current Test Run.
Copy Query Variant of Run - By clicking the 'Copy Query Variant of Run' button a list of all the existing Query Testing Runs is displayed. You can select a Test Run and add all variants used in another run to the current Test Run as copies.
Generate Query Variants from HM (Optional) – You can automatically generate new Query Variants; to use this option the HeatMap tool must be present in the system. For a detailed description of the Generate Query Variants functionality please refer to 7.1.3 ('Generate Query Variants from HeatMap Statistics').
Add Variants of Test Case – By clicking the 'Add Variants of Test Case' button a list of all the existing Query Test Cases is displayed. You can select a Test Case and add all variants used in it to the current Test Run.
Copy Variants of Test Case – By clicking the 'Copy Variants of Test Case' button a list of all the existing Query Test Cases is displayed. You can select a Test Case and add all variants used in it to the current Test Run as copies.
Create Based on Web Template - Click on the 'Create Based on Web Template' button (Shift + F7). Please refer to 7.1.4 ('Create Variants of Web Templates').
Copy Variants - Please refer to the 7.1.14 ('Copy Query Variants') section for function details.

Generate tasks for before image

This step is executed by double clicking on it. The executable tasks for each Query/Bookmark Variant defined in the 'Select Query Variants' step are then generated and visible in the following step 'Create before image'.

Create before image

By double clicking on this step, you first define the number of background jobs to be used for the execution of tasks. After each task is completed a before image of the corresponding query output is created and saved for later comparison.

Generate task for after image

This step is executed by double clicking on it. The tasks for each Query/Bookmark Variant defined in the 'Select Query Variants' step are generated and are to be executed in the following step 'Create after image'.

Create after Image

By double clicking on this step, you first define the number of background jobs to be used for the execution of tasks. After each task is completed an after image of the corresponding query output is created and saved for later comparison.

Generate tasks for comparison

This step is executed by double clicking on it. The tasks for each Query/Bookmark Variant defined in the 'Select Query Variants' step are generated and can be found under the following step 'Compare before and after image'.

Compare before and after image

By double clicking on this step, you define the number of background jobs to be used for the execution of tasks. Each task compares the before image data with the after image data for one Query Variant.

Display results

Here you can view the Query Variant execution outputs and their comparison results. Please refer to the 'Results overview' chapter for more details.

ListCube based testing

Currently there are two Test Scenarios using the ListCube functionality outputs. The ListCube testing uses the same functionality as found in the standard RSA1 'Display Data' to collect the data from the InfoProviders that is to be tested. 
Recommendations:

  • For one ListCube testing run a maximum of 5000 ListCube variants should be used.
  • It is highly recommended to use the 'Use DB Aggregation' parameter when creating the ListCube Variants. When there are multiple lines with the same key in the image, performance is decreased during the comparison phase of this test scenario.

Test Scenario EQS_LIST (ListCube testing)



Figure 199: ListCube Testing Scenario


Basic ListCube testing contains the following steps:

  1. Selecting ListCube Variants
  2. Generation of before image tasks
  3. Creation of before images
  4. Performing specific tasks in system outside KATE (like conversion, migration, etc.)
  5. Generation of after image tasks
  6. Creation of after images
  7. Generation of comparison tasks
  8. Execution of comparison tasks
  9. Display of results


Select ListCube Variants

By double clicking on the first Test Step 'Select ListCube Variants' you can define which ListCube Variants are to be used in this run for testing, or create new ListCube Variants. The list consists of all ListCube Variants selected for this Test Run.
Once you have the variants selected for the Test Run, you need to save the selection by clicking on the 'Save' button (Ctrl + S) in the main screen of the ListCube variant selection.
You can add ListCube Variants in the following ways:
Create New ListCube Variant - You can create a new ListCube Variant by clicking on the 'Create New ListCube Variant' button (Shift + F1). For more information please refer to the 7.2.1 Create new ListCube Variant chapter.
Add Existing ListCube Variant – You can display all of the existing ListCube variants in the system by clicking on the 'Add Existing ListCube Variant' button (Shift + F4) and select one or more variants to be added into the Test Run.
Add ListCube Variants of Run - By clicking on the 'Add ListCube Variants of Run' button (Shift + F5) you can select another (distinct from current) run and add all of the ListCube Variants used in it into the current Test Run.
Copy ListCube Variants of Run - By clicking on the 'Copy ListCube Variants of Run' button (Shift + F8) you can choose another (distinct from current) run and copy its ListCube Variants into the current Test Run.
Generate ListCube Variants - By clicking on the 'Generate ListCube Variants' button (Shift + F2) a dialog for the ListCube variant generation is displayed. Please refer to the 7.2.2 ('Generate ListCube Variants') chapter.
Generate RFC ListCube Variants - By clicking on the 'Generate RFC ListCube Variants' button a dialog for the generation of ListCube variants with an RFC destination is displayed. Please refer to 7.2.3 ('Generate RFC ListCube Variants').
Add Variants of Test Case - By clicking on the 'Add Variants of Test Case' button (Shift + F11) you can select from the existing ListCube Test Cases and add the ListCube Variants in it into the current Test Run.
Copy Variants of Test Case - By clicking on the 'Copy Variants of Test Case' button (Shift + F12) you can select from the existing ListCube Test Cases and add all of the ListCube Variants in it into the current Test Run as copies.
Copy Variants - Please refer to the 7.2.11 ('Copy ListCube Variants') section for function details.

Generate tasks for before image

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step 'Create before images'.

Create before images

By double clicking on this step, you first define the number of background jobs to be used to create the before image for each of the specified ListCube variants. The image creation of some ListCube variants can fail due to too many rows being read. This is a safety limit that can be adjusted by changing the setting 'Max. ListCube image row size'. See the Settings section for more details on how to change this setting.
If the KATE setting 'ListCube additional aggreg.' is set to 'X', the image that is returned from an InfoProvider during the task execution is aggregated again. This functionality can be used for cases when a ListCube returns multiple rows with the same key because of unsupported functions.
KATE is capable of handling numeric aggregation overflows. Automatic overflow handling is executed when 'ListCube additional aggreg.' is set to 'X' and the aggregation of results fails because of a numeric overflow. In such a case the overflowing key figure is set to 0 for all returned rows; if multiple key figures overflow, all of them are set to 0. Overflow handling is always logged when it is used. This feature enables you to create images for InfoProviders that would otherwise not be possible because of an overflow error on DB level. By turning off 'DB Aggregation' in ListCube variants, all aggregations are performed in KATE and all non-overflowing key figures are correctly saved in the image. 'DB Aggregation' is always preferred for performance reasons and additional aggregation should be used for handling special situations.
 During additional aggregation the whole data image of rows returned from the InfoProvider needs to be loaded into memory at once. Memory restrictions of work processes can cause the task execution to fail if these thresholds are reached during additional aggregation.
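
The overflow handling described above can be pictured with a minimal ABAP sketch (assumed logic for illustration only, not KATE's actual code): an arithmetic overflow during aggregation is caught and the affected key figure is zeroed.

DATA: lv_total TYPE p LENGTH 8 DECIMALS 2,
      lv_value TYPE p LENGTH 8 DECIMALS 2 VALUE '9999999999999.99'.

TRY.
    DO 2 TIMES.
      " aggregate one key figure value; the second addition overflows
      lv_total = lv_total + lv_value.
    ENDDO.
  CATCH cx_sy_arithmetic_overflow.
    " the overflowing key figure is set to 0 and the event is written to the task log
    lv_total = 0.
ENDTRY.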

Generate tasks for after image

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step 'Create after images'.

Create after images

You define the number of background jobs to be used for the creation of the after image for each of the specified ListCube variants. The image creation of some ListCube variants can fail due to too many rows being read. This is a safety limit that can be adjusted by changing the setting 'Max. ListCube image row size'. See the Settings section for more details on how to change this setting.
If the KATE setting 'ListCube additional aggreg.' is set to 'X', the image that is returned from an InfoProvider during the task execution is aggregated again. This functionality can be used for cases when a ListCube returns multiple rows with the same key because of unsupported functions.
KATE is capable of handling numeric aggregation overflows. Automatic overflow handling is executed when 'ListCube additional aggreg.' is set to 'X' and the aggregation of results fails because of a numeric overflow. In such a case the overflowing key figure is set to 0 for all returned rows; if multiple key figures overflow, all of them are set to 0. Overflow handling is always logged when it is used. This feature enables you to create images of InfoProviders that would otherwise not be possible because of an overflow error on DB level. By turning off 'DB Aggregation' in ListCube variants, all aggregation is done in KATE and all non-overflowing key figures are correctly saved in the image. 'DB Aggregation' is always preferred for performance reasons and additional aggregation should be used for handling special situations.
 During additional aggregation the whole data image of rows returned from the InfoProvider needs to be loaded into memory at once. Memory restrictions of work processes can cause the task execution to fail if these thresholds are reached during additional aggregation.

Generate tasks for comparison

You can specify key figure InfoObjects that are ignored during the comparison. These ignored key figure columns are not visible in the 'Display Results' step. The generation of tasks is executed by clicking on the 'Create Tasks' (F8) button.
You can enable Automated Root Cause Analysis for comparison tasks. Refer to Automated Root Cause Analysis section for details. 



Figure 200: Selection of ignored key figure columns


 It is not possible to ignore characteristic columns as those are used as unique keys for record matching during comparison.

Compare before and after images

You define the number of background jobs to be used for task execution. Each task compares the before image data with the after image data for one ListCube Variant; please refer to the 'Comparison logic' chapter.

Display results

You can view the outputs and their comparison results of the ListCube Variant executions; please refer to the 'Results overview' chapter.

Test Scenario EQS_SLO_L (SLO ListCube testing)

The SLO ListCube scenario shares many of the same steps with the standard ListCube scenario; the difference is the additional functions to create converted images based on the defined mappings.



Figure 201 SLO ListCube Testing Scenario


SLO ListCube testing contains the following steps:

  1. Selecting ListCube Variants
  2. Generation of before image tasks
  3. Creation of before images
  4. Defining mapping
  5. Generation of convert tasks
  6. Creation of converted images
  7. Performing specific tasks in system outside KATE (conversion, etc.)
  8. Generation of after image tasks
  9. Creation of after images
  10. Generation of comparison tasks
  11. Execution of comparisons tasks
  12. Display of results

Select ListCube Variants

By double clicking on the first Test Step 'Select ListCube Variants' you can define which ListCube Variants are to be used in this run for testing, or create new ListCube Variants. The list consists of all ListCube Variants selected for this Test Run.
Once you have the variants selected for the Test Run, you need to save the selection by clicking on the 'Save' button (Ctrl + S) in the main screen of the ListCube variant selection.
You can add ListCube Variants in the following ways:
Create New ListCube Variant - You can create a new ListCube Variant by clicking on the 'Create New ListCube Variant' button (Shift + F1). For more information please refer to the 7.2.1 Create new ListCube Variant chapter.
Add Existing ListCube Variant – You can display all of the existing ListCube variants in the system by clicking on the 'Add Existing ListCube Variant' button (Shift + F4) and select one or more variants to be added into the Test Run.
Add ListCube Variants of Run - By clicking on the 'Add ListCube Variants of Run' button (Shift + F5) you can select another (distinct from current) run and add all of the ListCube Variants used in it into the current Test Run.
Copy ListCube Variants of Run - By clicking on the 'Copy ListCube Variants of Run' button (Shift + F8) you can select another (distinct from current) run and add all of the ListCube Variants used in it into the current Test Run as copies.
Generate ListCube Variants - By clicking on the 'Generate ListCube Variants' button (Shift + F2) a dialog for the ListCube variant generation is displayed. Please refer to the 7.2.2 ('Generate ListCube Variants') chapter.
Generate RFC ListCube Variants - By clicking on the 'Generate RFC ListCube Variants' button a dialog for the generation of ListCube variants with an RFC destination is displayed. Please refer to 7.2.3 ('Generate RFC ListCube Variants').
Add Variants of Test Case - By clicking on the 'Add Variants of Test Case' button (Shift + F11) you can select from the existing ListCube Test Cases and add the ListCube Variants in it into the current Test Run.
Copy Variants of Test Case - By clicking on the 'Copy Variants of Test Case' button (Shift + F12) you can select from the existing ListCube Test Cases and add all of the ListCube Variants in it into the current Test Run as copies.
Copy Variants - Please refer to the 7.2.11 ('Copy ListCube Variants') section for function details.

Generate tasks for before image

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step 'Create before images'.

Create before images

By double clicking on this step, you first define the number of background jobs to be used to create the before image for each of the specified ListCube variants. The image creation of some SLO ListCube variants can fail due to too many rows being read. This is a safety check; the limit can be adjusted by changing the setting 'Max. ListCube image row size'. See the Settings section for more details on how to change this setting.

Define mapping

By clicking on this step you can define the mapping that should be used to convert the before images and create converted images. Currently there are two types of mapping that the user can use/create.

 

Figure 202 Selecting mapping for image conversion


Custom mapping
By clicking on the  button, a screen is displayed where you can specify which program and which form should be called for each row of the before image that should be converted.



The creation of the custom mapping is confirmed by pressing the 'Enter' button. The KATE SLO ListCube scenario expects the called form to have this parameter:

  • cs_line – changing parameter; type ANY

The cs_line parameter holds the actual line that should be changed based on your custom mapping logic.
 KATE always tries to call the supplied form once before the actual conversion of data, providing an empty line to check the correct specification of the custom mapping. This first call should be ignored in your implementation of the conversion form.
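
A minimal sketch of such a conversion form is shown below. The form name, the field name and the mapping logic are illustrative assumptions only; the fixed part is the single CHANGING parameter cs_line of type ANY.

FORM z_kate_convert_line CHANGING cs_line TYPE any.
  FIELD-SYMBOLS <lv_comp_code> TYPE any.

  " Access one field of the generically typed line (field name is an example).
  ASSIGN COMPONENT 'COMP_CODE' OF STRUCTURE cs_line TO <lv_comp_code>.
  IF sy-subrc <> 0.
    RETURN. " field not present in this InfoProvider line
  ENDIF.
  IF <lv_comp_code> IS INITIAL.
    RETURN. " first (empty) validation call from KATE - ignore it
  ENDIF.

  " Illustrative mapping logic: remap one company code to another.
  IF <lv_comp_code> = '1000'.
    <lv_comp_code> = '2000'.
  ENDIF.
ENDFORM.
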
It is also possible to define the following form in the program you specified when creating the custom mapping.
Name: KATE_MAPPING_INITIALIZATION
Parameters:

  • iv_infoprov – using parameter; type RSINFOPROV
  • it_components – using parameter; type ABAP_COMPONENT_TAB

If this form exists in the custom mapping program, KATE calls it once before the actual conversion of data. This way you can make decisions based on the name of the InfoProvider being converted and/or the components of the line to be converted.
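
A minimal sketch of this optional initialization form is shown below. Only the form name and its two USING parameters come from the description above; the body and the global variables are illustrative assumptions.

" Global variables used by the conversion form (hypothetical names).
DATA: gv_infoprov      TYPE rsinfoprov,
      gv_has_comp_code TYPE abap_bool.

FORM kate_mapping_initialization USING iv_infoprov   TYPE rsinfoprov
                                       it_components TYPE abap_component_tab.
  " Called once by KATE before the actual conversion of data.
  gv_infoprov = iv_infoprov.

  " it_components describes the line structure, e.g. check whether a field exists.
  READ TABLE it_components TRANSPORTING NO FIELDS WITH KEY name = 'COMP_CODE'.
  IF sy-subrc = 0.
    gv_has_comp_code = abap_true.
  ENDIF.
ENDFORM.
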
KATE standard mapping
By clicking on the  button, a screen is displayed (Figure 203) where you can define standard KATE mapping without the need to create custom mapping reports.



Figure 203 KATE Standard mapping creation


In each row you can define a characteristic and its old value that should be changed to the new value. For each mapping rule you can define conditions that need to be satisfied for the mapping to be applied (by clicking on the  button). When multiple mappings can be applied to a field value based on conditions (during the actual conversion), the first applicable mapping rule in the list is used.
Conditions can be specified for each row.



Figure 204 Setting conversion-mapping conditions


For each mapping rule row you can specify multiple conditions. The specified conditions are then combined into one rule in the following way: multiple condition values for one characteristic are evaluated together using OR, while different characteristics are evaluated together using AND. So for the conditions defined in Figure 204 the resulting condition would look as follows:
IF ((0COMP_CODE = 1000 OR 0COMP_CODE = 1100) AND 0SALES_ORG = AA125) => use mapping
If no mapping rules are specified, the resulting converted image should be the same as the before image.
It is possible to upload mapping rules with conditions from a CSV file. By clicking on the  'Upload mapping from CSV file' button, you can select a CSV file from the system that contains the defined mapping.



Figure 205 Upload mapping from CSV


You can define the separator character used in the CSV file and the number of lines that should be ignored at the top of the file. The data structure in the CSV file has to adhere to the rules described in the help text available under the  'Information' button.

Generate tasks for conversion

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step 'Convert before images'.

Convert before images

By double clicking on this step, you first define the number of background jobs to be used to create the converted image for each of the specified ListCube variant before images.

Generate tasks for after image

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step 'Create after images'.

Create after images

You define the number of background jobs to be used for the creation of the after image for each of the specified ListCube variants. The image creation of some SLO ListCube variants can fail due to too many rows being read. This is a safety check; the limit can be adjusted by changing the setting 'Max. ListCube image row size'. See the Settings section for more details on how to change this setting.

Generate tasks for comparison

By clicking on this task you can define which already created image you would like to compare with the after image. You can either select the standard before image or the converted image created using the specified mapping.



Figure 206 Selection of image to be compared


After clicking on the  button, comparison tasks are generated in the next scenario step 'Compare Images'.

Display results

You can view the outputs and their comparison results of the ListCube Variant executions; please refer to the 'Results overview' chapter.

Drill Down testing

The KATE Drill Down Test Scenario uses the ListCube functionality in a similar way as in the ListCube test scenario and uses the RSA1 'Display Data' functionality to get the data from the InfoProviders to be tested. 
Recommendations:

  • For one DrillDown testing run, a maximum of 1000 Drill Down variants should be used.
  • It is recommended to set the KATE setting 'DrillDown maximum records' to a value under 100 in order to reduce the amount of data to process and reduce the chance of errors.

Test Scenario EQS_DRILL (Drill Down testing)



Figure 207: Drill Down Testing Scenario 


Drill Down testing contains the following steps:

  1. Selecting Drill Down Variants
  2. Generation of execution tasks
  3. Execution of Drill Down Tasks
  4. Display of results

Select Drill Down Variants

You can select which of the Drill Down Variants are to be used or create new Drill Down Variants to be added to the Test Run. When the Test Run is generated through the KATE Test management, the variants are already preselected from the Drill Down Test Case.
Once the variants are selected for the Test Run, you can save the selection by clicking on the 'Save' button (Ctrl + S).
Drill Down Variants can be added to the list in the following ways:
Create New Drill Down Variant - (Shift + F1). For detailed information about the creation of a Drill Down variant please refer to the 7.4.1 ('Create new Drill Down Variant') section.
Add Existing Drill Down Variant - (SHIFT + F4) displays all of the existing Drill Down Variants in the system, and you can select one or more variants to be added into the current Test Run. 
Add Drill Down Variants of Run - (Shift + F5) you can select the Variants used in another (distinct from current) run and add them into the current run as well.
Generate Drill Down Variants - (SHIFT + F6) please refer to 7.4.2 ('Generate Drill Down Variants') section for more information.
Copy Variants - Please refer to the 7.4.7 ('Copy DrillDown Variants') section for function details.

Generate Tasks for Drill Down

The tasks are generated for the following step 'Execute Drill Down Tasks'. You can specify the key figure InfoObjects to be ignored during the comparison of the Drill Down Variants specified for this run. These ignored key figure columns are not visible in the 'Display Results' step. The generation of the tasks is executed by clicking on the 'Create Tasks' (F8) button.
You can enable Automated Root Cause Analysis for comparison tasks. Refer to Automated Root Cause Analysis section for details. 



Figure 208: Selection of ignored key figure columns


 It is not possible to ignore characteristic columns as those are used as unique keys for record matching during comparison.

Execute Drill Down Tasks

Double click to execute; you first define the number of background jobs to be used for task execution. Each task executes a Drill Down test for one Drill Down Variant. Drill Down scenario testing is explained below.
A Drill Down test variant compares the data from two different InfoProviders; in most cases it is the same InfoProvider but on different systems. Based on the Drill Down characteristics selected in the variant definition, the execution starts by taking the first specified Drill Down characteristic and adding it to a ListCube selection, which is then read on both InfoProviders. The ListCube outputs are immediately compared. In two out of three cases the Drill Down test execution ends here:

  1. One InfoProvider did not return data while the other one did. In this case the first InfoProvider has no data and it is advisable not to continue, as further comparisons would fail.
  2. The data returned by both InfoProviders is the same. As everything is correct, the Drill Down task stops the execution to free up resources for other tasks.

In the third case some or all of the data returned by the InfoProviders is not the same. A new test cycle then begins and the erroneous data is checked. Based on the KATE setting 'DrillDown maximum records', up to X distinct values of the first added characteristic (the one added at the start of the first cycle) that belong to erroneous records are selected. These values act as filter values for this characteristic in the next test cycle (drilling down into the erroneous records).
The next characteristic in the order of the specified Drill Down characteristics (if no more are available, the execution stops here) is added to the ListCube selection for the output. The ListCube reads are repeated for both InfoProviders with the new settings and the data is compared again. Afterwards, depending on the result of the comparison, either a new test cycle begins using the same logic, or the execution ends.
If the KATE setting 'DrillDown additional aggr.' is set to 'X', the data returned in each cycle when the InfoProviders are read is aggregated again. This functionality can be used for cases when a ListCube returns multiple rows with the same key because of unsupported functions.
 During additional aggregation the whole data image of rows returned from the InfoProvider needs to be loaded into memory. Memory restrictions of work processes can cause the task execution to fail if these thresholds are reached during additional aggregation.

Display results

By double clicking on this step, the Drill Down Variant execution outputs and their comparison results are displayed. Please refer to the 'Results overview' chapter for more details.

Transformation testing

Transformations are used during the execution of Data Transfer Processes (DTPs). This scenario is used to test changes/updates of the logic used in the transformations (lookups, rules, routines).

Test Scenario EQS_DTP (Transformations testing)



Figure 209: Transformation Testing Scenario


This test scenario consists of the following steps:

  1. Select DTP Variants
  2. Generate tasks for DTP generation
  3. Generate DTPs
  4. Generate tasks for before DTP load
  5. Load DTP for Before Image
  6. Generate tasks for before DTP images
  7. Create Before Images
  8. Performing some specific task in system outside KATE (like conversion, archiving, migration, etc.)
  9. Generate tasks for after DTP load
  10. Load DTP for After Image
  11. Generate tasks for after DTP images
  12. Create After Images
  13. Generate tasks for comparison
  14. Compare before and after images
  15. Display Results
  16. Display Lookup Results
  17. Generate tasks for clean up
  18. Clean up requests
  19. Clean up DTP


Select DTP variants

The first step displays a screen for defining the DTP variants to be tested during the run.

 

Figure 210: Selection of DTP variants


You can use standard KATE functionality to create a new DTP variant, add existing variants or add a collection of variants from another Test Run.
After selecting the DTP variants, save the selection with the 'Save' button (Ctrl + S) for the current Test Run.
Create New DTP Variant - (Shift + F1) a dialog is displayed where you can specify the new variant. Please refer to 7.3.1 ('Create New DTP Variant').
Add existing DTP Variant – (Shift + F2) you can view a list of all the existing DTP Variants in the system. Here you can select the variants to be added to the Test Run.
Add DTP Variants of Run - (Shift + F4) you can choose to add all variants used in another run to this Test Run.
Copy DTP Variants of Run - (Shift + F8) you can choose to add all variants used in another run to this Test Run as copies.
Add variants of Test Case - (Shift + F11) you can choose to add all variants used in an existing Test Case to this Test Run.
Copy variants of Test Case - (Shift + F12) you can choose to add all variants used in an existing Test Case to this Test Run as copies.
Copy Variants - Please refer to the 7.3.7 ('Copy DTP Variants') section for function details.

Generate tasks for DTP generation

Generates the tasks for all DTP variants in the Test Run for the following step 'Generate DTPs'.

Generate DTPs

This step groups the tasks to be executed into multiple background jobs. Each task generates one DTP based on the transformations specified in the DTP variant. If a different after image variant is specified for the DTP variant, a DTP is also generated for that variant's settings using the same logic.
Generated DTPs are created with these settings:

  • Extraction Mode – Full;
  • Package Size – 50 000;
  • Extraction from – Active Table (Without Archive) – can be changed with KATE settings (see Settings chapter).
Note: You can always modify the generated DTP settings and filters through the standard RSA1 transaction and the generated DTPs are also visible there.
Note: DTPs generated by the KATE tool have their description generated in the same way as in standard DTPs; the only difference is an added prefix to the beginning. This Prefix can be changed in the KATE settings and can be up to 8 characters long. Please refer to Settings chapter.

Generate tasks for before DTP load

Generates tasks for the following step 'Load DTP for Before Image' for all DTPs in the Test Run.

Load DTP for Before Image

This step is a group task that can be executed using multiple background jobs. Each task executes one generated DTP (if an RFC Destination was specified for the DTP Variant, the DTP load is executed on the specified target system).
In KATE, the maximum waiting time for each task can be specified (parameter 'DTP load wait time'). If this time is exceeded (the load takes too long), you are informed via the log and should check the status of the load manually. It is necessary to wait for all loads to finish before you continue to the next scenario steps.

Generate tasks for before DTP images

Generates the tasks for the following step 'Create Before Images' for all DTP variants in the Test Run.

Create Before Images

Each task creates a snapshot of request data loaded for one DTP from the previous steps and all loaded data of the request is stored.

Generate tasks for after DTP load

Generates the tasks for the following step 'Load DTP for After Image' for all DTP variants in the Test Run.

Load DTP for After Image

Each task executes one DTP (if an RFC Destination was specified for the DTP Variant, the DTP load is executed on the specified target system). If a different after image DTP Variant is specified, then its generated DTP is used for loading.
In KATE, the maximum waiting time for each task can be specified (parameter 'DTP load wait time'). If this time is exceeded (the load takes too long), you are informed via the log and should check the status of the load manually. It is necessary to wait for all loads to finish before you continue to the next scenario steps.

Generate tasks for after DTP images

This step automatically generates the tasks for the DTP variants that are going to be used in the 'Create After Images'.

Create After Images

Each task creates a snapshot of the request data loaded by a DTP and all loaded data of request is stored.

Generate tasks for comparison

You can select InfoObjects that are ignored during the comparison for the DTP Variants of this run. These ignored InfoObject columns are not visible when you 'Display Results'. The generation of tasks is executed by clicking on the 'Create Tasks' (F8) button.



Figure 211: Selection of ignored key figure columns

Compare before and after images

These tasks compare the before image with after image of the DTP Variants that were created previously.

Display Results

Double click to display the results of the testing; refer to the 'Results overview' chapter for more information.

Display Lookup Results (Optional)

Here the lookup data testing results are displayed. When DVD Lookup Translator is installed in the system and configured to provide KATE with pre and post lookup data, KATE also creates images of the provided data and compares them in the 'Compare before and after images' step.
 Please note that the lookup data testing enhancement is not supported for RFC DTP Test Cases.

Generate tasks for cleanup

This step automatically generates the tasks for the following steps 'Clean up requests' and 'Clean up DTP' for all DTPs in the Test Run.

Clean up requests

Each task looks into the saved parameters of a variant, finds the before/after image load requests that were executed and then deletes these from the system.
The saved image data with comparison results are preserved after the clean up steps have finished their execution. If the DTP Test Case is defined with an RFC Destination, these requests are deleted in the appropriate system. By executing this step once the 'Create Before Images' step is complete, you can separately clear test loads from the systems after the before image is taken. The status can also be reset after the 'Create After Images' step and the step rerun to clear the after image loads as well.

Clean up DTP

Each task looks into the saved parameters of one variant, searches for the generated DTPs and deletes them from the system. If the DTP Test Case is defined with an RFC Destination, the DTPs are deleted on the appropriate system. The saved image data with comparison results are preserved after these steps finish their execution.

Table based testing

Table based testing uses data directly from the database tables. Based on the user's specification, the selected table columns are read from the database table and used to create the before and after images. These images are then compared.
Recommendations:

  • For one Table testing run a maximum of 1000 Table variants should be used.

Test Scenario EQS_TABLE (Table testing)



Figure 212 Table Testing Scenario


Table testing contains the following steps:

  1. Selecting Table Variants
  2. Generation of before image tasks
  3. Creation of before images
  4. Performing specific tasks in system outside KATE (like conversion, migration, etc.)
  5. Generation of after image tasks
  6. Creation of after images
  7. Generation of comparison tasks
  8. Execution of comparison tasks
  9. Display of results

Select Table Variants

By double clicking on the first Test Step 'Select Table Variants' you can define which Table variants are to be used in this Test Run for testing, or you can simply create new Table variants.
Once you have the variants selected for the Test Run, you need to save the selection by clicking on the 'Save' button (Ctrl + S) in the main screen of the Table variant selection.
You can add Table Variants in the following ways:
Create New Table Variant – You can create new Table Variant by clicking on 'Create New Table Variant' button (Shift + F1). For more information please refer to 7.5.1 Creating new Table Variant chapter. 
Add Existing Table Variant – You can display all of the existing Table variants in system by clicking on the 'Add Existing Table Variant' button (Shift + F2). You can then select one or more variants to be added into this Test Run. 
Add Table Variants of Run – By clicking on the 'Add Table Variants of Run' button (Shift + F4) you can select another (distinct from current) run, and add all of the Table Variants of selected run into the current Test Run. 
Copy Table Variants of Run – By clicking on the 'Copy Table Variants of Run' button (Shift + F8) you can select another (distinct from current) run and add all of the Table Variants of selected run into the current Test Run as copies. 
Add Variants of Test Case– By clicking on the 'Add Variants of Test Case' button (Shift + F11) you can select an existing Test Case, and add the Table Variants of selected Test Case into the current Test Run.
Copy Variants of Test Case– By clicking on the 'Copy Variants of Test Case' button (Shift + F12) you can select an existing Test Case, and add all of the Table Variants of selected Test Case into the current Test Run as copies. 
Copy Variants - Please refer to the 7.5.10 ('Copy Table Variants') section for function details.

Generate tasks for before image

To generate the tasks, you can double click the Test Step 'Generate tasks for before image' to execute it. This will prepare the tasks for the following step 'Create before images'.

Create before images

By double clicking on this step, you first define the number of background jobs to be used to create the before image for each of the specified Table variants. The image creation of some Table variants can fail due to too many rows being read. This is a safety limit that can be adjusted by changing the setting 'Max. Table image row size'. See the Settings section for more details on how to change this setting.
KATE is capable of handling numeric aggregation overflows. Automatic overflow handling is executed when the 'Table own aggregation' setting is set to 'X' and the aggregation of results fails because of a numeric overflow. In such a case the overflowing key figure is set to 0 for all returned rows; if multiple key figures overflow, all of them are set to 0. Overflow handling is always logged. This feature enables you to create images for Tables that would otherwise not be possible because of an overflow error on DB level. DB aggregation is always preferred for performance reasons and own aggregation should be used for handling special situations.

Generate tasks for after image

To generate the tasks, you can double click the Test Step 'Generate tasks for after image' to execute it. This will prepare the tasks for the following step 'Create after images'.

Create after images

By double clicking on this step, you first define the number of background jobs to be used to create the after image for each of the specified Table variants. The image creation of some Table variants can fail due to too many rows being read. This limit can be adjusted by changing the setting 'Max. Table image row size'. See the Settings section for more details on how to change this setting.
KATE is capable of handling numeric aggregation overflows. Automatic overflow handling is executed when the 'Table own aggregation' setting is set to 'X' and the aggregation of results fails because of a numeric overflow. In such a case the overflowing key figure is set to 0 for all returned rows; if multiple key figures overflow, all of them are set to 0. Overflow handling is always logged. This feature enables you to create images for Tables that would otherwise not be possible because of an overflow error on DB level. DB aggregation is always preferred for performance reasons and own aggregation should be used for handling special situations.

Generate tasks for comparison

To generate the tasks, you can double click the Test Step 'Generate tasks for comparison' to execute it. This will prepare the tasks for the following step 'Compare before and after images'.

Compare before and after images

You define the number of background jobs to be used for task execution. Each task compares the before image data with the after image data for one Table Variant; please refer to the 'Comparison logic' chapter.

Display results

You can view the outputs and their comparison results of the Table Variant executions; please refer to the 'Results overview' chapter.

Report based testing

The Report based testing scenario executes and focuses on standard ERP reports, with or without variants. The reports are executed in background jobs and their output is saved to the spool. The outputs are then read after the execution and compared based on your preferred settings.
Recommendations:

  • A maximum of 1000 Report variants should be used for one Report testing run.

Test scenario EQS_ERP (Report testing)



Figure 213 Report Testing Scenario


Report testing contains the following steps:

  1. Selecting Report Variants
  2. Generation of before image tasks
  3. Creation of before images
  4. Performing specific task in system outside KATE
  5. Generation of after image tasks
  6. Specifying comparison settings
  7. Generation of comparison tasks
  8. Execution of comparison tasks
  9. Display of results

Select Report Variants

By double clicking on the first Test Step 'Select Report Variants' you can define which Report Variants are to be used in this run for testing, or create new Report Variants. The list consists of all Report Variants selected for this Test Run.
Once you have the variants selected for the Test Run, you need to save the selection by clicking on the 'Save' button (Ctrl + S) in the main screen of the Report variant selection.
You can add Report Variants in the following ways:
Create New Report Variant - You can create a new Report Variant by clicking on the 'Create New Report Variant' button (Shift + F4). For more information please refer to the 7.6.1 Create New Report Variant chapter.
Add Existing Report Variant – You can display all of the existing Report variants in the system by clicking on the 'Add Existing Report Variant' button (Shift + F5) and select one or more variants to be added into the Test Run.
Add Report Variants of Run - By clicking on the 'Add Report Variants of Run' button (Shift + F6) you can select another (distinct from current) run and add all of the Report Variants used in it into the current Test Run.
Copy Report Variants of Run - By clicking on the 'Copy Report Variants of Run' button (Shift + F8) you can select another (distinct from current) run and add all of the Report Variants used in it into the current Test Run as copies.
Add Variants of Test Case - By clicking on the 'Add Variants of Test Case' button (Shift + F11) you can select an existing Test Case and add all of its Report Variants into the current Test Run.
Copy Variants of Test Case - By clicking on the 'Copy Variants of Test Case' button (Shift + F12) you can select an existing Test Case and add all of its Report Variants into the current Test Run as copies.
Copy Variants - Please refer to the 7.6.5 ('Copy Report Variants') section for function details.

Generate tasks for before image

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step 'Create before images'.

Create before images

By double clicking on this step, you first define the number of background jobs to be used to create the before image for each of the specified Report variants.

Generate tasks for after image

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step 'Create after images'.

Create after images

You define the number of background jobs to be used in the creation of the after image of the specified Report variants.

Set Comparison settings

By double clicking on this step you can define some standard settings that should be used during the comparison of the report outputs.



Figure 214 Report comparison settings


You can define:

  1. Report comparator type – KATE comes by default with one report comparator (Simple Report Comparator) that compares the report output row by row. Alternatively, you can select your own comparator implementation to be used. To create your own BADI comparator you need to create an implementation of the KATE '/DVD/EQS_BADI_REP_COMP' BADI.
  2. Ignore TimeStamps – the KATE default report comparator is capable of ignoring time stamps in a number of standard formats, which often cause differences in report outputs because the reports are executed at different times.
  3. Compare only table data – the KATE default report comparator can ignore the headers of standard reports and compare only the actual table data saved in the spool output of the executed reports.

You save the specified comparison settings by clicking on the 'Enter' button.

Generate tasks for comparison

To generate the tasks, double click on this step to execute it. This prepares the tasks for the following step 'Compare before and after images'.

Display of results

By clicking on this step you can review the output and comparison results of the reports executed in the previous steps. There are some differences between the report testing scenario and the other scenarios; these are described in more detail in the section Reports Display Results.

Comparison logic

All Test Scenarios use this comparison logic to validate whether the data in the after image differs from the data in the before image. This section describes how data is compared in the test scenarios.

Comparison of query results

Each query output is separated into rows, with each row split into a key part and a data part. The Key part is defined by all the characteristic columns on the left side of the output, while all of the other columns define the data part (see Figure 215).



Figure 215: Key and Data part of query output


When the key/data parts (images) have been created for both the Before and After image, the corresponding rows are matched using the Key part.
There are cases when there is no Key part in the query output; in this situation KATE uses the row position to match the appropriate rows between both images. When multiple rows have the same Key part, KATE picks the corresponding row from the other image by position. If the corresponding row does not belong to the group of rows with the matching Key part, the row is highlighted with yellow color to indicate the need for a manual check.
To prevent SAP rounding differences, you can specify in the KATE settings the number of decimal places used for the Query results comparison. Please refer to the Settings chapter to see how this setting can be changed.
If a row from either image does not have a matching pair in the other image, the row is colorized with a red color; the data fields are colorized along with the row number. Missing rows can occur in both the Before and After image.



Figure 216: Missing row in other image


All corresponding rows are also compared on the data part. If any differences are found in the data part, the appropriate data cells are colorized with a red color to mark the difference. Only cells in the After image are colorized, as the Before image values are taken as the correct ones.


Figure 217: Incorrect data in After image
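
The matching logic above can be summarized in a simplified ABAP sketch (assumed structures for illustration only, not KATE's actual implementation):

TYPES: BEGIN OF ty_row,
         key_part  TYPE string,  " concatenated characteristic values
         data_part TYPE string,  " concatenated key figure values
       END OF ty_row.
DATA: lt_before TYPE TABLE OF ty_row,
      lt_after  TYPE TABLE OF ty_row,
      ls_before TYPE ty_row,
      ls_after  TYPE ty_row.

LOOP AT lt_before INTO ls_before.
  READ TABLE lt_after INTO ls_after WITH KEY key_part = ls_before-key_part.
  IF sy-subrc <> 0.
    " no matching row in the After image -> row is marked red as missing
  ELSEIF ls_after-data_part <> ls_before-data_part.
    " key found but data differs -> differing cells in the After image are marked red
  ENDIF.
ENDLOOP.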

Comparison of ListCube/Drill Down results

The logic for ListCube comparison is similar to query comparison. Each row of the ListCube output is split into Key and Data parts. The Key part for ListCube output is also visually distinguished in the output table for better orientation. 



Figure 218: ListCube output Key and Data parts


To prevent rounding differences, you can use the KATE settings to specify the number of decimal places used for the ListCube results comparison. Please refer to the Settings chapter to see how this setting can be changed.
To prevent float number differences you can set a threshold value. If the difference between the before/after image values is smaller than the threshold value, the comparison is evaluated as correct. Please refer to the Settings chapter to see how this setting can be changed.
The full logic of the numeric value comparison is described in Figure 219. The left side shows a flow diagram of the comparison logic, while the right side contains an example of the comparison logic.



Figure 219: KATE numeric value comparison flow diagram
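
A minimal sketch of this numeric check (assumed logic based on the settings described above, not KATE's actual code): both values are rounded to the configured number of decimal places and the remaining difference is compared against the threshold.

DATA: lv_before    TYPE decfloat34 VALUE '100.004',
      lv_after     TYPE decfloat34 VALUE '100.006',
      lv_decimals  TYPE i VALUE 2,               " 'decimal places' setting
      lv_threshold TYPE decfloat34 VALUE '0.01'. " float threshold setting

lv_before = round( val = lv_before dec = lv_decimals ).
lv_after  = round( val = lv_after  dec = lv_decimals ).

IF abs( lv_after - lv_before ) <= lv_threshold.
  " difference is within the threshold -> values are considered equal
ELSE.
  " values differ -> cell is marked as an error
ENDIF.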


It is possible to specify a precise comparison of the RowCount columns of the images (if they are present) by changing the value of the appropriate setting (TABLE_ROWCNT_PREC / LIST_ROWCNT_PREC).
When the Before and After images of the ListCube output come from different InfoProviders (or the structure of the InfoProvider is changed at the same time as the After image creation), some rules apply on top of the existing comparison logic:

  • If the Key part structure is not the same in both the Before and After image, the ListCube outputs cannot be compared.
  • If a Data part column is missing or added in the Before/After image, this column is not checked and is not treated as an error.

Automated Root Cause Analysis

For ListCube, Drill Down and Table comparison tasks, an automated root cause analysis can be enabled. Automated root cause analysis works in the following way:
Before the last Drill Down/ListCube/Table comparison, a table of Keys to be ignored is created. This table is created for both the A and B InfoProviders in the Drill Down scenario and for the before and after image InfoProviders in the ListCube scenario. For Table variants the table is created only for the appropriate InfoProviders if the changelog table of a standard DSO or the active table of a write-optimized DSO is compared.
Each InfoProvider is checked for the last delta loading requests of all DTP and IP executions. The data of these last loaded delta requests is searched and the ignored keys table is created. The ignored keys table contains all combinations of Keys (with the same structure as the compared Drill Down/ListCube key part) that were present in the last delta loads.
When a comparison error is found during the comparison, the key part of the erroneous row is checked against the ignored keys table. If the key is found there, it means that the data of this row was affected by the last delta load requests and the error is ignored. Such ignored erroneous data is colorized with yellow colour to distinguish it from real errors.
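
The check itself can be summarized in a simplified ABAP sketch (assumed structures for illustration only, not KATE's actual code):

DATA: lt_ignored_keys TYPE SORTED TABLE OF string WITH UNIQUE KEY table_line,
      lv_error_key    TYPE string.  " key part of a row that failed the comparison

READ TABLE lt_ignored_keys TRANSPORTING NO FIELDS
     WITH TABLE KEY table_line = lv_error_key.
IF sy-subrc = 0.
  " key was touched by the last delta loads -> difference is colorized yellow
  " and does not influence the overall comparison result
ELSE.
  " real error -> difference is colorized red and the comparison fails
ENDIF.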



Figure 219 Erroneous data ignored by Automated Root Cause Analysis


Such ignored erroneous data does not influence the overall comparison result. If no errors are found during the comparison except errors that were handled by Automated Root Cause Analysis, the overall comparison result is set to OK (green).
For DSOs, data cannot be read directly using the last loaded delta request IDs, so activation requests are used instead. For each found load request a corresponding activation request is found. The changelog of the DSO is then searched with the list of such activation requests, and the ignored keys table is constructed based on the data found in the changelog for the corresponding activation requests.
In the application log of a comparison task you can always see the errors that were evaluated as false errors by the Automated Root Cause Analysis.



Figure 220 Automated Root Cause Analysis log


For MultiProviders the ignored keys table is constructed as the union of the ignored keys tables of all part providers, and for SPOs it is constructed as the union of the ignored keys tables of all semantic partitions.

Comparison of Table results

The logic of the Table comparison is the same as the logic for the ListCube/Drill Down results comparison, with the following exceptions:

  • Automated Root Cause Analysis cannot be used for table comparison.
  • Key figure columns cannot be ignored in table comparison.

Comparison of DTP load results

Transformation testing often requires checking huge volumes of lines to verify that the transformation logic did not change during the test scenario. When the Before and After images are compared, the rows to be checked against each other are matched by their row position. To speed up the comparison, only hashed values of these lines are compared. The hash of a line is calculated from all of the fields in the line. Lines are colorized red if there is a difference in any cell of the compared lines.
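A minimal sketch of this position-based, hashed comparison in Python (the hash function and field handling are assumptions for illustration, not KATE's internals):

  import hashlib

  def line_hash(row: dict) -> str:
      """Hash a line over all of its fields."""
      payload = "|".join(f"{field}={row[field]}" for field in sorted(row))
      return hashlib.sha1(payload.encode("utf-8")).hexdigest()

  def compare_by_position(before_rows: list, after_rows: list) -> list:
      """Pair rows by position and flag a row as erroneous if any of its cells differ."""
      return ["red" if line_hash(b) != line_hash(a) else "ok"
              for b, a in zip(before_rows, after_rows)]

  before = [{"MATERIAL": "MAT01", "AMOUNT": 10.0}, {"MATERIAL": "MAT02", "AMOUNT": 20.0}]
  after  = [{"MATERIAL": "MAT01", "AMOUNT": 10.0}, {"MATERIAL": "MAT02", "AMOUNT": 25.0}]
  print(compare_by_position(before, after))  # ['ok', 'red']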

Lookup data comparison

If lookup data testing is included in the comparison, the following logic is applied during the data comparison. Direct lookup data includes the lookup source data coming into the lookup and the result data returned from the lookup.
The data provided by the DVD Lookup Translator comes in packages, because packages are processed in the transformation. KATE executes the comparison only when the DVD Lookup Translator provides the same number of packages for the Before and After image.
The data of each package is then compared independently of the other packages. For each package, the lookup source data is compared first. If the source data of the Before/After package is not the same, the result data is not compared, as lookup correctness can only be tested based on the same input. If the source data of the Before/After image of the package is the same, the result data is compared.
Rows of the Before/After source data are matched by row number. When saving the lookup source and result data, KATE sorts them by all fields to prevent sorting problems.
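The following simplified Python sketch illustrates this package-level logic (the package structure is hypothetical, not the DVD Lookup Translator's actual format):

  def compare_package(before_pkg: dict, after_pkg: dict) -> dict:
      """Compare one lookup package: source data first, result data only if the sources match."""
      if before_pkg["source"] != after_pkg["source"]:
          # Different input: lookup correctness cannot be judged, result data is skipped.
          return {"source": "error", "result": "not compared"}
      result_ok = before_pkg["result"] == after_pkg["result"]
      return {"source": "ok", "result": "ok" if result_ok else "error"}

  before_pkgs = [{"source": [("MAT01",)], "result": [("GROUP_A",)]}]
  after_pkgs  = [{"source": [("MAT01",)], "result": [("GROUP_B",)]}]

  # The comparison runs only when both images contain the same number of packages.
  if len(before_pkgs) == len(after_pkgs):
      print([compare_package(b, a) for b, a in zip(before_pkgs, after_pkgs)])
      # [{'source': 'ok', 'result': 'error'}]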

Results overview

When displaying the results of Query, ListCube, SLO ListCube, Drill Down or Transformation testing in the 'Display results' step, a list of all executed Test Variants is shown on the left side of the screen. Basic information about each Test Variant (depending on the Test Variant type, i.e. Query/ListCube/DTP/Drill Down) is displayed together with the following information:

  • Before Image Runtime (optional) – runtime of the Query/ListCube/DTP load/Drill Down execution in the Before image.
  • After Image Runtime (optional) – runtime of the Query/ListCube/DTP load/Drill Down execution in the After image.
  • Difference (optional) – difference in seconds between the After image runtime and the Before image runtime.
  • Result – result of the data comparison.
  • Reporting Result – reporting status set by KATE/user.
  • Before Image Creation Time (optional) – time of the before image creation.
  • Before Image Creation Date (optional) – date of the before image creation.
  • After Image Creation Time (optional) – time of the after image creation.
  • After Image Creation Date (optional) – date of the after image creation.
  • Before Image Rows (optional) – number of rows of the before image.
  • After Image Rows (optional) – number of rows of the after image.
  • Reporting Status Text (optional) – text supplied when KATE/user set the reporting status.
  • Variant ID (optional) – KATE technical ID of the variant.
  • Before Image Status (optional) – task execution status for the before image.
  • After Image Status (optional) – task execution status for the after image.
  • Before Image Overflow Handled (optional) – notification of an overflow occurrence.
  • After Image Overflow Handled (optional) – notification of an overflow occurrence.

All optional columns can be added to the results overview table by clicking on the 'Change Layout…' button. The ALV column layout for the user and scenario type is saved on exit and reapplied when the user enters this step again.



Figure 221: Different Results of Before/After image comparison


There are four types of results that each Test Variant can have:

  • Green semaphore - if the data returned by the Before and After image is the same.
  • Yellow semaphore - if neither of the images returned data.
  • Red semaphore - if inconsistencies/differences were found between the data returned in the Before and After image.
  • None - if the comparison of images failed with an error (e.g. outputs with different keys structure were supplied for comparison) or if the comparison was not yet done.

Sometimes the 'After Runtime [s]' cells, along with the 'Difference' cells, are colorized red. This happens when the difference between the Before image runtime and the After image runtime reaches a threshold value defined in the KATE settings. You can specify these threshold values by clicking on the appropriate button in the toolbar.
These settings can influence the comparison; the configured decimal precisions are also applied in the output.
You can display each variant in detail by right clicking on the appropriate variant row and selecting 'Display Variant' from the context menu.



Figure 222: Display Variant Details


The Variant details screen differs based on the type of variant that was clicked (e.g. Query Variant, Bookmark Variant, ListCube Variant, Drill Down Variant).



Figure 223: Query Variant Details Screen


You can use the 'Only Errors' button to filter out all correct records.
In the list of all Test Variants, only the variants whose comparison finished with errors are then displayed (i.e. Test Variants that did not finish with an error are filtered out of the list).



Figure 224: Query Testing scenario erroneous variants


In the actual data screens, if the 'Only Errors' mode is active, only the rows of output that have at least one cell colorized red are displayed. In the Before image output, the correct rows that correspond to After image rows with wrong data are also displayed.



Figure 225: ListCube erroneous rows and appropriate before image rows


For Transformation testing, when the 'Only Errors' mode is active, the Before image results display only the rows that correspond (by row number) to the erroneous rows of the After image.



Figure 226: Erroneous display in Transformations testing scenario

Reports Display Results

The Display Results screen for the ERP Report scenario is very similar to the other Display Results screens, with the following differences:

  • No union view is available.
  • There are three different formats in which you can review the report outputs: ALV table, text screen and HTML.


Figure 227: Three types of report output display


We recommend the HTML display type, which uses a monospaced font so the results are easily readable. Unlike the simple text display, the ALV and HTML display types can colorize the errors found in the reports.



Figure 228: Example of the HTML report output view

Navigation in outputs

For performance reasons, only 5000 lines (this value can be changed via settings) are displayed at once in the 'Display results' step for each variant output. When a variant image has more than 5000 lines in its output, the navigation buttons become active and you can page through the results. You can navigate through the Before and After image outputs independently using the navigation buttons.



Figure 229: Output buttons and position

Full screen mode

Sometimes it is useful to display the Before/After image outputs in full screen. To do so, click on the 'Full Screen' button in the application toolbar. To activate full screen mode, the results must already be displayed in the standard view. It is possible to navigate through the output result pages (if any) directly in full screen mode.



Figure 230: Full screen mode of variant output

Union screen mode

The outputs of the ListCube, Transformation and Drill Down test scenarios can be displayed together in union mode. It is accessible by clicking on the 'Union Screen Results Display' button in the toolbar and lets you display two different outputs on a single screen (Figure 231).



Figure 231: Union screen mode for results display


The first column contains information about the source of the data in each row. For the Transformation and ListCube scenarios it contains either the value 'A' (After image) or 'B' (Before image) and specifies which image the record belongs to. For the Drill Down scenario this column contains a concatenation of the InfoProvider technical name and its RFC destination system ID (if any).

For all scenarios, the rows are paired so that they can be compared together. The pairing differs by scenario: the Transformation testing scenario pairs rows by row number, while the ListCube and Drill Down scenarios match rows by their keys (a sketch of both pairing modes follows below).
*Important Note: In the current version of KATE the Query testing scenario does not support Union display mode.
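The following minimal sketch contrasts the two pairing modes in Python (the row structures are hypothetical and not KATE's internal representation):

  # Transformation scenario: the n-th row of the Before image is paired
  # with the n-th row of the After image.
  def pair_by_row_number(before_rows: list, after_rows: list) -> list:
      return list(zip(before_rows, after_rows))

  # ListCube / Drill Down scenarios: rows that share the same key part are paired.
  def pair_by_key(before_rows: dict, after_rows: dict) -> list:
      """Both inputs map a key tuple to the data part of the row."""
      return [(key, before_rows.get(key), after_rows.get(key))
              for key in sorted(set(before_rows) | set(after_rows))]

  print(pair_by_row_number(["row1_b", "row2_b"], ["row1_a", "row2_a"]))
  # [('row1_b', 'row1_a'), ('row2_b', 'row2_a')]
  print(pair_by_key({("MAT01",): 10.0}, {("MAT01",): 10.0, ("MAT02",): 5.0}))
  # [(('MAT01',), 10.0, 10.0), (('MAT02',), None, 5.0)]  -> MAT02 is missing in the Before image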

'Missing' column

For the Drill Down and ListCube Test Scenarios, a column named 'Missing' is added to the output of the InfoProviders. If a line is marked as incorrect in one of the Before/After images because no record with the same key was found in the After image, an icon is added to this column. This helps you to differentiate between erroneous and missing records; you can also use this column for filtering the results. This column is visible in all three types of results display.
*Important Note: Using the standard ALV filtering functionality on the output tables influences only the records of the currently displayed page and does not influence the records of other pages.



Figure 232: Missing Column

Reporting Status

It is possible to display the reporting status column in the table listing the tested variants for all of the backend scenarios by changing the layout of the result table. By default, this status is set to the same value as the comparison result in KATE. The reporting status is used to define the statuses of individual Test Cases for reporting. You can set the reporting status and override the compare status set by KATE by clicking on the appropriate variant result row and then selecting the 'Set Reporting Status' option (Figure 233).



Figure 233: Set Reporting Status Option


When changing the reporting status, you can choose the reporting status you want to set for the variant and add a description (Figure 234).



Figure 234: Set Reporting Status Dialog


It is possible to set the same reporting status for multiple variants at once by selecting multiple rows and choosing the 'Set Reporting Status' option.

 


Figure 235: Setting reporting status for multiple variants


If you want to unify the reporting statuses used by test users to reflect acceptable errors (e.g. 'Out Of Scope'), it is possible to specify custom reporting statuses across KATE. You can define these reporting statuses in the KATE settings under the 'Backend' settings tab. All customized reporting statuses can then be selected in the Set Reporting Status dialog (Figure 234) using the F4 help.



Figure 236: Report Statuses Customizing

Lookup Results

During the Transformation testing scenario, when KATE is provided with data from the DVD Lookup Translator, the 'Display Lookup Results' step contains this data together with the comparison results. The structure of this screen is very similar to the standard 'Display Results' screen; however, there are some differences:
In the 'Display Lookup Results' screen there are two comparison statuses for each line of the tested DTP variant; there can be multiple lines per variant in a run. The number of lines depends on the number of data packages that were processed during the load for each variant. The first comparison status reflects the comparison of the lookup source data, while the second reflects the comparison of the data returned by the lookup.



Figure 237: Lookup package comparison results


As displayed in Figure 237, when the source data comparison fails, no comparison is performed on the data returned by the lookup.
The right side of the screen displays the Before/After image data in the same way as the standard 'Display Results' screen. When you double click on the appropriate variant lookup package, the data is displayed. By default, when you display the actual data this way, the returned lookup data is displayed. To switch between the display of the source data and the result data, click on the 'Source Data' button (Shift + F7) or the 'Result Data' button (Shift + F8).

Test Run ID Locking

To prevent accidentally executing a Test Run and overwriting its current results, you can use the lock option. When a Test Run is locked, it is not possible to change the task state, execute or reset image creation or comparison, or change variants or the reporting status until it is unlocked again. Mass execution and scheduling will not run on locked Test Runs. Deletion in a Test Plan stops at the first locked Test Run. A locked Test Run is indicated by a lock icon in the Test Plan tab of Test Management and by a message with its description and the user name.
You can change the lock state of a Test Run by right clicking on the Test Case name in the Test Plan (or on the root task in the Test Case) and selecting 'Lock Run' (or 'Unlock Run') from the context menu.



Clicking on the 'Lock Run' option opens a popup window where you can set a description of the reason for locking the Test Run. Clicking on 'Unlock Run' unlocks the Test Run directly.



Lock functionality is available for these Test Case Types:

  • Query
  • ListCube
  • Table
  • DrillDown
  • ERP Report
  • DTP
  • SLO ListCube



Step by Step: Creating and executing a Test Run ID in Backend Testing

This example provides a step-by-step guide on how to create and execute a Test Run and display its results in Backend Testing.

  1. Run the transaction /DVD/KATE in your SAP BW system


  2. In the KATE Dashboard screen, choose Backend Testing (the last icon on the function panel).


Figure 238: Backend Testing

  3. In the KATE Backend Testing screen, choose Create new Test Run ID (F5).



Figure 239: Create New Run Test ID

  4. A new window appears. In the first popup, you can select the type of Test Scenario (press F4 for the list of all possible entries). Currently, there are 4 Test Scenarios to choose from. These scenarios are described in the chapter Backend Testing.


Figure 240: Adding a Test Scenario


For this example, we will choose the test scenario for Query testing – EQS.
After choosing the Test Scenario, you can enter the name for Test Run ID and the description. 



Figure 241: Completing the creation of a Test Run ID


  5. After creating a new Test Run ID, you should see an overview of all the tasks.


Figure 242: Overview of Tasks for Test Run ID 

  6. In the next step, we will add new query variants for our run.

Double click on the first task, Select Query Variants, and a new screen appears with several options for adding a new variant. Here you can do the following:

  • Create new Query Variant

This option creates a new query variant based on a selected query.

  • Create new Bookmark Variant

This option creates a new query variant based on a selected bookmark.

  • Add existing Query Variant

If you choose this option, you can choose from the existing query variants that were created previously.

  • Add Query Variant from Test Run ID

This option allows you to copy query variants from an existing Test Run ID. All query variants from the chosen Test Run ID are then added automatically.

  • Generate Query Variants from QS

You can also add new query variants from Query Statistics.

  • Create Based on Web Template

This option creates query variants for the queries of the selected web template.

  • Generate Based on Web Template Bookmark

This option creates query variants for the queries of a web template bookmark.
In our example, we will choose to Create new Query Variant.



Figure 243: Creating a new Query Variant

  7. When you click on Create new Query Variant, a new window should appear; here you need to enter the technical name of the query you want to use and a description. Other fields are optional; please refer to chapter 7.1.1 Create new Query Variant for more details.


Figure 244: Set query variables 


If the selected query requires input variables, you can set them by clicking on the 'Set query variables' button.


Figure 245: Set query variables 

  8. After you save the query variant, you can view it in the list of all query variants.


Figure 246: Set query variables

  9. Once all your query variants are added, save the selection by pressing the 'Save' button (Ctrl + S); you can then return to the Test Run ID tasks. In the next step, execute (F8 or double click) the Generate tasks for before image task. Once it has finished, the first two status icons should be green.


Figure 247: Generate Tasks for before image 


Important information: You can reset the task state of any executed task by highlighting the task and clicking on the Reset task state button (Shift + F8).



Figure 248: Generate Tasks for before image 


  10. Once the tasks are generated, you can execute the Create before image task. A new popup appears where you can specify the number of background jobs and the name of the application server to be used.

If you want the number of background jobs to stay the same even when one or more jobs fail, you can check the 'Keep alive' option. 


Figure 249: Create before Image 


You can press F5 or F9 to refresh the task monitor while the task is running. A truck icon in the Status column means that the task is running. After the run finishes, the status icon turns green, yellow or red.
It is possible to execute specific sub-tasks instead of executing all tasks at once. To display and execute these sub-tasks, click on the blue icon in the Sub-task column.



Figure 250: Display Sub-Tasks 


From here, you can execute the sub-tasks. In our example, we have a few sub-tasks; to execute one, double click on the chosen sub-task or press F8. We can observe the changing status and refresh the task monitor until the sub-task finishes and the status icon turns green.



Figure 251: Execute Sub-Task 



Figure 252: Sub-Task Completed


  11. Once the Create before image task is complete, you can start performing your specific tasks (like archiving, migration, etc.) in your system (outside KATE).


  12. Afterwards, the next step is to execute the Generate tasks for after image task.


Figure 253: Generate tasks for after image 


  13. Creating the after image is similar to creating the before image: execute the Create after image task and choose the number of background jobs for the task to run.


Figure 254: Create after Image 

  14. After both images (before and after) are ready for comparison, you should execute the task Generate tasks for comparison, followed by the task Compare before and after image. Your Test Run ID should now look similar to this one:


Figure 255: Generate task and Compare before and after image 

  15. In the Display results screen, the section on the left displays a list of all the test variants. By selecting one of the test variants, you can compare the Before and After image outputs on the right-hand side of the screen. Runtimes are displayed in seconds.

Double click on your test case to compare the results of your before and after image. 



Figure 256: Comparing Before and After results 

  16. As mentioned previously in the documentation, you can go back to any step in your Test Run by selecting Reset task state (Shift + F8).


Figure 257: Resetting steps in the Test Run ID