...

The Run ID is one instance of a Test Scenario and is executed step by step, from the top down.
To add a new Test Run ID, click the 'Create new Run' icon and fill in its name and description.



Creation of a new Test Run ID


The Test Scenario specifies the created run type (i.e. whether it is for a Query testing scenario – EQS, ListCube testing scenario – EQS_LIST, Drill Down testing scenario – EQS_DRILL, DTP testing scenario – EQS_DTP, SLO ListCube testing scenario – EQS_SLO_L or Report testing scenario – EQS_ERP). This field is automatically pre-filled based on what is selected from the list of all runs. Another way to create a new run is by selecting the 'Create Test Run ID' option from the context menu of one of the parent nodes that defines the scenario type.

...

After a new Test Run is created, you are taken to the execution run view where the individual steps can be executed. You can leave the execution view by pressing any of the standard navigation buttons 'Back', 'Exit' or 'Cancel'. To re-enter the view, double-click the desired Test Run in the test run tree or select the 'Display Test Run ID' option from its context menu.

...

To delete a Test Run, right-click the Test Run ID in the Test Run overview window (Figure 192) and choose the 'Delete Test Run ID' option. The Test Run and all of its data are removed from the system.

Note

When you delete a Test Run, all of the saved image data linked to the run is deleted, but the test variants specified in the run are not. All variants stay in the system and can be reused in other Test Runs. To delete the variants, use the Variant Editor available via the Validate Dashboard.



Reset of all Test Steps of a Test Run

Resetting Test Steps

To reset a Test Step, choose the 'Reset task state' function from the context menu of the test step. The 'Reset task state' functionality is especially useful if you want to go back to modify and retest tasks. When resetting test steps, some restrictions apply. No test step should be reset if test steps later in the Test Scenario are already finished. The 'Set task state' functionality is not recommended as it could leave certain steps with an inconsistent status. Below are some examples of when you can reset the step status:

  1. It is recommended to reset the statuses of all Test Steps (Tasks) by selecting the option from the context menu of the topmost item in the Test Scenario hierarchy.
  2. It is recommended to reset the status of a single Test Step (Task) only when all of the following Test Steps in the Test Scenario are in the initial state (were not executed yet).


Reset Test Step state

  3. It is possible, but not recommended, to reset the state of a Test Step if there are any following Test Steps that were already executed.

...


When the selection step (1st step) status is reset, you do not lose the information about the selected Test Variants. When you double-click a selection step, the previously selected variants (if any) are loaded and act as the predefined variants for the specific run. These can either be removed or a new set of variants can be added.

...

Backend Testing scenarios:

Query

Listcube Execution & Backend Testing

Drill Down testing

Transformation testing

...

All Test Scenarios use this comparison logic to validate whether the data in the After image differs from the data in the Before image. This section describes how data is compared in the test scenarios.

...

Each query output is separated into rows, with each row split into the Key part and the Data part. The Key part is defined by all the characteristic columns on the left side of the output, while all other columns define the Data part (see Figure 215).


Key and Data part of query output
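
To make the split concrete, here is a minimal Python sketch, not Validate's actual code: one output row is divided into its Key and Data parts, where the key_columns parameter and the sample row are assumptions for illustration.

    # Illustrative only: split one query output row into a Key part (the
    # leading characteristic columns) and a Data part (all other columns).
    def split_row(row, key_columns):
        """Split one output row into (key_part, data_part)."""
        key_part = tuple(row[:key_columns])   # characteristics on the left
        data_part = list(row[key_columns:])   # key figures / data cells
        return key_part, data_part

    key, data = split_row(["DE", "2023", 100.0, 250.5], key_columns=2)
    print(key, data)  # ('DE', '2023') [100.0, 250.5]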


When the Key/Data part creation is completed for the Before and After images, the corresponding rows are matched using the Key part.
There are cases when there is no Key part in the query output. In this situation, Validate uses the row position to compare the appropriate rows between both images. When multiple rows have the same Key part, Validate picks the corresponding row from the other image. If the corresponding row does not belong to the group of rows with the matching Key part, the row is highlighted in yellow to indicate the need for a manual check.
To prevent any SAP rounding differences, you can specify in the Validate settings the number of decimal places used for the query results comparison. Please refer to the Settings chapter to see how this setting can be changed.
If a row from either image does not have a matching pair, it is colorized red; the data fields are colorized along with the row number. Missing rows can occur in both the Before and After image.


Missing row in other image
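
The matching rules above can be modeled with the following Python sketch; it is an illustration, not Validate's implementation, and the match_rows function and its key_columns parameter are assumed names.

    # Assumed model of the row matching: pair rows by Key part where one
    # exists, fall back to row position otherwise, and report rows without
    # a partner (these would be colorized red).
    def match_rows(before, after, key_columns):
        """Pair Before/After rows; return (pairs, missing_before, missing_after)."""
        if key_columns == 0:
            # No Key part: pair rows purely by their position in the output.
            n = min(len(before), len(after))
            return list(zip(before, after)), before[n:], after[n:]

        after_index = {}
        for row in after:
            after_index.setdefault(tuple(row[:key_columns]), []).append(row)

        pairs, missing_before = [], []
        for row in before:
            candidates = after_index.get(tuple(row[:key_columns]))
            if candidates:
                # Rows sharing a Key part are paired in order; ambiguous pairs
                # are the ones Validate highlights yellow for a manual check.
                pairs.append((row, candidates.pop(0)))
            else:
                missing_before.append(row)  # no partner in the After image
        missing_after = [r for rows in after_index.values() for r in rows]
        return pairs, missing_before, missing_after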


All corresponding rows are also compared on the Data part. If any differences are found in the Data part, the appropriate data cells are colorized red to mark the difference. Only cells in the After image are colorized, as the Before image values are taken as the correct ones.



Incorrect data in After image
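
A short hypothetical helper illustrates the Data part check: for a matched row pair it reports the indices of the differing data cells, which would be the cells marked red in the After image.

    # Hypothetical helper: indices of differing Data part cells in a pair.
    def diff_data_cells(before_row, after_row, key_columns):
        """Return indices of Data part cells that differ between a row pair."""
        return [i for i in range(key_columns, len(before_row))
                if before_row[i] != after_row[i]]

    print(diff_data_cells(["DE", 100.0, 55.0], ["DE", 100.0, 57.5], key_columns=1))
    # [2] -> this cell would be marked red in the After image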

...

The logic for ListCube comparison is similar to query comparison. Each row of the ListCube output is split into Key and Data parts. The Key part for the ListCube output is also visually distinguished in the output table for better orientation. 


ListCube output Key and Data parts


To prevent any rounding differences, you can specify in the Validate settings the number of decimal places used for the ListCube results comparison. Please refer to the Settings chapter to see how this setting can be changed.
To prevent float number differences, you can set a threshold value. If the difference between the Before/After image values is smaller than the threshold value, the comparison is evaluated as correct. Please refer to the Settings chapter to see how this setting can be changed.
The full logic of the numeric value comparison is described in Figure 219. The left side displays a flow diagram of the comparison logic, while the right side contains an example.


Validate numeric value comparison flow diagram
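
The flow in Figure 219 can be approximated with the Python sketch below; the decimals and threshold parameters stand in for the corresponding Validate settings and are assumed names.

    # Sketch: round both values to the configured decimals, then accept
    # differences up to the configured threshold.
    def values_match(before, after, decimals=2, threshold=0.0):
        """Round both values, then accept differences up to the threshold."""
        return abs(round(before, decimals) - round(after, decimals)) <= threshold

    print(values_match(10.0041, 10.0049))              # True: both round to 10.0
    print(values_match(10.01, 10.02, threshold=0.05))  # True: within the threshold
    print(values_match(10.01, 10.20, threshold=0.05))  # False: a real difference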


It is possible to specify a precise comparison of the RowCount columns of the images (if they are present) by changing the value of the appropriate setting (TABLE_ROWCNT_PREC / LIST_ROWCNT_PREC).
When the Before and After images of the ListCube output come from different InfoProviders (or the structure of the InfoProvider changes between the Before and After image creation), some rules apply on top of the existing comparison logic (see the sketch after this list):

  • If the Key part structure is not the same in both the Before and After image, the ListCube outputs cannot be compared.
  • If a Data part column is missing or added in the Before/After image, this column is not checked and is not treated as an error.
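
An illustrative sketch of these two rules, assuming each image carries a list of its column names; the function and sample columns are assumptions, not Validate's API.

    # Return the Data part columns to compare, or None on a Key part mismatch.
    def comparable_columns(before_cols, after_cols, key_cols):
        """Columns compared when images come from different InfoProviders."""
        if before_cols[:key_cols] != after_cols[:key_cols]:
            return None  # different Key part structure: cannot be compared
        shared = set(before_cols) & set(after_cols)
        return [c for c in before_cols[key_cols:] if c in shared]

    print(comparable_columns(
        ["PLANT", "YEAR", "AMOUNT", "QTY"],    # Before image columns
        ["PLANT", "YEAR", "AMOUNT", "PRICE"],  # After image: QTY out, PRICE in
        key_cols=2,
    ))  # ['AMOUNT'] -> QTY and PRICE are skipped, not reported as errors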

...

For ListCube, Drill Down and Table comparison tasks, an automated root cause analysis can be enabled. Automated root cause analysis works in the following way:

Before the last Drill Down/ListCube/Table comparison, a table of keys to be ignored is created. This table is created for both the A and B InfoProviders in the Drill Down, and for the Before and After image InfoProviders in the ListCube. For Table variants the table is created only for the appropriate InfoProviders, if the Changelog table of a standard DSO or the active table of a write-optimized DSO is compared.
Each InfoProvider is checked for the last delta loading requests of all DTPs and IP executions. The data of these last loaded delta requests is searched and the ignored keys table is created. The ignored keys table contains all combinations of keys (with the same structure as the compared DrillDown/ListCube Key part) that were present in the last delta loads.
When a comparison error is found during the comparison, the Key part of the erroneous row is checked against the ignored keys table. If the key is found there, the data of this row was affected by the last delta load requests and the error is ignored. Such ignored erroneous data is colorized yellow to distinguish it from real errors.
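
As a rough model of this process (the data structures and function names are assumed for illustration), the ignored keys table can be thought of as a set of key tuples collected from the last delta loads, against which each erroneous row is checked.

    # Collect all key combinations present in the last delta load requests.
    def build_ignored_keys(last_delta_rows, key_columns):
        """Build the ignored keys table as a set of key tuples."""
        return {tuple(row[:key_columns]) for row in last_delta_rows}

    def classify_error(error_row, ignored_keys, key_columns):
        """Yellow if the row's key was touched by the last delta, else red."""
        in_table = tuple(error_row[:key_columns]) in ignored_keys
        return "yellow" if in_table else "red"

    ignored = build_ignored_keys([["DE", "2023", 5.0]], key_columns=2)
    print(classify_error(["DE", "2023", 7.5], ignored, key_columns=2))  # yellow
    print(classify_error(["FR", "2023", 7.5], ignored, key_columns=2))  # red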

...


Such erroneous data does not influence the overall compare result. If no errors are found during the comparison except errors that were handled by the Automated Root Cause Analysis, the overall comparison result is set to OK (green).
As for DSOs, data cannot be read directly using the last loaded delta request IDs, so activation requests are used. For each found load request a corresponding activation request is found. The Changelog of the DSO is then searched with the list of such activation requests. The ignored keys table is constructed based on the data found in the Changelog for the corresponding activation requests.
In the application log of the comparison task, the user can always see the errors that were evaluated as false errors by the Automated Root Cause Analysis.


Image RemovedImage Added
Automated Root Cause Analysis log


For MultiProviders the ignored keys table is constructed as a union of the ignored keys tables of all part providers, and for SPOs it is constructed as a union of the ignored keys tables of all semantic partitions.
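
In the model above, the MultiProvider/SPO case reduces to a plain set union, for example:

    # Merge the per-provider (or per-partition) ignored key sets.
    part_provider_keys = [{("DE", "2023")}, {("FR", "2023")}, {("DE", "2023")}]
    ignored_keys = set().union(*part_provider_keys)  # contains both keys once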

...

The logic of the Table Comparison is the same as the logic for the ListCube/DrillDown results comparison, with the following exceptions:

  • Automated Root Cause Analysis cannot be used for table comparison.
  • Key figure columns cannot be ignored in table comparison.

Comparison of DTP load results

Transformation testing often requires checking huge volumes of lines to see if the transformation logic was changed during the test scenario. When the Before and After images are compared, the rows to be mutually checked are selected by their row position. To speed up the comparison, only hashed values of these lines are compared. The hashes are calculated using all fields of the line. Lines are colorized red if there is any difference in any of the cells of the compared lines.
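
A minimal sketch of this hash-based comparison, using Python's hashlib for illustration; Validate's actual hashing algorithm is not specified here.

    import hashlib

    def line_hash(line):
        """Hash all fields of a line into one digest."""
        joined = "\x1f".join(str(field) for field in line)  # field separator
        return hashlib.sha256(joined.encode("utf-8")).hexdigest()

    before = [["A", 1, 10.0], ["B", 2, 20.0]]
    after  = [["A", 1, 10.0], ["B", 2, 21.5]]

    # Rows are paired by position; only the hashes are compared.
    for pos, (b, a) in enumerate(zip(before, after)):
        if line_hash(b) != line_hash(a):
            print(f"line {pos}: difference detected")  # would be colorized red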

...