(DV-2305) Backend Testing

Display results authorization

To display results, you need the proper authorization. It is checked against the authorization object /DVD/VVAR. The authorization fields checked are Activity (29 - Display results), Validate test type, and the Variant testing object and sub-object (e.g. the report name and its variant).

The authorization is checked only when displaying the saved data. Users without authorization can still see the result of the comparison and variant definition. The user must have appropriate authorization for both Before and After Images to see any of the images.

The default /DVD/VALIDATE role allows you to display the results of all variants where the Variant testing object starts with /DVD/; all Variant testing sub-objects are allowed. At the moment, Test Plan and Test Case are not checked.

Currently, only the ERP report test type has this feature.

Test Scenarios

SNP Validate supports running tests in the backend, focused on Query, ListCube, Drill Down, Transformation, Table, Report, and Performance testing. Unlike the Test Scenarios in the front end, the backend Test Scenarios are predefined and contain Test Run IDs.


 SNP Validate Background Testing


Each Test Scenario focuses on a different aspect of the testing. Currently, there are seven Test Scenarios defined to test Queries, DB tables, Reports, DTP loads/Transformations, and InfoProviders using the ListCube function.

Test Run IDs

The Run ID is one instance of a Test Scenario and is executed step by step, from the top down.
To add a new Test Run ID, click on the Create new Run icon and fill in the name and description.

Creation of a new Test Run ID


The Test Scenario field specifies the type of the created run (i.e. Query testing scenario – EQS, ListCube testing scenario – EQS_LIST, Drill Down testing scenario – EQS_DRILL, DTP testing scenario – EQS_DTP, SLO ListCube testing scenario – EQS_SLO_L, or Report testing scenario – EQS_ERP). This field is prefilled automatically based on what is selected in the list of all runs. Another way to create a new run is to open the context menu of one of the parent nodes that define the scenario type and select the Create Test Run ID option.

Entering Test Run

After a new Test Run is created, you are taken to the execution run view where the individual steps can be executed. You can leave the execution view by pressing any of the standard navigation buttons Back, Exit, or Cancel. To re-enter the view, double-click on the desired Test Run in the test run tree, or select it in the context menu and choose the Display Test Run ID option.

Deleting Test Run

To delete a Test Run, open the context menu of the Test Run ID in the Test Run overview window and choose the Delete Test Run ID option. The Test Run and all of its data are removed from the system.

Note: deleting a Test Run deletes all of the saved image data linked to the run, but not the test variants specified in the run. The variants stay in the system and can be reused in other Test Runs. To delete variants, use the Variant Editor available via the SNP Validate Dashboard.


Reset all Test Steps of the Test Run 

Resetting Test Steps

To reset a Test Step, choose the Reset task state function in the context menu of the test step. The Reset task state function is especially useful if you want to go back, modify, and retest tasks. Some restrictions apply when resetting test steps: a test step should not be reset if test steps after it in the Test Scenario have already finished. The Set task state function is not recommended, as it could leave certain steps with an inconsistent status. Below are some examples of when you can reset the step status:

It is recommended to reset the statuses of all Test Steps (Tasks) by selecting the topmost item of the Test Scenario hierarchy in the context menu. Resetting the status of a single Test Step (Task) is recommended only when all of the following Test Steps in the Test Scenario are in the initial state (not yet executed).

Reset the Test Step state


It is possible but not recommended to reset the state of the Test Step if there are any following Test Steps that were already executed.


When the selection step (1st step) status is reset, you do not lose information about the selected Test Variants. When you double-click on a selection step, the previously selected variants (if any) are loaded and act as the predefined variants for the specific run; these can either be removed or extended with a new set of variants.


Backend Testing scenarios:

Query

Listcube Execution & Backend Testing

Drill Down testing

Transformation testing

Table based testing

Report based testing

System Performance testing


Comparison logic

All Test Scenarios use this comparison logic to validate whether the data in the After image differs from the data in the Before image. This section describes how data are compared in the test scenarios.

Comparison of query results

Each query output is separated into rows, with each row split into a Key part and a Data part. The Key part is defined by all the characteristic columns on the left side of the output, while all of the other columns define the Data part.

Key and Data part of query output


When the key/data part creation (images) is completed for the Before and After image, the corresponding rows are matched using the Key part.
There are cases when there is no Key part in the query output; in this situation, SNP Validate uses the position of the rows to match the appropriate rows between both images. When multiple rows have the same Key part, SNP Validate picks the corresponding row from the other image. If the corresponding row does not belong to a group of rows with the matching Key part, the output of the row is highlighted in yellow to indicate the need for a manual check.
To prevent any SAP rounding differences, you can specify in the SNP Validate settings the number of decimal places used for the Query results comparison. Please refer to the Settings chapter to see how this setting can be changed.
If a row from either image does not have a matching pair in the other image, that row is colorized red; the data fields are colorized along with the row number. Missing rows can occur in both the Before and the After image.

Missing row in another image


All corresponding rows are then compared on the Data part. If any differences are found in the Data part, the appropriate data cells are colorized red to mark the difference. Only cells in the After image are colorized, as the Before image values are taken as the correct ones.

Incorrect data in After image
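
To make the row matching described above concrete, here is a minimal Python sketch of the idea (an illustration only, not the actual SNP Validate implementation; the function names and row representation are assumptions):

    # Illustrative sketch of the Before/After row matching - not SNP Validate code.
    # A row is modelled as (key_part, data_part): key_part is a tuple of
    # characteristic values, data_part is a tuple of the remaining columns.

    def pair_rows(before, after):
        """Pair Before and After rows by their Key part; fall back to row
        position when the query output has no Key part."""
        if before and not before[0][0]:              # no Key part in the output
            return list(zip(before, after))          # pair rows by position
        after_by_key = {}
        for row in after:
            after_by_key.setdefault(row[0], []).append(row)
        pairs = []
        for row in before:
            candidates = after_by_key.get(row[0], [])
            # None marks a missing counterpart (colorized red in the output)
            pairs.append((row, candidates.pop(0) if candidates else None))
        return pairs

    def differing_cells(before_row, after_row):
        """Return the indices of Data part cells that differ (colorized red in the After image)."""
        return [i for i, (b, a) in enumerate(zip(before_row[1], after_row[1])) if b != a]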

Comparison of ListCube/Drill Down results

The logic for the ListCube comparison is similar to the query comparison. Each row of the ListCube output is split into Key and Data parts. The Key part for ListCube output is also visually distinguished in the output table for better orientation. 

ListCube output Key and Data parts


To prevent any rounding differences, you can specify in the SNP Validate settings the number of decimal places used for the comparison of the ListCube results. Refer to the Settings chapter to see how this setting can be changed.
To prevent float number differences, you can set a threshold value. If the difference between the before/after image values is smaller than the threshold value, the comparison is evaluated as correct. Refer to the Settings chapter to see how this setting can be changed.
The full logic of the numeric value comparison is described in the figure below. The left side shows a flow diagram of the comparison logic, while the right side contains an example:

SNP Validate numeric value comparison flow diagram
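
As a rough illustration of the flow above, the following Python sketch combines the decimal places setting and the threshold setting described in this chapter (the function and parameter names are illustrative, not SNP Validate settings or APIs):

    # Illustrative sketch of the numeric value comparison - not SNP Validate code.

    def numeric_values_equal(before, after, decimal_places=2, threshold=0.0):
        """Round both values to the configured number of decimal places, then
        treat differences smaller than the threshold as equal."""
        b = round(before, decimal_places)
        a = round(after, decimal_places)
        if b == a:
            return True
        return abs(b - a) < threshold        # small float differences are accepted

    # Example: a difference of 0.008 is accepted with a threshold of 0.01
    print(numeric_values_equal(10.004, 10.012, decimal_places=3, threshold=0.01))  # True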


It is possible to specify the comparison precision of the RowCount columns of the images (if present) by changing the value of the appropriate setting (TABLE_ROWCNT_PREC/LIST_ROWCNT_PREC).
When the Before and After images of the ListCube output come from different InfoProviders (or the structure of the InfoProvider is changed between the Before and After image creation), the following rules apply on top of the existing comparison logic:

  • If the Key part structure is not the same in both the Before and After image, the ListCube outputs cannot be compared.
  • If a Data part column is missing or added in the Before/After image, this column is not checked and is not treated as an error.

Automated Root Cause Analysis

For ListCube, Drill Down, and Table comparison tasks, automated root cause analysis can be enabled. Automated root cause analysis works in the following way:
Before the last Drill Down/ListCube/Table comparison, a table of Keys to be ignored is created. This table is created for both the A and B InfoProviders in the Drill Down and for the before and after image InfoProviders in the ListCube. For Table variants, the table is created only for the appropriate InfoProviders if the changelog table of a standard DSO or the active table of a write-optimized DSO is compared.
Each InfoProvider is checked for the last delta loading requests of all DTPs and InfoPackage executions. The data of these last loaded delta requests is searched and the ignored keys table is created. The ignored keys table contains all combinations of Keys (with the same structure as the compared Drill Down/ListCube key part) that were present in the last delta loads.
When an error is found during the comparison, the key part of the erroneous row is checked against the ignored keys table. If a match is found, it means that the data of this row was affected by the last delta load requests, and the error is ignored. Such ignored erroneous data is colorized yellow to distinguish it from real errors.

Erroneous data ignored by Automated Root Cause Analysis
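
Conceptually, the analysis behaves like the following Python sketch (a simplification under assumed data structures, not the actual implementation): an ignored keys table is built from the keys touched by the last delta requests, and comparison errors whose key appears in that table are downgraded to ignored (yellow) errors.

    # Conceptual sketch of Automated Root Cause Analysis - not SNP Validate code.

    def build_ignored_keys(last_delta_rows):
        """Collect all key combinations present in the last delta load requests."""
        return set(last_delta_rows)                  # each entry is a key tuple

    def classify_errors(comparison_errors, ignored_keys):
        """Split comparison errors into real errors (red) and ignored ones (yellow)."""
        real, ignored = [], []
        for key, details in comparison_errors:
            (ignored if key in ignored_keys else real).append((key, details))
        return real, ignored

    def overall_result(real_errors):
        """The overall result is OK (green) when only ignored errors remain."""
        return "OK" if not real_errors else "ERROR"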


Ignored erroneous data does not influence the overall comparison result. If no errors are found during the comparison except errors that were handled by Automated Root Cause Analysis, the overall comparison result is set to OK (Green).
For DSOs, data cannot be read directly using the last loaded delta request IDs, so activation requests are used. For each found load request, a corresponding activation request is determined. The changelog of the DSO is then searched with the list of such activation requests, and the ignored keys table is constructed based on the data found in the changelog for the corresponding activation requests.
In the application log of the comparison task, you can always see the errors that were evaluated as false errors by the Automated Root Cause Analysis.

Automated Root Cause Analysis log


For MultiProviders, the ignored keys table is constructed as a union of the ignored keys tables of all part providers; for SPOs, it is constructed as a union of the ignored keys tables of all semantic partitions.

Comparison of Table results

The logic of the Table comparison is the same as the logic for ListCube/Drill Down results, with the following exceptions:

  • Automated Root Cause Analysis cannot be used for table comparison.
  • Key figure columns cannot be ignored in table comparison.

Comparison of DTP load results

Transformation testing often requires checking huge volumes of lines to see if the transformation logic was changed during the test scenario. When the Before and After image are compared, the rows to be mutually checked are selected by their row position. To speed up the comparison, only hashed values of these lines are compared. The hash of a line is calculated using all of the fields in the line. Lines are colorized red if any of the compared cells differ.
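
A minimal Python sketch of this position-based, hash-only comparison (illustrative; the hash algorithm and field separator are assumptions, as the actual hashing used by SNP Validate is not described here):

    # Illustrative sketch of the hashed line comparison - not SNP Validate code.
    import hashlib

    def line_hash(line_fields):
        """Hash a line using all of its fields (MD5 chosen only for the sketch)."""
        payload = "\x01".join(str(field) for field in line_fields)
        return hashlib.md5(payload.encode("utf-8")).hexdigest()

    def compare_images(before_lines, after_lines):
        """Compare lines by row position; return positions whose hashes differ (colorized red)."""
        return [i for i, (b, a) in enumerate(zip(before_lines, after_lines))
                if line_hash(b) != line_hash(a)]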

Lookup data comparison

If lookup data testing is included in the comparison, the following logic is applied during the data comparison. Direct Lookup data includes the lookup source data coming into the lookup and the result data returned from the lookup.
The data provided by the DVD Lookup Translator comes in packages, because packages are how data is processed in the transformation. SNP Validate executes the comparison only when the DVD Lookup Translator provides the same number of packages for the before and after image.
The data of each package is then compared independently of the other packages. For each package, the lookup source data is compared first. If the source data of the before/after package is not the same, the result data is not compared, as lookup correctness can only be tested based on the same input. If the source data of the before/after image of the package is the same, the result data is compared.
Rows of the before/after source data are matched by row number. When saving the lookup source and result data, SNP Validate sorts them by all fields to prevent sorting problems.
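
The package-wise logic can be summarized with the following Python sketch (illustrative only; the package structure is an assumption): packages are compared independently, and the result data of a package is compared only if its source data matches.

    # Illustrative sketch of the lookup package comparison - not SNP Validate code.

    def sort_rows(rows):
        """Source and result data are sorted by all fields before comparison."""
        return sorted(rows)

    def compare_package(before_pkg, after_pkg):
        """Compare one package: source data first, result data only if the sources match."""
        if sort_rows(before_pkg["source"]) != sort_rows(after_pkg["source"]):
            return {"source": "ERROR", "result": "NOT COMPARED"}
        result_ok = sort_rows(before_pkg["result"]) == sort_rows(after_pkg["result"])
        return {"source": "OK", "result": "OK" if result_ok else "ERROR"}

    def compare_lookup(before_pkgs, after_pkgs):
        """The comparison runs only when both images contain the same number of packages."""
        if len(before_pkgs) != len(after_pkgs):
            return None
        return [compare_package(b, a) for b, a in zip(before_pkgs, after_pkgs)]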

Results overview

The results of Query, ListCube, SLO ListCube, Drill Down, or Transformation testing are displayed in the Display results step. For these Test Scenarios, a list of all the executed Test Variants is shown on the left side of the screen. Basic information about each Test Variant (depending on the Test Variant type, i.e. Query/ListCube/DTP/Drill Down) is displayed together with the following information:

  • Before Runtime [s] (optional): Runtime of Query/Table/ListCube/SLO ListCube/DTP load/Drill Down execution in the Before image.
  • After Runtime [s] (optional): Runtime of Query/Table/ListCube/SLO ListCube/DTP load/Drill Down execution in the After image.
  • Before Runtime [hh:mm:ss] (optional): Runtime of Query/Table/ListCube/SLO ListCube/DTP load execution in the Before image in time format.
  • After Runtime [hh:mm:ss] (optional): Runtime of Query/Table/ListCube/SLO ListCube/DTP load execution in the After image in time format.
  • InfoProvider A Runtime [hh:mm:ss] (optional): Runtime of a Drill Down execution in InfoProvider A in time format.
  • InfoProvider B Runtime [hh:mm:ss] (optional): Runtime of a Drill Down execution in InfoProvider B in time format.
  • InfoProvider A Runtime [s] (optional): Runtime of a Drill Down execution in InfoProvider A in seconds.
  • InfoProvider B Runtime [s] (optional): Runtime of a Drill Down execution in InfoProvider B in seconds.
  • Difference [s] (optional): Difference in seconds between After image runtime and Before image runtime.
  • Difference [hh:mm:ss] (optional): Difference between the After image runtime and the Before image runtime in time format.
  • Result: Result of data comparison.
  • Reporting Result: Reporting status set by SNP Validate/the user.
  • Before Image Creation Time (optional): Time of before image creation.
  • Before Image Creation Date (optional): Date of before image creation.
  • After Image Creation Time (optional): Time of after image creation.
  • After Image Creation Date (optional): Date of after image creation.
  • Before Image Rows (optional): Number of rows of before image.
  • After Image Rows (optional): Number of rows of after image.
  • Reporting Status Text (optional): Text supplied when SNP Validate/the user set the reporting status.
  • Variant ID (optional): SNP Validate technical ID of a variant.
  • Before Image Status (optional): Task execution status for the before image.
  • After Image Status (optional): Task execution status for the after image.
  • Before Image Overflow Handled (optional): Notification of overflow occurrence.
  • After Image Overflow Handled (optional): Notification of overflow occurrence.
  • Conversion Runtime [s] (optional): Conversion runtime of a Before image in the SLO ListCube/ListCube.
  • Conversion Runtime [hh:mm:ss] (optional): Conversion runtime of a Before image in the SLO ListCube/ListCube in time format.

All optional columns can be added to the results overview table by clicking on the Change Layout… button. The ALV column structure for the user and scenario type is always saved on exit and reapplied when the user enters this step again.

Different Results of Before/After image comparison


There are four types of results that each Test Variant can have:

  • Green semaphore: If the data returned by the Before and After image is the same.
  • Yellow semaphore: If none of the images returned data.
  • Red semaphore: If inconsistency/differences were found between the data returned in the Before and After Image.
  • None: If the comparison of images failed with an error (e.g. outputs with different key structures were supplied for comparison) or if the comparison was not yet done.

Sometimes the After Runtime [s] cells, along with the Difference cells, are colorized red. This happens when the difference between the Before image runtime and the After image runtime reaches a threshold value defined in the SNP Validate settings. You can specify these threshold values by clicking on the corresponding button in the toolbar.
These settings can influence the comparison; the configured decimal precision is also applied in the displayed output.
You can display each variant in detail by opening the context menu of the appropriate variant row and selecting the Display Variant option.

Display Variant Details


The Variant details screen differs, based on the type of variant that was clicked on (e.g. Query Variant, Bookmark Variant, ListCube Variant, Drill Down Variant).

Query Variant Details Screen


You can use the corresponding toolbar button to filter out all correct records.
When this filter is active, only the variants that finished with erroneous comparisons are displayed in the list of all Test Variants (i.e. Test Variants that did not finish with an error are filtered out of the list).

Query Testing scenario erroneous variants


For the actual data screens, if the Only Errors mode is active, only the rows of output that have at least one cell colorized red are displayed. In the Before image output, the correct rows that correspond to After image rows with wrong data are also displayed.

ListCube erroneous rows and appropriate before image rows


For Transformation testing, when the Only Errors mode is active, the before image results display only the rows that correspond (by row number) to the erroneous rows of the after image.

Erroneous display in Transformations testing scenario

Reports Display Results

The Display Results screen for the ERP Report scenario is very similar to the other Display Results screens, with the following differences:

  • No union view is available
  • There are three different formats in which you can review the report's outputs: ALV table, Text Screen, and HTML.

Three types of the report output display


We recommend the HTML display type, which uses a monospaced font so the results are easily readable. The ALV and HTML display types can colorize errors found in the reports, unlike the simple text display.

Example of the HTML report output view

PDF Export

You can export ERP report results into PDF files. First, select the variants that you want to generate PDFs from; you can select multiple variants, and one PDF is created for each variant. After choosing this option, you are prompted to select a folder. The PDF files are saved in this folder with the following naming pattern: RUN ID_TESTCASE ID_DATE OF COMPARE. Each PDF contains the before image and the after image for one variant. Differences are highlighted in red in the after image.


Navigation in outputs

For performance reasons, only 5000 lines (this number can be changed via settings) are displayed at once in the Display results for each variant output. When any of the variant images has more than 5000 lines in the output, the navigational buttons become active and you can page through the results. You can navigate through the Before and After image outputs independently using the navigational buttons.

Output buttons and position

Full-screen mode

Sometimes it is required to display Before/After image outputs in full screen. To do so, click on the Full Screen button in the application toolbar. To activate full-screen mode, the results must already be displayed in the standard view. It is possible to navigate through the output result pages (if any) directly in full-screen mode.

Full-screen mode of variant output

Union screen mode

The outputs of the ListCube, Transformation, and Drill Down test scenarios can be displayed together in union mode. Clicking on the Union Screen Results Display button in the toolbar lets you display the two different outputs on one single screen.

Union screen mode for results display


The first column contains information about the source of data for a specified row. For Transformation and ListCube scenarios, it either contains the value A (After image) or B (Before image) and specifies which image the record belongs to. For the Drill Down scenario, this column contains a concatenation of the InfoProvider technical name and its RFC Destination system ID (if any). 

For all scenarios, the rows are paired so that they can be compared together. The pairing differs based on the scenario: for the Transformation testing scenario, row numbers are used, while for the ListCube and Drill Down scenarios, the appropriate row keys are matched.
*Important Note: In the current version of SNP Validate the Query testing scenario does not support Union display mode.

Check of key figure sums (ListCube and Table Test Scenario)

When there are differences in the results of the Before and After images of InfoProviders or database tables, it is sometimes necessary to also compare the overall sums of the Before/After image key figures. Instead of manually checking each column, you can click on the Check Sums button and get a comparison of the summed key figure values. Fields with different values are highlighted in red; overflowed sums are set to 0 and highlighted with a different color.

Compared Table key figure sums with highlighted differences and overflown sum
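
The check performed by the Check Sums button can be pictured with this small Python sketch (illustrative only; the overflow limit is an assumption, as the real limit depends on the key figure data types):

    # Illustrative sketch of the key figure sum check - not SNP Validate code.

    OVERFLOW_LIMIT = 10 ** 15                 # assumed limit for this sketch only

    def sum_key_figure(rows, column):
        """Sum one key figure column; overflowed sums are set to 0."""
        total = sum(row[column] for row in rows)
        return 0 if abs(total) > OVERFLOW_LIMIT else total

    def check_sums(before_rows, after_rows, key_figure_columns):
        """Return the key figure columns whose summed values differ (highlighted in red)."""
        return [col for col in key_figure_columns
                if sum_key_figure(before_rows, col) != sum_key_figure(after_rows, col)]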

'Missing' column

For the Drill Down and ListCube Test Scenarios, a column named Missing is added to the output of InfoProviders. If a line is not correct in one of the Before/After images because there was no record with the same key found in the After image, an icon is added to the column. This helps you differentiate between erroneous and missing records; you can also use this column for filtering the results. The column is visible in all three types of results display.
*Important Note: Using the standard ALV filtering function on the output tables only affects the records on the currently displayed page and does not affect the records of other pages.

Missing Column

Reporting Status

It is possible to display the Reporting Status column in the table with the list of tested variants for all of the Backend scenarios by changing the layout of the result table. By default, this status is always set to the same value as the comparison result in SNP Validate. The reporting status is used to define the statuses of individual Test Cases for reporting. You can set the reporting status and override the compare status set by SNP Validate by selecting the appropriate Variant result row and then choosing the Set Reporting Status option.

Set Reporting Status Option


When changing the reporting status, you can choose the reporting status you want to set for the variant and add a description. 

Set Reporting Status Dialog


It is possible to set the same reporting status at once for multiple variants by selecting more rows and choosing the Set Reporting Status option.

Setting the reporting status for multiple variants


If you want to unify the reporting statuses used by test users to reflect acceptable errors (e.g. Out Of Scope), it is possible to specify cross-SNP Validate reporting statuses. You can define these reporting statuses in the SNP Validate settings under the Backend settings tab. All customized reporting statuses can then be selected in the Set Reporting Status dialog using F4 help.

Report Statuses Customizing

Lookup Results

During the Transformation testing scenario, when SNP Validate is provided with data from the DVD Lookup Translator, the Display Lookup Results screen contains this data and the comparison results. The structure of this screen is very similar to the standard Display Results screen; however, there are some differences:
In the Display Lookup Results screen, there are two comparison statuses for each line of the tested DTP variant; in some cases, there can be multiple lines per variant in a run. The number of lines depends on the number of data packages processed during the load for each variant. The first comparison status reflects the comparison of the lookup source data, while the second reflects the comparison of the result data returned from the lookup.

Lookup package comparison results


Note that when the source data comparison fails, no comparison is performed on the lookup returned data.
The right side of the screen displays the before/after image data as it normally would be in the standard Display Results screen. When you double-click on the appropriate variant lookup package, the data is displayed. By default, when you display the actual data this way, the returned lookup data is displayed. To switch between the display of the source data and the result data, click on the Source Data button (Shift + F7) or the Result Data button (Shift + F8).

Test Run ID Locking

To prevent executing a Test Run by mistake and overwriting current results, you can use the lock option. When a Test Run is locked, it is not possible to change the task state, execute or reset image creation or comparison, change variants, or change the reporting status until it is unlocked again. Mass Execution and scheduling do not run on locked Test Runs. Deletion in a Test Plan stops at the first locked Test Run. You are informed about a locked Test Run by the lock icon in the Test Plan tab of Test Management and by a message with the description and user name.
You can change the lock state of a Test Run by opening the context menu of the Test Case name in the Test Plan (or the context menu of the root task in the Test Case) and selecting Lock Run (or Unlock Run).



Clicking on the Lock Run option opens a pop-up window where you can enter a description of the reason for locking the Test Run. Click on Unlock Run to unlock the Test Run directly.


The lock function is available for these Test Case types:

  • Query
  • ListCube
  • Table
  • DrillDown
  • ERP Report
  • DTP
  • SLO ListCube

STEP BY STEP: CREATING AND EXECUTING TEST RUN ID IN BACKEND TESTING

This example shows you a step-by-step guide as to how to create, execute, and display results in Backend Testing.

1. Run the transaction /DVD/VALIDATE in your SAP BW system

2. In the SNP Validate Dashboard screen, choose Backend Testing (last icon on the function panel).

Backend Testing


3. In the SNP Validate Backend Testing screen, choose Create new Test Run ID (F5).

Create New Run Test ID

4. A new pop-up window appears where you can select the type of Test Scenario (you can press F4 for the list of all possible entries). Currently, there are four Test Scenarios to choose from. These scenarios are described in the chapter Backend Testing.

Adding a Test Scenario


For this example, we will choose the test scenario for Query testing – EQS.
After choosing the Test Scenario, you can enter the name of the Test Run ID and a description.

Completing the creation of a Test Run ID


5. After creating a new Test Run ID, you should see an overview of all the tasks.

Overview of Tasks for Test Run ID 

6. In the next step, we will add new query variants for our run.

Double-click on the first task, Select Query Variants, and a new screen appears with several options for adding a new variant. Here you can do the following:

  • Create new Query Variant

This option creates a new query variant based on the selected query.

  • Create new Bookmark Variant

This option creates a new query variant based on the selected bookmark.

  • Add existing Query Variant

If you choose this option, you can choose from the existing query variants that were created previously.

  • Add Query Variant from Test Run ID

Allows you to copy query variants from an existing Test Run ID. Afterward, all query variants from the chosen Test Run ID are added automatically.

  • Generate Query Variants from QS

You can also add new query variants from Query Statistics.

  • Create Based on Web Template

Create query variants for queries of the selected web template.

  • Generate Based on Web Template Bookmark

Create query variants for the queries of a web template bookmark.
In our example, we will choose Create new Query Variant.


Creating a new Query Variant


7. When you click on Create new Query Variant, a new window should appear; here you need to add the technical name of the query you want to use and a description. Other fields are optional; refer to the Create new Query Variant chapter for more details.

Query variables 


If the selected query requires input variables, you can set them by clicking on the Set query variables button.

Set query variables

 

8. After you save the query variant, you can view this in the list of all query variants.

Set query variables


9. Once all your query variants are added, save the selection by pressing the 'Save' button (Ctrl + S); you can then return to the Test Run ID tasks. In the next step, execute (F8 or double-click) the Generate tasks for before image task; once generated, the first two status icons should be green.

Generate Tasks for before image 


Important information: You can reset the task state of any executed task by highlighting the task and clicking on the Reset task state button (Shift + F8).


Reset task state


10. Once the tasks are generated you can then execute Create before image. A new pop-up appears where you can specify the number of background jobs and the name of the application server to be used.

If you want the number of background jobs to stay the same even when one or more jobs fail, you can check the Keep alive option. 

Create before Image 


You can press F5 or F9 to refresh the task monitor while the task is running. A truck icon in the Status column means that the task is running. Once the task finishes, the status icon turns green/yellow/red.
It is possible to execute specific sub-tasks instead of executing all tasks at once. To display and execute these sub-tasks, click on the blue icon in the Sub-task column.

Display Sub-Tasks 


From here, you can execute the sub-tasks. In our example, there are a few sub-tasks; to execute one, double-click on the chosen sub-task or press F8. You can observe the changing status and refresh the task monitor until the sub-task finishes and the status icon turns green.

Execute Sub-Task 


Important information: You can reset the task state of any executed task by highlighting the task and clicking on the Reset task state button (Shift + F8).


Reset task state


11. Once the Create before image task is complete, you can start performing your specific tasks (like archiving, migration, etc.) in your system (outside SNP Validate).

12. Afterward, the next step is to execute Generate tasks for after image.

Generate tasks for after image

 

13. Creating the after image task is similar to creating the before image. You execute the Create after image task and choose the number of background jobs for the task to run.

Create after Image

 

14. After both images (before and after) are ready for comparison, you should execute the task Generate tasks for comparison, followed by the task Compare before and after image. Your Test Run ID should now look similar to this one:

Generate task and Compare before and after image

 

15. In the Display results screen, the section on the left displays a list of all the test variants. By selecting one of the test variants, you can compare the before and after image outputs on the right-hand side of the screen. The runtime units are displayed in seconds.

Double-click on your test case to compare the results of your before and after image. 

Comparing Before and After results

 

16. As mentioned previously in the documentation, you can go back to any step in your Test Run by selecting Reset task state (Shift + F8).

Resetting steps in the Test Run ID