(DV-2008) Backend Testing
Test Scenarios
Validate supports running tests in the backend, focused on Query, ListCube, Drill Down, Transformation, Table, Report, and Performance testing. Unlike the Test Scenarios in the frontend, the backend Test Scenarios are predefined and contain Test Run IDs.
Validate Background Testing
Each Test Scenario focuses on a different aspect of testing. Currently, there are seven Test Scenarios defined to test Queries, DB tables, Reports, DTP loads/Transformations, and InfoProviders using ListCube functionality.
Test Run IDs
The Run ID is one instance of a Test Scenario and is executed step by step, from the top down.
To add a new Test Run ID, click the 'Create new Run' icon and enter a name and description.
Creation of a new Test Run ID
The Test Scenario specifies the type of the created run (i.e. whether it is a Query testing scenario – EQS, ListCube testing scenario – EQS_LIST, Drill Down testing scenario – EQS_DRILL, DTP testing scenario – EQS_DTP, SLO ListCube testing scenario – EQS_SLO_L, or Report testing scenario – EQS_ERP). This field is automatically prefilled based on what is selected in the list of all runs. Another way to create a new run is to open the context menu of one of the parent nodes that define the scenario type and select the 'Create Test Run ID' option.
Entering Test Run
After a new Test Run is created, you are taken to the execution run view where the individual steps can be executed. You can leave the execution view by pressing any of the standard navigation buttons 'Back', 'Exit', or 'Cancel'. To re-enter the view, double-click the desired Test Run in the test run tree, or select it and choose the 'Display Test Run ID' option from the context menu.
Deleting Test Run
To delete a Test Run, open the context menu of the Test Run ID in the Test Run overview window and choose 'Delete Test Run ID'. The Test Run and all of its data are removed from the system.
Note: deleting a Test Run deletes all of the saved image data linked to the run, but does not delete the test variants specified in the run. The variants stay in the system and can be reused in other Test Runs. To delete variants, use the Variant Editor available via the Validate Dashboard.
Reset of all Test Steps of Test Run
Resetting Test steps
To reset a Test Step, choose the 'Reset task state' function from the context menu of the test step. The 'Reset task state' functionality is especially useful if you want to go back to modify and retest tasks. Some restrictions apply when resetting test steps: no test step should be reset if subsequent test steps in the Test Scenario have already finished. The 'Set task state' functionality is not recommended, as it could leave certain steps in an inconsistent status. Below are some examples of when you can reset the step status:
- It is recommended to reset the statuses of all Test Steps (Tasks) by selecting the topmost item of the Test Scenario hierarchy via the context menu. Resetting the status of a single Test Step (Task) is recommended only when all following Test Steps in the Test Scenario are in the initial state (i.e. have not been executed yet).
Reset Test Step state
- It is possible, but not recommended, to reset the state of a Test Step if any of the following Test Steps were already executed.
Reset Test Step state (not recommended)
When the selection step (1st step) status is reset, you do not lose information about the selected Test Variants. When you double-click a selection step, the previously selected variants (if any) are loaded and act as the predefined variants for the run; these can be removed or a new set of variants added.
Backend Testing scenarios:
Query
Listcube Execution & Backend Testing
Drill Down testing
Transformation testing
Table based testing
Report based testing
System Performance testing
Comparison logic
All Test Scenarios use this comparison logic to validate whether the data in the after image differs from the data in the before image. This section describes how data is compared in the test scenarios.
Comparison of query results
Each query output is separated into rows, with each row split into a key part and a data part. The key part is defined by all the characteristic columns on the left side of the output, while all of the other columns define the data part (see Figure 215).
Key and Data part of query output
When the key/data part creation (images) is completed for the Before and After image, the corresponding rows are matched using the key part.
There are cases when there is no key part in the query output; in this situation, Validate uses the row position to compare the appropriate rows between both images. When multiple rows have the same key part, Validate picks the corresponding row from the other image. If the corresponding row does not belong to a group of rows with a matching key part, the row is highlighted in yellow to indicate the need for a manual check.
To prevent any SAP rounding differences, in the settings for Validate you can specify the number of decimal places used for the Query results comparison. Please refer to Settings chapter to see how this setting can be changed.
If a row from either image does not have a matching pair, that row is colorized red; the data fields are colorized along with the row number. Missing rows can occur in both the Before and the After image.
Missing row in other image
Corresponding rows are also compared on the data part. If any differences are found in the data part, the affected data cells are colorized red to mark the difference. Only cells in the After image are colorized, as the Before image values are taken as the correct ones.
Incorrect data in After image
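The key-based matching described above can be sketched in Python. This is an illustrative sketch only, not Validate's implementation; the function name, the representation of rows as tuples of cell values, and the `key_len` parameter are all assumptions made for the example.

```python
def compare_images(before, after, key_len):
    """Illustrative sketch of key-based row matching between two images.

    Each row is a tuple of cell values; the first key_len cells form
    the key part, the remaining cells form the data part.
    Returns (missing_in_after, missing_in_before, diff_cells).
    """
    def index(rows):
        idx = {}
        for pos, row in enumerate(rows):
            idx.setdefault(tuple(row[:key_len]), []).append(pos)
        return idx

    before_idx, after_idx = index(before), index(after)
    # keys present in one image only -> rows colorized red
    missing_in_after = [k for k in before_idx if k not in after_idx]
    missing_in_before = [k for k in after_idx if k not in before_idx]
    diff_cells = []  # data cells that would be colorized red in the After image
    for key, before_positions in before_idx.items():
        if key not in after_idx:
            continue
        # rows sharing the same key are paired in order of appearance
        for b_pos, a_pos in zip(before_positions, after_idx[key]):
            for col in range(key_len, len(before[b_pos])):
                if before[b_pos][col] != after[a_pos][col]:
                    diff_cells.append((key, col))
    return missing_in_after, missing_in_before, diff_cells
```

For example, comparing `[("A", 10), ("B", 20)]` against `[("A", 10), ("B", 25), ("C", 5)]` with `key_len=1` reports key `("C",)` as missing in the before image and a data difference for key `("B",)`.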
Comparison of ListCube/Drill Down results
The logic for ListCube comparison is similar to query comparison. Each row of the ListCube output is split into Key and Data parts. The Key part for ListCube output is also visually distinguished in the output table for better orientation.
ListCube output Key and Data parts
To prevent any rounding differences, you can use the settings for Validate to specify the number of decimal places used for the comparison of the ListCube results. Please refer to the Settings chapter to see how this setting can be changed.
To prevent float-number differences, you can set a threshold value. If the difference between the before/after image values is smaller than the threshold value, the comparison is evaluated as correct. Please refer to the Settings chapter to see how this setting can be changed.
The full logic of the numeric value comparison is described in the figure below. The left side shows a flow diagram of the comparison logic, while the right side contains an example of it.
Validate numeric value comparison flow diagram
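The numeric comparison flow can be sketched roughly as follows. The rounding to a configured number of decimal places and the threshold check are the two settings described in this section; the function name and default values are illustrative, not Validate's actual setting keys.

```python
def values_match(before, after, decimals=2, threshold=0.0):
    """Illustrative numeric comparison sketch: round both values to the
    configured number of decimal places, then treat differences smaller
    than the threshold as equal."""
    b, a = round(before, decimals), round(after, decimals)
    if b == a:
        return True
    # float-difference tolerance: below the threshold counts as correct
    return abs(a - b) < threshold
```

With `decimals=2`, values 1.004 and 1.002 both round to 1.0 and match; with `threshold=0.1`, a residual difference of 0.04 is still evaluated as correct.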
It is possible to specify a precise comparison of the RowCount columns of the images (if present) by changing the value of the appropriate setting (TABLE_ROWCNT_PREC/LIST_ROWCNT_PREC).
When the Before and After images of the ListCube output come from different InfoProviders (or the structure of the InfoProvider is changed at the same time as the After image creation), some rules apply on top of the existing comparison logic:
- If the key part structure is not the same in both the Before and After image, the ListCube outputs cannot be compared.
- If a data part column is missing or added in the Before/After image, this column is not checked and this is not treated as an error.
Automated Root Cause Analysis
For ListCube, Drill Down, and Table comparison tasks, automated root cause analysis can be enabled. Automated root cause analysis works in the following way:
Before the last Drill Down/ListCube/Table comparison, a table of keys to be ignored is created. This table is created for both the A and B InfoProviders in the Drill Down, and for the before and after image InfoProviders in ListCube. For Table variants, the table is created only for the appropriate InfoProviders if the changelog table of a standard DSO or the active table of a write-optimized DSO is compared.
Each InfoProvider is checked for the last delta load requests of all DTPs and InfoPackage executions. The data of these last loaded delta requests is searched and the ignored keys table is created. The ignored keys table contains all combinations of keys (with the same structure as the compared Drill Down/ListCube key part) that were present in the last delta loads.
When an error is found during the comparison, the key part of the erroneous row is checked against the ignored keys table. If a match is found, the data of this row was affected by the last delta load requests, and the error is ignored. Such ignored erroneous data is colorized yellow to distinguish it from real errors.
Erroneous data ignored by Automated Root Cause Analysis
Ignored erroneous data does not influence the overall compare result. If no errors are found during the comparison except those handled by Automated Root Cause Analysis, the overall comparison result is set to OK (green).
As for DSOs, data cannot be read directly using the last loaded delta request IDs, so activation requests are used. For each load request found, a corresponding activation request is identified. The changelog of the DSO is then searched with the list of these activation requests, and the ignored keys table is constructed based on the data found in the changelog for the corresponding activation requests.
In the comparison task's application log, the user can always see the errors that were evaluated as false errors by Automated Root Cause Analysis.
Automated Root Cause Analysis log
For MultiProviders, the ignored keys table is constructed as the union of the ignored keys tables of all part providers; for SPOs, it is constructed as the union of the ignored keys tables of all semantic partitions.
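The filtering step of Automated Root Cause Analysis can be sketched as follows. The construction of the ignored keys table from delta requests is system-specific, so it is represented here as a pre-built set; all names are illustrative assumptions.

```python
def classify_errors(error_rows, ignored_keys, key_len):
    """Illustrative sketch: split comparison errors into real errors (red)
    and errors explained by the last delta loads (yellow/ignored).

    error_rows   - rows (tuples of cell values) that failed the comparison
    ignored_keys - set of key tuples present in the last delta loads
    key_len      - number of leading cells that form the key part
    """
    real, ignored = [], []
    for row in error_rows:
        key = tuple(row[:key_len])
        (ignored if key in ignored_keys else real).append(row)
    return real, ignored

def overall_ok(real_errors):
    """Overall result is OK (green) when every error was explained."""
    return len(real_errors) == 0
```

If every erroneous row's key is found in the ignored keys table, `overall_ok` returns True and the comparison result would be green despite the yellow-highlighted rows.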
Comparison of Table results
The logic of the Table comparison is the same as the logic for the ListCube/Drill Down results comparison, with the following exceptions:
- Automated Root Cause Analysis cannot be used for table comparison.
- Key figure columns cannot be ignored in table comparison.
Comparison of DTP load results
Transformation testing often requires checking huge volumes of lines to see if the transformation logic changed during the test scenario. When the Before and After images are compared, the rows to be mutually checked are selected by their row position. To speed up the comparison, only hashed values of these lines are compared. The hashes are calculated using all of the fields in the line. Lines are colorized red if there is a difference in any cell of the compared lines.
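A minimal sketch of the position-and-hash comparison described above, using Python's hashlib for illustration (Validate's internal hashing algorithm may differ):

```python
import hashlib

def line_hash(line):
    """Hash one line using all of its fields."""
    joined = "\x1f".join(str(field) for field in line)  # unit-separator delimiter
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

def diff_by_position(before, after):
    """Compare rows pairwise by position; return the 0-based indices of
    lines whose hashes differ (these would be colorized red)."""
    return [i for i, (b, a) in enumerate(zip(before, after))
            if line_hash(b) != line_hash(a)]
```

Hashing each full line once and comparing digests avoids field-by-field comparison of every pair of lines, which is why it scales to the large volumes transformation testing produces.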
Lookup data comparison
If lookup data testing is included in the comparison, the following logic is applied during the data comparison. Direct Lookup data includes the lookup source data coming into the lookup and the result data returned from the lookup.
The data provided by the DVD Lookup Translator comes in packages, because packages are processed in the transformation. Validate executes the comparison only when the DVD Lookup Translator provides the same number of packages for the before and after image.
The data of each package is then compared independently of the data of the other packages. For each package, the lookup source data is compared first. If the source data of the before/after package is not the same, the result data is not compared, as lookup correctness can only be tested based on the same input. If the source data of the before/after package image is the same, the result data is compared.
Rows are matched by the row number of the before/after source data. When saving the lookup source and result data, Validate sorts them by all fields to prevent sorting problems.
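The package-level lookup comparison described above can be sketched as follows. The data structures (each package as a pair of source-row and result-row lists) are assumptions made for illustration, not Validate's internal format.

```python
def compare_lookup_packages(before_pkgs, after_pkgs):
    """Illustrative sketch of lookup comparison. Each package is a
    (source_rows, result_rows) pair; rows are matched by position after
    sorting by all fields, mirroring how the data was saved.
    Returns a list of (source_ok, result_ok) statuses per package;
    result_ok is None when the source data differs."""
    if len(before_pkgs) != len(after_pkgs):
        return None  # comparison is only executed for equal package counts
    statuses = []
    for (b_src, b_res), (a_src, a_res) in zip(before_pkgs, after_pkgs):
        source_ok = sorted(b_src) == sorted(a_src)
        # result data is only comparable when the input was identical
        result_ok = (sorted(b_res) == sorted(a_res)) if source_ok else None
        statuses.append((source_ok, result_ok))
    return statuses
```

The two statuses per package correspond to the two comparison statuses shown later in the 'Display Lookup Results' screen: one for the source data and one for the returned data.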
Results overview
When displaying the results of Query, ListCube, SLO ListCube, Drill Down, or Transformation testing in the 'Display results' step, a list of all executed Test Variants is shown on the left side of the screen. Basic information about each Test Variant (depending on the Test Variant type, i.e. Query/ListCube/DTP/Drill Down) is displayed together with the following information:
- Before Runtime [s] (optional) – runtime of the Query/Table/ListCube/SLO ListCube/DTP load/Drill Down execution in the Before image, in seconds.
- After Runtime [s] (optional) – runtime of the Query/Table/ListCube/SLO ListCube/DTP load/Drill Down execution in the After image, in seconds.
- Before Runtime [hh:mm:ss] (optional) – runtime of the Query/Table/ListCube/SLO ListCube/DTP load execution in the Before image, in time format.
- After Runtime [hh:mm:ss] (optional) – runtime of the Query/Table/ListCube/SLO ListCube/DTP load execution in the After image, in time format.
- InfoProvider A Runtime [hh:mm:ss] (optional) – runtime of a Drill Down execution in InfoProvider A, in time format.
- InfoProvider B Runtime [hh:mm:ss] (optional) – runtime of a Drill Down execution in InfoProvider B, in time format.
- InfoProvider A Runtime [s] (optional) – runtime of a Drill Down execution in InfoProvider A, in seconds.
- InfoProvider B Runtime [s] (optional) – runtime of a Drill Down execution in InfoProvider B, in seconds.
- Difference [s] (optional) – difference between the After image runtime and the Before image runtime, in seconds.
- Difference [hh:mm:ss] (optional) – difference between the After image runtime and the Before image runtime, in time format.
- Result – result of data comparison.
- Reporting Result – reporting status set by Validate/user.
- Before Image Creation Time (optional) – time of before image creation.
- Before Image Creation Date (optional) – date of before image creation.
- After Image Creation Time (optional) – time of after image creation.
- After Image Creation Date (optional) – date of after image creation.
- Before Image Rows (optional) – number of rows of before image.
- After Image Rows (optional) – number of rows of after image.
- Reporting Status Text (optional) – text supplied when Validate/user sets the reporting status.
- Variant ID (optional) – Validate technical ID of a variant.
- Before Image Status (optional) – Task execution status for the before image
- After Image Status (optional) – Task execution status for the after image
- Before Image Overflow Handled (optional) – notification of overflow occurrence
- After Image Overflow Handled (optional) – notification of overflow occurrence
- Conversion Runtime [s] (optional) – conversion runtime of a Before image in the SLO ListCube/ListCube.
- Conversion Runtime [hh:mm:ss] (optional) – conversion runtime of a Before image in the SLO ListCube/ListCube in time format.
All optional columns can be added to the results overview table by clicking the 'Change Layout…' button. The ALV column structure for the user and scenario type is saved on exit and reapplied when the user enters this step again.
Different Results of Before/After image comparison
There are four types of results that each Test Variant can have:
- Green semaphore - if the data returned by the Before and After image is the same.
- Yellow semaphore - if neither of the images returned data.
- Red semaphore - if inconsistencies/differences were found between the data returned in the Before and After image.
- None - if the comparison of images failed with an error (e.g. outputs with different key structures were supplied for comparison) or if the comparison has not been done yet.
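The four outcomes can be summarized as a simple decision; the function and parameter names below are illustrative only, not Validate's API.

```python
def variant_result(before_rows, after_rows, compare_failed=False, compared=True):
    """Illustrative mapping of a comparison outcome to a result semaphore."""
    if compare_failed or not compared:
        return None          # e.g. different key structures, or not yet compared
    if not before_rows and not after_rows:
        return "yellow"      # neither image returned data
    return "green" if before_rows == after_rows else "red"
```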
Sometimes the 'After Runtime [s]' cells, along with the 'Difference' cells, are colorized red. This happens when the difference between the Before image runtime and the After image runtime reaches a threshold value defined in the Validate settings. You can specify these threshold values by clicking the button in the toolbar.
The decimal precision settings that influence the comparison are also applied in the output.
You can display each variant in detail by selecting the appropriate variant row and choosing 'Display Variant' from the context menu.
Display Variant Details
The Variant details screen differs based on the type of variant that was selected (e.g. Query Variant, Bookmark Variant, ListCube Variant, Drill Down Variant).
Query Variant Details Screen
You can use the button to filter out all correct records. In the list of all Test Variants, only the variants whose comparison finished with errors are then displayed (i.e. Test Variants that finished without an error are filtered out of the list).
Query Testing scenario erroneous variants
In the actual data screens, if the 'Only Errors' mode is active, only the output rows that have at least one cell colorized red are displayed. In the Before image output, the correct rows corresponding to the After image rows with wrong data are also displayed.
ListCube erroneous rows and appropriate before image rows
For Transformation testing, when the 'Only Errors' mode is active, the before image results display only the rows that correspond (by row number) to the erroneous rows of the after image.
Erroneous display in Transformations testing scenario
Reports Display Results
The Display Results screen for the ERP Report scenario is very similar to the other Display Results screens, but with the following differences:
- No union view is available
- There are three different formats in which you can review the report outputs: ALV table, Text Screen, and HTML.
Three types of the report output display
We recommend the HTML display type, which uses a monospaced font so the results are easily readable. The ALV and HTML display types can colorize errors found in the reports, unlike the simple text display.
Example of an HTML report output view
PDF Export
You can choose to export ERP report results into PDF files. First, select the variants you want to generate PDFs from; you can select multiple variants, and one PDF is created for each variant. After choosing this option, you are prompted to select a folder. The PDF files are saved in this folder with the naming convention RUN ID_TESTCASE ID_DATE OF COMPARE. Each PDF contains the before image and after image for one variant. Differences are highlighted in red in the after image.
Navigation in outputs
For performance reasons, 5000 lines (the value can be changed via settings) are displayed at once in 'Display results' for each variant output. When any of the variant images has more than 5000 lines of output, the navigational buttons become active and you can page through the results. The Before and After image outputs can be navigated independently using the navigational buttons.
Output buttons and position
Full-screen mode
Sometimes it is necessary to display the Before/After image outputs in full screen. To activate full-screen mode, click the 'Full Screen' button in the application toolbar; the results must already be displayed in the standard view. You can navigate through the output result pages (if any) directly in full-screen mode.
Full-screen mode of variant output
Union screen mode
The outputs of the ListCube, Transformation, and Drill Down test scenarios can be displayed together in union mode. It is accessible by clicking the 'Union Screen Results Display' button in the toolbar and lets you display two different outputs on one single screen.
Union screen mode for results display
The first column contains information about the source of data for a specified row. For Transformation and ListCube scenarios, it either contains the value 'A' (After image) or 'B' (Before image) and specifies which image the record belongs to. For the Drill Down scenario, this column contains a concatenation of InfoProvider technical name and its RFC Destination system ID (if any).
For all scenarios, the rows are paired so they can be compared together. The pairing differs by scenario: for the Transformation testing scenario row numbers are used, while for the ListCube and Drill Down scenarios the appropriate row keys are matched.
*Important Note: In the current version of Validate the Query testing scenario does not support Union display mode.
Check of keyfigure sums (Listcube and Table Test Scenario)
In the output of InfoProviders or database tables, when there are differences between the results of the Before and After images, it is sometimes necessary to also compare the overall sums of the Before/After image keyfigures. Instead of manually checking each column, you can click the 'Check Sums' button to get a comparison of the summed keyfigure values. Fields with different values are highlighted in red; overflowed sums are set to 0 and highlighted with a different color.
Compared Table keyfigure sums with highlighted differences and overflown sum
'Missing' column
For the Drill Down and ListCube Test Scenarios, a column named 'Missing' is added to the InfoProvider output. If a line in one of the Before/After images is incorrect because no record with the same key was found in the other image, an icon is added to this column. This helps you differentiate between erroneous and missing records; you can also use this column to filter the results. The column is visible in all three types of the results display.
*Important Note: Using the standard ALV filtering functionality on the output tables only affects the records of the currently displayed page, not the records of other pages.
Missing Column
Reporting Status
It is possible to display the reporting status column in the table with the list of tested variants for all of the backend scenarios by changing the layout of the result table. By default, this status is set to the same value as the comparison result in Validate. The reporting status is used to define the statuses of individual Test Cases for reporting. You can set the reporting status and override the compare status set by Validate by selecting the appropriate variant result row and choosing the 'Set Reporting Status' option.
Set Reporting Status Option
When changing the reporting status, you can choose the reporting status you want to set for the variant and add a description.
Set Reporting Status Dialog
It is possible to set the same reporting status for multiple variants at once by selecting multiple rows and choosing the 'Set Reporting Status' option.
Setting the reporting status for multiple variants
If you want to unify the reporting statuses used by test users to reflect acceptable errors (e.g. 'Out Of Scope'), you can specify Validate-wide reporting statuses. You can define these reporting statuses in the Validate settings under the 'Backend' settings tab. All customized reporting statuses can then be selected in the Set Reporting Status dialog using F4 help.
Report Statuses Customizing
Lookup Results
During the Transformation testing scenario, when Validate is provided with data from the DVD Lookup Translator, the 'Display Lookup Results' screen contains the lookup data and the comparison results. The structure of this screen is very similar to the standard 'Display Results' screen; however, there are some differences:
In the 'Display Lookup Results' screen there are two comparison statuses for each line of the tested DTP variant; there can be multiple lines per variant in a run. The number of lines depends on the number of data packages processed during the load for each variant. The first comparison status refers to the comparison of the lookup source data, while the second refers to the comparison of the data returned by the lookup.
Lookup package comparison results
Note that when the source data comparison fails, no comparison is performed on the data returned by the lookup.
The right side of the screen displays the before/after image data as in the standard 'Display Results' screen. When you double-click the appropriate variant lookup package, the data is displayed. By default, when you display the actual data this way, the returned lookup data is displayed. To switch between the display of the source data and the result data, click the 'Source Data' button (Shift + F7) or the 'Result Data' button (Shift + F8).
Test Run ID Locking
To prevent accidentally executing a Test Run and overwriting current results, you can use the lock option. When a Test Run is locked, it is not possible to change task states, execute or reset image creation or comparison, change variants, or change the reporting status until it is unlocked again. Mass Execution and scheduling do not run on locked test runs. Deletion in a Test Plan stops at the first locked Test Run. A locked Test Run is indicated by a lock icon in the Test Plan tab of Test Management and by a message with the description and user name.
You can change the lock state of a Test Run by selecting the Test Case name in the Test Plan (or the root task in the Test Case) and choosing 'Lock Run' (or 'Unlock Run') from the context menu.
Clicking the 'Lock Run' option opens a popup window where you can enter a description of the reason for locking the Test Run. Clicking 'Unlock Run' unlocks the Test Run directly.
Lock functionality is available for these Test Case Types:
- Query
- ListCube
- Table
- DrillDown
- ERP Report
- DTP
- SLO ListCube
STEP BY STEP: CREATING AND EXECUTING TEST RUN ID IN BACKEND TESTING
This example shows you a step-by-step guide as to how to create, execute, and display results in Backend Testing.
1. Run the transaction /DVD/VALIDATE in your SAP BW system.
2. In Validate Dashboard screen, choose Backend Testing (last icon on the function panel).
Backend Testing
3. In the Validate Backend Testing screen, choose Create new Test Run ID (F5).
Create New Run Test ID
4. A new popup window appears where you can choose the type of Test Scenario (you can press F4 for the list of all possible entries). Currently, there are 4 Test Scenarios to choose from. These scenarios are described in the chapter Backend Testing.
Adding a Test Scenario
For this example, we will choose the test scenario for Query testing – EQS.
After choosing the Test Scenario, you can enter the name for Test Run ID and the description.
Completing the creation of a Test Run ID
5. After creating a new Test Run ID, you should see an overview of all the tasks.
Overview of Tasks for Test Run ID
6. In the next step, we will add new query variants for our run.
Double-click the first task, Select Query Variants; a new screen appears with several options for adding a new variant. Here you can do the following:
- Create new Query Variant
This option creates a new query variant based on the selected query.
- Create new Bookmark Variant
This option creates a new query variant based on selected bookmark.
- Add existing Query Variant
If you choose this option, you can choose from the existing query variants that were created previously.
- Add Query Variant from Test Run ID
Allows you to copy query variants from an existing Test Run ID. Afterwards, all query variants from the chosen Test Run ID are added automatically.
- Generate Query Variants from QS
You can also add new query variants from Query Statistics.
- Create Based on Web Template
Create query variant for queries of the selected web template.
- Generate Based on Web Template Bookmark
Create a query variant for queries of web template bookmark.
In our example, we will choose to Create new Query Variant.
Creating a new Query Variant
7. When you click Create new Query Variant, a new window appears; here you need to add the technical name of the query you want to use and a description. Other fields are optional; please refer to the 'Create new Query Variant' chapter for more details.
Query variables
If the selected query requires input variables, you can set them by clicking the 'Set query variables' button.
Set query variables
8. After you save the query variant, you can view it in the list of all query variants.
Set query variables
9. Once all your query variants are added, save the selection by pressing the 'Save' button (Ctrl + S); then you can return to the Test Run ID tasks. In the next step, execute (F8 or double-click) the Generate tasks for before image task. Once generated, the first two status icons should be green.
Generate Tasks for before image
Important information: You can reset the task state of any executed task by highlighting the task and clicking the Reset task state button (Shift + F8).
Generate Tasks for before image
10. Once the tasks are generated you can then execute Create before image. A new pop up appears where you can specify the number of background jobs and the name of the Application server to be used.
If you want the number of background jobs to stay the same even when one or more jobs fail, you can check the 'Keep alive' option.
Create before Image
You can press F5 or F9 to refresh the task monitor while the task is running. A truck icon in the Status column means that the task is running. After the run finishes, the status icon turns green/yellow/red.
It is possible to execute specific sub-tasks instead of executing all tasks at once. To display and execute these sub-tasks, click on the blue icon in the Sub-task column.
Display Sub-Tasks
From here, you can execute the sub-tasks. In our example, there are a few sub-tasks; to execute one, double-click the chosen sub-task or press F8. You can observe the changing status and refresh the task monitor until the sub-task finishes and the status icon turns green.
Execute Sub-Task
Sub-Task Completed
11. Once the Create before image task is complete, you can start performing your specific tasks (like archiving, migration, etc.) in your system (outside Validate).
12. Afterwards, the next step is to execute Generate tasks for after image.
Generate tasks for after image
13. Creating the after image is similar to creating the before image. Execute the Create after image task and choose the number of background jobs for the task to run.
Create after Image
14. After both images (before and after) are ready for comparison, you should execute the task Generate tasks for comparison, followed by the task Compare before and after image. Your Test Run ID should now look similar to this one:
Generate task and Compare before and after image
15. In the Display results screen, the section on the left displays a list of all the test variants. By selecting one of the test variants, you can compare the before and after image outputs on the right-hand side of the screen. The runtime units are displayed in seconds.
Double click on your test case to compare the results of your before and after image.
Comparing Before and After results
16. As mentioned previously in the documentation, you can go back to any step in your Test Run by selecting Reset task state (Shift + F8).
Resetting steps in the Test Run ID