(DV-2302) Drill Down Execution & Backend
DRILL DOWN TEST CASE EXECUTION
When a Drill Down Test Case is added to the Test Plan, a new backend Test Run is automatically created and a Run ID is generated. The description holds the technical names of the Test Plan and the Test Case. All Drill Down Variants selected during the creation of the Drill Down Test Case are automatically added to the Drill Down Variant selection of the generated Test Run.
To execute a Drill Down Test Case from Test Management, click Execute. The standard Drill Down Testing scenario is then displayed.
Drill Down testing
The SNP Validate Drill Down Test Scenario uses the ListCube function similarly to the ListCube test scenario, and uses the RSA1 Display Data function to read the data from the InfoProviders to be tested.
Recommendations:
- For one Drill Down testing run, a maximum of 1000 Drill Down variants should be used.
- It is recommended to set the SNP Validate setting Drill Down maximum records to a value below 100 in order to reduce the amount of data to process and reduce the chance of errors.
Test Scenario EQS_DRILL (Drill Down testing)
Drill Down Testing Scenario
Drill Down testing contains the following steps:
- Selection of Drill Down Variants
- Generation of execution tasks
- Execution of Drill Down Tasks
- Display of the results
Select Drill Down Variants
You can select which of the Drill Down Variants are to be used, or create new Drill Down Variants and add them to the Test. When the Test Run is generated through SNP Validate Test Management, the variants are already preselected for the Drill Down Test Case.
Once the variants are selected for the Test Run, you can save the selection by clicking on the Save button (Ctrl + S).
Drill Down Variants can be added to the list in the following ways:
Create New Drill Down Variant: (SHIFT + F1). For detailed information about the creation of a Drill Down Variant, refer to the Create New Drill Down Variant section.
Add Existing Drill Down Variant: (SHIFT + F4) displays all of the existing Drill Down Variants in the system, and you can select one or more variants to be added to the current Test Run.
Add Drill Down Variants of Run: (SHIFT + F5) you can select the Variants used in another run (distinct from the current one) and add them to the current run as well.
Generate Drill Down Variants: (SHIFT + F6) refer to the Generate Drill Down Variants section for more information.
Copy Variants: Refer to the Copy Drill Down Variants section for function details.
Generate Tasks for Drill Down
The tasks are generated for the following step, Execute Drill Down Tasks. You can specify key figures in the InfoObjects to be ignored during the comparison of the Drill Down Variants specified for this run. These ignored key figure columns are not visible in the Display Results. Task generation is started by clicking the Create Tasks (F8) button.
You can enable Automated Root Cause Analysis for comparison tasks. Refer to the Automated Root Cause Analysis section for details.
Selection of ignored key figure columns
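The effect of ignoring key figure columns can be illustrated with a small sketch. This is a hypothetical simplification, not the tool's internal logic; the column names, row format, and the compare_outputs helper are assumptions for illustration only:

```python
# Sketch: compare two InfoProvider outputs while ignoring selected key figures.
# Rows are dicts of column name -> value; non-key-figure columns form the key.

def compare_outputs(rows_a, rows_b, key_figures, ignored):
    """Return the set of keys whose non-ignored key figures differ."""
    compared = [kf for kf in key_figures if kf not in ignored]

    def to_map(rows):
        return {
            tuple(sorted((k, v) for k, v in row.items() if k not in key_figures)):
            tuple(row.get(kf) for kf in compared)
            for row in rows
        }

    map_a, map_b = to_map(rows_a), to_map(rows_b)
    return {k for k in map_a.keys() | map_b.keys() if map_a.get(k) != map_b.get(k)}

source = [{"0MATERIAL": "M1", "AMOUNT": 100, "QUANTITY": 5}]
target = [{"0MATERIAL": "M1", "AMOUNT": 100, "QUANTITY": 7}]

# QUANTITY differs, so comparing all key figures reports a difference ...
print(compare_outputs(source, target, ["AMOUNT", "QUANTITY"], ignored=[]))
# ... while ignoring QUANTITY makes the comparison pass (empty result).
print(compare_outputs(source, target, ["AMOUNT", "QUANTITY"], ignored=["QUANTITY"]))
```

Because the ignored key figures are dropped before the comparison, they also never appear in the comparison output, which mirrors why ignored columns are not visible in the Display Results.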
Execute Drill Down Tasks
Double-click this step to execute it; you can then define the number of jobs or background jobs to be used for task execution. Each task executes a Drill Down test for one Drill Down Variant. The Drill Down scenario testing is explained below.
The Drill Down test variant compares the data from two different InfoProviders; in most cases, it is the same InfoProvider on different systems. Based on the Drill Down characteristics selected in the variant definition, the execution starts by adding the first specified Drill Down characteristic to a ListCube, which is then read on both InfoProviders. The ListCube outputs are immediately compared. For two out of three scenarios, the Drill Down test execution ends here:
- One InfoProvider did not return data while the other one did. In this case, one InfoProvider has no data; we advise not to continue, as further comparisons will fail.
- The data returned by both InfoProviders is the same. Since everything is correct, the Drill Down task stops execution to free up resources for other tasks.
In the third scenario, some or all of the data returned by the InfoProviders differs. In this case, a new test cycle begins and the erroneous data is checked. For the first added characteristic (the one added at the start of the first cycle), up to X distinct values belonging to erroneous records are selected, where X is the SNP Validate setting Drill Down maximum records. These values act as filter values for this characteristic in the next test cycle (a drill down into the erroneous records).
The next characteristic in the order of specified Drill Down characteristics (if no more are available, the execution stops here) is added to the ListCube output for reading. The ListCube reads are repeated for both InfoProviders with the new settings, and the data is compared again. Afterward, either a new test cycle begins using the same logic just described, or the execution ends here, depending on the result of the comparison.
If the SNP Validate setting Drill Down additional aggr. is set to X, the data returned in each cycle when the InfoProviders are read is aggregated again. This function can be used when ListCube returns multiple rows with the same key because of unsupported functions.
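The cycle described above can be sketched as follows. This is a hypothetical simplification in Python: the read_listcube callback, the system names, the row format, and the MAX_RECORDS constant (standing in for the Drill Down maximum records setting) are assumptions for illustration; the real logic runs inside SNP Validate:

```python
# Sketch of the Drill Down comparison cycle. read_listcube stands in for the
# actual ListCube read on an InfoProvider and returns a mapping of
# characteristic value -> aggregated key figure data.

MAX_RECORDS = 100  # mirrors the "Drill Down maximum records" setting

def drill_down(read_listcube, characteristics, filters=None, level=0):
    """Compare two systems characteristic by characteristic, drilling into
    erroneous records until the data matches or no characteristics remain."""
    filters = filters or {}
    char = characteristics[level]

    rows_a = read_listcube("SYSTEM_A", char, filters)
    rows_b = read_listcube("SYSTEM_B", char, filters)

    # Scenario 1: one side returned no data -> stop, further comparison fails.
    if bool(rows_a) != bool(rows_b):
        return "no data on one system"
    # Scenario 2: both sides match -> stop and free resources.
    erroneous = {k for k in rows_a.keys() | rows_b.keys()
                 if rows_a.get(k) != rows_b.get(k)}
    if not erroneous:
        return "data identical"
    # Scenario 3: differences found -> filter on up to MAX_RECORDS erroneous
    # values of this characteristic and drill into the next characteristic.
    if level + 1 >= len(characteristics):
        return f"differences remain in {sorted(erroneous)}"
    filters = {**filters, char: sorted(erroneous)[:MAX_RECORDS]}
    return drill_down(read_listcube, characteristics, filters, level + 1)

# Minimal usage with static fake data: material M2 differs between systems.
data = {
    "SYSTEM_A": {"M1": 10, "M2": 20},
    "SYSTEM_B": {"M1": 10, "M2": 25},
}

def fake_read(system, char, filters):
    return data[system]

print(drill_down(fake_read, ["0MATERIAL"]))  # differences remain in ['M2']
```

The key design point the sketch captures is that each cycle narrows the data volume: only the erroneous values of the previous characteristic are carried forward as filters, which is why keeping Drill Down maximum records small reduces the amount of data processed per cycle.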
Display results
By double-clicking this step, you can view the outputs of the Drill Down Variant executions and their comparison results. Refer to the Results overview chapter for more details.