(2011) DataTiering Settings

To access the settings of a specific DataProvider, you can choose the Settings button, as shown in the figure below.
To access the global settings of DataProviders, click on the Settings tab.

Under DP Settings, you can either change the general settings for all DataProviders by clicking Display, or change the settings of one DataProvider by entering its name.


It is possible to transport the definition of the DataProvider and its settings to another system directly from DataTiering. For more information, see the chapter (2011) Transport of DataProviders and Settings to Other Systems.

SETTINGS

Parameters

Description


Group ID

Name of the Group ID used for mass processing of DataProviders. For efficient processing, a Group ID should contain DataProviders that share a common characteristic.

The Group ID is a global setting and can only be changed for all DataProviders at once.
By default, the Group ID is set to the value "Default". To edit the name of the default Group ID, click Edit. The name may contain a maximum of 10 characters.

Read VP: ON-THE-FLY replication of navigation attributes

When reading offloaded data through a query that uses navigation attributes, this feature creates a replica of the master data table of these navigation attributes (if it does not exist yet) and performs a join with this table on the external storage.

Click Edit and enter 'X' to switch the On-The-Fly replication on, or enter ' ' to switch it off.
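The behaviour can be pictured roughly as follows. This is a minimal conceptual sketch in Python, not the product implementation; the replica table name, the dictionary-based stand-in for the external storage, and the join step are illustrative assumptions only.

```python
# Conceptual sketch of ON-THE-FLY replication of navigation attributes (illustrative only).
# "external_storage" stands in for the external storage; here it is simply a dictionary.

def read_with_nav_attribute(external_storage, offloaded_rows, master_data, nav_attr):
    replica_name = f"MD_REPLICA_{nav_attr}"            # hypothetical name of the replica table

    # 1. Replicate the master data table to the external storage if it does not exist yet.
    if replica_name not in external_storage:
        external_storage[replica_name] = dict(master_data)

    # 2. Join the offloaded data with the replica on the external storage side.
    replica = external_storage[replica_name]
    return [dict(row, **{nav_attr: replica.get(row["MATERIAL"])}) for row in offloaded_rows]

storage = {}
rows = [{"MATERIAL": "M-01"}, {"MATERIAL": "M-02"}]
master = {"M-01": "Metal", "M-02": "Plastic"}
print(read_with_nav_attribute(storage, rows, master, "MATERIAL_TYPE"))
```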

Source storage

Name of the storage from which data is offloaded.

Source storage is a global setting and can only be changed for all DataProviders at once.
To change the default source storage, click Edit.

Target storage

Name of the storage to which data is offloaded. 

Target storage is a global setting and can only be changed for all DataProviders at once.
To change the default target storage, click Edit.

Selectivity field for DP

Name of the field for partitioning. If the value of this setting is filled, the partitioning will be based on the entered field name. If the value stays empty, the request won't be divided into partitions.

You can change the Selectivity field by clicking Edit. We recommend using a field that has many distinct values (higher granularity), for example, a document number.
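To illustrate this recommendation, the sketch below (Python, illustrative only; the hash-based bucketing and the field names DOCNR and FLAG are assumptions, not the actual partitioning scheme) shows why a high-granularity field spreads rows evenly across partitions, while a low-granularity field cannot.

```python
# Illustrative sketch only: why a high-granularity selectivity field partitions evenly.
# The real partitioning scheme is product-internal; hash bucketing is just an assumption here.
from collections import Counter

def partition_counts(rows, selectivity_field, number_of_partitions=4):
    """Count how many rows would land in each partition derived from the field value."""
    buckets = Counter(hash(row[selectivity_field]) % number_of_partitions for row in rows)
    return [buckets.get(i, 0) for i in range(number_of_partitions)]

rows = [{"DOCNR": f"90{i:05d}", "FLAG": "X" if i % 2 else " "} for i in range(1000)]
print(partition_counts(rows, "DOCNR"))   # roughly even, e.g. [~250, ~250, ~250, ~250]
print(partition_counts(rows, "FLAG"))    # at most two partitions can ever receive rows
```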

Max BG jobs for subrequest processing


The maximum number of background jobs for parallel processing.
This setting is relevant only when parallel processing is enabled. Note that with parallel processing, the system first creates a job for the offloading and afterwards creates additional jobs for the subrequests.

To edit the maximum number of background jobs, click Edit. Please consider your system settings and set an appropriate value.


Fetch package size

The maximum number of records in one package fetched from the source storage. This setting is used when reading data from the source storage.

To change the maximum number of records in one package, click Edit. The recommended minimum value is 100 000 rows.
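As a simple illustration of package-wise reading, consider the following Python sketch (the data and function are assumed examples, not the product code):

```python
# Illustrative sketch: reading the source data in packages of at most FETCH_PACKAGE_SIZE rows.
FETCH_PACKAGE_SIZE = 100_000   # recommended minimum value of the setting above

def fetch_in_packages(source_rows, package_size=FETCH_PACKAGE_SIZE):
    """Yield the source rows in packages of at most package_size records."""
    for start in range(0, len(source_rows), package_size):
        yield source_rows[start:start + package_size]

source_rows = list(range(250_000))                                     # stand-in for the source table
print([len(package) for package in fetch_in_packages(source_rows)])    # -> [100000, 100000, 50000]
```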

Package volume for the subrequest

The maximum size of a GLUE subrequest (in MB). Relevant only when parallel processing is enabled.

This parameter should be set high enough to avoid creating many small files on HDFS, but not so high that the subrequest fails on SAP memory.

To change the maximum size of one subrequest, click Edit. The recommended value is between 500-1000 MB.
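The parallel-processing settings above interact roughly as in the following sizing sketch (Python; the example numbers are assumptions, not product behaviour guarantees):

```python
# Rough sizing sketch for parallel offloading (example numbers are assumptions).
request_size_mb = 6_000   # hypothetical amount of data to offload
subrequest_mb   = 750     # "Package volume for the subrequest" (recommended 500-1000 MB)
max_bg_jobs     = 4       # "Max BG jobs for subrequest processing"

subrequests = -(-request_size_mb // subrequest_mb)   # ceiling division -> 8 subrequests
rounds      = -(-subrequests // max_bg_jobs)         # waves of parallel background jobs -> 2

# The system first creates one job for the offloading itself and then jobs for the
# subrequests, running at most max_bg_jobs of them in parallel.
print(f"{subrequests} subrequests, at most {max_bg_jobs} parallel jobs, about {rounds} rounds")
```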


Minimum offloaded rows

The minimum number of offloaded rows for one offloading run. If the parameter is not high enough, it may affect the storage performance, because too many partitions are created. The offloading condition should therefore be defined so that it selects at least this number of rows; if the number of offloaded rows is lower than the Minimum offloaded rows parameter, a confirmation message appears.

To change the minimum number of offloaded rows, click Edit. The recommended value is 100 000 rows.

Use index

Indexes are used only by Writer3, which supports DSO and DSO-like ADSO objects. Indexes for other types of objects are no longer needed and can be disabled for existing DataProviders.

To delete the unused indexes, execute the report /DVD/OFF_INDEX_DELETE, which also automatically disables this setting.

For more information, check the info buttons of the setting and of the report.

Enable data synchronization

Allows the data synchronization of the source table.

To enable this feature, click Edit.

Table for data synchronization

Name of the table used for data synchronization. Create a table based on the source table, then create an extractor and execute the data extraction.

To fill in the name of the table, click Edit.

Enable PiT recovery for Hadoop

Allows the Point-in-Time recovery for Hadoop target storage. For more information, see the chapter (2011) Point-In-Time Recovery for Hadoop.
To enable this feature, click Edit.
Enable PiT recovery for HANA

Allows the Point-in-Time recovery for the HANA primary database. For more information, see the chapter (2011) Point-In-Time Recovery for HANA.
To enable this feature, click Edit.

Binary storage for HANA PiT recovery

Name of the binary storage used to store data for the Point-in-Time recovery for HANA. Please note that once you have filled in the binary storage for a specific DataProvider, you should not change it.

To fill in the name of the binary storage, click Edit.

BAdI Provider - BAdI Implementation

To use BAdI Providers, you need to create an enhancement implementation of the enhancement spot RSO_BADI_PROVIDER and a BAdI implementation with the implementing class /DVD/OFF_BW4HANA_CL_BADI_PROV. You can create both implementations using transaction SE19.

If the setting is empty, the functionality tries to find a BAdI implementation with the implementing class /DVD/OFF_BW4HANA_CL_BADI_PROV. If there is exactly one such implementation, it is used; otherwise, the creation of the BAdI Provider ends with an error.

To fill in the BAdI implementation name, click Edit.
BAdI Provider - Enhancement Implementation

To use BAdI Providers, you first need to create your own enhancement implementation of the enhancement spot RSO_BADI_PROVIDER. To create the enhancement, use transaction SE19. This parameter should contain the name of the created enhancement implementation.

If the setting is empty, the functionality tries to find a BAdI implementation with the implementing class /DVD/OFF_BW4HANA_CL_BADI_PROV. If there is exactly one such implementation, it is used; otherwise, the creation of the BAdI Provider ends with an error.

To fill in the Enhancement implementation name, click Edit.
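The fallback behaviour when these BAdI settings are left empty can be summarised by the following sketch (Python, illustrative only; the function and the example implementation name are assumptions, not the product code):

```python
# Illustrative sketch of the fallback lookup when the BAdI settings are empty (not product code).
IMPLEMENTING_CLASS = "/DVD/OFF_BW4HANA_CL_BADI_PROV"

def resolve_badi_implementation(configured_name, all_implementations):
    """all_implementations: list of (implementation_name, implementing_class) pairs."""
    if configured_name:                       # an explicitly configured implementation wins
        return configured_name
    candidates = [name for name, cls in all_implementations if cls == IMPLEMENTING_CLASS]
    if len(candidates) == 1:                  # exactly one matching implementation -> use it
        return candidates[0]
    raise RuntimeError("BAdI Provider creation fails: zero or multiple implementations found")

print(resolve_badi_implementation("", [("Z_MY_BADI_IMPL", IMPLEMENTING_CLASS)]))  # Z_MY_BADI_IMPL
```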
Reload via DTP

Due to SAP certification restrictions, we are not allowed to write or delete data directly in the InfoProvider's underlying database table. Therefore, we use the standard Data Transfer Process for such operations.

To use the certified functionality, you need to create a DTP between the CompositeProvider generated for the DataProvider and the original InfoProvider. This DTP has to be set in this parameter. If the parameter is not filled, data is written to or deleted from the database table of the original InfoProvider directly.

To fill in the DTP (Data Transfer Process), click Edit.
Manual import of a transport

If you have any issues with the creation/activation of a DataProvider using the XPRA report, you can specify here that the import of the transport will not be executed by the XPRA functionality.

You will then be able to manually execute it by using transaction /DVD/OFF_IMPORT.

This functionality is disabled by default, as it is easier to import the transport and let XPRA create and activate the objects automatically.

To enable this feature, click Edit.