(1911) DataTiering Settings

To access the settings of a specific DataProvider, choose the Settings button, as shown in the figure below.
To access the global settings of DataProviders, click the Settings tab.

Under DP Settings, you can either change the general settings for all DataProviders by clicking Display, or change the settings for a single DataProvider by entering its name.


It is possible to transport the definition of a DataProvider and its settings to another system directly from DataTiering. For more information, see (1911) Transport of DataProviders and Settings to Other Systems.

SETTINGS


Group ID

Name of the Group ID used for mass processing of DataProviders. For better processing, a Group ID should contain DataProviders that share a common characteristic.

Group ID is a global setting and may be changed only for all DataProviders.
The Group ID is set to the value "Default". To edit the name of the default Group ID, click Edit. The name can contain a maximum of 10 characters.

Read VP: ON-THE-FLY replication of navigation attributes

When reading offloaded data through a query with navigation attributes, this feature creates a master data table replica for these navigation attributes (if it has not been created yet) and performs a join with this table on the external storage.

Click Edit and enter 'X' to switch the On-The-Fly replication on, or enter ' ' to switch it off.
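
The following is a minimal conceptual sketch of that behaviour, not SNP Glue code: the master data table of the navigation attributes is replicated to the external storage only if the replica does not exist yet, and the query result is then joined with it. All table, column, and function names are hypothetical.

```python
import pandas as pd

def read_with_navigation_attributes(offloaded_data: pd.DataFrame,
                                    master_data: pd.DataFrame,
                                    external_storage: dict) -> pd.DataFrame:
    # Replicate the master data table of the navigation attributes to the
    # external storage only if it has not been created yet ("on the fly").
    if "MASTERDATA_REPLICA" not in external_storage:
        external_storage["MASTERDATA_REPLICA"] = master_data.copy()

    # Join the offloaded data with the replica on the external storage so
    # the navigation attributes can be resolved in the query result.
    return offloaded_data.merge(
        external_storage["MASTERDATA_REPLICA"],
        on="CUSTOMER",  # hypothetical characteristic carrying the attributes
        how="left",
    )
```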

Source storage

Name of the storage from which data is offloaded.

Source storage is a global setting and may be changed only for all DataProviders.
To change the default source storage, click Edit.

Target storage

Name of the storage to which data is offloaded. 

Target storage is a global setting and may be changed only for all DataProviders.
To change the default target storage, click Edit.

Selectivity field for DP

Name of the field used for partitioning. If this setting is filled, partitioning is based on the entered field name. If it is left empty, the request is not divided into partitions.

You can change the Selectivity field by clicking Edit. We recommend using a field that has many distinct values (higher granularity), for example a document number.
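
Purely as an illustration of the idea (this is not the internal GLUE implementation, and the field and record names are made up), partitioning by a selectivity field can be pictured as grouping the offload request by the distinct values of that field, so a high-granularity field such as a document number yields many partitions:

```python
from collections import defaultdict

def partition_by_field(records: list[dict], selectivity_field: str | None) -> list[list[dict]]:
    # Without a selectivity field, the whole request stays in one partition.
    if not selectivity_field:
        return [records]

    # Otherwise, group the records by the value of the selectivity field.
    groups = defaultdict(list)
    for record in records:
        groups[record[selectivity_field]].append(record)
    return list(groups.values())

# Example: three hypothetical records partitioned by document number (BELNR).
partitions = partition_by_field(
    [{"BELNR": "0000000001"}, {"BELNR": "0000000002"}, {"BELNR": "0000000001"}],
    "BELNR",
)
print(len(partitions))  # 2 partitions, one per distinct document number
```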

Max BG jobs for subrequest processing


Maximum number of background jobs for parallel processing.
This setting is relevant only when parallel processing is enabled. Please note that during parallel processing, the system first creates a job for the offloading itself and then creates additional jobs for the subrequests.

To edit the maximum number of background jobs, click Edit. Please consider your system settings and set an appropriate value.
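
As a rough sketch of the idea only (the scheduler shown here is a generic thread pool, not the actual SAP background job framework), the setting caps how many subrequest jobs run at the same time after the main offloading job has been created:

```python
from concurrent.futures import ThreadPoolExecutor

def process_subrequests(subrequests: list[str], max_bg_jobs: int) -> list[str]:
    # The offloading job itself has already been created; each subrequest is
    # then handled by its own job, with at most max_bg_jobs running in parallel.
    with ThreadPoolExecutor(max_workers=max_bg_jobs) as pool:
        return list(pool.map(lambda name: f"{name} processed", subrequests))

# Example: four subrequests processed with at most two parallel jobs.
print(process_subrequests(["SUBREQ_1", "SUBREQ_2", "SUBREQ_3", "SUBREQ_4"], 2))
```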


Fetch package size

Maximum number of records in one package fetched from the source storage. This setting is used when reading data from the source storage.

To change the maximum number of records in one package, click Edit. The recommended minimum value is 100 000 rows.
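
As an illustration of packaged reading only (the row source and sizes are hypothetical), the setting limits how many records are read from the source storage in a single package:

```python
def fetch_in_packages(rows, fetch_package_size: int = 100_000):
    # Yield records from the source storage in packages of at most
    # fetch_package_size rows instead of reading everything at once.
    package = []
    for row in rows:
        package.append(row)
        if len(package) == fetch_package_size:
            yield package
            package = []
    if package:
        yield package

# Example: 250 000 hypothetical rows are read as three packages (100k/100k/50k).
packages = list(fetch_in_packages(range(250_000)))
print([len(p) for p in packages])
```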

Package volume for subrequest

Maximum size of a GLUE subrequest (in MB). Relevant only when parallel processing is enabled.

This parameter should be high enough to avoid creating many small files on HDFS, but not so high that the offloading fails on SAP memory.

To change the maximum size of one subrequest, click Edit. The recommended value is between 500 and 1000 MB.
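
As a back-of-the-envelope illustration (the numbers below are examples only), the number of subrequests is roughly the total volume of the offloading request divided by this setting:

```python
import math

def estimated_subrequest_count(request_volume_mb: float, package_volume_mb: float) -> int:
    # A small package volume creates many small files on the target (e.g. HDFS);
    # a large one creates fewer subrequests but needs more SAP memory per job.
    return math.ceil(request_volume_mb / package_volume_mb)

# Example: a 7 000 MB request with a 700 MB package volume -> about 10 subrequests.
print(estimated_subrequest_count(7_000, 700))
```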


Minimum offloaded rows

Minimum number of offloaded rows for one offloading process. If the parameter is not high enough, it may affect the storage performance (too many partitions are created). Also, the offloading condition should be defined as lower than or equal to this parameter. If the number of offloaded rows is less than the Minimum offloaded rows parameter, a confirmation message appears.

To change the minimum number of offloaded rows, click Edit. The recommended value is 100 000 rows.
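
A minimal sketch of the described check, assuming (this is not taken from the actual implementation) that the confirmation is a simple yes/no prompt:

```python
def confirm(prompt: str) -> bool:
    # Hypothetical stand-in for the confirmation pop-up in the GUI.
    return input(f"{prompt} (y/n): ").strip().lower() == "y"

def check_minimum_offloaded_rows(selected_rows: int,
                                 minimum_offloaded_rows: int = 100_000) -> bool:
    # Offloading fewer rows than the threshold can create too many small
    # partitions on the target storage, so the user has to confirm it.
    if selected_rows < minimum_offloaded_rows:
        return confirm(f"Only {selected_rows} rows selected for offloading, continue?")
    return True
```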

Enable data synchronization

Allows the data synchronization of the source table.

To enable this feature, click Edit.

Table for data synchronization

Name of the table used for the data synchronization. Create a table based on the source table, then create an extractor and execute the data extraction.

To fill in the name of the table, click Edit.

Enable PiT recovery for Hadoop

Allows the Point-in-Time recovery for Hadoop target storage. For more information, see (1911) Point-In-Time Recovery for Hadoop.
To enable this feature, click Edit.
Enable PiT recovery for HANA

Allows the Point-in-Time recovery for the HANA primary database. For more information, see (1911) Point-In-Time Recovery for HANA.
To enable this feature, click Edit.

Binary storage for HANA PiT recovery

Name of the binary storage used to store data for Point-in-Time recovery for HANA. Please note that once you have filled in the binary storage for a specific DataProvider, you should not change it.

To fill in the name of the binary storage, click Edit.