(2105) HANA Native Offloading

The goal of the solution is to archive data from a HANA native database to Hadoop storage and to access the archived data in HANA native. For this purpose, DataTiering creates a virtual table and a calculation view; only one calculation view can be created per DataProvider.

The created calculation view contains a union of the source table and the target table (a virtual table) in HANA native. Hadoop is added to HANA native as a remote source through Smart Data Access (SDA). The offloading process creates a virtual table that points to the Hadoop table. This way, data stored in Hadoop can be accessed from HANA native.

Features:

  • Support for table names longer than 16 characters

    • A DP name HTAB*generated_hash* is created. On the main screen, the user can switch the view with the
      DP Description button to see the name of the HANA native table in the description

  • Support for table fields longer than 30 characters

    • A long field name is truncated or replaced with a counted field name (fieldname_count)

  • Relative condition

  • Custom target table name

    • After defining the target storage, the user can define a custom target table name with the mandatory prefix ZOFF*, which is automatically added to the name

  • Possible to offload HANA native tables with the same name but from different databases (schemas)

    • If the DP name is already defined, a DP name will be generated automatically (HTAB*)
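The naming rules above can be illustrated with a short sketch. This is not the product's actual algorithm — the hash function, hash length, and helper names are assumptions chosen for the example; only the HTAB* and ZOFF* prefixes and the length limits come from the documentation:

```python
import hashlib

def generate_dp_name(table_name: str, max_len: int = 16) -> str:
    """Illustrative only: derive a DP name of the form HTAB<generated_hash>
    for table names longer than 16 characters. The MD5 hash and its length
    are assumptions, not the product's actual implementation."""
    if len(table_name) <= max_len:
        return table_name
    digest = hashlib.md5(table_name.encode()).hexdigest()[:12].upper()
    return "HTAB" + digest

def shorten_field(field_name: str, count: int, max_len: int = 30) -> str:
    """Illustrative only: cut a field name longer than 30 characters and
    append a counter, yielding the fieldname_count pattern."""
    if len(field_name) <= max_len:
        return field_name
    suffix = f"_{count}"
    return field_name[: max_len - len(suffix)] + suffix

def target_table_name(custom_name: str) -> str:
    """Illustrative only: enforce the mandatory ZOFF prefix on a custom
    target table name by adding it automatically when missing."""
    return custom_name if custom_name.startswith("ZOFF") else "ZOFF" + custom_name
```

For example, `target_table_name("SALES")` yields `ZOFFSALES`, and a 40-character field name is shortened to exactly 30 characters ending in the counter suffix.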

Setup before offloading:

You can check or set up storages in Storage management through the transaction /DVD/SM_SETUP.

  • Source storage - HANA Native Storage type SM_TRS_HNV

    DB Connection Name - You can find existing connections through the transaction DBCO.

  • Target storage - Hadoop Metastore Storage type SM_TRS_MS

Set up SDA parameters:

Remote source name: Hadoop cluster

Database name: Name of the database on the remote source

Schema: Database schema of the primary DB

Database connection: DB connection to external HANA database (transaction DBCO)

  • Smart Data Access (SDA): Remote source in the source storage HANA database
    Before you set up the target storage, you must check the remote sources in the HANA studio. According to the figure below, fill out the SDA parameters in the target storage (figure above).

1. Create a DataProvider of the type HANATAB

The process of creating this type of DataProvider is the same as for other types. Enter the transaction /DVD/OFF and click Add DataProvider.

Mandatory fields are:

  • Source Storage - The field must be a HANA native storage (which you set up previously in /DVD/SM_SETUP)

  • DataProvider category

  • DataProvider name - Use the search help to find a source table. Please note that the displayed tables come from the external HANA database

  • Target Storage - Hadoop (Hadoop storage with SDA parameters, which you set up previously in /DVD/SM_SETUP)

  • Target table name (optional) - the table created in the target storage, with the mandatory prefix ZOFF*

The virtual table is created in this step and is always saved into the schema where the source table is defined.

2. Impact analysis

In this step, DataTiering finds the dependent objects of a DataProvider. After clicking Execute Analysis, the calculation view is added to the list. The calculation view will only be generated in the next step; therefore, its fields are still empty.

3. Object adjustment

By executing the adjustment, DataTiering generates a calculation view that contains a union of the original HANA table and the virtual table with the offloaded data.

Generated calculation view in HANA native:
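Conceptually, the generated view returns the union of rows still in the original source table and rows reachable through the virtual table pointing at Hadoop. A minimal sketch of that read path, with invented sample rows (the real view is generated as a HANA calculation view, not Python):

```python
# Conceptual model of the generated calculation view: it unions rows
# from the original HANA table with rows from the SDA virtual table
# that points at the offloaded Hadoop data.
source_table = [  # hot rows still in the HANA native table (sample data)
    {"ID": 3, "YEAR": 2021, "AMOUNT": 120},
]
virtual_table = [  # rows offloaded to Hadoop, exposed via the virtual table
    {"ID": 1, "YEAR": 2018, "AMOUNT": 75},
    {"ID": 2, "YEAR": 2019, "AMOUNT": 90},
]

def calculation_view():
    """Return the union of live and offloaded rows, as consumers of the
    generated view see them."""
    return source_table + virtual_table
```

A consumer querying the view sees all three rows without needing to know which side each row physically resides on.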