(SM-1911) SAP System Refresh with Glue replication to Hadoop

Purpose

This page describes the steps necessary to handle a refresh of an SAP system consistently and with a preserved connection to Hadoop.

The chapters below list the areas of the SAP system which need to be saved during the system refresh.
The usual approach is to export the content of the appropriate tables and import it afterwards.
Actual table names are not listed, as these may vary depending on the SAP NetWeaver version.


General SAP related steps

This section discusses standard SAP activities and assumptions that apply during an SAP system refresh. If some of these steps are not handled in the customer environment by default, manual action is required.

RFC destinations

It is a common practice to re-import all RFC destinations during the system refresh. Datavard Storage Management depends on two RFC destinations (link), which need to be preserved.
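
If a scripted record of these destinations is preferred over manual checks, their definitions can be read from the RFCDES table before the refresh. The sketch below is a minimal example using the pyrfc library and the standard RFC_READ_TABLE function module; the connection parameters and the DVD% name filter are assumptions, as the actual destination names come from the linked documentation.

  # Minimal sketch: snapshot RFC destination definitions from RFCDES before the refresh.
  # Assumptions: pyrfc is installed and the 'DVD%' filter matches the two Datavard
  # destinations; take the real names from the linked documentation.
  from pyrfc import Connection

  conn = Connection(ashost='sap-host', sysnr='00', client='100',
                    user='BACKUP_USER', passwd='secret')

  result = conn.call(
      'RFC_READ_TABLE',
      QUERY_TABLE='RFCDES',
      DELIMITER='|',
      OPTIONS=[{'TEXT': "RFCDEST LIKE 'DVD%'"}],  # hypothetical name pattern
  )

  with open('rfcdes_backup.txt', 'w') as f:
      for row in result['DATA']:
          f.write(row['WA'] + '\n')

  conn.close()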

Certificates

Similarly to the usual Secure Storage content re-import, the system's certificates contained in STRUST need to be re-imported as well.

User Master Data

User Master Data re-import is usually, but not always, part of the system refresh. The HADOOP user (link) is important for the Datavard Java Connector functionality.

The HADOOP technical user and its role are client-dependent.
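
After the refresh, the presence of the HADOOP user in the target client can be verified programmatically as a quick sanity check. Below is a minimal sketch using pyrfc and the standard BAPI_USER_EXISTENCE_CHECK BAPI; the connection parameters are placeholders.

  # Minimal sketch: verify that the HADOOP technical user exists in the refreshed client.
  # The RETURN structure carries a message saying whether the user exists.
  from pyrfc import Connection

  conn = Connection(ashost='sap-host', sysnr='00', client='100',
                    user='CHECK_USER', passwd='secret')

  result = conn.call('BAPI_USER_EXISTENCE_CHECK', USERNAME='HADOOP')
  print(result['RETURN']['MESSAGE'])  # e.g. a message stating the user exists

  conn.close()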

Logical paths

If the Storage Management setup was done properly, the logical paths defined in transaction FILE (link) are identical in the development, quality, and production systems.
Using <SYSID> in the path definition ensures that no post-processing in this area is necessary.
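
For illustration, such a logical path typically resolves to a physical path of the following shape; the directory layout below is only an assumption, the actual definition comes from the setup guide (link):

  /usr/sap/<SYSID>/dvd_conn/<FILENAME>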

OS directories

The "dvd_conn" directories described in the setup guide (link) contain configuration files necessary for the proper functionality of the Datavard Java Connector and as such need to be preserved.
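
Backing up these directories before the refresh can be scripted. Below is a minimal Python sketch; the source path is an assumption, so take the actual location of the "dvd_conn" directories from the setup guide.

  # Minimal sketch: archive a dvd_conn directory before the system refresh.
  # The source path is hypothetical; use the location from the setup guide.
  import shutil
  import time

  src = '/usr/sap/QAS/dvd_conn'                    # hypothetical instance path
  dst = '/backup/dvd_conn_' + time.strftime('%Y%m%d')

  shutil.copytree(src, dst)                        # copies the whole directory tree
  print('Saved ' + src + ' to ' + dst)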

Datavard configuration related steps

This section discusses steps that need to be performed in the Datavard software before and after the SAP system refresh.

Java Connector export

Prior to the refresh of the quality system, we recommend preserving the Java Connector settings by exporting them into a transport. This transport can later be imported into the refreshed quality system.

To export the Java connector:

  1. Go to transaction /DVD/JCO_MNG.
  2. Click the Transport JCO button in the toolbar.
  3. Fill in the required information. In most cases, you want to preserve all settings and libraries.
  4. Proceed, and in the next screen either use an existing transport or create a new transport of copies.
  5. Go to SE01 and release the transport.

Storage definition export

The storage definition needs to be preserved on the refreshed system.

Before the refresh, take screenshots of the following objects:

  • table /DVD/HDP_CUS_C
  • storage definition in /DVD/SM_SETUP
  • table /DVD/GL_TR_STR_M

After the refresh, make sure that the objects are restored to their original state. 

Automation of this process is currently in development.
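
Until that automation is available, a scripted snapshot can complement the screenshots. The sketch below reads the two customizing tables listed above via the standard RFC_READ_TABLE function module (the storage definition itself is maintained in /DVD/SM_SETUP and is not covered by a plain table dump); the connection parameters are placeholders.

  # Minimal sketch: dump the listed Storage Management tables to flat files.
  # Connection parameters are placeholders; RFC_READ_TABLE truncates rows
  # longer than 512 characters, so verify the output against the screenshots.
  from pyrfc import Connection

  TABLES = ['/DVD/HDP_CUS_C', '/DVD/GL_TR_STR_M']

  conn = Connection(ashost='sap-host', sysnr='00', client='100',
                    user='BACKUP_USER', passwd='secret')

  for table in TABLES:
      result = conn.call('RFC_READ_TABLE', QUERY_TABLE=table, DELIMITER='|')
      filename = table.replace('/', '_') + '.txt'
      with open(filename, 'w') as f:
          for row in result['DATA']:
              f.write(row['WA'] + '\n')
      print(table + ': ' + str(len(result['DATA'])) + ' rows saved to ' + filename)

  conn.close()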

License

The functionality of Datavard software (e.g. Datavard Outboard, Datavard Glue) depends on a valid license. The license is issued for a specific system and needs to be re-applied after the system refresh. Make sure you note it down before the refresh is executed.

Metadata correction

After the refresh, it is necessary to run the report /DVD/GL_STOR_MTDT_CHANGE. This report makes sure that the Glue metadata points to the correct storage and that the data in SAP and Hadoop are in sync.

This report:

  • Corrects the metadata of Glue objects
  • Reactivates all Glue objects - this means that all data previously loaded into the Hadoop cluster will be dropped.
  • Restarts statistics - deletes all information about previously executed loads and restarts the GL_REQUEST counter.

Entries explained:

Old StorageID - the StorageID that was used in production. The Glue metadata is currently bound to this StorageID.

New StorageID - the StorageID that was used in quality. Make sure that the storage was recreated and is working.

Perform data cleanup - mandatory in most cases. Deletes the data on the Hadoop side and restarts the statistics.


This report needs to be run only after the Storage Management definition is in its original state and the connection check is successful.