(SM-2211) SAP System Refresh with SDA+ on Hadoop
Purpose
This chapter describes all steps necessary to handle the refresh of the SAP system consistently and with a preserved connection to Hadoop.
We recommend keeping the same StorageID in both systems so that fewer steps are necessary.
The sections below list the areas of the SAP system which need to be saved during the system refresh.
The usual approach is to export the content of appropriate tables and import it afterward.
Actual table names are not listed, as these may vary depending on the SAP Netweaver version.
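As a sketch, the export/import can be done with the SAP R3trans utility and a control file such as the one below. The table name ZEXAMPLE, the client, and the file path are placeholders only; the actual tables to preserve depend on your NetWeaver release.

```
* export.ctl - R3trans control file (sketch; table name and paths are placeholders)
export
client = 100
file = '/tmp/refresh_backup.dat'
select * from zexample
```

Run it with `R3trans -w export.log export.ctl`; the corresponding import control file uses `import` instead of `export` and points to the same data file.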
General SAP-related steps
This section discusses SAP standard activities and assumptions that need to take place during the SAP system refresh. If some of the steps are not handled in the customer environment by default, manual actions are required.
RFC destinations
It is a common practice to re-import all RFC destinations during the system refresh. Storage Management is dependent on two RFC destinations ((SM-2211) Hadoop Storage Setup), which need to be preserved.
Certificates
Similar to the usual Secure Storage content re-import, the system's certificates contained in STRUST need to be re-imported as well.
User Master Data
User Master Data is usually re-imported during the system refresh, but this is not always the case. The HADOOP user ((SM-2211) Java Connector Setup) is important for Java Connector functionality.
The HADOOP technical user and its role are client-dependent.
Logical paths
If the Storage Management setup was done properly, the logical paths defined in transaction FILE ((SM-2211) Hadoop Storage Setup) are identical in development, quality, and production systems.
Usage of <SYSID> in the path definition ensures that no post-processing in this area is necessary.
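For illustration, a logical path definition of this kind might look as follows (the logical path name is an assumption; the physical path layout must match your installation):

```
Logical path:   ZDVD_CONN_PATH
Physical path:  /usr/sap/<SYSID>/dvd_conn/<FILENAME>
```

Because <SYSID> is resolved to the current system ID at runtime, the same definition is valid in development, quality, and production without any adjustment after the refresh.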
OS directories
The "dvd_conn" directories described in the setup guide ((SM-2211) Hadoop Storage Setup) contain configuration files necessary for the proper functionality of Java Connector and as such need to be preserved.
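One way to preserve these directories is a simple tar backup taken before the refresh and restored afterwards. The sketch below assumes a typical path layout; the SID and directory locations are placeholders to adapt to your system.

```shell
# Sketch: preserve the dvd_conn configuration directories across the refresh.
# back_up_dir archives SRC into DEST as a gzipped tar, keeping permissions.
# The example paths are assumptions; adjust the SID and layout to your system.
back_up_dir() {
  src="$1"
  dest="$2"
  # -C plus basename keeps archive paths relative to the parent directory,
  # so the backup can be restored to the same location after the refresh.
  tar -czpf "$dest" -C "$(dirname "$src")" "$(basename "$src")"
}

# Example: back_up_dir "/usr/sap/Q01/dvd_conn" "/backup/dvd_conn_Q01.tar.gz"
```

After the refresh, the archive can be restored with e.g. `tar -xzpf /backup/dvd_conn_Q01.tar.gz -C /usr/sap/Q01`.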
SNP configuration related steps
This section describes the steps that need to be performed prior to and after the SAP system refresh in the SNP software.
Java Connector export
Prior to the refresh of the quality system, we recommend preserving Java connector settings by exporting them into a transport. This transport can be later imported to the refreshed quality system.
To export the Java connector:
- Go to transaction /DVD/JCO_MNG
- Click the Transport JCO button in the toolbar
- Fill in the required information. In most cases, you want to preserve all settings and libraries.
- Proceed. On the next screen, use either an existing transport or create a new transport of copies.
- Go to SE01 and release the transport.
Storage definition export
The definition of the storage needs to be preserved on the refreshed system.
Before the refresh, go to /DVD/SM_SETUP → menu: Tools → SM backup.
Choose a file on a local drive to which the table content will be exported, and execute (F8).
After the refresh, make sure the objects are restored to their original state by selecting the Import option.
If the tables are not empty, select [x] Cleanup before import to truncate the tables before re-importing their contents.
License
The functionality of SNP software (e.g. SNP OutBoard™, SNP Glue™) depends on a valid license. The license is issued for a specific system and needs to be re-applied after the system refresh. Make sure you note the license down before the refresh is executed.
Metadata correction
This step is not necessary if the StorageID remains the same in both systems.
After the refresh, it is necessary to run the report /DVD/GL_STOR_MTDT_CHANGE. This report ensures that Glue metadata points to the correct storage and that the data in SAP and Hadoop stay in sync.
This report:
- Corrects metadata of Glue objects
- Reactivates all Glue objects - this means that all data previously loaded to the Hadoop cluster will be dropped.
- Restarts statistics - deletes all information about previously executed loads and restarts the GL_REQUEST counter.
Entries explained:
Old StorageID - StorageID that was used in production. Currently, Glue metadata is bound to this StorageID.
New StorageID - StorageID that was used in quality. Make sure the Storage was recreated and is working.
Perform data cleanup - Should be left unchecked in most cases. When it is checked, the report deletes the data on the Hadoop side and restarts the statistics.
Run this report only after the Storage Management definition is back in its original state and the connection check is successful.