Purpose

This chapter describes the steps necessary to preserve the connection to NLS on Hadoop during the SAP system refresh.
It does NOT cover the handling of the actual application data.

The sections below list the areas of the SAP system that need to be preserved during the system refresh.
The usual approach is to export the content of the relevant tables before the refresh and re-import it afterward.
Actual table names are not listed, as these may vary depending on the SAP NetWeaver version.

RFC destinations

It is a common practice to re-import all RFC destinations during the system refresh. Storage Management depends on two RFC destinations (link) that need to be preserved.

Certificates

Similar to the usual re-import of the Secure Storage content, the system's certificates contained in STRUST need to be re-imported as well.

User Master Data

User Master Data re-import is usually, but not always, part of the system refresh. The HADOOP user (link) is important for Java Connector functionality.

The HADOOP technical user and its role are client-dependent.

Logical paths

If the Storage Management setup was done properly, the logical paths defined in transaction FILE (link) are identical in the development, quality, and production systems.
Using <SYSID> in the path definition ensures that no post-processing in this area is necessary.
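
For illustration, such a definition could look as follows. The logical path name ZDVD_CONN and the physical directory are hypothetical examples; <SYSID> and <FILENAME> are the standard placeholders that transaction FILE resolves at runtime:

  Logical path:  ZDVD_CONN
  Physical path: /usr/sap/<SYSID>/dvd_conn/<FILENAME>

Because <SYSID> resolves to the ID of the current system, the same definition works unchanged in development, quality, and production.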

OS directories

The dvd_conn directories described in the setup guide (link) contain configuration files necessary for the proper functioning of the Java Connector and as such need to be preserved.
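
As a minimal sketch of preserving such a directory, assuming a Unix-like host and a hypothetical dvd_conn location (the actual path is defined during the setup), the directory can be archived before the refresh with a few lines of Python:

  import tarfile

  # Hypothetical location of the dvd_conn directory - check the setup guide
  # for the actual path used in your system.
  SOURCE_DIR = "/usr/sap/QAS/dvd_conn"
  BACKUP_FILE = "/backup/dvd_conn_backup.tar.gz"

  # Pack the whole directory (configuration files, libraries) into a single
  # archive that can be restored after the refresh.
  with tarfile.open(BACKUP_FILE, "w:gz") as tar:
      tar.add(SOURCE_DIR, arcname="dvd_conn")

Restoring is the reverse operation (tarfile.open(...).extractall(...)) into the same location on the refreshed system.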

Java Connector export

Before the refresh of the quality system, we recommend preserving the Java Connector settings by exporting them into a transport. This transport can later be imported into the refreshed quality system.

To export the Java Connector settings:

  1. Go to transaction /DVD/JCO_MNG.
  2. Click the Transport JCO button in the toolbar.
  3. Fill in the required information. In most cases, you want to preserve all settings and libraries.
  4. Proceed. On the next screen, use either an existing transport or create a new transport of copies.
  5. Go to transaction SE01 and release the transport.

Storage setup and Glue settings export/import

Configuration export into a file is available for both the Storage Management setup and the Glue settings.
To back up the SM configuration, go to /DVD/SM_SETUP > Menu: Tools > SM backup. The backup utility stores a predefined set of configuration tables in the chosen file.

Similarly, the Glue settings configuration can be saved into a file before the system refresh via /DVD/GLUE > Menu: Tools > Glue utilities > Glue backup.

Glue objects are NOT exported by the Glue backup. It is expected that Glue objects will be copied from the production system as part of the database copy.

Things to consider

The following topics do not relate directly to the Storage Management setup but are worth keeping in mind while refreshing an SAP system connected to Hadoop.

  • License: The functionality of SNP software (e.g. SNP OutBoard™, SNP Glue™) depends on a valid license. The license is issued for a specific system and needs to be re-applied after the system refresh.
  • Storage ID & Profile ID: The whole SNP OutBoard™ archiving solution refers to the Profile ID and Storage ID. These must be identical in the source and target systems; otherwise a manual change in the configuration is necessary (contact your representative in such a case).
  • Hive database: If SNP OutBoard™ is used and the archives are stored in a Hive database, the contents of the source Hive database need to be copied into the target Hive database (see the sketch after this list).
  • LOGSYS ID: Logical system conversion usually runs as a post-processing step during the SAP system refresh and converts only entries in the standard SAP tables. If the LOGSYS ID appears in the archives stored in the Hive database, a manual conversion on the Hadoop side is probably necessary.
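
As a minimal sketch of the Hive database copy mentioned above, assuming HiveServer2 access via the third-party pyhive library and purely hypothetical host, database, and staging-path names, the copy could be scripted along these lines using Hive's EXPORT/IMPORT statements:

  from pyhive import hive

  # Hypothetical values - replace with your Hadoop landscape details.
  SOURCE_DB = "sap_prd_archive"      # Hive database of the source (production) system
  TARGET_DB = "sap_qas_archive"      # Hive database of the target (quality) system
  STAGING = "/tmp/refresh_staging"   # HDFS path used as an intermediate store

  conn = hive.connect(host="hadoop-host", port=10000)
  cur = conn.cursor()

  cur.execute(f"SHOW TABLES IN {SOURCE_DB}")
  tables = [row[0] for row in cur.fetchall()]

  for table in tables:
      # EXPORT writes the table's data and metadata to HDFS;
      # IMPORT recreates the table in the target database from that export.
      cur.execute(f"EXPORT TABLE {SOURCE_DB}.{table} TO '{STAGING}/{table}'")
      cur.execute(f"IMPORT TABLE {TARGET_DB}.{table} FROM '{STAGING}/{table}'")

Note that EXPORT/IMPORT is not available for every table type (e.g. ACID tables in some Hive versions); copying the underlying HDFS directories with distcp and replicating the metastore definitions is a common alternative. Any LOGSYS ID conversion would have to be applied to the copied data as an additional step.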