(SM-2408) Performance Monitoring

Since Storage Management integrates with different storages, often located in different environments (on-premise or cloud), it can be difficult to identify bottlenecks.
Identifying and upgrading low-performing components of the data transfer can greatly increase the overall transfer speed and improve the user experience.
For performance monitoring, we will use the standard SAP transaction SAT.

Gathering data

  1. Execute transaction SAT and create a copy of the default variant.
  2. Edit the newly created variant, check the option Measure RFC and Update Calls, and save.
  3. Schedule a Glue job that runs for at least 10 minutes to provide enough time to gather job statistics.
  4. Once the job is running, start the SAT runtime analysis in a parallel session.
  5. On the next screen, choose the background work process in which the Glue job is being executed.
  6. Let the monitoring run for a few minutes. The job should process at least 10 replication packages to yield accurate measurements; check the job logs to see how many packages have been transferred.
  7. To stop the monitoring, click the green box.
  8. A new screen with the monitoring results appears. The next step is to interpret the gathered data correctly.

Interpreting the data

In general, the processing can be distributed into three parts:

  • Fetching data from the database: Represented by Data Accesses External (13%).
  • Glue internal processing: Represented by Internal Processing Blocks - Methods / Events (80.85%).
  • Writing data to Hadoop: Represented by Internal Processing Blocks - Function Modules (2.43%).

This example processing distribution is close to ideal, as the major portion of the processing happens in ABAP. In this scenario, the only way to improve performance further is to run the replication in several parallel jobs (see the scaling sketch below).
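
As a rough illustration of how far parallel jobs can scale, the following sketch (Python, using the example percentages above) estimates when a shared resource saturates. The model is a deliberate simplification that ignores contention and protocol overhead; all figures are illustrative only.

  # Shares taken from the example distribution above.
  db_share    = 0.13    # Data Accesses External (database fetch)
  write_share = 0.0243  # Function Modules (writing to Hadoop)

  # A single job keeps each shared resource busy only a fraction of its
  # runtime, so roughly 1 / share jobs can run before that resource saturates.
  print(f"Database saturates at ~{1 / db_share:.0f} parallel jobs")    # ~8
  print(f"Network saturates at ~{1 / write_share:.0f} parallel jobs")  # ~41

In this example, the database would become the new bottleneck first, at roughly 8 parallel jobs.
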
If the Writing data part is much bigger than in this example, the bottleneck is most likely network bandwidth. In that case, enabling compression on transfer in Storage Management can improve performance (a rough cost model follows below).
If the performance is still not sufficient, consider upgrading the network link between the source and target environments.
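
Whether compression pays off depends on package size, link speed, and CPU cost. The sketch below (Python) models the trade-off; the package size, bandwidth, compression ratio, and GZIP throughput are assumed values, not measurements, so substitute your own.

  # Hypothetical figures - replace them with your own measurements.
  package_mb    = 50.0   # one replication package (uncompressed CSV), MB
  link_mbps     = 100.0  # effective bandwidth between SAP and Hadoop, Mbit/s
  gzip_ratio    = 0.15   # compressed size / uncompressed size, typical for CSV
  gzip_mb_per_s = 60.0   # GZIP throughput of one work process, MB/s

  plain_s      = package_mb * 8 / link_mbps                   # transfer only
  compressed_s = (package_mb / gzip_mb_per_s                  # time to compress
                  + package_mb * gzip_ratio * 8 / link_mbps)  # smaller transfer

  print(f"uncompressed: {plain_s:.1f} s, compressed: {compressed_s:.1f} s")
  # Here: 4.0 s vs ~1.4 s per package. On a fast local link the CPU cost of
  # compression can outweigh the bandwidth saved, so measure both sides.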

In the detailed output, several major parts of the SNP code account for most of the processing time:

  1. /DVD/SM_CL_HIVE_MS_CSV: Conversion of an internal table into CSV format. Usually the biggest part of the execution (see the sketch after this list).
  2. CL_ABAP_GZIP: Compression of the CSV file. Can be turned off, but doing so increases the time spent on network transfer.
  3. DB: FETCH: Selecting data from the SAP database.
  4. /DVD/SM_CL_HIVE_MS_UTILS: Escaping of special characters in the data.
  5. /DVD/GL_EXT_RUL_CL_EXE: Execution of custom logic (currency conversions, lookups, etc.).
  6. HTTP_READ_SC: Waiting for a response after sending data to Hadoop. Can be interpreted as writing time.
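
To get a feel for why the CSV conversion and compression steps dominate, the standalone sketch below (Python, not the SNP ABAP code) serializes sample rows to CSV and GZIP-compresses them. Both steps touch every byte of every record, whereas the HTTP write mostly waits on the network. The row count and structure are arbitrary.

  import csv, gzip, io, time

  # 500,000 synthetic rows standing in for one replication package.
  rows = [(i, f"MAT{i:08d}", round(i * 0.01, 2), "EUR") for i in range(500_000)]

  t0 = time.perf_counter()
  buf = io.StringIO()
  csv.writer(buf).writerows(rows)                            # rows -> CSV text
  t1 = time.perf_counter()
  payload = gzip.compress(buf.getvalue().encode("utf-8"))    # CSV -> GZIP
  t2 = time.perf_counter()

  print(f"CSV: {t1 - t0:.2f} s, GZIP: {t2 - t1:.2f} s, "
        f"payload: {len(payload) / 1e6:.1f} MB")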