(OH-1905) Activities (Tasks)


OutBoard Housekeeping tasks are hierarchically structured into four main topics – Basis, Business Warehouse, ERP and CRM. Each topic contains DataVard-implemented housekeeping tasks as well as related standard SAP housekeeping tasks.
There are a number of useful standard SAP housekeeping transactions and reports that may already be known to the user. OutBoard Housekeeping makes these standard housekeeping functions easily accessible from within the OutBoard Housekeeping cockpit. It offers only a short description of their functionality; in-depth information is available in the SAP documentation. As these tasks are part of the standard SAP installation, their maintenance and correct functionality are the responsibility of SAP.


Basis


The Basis topic is a group of housekeeping tasks that are SAP Basis-oriented.



Application logs Deletion


Created by: DVD
Client-dependent: yes
Settings as variant: no
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes



Introduction

The application log is a tool that collects messages, exceptions and errors for activities and processes in the system. This information is organized and displayed in a log. Many different applications collect messages in the application log, which contains information or messages for the end user. The application log serves as temporary storage for these messages. The logs are written to the database but are not deleted automatically, and there is no general procedure for switching application logging on or off. Because the system does not delete them, the log tables tend to grow considerably and can significantly impact overall system performance.
OutBoard Housekeeping takes care of this and deletes logs stored in the old-format tables as well as all logs that match the specified criteria.
Application log tables that can accumulate a large number of entries are:

  • BALHDR (all releases)
  • BALHDRP (<4.6)
  • BALM (<4.6)
  • BALMP (<4.6)
  • BALC (<4.6)
  • BALDAT (>=4.6)
  • BAL_INDX (all releases)


An expiry date is assigned to each log in the BALHDR table. The logs remain in the database until this date passes; once the expiry date has passed, the log can be deleted from the database. There are often a large number of logs in the database because no specific expiry date was assigned to them. If the application does not assign an expiry date, the system sets it to 12/31/2098 or 12/31/9999, depending on the release, which allows the logs to stay in the system for as long as possible.
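To make this concrete, here is a minimal Python sketch of how deletion candidates could be determined from an expiry date, a retention age and a problem class, anticipating the selection settings described below. The record layout and function names are illustrative only, not part of the product.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class LogHeader:
    """Illustrative, simplified view of a BALHDR-like log header."""
    log_number: str
    object_name: str     # application area ('Object' selection field)
    problem_class: int   # 1 = very important ... 4 = additional information
    created_on: date
    expiry_date: date    # 12/31/2098 or 12/31/9999 when the application set none

def select_logs_for_deletion(headers, only_expired, older_than_days,
                             problem_class_setting=4, today=None):
    """Return log numbers that satisfy the deletion settings described in this chapter."""
    today = today or date.today()
    cutoff = today - timedelta(days=older_than_days)
    selected = []
    for h in headers:
        if only_expired and h.expiry_date > today:
            continue                                # expiry date not reached yet
        if h.created_on > cutoff:
            continue                                # newer than the retention limit
        if h.problem_class < problem_class_setting:
            continue                                # e.g. setting '1' selects classes 1-4
        selected.append(h.log_number)
    return selected
```

With the default problem class setting of 4, only logs carrying purely additional information are selected; entering 1 widens the selection to all log classes, matching the behaviour described in the selection conditions below.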

Step list

In the main OutBoard Housekeeping menu select "Application Logs – Settings" under the Basis/Deletion Tasks.
Now, the Settings selection must be specified. The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.
If the user creates new settings, the "Description" field needs to be filled in, and the user must select whether or not the settings ID will run in test mode.


Figure 71: Application Logs – Settings detail


Selection conditions

The 'Object' and 'Subobject' fields specify the application area in which the logs were written (see F4 Help).
The 'External ID' field indicates the number that was assigned by the application for this log.
The 'Transaction Code', 'User' and 'Log number' fields provide additional selection criteria for application log deletion.
The 'Problem class' field indicates the importance of the log. By default, this field contains the value '4', which means only logs with additional information. The user may want to delete all logs by entering the value '1' in this field; all logs with log class '1' or higher are then deleted.
Note: If no selection is made under "Selection conditions", the application logs will be deleted based only on the specified time criterion.


Expiry Date

A log usually has an expiration date, which is set by the application that calls the Application Log tool. If the application does not set an expiration date, the Application Log tool sets it to 12/31/2098 or 12/31/9999, depending on the release, which allows the logs to stay in the system for as long as possible.
The user can specify whether only application logs that have reached their expiration date will be deleted, or whether the expiration date should not be taken into account.
In the 'Logs older than (in Days)' field, the user may specify the age limit for application logs to be deleted.


Figure 72: Application Logs – Settings info buttons



  • Show Selection – lists all selected log numbers that will be deleted.
  • Number of objects – lists the total number of application logs that fulfill the combined selection criteria.


Note: the information listed by clicking the "Show Selection" and "Number of Objects" buttons is valid for the selected system only. If a landscape node is selected, the buttons are hidden.
Once the settings are specified, the user can run the created/modified Settings Group from the Main menu. The user can start or schedule the run in several ways. For more information, refer to the Execute and Schedule sections of this user documentation.
The user should specify the Settings ID when executing or scheduling the activity.
To check the status of the run, the user can go to the Monitor or check the logs.

Recommendation

Our recommendation is to switch on the log update at the beginning in order to determine which objects need log entries. Then delete the application logs, for example, after a maximum of 7 days. If the production system is running smoothly after the initial phase, the user may be able to deactivate the application log update completely. We recommend looking into the related SAP Notes for more information.

Related Notes

2057897


RFC Logs Deletion


Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes

Introduction

Transactional RFC (tRFC, previously known as asynchronous RFC) is an asynchronous communication method that executes the called function module exactly once in the RFC server. The remote system does not need to be available at the time the RFC client program executes a tRFC. The tRFC component stores the called RFC function, together with the corresponding data, in the SAP database under a unique transaction ID (TID).
The tables ARFCSSTATE, ARFCSDATA and ARFCRSTATE can contain a large number of entries, which leads to poor performance during tRFC processing.
In OutBoard Housekeeping, it is possible to delete old data from these tables based on a retention time.
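As a rough illustration of retention-time-based selection (a sketch only, not the product's actual implementation), the following Python snippet filters hypothetical tRFC log entries by age and status. The entry layout is invented; the status names simply mirror the options listed in the step list below.

```python
from datetime import datetime, timedelta

# Hypothetical entry layout: (transaction_id, destination, user_name, recorded_at, status)
trfc_entries = [
    ("0A1B2C3D0001", "CRM_PROD", "BATCH_USER", datetime(2015, 3, 1, 4, 15), "Recorded"),
    ("0A1B2C3D0002", "BW_PROD",  "ALEREMOTE",  datetime(2015, 6, 20, 23, 5), "System Error"),
]

def old_trfc_entries(entries, retention_days, statuses=None, now=None):
    """Select entries older than the retention time, optionally restricted to given statuses."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=retention_days)
    return [e for e in entries
            if e[3] < cutoff and (statuses is None or e[4] in statuses)]

# Everything older than 30 days that ended in a connection or system error:
to_delete = old_trfc_entries(trfc_entries, retention_days=30,
                             statuses={"Connection Error", "System Error"})
```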

Step list

In the main OutBoard Housekeeping menu select "RFC Logs – Settings" under the Basis/Deletion Tasks.
Now, the Settings selection must be specified. The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.
If the user creates new settings, the "Description" field needs to be filled in, and the user must select whether or not the settings ID will run in test mode.


Figure 73: RFC Logs – Settings detail

Specify selection conditions for Time, Destination and User Name, if necessary.
The user can also restrict the deletion of outdated logs based on status:

  • Connection Error
  • Recorded
  • System Error
  • Being Executed
  • Already Executed
  • Terminated Due to Overload
  • Temporary Application Errors
  • Serious Application Errors

Click the "Save settings" info button to save the selection; for any further updates, click the "Modify Settings" info button and confirm.
Once the settings for RFC logs cleaning are specified, the user may run the created/modified Settings Group from the Main menu. There are several options for starting the deletion; for more information, refer to the Execute and Schedule sections of this user documentation.
The user should specify the Settings ID when executing or scheduling the activity.
To check the status of the run, the user can go to the Monitor or check the logs.

Recommendation

Our recommendation is to schedule the RFC logs deletion task to run regularly, once a week.


TemSe Objects Consistency Check


Created by: SAP/DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes

Introduction

TemSe is a store for temporary sequential data, i.e. objects that are not normally held in the system permanently. TemSe objects consist of a header entry (stored in table TST01) and the object itself (stored in the file system or in table TST03).
This task checks the consistency of the object header and the object data. However, it does not check spool requests (stored in table TSP01) or entries in table TSP02, if output requests exist.

Step list

In the main OutBoard Housekeeping menu select "TemSe Objects Consistency Check – Settings" under the Basis/Deletion Tasks.
Now, the Settings selection must be specified. The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.
If the user creates new settings, the "Description" field needs to be filled in. In "Selection criteria", the "Client" field can be filled if the check is to be made for a specific client. "TemSe object (pattern)" determines the range of checked objects.


Figure 74: TemSe Objects Consistency Check – Settings detail


If "Create settings for TemSe Cleanup based on selection" is checked, consistency check will prepare settings for "TemSe Objects Cleanup" task and settings ID can be found in consistency check logs – "Problem class Important".
Click on info button "Save settings" to save the selection, for any further updates click on "Modify Settings" info button and confirm.
Once settings for TemSe objects consistency check are specified, the user may run the created/modified Settings group from Main menu. There are several options how to start the deletion, for more information, refer to Execute and Schedule sections of this user documentation.
The user should specify the Settings ID when executing/ scheduling the activity.
To check the status of the run the user can go to the monitor or check the logs.

Recommendations

It is recommended to run the consistency check twice with a gap of approximately 30 minutes. The outputs have to be compared, and only those TemSe objects that appear in both should be deleted. This eliminates temporary inconsistencies, as illustrated below.
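A minimal sketch of this double-run comparison, assuming each check run yields the set of TemSe object names it reported as inconsistent (the object names below are invented):

```python
# Hypothetical output of two consistency-check runs: sets of inconsistent TemSe object names.
run_1 = {"JOBLGX01234500", "JOBLGX01234501", "SPOOL0000012345"}
run_2 = {"JOBLGX01234501", "SPOOL0000012345", "JOBLGX01234999"}

# Only objects reported as inconsistent in BOTH runs are safe cleanup candidates;
# objects appearing in just one run are treated as temporary inconsistencies and skipped.
stable_inconsistencies = run_1 & run_2
print(sorted(stable_inconsistencies))
```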

Warning

TemSe storage is not intended to be an archiving system. It can contain only a limited number of spool requests (the default value is 32,000, but it can be increased up to 2 billion), and a large number of requests can affect performance.

Related Notes

48284

TemSe Objects Cleanup


Created by: SAP/DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes

Introduction

This task follows the previous one; see the introduction of "TemSe Objects Consistency Check" to learn more about the TemSe objects background.

Step list

There are two ways to prepare the settings.

  1. In the main OutBoard Housekeeping menu select "TemSe Objects Cleanup – Settings" under the Basis/Deletion Tasks.

Now, the Settings selection must be specified. The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.
If the user creates new settings, the "Description" field needs to be filled in. In "Selection criteria", the objects can be selected in two ways. The first is an absolute selection: the user can fill in the "Client" field if the cleanup is to be made for a specific client, and "TemSe object (pattern)" determines the range of checked objects. The second way is a relative selection, where the user specifies how old the objects must be in order to be deleted and whether obsolete objects should also be deleted.
The "Test mode" option is checked by default.


Figure 75: TemSe Objects Deletion – Settings detail, absolute selection



Figure 76: TemSe Objects Deletion – Settings detail relative selection


  1. In "TemSe Objects Consistency Check" task, check "Create settings for TemSe Cleanup based on selection" and execute. Now generated settings are prepared based on consistency check. But be aware, they are not differential as recommended for consistency check and can also contain temporary inconsistencies.

In generated settings, "Test mode" option is unchecked.
In this case, all TemSe objects that fit to the criteria are stored in Select-Options (see Figure 77).


Figure 77: TemSe Objects Deletion – Multiple selection detail

Recommendations

It is recommended to run TemSe Objects Cleanup as a follow-up to the TemSe Objects Consistency Check.


XML Messages Deletion


Created by: DVD
Client-dependent: yes
Settings as variant: no
Support for Recycle bin: no

Introduction

SAP Exchange Infrastructure (SAP XI) enables the implementation of cross-system processes. It is based on the exchange of XML messages and makes it possible to connect systems from different vendors and of different versions.
The SXMSCLUP and SXMSCLUR tables are part of SAP XI. When SAP XI is used extensively, their size can increase very rapidly; therefore, regular deletion is highly recommended.
The XML Messages Deletion task offers the possibility to delete different types of XML messages in one step.

Recommendation

It is possible to set a retention period separately according to the message type and status. Our recommended retention periods are the following (a short configuration sketch follows the list):

  • Asynchronous messages without errors awaiting ... 1 day
  • Synchronous messages without errors awaiting ... 1 day
  • Synchronous messages with errors awaiting ... 0 days
  • History entries for deleted XML messages ... 30 days
  • Entries for connecting IDocs and messages ... 7 days

This task should be scheduled to run on a daily basis.
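To make the recommendation concrete, here is a small Python sketch that turns the retention periods above into deletion cutoffs. The category keys are descriptive labels chosen for this example, not official SAP names.

```python
from datetime import datetime, timedelta

# Recommended retention periods from the list above (in days).
retention = {
    "async_without_errors": 1,
    "sync_without_errors": 1,
    "sync_with_errors": 0,
    "history_of_deleted_messages": 30,
    "idoc_message_links": 7,
}

def cutoff_for(category, now=None):
    """Messages of the given category older than this timestamp are deletion candidates."""
    now = now or datetime.now()
    return now - timedelta(days=retention[category])

print(cutoff_for("history_of_deleted_messages"))
```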

Step list

In the main OutBoard Housekeeping menu select "XML Message Deletion – Settings" under the Basis/Deletion Tasks.
Now, the Settings selection must be specified. The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.
The next step is to provide the time settings for XML Message Deletion. The time frame can be specified in days, hours, minutes and seconds.

Figure 78: XML Message Deletion – Settings detail

Single Z* Table Cleanup

Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: yes

Introduction

Customer-defined tables can sometimes grow rapidly and, depending on their purpose, their content may no longer be needed over time. OutBoard Housekeeping offers the possibility to delete selected data from any customer-defined table following the Z* / Y* naming convention and to keep it in the Recycle Bin for a selected period of time.
The name of the task itself points to the intention to safely clean up a single Z* table per execution without any development effort, in contrast to the OutBoard Housekeeping feature called Custom Objects Cleanup, which can manage any number of related tables but requires a small amount of coding.

Step list

In the main OutBoard Housekeeping menu, select "Single Z* Table Cleanup – Settings" under the Basis/Deletion Tasks.
Then the Settings selection must be specified. The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.
The next step is to provide the table name and generate the selection screen.


Figure 79: Single Z* Table Cleanup – initial screen

The settings can be saved together with the entered table-specific selections.


Figure 80: Single Z* Table Cleanup – test run results

Test run functionality is available for this task. The result of the test run is the number of entries that will be deleted from the Z* table and saved in the Recycle Bin for the current selection.
There are several options for starting the Single Z* Table Cleanup. For more information, refer to the Execute and Schedule sections of this user documentation.

HANA Audit Log Cleanup


Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: no
HANA specific: yes

Introduction

Old audit data can be deleted from the SAP HANA database audit table. This is only applicable if audit entries are written to column-store database tables. The threshold date can be set as a specific date, or it can be set relatively – all log entries older than x days will be deleted.
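The following Python sketch shows how the two ways of setting the threshold (a specific date or a relative age in days) resolve to the same kind of cutoff; the function and parameter names are illustrative only.

```python
from datetime import date, timedelta

def audit_log_threshold(before_date=None, older_than_days=None, today=None):
    """Resolve the deletion threshold from either an absolute or a relative setting.

    Exactly one of the two settings is expected; entries written before the
    returned date are deletion candidates.
    """
    today = today or date.today()
    if before_date is not None:
        return before_date                               # 'Before date' setting
    if older_than_days is not None:
        return today - timedelta(days=older_than_days)   # 'Older than (days)' setting
    raise ValueError("Either a specific date or a relative age must be provided")

# Both calls express the same intent in different ways:
print(audit_log_threshold(before_date=date(2016, 1, 1)))
print(audit_log_threshold(older_than_days=90))
```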

Step list

In the main OutBoard Housekeeping menu select "HANA Audit Log Cleanup – Settings" under the Deletion Tasks.
Now, the Settings selection must be specified. You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.
The next step is to provide the selection conditions for HANA Audit Log Cleanup. The time frame can be specified as 'Before date' or 'Older than (days)'.


Figure 81: HANA Audit Log Cleanup settings

Recommendation

The size of the table can grow significantly; therefore, we recommend scheduling this task to delete audit logs weekly. The retention time of the logs depends on company policy and local legal requirements.

HANA Traces Cleanup


Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: no
HANA specific: yes

Introduction

All trace files opened by the SAP HANA database, together with their content, can be deleted. The types of trace files that can be deleted are the following:

  • ALERT
  • CLIENT
  • CRASHDUMP
  • EMERGENCYDUMP
  • EXPENSIVESTATEMENT
  • RTEDUMP
  • UNLOAD
  • ROWSTOREREORG
  • SQLTRACE

When executed on a distributed system, the task deletes traces on all hosts. When the 'With backup' checkbox is marked, trace files are only compressed and saved, not deleted.

Step list

In the main OutBoard Housekeeping menu select "HANA Traces Cleanup – Settings" under the Deletion Tasks.
Now, the Settings selection must be specified. The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.
The next step is to provide the selection conditions for HANA Traces Cleanup. This task can run in test mode, and there is also the possibility to store backup data. The required types of traces can be selected.
Housekeeping allows the user to delete traces that are older than a set number of days. On releases older than SAP HANA Platform SPS 09, this setting is ignored.


Figure 82: HANA Traces Cleanup settings

Recommendation

Our recommendation is to run this task to reduce the disk space used by large trace files, especially those of trace components set to INFO or DEBUG.


Intermediate Documents Archiving


Created by: SAP/DVD
Underlying SAP report: RSEXARCA
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes

Introduction

Intermediate Documents (IDocs) are stored in several tables in the database. To control their size and improve access times without losing any IDocs, they can be stored in archives at the operating system level. These archives can be moved to external storage media for future retrieval. Archived IDocs can then optionally be deleted from the SAP system.
This task encapsulates the SAP NetWeaver data archiving concept using the SARA transaction and the WRITE action. The archiving object IDOC contains information about which database tables are used for archiving. At runtime, the report RSEXARCA is executed.

Step list

In the main OutBoard Housekeeping menu select "Intermediate Documents Archiving – Settings" under the Basis/Archiving Tasks.
The settings are maintained in the same way as for standard SAP housekeeping tasks. For more information, refer to the Creating a Settings ID section of this user documentation.
In the variant screen, the user can set the criteria for the IDocs to be archived.

  • IDoc number – identifies a range of document numbers
  • Created At – refers to the time of the document creation
  • Created On – refers to the date of the document creation; this is an absolute date value
  • Created ago (in days) – refers to the age of the document creation date; this allows specifying a relative date and has a higher priority than the absolute creation date value
  • Last Changed At – refers to the time of the last document modification
  • Last Changed On – refers to the date of the last document modification; this is an absolute date value
  • Last Changed ago (in days) – refers to the age of the last document modification; this allows specifying a relative date and has a higher priority than the absolute last modification date value
  • Direction – specifies whether the document is outbound or inbound
  • Current Status – specifies the document status
  • Basic type – document type
  • Extension – combined with an IDoc type from the SAP standard version (a basic type) to create a new, upwardly compatible IDoc type
  • Port of Sender – identifies which system sent the IDoc
  • Partner Type of Sender – defines the commercial relationship between sender and receiver
  • Partner Number of Sender – contains the partner number of the sender
  • Port of Receiver – identifies which system receives the IDoc
  • Partner Type of Receiver – defines the commercial relationship between sender and receiver
  • Partner Number of Receiver – contains the partner number of the receiver
  • Test Mode / Production Mode – specifies in which mode the report executes (test mode makes no changes in the database)
  • Detail Log – specifies the information contained in the detail log (No Detail Log, Without Success Message, Complete)
  • Log Output – specifies the type of output log (List, Application Log, List and Application Log)
  • Archiving Session Note – description of the archived content




Figure 83: Intermediate Documents Archiving – Settings detail


There are several options for starting the Intermediate Documents Archiving. For more information, refer to the Execute and Schedule sections of this user documentation.

Warning

Only use archiving if the IDocs were not activated through the application. The user should make sure that no IDocs that may still be needed by the application are archived.

Work Items Archiving


Created by: SAP/DVD
Underlying SAP report: WORKITEM_WRI
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: no

Introduction

For archiving and deleting work items, the archiving object WORKITEM is used.
This OutBoard Housekeeping task encapsulates the SAP NetWeaver data archiving concept by using the SARA transaction and the WRITE action. The archiving object WORKITEM contains information about which database tables are used for archiving. At runtime, the report WORKITEM_WRI is executed.

Step list

In the main OutBoard Housekeeping menu select "Work Items Archiving – Settings" under the Archiving Tasks.
The settings are maintained in the same way as for standard SAP housekeeping tasks. For more information, refer to the Creating a Settings ID section of this user documentation.
In the variant screen, the user can set the criteria for the work items to be archived.


  • Work Item ID – unique ID of a work item
  • Creation Date – day on which the work item was generated in status ready or waiting for the first time
  • End Date – day on which the work item was set to status done or logically deleted
  • Task ID – internal and unique ID of the task, which is assigned automatically after the task is created
  • Actual Agent – user name of the user who last reserved or processed the work item
  • Delete Unnecessary Log Entries – it is possible to delete or store the log entries
  • Test Mode / Production Mode – specifies in which mode the report executes (test mode makes no changes in the database)
  • Detail Log – specifies the information contained in the detail log (No Detail Log, Without Success Message, Complete)
  • Log Output – specifies the type of output log (List, Application Log, List and Application Log)
  • Archiving Session Note – description of the archived content
  • Grouping of List Display – option to group the list: System Defaults, Grouping by Work Item Title or Task Description


For more detailed information, see the contextual help.


 

Figure 84: Work Items Archiving – Settings detail


There are several options for starting the Work Items Archiving. For more information, refer to the Execute and Schedule sections of this user documentation.

Prerequisites

Work Items that you want to archive should have the status Completed or Logically Deleted (CANCELLED).

Recommendations

We recommend running Work Items Archiving regularly. The frequency of archiving is system-specific.

Note

SAP recommends that you use archive information structure SAP_O_2_WI_001, which is necessary if you are using ABAP classes or XML objects. If you are using only BOR objects, and are already using archive information structure SAP_BO_2_WI_001, you can continue to use it, but SAP recommends that you switch to the extended archive information structure SAP_O_2_WI_001.


Change Documents Archiving


Created by: SAP/DVD
Underlying SAP report: CHANGEDOCU_WRI
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: no

Introduction

With the CHANGEDOCU solution, change documents for master data, tables, documents, etc. are archived.
This OutBoard Housekeeping task encapsulates the SAP NetWeaver data archiving concept by using the SARA transaction and the WRITE action. The archiving object CHANGEDOCU contains information about which database tables are used for archiving. At runtime, the report CHANGEDOCU_WRI is executed.

Step list

In the main OutBoard Housekeeping menu select "Change Documents Archiving – Settings" under the Archiving Tasks.
The settings are maintained in the same way as for standard SAP housekeeping tasks. For more information, refer to the Creating a Settings ID section of this user documentation.
In the variant screen, the user can set the criteria for the change documents to be archived.


  • Change doc. object – indicator for stores
  • Object value – day (in internal format YYYYMMDD) to which a POS transaction is assigned
  • From Date – date from which the user wants to archive change documents
  • To Date – date up to which the user wants to archive change documents
  • From Time (HH:MM:SS) – time from which the user wants to archive change documents
  • To Time (HH:MM:SS) – time up to which the user wants to archive change documents
  • Transaction code – transaction code in which the change was made
  • Changed By (User Name) – user name of the person responsible for the change of the document
  • Test Mode / Production Mode – specifies in which mode the report executes (test mode makes no changes in the database)
  • Detail Log – specifies the information contained in the detail log (No Detail Log, Without Success Message, Complete)
  • Log Output – specifies the type of output log (List, Application Log, List and Application Log)
  • Archiving Session Note – description of the archived content


For more detailed information, see the contextual help.


Figure 85: Change Documents Archiving – Settings detail


There are several options for starting the Change Documents Archiving. For more information, refer to the Execute and Schedule sections of this user documentation.

Recommendations

We recommend running Change Documents Archiving regularly. The frequency of archiving is system-specific.

Note

Use the Change Document Archiving to archive the change documents of master data. Change documents for transactional data should still be archived together with the corresponding archiving of the application.

Warning

Because business transactions need to be traceable, change documents cannot be deleted for a certain period of time. However, to reduce data volumes in the database, you can archive those change documents that you no longer need in current business processes and keep them outside the database for the duration of the legal retention time.

Links Deletion between ALE and IDocs


Created by: SAP
Underlying SAP report: RSRLDREL
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes

Introduction

Links are written in the ALE and IDoc environment. They are required for IDoc monitoring, document tracing and the ALE audit.
They result in a rapid increase in the size of the IDOCREL and SRRELROLES tables.

Recommendation

Links of the types IDC8 and IDCA can be deleted on a regular basis, because they are generally no longer required after the IDocs have been successfully posted in the target system. For more information, see the related note.

Related Notes

505608


IDocs Deletion


Created by: SAP/DVD
Underlying SAP report: RSETESTD
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no
Cyclic execution of standard report: yes
Pause / resume support: yes

Introduction

IDoc stands for Intermediate Document. It is a standard SAP document format. IDocs enable the connection of different application systems using a message-based interface.
IDoc data is stored in the following DB tables:

  • EDIDC – control record
  • EDIDOCINDX – control record
  • EDIDO – value table for IDoc types
  • EDIDOT – short description of IDoc types
  • EDIDS – status record
  • EDIDD_OLD – data record
  • EDID2 – data record (from 3.0 onwards)
  • EDID3 – data record (from 3.0 onwards)
  • EDID4 – data record (from 4.0 onwards)
  • EDID40 – data record (from 4.0 onwards)

If old IDocs are kept in the system, these EDI* tables may become very large.
Unlike the standard SAP report, this task is extended so that the date of document creation can be set as an absolute or a relative value (the relative value has a higher priority).

Figure 86: IDocs Deletion – Settings detail
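As an illustration of the absolute/relative rule described above (a sketch only, not the task's actual code), the effective creation-date limit could be derived like this:

```python
from datetime import date, timedelta

def creation_date_limit(created_on=None, created_days_ago=None, today=None):
    """IDocs created on or before the returned date are selected.

    Mirrors the rule described above: when both an absolute date and a relative
    age are maintained, the relative value wins.
    """
    today = today or date.today()
    if created_days_ago is not None:                 # relative value has higher priority
        return today - timedelta(days=created_days_ago)
    return created_on                                # fall back to the absolute date

# The absolute date 2015-01-31 is ignored because a relative age of 180 days is set as well.
print(creation_date_limit(created_on=date(2015, 1, 31), created_days_ago=180))
```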

Recommendation

Our recommendation is to run IDocs Deletion regularly. The period is system-specific.


IDocs Deletion (Central system release >= 7.40)


Created by: SAP/DVD
Underlying SAP report: RSETESTD
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no
Cyclic execution of standard report: yes
Pause / resume support: yes

Introduction

This task replaces the "IDocs Deletion" task when the central system version is 7.40 or higher.
The report deletes IDocs without archiving them. It also provides the option of deleting other objects that were created during IDoc generation, such as work items, links, RFC entries, and application logs.

Step list

Make an IDoc selection in accordance with the given selection criteria. If the 'Test Run' checkbox is marked, the IDocs and linked objects will be determined in accordance with the selection made, but they will not be deleted. Using the 'Maximum Number of IDocs' parameter, you can control how many IDocs should be deleted per run. It is not recommended to leave this parameter empty; if you do, all the IDocs in your selection will be taken into consideration. The recommended value for this parameter is 100,000.
Determining the linked objects is a complex process. Activate the additional functions only if the respective objects are created during IDoc generation and are not already removed from your system by another action.
Dynamic variant usage is available for this task. For more information, see the chapter IDocs.
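The effect of the 'Maximum Number of IDocs' parameter can be pictured with this small Python sketch; delete_fn is a stand-in for the real deletion step and the numbers are arbitrary.

```python
def delete_in_batches(idoc_numbers, max_per_run, delete_fn):
    """Cap how many IDocs a single run touches, mirroring 'Maximum Number of IDocs'.

    An empty (None/0) cap means the whole selection is considered, as described above.
    """
    batch = idoc_numbers[:max_per_run] if max_per_run else idoc_numbers
    for idoc in batch:
        delete_fn(idoc)                              # placeholder for the real deletion
    return len(batch), len(idoc_numbers) - len(batch)   # processed, remaining for later runs

processed, remaining = delete_in_batches(list(range(250_000)), 100_000, lambda i: None)
print(processed, remaining)   # 100000 processed, 150000 left for subsequent runs
```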

BCS Reorganization of Documents and Send Requests


Created by: SAP
Underlying SAP report: RSBCS_REORG
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes

Introduction

The Business Communication Services (BCS) offer functions for SAP applications to send and receive e-mails, faxes or SMS messages. BCS offers an ABAP programming interface connected to SAPconnect, allowing messages to be exchanged with e-mail servers via SMTP.
This task deletes documents with send requests, or documents that are not part of a send request, if they are no longer in use, in accordance with the settings under "Reorganization Mode".

Recommendation

It is recommended to run this task when the related tables are increasing over time and the data is no longer required.

Related Notes

966854
1003894

Documents from Hidden Folder Deletion

Created by: SAP
Underlying SAP report: RSSODFRE
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes

Introduction

Some applications use the 'dark folder' (meaning it is not visible in the Business Workplace) to store Business Communication Services documents.
This task removes documents from the 'dark folder' and therefore allows the reorganization of the documents and the document content.

Recommendation

It is recommended to run this task when the SOOD, SOFM, SOC3, SOOS, SOES tables have become considerably large and are not significantly reduced when you run common reorganization reports.

Related Notes

567975


Reorganization Program for Table SNAP of Short Dumps

Created by: SAP
Underlying SAP report: RSSNAPDL
Client-dependent: no
Settings as variant: yes
Support for Recycle bin: no


Introduction

This program deletes old short dumps from table SNAP. The dumps that should be kept can be selected (protected) in transaction ST22.
The program parameters that can be set are:

  • The maximum number of entries to remain after reorganization.
  • The maximum number of table entries to be deleted at once.
  • Storage date.

The program first deletes the short dumps that are older than the storage date and are not flagged as protected. If there are still more entries in table SNAP than specified in the first parameter, more recent short dumps are deleted as well.
The delete process is split into small units so that only a certain number of entries is deleted at any one time; this prevents problems in the database. The unit size is set by the second program parameter.
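A rough Python sketch of the selection logic just described (not the actual RSSNAPDL implementation); dump records are assumed to be (dump_id, created_on, protected) tuples ordered newest first.

```python
from datetime import date, timedelta

def snap_reorg_plan(dumps, keep_max, chunk_size, storage_days, today=None):
    """Plan which short dumps to delete: first unprotected dumps older than the storage
    date, then enough of the remaining unprotected ones to get below the maximum count.
    The deletions are returned split into chunks of 'chunk_size'."""
    today = today or date.today()
    cutoff = today - timedelta(days=storage_days)

    old_unprotected = [d for d in dumps if not d[2] and d[1] < cutoff]
    survivors = [d for d in dumps if d not in old_unprotected]

    overflow = []
    for d in reversed(survivors):                    # oldest survivors first
        if len(survivors) - len(overflow) <= keep_max:
            break
        if not d[2]:                                 # never touch protected dumps
            overflow.append(d)

    to_delete = old_unprotected + overflow
    return [to_delete[i:i + chunk_size] for i in range(0, len(to_delete), chunk_size)]

dumps = [("D4", date(2016, 6, 10), False), ("D3", date(2016, 6, 1), True),
         ("D2", date(2016, 3, 1), False), ("D1", date(2016, 1, 5), False)]
print(snap_reorg_plan(dumps, keep_max=2, chunk_size=50, storage_days=90, today=date(2016, 6, 15)))
```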

Table Log Database Management

Created by: SAP
Underlying SAP report: RSTBPDEL
Client-dependent: no
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes

Introduction

Once table logging is activated, it is possible to review the history of changes made to custom tables. These changes are saved in the DBTABLOG table. Logging is carried out record by record, which means that for every change operation the "before" image is written to the log table. This approach consumes a lot of space, so it is important to adopt a well-balanced table logging policy to ensure that the data growth of the DBTABLOG table remains acceptable.
It is possible to delete the data saved in DBTABLOG using the RSTBPDEL report.

Recommendation

Our recommendation is to prepare a table-logging policy and to decide, in cooperation with the data owners, which tables will be logged.

Warning

Activating logging for a table has an important disadvantage: updates and modifications to the Z tables for which logging is activated can become slow.


Spool Administration


Created by: SAP
Client-dependent: no
Settings as variant: yes
Support for Recycle bin: no

Introduction

Spool Administration (transaction SPAD) is intended for administrators to cover the following activities:

  • Defining output devices in the SAP system.
  • Analyzing printing problems.
  • Maintaining the spool database – scheduling in dialog.

Warning

This task cannot be executed on a system that is connected via an RFC user of type SYSTEM.


Tool for Analyzing and Processing VB Request

Created by: SAP
Underlying SAP report: RSM13002
Client-dependent: no
Settings as variant: yes
Support for Recycle bin: no


Introduction

An update is asynchronous (not simultaneous). Bundling all updates for one SAP transaction into a single database transaction ensures that the data that belongs to this SAP transaction can be rolled back completely.
An update is divided into different modules, which correspond to the update function modules. The SAP System makes a distinction between:

  • Primary (V1, time-critical) update module;
  • Secondary (V2, non-time-critical) update module.

An update request or update record describes the data changes defined in an SAP LUW (Logical Unit of Work), which are carried out either in full or not at all (in the database LUWs for V1 and V2 updates).
This tool allows editing of update requests:

  • Starting V2 updating (normally, V2 updating starts directly after V1 updating; for some reasons – e.g. a performance bottleneck – the V2 update can be postponed by unchecking the STARTV2 option)
  • Deleting successfully executed update requests (normally, update requests are deleted after they have been successfully executed, but in case of performance issues this behaviour can be switched off by unchecking the DELETE option)
  • Reorganizing the update tables (if transactions in progress are terminated, this can lead to incomplete update requests in the update tables; to delete these, run this tool with the REORG option checked)

Recommendation

If the V2 update is not started directly, it should be started as often as possible (several times a day); otherwise, the update tables can become very large.
If deletion is not carried out directly, it should be carried out as often as possible (several times a day).
A reorganization of the update tables is only occasionally necessary (once a day is sufficient).

Delete Statistics Data from the Job Run-time Statistics

Created by: SAP
Underlying SAP report: RSBPSTDE
Client-dependent: no
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes


Introduction

A number of job run-time statistics are calculated during and after job runs. These statistics should be deleted when they become obsolete.
This report reorganizes the job run-time statistics. A period can be specified in days or by a date; all statistics records that fall within the period are deleted.

Recommendation

Our recommendation is to run this task monthly.


Batch Input: Reorganize Sessions and Logs

Created by: SAP
Underlying SAP report: RSBDCREO
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no


Introduction

If the batch input functionality of the R/3 system is utilized, log entries are produced. This program is responsible for cleaning up the batch input sessions and their logs.
The report does the following:

  • Deletes (successfully) processed sessions still in the system and their logs. Only these sessions are deleted and not "sessions still to be processed", "sessions with errors", and so on.
  • Deletes logs, for which sessions no longer exist.

Recommendation

We recommend using the program RSBDCREO periodically, once a day, to reorganize the batch input logs. RSBDCREO can run in the background or interactively.

Warning

Batch input logs cannot be reorganized using TemSe reorganization, but must be reorganized using the program RSBDC_REORG or transactions SM35/SM35P.


Delete Old Spool Requests

Created by: SAP
Underlying SAP report: RSPO1041
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes


Introduction

Report RSPO1041 is used to tidy up the spool database and to delete old spool requests. This must be done regularly in a productive system, as the spool database may contain only 32,000 spool requests by default.
For additional information, see the program documentation (variant screen).
If the previous version of this report (RSPO0041) has been scheduled, it will continue working and can be re-scheduled, but it cannot be newly scheduled from scratch.

Deletion of Jobs

Created by: SAP
Underlying SAP report: RSBTCDEL2
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes


Introduction

The report RSBTCDEL2 runs in the background. This task is provided by SAP to delete old, inconsistent and non-deleted jobs from the system. It replaces the previously used report RSBTCDEL.

Recommendation

Our recommendation is to run this task regularly in the background once a day.

Related Notes

784969


Orphaned Job Logs Search and Deletion

Created by: SAP
Underlying SAP report: RSTS0024
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes



Introduction


Obsolete jobs are deleted with report RSBTCDEL (the predecessor of the current report RSBTCDEL2). Sometimes the job logs that are left behind cannot be deleted (e.g. due to problems with the system). These logs are called "orphans".
This task searches for, checks and deletes these "orphans".


Recommendation


Our recommendation is to run this task regularly in the background once a week.


Related Notes


666290


Spool Files Consistency Check

Created by: SAP
Underlying SAP report: RSPO1042
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no



Introduction


Spool files in the DIR_GLOBAL directory or in the client-specific ADSP subdirectories may be very old. Usually, these files are removed by deleting the relevant spool request. However, under certain circumstances, these files may remain in the system as "orphaned" files.


Recommendation


This task checks whether spool requests still exist for the files in the DIR_GLOBAL directory (ADS and SAP GUI for HTML print requests). If the corresponding spool requests no longer exist, the files are deleted. This task should be scheduled daily.


Related Notes


1493058


Administration Tables for BG Processing Consistency Check

Created by: SAP
Underlying SAP report: RSBTCCNS
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no


Introduction


Background processing stores job data in multiple database tables, and these tables can be checked for consistency. This test is especially important if problems occur with the database and you need to determine whether all job data is still available. The report includes two predefined variants that you can use in the job step:
  • Variant SAP&AUTOREPNO: use this variant if consistency problems should only be listed in the output list. No automatic repair of the problems is performed.
  • Variant SAP&AUTOREPYES: use this variant if consistency problems should be logged and automatically corrected.


Recommendation


This task should be scheduled daily.


Related Notes


1440439
1549293


Active Jobs Status

Created by: SAP
Underlying SAP report: BTCAUX07
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no

Introduction


Sometimes jobs remain in the status 'active' after the background work process terminates or after database connection problems occur. Their status can be corrected manually using transaction SM37; this task does so automatically.


Recommendation


This task should be scheduled hourly.


Related Notes


16083


Collector for Background Job Run-time Statistics

Created by: SAP
Underlying SAP report: RSBPCOLL
Client-dependent: yes
Settings as variant: no
Support for Recycle bin: no



Introduction


This task creates job statistics and should run daily.


Recommendation


Make sure that Note 2118489 is installed in the system; otherwise, RSBPCOLL has poor performance and performs too many DB accesses.


Related Notes


16083
2118489


Performance monitor (RFC) Collector

Created by: SAP
Underlying SAP report: RSCOLL00
Client-dependent: no
Settings as variant: no
Support for Recycle bin: no



Introduction


This report starts, via RFC, collector reports on all servers (SAPSYSTEMS) belonging to the SAP system of one database (compare SM51); these collect performance-relevant data from the servers and write it to the performance database MONI. Furthermore, it starts reports to collect and store information about the database system itself. These reports are started on the database server if a dialog system is available there – otherwise the first dialog system in the system list is used.
It has the components RSSTAT80, which reads local statistics data from shared memory and stores it in the performance table MONI, and RSSTAT60, which creates statistics data for day, week, month and year and reorganizes the table MONI. It also updates the following other tables:
  • RSHOSTDB – data for the host system monitor
  • RSHOSTPH – protocol of changes to host parameters
  • RSORATDB – analysis of the database space
  • RSORAPAR – protocol of changes to database parameters


Recommendation


This task should be scheduled hourly. It must always be scheduled in client 000 with user DDIC or with a user that has the same authorizations.


Related Notes


16083


Orphaned Temporary Variants Deletion

Created by: SAP
Underlying SAP report: BTC_DELETE_ORPHANED_IVARIS
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no



Introduction


Delete "Orphaned" Temporary Variants.


Recommendation


This task should be scheduled weekly.


Related Notes


16083


Reorganization of Print Parameters for Background Jobs

Created by: SAP
Underlying SAP report: RSBTCPRIDEL
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no



Introduction


It reorganizes print parameters cross-client. Since the number of print parameters increases more slowly than the number of background processing steps, you can execute this report after longer periods of time (longer than one month).


Recommendation


This task should be scheduled monthly.


Related Notes


16083


Reorganization of XMI Logs

Created by: SAP
Underlying SAP report: RSXMILOGREORG
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no



Introduction


When you use external job scheduling tools, XMI log entries are written to table TXMILOGRAW. The system may write a very large number of log entries, even if the audit level is set to 0. You must therefore reorganize the TXMILOGRAW table manually on a regular basis.


Recommendation


This task should be scheduled weekly.


Related Notes


16083
182963


SOAP Runtime Management

Created by: SAP
Underlying SAP report: RSWSMANAGEMENT
Client-dependent: yes
Settings as variant: no
Support for Recycle bin: no



Introduction


This task handles SOAP runtime management by scheduling the following standard SAP programs:
  • SRT_REORG_LOG_TRACE
  • SRT_COLLECTOR_FOR_MONITORING
  • SRT_SEQ_DELETE_BGRFC_QUEUES
  • SRT_SEQ_DELETE_TERM_SEQUENCES
  • WSSE_TOKEN_CACHE_CLEANUP (Security Group)


Recommendation


This task should be scheduled hourly.


Delete History Entries for Processed XML Messages

Created by: SAP
Underlying SAP report: RSXMB_DELETE_HISTORY
Client-dependent: yes
Settings as variant: no
Support for Recycle bin: no



Introduction


Historical data consists of a small amount of header information from deleted messages. This history data is stored in a table and transferred to a second table on a weekly basis. Thus, data from the first table can be removed every week; but the history is still available in the second table for another month.


Recommendation


This task should be scheduled monthly.


Delete XML Messages from the Persistency Layer

Created by: SAP
Underlying SAP report: RSXMB_DELETE_MESSAGES
Client-dependent: yes
Settings as variant: no
Support for Recycle bin: no



Introduction


This periodic task is recommended for the Integration Server/Engine of SAP XI.


Recommendation


This task should be scheduled daily.


Spool Data Consistency Check in Background

Created by: SAP
Underlying SAP report: RSPO1043
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: no



Introduction


The task enables continuous monitoring of inconsistent spool objects. The write locks are analysed and, if necessary, deleted. The inconsistent objects found are gathered in a table. At the end of the test run, old and new tables are compared according to the following scheme:


  • Not in the old table, found in the new run – the object stays in the new table (new inclusion).
  • In the old table, found again in the new run – if the counter > limit, the object is deleted; if the counter <= limit, the counter is increased by 1.
  • In the old table, not found in the new run – the inconsistency was only temporary; this is the normal case.


The write locks found are deleted without being gathered in a table.
The functions "Delete write locks" and "Delete inconsistencies" can be used independently of each other, but this is not recommended. For normal daily use, the limit values for both functions should be the same; at the moment, no use cases for differing limit values are known. A sketch of the comparison logic is shown below.


Recommendation


This task should be scheduled daily.


Related Notes


16083


Business Warehouse


The Business Warehouse topic is a group of housekeeping tasks that are Business Warehouse-oriented.


PSA Cleanup

Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: yes
Support for initial housekeeping: yes
Recommended for HANA pre-migration housekeeping: yes
Pause / resume support: yes


Introduction

PSA Cleanup is part of Business Warehouse deletion tasks.
The Persistent Staging Area (PSA) is the inbound storage area in BI for data from the source system. The requested data is saved unchanged from the source system.
Requested data is stored in the transfer structure format in transparent, relational database tables in BI.
If regular deletion does not take place, the data in PSA tables can grow to an unlimited size. In applications, this can lead to poor system performance, while from an administration point of view it causes an increase in resource usage. High volumes of data can also have a considerable effect on the total cost of ownership of a system.
In OutBoard Housekeeping, it is possible to delete data from PSA tables based on a retention time.

Step list

In the main OutBoard Housekeeping menu select "PSA Cleanup – Settings" under the Business Warehouse/Deletion Tasks.
Now, the Settings selection must be specified. The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.


Figure 87: PSA Cleanup – Settings


The user can edit the created Settings Group (changes must be saved by clicking the Save Settings icon on the taskbar at the end of modifying).
1st: Specify the DataSource name(s) by clicking the 'Add Object' info button; from the list of available DataSources (F4), the user can select which ones are to be added to the newly created Settings Group and confirm. The source system may also be specified as an additional filter for DataSources. The Include/Exclude option determines whether the result of the selection will be added to or removed from the selection.
The user can enter the DataSource name as a pattern and, by checking "Do you want to save a pattern?", skip selecting the PSA tables. This is useful when creating settings on group level.


Figure 88: PSA Cleanup – DataSource selection


After the confirmation, the user should select the PSA tables that are to be cleaned up and confirm; the selected PSA tables are added to the Settings Group list.

Figure 89: PSA Cleanup – PSA selection


Figure 90: PSA Cleanup – Settings with patterns and objects


Icons in "X" column indicate how the lines will apply to the overall list of PSAs:
Pattern will be evaluated during the task execution and the PSAs will be added to the overall list
Pattern will be evaluated during the task execution and PSAs will be removed from the overall list
PSA will be added to the overall list
PSA will be removed from the overall list
If the pattern is added, by clicking on its technical name it is evaluated and the list of PSAs is shown.


Figure 91: PSA Cleanup – List of PSAs included in pattern
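A conceptual Python sketch of how the include/exclude pattern and object lines could combine into the overall PSA list at execution time; the DataSource names are invented and simple wildcard matching stands in for the real pattern evaluation.

```python
from fnmatch import fnmatch

# Hypothetical settings lines: (kind, include?, value) where kind is 'pattern' or 'object'.
settings_lines = [
    ("pattern", True,  "2LIS_02_*"),      # add all PSAs of DataSources matching the pattern
    ("pattern", False, "2LIS_02_SCL"),    # ...but remove this one again
    ("object",  True,  "0MATERIAL_ATTR"), # add a single PSA explicitly
]

def build_overall_list(available_psas, lines):
    """Evaluate patterns at execution time and apply include/exclude lines in order."""
    overall = set()
    for kind, include, value in lines:
        matches = {p for p in available_psas if fnmatch(p, value)} if kind == "pattern" else {value}
        overall = overall | matches if include else overall - matches
    return sorted(overall)

print(build_overall_list(["2LIS_02_ITM", "2LIS_02_SCL", "0MATERIAL_ATTR", "0CUSTOMER_ATTR"], settings_lines))
# ['0MATERIAL_ATTR', '2LIS_02_ITM']
```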


2nd: By clicking the 'Requests sel.' button, the user can specify the time period for deletion-relevant entries in the selected PSA tables and include or exclude the requests with error status from processing. From OutBoard Housekeeping version 2.54, there is an option that allows the direct deletion of PSA data, skipping the Recycle Bin.
Note: The user can specify a different time criterion for every PSA table in the list; if no PSA table is selected, the chosen time parameter is applied to all PSA tables in the list.


Figure 92: PSA Cleanup – Time period settings


3rd: Run 'Test Run' for the settings; it builds the overall list of PSAs, scans them and identifies all REQUESTIDs that fulfil the time period condition. After the 'Test Run' execution, the screen "Requests to be deleted" opens with the list of relevant REQUESTIDs, DataSources and source systems.


Figure 93: PSA Cleanup – Test run result


Note: when creating settings on group level, the 'Test Run' button is unavailable and this step is therefore skipped.
4th: The next step is to set the time limit for the Recycle Bin. Enter a value in days in the RecycleBin Period input field or leave the default value, which is 14 days.
Note: Data stored in the Recycle Bin is still available and can be restored if necessary during the time period defined during the setup. Once the time period expires, data stored in the Recycle Bin is automatically deleted by a manual or scheduled execution of the system task "OutBoard Housekeeping RecycleBin Cleanup".


Figure 94: PSA Cleanup – RecycleBin Period
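A tiny sketch of the Recycle Bin retention rule described in the note above; the function name and date handling are illustrative only.

```python
from datetime import date, timedelta

def recycle_bin_expired(deleted_on, recycle_bin_days=14, today=None):
    """True when a Recycle Bin entry has outlived its retention period and may be
    removed by the RecycleBin Cleanup run; until then the data can still be restored."""
    today = today or date.today()
    return today > deleted_on + timedelta(days=recycle_bin_days)

print(recycle_bin_expired(date(2016, 5, 1), recycle_bin_days=14, today=date(2016, 5, 10)))  # False - still restorable
print(recycle_bin_expired(date(2016, 5, 1), recycle_bin_days=14, today=date(2016, 5, 20)))  # True - eligible for cleanup
```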


5th: The user may define the maximum number of jobs that can run in parallel by using the 'Max jobs' input field on the right side.
Note: If the parallelization parameter "Max. Jobs" is set to 0, execution of such settings will distribute the selection into individual execution chunks, but these chunks will not be executed and the respective RunID will be paused.


Figure 95: PSA Cleanup – Parallelization – Max jobs


Once the settings for PSA Cleanup have been specified, the user may run the created/modified Settings Group from the Main menu. There are several options for starting the deletion. For more information, refer to the Execute and Schedule sections of this user documentation.
The user should specify the Settings ID when executing or scheduling the activity.
To check the status of the run, the user can go to the Monitor or check the logs.


Recommendation


It is recommended to periodically delete:


  • The incorrect requests
  • The delta requests that have been updated successfully in an InfoProvider and for which no further deltas need to be loaded


This helps to reduce database disk space usage significantly. Our recommended retention time for PSA tables is 15 days, and the task should typically run daily, but this is application-specific.


ChangeLog Cleanup

Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: yes
Support for initial housekeeping: yes
Pause / resume support: yes



Introduction

ChangeLog Cleanup is part of the Business Warehouse deletion tasks.
The change log is a PSA table that is automatically created for each standard DataStore Object (DSO). In addition, for each standard DataStore Object an export DataSource is created that serves as a data source for the transfer of data from the change log to other data targets.
The change log contains the change history for delta updating from the ODS object into other data targets, for example ODS objects or InfoCubes.
The data is put into the change log via the activation queue and written to the table of active data. During activation, the requests are sorted according to their logical keys; this ensures that the data is updated in the correct request sequence in the table of active data.
OutBoard Housekeeping is able to delete data from change log tables based on a retention time.

Step list

In the main OutBoard Housekeeping menu select "ChangeLog Cleanup – Settings" under the Business Warehouse/Deletion Tasks.
The settings part of ChangeLog Cleanup allows the user to specify the selection criteria of the Settings Group as well as the time window for data relevancy. Settings are changed by setting the corresponding parameters.
The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.


 

Figure 96: ChangeLog Cleanup – Settings Group selection


The user may edit the created Settings Group (changes must be saved by clicking the Save Settings icon on the taskbar at the end of modifying).
1st: The user can specify the DataStore name(s) by clicking the 'Add Object' info button; from the list of available DataStores (F4), the user should select the objects to be added to the newly created Settings Group and confirm. An InfoArea can also be specified as an additional filter for DataStores. The Include/Exclude option determines whether the result of the selection will be added to or removed from the selection.
The user can enter the DataStore name as a pattern, e.g. "ZDSO*", and, by checking "Do you want to save a pattern?", skip selecting the ChangeLog tables. This is useful when creating settings on group level.
After the confirmation, the user can select and confirm the ChangeLog tables that are to be cleaned up. The selected ChangeLog tables are added to the Settings Group list.


Figure 97: ChangeLog Cleanup – DataStore selection



Figure 98: ChangeLog Cleanup – change log selection



Figure 99: ChangeLog Cleanup – Settings with patterns and objects



Icons in "X" column indicate how the lines will apply to the overall list of change logs:
Pattern will be evaluated during the task execution and the change logs will be added to the overall list
Pattern will be evaluated during the task execution and change logs will be removed from the overall list
Change log will be added to the overall list
Change log will be removed from the overall list
If the pattern is added, by clicking on its technical name it is evaluated and the list of change logs is shown.


Figure 100: ChangeLog Cleanup – List of change logs included in pattern


2nd: By clicking the 'Requests sel.' button, the user can specify the time period for deletion-relevant entries in the selected ChangeLog tables and include or exclude requests with error status from processing. As of OutBoard Housekeeping version 2.54, there is an option that allows direct deletion from ChangeLog tables, skipping the RecycleBin.
Note: The user may specify a different time criterion for every ChangeLog table in the list. If no ChangeLog table is selected, the chosen time parameter is applied to all ChangeLog tables in the list.


Figure 101: ChangeLog Cleanup – Time period settings


3rd: Run a 'Test Run' for the selected ChangeLog tables. It scans all ChangeLogs and identifies all REQUESTIDs of each ChangeLog that fulfill the time period condition. After the 'Test Run' execution, a list of the relevant REQUESTIDs, DataStore objects and InfoAreas is opened.
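Conceptually, this selection is a simple retention filter over request load dates. The Python sketch below only illustrates the idea; the field names ('request_id', 'load_date', 'has_error') and the in-memory data structure are assumptions made for this example, not the product's implementation:

from datetime import date, timedelta

def select_requests_for_deletion(requests, retention_days, include_error_requests=True):
    """Return the request IDs whose load date lies before the retention cutoff."""
    cutoff = date.today() - timedelta(days=retention_days)
    selected = []
    for req in requests:
        if not include_error_requests and req["has_error"]:
            continue  # optionally exclude erroneous requests from processing
        if req["load_date"] < cutoff:
            selected.append(req["request_id"])
    return selected

# Illustrative data: only the old request falls outside the 15-day retention window
sample = [
    {"request_id": 1001, "load_date": date(2015, 1, 2), "has_error": False},
    {"request_id": 1002, "load_date": date.today(), "has_error": False},
]
print(select_requests_for_deletion(sample, retention_days=15))   # -> [1001]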


Figure 102: ChangeLog Cleanup – Test run result


Note: When creating settings on the landscape level, the 'Test Run' button is unavailable and this step is skipped.
4th: The next step is to set the time limit for the Recycle Bin. Enter a value in days in the RecycleBin Period field or leave the default value, which is 14 days.
Note: Data stored in the Recycle Bin is still available and can be restored if necessary during the time period defined during the setup. Once the time period expires, data stored in the Recycle Bin is deleted by a manual or scheduled execution of the system task "OutBoard Housekeeping RecycleBin Cleanup".


Figure 103: ChangeLog Cleanup – Recycle Bin Period


In the 5th step, the user may define the maximum number of jobs that can run in parallel using the 'Max jobs' input field on the right side.
Note: If the parallelization parameter "Max. Jobs" is set to 0, executing such settings will split the selection into execution chunks, but these chunks will not be processed and the respective RunID will be paused.


Figure 104: ChangeLog Cleanup – Parallelization – Max Jobs


Once the settings for ChangeLog Cleanup are specified, the user can run the created/modified Settings group from the Main menu. There are several options for starting the deletion. For more information, refer to the Execute and Schedule sections of this user documentation.
The user should specify the Settings ID when executing or scheduling the activity.
To check the status of the run, the user can go to the monitor or check the logs.


Recommendation


It is recommended to periodically delete:


  • Incorrect requests
  • Delta requests that have been updated successfully in an InfoProvider and for which no further deltas should be loaded


This helps to reduce database disk space usage significantly. This task should typically be scheduled daily with a retention time of 15 days, but the optimal values are application specific.


Cube Compression Analysis

Created by: SAP/DVD
Underlying SAP report: SAP_INFOCUBE_DESIGNS
Client-dependent: no
Settings as variant: no
Support for Recycle bin: no
Support for initial housekeeping: yes
Recommended for HANA pre-migration housekeeping: yes



Introduction


The size of dimension tables significantly affects performance on the database level (table joins, query performance). This analysis checks two important values for each dimension table:


  • Row count – count of rows in the table.
  • Ratio – calculated as the number of dimension table rows divided by the number of fact table rows.


The acceptable ratio for dimension tables is up to 10% (to avoid false alarms, the cube must have more than 30,000 rows).
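To make the check concrete, the following Python sketch computes the row-count ratio per dimension table and flags the dimensions above the 10% threshold, skipping cubes with fewer than 30,000 fact rows. The table names and row counts are made up for the example; this is not the SAP_INFOCUBE_DESIGNS implementation:

def flag_oversized_dimensions(fact_rows, dimension_rows, max_ratio_pct=10.0, min_fact_rows=30_000):
    """Return (dimension table, ratio %) pairs whose row count exceeds the threshold."""
    if fact_rows < min_fact_rows:
        return []  # cube too small to judge reliably (avoids false alarms)
    flagged = []
    for dim_name, rows in dimension_rows.items():
        ratio_pct = 100.0 * rows / fact_rows
        if ratio_pct > max_ratio_pct:
            flagged.append((dim_name, round(ratio_pct, 1)))
    return flagged

# Illustrative values only
print(flag_oversized_dimensions(
    fact_rows=1_200_000,
    dimension_rows={"/BIC/DCUBE1": 9_000, "/BIC/DCUBE2": 250_000},
))
# -> [('/BIC/DCUBE2', 20.8)]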


Steplist


In the main OutBoard Housekeeping menu select "Cube Compression Analysis – Settings" under the Business Warehouse/Deletion Tasks.
Here, the Settings selection must be specified. The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.
Should the user create new settings, the "Description" field needs to be filled. In "Selection thresholds", the user can fill in the "Min. rows" field to check only tables with a row count greater than this minimum. "Min. density (%)" will search for tables with a ratio greater than the selected value.


Figure 105: Cubes Compression Analysis – Settings detail


If "Create settings for Task Cube compression based on selection" is checked, a consistency check will prepare the settings for "Cube Compression" task and the settings ID can be found in consistency check logs under – "Problem class Important".
Click on info button "Save settings" to save the selection. For any further updates, click on "Modify Settings" info button and confirm.
Once the settings for Cube Compression Analysis are specified, the user may run the created/modified Settings group from the Main menu. There are several options for starting the analysis. For more information, refer to the Execute and Schedule sections of this user documentation.
The user should specify the Settings ID when executing or scheduling the activity.
To check the status of the run, the user can go to the monitor or check the logs.


Recommendation


As data volume grows, this task should be run regularly.


Related Notes


1461926


Cube Compression

Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes
Pause / resume support: yes



Introduction


Cube compression is part of Business Warehouse deletion tasks.
The data loaded into the InfoCube is identified by the request IDs associated with it. However, the request ID concept can also cause the same data records to appear more than once in the F fact table. This unnecessarily increases the volume of data and reduces reporting performance, as the system has to aggregate over request IDs every time a query is executed. The compression of the InfoCube eliminates these disadvantages and brings data from different requests together into one single request. Compression in this case means rolling up the data so that each data set is contained only once, thereby deleting the request information.
The compression improves performance as it removes the redundant data. It also reduces memory consumption: the request IDs associated with the data are deleted, and redundancy is reduced by grouping on the dimensions and aggregating the cumulative key figures.
The compression reduces the number of rows in the F fact table, because when the requests are compressed the data moves from the F fact table to the E fact table. This results in accelerated loading into the F fact table, faster updating of the F fact table indexes, shorter index rebuilding time and accelerated rollups (since the F fact table is the source of data for the roll-up).
OutBoard Housekeeping is able to compress the cubes using retention time.
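The effect of compression can be pictured as a group-by that drops the request ID and sums the cumulative key figures, so that each characteristic combination survives only once. The Python sketch below is a simplified illustration of that idea; the row layout is an assumption and this is not the database-level operation the product performs:

from collections import defaultdict

def compress_requests(f_table_rows):
    """Roll up F-table rows into E-table rows by dropping the request ID."""
    e_table = defaultdict(float)
    for row in f_table_rows:
        # the request ID is intentionally ignored, so records collapse per dimension combination
        e_table[row["dims"]] += row["amount"]
    return [{"dims": dims, "amount": amount} for dims, amount in e_table.items()]

f_rows = [
    {"request_id": 10, "dims": ("PLANT_A", "2015-01"), "amount": 100.0},
    {"request_id": 11, "dims": ("PLANT_A", "2015-01"), "amount": 40.0},   # same record, later request
    {"request_id": 11, "dims": ("PLANT_B", "2015-01"), "amount": 70.0},
]
print(compress_requests(f_rows))
# two rows remain; the duplicate PLANT_A record has been aggregated into one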


Step list


In the main OutBoard Housekeeping menu select "Cube Compression – Settings" under the Business Warehouse/Deletion Tasks.
Settings-part of the OutBoard Housekeeping allows the user to specify the selection criterion of the Settings Group as well as the time window for data relevancy. Settings are changed by means of setting a corresponding parameter. Parameters are usually different for each system and therefore are not meant to be transported, but are set on each system separately.
The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.


 Figure 106: Cube Compression – Settings Group selection


The user can edit the created Settings Group (changes are saved by clicking the Save Settings icon on the taskbar when the modification is finished).

The user can click on 'Add Object' and select the InfoCube(s) that are to be compressed; once selected the user should press confirm.


 Figure 107: Cube Compression – Settings Group selection


Icons in "X" column indicate how the lines will apply to the overall list of InfoCubes:
Pattern will be evaluated during the task execution and the InfoCubes will be added to the overall list
Pattern will be evaluated during the task execution and InfoCubes will be removed from the overall list
InfoCube will be added to the overall list
InfoCube will be removed from the overall list
If the pattern is added, by clicking on its technical name it is evaluated and the list of InfoCubes is shown.
As an alternative to manual selection, settings generated by Cube Compression Analysis can be used.
Once the InfoCube(s) are selected, the user should specify settings under 'Request sel.'. This selection determines which RequestIDs will be identified for compression for every InfoCube in the list. The filtering criteria are based on the request timestamp ("older than xxx days") and on the number of requests to be kept uncompressed. For the request limitation, values can be entered in the following ways (see the sketch after the list below):


  1. Only one of the limitations: either the number of requests to keep uncompressed or processing only data records older than XXX days
  2. Both limitations
  3. No limitations: In this case all requests will be compressed
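A minimal Python sketch of how the two limitations can be combined; the data structure, the ordering rule and the function name are assumptions made purely for illustration:

from datetime import date, timedelta

def requests_to_compress(requests, keep_uncompressed=None, older_than_days=None):
    """Pick request IDs for compression from (request_id, load_date) tuples.

    keep_uncompressed leaves the newest N requests untouched; older_than_days
    only selects requests older than the given age. With both set to None,
    every request is compressed.
    """
    ordered = sorted(requests, key=lambda r: r[1])                     # oldest first
    if keep_uncompressed:
        ordered = ordered[:-keep_uncompressed] if keep_uncompressed < len(ordered) else []
    if older_than_days is not None:
        cutoff = date.today() - timedelta(days=older_than_days)
        ordered = [r for r in ordered if r[1] < cutoff]
    return [request_id for request_id, _ in ordered]

reqs = [(1, date(2014, 12, 1)), (2, date(2015, 1, 10)), (3, date.today())]
print(requests_to_compress(reqs, keep_uncompressed=1, older_than_days=30))
# -> [1, 2]  (the newest request stays uncompressed)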


Also, the user can select the option "Zero elimination" after cube compression (see more in "Zero Elimination after Compression" task).


 Figure 108: Cube Compression – Time period settings


For Oracle databases, there is a possibility to check for DIMID duplicates during execution. This elementary test recognizes whether there are several lines that have different DIMIDs (dimension table key), but have the same SIDs for the selected dimension table for the InfoCube specified. (This can occur by using parallel loading jobs). This has nothing to do with an inconsistency. However, unnecessary storage space is occupied in the database.
Since the different DIMIDs with the same SIDs are normally used in the fact tables, they cannot simply be deleted. Therefore, all of the different DIMIDS in the fact tables are replaced by one DIMID that is randomly selected from the equivalent ones.
DIMIDs that have become unnecessary are subsequently deleted. In doing so, not only the DIMIDs that were released in the first part of the repair are deleted, but also all of those that are no longer used in the fact tables (including aggregates).
If this option is chosen for any database other than an Oracle Database, it will be ignored during execution.
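The duplicate check can be thought of as grouping dimension rows by their SID combination, keeping a single DIMID per group, remapping the redundant DIMIDs in the fact tables and deleting them afterwards. The Python sketch below only illustrates the grouping step; the data structures are assumptions and the surviving DIMID is simply the first one encountered here:

def consolidate_dimids(dimension_rows):
    """Map redundant DIMIDs onto one surviving DIMID per identical SID combination.

    dimension_rows maps DIMID -> tuple of SIDs. Returns (survivors, remapping),
    where remapping tells which DIMID replaces which in the fact tables.
    """
    survivors = {}   # SID combination -> surviving DIMID
    remapping = {}   # redundant DIMID -> surviving DIMID
    for dimid, sids in sorted(dimension_rows.items()):
        if sids in survivors:
            remapping[dimid] = survivors[sids]   # duplicate: will be replaced and deleted
        else:
            survivors[sids] = dimid              # first DIMID seen for this SID combination survives
    return survivors, remapping

rows = {101: (5, 9), 102: (5, 9), 103: (7, 2)}   # 101 and 102 carry the same SIDs
print(consolidate_dimids(rows))
# -> ({(5, 9): 101, (7, 2): 103}, {102: 101})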
The user may display identified RequestIDs by clicking on "Test Run".


Figure 109: Cube Compression – Test run result


Note: When creating settings on group level, the 'Test Run' button is unavailable and this step is skipped.
As a last step, the user may define the maximum number of jobs that can run in parallel using the 'Max jobs' input field on the right side.
Note: If the parallelization parameter "Max. Jobs" is set to 0, executing such settings will split the selection into execution chunks, but these chunks will not be processed and the respective RunID will be paused.


Figure 110: Cube Compression – Parallelization Max jobs


Once the settings for the InfoCube Compression have been specified, the user can run the created/modified Settings group from the Main menu. There are several options for starting the InfoCube Compression. For more information, refer to the Execute and Schedule sections of this user documentation.
The user should specify the Settings ID when executing or scheduling the activity.
To check the status of the run, the user can go to the monitor or check the logs.


Recommendation


Our recommendation is to compress, as soon as possible, the requests of InfoCubes that are not likely to be deleted; this also applies to the compression of the aggregates. The InfoCube content is likely to be reduced in size, so the database time of queries should improve.


Warning


Be careful – after compression the individual requests cannot be accessed or deleted. Therefore, the user should be absolutely certain that the data loaded into the InfoCube is correct.


Cube DB Statistics Rebuild

Created by: DVD
Client-dependent: yes
Settings as variant: no
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: no

Note: The task 'Cube DB Statistics Rebuild' is obsolete; we recommend using 'DB Statistics Rebuild' instead.


Introduction


The database statistics are used by the system to optimize query performance. For this reason, the database statistics should be up to date. SAP recommends updating the statistics whenever more than a million new records have been loaded into the InfoCube since the last update.
The database statistics can be automatically recalculated after each load or after each delta upload. To avoid unnecessary recalculations, the OutBoard Housekeeping task first determines whether a recalculation is needed, and only afterwards are the statistics rebuilt. The InfoCubes relevant for a statistics update can also be listed using the 'Test run' in the settings definition.
The percentage of the InfoCube data that is used to create the statistics is set to 10% by default by SAP. The larger the InfoCube, the smaller the percentage that should be chosen, since the demand on the system for creating the statistics increases with the size. The Cube DB Statistics Rebuild task uses the percentage as it is set up for each InfoCube.


Recommendation


Our recommendation is to run this task regularly, as it will update the statistics only for InfoCubes when needed. It will avoid unnecessary statistics update, but on the other hand, it will keep the statistics up-to-date.
Note: While statistics are being built, it is not possible to:
• Delete indexes
• Build indexes
• Fill aggregates
• Roll up requests in aggregates
• Compress requests
• Archive data
• Update requests to other InfoProviders
• Perform change runs.


BI Background Processes

Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes


Introduction


The BW environment performs a lot of processes all the time and some may not always be processed in a visible way. Sometimes this happens in the background in series or in parallel processes.
Background Management provides functions for managing these background processes and the parallel processes in the BW system. As a result of its regular activities, the messages and the internal parameters of the background processes executed by background management are stored in the RSBATCHDATA table. Without housekeeping, the RSBATCHDATA table may grow out of control.
In OutBoard Housekeeping, it is possible to delete these messages and the internal parameters using retention time.


Step list


In the main OutBoard Housekeeping menu select "BI Background Processes – Settings" under the Basis/Deletion Tasks.
The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.
Settings part of BI Background Processes allows the user to specify the selection criterion of the Settings Group.
Now, the Settings selection must be specified:


  • Delete Messages by – for the internal messages of BI background management, this defines after how many days these should be deleted.
  • Delete Parameters by – for the internal parameters of the background processes, this defines after how many days these should be deleted.
  • Fill the "Description" field with a description comment and click on "Save Settings" button.


Note: SAP recommends deleting messages and parameters older than 30 days. This setting should normally prevent the RSBATCHDATA table from being overfilled. When defining the deletion selections, make sure to keep the data as long as necessary in order to track any problems that might occur.
To save the settings, the user should click the "Save Settings" button. If the user updates an already existing settings group, the settings can be saved using the "Modify Settings" button, or the complete settings group can be deleted with the "Delete Settings" info button.
To run this task in test mode, mark the test mode checkbox; the available deletion results are then displayed in the logs.


Figure 111: BI Background Processes – Settings detail


Once the settings are specified, the user can run the created/modified Settings Group from the Main menu. There are several options for starting the deletion. For more information, refer to the Execute and Schedule sections of this user documentation.
The user should specify the Settings ID when executing or scheduling the activity.
To check the status of the run, the user can go to the monitor or check the logs.


Recommendation


Our recommendation is to delete the messages and the parameters stored in RSBATCHDATA table that are older than 30 days and to run this job on a daily basis.


Warning


The user should only delete the messages and the parameters that will no longer be needed. After this report is executed, the logs will be temporarily stored in the recycle bin and eventually deleted.


BW Statistics Deletion

Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes

Introduction

BW statistics are part of Business Warehouse deletion tasks.
To evaluate the fundamental functional areas of the Business Information Warehouse, the system stores BW Statistics in the following tables:


  • RSDDSTATAGGR
  • RSDDSTATAGGRDEF
  • RSDDSTATBIAUSE
  • RSDDSTATCOND
  • RSDDSTATDELE
  • RSDDSTATDM
  • RSDDSTATDTP
  • RSDDSTATEVDATA
  • RSDDSTATHEADER
  • RSDDSTATINFO
  • RSDDSTATLOGGING
  • RSDDSTATPPLINK
  • RSDDSTATTREX
  • RSDDSTATTREXSERV


After a certain period of time, the statistics are no longer used and therefore no longer needed, so they can be deleted to reduce the data volume and increase performance when accessing these tables.
With OutBoard Housekeeping, it is possible to delete these BW Statistics using retention time.


Step list


In the main OutBoard Housekeeping menu select "BW Statistics – Settings" under the Business Warehouse/Deletion Tasks.
The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation. Should the user create new settings, the "Description" field needs to be filled in and the user must select whether or not the settings ID will be run in test mode.
The user may also restrict the deletion of BW statistics based on source objects:


  • Query
  • Logs
  • Aggregates
  • Data Transfer Process
  • Data on Deletion


The user has to click the "Save settings" button to save the selection; for any further updates, click the "Modify Settings" info button and confirm.


 Figure 112: BW Statistics – Settings detail


Once the settings for BW Statistics cleaning are specified, the user can run the created/modified Settings group from the Main menu. There are several options for starting the deletion. For more information, refer to the Execute and Schedule sections of this user documentation.


Recommendation


Our recommendation is to delete BW Statistics older than 60 days.


Warning


Analyse the usage of BW Statistics in the system before deleting them, because OutBoard Housekeeping deletes the BW statistics permanently.


Bookmark Cleanup

Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: yes

Introduction


Bookmarks are useful if a user wants to return to a particular navigational status (which includes the level to which a hierarchy has been expanded) of a Web application or an ad-hoc query that was created using the Web. The user can set a bookmark to return to a particular navigational status at a later date, because the system creates and stores a URL for the bookmark.
In OutBoard Housekeeping, it is possible to clean up these bookmarks based on internal parameters and retention time.
In the main OutBoard Housekeeping menu select "Bookmark Cleanup – Settings" under the Business Warehouse/Deletion Tasks.
The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation. Should the user create new settings, the "Description" field needs to be filled.


Figure 113: Bookmark Cleanup – Settings detail


The user could restrict the cleanup of bookmarks based on:


  • Selection Parameters
    • Date created/ last accessed
    • User Name
    • Template
  • Bookmark State (All/ Stateful/ Stateless)
  • Bookmark Type (All/without/with Data)


To run this task in test mode, mark the test mode checkbox; the available cleanup results are then displayed in the logs.
Once the settings for the Bookmark Cleanup are specified, the user may run the created/modified Settings group from the Main menu. There are several options for starting the deletion. For more information, refer to the Execute and Schedule sections of this user documentation.


Web Template Cleanup

Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: yes

Introduction


A Web template determines the structure of a Web application. Web Application Designer is used to insert placeholders into a HTML document for Web Items (in the form of object tags), data providers (in the form of object tags) and BW URLs. The HTML document with the BW-specific placeholders is called a Web template. Web templates are checked into the Web Application Designer. The HTML page that is displayed in the Internet browser is called a Web application. Depending on which Web items were inserted into the Web template, a Web application contains one or more tables, an alert monitor, charts, maps, and so on.
The Web template is the keystone of a Web report. It contains placeholders for items and command URLs. Data providers, items, and command URLs are generated for Web reports.
In OutBoard Housekeeping, it is possible to clean up Web Templates based on internal parameters and retention time.
The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation. Should the user create new settings, the "Description" field needs to be filled.
In the settings, the field 'Last used' represents the number of days that have passed since the template was last used, and the field 'Created before' represents the number of days that have passed since the template was created.


 

Figure 114: Web Template Cleanup – Settings detail


The user can restrict the cleanup of web templates based on the following parameters:


  • Created before (days)
  • Last used (days)
  • User Name
  • Template (Tech Name)


It is possible to run this task in Test mode. Mark this option to display available cleanup results in logs.
Once the settings for the Web Template Cleanup are specified, the user may run the created/modified Settings group from the Main menu. There are several options for starting the deletion. For more information, refer to the Execute and Schedule sections of this user documentation.


Precalculated Web Template Cleanup

Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: yes


Introduction


A precalculated web template is a complete document that does not require a database connection; essentially, it is a web template filled with data. After the web template is filled with data, it can be distributed and used without executing an OLAP request. As these templates are filled with data, often multiple times, they require more space than traditional Web Templates.
In OutBoard Housekeeping, it is possible to clean up these precalculated web templates based on internal parameters and retention time.
The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation. Should the user create new settings, the "Description" field needs to be filled.


 Figure 115: Precalculated Web Template – Settings detail


The user can restrict the cleanup of Precalculated Web Templates based on the following parameters:


  • Older than (Creation Date)
  • User Name
  • Template (Tech Name)


It is possible to run this task in Test mode. Mark this option to display available cleanup results in logs.
Once the settings for the Precalculated Web Template Cleanup are specified, the user may run the created/modified Settings group from the Main menu. There are several options for starting the deletion. For more information, refer to the Execute and Schedule sections of this user documentation.


Extended Query Statistics Cleanup

Created by: DVD
Client-dependent: yes
Settings as variant: no
Support for Recycle bin: no

Introduction


Datavard (DVD) Extended Query Statistics is information collected on top of the standard OLAP statistics.
These statistics are collected when the DVD Extended Query Statistics enhancement is installed in the system and collecting is turned on. Extended statistics store information about the filters used for each query/navigational step executed by a user and provide source information for other DVD analysis products. If these statistics are not deleted manually, their growth is unlimited. Two tables, /DVD/QS_QINFO and /DVD/QS_QVAR, are filled when a query is executed in the system. The speed of growth depends on the overall query usage and the average number of filters used. In productive systems, it is not uncommon for the total size of these tables to grow by 0.5 GB per week.
In OutBoard Housekeeping, it is possible to clean up the Extended Query Statistics based on internal parameters.


Recommendation


Periodical deletion of Extended Query Statistics should reduce database disk space usage significantly. Our recommendation is to delete all statistics that are no longer needed for analysis in other DVD products.
For example: if Heatmap query usage analysis is done once a month, all statistics of that month should be cleaned after the analysis, provided no other analysis over a longer time frame is planned.


Step list


In the main OutBoard Housekeeping menu select "Extended Query Statistics Cleanup – Settings" under the Business Warehouse/Deletion Tasks.
The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation. Should the user create new settings, the "Description" field needs to be filled.
The user then needs to fill in the optional parameters:


  • ID – generated unique statistic ID
  • Query datum – date of the query execution
  • Query ID – report specific ID
  • User – username of the query creator


It is possible to run this task in Test mode. Mark this option to display available cleanup results in logs.


 Figure 116: Extended Query Statistics Cleanup – Settings detail


Warning


If no parameters are specified, the statistics of all queries will be deleted.


Unused Dimension Entries of an InfoCube Cleanup

Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: no

Introduction


During the deletion of data from InfoCubes, it is possible that entries in the respective dimension tables of the InfoCubes are retained for future processing. This data can occupy valuable space in the database and is easily forgotten, as it is no longer visible in the InfoCube. The task Unused Dimension Entries of an InfoCube Cleanup deals with this retained data if it is no longer needed.
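Detecting these leftovers boils down to a set difference between the DIMIDs present in a dimension table and the DIMIDs still referenced by the fact tables (including aggregates). A small illustrative Python sketch with made-up inputs:

def unused_dimension_entries(dimension_dimids, fact_table_dimids):
    """Return DIMIDs that exist in the dimension table but are referenced by no fact row."""
    return sorted(set(dimension_dimids) - set(fact_table_dimids))

# Illustrative values: entries 12 and 15 were left behind after a selective deletion
print(unused_dimension_entries(
    dimension_dimids=[10, 11, 12, 13, 15],
    fact_table_dimids=[10, 11, 13],
))
# -> [12, 15]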


Step list


In the main OutBoard Housekeeping menu, select "Cube Compression – Settings" under the Business Warehouse/Deletion Tasks.
The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.
The user can then specify InfoCubes and InfoCube patterns that will be used in execution of the task.


Figure 117: Unused Dimension Entries of an InfoCube – Settings Group selection


In 'Add object' submenu, the user can specify individual InfoCubes or patterns that are to be processed during execution.


Figure 118: Unused Dimension Entries of an InfoCube – InfoCube(s) specification


A test run is available for this task; it gives an overview of the unused dimension entries.


Figure 119: Unused Dimension Entries of an InfoCube – Test run results


Warning
Test run results and task logs follow the standard SAP logging logic for this task: the first five unused entries of a dimension are displayed, followed by a summary for that dimension if there are more than five unused entries. If there are five or fewer unused entries in a dimension, no summary is displayed and the information about the unused entries of the next dimension follows.


Figure 120: Unused Dimension Entries of an InfoCube – Example of summary in results


Query Objects Deletion

Created by: DVD
Underlying SAP transaction: RSZDELETE
Client-dependent: no
Settings as variant: no
Support for Recycle bin: yes


Introduction


Transaction RSZDELETE is intended for mass deletion of queries and reusable query components (structures, filters, calculated or restricted key figures and variables) from the system.
As of OutBoard Housekeeping release 2.35, this functionality is enhanced with RecycleBin support, which allows storing deleted queries and query components in RecycleBin for specified retention time.


Step list


In the main OutBoard Housekeeping menu select "Query Objects Deletion – Settings" under the Business Warehouse/Deletion Tasks.
The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.
Settings part of Query Objects Deletion allows the user to specify the selection criterion of the Settings Group.


Figure 121: Query Objects Deletion – Settings detail


Workbook and Role Storage Cleanup

Created by: DVD
Underlying SAP report: RSWB_ROLES_REORG
Client-dependent: no
Settings as variant: no
Support for Recycle bin: yes
Recommended for HANA pre-migration housekeeping: yes



Introduction


There may be workbooks where there is no longer a reference in a role or in the favorites. Similarly, references to non-existing workbooks may exist in roles or favorites.
In the SAP Easy Access menu or in the role maintenance, the references to workbooks are deleted without a check performed to see whether it may be the last reference. In these cases, the workbook is also deleted in the document storage in the BEx Analyzer or BEx Browser.
This task enables the deletion of references to workbooks in roles and favorites whose workbooks do not exist in the document storage. The task also allows the user to delete workbooks for which no references exist in roles or favorites in the document storage.
As of OutBoard Housekeeping release 2.35, this functionality is enhanced with RecycleBin support, which allows storing deleted references in RecycleBin for specified retention time.


Step list


In the main OutBoard Housekeeping menu select "Query Objects Deletion – Settings" under the Business Warehouse/Deletion Tasks.
The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.
The settings part of Query Objects Deletion allows the user to specify the selection criterion of the Settings Group.


Figure 122: Workbooks and Role Storage Cleanup – Settings detail


It is possible to run this task in Test mode (checked by default) and it will write the analysis results into task logs.


Workbook Cleanup

Created by: DVD
Client-dependent: yes
Settings as variant: yes
Support for Recycle bin: yes

Introduction

BEx Analyzer is an analytical, reporting and design tool embedded in Microsoft Excel. Workbooks created with this tool can be saved to multiple locations, e.g. the SAP NetWeaver server. Workbooks are saved as binary files in the database system and can consume a lot of space, especially when users create multiple workbooks for the same purpose (e.g. every month) and do not use them afterwards.


In OutBoard™ for Housekeeping it is possible to delete unused workbooks from the system based on retention time and OLAP statistics.


Step list

In the main OutBoard Housekeeping menu, select “Workbook Cleanup” under the Business Warehouse/Deletion Tasks.


The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation.


In the settings part of the Workbook Cleanup, the user can specify the selection criterion of the Workbook to be cleaned up. It is possible to run this task in Test mode (checked by default) and it will write the analysis results into task logs.
User needs to fill in parameters:

  • Workbook ID
  • Person Responsible
  • Type of Workbook
  • Location of Data Storage
  • Last usage (in days)
  • Delete all with no statistics checkbox – ignores the statistics and deletes all workbooks according to the selection parameters
  • Retention time in days – the amount of time a cleaned workbook is held in the recycle bin
  • Verbose logs checkbox – enhances the logs with additional information about the processing of each workbook


Recommendation

Our recommendation is to delete unused workbooks on a regular basis. Note that it is crucial to collect OLAP statistics for workbooks, otherwise the deletion program is unable to identify workbooks that are unused. Statistics for workbooks can be set in transaction RSDDSTAT.


Warning

When OLAP statistics are not collected, the deletion program by default excludes such workbooks from deletion. Be careful when you choose the settings option 'Delete all with no statistics': it will delete all workbooks that do not have any OLAP statistics for the given selection. If you decide to use this option, always use the recycle bin (set 'Retention time in days' > 0); you can then easily reload possibly missing workbooks.
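The decision rule described above can be summarised as: a workbook is deleted when its last recorded usage is older than the threshold, and a workbook without any OLAP statistics is kept unless 'Delete all with no statistics' is set. A minimal Python sketch of that rule; the function and parameter names are illustrative assumptions, not the product's API:

def should_delete_workbook(last_used_days_ago, threshold_days, delete_without_statistics=False):
    """Decide deletion for one workbook; last_used_days_ago is None when no OLAP statistics exist."""
    if last_used_days_ago is None:
        # without statistics the workbook is kept unless explicitly allowed
        return delete_without_statistics
    return last_used_days_ago > threshold_days

print(should_delete_workbook(400, threshold_days=180))                                    # True  (long unused)
print(should_delete_workbook(None, threshold_days=180))                                   # False (no statistics)
print(should_delete_workbook(None, threshold_days=180, delete_without_statistics=True))   # True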


BusinessObjects: Office Cleanup

Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: no

Introduction

SAP BusinessObjects Analysis is an analytical, reporting and design tool embedded in Microsoft Excel. Workbooks created with this tool can be saved to multiple locations; one such location is the SAP NetWeaver server. Workbooks are saved as binary files in the database system and can consume a lot of space, especially when users create multiple workbooks for the same purpose (e.g. every month) and do not use them afterwards.

In OutBoard™ for Housekeeping it is possible to delete unused workbooks from the system based on creation time.

Recommendation

Our recommendation is to delete unused workbooks on a regular basis, using the recycle bin.

Warning

The task uses the creation timestamp for deletion, which means an object may have been used recently; unfortunately, the header table does not contain this information. You should always use the recycle bin (set 'Retention time in days' > 0); you can then easily reload possibly missing objects.


Tables Buffering on Application Server

Created by: DVD
Client-dependent: yes
Settings as variant: no
Support for Recycle bin: no

Introduction


Table buffering on the Application Server is part of Business Warehouse buffering tasks.
Buffering on the application server avoids accessing the database server too often. In the case of many sequential reads and a small percentage of invalidations (buffer refreshes), buffering increases the performance of table access.
OutBoard Housekeeping offers the possibility to evaluate the table buffering settings in the test mode. The result is a list of tables to be buffered according to current settings.


Step list


In the main OutBoard Housekeeping menu, select "Tables Buffering on Application Server – Settings" under the Business Warehouse/Buffering Tasks.
The user could create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation. Should the user create new settings, the "Description" field needs to be filled.
The user may choose to set the threshold parameters for the tables to be buffered:


  • Max. Table size
  • Max. Sequential reads
  • Max. Invalidation (in percent)


These parameters can be applied to the statistics of the previous or the current month (a selection sketch follows below).
Click the "Save settings" info button to save the selection.
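In effect, the settings describe a filter over table buffer statistics: a table is proposed for buffering when it is small enough, is read sequentially often enough and is invalidated rarely enough. The Python sketch below illustrates such a filter; the field names, the numbers and the exact interpretation of the thresholds are assumptions, not the product's internals:

def buffering_candidates(table_stats, max_size_kb, min_sequential_reads, max_invalidation_pct):
    """Propose tables for application-server buffering based on simple thresholds."""
    return [
        t["name"]
        for t in table_stats
        if t["size_kb"] <= max_size_kb
        and t["sequential_reads"] >= min_sequential_reads
        and t["invalidation_pct"] <= max_invalidation_pct
    ]

# Illustrative statistics only
stats = [
    {"name": "ZSMALL_LOOKUP", "size_kb": 300, "sequential_reads": 12_000, "invalidation_pct": 0.5},
    {"name": "ZBIG_FACTS", "size_kb": 900_000, "sequential_reads": 50, "invalidation_pct": 40.0},
]
print(buffering_candidates(stats, max_size_kb=10_000, min_sequential_reads=1_000, max_invalidation_pct=1.0))
# -> ['ZSMALL_LOOKUP']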


 Figure 123: Tables Buffering on App. Server – Settings detail


Once the settings for Tables Buffering are specified, the user can run the created/modified Settings group from the Main menu. There are several options for starting the buffering; for more information, refer to the Execute and Schedule sections of this user documentation.


Recommendation


Our recommendation is to buffer the tables on the application server according to the results of the test run of this analysis.


Warning


Continue to analyse the usage and invalidations of buffer on a regular basis (monthly).


Number Range Buffering

Created by: DVD
Client-dependent: no
Settings as variant: no
Support for Recycle bin: no

Introduction


Number Range Buffering is part of Business Warehouse buffering tasks.
During a master data load, each record accesses the database table NRIV to pick a new SID number. Similarly, during InfoCube data loading, each record accesses the table NRIV to get a new DIMID. With a huge amount of data, the loading performance decreases because every record goes to the database table to get a new SID or DIMID number. To rectify this, buffered numbers should be used rather than hitting the database every time.
OutBoard Housekeeping offers the possibility to buffer required SID and DIMID range objects with the defined buffering value.
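The benefit of buffering comes from reserving a whole block of numbers in one database round trip instead of one NRIV access per record. A rough Python sketch of that idea (a simplification for illustration, not SAP's number range implementation):

class BufferedNumberRange:
    """Hand out sequential numbers while hitting the 'database' only once per block."""

    def __init__(self, reserve_block_in_db, buffer_size=10):
        self._reserve = reserve_block_in_db   # callable returning the first number of a reserved block
        self._buffer_size = buffer_size
        self._next = None
        self._remaining = 0

    def next_number(self):
        if self._remaining == 0:
            self._next = self._reserve(self._buffer_size)   # one round trip reserves a whole block
            self._remaining = self._buffer_size
        number = self._next
        self._next += 1
        self._remaining -= 1
        return number

# Simulated NRIV access: reserves `size` numbers and returns the first one
_db_counter = {"current": 0}
def reserve_block(size):
    start = _db_counter["current"] + 1
    _db_counter["current"] += size
    return start

nr = BufferedNumberRange(reserve_block, buffer_size=10)
print([nr.next_number() for _ in range(12)])   # only two 'database' accesses for 12 numbers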


Step list


In the main OutBoard Housekeeping menu select "Number Range Buffering – Settings" under the Business Warehouse/Buffering Tasks.
Settings part of Number Range Buffering allows the user to specify the selection criterion of the Settings Group. The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation. Should the user create new settings, the "Description" field needs to be filled in and the user must select whether or not the settings ID will be run in test mode.


 Figure 124: Number Range Buffering – Settings detail


Number Range Buffering supports three different activities:


  1. Buffering NR object
  2. Unbuffering NR object
  3. Unbuffering NR for package dimension


In OutBoard Housekeeping releases older than 2.61, the select options of the Number Range Buffering task were based on the number range object numbers for SID and DIMID. Because these numbers differ throughout the landscape, the select options are now based on the InfoObject name (NR for InfoObject) and the cube dimensions (NR for InfoCube).
For NR object buffering, specify the selection conditions for the SID NR object (NR for InfoObject) or the DIMID NR object (NR for InfoCube) and, if necessary, the buffer level. For NR object unbuffering, specify the selection conditions for the NR object to be unbuffered, if necessary.
The user has to click the "Save settings" button to save the selection; for any further updates, click the "Modify Settings" info button and confirm.
Once the settings for Number Range Buffering are specified, the user can run the created/modified Settings group from the Main menu. There are several options for starting the buffering. For more information, refer to the Execute and Schedule sections of this user documentation.


Recommendation


Our recommendation is to buffer all SID and DIMID number range objects with the buffering value 10.


Warning


The number range object of the characteristic 0REQUEST should never be buffered and is therefore always filtered out by default. The number range objects of the package dimensions should never be buffered either and are also always filtered out by default.


Related Notes


1948106
857998


Archiving of Request Administration Data

Created by: SAP/DVD
Underlying SAP report: RSREQARCH_WRITE
Client-dependent: no
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes
Cyclic execution of standard report: yes

Introduction


The administration tables and log tables of the DTP/ InfoPackage increase with each new request. This in turn affects performance.
The log and administration data for requests can be archived. This results in improved performance of the load monitor and the monitor for load processes. It also allows freeing up tablespace on the database.
This OutBoard Housekeeping task encapsulates the SAP NetWeaver data archiving concept, by using SARA transaction and WRITE action. The archiving object BWREQARCH contains information about which database tables are used for archiving. In runtime, the report RSREQARCH_WRITE is executed.


Step list


In the main OutBoard Housekeeping menu select "Archiving of Request Administration Data – Settings" under the Business Warehouse/Archiving Tasks.
The settings are maintained the same way as standard SAP housekeeping tasks are. For more information, refer to the Creating a Settings ID section of this user documentation.
In the variant screen, the user can set the criteria for the requests to be archived (a selection sketch follows the parameter descriptions below).


  • Selection Date of Request – refers to the load date of the request.
  • Requests Older Than (Months) – requests that were loaded in the selected period (Selection Date) and are older than the specified number of months are archived during the archiving run.
  • Archive New Requests Only – only new requests, which have not been archived yet, are archived during the archiving run.
  • Reloaded Requests Only – only old requests that have already been archived once and were reloaded from that archive are archived.
  • Archive All Requests – all requests that fall within the selection period are archived.
  • Minimum Number of Requests – if the number of archiving-relevant requests is lower than the minimum number, no requests are archived during the archiving run.
  • Test Mode / Production Mode – specifies in which mode the report is executed. Test mode makes no changes in the database.
  • Detail Log – specifies the information contained in the detail log (No Detail Log, Without Success Message, Complete).
  • Log Output – specifies the type of output log (List, Application Log, List and Application Log).
  • Archiving Session Note – description of the archived content.
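Taken together, the selection parameters amount to: consider the requests inside the selection date range, require a minimum age, and archive only if enough requests qualify. The Python sketch below illustrates that combination; the data structure and the month arithmetic (30 days per month) are simplifications, not the behaviour of RSREQARCH_WRITE:

from datetime import date, timedelta

def archivable_requests(requests, older_than_months, minimum_number,
                        date_from=date.min, date_to=date.max):
    """Return the request IDs that would be archived, or [] if below the minimum number."""
    cutoff = date.today() - timedelta(days=30 * older_than_months)   # crude month approximation
    candidates = [
        rid for rid, load_date in requests
        if date_from <= load_date <= date_to and load_date < cutoff
    ]
    return candidates if len(candidates) >= minimum_number else []

reqs = [(1, date(2014, 6, 1)), (2, date(2014, 7, 1)), (3, date.today())]
print(archivable_requests(reqs, older_than_months=3, minimum_number=2))
# -> [1, 2]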




Figure 125: Archiving of Request Administration Data – Settings detail


There are several options for starting the Archiving of Request Administration Data. For more information, refer to the Execute and Schedule sections of this user documentation.


Recommendation


To avoid unnecessary reloading of data from the archive, we recommend that the user should only archive administration data from requests that are more than three months old and will probably not be edited again.


Warning


  1. After an upgrade from SAP BW 2.x or 3.x to SAP NetWeaver 7.x, the reports RSSTATMAN_CHECK_CONVERT_DTA and RSSTATMAN_CHECK_CONVERT_PSA are to be executed at least once for all objects. We recommend executing these reports in the background.
  2. Because of the different selection screens on various BW releases, it is not possible to use inheritance of settings and execution. It is highly recommended to prepare a specific settings ID for each landscape node.


Archiving of BI Authorization Protocols

Created by: SAP/DVD
Underlying SAP report: RSECPROT_ARCH_WRITE
Client-dependent: no
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes

Introduction


The table RSECLOG is used as storage for authorization log files. These are created when the authorization protocol is switched on in RSECPROT. This in turn can affect the performance.
Authorizations Log files can be archived. This results in improved performance of the load monitor and the monitor for load processes. It also allows for the freeing up tablespace on the database.
This OutBoard Housekeeping task encapsulates the SAP NetWeaver data archiving concept by using SARA transaction and WRITE action. The archiving object RSECPROT contains information about which database tables are used for archiving. In runtime, the report RSECPROT_ARCH_WRITE is executed.


Step list


In the main OutBoard Housekeeping menu select "Archiving of BI Authorization Protocols – Settings" under the Business Warehouse/Archiving Tasks.
The settings are maintained the same way as standard SAP housekeeping tasks. For more information, refer to the Creating a Settings ID section of this user documentation.
In the variant screen, the user can set the criteria for the logs to be archived.


  • UTC time stamp in short form – refers to the time the logs were created.
  • Executing User – user name of the executing user.
  • Restricted User – user name of the restricted user.
  • P_AR_ILM – ILM action: Archiving – the system copies only the data for which an archivability check was successfully performed to the archive files.
  • P_SNAP – ILM action: Snapshot – the selected data is copied to the archive files without undergoing an additional check. The files created with this option can be stored on an external storage system.
  • P_DEST – ILM action: Data Destruction – only data that can be destroyed according to the rules stored in ILM is added to the archive files. The user cannot store the archive files created with this option in an external storage system. The user can use a deletion program to delete the data copied to the archive files from the database. No archive information structures are created as a result. When the data has been deleted from the database, the deletion also removes the archive files that were created as well as the administration information relating to the files.
  • Test Mode / Production Mode – specifies in which mode the report is executed. Test mode makes no changes in the database.
  • Delete with Test Variant – if set, the delete program is started with the test mode variant. The program generates statistics about the table entries that would be deleted from the database in production mode.
  • Detail Log – specifies the information contained in the detail log (No Detail Log, Without Success Message, Complete).
  • Log Output – specifies the type of output log (List, Application Log, List and Application Log).
  • Archiving Session Note – description of the archived content.


For more detailed information, see the contextual help.


Figure 126: Archiving of BI Authorization Protocols – Settings detail


There are several options for starting the Archiving of BI Authorization Protocols. For more information, refer to the Execute and Schedule sections of this user documentation.


Recommendation


We recommend archiving and deleting BI Authorization protocols when:


  • Write step is cancelled with EXPORT_TOO_MUCH_DATA dump when writing to the authorization log;
  • RSECLOG table is becoming quite large;
  • Authorization logs are not required anymore;
  • BW reports have suddenly been suffering abnormal delays;
  • Traces showed table RSECLOG as the most accessed table.


Related Notes


1592528


Archiving of BI Authorization Change Logs

Created by: SAP/DVD
Underlying SAP report: RSEC_CHLOG_ARCH_WRITE
Client-dependent: no
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: yes
Required BW version: 7.30 at least


Introduction


As of BW 7.30, it is possible to archive the change log of analysis authorizations using the archiving object RSEC_CHLOG.
This OutBoard Housekeeping task encapsulates the SAP NetWeaver data archiving concept, by using SARA transaction and WRITE action. The archiving object RSEC_CHLOG contains information about which database tables are used for archiving. In runtime, the report RSEC_CHLOG_ARCH_WRITE is executed.


Step list


In the main OutBoard Housekeeping menu select "Archiving of BI Authorization Change Logs – Settings" under the Business Warehouse/Archiving Tasks.
The settings are maintained the same way as standard SAP housekeeping tasks. For more information, refer to the Creating a Settings ID section of this user documentation.
In the variant screen, the user can set the criteria for the requests to be archived.


  • Creation Date – refers to the time the logs were created.
  • Create Archive with Check – ILM action: Archiving – the system copies only the data for which an archivability check was successfully performed to the archive files.
  • Create File w/o Check – ILM action: Snapshot – the selected data is copied to the archive files without undergoing an additional check. The files created with this option can be stored on an external storage system.
  • File with Check w/o SAP-AS – ILM action: Data Destruction – only data that can be destroyed according to the rules stored in ILM is added to the archive files. The user cannot store the archive files created with this option in an external storage system. The user can use a deletion program to delete the data copied to the archive files from the database. No archive information structures are created as a result. Once the data has been deleted from the database, the deletion also removes the archive files that were created as well as the administration information relating to the files.
  • Test Mode / Live Mode – specifies in which mode the report is executed. Test mode makes no changes in the database.
  • Delete with Test Variant – if set, the delete program is started with the test mode variant. The program generates statistics about the table entries that would be deleted from the database in production mode.
  • Detail Log – specifies the information contained in the detail log (No Detail Log, Without Success Message, Complete).
  • Log Output – specifies the type of output log (List, Application Log, List and Application Log).
  • Archiving Session Note – description of the archived content.


For more detailed information, see the contextual help.


Figure 127: Archiving of BI Authorization Change Logs – Settings detail


There are several options for starting the Archiving of BI Authorization Change Logs. For more information, refer to the Execute and Schedule sections of this user documentation.


Recommendation


We recommend archiving the data before making a system copy, as this involves a large amount of data that is not actually needed in the new system.


Warning


This task works on BW 7.30 systems and higher; both central and satellite systems must meet this requirement. If the central system does not meet the requirement, it is not possible to create a settings variant. If satellite systems do not meet the requirement, the report RSEC_CHLOG_ARCH_WRITE is not found and is not executed.


Archiving of Point-of-Sale Aggregates

Created by: SAP/DVD
Underlying SAP report: /POSDW/ARCHIVE_WRITE_AGGREGATE
Client-dependent: no
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: no

Introduction


While using the POS solution, a large amount of data can be generated, which increases the data volume rapidly. POS transaction data can be summarized using aggregates, which reduces the data size significantly. However, regular archiving of this data remains very important.
This OutBoard Housekeeping task encapsulates the SAP NetWeaver data archiving concept by using SARA transaction and WRITE action. The archiving object /POSDW/AGG contains information about which database tables are used for archiving. In runtime, the report /POSDW/ARCHIVE_WRITE_AGGREGATE is executed.


Step list


In the main OutBoard Housekeeping menu select "Archiving of Point-of-Sale Aggregates – Settings" under the Business Warehouse/Archiving Tasks.
The settings are maintained the same way as standard SAP housekeeping tasks. For more information, refer to the Creating a Settings ID section of this user documentation.
In the variant screen, the user can set the criteria for the requests to be archived.


  • Store – indicator for stores.
  • Aggregate Number – POS aggregate number.
  • Aggregate Level – defines how the aggregated data is structured in the database.
  • Maximum Posting Date – day (in internal format YYYYMMDD) to which a POS transaction is assigned.
  • Test Mode / Production Mode – specifies in which mode the report is executed. Test mode makes no changes in the database.
  • Detail Log – specifies the information contained in the detail log (No Detail Log, Without Success Message, Complete).
  • Log Output – specifies the type of output log (List, Application Log, List and Application Log).
  • Archiving Session Note – description of the archived content.




For more detailed information, see the contextual help.


Figure 128: Archiving of Point-of-Sale Aggregates – Settings detail


There are several options for starting the Archiving of Point-of-Sale Aggregates. For more information, refer to the Execute and Schedule sections of this user documentation.


Prerequisites


Prior to the POS aggregates archiving, all POS transactions for the selected Store/Date combination must have one of the following statuses: Completed, Rejected or Canceled. It is also assumed that this data no longer needs to be available for SAP POS DM.


Recommendations


We recommend running POS aggregates archiving regularly. Frequency of archiving is system specific.


Warning


This task works on systems where the SAP Point of Sale solution is installed; both central and satellite systems must meet this requirement. If the central system does not meet the requirement, it is not possible to create a settings variant. If satellite systems do not meet the requirement, the report /POSDW/ARCHIVE_WRITE_AGGREGATE is not found and is not executed.


Archiving of Point-of-Sale Transactions

Created by: SAP/DVD
Underlying SAP report: /POSDW/ARCHIVE_WRITE
Client-dependent: no
Settings as variant: yes
Support for Recycle bin: no
Recommended for HANA pre-migration housekeeping: no

Introduction


While using the POS solution, a large amount of data can be generated, which increases the data volume rapidly. Regular archiving of this data is very important.
This OutBoard Housekeeping task encapsulates the SAP NetWeaver data archiving concept, by using SARA transaction and WRITE action. The archiving object /POSDW/TL contains information about which database tables are used for archiving. In runtime, the report /POSDW/ARCHIVE_WRITE is executed.


Step list


In the main OutBoard Housekeeping menu select "Archiving of Point-of-Sale Transactions – Settings" under the Business Warehouse/Archiving Tasks.
The settings are maintained the same way as standard SAP housekeeping tasks. For more information, refer to the Creating a Settings ID section of this user documentation.
In the variant screen, the user can set the criteria for the requests to be archived.


  • Store – indicator for stores.
  • Posting Date – day (in internal format YYYYMMDD) to which a POS transaction is assigned.
  • Test Mode / Production Mode – specifies in which mode the report is executed. Test mode makes no changes in the database.
  • Detail Log – specifies the information contained in the detail log (No Detail Log, Without Success Message, Complete).
  • Log Output – specifies the type of output log (List, Application Log, List and Application Log).
  • Archiving Session Note – description of the archived content.








For more detailed information, see the contextual help.


Figure 129: Archiving of Point-of-Sale Transactions – Settings detail


There are several options to start the Archiving of Point-of-Sale Transactions. For more information, refer to the Execute and Schedule sections of this user documentation.


Prerequisites


Before the POS transactions archiving is processed, all POS transactions for the selected Store/Date combination must have one of the following statuses: Completed, Rejected, or Canceled. It is also assumed that this data no longer needs to be available in SAP POS DM.


Recommendations


We recommend running the POS transactions archiving regularly. The archiving frequency is system-specific.


Note


If you are using SAP POS DM implemented on BW powered by SAP HANA, do not use the archiving object /POSDW/TL; use /POSDW/TLF instead.


Warning


This task works only on systems where the SAP Point of Sale solution is installed; both the central and the satellite systems must meet this requirement. If the central system does not meet it, the settings variant cannot be created. If a satellite system does not meet it, the report /POSDW/ARCHIVE_WRITE is not found and is not executed.


Temporary Database Objects Removing

Created by:SAP
Underlying SAP report:SAP_DROP_TMPTABLES

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


In SAP BW, temporary database objects (such as tables or views) are created dynamically during query execution or other processes that read data from BW InfoProviders.
With BI 7.x these objects are always created in the ABAP Dictionary.
Their names start with the '/BI0/0' prefix, followed by one alphanumeric character for the object type and an eight-digit numerical identification:

  • /BI0/01 – temporary tables used in connection with query processing. They are used once; the system usually deletes them automatically (unless the process dumped) and the names are reused.
  • /BI0/02 – tables used for external hierarchies.
  • /BI0/03 – no longer used.
  • /BI0/06 – similar to '/BI0/01', but the tables are not deleted from the SAP DDIC.
  • /BI0/0D – tables used in connection with the open hub functionality. They must not be deleted.
  • /BI0/0P – tables that occur in the course of an optimized pre-processing involving many tables. These tables can be reused immediately after being released.

With BI 7.x, temporary table management has been improved, and the report SAP_DROP_TMPTABLES is provided to remove temporary objects.
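Before running SAP_DROP_TMPTABLES it can be useful to gauge how many temporary objects currently exist. A minimal ABAP sketch reading the ABAP Dictionary table DD02L (illustrative only; it is not part of the task itself):

  " Illustrative check: count temporary query tables registered in the DDIC.
  DATA lv_count TYPE i.

  SELECT COUNT( * ) FROM dd02l INTO lv_count
    WHERE tabname LIKE '/BI0/01%'
       OR tabname LIKE '/BI0/06%'.

  WRITE: / 'Temporary /BI0/01* and /BI0/06* tables in the DDIC:', lv_count.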


Recommendations


Running this report on a regular basis is not recommended. It might be useful to run it manually in exceptional situations when many temporary objects have been created.


Warning


The SAP_DROP_TMPTABLES report deletes all objects (except for the temporary hierarchy tables) without taking into account whether or not they are still in use. For example, this can result in terminations of queries, InfoCube compression and data extraction.


Related Notes


1139396
514907


Process Chain Logs and Assigned Process Logs Deletion

Created by:SAP
Underlying SAP report:RSPC_LOG_DELETE

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


Process chains are used in a BW landscape to automate loading sequences. Multiple process chains may be running at any given time, and their logs are stored in the database for analysis. Keeping old logs uses up disk space, so older process chain execution logs should be deleted.
Process chains are executed at different frequencies – daily, weekly, monthly, on a specific calendar day, etc. The following tables hold information about process chain logs: RSPCLOGCHAIN and RSPCPROCESSLOG.
The RSPC_LOG_DELETE report is designed to delete process chain logs.
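To judge whether a cleanup is due, the current size of the two log tables can be checked with a simple count, for example with the ABAP sketch below (illustrative only; the actual deletion should be done with RSPC_LOG_DELETE):

  " Illustrative size check of the process chain log tables.
  DATA: lv_chain_logs   TYPE i,
        lv_process_logs TYPE i.

  SELECT COUNT( * ) FROM rspclogchain   INTO lv_chain_logs.
  SELECT COUNT( * ) FROM rspcprocesslog INTO lv_process_logs.

  WRITE: / 'RSPCLOGCHAIN entries:  ', lv_chain_logs,
         / 'RSPCPROCESSLOG entries:', lv_process_logs.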


Recommendation


Our recommendation is to delete process chain logs according to the execution frequency of the chains: for daily or weekly execution, delete logs older than 3 months; for monthly or quarterly execution, delete logs older than 6 months; for other frequencies, set the retention as required.


Process Chain Instances Deletion

Created by:SAP
Underlying SAP report: RSPC_INSTANCE_CLEANUP

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


An instance is generated for each process chain execution and is stored in tables RSPCINSTANCE and RSPCINSTANCET.


Steplist


The user can provide the following input information to the settings:


  • "Older than" field – entries older than the date will be deleted;
  • If "without corresponding chain run" option is checked, the variant entry without execution of chain run will be deleted;
  • If "delete log" option is checked, these logs will be deleted.


Recommendation


Execute the task with the appropriate settings to delete the entries from the DB tables.


Automatic Deletion of Request Information in Master Data/Text Provider

Created by:SAP
Underlying SAP report:RSSM_AUTODEL_REQU_MASTER_TEXT

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no

Introduction


Master data InfoProviders and text InfoProviders contain request information. It may be useful to keep this request information limited, that is, to delete old requests (the administration information) from master data InfoProviders and text InfoProviders to improve performance and decrease main memory consumption.
The report RSSM_AUTODEL_REQU_MASTER_TEXT deletes obsolete request information from master data InfoProviders and text InfoProviders.


Recommendation


Our recommendation is to schedule the report periodically or in a process chain.


Unused Master Data Deletion

Created by:SAP
Underlying SAP report:  RSDMDD_DELETE_BATCH

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


In case selective master data deletion is required, there are two options:
At single record level – using this task in master data maintenance and/or deletion mode.
Master data can be deleted only if:


  • No transaction data exists for the master data;
  • The master data is not used as an attribute of an InfoObject;
  • There are no hierarchies for this master data.


Steplist


Fill in all fields to select the required master data; a hedged execution sketch follows the parameter list below:


  • P_IOBJNM – Name of the InfoObject
  • P_DELSID – If checked, SIDs will be deleted
  • P_DELTXT – If checked, the texts will be deleted
  • P_SIMUL – If checked, simulate only
  • P_LOG – If checked, log entries are written
  • P_PROT – If checked, a detailed usage protocol is produced
  • P_SMODE – Search mode for the MD where-used check (default "O" – "Only One Usage per Value")
  • P_CHKNLS – If checked, usages are also searched for in NLS
  • P_STORE – If checked, store the where-used list (changes related to enhancement)
  • P_REUSEA – If checked, reuse the where-used list (changes related to enhancement, all)
  • P_REUSEU – If checked, reuse the where-used list (changes related to enhancement, used)
  • P_REUSEN – If checked, reuse the where-used list (changes related to enhancement, unused)
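As a hedged illustration of how these parameters fit together, the sketch below submits the report for a single InfoObject in simulation mode. The InfoObject 0MATERIAL is just an example, and the 'X' values assume standard checkbox flags; always verify the actual selection screen in SE38 before a productive run.

  " Hedged sketch: simulation run for one InfoObject, no data is deleted.
  " '0MATERIAL' is an example; checkbox values assume standard 'X' flags.
  SUBMIT rsdmdd_delete_batch
    WITH p_iobjnm = '0MATERIAL'   " InfoObject whose unused master data is checked
    WITH p_simul  = 'X'           " simulate only
    WITH p_log    = 'X'           " write log entries
    WITH p_prot   = 'X'           " produce a detailed usage protocol
    AND RETURN.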


Error Handling Log Analysis

Created by:SAP
Underlying SAP report:RSB_ANALYZE_ERRORLOG

Client-dependent:

no
Settings as variant:(no settings)
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes


Introduction


During Data Transfer Processes (DTPs), different kinds of errors can occur. The RSBERRORLOG table (Logs for Incorrect Records) stores error-handling logs for reasons such as warnings created during master data uploads for duplicate records and single-record error messages from customer-specific transformation routines.
As a result, the RSBERRORLOG table can grow very significantly and can affect overall system performance.
To identify which DTPs are responsible for the table growth, the report RSB_ANALYZE_ERRORLOG is available. It provides an overview of all DTP error stack requests and the number of records marked with errors.


Recommendation


Our recommendation is to run RSB_ANALYZE_ERRORLOG (in background mode) to learn which DTPs create the most erroneous records in the RSBERRORLOG table. Afterwards, we recommend running the report RSBM_ERRORLOG_DELETE to reduce the size of the RSBERRORLOG table. This should be done monthly.


Related Notes


1095924


Error Handling Log Deletion

Created by:SAP
Underlying SAP report:RSBM_ERRORLOG_DELETE

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


During Data Transfer Processes (DTPs), different kinds of errors can occur. The RSBERRORLOG table (Logs for Incorrect Records) stores error-handling logs for the following and other reasons:


  • Warnings that are created during master data upload for duplicate records.
  • Single error record messages in customer-specific transformation routines.


Thus, the RSBERRORLOG table can grow very significantly and can affect overall system performance.
The RSBM_ERRORLOG_DELETE report helps to reduce the size of the RSBERRORLOG table.


Recommendation


Our recommendation is to run the analysis report RSB_ANALYZE_ERRORLOG (in background mode) first to learn which DTPs create the most erroneous records in the RSBERRORLOG table. Afterwards, we recommend running the RSBM_ERRORLOG_DELETE report to reduce the size of the RSBERRORLOG table. This should be done monthly; a hedged scheduling sketch follows.
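The monthly run can be set up in transaction SM36, or programmatically with the standard JOB_OPEN / JOB_CLOSE pattern sketched below. The job name and the variant 'Z_ERRLOG_DEL' are hypothetical; the variant must match the report's own selection screen. For true monthly recurrence, define the start date and period in the job instead of starting it immediately.

  " Hedged sketch: run the deletion report as a background job.
  DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_RSBERRORLOG_CLEANUP',
        lv_jobcount TYPE tbtcjob-jobcount.

  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname  = lv_jobname
    IMPORTING
      jobcount = lv_jobcount.

  SUBMIT rsbm_errorlog_delete
    USING SELECTION-SET 'Z_ERRLOG_DEL'     " hypothetical variant name
    VIA JOB lv_jobname NUMBER lv_jobcount
    AND RETURN.

  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobname   = lv_jobname
      jobcount  = lv_jobcount
      strtimmed = 'X'.                     " start immediately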


Related Notes


1095924


PSA Requests Error Logs Deletion

Created by:SAP
Underlying SAP report:RSSM_ERRORLOG_CLEANUP

Client-dependent:

no
Settings as variant:(no settings)
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


If the relevant requests are deleted from the PSA, in most cases the system automatically deletes the PSA error logs. Otherwise, the program RSSM_ERRORLOG_CLEANUP can be used to delete them.


PSA Partition Check

Created by:SAP
Underlying SAP report: RSAR_PSA_PARTITION_CHECK

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


This task can be used in cases where the PSA partitioning logic ignores the PARTNO field, where the error stacks for DTPs are created with a global index, where the global index is created with the key fields of the DataSource, or where the active table of a write-optimized DSO is partitioned even though there is a global index. See Related Notes for more information.


Related Notes


1150724


PSA Partno. Correction

Created by:SAP
Underlying SAP report:SAP_PSA_PARTNO_CORRECT

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


Data may be written with an incorrect PARTNO = '0' into the wrong partition without any checks. When a deletion run is performed on the PSA/changelog, dropping the lowest existing partition then fails. This task helps to repair the requests written into the incorrect partition and re-assigns them to the correct partition. See Related Notes for more information.
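Whether a particular PSA table is affected can be estimated by counting the records written into partition 0, as in the hedged ABAP sketch below. The technical PSA table name '/BIC/B0000123000' is hypothetical and must be looked up first (for example via the PSA tree).

  " Hedged check: count records written with PARTNO = 0 in one PSA table.
  " The technical table name is a hypothetical example.
  DATA: lv_psa_table TYPE tabname VALUE '/BIC/B0000123000',
        lv_wrong     TYPE i.

  SELECT COUNT( * ) FROM (lv_psa_table) INTO lv_wrong
    WHERE partno = 0.

  WRITE: / 'Records in partition 0:', lv_wrong.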


Related Notes


1150724


PSA Directory Cleanup

Created by:SAP
Underlying SAP report:RSAR_PSA_CLEANUP_DIRECTORY

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes


Introduction


This task checks the PSA tables against the directory entries and detects partition-related inconsistencies. It is useful in scenarios where PSA requests are deleted from the administrative data but not from the database, where requests are no longer visible in the PSA tree but still exist in the corresponding PSA table, or where all requests in a partition are deleted but the partition is not dropped. It also checks whether data has been deleted from, or written into, incorrect partitions. See Related Notes for more information.


Related Notes


1150724


PSA Definition Cleanup

Created by:SAP
Underlying SAP report:RSAR_PSA_CLEANUP_DEFINITION

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


When requests are deleted from the PSA, or when terminations occur while generating the transfer rules, reference-free PSA metadata objects may in some circumstances be generated or left behind. The affected tables then remain in a partially inconsistent state; therefore, the DDIC tool displays these tables as incorrect.


Related Notes


1150724


Query Consistency Check

Created by:SAP
Underlying SAP transaction: RSZTABLES

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


The report ANALYZE_RSZ_TABLES is designed as a check tool for detecting and solving different types of inconsistencies in the main query definition database tables.
The working mode of the task has changed from the previous version: it no longer runs in one-system mode and therefore cannot be executed across the whole landscape. However, the change offers more detailed output, which makes it possible to drill down further into the output subsections.


Warning


This task cannot be executed on a system connected via an RFC user of type SYSTEM.


Related Notes


792779


The F Fact Table Unused/Empty Partition Management

Created by:SAP
Underlying SAP report: SAP_DROP_EMPTY_FPARTITIONS

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes


Introduction


Each loading process of transaction data into an InfoCube generates a new request. For each request, a separate partition is created in the F fact table. When the F table is condensed into the E table of a cube, the partition corresponding to this request is deleted after the condensing has been successful and a partition with this name is never again created. In addition, the entry corresponding to this request is deleted in the packet dimension table. Selective deletion from the InfoCube can remove the data of the entire request without removing the accompanying partitions. Empty partitions are those that no longer contain any data records. They probably result from removing the data from the InfoCube via a selective deletion. Unusable partitions might still contain data; however, no entry for this request is contained in the packet dimension table of the InfoCube. The data is no longer taken into consideration with reporting. The remaining partitions are created if a condenser run has not been correctly ended.


Recommendation


We recommend using the report SAP_DROP_EMPTY_FPARTITIONS to display empty or unusable partitions of the F fact tables of an InfoCube and, if necessary, to remove them.


Zero Elimination After Compression

Created by:SAP
Underlying SAP report:RSCDS_NULLELIM

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: no


Introduction


InfoCube compression often results in records in the compressed table in which the key figures have the value 0. If these key figures have the aggregation behavior SUM, such zero-value records can be deleted. This can be achieved by selecting the "Zero elimination" option during compression or – if this option was not used – by running the RSCDS_NULLELIM report.
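The potential effect can be estimated by counting the compressed records whose key figures are all zero, as in the hedged sketch below. The E fact table '/BIC/EZSALES' and the key figure columns are hypothetical examples; the actual elimination should be performed with the compression option or RSCDS_NULLELIM, not with direct SQL.

  " Hedged estimate: how many compressed records carry only zero key figures.
  " Table and key figure column names are hypothetical examples.
  DATA: lv_efact TYPE tabname VALUE '/BIC/EZSALES',
        lv_where TYPE string  VALUE `/BIC/ZAMOUNT = 0 AND /BIC/ZQUANT = 0`,
        lv_zero  TYPE i.

  SELECT COUNT( * ) FROM (lv_efact) INTO lv_zero
    WHERE (lv_where).

  WRITE: / 'All-zero records found:', lv_zero.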


Notes


The elimination of zero values may result in orphaned dimension entries; therefore, regular dimension trimming is required when using zero elimination.


Recommendation


We recommend running the report RSCDS_NULLELIM after the cube compression task if the "Zero Elimination" flag was not set.


Cluster Table Reorganization

Created by:SAP
Underlying SAP report:RSRA_CLUSTER_TABLE_REORG

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no



Introduction


The cluster table RSIXWWW can contain large datasets that can no longer be accessed. This results in disk space bottlenecks.


Recommendation


We recommend running the program RSRA_CLUSTER_TABLE_REORG regularly to delete the entries in the table RSIXWWW that are no longer required.


BEx Web Application Bookmarks Cleanup

Created by:SAP
Underlying SAP transaction:RSWR_BOOKMARK_REORG

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


Bookmarks are saved navigational states of a Web application. They are created in the BI system when the Export and Distribute -> Bookmark function is used, when an ad hoc analysis is saved in BEx Web Analyzer, or when Web applications are personalized.
This task enables the reorganization of bookmarks that result from Web templates in SAP NetWeaver 7.0 format.


Warning


This task cannot be executed on a system connected via an RFC user of type SYSTEM.


BEx Web Application 3.x Bookmarks Cleanup

Created by:SAP
Underlying SAP transaction:RSRD_ADMIN_BOOKMARKS_3X

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


Bookmarks are saved navigational states of a Web application. They are created in the BI system when the Export and Distribute -> Bookmark function is used, when an ad hoc analysis is saved in BEx Web Analyzer, or when Web applications are personalized.
This task enables the reorganization of bookmarks that result from Web templates in SAP NetWeaver 3.x format.


Warning


This task cannot be executed on a system connected via an RFC user of type SYSTEM.


BEx Broadcaster Bookmarks Cleanup

Created by:SAP
Underlying SAP report:RSRD_BOOKMARK_REORGANISATION

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


A bookmark ID is the identification number (ID) for a saved navigational state of a Web application. A view ID is the identification number (ID) for a saved navigational state of a query. These IDs are generated when online links for information broadcasting are created.
With this task, it is possible to reorganize and delete bookmark IDs and view IDs that were created by the system for information broadcasting and that are no longer needed.


Recommendation


To automate the reorganization of bookmark and view IDs, this task can be scheduled to run periodically in the background.


Jobs without Variants Deletion

Created by:SAP
Underlying SAP report:RS_FIND_JOBS_WITHOUT_VARIANT

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


Sometimes the error message "Variant xxx does not exist" occurs. This is due to inconsistencies that happened during a system or client copy or a transport, or to a call-back happening in the wrong client.


Recommendation


Use this task according to the related notes to repair the inconsistencies or the call-back/client mismatch.


Related Notes


1455417


Delete BW RSTT Traces

Created by:SAP
Underlying SAP report:RSTT_TRACE_DELETE

Client-dependent:

no
Settings as variant:yes
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: yes

Introduction


Sometimes RSTT traces cannot be deleted systematically. In such cases, this task can be used to correct the problem.


Related Notes


1142427


ERP TASKS


Change Documents Cleanup

Created by:DVD

Client-dependent:

yes
Settings as variant:no
Support for Recycle bin:yes

Introduction


Change documents (changedocs) record changes to business objects. During archiving, they are archived together with their main objects. In some other circumstances (manual deletion, program failure, inconsistent conversion/migration, etc.), however, changedocs can stay in the system for objects that no longer exist.
SAP provides only one report for deleting phantom change documents (SD_CHANGEDOCUMENT_REORG), and it covers SD documents only.
In OutBoard Housekeeping, the "Change Documents Cleanup" activity checks and deletes phantom change documents for a wider range of object types:


  • Material data
  • Customer master
  • Vendor master
  • Purchasing document
  • Sales document
  • Billing document
  • SD document
  • Conditions
  • Handling unit


Step list


In the main OutBoard Housekeeping menu select "Change Documents Cleanup – Settings" under the ERP System Tasks.
The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation. Should the user create new settings, the "Description" field needs to be filled.


 

Figure 130: Change Docs Cleanup – Settings detail


To run this task in test mode, mark the test mode checkbox; the potential cleanup results are then displayed in the logs.
Once the Settings ID for Change Documents Cleanup is specified, the user may run the activity from the Main menu. There are several options to start the activity. For more information, refer to the Execute and Schedule sections of this user documentation.


Longtexts Cleanup

Created by:DVD

Client-dependent:

yes
Settings as variant:no
Support for Recycle bin:yes


Introduction


During archiving, longtexts are archived together with their main objects. In some other circumstances (manual deletion, program failure, inconsistent conversion/migration, etc.), however, the texts can stay in the system for objects that no longer exist.
SAP provides the report RVTEXTE for deleting phantom texts; the Housekeeping task additionally supports the Recycle bin.
As with change documents, in OutBoard Housekeeping the "Longtexts Cleanup" activity checks and deletes phantom longtexts for the following object types:


  • Material data
  • Customer master
  • Vendor master
  • Sales document
  • Purchasing document


Step list


In the main OutBoard Housekeeping menu select "Longtexts Cleanup – Settings" under the ERP System Tasks.
The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation. Should the user create new settings, the "Description" field needs to be filled.


 Figure 131: Longtexts Cleanup – Settings detail


It is possible to run this task in test mode. Mark this option to display the potential cleanup results in the logs.
Once the Settings ID for Longtexts Cleanup is specified, the user may run the activity from the Main menu. There are several options to start the activity. For more information, refer to the Execute and Schedule sections of this user documentation.


Marking of Unused Customers / Vendors


A common feature of ERP systems is that they collect data – both master data and transactional data. Over the years a system is in use, part of the master data (MD) becomes obsolete: specific vendors and customers are no longer business partners, materials are replaced by others, and so on.
This MD lifecycle makes a portion of the master data outdated, so the space it occupies in the database is allocated uselessly.
There is no straightforward way to delete outdated MD, because over time a large amount of transactional data bound to the MD accumulates. For legal reasons and for database consistency, these documents cannot stay orphaned in the database without the corresponding MD. Therefore, a good and safe MD housekeeping approach consists of the following steps:


  1. The list of MD is selected (as a variant).
  2. Selected MD that, together with its bound documents, has been untouched for a reasonable period is marked as blocked and 'for deletion'.
  3. The documents bound to the selected MD are archived and deleted (SARA transaction). The same happens with the MD itself.


Marking of Unused Customers


The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation. Should the user create new settings, the "Description" field needs to be filled in, and the user must select whether or not the settings ID will be run in test mode.
The user may choose to set select conditions for marking unused customers:


  • Customer ID
  • Inactive longer than (Days)



Figure 132: Marking of Unused Customers – Settings detail


Once the Settings ID for Marking of Unused Customers is specified, the user may run the activity from the Main menu. There are several options to start the activity. For more information, refer to the Execute and Schedule sections of this user documentation.
Note: Marking of unused customers has no visible output; the customers that match the selection conditions specified in the Settings ID are flagged as 'marked for deletion'. This flag is then used during SARA archiving with the object FI_ACCRECV (customer master data) and the 'Consider Deletion Indicator' option checked.
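The result of the marking can be checked on the customer master itself: customers flagged for deletion typically carry the central deletion flag KNA1-LOEVM (verify in your release which flag the task actually sets). A minimal illustrative ABAP check:

  " Illustrative check: list customers carrying the central deletion flag.
  DATA: lt_kunnr TYPE TABLE OF kna1-kunnr,
        lv_count TYPE i.
  FIELD-SYMBOLS <lv_kunnr> TYPE kna1-kunnr.

  SELECT kunnr FROM kna1
    INTO TABLE lt_kunnr
    WHERE loevm = 'X'.

  DESCRIBE TABLE lt_kunnr LINES lv_count.
  WRITE: / 'Customers flagged for deletion:', lv_count.

  LOOP AT lt_kunnr ASSIGNING <lv_kunnr>.
    WRITE: / <lv_kunnr>.
  ENDLOOP.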


Marking of Unused Vendors


The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation. Should the user create new settings, the "Description" field needs to be filled in, and the user must select whether or not the settings ID will be run in test mode.
The user may choose to set select conditions for marking unused vendors:


  • Vendor ID
  • Inactive longer than (Days)


Figure 133: Marking of Unused Vendors – Settings detail


Once the Settings ID for Marking of Unused Vendors is specified, the user may run the activity from the Main menu. There are several options to start the activity. For more information, refer to the Execute and Schedule sections of this user documentation.
Note: Marking of unused vendors has no visible output; the vendors that match the selection conditions specified in the Settings ID are flagged as 'marked for deletion'. This flag is then used during SARA archiving with the object FI_ACCPAYB (vendor master data) and the 'Consider Deletion Indicator' option checked.


Schedule Manager Tables Cleanup

Created by:DVD

Client-dependent:

yes
Settings as variant:no
Support for Recycle bin:no

Introduction


The Schedule Manager (transaction SCMA) enables you to monitor periodic tasks, such as period-end closings in overhead cost controlling. In the Monitor (transaction SCMO) you can display information about all scheduled jobs. The Monitor is a component of the Schedule Manager. The tool saves the information in its own tables (SM*) such as SMMAIN (main information about the entry), SMPARAM (processing parameters) and SMSELKRIT (selection criteria). These tables are prone to growing very large.


Recommendation


You can keep the size of Schedule Manager tables down by regularly deleting monitoring data that is no longer used. Once this data is deleted, you will not be able to monitor any more jobs that have already run. Therefore, it is essential that you only delete data that is no longer needed for monitoring, such as period-end closing data that is older than one year.


Related Notes


803641
Authorisations: authority object 'B_SMAN_WPL'


CRM TASKS


BDocs Deletion

Created by:DVD

Client-dependent:

yes
Settings as variant:no
Support for Recycle bin:no



Introduction


BDocs are CRM-related intermediate documents. Unlike IDocs, which are asynchronous, BDocs can be used in both synchronous and asynchronous modes.
BDocs are used in CRM for data exchange and can be quite complex compared to IDocs. They are transferred through qRFC or tRFC.
The tables for the business document message flow and the Middleware trace can increase in size considerably, which may cause performance issues during the processing of BDoc messages.
This task extends the standard SAP report SMO6_REORG.


Step list


In the main OutBoard Housekeeping menu select "BDocs Deletion – Settings" under the CRM System Tasks.
The user can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, refer to the Creating a Settings ID section of this user documentation. Should the user create new settings, the "Description" field needs to be filled.


Figure 134: BDocs Deletion – Settings detail


Once the Settings ID for BDocs Deletion is specified, the user may run the activity from the Main menu. There are several options to start the activity. For more information, refer to the Execute and Schedule sections of this user documentation.


Recommendation


We recommend scheduling this task on a regular basis and setting the delete process to remove BDoc messages and trace data older than 7 days.


Related Notes


206439


Delete Inactive Versions of Products


Created by:SAP
Underlying SAP report:COM_PRODUCT_DELETE_INACTIV

Client-dependent:

yes
Settings as variant:no
Support for Recycle bin:no


Introduction


The task reorganizes the product master: it deletes all inactive versions of products with status I1100 = DELETED.


Recommendation


This task should be scheduled daily.


OutBoard Housekeeping SYSTEM TASKS

Deletion of old runID

Created by:DVD

Client-dependent:

no
Settings as variant:no
Recommended for HANA pre-migration housekeeping: no

Introduction

Datavard OutBoard Housekeeping uses multiple tables to handle the execution of tasks. Information such as when and for how long a task was running is not needed after a certain period of time.

In OutBoard Housekeeping it is possible to delete already finished runIDs that are older than a certain date.

Recommendation

Our recommendation is to delete runIDs on a regular basis.


RecycleBin Cleanup

Created by:DVD

Client-dependent:

no
Settings as variant:no
Recommended for HANA pre-migration housekeeping: no

Introduction


The RecycleBin is an essential part of OutBoard Housekeeping. For specific tasks it temporarily stores deleted data, and if necessary this data can be restored. The number of days for which the data is kept in the RecycleBin is called the retention time.
When the retention time expires, the data is permanently deleted from the RecycleBin. This process is performed by the scheduled job FS_SSYSRECBIN_CUDUMMY_SETT.
The RecycleBin cleanup task doesn't require any specific settings.
It can be executed the same way as all other tasks – directly from the System view or via scheduling from the Activity view (for a single system or a landscape branch).


Recommendations


To ensure that the RecycleBin cleanup doesn't overload the system, it is recommended to run it daily outside of business hours (these are locally dependent).


RecycleBin Size Recalculation

Created by:DVD

Client-dependent:

no
Settings as variant:no
Recommended for HANA pre-migration housekeeping: no

Introduction


It can happen that the RecycleBin size information is not accurate. The size recalculation task updates this information.
The RecycleBin size recalculation task doesn't require any specific settings.
It can be executed the same way as all other tasks – directly from the System view or via scheduling from the Activity view (for a single system or a landscape branch).


Recommendations


To keep the precise data size information of the RecycleBin, this task should be executed regularly.


Task Analysis

Created by:DVD

Client-dependent:

no
Settings as variant:no
Recommended for HANA pre-migration housekeeping: no


Introduction


The main goal of OutBoard Housekeeping is to remove temporary or obsolete data from SAP systems. This is achieved by regularly running the OutBoard Housekeeping cleaning tasks with appropriate settings. In general, these settings are created and maintained by the Housekeeping user; however, this requires a certain level of expert skill in SAP Basis and BW.
Therefore, OutBoard Housekeeping provides the user with a helper: default settings starting with "_" are distributed together with the OutBoard Housekeeping packages. They are based on best practices for the specific OutBoard Housekeeping task.
For a limited set of OutBoard Housekeeping tasks, an analysis of the data can be executed. The analysis output shows how much space can be saved by executing the task, depending on the thresholds set in the selected settings. These settings can be modified, so the potential outcome of housekeeping can be evaluated in advance.


Step list


Double click the system you want to work on. Scroll down the main OutBoard Housekeeping menu; select "OutBoard Housekeeping Task Analysis" under the OutBoard Housekeeping System Tasks. The user can create new settings:


  1. By entering a new ID
  2. By choosing from existing settings


For more information, refer to the Creating a Settings ID section of this user documentation. Should the user create new settings, the "Description" field needs to be filled in. Once the Settings ID for OutBoard Housekeeping Task Analysis is specified, the user may run the activity from the Main menu. There are several options to start the activity. For more information, refer to the Execute and Schedule sections of this user documentation. In this case, the execution and the results can also be reached by right-clicking the selected group or system in the System View.


Figure 135: Analyses from System view



Figure 136: Result of system analysis for default settings


Recommendations


To keep the analysis up to date, it is recommended to run this task after each execution of the OutBoard Housekeeping tasks to which the analyses are connected.


Scheduling of System Lock

Created by:DVD

Client-dependent:

no
Settings as variant:no
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: no

Introduction


If a system (or a group of systems) needs to be locked on a regular basis, this can be scheduled using the Scheduling of System Lock task.
In the task settings, set up the locking period using one of these time units: days or hours. Once this setting exists, the task can be scheduled by defining the start and the recurrence of the scheduled system lock.
For example, if system 'X' should be locked for the first 2 days of every month, a variant using a period of 2 days should be created and the task should then be scheduled accordingly. Once the locking period is over, the system(s) will be unlocked automatically. If the lock should be cancelled before the locking period is over, the Cancel Scheduled System Lock task can be used.
Check the "Pause / resume" support statement in the task chapter header. Tasks with this support offer an automated pause/resume option when a system lock is scheduled: when the system is being locked, the task finishes the last portion (request or package) and pauses; when the system is unlocked again, the cockpit resumes the paused task.


Step List


Double click the system you want to work on. Scroll down the main OutBoard Housekeeping menu and select "Scheduling of System Lock" under the OutBoard Housekeeping System Tasks. You can create new settings by entering a new ID, or choose an existing setting. For more information, refer to the Creating a Settings ID section of this user documentation. After filling in the Settings ID, press Continue to get to the Schedule system lock screen.


Figure 137: Schedule system lock screen


Fill in the Description to specify your settings. In the 'Locking time' part of the screen, there are two time characteristics to choose from: Days and Hours. Choose one and save the settings.
To schedule a system lock, press the 'Schedule' button and fill in your Settings ID and a job run definition. There are two options to execute: scheduled and unscheduled task. Press the calendar to check that the task was correctly scheduled. After a system lock execution, display the logs to verify that all the processes were executed properly. To review detailed information about an active lock, go to the calendar or display the Scheduling of System Lock logs.


Cancel Scheduled System Lock (Ad hoc)

Created by:DVD

Client-dependent:

no
Settings as variant:no
Support for Recycle bin:no
Recommended for HANA pre-migration housekeeping: no

Introduction


This task allows an emergency cancellation of a scheduled system lock in case the lock has to be cancelled within its valid locking period.
Check the "Pause / resume" support statement in the task chapter header. Tasks with this support offer an automated pause/resume option when a system lock (ad hoc) is scheduled: when the system is being locked, the task finishes the last portion (request or package) and pauses; when the system is unlocked again, the cockpit resumes the paused task.
Note: This is an emergency unlock only. The scheduled lock will still be automatically removed after the defined period.
Exception: This task doesn't unlock a manual system lock (a system locked through the Landscape editor).


Step list


Switch to the Activity view and scroll down the main OutBoard Housekeeping menu. Double click "Cancel Scheduled System Lock (Ad hoc)" under the OutBoard Housekeeping System Tasks. Execute the task on the system.