(OH-2005) Activities (Tasks)
OutBoard Housekeeping tasks are hierarchically structured into four main topics – Basis, Business Warehouse, ERP and CRM. Each topic contains housekeeping tasks implemented by DataVard as well as related standard SAP housekeeping tasks.
A number of useful standard SAP housekeeping transactions and reports may already be familiar to the user. In OutBoard Housekeeping, you can easily access these standard housekeeping functions from within the OutBoard Housekeeping cockpit. OutBoard Housekeeping offers only a short description of their functionality; in-depth information is available in the SAP documentation. As these tasks are part of the standard SAP installation, their maintenance and correct functioning remain the responsibility of SAP.
Basis
The Basis topic is a group of housekeeping tasks that are SAP Basis-oriented.
Application Logs Deletion
Created by: | DVD |
Client-dependent: | yes |
Settings as variant: | no |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
The application log is a tool that collects messages, exceptions and errors for activities and processes in the system. This information is organized and displayed in a log. Many different applications write messages to the application log, which contains information for the end user. The application log serves as temporary storage for these messages. The logs are written to the database, but they are not deleted automatically. There is no general procedure for switching application logging on or off. Because the system does not delete them automatically, the log tables tend to grow considerably and can significantly impact overall system performance.
OutBoard Housekeeping takes care of this by deleting logs stored in the old-format tables as well as all logs that match the specified selection.
The application log tables that can contain too many entries are, in particular, BALHDR (log headers) and BALDAT (log data).
An expiry date is assigned to each log in the BALHDR table. The logs remain in the database until this date passes; only then can the log and its corresponding data be deleted from the database. There are often a large number of logs in the database because no meaningful expiry date was assigned to them: if no specific date has been assigned to the application log, the system assigns an expiry date of 12/31/2098 or 12/31/9999, depending on the release. This allows the logs to stay in the system for as long as possible.
Step list
In the main OutBoard Housekeeping menu select "Application Logs – Settings" under the Basis/Deletion Tasks.
Now, the Settings selection must be specified. You can create new settings (1) by entering a new ID, or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
Should you create new settings, fill in the "Description" field and select whether or not the settings ID will run in test mode.
Figure 71: Application Logs – Settings detail
Selection conditions
The 'Object' and 'Subobject' fields specify the application area in which the logs were written (see F4 Help).
The field 'External ID' indicates the number, which was assigned by the application for this log.
Fields 'Transaction Code', 'User' and 'Log number' provide additional selection criteria for Application Log deletion.
Field 'Problem class' indicates the importance of the log. By default, this field contains the value '4'; this means only logs with additional information. You may want to delete all logs by entering the value '1' in this field. All logs with log class '1' or higher are then deleted.
Note: if no selection is made under "Selection conditions", the Application logs will be deleted based only on the specified time criterion.
Expiry Date
A log usually has an expiration date, set by the application that calls the Application Log tool. If the application does not set an expiration date, the Application Log tool sets it to 12/31/2098 or 12/31/9999, depending on the release, which allows the logs to stay in the system for as long as possible.
You can specify whether only Application logs that have reached their expiration date will be deleted, or whether the expiration date will not be taken into account.
In the field 'Logs older than (in Days)', you may specify the time limit for Application logs to be deleted.
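Putting the selection settings above together, the deletion eligibility of a single log can be sketched as follows. This is an illustrative sketch only; the field names and the log structure are hypothetical, not the actual BALHDR columns:

```python
from datetime import date, timedelta

def log_selected_for_deletion(log, today, only_expired=True,
                              older_than_days=None, min_problem_class=4):
    """Sketch of the combined selection: a log qualifies only if every
    active criterion matches (expiry date, age, problem class)."""
    if only_expired and log["expiry_date"] > today:
        return False                       # expiry date not yet reached
    if older_than_days is not None:
        cutoff = today - timedelta(days=older_than_days)
        if log["created_on"] > cutoff:     # log is too recent
            return False
    # problem class '4' keeps only additional-information logs;
    # entering '1' selects all logs with log class '1' or higher
    return log["problem_class"] >= min_problem_class
```

With the default problem class '4', only additional-information logs qualify; lowering it to '1' selects all log classes, mirroring the behaviour described above.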
Figure 72: Application Logs – Settings info buttons
- Show Selection – lists all selected log numbers that will be deleted.
- Number of objects – lists the total number of Application logs that fulfill the combined selection criteria.
Note: the information listed by clicking the "Show Selection" and "Number of Objects" buttons is valid for the selected system. If the landscape node is selected, the buttons are hidden.
Once the settings are specified, you can run the created/modified Settings Group from the Main menu. You can start or schedule the run in several ways; for more information, please refer to the Execute and Schedule sections of this user documentation.
You should specify the Settings ID when executing or scheduling the activity.
To check the status of the run, you can go to the monitor or check the logs.
Recommendation
Our recommendation is to switch on the log update at the beginning in order to determine which objects need log entries, and then to delete the application log after, for example, a maximum of 7 days. If the production system is running smoothly after the initial phase, you can deactivate the application log update completely. We recommend looking into the SAP Related Notes for more information.
Related Notes
2057897
RFC Logs Deletion
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
Transactional RFC (tRFC, previously known as asynchronous RFC) is an asynchronous communication method that executes the called function module just once in the RFC server. The remote system doesn't have to be available at the time when the RFC client program is executing a tRFC. The tRFC component stores the called RFC function, together with the corresponding data, in the SAP database under a unique transaction ID (TID).
The tables ARFCSSTATE, ARFCSDATA, ARFCRSTATE can contain a large number of entries. This leads to poor performance during tRFC processing.
In OutBoard Housekeeping it is possible to delete old data from these tables based on a retention time.
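As a rough illustration of retention-based cleanup, entries whose timestamp is older than the retention period become deletion candidates. The entry structure and field names below are hypothetical, not the actual ARFCSSTATE/ARFCSDATA/ARFCRSTATE columns:

```python
from datetime import datetime, timedelta

def trfc_deletion_candidates(entries, retention_days, now):
    """Return the tRFC log entries older than the retention period.
    Illustrative sketch: each entry is a dict with a 'timestamp' field."""
    cutoff = now - timedelta(days=retention_days)
    return [e for e in entries if e["timestamp"] < cutoff]
```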
Step list
In the main OutBoard Housekeeping menu select "RFC Logs – Settings" under the Basis/Deletion Tasks.
Now, the Settings selection must be specified. You can create new settings (1) by entering a new ID, or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
Should you create new settings, fill in the "Description" field and select whether or not the settings ID will run in test mode.
Figure 73: RFC Logs – Settings detail
Specify selection conditions for Time, Destination and User Name, if necessary.
You can also restrict the deletion of outdated logs based on the status:
- Connection Error
- Recorded
- System Error
- Being Executed
- Already Executed
- Terminated Due to Overload
- Temporary Application Errors
- Serious Application Errors
Click the "Save settings" info button to save the selection; for any further updates, click the "Modify Settings" info button and confirm.
Once the settings for RFC logs cleaning are specified, you may run the created/modified Settings group from the Main menu. There are several options for starting the deletion; for more information, please refer to the Execute and Schedule sections of this user documentation.
You should specify the Settings ID when executing or scheduling the activity.
To check the status of the run, you can go to the monitor or check the logs.
Recommendation
Our recommendation is to schedule the RFC logs deletion task on a regular weekly basis.
TemSe Objects Consistency Check
Created by: | SAP/DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
TemSe is a store for temporary sequential data, i.e. objects that are normally not held permanently in the system. TemSe objects consist of a header entry (stored in the table TST01) and the object itself (stored in the file system or in the table TST03).
This task checks the consistency of the object header and object data. However, it does not check spool requests (stored in the table TSP01) or, if an output request exists, entries in the table TSP02.
Step list
In the main OutBoard Housekeeping menu select "TemSe Objects Consistency Check – Settings" under the Basis/Deletion Tasks.
Now, the Settings selection must be specified. You can create new settings (1) by entering a new ID, or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
Should you create new settings, the "Description" field needs to be filled. In "Selection criteria", you can fill in the "Client" field if the check is to be made for a specific client. "TemSe object (pattern)" determines the range of checked objects.
Figure 74: TemSe Objects Consistency Check – Settings detail
If "Create settings for TemSe Cleanup based on selection" is checked, the consistency check will prepare settings for the "TemSe Objects Cleanup" task. The generated Settings ID can be found in the consistency check logs under problem class "Important".
Click the "Save settings" info button to save the selection; for any further updates, click the "Modify Settings" info button and confirm.
Once the settings for the TemSe objects consistency check are specified, you may run the created/modified Settings group from the Main menu. There are several options for starting the run; for more information, please refer to the Execute and Schedule sections of this user documentation.
You should specify the Settings ID when executing or scheduling the activity.
To check the status of the run, you can go to the monitor or check the logs.
Recommendations
It is recommended to run the consistency check twice, with a gap of approximately 30 minutes. The outputs have to be compared, and only those TemSe objects that appear in both should be deleted. This eliminates temporary inconsistencies.
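The differential approach above amounts to a set intersection: only objects reported as inconsistent in both runs are stable candidates for deletion, so short-lived, in-flight objects survive. A minimal sketch (the object names are made up for illustration):

```python
def stable_inconsistencies(first_run, second_run):
    """Keep only the TemSe objects flagged by BOTH consistency-check
    runs, eliminating temporary (in-flight) inconsistencies."""
    return sorted(set(first_run) & set(second_run))
```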
Warning
TemSe storage is not intended as an archiving system. It can contain only a limited number of spool requests (the default is 32,000, but this can be increased up to 2 billion), and a large number of requests can affect performance.
Related Notes
48284
TemSe Objects Cleanup
Created by: | SAP/DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
This task follows on from the previous one; see the introduction of "TemSe Objects Consistency Check" to learn more about the background of TemSe objects.
Step list
There are two ways to prepare the settings.
- In the main OutBoard Housekeeping menu select "TemSe Objects Cleanup – Settings" under the Basis/Deletion Tasks.
Now, the Settings selection must be specified. You can create new settings (1) by entering a new ID, or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
Should you create new settings, the "Description" field needs to be filled. In "Selection criteria", the objects can be selected in two ways. The first is an absolute selection: you can fill in the "Client" field if the cleanup should run for a specific client, and "TemSe object (pattern)" determines the range of checked objects. The second is a relative selection, where you specify how old objects must be to qualify for deletion and whether obsolete objects should also be deleted.
"Test mode" option is checked by default.
Figure 75: TemSe Objects Deletion – Settings detail absolute selection
Figure 76: TemSe Objects Deletion – Settings detail relative selection
- In the "TemSe Objects Consistency Check" task, check "Create settings for TemSe Cleanup based on selection" and execute. The generated settings are prepared based on the consistency check. Be aware, however, that they are not differential, as recommended for the consistency check, and can therefore also contain temporary inconsistencies.
In generated settings, "Test mode" option is unchecked.
In this case, all TemSe objects that fit the criteria are stored in Select-Options (see Figure 77).
Figure 77: TemSe Objects Deletion – Multiple selection detail
Recommendations
It is recommended to run the TemSe objects cleanup as a follow-up to the TemSe objects consistency check.
XML Messages Deletion
Created by: | DVD |
Client-dependent: | yes |
Settings as variant: | no |
Support for Recycle bin: | no |
Introduction
SAP Exchange Infrastructure (SAP XI) enables the implementation of cross-system processes. It is based on the exchange of XML messages and enables the connection of systems from different vendors and of different versions.
SXMSCLUP and SXMSCLUR tables are part of SAP XI. When using SAP XI extensively, their size can increase very rapidly. Therefore, regular deletion is highly recommended.
The XML messages deletion task offers the possibility of deleting different XML messages in one step.
Recommendation
It is possible to set a retention period separately for each message type and status. Our recommendation for the retention periods is as follows:
- Asynchronous messages without errors awaiting ... 1 day
- Synchronous messages without errors awaiting ... 1 day
- Synchronous messages with errors awaiting ... 0 days
- History entries for deleted XML messages ... 30 days
- Entries for connecting IDocs and messages ... 7 days
This task should be scheduled to run on a daily basis.
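The recommended retention periods above can be thought of as a configuration table from which a per-category deletion cutoff is derived. A minimal sketch (the category keys are illustrative names, only the day values come from the recommendation):

```python
from datetime import date, timedelta

# Illustrative mapping of message categories to retention days,
# following the recommendation in this section.
XML_MESSAGE_RETENTION_DAYS = {
    "async_no_errors": 1,
    "sync_no_errors": 1,
    "sync_with_errors": 0,
    "history_of_deleted_messages": 30,
    "idoc_message_links": 7,
}

def retention_cutoff(category, today):
    """Messages of this category older than the cutoff may be deleted."""
    return today - timedelta(days=XML_MESSAGE_RETENTION_DAYS[category])
```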
Step list
In the main OutBoard Housekeeping menu select "XML Message Deletion – Settings" under the Basis/Deletion Tasks.
Now, the Settings selection must be specified. You can create new settings (1) by entering a new ID, or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
The next step is to provide the time settings for XML Message deletion. The time frame can be specified in Days, Hours, Minutes and Seconds.
Figure 78: XML Message Deletion – Settings detail
Single Z* Table Cleanup
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | yes |
Introduction
Customer-defined tables can sometimes grow rapidly over time, depending on their purpose, and their content may no longer be needed. OutBoard Housekeeping offers the possibility of deleting selected data from any customer-defined table following the Z* / Y* naming convention and keeping it in the Recycle Bin for a selected period of time.
The name of the task itself points to the intention of safely cleaning up a single Z* table per execution without any development effort, in contrast to the OutBoard Housekeeping feature called Custom objects cleanup, which can manage any number of related tables but requires a small amount of coding.
Step list
In the main OutBoard Housekeeping menu, select "Single Z* Table Cleanup – Settings" under the Basis/Deletion Tasks.
Then the Settings selection must be specified. You can create new settings (1) by entering a new ID, or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
The next step is to provide the table name and generate the selection screen.
Figure 79: Single Z* Table Cleanup – initial screen
The Settings can be saved according to the entered table specific selections.
Figure 80: Single Z* Table Cleanup – test run results
Test run functionality is available for this task. The result of the test run is the number of entries that will be deleted from the Z* table and moved into the Recycle Bin for the current selection.
There are several options for starting the Single Z* Table Cleanup. For more information, please refer to the Execute and Schedule sections of this user documentation.
HANA Audit Log Cleanup
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
HANA specific: | yes |
Introduction
Old audit data can be deleted from the SAP HANA database audit table. This is applicable only if audit entries are written to column store database tables. The threshold date can be set as a specific date, or it can be set relatively – all log entries older than x days will be deleted.
Step list
In the main OutBoard Housekeeping menu select "HANA Audit Log Cleanup – Settings" under the Deletion Tasks.
Now, the Settings selection must be specified. You can create new settings (1) by entering a new ID, or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
The next step is to provide the selection condition for HANA Audit Log Cleanup in the settings. The time frame can be specified as 'Before date' or 'Older than (days)'.
Figure 81: HANA Audit Log Cleanup settings
Recommendation
The size of the table can grow significantly; therefore, we recommend scheduling this task to delete audit logs on a weekly basis. The retention time of the logs depends on company policy and local legal requirements.
HANA Traces Cleanup
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
HANA specific: | yes |
Introduction
All the trace files opened by the SAP HANA database, including their content, can be deleted. The following types of trace files can be deleted:
- ALERT
- CLIENT
- CRASHDUMP
- EMERGENCYDUMP
- EXPENSIVESTATEMENT
- RTEDUMP
- UNLOAD
- ROWSTOREREORG
- SQLTRACE
The task deletes all traces on all hosts when it is executed on a distributed system. When the 'With backup' checkbox is marked, trace files are compressed and saved instead of deleted.
Step list
In the main OutBoard Housekeeping menu select "HANA Traces Cleanup – Settings" under the Deletion Tasks.
Now, the Settings selection must be specified. You can create new settings (1) by entering a new ID, or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
The next step is to provide the selection conditions for HANA Traces Cleanup in the settings. This task can run in test mode, and there is also the possibility of storing backup data. The required types of traces can be selected.
Housekeeping allows you to delete traces that are older than a set number of days. On releases older than SAP HANA Platform SPS 09, this setting is ignored.
Figure 82: HANA Traces Cleanup settings
Recommendation
Our recommendation is to run this task to reduce the disk space used by large trace files, especially when trace components are set to INFO or DEBUG.
Intermediate Documents Archiving
Created by: | SAP/DVD |
Underlying SAP report: | RSEXARCA |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
Intermediate Documents (IDocs) are stored in several tables in the database. To control their size and improve access times without losing any IDocs, they can be stored in archives at the operating system level. These archives can be moved to external storage media for future retrieval. Archived IDocs can then optionally be deleted from the SAP system.
This task encapsulates the SAP NetWeaver data archiving concept using the SARA transaction and the WRITE action. The archiving object IDOC contains information about which database tables are used for archiving. At runtime, the report RSEXARCA is executed.
Step list
In the main OutBoard Housekeeping menu select "Intermediate Documents Archiving – Settings" under the Basis/Archiving Tasks.
The settings are maintained in the same way as for standard SAP housekeeping tasks. For more information, please refer to the Creating a Settings ID section of this user documentation.
In the variant screen you can set the criteria for the requests to be archived.
IDoc number | Identifies the range of document numbers |
Created At | Refers to the time of the document creation |
Created On | Refers to the date of the document creation; this is an absolute date value |
Created ago (in days) | Refers to the age of the document creation date; this allows specifying a relative date and has a higher priority than the absolute creation date value |
Last Changed At | Refers to the time of the last document modification |
Last Changed On | Refers to the date of the last document modification; it is an absolute date value |
Last Changed ago (in days) | Refers to the age of the last document modification; this allows specifying a relative date and has a higher priority than the absolute last modification date value |
Direction | Specifies if the document is out- or inbound |
Current Status | Specifies the document status |
Basic type | Document type |
Extension | Combined with an IDoc type from the SAP standard version (a basic type) to create a new, upwardly-compatible IDoc type |
Port of Sender | Identifies which system sent the IDoc |
Partner Type of Sender | Defines commercial relationship between the sender and receiver |
Partner Number of Sender | Contains partner number of the sender |
Port of Receiver | Identifies which system receives the IDoc |
Partner Type of Receiver | Defines commercial relationship between the sender and receiver |
Partner Number of Receiver | Contains partner number of the receiver |
Test Mode / Production Mode | Specifies the mode in which the report executes (test mode makes no changes in the database). |
Detail Log | Specifies information contained in Detail log (No Detail Log, Without Success Message, Complete). |
Log Output | Specifies type of the output log (List, Application Log, List and Application Log). |
Archiving Session Note | Description of the archived content. |
Figure 83: Intermediate Documents Archiving – Settings detail
There are several options for starting the Intermediate Documents Archiving. For more information, please refer to the Execute and Schedule sections of this user documentation.
Warning
Only use the archiving if the IDocs were not activated through the application. Please make sure that no IDocs are activated and may still be needed by the application.
Work Items Archiving
Created by: | SAP/DVD |
Underlying SAP report: | WORKITEM_WRI |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
For archiving and deleting of the work items, the archiving object WORKITEM is used.
This OutBoard Housekeeping task encapsulates the SAP NetWeaver data archiving concept by using the SARA transaction and the WRITE action. The archiving object WORKITEM contains information about which database tables are used for archiving. At runtime, the report WORKITEM_WRI is executed.
Step list
In the main OutBoard Housekeeping menu select "Work Items Archiving – Settings" under the Archiving Tasks.
The settings are maintained in the same way as for standard SAP housekeeping tasks. For more information, please refer to the Creating a Settings ID section of this user documentation.
In the variant screen, you can set the criteria for the requests to be archived.
Work Item ID | Unique ID of a work item |
Creation Date | Day on which the work item was generated with the status ready or waiting for the first time. |
End Date | Day on which the work item was set to status done or logically deleted. |
Task ID | Internal and unique ID of the task, which is assigned automatically after the task is created. |
Actual Agent | User who last reserved or processed the work item – user name |
Delete Unnecessary Log Entries | It is possible to delete or store the log of entries. |
Test Mode / Production Mode | Specifies the mode in which the report executes (test mode makes no changes in the database). |
Detail Log | Specifies the information contained in Detail log (No Detail Log, Without Success Message, Complete). |
Log Output | Specifies the type of an output log (List, Application Log, List and Application Log). |
Archiving Session Note | Description of the archived content. |
Grouping of List Display | Option to group the list – System Defaults, Grouping by Work Item Title or Task Description. |
For more detailed information, see the contextual help.
Figure 84: Work Items Archiving – Settings detail
There are several options for starting the Work Items Archiving. For more information, please refer to the Execute and Schedule sections of this user documentation.
Prerequisites
Work Items that you want to archive should have the status Completed or Logically deleted (CANCELLED).
Recommendations
We recommend running Work Items archiving regularly. The frequency of archiving is system-specific.
Note
SAP recommends using the archive information structure SAP_O_2_WI_001, which is necessary if you are using ABAP classes or XML objects. If you are using only BOR objects and are already using the archive information structure SAP_BO_2_WI_001, you can continue to use it, but SAP recommends switching to the extended archive information structure SAP_O_2_WI_001.
Change Documents Archiving
Created by: | SAP/DVD |
Underlying SAP report: | CHANGEDOCU_WRI |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
With the CHANGEDOCU solution, changes to master data, tables, documents, etc. are archived.
This OutBoard Housekeeping task encapsulates the SAP NetWeaver data archiving concept by using the SARA transaction and the WRITE action. The archiving object CHANGEDOCU contains information about which database tables are used for archiving. At runtime, the report CHANGEDOCU_WRI is executed.
Step list
In the main OutBoard Housekeeping menu select "Change Documents Archiving – Settings" under the Archiving Tasks.
The settings are maintained in the same way as for standard SAP housekeeping tasks. For more information, please refer to the Creating a Settings ID section of this user documentation.
In the variant screen, you can set the criteria for the requests to be archived.
Change doc. object | Name of the object |
Object value | Value of the object |
From Date | Starting date of the change documents archiving |
To Date | End date of the change documents archiving |
From Time (HH:MM:SS) | Starting time of the change documents archiving |
To Time (HH:MM:SS) | End time of the change documents archiving |
Transaction code | Transaction code in which the change was made |
Changed By (User Name) | User name of the person responsible for the change of the document |
Test Mode / Production Mode | Specifies the mode in which the report executes (test mode makes no changes in the database). |
Detail Log | Specifies information contained in Detail log (No Detail Log, Without Success Message, Complete). |
Log Output | Specifies type of output log (List, Application Log, List and Application Log). |
Archiving Session Note | Description of the archived content. |
For more detailed information, see the contextual help.
Figure 85: Change Documents Archiving – Settings detail
There are several options for starting the Change Documents Archiving. For more information, please refer to the Execute and Schedule sections of this user documentation.
Recommendations
We recommend running the Change Documents archiving regularly. Frequency of archiving is system specific.
Note
Use the Change Document Archiving to archive the change documents of master data. Change documents for transactional data should still be archived together with the corresponding archiving of the application.
Warning
Because business transactions need to be traceable, change documents cannot be deleted for a certain period of time. However, to reduce the data volume in the database, you can archive change documents that you no longer need in current business processes and keep them outside the database for the duration of the legal retention time.
Links Deletion between ALE and IDocs
Created by: | SAP |
Underlying SAP report: | RSRLDREL |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
Links are written in the ALE and IDoc environment. They are required for IDoc monitoring, document tracing and the ALE audit.
They result in a rapid increase in the size of the IDOCREL and SRRELROLES tables.
Recommendation
Links of the type IDC8 and IDCA can be deleted on a regular basis because they are generally no longer required after the IDocs are successfully posted in the target system. For more information, see related note.
Related Notes
505608
IDocs Deletion
Created by: | SAP/DVD |
Underlying SAP report: | RSETESTD |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Cyclic execution of standard report: | yes |
Pause / resume support: | yes |
Introduction
IDoc stands for Intermediate Document. It is a standard SAP document format. IDocs enable the connection of different application systems using a message-based interface.
IDoc data is stored in the following DB tables:
- EDIDC – control record
- EDIDOCINDX – control record
- EDIDO – value table for IDoc types
- EDIDOT – short description of IDoc types
- EDIDS – status record
- EDIDD_OLD – data record
- EDID2 – data record from 3.0 onwards
- EDID3 – data record from 3.0 onwards
- EDID4 – data record from 4.0 onwards
- EDID40 – data record from 4.0 onwards
If old IDocs are kept in the system, these EDI* tables may become too big.
Unlike the standard SAP settings, this task is extended to enable setting the date of document creation as an absolute or a relative value (the relative value has a higher priority).
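The precedence between the two date settings can be sketched as follows. This is an illustrative sketch; the parameter names are made up, not the actual screen fields:

```python
from datetime import date, timedelta

def effective_creation_cutoff(today, absolute_date=None, created_ago_days=None):
    """When a relative age in days is maintained, it overrides the
    absolute creation date (relative value has higher priority)."""
    if created_ago_days is not None:       # relative value wins
        return today - timedelta(days=created_ago_days)
    return absolute_date
```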
Figure 86: IDocs Deletion – Settings detail
Recommendation
Our recommendation is to run the IDocs Deletion task regularly. The period is system-specific.
IDocs Deletion (Central system release >= 7.40)
Created by: | SAP/DVD |
Underlying SAP report: | RSETESTD |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Cyclic execution of standard report: | yes |
Pause / resume support: | yes |
Introduction
This task replaces the task "IDocs Deletion" if the central system release is 7.40 or higher.
This report deletes IDocs without archiving them. It also provides the option of deleting other objects that were created during IDoc generation, such as work items, links, RFC entries and application logs.
Step list
Make an IDoc selection in accordance with the given selection criteria. If the 'Test Run' checkbox is marked, the IDocs and linked objects are determined in accordance with the selection made, but they are not deleted. Using the 'Maximum Number of IDocs' parameter, you can control how many IDocs should be deleted. It is not recommended to leave this parameter empty; if you do, all the IDocs in your selection will be taken into consideration. The recommended value for this parameter is 100,000.
Determining the linked objects is a complex process. Activate the additional functions only if the respective objects are created during IDoc generation and are not already removed from your system by another action.
Dynamic variant usage is available for this task.
BCS Reorganization of Documents and Send Requests
Created by: | SAP |
Underlying SAP report: | RSBCS_REORG |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
The Business Communication Services (BCS) offer functions for SAP applications to send and receive e-mails, faxes or SMS messages. BCS offers an ABAP programming interface connected to SAPconnect, which allows messages to be exchanged with e-mail servers via SMTP.
This task deletes documents with send requests, as well as documents that are not part of a send request, if they are no longer in use, in accordance with the settings under "Reorganization Mode".
Recommendation
It is recommended to run this task when the related tables keep growing over time and the data is no longer required.
Related Notes
966854
1003894
Documents from Hidden Folder Deletion
Created by: | SAP |
Underlying SAP report: | RSSODFRE |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
Some applications use the 'dark folder' (meaning it is not visible in the Business Workplace) to store Business Communication Services documents.
This task removes documents from the 'dark folder' and therefore allows the reorganization of the documents and the document content.
Recommendation
It is recommended to run this task when the SOOD, SOFM, SOC3, SOOS, SOES tables have become considerably large and are not significantly reduced when you run common reorganization reports.
Related Notes
567975
Reorganization Program for Table SNAP of Short Dumps
Created by: | SAP |
Underlying SAP report: | RSSNAPDL |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Introduction
This program deletes old short dumps from the table SNAP. Dumps that should be kept can be flagged in transaction ST22.
The following program parameters can be set:
- The maximum number of entries to remain after the reorganization.
- The maximum number of table entries to be deleted at once.
- Storage date.
The program first deletes the short dumps that are older than the storage date and not flagged as protected. If the table SNAP still contains more entries than specified in the first parameter, further short dumps are deleted as well.
The delete process is split into small units so that only a limited number of entries is deleted at a time; this prevents problems in the database. The size of these units is set in the second program parameter.
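The deletion logic described above can be sketched as follows (a simplified model with hypothetical names, not the actual RSSNAPDL code): delete unprotected dumps older than the storage date, enforce the maximum number of remaining entries, and split the deletions into small units:

```python
def reorganize_snap(dumps, storage_date, max_keep, chunk_size):
    # 1. Delete unprotected dumps older than the storage date.
    old = [d for d in dumps
           if d["date"] < storage_date and not d["protected"]]
    kept = [d for d in dumps if d not in old]
    # 2. If more entries remain than allowed, delete further dumps
    #    (oldest first), still skipping protected ones.
    kept.sort(key=lambda d: d["date"])
    overflow = [d for d in kept if not d["protected"]]
    while len(kept) > max_keep and overflow:
        victim = overflow.pop(0)
        kept.remove(victim)
        old.append(victim)
    # 3. Split the delete set into small units (one unit per DB operation).
    return [old[i:i + chunk_size] for i in range(0, len(old), chunk_size)]

# Toy data: six dumps with "dates" 1..6, none protected.
dumps = [{"date": n, "protected": False} for n in range(1, 7)]
chunks = reorganize_snap(dumps, storage_date=3, max_keep=2, chunk_size=2)
```

The chunking in step 3 mirrors the second program parameter: each small unit can be committed separately, keeping individual database transactions small.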
Table Log Database Management
Created by: | SAP |
Underlying SAP report: | RSTBPDEL |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
Once table logging is activated, it is possible to review the history of changes made in custom tables. These changes are saved in the DBTABLOG table. Logging is carried out "record by record": for every change operation, the "before" image of the record is written to the log table. This approach consumes a lot of space, so it is important to adopt a well-balanced table-logging policy to keep the growth of the DBTABLOG table acceptable.
The data saved in DBTABLOG can be deleted using the RSTBPDEL report.
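The "before image" logging principle can be illustrated with a minimal sketch (the data model and names are hypothetical, not SAP code):

```python
# Every change operation writes the image of the record *before* the change
# to a log table - which is why DBTABLOG-style tables grow so quickly.
log_table = []

def update_row(table, key, new_row, user):
    before = table.get(key)  # image of the record before the change
    log_table.append({"key": key, "before": before, "user": user})
    table[key] = new_row

# Toy example: one update to a logged Z table.
ztable = {"001": {"rate": 1.10}}
update_row(ztable, "001", {"rate": 1.25}, user="ADMIN")
```

Because the full before-image is written per record, a mass update on a logged table roughly doubles the data volume written, which is the disadvantage noted in the warning below.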
Recommendation
Our recommendation is to prepare a table-logging policy and to decide, in cooperation with the data owners, which tables will be logged.
Warning
Activating logging for a table has an important disadvantage: updates/modifications to the Z tables for which logging is activated can become slow.
Spool Administration
Created by: | SAP |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Introduction
Spool Administration (transaction SPAD) is intended for administrators to cover the following activities:
- Defining output devices in the SAP system.
- Analyzing printing problems.
- Maintaining the spool database – scheduling in dialog.
Warning
This task cannot be executed on a system connected via SYSTEM type RFC user.
Tool for Analyzing and Processing VB Request
Created by: | SAP |
Underlying SAP report: | RSM13002 |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Introduction
An update is asynchronous (not simultaneous). Bundling all updates for one SAP transaction into a single database transaction ensures the data belonging to this SAP transaction can be rolled back completely.
An update is divided into different modules, each corresponding to an update function module. The SAP System makes a distinction between:
- Primary (V1, time-critical) update module;
- Secondary (V2, non-time-critical) update module.
An update request or update record describes the data changes defined in an SAP LUW (Logical Unit of Work), carried out either in full or not at all (in the database LUWs for V1 and V2 updates).
This tool allows editing of update requests:
- Starting the V2 update (normally, the V2 update starts directly after the V1 update; for certain reasons, e.g. a performance bottleneck, the V2 update can be postponed by unchecking the STARTV2 option).
- Deleting successfully executed update requests (update requests are normally deleted after they have been successfully executed, but for performance reasons this behaviour can be switched off by unchecking the DELETE option).
- Reorganizing the update tables (termination of a transaction in progress can lead to incomplete update requests in the update tables; to delete them, run this tool with the REORG option checked).
Recommendation
If the V2 update is not started directly, it should be started as often as possible (several times a day). Otherwise, the update tables can become very large.
If deletion is not carried out directly, it should be carried out as often as possible (several times a day).
A reorganization of the update tables is necessary only occasionally (once a day is sufficient).
Delete Statistics Data from the Job Run-time Statistics
Created by: | SAP |
Underlying SAP report: | RSBPSTDE |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
A number of job run-time statistics are calculated during/after the jobs are run. These statistics should be deleted when they become obsolete.
This report reorganizes the job run-time statistics. A period can be specified in days or by a date. All statistics records older than this period are deleted.
Recommendation
Our recommendation is to run this task monthly.
Batch Input: Reorganize Sessions and Logs
Created by: | SAP |
Underlying SAP report: | RSBDCREO |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Introduction
If the batch input functionality of the R/3 system is utilized, log entries are produced. This program is responsible for cleaning up the batch input sessions and their logs.
The report does the following:
- Deletes (successfully) processed sessions still in the system, together with their logs. Only these sessions are deleted, not "sessions still to be processed", "sessions with errors", and so on.
- Deletes logs for which sessions no longer exist.
Recommendation
We recommend running the program RSBDCREO periodically, once a day, to reorganize the batch input log file. RSBDCREO can run in the background or interactively.
Warning
Batch input logs cannot be reorganized using the TemSe reorganization; they must be reorganized using the program RSBDC_REORG or transactions SM35/SM35P.
Delete Old Spool Requests
Created by: | SAP |
Underlying SAP report: | RSPO1041 |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
The report RSPO1041 is used to tidy up the spool database and delete old spool requests. This must be done regularly in a productive system, as the spool database can, by default, contain only 32,000 spool requests.
For additional information, see the program documentation (variant screen).
If the previous version of this report (RSPO0041) is already scheduled, it continues working. It can be re-scheduled, but it cannot be newly scheduled from scratch.
Deletion of Jobs
Created by: | SAP |
Underlying SAP report: | RSBTCDEL2 |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
The report RSBTCDEL2 runs in the background. This task is provided by SAP to delete old, inconsistent and non-deleted jobs from the system. It replaces the previously used report RSBTCDEL.
Recommendation
Our recommendation is to run this task regularly in the background once a day.
Related Notes
784969
Orphaned Job Logs Search and Deletion
Created by: | SAP |
Underlying SAP report: | RSTS0024 |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
Obsolete jobs are deleted with the report RSBTCDEL (the predecessor of the current report RSBTCDEL2). Sometimes the job logs that are left behind cannot be deleted (e.g. due to system problems). These logs are called "orphans".
This task searches, checks and deletes these "orphans".
Recommendation
Our recommendation is to run this task regularly in the background once a week.
Related Notes
666290
Spool Files Consistency Check
Created by: | SAP |
Underlying SAP report: | RSPO1042 |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Introduction
Spool files are stored in the DIR_GLOBAL directory or in the client-specific ADSP subdirectories, and some of them may be very old. Usually, these files are removed by deleting the relevant spool request. However, under certain circumstances, these files may remain in the system as "orphaned" files.
Recommendation
This task checks whether spool requests still exist for the files in the DIR_GLOBAL directory (ADS and SAP GUI for HTML print requests). If no matching spool requests exist, the files are deleted. This task should be scheduled daily.
Related Notes
1493058
Administration Tables for BG Processing Consistency Check
Created by: | SAP |
Underlying SAP report: | RSBTCCNS |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Introduction
Background processing stores the job data in multiple database tables and these tables can be checked for consistency. This test is especially important if problems with the database occur and you need to determine whether all job data is still available. The report includes two predefined variants that you can use in the job step. These are:
Variant SAP&AUTOREPNO: Use this variant if consistency problems should only be listed in the output list. No automatic repair of the problems is performed.
Variant SAP&AUTOREPYES: Use this variant if consistency problems should be logged and automatically corrected.
Recommendation
This task should be scheduled daily.
Related Notes
1440439
1549293
Active Jobs Status
Created by: | SAP |
Underlying SAP report: | BTCAUX07 |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Introduction
Sometimes jobs remain in the status 'active' after the background work process terminates or after database connection problems occur; their status can be corrected manually using transaction SM37. This task does so automatically.
Recommendation
This task should be scheduled hourly.
Related Notes
16083
Collector for Background Job Run-time Statistics
Created by: | SAP |
Underlying SAP report: | RSBPCOLL |
Client-dependent: | yes |
Settings as variant: | no |
Support for Recycle bin: | no |
Introduction
This task creates job statistics and should run daily.
Recommendation
Make sure that note 2118489 is installed in the system; otherwise, RSBPCOLL has poor performance and performs too many database accesses.
Related Notes
16083
2118489
Performance monitor (RFC) Collector
Created by: | SAP |
Underlying SAP report: | RSCOLL00 |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
Introduction
This report starts, via RFC, collector reports on all servers (SAPSYSTEMS) belonging to the SAP system of one database (compare SM51); these collect performance-relevant data from the servers and write it to the performance database MONI. Furthermore, it starts the reports that collect and store information about the database system itself. These reports are started on the database server if a dialog system is available there; otherwise, the first dialog system in the system list is used.
Its component RSSTAT80 reads local statistics data from the shared memory and stores it in the performance table MONI, and its component RSSTAT60 creates statistics data for day, week, month and year and reorganizes the table MONI. It also updates the following tables:
- RSHOSTDB – data for the host system monitor
- RSHOSTPH – logs changes to host parameters
- RSORATDB – analyzes the space of the database
- RSORAPAR – logs changes to database parameters.
Recommendation
This task should be scheduled hourly. It must always be scheduled in client 000 with the user DDIC or a user with the same authorizations.
Related Notes
16083
Orphaned Temporary Variants Deletion
Created by: | SAP |
Underlying SAP report: | BTC_DELETE_ORPHANED_IVARIS |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Introduction
This task deletes "orphaned" temporary variants left over from background processing.
Recommendation
This task should be scheduled weekly.
Related Notes
16083
Reorganization of Print Parameters for Background Jobs
Created by: | SAP |
Underlying SAP report: | RSBTCPRIDEL |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Introduction
This report reorganizes print parameters cross-client. Since the number of print parameters increases more slowly than the number of background processing steps, it can be executed at longer intervals (longer than one month).
Recommendation
This task should be scheduled monthly.
Related Notes
16083
Reorganization of XMI Logs
Created by: | SAP |
Underlying SAP report: | RSXMILOGREORG |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Introduction
When you use external job scheduling tools, XMI log entries are written to the table TXMILOGRAW. The system may write a very large number of log entries, even if the audit level is set to 0. You must therefore reorganize the TXMILOGRAW table manually on a regular basis.
Recommendation
This task should be scheduled weekly.
Related Notes
16083
182963
SOAP Runtime Management
Created by: | SAP |
Underlying SAP report: | RSWSMANAGEMENT |
Client-dependent: | yes |
Settings as variant: | no |
Support for Recycle bin: | no |
Introduction
This task performs SOAP runtime maintenance by scheduling other SAP standard programs:
SRT_REORG_LOG_TRACE
SRT_COLLECTOR_FOR_MONITORING
SRT_SEQ_DELETE_BGRFC_QUEUES
SRT_SEQ_DELETE_TERM_SEQUENCES
WSSE_TOKEN_CACHE_CLEANUP (Security Group)
Recommendation
This task should be scheduled hourly.
Delete History Entries for Processed XML Messages
Created by: | SAP |
Underlying SAP report: | RSXMB_DELETE_HISTORY |
Client-dependent: | yes |
Settings as variant: | no |
Support for Recycle bin: | no |
Introduction
Historical data consists of a small amount of header information from deleted messages. This history data is stored in a table and transferred to a second table on a weekly basis. Data can thus be removed from the first table every week, while the history remains available in the second table for another month.
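The weekly two-table rotation can be sketched as follows (hypothetical names and data model, not the actual RSXMB_DELETE_HISTORY implementation):

```python
# Once a week the entries of the first (current) table move to the second
# (archive) table, so the first table can be emptied while the history
# stays available for another month.
def weekly_rotation(current, archive):
    archive.extend(current)
    current.clear()

# Toy example: two history entries in the current table.
current = [{"msg_id": "A"}, {"msg_id": "B"}]
archive = []
weekly_rotation(current, archive)
```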
Recommendation
This task should be scheduled monthly.
Delete XML Messages from the Persistency Layer
Created by: | SAP |
Underlying SAP report: | RSXMB_DELETE_MESSAGES |
Client-dependent: | yes |
Settings as variant: | no |
Support for Recycle bin: | no |
Introduction
This periodic task is recommended for the Integration Server/Engine of SAP XI.
Recommendation
This task should be scheduled daily.
Spool Data Consistency Check in Background
Created by: | SAP |
Underlying SAP report: | RSPO1043 |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | no |
Introduction
The task enables continuous monitoring of inconsistent spool objects. Write locks are analysed and, if necessary, deleted. All inconsistent objects found are gathered in a table. At the end of a test run, the old and the new table are compared according to the following scheme:
In old table | In new table | Action |
No | Yes | Stays in the new table (new inclusion) |
Yes | Yes | If counter > limit, then delete object |
Yes | No | If counter <= limit, then increase counter by 1 |
The write locks found are deleted without being gathered in a table.
The functions "Delete write locks" and "Delete inconsistencies" can be used independently of each other, but this is not recommended. For normal daily use, the limit values for both functions should be the same; at the moment, no use cases for differing limit values are known.
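The counter/limit idea from the scheme above can be sketched as follows (a simplified model with hypothetical names): an inconsistent object is only deleted once it has been observed in more consecutive runs than the configured limit:

```python
def compare_runs(old, new, limit):
    """old: {object_id: counter} from the previous run;
    new: ids found inconsistent in the current run.
    Returns (kept, deleted)."""
    kept, deleted = {}, []
    for obj in new:
        counter = old.get(obj, 0)
        if counter > limit:
            deleted.append(obj)      # observed too often -> delete the object
        else:
            kept[obj] = counter + 1  # keep watching, increase the counter
    return kept, deleted

# "A" was already seen three times; "C" is a new inclusion.
kept, deleted = compare_runs({"A": 3, "B": 1}, {"A", "B", "C"}, limit=2)
```

Waiting several runs before deleting protects objects that only appear inconsistent transiently, e.g. while a spool request is still being written.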
Recommendation
This task should be scheduled daily.
Related Notes
16083
Business Warehouse
Topic Business Warehouse is a group of housekeeping tasks, which are Business Warehouse-oriented.
PSA Cleanup
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | yes |
Support for initial housekeeping: | yes |
Recommended for HANA pre-migration housekeeping: | yes |
Pause / resume support: | yes |
Introduction
PSA Cleanup is part of Business Warehouse deletion tasks.
The Persistent Staging Area (PSA) is the inbound storage area in BI for data from the source system. Requested data is saved unchanged from the source system.
Requested data is stored in the transfer structure format in transparent, relational database tables in BI.
If regular deletion does not take place, data in PSA tables can grow to an unlimited size. In applications, this can lead to poor system performance, while from an administration point of view it can cause increased resource usage. High data volumes can also have a considerable effect on the total cost of ownership of a system.
In OutBoard Housekeeping, it is possible to delete data from PSA tables using retention time.
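Retention-time based selection can be sketched as follows (hypothetical names and data model, not the actual OutBoard implementation): requests loaded before the cutoff date are candidates for deletion:

```python
import datetime

def requests_to_delete(requests, today, retention_days=15):
    """Return the ids of requests older than the retention period."""
    cutoff = today - datetime.timedelta(days=retention_days)
    return [r["request_id"] for r in requests if r["load_date"] < cutoff]

# Toy example with the recommended 15-day retention time.
today = datetime.date(2024, 1, 31)
requests = [
    {"request_id": "REQ1", "load_date": datetime.date(2024, 1, 1)},
    {"request_id": "REQ2", "load_date": datetime.date(2024, 1, 30)},
]
old_requests = requests_to_delete(requests, today)
```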
Step list
In the main OutBoard Housekeeping menu select "PSA Cleanup – Settings" under the Business Warehouse/Deletion Tasks.
Now, the Settings selection must be specified. You can create new settings (1) by entering a new ID or choose from the existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
Figure 87: PSA Cleanup – Settings
You can edit the created Settings Group (save the changes by clicking the Save Settings icon on the taskbar when you finish modifying).
1st Specify the DataSource name(s) by clicking the 'Add Object' button. From the list of available DataSources (F4), select the ones to be added to the newly created Settings Group and confirm. The source system may also be specified as an additional filter for DataSources. The Include/Exclude option determines whether the result of the selection is added to or removed from the selection.
You can enter the DataSource name as a pattern and, by checking "Do you want to save a pattern?", skip selecting the PSA tables. This is useful if you create settings on the group level.
Figure 88: PSA Cleanup – DataSource selection
After the confirmation, select the PSA tables that are to be cleaned up and confirm; the selected PSA tables are added to the Settings Group list.
Figure 89: PSA Cleanup – PSA selection
Figure 90: PSA Cleanup – Settings with patterns and objects
Icons in "X" column indicate how the lines will apply to the overall list of PSAs:
Pattern will be evaluated during the task execution and the PSAs will be added to the overall list
Pattern will be evaluated during the task execution and PSAs will be removed from the overall list
PSA will be added to the overall list
PSA will be removed from the overall list
If a pattern is added, clicking its technical name evaluates it and shows the list of PSAs.
Figure 91: PSA Cleanup – List of PSAs included in pattern
2nd By clicking the 'Select requests' button, you can specify the time period for deleting relevant entries in the selected PSA tables and exclude/include requests with an error or archiving status from processing. As of OutBoard Housekeeping version 2.54, there is an option that allows direct deletion of PSA tables, skipping the Recycle Bin.
Note: You can specify a different time criterion for every PSA table in the list; if no selection on a PSA table is made, the selected Time parameter is applied to all PSA tables in the list.
Figure 92: PSA Cleanup – Time period settings
3rd Run a 'Test Run' for the settings; it builds the overall list of PSAs, scans them, and identifies all REQUESTIDs that fulfill the Time period condition. After the 'Test Run' execution, the screen "Requests to be deleted" opens with the list of relevant REQUESTIDs, DataSources and source systems.
Figure 93: PSA Cleanup – Test run result
Note: If the settings are created on the group level, the 'Test Run' button is unavailable and this step is skipped.
4th The next step is to set the time limit for the Recycle Bin. Enter a value in days in the RecycleBin Period input field or leave the default value, which is 14 days.
Note: Data stored in the Recycle Bin is still available and can be restored if necessary during the time period defined during the setup. Once the time period expires, the data stored in the Recycle Bin is automatically deleted by manual or scheduled execution of the system task "OutBoard Housekeeping RecycleBin Cleanup".
Figure 94: PSA Cleanup – RecycleBin Period
5th You may define the maximum number of jobs that can run in parallel using the 'Max jobs' input field on the right side.
Note: If the parallelization parameter "Max. Jobs" is set to 0, the execution of these settings distributes the selection into individual execution chunks; however, the chunks are not executed and the respective RunID is paused.
Figure 95: PSA Cleanup – Parallelization – Max jobs
Once the settings for PSA Cleanup have been specified, you may run the created/modified Settings Group from the Main menu. There are several options for starting the deletion. For more information, please refer to the Execute and Schedule sections of this user documentation.
Specify the Settings ID when executing/scheduling the activity.
To check the status of the run, you can go to the monitor or check the logs.
Recommendation
It is recommended to periodically delete:
- All incorrect requests
- All delta requests that have been successfully updated in an InfoProvider and for which no further deltas are to be loaded.
This helps to reduce database disk space usage significantly. Our recommended retention time for PSA tables is 15 days, and the task should commonly run daily, although this is application-specific.
ADSO ChangeLog Cleanup
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | yes |
Support for initial housekeeping: | no |
Pause / resume support: | no |
Introduction
ChangeLog cleanup is part of the Business Warehouse deletion tasks.
The change log is a table (/BIC/A*3) that is automatically created for an Advanced DataStore Object (ADSO) created with the options "Activate data" and "Write change log".
The change log contains the change history for delta updating from the DataSource or InfoProvider to the ADSO.
The data is put into the change log via the activation queue and is written to the table for active data upon activation. During activation, the requests are sorted according to their logical keys. This ensures that the data is updated in the correct request sequence in the table for active data.
OutBoard Housekeeping offers the possibility to delete data from ChangeLog tables using a retention time.
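The activation order described above can be sketched as follows (a strongly simplified model with hypothetical names): requests are applied in request sequence, so the newest change per logical key ends up in the active-data table:

```python
# Requests from the activation queue are processed in request sequence;
# each record is written to the change log and to the active-data table,
# so the latest change per logical key wins.
def activate(activation_queue, active_table, change_log):
    for request in sorted(activation_queue, key=lambda r: r["request_no"]):
        for rec in request["records"]:
            change_log.append({"request_no": request["request_no"], **rec})
            active_table[rec["key"]] = rec["value"]

# Toy example: two requests changing the same logical key, queued out of order.
active, log = {}, []
queue = [
    {"request_no": 2, "records": [{"key": "K1", "value": 20}]},
    {"request_no": 1, "records": [{"key": "K1", "value": 10}]},
]
activate(queue, active, log)
```

Sorting by request number before applying the records is what guarantees the "correct request sequence" the text refers to; the change log keeps both versions for delta updating.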
Step list
In the main OutBoard Housekeeping menu select "ADSO ChangeLog Cleanup – Settings" under the Business Warehouse/Deletion Tasks.
The Settings-part of the Change log Cleanup allows you to specify the selection criterion of the Settings Group as well as the time window for data relevancy. Settings are changed by means of setting corresponding parameters.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
Figure 96: ADSO ChangeLog Cleanup – Settings Group selection
You may edit the created Settings Group (save the changes by clicking the Save Settings icon on the taskbar when you finish modifying).
1st Specify the ADSO name(s) by clicking the 'Add Object' button. From the list of available ADSOs (F4), select the objects to be added to the newly created Settings Group and confirm. An InfoArea can also be specified as an additional filter for ADSOs. The Include/Exclude option determines whether the result of the selection is added to or removed from the selection.
You can enter the ADSO name as a pattern, e.g. "ZADSO*", and, by checking "Do you want to save a pattern?", skip selecting the ChangeLog tables. This is useful if you create settings on the group level.
After the confirmation, you can select and confirm the ChangeLog tables that are to be cleaned up. The selected ChangeLog tables are added to the Settings Group list.
Figure 97: ADSO ChangeLog Cleanup – ADSO selection
Figure 98: ADSO ChangeLog Cleanup – change log selection
Figure 99: ADSO ChangeLog Cleanup – Settings with patterns and objects
Icons in "X" column indicate how the lines will apply to the overall list of change logs:
Pattern will be evaluated during the task execution and the change logs will be added to the overall list
Pattern will be evaluated during the task execution and change logs will be removed from the overall list
Change log will be added to the overall list
Change log will be removed from the overall list
If a pattern is added, clicking its technical name evaluates it and shows the list of change logs.
Figure 100: ADSO ChangeLog Cleanup – List of change logs included in pattern
2nd By clicking the 'Select requests' button, you can specify the time period for deleting relevant entries in the selected ChangeLog tables.
Note: You may specify a different time criterion for every ChangeLog table in the list; if no selection on a ChangeLog table is made, the selected Time parameter is applied to all ChangeLog tables in the list.
Figure 101: ChangeLog Cleanup – Time period settings
3rd Run a 'Test Run' for the selected ChangeLog tables. It scans all ChangeLogs and identifies, for each ChangeLog, all REQUESTIDs that fulfill the Time period condition. After the 'Test Run' execution, a list of the relevant REQUESTIDs, ADSO objects and InfoAreas opens.
Figure 102: ADSO ChangeLog Cleanup – Test run result
Note: If the settings are created on the landscape level, the 'Test Run' button is unavailable and this step is skipped.
4th The next step is to set the time limit for the Recycle Bin. Enter a value in days in the RecycleBin Period field or leave the default value, which is 14 days.
Note: Data stored in the Recycle Bin is still available and can be restored if necessary during the time period defined during the setup. Once the time period expires, the data stored in the Recycle Bin is automatically deleted by manual or scheduled execution of the system task "OutBoard Housekeeping RecycleBin Cleanup".
Figure 103: ChangeLog Cleanup – Recycle Bin Period
Once the settings for ADSO ChangeLog Cleanup are specified, you can run the created/modified Settings Group from the Main menu. There are several options for starting the deletion. For more information, please refer to the Execute and Schedule sections of this user documentation.
Specify the Settings ID when executing/scheduling the activity.
To check the status of the run, you can go to the monitor or check the logs.
Recommendation
It is recommended to periodically delete:
- Delta requests that have been successfully updated in an InfoProvider and for which no further deltas are to be loaded.
This helps to reduce database disk space usage significantly. The task should commonly be scheduled daily with a retention time of 15 days, although this is application-specific.
ChangeLog Cleanup
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | yes |
Support for initial housekeeping: | yes |
Pause / resume support: | yes |
Introduction
ChangeLog cleanup is part of the Business Warehouse deletion tasks.
The change log is a table of the PSA that is automatically created for each standard DataStore Object (DSO). In addition, for each standard DataStore Object, an export DataSource is created that serves as a data source for the transfer of data from the change log to other data targets.
The change log contains the change history for delta updating from the ODS object into other data targets, for example ODS objects or InfoCubes.
The data is put into the change log via the activation queue and is written to the table for active data. During activation, the requests are sorted according to their logical keys. This ensures that the data is updated in the correct request sequence in the table for active data.
OutBoard Housekeeping is able to delete data from the ChangeLog tables using retention time.
Step list
In the main OutBoard Housekeeping menu select "ChangeLog Cleanup – Settings" under the Business Warehouse/Deletion Tasks.
The Settings-part of the Change log Cleanup allows you to specify the selection criterion of the Settings Group as well as the time window for data relevancy. Settings are changed by means of setting corresponding parameters.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
Figure 96: ChangeLog Cleanup – Settings Group selection
You may edit the created Settings Group (save the changes by clicking the Save Settings icon on the taskbar when you finish modifying).
1st Specify the DataStore name(s) by clicking the 'Add Object' button. From the list of available DataStores (F4), select the objects to be added to the newly created Settings Group and confirm. An InfoArea can also be specified as an additional filter for DataStores. The Include/Exclude option determines whether the result of the selection is added to or removed from the selection.
You can enter the DataStore name as a pattern, e.g. "ZDSO*", and, by checking "Do you want to save a pattern?", skip selecting the ChangeLog tables. This is useful if you create settings on the group level.
After the confirmation, you can select and confirm the ChangeLog tables that are to be cleaned up. The selected ChangeLog tables are added to the Settings Group list.
Figure 97: ChangeLog Cleanup – DataStore selection
Figure 98: ChangeLog Cleanup – change log selection
Figure 99: ChangeLog Cleanup – Settings with patterns and objects
Icons in "X" column indicate how the lines will apply to the overall list of change logs:
Pattern will be evaluated during the task execution and the change logs will be added to the overall list
Pattern will be evaluated during the task execution and change logs will be removed from the overall list
Change log will be added to the overall list
Change log will be removed from the overall list
If a pattern is added, clicking its technical name evaluates it and shows the list of change logs.
Figure 100: ChangeLog Cleanup – List of change logs included in pattern
2nd By clicking the 'Select requests' button, you can specify the time period for deleting relevant entries in the selected ChangeLog tables and exclude/include requests with an error or archived status from processing. As of OutBoard Housekeeping version 2.54, there is an option that allows direct deletion of ChangeLog tables, skipping the Recycle Bin.
Note: You may specify a different time criterion for every ChangeLog table in the list; if no selection on a ChangeLog table is made, the selected Time parameter is applied to all ChangeLog tables in the list.
Figure 101: ChangeLog Cleanup – Time period settings
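The time-period selection behaves like a simple cutoff on the request load date. A minimal sketch of the idea (illustrative Python with hypothetical request metadata; field names are not the tool's actual data model):

```python
from datetime import date, timedelta

def requests_to_delete(requests, older_than_days, today=None):
    """Return the request IDs whose load date falls outside the
    retention window, i.e. older than `older_than_days` days."""
    today = today or date.today()
    cutoff = today - timedelta(days=older_than_days)
    # `requests` is a list of (request_id, load_date) tuples -- a
    # simplified stand-in for a ChangeLog table's request metadata.
    return [rid for rid, loaded in requests if loaded < cutoff]

sample = [
    ("REQU_1", date(2024, 3, 22)),  # ~100 days old
    ("REQU_2", date(2024, 6, 20)),  # 10 days old
    ("REQU_3", date(2024, 6, 29)),  # 1 day old
]
print(requests_to_delete(sample, 30, today=date(2024, 6, 30)))
# -> ['REQU_1']
```

Only requests loaded before the cutoff date qualify for deletion; everything newer stays untouched.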
3rd Run a 'Test Run' for the selected ChangeLog tables. It scans all ChangeLogs and identifies all REQUESTIDs for each ChangeLog that fulfill the time period condition. After the 'Test Run' execution, a list of the relevant REQUESTIDs, DataStore objects and InfoAreas is displayed.
Figure 102: ChangeLog Cleanup – Test run result
Note: When creating the settings on the landscape level, the 'Test Run' button is unavailable and this step is skipped.
4th The next step is to set the time limit for the Recycle Bin. Enter the value in days in the RecycleBin Period field or leave the default value of 14 days.
Note: Data stored in the Recycle Bin remains available and can be restored if necessary during the time period defined during the setup. Once the time period expires, data stored in the Recycle Bin is automatically deleted by a manual or scheduled execution of the system task "OutBoard Housekeeping RecycleBin Cleanup".
Figure 103: ChangeLog Cleanup – Recycle Bin Period
5th You may define the maximum number of jobs that can run in parallel by using the 'Max jobs' input field on the right side.
Note: If the parallelization parameter "Max. Jobs" is set to 0, the execution of such settings distributes the selection into execution chunks; however, these chunks are not executed and the respective RunID is paused.
Figure 104: ChangeLog Cleanup – Parallelization – Max Jobs
Once the settings for ChangeLog Cleanup are specified, you can run the created/modified Settings Group from the Main menu. There are several options for starting the deletion. For more information, please refer to the Execute and Schedule sections of this user documentation.
You should specify the Settings ID when executing/scheduling the activity.
To check the status of the run, you can go to the monitor or check the logs.
Recommendation
It is recommended to periodically delete:
- Incorrect requests
- Delta requests that have been successfully updated in an InfoProvider and for which no further deltas are to be loaded.
This helps to reduce database disk space usage significantly. As a rule of thumb, this task should be scheduled daily with a retention time of 15 days, but the exact setting is application-specific.
Cube Compression Analysis
Created by: | SAP/DVD |
Underlying SAP report: | SAP_INFOCUBE_DESIGNS |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
Support for initial housekeeping: | yes |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
The size of dimension tables significantly affects performance on the database level (table joins, query performance). This analysis checks two important values for each dimension table:
- Row count – the number of rows in the table.
- Ratio – the number of dimension table rows divided by the number of fact table rows.
The acceptable ratio for dimension tables is up to 10% (to avoid false alarms, the cube must have more than 30 000 rows).
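The threshold logic can be sketched as follows (an illustrative Python sketch, not the analysis code itself):

```python
def dimension_ratio_ok(dim_rows, fact_rows, max_ratio_pct=10.0,
                       min_fact_rows=30_000):
    """Return True when the dimension table passes the check: either the
    cube is too small to judge reliably (avoiding false alarms) or the
    dimension/fact row ratio stays within the threshold."""
    if fact_rows < min_fact_rows:
        return True  # fewer than 30 000 rows: skip to avoid a false alarm
    return 100.0 * dim_rows / fact_rows <= max_ratio_pct

print(dimension_ratio_ok(5_000, 100_000))   # 5 % ratio  -> True
print(dimension_ratio_ok(25_000, 100_000))  # 25 % ratio -> False
```

A dimension failing this check is a candidate for redesign (e.g. line-item dimensions) or indicates that the cube should be compressed.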
Steplist
In the main OutBoard Housekeeping menu select "Cube Compression Analysis – Settings" under the Business Warehouse/Deletion Tasks.
Here, the Settings selection must be specified. You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
Should you create new settings, the "Description" field needs to be filled. In "Selection thresholds", you can fill the "Min. rows" field to check only tables with a row count greater than this minimum. "Min. density (%)" searches for tables with a ratio greater than the selected value.
Figure 105: Cubes Compression Analysis – Settings detail
If "Create settings for Task Cube compression based on selection" is checked, a consistency check prepares the settings for the "Cube Compression" task; the settings ID can then be found in the consistency check logs under "Problem class Important".
Click on the "Save settings" button to save the selection. For any further updates, click on the "Modify Settings" button and confirm.
Once the settings for Cube Compression Analysis are specified, you can run the created/modified Settings Group from the Main menu. There are several options for starting the analysis. For more information, please refer to the Execute and Schedule sections of this user documentation.
You should specify the Settings ID while executing/ scheduling the activity.
To check the status of the run, you can go to the monitor or check the logs.
Recommendation
As data volume grows, this task should be run regularly.
Related Notes
1461926
Cube Compression
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Pause / resume support: | yes |
Introduction
Cube compression is part of Business Warehouse deletion tasks.
Data loaded into an InfoCube is identified by the request ID associated with it. However, the request ID concept can also cause the same data records to appear more than once in the F fact table. This unnecessarily increases the data volume and reduces reporting performance, as the system has to aggregate over the request IDs every time a query is executed. Compressing the InfoCube eliminates these disadvantages and brings the data from different requests together into one single request. Compression in this case means rolling up the data so that each data set is contained only once, thereby deleting the request information.
Compression improves performance as it removes the redundant data. It also reduces memory consumption, since it deletes the request IDs associated with the data and reduces redundancy by grouping on the dimensions and aggregating the cumulative key figures.
Compression reduces the number of rows in the F fact table because, when requests are compressed, all data moves from the F fact table to the E fact table. This results in accelerated loading into the F fact table, faster updating of the F fact table indexes, shorter index rebuild times, and accelerated rollups (since the F fact table is the source of data for roll-up).
OutBoard Housekeeping can compress cubes using a retention time.
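Conceptually, the roll-up can be pictured as a group-by that drops the request ID and sums the cumulative key figures. A toy sketch (simplified flat rows, not the real fact-table layout or the database operation itself):

```python
from collections import defaultdict

def compress(f_rows):
    """Roll up F-fact-table rows into E-fact-table rows: discard the
    request ID, group by the remaining dimension key and aggregate the
    cumulative key figure, so each data set is contained only once."""
    totals = defaultdict(float)
    for _request_id, dim_key, amount in f_rows:
        totals[dim_key] += amount
    return dict(totals)

f_table = [
    ("REQU_1", ("PLANT_A", "2024-01"), 100.0),
    ("REQU_2", ("PLANT_A", "2024-01"), 50.0),  # same data key, new request
    ("REQU_2", ("PLANT_B", "2024-01"), 70.0),
]
print(compress(f_table))
# three F-table rows collapse into two request-free E-table rows
```

Because the request ID is discarded, the roll-up is irreversible, which is why individual requests can no longer be deleted after compression.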
Step list
In the main OutBoard Housekeeping menu select "Cube Compression – Settings" under the Business Warehouse/Deletion Tasks.
The Settings part of OutBoard Housekeeping allows you to specify the selection criteria of the Settings Group as well as the time window for data relevancy. Settings are changed by setting the corresponding parameter. Parameters usually differ between systems and are therefore not meant to be transported, but are set on each system separately.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
Figure 106: Cube Compression – Settings Group selection
You can edit a created Settings Group (clicking the Save Settings icon on the taskbar when you finish modifying saves the changes).
Click on 'Add Object' and select the InfoCube(s) to be compressed; once selected, press Confirm.
Figure 107: Cube Compression – Settings Group selection
Icons in "X" column indicate how the lines apply to the overall list of InfoCubes:
Pattern is evaluated during the task execution and the InfoCubes are added to the overall list
Pattern is evaluated during the task execution and the InfoCubes are removed from the overall list
InfoCube is added to the overall list
InfoCube is removed from the overall list
If a pattern is added, clicking on its technical name evaluates it and shows the list of InfoCubes.
As an alternative to manual selection, settings generated by Cube Compression Analysis can be used.
Once the InfoCube(s) are selected, you should specify the settings in the Request selection. This selection determines which RequestIDs are identified for compression for every InfoCube in the list. The filter criteria are based on the request timestamp ("older than xxx days") and on the number of requests to be kept uncompressed. For the request limitation, you can enter the values in the following ways:
- Number of requests to be kept uncompressed
- Process data records older than XXX days
- Both limitations
- No limitations: in this case, all requests will be compressed
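The combined limitation can be sketched like this (illustrative only; the request metadata and field names are hypothetical):

```python
from datetime import date

def select_for_compression(requests, keep_uncompressed=None,
                           older_than_days=None, today=None):
    """Pick requests to compress under both optional limits.

    `requests` is a list of (request_id, load_date), newest first.
    A request is compressed only if it lies beyond the newest
    `keep_uncompressed` requests AND is older than `older_than_days`;
    a limit left as None imposes no restriction, so None/None
    compresses everything.
    """
    today = today or date.today()
    selected = []
    for pos, (rid, loaded) in enumerate(requests):
        if keep_uncompressed is not None and pos < keep_uncompressed:
            continue  # keep the newest requests uncompressed
        if older_than_days is not None and (today - loaded).days <= older_than_days:
            continue  # still inside the "older than" window
        selected.append(rid)
    return selected

reqs = [("REQU_4", date(2024, 6, 25)), ("REQU_3", date(2024, 6, 1)),
        ("REQU_2", date(2024, 4, 1)), ("REQU_1", date(2024, 1, 1))]
print(select_for_compression(reqs, keep_uncompressed=1, older_than_days=30,
                             today=date(2024, 6, 30)))
# -> ['REQU_2', 'REQU_1']
```

With both limits set, a request must satisfy both conditions to be compressed; with no limits, every request qualifies.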
Also, you can select the option "Zero elimination" after cube compression (see more in "Zero Elimination after Compression" task).
Figure 108: Cube Compression – Time period settings
For Oracle databases, it is possible to check for DIMID duplicates during the execution. This elementary test recognizes whether the selected dimension table of the specified InfoCube contains several lines that have different DIMIDs (dimension table keys) but the same SIDs. (This can occur when parallel loading jobs are used.) This is not an inconsistency; however, it occupies unnecessary storage space in the database.
Since the different DIMIDs with the same SIDs are normally used in the fact tables, they cannot simply be deleted. Therefore, all the different DIMIDs in the fact tables are replaced by one DIMID that is randomly selected from the equivalent ones.
DIMIDs that have become unnecessary are deleted afterwards. In doing so, not only the DIMIDs released in the first part of the repair are deleted, but also all those that are no longer used in the fact tables (including aggregates).
If this option is chosen for any database other than an Oracle Database, it will be ignored during execution.
If an InfoCube contains indexes that can slow down the cube compression, you can choose to drop the indexes before the compression and rebuild them after the compression is done. You can enable this functionality for DB indexes or aggregate indexes.
You can display identified RequestIDs by clicking on "Test Run".
Figure 109: Cube Compression – Test run result
Note: When creating settings on group level, 'Test Run' button is unavailable and thus this step is skipped.
As a last step, you can define the maximum number of jobs that can run in parallel by using the 'Max jobs' input field on the right side.
Note: If the parallelization parameter "Max. Jobs" is set to 0, the execution of such settings distributes the selection into execution chunks, but these chunks are not executed and the respective RunID is paused.
Figure 110: Cube Compression – Parallelization Max jobs
Once the settings for the InfoCube Compression have been specified, you can run the created/modified Settings Group from the Main menu. There are several options for starting the InfoCube Compression. For more information, please refer to the Execute and Schedule sections of this user documentation.
You should specify the Settings ID when executing/scheduling the activity.
To check the status of the run, you can go to the monitor or check the logs.
Recommendation
Our recommendation is to compress, as soon as possible, any requests for InfoCubes that are not likely to be deleted; this also applies to the compression of aggregates. The InfoCube content is likely to be reduced in size, so the DB time of queries should improve.
Warning
Be careful – after compression, the individual requests can no longer be accessed or deleted. Therefore, you should be absolutely certain that the data loaded into the InfoCube is correct.
Cube DB Statistics Rebuild
Created by: | DVD |
Client-dependent: | yes |
Settings as variant: | no |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | no |
Note: The task 'Cube DB Statistics Rebuild' is obsolete; we recommend the 'DB Statistics Rebuild' task instead.
Introduction
Database statistics are used by the system to optimize query performance. For this reason, the database statistics should be kept up-to-date. SAP recommends updating the statistics whenever more than a million new records have been loaded into the InfoCube since the last update.
The database statistics can be recalculated automatically after each load or after each delta upload. To avoid unnecessary recalculations, the OutBoard Housekeeping task first determines whether a recalculation is needed, and only afterwards are the statistics rebuilt. The InfoCubes relevant for a statistics update can also be listed using the 'Test run' in the settings definition.
The percentage of InfoCube data used to create the statistics is set to 10% by default by SAP. The larger the InfoCube, the smaller the percentage that should be chosen, since the demand on the system for creating the statistics increases with the size. The Cube DB Statistics Rebuild task uses the percentage as set for each InfoCube.
Recommendation
Our recommendation is to run this task regularly, as it updates the statistics only for InfoCubes that need it. This avoids unnecessary statistics updates while keeping the statistics up-to-date.
Note: While statistics are being built, it is not possible to:
• Delete indexes
• Build indexes
• Fill aggregates
• Roll up requests in aggregates
• Compress requests
• Archive data
• Update requests to other InfoProviders
• Perform change runs.
BI Background Processes
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
The BW environment constantly performs a large number of processes, and some of them are not always processed in a visible way. Sometimes this happens in the background, in serial or in parallel processes.
Background Management provides functions for managing these background and parallel processes in the BW system. As a result of its regular activities, the messages and internal parameters of the background processes executed by background management are created in the RSBATCHDATA table. Without housekeeping, the RSBATCHDATA table may grow out of control.
In OutBoard Housekeeping, it is possible to delete these messages and internal parameters using a retention time.
Step list
In the main OutBoard Housekeeping menu select "BI Background Processes – Settings" under the Basis/Deletion Tasks.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
Settings part of BI Background Processes allows you to specify the selection criterion of the Settings Group.
Now, the Settings selection must be specified:
- Delete Messages by – for the internal messages of BI background management, this defines after how many days these should be deleted.
- Delete Parameters by – for the internal parameters of the background processes, this defines after how many days these should be deleted.
- Fill the "Description" field with a description comment and click on "Save Settings" button.
Note: SAP recommends deleting messages and parameters older than 30 days. This setting should normally prevent the RSBATCHDATA table from being overfilled. When defining the deletion selections, make sure to keep the data as long as necessary to be able to track any problems that might occur.
To save the settings, click the "Save Settings" button. If you update an already existing settings group, save the settings using the "Modify Settings" button, or delete the complete settings group with the "Delete Settings" button.
To run this task in Test mode, mark the test mode checkbox; the available deletion results are then displayed in the logs.
Figure 111: BI Background Processes – Settings detail
Once the settings are specified, you can run the created/modified Settings Group from the Main menu. There are several options for starting the deletion. For more information, please refer to the Execute and Schedule sections of this user documentation.
You should specify the Settings ID when executing/ scheduling the activity.
To check the status of the run, you can go to the monitor or check the logs.
Recommendation
Our recommendation is to delete the messages and the parameters stored in RSBATCHDATA table that are older than 30 days and to run this job on a daily basis.
Warning
You should only delete messages and parameters that will no longer be needed. After this report is executed, the logs are temporarily stored in the recycle bin and eventually deleted.
BW Statistics Deletion
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
BW statistics are part of Business Warehouse deletion tasks.
To evaluate the fundamental functional areas of the Business Information Warehouse, the system stores BW Statistics in the following tables:
- RSDDSTATAGGR
- RSDDSTATAGGRDEF
- RSDDSTATBIAUSE
- RSDDSTATCOND
- RSDDSTATDELE
- RSDDSTATDM
- RSDDSTATDTP
- RSDDSTATEVDATA
- RSDDSTATHEADER
- RSDDSTATINFO
- RSDDSTATLOGGING
- RSDDSTATPPLINK
- RSDDSTATTREX
- RSDDSTATTREXSERV
After a certain period of time, the statistics are no longer used and therefore no longer needed. They can be deleted to reduce the data volume and to increase performance when accessing these tables.
With OutBoard Housekeeping, it is possible to delete these BW Statistics using a retention time.
Step list
In the main OutBoard Housekeeping menu select "BW Statistics – Settings" under the Business Warehouse/Deletion Tasks.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation. Should you create new settings, the "Description" field needs to be filled, and you must select whether or not the settings ID will run in test mode.
You may also restrict the deletion of BW statistics based on source objects:
- Query
- Logs
- Aggregates
- Data Transfer Process
- Data on Deletion
You need to click the "Save settings" button to save the selection. For any further updates, click the "Modify Settings" button and confirm.
Figure 112: BW Statistics – Settings detail
Once the settings for BW Statistics cleaning are specified, you can run the created/modified Settings Group from the Main menu. There are several options for starting the deletion. For more information, please refer to the Execute and Schedule sections of this user documentation.
Recommendation
Our recommendation is to delete BW Statistics older than 60 days.
Warning
Analyse the usage of BW Statistics in the system before deleting them, because OutBoard Housekeeping deletes the BW statistics permanently.
Bookmark Cleanup
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | yes |
Introduction
Bookmarks are useful if you want to return to a particular navigational state (including the level to which a hierarchy has been expanded) of a Web application or an ad-hoc query created using the Web. You can set a bookmark to recall a particular navigational state at a later date, because the system creates and stores a URL for the bookmark.
In OutBoard Housekeeping, it is possible to clean up these bookmarks based on internal parameters using a retention time.
In the main OutBoard Housekeeping menu select "Bookmark Cleanup – Settings" under the Business Warehouse/Deletion Tasks.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation. Should you create new settings, the "Description" field needs to be filled.
Figure 113: Bookmark Cleanup – Settings detail
You can restrict the cleanup of bookmarks based on:
- Selection Parameters
- Date created/ last accessed
- User Name
- Template
- Bookmark State (All/ Stateful/ Stateless)
- Bookmark Type (All/without/with Data)
To run this task in Test mode, mark the test mode checkbox; the available cleanup results are then displayed in the logs.
Once the settings for the Bookmark Cleanup are specified, you may run the created/modified Settings Group from the Main menu. There are several options for starting the deletion. For more information, please refer to the Execute and Schedule sections of this user documentation.
Web Template Cleanup
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | yes |
Introduction
A Web template determines the structure of a Web application. The Web Application Designer is used to insert placeholders into an HTML document for Web items (in the form of object tags), data providers (in the form of object tags) and BW URLs. The HTML document with the BW-specific placeholders is called a Web template. Web templates are checked into the Web Application Designer. The HTML page that is displayed in the Internet browser is called a Web application. Depending on which Web items were inserted into the Web template, a Web application contains one or more tables, an alert monitor, charts, maps, and so on.
The Web template is the keystone of a Web report. It contains placeholders for items and command URLs. Data providers, items, and command URLs are generated for Web reports.
In OutBoard Housekeeping, it is possible to clean up Web templates based on internal parameters using a retention time.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation. Should you create new settings, the "Description" field needs to be filled.
In the settings, the 'Last used' field represents the number of days that have passed since the template was last used, and the 'Created before' field represents the number of days that have passed since the template was created.
Figure 114: Web Template Cleanup – Settings detail
You can restrict the cleanup of web templates based on the following parameters:
- Created before (days)
- Last used (days)
- User Name
- Template (Tech Name)
It is possible to run this task in Test mode. Mark this option to display available cleanup results in logs.
Once the settings for the Web Template Cleanup are specified, you may run the created/modified Settings Group from the Main menu. There are several options for starting the deletion. For more information, please refer to the Execute and Schedule sections of this user documentation.
Precalculated Web Template Cleanup
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | yes |
Introduction
A precalculated web template is a complete document that does not require a database connection; essentially, it is a web template filled with data. Once filled with data, it can be distributed and used without executing an OLAP request. As these templates are filled with data, often multiple times, they require more space than traditional Web templates.
In OutBoard Housekeeping, it is possible to clean up these precalculated web templates based on internal parameters using a retention time.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation. Should you create new settings, the "Description" field needs to be filled.
Figure 115: Precalculated Web Template – Settings detail
You can restrict the cleanup of Precalculated Web Templates based on the following parameters:
- Older than (Creation Date)
- User Name
- Template (Tech Name)
It is possible to run this task in a Test mode. Mark this option to display available cleanup results in logs.
Once the settings for the Precalculated Web Template Cleanup are specified, you may run the created/modified Settings Group from the Main menu. There are several options for starting the deletion. For more information, please refer to the Execute and Schedule sections of this user documentation.
Extended Query Statistics Cleanup
Created by: | DVD |
Client-dependent: | yes |
Settings as variant: | no |
Support for Recycle bin: | no |
Introduction
Datavard (DVD) Extended Query Statistics are information that are collected on top of standard OLAP statistics.
These statistics are collected when the DVD Extended Query Statistics enhancement is installed in the system and collection is turned on. Extended statistics store information about the filters used for each query/navigational step execution and create source information for use in other DVD analysis products. If these tables are not cleaned up manually, their growth is unlimited. The two tables /DVD/QS_QINFO and /DVD/QS_QVAR are filled whenever a query is executed in the system. The speed of growth depends on overall query usage and the average number of filters used. In productive systems, it is not uncommon for the total size of these tables to grow by 0.5 GB per week.
In OutBoard Housekeeping, it is possible to clean up the Extended Query Statistics based on internal parameters.
Recommendation
Periodical deletion of Extended Query Statistics should reduce database disk space usage significantly. Our recommendation is to delete all statistics that are no longer needed for an analysis in other DVD products.
For example: a Heatmap query usage analysis might be done once a month; after this analysis, all statistics for that month can be cleaned up if no further analysis over a bigger time frame is planned.
Step list
In the main OutBoard Housekeeping menu select "Extended Query Statistics Cleanup – Settings" under the Business Warehouse/Deletion Tasks.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation. Should you create new settings, the "Description" field needs to be filled.
You can then restrict the deletion using the following optional parameters:
- ID – generated unique statistic ID
- Query datum – date of the query execution
- Query ID – report-specific ID
- User – user name of the query creator
It is possible to run this task in a Test mode. Mark this option to display available cleanup results in logs.
Figure 116: Extended Query Statistics Cleanup – Settings detail
Warning
If no parameters are specified, all queries will be deleted.
Unused Dimension Entries of an InfoCube Cleanup
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
Introduction
When data is deleted from InfoCubes, the corresponding entries in the InfoCubes' dimension tables may be retained for future processing. This data can occupy valuable database space and is easily forgotten, as it is no longer visible in the InfoCube. The task Unused Dimension Entries of an InfoCube Cleanup deals with this retained data when it is no longer needed.
Step list
In the main OutBoard Housekeeping menu, select "Unused Dimension Entries of an InfoCube Cleanup – Settings" under the Business Warehouse/Deletion Tasks.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
You can then specify InfoCubes and InfoCube patterns that will be used in the execution of the task.
Figure 117: Unused Dimension Entries of an InfoCube – Settings Group selection
In 'Add object' submenu, you can specify individual InfoCubes or patterns that are to be processed during the execution.
Figure 118: Unused Dimension Entries of an InfoCube – InfoCube(s) specification
A test run is available for this task; it gives an overview of the unused dimension entries.
Figure 119: Unused Dimension Entries of an InfoCube – Test run results
Warning
Test run results and task logs follow the standard SAP logging logic for this task: the first five unused entries of a dimension are displayed, followed by a summary for each dimension with more than five unused entries. If a dimension has five or fewer unused entries, no summary is displayed and the information about the unused entries of the next dimension follows.
Figure 120: Unused Dimension Entries of an InfoCube – Example of summary in results
Query Objects Deletion
Created by: | DVD |
Underlying SAP transaction: | RSZDELETE |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | yes |
Introduction
Transaction RSZDELETE is intended for mass deletion of queries and reusable query components (structures, filters, calculated or restricted key figures and variables) from the system.
As of OutBoard Housekeeping release 2.35, this functionality is enhanced with Recycle Bin support, which allows storing deleted queries and query components in the Recycle Bin for a specified retention time.
Step list
In the main OutBoard Housekeeping menu select "Query Objects Deletion – Settings" under the Business Warehouse/Deletion Tasks.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
Settings part of Query Objects Deletion allows you to specify the selection criterion of the Settings Group.
Figure 121: Query Objects Deletion – Settings detail
Workbook and Role Storage Cleanup
Created by: | DVD |
Underlying SAP report: | RSWB_ROLES_REORG |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | yes |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
There may be workbooks for which no reference exists any longer in a role or in the favorites. Similarly, references to non-existing workbooks may exist in roles or favorites.
In the SAP Easy Access menu or in the role maintenance, references to workbooks are deleted without a check being performed to see whether it may be the last reference. In these cases, the workbook should also be deleted from the document storage of the BEx Analyzer or BEx Browser.
This task enables the deletion of references in roles and favorites to workbooks that do not exist in the document storage. It also allows you to delete workbooks from the document storage for which no references exist in roles or favorites.
As of OutBoard Housekeeping release 2.35, this functionality is enhanced with RecycleBin support. This allows storing deleted references in RecycleBin for specified retention time.
Step list
In the main OutBoard Housekeeping menu, select "Workbook and Role Storage Cleanup – Settings" under the Business Warehouse/Deletion Tasks.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
The settings part of Workbook and Role Storage Cleanup allows you to specify the selection criterion of the Settings Group.
Figure 122: Workbooks and Role Storage Cleanup – Settings detail
It is possible to run this task in Test mode (checked by default); it writes the analysis results into the task logs.
Workbook Cleanup
Created by: | DVD |
Client-dependent: | yes |
Settings as variant: | yes |
Support for Recycle bin: | yes |
Introduction
BEx Analyzer is an analytical, reporting and design tool embedded in Microsoft Excel. Workbooks created with this tool can be saved to multiple locations, e.g. the SAP NetWeaver server. Workbooks are saved as binary files in the database and can consume a lot of space, especially if multiple workbooks are created for the same purpose (e.g. every month) and are not used after that.
In OutBoard Housekeeping, it is possible to delete unused workbooks from the system using a retention time and OLAP statistics.
Step list
In the main OutBoard Housekeeping menu, select “Workbook Cleanup” under the Business Warehouse/Deletion Tasks.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation.
In the settings part of the Workbook Cleanup, you can specify the selection criteria for the workbooks to be cleaned up. It is possible to run this task in Test mode (checked by default); in Test mode, the task only writes the analysis results into the task logs.
You need to fill in these parameters:
- Workbook ID
- Person Responsible
- Type of Workbook
- Location of Data Storage
- Last usage (in days)
- Delete all with no statistics checkbox – ignores statistics and deletes all the workbooks matching the selection parameters
- Retention time in days – amount of time a deleted workbook is held in the recycle bin
- Verbose logs checkbox – enhances the logs with additional information about each processed workbook
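Conceptually, the selection combines the "Last usage" retention window (from OLAP statistics) with the "Delete all with no statistics" flag. The following is a minimal, hypothetical sketch of that decision logic — the function name and data model are illustrative assumptions, not the task's actual implementation:

```python
from datetime import date, timedelta

def select_workbooks_for_deletion(workbooks, last_used, max_idle_days,
                                  delete_all_with_no_statistics=False,
                                  today=None):
    """Return workbook IDs considered unused (illustrative sketch).

    workbooks: iterable of workbook IDs matching the selection criteria.
    last_used: dict mapping workbook ID -> date of last usage, taken from
               OLAP statistics; a missing key means no statistics exist.
    """
    today = today or date.today()
    cutoff = today - timedelta(days=max_idle_days)
    selected = []
    for wb in workbooks:
        if wb in last_used:
            if last_used[wb] < cutoff:       # not used within the window
                selected.append(wb)
        elif delete_all_with_no_statistics:  # no OLAP statistics recorded
            selected.append(wb)
    return selected
```

The sketch mirrors the warning below: without statistics, a workbook is only deleted when the flag is explicitly set.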
Recommendation
Our recommendation is to delete unused workbooks on a regular basis. Note that it is crucial to collect OLAP statistics for workbooks, otherwise the deletion program is unable to identify workbooks that are unused. Statistics for workbooks can be set in the transaction RSDDSTAT.
Warning
When OLAP statistics are not collected, the deletion program by default excludes such workbooks from deletion. Be careful when you choose the settings option 'Delete all with no statistics': it deletes all workbooks that have no OLAP statistics within the given selection. If you decide to use this option, always use the recycle bin (set 'Retention time in days' > 0); you can then easily reload any workbooks that turn out to be missing.
BusinessObjects: Office Cleanup
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
Introduction
SAP BusinessObjects Analysis is an analytical, reporting and design tool embedded in Microsoft Excel. Workbooks created with this tool can be saved to multiple locations; one such location is the SAP NetWeaver server. Workbooks are saved as binary files in the database and can consume a lot of space, especially when multiple workbooks are created for the same purpose (e.g. every month) and are not used afterwards.
In OutBoard™ for Housekeeping it is possible to delete unused workbooks from the system based on their creation time.
Recommendation
Our recommendation is to delete unused workbooks on a regular basis, using the recycle bin.
Warning
This task uses the creation time stamp for deletion, because the header table unfortunately does not contain last-usage information; an object may therefore have been used recently. You should always use the recycle bin (set 'Retention time in days' > 0); you can then easily reload any objects that turn out to be missing.
Tables Buffering on Application Server
Created by: | DVD |
Client-dependent: | yes |
Settings as variant: | no |
Support for Recycle bin: | no |
Introduction
Table buffering on the Application Server is part of Business Warehouse buffering tasks.
The purpose of buffering on the application server is to avoid accessing the database server too often. For tables with many sequential reads and a small percentage of invalidations (buffer refreshes), buffering increases the performance of table access.
OutBoard Housekeeping offers the possibility to evaluate the table buffering settings in the test mode. The result is a list of tables to be buffered according to current settings.
Step list
In the main OutBoard Housekeeping menu, select "Tables Buffering on Application Server – Settings" under the Business Warehouse/Buffering Tasks.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation. Should you create new settings, the "Description" field needs to be filled.
You may choose to set the threshold parameters for the tables to be buffered:
- Max. Table size
- Max. Sequential reads
- Max. Invalidation (in percent)
These parameters can be applied to the statistics of the previous or the current month.
Click the "Save settings" button to save the selection.
Figure 123: Tables Buffering on App. Server – Settings detail
Once the settings for Tables Buffering are specified, you can run the created/modified Settings group from the Main menu. There are several options for starting the buffering; for more information, please refer to the Execute and Schedule sections of this user documentation.
Recommendation
Our recommendation is to buffer the tables on the application server according to the results of the test run of this analysis.
Warning
Continue to analyze the buffer usage and invalidations on a regular basis (monthly).
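Conceptually, the test run applies the three thresholds above to each table's buffer statistics. The sketch below is illustrative only: the function name is hypothetical, and the interpretation of the thresholds (small table, frequently read sequentially, rarely invalidated, as described in the Introduction) is an assumption, not the task's actual implementation:

```python
def is_buffering_candidate(size_bytes, seq_reads, invalidation_pct,
                           max_size, min_seq_reads, max_invalidation_pct):
    """Conceptual check: a table benefits from application-server buffering
    when it is small, read sequentially very often, and rarely invalidated."""
    return (size_bytes <= max_size
            and seq_reads >= min_seq_reads
            and invalidation_pct <= max_invalidation_pct)
```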
Number Range Buffering
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
Introduction
Number Range Buffering is part of Business Warehouse buffering tasks.
During a master data load, each record accesses the database table NRIV to pick a new SID number. Similarly, during InfoCube data loading, each record accesses NRIV to get a new DIMID. With huge data volumes, loading performance decreases because every record has to go to the database table to get a new SID or DIMID number. To rectify this, buffered numbers should be used rather than hitting the database every time.
OutBoard Housekeeping offers the possibility to buffer the required SID and DIMID number range objects with a defined buffering value.
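The effect of buffering a number range can be illustrated with a small, hypothetical sketch: instead of one database access per record, a single access reserves a whole block of numbers that is then handed out locally. All names here are illustrative; this is not SAP's NRIV implementation:

```python
class BufferedNumberRange:
    """Conceptual sketch of number range buffering: instead of reading the
    central counter for every record, a block of `buffer_size` numbers is
    reserved in one access and then handed out locally."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self._nriv_counter = 0      # stands in for the central NRIV counter
        self._next = 0              # next local number to hand out
        self._end = 0               # end of the reserved block (exclusive)
        self.db_accesses = 0        # how often the "database" was hit

    def next_number(self):
        if self._next >= self._end:             # local block exhausted
            self.db_accesses += 1               # one NRIV access per block
            self._next = self._nriv_counter + 1
            self._nriv_counter += self.buffer_size
            self._end = self._nriv_counter + 1
        n = self._next
        self._next += 1
        return n
```

With a buffering value of 10, loading 20 records causes only 2 central accesses instead of 20 — which is why the recommendation below suggests a buffering value of 10.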
Step list
In the main OutBoard Housekeeping menu, select "Number Range Buffering – Settings" under the Business Warehouse/Buffering Tasks.
The settings part of Number Range Buffering allows you to specify the selection criterion of the Settings Group. You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation. Should you create new settings, the "Description" field needs to be filled in, and you need to select whether or not the settings ID will run in Test mode.
Figure 124: Number Range Buffering – Settings detail
Number Range Buffering supports three different activities:
- Buffering NR object
- Unbuffering NR object
- Unbuffering NR for package dimension
In OutBoard Housekeeping releases older than 2.61, the select options of the Number Range Buffering task were based on the number range object numbers for SID and DIMID. Because these numbers differ throughout the landscape, the select options are now based on the InfoObject name (NR for InfoObject) and the InfoCube dimensions (NR for InfoCube).
For NR object buffering, specify selection conditions for the SID NR object (NR for InfoObject) or the DIMID NR object (NR for InfoCube); if necessary, also specify the buffer level. For NR object unbuffering, specify selection conditions for the NR objects to be unbuffered, if necessary.
Click the "Save settings" button to save the selection; for any further updates, click the "Modify Settings" button and confirm.
Once the settings for Number Range Buffering are specified, you can run the created/modified Settings group from the Main menu. There are several options for starting the buffering. For more information, please refer to the Execute and Schedule sections of this user documentation.
Recommendation
Our recommendation is to buffer all SID and DIMID number range objects with the buffering value 10.
Warning
The number range object of the characteristic 0REQUEST should never be buffered; therefore, it is always filtered out by default. The number range objects of the package dimensions should likewise never be buffered and are also always filtered out by default.
Related Notes
1948106
857998
Archiving of Request Administration Data
Created by: | SAP/DVD |
Underlying SAP report: | RSREQARCH_WRITE |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Cyclic execution of standard report: | yes |
Introduction
The administration tables and log tables of DTPs/InfoPackages grow with each new request, which in turn affects performance.
The log and administration data for requests can be archived. This results in improved performance of the load monitor and the monitor for load processes, and it also frees up tablespace on the database.
This OutBoard Housekeeping task encapsulates the SAP NetWeaver data archiving concept using the SARA transaction and the WRITE action. The archiving object BWREQARCH contains information about which database tables are used for archiving. At runtime, the report RSREQARCH_WRITE is executed.
Step list
In the main OutBoard Housekeeping menu, select "Archiving of Request Administration Data – Settings" under the Business Warehouse/Archiving Tasks.
The settings are maintained the same way as standard SAP housekeeping tasks are. For more information, please refer to the Creating a Settings ID section of this user documentation.
In the variant screen, you can set the criteria for the requests to be archived.
Selection Date of Request | Refers to the load date of the request |
Requests Older Than (Months) | The requests that were loaded in the selected period (Selection Date) and are older than specified number of months, are archived during the archiving run. |
Archive New Requests Only | Only new requests, which have not been archived yet, are archived during the archiving run. |
Reloaded Requests Only | Only old requests are archived, which have already been archived once and were reloaded from that archive. |
Archive All Requests | All requests are archived that fall within the selection period. |
Minimum Number of Requests | If the number of archiving-relevant requests is lower than the minimum number, no requests are archived during the archiving run. |
Test Mode / Production Mode | Specifies in which mode the report is executed. Test mode makes no changes in the database. |
Detail Log | Specifies information contained in Detail log (No Detail Log, Without Success Message, Complete). |
Log Output | Specifies type of output log (List, Application Log, List and Application Log). |
Archiving Session Note | Description of the archived content. |
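To illustrate how the "Requests Older Than (Months)" and "Minimum Number of Requests" criteria interact, here is a minimal, hypothetical sketch (function name, data model, and the crude month arithmetic are illustrative assumptions, not the SAP report's actual logic):

```python
from datetime import date

def requests_to_archive(load_dates, older_than_months, minimum_number,
                        today=None):
    """Select requests whose load date is older than the given number of
    months; if fewer than `minimum_number` qualify, archive nothing
    (mirroring the 'Minimum Number of Requests' criterion)."""
    today = today or date.today()
    # crude month arithmetic, sufficient for a sketch
    months = today.year * 12 + (today.month - 1) - older_than_months
    cutoff = date(months // 12, months % 12 + 1, 1)
    old = [d for d in load_dates if d < cutoff]
    return old if len(old) >= minimum_number else []
```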
Figure 125: Archiving of Request Administration Data – Settings detail
There are several options for starting the Archiving of Request Administration Data. For more information, please refer to the Execute and Schedule sections of this user documentation.
Recommendation
To avoid unnecessary reloading of data from the archive, we recommend that you only archive administration data of requests that are more than three months old and will probably not be edited again.
Warning
- After an upgrade from SAP BW 2.x or 3.x to SAP NetWeaver 7.x, the reports RSSTATMAN_CHECK_CONVERT_DTA and RSSTATMAN_CHECK_CONVERT_PSA must be executed at least once for all objects. We recommend executing the reports in the background.
- Because of different selection screens on the various BW releases, it is not possible to use inheritance of settings and execution. It is highly recommended to prepare a specific settings ID for each landscape node.
Archiving of BI Authorization Protocols
Created by: | SAP/DVD |
Underlying SAP report: | RSECPROT_ARCH_WRITE |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
The table RSECLOG is used as storage for authorization log files, which are created when the authorization protocol is switched on in RSECPROT. This in turn can affect performance.
Authorization log files can be archived. This improves performance and frees up tablespace on the database.
This OutBoard Housekeeping task encapsulates the SAP NetWeaver data archiving concept using the SARA transaction and the WRITE action. The archiving object RSECPROT contains information about which database tables are used for archiving. At runtime, the report RSECPROT_ARCH_WRITE is executed.
Step list
In the main OutBoard Housekeeping menu, select "Archiving of BI Authorization Protocols – Settings" under the Business Warehouse/Archiving Tasks.
The settings are maintained the same way as standard SAP housekeeping tasks. For more information, please refer to the Creating a Settings ID section of this user documentation.
In the variant screen, you can set the criteria for the logs to be archived.
UTC time stamp in short form | Refers to the time the logs were created |
Executing User | User name of executing user |
Restricted User | User name of restricted user |
P_AR_ILM | ILM Action: Archiving – the system copies only the data for which an archivability check was successfully performed to archive files. |
P_SNAP | ILM Action: Snapshot – the selected data is copied to the archive files, without undergoing an additional check. The files created with this option can be stored on an external storage system. |
P_DEST | ILM Action: Data Destruction – only data that can be destroyed according to the rules stored in ILM is added to the archive files. You cannot store the archive files created with this option in an external storage system. The deletion program can be used to delete the data copied to the archive files from the database; no archive information structures are created. When the data has been deleted from the database, the deletion run also deletes the archive files that were created, as well as the administration information related to these files. |
Test Mode / Production Mode | Specifies in which mode the report is executed. Test mode makes no changes in the database. |
Delete with Test Variant | If set, the delete program will be started with the test mode variant. The program will generate statistics about the table entries that would be deleted from the database in the production mode. |
Detail Log | Specifies information contained in Detail log (No Detail Log, Without Success Message, Complete). |
Log Output | Specifies type of output log (List, Application Log, List and Application Log). |
Archiving Session Note | Description of the archived content. |
For more detailed information, see the contextual help.
Figure 126: Archiving of BI Authorization Protocols – Settings detail
There are several options for starting the Archiving of BI Authorization Protocols. For more information, please refer to the Execute and Schedule sections of this user documentation.
Recommendation
We recommend archiving and deleting BI Authorization protocols when:
- The write step is cancelled with an EXPORT_TOO_MUCH_DATA dump when writing to the authorization log;
- The RSECLOG table is becoming quite large;
- Authorization logs are no longer required;
- BW reports suddenly suffer abnormal delays;
- Traces show the table RSECLOG as the most accessed table.
Related Notes
1592528
Archiving of BI Authorization Change Logs
Created by: | SAP/DVD |
Underlying SAP report: | RSEC_CHLOG_ARCH_WRITE |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Required BW version | 7.30 at least |
Introduction
As of BW 7.30, it is possible to archive the change log of analysis authorizations using the archiving object RSEC_CHLOG.
This OutBoard Housekeeping task encapsulates the SAP NetWeaver data archiving concept using the SARA transaction and the WRITE action. The archiving object RSEC_CHLOG contains information about which database tables are used for archiving. At runtime, the report RSEC_CHLOG_ARCH_WRITE is executed.
Step list
In the main OutBoard Housekeeping menu, select "Archiving of BI Authorization Change Logs – Settings" under the Business Warehouse/Archiving Tasks.
The settings are maintained the same way as standard SAP housekeeping tasks. For more information, please refer to the Creating a Settings ID section of this user documentation.
In the variant screen, you can set the criteria for the requests to be archived.
Creation Date | Refers to the time the logs were created |
Create Archive with Check | ILM Action: Archiving – the system copies only the data for which an archivability check was successfully performed to archive files. |
Create File w/o Check | ILM Action: Snapshot – the selected data is copied to the archive files without undergoing an additional check. The files created with this option can be stored on an external storage system. |
File with Check w/o SAP-AS | ILM Action: Data Destruction – only data that can be destroyed according to the rules stored in ILM is added to the archive files. It is not possible to store the archive files created with this option in an external storage system. The deletion program can be used to delete the data copied to the archive files from the database; no archive information structures are created. Once the data has been deleted from the database, the deletion run also deletes the archive files that were created, as well as the administration information related to these files. |
Test Mode / Live Mode | Specifies in which mode the report is executed. Test mode makes no changes in the database. |
Delete with Test Variant | If set, the delete program starts with the test mode variant. The program generates statistics about the table entries that would be deleted from the database in the production mode. |
Detail Log | Specifies information contained in Detail log (No Detail Log, Without Success Message, Complete). |
Log Output | Specifies type of output log (List, Application Log, List and Application Log). |
Archiving Session Note | Description of the archived content. |
For more detailed information, see the contextual help.
Figure 127: Archiving of BI Authorization Change Logs – Settings detail
There are several options for starting the Archiving of BI Authorization Change Logs. For more information, please refer to the Execute and Schedule sections of this user documentation.
Recommendation
We recommend archiving the data before making a system copy, as this involves a large amount of data that is not actually needed in the new system.
Warning
This task works on BW 7.30 systems and higher; both central and satellite systems must meet this requirement. If the central system does not meet it, it is not possible to create a settings variant. If a satellite system does not meet it, the report RSEC_CHLOG_ARCH_WRITE is not found and is not executed.
Archiving of Point-of-Sale Aggregates
Created by: | SAP/DVD |
Underlying SAP report: | /POSDW/ARCHIVE_WRITE_AGGREGATE |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
When using the POS solution, a large amount of data can be generated, which increases the data volume rapidly. POS transaction data can be summarized using aggregates, which reduces the data size significantly. However, regular archiving of this data is still very important.
This OutBoard Housekeeping task encapsulates the SAP NetWeaver data archiving concept using the SARA transaction and the WRITE action. The archiving object /POSDW/AGG contains information about which database tables are used for archiving. At runtime, the report /POSDW/ARCHIVE_WRITE_AGGREGATE is executed.
Step list
In the main OutBoard Housekeeping menu, select "Archiving of Point-of-Sale Aggregates – Settings" under the Business Warehouse/Archiving Tasks.
The settings are maintained the same way as standard SAP housekeeping tasks. For more information, please refer to the Creating a Settings ID section of this user documentation.
In the variant screen, you can set the criteria for the requests to be archived.
Store | Indicator for stores |
Aggregate Number | POS aggregate number. |
Aggregate Level | Defines how the aggregated data is structured in the database. |
Maximum Posting Date | Day (in internal format YYYYMMDD) to which a POS transaction is assigned. |
Test Mode / Production Mode | Specifies in which mode the report is executed. Test mode makes no changes in the database. |
Detail Log | Specifies information contained in Detail log (No Detail Log, Without Success Message, Complete). |
Log Output | Specifies type of output log (List, Application Log, List and Application Log). |
Archiving Session Note | Description of the archived content. |
For more detailed information, see the contextual help.
Figure 128: Archiving of Point-of-Sale Aggregates – Settings detail
There are several options to start the Archiving for Point-of-Sale Aggregates. For more information, please refer to Execute and Schedule sections of this user documentation.
Prerequisites
Before the POS aggregate archiving is processed, all POS transactions for the selected Store/Date combination must have one of the following statuses: Completed, Rejected or Canceled. It is also assumed that this data no longer needs to be available for SAP POS DM.
Recommendations
We recommend running POS aggregate archiving regularly. The frequency of archiving is system-specific.
Warning
This task works only on systems where the SAP Point of Sale solution is installed; both central and satellite systems must meet this requirement. If the central system does not meet it, it is not possible to create a settings variant. If a satellite system does not meet it, the report /POSDW/ARCHIVE_WRITE_AGGREGATE is not found and is not executed.
Archiving of Point-of-Sale Transactions
Created by: | SAP/DVD |
Underlying SAP report: | /POSDW/ARCHIVE_WRITE |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
When using the POS solution, a large amount of data can be generated, which increases the data volume rapidly. Regular archiving of this data is very important.
This OutBoard Housekeeping task encapsulates the SAP NetWeaver data archiving concept using the SARA transaction and the WRITE action. The archiving object /POSDW/TL contains information about which database tables are used for archiving. At runtime, the report /POSDW/ARCHIVE_WRITE is executed.
Step list
In the main OutBoard Housekeeping menu, select "Archiving of Point-of-Sale Transactions – Settings" under the Business Warehouse/Archiving Tasks.
The settings are maintained the same way as standard SAP housekeeping tasks. For more information, please refer to the Creating a Settings ID section of this user documentation.
In the variant screen, you can set the criteria for the requests to be archived.
For more detailed information, see the contextual help.
Figure 129: Archiving of Point-of-Sale Transactions – Settings detail
There are several options for starting the Archiving of Point-of-Sale Transactions. For more information, please refer to the Execute and Schedule sections of this user documentation.
Prerequisites
Before the POS transaction archiving is processed, all POS transactions for the selected Store/Date combination must have one of the following statuses: Completed, Rejected or Canceled. It is also assumed that this data no longer needs to be available for SAP POS DM.
Recommendations
We recommend running POS transaction archiving regularly. The frequency of archiving is system-specific.
Note
If you are using SAP POS DM implemented on BW powered by HANA, do not use this archiving object /POSDW/TL. Use /POSDW/TLF instead.
Warning
This task works only on systems where the SAP Point of Sale solution is installed; both central and satellite systems must meet this requirement. If the central system does not meet it, it is not possible to create a settings variant. If a satellite system does not meet it, the report /POSDW/ARCHIVE_WRITE is not found and is not executed.
Enablement for archiving request admin. data for ADSOs
Created by: | SAP |
Underlying SAP report: | RSREQARCH_SIDTSNREQ_REQDONE |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
Requests generated with BW 7.40 have a TSN and an SID if an ADSO was the target of the request.
These requests were also written to the tables RSBMONMESS_DTP, RSDMREQ_DTP and RSBLOGPAR_DTP, and not to the new RSPM logging tables.
However, these requests cannot be archived with BWREQARCH, since they do not generate an entry in RSREQDONE.
The newly delivered report RSREQARCH_SIDTSNREQ_REQDONE generates these RSREQDONE entries, whose sole purpose is to enable archiving of the requests with transaction SARA.
Notes
2708933 - P22; DTP; REQARCH; RSPM; TSN: Archiving requests with SID and TSN
Temporary Database Objects Removing
Created by: | SAP |
Underlying SAP report: | SAP_DROP_TMPTABLES |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
In SAP BW, there are dynamically created temporary database objects (such as tables or views). They are created during query execution or other processes that read data from BW InfoProviders.
With BI 7.x, these objects are always created in the ABAP Dictionary.
These objects have names that start with the '/BI0/0' prefix, followed by one alphanumeric character for the object type and an eight-digit numerical identification:
- /BI0/01 ... temporary tables used in connection with query processing. They are used once; the system usually deletes them automatically (unless the process dumped) and the names are reused.
- /BI0/02 ... tables used for external hierarchies.
- /BI0/03 ... no longer used.
- /BI0/06 ... similar to '/BI0/01', but the tables are not deleted from the SAP DDIC.
- /BI0/0D ... tables used in the context of the open hub functionality. They are not permitted to be deleted.
- /BI0/0P ... tables that occur in the course of optimized pre-processing involving many tables. These tables can be reused immediately after being released.
With BI 7.x, temporary table management has been improved: the SAP_DROP_TMPTABLES report is provided to remove temporary objects.
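As an illustration of the naming scheme above, the following hypothetical sketch matches temporary object names by prefix and skips types described as not to be deleted. The regular expression and the protected-type set are assumptions for illustration only; the actual SAP_DROP_TMPTABLES report applies its own rules:

```python
import re

# Temporary BW objects: '/BI0/0' + one type character + eight digits.
TMP_PATTERN = re.compile(r"^/BI0/0([0-9A-Z])(\d{8})$")

# Hypothetical protection list based on the descriptions above:
# open hub tables ('D') and external hierarchy tables ('2').
PROTECTED_TYPES = {"D", "2"}

def droppable_tmp_objects(names):
    """Return the temporary object names that match the naming scheme and
    do not belong to a protected type."""
    result = []
    for name in names:
        m = TMP_PATTERN.match(name)
        if m and m.group(1) not in PROTECTED_TYPES:
            result.append(name)
    return result
```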
Recommendations
Running this report on a regular basis is not recommended. It may be useful to run it manually in exceptional situations when many temporary objects have been created.
Warning
The SAP_DROP_TMPTABLES report deletes all objects (except the temporary hierarchy tables) without taking into account whether or not they are still in use. This can result, for example, in terminations of queries, InfoCube compression or data extraction.
Related Notes
1139396
514907
Process Chain Logs and Assigned Process Logs Deletion
Created by: | SAP |
Underlying SAP report: | RSPC_LOG_DELETE |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
Process chains are used in a BW landscape to automate loading sequences, and multiple process chains may be running at any given time. Their logs are stored in the database to be available for future analysis. Storing older logs uses up disk space, so older process chain execution logs need to be deleted.
Process chains are executed at different frequencies – daily, weekly, monthly, on a specific calendar day, etc. The following tables hold information about process chain logs: RSPCLOGCHAIN and RSPCPROCESSLOG.
The RSPC_LOG_DELETE report is designed to delete process chain logs.
Recommendation
Our recommendation is to delete process chain logs according to the execution frequency of the process chain. For daily or weekly execution, we recommend deleting logs older than 3 months; for monthly or quarterly execution, logs older than 6 months; for other frequencies, as per requirement.
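The retention rule from this recommendation can be sketched as follows. This is illustrative only: the retention periods are approximated in days, and the function and mapping are hypothetical, not part of RSPC_LOG_DELETE:

```python
from datetime import date, timedelta

# Retention suggested in the recommendation above: 3 months for daily/weekly
# chains, 6 months for monthly/quarterly chains (approximated in days here).
RETENTION_DAYS = {"daily": 90, "weekly": 90, "monthly": 180, "quarterly": 180}

def logs_to_delete(log_dates, frequency, today=None):
    """Return the process chain log dates that are older than the retention
    period suggested for the chain's execution frequency."""
    today = today or date.today()
    cutoff = today - timedelta(days=RETENTION_DAYS[frequency])
    return [d for d in log_dates if d < cutoff]
```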
Process Chain Instances Deletion
Created by: | SAP |
Underlying SAP report: | RSPC_INSTANCE_CLEANUP |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
An instance is generated for each process chain execution and is stored in the tables RSPCINSTANCE and RSPCINSTANCET.
Step list
You can provide the following input information to the settings:
- "Older than" field – entries older than the set date will be deleted;
- If the "without corresponding chain run" option is checked, entries without a corresponding chain run will be deleted;
- If the "delete log" option is checked, the corresponding logs will be deleted.
Recommendation
Execute the task with the appropriate settings to delete the entries from the DB tables.
Automatic Deletion of Request Information in Master Data/Text Provider
Created by: | SAP |
Underlying SAP report: | RSSM_AUTODEL_REQU_MASTER_TEXT |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Introduction
Master data InfoProviders and text InfoProviders contain request information. It may be useful to keep this request information limited, i.e. to delete old requests (administration information) from master data InfoProviders and text InfoProviders, to improve performance and decrease main memory consumption.
The report RSSM_AUTODEL_REQU_MASTER_TEXT deletes obsolete request information from master data InfoProviders and text InfoProviders.
Recommendation
Our recommendation is to schedule the report periodically or in a process chain.
Unused Master Data Deletion
Created by: | SAP |
Underlying SAP report: | RSDMDD_DELETE_BATCH |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
If selective master data deletion is required, there are two options:
At single record level – using this task in master data maintenance and/or deletion mode.
Master data can only be deleted if:
- No transaction data exists for the master data;
- The master data is not used as an attribute of an InfoObject;
- There are no hierarchies for this master data.
Step list
Fill all fields to select required master data:
P_IOBJNM | Name of the InfoObject |
P_DELSID | If checked, SIDs will be deleted |
P_DELTXT | If checked, the texts will be deleted |
P_SIMUL | If checked, simulate only |
P_LOG | If checked, log entries are written |
P_PROT | If checked, detailed usage protocol is provided |
P_SMODE | Search Mode for MD Where-Used Check (default – "O" "Only One Usage per Value") |
P_CHKNLS | If checked, search for usages in NLS |
P_STORE | If checked, Store where used list – changes related to enhancement |
P_REUSEA | If checked, Reuse where used list – changes related to enhancement (all) |
P_REUSEU | If checked, Reuse where used list – changes related to enhancement (used) |
P_REUSEN | If checked, Reuse where used list – changes related to enhancement (unused) |
Error Handling Log Analysis
Created by: | SAP |
Underlying SAP report: | RSB_ANALYZE_ERRORLOG |
Client-dependent: | no |
Settings as variant: | (no settings) |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
During Data Transfer Processes (DTPs), different kinds of errors can occur. The RSBERRORLOG table (Logs for Incorrect Records) stores the error handling logs, among other reasons for: warnings created during master data uploads for duplicate records, and single-record error messages from customer-specific transformation routines.
Thus, the RSBERRORLOG table can grow very significantly and affect overall system performance.
To identify which DTPs are responsible for the table size increase, the report RSB_ANALYZE_ERRORLOG is available. It provides an overview of all DTP error stack requests and the number of records marked with errors.
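Conceptually, such an analysis aggregates the error log records per DTP and ranks the DTPs by error count. The sketch below uses a hypothetical data model (plain `(dtp_id, request_id)` pairs), not the actual structure of RSBERRORLOG or the report's implementation:

```python
from collections import Counter

def errors_per_dtp(error_log_records):
    """Given (dtp_id, request_id) pairs from an error log, return DTPs
    ordered by the number of erroneous records they produced."""
    counts = Counter(dtp for dtp, _req in error_log_records)
    return counts.most_common()
```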
Recommendation
Our recommendation is to run RSB_ANALYZE_ERRORLOG (in background mode) to learn which DTPs create the most erroneous records in the RSBERRORLOG table. Thereafter, we recommend running the report RSBM_ERRORLOG_DELETE to reduce the size of the RSBERRORLOG table. This action should be done monthly.
Related Notes
1095924
Error Handling Log Deletion
Created by: | SAP |
Underlying SAP report: | RSBM_ERRORLOG_DELETE |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
During Data Transfer Processes (DTPs), different kinds of errors can occur. The RSBERRORLOG table (Logs for Incorrect Records) stores error-handling logs for reasons such as the following:
- Warnings that are created during master data upload for duplicate records.
- Single error record messages in customer-specific transformation routines.
As a result, the RSBERRORLOG table can grow very significantly and can affect overall system performance.
The report RSBM_ERRORLOG_DELETE helps to reduce the size of the RSBERRORLOG table.
Recommendation
Our recommendation is to run the analysis report RSB_ANALYZE_ERRORLOG (in background mode) in advance to learn which DTPs create the most erroneous records in the RSBERRORLOG table. Thereafter, we recommend running the RSBM_ERRORLOG_DELETE report to reduce the size of the RSBERRORLOG table. This action should be done monthly.
Related Notes
1095924
PSA Requests Error Logs Deletion
Created by: | SAP |
Underlying SAP report: | RSSM_ERRORLOG_CLEANUP |
Client-dependent: | no |
Settings as variant: | (no settings) |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
If the relevant requests are deleted from the PSA, in most cases the system automatically deletes the PSA error logs. Otherwise, the program RSSM_ERRORLOG_CLEANUP can be used to delete them.
PSA Partition Check
Created by: | SAP |
Underlying SAP report: | RSAR_PSA_PARTITION_CHECK |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
This task can be used in cases where the partitioning logic of the PSA ignores the PARTNO field, where the error stack for DTPs is created with a global index (built from the key fields of the DataSource), or where the active table of a write-optimized DSO is partitioned even though it has a global index. See more in Related Notes.
Related Notes
1150724
PSA Partno. Correction
Created by: | SAP |
Underlying SAP report: | SAP_PSA_PARTNO_CORRECT |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
Data may be written, without any checks, with an incorrect PARTNO = '0' into the wrong partition. When a deletion run on the PSA/changelog then tries to drop the lowest existing partition, the drop will fail. This task helps to repair the requests written into the incorrect partition and re-assigns them to the correct partitions. See more in Related Notes.
Related Notes
1150724
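Conceptually, the correction re-assigns records that were written with PARTNO = '0' to the partition their request actually belongs to. A hedged Python sketch of that idea follows; the field names and the request-to-partition mapping are invented for illustration and are not the SAP implementation.

```python
def correct_partno(records, request_to_partno):
    """Re-assign records written with an incorrect PARTNO of 0 to the
    partition of their originating request (conceptual sketch only).
    Returns the number of records that were repaired."""
    fixed = 0
    for rec in records:
        if rec["partno"] == 0:
            # Look up the partition the request should have used.
            rec["partno"] = request_to_partno[rec["request"]]
            fixed += 1
    return fixed

# Hypothetical example: one record landed in partition 0 by mistake.
recs = [{"partno": 0, "request": "REQ1"}, {"partno": 3, "request": "REQ2"}]
correct_partno(recs, {"REQ1": 5})  # repairs the first record
```

After the repair, a deletion run can drop the lowest partition without stumbling over stray records.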
PSA Directory Cleanup
Created by: | SAP |
Underlying SAP report: | RSAR_PSA_CLEANUP_DIRECTORY |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
This task is used to check the PSA tables against the directory entries and for partition-related inconsistencies. It is useful in scenarios where PSA requests are deleted from the administrative data but not from the database, where requests are no longer available in the PSA tree but still exist in the corresponding PSA table, or where all the requests in a partition are deleted but the partition is not dropped. The task also checks whether data has been deleted from or written into incorrect partitions. See more in Related Notes.
Related Notes
1150724
PSA Definition Cleanup
Created by: | SAP |
Underlying SAP report: | RSAR_PSA_CLEANUP_DEFINITION |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
When deleting requests from the PSA, or due to terminations when generating the transfer rules, reference-free PSA metadata objects may in some circumstances be generated or left behind. The affected tables partially remain in an inconsistent state; therefore, the DDIC tool displays these tables as incorrect.
Related Notes
1150724
Query Consistency Check
Created by: | SAP |
Underlying SAP transaction: | RSZTABLES |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
Report ANALYZE_RSZ_TABLES is designed as a check-tool for detecting and solving different types of inconsistencies in the main query definition database tables.
The working mode of the task has changed from the previous version. It no longer runs in one-system mode, which means it cannot be executed across the whole landscape. However, the change offers a more detailed output, which enables you to drill down further into output subsections.
Warning
This task cannot be executed on a system connected via SYSTEM type RFC user.
Related Notes
792779
The F Fact Table Unused/Empty Partition Management
Created by: | SAP |
Underlying SAP report: | SAP_DROP_EMPTY_FPARTITIONS |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
Each loading process of transaction data into an InfoCube generates a new request, and for each request a separate partition is created in the F fact table. When the F table is condensed into the E table of a cube, the partition corresponding to a request is deleted after the condensing has been successful; a partition with this name is never created again. In addition, the entry corresponding to this request is deleted from the packet dimension table. Selective deletion from the InfoCube can remove the data of an entire request without removing the accompanying partitions. Empty partitions are those that no longer contain any data records; they typically result from removing data from the InfoCube via a selective deletion. Unusable partitions might still contain some data, but no entry for their request is contained in the packet dimension table of the InfoCube, so the data is no longer taken into consideration in reporting. The remaining partitions are created if a condense run has not ended correctly.
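The distinction between empty and unusable partitions described above can be sketched in Python as follows. The data structures are invented for illustration; the real check works on the database catalog and the packet dimension table.

```python
def classify_partitions(partitions, packet_dimension_requests):
    """Classify F fact table partitions as described in the text:
    'empty' partitions hold no data records; 'unusable' partitions
    still hold data, but their request no longer appears in the
    packet dimension table, so reporting ignores the data."""
    empty, unusable = [], []
    for name, info in partitions.items():
        if info["rows"] == 0:
            empty.append(name)
        elif info["request"] not in packet_dimension_requests:
            unusable.append(name)
    return empty, unusable

# Hypothetical example: R3 is the only request still registered.
parts = {
    "P1": {"rows": 0, "request": "R1"},
    "P2": {"rows": 10, "request": "R2"},
    "P3": {"rows": 4, "request": "R3"},
}
classify_partitions(parts, {"R3"})  # P1 is empty, P2 is unusable
```

Both categories are candidates for removal, since neither contributes to reporting.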
Recommendation
We recommend using the report SAP_DROP_EMPTY_FPARTITIONS to display empty or unusable partitions of the F fact tables of an InfoCube and if necessary to remove them.
Zero Elimination After Compression
Created by: | SAP |
Underlying SAP report: | RSCDS_NULLELIM |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
InfoCube compression often results in records within the compressed table in which the key figures have the value 0. If these key figures have the aggregation behavior SUM, such zero-value records can be deleted. This can be achieved by selecting the option "Zero elimination" during compression or – if this option was not used – by applying the RSCDS_NULLELIM report.
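A minimal sketch of the zero-elimination condition, assuming a record is a dict of key-figure values (illustrative only, not the RSCDS_NULLELIM implementation):

```python
def zero_eliminate(rows, sum_key_figures):
    """Keep only records in which at least one SUM-aggregated key
    figure is non-zero; records whose SUM key figures are all zero
    carry no information after compression and can be deleted."""
    return [r for r in rows if any(r[kf] != 0 for kf in sum_key_figures)]

# Hypothetical example with two key figures, both aggregated by SUM.
rows = [{"amount": 0, "quantity": 0}, {"amount": 1, "quantity": 0}]
zero_eliminate(rows, ["amount", "quantity"])  # drops the all-zero record
```

Note that the condition applies only to key figures with SUM behavior; a record with MIN/MAX key figures cannot safely be dropped on a zero value alone.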
Notes
The elimination of zero values may result in orphaned dimension entries. Therefore, regular dimension trimming is required when using zero elimination.
Recommendation
We recommend using the report RSCDS_NULLELIM after running the cube compression task if the "Zero Elimination" flag was not set.
Cluster Table Reorganization
Created by: | SAP |
Underlying SAP report: | RSRA_CLUSTER_TABLE_REORG |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Introduction
The cluster table RSIXWWW can contain large datasets that can no longer be accessed. This results in disk-space bottlenecks.
Recommendation
We recommend running the program RSRA_CLUSTER_TABLE_REORG regularly to delete the entries in the table RSIXWWW that are no longer required.
BEx Web Application Bookmarks Cleanup
Created by: | SAP |
Underlying SAP transaction: | RSWR_BOOKMARK_REORG |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
Bookmarks are saved navigational states of a Web application. They are created in the BI system when using Export and Distribute -> Bookmark function, when saving an ad hoc analysis in BEx Web Analyzer or when personalizing Web applications.
With this task, you can reorganize the bookmarks that result from Web templates in SAP NetWeaver 7.0 format.
Warning
This task cannot be executed on a system connected via SYSTEM type RFC user.
BEx Web Application 3.x Bookmarks Cleanup
Created by: | SAP |
Underlying SAP transaction: | RSRD_ADMIN_BOOKMARKS_3X |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
Bookmarks are saved navigational states of a Web application. They are created in the BI system when using Export and Distribute -> Bookmark function, when saving an ad hoc analysis in BEx Web Analyzer or when personalizing Web applications.
With this task, you can reorganize the bookmarks that result from Web templates in SAP NetWeaver 3.x format.
Warning
This task cannot be executed on a system connected via SYSTEM type RFC user.
BEx Broadcaster Bookmarks Cleanup
Created by: | SAP |
Underlying SAP report: | RSRD_BOOKMARK_REORGANISATION |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
A bookmark ID is the identification number (ID) for a saved navigational state of a Web application. A view ID is the identification number (ID) for a saved navigational state of a query. These IDs are generated when online links for information broadcasting are created.
With this task, it is possible to reorganize and delete bookmark IDs and view IDs that were created by the system for information broadcasting and that are no longer needed.
Recommendation
To automate the reorganization of bookmark and view IDs, this task can be scheduled to run periodically in the background.
Jobs without Variants Deletion
Created by: | SAP |
Underlying SAP report: | RS_FIND_JOBS_WITHOUT_VARIANT |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
Sometimes the error message "Variant xxx does not exist" occurs. This is due to inconsistencies that arose during a system or client copy or a transport, or to a call-back happening in the wrong client.
Recommendation
Use this task according to the related notes to repair the inconsistencies or the call-back/client mismatch.
Related Notes
1455417
Delete BW RSTT Traces
Created by: | SAP |
Underlying SAP report: | RSTT_TRACE_DELETE |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | yes |
Introduction
Sometimes RSTT traces cannot be systematically deleted. In this case, you can use this task to correct this problem.
Related Notes
1142427
Deletion of orphaned entries in Errorstack/Log
Created by: | SAP |
Underlying SAP report: | RSBKCLEANUPBUFFER |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
The size of the RSBKDATA or RSBKDATAINFO table grows constantly. This task removes orphaned data from the temporary storage of the Data Transfer Process for requests.
Related Notes
2407784 - How to clean up RSBKDATA and related tables
Clean up the DTP Runtime Buffer
Created by: | SAP |
Underlying SAP report: | RSBKCHECKBUFFER |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
The size of the RSBKDATA or RSBKDATAINFO table grows constantly. These tables contain the temporary storage of the Data Transfer Process, and their contents can be deleted once a DTP has completed successfully.
Recommendation
SAP recommends running this task daily.
Related Notes
2407784 - How to clean up RSBKDATA and related tables
Operational Delta Queue cleanup
Created by: | SAP |
Underlying SAP report: | ODQ_CLEANUP |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
The Operational Data Provisioning (ODP) framework has been greatly enlarged during the last releases of SAP Business Warehouse (BW). The framework uses the Operational Delta Queue (ODQ) as temporary storage for data that is loaded from the data sources and pushed on to all subscribers of the queue. These queues should be cleaned up to avoid unnecessary utilisation of database resources.
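The cleanup logic amounts to removing queue entries that all subscribers have already confirmed and whose retention period has passed. A rough Python sketch under those assumptions follows; the field names are invented and this is not the ODQ_CLEANUP implementation.

```python
from datetime import datetime, timedelta

def cleanup_queue(requests, retention_days, now=None):
    """Keep only delta-queue entries that are still needed: an entry
    is removed once every subscriber has confirmed it AND it is older
    than the retention period (conceptual sketch only)."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=retention_days)
    return [r for r in requests
            if not (r["confirmed_by_all"] and r["created"] < cutoff)]

# Hypothetical queue state on 2024-01-31 with a 14-day retention.
reqs = [
    {"confirmed_by_all": True,  "created": datetime(2024, 1, 1)},   # removed
    {"confirmed_by_all": False, "created": datetime(2024, 1, 1)},   # kept
    {"confirmed_by_all": True,  "created": datetime(2024, 1, 30)},  # kept
]
cleanup_queue(reqs, 14, now=datetime(2024, 1, 31))
```

Unconfirmed entries are deliberately kept, which is why unconfirmed requests piling up (see the related notes on ODQMON) can block the cleanup.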
Recommendation
SAP recommends running this task daily.
Related Notes
- 2286619 - ODQ_CLEANUP deletes entries in simulation mode
- 2190229 - Composite request can only be confirmed by the subscriber
- 2166526 - Solution to delete unconfirmed requests in transaction ODQMON
- 1836773 - How to delete outdated entries from delta queues
- 1745317 - ODQ: Cleanup Notification does not pass Queue Names
- 1735928 - ODQ: Improvements for Cleanup
- 1655273 - ODQ: More than one cleanup job per client
- 1599248 - ODQ: Real-time Support
- 1507908 - ODQ: Client dependency of the Cleanup Job is undefined
- 706478 - Preventing Basis tables from increasing considerably
Metadata of the object versions cleanup
Created by: | SAP |
Underlying SAP report: | RSO_DELETE_TLOGO_HISTORY |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
The RSOTLOGOHISTORY table grows over time. It contains the metadata of object versions, which can be deleted.
Settings
You can specify the object type and the date up to which data should be deleted. If you want to execute the task in test mode, use the option 'Check only'.
Recommendation
We recommend creating a dynamic variant and scheduling this task for the relevant objects on a monthly basis.
Related Notes
2248171 - RSOTLOGOHISTORY Cleanup Report
BW Request Status Management cleanup
Created by: | SAP |
Underlying SAP report: | RSPM_HOUSEKEEPING |
Client-dependent: | no |
Settings as variant: | yes |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
The RSPM* tables contain information regarding the requests executed on the BW system. When these tables are big, the overall performance of creating and accessing information regarding BW requests (DTP loads, activation, and so on) can slow down.
Settings
- Name of the ADSO object where the cleanup should be performed.
- Threshold request – all requests created before the threshold request, as well as the threshold request itself, are cleaned up.
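The threshold semantics above can be expressed compactly: assuming request identifiers increase monotonically, the cleanup removes every request up to and including the threshold. A simplified sketch, not the RSPM_HOUSEKEEPING implementation:

```python
def requests_to_clean(request_ids, threshold):
    """Select the requests the cleanup would remove: everything
    created before the threshold request, plus the threshold
    request itself (IDs are assumed to increase over time)."""
    return [r for r in request_ids if r <= threshold]

# Hypothetical request IDs; threshold 3 removes requests 1, 2 and 3.
requests_to_clean([1, 2, 3, 4, 5], 3)
```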
Recommendation
We recommend executing this task on a regular basis.
Related Notes
2716063 - Improvements of RSPM housekeeping
ERP TASKS
Change Documents Cleanup
Created by: | DVD |
Client-dependent: | yes |
Settings as variant: | no |
Support for Recycle bin: | yes |
Introduction
Change documents (changedocs) record changes to business objects. During archiving, changedocs are archived together with the main objects. But in other circumstances (manual deletion, program failure, messy conversion/migration…) changedocs can stay in the system for non-existent objects.
SAP provides just one report for deleting phantom change documents (SD_CHANGEDOCUMENT_REORG), but it covers only SD documents.
In OutBoard Housekeeping, the "Change Documents Cleanup" activity checks and deletes phantom change documents for a wider range of object types:
|
|
Step list
In the main OutBoard Housekeeping menu select "Change Documents Cleanup – Settings" under the ERP System Tasks.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation. Should you create new settings, the "Description" field needs to be filled.
Figure 130: Change Docs Cleanup – Settings detail
To run this task in a Test mode, mark the test mode checkbox. This allows you to display available cleanup results in logs.
Once the Settings ID for Change Docs Cleanup is specified, you can run the activity from the Main menu. There are several options for starting the activity. For more information, please refer to the Execute and Schedule sections of this user documentation.
Longtexts Cleanup
Created by: | DVD |
Client-dependent: | yes |
Settings as variant: | no |
Support for Recycle bin: | yes |
Introduction
Long texts are archived together with the main objects during archiving. But in other circumstances (manual deletion, program failure, messy conversion/migration…) the texts can stay in the system for non-existent objects.
SAP provides a report for deleting phantom texts (RVTEXTE); however, the Housekeeping task additionally supports the Recycle bin.
As with change documents, long texts can also stay in the system for non-existent objects. In OutBoard Housekeeping, the "Longtexts Cleanup" activity checks and deletes phantom long texts for the following object types:
|
|
Step list
In the main OutBoard Housekeeping menu select "Longtexts Cleanup – Settings" under the ERP System Tasks.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation. Should you create new settings, the "Description" field needs to be filled.
Figure 131: Longtexts Cleanup – Settings detail
It is possible to run this task in a Test mode. Mark this option to display available cleanup results in logs.
Once the Settings ID for Longtexts Cleanup is specified, you can run the activity from the Main menu. There are several options for starting the activity. For more information, please refer to the Execute and Schedule sections of this user documentation.
Marking of Unused Customers / Vendors
A common feature of ERP systems is that they collect data – master data as well as transactional data. Over the years a system is in use, part of the master data (MD) becomes obsolete, e.g. specific vendors and customers are no longer business partners, materials are replaced by others, and so on.
The MD lifecycle thus leaves a portion of the master data outdated, and the space it occupies in the database is allocated uselessly.
There is no straightforward way to delete outdated MD, because over time a growing amount of transactional data becomes bound to it. For legal reasons and for database consistency, these documents cannot stay orphaned in the database without the appropriate MD. Therefore, a good and safe MD housekeeping approach consists of the following steps:
- The list of MDs is selected (as a variant).
- Selected MD that, together with its bound documents, has remained untouched for a reasonable period is marked as blocked and 'for deletion'.
- The documents bound with selected MD will be archived and deleted (SARA transaction). The same occurs with MD.
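Step 2 of this approach – flagging master data whose last activity is older than the inactivity threshold – can be sketched in Python like this. The record layout and field names are invented for illustration; they do not correspond to SAP table fields.

```python
from datetime import date, timedelta

def mark_for_deletion(master_records, inactive_days, today=None):
    """Flag master data whose last activity (including activity on
    its bound documents) is older than the inactivity threshold.
    Returns the IDs of the records that were marked."""
    today = today or date.today()
    cutoff = today - timedelta(days=inactive_days)
    marked = []
    for rec in master_records:
        if rec["last_activity"] < cutoff:
            rec["deletion_flag"] = True  # corresponds to 'marked for deletion'
            marked.append(rec["id"])
    return marked

# Hypothetical customers: C1 has been inactive for years, C2 is recent.
records = [
    {"id": "C1", "last_activity": date(2020, 1, 1)},
    {"id": "C2", "last_activity": date(2024, 5, 1)},
]
mark_for_deletion(records, 365, today=date(2024, 6, 1))
```

The flag itself deletes nothing; the actual archiving and deletion happen later in the SARA run described in step 3.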
Marking of Unused Customers
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation. Should you create new settings, the "Description" field needs to be filled, and you must select whether or not the settings ID will run in test mode.
You may choose to set select conditions for marking unused customers:
- Customer ID
- Inactive longer than (Days)
Figure 132: Marking of Unused Customers – Settings detail
Once the Settings ID for Marking Unused Customers is specified, you can run the activity from the Main menu. There are several options for starting the activity. For more information, please refer to the Execute and Schedule sections of this user documentation.
Note: Marking of unused customers has no visible output; the customers that fall into select conditions specified in the Settings ID will be flagged with 'marked for deletion'. This flag will be used during SARA archiving with object FI_ACCPAYB with checked 'Consider Deletion Indicator'.
Marking of Unused Vendors
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation. Should you create new settings, the "Description" field needs to be filled, and you must select whether or not the settings ID will run in test mode.
You can choose to set select conditions for marking unused vendors:
- Vendor ID
- Inactive longer than (Days)
Figure 133: Marking of Unused Vendors – Settings detail
Once the Settings ID for Marking Unused Vendors is specified, you can run the activity from the Main menu. There are several options for starting the activity. For more information, please refer to the Execute and Schedule sections of this user documentation.
Note: Marking of unused vendors has no visible output; the vendors that fall into select conditions specified in the Settings ID will be flagged with 'marked for deletion'. This flag will be used during SARA archiving with object FI_ACCRECV with checked 'Consider Deletion Indicator'.
Schedule Manager Tables Cleanup
Created by: | DVD |
Client-dependent: | yes |
Settings as variant: | no |
Support for Recycle bin: | no |
Introduction
The Schedule Manager (transaction SCMA) enables you to monitor periodic tasks, such as period-end closings in overhead cost controlling. In the Monitor (transaction SCMO) you can display information about all scheduled jobs. The Monitor is a component of the Schedule Manager. The tool saves the information in its own tables (SM*) such as SMMAIN (main information about the entry), SMPARAM (processing parameters) and SMSELKRIT (selection criteria). These tables are prone to growing very large.
Recommendation
You can keep the size of Schedule Manager tables down by regularly deleting monitoring data that is no longer used. Once this data is deleted, you will not be able to monitor any more jobs that have already run. Therefore, it is essential that you only delete data that is no longer needed for monitoring, such as period-end closing data that is older than one year.
Related Notes
803641
Authorisations: authority object 'B_SMAN_WPL'
CRM TASKS
BDocs Deletion
Created by: | DVD |
Client-dependent: | yes |
Settings as variant: | no |
Support for Recycle bin: | no |
Introduction
BDocs are CRM-related intermediate documents. Unlike IDocs, which are asynchronous, BDocs can be used in both synchronous and asynchronous modes.
BDocs are used in CRM for data exchange, and they can be quite complex compared to IDocs. They are transferred through qRFC or tRFC.
The tables for the business document message flow and the Middleware trace can increase in size considerably. This may cause performance issues during the processing of BDoc messages.
This task extends standard SAP report SMO6_REORG.
Step list
In the main OutBoard Housekeeping menu select "BDocs Deletion – Settings" under the CRM System Tasks.
You can create new settings (1) by entering a new ID or choose from existing settings (2). For more information, please refer to the Creating a Settings ID section of this user documentation. Should you create new settings, the "Description" field needs to be filled.
Figure 134: BDocs Deletion – Settings detail
Once the Settings ID for BDocs Deletion is specified, you can run the activity from the Main menu. There are several options for starting the activity. For more information, please refer to the Execute and Schedule sections of this user documentation.
Recommendation
We recommend scheduling this task on a regular basis and setting the delete process to remove BDoc messages and trace data older than 7 days.
Related Notes
206439
Delete Inactive Versions of Products
Created by: | SAP |
Underlying SAP report: | COM_PRODUCT_DELETE_INACTIV |
Client-dependent: | yes |
Settings as variant: | no |
Support for Recycle bin: | no |
Introduction
The task reorganizes the product master: it deletes all inactive versions of products with status I1100 = DELETED.
Recommendation
This task should be scheduled daily.
OutBoard Housekeeping SYSTEM TASKS
Deletion of old runID
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
Datavard OutBoard Housekeeping uses multiple tables for handling the execution of tasks. Information such as when and how long a task was running is no longer needed after a certain period.
With this task, it is possible to delete already finished runIDs that are older than a certain date.
Recommendation
Our recommendation is to delete runIDs on a regular basis.
RecycleBin Cleanup
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
The RecycleBin is an essential part of OutBoard Housekeeping. For specific tasks, it temporarily stores deleted data, and if necessary this data can be restored. The number of days for which the data is kept in the RecycleBin is called the retention time.
When the retention time expires, this data is then permanently deleted from RecycleBin. This process is provided by the scheduled job FS_SSYSRECBIN_CUDUMMY_SETT.
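Conceptually, the cleanup selects entries whose deletion date plus the retention time lies in the past. A minimal Python sketch under that assumption follows; the entry structure is invented for illustration.

```python
from datetime import date, timedelta

def expired_entries(entries, retention_days, today=None):
    """Return the RecycleBin entries whose retention time has passed
    and which the cleanup job would therefore permanently delete
    (conceptual sketch, not the actual cleanup job)."""
    today = today or date.today()
    return [e for e in entries
            if e["deleted_on"] + timedelta(days=retention_days) < today]

# Hypothetical bin contents checked on 2024-03-01 with 14-day retention.
bin_entries = [
    {"deleted_on": date(2024, 1, 1)},   # expired – would be purged
    {"deleted_on": date(2024, 2, 28)},  # still within retention
]
expired_entries(bin_entries, 14, today=date(2024, 3, 1))
```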
The RecycleBin cleanup task doesn't require any specific settings.
It is possible to execute this RecycleBin cleanup task the same way as all other tasks – directly from System view or via scheduling from Activity view (for single system or landscape branch).
Recommendations
To ensure that RecycleBin cleanup doesn't overload the system, it is recommended to run it daily outside of Business Day Hours (this is locally dependent).
RecycleBin Size Recalculation
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
It can happen that the RecycleBin size information is not accurate. The size recalculation task updates this information.
The RecycleBin size recalculation task doesn't require any specific settings.
It is possible to execute this RecycleBin size recalculation task the same way as all other tasks – directly from System view or via scheduling from Activity view (for single system or landscape branch).
Recommendations
To keep the precise data size information of the RecycleBin, this task should be executed regularly.
Task Analysis
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
The main goal of OutBoard Housekeeping is to remove temporary or obsolete data from SAP systems. This is achieved by regularly running the OutBoard Housekeeping cleaning tasks with appropriate settings. In general, these settings are created and maintained by the Housekeeping user; however, this requires a level of expert skills in SAP Basis and BW.
Therefore, OutBoard Housekeeping provides a helper: default settings starting with "_" are distributed together with the OutBoard Housekeeping packages. They are based on best practices for the specific OutBoard Housekeeping task.
For a limited set of OutBoard Housekeeping tasks, an analysis of the data can be executed. The analysis output shows how much space can be saved by executing the task, depending on the thresholds set in the selected settings. These settings can be modified, so the potential outcome of housekeeping can be evaluated in advance.
Step list
Double click the system you want to work on. Scroll down the main OutBoard Housekeeping menu; select "OutBoard Housekeeping Task Analysis" under the OutBoard Housekeeping System Tasks. You can then create new settings:
- By entering a new ID
- By choosing from existing settings
For more information, please refer to the Creating a Settings ID section of this user documentation. Should you create new settings, the "Description" field needs to be filled. Once the Settings ID for OutBoard Housekeeping Task Analysis is specified, you can run the activity from the Main menu. There are several options for starting the activity. For more information, please refer to the Execute and Schedule sections of this user documentation. In this case, the execution and results can also be reached by right-clicking the selected group or system in the System View.
Figure 135: Analyses from System view
Figure 136: Result of system analysis for default settings
Recommendations
To keep the analysis up to date, it is recommended to run this task after each execution of the OutBoard Housekeeping tasks to which the analyses are connected.
Scheduling of System Lock
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
If a system (or group of systems) needs to be locked on a regular basis, this can be scheduled using the task Scheduling of a System Lock.
In the task settings, set up the period using the time units days and hours. Once this setting exists, the task can be scheduled by defining the start period and the recurrence of the scheduled system lock.
For example, if system 'X' should be locked for the first two days of every month, a variant using a period of 2 days should be created; afterwards, the task should be scheduled. Once the locking period is over, the system(s) will be unlocked automatically. If the lock should be cancelled before the locking period is over, the task Cancel Scheduled System Lock can be used.
Check for the statement "Pause / resume" support in the task chapter header. For these tasks, an automated pause/resume option is available when a system lock is scheduled. When the system is being locked, the task finishes its last portion (request or package) and pauses. When the system is unlocked again, the cockpit resumes the paused task.
Step List
Double click the system you want to work on. Scroll down the main OutBoard Housekeeping menu and select "Scheduling of System Lock" under the OutBoard Housekeeping System Tasks. You can create new settings: by entering a new ID or choose an existing setting. For more information, please refer to the Creating a Settings ID section of this user documentation. After filling Settings ID, press continue to get to Schedule system lock screen.
Figure 137: Schedule system lock screen
Fill in Description to specify your settings. In 'Locking time' part of the screen, there are 2 time characteristics to choose from: Days and Hours. Choose one and Save Settings.
To schedule a system lock, press the 'Schedule' button and fill in your Settings ID and a job run definition. There are two execution options: scheduled and unscheduled task. Check the calendar to verify that the task was correctly scheduled. After a system lock execution, display the Logs to see whether all the processes were properly executed. To review detailed information on an active lock, go to the calendar or display the scheduling-of-system-lock logs.
Cancel Scheduled System Lock (Ad hoc)
Created by: | DVD |
Client-dependent: | no |
Settings as variant: | no |
Support for Recycle bin: | no |
Recommended for HANA pre-migration housekeeping: | no |
Introduction
This task allows an emergency cancel of the scheduled system lock in case a lock is to be cancelled within the valid locking period.
Check for the statement "Pause / resume" support in the task chapter header. For these tasks, an automated pause/resume option is available when a system lock (ad hoc) is scheduled. When the system is being locked, the task finishes its last portion (request or package) and pauses. When the system is unlocked again, the cockpit resumes the paused task.
Note: This is only an emergency unlock. The scheduled lock will be automatically deleted after the required period.
Exception: This task doesn't unlock the manual system lock (system locked through Landscape editor).
Step list
Switch to the activity view and scroll down the main OutBoard Housekeeping menu. Double click "Cancel Scheduled System Lock (Ad hoc)" under the OutBoard Housekeeping System Tasks, then execute the task on the system.