(DTERP-2211) Archive infostructure table
The archive infostructure table (ZARIX) can be replaced with the virtual index functionality. The virtual index can work in different modes: it can use a transparent archive or a replicated archive infostructure table as its data source, and it can also run in a mode where the archive infostructure on the primary database is still used. Below you can find the details of how the extraction process and the virtual index work.
High level overview
The overview below names the main parts of the extraction and cleanup process.
Full extraction using Glue
Deletion of ZARIX on the primary database with the deletion report
Standard archiving and creation of indexes
Reading data from the index
Below you can find the details of how data is read once the virtual index is enabled. A read request is split into two separate requests: one goes to the primary database and the other to the external storage via Storage Management.
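Conceptually, the split read can be pictured as follows. This is a minimal Python sketch, not the actual Storage Management API; the helper callables `read_primary` and `read_external` are hypothetical stand-ins for the two split requests.

```python
def read_index(selection, read_primary, read_external):
    """Combine index records from the primary database and external storage.

    read_primary and read_external are hypothetical callables standing in
    for the two split requests; each returns a list of index records.
    """
    primary_rows = read_primary(selection)    # records still on the primary DB
    external_rows = read_external(selection)  # records already moved to storage
    # The caller sees one combined result set, as with the original ZARIX table.
    return primary_rows + external_rows

# Example with stubbed sources:
rows = read_index(
    selection={"object": "FI_DOCUMNT"},       # hypothetical selection
    read_primary=lambda sel: [{"archivekey": "000001-001", "src": "db"}],
    read_external=lambda sel: [{"archivekey": "000002-001", "src": "ext"}],
)
```

The point of the sketch is only that both parts of the data are returned together, so the consumer does not need to know where each record currently resides.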
Initial extraction run
You can walk through the process using the steps defined in the extraction run.
Transaction: /DVD/CRP_AIND_TH
The run defines multiple steps, after which the data from the archive infostructure table is moved to the external storage table, and you can read this data as before using the virtual index.
Choose infostructures
This step enables you to choose which infostructures you want to extract and clean up. Fill in at least one desired archive infostructure, select the option, and execute it.
Please make sure you pick an archive infostructure that is active; otherwise, the table might not exist.
Create Glue extractors
This step creates all the objects in SNP Glue necessary to carry out the extraction from the archive infostructure table to external storage. This includes creating a table on the external storage and the extractor objects: Fetcher, Consumer, and the extraction process itself. When you enter the step, a generator screen appears with prefilled tables of the archive infostructures that you chose in the previous step. Then follow these steps:
Click the Details button on the first row of the Glue table generation table
In Settings section
Change prefix for the name of generated tables if needed
Choose the development package to which the tables will be generated
Enter Storage ID where you want to store the data
Storage ID can be created in transaction /DVD/SM_SETUP
Switch OFF option Add Glue request field
Click the Details button on the first row of the Extractor 2.0 generation table
In Settings section
Change prefix for the name of generated tables if needed
Choose the development package to which the objects will be generated
Select Source object type: DDIC
Select Target object type: GL_TAB
Target object ID: 1
Press ENTER
Click on Generate or Generate (background)
After successful generation, open transaction /DVD/GL80 in a new window and find the generated Fetcher(s). In edit mode, change the filter definition and check the 'Selection' checkbox for the field ARCHIVEKEY. Save the Fetcher and confirm the reactivation.
Enable Virtual Index
Before the extraction and cleanup step itself starts, this step enables reading data from the archive index for the selected archive infostructures. This is important because once extraction and cleanup start, part of the data will be in the original table and part on the external storage. Therefore, the virtual index must be switched on so that all of the data remains continuously available.
Generate Extract and Cleanup
Extraction can be carried out in parallel; one portion moves the indexes for one archiving key/file. The tasks are identified and generated here.
Execute Extract and Cleanup
This step executes the generated extraction and cleanup tasks for each archiving key.
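The Generate and Execute steps together can be sketched as follows. This is a hypothetical Python illustration of the task model described above (one portion per archiving key, executed in parallel), not the actual implementation; the `worker` callable stands in for a real extraction-and-cleanup task.

```python
from concurrent.futures import ThreadPoolExecutor

def generate_tasks(archive_keys):
    # Generate step: one task (portion) per archiving key/file.
    return [{"archivekey": key, "status": "planned"}
            for key in sorted(set(archive_keys))]

def execute_tasks(tasks, worker, max_parallel=4):
    # Execute step: run the extraction and cleanup task for each
    # archiving key, several keys in parallel.
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(worker, tasks))

# Example with a stubbed worker; duplicate keys collapse into one task.
tasks = generate_tasks(["000001-001", "000002-001", "000001-001"])
results = execute_tasks(tasks, worker=lambda t: {**t, "status": "done"})
```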
Regular extraction
Once the initial extraction is done, the archive infostructure table can, over time, fill up again with new data for new archive files. To extract and clean up these data, a regular job can be executed to release the space on the primary database.
Report: /DVD/CRP_AIND_EXTRACT_CLEANUP
This report can be executed for one archive infostructure and specified archiving keys. If you do not set any archiving keys, it takes all of the data currently available in the primary database, extracts it, and then performs the cleanup. Application logs are also available for the process.
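The key-selection behavior described above can be sketched like this. This is a hypothetical Python illustration of the rule "no keys specified means take everything currently in the primary database", not the report's actual logic.

```python
def keys_to_process(requested_keys, keys_in_primary_db):
    """Return the archiving keys the run should process.

    requested_keys      -- keys entered on the selection screen (may be empty)
    keys_in_primary_db  -- keys currently present in the primary database
    """
    if not requested_keys:
        # No keys specified: extract and clean up everything available now.
        return sorted(keys_in_primary_db)
    # Otherwise process only the requested keys that actually exist.
    return sorted(set(requested_keys) & set(keys_in_primary_db))
```

For example, an empty request returns every key present, while an explicit request is filtered down to the keys that still exist on the primary database.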