New Features
Google BigQuery Streaming
SNP Glue can now ingest data into Google BigQuery using the BigQuery Storage Write API. The API combines streaming ingestion and batch loading to achieve the best possible performance.
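The idea of combining streaming with batch loading can be sketched as a small buffered writer: rows arrive one at a time, but are flushed to the target in batches. This is an illustrative sketch only — the class and parameter names are hypothetical, not SNP Glue's or Google's API.

```python
from typing import Any, Callable, List

class BatchingStreamWriter:
    """Illustrative sketch: buffer streamed rows and flush them in
    batches, mimicking how a write-API client can combine streaming
    ingestion with batch loading. All names here are hypothetical."""

    def __init__(self, flush_fn: Callable[[List[Any]], None], batch_size: int = 500):
        self.flush_fn = flush_fn    # e.g. an append-rows call to the target
        self.batch_size = batch_size
        self.buffer: List[Any] = []

    def write(self, row: Any) -> None:
        """Accept a single streamed row; flush once a full batch is buffered."""
        self.buffer.append(row)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        """Send any buffered rows as one batch."""
        if self.buffer:
            self.flush_fn(self.buffer)
            self.buffer = []
```

Batching amortizes the per-request overhead of the target API while still accepting rows as a stream.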
Improvements
Extractors
The configuration screen for Mass execution has been adjusted to simplify the user experience.
A new option "Overwrite variant" has been added to the Glue scheduler. If enabled, the parameters of a scheduler override the parameters of extraction process variants that were added to the scheduler. For more information, please refer to our documentation.
CDC
The robustness of the SNP Glue trigger implementation has been improved. When creating a trigger, the logic checks whether prerequisite objects were successfully created, and provides an error message otherwise.
SNP Glue triggers now provide an option to mark a record as archived (A) instead of deleted. This behavior is configurable: a specific user is assigned to archive data, and all data "deleted" from the database by that user is automatically marked as archived.
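The rule above can be sketched in a few lines: deletions performed by the designated archiving user receive the flag A, all others receive the usual deletion flag. The user name and function are hypothetical illustrations, not SNP Glue identifiers.

```python
# Hypothetical name of the user assigned to archive data.
ARCHIVE_USER = "ARCHIVER"

def change_flag(deleting_user: str) -> str:
    """Return the CDC change flag for a database deletion:
    'A' (archived) if the designated archiving user removed the
    record, 'D' (deleted) otherwise. Illustrative sketch only."""
    return "A" if deleting_user == ARCHIVE_USER else "D"
```

Downstream consumers can then distinguish genuine deletions from archiving without any change to the source tables.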
SLT Integration
Data replicated via SLT to an ABAP target (BAdI implementation) can sometimes get corrupted from a certain table position to the end of the structure.
In the new release, we have introduced a check that guarantees the quality of the replicated data. If corrupt data is detected, the replication fails, forcing the extraction to be repeated. In our experience, the repeated extraction usually succeeds.
You can find more information at the following link.
Storage Management
Date and time fields in SAP can sometimes contain invalid values (e.g. letters in a numeric date value). This can cause problems when the target technology does not accept such values. In this case, you can use a mechanism that adjusts invalid values to make them compatible with the target technology.
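An adjustment of this kind might look like the following sketch: an SAP-style YYYYMMDD date is validated, and anything invalid is replaced with a fallback the target accepts. The validation rules and fallback value are assumptions for illustration; the actual mechanism in SNP Glue may adjust values differently.

```python
def adjust_date(value: str, fallback: str = "00000000") -> str:
    """Return `value` if it is a plausible YYYYMMDD date, otherwise a
    fallback the target technology can store. Illustrative sketch only:
    the real adjustment logic and fallback are product-specific."""
    if len(value) == 8 and value.isdigit():
        month, day = int(value[4:6]), int(value[6:8])
        if 1 <= month <= 12 and 1 <= day <= 31:
            return value
    return fallback  # e.g. letters in a numeric date land here
```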
Snowflake Streaming
The stability of the Snowflake streaming connector has been improved to overcome limitations on both the SAP and the Snowflake side.
The following improvements have been introduced:
You can now use CSV conversion to transfer data from SAP via JCo. The original approach, using only JCo tables, had limitations when replicating binary data.
A failover mechanism has been introduced to automatically switch to CSV conversion in case JCO table conversion fails.
A check to validate the number of inserted rows has been introduced.
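The failover and the row-count check described above can be sketched together: attempt JCo table conversion first, switch to CSV conversion if it fails, then verify that the expected number of rows was inserted. All function names are hypothetical; this is a sketch of the described behavior, not SNP Glue's implementation.

```python
from typing import Callable, List

def transfer(rows: List, jco_table_convert: Callable, csv_convert: Callable) -> int:
    """Sketch of the described failover: JCo table conversion first,
    CSV conversion on failure, then a row-count validation check.
    The converter callables are hypothetical placeholders."""
    try:
        inserted = jco_table_convert(rows)
    except Exception:
        inserted = csv_convert(rows)  # failover to CSV conversion
    if inserted != len(rows):         # validate the number of inserted rows
        raise RuntimeError(f"expected {len(rows)} rows, inserted {inserted}")
    return inserted
```

The row-count check catches silent data loss that neither conversion path would otherwise report.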
Azure Blob storage
You can now specify a container to which the data is replicated. This container is then used instead of generating a new container for each table.
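The choice between a configured container and a generated per-table container can be sketched as below. The per-table naming scheme shown is an assumption for illustration; SNP Glue's generated names may differ.

```python
def target_container(table: str, configured_container: str = "") -> str:
    """Sketch: when a container is configured, replicate every table
    into it; otherwise fall back to a per-table container. The
    generated-name scheme here is a hypothetical example."""
    if configured_container:
        return configured_container
    return table.lower().replace("_", "-")  # assumed per-table naming
```

A single fixed container keeps the storage account tidy when many tables are replicated.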
Fixes
Extractors
You couldn’t specify a filter for the replicated data if parallel processing (mass execution) was used.
The date and time delta capture did not work properly if a filter was defined on the same field as the delta capture; in such a case, the logic could cause high memory consumption. This has been corrected, and the configuration can now be used without limitations.
When scheduling a data extraction, you can specify a database hint that is used in the select statement while extracting the data. In the previous version, this did not work properly for the SAP HANA database.
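For context, a database hint is appended to the extraction SELECT; on SAP HANA this uses the WITH HINT clause. The statement builder below is a hypothetical sketch — only the WITH HINT syntax itself is HANA's.

```python
from typing import List

def build_select(table: str, fields: List[str], db_hint: str = "") -> str:
    """Sketch of applying a database hint to an extraction SELECT.
    The 'WITH HINT (...)' clause is SAP HANA syntax; the surrounding
    builder function is an illustrative assumption."""
    stmt = f"SELECT {', '.join(fields)} FROM {table}"
    if db_hint:
        stmt += f" WITH HINT ({db_hint})"
    return stmt
```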
Object generator
Using the Glue object generator, you couldn’t generate objects for extraction from ODP queues. The generator also failed when a deletion field was specified.
Object navigator
After the initial installation of Glue, you couldn’t add the first Glue object to the Object navigator (GL80).
TMS
Binary settings parameters of a Glue table were not transported to the target system via Glue TMS. The logic has been fixed, and all parameters are now transferred correctly.
Storage management
In Storage management, you can now check multiple storages with a single click.