
This page provides general guidelines on how to set up Snowflake Storage to work with Datavard Storage Management.

1. General prerequisites

1.1 Open Ports

To enable communication between the SAP system and the Snowflake host, the following port must be reachable from the SAP system:

Port | Type  | Snowflake Host
443  | https | <account name>.<region>.<platform>.snowflakecomputing.com
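Reachability can be verified directly from the SAP host before any further setup. A minimal sketch, assuming a bash shell with the `timeout` utility; the account URL below is a placeholder, substitute your own:

```shell
#!/usr/bin/env bash
# Sketch only: checks TCP reachability of the Snowflake endpoint from the
# SAP host. The account URL below is a placeholder -- use your own.
check_port() {
  local host="$1" port="$2"
  # bash's /dev/tcp pseudo-device opens a TCP connection; timeout avoids hangs
  if timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "OK: ${host}:${port} is reachable"
  else
    echo "FAIL: ${host}:${port} is not reachable"
  fi
}

check_port "myaccount.eu-central-1.aws.snowflakecomputing.com" 443
```

A FAIL result usually points to a firewall or proxy between the SAP host and the Snowflake endpoint.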

1.2 Warehouse and Database

A Snowflake warehouse and database must already exist.

1.3 Snowflake user

A Snowflake user must be created.

2. OS prerequisites (On SAP host)

This group of requirements relates to the operating systems underlying the SAP system and all of its application servers. Datavard products (e.g. Datavard Glue, OutBoard DataTiering) have been developed and tested on SUSE Linux and Windows Server 2012. By design, however, they are not limited to a particular operating system, provided the requirements listed in this guide are met.

2.1 OS directories

The Datavard connector uses a dedicated directory for its configuration files:

$ ls -ld /sapmnt/<SID>/global/security/dvd_conn

drwx------ 2 dvqadm sapsys 4096 --- /sapmnt/<SID>/global/security/dvd_conn

The directory is used to store drivers and is shared among the SAP application servers. Set its ownership to the <sid>adm user and restrict the permissions accordingly.
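The steps above can be sketched as a small shell helper. The SID and base path are examples; the `chown` to <sid>adm needs root and is therefore only shown as a comment:

```shell
#!/usr/bin/env bash
# Sketch only: creates the connector directory with restrictive permissions.
# The SID and base path are examples -- substitute your own values.
setup_conn_dir() {
  local base="$1" sid="$2"
  local dir="${base}/${sid}/global/security/dvd_conn"
  mkdir -p "${dir}"
  chmod 700 "${dir}"   # rwx for the owner only, matching the listing above
  # Ownership must be given to <sid>adm (run as root), e.g.:
  # chown <sid>adm:sapsys "${dir}"
  ls -ld "${dir}"
}

# Typical invocation on the SAP host (requires write access to /sapmnt):
# setup_conn_dir /sapmnt DVQ
```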

2.2 JDBC Drivers

The JDBC protocol is used to connect to Snowflake. The Snowflake JDBC driver must be stored manually on the operating system and must be accessible to the Datavard connector. Download the Snowflake JDBC driver.

We recommend storing the drivers in a folder within the connector directory, organized into sub-folders to avoid possible conflicts.

WARNING: We have found that Snowflake JDBC driver version 3.13.8 has a problem properly initializing the SSL handshake.
The corresponding error message in the JCo log is:

net.snowflake.client.jdbc.SnowflakeSQLException: JDBC driver encountered communication error. Message: Exception encountered for HTTP request: Unsupported or unrecognized SSL message.

The problem is fixed in the later driver version snowflake-jdbc-3.13.11.jar.

$ ls -ld /sapmnt/<SID>/global/security/dvd_conn/*

drwxr-xr-x 2 dvqadm sapsys 4096 --- /sapmnt/<SID>/global/security/dvd_conn/snowflake

$ ls -l /sapmnt/<SID>/global/security/dvd_conn/snowflake

-rw-r--r-- 1 dvqadm sapsys 4096 --- /sapmnt/<SID>/global/security/dvd_conn/snowflake/snowflake-jdbc-3.12.9.jar
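Placing a downloaded driver jar into its own sub-folder can be sketched as follows; the helper name and paths are illustrative:

```shell
#!/usr/bin/env bash
# Sketch only: places a downloaded Snowflake JDBC jar into its own sub-folder
# of the connector directory, so drivers for different storages do not clash.
install_driver() {
  local conn_dir="$1" jar="$2"
  mkdir -p "${conn_dir}/snowflake"      # one sub-folder per driver
  cp "${jar}" "${conn_dir}/snowflake/"
  ls -l "${conn_dir}/snowflake"
}

# Typical invocation (paths are examples):
# install_driver /sapmnt/DVQ/global/security/dvd_conn ~/Downloads/snowflake-jdbc-3.13.11.jar
```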

3. SAP configuration

3.1 Datavard JAVA connector

The Java connector is a critical middleware component. Follow the steps in /wiki/spaces/ReuseLib/pages/1550812246 to set it up or upgrade it to a new version before you continue.

The Java runtime environment is constrained by the Snowflake JDBC driver requirements: it must be a 64-bit environment running Java 1.8 or higher.

WARNING: If JCo runs on JDK 16, add a custom argument in the Advanced tab → Additional java starting arguments: -Djdk.module.illegalAccess=permit
JDK 16 introduced strong encapsulation of JDK internals (see the release notes at https://www.oracle.com/java/technologies/javase/16-all-relnotes.html#JDK-8256299), which causes problems in combination with the Snowflake JDBC driver.
In JDK 17 the option -Djdk.module.illegalAccess=permit no longer works, so JDK 17 cannot be used for connections through the Snowflake JDBC driver (for more information see https://jdk.java.net/17/release-notes#JDK-8266851).
The Snowflake documentation only states that the JDBC driver must be installed in a 64-bit environment and requires Java 1.8 (or higher); see the official page: https://docs.snowflake.com/en/user-guide/jdbc.html.
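The version rules above can be summarized in a small helper. This is a sketch only; the function name is hypothetical and the running major version would come from parsing `java -version` output:

```shell
#!/usr/bin/env bash
# Sketch only: given a JDK major version, print whether JCo needs the
# illegal-access workaround. The helper name is hypothetical.
jdk_flag_hint() {
  local major="$1"
  if [ "${major}" -eq 16 ]; then
    echo "add -Djdk.module.illegalAccess=permit to the JCo start arguments"
  elif [ "${major}" -ge 17 ]; then
    echo "unsupported: the illegalAccess workaround was removed in JDK 17"
  else
    echo "no extra JVM argument needed"
  fi
}

# The running major version can be read from e.g.:
#   java -version 2>&1 | head -n 1
jdk_flag_hint 16
```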

3.2 Storage Management setup

The final step in SAP & Snowflake connectivity is the creation of two storages in transaction /DVD/SM_SETUP: the Snowflake storage, which represents the table level of the Snowflake infrastructure, and the flat Snowflake stage, which handles temporary files.

3.2.1 Snowflake Stage (SNOW_STAGE)

This is a binary storage pointing to the internal stage of the Snowflake user. It uses the extended functionality of the Snowflake JDBC driver for loading and unloading data.

Storage ID - Logical name of the storage connection
Storage type - SNOW_STAGE (Snowflake internal stage)
Full name of Snowflake account - <account name>.<region>.<platform> (more information can be found in Snowflake account name doc).
User - Snowflake user
Password - User password
Password for JDBC Connection is Hashed - Indicator whether the password is hashed (you can create a hash of the password by using report /DVD/XOR_GEN)
Java connector RFC - TCP/IP RFC destination used for communication with Datavard Java connector
Driver path - Snowflake driver directory path

3.2.2 Snowflake Storage (SNOWFLAKE)

This is a transparent storage pointing to the Snowflake database. It uses the Snowflake JDBC driver to send SQL commands.

Storage ID - Logical name of the storage connection
Storage type - SNOWFLAKE (Snowflake transparent storage)
Referenced storage - Storage ID of Snowflake stage area for temporary file upload/download
Java connector RFC - RFC Destination used for communication with Datavard JCo
Account name - <account name>.<region>.<platform> (more information can be found in Snowflake account name doc).
Warehouse - Name of an existing warehouse in Snowflake
Database - Name of an existing database in Snowflake
Database schema - Name of an existing database schema in Snowflake
Role - Default role in Snowflake (parameter is not mandatory)
Driver path - Snowflake driver directory (Logical File of type DIR defined in transaction FILE)
User - Snowflake user
Password - User password (preferably in hashed format)
Password for JDBC Connection is Hashed - Indicator whether the password is hashed (you can create a hash of the password by using report /DVD/XOR_GEN)
Table name prefix - optional text value for naming prefix of all Glue tables created within this storage
Wrap values in staged CSV files - depending on table content, it may be necessary to encapsulate field values to avoid errors during INSERT into Snowflake table from staging CSV files
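For orientation, the account, warehouse, database, and schema parameters above correspond to fields of the JDBC connection URL described in the Snowflake JDBC documentation. A minimal sketch of how such a URL is composed; the helper and all parameter values are illustrative:

```shell
#!/usr/bin/env bash
# Sketch only: composes a Snowflake JDBC URL from the storage parameters.
# Account, warehouse, database, and schema values are placeholders.
build_jdbc_url() {
  local account="$1" warehouse="$2" db="$3" schema="$4"
  echo "jdbc:snowflake://${account}.snowflakecomputing.com/?warehouse=${warehouse}&db=${db}&schema=${schema}"
}

build_jdbc_url "myaccount.eu-central-1.aws" "MY_WH" "MY_DB" "PUBLIC"
# -> jdbc:snowflake://myaccount.eu-central-1.aws.snowflakecomputing.com/?warehouse=MY_WH&db=MY_DB&schema=PUBLIC
```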
