(SM-2111) Snowflake Storage Setup

This page provides general guidelines on how to set up Snowflake Storage to work with Datavard Storage Management.

1. General prerequisites

1.1 Open Ports

To enable communication between the SAP system and the Snowflake host, the following port must be reachable from the SAP system:

Port | Type  | Snowflake Host
443  | https | <account name>.<region>.<platform>.snowflakecomputing.com
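
A simple way to verify that the port is reachable from each SAP application server is to test the TLS handshake directly, for example:

$ openssl s_client -connect <account name>.<region>.<platform>.snowflakecomputing.com:443 </dev/null

A successful handshake prints the Snowflake server certificate chain; a timeout or "connection refused" points to a firewall or proxy issue.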

1.2 Warehouse and Database

A Snowflake warehouse and database must already exist (a SnowSQL creation sketch is shown below).

1.3 Snowflake user

A Snowflake user must be created.
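
If these objects do not exist yet, a Snowflake administrator can create them, for example with the SnowSQL client. This is a minimal sketch; the object names DVD_WH, DVD_DB, and DVD_USER, the admin user, and the password are placeholders:

$ snowsql -a <account name>.<region>.<platform> -u <admin user> \
      -q "CREATE WAREHOUSE IF NOT EXISTS DVD_WH WITH WAREHOUSE_SIZE = 'XSMALL'"
$ snowsql -a <account name>.<region>.<platform> -u <admin user> \
      -q "CREATE DATABASE IF NOT EXISTS DVD_DB"
$ snowsql -a <account name>.<region>.<platform> -u <admin user> \
      -q "CREATE USER IF NOT EXISTS DVD_USER PASSWORD = '********' DEFAULT_WAREHOUSE = DVD_WH"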

2. OS prerequisites (On SAP host)

This group of requirements relates to the operating systems underlying the SAP system and all of its application servers. Datavard products (e.g. Datavard Glue, OutBoard DataTiering) have been developed and tested on SUSE Linux and Windows Server 2012. However, by design they are not limited to a particular operating system, as long as the requirements listed in this guide are met.

2.1 OS directory and JDBC Driver

The JDBC protocol is used to connect to Snowflake. The Snowflake JDBC driver must be copied manually to the operating system and be accessible to the Datavard connector. Download the Snowflake JDBC driver.
We recommend storing the drivers in a directory shared between the application servers, organized into sub-directories to avoid possible conflicts (some customers connect to multiple platforms/services).

The default directory used to store drivers needed for communication with remote storages/services is:

$ ls -ld /sapmnt/<SID>/global/security/dvd_conn

drwx------ 2 dvqadm sapsys 4096 --- /sapmnt/<SID>/global/security/dvd_conn

The actual Snowflake JDBC driver should be placed in a dedicated sub-directory, for example:

$ ls -ld /sapmnt/<SID>/global/security/dvd_conn/*

drwxr-xr-x 2 dvqadm sapsys 4096 --- /sapmnt/<SID>/global/security/dvd_conn/snowflake

 

$ ls -l /sapmnt/<SID>/global/security/dvd_conn/snowflake/*

-rw-r--r-- 1 dvqadm sapsys 4096 --- /sapmnt/<SID>/global/security/dvd_conn/snowflake/snowflake-jdbc-3.12.9.jar
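
One way to produce this layout is to download the tested driver version from Maven Central into the dedicated sub-directory (the repository URL is an assumption; verify it for your driver version):

$ mkdir -p /sapmnt/<SID>/global/security/dvd_conn/snowflake
$ curl -o /sapmnt/<SID>/global/security/dvd_conn/snowflake/snowflake-jdbc-3.12.9.jar \
      https://repo1.maven.org/maven2/net/snowflake/snowflake-jdbc/3.12.9/snowflake-jdbc-3.12.9.jar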

Set the ownership and permissions of the directory and the driver appropriately to <sid>adm[:sapsys].
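
For example, to match the listings above:

$ chown -R <sid>adm:sapsys /sapmnt/<SID>/global/security/dvd_conn
$ chmod 700 /sapmnt/<SID>/global/security/dvd_conn
$ chmod 755 /sapmnt/<SID>/global/security/dvd_conn/snowflake
$ chmod 644 /sapmnt/<SID>/global/security/dvd_conn/snowflake/snowflake-jdbc-3.12.9.jar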

We have detected that Snowflake JDBC driver versions 3.13.x have a problem properly initializing the SSL handshake.
The error message in the JCo log reflecting this is:
net.snowflake.client.jdbc.SnowflakeSQLException: JDBC driver encountered communication error. Message: Exception encountered for HTTP request: Unsupported or unrecognized SSL message.
To avoid this issue, please use the tested lower version of the driver: snowflake-jdbc-3.12.9.jar.
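
To confirm which driver version is actually deployed, the version can be read from the jar manifest (assuming the build records an Implementation-Version entry, as most jars do):

$ unzip -p /sapmnt/<SID>/global/security/dvd_conn/snowflake/snowflake-jdbc-3.12.9.jar \
      META-INF/MANIFEST.MF | grep -i version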

3. SAP configuration

3.1 Datavard Java connector

The Java connector is a critical middleware component. Please follow the steps in Java Connector Setup to set it up or upgrade it to a new version before you continue.

The choice of Java runtime environment is narrowed by the Snowflake JDBC driver requirements: the driver must be installed in a 64-bit environment and requires Java 1.8 (or higher).
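
A quick check on the host running the JCo:

$ java -version

The output should report a 64-Bit Server VM and a version of 1.8 or higher.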

WARNING: If the JCo runs on JDK 16, please add a custom argument in the Advanced tab → Additional java starting arguments: -Djdk.module.illegalAccess=permit
JDK 16 introduced strong encapsulation of JDK internals (see the Consolidated JDK 16 Release Notes), which causes problems in combination with the Snowflake JDBC driver.
In JDK 17 the option -Djdk.module.illegalAccess=permit no longer works, so JDK 17 is unusable for connections using the Snowflake JDBC driver (for more information see https://jdk.java.net/17/release-notes#JDK-8266851).
Snowflake documentation only states that the JDBC driver must be installed in a 64-bit environment and requires Java 1.8 (or higher); the official page is JDBC Driver | Snowflake Documentation.
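
For reference, the setting is equivalent to passing the flag directly on the JVM command line; <jco jar> below is a placeholder for the connector jar:

$ java -Djdk.module.illegalAccess=permit -jar <jco jar>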

3.2 Storage Management setup

The final step in SAP & Snowflake connectivity is the creation of two storages in transaction /DVD/SM_SETUP: a Snowflake storage, which represents the table level of the Snowflake infrastructure, and a flat Snowflake stage, which handles temporary files.

3.2.1 Snowflake Stage (SNOW_STAGE)

This is a binary storage pointing to the internal stage of the Snowflake user. It uses the extended functionality of the Snowflake JDBC driver for loading and unloading data.

Storage ID - Logical name of the storage connection
Storage type - SNOW_STAGE (Snowflake internal stage)
Full name of Snowflake account - <account name>.<region>.<platform> (more information can be found in Snowflake account name doc).
User - Snowflake user
Password - User password
Password for JDBC Connection is Hashed - Indicator whether the password is hashed (you can create a hash of the password by using report /DVD/XOR_GEN)
Java connector RFC - TCP/IP RFC destination used for communication with Datavard Java connector
Driver path - Snowflake driver directory path
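
The SNOW_STAGE storage relies on the internal stage of the Snowflake user (@~ in Snowflake). A quick way to confirm that the user can write to it, independent of the Datavard stack, is a manual PUT with SnowSQL; the file name is a placeholder:

$ echo "hello,world" > /tmp/dvd_stage_test.csv
$ snowsql -a <account name>.<region>.<platform> -u <user> \
      -q "PUT file:///tmp/dvd_stage_test.csv @~ AUTO_COMPRESS=FALSE"
$ snowsql -a <account name>.<region>.<platform> -u <user> -q "LIST @~"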

3.2.2 Snowflake Storage (SNOWFLAKE)

This is a transparent storage pointing to a Snowflake database. It uses the Snowflake JDBC driver to send SQL commands.

Storage ID - Logical name of the storage connection
Storage type - SNOWFLAKE (Snowflake transparent storage)
Referenced storage - Storage ID of Snowflake stage area for temporary file upload/download
Java connector RFC - TCP/IP RFC destination used for communication with the Datavard Java connector
Account name - <account name>.<region>.<platform> (more information can be found in Snowflake account name doc).
Warehouse - Name of an existing warehouse in Snowflake
Database - Name of an existing database in Snowflake
Database schema - Name of an existing database schema in Snowflake
Role - Default role in Snowflake (parameter is not mandatory)
Driver path - Snowflake driver directory (Logical File of type DIR defined in transaction FILE)
User - Snowflake user
Password - User password (preferably in hashed format)
Password for JDBC Connection is Hashed - Indicator whether the password is hashed (you can create a hash of the password by using report /DVD/XOR_GEN)
Table name prefix - Optional text prefix for the names of all Glue tables created within this storage
Wrap values in staged CSV files - Depending on the table content, it may be necessary to enclose field values (e.g. in double quotes) to avoid errors during INSERT into a Snowflake table from staged CSV files (see the example below)
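
To illustrate the last option: if a field value contains the CSV separator, a row such as 1,Smith, John is split into three fields instead of two unless the values are enclosed in quotes and the load declares the enclosing character. Conceptually the load corresponds to a COPY INTO statement like the one below (table and file names are placeholders; this is not Datavard's literal implementation):

$ snowsql -a <account name>.<region>.<platform> -u <user> \
      -q "COPY INTO DVD_DB.DVD_SCHEMA.MY_TABLE FROM @~/dvd_stage_test.csv FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '\"')"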