(SM-2311) Snowflake Storage Setup
This page provides general guidelines on how to set up Snowflake Storage to work with Storage Management.
1. General prerequisites
1.1 Network Connectivity
To enable communication of SAP systems with the Snowflake host, the following hosts/ports need to be reachable from the SAP system:
| Port | Address |
|---|---|
| 443 | <account name>.<region>.<platform>.snowflakecomputing.com |
| 443 | ocsp.snowflakecomputing.com |
If your Snowflake account is hosted in a particular cloud environment such as AWS or MS Azure, make sure their respective domains are reachable from the SAP system as well (e.g. *.amazonaws.com, *.windows.net) on port 443.
This is needed for file upload to the Snowflake staging area.
During the PUT operation that transfers data to the Snowflake internal stage, an automatic redirect to the respective vendor's file service takes place (example: customer-stage.s3.us-east-2.amazonaws.com).
In case of connectivity problems, Snowflake offers the SnowCD utility to check connectivity to the Snowflake service endpoints that may be involved in the communication: https://docs.snowflake.com/en/user-guide/snowcd.html
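Independent of SnowCD, a quick TCP-level check of the endpoints from the table above can be scripted. A minimal sketch in Java; the account-specific host name is a placeholder you need to replace with your own:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ReachabilityCheck {

    /** Returns true if a plain TCP connection to host:port succeeds within timeoutMs. */
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // ocsp.snowflakecomputing.com is fixed; add your own
        // <account name>.<region>.<platform>.snowflakecomputing.com host to the list.
        String[] hosts = {"ocsp.snowflakecomputing.com"};
        for (String host : hosts) {
            System.out.println(host + " :443 reachable = " + isReachable(host, 443, 5000));
        }
    }
}
```

Note that this only verifies basic TCP reachability of port 443; it does not validate certificates or proxies, which SnowCD covers in more depth.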
1.2 Warehouse and Database
Snowflake warehouse and database must already exist.
1.3 Snowflake user
A Snowflake technical user must be created with full privileges on the chosen Snowflake database schema.
The recommended privilege for the Snowflake technical user is OWNERSHIP of the Snowflake database.
If ownership of the database cannot be assigned to the technical user, the minimal requirements are the following:
GRANT ALL ON DATABASE <DATABASE> TO ROLE <ROLE>;
GRANT USAGE ON WAREHOUSE <WAREHOUSE> TO ROLE <ROLE>;
GRANT ALL ON SCHEMA <SCHEMA> TO ROLE <ROLE>;
There should be a one-to-one relationship between a Snowflake schema and each connected SAP system, to keep the replicated data of each system isolated.
2. OS prerequisites (On SAP host)
This group of requirements relates to the operating systems underlying the SAP system and all its application servers. SNP products (e.g. SNP Glue™, SNP OutBoard™ Data Tiering) have been developed and tested in SUSE Linux and Windows Server 2012 environments. However, by design they are not limited to a particular operating system, as long as the requirements listed in this guide are met.
2.1 OS directory and JDBC Driver
The JDBC protocol is used to connect to Snowflake. The Snowflake JDBC driver needs to be copied manually to the operating system and must be accessible to the connector. Download the Snowflake JDBC driver. The latest version of the Snowflake JDBC driver is recommended, except for the Snowpipe Streaming storage connection: the streaming scenario currently requires JDBC driver version 3.13.33 in combination with snowflake-ingest-sdk-2.0.x.jar.
It is recommended to store the drivers in a directory shared between application servers, organized in sub-directories to avoid possible conflicts (some customers connect to multiple different platforms/services).
The default directory used to store drivers needed for communication with remote storages/services is:
$ ls -ld /sapmnt/<SID>/global/security/dvd_conn
drwx------ 2 dvqadm sapsys 4096 --- /sapmnt/<SID>/global/security/dvd_conn
The actual Snowflake JDBC driver should be put into a dedicated sub-directory, for example:
$ ls -ld /sapmnt/<SID>/global/security/dvd_conn/*
drwxr-xr-x 2 dvqadm sapsys 4096 --- /sapmnt/<SID>/global/security/dvd_conn/snowflake
$ ls -l /sapmnt/<SID>/global/security/dvd_conn/snowflake
-rw-r--r-- 1 dvqadm sapsys <size> --- /sapmnt/<SID>/global/security/dvd_conn/snowflake/snowflake-jdbc-3.12.9.jar
Set its ownership and permissions appropriately to <sid>adm[:sapsys].
Be aware that older 3.13.x versions of the driver had an issue with the SSL handshake; this was fixed in version snowflake-jdbc-3.13.21.
3. SAP configuration
3.1 JAVA connector
The Java connector is a critical middleware component. Follow the steps in Java Connector Setup to set it up or upgrade it to a new version before you continue.
Per the Snowflake JDBC driver requirements, the Java runtime environment must be 64-bit and run Java 1.8 or higher; see the official page: https://docs.snowflake.com/en/user-guide/jdbc.html.
WARNING: In case of JCo running on JDK 16, add a custom argument in the Advanced tab → Additional java starting arguments: -Djdk.module.illegalAccess=permit
JDK 16 introduced strong encapsulation of JDK internals (see the documentation Consolidated JDK 16 Release Notes) that caused problems in combination with the Snowflake JDBC driver.
In JDK 17, the option -Djdk.module.illegalAccess=permit no longer works and therefore cannot be used for connections with the current Snowflake JDBC driver (for more information see Consolidated JDK 17 Release Notes).
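For orientation, the connection parameters collected by the storage definitions below map onto a standard Snowflake JDBC URL and property set roughly as in this sketch. All account, user, and object names are placeholders, and the actual connection call is left commented out so the snippet runs without a live Snowflake account:

```java
import java.util.Properties;

public class SnowflakeJdbcSketch {

    /** Builds the JDBC URL from the account identifier <account name>.<region>.<platform>. */
    static String buildUrl(String account) {
        return "jdbc:snowflake://" + account + ".snowflakecomputing.com/";
    }

    public static void main(String[] args) {
        String url = buildUrl("myaccount.us-east-2.aws"); // placeholder account

        Properties props = new Properties();
        props.put("user", "GLUE_TECH_USER"); // placeholder technical user
        props.put("password", "***");
        props.put("warehouse", "MY_WH");     // optional, see the Warehouse parameter below
        props.put("db", "MY_DB");
        props.put("schema", "MY_SCHEMA");
        props.put("role", "MY_ROLE");

        // With snowflake-jdbc on the classpath, a connection would be opened like this:
        // try (java.sql.Connection con = java.sql.DriverManager.getConnection(url, props)) { ... }
        System.out.println(url);
    }
}
```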
3.2 Storage Management Setup
The final step in SAP & Snowflake connectivity is the creation of two storages in transaction /DVD/SM_SETUP: a Snowflake storage, which represents the table level of the Snowflake infrastructure, and a Snowflake stage, which handles temporary files.
3.2.1 Snowflake Stage (SNOW_STAGE)
This binary storage points to the internal stage of the Snowflake user. It is used for data transfer, leveraging the extended functionality of the Snowflake JDBC driver.
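The upload to the internal stage uses the driver's PUT command. A minimal sketch of how such a statement could be composed; the stage path glue_stage and the local file path are purely illustrative:

```java
public class StagePutSketch {

    /** Composes a PUT statement that uploads a local file to a path in the user stage (@~). */
    static String buildPut(String localPath, String stagePath, boolean autoCompress) {
        return "PUT file://" + localPath
                + " @" + stagePath
                + " AUTO_COMPRESS=" + (autoCompress ? "TRUE" : "FALSE");
    }

    public static void main(String[] args) {
        // The connector would execute this through a JDBC Statement; here we only print it.
        System.out.println(buildPut("/tmp/data_0001.csv", "~/glue_stage", true));
    }
}
```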
Storage ID: Logical name of the storage connection.
Storage type: SNOW_STAGE (Snowflake internal stage).
Java connector RFC: TCP/IP RFC destination used for communication with the Java connector.
Account name: Name of the Snowflake account without the .snowflakecomputing.com domain, in the format <account name>.<region>.<platform> (more information can be found in the Snowflake account name doc).
User role: User's role used for data insertion.
JAVA Call Repeat: Number of times failed calls should be retried.
Repeat delay (seconds): Delay between retried calls.
Driver path: Absolute path to the directory containing the Snowflake JDBC driver.
NOTE: In previous versions of Storage Management, an SAP logical file of type DIR (directory) was used. This is now automatically converted to a physical path.
Connection pool size: Number of connections that can be kept open in the pool, reducing the resource-expensive establishment of JDBC connections.
Username: Snowflake user.
Password: User password.
If key-pair authentication is established according to https://docs.snowflake.com/en/user-guide/key-pair-auth.html#key-pair-authentication-key-pair-rotation, it can be configured with or without a passphrase.
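As a sketch of the two variants, the key-pair parameters map to the driver's connection properties roughly as below. The property names private_key_file and private_key_file_pwd follow the Snowflake JDBC documentation for key-pair authentication; the user name and key path are placeholders:

```java
import java.util.Properties;

public class KeyPairAuthSketch {

    /** Builds JDBC connection properties for key-pair authentication.
     *  Pass passphrase = null for an unencrypted private key. */
    static Properties keyPairProps(String user, String privateKeyFile, String passphrase) {
        Properties props = new Properties();
        props.put("user", user);
        props.put("private_key_file", privateKeyFile);
        if (passphrase != null) {
            props.put("private_key_file_pwd", passphrase); // only for encrypted keys
        }
        return props;
    }

    public static void main(String[] args) {
        Properties withoutPassphrase = keyPairProps(
                "GLUE_TECH_USER",                                  // placeholder user
                "/sapmnt/SID/global/security/dvd_conn/rsa_key.p8", // placeholder path
                null);
        System.out.println(withoutPassphrase.stringPropertyNames());
    }
}
```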
3.2.2 Snowflake Storage (SNOWFLAKE)
Transparent storage pointing to the Snowflake database. SQL commands are sent using the Snowflake JDBC driver.
Storage ID: Logical name of the storage connection.
Storage type: SNOWFLAKE (Snowflake transparent storage).
Referenced storage: Storage ID of the Snowflake stage area for temporary file upload/download.
Java connector RFC: RFC destination used for communication with the Java connector.
Account name: <account name>.<region>.<platform> (more information can be found in the Snowflake account name doc).
Warehouse: Optional parameter; an existing warehouse in the Snowflake account to be used.
Database: Name of an existing database in Snowflake.
Database schema: Name of an existing database schema in Snowflake.
User role: User's role used for data insertion.
Driver path: Snowflake driver directory.
Hints: Additional string added to the connection string when the JDBC driver establishes a connection.
Connection pool size: Number of connections that can be kept open in the pool, reducing the resource-expensive establishment of JDBC connections.
Table name prefix: Optional text value used as a naming prefix for all Glue tables created within this storage.
Use Transient tables: When enabled, only transient tables are created on the storage.
Use Snowflake App for data merge: When enabled, the SNP Native app is used.
Snowflake App name: Name of the existing Snowflake native app to be used.
Wrap values in staged CSV files: Depending on table content, it may be necessary to encapsulate field values to avoid errors during INSERT into the Snowflake table from staged CSV files.
Data delivery guarantee: Data transfer behavior. More information can be found in the chapter (SM-2311) Data Delivery Guarantee.
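To illustrate the "Wrap values in staged CSV files" option: with standard CSV quoting, a value containing the delimiter, the quote character, or a line break is enclosed in double quotes, and embedded quotes are doubled. A minimal sketch of that rule (the product's exact escaping logic is not specified here and may differ):

```java
public class CsvWrapSketch {

    /** Quotes a field if it contains the delimiter, a quote character, or a line break. */
    static String wrap(String value, char delimiter) {
        boolean needsQuoting = value.indexOf(delimiter) >= 0
                || value.indexOf('"') >= 0
                || value.indexOf('\n') >= 0
                || value.indexOf('\r') >= 0;
        if (!needsQuoting) {
            return value;
        }
        // Double any embedded quotes, then enclose the whole value in quotes.
        return '"' + value.replace("\"", "\"\"") + '"';
    }

    public static void main(String[] args) {
        System.out.println(wrap("plain", ','));      // unchanged
        System.out.println(wrap("a,b", ','));        // wrapped, contains delimiter
        System.out.println(wrap("say \"hi\"", ',')); // wrapped, embedded quotes doubled
    }
}
```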
3.2.3 Snowpipe Streaming Storage (SNOWSTREAM)
Transparent storage pointing to the Snowflake database. SQL commands are sent using the Snowflake JDBC driver; data is inserted into the Snowflake database using the Snowflake Ingest SDK. For more information about Snowpipe Streaming requirements, limitations, and configuration, visit the official Snowpipe Streaming documentation.
Storage ID: Logical name of the storage connection.
Storage type: SNOWSTREAM (Snowflake transparent storage - Streaming).
Java connector RFC: RFC destination used for communication with the Java connector.
Account name: <account name>.<region>.<platform> (more information can be found in the Snowflake account name doc).
Warehouse: Optional parameter; an existing warehouse in the Snowflake account to be used.
Database: Name of an existing database in Snowflake.
Database schema: Name of an existing database schema in Snowflake.
User role: User's role used for data insertion.
Driver path: Snowflake driver directory. The Snowflake Ingest SDK jar file must also be included in this directory. The SDK is available for download from the Snowflake Maven Central Repository.
Hints: Additional string added to the connection string when the JDBC driver establishes a connection.
Connection pool size: Number of connections that can be kept open in the pool, reducing the resource-expensive establishment of JDBC connections.
Username: Snowflake user.
Private Key file location: Path to the private key file.
Private Key Encrypted: When enabled, a Password input field is displayed.
Password: Password to the private key.
Use Snowflake App for data merge: When enabled, the SNP Native app is used.
Snowflake App name: Name of the existing Snowflake native App to be used.
Table name prefix: Optional text value used as a naming prefix for all tables created within this storage.
Data delivery guarantee: Data transfer behavior. More information can be found in the chapter (SM-2311) Data Delivery Guarantee.
Additional Java arguments are required before starting the SNP JCo. Go to transaction /DVD/JCO_MNG and, in the Advanced tab, add these two arguments to the Additional Java starting arguments parameter:
--add-opens java.base/jdk.internal.misc=ALL-UNNAMED
-Dio.netty.tryReflectionSetAccessible=true
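As context for the driver/SDK pairing above, a streaming client is typically created from a property set similar to the JDBC one, but with key-pair authentication (required for Snowpipe Streaming). The sketch below only assembles the properties and shows the factory call in comments; the class names SnowflakeStreamingIngestClientFactory and OpenChannelRequest, the property keys, and all object names are assumptions based on the official Snowpipe Streaming documentation and may differ per SDK version:

```java
import java.util.Properties;

public class StreamingClientPropsSketch {

    /** Assembles the property set the Snowflake Ingest SDK client factory expects. */
    static Properties streamingProps(String account, String user, String privateKeyPem, String role) {
        Properties props = new Properties();
        props.put("url", "https://" + account + ".snowflakecomputing.com:443");
        props.put("user", user);
        props.put("private_key", privateKeyPem); // Snowpipe Streaming requires key-pair auth
        props.put("role", role);
        return props;
    }

    public static void main(String[] args) {
        Properties props = streamingProps("myaccount.us-east-2.aws", "GLUE_TECH_USER",
                "<PKCS#8 private key body>", "MY_ROLE");

        // With snowflake-ingest-sdk-2.0.x on the classpath (names per the official docs):
        // SnowflakeStreamingIngestClient client =
        //         SnowflakeStreamingIngestClientFactory.builder("SNP_CLIENT")
        //                 .setProperties(props).build();
        // SnowflakeStreamingIngestChannel channel = client.openChannel(
        //         OpenChannelRequest.builder("SNP_CHANNEL")
        //                 .setDBName("MY_DB").setSchemaName("MY_SCHEMA").setTableName("MY_TABLE")
        //                 .setOnErrorOption(OpenChannelRequest.OnErrorOption.ABORT).build());
        System.out.println(props.getProperty("url"));
    }
}
```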