(SM-2202) Snowflake Storage Setup
This page provides general guidelines on how to set up Snowflake Storage to work with Storage Management.
1. General prerequisites
1.1 Open Ports
To enable communication between SAP systems and the Snowflake host, the following host/port needs to be reachable from the SAP system:
Port | Type | Snowflake Host |
---|---|---|
443 | https | <account name>.<region>.<platform>.snowflakecomputing.com |
If your Snowflake account is hosted in a particular cloud environment such as AWS or MS Azure, make sure the respective domains (e.g. *.amazonaws.com, *.windows.net) are reachable from the SAP system as well, also on port 443.
This is needed for file upload to Snowflake staging area.
An automatic redirect happens during the PUT operation when transferring data to the Snowflake internal stage; it goes to the respective vendor's file service (example: customer-stage.s3.us-east-2.amazonaws.com).
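As a quick sanity check, reachability of the Snowflake endpoint on port 443 can be probed from the SAP host, for example with the small shell sketch below. The account name, region, and platform are illustrative placeholders; substitute your own values.

```shell
# Hypothetical account identifiers -- replace with your own values.
ACCOUNT="myaccount"
REGION="us-east-2"
PLATFORM="aws"

HOST="${ACCOUNT}.${REGION}.${PLATFORM}.snowflakecomputing.com"
echo "Checking ${HOST}:443"

# Try to open a TCP connection to port 443 (bash /dev/tcp pseudo-device).
if timeout 5 bash -c "exec 3<>/dev/tcp/${HOST}/443" 2>/dev/null; then
    echo "Port 443 on ${HOST} is reachable"
else
    echo "Port 443 on ${HOST} is NOT reachable - check firewall/proxy rules"
fi
```

The same check should be repeated for the cloud vendor's file-service domain used by the staging redirect.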
1.2 Warehouse and Database
Snowflake warehouse and database must already exist.
1.3 Snowflake user
A Snowflake technical user must be created with full privileges on the chosen Snowflake database schema.
There should be a one-to-one relation between Snowflake schemas and the SAP systems connected to them, ensuring isolation of replicated data.
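The prerequisites above can be sketched in Snowflake SQL. All object names below (GLUE_WH, GLUE_DB, SAPC11, GLUE_USER, GLUE_ROLE) are illustrative, and your security team may prefer finer-grained grants than full privileges on the schema:

```sql
-- Illustrative names; adjust to your naming conventions.
CREATE WAREHOUSE IF NOT EXISTS GLUE_WH;
CREATE DATABASE IF NOT EXISTS GLUE_DB;

-- One schema per connected SAP system keeps replicated data isolated.
CREATE SCHEMA IF NOT EXISTS GLUE_DB.SAPC11;

-- Technical user and role for the SAP system.
CREATE ROLE IF NOT EXISTS GLUE_ROLE;
CREATE USER IF NOT EXISTS GLUE_USER PASSWORD = '********' DEFAULT_ROLE = GLUE_ROLE;
GRANT ROLE GLUE_ROLE TO USER GLUE_USER;

-- Full privileges on the chosen schema, plus usage on its parents and the warehouse.
GRANT USAGE ON WAREHOUSE GLUE_WH TO ROLE GLUE_ROLE;
GRANT USAGE ON DATABASE GLUE_DB TO ROLE GLUE_ROLE;
GRANT ALL PRIVILEGES ON SCHEMA GLUE_DB.SAPC11 TO ROLE GLUE_ROLE;
```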
2. OS prerequisites (On SAP host)
This group of requirements relates to the operating systems underlying the SAP system and all of its application servers. SNP products (e.g. SNP Glue™, SNP OutBoard™ Data Tiering) have been developed and tested in SUSE Linux and Windows Server 2012 environments. However, by design they are not limited to a particular operating system, as long as the requirements listed in this guide are met.
2.1 OS directory and JDBC Driver
The JDBC protocol is used to connect to Snowflake. The Snowflake JDBC driver needs to be copied manually to the operating system and must be accessible to the connector. Download the Snowflake JDBC driver.
We recommend storing the drivers in a directory shared between application servers, organized in sub-directories to avoid possible conflicts (some customers connect to multiple different platforms/services).
The default directory used to store drivers needed for communication with remote storages/services is /sapmnt/<SID>/global/security/dvd_conn (referenced by the logical path /DVD/DEF_GLOB_DIR, see section 3.2.1).
The actual Snowflake JDBC driver should be put into a dedicated sub-directory within it.
Set its ownership and permissions appropriately to <sid>adm[:sapsys].
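A minimal sketch of preparing the driver directory is shown below. The base path, the jar file name, and the example SID (c11) are illustrative; on a real system the base would be the shared /sapmnt/<SID>/global/security/dvd_conn directory and the ownership step would use your actual <sid>adm user.

```shell
# Illustrative base path; on a real system this would be
# /sapmnt/<SID>/global/security/dvd_conn shared between application servers.
BASE="${DVD_CONN_DIR:-./dvd_conn}"
DRIVER_DIR="${BASE}/snowflake"

# Dedicated sub-directory for the Snowflake JDBC driver.
mkdir -p "${DRIVER_DIR}"

# Copy the downloaded driver jar into place (source path is hypothetical):
# cp /tmp/snowflake-jdbc-3.12.9.jar "${DRIVER_DIR}/"

# Set permissions; chown requires the <sid>adm user to exist,
# so it is only attempted when it does (example SID: c11).
chmod 755 "${DRIVER_DIR}"
if id c11adm >/dev/null 2>&1; then
    chown c11adm:sapsys "${DRIVER_DIR}"
fi

ls -ld "${DRIVER_DIR}"
```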
We have detected that Snowflake driver versions 3.13.x have a problem properly initializing the SSL handshake.
The error message reflecting this in the JCo log is: net.snowflake.client.jdbc.SnowflakeSQLException: JDBC driver encountered communication error. Message: Exception encountered for HTTP request: Unsupported or unrecognized SSL message.
To avoid this issue, please use the tested lower version of the driver: snowflake-jdbc-3.12.9.jar.
3. SAP configuration
3.1 JAVA connector
The Java connector is a critical middleware component. Please follow the steps in Java Connector Setup to set it up or upgrade it to a new version before you continue.
Per the Snowflake JDBC driver requirements, the (SM-2202) Java Connector Setup must run in a 64-bit environment with Java 1.8 (or higher); see the official page: https://docs.snowflake.com/en/user-guide/jdbc.html.
WARNING: In case of JCo running on JDK 16, please add a custom argument in the Advanced tab → Additional java starting arguments: -Djdk.module.illegalAccess=permit
JDK 16 introduced strong encapsulation of JDK internals (see the documentation https://www.oracle.com/java/technologies/javase/16-all-relnotes.html#JDK-8256299) which causes problems in combination with the Snowflake JDBC driver.
In JDK 17, the option -Djdk.module.illegalAccess=permit no longer works, so JDK 17 cannot be used for connections with the current Snowflake JDBC driver (for more information see https://jdk.java.net/17/release-notes#JDK-8266851).
3.2 Storage Management setup
The final step in SAP & Snowflake connectivity is the creation of two storages in transaction /DVD/SM_SETUP:
- the Snowflake storage, which represents the table level of the Snowflake infrastructure, and the Snowflake stage, which handles temporary files.
3.2.1 Snowflake Stage (SNOW_STAGE)
This is binary storage pointing to the Internal Stage of the Snowflake user. It is used for data transfer leveraging the extended functionality of the Snowflake JDBC driver.
Storage ID
- Logical name of the storage connection
Storage type
- SNOW_STAGE (Snowflake internal stage)
Java connector RFC
- TCP/IP RFC destination used for communication with the Java connector
Full name of Snowflake account
- <account name>.<region>.<platform> (more information can be found in the Snowflake account name doc)
Driver path
- SAP logical file of type DIR (directory) referring to the Snowflake driver directory; it can be set up in the standard SAP transaction FILE. The logical path /DVD/DEF_GLOB_DIR refers to the /sapmnt/<SID>/global/security/dvd_conn directory and is imported into the SAP system with our transports.
Connection pool size
- Number of connections that can be kept open in the pool, reducing the resource-expensive establishment of JDBC connections
Username
- Snowflake user
Password
- User password
Password for JDBC Connection is Hashed
- Indicator of whether the password is hashed (you can also create the password hash manually by using report /DVD/XOR_GEN)
User role
- Default role (parameter is not mandatory)
JAVA Call Repeat
- Number of times failed calls should be retried
Repeat delay (seconds)
- Delay between retried calls
3.2.2 Snowflake Storage (SNOWFLAKE)
This is a transparent storage pointing to the Snowflake database. SQL commands and data are sent through the Snowflake JDBC driver.
Storage ID
- Logical name of the storage connection
Storage type
- SNOWFLAKE (Snowflake transparent storage)
Referenced storage
- Storage ID of the Snowflake stage area used for temporary file upload/download
Java connector RFC
- TCP/IP RFC destination used for communication with the Java connector
Account name
- <account name>.<region>.<platform> (more information can be found in the Snowflake account name doc)
Warehouse
- Name of an existing warehouse in Snowflake
Database
- Name of an existing database in Snowflake
Database schema
- Name of an existing database schema in Snowflake
Role
- Default role in Snowflake (parameter is not mandatory)
Driver path
- Snowflake driver directory (logical file of type DIR defined in transaction FILE)
User
- Snowflake user
Password
- User password (preferably in hashed format)
Password for JDBC Connection is Hashed
- Indicator of whether the password is hashed (you can create the password hash by using report /DVD/XOR_GEN)
Table name prefix
- Optional text value used as a naming prefix for all Glue tables created within this storage
Wrap values in staged CSV files
- Depending on table content, it may be necessary to encapsulate field values to avoid errors during INSERT into the Snowflake table from staging CSV files