...
Storage Management provides an option to check the storage configuration. If the check is unsuccessful, this article lists troubleshooting steps that may help identify the cause.
...
If the Connection test is unsuccessful, the table below lists some of the possible causes. SM59 usually returns very generic errors; for more details, check the dev_icm log in transaction ST11. Some examples can be found here.
Possible issue | What to do |
---|---|
HttpFS node hostname cannot be resolved | There can be many reasons why Hadoop host resolution fails. To be on the safe side, Hadoop IP↔hostname pairs can be added to the /etc/hosts file of each SAP application server (on Windows: C:\Windows\System32\drivers\etc\hosts). |
HttpFS/WebHDFS service port is no longer reachable | Check the availability of the Hadoop service from the SAP application server OS using telnet <host> <service_port> (see the connectivity check sketch after this table). If WebHDFS is used, the datanode service (port 1022) also needs to be reachable on all HDFS datanodes. There is an unresolved issue with specific SAP kernel versions and host architectures that causes redirects to fail (SAP ignores 307 redirects from WebHDFS to the datanode); in this case, the HttpFS service has to be used to avoid redirects. |
HttpFS service has failed over to an alternate host | Check in the Hadoop cluster manager (Cloudera Manager / Ambari) that the service (HttpFS/WebHDFS) is still running on the host defined in the RFC destination. |
SSL on HttpFS is active, but RFC is not set as SSL active | Change settings in the Logon & Security tab in SM59 to be SSL active. |
HttpFS service is SSL secured, RFC is set to use SSL, but there are missing certificates in STRUST | Add required certificates to STRUST. Details can be found in the Hadoop Storage setup documentation. |
RFC is set to SSL active, but HttpFS service is not SSL secured | Disable SSL for the RFC. |
HTTP/HTTPS service is not active | Check transaction SMICM → Goto → Services. Make sure that HTTP (HTTPS if SSL is used) has a port number filled in and is active. |
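Most of the cases above can be narrowed down from the SAP application server OS before touching SM59. The following is a minimal sketch: the hostname, port, and the REST call shown are placeholders/assumptions (HttpFS typically listens on 14000 and exposes the standard WebHDFS REST API), so adjust them to your landscape.

```bash
#!/bin/bash
# Hypothetical values - replace with the host and port from your RFC destination
HADOOP_HOST="httpfs.example.com"
HTTPFS_PORT=14000            # default HttpFS port; WebHDFS typically uses 9870/50070

# 1. Name resolution - if this fails, add the IP<->hostname pair to /etc/hosts
getent hosts "$HADOOP_HOST" || echo "Host resolution failed"

# 2. Port reachability (nc -z is a non-interactive alternative to telnet)
nc -vz "$HADOOP_HOST" "$HTTPFS_PORT" || echo "Port $HTTPFS_PORT not reachable"

# 3. Plain HTTP REST call - a 200, or a 401 on a Kerberized cluster,
#    proves that the service itself answers
curl -s -o /dev/null -w "HTTP status: %{http_code}\n" \
  "http://$HADOOP_HOST:$HTTPFS_PORT/webhdfs/v1/?op=GETFILESTATUS"

# 4. If the service is SSL secured, inspect the certificate chain
#    that must also be imported into STRUST
openssl s_client -connect "$HADOOP_HOST:$HTTPFS_PORT" -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```

If steps 1-3 succeed but the SM59 connection test still fails, the problem is usually on the SAP side (SSL settings of the RFC, missing certificates in STRUST, or an inactive HTTP/HTTPS service in SMICM).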
...
If this is the issue, it can be found in the Java log. To access the Java log, go to transaction /DVD/JCO_MNG, select the Java connector that is used, and click the [Logs] button. Errors are highlighted in red.
Some examples of error messages can be found here. Oracle JRE 8 Kerberos (JGSS) error messages are documented at https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/Troubleshooting.html
...
Possible issue | What to do |
---|---|
Incorrect logical paths in /DVD/HDP_CUS_C | Check table /DVD/HDP_CUS_C. Make sure that logical paths are correct and point to the correct files on the OS level. |
Wrong/expired keytab | A keytab can expire, either after a fixed period of time or after another copy of the keytab has been exported from the KDC (the KVNO has increased). If the file is present in the correct directory, with the correct format and permissions (/sapmnt/<SID>/global/security/dvd_conn/<sid>hdp.keytab), try a manual login with the keytab (see the sketch after this table). |
Wrong principal (case sensitive) | Make sure that the principal name in /DVD/HDP_CUS_C is correct. It should have a format like user@EXAMPLE.COM and is case-sensitive. To check the principal names inside the keytab, run klist -k <path_to_keytab>. |
Wrong Kerberos config | Check the contents of the krb5.conf file in the /sapmnt/<SID>/global/security/dvd_conn/ directory and compare them with the krb5.conf valid for the Hadoop cluster; the contents have to match. Make sure that it contains information about both the principal's Kerberos realm and the Hadoop cluster's Kerberos realm. |
Port to KDC is not open | Make sure that port 88 to the KDC is open. If cross-realm authentication is used, port 88 to the KDCs of both realms needs to be open. |
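The keytab, principal, and krb5.conf entries from the table above can be verified manually from the SAP application server OS with the standard MIT Kerberos client tools. The sketch below makes a few assumptions: the SID "DVQ" and the principal name are placeholders, and pointing KRB5_CONFIG at the connector's krb5.conf is only one way to isolate the test from the system-wide Kerberos configuration.

```bash
#!/bin/bash
# Hypothetical SID and principal - replace with your own values
SID="DVQ"
KEYTAB="/sapmnt/${SID}/global/security/dvd_conn/$(echo "$SID" | tr 'A-Z' 'a-z')hdp.keytab"
PRINCIPAL="sapuser@EXAMPLE.COM"

# Use the connector's krb5.conf instead of the system-wide one for this test
export KRB5_CONFIG="/sapmnt/${SID}/global/security/dvd_conn/krb5.conf"

# 1. List the principals and key version numbers (KVNO) stored in the keytab
klist -k "$KEYTAB"

# 2. Try a manual login with the keytab - this fails if the keytab has expired,
#    the KVNO no longer matches the KDC, or port 88 to the KDC is blocked
kinit -kt "$KEYTAB" "$PRINCIPAL" && echo "Kerberos login OK"

# 3. Show the ticket that was obtained, then discard it
klist
kdestroy
```

If kinit succeeds here but the Java connector still fails, the problem is usually in the /DVD/HDP_CUS_C configuration (logical paths or principal name) rather than in the keytab itself.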
...