...
Possible issue | What to do |
---|---|
HttpFS node hostname cannot be resolved | There can be many reasons why Hadoop host resolution fails. To be on the safe side, Hadoop IP↔host pairs can be added to the /etc/hosts file of each SAP application server (on Windows: C:\Windows\System32\drivers\etc\hosts). |
HttpFS/WebHDFS service port is no longer reachable | Check the availability of the Hadoop service from the SAP application server OS using 'telnet <host> <service_port>'. If WebHDFS is used, the datanode service (port 1022) must also be reachable on all HDFS datanodes. There is an unresolved issue with specific SAP kernel versions and host architectures that causes redirects to fail (SAP ignores the 307 redirect from WebHDFS to the datanode). In that case, the HttpFS service has to be used to avoid redirects. |
HttpFS service has failed over to an alternate host | Check in the Hadoop cluster manager (Cloudera Manager / Ambari) that the service (HttpFS/WebHDFS) is still running on the host defined in the RFC destination. |
SSL on HttpFS is active, but RFC is not set as SSL active | Change the settings in the Logon & Security tab in SM59 to SSL active. |
HttpFS service is SSL secured, RFC is set to use SSL, but there are missing certificates in STRUST | Add required certificates to STRUST. Details can be found in the Hadoop Storage setup documentation. |
RFC is set to SSL active, but the HttpFS service is not SSL secured | Disable SSL for the RFC. |
HTTP/HTTPS service is not active | Check in transaction SMICM → Goto → Services. Make sure that the HTTP service (HTTPS if SSL is used) has a port number assigned and is active. |
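The reachability checks above (telnet to the HttpFS/WebHDFS port, plus port 1022 on each datanode when WebHDFS is used) can be scripted for all endpoints at once. A minimal sketch; the hostnames below are placeholders, and the ports are common defaults (14000 for HttpFS, 9870 for the WebHDFS NameNode HTTP endpoint on Hadoop 3) that must match your cluster configuration:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds,
    equivalent to a successful 'telnet <host> <port>'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal, and timeout
        return False

if __name__ == "__main__":
    # Placeholder endpoints -- replace with your Hadoop hosts and ports.
    checks = {
        "HttpFS": ("httpfs.example.com", 14000),
        "WebHDFS (NameNode)": ("namenode.example.com", 9870),
        # With WebHDFS, also probe port 1022 on every HDFS datanode:
        "datanode1": ("datanode1.example.com", 1022),
    }
    for name, (host, port) in checks.items():
        status = "OK" if port_reachable(host, port) else "NOT reachable"
        print(f"{name} ({host}:{port}): {status}")
```

Run this from the SAP application server OS, since firewall rules and name resolution may differ from your workstation; a NOT reachable result reproduces the same failure the RFC destination would see.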
...
Storage type HADOOP uses a Java connector only for authentication with Kerberos. If the Hadoop cluster is not kerberized, this section is not relevant.
The following page contains detailed information on possible issues related to the Java connector setup.
...