Problem description
After the cluster upgrade, Hadoop can no longer load its native libraries correctly.
$ hadoop checknative -a
20/05/08 14:32:11 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
20/05/08 14:32:11 WARN zlib.ZlibFactory: Failed to load/initialize native-zlib library
20/05/08 14:32:11 ERROR snappy.SnappyCompressor: failed to load SnappyCompressor
java.lang.NoSuchFieldError: clazz
        at org.apache.hadoop.io.compress.snappy.SnappyCompressor.initIDs(Native Method)
        at org.apache.hadoop.io.compress.snappy.SnappyCompressor.<clinit>(SnappyCompressor.java:57)
        at org.apache.hadoop.io.compress.SnappyCodec.isNativeCodeLoaded(SnappyCodec.java:82)
        at org.apache.hadoop.util.NativeLibraryChecker.main(NativeLibraryChecker.java:89)
20/05/08 14:32:11 WARN lz4.Lz4Compressor: java.lang.NoSuchFieldError: clazz
Native library checking:
hadoop:  true /home/hadoop/core/jdk/jre/lib/amd64/libhadoop.so
zlib:    false
snappy:  false
lz4:     true revision:99
bzip2:   false
openssl: true /lib64/libcrypto.so
20/05/08 14:32:11 INFO util.ExitUtil: Exiting with status 1
Check how the old version loads the same libraries:
$ cd hadoop-2.7.7/
$ ./bin/hadoop checknative -a
20/05/08 14:34:19 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
20/05/08 14:34:19 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /home/hadoop/core/jdk/jre/lib/amd64/libhadoop.so
zlib:    true /lib64/libz.so.1
snappy:  true /home/hadoop/core/hadoop-2.7.7/lib/native/libsnappy.so.1
lz4:     true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: true /lib64/libcrypto.so
So the old version loads everything, but the native library paths are inconsistent: libhadoop.so is picked up from the JDK directory, while libsnappy comes from the Hadoop directory.
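Before changing anything, it helps to see every copy of a native library that sits on the search path. The following sketch is not from the original diagnosis; `find_lib_copies` is a made-up helper name, and the two directories in the example call are the ones reported by the checknative runs above (substitute your own):

```shell
# List every file matching a native library name in a set of directories,
# to spot a stale copy shadowing the real one.
find_lib_copies() {
    lib="$1"; shift
    for dir in "$@"; do
        # Print any matches in this directory; stay quiet if there are none.
        ls "$dir/$lib"* 2>/dev/null
    done
    return 0
}

# Example: look for libhadoop.so in both the JDK and the Hadoop native dir.
find_lib_copies libhadoop.so \
    /home/hadoop/core/jdk/jre/lib/amd64 \
    /home/hadoop/core/hadoop-2.7.7/lib/native
```

If this prints more than one location for the same library, the JVM may load whichever one its `java.library.path` reaches first.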
Solution
Given the above, try moving the library file under the JDK out of the way:
$ mv /home/hadoop/core/jdk/jre/lib/amd64/libhadoop.so /home/hadoop/core/jdk/jre/lib/amd64/libhadoop.so.bak
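The rename above can be wrapped in a small helper so it is safe to re-run. This is a sketch, not part of the original fix; `backup_lib` is a hypothetical function name:

```shell
# Move a shadowing library aside, keeping a .bak copy.
# Idempotent: does nothing if the file has already been moved.
backup_lib() {
    f="$1"
    if [ -e "$f" ]; then
        mv "$f" "$f.bak"
        echo "moved $f -> $f.bak"
    else
        echo "nothing to do: $f not present"
    fi
}

# Example (path from this cluster; adjust to yours):
# backup_lib /home/hadoop/core/jdk/jre/lib/amd64/libhadoop.so
```

Keeping the `.bak` copy makes the change easy to revert if removing the file breaks something else.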
Check again; the native libraries now load successfully:
$ hadoop checknative -a
20/05/08 14:43:08 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
20/05/08 14:43:08 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /opt/beh/core/hadoop/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /opt/beh/core/hadoop/lib/native/libsnappy.so.1
lz4:     true revision:10301
bzip2:   true /lib64/libbz2.so.1
openssl: true /lib64/libcrypto.so
Problem analysis
The root cause is a conflict created by loading the native library from the wrong path. The cluster had been adjusted many times to integrate with a variety of other file systems; those changes worked fine under the old version, but after the upgrade they no longer applied. The stale libhadoop.so left in the JDK directory was found ahead of the one shipped with the upgraded Hadoop, and its JNI code no longer matched the new Java classes, which is what surfaced as the `java.lang.NoSuchFieldError: clazz` error.
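To catch this class of problem after future upgrades, the `hadoop checknative -a` report can be screened automatically. The helper below is a hypothetical sketch (the function name and the `/opt/beh/core/hadoop` prefix are assumptions matching this cluster's layout): it reads checknative-style lines and flags any library loaded from outside the expected install prefix or the system `/lib64`:

```shell
# Read "name: true /path" lines (as printed under "Native library checking:")
# and report any loaded library whose path is outside the expected locations.
check_native_paths() {
    prefix="$1"
    status=0
    while read -r name ok path; do
        case "$path" in
            /*) ;;          # absolute path: worth checking
            *) continue ;;  # header lines, "revision:99", etc.
        esac
        if [ "$ok" = "true" ]; then
            case "$path" in
                "$prefix"*|/lib64/*) ;;  # expected locations: OK
                *) echo "suspect: $name loaded from $path"; status=1 ;;
            esac
        fi
    done
    return $status
}

# Example usage (assumed install prefix):
#   hadoop checknative -a 2>&1 | check_native_paths /opt/beh/core/hadoop
```

Run against the failing output from this incident, it would have flagged the libhadoop.so sitting under the JDK directory immediately.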