[jira] [Resolved] (HADOOP-17034) Fix failure of TestSnappyCompressorDecompressor on CentOS 8
[ https://issues.apache.org/jira/browse/HADOOP-17034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Masatake Iwasaki resolved HADOOP-17034.
---------------------------------------
    Assignee:   (was: Masatake Iwasaki)
    Resolution: Duplicate

Closing this as a duplicate of HADOOP-16768.

> Fix failure of TestSnappyCompressorDecompressor on CentOS 8
> -----------------------------------------------------------
>
>                 Key: HADOOP-17034
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17034
>             Project: Hadoop Common
>          Issue Type: Bug
>         Environment: CentOS Linux release 8.0.1905 (Core), snappy-devel-1.1.7-5.el8.x86_64
>            Reporter: Masatake Iwasaki
>            Priority: Major
>
> testSnappyCompressDecompress and testSnappyCompressDecompressInMultiThreads reproducibly fail on CentOS 8. These tests have no issue on CentOS 7.
[jira] [Created] (HADOOP-17036) TestFTPFileSystem failing as ftp server dir already exists
Steve Loughran created HADOOP-17036:
---------------------------------------

             Summary: TestFTPFileSystem failing as ftp server dir already exists
                 Key: HADOOP-17036
                 URL: https://issues.apache.org/jira/browse/HADOOP-17036
             Project: Hadoop Common
          Issue Type: Improvement
          Components: fs, test
    Affects Versions: 3.4.0
            Reporter: Steve Loughran

TestFTPFileSystem is failing because the FTP server's test directory already exists. The directory needs to be deleted in the setup/teardown of each test case.
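A minimal sketch of the kind of setup/teardown cleanup the issue calls for, assuming JUnit 4 and a hypothetical `ftpTestDir` location; this is not the actual TestFTPFileSystem fix, just the general pattern:

```java
import java.io.File;
import org.apache.hadoop.fs.FileUtil;
import org.junit.After;
import org.junit.Before;

public class FtpDirCleanupExample {
  // Hypothetical working directory used by the embedded FTP server in the test.
  private final File ftpTestDir = new File("target/ftp-test-dir");

  @Before
  public void setUp() {
    // Remove anything left behind by a previous run, then start from a fresh directory.
    FileUtil.fullyDelete(ftpTestDir);
    if (!ftpTestDir.mkdirs()) {
      throw new IllegalStateException("could not create " + ftpTestDir);
    }
  }

  @After
  public void tearDown() {
    // Clean up after every test case so later cases never see a pre-existing directory.
    FileUtil.fullyDelete(ftpTestDir);
  }
}
```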
Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/682/

No changes

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
    hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs : module:hadoop-common-project/hadoop-minikdc
    Possible null pointer dereference in org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method. Dereferenced at MiniKdc.java:[line 515]

FindBugs : module:hadoop-common-project/hadoop-auth
    org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, HttpServletResponse) makes inefficient use of keySet iterator instead of entrySet iterator. At MultiSchemeAuthenticationHandler.java:[line 192]

FindBugs : module:hadoop-common-project/hadoop-common
    org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally sets the field unknownValue. At CipherSuite.java:[line 44]
    org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) unconditionally sets the field unknownValue. At CryptoProtocolVersion.java:[line 67]
    Possible null pointer dereference in org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method. Dereferenced at FileUtil.java:[line 118]
    Possible null pointer dereference in org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method. Dereferenced at RawLocalFileSystem.java:[line 383]
    Useless condition: lazyPersist == true at this point. At CommandWithDestination.java:[line 502]
    org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly handles double value. At DoubleWritable.java:[line 78]
    org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles double value. At DoubleWritable.java:[line 97]
    org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly handles float value. At FloatWritable.java:[line 71]
    org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles float value. At FloatWritable.java:[line 89]
    Possible null pointer dereference in org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method. Dereferenced at IOUtils.java:[line 389]
    Possible bad parsing of shift operation in org.apache.hadoop.io.file.tfile.Utils$Version.hashCode(). At Utils.java:[line 398]
    org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory) unconditionally sets the field mmfImpl. At DefaultMetricsFactory.java:[line 49]
    org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) unconditionally sets the field miniClusterMode. At DefaultMetricsSystem.java:[line 92]
    Useless object stored in variable seqOs of method org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.addOrUpdateToken(AbstractDelegationTokenIdentifier, AbstractDelegationTokenSecretManager$DelegationTokenInformation, boolean). At ZKDelegationTokenSecretManager.java:
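Several of the findings above are the "possible null pointer dereference due to return value of called method" pattern, which usually means a `File.listFiles()` or `File.list()` result is dereferenced without a null check. A generic illustration of the fix the checker expects, not the actual FileUtil/IOUtils code:

```java
import java.io.File;
import java.io.IOException;

public class ListFilesNullCheck {
  // File.listFiles() returns null on an I/O error or when the path is not a
  // directory; iterating over the result without checking is what FindBugs
  // flags. Converting the null into an exception makes the failure explicit.
  static File[] listOrFail(File dir) throws IOException {
    File[] entries = dir.listFiles();
    if (entries == null) {
      throw new IOException("Could not list contents of " + dir);
    }
    return entries;
  }
}
```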
[jira] [Resolved] (HADOOP-17033) Update commons-codec from 1.11 to 1.14
[ https://issues.apache.org/jira/browse/HADOOP-17033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang resolved HADOOP-17033.
--------------------------------------
    Fix Version/s: 3.4.0
       Resolution: Fixed

> Update commons-codec from 1.11 to 1.14
> ---------------------------------------
>
>                 Key: HADOOP-17033
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17033
>             Project: Hadoop Common
>          Issue Type: Task
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Major
>             Fix For: 3.4.0
>
> We are on commons-codec 1.11, which is slightly outdated. The latest is 1.14. We should update it if it's not too much of a hassle.
ZStandard compression crashes
Hadoop devs,

A colleague of mine recently hit a strange issue where the zstd compression codec crashes:

Caused by: java.lang.InternalError: Error (generic)
        at org.apache.hadoop.io.compress.zstd.ZStandardCompressor.deflateBytesDirect(Native Method)
        at org.apache.hadoop.io.compress.zstd.ZStandardCompressor.compress(ZStandardCompressor.java:216)
        at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
        at org.apache.hadoop.io.compress.CompressorStream.write(CompressorStream.java:76)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
        at java.io.DataOutputStream.write(DataOutputStream.java:107)
        at org.apache.tez.runtime.library.common.sort.impl.IFile$Writer.writeKVPair(IFile.java:617)
        at org.apache.tez.runtime.library.common.sort.impl.IFile$Writer.append(IFile.java:480)

Is anyone out there hitting a similar problem?

A temporary workaround is to reduce the buffer size with "set io.compression.codec.zstd.buffersize=8192;".

We suspected it's a bug in the zstd library, but couldn't verify. Just want to send this out and see if I can get some luck.
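The `set ...;` form above is a Hive/Tez session command. For readers applying the same workaround programmatically, a minimal sketch using the standard `io.compression.codec.zstd.buffersize` key on a job `Configuration`; the class name and 8192 value here are just illustrative, taken from the workaround quoted above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.ZStandardCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class ZstdBufferWorkaround {
  public static CompressionCodec newZstdCodec() {
    Configuration conf = new Configuration();
    // Shrink the zstd codec buffer so the input handed to the native
    // compressor stays well below the size that triggers the crash.
    conf.setInt("io.compression.codec.zstd.buffersize", 8192);
    // ReflectionUtils.newInstance calls setConf(), so the codec picks up the override.
    return ReflectionUtils.newInstance(ZStandardCodec.class, conf);
  }
}
```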
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/137/

[May 10, 2020 6:13:30 AM] (Ayush Saxena) HDFS-15250. Setting `dfs.client.use.datanode.hostname` to true can crash the system because of unhandled UnresolvedAddressException. Contributed by Ctest.

-1 overall

The following subsystems voted -1:
    asflicense findbugs mvnsite pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

findbugs : module:hadoop-yarn-project/hadoop-yarn
    Uncallable method org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance() defined in anonymous class. At TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
    Dead store to entities in org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl). At TestTimelineReaderHBaseDown.java:[line 190]
    org.apache.hadoop.yarn.server.webapp.WebServiceClient.sslFactory should be package protected. At WebServiceClient.java:[line 42]

findbugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server
    Uncallable method org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance() defined in anonymous class. At TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
    Dead store to entities in org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl). At TestTimelineReaderHBaseDown.java:[line 190]
    org.apache.hadoop.yarn.server.webapp.WebServiceClient.sslFactory should be package protected. At WebServiceClient.java:[line 42]

findbugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
    org.apache.hadoop.yarn.server.webapp.WebServiceClient.sslFactory should be package protected. At WebServiceClient.java:[line 42]

findbugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
    Uncallable method org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance() defined in anonymous class. At TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
    Dead store to entities in org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl). At TestTimelineReaderHBaseDown.java:[line 190]

findbugs : module:hadoop-yarn-project
    Uncallable method org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance() defined in anonymous class. At TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
    Dead store to entities in org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl). At TestTimelineReaderHBaseDown.java:[line 190]
Re: ZStandard compression crashes
Hi Wei-Chiu,

What is the Hadoop version being used? Check whether HADOOP-15822 is in it; it had a similar error.

-Ayush

> On 11-May-2020, at 10:11 PM, Wei-Chiu Chuang wrote:
>
> Hadoop devs,
>
> A colleague of mine recently hit a strange issue where the zstd compression
> codec crashes:
>
> Caused by: java.lang.InternalError: Error (generic)
>         at org.apache.hadoop.io.compress.zstd.ZStandardCompressor.deflateBytesDirect(Native Method)
> [...]
>
> A temporary workaround is to reduce the buffer size with
> "set io.compression.codec.zstd.buffersize=8192;".
Re: ZStandard compression crashes
Thanks for the pointer, it does look similar. However, we are roughly on the latest of branch-3.1 and that fix is in our branch. I'm pretty sure we have all the zstd fixes.

I believe the libzstd version used is 1.4.4, but I need to confirm. I suspected it's a library version issue because we've been using zstd compression for over a year, and this (reproducible) bug started happening consistently only recently.

On Mon, May 11, 2020 at 1:57 PM Ayush Saxena wrote:

> Hi Wei-Chiu,
> What is the Hadoop version being used? Check whether HADOOP-15822 is in it;
> it had a similar error.
>
> -Ayush
>
> [...]
Re: ZStandard compression crashes
If I recall this problem correctly, the root cause is that the default zstd compression block size is 256 KB, and Hadoop's zstd compression will attempt to use the OS platform's default compression size if it is available. The recommended output buffer size is slightly bigger than the input size, to account for the header overhead in zstd compression:
http://software.icecube.wisc.edu/coverage/00_LATEST/icetray/private/zstd/lib/compress/zstd_compress.c.gcov.html#2982

The Hadoop code at
https://github.com/apache/hadoop/blame/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L259
sets the output size equal to the input size when the input size is bigger than the output size. By manually setting the buffer size to a small value, the input size becomes smaller than the recommended output size, which keeps the system working. Returning ZSTD_CStreamOutSize() in getStreamSize may let the system work without a predefined default.

On Mon, May 11, 2020 at 2:29 PM Wei-Chiu Chuang wrote:

> Thanks for the pointer, it does look similar. However, we are roughly on the
> latest of branch-3.1 and that fix is in our branch. I'm pretty sure we have
> all the zstd fixes.
>
> I believe the libzstd version used is 1.4.4, but I need to confirm. I
> suspected it's a library version issue because we've been using zstd
> compression for over a year, and this (reproducible) bug started happening
> consistently only recently.
>
> [...]
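To make the "slightly bigger than the input" point concrete, here is a rough Java transcription of zstd's ZSTD_COMPRESSBOUND macro from zstd.h (constants taken from the 1.4.x headers, so treat it as illustrative rather than authoritative). It shows that the worst-case compressed size always exceeds the input, so an output buffer sized equal to the input can be too small for incompressible data:

```java
public class ZstdCompressBoundSketch {

  // Approximation of ZSTD_COMPRESSBOUND(srcSize) from zstd.h:
  // srcSize + (srcSize >> 8) plus a small margin for inputs under 128 KB.
  static long compressBound(long srcSize) {
    long margin = srcSize < (128 << 10) ? (((128 << 10) - srcSize) >> 11) : 0;
    return srcSize + (srcSize >> 8) + margin;
  }

  public static void main(String[] args) {
    // 256 KB: the default block size mentioned above.
    long srcSize = 256 << 10;
    long worstCase = compressBound(srcSize);
    // worstCase is 263168 bytes for a 262144-byte input, so an output buffer
    // that is merely "the same size as the input" is short by about 1 KB
    // in the worst case, matching the Error (generic) failure mode.
    System.out.printf("input=%d, worst-case output=%d, shortfall=%d%n",
        srcSize, worstCase, worstCase - srcSize);
  }
}
```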