[jira] [Created] (HDFS-11941) Move dfsadmin triggerBlockReport and metaSave to debugadmin
Andrew Wang created HDFS-11941:
-----------------------------------

             Summary: Move dfsadmin triggerBlockReport and metaSave to debugadmin
                 Key: HDFS-11941
                 URL: https://issues.apache.org/jira/browse/HDFS-11941
             Project: Hadoop HDFS
          Issue Type: Improvement
    Affects Versions: 2.9.0, 3.0.0-alpha4
            Reporter: Andrew Wang

Filing a JIRA for discussion. While reviewing the [dfsadmin commands|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#dfsadmin], I noticed that some of them seem better suited for debugadmin:

* triggerBlockReport: similar to recoverLease in debugadmin, you don't need this unless HDFS is in a bad state
* metasave: dumps NN data structures to a side file; seems purely for debugging

DebugAdmin commands notably do not need to be compatible between releases and do not have Java API equivalents in HdfsAdmin.
[jira] [Created] (HDFS-11942) Make the new chooseDataNode policy work in more operations like seek and fetch
Fangyuan Deng created HDFS-11942:
------------------------------------

             Summary: Make the new chooseDataNode policy work in more operations like seek and fetch
                 Key: HDFS-11942
                 URL: https://issues.apache.org/jira/browse/HDFS-11942
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: hdfs-client
    Affects Versions: 2.7.0, 2.6.0
            Reporter: Fangyuan Deng

With the default policy, if a file is ONE_SSD, the client will prefer to read the local disk replica rather than the remote SSD replica. But now, PCIe SSDs and 10G Ethernet make reading a remote SSD faster than reading the local disk.

HDFS-9666 gave us a patch, but the code is not complete and has not been updated for a long time. This issue provides a complete patch, and we have tested it on three machines [32-core CPU, 128G memory, 1000M network, 1.2T HDD, 800G SSD (Intel P3600)]. With this feature, the throughput of an HBase table (ONE_SSD) is double that of the same table without it.
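As an illustration of the proposed behaviour (a sketch only, not the HDFS-11942 patch; the class and flag names below are hypothetical), replica selection can rank storage medium ahead of locality when a switch is enabled:

{noformat}
import java.util.Comparator;
import java.util.List;

/**
 * Illustrative replica selection: prefer faster storage media over pure
 * locality when the (hypothetical) preferFastStorage flag is enabled.
 * This is a sketch of the idea, not the actual DFSInputStream code.
 */
public class StorageAwareReplicaChooser {

  public enum StorageMedium { RAM_DISK, SSD, DISK, ARCHIVE }

  public static class Replica {
    final String datanode;
    final StorageMedium medium;
    final boolean local;

    public Replica(String datanode, StorageMedium medium, boolean local) {
      this.datanode = datanode;
      this.medium = medium;
      this.local = local;
    }
  }

  private final boolean preferFastStorage;  // hypothetical config switch

  public StorageAwareReplicaChooser(boolean preferFastStorage) {
    this.preferFastStorage = preferFastStorage;
  }

  /** Pick the replica to read from. */
  public Replica choose(List<Replica> replicas) {
    Comparator<Replica> byLocality =
        Comparator.comparing((Replica r) -> !r.local);     // local first
    Comparator<Replica> byMedium =
        Comparator.comparingInt(r -> r.medium.ordinal());  // faster media first

    Comparator<Replica> order = preferFastStorage
        ? byMedium.thenComparing(byLocality)   // remote SSD beats local HDD
        : byLocality.thenComparing(byMedium);  // default: locality wins

    return replicas.stream().min(order)
        .orElseThrow(IllegalStateException::new);
  }
}
{noformat}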
[jira] [Created] (HDFS-11943) Warn log frequently printed to screen in doEncode function of AbstractNativeRawEncoder class
liaoyuxiangqin created HDFS-11943:
-------------------------------------

             Summary: Warn log frequently printed to screen in doEncode function of AbstractNativeRawEncoder class
                 Key: HDFS-11943
                 URL: https://issues.apache.org/jira/browse/HDFS-11943
             Project: Hadoop HDFS
          Issue Type: Improvement
         Environment: cluster: 3 nodes
                      OS: Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64, Ubuntu 4.4.0-31-generic
                      Hadoop version: hadoop-3.0.0-alpha4
                      erasure coding: XOR-2-1-64k with Intel ISA-L enabled
                      command: hadoop fs -put file /
            Reporter: liaoyuxiangqin
            Priority: Minor

When I write a file to HDFS in the above environment, the HDFS client frequently prints the "use direct ByteBuffer inputs/outputs" warning from the doEncode function to the screen. Detailed information follows:

2017-06-07 15:20:42,856 WARN rawcoder.AbstractNativeRawEncoder: convertToByteBufferState is invoked, not efficiently. Please use direct ByteBuffer inputs/outputs
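The warning points at the mitigation on the caller side: hand the native coder direct ByteBuffers so no on-heap conversion is needed. A minimal sketch of that idea follows; the NativeEncoder interface is a hypothetical stand-in rather than the real RawErasureEncoder API.

{noformat}
import java.nio.ByteBuffer;

/**
 * Sketch only: the warning fires when the native (ISA-L) coder receives
 * on-heap buffers and has to convert them via convertToByteBufferState.
 * Allocating direct ByteBuffers for both inputs and outputs avoids the
 * conversion. NativeEncoder is a hypothetical stand-in, not the real
 * RawErasureEncoder interface.
 */
public class DirectBufferEncodeExample {

  interface NativeEncoder {                        // hypothetical stand-in
    void encode(ByteBuffer[] inputs, ByteBuffer[] outputs);
  }

  static final int CELL_SIZE = 64 * 1024;          // XOR-2-1-64k cell size

  static ByteBuffer[] allocateDirect(int count) {
    ByteBuffer[] buffers = new ByteBuffer[count];
    for (int i = 0; i < count; i++) {
      // Direct buffers can be handed to native code without copying.
      buffers[i] = ByteBuffer.allocateDirect(CELL_SIZE);
    }
    return buffers;
  }

  static void encodeOneStripe(NativeEncoder encoder, byte[][] stripeData) {
    ByteBuffer[] dataUnits = allocateDirect(2);    // 2 data cells (XOR-2-1)
    ByteBuffer[] parityUnits = allocateDirect(1);  // 1 parity cell
    for (int i = 0; i < dataUnits.length; i++) {
      dataUnits[i].put(stripeData[i]);
      dataUnits[i].flip();
    }
    encoder.encode(dataUnits, parityUnits);        // no conversion warning
  }
}
{noformat}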
[jira] [Created] (HDFS-11944) TestAclsEndToEnd#testCreateEncryptionZone failing very frequently.
Rushabh S Shah created HDFS-11944:
-------------------------------------

             Summary: TestAclsEndToEnd#testCreateEncryptionZone failing very frequently.
                 Key: HDFS-11944
                 URL: https://issues.apache.org/jira/browse/HDFS-11944
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: encryption, test
    Affects Versions: 2.8.0
            Reporter: Rushabh S Shah

TestAclsEndToEnd#testCreateEncryptionZone is failing very frequently. The way the test is written makes it very hard to debug. Ideally each test case should test only one behavior, but this test case resets the DFS state many times within the same test method. It fails with the following stack trace.

{noformat}
Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 35.17 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestAclsEndToEnd
testCreateEncryptionZone(org.apache.hadoop.hdfs.TestAclsEndToEnd)  Time elapsed: 3.844 sec  <<< FAILURE!
java.lang.AssertionError: Allowed zone creation of zone with blacklisted GENERATE_EEK
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.junit.Assert.assertFalse(Assert.java:64)
	at org.apache.hadoop.hdfs.TestAclsEndToEnd.testCreateEncryptionZone(TestAclsEndToEnd.java:753)

Results :

Failed tests:
  TestAclsEndToEnd.testCreateEncryptionZone:753 Allowed zone creation of zone with blacklisted GENERATE_EEK
{noformat}

It failed in the following pre-commit runs:
[HDFS-11885 precommit|https://issues.apache.org/jira/browse/HDFS-11885?focusedCommentId=16040117&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16040117]
[HDFS-11804 precommit|https://issues.apache.org/jira/browse/HDFS-11804?focusedCommentId=16039872&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16039872]
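A hedged sketch of the one-behavior-per-test suggestion (the tryCreateZone helper and scenario names below are hypothetical, not the actual TestAclsEndToEnd code):

{noformat}
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

/**
 * Illustration only: one KMS ACL scenario per test method, instead of
 * resetting the DFS/KMS state repeatedly inside a single test method.
 * The tryCreateZone() helper is hypothetical.
 */
public class TestCreateEncryptionZoneSplitExample {

  /** Set up the given ACLs, restart the test cluster, attempt to create a zone. */
  private boolean tryCreateZone(String blacklistedOp) {
    // ... cluster setup and zone creation would go here ...
    return blacklistedOp == null;  // placeholder result for the sketch
  }

  @Test
  public void testCreateZoneWithDefaultAcls() {
    assertTrue("Expected zone creation with default ACLs to succeed",
        tryCreateZone(null));
  }

  @Test
  public void testCreateZoneWithBlacklistedGenerateEek() {
    assertFalse("Allowed zone creation of zone with blacklisted GENERATE_EEK",
        tryCreateZone("GENERATE_EEK"));
  }
}
{noformat}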
[jira] [Created] (HDFS-11945) Internal lease recovery may not be retried for a long time
Kihwal Lee created HDFS-11945:
---------------------------------

             Summary: Internal lease recovery may not be retried for a long time
                 Key: HDFS-11945
                 URL: https://issues.apache.org/jira/browse/HDFS-11945
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
            Reporter: Kihwal Lee

A lease is assigned per client, identified by its holder ID or client ID, so a renewal or expiration of a lease affects all files being written by that client.

When a client/writer dies without closing a file, its lease expires in one hour (the hard limit) and the namenode tries to recover the lease. As part of the process, the namenode takes ownership of the lease and renews it. If the recovery does not finish successfully, the lease will expire in another hour and the namenode will try again to recover it. However, if the file system has another lease expiring within that hour, the recovery attempt for that lease will push forward the expiration of the lease held by the namenode. This causes failed lease recoveries not to be retried for a long time. We have seen it happen for days.
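A simplified worked model of the interaction described above (illustrative names, not the real LeaseManager internals): because every namenode-initiated recovery shares one holder, renewing that holder for a newly expired file postpones the retry of an earlier failed recovery.

{noformat}
import java.util.HashMap;
import java.util.Map;

/**
 * Simplified model of the reported behavior; not the real LeaseManager.
 * Leases are tracked per holder, so renewing the shared namenode holder
 * for one file postpones the retry of every other recovery it owns.
 */
public class LeaseRetryExample {

  static final long HARD_LIMIT_MS = 60 * 60 * 1000L;  // one hour
  static final String NN_HOLDER = "HDFS_NameNode";    // shared recovery holder

  final Map<String, Long> lastRenewalByHolder = new HashMap<>();

  void renew(String holder, long now) {
    lastRenewalByHolder.put(holder, now);
  }

  boolean expired(String holder, long now) {
    return now - lastRenewalByHolder.getOrDefault(holder, 0L) > HARD_LIMIT_MS;
  }

  public static void main(String[] args) {
    LeaseRetryExample leases = new LeaseRetryExample();
    long t0 = 0;

    // First recovery attempt fails; the namenode now holds that lease.
    leases.renew(NN_HOLDER, t0);

    // 50 minutes later another client's lease expires and the namenode
    // starts recovering that file too, renewing the same shared holder.
    leases.renew(NN_HOLDER, t0 + 50 * 60 * 1000L);

    // Just past the original one-hour mark the first recovery is NOT
    // retried, because the shared holder's lease was pushed forward.
    System.out.println("retry due shortly after t0 + 1h? "
        + leases.expired(NN_HOLDER, t0 + 61 * 60 * 1000L));  // prints false
  }
}
{noformat}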
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/427/

[Jun 6, 2017 1:35:12 PM] (kihwal) HADOOP-14035. Reduce fair call queue backoff's impact on clients.
[Jun 6, 2017 3:11:47 PM] (brahma) HADOOP-14485. Redundant 'final' modifier in try-with-resources
[Jun 6, 2017 6:06:49 PM] (liuml07) HADOOP-14472. Azure: TestReadAndSeekPageBlobAfterWrite fails
[Jun 6, 2017 6:09:28 PM] (liuml07) HADOOP-14491. Azure has messed doc structure. Contributed by Mingliang
[Jun 6, 2017 8:51:02 PM] (arp) HDFS-11932. BPServiceActor thread name is not correctly set. Contributed
[Jun 6, 2017 9:57:48 PM] (wang) HDFS-11840. Log HDFS Mover exception message of exit to its own log.
[Jun 7, 2017 12:19:15 AM] (Carlo Curino) YARN-6547. Enhance SLS-based tests leveraging invariant checker.
[Jun 7, 2017 5:05:33 AM] (brahma) HDFS-11711. DN should not delete the block On "Too many open files"
[Jun 7, 2017 5:25:53 AM] (vinayakumarb) HDFS-11708. Positional read will fail if replicas moved to different DNs
[Jun 7, 2017 5:42:13 AM] (yqlin) HDFS-11929. Document missing processor of hdfs oiv_legacy command.

-1 overall

The following subsystems voted -1:
    findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    FindBugs :

       module:hadoop-common-project/hadoop-minikdc
       Possible null pointer dereference in org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method Dereferenced at MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method Dereferenced at MiniKdc.java:[line 368]

    FindBugs :

       module:hadoop-common-project/hadoop-auth
       org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, HttpServletResponse) makes inefficient use of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 192]

    FindBugs :

       module:hadoop-common-project/hadoop-common
       org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At CipherSuite.java:[line 44]
       org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) unconditionally sets the field unknownValue At CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67]
       Possible null pointer dereference in org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method Dereferenced at FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method Dereferenced at FileUtil.java:[line 118]
       Possible null pointer dereference in org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method Dereferenced at RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method Dereferenced at RawLocalFileSystem.java:[line 387]
       Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) ignored, but method has no side effect At FTPFileSystem.java:but method has no side effect At FTPFileSystem.java:[line 421]
       Useless condition:lazyPersist == true at this point At CommandWithDestination.java:[line 502]
       org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly handles double value At DoubleWritable.java: At DoubleWritable.java:[line 78]
       org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) incorrectly handles double value At DoubleWritable.java:[line 97]
       org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly handles float value At FloatWritable.java: At FloatWritable.java:[line 71]
       org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles float value At FloatWritable.java:int) incorrectly handles float value At FloatWritable.java:[line 89]
       Possible null pointer dereference in org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method Dereferenced at IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method Dereferenced at IOUtils.java:[line 351]
       org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient use of keySet iterator instead of entrySet iterator At ECSchema.java:keySet iterator instead of entrySet iterator At
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/338/

[Jun 6, 2017 1:35:12 PM] (kihwal) HADOOP-14035. Reduce fair call queue backoff's impact on clients.
[Jun 6, 2017 3:11:47 PM] (brahma) HADOOP-14485. Redundant 'final' modifier in try-with-resources
[Jun 6, 2017 6:06:49 PM] (liuml07) HADOOP-14472. Azure: TestReadAndSeekPageBlobAfterWrite fails
[Jun 6, 2017 6:09:28 PM] (liuml07) HADOOP-14491. Azure has messed doc structure. Contributed by Mingliang
[Jun 6, 2017 8:51:02 PM] (arp) HDFS-11932. BPServiceActor thread name is not correctly set. Contributed
[Jun 6, 2017 9:57:48 PM] (wang) HDFS-11840. Log HDFS Mover exception message of exit to its own log.
[Jun 7, 2017 12:19:15 AM] (Carlo Curino) YARN-6547. Enhance SLS-based tests leveraging invariant checker.
[Jun 7, 2017 5:05:33 AM] (brahma) HDFS-11711. DN should not delete the block On "Too many open files"
[Jun 7, 2017 5:25:53 AM] (vinayakumarb) HDFS-11708. Positional read will fail if replicas moved to different DNs
[Jun 7, 2017 5:42:13 AM] (yqlin) HDFS-11929. Document missing processor of hdfs oiv_legacy command.

-1 overall

The following subsystems voted -1:
    compile mvninstall unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests :

       hadoop.fs.sftp.TestSFTPFileSystem
       hadoop.ha.TestActiveStandbyElectorRealZK
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140
       hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
       hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
       hadoop.hdfs.server.datanode.metrics.TestDataNodeOutlierDetectionViaMetrics
       hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
       hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
       hadoop.hdfs.TestRollingUpgrade
       hadoop.hdfs.web.TestWebHdfsTimeouts
       hadoop.hdfs.server.datanode.TestDataNodeUUID
       hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
       hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
       hadoop.yarn.server.timeline.TestRollingLevelDB
       hadoop.yarn.server.timeline.TestTimelineDataManager
       hadoop.yarn.server.timeline.TestLeveldbTimelineStore
       hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
       hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
       hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
       hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter
       hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
       hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
       hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService
       hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
       hadoop.yarn.server.TestContainerManagerSecurity
       hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
       hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
       hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
       hadoop.yarn.applications.distributedshell.TestDistributedShell
       hadoop.mapred.TestShuffleHandler
       hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService

    Timed out junit tests :

       org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
       org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
       org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands
       org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore
       org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA
       org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
       org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA

   mvninstall:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/338/artifact/out/patch-mvninstall-root.txt  [492K]

   compile:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/338/artifact/out/patch-compile-root.txt  [20K]

   cc:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/338/artifact/out/patch-compile-root.txt  [20K]

   javac:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/338/artifact/out/patch-compile-root.txt  [20K]

   unit:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/338/artifact/out/patch-unit-hadoop-assemblies.txt  [4.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/338/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt  [144K]
       ht
[jira] [Resolved] (HDFS-11940) Throw a NoSuchMethodError exception when testing TestDFSPacket
[ https://issues.apache.org/jira/browse/HDFS-11940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

legend resolved HDFS-11940.
---------------------------
    Resolution: Auto Closed

> Throw a NoSuchMethodError exception when testing TestDFSPacket
> ---------------------------------------------------------------
>
>                 Key: HDFS-11940
>                 URL: https://issues.apache.org/jira/browse/HDFS-11940
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 3.0.0-alpha3
>         Environment: org.apache.maven.surefire 2.17
>                      jdk 1.8
>            Reporter: legend
>
> Throw an exception when I run TestDFSPacket. Details are listed below.
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs-client: There are test failures.
> [ERROR]
> [ERROR] Please refer to /home/hadoop/GitHub/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/target/surefire-reports for the individual test results.
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs-client: There are test failures.
> Please refer to /home/hadoop/GitHub/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/target/surefire-reports for the individual test results.
>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
>     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
>     at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
>     at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
>     at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
>     at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
>     at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
>     at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
>     at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
>     at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
>     at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
>     at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
>     at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.MojoFailureException: There are test failures.
> Please refer to /home/hadoop/GitHub/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/target/surefire-reports for the individual test results.
>     at org.apache.maven.plugin.surefire.SurefireHelper.reportExecution(SurefireHelper.java:82)
>     at org.apache.maven.plugin.surefire.SurefirePlugin.handleSummary(SurefirePlugin.java:195)
>     at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:861)
>     at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:729)
>     at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
>     ... 20 more
[jira] [Created] (HDFS-11946) Ozone: Containers in different datanodes are mapped to the same location
Tsz Wo Nicholas Sze created HDFS-11946:
------------------------------------------

             Summary: Ozone: Containers in different datanodes are mapped to the same location
                 Key: HDFS-11946
                 URL: https://issues.apache.org/jira/browse/HDFS-11946
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ozone
            Reporter: Tsz Wo Nicholas Sze
            Assignee: Anu Engineer

This is a problem in unit tests. Containers with the same container name in different datanodes are mapped to the same local path location. As a result, the first datanode will succeed in creating the container file, but the remaining datanodes will fail to create it with FileAlreadyExistsException.
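An illustrative sketch of the collision and one possible disambiguation (including a per-datanode component in the local path); the directory layout below is an assumption for illustration, not the actual Ozone container layout.

{noformat}
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * Illustration only: if the local container path is derived solely from the
 * container name, every datanode of a single-host test cluster maps the
 * container to the same file, and all but the first creation fails with
 * FileAlreadyExistsException. Including a per-datanode id disambiguates.
 * The directory layout here is hypothetical.
 */
public class ContainerPathExample {

  static Path sharedPath(String baseDir, String containerName) {
    return Paths.get(baseDir, "containers", containerName + ".container");
  }

  static Path perDatanodePath(String baseDir, String datanodeUuid,
                              String containerName) {
    return Paths.get(baseDir, datanodeUuid, "containers",
        containerName + ".container");
  }

  public static void main(String[] args) {
    String container = "container-0001";
    // Same path for every datanode -> FileAlreadyExistsException after the first:
    System.out.println(sharedPath("/tmp/ozone", container));
    // Distinct path per datanode:
    System.out.println(perDatanodePath("/tmp/ozone", "dn-uuid-1", container));
    System.out.println(perDatanodePath("/tmp/ozone", "dn-uuid-2", container));
  }
}
{noformat}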
[jira] [Created] (HDFS-11947) BPOfferService prints an invalid warning message "Block pool ID needed, but service not yet registered with NN"
Tsz Wo Nicholas Sze created HDFS-11947:
------------------------------------------

             Summary: BPOfferService prints an invalid warning message "Block pool ID needed, but service not yet registered with NN"
                 Key: HDFS-11947
                 URL: https://issues.apache.org/jira/browse/HDFS-11947
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: datanode
            Reporter: Tsz Wo Nicholas Sze
            Priority: Minor
[jira] [Created] (HDFS-11948) Ozone: change TestRatisManager to check cluster with data
Tsz Wo Nicholas Sze created HDFS-11948:
------------------------------------------

             Summary: Ozone: change TestRatisManager to check cluster with data
                 Key: HDFS-11948
                 URL: https://issues.apache.org/jira/browse/HDFS-11948
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ozone
            Reporter: Tsz Wo Nicholas Sze
            Assignee: Tsz Wo Nicholas Sze

TestRatisManager first creates multiple Ratis clusters. Then it changes the membership and closes some clusters. However, it does not test the clusters with data.
[jira] [Created] (HDFS-11949) Add test case ensuring that FsShell cannot move a file to a target directory where the file already exists
legend created HDFS-11949:
-----------------------------

             Summary: Add test case ensuring that FsShell cannot move a file to a target directory where the file already exists
                 Key: HDFS-11949
                 URL: https://issues.apache.org/jira/browse/HDFS-11949
             Project: Hadoop HDFS
          Issue Type: Test
          Components: test
    Affects Versions: 3.0.0-alpha4
            Reporter: legend
            Priority: Minor

moveFromLocal returns an error when moving a file to a target directory in which the file already exists, so we need to add a test case to check this behavior.
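A hedged sketch of such a check, driving FsShell through ToolRunner against the local file system for brevity (a real test would use MiniDFSCluster; the paths and expected non-zero exit code here are illustrative assumptions):

{noformat}
import java.io.File;
import java.io.FileWriter;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsShell;
import org.apache.hadoop.util.ToolRunner;

/**
 * Sketch only: verify that moveFromLocal fails (non-zero exit code) when a
 * file with the same name already exists in the target directory. Runs
 * against the local file system for brevity; paths are illustrative.
 */
public class MoveFromLocalExistingTargetExample {

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "file:///");

    File src = File.createTempFile("movesrc", ".txt");
    try (FileWriter w = new FileWriter(src)) {
      w.write("data");
    }

    File targetDir = new File(System.getProperty("java.io.tmpdir"), "movedst");
    targetDir.mkdirs();
    File existing = new File(targetDir, src.getName());
    try (FileWriter w = new FileWriter(existing)) {
      w.write("already here");                     // target file already exists
    }

    int exit = ToolRunner.run(conf, new FsShell(),
        new String[] {"-moveFromLocal", src.getAbsolutePath(),
            targetDir.getAbsolutePath()});

    // Expect a non-zero exit code because the destination file exists.
    System.out.println("moveFromLocal exit code: " + exit);
  }
}
{noformat}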