Ruslan Dautkhanov created HDFS-12113:
----------------------------------------

             Summary: `hadoop fs -setrep` requires a huge amount of memory on the client side
                 Key: HDFS-12113
                 URL: https://issues.apache.org/jira/browse/HDFS-12113
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 2.6.5, 2.6.0
         Environment: Java 7
            Reporter: Ruslan Dautkhanov


{code}
$ hadoop fs -setrep -w 3 /
{code}

was failing with 
{noformat}
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2367)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
at java.lang.StringBuilder.append(StringBuilder.java:132)
at org.apache.hadoop.fs.shell.PathData.getStringForChildPath(PathData.java:305)
at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:272)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
at org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
{noformat}
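
Reading the trace, it looks like the shell walks the namespace entirely client-side, materializing each directory's complete listing before descending into it. Below is a simplified sketch of that suspected pattern (illustrative only, not the actual FsShell source; class and method names are my own):

{code}
// Simplified sketch (not the actual FsShell source) of the recursion
// pattern the stack trace suggests: each directory's complete child
// listing is fetched into client memory before recursing into it.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RecurseSketch {
  static void walk(FileSystem fs, Path dir) throws Exception {
    // listStatus() returns the whole directory listing as one array;
    // a single very wide directory can strain the client heap by itself.
    FileStatus[] children = fs.listStatus(dir);
    for (FileStatus child : children) {
      if (child.isDirectory()) {
        // The parent's `children` array stays referenced during the
        // recursive call, so listings accumulate along the walk.
        walk(fs, child.getPath());
      }
    }
  }

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    walk(fs, new Path("/"));
  }
}
{code}

If that is roughly what happens, heap usage would grow with directory width and with the listings held across the recursion, which would explain why a default-sized client heap fails on a full-filesystem walk.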

It only succeeded once the `hadoop fs` CLI command's Java heap was allowed to grow to 5 GB:
{code}
HADOOP_HEAPSIZE=5000 hadoop fs -setrep -w 3 /
{code}
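
For reference, the 2.x `hadoop` launcher script interprets HADOOP_HEAPSIZE in megabytes, so this amounts to roughly `-Xmx5000m` for the client JVM; setting an explicit `-Xmx` via HADOOP_CLIENT_OPTS should have the same effect.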

Note that this setrep change was applied to the whole HDFS filesystem.

So it looks like the amount of memory used by the `hadoop fs -setrep` command depends on the total number of files HDFS holds? This is not a huge HDFS filesystem; I would call it "small" by current standards.
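
If client-side recursion is indeed the culprit, a lower-memory alternative for this particular use case might be to drive the change through the Java FileSystem API with a streaming listing. A hedged sketch (assumes a 2.x client; note it does not reproduce the `-w` wait-for-replication behavior of the shell command):

{code}
// Sketch of a roughly constant-memory alternative to the shell's
// client-side recursion: FileSystem.listFiles(path, true) pages through
// the namespace via a RemoteIterator instead of materializing whole
// directory listings.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class StreamingSetRep {
  public static void main(String[] args) throws Exception {
    short replication = 3;
    FileSystem fs = FileSystem.get(new Configuration());
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(new Path("/"), true);
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      if (status.isFile()) {
        // Replication is a per-file attribute; the isFile() check is
        // defensive, as listFiles() should yield files only.
        fs.setReplication(status.getPath(), replication);
      }
    }
  }
}
{code}

Since the iterator fetches listings in batches from the NameNode, heap usage should stay roughly flat no matter how many files the filesystem holds.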



