Lipeng,

-setrep allows you to change the replication factor of an existing
file. You can also specify the replication factor when you initially
create a file. I'm not sure what you mean by "dynamically"; to me that
means calling -setrep.
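
For example, something like this (paths and replication factors here
are just placeholders):

  # change the replication factor of an existing file;
  # -w waits until the target replication is actually reached
  hdfs dfs -setrep -w 5 /user/foo/data.txt

  # set a non-default replication factor when creating the file
  hdfs dfs -D dfs.replication=5 -put data.txt /user/foo/data.txt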

There is replication or invalidation work done as part of running
-setrep: raising the factor schedules new replicas, lowering it
schedules excess replicas for invalidation. This is done as a
low-priority operation, unless the file is already in a bad
replication state (e.g. under-replicated).
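
If you want to see how far that work has progressed, fsck reports the
replication of each block and flags anything under-replicated, e.g.
(again, the path is just a placeholder):

  hdfs fsck /user/foo/data.txt -files -blocks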

Best,
Andrew

On Wed, Mar 4, 2015 at 12:18 PM, Lipeng Wan <lipengwa...@gmail.com> wrote:

> Hi Andrew,
>
> By using the -setrep command, can we change the replication factor of
> existing files? Or, can we change the replication factor of files
> dynamically? If that's possible, how much data movement overhead will
> occur?
> Thanks!
>
> Lipeng
>
> On Tue, Mar 3, 2015 at 2:57 PM, Andrew Wang <andrew.w...@cloudera.com>
> wrote:
> > Yup, definitely. Check out the -setrep command:
> >
> >
> > http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html#setrep
> >
> > HTH,
> > Andrew
> >
> > On Tue, Mar 3, 2015 at 11:49 AM, Lipeng Wan <lipengwa...@gmail.com>
> > wrote:
> >
> >> Hi Andrew,
> >>
> >> Thanks for your reply!
> >> Then is it possible for us to specify different replication factors
> >> for different files?
> >>
> >> Lipeng
> >>
> >> On Tue, Mar 3, 2015 at 2:38 PM, Andrew Wang <andrew.w...@cloudera.com>
> >> wrote:
> >> > Hi Lipeng,
> >> >
> >> > Right now that is unsupported; replication is set on a per-file
> >> > basis, not per-block.
> >> >
> >> > Andrew
> >> >
> >> > On Tue, Mar 3, 2015 at 11:23 AM, Lipeng Wan <lipengwa...@gmail.com>
> >> > wrote:
> >> >
> >> >> Hi devs,
> >> >>
> >> >> By default, HDFS creates the same number of replicas for each
> >> >> block. Is it possible for us to create more replicas for some
> >> >> of the blocks?
> >> >> Thanks!
> >> >>
> >> >> L. W.
> >> >>
> >>
>
