Ravi Teja Ch N V created HADOOP-13124:
-
Summary: Support weights/priority for user in Faircallqueue
Key: HADOOP-13124
URL: https://issues.apache.org/jira/browse/HADOOP-13124
Project: Hadoop Common
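For context, FairCallQueue today weights the RPC *priority levels*, not individual users, which is what this issue proposes to add. A minimal sketch of the existing wiring in core-site.xml, assuming a NameNode RPC server on port 8020 (the port number is an example):

```xml
<!-- Sketch, not a definitive config: plugs FairCallQueue into the port-8020
     RPC server. The multiplexer weights below apply per priority level,
     highest priority first; per-user weights are the feature requested here. -->
<property>
  <name>ipc.8020.callqueue.impl</name>
  <value>org.apache.hadoop.ipc.FairCallQueue</value>
</property>
<property>
  <name>ipc.8020.scheduler.priority.levels</name>
  <value>4</value>
</property>
<property>
  <name>ipc.8020.faircallqueue.multiplexer.weights</name>
  <value>8,4,2,1</value>
</property>
```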
Regards,
Ravi Teja
URL: https://issues.apache.org/jira/browse/HADOOP-7926
Project: Hadoop Common
Issue Type: Bug
Components: build
Affects Versions: 0.23.0
Reporter: Ravi Teja Ch N V
This approach will help to report all the failures even if some test cases fail.
>org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.shuffleInMemory(ReduceTask.java:1592)
>
The problem occurred after the in-memory copy was chosen,
Regards,
Ravi Teja
From: Robert Evans [ev...@yahoo-inc.com]
Sent: 01 December 2011
the new child JVMs.
Regards,
Ravi Teja
From: Mingxi Wu [mingxi...@turn.com]
Sent: 01 December 2011 12:37:54
To: common-dev@hadoop.apache.org
Subject: RE: Hadoop - non disk based sorting?
Thanks Ravi.
So, why when map outputs are huge, reducer will not abl
, your map outputs got partitioned among 200 reducers, so you didn't hit this problem.
By setting the max memory of your reducer with mapred.child.java.opts, you can
get around this problem.
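A minimal sketch of raising the reduce-side JVM heap as suggested above, so the in-memory shuffle buffer has room. The property names are the classic (MR1-era) ones; the jar name, main class, and sizes are made-up examples:

```shell
# Hypothetical job submission; only the two -D properties are the point here.
hadoop jar myjob.jar com.example.MyJob \
    -Dmapred.child.java.opts=-Xmx1024m \
    -Dmapred.job.shuffle.input.buffer.percent=0.70 \
    input/ output/
```

The heap size interacts with mapred.job.shuffle.input.buffer.percent, since the in-memory shuffle buffer is sized as a fraction of that heap.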
Regards,
Ravi teja
From: Mingxi Wu [mingxi...@turn.com]
Se
I feel #4 is the better option.
Regards,
Ravi Teja
-Original Message-
From: Alejandro Abdelnur [mailto:t...@cloudera.com]
Sent: Wednesday, October 12, 2011 9:38 PM
To: common-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org;
hdfs-...@hadoop.apache.org
Subject: 0.23 & trunk
Hi Alexandre,
You need to set the M2_REPO variable in Eclipse to the Maven repository
on your local system. By default it is
{userhome}/.m2/repository
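One way to define M2_REPO is via the maven-eclipse-plugin, which was the usual route at the time; the workspace path below is an example:

```shell
# Defines the M2_REPO classpath variable for a whole Eclipse workspace.
mvn -Declipse.workspace=/path/to/workspace eclipse:configure-workspace

# Alternatively, set it by hand in Eclipse:
#   Window > Preferences > Java > Build Path > Classpath Variables > New...
#     Name: M2_REPO
#     Path: {userhome}/.m2/repository
```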
Regards,
Ravi Teja
-Original Message-
From: Alexandre de Assis Bento Lima [mailto:as...@cos.ufrj.br]
Sent
u run.
Regards,
Ravi Teja
-Original Message-
From: Tim Broberg [mailto:tbrob...@yahoo.com]
Sent: Monday, October 03, 2011 2:04 AM
To: common-dev@hadoop.apache.org
Subject: Artifact missing -
org.apache.hadoop:hadoop-project:pom:0.24.0-SNAPSHOT
I am trying to build hadoop so I can
Hi Praveen,
cbuild is a Maven profile in the MapReduce project which compiles the native
code. By specifying -P-cbuild, you deactivate the cbuild profile, thereby
skipping compilation of the native code during the build.
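For example (the extra flags are illustrative): the leading '-' in the profile list is Maven's syntax for deactivating the named profile, so the native code guarded by the cbuild profile is skipped.

```shell
mvn clean install -P-cbuild -DskipTests
```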
Regards,
Ravi Teja
-Original Message-
From: Prashant