already done and available in trunk and 2.x releases today:
>
> http://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/
>
>
> On Mon, Jan 14, 2013 at 7:44 PM, Hemant
Hemanth Yamijala created HADOOP-8808:
Summary: Update FsShell documentation to mention deprecation of
some of the commands, and mention alternatives
Key: HADOOP-8808
URL: https://issues.apache.org/jira/browse
you want to change it, file a JIRA about it and we can
> discuss the merits of the change on the JIRA.
>
> --Bobby
>
> On 9/11/12 6:28 AM, "Hemanth Yamijala" wrote:
>
>>Hi,
>>
>>hadoop fs -ls dirname
>>
>>lists entries like
>>
>>
Hemanth Yamijala created HADOOP-8788:
Summary: hadoop fs -ls can print file paths according to the
native ls command
Key: HADOOP-8788
URL: https://issues.apache.org/jira/browse/HADOOP-8788
Hi,
hadoop fs -ls dirname
lists entries like
dirname/file1
dirname/file2
i.e., dirname is repeated. And it takes a second to realize that
there's actually no directory called dirname under dirname.
Native ls doesn't repeat dirname when listing the output.
I suppose the current behaviour
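To illustrate the display difference being requested (this is not FsShell's actual code, just a plain-Java sketch using java.nio): native ls prints only the entry name, while the current FsShell output repeats the parent directory as a prefix.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class LsDisplay {
    // Mimic native ls: print only the entry name, without the
    // repeated parent-directory prefix that FsShell currently shows.
    static String displayName(String fullPath) {
        Path p = Paths.get(fullPath);
        return p.getFileName().toString();
    }

    public static void main(String[] args) {
        // FsShell today prints "dirname/file1"; native ls prints "file1".
        System.out.println(displayName("dirname/file1")); // file1
        System.out.println(displayName("dirname/file2")); // file2
    }
}
```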
Alejandro Abdelnur wrote:
>>> Makes sense, though the Jenkins runs should continue to run w/ native,
>>> right?
>>>
>>> On Thu, Sep 6, 2012 at 12:49 AM, Hemanth Yamijala
>>> wrote:
>>>> Hi,
>>>>
>>>> The test-patch
Hemanth Yamijala created HADOOP-8776:
Summary: Provide an option in test-patch that can enable / disable
compiling native code
Key: HADOOP-8776
URL: https://issues.apache.org/jira/browse/HADOOP-8776
Hi,
The test-patch script in Hadoop source runs a native compile with the
patch. On platforms like Mac, there are issues with the native
compile. For example, we run into HADOOP-7147, which has been resolved as
Won't Fix.
Hence, should we have a switch in test-patch to not run the native
compile? Could ope
Hemanth Yamijala created HADOOP-8765:
Summary: LocalDirAllocator.ifExists API is broken and unused
Key: HADOOP-8765
URL: https://issues.apache.org/jira/browse/HADOOP-8765
Project: Hadoop Common
Either way, file a JIRA on it.
>
> --Bobby
>
> On 9/4/12 6:34 AM, "Hemanth Yamijala" wrote:
>
>>Hi,
>>
>>Stumbled on the fact that LocalDirAllocator.ifExists() is not used
>>anywhere. The previous usage of this API was in the IsolationRunner
Hi,
Stumbled on the fact that LocalDirAllocator.ifExists() is not used
anywhere. The previous usage of this API was in the IsolationRunner
that was removed in MAPREDUCE-2606.
This API doesn't call the confChanged method, and hence there is an
uninitialised variable that causes a NullPointerException.
Hi,
On Wed, Dec 1, 2010 at 1:38 AM, Dave Shine
wrote:
> How do I change the heap size for the Name Node (and Secondary Name Node)
> without changing the heap size for the data nodes, job tracker, and task
> trackers?
Which version are you using? I am pretty certain that 0.21 and above
have HAD
Hi,
On Mon, Nov 1, 2010 at 9:13 AM, He Chen wrote:
> If you use the default scheduler of Hadoop 0.20.2 or higher, the
> jobQueueScheduler will take the data locality into account.
This is true irrespective of the scheduler in use. Other schedulers
currently add a layer to decide which job to pick
> On Thu, Oct 14, 2010 at 11:45 PM, Hemanth Yamijala wrote:
>
>> Hi,
>>
>> On Thu, Oct 14, 2010 at 10:59 PM, He Chen wrote:
>> > they arrived in 1 minute. I understand there will be a setup phase which
>> > will use any free slot no matter map or reduce.
you for your kind reply. Do you know what the setup really does? Does
> it take the data locality into account?
>
> On Wed, Oct 13, 2010 at 11:38 PM, Hemanth Yamijala wrote:
>
>> If you are talking about the 'Setup task' that is used to initialize
>> or set
Hi,
On Thu, Oct 14, 2010 at 10:59 PM, He Chen wrote:
> they arrived in 1 minute. I understand there will be a setup phase which
> will use any free slot no matter map or reduce.
>
You mean all jobs were submitted within a minute? That means a few
seconds between jobs? Or do you mean each job w
If you are talking about the 'Setup task' that is used to initialize
or setup the job, yes, it can run on either the map slot or reduce
slot depending on what is available.
Thanks
Hemanth
On Thu, Oct 14, 2010 at 1:54 AM, He Chen wrote:
> Hi, all
>
> I found out that if there is no map slot,
[Moving to mapreduce-dev, copying common-dev]
Hi,
On Thu, Sep 9, 2010 at 11:30 AM, radheshyam nanduri
wrote:
> Hi,
>
> I am working on writing a scheduler plugin for Hadoop.
Currently, the supported model for plugging in schedulers to Hadoop is to
extend the TaskScheduler class in the o.a.h.mapred package.
Hi,
> Thanks Arun. Changing the mTime is a good idea. However, given a file (the
> path is A/B/C/D/file) distributed to all the nodes, if I just change the
> mTime of the file to an earlier timestamp, it will not be replaced next
> time. Should I also change the mTime for all the directories along the path?
+1 to Vinod's sentiment. Thanks, Tom !
On Wed, Aug 25, 2010 at 9:05 AM, Vinod KV wrote:
> Thanks for the great work leading this effort, Tom!
>
> +Vinod
>
> On Wednesday 25 August 2010 12:48 AM, Tom White wrote:
>>
>> Hi everyone,
>>
>> I am pleased to announce that Apache Hadoop 0.21.0 is available.
Hi,
> Can anyone tell me the purpose of sockets used in Hadoop. I mean,
> for which purpose does Hadoop use sockets? Are they used by TaskTrackers and
> JobTrackers, or by NameNodes and DataNodes?
All communication between Hadoop masters (JobTracker, NameNode),
slaves (TaskTracker, DataNode)
Hi,
> I am interested in what tools you use for this QA report [1] for each
> commit. I guess there is an SVN commit hook that triggers the tool.
> I'd like to use it in my company if it is open source.
>
Hudson (http://hudson-ci.org/) is the continuous integration system
used. In the hadoop-co
Matias,
> I'm using Hadoop 0.20.2 and am trying to do some unit testing. I already used
> MRUnit, but now I want to use MiniMRCluster. Unfortunately, it still uses
> the old deprecated API like JobConf. Is there any newer version of
> MiniMRCluster? Or is there a successor, because the 0.21 RC did not con
Versions: 0.21.0
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Fix For: 0.21.0
Modify the cluster_setup guide with information about memory monitoring and
admin configuration.
--
This message is automatically generated by JIRA.
-
You can reply to this
Vamsi,
>
> I have a basic doubt about Hadoop input data placement...
>
> Like, if I input some 30GB of data to a Hadoop program, it will place the
> 30GB into HDFS as some set of files based on some input formats...
Conceptually, it would be more accurate to say that it splits the data
into 'blocks'.
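As a back-of-the-envelope illustration (assuming the old 64 MB default block size; the actual value comes from dfs.block.size in your configuration), 30 GB of input maps onto HDFS blocks like this:

```java
public class BlockCount {
    public static void main(String[] args) {
        long fileSizeMB = 30L * 1024;   // 30 GB expressed in MB
        long blockSizeMB = 64;          // assumed default block size of the era
        long fullBlocks = fileSizeMB / blockSizeMB;
        long remainder = fileSizeMB % blockSizeMB;
        // A partial trailing block still occupies one block's metadata.
        long blocks = fullBlocks + (remainder > 0 ? 1 : 0);
        System.out.println(blocks);     // 480
    }
}
```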
couldn't locate any now)
asking for this capability.
Thanks
Hemanth
>
> On Fri, May 14, 2010 at 7:04 AM, Hemanth Yamijala wrote:
>
>> Saurabh,
>>
>> > i am experimenting with hadoop. wanted to ask that is the Task
>> distribution
>> > policy by job
Saurabh,
> I am experimenting with Hadoop. I wanted to ask: is the task distribution
> policy of the JobTracker pluggable? If yes, where in the code tree is it defined?
>
Take a look at o.a.h.mapred.TaskScheduler. That's the abstract class
that needs to be extended to define a new scheduling policy.
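The plug-in shape can be sketched in plain Java (names simplified; the real o.a.h.mapred.TaskScheduler works with TaskTrackerStatus and returns a List&lt;Task&gt;, so this is only an analogy showing a FIFO policy as the simplest subclass):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for o.a.h.mapred.TaskScheduler's abstract contract.
abstract class SketchTaskScheduler {
    abstract List<String> assignTasks(String trackerName);
}

// Simplest policy: hand out pending tasks first-in, first-out.
class FifoSketchScheduler extends SketchTaskScheduler {
    private final List<String> pendingTasks = new ArrayList<>();

    void submit(String task) { pendingTasks.add(task); }

    @Override
    List<String> assignTasks(String trackerName) {
        List<String> assigned = new ArrayList<>();
        if (!pendingTasks.isEmpty()) assigned.add(pendingTasks.remove(0));
        return assigned;
    }
}

public class SchedulerDemo {
    public static void main(String[] args) {
        FifoSketchScheduler s = new FifoSketchScheduler();
        s.submit("map-1");
        s.submit("map-2");
        // Whichever tracker heartbeats first gets the oldest pending task.
        System.out.println(s.assignTasks("tracker-a")); // [map-1]
        System.out.println(s.assignTasks("tracker-b")); // [map-2]
    }
}
```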
[
https://issues.apache.org/jira/browse/HADOOP-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hemanth Yamijala resolved HADOOP-6539.
--
Resolution: Won't Fix
Assignee: Corinne Chandel (was: Amareshwari Sriramadasu)
Issue Type: Bug
Reporter: Hemanth Yamijala
A recent test failure on Hudson seems to indicate that Jetty's
Server.getConnectors()[0].getLocalPort() is returning -1 in the
HttpServer.getPort() method. When this happens, Hadoop masters / slaves that
use Jetty fail to start.
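The JDK's own ServerSocket follows the same convention (an analogy, not Jetty's code): a socket that has not yet been bound reports -1 for its local port, which is the kind of value the HttpServer.getPort() check would see.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortProbe {
    public static void main(String[] args) throws IOException {
        // No-arg constructor creates an UNBOUND socket.
        ServerSocket s = new ServerSocket();
        System.out.println(s.getLocalPort());     // -1 before binding
        s.bind(null);                             // bind to an ephemeral port
        System.out.println(s.getLocalPort() > 0); // true once bound
        s.close();
    }
}
```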
Hudson giving a +1 though no tests are included.
Key: HADOOP-6341
URL: https://issues.apache.org/jira/browse/HADOOP-6341
Project: Hadoop Common
Issue Type: Bug
Reporter: Hemanth
Issue Type: Bug
Components: build
Reporter: Hemanth Yamijala
It will be useful to add comments in test-patch.sh for changes done as part of
HADOOP-6250 to explain them.
[
https://issues.apache.org/jira/browse/HADOOP-6250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hemanth Yamijala resolved HADOOP-6250.
--
Resolution: Fixed
Fix Version/s: 0.21.0
Hadoop Flags: [Reviewed]
I
[
https://issues.apache.org/jira/browse/HADOOP-6243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hemanth Yamijala resolved HADOOP-6243.
--
Resolution: Fixed
I committed common and mapreduce jars to HDFS thus completing the
[
https://issues.apache.org/jira/browse/HADOOP-6243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hemanth Yamijala reopened HADOOP-6243:
--
Leaving this open until I commit the jars to the affected projects.
> NPE in handl
Common
Issue Type: Bug
Components: conf
Reporter: Hemanth Yamijala
In HADOOP-6105, we provided a method of adding deprecated keys from other
sub-projects like HDFS and Map/Reduce using a key called
hadoop.conf.extra.classes. The expectation was that this key had
[
https://issues.apache.org/jira/browse/HADOOP-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hemanth Yamijala resolved HADOOP-6230.
--
Resolution: Fixed
Fix Version/s: 0.21.0
Hadoop Flags: [Incompatible change]
Reporter: Hemanth Yamijala
Currently the configuration class does not allow null keys and values. Null
keys don't make sense, but null values may have semantic meaning for some
features. Not storing these values in configuration causes some arguable side
effects. For instance,
Issue Type: Bug
Components: conf
Reporter: Hemanth Yamijala
A use case: recently, we stumbled on a bug that required us to disable the
feature that runs a debug script on map/reduce tasks that fail, specified by
mapred.{map|reduce}.task.debug.script. The best way of
Map/Reduce would benefit too from the extended date (Sept 18). I still
think Owen's latest proposal is more suitable.
Tsz Wo (Nicholas), Sze wrote:
It seems that Nigel's previous suggestion (i.e., append on Sept 18, others on
Sept 4) is better.
For your reference, I attached the previous message thread.
+1
Thanks
Hemanth
Owen O'Malley wrote:
All,
After the discussion settled last time, it seems that HDFS needs more
time to settle append and sync. Therefore, I'd like to propose a
freeze time of 4:30 pst on 18 Sep for making the 0.21 branch for
Common, HDFS, and MapReduce.
-- Owen
[
https://issues.apache.org/jira/browse/HADOOP-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hemanth Yamijala resolved HADOOP-6111.
--
Resolution: Later
In discussions that came up on HADOOP-4491, we realized that this
[
https://issues.apache.org/jira/browse/HADOOP-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12725616#action_12725616
]
Hemanth Yamijala commented on HADOOP-6106:
--
Mapreduce tests also ran, except
[
https://issues.apache.org/jira/browse/HADOOP-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12725565#action_12725565
]
Hemanth Yamijala commented on HADOOP-6106:
--
HDFS tests passed with the new
[
https://issues.apache.org/jira/browse/HADOOP-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12725499#action_12725499
]
Hemanth Yamijala commented on HADOOP-6106:
--
I had a chat with Owen and