[jira] Reopened: (HADOOP-6476) o.a.h.io.Text - setCapacity does not shrink size

2010-02-09 Thread Kay Kay (JIRA)
[ https://issues.apache.org/jira/browse/HADOOP-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kay Kay reopened HADOOP-6476: - > o.a.h.io.Text - setCapacity does not shrink size

[jira] Resolved: (HADOOP-6476) o.a.h.io.Text - setCapacity does not shrink size

2010-02-09 Thread Kay Kay (JIRA)
[ https://issues.apache.org/jira/browse/HADOOP-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kay Kay resolved HADOOP-6476. - Resolution: Later > o.a.h.io.Text - setCapacity does not shrink size

Re: Doubt regarding classification of jobs given to hadoop

2010-01-28 Thread Kay Kay
You can always perform a dry run to collect the metrics before going into a meaningful deployment. Another option would be to do rough math based on the input data set and the complexity of the algorithms used, but that would be a useful after-step after getting through

Re: Doubt regarding classification of jobs given to hadoop

2010-01-28 Thread Kay Kay
On 01/28/2010 08:36 PM, Abhilaash wrote: top / iostat utilities should give you some metrics corresponding to the CPU and the I/O; that can help identify the nature of the job. Thank you sir. We found that the top utility is available and what output it gives, but we were not able to fi

Re: Doubt regarding classification of jobs given to hadoop

2010-01-28 Thread Kay Kay
top / iostat utilities should give you some metrics corresponding to the CPU and the I/O; that can help identify the nature of the job. On 1/28/10 3:46 AM, T. Madana Gopal wrote: Hi all, What are the factors on which we can classify the jobs given to hadoop as CPU intensive or I/O intensive
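As a rough complement to watching top / iostat from outside, one can also time a sample task from inside the JVM and compare CPU time against wall-clock time. This is a minimal sketch, not something from the thread; the class and method names are made up for illustration and the 0.7 threshold is arbitrary.

  import java.lang.management.ManagementFactory;
  import java.lang.management.ThreadMXBean;

  public class JobNatureSketch {
    // Heuristic: a task whose CPU time is close to its wall-clock time is
    // CPU intensive; one that spends most of its elapsed time off-CPU is
    // probably waiting on disk or network I/O.
    public static void classify(Runnable sampleTask) {
      ThreadMXBean threads = ManagementFactory.getThreadMXBean();
      long cpuStart = threads.getCurrentThreadCpuTime();   // -1 if unsupported
      long wallStart = System.nanoTime();
      sampleTask.run();
      double cpu = threads.getCurrentThreadCpuTime() - cpuStart;
      double wall = System.nanoTime() - wallStart;
      double ratio = cpu / wall;
      System.out.printf("cpu/wall = %.2f -> %s%n",
          ratio, ratio > 0.7 ? "CPU intensive" : "likely I/O bound");
    }
  }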

[jira] Created: (HADOOP-6519) o.a.h.hdfs.server.datanode.DataXceiver - run() - Version mismatch exception - more context to help debugging

2010-01-28 Thread Kay Kay (JIRA)
URL: https://issues.apache.org/jira/browse/HADOOP-6519 Project: Hadoop Common Issue Type: Improvement Reporter: Kay Kay Add some context information in the IOException during a version mismatch to help debugging.
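A minimal sketch of the kind of change the issue asks for, with hypothetical helper and parameter names rather than the actual DataXceiver patch: instead of a bare version-mismatch IOException, include the expected version, the version actually received, and the remote peer in the message.

  import java.io.DataInputStream;
  import java.io.IOException;

  public class VersionCheckSketch {
    // Read the protocol version from the stream and fail with a descriptive
    // message rather than a bare "Version Mismatch".
    static void checkVersion(DataInputStream in, short expectedVersion,
                             String remoteAddress) throws IOException {
      short received = in.readShort();
      if (received != expectedVersion) {
        throw new IOException("Version Mismatch: expected data transfer version "
            + expectedVersion + " but received " + received
            + " from " + remoteAddress);
      }
    }
  }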

Re: Rolling a Hadoop 0.20.2

2010-01-26 Thread Kay Kay
Is HDFS-127 going to be part of it? (It seems to have been committed, as per the jira.) On 1/26/10 6:53 PM, Konstantin Boudnik wrote: +1 On Tue, Jan 26, 2010 at 09:56 AM, Owen O'Malley wrote: I'm planning on rolling a Hadoop 0.20.2 today. Are there any blockers that can't wait? -- Owen

Re: build and use hadoop-git

2010-01-22 Thread Kay Kay
Start with hadoop-common. hadoop-hdfs and hadoop-mapred pull their dependencies from the apache snapshot repository, which contains the nightlies of the last successful builds, so in theory all 3 could be built independently because the respective snapshots are present in the apache snapshot repository.

[jira] Resolved: (HADOOP-6477) 0.21.0 - upload of the latest snapshot to apache snapshot repository

2010-01-04 Thread Kay Kay (JIRA)
[ https://issues.apache.org/jira/browse/HADOOP-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kay Kay resolved HADOOP-6477. - Resolution: Fixed The latest build does it automagically. No need for this. > 0.21.0 - upload of the latest snapshot to apache snapshot repository

[jira] Resolved: (HADOOP-6478) 0.21 - .eclipse-templates/.classpath out of sync with file system

2010-01-04 Thread Kay Kay (JIRA)
[ https://issues.apache.org/jira/browse/HADOOP-6478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kay Kay resolved HADOOP-6478. - Resolution: Duplicate The other bug mentioned handles the .classpath generation automatically. Hence closing.

[jira] Created: (HADOOP-6478) 0.21 - .eclipse-templates/.classpath out of sync with file system

2010-01-04 Thread Kay Kay (JIRA)
Type: Bug Reporter: Kay Kay Fix For: 0.21.0 Attachments: HADOOP-6478.patch Some of the jars in the .classpath of branch-0.21 are out of sync with the file system retrieved by ivy.

[jira] Created: (HADOOP-6477) 0.21.0 - upload of the latest snapshot to apache snapshot repository

2010-01-04 Thread Kay Kay (JIRA)
Type: Task Reporter: Kay Kay Fix For: 0.21.0 Can you help upload the snapshot from the source control to hadoop-core for branch-0.21 - http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-core/0.21.0-SNAPSHOT/ . HBASE-1433, about enabling dependency

Frequency of pushing artifacts to apache snapshot

2010-01-03 Thread Kay Kay
What is the frequency with which artifacts are pushed to the apache snapshot repository? https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-core/0.21.0-SNAPSHOT/ . Also - would it be possible to push the sources (along with the jar) as well, as part of the artifacts

Re: clearing o.a.h.io.Text

2010-01-01 Thread Kay Kay
On Thu, Dec 31, 2009 at 11:03 PM, Owen O'Malley wrote: > > On Dec 30, 2009, at 12:36 AM, Kay Kay wrote: > > In o.a.h.io.Text - the clear method currently just resets length to 0, >> while not doing anything about the bytes internally. >> >> Curious to know the thoughts behind the decision

[jira] Created: (HADOOP-6476) o.a.h.io.Text - setCapacity does not shrink size

2010-01-01 Thread Kay Kay (JIRA)
o.a.h.io.Text - setCapacity does not shrink size - Key: HADOOP-6476 URL: https://issues.apache.org/jira/browse/HADOOP-6476 Project: Hadoop Common Issue Type: Bug Reporter: Kay Kay
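A simplified, hypothetical model of the behaviour the issue title describes, not the actual Text source: a growth-only setCapacity keeps an oversized backing array forever, whereas a shrinking variant would reallocate when a smaller capacity is requested.

  // Simplified model of a growth-only buffer; names are illustrative only.
  public class GrowOnlyBuffer {
    private byte[] bytes = new byte[0];
    private int length;

    // Growth-only: an already-large backing array is kept as-is, so the
    // capacity never shrinks even when much less space is requested.
    void setCapacity(int capacity, boolean keepData) {
      if (bytes.length < capacity) {
        byte[] newBytes = new byte[capacity];
        if (keepData) {
          System.arraycopy(bytes, 0, newBytes, 0, length);
        }
        bytes = newBytes;
      }
    }

    // A shrinking variant reallocates whenever the requested capacity differs,
    // trading extra copies for a smaller footprint.
    void setCapacityExact(int capacity, boolean keepData) {
      if (bytes.length != capacity) {
        byte[] newBytes = new byte[capacity];
        if (keepData) {
          System.arraycopy(bytes, 0, newBytes, 0, Math.min(length, capacity));
        }
        bytes = newBytes;
        length = Math.min(length, capacity);
      }
    }
  }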

clearing o.a.h.io.Text

2009-12-30 Thread Kay Kay
In o.a.h.io.Text - the clear method currently just resets length to 0, while not doing anything about the bytes internally. Curious to know the thoughts behind the decision (to let the internal bytes be reused for future appends vs. memory leaks due to not clearing them)? Thanks. $ sv
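A small demonstration of the behaviour described above, assuming the 0.20-era semantics discussed in the thread where clear() only resets the logical length: the backing array survives, and the only way to actually give the memory back is to drop the Text instance.

  import org.apache.hadoop.io.Text;

  public class TextClearDemo {
    public static void main(String[] args) {
      Text text = new Text();
      text.set(new byte[10 * 1024 * 1024]);  // backing array grows to ~10 MB
      text.clear();                          // length -> 0, bytes retained
      System.out.println("length after clear  : " + text.getLength());
      System.out.println("backing array length: " + text.getBytes().length);
      // To release the memory, the reference itself has to be replaced.
      text = new Text();
    }
  }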

[jira] Created: (HADOOP-6471) StringBuffer -> StringBuilder unnecessary references

2009-12-27 Thread Kay Kay (JIRA)
Bug Reporter: Kay Kay Fix For: 0.20.2 Across the hadoop-common codebase, a good number of the StringBuffer-s being used are actually candidates for StringBuilder, since the reference does not escape the scope of the method and no concurrency is needed.
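An illustrative before/after pair, not taken from the hadoop-common codebase, showing the kind of call site the issue targets: the buffer never escapes the method, so the synchronized StringBuffer buys nothing.

  public class StringBuilderExample {
    // Before: StringBuffer synchronizes every append, yet the reference is
    // confined to this method, so no other thread can ever see it.
    public static String joinBefore(String[] parts) {
      StringBuffer sb = new StringBuffer();
      for (String p : parts) {
        sb.append(p).append(',');
      }
      return sb.toString();
    }

    // After: StringBuilder has the same API without the per-call locking.
    public static String joinAfter(String[] parts) {
      StringBuilder sb = new StringBuilder();
      for (String p : parts) {
        sb.append(p).append(',');
      }
      return sb.toString();
    }
  }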

mapreduce trunk build on hudson - broken

2009-12-22 Thread Kay Kay
(Most of the discussions related to map reduce seem to happen in this mailing list, and mapreduce-dev looks more like a notification list, judging from the archives. Hence posting here.) The trunk code of mapreduce has not been green in hudson since Dec 11 - http://hudson.zones.apache.org/hudson/view/Hado

[jira] Created: (HADOOP-6376) slaves file to have a header specifying the format of conf/slaves file

2009-11-16 Thread Kay Kay (JIRA)
Issue Type: Improvement Components: conf Affects Versions: 0.20.1 Reporter: Kay Kay Priority: Minor Fix For: 0.20.2 Attachments: HADOOP-6376.patch When we open the file conf/slaves, it is not immediately obvious what the format of the file should be.
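A hypothetical header along the lines the issue proposes; the wording of the attached HADOOP-6376.patch is not reproduced here, and the comment-stripping behaviour assumes bin/slaves.sh filters out '#' comments and blank lines before using the list.

  # conf/slaves
  # One slave hostname or IP address per line.
  # Blank lines and anything after a '#' are ignored.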