Re: Writing output in multiple files in Hadoop

2011-12-27 Thread Harsh J
ution. > > Thanks. > > > -- > Regards, > Bhavesh Shah -- Harsh J

Re: Regarding MultipleInputs.addInputPath

2011-12-28 Thread Harsh J
Ensure you are sticking with either new API or old API. I'm sure you have your imports for the Input/Output formats mixed with mapred.* and mapreduce.* stuff. Stabilizing that will fix it. In future, please send user mail to common-u...@hadoop.apache.org and not common-dev, which is for develop
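An editor's note on the mixed-imports diagnosis above: a quick grep can reveal whether a job's source pulls from both MapReduce API generations at once. The file name and import lines below are made-up stand-ins, not taken from the original thread.

```shell
# Hypothetical source file mixing the two API generations.
cat > /tmp/MyJob.java <<'EOF'
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
EOF
# List the distinct API package prefixes the file imports.
grep -oE 'org\.apache\.hadoop\.mapred(uce)?\.' /tmp/MyJob.java | sort -u
# Two distinct prefixes in the output means old (mapred) and new
# (mapreduce) APIs are mixed in one job -- the situation to avoid.
```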

Re: SVN Repo Question

2011-12-30 Thread Harsh J
Ronald, On 31-Dec-2011, at 6:46 AM, Ronald Petty wrote: > Hello, > > Can someone explain the different layouts in the following repos, > specifically regarding building from source? > > - http://svn.apache.org/repos/asf/hadoop/common/trunk/ > - http://svn.apache.org/repos/asf/hadoop/common/t

Re: SVN Repo Question

2011-12-30 Thread Harsh J
hen backport to older versions if absolutely critical/necessary and as applicable. > > Kindest regards. > > Ron > > On Fri, Dec 30, 2011 at 9:51 PM, Harsh J wrote: > >> Ronald, >> On 31-Dec-2011, at 6:46 AM, Ronald Petty wrote: >> >> > Hello, &g

Re: SVN Repo Question

2011-12-30 Thread Harsh J
nd < 2.3 is the current releases of Hadoop? > > Kindest regards. > > Ron > > On Fri, Dec 30, 2011 at 9:51 PM, Harsh J wrote: > >> Ronald, >> On 31-Dec-2011, at 6:46 AM, Ronald Petty wrote: >> >>> Hello, >>> >>> Ca

Re: Request Wiki Account to Hadoop Platform

2012-01-03 Thread Harsh J
Deborah, Unsure what you're really asking for. We're an open source community, all of this information is already available even without signing up. I suggest you start at Apache Hadoop's homepage: http://hadoop.apache.org and check out the various developer and user links around. On 04-Jan-20

Re: Modifying source code of hadoop

2012-01-24 Thread Harsh J
hanges to the hadoop source code > for my college project. How can I do this? How to compile and test the > modified code? What tools are needed to perform this? Your reply will be of > great help to me. > Thanks in advance. -- Harsh J Customer Ops. Engineer, Cloudera

Re: Modifying source code of hadoop

2012-01-24 Thread Harsh J
ows+cygwin > > Thanks > Samaneh > > On Tue, Jan 24, 2012 at 3:06 PM, Harsh J wrote: > >> Ashok, >> >> Following http://wiki.apache.org/hadoop/HowToContribute should get you >> started at development. Let us know if you have any further, specific >> qu

Re: User is not allowed to impersonate user1

2012-01-26 Thread Harsh J
useful results.  Is there something > I'm missing in the build configuration? > > Thanks. -- Harsh J Customer Ops. Engineer, Cloudera

Re: User is not allowed to impersonate user1

2012-01-26 Thread Harsh J
d in the > document.  I'm not sure what the options do, though. > > I'll give "mvn clean test" a try. > > > On Thu, Jan 26, 2012 at 7:07 AM, Harsh J wrote: >> >> Moving to common-dev@. >> >> I'm able to run all hadoop-hdfs-httpfs t

Re: User is not allowed to impersonate user1

2012-01-26 Thread Harsh J
an try some changes. > > Also, I'm still getting the error even when running "mvn clean test". > > I'm using the OpenJDK 1.6.0_22 on F16.  Is there any other information > needed? > > On Thu, Jan 26, 2012 at 9:16 AM, Harsh J wrote: > >> What are yo

Re: User is not allowed to impersonate user1

2012-01-27 Thread Harsh J
em on OSX though). On Fri, Jan 27, 2012 at 7:15 PM, Bai Shen wrote: > Will do.  Any idea why I get that error, though?  I tried it on the Sun JDK > and it gives me the same error. > > On Thu, Jan 26, 2012 at 11:46 AM, Harsh J wrote: > >> Bai, >> >> In that case

Re: Debugging 1.0.0 with jdb

2012-02-01 Thread Harsh J
emination, or > reproduction is strictly prohibited and may be unlawful.  If you are > not the intended recipient, please contact the sender immediately by > return e-mail and destroy all copies of the original message. -- Harsh J Customer Ops. Engineer Cloudera | http://tiny.cloudera.com/about

Re: Execute a Map/Reduce Job Jar from Another Java Program.

2012-02-02 Thread Harsh J
t; > Can you please give me  a work around for this issue. > -- > View this message in context: > http://old.nabble.com/Execute-a-Map-Reduce-Job-Jar-from-Another-Java-Program.-tp33250801p33250801.html > Sent from the Hadoop core-dev mailing list archive at Nabble.com. > -- H

Re: Getting started with Eclipse for Hadoop 1.0.0?

2012-02-02 Thread Harsh J
strictly prohibited and may be unlawful. If you are > not the intended recipient, please contact the sender immediately by > return e-mail and destroy all copies of the original message. -- Harsh J Customer Ops. Engineer Cloudera | http://tiny.cloudera.com/about

Re: Issue on formating the file system

2012-02-02 Thread Harsh J
; > > Does anyone know this issue? > > I am using the hadoop built myself. The command ( $ bin/hadoop namenode > -format) is issued under the directory > > /hadoop-common/hadoop-dist/target/hadoop-0.24.0-SNAPSHOT > > > Thanks, > > Haifegng -- Harsh J Customer Ops. Engineer Cloudera | http://tiny.cloudera.com/about

Re: PoweredBy Wiki

2012-02-07 Thread Harsh J
l mentions core-dev instead of common-dev but I can update that > too. > > Thanks! > > Cheers, > Lars > > [1] <http://www.gbif.org> -- Harsh J Customer Ops. Engineer Cloudera | http://tiny.cloudera.com/about

Re: .23 compile times

2012-02-15 Thread Harsh J
ge; just making sure I had everything built!). > > Is there a faster way to get the thing to build? > > Sriram -- Harsh J Customer Ops. Engineer Cloudera | http://tiny.cloudera.com/about

Re: Partitioners - How to know if they are working

2012-02-16 Thread Harsh J
asy to discover > though. > > Does anyone know if there is an easier way to see if your customized > partitioner is working? For instance, a counter that shows how many > partitioners a map generated or a reducer received? > > Thanks in advance, > > Fabio Almeida -- Harsh J Customer Ops. Engineer Cloudera | http://tiny.cloudera.com/about

Re: Building Hadoop UI

2012-02-17 Thread Harsh J
friendly > and shaped for our clients needs. > > Thanks, > > Fabio Pitzolu -- Harsh J Customer Ops. Engineer Cloudera | http://tiny.cloudera.com/about

Re: How to change the scheduler

2012-03-06 Thread Harsh J
rtunately didn't work for >> me. I've been stuck here, I really hope anyone can help me with this issue. >> >> Thank you all. >> Regards, >> SaSa >> >> >> >> -- >> Mohammed El Sayed >> Computer Science Department >> King Abdullah University of Science and Technology >> 2351 - 4700 KAUST, Saudi Arabia >> Home Page <http://cloud.kaust.edu.sa/SiteCollectionDocuments/melsayed.aspx> > -- Harsh J

Re: Release with PB and HA

2012-03-12 Thread Harsh J
t; > > -- > Todd Lipcon > Software Engineer, Cloudera > > The information contained in this email message is considered confidential > and proprietary to the sender and is intended solely for review and use by > the named recipient. Any unauthorized review, use or distribution is strictly > prohibited. If you have received this message in error, please advise the > sender by reply email and delete the message. -- Harsh J

Re: Mac OS-X apache hadoop installation issues

2012-03-22 Thread Harsh J
xception: error in opening zip file >> >> at java.util.zip.ZipFile.open(Native Method) >> >> at java.util.zip.ZipFile.(ZipFile.java:127) >> >> at java.util.jar.JarFile.(JarFile.java:135) >> >> at java.util.jar.JarFile.(JarFile.java:72) >> >> at org.apache.hadoop.util.RunJar.main(RunJar.java:88) >> -- Harsh J

Re: IBM China Big Data team recruitment

2012-03-23 Thread Harsh J
ation techniques. >> Hadoop/Hbase development/running experience is a big plus. >> >> -   Database server development experience is a plus >> >> -   Web application development experience is a plus >> >> -   Data warehouse and analytics experience is a plus >> >> -    NoSQL experience is a plus >> >> -    Ability to work with customers, understand customer business >> requirements and communicate them to development organization >> >> Qualifications:  Bachelor or above Degree in Computer Science or relevant >> areas >> >> -- Harsh J

Re: Adding documentation patches to hadoop

2012-04-05 Thread Harsh J
2-286-8393 > Fax#      512-838-8858 > > -- Harsh J

Re: what does mackmode signify

2012-04-07 Thread Harsh J
e say what > does it signify? > > Regards, > Ranjan -- Harsh J

Re: Warning in running mapreduce jobs

2012-04-17 Thread Harsh J
fault number of map tasks per job.  Typically set >  to a prime several times greater than number of available hosts. >  Ignored when mapred.job.tracker is "local". >   > > However I still get only one map task and not two. Can someone suggest the > solution to this problem. > > Thanking you > > Yours faithfully > Ranjan Banerjee -- Harsh J
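For background on the question above: in classic MapReduce the map-task count follows the number of input splits, and mapred.map.tasks is only a hint that cannot push the count below the block-derived split count. A back-of-envelope sketch with illustrative sizes, not taken from the thread:

```shell
# Sketch: for one large splittable file, splits ~= ceil(file_size / block_size),
# and each split becomes roughly one map task.
file_size=$(( 200 * 1024 * 1024 ))   # e.g. a 200 MB input file
block_size=$((  64 * 1024 * 1024 ))  # classic 64 MB default block size
splits=$(( (file_size + block_size - 1) / block_size ))
echo "$splits"   # 4 -> roughly 4 map tasks, whatever mapred.map.tasks says
```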

Re: [jira] [Created] (HADOOP-8295) ToolRunner.confirmPrompt spins if stdin goes away

2012-04-19 Thread Harsh J
incorrectly, please contact your JIRA >> administrators: >> https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa >> For more information on JIRA, see: http://www.atlassian.com/software/jira >> >> >> -- Harsh J

Re: Reduce side join - Hadoop default - error in combiner

2012-04-20 Thread Harsh J
t > org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.next(ReduceTask.java:242) >            at > org.apache.hadoop.contrib.utils.join.DataJoinReducerBase.regroup(DataJoinReducerBase.java:106) > > > I checked some forums and found out that the error may occur due to non > static class. My program has no non static class! > > The command I use to run hadoop is >    /hadoop/core/bin/hadoop jar /export/scratch/lopez/Join/DataJoin.jar > DataJoin /export/scratch/user/lopez/Join > /export/scratch/user/lopez/Join_Output > > and the DataJoin.jar file has DataJoin$TaggedWritable packaged in it > > Could someone please help me > -- > View this message in context: > http://old.nabble.com/Reduce-side-join---Hadoop-default---error-in-combiner-tp33705493p33705493.html > Sent from the Hadoop core-dev mailing list archive at Nabble.com. -- Harsh J

Re: Hadoop Mapper Intermediate Result Storage with No Reducer

2012-04-25 Thread Harsh J
SIO to do the benchmark to get write-speed to HDFS, but we > wonder how much the gap is. > > Another question is, when did Hadoop move this chunk to HDFS? > > Any thoughts, guys? Thanks ahead. > > Xun -- Harsh J

Re: Unable to build native binaries

2012-04-27 Thread Harsh J
Reactor Summary: > [INFO] > [INFO] Apache Hadoop Annotations . SKIPPED > [INFO] Apache Hadoop Auth SKIPPED > [INFO] Apache Hadoop Auth Examples ... SKIPPED > [INFO] Apache Hadoop Common .. SKIPPED > [INFO] Apache Hadoop Common Project .. FAILURE [6.965s] > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 7.752s > [INFO] Finished at: Fri Apr 27 22:11:54 EDT 2012 > [INFO] Final Memory: 27M/310M > [INFO] > > [ERROR] Failed to execute goal > org.codehaus.mojo:native-maven-plugin:1.0-alpha-7:javah (default) on > project hadoop-common: Error running javah command: Error executing command > line. Exit code:1 -> [Help 1] > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. > [ERROR] > [ERROR] For more information about the errors and possible solutions, > please read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException > [ERROR] > [ERROR] After correcting the problems, you can resume the build with the > command > [ERROR]   mvn -rf :hadoop-common > . > . > . > . -- Harsh J

Re: File Busy or not check. Kindly Help

2012-05-01 Thread Harsh J
ion > of this message without the prior written consent of the author of this > e-mail is strictly prohibited. If you have > received this email in error please delete it and notify the sender > immediately. Before opening any mail and attachments > please check them for viruses and defect. > > --- -- Harsh J

Re: subscribe hadoop

2012-05-16 Thread Harsh J
Hey Han, Welcome! :) Please read http://hadoop.apache.org/common/mailing_lists.html on how to subscribe properly. You need to mail common-dev-subscribe@, not common-dev@ directly. On Thu, May 17, 2012 at 12:22 PM, han tai wrote: > hello,I want to  subscribe hadoop -- Harsh J

Re: org.apache.hadoop.mapreduce.JobSubmissionFiles changes for Windows

2012-05-18 Thread Harsh J
n JOB_FILE_PERMISSION = >    FsPermission.createImmutable((short) 0764); // rwxrw-r-- > > > If these changes are not incompatible with the Linux build (i.e. if it is > acceptable to grant permissions to the group), then I will check them in to > SVN. > > regards, > > Patrick Toolis -- Harsh J
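The octal constant in the quoted patch encodes the standard Unix permission triplets. A throwaway demonstration (assumes GNU coreutils stat; the temp path is arbitrary):

```shell
# 0764 = owner rwx (7), group rw- (6), other r-- (4).
touch /tmp/perm_demo_0764
chmod 0764 /tmp/perm_demo_0764
stat -c '%a %A' /tmp/perm_demo_0764   # prints: 764 -rwxrw-r--
```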

Re: cmake

2012-05-22 Thread Harsh J
ke and related programs installed to build the >> native parts of Hadoop.  Instead, you would need CMake installed.  CMake is >> packaged by Red Hat, even in RHEL5, so it shouldn't be difficult to install >> locally.  It's also available for Mac OS X and Windows, as I mentioned >> earlier. >> >> The JIRA for this work is at >> https://issues.apache.org/jira/browse/HADOOP-8368 >> Thanks for reading. >> >> sincerely, >> Colin -- Harsh J

Re: Powered By write permission application

2012-05-23 Thread Harsh J
oop/PoweredBy > > -- > WBR, Mikhail Yakshin -- Harsh J

Re: Request for Jira issue review

2012-05-29 Thread Harsh J
gt;> >> Regards, >> Madhukara Phatak >> -- >> https://github.com/zinnia-phatak-dev/Nectar >> >> > > > -- > https://github.com/zinnia-phatak-dev/Nectar -- Harsh J

Re: Using test-patch.sh

2012-05-29 Thread Harsh J
to apply test patch on fresh svn checkout . Can anyone advice > on  how to get rid of this error? > > -- > https://github.com/zinnia-phatak-dev/Nectar -- Harsh J

Re: Using test-patch.sh

2012-05-29 Thread Harsh J
> Hi Harsh, >  If I run test-patch after mvn clean it shows following error > >   Pre-build trunk to verify trunk stability and javac warnings > >  Trunk compilation is broken? > > > On Tue, May 29, 2012 at 11:19 PM, Harsh J wrote: > >> Hi, >> >> Run an {{m

Re: Reg:Sequence id generation

2012-06-07 Thread Harsh J
ginning or at the end of the table can anyone help me on >> this please. >> >> Thanks >> >> Regards >> Abhi. > > > > -- > thanks > ashish > > Blog: http://www.ashishpaliwal.com/blog > My Photo Galleries: http://www.pbase.com/ashishpaliwal -- Harsh J

Re: Which component manages slow tasks?

2012-06-09 Thread Harsh J
scheduler? > > In fairscheduling (0.20.205) there exist a preemption mechanism but it just > handles min and fairshares and kills last submited tips.. > > Thanks. -- Harsh J

Re: Dennis Mårtensson

2012-06-11 Thread Harsh J
re in to the project willing to recommend me a task to > start with? > > Regards > > Dennis Mårtensson > > Dennis Mårtensson > 0768670934 > m...@dennis.is > QR code for loading contact data into a mobile phone. > -- Harsh J

Re: Hadoop-Git-Eclipse

2012-06-13 Thread Harsh J
> > Can someone please give me a link to the steps to be followed for getting > Hadoop (latest from trunk) started in Eclipse? I need to be able to commit > changes to my forked repository on github. > > Thanks in advance. > Regards, > Prajakta -- Harsh J

Re: Hadoop-Git-Eclipse

2012-06-15 Thread Harsh J
l, hdfs-site.xml, mapred-site.xml > files. Will that cause a problem? > > Regards, > Prajakta > > > > On Wed, Jun 13, 2012 at 8:18 PM, Harsh J wrote: > >> Good to know your progress Prajakta! >> >> Did your submission surely go via the RM/NM or did it exe

Re: Mac OS-X apache hadoop installation issues

2012-06-21 Thread Harsh J
on > Ubuntu.I > have searched the whole internet but could not find a solution to it.Were you > able to solve? If yes, then will you help me? I have started snatching my > hairs!!! > > HTH! -- Harsh J

Re: Doubt regarding debugging Hadoop Classes

2012-07-02 Thread Harsh J
e.hadoop.mapreduce.task.reduce.Fetcher=DEBUG* > * > * > Then I restarted the daemon's and grep'd in the log directory for the Debug > message present in the Fetcher.java class. > I couldn't find any !! > > Am I missing anything here? Any help is highly appreciated .Thanks > > -- > > --With Regards > Pavan Kulkarni -- Harsh J

Re: fs.trash.interval

2012-07-08 Thread Harsh J
option. It's > taking very long time to move data into trash. Can you please help me > how to stop this process of deleting and restart process with skip > trash?? -- Harsh J

Re: JAVA_HOME setup error in Hadoop-0.23.3 single node

2012-07-09 Thread Harsh J
researched for the solution online but couldn't find much. >> So would really appreciate if anyone knows how to resolve this ?Thanks >> >> -- >> >> --With Regards >> Pavan Kulkarni >> >> > > > -- > > --With Regards > Pavan Kulkarni -- Harsh J

Re: Problem setting up 1st generation Hadoop-0.20 (ANT build) in Eclipse

2012-07-10 Thread Harsh J
ion was how to build a binary tar file for hadoop-0.20 > which still uses ANT. The wiki pages only have information for maven. > Any help is highly appreciated.Thanks > -- > > --With Regards > Pavan Kulkarni -- Harsh J

Re: Problem setting up 1st generation Hadoop-0.20 (ANT build) in Eclipse

2012-07-10 Thread Harsh J
n how to build a > binary distribution tar file. > The information on wiki and in BUILDING.txt only has Maven > instructions.Thanks > > On Tue, Jul 10, 2012 at 2:39 PM, Harsh J wrote: > >> Hey Pavan, >> >> The 0.20.x version series was renamed recently to 1.x. Hence, y

New JIRA version field for branch-2's next release?

2012-07-13 Thread Harsh J
Hey devs, I noticed 2.0.1 has already been branched, but there's no newer JIRA version field added in for 2.1.0? Can someone with the right powers add it across all projects, so that backports to branch-2 can be marked properly in their fix versions field? Thanks! -- Harsh J

Re: New JIRA version field for branch-2's next release?

2012-07-15 Thread Harsh J
Thanks Arun! I will now diff both branches and fix any places the JIRA fix version needs to be corrected at. On Mon, Jul 16, 2012 at 8:30 AM, Arun C Murthy wrote: > Done. > > On Jul 13, 2012, at 11:12 PM, Harsh J wrote: > >> Hey devs, >> >> I noticed 2.0.1 has alre

Re: New JIRA version field for branch-2's next release?

2012-07-15 Thread Harsh J
Ah looks like you've covered that edge too, many thanks! On Mon, Jul 16, 2012 at 8:40 AM, Harsh J wrote: > Thanks Arun! I will now diff both branches and fix any places the JIRA > fix version needs to be corrected at. > > On Mon, Jul 16, 2012 at 8:30 AM, Arun C Murthy wrote:

Re: Powered By Hadoop Wiki Page Permissions

2012-07-19 Thread Harsh J
wiki, which you can > get by subscribing to the common-dev@hadoop.apache.org mailing list > and asking for the wiki account you have just created to get this > permission." > > Thanks, > /* Joey */ > -- Harsh J

Re: Shifting to Java 7 . Is it good choice?

2012-07-19 Thread Harsh J
I have to tweak a few classes and for this I needed few packages >> >>which >> >> are >> >> only present in Java 7 like "java.nio.file" , So I was wondering If I >> >>can >> >> shift my >> >> development environment of Hadoop to Java 7? Would this break anything ? >> >openjdk 7 works, but nio async file access is slower than traditional. >> >> > > > -- > > --With Regards > Pavan Kulkarni -- Harsh J

Re: regarding dfs.web.ugi

2012-07-27 Thread Harsh J
will always there, so here the logic should check the new property first, > then the deprecated key. Any idea? > } > } -- Harsh J

Re: IdentityMapper in 1.0.3

2012-07-27 Thread Harsh J
, Abhinav M Kulkarni wrote: > Hi, > > What has become of IdentityMapper in 1.0.3? It is not present under > mapreduce.lib. I understand there was a JIRA improvement to port all the > classes from mapred.lib to mapreduce.lib. > > Thanks. -- Harsh J

Re: Checksum Error during Reduce Phase hadoop-1.0.2

2012-08-12 Thread Harsh J
ames in > the */etc/hosts *file, > but all my nodes have correct info about the hostnames in /etc/hosts, but I > still have these reducers throwing error. > Any help regarding this issue is appreciated .Thanks > > -- > > --With Regards > Pavan Kulkarni -- Harsh J

Re: Failed reduce job in some node

2012-08-12 Thread Harsh J
= key + " " + str(totalpr) > > try: > for k in linkDict[key]: > strTmp += " " + str(k) > print strTmp > except: > pass > > I get this when I submit a simple map-reduce job using streaming. Even the > word count example failed to reduce in some nodes. The /etc/host file and > configuration information in the conf/core-site.xml ,mapred-site.xml and > hdfs-site.xml log file of the master and slave nodes are attached with this > e-mail. > > /etc/hosts > > 172.29.142.240 master > 172.29.142.213 slaveorange > 172.29.142.222 slavecc > > Any help would be appreciated! -- Harsh J

Re: Hadoop cluster/monitoring

2012-08-12 Thread Harsh J
Reduce and HDFS. Among other things, it does provide a general interaction API for all things 'Hadoop' -- Harsh J

Re: hadoop native libs 32 and 64 bit

2012-08-27 Thread Harsh J
tps://issues.apache.org/jira/browse/HADOOP-7874 > > but there's also this one: > > "change location of the native libraries to lib instead of lib/native" > https://issues.apache.org/jira/browse/HADOOP-7996 > > -Steven Willis > >> -Original Message---

Re: Number of reducers

2012-08-27 Thread Harsh J
oose to decide number of reducers to mention explicitly, what should I > consider. Because choosing an inappropriate number of reducers hampers the > performance. See http://wiki.apache.org/hadoop/HowManyMapsAndReduces -- Harsh J
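The linked wiki page's rule of thumb, as I recall it, is to target roughly 0.95x or 1.75x of (nodes x reduce slots per node). A sketch with made-up cluster numbers, not taken from the thread:

```shell
# Example: a 10-node cluster with 4 reduce slots per node (illustrative only).
nodes=10
slots_per_node=4
echo $(( 95  * nodes * slots_per_node / 100 ))   # ~0.95x: 38, one wave of reduces
echo $(( 175 * nodes * slots_per_node / 100 ))   # ~1.75x: 70, faster nodes take a second wave
```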

Re: Number of reducers

2012-08-28 Thread Harsh J
; > Regards > Abhishek > > > > On Mon, Aug 27, 2012 at 11:29 PM, Harsh J wrote: >> Hi, >> >> On Tue, Aug 28, 2012 at 8:32 AM, Abhishek wrote: >>> Hi all, >>> >>> I just want to know that, based on what factor map reduce framework dec

Re: How to set JVM arguments in Hadoop 0.23

2012-09-11 Thread Harsh J
rocessor.run(ResourceManager.java:327) > at java.lang.Thread.run(Thread.java:680) > > > > Can someone please tell me how to set this VM argument. > > Thanks in advance. -- Harsh J

Re: How to set JVM arguments in Hadoop 0.23

2012-09-11 Thread Harsh J
Btw, you can also set a global JAVA_LIBRARY_PATH env-var containing your paths, and YARN will pick it up. On Wed, Sep 12, 2012 at 9:16 AM, Harsh J wrote: > Hi Shekhar, > > For YARN, try setting YARN_OPTS inside the yarn-env.sh. YARN scripts > do not reuse the hadoop-env.sh like the
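The two options mentioned in this reply might look like the following in a yarn-env.sh; the native-library path is a placeholder, not from the thread:

```shell
# In yarn-env.sh: pass java.library.path through YARN_OPTS.
export YARN_OPTS="$YARN_OPTS -Djava.library.path=/opt/hadoop/lib/native"

# Or, as suggested above, a global env-var that YARN picks up.
export JAVA_LIBRARY_PATH=/opt/hadoop/lib/native
```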

Re: About the HDFS_READ/WRITE blocks from namenode to datanode

2012-10-08 Thread Harsh J
cks stored in datanodes, shouldn't that? > > > > Thanks > > May > > > > > -- Harsh J

Re: Wiki permissions for the Poweredby page

2012-10-09 Thread Harsh J
Grant. > > Regards > > James -- Harsh J

Re: Need to add fs shim to use QFS

2012-10-10 Thread Harsh J
; > Have you considered just pulling the kfs lib out and releasing the bridge > classes yourself? It's what the other FS suppliers do, as it gives them > more control over the libraries, including the ability to release more > often. > > -steve -- Harsh J

Re: Write Permissions to Support Wiki

2012-10-12 Thread Harsh J
> > Can you please add Write permissions for the Support for Apache Hadoop page > to the account "learncomputer" (or provide other means by which I can add > an entry to the page)? > > Thanks! > > Michael Dorf -- Harsh J

Mailing list admin?

2012-10-18 Thread Harsh J
hanks, Harsh J

Re: Mailing list admin?

2012-10-23 Thread Harsh J
Ping? On Thu, Oct 18, 2012 at 5:25 PM, Harsh J wrote: > Hey project devs, > > Can someone let me know who the MLs admin is? INFRA suggested that > instead of going to them, I could reach out to the admin group local > to the project itself (I didn't know we had admins locally

Re: libhdfs on windows

2012-10-25 Thread Harsh J
m this effort retained somewhere? If so, where? > Or do I have to start from scratch? Apologies if this has already been asked > recently. > > Any help appreciated. > > Peter Marron -- Harsh J

Re: Hadoop version layout error

2012-11-06 Thread Harsh J
ayout version is 'too old' and latest layout > version this software version can upgrade from is -7. > Any idea how to fix this prob. without losing the data. > > Many Thanks!! > > Daya Chen -- Harsh J

Re: which part of Hadoop is responsible of distributing the input file fragments to datanodes?

2012-11-11 Thread Harsh J
> > Thanks in advance, > Salam > > > > -- > View this message in context: > http://lucene.472066.n3.nabble.com/which-part-of-Hadoop-is-responsible-of-distributing-the-input-file-fragments-to-datanodes-tp4019530.html > Sent from the Hadoop lucene-dev mailing list archive at Nabble.com. -- Harsh J

Re: trailing whitespace

2012-11-25 Thread Harsh J
patches. > Probably checking for tabs in Java files would be also good idea. -- Harsh J

Re: wiki write access request

2012-11-26 Thread Harsh J
> seeing things that can be better clarified or need updating. Can I have > write access to the Wiki so I can make text updates? I'm user "GlenMazza". > > Thanks, > Glen > > -- > Glen Mazza > Talend Community Coders - coders.talend.com > blog: www.jroller.com/gmazza > -- Harsh J

Re: Anybody know how to configure SSH for eclipse plugin

2012-11-27 Thread Harsh J
; >> > >> > >> > >> > > > > -- > > Glen Mazza > > Talend Community Coders - coders.talend.com > > blog: www.jroller.com/gmazza > > > > > > > -- > ** > * Mr. Jia Yiyu* > * * > * Email: jia.y...@gmail.com * > * * > * Web: http://yiyujia.blogspot.com/* > *** > -- Harsh J

Re: Anybody know how to configure SSH for eclipse plugin

2012-11-27 Thread Harsh J
, it is partly > misled by the comments in HadoopServer.java file > > > * > * This class does not create any SSH connection anymore. Tunneling must be > * setup outside of Eclipse for now (using Putty or ssh -D<port> > * <host>) > * > > thanks again!

Re: Mailing list admin?

2012-11-28 Thread Harsh J
24 AM, Harsh J wrote: > Ping? > > On Thu, Oct 18, 2012 at 5:25 PM, Harsh J wrote: >> Hey project devs, >> >> Can someone let me know who the MLs admin is? INFRA suggested that >> instead of going to them, I could reach out to the admin group local >> to the

Re: Mailing list admin?

2012-11-28 Thread Harsh J
a moderator, please send a message to apmail > at apache.org asking to become a moderator. CC private at > hadoop.apache.org to keep the PMC in the loop. > > http://www.apache.org/dev/committers.html#mailing-list-moderators > > Doug > > On Wed, Nov 28, 2012 at 4:23 AM, Ha

Re: Hadoop in ubuntu 12.04

2012-12-02 Thread Harsh J
M > 4.Is it possible to run it on laptop?. > sorry for the silly question. > thank you > cheers > huu -- Harsh J

Re: Do we support concatenated/splittable bzip2 files in branch-1?

2012-12-03 Thread Harsh J
> branch-0.21 (also in trunk), say HADOOP-4012 and MAPREDUCE-830, but not > integrated/migrated into branch-1, so I guess we don't support concatenated > bzip2 in branch-1, correct? If so, is there any special reason? Many thanks! > > -- > Best Regards, > Li Yu -- Harsh J

Re: Do we support concatenated/splittable bzip2 files in branch-1?

2012-12-03 Thread Harsh J
eriry > whether HADOOP-7823 has resolved the issue on both write and read side, and > report back. > > On 3 December 2012 19:42, Harsh J wrote: > >> Hi Yu Li, >> >> The JIRA HADOOP-7823 backported support for splitting Bzip2 files plus >> MR support for it, into

Re: SPEC files?

2012-12-04 Thread Harsh J
on >> the internet, and before I attempt to create one I thought I'd ask here. >> Any help would be greatly appreciated. >> >> Sincerely, >> Michael Johnson >> m...@michaelpjohnson.com >> -- Harsh J

Re: Hadoop datajoin package

2013-01-14 Thread Harsh J
wrote: > On the user list, there was a question about the Hadoop datajoin package. > Specifically, its dependency on the old API. > > Is this package still in use ? Should we file a JIRA to migrate it to the > new API ? > > Thanks > hemanth > -- Harsh J

Re: Hadoop datajoin package

2013-01-15 Thread Harsh J
gt; > > http://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-tools/hadoop-datajoin/src/main/java/org/apache/hadoop/contrib/utils/join/ > > Is it to be deprecated and removed ? > > Thanks > Hemanth > > > On Mon, Jan 14, 2013 at 8:08 PM, Harsh J wrote: > > > Already

Re: Help to setup latest Hadoop source code on Eclipse

2013-01-31 Thread Harsh J
nd running? Can someone please help me out? > > PS: I am working on a project where we want to develop a custom scheduler > for hadoop. > > Thanks, > Karthiek -- Harsh J

Re: pre-historic record IO stuff, is this used anywhere?

2013-02-08 Thread Harsh J
g compiles fine. > > To me all this is dead code, if so, can we nuke them? > > Thx > > -- > Alejandro -- Harsh J

Re: Compile and deploy source code for Hadoop 1.0.4

2013-02-09 Thread Harsh J
elease 1.0.4? I believe I've answered both questions in the above inlines. Do feel free to post any further questions you have! -- Harsh J

Re: APIs to move data blocks within HDFS

2013-02-22 Thread Harsh J
n a research project where we want to investigate how to > optimally place data in hadoop. > > Thanks, > Karthiek -- Harsh J

Re: [Vote] Merge branch-trunk-win to trunk

2013-02-27 Thread Harsh J
Merge patch for this is available on >>> HADOOP-8562<https://issues.apache.org/jira/browse/HADOOP-8562> >>> . >>> >>> Highlights of the work done so far: >>> 1. Necessary changes in Hadoop to run natively on Windows. These changes >>> handle differences in platforms related to path names, process/task >>> management etc. >>> 2. Addition of winutils tools for managing file permissions and >>>ownership, >>> user group mapping, hardlinks, symbolic links, chmod, disk utilization, >>>and >>> process/task management. >>> 3. Added cmd scripts equivalent to existing shell scripts >>> hadoop-daemon.sh, start and stop scripts. >>> 4. Addition of block placement policy implementation to support cloud >>> environment, more specifically Azure. >>> >>> We are very close to wrapping up the work in branch-trunk-win and >>>getting >>> ready for a merge. Currently the merge patch is passing close to 100% of >>> unit tests on Linux. Soon I will call for a vote to merge this branch >>>into >>> trunk. >>> >>> Next steps: >>> 1. Call for vote to merge branch-trunk-win to trunk, when the work >>> completes and precommit build is clean. >>> 2. Start a discussion on adding Jenkins precommit builds on windows and >>> how to integrate that with the existing commit process. >>> >>> Let me know if you have any questions. >>> >>> Regards, >>> Suresh >>> >>> >> >> >>-- >>http://hortonworks.com/download/ >

Re: Technical question on Capacity Scheduler.

2013-03-03 Thread Harsh J
uhan > MSc student,CS > Univ. of Saskatchewan > IEEE Graduate Student Member > > http://homepage.usask.ca/~jac735/ > Feel free to post any further impl. related questions! :) -- Harsh J

Re: [Vote] Merge branch-trunk-win to trunk

2013-03-03 Thread Harsh J
39 AM, Tsuyoshi OZAWA wrote: > +1 (non-binding), > > Windows support is attractive for lots of users. > From the point of view of a Hadoop developer, Matt said that CI supports > cross platform testing, and it's quite a reasonable condition to merge. > > Thanks, > Tsuyoshi -- Harsh J

Re: [Vote] Merge branch-trunk-win to trunk

2013-03-04 Thread Harsh J
Thanks Suresh. Regarding where; we can state it on http://wiki.apache.org/hadoop/HowToContribute in the test-patch section perhaps. +1 on the merge. On Mon, Mar 4, 2013 at 11:39 PM, Suresh Srinivas wrote: > On Sun, Mar 3, 2013 at 8:50 PM, Harsh J wrote: > >> Have we agreed (a

Re: Technical question on Capacity Scheduler.

2013-03-05 Thread Harsh J
with reserved slots wont >> be executed if speculative execution is off? >> >> PS: I am working on MRv1. >> >> >> On Sun, Mar 3, 2013 at 2:41 AM, Harsh J wrote: >> >>> On Sun, Mar 3, 2013 at 1:41 PM, Jagmohan Chauhan < >>> simplefundumn...@gm

Re: [VOTE] Plan to create release candidate for 0.23.7

2013-03-16 Thread Harsh J
t; > >I think enough critical bug fixes have gone in to branch-0.23 that > >warrant another release. I plan on creating a 0.23.7 release by the end of > >March. > > > >Please vote '+1' to approve this plan. Voting will close on Wednesday > >3/20 at 10:00am PDT. > > > >Thanks, > >Tom Graves > >(release manager) > > > > > -- Harsh J

Re: [VOTE] Plan to create release candidate Monday 3/18

2013-03-16 Thread Harsh J
t; >Release plans have to be voted on too, so please vote '+1' to approve this > >plan. Voting will close on Sunday 3/17 at 8:30pm PDT. > > > >Thanks, > >--Matt > >(release manager) > > -- Harsh J

Re: how to define new InputFormat with streaming?

2013-03-16 Thread Harsh J
the lists > 2. grab a later version of the apache releases if you want help on them on > these mailing lists, or go to the cloudera lists, where they will probably > say "upgrade to CDH 4.x" before asking questions. > > thanks > -- Harsh J

Re: Re: how to define new InputFormat with streaming?

2013-03-17 Thread Harsh J
ib.input.*; > > is it because hadoop-0.20.2-cdh3u3 does not include the "mapred" API? > > > > > > > At 2013-03-17 14:22:43,"Harsh J" wrote: >>The issue is that Streaming expects the old/stable MR API >>(org.apache.hadoo

Re: HTTP/1.1 405 HTTP method PUT is not supported by this URL??

2013-03-17 Thread Harsh J
DFS. > Hadoop returns: > HTTP/1.1 405 HTTP method PUT is not supported by this URL > > > OK, who knows why this is? > > > TIA > Levi -- Harsh J
