ution.
>
> Thanks.
>
>
> --
> Regards,
> Bhavesh Shah
--
Harsh J
Ensure you are sticking with either new API or old API. I'm sure you have your
imports for the Input/Output formats mixed with mapred.* and mapreduce.* stuff.
Stabilizing that will fix it.
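For instance, a consistent new-API setup looks roughly like this (class and
variable names illustrative):

  // Everything from org.apache.hadoop.mapreduce.*, nothing from mapred.*
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

  Job job = new Job(conf, "my-job");               // conf: your Configuration
  job.setInputFormatClass(TextInputFormat.class);  // mapreduce.lib.input
  job.setOutputFormatClass(TextOutputFormat.class);

The old-API equivalents live under org.apache.hadoop.mapred.*; pick one side
and stay on it.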
In future, please send user mail to common-user@hadoop.apache.org and not
common-dev, which is for development discussions.
Ronald,
On 31-Dec-2011, at 6:46 AM, Ronald Petty wrote:
> Hello,
>
> Can someone explain the different layouts in the following repos,
> specifically regarding building from source?
>
> - http://svn.apache.org/repos/asf/hadoop/common/trunk/
> - http://svn.apache.org/repos/asf/hadoop/common/t
then backport to
older versions if absolutely critical/necessary and as applicable.
>
> Kindest regards.
>
> Ron
>
> On Fri, Dec 30, 2011 at 9:51 PM, Harsh J wrote:
>
>> Ronald,
>> On 31-Dec-2011, at 6:46 AM, Ronald Petty wrote:
>>
>> > Hello,
>> >
and < 2.3 are the current releases of Hadoop?
>
> Kindest regards.
>
> Ron
>
> On Fri, Dec 30, 2011 at 9:51 PM, Harsh J wrote:
>
>> Ronald,
>> On 31-Dec-2011, at 6:46 AM, Ronald Petty wrote:
>>
>>> Hello,
>>>
>>> Ca
Deborah,
Unsure what you're really asking for.
We're an open source community, all of this information is already available
even without signing up. I suggest you start at Apache Hadoop's homepage:
http://hadoop.apache.org and check out the various developer and user links
around.
On 04-Jan-20
changes to the Hadoop source code
> for my college project. How can I do this? How to compile and test the
> modified code? What tools are needed to perform this? Your reply will be of
> great help to me.
> Thanks in advance.
--
Harsh J
Customer Ops. Engineer, Cloudera
Windows+Cygwin
>
> Thanks
> Samaneh
>
> On Tue, Jan 24, 2012 at 3:06 PM, Harsh J wrote:
>
>> Ashok,
>>
>> Following http://wiki.apache.org/hadoop/HowToContribute should get you
>> started at development. Let us know if you have any further, specific
>> questions.
useful results. Is there something
> I'm missing in the build configuration?
>
> Thanks.
--
Harsh J
Customer Ops. Engineer, Cloudera
d in the
> document. I'm not sure what the options do, though.
>
> I'll give "mvn clean test" a try.
>
>
> On Thu, Jan 26, 2012 at 7:07 AM, Harsh J wrote:
>>
>> Moving to common-dev@.
>>
>> I'm able to run all hadoop-hdfs-httpfs tests.
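For reference, I run them from a trunk checkout roughly like this (module
path as of current trunk):

  $ cd hadoop-hdfs-project/hadoop-hdfs-httpfs
  $ mvn clean test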
an try some changes.
>
> Also, I'm still getting the error even when running "mvn clean test".
>
> I'm using the OpenJDK 1.6.0_22 on F16. Is there any other information
> needed?
>
> On Thu, Jan 26, 2012 at 9:16 AM, Harsh J wrote:
>
>> What are yo
em on OSX though).
On Fri, Jan 27, 2012 at 7:15 PM, Bai Shen wrote:
> Will do. Any idea why I get that error, though? I tried it on the Sun JDK
> and it gives me the same error.
>
> On Thu, Jan 26, 2012 at 11:46 AM, Harsh J wrote:
>
>> Bai,
>>
>> In that case
dissemination, or
> reproduction is strictly prohibited and may be unlawful. If you are
> not the intended recipient, please contact the sender immediately by
> return e-mail and destroy all copies of the original message.
--
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about
>
> Can you please give me a work around for this issue.
> --
> View this message in context:
> http://old.nabble.com/Execute-a-Map-Reduce-Job-Jar-from-Another-Java-Program.-tp33250801p33250801.html
> Sent from the Hadoop core-dev mailing list archive at Nabble.com.
>
--
Harsh J
strictly prohibited and may be unlawful. If you are
> not the intended recipient, please contact the sender immediately by
> return e-mail and destroy all copies of the original message.
--
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about
>
>
> Does anyone know this issue?
>
> I am using the Hadoop build I made myself. The command ($ bin/hadoop namenode
> -format) is issued under the directory
>
> /hadoop-common/hadoop-dist/target/hadoop-0.24.0-SNAPSHOT
>
>
> Thanks,
>
> Haifegng
--
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about
l mentions core-dev instead of common-dev but I can update that
> too.
>
> Thanks!
>
> Cheers,
> Lars
>
> [1] <http://www.gbif.org>
--
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about
ge; just making sure I had everything built!).
>
> Is there a faster way to get the thing to build?
>
> Sriram
--
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about
easy to discover
> though.
>
> Does anyone know if there is an easier way to see if your customized
> partitioner is working? For instance, a counter that shows how many
> partitions a map generated or a reducer received?
>
> Thanks in advance,
>
> Fabio Almeida
--
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about
friendly
> and shaped for our clients' needs.
>
> Thanks,
>
> Fabio Pitzolu
--
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about
unfortunately didn't work for
>> me. I've been stuck here, I really hope anyone can help me with this issue.
>>
>> Thank you all.
>> Regards,
>> SaSa
>>
>>
>>
>> --
>> Mohammed El Sayed
>> Computer Science Department
>> King Abdullah University of Science and Technology
>> 2351 - 4700 KAUST, Saudi Arabia
>> Home Page <http://cloud.kaust.edu.sa/SiteCollectionDocuments/melsayed.aspx>
>
--
Harsh J
>
>
> --
> Todd Lipcon
> Software Engineer, Cloudera
>
> The information contained in this email message is considered confidential
> and proprietary to the sender and is intended solely for review and use by
> the named recipient. Any unauthorized review, use or distribution is strictly
> prohibited. If you have received this message in error, please advise the
> sender by reply email and delete the message.
--
Harsh J
Exception: error in opening zip file
>>
>> at java.util.zip.ZipFile.open(Native Method)
>>
>> at java.util.zip.ZipFile.<init>(ZipFile.java:127)
>>
>> at java.util.jar.JarFile.<init>(JarFile.java:135)
>>
>> at java.util.jar.JarFile.<init>(JarFile.java:72)
>>
>> at org.apache.hadoop.util.RunJar.main(RunJar.java:88)
>>
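That trace almost always means the jar itself is corrupt or truncated. A
quick sanity check (path illustrative):

  $ jar tf /path/to/your-job.jar > /dev/null && echo "jar is OK"

If the listing fails, rebuild or re-copy the jar before passing it to
hadoop jar.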
--
Harsh J
ation techniques.
>> Hadoop/Hbase development/running experience is a big plus.
>>
>> - Database server development experience is a plus
>>
>> - Web application development experience is a plus
>>
>> - Data warehouse and analytics experience is a plus
>>
>> - NoSQL experience is a plus
>>
>> - Ability to work with customers, understand customer business
>> requirements and communicate them to development organization
>>
>> Qualifications: Bachelor or above Degree in Computer Science or relevant
>> areas
>>
>>
--
Harsh J
2-286-8393
> Fax# 512-838-8858
>
>
--
Harsh J
e say what
> does it signify?
>
> Regards,
> Ranjan
--
Harsh J
The default number of map tasks per job. Typically set
> to a prime several times greater than number of available hosts.
> Ignored when mapred.job.tracker is "local".
>
>
> However I still get only one map task and not two. Can someone suggest a
> solution to this problem?
>
> Thanking you
>
> Yours faithfully
> Ranjan Banerjee
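Note that mapred.map.tasks is only a hint; the actual map count comes from
the input splits your InputFormat computes. A sketch of the old-API
behaviour (class and values illustrative):

  import org.apache.hadoop.mapred.JobConf;

  JobConf conf = new JobConf(MyJob.class); // MyJob is a placeholder
  conf.setNumMapTasks(2); // FileInputFormat treats this as a split-count hint
  // A single small file, or an unsplittable one (e.g. gzip), still yields
  // exactly one map task, since a split never spans files.

So first check whether your input is one small or compressed file.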
--
Harsh J
incorrectly, please contact your JIRA
>> administrators:
>> https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
>> For more information on JIRA, see: http://www.atlassian.com/software/jira
>>
>>
>>
--
Harsh J
at
> org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.next(ReduceTask.java:242)
> at
> org.apache.hadoop.contrib.utils.join.DataJoinReducerBase.regroup(DataJoinReducerBase.java:106)
>
>
> I checked some forums and found out that the error may occur due to a
> non-static class. My program has no non-static class!
>
> The command I use to run hadoop is
> /hadoop/core/bin/hadoop jar /export/scratch/lopez/Join/DataJoin.jar
> DataJoin /export/scratch/user/lopez/Join
> /export/scratch/user/lopez/Join_Output
>
> and the DataJoin.jar file has DataJoin$TaggedWritable packaged in it
>
> Could someone please help me
> --
> View this message in context:
> http://old.nabble.com/Reduce-side-join---Hadoop-default---error-in-combiner-tp33705493p33705493.html
> Sent from the Hadoop core-dev mailing list archive at Nabble.com.
--
Harsh J
TestDFSIO to do the benchmark to get write-speed to HDFS, but we
> wonder how much the gap is.
>
> Another question is, when did Hadoop move this chunk to HDFS?
>
> Any thoughts, guys? Thanks ahead.
>
> Xun
--
Harsh J
Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Annotations . SKIPPED
> [INFO] Apache Hadoop Auth SKIPPED
> [INFO] Apache Hadoop Auth Examples ... SKIPPED
> [INFO] Apache Hadoop Common .. SKIPPED
> [INFO] Apache Hadoop Common Project .. FAILURE [6.965s]
> [INFO]
>
> [INFO] BUILD FAILURE
> [INFO]
>
> [INFO] Total time: 7.752s
> [INFO] Finished at: Fri Apr 27 22:11:54 EDT 2012
> [INFO] Final Memory: 27M/310M
> [INFO]
>
> [ERROR] Failed to execute goal
> org.codehaus.mojo:native-maven-plugin:1.0-alpha-7:javah (default) on
> project hadoop-common: Error running javah command: Error executing command
> line. Exit code:1 -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the
> command
> [ERROR] mvn -rf :hadoop-common
> .
> .
> .
> .
--
Harsh J
ion
> of this message without the prior written consent of the author of this
> e-mail is strictly prohibited. If you have
> received this email in error please delete it and notify the sender
> immediately. Before opening any mail and attachments
> please check them for viruses and defect.
>
> ---
--
Harsh J
Hey Han,
Welcome! :)
Please read http://hadoop.apache.org/common/mailing_lists.html on how
to subscribe properly. You need to mail common-dev-subscribe@, not
common-dev@ directly.
On Thu, May 17, 2012 at 12:22 PM, han tai wrote:
> hello, I want to subscribe to Hadoop
--
Harsh J
FsPermission JOB_FILE_PERMISSION =
> FsPermission.createImmutable((short) 0764); // rwxrw-r--
>
>
> If these changes are not incompatible with the Linux build (i.e. if it is
> acceptable to grant permissions to the group), then I will check them in to
> SVN.
>
> regards,
>
> Patrick Toolis
--
Harsh J
automake and related programs installed to build the
>> native parts of Hadoop. Instead, you would need CMake installed. CMake is
>> packaged by Red Hat, even in RHEL5, so it shouldn't be difficult to install
>> locally. It's also available for Mac OS X and Windows, as I mentioned
>> earlier.
>>
>> The JIRA for this work is at
>> https://issues.apache.org/jira/browse/HADOOP-8368
>> Thanks for reading.
>>
>> sincerely,
>> Colin
--
Harsh J
http://wiki.apache.org/hadoop/PoweredBy
>
> --
> WBR, Mikhail Yakshin
--
Harsh J
>>
>> Regards,
>> Madhukara Phatak
>> --
>> https://github.com/zinnia-phatak-dev/Nectar
>>
>>
>
>
> --
> https://github.com/zinnia-phatak-dev/Nectar
--
Harsh J
to apply test-patch on a fresh svn checkout. Can anyone advise
> on how to get rid of this error?
>
> --
> https://github.com/zinnia-phatak-dev/Nectar
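For reference, the usual invocation against a clean trunk checkout is along
these lines (patch path illustrative):

  $ dev-support/test-patch.sh /path/to/HADOOP-NNNN.patch

though the exact options vary by checkout; see the header of the script
itself.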
--
Harsh J
> Hi Harsh,
> If I run test-patch after mvn clean it shows the following error
>
> Pre-build trunk to verify trunk stability and javac warnings
>
> Trunk compilation is broken?
>
>
> On Tue, May 29, 2012 at 11:19 PM, Harsh J wrote:
>
>> Hi,
>>
>> Run an {{m
beginning or at the end of the table? Can anyone help me on
>> this please.
>>
>> Thanks
>>
>> Regards
>> Abhi.
>
>
>
> --
> thanks
> ashish
>
> Blog: http://www.ashishpaliwal.com/blog
> My Photo Galleries: http://www.pbase.com/ashishpaliwal
--
Harsh J
scheduler?
>
> In fair scheduling (0.20.205) there exists a preemption mechanism, but it just
> handles min and fair shares and kills the last submitted TIPs.
>
> Thanks.
--
Harsh J
more into the project willing to recommend me a task to
> start with?
>
> Regards
>
> Dennis Mårtensson
>
> Dennis Mårtensson
> 0768670934
> m...@dennis.is
> QR code for scanning contact data into a mobile phone.
>
--
Harsh J
>
> Can someone please give me a link to the steps to be followed for getting
> Hadoop (latest from trunk) started in Eclipse? I need to be able to commit
> changes to my forked repository on github.
>
> Thanks in advance.
> Regards,
> Prajakta
--
Harsh J
core-site.xml, hdfs-site.xml, mapred-site.xml
> files. Will that cause a problem?
>
> Regards,
> Prajakta
>
>
>
> On Wed, Jun 13, 2012 at 8:18 PM, Harsh J wrote:
>
>> Good to know your progress Prajakta!
>>
>> Did your submission surely go via the RM/NM or did it exe
on
> Ubuntu. I
> have searched the whole internet but could not find a solution to it. Were you
> able to solve it? If yes, will you help me? I have started tearing my
> hair out!!!
>
>
HTH!
--
Harsh J
org.apache.hadoop.mapreduce.task.reduce.Fetcher=DEBUG
>
> Then I restarted the daemons and grep'd in the log directory for the Debug
> message present in the Fetcher.java class.
> I couldn't find any !!
>
> Am I missing anything here? Any help is highly appreciated. Thanks
>
> --
>
> --With Regards
> Pavan Kulkarni
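Worth double-checking that the line carries the log4j.logger. prefix in
log4j.properties:

  log4j.logger.org.apache.hadoop.mapreduce.task.reduce.Fetcher=DEBUG

Without that prefix log4j silently ignores it. Also, the Fetcher runs inside
the reduce tasks, so look in the task logs rather than the daemon logs.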
--
Harsh J
option. It's
> taking a very long time to move data into trash. Can you please help me
> stop this process of deleting and restart the process with
> skipTrash?
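(If it is the flag syntax you're after, deletes that bypass trash look like
this on 1.x-era releases, path illustrative:

  $ hadoop fs -rmr -skipTrash /path/to/dir

or hadoop fs -rm -r -skipTrash on newer lines.)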
--
Harsh J
researched for the solution online but couldn't find much.
>> So I would really appreciate it if anyone knows how to resolve this? Thanks
>>
>> --
>>
>> --With Regards
>> Pavan Kulkarni
>>
>>
>
>
> --
>
> --With Regards
> Pavan Kulkarni
--
Harsh J
My question was how to build a binary tar file for hadoop-0.20,
> which still uses Ant. The wiki pages only have information for Maven.
> Any help is highly appreciated. Thanks
> --
>
> --With Regards
> Pavan Kulkarni
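If memory serves, the 0.20 Ant build had tar/binary targets, so something
like:

  $ ant tar

from the top of the checkout produced the distribution tarball; run ant -p
there to list the exact targets your branch has.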
--
Harsh J
n how to build a
> binary distribution tar file.
> The information on wiki and in BUILDING.txt only has Maven
> instructions. Thanks
>
> On Tue, Jul 10, 2012 at 2:39 PM, Harsh J wrote:
>
>> Hey Pavan,
>>
>> The 0.20.x version series was renamed recently to 1.x. Hence, y
Hey devs,
I noticed 2.0.1 has already been branched, but there's no newer JIRA
version field added in for 2.1.0? Can someone with the right powers
add it across all projects, so that backports to branch-2 can be
marked properly in their fix versions field?
Thanks!
--
Harsh J
Thanks Arun! I will now diff both branches and fix any places the JIRA
fix version needs to be corrected at.
On Mon, Jul 16, 2012 at 8:30 AM, Arun C Murthy wrote:
> Done.
>
> On Jul 13, 2012, at 11:12 PM, Harsh J wrote:
>
>> Hey devs,
>>
>> I noticed 2.0.1 has alre
Ah looks like you've covered that edge too, many thanks!
On Mon, Jul 16, 2012 at 8:40 AM, Harsh J wrote:
> Thanks Arun! I will now diff both branches and fix any places the JIRA
> fix version needs to be corrected at.
>
> On Mon, Jul 16, 2012 at 8:30 AM, Arun C Murthy wrote:
wiki, which you can
> get by subscribing to the common-dev@hadoop.apache.org mailing list
> and asking for the wiki account you have just created to get this
> permission."
>
> Thanks,
> /* Joey */
>
--
Harsh J
I have to tweak a few classes and for this I needed a few packages
>> >> which are only present in Java 7, like "java.nio.file". So I was
>> >> wondering if I can shift my development environment of Hadoop to
>> >> Java 7? Would this break anything?
>> > OpenJDK 7 works, but NIO async file access is slower than traditional.
>>
>>
>
>
> --
>
> --With Regards
> Pavan Kulkarni
--
Harsh J
will always be there, so the logic should check the new property first,
> then the deprecated key. Any idea?
> }
> }
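Something along these lines should do (key names hypothetical):

  import org.apache.hadoop.conf.Configuration;

  Configuration conf = new Configuration();
  // Prefer the new property; fall back to the deprecated key only if unset.
  String val = conf.get("new.key.name");
  if (val == null) {
    val = conf.get("old.deprecated.key");
  }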
--
Harsh J
, Abhinav M Kulkarni
wrote:
> Hi,
>
> What has become of IdentityMapper in 1.0.3? It is not present under
> mapreduce.lib. I understand there was a JIRA improvement to port all the
> classes from mapred.lib to mapreduce.lib.
>
> Thanks.
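If memory serves, the new API dropped IdentityMapper because its base
classes are already pass-through: a bare org.apache.hadoop.mapreduce.Mapper
emits its input unchanged, so

  job.setMapperClass(org.apache.hadoop.mapreduce.Mapper.class); // identity

does the same job (and likewise Reducer on the reduce side).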
--
Harsh J
names in
> the /etc/hosts file,
> but all my nodes have correct info about the hostnames in /etc/hosts, yet I
> still have these reducers throwing errors.
> Any help regarding this issue is appreciated .Thanks
>
> --
>
> --With Regards
> Pavan Kulkarni
--
Harsh J
strTmp = key + " " + str(totalpr)
>
> try:
> for k in linkDict[key]:
> strTmp += " " + str(k)
> print strTmp
> except:
> pass
>
> I get this when I submit a simple map-reduce job using streaming. Even the
> word count example failed to reduce on some nodes. The /etc/hosts file and
> configuration information in the conf/core-site.xml, mapred-site.xml and
> hdfs-site.xml files of the master and slave nodes are attached with this
> e-mail.
>
> /etc/hosts
>
> 172.29.142.240 master
> 172.29.142.213 slaveorange
> 172.29.142.222 slavecc
>
> Any help would be appreciated!
--
Harsh J
MapReduce and HDFS. Among other things, it does provide
a general interaction API for all things 'Hadoop'
--
Harsh J
https://issues.apache.org/jira/browse/HADOOP-7874
>
> but there's also this one:
>
> "change location of the native libraries to lib instead of lib/native"
> https://issues.apache.org/jira/browse/HADOOP-7996
>
> -Steven Willis
>
>> -----Original Message-----
choose to decide the number of reducers to mention explicitly, what should I
> consider? Because choosing an inappropriate number of reducers hampers
> performance.
See http://wiki.apache.org/hadoop/HowManyMapsAndReduces
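The rule of thumb from that page, expressed as code (numbers illustrative,
job is your existing Job/JobConf):

  int nodes = 10;       // cluster size
  int slotsPerNode = 2; // mapred.tasktracker.reduce.tasks.maximum
  // 0.95 lets all reduces launch in one wave; 1.75 gives faster nodes a
  // second wave for better load balancing.
  job.setNumReduceTasks((int) (0.95 * nodes * slotsPerNode));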
--
Harsh J
>
> Regards
> Abhishek
>
>
>
> On Mon, Aug 27, 2012 at 11:29 PM, Harsh J wrote:
>> Hi,
>>
>> On Tue, Aug 28, 2012 at 8:32 AM, Abhishek wrote:
>>> Hi all,
>>>
>>> I just want to know that, based on what factor map reduce framework dec
rocessor.run(ResourceManager.java:327)
> at java.lang.Thread.run(Thread.java:680)
>
>
>
> Can someone please tell me how to set this VM argument.
>
> Thanks in advance.
--
Harsh J
Btw, you can also set a global JAVA_LIBRARY_PATH env-var containing
your paths, and YARN will pick it up.
On Wed, Sep 12, 2012 at 9:16 AM, Harsh J wrote:
> Hi Shekhar,
>
> For YARN, try setting YARN_OPTS inside the yarn-env.sh. YARN scripts
> do not reuse the hadoop-env.sh like the
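That is, something like this in yarn-env.sh (path illustrative):

  export YARN_OPTS="$YARN_OPTS -Djava.library.path=/opt/hadoop/lib/native"

or, as above, the global variant:

  export JAVA_LIBRARY_PATH=/opt/hadoop/lib/native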
blocks stored in datanodes, shouldn't that?
>
>
>
> Thanks
>
> May
>
>
>
>
>
--
Harsh J
Grant.
>
> Regards
>
> James
--
Harsh J
>
> Have you considered just pulling the kfs lib out and releasing the bridge
> classes yourself? It's what the other FS suppliers do, as it gives them
> more control over the libraries, including the ability to release more
> often.
>
> -steve
--
Harsh J
>
> Can you please add Write permissions for the Support for Apache Hadoop page
> to the account "learncomputer" (or provide other means by which I can add
> an entry to the page)?
>
> Thanks!
>
> Michael Dorf
--
Harsh J
Thanks,
Harsh J
Ping?
On Thu, Oct 18, 2012 at 5:25 PM, Harsh J wrote:
> Hey project devs,
>
> Can someone let me know who the MLs admin is? INFRA suggested that
> instead of going to them, I could reach out to the admin group local
> to the project itself (I didn't know we had admins locally
from this effort retained somewhere? If so, where?
> Or do I have to start from scratch? Apologies if this has already been asked
> recently.
>
> Any help appreciated.
>
> Peter Marron
--
Harsh J
layout version is 'too old' and latest layout
> version this software version can upgrade from is -7.
> Any idea how to fix this problem without losing the data?
>
> Many Thanks!!
>
> Daya Chen
--
Harsh J
>
> Thanks in advance,
> Salam
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/which-part-of-Hadoop-is-responsible-of-distributing-the-input-file-fragments-to-datanodes-tp4019530.html
> Sent from the Hadoop lucene-dev mailing list archive at Nabble.com.
--
Harsh J
patches.
> Probably checking for tabs in Java files would also be a good idea.
--
Harsh J
> seeing things that can be better clarified or need updating. Can I have
> write access to the Wiki so I can make text updates? I'm user "GlenMazza".
>
> Thanks,
> Glen
>
> --
> Glen Mazza
> Talend Community Coders - coders.talend.com
> blog: www.jroller.com/gmazza
>
--
Harsh J
> >>
> >>
> >>
> >>
> >
> > --
> > Glen Mazza
> > Talend Community Coders - coders.talend.com
> > blog: www.jroller.com/gmazza
> >
> >
>
>
> --
> **
> * Mr. Jia Yiyu*
> * *
> * Email: jia.y...@gmail.com *
> * *
> * Web: http://yiyujia.blogspot.com/*
> ***
>
--
Harsh J
, it is partly
> misled by the comments in the HadoopServer.java file
>
>
> *
> * This class does not create any SSH connection anymore. Tunneling must be
> * setup outside of Eclipse for now (using Putty or ssh -D<port>
> * <host>)
> *
>
> thanks again!
24 AM, Harsh J wrote:
> Ping?
>
> On Thu, Oct 18, 2012 at 5:25 PM, Harsh J wrote:
>> Hey project devs,
>>
>> Can someone let me know who the MLs admin is? INFRA suggested that
>> instead of going to them, I could reach out to the admin group local
>> to the
a moderator, please send a message to apmail
> at apache.org asking to become a moderator. CC private at
> hadoop.apache.org to keep the PMC in the loop.
>
> http://www.apache.org/dev/committers.html#mailing-list-moderators
>
> Doug
>
> On Wed, Nov 28, 2012 at 4:23 AM, Ha
M
> 4. Is it possible to run it on a laptop?
> Sorry for the silly question.
> Thank you
> cheers
> huu
--
Harsh J
> branch-0.21(also in trunk), say HADOOP-4012 and MAPREDUCE-830, but not
> integrated/migrated into branch-1, so I guess we don't support concatenated
> bzip2 in branch-1, correct? If so, is there any special reason? Many thanks!
>
> --
> Best Regards,
> Li Yu
--
Harsh J
verify
> whether HADOOP-7823 has resolved the issue on both write and read side, and
> report back.
>
> On 3 December 2012 19:42, Harsh J wrote:
>
>> Hi Yu Li,
>>
>> The JIRA HADOOP-7823 backported support for splitting Bzip2 files plus
>> MR support for it, into
on
>> the internet, and before I attempt to create one I thought I'd ask here.
>> Any help would be greatly appreciated.
>>
>> Sincerely,
>> Michael Johnson
>> m...@michaelpjohnson.com
>>
--
Harsh J
wrote:
> On the user list, there was a question about the Hadoop datajoin package.
> Specifically, its dependency on the old API.
>
> Is this package still in use ? Should we file a JIRA to migrate it to the
> new API ?
>
> Thanks
> hemanth
>
--
Harsh J
>
>
> http://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-tools/hadoop-datajoin/src/main/java/org/apache/hadoop/contrib/utils/join/
>
> Is it to be deprecated and removed ?
>
> Thanks
> Hemanth
>
>
> On Mon, Jan 14, 2013 at 8:08 PM, Harsh J wrote:
>
> > Already
and running? Can someone please help me out?
>
> PS: I am working on a project where we want to develop a custom scheduler
> for hadoop.
>
> Thanks,
> Karthiek
--
Harsh J
g compiles fine.
>
> To me all this is dead code; if so, can we nuke it?
>
> Thx
>
> --
> Alejandro
--
Harsh J
release 1.0.4?
I believe I've answered both questions in the above inlines. Do feel
free to post any further questions you have!
--
Harsh J
on a research project where we want to investigate how to
> optimally place data in hadoop.
>
> Thanks,
> Karthiek
--
Harsh J
Merge patch for this is available on
>>> HADOOP-8562<https://issues.apache.org/jira/browse/HADOOP-8562>
>>> .
>>>
>>> Highlights of the work done so far:
>>> 1. Necessary changes in Hadoop to run natively on Windows. These changes
>>> handle differences in platforms related to path names, process/task
>>> management etc.
>>> 2. Addition of winutils tools for managing file permissions and
>>>ownership,
>>> user group mapping, hardlinks, symbolic links, chmod, disk utilization,
>>>and
>>> process/task management.
>>> 3. Added cmd scripts equivalent to existing shell scripts
>>> hadoop-daemon.sh, start and stop scripts.
>>> 4. Addition of block placement policy implementation to support cloud
>>> environment, more specifically Azure.
>>>
>>> We are very close to wrapping up the work in branch-trunk-win and
>>>getting
>>> ready for a merge. Currently the merge patch is passing close to 100% of
>>> unit tests on Linux. Soon I will call for a vote to merge this branch
>>>into
>>> trunk.
>>>
>>> Next steps:
>>> 1. Call for vote to merge branch-trunk-win to trunk, when the work
>>> completes and precommit build is clean.
>>> 2. Start a discussion on adding Jenkins precommit builds on windows and
>>> how to integrate that with the existing commit process.
>>>
>>> Let me know if you have any questions.
>>>
>>> Regards,
>>> Suresh
>>>
>>>
>>
>>
>>--
>>http://hortonworks.com/download/
>
--
Harsh J
uhan
> MSc student,CS
> Univ. of Saskatchewan
> IEEE Graduate Student Member
>
> http://homepage.usask.ca/~jac735/
>
Feel free to post any further impl. related questions! :)
--
Harsh J
39 AM, Tsuyoshi OZAWA wrote:
> +1 (non-binding),
>
> Windows support is attractive for lots users.
> From the point of view of a Hadoop developer, Matt said that CI supports
> cross-platform testing, and it's quite a reasonable condition to merge.
>
> Thanks,
> Tsuyoshi
--
Harsh J
Thanks Suresh. Regarding where; we can state it on
http://wiki.apache.org/hadoop/HowToContribute in the test-patch
section perhaps.
+1 on the merge.
On Mon, Mar 4, 2013 at 11:39 PM, Suresh Srinivas wrote:
> On Sun, Mar 3, 2013 at 8:50 PM, Harsh J wrote:
>
>> Have we agreed (a
with reserved slots won't
>> be executed if speculative execution is off?
>>
>> PS: I am working on MRv1.
>>
>>
>> On Sun, Mar 3, 2013 at 2:41 AM, Harsh J wrote:
>>
>>> On Sun, Mar 3, 2013 at 1:41 PM, Jagmohan Chauhan <
>>> simplefundumn...@gm
> > >I think enough critical bug fixes have gone into branch-0.23 to
> > >warrant another release. I plan on creating a 0.23.7 release by the end
> > >of March.
> > >
> > >Please vote '+1' to approve this plan. Voting will close on Wednesday
> > >3/20 at 10:00am PDT.
> > >
> > >Thanks,
> > >Tom Graves
> > >(release manager)
> >
> >
>
--
Harsh J
> >Release plans have to be voted on too, so please vote '+1' to approve this
> >plan. Voting will close on Sunday 3/17 at 8:30pm PDT.
> >
> >Thanks,
> >--Matt
> >(release manager)
>
>
--
Harsh J
the lists
> 2. grab a later version of the apache releases if you want help on them on
> these mailing lists, or go to the cloudera lists, where they will probably
> say "upgrade to CDH 4.x" before asking questions.
>
> thanks
>
--
Harsh J
lib.input.*;
>
> is it because hadoop-0.20.2-cdh3u3 does not include the "mapred" API?
>
>
>
>
>
>
> At 2013-03-17 14:22:43,"Harsh J" wrote:
>>The issue is that Streaming expects the old/stable MR API
>>(org.apache.hadoop.mapred).
DFS.
> Hadoop returns:
> HTTP/1.1 405 HTTP method PUT is not supported by this URL
>
>
> OK, who knows why this is?
>
>
> TIA
> Levi
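A 405 on PUT usually means the URL isn't actually a WebHDFS endpoint or the
op parameter is missing. For comparison, a WebHDFS file create is a two-step
PUT (host/ports illustrative):

  $ curl -i -X PUT "http://namenode:50070/webhdfs/v1/tmp/foo?op=CREATE"
  # returns a 307 whose Location header points at a datanode; then:
  $ curl -i -X PUT -T foo "<the Location URL from the 307>"

Hitting a non-WebHDFS servlet path with PUT would give you exactly this 405.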
--
Harsh J