What's the best way to extend entry points that extend Tool?
I've written a new entry point that lets you test topology scripts by
loading them, resolving hostnames provided on the command line/in a file,
then printing out the results to the console.
https://issues.apache.org/jira/browse/HADOOP-82
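For anyone curious what such an entry point looks like, here is a minimal sketch -the class name and the resolve() helper are made up for illustration; only Tool, Configured and ToolRunner are the real Hadoop APIs:

  import org.apache.hadoop.conf.Configured;
  import org.apache.hadoop.util.Tool;
  import org.apache.hadoop.util.ToolRunner;

  // Hypothetical entry point: prints a rack mapping for each hostname given
  // on the command line.
  public class TopologyTestTool extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
      if (args.length == 0) {
        System.err.println("Usage: TopologyTestTool <hostname> [<hostname>...]");
        return -1;
      }
      for (String host : args) {
        System.out.println(host + " -> " + resolve(host));
      }
      return 0;
    }

    private String resolve(String host) {
      // placeholder: the real tool would invoke the configured topology
      // script / DNSToSwitchMapping to look up the rack for this host
      return "/default-rack";
    }

    public static void main(String[] args) throws Exception {
      // ToolRunner handles the generic -D/-conf/-fs options, then calls run()
      System.exit(ToolRunner.run(new TopologyTestTool(), args));
    }
  }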
On 22 May 2012 11:33, Radim Kolar wrote:
> i have better experience with scons instead of cmake
>
mmm, it may be better to jump beyond make altogether to the higher levels of
native-ish build tools.
What would be good is output that Jenkins can parse, with Jenkins set up to
build and test
1. This is a user question, so please use the common-user or mapreduce-user
mailing lists. There are more people on them and they are the better place.
2. Before panicking and asking others for help, always try and do a bit of
research. The stack trace says the cause is BindException and "Address
already in use".
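For illustration -a self-contained snippet showing where that message comes from: binding a second socket to a port that is already in use raises exactly this BindException, which usually means another daemon (or an earlier instance of the same one) is still holding the port. The port number below is arbitrary.

  import java.net.BindException;
  import java.net.ServerSocket;

  public class PortClash {
    public static void main(String[] args) throws Exception {
      ServerSocket first = new ServerSocket(50070);   // arbitrary port
      try {
        new ServerSocket(50070);                      // second bind to the same port
      } catch (BindException e) {
        // prints "Address already in use" (exact wording varies by OS)
        System.out.println(e.getMessage());
      } finally {
        first.close();
      }
    }
  }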
This is a hadoop-user question, not a development one -please use the right
list, as user questions get ignored on the dev ones.
also:
http://wiki.apache.org/hadoop/ConnectionRefused
On 11 July 2012 19:23, Momina Khan wrote:
> i use the following command to try to copy data from hdfs to my s3 bucket
>
> ubu
On 31 July 2012 05:46, Web wrote:
> Hi
>
> Could I have the permission as well ?
> My wiki username is "Denis".
>
>
done
There aren't bin and net categories in JIRA, yet bugs often go against the
code there.
Should I add them?
done.
On 6 August 2012 13:45, Eli Collins wrote:
> Works for me. I've been adding missing ones that make sense (eg webhdfs).
>
> On Mon, Aug 6, 2012 at 11:34 AM, Steve Loughran
> wrote:
> > There aren't bin and net categories in JIRA, yet often bugs go
On 8 August 2012 10:23, Clay McDonald wrote:
> Hello all, I would like to know how I can assist with the Hadoop project?
> It doesn't matter in what capacity. I just want to help out with whatever
> is needed.
>
>
One way that can benefit you and the source is to grab the alpha/beta
releases and
ble for using cache on top of
> writables.
>
> https://issues.apache.org/jira/browse/HADOOP-8619
>
> can someone review this ticket and close it (as fixed or wont fix), i need
> to know final decision.
>
correction: I'd like a JUnit test that pushes some of the standard Writables
through an ObjectOutputStream and back, and verifies round-tripping of the
data. It may seem like extra work -but it stops serialization breaking in future.
-steve
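A rough sketch of the kind of test meant here -it assumes the HADOOP-8619 patch under discussion makes these Writables implement java.io.Serializable, which stock Hadoop does not; the class name is made up:

  import static org.junit.Assert.assertEquals;

  import java.io.ByteArrayInputStream;
  import java.io.ByteArrayOutputStream;
  import java.io.ObjectInputStream;
  import java.io.ObjectOutputStream;

  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.junit.Test;

  public class TestWritableJavaSerialization {

    /** Push a value through ObjectOutputStream/ObjectInputStream and return the copy. */
    @SuppressWarnings("unchecked")
    private <T> T roundTrip(T value) throws Exception {
      ByteArrayOutputStream bytes = new ByteArrayOutputStream();
      ObjectOutputStream out = new ObjectOutputStream(bytes);
      out.writeObject(value);     // only works if the Writable is Serializable
      out.close();
      ObjectInputStream in =
          new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
      return (T) in.readObject();
    }

    @Test
    public void testTextRoundTrips() throws Exception {
      assertEquals(new Text("hello"), roundTrip(new Text("hello")));
    }

    @Test
    public void testIntWritableRoundTrips() throws Exception {
      assertEquals(new IntWritable(42), roundTrip(new IntWritable(42)));
    }
  }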
On 9 August 2012 09:11, Steve Loughran wrote:
> I've voted for
done
On 11 August 2012 11:33, Aggelos Dimitratos wrote:
> I would like to add our site rodacino <http://www.rodacino.gr>
> We use hadoop for crawling news sites and log analysis.
> We also use Apache Cassandra as our back end and Apache Lucene for
> searching capabilit
On 12 August 2012 01:20, Jun Ping Du wrote:
> Thanks Ted. Those are very good suggestions as backup solutions when JIRA
> is down.
> Besides alleviating the impact of JIRA downtime as you mentioned above, do
> we think of some way to keep JIRA system highly available? It is a little
> embarrassin
What are you trying to log/analyse?
I assigned "do better logging for machine analysis" to myself a long time
ago, but never sat down to do any of it -yet-
https://issues.apache.org/jira/browse/HADOOP-7466
On 23 August 2012 20:44, Li Shengmei wrote:
> Hi, all
>
> I want to do some
On 29 August 2012 10:36, Li Shengmei wrote:
> Hi, Steven,
> Thank you very much. I have just started to study the log analysis of
> Hadoop. I will review your work carefully.
> Btw, which part of the Hadoop source code should be read carefully if I
> want to get to know the log records?
>
There's lot
On 5 October 2012 18:27, Thilee Subramaniam wrote:
> We at Quantcast have released QFS 1.0 (Quantcast File System) to open
> source. This is based on the KFS 0.5 (Kosmos Distributed File System),
> a C++ distributed filesystem implementation. KFS plugs into Apache
> Hadoop via the 'kfs' shim that
On 10 October 2012 16:03, Thilee Subramaniam wrote:
> Hi Steve,
>
> Like Harsh said, HADOOP-8886 addresses removing KFS from apache tree.
>
> But I interpret your suggestion as 'moving qfs.jar out of apache tree, and
> keeping the jar in possibly a maven repo externally. The new fs shim for
> QFS
> Good point Steve. This touches on the larger issue of whether it
> makes sense to host FS clients for other file systems in Hadoop
> itself. I agree with what I think you're getting which is - if we can
> handle the testing and integration via external dependencies it would
> probably be better
On 11 October 2012 00:34, Thilee Subramaniam wrote:
>
>
> My initial goal was to make Hadoop use QFS the same way it used KFS. Since
> Hadoop branch-1 had lib/kfs.xx.jar, I was expecting to include a
> qfs.x.x.jar in the Hadoop release; my first patch was to use such jar. But
> now I see that Had
+1, only creates confusion
On 15 October 2012 19:00, Eli Collins wrote:
> Hey Bobby,
>
> That's correct, I mean the "packages" directories in common, hdfs, and
> MR top-level directories, which contain the debs and RPMs. I'm not
> opposed to someone re-working/contributing new code as long as t
On 26 October 2012 01:24, Thilee Subramaniam wrote:
>
> We have made the changes recommended here, and made available a 'Hadoop
> QFS jar' with QFS. This plugin and the QFS libraries will be maintained &
> released by the QFS open-source project.
>
> Please see the download and usage instructions
letes. It allows other operations when the
> deletion is in progress. (umamahesh via suresh)
>
> HDFS-4134. hadoop namenode and datanode entry points should return
> negative exit code on bad arguments. (Steve Loughran via suresh)
>
> MAPREDUCE-4782. NLineInputFormat skips first line of last InputSplit
> (Mark Fuhs via bobby)
>
don't worry about the patch-fail message on Jenkins, as it currently only
tests patches against trunk
The standard development process for me is
-use Git, with git.apache.org as the read-only repository
-a branch for each JIRA issue, plus trunk & branch-1 for the ASF versions
e.g. https://github.com/steveloughran/h
On 22 November 2012 02:40, Chris Nauroth wrote:
>
> It seems like the trickiest issue is preservation of permissions and
> symlinks in tar files. I suspect that any JVM-based solution like custom
> Maven plugins, Groovy, or jtar would be limited in this respect. According
> to Ant documentation
On 21 November 2012 19:15, Matt Foley wrote:
> This discussion started in
>
>
> Those of us involved in the branch-1-win port of Hadoop to Windows without
> use of Cygwin, have faced the issue of frequent use of shell scripts
> throughout the system, both in build time (eg, the utility
> "saveVer
On 21 November 2012 15:03, Radim Kolar wrote:
> what it takes to gain commit access to hadoop?
>
good question.
I've put some of my thoughts on the topic into a presentation I gave last
month:
http://www.slideshare.net/steve_l/inside-hadoopdev
That isn't so much about commit/non-commit status
On 24 November 2012 20:13, Matt Foley wrote:
> For discussion, please see previous thread "[PROPOSAL] introduce Python as
> build-time and run-time dependency for Hadoop and throughout Hadoop stack".
>
> This vote consists of three separate items:
>
> 1. Contributors shall be allowed to use Pytho
On 20 November 2012 22:07, Matt Foley wrote:
> Hello,
> Hadoop-1.1.1-rc0 is now available for evaluation and vote:
> http://people.apache.org/~mattf/hadoop-1.1.1-rc0/
> or in the Nexus repository.
>
> The release notes are available at
> http://people.apache.org/~mattf/hadoop-1.1.1-rc0/re
On 26 November 2012 21:25, Radim Kolar wrote:
>
> The main "feature" is that when you get the +1 vote you yourself get to
>> deal with the grunge work of applying
>> patches to one or more svn branches, resyncing that with the git branches
>> you inevitably do your own work on.
>>
> no, main featu
On 30 November 2012 00:29, Radim Kolar wrote:
>
> * What else in the current build, besides saveVersion.sh, you see as
> candidate to be migrated to Phyton?
>
> inline ant scripts
>
=0. Ant's versioning is stricter; you can pull down the exact Jar versions,
and some of us in the Ant team worked
On 30 November 2012 12:57, Luke Lu wrote:
> I'd like to change my binding vote to -1, -0, -1.
>
> Considering the hadoop stack/ecosystem as a whole, I think the best cross
> platform scripting language to adopt is jruby for following reasons:
>
> 1. HBase already adopted jruby for HBase shell, wh
On 1 December 2012 01:08, Eli Collins wrote:
> -1, 0, -1
>
> IIUC the only platform we plan to add support for that we can't easily
> support today (w/o an emulation layer like cygwin) is Windows, and it
> seems like making the bash scripts simpler and having parallel bat
> files is IMO a better
On 30 November 2012 13:40, Radim Kolar wrote:
>
> inline ant scripts
>>>
>>> =0. Ant's versioning is stricter; you can pull down the exact Jar
>>> versions,
>>> and some of us in the Ant team worked very hard to get it going
>>> everywhere.
>>> You don't gain anything by going to .py
>>>
>> ther
The RPMs are being built with bigtop;
grab it from here
https://github.com/apache/bigtop/tree/master/bigtop-packages/src/rpm/hadoop
I'm not sure which branch to use for hadoop-1.1.1; let me check that
On 4 December 2012 17:24, Michael Johnson wrote:
> Hello All,
>
> I've browsed the common-dev
On 4 December 2012 18:51, Michael Johnson wrote:
> On 12/04/2012 12:54 PM, Harsh J wrote:
>
>> The right branch is branch-0.3 for Bigtop. You can get more
>> information upstream at Apache Bigtop itself
>> (http://bigtop.apache.org).
>>
>> Branch 0.3 of the same URL Steve posted:
>> https://githu
The swiftfs tests only need to run if there's a target filesystem; copying
the s3/s3n tests, something like
test.fs.swift.name
swift://your-object-store-here/
How does one actually go about making JUnit tests optional in mvn-land?
Should the probe/skip logic be in the code -which c
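One way to do the probe/skip in code -a sketch only, assuming the property name above and with an invented class name- is JUnit's Assume, which reports the tests as skipped rather than failed when no target filesystem is configured:

  import static org.junit.Assert.assertTrue;
  import static org.junit.Assume.assumeTrue;

  import java.net.URI;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.junit.Before;
  import org.junit.Test;

  // Hypothetical test class; skips itself when test.fs.swift.name is unset.
  public class TestSwiftFileSystemContract {

    private FileSystem fs;

    @Before
    public void setUp() throws Exception {
      Configuration conf = new Configuration();
      String fsName = conf.get("test.fs.swift.name");
      // An assumption failure marks the test as skipped, not failed.
      assumeTrue(fsName != null && !fsName.isEmpty());
      fs = FileSystem.get(URI.create(fsName), conf);
    }

    @Test
    public void testMkdirs() throws Exception {
      // trivial example operation against the remote object store
      assertTrue(fs.mkdirs(new Path("/test/dir")));
    }
  }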
ke
> AIX, etc. It seems to be a good tradeoff so far. I imagine that s3
> could do something similar.
>
> cheers,
> Colin
>
>
> On Fri, Dec 14, 2012 at 9:56 AM, Steve Loughran
> wrote:
> > The swiftfs tests need only to run if there's a target filesystem;
>
e not run
That's overkill for adding a few more OpenStack tests -but I would like to
make it easier to turn those and the Rackspace ones on without sticking my
secrets into an XML file under SCM
> Tom
>
> On Mon, Dec 17, 2012 at 10:06 AM, Steve Loughran
> wrote:
> > thanks, I&
On 18 December 2012 09:11, Colin McCabe wrote:
> On Tue, Dec 18, 2012 at 1:05 AM, Colin McCabe
> wrote:
>
> >
> >> another tactic could be to have specific test projects: test-s3,
> >> test-openstack, test-... which contain nothing but test cases. You'd set
> >> jenkins up those test projects to
On 18 December 2012 09:05, Colin McCabe wrote:
>
> I think the way to go is to have one XML file include another.
>
>
>
> <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
>   <property>
>     <name>boring.config.1</name>
>     <value>boring-value</value>
>   </property>
>   ... etc, etc...
>   <xi:include href="..." />
> </configuration>
> That way, you can keep the boring configuration under v
On 18 December 2012 15:00, Simone Leo wrote:
> it looks like I don't have write permissions on the Hadoop wiki anymore
> (account: SimoneLeo).
I've given you access, ping me if it doesn't work
-steve
On 17 December 2012 16:06, Tom White wrote:
> There are some tests like the S3 tests that end with "Test" (e.g.
> Jets3tNativeS3FileSystemContractTest) - unlike normal tests which
> start with "Test". Only those that start with "Test" are run
> automatically (see the surefire configuration in
> h
On 29 December 2012 23:32, Niels Basjes wrote:
> Hi,
>
> I've had this 'itch' with Hadoop that it is hard to sort the counters in a
> "nice" way.
> Now the current trunk sorts the framework counters in such a way that they
> follow the flow quite nicely. For the generic counters (i.e. user code
>
Can someone look at HADOOP-9119 for me? It just adds more tests to the
filesystem contract test base to assert that overwritten and
deleted-then-recreated files have the semantics we expect -and diagnostics
when the FS fails the tests. S3, SwiftFS and others benefit from these tests
https://issues
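To give a flavour of the assertions involved -this is a paraphrase, not the actual HADOOP-9119 patch; the class and helper names are invented:

  import static org.junit.Assert.assertEquals;
  import static org.junit.Assert.assertTrue;

  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class OverwriteSemanticsCheck {

    /** Write a single byte to a path, overwriting anything already there. */
    private static void writeByte(FileSystem fs, Path path, int b) throws Exception {
      FSDataOutputStream out = fs.create(path, true);  // overwrite = true
      out.write(b);
      out.close();
    }

    private static int readByte(FileSystem fs, Path path) throws Exception {
      FSDataInputStream in = fs.open(path);
      try {
        return in.read();
      } finally {
        in.close();
      }
    }

    /** Overwritten files must return the new data, not a stale copy. */
    public static void checkOverwrite(FileSystem fs, Path path) throws Exception {
      writeByte(fs, path, 1);
      writeByte(fs, path, 2);
      assertEquals("overwritten file returned stale data", 2, readByte(fs, path));
    }

    /** A file deleted and then recreated must exist and hold the new data. */
    public static void checkDeleteThenRecreate(FileSystem fs, Path path) throws Exception {
      writeByte(fs, path, 1);
      assertTrue(fs.delete(path, false));
      writeByte(fs, path, 3);
      assertTrue("recreated file is missing", fs.exists(path));
      assertEquals(3, readByte(fs, path));
    }
  }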
thanks, I'll deal with them
On 2 January 2013 17:48, Suresh Srinivas wrote:
> I posted some comments on the jira.
>
>
> On Wed, Jan 2, 2013 at 4:31 AM, Steve Loughran >wrote:
>
> > Can someone look at HADOOP-9119 for me? It just adds more tests to the
> &g
On 7 January 2013 15:57, Niels Basjes wrote:
> Hi Steve,
>
> > Now for submitting changes for Hadoop: Is it desirable that I fix these
> in
> > > my change set or should I leave these as-is to avoid "obfuscating" the
> > > changes that are relevant to the Jira at hand?
> > >
> >
> > I recommend a
My setup ( I work from home)
# OS/X laptop w/ 30" monitor
# FTTC broadband, 55Mbit/s down, 15+ up -it's the upload bandwidth that
really helps development: http://www.flickr.com/photos/steve_l/8050751551/
# IntelliJ IDEA IDE, settings edited for a 2GB Heap
# Maven on the command line for builds
#
I've been working on some stricter contract tests for filesystems, and have
some changes for S3 and S3N that we've been working through. While this is
mostly minor, rename(dir,dir/subdir) can lose all data in the source
directory
https://issues.apache.org/jira/browse/HADOOP-9261
I think that shou
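A sketch of the check in question (not the actual patch; names invented): renaming a directory under itself should be refused, and whatever the filesystem reports, the source data must not disappear.

  import static org.junit.Assert.assertFalse;
  import static org.junit.Assert.assertTrue;

  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class RenameIntoSelfCheck {

    /** rename(dir, dir/subdir) must not destroy the source directory's contents. */
    public static void checkRenameIntoOwnSubdir(FileSystem fs) throws Exception {
      Path dir = new Path("/test/parent");
      Path child = new Path(dir, "file");
      fs.mkdirs(dir);
      fs.create(child, true).close();

      Path dest = new Path(dir, "subdir");
      boolean renamed = fs.rename(dir, dest);   // expected to be refused

      // Whether or not the FS claims the rename happened, the original
      // file must still be reachable somewhere.
      assertTrue("source data lost after rename(dir, dir/subdir)",
          fs.exists(child) || fs.exists(new Path(dest, "file")));
      assertFalse("rename into own subdirectory should have been rejected", renamed);
    }
  }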
disclaimer, personal opinions only, I just can't be bothered to subscribe
with @apache.org right now.
On 4 February 2013 14:36, Todd Lipcon wrote:
> - Quality/completeness: for example, missing docs, buggy UIs, difficult
> setup/install, etc
>
par for the course. Have you ever used Linux?
> -
I've got two homeless projects looking to get into Hadoop
1. the branch-1 HA canary monitor, which can also monitor arbitrary
services with an HTTP port, including declared dependencies on HDFS being
live (no timeouts reporting to vsphere or Linux HA while HDFS is offline or
in safe m
thanks, just seen and commented on this.
If we're going to have test timeouts, we need a good recommended default
value for all tests except the extra-slow ones.
Or
1. we just use our own JUnit fork that sets up a better default value than
0. I don't know how Ken would react to that.
2. we get ma
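For example, a shared base class carrying a JUnit @Rule would give every test that extends it the same non-zero default; the 180-second figure is only a placeholder, not an agreed value:

  import org.junit.Rule;
  import org.junit.rules.Timeout;

  /** Base class tests could extend to pick up a non-zero default timeout. */
  public abstract class HadoopTestBase {

    // Applies to each test method in subclasses; extra-slow suites can assign
    // a longer Timeout to this field in their own constructor.
    @Rule
    public Timeout testTimeout = new Timeout(180 * 1000);
  }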
On 2 March 2013 03:33, Konstantin Boudnik wrote:
>
> Windows is so different from _any_ Unix or pseudo-Unix flavors, including
> Windows with Cygwin - that even multi-platform Java has hard hard time
> dealing with it. This is enough, IMO, to warrant a separate checkpoint.
>
>
Cygwin is the worst
On 6 March 2013 23:17, Matt Foley wrote:
> Hi, I got stuck in other work and did not make the Hadoop 1.2 branch in
> February.
> Now that release 1.1.2 is out, I'm ready to make the 1.2 branch.
>
> I intend to branch for 1.2 in the next night or two, and at that point will
> make the formal propo
l just have to commit to
> > both branch-1 and branch-1.2.
> >
> > Thanks,
> > --Matt
> >
> >
> > On Thu, Mar 7, 2013 at 1:29 AM, Steve Loughran > >wrote:
> >
> > > On 6 March 2013 23:17, Matt Foley wrote:
> > >
> > >
On 10 March 2013 18:32, Suresh Srinivas wrote:
> Steve, is there a timeline all these changes will be ready? If it is not
> going to be
> ready soon, perhaps these changes could be considered for 1.3?
>
> Suresh, I've put everything into
https://issues.apache.org/jira/browse/HADOOP-9258 - the tests
On 11 March 2013 03:38, Matt Foley wrote:
> Hi all,
> I have created branch-1.2 from branch-1, and propose to cut the first
> release candidate for 1.2.0 on Monday 3/18 (a week from tomorrow), or as
> soon thereafter as I can achieve a stable build.
>
> Between 1.1.2 and the current 1.2.0, there
On 13 March 2013 16:31, Thomas Graves wrote:
> Hello all,
>
> I think enough critical bug fixes have gone into branch-0.23 to warrant
> another release. I plan on creating a 0.23.7 release by the end of March.
>
> Please vote '+1' to approve this plan. Voting will close on Wednesday
> 3/20 at 10
On 14 March 2013 17:06, Vikas Jadhav wrote:
> For the first build, Ant downloads jars from the internet.
> How do I build offline using Ant?
>
> --
Ant needs all the dependencies for that first build. Once it is done, it will
not need to download them again, as they are cached on your disk somewhere
On 15 March 2013 09:18, springring wrote:
> Hi,
>
> my hadoop version is Hadoop 0.20.2-cdh3u3 and I want to define new
> InputFormat in hadoop book , but there is error
> "class org.apache.hadoop.streaming.WholeFileInputFormat not
> org.apache.hadoop.mapred.InputFormat"
>
> Hadoop version i
have you considered joining the user@hadoop.apache.org list and asking the
question there?
On 1 April 2013 17:38, Vikas Jadhav wrote:
> Hi
>
> I want to process/store all data pertaining to one reducer.
>
> I want to store it in some data structure depending on the key, for example
>
> (0,ABC)
> (0,TER)
> (1,D
On 3 April 2013 15:46, Chandrashekhar Kotekar wrote:
> Thanks a lot for your help.
>
> You were right. Problem was with Protoc version 1.5 only. I downloaded and
> added protoc 1.4 version and now that error is gone. However now I am stuck
> at this new error. Now maven is not able to find "msbuil
On 8 April 2013 16:08, Mohammad Mustaqeem <3m.mustaq...@gmail.com> wrote:
> Please tell me what I am doing wrong.
> What's the problem?
>
A lot of these seem to be network-related tests. You can turn off all the
tests; look in BUILDING.txt at the root of the source tree for the various
operations,
On 18 April 2013 18:32, Noelle Jakusz (c) wrote:
> +1
>
> There are quite a few new people, so maybe start a collaborative group
> where you can collect notes and steps (videos and articles). I know I would
> have some for you that I have created as I have gotten started... it would
> be a great
On 19 April 2013 23:08, Noelle Jakusz (c) wrote:
>
>
> I have created an account (noellejakusz) and would like write access to
> help with this...
>
>
OK, you have write access
On 22 April 2013 14:00, Karthik Kambatla wrote:
> Hadoop devs,
>
>
> This doc does not intend to propose new policies. The idea is to have one
> document that outlines the various compatibility concerns (lots of areas
> beyond API compatibility), captures the respective policies that exist, and
>
On 22 April 2013 18:32, Eli Collins wrote:
> On Mon, Apr 22, 2013 at 5:42 PM, Steve Loughran
> wrote:
>
> >
> > There's a separate issue that says "we make some guarantee that the
> > behaviour of a interface remains consistent over versions", whi
On 23 April 2013 09:13, Chris Smith wrote:
> And there is another scheduler, Dynamic Priority Scheduling, lurking in the
> backwater of 0.21.0 that allows users to 'bid' for additional time.
> Getting this back into current 1.x may be a great way to understand about
> scheduling:
>
>
> http://svn
On 23 April 2013 09:00, Steve Loughran wrote:
>
>
> On 22 April 2013 18:32, Eli Collins wrote:
>
>>
>>
>
>> However if a change made FileSystem#close three times slower, this
>> perhaps a smaller semantic change (eg doesn't change what exceptions
>
On 23 April 2013 11:32, Andrew Purtell wrote:
> At the risk of hijacking this conversation a bit, what do you think of the
> notion of moving interfaces like Seekable and PositionedReadable into a new
> foundational Maven module, perhaps just for such interfaces that define and
> tag support for
On 23 April 2013 17:25, Roman Shaposhnik wrote:
> Hi!
>
> Now that Hadoop 2.0.4-alpha is released I'd like
> to open up a discussion on what practical steps
> would it take for us as a community to get
> Hadoop 2.X from alpha to beta?
>
> There's quite a few preconditions to be met for a piece
you need those patches to remove the Sun-specific bits, don't you?
On 25 April 2013 19:23, Amir Sanjar wrote:
> Arun, thanks for the update. This is indeed the news we (IBM) have been
> waiting for. Please let us know if there is anyway
> we can help.
>
> Best Regards
> Amir Sanjar
>
> System Man
On 29 April 2013 14:20, Amit Sela wrote:
> Thanks for the reply Chris!
>
> I'm actually running on Fedora 17... I went ahead and changed to the
> forrest & findbugs versions you recommended (the log file had an issue with
> apache-forrest-0.8), and now when I look at the log and see a bunch of
>
lead
> IBM Senior Software Engineer
> Phone# 512-286-8393
> Fax# 512-838-8858
>
>
>
On 1 May 2013 06:33, Thoihen Maibam wrote:
> Hi All,
>
> Can somebody help me: after creating a patch, what do I need to do? I have
> seen one subject 'How to test the patch' but that isn't really helping me.
> I am stuck in the below areas.
>
> dev-support/test-patch can test your patch
or you can c
On 8 May 2013 21:20, wrote:
> Hi Harsh,
>
> Thanks for responding,
>
> I would be interested in what the dev group had in mind for this and I
> also have a couple of additional queries ;
>
> I can see that a quick win for this would be to expose the existing Jetty
> statistics metrics within the
On 9 May 2013 20:39, wrote:
>
>> Unless there are existing bits of this stuff lurking somewhere in the
>> Hadoop codebase that I haven't noticed, these could be copied into hadoop
>> core. Reviewing the code as it is would be welcome
>>
>> https://github.com/steveloughran/hadoop-trunk/tre
On 15 May 2013 10:57, Arun C Murthy wrote:
> Folks,
>
> A considerable number of people have expressed confusion regarding the
> recent vote on 2.0.5, beta status etc. given lack of specifics, the voting
> itself (validity of the vote itself, whose votes are binding) etc.
>
> IMHO technical argum
On 15 May 2013 15:02, Arun C Murthy wrote:
> Roman,
>
> Furthermore, before we rush into finding flaws and scaring kids at night
> it would be useful to remember one thing:
> Software has *bugs*. We can't block any release till the entire universe
> validates it, in fact they won't validate it if
On 15 May 2013 23:19, Konstantin Boudnik wrote:
> Guys,
>
> I guess what you're missing is that Bigtop isn't a testing framework for
> Hadoop. It is stack framework that verifies that components are dealing
> with
> each other nicely.
which to me means "Some form of integration test"
> Every
On 21 May 2013 23:47, Jagane Sundar wrote:
> I see one significant benefit to having Release Plan votes: Fewer releases
> with more members of the community working on any given release.
> In turn, fewer Hadoop releases implies less confusion for end users
> attempting to download and use an Apac
+1 (committer vote; not sure if it is binding on this or not)
es,
throttling of side-effecting operations, such as a recursive delete of a very
large directory. That remote testing, therefore, helps me find such pains
before they hit the field.
> You might also try asking Steve Loughran, since he did some great work
> recently to try to nail down the exact sem
> the operations within the abstract FileSystem class
> are a little ambiguous. With that said, we've joined Steve Loughran in
> attempting to clarify these for both the Hadoop 1.0 and the Hadoop 2.0
> FileSystem class over at https://issues.apache.org/jira/browse/HADOOP-9371
>
>
congratulations! I propose you should celebrate your new rights by
reviewing some of my outstanding patches, such as
https://issues.apache.org/jira/browse/HADOOP-8545
On 28 May 2013 23:07, Aaron T. Myers wrote:
> On behalf of the Apache Hadoop PMC, I'd like to announce the addition of a
> few n
It's up as https://issues.apache.org/jira/browse/HDFS-4866
On 29 May 2013 21:53, Arpit Agarwal wrote:
> Ralph, could you please file a Jira? We'll fix it.
>
> -Arpit
>
> On Wed, May 29, 2013 at 9:39 AM, Ralph Castain wrote:
>
> > Hi folks
> >
> > On line 228 of
> > hadoop-hdfs-project/hadoop-hd
On 30 May 2013 22:14, Adam Kawa wrote:
> Hi,
>
> When uploading new content (and information about my company), I got the
> exception
> "Sorry, can not save page because "rubbelloselotto.de" is not allowed in
> this wiki."
>
> How could I solve it?
>
> Kind regards,
> Adam
>
the word lotto may b
> For context, here is a closely related JIRA
>
>
> https://issues.apache.org/jira/browse/HADOOP-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
>
>
> On Fri, May 31, 2013 at 6:09 AM, Steve Loughran >wrote:
>
> > Jay, this is is more a commo
a swift service, and how to test
against them. If anyone does want to run the test themselves, try the docs
and then escalate to us if there are any problems.
Steve Loughran, Dmitry Mezhenskiy (Mirantis) & David Dobbins (Rackspace)
thanks
On 3 June 2013 17:20, Suresh Srinivas wrote:
> Steve, I will review this. Might need a couple of days though.
>
>
> On Mon, Jun 3, 2013 at 7:12 AM, Steve Loughran >wrote:
>
> > Hi,
> >
> > We've got the HADOOP-8545<
> > https://issues.a
Marcus Herou wrote:
Hi.
My 5 cents about svn:externals.
I could not live without it but... I always tend to forget to update our
svn:externals, and about once a month I wonder why I accidentally released
bleeding-edge code in our production environment *smile* (should've written
that auto-branching
Todd Lipcon wrote:
On Wed, Jul 1, 2009 at 2:10 PM, Philip Zeyliger wrote:
-1 to checking in jars. It's quite a bit of bloat in the repository (which
admittedly affects the git.apache folks more than the svn folks), but it's
also cumbersome to develop.
It'd be nice to have a one-liner that bu
Todd Lipcon wrote:
On Wed, Jul 1, 2009 at 10:10 PM, Raghu Angadi wrote:
-1 for committing the jar.
Most of the various options proposed sound certainly better.
Can build.xml be updated such that Ivy fetches recent (nightly) build?
This seems slightly better than actually committing the ja
Owen O'Malley wrote:
On Wed, Jul 1, 2009 at 6:45 PM, Todd Lipcon wrote:
Agree with Phillip here. Requiring a new jar to be checked in anywhere after
every common commit seems unscalable and nonperformant. For git users this
will make the repository size balloon like crazy (the jar is 400KB and we
jayavardhan p wrote:
Hi,
This is Jayavardhan. I'm new to Hadoop. I want to develop a project on
cloud computing. Can you suggest a proper way to develop this?
Do you mean a university/student project?
Jonathan Seidman wrote:
Thanks for the replies. We'll create a patch for trunk and then include a
0.18 compatible patch with the Jira, as you suggest.
Based on other contributed FileSystem implementations, we were assuming this
should go in o.a.h.fs and not contrib, so thanks for the clarificat
+1
I plan to get my lifecycle patch up in sync with the forked code this
week, passing tests, documentation in sync, etc, and it will be ready
for review
Hrishikesh Mantri wrote:
Hi All.
I am a Masters student in CS. We are a group of two and are looking to add some additional features
to HDFS as part of a Distributed Computing course project. Can someone please provide us with pointers
as to which direction we should take so that i
Raghu Angadi wrote:
A heartBeat is also an RPC. When you pause Namenode for 30 sec the
datanode's heartbeat thread just waits for 30 sec for its heartbeat RPC
to return. Note that when you pause Namenode, the RPCs to it don't fail
immediately. During this wait, DNs can perform other transacti
Dhruba Borthakur wrote:
It is really nice to have wire-compatibility between clients and servers
running different versions of hadoop. The reason we would like this is
because we can allow the same client (Hive, etc) submit jobs to two
different clusters running different versions of hadoop. But
Rekha Joshi wrote:
Hi Sonal,
AFAIK, this exception mostly has no impact other than to indicate object
creation. I agree the Hadoop Configuration class can be improved to handle it.
if (LOG.isDebugEnabled()) {
  LOG.debug(StringUtils.stringifyException(new IOException("config
Kay Kay wrote:
Start with hadoop-common to start building.
hadoop-hdfs / hadoop-mapred pull their dependencies from the Apache snapshot
repository that contains the nightlies of the last successful builds, so in
theory all 3 could be built independently because the respective
snapshots are present.