The "git" way of doing things would be to rebase the feature branch on
master (trunk) and then commit the patch stack.
Squashing the entire feature into a 10 MB megapatch is the "svn" way of
doing things.
The svn workflow evolved because merging feature branches back to trunk
was really painful.
On Mon, Aug 28, 2017, at 14:22, Allen Wittenauer wrote:
>
> > On Aug 28, 2017, at 12:41 PM, Jason Lowe wrote:
> >
> > I think this gets back to the "if it's worth committing" part.
>
> This brings us back to my original question:
>
> "Doesn't this place an undue burden on the contr
On Mon, Aug 28, 2017, at 09:58, Allen Wittenauer wrote:
>
> > On Aug 25, 2017, at 1:23 PM, Jason Lowe wrote:
> >
> > Allen Wittenauer wrote:
> >
> > > Doesn't this place an undue burden on the contributor with the first
> > > incompatible patch to prove worthiness? What happens if it is deci
One anti-pattern that keeps coming up over and over again is people
trying to do big and complex features without feature branches. This
happened with HDFS truncate as well. This inevitably leads to
controversy because people see very big and invasive patches zooming
past and get alarmed.
I think the Tomcat situation is concerning in a lot of ways.
1. We are downloading without authentication, using http rather than
https.
2. We are downloading an obsolete release.
3. Our build process is violating the archive.apache.org guidelines by
downloading from the site directly, rather than
Hi all,
Recently a discussion came up on HADOOP-13028 about the wisdom of
overloading S3AInputStream#toString to output statistics information.
It's a difficult judgement for me to make, since I'm not aware of any
compatibility guidelines for InputStream#toString. Do we have
compatibility guidelines?
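For concreteness, the pattern under discussion looks roughly like the sketch below. This is illustrative only, not the actual S3AInputStream code; the class and field names are made up.

public class StatsInputStream extends java.io.InputStream {
    private long bytesRead;

    @Override
    public int read() throws java.io.IOException {
        bytesRead++;   // update statistics on every read
        return -1;     // stub; a real stream would return data
    }

    // The question above: is this output a stable, compatible API?
    // Anything that parses it will break when the format changes.
    @Override
    public String toString() {
        return "StatsInputStream{bytesRead=" + bytesRead + "}";
    }
}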
Thanks for explaining, Chris. I generally agree that
UserGroupInformation should be annotated as Public rather than
LimitedPrivate, although you guys have more context than I do.
However, I do think it's important that we clarify that we can break
public APIs across a major version transition.
On Tue, May 10, 2016, at 11:34, Hitesh Shah wrote:
> There seems to be some incorrect assumptions on why the application had
> an issue. For rolling upgrade deployments, the application bundles the
> client-side jars that it was compiled against and uses them in its
> classpath and expects to be ab
Did INFRA have any information on this?
best,
On Fri, May 6, 2016, at 15:14, Allen Wittenauer wrote:
>
> Anyone know why?
>
+1 for updating this in trunk. Thanks, Tsuyoshi Ozawa.
cheers,
Colin
On Mon, May 9, 2016, at 12:12, Tsuyoshi Ozawa wrote:
> Hi developers,
>
> We’ve worked on upgrading jersey (HADOOP-9613) for years. It's an
> essential change to support compilation with JDK8. It’s almost there.
>
> One concer
(copy-pasting your
> current comments) on this issue to the JIRA: HADOOP-12893
>
> Thanks
> +Vinod
>
> > On Apr 7, 2016, at 7:43 AM, Sean Busbey wrote:
> >
> > On Wed, Apr 6, 2016 at 6:26 PM, Colin McCabe wrote:
> >
In general, the only bundled native component I can see is lz4. I guess
debatably we should add tree.h to the NOTICE file as well, since it came
from BSD and is licensed under that license.
Please keep in mind bundling means "included in the source tree", NOT
"downloaded during the build process.
>
> > https://issues.apache.org/jira/browse/INFRA-11597 has been filed for this.
> >
> > -Vinay
> >
> > -Original Message-
> > From: Colin McCabe [mailto:co...@cmccabe.xyz]
> > Sent: 05 April 2016 08:07
> > To: common-dev@hadoop.apache.org
Yes, please. Let's disable these mails.
C.
On Mon, Apr 4, 2016, at 06:21, Vinayakumar B wrote:
> bq. We don't spam common-dev about every time a new patch attachment
> gets posted
> to an existing JIRA. We shouldn't do that for github either.
>
> Is there any update on this?
> Any INFRA tick
> On 3/22/16, 11:03 PM, "Allen Wittenauer"
> wrote:
>
> >> On Mar 22, 2016, at 6:46 PM, Gangumalla, Uma
> >>wrote:
> >>
> >>> is it possible for me to setup a branch, self review+commit to that
> >>> branch, then request a branch merge?
> >> Basically this is something like Commit-Then-Review(h
If the underlying problem is lack of reviewers for these improvements,
how about a design doc giving some motivation for the improvements and
explaining how they'll be implemented? Then we can decide if a branch
or a few JIRAs on trunk makes more sense.
The description for HADOOP-12857 is just "l
On Mon, Sep 28, 2015 at 12:52 AM, Steve Loughran wrote:
>
> the jenkins machines are shared across multiple projects; cut the executors
> to 1/node and then everyone's performance drops, including the time to
> complete of all jenkins patches, which is one of the goals.
Hi Steve,
Just to be cl
+1, would be great to see Hadoop get ipv6 support.
Colin
On Mon, Aug 17, 2015 at 5:04 PM, Elliott Clark wrote:
> Nate (nkedel) and I have been working on IPv6 on Hadoop and HBase lately.
> We're getting somewhere but there are a lot of different places that make
> assumptions about network. That
+1. Rebasing can really make the history much clearer when used correctly.
Colin
On Tue, Aug 18, 2015 at 2:57 PM, Andrew Wang wrote:
> Hi common-dev,
>
> Based on the prior [DISCUSS] thread, I've put together a new [VOTE]
> proposal which modifies the branch development practices codified by the
I think it might make sense to keep around a repository of third-party
open source native code that we use in Hadoop. Nothing fancy, just a
few .tar.gz files in a git repo that we manage. This would avoid
incidents like this in the future and ensure that we will be able to
build old versions of Hadoop.
>> > > Nope. I’m not particularly in the mood to write a book about a
>> > topic that I’ve beat to death in private conversations over the past 6
>> > months other than highlighting that any solution needs to be able to work
>> >
s ago with four active release branches
>> + trunk.
>> >
>> > On Mar 17, 2015, at 10:56 AM, Yongjun Zhang wrote:
>> >
>> >> Thanks Ravi and Colin for the feedback.
>> >>
>> >> Hi Allen,
>> >>
>> >> You pointed out that
Branch merges made it hard to access change history on subversion sometimes.
You can read the tale of woe here:
http://programmers.stackexchange.com/questions/206016/maintaining-svn-history-for-a-file-when-merge-is-done-from-the-dev-branch-to-tru
Excerpt:
"prior to Subversion 1.8. The files i
Is there a maven plugin or setting we can use to simply remove
directories that have no executable permissions on them? Clearly we
have the permission to do this from a technical point of view (since
we created the directories as the jenkins user), it's simply that the
code refuses to do it.
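In the meantime, the cleanup itself is only a few lines of plain Java that could run from a small build helper. A sketch; the class name and entry point are hypothetical:

import java.io.File;

public class ForceClean {
    // Restore the executable bit so the directory can be traversed,
    // then delete it and everything underneath.
    public static void deleteRecursive(File dir) {
        dir.setExecutable(true);   // a dir without +x can't be listed
        File[] children = dir.listFiles();
        if (children != null) {
            for (File f : children) {
                if (f.isDirectory()) {
                    deleteRecursive(f);
                } else {
                    f.delete();
                }
            }
        }
        dir.delete();
    }
}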
+1 for starting to think about releasing 2.7 soon.
Re: building Windows binaries. Do we release binaries for all the
Linux and UNIX architectures? I thought we didn't. It seems a little
inconsistent to release binaries just for Windows, but not for those
other architectures and OSes. I wonder
There are two kinds of native code.
We've been roughly following the Linux Kernel Coding style in the C
code. Details here:
https://www.kernel.org/doc/Documentation/CodingStyle
The main exception is that we use 4 spaces for indentation, not hard tabs.
For C++, there was a thread a while back abo
Good find. I filed HADOOP-11505 to fix the incorrect usage of
unoptimized code on x86 and the incorrect bswap on alternative
architectures.
Let's address the fmemcmp stuff in a separate jira.
best,
Colin
On Thu, Jan 22, 2015 at 11:34 AM, Edward Nevill
wrote:
> On 21 January 2015 at 11:42, Edw
Why not just use LocalFileSystem with an NFS mount (or several)? I read
through the README but I didn't see that question answered anywhere.
best,
Colin
On Tue, Jan 13, 2015 at 1:35 PM, Gokul Soundararajan wrote:
> Hi,
>
> We (Jingxin Feng, Xing Lin, and I) have been working on providing a
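For reference, the alternative I mean is only a few lines. A minimal sketch; the NFS mount path /mnt/nfs1 is hypothetical:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NfsViaLocalFs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // file:// URIs resolve to the local filesystem, so an NFS
        // mount point works like any other local directory.
        FileSystem fs = FileSystem.get(URI.create("file:///mnt/nfs1"), conf);
        System.out.println(fs.exists(new Path("/mnt/nfs1/data")));
    }
}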
>>>> code). In Java this is pretty standard, but I couldn't find any
>>>> implementation for C code.
>>>>
>>>> Here is the terror function:
>>>>
>>>> const char* terror(int errnum)
>>>> {
>>>>     /* reconstructed: returns a static string from an array (see below) */
>>>>     if ((errnum < 0) || (errnum >= sys_nerr)) return "unknown error.";
>>>>     return sys_errlist[errnum];
>>>> }
>>>>>> Currently, terror just returns a static string from an array. This is
>>>>>> fast, simple and error-proof.
>>>>>>
>>>>>> In order to use strerror_r inside terror, would require allocating a
>>>>>> buffer inside terror and
thought about it that
hard). I would just get the work done, and let it show up in the
release it's ready in.
cheers,
Colin
>
> Thanks,
> Malcolm
>
>
> On 12/10/2014 10:45 AM, Colin McCabe wrote:
>>
>> Hi Malcolm,
>>
>> In general we file JIRAs for p
est trunk, maybe some fixes aren't needed any more.
>
> I have generated a single patch file with all changes. Perhaps it would be
> better to file multiple JIRAs for each change, perhaps grouped, one per
> issue ? Or should I file a JIRA for each modified source file ?
>
>
On Mon, Dec 8, 2014 at 7:46 AM, Steve Loughran wrote:
> On 8 December 2014 at 14:58, Ted Yu wrote:
>
>> Looks like there was still OutOfMemoryError :
>>
>>
>> https://builds.apache.org/job/Hadoop-Hdfs-trunk/1964/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapsh
Hi Malcolm,
It's great that you are going to contribute! Please make your patches
against trunk.
2.2 is fairly old at this point. It hasn't been the focus of
development in more than a year.
We don't use github or pull requests.
Check the section on http://wiki.apache.org/hadoop/HowToContribute
On Fri, Dec 5, 2014 at 11:15 AM, Karthik Kambatla wrote:
> It would be nice to cut the branch for the next "feature" release (not just
> Java 7) in the first week of January, so we can get the RC out by the end
> of the month?
>
> Yesterday, this came up in an offline discussion on ATS. Given peop
On Wed, Nov 26, 2014 at 2:58 PM, Karthik Kambatla
wrote:
> Yongjun, thanks for starting this thread. I personally like Steve's
> suggestions, but think two digits should be enough.
>
> I propose we limit the restrictions to versioning the patches with version
> numbers and .patch extension. Peopl
table, if
> it's for single sub-project, use the old timeout; otherwise, increase
> accordingly) so that we have at least this option when needed?
>
> Thanks.
>
> --Yongjun
>
>
> On Tue, Nov 25, 2014 at 2:28 AM, Steve Loughran
> wrote:
>
> > On 25 Nove
Multi-subproject patches used to work. If they don't work now, it is
probably a bug in test-patch.sh that we should fix. The code there is
written expecting multi-project changes, but maybe it doesn't get much of a
workout normally.
Conceptually, I think it's important to support patches that mo
I'm usually an advocate for getting rid of unnecessary dependencies
(cough, jetty, cough), but a lot of the things in Guava are really
useful.
Immutable collections, BiMap, Multisets, Arrays#asList, the stuff for
writing hashCode() and equals(), Joiner, the list goes on.
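A few of those in action; a sketch against a Guava 11-era API, with made-up values:

import com.google.common.base.Joiner;
import com.google.common.collect.HashBiMap;
import com.google.common.collect.HashMultiset;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Multiset;

public class GuavaExamples {
    public static void main(String[] args) {
        ImmutableList<String> dirs = ImmutableList.of("/a", "/b");
        HashBiMap<String, Integer> ids = HashBiMap.create();
        ids.put("dn1", 1);
        System.out.println(ids.inverse().get(1));      // -> dn1
        Multiset<String> bag = HashMultiset.create(dirs);
        System.out.println(bag.count("/a"));           // -> 1
        System.out.println(Joiner.on(",").join(dirs)); // -> /a,/b
    }
}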
On Thu, Oct 2, 2014 at 1:15 PM, Ted Yu wrote:
> On my Mac and on Linux, I was able to
> find /usr/include/openssl/opensslconf.h
>
> However the file is absent on Jenkins machine(s).
>
> Just want to make sure that the file is needed for native build before
> filing INFRA ticket.
opensslconf.h is
>> all the slaves are getting re-booted give it some more time
>>
>> -giri
>>
>> On Fri, Oct 3, 2014 at 1:13 PM, Ted Yu wrote:
>>
>>> Adding builds@
>>>
>>> On Fri, Oct 3, 2014 at 1:07 PM, Colin McCabe
>>> wrote:
It looks like builds are failing on the H9 host with "cannot access
java.lang.Runnable"
Example from
https://builds.apache.org/job/PreCommit-HDFS-Build/8313/artifact/patchprocess/trunkJavacWarnings.txt
:
[INFO]
[INFO] BUILD
On Wed, Oct 1, 2014 at 4:30 PM, John Smith wrote:
> hi developers.
>
> i have some native code working on solaris, but my changes use getgrouplist
> from openssh. is that ok? do i need to do anything special? is the
> license in file enough?
Is the license in which file enough?
Colin
>
> tha
Thanks, Steve.
Should we just put everything in patchprocess/ like before? It seems
like renaming this directory to PreCommit-HADOOP-Build-patchprocess/
or PreCommit-YARN-Build-patchprocess/ in various builds has created
problems, and not made things any more clear. What do you guys think?
Colin
On Mon, Sep 15, 2014 at 10:48 AM, Allen Wittenauer wrote:
>
> It’s now September. With the passage of time, I have a lot of doubts
> about this plan and where that trajectory takes us.
>
> * The list of changes that are already in branch-2 scare the crap out of any
> risk averse person
It's an issue with test-patch.sh. See
https://issues.apache.org/jira/browse/HADOOP-11084
best,
Colin
On Mon, Sep 8, 2014 at 3:38 PM, Andrew Wang wrote:
> We're still not seeing findbugs results show up on precommit runs. I see
> that we're archiving "../patchprocess/*", and Ted thinks that sinc
+1 for using git log instead of CHANGES.txt.
Colin
On Wed, Sep 3, 2014 at 11:07 AM, Chris Douglas wrote:
> On Tue, Sep 2, 2014 at 2:38 PM, Andrew Wang wrote:
>> Not to derail the conversation, but if CHANGES.txt is making backports more
>> annoying, why don't we get rid of it? It seems like we
Thanks for making this happen, Karthik and Daniel. Great job.
best,
Colin
On Tue, Aug 26, 2014 at 5:59 PM, Karthik Kambatla wrote:
> Yes, we have requested for force-push disabled on trunk and branch-*
> branches. I didn't test it though :P, it is not writable yet.
>
>
> On Tue, Aug 26, 2014 at
On Fri, Aug 15, 2014 at 8:50 AM, Aaron T. Myers wrote:
> Not necessarily opposed to switching logging frameworks, but I believe we
> can actually support async logging with today's logging system if we wanted
> to, e.g. as was done for the HDFS audit logger in this JIRA:
>
> https://issues.apache.
This mailing list is for questions about Apache Hadoop, not commercial
Hadoop distributions. Try asking a Hortonworks-specific mailing list.
best,
Colin
On Thu, Aug 14, 2014 at 3:23 PM, Niels Basjes wrote:
> Hi,
>
> In the core Hadoop you can on your (desktop) client have multiple clusters
> av
+1.
best,
Colin
On Fri, Aug 8, 2014 at 7:57 PM, Karthik Kambatla wrote:
> I have put together this proposal based on recent discussion on this topic.
>
> Please vote on the proposal. The vote runs for 7 days.
>
>1. Migrate from subversion to git for version control.
>2. Force-push to be
On Tue, Jul 29, 2014 at 2:45 AM, 俊平堵 wrote:
> Sun's Java code convention (published in 1997) suggests 80 columns per
> line for old-style terminals. It sounds pretty old. However, I saw some
> developers (not me :)) like to open multiple terminals in one screen for
> coding/debugging so 80-col
+1.
Colin
On Tue, Jul 22, 2014 at 2:54 PM, Karthik Kambatla
wrote:
> Hi devs
>
> As you might have noticed, we have several classes and methods in them that
> are not annotated at all. This is seldom intentional. Avoiding incompatible
> changes to all these classes can be considerable baggage.
A unit test failed. See my response on JIRA.
best,
Colin
On Tue, Jul 8, 2014 at 3:11 PM, Jay Vyas wrote:
> these appear to be java errors related to your jdk?
> maybe your JDK doesnt match up well with your OS.
> Consider trying red hat 6+ or Fedora 20?
>
>
> On Jul 8, 2014, at 5:45 AM, "mo
Thanks for working on this, Dmitry. It's good to see Hadoop support
another platform. The code changes look pretty minor, too.
best,
Colin
On Tue, Jul 8, 2014 at 7:08 AM, Dmitry Sivachenko wrote:
> Hello,
>
> I am trying to make hadoop usable on FreeBSD OS. Following Steve Loughran's
> sugge
Er, that should read "order in which it ran unit tests."
C.
On Fri, Jun 20, 2014 at 11:02 AM, Colin McCabe wrote:
> I think the important thing to do right now is to ensure our code
> works with jdk8. This is similar to the work we did last year to fix
> issues that c
't need be debated, they are quite minor. On the other hand, I
> would imagine discussion and debate on what 8+ language features might be
> useful to use at some future time could be a lively one.
>
>
>
> On Wed, Jun 18, 2014 at 3:03 PM, Colin McCabe
> wrote:
>
>>
anything on
>> > > > HADOOP-10530 or related until we agree on this.
>> > > > Thanks,
>> > > > Andrew
smaller version increments
>> in future, which branch-2 is -mostly- delivering.
>>
>> While Java 7 doesn't have some must-have features, Java 8 is a significant
>> improvement in the language, and we should be looking ahead to that, maybe
>> even doing some leading-edg
It's not always practical to edit the log4j.properties file. For one
thing, if you're using a management system, there may be many log4j
properties sprinkled around the system, and it could be difficult to figure
out which is the one you need to edit. For another, you may not (should
not?) have p
I think the bottom line here is that as long as our stable release
uses JDK6, there is going to be a very, very strong disincentive to
put any code which can't run on JDK6 into trunk.
Like I said earlier, the traditional reason for putting something in
trunk but not the stable release is that it n
I took a quick glance at the build output, and I don't think openssl
is getting linked statically into libhadooppipes.a.
I see the following lines:
Linking CXX static library libhadooppipes.a
/usr/bin/cmake -P CMakeFiles/hadooppipes.dir/cmake_clean_target.cmake
/usr/bin/cmake -E cmake_link_script
I've been using JDK7 for Hadoop development for a while now, and I
know a lot of other folks have as well. Correct me if I'm wrong, but
what we're talking about here is not "moving towards JDK7" but
"breaking compatibility with JDK6."
There are a lot of good reasons to ditch JDK6. It would let us
I think we need some way of isolating YARN, MR, and HDFS clients from
the Hadoop dependencies. Anything else just isn't sane... whatever we
may say, there will always be clients that rely on the dependencies
that we pull in, if we make those visible. I can't really blame
clients for this. It's s
+1 for making this guarantee explicit.
It also definitely seems like a good idea to test mixed versions in bigtop.
HDFS is not immune to "new client, old server" scenarios because the HDFS
client gets bundled into a lot of places.
Colin
On Mar 20, 2014 10:55 AM, "Chris Nauroth" wrote:
> Our us
There are a few existing portability JIRAs for libhadoop. Check out
HADOOP-7147, HADOOP-9934, HADOOP-6767, HADOOP-7824, and HDFS-5642.
best,
Colin
On Tue, Feb 18, 2014 at 2:14 AM, Malcolm wrote:
> I have started porting the native libraries of Hadoop 2.2.0 to Solaris and
> would like to eventua
Looks good.
+1, also non-binding.
I downloaded the source tarball, checked md5, built, ran some unit
tests, ran an HDFS cluster.
cheers,
Colin
On Tue, Feb 11, 2014 at 6:53 PM, Andrew Wang wrote:
> Thanks for putting this together Arun.
>
> +1 non-binding
>
> Downloaded source tarball
> Verifie
There is a maximum length for message buffers that was introduced by
HADOOP-9676. So messages with length 1752330339 should not be
accepted.
best,
Colin
On Sat, Dec 28, 2013 at 11:06 AM, Dhaivat Pandya
wrote:
> Hi,
>
> I've been working a lot with the Hadoop NameNode IPC protocol (while
> building
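A sketch of how to check the cap on the server side. The config key ipc.maximum.data.length and the 64 MB default are from memory and worth double-checking against the HADOOP-9676 patch:

import org.apache.hadoop.conf.Configuration;

public class IpcLimitCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Requests whose length prefix exceeds this are rejected before
        // any buffer is allocated, so a garbage length like 1752330339
        // can't cause a huge allocation.
        int max = conf.getInt("ipc.maximum.data.length", 64 * 1024 * 1024);
        System.out.println("RPCs larger than " + max + " bytes are rejected");
    }
}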
If 2.4 is released in January, I think it's very unlikely to include
symlinks. There is still a lot of work to be done before they're
usable. You can look at the progress on HADOOP-10019. For some of
the subtasks, it will require some community discussion before any
code can be written.
For bet
On Wed, Nov 13, 2013 at 10:10 AM, Arun C Murthy wrote:
>
> On Nov 12, 2013, at 1:54 PM, Todd Lipcon wrote:
>
>> On Mon, Nov 11, 2013 at 2:57 PM, Colin McCabe wrote:
>>
>>> To be honest, I'm not aware of anything in 2.2.1 that shouldn't be
>>> the
HADOOP-10020 is a JIRA that disables symlinks temporarily. They will
be disabled in 2.2.1 as well, if the plan is to have only minor fixes
in that branch.
To be honest, I'm not aware of anything in 2.2.1 that shouldn't be
there. However, I have only been following the HDFS and common side
of thi
>> > On Fri, Oct 18, 2013 at 1:37 PM, Chris Nauroth wrote:
>> >
>> > > +1
>> > >
>> > > Sounds great!
>> > >
>> > > Regarding testing caching+federation, thi
I don't see
> branch-2 mentioned, so I assume that we're not voting on merge to branch-2
> yet.
>
> Before I cast my vote, can you please discuss whether or not it's feasible
> to complete all of the above in the next 7 days? For the issues assigned
> to me, I do ex
+1. Thanks, guys.
best,
Colin
On Thu, Oct 17, 2013 at 3:01 PM, Andrew Wang wrote:
> Hello all,
>
> I'd like to call a vote to merge the HDFS-4949 branch (in-memory caching)
> to trunk. Colin McCabe and I have been hard at work the last 3.5 months
> implementing this feature
Log4j configuration is a difficult topic, because there's so many ways
to do it. I keep my log4j.properties file in the same directory as my
hadoop configuration XML files. One of the ways log4j finds its
log4j.properties file is by CLASSPATH, and the configuration directory
is always in the CLASSPATH.
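One quick way to see which log4j.properties actually wins on the CLASSPATH; a standalone debugging sketch, not Hadoop code:

public class FindLog4j {
    public static void main(String[] args) {
        java.net.URL url = Thread.currentThread().getContextClassLoader()
            .getResource("log4j.properties");
        System.out.println("log4j.properties loaded from: " + url);
    }
}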
I don't think HADOOP-9972 is a must-do for the next Apache release,
whatever version number it ends up having. It's just adding a new
API, not changing any existing ones, and it can be done entirely in
generic code. (The globber doesn't involve FileSystem or AFS
subclasses).
My understanding is
On Tue, Oct 1, 2013 at 8:59 PM, Arun C Murthy wrote:
> Yes, sorry if it wasn't clear.
>
> As others seem to agree, I think we'll be better getting a protocol/api
> stable GA done and then iterating on bugs etc.
>
> I'm not super worried about HADOOP-9984 since symlinks just made it to
> branch-2
What we're trying to get to here is a consensus on whether
FileSystem#listStatus and FileSystem#globStatus should return symlinks
__as_symlinks__. If 2.1-beta goes out with these semantics, I think
we are not going to be able to change them later. That is what will
happen in the "do nothing" scen
The issue is not modifying existing APIs. The issue is that code has
been written that makes assumptions that are incompatible with the
existence of things that are not files or directories. For example,
there is a lot of code out there that looks at FileStatus#isFile, and
if it returns false, assumes the path is a directory.
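An illustrative sketch of the problem, not a real Hadoop code path: a recursive walker is only correct once it handles the symlink case that older code omits.

import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class Walker {
    static void walk(FileSystem fs, Path p) throws IOException {
        for (FileStatus st : fs.listStatus(p)) {
            if (st.isFile()) {
                System.out.println("file: " + st.getPath());
            } else if (st.isSymlink()) {
                // Older code falls through to the "directory" branch here
                // and breaks once symlinks are returned as symlinks.
                System.out.println("symlink: " + st.getPath());
            } else {
                walk(fs, st.getPath());
            }
        }
    }
}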
I think it makes sense to finish symlinks support in the Hadoop 2 GA release.
Colin
On Mon, Sep 16, 2013 at 6:49 PM, Andrew Wang wrote:
> Hi all,
>
> I wanted to broadcast plans for putting the FileSystem symlinks work
> (HADOOP-8040) into branch-2.1 for the pending Hadoop 2 GA release. I think
On Wed, Aug 21, 2013 at 3:49 PM, Stack wrote:
> On Wed, Aug 21, 2013 at 1:25 PM, Colin McCabe wrote:
>
>> St.Ack wrote:
>>
>> > + Once I figured where the logs were, found that JAVA_HOME was not being
>> > exported (don't need this in hadoop-2.0.5 for inst
St.Ack wrote:
> + Once I figured where the logs were, found that JAVA_HOME was not being
> exported (don't need this in hadoop-2.0.5 for instance). Adding an
> exported JAVA_HOME to my running shell which don't seem right but it took
> care of it (I gave up pretty quick on messing w/
> yarn.nodem
http://stackoverflow.com/questions/170961/whats-the-best-crlf-handling-strategy-with-git
Regardless of what we do or don't do in git, we should have the line
endings correct in subversion.
cheers.
Colin
s as LF, and converted to CRLF as needed. After all,
eol-style=native would not be very useful if it only applied on
checkout. Windows users would be constantly checking in CRLF in that
case.
I'm not an svn expert, though, and I haven't tested the above.
Colin
Clarification: svn:eol-style = native causes checked-out files to use
whatever line-ending convention is native to the checkout platform. I think
just setting this property on all the HTML files should resolve this
and future problems.
patch posted.
C.
On Fri, Jun 28, 2013 at 12:56 PM, Colin McCabe
I think the fix for this is to set svn:eol-style to "native" on this
file. It's set on many other files, just not on this one:
cmccabe@keter:~/hadoopST/trunk> svn propget svn:eol-style
./hadoop-project-dist/README.txt
native
cmccabe@keter:~/hadoopST/trunk> svn propget svn:eol-style
./hadoop-hdfs-
Hi Chris,
Thanks for the report. I filed
https://issues.apache.org/jira/browse/HADOOP-9667 for this.
Colin
Software Engineer, Cloudera
On Mon, Jun 24, 2013 at 2:20 AM, Christopher Ng wrote:
> cross-posting this from cdh-users group where it received little interest:
>
> is there a bug in Sequ
You might try looking at what KosmoFS (KFS) did. They have some code in
org/apache/hadoop/fs which calls their own Java shim.
This way, the shim code in hadoop-common gets updated whenever FileSystem
changes, but there is no requirement to install KFS before building Hadoop.
You might also try a
+1 (non-binding)
best,
Colin
On Sun, Mar 10, 2013 at 8:38 PM, Matt Foley wrote:
> Hi all,
> I have created branch-1.2 from branch-1, and propose to cut the first
> release candidate for 1.2.0 on Monday 3/18 (a week from tomorrow), or as
> soon thereafter as I can achieve a stable build.
>
> Be
Hi Erik,
Eclipse can run junit tests very rapidly. If you want a shorter test
cycle, that's one way to get it.
There is also Maven-shell, which reduces some of the overhead of starting
Maven. But I haven't used it so I can't really comment.
cheers,
Colin
On Mon, Jan 21, 2013 at 8:36 AM, Erik
Hi Yiyu,
Are you referring to com.google.protobuf?
We generally do depend on specific versions of jars in the pom.xml files,
to prevent exactly this sort of problem. If you have a patch which adds
this, you should post it. It might help someone else.
cheers,
Colin
On Thu, Jan 17, 2013 at 6:3
In addition to protoc, can someone please also install a 32-bit C++ compiler?
The builds are all failing on this machine because of that.
regards,
Colin
On Fri, Jan 4, 2013 at 11:37 AM, Giridharan Kesavan
wrote:
> When I configured the other machines I used the source to compile and
> install
On Tue, Dec 18, 2012 at 1:05 AM, Colin McCabe wrote:
> On Mon, Dec 17, 2012 at 11:03 AM, Steve Loughran
> wrote:
>> On 17 December 2012 16:06, Tom White wrote:
>>
>>> There are some tests like the S3 tests that end with "Test" (e.g.
>>> Jets3tN
ough to the XML and generated reports, but
>> > you'd have to do a new junit runner for this and tweak the reporting
>> code.
>> > Which, if it involved going near maven source, is not something I am
>> > prepared to do
>> >
>> > On 14 Decembe
One approach we've taken in the past is making the junit test skip
itself when some precondition is not true. Then, we often create a
property which people can use to cause the skipped tests to become a
hard error.
For example, all the tests that rely on libhadoop start with these lines:
> @Test
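The quoted example is cut off above; from memory the pattern looks like the sketch below. The class and method names here are illustrative:

import static org.junit.Assume.assumeTrue;
import org.apache.hadoop.util.NativeCodeLoader;
import org.junit.Before;

public abstract class NativeTestBase {
    @Before
    public void checkPrecondition() {
        // Skips the test (rather than failing it) when libhadoop isn't
        // loaded; a system property can flip skips into hard errors.
        assumeTrue(NativeCodeLoader.isNativeCodeLoaded());
    }
}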
Hi Radim,
In general, Maven plugins are built and deployed to a repository.
Then, Maven fetches the precompiled binaries from this repository
based on a specific version number in the pom. This is how Maven
plugins work in general, not specific to this proposal.
I did experiment with bundling th
On Mon, Dec 10, 2012 at 10:50 AM, Colin McCabe wrote:
> On Fri, Dec 7, 2012 at 5:31 PM, Radim Kolar wrote:
>> 1. cmake and protoc maven plugins already exists. why you want to write a
>> new ones?
>
> This has already been discussed; see
> https://groups.google.com/forum/?fromgroups=#!topic/cmake-maven-project-users/5FpfUHmg5Ho
On Fri, Dec 7, 2012 at 5:31 PM, Radim Kolar wrote:
> 1. cmake and protoc maven plugins already exists. why you want to write a
> new ones?
This has already been discussed; see
https://groups.google.com/forum/?fromgroups=#!topic/cmake-maven-project-users/5FpfUHmg5Ho
Actually the situation is even
I think this is a good idea, for a few different reasons.
* We all know Java, so this code will be readable by all.
* It will benefit the wider community of Maven users, not just our project.
* It gets rid of the shell dependency. The shell dependency is
problematic because Windows doesn't suppor