BUILDING.txt in the source code has a section for building on Windows.
Bikas
-----Original Message-----
From: Andrey Klochkov [mailto:akloch...@griddynamics.com]
Sent: Tuesday, September 24, 2013 5:11 PM
To: common-dev
Subject: Re: windows support in trunk?
Great, thanks!
One more question on Windows support: how do I create a distribution and run it?
Great, thanks!
One more question on Windows support: how do I create a distribution and
run it? It's for dev purposes (testing that a patch works on Windows), so I
can't really use HDP. On Linux/OSX I just do a build with -Pdist and then
use the startXXX scripts from hadoop-project-dist/target to run it.
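(For reference, the Linux/OSX flow mentioned above looks roughly like this; the
version directory and tarball flag are assumptions, and BUILDING.txt covers the
Windows-specific prerequisites.)

    # Build a full distribution layout (plus a tarball when -Dtar is passed).
    mvn clean package -Pdist -DskipTests -Dtar

    # The assembled distribution lands under the dist module's target directory;
    # the exact path and version suffix depend on the branch being built.
    cd hadoop-dist/target/hadoop-3.0.0-SNAPSHOT

    # Format a namenode once, then start the HDFS and YARN daemons from the
    # packaged scripts.
    bin/hdfs namenode -format
    sbin/start-dfs.sh
    sbin/start-yarn.sh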
I've added MAPREDUCE-5531 to the blocker list. - Zhijie
On Tue, Sep 24, 2013 at 3:41 PM, Arun C Murthy wrote:
> With 4 +1s (3 binding) and no -1s the vote passes. I'll push it out… I'll
> make it clear on the release page that there are some known issues and
> that we will follow up very shortly with another release.
With 4 +1s (3 binding) and no -1s the vote passes. I'll push it out… I'll make
it clear on the release page that there are some known issues and that we will
follow up very shortly with another release.
Meanwhile, let's fix the remaining blockers (please mark them as such with
Target Version 2
On Sep 24, 2013, at 3:24 PM, Andrey Klochkov wrote:
> Is Windows support currently in trunk? Or should I still use trunk-win to
> experiment with Hadoop on Windows? I've seen a number of Windows-related
> patches going into trunk, which is why I'm asking. Thanks!
>
> I know I just need to ask Chris Nauroth, but I'm sending this here in case
> it's useful to others.
Is Windows support currently in trunk? Or should I still use trunk-win to
experiment with Hadoop on Windows? I've seen a number of Windows-related
patches going into trunk, which is why I'm asking. Thanks!
I know I just need to ask Chris Nauroth, but I'm sending this here in case
it's useful to others.
-
I ran through my usual checklist for validating the RC. I only checked the
source tarball.
- Signatures and message digests are all good (sketch below). I guess the
message digest has different text wrapping because of differences in gpg2's
version. Anyway.
- The top-level full LICENSE, NOTICE and README are good.
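(For anyone repeating these checks, a rough sketch; the artifact names below
are assumptions for this RC.)

    # Verify the detached signature against the source tarball.
    gpg --verify hadoop-2.1.1-beta-src.tar.gz.asc hadoop-2.1.1-beta-src.tar.gz

    # Recompute a digest locally and compare it with the published .mds file;
    # the wrapping of that file can differ between gpg and gpg2 versions.
    md5sum hadoop-2.1.1-beta-src.tar.gz
    cat hadoop-2.1.1-beta-src.tar.gz.mds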
I've created the 2.1.2-beta release version. Please use that for any *critical*
commits on the branch-2.1-beta branch. Please be careful; let's keep the number
of commits here very small.
thanks,
Arun
On Sep 24, 2013, at 2:07 PM, Andrew Wang wrote:
> Hey Arun,
>
> That plan sounds good to me, thanks for being on top of things.
On Tue, Sep 24, 2013 at 1:39 PM, Arun C Murthy wrote:
> Rather than spin another RC, let's get this out and follow up with the next
> release - especially since it's not clear how long it will take for the
> symlink stuff to sort itself out.
>
> Getting this out will help downstream projects, even if it does so in a small way.
[ https://issues.apache.org/jira/browse/HADOOP-9761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Andrew Wang reopened HADOOP-9761:
---------------------------------
Reopening to track backporting this to branch-2.1 or similar for GA.
> ViewFileSyst
Hey Arun,
That plan sounds good to me, thanks for being on top of things. What's the
new fix version we should be using (2.1.2 or 2.2.0)? It would be good to get
the same clarification regarding which branches should be receiving
commits. I think a 2.1.2 would be nice to get the symlinks changes in a
Rather than spin another RC, let's get this out and follow up with the next
release - especially since it's not clear how long it will take for the symlink
stuff to sort itself out.
Getting this out will help downstream projects, even if it does so in a small way.
Arun
On Sep 23, 2013, at 5:36 P
Update: HDFS-5228 has been resolved. It was committed to branch-2.1-beta,
so I think there was an assumption that this would warrant a new RC. (If
that's not the case, then we ought to pull HDFS-5228 back out of
branch-2.1-beta to avoid confusion.)
Chris Nauroth
Hortonworks
http://hortonworks.com/
I just created the jira :) It's brand new, so it won't be useful just yet.
Also, similar to Steve's comment, it is for learning the ecosystem, not the
underlying plumbing of distributed Java apps.
https://issues.apache.org/jira/browse/BIGTOP-1089
On Tue, Sep 24, 2013 at 11:31 AM, ankit nadig wrote:
How would the ZK approach make things faster? Are you saying the AMs would
do the watching? Currently, container assignments aren't actually sent to
the NodeManagers on heartbeats. The first time an NM hears about a
container is when an AM launches it.
On Tue, Sep 24, 2013 at 4:12 AM, Harsh J wrote:
thanks a lot!
On Tue, Sep 24, 2013 at 6:49 PM, Jay Vyas wrote:
> And also, if you want to help out: we are developing blueprints in the
> Bigtop project specifically for people who want to learn how real-world
> big data workflows look.
>
>
> > On Sep 24, 2013, at 4:52 AM, Steve Loughran wrote:
ping
On Tue, Sep 24, 2013 at 2:36 AM, Alejandro Abdelnur wrote:
> The vote for the 2.1.1-beta release is closing tonight. While we had quite a
> few +1s, it seems we need to address the following before doing a release:
>
> symlink discussion: get a concrete and explicit understanding on what we
> w
And also, if you want to help out: we are developing blueprints in the Bigtop
project specifically for people who want to learn how real-world big data
workflows look.
> On Sep 24, 2013, at 4:52 AM, Steve Loughran wrote:
>
> Hi.
>
> You need to know that we don't really consider Hadoop a good place to learn
> about Java or distributed system programming: it is simply too complex.
Yes, but I don't think the heartbeat coupling is necessary. Couldn't one
even use a ZK write/watch approach for faster assignment of regular work?
On Tue, Sep 24, 2013 at 2:24 PM, Steve Loughran wrote:
> On 21 September 2013 09:19, Sandy Ryza wrote:
>
>> I don't believe there is any reason scheduling decisions need to be coupled
>> with NodeManager heartbeats.
On 21 September 2013 09:19, Sandy Ryza wrote:
> I don't believe there is any reason scheduling decisions need to be coupled
> with NodeManager heartbeats. It doesn't sidestep any race conditions
> because a NodeManager could die immediately after heartbeating.
>
>
Historically it's been done for
Hi.
You need to know that we don't really consider Hadoop a good place to learn
about Java or distributed system programming: it is simply too complex.
It's like learning C by writing Linux kernel device drivers, so we
explicitly warn against trying to do this:
http://wiki.apache.org/hadoop/Hadoop