visibility of the security utils and things like getCanonicalService.
-
Key: HADOOP-7638
URL: https://issues.apache.org/jira/browse/HADOOP-7638
Project: Hadoop Common
Issue
Thanks a lot, all!
An end goal of mine was to make Hadoop as flexible as possible.
Along similar lines, though unrelated to the above idea, was another one I
encountered, courtesy of http://hadoopblog.blogspot.com/2010/11/hadoop-research-topics.html
The blog mentions the ability to dynamically append Inpu
Hi all,
I was thinking of working on a few ideas related to Hadoop, and am seeking
some suggestions and advice regarding the same.
I would like to integrate Hadoop with an existing OpenStack cluster. I
would appreciate your suggestions on whether this is a feasible
project to go ahead with. If not, c
On Sep 14, 2011, at 1:27 PM, Bharath Ravi wrote:
> Hi all,
>
> I'm a newcomer to Hadoop development, and I'm planning to work on an idea
> that I wanted to run by the dev community.
>
> My apologies if this is not the right place to post this.
>
> Amazon has an "Elastic MapReduce" Service (
>
This makes a bit of sense, but you have to worry about the inertia of the
data. Adding compute resources is easy. Adding data resources, not so
much. And if the computation is not near the data, then it is likely to be
much less effective.
On Wed, Sep 14, 2011 at 4:27 PM, Bharath Ravi wrote:
>
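The point about data inertia can be made concrete with a quick back-of-envelope calculation. This is a minimal sketch with illustrative numbers (not figures from the thread): moving a large dataset onto newly added compute nodes can dwarf the cost of adding the nodes themselves.

```python
# Back-of-envelope: why "data inertia" matters. The dataset size and
# link speed below are hypothetical, chosen only for illustration.

def transfer_hours(data_tb: float, link_gbps: float) -> float:
    """Hours needed to move data_tb terabytes over a link_gbps network link."""
    gigabits = data_tb * 8 * 1000      # TB -> gigabits
    seconds = gigabits / link_gbps     # ideal, sustained throughput
    return seconds / 3600

# Moving 10 TB over a single sustained 1 Gbps link takes roughly a day:
print(round(transfer_hours(10, 1), 1))   # ~22.2 hours
```

Adding a compute node, by contrast, is minutes of work, which is why computation that is not near its data tends to be much less effective.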
Hi Bharath,
Amazon EMR has two kinds of nodes: task and core. Core nodes run HDFS and
MapReduce, while task nodes run only MapReduce. You can add and remove task
nodes in a running cluster, but core nodes can only be added. In other words,
you can't reduce the size of HDFS; you can only increase it.
Hi all,
I'm a newcomer to Hadoop development, and I'm planning to work on an idea
that I wanted to run by the dev community.
My apologies if this is not the right place to post this.
Amazon has an "Elastic MapReduce" Service (
http://aws.amazon.com/elasticmapreduce/) that runs on Hadoop.
The ser
Fair scheduler configuration file is not bundled in RPM
---
Key: HADOOP-7637
URL: https://issues.apache.org/jira/browse/HADOOP-7637
Project: Hadoop Common
Issue Type: Bug
Componen
OK I see in build 800 it took about 1hr17mins end to end.
Thanks
On Sep 14, 2011, at 12:52 PM, Giridharan Kesavan wrote:
If you look further down, the build is also configured to run tests:
"$MAVEN_HOME/bin/mvn test -Dmaven.test.failure.ignore=true -Pclover
-DcloverLicenseLocation=/home/jenkins/tools/
If you look further down, the build is also configured to run tests:
"$MAVEN_HOME/bin/mvn test -Dmaven.test.failure.ignore=true -Pclover
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
> clover.log 2>&1"
mvn clean install -DskipTests is run at the root level to get the
latest
fix misspellings on home page
-
Key: HADOOP-7636
URL: https://issues.apache.org/jira/browse/HADOOP-7636
Project: Hadoop Common
Issue Type: Bug
Affects Versions: site
Reporter: Owen O'Malley
Hi all,
I am trying to run the example from
https://cwiki.apache.org/confluence/display/MAHOUT/Itembased+Collaborative+Filtering,
with the following command bin/mahout
org.apache.mahout.cf.taste.hadoop.item.RecommenderJob
-Dmapred.input.dir=input -Dmapred.output.dir=output --itemsFile itemfile
--
I noticed that even the Jenkins build does -DskipTests; is this
because there are too many failures, or does it simply take too long?
https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Hdfs-trunk/
799/consoleFull
/home/jenkins/tools/maven/latest/bin/mvn clean install -DskipTests
When I tr
RetryInvocationHandler should release underlying resources on close
---
Key: HADOOP-7635
URL: https://issues.apache.org/jira/browse/HADOOP-7635
Project: Hadoop Common
Issue Type