Allow superuser access only from a certain trusted IP range. This is to prevent
others from spoofing the superuser and gaining access to the cluster.
Enhance checkstyle results by Hudson Hadoop QA to provide a diff
Key: HADOOP-6186
URL: https://issues.apache.org/jira/browse/HADOOP-6186
Project: Hadoop Common
Issue Type: Bug
Hi Philip,
Tried the script. It seems the script could start a cluster, but the web
page did not work. I got the following error from the web interface:
HTTP ERROR: 404
/dfshealth.jsp
RequestURI=/dfshealth.jsp
Powered by Jetty://
Thanks,
Nicholas
- Original Message
> From: Phili
FWIW, I've been using the following simple shell script:
[0]doorstop:hadoop(149128)$cat damnit.sh
#!/bin/bash
set -o errexit
set -x
cd hadoop-common
ant binary
cd ..
cd hadoop-hdfs
ant binary
cd ..
cd hadoop-mapreduce
ant binary
cd ..
mkdir -p all/bin all/lib all/contrib
cp hadoop-common/bin/*
Yeah, I'm hitting the same issues. The patch problems weren't really an
issue (same-line-for-same-line conflict on my checkout), but not having the
webapps is sort of a pain.
Looks like ant bin-package puts the webapps dir in
HDFS_HOME/build/hadoop-hdfs-0.21.0-dev/webapps, while the daemon's expecti
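One possible workaround is to link the built webapps directory into place. This is only a sketch: the message above is cut off before saying where the daemon actually looks, so the link destination (and the demo `HDFS_HOME` path) are assumptions.

```shell
# Hypothetical workaround for the webapps location mismatch; the demo
# path and the link destination are assumptions, not from the thread.
HDFS_HOME=${HDFS_HOME:-$PWD/hdfs-home-demo}     # demo dir for illustration
mkdir -p "$HDFS_HOME/build/hadoop-hdfs-0.21.0-dev/webapps"
# Point a webapps link at the directory ant bin-package created:
ln -sfn "$HDFS_HOME/build/hadoop-hdfs-0.21.0-dev/webapps" "$HDFS_HOME/webapps"
```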
Hi Todd,
Two problems:
- The patch in HADOOP-6152 cannot be applied.
- I have tried an approach similar to the one described in the slides, but it
did not work since Jetty cannot find the webapps directory. See below:
2009-08-10 17:54:41,671 WARN org.mortbay.log: Web application not found
file:
I'd hazard a guess and say we should hitch our wagon to https://issues.apache.org/jira/browse/HADOOP-5107.
Arun
On Aug 10, 2009, at 5:25 PM, Tsz Wo (Nicholas), Sze wrote:
I have to admit that I don't know the official answer. The hack
below seems to work:
- compile all 3 sub-projects;
- c
Replace FSDataOutputStream#sync() by hflush()
-
Key: HADOOP-6185
URL: https://issues.apache.org/jira/browse/HADOOP-6185
Project: Hadoop Common
Issue Type: New Feature
Components: fs
A
Hey Nicholas,
Aaron gave a presentation with his best guess at the HUG last month. His
slides are here: http://www.cloudera.com/blog/2009/07/17/the-project-split/
(starting at slide 16)
(I'd let him reply himself, but he's out of the office this afternoon ;-) )
Hopefully we'll get towards somethi
I have to admit that I don't know the official answer. The hack below seems
to work:
- compile all 3 sub-projects;
- copy everything in hdfs/build and mapreduce/build to common/build;
- then run hadoop by the scripts in common/bin as before.
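The three steps above could be scripted roughly as follows. This is a sketch only: the side-by-side common/hdfs/mapreduce layout and the start-script name are assumptions, so it defaults to dry-run mode and just prints the commands.

```shell
#!/bin/bash
# Sketch of the three-step hack above; directory layout and script
# names are assumptions. Dry-run by default (prints commands only);
# set DRY_RUN= to actually execute them.
set -o errexit
DRY_RUN=${DRY_RUN-1}
run() { if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi; }

for proj in common hdfs mapreduce; do
  run ant -f "$proj/build.xml"               # 1. compile all 3 sub-projects
done
run cp -r hdfs/build/. common/build/         # 2. copy hdfs build output ...
run cp -r mapreduce/build/. common/build/    #    ... and mapreduce's
run common/bin/start-dfs.sh                  # 3. run hadoop scripts as before
```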
Any better idea?
Nicholas Sze
Apache Wiki wrote:
When the word "Apache" does not appear on the main wiki page, that is a problem.
Thanks for catching that, Greg!
Doug
>> * Should we integrate with the 0.18 branch, or just put our changes into
>> the
>> 0.18.3 release? We're not sure if there are plans for further releases on
>> the 0.18 branch.
This will not be committed to the 0.18 branch, even if there is an
0.18.4 release. If you wanted to post an 0.18 comp
Hi Jonathan,
Responses inline below:
On Mon, Aug 10, 2009 at 1:28 PM, Jonathan Seidman <
jonathan.seid...@opendatagroup.com> wrote:
> We're getting ready to contribute our FileSystem implementation for the
> Sector DFS (sector.sf.net). Up to now our development and testing has been
> against 0.1
We're getting ready to contribute our FileSystem implementation for the
Sector DFS (sector.sf.net). Up to now our development and testing has been
against 0.18.3, so our intention was to first integrate with that release
and then work on integrating with the trunk for the next release. A couple
of
Provide a configuration dump in json format.
Key: HADOOP-6184
URL: https://issues.apache.org/jira/browse/HADOOP-6184
Project: Hadoop Common
Issue Type: Bug
Reporter: rahul k singh
Co
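A dump of that kind might look something like the fragment below. This is purely illustrative: the field names (`key`, `value`, `isFinal`, `resource`) are assumptions about the eventual format, not taken from the issue.

```json
{
  "properties": [
    {
      "key": "fs.default.name",
      "value": "hdfs://localhost:9000",
      "isFinal": false,
      "resource": "core-site.xml"
    }
  ]
}
```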
improvement in fairscheduler
Key: HADOOP-6183
URL: https://issues.apache.org/jira/browse/HADOOP-6183
Project: Hadoop Common
Issue Type: Improvement
Components: contrib/hod
Affects Versions: 0.19.1