Hi Jay, thanks for the reply. I can see "TestBlockRecovery" and
"TestDataDirs" for the datanode; also "TestGetImageServlet", "TestINodeFile",
and "TestNNLeaseRecovery" for the namenode. I will start from these first,
then. Thanks.
Best Regards,
Min (Catherine) Long
IBM China Systems and Technology Lab
Hi Min, look at the unit tests which make use of MiniDFSCluster -- you can
run those in debug mode and step through the code, and they include a bunch
of use cases. That's generally much easier than running all of the
different services and debugging each separately.
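The advice above can be sketched as a small driver program. This is a hedged sketch, not code from the thread: it assumes the hadoop-common and hadoop-hdfs (test) jars are on the classpath, and the MiniDFSCluster constructor shown matches the 0.20/0.21-era API, which varies across Hadoop versions.

```java
// Sketch only: requires hadoop-common and hadoop-hdfs test jars on the
// classpath. Class and constructor names reflect the 0.20/0.21-era API
// and are assumptions; check your version's MiniDFSCluster signature.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MiniClusterDebugSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Spin up an in-process NameNode plus one DataNode.
    MiniDFSCluster cluster = new MiniDFSCluster(conf, 1, true, null);
    try {
      FileSystem fs = cluster.getFileSystem();
      // Set a breakpoint inside NameNode or DataNode code, then step
      // through this write to follow the full client-to-server path.
      FSDataOutputStream out = fs.create(new Path("/debug-probe.txt"));
      out.writeBytes("hello");
      out.close();
    } finally {
      cluster.shutdown();
    }
  }
}
```

Because everything runs in one JVM, a single debugger session can step from the client call into namenode and datanode code without attaching to separate daemons.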
On Mon, Sep 13, 2010 at 9:30
Thanks for the help. I was able to check out the code. Is there any
documentation on setting up a debug environment for HDFS? Should a
single-node HDFS be set up for debugging? Should the Hadoop Common code be
checked out for running/debugging HDFS, or is HDFS alone enough? Sorry
for the si
HDFS federation: Upgrade and rolling back of Federation
---
Key: HDFS-1398
URL: https://issues.apache.org/jira/browse/HDFS-1398
Project: Hadoop HDFS
Issue Type: Sub-task
Reporte
HDFS federation: Storage directory of VERSION(/ID) file
Key: HDFS-1397
URL: https://issues.apache.org/jira/browse/HDFS-1397
Project: Hadoop HDFS
Issue Type: Sub-task
Repor
[ https://issues.apache.org/jira/browse/HDFS-1396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jitendra Nath Pandey resolved HDFS-1396.
Resolution: Duplicate
Marking this as a duplicate of HDFS-1364.
> reloginFromKeytab
reloginFromKeytab in Hftp clients
-
Key: HDFS-1396
URL: https://issues.apache.org/jira/browse/HDFS-1396
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jit
On Mon, Sep 13, 2010 at 1:04 PM, Owen O'Malley wrote:
> On Mon, Sep 13, 2010 at 11:10 AM, Todd Lipcon wrote:
> > Yep, but there are plenty of 10 node clusters out there that do important
> > work at small startups or single-use-case installations, too. We need to
> > provide scalability and secu
Use the Linux Heartbeat project and DRBD to build a backup namenode.
Jimmy
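A minimal sketch of the setup Jimmy describes, hedged heavily: the hostnames (nn1/nn2), devices, addresses, mount point, and init-script name below are all illustrative assumptions, not details from the thread. The idea is that DRBD synchronously mirrors the block device holding the namenode metadata directory, and Heartbeat promotes the standby and starts the namenode on failover.

```
# /etc/drbd.d/namenode-meta.res -- illustrative only; hosts, devices,
# and addresses are assumptions, not taken from the thread.
resource namenode-meta {
  protocol C;                      # synchronous replication
  device    /dev/drbd0;
  disk      /dev/sdb1;             # holds dfs.name.dir when mounted
  meta-disk internal;
  on nn1 { address 192.168.0.1:7789; }
  on nn2 { address 192.168.0.2:7789; }
}

# /etc/ha.d/haresources (Heartbeat v1 style): nn1 is the preferred
# primary; on failover Heartbeat promotes DRBD on nn2, mounts the
# volume, and starts a (hypothetical) hadoop-namenode init script.
nn1 drbddisk::namenode-meta Filesystem::/dev/drbd0::/hdfs-name::ext3 hadoop-namenode
```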
--
From: "John Hui"
Sent: Monday, September 13, 2010 2:33 PM
To:
Subject: namenode crash - recovery model?
Hi All,
I am new to Hadoop. I have been reading and playing arou
Hi All,
I am new to Hadoop. I have been reading and playing around with Hadoop, it
seems like the namenode is a single point of failure. I read about the
backup node which is basically a copy of the live namenode.
So if the namenode were to crash, what are some of the recovery models that
people
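One common recovery posture from this era, sketched as a hedged hdfs-site.xml fragment (the paths are illustrative assumptions): point dfs.name.dir at multiple directories, including one on an NFS mount, so the namenode writes its image and edit log to each, and the metadata survives a namenode crash to seed a replacement machine.

```xml
<!-- Illustrative paths. dfs.name.dir accepts a comma-separated list;
     the NameNode writes its image and edits to every directory listed. -->
<property>
  <name>dfs.name.dir</name>
  <value>/local/hdfs/name,/mnt/nfs/hdfs/name</value>
</property>
```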
On Mon, Sep 13, 2010 at 11:10 AM, Todd Lipcon wrote:
> Yep, but there are plenty of 10 node clusters out there that do important
> work at small startups or single-use-case installations, too. We need to
> provide scalability and security features that work for the 100+ node
> clusters but also no
Add @Override annotation to FSDataset methods that implement FSDatasetInterface
---
Key: HDFS-1395
URL: https://issues.apache.org/jira/browse/HDFS-1395
Project: Hadoop HDFS
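The point of HDFS-1395 can be shown with a self-contained sketch. FSDatasetInterface itself is not reproduced here; the two-method interface below is a stand-in. With @Override on an implementing method, the compiler rejects signature drift (for example, a renamed interface method) instead of silently leaving an unrelated, never-called method behind.

```java
// Stand-in for FSDatasetInterface: a hypothetical interface, not the
// real HDFS one.
interface DatasetInterface {
  long getLength(String blockId);
}

class Dataset implements DatasetInterface {
  // @Override asks the compiler to verify this really implements an
  // interface method; if getLength were later renamed in the interface,
  // this line would become a compile error rather than a silent orphan.
  @Override
  public long getLength(String blockId) {
    return blockId.length(); // toy implementation
  }
}

public class OverrideDemo {
  public static void main(String[] args) {
    DatasetInterface d = new Dataset();
    System.out.println(d.getLength("blk_1234"));
  }
}
```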
On Mon, Sep 13, 2010 at 10:59 AM, Owen O'Malley wrote:
> On Mon, Sep 13, 2010 at 10:05 AM, Todd Lipcon wrote:
>
> > This is not MR-specific, since the strangely named hadoop.job.ugi
> determines
> > HDFS permissions as well.
>
> Yeah, after I hit send, I realized that I should have used common-d
On Mon, Sep 13, 2010 at 10:05 AM, Todd Lipcon wrote:
> This is not MR-specific, since the strangely named hadoop.job.ugi determines
> HDFS permissions as well.
Yeah, after I hit send, I realized that I should have used common-dev.
This is really a dev issue.
> "or the user must write a custom g
modify -format option for namenode to generate new blockpool id and accept
newcluster
--
Key: HDFS-1394
URL: https://issues.apache.org/jira/browse/HDFS-1394
Project:
On Sep 13, 2010, at 10:05 AM, Todd Lipcon wrote:
> On Mon, Sep 13, 2010 at 9:31 AM, Owen O'Malley wrote:
>
>> Moving the discussion over to the more appropriate mapreduce-dev.
>>
>
> This is not MR-specific, since the strangely named hadoop.job.ugi determines
> HDFS permissions as well. +CC h
On Mon, Sep 13, 2010 at 9:31 AM, Owen O'Malley wrote:
> Moving the discussion over to the more appropriate mapreduce-dev.
>
This is not MR-specific, since the strangely named hadoop.job.ugi determines
HDFS permissions as well. +CC hdfs-dev... though I actually think this is an
issue that users w