Hello All,
I would like to try out a Hadoop configuration involving both Lustre and
HDFS, and I would like to hear any thoughts/criticisms on the idea.
In my cluster I have the Lustre parallel file system, which mainly exposes
storage over a network. There is also some local space on each node of the
cluster; this space is not part of the Lustre file system. My Hadoop
installation currently ...
On Jun 10, 2010, at 2:37 AM, Vikas Ashok Patil wrote:
> In my cluster I have the Lustre parallel file system, which mainly exposes
> storage over a network. There is also some local space on each node of the
> cluster; this space is not part of the Lustre file system. My Hadoop
> installation currently ...
> Your local storage should get used for MR. Use Lustre via file://
> (LocalFileSystem, iirc) instead of HDFS via hdfs:// (DistributedFileSystem,
> iirc) as the default file system type.
If Lustre has integrated checksums, you'll want to use the
RawLocalFileSystem instead of LocalFileSystem.
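For illustration, here is a minimal sketch of that setup in Java; the mount
point /mnt/lustre is a placeholder. fs.default.name points the default file
system at the Lustre mount via file://, and fs.file.impl remaps the file://
scheme to RawLocalFileSystem so Hadoop's .crc sidecar files are skipped in
favor of Lustre's integrated checksums.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LustreAsDefaultFs {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Default file system = the Lustre mount (placeholder path).
        conf.set("fs.default.name", "file:///mnt/lustre");
        // Trust Lustre's checksums; bypass Hadoop's .crc files.
        conf.set("fs.file.impl", "org.apache.hadoop.fs.RawLocalFileSystem");

        FileSystem fs = FileSystem.get(conf);
        System.out.println("Default FS: " + fs.getUri()
            + " (" + fs.getClass().getName() + ")");
        System.out.println("Mount visible: " + fs.exists(new Path("/mnt/lustre")));
      }
    }

The same two properties can of course go in conf/core-site.xml instead of
being set programmatically.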
0.20: TestFileAppend3.testTC2 failure
-------------------------------------

                 Key: HDFS-1197
                 URL: https://issues.apache.org/jira/browse/HDFS-1197
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: data-node, hdfs client, name-node
Resolving cross-realm principals
--------------------------------

                 Key: HDFS-1198
                 URL: https://issues.apache.org/jira/browse/HDFS-1198
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Jitendra Nath Pandey
            Assignee: Jitendra Nath Pandey
This link should help.
http://wiki.apache.org/hadoop/QuickStart
On 6/10/10 12:20 PM, "Alberich de megres" wrote:
Hello!
I'm new to HDFS; I just downloaded the source code and compiled it.
Now I want to execute it on 2 machines, but I don't know how to start
the servers. Is there any web/doc, or can someone shed some light on
how to start?
Hello!
I'm new to HDFS; I just downloaded the source code and compiled it.
Now I want to execute it on 2 machines, but I don't know how to start
the servers. Is there any web/doc, or can someone shed some light on
how to start?
Thanks!!
Alberich
You can test HDFS without setting up a MapReduce cluster, if that's what you mean.
Instead of bin/start-all.sh, use bin/start-dfs.sh, and you can skip the
configuration related to MapReduce.
To test it, use the DFS command line: "bin/hadoop dfs".
On 6/10/10 1:16 PM, "Alberich de megres" wrote:
Thanks for the quick reply, ...
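As a complement to the "bin/hadoop dfs" command line, a short Java client
also works as a smoke test once bin/start-dfs.sh has brought the daemons up.
A sketch, assuming fs.default.name in the client's configuration points at
the running NameNode (the hdfs://localhost:9000 value in the comment is a
placeholder):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsSmokeTest {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumes fs.default.name points at the running NameNode,
        // e.g. hdfs://localhost:9000 (placeholder host/port).
        FileSystem fs = FileSystem.get(conf);

        Path dir = new Path("/smoke-test");
        fs.mkdirs(dir);

        Path file = new Path(dir, "hello.txt");
        FSDataOutputStream out = fs.create(file);
        out.writeUTF("hello, hdfs");
        out.close();

        for (FileStatus s : fs.listStatus(dir)) {
          System.out.println(s.getPath() + " " + s.getLen());
        }
        fs.delete(dir, true);  // recursive cleanup
      }
    }

If mkdirs, create, and listStatus all succeed, the NameNode and at least one
DataNode are up and talking to each other.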
Thanks for the quick reply,
But I'm talking about just HDFS. Is it possible to test it separately?
The source code is available at:
http://github.com/apache/hadoop-hdfs
I compiled it, and now I want to test it (aside from Hadoop).
On Thu, Jun 10, 2010 at 9:37 PM, Jitendra Nath Pandey
wrote:
> This link should help. ...
Thanks!
Can I compile just the source at the repo and use it as is?
I mean, without having any Hadoop source code (except the HDFS code at
the site I mentioned), and without the need to integrate it with
compiled Hadoop code, just as if it were a different or standalone project.
On Thu, Jun 10, 2010 at ...
Extract a subset of tests for smoke (DOA) validation.
------------------------------------------------------

                 Key: HDFS-1199
                 URL: https://issues.apache.org/jira/browse/HDFS-1199
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: ...
You can check out the hadoop-20 branch from
http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20/ ,
and build and run it following the steps on the wiki.
On 6/10/10 2:01 PM, "Alberich de megres" wrote:
Thanks!
Can I compile just the source at the repo and use it as is?
I mean, without having any Hadoop source code (except the HDFS code at
the site I mentioned) ...
[ https://issues.apache.org/jira/browse/HDFS-1198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jakob Homan resolved HDFS-1198.
-------------------------------

       Fix Version/s: 0.22.0
          Resolution: Fixed

I've committed this. Thanks, Jitendra. Resolving as fixed.
Hi all,
I checked the source code of EditLogFileOutputStream; it seems Hadoop
will first write the edit log to a buffer, then flush it to disk. I know
this improves performance, but on the other hand the edits still in the
buffer are lost if the NameNode goes down. So I wonder: is it possible,
and necessary, to sync the edit log to disk before returning success to
the client?
Hi Jeff,
All of the FSNamesystem methods call logSync() before returning to the
client. So, if the edit is lost, it also will not have returned a success to
the client.
-Todd
On Thu, Jun 10, 2010 at 6:29 PM, Jeff Zhang wrote:
> Hi all,
>
> I checked the source code of EditLogFileOutputStream; it seems Hadoop
> will first write the edit log to a buffer, then flush it to disk. ...
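To illustrate the pattern Todd describes, here is a simplified,
self-contained sketch (not the actual FSNamesystem or
EditLogFileOutputStream code): edits land in an in-memory buffer for
throughput, and logSync() flushes and fsyncs before the operation reports
success to the client.

    import java.io.BufferedOutputStream;
    import java.io.DataOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Simplified stand-in for the real edit log: ops are buffered for
    // speed, and logSync() forces them to disk before the caller
    // acknowledges the client.
    public class BufferedEditLog {
      private final FileOutputStream fileOut;
      private final DataOutputStream bufferedOut;

      public BufferedEditLog(String path) throws IOException {
        fileOut = new FileOutputStream(path, true);  // append mode
        bufferedOut = new DataOutputStream(new BufferedOutputStream(fileOut));
      }

      // Fast: lands in the in-memory buffer only.
      public synchronized void logEdit(String op) throws IOException {
        bufferedOut.writeUTF(op);
      }

      // Durable: flush the buffer, then fsync the file descriptor.
      public synchronized void logSync() throws IOException {
        bufferedOut.flush();
        fileOut.getFD().sync();
      }

      // The pattern Todd describes: log, sync, only then acknowledge.
      public void mkdir(String path) throws IOException {
        logEdit("MKDIR " + path);
        logSync();  // crash before this returns => no success was reported
      }
    }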
Thanks for the replies.
If I have fs.default.name = file://my_lustre_mount_point , then only the
Lustre file system will be used. I would like to have something like
fs.default.name = file://my_lustre_mount_point , hdfs://localhost:9123
so that both the local file system and Lustre are in use.
Kindly comment.
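Note that fs.default.name accepts a single URI, so it cannot list two file
systems; but nothing stops a client or job from addressing both with fully
qualified URIs regardless of the default. A sketch, with the Lustre mount
point as a placeholder and the hdfs://localhost:9123 address taken from the
message above:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TwoFileSystems {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Whatever the default is, each FileSystem is reachable by its URI.
        FileSystem lustre = FileSystem.get(URI.create("file:///mnt/lustre"), conf);
        FileSystem hdfs   = FileSystem.get(URI.create("hdfs://localhost:9123/"), conf);

        System.out.println(lustre.getUri() + " -> " + lustre.getClass().getSimpleName());
        System.out.println(hdfs.getUri()   + " -> " + hdfs.getClass().getSimpleName());

        // Fully qualified paths select the file system explicitly.
        System.out.println(hdfs.exists(new Path("hdfs://localhost:9123/user")));
      }
    }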
The namenode could remember the last good location of a missing block
----------------------------------------------------------------------

                 Key: HDFS-1200
                 URL: https://issues.apache.org/jira/browse/HDFS-1200
             Project: Hadoop HDFS
          Issue Type: ...
Hi All,
I have been trying to access the statistics of FSNamesystem using
FSNamesystemMetrics, but I have not been able to do it yet. Am I doing it
right? If not, kindly guide me; I am stuck.
Thanks,
Vidur
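One way to read those FSNamesystem statistics from outside the NameNode is
JMX. The sketch below assumes the NameNode JVM was started with remote JMX
enabled (for example -Dcom.sun.management.jmxremote.port=8004 plus the usual
auth/SSL settings); since the exact MBean object names vary across versions,
it simply dumps every bean whose JMX domain mentions hadoop:

    import java.util.Set;
    import javax.management.MBeanAttributeInfo;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class NameNodeMetricsDump {
      public static void main(String[] args) throws Exception {
        // Placeholder host/port for the NameNode's JMX endpoint.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:8004/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        MBeanServerConnection mbs = jmxc.getMBeanServerConnection();

        // Object names differ across versions; scan all registered beans.
        Set<ObjectName> names = mbs.queryNames(null, null);
        for (ObjectName name : names) {
          if (!name.getDomain().toLowerCase().contains("hadoop")) continue;
          System.out.println(name);
          for (MBeanAttributeInfo attr : mbs.getMBeanInfo(name).getAttributes()) {
            try {
              System.out.println("  " + attr.getName() + " = "
                  + mbs.getAttribute(name, attr.getName()));
            } catch (Exception e) {
              // Some attributes are not readable remotely; skip them.
            }
          }
        }
        jmxc.close();
      }
    }

Pointing jconsole at the NameNode process shows the same beans
interactively, which is an easy way to find the exact attribute names.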