Can't read binary data off HDFS
---
Key: HDFS-1169
URL: https://issues.apache.org/jira/browse/HDFS-1169
Project: Hadoop HDFS
Issue Type: Bug
Components: contrib/thriftfs
Affects Versions: 0.20.2
Add more assertions to TestLargeDirectoryDelete
---
Key: HDFS-1170
URL: https://issues.apache.org/jira/browse/HDFS-1170
Project: Hadoop HDFS
Issue Type: Improvement
Components: test
RaidNode should fix missing blocks directly on Data Node
---
Key: HDFS-1171
URL: https://issues.apache.org/jira/browse/HDFS-1171
Project: Hadoop HDFS
Issue Type: Task
Components:
Hey Hadoop Core Developers,
Just a reminder that the next contributors meeting is next Friday:
Date/Time: Friday, May 28th 2-4 pm
Venue: Cloudera HQ. 210 Portage Ave Palo Alto, CA 94306
Agenda:
- Discuss "HEP" proposal and its implementation (will send mail to
general today)
- Moving contrib out
Hi Harold,
It seems you don't have the latest hdfs code. The KerberosInfo annotation has
changed slightly in the latest code.
Also, KerberosInfo not being found means you have an older jar for common.
Try 'ant clean clean-cache' before compiling.
> How do I force the compile to use my own jar file, the
On May 6, 2010, at 5:38 PM, Arun C Murthy wrote:
# Agenda for next meeting
- Eli: Hadoop Enhancement Process (modelled on PEP?)
- Branching strategies: Development Models
Something to think about w.r.t branching strategies:
http://incubator.apache.org/learn/rules-for-revolutionaries.html
Hey Arun,
I updated the agenda on Meetup. I was assuming the branching discussion would
fall out of how to implement HEP, but it's good to discuss it separately as
well.
Also, I added moving contrib out of the repos since a couple of people
mentioned that at the last meetup, but we should do that time
permitting the oth
Hi, There:
While using the hadoop 0.20.9-yahoo distribution and hbase version 0.20.4, I
found that hadoop loses blocks under a certain
situation, and thus corrupts hbase tables.
I compared namenode, datanode and hbase regionserver and figured out the
reason.
The regionserver 10.110.8.85 asks n
Hi Jinsong,
Could you upload a tarball of the log files somewhere, from each of the DNs
and the RS involved? It's hard to trace through the log in the email (the
email added all kinds of wrapping, etc.)
-Todd
On Fri, May 21, 2010 at 2:17 PM, Jinsong Hu wrote:
> Hi, There:
> While I used hadoop
[Adding common-dev]
Updated agenda:
* Discuss "HEP" proposal, a mechanism for making enhancements to core
Hadoop, and its implementation
* Branching strategies: Development Models. Check out
http://incubator.apache.org/learn/rules-for-revolutionaries.html
* Moving contrib out of the core repos
[
https://issues.apache.org/jira/browse/HDFS-608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Konstantin Boudnik resolved HDFS-608.
-
Resolution: Duplicate
This is a DUP of HDFS-881
> BlockReceiver:receivePacket(): packet's he
[
https://issues.apache.org/jira/browse/HDFS-340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Todd Lipcon resolved HDFS-340.
--
Resolution: Not A Problem
This was resolved somewhere along the line with an undocumented config
paramet
Hi Jinsong,
I don't see any data loss here.
The sequence of events from the logs:
==> NN allocates block:
2010-05-18 21:21:29,731 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
NameSystem.allocateBlock: /hbase/.META./1028785192/info/656097411976846533.
blk_5636039758999247483_31304886
===> Fir
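When tracing an incident like this across NameNode, DataNode, and RegionServer
logs, it helps to pull the path-to-block mapping out of the NameNode's
allocateBlock lines first. Below is a minimal, hedged sketch of doing that in
Python; the regex is based only on the single 0.20-era sample line quoted
above, not on the full log grammar, so treat the field layout as an assumption.

```python
import re

# Assumed format of a 0.20-era NameNode StateChange line, inferred from the
# one sample above: "...NameSystem.allocateBlock: <path>. blk_<id>_<genstamp>"
ALLOC_RE = re.compile(
    r"NameSystem\.allocateBlock: (?P<path>\S+)\.\s*(?P<block>blk_-?\d+_\d+)"
)

def parse_alloc(line):
    """Return (path, block_id) for an allocateBlock log line, else None."""
    m = ALLOC_RE.search(line)
    if not m:
        return None
    return m.group("path"), m.group("block")

sample = (
    "2010-05-18 21:21:29,731 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* "
    "NameSystem.allocateBlock: /hbase/.META./1028785192/info/656097411976846533. "
    "blk_5636039758999247483_31304886"
)
print(parse_alloc(sample))
```

Running the extracted block IDs through a grep of the DataNode logs then shows
where each replica was written, received, or deleted, which is the comparison
described above.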
Blocks in newly completed files are considered under-replicated too quickly
---
Key: HDFS-1172
URL: https://issues.apache.org/jira/browse/HDFS-1172
Project: Hadoop HDFS