All datanodes are bad in 2nd phase
--
Key: HDFS-1239
URL: https://issues.apache.org/jira/browse/HDFS-1239
Project: Hadoop HDFS
Issue Type: Bug
Components: hdfs client
Affects Versions: 0.20.1
A block is stuck in ongoingRecovery due to exception not propagated
Key: HDFS-1238
URL: https://issues.apache.org/jira/browse/HDFS-1238
Project: Hadoop HDFS
Issue Type: Bug
Client logic for 1st phase and 2nd phase failover are different
---
Key: HDFS-1237
URL: https://issues.apache.org/jira/browse/HDFS-1237
Project: Hadoop HDFS
Issue Type: Bug
Client uselessly retries recoverBlock 5 times
-
Key: HDFS-1236
URL: https://issues.apache.org/jira/browse/HDFS-1236
Project: Hadoop HDFS
Issue Type: Bug
Affects Versions: 0.20.1
Rep
Namenode returning the same Datanode to client, due to infrequent heartbeat
---
Key: HDFS-1235
URL: https://issues.apache.org/jira/browse/HDFS-1235
Project: Hadoop HDFS
Datanode 'alive' but with its disk failed, Namenode thinks it's alive
-
Key: HDFS-1234
URL: https://issues.apache.org/jira/browse/HDFS-1234
Project: Hadoop HDFS
Issue Type: Bug
Corrupted block if a crash happens before writing to checksumOut but after
writing to dataOut
-
Key: HDFS-1232
URL: https://issues.apache.org/jira/browse/HDFS-1232
Bad retry logic at DFSClient
Key: HDFS-1233
URL: https://issues.apache.org/jira/browse/HDFS-1233
Project: Hadoop HDFS
Issue Type: Bug
Components: hdfs client
Affects Versions: 0.20.1
Rep
Generation Stamp mismatches, leading to failed append
-
Key: HDFS-1231
URL: https://issues.apache.org/jira/browse/HDFS-1231
Project: Hadoop HDFS
Issue Type: Bug
Components: hdfs client
[
https://issues.apache.org/jira/browse/HDFS-1219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Todd Lipcon resolved HDFS-1219.
---
Resolution: Duplicate
Why file this bug if it's the same as 955?
> Data Loss due to edits log truncation
BlocksMap.blockinfo is not cleared immediately after deleting a block; it is
cleared only after a block report comes from the datanode. Why do we need to
maintain the blockinfo until that time?
DFSClient incorrectly asks for new block if primary crashes during first
recoverBlock
-
Key: HDFS-1229
URL: https://issues.apache.org/jira/browse/HDFS-1229
Project: Hadoop HDFS
CRC does not match when retrying appending a partial block
--
Key: HDFS-1228
URL: https://issues.apache.org/jira/browse/HDFS-1228
Project: Hadoop HDFS
Issue Type: Bug
Componen
UpdateBlock fails due to unmatched file length
--
Key: HDFS-1227
URL: https://issues.apache.org/jira/browse/HDFS-1227
Project: Hadoop HDFS
Issue Type: Bug
Components: data-node
Affect
Last block is temporary unavailable for readers because of crashed appender
---
Key: HDFS-1226
URL: https://issues.apache.org/jira/browse/HDFS-1226
Project: Hadoop HDFS
Block lost when primary crashes in recoverBlock
---
Key: HDFS-1225
URL: https://issues.apache.org/jira/browse/HDFS-1225
Project: Hadoop HDFS
Issue Type: Bug
Components: data-node
Affe
Stale connection makes node miss append
---
Key: HDFS-1224
URL: https://issues.apache.org/jira/browse/HDFS-1224
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Thanh Do
- Summary: if a datanod
DataNode fails stop due to a bad disk (or storage directory)
Key: HDFS-1223
URL: https://issues.apache.org/jira/browse/HDFS-1223
Project: Hadoop HDFS
Issue Type: Bug
Comp
NameNode fail stop in spite of multiple metadata directories
Key: HDFS-1222
URL: https://issues.apache.org/jira/browse/HDFS-1222
Project: Hadoop HDFS
Issue Type: Bug
Comp
NameNode unable to start due to stale edits log after a crash
-
Key: HDFS-1221
URL: https://issues.apache.org/jira/browse/HDFS-1221
Project: Hadoop HDFS
Issue Type: Bug
Affects
Namenode unable to start due to truncated fstime
Key: HDFS-1220
URL: https://issues.apache.org/jira/browse/HDFS-1220
Project: Hadoop HDFS
Issue Type: Bug
Components: name-node
Af
Data Loss due to edits log truncation
-
Key: HDFS-1219
URL: https://issues.apache.org/jira/browse/HDFS-1219
Project: Hadoop HDFS
Issue Type: Bug
Components: name-node
Affects Versions: 0.20.2
[
https://issues.apache.org/jira/browse/HDFS-1211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur resolved HDFS-1211.
Resolution: Fixed
I just committed this. Thanks Todd!
> 0.20 append: Block receiver should
[
https://issues.apache.org/jira/browse/HDFS-1210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur resolved HDFS-1210.
Fix Version/s: 0.20-append
Resolution: Fixed
I just committed this. Thanks Todd.
> D
[
https://issues.apache.org/jira/browse/HDFS-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur resolved HDFS-1204.
Resolution: Fixed
> 0.20: Lease expiration should recover single files, not entire lease holder
[
https://issues.apache.org/jira/browse/HDFS-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur resolved HDFS-1207.
Fix Version/s: 0.20-append
Resolution: Fixed
I just committed this. Thanks Todd!
> 0
[
https://issues.apache.org/jira/browse/HDFS-1141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur resolved HDFS-1141.
Resolution: Fixed
Pulled into hadoop-0.20-append
> completeFile does not check lease owners
[
https://issues.apache.org/jira/browse/HDFS-142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur resolved HDFS-142.
---
Resolution: Fixed
I have committed this. Thanks Sam, Nicolas and Todd.
> In 0.20, move blocks
0.20 append: Blocks recovered on startup should be treated with lower priority
during block synchronization
-
Key: HDFS-1218
URL: https://issues.apache.org/jira/
[
https://issues.apache.org/jira/browse/HDFS-1216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur resolved HDFS-1216.
Resolution: Fixed
I just committed this. Thanks Todd!
> Update to JUnit 4 in branch 20 append
Some methods in the NameNode should not be public
-
Key: HDFS-1217
URL: https://issues.apache.org/jira/browse/HDFS-1217
Project: Hadoop HDFS
Issue Type: Improvement
Components: name-node
Update to JUnit 4 in branch 20 append
-
Key: HDFS-1216
URL: https://issues.apache.org/jira/browse/HDFS-1216
Project: Hadoop HDFS
Issue Type: Task
Components: test
Affects Versions: 0.20-append
[
https://issues.apache.org/jira/browse/HDFS-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Todd Lipcon resolved HDFS-1215.
---
Assignee: Todd Lipcon
Resolution: Fixed
Dhruba committed to 20-append branch
> TestNodeCount infinite loops on branch-20-append
TestNodeCount infinite loops on branch-20-append
Key: HDFS-1215
URL: https://issues.apache.org/jira/browse/HDFS-1215
Project: Hadoop HDFS
Issue Type: Bug
Components: test
Affects
hdfs client metadata cache
--
Key: HDFS-1214
URL: https://issues.apache.org/jira/browse/HDFS-1214
Project: Hadoop HDFS
Issue Type: New Feature
Components: hdfs client
Reporter: Joydeep Sen Sarma
Implement a VFS Driver for HDFS
---
Key: HDFS-1213
URL: https://issues.apache.org/jira/browse/HDFS-1213
Project: Hadoop HDFS
Issue Type: New Feature
Components: hdfs client
Reporter: Michael
hi mike,
it will be nice to get a high level doc on what/how it is implemented.
also, you might want to compare it with fuse-dfs
http://wiki.apache.org/hadoop/MountableHDFS
thanks,
dhruba
On Wed, Jun 16, 2010 at 8:55 AM, Michael D'Amour wrote:
> We have an open source ETL tool (Kettle) which
Michael,
Please open a jira (new feature) and attach your patch there:
http://wiki.apache.org/hadoop/HowToContribute
thanks,
Arun
On Jun 16, 2010, at 8:55 AM, Michael D'Amour wrote:
We have an open source ETL tool (Kettle) which uses VFS for many
input/output steps/jobs. We would like to be able to read/write HDFS
We have an open source ETL tool (Kettle) which uses VFS for many
input/output steps/jobs. We would like to be able to read/write HDFS
from Kettle using VFS.
I haven't been able to find anything out there other than "it would be
nice."
I had some time a few weeks ago to begin writing a VFS driver.