This JIRA is not a problem anymore. With the fix
for HDFS-11402, we have a workaround to capture immutable copies of open files
in the snapshots.
> SnapshotDiffReport should detect open files in HDFS Snapshots
With the fix
for HDFS-11402, we have a workaround to capture immutable copies of open files
in the snapshots.
> Add option to skip open files during HDFS Snapshots
> ---
>
> Key: HDFS-11218
>
{{TestLeaseManager}} and verified the successful run.
Committed to branch-2. Thanks [~jlowe].
> HDFS snapshots doesn't capture all open files when one of the open files is
> deleted
> ---
>
>
need to be updated for branch-2 and recommitted.
> HDFS snapshots doesn't capture all open files when one of the open files is
> deleted
> ---
>
> Key: HDFS-12217
>
Manoj Govindassamy created HDFS-12217:
-
Summary: HDFS snapshots doesn't capture all open files when one of
the open files is deleted
Key: HDFS-12217
URL: https://issues.apache.org/jira/browse/HDFS-
Manoj Govindassamy created HDFS-11988:
-
Summary: Verify HDFS Snapshots with open files captured are safe
across truncates and appends on current version
Key: HDFS-11988
URL: https://issues.apache.org/jira
Manoj Govindassamy created HDFS-11402:
-
Summary: HDFS Snapshots should capture point-in-time copies of
OPEN files
Key: HDFS-11402
URL: https://issues.apache.org/jira/browse/HDFS-11402
Project
Manoj Govindassamy created HDFS-11220:
-
Summary: SnapshotDiffReport should detect open files in HDFS
Snapshots
Key: HDFS-11220
URL: https://issues.apache.org/jira/browse/HDFS-11220
Project
Manoj Govindassamy created HDFS-11218:
-
Summary: Add option to skip open files during HDFS Snapshots
Key: HDFS-11218
URL: https://issues.apache.org/jira/browse/HDFS-11218
Project: Hadoop HDFS
Why would the state go backwards? Let me try to restate your scenario:
t1: snapshot a file with length 5
t2: append to the file, now length 6
t3: delete the snapshot
After this, since we deleted the snapshot, we only have the current
filesystem state which is length 6.
There's no "restore" funct
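The t1/t2/t3 timeline above can be sketched as a toy model (this is not HDFS code; the class and method names are illustrative only) showing that deleting a snapshot discards the captured copy rather than restoring it:

```python
# Toy model of the snapshot timeline above. NOT HDFS code: it only
# illustrates that deleting a snapshot drops the captured state and
# leaves the current file system state unchanged.

class ToyFs:
    def __init__(self):
        self.length = 0         # current file length
        self.snapshots = {}     # snapshot name -> captured length

    def append(self, n):
        self.length += n        # appends only ever grow the file

    def create_snapshot(self, name):
        self.snapshots[name] = self.length  # capture point-in-time length

    def delete_snapshot(self, name):
        del self.snapshots[name]  # discards the captured copy; no restore

fs = ToyFs()
fs.append(5)               # file has length 5
fs.create_snapshot("s1")   # t1: snapshot captures length 5
fs.append(1)               # t2: append, current length is 6
fs.delete_snapshot("s1")   # t3: delete the snapshot
print(fs.length)           # 6: only the current state remains
```

Running this prints 6: after t3 there is no recorded length-5 state left to go back to, which is the point of the reply above.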
Hi,
Actually, how does the NameNode handle the block report sent from the
DataNodes? After removing the snapshot, the file size = 0.5*BLOCK_SIZE,
but the block report says it is 0.6*BLOCK_SIZE (if we do not shrink the
block), so there is a mismatch.
On Tue, Oct 21, 2014 at 8:34 PM, Pushparaj Motamari wrote:
Hi,
Generally the user expectation is to get back to the previous state of the
file system, so we should see the old state of the block (the block size
should shrink). I couldn't find the semantics of this kind of operation in
the design document either.
Thank You
Pushparaj
On Tue, Oct 21, 2014 at 8:23 PM,
Hi Pushparaj,
Based on those steps, isn't the expectation that you end up with a
0.6*size block at the end? Blocks are append-only (i.e., they only grow in
length), so deleting a snapshot cannot result in a block shrinking.
Best,
Andrew
On Tue, Oct 21, 2014 at 7:20 AM, Pushparaj Motamari wrote:
> Hi
Hi,
Consider the scenario below.
/A/file.txt in HDFS
1. file.txt size = 0.5*(Size of Block).
2. Take a snapshot of directory A.
3. Append to file.txt, making its size 0.6*(Size of Block).
4. Remove/delete the snapshot taken on directory A in step 2.
My Question
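The four steps can be traced with a small sketch (sizes in arbitrary units, with BLOCK_SIZE = 10 so that 0.5*BLOCK_SIZE = 5; purely illustrative, not HDFS code):

```python
# Toy walk-through of the four steps above, in arbitrary units.
BLOCK_SIZE = 10

current = BLOCK_SIZE // 2    # step 1: file.txt is 0.5 * BLOCK_SIZE (= 5)
snapshot = current           # step 2: snapshot captures the length at 5
current += BLOCK_SIZE // 10  # step 3: append -> 0.6 * BLOCK_SIZE (= 6)
snapshot = None              # step 4: delete the snapshot

# As the replies in this thread note, blocks are append-only: the block
# stays at 0.6 * BLOCK_SIZE, and deleting the snapshot cannot shrink it,
# so the NameNode's recorded length matches what the DataNodes report.
print(current)  # 6, i.e. 0.6 * BLOCK_SIZE
```

The final length is 6 (0.6*BLOCK_SIZE), not 5: there is no mismatch with the block report, because nothing in snapshot deletion rewinds the block.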
Akira AJISAKA created HDFS-5880:
---
Summary: Fix a typo at the title of HDFS Snapshots document
Key: HDFS-5880
URL: https://issues.apache.org/jira/browse/HDFS-5880
Project: Hadoop HDFS
Issue
Hi,
Since the snapshot branch (HDFS-2802) has been merged to the main branch for a
few months (http://s.apache.org/9Qi), I am going to delete the corresponding
Jenkins build
(https://builds.apache.org/job/Hadoop-Hdfs-Snapshots-Branch-build/) tomorrow.
Please feel free to let me know if