>> Unless you do a shrink on the vmdk and use a zfs variant with scsi unmap
>> support (I believe currently only Nexenta but correct me if I am wrong) the
>> blocks will not be freed, will they?
>
>
> Solaris 11.1 has ZFS with SCSI UNMAP support.
Freeing unused blocks works perfectly well with fst
On Fri, Sep 21, 2012 at 6:31 AM, andy thomas wrote:
> I have a ZFS filesystem and create weekly snapshots over a period of 5 weeks
> called week01, week02, week03, week04 and week05 respectively. My question
> is: how do the snapshots relate to each other - does week03 contain the
> changes made since week02?
> I asked what I thought was a simple question but most of the answers don't
> have too much to do with the question.
Hehe, welcome to mailing lists ;).
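For the original question, though: a snapshot is a complete point-in-time
view of the filesystem, not a delta, so week03 does not "contain" the
changes made since week02 - unchanged blocks are simply shared between
the snapshots on disk. If your release has it, zfs diff will show what
actually changed between two of them (pool/filesystem name is made up):

    zfs diff tank/home@week02 tank/home@week03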
> What I'd
> really like is an option (maybe it exists) in ZFS to say when a block fails
> a checksum tell me which file it affects
It does exactly that.
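Assuming a pool called tank (name is made up), this should list the
affected files:

    zpool status -v tank

Files with unrecoverable errors show up at the bottom of the output,
under the permanent-errors section.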
On Wed, Aug 29, 2012 at 8:58 PM, Timothy Coalson wrote:
> As I understand it, the used space of a snapshot does not include anything
> that is in more than one snapshot.
True. It shows the amount that would be freed if you destroyed the
snapshot right away. Data held onto by more than one snapshot is not
counted in any individual snapshot's USED.
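If you want to see how much data is held by snapshots collectively, the
space breakdown shows it directly (dataset name is just an example):

    zfs list -o space tank/home

The USEDSNAP column covers everything that is referenced only by
snapshots of that dataset, including blocks shared by several of them.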
> Unfortunately, the Intel 520 does *not* power protect its
> on-board volatile cache (unlike the Intel 320/710 SSD).
>
> Intel has an eye-opening technology brief, describing the
> benefits of "power-loss data protection" at:
>
> http://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-s
Have you not seen my answer?
http://mail.opensolaris.org/pipermail/zfs-discuss/2012-August/052170.html
On Sat, Aug 4, 2012 at 12:00 AM, Burt Hailey wrote:
> We do hourly snapshots. Two days ago I deleted 100GB of
> data and did not see a corresponding increase in snapshot sizes. I’m new to
> zfs and am reading the zfs admin handbook but I wanted to post this to get
> some suggestions on what to look for.
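A rough sketch of what I would look at first, assuming the dataset is
called tank/data (adjust the name):

    zfs list -r -t snapshot -o name,used,refer,creation tank/data

Keep in mind that USED on an individual snapshot only counts blocks
unique to it; the 100GB you deleted is probably still referenced by
several snapshots at once, so it will not show up in any single
snapshot's USED.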
> It is normal for reads from mirrors to be faster than for a single disk
> because reads can be scheduled from either disk, with different I/Os being
> handled in parallel.
That assumes that there *are* outstanding requests to be scheduled in
parallel, which would only happen with multiple readers.
> 2) in the mirror case the write speed is cut by half, and the read
> speed is the same as a single disk. I'd expect about twice the
> performance for both reading and writing, maybe a bit less, but
> definitely more than measured.
I wouldn't expect mirrored read to be faster than single-disk read for
a single sequential stream.
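A crude way to see the effect is to compare one sequential reader with
several running in parallel (paths are hypothetical, and the files need
to be large enough not to be served from the ARC):

    dd if=/tank/big1 of=/dev/null bs=1M            # single stream
    dd if=/tank/big1 of=/dev/null bs=1M &          # two concurrent streams
    dd if=/tank/big2 of=/dev/null bs=1M &
    wait

With one stream the mirror behaves roughly like a single disk; with
independent streams both sides of the mirror can be kept busy.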
> Actually, a write to memory for a memory mapped file is more similar to
> write(2). If two programs have the same file mapped then the effect on the
> memory they share is instantaneous because it is the same physical memory.
> A mmapped file becomes shared memory as soon as it is mapped, at least when it is mapped with MAP_SHARED.
> It really makes no sense at all to
> have munmap(2) not imply msync(3C).
Why not? munmap(2) does basically the equivalent of write(2). In the
case of write, that is: a later read from the same location will see
the written data, unless another write happens in-between. If power
goes down following the write, the data may still be lost unless you
also called fsync(2). By the same token, munmap(2) gives no durability
guarantee either; that is what msync(3C) is for.
> when you say remove the device, I assume you mean simply make it unavailable
> for import (I can't remove it from the vdev).
Yes, that's what I meant.
> root@openindiana-01:/mnt# zpool import -d /dev/lofi
> pool: ZP-8T-RZ1-01
> id: 9952605666247778346
> state: FAULTED
> status: One or more devices contains corrupted data.
>> Have you also mounted the broken image as /dev/lofi/2?
>
> Yep.
Wouldn't it be better to just remove the corrupted device? This worked
just fine in my case.
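Concretely, assuming the broken image is the one behind /dev/lofi/2, I
would unmap it and import with only the good devices present:

    lofiadm -d /dev/lofi/2
    zpool import -d /dev/lofi ZP-8T-RZ1-01

With a single member missing, a raidz1 pool should still import in a
degraded state, provided the remaining devices are intact.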
> The situation now is I have dd'd the drives onto a NAS. These images are
> shared via NFS to a VM running Oracle Solaris 11 11/11 X86.
You should probably also try to use a current OpenIndiana or some
other Illumos distribution.
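For reference, this is roughly how I would wire the images up on the
test VM (paths are examples), importing read-only so nothing gets
written back to the images while experimenting:

    lofiadm -a /mnt/nas/disk1.img
    lofiadm -a /mnt/nas/disk2.img
    zpool import -d /dev/lofi -o readonly=on ZP-8T-RZ1-01

Read-only import is only available on newer releases, so check that
yours supports it first.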
> root@solaris-01:/mnt# zpool import -d /dev/lofi
> pool: ZP-8T-RZ1-01
> id: 9952605666247778346
> state: FAULTED
> status: One or more devices contains corrupted data.
> action: The pool cannot be imported due to damaged devices or data.
> see: http://www.sun.com/msg/ZFS-8000-5E
> config:
> Can I say
>
> USED-REFER=snapshot size ?
No. "USED" is the space that would be freed if you destroyed the
snapshot _right now_. This can change (and usually does) if you
destroy previous snapshots.
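If your zfs has the dry-run flag, you can ask directly how much a
destroy would reclaim without actually doing it (snapshot name is just
an example):

    zfs destroy -nv tank/home@week03

The reported number changes as neighbouring snapshots are destroyed,
because blocks that used to be shared with them become unique to the
ones that remain.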
> Two questions from a newbie.
>
> 1/ What does REFER mean in zfs list?
The amount of data that is reachable from the file system root. It's
just what I would call the contents of the file system.
> 2/ How can I know the total size of all snapshots for a partition?
> (OK, I can add them up, but is there a simpler way?)
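For question 2 there is a property that sums it up, so there is no need
to add the snapshots manually (dataset name is an example):

    zfs get usedbysnapshots tank/home

It reports the space that would be freed if all snapshots of that
dataset were destroyed.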
> I saw one team revert from ZoL (CentOS 6) back to ext on some backup servers
> for an application project, the killer was
> stat times (find running slow etc.), perhaps more layer 2 cache could have
> solved the problem, but it was easier to deploy ext/lvm2.
But stat times (think directory traversals with find) are mostly a
question of how much metadata fits in the cache.
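Two things worth checking on the ZoL boxes before giving up (dataset
name is an example):

    zfs set atime=off tank/backup        # directory reads during find no longer dirty atime
    cat /proc/spl/kstat/zfs/arcstats     # compare arc_meta_used with arc_meta_limit

If arc_meta_used keeps hitting arc_meta_limit, the metadata that find
needs is being evicted and stat times will suffer.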
After having read this mailing list for a little while, I get the
impression that there are at least some people who regularly
experience on-disk corruption that ZFS should be able to report and
handle. I’ve been running a raidz1 on three 1TB consumer disks for
approx. 2 years now (about 90% full), and so far ZFS has not reported a
single checksum error.
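Whether such corruption exists but simply goes unnoticed is easy to
check from time to time; a scrub reads every allocated block and
verifies its checksum (pool name is an example):

    zpool scrub tank
    zpool status tank     # scrub progress and any checksum errors found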
> The issue is definitely not specific to ZFS. For example, the whole OS
> depends on relable memory content in order to function. Likewise, no one
> likes it if characters mysteriously change in their word processing
> documents.
I don’t care too much if a single document gets corrupted – there
Inspired by the paper "End-to-end Data Integrity for File Systems: A
ZFS Case Study" [1], I've been thinking if it is possible to devise a way,
in which a minimal in-memory data corruption would cause massive data
loss. I could imagine a scenario where an entire directory branch
drops off the tree