No. From what I've seen, ZFS will periodically flush writes from the
ZIL to disk. You may run into a "read starvation" situation where ZFS is
so busy flushing to disk that reads barely get serviced. If you have VMs
whose developers expect low-latency interactivity, they will get unhappy.
Trust me. :)
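You can watch it happen during the periodic flushes; the pool name
"tank" below is hypothetical:

  # Per-device service times; asvc_t spikes on writes while reads stall
  $ iostat -xn 1
  # Per-pool read/write ops and bandwidth, sampled every second
  $ zpool iostat tank 1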
Apparently, I must not be using the right web form...
I would update the case via the web, and it seemed like no one
actually saw it. Or some other engineer would come along and ask me the
same set of questions that had already been answered (and recorded in the
case records!).
Another st
On 5/18/10 12:49 PM, Roy Sigurd Karlsbakk wrote:
- "Paul Choi" skrev:
I've been reading this list for a while, and there's lots of discussion
about b134 and deduplication. I see some stuff about snapshots not being
destroyed, and maybe some recovery issues. What I'm wondering is, what
are the chances of a dozen VMs suddenly losing their datastore? I'd love
to hear about your experience.
Thanks,
-Paul Choi
Hello,
Is it possible to replicate an entire zpool with AVS? From what I can
see, you can replicate a zvol, because AVS is filesystem-agnostic. I can
create zvols within a pool, and AVS can replicate those, but
that's not really what I want.
If I create a zpool called "disk1",
paulc..
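For what it's worth, per-zvol replication with AVS/SNDR looks roughly
like the sketch below. The host names, the "disk1" pool, and the
volume/bitmap names and sizes are all hypothetical:

  # On both hosts: a data zvol plus a bitmap zvol for SNDR's dirty tracking
  zfs create -V 100G disk1/vol1
  zfs create -V 1G disk1/vol1-bitmap
  # Enable replication of the zvol's block device, primary -> secondary
  sndradm -e primhost /dev/zvol/rdsk/disk1/vol1 /dev/zvol/rdsk/disk1/vol1-bitmap \
          sechost /dev/zvol/rdsk/disk1/vol1 /dev/zvol/rdsk/disk1/vol1-bitmap \
          ip async

That still leaves you replicating device by device, with no pool-level
handle, which is exactly the limitation described above.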
Hm. That's odd. "zpool clear" should've cleared the list of errors.
Unless you were accessing files at the same time, so more checksum
errors were being reported on reads.
As for "zpool scrub", there's no benefit in your case. Since you are
already reading from the zpool, and checksums are verified on every
read, a scrub would just traverse the same data again.
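To see this in action without scrubbing, you can just read the suspect
files and watch the counters; the pool and file names here are
hypothetical:

  # Reading a file verifies its checksums, much like a scrub would
  $ dd if=/tank/vmstore/disk0.vmdk of=/dev/null bs=1M
  # Any failures show up in the CKSUM counters and the list of affected files
  $ zpool status -v tank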
The files are reported as corrupted somehow, yet we can read them
just fine. I wish I could tell which ones are really bad, so we
wouldn't have to recreate them unnecessarily. They are mirrored in
various places, or can be recreated via reprocessing, but
recreating/restoring that many files is no easy task.
Thanks
"zpool clear" just clears the list of errors (and # of checksum errors)
from its stats. It does not modify the filesystem in any manner. You run
"zpool clear" to make the zpool forget that it ever had any issues.
-Paul
Jonathan Loran wrote:
Hi list,
First off:
# cat /etc/release
zfs_file_data_buf". Look for mem_inuse.
Running "::memstat" in "mdb -k" also shows Kernel memory usage (probably
includes ZFS overhead) and ZFS File Data memory usage. But it's
painfully slow to run. kstat is probably better.
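Something like this should work:

  # Kernel-wide memory summary via mdb; can take a long time on big systems
  $ echo ::memstat | mdb -k
  # Quicker: pull the zfs_file_data_buf numbers straight out of kstat
  $ kstat -p | grep zfs_file_data_buf | grep mem_inuse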
-Paul Choi
Richard Elling wrote: