One of us found the following:
The presence of snapshots can cause some unexpected behavior when you attempt
to free space. Typically, given appropriate permissions, you can remove a file
from a full file system, and this action results in more space becoming
available in the file system. However, if the file being removed still
exists in a snapshot of the file system, no space is gained from the
deletion; the blocks it used remain referenced by the snapshot.
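A minimal illustration of that behavior (the pool, dataset, and snapshot
names below are made up, not from any post in this thread):

   # /tank/fs is full; deleting a large file looks like it should help
   rm /tank/fs/bigfile
   df -h /tank/fs                     # little or no space comes back: the
                                      # blocks are still held by snapshots
   zfs list -r -t snapshot tank/fs    # the snapshots' USED column grows instead
   zfs destroy tank/fs@oldest-snap    # space is only freed once no snapshot
                                      # references those blocks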
Yes. We run a snapshot job from cron that replicates to a disaster recovery site.
NAME                      USED  AVAIL  REFER  MOUNTPOINT
po...@20100930-22:20:00  13.2M      -  19.5T  -
po...@20101001-01:20:00  4.35M      -  19.5T  -
po...@20101001-04:20:00      0      -  19.5T  -
po...@20101001-07:20:00
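Roughly what such a cron job does, sketched with made-up pool and host
names (the actual script is certainly different):

   # take a timestamped snapshot and send the changes since the previous
   # one to the DR host; $PREV would be tracked by the script
   NOW=$(date +%Y%m%d-%H:%M:%S)
   zfs snapshot tank@$NOW
   zfs send -i tank@$PREV tank@$NOW | ssh dr-host zfs receive -F drpool/tank
   # snapshots no longer needed on either side can then be destroyed
   # to release the space they pin
   zfs destroy tank@old-snapshot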
Yes, you're correct. There was a typo when I copied to the forum.
I have a 20 TB pool on a mount point that is made up of 42 disks from an EMC
SAN. We were running out of space and down to 40 GB left (we load about
8 GB/day) and have not yet received more disks for our SAN. Using df -h results in:
Filesystem             size   used  avail capacity  Mounted on
pool1
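df only reports the file-system view here; to see where the space has
actually gone (snapshots included), something along these lines is more
informative (output omitted):

   zpool list pool1                        # pool-wide size, allocated and free
   zfs list -r -o space pool1              # splits usage into usedbysnapshots,
                                           # usedbydataset, usedbychildren, ...
   zfs list -r -t snapshot -s used pool1   # snapshots sorted by space held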
Never mind.
It looks like the controller is flaky. Neither disk in the mirror is clean.
Attempts to back up and recover the remaining disk produced I/O errors that
were traced to the controller.
Thanks for your help, Victor.
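For anyone who finds this thread later, these are the kinds of Solaris
checks that point at a controller rather than the disks (output omitted):

   zpool status -v     # per-device read/write/checksum error counters
   iostat -En          # soft/hard/transport error counts per device
   fmdump -eV          # FMA error telemetry, including the faulted path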
No. Only slice 6, from what I understand.
I didn't create this pool (the person who did has left the company), and all
I know is that it was mounted on /oraprod before it faulted.
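If it helps, one way to confirm whether slice 6 really carries the pool is
to read the ZFS labels directly (the device name below is only a guess):

   zdb -l /dev/dsk/c1t1d0s6    # dump the ZFS labels, if any, on that slice
   zpool import                # list pools visible but not yet imported
   zpool import -f <poolname>  # forced import, once the device is healthy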
We have a production Sun Fire V240 that had a ZFS mirror until this week. One
of the drives (c1t3d0) in the mirror failed.
The system was shut down and the bad disk replaced without an export.
I don't know what happened next, but by the time I got involved there was no
evidence that the remaining good disk had ever been part of the mirror.
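For comparison, the sequence that would normally be used to swap out the
failed half of a live mirror (the pool name here is hypothetical):

   zpool offline tank c1t3d0    # take the failing half of the mirror offline
   # ...physically swap the drive; no export or shutdown is required...
   zpool replace tank c1t3d0    # start the resilver onto the new disk
   zpool status tank            # watch the resilver progress and completion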
rbourbon writes:
> I don't think that was the point of the post. I read it to mean that
> some customers, because of considerations outside of ZFS, need to use
> storage arrays in ways that may not allow ZFS to develop its full
> potential.
I've been following this thread because we ha
> Roch - PAE wrote:
> The hard part is getting a set of simple requirements. As you go into
> more complex data center environments you get hit with older Solaris
> revs, other OSs, SOX compliance issues, etc. etc. etc. The world where
> most of us seem to be playing with ZFS is on the lower end