On Oct 7, 2010, at 11:40 AM, Jim Sloey wrote:
> One of us found the following:
>
> The presence of snapshots can cause some unexpected behavior when you attempt
> to free space. Typically, given appropriate permissions, you can remove a file
> from a full file system, and this action results in more space becoming
> available in the file system. However, if the file to be removed exists in a
> snapshot of the file system, then no space is gained from the file deletion.
Yes. We run a snap in cron to a disaster recovery site.
NAME                     USED  AVAIL  REFER  MOUNTPOINT
po...@20100930-22:20:00  13.2M     -  19.5T  -
po...@20101001-01:20:00  4.35M     -  19.5T  -
po...@20101001-04:20:00      0     -  19.5T  -
po...@20101001-07:20:00
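That would explain it: while a snapshot still references a file's blocks, deleting the file frees little or nothing, and the space only comes back when the snapshot goes. A quick sketch of the behavior (the dataset and snapshot names here are made up for illustration, not taken from your pool):

```shell
# Illustrative names only -- substitute your real dataset/snapshot names.
zfs snapshot pool1/data@hold      # snapshot now pins every block in the dataset
rm /pool1/data/bigfile            # file disappears from the live filesystem,
                                  #   but its blocks are still referenced by @hold
zfs list -o name,used,avail pool1/data   # USED barely moves
zfs destroy pool1/data@hold       # the space is reclaimed only now
```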
Yes, you're correct. There was a typo when I copied to the forum.
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Forgive me, but isn't this incorrect:
---
mv /pool1/000 /pool1/000d
---
rm -rf /pool1/000
Shouldn't that last line be
rm -rf /pool1/000d
??
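Side note: even with the path corrected, that rm won't return any space while snapshots still reference those blocks. Something like the following shows how much space the snapshots are pinning (the dataset name here is illustrative):

```shell
# Illustrative dataset name; the USEDSNAP column is the space
# held by snapshots rather than by the live filesystem.
zfs list -o space pool1
```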
On 8 October 2010 04:32, Remco Lengers wrote:
> any snapshots?
>
> *zfs list -t snapshot*
>
> ..Remco
>
>
>
> On 10/7/10 7:24 PM, Jim Sloey wrote:
I have a 20Tb pool on a mount point that is made up of 42 disks from an EMC
SAN. We were running out of space and down to 40Gb left (loading 8Gb/day) and
have not received disk for our SAN. Using df -h results in:
Filesystem             size   used  avail  capacity  Mounted on
pool1
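For what it's worth, the figures in the post give very little runway; a quick back-of-the-envelope:

```shell
# Figures from the post: 40 GB free, loading 8 GB/day.
avail_gb=40
rate_gb_per_day=8
echo $(( avail_gb / rate_gb_per_day ))   # days until the pool is full
# prints 5
```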