> For various reasons, I can't post the zfs list type
here is one, and it seems in line with expected NetApp(tm)
type usage, considering the "cluster" size differences.
14 % cat snap_sched
#!/bin/sh
snaps=15
for fs in Videos Movies Music users local
do
i=$snaps
zfs destroy zfs/[EMAIL
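The script is cut off above (the archiver has also mangled the `$fs@...` argument into an email-address redaction). A minimal sketch of a rotation loop consistent with what survives might look like the following; the `snap` name prefix, the pool name `zfs`, and the exact rotation order are assumptions, and `ZFS` defaults to `echo` so the sketch only prints what it would do (set `ZFS=zfs` to really rotate):

```shell
#!/bin/sh
# Hedged reconstruction of the truncated snap_sched loop above.
# "snap" prefix and "zfs" pool name are assumptions; ZFS=echo makes
# this a dry run that prints the commands instead of executing them.
ZFS=${ZFS:-echo}
snaps=15

rotate() {
    fs=$1
    # Drop the oldest snapshot...
    $ZFS destroy "zfs/$fs@snap.$snaps"
    # ...shift the remaining snapshots up one slot...
    i=$snaps
    while [ "$i" -gt 1 ]; do
        prev=`expr $i - 1`
        $ZFS rename "zfs/$fs@snap.$prev" "zfs/$fs@snap.$i"
        i=$prev
    done
    # ...and take a fresh snapshot in slot 1.
    $ZFS snapshot "zfs/$fs@snap.1"
}

for fs in Videos Movies Music users local; do
    rotate "$fs"
done
```

Run from cron every 4 hours, this keeps a fixed window of 15 snapshots per filesystem, which matches the `snaps=15` and the 4-hourly schedule mentioned later in the thread.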
On 8/24/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
On Thu, Aug 24, 2006 at 02:21:33PM -0700, Joe Little wrote:
> well, by deleting my 4-hourlies I reclaimed most of the data. To
> answer some of the questions, it's about 15 filesystems (descendants
> included). I'm aware of the space used by snapshots overlapping. I was
> looking at the total space (
On 8/24/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
On Thu, Aug 24, 2006 at 07:07:45AM -0700, Joe Little wrote:
> We finally flipped the switch on one of our ZFS-based servers, with
> approximately 1TB of 2.8TB (3 stripes of 950GB or so, each of which is
> a RAID5 volume on the adaptec card). We have snapshots every 4 hours
> for the first few days.