On Wed, Feb 6, 2013 at 4:26 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
<opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
>
> When I used "zpool list" after the system crashed, I saw this:
> NAME      SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
> storage   928G   568G   360G         -    61%  1.00x  ONLINE  -
>
> I did some cleanup, so I could turn things back on ... Freed up about 4G.
>
> Now, when I use "zpool list" I see this:
> NAME      SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
> storage   928G   564G   364G         -    60%  1.00x  ONLINE  -
>
> When I use "zfs list storage" I see this:
> NAME      USED  AVAIL  REFER  MOUNTPOINT
> storage   909G  4.01G  32.5K  /storage
>
> So I guess the lesson is (a) refreservation and zvol alone aren't enough to
> ensure your VMs will stay up, and (b) if you want to know how much room is
> *actually* available, as in "usable," as in "how much can I write before I
> run out of space," you should use "zfs list" and not "zpool list".

Could you run "zfs list -o space storage"? It will show how much space is
used by the data itself, by snapshots, by refreservations, and by children
(if any). I read somewhere that one should always use "zfs list" to
determine how much space is actually available for writing on a given
filesystem.
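
For what it's worth, if I remember the output right, the columns that
command prints are NAME, AVAIL, USED, USEDSNAP, USEDDS, USEDREFRESERV and
USEDCHILD, something like:

  # zfs list -o space storage
  NAME     AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
  storage    ...    ...       ...     ...            ...        ...

USEDREFRESERV is the interesting one here: space held back by a
refreservation but not actually written yet, which "zpool list" still
counts as free.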

I have an idea, but it's a long shot. If you created more than one zfs on
that pool and added a reservation to each one, then that space is still
technically unallocated as far as "zpool list" is concerned, but it is not
available for writing according to "zfs list". My guess is that one or more
of your VMs grew beyond its "refreservation" and crashed for lack of free
space on its zfs, while some of the other VMs aren't using their full
refreservation (yet), so between them they could still write 360GB of
stuff to the pool.
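
To check whether that's what happened, something along these lines would
show it (dataset name made up, obviously). Creating a zvol reserves roughly
its full size up front, so "zfs list" loses that much AVAIL immediately,
while "zpool list" keeps counting it as FREE until blocks are actually
written:

  # zfs create -V 50G storage/vm1   <- gets an implicit ~50G refreservation
  # zpool list storage              <- FREE barely moves
  # zfs list storage                <- AVAIL drops by the whole ~50G
  # zfs get -r refreservation,usedbyrefreservation storage
                                    <- shows which datasets hold reserved space

If the usedbyrefreservation figures on your existing datasets add up to
roughly the ~345G gap between the 909G USED that "zfs list" reports and the
564G ALLOC from "zpool list", that would confirm it.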

Jan
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
