--On 06 January 2009 16:37 -0800 Carson Gaspar <car...@taltos.org> wrote:

> On 1/6/2009 4:19 PM, Sam wrote:
>> I was hoping that this was the problem (because just buying more
>> discs is the cheapest solution given time=$$) but running it by
>> somebody at work they said going over 90% can cause decreased
>> performance but is unlikely to cause the strange errors I'm seeing.
>> However, I think I'll stick a 1TB drive in as a new volume and pull
>> some data onto it to bring the zpool down to <75% capacity and see if
>> that helps though anyway.  Probably update the OS to 2008.11 as
>> well.
>
> Pool corruption is _always_ a bug. It may be ZFS, or your block devices,
> but something is broken.

Agreed - it shouldn't break just because you're using over 90%. Checking 
one of my systems here, I have:

"
Filesystem   1K-blocks        Used     Avail Capacity  Mounted on
vol          2567606528 2403849728 163756800    94%    /vol
"

Been running like that for months without issue... Whilst it may not be 
'ideal' to run it over 90% (I suspect it's worse for pools made up of 
different-sized devices / redundancy levels), it's not broken in any shape 
or form, with GBs of reads/writes going to that filesystem.

-Kp
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
