> Caveat: do not enable nonvolatile write cache for UFS.
Correction: do not enable *volatile* write cache for UFS :-)
--
Dan.
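
For the archives: on Solaris you can check and change a drive's volatile
write cache from format's expert mode. This is only a sketch -- the device
name below is made up, and the cache menu only shows up where the driver
exposes it:

  # format -e c1t0d0
  format> cache
  cache> write_cache
  write_cache> display      (is the volatile cache currently on?)
  write_cache> disable      (the safe setting if UFS lives on this disk)
  write_cache> quit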
MC wrote:
> To expand on this:
>
>> The recommended use of whole disks is for drives with volatile
>> write caches where ZFS will enable the cache if it owns the whole disk.
>
> Does ZFS really never use disk cache when working with a disk slice? Is
> there any way to force it to use the disk cache?

This question doesn't make sense. ZFS doesn't turn the drive's write cache
off when it is given a slice; it just won't enable it on its own, because
another slice on the same disk may hold something (UFS, for instance) that
can't tolerate a volatile write cache. If the cache is already on, ZFS will
use it like anything else does.
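
And to answer the "force it" part: you can enable the drive's cache yourself
from the same format -e menu shown above, and ZFS will benefit from it even
on a slice, since it issues its own cache-flush requests either way. Only do
this if everything else on that disk can tolerate a volatile write cache.
Again a sketch with a made-up device name:

  # format -e c1t0d0
  format> cache
  cache> write_cache
  write_cache> enable
  write_cache> quit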
> This is a problem for replacement, not creation.
You're talking about solving the problem in the future? I'm talking about
working around the problem today. :) This isn't a fluffy dream problem. I
ran into this last month when an RMA'd drive wouldn't fit back into a RAID5
array. RAIDZ is in the same boat.
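
One way to work around it today rather than waiting for an RFE: don't give
raidz the whole disks. Slice each drive a little under its nominal capacity
and build the pool from the slices, so any reasonable "500 GB" replacement
still fits. The cost is exactly the cache behaviour discussed above -- ZFS
no longer owns the whole disk, so it won't enable the write cache for you.
Device names and the 495 GB figure below are only an example:

  # format c1t0d0
      (in the partition menu, make slice 0 a bit smaller than the disk,
       say 495 GB; repeat for the other drives)
  # zpool create tank raidz c1t0d0s0 c1t1d0s0 c1t2d0s0

  Later, when the RMA'd replacement comes back a few MB short:

  # zpool replace tank c1t2d0s0 c3t0d0s0
      (succeeds, because a 495 GB slice still fits on the new disk)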
Thanks for the comprehensive replies!
I'll need some baby speak on this one though:
> The recommended use of whole disks is for drives with volatile write caches
> where ZFS will enable the cache if it owns the whole disk. There may be an
> RFE lurking here, but it might be tricky to correctly [...]
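
In plainer terms: "owns the whole disk" means you handed zpool the bare
device rather than a slice of it. A sketch, with made-up device names:

  # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
      (whole disks: ZFS labels them itself and turns on each drive's
       volatile write cache, since nothing else can be living there)

  # zpool create tank raidz c1t0d0s0 c1t1d0s0 c1t2d0s0
      (slices: ZFS leaves the cache setting alone, because another slice
       on the same disk might hold UFS, swap, or anything else that
       can't tolerate a volatile write cache)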
MC wrote:
> The situation: a three 500gb disk raidz array. One disk breaks and you
> replace it with a new one. But the new 500gb disk is slightly smaller
> than the smallest disk in the array.
This is quite a problem for RAID arrays, too. It is why vendors use custom
labels for disks. When you get a replacement drive through the array vendor,
it has been right-sized to match, so it always fits.
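
The variance is easy to see for yourself: two drives sold as "500gb" rarely
report exactly the same capacity. Something like the following shows the raw
numbers (device names are examples; s2 assumes an SMI label):

  # iostat -En c1t0d0
      (look at the "Size:" line for the exact byte count)
  # prtvtoc /dev/rdsk/c1t0d0s2
      (sector counts; compare across the drives)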
The situation: a three 500gb disk raidz array. One disk breaks and you replace
it with a new one. But the new 500gb disk is slightly smaller than the
smallest disk in the array.
I presume the disk would not be accepted into the array because the zpool
replace entry on the zpool man page says the replacement device must be at
least as large as the smallest device in the raidz configuration.
. . .
So I figure the only way to build smaller-than-max-disk-size functionality [...]
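
That is the behaviour you would see if you tried it -- a sketch with made-up
device names, and the exact error text may vary by release:

  # zpool replace tank c1t2d0 c2t0d0
  cannot replace c1t2d0 with c2t0d0: device is too small

  # zpool status tank
      (the pool stays DEGRADED with the old device listed until a
       big-enough replacement is attached)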