> In a case where a new vdev is added to an almost full zpool, more of
> the writes should land on the empty device though, right? So maybe 2
> slabs will land on the new vdev for every one that goes to a
> previously existing vdev.
(Un)available disk space influences vdev selection: new writes are spread
across all top-level vdevs, but the allocator favours the vdev with the most
free space, so the freshly added device takes a correspondingly larger share
of the new allocations.
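If you want to see the split rather than guess at it, the per-vdev numbers
are visible from the command line. A quick way to watch it, assuming a pool
called tank (the name is just a placeholder):

    zpool iostat -v tank 5    # per-vdev capacity and write bandwidth, every 5s

After adding the new vdev, its allocated space should climb noticeably faster
than on the older devices until the pool evens out.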
> MG> How can I set up a ZVOL that's accessible by non-root users, too?
> MG> The intent is to use sparse ZVOLs as raw disks in virtualization
> MG> (reducing overhead compared to file-based virtual volumes).
>
> change zvol permissions to whatever you want?
The nodes in /dev/zvol are all 777 for me.
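For the virtualization use case, something along these lines is a reasonable
starting point. The pool, volume and user names here are made up, and the
exact /dev path (and whether the ownership survives a reboot) differs between
Solaris-derived systems and Linux:

    zfs create -s -V 20g tank/vmdisk0            # -s makes it a sparse volume
    chown qemu:qemu /dev/zvol/rdsk/tank/vmdisk0  # raw node on Solaris
    chmod 660 /dev/zvol/rdsk/tank/vmdisk0

On Linux the node appears as /dev/zvol/tank/vmdisk0 (a symlink to a zd
device), and a udev rule is the more durable way to keep the ownership across
reboots.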
>> We have recently discovered the same issue on one of our internal build
>> machines. We have a daily bringover of the Teamware onnv-gate that is
>> snapshotted when it completes, and as such we can never run a full scrub.
>> Given some of our storage is reaching (or past) EOSL, I really want to
>> be able to run a scrub through to completion on it.
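On builds where taking a snapshot still resets a running scrub, the only
workaround I know of is to pause the snapshot job long enough for one scrub
to complete. The pool name below is a placeholder:

    zpool scrub tank
    zpool status tank    # shows "scrub in progress" and the percent complete

zpool status also reports when the last scrub finished, which is an easy way
to confirm it actually ran to the end.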
> Is a 16GB ARC size not considered to be enough? ;-)
>
> I was only describing the behavior that I observed. It seems to me
> that when large files are written very quickly and the file grows
> bigger than the ARC, whatever is in the ARC is mostly stale and does
> not help much.
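If that streaming data isn't going to be re-read any time soon, one thing
worth trying (on builds that have the property) is telling ZFS to cache only
metadata for that dataset, so the bulk writes stop evicting more useful data
from the ARC. The dataset name is just an example:

    zfs set primarycache=metadata tank/scratch
    kstat -p zfs:0:arcstats:size    # current ARC size in bytes, on Solaris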
> Also note: the checksums don't have enough information to
> recreate the data for very many bit changes. Hashes might,
> but I don't know anyone using sha256.
My ~/Documents uses sha256 checksums, but then again, it also uses copies=2
:)
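Both are ordinary per-dataset properties, and both only affect blocks written
after they are set; existing data keeps its old checksum and single copy
until it is rewritten. With an example dataset name:

    zfs set checksum=sha256 tank/home/mg/Documents
    zfs set copies=2 tank/home/mg/Documents
    zfs get checksum,copies tank/home/mg/Documents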
-mg
> Maybe an option to scrub... something that says "work on bitflips for
> bad blocks", or "work on bitflips for bad blocks in this file"
I've suggested this, too. But in retrospect, there's no way to detect
whether a bad block is indeed due to a bitflip or not. So on each checksum
error, ZFS would just have to try flipping every bit in turn and re-verify
the checksum, with no guarantee that any flip gives back the original data.
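What you can do today is find out exactly which files sit on unrecoverable
blocks, and restore those from a backup or another replica. The pool name is
a placeholder; the error list fills in once a scrub (or normal reads) has
actually hit the bad blocks:

    zpool status -v tank    # -v lists files affected by permanent errors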