Blake writes:
> I'd be careful about raidz unless you have either:
>
> 1 - automatic notification of failure set up using fmadm
>
> 2 - at least one hot spare
Sorry to be so dense here, but can you expand a little on what a `hot
spare' is? Do you mean just a spare similar-sized disk to use if one fails?
I'd be careful about raidz unless you have either:
1 - automatic notification of failure set up using fmadm
2 - at least one hot spare
Because raidz is parity-based (it does some math-magic to give you
redundancy), replacing a disk that's failed can take a very long time
compared to mirror resilvering.
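For reference, a hot spare can be specified when the pool is created or added later; a minimal sketch, using the pool and disks from this thread plus a made-up spare device c5d0:

# create a raidz1 pool with a dedicated hot spare
zpool create zbk raidz1 c3d0 c4d0 c4d1 spare c5d0

# or add a spare to an existing pool later
zpool add zbk spare c5d0

# the spare shows up in its own "spares" section of the output
zpool status zbk

With a spare in place, FMA will normally activate it automatically when a disk faults, which pairs well with the failure notification mentioned above.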
Richard Elling writes:
>> With five disks, raidz1 becomes useful.
>
> +1
> also remember that you can add mirrors later. For best data availability,
> start with 2 mirrored disks, each split in half. As your data requirements
> grow, add mirrored halves. For diversity, make each side (half) of each
> mirror a different controller or disk vendor.
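A rough sketch of that growth path, with made-up pool and device names (each added vdev is another two-way mirror):

# start with a single mirrored pair
zpool create tank mirror c1t0d0 c2t0d0

# later, grow the pool by adding a second mirrored pair,
# ideally with the two sides on different controllers
zpool add tank mirror c1t1d0 c2t1d0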
Bob Friesenhahn writes:
> On Thu, 19 Mar 2009, Harry Putnam wrote:
>> I've created a zpool in raidz1 configuration with:
>>
>> zpool create zbk raidz1 c3d0 c4d0 c4d1
>
> This is not a very useful configuration. With this number of disks,
> it is best to use two of them to build a mirror, and save the other
> disk for something else.
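In concrete terms, the suggested layout for those three disks might look like this (zbk and the device names come from the original post; keeping the third disk attached as a hot spare is just one option):

# two-way mirror from two of the disks
zpool create zbk mirror c3d0 c4d0

# optionally keep the third disk around as a hot spare
zpool add zbk spare c4d1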
Tomas Ögren writes:
>> I was under the impression raidz1 would take something like 20%.. but
>> this is more like 33.33%.
>>
>> So, is this to be expected or is something wrong here?
>
> Not a percentage at all. raidz1 "takes" 1 disk; raidz2 takes 2 disks.
> This is to be able to handle 1 vs 2 disk failures.
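Worked out for the original pool: 3 x 500 GB in raidz1 loses one disk's worth to parity, leaving roughly 2 x 500 GB (about 1 TB) usable, i.e. 1/3 (33%) overhead; with 5 disks the same one-disk overhead drops to 1/5 (20%). You can see the difference on a live pool, since zpool list reports the raw size of a raidz pool while zfs list shows the space actually available to datasets:

# raw pool size, parity included
zpool list zbk

# usable space as seen by datasets
zfs list zbk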
José Gomes wrote:
Can we assume that any snapshot listed by either 'zfs list -t snapshot'
or 'ls .zfs/snapshot' and previously created with 'zfs receive' is
complete and correct? Or is it possible for a 'zfs receive' command to
fail (corrupt/truncated stream, sigpipe, etc...) and a corrupt or
incomplete snapshot to still be listed?
Bob Friesenhahn wrote:
On Thu, 19 Mar 2009, Harry Putnam wrote:
I've created a zpool in raidz1 configuration with:
zpool create zbk raidz1 c3d0 c4d0 c4d1
This is not a very useful configuration. With this number of disks,
it is best to use two of them to build a mirror, and save the other
disk for something else.
José Gomes wrote:
Can we assume that any snapshot listed by either 'zfs list -t
snapshot' or 'ls .zfs/snapshot' and previously created with 'zfs
receive' is complete and correct? Or is it possible for a 'zfs
receive' command to fail (corrupt/truncated stream, sigpipe, etc...)
and a corrupt or incomplete snapshot to still be listed?
Neal Pollack wrote:
Hi:
What is the most common practice for allocating (choosing) the two
disks used for
the boot drives, in a zfs root install, for the mirrored rpool?
The docs for thumper, and many blogs, always point at cfgadm slots 0
and 1,
which are sata3/0 and sata3/4, which most often map to c5t0d0 and c5t4d0.
Uh, I should probably clarify some things (I was too quick to hit
send):
> IMO the fundamental problem is that the only way to achieve a write
> barrier is fsync() (disregarding direct I/O etc). Again I would just
> like an fbarrier() as I've mentioned on the list previously. It seems
Of course i
Can we assume that any snapshot listed by either 'zfs list -t snapshot' or
'ls .zfs/snapshot' and previously created with 'zfs receive' is complete and
correct? Or is it possible for a 'zfs receive' command to fail
(corrupt/truncated stream, sigpipe, etc...) and a corrupt or incomplete
snapshot to still be listed?
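For context, the pattern being asked about looks roughly like this; the pool, dataset, and snapshot names are invented, and the ls assumes the default mountpoint:

# replicate a snapshot into a backup pool
zfs send tank/data@monday | zfs receive zbk/data

# the received snapshot then shows up in both of the places mentioned above
zfs list -t snapshot -r zbk/data
ls /zbk/data/.zfs/snapshot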
James,
The links to the Part 1 and Part 2 demos on this page
(http://www.opensolaris.org/os/project/avs/Demos/) appear to be broken.
http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V1/
http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V2/
They still work for me. What
On Thu, 19 Mar 2009, Harry Putnam wrote:
I've created a zpool in raidz1 configuration with:
zpool create zbk raidz1 c3d0 c4d0 c4d1
This is not a very useful configuration. With this number of disks,
it is best to use two of them to build a mirror, and save the other
disk for something else.
> fsync() is, indeed, expensive. Lots of calls to fsync() that are not
> necessary for correct application operation EXCEPT as a workaround for
> lame filesystem re-ordering are a sure way to kill performance.
IMO the fundamental problem is that the only way to achieve a write
barrier is fsync() (disregarding direct I/O etc).
On 19 March, 2009 - Harry Putnam sent me these 1,4K bytes:
> I'm finally getting close to the setup I wanted, after quite a bit of
> experimentation and bugging these lists endlessly.
>
> So first, thanks for your tolerance and patience.
>
> My setup consists of 4 disks. One holds the OS (rpool) and 3 more, all
> the same model and brand, all 500 GB.
This verifies my guess:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations
On Thu, Mar 19, 2009 at 6:57 PM, Blake wrote:
> IIRC, that's about right. If you look at the zfs best practices wiki
> (genunix.org I think?), there should be some space calculations linked
> in there somewhere.
IIRC, that's about right. If you look at the zfs best practices wiki
(genunix.org I think?), there should be some space calculations linked
in there somewhere.
On Thu, Mar 19, 2009 at 6:50 PM, Harry Putnam wrote:
> I'm finally getting close to the setup I wanted, after quite a bit of
> experimentation and bugging these lists endlessly.
I'm finally getting close to the setup I wanted, after quite a bit of
experimentation and bugging these lists endlessly.
So first, thanks for your tolerance and patience.
My setup consists of 4 disks. One holds the OS (rpool) and 3 more, all
the same model and brand, all 500 GB.
I've created a zpool in raidz1 configuration with:
zpool create zbk raidz1 c3d0 c4d0 c4d1
Hi Neal,
This example needs to be updated with a ZFS root pool. It could
also be that I mapped the wrong boot disks in this example.
You can name the root pool whatever you want: rpool, mpool,
mypool.
In these examples, I was using rpool for RAIDZ pool and mpool
for mirrored pool, not knowing
Hi:
What is the most common practice for allocating (choosing) the two disks
used for
the boot drives, in a zfs root install, for the mirrored rpool?
The docs for thumper, and many blogs, always point at cfgadm slots 0 and 1,
which are sata3/0 and sata3/4, which most often map to c5t0d0 and c5t4d0.
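For what it's worth, once the two slots are identified, the usual sequence looks roughly like this on x86 (SPARC uses installboot instead of installgrub); the device names simply follow the thumper mapping above:

# confirm how the cfgadm/sata slots map to device names
cfgadm -al

# attach the second disk as a mirror of the existing root disk
zpool attach rpool c5t0d0s0 c5t4d0s0

# put boot blocks on the new half of the mirror so either disk can boot
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0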
> "bf" == Bob Friesenhahn writes:
bf> If ZFS does try to order its disk updates in chronological
bf> order without prioritizing metadata updates over data, then
bf> the risk is minimized.
AIUI it doesn't exactly order them, just puts them into 5-second
chunks. So it rolls the on-disk state forward one chunk at a time.
On Thu, 19 Mar 2009, Miles Nordin wrote:
And the guarantees ARE minimal---just:
http://www.google.com/search?q=POSIX+%22crash+consistency%22
and you'll find that even people against Ts'o who want to change ext4
still agree POSIX is on Ts'o's side.
Clearly I am guilty of inflated expectations.
Ian Collins wrote:
Darren J Moffat wrote:
Ian Collins wrote:
Cherry Shu wrote:
Are there any plans for an API that would allow ZFS commands, including
snapshot/rollback, to be integrated with a customer's application?
libzfs.h?
The API in there is Contracted Consolidation Private. Note that
private doesn't ...
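Until a public API exists, one common approach is simply to shell out to the zfs command from the application; a minimal sketch with invented dataset and snapshot names:

# take a snapshot before the application makes changes
zfs snapshot tank/app@pre-upgrade

# roll back if something goes wrong (-r also destroys any newer snapshots)
zfs rollback -r tank/app@pre-upgrade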