> From: Neil Perrin [mailto:neil.per...@oracle.com]
>
> In general - yes, but it really depends. Multiple synchronous writes of any
> size
> across multiple file systems will fan out across the log devices. That is
> because there is a separate independent log chain for each file system.
>
> Also
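To illustrate the point about independent log chains: since each file
system gets its own chain, spreading a sync-heavy workload across several
datasets lets the ZIL fan out over multiple slogs in parallel. A minimal
sketch, with hypothetical pool and dataset names:

    # One dataset per workload -> one independent log chain each
    zfs create tank/vm01
    zfs create tank/vm02
    zfs create tank/vm03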
On 10/04/12 15:59, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Neil Perrin
The ZIL code chains blocks together and these are allocated round robin
among slogs or, if they don't exist, then the main pool devices.
On Oct 4, 2012, at 1:33 PM, "Schweiss, Chip" wrote:
> Again thanks for the input and clarifications.
>
> I would like to clarify the ZIL performance numbers I was seeing discussed
> on other forums. Right now I'm getting streaming performance of sync
> writes at about 1 Gbit/s.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Neil Perrin
>
> The ZIL code chains blocks together and these are allocated round robin
> among slogs or, if they don't exist, then the main pool devices.
So, if somebody is doing sync writes
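One way to watch that round-robin allocation in practice, assuming a pool
named tank with dedicated log devices, is to sample the per-vdev write
counters while a sync-heavy workload runs:

    # Per-vdev I/O, sampled every second; under sync load the writes
    # should rotate across the devices listed under 'logs'
    zpool iostat -v tank 1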
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Schweiss, Chip
>
> If I get to build this system, it will house a decent-size VMware
> NFS store with 200+ VMs, dual-connected via 10GbE. This is all
> medical imaging research
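For a VMware-over-NFS workload like this, where the NFS client typically
issues its writes synchronously, it may be worth checking the relevant
dataset properties first (the dataset name here is hypothetical):

    # logbias=latency (the default) sends sync writes to the slog;
    # logbias=throughput bypasses the slog and writes to the main pool
    zfs get sync,logbias tank/vmstore
    zfs set logbias=throughput tank/vmstore   # only if slog bandwidth is the limit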
Again thanks for the input and clarifications.
I would like to clarify the ZIL performance numbers I was seeing discussed
on other forums. Right now I'm getting streaming performance of sync
writes at about 1 Gbit/s. My target is closer to 10 Gbit/s. If I get
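One quick way to bound how much of that gap is ZIL cost is to bracket the
benchmark with the sync property on a scratch dataset (names are
hypothetical):

    zfs create tank/ziltest
    zfs set sync=always tank/ziltest     # force every write through the ZIL
    # ... run the streaming write test ...
    zfs set sync=disabled tank/ziltest   # bypass the ZIL entirely
    # ... rerun; the difference bounds what slog tuning can buy ...
    zfs set sync=standard tank/ziltest   # restore the default

Note that 10 Gbit/s is roughly 1.25 GB/s of payload, well beyond the
sequential write ceiling of a single SATA/SAS SSD of that era, so multiple
striped slog pairs (or an NVRAM-class device) would be needed in any case.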
Thanks Neil, we always appreciate your comments on ZIL implementation.
One additional comment below...
On Oct 4, 2012, at 8:31 AM, Neil Perrin wrote:
> On 10/04/12 05:30, Schweiss, Chip wrote:
>>
>> Thanks for all the input. It seems information on the performance of the
>> ZIL is sparse and scattered.
On 10/04/12 05:30, Schweiss, Chip wrote:
Thanks for all the input. It seems information on the
performance of the ZIL is sparse and scattered. I've spent
significant time researching this over the past day. I'll summarize
what I've found. Please correct me if I'm wrong.
Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Schweiss, Chip
How can I determine for sure that my ZIL is my bottleneck? If it is the
bottleneck, is it possible to keep adding mirrored pairs of SSDs to the
ZIL to make it faster?
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Schweiss, Chip
>
> - The ZIL can have any number of SSDs attached, either mirrored or
> individual. ZFS will stripe across these in a raid0 or raid10 fashion
> depending on how you configure them.
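Assuming that's right, growing the log in mirrored pairs would look
something like this (device names are hypothetical):

    zpool add tank log mirror c4t0d0 c4t1d0   # first mirrored slog pair
    zpool add tank log mirror c4t2d0 c4t3d0   # second pair; the ZIL round-robins across pairs
    zpool status tank                         # both pairs appear under 'logs'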
> From: Andrew Gabriel [mailto:andrew.gabr...@cucumber.demon.co.uk]
>
> > Temporarily set sync=disabled
> Or, depending on your application, leave it that way permanently. I know,
> for the work I do, most systems I support at most locations have
> sync=disabled. It all depends on the workload.
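For reference, toggling it is a one-liner, but note the trade-off: with
sync=disabled, writes the application believes are on stable storage can
be lost on power failure, although the pool itself stays consistent
(dataset name is hypothetical):

    zfs set sync=disabled tank/scratch
    zfs get sync tank/scratch   # verify; set back to 'standard' to restore the default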
Thanks for all the input. It seems information on the performance of the
ZIL is sparse and scattered. I've spent significant time researching this
over the past day. I'll summarize what I've found. Please correct me if I'm
wrong.
- The ZIL can have any number of SSDs attached, either mirrored or
  individual. ZFS will stripe across these in a raid0 or raid10 fashion
  depending on how you configure them.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Schweiss, Chip
>
> How can I determine for sure that my ZIL is my bottleneck? If it is the
> bottleneck, is it possible to keep adding mirrored pairs of SSDs to the
> ZIL to make it faster?
To answer your questions more directly, zilstat is what I used to check
what the ZIL was doing:
http://www.richardelling.com/Home/scripts-and-programs-1/zilstat
While I have added a mirrored log device, I haven't tried adding multiple
sets of mirrored log devices, but I think it should work.
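Typical zilstat usage looks something like the following; exact flags vary
between versions of the script, so check its usage text:

    chmod +x zilstat.ksh
    ./zilstat.ksh 1 10        # ten one-second samples of ZIL write activity
    ./zilstat.ksh -p tank 1   # limit to one pool, if your copy supports -p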
I found something similar happening when writing over NFS (at significantly
lower throughput than is available on the system directly): effectively
all data, even asynchronous writes, was being written to the ZIL, which I
eventually traced (with help from Richard Elling and others on the list).
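If you want to see who is pushing data through the ZIL, one rough approach
(assuming DTrace is available and zil_commit is the commit entry point in
your build) is:

    # Count ZIL commits by process name; run as root
    dtrace -n 'fbt::zil_commit:entry { @[execname] = count(); }'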
I'm in the planning stages of a rather large ZFS system to house
approximately 1 PB of data.
I have only one system with SSDs for L2ARC and ZIL. The ZIL seems to be
the bottleneck for large bursts of data being written. I can't confirm
this for sure, but when throwing enough data at my st
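One way to confirm that suspicion without new hardware, assuming the
bursts arrive as sync writes: temporarily take the ZIL out of the path on
a scratch dataset and rerun the burst (names are hypothetical):

    zfs create tank/burst-test
    zfs set sync=disabled tank/burst-test   # bypass the ZIL for this dataset
    # rerun the burst against tank/burst-test; if it now runs at full
    # speed, the ZIL (slog) was the limit
    zfs set sync=standard tank/burst-test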