> The disks in the SAN servers were indeed striped together with Linux LVM
> and exported as a single volume to ZFS.

That is really going to hurt. In general, you're much better off
giving ZFS access to all the individual LUNs. The intermediate
LVM layer kills the concurrency that's native to ZFS.
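The contrast can be sketched in two `zpool` invocations (device names here are illustrative, not from the original thread):

```shell
# What hurts: the SAN exports one big LVM stripe, so ZFS sees a single
# device and queues all I/O behind it.
zpool create tank c2t0d0

# Generally better: export each physical disk as its own LUN and let
# ZFS stripe (and provide redundancy) itself, keeping per-disk queues.
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
```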
Bart Van Assche wrote:
>> If I understand this correctly, you've striped the disks together
>> w/ Linux lvm, then exported a single iSCSI volume to ZFS (or two for
>> mirroring; which isn't clear).
>
> The disks in the SAN servers were indeed striped together with Linux LVM and
> exported as a single volume to ZFS. The ZFS p
On 3/20/08, Kyle McDonald <[EMAIL PROTECTED]> wrote:

Bart Smaalders wrote:
> On 4 commodity 500 GB SATA drives set up w/ RAID-Z, my 2.6 GHz dual
> core AMD box sustains 100+ MB/sec read or write; it happily saturates
> a GB NIC w/ multiple concurrent reads over Samba.

This leads me to a question I've been meaning to ask for a while. I'
Bart Van Assche wrote:
> Hello,
>
> I just made a setup in our lab which should make ZFS fly, but unfortunately
> performance is significantly lower than expected: for large sequential data
> transfers write speed is about 50 MB/s while I was expecting at least 150
> MB/s.
>
> Setup
> -----
> Th
> It looks like the ZFS server is communicating with only one SAN server at a
> time.
This leads to the following question: is there a setting in ZFS that enables
concurrent writes to the ZFS storage targets instead of serializing all write
actions?
Bart.
This message posted from opensola
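As far as I know there is no such tunable; ZFS already stripes writes across all top-level vdevs in a pool, so the concurrency comes from the pool layout rather than a setting. A sketch (device names are made up):

```shell
# One LUN from each SAN server as separate sides of a mirror: ZFS issues
# the write to both sides in parallel rather than serializing them.
zpool create tank mirror c2t0d0 c3t0d0

# Watch per-vdev traffic; both LUNs should be busy simultaneously.
zpool iostat -v tank 5
```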
- "Tim" <[EMAIL PROTECTED]> wrote:
> Have you considered building one solaris system and using its iSCSI
> target? When it comes to software iSCSI, you tend to get VERY
> different results when moving from one platform to the next. In my
> experience, Linux is notorious on iSCSI for working wel
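For reference, on a Solaris box of that vintage a ZFS volume can be exported as an iSCSI target directly via the `shareiscsi` property (the pool and volume names below are invented):

```shell
# Create a 100 GB zvol and share it as an iSCSI target.
zfs create -V 100g tank/iscsivol
zfs set shareiscsi=on tank/iscsivol

# Confirm the target exists.
iscsitadm list target
```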
On 3/20/08, Bart Van Assche <[EMAIL PROTECTED]> wrote:
>
> Hello,
>
> I just made a setup in our lab which should make ZFS fly, but
> unfortunately performance is significantly lower than expected: for large
> sequential data transfers write speed is about 50 MB/s while I was expecting
> at least 150 MB/s.
Hi Bart,

Your setup is composed of a lot of components. I'd suggest the following:
1) Check the system with one SAN server and see the performance.
2) Check the internal performance of one SAN server.
3) Try using Solaris instead of Linux, as the Solaris iSCSI target could offer
more performance.
4) For
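For steps 1) and 2), a quick way to measure raw sequential write speed on a single SAN server, bypassing iSCSI and ZFS entirely, is a direct-I/O `dd` run (the path and sizes are placeholders):

```shell
# Write 4 GiB directly to a local filesystem on the SAN server,
# bypassing the page cache so the reported rate reflects the disks.
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=4096 oflag=direct
```

`dd` prints the achieved throughput when it finishes; comparing that figure against the ~50 MB/s seen over iSCSI narrows down where the bottleneck is.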