> > Have I missed any changes/updates in the situation?
>
> I've been getting very bad performance out of an LSI 9211-4i card
> (mpt_sas) with Seagate Constellation 2TB SAS disks, an SM SC846E1, and
> Intel X-25E/M SSDs. Long story short, I/O will hang for over 1 minute
> at random under heavy load.
Hm
On Wed, Jun 23, 2010 at 10:14 AM, Jeff Bacon wrote:
>> > Have I missed any changes/updates in the situation?
>>
>> I've been getting very bad performance out of an LSI 9211-4i card
>> (mpt_sas) with Seagate Constellation 2TB SAS disks, an SM SC846E1, and
>> Intel X-25E/M SSDs. Long story short, I/O will hang for over 1 minute
>> at random under heavy load.
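For anyone chasing similar symptoms, a minimal first check is to confirm which
driver the HBA is actually bound to (mpt vs. mpt_sas), using only stock Solaris
tooling; a sketch, with nothing assumed beyond a standard install:

  # show driver bindings for the controller nodes
  prtconf -D | grep -i mpt
  # confirm the module is actually loaded
  modinfo | grep -i mpt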
Look again at how XenServer does storage. I think you will find it already has
a solution, both for iSCSI and NFS.
Reaching into the dusty regions of my brain, I seem to recall that since RAID-Z
does not work like a traditional RAID 5, particularly because of its variably
sized stripes, the data may not hit all of the disks, but it will always be
redundant. I apologize for not having a reference for this at hand.
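A back-of-the-envelope sketch of the variable stripe width, assuming 512-byte
sectors and a 5-disk raidz1 (illustrative arithmetic, not from any poster's
pool):

  # raidz1 parity sectors per block = ceil(data_sectors / (width - 1))
  # a 4 KB block is 8 data sectors:
  echo $(( (8 + 3) / 4 ))     # -> 2 parity sectors, 10 sectors in all
  # a 128 KB block is 256 data sectors:
  echo $(( (256 + 3) / 4 ))   # -> 64 parity sectors, 320 sectors in all

So a small block may touch only a few disks, while every block still carries
its own parity, which is the "always redundant" part.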
On 6/23/10 4:29 AM, Brian Nitz wrote:
I saw a problem while upgrading from build 140 to 141 where beadm
activate {build141BE} failed because installgrub failed:
# BE_PRINT_ERR=true beadm activate opensolarismigi-4
be_do_installgrub: installgrub failed for device c5t0d0s0.
Unable to activate opensolarismigi-4.
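One possible workaround, assuming the failure really is only in the grub
installation step and that c5t0d0s0 is the boot slice (a hedged sketch, not a
verified fix):

  # mount the new BE, run installgrub by hand, then retry activation
  beadm mount opensolarismigi-4 /mnt
  installgrub /mnt/boot/grub/stage1 /mnt/boot/grub/stage2 /dev/rdsk/c5t0d0s0
  beadm unmount opensolarismigi-4
  beadm activate opensolarismigi-4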
Hey Robert,
How big of a file are you making? RAID-Z does not explicitly do the parity
distribution that RAID-5 does. Instead, it relies on non-uniform stripe widths
to distribute IOPS.
Adam
On Jun 18, 2010, at 7:26 AM, Robert Milkowski wrote:
> Hi,
>
>
> zpool create test raidz c0t0d0 c1t0
On 06/23/10 10:40, Evan Layton wrote:
On 6/23/10 4:29 AM, Brian Nitz wrote:
I saw a problem while upgrading from build 140 to 141 where beadm
activate {build141BE} failed because installgrub failed:
# BE_PRINT_ERR=true beadm activate opensolarismigi-4
be_do_installgrub: installgrub failed for device c5t0d0s0.
Cindy Swearingen wrote:
On 06/23/10 10:40, Evan Layton wrote:
On 6/23/10 4:29 AM, Brian Nitz wrote:
I saw a problem while upgrading from build 140 to 141 where beadm
activate {build141BE} failed because installgrub failed:
# BE_PRINT_ERR=true beadm activate opensolarismigi-4
be_do_installgrub: installgrub failed for device c5t0d0s0.
Gack, that's the same message we're seeing with the mpt controller with
SATA drives. I've never seen it with a SAS drive before.
Has anyone noticed a trend of 2TB SATA drives en masse not working well
with the LSI SASx28/x36 expander chips? I can seemingly reproduce it on
demand: hook > 4 2TB drives
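One low-effort way to put numbers on this, using only stock tooling (no claim
about the expander itself):

  # per-drive soft/hard/transport error counters
  iostat -En
  # read/write/checksum errors as seen by the pool
  zpool status -v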
> I'm using iozone to get some performance numbers and I/O hangs when
> it's doing the writing phase.
>
> This pool has:
>
> 18 x 2TB SAS disks as 9 data mirrors
> 2 x 32GB X-25E as log mirror
> 1 x 160GB X-160M as cache
>
> iostat shows "2" I/O operations active and SSDs at 100% busy when
> it's hung.
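A sketch of how to watch such a hang from another terminal (thresholds and
device names are illustrative):

  # during a hang, look for devices pinned at 100 %b with actv stuck
  # and essentially no reads or writes completing
  iostat -xnz 1
  # the same interval from the pool's point of view
  zpool iostat -v 1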
128GB.
Does it mean that for a dataset used for databases and similar
environments, where basically all blocks have a fixed size and there is
no other data, all parity information will end up on one (raidz1) or two
(raidz2) specific disks?
On 23/06/2010 17:51, Adam Leventhal wrote:
Hey Robert,
How big of a file are you making?
> Does it mean that for a dataset used for databases and similar
> environments, where basically all blocks have a fixed size and there is
> no other data, all parity information will end up on one (raidz1) or
> two (raidz2) specific disks?
No. There are always smaller writes to metadata that will distribute
parity.
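A sketch of why perfectly uniform, back-to-back blocks could line up, and why
small metadata writes break the pattern (assuming 512-byte sectors and a
5-disk raidz1; illustrative arithmetic only):

  # a 128 KB record = 256 data + 64 parity = 320 sectors, which is
  # exactly 64 full 5-wide rows, so the next block would start on
  # the same column and parity would keep landing on one disk:
  echo $(( (256 + 64) % 5 ))   # -> 0, the pattern would repeat
  # any odd-sized metadata write shifts the starting column,
  # e.g. a 4-sector allocation:
  echo $(( 4 % 5 ))            # -> 4, the rotation resumes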
On Jun 23, 2010, at 1:48 PM, Robert Milkowski wrote:
>
> 128GB.
>
> Does it mean that for a dataset used for databases and similar
> environments, where basically all blocks have a fixed size and there is
> no other data, all parity information will end up on one (raidz1) or
> two (raidz2) specific disks?
This forum has been tremendously helpful, but I decided to get some help from a
Solaris guru to install Solaris for a backup application.
I do not want to disturb the flow of this forum, but where can I post to get
some paid help on this forum? We are located in the San Francisco Bay Area. Any
help would be appreciated.
On Wed, Jun 23, 2010 at 2:43 PM, Jeff Bacon wrote:
>> >> Swapping the 9211-4i for a MegaRAID ELP (mega_sas) improves
>> >> performance by 30-40% instantly and there are no hangs anymore, so I'm
>> >> guessing it's something related to the mpt_sas driver.
>
> Wait. The mpt_sas driver by default