Thank you very much for your time, because you made me stronger.
[if misleading, please excuse my French...]
;-)
z
----- Original Message -----
From: "Bob Friesenhahn"
To: "Dmitry Razguliaev"
Cc:
Sent: Saturday, January 10, 2009 10:28 AM
Subject: Re: [zfs-discuss] ZFS
On Sat, 10 Jan 2009, Dmitry Razguliaev wrote:
> At the time of writing that post, no, I didn't run zpool iostat -v 1.
> However, I ran it after that. The operations numbers reported by iostat
> changed from 1 for every device in the raidz to somewhere between 20 and
> 400 for the raidz volume, and from 3 to somewhere between 200 and 450
> for a singl
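For anyone who wants to repeat that check, a minimal sketch (the pool name
"tank" is a placeholder for your own pool):

    # Print per-vdev and per-device statistics every second; the
    # operations columns show how reads and writes are spread across
    # the raidz members.
    zpool iostat -v tank 1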
A question: why do you want to use HW RAID together with ZFS? I thought ZFS
performed better when it was in total control. Would the results have been
better with no HW RAID controller, and only ZFS?
On Dec 20, 2008, at 22:34, Dmitry Razguliaev wrote:
Hi, I ran into a similar problem to Ross's, but I still have not found a
solution. I have a raidz of 9 SATA disks connected to the internal and 2
external SATA controllers. Bonnie++ gives me the following results:
nexenta,8G,104393,43,159637,30,57855,13,77677,38,56296,7,281.8,1,16,26450,99,+,
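If I am reading bonnie++'s machine-readable CSV ordering correctly, the
pairs after the size are KB/s and %CPU for per-character write, block
write, rewrite, per-character read, and block read, followed by seeks per
second; so block write here is roughly 156 MB/s and block read roughly
55 MB/s. A run of roughly this shape would produce that line (the target
directory is a placeholder):

    # -d: directory on the pool under test; -s: total file size;
    # -m: machine name tag (the first CSV field); -u: run as this
    # user, since bonnie++ refuses to run as root without it.
    bonnie++ -d /tank/bench -s 8g -m nexenta -u nobody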
On Tue, Sep 30, 2008 at 5:04 PM, Ross Becker <[EMAIL PROTECTED]> wrote:
At this point, ZFS is performing admirably with the Areca card. Also, that
card is only 8-port, and the Areca controllers I have are 12-port. My chassis
has 24 SATA bays, so being able to cover all the drives with 2 controllers is
preferable.
Also, the driver for the Areca controllers is bein
On Tue, Sep 30, 2008 at 3:51 PM, Tim <[EMAIL PROTECTED]> wrote:
> On Mon, Sep 29, 2008 at 12:57 PM, Ross Becker <[EMAIL PROTECTED]> wrote:
>> I have to come back and face the shame; this was a total newbie mistake
>> on my part.
>>
>> I followed the ZFS shortcuts for noobs guide off BigAdmin:
>> http://wikis.sun.com/display/BigAdmin/ZFS+Shortcuts+for+Noobs
>>
>> What
>
> No apology necessary and I'm glad you figured it out - I was just
> reading this thread and thinking "I'm missing something here - this
> can't be right".
>
> If you have the budget to run a few more "experiments", try this
> SuperMicro card:
> http://www.springsource.com/repository/app/faq
> t
Ross,
No need to apologize...
Many of us work hard to make sure good ZFS information is available, so a
big thanks for bringing this wiki page to our attention.
Playing with UFS on ZFS is one thing, but even inexperienced admins need
to know that this kind of configuration will provide poor performance.
I have to come back and face the shame; this was a total newbie mistake on
my part.
I followed the ZFS shortcuts for noobs guide off BigAdmin:
http://wikis.sun.com/display/BigAdmin/ZFS+Shortcuts+for+Noobs
What that had me doing was creating a UFS filesystem on top of a ZFS volume,
so I was usi
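For contrast, the two configurations look roughly like this (pool and
dataset names are placeholders):

    # The mistake: carve a zvol out of the pool and put UFS on top of
    # it, layering UFS's own caching and allocation over ZFS block
    # storage.
    zfs create -V 100g tank/vol
    newfs /dev/zvol/rdsk/tank/vol

    # The straightforward setup: create a ZFS filesystem directly.
    zfs create tank/data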
> "jl" == Jonathan Loran <[EMAIL PROTECTED]> writes:
jl> the single drive speed is in line with the raidz2 vdev,
reviewing the OP:
UFS single drive:  50 MB/s write, 70 MB/s read
ZFS 1-drive:       42 MB/s write, 43 MB/s read
raidz2 11-drive:   40 MB/s write, 40 MB/s read
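A quick way to sanity-check those sequential numbers outside of bonnie++
(the file path and size are placeholders; the file should be well over the
machine's RAM size so caching doesn't dominate):

    # Sequential write through the filesystem (16 GB of zeros).
    dd if=/dev/zero of=/tank/testfile bs=1024k count=16384
    # Sequential read back.
    dd if=/tank/testfile of=/dev/null bs=1024k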
Ross Becker wrote:
> Okay, after doing some testing, it appears that the issue is on the ZFS side.
> I fiddled around a while with options on the Areca card, and never got any
> better performance results than my first test. So, my best out of the raidz2
> is 42 MB/s write and 43 MB/s read.
Ross Becker wrote:
> Well, I just got in a system I am intending to be a BIG fileserver;
> background- I work for a SAN startup, and we're expecting in our first
> year to collect 30-60 terabytes of Fibre Channel traces. The purpose of
> this is to be a large repository for those traces w/ stat
That was part of my testing of the RAID controller settings; turning off the
controller cache dropped me to 20 MB/s read & write under raidz2/ZFS.
--Ross
On Fri, Sep 26, 2008 at 5:46 PM, Ross Becker <[EMAIL PROTECTED]> wrote:
Okay, after doing some testing, it appears that the issue is on the ZFS side.
I fiddled around a while with options on the Areca card, and never got any
better performance results than my first test. So, my best out of the raidz2
is 42 MB/s write and 43 MB/s read. I also tried turning off crc'
On Fri, 26 Sep 2008, Ross Becker wrote:
>
> I configured up an 11-drive RAID6 set + 1 hot spare on the Areca
> controller, put a ZFS on that RAID volume, and ran bonnie++ against
> it (16g size), and achieved 150 MB/s write & 200 MB/s read. I then
> blew that away, configured the Areca to prese
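The two layouts being compared can be sketched like this (device names are
placeholders for the twelve drives on one controller):

    # Layout 1: hardware RAID6 built by the Areca, with ZFS on the
    # single exported volume (c2t0d0 stands for that volume).
    zpool create tank c2t0d0

    # Layout 2: export the drives individually (JBOD) and let ZFS
    # provide the equivalent redundancy itself.
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
        c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 spare c2t11d0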
Well, I just got in a system that I intend to be a BIG fileserver.
Background: I work for a SAN startup, and we're expecting in our first year
to collect 30-60 terabytes of Fibre Channel traces. The purpose of this is
to be a large repository for those traces w/ statistical analysis run ag