Sandon Van Ness wrote:
Sounds to me like something is wrong as on my 20 disk backup machine
with 20 1TB disks on a single raidz2 vdev I get the following with DD on
sequential reads/writes:
writes:
r...@opensolaris: 11:36 AM :/data# dd bs=1M count=10 if=/dev/zero
of=./100gb.bin
10+0 records in
10+0 records out
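(The count above looks truncated by the archive: bs=1M count=10 only writes
10 MB, while a 100 GB file at bs=1M would need count=102400.) For illustration,
a sequential write-then-read test of that kind, with hypothetical paths and
sizes, could look like:

# write ~100 GiB of zeroes (102400 x 1 MiB records)
dd if=/dev/zero of=/data/100gb.bin bs=1024k count=102400
# read it back to /dev/null so only the pool side is measured
dd if=/data/100gb.bin of=/dev/null bs=1024k

Keeping the file larger than RAM (or exporting and re-importing the pool
first) helps ensure the read comes from the disks rather than the ARC.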
On Fri, Jun 18, 2010 at 6:34 AM, Curtis E. Combs Jr. wrote:
> Oh! Yes, dedup. Not compression, but dedup, yes.
Dedup may be your problem... it requires some heavy RAM and/or a decent L2ARC,
from what I've been reading.
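If dedup is already on, a rough way to see how big the dedup table has grown
(and hence how much RAM/L2ARC it wants) is zdb; the pool name below is a
placeholder:

zdb -DD zpool1     # DDT histogram: entry counts plus in-core/on-disk sizes
zpool list zpool1  # the DEDUP column shows the current dedup ratio

A commonly cited rule of thumb from this era is a few hundred bytes of ARC per
unique block in the DDT, so hundreds of millions of unique blocks add up to
many gigabytes of RAM, which is why the heavy-RAM/L2ARC advice applies.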
On Fri, Jun 18, 2010 at 1:52 AM, Curtis E. Combs Jr. wrote:
> I am new to zfs, so I am still learning. I'm using zpool iostat to
> measure performance. Would you say that smaller raidz2 sets would give
> me more reliable and better performance? I'm willing to give it a
> shot...
A ZFS pool is mad
If the device driver generates or fabricates device IDs, then moving
devices around is probably okay.
I recall the Areca controllers are problematic when it comes to moving
devices under pools. Maybe someone with first-hand experience can
comment.
Consider exporting the pool first, moving the de
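A sketch of that export/move/import sequence, with "tank" as a placeholder
pool name:

zpool export tank   # quiesce the pool and mark it exported
# ...physically move or re-cable the disks...
zpool import        # with no argument, lists pools found on attached devices
zpool import tank   # import by name (or by the numeric pool ID shown above)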
Thank you, all of you, for the super helpful responses, this is probably one of
the most helpful forums I've been on. I've been working with ZFS on some
SunFires for a little while now, in prod, and the testing environment with oSol
is going really well. I love it. Nothing even comes close.
If
Hi Curtis,
You might review the ZFS best practices info to help you determine
the best pool configuration for your environment:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
If you're considering using dedup, particularly on a 24T pool, then
review the current known issues
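Before enabling dedup on a pool that size it can be worth checking what it
would actually buy you; a sketch, with placeholder names:

zdb -S zpool1                   # simulate dedup and report the ratio it would achieve
zfs set dedup=on zpool1/data    # dedup is a per-dataset property, so it can be scoped
zfs get dedup,compression zpool1/data

If the simulated ratio is close to 1.0x, the RAM cost of the dedup table buys
very little.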
Oh! Yes, dedup. Not compression, but dedup, yes.
On Fri, Jun 18, 2010 at 6:30 AM, Arne Jansen wrote:
> Curtis E. Combs Jr. wrote:
>> Um...I started 2 commands in 2 separate ssh sessions:
>> in ssh session one:
>> iostat -xn 1 > stats
>> in ssh session two:
>> mkfile 10g testfile
>>
>> when the mk
Um...I started 2 commands in 2 separate ssh sessions:
in ssh session one:
iostat -xn 1 > stats
in ssh session two:
mkfile 10g testfile
when the mkfile was finished I did the dd command...
on the same zpool1 and zfs filesystem... that's it, really
On Fri, Jun 18, 2010 at 6:06 AM, Arne Jansen wrote:
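For reference, the same measurement can be driven from a single interactive
session by backgrounding the monitor; the paths here are placeholders:

iostat -xn 1 > /var/tmp/stats &     # per-device stats at one-second intervals
mkfile 10g /zpool1/testfile         # sequential write workload
dd if=/dev/zero of=/zpool1/ddfile bs=1024k count=10240
kill %1                             # stop iostat once the workload is done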
Sure. And hey, maybe I just need some context to know what's "normal"
IO for the zpool. It just...feels...slow, sometimes. It's hard to
explain. I attached a log of iostat -xn 1 while doing mkfile 10g
testfile on the zpool, as well as your dd with the bs set really high.
When I Ctrl-C'ed the dd it s
Yeah. I tried bs sizes from 8 to 512k with counts from 256 on up. I just
added zeros to the count to try to test performance for larger files.
I didn't notice any difference at all, either with the dtrace script
or zpool iostat. Thanks for your help, btw.
On Fri, Jun 18, 2010 at 5:30 AM, Pasi Kärkkäinen wrote:
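One thing that can hide a difference between block sizes is writing a
different total amount at each size; a sketch that keeps the total constant
(about 2 GiB per run, file names hypothetical):

ptime dd if=/dev/zero of=/zpool1/t.8k   bs=8k    count=262144  # 2 GiB in 8 KiB records
ptime dd if=/dev/zero of=/zpool1/t.128k bs=128k  count=16384   # 2 GiB in 128 KiB records
ptime dd if=/dev/zero of=/zpool1/t.1m   bs=1024k count=2048    # 2 GiB in 1 MiB records

Because ZFS buffers asynchronous writes in the ARC and issues them in its own
recordsize chunks, seeing little difference across dd block sizes is not
unusual.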
I also have a dtrace script that I found that supposedly gives a more
accurate reading. Usually, though, its output is very close to what
zpool iostat says. Keep in mind this is a test environment, there's no
production here, so I can make and destroy the pools as much as I want
to play around wit
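The script itself isn't shown here; for illustration only, a common
io-provider one-liner that reports bytes of physical I/O per second (not the
script referred to above) is:

dtrace -n 'io:::start { @bytes[execname] = sum(args[0]->b_bcount); }
    tick-1sec { printa(@bytes); trunc(@bytes); }'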
I am new to zfs, so I am still learning. I'm using zpool iostat to
measure performance. Would you say that smaller raidz2 sets would give
me more reliable and better performance? I'm willing to give it a
shot...
On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen wrote:
> On Fri, Jun 18, 2010 at 01:
artiepen wrote:
> 40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2,
> and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up to
> 40 very rarely.
I get read/write speeds of approx. 630 MB/s into ZFS on
a SunFire X4540.
It seems that you misconfigur
Curtis E. Combs Jr. wrote:
> Um...I started 2 commands in 2 separate ssh sessions:
> in ssh session one:
> iostat -xn 1 > stats
> in ssh session two:
> mkfile 10g testfile
>
> when the mkfile was finished i did the dd command...
> on the same zpool1 and zfs filesystem..that's it, really
No, this
Curtis E. Combs Jr. wrote:
> Sure. And hey, maybe I just need some context to know what's "normal"
> IO for the zpool. It just...feels...slow, sometimes. It's hard to
> explain. I attached a log of iostat -xn 1 while doing mkfile 10g
> testfile on the zpool, as well as your dd with the bs set reall
artiepen wrote:
> 40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2,
> and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up to
> 40 very rarely.
>
> As far as random vs. sequential. Correct me if I'm wrong, but if I used dd to
> make files from /dev
On Fri, Jun 18, 2010 at 02:21:15AM -0700, artiepen wrote:
> 40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2,
> and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up to
> 40 very rarely.
>
> As far as random vs. sequential. Correct me if I'm wrong, b
On 06/18/10 09:21 PM, artiepen wrote:
This is a test system. I'm wondering, now, if I should just reconfigure with
maybe 7 disks and add another spare. Seems to be the general consensus that
bigger raid pools = worse performance. I thought the opposite was true...
No, wider vdevs give poor
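To make the width-vs-count point concrete: ZFS stripes across top-level vdevs,
and a raidz vdev delivers roughly one disk's worth of random IOPS no matter
how wide it is, so three 8-disk raidz2 vdevs give about three times the random
IOPS of a single 24-disk raidz2 (at the cost of more disks spent on parity).
A sketch with hypothetical device names:

zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    raidz2 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 \
    raidz2 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0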
Yes, and I apologize for basic nature of these questions. Like I said, I'm
pretty wet behind the ears with zfs. The MB/sec metric comes from dd, not zpool
iostat. zpool iostat usually gives me units of k. I think I'll try with smaller
raid sets and come back to the thread.
Thanks, all
On Fri, Jun 18, 2010 at 05:15:44AM -0400, Thomas Burgess wrote:
>On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen <[1]pa...@iki.fi> wrote:
>
> On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
> > Well, I've searched my brains out and I can't seem to find a reason
> for
40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2, and
6 almost 10x as many times as I see 40MB/sec. It really only bumps up to 40
very rarely.
As far as random vs. sequential. Correct me if I'm wrong, but if I used dd to
make files from /dev/zero, wouldn't that be sequ
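Yes, a single dd stream from /dev/zero is sequential, but with dedup (or
compression) enabled every zero-filled block looks identical, so very little
of it actually reaches the disks and dd ends up measuring the dedup path
rather than pool throughput. One workaround, with placeholder paths, is to
generate incompressible data once and stream that instead:

zfs get dedup,compression zpool1   # confirm what the test dataset has enabled
dd if=/dev/urandom of=/var/tmp/rand.2g bs=1024k count=2048   # one-time, on another pool
dd if=/var/tmp/rand.2g of=/zpool1/testfile bs=1024k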
On Fri, Jun 18, 2010 at 04:52:02AM -0400, Curtis E. Combs Jr. wrote:
> I am new to zfs, so I am still learning. I'm using zpool iostat to
> measure performance. Would you say that smaller raidz2 sets would give
> me more reliable and better performance? I'm willing to give it a
> shot...
>
Yes, m
On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen wrote:
> On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
> > Well, I've searched my brains out and I can't seem to find a reason for
> this.
> >
> > I'm getting bad to medium performance with my new test storage device.
> I've got 24 1.5T
On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
> Well, I've searched my brains out and I can't seem to find a reason for this.
>
> I'm getting bad to medium performance with my new test storage device. I've
> got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the
Well, I've searched my brains out and I can't seem to find a reason for this.
I'm getting bad to medium performance with my new test storage device. I've got
24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the Areca
raid controller, the driver being arcmsr. Quad core AMD with
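Two checks worth running on a setup like that: whether the SSD log devices are
actually being exercised (the ZIL only comes into play for synchronous writes,
which dd and mkfile don't generate), and how traffic is spread across the
vdevs. The pool name below is a placeholder:

zpool status zpool1       # the SSDs should show up under a separate "logs" section
zpool iostat -v zpool1 1  # per-vdev and per-log-device throughput, one-second samples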