On Wed, Dec 16, 2009 at 8:19 AM, Brandon High wrote:
> On Wed, Dec 16, 2009 at 8:05 AM, Bob Friesenhahn
> wrote:
>> In his case 'zfs send' to /dev/null was still quite fast and the network
>> was also quite fast (when tested with benchmark software). The implication
>> is that ssh network transfer performance may have dropped with the update.
My ARC is ~3GB.
I'm doing a test that copies 10GB of data to a volume where the blocks
should dedupe 100% with existing data.
The first time through, the test runs at <5MB/sec and seems to average a
10-30% ARC *miss* rate, with <400 ARC reads/sec.
When things are working at disk bandwidth, I'm getting 3-5% ARC misses.
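For anyone who wants to watch the same counters, the ARC hit/miss numbers come straight out of kstat; a minimal sketch (statistic names as they appear under zfs:0:arcstats on snv_12x builds, sampled once a second):

  # sample ARC hits, misses and current size every second
  kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses zfs:0:arcstats:size 1

The arcstat.pl script reports the same counters as percentages, if it happens to be installed.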
It looks like the kernel is using a lot of memory, which may be part
of the performance problem. The ARC has shrunk to 1G, and the kernel
is using up over 5G.
I'm doing a send|receive of 683G of data. I started it last night
around 1am, and as of right now it's only sent 450GB. That's about
8.5MB/sec.
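If you want to see where that kernel memory is actually going, a hedged sketch (run as root; ::memstat reports a per-consumer page-count breakdown):

  echo ::memstat | mdb -k                          # kernel vs. anon vs. free memory
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c    # current ARC size vs. its target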
I have observed the opposite, and I believe that all writes are slow to my
dedup'd pool.
I used local rsync (no ssh) for one of my migrations (so it was restartable,
as it took *4 days*), and the writes were slow just like zfs recv.
I have not seen fast writes of real data to the deduped volume.
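A quick way to separate the dedup'd pool's write speed from ssh and zfs recv entirely is to time a plain local copy into the deduped filesystem; a minimal sketch with hypothetical paths:

  # copy a large existing file into the dedup'd filesystem and time the whole thing
  ptime sh -c 'cp /tank/plain/bigfile /tank/deduped/bigfile.copy ; sync'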
> I'm willing to accept slower writes with compression enabled, par for
> the course. Local writes, even with compression enabled, can still
> exceed 500MB/sec, with moderate to high CPU usage.
> These problems seem to have manifested after snv_128, and seemingly
> only affect ZFS receive speeds.
On Dec 17, 2009, at 03:19, Brent Jones wrote:
Something must've changed in either SSH or the ZFS receive bits to
cause this, but sadly, since I upgraded my pool, I cannot roll back
these hosts :(
I'm not sure that's the best way, but to look at how ssh is slowing
down the transfer, I'm usually…
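One way to check the raw ssh throughput independent of zfs receive, assuming a reachable host called otherhost (a sketch, not the poster's exact method):

  # push 1GB of zeros through ssh and time it
  ptime sh -c 'dd if=/dev/zero bs=1024k count=1024 | ssh otherhost "cat > /dev/null"'
  # same test with a cheaper cipher, in case the ssh CPU cost is the bottleneck
  ptime sh -c 'dd if=/dev/zero bs=1024k count=1024 | ssh -c arcfour otherhost "cat > /dev/null"'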
On Wed, Dec 16, 2009 at 7:43 PM, Edward Ned Harvey
wrote:
>> I'm seeing similar results, though my file systems currently have
>> de-dupe disabled, and only compression enabled, both systems being
>
> I can't say this is your issue, but you can count on slow writes with
> compression on. How slow is slow? Don't know. Irrelevant in this case?
> Possibly.
> I'm seeing similar results, though my file systems currently have
> de-dupe disabled, and only compression enabled, both systems being
I can't say this is your issue, but you can count on slow writes with
compression on. How slow is slow? Don't know. Irrelevant in this case?
Possibly.
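How much compression costs depends heavily on the algorithm (lzjb is cheap, gzip-9 is not), so it's worth checking what the receiving dataset actually uses; a minimal check against a hypothetical dataset name:

  zfs get compression,compressratio tank/backup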
On Wed, Dec 16, 2009 at 12:19 PM, Michael Herf wrote:
> Mine is similar (4-disk RAIDZ1)
> - send/recv with dedup on: <4MB/sec
> - send/recv with dedup off: ~80MB/sec
> - send > /dev/null: ~200MB/sec.
> I know dedup can save some disk bandwidth on write, but it shouldn't save
> much read bandwidth (so I think these numbers are right).
Mine is similar (4-disk RAIDZ1)
- send/recv with dedup on: <4MB/sec
- send/recv with dedup off: ~80MB/sec
- send > /dev/null: ~200MB/sec.
I know dedup can save some disk bandwidth on write, but it shouldn't save
much read bandwidth (so I think these numbers are right).
There's a warning in a Je
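For anyone wanting to reproduce those three numbers, a hedged sketch with hypothetical dataset names (pv shows the rate at each stage):

  zfs send tank/fs@snap | pv -r > /dev/null           # raw send speed
  zfs send tank/fs@snap | pv -r | zfs recv tank/copy  # send + receive; dedup follows the target's inherited setting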
On Wed, Dec 16, 2009 at 8:05 AM, Bob Friesenhahn
wrote:
> In his case 'zfs send' to /dev/null was still quite fast and the network
> was also quite fast (when tested with benchmark software). The implication
> is that ssh network transfer performance may have dropped with the update.
zfs send ap
On Wed, Dec 16, 2009 at 7:41 AM, Edward Ned Harvey
wrote:
> I'll first suggest questioning the measurement of speed you're getting,
> 12.5MB/sec. I'll suggest another, more accurate method:
> date ; zfs send somefilesystem | pv -b | ssh somehost "zfs receive foo" ;
> date
The send failed (I togg
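If pv is installed, the same pipeline can also report a live and average rate along with the byte count; a minor variant of the suggestion quoted above, using the same placeholder names:

  date ; zfs send somefilesystem | pv -rtab | ssh somehost "zfs receive foo" ; date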
On Wed, 16 Dec 2009, Brandon High wrote:
I've set dedup=verify at the top level filesystem, which is inherited
by everything.
I started the send this morning, and as of now it's only sent 590GB of
an 867GB filesystem. According to "zpool iostat 60", it's writing at
about 12-13MB/sec. Reads tend that way.
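Besides "zpool iostat 60", a couple of other views can show whether the dedup table is the bottleneck; a hedged sketch with a hypothetical pool name (zdb -DD only if your build's zdb has the -D dedup reporting):

  zpool iostat -v tank 60     # per-vdev breakdown of the receive I/O
  zpool get dedupratio tank   # how much is actually deduping so far
  zdb -DD tank                # DDT statistics/histogram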
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Brandon High
> Sent: Wednesday, December 16, 2009 4:08 AM
> To: ZFS discuss
> Subject: [zfs-discuss] zfs zend is very slow
>
> I
I'm doing a "zfs send -R | zfs receive" on a snv_129 system. The
target filesystem has dedup enabled, but since it was upgraded from
b125 the existing data is not deduped.
The pool is an 8-disk raidz2. The system has 8GB of memory and a
dual-core Athlon 4850e CPU.
I've set dedup=verify at the top level filesystem, which is inherited
by everything.
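A hedged reconstruction of the setup described above, with hypothetical pool/dataset names, for anyone following along:

  zfs set dedup=verify tank                                # inherited by the filesystems below it
  zfs snapshot -r tank/src@migrate
  zfs send -R tank/src@migrate | zfs receive -d tank/dst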