On Nov 1, 2012, at 1:24 AM, Jim Klimov wrote:
> On 2012-11-01 01:47, Richard Elling wrote:
>> Finally, a data point: using MTU of 1500 with ixgbe you can hit wire
>> speed on a modern CPU.
>> There is no CSMA/CD on gigabit and faster available from any vendor
>> today. Everything today is switched.
>
> Ok then, I'll stand corrected by the practice, although...
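
For context, one hedged way to confirm the link characteristics being described (switched, full duplex, standard MTU); the link name ixgbe0 is just an example:

dladm show-phys ixgbe0                # reports speed and duplex of the physical link
dladm show-linkprop -p mtu ixgbe0     # current MTU (1500 in the data point above)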
On Wed, Oct 31, 2012 at 8:44 PM, Richard Elling
<richard.ell...@richardelling.com> wrote:
>> On the target system I am seeing writes up to 160 MB/s with frequent
>> zpool iostat probes. When iostat probes are up to 5s+, there is a
>> steady stream of 62 MB/s.
>
> I believe this *may* m...
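
As an illustration of why the probe interval matters (the pool name tank is made up): short intervals catch the write bursts, longer ones show the sustained rate.

zpool iostat tank 1     # 1-second samples: shows the ~160 MB/s bursts
zpool iostat tank 30    # 30-second samples: closer to the ~62 MB/s steady stream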
On Oct 31, 2012, at 3:37 AM, Jim Klimov wrote:
> 2012-10-31 13:58, Sebastian Gabler wrote:
>> 2012-10-30 19:21, Sebastian Gabler wrote:
>>> Whereas that's relative: performance is still at a quite miserable 62
>>> MB/s through a gigabit link. Apparently, my environment has room for
>>> improvement.
On Oct 31, 2012, at 5:53 AM, Roy Sigurd Karlsbakk wrote:
>> 2012-10-30 19:21, Sebastian Gabler wrote:
>>> Whereas that's relative: performance is still at a quite miserable 62
>>> MB/s through a gigabit link. Apparently, my environment has room for
>>> improvement.
>>
>> Does your gigabit ethernet use Jumbo Frames (like 9000 or up to 16KB,
>> depending on your NICs, switches...)?
Message: 7
Date: Tue, 30 Oct 2012 22:03:13 +0400
From: Jim Klimov
To: Discussion list for OpenIndiana
Subject:
Message-ID: <50901661.9050...@cos.ru>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

2012-10-30 19:21, Sebastian Gabler wrote:
> Whereas that's relative: performance is still at a quite miserable 62
> MB/s through a gigabit link. Apparently, my environment has room for
> improvement.
Some additional examples in this one:
http://blog.laspina.ca/ubiquitous/encapsulating-vt-d-accelerated-zfs-storage-within-esxi

-----Original Message-----
From: Sebastian Gabler [mailto:sequoiamo...@gmx.net]
Sent: Tuesday, October 23, 2012 6:53 AM
To: openindiana-discuss@openindiana.org
Subject: [...
Hi Sebastian,

Some examples using RBAC in my blog entry
http://blog.laspina.ca/ubiquitous/provisioning_disaster_recovery_with_zfs
could help.

Regards,
Mike

-----Original Message-----
From: Sebastian Gabler [mailto:sequoiamo...@gmx.net]
Sent: Tuesday, October 23, 2012 6:53 AM
To: openindiana-discuss@openindiana.org
2012-10-30 19:21, Sebastian Gabler wrote:
> Whereas that's relative: performance is still at a quite miserable 62
> MB/s through a gigabit link. Apparently, my environment has room for
> improvement.

Does your gigabit ethernet use Jumbo Frames (like 9000 or up to 16KB,
depending on your NICs, switches...)?
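
For reference, a hedged sketch of checking and raising the MTU on an OpenIndiana host (the link name e1000g0 is illustrative; the link usually has to be unplumbed before the MTU can be changed, and every device in the path must support the larger frames):

dladm show-linkprop -p mtu e1000g0       # current MTU
dladm set-linkprop -p mtu=9000 e1000g0   # enable 9000-byte jumbo frames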
On 23.10.2012 13:52, Sebastian Gabler wrote:
> Hi,
>
> I am facing a problem with zfs receive through ssh. As usual, root
> can't log in over ssh; the login users can't receive a zfs stream
> (rights problem), and pfexec is disabled on the target host (as I
> understand it is nowadays the default for OI151_a).
> I use the sudo method and I also assign the user zfs rights for that
> pool. Here is my sudoers file:
>
> bkuser ALL = NOPASSWD: /usr/sbin/zfs
>
> and here is the rights assignment:
>
> zfs allow -s @adminrole
> clone,create,destroy,mount,promote,quota,receive,rename,reservation,rollback,send,...
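
Pulling those pieces together, a minimal sketch of how the sudo plus delegated-permissions setup could look end to end (the user bkuser, the set name @adminrole, the pool/dataset names and the snapshot are illustrative, and the permission list is abbreviated):

# on the target, as root: sudoers entry (added via visudo)
bkuser ALL = NOPASSWD: /usr/sbin/zfs

# on the target, as root: define a permission set and grant it on the pool
zfs allow -s @adminrole create,destroy,mount,receive,rollback,snapshot data
zfs allow bkuser @adminrole data

# on the source: stream a snapshot, running zfs on the target through sudo
zfs send -R pool/fs@backup | ssh bkuser@target sudo /usr/sbin/zfs receive -d data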
> You could try to set the crypto algorithm to none if you do not need
> encryption.
>
> ssh -c none
>
If I really needed the extra speed, it would probably be better to spawn a
netcat over ssh so I don't have to modify the target's sshd_config. I
played with the ciphers and arcfour128 seemed...
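
As an illustration of the cipher experiment mentioned above (dataset, snapshot and host names are made up):

# same transfer, but asking ssh for the lighter arcfour128 cipher
zfs send pool/fs@backup | ssh -c arcfour128 bkuser@target sudo /usr/sbin/zfs receive data/fs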
From: Michael Stapleton [mailto:michael.staple...@techsologic.com]
> You could try to set the crypto algorithm to none if you do not
> need encryption.
>
> ssh -c none

That won't work with the shipped ssh. You could use netcat:

target# nc -l -p 31337 | zfs recv data/path/etc
source# zfs send ...
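
For completeness, a hedged sketch of both ends of the netcat transfer (snapshot, dataset, host name and port are illustrative; nc provides no authentication or encryption, so this only makes sense on a trusted network):

target# nc -l -p 31337 | zfs recv data/path/etc            # listener feeds the stream into zfs recv
source# zfs send pool/fs@backup | nc target-host 31337     # sender pushes the snapshot to that port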
You could try to set the crypto algorithm to none if you do not need
encryption.

ssh -c none

Might also be worth trying to see if it is ssh that is slowing you down.

Mike

On Tue, 2012-10-23 at 17:03 -0400, Doug Hughes wrote:
> On 10/23/2012 4:13 PM, Timothy Coalson wrote:
>>
>> Works pretty well...
On 10/23/2012 4:13 PM, Timothy Coalson wrote:
> Works pretty well, though I get ~70MB/s on gigabit ethernet instead of
> the theoretically possible 120MB/s, and I'm not sure why (NFS gets
> pretty close to 120MB/s on the same network).

There's a fair bit of overhead to ssh and to zfs send/receive, so...
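
One hedged way to separate the ssh overhead from the zfs send overhead is to time each path on its own (host name, dataset and transfer size are illustrative):

# raw ssh channel only, no zfs involved: 2048 MB divided by the elapsed time
time dd if=/dev/zero bs=1M count=2048 | ssh bkuser@target 'cat > /dev/null'

# local zfs send only, no network involved
time zfs send pool/fs@backup > /dev/null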
I set this up with pfexec, I think on 151a4, and it has survived updates
without change so far (currently working on a7). All I had to do was
add the "ZFS File System Management" profile to the backup user. I did
this via the users-admin gui; I think usermod -P does the same thing, but
here is...
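
A hedged sketch of that profile-based setup (user and dataset names are illustrative; the profile name is the one mentioned above):

# on the target, as root: give the backup user the ZFS management profile
usermod -P "ZFS File System Management" bkuser

# on the source: the user can then run zfs via pfexec on the target
zfs send pool/fs@backup | ssh bkuser@target pfexec /usr/sbin/zfs receive data/fs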
On 12-10-23 04:52 AM, Sebastian Gabler wrote:
> Hi,
>
> I am facing a problem with zfs receive through ssh. As usual, root
> can't log in over ssh; the login users can't receive a zfs stream
> (rights problem), and pfexec is disabled on the target host (as I
> understand it is nowadays the default for OI151_a).
Or send to a named pipe on the remote server that root is running zfs recv from.
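
A hedged sketch of that named-pipe idea (fifo path, dataset, snapshot and user names are illustrative; the root-owned zfs recv has to be started on the target before the send begins):

target# mkfifo /var/tmp/zfsrecv.fifo
target# zfs recv data/backup/fs < /var/tmp/zfsrecv.fifo &
source$ zfs send pool/fs@backup | ssh bkuser@target 'cat > /var/tmp/zfsrecv.fifo'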
On 10/23/12 13:03, Jonathan Adams wrote:
> you could try zfs send'ing to a local file and chmod/chown the file so
> that a known local user can access it on the sending server; then on
> the receiving server you could rsync/ssh into the sending server, grab
> the file, and then zfs receive as root.
you could always set up an rsync server (not ssh):
man rsyncd.conf
this allows very controlled access, including read-only/specific IP
configurations.
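
A hedged sketch of what such an rsyncd.conf module might look like (module name, path, allowed address and uid are all illustrative):

# /etc/rsyncd.conf on the server holding the zfs send dumps
[zfsdumps]
    path = /export/zfsdumps
    read only = yes
    hosts allow = 192.168.1.10
    uid = bkuser
    comment = pull-only access to zfs send dumps

The client would then pull with something like: rsync rsync://source-host/zfsdumps/fs.zsend /var/tmp/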
Jon
On 23 October 2012 13:32, Gary Gendel wrote:
> On 10/23/12 8:23 AM, Doug Hughes wrote:
>>
>> On 10/23/2012 7:52 AM, Sebastian Gabler wrote:
On 10/23/12 8:23 AM, Doug Hughes wrote:
> On 10/23/2012 7:52 AM, Sebastian Gabler wrote:
>> Hi,
>>
>> I am facing a problem with zfs receive through ssh. As usual, root
>> can't log in over ssh; the login users can't receive a zfs stream
>> (rights problem), and pfexec is disabled on the target host (as I
>> understand it is nowadays the default for OI151_a).
On 10/23/2012 7:52 AM, Sebastian Gabler wrote:
> Hi,
>
> I am facing a problem with zfs receive through ssh. As usual, root
> can't log in over ssh; the login users can't receive a zfs stream
> (rights problem), and pfexec is disabled on the target host (as I
> understand it is nowadays the default for OI151_a).
you could try zfs send'ing to a local file and chmod/chown the file so
that a known local user can access it on the sending server; then on the
receiving server you could rsync/ssh into the sending server, grab the
file, and then zfs receive as root.
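
A hedged sketch of that file-based approach (snapshot, file paths, user and host names are illustrative):

# on the sending server, as root: dump the stream to a file the backup user can read
source# zfs send pool/fs@backup > /export/dumps/fs.zsend
source# chown bkuser /export/dumps/fs.zsend

# on the receiving server: pull the file, then receive it as root
target# scp bkuser@source-host:/export/dumps/fs.zsend /var/tmp/
target# zfs receive data/fs < /var/tmp/fs.zsend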
Jon
On 23 October 2012 12:52, Sebastian Gabler wrote: