Is it possible to replicate the whole opensolaris.org site to the
illumos/OpenIndiana/SmartOS/OmniOS sites in a sub-catalog, as an archive?
>-Original Message-
>From: zfs-discuss-boun...@opensolaris.org
>[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov
>Sent: Sunday, February 17, 2
I just wanted to add that when I create a pool it is using ashift=9. Aren't all
SSDs 4K drives at this point?
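A quick way to confirm this (a sketch, assuming the pool is named PSC.Net as in
the dd test later in the thread) is to read the cached pool configuration with zdb:

# ashift=9 means 512-byte sectors, ashift=12 means 4 KiB sectors
zdb -C PSC.Net | grep ashift

Many SSDs still report a 512-byte physical sector size to the sd driver, in which
case ZFS picks ashift=9 even though the flash pages are 4 KiB or larger.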
-Original Message-
From: Grant Albitz [mailto:galb...@albitz.biz]
Sent: Saturday, February 16, 2013 9:28 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss]
Sorry, I have not; that is the next logical step, but only my ESXi servers have
10G interfaces and I cannot take them down at the moment.
I have been looking this issue up online and I do find some people with poor
reads and good writes. In some cases they are saying it was a mismatch in the
targ
Grant Albitz wrote:
Setting the MTU to 1500 did not affect read speeds, but it cut my write speeds
in half. Flow control on or off did not make any difference.
The cable is somewhat ruled out because I am seeing the exact same performance
on two different ESXi hosts.
Have you tried a native OI
On Feb 16, 2013, at 3:59 PM, Sašo Kiselkov wrote:
> On 02/17/2013 12:52 AM, Grant Albitz wrote:
>> Yes Jim, I actually used something similar to enable the 9000 MTU; that's why
>> I wasn't familiar with the config file method.
>>
>> dladm set-linkprop -p mtu=9000 InterfaceName
>>
>>
>> Flowcontro
Setting the MTU to 1500 did not affect read speeds, but it cut my write speeds
in half. Flow control on or off did not make any difference.
The cable is somewhat ruled out because I am seeing the exact same performance
on two different ESXi hosts.
-Original Message-
From: Jim Klimov [mai
On 2013-02-17 00:52, Grant Albitz wrote:
Yes Jim, I actually used something similar to enable the 9000 MTU; that's why I
wasn't familiar with the config file method.
dladm set-linkprop -p mtu=9000 InterfaceName
Flow control is currently off on the ZFS host but enabled by default on ESXi; I
am goi
On 02/17/2013 12:52 AM, Grant Albitz wrote:
> Yes Jim, I actually used something similar to enable the 9000 MTU; that's why I
> wasn't familiar with the config file method.
>
> dladm set-linkprop -p mtu=9000 InterfaceName
>
>
> Flow control is currently off on the ZFS host but enabled by default on
Yes Jim, I actually used something similar to enable the 9000 MTU; that's why I
wasn't familiar with the config file method.
dladm set-linkprop -p mtu=9000 InterfaceName
Flow control is currently off on the ZFS host but enabled by default on ESXi; I
am going to try enabling flow control first and i
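For reference, a sketch of how to double-check what the link is actually using on
the OpenIndiana side; the interface name ixgbe0 is an assumption, since the thread
never names the 10G NIC:

# Show the current MTU setting (and the allowed values) for the 10G link
dladm show-linkprop -p mtu ixgbe0

Properties set with dladm set-linkprop (without -t) should persist across reboots,
so a separate config-file change should not be needed.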
On 2013-02-17 00:39, Grant Albitz wrote:
I am not sure that I can disable flow control since no switch is present. Is it
turned on in the host by default?
If you're on OpenIndiana (well, you're on the list, but I think I
haven't seen a statement of your OS version), you can try "dladm
show-li
On 02/17/2013 12:39 AM, Grant Albitz wrote:
> I am not sure that I can disable flow control since no switch is present. Is
> it turned on in the host by default?
Flow control is a feature of the NIC, and any two NICs can negotiate to
have it turned on; you don't need a switch in between. See
/kern
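A sketch of checking and changing it from the OI side, again assuming an ixgbe
interface name; on drivers that expose the flowctrl link property it takes the
values no, tx, rx or bi:

# See what flow-control setting the link is currently using
dladm show-linkprop -p flowctrl ixgbe0
# Enable bidirectional flow control on this end
dladm set-linkprop -p flowctrl=bi ixgbe0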
I am not sure that I can disable flow control since no switch is present. Is it
turned on in the host by default?
-Original Message-
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
Sent: Saturday, February 16, 2013 6:05 PM
To: openindiana-discuss@openindiana.org
Subject: Re: [OpenIn
Ian Collins wrote:
Sašo Kiselkov wrote:
On 02/16/2013 11:58 PM, Sašo Kiselkov wrote:
On 02/16/2013 11:37 PM, Ian Collins wrote:
Have you tried connecting to the volume from another IO host to
eliminate VMware as the cause?
Second that.
The 9k MTU might also be hitting some NIC driver bug
Sašo Kiselkov wrote:
On 02/16/2013 11:58 PM, Sašo Kiselkov wrote:
On 02/16/2013 11:37 PM, Ian Collins wrote:
Have you tried connecting to the volume from another IO host to
eliminate VMware as the cause?
Second that.
The 9k MTU might also be hitting some NIC driver bugs - non-standard
set
On 02/16/2013 11:58 PM, Sašo Kiselkov wrote:
> On 02/16/2013 11:37 PM, Ian Collins wrote:
>> Have you tried connecting to the volume from another IO host to
>> eliminate VMware as the cause?
>
> Second that.
>
> The 9k MTU might also be hitting some NIC driver bugs - non-standard
> settings c
On 02/16/2013 11:37 PM, Ian Collins wrote:
> Have you tried connecting to the volume from another IO host to
> eliminate VMware as the cause?
Second that.
The 9k MTU might also be hitting some NIC driver bugs - non-standard
settings can, at times. Since the difference is only in the storage -
Grant Albitz wrote:
Local dd results:
write 268.343296 GB via dd, please wait...
time dd if=/dev/zero of=/PSC.Net/dd.tst bs=2048000 count=131027
131027+0 records in
131027+0 records out
real 3:09.8
user 0.1
sys 2:40.1
268.343296 GB in 189.8s = 1413.82 MB/s Write
131027+0 rec
Local dd results:
write 268.343296 GB via dd, please wait...
time dd if=/dev/zero of=/PSC.Net/dd.tst bs=2048000 count=131027
131027+0 records in
131027+0 records out
real 3:09.8
user 0.1
sys 2:40.1
268.343296 GB in 189.8s = 1413.82 MB/s Write
131027+0 records in
131027+0 recor
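Since the complaint is about reads rather than writes, a natural follow-up (a
sketch, reusing the file written above) is to read the same file back locally so
iSCSI and the network are out of the picture; a file this size is likely larger
than RAM, which helps keep the ARC from answering most of the reads:

# Local read-back of the 268 GB test file
time dd if=/PSC.Net/dd.tst of=/dev/null bs=2048000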
Hi Grant,
On 02/16/2013 05:14 PM, Grant Albitz wrote:
> Hi I am trying to track down a performance issue with my setup.
Always be sure to do your performance testing on the machine itself
first, before going on to test through more layers of the stack (i.e.
iSCSI). What does "iostat -xn 1" report
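A note on using that while the dd test runs (a sketch; the column names are from
the standard -xn output): keep it going in a second terminal and watch the
per-device numbers rather than the totals.

# One-second samples, extended statistics, device names instead of instance numbers
iostat -xn 1
# High asvc_t (average service time in ms) or %b (percent busy) on particular
# devices points at specific SSDs or the controller rather than the network.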
As a technical terminology nit - all components in a ZFS pool
are "vdevs", albeit at different levels. "Leaf vdevs" (disks,
slices, files) are aggregated into top-level vdevs (single devices,
mirrors, raidzN), and a pool is striped over them.
All in all, if the different vdev topologies yield the same
r
Hi, I am trying to track down a performance issue with my setup.
I have 24 SSDs in 6 vdevs (4 drives per vdev) that are then striped - essentially
a RAID 50. Originally I had a PERC H310 and saw similar numbers. I have since
switched to a PERC H710 and have each drive in RAID 0 and presented to the OS
Hi, I am trying to track down a performance issue with my setup.
I have 24 SSDs in 6 vdevs (4 drives per vdev) that are then striped - essentially
a RAID 50. Originally I had a PERC H310 and saw similar numbers. I have since
switched to a PERC H710 and have each drive in RAID 0 and presented to the OS.
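For readers following along, a sketch of what that layout looks like in zpool
terms (six top-level raidz vdevs, each built from four leaf vdevs); the device
names and the choice of raidz1 for the "RAID 50" analogy are assumptions, not
details from the thread:

# 6 top-level raidz1 vdevs of 4 SSDs each; ZFS stripes across all six
zpool create PSC.Net \
  raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
  raidz1 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
  raidz1 c0t8d0 c0t9d0 c0t10d0 c0t11d0 \
  raidz1 c0t12d0 c0t13d0 c0t14d0 c0t15d0 \
  raidz1 c0t16d0 c0t17d0 c0t18d0 c0t19d0 \
  raidz1 c0t20d0 c0t21d0 c0t22d0 c0t23d0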
-Original message-
From: Gregory S. Youngblood
Sent: Fri 15-02-2013 18:47
Subject: Re: [OpenIndiana-discuss] opensolaris.org shutting down next
month
To: Discussion list for OpenIndiana ;
> And it can go away at any time. If they change robots.txt to block spiders
> they