On Mon, Mar 26, 2012 at 1:19 PM, Richard Elling wrote:
> Apologies to the ZFSers, this thread really belongs elsewhere.
>
> Let me explain below:
>
> The Apache document root is on ZFS; you can see
> it at No. 3 in the dtrace report above.
>
>
> The sort is in reverse order. The large number y
On Mon, Mar 26, 2012 at 12:19 PM, Richard Elling wrote:
> Apologies to the ZFSers, this thread really belongs elsewhere.
Some of the info in it is useful for other ZFS users as well though :)
> Here is the output; I changed it to "tick-5sec" and "trunc(@, 5)".
>
> No. 2 and No. 3 are what I care about
Apologies to the ZFSers, this thread really belongs elsewhere.
On Mar 25, 2012, at 10:11 PM, Aubrey Li wrote:
> On Mon, Mar 26, 2012 at 11:34 AM, Richard Elling wrote:
>> On Mar 25, 2012, at 6:51 PM, Aubrey Li wrote:
>>> On Mon, Mar 26, 2012 at 4:18 AM, Jim Mauro wrote:
If you're chasing
On Mon, Mar 26, 2012 at 11:34 AM, Richard Elling wrote:
> On Mar 25, 2012, at 6:51 PM, Aubrey Li wrote:
>> On Mon, Mar 26, 2012 at 4:18 AM, Jim Mauro wrote:
>>> If you're chasing CPU utilization, specifically %sys (time in the kernel),
>>> I would start with a time-based kernel profile.
>>>
>>> #
Hello.
What are the best practices for choosing the ZFS volume volblocksize setting for
VMware VMFS-5?
The VMFS-5 block size is 1 MB. Not sure how it corresponds with ZFS.
Setup details follow:
- 11 pairs of mirrors;
- 600 GB 15k SAS disks;
- SSDs for L2ARC and ZIL;
- COMSTAR FC target;
- about 30 virtual ma
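For what it's worth, volblocksize is fixed at zvol creation time, so whatever value is chosen has to be set up front. A minimal sketch of creating such a zvol and exporting it through COMSTAR (pool name, volume name, size, and the 64K value here are hypothetical, not taken from the setup above):

# zfs create -o volblocksize=64K -V 2T tank/vmfs01   # volblocksize cannot be changed later
# stmfadm create-lu /dev/zvol/rdsk/tank/vmfs01       # prints the new LU's GUID
# stmfadm add-view <GUID-from-create-lu>             # make the LU visible to initiators

Whether 64K (or 8K, 32K, 128K) best matches the VMFS-5 I/O pattern is workload-dependent and worth benchmarking rather than assuming.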
On Mar 25, 2012, at 6:51 PM, Aubrey Li wrote:
> On Mon, Mar 26, 2012 at 4:18 AM, Jim Mauro wrote:
>> If you're chasing CPU utilization, specifically %sys (time in the kernel),
>> I would start with a time-based kernel profile.
>>
>> #dtrace -n 'profile-997hz /arg0/ { @[stack()] = count(); } tick-60sec { trunc(@, 20); printa(@); }'
On Mon, Mar 26, 2012 at 4:18 AM, Jim Mauro wrote:
> If you're chasing CPU utilization, specifically %sys (time in the kernel),
> I would start with a time-based kernel profile.
>
> #dtrace -n 'profile-997hz /arg0/ { @[stack()] = count(); } tick-60sec {
> trunc(@, 20); printa(@); }'
>
> I would be
On Mon, Mar 26, 2012 at 3:22 AM, Fajar A. Nugraha wrote:
>>
>> I had never seen any issues until I did a comparison with Linux.
>
> So basically you're comparing linux + ext3/4 performance with solaris
> + zfs, on the same hardware? That's not really fair, is it?
> If your load is I/O-intensiv
If you're chasing CPU utilization, specifically %sys (time in the kernel),
I would start with a time-based kernel profile.
#dtrace -n 'profile-997hz /arg0/ { @[stack()] = count(); } tick-60sec {
trunc(@, 20); printa(@); }'
I would be curious to see where the CPU cycles are being consumed first,
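A closely related sketch (same idea as the one-liner above, not a replacement for it) also keys the aggregation by execname, which helps show whether the kernel time is attributable to the httpd processes specifically:

#dtrace -n 'profile-997hz /arg0/ { @[execname, stack()] = count(); }
tick-60sec { trunc(@, 20); printa(@); exit(0); }'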
On Mon, Mar 26, 2012 at 2:13 AM, Aubrey Li wrote:
>> The problem is, every zfs vnode access needs the **same zfs root** lock.
>> When the number of httpd processes and the corresponding kernel threads
>> becomes large, this root lock contention becomes horrible. This situation
>> does not occur
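One way to test that theory directly (illustrative invocations, with arbitrary intervals) is lockstat, which reports kernel lock contention and hold times by caller:

# lockstat -D 20 sleep 10      # top 20 lock-contention events over 10 seconds
# lockstat -H -D 20 sleep 10   # hold-time view of the same locks

If a single ZFS root vnode lock really dominates, it should appear at the top of the first report.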
On Mon, Mar 26, 2012 at 2:58 AM, Richard Elling wrote:
> On Mar 25, 2012, at 10:25 AM, Aubrey Li wrote:
>
> On Mon, Mar 26, 2012 at 12:48 AM, Richard Elling wrote:
>
> This is the wrong forum for general purpose performance tuning. So I won't
>
> continue this much farther. Notice the huge num
On Mon, Mar 26, 2012 at 2:10 AM, zfs user wrote:
> On 3/25/12 10:25 AM, Aubrey Li wrote:
>>
>> On Mon, Mar 26, 2012 at 12:48 AM, Richard Elling wrote:
>>>
>>> This is the wrong forum for general purpose performance tuning. So I
>>> won't
>>> continue this much farther. Notice the huge number
On Mar 25, 2012, at 10:25 AM, Aubrey Li wrote:
> On Mon, Mar 26, 2012 at 12:48 AM, Richard Elling wrote:
>> This is the wrong forum for general purpose performance tuning. So I won't
>> continue this much farther. Notice the huge number of icsw, that is a
>> bigger
>> symptom than locks.
>> -
On 3/25/12 10:25 AM, Aubrey Li wrote:
On Mon, Mar 26, 2012 at 12:48 AM, Richard Elling wrote:
This is the wrong forum for general purpose performance tuning. So I won't
continue this much farther. Notice the huge number of icsw, that is a
bigger
symptom than locks.
-- richard
thanks anywa
On Mon, Mar 26, 2012 at 12:48 AM, Richard Elling wrote:
> This is the wrong forum for general purpose performance tuning. So I won't
> continue this much farther. Notice the huge number of icsw, that is a
> bigger
> symptom than locks.
> -- richard
thanks anyway, lock must be a problem. the sce
On Mar 25, 2012, at 6:26 AM, Jeff Bacon wrote:
>> In general, mixing SATA and SAS directly behind expanders (e.g. without
>> SAS/SATA interposers) seems to be bad juju that an OS can't fix.
>
> In general I'd agree. Just mixing them on the same box can be problematic,
> I've noticed - though I thi
This is the wrong forum for general purpose performance tuning. So I won't
continue this much farther. Notice the huge number of icsw, that is a bigger
symptom than locks.
-- richard
On Mar 25, 2012, at 6:24 AM, Aubrey Li wrote:
> SET minf mjf xcal intr ithr csw icsw migr smtx srw syscl us
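The icsw column above is involuntary context switches per CPU per second; two standard (illustrative) ways to watch it alongside per-thread detail:

# mpstat 5        # icsw column: involuntary context switches per CPU
# prstat -mL 5    # ICX column per LWP; LAT is time spent waiting for a CPU

High icsw together with high LAT usually points at CPU saturation or run-queue pressure rather than at a particular lock.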
> In general, mixing SATA and SAS directly behind expanders (e.g. without
> SAS/SATA interposers) seems to be bad juju that an OS can't fix.
In general I'd agree. Just mixing them on the same box can be problematic,
I've noticed - though I think as much as anything that the firmware
on the 3G/s exp
On Sun, Mar 25, 2012 at 3:55 PM, Richard Elling wrote:
> On Mar 24, 2012, at 10:29 PM, Aubrey Li wrote:
>
> Hi,
>
> I'm migrating a web server (Apache + PHP) from RHEL to Solaris. During the
> stress-testing comparison, I found that with the same number of client
> sessions, CPU% is ~70% on RHEL wh
On Mar 24, 2012, at 10:29 PM, Aubrey Li wrote:
> Hi,
>
> I'm migrating a web server (Apache + PHP) from RHEL to Solaris. During the
> stress-testing comparison, I found that with the same number of client
> sessions, CPU% is ~70% on RHEL while CPU% is at 100% on Solaris.
>
> After some investigation