Re: Odd RAID Performance Issue

2012-03-27 Thread Stephen Sanders
Bit of a head space on the running space usage question. One of the test systems has 4 g_up/g_down threads running hence the better runningbufspace usages. biodone() gets called a lot more often so the buffer usage is not backing up. It also appears that devstat_start_transaction() / devstat_end

Re: Odd RAID Performance Issue

2012-03-27 Thread Steve Sanders
Thanks for all of the suggestions. We do tune the logging ufs partition to have 64K blocks. We found a solution that makes this problem go away. We've modified the CAM layer such that if a controller has 2 or more disks attached, it divides the number of I/O slots on the card between the disks. So
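The fix described above, splitting a controller's command slots evenly across its attached disks so one busy array cannot starve the other, can be sketched roughly as follows. This is a hypothetical reconstruction of the idea only, not the actual CAM patch; the function name and slot counts are made up for illustration.

```python
def divide_io_slots(total_slots: int, ndisks: int) -> int:
    """Per-disk I/O slot budget when a controller's command slots
    are split evenly across its attached disks (illustrative only)."""
    if ndisks <= 0:
        raise ValueError("need at least one disk")
    # Equal shares, but guarantee every disk at least one slot so it
    # can always make forward progress.
    return max(1, total_slots // ndisks)

# e.g. a controller with 256 command slots and two attached arrays
print(divide_io_slots(256, 2))  # 128 slots per disk
```

With a shared pool, one saturated array can consume every slot on the card; a fixed per-disk budget bounds how far one workload can push out the other.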

Re: Odd RAID Performance Issue

2012-02-13 Thread Ivan Voras
On 13/02/2012 15:48, Stephen Sanders wrote: > We've an application that logs data on one very large raid6 array > and updates/accesses a database on another smaller raid5 array. You would be better off with RAID10 for a database (or anything which does random IO). > Both arrays are connected to t

Re: Odd RAID Performance Issue

2012-02-13 Thread Tom Evans
On Mon, Feb 13, 2012 at 2:48 PM, Stephen Sanders wrote: > We've an application that logs data on one very large raid6 array > and updates/accesses a database on another smaller raid5 array. > > Both arrays are connected to the same PCIe 3ware RAID controller.   The > system has 2 six core 3Ghz pro

Odd RAID Performance Issue

2012-02-13 Thread Stephen Sanders
We've an application that logs data on one very large raid6 array and updates/accesses a database on another smaller raid5 array. Both arrays are connected to the same PCIe 3ware RAID controller. The system has two six-core 3GHz processors and 24 GB of RAM. The system is running FreeBSD 8.1. The

nsdispatch performance issue for large group files

2009-03-23 Thread Anthony Bourov
Regarding performance of: lib/libc/net/nsdispatch.c When used from: lib/libc/net/getgrent.c (called by initgroups()) I don't normally post here but I wanted to make a suggestion on a performance issue that I spotted. I run a large number of high-volume web hosting servers and noticed on so

7.0 unusual performance issue - vmdaemon hang?

2008-12-10 Thread Steven Hartland
Just had one of our webservers flag as down here and on investigation the machine seems to be struggling due to a hung vmdaemon process. top is reporting vmdaemon as using a constant 55.57% CPU yet CPU time is not increasing:- last pid: 36492; load averages: 0.04, 0.05, 0.11 up 89+19:5

Re: Performance issue

2001-12-14 Thread Dimitar Peikov
On Fri, 14 Dec 2001 02:55:33 -0600 Alfred Perlstein <[EMAIL PROTECTED]> wrote: > * Kris Kennaway <[EMAIL PROTECTED]> [011214 02:46] wrote: > > > > Yes, well, we already compile the *entire tree* with static > > (compile-time) optimizations when CPUTYPE is set, so one more (bzero) > > is no diffe

Re: Performance issue

2001-12-14 Thread Alfred Perlstein
* Kris Kennaway <[EMAIL PROTECTED]> [011214 02:46] wrote: > > Yes, well, we already compile the *entire tree* with static > (compile-time) optimizations when CPUTYPE is set, so one more (bzero) > is no difference except that it gives an extra performance benefit. Wait, you go to each and every f

Re: Performance issue

2001-12-14 Thread Kris Kennaway
On Fri, Dec 14, 2001 at 02:26:51AM -0600, Alfred Perlstein wrote: > > This could easily be hung off CPUTYPE like we do for the asm code in > > OpenSSL, right? > > That's not the point, you're proposing a static configuration > which I honestly don't like. What makes more sense is to > teach the

Re: Performance issue

2001-12-14 Thread Matthew Dillon
:That's not the point, you're proposing a static configuration :which I honestly don't like. What makes more sense is to :teach the dynamic linker to look for architecture specific :subdirectories in order to dynamically link in a shared object :more suited to the running CPU, not the CPU it was
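The proposal quoted above is to select a CPU-tuned implementation at run time rather than baking one in at compile time via CPUTYPE. The dynamic-linker mechanism Dillon describes picks a whole shared object per architecture; the sketch below shows the same general idea at the function level, with made-up names and a placeholder in place of real CPUID feature detection:

```python
def bzero_generic(buf: bytearray) -> None:
    # Portable baseline: zero one byte at a time.
    for i in range(len(buf)):
        buf[i] = 0

def bzero_fast(buf: bytearray) -> None:
    # Stand-in for a CPU-specific variant (e.g. an SSE-optimized bzero).
    buf[:] = bytes(len(buf))

def cpu_has_fast_path() -> bool:
    # Placeholder: real code would probe the running CPU (CPUID etc.).
    return True

# Bind the best implementation once, at startup, based on the CPU
# actually running -- not the CPU the binary was compiled for.
bzero = bzero_fast if cpu_has_fast_path() else bzero_generic

b = bytearray(b"xxxx")
bzero(b)
print(b)  # bytearray(b'\x00\x00\x00\x00')
```

The trade-off under debate: static CPUTYPE builds cost nothing at run time but produce binaries tied to one CPU, while run-time selection keeps one binary optimal everywhere at the cost of an indirection.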

Re: Performance issue

2001-12-14 Thread Alfred Perlstein
* Kris Kennaway <[EMAIL PROTECTED]> [011213 22:17] wrote: > On Sun, Dec 09, 2001 at 03:23:28PM -0800, Peter Wemm wrote: > > Poul-Henning Kamp wrote: > > > > > > There are many effects that could cause this, for instance if FreeBSD > > > manages to align things differently in relation to the CPU c

Re: Performance issue

2001-12-13 Thread Kris Kennaway
On Sun, Dec 09, 2001 at 03:23:28PM -0800, Peter Wemm wrote: > Poul-Henning Kamp wrote: > > > > There are many effects that could cause this, for instance if FreeBSD > > manages to align things differently in relation to the CPU cache you > > could get some very interesting waste of time that way.

Re: Performance issue

2001-12-09 Thread Kenneth Culver
If they're using gcc to compile then that doesn't really matter, last I heard gcc's optimizer wasn't that great, and didn't result in much faster code, but if the glibc people hand optimized stuff, I can see your point. Ken > > This means that Linux's glibc is using an i686 optimized bzero(),

Re: Performance issue

2001-12-09 Thread Peter Wemm
Poul-Henning Kamp wrote: > > There are many effects that could cause this, for instance if FreeBSD > manages to align things differently in relation to the CPU cache you > could get some very interesting waste of time that way. > > Based on the data you show me, I can't really say that something

Re: Performance issue

2001-12-09 Thread Dimitar Peikov
On Sun, 09 Dec 2001 12:42:50 +0100 Poul-Henning Kamp <[EMAIL PROTECTED]> wrote: > > There are many effects that could cause this, for instance if FreeBSD > manages to align things differently in relation to the CPU cache you > could get some very interesting waste of time that way. Yes, I agree

Re: Performance issue

2001-12-09 Thread Poul-Henning Kamp
There are many effects that could cause this, for instance if FreeBSD manages to align things differently in relation to the CPU cache you could get some very interesting waste of time that way. Based on the data you show me, I can't really say that something is wrong or right either way. -- P

Performance issue

2001-12-09 Thread Dimitar Peikov
g and allocation algorithms currently in use in FreeBSD (I've read /usr/src/lib/libc/stdlib/malloc.c and have completely agreed with it, with 1 exception), please help me. PS: I'm not trying to compare both OS-es but I'm curious on this performance issue. On FreeBSD 4.4 Sta

Re: Performance issue with rfork() and single socketpairs versus multiple socketpairs.

2000-01-25 Thread James Bailie
On Tue, Jan 25, 2000 at 09:12:31AM -0800, Brian D. Moffet wrote: > Okay, stupid question. socketpair returns 2 sockets which according to > the man page are "indistinguishable". Does this mean that you can read and > write to either socket pair? Yes sir. > pipe(2) returns 2 file descriptors,

Re: Performance issue with rfork() and single socketpairs versus multiple socketpairs.

2000-01-25 Thread Scott Hess
"Alfred Perlstein" <[EMAIL PROTECTED]> wrote: > I think you probably want to experiment with pools attached to the > pipe, and you ought to be using pipe rather than socketpair. My tests indicate that pipe performance in this case is identical to socketpair performance. Perhaps because I'm sendi

Re: Performance issue with rfork() and single socketpairs versus multiple socketpairs.

2000-01-25 Thread Brian D. Moffet
Okay, stupid question. socketpair returns 2 sockets which according to the man page are "indistinguishable". Does this mean that you can read and write to either socket pair? pipe(2) returns 2 file descriptors, one of which is a read and one of which is a write fd. The other end flips these a

Re: Performance issue with rfork() and single socketpairs versus multiple socketpairs.

2000-01-25 Thread Alfred Perlstein
* Matthew Dillon <[EMAIL PROTECTED]> [000125 11:51] wrote: > > :OK, so let's say I did spend some time implementing it in terms of semget() > :and semop(). Would you be totally appalled if the performance turned out to > :be about the same as using a single socketpair? Do you have a very strong

Re: Performance issue with rfork() and single socketpairs versus multiple socketpairs.

2000-01-25 Thread Peter Wemm
"Brian D. Moffet" wrote: > Okay, stupid question. socketpair returns 2 sockets which according to > the man page are "indistinguishable". Does this mean that you can read and > write to either socket pair? Yep, you can write to either end and it will come out the other end. > pipe(2) returns
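The answer above, that both ends of a socketpair are read/write and "indistinguishable", is easy to check. Python's socket.socketpair() wraps the same socketpair(2) call discussed in the thread:

```python
import socket

# socketpair(2) returns two connected, indistinguishable endpoints;
# data written on either one can be read from the other.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

a.sendall(b"ping")           # write on one end...
print(b.recv(4).decode())    # ...read it on the other: ping

b.sendall(b"pong")           # the reverse direction works equally well
print(a.recv(4).decode())    # pong

a.close()
b.close()
```

This is exactly the contrast with a classic pipe(2), whose two descriptors have fixed roles (one read end, one write end) on most systems.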

Re: Performance issue with rfork() and single socketpairs versus multiple socketpairs.

2000-01-25 Thread Matthew Dillon
:OK, so let's say I did spend some time implementing it in terms of semget() :and semop(). Would you be totally appalled if the performance turned out to :be about the same as using a single socketpair? Do you have a very strong :feeling that it should be significantly better? [Again, under :3.

Re: Performance issue with rfork() and single socketpairs versus multiple socketpairs.

2000-01-25 Thread Scott Hess
"Scott Hess" <[EMAIL PROTECTED]> wrote: > "Matthew Dillon" <[EMAIL PROTECTED]> wrote: > > :Unfortunately, I've found that having a group of processes reading > > :from a group of socketpairs has better performance than having > > :them all read from a single socketpair. I've been unable to > > :

Re: Performance issue with rfork() and single socketpairs versus multiple socketpairs.

2000-01-25 Thread Scott Hess
"Matthew Dillon" <[EMAIL PROTECTED]> wrote: > :Unfortunately, I've found that having a group of processes reading from a > :group of socketpairs has better performance than having them all read from > :a single socketpair. I've been unable to determine why. > > The problem is that when you ha

Re: Performance issue with rfork() and single socketpairs versus multiple socketpairs.

2000-01-24 Thread Peter Wemm
Brian Somers wrote: > > "Scott Hess" wrote: > > > > > I've found an odd performance issue that I cannot explain. I'm using > > > socketpairs to communicate with multiple rfork(RFPROC) processes. > > > > Use 'pipe(2)' rah

Re: Performance issue with rfork() and single socketpairs versus multiple socketpairs.

2000-01-24 Thread Brian Somers
> "Scott Hess" wrote: > > > I've found an odd performance issue that I cannot explain. I'm using > > socketpairs to communicate with multiple rfork(RFPROC) processes. > > Use 'pipe(2)' rather than 'socketpair(2)' as both are

Re: Performance issue with rfork() and single socketpairs versus multiple socketpairs.

2000-01-24 Thread Peter Wemm
"Scott Hess" wrote: > I've found an odd performance issue that I cannot explain. I'm using > socketpairs to communicate with multiple rfork(RFPROC) processes. Use 'pipe(2)' rather than 'socketpair(2)' as both are bidirectional and pipe is a LOT f

Re: Performance issue with rfork() and single socketpairs versus multiple socketpairs.

2000-01-24 Thread Matthew Dillon
:I've found an odd performance issue that I cannot explain. I'm using :socketpairs to communicate with multiple rfork(RFPROC) processes. :Initially, I used a separate socketpair to communicate requests to each :... : :Unfortunately, I've found that having a group of processes

Performance issue with rfork() and single socketpairs versus multiple socketpairs.

2000-01-24 Thread Scott Hess
I've found an odd performance issue that I cannot explain. I'm using socketpairs to communicate with multiple rfork(RFPROC) processes. Initially, I used a separate socketpair to communicate requests to each process, with locking in the parent to synchronize access to each client. I
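The arrangement described above, one dedicated socketpair per worker process, can be sketched as follows. This is a minimal illustrative reconstruction, not the original code: rfork(RFPROC) is FreeBSD-specific, so plain fork() stands in for it, and the job/echo protocol is made up.

```python
import os
import socket

# One private socketpair per worker, as in the "multiple socketpairs"
# variant of the experiment described in the message.
NWORKERS = 3
pairs = []
for wid in range(NWORKERS):
    parent_end, child_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    pid = os.fork()
    if pid == 0:
        # Worker: serve one request (echo it back uppercased) and exit.
        parent_end.close()
        req = child_end.recv(64)
        child_end.sendall(req.upper())
        os._exit(0)
    child_end.close()
    pairs.append(parent_end)

# The parent holds a dedicated endpoint per worker, so dispatching a
# request to worker i never contends with traffic for the others --
# the contrast is with all workers blocking in read() on ONE shared pair.
for i, end in enumerate(pairs):
    end.sendall(b"job-%d" % i)
    print(end.recv(64).decode())  # JOB-0, JOB-1, JOB-2
    end.close()
for _ in range(NWORKERS):
    os.wait()
```

In the single-socketpair variant every idle worker sleeps in read() on the same descriptor, which is the setup whose surprisingly poor performance the rest of this thread tries to explain.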