Bit of a head scratcher on the runningbufspace usage question. One of the
test systems has 4 g_up/g_down threads running, hence the better
runningbufspace usage. biodone() gets called a lot more often, so the
buffer usage is not backing up.
It also appears that devstat_start_transaction() /
devstat_end
Thanks for all of the suggestions. We do tune the logging ufs partition
to have 64K blocks.
We found a solution that makes this problem go away.
We've modified CAM such that if a controller has 2 or more disks
attached, it divides the number of I/O slots on the card between the
disks. So
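For illustration only, a minimal sketch of that kind of division; the function
name and its call site are hypothetical, not the actual CAM change:

/*
 * Hypothetical sketch of splitting a controller's command openings
 * evenly across the disks attached to it, so one busy array cannot
 * monopolize the card.  Illustrative only, not the real CAM code.
 */
static int
openings_per_disk(int controller_openings, int ndisks)
{
	if (ndisks <= 1)
		return (controller_openings);
	return (controller_openings / ndisks);
}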
On 13/02/2012 15:48, Stephen Sanders wrote:
> We've an application that logs data on one very large raid6 array
> and updates/accesses a database on another smaller raid5 array.
You would be better off with RAID10 for a database (or anything which
does random IO).
> Both arrays are connected to t
On Mon, Feb 13, 2012 at 2:48 PM, Stephen Sanders
wrote:
> We've an application that logs data on one very large raid6 array
> and updates/accesses a database on another smaller raid5 array.
>
> Both arrays are connected to the same PCIe 3ware RAID controller. The
> system has 2 six core 3Ghz pro
We've an application that logs data on one very large raid6 array
and updates/accesses a database on another smaller raid5 array.
Both arrays are connected to the same PCIe 3ware RAID controller. The
system has 2 six core 3Ghz processors and 24 GB of RAM. The system is
running FreeBSD 8.1.
The
Regarding performance of: lib/libc/net/nsdispatch.c
When used from: lib/libc/net/getgrent.c (called by initgroups())
I don't normally post here but I wanted to make a suggestion on a performance
issue that I spotted. I run a large number of high-volume web hosting servers
and noticed on so
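As a rough way to exercise the same lookup path from userland, here is a sketch
that hammers getgrouplist(3), assuming it ends up in the same
getgrent()/nsdispatch() machinery that initgroups() uses; the user name, base
gid and iteration count are arbitrary:

/*
 * Crude benchmark loop for the group lookup path used by initgroups():
 * getgrouplist() -> getgrent() -> nsdispatch().  The user name, base
 * gid and iteration count are arbitrary.
 */
#include <grp.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	gid_t groups[NGROUPS_MAX];
	int i, ngroups = 0;

	for (i = 0; i < 100000; i++) {
		ngroups = NGROUPS_MAX;
		(void)getgrouplist("www", 80, groups, &ngroups);
	}
	printf("last lookup returned %d groups\n", ngroups);
	return (0);
}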
Just had one of our webservers flagged as down here and on
investigation the machine seems to be struggling due to
a hung vmdaemon process.
top is reporting vmdaemon as using a constant 55.57% CPU
yet CPU time is not increasing:-
last pid: 36492; load averages: 0.04, 0.05, 0.11 up 89+19:5
On Fri, 14 Dec 2001 02:55:33 -0600
Alfred Perlstein <[EMAIL PROTECTED]> wrote:
> * Kris Kennaway <[EMAIL PROTECTED]> [011214 02:46] wrote:
> >
> > Yes, well, we already compile the *entire tree* with static
> > (compile-time) optimizations when CPUTYPE is set, so one more (bzero)
> > is no diffe
* Kris Kennaway <[EMAIL PROTECTED]> [011214 02:46] wrote:
>
> Yes, well, we already compile the *entire tree* with static
> (compile-time) optimizations when CPUTYPE is set, so one more (bzero)
> is no difference except that it gives an extra performance benefit.
Wait, you go to each and every f
On Fri, Dec 14, 2001 at 02:26:51AM -0600, Alfred Perlstein wrote:
> > This could easily be hung off CPUTYPE like we do for the asm code in
> > OpenSSL, right?
>
> That's not the point, you're proposing a static configuration
> which i honestly don't like. What makes more sense is to
> teach the
:That's not the point, you're proposing a static configuration
:which i honestly don't like. What makes more sense is to
:teach the dynamic linker to look for architecture specific
:subdirectories in order to dynamically link in a shared object
:more suited to the running CPU, not the CPU it was
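To make the contrast with a static CPUTYPE build concrete, here is a tiny
userland sketch of picking an implementation for the running CPU at start-up;
the feature probe is a stub, and this only illustrates the idea, not the
proposed rtld change:

/*
 * Illustration of run-time selection of a CPU-specific routine, as
 * opposed to a compile-time CPUTYPE choice.  cpu_has_fast_string_ops()
 * is a stand-in for a real feature probe (e.g. CPUID), not an existing
 * API, and bzero_i686() is a placeholder.
 */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static int
cpu_has_fast_string_ops(void)
{
	return (0);		/* stub; a real probe would use CPUID */
}

static void
bzero_generic(void *p, size_t n)
{
	memset(p, 0, n);
}

static void
bzero_i686(void *p, size_t n)
{
	memset(p, 0, n);	/* a tuned version would live here */
}

static void (*do_bzero)(void *, size_t) = bzero_generic;

int
main(void)
{
	char buf[256];

	if (cpu_has_fast_string_ops())
		do_bzero = bzero_i686;	/* decided on the running CPU */
	do_bzero(buf, sizeof(buf));
	printf("using %s bzero\n",
	    do_bzero == bzero_i686 ? "i686" : "generic");
	return (0);
}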
* Kris Kennaway <[EMAIL PROTECTED]> [011213 22:17] wrote:
> On Sun, Dec 09, 2001 at 03:23:28PM -0800, Peter Wemm wrote:
> > Poul-Henning Kamp wrote:
> > >
> > > There are many effects that could cause this, for instance if FreeBSD
> > > manages to align things differently in relation to the CPU c
On Sun, Dec 09, 2001 at 03:23:28PM -0800, Peter Wemm wrote:
> Poul-Henning Kamp wrote:
> >
> > There are many effects that could cause this, for instance if FreeBSD
> > manages to align things differently in relation to the CPU cache you
> > could get some very interesting waste of time that way.
If they're using gcc to compile then that doesn't really matter; last I heard
gcc's optimizer wasn't that great and didn't result in much faster code, but
if the glibc people hand-optimized stuff, I can see your point.
Ken
>
> This means that Linux's glibc is using an i686 optimized bzero(),
Poul-Henning Kamp wrote:
>
> There are many effects that could cause this, for instance if FreeBSD
> manages to align things differently in relation to the CPU cache you
> could get some very interesting waste of time that way.
>
> Based on the data you show me, I can't really say that something
On Sun, 09 Dec 2001 12:42:50 +0100
Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
>
> There are many effects that could cause this, for instance if FreeBSD
> manages to align things differently in relation to the CPU cache you
> could get some very interesting waste of time that way.
Yes, I agree
There are many effects that could cause this, for instance if FreeBSD
manages to align things differently in relation to the CPU cache you
could get some very interesting waste of time that way.
Based on the data you show me, I can't really say that something is
wrong or right either way.
--
P
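One way to see the sort of alignment effect being described is to time an
identical copy loop at different offsets into an over-allocated buffer; this
is only a crude illustration, not the benchmark from the thread:

/*
 * Crude illustration of how destination alignment relative to the CPU
 * cache can change the timing of an otherwise identical loop.  Not the
 * benchmark discussed in this thread.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define	BUFSZ	(1 << 20)

static double
time_copy(char *dst, const char *src)
{
	clock_t start = clock();
	int i;

	for (i = 0; i < 200; i++)
		memcpy(dst, src, BUFSZ);
	return ((double)(clock() - start) / CLOCKS_PER_SEC);
}

int
main(void)
{
	char *raw = malloc(BUFSZ + 256);
	char *src = malloc(BUFSZ);
	int offset;

	if (raw == NULL || src == NULL)
		return (1);
	memset(src, 'x', BUFSZ);
	for (offset = 0; offset <= 64; offset += 8)	/* same work, shifted */
		printf("offset %2d: %.3f s\n", offset,
		    time_copy(raw + offset, src));
	free(raw);
	free(src);
	return (0);
}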
g and allocation algorithms currently in use in
FreeBSD (I've read /usr/src/lib/libc/stdlib/malloc.c and have completely
agreed with it, with 1 exception), please help me.
PS:
I'm not trying to compare both OSes, but I'm curious about this performance issue.
On FreeBSD 4.4 Sta
On Tue, Jan 25, 2000 at 09:12:31AM -0800, Brian D. Moffet wrote:
> Okay, stupid question. socketpair returns 2 sockets which according to
> the man page are "indistinguishable". Does this mean that you can read and
> write to either socket pair?
Yes sir.
> pipe(2) returns 2 file descriptors,
"Alfred Perlstein" <[EMAIL PROTECTED]> wrote:
> I think you probably want to experiment with pools attached to the
> pipe, and you ought to be using pipe rather than socketpair.
My tests indicate that pipe performance in this case is identical to
socketpair performance. Perhaps because I'm sendi
Okay, stupid question. socketpair returns 2 sockets which according to
the man page are "indistinguishable". Does this mean that you can read and
write to either socket pair?
pipe(2) returns 2 file descriptors, one of which is a read and one of
which is a write fd. The other end flips these a
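A small sketch of that difference, using nothing beyond the standard
socketpair(2) and pipe(2) interfaces:

/*
 * Both ends of a socketpair are interchangeable and bidirectional;
 * pipe(2) traditionally gives one read end (p[0]) and one write end
 * (p[1]).  Error handling trimmed to the minimum.
 */
#include <sys/socket.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	int sv[2], p[2];
	char buf[16];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1 || pipe(p) == -1)
		return (1);

	/* Either end of the socketpair can be written and read. */
	write(sv[0], "ping", 4);
	read(sv[1], buf, sizeof(buf));
	write(sv[1], "pong", 4);
	read(sv[0], buf, sizeof(buf));

	/* With pipe(2), write to p[1], read from p[0]. */
	write(p[1], "data", 4);
	read(p[0], buf, sizeof(buf));

	printf("last message: %.4s\n", buf);
	return (0);
}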
* Matthew Dillon <[EMAIL PROTECTED]> [000125 11:51] wrote:
>
> :OK, so let's say I did spend some time implementing it in terms of semget()
> :and semop(). Would you be totally appalled if the performance turned out to
> :be about the same as using a single socketpair? Do you have a very strong
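For reference, a bare-bones picture of what implementing it in terms of
semget() and semop() could involve; a single SysV semaphore used as a counter,
purely illustrative and not the code under discussion:

/*
 * Minimal SysV semaphore round trip: create one semaphore, post to it,
 * wait on it, and remove it.  Purely illustrative.
 */
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <stdio.h>

int
main(void)
{
	struct sembuf post = { 0, 1, 0 };	/* increment semaphore 0 */
	struct sembuf wait = { 0, -1, 0 };	/* decrement, blocking at 0 */
	int semid;

	semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
	if (semid == -1)
		return (1);
	if (semop(semid, &post, 1) == -1 || semop(semid, &wait, 1) == -1)
		return (1);
	printf("semaphore round trip complete\n");
	semctl(semid, 0, IPC_RMID);	/* clean up */
	return (0);
}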
"Brian D. Moffet" wrote:
> Okay, stupid question. socketpair returns 2 sockets which according to
> the man page are "indistinguishable". Does this mean that you can read and
> write to either socket pair?
Yep, you can write to either end and it will come out the other end.
> pipe(2) returns
:OK, so let's say I did spend some time implementing it in terms of semget()
:and semop(). Would you be totally appalled if the performance turned out to
:be about the same as using a single socketpair? Do you have a very strong
:feeling that it should be significantly better. [Again, under
:3.
"Scott Hess" <[EMAIL PROTECTED]> wrote:
> "Matthew Dillon" <[EMAIL PROTECTED]> wrote:
> > :Unfortunately, I've found that having a group of processes reading
> > :from a group of socketpairs has better performance than having
> > :them all read from a single socketpair. I've been unable to
> > :
"Matthew Dillon" <[EMAIL PROTECTED]> wrote:
> :Unfortunately, I've found that having a group of processes reading from a
> :group of socketpairs has better performance than having them all read from
> :a single socketpair. I've been unable to determine why.
>
> The problem is that when you ha
Brian Somers wrote:
> > "Scott Hess" wrote:
> >
> > > I've found an odd performance issue that I cannot explain. I'm using
> > > socketpairs to communicate with multiple rfork(RFPROC) processes.
> >
> > Use 'pipe(2)' rah
> "Scott Hess" wrote:
>
> > I've found an odd performance issue that I cannot explain. I'm using
> > socketpairs to communicate with multiple rfork(RFPROC) processes.
>
> Use 'pipe(2)' rather than 'socketpair(2)' as both are
"Scott Hess" wrote:
> I've found an odd performance issue that I cannot explain. I'm using
> socketpairs to communicate with multiple rfork(RFPROC) processes.
Use 'pipe(2)' rather than 'socketpair(2)' as both are bidirectional and
pipe is a LOT f
:I've found an odd performance issue that I cannot explain. I'm using
:socketpairs to communicate with multiple rfork(RFPROC) processes.
:Initially, I used a separate socketpair to communicate requests to each
:...
:
:Unfortunately, I've found that having a group of processes
I've found an odd performance issue that I cannot explain. I'm using
socketpairs to communicate with multiple rfork(RFPROC) processes.
Initially, I used a separate socketpair to communicate requests to each
process, with locking in the parent to synchronize access to each client.
I
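A stripped-down sketch of that arrangement, with one socketpair per
rfork(RFPROC) child; the child count and messages are arbitrary, and the
parent-side locking and real work are omitted:

/*
 * One socketpair per rfork(RFPROC) child: the parent writes a request
 * on its end, the child replies on the other.  Child count and message
 * contents are arbitrary; all real work and locking are omitted.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>

#define	NCHILDREN	4

int
main(void)
{
	int fds[NCHILDREN][2];
	char buf[16];
	int i;

	for (i = 0; i < NCHILDREN; i++) {
		if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds[i]) == -1)
			return (1);
		if (rfork(RFPROC) == 0) {
			/* child: serve one request on its own pair */
			read(fds[i][1], buf, sizeof(buf));
			write(fds[i][1], "done", 4);
			_exit(0);
		}
	}
	for (i = 0; i < NCHILDREN; i++) {
		write(fds[i][0], "work", 4);
		read(fds[i][0], buf, sizeof(buf));
	}
	while (wait(NULL) > 0)		/* reap the children */
		;
	printf("all %d children replied\n", NCHILDREN);
	return (0);
}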