* Timo Sirainen :
> On 16.12.2010, at 22.09, Daniel L. Miller wrote:
>
> > You mean these past two months of grief - concern over Dovecot 2.0's
> > underlying design, exploring various configurations and settings, and
> > general hysteria came down to...a two line patch?!?!?!
>
> At least now I know much more about context switches
* Daniel L. Miller :
> You mean these past two months of grief - concern over Dovecot 2.0's
> underlying design, exploring various configurations and settings, and
> general hysteria came down to...a two line patch?!?!?!
Yes. It's like that. With Windows, nobody would have ever fixed that.
--
Ralf Hildebrandt
> Oops, now I finally understand why Mail.app kept asking for my password for
> each mail I sent: it helpfully decided to start signing mails with the only
> client cert I had, without asking me.. Forget about those signatures in the
> last two mails :)
>
Heh, is that the key you used to get
Oops, now I finally understand why Mail.app kept asking for my password for
each mail I sent: it helpfully decided to start signing mails with the only
client cert I had, without asking me.. Forget about those signatures in the
last two mails :)
On 16.12.2010, at 18.33, Mark Moseley wrote:
>> http://lkml.org/lkml/2010/12/15/470
>
> Timo, if we apply the above kernel patch, do we still need to patch
> dovecot or is it just an either-or thing?
Either-or.
> I'm guessing the reason I saw so many less context switches when using
> a client_
On 16.12.2010, at 22.09, Daniel L. Miller wrote:
> You mean these past two months of grief - concern over Dovecot 2.0's
> underlying design, exploring various configurations and settings, and general
> hysteria came down to...a two line patch?!?!?!
At least now I know much more about context switches
hi cor,
> For those interested, graph showing the difference before and after patch:
>
> context switches: http://grab.by/7W9u
> load: http://grab.by/7W9x
which patch, timo's patch on dovecot or the lkml patch that was also posted?
christoph
You mean these past two months of grief - concern over Dovecot 2.0's
underlying design, exploring various configurations and settings, and
general hysteria came down to...a two line patch?!?!?!
I know it was more than that - and that in fact this exposed a Linux
kernel flaw - but I'm still laughing
For those interested, graph showing the difference before and after patch:
context switches: http://grab.by/7W9u
load: http://grab.by/7W9x
Cor
2010/12/16 Jose Celestino :
> On Qui, 2010-12-16 at 12:56 +0100, Ralf Hildebrandt wrote:
>> * Cor Bosman :
>
>> > I saw someone also posted a patch to the LKML.
>>
>> I guess I missed that one
>>
>
> http://lkml.org/lkml/2010/12/15/470
Timo, if we apply the above kernel patch, do we still need to patch
dovecot or is it just an either-or thing?
On Qui, 2010-12-16 at 12:56 +0100, Ralf Hildebrandt wrote:
> * Cor Bosman :
> > I saw someone also posted a patch to the LKML.
>
> I guess I missed that one
>
http://lkml.org/lkml/2010/12/15/470
--
Jose Celestino | http://japc.uncovering.org/files/japc-pgpkey.asc
* Charles Marcus :
> On 2010-12-16 6:53 AM, Cor Bosman wrote:
> > It looks like Timo's patch fixes the problem. Context switches are
> > now back to normal, and the load graph is smooth and lower. I saw
> > someone also posted a patch to the LKML.
>
> Out of curiosity, when you say lower - how does the load actually compare now
On 2010-12-16 6:53 AM, Cor Bosman wrote:
> It looks like Timo's patch fixes the problem. Context switches are
> now back to normal, and the load graph is smooth and lower. I saw
> someone also posted a patch to the LKML.
Out of curiosity, when you say lower - how does the load actually
compare now
* Cor Bosman :
> It looks like Timo's patch fixes the problem. Context switches are now
> back to normal, and the load graph is smooth and lower.
Same here.
> I saw someone also posted a patch to the LKML.
I guess I missed that one
--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
It looks like Timo's patch fixes the problem. Context switches are now back to
normal, and the load graph is smooth and lower. I saw someone also posted a
patch to the LKML.
Cor
* Jose Celestino :
> I guess that would depend on the CPUs, caches, etc.
No it doesn't :)
--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30
On Qua, 2010-12-15 at 21:06 +0100, Cor Bosman wrote:
> >
> > Correct. Only Linux is affected.
> >
> > Anyway, Postfix's design has been like this forever, so no one would have
> > ever noticed context switch counts increasing. But changing this might make
> > them notice that context switches are dropping.
>
> Correct. Only Linux is affected.
>
> Anyway, Postfix's design has been like this forever, so no one would have
> ever noticed context switch counts increasing. But changing this might make
> them notice that context switches are dropping.
>
I'm a little surprised we haven't seen more reports
On 15.12.2010, at 19.42, Jerry wrote:
>> BTW. Postfix's behavior is similar (unsurprisingly, since I basically
>> copied its design for v2.0). I wonder how much of a problem this is
>> with Postfix. Is a similar patch or a kernel fix going to drop
>> context switches 10x there too?
>>
>> http://marc.info/?l=linux-kernel&m=129243588809986&w=2
On Wed, 15 Dec 2010 19:16:45 +
Timo Sirainen articulated:
> BTW. Postfix's behavior is similar (unsurprisingly, since I basically
> copied its design for v2.0). I wonder how much of a problem this is
> with Postfix. Is a similar patch or a kernel fix going to drop
> context switches 10x there too?
Am 15.12.2010 20:16, schrieb Timo Sirainen:
> BTW. Postfix's behavior is similar (unsurprisingly, since I basically copied
> its design for v2.0). I wonder how much of a problem this is with Postfix. Is
> a similar patch or a kernel fix going to drop context switches 10x there too?
>
> http://marc.info/?l=linux-kernel&m=129243588809986&w=2
BTW. Postfix's behavior is similar (unsurprisingly, since I basically copied
its design for v2.0). I wonder how much of a problem this is with Postfix. Is a
similar patch or a kernel fix going to drop context switches 10x there too?
http://marc.info/?l=linux-kernel&m=129243588809986&w=2
On Wed, 2010-12-15 at 17:43 +0100, Ralf Hildebrandt wrote:
> > Attached patch should workaround this. The main problem with it is that
> > Dovecot doesn't die very easily after this. You have to kill all the
> > processes manually. I'll probably have to add yet another pipe just for
> > this.
>
>
> Attached patch should workaround this. The main problem with it is that
> Dovecot doesn't die very easily after this. You have to kill all the
> processes manually. I'll probably have to add yet another pipe just for
> this.
Yes. Plenty of killall's are now needed!
So what does this patch actually do?
Attached patch should workaround this. The main problem with it is that
Dovecot doesn't die very easily after this. You have to kill all the
processes manually. I'll probably have to add yet another pipe just for
this.
But I wonder if this could be considered a kernel bug too? I think I'll
write a
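As background on the kind of mechanism involved, here is a minimal, generic sketch (this is not the patch attached above; names and limits are invented) of how a process can avoid being woken from epoll_wait() for a shared listening socket it is currently too busy to serve: it simply removes the listener from its epoll set while it is full.

/* Illustration only -- not the patch discussed in this thread. */
#include <sys/epoll.h>
#include <sys/socket.h>
#include <stdio.h>

#define CLIENT_LIMIT 10   /* invented stand-in for a per-process limit */

static void watch_listener(int epfd, int lfd, int enable)
{
    struct epoll_event ev = { .events = EPOLLIN, .data = { .fd = lfd } };

    /* Only processes that keep lfd in their epoll set get woken by the
     * kernel when a new connection arrives. */
    epoll_ctl(epfd, enable ? EPOLL_CTL_ADD : EPOLL_CTL_DEL, lfd, &ev);
}

int main(void)
{
    int epfd = epoll_create1(0);
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    int clients = 0;

    if (epfd < 0 || lfd < 0) {
        perror("setup");
        return 1;
    }
    /* bind() + listen() omitted for brevity */

    watch_listener(epfd, lfd, 1);
    for (;;) {
        struct epoll_event events[8];
        int i, n = epoll_wait(epfd, events, 8, -1);

        for (i = 0; i < n; i++) {
            if (events[i].data.fd != lfd)
                continue;               /* client I/O handling omitted */
            if (accept(lfd, NULL, NULL) >= 0 && ++clients >= CLIENT_LIMIT)
                watch_listener(epfd, lfd, 0);   /* full: stop listening */
        }
    }
}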
On Fri, 2010-12-10 at 16:31 -0800, Daniel L. Miller wrote:
> Is it possible to run the imap-login process from 1.2 against a 2.0 system?
No, they're too different.
* Stan Hoeppner :
> Mark Moseley put forth on 12/10/2010 2:25 PM:
>
> > Yeah, my comment on the kernel thing was just in reply to one of Cor's
> > 3 debugging tracks, 1 of which was to try upgrading the kernel. I
> > figured I should mention the load issue if he might be upgrading to
> > the lates
Mark Moseley put forth on 12/10/2010 2:25 PM:
> Yeah, my comment on the kernel thing was just in reply to one of Cor's
> 3 debugging tracks, 1 of which was to try upgrading the kernel. I
> figured I should mention the load issue if he might be upgrading to
> the latest/greatest, since it could mak
This is probably a really dumb question, but here goes...
Is it possible to run the imap-login process from 1.2 against a 2.0 system?
--
Daniel
On Thu, Dec 9, 2010 at 4:54 PM, Stan Hoeppner wrote:
> Mark Moseley put forth on 12/9/2010 12:18 PM:
>
>> If you at some point upgrade to >2.6.35, I'd be interested to hear if
>> the load skyrockets on you. I also get the impression that the load
>> average calculation in these recent kernels is 'touchier' than in pre-2.6.35.
>
> Using gcc:
> gcc version 4.4.5 (Debian 4.4.5-8)
We run gcc version 4.3.2
Im not using any configure options, except for a few that Timo had me try
during debugging.
Cor
* Ralf Hildebrandt :
> I'm using:
> ./configure --prefix=/usr/dovecot-2 --enable-maintainer-mode && make
and
export LDFLAGS="-Wl,--as-needed"
export CPPFLAGS='-Wl,--as-needed'
but omitting that makes no difference
--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
* Stan Hoeppner :
> Has anyone considered a compiler issue? A gcc optimization flags issue?
> A gcc version issue? Something along these lines?
Using gcc:
gcc version 4.4.5 (Debian 4.4.5-8)
> Cor and Ralf, are you two using the same gcc version? Same flags? Are
> they different than the sw
Ralf Hildebrandt put forth on 12/10/2010 1:39 AM:
> * Timo Sirainen :
>
>> Cor's debugging has so far shown that a single epoll_wait() call can
>> sometimes generate a few thousand voluntary context switches. I can't
>> really understand how that's possible. Those epoll_wait() calls about
>> half
Timo Sirainen put forth on 12/9/2010 7:10 PM:
> On 10.12.2010, at 0.54, Stan Hoeppner wrote:
>
>> However, this still doesn't seem to explain Ralf's issue, where the
>> kernel stays the same, but the Dovecot version changes, with 2.0.x
>> causing the high load and 1.2.x being normal. Maybe 2.0.x
* Timo Sirainen :
> Cor's debugging has so far shown that a single epoll_wait() call can
> sometimes generate a few thousand voluntary context switches. I can't
> really understand how that's possible. Those epoll_wait() calls about
> half of the total voluntary context switches generated by imap
* Stan Hoeppner :
> Mark Moseley put forth on 12/9/2010 12:18 PM:
>
> > If you at some point upgrade to >2.6.35, I'd be interested to hear if
> > the load skyrockets on you. I also get the impression that the load
> > average calculation in these recent kernels is 'touchier' than in
> > pre-2.6.35
On 10.12.2010, at 0.54, Stan Hoeppner wrote:
> However, this still doesn't seem to explain Ralf's issue, where the
> kernel stays the same, but the Dovecot version changes, with 2.0.x
> causing the high load and 1.2.x being normal. Maybe 2.0.x simply causes
> this bug to manifest itself more loudly.
Mark Moseley put forth on 12/9/2010 12:18 PM:
> If you at some point upgrade to >2.6.35, I'd be interested to hear if
> the load skyrockets on you. I also get the impression that the load
> average calculation in these recent kernels is 'touchier' than in
> pre-2.6.35.
This thread may be of value
On Thu, Dec 9, 2010 at 12:58 PM, Timo Sirainen wrote:
> On Thu, 2010-12-09 at 10:18 -0800, Mark Moseley wrote:
>
>> Upping the client_limit actually results in less processes, since a
>> single process can service up to #client_limit connections. When I
>> bumped up the client_limit for imap, my context switches plummeted.
On Thu, 2010-12-09 at 10:18 -0800, Mark Moseley wrote:
> Upping the client_limit actually results in less processes, since a
> single process can service up to #client_limit connections. When I
> bumped up the client_limit for imap, my context switches plummeted.
> Though as Timo pointed out on an
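As a rough sketch of how those two settings relate in a dovecot 2.0 config (the values here are invented examples, not recommendations):

service imap {
  # reuse the process for consecutive connections instead of exiting
  service_count = 0
  # invented example: let one imap process serve up to 10 concurrent connections
  client_limit = 10
}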
On Thu, Dec 9, 2010 at 11:13 AM, Ralf Hildebrandt
wrote:
> * Mark Moseley :
>
>> > We're on 2.6.32 and the load only goes up when I change dovecot (not
>> > when I change the kernel, which I didn't do so far)
>>
>> If you at some point upgrade to >2.6.35, I'd be interested to hear if
>> the load skyrockets on you.
* Mark Moseley :
> > We're on 2.6.32 and the load only goes up when I change dovecot (not
> > when I change the kernel, which I didn't do so far)
>
> If you at some point upgrade to >2.6.35, I'd be interested to hear if
> the load skyrockets on you.
You mean even more? I'm still hoping it would
On Wed, Dec 8, 2010 at 11:54 PM, Ralf Hildebrandt
wrote:
> * Mark Moseley :
>> On Wed, Dec 8, 2010 at 3:03 PM, Timo Sirainen wrote:
>> > On 8.12.2010, at 22.52, Cor Bosman wrote:
>> >
>> >> 1 server with service_count = 0, and src/imap/main.c patch
>> >
>> > By this you mean service_count=0 for both service imap-login and service imap blocks, right?
Some preliminary findings..
Changing the kernel seems to have a positive effect on the load. I changed from
2.6.27.46 to 2.6.27.54 (sorry, I'm bound by locally available kernels due to a
kernel patch we created to fix some NFS problems in the linux kernel. Patch
should be available in the stock kernel
On Dec 9, 2010, at 10:41 AM, Timo Sirainen wrote:
> On 9.12.2010, at 9.13, Cor Bosman wrote:
>
>> If you want to have a quick look already, im mailing you the locations of 2
>> files, 1 with service_count=0 and one with service_count=1.
>
> I see that about half the commands that do hundreds or thousands of volcses are IDLE.
On 9.12.2010, at 9.13, Cor Bosman wrote:
> If you want to have a quick look already, im mailing you the locations of 2
> files, 1 with service_count=0 and one with service_count=1.
I see that about half the commands that do hundreds or thousands of volcses are
IDLE. Wonder if that is the problem.
> Great. If the logs aren't too huge I could look at the raw ones, or you could
> try to write a script yourself to parse them. I'm basically interested in
> things like:
>
> 1. How large are the first volcs entries for processes? (= Is the initial
> process setup cost high?)
>
> 2. Find the l
On 9.12.2010, at 8.24, Cor Bosman wrote:
>>
>> Are the process pids also logged in the messages, so it's clear which
>> messages belong to which imap process? If not, add %p to mail_log_prefix.
>
> Done. It wasnt logging this, now it is.
Great. If the logs aren't too huge I could look at the raw ones, or you could try to write a script yourself to parse them.
>
> Are the process pids also logged in the messages, so it's clear which
> messages belong to which imap process? If not, add %p to mail_log_prefix.
Done. It wasnt logging this, now it is.
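For reference, that amounts to something along these lines in the config (the exact prefix layout is only an example):

mail_log_prefix = "%s(%u) %p: "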
>
>> As I said previously, I'm no longer running the imap server=0 patch because
>> it caused these errors
On 9.12.2010, at 6.50, Cor Bosman wrote:
>> lcs values for v1.2 and v2.0. Wonder if you get different values?
>>
>> If you don't mind some huge logs, you could also try the attached patch that
>> logs the voluntary context switch count for every executed IMAP command.
>> Maybe some command shows up that generates them much more than others.
On 9.12.2010, at 7.57, Ralf Hildebrandt wrote:
>> The v1.2 values look pretty good. v2.0's involuntary context switches
>> isn't too bad either. So where do all the 3700 new voluntary context
>> switches come from? The new method of initializing user logins can't
>> add more than a few more of those.
* Timo Sirainen :
> So with Ralf's previous statistics for one hour:
>
> v2.0:
> 6887 imap logouts (processes)
> 26672928 voluntary context switches (3872 / process)
> 1313631 involuntary context switches (190 / process)
>
> v1.2:
> 6832 imap logouts (processes)
> 1003200 voluntary context switches
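(For comparison: 1003200 / 6832 is roughly 147 voluntary context switches per process for v1.2, so v2.0's 3872 per process is about a 26x increase.)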
* Cor Bosman :
> login_process_per_connection = yes
>
> So seems like we did have that set in 1.2.
We didn't use that (in both 1.2 and 2.0)
--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
* Cor Bosman :
> >
> > I see you're using userdb passwd. Do your users have unique UIDs? If they
> > have, maybe it has to do with that..
>
> Yes, we have about 1 million unique UIDs in the passwd file (actually
> NIS).
And we have about 15.000 unique UIDs in the passwd file (no NIS!)
> I did
* Mark Moseley :
> On Wed, Dec 8, 2010 at 3:03 PM, Timo Sirainen wrote:
> > On 8.12.2010, at 22.52, Cor Bosman wrote:
> >
> >> 1 server with service_count = 0, and src/imap/main.c patch
> >
> > By this you mean service_count=0 for both service imap-login and service
> > imap blocks, right?
> >
>
Just for thoroughness ive started 2 servers with the logging patch. One with
service_count=0 and one with service_count=1
> lcs values for v1.2 and v2.0. Wonder if you get different values?
>
> If you don't mind some huge logs, you could also try the attached patch that
> logs the voluntary context switch count for every executed IMAP command.
> Maybe some command shows up that generates them much more than others.
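The per-command numbers such a patch logs boil down to reading the process's voluntary context switch counter before and after each command. A minimal standalone sketch (this is not Timo's logging patch) using getrusage(), whose ru_nvcsw field holds that counter on Linux (also visible as voluntary_ctxt_switches in /proc/<pid>/status):

#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>
#include <unistd.h>

/* Voluntary context switches made by this process so far. */
static long volcs_now(void)
{
    struct rusage ru;

    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_nvcsw;
}

int main(void)
{
    long before = volcs_now();

    sleep(1);   /* stand-in for "execute one IMAP command" */

    printf("volcs=%ld\n", volcs_now() - before);
    return 0;
}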
>>
>> 1 server with service_count = 0, and src/imap/main.c patch
>>
>> Is that ok?
>
> Looks good!
>
I had to revert this patch because it causes permission errors on our
filesystem. Directories are being created for user X with the euid of user Y
(which fails, so at least I didn't get corruption
On 9.12.2010, at 2.24, Timo Sirainen wrote:
> Like the answer to question: Is the number of voluntary context switches by
> imap processes close to the number of called syscalls?
Oh, right, no. Syscalls don't increase context switch counts. So the voluntary
context switch basically seems to mea
On 9.12.2010, at 1.52, Timo Sirainen wrote:
> trace looks like an interesting new tool: http://lwn.net/Articles/415728/
>
> Wonder if that would tell something about this problem.
Like the answer to question: Is the number of voluntary context switches by
imap processes close to the number of called syscalls?
trace looks like an interesting new tool: http://lwn.net/Articles/415728/
Wonder if that would tell something about this problem.
On Wed, 2010-12-08 at 23:37 +0100, Cor Bosman wrote:
> We're running on bare metal, no VM involved.
>
> Cor
>
Missed most of this list due to some noise levels and my lack of time to
sit through hundreds of messages on every list I'm on, so apologies if this
was already suggested, due to errors i
On Wed, Dec 8, 2010 at 3:03 PM, Timo Sirainen wrote:
> On 8.12.2010, at 22.52, Cor Bosman wrote:
>
>> 1 server with service_count = 0, and src/imap/main.c patch
>
> By this you mean service_count=0 for both service imap-login and service imap
> blocks, right?
>
>
Speaking from my own experience,
>
>> 1 server with service_count = 0, and src/imap/main.c patch
>
> By this you mean service_count=0 for both service imap-login and service imap
> blocks, right?
>
>
Yes, on both imap-login and imap,
The 2 servers without the patch only have it on imap-login.
Cor
On 8.12.2010, at 22.52, Cor Bosman wrote:
> 1 server with service_count = 0, and src/imap/main.c patch
By this you mean service_count=0 for both service imap-login and service imap
blocks, right?
On 8.12.2010, at 22.52, Cor Bosman wrote:
> Right now I have running:
>
> 3 servers with just a new kernel, no other changes
> 2 servers with service_count = 0, no other changes
> 1 server with service_count = 0, and src/imap/main.c patch
>
> Is that ok?
Looks good!
> I won't be seeing much impact until tomorrow
Hope im doing what you want :) Getting kinda confusing.
Right now I have running:
3 servers with just a new kernel, no other changes
2 servers with service_count = 0, no other changes
1 server with service_count = 0, and src/imap/main.c patch
Is that ok?
I won't be seeing much impact until tomorrow
Cor Bosman put forth on 12/8/2010 4:37 PM:
> We're running on bare metal, no VM involved.
Ok, that's good to know--should eliminate some potential complexity in
troubleshooting this.
--
Stan
We're running on bare metal, no VM involved.
Cor
Cor Bosman put forth on 12/8/2010 9:45 AM:
>>>
>>
>> It could be that you both are running a different Kernel from the Standard
>> Lenny Kernel 2.6.26. (this could be a clue ..)
>
>
> It would be interesting to hear from people that aren't seeing a big load
> increase. My initial guess was some kind of NFS problem, but since Ralf isn't
> doing NFS, that's probably not it.
It would be useful to know if the problem is with the imap process startup or
after that. If you apply the attached patch, you can set:
service imap {
service_count = 0
}
This causes the imap processes to be reused for future connections. With the
patch enabled the processes keep the ability
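As clarified elsewhere in the thread, for testing this the setting ends up on both blocks, i.e. roughly:

service imap-login {
  service_count = 0
}
service imap {
  service_count = 0
}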
login_process_per_connection = yes
So seems like we did have that set in 1.2.
Cor
On 8.12.2010, at 22.19, Cor Bosman wrote:
> Oh, and I dont know if we did in 1.2. I think so, but cant be positive. I
> tried making the config the same. I have the config still around if you want
> to see it.
In v1.2 it was called login_process_per_connection
Oh, and I dont know if we did in 1.2. I think so, but cant be positive. I
tried making the config the same. I have the config still around if you want
to see it.
Cor
> So you have tons of imap-login processes. Did you have that in v1.2 too? Try
> setting service_count=0 (and same for pop3-login)
>
> http://wiki2.dovecot.org/LoginProcess
Yes, we have tons of imap-login processes. I'll set service_count=0 for imap
(we don't do pop with dovecot) on a few servers
> service imap-login {
> service_count = 1
> }
So you have tons of imap-login processes. Did you have that in v1.2 too? Try
setting service_count=0 (and same for pop3-login)
http://wiki2.dovecot.org/LoginProcess
>
> I see you're using userdb passwd. Do your users have unique UIDs? If they
> have, maybe it has to do with that..
Yes, we have about 1 million unique UIDs in the passwd file (actually NIS). I
did upgrade 4 machines to the latest kernel, but it's hard to tell if that
changed much as our us
On 8.12.2010, at 20.30, Timo Sirainen wrote:
> So same problem as with Ralf. What kernel version are you using? Maybe try a
> newer one to see if that happens to fix it?
I see you're using userdb passwd. Do your users have unique UIDs? If they have,
maybe it has to do with that..
On 8.12.2010, at 17.56, Cor Bosman wrote:
>> Timo and I found excessive numbers of context switches, factor 20-30.
>> But it's unclear why the IMAP process would do/cause this.
>
> I'm seeing this as well... http://grab.by/7Nni
So same problem as with Ralf. What kernel version are you using? Maybe try a newer one to see if that happens to fix it?
* Jose Celestino :
> It ended up being due to a slight increase in memory usage by the imap
> processes that made the servers start using swap and the load to spike.
My machine is not swapping; it doesn't even HAVE swap, it still has
lots of free memory
--
Ralf Hildebrandt
Geschäftsbereich IT
* David Ford :
> what's your task switch HZ compiled at? CONFIG_HZ_1000? you would
> probably be better at 300 or 250. have you tried tickless?
# fgrep HZ /boot/config-2.6.32-25-generic-pae
CONFIG_NO_HZ=y
CONFIG_HZ_250=y
CONFIG_HZ=250
> is your kernel compiled for precisely your cpu type and smp/hyper options set correctly?
>
> Timo and I found excessive numbers of context switches, factor 20-30.
> But it's unclear why the IMAP process would do/cause this.
I'm seeing this as well... http://grab.by/7Nni
Cor
On Qua, 2010-12-08 at 15:39 +0100, Cor Bosman wrote:
> I upgraded most of our servers from 1.2.x to 2.0.8 and am noticing a really
> big increase in server load. This is across the board, not any specific
> server. Check this screenshot of a load graph: http://grab.by/7N8V
>
> Is there anything I should be looking at that could cause such a drastic load increase?
On Dec 8, 2010, at 5:11 PM, David Ford wrote:
> what's your task switch HZ compiled at? CONFIG_HZ_1000? you would probably
> be better at 300 or 250. have you tried tickless? is your kernel compiled
> for precisely your cpu type and smp/hyper options set correctly? what about
> CONFIG_PREEMPT?
On 08-12-10 17:11, David Ford wrote:
what's your task switch HZ compiled at? CONFIG_HZ_1000? you would
probably be better at 300 or 250. have you tried tickless? is your
kernel compiled for precisely your cpu type and smp/hyper options set
correctly? what about CONFIG_PREEMPT? definitely don't use realtime, server is appropriate
what's your task switch HZ compiled at? CONFIG_HZ_1000? you would
probably be better at 300 or 250. have you tried tickless? is your
kernel compiled for precisely your cpu type and smp/hyper options set
correctly? what about CONFIG_PREEMPT? definitely don't use realtime,
server is appropriate
* David Ford :
> gprof for detail, and even with simple strace timing. i.e. strace
> -c. if load is going up significantly, there should be one or more
> functions significantly fatter than the rest. you can pick either to
> run it on the whole group, or just attach to certain processes.
> (master, imap, lda, etc)
gprof for detail, and even with simple strace timing. i.e. strace -c.
if load is going up significantly, there should be one or more functions
significantly fatter than the rest. you can pick either to run it on
the whole group, or just attach to certain processes. (master, imap,
lda, etc)
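For example, attaching to one running imap process for a per-syscall count/time summary (substitute a real pid):

strace -c -f -p <pid-of-imap-process>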
* David Ford :
> Ralf, did you do the profiling yet?
With gprof or what exactly is on your mind?
> On 12/08/10 09:50, Ralf Hildebrandt wrote:
> >[...]
> >Yes, this looks like my graphs. Same increase. Factor 5.
--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
* Cor Bosman :
> It would be interesting to hear from people that aren't seeing a big
> load increase.
Indeed. Timo was mentioning some big company; what kind of setup were
they using, Timo?
> My initial guess was some kind of NFS problem, but since Ralf isn't
> doing NFS, that's probably not it.
* Cor Bosman :
> Here's the doveconf -n output: http://wa.ter.net/download/doveconf-n.txt
Except for your storage being on NFS, it looks fairly identical!
masteruser, passdb pam, userdb passwd.
--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
>>
>
> It could be that you both are running a different Kernel from the Standard
> Lenny Kernel 2.6.26. (this could be a clue ..)
It would be interesting to hear from people that aren't seeing a big load
increase. My initial guess was some kind of NFS problem, but since Ralf isn't
doing NFS, that's probably not it.
Ralf, did you do the profiling yet?
On 12/08/10 09:50, Ralf Hildebrandt wrote:
[...]
Yes, this looks like my graphs. Same increase. Factor 5.
On 12/8/2010 9:55 AM, Cor Bosman wrote:
Here's the doveconf -n output: http://wa.ter.net/download/doveconf-n.txt
Cor
It could be that you both are running a different Kernel from the
Standard Lenny Kernel 2.6.26. (this could be a clue ..)
M.A.
Here's the doveconf -n output: http://wa.ter.net/download/doveconf-n.txt
Cor
* Ralf Hildebrandt :
> > http://wa.ter.net/download/doveconf.txt
>
> We need to find out WHAT is common in our two config files!
My config is attached
--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
* Cor Bosman :
> I upgraded most of our servers from 1.2.x to 2.0.8 and am noticing a
> really big increase in server load. This is across the board, not any
> specific server. Check this screenshot of a load graph:
> http://grab.by/7N8V
Yes, this looks like my graphs. Same increase. Factor 5.
I upgraded most of our servers from 1.2.x to 2.0.8 and am noticing a really big
increase in server load. This is across the board, not any specific server.
Check this screenshot of a load graph: http://grab.by/7N8V
Is there anything I should be looking at that could cause such a drastic load
increase?