Chris,
On 10/6/10 9:42 PM, "Chris Hobbs" wrote:
> 3) Modified my NFS mount with noatime to reduce i/o hits there. Need to
> figure out what Brad's suggestions about readahead on the server mean.
It's been a while since I mucked with Linux as an NFS server; I've been on
NetApp for a while. There
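For reference, those two tweaks look roughly like this; the export path, mount
point, and device name below are placeholders, not anyone's actual config:

  # client-side fstab entry with noatime
  nfsserver:/export/mail  /var/mail  nfs  rw,hard,intr,noatime  0 0

  # server-side readahead on the exported block device (value in 512-byte sectors)
  blockdev --getra /dev/sdb
  blockdev --setra 4096 /dev/sdb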
Timo,
I'm working with a webmail client that periodically polls unread message
counts for a list of folders. It currently does this by doing a LIST or LSUB
and then iterating across all of the folders, running a SEARCH ALL UNSEEN,
and counting the resulting UID list.
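Roughly, the per-folder poll looks like this on the wire (folder names, UIDs,
and response text are made up for illustration):

  a1 LSUB "" "*"
  a2 SELECT INBOX
  a3 UID SEARCH ALL UNSEEN
  * SEARCH 104 112 137
  a3 OK Search completed.

and the client counts the three UIDs returned. A STATUS-based poll would need
only one command per folder instead of SELECT+SEARCH:

  a4 STATUS Lists/dovecot (UNSEEN)
  * STATUS Lists/dovecot (UNSEEN 5)
  a4 OK Status completed.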
Eventually I'd like to see it
Timo,
On 10/17/10 3:56 PM, "Timo Sirainen" wrote:
>
> The reason why STATUS is mentioned to be possibly slow is to discourage
> clients from doing a STATUS to all mailboxes.
>
> STATUS is definitely faster than SELECT+SEARCH with all IMAP servers.
That's what I figured, thanks! Other than actu
Timo,
On 10/17/10 4:20 PM, "Timo Sirainen" wrote:
> On 18.10.2010, at 0.19, Brandon Davidson wrote:
>
>> Other than actually calling THREAD and
>> counting the resulting groups, is there a good way to get a count of
>> threads?
>
> Nope, that's
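For anyone following along, counting threads that way means issuing a THREAD
command and counting the top-level parenthesized groups in the untagged reply
(example data only, not real output):

  a1 THREAD REFERENCES UTF-8 ALL
  * THREAD (2)(3 6 (4 23))(44 7 96)
  a1 OK Thread completed.

Here the reply contains three top-level groups, so the mailbox holds three
threads.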
Timo,
On 10/28/10 5:13 AM, "Timo Sirainen" wrote:
>> . list (subscribed) "" "*"
>> * LIST (\Subscribed \NonExistent) "/"
>> "Shared/tester2/sdfgsg/gsdfgf/vtyjyfgj/rtdhrthxs/zhfhg"
>> . OK List completed.
>
> Looks like a bug, yeah. Should be fixed in v2.0. I don't know if it's worth
> the
Stan,
On 11/1/10 7:30 PM, "Stan Hoeppner" wrote:
> 1. How many of you have a remote site hot backup Dovecot IMAP server?
+1
> 2. How are you replicating mailbox data to the hot backup system?
> C. Other
NetApp Fabric MetroCluster, active IMAP/POP3 nodes at both sites mounting
storage o
Stan,
On 11/8/10 10:39 AM, "Stan Hoeppner" wrote:
>
> However, if CONFIG_HZ=1000 you're generating WAY too many interrupts/sec
> to the timer, ESPECIALLY on an 8 core machine. This will exacerbate the
> high context switching problem. On an 8 vCPU (and physical CPU) machine
> you should have C
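On a RHEL/CentOS box the timer frequency the running kernel was built with can
be checked against the config file shipped in /boot (a quick sketch):

  grep 'CONFIG_HZ' /boot/config-$(uname -r)
  grep 'CONFIG_NO_HZ' /boot/config-$(uname -r)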
Stan,
On 1/14/11 7:09 PM, "Stan Hoeppner" wrote:
>
> The average size of an email worldwide today is less than 4KB, less than one
> typical filesystem block.
>
> 28TB / 4KB = 28,000,000,000,000 bytes / 4096 bytes = 6,835,937,500 =
> 6.8 billion emails / 5,000 users =
> 1,367,188 emails per user
On 1/14/11 8:59 PM, "Brandon Davidson" wrote:
> I work for central IS, so this is the first stage of a consolidated service
> offering that we anticipate may encompass all of our staff and faculty. We
> bought what we could with what we had, anticipating that usage will
Stan,
On 1/20/11 7:45 PM, "Stan Hoeppner" wrote:
>
> What you're supposed to do, and what VMWare recommends, is to run ntpd _only
> in
> the ESX host_ and _not_ in each guest. According to:
> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100642
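On a RHEL-style guest, following that recommendation would amount to something
like this (a sketch; it assumes VMware Tools is left to handle guest time sync):

  service ntpd stop
  chkconfig ntpd off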
Axel,
On 5/30/10 12:05 AM, "Axel Thimm" wrote:
> beta4 built under RHEL4, RHEL5 and RHEL6 (the latter being the public
> beta). beta5 now builds only for RHEL5, the other two fail with:
>
> strnum.c: In function `str_to_llong':
> strnum.c:139: error: `LLONG_MIN' undeclared (first use in this fu
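A quick way to check whether a given toolchain and -std combination exposes
LLONG_MIN at all is a one-file compile test (hypothetical file name, just a
sketch):

  /* llong-check.c: does this gcc/-std expose the C99 limits? */
  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
          printf("LLONG_MIN=%lld LLONG_MAX=%lld\n", LLONG_MIN, LLONG_MAX);
          return 0;
  }

  gcc -std=gnu99 -o llong-check llong-check.c && ./llong-check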
Axel,
On 5/30/10 3:39 AM, "Axel Thimm" wrote:
>
> Now it is more consistent and looks like a change between 4.1.2 and
> 4.4.1.
>
> Maybe in the older gcc, -std=gnu99 didn't set __USE_ISOC99, and thus the
> missing constants were not defined?
If I '%define optflags -std=gnu99' in the spec it buil
Axel,
On 5/30/10 10:22 AM, "Axel Thimm" wrote:
>>
>> Oh, the spec file overrides CFLAGS and doesn't contain -std=gnu99?
>>
>
> The config.log for RHEL5/x86_64 says:
>
> CFLAGS='-std=gnu99 -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2
> -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m6
On 5/30/10 2:49 PM, "Axel Thimm" wrote:
>
> How are your %optflags (which is the same as $RPM_OPT_FLAGS) merged
> into the build if it is not passed to make? And it would yield the
> same CFLAGS as above (merged default optflags with what configure adds
> to it).
They're exported by the %conf
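Roughly speaking (an abridged sketch, not the verbatim macro), RPM's %configure
expands to something like:

  CFLAGS="${CFLAGS:-%optflags}"; export CFLAGS
  CXXFLAGS="${CXXFLAGS:-%optflags}"; export CXXFLAGS
  ./configure --prefix=%{_prefix} ...

so configure picks the optflags up from the environment and bakes them into the
generated Makefiles, rather than them being passed on the make command line.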
Timo,
After straightening out some issues with Axel's spec file, I'm back to
poking at this.
On 5/25/10 3:14 PM, "Timo Sirainen" wrote:
> So instead of having separate proxies and mail servers, have only hybrids
> everywhere? I guess it would almost work, except proxy_maybe isn't yet
> compatibl
Timo,
On 5/31/10 6:04 AM, "Timo Sirainen" wrote:
> Well .. maybe you could use separate services. Have the proxy listen on
> public IP and the backend listen on localhost. Then you can do:
>
> local_ip 127.0.0.1 {
> passdb {
> ..
> }
> }
>
> and things like that. I think it would work,
Timo,
On 5/31/10 4:13 PM, "Timo Sirainen" wrote:
> You need to put the other passdb/userdb to the external IP:
>
> local 1.2.3.4 {
>> userdb {
>> driver = passwd
>> }
>> passdb {
>> driver = sql
>> args = /etc/dovecot/proxy-sqlite.conf
>> }
>
> }
>
It still doesn't seem to work. I tried t
Timo,
On 5/31/10 4:36 PM, "Timo Sirainen" wrote:
>
> The passdbs and userdbs are checked in the order they're defined. You could
> add them at the bottom. Or probably more easily:
>
> local 128.223.143.138 {
> passdb {
> driver = sql
> args = ..
> }
>
> passdb {
> driver = pam
>
Timo,
On 5/31/10 5:09 PM, "Timo Sirainen" wrote:
>
> Right .. it doesn't work exactly like that I guess. Or I don't remember :)
> Easiest to test with:
>
> doveconf -f lip=128.223.142.138 -n
That looks better:
[r...@cc-popmap7 ~]# doveconf -f lip=128.223.142.138 -h |grep -B1 -A7 passdb
}
pass
Timo,
On 5/31/10 5:34 PM, "Brandon Davidson" wrote:
>
> Still not sure why it's not proxying though. The config looks good but it's
> still using PAM even for the external IP.
I played with subnet masks instead of IPs and using remote instead of local,
as well a
Timo,
On 5/31/10 6:56 PM, "Timo Sirainen" wrote:
>
> Oh, you're right. For auth settings currently only protocol blocks work. It
> was a bit too much trouble to make local/remote blocks work. :)
That's too bad! Any hope of getting support for this and
director+proxy_maybe anytime soon?
-Bra
Pascal,
On 5/31/10 11:40 PM, "Pascal Volk"
wrote:
>
> I've spent some time on the fine manual. What's new?
>
> Location: http://hg.localdomain.org/dovecot-2.0-man
> So I don't have to flood the wiki with attachments.
> As soon as the manual pages are complete, they will be included in the
> Dove
On 6/2/10 7:33 PM, "Timo Sirainen" wrote:
>> I wonder if they can stand up to 10k+ concurrent proxied
>> connections though?
>
> I'd think so.
I could probably give that a try, but I'll have a hard time convincing folks
to do that until after 2.0 has been out of beta for a bit. Maybe after summer
ter
Timo,
On 6/24/10 4:23 AM, "Timo Sirainen" wrote:
>>
>> I'd recommend also installing and configuring imapproxy - it can be
>> beneficial with squirrelmail.
>
> Do you have any real-world numbers about installations with and without
> imapproxy?
We run imapproxy behind our Roundcube ins
Xavier,
On 7/8/10 1:29 AM, "Xavier Pons" wrote:
>
> Yes, we will have two hardware balancers in front of proxies. Thus, the
> director service will detect failures of backend servers and not forward
> sessions to them? How does it detect whether a backend server is alive or not?
IIRC, it does not detect fa
On 7/9/10 12:01 AM, "Xavier Pons" wrote:
> I think these new functionalities would be perfect (necessary ;-) ) for a
> complete load-balanced/high-availability mail system.
Timo, what you described sounds great.
Pretty much anything built into Dovecot would be an improvement over an
external scr
dsync in hg tip is failing tests:
test-dsync-brain.c:176: Assert failed:
test_dsync_mailbox_create_equals(&box_event.box, &src_boxes[6])
test-dsync-brain.c:180: Assert failed:
test_dsync_mailbox_create_equals(&box_event.box, &dest_boxes[6])
Segmentation fault
I'm currently using rev 77f244924009,
Leander,
On 7/10/10 2:14 PM, "Leander S." wrote:
> "You have attempted to establish a connection with "server". However,
> the security certificate presented belongs to "*.server". It is
> possible, though unlikely, that someone may be trying to intercept your
> communication with this web site."
Timo,
On 7/11/10 12:06 PM, "Timo Sirainen" wrote:
>> Pretty much anything built into Dovecot would be an improvement over an
>> external script from my point of view.
>
> Yeah, some day I guess..
Well, I would definitely make use of it if you ever get around to coding it.
>> With a script I h
Timo,
On 7/11/10 10:58 AM, "Timo Sirainen" wrote:
>
>> dsync in hg tip is failing tests:
>
> Fixed now, as well as another dsync bug.
Looks good!
New doveadm director status is a little odd though. The 'mail server ip'
column is way wide (I guess it adjusts to term size though?) and the users
I've got a couple more issues with the doveadm director interface:
1) If I use "doveadm director remove" to disable a host with active users,
the director seems to lose track of users mapped to that host. I guess I
would expect it to tear down any active sessions by killing the login
proxies, like
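For context, the director commands in question look like this (the host
address and vhost count are made up):

  doveadm director status
  doveadm director add 10.1.1.5 100
  doveadm director remove 10.1.1.5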
On 7/13/10 4:53 AM, "Timo Sirainen" wrote:
>
> Hmm. "Between"? Is it doing CAPABILITY before or after login or both? That
> anyway sounds different from the idle timeout problem..
I added some additional logging to imapproxy and it looks like it's actually
getting stuck in a few different com
Timo,
On 7/15/10 4:12 PM, "Timo Sirainen" wrote:
>
>> Maybe there could be a parameter to get the user list from a file (one
>> username per line) instead of userdb.
>
> Added -f parameter for this.
Awesome! I dumped a userlist (one username per line) which it seems to read
through quite quickl
Timo,
On 7/15/10 4:18 PM, "Timo Sirainen" wrote:
>>> Jul 15 13:46:24 cc-popmap7 dovecot: auth: Error: auth worker: Aborted
>>> request: Lookup timed out
>>> Jul 15 13:53:25 cc-popmap7 dovecot: auth: Error: getpwent() failed: No such
>>> file or directory
>
> Also see if http://hg.dovecot.org/dov
Timo,
On 7/16/10 4:23 AM, "Timo Sirainen" wrote:
>
>> Jul 16 01:50:44 cc-popmap7 dovecot: auth: Error: auth worker: Aborted
>> request: Lookup timed out
>> Jul 16 01:50:44 cc-popmap7 dovecot: master: Error: service(auth): child 1607
>> killed with signal 11 (core dumps disabled)
>
> I don't thi
Timo,
On 7/17/10 11:06 AM, "Timo Sirainen" wrote:
>
>> Here's a stack trace. Standard null function pointer. No locals, I think I'd
>> have to recompile to get additional information.
>>
>> #0 0x in ?? ()
>> #1 0x00415a71 in auth_worker_destroy ()
>> #2 0x00415
Timo,
>> Maybe this fixes it: http://hg.dovecot.org/dovecot-2.0/rev/cfd15170dff7
>
> Nope, still crashes with the same stack. I'll rebuild with -g and report
> back.
Here we go. Attached, hopefully Entourage won't mangle the line wrap.
-Brad
auth-worker-gdb.txt
Timo,
On 7/19/10 9:38 AM, "Timo Sirainen" wrote:
>
> http://hg.dovecot.org/dovecot-2.0/rev/f178792fb820 fixes it?
It makes it further before crashing. Trace attached.
I still wonder why it's timing out in the first place. Didn't you change it
to reset the timeout as long as it's still getting d
Timo,
Just out of curiosity, how are incoming connections routed to login
processes when run with:
service imap-login { service_count = 0 }
I've been playing with this on our test director, and the process connection
counts look somewhat unbalanced. I'm wondering if there are any performance
issu
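A fuller sketch of that kind of setup, for reference (the extra settings and
numbers here are illustrative, not what we necessarily run):

  service imap-login {
    service_count = 0
    # keep roughly one login process per CPU core
    process_min_avail = 8
    # each process handles many connections, so raise the per-process limits
    client_limit = 1000
    vsz_limit = 1G
  }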
Noel,
On 8/26/10 9:59 PM, "Noel Butler" wrote:
>> I fail to see the advantage; if anything it adds more points of failure, with
>
> I agree with this and it is why we don't use it
>
> We use Dovecot's deliver with Postfix and have noticed no problems, not
> to say there were none, but if so, we don't
Noel,
On 8/26/10 11:28 PM, "Noel Butler" wrote:
> I just fail to see why adding more complexity, and essentially making
> $9K load balancers redundant, is the way of the future.
To each their own. If your setup works without it, then fine, don't use
it... but I don't see why you feel the need to
Michael,
On 9/1/10 12:18 AM, "Michael M. Slusarz" wrote:
> imapproxy *should* really be using UNSELECT, but that looks like a
> different (imapproxy) bug.
I run imapproxy too. If you're using Dovecot 2.0, set:
imap_capability = +UNSELECT IDLE
Imapproxy is naive and only reads capabilities from
Hi all,
We recently attempted to update our Dovecot installation to version
1.2.5. After doing so, we noticed a constant stream of crash messages in
our log file:
Sep 22 15:58:41 hostname dovecot: imap-login: Login: user=,
method=PLAIN, rip=X.X.X.X, lip=X.X.X.X, TLS
Sep 22 15:58:41 hostname dovec
Tom,
Tom Diehl wrote:
I just updated to Dovecot 1.2.5 on CentOS 5.
1.2.4 did not show this problem. I am going to roll back for the time being,
but I am willing to do whatever I need to do to fix this.
This is an x86_64 system. The filesystem is ext3.
I am now seeing the following in the logs:
Sep 22
Hi all,
We have a number of machines running Dovecot 1.2.4 that have been assert
crashing occasionally. It looks like it's occurring when the users
expunge their mailboxes, but I'm not sure as I can't reproduce it
myself. The error in the logs is:
Oct 6 07:33:09 oh-popmap3p dovecot: imap: user=,
We recently upgraded from Dovecot 1.2.4 to 1.2.6 (with the sieve patches
of course). Everything has been running quite well since the upgrade.
The occasional issue with assert-crashing when expunging has gone away.
However, one of our users seems to have triggered a new issue. She's
been the only
Timo,
> -Original Message-
> -O2 compiling has dropped one stage from the backtrace, but I think this
> will fix the crash:
>
> I guess it would be time for 1.2.7 somewhat soon..
Thanks! As always, you're one step ahead of us with the bug fixes! I've
got one more for you that just popped
I seem to have run into the same issue on two of our 12 Dovecot servers
this morning:
Oct 15 03:41:51 oh-popmap5p dovecot: dovecot: child 7529 (login)
returned error 89 (Fatal failure)
Oct 15 03:41:51 oh-popmap5p dovecot: dovecot: child 7532 (login)
returned error 89 (Fatal failure)
Oct 15 03:41
Hi Timo,
> -Original Message-
> From: Timo Sirainen [mailto:t...@iki.fi]
>
> This just shouldn't be happening. Are you using NFS? Anyway this should
> replace the crash with a nicer error message:
> http://hg.dovecot.org/dovecot-1.2/rev/6c6460531514
Yes, we've got a pool of servers with
On Red Hat based distros, do:
echo 'DAEMON_COREFILE_LIMIT="unlimited"' >> /etc/sysconfig/dovecot &&
service dovecot restart
Might be worth putting in the wiki if it's not there already?
-Brad
> -Original Message-
> ==> /var/log/dovecot/dovecot.log <==
> Oct 15 09:07:33 maste
On 10/21/09 8:59 AM, "Guy" wrote:
> Our current setup uses two NFS mounts accessed simultaneously by two
> servers. Our load balancing tries to keep a user on the same server whenever
> possible. Initially we just had roundrobin load balancing which led to index
> corruption.
> The problems we've
Thomas,
On 10/22/09 1:29 AM, "Thomas Hummel" wrote:
> On Wed, Oct 21, 2009 at 09:39:22AM -0700, Brandon Davidson wrote:
>> As a contrasting data point, we run NFS + random redirects with almost no
>> problems.
>
> Thanks for your answer as well.
>
> Wh
Hi Marco,
On 10/22/09 1:50 AM, "Marco Nenciarini" wrote:
> This morning it happened another time, again during the daily
> cron execution.
>
> Oct 22 06:26:57 server dovecot: pop3-login: Panic: Leaked file fd 5: dev
> 0.12 inode 1005
> Oct 22 06:26:57 server dovecot: dovecot: Temporary fa
We've had this reoccur twice this week. In both cases, it seems to hit a
swath of machines all within a few minutes. For some reason it's been
limited to the master serving pop3 only. In all cases, the logging
socket at fd 5 had gone missing.
I haven't applied the fd leak detection patch, but I do
Hi Timo,
> -Original Message-
> From: Timo Sirainen [mailto:t...@iki.fi]
>
> On Thu, 2009-10-29 at 12:08 -0700, Brandon Davidson wrote:
> > I haven't applied the fd leak detection patch, but I do have lsof output
> > and a core file available here:
>
> -Original Message-
> On Sun, 2009-11-22 at 23:54 +0100, Edgar Fuß wrote:
> > I'm getting this Panic with some users on dovecot-1.2.7:
> >
> > Panic: file maildir-uidlist.c: line 1242
> > (maildir_uidlist_records_drop_expunges): assertion failed:
> > (recs[i]->uid < rec->uid)
>
> I
Timo,
> -Original Message-
> > I'm not really sure why these are happening. I anyway changed them from
> > being assert-crashes to just logged errors. I'm interested to find out
> > what it logs now and if there are any user-visible errors.
> > http://hg.dovecot.org/dovecot-1.2/rev/e47eb50
Hi Timo,
We've been running Dovecot with Maildir on NFS for quite a while - since
back in the 1.0 days I believe. I'm somewhat new here. Anyway...
The Wiki article on NFS states that 1.1 and newer will flush attribute
caches if necessary with mail_nfs_storage=yes. We're running 1.2.8 with
that s
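For reference, the NFS-related settings available in the 1.2 series look
roughly like this (a sketch; which values are appropriate depends on the NFS
version and whether indexes are also shared over NFS):

  mmap_disable = yes
  dotlock_use_excl = no
  mail_nfs_storage = yes
  mail_nfs_index = yes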
We've started seeing the maildir_uidlist_records_array_delete assert crash as
well. It always seems to be preceded by a 'stale NFS file handle' error from
the same user on a different connection.
Dec 22 10:12:20 oh-popmap5p dovecot: imap: user=, rip=a.a.a.a, pid=2439:
fdatasync(/home11/apbao/
Timo,
On 12/23/09 8:37 AM, "David Halik" wrote:
> I switched all of our servers to dotlock_use_excl=no last night, but
> we're still seeing the errors:
We too have set dotlock_use_excl = no. I'm not seeing the "Stale NFS file
handle" message any more, but I am still seeing a crash. The crashes s
Timo,
> -Original Message-
> From: Timo Sirainen
>
> 1721 is not in the recs[] list, since it's sorted and the first one is 1962.
>
> So there's something weird going on why it's in the filename hash table, but
> not in the array. I'll try to figure it out later..
I hope your move is go
Hi David,
On 1/14/10 3:13 PM, "David Halik" wrote:
>
> FYI, we backed out of the "noac" change today. When our 20K accounts
> started coming to work the NetApp NFS server was pushing 70% CPU usage
> and 25K NFS Ops/s, which resulted in all kinds of other havoc as normal
> services started becomi
David,
> -Original Message-
> From: dovecot-bounces+brandond=uoregon@dovecot.org [mailto:dovecot-
> Our physical setup is 10 CentOS 5.4 x86_64 IMAP/POP servers, all with
> the same NFS backend where the index, control, and Maildirs for the
> users reside. Accessing this are direct con
Cor,
On 1/22/10 1:05 PM, "Cor Bosman" wrote:
>
> Pretty much the same as us as well. 35 imap servers. 10 pop servers.
> clustered pair of 6080s, with about 250 15K disks. We're seeing some
> corruption as well. I myself am using imap extensively and regularly have
> problems with my inbox disap
David,
On 1/22/10 12:34 PM, "David Halik" wrote:
>
> We currently have IP session 'sticky' on our L4's and it didn't help all
> that much. Yes, it reduces thrashing on the backend, but ultimately it
> won't help the corruption. Like you said, multiple logins will still go
> to different servers
David,
> -Original Message-
> From: David Halik [mailto:dha...@jla.rutgers.edu]
>
> *sigh*, it looks like there still might be the occasional user-visible
> issue. I was hoping that once the assert stopped happening, and the
> process stayed alive, the users wouldn't see their inbox
David,
> Though we aren't using NFS we do have a BigIP directing IMAP and POP3
> traffic to multiple dovecot stores. We use mysql authentication and the
> "proxy_maybe" option to keep users on the correct box. My tests using an
> external proxy box didn't significantly reduce the load on the store
Timo,
> -Original Message-
> From: Timo Sirainen [mailto:t...@iki.fi]
>
> On 25.1.2010, at 21.30, Brandon Davidson wrote:
> > If it could be set up to just fall back to
> > using a local connection in the event of a SQL server outage, that might
> > help
Timo,
On 1/25/10 12:31 PM, "Timo Sirainen" wrote:
>
> I don't think it's immediate.. But it's probably something like:
>
> - notice it's not working -> reconnect
> - requests are queued
> - reconnect fails, hopefully soon, but MySQL connect at least fails in max.
> 10 seconds
> - reconnect
David,
> -Original Message-
> From: dovecot-bounces+brandond=uoregon@dovecot.org [mailto:dovecot-
>
> There are ways of doing this in mysql, with heartbeats etc (which we've
> discussed before), but then I'm back to mysql again. Maybe mysql just
> has to be the way to go in this case.
Hi David,
> -Original Message-
> From: David Halik
>
> I've been running both patches and so far they're stable with no new
> crashes, but I haven't really seen any "better" behavior, so I don't
> know if it's accomplishing anything. =)
>
> Still seeing entire uidlist list dupes after th
Hi David,
> -Original Message-
> From: David Halik
>
> It looks like we're still working towards a layer 7 solution anyway.
> Right now we have one of our student programmers hacking Perdition with
> a new plugin for dynamic username caching, storage, and automatic fail
> over. If we get