the index. ;)
Any help would be appreciated.
Thanks,
-Dave
--
David Halik
System Administrator
OIT-CSS Rutgers University
[EMAIL PROTECTED]
http://www.digitalruin.net
Great, I was hoping the answer was to use the 1.1.0 release. I'll let
you know if the issues continue, but that sounds like the problem.
Thanks again,
-Dave
Charles Marcus wrote:
On 6/18/2008, David Halik ([EMAIL PROTECTED]) wrote:
* Linux workstations running Fedora 8/9 i386 and a lo
6:28 host IMAP(user): : Fatal: io_loop_handle_add:
epoll_ctl(1, 2): Operation not permitted
That doesn't really tell me much though.
Any ideas? Is dovecot taking different arguments now so my rimapd line
isn't working properly?
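For reference: with Pine's rimap, the server side is just whatever
speaks preauthenticated IMAP on stdin/stdout when started over
rsh/ssh. A minimal /etc/rimapd sketch for Dovecot 1.x (the binary
path is an assumption for your install):

  #!/bin/sh
  # Started by Pine over rsh/ssh; must speak preauth IMAP on
  # stdin/stdout. Adjust the dovecot binary path to your install.
  exec /usr/local/sbin/dovecot --exec-mail imap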
Any help would be appreciated.
Thanks.
--
===
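For reference: epoll_ctl(2) fails with EPERM when the fd being added
is a file type epoll cannot watch, such as a regular file, which is
what Dovecot's io loop reports as "Operation not permitted". A
minimal C demonstration (illustrative, not Dovecot code):

  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/epoll.h>

  int main(void)
  {
      int ep = epoll_create(16);
      int fd = open("/etc/hosts", O_RDONLY);
      struct epoll_event ev;

      ev.events = EPOLLIN;
      ev.data.fd = fd;
      /* Regular files are always "ready", so epoll refuses to
         watch them and fails with EPERM. */
      if (epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev) < 0)
          printf("epoll_ctl: %s\n", strerror(errno));
      close(fd);
      close(ep);
      return 0;
  }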
rameter is used as section."
I'm still not sure what the fatal error message was for, but it must
have been caused by this, since it has gone away.
David Halik wrote:
Hi all,
I just upgraded a couple of Fedora 9 workstations to try out 1.1.1
over NFS'd homedirs and I
fixed.
I have seen other "corrupted" error reports in the mail archive, but
they did not seem to be
exactly like this one.
--
========
David Halik
System Administrator
OIT-CSS Rutgers University
[EMAIL PROTECTED]
index/control/user/.INBOX/dovecot-uidlist
Timo Sirainen wrote:
On Jun 18, 2008, at 9:12 PM, David Halik wrote:
Now this setup is just a test example and not exactly what we'll be
running in production, but it tripped the problem either way. Since
the index is shared by both the Linu
This was starting from a clean index, first opening pine on the NFS
Solaris 9 sparc machine, and then at the same time opening pine on my
Fedora 9 i386 workstation.
Why does it matter where you run Pine? Does it directly execute Dovecot
on the local machine instead of connecting via TCP?
cot reports to mail.error, my
consoles and legitimate error logs are going to be full of something
considered a non-issue by most.
Any help is appreciated, thanks.
-Dave
--
====
David Halik
System Administrator
OIT-CSS Rutgers University
[EMAIL PROTECTED]
Whoops, that would explain it since we've been testing with 1.1.3.
Thanks Timo, I'll upgrade and hopefully that will fix it.
Timo Sirainen wrote:
On Mon, 2008-11-24 at 10:45 -0500, David Halik wrote:
Nov 24 08:34:34 xxx.xxx.xxx.xxx dovecot: [ID 107833 mail.error]
auth-worker(def
  type: private
  separator: .
  prefix: INBOX.
  inbox: yes
  list: yes
  subscriptions: yes
auth default:
  passdb:
    driver: pam
    args: *
  userdb:
    driver: passwd
plugin:
  quota: fs
Any help would be appreciated!
Thanks,
-Dave
--
====
David Halik
System Administrator
OIT-CS
Timo Sirainen wrote:
On Thu, 2009-03-19 at 15:25 -0400, David Halik wrote:
So far
everything is working smoothly, but when someone does a search through
a directory with a large number of emails, dovecot dies and prints the
following message:
[ID 107833 mail.crit] Panic: Trying to allocate
so.1
libmp.so.2 => /usr/lib/libmp.so.2
/usr/platform/SUNW,Ultra-250/lib/libc_psr.so.1
/usr/platform/SUNW,Ultra-250/lib/libmd5_psr.so.1
--
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
On 3/20/2009 6:26 PM, Timo Sirainen wrote:
Maybe you could try copying the libraries and using LD_LIBRARY_PATH (or
whatever it is in Solaris) to force using those libraries instead of the
default ones?
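For reference, Solaris honors LD_LIBRARY_PATH as well; an illustrative
invocation (paths hypothetical):

  LD_LIBRARY_PATH=/opt/dovecot/lib /usr/local/libexec/dovecot/imap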
I could try that as a last resort, but I don't think it will help. We
build our own packa
awhile it
would get an E2BIG and bump the memory over and over.
Timo Sirainen wrote:
On Sat, 2009-03-21 at 18:58 -0400, David Halik wrote:
Mar 21 18:43:57 er0.rutgers.edu IMAP(dhalik): : [ID 107833 mail.crit]
Panic: Trying to allocate 2147483648 bytes
Attached patch prob
] Select completed.
2 SEARCH BODY "berry"
Aborted (core dumped)
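As an aside on the number: 2147483648 is exactly 2^31, the classic
sign of a 32-bit size going negative and then being handed to an
allocator that takes an unsigned size_t. A plausible mechanism in
miniature (illustrative C, not Dovecot's actual code):

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      /* A 32-bit length that ended up negative, e.g. after a
         bad subtraction or an overflowing addition... */
      int32_t len = INT32_MIN;            /* -2147483648 */

      /* ...becomes exactly 2147483648 once converted to the
         unsigned size_t an allocator expects -- the number
         seen in the panic message. */
      size_t request = (size_t)(uint32_t)len;
      printf("requesting %zu bytes\n", request);

      void *p = malloc(request);
      printf("malloc %s\n", p != NULL ? "succeeded" : "failed");
      free(p);
      return 0;
  }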
Thanks again Timo,
-Dave
searched.
Thanks,
-Dave
David Halik wrote:
Unfortunately, the patch didn't help, BUT I've discovered some very
interesting things along the way that I think you'd like to hear:
1) The problem stems from certain emails with odd or badly formed
characters.
The reason I wasn
That did the trick! No more memory allocation issues. Thank you very much
for the patch and your help, Timo. Should we expect the patch to be in
the final 1.2 release, or in a future version?
-Dave
Timo Sirainen wrote:
On Mar 26, 2009, at 1:00 PM, David Halik wrote:
Any thoughts on this
rably more.
--
========
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
Nevermind, stupid question. I found it on the squat page. ;)
David Halik wrote:
Timo Sirainen wrote:
For message body indexing there are a couple of choices:
http://wiki.dovecot.org/Plugins/FTS
Just curious, what is the average disk usage for FTS? I know it's
usually around 10-15%
  userdb:
    driver: passwd
plugin:
  quota: fs
  fts: squat
  fts_squat: partial=4 full=4
--
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
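For anyone wanting the full picture, a complete squat setup in a
Dovecot 1.x dovecot.conf looks roughly like this (a sketch; the
mail_plugins lines are assumed, only the plugin block matches the
config above):

  protocol imap {
    mail_plugins = fts fts_squat
  }
  plugin {
    fts = squat
    # partial-word and full-word indexing lengths
    fts_squat = partial=4 full=4
  }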
My mistake, bad copy and paste:
# dovecot -n
# 1.1.13: /usr/local/etc/dovecot.conf
# OS: SunOS 5.9 sun4u
protocols: imap imaps pop3 pop3s
Charles Marcus wrote:
On 4/21/2009, David Halik (dha...@jla.rutgers.edu) wrote:
# OS: SunOS 5.9 sun4u
protocols: imap imaps pop3 pop3s
ssl_disable
-n
# 1.1.13: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.23-gentoo-r9 x86_64 Gentoo Base System release 1.12.11.1
protocols: imaps
listen: [::]
...
etc...
--
====
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
thout it though. It's useful
when you have a couple thousand emails and want to do multiple searches
because each successive search is fast. We're actually running it on our
Sun boxes in preparation for running it on faster Linux boxes though, so
I'm not too worried.
--
====
Timo,
I had another core dump randomly while doing a search with squat on. The
backtrace is in the link:
http://pastebin.com/fd59ae03
Thanks.
David Halik wrote:
Timo Sirainen wrote:
Did you happen to get a core dump? gdb backtrace would be helpful:
http://dovecot.org/bugreport.html
/dovecot.index.search.uids: wrong indexid
47864 -> 0xfef48220 -> 0x11f2e4 -> 0x86124 -> 0x86458
-> 0x193170 -> 0x193200 -> 0x193f04 -> 0x193290 -> 0xa4308 -> 0x77368
Sorry for sending this in multiple emails; I'm trying to help someone
debug it and I'm not getting the info all at once.
David Halik wro
Thu, 2009-04-23 at 11:51 -0400, David Halik wrote:
I did some more digging and here's the actual error that it fails with:
Apr 23 09:28:21 batman.rutgers.edu IMAP(kmech): : [ID 107833 mail.crit]
Panic: file squat-trie.c: line 441: assertion failed:
(!node->have_sequential)
Are
Don't forget to enable it.
--
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
  inbox: yes
  list: yes
  subscriptions: yes
lda:
  postmaster_address: postmas...@jla.rutgers.edu
auth default:
  verbose: yes
  debug: yes
  passdb:
    driver: pam
    args: *
  userdb:
    driver: passwd
plugin:
  quota: fs
  fts: squat
  fts_squat: partial=4 full=4
--
=====
  subscriptions: yes
auth default:
  passdb:
    driver: pam
    args: dovecot
  userdb:
    driver: passwd
--
====
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
On 12/17/2009 01:07 PM, Timo Sirainen wrote:
On Thu, 2009-12-17 at 12:49 -0500, David Halik wrote:
I applied those patches to my 1.2.8 installation before 1.2.9 was
released and that immediately fixed the expunge crash, but the
array_delete bug is still there.
Do you also see the
match == IMAP_MATCH_YES)
Let me know if you need full backtraces from the core dump.
I'm seeing both of these dumps on multiple users now with 1.2.9, so I
went ahead and did backtraces for them both.
maildir_uidlist_records_array_delete panic: http://pastebin.com/f20614d8
ns_get_listed_prefix panic: http://pastebin.com/f1420194c
On 12/21/2009 12:43 PM, David Halik
eing this crash until we took 'noac' out of
our NFS mount options, as discussed on this list late last week. On the other
hand, load on our NFS server (as measured in IOPS/sec) has dropped by a factor
of 10.
-Brad
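For context on why 'noac' hurts: it disables NFS attribute caching
entirely, so nearly every access costs a GETATTR round trip to the
filer. A hypothetical /etc/fstab comparison:

  # default attribute caching: far fewer GETATTR calls, but other
  # clients may see stale attributes for up to acregmax seconds
  filer:/export/mail  /mail  nfs  rw,hard,intr,actimeo=3  0 0

  # noac: every client sees fresh attributes, at a large IOPS cost
  filer:/export/mail  /mail  nfs  rw,hard,intr,noac       0 0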
-Original Message-
From: dovecot-bounces+brandond=uoreg
Looks like you're running 1.2.8, the
maildir_uidlist_records_drop_expunge crash was fixed in 1.2.9. Upgrading
should fix your problem.
On 12/23/2009 5:29 AM, Anton Dollmaier wrote:
Hi all,
after inserting another sieve-rule, I get the following backtrace on
deliver.
The mail gets delive
->
/usr/libexec/dovecot/imap(io_loop_run+0x1d) [0x4a5c
ed] -> /usr/libexec/dovecot/imap(main+0x620) [0x428ef0] ->
/lib64/libc.so.6(__libc_start_main+0xf4) [0x38af41d994] ->
/usr/libexec/dovecot/imap [0x419ac9]
On 12/22/2009 08:17 PM, Timo Sirainen wrote:
On 22.12.2009, at 16.42, David Hali
No symbol "match" in current context.
(gdb) p ns_prefix
No symbol "ns_prefix" in current context.
(gdb) p p
No symbol "p" in current context.
...and here's the full trace for reference
http://pastebin.com/f77189785
--
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
:
On Wed, 2009-12-23 at 14:06 -0500, David Halik wrote:
Dec 23 11:18:32 gehenna17.rutgers.edu dovecot: IMAP(user2): Panic: file
cmd-list.c: line 242 (ns_get_listed_prefix): assertion failed: (match ==
IMAP_MATCH_YES)
Fixed: http://hg.dovecot.org/dovecot-1.2/rev/56dd8c276ed6
gv=<optimized out>, envp=0x7fff86f840b8) at main.c:327
And the full backtrace:
http://pastebin.com/f651f649e
--
========
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
) [0x4a5d6d] ->
/usr/libexec/dovecot/imap(main+0x620) [0x428f20] ->
/lib64/libc.so.6(__libc_start_main+0xf4) [0x354301d994] ->
/usr/libexec/dovecot/imap [0x419ac9]
Dec 24 10:44:38 gehenna11 dovecot: dovecot: child 19032 (imap) killed
with signal 6 (core dumped)
On 12/24/2009 11:26
mains...
On 12/24/2009 11:39 AM, David Halik wrote:
I should probably also post the messages leading up to it for reference.
Note that I did not see any stale NFS messages this time, but did get
the usual duplicate file messages:
Dec 24 10:43:07 gehenna11 dovecot: IMAP(user):
/rci/nqu/rci/u1/user/do
On 12/29/2009 6:18 PM, Timo Sirainen wrote:
Wonder if there's a corresponding "Expunged message reappeared, giving a
new UID (old uid=x)" having "Duplicate file entry .. (uid x -> " for each
log line? Meaning that the duplicate file entries are caused by those
reappearing messages? (And the reapp
Hi Ralf, Timo committed a patch for this last week; give it a try. So far
it has stopped these crashes for me.
http://hg.dovecot.org/dovecot-1.2/rev/56dd8c276ed6
On 12/30/2009 4:54 AM, Ralf Hildebrandt wrote:
* Ralf Hildebrandt:
I got about 20 (!) of these today
Log:
Dec 30 10:48
ehenna11
Second user log: http://pastebin.com/f3a1756f2
Second user gdb: http://pastebin.com/m59aacde4
On 12/29/2009 7:50 PM, Timo Sirainen wrote:
On 29.12.2009, at 19.09, David Halik wrote:
I'll definitely get back to you on this. Right now we're closed until after New
Year's and I don
_size = 4},
v = 0x950fe70, v_modifiable = 0x950fe70}, parser = 0x9512620,
state = CLIENT_COMMAND_STATE_WAIT_INPUT, sync = 0x0, uid = 0, cancel = 0,
param_error = 0, search_save_result = 0, temp_executed = 0}
(gdb) quit
- End forwarded message -
--
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
01250P7938V04240006I01EF76E0_0.gehenna10.rutgers.edu,S=1731278:2,",
extensions = 0x1109d240 "W1753846"}
(gdb) p *recs[1]
$4 = {uid = 55109, flags = 4, filename = 0x1109d268
"1262788793.M851477P3866V045C0007I01EF76E3_0.gehenna8.rutgers.edu,S=19990",
extensions = 0x0}
issues up) is 10 dedicated IMAP/POP servers, two frontend
*nix boxes running Pine, and four webmail machines using imapproxy to
connect to the IMAP servers... all using the same NFS backend. More than
likely it's a multi-server NFS access issue.
--
David Halik
S
Same here. I laughed because our help desk started sending us the exact
same complaints and then today I got a little bit of a red nose when a
director's mail "disappeared" in a meeting. ;) Whoops.
It looks like users who end up with the off-by-one uid list rebuild and
crash experience and emp
nd
5K, so such a large increase of load was too much to handle.
During the 12-hour window I didn't see a single uid error, as expected,
but the fix was worse than the problem.
On 01/13/2010 07:41 PM, David Halik wrote:
Same here. I laughed because our help desk started sending us
--
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
49:2,Sa
(uid 8747 -> 9859)
Notice it's dump free at the end! ;)
--
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
uid/subscriptions to a non-quota area outside their homedir,
then this is necessary.
Unfortunately, that was something we learned too late after we did our
migration. =)
--
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
cases have been
with Thunderbird 3.0; I'm not sure if other clients are seeing this.
Any ideas?
On 01/20/2010 01:45 PM, Timo Sirainen wrote:
On 20.1.2010, at 20.24, David Halik wrote:
Jan 20 13:13:59 gehenna13.rutgers.edu dovecot: IMAP(user):
/rci/nqu/rci/u2/user/dovecot/.INBOX/dove
is? Google
hasn't been helpful yet.
--
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
On 01/21/2010 12:14 PM, Timo Sirainen wrote:
On 21.1.2010, at 19.13, Timo Sirainen wrote:
On 21.1.2010, at 19.10, David Halik wrote:
This seems to have started since TB 3.0 came out, especially with the new
indexing feature they added. I'm not seeing it with Pine or S
been trying to fix it for the
last two releases. So far the alpha build for 3.0.2 is working for me.
https://bugzilla.mozilla.org/show_bug.cgi?id=517461
https://bugzilla.mozilla.org/show_bug.cgi?id=524902
--
David Halik
System Administrator
OIT-CSS Rutgers Unive
multiple
services with an NFS backend.
What has been your experience so far?
Thanks,
-Dave
--
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
anyone notices
the corruption.
Thanks for all the feedback. I'm going over some of the ideas you
suggested and we'll be thinking about long term solutions.
--
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
aints.
That's where we are, and as long as the corruptions stay user-invisible,
I'm fine with it. Crashes seem to be the only user-visible issue so far,
with "noac" being out of the question unless they buy a ridiculously
expensive filer.
--
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
maybe it's not such a problem after all. Anyway, what has your experience
been?
--
====
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
Well, I don't know how you feel about it, but you could always go with
something similar to what Courier does and call it "doveauth" while
keeping the real "dovecot" user for the rest of the processes.
It's eight characters, it reminds you of the login process, and it's easy
to understand for
Seems everyone is starting to notice this now. :)
Yes, there are several flaws and bug reports open with Mozilla on broken
behavior with TB 3.0 and TB 3.0.1. Currently the easiest fix is to turn
off CONDSTORE either server-side or in TB itself. There is also a TB
3.0.2pre nightly available tha
So, what is the real-world benefit of CONDSTORE? In other words, what do
you lose by disabling it?
My general understanding is that it reduces both client and server sync
load by essentially allowing the two to be smarter about flag/time
changes. When a client connects it does not need to re-fetch every
message's flags; it can ask only for changes since the last modseq it saw.
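An illustrative RFC 4551 exchange (a constructed example, not from a
real session): a client that remembers the mailbox's HIGHESTMODSEQ can
fetch only the flags that changed since its last connection:

  C: a1 SELECT INBOX (CONDSTORE)
  S: * OK [HIGHESTMODSEQ 715194045007] Highest
  S: a1 OK [READ-WRITE] SELECT completed
  C: a2 FETCH 1:* (FLAGS) (CHANGEDSINCE 715194045000)
  S: * 3 FETCH (UID 103 MODSEQ (715194045007) FLAGS (\Seen))
  S: a2 OK FETCH completed

Without CONDSTORE the client has to fetch the flags of every message
to resync, which is the extra load being discussed.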
now they'll just have to live with it until I either get proxy_maybe
set up, or find some other solution.
--
====
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
On 01/25/2010 01:00 PM, Charles Marcus wrote:
On 2010-01-25 12:57 PM, David Halik wrote:
I just had a user experience this with TB 2, and after looking at the logs
I found the good ole' stale nfs message:
Maybe TB3 would be better behaved? It has many, many IMAP improvements
ove
On 01/25/2010 01:31 PM, Timo Sirainen wrote:
On Mon, 2010-01-25 at 12:57 -0500, David Halik wrote:
Jan 25 11:39:24 gehenna21 dovecot: IMAP(user):
fdatasync(/rci/nqu/rci/u8/user/dovecot/.INBOX/dovecot-uidlist) failed:
Stale NFS file handle
Well, two possibilities:
a) The attached
On 01/25/2010 03:26 PM, Timo Sirainen wrote:
On Mon, 2010-01-25 at 15:12 -0500, David Halik wrote:
I patched and immediately started seeing *many* of these:
Jan 25 15:05:33 gehenna18.rutgers.edu dovecot: IMAP(user):
lseek(/rci/nqu/rci/u1/sendick/dovecot/.Trash/dovecot-uidlist) failed:
Bad
No guts, no glory! So far, so good. The first patch started spewing messages
within seconds. I've been running for about twenty minutes with this version
and I haven't seen much of anything yet.
I'll report back tomorrow after it has a day to burn in.
It's still a bit buggy. I haven't see
event *) 0x12bec350
list = (struct io_list *) 0x12bf01c0
io = (struct io_file *) 0x12bfb760
tv = {tv_sec = 1799, tv_usec = 999417}
events_count =
t_id = 2
msecs =
ret = 1
i = 0
call =
#14 0x004a5d7d in io_loop_run (ioloop=0x12bec0f0) at ioloop.c:335
No locals.
#15 0x00428f3
s is going to end up being what we use to avoid NFS
problems. It's lightweight, drops into our current setup, and doesn't
require a MySQL database, which would add another point of failure.
--
====
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
We're pretty much in the exact same spot. I'm getting pressured into
doing something one way or the other since users' mail is still
resyncing when they hit the "stale NFS" message.
--
David Halik
System Administrator
OIT-CSS Rutgers University
dha...@jla.rutgers.edu
On 2/1/2010 5:47 PM, Timo Sirainen wrote:
On 2.2.2010, at 0.39, David Halik wrote:
Back to dovecot: actually sqlite might work, because then I don't need a
database backend, just a local sqlite regex. The question then being: how would
dovecot handle multiple servers? For example:
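The example itself is lost in the archive, but as a sketch of the
mechanism under discussion: a Dovecot 1.2 passdb returning proxy
fields from a local SQLite file might look like this (the table and
column names are hypothetical, and the query is untested):

  # dovecot.conf
  passdb sql {
    args = /usr/local/etc/dovecot-sql.conf
  }

  # dovecot-sql.conf -- assumed schema: users(userid, host)
  driver = sqlite
  connect = /usr/local/etc/mailhosts.sqlite
  # proxy_maybe=y: proxy to 'host' unless it resolves to this server
  password_query = SELECT NULL AS password, 'Y' AS nopassword, host, 'Y' AS proxy_maybe FROM users WHERE userid = '%u'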
On 2/6/2010 2:06 PM, Timo Sirainen wrote:
ab9e0, st=0x7fffc949d4b0) at maildir-uidlist.c:382
Oh, interesting. An infinite loop. Looks like this could have happened
ever since v1.1. Wonder why it hasn't shown up before. Anyway, fixed:
http://hg.dovecot.org/dovecot-1.2/rev/a9710cb350c0
Or you can run any TB 3 version; just turn off CONDSTORE support in the
config. Doing so won't hurt anything.
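If memory serves (treat the pref name as an assumption), the
client-side switch lives in TB's Config Editor:

  // about:config / prefs.js -- pref name assumed
  user_pref("mail.server.default.use_condstore", false);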
On 2/8/2010 8:35 AM, Christian Rohmann wrote:
Hello,
On 02/08/2010 01:54 PM, Holger wrote:
Is this problem Dovecot related ?
Do you have the same trouble ?
Yes and I do
On 02/08/2010 01:46 PM, Brandon Davidson wrote:
Hi David,
-Original Message-
From: David Halik
I've been running both patches and so far they're stable with no new
crashes, but I haven't really seen any "better" behavior, so I don't
know if it'
On 02/10/2010 06:15 PM, Brandon Davidson wrote:
Hi David,
-Original Message-
From: David Halik
It looks like we're still working towards a layer 7 solution anyway.
Right now we have one of our student programmers hacking Perdition
with
a new plugin for dynamic use