map_quota trash
}
protocol pop3 {
mail_plugins = quota quota
pop3_logout_format = top=%t/%p, retr=%r/%b, del=%d/%m, size=%s
pop3_uidl_format = %08Xu%08Xv
}
Regards, Mikkel
entry everything works as expected :-)
Regards, Mikkel
On 14/06/12 10.14, Mikkel wrote:
Hello
In my installation the disable_plaintext_auth setting does not appear to take
effect.
I can see that the value is correct using doveconf -a, but it doesn't
change anything.
Whenever attempting to log in
s.
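For reference, the settings involved usually look roughly like this (Dovecot
v2.x syntax; certificate paths are only examples). One common surprise is that
connections from localhost count as "secure", so plaintext logins from the
server itself still work even with the setting enabled, which can look as if
it has no effect:

  disable_plaintext_auth = yes
  ssl = yes                                  # or "required" to force TLS
  ssl_cert = </etc/ssl/certs/dovecot.pem     # example paths only
  ssl_key = </etc/ssl/private/dovecot.pem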
It seems irrational that Dovecot itself doesn't support this, considering
that Dovecot is far ahead of qmail in every other respect.
By the way, I use it for much the same tasks as you do: for spam reporting,
but also for certain e-mails that need to be put into another database
besides their mailbox.
Regards, Mikkel
All betas so far have compiled without problems on my setup, but something
breaks in b13.
This problem occurs with both make and gmake.
Below are outputs from make and gmake.
Regards, Mikkel
make all-recursive
Making all in src
Making all in lib
make all-am
if gcc -DHAVE_CONFIG_H -I. -I
le.
Compile errors do actually make it into the development tree from time to
time (there appear to be other compile errors with this specific beta
release as well), so reporting it to the list makes perfect sense.
After all, the feedback comes from the users.
Regards (and a happy new year), Mikkel.
ed all
previous v1.1 betas on the same setup including b12).
Ref.:
http://dovecot.org/list/dovecot/2007-December/027702.html
http://dovecot.org/list/dovecot/2007-December/027704.html
Regards, Mikkel
On Fri, January 4, 2008 1:56 am, [EMAIL PROTECTED] wrote:
> Any news on the b13 compile issue on Solaris 10 (both Sparc and X86)?
>
I found this looking through your commits:
http://hg.dovecot.org/dovecot/rev/d45c3058b91a
Guess that'll fix it, thanks.
like a minor error since the delivery functions as it
should, apart from the delivery error it returns and the core dump.
Regards, Mikkel
[EMAIL PROTECTED] tail -f /log/deliver.log | grep @euro123.dk
deliver([EMAIL PROTECTED]): Jan 07 22:20:02 Error:
nfs_flush_file_handle_cache_dir: rmdir(/n
Just tested hg 20080110 with the same result.
- Mikkel
dn't be the same as mail dir
>(http://wiki.dovecot.org/VirtualUsers#homedirs), but I'll see if I can
> do something about this.
Actually I am using a different directory;
home=/nfs/euro123.dk/mikkel, mail=/nfs/euro123.dk/mikkel/Maildir
And normally everything works as expected, except if somebody decides to use "/"
as the home directory (which probably wouldn't happen). Then I guess changing to
the parent directory isn't possible.
I tried to look at your commit, but my C skills (I don't have any) aren't
quite enough to decide whether you took this into consideration already :)
Regards, Mikkel
p (even though
the e-mail is correctly delivered to both accounts).
The e-mail is delivered, but afterwards Deliver fails and returns the
error (listed at the bottom of this e-mail).
Beats me why.
You wrote in a previous mail:
>> msgid=<20080107212004.782C817DB8 at mta01.euro123.dk>
What's your take on this?
Should I change the layout (and what layout should I use then) or is this
a Dovecot bug?
My layout is like the first example shown here (mail directory under home):
http://wiki.dovecot.org/VirtualUsers#homedirs
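For clarity, that layout boils down to something like this (the paths are the
ones mentioned earlier in the thread, shown only as an example):

  mail_location = maildir:~/Maildir
  # with the userdb returning e.g. home=/nfs/euro123.dk/mikkel,
  # so the mail ends up in /nfs/euro123.dk/mikkel/Maildir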
(The new bug is described here:
http://dovecot.org/list/dovecot/2008-January/028062.html)
Do you need any further information?
Regards, Mikkel
ver: sql
args: /local/config/dovecot-sql.conf
socket:
type: listen
client:
path: /var/spool/postfix/private/auth
mode: 432
user: postfix
group: postfix
master:
path: /var/run/dovecot/auth-master
mode: 384
user: vmail
plugin:
quota: maildir
quota_rule2: Trash:storage=10M:messages=100
trash: /local/config/dovecot-trash.conf
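(In case it helps: the trash plugin config file, here
/local/config/dovecot-trash.conf, normally just lists mailboxes by clean-up
priority, lowest priority cleaned first when the quota would otherwise be
exceeded. A sketch with made-up folder names:)

  1 Spam
  2 Trash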
Regards, Mikkel
mail storage are located on NFS.
Regards, Mikkel
This is the output of dovecot -n:
# 1.1.beta14: /local/config/dovecot.conf
Warning: fd limit 256 is lower than what Dovecot can use under full load
(more than 768). Either grow the limit or change login_max_processes_count
and max_mail_processes
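A hedged sketch of the two ways around that warning (the numbers are only
illustrative):

  # option 1: raise the fd limit in the startup script before dovecot starts
  ulimit -n 1024

  # option 2: lower the values the estimate is based on, in dovecot.conf
  login_max_processes_count = 64
  max_mail_processes = 256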
hout waiting a little to see if anything
comes up seems like a bad idea.
Regards, Mikkel
ally written to maildirsize.
>> plugin:
>> quota: maildir
>> quota_rule2: Trash:storage=10M:messages=100
>
> I guess quota_rule comes from userdb? Is it the same for both imap/pop3?
> Although that shouldn't matter since the maildirsize should be updated
> in any case..
>
The queries are exactly alike for POP3 and IMAP.
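For completeness, a quota_rule coming from an SQL userdb looks roughly like
this in dovecot-sql.conf (the table and column names here are only an
illustration assuming MySQL, not my actual query):

  user_query = SELECT home, uid, gid, \
    CONCAT('*:storage=', quota_mb, 'M') AS quota_rule \
    FROM mailbox WHERE username = '%u'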
Thanks for looking into this.
Regards, Mikkel
ot;
message in the log (and an "e-mail undeliverable" reply would normally be
sent to the sender).
The messages vary a bit in different versions though.
Other than that I don't know what could cause it.
Also, the error messages get better the newer the version of Dovecot is
(so upgrading may help in that regard).
Regards, Mikkel
et close to the cause.
Does Dovecot actually check whether updating the maildirsize is successful
or not after calling the operations (e.g. what happens if the code is
unable to read from or write to maildirsize)?
Regards, Mikkel
e then I'd
definitely have to add at least one more LUN to handle it.
I think that the IO penalty for delivery is more due to updating indexes
than to handling Sieve, though... anyway, it's performing nicely :)
Hope this helps
Regards, Mikkel
missions on
the folder.
And I think you should try: fileinto "Junk";
instead of: fileinto ".Junk";
(that is, without the leading dot).
The leading dot is added to the folder name automatically by Dovecot, so it
may cause confusion if you include it yourself.
Dovecot always uses the dot prefix on folder names in the file system, but
it isn't really part of the IMAP/Sieve folder name that you want to use.
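A minimal Sieve sketch of that suggestion (the header test is only an example
condition; use whatever test your filter already has):

  require ["fileinto"];
  if header :contains "X-Spam-Flag" "YES" {
      fileinto "Junk";
  }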
Regards, Mikkel
r and I assume these are related to locking?
Just guessing here, but it's kind of odd that it works now if it isn't
locking-related.
Regards, Mikkel
ced with a
schwarzeneggerish accent like in the last three seconds of this splendid
video http://www.youtube.com/watch?v=adc3MSS5Ydc).
Best regards, Mikkel
one huge database.
Dbox would be the ultimate compromise between crash resilience and a low
number of files (not to mention the enormous potential for speed gains).
Regards, Mikkel
Also there is the risk of data being deleted by mistake, by hacker attacks,
or by software malfunction.
But we really are moving off-topic here.
Regards, Mikkel
What I would prefer, if I were to back up the entire store with
one command, would be generating a snapshot of the file system
and then rsyncing or cp'ing that snapshot. That way you'll always get a
consistent backup and you won't have to worry about how long the backup
takes to finish.
Regards, Mikkel
is
> running
I only have experience with UFS (FreeBSD) and ZFS (Solaris).
Snapshots on UFS are a horrible thing for large file systems.
Snapshots on ZFS (which is what I use) are marvellous. They do not result in
any extra IO whatsoever, thanks to some clever design.
If you have the option of using ZFS it's definitely the best way to do it.
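As a concrete (made-up) example of that workflow on ZFS, with a hypothetical
dataset tank/mail and backup target /backup/mail:

  SNAP=backup-$(date +%Y%m%d)
  zfs snapshot tank/mail@$SNAP                       # instant, no extra IO
  rsync -a /tank/mail/.zfs/snapshot/$SNAP/ /backup/mail/
  zfs destroy tank/mail@$SNAP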
Regards, Mikkel
nge today in my opinion) and reducing the IO. Buying
more IO is an order of magnitude more expensive than getting more RAM or
CPU power (and Dovecot barely needs any RAM or CPU anyway).
Best wishes, Mikkel
to multi-dbox after upgrading to
2.0 or is a completely new migration then needed?
Would this scenario be much different if the system is upgraded to
version 1.2 before the change to single-dbox?
Kind regards, Mikkel
ates (while all subsequent lines are
deleted).
This means that just reading the second line of the file will give you a
pretty good indication of whether dovecot has counted the usage correctly.
If Dovecot has made an error, just delete the maildirsize file, which will
cause Dovecot to recount the usage.
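To make the check concrete, a maildirsize right after a recount looks roughly
like this (numbers invented); the first line is the quota definition, the
second line the summed usage in bytes and messages:

  $ head -2 Maildir/maildirsize
  10485760S,10000C
  5242880 420
  $ rm Maildir/maildirsize   # forces a recount if the totals look wrong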
Regards, Mikkel
Timo Sirainen wrote:
On Oct 14, 2009, at 7:03 AM, Mikkel wrote:
It has been my wish to move to dbox for a long time hoping to reduce
the number of writes which is really killing us.
BTW. Have you tried maildir_very_dirty_syncs=yes setting? That should
reduce disk i/o, and I'm r
Timo Sirainen wrote:
On Oct 14, 2009, at 7:03 AM, Mikkel wrote:
Now the big question is whether multi-dbox and single-dbox are
compatible formats.
Kind of, but not practically.
If a Maildir->dbox migration is made on a system running dovecot v.
1.1, would it then be trivial later chang
n a long term perspective?
Regards, Mikkel
Timo Sirainen wrote:
On Wed, 2009-10-14 at 21:14 +0200, Mikkel wrote:
It has been my wish to move to dbox for a long time hoping to reduce
the number of writes which is really killing us.
BTW. Have you tried maildir_very_dirty_syncs=yes setting? That should
reduce disk i/o, and I'm r
that this might happen
with dbox under the same circumstances.
A very good reason to wait for 2.0 I guess...
Regards, Mikkel
Timo Sirainen wrote:
On Wed, 2009-10-14 at 23:41 +0200, Mikkel wrote:
Timo Sirainen wrote:
And you've actually been looking at Dovecot's error log? Good if it
doesn't break, most people seem to complain about random errors.
Well, it does complain once in a while but it has ne
Timo Sirainen wrote:
On Wed, 2009-10-14 at 23:04 +0200, Mikkel wrote:
So basically you prefer mdbox but are maintaining dbox because of its
almost lockless design which is better for NFS users?
Do you consider it to be viable having two different dbox formats or are
you planning to keep only
Timo Sirainen wrote:
On Wed, 2009-10-14 at 23:52 +0200, Mikkel wrote:
But it should be able to heal itself using the backup files in version
2.0, right?
That's the theory anyway. :)
How often are they created anyway?
Whenever dovecot.index file would normally get recreated, the ol
Timo Sirainen wrote:
On Wed, 2009-10-14 at 23:59 +0200, Mikkel wrote:
In case of mdbox wouldn't you have the very same problem since larger
files may be fragmented all over the disk just like many small files in
a directory might?
I guess this depends on filesystem. But the files
Timo Sirainen wrote:
> On Thu, 2009-10-15 at 10:55 +0200, Mikkel wrote:
>> Some users would have mailboxes of several hundred megabytes, and having
>> to recreate thousands of these every night because of a single mail
>> getting expunged a day could result in a huge pe
Has anyone else experienced a problem like this?
Best wishes, Mikkel
Configuration:
/opt/freeware/dovecot-1.1b3/sbin/dovecot -c /local/config/dovecot2.conf -n
# 1.1.beta3: /local/config/dovecot2.conf
Warning: fd limit 256 is lower than what Dovecot can use under full load
(more than 768). Eithe
I somehow solved this by compiling with gmake instead of make.
I think it's related to libiconv and not to Dovecot.
Sorry for the confusion.
- Mikkel
On Wed, October 17, 2007 1:55 pm, [EMAIL PROTECTED] wrote:
> Hi there
>
>
> I'm using dovecot-1.1b2/b3 with deliver. All infor
th dovecot 1.0.x
Is there a way around this?
- Mikkel
easy to use and also very reliable (my own opinion).
Since it's Perl it's not as fast as it could be but it should be just fine
for your needs.
- Mikkel
, and sometimes completely).
This could be OS-specific (I'm using ZFS and Solaris 10 on Sparc), but it
may also be due to the way Dovecot is programmed.
There's always plenty of RAM and CPU available, so that's not what's
causing the trouble.
Is anyone else familiar with this issue?
Regards, Mikkel
(maybe writes could be grouped together or something like that).
Dovecot is very cheap on the CPU side so the only real limit in terms of
scalability is the storage.
Regards, Mikkel
s been present in all v1.1.x I think (some changes were made to the
error messages in b3 or b4, but I believe this didn't change).
Regards, Mikkel
s I'll just have to
accept it as yet another ZFS curiosity :|
(Possibly this is also the answer to my other post regarding
stalled/delayed I/O)
- Mikkel
On Sun, November 4, 2007 2:20 pm, Timo Sirainen wrote:
> Well, if you use only clients that don't really need indexes they could
> just slow things down. You could try disabling indexes to see how it works
> then (:INDEX=MEMORY to mail_location).
I tried that earlier and it did result in less writ
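(For reference, the quoted suggestion amounts to something like this; the
maildir path is only an example:)

  mail_location = maildir:~/Maildir:INDEX=MEMORY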
On Sun, November 4, 2007 2:54 pm, [EMAIL PROTECTED] wrote:
>> You could truss the hanging process to see what it's doing.
> It's not an easy task since the delay is sometimes just a few (5-10)
> seconds. And when there is a complete stall the client aborts before I can
> find the process. But I'll
On Sun, November 4, 2007 4:32 pm, Timo Sirainen wrote:
>>
>> I didn't know that mail_nfs_index=yes resulted in a forced chown.
>> How come that's necessary with NFS but not on local disks?
>>
>
> It's used to flush NFS attribute cache. Enabling it allows you to use
> multiple servers to access the
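For completeness, the NFS-related settings this discussion revolves around
usually look like this in v1.1 (shown with the commonly recommended values,
not necessarily this exact setup):

  mmap_disable = yes
  mail_nfs_storage = yes
  mail_nfs_index = yes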