Re: Dovecot release v2.3.7

2019-07-12 Thread Michael Grimm via dovecot
Aki Tuomi via dovecot  wrote:

> We are pleased to release Dovecot release v2.3.7.

My upgrade from 2.3.6 to 2.3.7 broke replication (FreeBSD 12.0-STABLE r349799):

| dovecot[76032]: master: Dovecot v2.3.7 (494d20bdc) starting up for imap, lmtp, sieve
| dovecot[76035]: replicator: Error: file_ostream.net_set_tcp_nodelay((conn:unix:/var/run/dovecot/stats-writer), TRUE) failed: Invalid argument
| dovecot[76035]: replicator: Error: file_ostream.net_set_tcp_nodelay((conn:unix:auth-userdb), TRUE) failed: Invalid argument
| dovecot[76035]: auth: Error: file_ostream.net_set_tcp_nodelay((conn:unix:/var/run/dovecot/stats-writer), TRUE) failed: Invalid argument

Going back to 2.3.6 fixed it.

FYI: I do not have any …
metric name {}  
… in my config.

Regards,
Michael



Re: Dovecot release v2.3.7

2019-07-12 Thread Michael Grimm via dovecot
Timo Sirainen via dovecot  wrote:
> 
> On 12 Jul 2019, at 21.05, Michael Grimm via dovecot  
> wrote:

>> My upgrade from 2.3.6 to 2.3.7 broke replication (FreeBSD 12.0-STABLE 
>> r349799):
> 
> I don't think these cause any actual breakage?

It doesn't break completely, but it leaves replication shaky:

1) replication is noticeably delayed in some cases (monitored on both instances)
2) replication does not complete even after 15 minutes in some cases

Returning to 2.3.6 completes these hanging replications instantaneously.

> Of course, there are tons of errors logged..

Yes.

> 
> This likely fixes it anyway:
> 
> diff --git a/src/lib/ostream-file.c b/src/lib/ostream-file.c
> index e7e6f62d1..766841f2f 100644
> --- a/src/lib/ostream-file.c
> +++ b/src/lib/ostream-file.c
> @@ -334,7 +334,7 @@ static void o_stream_tcp_flush_via_nodelay(struct file_ostream *fstream)
> {
> 	if (net_set_tcp_nodelay(fstream->fd, TRUE) < 0) {
> 		if (errno != ENOTSUP && errno != ENOTSOCK &&
> -		    errno != ENOPROTOOPT) {
> +		    errno != ENOPROTOOPT && errno != EINVAL) {
> 			i_error("file_ostream.net_set_tcp_nodelay(%s, TRUE) failed: %m",
> 				o_stream_get_name(&fstream->ostream.ostream));
> 		}

I do not compile from source; I'm using FreeBSD's ports. I tried to drop this 
patch into the ports tree so it would be applied when compiling Dovecot. But 
the patch fails to apply, most probably due to mangled whitespace. As I do not 
have access to Dovecot's source tree, I have no idea what the ASCII version of 
the patch looks like. Could you mail it to me as an attachment?

Regards,
Michael



Re: Dovecot release v2.3.7

2019-07-12 Thread Michael Grimm via dovecot
Michael Grimm via dovecot  wrote:
> 
> Timo Sirainen via dovecot  wrote:

>> This likely fixes it anyway:
>> 
>> diff --git a/src/lib/ostream-file.c b/src/lib/ostream-file.c

> I do not compile from source; I'm using FreeBSD's ports. I tried to drop this 
> patch into the ports tree so it would be applied when compiling Dovecot. But 
> the patch fails to apply, most probably due to mangled whitespace. As I do not 
> have access to Dovecot's source tree, I have no idea what the ASCII version of 
> the patch looks like. Could you mail it to me as an attachment?

Never mind, Larry has applied your patch to FreeBSD's ports system in the 
meantime, and yes, it does silence those error messages.

But replication still doesn't work as it did in previous versions. I will 
give it a try overnight, but I am seeing …

| Queued 'full resync' requests 1

… after 1 minute already.

Regards,
Michael



Re: Dovecot release v2.3.7

2019-07-13 Thread Michael Grimm via dovecot

Michael Grimm wrote:


But replication still doesn't work as it did in previous versions.
I will give it a try overnight, but I am seeing …

| Queued 'full resync' requests 1

… after 1 minute already.


The following modifications to my setup made replication work 
again (at least for the last 8 hours):


#) Re-compiling dovecot with the patch from 
https://dovecot.org/pipermail/dovecot/2019-July/116479.html
#) Activating mailbox attributes as mentioned in 
https://dovecot.org/pipermail/dovecot/2019-July/116492.html


Now replication is working from my point of view, aside from occasional 
error messages like:


| imap-login: Error: file_ostream.net_set_tcp_nodelay(, TRUE) failed: Connection reset by peer


(This has been reported before: 
https://dovecot.org/pipermail/dovecot/2019-July/116491.html)


Regards,
Michael




Re: Dovecot release v2.3.7

2019-07-17 Thread Michael Grimm via dovecot
Timo Sirainen via dovecot  wrote:
> On 13 Jul 2019, at 18.39, Michael Grimm via dovecot  
> wrote:

>> Now replication is working from my point of view, aside from occasional 
>> error messages like:
>> 
>> | imap-login: Error: file_ostream.net_set_tcp_nodelay(, TRUE) failed: Connection reset by peer
> 
> Here's the final fix for it:
> 
> https://github.com/dovecot/core/commit/25028730cd1b76e373ff989625132d526eea2504

And indeed, after applying your patch those error messages are history :-)

Thank you very much and regards,
Michael

@Larry: from my point of view you may replace the previous patch with this one.



[wishlist] new option for 'doveadm purge'

2019-12-08 Thread Michael Grimm via dovecot
Hi,

I store mail in mdbox format with a rotation size of 150m (Dovecot 2.3.9).

Once in a while I end up with mdbox files of smaller size, even after 
running 'doveadm purge' following expunges by the users, like:

-rw-------  1 vmail  dovecot  104854595 Feb  9  2019 /var/mail/.maildirs/userX/storage/m.22
-rw-------  1 vmail  dovecot   29088478 Mar  8  2019 /var/mail/.maildirs/userX/storage/m.31
-rw-------  1 vmail  dovecot   98210890 Mar 20  2019 /var/mail/.maildirs/userX/storage/m.39

(Currently the counter is at file number 129.)

Well, I have never experienced missing mail or the like, but these "holes" in 
file size irritate me, and yes, it is more or less a cosmetic issue.

Nevertheless, I sometimes want to get rid of these "holes" by backing up all 
mail and re-injecting the backup into a vanilla account of that user. I also 
used this approach when I wanted to store all mail messages in larger mdbox 
files; again, rather a cosmetic issue.

BUT that takes a very, very long time compared to the speed of 'doveadm purge'. 
Unfortunately, that command starts somewhere among the more recent mdbox files 
and never from scratch (the oldest mdbox file).

Wishlist: Would it be much of an effort to implement an option like

 'doveadm purge -f'

with '-f' standing for 'force', 'from scratch', 'from the very first 
message found', or 'you name it'?

Thanks in advance and thanks for Dovecot and with kind regards,
Michael



Re: [wishlist] new option for 'doveadm purge'

2019-12-09 Thread Michael Grimm via dovecot
Aki Tuomi via dovecot  wrote:
> On 8.12.2019 22.10, Michael Grimm via dovecot wrote:

>> I store mail in mdbox format with a rotation size of 150m (Dovecot 2.3.9).
>> 
>> Once in a while I end up with mdbox files of smaller size, even after 
>> running 'doveadm purge' following expunges by the users, like:
>> 
>> -rw-------  1 vmail  dovecot  104854595 Feb  9  2019 /var/mail/.maildirs/userX/storage/m.22
>> -rw-------  1 vmail  dovecot   29088478 Mar  8  2019 /var/mail/.maildirs/userX/storage/m.31
>> -rw-------  1 vmail  dovecot   98210890 Mar 20  2019 /var/mail/.maildirs/userX/storage/m.39
>> 
>> (Currently the counter is at file number 129.)
>> 
>> Well, I have never experienced missing mail or the like, but these "holes" in 
>> file size irritate me, and yes, it is more or less a cosmetic issue.
>> 
>> Nevertheless, I sometimes want to get rid of these "holes" by backing up 
>> all mail and re-injecting the backup into a vanilla account of that user. 
>> I also used this approach when I wanted to store all mail messages in larger 
>> mdbox files; again, rather a cosmetic issue.
>> 
>> BUT that takes a very, very long time compared to the speed of 'doveadm 
>> purge'. Unfortunately, that command starts somewhere among the more recent 
>> mdbox files and never from scratch (the oldest mdbox file).
>> 
>> Wishlist: Would it be much of an effort to implement an option like
>> 
>>  'doveadm purge -f'
>> 
>> with '-f' standing for 'force', 'from scratch', 'from the very first 
>> message found', or 'you name it'?

> What purge does is that it removes mails that have refcount=0, so "from 
> scratch" makes no sense.

My observation is that 'doveadm purge' starts with the very first mdbox file 
that contains a refcount=0 message, and then shuffles all messages with 
refcount!=0 from that file and all subsequent mdbox files into newly created 
mdbox files, numbered starting right after the last existing mdbox file.

> Renumbering the files "for neatness" is rather heavy operation, as you'd need 
> to move mails around quite a lot.

In the scenario described above, 'doveadm purge' already moves quite a lot of 
mail around, here a few hundred MB in volume.

> There is very little benefit in just catering for holes.

Agreed, as I stated above, it is more or less a cosmetic issue. 

But I do see some value in such an option as well, e.g.:

Suppose you want to decrease your mdbox size from 100m to 10m because you 
decided it is too risky to store that many mails in one file. You may set 
'mdbox_rotate_size=10m' accordingly, and all subsequent mail will be stored in 
smaller mdbox files, but old mail will remain in the larger files. I do not 
see how a Dovecot user could accomplish this task without backing up all mail 
and re-injecting the backup into a vanilla account. Correct, or am I missing 
some functionality?
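The backup-and-re-inject route could at least be scripted with doveadm itself. This is an unverified sketch: the user name and scratch path are placeholders, and the exact flags should be checked against the doveadm-sync man page before use.

```
# Sketch: rewrite userX's mdbox storage under a new mdbox_rotate_size.
# 1) One-way copy the account out to a scratch mdbox location:
doveadm backup -u userX mdbox:/var/tmp/userX-mdbox
# 2) Empty the live storage and set mdbox_rotate_size = 10m in dovecot.conf.
# 3) Pull everything back so it is rewritten with the new rotation size:
doveadm sync -1 -u userX mdbox:/var/tmp/userX-mdbox
```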

With kind regards,
Michael



Re: Blacklistd

2023-04-22 Thread Michael Grimm via dovecot
Marc  wrote:

>> Blacklistd places a very short set of code to send a small packet to a 
>> socket when the decision is made to deny access.

> And how does blacklistd get fed?


Actually, one needs to add a small amount of code to Dovecot that writes to a 
socket. This code needs to be invoked whenever someone tries to "break in" to 
or "abuse" your Dovecot server. The application thus informs the blacklistd 
daemon about the abuse and who committed it; blacklistd listens on that 
socket [1].

The running blacklistd then decides what to do with these attempts and uses 
firewall functionality to block future attempts if desired.

[1] https://github.com/paul-chambers/blacklistd

The sources of bind, ftp, sshd, and postfix have already been modified 
accordingly.
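As an illustration of the mechanism only — blacklistd's real interface is the libblacklist API with its own datagram protocol, so the socket path and message format below are invented — the application-to-daemon notification boils down to a small datagram on a Unix socket:

```python
import os
import socket
import tempfile

# Illustrative only: an "application" reporting an auth failure to a
# "daemon" over a Unix datagram socket, the same kind of channel
# blacklistd listens on. Path and message format are made up here.
sock_path = os.path.join(tempfile.mkdtemp(), "notify.sock")

# daemon side: bind a datagram socket and wait for reports
daemon = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
daemon.bind(sock_path)

# application side: report a failed login attempt
app = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
app.sendto(b"auth-fail user=alice addr=192.0.2.7", sock_path)

msg, _ = daemon.recvfrom(1024)
print(msg.decode())  # the daemon would now decide whether to block 192.0.2.7
```

The real daemon then translates such reports into firewall rules; the point of the sketch is only that the reporting side is a few lines of socket code.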

Regards,
Michael
___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


Re: Replication going away?

2023-07-17 Thread Michael Grimm via dovecot
Emmanuel Fusté  wrote:

> On Sun, 16 Jul 2023 at 18:55, Aki Tuomi via dovecot wrote:
>
>> Yes, director and replicator are removed, and won't be available for pro 
>> users either.

Why in hell would one remove the replicator? It has been working for years 
now. Yes, I recall issues in the beginning, and others and I helped Timo with 
debugging and testing. Since then it has run without any flaws.

So why remove it?

>> Regards to replication, doveadm sync is not being removed. So you can still 
>> run 
>> doveadm sync on your system to have a primary / backup setup

AND: What do you believe the alternative should be for a failover scenario 
with two IMAP servers?
doveadm sync is not one! That's why the replicator was implemented!

> That's completely crazy!

+1

Regards,
Michael



Re: Replication going away?

2023-07-19 Thread Michael Grimm via dovecot
Michael Slusarz via dovecot  wrote:
>> On 07/18/2023 9:00 AM MDT Gerald Galster  wrote:

>> While I understand it takes effort to maintain the replication plugin, this 
>> is especially problematic for small active/active high-availability 
>> deployments.
> 
> To clarify: replication absolutely does not provide "active/active".  
> Replication was meant to copy data to a standby server, but you can't have 
> concurrent mailbox access.  This is why directors existed.

That simply isn't true, and I am baffled that you don't know that replication 
has worked with a two-server active/active setup for years now! Two separate 
instances (active/active) on two different continents have been a completely 
reliable failover scenario for years.

Very irritating to read such a statement.

Regards,
Michael


Re: Replication going away?

2023-07-19 Thread Michael Grimm via dovecot
Marc  wrote:

>> That simply isn't true, and I am baffled that you don't know that
>> replication has worked with a two-server active/active setup for years now!
>> Two separate instances (active/active) on two different continents have been
>> a completely reliable failover scenario for years.
> 
> Maybe it works like this in your environment? Maybe if the load increases you 
> run into trouble? The director is making sure you never utilize an 
> active/active situation from the perspective of user access. The user is only 
> accessing one server. It is quite a different story when the same user starts 
> writing to both servers at the same time.

If I rapidly inject tens of thousands of mails locally on both servers 
SIMULTANEOUSLY for the very same user, I never ever lose one of them. Tested 
numerous times before rolling it out. In the very beginning, when Timo 
published replication, it had flaws, but other users and I tested it while 
Timo enhanced his code (and IIRC he once even rewrote it from scratch). For 
years now it has run as expected and documented.

As mentioned in this thread, this is true for small setups.

Regards,
Michael



Re: maintainer-feedback requested: [Bug 280929] mail/dovecot move bogus warning "Time moved forwards" to debug

2024-09-01 Thread Michael Grimm via dovecot
Timo Sirainen via dovecot  wrote:
> 
> On 30. Aug 2024, at 19.00, dco2024--- via dovecot  wrote:

>> This is not limited to FreeBSD. I'm seeing it on Gentoo Linux. Kernel is 
>> 6.6.47-gentoo-x86_64, dovecot 2.3.21.1 (d492236fa0). The warning is logged 
>> once every 12-15 hours. 
>> 
>> Syslog:
>> 2024-08-24 18:03:49 UTC myhost dovecot: master: Warning: Time moved forwards by 0.100068 seconds - adjusting timeouts.
>> 2024-08-25 06:18:49 UTC myhost dovecot: master: Warning: Time moved forwards by 0.100063 seconds - adjusting timeouts.
>> [snip]
>> Chrony ntp keeps the time in sync, and the time has been in sync to within 
>> 30us of UTC for many days. I noticed that it reports that the unadjusted 
>> system clock is about 2.31 ppm fast of UTC. Doing the math for dovecot's 12 
>> hour warning interval:
>>  12 hours * 3600 secs/hour * 2.31e-6 ≈ 0.0998 seconds.
>> Could it be that dovecot is effectively measuring intervals on the 
>> uncorrected system clock instead of the longer-term adjusted time, and it 
>> complains when the accumulated NTP adjustments sum to 0.1 seconds?
> 
> I don't see how that would be possible. The check is using only just 
> generated timestamps, not anything from a long time ago.
> 
> I wonder if this kind of a simple patch would be good enough of a fix:
> 
> [snip]

I applied your patch to dovecot-2.3.21.1 on FreeBSD 14.1-STABLE.

Now, after 24 hours, dovecot no longer complains about "Time moved forwards".

Before, I had had between 10 and 250 complaints every day.
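For reference, the drift arithmetic quoted earlier in the thread works out once the 2.31 ppm figure is applied as parts per million (a toy calculation, not Dovecot's code):

```python
# A clock running 2.31 ppm fast accumulates roughly 0.1 s of NTP
# correction over 12 hours, matching the observed warning interval.
drift_ppm = 2.31
interval_s = 12 * 3600                      # 12 hours in seconds
accumulated = interval_s * drift_ppm / 1e6  # ppm = parts per million
print(round(accumulated, 4))                # ~0.0998 s, just under 0.1 s
```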

HTH and regards,
Michael





Sekt

2024-12-19 Thread Michael Grimm via dovecot
.


Re: Sekt

2024-12-19 Thread Michael Grimm via dovecot
> On 20. Dec 2024, at 08:37, Michael Grimm via dovecot  
> wrote:
> 
> .

Sorry for the noise; mea culpa, mea maxima culpa.

Regards, 
Michael


Re: Dovecot v2.4.0 released

2025-01-25 Thread Michael Grimm via dovecot
Aki Tuomi via dovecot  wrote:

> after a very long wait we are finally happy to release Dovecot v2.4.0!

Will there be security fixes/patches for dovecot-2.3.21.1 in the future?

Background: As replication has been removed, I would love to remain on 
v2.3.21.1 ...

Regards,
Michael



Re: Replication ade

2025-01-26 Thread Michael Grimm via dovecot
Hanns Mattes via dovecot  wrote:

> Sorry,
> 
> this was aimed at the German list.

I would have answered in English anyway ;-)

> But feel free to answer - any hint appreciated :-)
> 
> On 26.01.25 at 16:37, Hanns Mattes via dovecot wrote:
>> Hi,
>> if I have understood this correctly, replication disappears from Dovecot's
>> toolbox with 2.4. Although I only run a rather small system with about 500
>> users, I found replication quite reassuring :-)
>> My question to the group: how do you all plan to replace replication?

How to replace replication?

Yeah, that's what I am currently investigating. I have a crude proof of 
concept, though it still needs more thorough testing.

BTW: I do have only 5 users ;-)

Concept:

1) every user needs a sieve script starting with:

require ["vnd.dovecot.execute"]; # add all your other requires ...
# replication with doveadm sync …
execute "_REPLICATE" "test-replication";

2) dovecot.conf:

# home-brewn replication
sieve_extensions = +vnd.dovecot.execute
sieve_plugins = sieve_extprograms
sieve_execute_bin_dir = /any/path/to/your/script
sieve_extension_exec_timeout = 1s

3) script /any/path/to/your/script/_REPLICATION (sorry, it's csh)

#!/bin/csh

set LOGGER = "/usr/bin/logger -p mail.info -t replication"

if ( $#argv != 1 ) then
    echo "missing user in $0" | ${LOGGER}
    exit 1
endif

set USER = $argv[1]
set DOVEADM = "/usr/local/bin/doveadm"
set DELAY = "1"        # gives sieve time to finalise before synchronising
set LOCK = "1"
set TIMEOUT = "30"
set PORT = "12345"     # see doveadm service in dovecot.conf
set DESTINATION = "my.destination.server"

# Background this pipeline immediately so sieve can continue its work and
# store the mail; otherwise synchronisation would run before storage.

( sleep ${DELAY} ; ${DOVEADM} sync -P -l ${LOCK} -T ${TIMEOUT} -u ${USER} tcp:${DESTINATION}:${PORT} ) |& ${LOGGER} &
exit 0


This will replicate all incoming mail, even messages arriving simultaneously 
at both servers.
(But I haven't tested it with huge numbers of mails yet.)


4) This doesn't trigger on modified IMAP flags, deletions, and such. Thus I 
will add a crontab script, similar to the one above, that synchronises both 
servers every minute or so.


Remarks:

#) This is FreeBSD. I am running postfix and dovecot in the same service jail.

#) Both servers are connected via an IPsec tunnel, thus I do not need to use 
ssh. But that should be easy to add.

#) I could live with 4) only, but I know my users ;-)

#) My script needs to become much more bullet-proof with respect to locking, 
checking for runaway processes, and such.

#) v2.3 
https://doc.dovecot.org/2.3/configuration_manual/sieve/plugins/extprograms/

#) v2.4 https://doc.dovecot.org/2.4.0/core/plugins/sieve_extprograms.html


I am testing this on both of my servers, which still have active replication 
with Dovecot 2.3.21.1; thus I needed to disable replication for the test user 
"test-replication".


Again, this is just an initial concept that needs further testing and 
investigation [1].


HTH and regards,
Michael


[1] How could one tell a listening daemon about modifications of IMAP flags 
...?
