How can I get original subject in bounce template

2010-11-01 Thread Ramprasad
Can I configure the bounce template to include the original subject
inside the subject of the NDR?
The bounce man page does not mention the original subject anywhere.



Thanks
Ram



Re: Postfix: deliver to spam folder analog of reject_rbl_client

2010-11-01 Thread Покотиленко Костик
It has now been almost a week without SORBS and without spam.

I remembered that the reason I installed the SORBS list was a lot of forged
freemail spam. At that time I set up a client/helo/sender match check for a
list of free mail services (discussed on this list), and I was also advised
to add SORBS, because all the cases with forged freemail were listed there.

So now SORBS is removed, the client/helo/sender match check is working, and
there is no spam.

On Thu, 28/10/2010 at 22:07 -0500, Stan Hoeppner wrote:
> Покотиленко Костик put forth on 10/28/2010 5:31 AM:
> 
> > a. mail was sent directly from the company's public IP, which is DSL
> > (it shouldn't send direct)
> > b. the advertising company's mail server doesn't have reverse DNS
> > c. it doesn't send a proper HELO
> > d. the advertising company's IP is blacklisted by SORBS
> 
> Ahh, I see.  You live in one of "those" internet neighborhoods.
> 
> > Whitelists grow fast in my experience, so I'm looking for solutions which
> > work well and don't need much attention from my side. Most of it should work
> > automatically; the rest is left to the users' attention. I should only
> > maintain this balance.
> 
> And whitelists that never stop growing are often the most popular
> solution, as you've done.  Have you tried a content filter such as
> SpamAssassin, turning off the client dnsbl function and relying on Bayes
> and rhsbl checks of header/body domains?  SA's built-in tagging function
> would allow you to easily filter to a user's spam folder with sieve,
> procmail, or maildrop.  This setup might help you eliminate the FPs or
> drop them into the spam folder instead of rejecting them.
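
For illustration, a minimal procmail recipe along those lines might look
like this (it assumes SpamAssassin's default X-Spam-Flag header and
Maildir-style delivery; the folder name is just an example):

# file messages SpamAssassin has tagged as spam into a Spam maildir
:0
* ^X-Spam-Flag: YES
$HOME/Maildir/.Spam/
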
> 
> > This is worth experimenting with. In my experience SORBS blocks much more
> > spam (not blocked by the rest) than it produces FPs. That's why I'm looking
> > for a solution to make those FPs easily recoverable.
> 
> Until hearing from you, I'd never heard an OP state that SORBS was so
> effective at catching spam the other dnsbls missed that they were
> willing to accept and deal with the FP rate of SORBS.  Maybe this is due
> to your location in eastern Europe?
> 
> > Several months of statistics on my own mailbox show that without SORBS I was
> > getting 3-10 spams a day. With SORBS I recover 1-5 messages a week for the
> > entire ~200 users. Well, this is not counting the 41 blocked messages from
> > this list this week.
> 
> This is a good example of why SORBS sucks and why the FPs are not
> acceptable.  They list the postfix-users outbound list server IP
> (probably shared with other lists) due to a trap hit or two, even though
> the ham ratio is 100% on most days.  I'm sure there was no "spam run" but
> merely a couple of hits.  Again, bad policy, and why I haven't used
> SORBS for years.
> 
> Usually when I sign up for a mailing list I manually add a whitelist
> entry, or I just let my auto whitelisting script take care of it.
> 
> > This is worth trying, thanks.
> 
> I'm not saying BRBL is a great dnsbl, but from what I hear from other
> OPs it's pretty decent and as good or better than SORBS without the high
> FPs.  I tried it out for a while but it wasn't catching much so I dumped
> it.  Most dnsbls don't catch much spam here because my other A/S
> countermeasures kill most of it first.  dnsbls get crumbs here, same
> with postgrey.
> 
> > So the question is: how is it possible to direct spam mail to a user's
> > IMAP spam folder?
> >>
> >> The answer is don't do this.  Reject the spam during the SMTP connection.
> > 
> > This is costly in management.
> 
> If you have filters with higher accuracy that don't cause FPs, it's not
> costly in management.
> 
> >> Try this out for a week or two:
> >>
> >> 1.  Comment out your SORBS entries in main.cf
> >> 2.  Implement reject_rbl_client b.barracudacentral.org
> >> See http://www.barracudacentral.org/rbl as sign up is required
> >> 3.  Implement this dynamic/generic (residential/zombie) blocking PCRE
> >> check_client_access pcre:/etc/postfix/fqrdns.pcre
> >> http://www.hardwarefreak.com/fqrdns.pcre
> > 
> > Who's supporting this file?
> 
> There is no support, and none is needed.  It's a home-grown regular
> expression table that matches fully qualified reverse or forward DNS
> names of connecting clients.  It targets dynamic IPs and generic static
> IPs of broadband providers, mostly in the US and Europe, but it includes
> some others around the world.  I.e. it blocks direct senders who
> shouldn't be sending direct.  It's much like the Spamhaus PBL in terms
> of results, but blocks many client IPs that the PBL, SORBS DUL, and
> other "dynamic" dnsbls don't.
> 
> If you don't trust it because no big vendor name is behind it, use sed
> to replace REJECT with "WARN fqrdns".  Monitor its effectiveness by
> grepping your log for "fqrdns".
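
A rough sketch of that dry run (the mail log path is an assumption; adjust
for your system):

sed -i 's/REJECT/WARN fqrdns/' /etc/postfix/fqrdns.pcre
postfix reload
grep -c 'fqrdns' /var/log/mail.log
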
> 
> Put it above your RBL checks in main.cf so it gets first crack at the
> connections.  You will likely be pleasantly surprised by the results.
> 
-- 
Покотиленко Костик 



Postfix as an SMTP proxy?

2010-11-01 Thread Nicholas Sideris
Hello,

I have a case where I need to configure a Postfix daemon to act as an
SMTP server, with some spam filtering and some anti-virus running in
parallel on the box. This would help a local ISP control spam relayed
out of its own network and thus avoid getting its IPs blacklisted, etc.
Now my problem: the users can use the SMTP server directly, so if they
select mysmtp.mynetwork.com everything is okay.

Now suppose that a few users have a valid subscription to an SMTP server
outside our network, say theirsmtp.theirnetwork.com. That foreign server
uses SMTP AUTH as well. Obviously, redirecting that traffic first to our
proxy results in complete e-mail delivery failure.

Is there any way to handle this? Preferred methods:
a) Our SMTP proxy talks with the foreign SMTP server and sends the e-mail accordingly.
b) Our SMTP proxy just forwards the commands, without checking the e-mail for
spam/viruses (not very wise, but if there's no other solution, it is part of the
foreign server's responsibility to do these checks).
c) Our SMTP proxy just sends the e-mail directly to the recipient after
checking it, without ever talking to the foreign SMTP server (this can cause
problems with DKIM and SPF domains, but it may still be helpful).

What I need is some configuration guidance on how to achieve such a
setup.

Best Regards
N. Sideris




Re: How can I get original subject in bounce template

2010-11-01 Thread Wietse Venema
Ramprasad:
> Can I configure the bounce template to include the original subject
> inside the subject of the NDR. 
> The bounce man page does not mention original subject anywhere. 

If it is not in the documentation, then it is not supported.
As you correctly observed, the subject and more is in the returned
email message that sits at the bottom of the bounce message.

Wietse


Re: Postfix as an SMTP proxy?

2010-11-01 Thread mouss

On 01/11/2010 10:36, Nicholas Sideris wrote:

Hello,

I have a case where I need to configure a Postfix daemon to act as an
SMTP server, with some spam filtering and some anti-virus running in
parallel on the box. This would help a local ISP control spam relayed
out of its own network and thus avoid getting its IPs blacklisted, etc.
Now my problem: the users can use the SMTP server directly, so if they
select mysmtp.mynetwork.com everything is okay.

Now suppose that a few users have a valid subscription to an SMTP server
outside our network, say theirsmtp.theirnetwork.com. That foreign server
uses SMTP AUTH as well. Obviously, redirecting that traffic first to our
proxy results in complete e-mail delivery failure.

Is there any way to handle this? Preferred methods:
a) Our SMTP proxy talks with the foreign SMTP server and sends the e-mail accordingly.
b) Our SMTP proxy just forwards the commands, without checking the e-mail for
spam/viruses (not very wise, but if there's no other solution, it is part of the
foreign server's responsibility to do these checks).
c) Our SMTP proxy just sends the e-mail directly to the recipient after
checking it, without ever talking to the foreign SMTP server (this can cause
problems with DKIM and SPF domains, but it may still be helpful).


In general, you should not redirect traffic "transparently"...

The "common" approach is to block port 25:
- TCP traffic from one of your IPs to a foreign IP on port 25
- TCP traffic from a foreign IP with source port 25 to one of your IPs
then your customers can use port 587.

you can allow few customers to send directly (by whitelisiting their IP 
from the block-25 rule).
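
For illustration, a sketch of such a block with iptables on the ISP's
router (the customer subnet and the exempted customer IP are made-up
documentation addresses):

# let one whitelisted customer send direct to port 25
iptables -A FORWARD -s 203.0.113.25 -p tcp --dport 25 -j ACCEPT
# block the rest of the customer subnet from reaching foreign port 25
iptables -A FORWARD -s 198.51.100.0/24 -p tcp --dport 25 -j REJECT
# port 587 (submission) is left untouched, so customers can still relay
# through your own server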



This way, you don't need an SMTP proxy.


[snip]





default_destination_recipient_limit not working after changing the mailbox_transport to local_transport

2010-11-01 Thread guido
Hello everyone. I'm using Postfix 2.5.5 and I'm trying to use the
singleinstancestore feature of Cyrus to hard-link mails instead of having
one copy for every recipient.

To make this work, I had to change:

mailbox_transport = lmtp:unix:/var/spool/postfix/public/lmtp
local_transport =

to

mailbox_transport =
local_transport = lmtp:unix:/var/spool/postfix/public/lmtp

Since making this change, it seems that:

default_destination_recipient_limit = 60

isn't working...

Now I can add 600 recipients in one mail, with no problem. Before the
change, the 60 limit was working.

Why? Any ideas how to fix this?

Thanks in advance.



Re: default_destination_recipient_limit not working after changing the mailbox_transport to local_transport

2010-11-01 Thread Wietse Venema
gu...@lorenzutti.com.ar:
> local_transport = lmtp:unix:/var/spool/postfix/public/lmtp
> default_destination_recipient_limit = 60
> 
> isn't working...

As documented, the local_destination_recipient_limit setting has
precedence over the default_destination_recipient_limit setting.

Wietse


Re: Postfix as an SMTP proxy?

2010-11-01 Thread Victor Duchovni
On Mon, Nov 01, 2010 at 11:36:00AM +0200, Nicholas Sideris wrote:

> Hello,
> 
> I am in a case, where I need to configure a postfix daemon for acting
> as an SMTP server, where some spam-filtering and some anti-virus would
> run in parallel in the box. This would be a help, for a local ISP, to
> control spam relayed outside from his own network and thus avoiding IPs
> to get blacklisted, etc. Now my problem. The users can use the SMTP server
> directly, thus if they select mysmtp.mynetwork.com everything is okay.

Don't silently redirect users' SMTP traffic.

Your options:

- Join the SpamHaus PBL as an ISP, and add your IPs to the PBL. Allow
  users to request being exempted from the PBL.

- Block port 25 outbound, and allow users to request having the 
  filter removed. Operate a reliable relay that users may elect
  to use. Don't block port 587.

- Deploy something similar to the Symantec 8600 (aka Turntide)
  SMTP traffic shaping appliance, that can rate limit outgoing
  spam without rerouting the SMTP connection (limitation:
  it can't see through STARTTLS).

-- 
Viktor.


Re: default_destination_recipient_limit not working after changing the mailbox_transport to local_transport

2010-11-01 Thread Reinaldo de Carvalho
On Mon, Nov 1, 2010 at 2:13 PM,  wrote:
>
> Hello everyone. Im using postfix 2.5.5 and im trying to use the
> singleinstancestore of cyrus to hardlink mails instead of having one copy
> of every recipient.
>
> To make this work, I had to change the:
>
> mailbox_transport = lmtp:unix:/var/spool/postfix/public/lmtp
> local_transport =
>
> to
>
> mailbox_transport =
> local_transport = lmtp:unix:/var/spool/postfix/public/lmtp
>
> When I made this change, now it seems that the:
>
> default_destination_recipient_limit = 60
>
> isn't working...

What is the problem? Do you want to enforce 60? Do you want a regular file
for each 60 recipients? Why?

>
> Now I can add 600 recipients in one mail, with no problem. Before the
> change, the 60 limit was working.
>
> Why? Any ideas how to fix this?
>

What are the local_destination_concurrency_limit and
local_destination_recipient_limit values?
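
For example, the effective values and the built-in defaults can be checked
with postconf:

postconf local_destination_concurrency_limit local_destination_recipient_limit
postconf -d local_destination_concurrency_limit local_destination_recipient_limit
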

With local_destination_concurrency_limit > 1, you can't enforce one
regular file. Cyrus will create one regular file per message, and if you
have concurrent connections, the recipients will be split across several
messages.

--
Reinaldo de Carvalho
http://korreio.sf.net
http://python-cyrus.sf.net

"While not fully understand a software, don't try to adapt this
software to the way you work, but rather yourself to the way the
software works" (myself)


Re: default_destination_recipient_limit not working after changing the mailbox_transport to local_transport

2010-11-01 Thread Victor Duchovni
On Mon, Nov 01, 2010 at 02:13:53PM -0300, gu...@lorenzutti.com.ar wrote:

> Hello everyone. Im using postfix 2.5.5 and im trying to use the
> singleinstancestore of cyrus to hardlink mails instead of having one copy
> of every recipient.

This is only possible if you use LMTP delivery directly, without a trip
through local(8), and move all alias processing from aliases(5) to
virtual(5).

The local(8) delivery agent always delivers one recipient at a time,
even when the recipient concurrency is incorrectly set > 1, it just loops
through the recipient list, doing one-at-a-time delivery.
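
As a rough sketch of that move (the domain and addresses below are
hypothetical, not from the original post), an aliases(5) entry such as
"sales: alice, bob" becomes a virtual(5) mapping, so the expansion happens
before the multi-recipient LMTP delivery:

# main.cf
virtual_alias_maps = hash:/etc/postfix/virtual

# /etc/postfix/virtual
sales@example.com    alice@example.com, bob@example.com
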

-- 
Viktor.


Re: default_destination_recipient_limit not working after changing the mailbox_transport to local_transport

2010-11-01 Thread Reinaldo de Carvalho
On Mon, Nov 1, 2010 at 2:13 PM,   wrote:
> Hello everyone. Im using postfix 2.5.5 and im trying to use the
> singleinstancestore of cyrus to hardlink mails instead of having one copy
> of every recipient.
>
> To make this work, I had to change the:
>
> mailbox_transport = lmtp:unix:/var/spool/postfix/public/lmtp
> local_transport =
>
> to
>
> mailbox_transport =
> local_transport = lmtp:unix:/var/spool/postfix/public/lmtp
>

As Victor explained, the local LDA doesn't send multiple recipients in
one message; you must remove the mailbox_transport and local_transport
settings and use transport_maps instead:

# main.cf
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport
example.org  lmtp:unix:/path/to/cyrus-lmtp-server-socket
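
After editing, the hash: table has to be rebuilt and Postfix told to pick
up the main.cf change:

postmap /etc/postfix/transport
postfix reload
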



-- 
Reinaldo de Carvalho
http://korreio.sf.net
http://python-cyrus.sf.net

"While not fully understand a software, don't try to adapt this
software to the way you work, but rather yourself to the way the
software works" (myself)


Trying to use "prepared statements" in PostgreSQL queries

2010-11-01 Thread Patrick Ben Koetter
Out of curiosity I started to play around with Postfix and PostgreSQL.
PostgreSQL recommends "prepared statements" to speed up queries (by ~20%).

As I understand it "prepared statements" must be defined once when a DB
session starts and they will be available only to the particular client that
requested the "prepared statement". Any subsequent client connecting will have
to PREPARE a "prepared statement" for itself.
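
As a small illustration of that per-session behaviour (the database, table
and column names here are hypothetical, not anything Postfix actually uses):

# PREPARE and EXECUTE must happen on the same connection; run as two
# separate psql invocations, the EXECUTE fails with "prepared statement
# ... does not exist"
psql -d mail -c "PREPARE mailbox_lookup (text) AS
                   SELECT maildir FROM mailbox WHERE username = \$1;
                 EXECUTE mailbox_lookup('someone@example.com');"
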

I see I can get around multiple PREPARE statements if I use the Postfix
proxymap daemon, but how would I send the initial PREPARE query?

Has anyone ever tried this? Is it "if it's not documented, then it's not there"?

p...@rick

-- 
All technical questions asked privately will be automatically answered on the
list and archived for public access unless privacy is explicitly required and
justified.

saslfinger (debugging SMTP AUTH):



Re: Postfix as an SMTP proxy?

2010-11-01 Thread Stan Hoeppner
Victor Duchovni put forth on 11/1/2010 12:27 PM:

> - Deploy something similar to the Symantec 8600 (aka Turntide)
>   SMTP traffic shaping appliance, that can rate limit outgoing
>   spam without rerouting the SMTP connection (limitation:
>   it can't see through STARTTLS).

Is this what you refer to Victor?

http://www.symantec.com/business/brightmail-traffic-shaper

-- 
Stan


Re: default_destination_recipient_limit not working after changing the mailbox_transport to local_transport

2010-11-01 Thread Victor Duchovni
On Mon, Nov 01, 2010 at 03:30:57PM -0300, Reinaldo de Carvalho wrote:

> > To make this work, I had to change the:
> >
> > mailbox_transport = lmtp:unix:/var/spool/postfix/public/lmtp
> > local_transport =
> >
> > to
> >
> > mailbox_transport =
> > local_transport = lmtp:unix:/var/spool/postfix/public/lmtp
> >
> 
> As Victor explain, local LDA don't send multirecipients on a message,
> and you must remove mailbox_transport and local_transport values and
> use transport_maps:
> 
> # main.cf
> transport_maps = hash:/etc/postfix/transport
> 
> # /etc/postfix/transport
> exmaple.org  lmtp:unix:/path/to/cyrus-lmtp-server-socket

No, setting "local_transport" is equivalent to using transport_maps, but
is more drastic, since it disables local delivery for all domains in
$mydestination. Typically one wants to leave some ability to process
local aliases in suitably designated domains.

-- 
Viktor.


Re: Postfix as an SMTP proxy?

2010-11-01 Thread Victor Duchovni
On Mon, Nov 01, 2010 at 01:43:05PM -0500, Stan Hoeppner wrote:

> Victor Duchovni put forth on 11/1/2010 12:27 PM:
> 
> > - Deploy something similar to the Symantec 8600 (aka Turntide)
> >   SMTP traffic shaping appliance, that can rate limit outgoing
> >   spam without rerouting the SMTP connection (limitation:
> >   it can't see through STARTTLS).
> 
> Is this what you refer to Victor?
> 
> http://www.symantec.com/business/brightmail-traffic-shaper

Yes.

-- 
Viktor.


Re: Postfix as an SMTP proxy?

2010-11-01 Thread Rich
Nick, I have a simple and elegant solution that has been working for
years. I am using Postfix, SpamAssassin with the spampd proxy server and,
god forbid, a purchased piece of antivirus software from Command Central
called Vexira.  It is a simple setup and has worked for us.


On 11/1/2010 5:36 AM, Nicholas Sideris wrote:

Hello,

I have a case where I need to configure a Postfix daemon to act as an
SMTP server, with some spam filtering and some anti-virus running in
parallel on the box. This would help a local ISP control spam relayed
out of its own network and thus avoid getting its IPs blacklisted, etc.
Now my problem: the users can use the SMTP server directly, so if they
select mysmtp.mynetwork.com everything is okay.

Now suppose that a few users have a valid subscription to an SMTP server
outside our network, say theirsmtp.theirnetwork.com. That foreign server
uses SMTP AUTH as well. Obviously, redirecting that traffic first to our
proxy results in complete e-mail delivery failure.

Is there any way to handle this? Preferred methods:
a) Our SMTP proxy talks with the foreign SMTP server and sends the e-mail accordingly.
b) Our SMTP proxy just forwards the commands, without checking the e-mail for
spam/viruses (not very wise, but if there's no other solution, it is part of the
foreign server's responsibility to do these checks).
c) Our SMTP proxy just sends the e-mail directly to the recipient after
checking it, without ever talking to the foreign SMTP server (this can cause
problems with DKIM and SPF domains, but it may still be helpful).

What I need is some configuration guidance on how to achieve such a
setup.

Best Regards
N. Sideris







Re: Trying to use "prepared statements" in PostgreSQL queries

2010-11-01 Thread Victor Duchovni
On Mon, Nov 01, 2010 at 07:35:44PM +0100, Patrick Ben Koetter wrote:

> Out of curiosity I started to play around with Postfix and PostgreSQL.
> PostgreSQL recommends "prepared statements" to speed up queries (by ~%20).
> 
> As I understand it "prepared statements" must be defined once when a DB
> session starts and they will be available only to the particular client that
> requested the "prepared statement". Any subsequent client connecting will have
> to PREPARE a "prepared statement" for itself.
> 
> I see I can get around multiple PREPARE statements if I use the Postfix
> proxymap daemon, but how would I send the initial PREPARE query?
> 
> Has anyone ever tried this? Is it "if its not documented, then its not there"?

You need to customize the Postfix PgSQL driver to (automatically) support
prepared statements.

-- 
Viktor.


Re: Trying to use "prepared statements" in PostgreSQL queries

2010-11-01 Thread Jeroen Geilman

On 11/01/2010 07:35 PM, Patrick Ben Koetter wrote:

> Out of curiosity I started to play around with Postfix and PostgreSQL.
> PostgreSQL recommends "prepared statements" to speed up queries (by ~%20).


From the 8.0 manual:

Prepared statements have the largest performance advantage when a single 
session is being used to execute a large number of similar statements. 
The performance difference will be particularly significant if the 
statements are complex to plan or rewrite, for example, if the query 
involves a join of many tables or requires the application of several 
rules. *If the statement is relatively simple to plan and rewrite but 
relatively expensive to execute, the performance advantage of prepared 
statements will be less noticeable.*


It is doubtful whether a simple key lookup query - such as postfix does 
- benefits from PSs.


If the postgres database in question is used primarily to lookup postfix 
maps, every possible value will be cached in RAM for 99% of the time 
anyway - this gives incomparably larger advantages than writing faster 
queries.



> As I understand it "prepared statements" must be defined once when a DB
> session starts and they will be available only to the particular client that
> requested the "prepared statement". Any subsequent client connecting will have
> to PREPARE a "prepared statement" for itself.


A prepared statement remains in memory during a session, yes.


> I see I can get around multiple PREPARE statements if I use the Postfix
> proxymap daemon, but how would I send the initial PREPARE query?


That's nontrivial, since even a proxymap connection doesn't live forever.
All postfix processes are recycled after a period of time.

If the Pl/pgSQL language allows it, you could write a SP that checks if 
the statement is already prepared, and then execute it.
This will have a lot more overhead than the potential gain from 
preparing it.


You should have absolutely no delusions about the performance cost of 
this extra check - just writing a stored procedure that runs the SELECT 
will win every single time.


--
J.



Re: Trying to use "prepared statements" in PostgreSQL queries

2010-11-01 Thread Patrick Ben Koetter
Jeroen,

thanks for the detailed answer. Please read my annotations below.

* Jeroen Geilman :
> On 11/01/2010 07:35 PM, Patrick Ben Koetter wrote:
> >Out of curiosity I started to play around with Postfix and PostgreSQL.
> >PostgreSQL recommends "prepared statements" to speed up queries (by ~%20).
> 
> From the 8.0 manual:
> 
> Prepared statements have the largest performance advantage when a
> single session is being used to execute a large number of similar
> statements. The performance difference will be particularly
> significant if the statements are complex to plan or rewrite, for
> example, if the query involves a join of many tables or requires the
> application of several rules. *If the statement is relatively simple
> to plan and rewrite but relatively expensive to execute, the
> performance advantage of prepared statements will be less
> noticeable.*
> 
> It is doubtful whether a simple key lookup query - such as postfix
> does - benefits from PSs.

Agreed. I doubt that too, but I don't know a better approach to prove that
except for trying and measuring.


> If the postgres database in question is used primarily to lookup
> postfix maps, every possible value will be cached in RAM for 99% of
> the time anyway - this gives incomparably larger advantages than
> writing faster queries.

So the best approach is to ensure all tables can be loaded into memory i.e.
provide enough $work_mem in pgSQL?


> >As I understand it "prepared statements" must be defined once when a DB
> >session starts and they will be available only to the particular client that
> >requested the "prepared statement". Any subsequent client connecting will 
> >have
> >to PREPARE a "prepared statement" for itself.
> 
> A prepared statement remains in memory during a session, yes.
> 
> >I see I can get around multiple PREPARE statements if I use the Postfix
> >proxymap daemon, but how would I send the initial PREPARE query?
> 
> That's untrivial, since even a proxymap connection doesn't live forever.
> All postfix processes are recycled after a period of time.
> 
> If the Pl/pgSQL language allows it, you could write a SP that checks
> if the statement is already prepared, and then execute it.
> This will have a lot more overhead than the potential gain from
> preparing it.

Do I understand you correctly? Are you saying the potential gain is not worth
the effort?

p...@rick


> You should have absolutely no delusions about the performance cost
> of this extra check - just writing a stored procedure that runs the
> SELECT will win every single time.
> 
> -- 
> J.
> 

-- 
All technical questions asked privately will be automatically answered on the
list and archived for public access unless privacy is explicitly required and
justified.

saslfinger (debugging SMTP AUTH):



Re: Trying to use "prepared statements" in PostgreSQL queries

2010-11-01 Thread Jeroen Geilman

On 11/01/2010 08:40 PM, Patrick Ben Koetter wrote:

> Jeroen,
>
> thanks for the detailed answer. Please read my annotations below.
>
> * Jeroen Geilman:
>
>> On 11/01/2010 07:35 PM, Patrick Ben Koetter wrote:
>>
>>> Out of curiosity I started to play around with Postfix and PostgreSQL.
>>> PostgreSQL recommends "prepared statements" to speed up queries (by ~%20).
>>
>> From the 8.0 manual:
>>
>> Prepared statements have the largest performance advantage when a
>> single session is being used to execute a large number of similar
>> statements. The performance difference will be particularly
>> significant if the statements are complex to plan or rewrite, for
>> example, if the query involves a join of many tables or requires the
>> application of several rules. *If the statement is relatively simple
>> to plan and rewrite but relatively expensive to execute, the
>> performance advantage of prepared statements will be less
>> noticeable.*
>>
>> It is doubtful whether a simple key lookup query - such as postfix
>> does - benefits from PSs.
>
> Agreed. I doubt that too, but I don't know a better approach to prove that
> except for trying and measuring.


You're obviously free to do that - but as Victor said, postfix doesn't 
support preparing statements, so you'd have to hack the driver :)



>> If the postgres database in question is used primarily to lookup
>> postfix maps, every possible value will be cached in RAM for 99% of
>> the time anyway - this gives incomparably larger advantages than
>> writing faster queries.
>
> So the best approach is to ensure all tables can be loaded into memory i.e.
> provide enough $work_mem in pgSQL?


Even the indexes would be enough. It depends on how big your dataset is.
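
A rough way to check that (the database, table and index names are
hypothetical; pg_relation_size/pg_size_pretty need PostgreSQL 8.1 or later):

psql -d mail -c "SELECT pg_size_pretty(pg_relation_size('mailbox')),
                        pg_size_pretty(pg_relation_size('mailbox_username_idx'));"
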


>>> As I understand it "prepared statements" must be defined once when a DB
>>> session starts and they will be available only to the particular client that
>>> requested the "prepared statement". Any subsequent client connecting will have
>>> to PREPARE a "prepared statement" for itself.
>>
>> A prepared statement remains in memory during a session, yes.
>>
>>> I see I can get around multiple PREPARE statements if I use the Postfix
>>> proxymap daemon, but how would I send the initial PREPARE query?
>>
>> That's untrivial, since even a proxymap connection doesn't live forever.
>> All postfix processes are recycled after a period of time.
>>
>> If the Pl/pgSQL language allows it, you could write a SP that checks
>> if the statement is already prepared, and then execute it.
>> This will have a lot more overhead than the potential gain from
>> preparing it.
>
> Do I understand you correctly? Are you saying the potential gain is not worth
> the effort?


I am saying exactly what I am saying ;)

Given that A. postfix does not support preparing the select_query, and 
B. indexing properly will provide much bigger gains than any other 
measure (orders of magnitude bigger gains), and C. the manual suggests 
that using prepared statements is much less beneficial for simple 
queries, odds are that it's not going to be worth the effort.



>> You should have absolutely no delusions about the performance cost
>> of this extra check - just writing a stored procedure that runs the
>> SELECT will win every single time.


Thus.


--
J.



Re: Trying to use "prepared statements" in PostgreSQL queries

2010-11-01 Thread Kenneth Marshall
It might be worth checking out the pre_prepare module:

http://preprepare.projects.postgresql.org/README.html

Cheers,
Ken

On Mon, Nov 01, 2010 at 08:45:17PM +0100, Jeroen Geilman wrote:
> On 11/01/2010 08:40 PM, Patrick Ben Koetter wrote:
>> Jeroen,
>>
>> thanks for the detailed answer. Please read my annotations below.
>>
>> * Jeroen Geilman:
>>
>>> On 11/01/2010 07:35 PM, Patrick Ben Koetter wrote:
>>>  
>>>> Out of curiosity I started to play around with Postfix and PostgreSQL.
>>>> PostgreSQL recommends "prepared statements" to speed up queries (by ~%20).

>>>  From the 8.0 manual:
>>>
>>> Prepared statements have the largest performance advantage when a
>>> single session is being used to execute a large number of similar
>>> statements. The performance difference will be particularly
>>> significant if the statements are complex to plan or rewrite, for
>>> example, if the query involves a join of many tables or requires the
>>> application of several rules. *If the statement is relatively simple
>>> to plan and rewrite but relatively expensive to execute, the
>>> performance advantage of prepared statements will be less
>>> noticeable.*
>>>
>>> It is doubtful whether a simple key lookup query - such as postfix
>>> does - benefits from PSs.
>>>  
>> Agreed. I doubt that too, but I don't know a better approach to prove that
>> except for trying and measuring.
>>
>>
>>
>
> You're obviously free to do that - but as Victor said, postfix doesn't 
> support preparing statements, so you'd have to hack the driver :)
>
>>> If the postgres database in question is used primarily to lookup
>>> postfix maps, every possible value will be cached in RAM for 99% of
>>> the time anyway - this gives incomparably larger advantages than
>>> writing faster queries.
>>>  
>> So the best approach is to ensure all tables can be loaded into memory 
>> i.e.
>> provide enough $work_mem in pgSQL?
>>
>>
>
> Even the indexes would be enough. It depends on how big your dataset is.
>
>>>> As I understand it "prepared statements" must be defined once when a DB
>>>> session starts and they will be available only to the particular client
>>>> that requested the "prepared statement". Any subsequent client connecting
>>>> will have to PREPARE a "prepared statement" for itself.

>>> A prepared statement remains in memory during a session, yes.
>>>
>>>  
>>>> I see I can get around multiple PREPARE statements if I use the Postfix
>>>> proxymap daemon, but how would I send the initial PREPARE query?

>>> That's untrivial, since even a proxymap connection doesn't live forever.
>>> All postfix processes are recycled after a period of time.
>>>
>>> If the Pl/pgSQL language allows it, you could write a SP that checks
>>> if the statement is already prepared, and then execute it.
>>> This will have a lot more overhead than the potential gain from
>>> preparing it.
>>>  
>> Do I understand you correctly? Are you saying the potential gain is not 
>> worth
>> the effort?
>>
>
> I am saying exactly what I am saying ;)
>
> Given that A. postfix does not support preparing the select_query, and B. 
> indexing properly will provide much bigger gains than any other measure 
> (orders of magnitude bigger gains), and C. the manual suggests that using 
> prepared statements is much less beneficial for simple queries, odds are 
> that it's not going to be worth the effort.
>
>>> You should have absolutely no delusions about the performance cost
>>> of this extra check - just writing a stored procedure that runs the
>>> SELECT will win every single time.
>>>
>>>  
>
> Thus.
>
>
> -- 
> J.
>
>


Re: Persistent mails being received

2010-11-01 Thread Michael Orlitzky
On 10/31/2010 10:21 AM, sunhux G wrote:
> 
> I'll need the exact commands in a Shell script to send email
>  to x...@yahoo.com  & y...@gmail.com
>  with a log file attached
> to it.

I believe you're looking for the 'sendmail' command.


Re: postfix clustering

2010-11-01 Thread Peter
Hi Stan,

> 1.  What are your specific failure concerns with your
> primary site?
> Network failure?  Host failure?  Storage hardware
> failure?

You have a great suggestion assuming the data center functions well.

A primary data center site failure means that the data center itself has
failed, meaning the host machines/storage/network are all inaccessible.






> Maybe a good question for you to ask of the members of this
> list at this
> point is:
> 
> How many OPs here run with a multi site IMAP cluster setup
> with a
> physically distributed mail store, either via replication
> or a cluster
> filesystem over a wide area network?
> 

That is a good question. It is something I am looking for.

However, if not going for an expensive cluster filesystem over a wide area
network, one simple way is to rsync every 5 minutes to a backup server in
another data center, plus a quick DNS change if the primary data center
fails. The TTL in the DNS settings can be 5 minutes.
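
A sketch of that (the paths and hostname are made up; note this is
asynchronous copying, not real replication or automatic failover, so up to
five minutes of mail can be lost):

# crontab entry on the primary IMAP server
*/5 * * * *   rsync -a --delete /var/spool/imap/ backup.example.net:/var/spool/imap/
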


Peter







Re: postfix clustering

2010-11-01 Thread Stan Hoeppner
Peter put forth on 11/1/2010 6:51 PM:
> Hi Stan,
> 
>> 1.  What are your specific failure concerns with your
>> primary site?
>> Network failure?  Host failure?  Storage hardware
>> failure?
> 
> You have a great suggestion assuming the data center functions well.
> 
> the data center primary site failure means that the data center
> itself failed, meaning the host machine/storage/network are all 
> in-accessible. 

You seem to be looking at this from a macro point of view.  For an
entire datacenter to "fail" you're looking at something like a natural
disaster (hurricane, earthquake, tornado, flood, lightning) destroying
the facility, or all power and comm lines into it.  The probability of
these things is very low, assuming the datacenter was located, designed,
and constructed properly.

I suggest you need to look at disaster avoidance and recovery from a
micro point of view.  Failures at the micro level are far more common,
and less expensive to architect around and recover from.

>> Maybe a good question for you to ask of the members of this
>> list at this
>> point is:
>>
>> How many OPs here run with a multi site IMAP cluster setup
>> with a
>> physically distributed mail store, either via replication
>> or a cluster
>> filesystem over a wide area network?
>>
> 
> That is a good question. It is something I am looking for.

Then simply start a new thread and ask the question of the members of
this list.  The answers you get should be very instructive.  My guess is
that very very few OPs here are doing what you're attempting to do, and
yet they have great reliability.  I may be all wrong.  Ask the list and
get consensus on how others approach disaster avoidance and recover of
their IMAP stores.

> however, if not going for expensive cluster filesystem over a wide area 
> network,
> one simple way is to rsync  every 5 minutes to copy over a backup serer in 
> another data center and a quick
> dns change if the primary data center failed. The TTL in DNS settings can be 
> 5 minutes.

This is not automatic fail over.  If you're going to bother with a
remote hot site, you should have automatic fail over of the mailbox server.

Again, I ask you, is your primary site so prone to failure that you
_need_ a remote site?  Let me guess:  You have already sold your
superiors on the idea of a remote hot site, and now you're trying to
figure out how to implement it?

If this is the case I'm wasting my breath and you are wasting the list's
time.  A technical engineer identifies a problem and then finds and
implements the proper solution.  He doesn't pick a solution from a
magazine article or blog that sounds neat, to a problem that may or may
not exist at his organization, and then drive to implement it due to
personal desires instead of operational needs.

A hot backup site is actually relatively rare.  Few organizations
implement this strategy.  In the vast majority of cases the cost of such
an architecture (hardware, comms links, testing, admin time, etc)
outweighs the benefit due to reliability of the primary site and thus
the fact the hot site is rarely if ever used.

Ask the members of this list how many do IMAP store replication/fail
over to a remote site.

-- 
Stan