exceptions for smtpd_end_of_data_restrictions

2012-08-22 Thread anant

Dear List,

I have this in my main.cf

smtpd_end_of_data_restrictions =
  check_policy_service inet:127.0.0.1:9998


This basically checks the mail size and either allows or rejects a mail
based on the contents of a file.
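
For illustration only, a minimal sketch of what such a policy service might
look like, speaking the Postfix policy delegation protocol on 127.0.0.1:9998;
the limit file path and the reject text are assumptions, not the actual script:

#!/usr/bin/env python3
# Minimal sketch of a Postfix policy service that rejects messages
# larger than a byte limit read from a file (path is hypothetical).
import socketserver

LIMIT_FILE = "/etc/postfix/size_limit"

class PolicyHandler(socketserver.StreamRequestHandler):
    def handle(self):
        attrs = {}
        for raw in self.rfile:
            line = raw.decode("ascii", "replace").strip()
            if not line:          # an empty line ends one policy request
                break
            name, _, value = line.partition("=")
            attrs[name] = value
        try:
            limit = int(open(LIMIT_FILE).read().strip())
        except (OSError, ValueError):
            limit = None          # no usable limit file: accept
        size = int(attrs.get("size", "0") or "0")
        if limit is not None and size > limit:
            action = "REJECT message too large"
        else:
            action = "DUNNO"      # leave the decision to later restrictions
        self.wfile.write(("action=%s\n\n" % action).encode("ascii"))

if __name__ == "__main__":
    socketserver.ThreadingTCPServer(
        ("127.0.0.1", 9998), PolicyHandler).serve_forever()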


Is there a way to skip this policy service based on some headers of a mail?


Regards,
Anant.




Re: ..::Rbl not working::..

2012-08-22 Thread DN Singh
On Tue, Aug 21, 2012 at 7:52 PM, /dev/rob0  wrote:

> On Tue, Aug 21, 2012 at 09:03:47AM -0500,
>Alfonso Alejandro Reyes Jiménez wrote:
> > I've got Postfix working great, but I can't make the RBL work. I have
> > the configuration, but when I test it, it seems not to be working.
> >
> > I'm testing with http://www.crynwr.com/spam/. Spamhaus has that IP
> > address listed, but I'm still getting those emails.
> >
> > Here's the postconf -n result:
> >
> > [root@mail ~]# postconf -n
>
> Irrelevant parts removed, possibly relevant lines here:
>
> > mynetworks = 127.0.0.0/8, 10.1.8.27/32, 10.1.8.23/32,
> > 172.16.18.101/32, 10.1.215.26/32
>
> > smtpd_recipient_restrictions = permit_mynetworks,
> >     permit_sasl_authenticated,
> >     reject_rbl_client zen.spamhaus.org,
> >     reject_rhsbl_sender dsn.rfc-ignorant.org,
> >     reject_unauth_destination
>
> > Any ideas? Thanks in advance for your help.
>
> You neglected to show the logs of the acceptance of the crynwr.com
> test mail.
>
> Nevertheless, I do have a WAG for you. Test your server's ability to
> resolve records in zen.spamhaus.org.
>
> [alfonso@mail ~]$ dig 2.0.0.127.zen.spamhaus.org. any
>
> You should see among the output:
>
> ;; ANSWER SECTION:
> 2.0.0.127.zen.spamhaus.org. 300 IN  TXT "http://www.spamhaus.org/query/bl?ip=127.0.0.2"
> 2.0.0.127.zen.spamhaus.org. 300 IN  TXT "http://www.spamhaus.org/sbl/query/SBL233"
> 2.0.0.127.zen.spamhaus.org. 300 IN  A   127.0.0.4
> 2.0.0.127.zen.spamhaus.org. 300 IN  A   127.0.0.10
> 2.0.0.127.zen.spamhaus.org. 300 IN  A   127.0.0.2
>
> If you're using a nameserver external to you, such as Google Public
> DNS or any ISP's resolver, there is a very good chance that Spamhaus
> is blocking your queries.
>
> If my guess is right, you can possibly fix it by installing and using
> your own local caching resolver, i.e., BIND named(8) or other
> implementation of DNS recursion. Offer void where taxed or
> restricted, or if your number of queries puts you in excess of
> Spamhaus maximum allowed. (In that case, see about their paid
> service; well worth the small expense per mailbox.)
> --
>   http://rob0.nodns4.us/ -- system administration and consulting
>   Offlist GMX mail is seen only if "/dev/rob0" is in the Subject:
>


I never realized that I had this issue too. But, after running the tests,
I found out that my queries were indeed blocked by spamhaus.

So, I changed the servers as pointed out and bingo, spam was successfully
being blocked.

Thanks /dev/rob0


Re: exceptions for smtpd_end_of_data_restrictions

2012-08-22 Thread Wietse Venema
an...@isac.gov.in:
> Is there a way to skip this policy service based on some headers of a mail?

That would be a bad mistake. Headers are too easy to spoof.

Wietse


Re: exceptions for smtpd_end_of_data_restrictions

2012-08-22 Thread Noel Jones
On 8/22/2012 2:14 AM, an...@isac.gov.in wrote:
> Dear List,
> 
> I have this in my main.cf
> 
> smtpd_end_of_data_restrictions =
>   check_policy_service inet:127.0.0.1:9998
> 
> 
> This basically checks the mail size and either allows or rejects a mail
> based on the contents of a file.
> 
> Is there a way to skip this policy service based on some headers of a mail?
> 


You can skip the policy based on envelope information by using a
check_*_access map before the policy check.  You could also likely
do this inside the policy server itself.

You cannot skip it based on headers.
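
A minimal sketch of the envelope-based exception (the map name and the
exempted address below are made-up examples, not part of the original setup):

  # main.cf
  smtpd_end_of_data_restrictions =
      check_sender_access hash:/etc/postfix/eod_policy_exceptions
      check_policy_service inet:127.0.0.1:9998

  # /etc/postfix/eod_policy_exceptions
  # Senders listed here get OK, which ends the end-of-data restriction
  # list early and therefore bypasses the policy service; everything
  # else falls through to check_policy_service.
  trusted-sender@example.com    OK

  # build the map and reload
  postmap /etc/postfix/eod_policy_exceptions
  postfix reload

check_client_access or check_recipient_access would work the same way if the
exception should be keyed on client or recipient instead of sender.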



  -- Noel Jones


Re: ..::Rbl not working::..

2012-08-22 Thread /dev/rob0
On Wed, Aug 22, 2012 at 01:23:12PM +0530, DN Singh wrote:
> I never realized that I had this issue too. But, after running
> the tests, I found out that my queries were indeed blocked by
> spamhaus.
> 
> So, I changed the servers as pointed out and bingo, spam was 
> successfully being blocked.
> 
> Thanks /dev/rob0

Hehe, glad to hear it! And best of all, from someone with the 
initials, "DNS"! ;)

Many people think that using forwarders rather than recursion is 
somehow more effective or "net-green" (conserving the network 
resources provided by others). It's really not, and it carries an 
added risk of external cache poisoning.

If you query a record from a forwarder, and the forwarder has it 
cached, yes, you get a quick response from said forwarder. But you 
get a cached record, which means the TTL is ticking away. You get, on 
average, half the published TTL, which means you'll do, on average, 
twice the number of queries.

If you query a record from a forwarder, and the forwarder does NOT 
have it cached, you have introduced extra latency in getting your 
reply whilst the forwarder recurses. (But you end up with the full 
TTL minus the latency.)

Google Public DNS seems to look up records again before the TTL 
expires in their cache, so you are indeed likely to see a slight 
improvement in your DNS response time without the doubling of your 
external queries, when using their service. But is that in any way 
something you could call "net-green"? Since they're ignoring the 
published TTL, I think not.

Other benefits of running your own nameserver, not to be overlooked: 

1. You're shielded from the impact of decisions of greedy business 
types who don't understand DNS. Every so often one of them gets the 
idea to replace NXDOMAIN responses with an IP address pointing to 
their own web server. For a mail server doing DNSBL/DNSWL lookups, 
the result of that can only be a disaster. And it can happen at any 
time. Lots of ISPs do this, and they usually won't warn you in 
advance of such a change.

2. You are in control of your own DNSSEC policy. You can strictly 
validate all signatures, you can allow expired signatures, or you can 
choose to ignore DNSSEC altogether. If a zone you know exists 
suddenly comes up as SERVFAIL, you know what to check. Conversely, if 
DNS for a signed zone is hijacked while you are checking signatures, 
you are not going to fall for the bogus data.

3. You control your own cache. If you are aware of cached data being 
wrong, you can flush that data and move ahead; whereas you cannot 
flush your forwarder, and you have to wait for the TTL to expire. 
"Propagation" is a myth propagated by and for people who don't 
understand DNS.

I go for one nameserver per site, or at a bigger site, maybe two.
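
A minimal sketch of such a caching resolver with BIND, assuming a typical
named.conf layout (the directory path below is an assumption):

  options {
      directory "/var/named";
      listen-on { 127.0.0.1; };     // answer local queries only
      allow-query { localhost; };
      recursion yes;                // full recursion; no forwarders
  };

and point the mail host's /etc/resolv.conf at it:

  nameserver 127.0.0.1
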
-- 
  http://rob0.nodns4.us/ -- system administration and consulting
  Offlist GMX mail is seen only if "/dev/rob0" is in the Subject:


Re: ..::Rbl not working::..

2012-08-22 Thread Wietse Venema
/dev/rob0:
> Google Public DNS seems to look up records again before the TTL 
> expires in their cache, so you are indeed likely to see a slight 

There is an article that shows that different resolvers report
TTL values in different ways. 

Begin quote:

For a record initially served with a TTL equal to N by authoritative servers:

  * Google DNS serves it with a TTL in the interval [0, N-1]
  * dnscache is serving it with a TTL in the interval [0, N]
  * Unbound serves it with a TTL in the interval [0, N]
  * Bind serves it with a TTL in the interval [1, N]
  * PowerDNS Recursor always serves it with a TTL of N

End quote.

http://00f.net/2011/11/17/how-long-does-a-dns-ttl-last/
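
A quick way to see this for yourself is to repeat a query against a given
resolver and watch the TTL column count down (or not), using the same test
name from earlier in the thread:

  dig +noall +answer 2.0.0.127.zen.spamhaus.org a @127.0.0.1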

Wietse


Re: ..::Rbl not working::..

2012-08-22 Thread Jamie Paul Griffin
[ /dev/rob0 wrote on Wed 22.Aug'12 at  8:47:06 -0500 ]

> On Wed, Aug 22, 2012 at 01:23:12PM +0530, DN Singh wrote:
> > I never realized that I had this issue too. But, after running
> > the tests, I found out that my queries were indeed blocked by
> > spamhaus.
> > 
> > So, I changed the servers as pointed out and bingo, spam was 
> > successfully being blocked.
> > 
> > Thanks /dev/rob0
 
> Hehe, glad to hear it! And best of all, from someone with the 
> initials, "DNS"! ;)
> 
> Many people think that using forwarders rather than recursion is 
> somehow more effective or "net-green" (conserving the network 
> resources provided by others). It's really not, and it carries an 
> added risk of external cache poisoning.
> 
> If you query a record from a forwarder, and the forwarder has it 
> cached, yes, you get a quick response from said forwarder. But you 
> get a cached record, which means the TTL is ticking away. You get, on 
> average, half the published TTL, which means you'll do, on average, 
> twice the number of queries.
> 
> If you query a record from a forwarder, and the forwarder does NOT 
> have it cached, you have introduced extra latency in getting your 
> reply whilst the forwarder recurses. (But you end up with the full 
> TTL minus the latency.)
> 
> Google Public DNS seems to look up records again before the TTL 
> expires in their cache, so you are indeed likely to see a slight 
> improvement in your DNS response time without the doubling of your 
> external queries, when using their service. But is that in any way 
> something you could call "net-green"? Since they're ignoring the 
> published TTL, I think not.
> 
> Other benefits of running your own nameserver, not to be overlooked: 
> 
> 1. You're shielded from the impact of decisions of greedy business 
> types who don't understand DNS. Every so often one of them gets the 
> idea to replace NXDOMAIN responses with an IP address pointing to 
> their own web server. For a mail server doing DNSBL/DNSWL lookups, 
> the result of that can only be a disaster. And it can happen at any 
> time. Lots of ISPs do this, and they usually won't warn you in 
> advance of such a change.
> 
> 2. You are in control of your own DNSSEC policy. You can strictly 
> validate all signatures, you can allow expired signatures, or you can 
> choose to ignore DNSSEC altogether. If a zone you know exists 
> suddenly comes up as SERVFAIL, you know what to check. Conversely, if 
> DNS for a signed zone is hijacked while you are checking signatures, 
> you are not going to fall for the bogus data.
> 
> 3. You control your own cache. If you are aware of cached data being 
> wrong, you can flush that data and move ahead; whereas you cannot 
> flush your forwarder, and you have to wait for the TTL to expire. 
> "Propagation" is a myth propagated by and for people who don't 
> understand DNS.

I experienced this issue earlier this week when I was setting up my DNS server 
on my FreeBSD machine.

The FreeBSD Handbook article[1] kind of implied (at least to me) that using a
forwarder was a good idea; however, it was immediately clear after setting it
up that it wasn't doing me any favours.

The article itself is good and worth reading if you're setting up your own
DNS server for the first time, but everything /dev/rob0 has advised rings
true for me. My local network has run much more smoothly since I set up
BIND/named on my FreeBSD server.

1 - http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/network-dns.html


Re: [SPAM] Re: The ultimate email server

2012-08-22 Thread Terry Barnum

On Aug 21, 2012, at 2:22 PM, Daniele Nicolodi wrote:

> On 21/08/2012 19:34, Mikkel Bang wrote:
>> Thanks a lot everyone! After thinking long and hard about all your
>> advice I finally ended up with:
>> 
>> OpenBSD + postfix-anti-UCE.txt + undeadly's spamd setup (which
>> includes greylisting+greytrapping) + dspam: https://gist.github.com/3417519
>> 
>> Feedback would be much appreciated.
> 
> Am I missing something, or in this setup is dspam not used to reject spam
> but only to classify messages? I would like to give dspam a try, but the
> documentation gives no hints on how to use dspam to reject spam, and this
> is an important requirement, IMHO.
> 
> Cheers,
> Daniele

Daniele, it's easy to configure dspam to classify/tag only.

Preference "spamAction=tag" # { quarantine | tag | deliver } -> 
default:quarantine

http://dspam.git.sourceforge.net/git/gitweb.cgi?p=dspam/dspam;a=blob_plain;f=README;hb=HEAD

-Terry



Re: [SPAM] Re: The ultimate email server

2012-08-22 Thread Daniele Nicolodi
On 22/08/2012 18:47, Terry Barnum wrote:
> 
> On Aug 21, 2012, at 2:22 PM, Daniele Nicolodi wrote:
> 
>> On 21/08/2012 19:34, Mikkel Bang wrote:
>>> Thanks a lot everyone! After thinking long and hard about all your
>>> advice I finally ended up with:
>>>
>>> OpenBSD + postfix-anti-UCE.txt + undeadly's spamd setup (which
>>> includes greylisting+greytrapping) + dspam: https://gist.github.com/3417519
>>>
>>> Feedback would be much appreciated.
>>
>> Am I missing something, or in this setup is dspam not used to reject spam
>> but only to classify messages? I would like to give dspam a try, but the
>> documentation gives no hints on how to use dspam to reject spam, and this
>> is an important requirement, IMHO.
>>
>> Cheers,
>> Daniele
> 
> Daniele, it's easy to configure dspam to classify/tag only.
> 
> Preference "spamAction=tag"   # { quarantine | tag | deliver } -> 
> default:quarantine
> 
> http://dspam.git.sourceforge.net/git/gitweb.cgi?p=dspam/dspam;a=blob_plain;f=README;hb=HEAD

You didn't understand my comment. I want spam to be rejected at the SMTP
transaction level. I don't want it to be stored anywhere, but of course
I want the sender to be notified that the message didn't go through.

Looks like this is not possible with dspam alone. Googling, the only
proposed solution I found is to use an SMTP proxy which integrates dspam.

Cheers,
Daniele



Re: [SPAM] Re: The ultimate email server

2012-08-22 Thread Ralf Hildebrandt
* Daniele Nicolodi :

> Looks like this is not possible with dspam alone. Googling, the only
> proposed solution I found is to use an SMTP proxy which integrates dspam.

Yeah, like amavisd
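
A minimal sketch of wiring such a before-queue proxy into Postfix master.cf,
so the verdict is given while the SMTP client is still connected (ports
10024/10025 and the amavisd side itself are assumptions; see
SMTPD_PROXY_README for the details):

  # master.cf
  smtp      inet  n       -       n       -       -       smtpd
      -o smtpd_proxy_filter=127.0.0.1:10024
  # re-injection listener the proxy uses to hand accepted mail back
  127.0.0.1:10025 inet n  -       n       -       -       smtpd
      -o smtpd_authorized_xforward_hosts=127.0.0.0/8
      -o smtpd_client_restrictions=permit_mynetworks,reject

With that in place, a reject from the content filter (amavisd calling dspam,
SpamAssassin, etc.) is returned to the sending client during the SMTP
transaction rather than after the mail has been queued.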

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de