think of is to run
spamc -c < mail
and parse the x.x/y.y output, get x.x and compare it to a value we read
from the database.
Wouldn't it be nicer to call
spamc -c --spam-threshold k.k < mail
and have spamc return 1 if x.x >= k.k (with k.k variable per user and !=
y.y)?
Is this a feature request you'd like to
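The interim approach described above (parse the score out of `spamc -c` output and compare it to a per-user threshold read from the database) can be sketched roughly like this; the sample output string and threshold value are assumptions for illustration, not real spamc behaviour:

```python
# Sketch: emulate the proposed --spam-threshold behaviour by parsing
# the "x.x/y.y" line that `spamc -c` prints on stdout.  The sample
# output and the per-user threshold below are illustrative only.

def is_spam(spamc_output: str, user_threshold: float) -> bool:
    """Return True if the spamc score meets the per-user threshold."""
    score_part = spamc_output.strip().split("/")[0]  # "x.x" from "x.x/y.y"
    return float(score_part) >= user_threshold

# e.g. spamc -c printed "7.3/5.0" and this user's threshold is 6.0
print(is_spam("7.3/5.0", 6.0))  # True for this sample input
```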
On Sat, Oct 30, 2010 at 09:28:27PM +0100, Justin Mason wrote:
> btw, I think this is already possible using the shortcircuit plugin.
> Just use rule priorities to run the non-net rules first, and
> shortcircuit if they are sufficient.
Currently DNS queries are sent no matter what, but it's fixable.
btw, I think this is already possible using the shortcircuit plugin.
Just use rule priorities to run the non-net rules first, and
shortcircuit if they are sufficient.
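The shortcircuit approach Justin describes might look something like the following local.cf fragment; the rule choices and priority values are placeholders for illustration, not a tested configuration:

```
# Hypothetical local.cf sketch: give cheap local rules a low priority
# number (so they run earlier), then stop before network rules fire if
# the verdict is already certain.  Rule names and values are examples.
loadplugin Mail::SpamAssassin::Plugin::Shortcircuit

priority     BAYES_99      -500
shortcircuit BAYES_99      spam

priority     ALL_TRUSTED   -500
shortcircuit ALL_TRUSTED   ham
```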
On Sat, Oct 30, 2010 at 08:05, Henrik K wrote:
> On Sat, Oct 30, 2010 at 02:23:00AM -0400, dar...@chaosreigns.com wrote:
>> On 10/
On 30/10/2010 4:28 AM, Yet Another Ninja wrote:
rsync? to check mail?
Hrm, not a bad idea for the basis of a bayesian filter.
Daryl
On Sat, 30 Oct 2010 10:28:09 +0200
Yet Another Ninja wrote:
> On 2010-10-30 9:56, RW wrote:
> > On Sat, 30 Oct 2010 02:23:00 -0400
> > dar...@chaosreigns.com wrote:
> >
> >
> >> But the total amount of bandwidth and processing time saved on the
> >> internet from not running unnecessary tests o
On Sat, 30 Oct 2010 02:23:00 -0400
dar...@chaosreigns.com wrote:
> But the total amount of bandwidth and processing time saved on the
> internet from not running unnecessary tests on every instance of
> spamassassin seems worth doing.
You are also wasting resources by putting the round-trips on
On Sat, Oct 30, 2010 at 02:23:00AM -0400, dar...@chaosreigns.com wrote:
> On 10/30, Michael Parker wrote:
> > > I'd like to see spamassassin only run network tests when they might
> > > affect the outcome.
> >
> > Why?
>
> To reduce the network load on my server which is one of the hosts of the
>
On 10/30, Michael Parker wrote:
> > I'd like to see spamassassin only run network tests when they might
> > affect the outcome.
>
> Why?
To reduce the network load on my server which is one of the hosts of the
DNSWL.org list?
> Assuming a reasonably fast connection network checks are basically f
On Oct 29, 2010, at 8:42 PM, dar...@chaosreigns.com wrote:
> I'd like to see spamassassin only run network tests when they might
> affect the outcome.
Why?
Assuming a reasonably fast connection network checks are basically free.
They are kicked off at the start of a scan and the results are co
I'd like to see spamassassin only run network tests when they might
affect the outcome.
For example, if you run all non-network tests, and at that point an email's
score qualifies as spam, and then you run all the non-spam network tests
(hitting whitelists), and it still qualifies as spam, there's
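The staged evaluation being proposed could be sketched as follows; the function, its parameters, and all the numbers are invented for illustration, since real SpamAssassin would do this bookkeeping internally:

```python
# Sketch of "only run network tests when they might affect the outcome".
# local_score: sum of non-network rule hits already computed.
# max_negative_net: most negative total the network whitelists could add
# (a negative number); max_positive_net: most positive total the network
# blocklists could add.  All names and numbers are assumptions.

def network_tests_needed(local_score: float, threshold: float,
                         max_negative_net: float,
                         max_positive_net: float) -> bool:
    """Return False when no network result could change the verdict."""
    if local_score + max_negative_net >= threshold:
        return False  # spam even if every whitelist fires
    if local_score + max_positive_net < threshold:
        return False  # ham even if every blocklist fires
    return True

print(network_tests_needed(20.0, 5.0, -10.0, 15.0))  # False: spam either way
```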
Hi, everybody (but specially developers). I've been running a sitewide
Bayes setup for almost three years, with a wonderful result. Along
with that, I report spam messages to my local spamassassin setup (and
some to spamcop) via a web interface (embedded in our Webmail).
From the last training ru
I have a spf whitelisting cf with 100s of lines of
def_whitelist_from_spf [EMAIL PROTECTED]
Mainly I have all banks, mailing lists etc
The problem is maintaining this file: every time there is a new entry I
have to update the file and rsync it to all my nodes.
We could have a
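The kind of file being described is just a list of SPF-gated whitelist entries; a tiny illustrative fragment (the addresses here are invented placeholders, not a recommended list) might look like:

```
# Illustrative def_whitelist_from_spf entries; addresses are placeholders.
def_whitelist_from_spf *@example-bank.com
def_whitelist_from_spf *@lists.example.org
```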
On Thu, Aug 02, 2007 at 04:38:53PM +0100, neil wrote:
> I didn't see anything in the perldoc, but I have heard the idea
> somewhere. Is this possible? Is it a feature that the devs know about?
> Should I raise it as a feature request?
I'm still rather annoyed about this whole thin
is possible? Is it a feature that the devs know about?
Should I raise it as a feature request?
In FuzzyOcr.pm there is this code which makes me think it is likely,
but I'm not a Perl guru.
my $internal_score = 0;
my $current_score = $pms->get_score();
my $score = $con
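The Perl fragment above appears to read the running score from the per-message status object; the general pattern it hints at (consult the accumulated score before paying for an expensive test) can be sketched in Python, with every name and number here invented for illustration:

```python
# Illustrative sketch of the pattern behind $pms->get_score(): check
# the score accumulated so far before invoking a costly test.
# maybe_run_expensive_check and its numbers are assumptions.

def maybe_run_expensive_check(current_score: float, threshold: float,
                              run_expensive_check) -> float:
    """Only invoke the costly test while the verdict is still open."""
    if current_score >= threshold:
        return current_score  # already spam; skip the expensive work
    return current_score + run_expensive_check()

print(maybe_run_expensive_check(2.0, 5.0, lambda: 4.0))  # 6.0
```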
I have been using SA since ~2.60 and because I work for an ISP, I need
to be more tolerant than most with regards to handling email. With this
in mind I have made a few modifications to the STOCK version and have to
manually patch with every upgrade, so here are some of the modifications
I have ma
hi -- could you open these as *multiple* *separate* bugs on
http://issues.apache.org/SpamAssassin/ ? Some of them will be more
likely to get accepted than others. ;)
--j.
Jorge Valdes writes:
> Hi,
>
> I have been using SA since ~2.60 and because I work for an ISP, I need
> to be more toleran
Hi,
I have been using SA since ~2.60 and because I work for an ISP, I need
to be more tolerant than most with regards to handling email. With this
in mind I have made a few modifications to the STOCK version and have to
manually patch with every upgrade, so here are some of the modifications
On Sat, Oct 28, 2006 at 11:41:48AM -0400, Joe Flowers wrote:
> I'm not sure what you are talking about Theo. Sorry.
I was talking about SMTP.
> Below is an envelope example from a Spam email, at least an envelope
> from my mail system.
> ---
> D1161764311
> [EMAIL PROTECT
I'm not sure what you are talking about Theo. Sorry.
Below is an envelope example from a Spam email, at least an envelope
from my mail system.
I assume the "[EMAIL PROTECTED]" is useful to SA.
Is there a better form I can put this in before pre-pending to the
message body? Also, currently, the
Mark Martinec wrote:
If scanning at the MTA level with amavisd-new, a synthetic Return-Path
is prepended to a copy of a message that is given to SA for examination.
Much like David B Funk says a sendmail-SA-milter does.
MIMEDefang does this as well. It synthesizes Received and Return-Path
whe
On Wed, Oct 25, 2006 at 02:35:07PM -0400, Joe Flowers wrote:
> If I pre-pend a message's Envelope to its Body, can SpamAssassin do
> anything useful with it?
It depends what you mean by "a message's envelope". If you mean add in
standard headers for MAIL FROM and RCPT TO, then sure, go ahead an
> > For envelope sender there is a standard header: Return-Path
>
> Return-Path is supposed to be added when the message is placed in the
> mailstore (ie, last hop, after the transfer network). Since I do scanning
> at the MTA level before delivery, I don't have Return-Path yet.
If scanning at the
On 10/25/2006 5:46 PM, Ken A wrote:
> It should be mentioned that envelope To: is not there for a reason. :-(
> Including it in the header will remove the privacy enabled by Bcc
This is true--BCC will be made entirely pointless if the envelope
recipients are irreversibly pasted into the message
On 10/25/2006 7:15 PM, Mark Martinec wrote:
> For envelope sender there is a standard header: Return-Path
Return-Path is supposed to be added when the message is placed in the
mailstore (ie, last hop, after the transfer network). Since I do scanning
at the MTA level before delivery, I don't have
David B Funk wrote:
When the milter is passing the message to spamd, it is easy to add
synthesized headers (such as 'Return-Path:' & 'X-Envelope-To:') to pass
envelope addresses to SA (that's what I did with the milter that I use).
Still, pre-pending is 10x easier than inserting.
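Synthesizing envelope headers before handing the message to SA, as the milters described above do, could look roughly like this with Python's email module; the raw message and envelope addresses are sample data:

```python
# Sketch: add synthetic Return-Path and X-Envelope-To headers to a raw
# message before passing it to spamd, as a milter might.  The message
# and envelope addresses below are sample data.
from email import message_from_string

raw = "Subject: test\n\nbody\n"
envelope_from = "sender@example.com"   # MAIL FROM (sample)
envelope_to = "rcpt@example.net"       # RCPT TO (sample)

msg = message_from_string(raw)
msg["Return-Path"] = f"<{envelope_from}>"
msg["X-Envelope-To"] = envelope_to

print(msg["Return-Path"])  # <sender@example.com>
```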
On Wed, 25 Oct 2006, Joe Flowers wrote:
> Ken A wrote:
> > It should be mentioned that envelope To: is not there for a reason.
> > :-( Including it in the header will remove the privacy enabled by Bcc,
> > so if you have privacy considerations to worry about, you might think
> > twice.
>
> I pre-p
Ken A wrote:
It should be mentioned that envelope To: is not there for a reason.
:-( Including it in the header will remove the privacy enabled by Bcc,
so if you have privacy considerations to worry about, you might think
twice.
I pre-pend the envelope to a copy of the message and then send
Eric A. Hall wrote:
> Other possibilities exist too. Envelope sender can be used for some SPF
> filters that aren't currently done, for example.
> The first problem is that there is no standard header field, and in the
> case of envelope recipient(s) where there can be multiple entries, there
> is
Eric A. Hall wrote:
On 10/25/2006 2:35 PM, Joe Flowers wrote:
If I pre-pend a message's Envelope to its Body, can SpamAssassin do
anything useful with it?
At a minimum you can use the envelope recipient(s) to do some kinds of
spam-trap filtering (eg, is the message addressed to a spamtrap
On 10/25/2006 2:35 PM, Joe Flowers wrote:
> If I pre-pend a message's Envelope to its Body, can SpamAssassin do
> anything useful with it?
At a minimum you can use the envelope recipient(s) to do some kinds of
spam-trap filtering (eg, is the message addressed to a spamtrap and me).
You can use
Hey guys,
If I pre-pend a message's Envelope to its Body, can SpamAssassin do
anything useful with it?
Joe
"jdow" wrote:
>> However,
>> if a message came from a client who gave SMTP-AUTH, it ought to be
>> "trusted" (and not subjected to the blacklist checks).
>
> Would you care to expound on your theory here. What makes you think
> a valid SPF is a sign of a good guy?
SMTP authentication has
On Aug 24, 2005, at 5:09 PM, Justin Mason wrote:
I think we've already implemented that in 3.1.0. ;)
I just love it when I request a feature that's already in the current
release candidate.
Thanks muchly :-)
t who gave SMTP-AUTH, it ought to be
"trusted" (and not subjected to the blacklist checks). And that's
what my feature request boils down to:
Would you care to expound on your theory here. What makes you think
a valid SPF is a sign of a good guy?
What makes you think SPF was in any
sted" (and not subjected to the blacklist checks). And that's what
my feature request boils down to:
Would you care to expound on your theory here. What makes you think
a valid SPF is a sign of a good guy? Spammers can SPF their own
messages. All it does is cut down on bot spam, a very li
ought to be
> "trusted" (and not subjected to the blacklist checks). And that's what
> my feature request boils down to:
>
> If the message was authenticated on the most immediate relay, then give
> a configuration option which says "trust this message as thoug
That sounds odd, doesn't it? "dynamic trusted_networks". The whole
point of a trusted network is that it's a specific network. However,
if a message came from a client who gave SMTP-AUTH, it ought to be
"trusted" (and not subjected to the blacklist checks)
On Thursday, December 9, 2004, 8:14:16 AM, Larry Rosenbaum wrote:
> By the way, if you have a message that's been forwarded in such a way
> that the original recipient addresses become part of the message text,
> the URI extraction code will extract these too. Therefore, if you get
> one of those
> -Original Message-
> From: Jeff Chan [mailto:[EMAIL PROTECTED]
> Posted At: Wednesday, December 08, 2004 8:45 PM
> Posted To: sa-users
> Conversation: Feature Request: Whitelist_DNSRBL
> Subject: Re: Feature Request: Whitelist_DNSRBL
>
> On Wednesday, Dece
On Wednesday, December 8, 2004, 11:41:41 PM, hamann w wrote:
>>> How about a way to use wildcards with uridnsbl_skip_domain? I'd like to
>>> be able to tell the SURBL code not to look up
>>>
>>> *.gov
>>> *.mil
>>> *.edu
>>> and even *.??.us
>>>
>>> since these are unlikely to be hosting spammer
>> How about a way to use wildcards with uridnsbl_skip_domain? I'd like to
>> be able to tell the SURBL code not to look up
>>
>> *.gov
>> *.mil
>> *.edu
>> and even *.??.us
>>
>> since these are unlikely to be hosting spammer web pages.
>>
>> Larry
>>
>>
Hi,
I have received obscure web tra
On Wednesday, December 8, 2004, 7:25:31 PM, Rob McEwen wrote:
> 1st, I'm not a SpamAssassin user. In fact, none of your particular
> suggestions (so far) regarding local whitelisting will be benefit me.
OK, that's fine, but please choose a parent zone you control
if you want to set up a subdomain.
Jeff Chan wrote:
> On Wednesday, December 8, 2004, 9:06:26 AM, Daryl O'Shea wrote:
>>It doesn't cause more lookups for anyone. A local white list file would
>>reduce lookups at the expense of process size (and time if the white
>>list is very large).
>
>
> The SA developers chose an appropriately
On Wednesday, December 8, 2004, 9:49:55 AM, Daryl O'Shea wrote:
> Additionally, assuming there isn't an extreme query frequency drop off
> after the top 100 or 200 excluded domains, it would be nice to have
> access to the rest of the exclusion list which wouldn't be realistic to
> be storing (an
On Wednesday, December 8, 2004, 9:21:37 AM, Chris Santerre wrote:
> My whole idea was skipping the lookup entirely. Why would you want to do a
> lookup for google even if it is cached?
Yep it's a good idea. Which is why we're already doing it. ;-)
Jeff C.
--
Jeff Chan
mailto:[EMAIL PROTECTED]
On Wednesday, December 8, 2004, 9:06:26 AM, Daryl O'Shea wrote:
> Bill Landry wrote:
> >> From: "Chris Santerre" <[EMAIL PROTECTED]>
> >>
> >> Well we have talked about it and didn't come up with a solid
> >> answer. The idea would cause more lookups and time for those who
> >> don't cac
On Wednesday, December 8, 2004, 9:07:44 AM, Chris Santerre wrote:
> Actually I was only saying to list the top look ups from the whitelist, not
> the 66,500. That is more of a research and exclusion tool. So no more then
> 200-300 domains. Check it every month for changes and update.
This is alre
This is a forwarded message
From: Jeff Chan <[EMAIL PROTECTED]>
To: "Rob McEwen (PowerView Systems)" <[EMAIL PROTECTED]>
Date: Wednesday, December 8, 2004, 4:13:32 PM
Subject: [SURBL-Discuss] Feature Request: Whitelist_DNSRBL
===8<==Original message text===
On Wednesday, December 8, 2004, 8:33:11 AM, Bill Landry wrote:
> Actually, I was thinking of the whitelist that Jeff has already compiled at
> http://spamcheck.freeapp.net/whitelist-domains.sort (currently over 66,500
> whitelisted domains). If you set a long TTL on the query responses, it
> would
On Wednesday, December 8, 2004, 8:15:28 AM, David Hooton wrote:
> The flaw in offering a DNS-based whitelist is that it encourages
> people to place a negative score on it. The problem with this is that
> spammers can poison messages with whitelisted domains, thereby
> bypassing the power of the
On Wednesday, December 8, 2004, 8:15:49 AM, Chris Santerre wrote:
> The idea [of a whitelist DNS list] would cause more lookups and
> time for those who don't cache dns.
That's another excellent argument. Barring caching, which not
all resolvers do, why do a gazillion DNS lookups on yahoo.com,
w3
On Wednesday, December 8, 2004, 8:03:35 AM, Bill Landry wrote:
> - Original Message -
> From: "Daryl C. W. O'Shea" <[EMAIL PROTECTED]>
>> >> Was the whitelist you were referring to really the SURBL server-side
>> whitelist?
>> >
>> >
>> > Yes! But local SURBL whitelists are needed to
On Wednesday, December 8, 2004, 8:47:18 AM, Larry Rosenbaum wrote:
> How about a way to use wildcards with uridnsbl_skip_domain? I'd like to
> be able to tell the SURBL code not to look up
> *.gov
> *.mil
> *.edu
> and even *.??.us
> since these are unlikely to be hosting spammer web pages.
Tru
At 10:58 AM 12/8/2004, Michael Barnes wrote:
> Um. They are?? AFAIK there are absolutely no whitelists to the DNSRBLs in
> SA itself.
I'm not sure if DNSRBLs are the same as URIDNSBLs, or if this was the
intent of the original poster
It was a mistake on Chris's part, and he replied as such.
As for
Chris Santerre wrote:
Assuming that this whitelist would be used to LOWER the score of an email,
and not just exclude them from SURBL. Then we would go through even
more research before we whitelist a domain. There is a LOT of work that goes
into adding a domain to our whitelist, and that is JUST for e
>
> >> We do have a whitelist that our private research tools do
>poll. The
> >> idea is that if it isn't in SURBL then it is white.
> >>
> >> This also puts more work to the already overworked contributors. ;)
>
>
>How so? The lookup code is already compatible as is, it's
>just a matter
>of
Bill Landry wrote:
>> From: "Chris Santerre" <[EMAIL PROTECTED]>
>>
>> Well we have talked about it and didn't come up with a solid
>> answer. The idea would cause more lookups and time for those who
>> don't cache dns.
It doesn't cause more lookups for anyone. A local white list file would
>-Original Message-
>From: Rosenbaum, Larry M. [mailto:[EMAIL PROTECTED]
>Sent: Wednesday, December 08, 2004 11:47 AM
>To: users@spamassassin.apache.org
>Subject: RE: Feature Request: Whitelist_DNSRBL
>
>
>How about a way to use wildcards with uridnsbl_skip_do
How about a way to use wildcards with uridnsbl_skip_domain? I'd like to
be able to tell the SURBL code not to look up
*.gov
*.mil
*.edu
and even *.??.us
since these are unlikely to be hosting spammer web pages.
Larry
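Wildcard matching of the sort Larry asks for could be done with ordinary glob patterns; a sketch of a hypothetical wildcard-capable skip check (the pattern list mirrors his examples):

```python
# Sketch: glob-style matching for a hypothetical wildcard-capable
# uridnsbl_skip_domain.  Patterns mirror the examples in the request.
from fnmatch import fnmatch

SKIP_PATTERNS = ["*.gov", "*.mil", "*.edu", "*.??.us"]

def skip_lookup(domain: str) -> bool:
    """True if the domain matches a skip pattern (no SURBL query)."""
    return any(fnmatch(domain, pat) for pat in SKIP_PATTERNS)

print(skip_lookup("whitehouse.gov"))  # True
print(skip_lookup("example.com"))     # False
```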
- Original Message -
From: "David Hooton" <[EMAIL PROTECTED]>
> On Wed, 8 Dec 2004 08:03:35 -0800, Bill Landry <[EMAIL PROTECTED]>
wrote:
> > I agree, and have suggested a whitelist SURBL several times on the SURBL
> discussion list, but it has always fallen on deaf ears - nary a
response.
- Original Message -
From: "Chris Santerre" <[EMAIL PROTECTED]>
> >-Original Message-
> >From: Bill Landry [mailto:[EMAIL PROTECTED]
> >Sent: Wednesday, December 08, 2004 11:04 AM
> >To: users@spamassassin.apache.org; [EMAIL PR
On Wed, 8 Dec 2004 08:03:35 -0800, Bill Landry <[EMAIL PROTECTED]> wrote:
> I agree, and have suggested a whitelist SURBL several times on the SURBL
> discussion list, but it has always fallen on deaf ears - nary a response.
> It would be nice if someone would at least respond as to why this is not
>-Original Message-
>From: Bill Landry [mailto:[EMAIL PROTECTED]
>Sent: Wednesday, December 08, 2004 11:04 AM
>To: users@spamassassin.apache.org; [EMAIL PROTECTED]
>Subject: Re: Feature Request: Whitelist_DNSRBL
>
>
>- Original Message -
>From: &
- Original Message -
From: "Daryl C. W. O'Shea" <[EMAIL PROTECTED]>
> >> Was the whitelist you were referring to really the SURBL server-side
> whitelist?
> >
> >
> > Yes! But local SURBL whitelists are needed to reduce traffic and time.
>
>
> I'd much rather see SURBL respond with 12
>> Thoughts, suggestions, or coffee?
>
>First, where's that coffee?
In my belly!
>then: I keep a .cf file with a quite a few lines like.
>
>uridnsbl_skip_domain ibill.com blabla.tld local-boobie-site.dom
Doh! It helps to RTFM I guess :) LOL at boobie-site!
Seeing as this feature is in place
On Wed, Dec 08, 2004 at 10:26:15AM -0500, Matt Kettler wrote:
> At 10:17 AM 12/8/2004 -0500, Chris Santerre wrote:
> >OK, we know that the popular domains like yahoo.com and such are hard coded
> >into SA to be skipped on DNSRBL lookups. But it would be great to have a
> >function to add more local
Chris Santerre wrote:
>> Was the whitelist you were referring to really the SURBL server-side
whitelist?
>
>
> Yes! But local SURBL whitelists are needed to reduce traffic and time.
I'd much rather see SURBL respond with 127.0.0.0 with a really large TTL
for white listed domains. Any sensible s
Chris Santerre wrote:
OK, we know that the popular domains like yahoo.com and such are hard coded
into SA to be skipped on DNSRBL lookups. But it would be great to have a
function to add more locally.
Thinking one step bigger, it would be even better to feed this a file. This
way maybe SURBL can
>
>>Thinking one step bigger, it would be even better to feed
>this a file. This
>>way maybe SURBL can create a file for the top hit legit
>domains. Then using
>>SARE and RDJ, people could update that. This would reduce a
>lot of traffic
>>and time.
>
>Wait, now you're bringing SURBL into this.
At 10:17 AM 12/8/2004 -0500, Chris Santerre wrote:
OK, we know that the popular domains like yahoo.com and such are hard coded
into SA to be skipped on DNSRBL lookups. But it would be great to have a
function to add more locally.
Um. They are?? AFAIK there are absolutely no whitelists to the DNSRBL
OK, we know that the popular domains like yahoo.com and such are hard coded
into SA to be skipped on DNSRBL lookups. But it would be great to have a
function to add more locally.
Thinking one step bigger, it would be even better to feed this a file. This
way maybe SURBL can create a file for the
We consider the Bayes system as a detector of SPAM, which 'technically' it isn't.
What it reports is how close a given message is to one of two sets, given that
it has been previously shown examples of each of the two
sets.
Because this is the case, I'm thinking it should be possible to us
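The "closeness to one of two sets" view can be illustrated with a toy two-class token-frequency comparison; the training counts below are invented, and real Bayes combines probabilities far more carefully:

```python
# Toy illustration of "how close a message is to one of two sets":
# compare per-token frequencies seen in spam vs ham training sets.
# Counts are invented for the example.
from math import log

spam_counts = {"viagra": 40, "free": 30, "meeting": 1}
ham_counts = {"viagra": 1, "free": 10, "meeting": 50}

def spam_log_ratio(tokens):
    """Positive result leans toward the spam set, negative toward ham."""
    score = 0.0
    for t in tokens:
        s = spam_counts.get(t, 1)   # floor unseen tokens at 1
        h = ham_counts.get(t, 1)
        score += log(s / h)
    return score

print(spam_log_ratio(["free", "viagra"]) > 0)  # True: closer to spam set
print(spam_log_ratio(["meeting"]) < 0)         # True: closer to ham set
```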
On Wed, 22 Sep 2004, Ivan Histand wrote:
Hi... first I'd like to congratulate the SA team for a job well done with
the 3.0 release. I've been using the program for a couple years now with
excellent success.
I've been thinking about a possible improvement to SA. Currently the
classification of mai
Hi... first I'd like to congratulate the SA team for a job well done with
the 3.0 release. I've been using the program for a couple years now with
excellent success.
I've been thinking about a possible improvement to SA. Currently the
classification of mail is pretty much black and white. Yes,