Bot reporting - best procedure?

2010-11-16 Thread Simon Waters
Sure it is something I should know, but I keep hitting dead ends.

What is current state on botnet reporting procedures?

A minor irritation currently, but a clearly well-resourced botnet is pestering 
one of our services. Only a couple of thousand IP addresses are in use, but I'd 
like to mop up as much of it as possible whilst it is only an irritation, 
since presumably the difference between an irritation and being off the 
Internet is only one command.

Lots of botnet-related resources seem to have vanished from the net, or be in 
poor repair.

RIPE provide an API for abuse address lookup, so a potential solution exists 
for automation. But I figure someone else will have written some scripts or 
interfaces to save me messing it up and landing hundreds of abuse desks with 
useless information.
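For what it's worth, RIPEstat exposes an abuse-contact lookup over HTTP. A 
minimal sketch, assuming the `abuse-contact-finder` endpoint and its JSON 
layout (check the current RIPEstat documentation before relying on either):

```python
import json
import urllib.request

# Assumed RIPEstat endpoint; substitute the documented URL if it differs.
RIPESTAT_URL = "https://stat.ripe.net/data/abuse-contact-finder/data.json?resource={ip}"

def extract_contacts(payload):
    """Pull the abuse mailbox list out of a RIPEstat-style JSON response."""
    return payload.get("data", {}).get("abuse_contacts", [])

def abuse_contacts(ip):
    """Query RIPEstat for the abuse contacts of a single IP address."""
    with urllib.request.urlopen(RIPESTAT_URL.format(ip=ip), timeout=10) as resp:
        return extract_contacts(json.load(resp))
```

Batching a couple of thousand addresses through something like this, and 
deduplicating by contact before mailing, would at least avoid hitting the 
same abuse desk once per bot.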



Re: wikileaks dns (was Re: Blocking International DNS)

2010-12-03 Thread Simon Waters
On Friday 03 December 2010 13:22:19 Frank Bulk wrote:
> I guess the USG's cyberwar program does work (very dryly said).

They missed ;)

http://wikileaks.ch
http://twitter.com/wikileaks





Re: (wikileaks) Fwd: [funsec] And Google becomes a DNS..

2010-12-06 Thread Simon Waters
On Sunday 05 December 2010 15:50:32 Gadi Evron wrote:
>
> I withhold comment... "discuss amongst yourselves".

Since it is an uncommon but occasional complaint that someone's site is indexed 
in Google by IP address rather than domain name, I simply assume that since 
wikileaks were redirecting to URLs with IP addresses in them, Google assumed 
this is what they wanted indexed.

I share their pain: we had a disk problem and a botnet issue with one of our 
sites, and Google's contribution was to drop our ranking (presumably a speed 
penalty, because the site was now slower and less reliable than normal).

Frustrating, but Google now reflects the reality of the web experience. 
Wikileaks are "lucky" not to have a speed penalty; or perhaps they do, but 
they are still ranked 1 for the term "wikileaks" even with the relevant 
penalties.

I dare say in a few iterations Google will spot DDoS attacks, and other forms 
of abuse, and bump up your ranking on the basis you are clearly notable 
enough to attract that sort of attention.



Re: Cloud proof of failure - was:: wikileaks unreachable

2010-12-06 Thread Simon Waters
On Monday 06 December 2010 09:47:43 Jay Mitchell wrote:
>
> "The Cloud" went down? I think not.

It did for at least one customer.

> Having ones account terminated as opposed to an outage caused by DDoS are
> two very different things.

Although not for all DNS providers.

There are operational lessons here, but they boil down to this: technical 
issues may not be the limiting factor on your uptime. As someone has already 
commented, perhaps it is time to review plans for responding to non-technical 
threats to availability.



Re: Abuse@ contacts

2010-12-07 Thread Simon Waters
> Or have had any luck with abuse@ contacts in
> the past? Who's good and who isn't?

http://www.rfc-ignorant.org/tools/submit_form.php?table=abuse



Re: Spamhaus under DDOS from AnonOps (Wikileaks.info)

2010-12-19 Thread Simon Waters
On 19/12/10 18:51, Paul Ferguson wrote:
> Not for nothing, but Spamhaus wasn't the only organization to warn about
> Heihachi:
>
> http://blog.trendmicro.com/wikileaks-in-a-dangerous-internet-neighborhood/

All the domains listed by Trend Micro as neighbours appear to be down.

Have to say as someone whose employer will buy and host a domain name if
you fill in the credit card details and the credit card company accept
them, if you listed only the sites we've cancelled first thing on a
Monday morning (or as soon as we are notified) we'd look pretty poor.

From the many adverse comments about the hosting services in use, they
look as bad as they come, but on the other hand this weakens the
usefulness of the Trend statement (at least to people who check what they
are told).

Were the sites up when the announcement was made?



Re: Internet to Tunisia

2011-01-11 Thread Simon Waters
On Tuesday 11 January 2011 14:58:51 Marshall Eubanks wrote:
>
> On twitter right now there are frequent claims that all https is blocked
> (presumably a port blocking).

A quick search pulls up:
http://www.cpj.org/internet/2011/01/tunisia-invades-censors-facebook-other-accounts.php

Since Gmail defaults to HTTPS, as do many other sites left to their own 
devices, an attacker has to try to force clients to use HTTP, or to start the 
conversation over HTTP (so that no one notices when the important bit isn't 
encrypted, or so that javascript from a third party can be injected).

NoScript for Firefox has a force HTTPS for a domain feature.
http://noscript.net/faq#qa6_3

But what clients really need is a way for servers to say "always use 
encryption".
http://noscript.net/faq#STS
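That "always use encryption" signal is exactly what the Strict Transport 
Security proposal (later the Strict-Transport-Security header) provides. A 
minimal server-side sketch in nginx, with an illustrative hostname and a 
one-year max-age:

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;

    # Tell conforming browsers to use HTTPS for this host (and subdomains)
    # for the next year, refusing to fall back to plain HTTP.
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
}
```

The catch, as the post notes, is that the very first visit can still happen 
over HTTP before the browser has ever seen the header.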

Of course when it gets to the level of countries, it is quite plausible your 
browser may already trust a certificate authority under their jurisdiction so 
all bets are off.

I think I'm saying HTTPS doesn't quite hack it in browsers yet, but it will 
be "secure enough" real soon now.




Re: Request Spamhaus contact

2011-01-18 Thread Simon Waters
On Tuesday 18 January 2011 11:46:53 Ken Gilmour wrote:
> 
> Obviously they know about them because google has the information.

I'm not sure this is a reasonable deduction.



Re: Hostexploit report/Intercage/Esthost

2008-10-13 Thread Simon Waters
On Monday 13 October 2008 15:30:07 Konstantin Poltev wrote:
> 
> and Spamhaus itself claims not to be
> subject to any US laws, where it clearly does business. 

The Spamhaus website lists addresses in the UK and Switzerland.

They appear to operate from the UK, and they claim to be subject to UK law.

Searching for "spamhaus jurisdiction" answers this in the first paragraph of 
the first result, not that Google is always this accurate.

Spamhaus might not be perfect, but they demonstrably provide the best public 
source of information on spam sources on the Internet. As such, criticizing 
them makes you look suspect in the eyes of those who have had very positive 
experiences of Spamhaus's data, and who are used to seeing criticism of them 
come almost exclusively from shady characters. If they are wrong, say so and 
tell them; they've always been very responsive to communications in the past. 
But don't rant.



Re: [funsec] McColo: Major Source of Online Scams andSpams KnockedOffline (fwd)

2008-11-13 Thread Simon Waters
On Wednesday 12 November 2008 21:52:12 Nick Newman wrote:
>
> Let's compare these two scenarios:
>
> 1. The world-wide community of people who essentially run the Internet have
> had enough with a nasty webhosting company in California.  They've
> determined that the majority of spam world-wide originates from this
> company offering bullet-proof hosting.  So they call the upstream providers
> and get them cut off. 

> 2. Some LE agency serves a search warrant for "any digital evidence" and
> collects hundreds of terabytes of worth of data.  5 years later

These aren't mutually exclusive.

> nw3c.org

Grr - those stupid DreamWeaver menus that only work in 66% of browsers.



Re: mail traffic

2008-11-13 Thread Simon Waters
On Thursday 13 November 2008 13:13:17 Revolver Onslaught wrote:
> 
> Did you enconuter the same problem ?

The view from here: see the McColo thread.

Spamcop and DCC report a significant drop coincident with McColo going offline.

I just wish I could say the same about local spam volumes. 

We were blocking most bot spam thanks to the CBL and greylisting, so I suspect 
that the received volumes won't be affected that much.
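For the curious, the greylisting mentioned above is simple to sketch: defer 
the first delivery attempt from an unseen (IP, sender, recipient) triplet 
with a 451, and accept once the triplet retries after a delay. Legitimate 
MTAs retry; most bots don't. The delay value below is illustrative, and a 
real implementation would also expire old entries:

```python
import time

GREYLIST_DELAY = 300  # seconds an unknown triplet must wait before acceptance

class Greylist:
    def __init__(self, delay=GREYLIST_DELAY, clock=time.time):
        self._seen = {}      # triplet -> time it was first seen
        self._delay = delay
        self._clock = clock  # injectable clock, to make testing easy

    def check(self, ip, sender, recipient):
        """Return an SMTP reply code: 451 to defer, 250 to accept."""
        triplet = (ip, sender, recipient)
        now = self._clock()
        first = self._seen.setdefault(triplet, now)
        return 250 if now - first >= self._delay else 451
```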

Still someone should probably prod law enforcement, as this counts as 
circumstantial evidence of criminal activity ;)






Re: an over-the-top data center

2008-11-28 Thread Simon Waters
On Friday 28 November 2008 16:41:45 Craig Holland wrote:
> Just me, or is showing the floorplan not the typical behavior of a
> super-secure anything?

I'm not sure anyone but the press are claiming anything is super secure there.

I can't imagine being in a bunker makes physical security worse (although it 
could make cooling, and working diesel backup generators more interesting).

I've had to visit data centres so secure they don't list their name on the 
front of the building, which is great for security until you need an engineer 
in a hurry and he is driving around looking for the building.

I'm thinking physical security is overdone in some data centers. Sure it is a 
great idea to make sure no one steals the hardware, but much beyond that and 
allowing in expected personnel only, it soon becomes counterproductive.

I was once back-up for a facility so "secure" I never got to visit it?! I'm 
not sure how useful I would have been if I was ever called on to provide 
support - I guess we'll never know. Although for that one I did at least 
happen to know where it was, despite it not being signposted.



Re: Anyone on a Virgin Media UK broadband connection?

2008-12-08 Thread Simon Waters
On Sunday 07 December 2008 14:10:02 Drew Linsalata wrote:
>
> Drop me a note off-list if possible.

We have a business line from them.

Urm no Wikipedia this morning - hmm - I think the IWF is self destructing.



Re: Criminals, The Network, and You [Was: Something Else]

2007-09-12 Thread Simon Waters

On Wednesday 12 September 2007 16:54, you wrote:
>
> My mail servers return 5xx on NXDOMAIN.  If my little shop can spend not
> too much money for three-9s reliability in the DNS servers, other shops
> can as well.  

You get NXDOMAIN when an authoritative server says there is no such domain; 
it doesn't occur if the DNS servers aren't available. So I fail to see the 
connection to reliability of DNS servers.

All well-engineered mail services return 4xx (or accept the email) on SERVFAIL 
(or other lookup failure), if they insist on checking DNS information as part 
of accepting email. One has to allow for the case where the mail servers 
can't speak to the DNS servers, which may include cases where the DNS servers 
are available but, say, routing or other parts of the DNS are fubar.
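The distinction being argued for can be sketched in a few lines: only an 
authoritative "domain does not exist" answer justifies a permanent 5xx; any 
lookup *failure* should produce a 4xx so the sending MTA retries. The 
exception names below are illustrative, standing in for whatever the 
resolver library actually raises:

```python
class NXDomain(Exception):
    """Authoritative answer: the domain does not exist."""

class LookupFailure(Exception):
    """SERVFAIL, timeout, or unreachable resolver: no answer either way."""

def smtp_reply(lookup, domain):
    """Map the outcome of a DNS check on the sender domain to an SMTP reply."""
    try:
        lookup(domain)
    except NXDomain:
        return "550 sender domain does not exist"        # permanent reject
    except LookupFailure:
        return "451 temporary lookup failure, try later"  # sender will retry
    return "250 OK"
```

Collapsing the second case into the first is exactly the behaviour being 
called vandalism below: it permanently bounces good mail whenever your 
resolver has a bad day.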

Serious programmers spent a lot of time making sure the MTA we use does the 
right thing under all error conditions so far encountered; I'd consider 
altering that behaviour vandalism. I feel like some sort of clumsy caveman 
compared to the authors every time I configure it as it is.


Re: [NANOG] Tired of ...

2008-05-15 Thread Simon Waters
On Thursday 15 May 2008 16:23, Jay Hennigan wrote:
> Someone via nanog@nanog.org spammed:
> > Tired of
>
> [snip]
>
> Can anyone suggest a faster way to get yourself blackholed than to spam
> this list?

Spammers still spam our abuse address, that might do it.

___
NANOG mailing list
NANOG@nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog


Re: ICANN opens up Pandora's Box of new TLDs

2008-06-27 Thread Simon Waters
On Friday 27 June 2008 17:13:10 Marshall Eubanks wrote:
> 
> .localhost is already reserved through RFC 2606, so this should not be
> a problem. 

.localdomain shouldn't cause a problem, since most Unix systems that use it 
put it in the name resolution before the DNS is invoked (i.e. /etc/hosts).
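On glibc-style systems that lookup ordering is driven by /etc/nsswitch.conf; 
a sketch (file names and syntax vary across Unixes):

```
# /etc/nsswitch.conf: consult local files before the DNS, so a name
# listed in /etc/hosts never reaches a resolver at all.
hosts: files dns

# /etc/hosts entry that satisfies ".localdomain" lookups locally:
# 127.0.0.1   localhost.localdomain localhost
```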

ICANN have a technical review step in the procedure, which hopefully would 
flag a request for ".localdomain"; I don't think we want to try to enumerate 
possible brokenness.

Probably appropriate for the review step is to ask the root name server 
operators whether there is substantive traffic for a proposed TLD, as if 
there is, it may reveal a problem.

That said substantive traffic for a proposed domain need not of itself block a 
request, ICANN are tasked with maintaining the stability of the net, not the 
stability of every broken piece of software on the net.

Does anyone have a specific operational concern? Otherwise I think this topic 
should probably be laid to rest on this list.



Re: ICANN opens up Pandora's Box of new TLDs

2008-06-30 Thread Simon Waters
On Monday 30 June 2008 17:24:45 John Levine wrote:
> >> In the usual way.  Try typing this into your browser's address bar:
> >>
> >>   http://museum/
> >
> > That was amusing.  Firefox very handily took me to a search
> > results page listing results for the word "museum", none of
> > which was the actual page in question.
>
> Gee, it works fine for me in Firefox 2.0.0.14.  I'd suggest filing a bug
> report against your OS vendor to fix their resolver library.

I think that the following hold ... but I have a headache

Since the hostname part of the URI does not contain a dot, RFC 1034 section 
3.1 applies (according to RFC 2396): the name is treated as relative 
initially, and the interpretation "varies from implementation to 
implementation".

In particular, if a "museum" name exists in your current domain, or in other 
searches that may be applied to the lookup, that may correctly be returned 
first. The language implies that "." need not be part of the search, so the 
URL need not work and the implementation could still be RFC conforming.

Certainly there is scope for confusion here. My machine's resolver is 
configurable (option ndots in resolv.conf), but ndots defaults to 1; I'm 
sure the folks who wrote the resolver code considered the default carefully.
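Concretely, the ndots behaviour looks like this in a resolv.conf sketch 
(server address and search domain are illustrative):

```
# With ndots:1 (the default), a name containing no dot, such as "museum",
# is tried against the search list first, so http://museum/ may resolve
# to museum.corp.example.com before (or instead of) the bare TLD.
search corp.example.com
nameserver 192.0.2.53
options ndots:1
```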

Clearly an area ICANN need to ponder, if they haven't already, as changing the 
resolver defaults globally might undermine the stability of the root name 
servers. And introducing new domains will encourage vendors to change their 
resolvers default behaviour (even for areas where it is already technically 
RFC conforming, if not "least surprise" conforming).

> > Thanks for all the pointers!  I guess I won't be suggesting the
> > use of such TLDs as gmail and ymail as a way to shorten up
> > email addresses for people, given the inconsistent behaviour
> > of client resolvers.  ^_^;
>
> Too bad.  You might try writing the guy whose address is [EMAIL PROTECTED] 
> (yes, his
> name is Ian) and see what his experience has been.

Now that is an email address not a URI, so section 5 of RFC 2821 applies, and 
treating the domain as "relative" is "discouraged". So I'd expect his email 
address to work (with a few exceptions - just like addresses with 
apostrophes - some folks will have bad implementations).

IE gets to the correct page here. Firefox on Windows did something else 
again - sigh (I'm sure it can be corrected in the browser configuration 
somewhere). There is a squid proxy behind both of them.

On the other hand if people need short domain names, my employers may have a 
couple to sell of the old fashioned variety the same total length as "museum" 
but without the added complications, and the ICANN fee is a lot less ;)



Re: Multiple DNS implementations vulnerable to cache poisoning

2008-07-09 Thread Simon Waters
On Wednesday 09 July 2008 14:16:53 Jay R. Ashworth wrote:
> On Wed, Jul 09, 2008 at 04:39:49AM -0400, Jean-Fran?ois Mezei wrote:
> > My DNS server made the various DNS requests from the same port and is
> > thus vulnerable. (VMS TCPIP Services so no patches expected).
>
> Well, yes, but unless I've badly misunderstood the situation, all
> that's necessary to mitigate this bug is to interpose a non-buggy
> recursive resolver between the broken machine and the Internet at
> large, right?

He said "DNS server", which you wouldn't want to point at a correct named, 
because that would be forwarding, and forwarding has its own security issues.

I've already dragged a name server here back to a supported OS version today 
because of this, don't see why others should escape ;)



TLD servers with recursion was Re: Exploit for DNS Cache Poisoning - RELEASED

2008-07-24 Thread Simon Waters
On Thursday 24 July 2008 05:17:59 Paul Ferguson wrote:
>
> Let's hope some very large service providers get their act together
> real soon now.
>
> http://www.hackerfactor.com/blog/index.php?/archives/204-Poor-DNS.html

It isn't going to happen without BIG political pressure, either from users, or 
governments, and other bodies.

I checked last night, and noticed TLD servers for .VA and .MUSEUM are still 
offering recursion amongst a load of less popular top level domains.

Indeed just under 10% of the authoritative name servers mentioned in the root 
zone file still offer recursion.

I didn't check IPv6 servers, but these IPv4 servers are potentially vulnerable 
to this (and other) poisoning attacks. Hard to pin down numbers as some have 
been patched, and some have unusual behaviour on recursion, but I fancy my 
chances of owning more than a handful of TLDs if I had the time to try (and 
immunity from prosecution).
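The recursion check above is easy to reproduce: query each name server with 
something like `dig @server example.com A` and look for the "ra" (recursion 
available) flag in the header. A small helper to parse dig-style output, 
with feeding it live output (e.g. via subprocess) left to the caller:

```python
import re

def offers_recursion(dig_output):
    """True if a dig-style ';; flags:' line advertises the ra flag."""
    m = re.search(r";; flags:([^;]*);", dig_output)
    return bool(m) and "ra" in m.group(1).split()
```

Run over the NS records in the root zone file, this is roughly the survey 
described above; a server that answers with "ra" set is advertising open 
recursion, whatever it actually does with the query.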

The advice NOT to allow recursion on TLD servers is well over a decade old. So 
who thinks the current fashionable problem will be patched widely in a month, 
given it is much less critical in nature?

The .MUSEUM server that is offering recursion is hosted by the Getty 
Foundation, so I assume money isn't the issue. The Vatican ought to be able 
to find someone in its billion adherents prepared to help configure a couple 
of name servers.

I also noticed that one of the ".US" servers doesn't exist in the DNS proper: 
glue exists, but not the record in the zone. I'm guessing the absence of a 
name server's name record in its proper zone makes certain spoofing attacks 
easier (since you are only competing with glue records). Although I can't 
specifically demonstrate that one for Black Hat 2008, it suggests a certain 
lack of attention on the part of the domain's administrators.

I was tempted to write a mock RFC, proposing dropping all top level domain 
names which still have recursion enabled in one or more of their name 
servers, due to "lack of maintenance". A little humour might help make the 
point; slashdot might go for it.






Re: ingress SMTP

2008-09-03 Thread Simon Waters
On Wednesday 03 September 2008 18:07:22 Stephen Sprunk wrote:
>
> When port 25 block was first instituted, several providers actually
> redirected connections to their own servers (with spam filters and/or
> rate limits) rather than blocking the port entirely.  This seems like a
> good compromise for port 25 in particular, provided you have the tools
> available to implement and support it properly.

It generated some very confused support calls here, where folks said "I sent 
email to your server", and we had to tell them "no you didn't, you only 
thought you did".

Please, if you are going to block it, block it clearly and transparently.

On the other hand, abuse by bots isn't restricted to SMTP, and I suspect ISPs 
would be better off in the long term having a way of spotting 
compromised/malicious hosts and dealing with them, rather than applying a 
sticky plaster to port 25. Indeed, spewing on port 25 is probably a good sign 
you need to apply said system.




Re: ingress SMTP

2008-09-05 Thread Simon Waters
On Friday 05 September 2008 00:33:54 Mark Foster wrote:
>
> *rest snipped*
>
> Is the above described limitation a common occurrance in the
> world-at-large?

If the ISP blocks port 25, then the ISP is taking responsibility for 
delivering all email sent by a user, and they have to start applying rate 
limits. Otherwise, if they send all email from their users, all they've done 
is take the spam and mix it in with the legitimate email, making spam 
filtering harder.

Locally, one of the big ISPs insists you register all sender addresses with 
them, so all the spam from them has legitimate sender credentials.

The problem is that by blocking port 25, you are basically then switching 
users to arbitrary per ISP rules for how to send email. This is probably good 
for ISPs (provides some sort of lock-in) but bad for their users.

Whilst the antispam folk think it is a godsend because their block lists are 
smaller, it is relatively easy to block spewing IP addresses, and hard to 
filter when good and bad email is combined, which is why they hate Google 
hiding the source IP address.

This will continue until the real issue is addressed, which is the security of 
end user systems.