Paul,
>
>>When one suggests that a first-tier ISP would not need to filter
>>traffic from downstream providers, because IF they do the filtering,
>>then the problem will not arise via those links, one is suggesting
>>precisely this sort of model.
>
>You're approaching this from the wrong perspective, in my opinion.
>
>There is no assumption implied that RFC2267 filtering is needed --
>it is required. What good is it if one or two or 300 people do
>it, and another 157,000 do not?
>
>Well, there is a little good, but the more people that do it, the
>better off we all are.
>
>The bottom line here is that RFC2267-style filtering (or unicast
>RPF checks, or what have you) stops spoofed source address packets
>from being transmitted into the Internet from places they have no
>business being originated from to begin with.
>
>In even the worst case, those conscientious network admins that
>_do_ do it can say without remorse that they are doing their part,
>and can at least be assured that DoS attacks using spoofed source
>addresses are not being originated from their customer base.
>
>And this is a Bad Thing?
It is a bad thing if one bases defenses on the assumption that ALL
the access points into the Internet will perform such filtering, and
will do it consistently. Even if all ISPs and downstream providers
performed the filtering, there is no guarantee that attackers could
not circumvent the filter controls, either through direct attack on
the routers or through indirect attack on the management stations
used to configure them. I'm just saying that while edge filtering is
potentially useful, it would not be a good idea to assume that it
will be effective.
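
For concreteness, the 2267-style check at a subscriber interface
amounts to something like the following (a minimal sketch in Python,
using the standard ipaddress module; the prefix list is a
hypothetical placeholder for whatever addresses the subscriber was
assigned):

    # Sketch of RFC 2267-style ingress filtering: forward a packet only
    # if its source address falls within one of the prefixes assigned
    # to the subscriber on this interface. Prefixes are illustrative.
    import ipaddress

    ASSIGNED_PREFIXES = [
        ipaddress.ip_network("192.0.2.0/24"),
        ipaddress.ip_network("198.51.100.0/24"),
    ]

    def permit_ingress(src_addr: str) -> bool:
        """True if the source address is legitimate for this subscriber
        interface; False means drop the packet as spoofed."""
        src = ipaddress.ip_address(src_addr)
        return any(src in prefix for prefix in ASSIGNED_PREFIXES)

    assert permit_ingress("192.0.2.17")        # legitimate source
    assert not permit_ingress("203.0.113.9")   # spoofed, dropped

A unicast RPF check reaches the same end by consulting the forwarding
table instead of a static list: if the best route back to the source
address does not point out the interface the packet arrived on, the
packet is dropped.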
>
>>Edge filtering would often be helpful, but it is not a panacea, as
>>pointed out by others in regard to the current set of attacks, nor is
>>the performance impact trivial with most current routers.
>
>It is negligible at the edge in most cases, but you really need to
>define "edge" a little better. In some cases, it is very low speed
>links, in others it is an OC-12.
In talking with the operations folks at GTE-I, I heard concern over
the performance hit for many of their edge routers, given the number
of subscribers involved and other configuration characteristics.
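
To put rough numbers on that concern (a back-of-the-envelope sketch;
the line rate and packet size are illustrative, not measurements from
GTE-I):

    # Time budget per packet on an OC-12 edge link, worst case of
    # minimum-size (64-byte) packets. Illustrative arithmetic only.
    LINE_RATE_BPS = 622_080_000        # OC-12 line rate, bits/sec
    PACKET_BITS = 64 * 8

    pps = LINE_RATE_BPS / PACKET_BITS  # ~1.2 million packets/sec
    ns_per_packet = 1e9 / pps          # ~820 ns per packet

    print(f"{pps:,.0f} pps, {ns_per_packet:.0f} ns/packet")

Every ACL entry consulted per packet eats into that budget, which is
why long filter lists on busy edge interfaces worry operators.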
>
>>Because
>>most routers are optimized for transit traffic forwarding, the
>>ability to filter on the interface cards is limited, as I'm sure you
>>know.
>
>No, I don't know that at all. _Backbone_routers_ are optimized for
>packet forwarding -- I do know that.
I would distinguish between devices that examine IP headers and make
routing decisions entirely on interface cards, which are optimized
for traffic forwarding, and firewall-style devices that focus on
header examination and ACL checking, which typically do so by passing
each packet through a general purpose processor rather than handling
it on the I/O interfaces. But these are just generalizations.
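
The asymmetry is easy to see in caricature (a sketch only; real
routers use hardware longest-prefix-match lookups, TCAMs, and the
like, and the rule set below is hypothetical):

    # Caricature: a forwarding decision is one longest-prefix-match
    # lookup (constant time in hardware), while an ACL check scans the
    # configured rules in order, so its cost grows with list length.
    import ipaddress

    ACL = [  # hypothetical filter rules, checked per packet
        (ipaddress.ip_network("203.0.113.0/24"), "deny"),
        (ipaddress.ip_network("0.0.0.0/0"), "permit"),
    ]

    def check_acl(src: str) -> str:
        addr = ipaddress.ip_address(src)
        for prefix, action in ACL:   # linear in the number of rules
            if addr in prefix:
                return action
        return "deny"                # implicit deny at end of list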
>
>> Also, several of the distributed DoS attacks we are seeing do
>>not use fake source addresses from other sites, so simple filtering
>>of the sort proposed in 2267 would not be effective in these cases.
>
>Again, you're missing the point.
>
>If attackers are limited to launching DoS attacks using traceable
>addresses, then not only can their zombies be traced & found, but
>so can their controller (the perpetrator himself). Of this, make no
>mistake.
Not necessarily. The traffic from a controller to the clients may be
sufficiently offset in time to make tracing back to the controller
hard. I agree that tracing to the traffic sources (or at least to
the sites where the traffic sources reside) would be easier if edge
filtering were in place, and if it were not compromised.
>
>>Finally, I am aware of new routers for which this sort of filtering
>>would be child's play, but they are not yet deployed. One ought not
>>suggest that edge filtering is not being applied simply because of
>>laziness on the part of ISPs.
>
>Steve, you said that -- I didn't. I think ISPs will do what their
>customers pay them to do.
ISPs do what they perceive to be appropriate to maintain and gain
market share, consistent with their cost models and with router
product availability. Different ISPs have different ideas of how to
deploy routers and switches to aggregate traffic, ideas driven by
their traffic models, by economics, and by vendors.
Note that this is an international problem, not just a domestic one.
Our operations folks tell me that many attacks are traceable to
foreign sources, where ensuring adherence to policies such as edge
filtering is rather difficult. Also, from a national security
perspective, one would hardly rely on other countries enforcing such
policies in their ISP domains. That's why I think the best long-term
approach to these problems requires a combination of improved host
security and monitoring for attacks near the hosts (both appropriate
measures when the hosts are servers with a vested interest in
maintaining availability), plus rapid, automated response to detected
attacks and an ability to activate and adjust filters at all ISP
interfaces, not just at subscriber interfaces. This combination of
measures does not rely on every ISP in the world doing the right
thing, although it would benefit from such behavior. It embodies a
notion of self-protection, at both the subscriber and ISP levels, in
support of the principle of least privilege.
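
A minimal sketch of what that rapid, automated response might look
like (everything here is assumed: the threshold, the window, and the
install_filter hook standing in for whatever activation mechanism an
ISP actually exposes):

    # Hypothetical monitor near the hosts: watch per-source packet
    # counts over a window and push a temporary filter toward the ISP
    # interfaces when a flood is detected.
    from collections import Counter

    FLOOD_THRESHOLD = 10_000  # packets per window per source (assumed)

    def install_filter(src_prefix: str) -> None:
        """Stand-in for the ISP's actual activation mechanism
        (config push, signed filter request, etc.)."""
        print(f"activating temporary drop filter for {src_prefix}")

    def respond_to_window(packet_sources: list[str]) -> None:
        """Request filters for any source exceeding the threshold in
        the sources observed during one monitoring window."""
        for src, count in Counter(packet_sources).items():
            if count > FLOOD_THRESHOLD:
                install_filter(f"{src}/32")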
Steve