Hi, Phil.
It strikes me that the first and second points below are something which
David Harrington perhaps ought to give an opinion on. He has got to
defend it to the IESG.
On the first point, my feeling is that neither the requirements doc nor
this doc is sufficient to guarantee an interoperable implementation.
There seems to me to be a cleft stick here (irrespective of whether this
ends up as informational or experimental.) The WG is specifying
pieces of functionality that go in two or more different types of boxes
(three if there is a separate implementation of the central decision
point). If the system is going to be generally deployable, or even to be
experimented with, there may be different implementations. The box types
communicate using the information specifications in the doc. This
appears to require protocol definitions. Where they are defined is
another issue but I feel it has either to be in this doc or in another
doc referenced from this. If they aren't specified I can't see that
anybody will be interested in making commercial implementations.
I see David didn't make any comment about this situation in his write-up,
so maybe I am overreacting.
Regards,
Elwyn
On 13/06/2011 18:04, [email protected] wrote:
Elwyn,
Thanks for the detailed review.
A few follow-ups in-line
Thanks
phil
--
Major issues:
The draft contains partial definitions of two control protocols (egress
-> decision point; decision point -> ingress). It does not make it
clear whether the reader is expected to get full definitions of these
protocols here or whether there will be another document that specifies
these protocols completely. As it stands, one could build the protocols
and pretty much guarantee that they would not be interoperable with
other implementations since message formats are not included although
high level specs are. The document needs to be much clearer about what
is expected to happen here.
[phil] There is another document, "Requirements for Signaling of (Pre-) Congestion
Information in a DiffServ Domain"
[http://tools.ietf.org/html/draft-ietf-pcn-signaling-requirements-04] that deals with
your issue to some extent. It doesn't specify message formats, but it does at least specify
more precisely what information the messages must contain. My understanding is that,
unfortunately, the WG doesn't feel it has the effort to specify these protocols completely.
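To make that concrete, here is a rough sketch (Python, purely illustrative)
of the kind of per ingress-egress-aggregate report the egress would send to
the decision point. The field names and units are my own assumptions, not
definitions from the signaling-requirements draft or from this document:

    from dataclasses import dataclass

    @dataclass
    class EgressReport:
        # One report per ingress-egress-aggregate, egress -> decision point.
        ingress_id: str     # ingress node of the aggregate
        egress_id: str      # reporting egress node
        interval_ms: int    # measurement interval the rates cover
        nm_rate: float      # rate of not-marked PCN-traffic (e.g. octets/s)
        thm_rate: float     # rate of threshold-marked traffic
        etm_rate: float     # rate of excess-traffic-marked traffic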
Use of EXP codepoint: My understanding of what is said in RFC 5696 is
that EXP is supposed to be left for other (non-PCN) systems to use.
This draft uses it. Is this sensible? Is this whole draft really
experimental?
[phil] The intention of RFC 5696 was that the EXP codepoint is for experimental
*PCN* encodings - ie beyond the baseline. For instance, the CL behaviour needs
separate codepoints for (PCN) threshold-marking and (PCN)
excess-traffic-marking, and this would require using the EXP codepoint.
However... there is currently some discussion on what PCN encodings to
specify beyond the baseline. At the time we wrote the baseline, we envisaged
the need for several encodings - however, it now seems that one may be enough,
in which case there may possibly be just one PCN encoding (ie a revised 5696
that now uses the 01 codepoint), which could then possibly be Standards Track - ??
Anyway, I take your point that we need to think about the Status.
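For reference, a sketch (Python, illustrative only) of how the ECN field is
used by the RFC 5696 baseline encoding, and - as an assumption on my part -
how a CL-style experimental encoding might re-use the EXP codepoint as a
second marking state:

    # RFC 5696 baseline PCN encoding of the 2-bit ECN field.
    BASELINE_ENCODING = {
        0b00: "not-PCN",  # Not-ECT: traffic not subject to PCN
        0b10: "NM",       # ECT(0): PCN-capable, not marked
        0b01: "EXP",      # ECT(1): reserved for experimental PCN encodings
        0b11: "PM",       # CE:     PCN-marked
    }

    # Hypothetical CL-style use of EXP to get two marking severities;
    # the names ThM/ETM come from the CL behaviour, the mapping is assumed.
    CL_STYLE_ENCODING = {
        0b00: "not-PCN",
        0b10: "NM",       # not marked
        0b01: "ThM",      # threshold-marked (re-using the EXP codepoint)
        0b11: "ETM",      # excess-traffic-marked
    }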
s3.3.1:
[CL-specific] The outcome of the comparison is not very sensitive
to the value of the CLE-limit in practice, because when threshold-
marking occurs it tends to persist long enough that threshold-
marked traffic becomes a large proportion of the received traffic
in a given interval.
This statement worries me. It sounds like a characteristic of an
unstable system. If the value is that non-critical, why are we
bothering?
[phil] Think about the admission control system. Imagine the PCN-domain has one bottleneck link. If it can
cope with the current number of calls (their bandwidth), then no pkts get
threshold-marked, so the CLE = 0. If there are too many, then all pkts get
threshold-marked, so the CLE = 1. If there is almost exactly the right number of calls,
then threshold-marking will tend to be on for a while, then off for a while (perhaps when
several flows are transmitting less traffic than normal), so the CLE will jiggle about
between 0 and 1. If the CLE is < CLE-limit (say CLE-limit = 0.6 and current CLE =
0.5) when a new call admission request happens to arrive, then it gets admitted. But then
there's more traffic and it's likely that the CLE will rise to 1 - hence another admission
request will get blocked. When a call finishes, the reverse is true.
Now suppose we had in fact configured CLE-limit = 0.4; then, in the scenario
above the call request would have been blocked. However, (1) the PCN-domain has
only admitted one fewer call, (2) a short time later, either the CLE happens to
be lower or a call departs, and then the next admission request is accepted.
All this means that it doesn't matter much exactly what you set CLE-limit to -
it barely affects the average number of calls admitted. The argument above is
hand-wavy, but lots of simulations have been done that show this is true (I
hope I'm representing the results correctly).
So the lack of sensitivity to CLE-limit seems like a good thing.
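To make the argument concrete, here is a minimal sketch (Python) of the
admission decision described above; the CLE formula and the example
CLE-limit of 0.6 are illustrative assumptions, not normative values from
the draft:

    def congestion_level_estimate(nm_rate, thm_rate, etm_rate):
        # Fraction of the aggregate's received traffic that arrived marked.
        total = nm_rate + thm_rate + etm_rate
        return 0.0 if total == 0 else (thm_rate + etm_rate) / total

    def admit_new_flow(cle, cle_limit=0.6):
        # Admit a new call only while the aggregate's CLE is below the limit.
        # Since threshold-marking tends to be either mostly off (CLE near 0)
        # or mostly on (CLE near 1) over an interval, the exact cle_limit
        # barely changes how many calls end up admitted.
        return cle < cle_limit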
Minor issues:
s6: The potential introduction of a centralized decision point appears
to provide additional attack points beyond the architecture in RFC 5559.
It appears to me that the requests for information about specific flows
to the ingress are highly vulnerable, as they (probably) contain all the
information needed to craft a DoS attack on the flow.
[phil] Seems a good point to me.