In particular, why don’t we simply state that a lost packet can induce a 
delay of the fixed packet interval times (window size - 1), that the window 
size should therefore be kept to a minimum, and leave it at that?
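
For instance (numbers purely illustrative, not taken from the draft or the
running code):

    # Worst-case added delay while a lost packet ages out of the reorder
    # window, at an assumed fixed send rate of one outer packet per ms.
    fixed_packet_interval_ms = 1.0   # assumed packet interval
    window_size = 5                  # assumed small reorder window
    worst_case_delay_ms = fixed_packet_interval_ms * (window_size - 1)
    print(worst_case_delay_ms)       # 4.0 ms with these assumed numbers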

Thanks,
Chris.

> On Aug 17, 2021, at 9:15 AM, Christian Hopps <cho...@chopps.org> wrote:
> 
> Hi Tero,
> 
> Let’s keep things simple here at this point in the process, and also match 
> the results we have already verified with running code.
> 
> We can add more text that talks directly about how the reorder window size 
> should be kept as small as possible (it should NEVER be 1000 packets; I’m not 
> sure where you got 1000 from, but that’s not a reasonable number, so perhaps 
> pointing this out IS important). It should be something between 0 and 5, 
> perhaps 10 if you really want to handle wild cases of reordering (you 
> probably don’t).
> 
> Regular packet loss kills TCP, etc.; we do not need to optimize the protocol 
> for this condition. However, this brings me to the next point:
> 
> We are not transport experts here, and we need to avoid straying into that 
> area. We had this draft reviewed and approved by the transport area (the 
> experts) for exactly this reason. We should not start getting into transport 
> area issues of describing the effects on the network of jitter or reordered 
> or lost packets, etc. This is not our expertise and will only cause trouble 
> when this comes before the transport area AD.
> 
> We have approved text from the transport experts now (in addition to having 
> cleared WG LC). I do not want to open this draft back up for major 
> modifications that start talking about new ways to handle packets and their 
> effects on the downstream network, etc. This is not our area of expertise, 
> and we have already received approval from the experts for the text that we 
> have. Let’s stick with the approved text and make clarifying modifications 
> only.
> 
> Thanks,
> Chris.
> 
> 
> 
> 
>> On Aug 17, 2021, at 6:48 AM, Tero Kivinen <kivi...@iki.fi> wrote:
>> 
>> Christian Hopps writes:
>>>>> It might be obvious to you, but it might not be obvious to the person
>>>>> doing the actual implementations. I always consider it a good idea to
>>>>> point out pitfalls and cases where the implementor should be wary, and
>>>>> not to assume that the implementor actually realizes this. 
>>>> 
>>>> I agree with that sentiment.
>>> 
>>> This is the specific case here:
>> 
>> No it is not.
>> 
>>> “Given an ordered packet stream, A, B, C, if you send B before A you
>>> will be sending packets in a different order”
>> 
>> The issue there is that it is very hard to see from the text in the
>> current draft section 2.5 that it can cause an extra 32-1000 packets
>> (the reorder window size) of buffering for every single lost packet.
>> 
>> And the current text does not allow sending packets in a different
>> order, as it does not allow processing packets in any order other than
>> in-order.
>> 
>> So there are multiple choices here, which affect how the implementation
>> behaves (see the sketch after the three options below):
>> 
>> 1) Make sure that packets are always processed in-order, i.e., do not
>> allow any outer packets to be processed until you are sure they are
>> in-order. This causes extra buffering/latency if any packet is lost, as
>> you need to wait for that packet to drop out of the reorder window
>> before you know it is lost and can continue processing packets. This
>> will not cause any reordering of packets.
>> 
>> 2) Process incoming outer packets as they come in, and do not reorder
>> them before processing. In that case you need to process outer packets
>> partially, i.e., only send out those inner packets which have been
>> fully received, and buffer the fragments of inner packets that are
>> still missing pieces because outer packets were lost or reordered. In
>> this case, if there is reordering in the outer packets, it will cause
>> reordering of the inner packets too.
>> 
>> 3) Do a hybrid version where, when you notice a missing packet in the
>> outer packet stream, you postpone processing for a short duration to
>> see whether the reordering was only very small (for example, wait for
>> just the next outer packet). If the outer packet stream can be
>> reordered inside this small window, you do so and process and send the
>> packets in order, but you limit the latency this causes to, for
>> example, only one packet. If larger reordering is happening, you still
>> buffer and wait for the full reorder window before deeming that you
>> cannot process that inner packet, as it was not completely received
>> because of the missing packet.
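>> 
>> A rough sketch of option 3's outer-packet handling (illustrative Python
>> pseudo-code with invented names, not text from the draft; fragment
>> reassembly of partially received inner packets is omitted and would
>> still use the full reorder window):
>> 
>>   SMALL_WAIT = 1      # tolerate this much reordering before skipping ahead
>> 
>>   class HybridReceiver:
>>       def __init__(self):
>>           self.next_seq = 0       # next expected outer sequence number
>>           self.buffered = {}      # out-of-order outer packets, keyed by seq
>> 
>>       def receive(self, seq, outer_pkt):
>>           """Return outer packets now ready to be processed, in order."""
>>           if seq < self.next_seq:
>>               return []           # too old, we already skipped past it
>>           self.buffered[seq] = outer_pkt
>>           ready = self._drain()
>>           # A gap remains; only keep waiting while the reordering is small,
>>           # otherwise skip over the gap and keep processing.
>>           if self.buffered and max(self.buffered) - self.next_seq > SMALL_WAIT:
>>               self.next_seq = min(self.buffered)   # give up on the gap
>>               ready += self._drain()
>>           return ready
>> 
>>       def _drain(self):
>>           out = []
>>           while self.next_seq in self.buffered:
>>               out.append(self.buffered.pop(self.next_seq))
>>               self.next_seq += 1
>>           return out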
>> 
>> The current text only allows option 1, and I would like to allow
>> options 2 and 3 and perhaps others as well, but I would also like to
>> have some text explaining the trade-offs of the different options. This
>> does not affect interoperability as such, as two implementations using
>> different methods will interoperate, but it might cause very bad
>> performance issues.
>> 
>> Actually, I think option 1 (the only one allowed now) can and will
>> cause large round-trip-time jitter for every single lost frame. I am
>> not sure what large round-trip-time jitter does to different protocols
>> running inside the tunnel. I would assume that any kind of audio
>> conferencing system would have really bad performance if run over such
>> a system.
>> 
>>> Again, I’ll put in this text to unblock this document, but really,
>>> sometimes things *are* obvious. 
>> 
>> I had to parse section 2.5 several times before I realised that it
>> really does require me to process packets in-order, i.e., it forbids
>> options 2 and 3.
>> 
>> It might be obvious to you, but it was not obvious to me, and I think
>> that restriction does make the performance really bad.
>> -- 
>> kivi...@iki.fi
>> 
> 

_______________________________________________
IPsec mailing list
IPsec@ietf.org
https://www.ietf.org/mailman/listinfo/ipsec