Hi, everyone,
I'm new to the VPP architecture. I want to use the vlib_buffer reference
count (e.g. via clone) to reduce copies, but I'm confused about the
reference count operations between the vlib_buffer and the rte_mbuf (when
DPDK is used as the interface input). I have searched the mailing list a...
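For reference, a minimal sketch of the vlib-side clone API in question,
assuming a node context where bi0 is a valid buffer index; the helper name,
the clone count, and the 64-byte headroom are my own illustration, not VPP
code:

#include <vlib/vlib.h>
#include <vlib/buffer_funcs.h>

/* Hypothetical helper: fan a packet out to two copies by reference.
 * vlib_buffer_clone() bumps the source buffer's reference count instead
 * of copying the payload; each clone gets 64 bytes of private head room
 * for its own header rewrites.  It returns how many clones it could
 * actually produce, so the caller must handle a short count (e.g. fall
 * back to vlib_buffer_copy() or drop). */
static u16
fanout_by_clone (vlib_main_t * vm, u32 bi0, u32 * clones)
{
  return vlib_buffer_clone (vm, bi0, clones, 2 /* n_buffers */ ,
			    64 /* head_end_offset */ );
}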
I missed the place where, once all data is acked, the FIN is sent out and the
flag is removed; then, if the peer side ACKs the FIN, the connection steps to
FIN_WAIT_2. So please ignore this.
At 2019-06-12 23:42:09, "guangwei" wrote:
From the code, when active close in tcp_connection_...
...fin. Still, others may do it, so here’s a
quick fix for that [1].
Thanks,
Florin
[1] https://gerrit.fd.io/r/c/20403/
On Jun 28, 2019, at 7:25 AM, guangwei wrote:
I think the FIN is reported to the session layer only when all of the data
allowed in the SEQ window has indeed been received...
Thanks.
At 2019-06-29 00:52:21, "Florin Coras" wrote:
Here’s the patch [1].
Thanks,
Florin
[1] https://gerrit.fd.io/r/c/20404/
On Jun 28, 2019, at 7:54 AM, guangwei wrote:
Yes, please fix it; I'm just reading the code and have no environment to
verify it.
At 2019-06-28 22:45:23, "Florin Coras" wrote:
>Hi,
>
>That is correct. Do you want to provide the patch, or should I do it?
>
>Thanks,
>Florin
>
>> On Jun 28, 2019, at ...
I think the FIN should be reported to the session layer only when all of the
data allowed in the SEQ window has indeed been received; but from the code,
when a packet is flagged with FIN, no matter whether it is an out-of-order
segment or not, and no matter whether its data falls entirely inside the
allowed window or not, the stack...
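The condition being argued for would look roughly like the sketch below;
this is my own illustration of "only signal the FIN once it is in order",
not the actual VPP code, though the field names follow VPP conventions:

#include <vnet/tcp/tcp.h>

/* Hypothetical check: only hand the FIN up to the session layer when the
 * segment starts exactly at rcv_nxt, i.e. nothing before it is missing.
 * An out-of-order FIN would instead wait until the gap is filled. */
static int
fin_is_in_order (tcp_connection_t * tc, vlib_buffer_t * b, tcp_header_t * th)
{
  return tcp_fin (th)
    && vnet_buffer (b)->tcp.seq_number == tc->rcv_nxt;
}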
Now, I'm reading the TCP stack of VPP 19.04 and have a doubt; please see the
comment inline:
tcp46_rcv_process_inline
{
...
/* 5: check the ACK field */
...
case TCP_STATE_CLOSE_WAIT:
/* Do the same processing as for the ESTABLISHED state. */
if (tcp_rcv_ack (wrk, tc0, b0, tcp0, &error0))
...
From the code, when we actively close in tcp_connection_close, the TCP state
changes to FIN_WAIT_1, and if there is TX data at that moment it also sets
the TCP flag TCP_CONN_FINPNDG;
but there is no path in the stack for this TCP to step into the FIN_WAIT_2
state; the only thing this TCP can happ...
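Conceptually, the missing transition being discussed would look something
like this; a hypothetical sketch of the idea, not Florin's actual patch:

#include <vnet/tcp/tcp.h>

/* Hypothetical sketch: in FIN_WAIT_1, once the pending FIN has gone out
 * (TCP_CONN_FINPNDG cleared) and everything up to and including the FIN
 * has been acked, the connection may step to FIN_WAIT_2. */
static void
maybe_enter_fin_wait_2 (tcp_connection_t * tc)
{
  if (tc->state == TCP_STATE_FIN_WAIT_1
      && !(tc->flags & TCP_CONN_FINPNDG)
      && tc->snd_una == tc->snd_nxt)
    tc->state = TCP_STATE_FIN_WAIT_2;
}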
I'm now working on version 19.04 and find that the behavior of
vlib_buffer_free_no_next differs between 18.07 and 19.04; I think it's a
bug in 19.04:
always_inline void
vlib_buffer_free_no_next (vlib_main_t * vm,
/* pointer to first buffer */
u32 * buffers,
/* number of buffers to free */
u32 n_buffers)
...
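The semantic difference between the two free routines, as I understand it
(my own usage sketch, not from the thread): vlib_buffer_free() walks
next_buffer chains and frees every segment, while the _no_next variant
frees only the indices you pass:

#include <vlib/vlib.h>
#include <vlib/buffer_funcs.h>

/* Hypothetical usage sketch: free only the head buffers, leaving any
 * chained segments alone (e.g. because their ownership was handed off).
 * With chained packets, vlib_buffer_free() would free the whole chain. */
static void
free_heads_only (vlib_main_t * vm, u32 * buffers, u32 n_buffers)
{
  vlib_buffer_free_no_next (vm, buffers, n_buffers);
}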
I added printf calls in the VPP code (mainly to dump the contents of packets
in the TCP stack). When I start it (make run) and run tests to trigger these
places, only some of them print their info to the console; others are
missed, and some of the info is printed but truncated. What's the issue, and
how do I resol...
VPP runs under CentOS, and I added clib_warning and printf calls in the
code. The output seems to be suppressed; not every log that is actually
triggered gets printed to stdout. Is there a way to resolve this,
or to tune some parameters, or to send the output somewhere else?
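One common cause of this symptom, as a guess not confirmed in the thread, is
stdio buffering: printf to a stdout that is not a tty is block-buffered, so
messages lag or are lost if the process dies, while stderr is unbuffered.
A minimal sketch of two workarounds (the function and message are mine):

#include <stdio.h>

/* Hypothetical sketch: make debug output appear immediately. */
static void
dump_pkt_len (unsigned int len)
{
  /* Option 1: flush stdout explicitly after each printf. */
  printf ("pkt len %u\n", len);
  fflush (stdout);

  /* Option 2: write to unbuffered stderr instead. */
  fprintf (stderr, "pkt len %u\n", len);
}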
I'm reading the TCP stack of VPP 18.07. From the code, the TCP stack does
not count the sequence number of the FIN when it arrives together with data
in the payload on the passive side, which will lead the active-close side to
retransmit its FIN. The code is as follows:
tcp46_established_inline {
...
/* 8: check the FIN bit */
if (PREDICT_FALSE (is_fin))
...
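For context, my own sketch of the issue, not the 18.07 code: a FIN occupies
one sequence number after any payload carried in the same segment, so the
receiver must advance rcv_nxt past both; otherwise its ACK never covers the
FIN and the active-close side keeps retransmitting it:

#include <vnet/tcp/tcp.h>

/* Hypothetical sketch: consume payload and FIN.  If only data_len were
 * counted, the ACK sent back would stop one short of the peer's FIN. */
static void
consume_data_and_fin (tcp_connection_t * tc, u32 data_len, int has_fin)
{
  tc->rcv_nxt += data_len;	/* payload bytes */
  if (has_fin)
    tc->rcv_nxt += 1;		/* the FIN itself takes one seq number */
}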