Hi xyxue, 

VPP now has a dependency on memfd, which is not available in 14.04. 
Unfortunately, that means you'll have to switch to Ubuntu 16.04. 
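If you really need to stay on 14.04 for a while, a possible (untested) workaround is to define the syscall number yourself rather than stubbing the call out. Your 4.4 kernel already implements memfd_create (it was added in 3.17); it's only the old headers that don't declare the number. A sketch, assuming x86_64 where the number is 319:

```c
/* Fallback for old glibc headers that lack the memfd_create syscall
 * number.  The syscall itself exists in kernels >= 3.17, so a 4.4
 * kernel supports it even when the 14.04 headers do not declare it. */
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_memfd_create
#if defined (__x86_64__)
#define __NR_memfd_create 319
#endif
#endif

static inline int
memfd_create_compat (const char *name, unsigned int flags)
{
  return syscall (__NR_memfd_create, name, flags);
}
```

Note that simply replacing the call with 'return 0' would hand VPP an invalid descriptor for its shared memory segment, so I would not expect VCL to work that way.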
Florin

> On Sep 27, 2017, at 1:34 AM, 薛欣颖 <xy...@fiberhome.com> wrote:
> 
> Hi Florin,
> 
> There is a compile error: '__NR_memfd_create' is not supported:
> /home/vpp_communication/vpp/build-root/../src/vppinfra/linux/syscall.h:45:19: 
> error: '__NR_memfd_create' undeclared (first use in this function)
>    return syscall (__NR_memfd_create, name, flags);
>    
> Is the problem related to the kernel version? What is your kernel version?
>  
> The Kernel version of mine:
> root@ubuntu:/home/vpp_communication/vppsb/vcl-ldpreload/src# uname -a
> Linux ubuntu 4.4.0-31-generic #50~14.04.1-Ubuntu SMP Wed Jul 13 01:07:32 UTC 
> 2016 x86_64 x86_64 x86_64 GNU/Linux
> 
> If I change 'return syscall (__NR_memfd_create, name, flags);' to 'return 0;', 
> will the VCL function be affected?
> 
> Thanks,
> xyxue
>  
> From: Florin Coras <fcoras.li...@gmail.com>
> Date: 2017-09-20 13:05
> To: 薛欣颖 <xy...@fiberhome.com>
> CC: dwallacelf <dwallac...@gmail.com>; vpp-dev <vpp-dev@lists.fd.io>
> Subject: Re: [vpp-dev] Failed to use vcl_test_client
> Hi xyxue, 
> 
> I just tested the stack and everything seems to be working fine. I tested 
> interop with Linux using uri_tcp_test and nc, and the cut-through path with 
> the vcl test tools. How are you running the server?
> 
> Florin
> 
>> On Sep 19, 2017, at 8:29 PM, 薛欣颖 <xy...@fiberhome.com> wrote:
>> 
>> Hi Florin,
>> 
>> The server is started on the peer vpp, and communication via 
>> 'sock_test_client/sock_test_server' works normally.
>> 
>> Thanks,
>> xyxue
>>  
>> From: Florin Coras <fcoras.li...@gmail.com>
>> Date: 2017-09-20 10:52
>> To: 薛欣颖 <xy...@fiberhome.com>
>> CC: dwallacelf <dwallac...@gmail.com>; vpp-dev <vpp-dev@lists.fd.io>
>> Subject: Re: [vpp-dev] Failed to use vcl_test_client
>> Hi xyxue,
>> 
>> What you're getting is a connect fail because, I assume from the trace, the 
>> server is not started on the peer vpp. Because the server is not started, 
>> i.e., bind wasn't called, when the peer vpp receives the SYN it replies with 
>> a reset. That reset finally results in a connect fail notify. 
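>> The same pattern is visible with plain Linux sockets (a sketch, not VPP 
>> code): when nothing is listening, the peer answers the SYN with a RST and 
>> the client's connect fails with ECONNREFUSED:

```c
/* Sketch with plain Linux sockets (not VPP): connecting to a port
 * nobody listens on makes the peer answer the SYN with a RST, which
 * the client sees as a failed connect (ECONNREFUSED). */
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Returns 0 on success, otherwise the errno from connect (). */
static int
try_connect (const char *ip, unsigned short port)
{
  int fd = socket (AF_INET, SOCK_STREAM, 0);
  struct sockaddr_in sa = { 0 };
  sa.sin_family = AF_INET;
  sa.sin_port = htons (port);
  inet_pton (AF_INET, ip, &sa.sin_addr);

  int err = 0;
  if (connect (fd, (struct sockaddr *) &sa, sizeof (sa)) < 0)
    err = errno;
  close (fd);
  return err;
}
```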
>> 
>> As for your other questions:
>> 
>> 1. The one in vpp handles connect requests from applications, while the one 
>> in vppcom is for cut-through sessions. That is, vpp acts as an introduction 
>> mechanism for the two apps, which afterwards exchange data via shared 
>> memory fifos. 
>> 2. For exchanging control messages, e.g., bind, connect, accept, they use 
>> the binary api. For exchanging data, that is, moving data from vpp to vcl 
>> and vice-versa, they use shared memory fifos. 
>> 3. That message is a notification from vpp to the application (vcl in this 
>> case) regarding a previous connect attempt. As you’ve discovered, if 
>> is_fail=1, the connect attempt failed. 
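>> To make the fifo idea concrete, here is a toy single-producer/
>> single-consumer ring fifo (purely illustrative; VPP's actual fifos live in 
>> shared memory and support batched enqueue/dequeue):

```c
/* Toy single-producer/single-consumer byte fifo, illustrating only
 * the data-path idea: the producer advances tail, the consumer
 * advances head, and no locks are needed for one writer and one
 * reader.  VPP's svm_fifo is far more elaborate. */
#define TOY_FIFO_SIZE 256

typedef struct
{
  unsigned char data[TOY_FIFO_SIZE];
  unsigned int head;		/* consumer reads here  */
  unsigned int tail;		/* producer writes here */
} toy_fifo_t;

static int
toy_fifo_enqueue (toy_fifo_t * f, unsigned char b)
{
  unsigned int next = (f->tail + 1) % TOY_FIFO_SIZE;
  if (next == f->head)
    return -1;			/* full */
  f->data[f->tail] = b;
  f->tail = next;
  return 0;
}

static int
toy_fifo_dequeue (toy_fifo_t * f, unsigned char *b)
{
  if (f->head == f->tail)
    return -1;			/* empty */
  *b = f->data[f->head];
  f->head = (f->head + 1) % TOY_FIFO_SIZE;
  return 0;
}
```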
>> 
>> Hope this helps, 
>> Florin
>> 
>>> On Sep 19, 2017, at 7:30 PM, 薛欣颖 <xy...@fiberhome.com> wrote:
>>> 
>>> 
>>> Hi ,
>>> 
>>> There are still problems:
>>> root@ubuntu:/home/vpp_communication/vpp/build-root/install-vpp-native/vpp/bin#
>>>  ./vcl_test_client -U 1.1.1.2 22000
>>> 
>>> CLIENT: Connecting to server...
>>> vl_api_connect_session_reply_t_handler:697: [9478] connect failed: Session 
>>> failed to connect (-115)
>>> 
>>> Breakpoint 1, send_session_connected_callback (app_index=1, api_context=0, 
>>> s=0x0, is_fail=1 '\001')
>>>     at 
>>> /home/vpp_communication/vpp/build-data/../src/vnet/session/session_api.c:157
>>> 157     {
>>> (gdb) bt
>>> #0  send_session_connected_callback (app_index=1, api_context=0, s=0x0, 
>>> is_fail=1 '\001')
>>>     at 
>>> /home/vpp_communication/vpp/build-data/../src/vnet/session/session_api.c:157
>>> #1  0x00007f35c658459c in stream_session_connect_notify (tc=0x7f358585e3f8, 
>>> is_fail=<optimized out>, 
>>>     is_fail@entry=1 '\001') at 
>>> /home/vpp_communication/vpp/build-data/../src/vnet/session/session.c:489
>>> #2  0x00007f35c6456972 in tcp_connection_reset (tc=tc@entry=0x7f358585e3f8)
>>>     at /home/vpp_communication/vpp/build-data/../src/vnet/tcp/tcp.c:258
>>> #3  0x00007f35c6429977 in tcp46_syn_sent_inline (is_ip4=1, 
>>> from_frame=<optimized out>, node=<optimized out>, 
>>>     vm=<optimized out>) at 
>>> /home/vpp_communication/vpp/build-data/../src/vnet/tcp/tcp_input.c:2023
>>> #4  tcp4_syn_sent (vm=<optimized out>, node=<optimized out>, 
>>> from_frame=<optimized out>)
>>>     at 
>>> /home/vpp_communication/vpp/build-data/../src/vnet/tcp/tcp_input.c:2184
>>> #5  0x00007f35c6962d14 in dispatch_node (last_time_stamp=204974335045786, 
>>> frame=0x7f35858596c0, 
>>>     dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INTERNAL, 
>>> node=0x7f3584da06c0, 
>>> 
>>> 
>>> code segment:
>>> int
>>> send_session_connected_callback (u32 app_index, u32 api_context,
>>>                              stream_session_t * s, u8 is_fail)
>>> {
>>>   vl_api_connect_session_reply_t *mp;
>>>   unix_shared_memory_queue_t *q;
>>>   application_t *app;
>>>   unix_shared_memory_queue_t *vpp_queue;
>>> 
>>>   app = application_get (app_index);
>>>   q = vl_api_client_index_to_input_queue (app->api_client_index);
>>> 
>>>   if (!q)
>>>     return -1;
>>> 
>>>   mp = vl_msg_api_alloc (sizeof (*mp));
>>>   mp->_vl_msg_id = clib_host_to_net_u16 (VL_API_CONNECT_SESSION_REPLY); 
>>>   mp->context = api_context;
>>>   if (!is_fail)
>>>     {
>>>       vpp_queue = session_manager_get_vpp_event_queue (s->thread_index);
>>>       mp->server_rx_fifo = pointer_to_uword (s->server_rx_fifo);
>>>       mp->server_tx_fifo = pointer_to_uword (s->server_tx_fifo);
>>>       mp->handle = stream_session_handle (s);
>>>       mp->vpp_event_queue_address = pointer_to_uword (vpp_queue);
>>>       mp->retval = 0;
>>>     }
>>>   else
>>>     {
>>>       mp->retval = clib_host_to_net_u32 (VNET_API_ERROR_SESSION_CONNECT_FAIL);
>>>     }
>>>   vl_msg_api_send_shmem (q, (u8 *) & mp);
>>>   return 0;
>>> }
>>> 
>>> The message returned to VCL: connect failed: Session failed to connect 
>>> 
>>> I have three questions:
>>> 1. There are two functions named "vl_api_connect_sock_t_handler", one in 
>>> session_api.c and another in vppcom.c. How do they work together?
>>> 
>>> 2. How do VCL and VPP communicate?
>>> 
>>> 3. Why is the message from VPP to VCL sent by the 
>>> "send_session_connected_callback" function?
>>> 
>>> thanks,
>>> xyxue
>>>  
>>> From: Dave Wallace <dwallac...@gmail.com>
>>> Date: 2017-09-19 01:29
>>> To: 薛欣颖 <xy...@fiberhome.com>; vpp-dev@lists.fd.io
>>> Subject: Re: [vpp-dev] Failed to use vcl_test_client
>>> Hi Xyxue,
>>> 
>>> I believe this patch fixes this issue:  https://gerrit.fd.io/r/#/c/8315/
>>> 
>>> Can you please pull the latest source code and try again?
>>> 
>>> Thanks,
>>> -daw-
>>> 
>>> On 9/18/2017 2:43 AM, 薛欣颖 wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> When I test vcl, there is an error:
>>>> root@ubuntu:/home/vpp_communication/vpp/build-root/install-vpp-native/vpp/bin#
>>>>  ./vcl_test_client -U 1.1.1.2 22000
>>>> 
>>>> CLIENT: Connecting to server...
>>>> msg_handler_internal:429: no handler for msg id 424
>>>> ..........
>>>> ...................
>>>> ...................
>>>> ERROR in main(): Bad file descriptor
>>>> ERROR: connect failed (errno = 9)!
>>>> Segmentation fault
>>>> 
>>>> The msg id 424 is VL_API_CONNECT_URI_REPLY. The VL_API_CONNECT_URI_REPLY 
>>>> handler is registered in vat.
>>>> Is there anything wrong in my test?
>>>> 
>>>> The gdb information is shown below:
>>>> (gdb) bt
>>>> #0  vl_msg_api_send_shmem (q=q@entry=0x302891c0, 
>>>> elem=elem@entry=0x7faafab32cc8 "\344o\006\060")
>>>>     at 
>>>> /home/vpp_communication/vpp/build-data/../src/vlibmemory/memory_shared.c:584
>>>> #1  0x00007fab3c053b55 in send_session_connected_callback 
>>>> (app_index=<optimized out>, api_context=3472551422, 
>>>>     s=0x0, is_fail=<optimized out>) at 
>>>> /home/vpp_communication/vpp/build-data/../src/vnet/session/session_api.c:186
>>>> #2  0x00007fab3c03cc44 in stream_session_connect_notify 
>>>> (tc=0x7faafa776bd8, is_fail=<optimized out>, 
>>>>     is_fail@entry=1 '\001') at 
>>>> /home/vpp_communication/vpp/build-data/../src/vnet/session/session.c:489
>>>> #3  0x00007fab3bf0f642 in tcp_connection_reset (tc=tc@entry=0x7faafa776bd8)
>>>>     at /home/vpp_communication/vpp/build-data/../src/vnet/tcp/tcp.c:257
>>>> #4  0x00007fab3bee4077 in tcp46_syn_sent_inline (is_ip4=1, 
>>>> from_frame=<optimized out>, node=<optimized out>, 
>>>>     vm=<optimized out>) at 
>>>> /home/vpp_communication/vpp/build-data/../src/vnet/tcp/tcp_input.c:1938
>>>> #5  tcp4_syn_sent (vm=<optimized out>, node=<optimized out>, 
>>>> from_frame=<optimized out>)
>>>>     at 
>>>> /home/vpp_communication/vpp/build-data/../src/vnet/tcp/tcp_input.c:2091
>>>> #6  0x00007fab3c4159e4 in dispatch_node (last_time_stamp=1926897640132334, 
>>>> frame=0x7faafb34a000, 
>>>>     dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INTERNAL, 
>>>> node=0x7faafa86a600, 
>>>>     vm=0x7fab3c668320 <vlib_global_main>) at 
>>>> /home/vpp_communication/vpp/build-data/../src/vlib/main.c:1011
>>>> #7  dispatch_pending_node (vm=vm@entry=0x7fab3c668320 <vlib_global_main>, 
>>>>     pending_frame_index=pending_frame_index@entry=5, 
>>>> last_time_stamp=last_time_stamp@entry=1926897640132334)
>>>>     at /home/vpp_communication/vpp/build-data/../src/vlib/main.c:1161
>>>> #8  0x00007fab3c4177a5 in vlib_main_or_worker_loop (is_main=1, 
>>>> vm=0x7fab3c668320 <vlib_global_main>)
>>>>     at /home/vpp_communication/vpp/build-data/../src/vlib/main.c:1622
>>>> #9  vlib_main_loop (vm=0x7fab3c668320 <vlib_global_main>)
>>>>     at /home/vpp_communication/vpp/build-data/../src/vlib/main.c:1641
>>>> #10 vlib_main (vm=vm@entry=0x7fab3c668320 <vlib_global_main>, 
>>>> input=input@entry=0x7faafab32fa0)
>>>>     at /home/vpp_communication/vpp/build-data/../src/vlib/main.c:1799
>>>> #11 0x00007fab3c44f433 in thread0 (arg=140373429486368)
>>>>     at /home/vpp_communication/vpp/build-data/../src/vlib/unix/main.c:534
>>>> #12 0x00007fab3ba4dbf8 in clib_calljmp () at 
>>>> /home/vpp_communication/vpp/build-data/../src/vppinfra/longjmp.S:110
>>>> #13 0x00007ffe9df58600 in ?? ()
>>>> #14 0x00007fab3c44ffb5 in vlib_unix_main (argc=<optimized out>, 
>>>> argv=<optimized out>)
>>>>     at /home/vpp_communication/vpp/build-data/../src/vlib/unix/main.c:597
>>>> #15 0x0000000000000000 in ?? ()
>>>> 
>>>> Thanks,
>>>> xyxue
>>>> 
>>>> 
>>>> _______________________________________________
>>>> vpp-dev mailing list
>>>> vpp-dev@lists.fd.io
>>>> https://lists.fd.io/mailman/listinfo/vpp-dev