On Apr 12 08:44, Oleg Moskalenko wrote:
> 
> > 
> > However, I think I found a workaround on the application level.
> > Apparently all packets sent to a specific address are sent to the
> > first socket which has been bound to the address.  If that socket
> > has been closed, the next in line gets the packets (unless it has
> > been connected and the sender is not the connected address).  So
> > what I did was this:
> > 
> > Before starting step 14, I created a third socket, which then
> > replaced the server socket:
> 
> Thank you, Corinna, for the reply and for the idea.
> 
> Unfortunately, the workaround will work well only in the case of a
> single client.  In the multiple clients scenario, it will create a
> sort of race condition:
> 
> 1) some packets already scheduled by the OS for the "original" socket
> will be lost;
> 2) some packets delivered in the window between destroying the old
> socket and creating the new one will be wrongly rejected.
> 
> But this is better than nothing.  I'll think about whether we can
> live with it.
Too bad.  I don't know the DTLS protocol in detail, but isn't it
possible to do the server part with a single UDP socket?  If you keep
track of the already connected clients, you know whether an incoming
packet belongs to a connected or a connecting client, and you can then
hand it off to a different thread for further handling.
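
Something along these lines, maybe (just an untested sketch; the
client table and the worker handoff are placeholders, not real Cygwin
or DTLS API):

/* Untested sketch: one UDP socket for everything; the peer address
   tells us whether a datagram comes from a connected or a connecting
   client.  Error handling, locking and the thread handoff are
   omitted, and the fixed-size table is just for illustration. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stddef.h>

#define MAX_CLIENTS 64

struct client {
    int in_use;
    struct sockaddr_in addr;
    /* per-client DTLS state would live here */
};

static struct client clients[MAX_CLIENTS];

static struct client *find_client(const struct sockaddr_in *peer)
{
    for (int i = 0; i < MAX_CLIENTS; i++)
        if (clients[i].in_use
            && clients[i].addr.sin_addr.s_addr == peer->sin_addr.s_addr
            && clients[i].addr.sin_port == peer->sin_port)
            return &clients[i];
    return NULL;
}

static struct client *add_client(const struct sockaddr_in *peer)
{
    for (int i = 0; i < MAX_CLIENTS; i++)
        if (!clients[i].in_use) {
            clients[i].in_use = 1;
            clients[i].addr = *peer;
            return &clients[i];
        }
    return NULL;
}

void serve(int sock)               /* sock: the single bound UDP socket */
{
    char buf[2048];

    for (;;) {
        struct sockaddr_in peer;
        socklen_t plen = sizeof peer;
        ssize_t n = recvfrom(sock, buf, sizeof buf, 0,
                             (struct sockaddr *) &peer, &plen);
        if (n < 0)
            continue;

        struct client *c = find_client(&peer);
        if (!c)
            c = add_client(&peer); /* connecting client: new state,
                                      handshake starts here */
        if (c) {
            /* Hand (c, buf, n) over to the client's worker thread;
               if the table is full, the datagram is simply dropped. */
        }
    }
}

The linear scan is only for brevity; with many clients you'd rather
hash on the peer address/port.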


Corinna

-- 
Corinna Vinschen                  Please, send mails regarding Cygwin to
Cygwin Maintainer                 cygwin AT cygwin DOT com
Red Hat
