Hi Mark,

I understand you are stating that the root of the issue originated with the client (a broken client). I am describing what happens when you have request/response on the same TCP connection. For example:
My understanding (please correct me if I am wrong):

Client --------------------- SAME TCP SOCKET --------------------- Server

1) Client sends PUT request with Content-Length: 419
2) Server reads the header and content body, responds 2xx
3) Client reads the response
4) Client sends GET request (GET /api HTTP/1.1)
5) Server reads it (Tomcat now misreads the GET request because of the
   read position left over from the previous wrong content length).
   Can the read position be reset here?
6) Server sends 400 and closes the connection
7) Client reads the response

So, if you look at the request/response model, how can Tomcat read ahead
on the PUT call when the data is not yet on the socket?

Thanks,
Bhavesh

On Thu, Feb 7, 2019 at 1:51 PM Mark Thomas <ma...@apache.org> wrote:

> On 07/02/2019 20:05, Bhavesh Mistry wrote:
> > Hi Mark,
> >
> > There is no way to validate the END of a request for a PUT call, and if
> > Content-Length does not match the payload body the client sent, then
> > reject it and reset the position.
>
> You can't do that. The only way to determine how much data to expect is
> from the content-length header. The server has no way to determine (with
> any certainty) that the client has stopped sending the previous request
> body (or hasn't sent any body at all) and is starting to send a new
> request.
>
> > If the content length does not match, then reject the PUT request,
>
> Not possible.
>
> > and then close the
> > connection for the PUT call, not for subsequent requests. How can you
> > read ahead from a TCP socket that does not have data yet for the next
> > request?
>
> The server is going to do a blocking read (this is Servlet I/O so it is
> blocking) for more data. Again, the server has no way of knowing that
> the data that arrives is for a new request rather than the request body
> it was expecting.
>
> > It is a
> > request/response model, so a PUT request processed with a wrong content
> > length should not impact the next request.
>
> HTTP doesn't work like that.
>
> > Another server like Jetty has no issue.
>
> The only way to guarantee that is to disable HTTP keep-alive. And I
> would be amazed if the Jetty folks did that by default.
>
> I suspect what you are seeing are the effects of different read
> timeouts. If the connection times out waiting for the client to send the
> data *before* the client tries to send the next request, you'll get the
> behaviour you describe.
>
> The problem is that the server can't differentiate between slow clients
> and misbehaving clients, so by lowering the timeout to work around the
> broken clients you may end up breaking slower clients unintentionally.
>
> As I said, your best solution is to fix the broken client.
>
> Mark
>
> > Our use case:
> > Client ------> Jetty ---> Apache-Camel HTTP Proxy ---> tomcat (Spring boot).
> >
> > The failure on the SAME TCP connection occurs at Tomcat, not at Jetty,
> > for the same TCP connection.
> >
> > Thanks,
> > Bhavesh
> >
> > On Thu, Feb 7, 2019 at 11:25 AM Mark Thomas <ma...@apache.org> wrote:
> >
> >> On 07/02/2019 18:48, Bhavesh Mistry wrote:
> >>> Hello Tomcat Developers,
> >>>
> >>> I have a unique situation with HTTP protocol payload parsing and the
> >>> Content-Length header.
> >>
> >> There is nothing unique here.
> >>
> >>> When the PUT/POST Content-Length is not correct
> >>> (the client sends a wrong Content-Length), Tomcat is able to parse the
> >>> request and respond with 2xx, but a subsequent request on the SAME TCP
> >>> connection fails with a corrupted HTTP header.
> >>
> >> As expected.
> >>
> >> Tomcat can't read minds. If the content-length header is not correct,
> >> Tomcat can't correctly identify the end of the request, so it is going
> >> to read too much / too little and - on a keep-alive connection - the
> >> next request is going to fail.
> >>
> >> There is nothing unusual about this.
> >>
> >> There is no Tomcat bug here.
> >>
> >> You need to fix the broken client so the content-length is correctly
> >> set.
> >>
> >> You could disable keep-alive connections.
> >> That would limit the failures
> >> to the faulty requests but at the cost of reduced performance.
> >>
> >> Mark
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> >> For additional commands, e-mail: users-h...@tomcat.apache.org
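[Editor's note] Mark's fallback suggestion of disabling keep-alive can be done on the Tomcat side via the HTTP connector's `maxKeepAliveRequests` attribute. A sketch for `server.xml`; the port and timeout values are placeholders:

```xml
<!-- maxKeepAliveRequests="1" disables HTTP keep-alive, so a request
     with a bad Content-Length can only corrupt its own connection,
     at the cost of a new TCP connection per request. -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxKeepAliveRequests="1" />
```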
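[Editor's note] The desync Mark describes can be simulated in a few lines. This is a minimal sketch, not Tomcat's actual parser: an HTTP/1.1 server's only body-framing information is the declared Content-Length, so when the client declares 10 bytes but sends only 5, the first 5 bytes of the next pipelined request are silently consumed as body, and what remains parses as a corrupt start line.

```python
def read_request(stream: bytes, pos: int):
    """Parse one request from `stream` starting at `pos`.

    Returns (start_line, body, new_pos). The body is framed entirely
    by the declared Content-Length, as in HTTP/1.1.
    """
    head_end = stream.index(b"\r\n\r\n", pos)
    head = stream[pos:head_end].decode("ascii", errors="replace")
    lines = head.split("\r\n")
    start_line = lines[0]
    length = 0
    for line in lines[1:]:
        name, _, value = line.partition(":")
        if name.strip().lower() == "content-length":
            length = int(value.strip())
    body_start = head_end + 4
    return start_line, stream[body_start:body_start + length], body_start + length

# Client sends a 5-byte body but declares Content-Length: 10, then
# sends a GET on the same keep-alive connection.
stream = (b"PUT /api HTTP/1.1\r\nContent-Length: 10\r\n\r\n"
          b"hello"
          b"GET /api HTTP/1.1\r\n\r\n")

line1, body1, pos = read_request(stream, 0)
print(line1)   # PUT /api HTTP/1.1
print(body1)   # b'helloGET /'  <- the first bytes of the GET are eaten
line2, _, _ = read_request(stream, pos)
print(line2)   # api HTTP/1.1  <- a corrupt start line, hence the 400
```

This matches step 5 of the walkthrough above: the server never "reads ahead"; it simply counts off Content-Length bytes, and the count happens to cross into the next request.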
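[Editor's note] One common way clients end up with a wrong Content-Length (illustrative, not taken from this thread): computing the header from the *character* count of a string instead of the *byte* length of the encoded body. For non-ASCII payloads the two differ.

```python
# Hypothetical client-side bug: Content-Length must be the number of
# bytes on the wire, not the number of characters in the string.
body = '{"greeting": "Καλημέρα"}'   # JSON with non-ASCII characters

wrong = len(body)                   # character count: 24
right = len(body.encode("utf-8"))   # byte count on the wire: 32

print(wrong, right)                 # 24 32
```

A client that sets `Content-Length: 24` here under-declares the body by 8 bytes, and the trailing bytes are then read by the server as the start of the next request.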