================
@@ -96,7 +115,7 @@ Expected<std::optional<Message>> Transport::Read() {
     return createStringError(
         formatv("invalid content length {0}", *raw_length).str());
 
-  Expected<std::string> raw_json = ReadFull(*input, length);
----------------
vogelsgesang wrote:

We should probably only apply a timeout before receiving the first byte of a
message.

Otherwise, we might run into hard-to-debug issues where the client sends

```
Content-Length: 123
\r\n\r\n
<wait for 2 seconds>
actual request body
```

With the current logic, we would first consume the `Content-Length:
123\r\n\r\n` header, then run into the timeout. Upon retrying the read in
`DAP::Loop()`, we would then find the request body without the required
`Content-Length` header.

The client would be compliant with the Debug Adapter Protocol specification, 
yet `lldb-dap` would choke on this message.
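For illustration, here is a standalone sketch of "timeout only before the first
byte". This is not the actual lldb-dap `Transport` code; `WaitForFirstByte`,
`ReadExactly`, and `ReadMessage` are made-up helpers built on POSIX
`poll`/`read`, just to show the shape of the idea:

```cpp
// Sketch only: hypothetical helpers, not lldb-dap's Transport API.
#include <poll.h>
#include <unistd.h>

#include <cstdlib>
#include <optional>
#include <string>

// Returns true iff at least one byte is readable on `fd` within `timeout_ms`.
bool WaitForFirstByte(int fd, int timeout_ms) {
  pollfd pfd{fd, POLLIN, 0};
  return poll(&pfd, 1, timeout_ms) > 0 && (pfd.revents & POLLIN);
}

// Reads exactly `n` bytes, blocking as long as necessary (no timeout).
std::optional<std::string> ReadExactly(int fd, size_t n) {
  std::string buf(n, '\0');
  size_t off = 0;
  while (off < n) {
    ssize_t r = read(fd, &buf[off], n - off);
    if (r <= 0)
      return std::nullopt; // EOF or error
    off += static_cast<size_t>(r);
  }
  return buf;
}

// Applies the timeout only while waiting for the first byte of a message.
// On timeout nothing has been consumed, so the caller can safely check its
// shutdown flag and simply call ReadMessage again.
std::optional<std::string> ReadMessage(int fd, int timeout_ms) {
  if (!WaitForFirstByte(fd, timeout_ms))
    return std::nullopt; // timed out before the message started

  // A message has started: from here on, block until it is complete, so a
  // slow client can never leave us with a half-consumed header.
  std::string header;
  while (header.find("\r\n\r\n") == std::string::npos) {
    std::optional<std::string> byte = ReadExactly(fd, 1);
    if (!byte)
      return std::nullopt;
    header += *byte;
  }

  size_t pos = header.find("Content-Length: ");
  if (pos == std::string::npos)
    return std::nullopt;
  size_t length = std::strtoull(header.c_str() + pos + 16, nullptr, 10);
  return ReadExactly(fd, length);
}
```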

It seems we are only using the timeout so that the `disconnecting` flag is
checked regularly in `DAP::Loop`. Instead of using a timeout to wake up the
reader thread, would it make sense to instead call `Transport::Close`
when we want to shut down the reader? That should also cancel any outstanding
reads, shouldn't it?
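As a tiny standalone demonstration of that idea (again, not lldb-dap code):
cancelling an outstanding blocking read by tearing the connection down. One
assumption on my part is that a plain `close()` from another thread is not
guaranteed to interrupt an already-blocked `read()` on POSIX, so this sketch
uses `shutdown()` on a socketpair as the stand-in for `Transport::Close`:

```cpp
// Sketch only: socketpair + shutdown() standing in for Transport::Close.
#include <sys/socket.h>
#include <unistd.h>

#include <cstdio>
#include <thread>

int main() {
  int fds[2];
  if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0)
    return 1;

  std::thread reader([&] {
    char buf[128];
    // Blocks until data arrives or the socket is shut down.
    ssize_t n = read(fds[0], buf, sizeof(buf));
    std::printf("reader woke up, read returned %zd\n", n); // 0 == EOF
  });

  // Simulate Transport::Close(): shut the socket down; the blocked read()
  // in the reader thread returns 0 and the thread exits cleanly, with no
  // periodic timeout needed.
  shutdown(fds[0], SHUT_RDWR);
  reader.join();
  close(fds[0]);
  close(fds[1]);
  return 0;
}
```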

https://github.com/llvm/llvm-project/pull/130169