On 4/2/25 22:09, Thomas Lamprecht wrote:
Am 12.03.25 um 14:27 schrieb Dominik Csapak:
In some situations, e.g. having a large resource mapping, the UI can
generate a request that is bigger than the current limit of 64KiB.
Our files in pmxcfs can grow up to 1 MiB, so theoretically, a single
mapping can grow to that size. In practice, a single entry will have
much less. In #6230, a user has a mapping of about 130KiB.
Increase the limit to 512KiB so we have a bit of buffer left.
s/buffer/headroom/ ?
yes, makes more sense^^
We also have to increase 'rbuf_max' here, otherwise the request
will fail because the buffer is too small for it.
Since the post limit and rbuf_max are tightly coupled, reflect that
in the code by summing the maximum post size and the maximum header
size there.
Signed-off-by: Dominik Csapak <d.csa...@proxmox.com>
---
sending as RFC because:
* not sure about the rbuf_max calculation, but we have to increase it
when we increase $limit_max_post (not sure how much is needed exactly)
* there are alternative ways to deal with this, but some of them are vastly
more work:
- optimize the pci mapping to reduce the number of bytes we have to
send (e.g. by reducing the property names, or somehow magically
detect devices that belong together)
- add a new api for the mappings that can update the entries without
sending the whole mapping again (not sure if we can make this
backwards compatible)
- ignore the problem and simply tell the users to edit the file
manually (I don't like this one...)
also, I tried to benchmark this, but did not find a tool that does it
well (e.g. apachebench complained about SSL, and I couldn't get it to
work right). @Thomas you did such benchmarks last according to git
log, do you remember what you used back then?
argh, my commit message back then looks like I tried to write down what I
used but then fumbled (or got knocked on the head) and sent it out unfinished.
To my defence, Wolfgang applied it ;P
I'm not totally sure what I used back then, might have been something
custom-made too. FWIW, recently I used oha [0] and found it quite OK. I
did not try it with POST data, but one can define the method, pass a
request body directly as a CLI argument or from a file, and there is a
flag to allow "insecure" TLS certs.
[0]: https://github.com/hatoo/oha
thanks, I'll try to do some benchmarks with it
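For reference, a benchmark run along the lines described above could look
roughly like this (the URL, mapping name, payload file, and request counts
are made up for illustration; the flags match oha's documented options, but
check `oha --help` for your version):

```shell
# POST a large JSON payload 200 times over 20 connections,
# accepting the self-signed certificate of a test host.
# -D reads the request body from a file, -T sets the content type,
# --insecure skips TLS certificate verification.
oha -n 200 -c 20 -m POST \
    -D mapping-payload.json \
    -T 'application/json' \
    --insecure \
    https://pve.example.test:8006/api2/json/cluster/mapping/pci/somemapping
```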
@@ -1891,7 +1891,7 @@ sub accept_connections {
$self->{conn_count}++;
$reqstate->{hdl} = AnyEvent::Handle->new(
fh => $clientfh,
- rbuf_max => 64*1024,
+ rbuf_max => $limit_max_post + ($limit_max_headers * $limit_max_header_size),
The header part is wrong, as the header limits are independent: the
request must have fewer than $limit_max_headers separate headers, and
all of those together must be smaller than $limit_max_header_size.
So just adding $limit_max_header_size is enough; no multiplication required.
ah yes, seems I read those wrong
timeout => $self->{timeout},
linger => 0, # avoid problems with ssh - really needed ?
on_eof => sub {
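Putting the reviewer's point into numbers, a minimal sketch of the corrected
calculation (the concrete limit values here are assumptions for illustration,
not taken from the pve-http-server source):

```python
# Sketch of the corrected rbuf_max calculation.
# The limit values below are assumed example values, not the actual
# constants from pve-http-server.
KiB = 1024
limit_max_post = 512 * KiB         # proposed new POST body limit
limit_max_header_size = 8 * KiB    # assumed limit for ALL headers together

# Since all headers combined must already fit within
# limit_max_header_size, the read buffer only needs room for one
# request body plus one header block -- no per-header multiplication.
rbuf_max = limit_max_post + limit_max_header_size
print(rbuf_max)
```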
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel