On 24.11.21 18:32, Stoiko Ivanov wrote:
> Huge thanks for addressing this so quickly and elegantly!
> 
> One question/nit:
> 
> On Wed, 24 Nov 2021 15:47:48 +0100
> Dominik Csapak <d.csa...@proxmox.com> wrote:
> 
>> instead of accumulating the whole output of 'mini-journalreader' in
>> the api call (this can be quite big), use the download mechanic of the
>> http-server to stream the output to the client.
>>
>> we lose some error handling possibilities, but we do not have
>> to allocate anything here, and since perl does not free memory after
>> allocating[0] this is our desired behaviour.
>>
>> to keep api compatibility, we need to give the journalreader the '-j'
>> flag to let it output json.
>>
>> also tell the http server that the encoding is gzip and pipe
>> the output through it.
>>
>> 0: https://perldoc.perl.org/perlfaq3#How-can-I-free-an-array-or-hash-so-my-program-shrinks?
>>
>> Signed-off-by: Dominik Csapak <d.csa...@proxmox.com>
>> ---
>>  PVE/API2/Nodes.pm | 22 ++++++++++++++--------
>>  1 file changed, 14 insertions(+), 8 deletions(-)
>>
>> diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
>> index 565cbccc..d57a1937 100644
>> --- a/PVE/API2/Nodes.pm
>> +++ b/PVE/API2/Nodes.pm
>> @@ -819,19 +819,25 @@ __PACKAGE__->register_method({
>>      my $rpcenv = PVE::RPCEnvironment::get();
>>      my $user = $rpcenv->get_user();
>>  
>> -    my $cmd = ["/usr/bin/mini-journalreader"];
>> +    my $cmd = ["/usr/bin/mini-journalreader", "-j"];
>>      push @$cmd, '-n', $param->{lastentries} if $param->{lastentries};
>>      push @$cmd, '-b', $param->{since} if $param->{since};
>>      push @$cmd, '-e', $param->{until} if $param->{until};
>> -    push @$cmd, '-f', $param->{startcursor} if $param->{startcursor};
>> -    push @$cmd, '-t', $param->{endcursor} if $param->{endcursor};
>> +    push @$cmd, '-f', PVE::Tools::shellquote($param->{startcursor}) if $param->{startcursor};
>> +    push @$cmd, '-t', PVE::Tools::shellquote($param->{endcursor}) if $param->{endcursor};
>> +    push @$cmd, ' | gzip ';
> Not sure which would be more efficient - but the http-server does support
> gzipping the result from the API

There is no api result in that case, the call just returns the open file
handle and AnyEvent can then stream it directly, see the response_stream
method in PVE::APIServer::AnyEvent.
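
Roughly, the handler then only needs to open a pipe and hand the fh over,
something like the sketch below (written from memory, not the exact patch,
so the field names of the download hash should be double checked against
pve-http-server):

    use IO::File;  # already pulled in by the API module in practice

    # join the argument vector into a shell command line so we can pipe
    # the json output through gzip before handing it to the client
    my $cmdstr = join(' ', @$cmd);
    my $fh = IO::File->new("$cmdstr |")
        or die "could not execute '$cmdstr' - $!\n";

    return {
        download => {
            fh => $fh,
            stream => 1,
            'content-type' => 'application/json',
            'content-encoding' => 'gzip',
        },
    };

That way nothing gets buffered in the perl daemon, the http server just
shovels the gzip'd bytes to the client as they arrive.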

> might it make sense to let the worker do the gzip-encoding vs. forking
> gzip here?

Contrary to our Rust infra from PBS, we do not have a streaming encoder
directly in the server that we could just layer in between;
pve-http-server can currently only compress a response once it has the
whole thing in memory, which is exactly what we want to avoid here.

The response method explicitly skips compression for the stream-an-fh
case, see AnyEvent.pm line 325:

if ($content && !$stream_fh) {
   # ...
   my $comp = Compress::Zlib::memGzip($content);

What we can do, though, is compress directly in the mini-journalreader
itself, but that will be way easier once we rewrite it in Rust (heh), as
then we can reuse our existing infrastructure to just layer that in
between.

