On 05/27/2013 01:29 AM, Jim Meyering wrote:
> Pádraig Brady wrote:
>
>> unarchive 13530
>> stop
>
> Thanks.
>
> ...
>>> @@ -358,6 +356,14 @@ elide_tail_bytes_pipe (const char *filename, int fd,
>>> uintmax_t n_elide_0)
>>>
>>>        if (buffered_enough)
>>>          {
>>> +          if (n_elide_0 != n_elide)
>>> +            {
>>> +              error (0, 0, _("memory exhausted while reading %s"),
>>> +                     quote (filename));
>>> +              ok = false;
>>> +              goto free_mem;
>>> +            }
>>> +
> ...
>> Oh right, it's coming back to me a bit now.
>> So by removing these upfront checks where possible,
>> it only makes sense if the program could, under different amounts of
>> free memory available, fulfil the operation up to the specified limit.
>> In this case though, the program could never fulfil the request,
>> so it's better to fail early, as is the case in the code currently?
>
> Well, it *can* fulfill the request whenever the request is degenerate,
> i.e., when the size of the input is smaller than N and also small enough
> to be read into memory.

Sure, but...

> Technically, we could handle this case the same way we handle it
> in tac.c: read data from nonseekable FD and write it to a temporary file.
> I'm not sure it's worth the effort here, though.

Yes, ideally we could avoid this limit,
though I don't see it as a priority either.
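If we ever did go that route, I imagine it would look roughly like the
following (an untested sketch only; the helper name and buffer size are
illustrative, not the actual tac.c code):

  #include <stdbool.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Copy all of IN_FD to an unlinked temporary file and return a stream
     rewound to its start, or NULL on failure.  The caller could then pass
     fileno (tmp) to the existing seekable-input code path.  */
  static FILE *
  copy_to_temp (int in_fd)
  {
    FILE *tmp = tmpfile ();       /* created already unlinked */
    if (!tmp)
      return NULL;

    char buf[8192];
    ssize_t n = 0;
    bool ok = true;

    while (ok && (n = read (in_fd, buf, sizeof buf)) > 0)
      ok = fwrite (buf, 1, n, tmp) == (size_t) n;

    if (!ok || n < 0 || fflush (tmp) != 0 || fseek (tmp, 0, SEEK_SET) != 0)
      {
        fclose (tmp);
        return NULL;
      }
    return tmp;
  }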
> ...
>>> -(ulimit -v 20000; head --bytes=-E < /dev/null) || fail=1
>>> +(ulimit -v 20000; head --bytes=-$OFF_T_MAX < /dev/null) || fail=1
>
> I'm inclined to make the above (nonseekable input) cases succeed,
> for consistency with the seekable-input case, like this:
>
>     : > empty
>     head --bytes=-E empty
>
> I confess that I did not like the way my manual test ended up
> using so much memory... but it couldn't know if it was going
> to be able to succeed without actually reading/allocating all
> of that space.
>
> If we give up immediately, we fail unnecessarily in cases like
> the above, where the input is smaller than N.
>
> What do you think?

...

I'm inclined to treat a value that could never be fulfilled in total as invalid.
Otherwise one might run into unexpected limits in _future_.
This is the same way we immediately error on values over UINTMAX_MAX,
rather than silently truncating them to something we can handle.

thanks,
Pádraig.
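P.S. To be concrete, the kind of upfront rejection I mean would be roughly
the following (an illustrative sketch only: the ELIDE_MAX bound, helper name,
and message are made up here, not the actual head.c parsing code):

  #include <errno.h>
  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Hypothetical cap on an elide count we could ever hope to satisfy.  */
  #define ELIDE_MAX ((uintmax_t) SIZE_MAX / 2)

  /* Parse ARG as a byte count, rejecting -- rather than silently
     truncating -- anything that could never be fulfilled.  */
  static uintmax_t
  parse_elide_count (const char *arg)
  {
    char *end;
    errno = 0;
    uintmax_t n = strtoumax (arg, &end, 10);
    if (errno == ERANGE || end == arg || *end != '\0' || n > ELIDE_MAX)
      {
        fprintf (stderr, "head: invalid number of bytes: '%s'\n", arg);
        exit (EXIT_FAILURE);
      }
    return n;
  }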