On 01/22/2015 03:32 AM, Stefan Beller wrote:
> Signed-off-by: Stefan Beller <sbel...@google.com>
> ---
>  t/t1400-update-ref.sh | 28 ++++++++++++++++++++++++++++
>  1 file changed, 28 insertions(+)
> 
> diff --git a/t/t1400-update-ref.sh b/t/t1400-update-ref.sh
> index 7b4707b..47d2fe9 100755
> --- a/t/t1400-update-ref.sh
> +++ b/t/t1400-update-ref.sh
> @@ -973,4 +973,32 @@ test_expect_success 'stdin -z delete refs works with packed and loose refs' '
>       test_must_fail git rev-parse --verify -q $c
>  '
>  
> +run_with_limited_open_files () {
> +     (ulimit -n 32 && "$@")
> +}
Regarding the choice of "32", I wonder what is the worst-case number of
open file descriptors that are needed *before* counting the ones that
are currently wasted on open loose-reference locks. On Linux it seems to
be only 4 with my setup:

    $ (ulimit -n 3 && git update-ref --stdin </dev/null)
    bash: /dev/null: Too many open files
    $ (ulimit -n 4 && git update-ref --stdin </dev/null)
    $

This number might depend a bit on details of the repository, like
whether config files include other config files. But as long as the
"background" number of fds required stays at least a few below 32, then
your number should be safe.
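For anyone who wants to check the "background" fd count on their own
platform, here is a rough sketch that automates the manual bisection I
did above. The helper name `min_fds_for` is just illustrative, not part
of the patch; it linearly probes for the smallest "ulimit -n" at which
a command still succeeds:

```shell
#!/bin/sh
# Illustrative helper (not part of the patch): find the smallest
# "ulimit -n" value at which the given command still succeeds.
min_fds_for () {
	n=3
	# Each probe runs in its own subshell, so lowering the limit
	# there does not affect later iterations.
	while ! (ulimit -n "$n" && "$@") >/dev/null 2>&1
	do
		n=$((n + 1))
		if test "$n" -gt 64
		then
			echo "needs more than 64 fds" >&2
			return 1
		fi
	done
	echo "$n"
}

# Probe git in a throwaway repository, mirroring the manual test above.
dir=$(mktemp -d)
git init -q "$dir"
(cd "$dir" && min_fds_for git update-ref --stdin </dev/null)
rm -rf "$dir"
```

On my Linux box this prints 4, matching the manual probe; the exact
number on other platforms is the open question.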

Does anybody know of a platform where file descriptors are eaten up
gluttonously, for example one per shared library in use or something
like that? That's the only thing I can think of that could potentially
make your choice of 32 problematic.

> [...]

Michael

-- 
Michael Haggerty
mhag...@alum.mit.edu
