On Fri, Jan 8, 2016 at 8:50 AM, Paul Eggert <egg...@cs.ucla.edu> wrote:
> On 01/07/2016 09:47 PM, Jim Meyering wrote:
>>
>> FAIL: encoding-error
>> ====================
>> ...
>> --- exp 2016-01-07 21:39:42.018646618 -0800
>> +++ out 2016-01-07 21:39:42.018646618 -0800
>> @@ -1 +1 @@
>> -Binary file in matches
>> +Pedro P\xe9rez
>> + fail=1
>
>
> I can't reproduce that in Fedora 23 x86-64, which is using gcc 5.3.1
> 20151207 (Red Hat 5.3.1-2).
>
> One hypothetical explanation is a bug or incompatibility in the
> bleeding-edge Debian shell, which I suppose could cause
> require_en_utf8_locale_ to do the wrong thing (i.e., to fail to report that
> the en_US.UTF-8 locale is missing). You might check the output of the
> command './get-mb-cur-max en_US.UTF-8' when you have the time.

Will investigate. In the meantime, here's a patch for the
false-positive failure I mentioned:
From 6c7f8166c7928cf48a8288c0333941f3bf3466c1 Mon Sep 17 00:00:00 2001
From: Jim Meyering <meyer...@fb.com>
Date: Thu, 7 Jan 2016 21:30:08 -0800
Subject: [PATCH] mb-non-UTF8-performance: avoid FP test failure on fast
 hardware

* tests/mb-non-UTF8-performance: Don't use a fixed size.
Otherwise, on a fast system, the fixed-size unibyte test
would complete in a nominal 0 ms, which might well be
smaller than 1/30 of the multibyte duration, provoking
a false positive test failure.  Instead, increase the
size of the input until we obtain a unibyte duration of
at least 10ms.
---
 tests/mb-non-UTF8-performance | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/tests/mb-non-UTF8-performance b/tests/mb-non-UTF8-performance
index e8433d8..fc371bd 100755
--- a/tests/mb-non-UTF8-performance
+++ b/tests/mb-non-UTF8-performance
@@ -27,11 +27,16 @@ fail=0
 # "expensive", making it less likely to be run by regular users.
 expensive_

-# Make this large enough so that even on high-end systems
-# it incurs at least 5-10ms of user time.
-yes $(printf '%078d' 0) | head -400000 > in || framework_failure_
+# Make the input large enough so that even on high-end systems
+# the unibyte test takes at least 10ms of user time.
+n_lines=100000
+while :; do
+  yes $(printf '%078d' 0) | head -$n_lines > in || framework_failure_
+  ubyte_ms=$(LC_ALL=C user_time_ 1 grep -i foobar in) || fail=1
+  test $ubyte_ms -ge 10 && break
+  n_lines=$(expr $n_lines + 200000)
+done

-ubyte_ms=$(LC_ALL=C user_time_ 1 grep -i foobar in) || fail=1
 require_JP_EUC_locale_
 mbyte_ms=$(user_time_ 1 grep -i foobar in) || fail=1

-- 
2.6.4
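For anyone reading along, the core idea of the patch, grow the input until the
baseline run takes a measurable amount of time, can be sketched in isolation.
This is a hedged sketch, not the test-suite code: `user_time_` is grep's own
helper, and the `wall_ms` function below is a hypothetical stand-in that
measures wall-clock time (not user time) and assumes GNU date's `%N` format.

```shell
#!/bin/sh
# Sketch of the grow-until-measurable-baseline technique from the patch.
# wall_ms is a hypothetical stand-in for the test suite's user_time_ helper;
# it measures wall-clock milliseconds, which is good enough for illustration.
wall_ms() {
  start=$(date +%s%N)              # GNU date extension: %N = nanoseconds
  "$@" > /dev/null 2>&1
  end=$(date +%s%N)
  echo $(( (end - start) / 1000000 ))
}

n=1000
while :; do
  seq "$n" > in                    # generate an n-line input file
  ms=$(wall_ms grep -i foobar in)  # time a case-insensitive scan over it
  test "$ms" -ge 10 && break       # stop once the baseline is measurable
  n=$((n * 2))                     # otherwise double the input and retry
done
echo "baseline: ${ms}ms at $n lines"
rm -f in
```

Doubling rather than adding a fixed increment keeps the number of retries
logarithmic in the input size needed, which matters on very fast hardware.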
