Interested in helping open source friends on HP-UX?
On Thu, Feb 19, 2015 at 12:20:02PM +0100, Michael J Gruber wrote:
> OK, so we should use NO_ICONV on HP_UX then.
>
> >> Failing so many tests with NO_ICONV is certainly not ideal, but I'm not
> >> sure we should care to protect so many tests with a prerequisite.
> >
> > How feasible is it to isolate those tests into separate test files that
> > people that know to not use e.g. Asian can safely ignore them?
>
> We have the prerequisite mechanism for that, and most probably, the
> tests are "isolated" already, in the sense that with NO_ICONV, only
> trivial setup tests succeed for those test files but all "proper" tests
> fail. But I'll check. Need a good test to set the prerequisite, though.
I took a first pass at this. The results are below (and I am hoping one
of you can use it as a base to build on, as I do not want to commit to
doing the second half, as you will see :) ).
It passes NO_ICONV through to the test suite, sets up a prerequisite,
disables some test scripts which are purely about i18n (e.g.,
t3900-i18n-commit), and marks some of the scripts with one-off tests
using the ICONV prereq.
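To sketch the propagation step (my assumption about the mechanism, not a
quote from the patch: git's Makefile records build knobs in
GIT-BUILD-OPTIONS, which t/test-lib.sh sources, so the patch would add a
line like the one below to that file):

```shell
# Hypothetical illustration of how a NO_ICONV build would reach the
# test suite. The Makefile would emit this into GIT-BUILD-OPTIONS:
NO_ICONV=YesPlease

# ...and after test-lib.sh sources it, a prereq can key off the variable:
if test -n "$NO_ICONV"
then
	echo "NO_ICONV build: ICONV prereq will not be set"
fi
```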
Note that it also has some code changes around reencode_string_len.
These aren't strictly necessary, but they silence gcc warnings when
compiled with NO_ICONV. In that case we do:
#define reencode_string_len(a,b,c,d,e) NULL
but "e" is an out-parameter. We don't promise it is valid if the
function returns NULL (which it does here). I'm kind of surprised the
compiler doesn't realize that:
foo = reencode_string_len(...);
if (foo)
bar();
is dead code, since the first line becomes "foo = NULL". So that's
optional.
So, on to the tricky parts. Here are the failures that remain:
1. The script builds up a commit history as it runs, and later
   tests depend on that history for things like commit timestamps or
   the exact shape of history. t9350 is an example of this (it has
   one failing test which can be marked, but then other tests later
   fail in confusing ways).
2. The script creates commits with encoded commit messages, then uses
those both for cases that care about the encoding, and those that
do not. t4041 is an example here. I think it would be best to use
vanilla commit messages for the main body of tests, and then
explicitly test the encoding-related features separately. I think
t4205 and t6006 are in this boat, too.
I also tested this on a system with a working "iconv". If we are
building with NO_ICONV, I am tempted to say that there should be no need
to run the "iconv" command-line program at all. But t6006, for example,
does it a lot outside of any test_expect_*. Probably it should be:
test_lazy_prereq ICONV '
	test -z "$NO_ICONV" &&
	utf8_o=$(printf "\303\263") &&
	latin1_o=$(printf "\363") &&
	test "$(printf "%s" "$utf8_o" | iconv -f UTF-8 -t ISO-8859-1)" = "$latin1_o"
'
or something, and all of that setup should be wrapped in a
"test_expect_success ICONV ...". Of course that is the easy part. The
hard part is splitting the ICONV setup from the vanilla commit setup so
that the other tests can run.
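A sketch of the shape that split might take (the stand-in below is only
so the snippet runs outside git's t/ directory; the real
test_expect_success comes from t/test-lib.sh and actually honors the
prereq instead of ignoring it, and both test titles and bodies here are
hypothetical):

```shell
# Minimal stand-in for git's test_expect_success, purely for
# illustration: it drops an optional prereq argument and runs the body.
test_expect_success () {
	if test $# = 3; then shift; fi
	echo "ok - $1" && eval "$2"
}

# Vanilla setup runs unconditionally...
test_expect_success 'setup vanilla commits' '
	msg="ordinary message"
'

# ...while encoding-dependent setup is guarded by the ICONV prereq,
# so the other tests can still run on a NO_ICONV build.
test_expect_success ICONV 'setup encoded commits' '
	latin1_msg=$(printf "\363")
'
```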
12 files changed: