On 22.09.21 22:18, John Snow wrote:
On Fri, Sep 17, 2021 at 7:00 AM Hanna Reitz <hre...@redhat.com> wrote:
[...]
As you say, run_linters() to me seems very iotests-specific still: it
emits specific output that is compared against a reference output.
That’s fine for 297, but not fine for a function provided by a
linters.py, I’d say.
I’d prefer run_linters() to return something like a
`Mapping[str, Optional[str]]` that maps each tool to its output in
case of error, i.e. ideally:
`{'pylint': None, 'mypy': None}`
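(A minimal sketch of what such a run_linters() could look like; this is
not the actual linters.py, just an illustration assuming the tools are
run as subprocesses with their combined stdout/stderr captured, and
with a placeholder file list:)
```
# Hypothetical sketch only: run each tool as a subprocess and map its
# name to its captured output on failure, or to None on success.
import subprocess
from typing import Mapping, Optional

FILES = ['qemu/']  # placeholder for whatever set of files 297 checks


def run_linters() -> Mapping[str, Optional[str]]:
    results = {}
    for tool in ('pylint', 'mypy'):
        proc = subprocess.run(
            [tool, *FILES],
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            universal_newlines=True,
        )
        results[tool] = None if proc.returncode == 0 else proc.stdout
    return results
```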
Splitting the test apart into two sub-tests is completely reasonable.
Python CI right now has individual tests for pylint, mypy, etc.
297 could format it something like
```
for tool, output in run_linters().items():
    print(f'=== {tool} ===')
    if output is not None:
        print(output)
```
Or something.
To check for errors, you could put a Python script in python/tests
that checks `any(output is not None for output in
run_linters().values())` or something (and on error prints the
output).
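(Such a check script could be as small as the following; it assumes a
linters.py exposing the run_linters() proposed above, so both the
module path and the API are hypothetical here:)
```
#!/usr/bin/env python3
# Hypothetical python/tests script: fail if any linter produced error
# output, and print that output so a human can see why it failed.
import sys

from linters import run_linters  # assumed module and API, see above


def main() -> int:
    failed = False
    for tool, error_msg in run_linters().items():
        if error_msg is not None:
            failed = True
            print(f'=== {tool} ===')
            print(error_msg)
    return 1 if failed else 0


if __name__ == '__main__':
    sys.exit(main())
```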
Pulling out run_linters() into an external file and having it print
something to stdout just seems too iotests-centric to me. I suppose as
long as the return code is right (which this patch is for), it should
work for Avocado’s simple tests, too (which I don’t like very much
either, by the way, because they too seem archaic to me), but, well.
It almost seems like the Avocado test should just run ./check then.
Yeah. Ideally, we'd just have a mypy.ini and a pylintrc that configure
the tests adequately, and then 297 (or whoever else) would just call
the linters, which would in turn read that same configuration. This
series is somewhat of a stop-gap to measure the temperature of the
room and see how important it was to leave 297 inside of iotests. So,
I did the conservative thing that's faster to review, even if it now
looks *slightly* fishy.
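(Purely as an illustration of that idea: once all the option tuning
lives in checked-in mypy.ini / pylintrc files, whatever calls the
linters can stay this thin. The helper below is hypothetical, not part
of the series:)
```
# Hypothetical thin wrapper: no --disable=... or --ignore-missing-imports
# flags here; all of that is assumed to live in pylintrc / mypy.ini,
# which the tools pick up themselves when run from the right directory.
import subprocess
import sys


def check(files) -> int:
    for tool in ('pylint', 'mypy'):
        if subprocess.run([tool, *files]).returncode != 0:
            return 1
    return 0


if __name__ == '__main__':
    sys.exit(check(sys.argv[1:]))
```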
As for things being archaic or not ... possibly, but why involve extra
complexity if it isn't warranted?
My opinion is that an interface of “prints something to stdout and
returns an integer status code” is non-intuitive and thus actually
rather complex. That’s why I’d prefer a different kind of complexity,
namely one with a more intuitive interface.
I’m also not sure where the extra complexity would be for a
`run_linters() -> Mapping[str, Optional[str]]`, because 297 just needs
the loop suggested above over `run_linters().items()`, and as for the
Avocado test...
a simple pass/fail works perfectly well.
I don’t find `any(error_msg for error_msg in run_linters().values())`
much more complex than pass/fail.
(Note: above, I called it `output`. I probably should have called it
`error_msg`, as I did now, to clarify that it’s `None` in case of
success and a string otherwise.)
(And the human can read the output to understand WHY it failed.) If
you want more rigorous analytics for some reason, we can discuss the
use cases and figure out how to represent that information, but that's
definitely a bit beyond scope here.
[...]
Like, can’t we have a Python script in python/tests that imports
linters.py, invokes run_linters() and sensibly checks the output? Or,
you know, at the very least not have run_linters() print anything to
stdout and not have it return an integer code. linters.py:main() can
do that conversion.
Well, I certainly don't want to bother parsing output from Python
tools and trying to make sure that it works sensibly cross-version and
cross-distro. The return code being 0/non-zero is vastly simpler. Let
the human read the output log on failure cases to get a sense of what
exactly went wrong. Is there some reason why parsing the output would
be beneficial to anyone?
Perhaps there was a misunderstanding here, because there’s no need to
parse the output in my suggestion. `run_linters() -> Mapping[str,
Optional[str]]` would map a tool name to its potential error output;
if the tool exited successfully (exit code 0), then that
`Optional[str]` error output would be `None`. It would only be a
string if there was an error.
(The Python GitLab CI hooks don't even bother printing output to the
console unless the tool returns non-zero, and then they just print
whatever they saw. Seems to work well in practice.)
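(That pattern, staying quiet on success and dumping everything on
failure, boils down to something like the following sketch; this is an
assumption about the shape of those hooks, not their actual code:)
```
# Sketch of "quiet on success, print everything on failure"; not the
# actual GitLab CI hook code.
import subprocess
import sys


def run_quietly(argv) -> int:
    proc = subprocess.run(
        argv,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        universal_newlines=True,
    )
    if proc.returncode != 0:
        print(proc.stdout, end='')
    return proc.returncode


if __name__ == '__main__':
    sys.exit(run_quietly(sys.argv[1:]))
```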
Or, something completely different: perhaps my problem is that you
put linters.py as a fully standalone test into the iotests directory,
without it being an iotest. So, I think I could also agree on putting
linters.py into python/tests and then having 297 execute that. Or,
you know, we just drop 297 altogether, as you suggest in patch 13 – if
that’s what it takes, then so be it.
I can definitely drop 297 if you'd like, and work on making the linter
configuration more declarative. I erred on the side of less movement
instead of more so that disruption would be minimal. It might take me
some extra time to work out how to un-scriptify the test, though. I'd
like to get a quick temperature check from kwolf on this before I put
the work in.
So, since we seem to want to keep 297, would it be possible to have
297 run a linters.py that’s in python/tests?
Hanna
PS: Also, summing up processes’ return codes doesn’t sit right with me.
That's the part Vladimir didn't like. There was no real reason for it,
other than it was "easy".
I very much don’t find it easy, because it’s semantically wrong and thus
comparatively hard to understand.
I can make it a binary 0/1 return instead, if that'd grease the wheels.
Well, while I consider it necessary, it doesn’t really make the patch
more palatable to me.
Hanna