Taylan Kammer <taylan.kam...@gmail.com> writes:

> Do I understand correctly that this is an additional test suite for
> testing SRFI-64 itself?  Like the "meta test suite" shipped with
> SRFI-64?
Yes, exactly.  The vast majority of the tests are derived directly from
the specification, with a few non-portable ones written just for my
implementation, but those can be turned off.

> Is there a brief description somewhere on how to run it with Guile?
> Would be really neat if I can use it to further test my
> implementation.

It is not hard, but at the same time the test suite is not really
stand-alone, so the instructions will be a bit hackish.

1. Download https://files.wolfsden.cz/releases/guile-wolfsden/guile-wolfsden-0.0.1.tar.gz
2. Unpack it.
3. Open `build-aux/srfi64test-driver.scm'.
4. On line 35 replace 'wolfsden with 'guile.
5. Open `Makefile.am'.
6. Delete lines 17-19 (the assignment to the `TESTS' variable).
7. Open `tests/srfi-64/local.mk'.
8. On line 4 change `TESTS +=' to `TESTS ='.
9. Build the project and run the tests:

   $ autoreconf -fvi
   $ ./configure --disable-doc-snarf
   $ make
   $ make check

Due to step 4, only the portable tests (those written purely from the
specification) will run, and they will run against (srfi srfi-64).
Since your library is available as that module, just make sure it is on
the load path.

When I follow the steps, I get:

  # TOTAL: 340
  # PASS:  265
  # SKIP:  36
  # XFAIL: 0
  # FAIL:  39
  # XPASS: 0
  # ERROR: 0

You should get fewer FAILs, I guess (since you have already fixed many
problems, and some you did not have in the first place).  I am sure you
will dispute some of those tests.  ^_^

>> You can find my version here[0].  If you do not use Guix, building
>> from the tarball[1] might be easier.  Contrary to your version, mine
>> is available as (wolfsden srfi srfi-64).
>>
>> 0: https://git.wolfsden.cz/guile-wolfsden/
>> 1: https://wolfsden.cz/project/guile-wolfsden.html
>
> Your implementation seems written specifically with Guile in mind,
> which is a big plus I guess.

Yes, I decided to write my version in as readable a manner as possible
(well, at least I hope the code is readable), at the cost of
portability, since I have seen what portability did to (srfi srfi-64).
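By the way, independent of the suite, a quick way to check that your
module is the one being picked up from the load path is a minimal
SRFI-64 program (just a sketch; the file name is made up):

```scheme
;; smoke.scm -- minimal sanity check that (srfi srfi-64) resolves from
;; the load path.  Not part of the test suite above.
(use-modules (srfi srfi-64))

(test-begin "smoke")
(test-equal "addition" 4 (+ 2 2))
(test-assert "comparison" (< 1 2))
(test-end "smoke")
```

Run it with your library's directory prepended to the load path, e.g.
`guile -L /path/to/your/lib smoke.scm'; if the report shows 2 passes,
the module is being found.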
> If the quality of the implementations is the same or higher, in terms
> of observable behavior, then it should be preferred for Guile, I
> think.  If I find the time, I'll see if I can use your implementation
> to run some of my test suites, like the bytestructures test suite,
> and report if I notice any issues.

Oh, that would be much appreciated.  I did test my version against
Guix's test suite; it revealed 4 bugs in Guix's tests and none in my
library, so I hope the results for your project will be similar.

>>> In one case, the reference implementation clearly violates the
>>> specification: the simple test runner uses the `aux` field, which
>>> the spec claims it doesn't use.  (My implementation fixes this.)
>>> However, in this case it's not that clear-cut.
>>>
>>> In this case, I think raising an error is good default behavior,
>>> since the mismatched end name indicates a problem with the test
>>> suite itself rather than the code being tested.  If it poses a
>>> problem to the user, one can override that callback with the
>>> `test-runner-on-bad-end-name!` setter.
>>>
>>> What do you think?
>>
>> I agree that raising an error is good behavior.  However, I do not
>> think that on-bad-end-name-function is the place to do it.  In my
>> opinion the name mismatch is a hard error, in my implementation a
>> subclass of &programming-error[4].  If I am writing a new test
>> runner, the specification does not mention that raising the error is
>> *my* responsibility, just that test-end will signal an error.
>>
>> To rephrase: test-end is mandated to signal an error, but a custom
>> test runner has no provision requiring it to do so in
>> on-bad-end-name-function.  Hence I believe test-end needs to be the
>> one to signal the error.
>
> Makes sense I guess.
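For reference, overriding the callback as you describe would look
roughly like this (untested sketch against the SRFI-64 API; whether
test-end or only the handler signals the error is of course exactly
the point we are debating):

```scheme
;; Sketch: a runner that logs the mismatch before delegating to the
;; standard handler, which signals the error.
(use-modules (srfi srfi-64))

(define runner (test-runner-simple))
(test-runner-on-bad-end-name!
 runner
 (lambda (r begin-name end-name)
   (format (current-error-port)
           "bad end name: started ~s, ended ~s~%" begin-name end-name)
   ;; Delegate to the default, error-raising handler.
   (test-on-bad-end-name-simple r begin-name end-name)))

(test-with-runner runner
  (test-begin "outer")
  (test-end "inner"))   ; mismatched name => handler runs, error raised
```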
> I've generally tried to imitate the reference implementation's
> behavior as closely as possible in such matters, worrying that there
> might be code out there that relies on its various quirks, but maybe
> I'm being too paranoid.

I tried not to use the reference implementation that much, and instead
relied on the specification.  It was a slow and painful process.

> I don't have a strong opinion either way.  The number of people who
> want to write a test runner that does something special on
> bad-end-name (something other than raise an error) is probably very
> small.

I definitely agree on this one.

> - Making `test-end` itself raise an error would probably be most
>   convenient, so test runner authors don't have to take care of it.
>
> - But if `test-end` doesn't do it, it's not a big deal either IMO,
>   because all they would need to do is call
>   `(test-runner-on-bad-end-name! my-runner
>   test-on-bad-end-name-simple)` to make their custom runner raise an
>   error as well.  (And, if they want to do something before, they can
>   use a procedure that ends with the call
>   `(test-on-bad-end-name-simple ...)`.)
>
> The latter is my preference, because enabling the behavior via a
> single line of code is easy, whereas disabling it would be difficult
> or impossible if `test-end` were hardcoded to raise an error.  But if
> a SRFI-64 implementation made its `test-end` always raise an error,
> it probably wouldn't affect anyone in practice, so I wouldn't see it
> as a real problem.

I still think test-end itself raising is what the specification
mandates (whether it *should* mandate it is a different question :) ),
however I agree; I am also skeptical that anyone's code actually cares
either way.

Tomas

-- 
There are only two hard things in Computer Science: cache
invalidation, naming things and off-by-one errors.