It could work, though it's somewhat complex. I'm not sure why you'd need a
"solution", since you could just provide some high-level test cases. I would
worry about those constraining the program design or giving too much away,
however.

My professor for graduate algorithms automated grading through some
scripting. His script would run the student's code a few times with
different inputs (from a set of files). It would compare the program output
to another set of files containing the expected output, and generate a
pass/fail for each case. The professor then only needed to examine the
student's code if cases failed.
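A minimal sketch of how such a harness might look (the file layout, the
`.in`/`.expected` naming, and the function name are my assumptions, not how
his script actually worked):

```python
import subprocess
from pathlib import Path

def grade_submission(cmd, cases_dir):
    """Run a student's program (given as an argv list) against each
    *.in file in cases_dir, comparing its stdout to the matching
    *.expected file. Returns a list of (case_name, passed) pairs.

    Naming convention (.in / .expected) is an assumption for this sketch.
    """
    results = []
    for infile in sorted(Path(cases_dir).glob("*.in")):
        expected = infile.with_suffix(".expected").read_text()
        proc = subprocess.run(
            cmd,
            stdin=infile.open(),      # feed the test input on stdin
            capture_output=True,
            text=True,
        )
        # Simple exact-match comparison, as the professor's script did;
        # a real harness might normalize trailing whitespace first.
        results.append((infile.stem, proc.stdout == expected))
    return results
```

With something like this, the professor only has to look at a submission
when one of the `(name, passed)` pairs comes back `False`.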

He had it set up rather nicely: the script's report included the results of
all tests along with the student's source code, and the professor would
return this printout with our grade and any comments necessary.

Also, he'd provide us with a subset of the test cases ahead of time so we
could test our programs. This didn't give anything away or suggest any
program structure; only the format of the input and output was defined.
Usually that format was kept as simple as possible, and we didn't have to
handle invalid inputs.

We also submitted our code as a tarball through a web page. Very convenient
for all, though it requires some setup on the professor's part (but hey,
he's getting paid).

--
Daniel Siegmann
FJA-US, Inc.
(212) 840-2618 ext. 139
