zturner added a comment.

It's not so much about the speed as it is about the general principle of 
exercising as little functionality as possible within a given test.  This way 
when a test fails, you already know exactly what the problem is.  When this 
test fails, is it in the step that creates the core or the step that reads the 
core?  If you have one test that creates a core and another test that reads a 
pre-generated core, and only the second one fails, you know where to look to 
find the problem.
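The split described above can be sketched roughly as follows. This is only an illustration of the principle, not actual LLDB test code: `create_core` and `read_core` are hypothetical stand-ins for the debugger's core-saving and core-loading machinery, and the "pre-generated" dump here is faked in a temp directory rather than checked in.

```python
import os
import tempfile
import unittest

# Hypothetical stand-ins: in a real LLDB test these would drive the
# debugger's core-writing and core-reading paths. Here they just write
# and check the minidump magic bytes so the sketch is self-contained.
def create_core(path):
    with open(path, "wb") as f:
        f.write(b"MDMP")

def read_core(path):
    with open(path, "rb") as f:
        return f.read(4) == b"MDMP"

class TestCreateCore(unittest.TestCase):
    """Exercises only core *creation*; a failure points at the writer."""
    def test_create(self):
        path = os.path.join(tempfile.mkdtemp(), "test.dmp")
        create_core(path)
        self.assertTrue(os.path.exists(path))

class TestReadCore(unittest.TestCase):
    """Exercises only core *reading*, against a pre-generated dump;
    a failure points at the reader."""
    def test_read(self):
        # Stands in for a pre-generated dump checked into the tree.
        path = os.path.join(tempfile.mkdtemp(), "pregenerated.dmp")
        create_core(path)
        self.assertTrue(read_core(path))
```

With this structure, a regression in the reader breaks only `TestReadCore`, so there is no ambiguity about which half of the round trip failed.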

Also, we shouldn't be tied to a single program for all minidump tests.  If 
they need different programs, just put them in different directories.  That way 
you don't have to worry about needing to change this program and check in a new 
dump.  You'd only have to do that if you needed to change this specific test.  
But when the test tests as little functionality as possible, it's rare that you 
will need to change it.

Anyway, up to you with this CL (I don't know off the top of my head how many 
minidump tests we already have and how many others do this).  I kind of don't 
think it should become the norm going forward, though.  I know there's a 
balance when checking in binary files, but unless there's a very compelling 
reason like core files being prohibitively large or something like that, I 
think we should err on the side of having each test test the minimal amount of 
functionality possible.


http://reviews.llvm.org/D15435



_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits