On 15 May 2013 23:19, Konstantin Boudnik <c...@apache.org> wrote:

> Guys,
>
> I guess what you're missing is that Bigtop isn't a testing framework for
> Hadoop. It is a stack framework that verifies that components are dealing
> with each other nicely.
which to me means "Some form of integration test".

> Every single stack is different: Bigtop 0.5.0 differs from 0.6.0, and so
> on. Bigtop - as any other ASF project - has its releases that might or
> might not be aligned with a particular version of Hadoop. Hence, an
> etalon stack needs to be defined first and foremost.
>
> Before we even start talking about running it nightly (another question
> is on what hardware, let's not get there for now) let's understand who
> can help with triaging test failures? Downstreams, Hadoop or Bigtop?
>
> Judging by a number of other emails there are a number of people on this
> list who care plenty about integration issues. Any volunteers to help
> with integration testing in the open?

As I said at the HUG, I want to get the non-swift-specific filesystem
tests in: the ones that do things like run Pig jobs against any FS (see
the last snippet in this mail). I also need a home for some very
swift-specific partitioned-file tests.

> Is this a previously solved problem?
>
> Yes. The problem is solved by separating actively developed (aka
> unstable) release from more mature and less volatile ones.

Not in filesystems. If you look at how long it took ext4 to be
implemented and then adopted, you can see that nobody put data they cared
about on it until they were happy that what you put in on write() came
back on read() [and that stat() returned the amount of data, that
[seek(X); read()] returned the byte at offset X, and the other little
details that those of us writing tests for the filesystem APIs care
about].
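
To make that concrete, the kind of check I mean looks roughly like the
sketch below. It is not the actual test code; the class and method names
are made up:

    // Minimal sketch of the write/read/stat/seek round-trip contract.
    // Class and method names here are hypothetical.
    import java.io.IOException;
    import java.util.Arrays;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RoundTripSketch {
      public static void verify(FileSystem fs, Path path) throws IOException {
        byte[] data = new byte[256];
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;

        // what you put in on write()...
        FSDataOutputStream out = fs.create(path, true);
        out.write(data);
        out.close();

        // ...stat() must report the amount of data written
        FileStatus st = fs.getFileStatus(path);
        if (st.getLen() != data.length) throw new AssertionError("bad length");

        // ...and must come back on read()
        FSDataInputStream in = fs.open(path);
        byte[] back = new byte[data.length];
        in.readFully(0, back);
        if (!Arrays.equals(data, back)) throw new AssertionError("bad data");

        // [seek(X); read()] must return the byte at offset X
        in.seek(37);
        if (in.read() != (data[37] & 0xff)) throw new AssertionError("bad seek");
        in.close();
      }
    }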
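
And on "against any FS": the point is only that the tests bind to a
configurable filesystem URI rather than hardcoding hdfs://, so the same
Pig job can be pointed at file://, hdfs:// or swift://. Something like
this fragment, where the "test.fs.uri" property name is invented purely
for illustration:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class AnyFSSketch {
      public static FileSystem targetFS() throws java.io.IOException {
        Configuration conf = new Configuration();
        // "test.fs.uri" is an invented property name, for illustration only;
        // default to the local FS so the test runs anywhere
        String fsURI = System.getProperty("test.fs.uri", "file:///");
        return FileSystem.get(URI.create(fsURI), conf);
        // ...then set the Pig job's input/output paths under this FS and run it
      }
    }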