Just to summarize for anyone who runs into this issue in the future: I discussed it with Darragh over IRC and he came up with a fix [1]. Once the patch is merged it should be available in the next release.

In the meantime, if you encounter this issue, the workaround is to pass the "-o /path/to/dir" option to the test command to redirect the output to a directory.
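For example (the output directory here is just an illustration):

    jenkins-jobs test -r jjb/ -o /tmp/jjb-output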
Thanh

[1] https://review.openstack.org/265959

On 11 January 2016 at 12:51, Darragh Bailey <daragh.bai...@gmail.com> wrote:
> Hi Thanh
>
> On 11 January 2016 at 16:03, Thanh Ha <thanh...@linuxfoundation.org> wrote:
> > Hi Darragh,
> >
> > The extremely strange thing about this one is I've only been able to
> > reproduce it in Jenkins. So imagine having JJB installed on the same
> > server as your Jenkins instance or slave. If you ssh to the machine and
> > run JJB on the command line, it works. Create a job running on the same
> > system and Jenkins will see the failure in the console logs.
> >
> > In case it helps with reproducing, the git repo I've been using to
> > reproduce this error is this one:
> >
> > https://git.opendaylight.org/gerrit/#/admin/projects/releng/builder
> >
> > After cloning this repo, run "jenkins-jobs test -r jjb/".
> >
> > Regards,
> >
> > Thanh
>
>
> This triggered a suspicion that it isn't anything to do with the
> project, but rather with how Jenkins reads the console output from
> processes, which causes Python to use a different object for stdout.
>
> I managed to reproduce it by creating a local pipe, tailing it in one
> terminal, and running the above command in another with the output
> redirected into the pipe.
>
> term 1:
> mkfifo /tmp/test_jjb_output_to_pipe
> tail -f /tmp/test_jjb_output_to_pipe
>
> term 2:
> jenkins-jobs test -r builder/jjb > /tmp/test_jjb_output_to_pipe
>
>
> However, on closer examination of the code, I suspect that the
> following piece is a little stupid and is causing the problem. Quite
> why it works some of the time and doesn't fail consistently has me
> stumped.
>
> if output:
>     for job in self.parser.xml_jobs:
>         if hasattr(output, 'write'):
>             # `output` is a file-like object
>             logger.info("Job name: %s", job.name)
>             logger.debug("Writing XML to '{0}'".format(output))
>             output = utils.wrap_stream(output)
>             try:
>                 output.write(job.output())
>             except IOError as exc:
>                 if exc.errno == errno.EPIPE:
>                     # EPIPE could happen if piping output to something
>                     # that doesn't read the whole input (e.g.: the UNIX
>                     # `head` command)
>                     return
>                 raise
>             continue
>
> This ends up wrapping the object on each and every loop iteration, so
> if it's writing to stdout it wraps the output many, many times. I'm
> going to fix that.
>
> The reason it may only occur when using a named pipe possibly has
> something to do with the object writing to it not being able to set
> the encoding, so it keeps trying to wrap it with
> "codecs.EncodedFile(stream, encoding, stream_enc)", whereas for normal
> pipes and stdout the encoding is set and read correctly, so it just
> returns the stream unchanged on subsequent iterations.
>
> Wrapping a StreamReader from codecs in another StreamReader object
> eventually results in the getattr call hitting the recursion limit as
> it tries to find the base object's attribute to read.
>
>
> Should be able to knock together a sensible solution relatively easily.
>
>
> --
> Darragh Bailey
> "Nothing is foolproof to a sufficiently talented fool"
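P.S. For anyone trying to picture the wrapping problem described above, here is a rough, standalone sketch (not the actual JJB code; the encoding check is only a simplified stand-in for what utils.wrap_stream does):

    import codecs
    import io

    raw = io.BytesIO()   # stands in for stdout redirected to a named pipe
    output = raw

    # Buggy shape of the snippet quoted above: re-wrap on every iteration.
    for _ in range(10):                  # imagine one iteration per job
        if getattr(output, 'encoding', None) is None:
            output = codecs.EncodedFile(output, 'utf-8')

    # Count the wrapper layers; each EncodedFile keeps its target in .stream.
    layers, probe = 0, output
    while probe is not raw:
        probe = probe.stream
        layers += 1
    print(layers)   # 10, one extra wrapper per iteration

    # Attributes not defined on the outermost wrapper are resolved by
    # __getattr__ delegating down the stack, one call per layer, which is
    # what eventually hits the recursion limit with enough jobs. Wrapping
    # once, before the loop, keeps it at a single layer.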