Hi, Paul. Thank you for your response. I still feel it is a very risky approach to deliver a new release when the community hasn't adopted and tried the previous one, because future unidentified regressions compound the currently unidentified ones. But I see there is a trade-off and some controversy here.
2017-12-13 21:46 GMT+07:00 Paul Angus <paul.an...@shapeblue.com>:

> Thanks Rene.
>
> @Ivan, I understand your concerns. But if 4.10 is unusable, then it will
> never get much production testing. The longer between releases, the
> harder testing and triage becomes.
>
> By putting a line in the sand for 4.11 and 4.12, and with the desire to
> keep making every release better than the last, we can keep moving forward.
> I think we're all largely in agreement that the process around 4.10 was
> sub-optimal, which is why we've set out clear guidelines that we'd like to
> work to.
>
> You are correct that there is more to quality than just Marvin tests (or
> at least the current ones), and in the long term, if community members like
> yourself and Rene come up with tests/test structures that push the
> boundaries of CloudStack, then automated testing will only get better.
>
> For now, though, I would suggest that the best way to galvanise the
> community around the manual testing of CloudStack is to have a release
> candidate that everyone can coalesce around.
>
>
> Kind regards,
>
> Paul Angus
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London WC2N 4HS, UK
> @shapeblue
>
>
> -----Original Message-----
> From: Rene Moser [mailto:m...@renemoser.net]
> Sent: 13 December 2017 12:56
> To: dev <dev@cloudstack.apache.org>; us...@cloudstack.apache.org
> Subject: Re: Call for participation: Issue triaging and PR review/testing
>
> Hi all
>
> On 12/13/2017 05:04 AM, Ivan Kudryavtsev wrote:
> > Hello, devs, users, Rohit. Have a good day.
> >
> > Rohit, you intend to freeze 4.11 on 8 January and, frankly speaking, I
> > see risks here. A major risk is that 4.10 is too buggy, and it seems
> > nobody actually uses it in production right now because it is unusable,
> > unfortunately. So we are planning to freeze 4.11, which stands on an
> > untested 4.10 with many shortcomings still undiscovered and unreported.
> > I believe it is a very dangerous path to ship one more release of
> > poor quality. In practice, Marvin and the unit tests don't cover the
> > regressions I hit in 4.10. OK, let's take a look at a new one our
> > engineers found today in 4.10:
>
> So, the point is, how do we (users, devs, all) improve quality?
>
> Marvin is great for smoke testing, but CloudStack deals with many
> infra vendor components which are not covered by the tests. How can we
> detect flows not covered by Marvin?
>
> For me, I decided (independently of this discussion) to write integration
> tests in a way one would not expect, not following the "happy path":
>
> Try to break CloudStack, to make a better CloudStack.
>
> Put a chaos monkey in your test infra: shut down storage, kill a host,
> put latency on storage, disable networking on hosts, generate load on a
> host, make a cluster-wide primary storage read-only, shut down a VR,
> remove a VR.
>
> Things that can happen!
>
> Not surprisingly, I use Ansible. It has an extensive set of modules
> which can be used to battle-prove anything in your infra. Ansible
> playbooks are fairly easy to write, even if you are not used to writing
> code.
>
> I will share my work when it is ready.
>
> René

--
With best regards, Ivan Kudryavtsev
Bitworks Software, Ltd.
Cell: +7-923-414-1515
WWW: http://bitworks.software/ <http://bw-sw.com/>
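[Editor's note: Rene's chaos-monkey idea above could be sketched as a minimal Ansible playbook. This is an illustrative sketch only, not from the thread; the inventory group names (`nfs_servers`, `kvm_hosts`), service name, and interface name are placeholder assumptions that would need to match a real test lab.]

```yaml
# Hypothetical chaos sketch: break storage and network on purpose,
# then observe how CloudStack reacts. Group names, service names and
# the interface (eth0) are illustrative placeholders.
- name: Take primary storage away
  hosts: nfs_servers
  become: true
  tasks:
    - name: Stop the NFS server backing primary storage
      ansible.builtin.service:
        name: nfs-server
        state: stopped

- name: Inject latency on hypervisor hosts
  hosts: kvm_hosts
  become: true
  tasks:
    - name: Add 200ms of delay with netem on the storage-facing interface
      ansible.builtin.command: tc qdisc add dev eth0 root netem delay 200ms
```

A matching cleanup play (restarting NFS, `tc qdisc del`) would restore the lab between scenarios.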