Hi David,

Not to overstep the implied etiquette of these situations, or to patronise or 
condescend(!), but reading between the lines I don't think you have a 
technology constraint; I think you have a process constraint. You dropped the 
word "agile" in there (which is usually a licence for people to mercilessly 
butcher a process, remove everything they personally don't like and call the 
result a "lightweight agile process" :)), but what you describe doesn't 
really match my definition of "agile". Then again, the word "agile" has been 
abused and associated with so many things that it barely means anything any 
more.

For me, agile (http://www.agilemanifesto.org/) is first and foremost about 
getting the right people in the right place at the right time. Agility is 
about responding to change, whether that is a change in requirements, 
tooling, process, design (large or small), implementation and so on. I have 
never seen anything as efficient as this when it is done right. The main 
criterion for successfully adopting agile is mindset and culture.

The Theory of Constraints (TOC) also tells us to focus *only* on the one 
thing that is holding up the flow the most. Finding out what that is is the 
interesting part, but "testers idle waiting for programmers and programmers 
idle waiting for testers" can't be right.

If I were in your shoes I would ask myself the following questions:

- what is the main cause of this problem - process or tech?
- can I effect the necessary change? (If not, get out now!)
- what is the constraint of the system, and how can I help the people at that 
constraint? (e.g. get everybody else to leave them the heck alone/provide 
whatever they need)

I would also push for the following changes:
- working software is key. The software works. Always. If you aren't 
producing software that works, then what are people being paid for? 
Everything else should be subordinated to that.
- developers write code that works, period. There is no separate "quality 
enforcement"; there is only development. The developers' definition of "done" 
is sufficient. Of course you might still want a "tick box" quality test, but 
it should be a safety check which consistently says "yeah, all fine".
- people need to have skills, not job titles. If you are really good at 
finding overlooked edge cases then great - come over here for a bit and kick 
the tyres on this. I don't care what your title is. Don't segment your 
resources - it is all one team.
- figure out what actions need to happen to get software to the clients, and 
reframe everybody's purpose in terms of addressing those actions. Often the 
right thing is to have resources sitting idle so they can immediately 
subordinate to the constraint.
- process, just like everything else, is just another chunk of 
inventory/investment which should be scrutinised and refactored ruthlessly.

I don't know how much of this will help you move forward; it is one guy's 
opinion and I can guarantee there will be many contradictory ones, and that 
is great. The point is that there are very few silver bullets (except 
Clojure, obviously ;)); process is incredibly context sensitive, so what 
works for me might not work for you.

As I say, these are just my thoughts and they might not work out for you, 
but even if everybody became a Clojure expert I am not sure that would solve 
your biggest constraint. I also fully expect somebody else to come along and 
explain why these things (which have worked for me for a good while now) are 
completely wrong :). It is all context sensitive. I am also not sure that 
continuing this process discussion on a Clojure thread is the right way 
forward, but please feel free to email me if this is helpful.

If you want some more reading, I can highly recommend The Pragmatic 
Programmer [1] and The Clean Coder [2] (not "Clean Code" though - sorry 
Bob :)).

[1] https://pragprog.com/book/tpp/the-pragmatic-programmer
[2] http://www.amazon.co.uk/The-Clean-Coder-Professional-Programmers/dp/0137081073

On Wednesday, 29 October 2014 00:12:52 UTC, David Mitchell wrote:
>
> Hi Colin
>
> Thanks for your reply.
>
> My post is almost exclusively technology oriented because I think the 
> technology is what's killing us!
>
> We've got what you'd probably call "BDD lite" working, in that we've got a 
> mutant form of agile process running whereby we work in 2 week sprints, but 
> there's rarely an installable product that emerges at the end of the 2 
> weeks.  I won't go into detail as to what I feel are the root causes in a 
> public forum - however I'm convinced that our adoption of Clojure is at 
> least partly to blame.
>
> Just to make it clear, I absolutely believe Clojure is a good tool to use 
> for this project, and personally I'll be actively seeking out other Clojure 
> projects in the future.  I'm saying that from the viewpoint of someone 
> who's employed in the testing area, but who also has quite a bit of Clojure 
> development experience.  There's just this gulf at present between the 
> people who know Clojure (almost exclusively developers) and other technical 
> staff involved in the application lifecycle (testers, infrastructure 
> owners, all the various technical managers) that we're finding very 
> difficult to manage.
>
> For example, it'd be great if we could pair up our testers and developers, 
> have them working side by side and rapidly iterating through e.g. Cucumber 
> feature definition, coding and testing.  That would be absolutely ideal for 
> this particular project, where a complete set of test cases can't be 100% 
> defined up front and lots of minor questions arise even within an 
> iteration.  If this working arrangement was viable, every time we hit a 
> point that needed clarification, the tester could engage the product owner, 
> get clarification and jump back in to their normal work with minimal 
> disruption.  However, our testers simply can't provide enough useful input 
> into development - they're currently stuck waiting for developers to hand 
> their code over *in a form that the testers can test it*, and often there's 
> a lot of extra (wasted?) effort involved to take working Clojure code and 
> make it testable using non-Clojure tools.  
>
> To say this is an inefficient working model would be a massive 
> understatement.  What we're seeing is that our developers work like mad for 
> the first week of a 2 week iteration, while the testers are largely idle; 
> then code gets handed over and the developers are largely idle while the 
> testers work like mad trying to finish their work before the end of the 
> iteration.  Our automation testers are valiantly trying to use SoapUI and 
> Groovy and (to a small extent) Cucumber/Ruby to test our Clojure code, but 
> those tools require that there are exposed HTTP endpoints (SoapUI) or Java 
> classes (Groovy or *JRuby*) that the tool can use to exercise the 
> underlying Clojure code.  These endpoints exist, but only at a very high 
> level - our UI testing, which works very well, is already hitting those 
> same endpoints.
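
Chipping in inline on this point: if the blocker is "SoapUI/Groovy/JRuby can 
only see HTTP endpoints or Java classes", a thin AOT-compiled gen-class 
facade over the existing Clojure functions is sometimes all that's needed. 
Very rough sketch only - the namespace, class and method names below are 
invented, and the namespace would need :aot in project.clj:

(ns myapp.java-facade
  "Hypothetical facade exposing existing Clojure functions as a plain Java
  class so SoapUI/Groovy/JRuby can drive them directly."
  (:gen-class
    :name com.example.testing.OrderFacade
    :methods [[priceOrder [java.util.Map] java.util.Map]]))

;; gen-class instance methods are implemented by prefixed fns taking 'this'.
(defn -priceOrder [this order]
  ;; in real code this would delegate to the existing Clojure functions,
  ;; converting to/from plain Java maps at the boundary; canned value here
  (java.util.HashMap. {"total" 42}))

Once the compiled jar is on the classpath, Groovy or JRuby can instantiate 
com.example.testing.OrderFacade like any other Java class.
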
>
> Additionally, our QA manager wants our testers to be able to do more 
> exploratory testing, based on his personal experience of using Ruby's 
> interactive shell, and simply "trying stuff out".  That approach makes a 
> lot of sense for this project, and I know that using a Clojure REPL could 
> provide a great platform for this type of testing, but doing that would 
> require a sizeable investment in our testers learning to use Clojure.
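
Interjecting again: the REPL investment might be smaller than it sounds, 
because exploratory poking mostly comes down to requiring a namespace, 
calling functions and reading the data that comes back. A made-up example of 
what a session could look like (none of these namespaces or functions are 
from your project):

user=> (require '[myapp.orders :as orders])
nil
user=> (orders/price-for {:sku "A42" :qty 3})
{:total 29.97M, :currency :GBP}
user=> (orders/price-for {:sku "A42" :qty -1})   ; poke at an edge case
AssertionError Assert failed: (pos? (:qty order))  myapp.orders/price-for

A tester comfortable with Ruby's irb would probably find that a fairly small 
jump.
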
>
> I'm starting to wonder whether there's actually any point trying to do 
> *any* system testing of Clojure apps under active development, as maybe 
> that risk exposure can best be addressed by enforcing suitable coding 
> standards (e.g. :pre and :post conditions), and then extending what would 
> normally be unit tests to address whole-of-system functionality.  After 
> all, for an app written in a functional language - where you've basically 
> only got functions that take parameters and return a result, minimal state 
> to manage, and usually a small set of functions having side effects like 
> database IO - surely a lot of your traditional functional test scenarios 
> would simply be tightly-targeted unit tests anyway.
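
One more inline thought: :pre/:post plus ordinary clojure.test tests get you 
a long way down exactly that road. A purely illustrative sketch (not your 
code, and the numbers are arbitrary):

(ns myapp.billing-test
  (:require [clojure.test :refer [deftest is]]))

(defn apply-discount
  "Price after a percentage discount; the contract lives in :pre/:post."
  [price percent]
  {:pre  [(number? price) (<= 0 percent 100)]
   :post [(<= 0 % price)]}
  (* price (- 1 (/ percent 100.0))))

(deftest discount-contract
  (is (= 90.0 (apply-discount 100 10)))
  ;; violating the contract fails loudly, which is most of what a
  ;; "design by contract" style system test is checking for
  (is (thrown? AssertionError (apply-discount 100 150))))
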
>
> Maybe we should be handing off our single-system functional testing 
> entirely to developers, and only engaging our dedicated QA people once we 
> get to integrating all the different streams of development together.  That 
> seems to be the approach that Craig's project (thanks Craig!) is taking, 
> and it'd definitely be easier to work with compared to our current 
> processes.  Due to the lack of oversight and up-front objective 
> requirements, there could be an increased risk that our developers are 
> writing code to solve the wrong problem, but maybe that's just something we 
> need to live with.
>
> If anyone else has any thoughts, I'd REALLY appreciate hearing about them. 
>  Thanks again to Colin and Craig
>
> On Tuesday, 28 October 2014 20:04:39 UTC+11, Colin Yates wrote:
>>
>> Hi David,
>>
>> Your post is very technology orientated (which is fine!). Have you looked 
>> into BDD type specifications? I am talking specifically the process 
>> described in http://specificationbyexample.com/. If you haven't, I 
>> strongly recommend you do, as the win in this situation is that they separate the 
>> required behaviour of the system (i.e. the specification) being tested from 
>> the technical geekery of asserting that behaviour. In brief, this process, 
>> when done well:
>>  - defines behaviour in readable text documents (albeit restricted by the 
>> Gherkin grammar)
>>  - the same specification is consumed by the stake holders and the 
>> computer (and if you want bonus points are produced by/with the stake 
>> holders :))
>>  - provides access to many libraries to interpret and execute those specs 
>> (http://cukes.info/ being the main one etc.)
>>
>> Once you get into the whole vibe of freeing your specs from 
>> implementation a whole new world opens up. http://fitnesse.org/ for 
>> example, is another approach.
>>
>> I am suggesting the tension in your post around "how do we collate all 
>> our resources around an unfamiliar tool" might be best addressed by using a 
>> new tool - the shared artifacts are readable English textual specifications 
>> which everybody collaborates on. The geeks do their thing (using Ruby, 
>> Clojure, groovy, Scala, Selenium, A.N.Other etc.) to execute those same 
>> specs.
>>
>> On Monday, 27 October 2014 04:21:07 UTC, David Mitchell wrote:
>>>
>>> Hi group,
>>>
>>> Apologies for the somewhat cryptic subject line - I'll try to explain... 
>>>  Apologies also for the length of the post, but I'm sure others will hit 
>>> the same problem if they haven't already done so, and hopefully this 
>>> discussion will help them find a way out of a sticky situation.
>>>
>>> We've got a (notionally agile) Clojure app under heavy development.  The 
>>> project itself follows the Agile Manifesto to a degree, but is constrained 
>>> in having to interface with other applications that are following a 
>>> waterfall process.  Yep, it's awkward, but that's not what I'm asking about.
>>>
>>> Simplifying it as much as possible, we started with a pre-existing, 
>>> somewhat clunky, Java app, then extended the server side extensively using 
>>> Clojure, and added a web client.  There's loads of (non-Clojure) supporting 
>>> infrastructure - database cluster, queue servers, identity management, etc. 
>>>  At any point, we've got multiple streams of Clojure development going on, 
>>> hitting different parts of the app.  The web client development is 
>>> "traditional" in that it's not using ClojureScript, and probably won't in 
>>> the foreseeable future.  As mentioned above, a key point is that the app 
>>> has a significant requirement to interface to legacy systems - other Java 
>>> apps, SAP, Oracle identity management stack and so on.
>>>
>>> From a testing perspective, for this app we've got unit tests written in 
>>> Clojure/midje which are maintained by the app developers (as you'd expect). 
>>>  These work well and midje is a good fit for the app.  However, given all 
>>> the various infrastructure requirements of the app, it's hard to see how we 
>>> can use midje to go all the way up the testing stack (unit -> system -> 
>>> integration -> pre-production -> production).
>>>
>>> From the web client perspective, we've got UI automation tests written 
>>> using Ruby/Capybara, a toolset which I suspect was chosen based on the 
>>> existing skillset of the pool of testers.  Again this works well for us.
>>>
>>> The problem is with the "middle ground" between the two extremes of unit 
>>> and UI testing - our glaring problem at present is with integration 
>>> testing, but there's also a smaller problem with system testing.  We're 
>>> struggling to find an approach that works here, given the skillsets we have 
>>> on hand - fundamentally, we've got a (small) pool of developers who know 
>>> Clojure, a (small) pool of testers who know Ruby, and a larger pool of 
>>> testers who do primarily non-automated testing.
>>>
>>> In an ideal world, we'd probably use Clojure for all automated testing. 
>>>  It seems relatively straightforward to use Stuart Sierra's component 
>>> library (https://github.com/stuartsierra/component) to mock out 
>>> infrastructure components such as databases, queues, email servers etc., 
>>> and doing so would let us address our system-level testing.  
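
Interjecting on the component point, since it comes up again below: the 
swap-the-dependency trick is small in code terms. A minimal, entirely made-up 
sketch of wiring a test system with an in-memory stand-in for the database:

(ns myapp.test-system
  (:require [com.stuartsierra.component :as component]))

;; In-memory stand-in for the real database component (illustrative only).
(defrecord StubDb [data]
  component/Lifecycle
  (start [this] (assoc this :conn (atom (or data {}))))
  (stop  [this] (dissoc this :conn)))

;; Hypothetical application component that depends on :db.
(defrecord App [db]
  component/Lifecycle
  (start [this] this)
  (stop  [this] this))

(defn test-system []
  (component/system-map
    :db  (map->StubDb {})
    :app (component/using (map->App {}) [:db])))

(comment
  (def sys (component/start (test-system)))
  ;; exercise (:app sys) through its public API here, then:
  (component/stop sys))

The real system-map would wire in the real database/queue/email components; 
the tests only differ in which implementations get injected.
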
>>>
>>> On the integration front, we could conceivably also leverage the same 
>>> component library to manage the state of all the various infrastructure 
>>> components that the app depends on, and thus ensure that we had a suitably 
>>> production-like environment for integration testing.  This would be a 
>>> non-trivial piece of work.
>>>
>>> Our big problem really boils down to just not having enough skilled 
>>> Clojure people available to the project.  You could point to any of the 
>>> following areas that are probably common to any non-trivial Clojure 
>>> application: either we don't have enough Clojure developers to address the 
>>> various requirements of system and integration testing, or our techops guys 
>>> don't have the necessary skills to expose a Clojure/component interface to 
>>> the various test/development environments, or our testers don't know 
>>> Clojure and are not willing to take the word of developers that their Clojure 
>>> tests are both fit for purpose and sufficient from a risk management 
>>> perspective.
>>>
>>> Obvious options, none of which seem great:
>>>
>>>    - hire more Clojure people (expensive, as they're still pretty rare) and 
>>>      put them to work in testing & techops.  We've tried turning some of our 
>>>      Clojure devs into techops already, but strangely devs who've taken the 
>>>      time and had the initiative to learn Clojure don't like doing techops 
>>>      work.  What a surprise ;->  I suspect the same would apply if we tried 
>>>      turning them into testers
>>>    - retrain our testers so they can write automated tests in Clojure.  That 
>>>      would be quite a stretch for our testers, and I'd suggest it would be 
>>>      the same for most testers out there (otherwise they'd probably be 
>>>      working as developers).  Another factor is career development: once 
>>>      testers start to move on from this project, what is the chance that 
>>>      "Clojure" will be a useful thing to have on their CVs?
>>>    - retrain techops people so they can wrap up and expose their 
>>>      infrastructure using Clojure components, making it available in such a 
>>>      way that it would better support integration and systems testing.  Same 
>>>      problems here as with retraining testers to use Clojure
>>>    - enforce the use of :pre and :post conditions in all our Clojure code, 
>>>      to bring in a "design by contract" approach and try to reduce the 
>>>      "surprises" that occur during integration of different streams of work.  
>>>      Aside from being a sizeable piece of work to do this, we're stuck with a 
>>>      strong reliance on the pre-existing Java app and I can't see a way of 
>>>      reducing the integration risk of this element
>>>    - use something other than Clojure (e.g. Ruby) for systems and 
>>>      integration testing, so we could leverage the existing skillsets of our 
>>>      test workforce.  This is probably conceivable if we made the effort to 
>>>      expose much of the functionality of the app using something like REST 
>>>      APIs, but it would require a significant investment in time and would 
>>>      have no likely future benefit beyond making it easier to test.  I 
>>>      realise that's a desirable aim in itself, but it's a hard sell to the 
>>>      people who pay the bills!
>>>
>>> I'll go out on a limb and suggest that, as of late 2014, probably any 
>>> non-trivial Clojure project doesn't have enough skilled Clojure people on 
>>> board to cover all the testing and operational requirements for the 
>>> project.  How then are you addressing the non-development requirements of 
>>> your project that require Clojure expertise - especially testing and devops?
>>>
>>
