Hi Brian,

This is a nice overview; I think you've captured the requirements well.
The only change I'd suggest is turning the functional test framework into a
lightweight load test, since some issues only show up under load. It
wouldn't be able to hammer ATS the way a real load testing framework would,
but it could do deeper testing than a real load testing framework could.
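
For example, here's a rough sketch in Python (the language we settled on
below) of what I have in mind; the proxy address, request path, counts, and
the bare 200 check are placeholders, not anything we've agreed on:

import concurrent.futures
import http.client

PROXY_HOST, PROXY_PORT = "localhost", 8080  # placeholder proxy address

def functional_check(path="/"):
    # One functional-test iteration: GET through the proxy, expect a 200.
    conn = http.client.HTTPConnection(PROXY_HOST, PROXY_PORT, timeout=5)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()  # drain the body so the connection closes cleanly
        assert resp.status == 200, "unexpected status %d" % resp.status
    finally:
        conn.close()

def light_load(iterations=500, concurrency=20):
    # Run the same functional check many times in parallel; races, leaks,
    # and lock contention tend to surface only under this kind of pressure.
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(functional_check) for _ in range(iterations)]
    failures = [f for f in futures if f.exception() is not None]
    assert not failures, "%d of %d requests failed" % (len(failures), iterations)

if __name__ == "__main__":
    light_load()

A real load tool would obviously push far more traffic; the point is just
to reuse the functional assertions while many requests are in flight.
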
In terms of the fun part, I think we should schedule a phone call or IRC
chat where we discuss more interactively how to proceed. There's a
complicated balance to strike between must-have features and nice-to-have
features that can be added later (as Alan said at the conference, 'we don't
want the perfect to be the enemy of the good'). We should also be mindful
of who actually has the cycles to do the work now and where it makes sense
to parallelize. The approach I like, because I'm ready to go now, is to
collect requirements, send out pull requests for code review, and get
everyone interested to join the code reviews, try out the framework, and
send patches until we converge on a framework we're happy with.

Thoughts?

Thanks,
Josh

On Tuesday, November 4, 2014 10:04 AM, Brian Geffon <bri...@apache.org> wrote:

Hi All, thanks for your patience; I know many people are eager to start
pooling resources to make this happen. (Thanks, Susan, for helping with
notes during the summit.)

To briefly summarize what was discussed at the summit: we have an existing
framework called TSQA which is based on bash. While this is a nice start,
it's not really what we need. Josh Blatt hacked together a prototype in
NodeJS, but the consensus was that the tooling and existing code available
for Python would make it the better language choice; agreement seemed
more or less unanimous (were there any objections to using Python?).

With the language defined, we outlined the following high-level
requirements:
  - It must be very easy to write simple tests (i.e. basic HTTP GET ->
proxy -> simple HTTP origin); the first sketch after this list shows what
that could look like.
  - It must be expressive enough to handle complex test cases (i.e.
testing an ESI plugin, or advanced networking cases such as sending a FIN
randomly; see the second sketch below).
  - Such a framework MUST allow for integration testing of plugins.
  - We must be able to bootstrap trafficserver with the relevant configs
and plugins.
  - We must have a port manager that is shared between the components of
an integration test (also illustrated in the first sketch).
  - We would punt on multiple-OS support initially in favor of first
developing something generally useful. At some point in the future it
would be nice to have, though, so design decisions shouldn't be
Linux-specific, with the end goal of supporting multiple OSes.
  - This framework will run in our existing CI environment, so support
for generating reports Jenkins can consume (e.g. JUnit-style XML) is a
must.
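
To make a few of these concrete, here's a rough Python sketch of a shared
port manager plus what an "easy" test might look like. To be clear, this
is hypothetical: claim_port works as written, but start_origin,
start_trafficserver, and the framework object are made-up names for an
API that doesn't exist yet.

import socket

def claim_port():
    # Ask the OS for a currently free port by binding to port 0. A shared
    # manager like this would hand out ports to every component of a test
    # (origin, trafficserver, clients) so parallel tests don't collide.
    # (Real code would need to reserve the port to avoid reuse races.)
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))
    port = s.getsockname()[1]
    s.close()
    return port

def test_basic_get(framework):
    # Hypothetical author-facing API: spin up a trivial origin, bootstrap
    # trafficserver with a remap rule pointing at it, then do one GET
    # through the proxy and check the response.
    origin = framework.start_origin(port=claim_port())
    ats = framework.start_trafficserver(
        port=claim_port(),
        remap=[("http://example.test/", origin.url)],
    )
    resp = framework.client.get(ats.url + "/index.html",
                                headers={"Host": "example.test"})
    assert resp.status == 200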
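
For the complex end of the spectrum, a test should be able to drop down to
raw sockets. A minimal sketch of the "sending a FIN randomly" case, again
with a made-up proxy address:

import random
import socket

def test_random_fin(proxy_addr=("127.0.0.1", 8080)):
    # Send a request but half-close at a random byte offset, so our FIN
    # reaches the proxy mid-request; the proxy should handle it cleanly.
    request = b"GET /index.html HTTP/1.1\r\nHost: example.test\r\n\r\n"
    cut = random.randrange(1, len(request))
    s = socket.create_connection(proxy_addr, timeout=5)
    try:
        s.sendall(request[:cut])
        s.shutdown(socket.SHUT_WR)  # the FIN goes out here
    finally:
        s.close()
    # A real test would then assert that the proxy stays healthy and that
    # a normal follow-up request still succeeds.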

Other issues raised during this session were:
  - Using such a framework for perf testing: most people agreed this
should be considered separately from this testing framework.
  - We should investigate what other proxies are using before development
begins, to determine whether we can reuse or share components if the
goals of our framework align with an existing framework.
  - The following wiki page exists related to QA:
https://cwiki.apache.org/confluence/display/TS/Quality+Assurance

Please feel free to respond if I left anything out or if I in any way
misstated the discussions at the summit.

Now moving along to the fun part. Several people have expressed interest in
this project, which is awesome, but let's all coordinate. Let's work
together and make something great that can really benefit the entire
community. Does anyone have suggestions for how we can coordinate and/or
deal with task distribution? Obviously Jira should be used for tasks, but
should we do regular IRC check-ins and summarize those discussions on the
mailing list? Ideas?

Brian