Hi, if I understand your question properly, you are aiming to validate failure scenarios. I usually see this done either for learning purposes, basically to answer how Kafka would react to such-and-such a situation, or to validate an organisation's current setup/configuration.
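Whatever tooling injects the fault, the core of the validation is the same: compare what the producer had acknowledged against what the consumer actually read back. A minimal sketch of that check in Python (the `produced`/`consumed` lists are stand-ins for IDs you would collect from real producer acks and consumer records, so this runs without a cluster):

```python
# Hypothetical sketch of the "confirm no data loss" step of a chaos test.
# In a real harness, produced_ids would be the keys acknowledged by the
# producer and consumed_ids the keys read back after the fault.

def find_lost_messages(produced_ids, consumed_ids):
    """Return, sorted, the IDs acknowledged on produce but never consumed."""
    return sorted(set(produced_ids) - set(consumed_ids))

if __name__ == "__main__":
    produced = list(range(2000))   # e.g. send 2000 messages during the fault
    consumed = list(range(2000))   # then consume them back
    lost = find_lost_messages(produced, consumed)
    assert not lost, f"data loss detected: {len(lost)} messages missing"
    print("no data loss")
```

The point of keeping this step as plain set arithmetic is that it stays identical whether the fault was injected by Docker, Chaos Monkey, or by hand.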
For the first one, or the second one as well (as long as it is not run directly against the cluster), colleagues of mine actually wrote https://github.com/Dabz/kafka-boom-boom, which uses Docker to simulate the scenarios. I've actually seen this approach used often. These days I would only run chaos testing if it is supported by tooling like the Netflix Chaos Monkey (https://github.com/Netflix/chaosmonkey) or the other tools available for the cloud providers, Kubernetes or Cloud Foundry. A nice list is available at https://github.com/dastergon/awesome-chaos-engineering#notable-tools

Cheers

-- Pere

Message from Parviz deyhim <dey...@gmail.com> on Thu., 27 Dec. 2018 at 21:57:

> +d...@kafka.apache.org
>
> On Wed, Dec 26, 2018 at 8:53 PM Parviz deyhim <dey...@gmail.com> wrote:
>
> > Thanks, fair points. Probably best if I simplify the question: how does
> > the Kafka community run tests besides using mocked local Kafka
> > components? Surely there are tests to confirm different failure
> > scenarios, such as losing a broker in a real clustered environment (a
> > multi-node cluster with IPs, ports, hostnames, etc.). The answer would
> > be a good starting point for me.
> >
> > On Wed, Dec 26, 2018 at 6:11 PM Stephen Powis <spo...@salesforce.com>
> > wrote:
> >
> >> Without looking into how the integration tests work, my best guess is
> >> that within the context they were written to run in, it doesn't make
> >> sense to run them against a remote cluster. The "internal" cluster is
> >> running the same code, so why require coordinating with an external
> >> dependency?
> >>
> >> For the use case you gave (and I'm not sure whether tests exist that
> >> cover this behavior or not), running the brokers locally in the context
> >> of the tests means that those tests have control over the brokers (i.e.
> >> they can shut them off, restart them, etc. programmatically) and
> >> validate behavior. Coordinating these operations on a remote broker
> >> would be significantly more difficult.
> >>
> >> Not sure this helps... but perhaps you're either asking the wrong
> >> questions or trying to solve your problem with the wrong set of tools?
> >> My gut feeling says that if you want to do a full-scale multi-server
> >> load / HA test, Kafka's test suite is not the best place to start.
> >>
> >> Stephen
> >>
> >> On Thu, Dec 27, 2018 at 10:53 AM Parviz deyhim <dey...@gmail.com>
> >> wrote:
> >>
> >> > Hi,
> >> >
> >> > I'm looking to see who has done this before and to get some guidance.
> >> > On a frequent basis I like to run basic tests on a remote Kafka
> >> > cluster while some random chaos/faults are being performed. In other
> >> > words, I like to run chaos engineering tasks (network outage, disk
> >> > outage, etc.) and see how Kafka behaves. For example:
> >> >
> >> > 1) bring some random broker node down
> >> > 2) send 2000 messages
> >> > 3) consume the messages
> >> > 4) confirm there's no data loss
> >> >
> >> > My question: I'm pretty sure most of the scenarios I'm looking to
> >> > test have been covered by Kafka's integration, unit, and other
> >> > existing tests. What I cannot figure out is how to run those tests on
> >> > a remote cluster vs. the local one the tests seem to run on. For
> >> > example, I'd like to run the following command but have the tests
> >> > executed on a remote cluster of my choice:
> >> >
> >> > ./gradlew cleanTest integrationTest
> >> >
> >> > Any guidance/help would be appreciated.
> >> >
> >> > Thanks

--
Pere Urbon-Bayes
Software Architect
http://www.purbon.com
https://twitter.com/purbon
https://www.linkedin.com/in/purbon/