Could this delay be contributing to the various issues I'm seeing?

- extended / repeating rebalances
- broker connection drops
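
A plausible link between the IO stalls and the rebalances above is the time spent between consumer polls: if processing (including state-store flushes) stalls longer than the consumer's timeouts allow, the coordinator evicts the instance and a rebalance starts. Below is a minimal, hypothetical Java sketch of the settings involved; nothing in it comes from this thread, the application id, broker address, and values are placeholders, and max.poll.interval.ms only exists from 0.10.1 onward.

    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    // Hypothetical sketch: the settings that bound how long a Streams instance may
    // stall (e.g. on state-store IO) before the group coordinator evicts it and a
    // rebalance starts. All names and values here are illustrative placeholders.
    public class RebalanceTuningSketch {

        static Properties streamsProps() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");   // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");   // placeholder
            // Plain consumer configs can be passed through the Streams properties.
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);             // less work per poll()
            props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);         // liveness window
            props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300000);      // 0.10.1+: max gap between poll() calls
            return props;
        }
    }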


On Sat, Dec 17, 2016 at 10:27 AM, Eno Thereska <eno.there...@gmail.com>
wrote:

> Yeah, the numbers for the streams tests seem to be low. For reference,
> here is what I get when I run it on my laptop, with Kafka co-located
> (Macbook pro, 16GB, SSD). These are rounded up with no decimal places:
>
> > Producer Performance [MB/sec write]: 40
> > Consumer Performance [MB/sec read]: 126
>
> > Streams Performance [MB/sec read]: 81
> > Streams Performance [MB/sec read+write]: 45
> > Streams Performance [MB/sec read+store]: 22
> > Streams KStreamKTable LeftJoin Performance [MB/s joined]: 51
> > Streams KStreamKStream LeftJoin Performance [MB/s joined]: 11
> > Streams KTableKTable LeftJoin Performance [MB/s joined]: 12
>
> I haven't tried this on AWS unfortunately so I don't know what to expect
> there.
>
> Eno
>
>
> > On 17 Dec 2016, at 15:39, Jon Yeargers <jon.yearg...@cedexis.com> wrote:
> >
> > stateDir=/tmp/kafka-streams-simple-benchmark
> >
> > numRecords=10000000
> >
> > SLF4J: Class path contains multiple SLF4J bindings.
> >
> > SLF4J: Found binding in
> > [jar:file:/home/ec2-user/kafka/streams/build/dependant-libs-2.10.6/slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >
> > SLF4J: Found binding in
> > [jar:file:/home/ec2-user/kafka/tools/build/dependant-libs-2.10.6/slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > explanation.
> >
> > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> >
> > [2016-12-17 15:26:15,011] WARN Error while fetching metadata with
> > correlation id 1 : {simpleBenchmarkSourceTopic=LEADER_NOT_AVAILABLE}
> > (org.apache.kafka.clients.NetworkClient:709)
> >
> > Producer Performance [MB/sec write]: 87.52977203130317
> >
> > Consumer Performance [MB/sec read]: 88.05408180729077
> >
> > Streams Performance [MB/sec read]: 23.306380376435413
> >
> > [2016-12-17 15:27:22,722] WARN Error while fetching metadata with
> > correlation id 1 : {simpleBenchmarkSinkTopic=LEADER_NOT_AVAILABLE}
> > (org.apache.kafka.clients.NetworkClient:709)
> >
> > Streams Performance [MB/sec read+write]: 20.37099360560648
> >
> > Streams Performance [MB/sec read+store]: 11.918550778354337
> >
> > Initializing kStreamTopic joinSourceTopic1kStreamKTable
> >
> > [2016-12-17 15:29:39,597] WARN Error while fetching metadata with
> > correlation id 1 : {joinSourceTopic1kStreamKTable=LEADER_NOT_AVAILABLE}
> > (org.apache.kafka.clients.NetworkClient:709)
> >
> > Initializing kTableTopic joinSourceTopic2kStreamKTable
> >
> > [2016-12-17 15:29:50,589] WARN Error while fetching metadata with
> > correlation id 1 : {joinSourceTopic2kStreamKTable=LEADER_NOT_AVAILABLE}
> > (org.apache.kafka.clients.NetworkClient:709)
> >
> > Streams KStreamKTable LeftJoin Performance [MB/s joined]: 14.690136622553428
> >
> > Initializing kStreamTopic joinSourceTopic1kStreamKStream
> >
> > [2016-12-17 15:31:12,583] WARN Error while fetching metadata with
> > correlation id 1 : {joinSourceTopic1kStreamKStream=LEADER_NOT_AVAILABLE}
> > (org.apache.kafka.clients.NetworkClient:709)
> >
> > Initializing kStreamTopic joinSourceTopic2kStreamKStream
> >
> > [2016-12-17 15:31:23,534] WARN Error while fetching metadata with
> > correlation id 1 : {joinSourceTopic2kStreamKStream=LEADER_NOT_AVAILABLE}
> > (org.apache.kafka.clients.NetworkClient:709)
> >
> > Streams KStreamKStream LeftJoin Performance [MB/s joined]: 8.647640177490924
> >
> > Initializing kTableTopic joinSourceTopic1kTableKTable
> >
> > [2016-12-17 15:33:34,586] WARN Error while fetching metadata with
> > correlation id 1 : {joinSourceTopic1kTableKTable=LEADER_NOT_AVAILABLE}
> > (org.apache.kafka.clients.NetworkClient:709)
> >
> > Initializing kTableTopic joinSourceTopic2kTableKTable
> >
> > [2016-12-17 15:33:45,520] WARN Error while fetching metadata with
> > correlation id 1 : {joinSourceTopic2kTableKTable=LEADER_NOT_AVAILABLE}
> > (org.apache.kafka.clients.NetworkClient:709)
> >
> > Streams KTableKTable LeftJoin Performance [MB/s joined]: 6.530348031376133
> >
> > On Sat, Dec 17, 2016 at 6:39 AM, Jon Yeargers <jon.yearg...@cedexis.com>
> > wrote:
> >
> >> I'd be happy to, but the default AWS AMI I'm using is fighting this at
> >> every turn. Will keep trying.
> >>
> >> On Sat, Dec 17, 2016 at 2:46 AM, Eno Thereska <eno.there...@gmail.com>
> >> wrote:
> >>
> >>> Jon,
> >>>
> >>> It's hard to tell. Would you be willing to run a simple benchmark and
> >>> report back the numbers? The benchmark is called SimpleBenchmark.java;
> >>> it's included with the source, and it will start a couple of streams
> >>> apps. It requires a ZK and a broker to be up. Then you run it:
> >>> org.apache.kafka.streams.perf.SimpleBenchmark <zookeeperHostPortString> <brokerHostPortString>.
> >>>
> >>> Thanks
> >>> Eno
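
For reference, a concrete invocation of that class might look like the single line below; the kafka-run-class.sh launcher and the localhost addresses are assumptions rather than anything specified in the thread:

    bin/kafka-run-class.sh org.apache.kafka.streams.perf.SimpleBenchmark localhost:2181 localhost:9092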
> >>>> On 16 Dec 2016, at 20:00, Jon Yeargers <jon.yearg...@cedexis.com> wrote:
> >>>>
> >>>> Looking for reasons why my installations seem to be generating so many
> >>>> issues:
> >>>>
> >>>> Starting an app which is
> >>>>
> >>>> stream->aggregate->filter->foreach
> >>>>
> >>>> While it's running, the system in question (AWS) averages >10% IOWait
> >>>> with spikes to 60-70%. The CPU load is in the range of 3/2/1 (8-core
> >>>> machine with 16 GB RAM).
> >>>>
> >>>> Could this IO delay be causing problems? The 'kafka-streams' folder is
> >>>> on a separate EBS-optimized volume with 2500 dedicated IOPS.
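
The stream->aggregate->filter->foreach pipeline described above might look roughly like the following hypothetical sketch, assuming a 0.10.1-era Streams API; the topic names, application id, store name, state directory, and serdes are all placeholders rather than the actual application:

    import java.util.Properties;

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;
    import org.apache.kafka.streams.kstream.KTable;

    // Hypothetical sketch of a stream->aggregate->filter->foreach app; every
    // name below is a placeholder.
    public class IoWaitTopologySketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "iowait-test-app");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
            props.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "zookeeper:2181"); // still used on 0.10.x
            props.put(StreamsConfig.STATE_DIR_CONFIG, "/mnt/kafka-streams");     // e.g. the separate EBS volume
            props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
            props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

            KStreamBuilder builder = new KStreamBuilder();
            KStream<String, String> source = builder.stream("input-topic");

            // The RocksDB-backed store ("counts") behind aggregate() is what hits
            // the state directory and produces the IO load under discussion.
            KTable<String, Long> counts = source
                    .groupByKey()
                    .aggregate(() -> 0L, (key, value, total) -> total + 1L, Serdes.Long(), "counts");

            counts.toStream()
                    .filter((key, total) -> total > 10L)                          // arbitrary threshold
                    .foreach((key, total) -> System.out.println(key + " -> " + total));

            new KafkaStreams(builder, new StreamsConfig(props)).start();
        }
    }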
> >>>
> >>>
> >>
>
>
