Re: Kafka source connector for avro files

2017-04-26 Thread Stas Chizhov
Yes that should work. Thanks a lot!

2017-04-25 21:44 GMT+02:00 Gwen Shapira :

> We added a Byte Converter which essentially does no conversion. Is this
> what you are looking for?
>
> https://issues.apache.org/jira/browse/KAFKA-4783
>
> On Tue, Apr 25, 2017 at 11:54 AM, Stas Chizhov  wrote:
>
> > Hi,
> >
> > I have a kafka topic with avro messages + schema registry, which is being
> > backed up into s3 as a set of avro files. I need to be able to restore a
> > subset of those files into a new topic in the original format with schemas
> > published into a schema registry. Am I right that at the moment there is
> > no way of avoiding conversion of original avro messages into kafka connect
> > format and back in a source connector?
> >
> > Thank you,
> > Stanislav.
> >
>
>
>
> --
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: Rename consumer group?

2017-04-26 Thread Stas Chizhov
Hi,

We wrote a short tool that reads the offsets committed by the consumer group
we want to copy and commits them for a new group. That way you can inspect
the "__consumer_offsets" topic and make sure everything is correct before you
start consuming messages.

BR
Stanislav.


2017-04-25 22:02 GMT+02:00 Samuel Taylor :

> Hi all,
>
> Is there a good way to rename a consumer group?
>
> My current plan is to stop the existing consumer and seek a new consumer to
> the same offsets as the existing consumer, but I'd love to hear any
> suggestions or experiences people have with this!
>
> Thanks,
> Samuel
>


Re: Stream applications dying on broker ISR change

2017-04-26 Thread Ian Duffy
Hi Eno,

Looks like we just didn't wait long enough. It eventually recovered and
started processing again.

Thanks for all the fantastic work in the 0.10.2.1 client.

On 25 April 2017 at 18:12, Eno Thereska  wrote:

> Hi Ian,
>
> Any chance you could share the full log? Feel free to send it to me
> directly if you don't want to broadcast it everywhere.
>
> Thanks
> Eno
>
>
> > On 25 Apr 2017, at 17:36, Ian Duffy  wrote:
> >
> > Thanks again for the quick response Eno.
> >
> > We just left the application running in the hope it would recover; After
> > ~1hour it's still just continuously spilling out the same exception and
> not
> > managing to continue processing.
> >
> > On 25 April 2017 at 16:24, Eno Thereska  wrote:
> >
> >> Hi Ian,
> >>
> >> Retries are sometimes expected and don't always indicate a problem. We
> >> should probably adjust the printing of the messages to not print this
> >> warning frequently. Are you seeing any crash or does the app proceed?
> >>
> >> Thanks
> >> Eno
> >>
> >> On 25 Apr 2017 4:02 p.m., "Ian Duffy"  wrote:
> >>
> >> Upgraded a handful of our streams applications to 0.10.2.1 as suggested.
> >> We're seeing far fewer issues and much smoother performance.
> >> They withstood ISR changes.
> >>
> >> Seen the following when more consumers were added to a consumer group:
> >>
> >> 2017-04-25 14:57:37,200 - [WARN] - [1.1.0-11] - [StreamThread-2]
> >> o.a.k.s.p.internals.StreamThread - Could not create task 1_21. Will retry.
> >> org.apache.kafka.streams.errors.LockException: task [1_21] Failed to lock
> >> the state directory for task 1_21
> >> at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:100)
> >> at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:73)
> >> at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:108)
> >> at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:864)
> >> at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1237)
> >> at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1210)
> >> at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:967)
> >> at org.apache.kafka.streams.processor.internals.StreamThread.access$600(StreamThread.java:69)
> >> at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:234)
> >> at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:259)
> >> at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:352)
> >> at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
> >> at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:290)
> >> at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1029)
> >> at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
> >> at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:592)
> >> at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:361)
> >>
> >>
> >>
> >> On 24 April 2017 at 16:02, Eno Thereska  wrote:
> >>
> >>> Hi Sachin,
> >>>
> >>> In KIP-62 a background heartbeat thread was introduced to deal with the
> >>> group protocol arrivals and departures. There is a setting called
> >>> session.timeout.ms that specifies the timeout of that background
> thread.
> >>> So if the thread has died that background thread will also die and the
> >>> right thing will happen.
> >>>
> >>> Eno
> >>>
>  On 24 Apr 2017, at 15:34, Sachin Mittal  wrote:
> 
>  I had a question about this setting
>  ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG,
> >>> Integer.toString(Integer.MAX_
>  VALUE)
> 
>  How would the broker know if a thread has died or say we simply
> stopped
> >>> an
>  instance and needs to be booted out of the group.
> 
>  Thanks
>  Sachin
> 
> 
>  On Mon, Apr 24, 2017 at 5:55 PM, Eno Thereska  >
>  wrote:
> 
> > Hi Ian,
> >
> > This is now fixed in 0.10.2.1. The default configuration needs tweaking.
> > If you can't pick that up (it's currently being voted on), make sure you
> > have these two parameters set as follows in your streams config:
> >
> > final Properties props = new Properties();
> > ...
> > props.put(ProducerConfig.RETRIES_CONFIG, 10);  // increase to 10 from the default of 0
> > props.put(ConsumerConfig.

Re: [VOTE] 0.10.2.1 RC3

2017-04-26 Thread Ian Duffy
+1

Started using kafka client 0.10.2.1 for our streams applications and have
seen a big improvement in retry behaviour when failures occur.
We've been running without manual intervention for > 24 hours, which is
something we haven't seen in a while.

Found it odd that the RC tag isn't part of the version on the Maven staging
repository. How do you identify different RC versions? How do you flush
clients' caches? We ended up digging through the index pages and verifying
that the last-modified date matched the date on this email thread.

Thanks,
Ian.

On 22 April 2017 at 22:45, Michal Borowiecki 
wrote:

> It's listed below:
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
>
>
> On 22/04/17 19:23, Shimi Kiviti wrote:
>
> Is there a maven repo with these jars so I can test it against our kafka
> streams services?
>
> On Sat, Apr 22, 2017 at 9:05 PM, Eno Thereska  wrote:
>
>
> +1 tested the usual streams tests as before.
>
> Thanks
> Eno
>
> On 21 Apr 2017, at 17:56, Gwen Shapira  wrote:
>
> Hello Kafka users, developers, friends, romans, countrypersons,
>
> This is the fourth (!) candidate for release of Apache Kafka 0.10.2.1.
>
> It is a bug fix release, so we have lots of bug fixes, some super
> important.
>
> Release notes for the 0.10.2.1 release:
> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc3/RELEASE_NOTES.html
>
> *** Please download, test and vote by Wednesday, April 26, 2017 ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc3/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc: http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc3/javadoc/
>
> * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.1 tag:
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=8e4f09caeaa877f06dc75c7da1af7a727e5e599f
>
> * Documentation: http://kafka.apache.org/0102/documentation.html
>
> * Protocol: http://kafka.apache.org/0102/protocol.html
>
> /**
>
> Your help in validating this bugfix release is super valuable, so
> please take the time to test and vote!
>
> Suggested tests:
> * Grab the source archive and make sure it compiles
> * Grab one of the binary distros and run the quickstarts against them
> * Extract and verify one of the site docs jars
> * Build a sample against jars in the staging repo
> * Validate GPG signatures on at least one file
> * Validate the javadocs look ok
> * The 0.10.2 documentation was updated for this bugfix release
> (especially upgrade, streams and connect portions) - please make sure
> it looks ok: http://kafka.apache.org/documentation.html
>
> But above all, try to avoid finding new bugs - we want to get this
>
> release
>
> out the door already :P
>
>
> Thanks,
> Gwen
>
>
>
> --
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>
>
> --
> Michal Borowiecki
> Senior Software Engineer L4
> T: +44 208 742 1600 / +44 203 249 8448
> E: michal.borowie...@openbet.com
> W: www.openbet.com
> OpenBet Ltd
> Chiswick Park Building 9
> 566 Chiswick High Rd
> London
> W4 5XT
> UK
>
> This message is confidential and intended only for the addressee. If you
> have received this message in error, please immediately notify the
> postmas...@openbet.com and delete it from your system as well as any
> copies. The content of e-mails as well as traffic data may be monitored by
> OpenBet for employment and security purposes. To protect the environment
> please do not print this e-mail unless necessary. OpenBet Ltd. Registered
> Office: Chiswick Park Building 9, 566 Chiswick High Road, London, W4 5XT,
> United Kingdom. A company registered in England and Wales. Registered no.
> 3134634. VAT no. GB927523612
>


About "org.apache.kafka.common.protocol.types.SchemaException" Problem

2017-04-26 Thread Yang Cui
Dear All,

I am using Kafka cluster 2.11_0.9.0.1 and the new consumer of 2.11_0.9.0.1.
When I set the quota configuration to:

  quota.producer.default=100
  quota.consumer.default=100

and then use the new consumer to consume data, the following error sometimes
occurs:
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'responses': Error reading array of size 1140343, only 37 bytes available
  at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73)
  at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:439)
  at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:265)
  at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
  at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
  at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
  at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:908)
  at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
  at com.fw.kafka.ConsumerThread.run(TimeOffsetPair.java:458)
  
It does not occur every time, but when it does, it repeats many times.
  
  



Time synchronization between streams

2017-04-26 Thread Murad Mamedov
Hi,

Suppose we have two topics, one with an event size of 100 bytes and the
other with an event size of 5000 bytes. Producers produce events (with
timestamps) into both topics for 3 days, the same number of events; let's
assume 10 events in each topic.

Kafka Client API consumers will obviously consume the two topics in
different amounts of time.

Now, after 3 days, an application starts consuming these topics as
high-level API KStreams.

If we need to join these two streams on a time basis, say a 15-minute
window, how would it behave?

Obviously, the stream with smaller events will be consumed faster than the
stream with larger events. Stream100 could be reading data from 1 day ago
while stream5000 is still reading events from 3 days ago.

Is there any global time synchronization between streams in the Kafka
Streams API, so that it would not consume more events from one stream while
the other is still behind in time? Or, to rephrase: is there global event
ordering based on the timestamp of the event?

The other option could be to join the streams in a window, but the same
question arises: if one stream is days behind the other, will a join window
of 15 minutes ever work?

I'm trying to grasp how to design replay of long periods of time for an
application with multiple topics/streams, especially when combining
low-level API processors and transformers which rely on each other via
GlobalKTable or KTable stores on these streams. For instance, the smaller
topic could have the following sequence of events:

T1 - (k1, v1)
T1 + 10 minutes - (k1, null)
T1 + 20 minutes - (k1, v2)

While topic with larger events:

T1 - (k1, vt1)
T1 + 5 minutes - (k1, null)
T1 + 15 minutes - (k1, vt2)

If one joined or looked up these streams in real time (timestamp of an
event approximately equal to wall-clock time), the result would be:

T1 - topic_small (k1, v1) - topic_large (k1, vt1)
T1 + 5 minutes - topic_small (k1, v1) - topic_large (k1, null)
T1 + 10 minutes - topic_small (k1, null) - topic_large (k1, null)
T1 + 15 minutes - topic_small (k1, null) - topic_large (k1, vt2)
T1 + 20 minutes - topic_small (k1, v2) - topic_large (k1, vt2)

However, when replaying the streams from the beginning, from the
perspective of the topic with large events, it would see the topic with
small events as (k1, v2), completely missing the v1 and null states in the
case of GlobalKTable/KTable presentation, or those events in the case of a
KStream-KStream windowed join.

Am I missing something here? Should the application be responsible for
global synchronization between topics, or does/can Kafka Streams do that?
If the application should, what could be an approach to solve it?

I hope I could explain myself.

Thanks in advance


Re: Time synchronization between streams

2017-04-26 Thread Damian Guy
Hi Murad,

On Wed, 26 Apr 2017 at 13:37 Murad Mamedov  wrote:

> Is there any global time synchronization between streams in Kafka Streams
> API? So that, it would not consume more events from one stream while the
> other is still behind in time. Or probably better to rephrase it like, is
> there global event ordering based on timestamp of event?
>

Yes. When streams are joined, each partition from the joined streams is
grouped into a single Task. Each Task maintains a record buffer for all of
the topics it is consuming from. When it is time to process a record, it
will choose the record from the partition that has the smallest timestamp.
In this way it makes a best effort to keep the streams in sync.
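The smallest-timestamp choice can be illustrated with a toy merge. This is a sketch only, not Kafka's actual implementation: each buffer plays the role of one partition's record queue, and the merge always takes the buffered record with the smallest timestamp.

```python
import heapq

def merge_by_timestamp(buffers):
    """Each buffer is a time-ordered list of (timestamp, value) records.
    Repeatedly pick the buffered record with the smallest timestamp."""
    return list(heapq.merge(*buffers, key=lambda rec: rec[0]))

small = [(1, "v1"), (10, "v2")]   # fast stream with small messages
large = [(2, "vt1"), (5, "vt2")]  # slower stream with large messages
print(merge_by_timestamp([small, large]))
# [(1, 'v1'), (2, 'vt1'), (5, 'vt2'), (10, 'v2')]
```

Note this kind of selection only works while both buffers have data to compare; when one side's buffer is empty, processing cannot wait forever, which is why the synchronization is best-effort.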


>
> The other thing could be to join streams in window, however same question
> arises, if one stream days behind the other, will the join window of 15
> minutes ever work?
>
>
If the data is arriving much later you can use
JoinWindows.until(SOME_TIME_PERIOD) to keep the data around. In this case
the streams will still join. Once SOME_TIME_PERIOD has expired the streams
will no longer be able to join.
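What a 15-minute join window means can be sketched as follows, under the simplifying assumption that two records join whenever their timestamps are within the window of each other; the retention behaviour of until() is not modeled here.

```python
# Sketch of a windowed stream-stream join: records from the two streams
# join when their timestamps are within the window (15 minutes here).
MIN_MS = 60 * 1000
WINDOW_MS = 15 * MIN_MS

def windowed_join(left, right, window_ms=WINDOW_MS):
    """left/right: lists of (timestamp_ms, key, value). Returns (key, l, r) triples."""
    out = []
    for ts_l, k_l, v_l in left:
        for ts_r, k_r, v_r in right:
            if k_l == k_r and abs(ts_l - ts_r) <= window_ms:
                out.append((k_l, v_l, v_r))
    return out

left = [(0, "k1", "v1"), (20 * MIN_MS, "k1", "v2")]
right = [(5 * MIN_MS, "k1", "vt1"), (40 * MIN_MS, "k1", "vt2")]
print(windowed_join(left, right))
# [('k1', 'v1', 'vt1'), ('k1', 'v2', 'vt1')]  (v2 and vt1 are exactly 15 min apart)
```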


> I'm trying to grasp a way on how to design replay of long periods of time
> for application with multiple topics/streams. Especially when combining
> with low-level API processors and transformers which relay on each other
> via GlobalKTable or KTable stores on these streams. For instance, smaller
> topic could have the following sequence of events:
>
> T1 - (k1, v1)
> T1 + 10 minutes - (k1, null)
> T1 + 20 minutes - (k1, v2)
>
> While topic with larger events:
>
> T1 - (k1, vt1)
> T1 + 5 minutes - (k1, null)
> T1 + 15 minutes - (k1, vt2)
>
> If one would join or lookup these streams in realtime (timestamp of event
> is approximately = wall clock time) result would be:
>
> T1 - topic_small (k1, v1) - topic_large (k1, vt1)
> T1 + 5 minutes - topic_small (k1, v1) - topic_large (k1, null)
> T1 + 10 minutes - topic_small (k1, null) - topic_large (k1, null)
> T1 + 15 minutes - topic_small (k1, null) - topic_large (k1, vt2)
> T1 + 20 minutes - topic_small (k1, v2) - topic_large (k1, vt2)
>
> However, when replaying streams from beginning, from perspective of topic
> with large events, it would see topic with small events as (k1, v2),
> completely missing v1 and null states in case of GlobalKTable/KTable
> presentation or events in case of KStream-KStream windowed join.
>
>
I don't really follow here. In the case of a GlobalKTable it will be
initialized with all of the existing data before the rest of the streams
start processing.


> Do I miss something here? Should application be responsible in global
> synchronization between topics, or Kafka Streams does / can do that? If
> application should, then what could be approach to solve it?
>
> I hope I could explain myself.
>
> Thanks in advance
>


Re: Time synchronization between streams

2017-04-26 Thread m...@muradm.net
Yes, basically I'm OK with how join works, including window and
retention periods, under normal circumstances. In real time of
occurrence of events, an application joining the streams will get
something like this:


T1 + 0 => topic_small (K1, V1)  => join result (None)
T1 + 1 min =>  topic_large (K1, VT1) => join result (K1, V1, VT1)
T1 + 3 mins => topic_large (K1, VT2) => join result (K1, V1, VT2)
T1 + 7 mins => topic_small (K1, V2) => join result (K1, V2, VT2)

According to Windowed and WindowedSerializer, only the start of the
window is stored with the key in the state store. Assuming that the
window start time is the same for both topics/KStreams (not sure yet,
still reading the source), but even if it is not, the state store
actions of Kafka Streams will be like this:


join_left_side_store.put ( K1-W1, V1 )
join_right_side_store.put ( K1-W1, VT1 )
join_left_side_store.put ( K1-W1, V2 )
join_right_side_store.put ( K1-W1, VT2 )

However, when the same application consumes the same topics from the
beginning from scratch (no local state stores) over a large period of
time (greater than the window period but less than the retention
period), the join result for a 10-minute window will be different:


join result (None)
join result (K1, V2, VT1)
join result (K1, V2, VT2)

Because the topic_large stream is read more slowly, the value of
topic_small in the window will change from V1 to V2 before Kafka
Streams receives VT1.


I.e. the state store actions of Kafka Streams will be like this:

join_left_side_store.put ( K1-W1, V1 )
join_left_side_store.put ( K1-W1, V2 )
join_right_side_store.put ( K1-W1, VT1 )
join_right_side_store.put ( K1-W1, VT2 )

Isn't it?
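The difference between the two put orders can be shown with a toy sketch. It is hypothetical and deliberately simplified: it keeps only the latest value per key and window on each side, whereas a real windowed join matches against all values retained in the window, so it only illustrates the ordering effect.

```python
# Toy sketch: apply the same four "puts" in two different orders and
# record which (left, right) pairs a naive last-value join would see.

def join_results(events):
    """events: list of (side, key_window, value); emit (left, right) pairs."""
    left, right, results = {}, {}, []
    for side, kw, value in events:
        (left if side == "L" else right)[kw] = value
        other = right.get(kw) if side == "L" else left.get(kw)
        if other is not None:
            results.append((left[kw], right[kw]))
    return results

# Fast stream fully consumed first:
per_stream_order = [("L", "k1-w1", "V1"), ("L", "k1-w1", "V2"),
                    ("R", "k1-w1", "VT1"), ("R", "k1-w1", "VT2")]
# Records processed in event-time order:
timestamp_order = [("L", "k1-w1", "V1"), ("R", "k1-w1", "VT1"),
                   ("L", "k1-w1", "V2"), ("R", "k1-w1", "VT2")]
print(join_results(per_stream_order))  # [('V2', 'VT1'), ('V2', 'VT2')]
print(join_results(timestamp_order))   # [('V1', 'VT1'), ('V2', 'VT1'), ('V2', 'VT2')]
```

In the first ordering VT1 never sees V1, which is the replay concern described above.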

On Wed, Apr 26, 2017 at 6:50 PM, Damian Guy  wrote:

> Hi Murad,
>
> On Wed, 26 Apr 2017 at 13:37 Murad Mamedov  wrote:
>
> > Is there any global time synchronization between streams in Kafka Streams
> > API? So that, it would not consume more events from one stream while the
> > other is still behind in time. Or probably better to rephrase it like, is
> > there global event ordering based on timestamp of event?
>
> Yes. When streams are joined each partition from the joined streams are
> grouped together into a single Task. Each Task maintains a record buffer
> for all of the topics it is consuming from. When it is time process a
> record it will chose a record from the partition that has the smallest
> timestamp. So in this way it makes a best effort to keep the streams in
> sync.
>
> > The other thing could be to join streams in window, however same question
> > arises, if one stream days behind the other, will the join window of 15
> > minutes ever work?
>
> If the data is arriving much later you can use
> JoinWindows.until(SOME_TIME_PERIOD) to keep the data around. In this case
> the streams will still join. Once SOME_TIME_PERIOD has expired the streams
> will no longer be able to join.
>
> > I'm trying to grasp a way on how to design replay of long periods of time
> > for application with multiple topics/streams. Especially when combining
> > with low-level API processors and transformers which relay on each other
> > via GlobalKTable or KTable stores on these streams. For instance, smaller
> > topic could have the following sequence of events:
> >
> > T1 - (k1, v1)
> > T1 + 10 minutes - (k1, null)
> > T1 + 20 minutes - (k1, v2)
> >
> > While topic with larger events:
> >
> > T1 - (k1, vt1)
> > T1 + 5 minutes - (k1, null)
> > T1 + 15 minutes - (k1, vt2)
> >
> > If one would join or lookup these streams in realtime (timestamp of event
> > is approximately = wall clock time) result would be:
> >
> > T1 - topic_small (k1, v1) - topic_large (k1, vt1)
> > T1 + 5 minutes - topic_small (k1, v1) - topic_large (k1, null)
> > T1 + 10 minutes - topic_small (k1, null) - topic_large (k1, null)
> > T1 + 15 minutes - topic_small (k1, null) - topic_large (k1, vt2)
> > T1 + 20 minutes - topic_small (k1, v2) - topic_large (k1, vt2)
> >
> > However, when replaying streams from beginning, from perspective of topic
> > with large events, it would see topic with small events as (k1, v2),
> > completely missing v1 and null states in case of GlobalKTable/KTable
> > presentation or events in case of KStream-KStream windowed join.
>
> I don't really follow here. In the case of a GlobalKTable it will be
> initialized with all of the existing data before the rest of the streams
> start processing.
>
> > Do I miss something here? Should application be responsible in global
> > synchronization between topics, or Kafka Streams does / can do that? If
> > application should, then what could be approach to solve it?
> >
> > I hope I could explain myself.
> >
> > Thanks in advance



Kafka 24/7 support

2017-04-26 Thread Benny Rutten
Good morning,

I am trying to convince my company to choose Apache Kafka as our standard 
messaging system.
However, this can only succeed if I can also propose a partner who can provide 
24/7 support in case of production issues.
Would you by any chance have a list of companies that provide such support?

Kind regards,







Benny Rutten

Database Administrator Sr.
brut...@isabel.eu
t. +32 2 5451 448
m.+32472931814

Isabel NV/SA
Keizerinlaan 13-15 Bld de l'Impératrice
1000 Brussels
Belgium
t. +32 2 5451 711
f. +32 2 5451 719
www.isabel.eu



Disclaimer: This email and any files transmitted with it are confidential and 
intended solely for the use of the individual or entity to whom they are 
addressed. If you are not the named addressee, you should not further read, 
disclose, distribute, copy or use this e-mail or its contents, immediately 
notify the sender by reply to this e-mail and delete this message as well as 
any attachments without retaining a copy. The entire email disclaimer of Isabel 
NV/SA




Last chance: ApacheCon is just three weeks away

2017-04-26 Thread Rich Bowen
ApacheCon is just three weeks away, in Miami, Florida, May 15th - 18th.
http://apachecon.com/

There's still time to register and attend. ApacheCon is the best place
to find out about tomorrow's software, today.

ApacheCon is the official convention of The Apache Software Foundation,
and includes the co-located events:
  * Apache: Big Data
  * Apache: IoT
  * TomcatCon
  * FlexJS Summit
  * Cloudstack Collaboration Conference
  * BarCampApache
  * ApacheCon Lightning Talks

And there are dozens of opportunities to meet your fellow Apache
enthusiasts, both from your project and from the other 200+ projects at
the Apache Software Foundation.

Register here:
http://events.linuxfoundation.org/events/apachecon-north-america/attend/register-

More information here: http://apachecon.com/

Follow us and learn more about ApacheCon:
  * Twitter: @ApacheCon
  * Discussion mailing list:
https://lists.apache.org/list.html?apachecon-disc...@apache.org
  * Podcasts and speaker interviews: http://feathercast.apache.org/
  * IRC: #apachecon on the https://freenode.net/

We look forward to seeing you in Miami!

-- 
Rich Bowen - VP Conferences, The Apache Software Foundation
http://apachecon.com/
@apachecon





RE: Kafka 24/7 support

2017-04-26 Thread Tauzell, Dave
Both Confluent and Cloudera provide support.

-Dave

From: Benny Rutten [mailto:brut...@isabel.eu]
Sent: Wednesday, April 26, 2017 2:36 AM
To: users@kafka.apache.org
Subject: Kafka 24/7 support

Good morning,

I am trying to convince my company to choose Apache Kafka as our standard 
messaging system.
However, this can only succeed if I can also propose a partner who can provide 
24/7 support in case of production issues.
Would you by any chance have a list of companies that provide such support?

Kind regards,









This e-mail and any files transmitted with it are confidential, may contain 
sensitive information, and are intended solely for the use of the individual or 
entity to whom they are addressed. If you have received this e-mail in error, 
please notify the sender by reply e-mail immediately and destroy all copies of 
the e-mail and any attachments.


How to increase replication factor in kafka 10.2

2017-04-26 Thread Naanu Bora
Hi,
In our team, some developers mistakenly created topics with a replication
factor of 1 and partition counts in the range of 20-40. How can we increase
the replication factor to 3 for those topics now? Do we need to come up
with a manual assignment plan for each of the partitions, or is there a
quicker way to achieve this?

Thanks!


Re: Kafka 24/7 support

2017-04-26 Thread Dean Wampler
Talk to Confluent. https://www.confluent.io/ Nobody knows Kafka better than
they do ;)

Some of the Hadoop vendors offer commercial support as part of their
subscriptions, too.

My company, Lightbend, will be rolling out a distribution of streaming
technologies this Fall that will also include Kafka support.
http://www.lightbend.com/fast-data-platform.

dean



*Dean Wampler, Ph.D.*

*VP, Fast Data Engineering at Lightbend*
Author: Programming Scala, 2nd Edition; Fast Data Architectures for
Streaming Applications; Functional Programming for Java Developers; and
Programming Hive (O'Reilly)
@deanwampler
http://polyglotprogramming.com
https://github.com/deanwampler

On Wed, Apr 26, 2017 at 2:36 AM, Benny Rutten  wrote:

> Good morning,
>
>
>
> I am trying to convince my company to choose Apache Kafka as our standard
> messaging system.
>
> However, this can only succeed if I can also propose a partner who can
> provide 24/7 support in case of production issues.
>
> Would you by any chance have a list of companies that provide such support?
>
>
>
> Kind regards,
>
>
>
>
>
>
>
>
>
>
>
> Benny Rutten
> Database Administrator Sr.
> brut...@isabel.eu
> t. +32 2 5451 448
> m.+32472931814
>
> *Isabel NV/SA *Keizerinlaan 13-15 Bld de l'Impératrice
> 1000 Brussels
> Belgium
> t. +32 2 5451 711
> f. +32 2 5451 719
> www.isabel.eu
>
> 
>
>
>
>


Re: How to increase replication factor in kafka 10.2

2017-04-26 Thread Todd Palino
You can use the reassign partitions CLI tool to generate a partition
reassignment for the topic, and then manually edit the JSON to add a third
replica ID to each partition before you run it.
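That manual JSON edit can also be scripted. A sketch, assuming the reassignment file format produced by kafka-reassign-partitions.sh --generate (a top-level "partitions" list with "replicas" arrays); broker id 3 is just an example, and in practice you would pick brokers that spread the new replicas evenly.

```python
import json

def add_replica(reassignment_json, new_broker):
    """Append new_broker to the replica list of every partition in the plan."""
    plan = json.loads(reassignment_json)
    for p in plan["partitions"]:
        if new_broker not in p["replicas"]:
            p["replicas"].append(new_broker)
    return json.dumps(plan)

generated = json.dumps({
    "version": 1,
    "partitions": [
        {"topic": "mytopic", "partition": 0, "replicas": [1, 2]},
        {"topic": "mytopic", "partition": 1, "replicas": [2, 1]},
    ],
})
expanded = json.loads(add_replica(generated, 3))
print([p["replicas"] for p in expanded["partitions"]])  # [[1, 2, 3], [2, 1, 3]]
```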

Alternately, you can use our kafka-assigner tool (
https://github.com/linkedin/kafka-tools) to do it in a more automated
fashion.

-Todd

On Wed, Apr 26, 2017 at 12:25 PM Naanu Bora  wrote:

> Hi,
>In our team some developers created topics with replication factor as 1
> by mistake and number of partition in the range of 20-40. How to increase
> the replication factor to 3 for those topics now? Do we need to come up
> with a manual assignment plan for each of the partitions? Is there any
> quicker way to achieve this?
>
> Thanks!
>
-- 
*Todd Palino*
Senior Staff Engineer, Site Reliability
Data Infrastructure Streaming



linkedin.com/in/toddpalino


Issue in Kafka running for few days

2017-04-26 Thread Abhit Kalsotra
Hi *

My kafka setup:

* OS: Windows
* 6 broker nodes, 4 on one machine and 2 on the other
* One ZK instance on the 4-broker machine and another ZK on the 2-broker machine
* 2 topics with partition count = 50 and replication factor = 3

I am producing on average around 500 messages/sec, with each message size
close to 98 bytes.
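For context, a quick back-of-envelope on that rate (assuming the replication factor of 3 described above; numbers are payload only, ignoring protocol and log overhead):

```python
# Rough write-rate estimate for ~500 msg/s of ~98-byte messages.
msgs_per_sec = 500
msg_bytes = 98
replication = 3

bytes_per_sec = msgs_per_sec * msg_bytes       # producer payload rate
cluster_write = bytes_per_sec * replication    # total replicated write rate
print(bytes_per_sec)   # 49000 (~48 KB/s)
print(cluster_write)   # 147000 (~144 KB/s across the cluster)
```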

More or less the message rate stays constant throughout, but after running
the setup for close to 2 weeks, my Kafka cluster broke, and this has
happened twice in a month. I am not able to understand what the issue is;
Kafka gurus, please share your inputs.

The controller.log file at the time Kafka broke looks like:















[2017-04-26 12:03:34,998] INFO [Controller 0]: Broker failure callback for 0,1,3,5,6 (kafka.controller.KafkaController)
[2017-04-26 12:03:34,998] INFO [Controller 0]: Removed ArrayBuffer() from list of shutting down brokers. (kafka.controller.KafkaController)
[2017-04-26 12:03:34,998] INFO [Partition state machine on Controller 0]: Invoking state change to OfflinePartition for partitions

Re: [VOTE] 0.10.2.1 RC3

2017-04-26 Thread Guozhang Wang
+1

Verified unit test on source, and quick start on binary (Scala 2.12 only).


Guozhang


On Wed, Apr 26, 2017 at 2:43 AM, Ian Duffy  wrote:

> +1
>
> Started using kafka client 0.10.2.1 for our streams applications, seen a
> much greater improvement on retries when failures occur.
> We've been running without manual intervention for > 24 hours which is
> something we haven't seen in awhile.
>
> Found it odd that the RC tag wasn't within the version on the maven
> staging repository, how do you identify different RC versions? How do you
> flush clients cache? etc. Ended up digging down on the index of pages and
> verifying the last modified date matched the date on this email thread.
>
> Thanks,
> Ian.
>
> On 22 April 2017 at 22:45, Michal Borowiecki <
> michal.borowie...@openbet.com> wrote:
>
>> It's listed below:
>>
>> * Maven artifacts to be voted 
>> upon:https://repository.apache.org/content/groups/staging/
>>
>>
>>
>> On 22/04/17 19:23, Shimi Kiviti wrote:
>>
>> Is there a maven repo with these jars so I can test it against our kafka
>> streams services?
>>
>> On Sat, Apr 22, 2017 at 9:05 PM, Eno Thereska  
>> 
>> wrote:
>>
>>
>> +1 tested the usual streams tests as before.
>>
>> Thanks
>> Eno
>>
>> On 21 Apr 2017, at 17:56, Gwen Shapira  
>>  wrote:
>>
>> Hello Kafka users, developers, friends, romans, countrypersons,
>>
>> This is the fourth (!) candidate for release of Apache Kafka 0.10.2.1.
>>
>> It is a bug fix release, so we have lots of bug fixes, some super
>> important.
>>
>> Release notes for the 0.10.2.1 
>> release:http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc3/RELEASE_NOTES.html
>>
>> *** Please download, test and vote by Wednesday, April 26, 2017 ***
>>
>> Kafka's KEYS file containing PGP keys we use to sign the 
>> release:http://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and 
>> binary):http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc3/
>>
>> * Maven artifacts to be voted 
>> upon:https://repository.apache.org/content/groups/staging/
>>
>> * Javadoc:http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc3/javadoc/
>>
>> * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.1 
>> tag:https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
>>
>> 8e4f09caeaa877f06dc75c7da1af7a727e5e599f
>>
>> * Documentation:http://kafka.apache.org/0102/documentation.html
>>
>> * Protocol:http://kafka.apache.org/0102/protocol.html
>>
>> /**
>>
>> Your help in validating this bugfix release is super valuable, so
>> please take the time to test and vote!
>>
>> Suggested tests:
>> * Grab the source archive and make sure it compiles
>> * Grab one of the binary distros and run the quickstarts against them
>> * Extract and verify one of the site docs jars
>> * Build a sample against jars in the staging repo
>> * Validate GPG signatures on at least one file
>> * Validate the javadocs look ok
>> * The 0.10.2 documentation was updated for this bugfix release
>> (especially upgrade, streams and connect portions) - please make sure
>> it looks ok: http://kafka.apache.org/documentation.html
>>
>> But above all, try to avoid finding new bugs - we want to get this
>>
>> release
>>
>> out the door already :P
>>
>>
>> Thanks,
>> Gwen
>>
>>
>>
>> --
>> *Gwen Shapira*
>> Product Manager | Confluent
>> 650.450.2760 | @gwenshap
>> Follow us: Twitter  
>>  | blog 
>> 
>>
>>
>> --
>>  Michal Borowiecki
>> Senior Software Engineer L4
>> T: +44 208 742 1600
>>
>> +44 203 249 8448
>>
>>
>>
>> E: michal.borowie...@openbet.com
>> W: www.openbet.com
>> OpenBet Ltd
>>
>> Chiswick Park Building 9
>>
>> 566 Chiswick High Rd
>>
>> London
>>
>> W4 5XT
>>
>> UK
>> 
>> This message is confidential and intended only for the addressee. If you
>> have received this message in error, please immediately notify the
>> postmas...@openbet.com and delete it from your system as well as any
>> copies. The content of e-mails as well as traffic data may be monitored by
>> OpenBet for employment and security purposes. To protect the environment
>> please do not print this e-mail unless necessary. OpenBet Ltd. Registered
>> Office: Chiswick Park Building 9, 566 Chiswick High Road, London, W4 5XT,
>> United Kingdom. A company registered in England and Wales. Registered no.
>> 3134634. VAT no. GB927523612
>>
>
>


-- 
-- Guozhang


Re: Stream applications dying on broker ISR change

2017-04-26 Thread Guozhang Wang
Hello Sachin,

When an instance is stopped, it shuts down the underlying heartbeat thread
as part of the stopping process, so the coordinator will realize it is
leaving the group.

As for non-graceful stopping, say a bug in the stream app code causes the
thread to die: the Streams library currently captures most exceptions, and we
rely on global error handling for unexpected ones. This is admittedly not
ideal, and we are working on finer-grained error handling to fix such issues.

Guozhang

On Mon, Apr 24, 2017 at 7:34 AM, Sachin Mittal  wrote:

> I had a question about this setting
> ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, Integer.toString(Integer.MAX_
> VALUE)
>
> How would the broker know if a thread has died or say we simply stopped an
> instance and needs to be booted out of the group.
>
> Thanks
> Sachin
>
>
> On Mon, Apr 24, 2017 at 5:55 PM, Eno Thereska 
> wrote:
>
> > Hi Ian,
> >
> >
> > This is now fixed in 0.10.2.1. The default configuration need tweaking.
> If
> > you can't pick that up (it's currently being voted), make sure you have
> > these two parameters set as follows in your streams config:
> >
> > final Properties props = new Properties();
> > ...
> > props.put(ProducerConfig.RETRIES_CONFIG, 10);  < increase to 10 from
> > default of 0
> > props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG,
> > Integer.toString(Integer.MAX_VALUE)); <- increase to infinity
> > from default of 300 s
> >
> > Thanks
> > Eno
> >
> > > On 24 Apr 2017, at 10:38, Ian Duffy  wrote:
> > >
> > > Hi All,
> > >
> > > We're running multiple Kafka Stream applications using Kafka client
> > > 0.10.2.0 against a 6 node broker cluster running 0.10.1.1
> > > Additionally, we're running Kafka Connect 0.10.2.0 with the
> ElasticSearch
> > > connector by confluent [1]
> > >
> > > On an ISR change occurring on the brokers, all of the streams
> > applications
> > > and the Kafka connect ES connector threw exceptions and never
> recovered.
> > >
> > > We've seen a correlation between Kafka Broker ISR change and stream
> > > applications dying.
> > >
> > > The logs from the streams applications throw out the following and fail
> > to
> > > recover:
> > >
> > > 07:01:23.323 stream-processor /var/log/application.log  2017-04-24
> > > 06:01:23,323 - [WARN] - [1.1.0-6] - [StreamThread-1]
> > > o.a.k.s.p.internals.StreamThread - Unexpected state transition from
> > RUNNING
> > > to NOT_RUNNING
> > > 07:01:23.323 stream-processor /var/log/application.log  2017-04-24
> > > 06:01:23,324 - [ERROR] - [1.1.0-6] - [StreamThread-1] Application -
> > > Unexpected Exception caught in thread [StreamThread-1]:
> > > org.apache.kafka.streams.errors.StreamsException: Exception caught in
> > > process. taskId=0_81, processor=KSTREAM-SOURCE-00,
> > > topic=kafka-topic, partition=81, offset=479285
> > > at
> > > org.apache.kafka.streams.processor.internals.
> > StreamTask.process(StreamTask.java:216)
> > > at
> > > org.apache.kafka.streams.processor.internals.StreamThread.runLoop(
> > StreamThread.java:641)
> > > at
> > > org.apache.kafka.streams.processor.internals.
> > StreamThread.run(StreamThread.java:368)
> > > Caused by: org.apache.kafka.streams.errors.StreamsException: task
> [0_81]
> > > exception caught when producing
> > > at
> > > org.apache.kafka.streams.processor.internals.RecordCollectorImpl.
> > checkForException(RecordCollectorImpl.java:119)
> > > at
> > > org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(
> > RecordCollectorImpl.java:76)
> > > at
> > > org.apache.kafka.streams.processor.internals.SinkNode.
> > process(SinkNode.java:79)
> > > at
> > > org.apache.kafka.streams.processor.internals.
> > ProcessorContextImpl.forward(ProcessorContextImpl.java:83)
> > > at
> > > org.apache.kafka.streams.kstream.internals.KStreamFlatMap$
> > KStreamFlatMapProcessor.process(KStreamFlatMap.java:43)
> > > at
> > > org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(
> > ProcessorNode.java:48)
> > > at
> > > org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.
> > measureLatencyNs(StreamsMetricsImpl.java:188)
> > > at
> > > org.apache.kafka.streams.processor.internals.ProcessorNode.process(
> > ProcessorNode.java:134)
> > > at
> > > org.apache.kafka.streams.processor.internals.
> > ProcessorContextImpl.forward(ProcessorContextImpl.java:83)
> > > at
> > > org.apache.kafka.streams.processor.internals.
> > SourceNode.process(SourceNode.java:70)
> > > at
> > > org.apache.kafka.streams.processor.internals.
> > StreamTask.process(StreamTask.java:197)
> > > ... 2 common frames omitted
> > > Caused by: org.apache.kafka.common.errors.
> NotLeaderForPartitionException
> > :
> > > This server is not the leader for that topic-partition.
> > > 07:01:23.558 stream-processor /var/log/application.log  2017-04-24
> > > 06:01:23,558 - [WARN] - [1.1.0-6] - [StreamThread-3]
> > > o.a.k.s.p.internals.StreamThread - Unexpected state transition from
> > RUNNING
> > > to NOT_R

Re: Issue in Kafka running for few days

2017-04-26 Thread Abhit Kalsotra
Any pointers please


Abhi

On Wed, Apr 26, 2017 at 11:03 PM, Abhit Kalsotra  wrote:

> Hi *
>
> My kafka setup
>
>
> * OS: Windows
> * 6 broker nodes: 4 on one machine and 2 on the other
> * One ZK instance on the 4-broker machine and another ZK on the 2-broker
> machine
> * 2 topics with partition size = 50 and replication factor = 3
>
> I am producing an average of around 500 messages/sec, with each
> message close to 98 bytes...
>
> More or less the message rate stays constant throughout, but after running
> the setup for close to 2 weeks, my Kafka cluster broke, and this has
> happened twice in a month. Not able to understand what's the issue; Kafka
> gurus, please do share your inputs...
>
> The controller.log file at the time Kafka broke looks like:
>
> [2017-04-26 12:03:34,998] INFO [Controller 0]: Broker failure callback
> for 0,1,3,5,6 (kafka.controller.KafkaController)[2017-04-26 12:03:34,998]
> INFO [Controller 0]: Removed ArrayBuffer() from list of shutting down
> brokers. (kafka.controller.KafkaController)[2017-04-26 12:03:34,998] INFO
> [Partition state machine on Controller 0]: Invoking state change to
> OfflinePartition for partitions
> [__consumer_offsets,19],[mytopic,11],[__consumer_offsets,30],[mytopicOLD,18],[mytopic,13],[__consumer_offsets,47],[mytopicOLD,26],[__consumer_offsets,29],[mytopicOLD,0],[__consumer_offsets,41],[mytopic,44],[mytopicOLD,38],[mytopicOLD,2],[__consumer_offsets,17],[__consumer_offsets,10],[mytopic,20],[mytopic,23],[mytopic,30],[__consumer_offsets,14],[__consumer_offsets,40],[mytopic,31],[mytopicOLD,43],[mytopicOLD,19],[mytopicOLD,35],[__consumer_offsets,18],[mytopic,43],[__consumer_offsets,26],[__consumer_offsets,0],[mytopic,32],[__consumer_offsets,24],[mytopicOLD,3],[mytopic,2],[mytopic,3],[mytopicOLD,45],[mytopic,35],[__consumer_offsets,20],[mytopic,1],[mytopicOLD,33],[__consumer_offsets,5],[mytopicOLD,47],[__consumer_offsets,22],[mytopicOLD,8],[mytopic,33],[mytopic,36],[mytopicOLD,11],[mytopic,47],[mytopicOLD,20],[mytopic,48],[__consumer_offsets,12],[mytopicOLD,32],[__consumer_offsets,8],[mytopicOLD,39],[mytopicOLD,27],[mytopicOLD,49],[mytopicOLD,42],[mytopic,21],[mytopicOLD,31],[mytopic,29],[__consumer_offsets,23],[mytopicOLD,21],[__consumer_offsets,48],[__consumer_offsets,11],[mytopic,18],[__consumer_offsets,13],[mytopic,45],[mytopic,5],[mytopicOLD,25],[mytopic,6],[mytopicOLD,23],[mytopicOLD,37],[__consumer_offsets,6],[__consumer_offsets,49],[mytopicOLD,13],[__consumer_offsets,28],[__consumer_offsets,4],[__consumer_offsets,37],[mytopic,12],[mytopicOLD,30],[__consumer_offsets,31],[__consumer_offsets,44],[mytopicOLD,15],[mytopicOLD,29],[mytopic,37],[mytopic,38],[__consumer_offsets,42],[mytopic,27],[mytopic,26],[mytopic,15],[__consumer_offsets,34],[mytopic,42],[__consumer_offsets,46],[mytopic,14],[mytopicOLD,12],[mytopicOLD,1],[mytopic,7],[__consumer_offsets,25],[mytopicOLD,24],[mytopicOLD,44],[mytopicOLD,14],[__consumer_offsets,32],[mytopic,0],[__consumer_offsets,43],[mytopic,39],[mytopicOLD,5],[mytopic,9],[mytopic,24],[__consumer_offsets,36],[mytopic,25],[mytopicOLD,36],[mytopic,19],[__consumer_offsets,35],[__consumer_offsets,7],[mytopic,8],[__consumer_offsets,38],
[mytopicOLD,48],[mytopicOLD,9],[__consumer_offsets,1],[mytopicOLD,6],[mytopic,41],[mytopicOLD,41],[mytopicOLD,7],[mytopic,17],[mytopicOLD,17],[mytopic,49],[__consumer_offsets,16],[__consumer_offsets,2]
> (kafka.controller.PartitionStateMachine)[2017-04-26 12:03:35,045] INFO
> [SessionExpirationListener on 1], ZK expired; shut down all controller
> components and try to re-elect
> (kafka.controller.KafkaController$SessionExpirationListener)[2017-04-26
> 12:03:35,045] DEBUG [Controller 1]: Controller resigning, broker id 1
> (kafka.controller.KafkaController)[2017-04-26 12:03:35,045] DEBUG
> [Controller 1]: De-registering IsrChangeNotificationListener
> (kafka.controller.KafkaController)[2017-04-26 12:03:35,060] INFO [Partition
> state machine on Controller 1]: Stopped partition state machine
> (kafka.controller.PartitionStateMachine)[2017-04-26 12:03:35,060] INFO
> [Replica state machine on controller 1]: Stopped replica state machine
> (kafka.controller.ReplicaStateMachine)[2017-04-26 12:03:35,060] INFO
> [Controller 1]: Broker 1 resigned as the controller
> (kafka.controller.KafkaController)[2017-04-26 12:03:36,013] DEBUG
> [OfflinePartitionLeaderSelector]: No broker in ISR is alive for
> [__consumer_offsets,19]. Pick the leader from the alive assigned replicas:
> (kafka.controller.OfflinePartitionLeaderSelector)[2017-04-26 12:03:36,029]
> DEBUG [OfflinePartitionLeaderSelector]: No broker in ISR is alive for
> [mytopic,11]. Pick the leader from the alive assigned replicas:
> (kafka.controller.OfflinePartitionLeaderSelector)[2017-04-26 12:03:36,029]
> DEBUG [OfflinePartitionLeaderSelector]: No broker in ISR is alive for
> [__consumer_offsets,30]. Pick the leader from the alive assigned replicas:
> (kafka.controller.OfflinePartitionLeaderS
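A first diagnostic step for the offline-partition state in these logs is the topic tool shipped with the Kafka distribution. A sketch (KAFKA_HOME and the ZooKeeper address are placeholders; the commands are echoed rather than executed so the snippet runs without a live cluster):

```shell
# Sketch only: KAFKA_HOME and the ZooKeeper address are placeholders.
KAFKA_HOME=/opt/kafka
ZK=localhost:2181

# Partitions whose ISR has shrunk below the replication factor:
CMD_UNDER="$KAFKA_HOME/bin/kafka-topics.sh --describe --zookeeper $ZK --under-replicated-partitions"
# Partitions with no live leader (the OfflinePartitionLeaderSelector cases):
CMD_UNAVAIL="$KAFKA_HOME/bin/kafka-topics.sh --describe --zookeeper $ZK --unavailable-partitions"

# Echoed rather than executed so the sketch runs without a live cluster;
# remove the echo to actually run the checks against the cluster.
echo "$CMD_UNDER"
echo "$CMD_UNAVAIL"
```

Running these against each side of the two-machine setup would show whether the ZK session expiry left partitions leaderless on one group of brokers only.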

Re: [VOTE] 0.10.2.1 RC3

2017-04-26 Thread Jun Rao
Hi, Gwen,

Thanks for doing the release. +1 from me.

Jun

On Fri, Apr 21, 2017 at 9:56 AM, Gwen Shapira  wrote:

> Hello Kafka users, developers, friends, romans, countrypersons,
>
> This is the fourth (!) candidate for release of Apache Kafka 0.10.2.1.
>
> It is a bug fix release, so we have lots of bug fixes, some super
> important.
>
> Release notes for the 0.10.2.1 release:
> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc3/RELEASE_NOTES.html
>
> *** Please download, test and vote by Wednesday, April 26, 2017 ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc3/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc3/javadoc/
>
> * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.1 tag:
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> 8e4f09caeaa877f06dc75c7da1af7a727e5e599f
>
>
> * Documentation:
> http://kafka.apache.org/0102/documentation.html
>
> * Protocol:
> http://kafka.apache.org/0102/protocol.html
>
> /**
>
> Your help in validating this bugfix release is super valuable, so
> please take the time to test and vote!
>
> Suggested tests:
>  * Grab the source archive and make sure it compiles
>  * Grab one of the binary distros and run the quickstarts against them
>  * Extract and verify one of the site docs jars
>  * Build a sample against jars in the staging repo
>  * Validate GPG signatures on at least one file
>  * Validate the javadocs look ok
>  * The 0.10.2 documentation was updated for this bugfix release
> (especially upgrade, streams and connect portions) - please make sure
> it looks ok: http://kafka.apache.org/documentation.html
>
> But above all, try to avoid finding new bugs - we want to get this release
> out the door already :P
>
>
> Thanks,
> Gwen
>
>
>
> --
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter  | blog
> 
>


Re: [VOTE] 0.10.2.1 RC3

2017-04-26 Thread Gwen Shapira
+1 (binding)

Validated unit tests, quickstarts, connect, signatures

On Wed, Apr 26, 2017 at 11:30 AM, Guozhang Wang  wrote:

> +1
>
> Verified unit test on source, and quick start on binary (Scala 2.12 only).
>
>
> Guozhang
>
>
> On Wed, Apr 26, 2017 at 2:43 AM, Ian Duffy  wrote:
>
> > +1
> >
> > Started using kafka client 0.10.2.1 for our streams applications, seen a
> > much greater improvement on retries when failures occur.
> > We've been running without manual intervention for > 24 hours which is
> > something we haven't seen in awhile.
> >
> > Found it odd that the RC tag wasn't within the version on the maven
> > staging repository, how do you identify different RC versions? How do you
> > flush clients cache? etc. Ended up digging down on the index of pages and
> > verifying the last modified date matched the date on this email thread.
> >
> > Thanks,
> > Ian.
> >
> > On 22 April 2017 at 22:45, Michal Borowiecki <
> > michal.borowie...@openbet.com> wrote:
> >
> >> It's listed below:
> >>
> >> * Maven artifacts to be voted upon:https://repository.
> apache.org/content/groups/staging/
> >>
> >>
> >>
> >> On 22/04/17 19:23, Shimi Kiviti wrote:
> >>
> >> Is there a maven repo with these jars so I can test it against our kafka
> >> streams services?
> >>
> >> On Sat, Apr 22, 2017 at 9:05 PM, Eno Thereska 
> 
> >> wrote:
> >>
> >>
> >> +1 tested the usual streams tests as before.
> >>
> >> Thanks
> >> Eno
> >>
> >> On 21 Apr 2017, at 17:56, Gwen Shapira  <
> g...@confluent.io> wrote:
> >>
> >> Hello Kafka users, developers, friends, romans, countrypersons,
> >>
> >> This is the fourth (!) candidate for release of Apache Kafka 0.10.2.1.
> >>
> >> It is a bug fix release, so we have lots of bug fixes, some super
> >> important.
> >>
> >> Release notes for the 0.10.2.1 release:http://home.apache.
> org/~gwenshap/kafka-0.10.2.1-rc3/RELEASE_NOTES.html
> >>
> >> *** Please download, test and vote by Wednesday, April 26, 2017 ***
> >>
> >> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
> >>
> >> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc3/
> >>
> >> * Maven artifacts to be voted upon:https://repository.
> apache.org/content/groups/staging/
> >>
> >> * Javadoc:http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc3/javadoc/
> >>
> >> * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.1 tag:
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> >>
> >> 8e4f09caeaa877f06dc75c7da1af7a727e5e599f
> >>
> >> * Documentation:http://kafka.apache.org/0102/documentation.html
> >>
> >> * Protocol:http://kafka.apache.org/0102/protocol.html
> >>
> >> /**
> >>
> >> Your help in validating this bugfix release is super valuable, so
> >> please take the time to test and vote!
> >>
> >> Suggested tests:
> >> * Grab the source archive and make sure it compiles
> >> * Grab one of the binary distros and run the quickstarts against them
> >> * Extract and verify one of the site docs jars
> >> * Build a sample against jars in the staging repo
> >> * Validate GPG signatures on at least one file
> >> * Validate the javadocs look ok
> >> * The 0.10.2 documentation was updated for this bugfix release
> >> (especially upgrade, streams and connect portions) - please make sure
> >> it looks ok: http://kafka.apache.org/documentation.html
> >>
> >> But above all, try to avoid finding new bugs - we want to get this
> >>
> >> release
> >>
> >> out the door already :P
> >>
> >>
> >> Thanks,
> >> Gwen
> >>
> >>
> >>
> >> --
> >> *Gwen Shapira*
> >> Product Manager | Confluent650.450.2760 <(650)%20450-2760> | @gwenshap
> >> Follow us: Twitter  <
> https://twitter.com/ConfluentInc> | blog <
> http://www.confluent.io/blog>
> >>
> >>

Re: [VOTE] 0.10.2.1 RC3

2017-04-26 Thread Shimi Kiviti
+1

I compiled our (Rollout.io) kafka-streams project, ran unit tests and
end-to-end tests (against streams 0.10.2.1 and broker 0.10.1.1).
Everything works as expected

On Wed, Apr 26, 2017 at 10:05 PM, Gwen Shapira  wrote:

> +1 (binding)
>
> Validated unit tests, quickstarts, connect, signatures
>
> On Wed, Apr 26, 2017 at 11:30 AM, Guozhang Wang 
> wrote:
>
> > +1
> >
> > Verified unit test on source, and quick start on binary (Scala 2.12
> only).
> >
> >
> > Guozhang
> >
> >
> > On Wed, Apr 26, 2017 at 2:43 AM, Ian Duffy  wrote:
> >
> > > +1
> > >
> > > Started using kafka client 0.10.2.1 for our streams applications, seen
> a
> > > much greater improvement on retries when failures occur.
> > > We've been running without manual intervention for > 24 hours which is
> > > something we haven't seen in awhile.
> > >
> > > Found it odd that the RC tag wasn't within the version on the maven
> > > staging repository, how do you identify different RC versions? How do
> you
> > > flush clients cache? etc. Ended up digging down on the index of pages
> and
> > > verifying the last modified date matched the date on this email thread.
> > >
> > > Thanks,
> > > Ian.
> > >
> > > On 22 April 2017 at 22:45, Michal Borowiecki <
> > > michal.borowie...@openbet.com> wrote:
> > >
> > >> It's listed below:
> > >>
> > >> * Maven artifacts to be voted upon:https://repository.
> > apache.org/content/groups/staging/
> > >>
> > >>
> > >>
> > >> On 22/04/17 19:23, Shimi Kiviti wrote:
> > >>
> > >> Is there a maven repo with these jars so I can test it against our
> kafka
> > >> streams services?
> > >>
> > >> On Sat, Apr 22, 2017 at 9:05 PM, Eno Thereska  >
> > 
> > >> wrote:
> > >>
> > >>
> > >> +1 tested the usual streams tests as before.
> > >>
> > >> Thanks
> > >> Eno
> > >>
> > >> On 21 Apr 2017, at 17:56, Gwen Shapira  <
> > g...@confluent.io> wrote:
> > >>
> > >> Hello Kafka users, developers, friends, romans, countrypersons,
> > >>
> > >> This is the fourth (!) candidate for release of Apache Kafka 0.10.2.1.
> > >>
> > >> It is a bug fix release, so we have lots of bug fixes, some super
> > >> important.
> > >>
> > >> Release notes for the 0.10.2.1 release:http://home.apache.
> > org/~gwenshap/kafka-0.10.2.1-rc3/RELEASE_NOTES.html
> > >>
> > >> *** Please download, test and vote by Wednesday, April 26, 2017 ***
> > >>
> > >> Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> > >>
> > >> * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc3/
> > >>
> > >> * Maven artifacts to be voted upon:https://repository.
> > apache.org/content/groups/staging/
> > >>
> > >> * Javadoc:http://home.apache.org/~gwenshap/kafka-0.10.2.1-
> rc3/javadoc/
> > >>
> > >> * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.1 tag:
> > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> > >>
> > >> 8e4f09caeaa877f06dc75c7da1af7a727e5e599f
> > >>
> > >> * Documentation:http://kafka.apache.org/0102/documentation.html
> > >>
> > >> * Protocol:http://kafka.apache.org/0102/protocol.html
> > >>
> > >> /**
> > >>
> > >> Your help in validating this bugfix release is super valuable, so
> > >> please take the time to test and vote!
> > >>
> > >> Suggested tests:
> > >> * Grab the source archive and make sure it compiles
> > >> * Grab one of the binary distros and run the quickstarts against them
> > >> * Extract and verify one of the site docs jars
> > >> * Build a sample against jars in the staging repo
> > >> * Validate GPG signatures on at least one file
> > >> * Validate the javadocs look ok
> > >> * The 0.10.2 documentation was updated for this bugfix release
> > >> (especially upgrade, streams and connect portions) - please make sure
> > >> it looks ok: http://kafka.apache.org/documentation.html
> > >>
> > >> But above all, try to avoid finding new bugs - we want to get this
> > >>
> > >> release
> > >>
> > >> out the door already :P
> > >>
> > >>
> > >> Thanks,
> > >> Gwen
> > >>
> > >>
> > >>
> > >> --
> > >> *Gwen Shapira*
> > >> Product Manager | Confluent650.450.2760 <(650)%20450-2760> | @gwenshap
> > >> Follow us: Twitter  <
> > https://twitter.com/ConfluentInc> | blog <
> > http://www.confluent.io/blog>
> > >>
> > >>

Re: [VOTE] 0.10.2.1 RC3

2017-04-26 Thread Gwen Shapira
Vote summary:
+1: 6 (3 binding) - Eno, Ian, Guozhang, Jun, Gwen and Shimi
0: 0
-1: 0

W00t! 72 hours passed and we have 3 binding +1!

Thank you for playing "bugfix release". See you all at the next round :)
I'll get our bug fixes out the door ASAP.

Gwen


On Wed, Apr 26, 2017 at 12:12 PM, Shimi Kiviti  wrote:

> +1
>
> I compiled our (Rollout.io) kafka-stream project, run unit-tests and
> end-to-end tests (against streams 0.10.2.1 and broker 0.10.1.1)
> Everything works as expected
>
> On Wed, Apr 26, 2017 at 10:05 PM, Gwen Shapira  wrote:
>
> > +1 (binding)
> >
> > Validated unit tests, quickstarts, connect, signatures
> >
> > On Wed, Apr 26, 2017 at 11:30 AM, Guozhang Wang 
> > wrote:
> >
> > > +1
> > >
> > > Verified unit test on source, and quick start on binary (Scala 2.12
> > only).
> > >
> > >
> > > Guozhang
> > >
> > >
> > > On Wed, Apr 26, 2017 at 2:43 AM, Ian Duffy  wrote:
> > >
> > > > +1
> > > >
> > > > Started using kafka client 0.10.2.1 for our streams applications,
> seen
> > a
> > > > much greater improvement on retries when failures occur.
> > > > We've been running without manual intervention for > 24 hours which
> is
> > > > something we haven't seen in awhile.
> > > >
> > > > Found it odd that the RC tag wasn't within the version on the maven
> > > > staging repository, how do you identify different RC versions? How do
> > you
> > > > flush clients cache? etc. Ended up digging down on the index of pages
> > and
> > > > verifying the last modified date matched the date on this email
> thread.
> > > >
> > > > Thanks,
> > > > Ian.
> > > >
> > > > On 22 April 2017 at 22:45, Michal Borowiecki <
> > > > michal.borowie...@openbet.com> wrote:
> > > >
> > > >> It's listed below:
> > > >>
> > > >> * Maven artifacts to be voted upon:https://repository.
> > > apache.org/content/groups/staging/
> > > >>
> > > >>
> > > >>
> > > >> On 22/04/17 19:23, Shimi Kiviti wrote:
> > > >>
> > > >> Is there a maven repo with these jars so I can test it against our
> > kafka
> > > >> streams services?
> > > >>
> > > >> On Sat, Apr 22, 2017 at 9:05 PM, Eno Thereska <
> eno.there...@gmail.com
> > >
> > > 
> > > >> wrote:
> > > >>
> > > >>
> > > >> +1 tested the usual streams tests as before.
> > > >>
> > > >> Thanks
> > > >> Eno
> > > >>
> > > >> On 21 Apr 2017, at 17:56, Gwen Shapira  <
> > > g...@confluent.io> wrote:
> > > >>
> > > >> Hello Kafka users, developers, friends, romans, countrypersons,
> > > >>
> > > >> This is the fourth (!) candidate for release of Apache Kafka
> 0.10.2.1.
> > > >>
> > > >> It is a bug fix release, so we have lots of bug fixes, some super
> > > >> important.
> > > >>
> > > >> Release notes for the 0.10.2.1 release:http://home.apache.
> > > org/~gwenshap/kafka-0.10.2.1-rc3/RELEASE_NOTES.html
> > > >>
> > > >> *** Please download, test and vote by Wednesday, April 26, 2017 ***
> > > >>
> > > >> Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS
> > > >>
> > > >> * Release artifacts to be voted upon (source and binary):
> > > http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc3/
> > > >>
> > > >> * Maven artifacts to be voted upon:https://repository.
> > > apache.org/content/groups/staging/
> > > >>
> > > >> * Javadoc:http://home.apache.org/~gwenshap/kafka-0.10.2.1-
> > rc3/javadoc/
> > > >>
> > > >> * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.1 tag:
> > > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> > > >>
> > > >> 8e4f09caeaa877f06dc75c7da1af7a727e5e599f
> > > >>
> > > >> * Documentation:http://kafka.apache.org/0102/documentation.html
> > > >>
> > > >> * Protocol:http://kafka.apache.org/0102/protocol.html
> > > >>
> > > >> /**
> > > >>
> > > >> Your help in validating this bugfix release is super valuable, so
> > > >> please take the time to test and vote!
> > > >>
> > > >> Suggested tests:
> > > >> * Grab the source archive and make sure it compiles
> > > >> * Grab one of the binary distros and run the quickstarts against
> them
> > > >> * Extract and verify one of the site docs jars
> > > >> * Build a sample against jars in the staging repo
> > > >> * Validate GPG signatures on at least one file
> > > >> * Validate the javadocs look ok
> > > >> * The 0.10.2 documentation was updated for this bugfix release
> > > >> (especially upgrade, streams and connect portions) - please make
> sure
> > > >> it looks ok: http://kafka.apache.org/documentation.html
> > > >>
> > > >> But above all, try to avoid finding new bugs - we want to get this
> > > >>
> > > >> release
> > > >>
> > > >> out the door already :P
> > > >>
> > > >>
> > > >> Thanks,
> > > >> Gwen
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> *Gwen Shapira*
> > > >> Product Manager | Confluent | 650.450.2760 | @gwenshap
> > > >> Follow us: Twitter  <
> > > https://twitter.com/ConfluentInc> | blog
> <
> > > http://www.confluent.io/blo

Re: [VOTE] 0.10.2.1 RC3

2017-04-26 Thread Gwen Shapira
Quick update:
I closed the release on JIRA and bumped the versions in github. Uploaded
artifacts and released the jars in Maven.
Waiting for everything to actually show up before I update the website and
send the announcement. Expect something tonight or tomorrow morning.

Gwen

On Wed, Apr 26, 2017 at 12:16 PM, Gwen Shapira  wrote:

>
> Vote summary:
> +1: 6 (3 binding) - Eno, Ian, Guozhang, Jun, Gwen and Shimi
> 0: 0
> -1: 0
>
> W00t! 72 hours passed and we have 3 binding +1!
>
> Thank you for playing "bugfix release". See you all at the next round :)
> I'll get our bug fixes out the door ASAP.
>
> Gwen
>
>
> On Wed, Apr 26, 2017 at 12:12 PM, Shimi Kiviti  wrote:
>
>> +1
>>
>> I compiled our (Rollout.io) kafka-stream project, run unit-tests and
>> end-to-end tests (against streams 0.10.2.1 and broker 0.10.1.1)
>> Everything works as expected
>>
>> On Wed, Apr 26, 2017 at 10:05 PM, Gwen Shapira  wrote:
>>
>> > +1 (binding)
>> >
>> > Validated unit tests, quickstarts, connect, signatures
>> >
>> > On Wed, Apr 26, 2017 at 11:30 AM, Guozhang Wang 
>> > wrote:
>> >
>> > > +1
>> > >
>> > > Verified unit test on source, and quick start on binary (Scala 2.12
>> > only).
>> > >
>> > >
>> > > Guozhang
>> > >
>> > >
>> > > On Wed, Apr 26, 2017 at 2:43 AM, Ian Duffy  wrote:
>> > >
>> > > > +1
>> > > >
>> > > > Started using kafka client 0.10.2.1 for our streams applications,
>> seen
>> > a
>> > > > much greater improvement on retries when failures occur.
>> > > > We've been running without manual intervention for > 24 hours which
>> is
>> > > > something we haven't seen in awhile.
>> > > >
>> > > > Found it odd that the RC tag wasn't within the version on the maven
>> > > > staging repository, how do you identify different RC versions? How
>> do
>> > you
>> > > > flush clients cache? etc. Ended up digging down on the index of
>> pages
>> > and
>> > > > verifying the last modified date matched the date on this email
>> thread.
>> > > >
>> > > > Thanks,
>> > > > Ian.
>> > > >
>> > > > On 22 April 2017 at 22:45, Michal Borowiecki <
>> > > > michal.borowie...@openbet.com> wrote:
>> > > >
>> > > >> It's listed below:
>> > > >>
>> > > >> * Maven artifacts to be voted upon:https://repository.
>> > > apache.org/content/groups/staging/
>> > > >>
>> > > >>
>> > > >>
>> > > >> On 22/04/17 19:23, Shimi Kiviti wrote:
>> > > >>
>> > > >> Is there a maven repo with these jars so I can test it against our
>> > kafka
>> > > >> streams services?
>> > > >>
>> > > >> On Sat, Apr 22, 2017 at 9:05 PM, Eno Thereska <
>> eno.there...@gmail.com
>> > >
>> > > 
>> > > >> wrote:
>> > > >>
>> > > >>
>> > > >> +1 tested the usual streams tests as before.
>> > > >>
>> > > >> Thanks
>> > > >> Eno
>> > > >>
>> > > >> On 21 Apr 2017, at 17:56, Gwen Shapira  <
>> > > g...@confluent.io> wrote:
>> > > >>
>> > > >> Hello Kafka users, developers, friends, romans, countrypersons,
>> > > >>
>> > > >> This is the fourth (!) candidate for release of Apache Kafka
>> 0.10.2.1.
>> > > >>
>> > > >> It is a bug fix release, so we have lots of bug fixes, some super
>> > > >> important.
>> > > >>
>> > > >> Release notes for the 0.10.2.1 release:http://home.apache.
>> > > org/~gwenshap/kafka-0.10.2.1-rc3/RELEASE_NOTES.html
>> > > >>
>> > > >> *** Please download, test and vote by Wednesday, April 26, 2017 ***
>> > > >>
>> > > >> Kafka's KEYS file containing PGP keys we use to sign the release:
>> > > http://kafka.apache.org/KEYS
>> > > >>
>> > > >> * Release artifacts to be voted upon (source and binary):
>> > > http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc3/
>> > > >>
>> > > >> * Maven artifacts to be voted upon:https://repository.
>> > > apache.org/content/groups/staging/
>> > > >>
>> > > >> * Javadoc:http://home.apache.org/~gwenshap/kafka-0.10.2.1-
>> > rc3/javadoc/
>> > > >>
>> > > >> * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.1 tag:
>> > > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
>> > > >>
>> > > >> 8e4f09caeaa877f06dc75c7da1af7a727e5e599f
>> > > >>
>> > > >> * Documentation:http://kafka.apache.org/0102/documentation.html
>> > > >>
>> > > >> * Protocol:http://kafka.apache.org/0102/protocol.html
>> > > >>
>> > > >> /**
>> > > >>
>> > > >> Your help in validating this bugfix release is super valuable, so
>> > > >> please take the time to test and vote!
>> > > >>
>> > > >> Suggested tests:
>> > > >> * Grab the source archive and make sure it compiles
>> > > >> * Grab one of the binary distros and run the quickstarts against
>> them
>> > > >> * Extract and verify one of the site docs jars
>> > > >> * Build a sample against jars in the staging repo
>> > > >> * Validate GPG signatures on at least one file
>> > > >> * Validate the javadocs look ok
>> > > >> * The 0.10.2 documentation was updated for this bugfix release
>> > > >> (especially upgrade, streams and connect portions) - please make
>> sure
>> > > >> it looks ok: http://kafka.apache.org/documentation.html
>> > > >>
>> > > >> But above

Re: Issue in Kafka running for few days

2017-04-26 Thread Svante Karlsson
You are not supposed to run an even number of ZooKeeper nodes. Fix that first.

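For reference, a ZooKeeper quorum needs a strict majority, so an even-sized ensemble adds no fault tolerance: 3 nodes tolerate 1 failure, while 4 nodes still tolerate only 1. A minimal odd-sized `zoo.cfg` sketch (hostnames and paths below are placeholders, not taken from this thread):

```properties
# zoo.cfg -- minimal 3-node ensemble; hostnames and paths are placeholders
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
# Each server also needs a matching myid file (containing 1, 2, or 3) in dataDir.
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```

With this layout the ensemble keeps a quorum (2 of 3) through any single-node failure.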
On Apr 26, 2017 20:59, "Abhit Kalsotra"  wrote:

> Any pointers please
>
>
> Abhi
>
> On Wed, Apr 26, 2017 at 11:03 PM, Abhit Kalsotra 
> wrote:
>
> > Hi *
> >
> > My kafka setup
> >
> >
> > * OS: Windows
> > * 6 broker nodes: 4 on one machine and 2 on the other
> > * One ZK instance on the 4-broker machine and another ZK on the 2-broker
> > machine
> > * 2 topics with partition count = 50 and replication factor = 3
> >
> > I am producing an average of around 500 messages/sec, with each
> > message size close to 98 bytes...
> >
> > The message rate stays more or less constant throughout, but after
> > running the setup for close to 2 weeks my Kafka cluster broke, and this
> > has happened twice in a month. I am not able to understand what the
> > issue is; Kafka gurus, please do share your inputs...
> >
> > The controller.log file at the time the cluster broke looks like
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > *[2017-04-26 12:03:34,998] INFO [Controller 0]: Broker failure callback
> > for 0,1,3,5,6 (kafka.controller.KafkaController)[2017-04-26
> 12:03:34,998]
> > INFO [Controller 0]: Removed ArrayBuffer() from list of shutting down
> > brokers. (kafka.controller.KafkaController)[2017-04-26 12:03:34,998]
> INFO
> > [Partition state machine on Controller 0]: Invoking state change to
> > OfflinePartition for partitions
> > [__consumer_offsets,19],[mytopic,11],[__consumer_
> offsets,30],[mytopicOLD,18],[mytopic,13],[__consumer_
> offsets,47],[mytopicOLD,26],[__consumer_offsets,29],[
> mytopicOLD,0],[__consumer_offsets,41],[mytopic,44],[
> mytopicOLD,38],[mytopicOLD,2],[__consumer_offsets,17],[__
> consumer_offsets,10],[mytopic,20],[mytopic,23],[mytopic,30],
> [__consumer_offsets,14],[__consumer_offsets,40],[mytopic,
> 31],[mytopicOLD,43],[mytopicOLD,19],[mytopicOLD,35]
> ,[__consumer_offsets,18],[mytopic,43],[__consumer_offsets,26],[__consumer_
> offsets,0],[mytopic,32],[__consumer_offsets,24],[
> mytopicOLD,3],[mytopic,2],[mytopic,3],[mytopicOLD,45],[
> mytopic,35],[__consumer_offsets,20],[mytopic,1],[
> mytopicOLD,33],[__consumer_offsets,5],[mytopicOLD,47],[__
> consumer_offsets,22],[mytopicOLD,8],[mytopic,33],[
> mytopic,36],[mytopicOLD,11],[mytopic,47],[mytopicOLD,20],[
> mytopic,48],[__consumer_offsets,12],[mytopicOLD,32],[_
> _consumer_offsets,8],[mytopicOLD,39],[mytopicOLD,27]
> ,[mytopicOLD,49],[mytopicOLD,42],[mytopic,21],[mytopicOLD,
> 31],[mytopic,29],[__consumer_offsets,23],[mytopicOLD,21],[_
> _consumer_offsets,48],[__consumer_offsets,11],[mytopic,
> 18],[__consumer_offsets,13],[mytopic,45],[mytopic,5],[
> mytopicOLD,25],[mytopic,6],[mytopicOLD,23],[mytopicOLD,37]
> ,[__consumer_offsets,6],[__consumer_offsets,49],[
> mytopicOLD,13],[__consumer_offsets,28],[__consumer_offsets,4],[__consumer_
> offsets,37],[mytopic,12],[mytopicOLD,30],[__consumer_
> offsets,31],[__consumer_offsets,44],[mytopicOLD,15],[
> mytopicOLD,29],[mytopic,37],[mytopic,38],[__consumer_
> offsets,42],[mytopic,27],[mytopic,26],[mytopic,15],[__
> consumer_offsets,34],[mytopic,42],[__consumer_offsets,46],[
> mytopic,14],[mytopicOLD,12],[mytopicOLD,1],[mytopic,7],[__
> consumer_offsets,25],[mytopicOLD,24],[mytopicOLD,44]
> ,[mytopicOLD,14],[__consumer_offsets,32],[mytopic,0],[__
> consumer_offsets,43],[mytopic,39],[mytopicOLD,5],[mytopic,9]
> ,[mytopic,24],[__consumer_offsets,36],[mytopic,25],[
> mytopicOLD,36],[mytopic,19],[__consumer_offsets,35],[__
> consumer_offsets,7],[mytopic,8],[__consumer_offsets,38],[
> mytopicOLD,48],[mytopicOLD,9],[__consumer_offsets,1],[
> mytopicOLD,6],[mytopic,41],[mytopicOLD,41],[mytopicOLD,7],
> [mytopic,17],[mytopicOLD,17],[mytopic,49],[__consumer_
> offsets,16],[__consumer_offsets,2]
> > (kafka.controller.PartitionStateMachine)[2017-04-26 12:03:35,045] INFO
> > [SessionExpirationListener on 1], ZK expired; shut down all controller
> > components and try to re-elect
> > (kafka.controller.KafkaController$SessionExpirationListener)[2017-04-26
> > 12:03:35,045] DEBUG [Controller 1]: Controller resigning, broker id 1
> > (kafka.controller.KafkaController)[2017-04-26 12:03:35,045] DEBUG
> > [Controller 1]: De-registering IsrChangeNotificationListener
> > (kafka.controller.KafkaController)[2017-04-26 12:03:35,060] INFO
> [Partition
> > state machine on Controller 1]: Stopped partition state machine
> > (kafka.controller.PartitionStateMachine)[2017-04-26 12:03:35,060] INFO
> > [Replica state machine on controller 1]: Stopped replica state machine
> > (kafka.controller.ReplicaStateMachine)[2017-04-26 12:03:35,060] INFO
> > [Controller 1]: Broker 1 resigned as the controller
> > (kafka.controller.KafkaController)[2017-04-26 12:03:36,013] DEBUG
> > [OfflinePartitionLeaderSelector]: No broker in ISR is alive for
> > [__consumer_offsets,19]. Pick the leader from the alive assigned
> replicas:
> > (kafka.controller.OfflinePartitionLeaderSelector)[2017-04-26
> 12:03:36,029]
> > DEBUG [OfflineParti

Re: About "org.apache.kafka.common.protocol.types.SchemaException" Problem

2017-04-26 Thread Yang Cui
Hi All,
   Can anyone help answer this question? Thanks a lot!

On 26/04/2017, 8:00 PM, "Yang Cui"  wrote:

 Dear All,

  I am using Kafka cluster 2.11_0.9.0.1,  and the new consumer of 
2.11_0.9.0.1.
  When I set the quota configuration to:
  quota.producer.default=100
  quota.consumer.default=100
  and used the new consumer to consume data, this error sometimes occurred:
  
  org.apache.kafka.common.protocol.types.SchemaException: Error reading 
field 'responses': Error reading array of size 1140343, only 37 bytes available
  at 
org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73)
  at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:439)
  at 
org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:265)
  at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
  at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
  at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
  at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:908)
  at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
  at com.fw.kafka.ConsumerThread.run(TimeOffsetPair.java:458)
  
  It does not occur every time, but when it does, it occurs repeatedly
many times.
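For context, quota.producer.default and quota.consumer.default are per-client-id byte-rate limits, so 100 means 100 bytes per second, which will throttle almost every request. A server.properties sketch with less aggressive values (the numbers are illustrative only, not a recommendation):

```properties
# server.properties -- default per-client-id quotas, in bytes/second.
# The values below are illustrative only.
quota.producer.default=1048576   # 1 MB/s per producer client id
quota.consumer.default=2097152   # 2 MB/s per consumer client id
```

Individual client ids can also be given overrides with the kafka-configs.sh tool (--entity-type clients), which avoids restarting the brokers.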
  
  





Re: About "org.apache.kafka.common.protocol.types.SchemaException" Problem

2017-04-26 Thread Hu Xi
Seems it's similar to https://issues.apache.org/jira/browse/KAFKA-4599?


From: Yang Cui 
Sent: 2017-04-27 11:55
To: users@kafka.apache.org
Subject: Re: About "org.apache.kafka.common.protocol.types.SchemaException" Problem

Hi All,
   Can anyone help answer this question? Thanks a lot!

On 26/04/2017, 8:00 PM, "Yang Cui"  wrote:

 Dear All,

  I am using Kafka cluster 2.11_0.9.0.1,  and the new consumer of 
2.11_0.9.0.1.
  When I set the quota configuration to:
  quota.producer.default=100
  quota.consumer.default=100
  and used the new consumer to consume data, this error sometimes occurred:

  org.apache.kafka.common.protocol.types.SchemaException: Error reading 
field 'responses': Error reading array of size 1140343, only 37 bytes available
  at 
org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73)
  at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:439)
  at 
org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:265)
  at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
  at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
  at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
  at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:908)
  at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
  at com.fw.kafka.ConsumerThread.run(TimeOffsetPair.java:458)

  It does not occur every time, but when it does, it occurs repeatedly
many times.







Re: stunning error - Request of length 1550939497 is not valid, it is larger than the maximum size of 104857600 bytes

2017-04-26 Thread James Cheng
Ramya, Todd, Jiefu, David,

Sorry to drag up an ancient thread. I was looking for something in my email 
archives, and ran across this, and I might have solved part of these mysteries.

I ran across this post that talked about seeing weirdly large allocations when 
incorrect requests are accidentally sent to a port expecting a binary protocol. 
https://rachelbythebay.com/w/2016/02/21/malloc/

I took those findings and applied them to the weird big numbers you were seeing.

Ramya, Jiefu, about your allocation of 1347375956:
1347375956 converted to hex is 504F5354
504F5354 converted to ascii is the letters "POST"
So, someone sent a POST request to your Kafka broker by accident!

David, about your allocation of 1550939497:
1550939497 converted to hex is 5C717569
5C717569 converted to ascii is "\qui"
Maybe that's the beginning of the word "\quit"? Is there some protocol that 
uses the word "\quit"? Like IRC or SMTP or IMAP something? I'm not sure.

Anyway, thought you might find that interesting!

-James
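The hex arithmetic above can be checked mechanically. A small Python sketch (the helper name is mine) that interprets a bogus 4-byte request-size field as ASCII:

```python
def decode_size(size: int) -> str:
    """Interpret a 4-byte big-endian request-size field as ASCII text."""
    return size.to_bytes(4, "big").decode("ascii", errors="replace")

print(decode_size(1347375956))  # POST  -- an HTTP request hit the broker port
print(decode_size(1550939497))  # \qui  -- perhaps the start of "\quit"
```

This is a handy first check whenever a broker logs an absurdly large "Invalid receive (size = ...)" value: if the size decodes to printable text, some non-Kafka client is talking to the broker port.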




> On Dec 12, 2016, at 9:39 AM, Todd Palino  wrote:
> 
> Are you actually getting requests that are 1.3 GB in size, or is something
> else happening, like someone trying to make HTTP requests against the Kafka
> broker port?
> 
> -Todd
> 
> 
> On Mon, Dec 12, 2016 at 4:19 AM, Ramya Ramamurthy <
> ramyaramamur...@teledna.com> wrote:
> 
>> We have got exactly the same problem.
>> nvalid receive (size = 1347375956 larger than 104857600).
>> 
>> When trying to increase the size, Java Out of Memory Exception.
>> Did you find a work around for the same ??
>> 
>> Thanks.
>> 
> 
> 

On Tue, Jul 14, 2015 at 11:18 AM, JIEFU GONG  wrote:

> @Gwen
> I am having a very very similar issue where I am attempting to send a
> rather small message and it's blowing up on me (my specific error is:
> Invalid receive (size = 1347375956 larger than 104857600)). I tried to
> change the relevant settings but it seems that this particular request is
> of 1340 mbs (and davids will be 1500 mb) and attempting to change the
> setting will give you another error saying there is not enough memory in
> the java heap. Any insight here?



> 
> -- 
> *Todd Palino*
> Staff Site Reliability Engineer
> Data Infrastructure Streaming
> 
> 
> 
> linkedin.com/in/toddpalino



Re: stunning error - Request of length 1550939497 is not valid, it is larger than the maximum size of 104857600 bytes

2017-04-26 Thread Onur Karaman
@James: that was incredible. Thank you.

On Wed, Apr 26, 2017 at 9:53 PM, James Cheng  wrote:

> Ramya, Todd, Jiefu, David,
>
> Sorry to drag up an ancient thread. I was looking for something in my
> email archives, and ran across this, and I might have solved part of these
> mysteries.
>
> I ran across this post that talked about seeing weirdly large allocations
> when incorrect requests are accidentally sent to a port expecting a binary
> protocol. https://rachelbythebay.com/w/2016/02/21/malloc/
>
> I took those findings and applied them to the weird big numbers you were
> seeing.
>
> Ramya, Jiefu, about your allocation of 1347375956:
> 1347375956 converted to hex is 504F5354
> 504F5354 converted to ascii is the letters "POST"
> So, someone sent a POST request to your Kafka broker by accident!
>
> David, about your allocation of 1550939497:
> 1550939497 converted to hex is 5C717569
> 5C717569 converted to ascii is "\qui"
> Maybe that's the beginning of the word "\quit"? Is there some protocol
> that uses the word "\quit"? Like IRC or SMTP or IMAP something? I'm not
> sure.
>
> Anyway, thought you might find that interesting!
>
> -James
>
>
>
>
> > On Dec 12, 2016, at 9:39 AM, Todd Palino  wrote:
> >
> > Are you actually getting requests that are 1.3 GB in size, or is
> something
> > else happening, like someone trying to make HTTP requests against the
> Kafka
> > broker port?
> >
> > -Todd
> >
> >
> > On Mon, Dec 12, 2016 at 4:19 AM, Ramya Ramamurthy <
> > ramyaramamur...@teledna.com> wrote:
> >
> >> We have got exactly the same problem.
> >> nvalid receive (size = 1347375956 larger than 104857600).
> >>
> >> When trying to increase the size, Java Out of Memory Exception.
> >> Did you find a work around for the same ??
> >>
> >> Thanks.
> >>
> >
> >
>
> On Tue, Jul 14, 2015 at 11:18 AM, JIEFU GONG  wrote:
>
> > @Gwen
> > I am having a very very similar issue where I am attempting to send a
> > rather small message and it's blowing up on me (my specific error is:
> > Invalid receive (size = 1347375956 larger than 104857600)). I tried to
> > change the relevant settings but it seems that this particular request is
> > of 1340 mbs (and davids will be 1500 mb) and attempting to change the
> > setting will give you another error saying there is not enough memory in
> > the java heap. Any insight here?
>
>
>
> >
> > --
> > *Todd Palino*
> > Staff Site Reliability Engineer
> > Data Infrastructure Streaming
> >
> >
> >
> > linkedin.com/in/toddpalino
>
>