@James: that was incredible. Thank you.
On Wed, Apr 26, 2017 at 9:53 PM, James Cheng wrote:
> Ramya, Todd, Jiefu, David,
>
> Sorry to drag up an ancient thread. I was looking for something in my
> email archives, and ran across this, and I might have solved part of these
> mysteries.
>
> I ran a
Ramya, Todd, Jiefu, David,
Sorry to drag up an ancient thread. I was looking for something in my email
archives, and ran across this, and I might have solved part of these mysteries.
I ran across this post that talked about seeing weirdly large allocations when
incorrect requests are accidental
Seems it's similar to https://issues.apache.org/jira/browse/KAFKA-4599?
From: Yang Cui
Sent: April 27, 2017, 11:55
To: users@kafka.apache.org
Subject: Re: About "org.apache.kafka.common.protocol.types.SchemaException" Problem
Hi All,
Can anyone help answer this question? Thanks a lot!
Hi All,
Can anyone help answer this question? Thanks a lot!
On 26/04/2017, 8:00 PM, "Yang Cui" wrote:
Dear All,
I am using Kafka cluster 2.11_0.9.0.1, and the new consumer of
2.11_0.9.0.1.
When I set the quota configuration to:
quota.producer.default=10
You are not supposed to run an even number of ZooKeeper nodes. Fix that first.
On Apr 26, 2017 20:59, "Abhit Kalsotra" wrote:
> Any pointers please
>
>
> Abhi
>
> On Wed, Apr 26, 2017 at 11:03 PM, Abhit Kalsotra
> wrote:
>
> > Hi *
> >
> > My Kafka setup:
> >
> > * OS: Windows machine; 6 broker
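The even-ensemble warning above comes down to majority-quorum arithmetic. A quick sketch (our illustration, not from the thread):

```python
# An ensemble of n ZooKeeper nodes needs a majority, floor(n/2) + 1, to form
# a quorum, so it tolerates n - (n // 2 + 1) node failures. An even-sized
# ensemble therefore tolerates no more failures than the next smaller odd size.
def tolerated_failures(n):
    quorum = n // 2 + 1
    return n - quorum

for n in (2, 3, 4, 5):
    print(n, tolerated_failures(n))  # 2→0, 3→1, 4→1, 5→2
```

With 2 ZK nodes (as in the setup quoted above), losing either node loses quorum, so the second node adds no fault tolerance.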
Quick update:
I closed the release on JIRA and bumped the versions in github. Uploaded
artifacts and released the jars in Maven.
Waiting for everything to actually show up before I update the website and
send the announcement. Expect something tonight or tomorrow morning.
Gwen
On Wed, Apr 26, 201
Vote summary:
+1: 6 (3 binding) - Eno, Ian, Guozhang, Jun, Gwen and Shimi
0: 0
-1: 0
W00t! 72 hours passed and we have 3 binding +1!
Thank you for playing "bugfix release". See you all at the next round :)
I'll get our bug fixes out the door ASAP.
Gwen
On Wed, Apr 26, 2017 at 12:12 PM, Shimi K
+1
I compiled our (Rollout.io) kafka-stream project, ran unit tests and
end-to-end tests (against streams 0.10.2.1 and broker 0.10.1.1).
Everything works as expected.
On Wed, Apr 26, 2017 at 10:05 PM, Gwen Shapira wrote:
> +1 (binding)
>
> Validated unit tests, quickstarts, connect, signatures
>
Hi, Gwen,
Thanks for doing the release. +1 from me.
Jun
On Fri, Apr 21, 2017 at 9:56 AM, Gwen Shapira wrote:
> Hello Kafka users, developers, friends, romans, countrypersons,
>
> This is the fourth (!) candidate for release of Apache Kafka 0.10.2.1.
>
> It is a bug fix release, so we have lots
+1 (binding)
Validated unit tests, quickstarts, connect, signatures
On Wed, Apr 26, 2017 at 11:30 AM, Guozhang Wang wrote:
> +1
>
> Verified unit test on source, and quick start on binary (Scala 2.12 only).
>
>
> Guozhang
>
>
> On Wed, Apr 26, 2017 at 2:43 AM, Ian Duffy wrote:
>
> > +1
> >
> >
Any pointers please
Abhi
On Wed, Apr 26, 2017 at 11:03 PM, Abhit Kalsotra wrote:
> Hi *
>
> My Kafka setup:
>
> * OS: Windows machine; 6 broker nodes, 4 on one machine and 2 on the other
> machine
> * One ZK instance on the 4-broker machine and another ZK on the 2-broker
> machine
> *
Hello Sachin,
When an instance is stopped, it stops the underlying heartbeat thread
during the stopping process so that the coordinator will realize it is
leaving the group.
As for non-graceful stopping, say there is a bug in the stream app code
that causes the thread to die, currently Streams li
+1
Verified unit test on source, and quick start on binary (Scala 2.12 only).
Guozhang
On Wed, Apr 26, 2017 at 2:43 AM, Ian Duffy wrote:
> +1
>
> Started using Kafka client 0.10.2.1 for our streams applications; we've seen
> a much greater improvement on retries when failures occur.
> We've been r
Hi *
My Kafka setup:

* OS: Windows machine; 6 broker nodes, 4 on one machine and 2 on the other
machine
* One ZK instance on the 4-broker machine and another ZK on the 2-broker
machine
* 2 topics with partition size = 50 and replication factor = 3
I am producing on an average of around 500
You can use the reassign partitions CLI tool to generate a partition
reassignment for the topic, and then manually edit the JSON to add a third
replica ID to each partition before you run it.
Alternately, you can use our kafka-assigner tool (
https://github.com/linkedin/kafka-tools) to do it in a
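For the manual-edit step described above, here is a minimal sketch. The JSON shape matches what kafka-reassign-partitions.sh --generate emits; the helper name `add_replica` and the broker-picking rule (first candidate broker not already assigned) are our illustration, not part of the tool:

```python
import json

# Add a third replica (broker id) to each partition in a reassignment plan.
def add_replica(plan, candidate_brokers):
    for p in plan["partitions"]:
        # Pick the first candidate broker not already in the replica list.
        extra = next(b for b in candidate_brokers if b not in p["replicas"])
        p["replicas"].append(extra)
    return plan

plan = json.loads("""
{"version": 1,
 "partitions": [{"topic": "mytopic", "partition": 0, "replicas": [1, 2]},
                {"topic": "mytopic", "partition": 1, "replicas": [2, 3]}]}
""")
print(json.dumps(add_replica(plan, [1, 2, 3]), indent=2))
```

The edited JSON is then passed back to the reassignment tool with --execute; the brokers copy the data for the new replica in the background.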
Talk to Confluent. https://www.confluent.io/ Nobody knows Kafka better than
they do ;)
Some of the Hadoop vendors offer commercial support as part of their
subscriptions, too.
My company, Lightbend, will be rolling out a distribution of streaming
technologies this Fall that will also include Kafk
Hi,
In our team, some developers created topics with a replication factor of 1
by mistake, with partition counts in the range of 20-40. How can we increase
the replication factor to 3 for those topics now? Do we need to come up
with a manual assignment plan for each of the partitions? Is there any
qui
Both Confluent and Cloudera provide support.
-Dave
From: Benny Rutten [mailto:brut...@isabel.eu]
Sent: Wednesday, April 26, 2017 2:36 AM
To: users@kafka.apache.org
Subject: Kafka 24/7 support
Good morning,
I am trying to convince my company to choose Apache Kafka as our standard
messaging syst
ApacheCon is just three weeks away, in Miami, Florida, May 15th - 18th.
http://apachecon.com/
There's still time to register and attend. ApacheCon is the best place
to find out about tomorrow's software, today.
ApacheCon is the official convention of The Apache Software Foundation,
and includes t
Good morning,
I am trying to convince my company to choose Apache Kafka as our standard
messaging system.
However, this can only succeed if I can also propose a partner who can provide
24/7 support in case of production issues.
Would you by any chance have a list of companies that provide such s
Yes, basically I'm OK with how the join works, including window and
retention periods, under normal circumstances. When events occur in real
time, an application joining the streams will get something
like this:
T1 + 0 => topic_small (K1, V1) => join result (None)
T1 + 1 min => topic_large (K1, VT
Hi Murad,
On Wed, 26 Apr 2017 at 13:37 Murad Mamedov wrote:
> Is there any global time synchronization between streams in Kafka Streams
> API? So that, it would not consume more events from one stream while the
> other is still behind in time. Or probably better to rephrase it like, is
> there g
Hi,
Suppose that we have two topics, one with an event size of 100 bytes and the
other with an event size of 5000 bytes. Producers produce events (with
timestamps) for 3 days into both topics, the same number of events; let's
assume 10 events in each topic.
Kafka Client API consumers will obviously co
Dear All,
I am using Kafka cluster 2.11_0.9.0.1, and the new consumer of 2.11_0.9.0.1.
When I set the quota configuration to:
quota.producer.default=100
quota.consumer.default=100
And I used the new consumer to consume data, then the error happened
sometimes:
org
+1
Started using Kafka client 0.10.2.1 for our streams applications; we've seen
a much greater improvement on retries when failures occur.
We've been running without manual intervention for > 24 hours, which is
something we haven't seen in a while.
Found it odd that the RC tag wasn't within the version o
Hi Eno,
Looks like we just didn't wait long enough. It eventually recovered and
started processing again.
Thanks for all the fantastic work in the 0.10.2.1 client.
On 25 April 2017 at 18:12, Eno Thereska wrote:
> Hi Ian,
>
> Any chance you could share the full log? Feel free to send it to me
>
Hi,
We've written a few-line command that reads offsets for the consumer group
we want to copy and commits those for a new group. That way you can inspect
"__consumer_offsets" topic and make sure everything is correct before you
start consuming messages.
BR
Stanislav.
2017-04-25 22:02 GMT+02:0
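The idea above can be modeled with plain dicts keyed the way __consumer_offsets is keyed, by (group, topic, partition). This sketch is our illustration of the approach, not the actual few-liner from the thread:

```python
# Model of copying committed offsets from one consumer group to a new one.
# offsets_store stands in for __consumer_offsets:
#   {(group, topic, partition): offset}
def copy_group_offsets(offsets_store, src_group, dst_group):
    copied = {}
    for (group, topic, partition), offset in list(offsets_store.items()):
        if group == src_group:
            # Commit the source group's offset under the destination group.
            offsets_store[(dst_group, topic, partition)] = offset
            copied[(topic, partition)] = offset
    return copied

store = {("old-group", "events", 0): 42, ("old-group", "events", 1): 7}
copy_group_offsets(store, "old-group", "new-group")
print(store[("new-group", "events", 0)])  # → 42
```

In a real implementation the reads and commits would go through a Kafka consumer client; as the thread notes, inspecting __consumer_offsets before starting the new group lets you verify the copy.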
Yes that should work. Thanks a lot!
2017-04-25 21:44 GMT+02:00 Gwen Shapira :
> We added a Byte Converter which essentially does no conversion. Is this
> what you are looking for?
>
> https://issues.apache.org/jira/browse/KAFKA-4783
>
> On Tue, Apr 25, 2017 at 11:54 AM, Stas Chizhov wrote:
>
> >