Hi,
I would like to use Kafka as a transaction log to support a replicated
state machine use case, but currently (using 0.9.x) there is a feature I
would like to have.
I'm using Apache BookKeeper, where this feature (fencing) is native, but I
have some customers who already use Kafk
Hi Vinay,
This statement is very interesting.
"I noticed that in case where a consumer is marked dead or a rebalance is in
progress, kafka throws CommitFailedException. A KafkaException is thrown only
when something unknown has happened which is not yet categorized."
I will test this out but w
Hi Team,
Can anybody help with an issue I am facing when running multiple consumer
instances on a single topic that has a single partition?
Kafka is broadcasting messages to all the consumers even though all
consumers are running in the same group.
I am using kafka-php client lib to connect with kafka server an
Yes, a partition can be consumed by only a single consumer thread in a
group at a time, so that second consumer of yours, since it's in the same
group, has nothing to do.
If you add more partitions, your consumers will share the work of consuming
all messages from that topic (roughly 50% of the messages each).
If you were t
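The point above can be sketched with the stock CLI tools; the topic name, partition count, and local ZooKeeper address are assumptions for illustration (0.9.x-era commands):

```shell
# Create a topic with 2 partitions so two consumers in the same
# consumer group can each own one partition and split the messages.
# Assumes a local broker and ZooKeeper; "orders" is a made-up name.
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --topic orders --partitions 2 --replication-factor 1

# Or add partitions to an existing single-partition topic:
bin/kafka-topics.sh --alter --zookeeper localhost:2181 \
  --topic orders --partitions 2
```

With two partitions, two consumers in the same group would each be assigned one partition and consume roughly half the messages.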
I restarted brainstorming for the Kafka setup. I found this link
http://www.michael-noll.com/blog/2013/03/13/running-a-multi-broker-apache-kafka-cluster-on-a-single-node/
for Kafka cluster setup (thanks Lohith Samaga for the valuable suggestion).
After trying it on my local machine, I found this is what I need.
Now I am
Sorry for the wrong PHP code. It should be like this:
3. In the PHP code, instead of using partition = 0, use App1 = 0, App2 = 1,
App3 = 2
In App 1 Producer Script :
$producer->setMessages($queue, 0, array($data));
In App 2 Producer Script :
$producer->setMessages($queue, 1, array($da
Hi,
When a stream of data passes through Kafka, I want to apply a filter and
then let the filtered messages pass through to the partitions.
Regards,
Subramanian. K
On Apr 26, 2016 12:33, "Marko Bonaći" wrote:
> Instantly reminded me of Streams API, where you can use Java8 streams
> semantics (filter being
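For a quick sketch of the filter-then-forward idea without writing an app, the console tools can be piped through grep; the topic names and the pattern are made up, and a real deployment would use the Streams filter semantics mentioned above instead of a shell pipe:

```shell
# Consume, filter, and re-produce to a second topic (illustrative only;
# topic names "raw-events"/"filtered-events" and the pattern are made up).
bin/kafka-console-consumer.sh --zookeeper localhost:2181 \
  --topic raw-events \
  | grep --line-buffered "ERROR" \
  | bin/kafka-console-producer.sh --broker-list localhost:9092 \
    --topic filtered-events
```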
You need to persist which producer is the leader so that when a new broker
takes over it can find out. There could be a special fencing topic that
never deletes messages (using log compaction to save space). You'd need to
think through all the edge cases and race conditions.
Dave
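A compacted fencing topic along the lines Dave describes could be created like this; the topic name, partition count, and replication factor are assumptions for illustration:

```shell
# A single-partition, compacted topic: old values per key are compacted
# away rather than deleted, so the latest leadership record is always
# retained. Names and counts are illustrative.
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --topic producer-fencing --partitions 1 --replication-factor 3 \
  --config cleanup.policy=compact
```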
> On Apr 2
Using Kafka Streams is one way. I used Camel with Kafka before, which also
has a nice way of using filters.
On Fri, Apr 29, 2016 at 1:51 PM Subramanian Karunanithi
wrote:
> Hi,
>
> When a stream of data passes through Kafka, wanted to apply the filter and
> then let that message pass through to
I saw this in the code of 0.9.0.1. I am sure about this because I am
catching this exception and executing logic where I stop further record
processing if autocommit is off; otherwise I keep processing the records
received in the current poll even if the commit fails. This is because
Kafka marks all records sent to the user
as
Hi
Using Storm would be another way. This will scale as well.
Spark Streaming would fit as well.
It all depends on the complexity of the filter and any additional processing
required.
HTH
Lohith
Gerard Klijs wrote
Using kafka streams is one way
Hi,
I am a Kafka developer; our company wants to upgrade our Kafka cluster from
0.8.2.1 to 0.9.0.1.
In the 0.9.0.0 docs, the new consumer is in beta state, and the release
notes for 0.9.0.1 do not point out any state change. Can anybody tell me
if it is ready for production?
Thanks & Regards,
Bruce Y
I'm going to try both of these ideas.
Thank you so much Phil for speaking up. I thought I was the only one with this
issue. Your analysis was great. I think I can easily send a test message but
would I need to send one every 5 mins? Or just in the _first_ 5 mins?
> On Apr 28, 2016, at 4:10 PM
Apache Samza is the way to go. Never used Kafka Streams so no opinion on that
one.
Best regards,
Radek Gruchalski
ra...@gruchalski.com
de.linkedin.com/in/radgruchalski/
Hello Guozhang,
thanks a lot for your response (to this and all of my previous questions). Here
is how I produce to the topic:
cat /tmp/file-input.txt | ./kafka-console-producer.sh --broker-list
localhost:9092 --topic streams-file-input
Here is the content of the file:
~/kafka-0.10.0/bin$ cat /
Hi ,
I came up with a sink connector to HBase which is available at
https://github.com/mravi/kafka-cdc-hbase .
A note of thanks to the team at Confluent for the elegant Connect API!
Ravi
PS
Please refer to https://github.com/mravi/hbase-cdc-kafka, if you would
like to capture HBase change
Any idea why it's happening? I'm sure rolling restart would fix it. Is it a
bug?
On Wed, Apr 27, 2016 at 5:42 PM, Kane Kim wrote:
> Hello,
>
> Looks like we are hitting leader election bug. I've stopped one broker
> (104224873) on other brokers I see following:
>
> WARN kafka.controller.Control
What version of ZooKeeper are you on? There have been a few bugs over
the years where ZK has lost ephemeral nodes (and spontaneously
de-registered brokers).
On Fri, Apr 29, 2016 at 11:30 AM, Kane Kim wrote:
> Any idea why it's happening? I'm sure rolling restart would fix it. Is it a
> bug?
>
> O
Hi Hema,
I was about to bump this conversation but then found this resolved issue:
https://github.com/sgroschupf/zkclient/issues/25
It looks promising, but we are still testing the upgrade. The resolution
time of the issue suggests it was fixed in release 0.8.
Hope this helps
-greg
On W
Not sure about your config, but I read somewhere (I'm also a newbie) that
if the number of consumers is greater than the number of partitions on the
topic, some consumers will not get any messages. Search for "consumer
parallelism".
On Fri, Apr 29, 2016 at 4:11 AM, Marko Bonaći
wrote:
> Yes, a partition can be accessed by onl
Do you mind sharing your log4j2 XML, and can you run it with your version
separately as a simple standalone client?
On Wed, Apr 20, 2016 at 4:26 AM, Prem Panchami wrote:
> Hi,
> We have a Kafka producer app that participates in the larger system. It
> worked fine sending messages. We just adde
Hello Kafka users, developers and client-developers,
This is the first candidate for release of Apache Kafka 0.10.0.0. This
is a major release that includes: (1) a new message format including
timestamps, (2) the client interceptor API, (3) Kafka Streams, (4)
configurable SASL authentication mechanisms, (5