There's also an end-to-end example for DSL and Processor API integration:
https://github.com/confluentinc/examples/blob/3.2.x/kafka-streams/src/test/java/io/confluent/examples/streams/MixAndMatchLambdaIntegrationTest.java
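As a rough sketch of what mixing the two APIs looks like (topic names and the transformer are made up for illustration, and this assumes the pre-1.0 KStreamBuilder API from the era of this thread):

```java
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;

public class MixAndMatchSketch {

    // A low-level Processor API Transformer, plugged into the DSL via transform()
    static class UppercaseTransformer
            implements Transformer<String, String, KeyValue<String, String>> {
        private ProcessorContext context;

        @Override
        public void init(ProcessorContext context) {
            this.context = context; // gives access to state stores, punctuation, etc.
        }

        @Override
        public KeyValue<String, String> transform(String key, String value) {
            return KeyValue.pair(key, value.toUpperCase());
        }

        @Override
        public KeyValue<String, String> punctuate(long timestamp) {
            return null; // no scheduled output in this sketch
        }

        @Override
        public void close() {}
    }

    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> input = builder.stream("input-topic");
        input.filter((k, v) -> v != null)          // DSL operation
             .transform(UppercaseTransformer::new) // Processor API operation
             .to("output-topic");                  // back to the DSL
        // ... configure and start a KafkaStreams instance with this builder
    }
}
```
The point is that transform() (and process()) are the hooks where the high-level DSL hands control to your own low-level logic and back.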
Best,
Michael
On Tue, Mar 7, 2017 at 4:51 PM, LongTian Wang wrote:
> Re
Hi Jeff,
Yes, the work I am doing is ops work. Logstash is consuming from the topic +
consumer group, and I don't want it to start at the beginning, but rather at a
specific offset. Setting the offset for the consumer group externally and then
starting up Logstash is my goal.
I'm still a li
For this type of use case, there’s no problem with mirroring and producing
into the same topic. Kafka can handle it just fine, and as long as you’re
OK with the intermingled data from the consumer side (for example, knowing
that it may not be time-ordered if you’re working with keyed data), it will
Offsets for modern Kafka consumers are stored in an internal Kafka topic,
so they aren't as easy to change as they were in ZooKeeper.
To set a consumer offset, you need a consumer within a consumer group to
call commit() with your explicit offset. If needed, you can create a dummy
consumer and tell it to join
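For reference, a minimal sketch of such a dummy consumer committing an explicit offset (the broker address, topic name, partition, and offset value are all placeholders to adjust for your environment; this assumes the 0.10.x Java client, and that the real consumer group is stopped so the commit isn't overwritten):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetSetter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "logstash");                // the group whose offset we move
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "false");         // we only want our explicit commit

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder topic/partition
            // Manual assignment: no subscribe(), so no group rebalance is triggered.
            consumer.assign(Collections.singletonList(tp));
            long desiredOffset = 12345L; // the offset the real consumer should resume from
            consumer.commitSync(
                Collections.singletonMap(tp, new OffsetAndMetadata(desiredOffset)));
        }
    }
}
```
When the real consumer (Logstash) starts with the same group.id, it should pick up from the committed offset rather than from the beginning.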
Hi,
We are running Kafka 0.10.0.X, with zookeeper. I'm trying to figure out if I
can manually
set a consumer offset, for a specific consumer when that consumer is stopped.
It looks like it used to be done using: kafka.tools.ExportZkOffsets and
kafka.tools.ImportZkOffsets
(
https://cwiki.ap
> On Mar 7, 2017, at 12:18 PM, James Cheng wrote:
>
>
>> On Mar 7, 2017, at 7:44 AM, Shrikant Patel wrote:
>>
>> Thanks for clarification. I am seeing strange behavior in that case,
>>
>> When I set min.insync.replicas=2 in my server.properties (restart the
>> server) and set the acks=all o
Hello,
I have a consumer group that is continually stuck in a rebalance, with the
following error being produced in the broker logs:
[2017-03-07 22:16:20,473] ERROR [Group Metadata Manager on Broker 0]:
Appending metadata message for group tagfire_application generation 951
failed due to org.apache
It sounds like you’re running into a problem, but we use
min.insync.replicas here and it works. When you describe the topic using
kafka-topics.sh, what does it show you?
On Tue, Mar 7, 2017 at 12:18 PM, James Cheng wrote:
>
> > On Mar 7, 2017, at 7:44 AM, Shrikant Patel wrote:
> >
> > Thanks fo
> On Mar 7, 2017, at 7:44 AM, Shrikant Patel wrote:
>
> Thanks for clarification. I am seeing strange behavior in that case,
>
> When I set min.insync.replicas=2 in my server.properties (restart the server)
> and set the acks=all on producer, I am still able to publish to topic even
> when on
Hi,
I have read the note below:
https://www.quora.com/What-is-viable-hardware-for-Zookeeper-and-Kafka-brokers
which states:
"Take your expected message size * expected messages/second, and multiply
that by how many seconds you would like to keep your messages available in
your kafka cluster fo
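For example, a quick back-of-the-envelope calculation along those lines (the message size, rate, retention window, and replication factor below are made-up numbers; real sizing should also leave headroom for indexes and uneven partition placement):

```java
public class KafkaSizing {

    // bytes needed = avg message bytes * messages/second * retention seconds
    //                * replication factor
    static long retentionBytes(long avgMessageBytes, long messagesPerSecond,
                               long retentionSeconds, int replicationFactor) {
        return avgMessageBytes * messagesPerSecond * retentionSeconds * replicationFactor;
    }

    public static void main(String[] args) {
        // e.g. 1 KiB messages, 10,000 msgs/s, 7 days retention, replication factor 3
        long bytes = retentionBytes(1024, 10_000, 7L * 24 * 60 * 60, 3);
        System.out.println(bytes / (1024L * 1024 * 1024) + " GiB"); // prints 17303 GiB (~17 TiB)
    }
}
```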
Hi Marco,
Absolutely no problem at all. I'm glad it worked out. For reference, here
is a PR that fixes the problem:
https://github.com/apache/kafka/pull/2645 and the associated JIRA
https://issues.apache.org/jira/browse/KAFKA-4851
Thanks,
Damian
On Tue, 7 Mar 2017 at 17:48 Marco Abitabile
wrote
Thanks for explaining in more detail. Now I understand it. I was on a
completely wrong track before!
Great find, and thanks for fixing it :)
Since these are two unrelated issues, we should have two JIRAs and two PRs.
-Matthias
On 3/6/17 8:48 PM, Sachin Mittal wrote:
>> As for the second issue y
Hello Damian,
Thanks a lot for your precious support.
I can confirm that your workaround works perfectly for my use case.
I'll be glad to help you test the original code once the issue
you've spotted is solved.
Thanks a lot again.
Marco.
Il 06/mar/2017 16:03, "Damian Guy"
Hello,
I am learning Kafka, and I am trying to connect to a remote server/machine to
pull data from a file which is continuously updated every 5 minutes. I tried
to connect to the remote server/machine but didn't have any success. I tried to
find examples but couldn't find any that explain how
We can set "advertised.listeners" to a host name, not an IP address, and
make sure the host name always resolves to the correct IP.
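For example, a hypothetical server.properties fragment (the host name is a placeholder for one your clients can always resolve):

```properties
# Bind on all interfaces, but advertise a stable DNS name to clients,
# so a changed IP after a node restart doesn't break producer metadata.
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka1.example.com:9092
```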
On Tue, Mar 7, 2017 at 3:15 PM, Stanislav Melashchuk
wrote:
> Hello!
> Here is a situation : we have Kafka producer connected to a Kafka node,
> which is available by hos
Hello!
Here is a situation: we have a Kafka producer connected to a Kafka node,
which is available by host kukuku.com (for example) and uses a dynamic IP
address. If we restart our node, it will change its IP address. But the producer
has only the previous node IP in its metadata and doesn't initiate new
a
Thanks for the clarification. I am seeing strange behavior in that case.
When I set min.insync.replicas=2 in my server.properties (restart the server)
and set acks=all on the producer, I am still able to publish to the topic even when
only the leader is up (none of the followers are alive). With this configurat
Really appreciated, Matthias!
That's what I wanted.
Regards,
Long Tian
From: Matthias J. Sax
Sent: 07 March 2017 07:48:08
To: users@kafka.apache.org
Subject: Re: Can I create user defined stream processing function/api?
Hi,
you can implement custom operators via
A Raspberry Pi or an APU board would be enough for your majority ZK node
in your third room.
One Kafka cluster belongs to one ZK cluster. You could have one ZK/Kafka
cluster per room. There are tools for one-direction copying of Kafka
messages, but bi-directional "replication" is not possible as f
In my opinion, for production I would run with RAID 10. It's true that Kafka
has durability across broker shutdowns, but there are exceptions or hiccups.
If you want to avoid tons of movement between brokers and connection errors
on the clients (which may or may not occur depending on how loaded your cluster is
c
Is RAID 10 (mirrored and striped) essential for a Kafka cluster with three
nodes in DEV? I have been told that disk failures can shut down the whole
Kafka cluster. This is a non-prod environment.
thanks
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd