Anyone?
On Tue, Sep 6, 2016 at 4:21 PM, Shamik Bandopadhyay
wrote:
> Hi,
>
> I'm trying to send a Java object using the Kryo object serializer. The
> producer is able to send the payload to the queue, but I'm having issues
> reading the data in the consumer. I'm using a consumer group via KafkaStream.
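For what it's worth, the consumer side needs a matching Deserializer; here is a minimal sketch of a Kryo-backed one, assuming the producer wrote with kryo.writeObject() under compatible Kryo settings (class and method names are illustrative only):

import java.util.Map;
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Input;
import org.apache.kafka.common.serialization.Deserializer;

public class KryoDeserializer<T> implements Deserializer<T> {
    // Kryo instances are not thread-safe: keep one per deserializer instance.
    private final Kryo kryo = new Kryo();
    private final Class<T> type;

    // Pass an instance to the KafkaConsumer constructor directly;
    // configuring by class name would require a no-arg constructor.
    public KryoDeserializer(Class<T> type) {
        this.type = type;
    }

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) { }

    @Override
    public T deserialize(String topic, byte[] data) {
        if (data == null) {
            return null;
        }
        try (Input input = new Input(data)) {
            return kryo.readObject(input, type);
        }
    }

    @Override
    public void close() { }
}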
The Kafka Streams documentation discusses how to assign timestamps to
records received from a source topic via a TimestampExtractor. But neither the
Kafka nor the Confluent documentation on Kafka Streams explains what
timestamp is associated with a record that has been transformed.
What timestamp is a
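As of 0.10.x, a transformed record inherits the timestamp of the input record that triggered it, i.e. whatever the configured TimestampExtractor returned for the source record. A minimal extractor sketch for the 0.10.0 API, assuming (purely for illustration) that the payload carries an epoch-millis event time as a Long:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class PayloadTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record) {
        // Assumption: the value is the event time in epoch millis.
        if (record.value() instanceof Long) {
            return (Long) record.value();
        }
        // Otherwise fall back to the producer/broker timestamp.
        return record.timestamp();
    }
}

It is registered through StreamsConfig:

props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
        PayloadTimestampExtractor.class.getName());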
I’m currently wondering if it’s possible to use the internal
`__consumer_offsets` topic to manage offsets outside the consumer group APIs.
I’m using the low-level API to manage the consumers but I’d still like to store
offsets in Kafka.
If it’s not possible to publish and fetch offsets from the
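One way to store offsets in Kafka without the group-management machinery is the regular KafkaConsumer with manual partition assignment: assign() skips the rebalance protocol entirely, while group.id still acts as the namespace under which commitSync() writes to __consumer_offsets. A minimal sketch, with broker list, topic, and offsets as placeholders:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ManualOffsetStore {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("group.id", "my-offset-namespace"); // offset key only, no rebalancing
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            consumer.assign(Collections.singletonList(tp)); // no subscribe(), no group protocol
            consumer.seek(tp, 42L); // resume from an externally chosen position

            // ... poll() and process ...

            // Persist the next offset to consume into __consumer_offsets.
            consumer.commitSync(
                    Collections.singletonMap(tp, new OffsetAndMetadata(43L)));
        }
    }
}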
Hello everyone!
We seem to be experiencing some odd behavior in Kafka and were wondering if
anyone has come across the same issue and if you’ve been able to fix it.
Here’s the setup:
8 brokers in the cluster. Kafka 0.10.0.0.
One topic, and only one topic on this cluster, is having issues whe
Ah, it works! Thanks! I was under the impression that these are sequentially
chained using the DSL. Didn't realize I can still use allRecords in parallel
with the branches.
Ara.
> On Sep 9, 2016, at 5:27 AM, Michael Noll wrote:
>
> Oh, my bad.
>
> Updating the third predicate in `branch()` may not even be needed.
Hi,
I'd like to use the Java Kafka producer in a non-blocking async mode.
My assumption was that as long as new messages fit into the producer's
buffer memory, it will queue them up and send them out once the broker is
available.
I tested a simple case where I am sending messages using
KafkaProducer
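For reference, send() itself only appends to the producer's in-memory accumulator; the broker's result arrives later through an optional Callback invoked on the I/O thread. A minimal sketch, with broker and topic names as placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AsyncSendDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            // e.g. broker unavailable longer than the buffer could absorb
                            exception.printStackTrace();
                        } else {
                            System.out.printf("stored at %s-%d@%d%n",
                                    metadata.topic(), metadata.partition(),
                                    metadata.offset());
                        }
                    });
        } // close() flushes outstanding records
    }
}

The caveat: send() can still block for up to max.block.ms when the accumulator is full or metadata is unavailable, which is exactly the behavior discussed in the KIP-19 thread below.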
Same in both cases:
client.id=Test-Prototype
application.id=test-prototype
group.id=test-consumer-group
bootstrap.servers=broker1:9092,broker2:9092
zookeeper.connect=zk1:2181
replication.factor=2
auto.offset.reset=earliest
On Friday, September 9, 2016 8:48 AM, Eno Thereska
wrote:
Hi Caleb,
It looks like this problem is caused by this bug in Kafka 0.8, which was
fixed in Kafka 0.9:
https://issues.apache.org/jira/browse/KAFKA-972
On Thu, Sep 8, 2016 at 3:55 PM, Jeff Widman wrote:
> How do I permanently remove a broker from a Kafka cluster?
>
> Scenario:
>
> I have a stable cluster of
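As far as I know there is no single "remove broker" command in 0.10; the usual route is to move every partition off the broker with the reassignment tool, wait for the reassignment to finish, then shut the broker down. A sketch, assuming ZooKeeper at zk1:2181 and that broker 5 is the one leaving (file names and ids are placeholders):

# topics-to-move.json lists the topics hosted on broker 5, e.g.:
# {"version": 1, "topics": [{"topic": "my-topic"}]}

bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --topics-to-move-json-file topics-to-move.json \
  --broker-list "1,2,3,4" --generate   # surviving broker ids only

# Save the proposed assignment as reassignment.json, then apply and verify:
bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --reassignment-json-file reassignment.json --execute
bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --reassignment-json-file reassignment.json --verify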
I have a need for volume-based commits in a few sink connectors, and the
current interval-only commit strategy creates some headaches. After
skimming the code, it appears that an alternate put() method that returned
a Map might be used to allow a sink connector to keep
Kafka up to date wrt co
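Newer Connect versions (0.10.1+) add SinkTask.preCommit(), which is close to this proposal: the task decides which offsets Connect commits, so volume-based flushing can be layered on top of the fixed commit interval. A rough sketch, with the flush threshold and bookkeeping purely illustrative:

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class VolumeCommitSinkTask extends SinkTask {
    private static final long FLUSH_EVERY = 10_000; // records; tune to taste
    private final Map<TopicPartition, Long> flushedOffsets = new HashMap<>();
    private long buffered = 0;

    @Override public String version() { return "0.1"; }
    @Override public void start(Map<String, String> props) { }
    @Override public void stop() { }
    @Override public void flush(Map<TopicPartition, OffsetAndMetadata> offsets) { }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord r : records) {
            // ... write r to the external system ...
            if (++buffered >= FLUSH_EVERY) {
                // Flush downstream here, then remember the safe commit point.
                flushedOffsets.put(
                        new TopicPartition(r.topic(), r.kafkaPartition()),
                        r.kafkaOffset() + 1);
                buffered = 0;
            }
        }
    }

    @Override
    public Map<TopicPartition, OffsetAndMetadata> preCommit(
            Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
        // Let Connect commit only what this sink has durably flushed.
        Map<TopicPartition, OffsetAndMetadata> safe = new HashMap<>();
        for (Map.Entry<TopicPartition, Long> e : flushedOffsets.entrySet()) {
            safe.put(e.getKey(), new OffsetAndMetadata(e.getValue()));
        }
        return safe;
    }
}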
Hi there,
We are preparing an update of our Kafka cluster and applications to Kafka
0.10.x, and we are having some difficulty configuring the *Kafka
producer to be asynchronous and reliably non-blocking*.
As I understand KIP-19 (1), the main intention of the Kafka developers was
to hard-limit how
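For reference, the knob KIP-19 introduced is max.block.ms, which bounds how long send() and partitionsFor() may block when the accumulator is full or metadata is missing (it supersedes block.on.buffer.full). A sketch of the relevant settings; the 50 ms bound is just an example, and note that max.block.ms=0 makes the very first send() fail because no metadata has been fetched yet:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class NonBlockingProducerProps {
    public static Properties build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Hard upper bound on blocking inside send()/partitionsFor().
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "50");
        // Size of the accumulator that absorbs bursts while brokers are slow.
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, "33554432"); // 32 MB (the default)
        return props;
    }
}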
Hi Caleb,
Could you share your Kafka Streams configuration (i.e., StreamsConfig
properties you might have set before the test)?
Thanks
Eno
On Thu, Sep 8, 2016 at 12:46 AM, Caleb Welton wrote:
> I have a question with respect to the KafkaStreams API.
>
> I noticed during my prototyping work tha
The send() method returns a Future. You need to get the result at some point
to see what happened. A simple way would be:
m_kafkaProducer.send(prMessage).get();
Note that get() blocks until the broker responds, which makes the send
effectively synchronous; to stay asynchronous, pass a Callback to send() instead.
-Dave
-Original Message-
From: Agostino Calamita [mailto:agostino.calam...@gmail.com]
Sent: Friday, September 9, 2016 9:33 A
Hi,
I'm writing a little test to check Kafka high availability, with 2 brokers,
1 topic with replication factor = 2 and min.insync.replicas=2.
This is the test:
System.out.println("Building KafkaProducer...");
KafkaProducer m_kafkaProducer = new KafkaProducer(propsProducer);
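For the test to actually exercise min.insync.replicas, the producer has to ask for full acknowledgement; with the default acks=1 the broker accepts writes even after the ISR has shrunk below 2. A sketch of producer properties for such a test (broker list is a placeholder):

Properties propsProducer = new Properties();
propsProducer.put("bootstrap.servers", "broker1:9092,broker2:9092");
propsProducer.put("key.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
propsProducer.put("value.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
// acks=all + min.insync.replicas=2: once one of the two brokers is down,
// sends fail with NotEnoughReplicasException instead of silently
// losing redundancy.
propsProducer.put("acks", "all");
propsProducer.put("retries", "3");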
Hi,
I am a newbie to Kafka and would like to use it. I have a question about
partition selection when multiple producers write to the same topic. Assume
that I did not specify any key while sending a message from a producer,
letting the producer choose a partition in a round-robin manner. I have
multiple producer no
Oh, my bad.
Updating the third predicate in `branch()` may not even be needed.
You could simply do:
KStream[] branches = allRecords
    .branch(
        (imsi, callRecord) -> "VOICE".equalsIgnoreCase(callRecord.getCallCommType()),
        (imsi, callRecord) -> "DATA".equalsIgnoreCase(callRecord.getCallCommType())
    );
Ara,
you have shared this code snippet:
>allRecords.branch(
>    (imsi, callRecord) -> "VOICE".equalsIgnoreCase(callRecord.getCallCommType()),
>    (imsi, callRecord) -> "DATA".equalsIgnoreCase(callRecord.getCallCommType()),
>    (imsi, callRecord) -> true
>);
T
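The pattern under discussion, spelled out: branch() fans a stream out by predicate, but the original KStream remains usable alongside the branches, since a topology node can have any number of children. A compilable sketch against the 0.10.0 DSL, with String values standing in for the CallRecord type and topic names as placeholders:

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class BranchDemo {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> allRecords = builder.stream("call-records");

        // Split by predicate: branches[0] gets VOICE, branches[1] gets DATA.
        KStream<String, String>[] branches = allRecords.branch(
                (imsi, call) -> call.startsWith("VOICE"),
                (imsi, call) -> call.startsWith("DATA"));

        branches[0].to("voice-topic");
        branches[1].to("data-topic");

        // The parent stream can still be consumed in parallel to the branches.
        allRecords.to("all-records-audit");
    }
}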
Coming back to this issue: it looks like it was a result of the CentOS 7
systemd cleanup task on /tmp:
/usr/lib/tmpfiles.d/tmp.conf
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License