For #1, fetcher.getTopicMetadata() is called.
If you have time, you can read getTopicMetadata(). It is a blocking call
with the given timeout.
For #2, I don't see any mechanism for metadata sharing.
FYI
On Fri, Dec 29, 2017 at 8:25 AM, Viliam Ďurina
wrote:
> Hi,
>
> I use KafkaConsumer.partitionsF
Thanks, yeah, I understand that I couldn't truly consume from multiple
threads. I was hoping I could get away with just commits from the thread
(to signal that processing of the data it was passed was complete).
It's just a little extra complication. A SMOP.
Skip
On Dec 29, 2017 10:46 AM, "Steph
Hey Skip, the Kafka consumer is NOT thread-safe. If you want to share one across
threads you will need to lock properly. Alternatively, you can create a consumer
per thread. Check out the class javadoc under the "multithreaded processing"
section for more details and suggestions.
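The consumer-per-thread option mentioned above can be sketched roughly as follows. This is a minimal sketch, not the javadoc's example: the topic name, group id, and bootstrap address are placeholders, processing is elided, and the threads are deliberately not started since that needs a running broker.

```python
import threading

def consume_loop(stop_event):
    # Each thread builds its OWN consumer: KafkaConsumer is not thread
    # safe, so instances are never shared across threads. Topic, group
    # and bootstrap address are placeholders.
    import kafka  # kafka-python client
    consumer = kafka.KafkaConsumer(
        "my-topic",
        group_id="my-group",          # same group => partitions are split
        bootstrap_servers="localhost:9092",
    )
    try:
        while not stop_event.is_set():
            # poll() returns {TopicPartition: [records]}
            for tp, records in consumer.poll(timeout_ms=500).items():
                for record in records:
                    pass  # process the record here
    finally:
        consumer.close()

stop = threading.Event()
threads = [threading.Thread(target=consume_loop, args=(stop,), daemon=True)
           for _ in range(3)]
# The threads would be started with t.start(); omitted here because it
# requires a reachable broker.
```

With all threads in the same consumer group, the brokers split the topic's partitions among them, so no cross-thread coordination is needed beyond shutdown.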
___
Hi Peter,
Kafka 1.0 certainly survives restarts, we have lots of tests verifying
that. :) Are you running Kafka on Windows? There is a known issue affecting
Windows (which is not a supported platform at this point).
Ismael
On Fri, Dec 29, 2017 at 3:24 PM, peter holm wrote:
> Hi, kafka 1.0 does
Hi,
I use the KafkaConsumer.partitionsFor() method to discover partitions that
might be added at runtime. I use manual partition assignment. I call it
once per second and rely on the metadata.max.age.ms property to throttle
the real number of remote calls.
My questions:
1.
can the partitionsFor c
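The once-per-second discovery loop described above boils down to diffing the returned partition list against what is already assigned. A minimal, hypothetical helper (the function name and the FakeInfo stand-in are illustrative, not part of the Kafka client API; the real inputs would be the PartitionInfo objects returned by partitionsFor()):

```python
from collections import namedtuple

def new_partitions(assigned, partition_infos):
    """assigned: set of partition ids we already handle via manual
    assignment; partition_infos: objects with a .partition attribute,
    e.g. the result of consumer.partitionsFor(topic)."""
    current = {p.partition for p in partition_infos}
    return sorted(current - assigned)

# Stand-in for the client's PartitionInfo, for illustration only.
FakeInfo = namedtuple("FakeInfo", "partition")

# Partition 2 appeared since the last check:
added = new_partitions({0, 1}, [FakeInfo(0), FakeInfo(1), FakeInfo(2)])
# added == [2]
```

Calling this once per second should be cheap, since (as the post above relies on) the client answers from cached metadata and metadata.max.age.ms bounds how often an actual remote metadata request is made.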
Hi, Kafka 1.0 does not survive a restart. The server.log has many "cannot
delete file" errors - is this a known bug?
I subscribe to topics and receive input in my main thread (this
happens to be in Python):

consumer = kafka.KafkaConsumer(...)
for data in consumer:
    do_stuff(data)

do_stuff(...) can hand messages off to a separate thread for
processing. Is it okay to call commit() from that thread?
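One way to avoid touching the consumer from the worker thread at all is to have workers report completed offsets back over a queue and let the consumer thread do the committing. A minimal sketch of that hand-off, with the Kafka calls themselves left as comments (function names here are illustrative, not client API). Note the caveat in the code: committing highest-seen + 1 assumes work completes roughly in order per partition.

```python
import queue

done = queue.Queue()  # workers put (partition, offset) when finished

def worker(tasks, done_queue):
    # Runs in a separate thread; never touches the consumer object.
    while True:
        partition, offset, data = tasks.get()
        # ... process data here ...
        done_queue.put((partition, offset))  # report completion, don't commit
        tasks.task_done()

def drain_completed(done_queue):
    """Collect completed work without blocking; return
    {partition: highest_completed_offset + 1}, i.e. the position the
    consumer thread should commit for each partition.
    Caveat: taking the max assumes per-partition in-order completion;
    out-of-order workers would need contiguous-range tracking."""
    to_commit = {}
    while True:
        try:
            partition, offset = done_queue.get_nowait()
        except queue.Empty:
            break
        to_commit[partition] = max(to_commit.get(partition, 0), offset + 1)
    return to_commit

# In the consumer (main) thread the loop would look roughly like:
#   for data in consumer:
#       tasks.put((data.partition, data.offset, data))
#       for partition, position in drain_completed(done).items():
#           consumer.commit(...)  # commit from the consumer thread only
```

This keeps every call on the consumer object on a single thread, which sidesteps the thread-safety problem entirely.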
>From core/src/main/scala/kafka/server/KafkaConfig.scala :
val CompressionTypeDoc = "Specify the final compression type for a given
topic. This configuration accepts the standard compression codecs " +
"('gzip', 'snappy', 'lz4'). It additionally accepts 'uncompressed' which
is equivalent to no
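For reference, the broker-wide default described by that doc string would be set in the broker configuration; a hedged example (lz4 shown, the value being one of the codecs listed above or 'uncompressed'):

```
# server.properties (broker-wide default); individual topics can
# override this with a per-topic compression.type config.
compression.type=lz4
```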
I checked the latest .log files and the earliest. They are all the same,
with human-readable message payloads.
I tried setting LZ4, but that leads to a fatal error on startup:
[2017-12-29 13:55:15,393] FATAL [KafkaServer id=1] Fatal error during
KafkaServer startup. Prepare to shutdown (kafka.server.Ka
Was this config added after some data had already been sent? Can you verify
the latest logs?
This won't recompress existing messages; it only applies to new messages.
On Fri, Dec 29, 2017 at 6:59 PM, Ted Yu wrote:
> Looking at https://issues.apache.org/jira/browse/KAFKA-5686 , it seems you
> should have specifi
Looking at https://issues.apache.org/jira/browse/KAFKA-5686 , it seems you
should have specified LZ4.
FYI
On Fri, Dec 29, 2017 at 5:00 AM, Sven Ludwig wrote:
> Hi,
>
> we thought we have lz4 applied as broker-side compression on our Kafka
> Cluster for storing measurements, but today I looked i
Hi,
we thought we had lz4 applied as broker-side compression on our Kafka Cluster
for storing measurements, but today I looked into the individual .log files and
I was able to read all the measurements in plain text just by using less on the
command line. To me this is an indicator that batc
Thank you very much!
Best regards
---
Jean
jeanking...@gmail.com
>-Original Message-
>From: Ted Yu [mailto:yuzhih...@gmail.com]
>Sent: Friday, Decembe
Can you take a look at KAFKA-5337 (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-169+-+Lag-Aware+Partition+Assignment+Strategy)
?
Cheers
On Thu, Dec 28, 2017 at 11:17 PM, 赖剑清 wrote:
> Hi, all
>
> I met a problem while using Kafka as a message queue.
>
> I have 10 consumer servers and 3
Hi
I suppose that you used kafka-x.x.x-src.tgz instead of kafka_2.11-x.x.x.tgz?
Best regards
---
Jean
jeanking...@gmail.com
From: bogdan.ivtsje...@kpn.com [ma