Thanks Guazhang for the reply!
So in fact, if it is the case you described, then, if I understand correctly, the
lost messages should be the last messages. But in our use case it is not
the last messages that get lost. And this does not explain the different
behavior depending on the `kill -9` moment (before
Hi
I'm using kafka 0.9 in server and clients.
I want KafkaProducer to send data fast and not wait long, so I reduced
max.block.ms to 100 ms. 100 ms is my ideal, and the producer sends data in less
than 100 ms.
But there is a problem: the documentation says that the first time the producer
sends data it fetches the topic's m
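For reference, a minimal sketch of roughly that setup with the 0.9 Java
producer; the broker address, topic name and the metadata warm-up via
partitionsFor() are my own assumptions, not taken from the original message:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FastProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumption: local broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Cap how long send() may block on metadata or a full buffer.
        props.put("max.block.ms", "100");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Optional warm-up: fetch metadata once up front so the first real
        // send() does not pay the metadata-fetch cost inside the 100 ms budget.
        producer.partitionsFor("my-topic");                  // "my-topic" is a placeholder

        producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        producer.close();
    }
}

Warming the metadata up once at startup is one way to keep the first real
send() within a tight max.block.ms.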
To set the stage:
We currently have 2 zookeeper ensembles (ZK1 and ZK2), which are
running and stable. We are in the process of consolidating to just
one. Currently we have a set of brokers "registered" to the ZK1
ensemble, which does have some topics. We can shutdown the brokers
(for a short p
Sorry, in fact the test code in the gist does not exactly reproduce the problem
we're facing. I'm working on that.
2016-02-02 10:46 GMT+01:00 Han JU :
> Thanks Guazhang for the reply!
>
> So in fact if it's the case you said, if I understand correctly, then the
> messages lost should be the last messa
Hi
This is Jingbo, I am a database engineer working at Sina (China). I get an error
when I start the Kafka cluster; can you help me?
The error is:
2016-02-02 16:52:20,916] ERROR Error while electing or becoming leader on
broker 10798841 (kafka.server.ZookeeperLeaderElector)
org.I0Itec.zkclient.exce
Thanks for the information James, the slides are really good.
One question: in the new producer, the property block.on.buffer.full (the slides
say setting this value to TRUE is a good practice, and I imagine this avoids a
buffer overflow) is deprecated in favor of max.block.ms, which bloc
Hi Eric,
We have a slightly different use case where we publish to Kafka using a
(modified) Connect Source and are using Spark Streaming to read the data
from Kafka and write to C* - it was really easy to write simple code to
parse SchemaAndValue objects.
Setting up Spark Streaming is extremely
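For anyone looking for the "simple code to parse SchemaAndValue objects"
part, a rough sketch using Connect's JsonConverter; the topic name and the
placeholder for the record bytes are made up, and it assumes the source wrote
JSON with embedded schemas (JsonConverter's default):

import java.util.Collections;
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.json.JsonConverter;

public class ConnectPayloadParser {
    public static void main(String[] args) {
        JsonConverter converter = new JsonConverter();
        // false = this converter handles record values, not keys
        converter.configure(Collections.singletonMap("schemas.enable", "true"), false);

        byte[] raw = sampleRecordValue();   // stand-in for bytes read via Spark Streaming
        SchemaAndValue parsed = converter.toConnectData("my-topic", raw);

        System.out.println("schema: " + parsed.schema());
        System.out.println("value : " + parsed.value());
    }

    private static byte[] sampleRecordValue() {
        // JsonConverter's envelope format: schema plus payload.
        return "{\"schema\":{\"type\":\"string\",\"optional\":false},\"payload\":\"hello\"}"
                .getBytes();
    }
}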
It is indeed weird that if you kill -9 before the first commit, then there
is no data loss.
But with what I suspect, you can get data loss in the middle, not only for the
last messages, since once consumer1 is killed, consumer2 will take over the
partitions assigned to consumer1 and resume from the committed o
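To make the offset/rebalance point concrete, here is a hedged sketch with the
new 0.9 consumer that commits only after processing, so whoever takes over the
partitions after a kill -9 resumes from what was actually processed. Broker
address, group id and topic name are placeholders:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CommitAfterProcessing {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "my-group");                   // placeholder
        props.put("enable.auto.commit", "false");            // commit only after processing
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"));

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                process(record);                              // placeholder for real work
            }
            // Commit only what has been processed; if this consumer is killed,
            // the one that takes over resumes from here, not further ahead.
            consumer.commitSync();
        }
    }

    private static void process(ConsumerRecord<String, String> record) { }
}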
This timeout is used while fetching metadata and for blocking when there
is not enough space in the producer's memory to store the batches that are
waiting to be sent to the Kafka brokers.
If you increase your producer's memory and reduce your linger time (and also
the batch size if required), you will have en
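As a rough illustration of those knobs (the values are placeholders, not
recommendations):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class TunedProducerConfig {
    static KafkaProducer<byte[], byte[]> build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");    // placeholder
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        // More room for batches waiting to go out, so send() blocks less often.
        props.put("buffer.memory", "67108864");               // 64 MB, placeholder
        // Send batches sooner instead of waiting for them to fill.
        props.put("linger.ms", "0");
        props.put("batch.size", "16384");                      // default; lower if needed
        return new KafkaProducer<>(props);
    }
}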
Assigned to you :)
On Mon, Feb 1, 2016 at 10:46 PM, Adam Kunicki wrote:
> Done: https://issues.apache.org/jira/browse/KAFKA-3191
MAX_INT is a good value if you want to just block until the buffer has some
space (and never get an exception).
On Tue, Feb 2, 2016 at 8:08 AM, Franco Giacosa wrote:
> Thanks for the information James, the slides are really good.
>
> One question, in the new producer the property block.on.buffer
Hi All,
I see that the --num.producers is removed from the Kafka 0.9 MirrorMaker. Why is
this?
How can we create multiple producer threads for publishing in MirrorMaker?
Thanks,
Tushar
It looks like the gist of it is to use the client’s
ByteArraySerializer/ByteArrayDeserializer. Can someone point me to nice
examples for Json or Avro or Scala case classes with the 0.9 Kafka Java client?
Googling for answers produces too much noise to wade through.
Thanks,
Gary
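Pending better pointers, one way to do JSON with the 0.9 Java client is a thin
Jackson wrapper around the client's Serializer interface; this is just a
sketch, and the generic type and error handling are my own choices:

import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.common.serialization.Serializer;

// Minimal JSON value serializer: works for any POJO Jackson can handle.
public class JsonSerializer<T> implements Serializer<T> {
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) { }

    @Override
    public byte[] serialize(String topic, T data) {
        try {
            return data == null ? null : mapper.writeValueAsBytes(data);
        } catch (Exception e) {
            throw new RuntimeException("JSON serialization failed", e);
        }
    }

    @Override
    public void close() { }
}

The matching Deserializer would call mapper.readValue on the byte array; Avro
usually goes through a schema-registry serializer rather than hand-rolled code,
and Scala case classes can reuse the same approach with a Scala-aware Jackson
module.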
I'm running kafka_2.11-0.9.0.0 and a java-based producer/consumer. With
messages ~70 KB everything works fine. However, after the producer enqueues a
larger, 70 MB message, kafka appears to stop delivering the messages to the
consumer. I.e. not only is the large message not delivered but also s
Make sure the topic is created after message.max.bytes is set.
On Feb 2, 2016 9:04 PM, "Tech Bolek" wrote:
> I'm running kafka_2.11-0.9.0.0 and a java-based producer/consumer. With
> messages ~70 KB everything works fine. However, after the producer enqueues
> a larger, 70 MB message, kafka appe
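For reference, these are the settings usually involved when messages go well
past the ~1 MB defaults; every value below is a placeholder sized for a ~70 MB
message, not a recommendation:

import java.util.Properties;

public class LargeMessageConfigSketch {
    // Broker side (server.properties), ideally in place before the topic is created:
    //   message.max.bytes=75000000          largest message the broker accepts
    //   replica.fetch.max.bytes=75000000    so followers can still replicate it
    // Or per topic: max.message.bytes=75000000

    static Properties producerProps() {
        Properties p = new Properties();
        p.put("max.request.size", "75000000");          // allow requests past the 1 MB default
        p.put("buffer.memory", "134217728");            // must comfortably exceed the message size
        return p;
    }

    static Properties consumerProps() {
        Properties p = new Properties();
        p.put("max.partition.fetch.bytes", "75000000"); // otherwise the new consumer's fetch stalls
        return p;
    }
}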