Jason,
My OS/JVM is OS X 10.11.3, JDK 1.8.0_40
—
Krzysztof
On 26 January 2016 at 19:04:58, Jason Gustafson (ja...@confluent.io) wrote:
Hey Krzysztof,
So far I haven't had any luck figuring out the cause of the 5 second pause,
but I've reproduced it with the old consumer on 0.8.2, so that rules out
anything specific to the new consumer.
Rajiv,
Could you try building the new consumer from the 0.9.0 branch and see if the
issue can be reproduced?
Guozhang
On Mon, Jan 25, 2016 at 9:46 PM, Rajiv Kurian wrote:
> The exception seems to be thrown here
>
> https://github.com/apache/kafka/blob/0.9.0/clients/src/main/java/org/apache/kafka/
Hi,
We are trying to write a Kafka Connect connector for MongoDB. The issue is that
MongoDB does not provide the entire changed document for update operations;
it provides only the modified fields.
If Kafka allows custom log compaction, then it is possible to eventually
merge an entire document and su
The new consumer only supports storing offsets in Kafka.
On Wed, 27 Jan 2016 at 05:26 wrote:
> Does the new KafkaConsumer support storing offsets in Zookeeper or only in
> Kafka? By looking at the source code I could not find any support for
> Zookeeper, but wanted to confirm this.
>
> --
> Best regards,
> Marko
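For reference, a minimal sketch of Kafka-backed offset storage with the new
consumer; the broker address, group id, and topic name ("events") are
illustrative only:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class OffsetCommitExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "example-group");
            // Offsets live in Kafka's internal __consumer_offsets topic,
            // not in Zookeeper; disable auto-commit to commit manually.
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Arrays.asList("events"));
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                // Synchronously commit the offsets of the records just consumed.
                consumer.commitSync();
            }
        }
    }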
I managed to reproduce this issue on my mac with receive.buffer.bytes set to
the new consumer's default value. My JVM is HotSpot 64-bit 1.7.0_60 and my mac
is on 10.10.5.
On Wed, 27 Jan 2016 at 02:54 Krzysztof Ciesielski <
krzysztof.ciesiel...@softwaremill.pl> wrote:
> Hi Jason,
>
> Lowering "receive.buffer.bytes" helps, but when the message size gets bigger,
> it comes back again.
Nikhil,
You should search the mailing list archives, but I'm not aware of any
discussion around that. If you wanted to try something like that, you might
be able to accomplish it via FUSE or similar. For example, this page lists
ways you can mount HDFS as a normal filesystem, including fuse-based
Producer.send() by itself will not throw anything.
You need to either wait on the future:
producer.send().get()
or use it with a callback that logs the error.
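A minimal sketch of both options; the topic name ("my-topic") and the
pre-built producer are assumptions for illustration:

    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class SendErrorHandling {
        static void send(KafkaProducer<String, String> producer) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("my-topic", "key", "value");
            // Option 1: block on the future; a broker-side failure surfaces
            // here as an ExecutionException wrapping the real cause.
            try {
                RecordMetadata metadata = producer.send(record).get();
                System.out.println("acked at offset " + metadata.offset());
            } catch (InterruptedException | ExecutionException e) {
                System.err.println("send failed: " + e.getCause());
            }
            // Option 2: asynchronous callback; exception is non-null on failure.
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    System.err.println("send failed: " + exception);
                }
            });
        }
    }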
On Tue, Jan 26, 2016 at 8:50 AM, Joe San wrote:
> Is this strange or weird? I had no Kafka or Zookeeper running on my local
> machine and I was expecting an exception, but for some strange reason, I do
> not see any errors.
I would give this a look and see if it works, since you mention HDFS
https://github.com/linkedin/gobblin
On Tue, Jan 26, 2016 at 3:49 PM, Nikhil Joshi wrote:
> Hi,
>
> I'm new to the Kafka community. Has there been any discussion around
> plugging-in external filesystems (like HDFS) for Kafka persistence?
Thanks Jun.
On Tue, Jan 26, 2016 at 3:48 PM, Jun Rao wrote:
> Rajiv,
>
> We haven't released 0.9.0.1 yet. To try the fix, you can build a new client
> jar off the 0.9.0 branch.
>
> Thanks,
>
> Jun
>
> On Mon, Jan 25, 2016 at 12:03 PM, Rajiv Kurian wrote:
>
> > Thanks Jason. We are using an affected client I guess.
Hi,
I'm new to the Kafka community. Has there been any discussion around
plugging-in external filesystems (like HDFS) for Kafka persistence? Though
local filesystem gives the best throughput for the append-only Kafka log
data-structure, other filesystems might be able to provide better storage
efficiency.
Rajiv,
We haven't released 0.9.0.1 yet. To try the fix, you can build a new client
jar off the 0.9.0 branch.
Thanks,
Jun
On Mon, Jan 25, 2016 at 12:03 PM, Rajiv Kurian wrote:
> Thanks Jason. We are using an affected client I guess.
>
> Is there a 0.9.0 client available on maven? My search at
Thanks Ewen!
-J
On Tue, Jan 26, 2016 at 1:44 PM, Ewen Cheslack-Postava
wrote:
> No, you don't need to keep adding ZK nodes. You should have a 3 or 5 node
> ZK cluster. The more nodes you use, the slower write performance becomes,
> so adding more can hurt performance of any ZK-related operations.
No, you don't need to keep adding ZK nodes. You should have a 3 or 5 node
ZK cluster. The more nodes you use, the slower write performance becomes,
so adding more can hurt performance of any ZK-related operations. The
tradeoff between 3 and 5 ZK nodes is fault tolerance (better with 5) vs
write performance.
Does the new KafkaConsumer support storing offsets in Zookeeper or only in
Kafka? By looking at the source code I could not find any support for
Zookeeper, but wanted to confirm this.
--
Best regards,
Marko
www.kafkatool.com
Hi Andrew,
I'm the main maintainer of Reactive-Kafka, which wraps Kafka as a sink/source of
a Reactive Stream. Maybe it will suit your needs:
https://github.com/softwaremill/reactive-kafka
Java API is also available.
—
Bests,
Krzysiek
SoftwareMill
On 26 January 2016 at 22:10:13, Andrew Pennebak
It's not an iterator (ConsumerRecords is a collection of records), but you
also won't just get the entire set of messages all at once. You would have
the same issue if you set auto.offset.reset to earliest for a new consumer
-- everything that's in the topic will need to be consumed.
Under the hood
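A minimal sketch of that behavior, assuming a topic named "events" and
illustrative config values; with auto.offset.reset=earliest a new group starts
from the beginning, but each poll() still returns a bounded batch:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PollFromBeginning {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "fresh-group");
            // No committed offsets yet, so start from the earliest offset:
            // the whole topic is consumed over successive polls.
            props.put("auto.offset.reset", "earliest");
            // Per-partition fetch size bounds how much one poll can return.
            props.put("max.partition.fetch.bytes", "1048576");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Arrays.asList("events"));
                while (true) {
                    // Each poll returns one bounded batch, not the whole topic.
                    for (ConsumerRecord<String, String> record : consumer.poll(1000)) {
                        System.out.println(record.value());
                    }
                }
            }
        }
    }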
The Apache Kafka Java library requires an inordinate amount of code to
send/receive messages. Has anyone thought of writing a wrapper library to
make Kafka easier to use? (A sketch of the idea follows the list.)
* Assume more default values
* Easier access to message iterators / listening-reacting loops
* Consumer thread pools
* Easy top
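By way of illustration, a sketch of what such a wrapper might look like; the
class and method names here are hypothetical, not an existing library:

    import java.util.Arrays;
    import java.util.Properties;
    import java.util.function.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Hypothetical convenience wrapper: string-serde defaults plus a
    // listen-and-react loop, so callers supply only the essentials.
    public class EasyKafka {
        public static void listen(String brokers, String group, String topic,
                                  Consumer<String> handler) {
            Properties props = new Properties();
            props.put("bootstrap.servers", brokers);
            props.put("group.id", group);
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Arrays.asList(topic));
                while (true) {
                    for (ConsumerRecord<String, String> record : consumer.poll(1000)) {
                        handler.accept(record.value());
                    }
                }
            }
        }
    }

    // Usage: EasyKafka.listen("localhost:9092", "g1", "events", System.out::println);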
Hi Guys,
In general, is it a good idea to run a ZK node on each Kafka broker? In
other words, as you add broker nodes you are also adding ZK nodes 1:1. Or
should the ZK cluster be kept at a smaller fixed size (like 3)?
Thank you in advance.
-J
I updated the KIP accordingly.
Cheers,
- pyr
Oops! So sorry about that...
On Tue, Jan 26, 2016 at 8:25 PM, Henrik Lundahl
wrote:
> Hi
>
> Perhaps someone already told you, but you should send to
> users-h...@kafka.apache.org instead to subscribe.
>
>
> BR
>
> --
> Henrik
>
>
>
> On Sun, Jan 24, 2016 at 8:48 AM, Richard He
> wrote:
>
Hi
Perhaps someone already told you, but you should send to
users-h...@kafka.apache.org instead to subscribe.
BR
--
Henrik
On Sun, Jan 24, 2016 at 8:48 AM, Richard He wrote:
Hi Jason,
Lowering "receive.buffer.bytes" helps, but when the message size gets bigger,
it comes back again.
I will test with 65536 and check how big the message has to be to make the
issue reappear with this value (I suspect it will have to be quite big).
--
Krzysztof Ciesielski
SoftwareMill
On 26 Januar
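For reference, a sketch of the setting under test, using the new consumer's
Properties-based configuration; the other values are illustrative:

    import java.util.Properties;

    public class ReceiveBufferConfig {
        public static Properties consumerProps() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "buffer-test");
            // TCP receive buffer size under test; 65536 is the value
            // mentioned above, replacing the new consumer's default.
            props.put("receive.buffer.bytes", "65536");
            return props;
        }
    }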
Hey Krzysztof,
So far I haven't had any luck figuring out the cause of the 5 second pause,
but I've reproduced it with the old consumer on 0.8.2, so that rules out
anything specific to the new consumer. Can you tell me which os/jvm you're
seeing it with? Also, can you try changing the "receive.buffer.bytes" setting?
Is this strange or weird? I had no Kafka or Zookeeper running on my local
machine and I was expecting an exception, but for some strange reason, I do
not see any errors:
try {
  logger.info(s"kafka producer obtained is ${producer}")
  producer.send(
    new ProducerRecord[String, String](producerC
Hi,
Are there any future plans to add request/reply functionality to Kafka?
I've currently implemented this functionality by basically creating and
deleting temporary topics. It works fine under lighter loads, but under
very high loads, it can overwhelm Zookeeper because of the intensive IO
required.
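Not an official feature, but one common workaround is to route replies through
a single shared topic and match on a correlation id carried in the record key,
which avoids the per-request topic churn described above. A sketch; the topic
names and id scheme are assumptions:

    import java.util.Arrays;
    import java.util.UUID;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class RequestReply {
        static String request(KafkaProducer<String, String> producer,
                              KafkaConsumer<String, String> replyConsumer,
                              String payload) {
            // Tag the request with a correlation id in the record key; the
            // replier is expected to copy the key onto its response record.
            String correlationId = UUID.randomUUID().toString();
            producer.send(new ProducerRecord<>("requests", correlationId, payload));
            replyConsumer.subscribe(Arrays.asList("replies"));
            while (true) {
                for (ConsumerRecord<String, String> record : replyConsumer.poll(100)) {
                    if (correlationId.equals(record.key())) {
                        return record.value(); // matched our request
                    }
                }
            }
        }
    }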
Hi Ismael,
Thanks for the review, I will act on these a bit later today.
- pyr
Ismael Juma writes:
> Thanks Pierre. Including the dev mailing list.
>
> A few comments:
>
> 1. It's worth mentioning that the KafkaConsumer has the
> @InterfaceStability.Unstable annotation.
> 2. It would be good to show the existing signatures of the methods being
> changed before we show the changed signatures.
Thanks Pierre. Including the dev mailing list.
A few comments:
1. It's worth mentioning that the KafkaConsumer has the
@InterfaceStability.Unstable annotation.
2. It would be good to show the existing signatures of the methods being
changed before we show the changed signatures.
3. The proposed c
We do not seek in onPartitionsAssigned.
In our test setup (evaluating Kafka for a new project) we put a constant load
on one of the topics.
We have a consumer group pulling messages from the different partitions on the
topic.
At a certain point in time, the poll() does not return any message
KAFKA-3006 is under review and would change some commonly used signatures in
the Kafka client library. The idea behind the proposal is to provide a unified
way of interacting with anything sequence-like in the client.
If the change is accepted, these would be the signatures that change:
void su
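The list above is cut off in the archive. For illustration only, this is the
kind of unification being discussed, per the KIP-45 proposal tracking
KAFKA-3006 (the final signatures may differ): sequence-like parameters move
from List or varargs to java.util.Collection.

    import java.util.Collection;
    import java.util.List;
    import org.apache.kafka.common.TopicPartition;

    // Before: sequence-like arguments use a mix of container types.
    interface ConsumerBefore {
        void subscribe(List<String> topics);
        void assign(List<TopicPartition> partitions);
        void pause(TopicPartition... partitions);
    }

    // After: everything sequence-like is accepted as a Collection.
    interface ConsumerAfter {
        void subscribe(Collection<String> topics);
        void assign(Collection<TopicPartition> partitions);
        void pause(Collection<TopicPartition> partitions);
    }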