Hi Francesco,
There are a few things to think about before turning on log compaction for
a topic.
1. Does the topic have non-keyed messages? Log compaction only works if all
the messages have a key.
2. The log cleaner needs some memory to build the offset map for log
compaction, so the memory cons
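For reference, the settings under discussion can be sketched as a config fragment. This is an illustrative sketch only — the property names are standard Kafka topic/broker configs, but the values are assumptions, not recommendations:

```properties
# Per-topic setting that turns on log compaction.
cleanup.policy=compact

# Broker-side: memory the log cleaner uses to build its offset (dedupe) map.
# The value below is purely illustrative.
log.cleaner.dedupe.buffer.size=134217728
```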
Hey Marcos,
Thanks for the report. Can you check out
https://issues.apache.org/jira/browse/KAFKA-3994 and see if it matches? At
a glance, it looks like the same problem. We tried pretty hard to get the
fix into the release, but it didn't quite make it. A few questions:
1. Did you not see this in
What happens when you run it as a Windows service? And do you save/dump the
consumed data somewhere other than the console (for the Windows service)?
On Thu, Nov 3, 2016 at 1:11 AM, Birendra Kumar Singh
wrote:
> Hi
>
> I need help in creating a windows service to consume from kafka topic. The
> service is writte
Hi John,
first of all, a KTable is a (changelog) stream; thus, by definition it
is infinite.
However, I assume you are worried about the internal materialized
view of the changelog stream (i.e., a table state). This view only
contains the latest val
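The "latest value per key" semantics of a materialized view can be illustrated outside the Streams API with a plain map: replaying a changelog keeps only the last update per key, so the view is bounded by the number of distinct keys, not the length of the stream. A standalone sketch (not Kafka Streams code; class and method names are made up for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ChangelogView {
    // Materialize a changelog (a sequence of key/value updates) into a
    // table: later updates overwrite earlier ones for the same key.
    public static Map<String, Long> materialize(String[] keys, long[] values) {
        Map<String, Long> view = new LinkedHashMap<>();
        for (int i = 0; i < keys.length; i++) {
            view.put(keys[i], values[i]);
        }
        return view;
    }

    public static void main(String[] args) {
        // Three updates, two distinct keys -> the view holds two entries,
        // with "a" mapped to its most recent value.
        System.out.println(materialize(
            new String[]{"a", "b", "a"}, new long[]{1L, 2L, 3L}));
    }
}
```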
We hit an error in some custom monitoring code for our Kafka cluster where
the root cause was that ZooKeeper was storing offsets for some partitions for
consumer groups, but those partitions didn't actually exist on the brokers.
Apparently in the past, some colleagues needed to reset a stuck cluster
cau
Confirmed. It is the magic byte ahead of each Avro message. I am able to get
the Flink consumer to work. Thank you, Dave :)
Thanks,
Dayong
> On Nov 3, 2016, at 8:01 AM, Dayong wrote:
>
> Not quite sure, will try to find out today.
>
> Thanks,
> Dayong
>
>> On Nov 2, 2016, at 9:59 PM, "Tauzell, Dave
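The framing confirmed above — one "magic" byte (0), then a 4-byte big-endian schema-registry id, then the raw Avro payload — can be unpacked with plain ByteBuffer arithmetic. A minimal sketch (class and method names are made up; this is not registry client code):

```java
import java.nio.ByteBuffer;

public class WireFormat {
    // Framing as described in the thread:
    // byte 0        -> magic byte (0)
    // bytes 1..4    -> schema id, big-endian int
    // bytes 5..end  -> Avro-encoded message
    public static int schemaId(byte[] record) {
        ByteBuffer buf = ByteBuffer.wrap(record);
        byte magic = buf.get();
        if (magic != 0) {
            throw new IllegalArgumentException("Unexpected magic byte: " + magic);
        }
        return buf.getInt();
    }

    public static byte[] avroPayload(byte[] record) {
        byte[] payload = new byte[record.length - 5];
        System.arraycopy(record, 5, payload, 0, payload.length);
        return payload;
    }

    public static void main(String[] args) {
        byte[] record = {0, 0, 0, 0, 42, 7, 7}; // magic, id=42, 2-byte payload
        System.out.println(schemaId(record));
        System.out.println(avroPayload(record).length);
    }
}
```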
Just to expand on Lawrence's answer: The increase in file descriptor usage
goes from 2-3K under normal conditions, to 64K+ under deadlock, which it
hits within a couple of hours, at which point the broker goes down, because
that's our OS-defined limit.
If it was only a 33% increase from the new t
We saw this increase when upgrading from 0.9.0.1 to 0.10.0.1.
We’re now running on 0.10.1.0, and the FD increase is due to a deadlock, not
functionality or new features.
Lawrence Weikum | Software Engineer | Pandora
1426 Pearl Street, Suite 100, Boulder CO 80302
m 720.203.1578 | lwei...@pandora
Newbie here, I am working with Kafka Streams with Java 1.8.
I want to use the KTable as a lookup table in a join to a KStream. I had no
issue implementing this. However, I do not want the KTable to grow without
bounds; I want to limit the KTable to the past 2 weeks' data, more of a
'sliding'
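Windowed aggregations are the Streams-level way to bound a table by time; as a language-level illustration of the eviction idea itself, here is a standalone sketch that drops entries older than a retention period. The class name and the 2-week constant are assumptions for illustration, not Streams API:

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;

public class TimeBoundedTable {
    private final long retentionMs;
    private final Map<String, Long> values = new HashMap<>();
    private final Map<String, Long> timestamps = new HashMap<>();

    public TimeBoundedTable(Duration retention) {
        this.retentionMs = retention.toMillis();
    }

    public void put(String key, long value, long eventTimeMs) {
        values.put(key, value);
        timestamps.put(key, eventTimeMs);
    }

    // Drop every entry whose timestamp is older than (now - retention),
    // keeping the table bounded to the retention window.
    public void evict(long nowMs) {
        timestamps.entrySet().removeIf(e -> {
            if (nowMs - e.getValue() > retentionMs) {
                values.remove(e.getKey());
                return true;
            }
            return false;
        });
    }

    public Long get(String key) {
        return values.get(key);
    }
}
```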
The 0.10.1 broker will use more file descriptors than previous releases
because of the new timestamp indexes. You should expect and plan for ~33%
more file descriptors to be open.
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Thu
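To see how close a broker is to its descriptor budget, the open-descriptor count can be read from /proc on Linux. In this sketch, `$$` (the current shell) stands in for the broker's PID:

```shell
# Count open file descriptors for a process on Linux.
# Replace $$ with the broker's actual PID in practice.
fd_count=$(ls /proc/$$/fd | wc -l)
echo "open descriptors: $fd_count"

# Compare against the per-process limit set by the OS.
ulimit -n
```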
First a hint about "group.id". Please read this to make sense of this
parameter:
http://stackoverflow.com/documentation/apache-kafka/5449/consumer-groups-and-offset-management
It might also help to understand how to get the "last value" of a
top
We're running into a recurrent deadlock issue in both our production and
staging clusters, both using the latest 0.10.1 release. The symptom we
noticed was that, in servers in which kafka producer connections are short
lived, every other day or so, we'd see file descriptors being exhausted,
until
Hi list,
I need some input on best practices for writing Java Kafka (0.10.1.0)
consumers.
*The scenario:*
A Java distributed system sending/receiving messages, currently based on
Akka + RabbitMQ.
A reasonably low number of channels (~a dozen), mapped to Kafka topics;
however, it can potentially grow t
I've just noticed the parameter of the poll method. It's been explained as:
"The time, in milliseconds, spent waiting in poll if data is not available
in the buffer."
When I set it to a big number, "sometimes" I can see a result. When I set
it to 0 and push something to the topic that it listens s
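That parameter is a maximum wait, not a fixed delay: poll returns as soon as data is available, and only blocks for the full timeout when the buffer stays empty. `java.util.concurrent.BlockingQueue#poll(long, TimeUnit)` has the same contract, which this standalone sketch (not consumer code) uses to show the two cases:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PollTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<String> buffer = new LinkedBlockingQueue<>();

        // Case 1: data already buffered -> poll returns immediately,
        // well before the 5-second timeout.
        buffer.add("record-1");
        System.out.println(buffer.poll(5000, TimeUnit.MILLISECONDS));

        // Case 2: empty buffer -> poll waits up to the timeout and then
        // returns null (a Kafka consumer returns an empty batch instead).
        System.out.println(buffer.poll(100, TimeUnit.MILLISECONDS));
    }
}
```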
Not quite sure, will try to find out today.
Thanks,
Dayong
> On Nov 2, 2016, at 9:59 PM, "Tauzell, Dave"
> wrote:
>
> Is Kafka Connect adding some bytes to the beginning of the Avro with the
> schema registry id?
>
> Dave
>
>> On Nov 2, 2016, at 18:43, Will Du wrote:
>>
>> By using the ka
Hi Matthias,
Thanks for the response. I stream output as follows:
longCounts.toStream((wk, v) -> wk.key())
          .to(Serdes.String(), Serdes.Long(), "qps-aggregated");
I want to read the last value from that topic at another applicati
Hi,
some Flink users recently noticed that they can not check the consumer lag
when using Flink's kafka consumer [1]. According to this discussion on the
Kafka user list [2] the kafka-consumer-groups.sh utility doesn't work with
KafkaConsumers with manual partition assignment.
Is there a way to g
Hi
I need help in creating a Windows service to consume from a Kafka topic. The
service is written in C# and I am using
https://github.com/ah-/rdkafka-dotnet as the client.
I am able to successfully create a console application.
But I am not able to proceed with converting the same to a Windows ser