Hi all,
We found a deadlock that affects metrics reporters that use synchronization
(https://issues.apache.org/jira/browse/KAFKA-7136). Since it may introduce a
new issue in the bug-fix release, I will create another RC that includes
the fix.
Thank you all for testing and voting for this release! P
On the server side, we use INFO for everything.
A log4j setting can be a temporary hack, but we would like to keep INFO
logging as the default.
I think those two logging lines can simply be downgraded to DEBUG;
moving the start offset is not eventful enough to be logged at INFO.
On Fri, Jul 6, 2018
Hello Henry,
What are your server-side log4j settings? Could you use WARN on these two
classes: kafka.server.epoch.LeaderEpochFileCache and kafka.log.Log?
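For example, an override along these lines in the broker's config/log4j.properties
(assuming the stock properties-based log4j configuration) should quiet them:

log4j.logger.kafka.server.epoch.LeaderEpochFileCache=WARN
log4j.logger.kafka.log.Log=WARN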
Guozhang
On Fri, Jul 6, 2018 at 3:08 PM, Henry Cai
wrote:
> @guozhang
>
> After we moved to kafka-1.1.0 for our Kafka streams application,
@guozhang
After we moved to kafka-1.1.0 for our Kafka streams application, our broker
logs are polluted with log messages such as:
[2018-07-06 21:59:26,170] INFO Cleared earliest 0 entries from epoch cache
based on passed offset 301483601 leaving 1 in EpochFile for partition
inflight_spend_unified_st
We do not have in-memory window stores implemented yet:
https://issues.apache.org/jira/browse/KAFKA-4730
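In the meantime, a minimal sketch of building a persistent (RocksDB-backed)
windowed store with the Stores factory; the store name, serdes, and the
Duration-based overloads from newer Kafka Streams releases are assumptions
here, not something from this thread:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowStore;

public class PersistentWindowStoreSketch {
    public static void main(String[] args) {
        // Persistent window store; in-memory window stores are not available
        // yet (KAFKA-4730).
        StoreBuilder<WindowStore<String, Long>> builder =
            Stores.windowStoreBuilder(
                Stores.persistentWindowStore(
                    "counts-per-window",    // hypothetical store name
                    Duration.ofHours(1),    // retention period
                    Duration.ofMinutes(5),  // window size
                    false),                 // retainDuplicates
                Serdes.String(),
                Serdes.Long());
        System.out.println("Built store builder: " + builder.name());
    }
}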
On Wed, Jul 4, 2018 at 12:05 PM, Gleb Stsenov
wrote:
> Hello Guozhang,
> Thank you!
> One thing to clarify: so, only a persistent store can be windowed, right?
> Demo default in-memory key-valu
Yes, please create a JIRA reporting this: the `all()` and `fetchAll()`
source code has not been modified since it was first added to the
ReadOnlyWindowStore API, so it's likely a lurking bug that caused the issue.
Please attach your code / sample data if possible to help us reproduce this
issue in order to in
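For anyone following along, here is a rough sketch of how `all()` and
`fetchAll()` are typically used through interactive queries; the store name,
serdes, and the already-running KafkaStreams instance are assumptions for
illustration:

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyWindowStore;

public class WindowStoreQuerySketch {
    // 'streams' must already be in RUNNING state; "counts-per-window" is a
    // hypothetical store name.
    static void dumpLastHour(KafkaStreams streams) {
        ReadOnlyWindowStore<String, Long> store =
            streams.store("counts-per-window",
                          QueryableStoreTypes.<String, Long>windowStore());

        // all() iterates every window of every key; fetchAll() restricts the
        // iteration to a time range.
        long now = System.currentTimeMillis();
        try (KeyValueIterator<Windowed<String>, Long> iter =
                 store.fetchAll(now - 3_600_000L, now)) {
            while (iter.hasNext()) {
                KeyValue<Windowed<String>, Long> entry = iter.next();
                System.out.printf("key=%s window=[%d,%d) value=%d%n",
                    entry.key.key(), entry.key.window().start(),
                    entry.key.window().end(), entry.value);
            }
        }
    }
}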
Kafka provides total ordering only within individual partitions. A topic
with multiple partitions is a "partial order": multiple subsets of the topic
are well-ordered, but the topic as a whole is not. The tradeoff between
scalability via partitioning and message orderi
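To make the per-partition guarantee concrete, here is a small keyed-producer
sketch (the topic name, key, and broker address are placeholders): every
record that shares a key hashes to the same partition, so those records stay
ordered relative to each other even though the topic as a whole is not:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeyedOrderingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // All five records are keyed "device-42", so they land in the same
            // partition and are consumed in the order they were sent.
            for (int i = 0; i < 5; i++) {
                producer.send(new ProducerRecord<>("events", "device-42", "reading-" + i));
            }
        }
    }
}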
Hi Kafka Streams Users,
I have posted the same question on Stack Overflow; if anybody could point me
in the right direction, it would be of great help.
https://stackoverflow.com/questions/51214506/kafka-consumer-hung-on-certain-partitions-single-kafka-consumer
On Fri, Jul 6, 2018 at 10:25 PM, dev loper
Hello,
We are building a data pipeline with the following semantics. We need to
maintain order until the last unit of work is done in this pipeline. We
cannot have a single partition, since that loses our ability to scale. We
looked at using partitioning keys, but does that guarantee order in the
Hi Kafka Streams Users,
In my test environment, I have three Kafka brokers and my topic has 16
partitions; there are 16 IoT devices which are posting messages to
Kafka. I have a single system with one Kafka consumer which subscribes to
this topic. Each IoT device posts the message
Any other ideas here? Should I create a bug?
On Tue, Jul 3, 2018 at 1:21 PM, Christian Henry wrote:
> Nope, we're setting retainDuplicates to false.
>
> On Tue, Jul 3, 2018 at 6:55 AM, Damian Guy wrote:
>
>> Hi,
>>
>> When you create your window store do you have `retainDuplicates` set to
>> `t
Hello everyone,
We are running a 3-broker Kafka 0.10.0.1 cluster. We have a Java app which
spawns many consumer threads consuming from different topics. For every
topic we have specified a different consumer group. A lot of times I see that
whenever this application is restarted, a CG on one or two t
My administrator will not allow messages larger than 1MB to be stored in Kafka.
How can I limit the size of my messages to 1MB? If I have a message larger than
1MB, I want to truncate or throw away the message to avoid the
RecordTooLargeException. What is the max size of the headers? Where is t
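One rough producer-side guard (the topic name, serializers, and cutoff
constant are assumptions; the serialized key and any headers also count
toward the record size) would be to check the payload size before sending
and drop anything over the limit:

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SizeGuardSketch {
    // Leave some headroom under the 1 MB broker limit for key/headers/overhead.
    static final int MAX_VALUE_BYTES = 1_000_000;

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // The producer's own max.request.size defaults to 1048576 bytes.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1_048_576);

        String payload = args.length > 0 ? args[0] : "example payload";
        byte[] bytes = payload.getBytes(StandardCharsets.UTF_8);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            if (bytes.length <= MAX_VALUE_BYTES) {
                producer.send(new ProducerRecord<>("my-topic", payload)); // hypothetical topic
            } else {
                // Drop (or truncate) instead of letting the send fail with
                // RecordTooLargeException.
                System.err.println("Dropping over-sized record of " + bytes.length + " bytes");
            }
        }
    }
}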