Hi,
I again received this exception while running my streams app. I am using
Kafka 0.11.0.1. After restarting my app, the error went away.
I guess this might be due to a bad network. Any pointers? Is there any
config I can use to enable retries?
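For what it's worth, a minimal sketch of passing retry settings through to the
producer embedded in a streams app (app id, broker address, and values are
hypothetical; assumes the 0.11-era Java client):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    public class StreamsRetrySketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");   // hypothetical
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");  // hypothetical
            // The "producer." prefix routes these to the embedded producer, so
            // transient network errors are retried instead of thrown immediately.
            props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRIES_CONFIG), 10);
            props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRY_BACKOFF_MS_CONFIG), 1000);
            // Pass props to new KafkaStreams(...) as usual.
        }
    }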
Exception trace is attached.
Regards,
-Sameer.
Yes, Stevo. I guess the workaround is to choose your min.insync.replicas
wisely. Also, in the case of producers with acks=all, the producer would
eventually fail after sufficient retries and the streams app would stall
itself. But it should resume when the brokers are fixed.
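A minimal sketch of that pairing from the producer side (broker address, retry
count, and topic settings are hypothetical; min.insync.replicas itself is set
on the topic, not the producer):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class AcksAllSketch {
        public static void main(String[] args) {
            // Topic side (not set here): replication.factor=3 with
            // min.insync.replicas=2 lets one broker fail while acks=all
            // producers keep working.
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // hypothetical
            props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for all in-sync replicas
            props.put(ProducerConfig.RETRIES_CONFIG, 10); // NotEnoughReplicas is retriable
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            // send() fails with NotEnoughReplicasException (after retries) once
            // fewer than min.insync.replicas replicas remain in sync.
            producer.close();
        }
    }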
-Sameer.
On Tue, Sep 26, 2017 at
Hi,
I'm looking for information on where the stream transformations are
applied - the server (broker) or the client?
Would it be possible for clients to share the topology?
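For context, the transformations run inside the client application, not on the
brokers. A sketch (assuming Kafka Streams 1.0+; topic names hypothetical) that
also shows how a topology description can be printed and shared:

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.Topology;

    public class TopologySketch {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            // All of this logic executes in the client JVM; brokers only
            // store and serve the underlying topics.
            builder.<String, String>stream("input-topic")
                   .mapValues(v -> v.toUpperCase())
                   .to("output-topic");
            Topology topology = builder.build();
            // describe() yields a human-readable description of the
            // topology that can be shared or logged.
            System.out.println(topology.describe());
        }
    }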
--
Warm regards
Roshan
Hi,
Is it a good idea to run Kafka as Docker containers in production
deployments? Do you guys foresee any blockers with this approach?
Please advise.
-Anoop P
Hi,
we have an existing Kafka cluster (0.10) already setup and working in
production.
I would like to explore using Confluent's Elasticsearch Connector - however, I
see it comes as part of the Confluent distribution of Kafka (with separate
confluent scripts, libs, etc.).
Is there an easy way to
We see this even when we use the https://github.com/edenhill/librdkafka NuGet
package, version 0.11.0: https://www.nuget.org/packages/librdkafka.redist/
-Vignesh.
On Tue, Sep 26, 2017 at 11:03 AM, Vignesh wrote:
> I am just sending the request directly using my own client. The protocol API
> version I used is "1": https://kafka.apache.org/protocol#The_Messages_Offsets
I am just sending the request directly using my own client. The protocol API
version I used is "1": https://kafka.apache.org/protocol#The_Messages_Offsets
The broker version is 0.10.2.0. This broker version supports protocol version
1.
Where are the logs related to such errors stored? Also, is this error
Hi,
I've recently experienced a reset of consumer group offset on a cluster of
3 Kafka nodes (v0.11.0.0).
I use 3 high-level consumers built on librdkafka 0.9.4.
They first fetch the consumer group's assigned partition offsets just after
each rebalance and before consuming anything.
every offset related
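A sketch of that pattern with the Java consumer for comparison (the thread
uses librdkafka; group and topic names here are hypothetical):

    import java.util.Collection;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class CommittedOffsetsSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // hypothetical
            props.put("group.id", "my-group");              // hypothetical
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("my-topic"), new ConsumerRebalanceListener() {
                public void onPartitionsRevoked(Collection<TopicPartition> parts) { }
                public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                    // Read each assigned partition's committed offset before
                    // consuming anything, mirroring the behavior described above.
                    for (TopicPartition tp : parts) {
                        System.out.println(tp + " committed=" + consumer.committed(tp));
                    }
                }
            });
            // poll() loop would follow; the listener fires on each rebalance.
        }
    }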
On Tuesday, September 26, 2017 at 16:30 +0200, Bastien Durel wrote:
> Hello,
>
> I want to allow any user to consume messages from any host, but
> restrict publishing from only one host (and one user), so I think I
> need ACLs.
>
> I use the default authorizer:
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
If I'm not mistaken, the Kafka Streams exactly-once guarantee gives
transactional guarantees, as long as everything happens within a single Kafka
cluster.
I.e., logic based on Kafka Streams with exactly-once enabled can read from
that cluster's topics, process each message, and optionally write any processin
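A minimal sketch of enabling it (app id and broker address hypothetical;
requires brokers on 0.11+):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class ExactlyOnceSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "eos-app");          // hypothetical
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");  // hypothetical
            // Turns the read-process-write cycle into a transaction within
            // this single cluster; commits are atomic across input offsets,
            // state-store changelogs, and output topics.
            props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        }
    }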
Great that it's working. Yes, you need retries to avoid dropping messages
during broker restarts.
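A sketch of the producer side (values hypothetical); in-flight requests are
capped so ordering is preserved across retries:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class RestartSafeProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // hypothetical
            props.put(ProducerConfig.ACKS_CONFIG, "all");
            // Retry sends that fail while a broker is restarting,
            // instead of dropping the records.
            props.put(ProducerConfig.RETRIES_CONFIG, 100);
            // With retries > 0, limit in-flight requests to keep ordering.
            props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);
        }
    }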
Ismael
On Tue, Sep 26, 2017 at 3:33 PM, Yogesh Sangvikar <
yogesh.sangvi...@gmail.com> wrote:
> Hi Team,
>
> Thanks a lot for the suggestion Ismael.
>
> We have tried kafka cluster rolling upgrade by doin
Excellent, the DumpLogSegment tool did the trick!
Wes
On Thu, Sep 21, 2017 at 4:32 AM, Manikumar
wrote:
> you can try DumpLogSegments tools to verify messages from log files. This
> will give compression type for each message.
> https://cwiki.apache.org/confluence/display/KAFKA/System+Tools
Hi Team,
Thanks a lot for the suggestion Ismael.
We have tried a Kafka cluster rolling upgrade by doing the version changes
(CURRENT_KAFKA_VERSION - 0.10.0, CURRENT_MESSAGE_FORMAT_VERSION - 0.10.0,
and the respective upgraded version 0.10.2) in the upgraded Confluent package
3.2.2, and observed the in-sync
Hello,
I want to allow any user to consume messages from any host, but
restrict publishing from only one host (and one user), so I think I
need ACLs.
I use the default authorizer:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
I added the following ACLs to allow anyone to read fr
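For reference, the same ACLs can also be created programmatically; a sketch
assuming the 0.11 AdminClient ACL API (KIP-140), with hypothetical topic,
user, and host:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.common.acl.AccessControlEntry;
    import org.apache.kafka.common.acl.AclBinding;
    import org.apache.kafka.common.acl.AclOperation;
    import org.apache.kafka.common.acl.AclPermissionType;
    import org.apache.kafka.common.resource.Resource;
    import org.apache.kafka.common.resource.ResourceType;

    public class AclSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // hypothetical
            AdminClient admin = AdminClient.create(props);
            Resource topic = new Resource(ResourceType.TOPIC, "my-topic"); // hypothetical
            // Anyone may read, from any host...
            AclBinding read = new AclBinding(topic, new AccessControlEntry(
                    "User:*", "*", AclOperation.READ, AclPermissionType.ALLOW));
            // ...but only this user, from this host, may write.
            AclBinding write = new AclBinding(topic, new AccessControlEntry(
                    "User:writer", "10.0.0.5", AclOperation.WRITE, AclPermissionType.ALLOW));
            admin.createAcls(Arrays.asList(read, write)).all().get();
            admin.close();
        }
    }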
Consumers can fetch messages up to the high watermark, which is dependent
on the in sync replicas, but not directly dependent on
`min.insync.replicas` (e.g. if there are 3 in sync replicas, the high
watermark is the min of the log end offset of the 3 replicas, even if min
in sync replicas is 2).
I
By default Kafka does not allow dirty reads for clients, so while
`min.insync.replicas` is not satisfied, consumers don't see new messages.
On 26 September 2017 at 11:09, Sameer Kumar wrote:
> Thanks Stevo for pointing me out to correct link.
> In this case, how would the exactly-once feature of streams behave
It looks like only one of the restoring tasks ever transitions to running,
but it is impossible to tell why from the logs. My guess is there is a bug
in there somewhere.
Interestingly I only see this log line once:
"2017-09-22 14:08:09 DEBUG StoreChangelogReader:152 - stream-thread
[argyle-streams
Hi Stevo,
We are aware. There have been regressions in recent versions which have
prevented us from upgrading. See:
https://github.com/apache/kafka/pull/3519#issuecomment-327992362
https://github.com/apache/kafka/pull/3819
Ismael
On Tue, Sep 26, 2017 at 8:49 AM, Stevo Slavić wrote:
> There is a legal problem with older RocksDB versions, see
Created https://issues.apache.org/jira/browse/KAFKA-5977 for this issue.
On Tue, Sep 26, 2017 at 9:49 AM, Stevo Slavić wrote:
> There is a legal problem with older RocksDB versions, see
> https://issues.apache.org/jira/browse/LEGAL-303?focusedCommentId=16109870&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16109870
Here is my broker configuration:
# Server Basics #
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
host.name=
port=9092
# The maximum size of message that the server can receive
message.max.bytes=
Hello,
I always get this error when I send a large amount of data to the topic:
"org.apache.kafka.common.errors.TimeoutException: Expiring 35 record(s) for
words-4 due to 30006 ms has passed since batch creation plus linger time"
If anybody faced that problem before, could you please share your
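Not a definitive fix, but these are the settings usually involved in that
expiry (values hypothetical): the batch waited in the send buffer longer than
request.timeout.ms plus linger time, so a higher timeout, retries, or smaller
batches tend to help.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class BatchExpirySketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // hypothetical
            // Give batches longer before they expire in the send buffer.
            props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 60000);
            // Retry rather than expire on transient broker slowness.
            props.put(ProducerConfig.RETRIES_CONFIG, 5);
            // Smaller batches and a short linger drain the buffer faster.
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
            props.put(ProducerConfig.LINGER_MS_CONFIG, 5);
        }
    }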
error trace is attached in the initial mail.
On Tue, Sep 26, 2017 at 1:42 PM, Sameer Kumar
wrote:
> error trace is attached.
>
> On Tue, Sep 26, 2017 at 1:41 PM, Sameer Kumar
> wrote:
>
>> I received this error, and was wondering what would cause it. I received
>> this only once; it got fixed for the next run.
error trace is attached.
On Tue, Sep 26, 2017 at 1:41 PM, Sameer Kumar
wrote:
> I received this error, and was wondering what would cause it. I received
> this only once; it got fixed for the next run.
>
> For every run, I change my state store though.
>
> -Sameer.
>
I received this error, and was wondering what would cause it. I received
it only once; it got fixed for the next run.
For every run, I change my state store though.
-Sameer.
2017-09-26 13:27:42 INFO ClassPathXmlApplicationContext:513 - Refreshing
org.springframework.context.support.ClassPathXmlAppl
Thanks Stevo for pointing me to the correct link.
In this case, how would the exactly-once feature of streams behave, since
they configure producers with acks=all? I think they would fail and would
need to be resumed once the broker comes back.
-Sameer.
On Tue, Sep 26, 2017 at 1:09 PM, Stevo Sla
There is a legal problem with older RocksDB versions, see
https://issues.apache.org/jira/browse/LEGAL-303?focusedCommentId=16109870&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16109870
Dependencies with the Facebook BSD+Patents license are not allowed to be
included in Apache projects.
Hello Sameer,
Behavior depends on the min.insync.replicas configured for the topic.
Find more info in the documentation:
https://kafka.apache.org/documentation/#topicconfigs
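A sketch of setting it programmatically, assuming the 0.11 AdminClient (topic
name and value hypothetical; note alterConfigs replaces the topic's existing
config overrides):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class MinIsrSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // hypothetical
            AdminClient admin = AdminClient.create(props);
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic"); // hypothetical
            // With replication.factor=3 and min.insync.replicas=2, one broker
            // can be down while acks=all producers keep working.
            Config config = new Config(Collections.singleton(new ConfigEntry("min.insync.replicas", "2")));
            admin.alterConfigs(Collections.singletonMap(topic, config)).all().get();
            admin.close();
        }
    }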
Kind regards,
Stevo Slavic.
On Tue, Sep 26, 2017 at 9:01 AM, Sameer Kumar
wrote:
> In case one of the brokers fails, it would get removed from the
> respective ISR lists of those partitions.
In case one of the brokers fails, it would get removed from the
respective ISR lists of those partitions.
If the producer has acks=all, how would it behave? Would the producers be
throttled and wait till the broker gets back up?
-Sameer.