Hello, recently we upgraded Kafka from 2.8.0 to 3.7.0 and noticed a 5x
increase in the value of the kafka.request.produce.time.avg Datadog metric.
There was also a considerable increase in
the kafka.request.metadata.time.avg value. It is worth noting the
99th percentile metrics are comparable to before the
could be having multiple MirrorMaker instances - one to mirror
from A and one to mirror from B - that could then be controlled by another
process depending on the availability of A or B.
Has anyone else had to handle these types of scenarios?
Thanks,
Mark
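(For illustration only: a rough sketch of the kind of supervisor process described above, assuming hypothetical bootstrap addresses, config file names, and the stock MirrorMaker launch script. It health-checks cluster A, falls back to B, and restarts MirrorMaker when the active source changes.)

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

// Sketch: prefer mirroring from cluster A; fall back to B when A is down.
public class MirrorFailover {
    static boolean isReachable(String bootstrap) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");
        try (AdminClient admin = AdminClient.create(props)) {
            admin.describeCluster().nodes().get(5, TimeUnit.SECONDS);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        Process mirror = null;
        String active = null;
        while (true) {
            // Hypothetical bootstrap addresses for clusters A and B.
            String target = isReachable("a:9092") ? "a:9092"
                    : isReachable("b:9092") ? "b:9092" : null;
            if (target != null && !target.equals(active)) {
                if (mirror != null) mirror.destroy();
                // One consumer config per source cluster (hypothetical file names).
                mirror = new ProcessBuilder("bin/kafka-mirror-maker.sh",
                        "--consumer.config", target + ".consumer.properties",
                        "--producer.config", "target.producer.properties",
                        "--whitelist", ".*").inheritIO().start();
                active = target;
            }
            Thread.sleep(10_000);
        }
    }
}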
What is the status of support for Java 17 in Kafka for both brokers and
clients?
The docs for Kafka 3.0.0 state that Java 8 and Java 11 are supported.
Thanks,
Mark
y server certificate
chain
I thought combining/chaining the intermediate cert would fix it, but nothing.
Best regards,
John Mark Causing
(does
not support ssl.clientAuth).
Any ideas on how I can connect to my Kafka server using the -cert and -key options?
Best regards,
John Mark Causing
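(For illustration: a minimal sketch of the equivalent mutual-TLS setup in the Java client, assuming the PEM cert/key have been imported into keystores; all paths, passwords, and the broker address are hypothetical. The keystore plays the role of openssl's -cert and -key, and the truststore must hold the CA chain that signed the broker certificate.)

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SslConfigs;

public class SslClientCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "broker:9093");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        // Truststore: CA plus any intermediates of the broker's chain.
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/etc/kafka/client.truststore.jks");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");
        // Keystore: the client certificate and private key.
        props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/etc/kafka/client.keystore.jks");
        props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "changeit");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ssl-test");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Listing topics is enough to exercise the TLS handshake.
            System.out.println(consumer.listTopics().keySet());
        }
    }
}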
erties:
ssl.clientAuth=want and I no longer see any SSL errors.
Any tips/suggestions on how to fix this SSL error without upgrading? (I
don't want to upgrade at the moment, to avoid conflicts with Kafka
Cruise Control and other components.)
Thanks in advance!
Best regards,
John Mark Causing
Thank you in advance!
Best regards,
John Mark Causing
08)
at
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:447)
... 17 more
Best regards,
John Mark Causing
have a bunch of consumer groups, one per topic,
> so they can rebalance independently.
>
> On Wed, Feb 26, 2020, 1:05 AM Mark Zang wrote:
>
> > Hi,
> >
> > I have a 20-broker Kafka cluster and there are about 50 topics to
> consume.
> >
> > Between creatin
0.10.2
Thanks!
Mark
Please feel free to ask more.
I'm looking for confirmation of the above or corrections; both are welcome.
Thanks,
Mark
Jackson was updated to 2.10 in the latest Kafka release. The method
mentioned no longer exists in 2.10.
Do you have multiple versions of Jackson on the classpath?
On Thu, 12 Dec 2019, 11:09 Charles Bueche, wrote:
> Hello again,
>
> spending hours debugging this and having no clue...
>
> * Kaf
le to import 2.3.1-rc2 in spring boot?
>
> Thanks!
>
> On Thu, Oct 24, 2019 at 4:21 PM Mark Anderson
> wrote:
> >
> > Are you using Spring Boot?
> >
> > I know that the recent Spring Boot 2.2.0 release specifically updates
> their
> > Kafka depe
Are you using Spring Boot?
I know that the recent Spring Boot 2.2.0 release specifically updates their
Kafka dependency to 2.3.0. The previous version used Kafka 2.1.x, though I've
used 2.2.x with it.
Maybe running mvn dependency:tree would help see if there are multiple
Kafka versions that could conf
The first thing I would do is update to the latest Java 8 release. Just in
case you are hitting any G1GC bugs in such an old version.
Mark
On Thu, 22 Aug 2019, 07:17 Xiaobing Bu, wrote:
> it is not a network issue, since I had captured the network packets.
> when the GC remark and unloading
you could do something similar?
Mark
On Thu, 13 Jun 2019, 17:49 Murphy, Gerard, wrote:
> Hi,
>
> I am wondering if there is something I am missing about my set up to
> facilitate long running jobs.
>
> For my purposes it is ok to have `At most once` message delivery, this
Kafka has its own version of the zookeeper client libraries that are still
3.4.13.
I'd be interested to know if it is compatible with 3.5.x now that it has a
stable release.
Mark
On Wed, 5 Jun 2019, 21:27 Sebastian Schmitz, <
sebastian.schm...@propellerhead.co.nz> wrote:
> Hi,
>
Further investigation has uncovered a defect when resolving a hostname
fails - https://issues.apache.org/jira/browse/KAFKA-8182
Looks like it has been present since support for resolving all DNS IPs was
added.
On Mon, 1 Apr 2019 at 15:55, Mark Anderson wrote:
> Hi list,
>
> I'
Hello,
I would like to know: how many open windows can Kafka Streams hold?
Sincerely, Mark
ception being thrown
at line 926. Can anyone shed any light?
Thanks,
Mark
need to replace it with a new consumer instance?
Thanks
Mark
ccur? On a fixed schedule? The next time
Producer.send is called for the same topic and partition? Or does something
else trigger it?
Thanks,
Mark
e any cases where it would be OK to re-use the KafkaConsumer after
poll() throws an exception?
Thanks,
Mark
I'm sure I initially made this assumption when trying to read all records
from a compacted topic on application startup and it was incorrect.
Due to latency, threading, GC pauses, etc., it would return 0 when there were
still records on the topic.
Mark
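(For illustration: a minimal sketch of the approach that avoids that trap, assuming a hypothetical topic name and string serdes. Capture the end offsets first, then poll until every partition has reached them, instead of treating the first empty poll() as "caught up".)

import java.time.Duration;
import java.util.*;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class CompactedTopicLoader {
    // Reads a compacted topic from the beginning up to the end offsets
    // captured at startup; an empty poll() on the way is not a stop signal.
    public static Map<String, String> loadAll(KafkaConsumer<String, String> consumer,
                                              String topic) {
        List<TopicPartition> parts = new ArrayList<>();
        for (PartitionInfo p : consumer.partitionsFor(topic)) {
            parts.add(new TopicPartition(topic, p.partition()));
        }
        consumer.assign(parts);
        consumer.seekToBeginning(parts);
        Map<TopicPartition, Long> end = consumer.endOffsets(parts);
        Map<String, String> state = new HashMap<>();
        while (parts.stream().anyMatch(tp -> consumer.position(tp) < end.get(tp))) {
            for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(500))) {
                state.put(rec.key(), rec.value()); // last value per key wins
            }
        }
        return state;
    }
}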
On Mon, 4 Feb 2019, 18:02 Pere Urbón Baye
e was a network
problem that caused it to time out before it could return that record?
Mark
t for?
Thanks,
Mark
nyone have any experience of running Kafka with this flag and seeing
a reduction in pauses?
Thanks,
Mark
mpact performance or stability?
Thanks,
Mark
he 'readTimeout' is defined as:
>
> readTimeout = sessionTimeout * 2 / 3;
>
> Thus, the 'actual' sessionTimeout is 1333ms while
> config:zookeeper.session.timeout=2000ms
>
>
> >-----Original Message-----
> >From: Mark Anderson [mailto:manderso...@gm
tCnxn)
However, my zookeeper session timeout is configured as 2000ms.
Why does the log file show a session timeout value less than what is
configured?
Thanks,
Mark
is it possible this will cause messages to be re-ordered
within the transaction?
Mark
twork threads (to try and work through the queued requests faster)?
Thanks,
Mark
On Thu, 6 Dec 2018 at 23:43 Mayuresh Gharat
wrote:
> Hi Mark,
>
> The leader election of a new topic partition happens once the controller
> detects that the Leader has crashed.
> This happens asynchrono
at the key parameters are to tune to make this possible.
Does anyone have any pointers? Or are there any guides online?
Thanks,
Mark
che/kafka/pull/6005/files
>
>
> Guozhang
>
> On Wed, Dec 5, 2018 at 6:54 AM Mark Anderson
> wrote:
>
> > Hi,
> >
> > I'm periodically seeing ConcurrentModificationExceptions in the producer
> > when records are expired e.g.
> >
> > E
't seen
this issue.
Please let me know if there is any more information I can supply.
Thanks,
Mark
nt to the new
leader.
Could you please confirm my understanding is correct?
Thanks,
Mark
d and 2.1.x is imminent.
Thanks,
Mark
Have you reviewed
https://www.confluent.io/blog/getting-started-apache-kafka-kubernetes/ as a
starting point?
On Mon, 22 Oct 2018, 18:07 M. Manna, wrote:
> Thanks a lot for your prompt answer. This is what I was expecting.
>
> So, if we had three pods where volumes are mapped as the following
>
Also, in this case, will it fall back to a full request?
Hence no data is lost, but it might increase latency?
Thanks
Mark
On Thu, 26 Jul 2018, 12:28 Mark Anderson, wrote:
> Ted,
>
> Below are examples of the DEBUG entries from FetchSession
>
> [2018-07-26 11:14:43,461] DEBUG Crea
there are always out by one significant?
Thanks,
Mark
On Wed, 13 Jun 2018 at 17:46 Ted Yu wrote:
> You would need this (plus any appender you want the log to go to):
>
> log4j.logger.kafka.server=DEBUG
>
> FYI
>
> On Wed, Jun 13, 2018 at 9:15 AM, Ted Yu wrote:
>
>>
n.id}: expected " +
>
> s"epoch ${session.epoch}, but got epoch ${reqMetadata.epoch()}.")
>
> new SessionErrorContext(Errors.INVALID_FETCH_SESSION_EPOCH,
> reqMetadata)
>
> Can you pastebin the log line preceding what you pas
ith respect
to receiving records but I would like to understand
1. Why is the message being logged?
2. Do I need to do anything?
3. Can anything be done to stop it being logged?
Thanks,
Mark
of our 'data loss', which isn't actually loss but a bad
interaction of failover and catching a stale HWM, leading to errors being
thrown by the broker when it maybe doesn't need to.
Thoughts?
--
Mark Smith
m...@qq.is
On Wed, Jan 18, 2017, at 02:11 PM, Jun Rao wrote:
>
this is being referred
to, sort of, in Scenario 1; however, that scenario mentions broker
failure -- and my concern is that data loss is possible even in the
normal scenario with no broker failures.
Any thoughts?
--
Mark Smith
m...@qq.is
makes sense and means my understanding was wrong and this wasn't an
issue. Thanks for helping clear that up.
This means there is still an unresolved issue, unfortunately. I can
replicate the conditions that led to it and see if I can reproduce the
problem. If so, I'll update this thread
this case, I still don't think any discussion about multiple
failovers is germane to the problem we saw. Each of our partitions only
had a single failover, and yet 4 of them still truncated committed data.
--
Mark Smith
m...@qq.is
On Mon, Nov 21, 2016, at 05:12 PM, Jun Rao wrote:
rg/jira/browse/KAFKA-1211
* I've read through this but I'm not entirely sure if it addresses the
above. I don't think it does, though. I don't see a step in the ticket
about become-leader making a call to the old leader to get the latest
generation snapshot?
--
Mark Smith
m...@
Correct, we've disabled unclean leader election. There were also no log
messages from an unclean election. I believe that Kafka thinks it
performed a clean election and still lost data.
--
Mark Smith
m...@qq.is
On Thu, Nov 17, 2016, at 06:23 PM, Tauzell, Dave wrote:
> Do
old, can anybody explain what happened? I'm
happy to provide more logs or whatever.
Thanks!
--
Mark Smith
m...@qq.is
Thanks, Gwen.
I was testing RC2 with Spark Streaming and found an issue that has a minor
fix. I think this issue exists with RC3 as well:
https://issues.apache.org/jira/browse/KAFKA-3669
On Thu, May 5, 2016 at 1:46 PM, Gwen Shapira wrote:
> Hello Kafka users, developers and client-developers,
>
ast enough to allow
you to quickly skip through all the stuff you don't need...
On Mon, Mar 28, 2016 at 10:53 AM, Mark van Leeuwen wrote:
Hi all,
When using Kafka for event sourcing in a CQRS style app, what approach do
you recommend for mapping DDD aggregates to topic partitions?
Assign
oughput. And feel free to
replace "publisherID" with "TargetConsumerID" or "OrderedEventBusID".
On 3/29/16, 7:38 AM, "Mark van Leeuwen" wrote:
Thanks for sharing your experience.
I'm surprised no one else has responded. Maybe there are few people
using Kafka for event sourc
ition granularity issue; another is the
lack of a way to guarantee exclusive write access, i.e. ensure that only a
single process can commit an event for an aggregate at any one time.
On Mon, 28 Mar 2016 at 16:54 Mark van Leeuwen wrote:
Hi all,
When using Kafka for event sourcing in a CQRS style app,
n the cluster
to low tens of thousands".
One partition per aggregate will far exceed that number.
Thanks,
Mark
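(For illustration: the usual alternative is to key each event by its aggregate ID and keep a modest, fixed partition count; the default partitioner hashes the key, so all events for one aggregate land in the same partition, in order. A minimal sketch, with a hypothetical topic name:)

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventPublisher {
    private final KafkaProducer<String, String> producer;

    public EventPublisher(Properties props) {
        this.producer = new KafkaProducer<>(props);
    }

    // Keying by aggregate ID preserves per-aggregate ordering without
    // needing a partition per aggregate.
    public void publish(String aggregateId, String eventJson) {
        producer.send(new ProducerRecord<>("domain-events", aggregateId, eventJson));
    }
}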
om/about/contact.html>
On Mon, Mar 21, 2016 at 6:25 PM, Ben Stopford wrote:
It sounds like a fairly typical pub-sub use case where you’d likely be
choosing Kafka because of its scalable data retention and built in fault
tolerance. As such it’s a reasonable choice.
On 21 Mar 2016, at 17
de by all current clients, preferably without polling. That's why
keeping track of offsets with each client seemed the way to go.
Not sure how stream processing engines help with that - but happy to be
advised otherwise.
Cheers.
On 22/03/16 02:35, Sandesh Hegde wrote:
Hello Mark,
Have
ebase. Had a quick look. Synchronization with
multiple clients looks good, but there are other requirements we have, such as
preserving full change history, which still make me think Kafka could be
the best fit.
On 22/03/16 01:55, Gerard Klijs wrote:
Hi Mark,
I don't think it would be a goo
one
user would be published to multiple consumer clients over a websocket,
each having their own offset.
Would this be viable?
Are there any considerations I should be aware of?
Thanks,
Mark
Kafka
rather than by the numerous proprietary systems that do this.
Mark
ere. I'll also
close the github issue with a link to this thread.
Thanks again,
Mark.
On Wed, Dec 16, 2015 at 9:51 PM Ewen Cheslack-Postava
wrote:
> Mark,
>
> There are definitely limitations to using JDBC for change data capture.
> Using a database-specific implementa
ql would handle this
nicely as it would get the changes once they're committed.
Thanks for any insight,
Mark.
Original github issue:
https://github.com/confluentinc/kafka-connect-jdbc/issues/27
call about the best approach to take.
Thanks,
Mark.
I usually approach these questions by looking at possible consumers. You
> usually want each consumer to read from relatively few topics, use most
> of the messages it receives and have fairly cohesive logic for using these
> messages
ences from the
members of this group. What is the general best practice for reusing
topics or creating new ones? What has worked well in the past? What
should we be considering while making this decision?
Thanks in advance!
Mark.
bs/kafka_2.10-0.8.1.2.2.0.0-2041-scaladoc.jar
I will also raise this under
http://hortonworks.com/community/forums/forum/kafka/
Regards
Mark
-----Original Message-----
From: Harsha [mailto:ka...@harsha.io]
Sent: 04 October 2015 16:42
To: users@kafka.apache.org
Subject: Re: Kafka Broker proces
.
I have done the usual Google searching, and followed many threads over the past
few days, but have drawn a blank.
Any thoughts where I should start to look?
Regards
Mark
---
Mark Whalley
Principal Consultant
Actian | Services
Accelerating Big Data 2.0
O +44 01
relationship."
http://www.quora.com/What-is-the-relation-between-Kafka-the-writer-and-Apache-Kafka-the-distributed-messaging-system
Regards,
Mark
On 20 May 2015 at 15:32, András Serény wrote:
> Hi All,
>
> I wonder, how the messaging system Kafka has got its name? I believe it
> w
I found that 0.8.2.0 and 0.8.2.1 have a KafkaConsumer, but this class seems
incomplete and not functional. Lots of methods return null or throw
NSM. Which version of the consumer should be used with a Kafka 0.8.2 broker?
Thanks!
--
Best regards!
Mike Zang
/kafka/0.8.2.1/RELEASE_NOTES.html
Mark
On 13 March 2015 at 15:10, Marc Labbe wrote:
> Hi,
>
> our cluster is deployed on AWS, we have brokers on r3.large instances, a
> decent amount of topics+partitions (+600 partitions). We're not making that
> many requests/sec, roughly 80
between each
cluster?
* How do we prevent a message published at one cluster being replicated
to another cluster and into an infinite loop (assuming we use a MirrorMaker
whitelist like Global.* at each cluster)?
Regards,
Mark Flores
Project Manager, Enterprise Technology
Direct 206-576
Hi,
I would like to subscribe to the Kafka mailing list for general questions.
Please let me know what I need to do in order to submit questions to the Kafka
general mailing list. Thanks.
Regards,
Mark Flores
Project Manager, Enterprise Technology
Direct 206-576-2675
Email mark.flo
Wouldn't it be a better choice to store the logs offline somewhere? HDFS and S3
are both good choices...
-Mark
> On Feb 27, 2015, at 16:12, Warren Kiser wrote:
>
> Does anyone know how to achieve unlimited log retention either globally or
> on a per topic basis? I tried e
Hi Saravana,
Since 0.8.1 Kafka uses Gradle; previous to that, SBT was used. Here is the
JIRA which drove the change:
https://issues.apache.org/jira/browse/KAFKA-1171
The official docs for the latest release (0.8.2) can be found here:
http://kafka.apache.org/documentation.html
Regards,
Mark
>
> I don't think there are any issues.
>
+1, I've been running Kafka with Java 7 for quite some time now and haven't
experienced any issues.
Regards,
Mark
On 2 February 2015 at 19:09, Otis Gospodnetic
wrote:
> I don't think there are any issues. We use 0.8.1
g [queue] rethought as a
distributed commit log.
-Mark
On Tue, Jan 6, 2015 at 3:14 PM, Joseph Pachod
wrote:
> Hi
>
> Having read a lot about kafka and its use at linkedin, I'm still unsure
> whether Kafka can be used, with some mindset change for sure, as a general
> purpose data s
ery JMX
for the server version of every server in the cluster? Is there a reason
not to include this in the API itself?
-Mark
On Wed, Nov 12, 2014 at 9:50 AM, Joel Koshy wrote:
> +1 on the JMX + gradle properties. Is there any (seamless) way of
> including the exact git hash? That would b
Just to be clear: this is going to be exposed via some API the clients can call
at startup?
> On Nov 12, 2014, at 08:59, Guozhang Wang wrote:
>
> Sounds great, +1 on this.
>
>> On Tue, Nov 11, 2014 at 1:36 PM, Gwen Shapira wrote:
>>
>> So it looks like we can use Gradle to add properties to
I think it will depend on how your producer application logs things, but
yes, I have historically seen exceptions in the producer logs when they
exceed the max message size.
-Mark
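(For illustration: on today's Java producer, the oversize failure surfaces through the send() callback as a RecordTooLargeException, so it only shows up in the application's logs if the callback logs it. A minimal sketch, with hypothetical names:)

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RecordTooLargeException;

public class OversizeAwareSender {
    public static void send(KafkaProducer<String, String> producer,
                            String topic, String value) {
        producer.send(new ProducerRecord<>(topic, value), (metadata, exception) -> {
            if (exception instanceof RecordTooLargeException) {
                // This is the "exceeds max message size" case.
                System.err.println("Record too large: " + exception.getMessage());
            } else if (exception != null) {
                System.err.println("Send failed: " + exception);
            }
        });
    }
}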
On Mon, Oct 27, 2014 at 10:19 AM, Chen Wang
wrote:
> Hello folks,
> I recently noticed our message amount in
a: Yes, absolutely.
-Mark
On Thu, Oct 23, 2014 at 3:08 AM, Po Cheung
wrote:
> Hello,
>
> We are planning to set up a data pipeline and send periodic, incremental
> updates from Teradata to Hadoop via Kafka. For a large DW table with
> hundreds of GB of data, is it okay (in terms o
Did this mailing list ever get created? Was there consensus that it did or
didn't need to be created?
-Mark
> On Jul 18, 2014, at 14:34, Jay Kreps wrote:
>
> A question was asked in another thread about what was an effective way
> to contribute to the Kafka project for people
te, I have lots of questions about a formalized "certified client"
process. I'm not against the idea (in fact quite the opposite), but I'm
concerned that non-Java clients will be constrained purely to the currently
existing Java API in the name of client uniformity and standard
e new metadata API.
-Mark
> On Jun 18, 2014, at 4:06, "Shlomi Hazan" wrote:
>
> Hi,
>
> Doing some evaluation testing, I accidentally created a queue with the wrong
> replication factor.
>
> Trying to delete as in:
>
> kafka_2.10-0.8.1.1/bin/kafka-topics.sh
There is Bifrost, which archives Kafka data to S3:
https://github.com/uswitch/bifrost
Obviously that's a fairly specific archive solution, but it might work for
you.
Mark.
On Mon, Jun 16, 2014 at 11:02 AM, Anatoly Deyneka
wrote:
> Hi all,
>
> I'm looking for the way of ar
You would ship the contents of the file across as a message. In general this
would mean that your maximum file size must be smaller than your maximum
message size. It would generally be a better choice to store the file in some
shared location and put a pointer to it on the queue.
-Mark
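(For illustration: a minimal sketch of the pointer approach, assuming a hypothetical shared mount and topic name. Only the location travels through Kafka, so the message stays far below the maximum message size regardless of file size.)

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FilePointerProducer {
    public static void publish(KafkaProducer<String, String> producer,
                               Path localFile) throws Exception {
        // Copy to a location every consumer can reach (NFS, HDFS, S3, ...).
        Path shared = Paths.get("/mnt/shared", localFile.getFileName().toString());
        Files.copy(localFile, shared);
        // The message is just the pointer.
        producer.send(new ProducerRecord<>("file-events", shared.toString())).get();
    }
}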
> On Jun 15, 2
some work in this area:
I anticipate releasing the client as open source when complete, and would be
interested in cooperating if anyone is interested.
Regards
Mark Farnan.
CTO - Petrolink International.
On Sep 14, 2013, at 2:18 AM, Richard Park wrote:
> We are currently trying out A
igs here - http://kafka.apache.org/07/configuration.html
>
> Thanks,
> Neha
>
>
> On Wed, Oct 9, 2013 at 10:07 AM, Mark wrote:
>
>> This is in regards to consumer group consumption in 0.7.2.
>>
>> Say we have 3 machines with 3 partitions in each topi
This is in regards to consumer group consumption in 0.7.2.
Say we have 3 machines with 3 partitions in each topic totaling 9 partitions.
Now if I create a consumer group with 9 threads on the same machine then all
partitions will be read from. Now what happens if I start another 9 threads on
a
are using A10's
On Sep 26, 2013, at 6:41 PM, Nicolas Berthet wrote:
> Hi Mark,
>
> I'm using centos 6.2. My file limit is something like 500k, the value is
> arbitrary.
>
> One of the things I changed so far is the TCP keepalive parameters; it had
eptember 26, 2013 12:39 PM
> To: users@kafka.apache.org
> Subject: Re: Too many open files
>
> If a client is gone, the broker should automatically close those broken
> sockets. Are you using a hardware load balancer?
>
> Thanks,
>
> Jun
>
>
> On Wed, Sep
>> Sent: Thursday, September 26, 2013 12:39 PM
>> To: users@kafka.apache.org
>> Subject: Re: Too many open files
>>
>> If a client is gone, the broker should automatically close those broken
>> sockets. Are you using a hardware load balancer?
>>
FYI, if I kill all producers, I don't see the number of open files drop. I still
see all the ESTABLISHED connections.
Is there a broker setting to automatically kill any inactive TCP connections?
On Sep 25, 2013, at 4:30 PM, Mark wrote:
> Any other ideas?
>
> On Sep 25, 2013, a
't close the old
> ones.
>
> Thanks,
>
> Jun
>
>
> On Wed, Sep 25, 2013 at 6:08 AM, Mark wrote:
>
>> No. We are using the kafka-rb ruby gem producer.
>> https://github.com/acrosa/kafka-rb
>>
>> Now that you asked that question I need to ask. Is
er client?
>
> Thanks,
>
> Jun
>
>
>> On Tue, Sep 24, 2013 at 5:33 PM, Mark wrote:
>>
>> Our 0.7.2 Kafka cluster keeps crashing with:
>>
>> 2013-09-24 17:21:47,513 - [kafka-acceptor:Acceptor@153] - Error in
>> acceptor
>>java.i
Our 0.7.2 Kafka cluster keeps crashing with:
2013-09-24 17:21:47,513 - [kafka-acceptor:Acceptor@153] - Error in acceptor
java.io.IOException: Too many open
The obvious fix is to bump up the number of open files, but I'm wondering if
there is a leak on the Kafka side and/or in our applicati
What is the quickest and easiest way to write messages from Kafka into HDFS?
I've come across Camus, but before we go the whole route of writing Avro
messages, we want to test plain old vanilla messages.
Thanks
I needed to add the hostname to get it working:
-Djava.rmi.server.hostname=${HOSTNAME}
On Aug 29, 2013, at 4:47 PM, Mark wrote:
> Strange... looks like that works. Not sure if it's because I am using that locally
> whereas JConsole and VisualVM are remote
>
> On Aug 29, 2013
org/jmxterm/tutorial
>
>
> On Thu, Aug 29, 2013 at 7:02 PM, Mark wrote:
>
>> I should note this is Kafka 0.7
>>
>> On Aug 29, 2013, at 3:59 PM, Mark wrote:
>>
>>> I tried changing the ports and still no luck. Does it work with JConsole
>> an
I should note this is Kafka 0.7
On Aug 29, 2013, at 3:59 PM, Mark wrote:
> I tried changing the ports and still no luck. Does it work with JConsole
> and/or do I need anything in my class path?
>
>
> On Aug 29, 2013, at 3:44 PM, Surendranauth Hiraman
> wrote:
>
>
line to see the jmx port
> usually.
>
> -Suren
>
>
>
> On Thu, Aug 29, 2013 at 6:41 PM, Mark wrote:
>
>> Can you view Kafka metrics via JConsole? I've tried connecting to port
>> with no such luck?
>
>
>
>
> --
>
Can you view Kafka metrics via JConsole? I've tried connecting to the port
with no luck.
We are thinking about using Kafka to collect events from our Rails application
and I was hoping to get some input from the Kafka community.
Currently the only gems available are:
https://github.com/acrosa/kafka-rb
https://github.com/bpot/poseidon (Can't use since we are only running 1.8.7)
Now