Congrats!!! Thanks David for the release and big thanks to all
contributors!!
Best regards
Dave Canton
On Wed, 19 Mar 2025 at 09:14, andreasvdber...@avathar.be <
andreasvdber...@avathar.be> wrote:
> Well done! I’m looking forward to learning what’s changed.
>
> On 19 Mar 2
The docs say: “Each task is assigned to a thread. Each task is capable of
handling multiple Kafka partitions, but a single partition must be handled by
only one task.” From what I understand, additional tasks would sit idle.
From: Yeikel Santana
Date: Thursday, May 30, 2024 at 7:43 AM
To:
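To make that concrete, here is a minimal standalone-mode sink config sketch;
the connector name, topic, and output file are made up, and tasks.max is the
setting that caps the task count:

  name=example-sink
  connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
  topics=events
  tasks.max=8
  file=/tmp/events.out

If the "events" topic has six partitions, at most six of the eight tasks get
partitions assigned; the remaining two sit idle.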
Consider purchasing support from Confluent to get this sort of request answered
quickly.
From: Sahil Sharma D
Date: Tuesday, May 9, 2023 at 12:40 PM
To: users@kafka.apache.org
Subject: [EXTERNAL] RE: CVEs related to Kafka
Gentle reminder-2!
-Original Message-
From: Sahil Sharma D
Se
) then you need a
more traditional database or single owner.
-Dave
From: pod...@gmx.com
Date: Monday, October 3, 2022 at 6:15 AM
To: users@kafka.apache.org
Subject: [EXTERNAL] Streaming processing in real life scenario
Hello Guys,
I’m trying to understand stream processing in a real-life scenario
looked into Faust but
we'd prefer something that is pure Kafka, without Java development.
On Monday, May 2, 2022, 07:11:55 PM PDT, Liam Clarke-Hutchinson
wrote:
Hi Dave,
I think you forgot to ask your question :D However, if you're looking to
group those events from
, iot_device_id:1,
sensor_data: {name:"steam_drum_temp", data:{value:99.6, si_unit:"c"}}
Each Kafka event has a reference to the total number of events to be grouped
(sensors_per_cycle), and the order in which those events are grouped is taken
from seq_number.
Regards,
Dave
FOSS == Free and Open Source Software
From: andrew davidson
Date: Wednesday, March 30, 2022 at 3:16 PM
To: users@kafka.apache.org
Subject: [EXTERNAL] Re: Newbie looking for a connector I can configure on my mac
Thanks Liam.
What is 'FOSS Kafka'? Google did not find any useful definitions.
A tutoria
From: Jatin Chhabriya
Date: Wednesday, March 16, 2022 at 9:20 AM
To: users@kafka.apache.org
Cc: Murali Krishna
Subject: [EXTERNAL] Apache Kafka Questions
Hello Team
Upon careful perusal of the documentation and tutorials, our team has a few
open questions, and we would appreciate having these c
PR means “Pull Request”. It is a way to have others review your code changes
and, when ready, they can merge them in.
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests
From: Andreas Gillmann
Date: Mon
ingly to do that
> work.
> I'm so sorry that I can't help.
>
> Best regards
> Franziska
>
> -Original Message-
> From: Tauzell, Dave
> Sent: Monday, January 10, 2022 14:30
> To: users@kafka.apache.org
> Subject: Re: Log4j 1.2
>
>
Log4j 2.x isn’t a drop-in replacement for 1.x. It isn’t a difficult change
but somebody does need to go through all the source code and do the work.
-Dave
From: Brosy, Franziska
Date: Monday, January 10, 2022 at 3:16 AM
To: users@kafka.apache.org
Subject: [EXTERNAL] RE: Log4j 1.2
Hi Roger
much everything else
Israel mentioned, I would suggest you go to http://developer.confluent.io
There you’ll find free video courses, quick-starts, tutorials, and more.
Sounds like you are at the beginning of an exciting journey! Enjoy!
Dave
> On Dec 30, 2021, at 8:29 AM, Ola Biss
I’m sorry. I misread your message. I thought you were asking about increasing
the number of partitions on a topic after there were keyed events in it.
> On Nov 22, 2021, at 3:07 AM, Pushkar Deole wrote:
>
> Dave,
>
> i am not sure i get your point... it is not about le
Another possibility, if you can pause processing, is to create a new topic with
the higher number of partitions, then consume from the beginning of the old
topic and produce to the new one. Then continue processing as normal and all
events will be in the correct partitions.
Regards,
Dave
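A rough Java sketch of that copy, assuming byte-array serdes and made-up
topic names ("orders" old, "orders-v2" new); re-sending each record with its
original key lets the default partitioner place it on the correct partition
of the larger topic:

  import java.time.Duration;
  import java.util.Collections;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.*;
  import org.apache.kafka.clients.producer.*;

  public class RepartitionCopy {
      public static void main(String[] args) {
          Properties c = new Properties();
          c.put("bootstrap.servers", "localhost:9092");
          c.put("group.id", "repartition-copy");
          // Start from the beginning of the old topic.
          c.put("auto.offset.reset", "earliest");
          c.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
          c.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

          Properties p = new Properties();
          p.put("bootstrap.servers", "localhost:9092");
          p.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
          p.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

          try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(c);
               KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(p)) {
              consumer.subscribe(Collections.singletonList("orders"));
              while (true) {
                  ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(5));
                  if (records.isEmpty()) break;  // crude end-of-topic check for a one-off copy
                  for (ConsumerRecord<byte[], byte[]> r : records) {
                      // Same key, so the default partitioner picks the right
                      // partition in the bigger topic.
                      producer.send(new ProducerRecord<>("orders-v2", r.key(), r.value()));
                  }
              }
              producer.flush();
          }
      }
  }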
you really are just consuming to write to a DB, you may want to
consider Kafka Connect.
Let me know if this is unclear.
Thanks,
Dave
> On Jun 26, 2021, at 7:05 AM, SuarezMiguelC
> wrote:
>
> DaveKlein, in the reply email of "Kafka Streams" on the question to use
>
, since Streams will
safely manage that state for you.
But Streams and Consumer are just libraries, so start with Consumer and if you
find yourself doing more processing, consider moving to Kafka Streams.
Either way, it’s a lot of fun!
Dave
> On Jun 25, 2021, at 5:09 PM, Samson Adeyemi wr
consumer publish notifications about messages
it has processed to a new topic (or other storage mechanism).
You may be able to use the admin api, but I don't think it's a standard use
case.
On Tue, May 25, 2021, 8:21 AM Tauzell, Dave
wrote:
> I don’t know about monitoring when
I don’t know about monitoring when a particular message is read, but you can
use something like https://github.com/linkedin/Burrow to monitor consumer lag.
Basically you can see that consumer Y has not yet read X number of messages
that are ready.
-Dave
From: Alberto Moio
Date: Tuesday
your message.
I don't recall what that API is right now.
-Dave
On 10/13/20, 9:50 AM, "Pedro Teixeira" wrote:
I was hoping there was an API for at least knowing the consumer progress..
—
Pedro Teixeira
✉️ i...@pgte.me
💻 https://github.com/pgte
would create an API endpoint
to get the "status" of that side effect and have clients periodically poll for
that.
-Dave
On 10/13/20, 9:22 AM, "pedro.teixe...@gmail.com"
wrote:
Imagine I have a Kafka cluster with one producer and one consumer, and
behind it an API server.
record of interest. Send those records to
another topic and have a simple consumer app watching that topic and sending
notifications. Sounds like a fun project!
Dave
> On Sep 2, 2020, at 11:01 AM, cedric sende lubuele
> wrote:
>
> Let me introduce myself, my name is Cedric and I
So if the stream is:
A:1:FOO
A:3:BAR
A:3:BAZ
Then A:3* must be processed after A:1 but A:3:BAR and A:3:BAZ can be processed
in any order?
I don’t think there is a way to do that with topics.
-Dave
From: Andre Mermegas
Reply-To: "users@kafka.apache.org"
Date: Wednesday, Septemb
ually get message the other task (the one that fails)
doesn't acknowledge..
-Dave
On 5/13/20, 10:42 PM, "wangl...@geekplus.com.cn"
wrote:
I want to know how a Kafka connector balances its tasks in distributed mode.
For example, I have two connector instanc
Hi Brandt,
The username is used as the principal for SASL/PLAIN. Check the
*sasl.jaas.config* value in the client's configuration file.
Let me know if I haven't understood you correctly.
Best regards
Dave
Newton, Brandt (CAI - Burlington) schrieb am
Mo., 20. Apr. 2020, 21:30:
> Hi
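For reference, a minimal SASL/PLAIN client configuration sketch; the username
and password are placeholders:

  security.protocol=SASL_SSL
  sasl.mechanism=PLAIN
  sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="alice" \
    password="alice-secret";

The username in that JAAS entry is what the broker sees as the principal.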
Hello Sylvain,
Apache's Trademark Policy might help:
http://www.apache.org/foundation/marks/
Best regards
Dave
On Sat, 18 Apr 2020 at 18:03, Sylvain Le Gouellec <
sylvain.legouel...@gmail.com> wrote:
> Hi,
>
> I start a new open source project : Kafka Stream .NET. For th
I don't think Kafka is a good fit for scale-up-then-scale-down scenarios, but
you can set things up to make it easier to scale up in the future.
-Dave
On 12/4/19, 12:10 AM, "Goel, Akash Deep"
wrote:
Hi ,
Is it possible to auto scale Kafka? If it is not directly supported, then
is there a
N consumers if one of those
producers is producing more data
4. You can more easily monitor each producer if you are monitoring by topic
-Dave
On 11/18/19, 4:41 AM, "pwozniak" wrote:
Hi all,
Here is my use case:
I have three message producers that submit batches of messag
e:
Hi Dave,
thank you. I saw some tutorials where they said otherwise, which
confuses me a little.
If it's done round-robin, my "world view" makes sense again 😊
Oliver
-Original Message-
From: Tauzell, Dave
Sent:
A null key results in the client sending to partitions in a round-robin order.
Use a key if you want to ensure that specific messages end up on the same
partition.
-Dave
On 11/8/19, 1:06 AM, "Oliver Eckle" wrote:
Hi,
Don’t get me wrong, I just want to understand what's
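A small Java sketch of the two cases, with a made-up topic and key. (One
caveat: clients since Kafka 2.4 default to sticky batching rather than strict
round-robin for null keys, but records are still spread across partitions
over time.)

  import java.util.Properties;
  import org.apache.kafka.clients.producer.KafkaProducer;
  import org.apache.kafka.clients.producer.ProducerRecord;

  public class KeyedSendExample {
      public static void main(String[] args) {
          Properties p = new Properties();
          p.put("bootstrap.servers", "localhost:9092");
          p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
          p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
          try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
              // Null key: the partitioner spreads records across partitions.
              producer.send(new ProducerRecord<>("events", null, "unkeyed payload"));
              // Fixed key: every record keyed "device-42" lands on the same
              // partition, preserving per-key ordering.
              producer.send(new ProducerRecord<>("events", "device-42", "keyed payload"));
          }
      }
  }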
sale apps periodically call the webservice
This won't work if the price list is truly a stream of things rather than "just
get the latest list".
-Dave
On 3/31/19, 7:01 PM, "Peter Bukowinski" wrote:
I don’t want to be a downer, but because kafka is relatively ne
We are using both and leaning towards a web service fronting Kafka because it
gives us the ability to centralize other logic. That said, I don't think the
webservice will be much more "stable" and you'll need to consider what to do
with your audit records if the web servic
It is possible that if all the nodes fail at about the same time and after the
broker acknowledged the message, then some messages will be lost because they
were in memory and not yet fully written to the disk. If you set acks=all
then this requires all of your replicas to fail in this way to
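A sketch of the settings that narrow that window, assuming a topic created
with replication factor 3; exact values depend on your cluster:

  # producer
  acks=all
  # topic or broker config
  min.insync.replicas=2
  unclean.leader.election.enable=false

With those, an acknowledged write exists on at least two replicas, so losing
it requires multiple nodes to fail at once.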
If you size your cluster right, you can send large messages of many megabytes.
We send lots (millions per day) of medium sized messages (5-10k) without any
issues.
-Dave
-Original Message-
From: Chanchal Chatterji [mailto:chanchal.chatte...@infosys.com]
Sent: Wednesday, September 12
We use Jolokia (which has a java agent you can load with kafka to expose
metrics via HTTP) and Influx/Telegraf which has support for Jolokia. There is
a fair bit of configuration but it can be done without any coding.
-Dave
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com
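A sketch of that setup; the agent jar path, port, and example metric are
illustrative:

  export KAFKA_OPTS="-javaagent:/opt/jolokia/jolokia-jvm-agent.jar=port=8778,host=0.0.0.0"
  bin/kafka-server-start.sh config/server.properties
  # Broker JMX beans then become readable over HTTP, e.g.:
  curl http://localhost:8778/jolokia/read/kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec

Telegraf's Jolokia input can then poll those same URLs and write the metrics
to InfluxDB.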
What does the hardware side of your brokers look like - do you have enough
memory to hold all pending messages in memory (i.e. before consumers get them).
At what rate are your clients trying to send messages?
-Dave
-Original Message-
From: Pritam Kadam [mailto:kpri...@thoughtworks.com
Does anybody have any experience with Confluent Replicator? Has it worked
well for you?
-Dave
month that it grabbed data
for. Running once a day is just an example, but the basic idea is to have
some way of automatically dealing with failures. You might also want some
way to monitor monthly in case it just stops working altogether.
-Dave
-Original Message-
From: James Smyth
Whatever you use I recommend some sort of wrapper since Kafka doesn't support
any sort of metadata (like the version of the serialization format).
-Dave
-Original Message-
From: Matt Farmer [mailto:m...@frmr.me]
Sent: Thursday, January 11, 2018 8:56 AM
To: users@kafka.apache.org
Su
If you haven’t built in logic from the start (with micro-service version 1)
then I think you’ll need some sort of “router” in the middle that knows the
routing logic.
-Dave
From: Assaf Katz [mailto:assaf.k...@amdocs.com]
Sent: Wednesday, December 13, 2017 3:12 AM
To: Yuval Alon ; users
You then also need to set this up for each topic you create:
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor
> 3 --partitions 3 --topic my-replicated-topic
-Dave
-Original Message-
From: Skip Montanaro [mailto:skip.montan...@gmail.com]
Sent: Th
, then the brokers
themselves will also be connected to one another in order to replicate
messages.
-Dave
-Original Message-
From: Skip Montanaro [mailto:skip.montan...@gmail.com]
Sent: Tuesday, November 28, 2017 8:06 AM
To: users@kafka.apache.org
Subject: Multiple brokers - do they share
offsets to a topic keyed on something like yyyy-mm-dd-hh24 or keep them in
memory ( if running in the same application ). You would need some one-time
process to create the offsets for the first time.
-Dave
-Original Message-
From: Kaustuv Bhattacharya [mailto:kaustuvl...@gmail.com]
Sent
Have you tried increasing max.in.flight.requests.per.connection? I wonder if
that would be similar to you having multiple producers.
Dave
Sent using OWA for iPhone
From: Sunny Kim
Sent: Wednesday, August 30, 2017 4:55:02 PM
To: users@kafka.apache.org
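For reference, a producer-config sketch; whether raising this actually helps
depends on the workload:

  max.in.flight.requests.per.connection=5
  # Caveat: with retries enabled and idempotence off, more than one in-flight
  # request can reorder records on retry.
  enable.idempotence=true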
I don't think that is possible since Kafka uses the file system cache for this.
-Dave
-Original Message-
From: Archie [mailto:anubhavnidhi1...@gmail.com]
Sent: Monday, August 28, 2017 4:14 PM
To: users@kafka.apache.org
Subject: Re: Is it possible to disable caching for some kafka t
Hmm, I think you are right that you cannot have multiple schemas on the same
topic.
-Dave
-Original Message-
From: Sreejith S [mailto:srssreej...@gmail.com]
Sent: Thursday, August 17, 2017 11:42 AM
To: users@kafka.apache.org
Subject: RE: Different Schemas on same Kafka Topic
Hi Dave
to do what Kafka is doing and prepend some sort of fixed length value to all
messages that have the schema and version you are using for that message.
-Dave
-Original Message-
From: Shajahan, Nishanth [mailto:nshaj...@visa.com]
Sent: Thursday, August 17, 2017 11:02 AM
To: users@kafka
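A Java sketch of that framing, in the spirit of Confluent's wire format (one
magic byte plus a four-byte schema id); the schema id and payload here are
placeholders:

  import java.nio.ByteBuffer;
  import java.nio.charset.StandardCharsets;

  public class FramedPayload {
      public static void main(String[] args) {
          byte magic = 0x0;
          int schemaId = 7;  // made-up registry id
          byte[] payload = "serialized-avro-bytes".getBytes(StandardCharsets.UTF_8);

          // Writer side: [magic][schema id][payload]
          ByteBuffer out = ByteBuffer.allocate(1 + 4 + payload.length);
          out.put(magic).putInt(schemaId).put(payload);
          byte[] framed = out.array();

          // Reader side: strip the prefix back off before deserializing.
          ByteBuffer in = ByteBuffer.wrap(framed);
          byte m = in.get();
          int id = in.getInt();
          byte[] body = new byte[in.remaining()];
          in.get(body);
          System.out.println("magic=" + m + " schemaId=" + id
                  + " body=" + new String(body, StandardCharsets.UTF_8));
      }
  }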
use the java
client on both ends and have the schema registry setup.
We have some slightly different needs ( including non-java languages) so we are
just using byte messages and then have our applications do the serialization
and deserialization.
-Dave
-Original Message-
From: Shajahan
What sort of skew do you expect? For example, do you expect one key to have
1000x as many messages as others?
The consumer API allows you to pick a partition. So if you know that you have
N partition groups then you could set up N consumers, each pulling from one
partition in the group. You could
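A Java sketch of that manual assignment; the topic name and partition number
are made up:

  import java.time.Duration;
  import java.util.Collections;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.TopicPartition;

  public class PinnedConsumer {
      public static void main(String[] args) {
          Properties p = new Properties();
          p.put("bootstrap.servers", "localhost:9092");
          p.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
          p.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
          try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(p)) {
              // assign() bypasses the group coordinator: this consumer reads
              // only partition 3 of "events", no rebalancing involved.
              consumer.assign(Collections.singletonList(new TopicPartition("events", 3)));
              consumer.seekToBeginning(consumer.assignment());
              ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
              System.out.println("fetched " + records.count() + " records");
          }
      }
  }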
I don't have any concrete numbers but the REST proxy is quite a bit slower.
That said, it can still be fast and can scale out so it might meet your needs.
-Dave
-Original Message-
From: Affan Syed [mailto:as...@an10.io]
Sent: Thursday, August 10, 2017 1:32 AM
To: users@kafka.apach
As others mentioned this is not a forum to discuss the works of Franz Kafka.
Here are some places to get you started:
1. The works of Franz Kafka:
https://www.vanderbilt.edu/olli/class-materials/Franz_Kafka.pdf
2. Literature stack exchange: https://literature.stackexchange.com/
-Dave
Bumping this. Has anyone here observed this in their Kafka connect deployments?
Thanks,
Dave
On 5/26/17, 1:44 PM, "Dave Hamilton" wrote:
We are currently using the Kafka S3 connector to ship Avro data to S3. We
made a change to one of our Avro schemas and have noticed consumer
>> java.lang.NoClassDefFound Error
You are missing some dependent classes. Two questions:
1. Does the message have more information about what class it couldn't find?
2. What exactly are you putting into your jar file?
-Dave
-Original Message-
From: Rahul R04 [mailto
All the brokers write to server.log. The broker that happens to be the
controller will also write to the controller.log file.
-Dave
-Original Message-
From: karan alang [mailto:karan.al...@gmail.com]
Sent: Wednesday, June 28, 2017 6:04 PM
To: users@kafka.apache.org
Subject: Kafka logs
Losing one out of three should not impact the cluster. Losing a majority means
certain Kafka operations won't work: anything that requires the ZooKeeper
data, like electing a new leader.
Dave
Sent using OWA for iPhone
From: m
I’m not really familiar with Netty so I won’t be of much help. Maybe try
posting on a Netty forum to see what they think?
-Dave
From: SenthilKumar K [mailto:senthilec...@gmail.com]
Sent: Wednesday, June 21, 2017 10:28 AM
To: Tauzell, Dave
Cc: users@kafka.apache.org; senthilec...@apache.org; d
seems possible with the right sort of kafka producer tuning.
-Dave
From: SenthilKumar K [mailto:senthilec...@gmail.com]
Sent: Wednesday, June 21, 2017 8:55 AM
To: Tauzell, Dave
Cc: users@kafka.apache.org; senthilec...@apache.org; d...@kafka.apache.org;
Senthil kumar
Subject: Re: Handling 2 to 3
What are your configurations?
- producers
- brokers
- consumers
Is the problem that web servers cannot send to Kafka fast enough, or that your
consumers cannot process messages off of Kafka fast enough?
What is the average size of these messages?
-Dave
-Original Message-
From: SenthilKumar
Hi, does anyone have advice on how to deal with this issue? Is it possible that
changing a schema compatibility setting could correct it?
Thanks,
Dave
On 5/26/17, 1:44 PM, "Dave Hamilton" wrote:
We are currently using the Kafka S3 connector to ship Avro data to S3. We
made a
Lots of large messages will slow down throughput. From the client side you
might want to have a client for large messages and one for the others so that
they each have their own queue.
-Dave
-Original Message-
From: Ghosh, Achintya (Contractor) [mailto:achintya_gh...@comcast.com]
Sent
Sounds like there are some issues using the Kafka java library on Android. I
think instead you should create a REST api (or use the REST proxy provided by
Confluent) and have your device make HTTP calls to something that then puts
messages onto Kafka.
-Dave
-Original Message-
From
I'm not sure if the flush would happen before the ack. Maybe somebody closer
to the code can answer that? I haven't tested but I think your performance
will go way down.
-Dave
-Original Message-
From: JEVTIC, MARKO [mailto:marko.jev...@fisglobal.com]
Sent: Tuesday, May 3
>> Should we assume that all messages that we got a reply from the Kafka client
>> with a valid offset are successfully written to disk even in case of power
>> failure, provided disks didn't crash?
No, see my above reply.
-Dave
-Original Message-
From: JEVTIC, MARKO [ma
the
service and bringing them up together on the new schema version)?
Thanks,
Dave
nsfer is still used after
upgrading the message version? Or do all consumers using the Scala API need to
be switched to using the new Java consumer API?
Thanks,
Dave
Both Confluent and Cloudera provide support.
-Dave
From: Benny Rutten [mailto:brut...@isabel.eu]
Sent: Wednesday, April 26, 2017 2:36 AM
To: users@kafka.apache.org
Subject: Kafka 24/7 support
Good morning,
I am trying to convince my company to choose Apache Kafka as our standard
messaging
I think because the producer batches messages which could be for different
topics.
-Dave
-Original Message-
From: Nicolas MOTTE [mailto:nicolas.mo...@amadeus.com]
Sent: Wednesday, March 8, 2017 2:41 PM
To: users@kafka.apache.org
Subject: Performance and Encryption
Hi everyone,
I
Also, see this article on streaming changes from MySQL to kafka:
https://wecode.wepay.com/posts/streaming-databases-in-realtime-with-mysql-debezium-kafka
-Original Message-
From: Tauzell, Dave
Sent: Monday, February 27, 2017 9:07 AM
To: users@kafka.apache.org
Subject: RE: Kafka Connect
updated and new rows. If you want to get
a list of changes you'll either need to build that into your schema or use
something else that does CDC (Change Data Capture) on your source.
-Dave
-Original Message-
From: VIVEK KUMAR MISHRA 13BIT0066 [mailto:vivekkumar.mishra2...@vit.ac.in]
You'll need to provide some details. At a minimum the error message that you
are getting.
-Dave
-Original Message-
From: VIVEK KUMAR MISHRA 13BIT0066 [mailto:vivekkumar.mishra2...@vit.ac.in]
Sent: Friday, February 10, 2017 4:22 AM
To: users@kafka.apache.org
Subject: about produce
Yes, you just need to point it to your cluster.
-Dave
-Original Message-
From: Guillermo Ortiz [mailto:konstt2...@gmail.com]
Sent: Wednesday, February 1, 2017 1:09 PM
To: users@kafka.apache.org
Subject: Kafka Connect in different nodes than Kafka.
Is it possible to use Kafka Connect in
Just wanted to close the loop on this. It seems the consumer offset logs might
have been corrupted by the system restart. Deleting the topic logs and
restarting the Kafka service cleared up the problem.
Thanks,
Dave
On 1/12/17, 2:29 PM, "Dave Hamilton" wrote:
Hello, we
I haven't used dtrace, but is it possible to have it running and recording the
ftruncate64 times? Then when you see one of these long roll times look at the
dtrace log to see if it was that call?
-Dave
-Original Message-
From: Stephen Powis [mailto:spo...@salesforce.com]
Sent: F
operate if you want all
those features. It also plays better with the Hadoop ecosystem. We use IBM
MQ and push thousands of messages per second through it but are looking into
Kafka because of better integration with Hadoop and some of the HA features.
-Dave
-Original Message-
From:
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator:542)
Does anyone have recommendations for what we can do to recover from this issue?
Thanks,
Dave
You have a local filesystem? Linux?
-Dave
-Original Message-
From: Stephen Powis [mailto:spo...@salesforce.com]
Sent: Thursday, January 12, 2017 1:22 PM
To: users@kafka.apache.org
Subject: Re: Taking a long time to roll a new log segment (~1 min)
I've further narrowed it down to
You can set the retention for the topic to a small time and then wait for Kafka
to delete the messages before setting it back:
bin/kafka-topics.sh --zookeeper zk.prod.yoursite.com --alter --topic TOPIC_NAME
--config retention.ms=1000
-Original Message-
From: Laxmi Narayan NIT DGP [mailt
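The same temporary-retention trick can also be done with kafka-configs.sh;
flags vary by Kafka version, so treat this as a sketch:

  bin/kafka-configs.sh --zookeeper zk.prod.yoursite.com --alter \
    --entity-type topics --entity-name TOPIC_NAME --add-config retention.ms=1000
  # wait for the brokers to delete the old segments, then restore:
  bin/kafka-configs.sh --zookeeper zk.prod.yoursite.com --alter \
    --entity-type topics --entity-name TOPIC_NAME --delete-config retention.ms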
Can you collect garbage collection stats and verify there isn't a long GC
happening at the same time?
-Dave
-Original Message-
From: Stephen Powis [mailto:spo...@salesforce.com]
Sent: Thursday, January 12, 2017 8:34 AM
To: users@kafka.apache.org
Subject: Re: Taking a long time to r
Can you explain in more detail? Do you want to have files created in HDFS
somehow broken into records and put into Kafka?
> On Jan 9, 2017, at 19:57, Cas Apanowicz wrote:
>
> Hi,
>
> I have a general understanding of the main Kafka functionality as a streaming tool.
> However, I'm trying to figure out
Because
Kafka does not wait for the OS to sync to disk before acknowledging receipt,
you can get data loss, which is why Kafka also has the concept of backup
(replica) partitions.
-Dave
-Original Message-
From: Laxmi Narayan NIT DGP [mailto:nit.dgp...@gmail.com]
Sent: Tuesday, January 3
If you specify a key with each message then all messages with the same key get
sent to the same partition.
> On Dec 26, 2016, at 23:32, Ali Akhtar wrote:
>
> How would I route the messages to a specific partition?
>
>> On 27 Dec 2016 10:25 a.m., "Asaf Mesika" wrote:
>>
>> There is a much easier
What is the plan for backup and recovery of the kafka data?
-Dave
-Original Message-
From: Susheel Kumar [mailto:susheel2...@gmail.com]
Sent: Thursday, December 15, 2016 12:00 PM
To: users@kafka.apache.org
Subject: Kafka as a database/repository question
Hello Folks,
I am going thru an
I don't know of any API to stream a message. I don't suggest putting lots of
large messages onto Kafka.
As far as documentation goes, I hear that Confluent is going to support a C
and C# client, so you could try asking questions on the Confluent mailing list.
Dave
On Dec 5, 2016, at 17
Can you use the console consumer to see the messages on the other topics?
> On Dec 2, 2016, at 04:56, Vincenzo D'Amore wrote:
>
> Hi Kafka Gurus :)
>
> I'm creating a process between a few applications.
>
> First application create a producer and then write a message into a main
> topic (A), within t
I wasn't paying attention enough and didn't think about the brokers. Assuming
all the VMs have the same underlying SAN for disk I would start by putting
brokers on the VMs with the most free memory and zookeeper on the others.
-Dave
-Original Message-
From: Sachin Mittal [ma
Do you have some idea of the size and number of messages per second you'll put
onto the topics at peak?
-Dave
-Original Message-
From: Sachin Mittal [mailto:sjmit...@gmail.com]
Sent: Thursday, December 1, 2016 9:44 AM
To: users@kafka.apache.org
Subject: Re: I need some help wit
t once you actually start running it in
production.
-Dave
-Original Message-
From: Sachin Mittal [mailto:sjmit...@gmail.com]
Sent: Thursday, December 1, 2016 6:03 AM
To: users@kafka.apache.org
Subject: Re: I need some help with the production server architecture
Folks any help on this.
Just to
Kafka doesn't have the concept of message headers like some other messaging
systems.
You will have to create a payload that contains these headers and whatever
bytes you are sending.
Dave
> On Nov 28, 2016, at 16:47, Prasad Dls wrote:
>
> Hi,
>
> While publishing
ing on the
zookeeper servers.
-Dave
-Original Message-
From: Gwen Shapira [mailto:g...@confluent.io]
Sent: Tuesday, November 22, 2016 8:11 PM
To: Users
Subject: Re: Oversized Message 40k
This has been our experience as well. I think the largest we've seen in
production is 50MB
So I'm guessing you need to use Spring 4, not the Spring 3 you are using.
Dave
> On Nov 27, 2016, at 10:58, Prasad Dls wrote:
>
> Thanks Tauzell,
>
> java.lang.NoClassDefFoundError:
> org/springframework/core/task/AsyncListenableTaskExecutor class is part of
> spring-cor
It looks like you are missing a spring jar. Can you google to find out which
jar that class is in?
Dave
> On Nov 27, 2016, at 01:16, Prasad Dls wrote:
>
> Hi users,
>
>
> My project is already developed with Spring 3.0.5.RELEASE, We are planning
> to use Kafka for new requ
I ran tests with a mix of messages, some as large as 20MB. These large
messages do slow down processing, but it still works.
-Dave
-Original Message-
From: h...@confluent.io [mailto:h...@confluent.io]
Sent: Tuesday, November 22, 2016 1:41 PM
To: users@kafka.apache.org
Subject: Re
Do you have:
unclean.leader.election.enable=false?
Dave
> On Nov 17, 2016, at 19:39, Mark Smith wrote:
>
> Hey folks,
>
> I work at Dropbox and I was doing some maintenance yesterday and it
> looks like we lost some committed data during a preferred replica
>
Partitions are used to distribute the messages in a topic between several
different broker instances. This provides higher throughput. Partitions can
also be replicated, which allows for high availability.
-Dave
From: Doyle, Keith [mailto:keith.do...@greenwayhealth.com]
Sent: Wednesday
Kafka ... I don't think this is possible.
-Dave
-Original Message-
From: kant kodali [mailto:kanth...@gmail.com]
Sent: Monday, November 7, 2016 10:48 AM
To: users@kafka.apache.org
Subject: Re: is there a way to make sure two consumers receive the same message
from the broker
You should have one consumer pull the message and submit the data to each
storage using an XA transaction.
> On Nov 5, 2016, at 19:49, kant kodali wrote:
>
> yes this problem can definetly be approached in many ways but given the
> hard constraints by our clients we don't seem to have many optio
Is Kafka Connect adding some bytes to the beginning of the Avro with the
schema registry id?
Dave
> On Nov 2, 2016, at 18:43, Will Du wrote:
>
> By using the kafka-avro-console-consumer I am able to get rich messages from
> Kafka Connect with the AvroConverter, but it got no output e
You want the servers in the primary zone to put messages onto Kafka and
applications in the edge nodes to read and process them?
-Dave
be in the client itself
Or
2. The client to return which partition failed to read/write from. This would
only be helpful if the clients are assigning partitions themselves.
I use "partition" and not "broker" since each partition has only one primary
broker at a time.
-Dave
--
Once enough failures happen the circuit is marked open. The client would then
periodically try some messages until it works again; other sends would fail
fast. There are a number of existing circuit breaker libraries you can use in
the meantime, like the Netflix one.
Dave
> On Oct 30, 2016, at
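A minimal sketch of that pattern; the threshold and retry window are
arbitrary, and a real library (Hystrix, resilience4j) handles the edge cases
properly:

  // Wrap producer.send() calls: check allowSend() first, then record the
  // outcome from the send callback.
  class KafkaCircuitBreaker {
      private int failures = 0;
      private long openedAt = 0;
      private static final int THRESHOLD = 5;
      private static final long RETRY_AFTER_MS = 30_000;

      synchronized boolean allowSend() {
          if (failures < THRESHOLD) return true;            // circuit closed
          if (System.currentTimeMillis() - openedAt > RETRY_AFTER_MS) {
              failures = THRESHOLD - 1;                     // half-open: let one try
              return true;
          }
          return false;                                     // open: fail fast
      }

      synchronized void recordSuccess() { failures = 0; }

      synchronized void recordFailure() {
          if (++failures >= THRESHOLD) openedAt = System.currentTimeMillis();
      }
  }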
down your throughput quite
a bit.
-Dave
-Original Message-
From: Hans Jespersen [mailto:h...@confluent.io]
Sent: Friday, October 28, 2016 11:36 AM
To: Mudit Agarwal
Cc: users@kafka.apache.org
Subject: Re: Kafka Multi DataCenter HA/Failover
Are you willing to have a maximum throughp