The default config handles messages up to 1MB so you should be fine.
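For reference, the broker-side cap is message.max.bytes and the matching producer-side setting is max.request.size; both default to roughly 1MB. A minimal producer sketch (broker address and topic name are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OneMBProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // producer-side cap, set here to match the ~1MB broker default
        props.put("max.request.size", "1048576");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "a message comfortably under 1MB"));
        }
    }
}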
-hans
> On Nov 22, 2016, at 4:00 AM, Felipe Santos wrote:
>
> I read on documentation that kafka is not optimized for big messages, what
> is considered a big message?
>
> For us the messages will be o
own native Kafka
Connector for each input protocol. There are over 150 Kafka Connectors already
built (search for "kafka-connect-*" in github) and see the following connector
landing page for more info on Kafka Connect
https://www.confluent.io/product/connectors/
-hans
Also see https://github.com/confluentinc/kafka-rest-node for an example
JavaScript wrapper on the Confluent REST Proxy.
You definitely do not have to use Kafka Connect to pub/sub to Kafka via REST.
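For example, producing over plain HTTP from Java (a sketch; the proxy address and topic are placeholders, and the v2 content type shown is from later REST Proxy releases, so match it to the version you run):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestProduce {
    public static void main(String[] args) throws Exception {
        // one JSON record wrapped in the REST Proxy envelope
        String body = "{\"records\":[{\"value\":{\"greeting\":\"hello\"}}]}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8082/topics/test"))
                .header("Content-Type", "application/vnd.kafka.json.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // partition/offset of the produced record
    }
}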
-hans
> On Dec 15, 2016, at 11:17 AM, Stevo Slavić wrote:
>
> https://g
Kafka clients (currently) do not work against older Kafka brokers/servers so
you have no other option but to upgrade to a 0.10.1.0 or higher Kafka broker.
-hans
> On Dec 22, 2016, at 2:25 PM, Joanne Contact wrote:
>
> Hello I have a program which requires 0.10.1.0 streams API. T
This is a recognized area for improvement and better version compatibility is
something that is being actively worked on. librdkafka clients already allow
for both forward and backward compatibility. Soon the java clients will be able
to do so as well.
-hans
> On Dec 24, 2016, at 12:26
Yes. Either using a Change Data Capture product like Oracle GoldenGate
Connector for Kafka or JDBC Source & Sink Kafka Connectors like the one
included with Confluent Open Source.
-hans
> On Dec 27, 2016, at 11:47 AM, Julious.Campbell
> wrote:
>
>
> Support
>
&
This sounds exactly as I would expect things to behave. If you consume from the
beginning I would think you would get all the messages but not if you consume
from the latest offset. You can separately tune the metadata refresh interval
if you want to miss fewer messages but that still won't get
rebalancing in off hours as the replication traffic can negatively
impact the production traffic. In the latest releases there is a feature to
throttle the replication traffic separately from client traffic.
-hans
> On Jan 6, 2017, at 12:23 PM, R . wrote:
>
> Hello, I have a 3nod
is created your code won't automatically know to consume
from it.
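One way around this is a pattern subscription, which picks up matching new topics at the next metadata refresh. A minimal sketch (broker, group id, and pattern are placeholders; subscribe(Pattern) without a rebalance listener needs a newer client):

import java.time.Duration;
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PatternConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // any topic matching the pattern, including ones created later,
            // is picked up at the next metadata refresh (metadata.max.age.ms)
            consumer.subscribe(Pattern.compile("events-.*"));
            while (true) {
                consumer.poll(Duration.ofSeconds(1)).forEach(r -> System.out.println(r.value()));
            }
        }
    }
}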
-hans
> On Jan 6, 2017, at 4:42 PM, Pradeep Gollakota wrote:
>
> What I mean by "flapping" in this context is unnecessary rebalancing
> happening. The example I would give is what a Hadoop Datanode wo
es in Kafka below)
-hans
> On Jan 18, 2017, at 11:17 PM, Paolo Patierno wrote:
>
> Yes I know so ... what's the value of the Offset field in the MessageSet when
> producer sends messages ?
>
> ____
> From: Hans Jespersen
> Sent: Wedne
else, it really
depends on the target system. You have a lot of flexibility with where Connect runs
and in distributed mode it stores most data in Kafka anyway. Most connectors do
not use a lot of resources and often connectors run on machines shared with
other apps.
-hans
> On Feb 1, 2017, at 2
sages.
-hans
> On Feb 8, 2017, at 8:17 AM, Manikumar wrote:
>
> Are you using new java consumer API? It is officially released as part of
> 0.9 release.
> 0.8.2.2 java consumer code may not be usable. You have to use old scala
> consumer API.
>
> O
You can't integrate 3.1.1 REST Proxy with a secure cluster because it uses the
old consumer API (hence zookeeper dependency). The 3.2 REST Proxy will allow
you to integrate with a secure cluster because it is updated with the latest
0.10.2 client.
-hans
> On Feb
Try adding props.put("max.block.ms", "0");
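A fuller sketch of a fail-fast producer (broker address and topic are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NonBlockingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("max.block.ms", "0"); // fail fast instead of blocking on metadata/buffer
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            try {
                producer.send(new ProducerRecord<>("test", "msg"), (md, e) -> {
                    if (e != null) System.err.println("async send failed: " + e);
                });
            } catch (Exception e) {
                // with max.block.ms=0, send() throws immediately when the
                // cluster is unreachable rather than blocking the thread
                System.err.println("send failed without blocking: " + e);
            }
        }
    }
}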
-hans
> On Jun 7, 2017, at 12:24 PM, Ankit Jain wrote:
>
> Hi,
>
> We want to use the non blocking Kafka producer. The producer thread should
> not block if the Kafka is cluster is down or not reachable.
&g
If you are setting acks=0 then you don't care about losing data even when the
cluster is up. The only way to get at-least-once is acks=all.
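For at-least-once, the producer side looks like this (a minimal sketch; broker and topic are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AtLeastOnceProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");  // every in-sync replica must acknowledge
        props.put("retries", "3"); // retry transient failures instead of dropping
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "must-not-be-lost")).get(); // block until acked
        }
    }
}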
> On Jun 7, 2017, at 1:12 PM, Ankit Jain wrote:
>
> Thanks hans.
>
> It would work but producer will start loosing the data
nsume) messages and not the lower level semantics, which are that
consuming is actually
reading AND writing (albeit only to the offset topic).
-hans
> On Jun 17, 2017, at 10:59 AM, Viktor Somogyi
> wrote:
>
> Hi Vahid,
>
> +1 for OffsetFetch from me too.
>
> I also wan
Do you list all three brokers on your consumer's bootstrap-server list?
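For example, with all three quickstart brokers listed (a sketch; the group id is a placeholder):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MultiBrokerConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // list all three quickstart brokers so bootstrapping works even if one is down
        props.put("bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094");
        props.put("group.id", "quickstart-group"); // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-replicated-topic"));
            consumer.poll(Duration.ofSeconds(5)).forEach(r -> System.out.println(r.value()));
        }
    }
}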
-hans
> On Jun 22, 2017, at 5:15 AM, 夏昀 wrote:
>
> hello:
> I am trying the quickstart of kafka documentation,link is,
> https://kafka.apache.org/quickstart. when I moved to Step 6: Setting up a
> mult
Correct. The use of the word "server" in that sentence is meant as broker (or
KafkaServer as it shows up in the 'jps' command) not as a physical or virtual
machine.
-hans
> On Jun 27, 2017, at 1:22 AM, James <896066...@qq.com> wrote:
>
> Hello,
>At h
Request quotas were just added in 0.11. Does that help in your use case?
https://cwiki.apache.org/confluence/display/KAFKA/KIP-124+-+Request+rate+quotas
-hans
> On Jun 29, 2017, at 12:55 AM, sukumar.np wrote:
>
> Hi Team,
>
>
>
> We are having a Kafka cluster with mult
() to that
offset, and continue consuming with exactly once semantics.
This is how many of the exactly once Kafka Connect Sink Connectors work today.
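Roughly, a sketch of that pattern (the broker, topic, and the offset-store helper are hypothetical placeholders; the real sink would store the next offset atomically with the data it writes):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RewindingSink {
    // hypothetical helper: the sink keeps the next offset to process in the
    // target system, committed atomically with the written data
    static long loadNextOffsetFromTarget() { return 0L; }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "sink");
        props.put("enable.auto.commit", "false"); // offsets live in the target system
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("test", 0);
            consumer.assign(Collections.singletonList(tp));
            consumer.seek(tp, loadNextOffsetFromTarget()); // resume exactly where the sink left off
            consumer.poll(Duration.ofSeconds(1)).forEach(r -> System.out.println(r.value()));
        }
    }
}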
-hans
> On Jul 1, 2017, at 11:28 PM, fuyou wrote:
>
> I read the great blog about kafka Exactly-once Semantics
> <https://www.co
them “kafka-connect-*”. A quick search will
yield a few “kafka-connect-tcp” connectors like this one
https://github.com/dhanuka84/kafka-connect-tcp
<https://github.com/dhanuka84/kafka-connect-tcp>
-hans
> On Jul 4, 2017, at 10:26 AM, Clay Teahouse wrote:
>
> Hello All,
&
See the producer param called metadata.max.age.ms which is "The period of time
in milliseconds after which we force a refresh of metadata even if we haven't
seen any partition leadership changes to proactively discover any new brokers
or partitions."
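For example (a sketch; the 10 second value is just an illustration, the default is 5 minutes):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class FastMetadataRefresh {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // refresh every 10s instead of the 5 minute default so new brokers
        // and partitions are discovered sooner
        props.put("metadata.max.age.ms", "10000");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}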
-hans
> On Aug 4, 2017,
This is an area that is being worked on. See KIP-107 for details.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-107%3A+Add+purgeDataBefore%28%29+API+in+AdminClient
<https://cwiki.apache.org/confluence/display/KAFKA/KIP-107:+Add+purgeDataBefore()+API+in+AdminClient>
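Once it lands in the AdminClient (newer clients), the call looks roughly like this (topic, partition, and offset are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class PurgeBefore {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // delete everything in test-0 below offset 42 (placeholder values)
            admin.deleteRecords(Collections.singletonMap(
                    new TopicPartition("test", 0),
                    RecordsToDelete.beforeOffset(42L))).all().get();
        }
    }
}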
-hans
>
ytest --from-beginning
91
92
93
94
95
96
97
98
99
100
-hans
> On Aug 18, 2017, at 10:32 AM, Manikumar wrote:
>
> This feature got released in Kafka 0.11.0.0. You can
> use kafka-delete-records.sh script to
Doing that doesn't really make sense in a Kafka cluster because the topic
partitions and their replicas are spread out across many brokers in the
cluster. That's what enables the parallel processing and fault tolerance
features of Kafka.
-hans
> On Aug 22, 2017, at 3:14 AM,
We (Confluent) run Kafka as a SaaS-based cloud offering and we do not see any
reason for this feature so I just don’t understand the motivation for it.
Please explain.
-hans
--
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
>
in 0.11 and above see the CLI command bin//kafka-delete-records.sh
-hans
> On Aug 23, 2017, at 7:28 PM, Rahul Singh wrote:
>
> Hello all,
>
> I am unable to purge the topic data from Kafka. Is there any class to flush
> all topic data.
>
> Thank you
Yes the offsets are the same.
-hans
> On Aug 28, 2017, at 8:32 PM, Vignesh wrote:
>
> Hi,
>
> If a topic partition is replicated and leader switches from broker 1 to
> broker 2 , are the offsets for messages in broker 2 same as broker1 ? If
> not, how can applicatio
scripts in the ./bin
directory rather than just typing “confluent start” as it says in the
quickstart documentation.
-hans
> On Sep 19, 2017, at 8:41 PM, Koert Kuipers wrote:
>
> we are using the other components of confluent platform without installing
> the confluent platform, and it
Did you add the --execute flag?
-hans
> On Sep 21, 2017, at 11:37 AM, shargan wrote:
>
> Testing kafka-consumer-groups.sh in my dev environment, I'm unable to reset
> offsets even when CURRENT-OFFSET is inbounds. Again, it returns as if the
> change took effect but desc
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetAPI(AKAListOffset)
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Wed, Sep 27, 2017 at 10:20 AM, Vignesh wrote:
> Correct
I think you are just missing the --execute flag.
-hans
> On Oct 25, 2017, at 1:24 PM, Ted Yu wrote:
>
> I wonder if you have hit KAFKA-5600.
>
> Is it possible that you try out 0.11.0.1 ?
>
> Thanks
>
>> On Wed, Oct 25, 2017 at 1:15 PM, Dan Markhasin wro
You can call the REST endpoints in KSQL from any programming language. I
wrote some stuff in node.js to call KSQL this way and it works great. The
results don't even have to go to a Kafka topic, as the results of a POST
to /query all stream back over HTTP.
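For illustration only, a Java sketch from memory of early KSQL releases (the server address, stream name, and payload shape are assumptions to verify against the version you run):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KsqlQuery {
    public static void main(String[] args) throws Exception {
        // placeholder server and stream; payload shape may differ per KSQL version
        String body = "{\"ksql\":\"SELECT * FROM pageviews;\",\"streamsProperties\":{}}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8088/query"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        // rows stream back over HTTP for as long as the query runs
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofLines())
                .body().forEach(System.out::println);
    }
}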
-hans
/**
* Hans Jespersen, Prin
configuration
properties and without coding. If the built in functions are insufficient you
can write your own SMT functions in Java.
-hans
> On Dec 21, 2017, at 7:19 AM, Bill Bejeck wrote:
>
> Hi Mads,
>
> Great question and yes your use case here is an excellent fit for Kafk
Check that your __consumer_offsets topic is also setup with replication factor
of 3 and has In Sync Replicas. Often it gets set up first as a one-node cluster
with RF=1 and then when the cluster is expanded to 3 nodes the step to increase
the replication factor of this topic gets missed.
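A quick way to check (a sketch using the Java AdminClient; the broker address is a placeholder):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;

public class CheckOffsetsTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription d = admin.describeTopics(
                    Collections.singletonList("__consumer_offsets")).all().get()
                    .get("__consumer_offsets");
            // every partition should show 3 replicas with 3 in the ISR
            d.partitions().forEach(p -> System.out.println("partition " + p.partition()
                    + " replicas=" + p.replicas().size() + " isr=" + p.isr().size()));
        }
    }
}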
-hans
with indexes beyond
those in Kafka for faster or more complex interactive queries.
-hans
> On Jan 11, 2018, at 1:33 PM, Manoj Khangaonkar wrote:
>
> Hi,
>
> If I understood the question correctly , then the better approach is to
> consume events from topic and store in
&
commit log so the lag cannot be predicted in
advance.
-hans
> On Feb 4, 2018, at 11:51 AM, Wouter Bancken wrote:
>
> Can anyone clarify if this is a bug in Kafka or the expected behavior?
>
> Best regards,
> Wouter
>
>
> On 30 January 2018 at 21:04, Wouter
period of time in milliseconds after which we force
a refresh of metadata even if we haven't seen any partition leadership changes
to proactively discover any new brokers or partitions
-hans
> On Feb 4, 2018, at 2:16 PM, Wouter Bancken wrote:
>
> Hi Hans,
>
> Tha
. Previously the consumer could get stuck and not make progress.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-74%3A+Add+Fetch+Response+Size+Limit+in+Bytes
-hans
> On Feb 25, 2018, at 8:04 AM, adrien ruffie wrote:
>
> Hi Waleed,
>
> thank for you reply, that I thought too !
>
"If your system is stateless and the transformations are not interdependent"
then I would just look at using Kafka Connect's Single Message Transform
(SMT) feature.
-hans
/**
* Hans Jespersen, Director Systems Engineering, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
Kafka 1.1.0
https://issues.apache.org/jira/browse/KAFKA-6240
<https://issues.apache.org/jira/browse/KAFKA-6240>
which seems to include dynamic reconfiguration of SSL keystores
https://issues.apache.org/jira/browse/KAFKA-6241
<https://issues.apache.org/jira/browse/KAFKA-6241>
--
/**
why the 0.10.1 docs are hard to find.
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Tue, Oct 4, 2016 at 11:42 PM, Gaurav Shaha wrote:
> Hi,
>
> I want to use kafka new consumer. But in the documentation of 0.10.0
>
ll if any consumer has finished processing
a message by checking the offsets for that client. For the older clients these
are stored in Zookeeper but for the new consumers (0.9+) they are in a special
kafka topic dedicated to storing client offsets.
-hans
> On Oct 7, 2016, at 1:34
n be a bit more tricky if you are using keys but it doesn't sound like
you are if today you are publishing to topics the way you describe.
-hans
> On Oct 8, 2016, at 9:01 PM, Abhit Kalsotra wrote:
>
> Guys any views ?
>
> Abhi
>
>> On Sat, Oct 8, 2016 at 4:28
y are automatically
distributed out over the available partitions.
//h...@confluent.io
Original message From: Abhit Kalsotra
Date: 10/8/16 11:19 PM (GMT-08:00) To: users@kafka.apache.org Subject: Re:
Regarding Kafka
Hans
Thanks for the response, yeah you can say yeah I am tre
sumed the
way I sent, then my analytics will go haywire.
Abhi
On Sun, Oct 9, 2016 at 12:50 PM, Hans Jespersen wrote:
> You don't even have to do that because the default partitioner will spread
> the data you publish to the topic over the available partitions for you.
> Just try it
7:07.500]AxThreadId 23516 ->ID:4495 offset: 81][ID
date: 2016-09-28 20:07:39.000 ]
On Sun, Oct 9, 2016 at 1:31 PM, Hans Jespersen wrote:
> Then publish with the user ID as the key and all messages for the same key
> will be guaranteed to go to the same partition and ther
Watch this talk. Kafka will not lose messages if configured correctly.
http://www.confluent.io/kafka-summit-2016-ops-when-it-absolutely-positively-has-to-be-there
<http://www.confluent.io/kafka-summit-2016-ops-when-it-absolutely-positively-has-to-be-there>
-hans
> On Oct 13, 2016
Because the --producer-property option is used to set properties other than
the compression type.
//h...@confluent.io
Original message From: ZHU Hua B
Date: 10/16/16 11:20 PM (GMT-08:00) To:
Radoslaw Gruchalski , users@kafka.apache.org Subject: RE:
A question about kafka
-1489 <https://issues.apache.org/jira/browse/KAFKA-1489> and KIP-61
<https://cwiki.apache.org/confluence/display/KAFKA/KIP-61%3A+Add+a+log+retention+parameter+for+maximum+disk+space+usage+percentage>
for
more detail.
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Co
Yes. See the description of quotas.
https://kafka.apache.org/documentation#design_quotas
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Thu, Oct 20, 2016 at 3:20 PM, Adrienne Kole
wrote:
> Hi,
>
> Is there a way to
You are going to lose everything you store in /tmp. In a production system
you never configure Kafka or ZooKeeper to store critical data in /tmp.
This has nothing to do with AWS or EBS; it is standard Linux behavior that
everything under /tmp is deleted when Linux reboots.
-hans
/**
* Hans Jespersen
Yes.
//h...@confluent.io
Original message From: ZHU Hua B
Date: 10/24/16 12:09 AM (GMT-08:00) To:
users@kafka.apache.org Subject: RE: Mirror multi-embedded consumer's
configuration
Hi,
Many thanks for your confirm!
I have another question, if I deleted a mirrored topic on
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Mon, Oct 24, 2016 at 7:01 AM, Demian Calcaprina
wrote:
> Hi Guys,
>
> Is there a way to remove a kafka topic from the java api?
>
> I have the following scenar
linux-linuxfoundationx-lfs101x-0
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Mon, Oct 24, 2016 at 1:50 AM, Gourab Chowdhury
wrote:
> Thanks for the reply, I tried changing the data directory as follows:-
> dataDir=/data/zook
necessary because the offset numbers for a
given message will not match in both datacenters.
-hans
> On Oct 28, 2016, at 8:08 AM, Mudit Agarwal wrote:
>
> Thanks dave.
> Any ways for how we can achieve HA/Failover in kafka across two DC?
> Thanks,Mudit
>
> From: "Ta
Are you willing to have a maximum throughput of 6.67 messages per second?
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Fri, Oct 28, 2016 at 9:07 AM, Mudit Agarwal wrote:
> Hi Hans,
>
> The latency between my
Just make sure they are not in the same consumer group by creating a unique
value for group.id for each independent consumer.
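For example (a sketch; the group names and broker address are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class IndependentConsumers {
    // each independent consumer gets its own group.id, so both receive every
    // message instead of splitting the partitions between them
    static KafkaConsumer<String, String> consumerFor(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", groupId);
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("test"));
        return consumer;
    }

    public static void main(String[] args) {
        KafkaConsumer<String, String> a = consumerFor("analytics"); // placeholder names
        KafkaConsumer<String, String> b = consumerFor("audit");
        a.close();
        b.close();
    }
}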
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Mon, Oct 31, 2016 at 9:42 AM, Patrick Viet
wrote:
>
The 0.10.1 broker will use more file descriptors than previous releases
because of the new timestamp indexes. You should expect and plan for ~33%
more file descriptors to be open.
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On
There is no built in mechanism to do this in Apache Kafka but if you can write
consumer 1 and consumer 2 to share a common external offset storage then you
may be able to build the functionality you seek.
-hans
> On Nov 5, 2016, at 3:55 PM, kant kodali wrote:
>
> Sorry there
way to get you the functionality you want?
-hans
> On Nov 5, 2016, at 4:31 PM, kant kodali wrote:
>
> I am new to Kafka and reading this statement "write consumer 1 and consumer
> 2 to share a common external offset storage" I can interpret it many ways
> but my
and operating system are you using to build this
system? You have to give us more information if you want specific
recommendations.
-hans
> On Nov 6, 2016, at 2:54 PM, kant kodali wrote:
>
> Hi! Thanks. any pointers on how to do that?
>
> On Sun, Nov 6, 2016 at 2:32 PM, Tauzell
case this is no
longer a Kafka question and has become more of a distributed database
design question.
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Sun, Nov 6, 2016 at 7:08 PM, kant kodali wrote:
> Hi Hans,
>
> Th
The latest Confluent packages now ship with systemd scripts. That is since
Confluent Version 4.1, which included Apache Kafka 1.1
-hans
/**
* Hans Jespersen, Director Systems Engineering, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Fri, Apr 27, 2018 at 11:15 AM, Andrew Otto
Sorry I hit send a bit too soon. I was so focused on the systemd part of
the email and not the Mirror Maker part.
Confluent packages include Mirror Maker but the systemd scripts are set up
to use Confluent Replicator rather than Mirror Maker.
My apologies.
-hans
/**
* Hans Jespersen, Director
If you create a topic with one partition they will be in order.
Alternatively if you publish with the same key for every message they will be
in the same order even if your topic has more than 1 partition.
Either way above will work for Kafka.
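For example (a sketch; broker, topic, and key are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderedPublish {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // same key -> same partition -> consumed in the order sent
            producer.send(new ProducerRecord<>("orders", "user-42", "event-1"));
            producer.send(new ProducerRecord<>("orders", "user-42", "event-2"));
        }
    }
}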
-hans
> On May 25, 2018, at 8:56 PM, Raymond
).
Conclusion
You will see ordered delivery if you either use a key when you publish or
create a topic with one partition.
-hans
> On May 26, 2018, at 7:59 AM, Raymond Xie wrote:
>
> Thanks. By default, can you explain me why I received the message in wrong
> order? Note ther
Kafka offset for the consumer
before the first call to poll()
These are the techniques most people use to get end to end exactly once
processing with no duplicates even in the event of a failure.
-hans
> On May 28, 2018, at 12:17 AM, Karthick Kumar wrote:
>
> Hi,
>
> Fac
Why don't you just put the metadata in the header and leave the key null so it
defaults to round robin?
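A sketch of that (broker, topic, and header name are placeholders; record headers need 0.11+ clients):

import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.internals.RecordHeader;

public class HeaderPublish {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            List<Header> headers = Collections.singletonList(
                    new RecordHeader("source", "app-1".getBytes(StandardCharsets.UTF_8)));
            // null partition and null key -> default partitioner spreads the
            // load; the metadata travels in a header instead of the key
            producer.send(new ProducerRecord<>("test", null, (String) null, "payload", headers));
        }
    }
}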
-hans
> On May 31, 2018, at 6:54 AM, M. Manna wrote:
>
> Hello,
>
> I can see the this has been set as "KIP required".
>
> https://issues.apache.org/jira/
You should just recommit the same offsets sooner than every 24 hours (or
whatever your commit topic retention period is set to). The expiry of offsets
is based on the timestamp of the commits.
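A sketch of what the recommit looks like with manual commits (the helper just recommits the current position; call it well inside the retention window even when no new messages arrive):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetKeepAlive {
    // recommit the current position so the commit timestamps stay fresh and
    // the offsets are not expired by the broker
    static void recommitCurrentPosition(KafkaConsumer<String, String> consumer) {
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        for (TopicPartition tp : consumer.assignment()) {
            offsets.put(tp, new OffsetAndMetadata(consumer.position(tp)));
        }
        consumer.commitSync(offsets);
    }
}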
-hans
> On Jun 1, 2018, at 1:03 AM, Dinesh Subramanian
> wrote:
>
> Hi,
>
> Fac
Kafka ACLs are at the topic level, not partition level.
Probably better to make 10 topics of 1 partition each and use topic ACLs to
control access.
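With the Java AdminClient in newer clients that could look roughly like this (the principal, topic name, and broker address are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class TopicAcl {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // allow the (placeholder) principal to read one of the ten topics
            AclBinding binding = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "topic-3", PatternType.LITERAL),
                    new AccessControlEntry("User:alice", "*",
                            AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(Collections.singletonList(binding)).all().get();
        }
    }
}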
-hans
> On Jun 25, 2018, at 9:50 PM, Yash Ganthe wrote:
>
> Hi,
>
> If I have a topic with 10 partitions, I would like each
performance but the send() returns a future so
you can make it appear to be a synchronous publish easily. Examples are in the
javadoc.
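For example (a minimal sketch; broker and topic are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SyncSend {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // blocking on the future makes the publish synchronous; get()
            // throws if the broker never acknowledges the write
            RecordMetadata md = producer.send(new ProducerRecord<>("test", "msg")).get();
            System.out.println("acked at offset " + md.offset());
        }
    }
}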
-hans
> On Jul 18, 2018, at 7:45 AM, jingguo yao wrote:
>
> The asynchronous sending of a message returns no error even if the
> Kafka server is
/README.md#consumer
https://github.com/Blizzard/node-rdkafka/blob/master/README.md
-hans
> On Jan 21, 2019, at 5:17 AM, Rahul Singh
> wrote:
>
> I am using in Node with node-kafka module.
>
>> On Mon, Jan 21, 2019 at 6:45 PM M. Manna wrote:
>>
>> Please
this one.
-hans
> On Jan 21, 2019, at 10:02 AM, Rahul Singh
> wrote:
>
> I am using node-kafka, I have used consumer.commit to commit offsets but
> don't know why when I restart the consumer it consume the committed offsets.
>
> Thanks
>
>> On Mon, Jan 21, 2019
-processing-cookbook/
There is even an example for repartitioning topics using the PARTITIONS
parameter.
CREATE STREAM clickstream_new WITH (PARTITIONS=5) AS SELECT * from
clickstream_raw;
-hans
> On Jan 27, 2019, at 9:24 AM, Ryanne Dolan wrote:
>
> You can use MirrorMaker to copy data betwe
were published in realtime.
-hans
> On Mar 15, 2019, at 7:52 AM, Pulkit Manchanda wrote:
>
> Hi All,
>
> I am building a data pipeline to send logs from one data source to the
> other node.
> I am using Kafka Connect standalone for this integration.
> Everything works fi
nd the
brokers.
-hans
> On Mar 19, 2019, at 8:19 AM, James Grant wrote:
>
> Hello,
>
> We would like to expose a Kafka cluster running on one network to clients
> that are running on other networks without having to have full routing
> between the two networks. In this
That's a 4.5 year old benchmark and it was run with a single broker node and
only 1 producer and 1 consumer all running on a single MacBookPro. Definitely
not the target production environment for Kafka.
-hans
> On Mar 21, 2019, at 11:43 AM, M. Manna wrote:
>
> HI All,
>
>
Doesn’t every one of the 20,000 POS terminals want to get the same price list
messages? If so then there is no need for 20,000 partitions.
-hans
> On Mar 31, 2019, at 7:24 PM, wrote:
>
> Hello!
>
>
>
> I ask for your help in connection with the my recent task:
&
https://blogs.apache.org/kafka/entry/apache-kafka-supports-more-partitions
“As a rule of thumb, we recommend each broker to have up to 4,000 partitions
and each cluster to have up to 200,000 partitions”
-hans
> On Apr 1, 2019, at 2:02 AM, Alexander Kuterin wrote:
>
> Thanks, Hans!
>
Yes. Idempotent publish uses a unique messageID to discard potential duplicate
messages caused by failure conditions when publishing.
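In the Java producer it is one setting (a sketch; broker and topic are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class IdempotentProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // the broker tracks producer id + sequence numbers and silently
        // discards duplicates caused by producer retries
        props.put("enable.idempotence", "true");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "written once despite retries")).get();
        }
    }
}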
-hans
> On Apr 1, 2019, at 9:49 PM, jim.me...@concept-solutions.com
> wrote:
>
> Does Kafka have something that behaves like a unique key s
rocksdb state store that comes with Kafka Streams (or as a
UDF in KSQL).
You can alternatively write your consuming apps to implement similar message
pruning functionality themselves and avoid one extra component in the end to
end architecture.
-hans
> On Apr 2, 2019, at 7:28 PM, jim
Your connector sounds a lot like this one
https://github.com/jcustenborder/kafka-connect-spooldir
I do not think you can run such a connector in distributed mode though.
Typically something like this runs in standalone mode to avoid conflicts.
-hans
On Wed, Apr 24, 2019 at 1:08 AM Venkata S A
Can you just use kafka-console-consumer and redirect the output into a
file?
-hans
On Mon, May 13, 2019 at 1:55 PM Vinay Jain wrote:
> Hi
>
> The data needs to be transferred to some other system in other network, and
> due to some security reasons, the other systems canno
start and balance them ahead of time.
-hans
On Wed, May 15, 2019 at 8:45 AM M. Manna wrote:
> Hello,
>
> I am trying to do some performance testing using Kafka-Consumer-Perf-Test.
> Could somone please help me understand whether my setup is correct?
>
> 1) I would like to
messages.
I would recommend you not use auto commit at all and instead manually commit
offsets immediately after sending each email or batch of emails.
-hans
> On May 24, 2019, at 4:35 AM, ASHOK MACHERLA wrote:
>
> Dear Team
>
>
>
> First of all thanks fo
Take a look at the Admin Client API here
https://kafka.apache.org/22/javadoc/index.html?org/apache/kafka/clients/admin/AdminClient.html
-hans
On Mon, Jun 17, 2019 at 4:27 PM shubhmeet kaur
wrote:
> hi,
>
> I wish to updater the replciation factor of already created topic through
&g
Gwen Shapira published a great whitepaper with Reference Architectures for
all Kafka and Confluent components in big and small environments and for
bare metal, VMs, and all 3 major public clouds.
https://www.confluent.io/resources/apache-kafka-confluent-enterprise-reference-architecture/
On Fri
This is a great blog post that explains how kafka works with advertised
listeners and docker
https://rmoff.net/2018/08/02/kafka-listeners-explained/
-hans
> On Oct 18, 2019, at 5:36 AM, Mich Talebzadeh
> wrote:
>
> I do not understand this.
>
> You have on a phy
in a consumer group, each consumer in the group would consume from 3 partitions.
-hans
Yes it should be going much faster than that. Something is wrong in your setup.
-hans
> On Mar 26, 2020, at 5:58 PM, Vidhya Sakar wrote:
>
> Hi Team,
>
> The Kafka consumer is reading only 8 records per second.We have implemented
> apache Kafka and confluent connect S3. The
Very good description with pictures in the book Kafka: The Definitive Guide
https://www.oreilly.com/library/view/kafka-the-definitive/9781491936153/ch04.html
-hans
> On Mar 26, 2020, at 12:00 PM, sunil chaudhari
> wrote:
>
> Again
> A consumer can have one or more consume
RAID 5 is typically slower because Kafka generates a very write-heavy load,
and writes to any disk require parity writes on the other disks, which
creates a bottleneck.
-hans
> On Mar 28, 2020, at 2:55 PM, Vishal Santoshi
> wrote:
>
> Ny one ? We doing a series of tests to be co
(all of them as they act as a cluster) and aggregate all the data
to see the full flow of messages in the system. That's why the logs may seem
overwhelming and you need to look at the logs of all the brokers (and perhaps all
the clients as well) to get the full picture.
-hans
> On Mar 28, 2
I believe that the new topics are picked up at the next metadata refresh
which is controlled by the metadata.max.age.ms parameter. The default value
is 300000 ms (which is 5 minutes).
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On
What is the difference using the bin/kafka-console-producer and
kafka-console-consumer as pub/sub clients?
see http://docs.confluent.io/3.1.0/kafka/ssl.html
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Thu, Nov 17, 2016 at 11
Publish lots of messages and measure in seconds or minutes. Otherwise you are
just benchmarking the initial SSL handshake setup time which should normally be
a one time overhead, not a per message overhead. If you just send one message
then of course SSL is much slower.
-hans
> On Nov
The performance impact of upgrading and some settings you can use to
mitigate this impact when the majority of your clients are still 0.8.x are
documented on the Apache Kafka website
https://kafka.apache.org/documentation#upgrade_10_performance_impact
-hans
/**
* Hans Jespersen, Principal
Hadoop cluster
-hans
> On Dec 6, 2016, at 3:25 AM, Aseem Bansal wrote:
>
> Hi
>
> Has anyone done a storage of Kafka JSON messages to deep storage like S3.
> We are looking to back up all of our raw Kafka JSON messages for
> Exploration. S3, HDFS, MongoDB come to mind ini
Are you setting the group.id property to be the same on both consumers?
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Wed, Dec 7, 2016 at 12:36 PM