We need to give Describe/Write privileges to the producer user and
Describe/Read privileges to the consumer user. When using the kafka-acls.sh
script, you can use the "--producer"/"--consumer" convenience options to give
a user producer or consumer privileges.
You can find examples here:
https://kafka.apache.org/docume
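For example, a sketch of granting producer and consumer ACLs (the principal,
topic and group names here are placeholders; --consumer also requires --group):

  kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
    --add --allow-principal User:alice --producer --topic test
  kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
    --add --allow-principal User:bob --consumer --topic test --group test-group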
Gwen Shapira,
> Hamidreza Afzali, Hao Chen, hejiefang, Hojjat Jafarpour, huxi, Ismael Juma,
> Ivan A. Melnikov, Jaikiran Pai, James Cheng, James Chien, Jan Lukavsky,
> Jason Gustafson, Jean-Philippe Daigle, Jeff Chao, Jeff Widman, Jeyhun
> Karimov, Jiangjie Qin, Jon Freedman, Jonathan Monette,
Kafka is not well tested on the Windows platform. There are some known issues
when running on Windows. It is recommended to run Kafka on Linux machines.
On Thu, Jul 6, 2017 at 9:49 PM, M. Manna wrote:
> Hi,
>
> I have sent numerous emails in the past about the same issue, but no
> response so far.
>
> I was wonderin
/quote/documentation which suggests this?*
>
> *Kindest Regards,*
>
>
> On 7 July 2017 at 07:48, Manikumar wrote:
>
> > Kafka is not well tested on Windows platform..There are some issues
> running
> > on
> > Windows. It is recommended to run on Linux machines.
It looks like these logs are coming immediately after topic creation. Did you
see any data loss?
Otherwise, these should be normal.
On Fri, Jul 14, 2017 at 5:02 PM, mosto...@gmail.com
wrote:
> we are using a local ZFS
>
>
>
> On 14/07/17 13:31, Tom Crayford wrote:
>
>> Hi,
>>
>> Which folder are you st
Enable debug logs to find out the actual error.
On Wed, Jul 26, 2017 at 12:49 AM, karan alang wrote:
> hi - I've enabled SSL for Kafka & i'm trying to publish messages using
> console Producer
>
> Error is as shown below, any ideas ?
>
>>
>>1. /usr/hdp/2.5.3.0-37/kafka/bin/kafka-console-prod
tition: 2 Leader: 1001 Replicas: 1002,1003,1001 Isr:
> 1001
>
> It seems setting the parameter -> security.inter.broker.protocol = SSL
> causes connectivity issues between the Controller (in this case 1001) & the
> Brokers (1001, 1002, 1003)
>
> The question is why &
The logs look normal to me. It looks like you are creating a new topic
every hour?
from logs:
16:00:01: Created log for partition [mytopic.2017-07-13-16,0] (this is the
replica log for partition 0)
16:00:01: Truncating log mytopic.2017-07-13-16-0 to offset 0 (this should
be harmless)
Not sure why log loading is taking time. By default, Kafka uses one thread
per data.dir to load the logs.
You can try increasing the "num.recovery.threads.per.data.dir" broker
config property.
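For example, in server.properties (8 is only an illustrative value; tune it
to your number of disks and partitions):

  num.recovery.threads.per.data.dir=8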
On Mon, Jul 31, 2017 at 8:21 AM, Vinod KC wrote:
> Hi ,
>
> Our business use case requires to keep all kafka log
We should pass the necessary SSL configs using the --command-config
command-line option.
>>security.protocol=SSL
>>ssl.truststore.location=/var/private/ssl/client.truststore.jks
>>ssl.truststore.password=test1234
http://kafka.apache.org/documentation.html#security_configclients
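For example, with the kafka-consumer-groups.sh tool (client-ssl.properties is
a placeholder file containing the three properties above):

  kafka-consumer-groups.sh --bootstrap-server broker:9093 \
    --command-config client-ssl.properties --list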
On Mon, Jul 31, 2017 at
A server restart is required only if you are using the SASL/PLAIN mechanism.
For other mechanisms (Kerberos, SCRAM), a restart is not required.
https://issues.apache.org/jira/browse/KAFKA-4292 will help us write
custom handlers.
On Tue, Aug 1, 2017 at 4:26 AM, Alexei Levashov <
alexei.levas...@arrayent.c
Looks like some config error. Can you upload the initial logs for both
servers?
One user is sufficient for inter-broker communication.
On Wed, Aug 2, 2017 at 11:04 AM, Alexei Levashov <
alexei.levas...@arrayent.com> wrote:
> Hello Manikumar,
>
> I appreciate your advice , thank you.
Not sure what you mean by asynchronous and synchronous
replication. Details about replication are here:
http://kafka.apache.org/documentation/#replication
Kafka producers can choose whether they wait for the message to be
acknowledged
by 0, 1 or all (-1) replicas by using the "acks" config property.
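For example, a sketch of the producer setting (choose one):

  # do not wait for any acknowledgement
  acks=0
  # wait for the leader replica only
  acks=1
  # wait for all in-sync replicas
  acks=all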
Hi,
I think it is a good option to log denials at WARN level. Please raise a JIRA
for this.
On Fri, Aug 18, 2017 at 3:47 AM, Phillip Walker
wrote:
> The problem turns out to be logging in
> kafka.security.auth.SimpleAclAuthorizor. We had logging on because we need
> to log denied authorization atte
This feature was released in Kafka 0.11.0.0. You can
use the kafka-delete-records.sh script to delete data.
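For example (delete-records.json is a placeholder file name; this deletes all
records before offset 100 in partition 0 of my-topic):

  kafka-delete-records.sh --bootstrap-server localhost:9092 \
    --offset-json-file delete-records.json

where delete-records.json contains:

  {"partitions": [{"topic": "my-topic", "partition": 0, "offset": 100}], "version": 1}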
On Sun, Aug 13, 2017 at 11:27 PM, Hans Jespersen wrote:
> This is an area that is being worked on. See KIP-107 for details.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 107%3A+
Kafka topic names are case-sensitive.
On Tue, Aug 22, 2017 at 5:11 AM, Dominique De Vito
wrote:
> HI,
>
> Just a short question (I was quite surprised not to find it in the Kafka
> FAQ, or in the Kafka book...).
>
> Are Kafka topic names case sensitive or not sensitive ?
>
> Thanks.
>
> Regards,
JIRA for this issue: https://issues.apache.org/jira/browse/KAFKA-5547
On Tue, Aug 22, 2017 at 8:57 PM, sukumar.np wrote:
> Hi All,
>
>
>
> I am using 0.11 Kafka version and trying out an SASL_PLAINTEXT mechanism
> for Authentication and Authorization. I have configured Broker and
> Zookeeper as
Consumers with the same group.id are part of the same consumer group.
Topic partitions are load-balanced over the consumer instances based on
their topic subscriptions.
In this case the G1 and G2 consumers are part of the same group; T1 is
balanced over G1 and T2 over G2.
This is a valid scenario.
On Wed
In the old consumer, group coordination is based on ZooKeeper; the new
consumer uses the inbuilt (not depending on ZK) group coordinator. As of now,
auto-migration of old consumers to new consumers is not available. More
details here:
https://issues.apache.org/jira/browse/KAFKA-4513
On Wed, A
Hi,
Kafka's default authorizer is used with secure authenticated channels
(SSL, SASL, SCRAM).
For plaintext (non-secure) channels, the principal will always be
ANONYMOUS. Here you can authorize by IP address.
It's advised to run on secure channels. You can try SASL/PLAIN or SCRAM
mechanisms with/wit
Murumkar
wrote:
> Thanks Manikumar. I am testing the setup documented here:
> https://developer.ibm.com/opentech/2017/05/31/kafka-acls-in-practice/
> (SASL_PLAINTEXT).
>
> I haven't setup any authentication for the tests. Thinking about it,
> authentication is a must hav
Deny permission for operations: Write from hosts: *
>
> [nex37045@or1010051029033 ~]$ kafka-acls --authorizer
> kafka.security.auth.SimpleAclAuthorizer --authorizer-properties
> zookeeper.connect=localhost:2181 --list
> Current ACLs for resource `Topic:test`:
> User:nex37045 has De
Hi,
Yes, you can replace the bin and libs folders. Or you can untar to a new
folder and
update the config/server.properties config file.
On Tue, Sep 12, 2017 at 12:21 PM, kiran kumar
wrote:
> [re-posting]
>
> Hi All,
>
>1. Upgrade the brokers one at a time: shut down the broker, update the
>cod
What is the partition count? You need at least 2 partitions to distribute
messages across two consumers.
On Wed, Sep 13, 2017 at 1:24 PM, Liel Shraga (lshraga)
wrote:
> Hi,
>
>
>
> I have 5 separate docker images : 1 for kafka broker, 1 zookeeper , 1
> producer and 2 consumers.
>
> I publish messages to t
Hi,
If you are using the commitSync(Map offsets)
API, then the committed offset
should be the next message your application will consume, i.e.
lastProcessedMessageOffset + 1.
https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#commitSync(java.util.Map)
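A minimal Java sketch (the topic name, partition, and the consumer and
lastProcessedMessageOffset variables are placeholders assumed to be in scope):

  import java.util.Collections;
  import org.apache.kafka.clients.consumer.OffsetAndMetadata;
  import org.apache.kafka.common.TopicPartition;

  // commit lastProcessedMessageOffset + 1 as the position to resume from
  consumer.commitSync(Collections.singletonMap(
      new TopicPartition("my-topic", 0),
      new OffsetAndMetadata(lastProcessedMessageOffset + 1)));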
On Wed,
You can try the DumpLogSegments tool to verify messages from log files. This
will give the compression type of each message.
https://cwiki.apache.org/confluence/display/KAFKA/System+Tools#SystemTools-
DumpLogSegment
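For example (the segment file path is a placeholder):

  kafka-run-class.sh kafka.tools.DumpLogSegments --deep-iteration \
    --print-data-log --files /var/kafka-logs/my-topic-0/00000000000000000000.log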
On Thu, Sep 21, 2017 at 1:38 PM, Vincent Dautremont <
vincent.dautrem...@olamobile.com.
Hi,
We can override security settings per listener. This way we can configure
each listener
with different configs.
https://issues.apache.org/jira/browse/KAFKA-4636
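A sketch of server.properties with a per-listener override (listener names,
ports and paths are placeholders):

  listeners=INTERNAL://:9092,EXTERNAL://:9093
  listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
  inter.broker.listener.name=INTERNAL
  listener.name.external.ssl.keystore.location=/var/private/ssl/server.keystore.jks
  listener.name.external.ssl.keystore.password=keystore-password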
On Fri, Sep 22, 2017 at 2:00 PM, Jakub Scholz wrote:
> Hi,
>
> I would like to setup my Kafka cluster so that it has several
missed it somewhere in the
> regular documentation? Or is it mentioned only in the KIP?
>
> Thanks & Regards
> Jakub
>
> On Sat, Sep 23, 2017 at 11:05 AM, Manikumar
> wrote:
>
> > Hi,
> >
> > We can override per listener security settings. This wa
Looks like a syntax issue with the "sasl.jaas.config" config property.
On Tue, Oct 3, 2017 at 8:06 PM, Pekka Sarnila wrote:
> The output below is actually from having
>
> security.protocol=SASL_PLAINTEXT
>
> in producer.properties.
>
> Actual error point I believe is:
>
> Caused by: java.lang.Securi
Looks like a syntax issue with the "sasl.jaas.config" config property or the
JAAS conf file.
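For reference, a well-formed value looks like this (username/password are
placeholders; note the trailing semicolon, a common source of syntax errors):

  sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="alice" password="alice-secret";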
On Tue, Oct 3, 2017 at 8:12 PM, Manikumar wrote:
> looks like syntax issue with "sasl.jaas.config" config property.
>
> On Tue, Oct 3, 2017 at 8:06 PM, Pekka Sarnila wrote:
>
>
Maybe you can list the created ACLs and cross-check the permissions.
On Thu, Oct 5, 2017 at 9:51 AM, Ted Yu wrote:
> From the example off:
> https://cwiki.apache.org/confluence/display/KAFKA/
> Kafka+Authorization+Command+Line+Interface
>
> it seems following 'User:', the formation is te...@exam
Yes, we can throttle clients based on byte-rate threshold quotas.
More details here:
http://kafka.apache.org/documentation/#design_quotas
http://kafka.apache.org/documentation.html#quotas
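For example, to throttle a client id (the rates and client name are
placeholders):

  kafka-configs.sh --zookeeper localhost:2181 --alter \
    --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152' \
    --entity-type clients --entity-name clientA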
On Thu, Oct 5, 2017 at 3:32 AM, Anu P wrote:
> Hi All,
>
> I am new to using quotas for Kafka.
>
> Is i
Normally, log.retention.hours (168 hrs) should be higher than
offsets.retention.minutes (336 hrs)?
On Fri, Oct 6, 2017 at 8:58 PM, Dmitriy Vsekhvalnov
wrote:
> Hi Ted,
>
> Broker: v0.11.0.0
>
> Consumer:
> kafka-clients v0.11.0.0
> auto.offset.reset = earliest
>
>
>
> On Fri, Oct 6, 2017 at 6:2
; with
> > > > > current
> > > > > > semantics of offset reset policy IMO using anything but none is
> not
> > > > > really
> > > > > > an option unless it is ok for consumer to loose some data
> (latest)
> > or
> >
Hi,
Can you reproduce the error? Is it happening at the same offset every time?
Try to reproduce with the console-consumer tool.
You can raise a JIRA issue here:
https://issues.apache.org/jira/projects/KAFKA
On Mon, Oct 9, 2017 at 3:00 PM, Michael Keinan
wrote:
> Thank you for your response.
> N
>
>
>
>
> On Oct 9, 2017, at 12:38 PM, Manikumar manikumar.re...@gmail.com>> wrote:
>
> Hi,
>
> Can you reproduce the error? Is it happening at the same offset every time?
> Try to reproduce with the console-consumer tool.
>
> You can raise JIRA issue h
Just a minor observation: the Kafka binary contains two versions of the
javassist library (javassist-3.20.0-GA.jar, javassist-3.21.0-GA.jar).
This dependency is coming from the Connect project.
On Thu, Oct 19, 2017 at 2:16 AM, Guozhang Wang wrote:
> Thanks for pointing out, Jun, Ismael.
>
> Will update the s
Can you post the sample code?
On Mon, Oct 23, 2017 at 8:53 PM, Andrea Giordano <
andrea.giordano@gmail.com> wrote:
> Hi,
> I set a Kafka broker with some topics and where each topic is divided into
> 10 partitions.
> As I understood on Kafka doc, if I send a keyed message to kafka the key
> i
the link for it: https://pastebin.com/Rqd2Q3kx
> >
> > I use Flink java to get kafka messages and implementing a deserialiser
> effectively I see my key string in key kafka message field (so I’m “quite”
> sure the javascript code is correctly implemented).
> >
> > Finally my
Any errors in the log cleaner logs?
On Wed, Oct 25, 2017 at 3:12 PM, Elmar Weber wrote:
> Hello,
>
> I'm having trouble getting Kafka to compact a topic. It's over 300GB and
> has enough segments to warrant cleaning. It should only be about 40 GB
> (there is a copy in a db that is unique on the key)
Any exception in the callback exception field?
Maybe you can enable client debug logs to check for any errors.
On Mon, Oct 30, 2017 at 7:25 AM, Thakrar, Jayesh <
jthak...@conversantmedia.com> wrote:
> I created a new Kafka topic with 1 partition and then sent 10 messages
> using the KafkaProducer AP
Any exceptions in the mirror maker logs? Maybe you can enable mirror maker
trace logs.
Maybe all messages have the same key? Can you recheck the partition count on
the target cluster?
On Thu, Nov 2, 2017 at 2:45 AM, Chris Neal wrote:
> Apologies for bumping my own post, but really hoping someone has
You can use kafka-reassign-partitions.sh to move some partitions to the new
broker:
http://kafka.apache.org/documentation/#basic_ops_automigrate
Also check leader balancing:
http://kafka.apache.org/documentation/#basic_ops_leader_balancing
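For example (the broker ids and topics.json file are placeholders):

  kafka-reassign-partitions.sh --zookeeper localhost:2181 \
    --topics-to-move-json-file topics.json --broker-list "5,6" --generate

Then run the generated plan with --reassignment-json-file and --execute.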
On Thu, Nov 2, 2017 at 2:24 PM, lk_kafka wrote:
> hi,all:
> on
You can set a topic-specific compression type by setting the topic-level
config "compression.type".
Another option is to change the compression type config on the producer side.
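For example, to set it at the topic level (the topic name is a placeholder):

  kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics \
    --entity-name my-topic --add-config compression.type=lz4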
On Wed, Nov 22, 2017 at 4:56 PM, Sameer Kumar
wrote:
> Hi,
>
> Is it possible to switch from gzip to lz4 at runtime on kafka brokers. M
> This is possible by stopping all brokers, producers and changing values.
> But for that broker cluster has to be done. I was looking if there is any
> way we can do that in a running cluster.
>
> -Sameer.
>
> On Wed, Nov 22, 2017 at 7:24 PM, Manikumar
> wrote:
>
> &
3, 2017 at 2:39 PM, Sameer Kumar
wrote:
> Ok. So you mean stop all producers, change the compress type for topic at
> runtime and switch the compression type for producers and have them start
> again.
>
> On Thu, Nov 23, 2017 at 12:46 PM, Manikumar
> wrote:
>
All the data log files for a given topic-partition are stored under a
topic-partition directory in a particular data.dir.
So a topic-partition directory can grow up to the capacity of the log.dir
directory. And there can be multiple
topic-partition directories in a data.dir. It depends topic-partiti
You need to pass the "security.protocol" config using the --producer.config or
--consumer.config command-line options.
Only Java clients support security. You need to use the "--new-consumer"
option for kafka-console-consumer.sh.
You also need to set up the producer/consumer scripts to use a JAAS conf using
-Djava.secur
We should pass the necessary SSL configs to the kafka-console-consumer.sh
script using the --consumer.config command-line option:
>>security.protocol=SSL
>>ssl.truststore.location=/var/private/ssl/client.truststore.jks
>>ssl.truststore.password=test1234
http://kafka.apache.org/documentation.html#security_configclients
Hi,
1. inter.broker.protocol.version should be higher than or equal to
log.message.format.version.
So with a 0.10 inter.broker.protocol.version, we cannot use the latest message
format and the broker won't start.
2. Since other brokers in the cluster don't understand the latest protocol, we
cannot directly
se
To achieve high availability, we need to run multiple instances of the mirror
maker process.
On Wed, Dec 20, 2017 at 5:37 AM, sham singh
wrote:
> Hello - i've a question on optimizing MirrorMaker setup...
> I've 2 topics, each with 12 partitions and i'm setting up MirrorMaker for
> the two topics.
>
The logs show "failed due to
org.apache.kafka.common.errors.RecordTooLargeException,
returning UNKNOWN error code to the client (kafka.coordinator.group.
GroupMetadataManager)".
Try increasing the max.message.bytes config on the broker and/or enable
compression on the offsets topic via the broker config
`off
Was this config added after sending some data? Can you verify the latest
logs?
This won't recompress existing messages; it is only applicable to new messages.
On Fri, Dec 29, 2017 at 6:59 PM, Ted Yu wrote:
> Looking at https://issues.apache.org/jira/browse/KAFKA-5686 , it seems you
> should have specifi
"Memory records is not writable" error was fixed in 0.10.0.0 release
https://issues.apache.org/jira/browse/KAFKA-3594
On Fri, Jan 12, 2018 at 6:10 AM, Sunil Parmar wrote:
> We see multiple instance of this error
>
> 2017-12-23 05:30:53,722 WARN
> org.apache.kafka.clients.producer.internals.Sende
Congrats. Well deserved.
On Sat, Jan 13, 2018 at 8:37 AM, Martin Gainty wrote:
> Willkommen Matthias!
> Martin-
>
> From: Damian Guy
> Sent: Friday, January 12, 2018 7:43 PM
> To: users@kafka.apache.org
> Cc: dev
> Subject: Re: [ANNOUNCE] New committer: Matthia
+1 (non-binding)
Ran quick-start and unit tests on the src.
On Tue, Feb 13, 2018 at 5:31 AM, Ewen Cheslack-Postava
wrote:
> Thanks for the heads up, I forgot to drop the old ones, I've done that and
> rc1 artifacts should be showing up now.
>
> -Ewen
>
>
> On Mon, Feb 12, 2018 at 12:57 PM, Ted Y
KIP-175/KAFKA-5526 added this support. It is part of the upcoming Kafka
1.1.0 release.
On Wed, Feb 14, 2018 at 1:36 PM, Devendar Rao
wrote:
> Hi, Is there a way to find out the consumer group coordinator using kafka
> sh util from CLI? Thanks
>
If the broker "compression.type" is "producer", then the broker retains
the original compression codec set by the producer.
If the producer and broker codecs are different, then the broker recompresses
the data using the broker "compression.type".
On Wed, Feb 14, 2018 at 10:58 AM, Uddhav Arote
wrote:
payloadsize: 354 magic: 1 compresscodec: NoCompressionCodec crc: 468622988
> payload: same 354B message
>
> Please note the compression codecs in the --deep-iteration case,
> Case 1 is OK, but in case 2 shouldn't it be SnappyCompression and 3 may be
> LZ4Compression
>
> O
They are 1-, 5-, and 15-minute moving averages.
Kafka brokers use the Dropwizard/Yammer metrics library.
More details about the metrics are here:
http://metrics.dropwizard.io/2.2.0/getting-started/
On Sat, Feb 24, 2018 at 8:31 PM, Soheil Pourbafrani
wrote:
> Hi,
>
> What is the exact meaning of the Ka
It is the moving average over the last minute.
On Sun, Feb 25, 2018 at 12:42 AM, Soheil Pourbafrani
wrote:
> Thanks, Manikumar, You mean one minute's average from start time or for
> last minute?
>
> On Sat, Feb 24, 2018 at 7:06 PM, Manikumar
> wrote:
>
> > They are 1-, 5-,
We can use the "kafka-consumer-groups.sh --reset-offsets" option to reset
offsets. This is available from Kafka 0.11.0.0.
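For example, to reset a group to the earliest offsets (the group/topic names
are placeholders; drop --execute for a dry run):

  kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group my-group \
    --topic my-topic --reset-offsets --to-earliest --execute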
On Wed, Feb 28, 2018 at 2:59 PM, UMESH CHAUDHARY
wrote:
> You might want to set group.id config in kafka-console-consumer (or in any
> other consumer) to the value which you h
Check the broker logs for any errors, and also enable consumer debug logs.
Check the health of the __consumer_offsets topic. Make sure to set
offsets.topic.replication.factor=1 for a single-node cluster.
On Tue, Mar 20, 2018 at 11:21 PM, Anand, Uttam wrote:
> I don’t want to use --new-consumer as it is th
We can enable both compaction and retention for a topic by
setting cleanup.policy="delete,compact"
http://kafka.apache.org/documentation/#topicconfigs
Does this handle your requirement?
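For example (the topic name is a placeholder; the square brackets group the
comma-separated value):

  kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics \
    --entity-name my-topic --add-config cleanup.policy=[compact,delete]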
On Wed, Mar 21, 2018 at 2:36 PM, Kopacki, Tomasz (Nokia - PL/Wroclaw) <
tomasz.kopa...@nokia.com> wrote:
> Hi,
the history
> of changes but neither I can simply remove 'old' messages because I need to
> do this based of the lifecycle of the resource not just their age.
>
>
>
> Sincerely,
> Tomasz Kopacki
> DevOps Engineer @ Nokia
>
> -Original Message-
>
policy is 'delete' and it
> still work ?
>
> Sincerely,
> Tomasz Kopacki
> DevOps Engineer @ Nokia
>
> -Original Message-
> From: Manikumar [mailto:manikumar.re...@gmail.com]
> Sent: Wednesday, March 21, 2018 11:03 AM
> To: users@kafka.apache.org
> Subj
+1 (non-binding)
- Verified src, binary artifacts and basic quick start
- Verified delegation token operations and docs
- Verified dynamic broker configuration and docs.
On Tue, Mar 27, 2018 at 6:52 PM, Rajini Sivaram
wrote:
> Can we get some more votes for this RC so that the release can be r
Congrats, Dong!
On Thu, Mar 29, 2018 at 6:45 AM, Tao Feng wrote:
> Congrats Dong!
>
> On Wed, Mar 28, 2018 at 5:15 PM Dong Lin wrote:
>
> > Thanks everyone!!
> >
> > It is my great pleasure to be part of the Apache Kafka community and help
> > make Apache Kafka more useful to its users. I am su
Yes. As long as we use the same partitioner and have the same number of
partitions, messages with the same key will go to the same partition.
On Thu, Mar 29, 2018 at 3:11 PM, Victor L wrote:
> I am looking for best method to keep consumption of messages in the same
> order as client produced them, one thing i a
@Darshan,
For PLAINTEXT channels, the principal will be "ANONYMOUS". You need to give
produce/consume permissions
to "User:ANONYMOUS".
On Wed, Apr 4, 2018 at 8:10 AM, Joe Hammerman <
jhammer...@squarespace.com.invalid> wrote:
> Hi all,
>
> Is it possible to run mixed mode with PLAINTEXT and SSL with
User:ANONYMOUS --allow-host \* --operation Read
--topic test
On Thu, Apr 5, 2018 at 2:39 AM, Darshan wrote:
> Hi Manikumar
>
> I pushed ACLs for User:ANONYMOUS and when I list them they are listed as
> shown. Can you please suggest if server.properties needs a change ?
>
> *[
"max.poll.records" config property can be used to limit the number of
records returned
in each consumer poll() method call.
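For example, in the consumer configuration (500 is the default; 100 is only
an illustrative value):

  max.poll.records=100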
On Fri, Sep 23, 2016 at 10:49 PM, Ramanan, Buvana (Nokia - US) <
buvana.rama...@nokia-bell-labs.com> wrote:
> Hello,
>
> Do Kafka protocol & KafkaConsumer (java) client addr
Kafka doesn't support white spaces in topic names. Only '.', '_'
and '-' are allowed.
Not sure how you got a white space in the topic name.
On Wed, Oct 5, 2016 at 8:19 PM, Hamza HACHANI
wrote:
> Well ackwardly when i list the topics i find it but when i do delete it it
> says that this
We have a similar setting, "metadata.max.age.ms", in the new producer API.
Its default value is 300 sec.
On Wed, Oct 12, 2016 at 3:04 PM, Alexandru Ionita <
alexandru.ion...@gmail.com> wrote:
> Hello kafka users!!
>
> I'm trying implement/use a mechanism to make a Kafka producer imperatively
> update its
This is a known issue in some of the command-line tools.
The JIRA is here:
https://issues.apache.org/jira/browse/KAFKA-2619
On Mon, Oct 17, 2016 at 11:16 AM, ZHU Hua B
wrote:
> Hi,
>
>
> Anybody could help to answer this question? Thanks!
>
>
>
>
>
>
> Best Regards
>
> Johnny
>
> -Original Messa
+1 (non-binding)
Verified quick start and artifacts.
On Sat, Oct 15, 2016 at 4:59 AM, Jason Gustafson wrote:
> Hello Kafka users, developers and client-developers,
>
> One more RC for 0.10.1.0. We're hoping this is the final one so that we
> can meet the release target date of Oct. 17 (Monday).
Kafka does not automatically migrate existing partition data to new
volumes. Only new
partitions will be placed on new volumes.
For now, you can manually copy some of the partition dirs (careful with
checkpoint files) to the new disk,
or you can increase the partitions.
Or we can just delete the
ved this issue just by running
> kafka-reassign-partitions.sh. I now see even distribution of partitions
> across both disk volumes. Does this make sense?
>
> Thanks
> -jeremy
>
> > On Oct 20, 2016, at 10:07 PM, Manikumar
> wrote:
> >
> > Kafka does
Why are you passing "consumer.config" twice?
On Mon, Oct 24, 2016 at 11:07 AM, ZHU Hua B
wrote:
> Hi,
>
>
> The version of Kafka I used is 0.10.0.0. Thanks!
>
>
>
>
>
>
> Best Regards
>
> Johnny
>
> -Original Message-
> From: Guozhang Wang [mailto:wangg...@gmail.com]
> Sent: 2016年10月24日
>
>
>
> Best Regards
>
> Johnny
>
>
> -Original Message-
> From: Manikumar [mailto:manikumar.re...@gmail.com]
> Sent: 2016年10月24日 13:48
> To: users@kafka.apache.org
> Subject: Re: Mirror multi-embedded consumer's configuration
>
> why are
n the target Kafka
> cluster, if Kafka mirror maker could mirror the same topic again from
> source cluster when I launch mirror maker next time? Thanks!
>
>
>
>
>
>
> Best Regards
>
> Johnny
>
>
> -Original Message-
> From: Manikumar [mailto:m
Hi,
Before Kafka 1.1.0, if unclean leader election is enabled and there
are no ISRs, the leader is set to -1 and the ISR will be empty.
During the upgrade, if you have single-replica partitions or if all replicas
go out of the ISR, then we get into this situation.
From Kafka 0.11.0.0, unclean lead
Yes, a rolling restart should be fine for 1.0 -> 1.0.1.
We can add "unclean.leader.election.enable=true" to server.properties.
This requires a broker restart to take effect.
On Tue, Apr 24, 2018 at 12:02 PM, Mika Linnanoja
wrote:
> Morning, group.
>
> On Mon, Apr 23, 2018 at 11:19 AM, Mika Linnanoja
Hi,
From Kafka 0.10.2.0, we can configure the producer/consumer JAAS configuration
using the "sasl.jaas.config" config property. Using this we can configure
different principals.
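A minimal Java sketch (the usernames and passwords are placeholders):

  Properties producerConfig = new Properties();
  producerConfig.put("sasl.jaas.config",
      "org.apache.kafka.common.security.plain.PlainLoginModule required "
          + "username=\"abc\" password=\"abc-secret\";");

  Properties consumerConfig = new Properties();
  consumerConfig.put("sasl.jaas.config",
      "org.apache.kafka.common.security.plain.PlainLoginModule required "
          + "username=\"xyz\" password=\"xyz-secret\";");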
On Tue, Apr 24, 2018 at 10:58 PM, Zieger, Antoine <
antoine.zie...@morganstanley.com> wrote:
> Hi,
>
> I am trying to tran
gt; producerConfig)
>
> //Consumer with specific config: principal 'xyz'
> Properties consumerConfig = new Properties();
> consumerConfig.put("sasl.jaas.config" , )
> KafkaConsumerr producer = new KafkaConsumer<>(
> consumerConfig)
>
> Thanks in advance
am sorry this might be a lack of java skills on my
> side but I still don’t understand how I can use it in a java class. The
> example is provided in case of a property file from what I understand.
>
> Would you mind providing a java example ?
> producerConfig.put("sasl.jaas.co
heartbeat.interval.ms should be lower than session.timeout.ms.
Check here:
http://kafka.apache.org/0101/documentation.html#newconsumerconfigs
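For example (illustrative values; heartbeat.interval.ms is typically set to
no more than one third of session.timeout.ms):

  session.timeout.ms=30000
  heartbeat.interval.ms=10000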
On Thu, May 24, 2018 at 2:39 PM, Shantanu Deshmukh
wrote:
> Someone please help me. I am suffering due to this issue since a long time
> and not finding
Please check the "group.initial.rebalance.delay.ms" broker config property.
This will be the delay for the initial consumer rebalance.
From the docs:
"The rebalance will be further delayed by the value of
group.initial.rebalance.delay.ms as new members join the group,
up to a maximum of max.poll.interval.ms."
Currently, authentication logs are not available. In recent Kafka versions,
authorization failures
are logged in logs/kafka-authorizer.log.
On Thu, May 31, 2018 at 5:34 PM, Gérald Quintana
wrote:
> Hello,
>
> I am using SASL Plaintext authentication and ACLs.
> I'd like to be able to detect po
This feature will be part of the upcoming Kafka 2.0.0 release.
The doc PR is here: https://github.com/apache/kafka/pull/4890
The configs are here:
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/SaslConfigs.java#L57
On Fri, Jun 1, 2018 at 10:51 PM, Roy van der Valk
As described in the usage description, to group values which contain
commas, we need to use square brackets,
e.g.: --add-config cleanup.policy=[compact,delete]
On Thu, Jun 7, 2018 at 8:49 AM, Jayaraman, AshokKumar (CCI-Atlanta-CON) <
ashokkumar.jayara...@cox.com> wrote:
> Hi,
>
> We are on Kafka
Can you post the consumer debug logs?
You can enable console consumer debug logs in:
kafka/config/tools-log4j.properties
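For example, change the root logger level in
kafka/config/tools-log4j.properties (WARN is the shipped default):

  log4j.rootLogger=DEBUG, stderr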
On Wed, Jun 13, 2018 at 9:55 AM Craig Ching wrote:
> Hi!
>
> We’re having a problem with a new kafka cluster at 1.1.0. The problem is,
> in general, that consumers can’t consum
Those configs are topic-level config names. To configure them in
server.properties,
we need to use the server config property names (log.cleanup.policy,
log.cleaner.delete.retention.ms, etc.).
Check the "SERVER DEFAULT PROPERTY" column in the table given in the
link below:
http://kafka.apache.org/document
These metrics are meter-type metrics, which track count, mean rate, and 1-,
5-, and 15-minute moving averages.
You may be observing the count measure, which gives the number of events that
have been marked.
You can try monitoring the 1/5/15-minute averages.
On Wed, Jun 20, 2018 at 12:33 AM Arunkumar
wrote:
+1 (non-binding). Ran tests, verified quick start, and producer/consumer perf
tests.
On Sat, Jun 23, 2018 at 8:11 AM Dong Lin wrote:
> Thank you for testing and voting the release!
>
> I noticed that the date for 1.1.1-rc1 is wrong. Please kindly test and
> vote by Tuesday, June 26, 12 pm PT.
>
>
You can enable unclean.leader.election temporarily for a specific topic by
using the kafka-topics.sh command.
This requires a broker restart to take effect.
http://kafka.apache.org/documentation/#topicconfigs
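For example (the topic name is a placeholder):

  kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic \
    --config unclean.leader.election.enable=true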
On Thu, Jun 28, 2018 at 2:27 AM Jordan Pilat wrote:
> Heya,
>
> I had a question about what b
Yes, it looks like the maven artifacts are missing from the staging repo:
https://repository.apache.org/content/groups/staging/org/apache/kafka/kafka_2.11/
On Thu, Jun 28, 2018 at 4:18 PM Odin wrote:
> There are no 1.1.1-rc1 artifacts in the staging repo listed. Where can
> they be found?
>
> Sincerely
> Odin
In your case, you need to restart B2 with unclean.leader.election=true.
This will enable B2 to become the leader with 90 messages.
On Thu, Jun 28, 2018 at 11:51 PM Jordan Pilat wrote:
> If I restart the broker, won't that cause all 100 messages to be lost?
>
> On 2018/06/28 02:59
Looks like the maven artifacts are not updated in the staging repo. They are
still at the old timestamp.
https://repository.apache.org/content/groups/staging/org/apache/kafka/kafka_2.11/2.0.0/
On Sat, Jun 30, 2018 at 12:06 AM Rajini Sivaram
wrote:
> Hello Kafka users, developers and client-developers,
>
It will be taken as "any" directory for each replica, which means the replica
will be placed on any one of the
configured directories on that broker.
Since "log_dirs" is optional, you can remove it from the JSON.
On Sat, Jun 30, 2018 at 12:02 PM Debraj Manna
wrote:
> It is problem on my side. The code w