Thanks
Arjun Narasimha Kota
Hi,
I was just looking at Kafka 0.8, and I could not find any option in
config/server.properties with the key "auto.create.topics.enable" or
"default.replication.factor".
Can someone help me out with where I can find these? I want my topics to be
created dynamically.
Thanks
Arjun Narasimha Kota
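For reference: in Kafka 0.8 both of these are broker-side settings; if they are absent from the shipped config/server.properties, they can simply be added. A minimal sketch, values illustrative:
  # config/server.properties (Kafka 0.8 broker)
  # create topics on first use instead of requiring manual creation
  auto.create.topics.enable=true
  # replication factor applied to auto-created topics
  default.replication.factor=1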
same number of messages)
Thanks
Arjun Narasimha Kota
ets the list. I know Kafka does not want to be dependent on
ZooKeeper, but what should we do in such a case?
Thanks
Arjun Narasimha Kota
So from your reply, what I understood is that this particular property is
used only when starting the producers.
Is that right? Can you please confirm?
Thanks
Arjun Narasimha Kota
On Thursday 19 December 2013 05:33 PM, pushkar priyadarshi wrote:
1. When you start producing: at this time, if any of
ad1a4e402b2ac71/core/src/main/scala/kafka/network/SocketServer.scala";.
Does this have anything to do with the problem?
Why would this come up? I tried to look at the code, but I am lost in it.
Can someone please point me in a direction where I can find the answer.
Thanks
Arjun Narasimha Kota
Hi Jun,
No, it's not that problem. I am not getting what the problem is; can you
please help?
Thanks
Arjun Narasimha Kota
On Monday 10 February 2014 09:10 PM, Jun Rao wrote:
Does
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whydoesmyconsumernevergetanydata?
apply?
Thanks,
Jun
Nope, I will try that. Thanks for suggesting.
On Tuesday 11 February 2014 01:59 PM, Guozhang Wang wrote:
Arjun,
Are you using the same group name for the console consumer and the Java
consumer?
Guozhang
On Mon, Feb 10, 2014 at 11:38 PM, Arjun wrote:
Hi Jun,
No, it's not that problem. I am
With the same group id, the console consumer is working fine.
On Tuesday 11 February 2014 01:59 PM, Guozhang Wang wrote:
Arjun,
Are you using the same group name for the console consumer and the Java
consumer?
Guozhang
On Mon, Feb 10, 2014 at 11:38 PM, Arjun wrote:
Hi Jun,
No, it's
-1392133080519-e24b249b-0
thanks
Arjun Narasimha Kota
On Wednesday 12 February 2014 10:21 AM, Jun Rao wrote:
Could you double check that you used the correct topic name? If so, could
you run ConsumerOffsetChecker as described in
https://cwiki.apache.org/confluence/display/KAFKA/FAQ and see if
I am sorry, but I could not locate the offset in the request log. I have
turned on debug for the logs but couldn't find it. Do you know any pattern
I can look for?
Thanks
Arjun Narasimha Kota
On Wednesday 12 February 2014 09:26 PM, Jun Rao wrote:
Interesting. So you have 4 messages in the
Thanks,
Jun
On Tue, Feb 11, 2014 at 10:07 PM, Arjun wrote:
The topic name is correct; the output of the ConsumerOffsetChecker is
arjunn@arjunn-lt:~/Downloads/Kafka0.8/new/kafka_2.8.0-0.8.0$
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group group1
--zkconnect 127.0.0.1:2181,127.0.0.1
-6
group1_ec2-54-225-44-248.compute-1.amazonaws.com-1392525076823-fb0973d5-1
Thanks
Arjun Narasimha Kota
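For reference, ConsumerOffsetChecker in 0.8 prints one row per partition, and the Lag column (log size minus committed offset) is what shows whether a consumer is falling behind. An illustrative sketch of its output, not taken from this thread:
  Group   Topic      Pid  Offset  logSize  Lag  Owner
  group1  testtopic  0    4       4        0    group1_arjunn-lt-...-0
  group1  testtopic  1    2       6        4    group1_arjunn-lt-...-0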
Hi,
I am testing Kafka 0.8 on my local machine. I have only one ZooKeeper
and one Kafka broker running.
When I run the console producer I get this error:
[2014-02-21 16:01:20,512] WARN Error while fetching metadata
[{TopicMetadata for topic test ->
No partition metadata for topic test due to
the Kafka and ZooKeeper processes,
deleted the log dirs of Kafka and ZooKeeper, and restarted the
processes.
Thanks
Arjun Narasimha Kota
On Friday 21 February 2014 09:41 PM, Jun Rao wrote:
Could you do a list topic and show the output? Also, any error in the
controller and state-change log
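For reference, the topic listing Jun asks for can be produced in 0.8.0 with something like the following; the leader/replicas/isr columns are what reveal missing partition metadata. A sketch, with flags as in the 0.8.0 tools:
  bin/kafka-list-topic.sh --zookeeper 127.0.0.1:2181
  # prints one line per partition, roughly:
  # topic: test  partition: 0  leader: 0  replicas: 0  isr: 0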
03)
at org.I0Itec.zkclient.ZkClient$9.call(ZkClient.java:770)
at org.I0Itec.zkclient.ZkClient$9.call(ZkClient.java:766)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
... 16 more
On Saturday 22 February 2014 02:24 PM, Arjun wrote:
Hi, please find below the output of th
Hi,
No, I have 3 Kafka brokers running, but on the same system. I tried with
a replication factor of 1, but it gives the same result.
Thanks
Arjun Narasimha Kota
On Monday 24 February 2014 04:21 AM, Jun Rao wrote:
With only one broker, do you really want to have replication factor 2?
Maybe
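For reference, instead of relying on auto-creation, a topic can be created manually in 0.8.0 with an explicit replication factor no larger than the broker count. A sketch, with the topic name and counts illustrative:
  bin/kafka-create-topic.sh --zookeeper 127.0.0.1:2181 \
    --topic test --partition 2 --replica 1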
Hi,
Thanks for the input. I will try increasing the number of retries and
check.
Thanks
Arjun Narasimha Kota
On Monday 24 February 2014 10:44 AM, Jun Rao wrote:
If those errors only show up transiently when brokers are started, then
it's normal. It takes a bit of time for metadata
get this info in the kafka console; is the consumer slowness because
of this?
Reconnect due to socket error: null
The producer is pushing the messages, as I can see using the
ConsumerOffsetChecker tool. I can also see there is a lag in the consumer
messages.
Thanks
Arjun Narasimha
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
Does it affect anything? I haven't looked at it, as it was just a warning.
Should I be worried about this?
Thanks
Arjun Narasimha Kota
On Tuesday 25 February 2014 03:45 PM, Arjun wrote:
Hi,
I am using Kafka 0.8. I have 3 brokers on three systems an
bytes written"
but no reading is taking place. I may be looking at something wrong.
Can someone please help me out with this?
Thanks
Arjun Narasimha Kota
On Tuesday 25 February 2014 03:46 PM, Arjun wrote:
Apart from that, I get this stack trace:
25 Feb 2014 15:45:22,636 WARN
[ConsumerFet
time out time.
Is there any way I can increase the socket timeout there? I am not
getting why the consumers are getting stuck there.
There are no errors on the brokers.
Thanks
Arjun Narasimha Kota
On Tuesday 25 February 2014 05:45 PM, Arjun wrote:
Adding to this, I have started my logs in
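For reference, the 0.8 high-level consumer does expose the socket timeout being asked about; it has to stay larger than the fetch wait, because the broker deliberately holds fetch requests open for up to fetch.wait.max.ms. A sketch of the relevant consumer properties, values illustrative:
  # Kafka 0.8 consumer properties
  socket.timeout.ms=30000    # must exceed fetch.wait.max.ms
  fetch.wait.max.ms=1000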
nputs
thanks
Arjun Narasimha Kota
On Tuesday 25 February 2014 07:42 PM, Neha Narkhede wrote:
Arjun,
Have you looked at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Myconsumerseemstohavestopped,why
?
Thanks,
Neha
On Tue, Feb 25, 2014 at 5:04 AM, Arjun wrote:
The thing I found
Hi
I will make the change and see whether things work fine or not and let
you know.
Thanks
Arjun Narasimha Kota
On Tuesday 25 February 2014 09:58 PM, Jun Rao wrote:
The following config is probably what's causing the socket timeout. Try
something like 1000ms.
MaxWait: 1000 ms
Thanks,
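A sketch of what Jun's suggestion looks like in the consumer properties; the broker answers a fetch as soon as fetch.min.bytes are available or fetch.wait.max.ms elapses, whichever comes first. Values illustrative:
  fetch.wait.max.ms=1000    # max time the broker parks a fetch request
  fetch.min.bytes=1         # respond as soon as any data is available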
Hi,
Can I know what all I should consider while calculating the memory
requirements for the Kafka broker? I tried to search the net about
this, but could not find anything. If anyone can suggest or point to a
link, that would be helpful.
Thanks
Arjun Narasimha Kota
patterns
but couldn't find any in my logs.
Can someone help me with this?
Thanks
Arjun Narasimha Kota
. The lag in those partitions has increased a lot. The data is
production data and we can't afford to lose it. Can something be
done about it?
Thanks
Arjun Narasimha Kota
On Thursday 10 April 2014 09:13 AM, Jun Rao wrote:
Do you see many rebalances in the consumer log? If so, see
https
I see this a lot in the consumer logs
[kafka.utils.ZkUtils$] conflict in
/consumers/group1/owners/testtopic/2 data:
group1_ip-10-164-9-107-1397047622368-c14b108b-1 stored data:
group1_ip-10-168-47-114-1397034341089-3861c6cc-1
I am not seeing any "expired" in my consumer logs
Thanks
t;
and the replay of messages, I would be grateful.
Thanks
Arjun Narasimha Kota
On Thursday 10 April 2014 10:17 AM, Arjun wrote:
I see this a lot in the consumer logs
[kafka.utils.ZkUtils$] conflict in
/consumers/group1/owners/testtopic/2 data:
group1_ip-10-164-9-107-1397047622368-c14b108b-1 sto
the consumer, how many messages can the consumer read, assuming that
after some time, as the offset is not committed, the consumer will not be
able to consume any messages?
Thanks
Arjun Narasimha Kota
On Thursday 10 April 2014 09:13 AM, Jun Rao wrote:
Do you see many rebalances in the consumer log? If so
, then even though the consumer is active it is not trying to fetch the
messages. There is nothing in the logs; the messages are just not being
fetched by the Kafka consumer. The messages are there on the Kafka server.
Can someone let me know where I am going wrong?
Thanks
Arjun Narasimha Kota
It's auto-created,
but even after topic creation this is the scenario.
Arjun
On Thursday 10 April 2014 08:41 PM, Guozhang Wang wrote:
Hi Arjun,
Did you manually create the topic or use auto.topic.creation?
Guozhang
On Thu, Apr 10, 2014 at 7:39 AM, Arjun wrote:
Hi,
We have a 3-node Kafka 0.8
But we have auto offset reset set to smallest, not largest; does this
issue arise even then? If so, is there any workaround?
Thanks
Arjun Narasimha Kota
On Thursday 10 April 2014 09:39 PM, Guozhang Wang wrote:
It could be https://issues.apache.org/jira/browse/KAFKA-1006.
Guozhang
On Thu, Apr 10
arbitrary number)
Thanks
Arjun Narasimha Kota
On Friday 11 April 2014 06:23 AM, Arjun Kota wrote:
The consumer uses specific topics.
On Apr 11, 2014 6:23 AM, "Arjun Kota" wrote:
Yes the message shows up on the server.
On Ap
t" = "smallest"
"auto.commit.enable"= "false"
"fetch.message.max.bytes" = "1048576"
Thanks
Arjun Narasimha Kota
On Friday 11 April 2014 06:23 AM, Arjun Kota wrote:
The consumer uses specific topics.
On Apr 11, 2014 6:23 AM, "Arjun
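For reference, a minimal sketch of a 0.8 high-level Java consumer wired with the settings quoted above; the connect string, group, and topic names are illustrative and error handling is omitted:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "127.0.0.1:2181"); // illustrative
        props.put("group.id", "group1");                  // illustrative
        props.put("auto.offset.reset", "smallest");
        props.put("auto.commit.enable", "false");
        props.put("fetch.message.max.bytes", "1048576");

        ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // one stream for the one topic we read
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                consumer.createMessageStreams(Collections.singletonMap("testtopic", 1));
        ConsumerIterator<byte[], byte[]> it = streams.get("testtopic").get(0).iterator();

        while (it.hasNext()) {
            byte[] message = it.next().message();
            System.out.println(new String(message));
            // commit only after successful processing,
            // since auto.commit.enable is false
            consumer.commitOffsets();
        }
    }
}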
at
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
at
kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
Thanks
Arjun Narasimha Kota
On Friday 11 April 2014 09:51 AM, Arjun wr
Hi,
I could not see any out-of-memory exceptions in the broker logs. One
thing I can see is that I may have configured the consumer poorly. If it's
not too much to ask, can you let me know the changes I have to make to
overcome this problem?
Thanks
Arjun Narasimha Kota
On Friday 11 April 2014 10:04
Hi,
From my understanding, the fetch wait max time is the maximum time the
consumer waits if there are no messages on the broker. If there are
messages on the broker, it just gets all the messages from the broker. Is
my understanding wrong?
thanks
Arjun Narasimha Kota
On Friday 11 April
Yup, I will change the value and recheck. Thanks for the help.
thanks
Arjun Narasimha Kota
On Friday 11 April 2014 10:28 AM, Guozhang Wang wrote:
What I tried to say is that it may be caused by your
"fetch.wait.max.ms"="18"
being too large. Try a small value and see if that
I changed the time to 60 seconds; even now I see the same result. The
consumer is not consuming the messages.
Thanks
Arjun Narasimha Kota
On Friday 11 April 2014 10:36 AM, Arjun wrote:
Yup, I will change the value and recheck. Thanks for the help.
thanks
Arjun Narasimha Kota
On Friday 11
thrown at a place where the user can catch it and raise an alert?
Thanks
Arjun Narasimha Kota
Sometimes the error is not even printed. The below line gets printed (I
increased the number of retries to 10):
end rebalancing consumer group1_ip-10-122-57-66-1397214466042-81e47bfe
try #9
and then the consumer just sits idle.
Thanks
Arjun Narasimha Kota
On Friday 11 April 2014 04:33 PM
one please help
me out here.
Thanks
Arjun Narasimha Kota
On Friday 11 April 2014 04:48 PM, Arjun wrote:
Sometimes the error is not even printed. The below line gets
printed (I increased the number of retries to 10):
end rebalancing consumer group1_ip-10-122-57-66-1397214466042-81e47bfe
try #9
partitions, but it was not. What is the best way out for me in this
scenario?
There are cases in our production where we may have to add consumers for
a particular topic; if adding consumers is going to result in this, can
someone suggest a way out?
thanks
Arjun Narasimha Kota
On Friday 11 April
not log a thing. Not even the exception. (I
have put my consumer log level to debug.)
Thanks
Arjun Narasimha Kota
On Saturday 12 April 2014 08:41 AM, Jun Rao wrote:
Console consumer also uses the high level consumer. Could you try setting
fetch.wait.max.ms to 100ms?
Thanks,
Jun
On Fri, Apr 11, 20
Hi,
Can you please check whether this is the situation
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Myconsumerseemstohavestopped,why?
Arjun Kota
On Tuesday 15 April 2014 11:49 AM, ankit tyagi wrote:
Hi,
currently we are using kafka_2.8.0-0.8.0-beta1 and the high level
There can be different reasons. Can you check the Kafka consumer log and
see if there is any "conflict in"?
Also, can you check whether the console consumer is working fine or not?
Arjun Narasimha Kota
On Tuesday 15 April 2014 12:34 PM, ankit tyagi wrote:
I have increased the par
ka.server":type="BrokerTopicMetrics",name="AllTopicsMessagesInPerSec" .
Is there something I should worry about? I am able to get data from Kafka
and push data into Kafka without a glitch.
Thanks
Arjun Narasimha Kota
Arjun Narasimha Kota
On Monday 26 May 2014 06:09 PM, Devanshu Srivastava wrote:
Hi Arjun ,
What is the configured number of logical partitions per topic per server?
With the partition size as 2, only two partitions would be created,
distributed over 2 brokers with one partition each. You can
g",type="ReplicaFetcherManager") for
this node is 0.
We have restarted the nodes one after the other and we can't get this
node back into the ISR.
Can someone please let me know how to get this node back into the ISR.
Thanks
Arjun Narasimha Kota
2014-05-23 12:23:33,743] DEBUG preRegister called.
Server=com.sun.jmx.mbeanserver.JmxMBeanServer@2dcb25f1,
name=log4j:logger=kafka.controller (kafka.controller)
[2014-05-23 12:26:59,840] INFO [ControllerEpochListener on 2]:
Initialized controller epoch to 5 and zk version 4
(kafka.controller.ControllerEpochListener)
[2014-05-
log to see if broker 2 once had a soft
failure and hence its leadership was migrated to other brokers?
On Thu, Jun 19, 2014 at 6:57 AM, Arjun wrote:
Hi,
I have a set up of 3 kafka servers, with a replication factor of 2.
I have only one topic in this setup as of now.
bin/kafka-list-topic.sh
One small doubt on this: if we had kept monitoring the "number of under
replicated partitions" and "ISR shrinks and expansions", could we have
found this error earlier?
Can you please suggest what I should be monitoring so that I can catch
this earlier.
Thanks
Arjun Narasimha Kota
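For reference, on the 0.8 metric naming scheme the mbeans to watch look roughly like the ones below; polling UnderReplicatedPartitions from the start would have surfaced this earlier. Names are worth verifying against the exact build:
  "kafka.server":type="ReplicaManager",name="UnderReplicatedPartitions"  (gauge; alert if > 0)
  "kafka.server":type="ReplicaManager",name="IsrShrinksPerSec"           (meter)
  "kafka.server":type="ReplicaManager",name="IsrExpandsPerSec"           (meter)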
give me
the next message, i.e. the 92nd message, or not.
And if the 92nd message is served, its processing is done
smoothly, and I commit the offset, can I get the 91st message again?
thanks
Arjun Narasimha Kota
s messages are gone. They are in the
queue, but we don't even know the message offsets to get them, and I just
read we can't get them even if we have the offsets.
Please let me know whether my understanding is correct or not.
Thanks
Arjun Narasimha Kota
On Friday 22 August 2014 04:48
me the right way to
do it.
Thanks in advance.
Arjun Harish Nadh
consumer (it's a pull at the consumer)?
Regards
Arjun Harish Nadh
fetch.wait.max.ms=1
fetch.min.bytes=128
My message size is much more than that.
On Feb 11, 2014 9:21 PM, "Jun Rao" wrote:
> What's the fetch.wait.max.ms and fetch.min.bytes you used?
>
> Thanks,
>
> Jun
>
>
> On Tue, Feb 11, 2014 at 12:54 AM, Arjun w
Hi,
No, I haven't changed auto commit enable. That one message is the one
which got in earlier, a long time back (2 weeks back). After that I started
working recently and things started behaving weird.
I don't have the request log now; will check and let you know.
Thanks
Arjun Narasimha K
On Feb 12
Yes, I have set it to trace, as it will help me debug things.
Have you found any issue in it?
On Feb 13, 2014 9:12 PM, "Jun Rao" wrote:
> The request log is in trace. Take a look at the log4j property file in
> config/.
>
> Thanks,
>
> Jun
>
>
> On Wed,
> Are you running multiple broker instances on a single server?
> Or are your 12 partitions for multiple topics?
> I thought you should not have more partitions than the number of brokers
> in the cluster for a topic, for better load balancing and failover.
>
> Thanks,
> Maung
>
> O
Yes the message shows up on the server.
On Apr 11, 2014 12:07 AM, "Guozhang Wang" wrote:
> Hi Arjun,
>
> If you only send one message, does that message show up on the server? Does
> you consumer use wildcard topics or specific topics?
>
> Guozhang
>
>
> O
The consumer uses specific topics.
On Apr 11, 2014 6:23 AM, "Arjun Kota" wrote:
> Yes the message shows up on the server.
> On Apr 11, 2014 12:07 AM, "Guozhang Wang" wrote:
>
>> Hi Arjun,
>>
>> If you only send one message, does that message sho
I set the retries to 10 and set the max time between retries to 5 seconds;
even then I see this.
Thanks
Arjun Narasimha Kota
On Apr 11, 2014 9:02 PM, "Guozhang Wang" wrote:
> Arjun,
>
> When consumers exhaust all retries of rebalances they will throw the
> exception a
Console consumer works fine. It's the high-level Java consumer which is
giving this problem.
Thanks
Arjun Narasimha Kota
On Apr 11, 2014 8:42 PM, "Jun Rao" wrote:
> We may have a bug that doesn't observe fetch.min.bytes accurately. So a
> lower fetch.wait.max.ms will imp
Yes, you see a lot of them; they come continuously while the consumer is retrying.
Thanks
Arjun Narasimha Kota
On Apr 12, 2014 6:49 AM, "Guozhang Wang" wrote:
> Did you see any log entries such as
>
> "conflict in ZK path" in your consumer logs?
>
> Guozhang
>
>
Yup, I am. Only if I get a message do I commit the offset; if not, I am
not committing.
Thanks
Arjun Narasimha Kota
On Apr 11, 2014 10:40 PM, "Seshadri, Balaji"
wrote:
> Are you committing offsets manually after you consume as you mentioned
> earlier that "auto.co
Yup will try that
On Apr 12, 2014 8:42 AM, "Jun Rao" wrote:
> Console consumer also uses the high level consumer. Could you try setting
> fetch.wait.max.ms to 100ms?
>
> Thanks,
>
> Jun
>
>
> On Fri, Apr 11, 2014 at 9:56 AM, Arjun Kota wrote:
>
>
Yup, will check and let you know.
Sorry for the delayed response; I live in another part of the world.
On Apr 14, 2014 1:35 AM, "Guozhang Wang" wrote:
> Hi Arjun,
>
> Could you check if your second, i.e. the added machine has a lot of long
> GCs while during rebalances?
>
> G
kafka to enable Digest-MD5 authentication.
I cannot configure Kerberos or TLS; Digest-MD5 is sufficient for my
use case.
Please let me know if there are any docs on enabling Digest-MD5 auth
between Kafka and ZooKeeper.
Regards,
Arjun S V
Team,
Please consider this as high priority; we need to enable authentication
ASAP. Please assist.
On Tue, Nov 7, 2023 at 4:38 PM arjun s v wrote:
> Hi team,
>
> I'm trying to configure *Digest-MD5* authentication between kafka and
> zookeeper.
> Also I need to set ACL wi
Not sure what you mean here.
>
> " If I set zookeeper.set.acl=true, I'm forced to configure TLS."
> Hmm, that config shouldn't have anything to do with TLS. You can set ACL's
> with or without TLS encryption. Were you getting an error?
>
> On Wed, Nov
(you need both - in this case
> the broker is the "client" and ZK is the "server"). Are you able to share
> the jaas config you're using for both Kafka and ZK? Without seeing that
> it's tough to know. Also, to make troubleshooting easier you might want to
>
afkaServer.scala:441)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:191)
> at kafka.Kafka$.main(Kafka.scala:109)
> at kafka.Kafka.main(Kafka.scala)
Then I set zookeeper.sasl.client=true
> - 10.91.21.142 arjun-8481 - - - 23
> org.apache.zookeeper.client.ZooKeeperSaslClient
ule required
> user_super="adminsecret";
};
Now after making this change, I can make the Kafka nodes world-readable
and modifiable only by brokers (as mentioned in the Kafka docs).
Thanks and regards
Arjun S V
On Thu, Nov 23, 2023 at 10:57 AM arjun s v wrote:
> Hi Alex Brekken,
>
>
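For anyone following this thread, a sketch of the Digest-MD5 JAAS pairing it converges on; usernames and passwords are illustrative, and the broker acts as the ZooKeeper "client" here, as noted above. ZooKeeper's JAAS file:
  Server {
      org.apache.zookeeper.server.auth.DigestLoginModule required
      user_super="adminsecret"
      user_kafka="kafkasecret";
  };
And the broker's JAAS file, passed with -Djava.security.auth.login.config=...:
  Client {
      org.apache.zookeeper.server.auth.DigestLoginModule required
      username="kafka"
      password="kafkasecret";
  };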