Hi Jun,
I am sometimes able to get the data when I change the consumer thread count.
My topic (test5) has 6 partitions spread across 3 Kafka servers in a
cluster.
Here is what it looks like:
topic: test5  partition: 0  leader: 2  replicas: 2,0,1  isr: 2,0,1
topic: test5  partition: 1  l
In the Kafka logs I get this entry from the client. Is this a client or
Kafka server issue? If it is on the Kafka server, how do I resolve it?
[2014-02-26 05:45:47,322] INFO Closing socket connection to
/107.170.xxx.xxx due to invalid request: Request of length 1903520116 is
not valid, it is larger than
Dear all,
Has anyone tried running the Kafka 0.8 Log4j appender?
I want to send my application logs into Kafka via the Log4j appender.
Here is my log4j.properties.
I couldn't find any proper encoder, so I just configured it to use the default
encoder.
(i.e. I commented that line out.)
---
I moved the lib files, but the consumer still gets the below error.
This is how I install Kafka per the online docs. Are the docs
incorrect?
tar -xzf kafka-0.8.0-src.tgz
cd kafka-#{version}-src
./sbt update
./sbt package
./sbt assembly-package-dependency
2014-02-26 05:36:3
You just need to copy scala-library.jar. The version depends on the Scala
version that the Kafka jar is built with.
Thanks,
Jun
On Tue, Feb 25, 2014 at 9:21 PM, David Montgomery wrote:
> Below are the scala jar files I have on the system. Which do I move?
>
> and I move to this dir? /var/lib/kaf
Below are the Scala jar files I have on the system. Which do I move?
And do I move it to this dir? /var/lib/kafka-0.8.0-src/lib
root@do-kafka-sg-development-20140217110812:/var/lib/kafka-0.8.0-src# find
/ -name *scala*.jar
/root/.ivy2/cache/org.scalatest/scalatest/jars/scalatest-1.2.jar
/root/.ivy2
You just need to add the Scala jar to the lib dir.
Thanks,
Jun
On Tue, Feb 25, 2014 at 8:40 PM, David Montgomery wrote:
> Hi,
>
> This is how I start kafka.
>
> command = /var/lib/kafka-<%=@version%>-src/bin/kafka-server-start.sh
> /var/lib/kafka-<%=@version%>-src/config/server.properties
>
>
Yes, consumer group ID can be any value. In your example, does hasNext()
return false or hit an exception?
Thanks,
Jun
On Tue, Feb 25, 2014 at 8:31 PM, Binita Bharati wrote:
> Hi Steve,
>
> So, I assume that consumer group ID is just a logical grouping ? i.e. it
> can be any random value ?
>
Hi,
This is how I start kafka.
command = /var/lib/kafka-<%=@version%>-src/bin/kafka-server-start.sh
/var/lib/kafka-<%=@version%>-src/config/server.properties
In another application I get the below error. The suggestion is to add the
Scala libraries to the path. How do I do that?
thanks
ja
Hi Steve,
So, I assume that the consumer group ID is just a logical grouping? i.e. it
can be any random value?
Actually, I am unable to get any data when I run this example:
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example.
The "ConsumerTest" class's while(it.hasNext())
How do you know n? The whole point is that you need to be able to fetch the
end offset. You can't a priori decide you will load 1m messages without
knowing what is there.
Hmm. I think what you are pointing out is that in the new consumer API, we
don't have a way to issue the equivalent of the exis
1. I would prefer PartitionsAssigned and PartitionsRevoked as that seems
clearer to me.
-Jay
On Tue, Feb 25, 2014 at 10:19 AM, Neha Narkhede wrote:
> Thanks for the reviews so far! There are a few outstanding questions -
>
> 1. It will be good to make the rebalance callbacks forward compatible
Hey Neha,
How do you know n? The whole point is that you need to be able to fetch the
end offset. You can't a priori decide you will load 1m messages without
knowing what is there.
-Jay
On Tue, Feb 25, 2014 at 9:54 AM, Neha Narkhede wrote:
> Jay/Robert -
>
> I think what Robert is saying is th
Could you send around the consumer log when it throws
ConsumerRebalanceFailedException. It should state the reason for the failed
rebalance attempts.
Thanks,
Neha
On Tue, Feb 25, 2014 at 12:01 PM, Yu, Libo wrote:
> Hi all,
>
> I tried to reproduce this exception. In case one, when no broker wa
Hi all,
I tried to reproduce this exception. In case one, when no broker was running, I
launched all consumers and got this exception. In case two, while the consumers
and brokers were running, I shut down all brokers one by one and did not see
this exception. I wonder why in case two this except
Hi,
I will make the change, see whether things work fine or not, and let
you know.
Thanks
Arjun Narasimha Kota
On Tuesday 25 February 2014 09:58 PM, Jun Rao wrote:
The following config is probably what's causing the socket timeout. Try something
like 1000 ms.
MaxWait: 1000 ms
Thanks,
Jun
O
Hi,
As I mentioned in the first message, I have checked the log and
offset using the consumer offset checker tool. The consumer offset just
stalls, and there is a lag. I haven't specified any fetch size in the consumer,
so I guess there is a default size of 1 MB. All my messages are less t
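The property I believe controls that is fetch.message.max.bytes on the 0.8
high-level consumer; a sketch of setting it explicitly, only needed if a single
message can exceed the default (the value is just an example):

import java.util.Properties;

// Sketch only: per-partition fetch size for the 0.8 high-level consumer;
// the default is 1 MB.
public class FetchSizeSketch {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("fetch.message.max.bytes", String.valueOf(1024 * 1024)); // example value
        return props;
    }
}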
For the ellipsis, sometimes you may have to make a single batch call instead
of multiple individual calls. An example would be commit(). I think either
way is fine; we just need to be aware of the implication.
Thanks,
Jun
On Tue, Feb 25, 2014 at 9:49 AM, Neha Narkhede wrote:
> Thanks for the revi
Thanks for the reviews so far! There are a few outstanding questions -
1. It will be good to make the rebalance callbacks forward compatible with
Java 8 capabilities. Should we change them to PartitionsAssignedCallback
and PartitionsRevokedCallback, or RebalanceBeginCallback and
RebalanceEndCallback?
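To make the Java 8 point concrete: a single-method callback interface is what
would let users pass a lambda later on. A rough sketch with placeholder names
(none of this is the final API, and the partition type is simplified to String):

import java.util.Arrays;
import java.util.Collection;
import java.util.List;

// Rough sketch only: single-method callback interfaces stay usable as
// Java 8 lambdas. All names here are placeholders, not the consumer API.
public class RebalanceCallbackSketch {

    interface PartitionsAssignedCallback {
        void onPartitionsAssigned(Collection<String> partitions);
    }

    static void fireAssigned(PartitionsAssignedCallback cb, List<String> assigned) {
        cb.onPartitionsAssigned(assigned);
    }

    public static void main(String[] args) {
        List<String> assigned = Arrays.asList("test5-0", "test5-1");

        // Pre-Java 8: anonymous inner class.
        fireAssigned(new PartitionsAssignedCallback() {
            public void onPartitionsAssigned(Collection<String> partitions) {
                System.out.println("assigned: " + partitions);
            }
        }, assigned);

        // Java 8: the same single-method interface can be a lambda.
        fireAssigned(partitions -> System.out.println("assigned: " + partitions), assigned);
    }
}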
[solved]. zkClient needs a serializer. This is the final code:
ZkClient zkClient = new ZkClient(zookeeperCluster, 3, 3);
zkClient.setZkSerializer(new ZkSerializer() {
    @Override
    public byte[] serialize(Object o) throws ZkMarshallingError {
        return ZKStri
Robert,
Are you saying it is possible to get events from the high-level consumer
regarding various state machine changes? For instance, can we get a
notification when a rebalance starts and ends, when a partition is
assigned/unassigned, when an offset is committed on a partition, when a
leader cha
Jay/Robert -
I think what Robert is saying is that we need to think through the offset
API to enable "batch processing" of topic data. Think of a process that
periodically kicks off to compute a data summary or do a data load or
something like that. I think what we need to support this is an api t
Nope... I have checked this and the replication factor is 1. Anyway, when I
tried to increase this param, an exception was thrown (replication factor
greater than the number of brokers).
There is no difference between this code and the script; apparently, the only
change I can see is in the constructor used for zkClient.
Any
Thanks for the review, Jun. Here are some comments -
1. The use of the ellipsis (varargs): this may make passing a list of items from a
collection to the API a bit harder. Suppose that you have a list of topics
stored in
ArrayList<String> topics;
If you want to subscribe to all topics in one call, you will have to do:
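Roughly, the shape of the issue, using a stand-in varargs method rather than
the proposed consumer API:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative only: "subscribe" here is a stand-in varargs method, not the
// real consumer API. It shows why a caller holding a collection has to
// convert it to an array before a single varargs call.
public class VarargsSubscribeDemo {

    static void subscribe(String... topics) {
        System.out.println("subscribed to " + Arrays.toString(topics));
    }

    public static void main(String[] args) {
        List<String> topics = new ArrayList<String>(Arrays.asList("topicA", "topicB", "topicC"));

        // With a varargs signature, the collection must become an array first:
        subscribe(topics.toArray(new String[topics.size()]));
    }
}

An overload that accepts a Collection (or a List) directly would avoid the
extra toArray conversion.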
David,
Topic creation can fail if you specify the replication factor > # of
brokers in the cluster. Can you check if that is true in your case?
Unfortunately, I don't think we fail the createTopic() API with the
appropriate exception since there is still a race condition where the
broker can come
This is the code that I can see in CreateTopicCommand:
var zkClient: ZkClient = null
try {
  zkClient = new ZkClient(zkConnect, 3, 3, ZKStringSerializer)
  createTopic(zkClient, topic, nPartitions, replicationFactor, replicaAssignmentStr)
  println("creation succeeded!")
The following config is probably what's causing the socket timeout. Try something
like 1000 ms.
MaxWait: 1000 ms
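In the consumer properties that corresponds to fetch.wait.max.ms, which should
stay well below socket.timeout.ms. A minimal sketch (property values are
examples only):

import java.util.Properties;
import kafka.consumer.ConsumerConfig;

// Sketch only: keep the fetch MaxWait (fetch.wait.max.ms) well below the
// socket timeout so a fetch can return before the socket times out.
public class ConsumerTimeoutSketch {
    public static ConsumerConfig buildConfig() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "group1");
        props.put("fetch.wait.max.ms", "1000");
        props.put("socket.timeout.ms", "30000");
        return new ConsumerConfig(props);
    }
}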
Thanks,
Jun
On Tue, Feb 25, 2014 at 2:16 AM, Arjun wrote:
> Apart from that i get this stack trace
>
> 25 Feb 2014 15:45:22,636 WARN [ConsumerFetcherThread-group1_
> www.taf-dev.com-
Is the ZK connection string + namespace the same between the code and the
script?
Thanks,
Jun
On Tue, Feb 25, 2014 at 3:01 AM, David Morales de Frías <
dmora...@paradigmatecnologico.com> wrote:
> Hi there,
>
> I'm trying to create a topic from java code, by calling CreateTopicCommand:
>
>
> *ZkCli
Hi all,
I am referring to this example:
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example.
What is the consumer group ID being referred to here?
Thanks
Binita
Arjun,
Have you looked at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Myconsumerseemstohavestopped,why
?
Thanks,
Neha
On Tue, Feb 25, 2014 at 5:04 AM, Arjun wrote:
> The thing i found is my ConsumerFetcherThreads are not going beyond
> BoundedByteBufferReceive.readFrom. When i a
The thing I found is that my ConsumerFetcherThreads are not going beyond
BoundedByteBufferReceive.readFrom. When I added a few more traces in
that function, I found that the call is stalling after the expectIncomplete
function. I guess Utils.read is stalling for more than 30 sec, which is
the socket time
Adding to this, I have started my logs in trace mode. I found that the
consumer fetcher threads are sending the metadata but are not receiving
any.
I see all the
I see all the
"TRACE
[ConsumerFetcherThread-group1_www.taf-dev.com-1393329622308-6e15dd12-0-1] [kafka.network.BoundedByteBufferSend]
205 bytes wr
Hi there,
I'm trying to create a topic from java code, by calling CreateTopicCommand:
ZkClient zkClient = new ZkClient(zookeeperCluster, 3, 3);
CreateTopicCommand.createTopic(zkClient, topic,
numPartitions.intValue(), replicationFactor.intValue(), "");
zkClient.close();
The prog
Apart from that, I get this stack trace:
25 Feb 2014 15:45:22,636 WARN
[ConsumerFetcherThread-group1_www.taf-dev.com-1393322165136-8318b07d-0-0] [kafka.consumer.ConsumerFetcherThread]
[ConsumerFetcherThread-group1_www.taf-dev.com-1393322165136-8318b07d-0-0],
Error in fetch Name: FetchRequest; Ve
Hi,
I am using Kafka 0.8. I have 3 brokers on three systems and 3 ZooKeeper nodes
running.
I am using the high-level consumer which is in the examples folder of Kafka.
I am able to push messages into the queue, but retrieving the messages is
taking some time. Is there any way I can tune this?
I ge