Hello
We are sending a custom message between producer and consumer, but are
getting a ClassCastException. This works fine with a String message and
StringEncoder, but it did not work with the custom message; I got a
ClassCastException. The message has a couple of String attributes.
We did not get any exception when using DefaultEncoder.
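For what it's worth, a ClassCastException here usually means the configured encoder's type does not match the object handed to send(). A minimal sketch of the byte[] route that DefaultEncoder expects, using a hypothetical CustomMessage with two String attributes (class and field names are assumptions, not from the original thread):

```java
import java.io.*;

public class CustomMessageRoundTrip {
    // Hypothetical custom message with a couple of String attributes.
    // It must be Serializable so it can be turned into a byte[].
    static class CustomMessage implements Serializable {
        final String id;
        final String body;
        CustomMessage(String id, String body) { this.id = id; this.body = body; }
    }

    // DefaultEncoder passes a byte[] through untouched, so the producer side
    // must hand it a byte[]. Serializing the object ourselves avoids the
    // ClassCastException that a String-typed encoder throws on a non-String.
    static byte[] serialize(CustomMessage m) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(m);
        }
        return bos.toByteArray();
    }

    static CustomMessage deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (CustomMessage) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] wire = serialize(new CustomMessage("42", "hello"));
        CustomMessage back = deserialize(wire);
        System.out.println(back.id + ":" + back.body);
    }
}
```

The consumer side would then deserialize the byte[] payload the same way instead of expecting a String.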
On Mon, Feb 9, 2015 at 3:06 PM, Manikumar Reddy
wrote:
> Can you post the exception stack-trace?
>
> On Mon, Feb 9, 2015 at 2:58 PM, Gaurav Agarwal
> wrote:
>
> > hello
> > We are sending custom message across produ
After retrieving a Kafka stream or Kafka message, how do I get the
corresponding partition number it belongs to? I am using Kafka
version 0.8.1.
More specifically, the kafka.consumer.KafkaStream and
kafka.message.MessageAndMetadata classes do not provide an API to retrieve
the partition number. Are ther
>
> > You can get the partition number the message belongs to via
> > MessageAndMetadata.partition()
> >
> > On Fri, Feb 27, 2015 at 5:16 AM, Jun Rao wrote:
> >
> > > The partition api is exposed to the consumer in 0.8.2.
> > >
> > > Thanks,
> > >
In Kafka 0.8.1.1, when the producer sets the property
request.required.acks=1, it means the producer gets an acknowledgement
after the leader replica has received the data. How will the producer come
to know it got the acknowledgement? Is there any API I can use at my
application level?
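A sketch of the relevant 0.8.x producer properties (the broker address is an assumption). As I understand the old producer, there is no separate ack callback: with producer.type=sync, a normal return from send() is the acknowledgement, and a kafka.common.FailedToSendMessageException after the configured retries is the failure signal.

```java
import java.util.Properties;

public class AckAwareProducerConfig {
    // Sketch of 0.8.x sync-producer settings. With request.required.acks=1
    // and producer.type=sync, send() blocks until the leader has
    // acknowledged the write; "knowing" you got the ack simply means
    // send() returned without throwing.
    public static Properties build() {
        Properties props = new Properties();
        props.setProperty("metadata.broker.list", "localhost:9092"); // assumed address
        props.setProperty("request.required.acks", "1");
        props.setProperty("producer.type", "sync");
        props.setProperty("serializer.class", "kafka.serializer.DefaultEncoder");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("request.required.acks"));
    }
}
```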
Hello group,
I created a topic with delete.retention.ms set to 1000 and sent and
consumed messages across it. Nothing happened after that; I checked the
log as well, and the messages were not deleted. Please help me
understand what is needed
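One likely explanation, assuming the topic uses the default cleanup policy: delete.retention.ms only bounds how long delete tombstones are kept on a compacted topic; it does not expire ordinary messages. For time-based deletion the property is retention.ms, and the broker's cleanup thread still only runs periodically. A sketch of the topic-level config that would expire messages (values illustrative):

```java
import java.util.Properties;

public class TopicRetentionConfig {
    // delete.retention.ms only applies to tombstones on a compacted topic.
    // To have normal messages deleted after roughly one second, set
    // retention.ms on the topic instead (deletion still happens only when
    // the broker's periodic log-cleanup task runs).
    public static Properties retentionProps() {
        Properties topicConfig = new Properties();
        topicConfig.setProperty("cleanup.policy", "delete");
        topicConfig.setProperty("retention.ms", "1000");
        return topicConfig;
    }

    public static void main(String[] args) {
        System.out.println(retentionProps().getProperty("retention.ms"));
    }
}
```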
<http://kafka.apache.org/081/documentation.html#topic-config>
>
> Regards,
> Madhukar
>
> On Fri, Apr 3, 2015 at 5:01 PM, Gaurav Agarwal
> wrote:
>
> > hello group,
> > I have created a topic with the delete retention ms time 1000 and send
> > and consume
I am new to Kafka; that's the reason I'm asking so many questions.
KeyedMessage<String, byte[]> keyedMessage = new KeyedMessage<>(request.getRequestTopicName(), SerializationUtils.serialize(message));
producer.send(keyedMessage);
Currently I am sending the message without any key as part of the
KeyedMessage; will it
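As I understand the 0.8 producer (a sketch of the documented behavior, not the actual Kafka source): with no key it picks a partition at random and sticks with it between metadata refreshes, while with a key the DefaultPartitioner hashes the key, so messages with the same key always land on the same partition.

```java
import java.util.concurrent.ThreadLocalRandom;

public class PartitionChooser {
    // Sketch of how the 0.8 producer picks a partition, as I understand
    // the docs: no key -> a random partition (sticky between metadata
    // refreshes); with a key -> hash of the key modulo partition count.
    static int choose(Object key, int numPartitions) {
        if (key == null) {
            return ThreadLocalRandom.current().nextInt(numPartitions);
        }
        // Mask off the sign bit rather than Math.abs(): abs(Integer.MIN_VALUE)
        // is still negative, which would yield an invalid partition.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println(choose("correlation-id-1", 8));
    }
}
```

So sending without a key does work, but you lose the guarantee that related messages end up on the same partition (and therefore in order relative to each other).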
You can also use auto.offset.reset=smallest/largest. I faced the same
issue and it worked fine for me. Maybe my key name was not correct;
please check the key in the Kafka documentation.
On 4/6/15, Madhukar Bharti wrote:
> Hi Mayuresh,
>
> We are having only one consumer in the group and
Hello
I am sending a message from the producer like this with DefaultEncoder:
KeyedMessage keyedMessage = new KeyedMessage("topic", Serializations.serialize("s"),
Serializations.getSerialized(msg, rqst));
This is a compile-time error at the Java level, as it expects String.
But if I use
K
M, Manoj Khangaonkar
wrote:
> Hi
>
> Your key seems to be String.
>
> key.serializer.class might need to be set to StringEncoder.
>
> regards
>
> On Sat, Apr 25, 2015 at 10:43 AM, Gaurav Agarwal
> wrote:
>
> > Hello
> >
> > I am sending message from pr
I had the same scenario. I created multiple consumers on the same topic
with a different group.id configured for each consumer, and then it
started behaving as a topic (pub/sub) for me.
On 5/25/15, Daniel Compton wrote:
> Hi Warren
>
> If you're using the high level consumer, then you can just have multip
Can we find, through some API in Kafka, how many connections the Kafka
broker has to ZooKeeper? My Kafka is going down again and again.
You need to allocate extra memory for the topology to run.
On Wed, Sep 2, 2015 at 11:36 PM, Khalasi, Vipul Kantibhai <
vipul.kantibhai.khal...@citi.com> wrote:
> Hi ,
>
>
>
> I am using KafkaSpout in my topology and each Kafka topic has 8
> partitions, and every topic contains at least 1GB of data.
Hello
I created a custom partitioner for my needs: I implemented the Partitioner
interface and overrode this method:
public int partition(Object key, int numPartitions) {
    return partitionId; // partitionId is a field of the partitioner
}
We have something called as
We are using the key as a correlationId; that will be unique for each message.
KeyedMessage
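A minimal standalone sketch of what the partition() method could do with the correlationId key (written here without the Kafka jar, so it does not literally implement kafka.producer.Partitioner):

```java
// Sketch of a custom partitioner in the style of the 0.8.x
// kafka.producer.Partitioner interface (method shape assumed from that API).
public class CorrelationIdPartitioner {
    // Returning a fixed partitionId field, as in the snippet above, pins
    // every message to a single partition. Deriving the partition from the
    // key instead spreads unique correlationIds across all partitions.
    public int partition(Object key, int numPartitions) {
        // Mask the sign bit rather than Math.abs(): abs(Integer.MIN_VALUE)
        // is still negative, which would yield an invalid partition.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        CorrelationIdPartitioner p = new CorrelationIdPartitioner();
        System.out.println(p.partition("corr-123", 4));
    }
}
```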
Are you consuming data from the KafkaStream with byte[] as the generic
type argument?
On Sun, Oct 18, 2015 at 4:20 PM, Kiran Singh wrote:
> Hi Pratapi
>
> I am using following serializer property at producer side:
>
> ("serializer.class", "kafka.serializer.StringEncoder");
>
> And at consumer side i am trying
Did you check with ps -ef | grep kafka whether the Kafka broker is running or not?
On Nov 25, 2015 4:56 PM, "Shaikh, Mazhar A (Mazhar)" <
mazhar.sha...@in.verizon.com> wrote:
> Hi Team,
>
> In my test setup, Kafka broker goes down when consumer is stopped.
>
> Below are the version & logs.
>
> Request you help
Can you share the code that you are using to publish the message? Also can
you check whether a small message is published?
On Nov 25, 2015 9:25 PM, "Kudumula, Surender"
wrote:
> Hi all
> I am trying to get the producer working. It was working before but now
> getting the following issue. I have created a ne
So you have two nodes running and you want to increase the replication
factor to 2 for fault tolerance. That won't be a problem.
On Nov 25, 2015 6:26 AM, "Dillian Murphey" wrote:
> Is it safe to run this on an active production topic? A topic was created
> without a replication factor of 2 an
Can you check a couple of things:
1. The message size you are sending; try sending a small message.
2. Check which encoder (DefaultEncoder or StringEncoder) you are using to
consume the message; are you serializing the message while sending, or
sending a normal stream?
3. Did you create any partitions for the topic, or just create the topic?
Hello
How can I find, with the Kafka API in 0.8.1.1, the count of uncommitted
offsets (unread messages) for a particular topic and its respective
consumer group id?
I am looking at AdminUtils, TopicCommand, and OffsetRequest; is there any
specific class of the Kafka API I can use to find these things?
something like this should help you figure that out.
>
> [path of kafka]/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker
> --zookeeper [zookeeperhost:2181] --topic [topicnamehere] --group
> [groupIDhere]
>
> On Thu, Dec 3, 2015 at 10:48 PM, Gaurav Agarwal
> wrote:
>
What do you need from the presentation, Prabhu? Go to YouTube, you will
find presentations, or search for Kafka examples.
On Mar 11, 2016 9:12 PM, "prabhu v" wrote:
> Hi,
>
> Can anyone please help me with the video presentations from Kafka experts?
>
> Seems the link provided in Kafka home page
>
> https
Hi All,
In our 3-node test cluster running Kafka 0.10.0, we faced this error:
FATAL [2017-07-06 07:30:42,962]
kafka.server.ReplicaFetcherThread:[Logging$class:fatal:110] -
[ReplicaFetcherThread-0-0] - [ReplicaFetcherThread-0-0], Halting because
log truncation is not allowed for topic Topic3, Curr
If you share the same consumer group id across multiple consumers, then
Kafka will work as a queue only.
On Mar 28, 2016 6:48 PM, "Sharninder" wrote:
> What kind of queue are you looking for? Kafka works as a nice FIFO queue by
> default anyway.
>
>
>
> On Mon, Mar 28, 2016 at 5:19 PM, Vinod Kakad wrote:
>
>
> can we have C1,C2,C3 subscribe to T1 and (C3,C4) subscribe to T2. and this
> should work like a queue. i.e. T2 should send message to only C3 or C4. and
> same in case of T1.
>
> is this possible by any means?
>
> Thanks & Regards,
> Vinod Kakad.
>
> On Mon,
and
> Group2(C3,C4) for Topic T2?
>
> is it possible?
> how can I achieve implementation like this?
>
>
> On Tue, Mar 29, 2016 at 2:47 PM, Gaurav Agarwal
> wrote:
>
> > Yes this is possible in Kafka. While make a connection with Kafka
consumer
> > . W
Hi
You can have one or two instances of Kafka, but you can have one or two
Kafka topics dedicated to each application according to the need. Partitions
will help you increase the throughput, and the consumer group id can help
you make a topic behave as a queue or as a pub/sub topic.
On Apr 22, 2016 12:37 PM, "Kuldeep Kamboj"
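A sketch of the one setting involved, in 0.8 high-level consumer terms (the ZooKeeper address is an assumption): consumers that share a group.id split the messages between them like a queue, while consumers with distinct group.ids each see every message, like a topic.

```java
import java.util.Properties;

public class ConsumerGroupConfig {
    // The knob that decides queue vs pub/sub semantics on the high-level
    // consumer is group.id: same value across consumers -> each message is
    // delivered to one of them (queue); distinct values -> every group
    // receives every message (pub/sub topic).
    public static Properties forGroup(String groupId) {
        Properties props = new Properties();
        props.setProperty("zookeeper.connect", "localhost:2181"); // assumed address
        props.setProperty("group.id", groupId);
        props.setProperty("auto.offset.reset", "smallest");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(forGroup("billing-queue").getProperty("group.id"));
    }
}
```

So for the T1/T2 question above: give C1, C2, C3 one group.id on T1, and C3, C4 another group.id on T2, and each topic then delivers a message to only one consumer in that group.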
Hi All,
We are facing a weird problem where Kafka broker fails to start due to an
unhandled exception while 'recovering' a log segment. I have been able to
isolate the problem to a single record and providing the details below:
During Kafka restart, if index files are corrupted or they don't exis
Hi there, just wanted to bump up the thread one more time to check if
someone can point us in the right direction. This one was quite a serious
failure that took down many of our Kafka brokers.
On Sat, Aug 27, 2016 at 2:11 PM, Gaurav Agarwal
wrote:
> Hi All,
>
> We are facing a weir
config, 0, time.scheduler, time
On Tue, Aug 30, 2016 at 11:37 AM, Jaikiran Pai
wrote:
> Can you paste the entire exception stacktrace please?
>
> -Jaikiran
>
> On Tuesday 30 August 2016 11:23 AM, Gaurav Agarwal wrote:
>
>> Hi there, just wanted to bump up the thread one more time t