Hi All,
I'm using the Kafka 0.8 release build with 1 partition and 1 replica. My OS is
Windows Server 2012 and the JDK is 1.7. I got the error below when Kafka deletes
logs. Any guidance would be of great help.
[2013-12-09 04:00:10,525] ERROR error in loggedRunnable (kafka.utils.Utils$)
kafka.common.KafkaStorageException
Is there a book, or was this just an idea?
On Mon, Mar 25, 2013 at 12:42 PM, Chris Curtin wrote:
> Thanks Jun,
>
> I've updated the example with this information.
>
> I've also removed some of the unnecessary newlines.
>
> Thanks,
>
> Chris
>
>
> On Mon, Mar 25, 2013 at 12:04 PM, Jun Rao wrote:
Hi folks,
I got this error when I tried to test the partition addition tool.
bin/kafka-add-partitions.sh --partition 1 --topic libotesttopic --zookeeper
xx.xxx.xxx.xx:
adding partitions failed because of
kafka.admin.AdminUtils$.assignReplicasToBrokers(Lscala/collection/Seq;)Lscala/collec
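For reference, the full command with a complete ZooKeeper connection string would look roughly like the sketch below; the host and port are placeholders, and the flags are the ones shown above:

    bin/kafka-add-partitions.sh --partition 1 --topic libotesttopic \
        --zookeeper zkhost:2181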
I forget, but I think Chetan was with O'Reilly.
> On Dec 10, 2013, at 7:01, S Ahmed wrote:
>
> Is there a book, or was this just an idea?
>
>
> On Mon, Mar 25, 2013 at 12:42 PM, Chris Curtin wrote:
>
>> Thanks Jun,
>>
>> I've updated the example with this information.
>>
>> I've also removed some
There is this one:
http://www.packtpub.com/develop-custom-message-producers-and-consumers-using-apache-kafka/book
2013/12/10 S Ahmed
> Is there a book, or was this just an idea?
>
>
> On Mon, Mar 25, 2013 at 12:42 PM, Chris Curtin wrote:
>
> > Thanks Jun,
> >
> > I've updated the example wi
Hi All,
I am using Kafka 0.8.
My producers configurations are as follows
kafka8.bytearray.producer.type=sync
kafka8.producer.batch.num.messages=100
kafka8.producer.topic.metadata.refresh.interval.ms=60
kafka8.producer.retry.backoff.ms=100
kafka8.producer.message.send
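For readers following along, here is a hedged sketch of how these application-prefixed settings appear to map onto the Kafka 0.8 producer property names; the kafka8.* prefix looks like an application-side wrapper, so the mapping itself is an assumption:

    # Assumed Kafka 0.8 producer config names behind the kafka8.* properties above
    producer.type=sync
    batch.num.messages=100                     # only takes effect with producer.type=async
    topic.metadata.refresh.interval.ms=60
    retry.backoff.ms=100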
Yes, you can use the ConsumerOffsetChecker tool.
Thanks,
Jun
On Mon, Dec 9, 2013 at 9:40 PM, Sanket Maru wrote:
> For my topic the offset isn't increasing, which means the consumer has
> stopped. I wanted to get the count (#) of events that are still remaining to
> be processed. Is that possible?
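For reference, a hedged invocation sketch of the ConsumerOffsetChecker mentioned above; the group, topic, and ZooKeeper address are placeholders, and flag names may vary slightly between 0.8.x releases:

    bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
        --zkconnect localhost:2181 --group my-consumer-group --topic mytopic

The lag column (log size minus the group's current offset) is the number of messages still waiting to be processed.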
I tried this on the 0.8.0 release and it works for me. Could you make sure
there are no duplicate Kafka jars?
Thanks,
Jun
On Tue, Dec 10, 2013 at 7:08 AM, Yu, Libo wrote:
> Hi folks,
>
> I got this error when I tried to test the partition addition tool.
> bin/kafka-add-partitions.sh --partitio
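A quick, hedged way to check for duplicate jars from the Kafka directory (the path layout is the one a source build produces, so adjust as needed):

    # A NoSuchMethodError like the one above usually means two different
    # kafka_* jar versions ended up on the classpath.
    find . -name "kafka_*.jar"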
You will need to configure request.required.acks properly. See
http://kafka.apache.org/documentation.html#producerconfigs for details.
Thanks,
Jun
On Tue, Dec 10, 2013 at 1:55 AM, Nishant Kumar wrote:
> Hi All,
>
> I am using kafka 0.8.
>
>
> My producers configurations are as follows
>
>
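As a hedged illustration of where that setting goes with the 0.8 producer, here is a minimal sketch using the kafka.javaapi producer; the broker address and topic are placeholders, and the surrounding values are illustrative rather than recommendations:

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class SyncProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder broker list; point this at your own brokers.
            props.put("metadata.broker.list", "broker1:9092");
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            props.put("producer.type", "sync");
            // 1 = wait for the leader to acknowledge the write; see the producer config docs.
            props.put("request.required.acks", "1");
            props.put("retry.backoff.ms", "100");

            Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
            producer.send(new KeyedMessage<String, String>("mytopic", "hello"));
            producer.close();
        }
    }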
There was some talk a few months ago, not sure what the current status is.
On 12/10/13 10:01 AM, S Ahmed wrote:
Is there a book, or was this just an idea?
On Mon, Mar 25, 2013 at 12:42 PM, Chris Curtin wrote:
Thanks Jun,
I've updated the example with this information.
I've also removed some
I'll let Chetan comment if he's up for it.
-Steve
On Tue, Dec 10, 2013 at 8:40 AM, David Arthur wrote:
> There was some talk a few months ago, not sure what the current status is.
>
>
> On 12/10/13 10:01 AM, S Ahmed wrote:
>
>> Is there a book, or was this just an idea?
>>
>>
>> On Mon, Mar 25,
Hey Guys,
Yes, Ben Lorica (O'Reilly) and I are planning to pen a "Beginning Kafka"
book.
We only finalized this in late October and are hoping to start mid-month.
Chetan
On Tue, Dec 10, 2013 at 8:45 AM, Steve Morin wrote:
> I'll let chetan comment if he's up for it.
> -Steve
>
>
>
> On Tue, Dec 10
Great, so it's not even at the MEAP stage then :( Let me guess, it's going to
take 6 months to decide on what animal to put on the cover! :)
Looking forward to it, though!
On Tue, Dec 10, 2013 at 12:15 PM, chetan conikee wrote:
> Hey Guys
>
> Yes, Ben Lorica (O'Reilly) and I are planning to pen a "
Hey Guys,
I would love to contribute to the book, especially the portion on
Kafka-Spark integration, or to parts about Kafka in general.
I am building a Kafka-Spark real-time framework here at Gree Intl Inc,
processing on the order of MBs of data per second.
My profile:
www.linkedin.com/in/shafaqabdu
Shafaq,
What does the architecture of what you're building look like?
-Steve
On Tue, Dec 10, 2013 at 10:19 AM, Shafaq wrote:
> Hey Guys,
> I would love to contribute to the book, especially the portion on
> Kafka-Spark integration, or to parts about Kafka in general.
> I am building a Kafka-Spark
Hello,
First, I'm using version 0.7.2.
I'm trying to read some messages from a broker, but looking at Wireshark, it
appears that only part of a message is being read by the consumer. After that,
no other data is read, and I can verify that there are 10 messages on the
broker. I have the consu
Hello Casey,
What do you mean by "part of a message is being read"? Could you upload the
output and also the log of the consumer here?
Guozhang
On Tue, Dec 10, 2013 at 12:26 PM, Sybrandy, Casey <
casey.sybra...@six3systems.com> wrote:
> Hello,
>
> First, I'm using version 0.7.2.
>
> I'm trying
I figured out the cause. After compiling 0.8, the kafka_2.8.0-0.8.0-beta1
jars somehow get generated in core/target/scala-2.8.0/ as well, and that
caused the error.
Regards,
Libo
-Original Message-
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Tuesday, December 10, 2013 10:56 AM
To: users
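For anyone hitting the same NoSuchMethodError, a hedged cleanup sketch based on the stale beta jars Libo describes (the exact file names under core/target/scala-2.8.0/ may differ):

    # Remove the leftover beta jars so only the 0.8.0 release jar is picked up
    rm core/target/scala-2.8.0/kafka_2.8.0-0.8.0-beta1*.jar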
What is your configuration for data.dirs (the path where data is) and what
is the set of disks/volumes on the machine?
-Jay
On Tue, Dec 10, 2013 at 12:50 AM, CuiLiang wrote:
> Hi All,
>
> I'm using the Kafka 0.8 release build with 1 partition and 1 replica. My OS is
> Windows Server 2012 and the JDK is 1.7
Having a partial message transferred over the network is by design in Kafka
0.7.x (I can't speak to 0.8.x, though it may still be the case).
When the request is made, you tell the server the partition number, the
byte offset into that partition, and the size of response that you want.
The server finds that o
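To make those fetch parameters concrete, here is a minimal sketch against the 0.7 javaapi SimpleConsumer; the host, topic, and sizes are placeholders, and the signatures are written from memory of 0.7, so please verify them against the jar you are running:

    import kafka.api.FetchRequest;
    import kafka.javaapi.consumer.SimpleConsumer;
    import kafka.javaapi.message.ByteBufferMessageSet;
    import kafka.message.MessageAndOffset;

    public class FetchSketch {
        public static void main(String[] args) {
            // host, port, socket timeout (ms), receive buffer size -- placeholders
            SimpleConsumer consumer = new SimpleConsumer("broker1", 9092, 10000, 1024 * 1024);

            long offset = 0L;
            // topic, partition number, byte offset into the partition, max bytes to return
            FetchRequest request = new FetchRequest("mytopic", 0, offset, 1024 * 1024);
            ByteBufferMessageSet messages = consumer.fetch(request);

            // The iterator only yields complete messages; a partial message at the
            // tail of the response (cut off by the max-bytes limit) is skipped and
            // re-fetched on the next request from the advanced offset.
            for (MessageAndOffset mo : messages) {
                offset = mo.offset(); // in 0.7 this is the offset to use for the next fetch
            }
            consumer.close();
        }
    }

In other words, seeing a truncated message on the wire is expected; the client-side iterator hides it.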
Hi Steve,
The first phase would be pretty simple: essentially hooking up the
Kafka DStream consumer to perform KPI aggregation over the data streamed
from the Kafka broker cluster in real time.
We would like to maximize throughput by choosing the right message
payload size, correct Kafka top
The Kafka folder is d:\kafka_2.8.0-0.8.0, the Kafka log folder is
d:\data\kafka-logs, and the Kafka batch-file folder is d:\kafka_2.8.0-0.8.0\bin.
log.dirs=..\\..\\data\\kafka-logs
The log segment files are created correctly, but the log files can't be deleted.
My machine has C, D, E, F, G, and K partitions.
Thanks,
Liang Cu
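A hedged sketch of the corresponding server.properties line using an absolute path instead of the relative one above, which removes any dependence on the directory the broker is started from (the path is just the one described above):

    # server.properties -- in a .properties file, backslashes must be escaped;
    # forward slashes also work on Windows.
    log.dirs=d:\\data\\kafka-logs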
Please try 10.237.0.1:2181,10.237.0.2:2181,10.237.0.3:2181/kafka.
Thanks,
Liang Cui
2013/12/6 Yonghui Zhao
> Hi,
>
> If I don't want to register Kafka in the ZK root and I want to put it under a
> namespace, for example kafka1.
>
> If I set only one host in zk property something like
> 10.237.0.1
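A hedged sketch of the matching broker setting (the property is zookeeper.connect in 0.8; older 0.7 brokers used zk.connect), with the chroot suffix added once, after the last host:port:

    # server.properties -- brokers and ZK-based consumers must all use the same chroot string
    zookeeper.connect=10.237.0.1:2181,10.237.0.2:2181,10.237.0.3:2181/kafka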