I've seen that if you have a write with zero data, it will hang.
For a weird reason, it is compiling correctly now.
Time to play with Kafka. Thanks :)
I will try it and let you know.
Jorge.
Can you try running it in debug mode? (./gradlew jar -d)
Tim
It just hangs there without any output at all.
Jorge.
Any errors in the controller and state-change logs?
Thanks,
Jun
Currently, mirrormaker only logs the error if the producer fails. You can
potentially increase the number of retries to deal with producer failures.
Thanks,
Jun
With topic.metadata.refresh.interval.ms=1000, the producer should refresh
metadata and pick up the new partitions after 1 second. Do you see the
metadata being refreshed? You may have to turn on debug-level logging.
Thanks,
Jun
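The setting Jun mentions goes in the producer config; a minimal sketch (the default interval is much higher, so new partitions can go unnoticed for a while):

```properties
# producer.properties: refresh topic metadata every second so newly
# added partitions are picked up quickly
topic.metadata.refresh.interval.ms=1000
```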
What output was it stuck on?
Tim
Have you looked at Pinterest Secor? (
http://engineering.pinterest.com/post/84276775924/introducing-pinterest-secor
)
Cheers, Robert
Hi team, I'm a newcomer to Kafka, but I'm having some trouble trying to get it
to run on OS X.
Basically, building Kafka on OS X with 'gradlew jar' gets stuck forever without
any progress (indeed, I tried leaving it building all night, to no avail).
Any advice would be greatly appreciated. Thanks.
Are there unit-testing libs in Kafka that we can include to test our
producers/consumers?
I found the following, but the Maven libs mentioned there seem to be missing:
http://grokbase.com/t/kafka/users/13ck94p302/writing-unit-tests-for-kafka-code
Has anyone else tackled this issue?
Thanks,
-Vin
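One common workaround, absent an official test library: hide the producer behind a small trait so the business logic can be unit-tested against an in-memory fake instead of a live broker. This is only a sketch; `MessageSink`, `InMemorySink`, and `OrderService` are made-up names, and the production implementation would wrap the real Kafka producer.

```scala
import scala.collection.mutable.ListBuffer

// Minimal seam between application code and Kafka.
trait MessageSink {
  def send(topic: String, message: String): Unit
}

// Test double: records what was sent instead of talking to a broker.
class InMemorySink extends MessageSink {
  val sent = ListBuffer.empty[(String, String)]
  def send(topic: String, message: String): Unit = sent += ((topic, message))
}

// Hypothetical piece of business logic that publishes to Kafka.
class OrderService(sink: MessageSink) {
  def placeOrder(id: Int): Unit = sink.send("orders", s"order-$id")
}

object UnitTestSketch {
  def main(args: Array[String]): Unit = {
    val sink = new InMemorySink
    new OrderService(sink).placeOrder(42)
    // Inspect what the service "produced" without any broker running.
    println(sink.sent.head)
  }
}
```

The production wiring would pass a `MessageSink` implementation backed by a real producer; tests pass the in-memory fake and assert on `sent`.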
Hi, team.
I'm using Kafka 0.8.1.1.
I'm running 8 brokers on 4 machines (2 brokers per machine), and I have 3
topics, each with 16 partitions and 3 replicas.
The kafka-topics describe output is:
Topic:topicCDR PartitionCount:16 ReplicationFactor:3 Configs:retention.ms=360
Topic: topicCDR Partition: 0 Lead
As I read it, the consumer and producer in mirrormaker are independent and use
a queue to communicate. Therefore the consumers keep consuming and committing
offsets to ZK even if the producer is failing. Is this still the way it works
in 0.8.0, and are there any plans to change it?
Is there any way to minimize data loss in this case?
Hi,
I used the add-partition functionality in create-topics to alter a previous
topic and increase the partitions. I noticed that after the new partitions
were added, they don't receive data immediately from the producer unless a
new producer is started up or the old producer is restarted.
Here is
Ack! Thanks for pointing that out. Should be fixed now.
-Jay
Hi there,
As I was excitedly reading through the introductory documentation, I came
across a download link [1] in the quickstart section [2]. It seems the
file isn't there anymore.
It didn't take me very long to find an alternate source for the download [3],
so that's all good. I apologise if th
I'd love to get some insight into how things work at LinkedIn in terms of
your web servers and Kafka producers.
You probably connect to multiple Kafka clusters, so let's assume you
are only connecting to a single cluster.
1. Do you use a single producer for all message types/topics?
2. For y
Yes, the producer is thread safe, and sharing instances will be more
efficient if you are producing in async mode.
-Jay
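To illustrate the sharing pattern Jay describes, here is a sketch only: `FakeProducer` is a hypothetical stand-in for the real Kafka producer (which needs a live broker), and the point is that one lazily initialized instance can safely serve many threads.

```scala
import java.util.concurrent.atomic.AtomicInteger

// Hypothetical stand-in for the Kafka producer: it just counts
// how many times it is constructed.
class FakeProducer {
  FakeProducer.constructed.incrementAndGet()
  def send(topic: String, message: String): Unit = ()
}
object FakeProducer {
  val constructed = new AtomicInteger(0)
}

// One application-wide producer; `lazy val` initialization is thread
// safe in Scala, so concurrent first access builds exactly one instance.
object SharedProducer {
  lazy val producer = new FakeProducer
}

object SharedProducerDemo {
  def main(args: Array[String]): Unit = {
    val threads = (1 to 8).map { i =>
      new Thread(new Runnable {
        def run(): Unit = SharedProducer.producer.send("events", s"msg-$i")
      })
    }
    threads.foreach(_.start())
    threads.foreach(_.join())
    // All eight threads shared a single producer instance.
    println(FakeProducer.constructed.get)
  }
}
```

The real version would hold the Kafka producer in the object exactly the same way, built once from a `ProducerConfig`.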
In my web application, I should be creating a single instance of a producer,
correct?
So in Scala I should be doing something like:
object KafkaProducer {
  // props...
  val producer = new Producer[AnyRef, AnyRef](new ProducerConfig(props))
}
And then, say, in my QueueService I would do:
class Q
There is Bifrost, which archives Kafka data to S3:
https://github.com/uswitch/bifrost
Obviously that's a fairly specific archive solution, but it might work for
you.
Mark.
You should do this as a consumer (i.e. an "archiveDataConsumer").
Take a look at the AWS section of the ecosystem page
https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem (e.g.
https://github.com/pinterest/secor).
The tools page is also a good place to check out:
https://cwiki.apache.org/confluence/di
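As a sketch of the consumer-based approach (all names here are made up; a real archiver would read from a Kafka consumer stream and write to S3 rather than local files), the core is just a loop that groups messages into batches and flushes each batch to cold storage:

```scala
import java.nio.file.{Files, Paths}

object ArchiveConsumerSketch {
  // Drain messages, write them out in fixed-size batches, and return
  // the number of batch files produced. A local directory stands in
  // for the real archive store.
  def archive(messages: Iterator[String], batchSize: Int, dir: String): Int = {
    var batches = 0
    messages.grouped(batchSize).foreach { batch =>
      val path = Paths.get(dir, s"batch-$batches.log")
      Files.write(path, batch.mkString("\n").getBytes("UTF-8"))
      batches += 1
    }
    batches
  }

  def main(args: Array[String]): Unit = {
    val dir = Files.createTempDirectory("archive").toString
    // 10 messages in batches of 4 -> 3 batch files.
    val n = archive((1 to 10).map(i => s"msg-$i").iterator, 4, dir)
    println(n)
  }
}
```

In a real deployment the consumer's offset commits give you the at-least-once handoff: commit only after a batch has been flushed to the archive.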
Hi all,
I'm looking for a way of archiving data.
The data is hot for a few days in our system.
After that it is rarely used. Speed is not so important for the archive.
Let's say we have a Kafka cluster and a storage system.
It would be great if Kafka supported moving data to the storage system instead
of