I think I have encountered the same problem as you, @chenlax.
Please refer to the thread on d...@kafka.apache.org titled "topic's partition have no
leader and isr"; I will forward that mail to users@kafka.apache.org.
On Wed, Jul 9, 2014 at 10:46 AM, chenlax wrote:
> thank you Guozhang, I don't know why the
I think @chenlax has encountered the same problem as me on
users@kafka.apache.org, in the thread titled "How recover leader when broker restart".
cc to users@kafka.apache.org.
On Wed, Jul 9, 2014 at 3:10 PM, 鞠大升 wrote:
> @Jun Rao, Kafka version: 0.8.1.1
>
> @Guozhang Wang, I cannot find the original
Actually, Kafka only removes old segments; the last (active) segment is
never removed. So, if you want a 10-minute retention, you need to
configure log rolling such that log segments are rolled at least every 10
minutes.
Thanks,
Jun
On Tue, Jul 8, 2014 at 10:04 PM, Virendra Pratap Singh <
vpsi
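To make the above concrete, here is a rough sketch of broker settings aiming at
roughly 10-minute retention (key names as of 0.8.1; the exact minute-level retention
key and all values are assumptions for illustration, not a tested recommendation):

    # assumed sketch only
    log.retention.minutes=10
    # time-based rolling is only hourly at this point, so force frequent rolls by size
    log.segment.bytes=52428800
    log.roll.hours=1

Since old data can only be deleted a whole segment at a time, the segment size has
to be small enough that segments actually close within the retention window.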
Yup. In fact, I just ran the test program again while the Kafka broker is
still running, using the same user of course. I was able to get up to 10K
connections with the test program. The test program uses the same java NIO
library that the broker does. So the machine is capable of handling that
man
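For context, a connection test along those lines could look roughly like the sketch
below (the host, port, and connection count are placeholders, and this is only an
assumed reconstruction, not the actual test program):

import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.List;

// Rough sketch: open many client connections to one broker and report how far we got.
public class ConnectionFloodTest {
    public static void main(String[] args) throws Exception {
        List<SocketChannel> open = new ArrayList<SocketChannel>();
        try {
            for (int i = 0; i < 10000; i++) {
                SocketChannel ch = SocketChannel.open();                 // same NIO classes the broker uses
                ch.connect(new InetSocketAddress("broker-host", 9092));  // placeholder broker address
                open.add(ch);
            }
        } finally {
            System.out.println("Opened " + open.size() + " connections before stopping");
            for (SocketChannel ch : open) {
                ch.close();
            }
        }
    }
}

Running it as the same user that starts the broker is the interesting part, since the
per-user file descriptor limit is what usually caps the count.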
Well currently the log rollup is controlled via
log.roll.hours
Or
log.segment.bytes
Given that we now have support for log retention in minutes, I guess it
would be apt for rollup to also be configurable in minutes. Whom/where
should I ask to have that coded in?
On a similar not
I have the same problem. I didn't dig deeper, but I saw this happen when I
launched Kafka in daemon mode. I found that daemon mode just launches Kafka
with nohup. Not quite clear why this happens.
On Wed, Jul 9, 2014 at 9:59 AM, Lung, Paul wrote:
> Yup. In fact, I just ran the test program again wh
I don't know if that is your problem, but I had this output when my brokers
couldn't talk to each other...
ZooKeeper was using the FQDN but my brokers didn't know the FQDN of
the other brokers...
If you look at your broker's info in zk (get /brokers/ids/#ID_OF_BROKER) can
you ping/connect to
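If it helps, a small sketch of pulling that broker registration programmatically (the
ZooKeeper address and broker id are placeholders; the znode holds JSON with the
host the other brokers must be able to resolve):

import org.apache.zookeeper.ZooKeeper;

// Rough sketch: print a broker's registration znode to check the advertised host/FQDN.
public class BrokerRegistrationCheck {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zk-host:2181", 10000, event -> { });
        try {
            byte[] data = zk.getData("/brokers/ids/0", false, null);  // JSON with "host" and "port"
            System.out.println(new String(data, "UTF-8"));
            // From each broker machine, check that the "host" value resolves and is reachable on "port".
        } finally {
            zk.close();
        }
    }
}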
Hello Jun, are the new producer, consumer and offset management in the
trunk already? Can we start developing libraries with 0.8.2 support
against trunk?
Thanks!
On Tue, Jul 8, 2014 at 9:32 PM, Jun Rao wrote:
> Yes, 0.8.2 is compatible with 0.8.0 and 0.8.1 in terms of wire protocols
> and the upgrade
Is it possible your container wrapper somehow overrides the file handle
limit?
Thanks,
Jun
On Wed, Jul 9, 2014 at 9:59 AM, Lung, Paul wrote:
> Yup. In fact, I just ran the test program again while the Kafka broker is
> still running, using the same user of course. I was able to get up to 10K
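A quick way to see the limit the broker's JVM actually ends up with is to check from
inside the same launch path (Linux-only sketch; run it under the same wrapper/user
that starts the broker; the path and the parsing here are assumptions):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

// Rough sketch: print the effective open-file limit of the current JVM process on Linux.
public class OpenFileLimit {
    public static void main(String[] args) throws Exception {
        for (String line : Files.readAllLines(Paths.get("/proc/self/limits"), StandardCharsets.UTF_8)) {
            if (line.startsWith("Max open files")) {
                System.out.println(line);  // shows the soft and hard limits this process actually got
            }
        }
    }
}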
Dear experts,
I'm new to Kafka and am doing some study around overall real-time data
integration architecture. What are the common ways of pushing data into
Kafka? Does anyone use an ESB or other tools to feed various message streams
into Kafka in real time, in an event-driven fashion?
Thanks.
-HQ
This is being worked on in https://issues.apache.org/jira/browse/KAFKA-1325
Thanks,
Jun
On Wed, Jul 9, 2014 at 11:42 AM, Virendra Pratap Singh <
vpsi...@yahoo-inc.com.invalid> wrote:
> Well currently the log rollup is controlled via
>
> log.roll.hours
> Or
> log.segment.bytes
>
> Given that we
The new producer and the new offset management are already in trunk. The
new consumer is to be developed.
Thanks,
Jun
On Wed, Jul 9, 2014 at 1:54 PM, Kane Kane wrote:
> Hello Jun, are the new producer, consumer and offset management in the
> trunk already? Can we start developing libraries with 0.
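For anyone who wants to start experimenting against trunk, the new producer is used
roughly like the sketch below (the config keys and serializer classes follow the 0.8.2
producer API, so the exact signatures on trunk at the time of this thread may still
differ; the broker address and topic are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Rough sketch of sending one message with the new Java producer.
public class NewProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-host:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        producer.send(new ProducerRecord<String, String>("test-topic", "key", "value"));  // async send
        producer.close();  // flushes any pending sends
    }
}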
That depends on the use case. Perhaps you can start by looking at some of
the presentations on Kafka usage in
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+papers+and+presentations
Thanks,
Jun
On Wed, Jul 9, 2014 at 4:31 PM, HQ Li wrote:
> Dear experts,
>
> I'm new to Kafka and am d
Thanks, Jun. I did read the use cases and the presentations. Particularly for
operational monitoring data and other lower-throughput streams, it feels that
Camel or Mule could help handle ingestion and output flows.
I'm wondering whether those ETL tools are really used together with Kafka for a
unified