art A, its value will gradually decrease to 0; only bounce broker B
> when URP drops to 0.
>
> Guozhang
>
> On Thu, May 21, 2015 at 6:12 AM, Helin Xiang wrote:
>
> > Hi, Guozhang
> >
> > Is there a way to monitor/check if a broker has caught up with most of the
Hi, Guozhang
Is there a way to monitor/check if a broker has caught up with most of the
replicas of topics in sync?
We are considering an upgrade to 0.8.2.1. A rolling upgrade plan seems
possible because we don't want to lose any data. It occurred to me that a
rolling upgrade without control can still
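The wait-before-bouncing step Guozhang describes (only restart the next broker once under-replicated partitions, URP, drops back to 0) can be sketched as a simple polling loop. This is only a sketch: `fetchUnderReplicatedCount` is a hypothetical stand-in for reading the broker's `UnderReplicatedPartitions` JMX gauge, and here it just replays canned values.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Sketch: poll the UnderReplicatedPartitions count and only proceed to
// the next broker once it reaches 0. fetchUnderReplicatedCount() is a
// hypothetical stand-in for a JMX read; it replays canned samples here.
public class RollingRestartSketch {
    private final Iterator<Integer> canned;

    public RollingRestartSketch(List<Integer> samples) {
        this.canned = samples.iterator();
    }

    // Hypothetical JMX read, simulated with canned samples.
    int fetchUnderReplicatedCount() {
        return canned.next();
    }

    // Returns how many polls passed before URP reached 0.
    public int pollsUntilCaughtUp() {
        int polls = 0;
        while (fetchUnderReplicatedCount() > 0) {
            polls++;
            // A real script would Thread.sleep(...) between polls here.
        }
        return polls;
    }

    public static void main(String[] args) {
        RollingRestartSketch s =
            new RollingRestartSketch(Arrays.asList(3, 2, 1, 0));
        System.out.println("polls before caught up: " + s.pollsUntilCaughtUp());
    }
}
```

In a real rolling upgrade the loop would read the gauge from the freshly bounced broker and gate the restart of the next one on it.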
> > Is there a leader movement just before the get latest offset call? If
> > your
> > > follower is not synced and it then becomes the leader due to some
> reason,
> > > it will not have the complete partition data.
> > >
> > > Guozhang
> > >
> >
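Guozhang's point about leader movement can be illustrated with a toy model: if a follower that has not fully caught up becomes the leader, the "latest offset" it reports is smaller than what the old leader had. The numbers below are invented purely for illustration.

```java
// Toy model of the leader-movement scenario: an out-of-sync follower
// that takes over as leader reports a smaller latest offset than the
// old leader had. All values here are illustrative.
public class StaleOffsetSketch {
    final long leaderLogEnd;    // old leader's log end offset
    final long followerLogEnd;  // lagging follower's log end offset

    StaleOffsetSketch(long leaderLogEnd, long followerLogEnd) {
        this.leaderLogEnd = leaderLogEnd;
        this.followerLogEnd = followerLogEnd;
    }

    // After an unclean leader change, the follower's log end is what
    // a "get latest offset" request would now return.
    long latestOffsetAfterFailover() {
        return followerLogEnd;
    }

    public static void main(String[] args) {
        StaleOffsetSketch s = new StaleOffsetSketch(1000, 600);
        System.out.println("latest offset before failover: " + s.leaderLogEnd);
        System.out.println("latest offset after failover:  "
            + s.latestOffsetAfterFailover());
    }
}
```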
u just need to create multiple
> partitions for a topic in 0.8 with a replication factor of 1. When one of
> the partitions is not available, the producer will route the data to other
> partitions.
>
> Thanks,
>
> Jun
>
> On Wed, Dec 10, 2014 at 5:58 PM, Helin Xiang wr
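The routing behavior Jun describes (with replication factor 1, the producer simply sends to one of the remaining available partitions when one partition's broker is down) can be sketched like this. The partition ids and the "down" partition are made-up values for illustration, not taken from the thread.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Sketch of routing around an unavailable partition: with replication
// factor 1, the producer picks randomly among the partitions that are
// still available. Partition ids here are illustrative.
public class RoutingSketch {
    static int pickPartition(Set<Integer> available, Random rnd) {
        Integer[] choices = available.toArray(new Integer[0]);
        return choices[rnd.nextInt(choices.length)];
    }

    public static void main(String[] args) {
        // Topic with 4 partitions; assume partition 2's broker is down.
        Set<Integer> available = new HashSet<>(Arrays.asList(0, 1, 3));
        Random rnd = new Random(42);
        for (int i = 0; i < 5; i++) {
            System.out.println("message " + i + " -> partition "
                + pickPartition(available, rnd));
        }
    }
}
```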
lockingChannel.disconnect()
-}
+//}
}
On Sat, Dec 13, 2014 at 1:23 AM, Jun Rao wrote:
>
> Hmm, but if we hit an exception in BlockingChannel.connect(), we will
> call BlockingChannel.disconnect(), which will close the socket channel.
>
> Thanks,
>
> Jun
>
>
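The connect/disconnect pattern under discussion can be sketched with plain NIO, without the Kafka classes: if `connect()` fails (here with an `UnresolvedAddressException`, the same exception seen later in this thread), the failure path closes the channel, which is what `BlockingChannel.disconnect()` amounts to. This is a minimal sketch, not Kafka's actual code.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;
import java.nio.channels.UnresolvedAddressException;

// Plain-NIO sketch of the pattern: on a failed connect, close the
// channel (the "disconnect" step) so no half-open socket is leaked.
public class ConnectSketch {
    // Returns true on success, false if the connect failed and the
    // channel was closed.
    static boolean connectOrClose(InetSocketAddress addr) {
        try {
            SocketChannel channel = SocketChannel.open();
            try {
                channel.connect(addr);
                return true;
            } catch (UnresolvedAddressException e) {
                channel.close(); // the disconnect() step
                return false;
            }
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // createUnresolved skips DNS, so connect() reliably throws
        // UnresolvedAddressException, like the brokers saw during the
        // network device problem.
        InetSocketAddress bad =
            InetSocketAddress.createUnresolved("no-such-broker", 9092);
        System.out.println("connected: " + connectOrClose(bad));
    }
}
```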
s survive failure. On the producer side, set ack = -1 for it
> to work as expected.
>
> On Wed, Dec 10, 2014 at 7:14 PM, Helin Xiang wrote:
>
> > Thanks for the reply , Joe.
> >
> > In my opinion, when replica == 1, the ack == -1 would cause producer
> > s
replica remains.
>
> /***
> Joe Stein
> Founder, Principal Consultant
> Big Data Open Source Security LLC
> http://www.stealth.ly
> Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
> ****/
>
> On Wed,
Hi,
in some topics of our system, the data volume is so huge that we think
doing an extra replica is a waste of disk and network resources (plus the
data is not so important).
First, we used 1 replica + ack=0, and found that when 1 broker is down, we
would lose 1/n of the data.
Then we tried 1 replica + ack=1, and
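The two setups being compared differ only in the producer-side ack setting; in the 0.8-era producer this is the `request.required.acks` config. A minimal sketch of the configs, with hypothetical broker hostnames:

```java
import java.util.Properties;

// Sketch of the 0.8-era producer config being compared. With a
// replication factor of 1 there is no second copy, so acks only
// control whether the producer waits for the single leader's response.
// Broker hostnames below are hypothetical.
public class AckConfigSketch {
    static Properties producerConfig(String acks) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092,broker2:9092");
        // "0" = fire and forget, "1" = wait for the leader,
        // "-1" = wait for all in-sync replicas.
        props.put("request.required.acks", acks);
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerConfig("1"));
    }
}
```

With ack=0 the producer never learns that a partition's only replica is down, which matches the observed 1/n data loss.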
Sorry for not replying in the thread; please ignore my last email.
Hi, Jun
We experienced a network device problem that caused all brokers to crash.
After investigation, we found the server logs throwing similar exceptions,
like this:
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:29)
at sun.nio.ch.SocketChannelImpl.connec
eperException$BadVersionException:
KeeperErrorCode = BadVersion for
/brokers/topics/a.s.3/partitions/26/state
On Mon, Dec 8, 2014 at 6:59 PM, Helin Xiang wrote:
> Hi,
>
> We have currently upgraded our kafka cluster from 0.7.2 to 0.8.1.1.
>
> In one of our application, we want to get all
Hi,
We have recently upgraded our kafka cluster from 0.7.2 to 0.8.1.1.
In one of our applications, we want to get all partitions' latest offsets,
so we use the getOffsetsBefore Java API (latest).
We believe that at some point, one partition's latest offset we got was much
smaller than its real latest o
Hi Guozhang,
sorry for asking a slightly unrelated question here.
We found a consumer stopped fetching data when doing a network upgrade:
if the consumer has a connection problem with one broker (but is OK with
zookeeper and the other brokers), the FetcherRunnable will stop, but there
is no chance to resta
On Fri, May 3, 2013 at 11:50 PM, Jun Rao wrote:
> The consumer hit the exception because the broker closed the socket. What
> does the broker log around the same time say? It should tell you the reason
> why the broker closed the socket.
>
> Thanks,
>
> Jun
>
>
> On
Thanks Jun.
I will try it.
On Sat, Apr 27, 2013 at 12:15 PM, Jun Rao wrote:
> It should work, but may not be well tested.
>
> Thanks,
>
> Jun
>
>
> On Fri, Apr 26, 2013 at 7:41 PM, Helin Xiang wrote:
>
> > Hi,
> >
> > We currently use Kafka 0.7.2
Hi,
We currently use Kafka 0.7.2.
Is it OK to use different whitelists for different consumers in the same
consumer group?
Thanks
--
*Best Regards
Helin Xiang*
Hi, we are using kafka 0.7.2.
We started 2 brokers and ran them for a while. After that, we shut 1 broker
down, removed its data directory, and restarted it. But the broker won't
receive any logs.
We checked zookeeper and found that the old topic contains only 1 broker id.
After that, we manually made som
Hi,
We are using kafka 0.7.2.
The situation is a little complicated:
1. We use the Java API and multiple threads (like 16 threads) to send logs
to kafka. Each thread contains its own kafka.javaapi.producer.Producer
object.
2. There is one topic whose partition count is set to 4. We use a random
partitio
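The setup described, several producer threads independently spraying messages over a 4-partition topic with a random partitioner, can be sketched without the Kafka classes. The thread and message counts are taken from the description; the per-thread `Random` stands in for each thread's own producer.

```java
import java.util.Map;
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the described setup: N threads, each with its own Random
// (standing in for its own Producer), picking a random partition of a
// 4-partition topic per message. Counts show the resulting spread.
public class MultiThreadPartitionSketch {
    static final int PARTITIONS = 4;

    static Map<Integer, Integer> run(int threads, int msgsPerThread) {
        ConcurrentHashMap<Integer, Integer> counts = new ConcurrentHashMap<>();
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final long seed = t;
            workers[t] = new Thread(() -> {
                Random rnd = new Random(seed); // per-thread, like per-thread producers
                for (int i = 0; i < msgsPerThread; i++) {
                    int partition = rnd.nextInt(PARTITIONS);
                    counts.merge(partition, 1, Integer::sum);
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            try {
                w.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(run(16, 1000));
    }
}
```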
Hi,
We use kafka 0.7.2 and use a virtual IP on the producer end; the VIP tool
we are using is LVS.
Sadly, it does not seem to work with LVS: when the broker changes, the
producer does not seem to reconnect to the new broker.
So has anyone successfully used the same VIP mode? What VIP tools are you
using?
TH
Rao wrote:
> Hmm, both log4j messages suggest that the broker received some corrupted
> produce requests. Are you using the java producer? Also, we have seen that
> network router problems caused corrupted requests before.
>
> Thanks,
>
> Jun
>
> On Mon, Mar 18, 2013 at
hat something is wrong on the server.
>
> In 0.8, the producer will wait for an ack from the broker and will timeout
> if no response is received.
>
> Thanks,
>
> Jun
>
> On Mon, Jan 21, 2013 at 6:56 PM, Helin Xiang wrote:
>
> > Hi,
> >
> > I am doing some