I assume that "log.retention.hours" means the number of hours since the log
file was last modified, correct? Or is it since the log file was created?
If I set "log.retention.hours" to 48 hours, does that mean that _any_ log
file older than 48 hours will be deleted, or only log files that have
reached
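(For what it's worth, my understanding of the 0.8-era brokers is that time-based retention is checked per log segment against the segment file's last-modified time, and only segments that have already been rolled are eligible; the active segment is never deleted. A minimal sketch of the relevant server.properties keys, with assumed values:)

    # a rolled segment is deleted once its last-modified time
    # is older than this window
    log.retention.hours=48
    # segments roll at this size; deletion happens per segment,
    # not per individual message
    log.segment.bytes=1073741824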
t a zk path doesn't exist. In this
particular case, the path is not expected to always exist.
>Thanks,
>
>Jun
On Tue, Mar 25, 2014 at 12:09 AM, Tom Amon wrote:
> Whenever a session is expired by ZooKeeper
this timeframe?
Thanks,
Neha
On Tue, Mar 25, 2014 at 12:55 PM, Tom Amon wrote:
> Again thank you for your patience
>
> Is the following pattern normal for a broker that is booting? This is
> from my zookeeper log. It seems to connect and disconnect multiple
>
We see the following messages in the broker logs whenever we reboot a
broker. These messages filled up 200MB of log files in less than 1 minute.
Are these normal? For reference we have enabled controlled shutdown on each
broker.
[2014-03-25 22:52:45,558] INFO Reconnect due to socket error: null
(
rebalance failure. The consumer
> will retry failed rebalances. If all retries fail, we just log the error.
>
> Thanks,
>
> Jun
>
>
> On Wed, Mar 26, 2014 at 5:01 PM, Tom Amon wrote:
>
> > The pattern for creating and operating consumers that we use is to
>
The pattern for creating and operating consumers that we use is to create
the consumer connector, create the streams and then consume each stream by
waiting on the iterator.
If a rebalance occurs and fails, how is the error raised to the consumer?
Will I get an exception while waiting on the iterator?
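(For reference, a minimal sketch of the pattern described, using the 0.8 high-level consumer API; the topic, group id, and ZooKeeper address are placeholders:)

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class ConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zk1:2181"); // placeholder
            props.put("group.id", "my-group");          // placeholder
            // create the consumer connector
            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            // request one stream for the topic
            Map<String, Integer> topicCounts = new HashMap<String, Integer>();
            topicCounts.put("my-topic", 1);
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(topicCounts);
            // consume by blocking on the stream's iterator
            ConsumerIterator<byte[], byte[]> it =
                streams.get("my-topic").get(0).iterator();
            while (it.hasNext()) {
                System.out.println(new String(it.next().message()));
            }
        }
    }

(My understanding, worth confirming, is that a rebalance that exhausts all retries surfaces as a ConsumerRebalanceFailedException from the connector rather than from the iterator itself.)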
Where in the broker logs can I see that a rebalance is happening? Will the
state change log tell me this?
Again, thank you for your patience.
Is the following pattern normal for a broker that is booting? This is from
my zookeeper log. It seems to connect and disconnect multiple times in
rapid succession. The last message is a disconnect message with no
subsequent connect. Other zookeeper boxes don't
My apologies for mail bombing the list. I'm banging my head against a
production issue.
In short, can I delete old log and index files manually? I have two brokers
out of five that are hosting ~1200 partitions each, though based on my
understanding from a previous email they should really only be
Thanks,
Jun
On Tue, Mar 25, 2014 at 12:09 AM, Tom Amon wrote:
> Whenever a session is expired by ZooKeeper I see the following
> messages (one per consumer I think) in the ZooKeeper log:
>
> 2014-03-25 00:05:12,953 - INFO
> [ProcessThread:-1:PrepRequestProcessor@419]
Whenever a session is expired by ZooKeeper I see the following messages
(one per consumer I think) in the ZooKeeper log:
2014-03-25 00:05:12,953 - INFO [ProcessThread:-1:PrepRequestProcessor@419]
- Got user-level KeeperException when processing
sessionid:0x344f675fcee0164 type:create cxid:0x7566
Does the following help?
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-HowdoIchoosethenumberofpartitionsforatopic?
Thanks,
Jun
On Mon, Mar 24, 2014 at 4:26 PM, Tom Amon wrote:
> Hi All,
>
> I'm trying to tune the log retention size and have a question.
Hi All,
I'm trying to tune the log retention size and have a question.
I have a replication factor of 3 on a cluster of 5 brokers. How many
partitions will a broker host? Is there any way to tell? Or do I have to
assume that each broker will host all partitions and size accordingly?
Thanks much
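(A rough sizing rule, assuming partitions and their replicas are spread evenly: each broker hosts about (partitions x replication factor) / brokers partition replicas. For example, with 100 partitions at replication factor 3 on 5 brokers, that is 100 x 3 / 5 = 60 replicas per broker; the 100 is an illustrative number only.)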
On Fri, Mar 21, 2014 at 1:04 AM, Tom Amon wrote:
> Hi All,
>
> I have a question regarding ordering of consumed messages. We
> timestamp our messages and send them into Kafka in order. I wrote a
> simple consumer that simply consumes the messages and prints out the
>
Hi All,
I have a question regarding ordering of consumed messages. We timestamp our
messages and send them into Kafka in order. I wrote a simple consumer that
simply consumes the messages and prints out the timestamp. I see messages
for all seven days' worth of data being consumed at once.
Our set
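(One note that may explain this: Kafka guarantees ordering per partition, not per topic, so a consumer draining a multi-partition topic will see timestamps interleave across partitions. A sketch that prints the partition next to each message to check whether that is what is happening, assuming the 0.8 high-level consumer's stream setup from earlier in the thread is already in place:)

    // inside the usual stream-consumption loop; "stream" is a
    // KafkaStream<byte[], byte[]> obtained from createMessageStreams,
    // and MessageAndMetadata comes from kafka.message
    ConsumerIterator<byte[], byte[]> it = stream.iterator();
    while (it.hasNext()) {
        MessageAndMetadata<byte[], byte[]> m = it.next();
        // ordering is only guaranteed within a partition, so
        // cross-partition interleaving of timestamps is expected
        System.out.println("partition=" + m.partition()
            + " offset=" + m.offset());
    }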
s,
> Jun
On Wed, Feb 5, 2014 at 2:46 PM, Tom Amon wrote:
> Hi,
>
> We have a functioning producer that uses <byte[], byte[]> as the
> Producer and KeyedMessage signature. We specify the DefaultEncoder in
> the properties. In Java 1.6 it works fine. However, under Java 1.7 it
Hi,
We have a functioning producer that uses <byte[], byte[]> as the Producer
and KeyedMessage signature. We specify the DefaultEncoder in the
properties. In Java 1.6 it works fine. However, under Java 1.7 it gives the
following error:
Failed to collate messages by topic, partition due to: [B incompatible with
java.lang.String
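(If it helps: the usual cause of this class of error is the key encoder silently falling back to serializer.class and then not matching the key's type. A hedged sketch against the 0.8 producer API; the broker address, topic, and key are placeholders:)

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class ProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("metadata.broker.list", "broker1:9092"); // placeholder
            // DefaultEncoder passes byte[] payloads through untouched
            props.put("serializer.class", "kafka.serializer.DefaultEncoder");
            // set the key encoder explicitly; it otherwise defaults to
            // serializer.class, which must then match the key's type
            props.put("key.serializer.class", "kafka.serializer.StringEncoder");
            Producer<String, byte[]> producer =
                new Producer<String, byte[]>(new ProducerConfig(props));
            producer.send(new KeyedMessage<String, byte[]>(
                "my-topic", "key-1", new byte[] {1, 2, 3}));
            producer.close();
        }
    }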
I'm looking to create a topic with about 1400 partitions to allow a high
degree of parallel processing. We have 5 brokers so that would be 280
partitions per box. Has anyone done something with this number of
partitions before?
Thanks.
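(A quick replica-count check, assuming the replication factor of 3 mentioned earlier in the thread applies: 280 per box counts only partition leaders. With replication, 1400 partitions x 3 replicas = 4200 partition logs, or roughly 4200 / 5 = 840 logs per broker under even spread.)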
normal?
Thanks.
---
These configs are reasonable and shouldn't affect consumer timeouts. Did
you get the time breakdown from the request log?
Thanks,
Jun
On Fri, Dec 20, 2013 at 6:14 PM, Tom Amon wrote:
> I figured out the c
Did you set fetch.wait.max.ms in the consumer config? If so, did you
make sure that it is smaller than socket.timeout.ms? Also, if you look at
the request log, how long does it take to complete the timed out fetch
request?
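(Illustrative values only; the property names are from the 0.8 consumer config:)

    # how long the broker may hold a fetch open before responding
    fetch.wait.max.ms=100
    # must be comfortably larger than fetch.wait.max.ms, or the
    # consumer can time out a socket the broker is still servicing
    socket.timeout.ms=30000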
Thanks,
Jun
On Tue, Dec 17, 2013 at 2:30 PM, Tom Amon wrote:
> It appears that consumers that do not get messages
ava or defaults?
/*******************************************
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
********************************************/
On Tue, Dec 17, 2013 at 2:01 PM, Tom Amon wrote:
are not seeing your broker(s). Can you
confirm the brokers are up?
On Mon, Dec 16, 2013 at 7:52 PM, Tom Amon wrote:
> Hi All,
>
> I have a situation where one producer/consumer is causing timeout
> errors on the Kafka broker. The exception in the logs looks like this:
>
> [
Hi All,
I have a situation where one producer/consumer is causing timeout errors on
the Kafka broker. The exception in the logs looks like this:
[2013-12-16 17:32:25,992] ERROR Closing socket for /10.236.67.30 because of
error (kafka.network.Processor)
java.io.IOException: Connection timed out
I'm running the Kafka 0.8 version downloaded from the downloads page. I'm
getting lots of issues with socket timeouts from producer and consumer. I'm
also getting errors where brokers that are shut down in a controlled manner
do not get removed from the meta data in other brokers. For instance, I
h
Hi All,
I don't see the replica configuration settings (outlined in the Kafka 0.8
documentation) in the configuration file that comes with the distribution.
I was wondering if they are necessary or if they have reasonable defaults?
Are there implications for not having them in the configuration file?
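(For reference, a few of the replica settings and what I believe are their stock defaults in 0.8; worth double-checking against the documentation:)

    default.replication.factor=1
    num.replica.fetchers=1
    replica.fetch.max.bytes=1048576
    replica.lag.time.max.ms=10000
    replica.socket.timeout.ms=30000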
I've read in the docs and papers that LinkedIn has an auditing system that
correlates message counts from tiers in their system using a time window of
10 minutes. The time stamp on the message is used to determine which window
the message falls into.
My question is how do you account for clock drift?
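(For concreteness, the windowing described reduces to bucketing each message by its embedded timestamp; the rule below is my reading of the docs, not LinkedIn's actual code:)

    // messages whose timestamps land in the same 10-minute bucket
    // are counted together when tiers are compared
    long windowMs = 10L * 60L * 1000L;
    long bucket = messageTimestampMs / windowMs;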
Hi All,
I am hoping that mirror maker is what I'm looking for. I would like to have
complete data center fail over from one kafka cluster to another. If one
data center goes down, my producers and consumers will start using the
mirrored cluster until the primary is back online. Is that something I
another DC.
>
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+mirroring+%28MirrorMaker%29
> Thanks,
> Joel
> On Mon, Sep 16, 2013 at 4:19 PM, Tom Amon wrote:
> > Is it possible to specify data center information to Kafka such that
> > all replicas for a giv
Is it possible to specify data center information to Kafka such that all
replicas for a given partition are not in the same data center? We have a
cluster that spans 2 data centers and I'd like to ensure that we're covered
if we lose one of them.