Hi,
Thanks for the help. I found the issue. I was appending to the bottom of
the file when I should have placed the line below at the top.
echo 'KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"' |
fwiw, we wrap the kafka server in our java service container framework.
This allows us to use the default GraphiteReporter class that is part of
the yammer metrics library (which is used by kafka directly). So it works
seamlessly. (We've since changed our use of GraphiteReporter to instead
send a
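For anyone wanting the same effect without a container framework, a minimal
sketch (host, port, and prefix here are assumptions, not the poster's setup)
of enabling the Yammer GraphiteReporter inside the broker JVM:

import java.util.concurrent.TimeUnit;
import com.yammer.metrics.Metrics;
import com.yammer.metrics.reporting.GraphiteReporter;

public final class GraphiteMetrics {
    private GraphiteMetrics() {}

    // Call once, early in the JVM that hosts the broker. This ships everything
    // in the default registry (where Kafka's yammer metrics are registered)
    // to an assumed Graphite host every 60 seconds.
    public static void start() {
        GraphiteReporter.enable(Metrics.defaultRegistry(),
                60, TimeUnit.SECONDS,
                "graphite.example.com", 2003, // hypothetical host and port
                "kafka");                     // hypothetical metric prefix
    }
}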
In our case, we use protocol buffers for all messages, and these have
simple serialization/deserialization builtin to the protobuf libraries
(e.g. MyProtobufMessage.toByteArray()). Also, we often produce/consume
messages without conversion to/from protobuf Objects (e.g. in cases where
we are just
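As a rough illustration of that pattern (broker address and topic are
assumptions; MyProtobufMessage stands in for any protoc-generated class),
producing raw protobuf bytes through the 0.8 producer's pass-through
DefaultEncoder, with no custom serializer involved:

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ProtobufBytesProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092");                // assumed broker
        props.put("serializer.class", "kafka.serializer.DefaultEncoder"); // byte[] pass-through

        Producer<byte[], byte[]> producer =
                new Producer<byte[], byte[]>(new ProducerConfig(props));
        // MyProtobufMessage is a hypothetical protoc-generated class.
        byte[] payload = MyProtobufMessage.newBuilder().setId(42L).build().toByteArray();
        producer.send(new KeyedMessage<byte[], byte[]>("my-topic", payload));
        producer.close();
    }
}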
Hi,
You can make use of this documentation aimed at JMX and monitoring:
https://sematext.atlassian.net/wiki/display/PUBSPM/SPM+Monitor+-+Standalone
There is a section about Kafka and the information is not SPM-specific.
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Manageme
OffsetOutOfRangeException will be returned when the requested partition's
offset range is [a, b] and the requested offset is either < a or > b; the
offset range will change whenever:
1. new messages are appended to the log, which increments b;
2. old messages get cleaned based on the log retention
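To make the [a, b] range concrete, a minimal sketch (broker host, topic, and
partition are assumptions) that queries both ends with the 0.8
SimpleConsumer; this is the same information the GetOffsetShell tool prints:

import java.util.Collections;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class OffsetRangeCheck {
    // Ask the broker for the offset at EarliestTime (a) or LatestTime (b).
    static long offsetAt(SimpleConsumer consumer, String topic, int partition, long time) {
        TopicAndPartition tp = new TopicAndPartition(topic, partition);
        kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
                Collections.singletonMap(tp, new PartitionOffsetRequestInfo(time, 1)),
                kafka.api.OffsetRequest.CurrentVersion(), "offset-range-check");
        OffsetResponse response = consumer.getOffsetsBefore(request);
        return response.offsets(topic, partition)[0];
    }

    public static void main(String[] args) {
        SimpleConsumer consumer =
                new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "offset-range-check");
        long a = offsetAt(consumer, "my-topic", 0, kafka.api.OffsetRequest.EarliestTime());
        long b = offsetAt(consumer, "my-topic", 0, kafka.api.OffsetRequest.LatestTime());
        System.out.println("earliest offset a=" + a + ", log end offset b=" + b);
        consumer.close();
    }
}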
Hello Everyone,
I would very much appreciate it if someone could provide a real-world
example where it is more convenient to implement the serializers instead
of just making sure to provide byte arrays.
The code we came up with explicitly avoids the serializer API. I think
it is common underst
Hi David,
Just edit "kafka-server-start.sh", and add "export JMX_PORT=",it will
work.
Yuanjia
From: David Montgomery
Date: 2014-12-03 04:47
To: users
Subject: Re: How to push metrics to graphite - jmxtrans does not work
Hi,
I am seeing this in the logs and wondering what "jmx_port":-1
Hi Guozhang,
My Kafka runs in a production environment, where a large number of messages
are produced and consumed. So it is not easy to get the accurate offset
through the GetOffset tool when an OffsetOutOfRangeException happens. But in
my application, I have code comparing the consuming offset with the latest
Yeah I am kind of sad about that :(. I just mentioned it to show that there
are material use cases for applications where you expose the underlying
ByteBuffer (I know we were talking about byte arrays) instead of
serializing/deserializing objects - performance is a big one.
On Tue, Dec 2, 2014 a
Rajiv,
That's probably a very special use case. Note that even in the new consumer
api w/o the generics, the client is only going to get the byte array back.
So, you won't be able to take advantage of reusing the ByteBuffer in the
underlying responses.
Thanks,
Jun
On Tue, Dec 2, 2014 at 5:26 PM
I for one use the consumer (Simple Consumer) without any deserialization. I
just take the ByteBuffer, wrap it in a preallocated flyweight, and use it
without creating any objects. I'd ideally not have to wrap this logic in a
deserializer interface. For every one who does do this, it seems like a
very sm
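For readers unfamiliar with the pattern, a minimal sketch (field names and
layout are invented for illustration) of a flyweight reading straight from
the fetched ByteBuffer:

import java.nio.ByteBuffer;

// One preallocated instance is re-pointed at each message's payload,
// e.g. flyweight.wrap(messageAndOffset.message().payload()) inside a
// SimpleConsumer fetch loop; no per-message objects are created.
final class PriceUpdateFlyweight {
    private ByteBuffer buffer;
    private int offset;

    PriceUpdateFlyweight wrap(ByteBuffer buffer) {
        this.buffer = buffer;
        this.offset = buffer.position();
        return this;
    }

    // Hypothetical layout: an 8-byte instrument id followed by an 8-byte price.
    long instrumentId() { return buffer.getLong(offset); }
    double price()      { return buffer.getDouble(offset + 8); }
}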
Palur,
First you need to make sure the message is received at Kafka:
message.max.bytes
controls the maximum size of a message that can be accepted, and
fetch.message.max.bytes
controls the maximum number of bytes a consumer issues in one fetch.
Guozhang
On Mon, Dec 1, 2014 at 7:25 PM, Palu
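A minimal sketch of the consumer side of those two settings (the ZK address,
group, and 10 MB value are illustrative): fetch.message.max.bytes must be at
least as large as the broker's message.max.bytes, or messages near the limit
can never be fetched:

import java.util.Properties;
import kafka.consumer.ConsumerConfig;

public class LargeMessageConsumerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181");   // assumed ZK address
        props.put("group.id", "large-message-group"); // assumed group
        // Must be >= the broker's message.max.bytes (set in the broker config),
        // otherwise large messages can never be fetched by this consumer.
        props.put("fetch.message.max.bytes", "10485800");
        ConsumerConfig config = new ConsumerConfig(props);
        // Pass config to Consumer.createJavaConsumerConnector(...) as usual.
    }
}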
> For (1), yes, but it's easier to make a config change than a code change.
> If you are using a third party library, one may not be able to make any
> code change.
Doesn't that assume that all organizations have to already share the
same underlying specific data type definition (e.g.,
UniversalAv
Yu,
Are you enabling message compression in 0.8.1 now? If you have already, then
upgrading to 0.8.2 will not change its behavior.
Guozhang
On Tue, Dec 2, 2014 at 4:21 PM, Yu Yang wrote:
> Hi Neha,
>
> Thanks for the reply! We know that Kafka 0.8.2 will be released soon. If
> we want to upgrade
Hi Neha,
Thanks for the reply! We know that Kafka 0.8.2 will be released soon. If
we want to upgrade to Kafka 0.8.2 and enable message compression, will we
still be able to do this in the same way, or will we need to handle it differently?
Thanks!
Regards,
-Yu
On Tue, Dec 2, 2014 at 3:11 PM, Neha Nark
I don't have it reproduced in a sandbox environment, but it's already
happened twice on that cluster, so it's a safe bet to say it's reproducible
in that setup. Are there special metrics / events that I should capture to
make debugging this easier?
Thanks,
Karol
On Tue, Dec 2, 2014 at 11:20 PM,
Will doing one broker at
a time by bringing the broker down, updating the code, and restarting it be
sufficient?
Yes this should work for the upgrade.
On Mon, Dec 1, 2014 at 10:23 PM, Yu Yang wrote:
> Hi,
>
> We have a kafka cluster that runs Kafka 0.8.1 that we are considering
> upgrading to 0.8.
The offsets are keyed on (group, topic, partition), so if you have more than
one owner per partition, they will overwrite each other's offsets and lead to
incorrect state.
On Tue, Dec 2, 2014 at 2:32 PM, hsy...@gmail.com wrote:
> Thanks Neha, another question: if offsets are stored under group.id,
> does it mean in one
Has the message been successfully produced to the broker? You might need to
change producer settings as well; otherwise the message could have been dropped.
- Jiangjie (Becket) Qin
On 12/1/14, 8:09 PM, "Palur Sandeep" wrote:
>Yeah I did. I made the following changes to server.config:
>
>message.max.bytes
Rajiv,
Yes, that's possible within an organization. However, if you want to share
that implementation with other organizations, they will have to make code
changes, instead of just a config change.
Thanks,
Jun
On Tue, Dec 2, 2014 at 1:06 PM, Rajiv Kurian wrote:
> Why can't the organization pa
For (1), yes, but it's easier to make a config change than a code change.
If you are using a third party library, one may not be able to make any
code change.
For (2), it's just that if most consumers always do deserialization after
getting the raw bytes, perhaps it would be better to have these t
I am using kafka 0.8.
Yes I did run --verify, but got some weird output from it I had never seen
before that looked something like:
Status of partition reassignment:
ERROR: Assigned replicas (5,2) don't match the list of replicas for
reassignment (5) for partition [topic-1,248]
ERROR: Assigned re
Thanks Neha, another question: if offsets are stored under group.id,
does it mean that in one group, there should be at most one subscriber for each
topic partition?
Best,
Siyuan
On Tue, Dec 2, 2014 at 12:55 PM, Neha Narkhede
wrote:
> 1. In this doc it says kafka consumer will automatically do lo
Did you run the --verify option (
http://kafka.apache.org/documentation.html#basic_ops_restarting) to check
if the reassignment process completes? Also, what version of Kafka are you
using?
Thanks,
Jun
On Mon, Dec 1, 2014 at 7:16 PM, Andrew Jorgensen <
ajorgen...@twitter.com.invalid> wrote:
> I
Is there an easy way to reproduce the issues that you saw?
Thanks,
Jun
On Mon, Dec 1, 2014 at 6:31 AM, Karol Nowak wrote:
> Hi,
>
> I observed some error messages / exceptions while running partition
> reassignment on a kafka 0.8.1.1 cluster. Being fairly new to this system, I'm
> not sure if the
"It also makes it possible to do validation on the server
side or make other tools that inspect or display messages (e.g. the various
command line tools) and do this in an easily pluggable way across tools."
I agree that it's valuable to have a standard way to plugin serialization
across many tool
> The issue with a separate ser/deser library is that if it's not part of the
> client API, (1) users may not use it or (2) different users may use it in
> different ways. For example, you can imagine that two Avro implementations
> have different ways of instantiation (since it's not enforced by t
Yeah totally, far from preventing it, making it easy to specify/encourage a
custom serializer across your org is exactly the kind of thing I was hoping
to make work well. If there is a config that gives the serializer you can
just default this to what you want people to use as some kind of
environm
Why can't the organization package the Avro implementation with a kafka
client and distribute that library though? The risk of different users
supplying the kafka client with different serializer/deserializer
implementations still exists.
On Tue, Dec 2, 2014 at 12:11 PM, Jun Rao wrote:
> Joel, R
Hi,
I have a light-load scenario, but I am starting off with Kafka because I
like how the messages are durable, etc.
If I have 4-5 topics, am I required to create the same # of consumers? I
am assuming each consumer runs in a long-running JVM process, correct?
Are there any consumer examples that
1. In this doc it says the Kafka consumer will automatically do load balancing.
Is it based on throughput, or the same as what we have now, balancing the
cardinality among all consumers in the same ConsumerGroup? In a real case,
different partitions could have different peak times.
Load balancing is still based on #
Hi,
I am seeing this in the logs and wondering what "jmx_port":-1 means?
INFO conflict in /brokers/ids/29136 data: { "host":"104.111.111.111.",
"jmx_port":-1, "port":9092, "timestamp":"1417552817875", "version":1 }
stored data: { "host":"104.111.111", "jmx_port":-1, "port":9092,
"timestamp":"1417
Joel, Rajiv, Thunder,
The issue with a separate ser/deser library is that if it's not part of the
client API, (1) users may not use it or (2) different users may use it in
different ways. For example, you can imagine that two Avro implementations
have different ways of instantiation (since it's no
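To make the trade-off concrete, a minimal sketch (class name and Avro wiring
are illustrative, not an existing shared library) of the kind of org-wide
encoder under discussion: shipped in one jar and selected purely through
serializer.class, so users change a config rather than code:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import kafka.serializer.Encoder;
import kafka.utils.VerifiableProperties;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

// Selected with serializer.class=com.example.OrgAvroEncoder (hypothetical name).
public class OrgAvroEncoder implements Encoder<GenericRecord> {
    // The 0.8 producer instantiates encoders reflectively with this signature.
    public OrgAvroEncoder(VerifiableProperties props) {}

    public byte[] toBytes(GenericRecord record) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            new GenericDatumWriter<GenericRecord>(record.getSchema()).write(record, encoder);
            encoder.flush();
            return out.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException("Avro serialization failed", e);
        }
    }
}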
Hi guys,
I'm interested in the new Consumer API.
http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/
I have a couple of questions.
1. In this doc it says the Kafka consumer will automatically do load balancing.
Is it based on throughput, or the same as what we have now, balancing the
cardinali
Thanks for the follow-up Jay. I still don't quite see the issue here
but maybe I just need to process this a bit more. To me "packaging up
the best practice and plug it in" seems to be to expose a simple
low-level API and give people the option to plug in a (possibly
shared) standard serializer in
I'm not sure I agree with this. I feel that the need to have a consistent,
well-documented, shared serialization approach at the organization level is
important no matter what. How you structure the API doesn't change that or make
it any easier or more "automatic" than before. It is still possible fo
Ramesh,
Which producer are you using in 0.8.1? kafka.api.producer or
org.apache.kafka.clients.producer?
Guozhang
On Tue, Dec 2, 2014 at 2:12 AM, Ramesh K wrote:
> Hi,
>
> I have written the basic program to send String or byte[] messages to
> consumer from producer by using java & Kafka 0.8.1
Kafka brokers use ZK for metadata storage, and Kafka consumer clients use
ZK for offset and member management.
For metadata storage, when there are replica state changes (for example,
the new replica added after a broker restart in your case), the controller
will try to write to ZK, recording s
Yuanjia,
I am not sure that pagecache can be the cause of this; could you attach
your full stack trace and use the GetOffset tool Manikumar mentioned to
make sure the offset does exist in the broker?
Guozhang
On Tue, Dec 2, 2014 at 7:50 AM, Manikumar Reddy
wrote:
> You can check the latest/ear
Hey Joel, you are right, we discussed this, but I think we didn't think
about it as deeply as we should have. I think our take was strongly shaped
by having a wrapper API at LinkedIn that DOES do the serialization
transparently, so I think you are thinking of the producer as just an
implementation d
Hello, while we do not currently use the Java API, we are writing a C#/.NET
client (https://github.com/ntent-ad/kafka4net). FWIW, we also chose to keep the
API simple, accepting just byte arrays. We did not want to impose even a simple
interface onto users of the library, feeling that users will
It's not clear to me from your initial email what exactly can't be done
with the raw bytes API. Serialization libraries should be shareable
outside of Kafka. I honestly like the simplicity of the raw bytes API and
feel like serialization should just remain outside of the base Kafka APIs.
An
Re: pushing complexity of dealing with objects: we're talking about
just a call to a serialize method to convert the object to a byte
array, right? Or is there more to it? (To me) that seems less
cumbersome than having to interact with parameterized types. Actually,
can you explain more clearly what
Joel,
Thanks for the feedback.
Yes, the raw bytes interface is simpler than the generic API. However, it
just pushes the complexity of dealing with the objects to the application.
We also thought about the layered approach. However, this may confuse the
users since there is no single entry point
You can check the latest/earliest offsets of a given topic by running
GetOffsetShell.
https://cwiki.apache.org/confluence/display/KAFKA/System+Tools#SystemTools-GetOffsetShell
On Tue, Dec 2, 2014 at 2:05 PM, yuanjia8947 wrote:
> Hi all,
> I'm using kafka 0.8.0 release now. And I often encounter
Hello,
In a multi-broker Kafka 0.8.1.1 setup, I had one broker crash. I
restarted it after some noticeable time, so it started catching up with the
leader very intensively. During the replication, I see that the disk load
on the ZK leader bursts abnormally, resulting in ZK performance
degradation. Wh
Thank you!
Chico
Hi all,
I'm using the kafka 0.8.0 release now, and I often encounter
OffsetOutOfRangeException when consuming messages via the simple consumer API.
But I'm sure that the consuming offset is smaller than the latest offset
obtained from OffsetRequest.
Can it be caused by new messages being written to
Maybe also set:
-Dcom.sun.management.jmxremote.port=
?
> On Dec 2, 2014, at 02:59, David Montgomery wrote:
>
> Hi,
>
> I am having a very difficult time trying to report Kafka 0.8 metrics to
> Graphite. Nothing is listening on the port and there is no data in Graphite. If
> this method of graphi
I was talking about the consumer config fetch.message.max.bytes
https://kafka.apache.org/08/configuration.html
by default it's 1048576 bytes
On Mon, Dec 1, 2014, at 08:09 PM, Palur Sandeep wrote:
> Yeah I did. I made the following changes to server.config:
>
> message.max.bytes=10485800
> replica.fetc
Jmxtrans should connect to the jmxremote port.
Try running "ps -aux | grep kafka" and check whether the process contains
-Dcom.sun.management.jmxremote.port.
If not, try editing "kafka-server-start.sh" and adding "export JMX_PORT=".
Hi,
I have written a basic program to send String or byte[] messages from
producer to consumer using Java & Kafka 0.8.1.
It works perfectly. But I wanted to send a serialized object (Java Bean object).
Is it possible to send the serialized object from producer to consumer?
If possible, please
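One way this can work, as a minimal sketch (hypothetical helper class; the
bean only needs to implement Serializable): standard Java serialization to
byte[] on the producer side, reversed on the consumer side:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public final class BeanCodec {
    // Producer side: serialize the bean and send the byte[] as the message payload.
    public static byte[] toBytes(Serializable bean) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(bean);
        oos.close();
        return bos.toByteArray();
    }

    // Consumer side: rebuild the bean from the fetched byte[].
    public static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes));
        try {
            return ois.readObject();
        } finally {
            ois.close();
        }
    }
}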
I am using Kafka 0.8. I will try your suggestion. But if I run lsof -i
: should I not see a process running on that port? I am not seeing
anything.
On Tue, Dec 2, 2014 at 5:37 PM, yuanjia8947 wrote:
> hi David,
> which version of Kafka do you use?
> when I use kafka 0.8.0, I write jmxtrans "
I checked the max lag and it was 0.
I grepped the state-change logs for topic-partition "[org.nginx,32]" and
extracted some entries related to broker 24 and broker 29 (the controller
switched from broker 24 to 29):
- on broker 29 (current controller):
[2014-11-22 06:20:20,377] TRACE Controller 29 epoch 7 chan
hi David,
which version of Kafka do you use?
when I use kafka 0.8.0, I write the jmxtrans "obj" like this: "obj":
"\"kafka.server\":type=\"BrokerTopicMetrics\",name=\"AllTopicsBytesOutPerSec\""
.
Hope it useful for you.
liyuanjia
> makes it hard to reason about what type of data is being sent to Kafka and
> also makes it hard to share an implementation of the serializer. For
> example, to support Avro, the serialization logic could be quite involved
> since it might need to register the Avro schema in some remote registry a
Hi,
I am having a very difficult time trying to report Kafka 0.8 metrics to
Graphite. Nothing is listening on the port and there is no data in Graphite. If
this method of graphite reporting is known to not work, is there an
alternative to jmxtrans to get data to Graphite?
I am using the deb file to install