I don't think you need to write it from scratch; the Hermes project
http://hermes-pubsub.readthedocs.org/en/latest/ does this (and more). You
could probably use only the consumers module to change pull to push and push
messages from Kafka to other REST services. It has all the retry and
send-rate auto-adjustment logic built in.
There are a bunch of new features added in 0.9, plus quite a lot of bug
fixes as well; a complete ticket list can be found here:
https://issues.apache.org/jira/browse/KAFKA-1686?jql=project%20%3D%20KAFKA%20AND%20fixVersion%20%3D%200.9.0.0%20ORDER%20BY%20updated%20DESC
In a short summary of the new
Thanks. Are there any other major changes in the 0.9 release other than the
consumer changes? Should I wait for 0.9 or go ahead and performance test
with 0.8?
On Tue, Oct 20, 2015 at 3:54 PM, Guozhang Wang wrote:
> We will have a release document for that on the release date, it is not
> complete yet.
We will have a release document for that on the release date, it is not
complete yet.
Guozhang
On Tue, Oct 20, 2015 at 3:18 PM, Mohit Anchlia
wrote:
> Is there a wiki page where I can find all the major design changes in
> 0.9.0?
>
> On Mon, Oct 19, 2015 at 4:24 PM, Guozhang Wang wrote:
>
> >
Is there a wiki page where I can find all the major design changes in 0.9.0?
On Mon, Oct 19, 2015 at 4:24 PM, Guozhang Wang wrote:
> It is not released yet, we are shooting for Nov. for 0.9.0.
>
> Guozhang
>
> On Mon, Oct 19, 2015 at 4:08 PM, Mohit Anchlia
> wrote:
>
> > Is 0.9.0 still under development?
Actually I am planning to write a consumer in a REST client where the Kafka
topic resides, and send the object from the REST client to another web service
which accepts REST API calls.
Regards
Surender Kudumula
Big Data Consultant - EMEA
Analytics & Data Management
surender.kudum...@hpe.com
M +44 7
Tao,
The APIs should not be evolving dramatically after the 0.9.0 release.
Stability-wise, we are doing a bunch of system / integration tests right now
to make sure it is in good shape upon release. But since it is the first
release, one cannot guarantee it is completely bug-free.
Guozhang
On Tue
Could you write a consumer at your REST server?
On Tue, Oct 20, 2015 at 1:18 PM, Kudumula, Surender <
surender.kudum...@hpe.com> wrote:
> Thanks for the reply. Iam looking to know if its possible to route binary
> objects messages to rest api service from kafka. If so please let me know.
> Otherw
Thanks for the reply. I am looking to know if it is possible to route binary
object messages to a REST API service from Kafka. If so, please let me know.
Otherwise I can consume the binary object using a Java consumer, then create a
REST client and send the binary message via HTTP POST to the REST server.
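A minimal sketch of that forwarding step (in Python for brevity; the real consumer loop would use the Java client). The endpoint URL and the injectable `opener` are illustrative assumptions, not part of any Kafka API:

```python
import urllib.request

def forward_message(value: bytes, url: str, opener=urllib.request.urlopen):
    """POST one consumed message's raw bytes to a REST endpoint.

    `opener` is injectable so the HTTP call can be faked in tests;
    by default it performs a real HTTP request.
    """
    req = urllib.request.Request(
        url,
        data=value,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
    with opener(req) as resp:
        return resp.status

# Usage (hypothetical): for each record pulled in the consumer poll loop:
#   status = forward_message(record_value, "http://rest-service/ingest")
```

The consumer loop itself stays pull-based; only the hand-off to the REST service is a push.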
What version of the API are you planning to use? We're finding it
extremely unstable.
On Tue, Oct 20, 2015 at 2:16 PM, tao xiao wrote:
> Hi,
>
> I am starting a new project that requires heavy use on Kafka consumer. I
> did a quick look at the new Kafka consumer and found it provides some of
>
Hi,
I am starting a new project that requires heavy use of the Kafka consumer. I
took a quick look at the new Kafka consumer and found it provides some of
the features we definitely need. But as it is annotated as unstable, is it
safe to rely on it, or will it still be evolving dramatically in coming
releases?
If you want the full round-trip latency, you need to measure that at the
client. The performance measurement tools should do this pretty accurately.
For example, if you just want to know how long it takes to produce a
message to Kafka and get an ack back, you can use the latency numbers
reported by
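As a sketch of measuring this at the client, assuming a `send_fn` that blocks until the broker's ack comes back (like calling `get()` on the Java producer's returned future), you can time each call and summarize with a nearest-rank percentile:

```python
import math
import time

def timed_send(send_fn, payload):
    """Time one produce round trip: send and block until the ack returns.

    `send_fn` is a hypothetical stand-in for a blocking produce call.
    Returns the round-trip latency in milliseconds.
    """
    start = time.perf_counter()
    send_fn(payload)
    return (time.perf_counter() - start) * 1000.0

def percentile(samples, pct):
    """Nearest-rank percentile of the collected latency samples."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100.0 * len(ordered))
    return ordered[max(0, rank - 1)]
```

Collecting a few thousand `timed_send` samples and reporting p50/p99 gives a reasonable picture of end-to-end produce latency.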
Hi, fellow Kafka users,
I have another question to ask. In Kafka 0.8.2.1, when we disable
auto.commit and listen to data from a single partition, occasionally we
just get our offsets reset back to where we started. Therefore, we can
never complete the reading of the data (about a million messages).
You can accomplish this with the console consumer -- it has a formatter
flag that lets you plug in custom logic for formatting messages. The
default does not do any formatting, but if you write your own
implementation, you just need to set the flag to plug it in.
You can see an example of this in
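For illustration, the formatting logic itself might look like the following (sketched in Python; the real plug-in point is a Java class implementing Kafka's MessageFormatter interface, passed to the console consumer via its --formatter flag):

```python
def format_message(key: bytes, value: bytes) -> str:
    """Render one raw record readably: decode as UTF-8 where possible,
    fall back to a hex dump for binary payloads. Tab-separates key and
    value, mirroring the console consumer's usual key/value layout.
    """
    def render(buf):
        if buf is None:
            return "null"
        try:
            return buf.decode("utf-8")
        except UnicodeDecodeError:
            return buf.hex()
    return f"{render(key)}\t{render(value)}"
```

The equivalent Java implementation would write this string to the PrintStream the formatter is handed.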
Hello, Kafka users!
I've been trying to build Apache Kafka into our system, and for one part of
our system, we have a use-case where we have to leave auto-commit enabled
but would like to reset the offset numbers to an earlier offset if a
failure happens in our code. We are using auto-commit beca
For our Kafka cluster of three machines, we have set up a Grafana dashboard
which, using JMX through collectd, holds some under-the-hood metrics of the
cluster. Especially interesting is the "kafka-messages-in-per-sec" metric,
which appears to be the number of messages the cluster takes in per second.
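If you are charting the raw cumulative count yourself rather than the meter's pre-computed rate, the per-second figure is just the delta between successive samples; a sketch:

```python
def rate_per_sec(samples):
    """Convert successive (timestamp_sec, cumulative_count) samples of a
    monotonically increasing counter (e.g. the count behind a
    messages-in meter read over JMX) into per-interval rates.
    """
    rates = []
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        rates.append((c1 - c0) / (t1 - t0))
    return rates
```

Note the broker-side meter already exposes smoothed one/five/fifteen-minute rates, so this manual computation is only needed when graphing the raw count.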
Yes, consumer group coordination is moving off of ZK in 0.9.0.0, which is
due out in November. All the new clients have zero direct dependency on ZK.
Only the brokers (and, for the time being, admin and command-line tools)
rely on direct access to ZK. There are plans to get a lot of admin
functionality off of direct ZK access as well.
I can't say this is the same issue, but it sounds similar to a situation we
experienced with Kafka 0.8.2.[1-2]. After restarting a broker, the cluster
would never really recover (ISRs constantly changing, replication failing,
etc). We found the only way to fully recover the cluster was to stop
We publish messages to kafka in Thrift format. We use the old simple
consumer and just retrieve the message bytes, transform back to object
model using Thrift API and do whatever our application needs with it.
On 20/10/2015 11:08, "Buntu Dev" wrote:
>I got a Kafka topic with messages in Avro fo
We compress a batch of messages together, but we need to give each
message its own offset (and know its key if we want to use topic
compaction), so messages are un-compressed and re-compressed.
We are working on an improvement to add relative offsets, which will
allow the broker to skip this re-compression.
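The idea of relative offsets, sketched below (illustrative only, not the actual wire format): the producer numbers messages 0..N-1 inside the compressed batch, the broker assigns only the batch's base offset, and absolute offsets are recovered by addition, so the batch never needs to be decompressed and recompressed:

```python
def absolutize(base_offset, relative_offsets):
    """Recover absolute offsets from a batch's broker-assigned base
    offset plus the producer-written relative offsets inside the batch.
    """
    return [base_offset + r for r in relative_offsets]
```

Consumers do this addition cheaply on read; the expensive per-message rewrite on the broker goes away.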
Hi there,
We're running a Kafka cluster with 10 brokers and two topics; each topic
has 500 partitions (Kafka version 0.8.2.1). When we start a Hadoop job to
fetch messages from the cluster (one Hadoop map per partition), 499/500
succeeded; only one task failed. And the error on that broker
I got a Kafka topic with messages in Avro format. I would like to display
the live stream of these events in a web app. Are there any streaming
consumer clients to convert the Avro messages into some readable format? If
not, any insight into how I can achieve this would be very helpful.
Thanks!
Sounds like an app design decision. What help can this list give you?
> On 20-Oct-2015, at 8:07 PM, Kudumula, Surender
> wrote:
>
> Dear sir/madam
> I have a query. We are working on POC at the moment and we are using kafka to
> produce and consume messages. I have one component which consu
Dear sir/madam,
I have a query. We are working on a POC at the moment and we are using Kafka to
produce and consume messages. I have one component which consumes the request
from a topic, processes it, creates a file, and again produces the Java
object as a byte array to another Kafka topic. Now I
Hi,
I was 100% sure that the Kafka broker didn't compress data, and I didn't
think that I had to upgrade my broker to 0.8.2.2.
I tried the upgrade and it works now!
I still don't understand why the broker needs to compress data again
(if the data compression is already done in the producer). Have
I got the same error when I sent messages to Kafka. Generally it is caused by
the deserializer, as Hemant mentioned.
You need to check how the data was sent to Kafka, and how your consumer
deserializer is defined.
And you need to check both the key and the value.
The data in the topic might be of byte[] type.
Sincerely
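To illustrate the kind of mismatch being described, here is a sketch (in Python, with big-endian 8-byte longs standing in for a Kafka-style long serializer): bytes written by one serializer only round-trip through the matching deserializer, and a wrong deserializer either raises or yields garbage:

```python
import struct

def encode_long(n: int) -> bytes:
    """Serialize a long the way a fixed-width long serializer would:
    8 bytes, big-endian."""
    return struct.pack(">q", n)

def decode_long(buf: bytes) -> int:
    """The matching deserializer: recovers the original long."""
    return struct.unpack(">q", buf)[0]

def decode_string(buf: bytes) -> str:
    """A string-style deserializer: fails on bytes that were never
    UTF-8 text, which is the error pattern discussed above."""
    return buf.decode("utf-8")
```

Applying `decode_string` to `encode_long(-1)` (eight 0xff bytes) raises a decode error, which is exactly why the key and value deserializers must each match what the producer used.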