Don't know if adding it to Kafka is a good thing. I assume you need some
Java opts settings for it to work, and with other solutions these would be
different. It could be enabled with an option, of course; then it's not in
the way if you use something else.
We use Zabbix; this is a single tool whic
On Thu, Mar 31, 2016 at 12:02 AM, Paolo Patierno wrote:
> Hi all,
>
> after the following Twitter conversation ...
>
> https://twitter.com/jfield/status/715299287479877632
>
> I'd like to better explain my concerns about using Kafka Connect for an
> AMQP connector.
> I started to develop it almos
Please look into Confluent Kafka connectors.
http://www.confluent.io/developers/connectors
-- Surendra Manchikanti
On Thu, Mar 31, 2016 at 6:43 PM, Kavitha Veluri
wrote:
> Hi,
>
> I'm trying to use Kafka Streams for my use case, which consumes data from a
> producer, processes it, and pushes that data to a database.
Hi,
I'm looking for example code that uses Kafka Streams to process data
received by a Kafka consumer and push that data to a PostgreSQL database.
I tried to find examples that relate to my use case but didn't find any.
Can you please provide me with some sample code?
Thanks,
Kavitha
Hi,
I'm trying to use Kafka Streams for my use case, which consumes data from a
producer, processes it, and pushes that data to a database.
I spent time trying to find similar examples but couldn't find one.
Can you please help me by sending sample code for my use case?
Any help is gr
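A minimal sketch of what such a pipeline could look like, using the Kafka
Streams DSL with plain JDBC inside a foreach(). The application id, topic,
table, and connection settings are invented for illustration, and it assumes
the PostgreSQL JDBC driver on the classpath and a single stream thread; for
production, a JDBC sink connector (as suggested above) is usually a better fit:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class StreamToPostgres {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "stream-to-postgres"); // illustrative
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // illustrative
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Plain JDBC; the database, table, and credentials are made up.
        Connection db = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
        PreparedStatement insert =
                db.prepareStatement("INSERT INTO events (key, value) VALUES (?, ?)");

        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("input-topic")   // illustrative topic
               .mapValues(v -> v.trim())                // stand-in for real processing
               .foreach((key, value) -> {
                   try {
                       insert.setString(1, key);
                       insert.setString(2, value);
                       insert.executeUpdate();
                   } catch (Exception e) {
                       throw new RuntimeException(e);   // fails the stream thread
                   }
               });

        new KafkaStreams(builder.build(), props).start();
    }
}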
Another +1 for Jolokia. We've got a pretty cool setup here that deploys
Jolokia alongside Kafka, and we wrote a small Sensu plugin to grab all the
stats from Jolokia's JSON API and reformat them for Graphite.
On Thu, Mar 31, 2016 at 4:36 PM, craig w wrote:
> Including Jolokia would be great; I've used it for Kafka and it worked well.
Hi Marcos,
ConsumerMetadata* was renamed to GroupCoordinator* in 0.9; the API
protocol is unchanged.
However, the new Java clients use non-blocking network channels. It looks
like the example code may reference the deprecated, or
soon-to-be-deprecated, Scala client.
Rather than roll your own mo
Hi Marcos,
We should really update that wiki! We renamed ConsumerMetadataRequest to
GroupCoordinatorRequest in 0.9.0 because of the coordinator's more general
role. You might also want to take a look at kafka-consumer-groups.sh, which is
shipped with the release.
-Jason
On Thu, Mar 31, 2016 at 3:32 PM
Including Jolokia would be great; I've used it for Kafka and it worked well.
On Mar 31, 2016 6:54 PM, "Christian Posta"
wrote:
> What if we added something like this to Kafka? https://jolokia.org
> I've added a JIRA to do that, just haven't gotten to it yet. Will soon
> though, especially if it'd be useful for others.
What if we added something like this to Kafka? https://jolokia.org
I've added a JIRA to do that, just haven't gotten to it yet. Will soon
though, especially if it'd be useful for others.
https://issues.apache.org/jira/browse/KAFKA-3377
On Thu, Mar 31, 2016 at 2:55 PM, David Sidlo wrote:
> The K
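For reference, a minimal sketch of reading one broker metric through
Jolokia's HTTP JSON "read" endpoint, assuming the Jolokia JVM agent is
attached to the broker on its default port 8778; the host and MBean name
are illustrative:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JolokiaPoll {
    public static void main(String[] args) throws Exception {
        // Read one Kafka MBean through Jolokia's REST "read" endpoint.
        URL url = new URL("http://localhost:8778/jolokia/read/"
                + "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw JSON; reformat for Graphite, Sensu, etc.
            }
        }
    }
}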
Greetings,
The language in server.properties indicates that log.retention.bytes is a
minimum rather than a maximum. It also indicates an incorrect default,
which per http://kafka.apache.org/documentation.html#brokerconfigs is -1
https://github.com/apache/kafka/blob/trunk/config/server.properties#
The Kafka JmxTool works fine, although it is not user-friendly, in that you
cannot query the Kafka server MBeans to determine their content and the
path string that you need to place into the --object-name option.
Here's how I solved the problem...
First, make sure that Kafka
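One way to discover those object names is to query the broker's MBean server
directly over JMX. A minimal sketch, assuming the broker was started with JMX
enabled (e.g. JMX_PORT=9999); the host is illustrative:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListKafkaMBeans {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // List every MBean in the kafka.server domain; each printed name
            // is a valid value for JmxTool's --object-name option.
            Set<ObjectName> names =
                    mbsc.queryNames(new ObjectName("kafka.server:*"), null);
            names.forEach(System.out::println);
        }
    }
}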
We're building an application to monitor our Kafka consumers, and since the
offsets are no longer stored in ZooKeeper as before, I was trying to use the
following example from the Kafka website:
https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka
Howeve
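With the new (0.9+) Java consumer, committed offsets can also be fetched
through the client API itself rather than the wire protocol. A minimal
sketch; the group, topic, and partition below are illustrative:

import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommittedOffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // illustrative
        props.put("group.id", "group-to-monitor");          // the group being inspected
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // illustrative
            // Fetches the last committed offset for this group and partition
            // from the group coordinator (not from ZooKeeper).
            OffsetAndMetadata committed = consumer.committed(tp);
            System.out.println(tp + " -> "
                    + (committed == null ? "no commit" : committed.offset()));
        }
    }
}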
Hi,
This isn't a big issue, but I'm wondering if anyone knows what is going on.
I have been running performance benchmarks for a Kafka 0.9 Consumer using a
Kafka 0.9.0.0 (also tried 0.9.0.1) broker. At message sizes of 5k the
broker becomes the bottleneck and throughput starts to become highly
variable
This could be caused by a bug in our client's network layer which
occasionally prevents multiple requests from being sent at the same time.
Usually for both heartbeats and periodic commits, we expect the next one to
be sent successfully, so it hasn't been a big problem. However, when the
intervals
Hi John,
0.10.0.0 is at the RC stage at the moment (two RCs have been released and
we know that there will be a third one at least). There are never
guarantees when it comes to dates, but it is very likely that the final
release will happen in Q2.
If upgrading is a big deal, as you say, then perh
You could check what it does, and do that instead of relying on the script.
It runs the kafka.admin.AclCommand class with some properties, and sets
some JVM settings.
On Thu, Mar 31, 2016 at 4:36 PM Kalpesh Jadhav <
kalpesh.jad...@citiustech.com> wrote:
> Hi,
>
> Is there any java api available t
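Since kafka-acls.sh just wraps kafka.admin.AclCommand, one option (a sketch,
not an official API) is to invoke that class from Java with the same
arguments as the command in the original question, assuming the Kafka core
jar is on the classpath; the ZooKeeper host is illustrative:

public class AddTopicAcl {
    public static void main(String[] args) {
        // Invokes the same class kafka-acls.sh runs, with the arguments from
        // the original question; needs the Kafka core jar on the classpath.
        kafka.admin.AclCommand.main(new String[] {
                "--add",
                "--allow-principals", "user:ctadmin",
                "--operation", "ALL",
                "--topic", "marchTesting",
                "--authorizer-properties", "zookeeper.connect=zkhost:2181" // illustrative
        });
    }
}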
At my current client they are on Kafka 0.8.2.2 and were looking at
upgrading to 0.9, mostly for bug fixes. The new consumer is also enticing,
but it has been said it's still "beta" quality, which is a hard sell.
I'm considering recommending waiting for 0.10 in hopes that the new
consumer will be con
Hi Martin,
I am not using the browser.
I am making the call from code, much like curl.
The proxy is returning an HTTP 200, but only with an empty JSON array.
I am not having a delay when calling the proxy; I am having a delay when the
proxy is returning results.
Even though there is data
Hi,
Is there any Java API available to grant access to a Kafka topic,
as we do through kafka-acls.sh?
I just want to run the command below through a Java API.
kafka-acls.sh --add --allow-principals user:ctadmin --operation ALL --topic
marchTesting --authorizer-properties zookeeper.connect={hostname}:
Keyboard error...
something along these lines:
records = consumer.poll()
foreach record:
    process record
    add to commit map
    if records processed > threshold:
        commit map
Take care to make sure everything has been committed before calling poll
again, because failing to do so would cause the driver to ski
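A rough Java translation of that pseudocode against the new consumer API;
the topic, group id, and batch threshold are illustrative, and
enable.auto.commit is turned off since commits are manual here:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class BatchingCommitter {

    static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value());   // stand-in for real per-record work
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");     // illustrative
        props.put("group.id", "batching-committer");          // illustrative
        props.put("enable.auto.commit", "false");             // commits are manual
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        final int threshold = 100;                            // arbitrary batch size
        Map<TopicPartition, OffsetAndMetadata> commitMap = new HashMap<>();
        int processed = 0;

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // illustrative
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                    // The committed offset is the *next* offset to read, hence +1.
                    commitMap.put(new TopicPartition(record.topic(), record.partition()),
                            new OffsetAndMetadata(record.offset() + 1));
                    if (++processed >= threshold) {
                        consumer.commitSync(commitMap);
                        commitMap.clear();
                        processed = 0;
                    }
                }
                // Commit the remainder before the next poll(), as warned above.
                if (!commitMap.isEmpty()) {
                    consumer.commitSync(commitMap);
                    commitMap.clear();
                    processed = 0;
                }
            }
        }
    }
}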
We have recently had great success with committing records in smaller
batches between poll()'s. Something along these lines:
records = consumer.poll()
foreach record:
    process record
On 31 March 2016 at 12:13, Daniel Fanjul
wrote:
> Hi all,
>
> My problem: If the consumer fetches too much da
Does anybody have any ideas about this?
On Saturday, March 26, 2016 10:08 PM, amar dev wrote:
Hi Muthu,
I checked in the ZK CLI, and I can see my created topic. Steps:
// enter the ZK shell
zookeeper-shell.sh crazybox:2181
ls /brokers/topics
[kafkatopic] -- this is the topic that I created.
Thanks
This ticket describes how the default partitioner works (and links to the
source code) https://issues.apache.org/jira/browse/KAFKA-
On Thu, Mar 31, 2016 at 9:34 AM, Marcelo Oikawa wrote:
> Hi, everyone.
>
> If you don't specify the partition, and do have a key, then the default
> > behaviour
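For the keyed case, the gist of the default partitioner can be reproduced
with the murmur2 helpers shipped in kafka-clients, which the partitioner
itself uses. A sketch; the key and partition count are made up:

import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.utils.Utils;

public class PartitionForKey {
    public static void main(String[] args) {
        byte[] keyBytes = "order-42".getBytes(StandardCharsets.UTF_8); // example key
        int numPartitions = 8;                                         // example topic size
        // Same shape as the default partitioner's keyed path: positive
        // murmur2 hash of the serialized key, modulo the partition count.
        int partition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        System.out.println("key maps to partition " + partition);
    }
}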
Hi, everyone.
> If you don't specify the partition, and do have a key, then the default
> behaviour is to use a hash on the key to determine the partition. This is to
> make sure messages with the same key end up on the same partition. This
> helps to ensure ordering relative to the key/partition.
Hey folks,
We are currently still running Kafka 0.8.1.1, and are looking to upgrade. From
what I have been able to tell, upgrading to 0.10.x will be supported from as
far back as 0.8.2.x. But the documentation for 0.8.2.x seems to indicate that
upgrading from 0.8.1.x is effectively a no-op. So my
Ok, thank you very much. Pausing all assigned partitions should work for
us; I will try it.
On Thu, Mar 31, 2016 at 12:32 PM, Manikumar Reddy wrote:
> Hi,
>
> 1. New config property "max.poll.records" is getting introduced in
> upcoming 0.10 release.
>This property can be used to control
As was said, it depends on what tradeoffs you want between availability and
the risk of data loss.
If you're most concerned about the data, then I recommend having it
replicated to at least 3 brokers, setting the minimum ISR to 2, and producing
to the topic with acks = -1.
Also set "unclean.leader.election.enable" to false.
Thanks, Todd, for the valuable information.
Partition rebalance: I am testing a scenario (1 topic with 8 partitions, 3
replicas, 3 brokers) where I brought down broker 2, which was acting as
leader for some partitions; leaders and ISRs immediately changed to the 2
live brokers, but were not evenly balance
The issue is when the consumer (using the REST proxy) is trying to consume.
MG> Can we assume you have disabled all caching between server and client?
MG> In fact, it's probably best now to use a 3rd-party tool like curl to pull
MG> the data to make sure the browser isn't causing the delay:
https://curl.haxx.se/
Hi Cees De Groot,
I don't want to lose my data.
In the case of 5 brokers, there is a chance of 2 brokers being down.
On Tue, Mar 29, 2016 at 11:55 PM, Cees de Groot wrote:
> How much do you like your data?
>
> It really depends. There are situations where a replication factor of 1 is
> sufficient; we wo
Hi,
1. A new config property, "max.poll.records", is being introduced in the
upcoming 0.10 release.
This property can be used to control the number of records returned by each
poll.
2. We can use the combination of an ExecutorService/processing thread and the
pause/resume API to handle unwanted rebalances.
Some of
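A rough sketch of the pause/resume pattern from point 2, assuming the 0.10+
consumer API (pause/resume taking a collection). The topic and group id are
illustrative, and note that a rebalance while paused resets the pause state:

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PauseResumeConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // illustrative
        props.put("group.id", "slow-processor");            // illustrative
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        ExecutorService executor = Executors.newSingleThreadExecutor();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // illustrative
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                if (records.isEmpty()) {
                    continue;
                }
                // Hand the (possibly slow) work to another thread...
                Future<?> work = executor.submit(() ->
                        records.forEach(r -> System.out.println(r.value())));
                // ...and pause all assigned partitions so the poll loop keeps
                // running (driving heartbeats) without fetching more data.
                consumer.pause(consumer.assignment());
                while (!work.isDone()) {
                    consumer.poll(100);   // returns nothing while paused
                }
                consumer.resume(consumer.assignment());
                consumer.commitSync();
            }
        }
    }
}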
Hi all,
My problem: If the consumer fetches too much data and the processing of the
records is not fast enough, commit() fails because there was a rebalance.
I cannot reduce 'max.partition.fetch.bytes' because there might be large
messages.
I don't want to increase the 'session.timeout.ms', beca
Hej,
So there seems to be a weird problem that happens if you set your
heartbeat.interval.ms equal to your auto.commit.interval.ms.
From what I see in the logs, the consumer never successfully sends a
heartbeat when there was a commit just before, so it expires after the
session.timeout.ms,
Hi all,
after the following Twitter conversation ...
https://twitter.com/jfield/status/715299287479877632
I'd like to better explain my concerns about using Kafka Connect for an AMQP
connector.
I started to develop it almost one month ago (only on source side,
https://github.com/ppatierno/kafk