There is also an IBM alternative,
https://console.ng.bluemix.net/catalog/services/message-hub, and there may
be others as well.
On Mon, Mar 21, 2016 at 8:40 PM Achanta Vamsi Subhash <
achanta.va...@flipkart.com> wrote:
> @Sam
> Not always. Not all LTS customers are cloudera/... customers. Do they
I have not measured that value exactly, but I was thinking of the case when
we use a DB: frequent connection/disconnection causes overhead, so a
connection pool is generally used to solve the problem.
Does Kafka bring less overhead compared with that case, as much as what
the connection pool
Can you explain more? Have you measured the overhead of opening the
connections?
If I'm not mistaken, Kafka manages the connections under the covers to each
of the brokers that have topics (leader partitions) from which you're
consuming. The connection(s) to each partition leader will stay around
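Since the clients keep those connections alive internally, the usual approach is not an external pool but a single long-lived producer (or consumer) shared by the application. A minimal sketch of that pattern, assuming the 0.9 Java client; the broker address and serializers are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SharedProducer {
    // One long-lived, thread-safe producer for the whole application.
    // It maintains connections to each partition leader internally,
    // so no external connection pool is needed.
    private static final Producer<String, String> PRODUCER = create();

    private static Producer<String, String> create() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    public static void send(String topic, String key, String value) {
        PRODUCER.send(new ProducerRecord<>(topic, key, value));
    }
}
```

Creating and closing a producer per message would reintroduce exactly the connection-setup overhead a pool is meant to avoid; reusing one instance sidesteps it.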
Hi all;
I'm a newbie to Kafka. I'm trying to publish my Java object to a Kafka topic and
try to consume it.
I see there are some API changes in the latest version of Kafka. Can
anybody point me to some samples of how to publish and consume Java objects? I
have written my own data serializer, but could not p
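For anyone in a similar spot: the new clients take custom (de)serializers via the `key.serializer`/`value.serializer` configs. The byte-level round trip can be as simple as standard Java serialization; this standalone sketch shows only that part (wrapping these two methods in classes implementing `org.apache.kafka.common.serialization.Serializer`/`Deserializer` is left out, and the class name here is made up for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Turns any Serializable Java object into bytes and back, suitable as the
// body of a custom Kafka serializer/deserializer pair.
public class JavaObjectCodec {
    public static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    public static Object deserialize(byte[] bytes)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }
}
```

Java serialization is the simplest option but ties both sides to the JVM; JSON or Avro payloads are common alternatives for the same plug-in points.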
Hello. I have a question: does the latest Kafka, 0.9.0.1, provide any APIs
for managing a connection pool of Kafka on both the consumer and producer sides?
I think the overhead of establishing a connection from
consumer/producer to the Kafka broker(s) seems a little heavy.
Thanks in advance!
Hi Ismael,
Thanks for clarifying this with the example.
I tried it and it worked as you have described below !
I have a follow up question:
Producer (PR) and Consumer (CO) are running on two different Clients and
talking to broker (BR)
Goal: Multiple Principals (P1 .. Pn) should be able to acc
Hi,
I should mention that you don't need to change your code to upgrade clients
from 0.8.x to 0.9.x. The Scala producers and consumers are all still
present in 0.9.x.
Ismael
On Mon, Mar 21, 2016 at 9:46 PM, feifei hsu wrote:
> They also document that as of now. However, 0.9 Brokers work with
I upgraded to consumer 0.9.0.1 as I was running into
https://issues.apache.org/jira/browse/KAFKA-2978.
However, I still see this weird issue when I run (the below command) on my
consumer group:
bin/kafka-consumer-groups.sh --describe --group consumer-gp-1 --new-consumer
--bootstrap-server localhos
Thanks for your reply Marko.
Do you have any simpler products in mind which might fit the requirements?
From what I could see the promise with Kafka streams is a reduction in
engineering effort compared to what has been required with Kafka in the
past. But I'm only going off the blog - not fro
Hi Gopal,
As you suspected, you have to set the appropriate ACLs for it to work. The
following will make the producer work:
kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 \
--add --allow-principal
"User:CN=kafka.example.com,OU=Client,O=Confluent,L=London,ST=London,C=GB"
\
--p
Preamble: Don't compare Kafka 1:1 with RMQ/AMQ. It will lead to
frustration. Kafka offloads a lot of hard work to its clients, which is
pretty much the main reason for the fact that it scales better than any
"old school" message broker. But it does mean that you have to read up,
understand how
>Hi Christopher,
>On Mon, Mar 21, 2016 at 3:53 PM, christopher palm wrote:
>> Does Kafka support SSL authentication and ACL authorization without
>> Kerberos?
>>
>Yes. The following branch modifies the blog example slightly to only allow
>SSL authentication.
>https://github.com/confluentinc/se
Hello Jeff,
You can actually change the message key before the join. For example, let's
say your stream A has the format of
key: {a}, value: {b, c}
And stream B with the format of
key: {d}, value: {b, e}
And you want to join by value field {e}. You can write in the DSL:
streamA' =
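(The message is cut off above, but the general pattern in the Streams DSL is to re-key each stream on the join field before joining. A rough sketch; the stream names, value classes, and the field accessor are assumed for illustration, not taken from the thread:)

```java
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;

// Hypothetical sketch: re-key each stream on the field you want to join by,
// then join as usual on the new key. ValueA/ValueB are assumed value classes
// that both expose the join field as "e".
KStream<String, ValueA> aByE = streamA.map((k, v) -> KeyValue.pair(v.e, v));
KStream<String, ValueB> bByE = streamB.map((k, v) -> KeyValue.pair(v.e, v));
// aByE.join(bByE, joiner, windows) now matches records that share field e.
```

Note that `map` changes the key, so Streams will repartition the re-keyed streams before the join so that matching keys land on the same task.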
They also document that as of now. However, 0.9 Brokers work with 0.8.x
clients.
However, does anyone have a large deployment in this scenario, e.g. 0.9
brokers + 0.8.x clients? How is your experience and result? More concern in
terms of system issues, like reliability/scalability/performance? we hav
Hello,
From your described use cases I think Kafka Streams would be a good fit, in
that 1) it provides higher level DSLs for windowed aggregations, and 2) it
is part of the Kafka open source (coming in the 0.10.0 release, which is
being voted now), and if you have your data already in Kafka it is
Hello Kafka users, developers and client-developers,
This is the first candidate for release of Apache Kafka 0.10.0.0.
This is a major release that includes: (1) New message format including
timestamps (2) client interceptor API (3) Kafka Streams. Since this is a
major release, we will give people
Folks,
We're planning on exploring Kafka for our latest work - we are
currently on RabbitMQ. Apologies if this has been already answered.
Question # 1) RMQ offers the Message ordering guarantee, from my initial
read about Kafka - it does seem like a Topic is divided in to Partitions
and any
Kamal,
Say you have n threads in your executor thread pool; then you can let
consumer.poll() return at most n records by setting "max.poll.records"
in the consumer config. Then you can maintain a circular bit buffer
indicating completed record offset (this is similar to your "ack" approach
I
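Concretely, that config is set like any other consumer property. A minimal sketch, assuming a client version that supports `max.poll.records`; the broker address, group id, and pool size are placeholders:

```java
import java.util.Properties;

public class PollBoundConfig {
    // Build consumer properties so that poll() returns at most as many
    // records as there are worker threads in the executor pool.
    public static Properties forPoolSize(int n) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("group.id", "worker-group");            // assumed group id
        props.put("max.poll.records", Integer.toString(n));
        return props;
    }

    public static void main(String[] args) {
        // Passing these props to new KafkaConsumer<>(props) caps each
        // poll() at n records, matching an n-thread executor.
        System.out.println(forPoolSize(8).getProperty("max.poll.records"));
    }
}
```

Capping poll() at the pool size keeps the in-flight record set small enough that the completed-offset bookkeeping described above stays bounded.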
The LZ4 implementation "works" but has a framing bug that can make third
party client use difficult. See KAFKA-3160. If you only plan to use the
official Java client then that issue shouldn't be a problem.
-Dana
On Mar 21, 2016 12:26 PM, "Pete Wright" wrote:
>
>
> On 03/17/2016 04:03 PM, Virendr
@Sam
Not always. Not all LTS customers are cloudera/... customers. Do they
actually backport the changes? If yes, an open-source LTS will actually help
them.
On Mon, Mar 21, 2016 at 7:47 PM, Sam Pegler
wrote:
> I would assume (maybe incorrectly) that users who were after a LTS style
> release wou
On 03/17/2016 04:03 PM, Virendra Pratap Singh wrote:
More like getting a feel from the community about using lz4 for compression?
Has anyone used in the kafka setup.
I am aware that gzip and snappy are older implementations and regression-tested.
Given that lz4 has better compression/decompressio
Yeah, the Java/Scala clients use the latest protocol version in their
requests and that will not work with older brokers if there are
incompatible changes (ie 0.9.0.1 clients don't work with 0.8.2.2 brokers,
but they do work with 0.9.0.0 brokers). This should be made more explicit
in the upgrade no
Given your requirements, I think the most important question here is
*volume*. How many clients/events per day do you expect?
Unless you expect a huge amount of events right away, I would suggest that
you start with a minimum viable product (including pub/sub), since it
wouldn't be too hard to later
They don't work with the old brokers. We made the assumption that they
did and had to roll back.
On Mon, Mar 21, 2016 at 10:42 AM, Alexis Midon <
alexis.mi...@airbnb.com.invalid> wrote:
> Hi Ismael,
>
> could you elaborate on "newer clients don't work with older brokers
> though."? doc pointers a
Hi Ismael,
could you elaborate on "newer clients don't work with older brokers
though."? doc pointers are fine.
I was under the impression that I could use the 0.9 clients with 0.8 brokers.
thanks
Alexis
On Mon, Mar 21, 2016 at 2:05 AM Ismael Juma wrote:
> Hi Allen,
>
> Answers inline.
>
> On Mon
It sounds like a fairly typical pub-sub use case where you’d likely be choosing
Kafka because of its scalable data retention and built in fault tolerance. As
such it’s a reasonable choice.
> On 21 Mar 2016, at 17:07, Mark van Leeuwen wrote:
>
> Hi Sandesh,
>
> Thanks for the suggestions. I
Hi Sandesh,
Thanks for the suggestions. I've looked at them now :-)
The core problem that needs to be solved with my app is keeping a full
replayable history of changes, transmitting latest state to web apps
when they start, then keeping them in sync with latest state as changes
are made by a
Hi All,
I'm using Kafka 0.9.0.1.
I have a requirement in which consumption of records are asynchronous.
for (ConsumerRecord record : records) {
    executor.submit(new Runnable() {
        public void run() {
            // process record;
        }
    });
}
consumer.commitSy
Hi Gerard,
Seems to me latency to and from server is a separate issue regardless of
the back end solution.
I was anticipating that the issues using Kafka might be latency between
producer and consumer and whether it was intended to have as many
consumers as web clients out there as the app s
Running Kafka 0.9.0.1 (Producers, Brokers and Consumers).
I'm getting sporadic ISR shrinking and expanding of the [__consumer_offsets]
topic, which seems to cause some sort of message duplication in message
consumption here and there.
I was wondering what could cause this ?
I am running 1 ZooKe
Hi Christopher,
On Mon, Mar 21, 2016 at 3:53 PM, christopher palm wrote:
> Does Kafka support SSL authentication and ACL authorization without
> Kerberos?
>
Yes. The following branch modifies the blog example slightly to only allow
SSL authentication.
https://github.com/confluentinc/securing-k
You can use SSL certificate hostname verification for rudimentary
authentication rather than Kerberos. The two can be used together or
independently.
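For reference, the client side of an SSL-only setup (no Kerberos/SASL) is just a handful of properties. A hedged sketch; all hostnames, paths, and passwords below are placeholders:

```java
import java.util.Properties;

public class SslClientConfig {
    // Minimal SSL-only client settings; works for both producer and
    // consumer configs. Every path and password here is a placeholder.
    public static Properties sslProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.com:9093"); // assumed SSL port
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/path/to/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        // The keystore holds this client's own certificate, which is what
        // lets different clients present different certs to the same broker.
        props.put("ssl.keystore.location", "/path/to/client.keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");
        return props;
    }
}
```

The principal the broker sees is then the certificate's distinguished name, which is what ACLs are written against.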
On Mon, Mar 21, 2016 at 8:53 AM -0700, "christopher palm"
wrote:
Hi All,
Does Kafka support SSL authentication and ACL authoriza
Hi All,
Does Kafka support SSL authentication and ACL authorization without
Kerberos?
If so, can different clients have their own SSL certificate on the same
broker?
In reading the following security article, it seems that Kerberos is an
option but not required if SSL is used.
Thanks,
Chris
ht
Hello Mark,
Have you looked at one of the streaming engines like Apache Apex, Flink?
Thanks
On Mon, Mar 21, 2016 at 7:56 AM Gerard Klijs
wrote:
> Hi Mark,
>
> I don't think it would be a good solution with the latencies to and from
> the server you're running from in mind. This is less of a prob
Hi Mark,
I don't think it would be a good solution, with the latencies to and from
the server you're running from in mind. This is less of a problem if your app
is only mainly used in one region.
I recently went to a Firebase event, and it seems a lot more fitting. It
also allows the user to see it'
Hi Team,
I have 3 node cluster on hdp 2.3, Kafka version : kafka 0.9.0
Previously I was able to send messages to a Kafka topic with the older version of HDP,
i.e. 2.2, when Kafka was not Kerberized.
But due to Kerberization I am not able to send messages to the Kafka topic after
upgrading from HDP 2.2 to
I would assume (maybe incorrectly) that users who were after an LTS-style
release would instead be going for one of the commercial versions.
Clouderas for example is
https://cloudera.com/products/apache-hadoop/apache-kafka.html, they'll then
manage patches and provide support for you?
Sam Pegler
S
+1 things are quite fast-moving to be honest... LTS would slow things down
and potentially drag along too much technical debt. I agree revisit this
discussion when 1.0 GAs.
On Mon, Mar 21, 2016 at 4:19 AM, Gerard Klijs
wrote:
> I think Kafka at the moment is not mature enough to support a LTS re
If everyone agrees about the producer and consumer APIs being fixed
enough, yes, but then we should not call it 0.10.0.0 but 1.0.0.0 in my
opinion. With streams just making their introduction I don't know if we are
there yet.
On Mon, Mar 21, 2016 at 1:51 PM Achanta Vamsi Subhash <
achanta.va..
Gerard,
I think many people use Kafka just like any other stable software. The
producer and consumer APIs are mostly fixed now, and many companies across
the world are using it in production for critical use-cases. I think it is
already *expected *to work as per the theory and any bugs need to be
p
I think Kafka at the moment is not mature enough to support an LTS release.
I think it will take a lot of effort to 'guarantee' a back-port will be
safer to use in production than the new release. For example, when you
manage the release of 0.9.0.2, with the fixes from 0.10.0.0, you need
to
*bump*
Any opinions on this?
On Mon, Mar 14, 2016 at 4:37 PM, Achanta Vamsi Subhash <
achanta.va...@flipkart.com> wrote:
> Hi all,
>
> We find that there are many releases of Kafka and not all the bugs are
> backported to the older releases. Can we have an LTS (Long Term Support)
> release which
Hi Adam,
What do you mean by "app read password from xxx"?
Doesn't Kafka read the server.properties?
So, is there any way to let Kafka read an encrypted password?
I don't want to put a cleartext password in the Kafka property config file.
-----Original Message-----
From: Adam Kunicki [mailto:a...@streamsets.com]
Sent
Hi Allen,
Answers inline.
On Mon, Mar 21, 2016 at 5:56 AM, allen chan
wrote:
> 1) I am using the upgrade instructions to upgrade from 0.8 to 0.9. Can
> someone tell me if i need to continue to bump the
> inter.broker.protocol.version after each upgrade? Currently the broker code
> is 0.9.0.1 bu
Yes Muthu. We also tried to set the advertised.host.name but it didn't
solve the problem.
Still trying to find a solution.
Regards,
Ashiq.
On 20/03/16, 7:17 PM, "Muthukumaran K" wrote:
>Hi Ashiq,
>
>I am going through the same trouble with 0.9.0.0 and 0.9.0.1 for entirely
>new installation on