There are two types of publisher-subscriber systems: topic-based
publisher-subscriber systems and content-based publisher-subscriber systems.
Kafka is a topic-based publisher-subscriber system. We want to enhance
Kafka to support content-based subscriptions.
Hey, you should have a look at Apache Samza. You put Samza on top of Kafka and
you can inject content filtering rules into a Samza system. This will give you
a "content subscription" system you intend to build.
On Thu, May 19, 2016 at 1:56 AM -0700, "Janagan Sivagnanasund
Content-filtering may mean different things to different people depending on
the use case, so it's better to have examples.
On Thu, May 19, 2016 at 3:03 PM, Radoslaw Gruchalski
wrote:
> Hey, you should have a look at Apache Samza. You put Samza on top of Kafka
> and you can inject content filtering rule
Dear Kafka users,
I have two questions about automatic broker id generation when
broker.id.generation.enable = true:
(1) Is there any documentation on how broker ids are generated? Is it an
incremental id starting from 0, limited to reserved.broker.max.id? Will
broker ids be reusable?
(2) afaik broker
Auto broker id generation logic:
1. If there is a user-provided broker.id, it is used; the valid id range is
from 0 to reserved.broker.max.id.
2. If there is no user-provided broker.id, auto id generation starts
from reserved.broker.max.id + 1.
3. broker.id is stored in the meta.properties file under each log directory.
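For reference, a server.properties sketch of the settings involved (the
values shown are the defaults as I understand them):

# Sketch only; values are my understanding of the defaults.
broker.id.generation.enable=true   # allow the broker to generate its own id
reserved.broker.max.id=1000        # user-assigned broker.id must be <= this
# broker.id=42                     # if set, must lie in [0, reserved.broker.max.id];
                                   # if unset, a generated id > reserved.broker.max.id is used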
Hi,
commitId is nothing but the latest git commit hash of the release. It is
captured while building the binary distribution. commitId is available in the
binary release (kafka_2.10-0.10.0.0.tgz).
commitId will not be available if you build from the source release
(kafka-0.10.0.0-src.tgz).
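If it helps, the clients library also exposes this at runtime; a minimal
sketch using AppInfoParser (which ships in the binary distribution; in a
source build without git metadata I believe commitId comes back as "unknown"):

import org.apache.kafka.common.utils.AppInfoParser;

// Prints the version and git commit hash baked into the kafka-clients jar.
public class ShowCommitId {
    public static void main(String[] args) {
        System.out.println("version=" + AppInfoParser.getVersion()
                + " commitId=" + AppInfoParser.getCommitId());
    }
}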
On Wed, May 18, 2016 at
Hi there,
One way to disable the old consumer is to only allow authenticated
consumers (via SSL or another authentication system) - the old consumers
don't support authentication at all. If you care about ACLs anyway, you
probably don't want unauthenticated consumers or producers in the system at
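For reference, a sketch of the broker side of that approach, exposing only an
authenticated listener so the old (unauthenticated) consumer cannot connect
(paths and passwords are placeholders):

# Sketch only; adjust paths, passwords, and host/port to your environment.
listeners=SSL://0.0.0.0:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/path/to/kafka.server.truststore.jks
ssl.truststore.password=changeit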
Thanks for the confirmation.
I like the idea of only allowing authenticated consumers
(definitely what I want). Unfortunately, I'm running Kafka with an ELK
installation and was hoping for some kind of stopgap while the
logstash input plugins catch up and support TLS. When the logstash
kafka pl
You could always contribute back to logstash - I'm sure they'd appreciate
it.
On Thu, May 19, 2016 at 3:47 PM, David Hawes wrote:
> Thanks for the confirmation.
>
> I like the idea of only allowing authenticated consumers
> (definitely what I want). Unfortunately, I'm running Kafka with an EL
I'd be happy to do that, but in this case it looks like the next
release has it covered:
https://www.elastic.co/blog/logstash-5-0-0-alpha1-released
(See the Kafka 0.9 section)
On 19 May 2016 at 10:50, Tom Crayford wrote:
> You could always contribute back to logstash - I'm sure they'd appreciat
Or you can use KafkaStreams, which is already available in Kafka :)
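For what it's worth, a minimal sketch of such a filter against the 0.10.0.0
Streams API (the topic names and the predicate are made up):

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

// Reads "events", keeps only records whose value matches a content rule,
// and writes the matches to "matched-events" (both topic names are made up).
public class ContentFilterApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "content-filter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> events = builder.stream("events");
        events.filter((key, value) -> value.contains("ERROR")) // the content rule
              .to("matched-events");

        new KafkaStreams(builder, props).start();
    }
}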
On Thu, May 19, 2016 at 2:33 AM, Radoslaw Gruchalski
wrote:
> Hey, you should have a look at Apache Samza. You put Samza on top of Kafka
> and you can inject content filtering rules into a Samza system. This will
> give you a "c
Hi,
For Kafka consumers, is it expected that the throughput will scale linearly
as I increase the number of consumers/partitions?
Also, I keep getting this info message: "Kafka Consumer Marking the
coordinator 2147483647 dead." What is the problem? How can I fix it? My
program continues without a
Hello Srikanth,
Thanks for your questions; please see my replies inline.
On Tue, May 17, 2016 at 7:36 PM, Srikanth wrote:
> Hi,
>
> I was reading about Kafka streams and trying to understand its programming
> model.
> Some observations that I wanted to get some clarity on..
>
> 1) Joins & aggreg
Hi there,
Firstly, I'd recommend not running the consumers and the brokers on the
same machine. Are you running multiple brokers? If not, that'd be my first
recommendation (it sounds like you might not be).
Secondly, yes, consumers scale up with partitions. At most you can have the
same number of consumers as partitions.
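To make that concrete, a minimal sketch: run several copies of this process
with the same group.id and Kafka spreads the topic's partitions across them
(the topic and group names here are made up):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Each instance joining the same group gets a disjoint subset of the
// partitions; instances beyond the partition count sit idle.
public class ScalingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "throughput-test"); // same group => partitions shared out
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("partition=%d offset=%d%n",
                            record.partition(), record.offset());
            }
        }
    }
}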
Thank you for your reply. I am running tests using a simple application
with one broker. That is why I am running everything on a single machine.
For the scalability, my application's throughput scales by more than 2x
going from 1 consumer/partition to 2 consumers/partitions which is great.
However
I know that when offsets get stored in Kafka, they get cleaned up based on the
offsets.retention.minutes config setting. This happens when using the new
consumer, or when using the old consumer but offsets.storage=kafka.
If using the old consumer where offsets are stored in Zookeeper, do old off
I'm starting to take a closer look at Kafka Streams, and one of the things
I'd like to be able to do is see if I can "migrate" our existing Samza-based
applications to run on Kafka Streams.
With Samza, we take advantage of multiple source streams being sent to a
process function that runs in a sing
Time-based log retention only happens on old log segments, and log compaction
only happens on old segments as well.
Currently, I believe segments only roll when a new record is written to the
log. That is, it is during the write of a new record that the current segment
is evaluated to see if
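For reference, the broker settings involved; a sketch, with the values being
my recollection of the defaults:

# Sketch only; values are my recollection of the broker defaults.
log.segment.bytes=1073741824   # roll the active segment once it reaches 1 GiB
log.roll.hours=168             # ...or once it is a week old (checked on writes)
log.retention.hours=168        # rolled (old) segments become eligible for deletion
log.cleanup.policy=delete      # "delete" for retention, "compact" for compaction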
Thanks Guozhang for your reply.
I have a few follow-ups based on your response. Writing them inline would
have made this hard to read, so here is an extract:
1) *Internal topics use default retention policy*.
Would it be better to add another config for this? Something like
topic.log.retention.hour
Hi,
We are seeing the error below, and no messages are consumed from the topic by
the Kafka consumer. Any input on what the issue is and how to resolve it?
INFO | jvm 1 | 2016/05/19 19:03:30 | 2016-05-19 19:03:30,088
[container2-kafka-1] INFO AbstractCoordinator - SyncGroup for group
unpbat
The suggested way is to use Samza on top of Kafka and then inject
content filtering rules into the Samza system. This will give the "content
subscription" system you intend to build. Or, alternatively, to use
KafkaStreams.
Can anyone explain a bit regarding this? :)
On Thu, May 19, 2016 at 9:32 PM, Gwen Shapira
Thanks for running the release. +1 from me. Verified the quickstart.
Jun
On Tue, May 17, 2016 at 10:00 PM, Gwen Shapira wrote:
> Hello Kafka users, developers and client-developers,
>
> This is the seventh (!) candidate for release of Apache Kafka
> 0.10.0.0. This is a major release that includ