Take a look at the new AdminClient or KafkaAdminClient classes
https://kafka.apache.org/24/javadoc/org/apache/kafka/clients/admin/KafkaAdminClient.html
https://kafka.apache.org/24/javadoc/org/apache/kafka/clients/admin/AdminClient.html
You can describe the topic or topics in question and it shoul
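For illustration, a minimal sketch of describing a topic with the AdminClient; the
topic name and bootstrap address below are placeholders, not anything from this thread:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class DescribeTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker address -- point this at your own cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Describe one (hypothetical) topic and print its partition layout.
            Map<String, TopicDescription> descriptions =
                admin.describeTopics(Collections.singleton("my-topic")).all().get();
            descriptions.forEach((name, description) ->
                System.out.printf("%s: %d partitions%n", name, description.partitions().size()));
        }
    }
}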
Hi Victoria,
Sorry for the vagueness, I’m not in front of a computer right now, so I can
only answer from memory.
I’m not sure why that interface is still tagged “evolving”. Any changes to it
would go through a deprecation period, just like any public interface in Kafka.
We should probably re
Hi, John
Thanks a lot for the valuable information.
I looked at KafkaAdminClient and I see that it offers createTopics method that
indeed seems suitable.
I still have a couple of questions:
1. The documentation does not mention what the expected behavior is if the
specified topic already exi
Hi Victoria,
I’ve used the AdminClient for this kind of thing before. It’s the official java
client for administrative actions like creating topics. You can create topics
with any partition count, replication, or any other config.
I hope this helps,
John
On Sat, Feb 15, 2020, at 22:41, Victor
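To make the suggestion concrete, a minimal sketch of creating a topic with the
AdminClient; the topic name, partition count, and replication factor are made up
for the example. As far as I recall, createTopics fails with a TopicExistsException
(wrapped in an ExecutionException) when the topic already exists, which touches on
the first question above:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // Hypothetical topic: 6 partitions, replication factor 3.
            NewTopic topic = new NewTopic("orders", 6, (short) 3);
            try {
                admin.createTopics(Collections.singleton(topic)).all().get();
            } catch (ExecutionException e) {
                if (e.getCause() instanceof TopicExistsException) {
                    System.out.println("Topic already exists, nothing to do.");
                } else {
                    throw e;
                }
            }
        }
    }
}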
On 2019/06/04 12:58:10, "M. Manna" wrote:
> Kafka cannot be run on Windows in production. There are problems with
> memory map allocation/releases which result in a fatal shutdown. On Linux
> it’s allowed, but on Windows it’s prevented.
>
> You can reproduce this by setting a small log reten
Kafka cannot be run on Windows in production. There are problems with
memory map allocation/releases which result in a fatal shutdown. On Linux
it’s allowed, but on Windows it’s prevented.
You can reproduce this by setting a small log retention period on your
Windows machine and testing with the QuickStart.
Thank you!
Hi,
A request-response structure is more suitable for your scenario; you should
stick with a RESTful API rather than Kafka.
1095193...@qq.com
From: Desmond Lim
Date: 2019-03-19 09:52
To: users
Subject: Using kafka with RESTful API
Hi all,
Just started using kafka yesterday and I have this
Hi Akshay,
In regards to your 3rd question (and indirectly to your 2nd question),
instead of having different consumer groups, why not just multiple
consumers in the same group? That would ensure that each consumer only
reads from one partition in the topic. You can even assign the partition if
yo
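As an illustration of that second option (pinning a consumer to a partition rather
than relying on group rebalancing), a rough sketch; the topic, group id, and broker
address are placeholders:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class AssignedPartitionConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "billing-service");         // hypothetical group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Pin this instance to partition 0 of the topic instead of subscribing as a group member.
            consumer.assign(Collections.singleton(new TopicPartition("events", 0)));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}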
Hi All,
Thanks for the inputs: apparently this is an issue for which everyone tries
to come up with a solution.
I think it should be done in the core Kafka CLI; it cries for a feature
request/improvement.
I've created a JIRA issue for it; if you think it would be helpful for you
as well, please
Us too:
https://github.com/wikimedia/puppet/blob/production/modules/confluent/files/kafka/kafka.sh
This requires that the various kafka-* scripts are in your PATH.
And then this gets rendered into /etc/profile.d to set env variables.
https://github.com/wikimedia/puppet/blob/production/modules/con
We also have created simple wrapper scripts for common operations.
On Sat, Apr 21, 2018 at 2:20 AM, Peter Bukowinski wrote:
> One solution is to build wrapper scripts around the standard kafka
> scripts. You’d put your relevant cluster parameters (brokers, zookeepers)
> in a single config file (
One solution is to build wrapper scripts around the standard kafka scripts.
You’d put your relevant cluster parameters (brokers, zookeepers) in a single
config file (I like yaml), then your script would import that config file and
pass the appropriate parameters to the kafka command. You could c
Hello Péter :
/etc/hosts
PUBLIC_IP_ADDRESS FQDN SHORTNAME
where shortname can be your shortened HostName
(Nota bene: be careful with tabs and spaces, they are important... best to make a
backup before modifying)
https://unix.stackexchange.com/questions/239920/how-to-set-the-fully-qualified-hostname-on
Nice paper to read and cool usage of Kafka. Thanks for sharing Afonso :)
Guozhang
On Mon, Mar 12, 2018 at 1:13 PM, Afonso Mukai wrote:
> Hi Hannes,
>
> We will use Kafka here at the European Spallation Source (the facility is
> currently under construction) to stream data from neutron detector
Hi Hannes,
We will use Kafka here at the European Spallation Source (the facility is
currently under construction) to stream data from neutron detectors and other
experimental station equipment to consumers (the EPICS forwarding software
mentioned by Eric covers the sources other than detectors
The European Spallation Source [1] seems to be using it for this case [2].
I am also using this code [2], but only for visualization in another "data
center".
[1] https://europeanspallationsource.se/
[2] https://github.com/ess-dmsc/forward-epics-to-kafka
Thank you!
Eric
I think how to use GDAX's API is orthogonal to using Kafka.
Kafka has client support for Java and Python.
On Tue, Nov 7, 2017 at 12:31 PM, Taha Arif wrote:
> Hello,
>
>
> I want to build a project that accesses the Gdax websocket in a real time
> stream, and push that data into Kafka to reforma
No, I don't. I help others that do :)
On Tue, Oct 3, 2017 at 1:12 PM, Valentin Forst wrote:
> Hi Sean,
>
> Thanks a lot for this info !
> Are you running DC/OS in prod?
>
> Regards
> Valentin
>
> > Am 03.10.2017 um 15:29 schrieb Sean Glover :
> >
> > Hi Valentin,
> >
> > Kafka is available on D
Hi Sean,
Thanks a lot for this info !
Are you running DC/OS in prod?
Regards
Valentin
> Am 03.10.2017 um 15:29 schrieb Sean Glover :
>
> Hi Valentin,
>
> Kafka is available on DC/OS in the Catalog (aka Universe) as part of the
> `kafka` package. Mesosphere has put a lot of effort into makin
Hi Valentin,
Kafka is available on DC/OS in the Catalog (aka Universe) as part of the
`kafka` package. Mesosphere has put a lot of effort into making Kafka work
on DC/OS. Since Kafka requires persistent disk, you need to make sure that
after initial deployment brokers stay put on their assigned M
Hi Avinash,
Thanks for this hint.
It would be great if someone could share their experience using this
framework in a production environment.
Thanks in advance
Valentin
> Am 02.10.2017 um 19:39 schrieb Avinash Shahdadpuri :
>
> There is a a native kafka framework which runs on top of DC/
Hi David,
Thank you for your reply! Presumably I wasn’t clear in my previous post. Here is
an example to visualize what I'm trying to figure out:
Imagine we have a data flow propagating messages through a Kafka cluster which
happens to consist of 3 brokers (3 partitions, 3 replicas). If one of t
There is a native Kafka framework which runs on top of DC/OS.
https://docs.mesosphere.com/service-docs/kafka/
This will most likely be a better way to run Kafka on DC/OS than
running it as a Marathon framework.
On Mon, Oct 2, 2017 at 7:35 AM, David Garcia wrote:
> I’m not sure how y
I’m not sure how your requirements for Kafka are related to your requirements
for Marathon. Kafka is a streaming-log system and Marathon is a scheduler.
Mesos, as your resource manager, simply “manages” resources. Are you asking
about multitenancy? If so, I highly recommend that you separate
Just to add to this, depending upon your use case it may be beneficial to
use kafka connect for pulling data out of oracle to publish to kafka. With
the JDBC connector you would just need a few configs to stand up kafka
connect and start publishing data to kafka, either via a select statement
or a
>> java.lang.NoClassDefFoundError
You are missing some dependent classes. Two questions:
1. Does the message have more information about what class it couldn't find?
2. What exactly are you putting into your jar file?
-Dave
-Original Message-
From: Rahul R04 [mailto:rahul.kuma...@mph
Thank you Michael for the prompt response, really appreciate it!
Best regards,
Mina
On Thu, Mar 30, 2017 at 4:50 AM, Michael Noll wrote:
> If you want to deploy a Kafka Streams application, then essentially you
> only need the (fat) jar of your application and a JRE in your container.
> In othe
If you want to deploy a Kafka Streams application, then essentially you
only need the (fat) jar of your application and a JRE in your container.
In other words, it's the same setup you'd use to deploy *any* kind of Java
application.
You do not need to containerize "Kafka", which I assume you meant
Hi,
Do we have an example of a container with an instance of the jar file by
any chance? I am wondering if I should have a container with headless Java or
should I have a container of Kafka?
And after I have the container running, in my container should I run java
-cp ... same as https://github.com
Hi Michael,
Thank you very much for the prompt response, really appreciate it!
From https://github.com/confluentinc/examples/blob/3.2.x/kafka-streams/src/main/java/io/confluent/examples/streams/WordCountLambdaExample.java#L55-L62 and
https://github.com/confluentinc/examples/tree/3.2.x/kafka-st
Typically you'd containerize your app and then launch e.g. 10 containers if
you need to run 10 instances of your app.
Also, what do you mean by "in a cluster of Kafka containers" and "in the
cluster of Kafkas"?
On Tue, Mar 21, 2017 at 9:08 PM, Mina Aslani wrote:
> Hi,
>
> I am trying to underst
The log compaction functionality uses the key to determine which records to
deduplicate. You can think of it (very roughly) as deleting entries from a
hash map as the value for each key is overwritten. This functionality
doesn't have much of a point unless you include keys in your records.
-Ewen
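To tie that to code, a minimal sketch of producing keyed records so that compaction
has something to deduplicate on; the topic name, keys, and broker address are
invented for the example:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class KeyedProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Two records with the same key: after compaction only the latest value survives.
            producer.send(new ProducerRecord<>("user-profiles", "user-42", "{\"plan\":\"free\"}"));
            producer.send(new ProducerRecord<>("user-profiles", "user-42", "{\"plan\":\"pro\"}"));
            producer.flush();
        }
    }
}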
Hey Tom,
Thanks for your help :)
In terms of the number of topics, there would be 1 topic per service.
Each service would be running multiple instances, but they would be in
the same consumer group consuming the same topic.
I am expecting around 30ish microservices at the moment, so around 3
inline
On Mon, Sep 5, 2016 at 11:58 PM, F21 wrote:
> Hi Tom,
>
> Thank you so much for your response. I had a feeling that approach would
> run into scalability problems, so thank you for confirming that.
>
> Another approach would be to have each service request a subscription from
> the event
Hi Tom,
Thank you so much for your response. I had a feeling that approach would
run into scalability problems, so thank you for confirming that.
Another approach would be to have each service request a subscription
from the event store. The event store then creates a unique kafka topic
for
inline
On Mon, Sep 5, 2016 at 12:00 AM, F21 wrote:
> Hi all,
>
> I am currently looking at using Kafka as a "message bus" for an event
> store. I plan to have all my events written into HBase for permanent
> storage and then have a reader/writer that reads from HBase to push them
> into kafka.
iday, July 15, 2016 12:12 AM
Subject: Re: Using Kafka without persisting message to disk
To: Users
Hi Jack,
No, kafka doesn't support not writing to disk. If you're really 100% sure
of yourself you could use a ramdisk and mount Kafka on it, but that's not
supported. I'd re
Hi Jack,
No, kafka doesn't support not writing to disk. If you're really 100% sure
of yourself you could use a ramdisk and mount Kafka on it, but that's not
supported. I'd recommend "just" writing to disk, it's plenty fast enough
for nearly all use cases.
Thanks
Tom Crayford
Heroku Kafka
On Thu
Hello Alex,
Currently Kafka Connect has some simple "T" (transform) functionality on a per-message basis
since the 0.10 release, but that may not be sufficient for your use case.
We are planning to have some Kafka Streams / Connect integration in the
near future, so that users can specify non-Kafka sources / sinks in
This sounds like a square peg in a round hole sort of solution. That said, you
might want to look at the work being done with kafka-streams to expose a topic
as a table.
> On Mar 30, 2016, at 3:23 PM, Michael D. Spence wrote:
>
>
> Any advice on using Kafka to store the actual messages?
>
>
Any advice on using Kafka to store the actual messages?
On 3/22/2016 6:32 PM, Michael D. Spence wrote:
We have to construct a messaging application that functions as a
switch between other applications in the enterprise. Since our switch
need only retain a few days' worth of messages, we are con
Hi,
If you put n different consumers in different consumer groups, each
consumer will get the same message.
Each consumer gets full data
But, if you put n consumers in 1 consumer group, it'll act as a traditional
distributed queue. Amortised, each consumer will get 1/n of the overall data
Regard
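A rough sketch of the "n consumers in one group" case: run several copies of
something like this with the same group.id and the partitions will be split across
them. The topic, group id, and broker address are placeholders:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class WorkQueueConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "job-workers");             // same group for every instance
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("jobs"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("worker got job %s from partition %d%n",
                        record.value(), record.partition());
                }
            }
        }
    }
}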
I think this is what you are looking for:
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
On Thu, Sep 24, 2015 at 11:59 PM, 刘振 wrote:
> Dear all,
>
>
> I am trying to use kafka to do some job load balance and not sure if kafka
> support this feature:
>
> Suppose there's
Thanks, I'm on 0.8.2 so that explains it.
Should retention.ms affect segment rolling? In my experiment it did (
retention.ms = -1), which was unexpected since I thought only segment.bytes
and segment.ms would control that.
On Mon, Jul 13, 2015 at 7:57 PM, Daniel Tamai
wrote:
> Using -1 for log.
Hi,
1. What you described sounds like a reasonable architecture, but may I
ask why JSON? Avro seems better supported in the ecosystem
(Confluent's tools, Hadoop integration, schema evolution, etc.).
1.5 If all you do is convert data into JSON, Spark Streaming sounds
like a difficult-to-manag
Using -1 for log.retention.ms should work only for 0.8.3 (
https://issues.apache.org/jira/browse/KAFKA-1990).
2015-07-13 17:08 GMT-03:00 Shayne S :
> Did this work for you? I set the topic settings to retention.ms=-1 and
> retention.bytes=-1 and it looks like it is deleting segments immediately.
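For anyone reading this later: a sketch of what "retention.ms=-1 and
retention.bytes=-1" looks like when applied with the newer AdminClient API (which
did not exist at the time of this thread); the topic name and broker address are
placeholders, and pre-0.8.3 brokers behave differently, as discussed above:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;
import java.util.Arrays;
import java.util.Map;
import java.util.Properties;

public class DisableRetentionExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "archive");
            // -1 disables time-based and size-based deletion for this topic.
            admin.incrementalAlterConfigs(Map.of(topic, Arrays.asList(
                new AlterConfigOp(new ConfigEntry("retention.ms", "-1"), AlterConfigOp.OpType.SET),
                new AlterConfigOp(new ConfigEntry("retention.bytes", "-1"), AlterConfigOp.OpType.SET)
            ))).all().get();
        }
    }
}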
Did this work for you? I set the topic settings to retention.ms=-1 and
retention.bytes=-1 and it looks like it is deleting segments immediately.
On Sun, Jul 12, 2015 at 8:02 AM, Daniel Schierbeck <
daniel.schierb...@gmail.com> wrote:
>
> > On 10. jul. 2015, at 23.03, Jay Kreps wrote:
> >
> > If
Sounds like the same idea. The nice thing about having such an option is that,
with the correct application of containers and a backup and restore strategy, one can
create an infinite ordered backup of the raw input stream using the native Kafka
storage format.
I understand the point of having the data in other f
For what it's worth, I did something similar to Rad's suggestion of
"cold-storage" to add long-term archiving when using Amazon Kinesis. Kinesis is
also a message bus, but only has a 24 hour retention window.
I wrote a Kinesis consumer that would take all messages from Kinesis and save
them int
I have had a similar issue where I wanted a single source of truth between
Search and HDFS. First, if you zoom out a little, eventually you are going
to have some compute engine(s) process the data. If you store it in a
compute neutral tier like kafka then you will need to suck the data out at
runt
Am I correct in assuming that Kafka will only retain a file handle for the last
segment of the log? If the number of handles grows unbounded, then it would be
an issue. But I plan on writing to this topic continuously anyway, so not
separating data into cold and hot storage is the entire point.
Indeed, the files would have to be moved to some separate, dedicated storage.
There are basically 3 options, as kafka does not allow adding logs at runtime:
1. make the consumer able to read from an arbitrary file
2. add ability to drop files in (I believe this adds a lot of complexity)
3. read
Yes, consider my e-mail an up vote!
I guess the files would automatically be moved somewhere else to separate the
active from the cold segments? Ideally, one could run an unmodified consumer
application on the cold segments.
--Scott
On Mon, Jul 13, 2015 at 6:57 AM, Rad Gruchalski
wrote:
> Scott,
>
Scott,
This is what I was trying to get at in one of my previous responses to Daniel,
the one in which I suggested another compaction setting for Kafka.
Kind regards,
Radek Gruchalski
ra...@gruchalski.com
de.linkedin.com/in
We've tried to use Kafka not as a persistent store, but as a long-term
archival store. An outstanding issue we've had with that is that the
broker holds on to an open file handle on every file in the log! The other
issue we've had is when you create a long-term archival log on shared
storage, you
Would it be possible to document how to configure Kafka to never delete
messages in a topic? It took a good while to figure this out, and I see it
as an important use case for Kafka.
On Sun, Jul 12, 2015 at 3:02 PM Daniel Schierbeck <
daniel.schierb...@gmail.com> wrote:
>
> > On 10. jul. 2015, at
> On 10. jul. 2015, at 23.03, Jay Kreps wrote:
>
> If I recall correctly, setting log.retention.ms and log.retention.bytes to
> -1 disables both.
Thanks!
>
> On Fri, Jul 10, 2015 at 1:55 PM, Daniel Schierbeck <
> daniel.schierb...@gmail.com> wrote:
>
>>
>>> On 10. jul. 2015, at 15.16, Shay
Daniel,
I understand your point. From what I understand, the mode that suits you is what
Jay suggested: log.retention.ms and log.retention.bytes both set to -1.
A few questions before I continue on something what may already be possible:
1. is it possible to attach a
Radek: I don't see how data could be stored more efficiently than in Kafka
itself. It's optimized for cheap storage and offers high-performance bulk
export, exactly what you want from long-term archival.
On fre. 10. jul. 2015 at 23.16 Rad Gruchalski wrote:
> Hello all,
>
> This is a very interest
Hello all,
This is a very interesting discussion. I’ve been thinking of a similar use case
for Kafka over the last few days.
The usual data workflow with Kafka is most likely something like this:
- ingest with Kafka
- process with Storm / Samza / whathaveyou
- put some processed data back on Kafk
If I recall correctly, setting log.retention.ms and log.retention.bytes to
-1 disables both.
On Fri, Jul 10, 2015 at 1:55 PM, Daniel Schierbeck <
daniel.schierb...@gmail.com> wrote:
>
> > On 10. jul. 2015, at 15.16, Shayne S wrote:
> >
> > There are two ways you can configure your topics, log co
> On 10. jul. 2015, at 15.16, Shayne S wrote:
>
> There are two ways you can configure your topics, log compaction and with
> no cleaning. The choice depends on your use case. Are the records uniquely
> identifiable and will they receive updates? Then log compaction is the way
> to go. If they a
There are two ways you can configure your topics, log compaction and with
no cleaning. The choice depends on your use case. Are the records uniquely
identifiable and will they receive updates? Then log compaction is the way
to go. If they are truly read only, you can go without log compaction.
We
I don't want to endorse this use of Kafka, but assuming you can give your
messages unique identifiers, I believe using log compaction will keep all
unique messages forever. You can read about how consumer offsets stored in
Kafka are managed using a compacted topic here:
http://kafka.apache.org/docum
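For completeness, a sketch of what creating a compacted topic looks like with
today's AdminClient (an API that postdates this thread); the topic name and
settings are made up for the example:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class CompactedTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // Compaction keeps the latest record for each unique key indefinitely.
            NewTopic topic = new NewTopic("entity-snapshots", 3, (short) 3)
                .configs(Map.of("cleanup.policy", "compact"));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}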
up, as I am not sure you received this email.
On Sun, Jan 11, 2015 at 5:34:17 PM, Yann Simon wrote:
> Hi,
>
> after having read
> http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying,
> I am considering Kafka for an appli
Hey Yann,
Yes, you can just make the retention infinite which will disable any
deletion.
What you describe with compaction might work, but wasn't exactly the
intention.
This type of event logging can work two ways: you can log the "command" or
you can log the result of the command. In databases
While I agree with Mark that testing the end-to-end pipeline is
critical, note that in terms of performance, whatever you write to
hook up Teradata to Kafka is unlikely to be as fast as the Teradata
connector for Sqoop (especially the newer one). Quite a lot of
optimization by Teradata engineers went
If you use Kafka for the first bulk load, you will test your new
Teradata->Kafka->Hive pipeline, as well as have the ability to blow away
the data in Hive and reflow it from Kafka without an expensive full
re-export from Teradata. As for whether Kafka can handle hundreds of GB of
data: Yes, absolu
Both variants will work well (if your Kafka cluster can handle the full
volume of the transmitted data for the duration of the TTL on each topic).
I would run the whole thing through Kafka since you will be "stress-testing"
your production flow - consider if you at some later time lost your
destina
Thanks Philip and Anand for the hints.
I feel more comfortable going further now.
On Wed, Aug 20, 2014 at 6:49 PM, Anand Nalya wrote:
> For operating kafka across multiple data centers have a look at
> https://kafka.apache.org/08/ops.html and MirrorMaker (
> https://kafka.apache.org/08/tools.h
For operating kafka across multiple data centers have a look at
https://kafka.apache.org/08/ops.html and MirrorMaker (
https://kafka.apache.org/08/tools.html)
On 20 August 2014 04:09, Justin Maltat wrote:
> Hi,
>
> As of today, our company IT is mainly composed of domain specific
> software (pr
>> - we have low data traffic compared to your figures: around 30 GB a
day. Will it be an issue?
I have personal experience that Kafka deals extremely well with very low
volumes, as well as very high ones. I have used Kafka for small integration-test
setups, as well as large production systems. Kaf
Hi,
As of today, our company IT is mainly composed of domain specific
software (proprietary and homemade). We would like to migrate them one
after another to a microservice architecture with Kafka as the data
pipeline. With the system now in place it's quite difficult to have a
common data flow be
Hi Justin
It sounds like Kafka could be a good fit for your environment. Are you able to
tell us more about the kinds of applications you will be running?
Daniel.
> On 19/08/2014, at 10:53 am, Justin Maltat wrote:
>
> Hello,
>
> I'm managing a study to explore possibilities for migrating a m
Be aware that JMX metrics changed between 0.7 and 0.8. If you use chef,
you might also check out https://github.com/bflad/chef-jmxtrans which has
recipes for both 0.7 and 0.8 kafka metrics -> graphite.
Dana Powers
Rdio, Inc.
dana.pow...@rd.io
rdio.com/people/dpkp/
On Thu, Mar 20, 2014 at 7:46 A
I’m using jmxtrans to do this for Ganglia, but it should work the same for
Graphite:
http://www.jmxtrans.org/
Here’s an example Kafka jmxtrans json file.
https://github.com/wikimedia/puppet-kafka/blob/master/kafka-jmxtrans.json.md
You can change the output writers to use Graphite instead of Gan
.
-Vjeran
-Original Message-
From: Vjeran Marcinko [mailto:vjeran.marci...@email.t-com.hr]
Sent: Saturday, January 25, 2014 6:10 PM
To: users@kafka.apache.org
Subject: RE: Using Kafka on Windows - file path problems
Even if in that one case the thing would work, it would be best if a
urday, January 25, 2014 5:25 PM
To: users@kafka.apache.org
Subject: Re: Using Kafka on Windows - file path problems
It sounds like \tmp\kafka-logs should work since that's what
File.getParent() returns. Not sure why you can't use that to create the
file.
Thanks,
Jun
On Sat, Jan 25, 2
It sounds like \tmp\kafka-logs should work since that's what
File.getParent() returns. Not sure why you can't use that to create the
file.
Thanks,
Jun
On Sat, Jan 25, 2014 at 12:43 AM, Vjeran Marcinko <
vjeran.marci...@email.t-com.hr> wrote:
> Hi,
>
> I have a problem going through start guide
Chetan,
Are you releasing a Scala RxJava producer as well?
-Steve
On Tue, Dec 3, 2013 at 10:42 PM, Richard Rodseth wrote:
> Any update on this, Chetan? Thanks.
>
>
> On Thu, Oct 31, 2013 at 4:11 PM, chetan conikee wrote:
>
> > I am in the process of releasing out Scala and RxJava consum
Any update on this, Chetan? Thanks.
On Thu, Oct 31, 2013 at 4:11 PM, chetan conikee wrote:
> I am in the process of releasing Scala and RxJava consumer(s) on
> GitHub. Will be releasing them soon. Keep an eye out.
>
>
> On Thu, Oct 31, 2013 at 3:49 PM, Richard Rodseth
> wrote:
>
> > So I hav
I am in the process of releasing Scala and RxJava consumer(s) on
GitHub. Will be releasing them soon. Keep an eye out.
On Thu, Oct 31, 2013 at 3:49 PM, Richard Rodseth wrote:
> So I have the 0.8 Beta 1 consumer Java example running now.
>
> Is there a Scala API documented somewhere? What abo
Each broker will only report metrics for the partitions that exist on that
broker. In order to get a global view of metrics, you will need to collect
metrics from all brokers.
Thanks,
Neha
On Wed, Sep 18, 2013 at 3:07 PM, Vimpy Batra wrote:
> In case there are multiple brokers, is there an ove
In case there are multiple brokers, is there an overlap in the metric values
they report? Let's say a topic is partitioned onto multiple broker nodes and we
want topic level metrics. Will all the brokers be reporting the same value?
Should we tap into one broker to get all the metrics or do we n
If you start a Kafka broker, it should start reporting metrics
automatically. But I'm not sure if I understood your question completely.
Can you elaborate on what problem you saw with metrics collection?
Thanks,
Neha
On Wed, Sep 18, 2013 at 2:20 PM, Vimpy Batra wrote:
> Hi,
>
> I am using Kafk
similar results?
>
> Thanks
> Josh
>
> From: Mahendra M
> To: users@kafka.apache.org; Josh Foure
> Sent: Friday, June 14, 2013 8:03 AM
> Subject: Re: Using Kafka for "data" messages
>
> Hi Josh
__
From: Mahendra M
To: users@kafka.apache.org; Josh Foure
Sent: Friday, June 14, 2013 8:03 AM
Subject: Re: Using Kafka for "data" messages
Hi Josh,
Thanks for clarifying the use case. The idea is good, but I see the following
three issues
1. Creating a queue
>> Web will be flooded with a ton of messages that it will promptly drop
>> but I
>>> don't want to create a new "response" or "recommendation" topic because
>>> then I feel like I am tightly coupling the message to the functionality
>>
>
> From: Mahendra M
> To: users@kafka.apache.org; Josh Foure
> Sent: Thursday, June 13, 2013 12:56 PM
> Subject: Re: Using Kafka for "data" messages
>
>
> Hi Josh,
>
> The idea looks very interesting.
ion" topic because
> > then I feel like I am tightly coupling the message to the functionality
> and
> > in the future different systems may want to consume those messages as
> well.
> >
> > Does that make sense?
> > Josh
> >
nt: Thursday, June 13, 2013 2:13 PM
Subject: Re: Using Kafka for "data" messages
Also, since you're going to be creating a topic per user, the number of
concurrent users will also be a concern, as Kafka doesn't like massive
numbers of topics.
Tim
On Thu, Jun 13, 2013 at 10
s well.
>
> Does that make sense?
> Josh
>
> From: Mahendra M
> To: users@kafka.apache.org; Josh Foure
> Sent: Thursday, June 13, 2013 12:56 PM
> Subject: Re: Using Kafka for "data" messages
>
>
>
make sense?
Josh
From: Mahendra M
To: users@kafka.apache.org; Josh Foure
Sent: Thursday, June 13, 2013 12:56 PM
Subject: Re: Using Kafka for "data" messages
Hi Josh,
The idea looks very interesting. I just had one doubt.
1. A user logs in. H
Hi Josh,
The idea looks very interesting. I just had one doubt.
1. A user logs in. His login id is sent on a topic
2. Other systems (consumers on this topic) consume this message and
publish their results to another topic
This will be happening without any particular order for hundreds of users
I've been talking about this kind of architecture for years.
As you said it's an EDA architecture. You might also want to have a look at
Esper if you haven't already - it's a perfect complement to this strategy.
At my last job I built a relatively low latency site wide pub sub system
that showed