Hi,
Can anyone suggest where I can find answers to these types of
questions?
Regards,
Abhimanyu
On Thu, Jun 8, 2017 at 6:49 PM, Abhimanyu Nagrath <
abhimanyunagr...@gmail.com> wrote:
> Hi ,
>
> Can Apache Kafka, along with Storm, be used to design a ticketing system?
> By ticketing syst
Hi Muhammad,
You can use Burrow, the monitoring tool developed by LinkedIn
(https://github.com/linkedin/Burrow), to get that kind of data within a
time window.
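Independent of any particular tool, "how much data came in at a certain time" can be answered by diffing partition end offsets sampled at the start and end of the window. The class and sample maps below are a minimal illustrative sketch, not a Burrow API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch: given end offsets per partition sampled at two instants,
// the number of records produced in that window is the offset delta.
public class WindowThroughput {
    public static long recordsInWindow(Map<Integer, Long> offsetsAtT1,
                                       Map<Integer, Long> offsetsAtT2) {
        long total = 0;
        for (Map.Entry<Integer, Long> e : offsetsAtT2.entrySet()) {
            long before = offsetsAtT1.getOrDefault(e.getKey(), 0L);
            total += e.getValue() - before; // records appended to this partition
        }
        return total;
    }

    public static void main(String[] args) {
        Map<Integer, Long> t1 = new HashMap<>();
        t1.put(0, 100L); t1.put(1, 250L);
        Map<Integer, Long> t2 = new HashMap<>();
        t2.put(0, 180L); t2.put(1, 300L);
        System.out.println(recordsInWindow(t1, t2)); // 130
    }
}
```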
Regards,
Abhimanyu
On Tue, Jun 20, 2017 at 4:22 AM, Muhammad Arshad <
muhammad.ars...@alticeusa.com> wrote:
> Hi,
>
> wanted to see if there is
Hi,
I wanted to see if there is Kafka monitoring available. I am looking for
the following:
how much data came in at a certain time.
Thanks,
Muhammad Faisal Arshad
Manager, Enterprise Data Quality
Data Services & Architecture
Thanks for sharing your thoughts.
I am not sure, though, what section you mean. IIRC, we don't cover the
supplier pattern in the docs at all. So where do you think we should
add this one-liner? (Happy to add it if I know where :))
-Matthias
On 6/16/17 2:48 PM, Adrian McCague wrote:
> Guozhang,
Hi Sameer,
With regard to
>>> What I saw was that while on Machine1, the counter was 100 , another
>>> machine it was at 1. I saw it as inconsistent.
If you really see the same key on different machines, that would be
incorrect. All records with the same key must be processed by the same
machine.
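The reason is that the producer's partitioner maps each key deterministically to one partition, and each partition is owned by exactly one Streams instance. The sketch below illustrates that determinism with a plain hash; it is not Kafka's actual murmur2-based partitioner:

```java
// Illustration: a deterministic key-to-partition mapping means the same
// key always lands on the same partition, hence on the same machine.
// Plain String.hashCode() is used here for illustration only.
public class KeyPartitioning {
    public static int partitionFor(String key, int numPartitions) {
        // mask off the sign bit so the result is non-negative
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = partitionFor("user-42", 4);
        int p2 = partitionFor("user-42", 4);
        System.out.println(p1 == p2); // always true: same key, same partition
    }
}
```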
> On Jun 19, 2017, at 2:02 PM, Andre Eriksson wrote:
>
> I then tried implementing my own scheduling that periodically sends/clears
> out messages using the ProcessorContext provided to the aforementioned
> transform step. However, it seems that when I call forward() from my
> scheduler (i.e.
So I'm trying to implement a rate limiting processing step using Kafka Streams
(0.10.2.1).
Basically, this step should just let messages through, unless similar messages
have already been seen in the last N seconds, in which case it should aggregate
them into a single message and then send them
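The pass-through-or-aggregate logic described above can be sketched in plain Java as follows. In Kafka Streams this would live inside a Transformer backed by a state store; the class and method names here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the described rate limiting: a message passes through unless
// one with the same key was emitted within the last windowMs, in which
// case it is counted into a pending aggregate instead.
public class RateLimiter {
    private final long windowMs;
    private final Map<String, Long> lastEmitted = new HashMap<>();
    private final Map<String, Integer> pending = new HashMap<>();

    public RateLimiter(long windowMs) { this.windowMs = windowMs; }

    /** Returns true if the message should be forwarded immediately. */
    public boolean offer(String key, long nowMs) {
        Long last = lastEmitted.get(key);
        if (last == null || nowMs - last >= windowMs) {
            lastEmitted.put(key, nowMs);
            return true;                     // pass through
        }
        pending.merge(key, 1, Integer::sum); // aggregate instead
        return false;
    }

    /** Suppressed messages waiting to be flushed as one aggregate. */
    public int pendingCount(String key) {
        return pending.getOrDefault(key, 0);
    }
}
```

A periodic flush (the scheduler mentioned above) would then forward one aggregated message per key and clear `pending`.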
Hi Karan,
You assume incorrectly; KafkaUtils is part of the (already mentioned)
spark-streaming-kafka library. I have not used Eclipse in ages, but I would
suggest importing the sbt project there, which could make it easier for you.
Alternatively, using ENSIME with your favorite editor will make your life
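As a sketch of the sbt setup being suggested, a build.sbt along these lines pulls in the library that provides `org.apache.spark.streaming.kafka.KafkaUtils`. The artifact coordinates and versions below are assumptions; match them to your installed Spark version:

```scala
// Hypothetical build.sbt sketch; versions are assumptions.
name := "kafka-streaming-example"
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming" % "1.6.3" % "provided",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.6.3"
)
```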
Thanks everyone. Great discussion.
Because these Read or Write actions are interpreted in conjunction with
particular resources (Topic, Group, ...), it would also make more sense to
me that the ACL for committing offsets should be (Group, Write).
So, a consumer would be required to have (Topic, R
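The (resource, operation) pairing being discussed can be modeled as a simple lookup table. This toy sketch illustrates the proposed semantics (committing offsets requires Write on the Group resource); it is not Kafka's Authorizer API:

```java
import java.util.HashSet;
import java.util.Set;

// Toy ACL model: grants are (resourceType, resourceName, operation)
// triples, and an action is allowed only if an exact grant exists.
public class AclTable {
    private final Set<String> grants = new HashSet<>();

    public void allow(String resourceType, String resourceName, String op) {
        grants.add(resourceType + ":" + resourceName + ":" + op);
    }

    public boolean isAllowed(String resourceType, String resourceName, String op) {
        return grants.contains(resourceType + ":" + resourceName + ":" + op);
    }
}
```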
Hi Jozef - I do have an additional basic question.
When I tried to compile the code in Eclipse, I was not able to do that,
e.g.
import org.apache.spark.streaming.kafka.KafkaUtils
gave errors saying KafkaUtils was not part of the package.
However, when I used sbt to compile - the compilation went t
Thanks for the explanation. I still think it would be better to have
the mutation operations require write ACLs, though. It might not be
100% intuitive for novice users, but the current split between Describe
and Read is not intuitive for either novice or experienced users.
In any case, I am +1
+1 -- passes kafka-python test suite.
-Dana
On Sun, Jun 18, 2017 at 10:49 PM, Magnus Edenhill
wrote:
> +1 (non-binding)
>
> Passes librdkafka integration tests (v0.9.5 and master)
>
>
> 2017-06-19 0:32 GMT+02:00 Ismael Juma :
>
> > Hello Kafka users, developers and client-developers,
> >
> > Th
It could work as you have described, but indeed only for one "local-only"
transaction. That's not the case for me, though. It wouldn't even work with
two Kafka producers. If I could get away with just a single Kafka producer,
I probably wouldn't need a distributed system to solve such a use case.
Maybe
Thanks Tom, that sounds great. And I agree about your comment regarding RC0
to RC1 changes.
Ismael
On Mon, Jun 19, 2017 at 3:15 PM, Tom Crayford wrote:
> Hello,
>
> Heroku has been testing 0.11.0.0 RC0, mostly focussed on backwards
> compatibility and performance. So far, we note a slight perfo
Hello,
Heroku has been testing 0.11.0.0 RC0, mostly focussed on backwards
compatibility and performance. So far, we note a slight performance
increase over older versions when using non-current clients.
Testing a 0.9 client against 0.10.2.1 vs 0.11.0.0 rc0: 0.11.0.0 rc0 has
slightly higher through
I'm not sure if I understood correctly, but if you want to integrate a
single Kafka producer transaction (or any transaction manager that only
supports local transactions) into a distributed transaction, I think you
can do so as long as all other involved transaction managers support
2-phase com
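The 2-phase commit idea referenced above can be sketched as follows: a coordinator asks every participant to prepare, and commits only on a unanimous yes vote. `Participant` is a hypothetical interface for illustration, not a Kafka API:

```java
import java.util.List;

// Minimal 2-phase commit sketch: phase 1 collects prepare votes,
// phase 2 commits everywhere on unanimous yes, otherwise aborts.
public class TwoPhaseCommit {
    public interface Participant {
        boolean prepare();   // vote yes/no (phase 1)
        void commit();       // phase 2 on unanimous yes
        void abort();        // phase 2 otherwise
    }

    public static boolean run(List<Participant> participants) {
        boolean allPrepared = true;
        for (Participant p : participants) {
            if (!p.prepare()) { allPrepared = false; break; }
        }
        for (Participant p : participants) {
            if (allPrepared) p.commit(); else p.abort();
        }
        return allPrepared;
    }
}
```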
Sure, 13 blockers were fixed:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20priority%20%3D%20Blocker%20AND%20fixVersion%20%3D%200.11.0.0%20AND%20resolved%20%3E%3D%20-10d%20ORDER%20BY%20updated%20DESC
Ismael
On Mon, Jun 19, 2017 at 1:44 PM, Tom Crayford wrote:
> Is th
Sorry for responding to my own message, but when I sent the original
message/question I was not subscribed to this mailing list, and now I
cannot respond to Matthias's answer directly.
I don't want to share a transaction between multiple producer
threads/processes; I just would like to resume an int
Is there a summary of which blockers were fixed in RC0 somewhere?
On Mon, Jun 19, 2017 at 1:41 PM, Eno Thereska
wrote:
> +1 (non-binding) passes Kafka Streams tests.
>
> Thanks,
> Eno
> > On 19 Jun 2017, at 06:49, Magnus Edenhill wrote:
> >
> > +1 (non-binding)
> >
> > Passes librdkafka integra
+1 (non-binding) passes Kafka Streams tests.
Thanks,
Eno
> On 19 Jun 2017, at 06:49, Magnus Edenhill wrote:
>
> +1 (non-binding)
>
> Passes librdkafka integration tests (v0.9.5 and master)
>
>
> 2017-06-19 0:32 GMT+02:00 Ismael Juma :
>
>> Hello Kafka users, developers and client-developers,