OK, I'll open a PR to fix this.
2017-06-17 0:59 GMT+08:00 Matthias J. Sax:
> Thanks for reporting this!
>
> Would you like to open a MINOR PR to fix it? Don't think we need a Jira
> for this.
>
> -Matthias
>
> On 6/16/17 9:26 AM, john cheng wrote:
> > https://github.com/apache/kafka/blob/trunk/
Hi Tom,
Thanks a lot for reporting this. We dug into it. It's easy to reproduce
(thanks a lot for describing a simple way to do that) and it seems to be a
bug in Streams... I did open a JIRA:
https://issues.apache.org/jira/browse/KAFKA-5464
For using Streams 0.10.2.1, there is nothing we can advise a
I'm trying to compile Kafka & Spark Streaming integration code, i.e. reading
from Kafka using Spark Streaming,
and the sbt build is failing with this error:
[error] (*:update) sbt.ResolveException: unresolved dependency:
org.apache.spark#spark-streaming-kafka_2.11;2.1.0: not found
Scala version
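The unresolved dependency is most likely the artifact name: since Spark 2.x the
Kafka integration modules carry the Kafka API version in their name
(spark-streaming-kafka-0-8 and spark-streaming-kafka-0-10); the plain
spark-streaming-kafka artifact only exists for Spark 1.x. A sketch of the
build.sbt lines that should resolve, assuming Scala 2.11 (matching the _2.11
suffix in the error) and the 0.10 integration:

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming" % "2.1.0",
  // in Spark 2.x the Kafka API version is part of the artifact name
  "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.1.0"
)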
Thank you, Vahid.
I appreciate your time.
Arunkumar Pichaimuthu, PMP
On Fri, 6/16/17, Vahid S Hashemian wrote:
Subject: Re: UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL
To: users@kafka.apache.org
Date: Friday, June 16, 2017, 6:30 PM
Hi Arun
Hi Arunkumar,
I'm glad you were able to fix the issue. Also glad that the article was
helpful.
Regarding Kafka SSL configuration, here are some links:
- Kafka documentation:
http://kafka.apache.org/documentation.html#security_ssl
- Apache Kafka Security 101:
https://www.confluent.io/blog/ap
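The broker side of the SSL setup those links walk through comes down to a few
server.properties entries; a minimal sketch, with the host, port, paths, and
passwords as placeholder values:

# server.properties (broker): enable an SSL listener
listeners=SSL://your.broker.host:9093
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit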
Hi Vahid
I deleted the dataDir and dataLogDir and restarted ZooKeeper, brokers, producers,
and consumers. Now it works:
all the messages sent by the producer are consumed.
Thanks for all the help. The link you shared helped a lot.
I am planning to set up SASL_SSL; I appreciate your advice.
Guozhang, Matthias
Thanks for confirming; with the current understanding it was quite clear, on
seeing it, that there was a mistake in the supplier implementation.
The Javadoc is indeed clear about what to do; this was ultimately a failure to read
it, having relied instead on the documentation provided on Confluent/Kafka.
Hi Vahid
I am working on the same use case ):. As per the document I was trying to set
ACLs for the topic, which worked, and now I am able to start my producer without
error.
Then I set an ACL for the consumer, and when I start my consumer it starts without
issue. I am also able to set a second ACL for c
Adrian,
I see. That would explain what you see: all tasks, each with its own
"processor context", are accessing the same state store instance; for
some task its processor context may not be updated yet while another task
is accessing that state store, which causes the issue.
If you are wit
Hi Arunkumar,
Were you trying the same steps in the document when you got this error? Or
are you working on a different use case?
Also, I might have missed it in previous emails. What version of Kafka are
you using?
Thanks.
--Vahid
From: Arunkumar
To:
Date: 06/16/2017 10:22 AM
Subj
That is correct. You need to return a new instance on each call.
I am not sure how to improve the docs though:
1) it's a supplier pattern (that should explain it)
2) also the `TransformerSupplier#get()` method JavaDoc says:
> /**
> * Return a new {@link Transformer} instance.
> *
>
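To make the contrast concrete, here is a sketch reusing the PhaseTransformer,
evaluator, and storeName names from the earlier emails in this thread:

// Broken: a single shared Transformer instance is handed to every task.
PhaseTransformer transformer = new PhaseTransformer<>(evaluator, storeName);
stream.transform(() -> transformer, storeName);

// Correct: the supplier returns a new instance on each get() call,
// so every task owns its own transformer and processor context.
stream.transform(() -> new PhaseTransformer<>(evaluator, storeName), storeName);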
Just to follow up, after making the change to
.transform(() -> new PhaseTransformer<>(evaluator, storeName),
transformer.getStoreName())
the problem appears to have gone away, so I think my previous hypothesis was
correct? Please let me know if this change should have made no difference.
May be worth
Hi Vahid
Thank you for sharing the link to set it up. It is a very useful document.
When I ran the describe command for the group, I saw this error:
bin/kafka-consumer-groups --bootstrap-server host:9097 --describe --group
arun-group --command-config etc/kafka/producer.properties
Note: This will onl
I just double-checked your example code from an earlier email. There you
are using:
stream.flatMap(...)
.groupBy((k, v) -> k, Serdes.String(), Serdes.Integer())
.reduce((value1, value2) -> value1 + value2,
In your last email, you say that you want to count on a category that is
contained i
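If the count should be per category rather than per original key, the category
has to become the grouping key; a sketch, assuming a hypothetical
extractCategory helper (the truncation above doesn't say whether the category
lives in the key or the value):

stream.flatMap(...)
      .groupBy((k, v) -> extractCategory(k, v), Serdes.String(), Serdes.Integer())
      .reduce((value1, value2) -> value1 + value2, "SumsPerCategory");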
Thanks Michał!
That is very good feedback.
-Matthias
On 6/16/17 2:38 AM, Michal Borowiecki wrote:
> I wonder if it's a frequent enough use case that Kafka Streams should
> consider providing this out of the box - this was asked for multiple
> times, right?
>
> Personally, I agree totally with
Thanks for reporting this!
Would you like to open a MINOR PR to fix it? Don't think we need a Jira
for this.
-Matthias
On 6/16/17 9:26 AM, john cheng wrote:
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java#L1345
>
As Michał said, it's not designed for this use case.
Kafka's transactions are not the same thing as DB transactions; if
you break them down, they allow for atomic (multi-partition) writes, but no
2-phase commit.
Also, a transaction is "owned" by a single thread (i.e., producer) and
cannot be "shared"
I think the question is: when do you actually *want* processing-time
semantics? There are definitely times when it's safe to assume the two are
close enough that a little lossiness doesn't matter much, but it is pretty
hard to make assumptions about when the processing time is, and it has been
hard for us
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java#L1345
This line:
log.info("{} Adding assigned standby tasks {}", logPrefix, partitionAssignor.activeTasks());
The parameter is the active tasks, but the message text says standby tasks.
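The fix is presumably a one-line change to the argument; a sketch, assuming the
assignor exposes a standbyTasks() counterpart to activeTasks(), as the
surrounding code suggests:

log.info("{} Adding assigned standby tasks {}", logPrefix, partitionAssignor.standbyTasks());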
But isn't it low-hanging fruit at this moment? Isn't that just an API
limitation, and wouldn't the backend for transactions support it with only
minor changes to the API (do not automatically fail dangling transactions
on producer restart)? Flushing is already there, so that _should_ handle the
pre
I don't think KIP-98 is so ambitious as to provide support for
distributed transactions (2-phase commit).
It would be great if I were wrong though :P
Cheers,
Michał
On 16/06/17 14:21, Piotr Nowojski wrote:
Hi,
I'm looking into Kafka's transactions API as proposed in KIP-98. I've read
both t
Hi,
I'm looking into Kafka's transactions API as proposed in KIP-98. I've read
the KIP-98 document and looked into the code on the master
branch. I would like to use it to implement a two-phase commit mechanism
on top of Kafka's transactions, which would allow me to tie multi
I wonder if it's a frequent enough use case that Kafka Streams should
consider providing this out of the box - this was asked for multiple
times, right?
Personally, I agree totally with the philosophy of "no final
aggregation", as expressed by Eno's post, but IMO that is predicated
totally on
Hi Guozhang
It's just occurred to me that the transformer is added to the topology like
this:
PhaseTransformer transformer = new PhaseTransformer<>(evaluator,
storeName);
...
.transform(() -> transformer, transformer.getStoreName())
This means that the same transformer is used whenever the s
Hi Bryan,
So this must be something else, since KIP-134 is not in 0.10.2.1 but in the new
0.11 release that hasn't come out yet.
Eno
> On 14 Jun 2017, at 21:35, Bryan Baugher wrote:
>
> It does seem like we are in a similar situation described in the KIP (
> https://cwiki.apache.org/confluenc
Hey Frank,
I think I spotted the issue and submitted a patch. Here's a link to the
JIRA: https://issues.apache.org/jira/browse/KAFKA-5456. I expect we'll get
the fix into 0.11.0.0. Thanks for finding this!
-Jason
On Thu, Jun 15, 2017 at 11:53 PM, Frank Lyaruu wrote:
> Yes, compression was on (