[jira] [Created] (KAFKA-10653) get producer's ip for topic on server

2020-10-28 Thread bo liu (Jira)
bo liu created KAFKA-10653:
--

 Summary: get producer's ip for topic on server
 Key: KAFKA-10653
 URL: https://issues.apache.org/jira/browse/KAFKA-10653
 Project: Kafka
  Issue Type: Improvement
Reporter: bo liu


I want to map the relationship between the producers and consumers of Kafka, so 
is there any way to get every producer's IP for each Kafka topic? Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10654) connector has failed, but worker status was ok

2020-10-28 Thread wehbi (Jira)
wehbi created KAFKA-10654:
-

 Summary: connector has failed, but worker status was ok
 Key: KAFKA-10654
 URL: https://issues.apache.org/jira/browse/KAFKA-10654
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.1.1
 Environment: Kafka distrib: Confluent CE
kafka version: kafka_2.12-5.4.0-ccs.jar
Reporter: wehbi


Hello,

We are using the Kafka Mongo sink connector (please see the configuration 
below), and we have multiple connectors on multiple topics.

Lately, one of the connectors stopped working, but the others continue to 
operate normally within the same worker. Looking into the connector logs (see 
the extract below), we can observe that the Kafka topic leader was not available.

The worker service status was "running" (per systemctl).

Restarting the worker service solved the problem.

Why was the connector not able to recover automatically?

How can we monitor and detect this failure?

 

For information:

Kafka distrib: Confluent CE
Kafka version: kafka_2.12-5.4.0-ccs.jar

{"class":"com.mongodb.kafka.connect.MongoSinkConnector","type":"sink","version":"1.0.1"}

We have distributed workers.

 

I checked the task status before restarting the worker, and it was reported as 
running (not failed). I also tried pausing/resuming the task, but it didn't do 
anything.

We are already monitoring the connector metrics (using Prometheus/Grafana), and 
they never detected the task failure. All metrics indicated that everything was 
fine.
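For what it's worth, the per-task state can also be polled directly from the Connect REST API (`GET /connectors/<name>/status`). Below is a minimal sketch of checking that payload for failed tasks; the sample response is illustrative, with the connector name taken from the logs further down:

```python
import json

def failed_tasks(status: dict) -> list:
    """Return the ids of tasks whose state is FAILED in a Connect status payload."""
    return [t["id"] for t in status.get("tasks", []) if t.get("state") == "FAILED"]

# Illustrative payload, shaped like the response of
# GET http://localhost:8083/connectors/adherent-sink/status
sample = json.loads("""
{
  "name": "adherent-sink",
  "connector": {"state": "RUNNING", "worker_id": "worker1:8083"},
  "tasks": [
    {"id": 0, "state": "FAILED",  "worker_id": "worker1:8083"},
    {"id": 1, "state": "RUNNING", "worker_id": "worker1:8083"}
  ]
}
""")

print(failed_tasks(sample))  # -> [0]
```

Note that in the incident described above the task still reported RUNNING, so polling the state alone may not catch a stuck consumer; alerting on consumer-group lag for the sink's group is a complementary signal.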

 

 

--- connector logs -
[2020-09-25 11:53:52,352] WARN [Consumer 
clientId=connector-consumer-adherent-sink-0, groupId=connect-adherent-sink] 
Received unknown topic or partition error in fetch for partition 
RAMOWNER.ADHERENT-1 (org.apache.kafka.clients.consumer.internals.Fetcher:1246)
[2020-09-25 11:53:52,353] WARN [Consumer 
clientId=connector-consumer-adherent-sink-0, groupId=connect-adherent-sink] 
Received unknown topic or partition error in fetch for partition 
RAMOWNER.ADHERENT-4 (org.apache.kafka.clients.consumer.internals.Fetcher:1246)
[2020-09-25 11:53:52,353] WARN [Consumer 
clientId=connector-consumer-adherent-sink-0, groupId=connect-adherent-sink] 
Received unknown topic or partition error in fetch for partition 
RAMOWNER.ADHERENT-7 (org.apache.kafka.clients.consumer.internals.Fetcher:1246)
[2020-09-25 11:53:52,365] WARN [Consumer 
clientId=connector-consumer-adherent-sink-0, groupId=connect-adherent-sink] 
Error while fetching metadata with correlation id 20822125 : 
\{RAMOWNER.ADHERENT=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient:1063)
[2020-09-25 11:53:52,374] INFO [Consumer 
clientId=connector-consumer-adherent-sink-0, groupId=connect-adherent-sink] 
Revoke previously assigned partitions RAMOWNER.ADHERENT-3, RAMOWNER.ADHERENT-2, 
RAMOWNER.ADHERENT-1, RAMOWNER.ADHERENT-0, RAMOWNER.ADHERENT-7, 
RAMOWNER.ADHERENT-6, RAMOWNER.ADHERENT-5, RAMOWNER.ADHERENT-4 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:286)
[2020-09-25 11:53:52,472] WARN [Consumer 
clientId=connector-consumer-adherent-sink-0, groupId=connect-adherent-sink] 
Error while fetching metadata with correlation id 20822127 : 
\{RAMOWNER.ADHERENT=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient:1063)
[2020-09-25 11:53:52,472] WARN [Consumer 
clientId=connector-consumer-adherent-sink-0, groupId=connect-adherent-sink] The 
following subscribed topics are not assigned to any members: 
[RAMOWNER.ADHERENT] 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:570)
[2020-09-25 11:53:52,597] WARN [Consumer 
clientId=connector-consumer-adherent-sink-0, groupId=connect-adherent-sink] 
Error while fetching metadata with correlation id 20822129 : 
\{RAMOWNER.ADHERENT=LEADER_NOT_AVAILABLE} 
(org.apache.kafka.clients.NetworkClient:1063)
topics = [RAMOWNER.ADHERENT]
topics = [RAMOWNER.ADHERENT]





Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-10-28 Thread Ron Dagostino
Hi again, Colin.  I just noticed that both ConfigRecord and
AccessControlRecord have a ResourceType of type int8.  I thought that
config resources are in the set {topics, clients, users, brokers} and
ACL resource types are a different set as defined by the
org.apache.kafka.common.resource.ResourceType enum.  Does
ConfigRecord.ResourceType need to be a String?

Ron

On Sun, Oct 25, 2020 at 6:04 AM Ron Dagostino  wrote:
>
> Hi Colin and Jun.
>
> Regarding these issues:
>
> 83.1 It seems that the broker can transition from FENCED to RUNNING
> without registering for a new broker epoch. I am not sure how this
> works. Once the controller fences a broker, there is no need for the
> controller to keep the broker epoch around. So the fenced broker's
> heartbeat request with the existing broker epoch will be rejected,
> leading the broker back to the FENCED state again. 104.
> REGISTERING(1) : It says "Otherwise, the broker moves into the FENCED
> state.". It seems this should be RUNNING?
>
> When would/could a broker re-register -- i.e. send
> BrokerRegistrationRequest again once it receives a
> BrokerRegistrationResponse containing no error and its broker epoch?
> The text states that "Once the period has elapsed, if the broker has
> not renewed its registration via a heartbeat, it must re-register."
> But the broker state machine only mentions any type of
> registration-related event in the REGISTERING state ("While in this
> state, the broker tries to register with the active controller");
> there is no other broker state in the text that mentions the
> possibility of re-registering, and the broker state machine has no
> transition back to the REGISTERING state.
>
> Also, the text now states that there are "three broker registration
> states: unregistered, registered but fenced, and registered and
> active." It would be good to map these onto the formal broker state
> machine so we know which "registration states" a broker can be in for
> each state within its broker state machine.  It is not clear if there
> is a way for a broker to go backwards into the "unregistered" broker
> registration state.  I suspect it can only flip-flop between
> registered but fenced/registered and active as the broker flip-flops
> between ACTIVE and FENCED, and this would imply that a broker is never
> strictly required to re-register -- though the option isn't precluded.
>
> Does a broker JVM keep its assigned broker epoch throughout the life
> of the JVM?  The BrokerRegistrationRequest includes a place for the
> broker to specify its current broker epoch, but that would only be
> useful if the broker is re-registering.  If a broker were to
> re-register, the data in the request might seem to imply that it could
> do so to specify dynamic changes to its features or endpoints, but
> those dynamic changes happen centrally, so that doesn't seem to be a
> valid reason to re-register.  So I do not yet see a reason for
> re-registering despite the text "if the broker has not renewed its
> registration via a heartbeat, it must re-register."
>
> It feels to me that a broker would keep its epoch throughout the life
> of its JVM and it would never re-register, and the controller would
> remember/maintain the broker epoch when it fences a broker; the broker
> would continue to try sending heartbeat requests while it is fenced,
> and it would continue to do so until the process is killed via an
> external signal.  If the controller eventually does respond with the
> broker's next state then that next state will either be ACTIVE
> (meaning communication has been restored; the return broker epoch will
> be the same one that the broker JVM has had throughout its lifetime
> and that it provided in the heartbeat request); or the next state will
> be PENDING_CONTROLLED_SHUTDOWN if some other JVM process has since
> started with the same broker ID.
>
> I hope that helps the discussion.  Thanks for the great questions,
> Jun, and your hard work and responses, Colin.
>
> Ron
>
>
>
>
>
> On Sat, Oct 24, 2020 at 4:08 AM Tom Bentley  wrote:
> >
> > Hi Colin,
> >
> > Which error code in particular though? Because so far as I'm aware there's
> > no existing error code which really captures this situation and creating a
> > new one would not be backward compatible.
> >
> > Cheers,
> >
> > Tom
> >
> > On Sat, Oct 24, 2020 at 12:20 AM Jun Rao  wrote:
> >
> > > Hi, Colin,
> > >
> > > Thanks for the reply. A few more comments.
> > >
> > > 55. There is still text that favors new broker registration. "When a 
> > > broker
> > > first starts up, when it is in the INITIAL state, it will always "win"
> > > broker ID conflicts.  However, once it is granted a lease, it transitions
> > > out of the INITIAL state.  Thereafter, it may lose subsequent conflicts if
> > > its broker epoch is stale.  (See KIP-380 for some background on broker
> > > epoch.)  The reason for favoring new processes is to accommodate the 
> > > common
> > > case where a process is killed wi

Re: [DISCUSS] KIP-634: Complementary support for headers in Kafka Streams DSL

2020-10-28 Thread Jorge Esteban Quilcate Otoya
Hi Matthias,

Sorry for the late reply.

I like the proposal. Just to check if I got it right:

We can extend the `kstream.to()` function to support setting headers. e.g.:

```
void to(final String topic,
final Produced produced,
final HeadersExtractor headersExtractor);
```

where `HeadersExtractor`:

```
public interface HeadersExtractor {
Headers extract(final K key, final V value, final RecordContext
recordContext);
}
```

 This would require changing `Topology#addSink()` to support this
extractor as well.
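As a language-neutral illustration of that contract (a hypothetical Python stand-in; the actual proposal is the Java interface above, and the dict-based headers and context are assumptions for the sketch):

```python
from typing import Any, Callable, Dict

# Stand-in for the proposed HeadersExtractor: given key, value, and record
# context, return the headers to attach to the outgoing record.
HeadersExtractor = Callable[[Any, Any, Dict], Dict]

def sink_record(key, value, context: Dict, extractor: HeadersExtractor) -> Dict:
    """Sketch of what a sink node would do per record: compute headers, then emit."""
    headers = extractor(key, value, context)
    return {"key": key, "value": value, "headers": headers}

# Example extractor: tag each outgoing record with its source topic.
tag_source: HeadersExtractor = lambda k, v, ctx: {"source-topic": ctx["topic"]}

out = sink_record("k1", "v1", {"topic": "input"}, tag_source)
print(out["headers"])  # -> {'source-topic': 'input'}
```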

If this is aligned with your proposal, I'm happy to add it to this KIP.

Cheers,
Jorge.

On Fri, Sep 11, 2020 at 11:03 PM Matthias J. Sax  wrote:

> Jorge,
>
> thanks a lot for this KIP. Being able to modify headers is a very
> valuable feature.
>
> However, before we actually expose them in the DSL, I am wondering if we
> should improve how headers can be modified in the PAPI? Currently, it is
> possible but very clumsy to work with headers in the Processor API,
> because of two reasons:
>
>  (1) There is no default `Headers` implementation in the public API
>  (2) There is no explicit way to set headers for output records
>
> Currently, the input record headers are copied into the output records
> when `forward()` is called, however, it's not really a deep copy but we
> just copy the reference. This implies that one needs to work with a
> single mutable object that flows through multiple processors making it
> very error prone.
>
> Furthermore, if you want to emit multiple output records, and for
> example want to add two different headers to the output record (based on
> the same input headers), you would need to do something like this:
>
>   Headers h = context.headers();
>   h.add(...);
>   context.forward(...);
>   // remove the header you added for the first output record
>   h.remove(...);
>   h.add(...);
>   context.forward(...);
>
>
> Maybe we could extend `To` to allow passing in a new `Headers` object
> (or an `Iterable` similar to `ProducerRecord`)? We could either
> add it to your KIP or do a new KIP just for the PAPI.
>
> Thoughts?
>
>
> -Matthias
>
> On 7/16/20 4:05 PM, Jorge Esteban Quilcate Otoya wrote:
> > Hi everyone,
> >
> > Bumping this thread to check if there's any feedback.
> >
> > Cheers,
> > Jorge.
> >
> > On Tue, Jun 30, 2020 at 12:46 AM Jorge Esteban Quilcate Otoya <
> > quilcate.jo...@gmail.com> wrote:
> >
> >> Hi everyone,
> >>
> >> I would like to start the discussion for KIP-634:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-634%3A+Complementary+support+for+headers+in+Kafka+Streams+DSL
> >>
> >> Looking forward to your feedback.
> >>
> >> Thanks!
> >> Jorge.
> >>
> >>
> >>
> >>
> >
>
>


Re: [DISCUSS] KIP-634: Complementary support for headers in Kafka Streams DSL

2020-10-28 Thread Sophie Blee-Goldman
I *think* that the `To` Matthias was referring to was not KStream#to but
the To class
which is accepted as a possible parameter of ProcessorContext#forward
(correct
me if wrong).

This was on the old ProcessorContext interface, which has now been
replaced with
the new api.ProcessorContext in KIP-478. In the new interface we've moved
away
from the forward signatures that accept a separate key or value or
timestamp or To,
and wrapped all of these into a single Record class. This new Record class
has the
headers as a field, so it seems like KIP-478 has happened to solve the lack
of support
for Headers in the PAPI along the way.
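To illustrate the shape of that change (a simplified stand-in, not the actual api.Record class):

```python
from dataclasses import dataclass, replace
from typing import Any, Tuple

@dataclass(frozen=True)
class Record:
    """Simplified stand-in for the new PAPI Record: key, value, timestamp,
    and headers travel together as one immutable unit."""
    key: Any
    value: Any
    timestamp: int
    headers: Tuple = ()

    def with_headers(self, headers: Tuple) -> "Record":
        # Returns a new Record; the original is untouched, avoiding the
        # shared-mutable-headers hazard of the old forward() path.
        return replace(self, headers=headers)

r1 = Record("k", "v", 0)
r2 = r1.with_headers((("h1", b"a"),))
print(r1.headers, r2.headers)  # -> () (('h1', b'a'),)
```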

This is all somewhat recent, and probably wasn't yet sorted out at the time
of Matthias'
last reply. But given how this worked out it seems like we can just focus
on adding
support for Headers in the DSL in this KIP by building off of the
groundwork of
KIP-478? It doesn't seem necessary to go back and add support for headers
in the old
PAPI, since it will be (or already has been?) deprecated.

The one challenge is that this will presumably require that we migrate all
DSL operators
to the new PAPI before adding header support for those operators. But that
definitely
sounds achievable here.

On Wed, Oct 28, 2020 at 11:10 AM Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Hi Matthias,
>
> Sorry for the late reply.
>
> I like the proposal. Just to check if I got it right:
>
> We can extend the `kstream.to()` function to support setting headers.
> e.g.:
>
> ```
> void to(final String topic,
> final Produced produced,
> final HeadersExtractor headersExtractor);
> ```
>
> where `HeadersExtractor`:
>
> ```
> public interface HeadersExtractor {
> Headers extract(final K key, final V value, final RecordContext
> recordContext);
> }
> ```
>
>  This would require changing `Topology#addSink()` to support this
> extractor as well.
>
> If this is aligned with your proposal, I'm happy to add it to this KIP.
>
> Cheers,
> Jorge.
>
> On Fri, Sep 11, 2020 at 11:03 PM Matthias J. Sax  wrote:
>
> > Jorge,
> >
> > thanks a lot for this KIP. Being able to modify headers is a very
> > valuable feature.
> >
> > However, before we actually expose them in the DSL, I am wondering if we
> > should improve how headers can be modified in the PAPI? Currently, it is
> > possible but very clumsy to work with headers in the Processor API,
> > because of two reasons:
> >
> >  (1) There is no default `Headers` implementation in the public API
> >  (2) There is no explicit way to set headers for output records
> >
> > Currently, the input record headers are copied into the output records
> > when `forward()` is called, however, it's not really a deep copy but we
> > just copy the reference. This implies that one needs to work with a
> > single mutable object that flows through multiple processors making it
> > very error prone.
> >
> > Furthermore, if you want to emit multiple output records, and for
> > example want to add two different headers to the output record (based on
> > the same input headers), you would need to do something like this:
> >
> >   Headers h = context.headers();
> >   h.add(...);
> >   context.forward(...);
> >   // remove the header you added for the first output record
> >   h.remove(...);
> >   h.add(...);
> >   context.forward(...);
> >
> >
> > Maybe we could extend `To` to allow passing in a new `Headers` object
> > (or an `Iterable` similar to `ProducerRecord`)? We could either
> > add it to your KIP or do a new KIP just for the PAPI.
> >
> > Thoughts?
> >
> >
> > -Matthias
> >
> > On 7/16/20 4:05 PM, Jorge Esteban Quilcate Otoya wrote:
> > > Hi everyone,
> > >
> > > Bumping this thread to check if there's any feedback.
> > >
> > > Cheers,
> > > Jorge.
> > >
> > > On Tue, Jun 30, 2020 at 12:46 AM Jorge Esteban Quilcate Otoya <
> > > quilcate.jo...@gmail.com> wrote:
> > >
> > >> Hi everyone,
> > >>
> > >> I would like to start the discussion for KIP-634:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-634%3A+Complementary+support+for+headers+in+Kafka+Streams+DSL
> > >>
> > >> Looking forward to your feedback.
> > >>
> > >> Thanks!
> > >> Jorge.
> > >>
> > >>
> > >>
> > >>
> > >
> >
> >
>


[jira] [Created] (KAFKA-10655) Raft leader should resign after write failures

2020-10-28 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10655:
---

 Summary: Raft leader should resign after write failures
 Key: KAFKA-10655
 URL: https://issues.apache.org/jira/browse/KAFKA-10655
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson


The controller's state machine relies on strong ordering guarantees. Each write 
assumes that all previous writes are either committed or will eventually become 
committed. In order to protect this assumption, the controller must not accept 
additional writes in the same epoch if a preceding write has failed. Instead, 
it should resign so that another controller can be elected. There are basically 
three classes of failures that we consider:

1. Serialization/state errors. Any unexpected write error should be treated as 
fatal. The leader should gracefully resign and the process should shut down.
2. Disk IO errors. Similarly, the leader should resign (gracefully if possible) 
and the process should shut down. 
3. Commit failures. If the leader is unable to commit data after some time, 
then it should gracefully resign, but the process should not exit.
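The three failure classes could be sketched as a small dispatch (hypothetical names; not the actual controller code):

```python
from enum import Enum

class WriteFailure(Enum):
    SERIALIZATION = "serialization/state error"   # class 1
    DISK_IO = "disk IO error"                     # class 2
    COMMIT_TIMEOUT = "commit failure"             # class 3

def on_write_failure(kind: WriteFailure) -> tuple:
    """Return (resign, exit_process) per the policy described above."""
    if kind in (WriteFailure.SERIALIZATION, WriteFailure.DISK_IO):
        # Fatal: resign gracefully, then shut the process down.
        return (True, True)
    if kind is WriteFailure.COMMIT_TIMEOUT:
        # Resign so another controller can be elected, but keep running.
        return (True, False)
    raise ValueError(f"unknown failure class: {kind}")

print(on_write_failure(WriteFailure.COMMIT_TIMEOUT))  # -> (True, False)
```

In every class the leader resigns, which is the invariant that protects the ordering assumption: no further writes are accepted in an epoch once one write has failed.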







Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #186

2020-10-28 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: call super.close() when closing RocksDB options (#9498)


--
[...truncated 3.45 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@1faf5b97,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@33017afb,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@33017afb,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4a486fd7,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4a486fd7,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@6963bcf2, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@6963bcf2, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@c7521aa, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@c7521aa, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@341b547d, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@341b547d, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3da4a3fe, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3da4a3fe, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@49540c94, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@49540c94, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@6c890c65, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@6c890c65, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@25f1535, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@25f1535, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1afcedd3, 
timestamped = false, caching =

[jira] [Created] (KAFKA-10656) NetworkClient.java: print out the feature flags received at DEBUG level, as well as the other version information

2020-10-28 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-10656:


 Summary: NetworkClient.java: print out the feature flags received 
at DEBUG level, as well as the other version information
 Key: KAFKA-10656
 URL: https://issues.apache.org/jira/browse/KAFKA-10656
 Project: Kafka
  Issue Type: Bug
Reporter: Colin McCabe








Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #212

2020-10-28 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: call super.close() when closing RocksDB options (#9498)


--
[...truncated 6.90 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@36c12e18, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@16d9f998, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@16d9f998, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@38ea6179, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@38ea6179, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@31affa4e, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@31affa4e, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@e21a38d, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@e21a38d, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@576d8ee8, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@576d8ee8, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@3ce7ccd5, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@3ce7ccd5, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@72d43c8d, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@72d43c8d, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@71eac491, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@71eac491, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4acb1e8a, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4acb1e8a, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4848f262, 
timestamped = false, caching = false, logging = true] STARTED

or

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #178

2020-10-28 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: call super.close() when closing RocksDB options (#9498)


--
[...truncated 6.85 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = f

[jira] [Created] (KAFKA-10657) Incorporate Envelope into auto-generated JSON schema

2020-10-28 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-10657:
---

 Summary: Incorporate Envelope into auto-generated JSON schema
 Key: KAFKA-10657
 URL: https://issues.apache.org/jira/browse/KAFKA-10657
 Project: Kafka
  Issue Type: Sub-task
Reporter: Boyang Chen


We need to add support for outputting the embedded request inside an Envelope 
in JSON format, to enable better request logging.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-673%3A+Emit+JSONs+with+new+auto-generated+schema





Re: [DISCUSS] KIP-634: Complementary support for headers in Kafka Streams DSL

2020-10-28 Thread Jorge Esteban Quilcate Otoya
Thanks Sophie! Haven't followed KIP-478 but sounds great.
I'll be happy to help on that migration to the new PAPI if it's still an
open issue. We can bump this KIP after that.

Cheers,
Jorge.

On Wed, Oct 28, 2020 at 7:00 PM Sophie Blee-Goldman 
wrote:

> I *think* that the `To` Matthias was referring to was not KStream#to but
> the To class
> which is accepted as a possible parameter of ProcessorContext#forward
> (correct
> me if wrong).
>
> This was on the old ProcessorContext interface, which has now been
> replaced with
> the new api.ProcessorContext in KIP-478. In the new interface we've moved
> away
> from the forward signatures that accept a separate key or value or
> timestamp or To,
> and wrapped all of these into a single Record class. This new Record class
> has the
> headers as a field, so it seems like KIP-478 has happened to solve the lack
> of support
> for Headers in the PAPI along the way.
>
> This is all somewhat recent, and probably wasn't yet sorted out at the time
> of Matthias'
> last reply. But given how this worked out it seems like we can just focus
> on adding
> support for Headers in the DSL in this KIP by building off of the
> groundwork of
> KIP-478? It doesn't seem necessary to go back and add support for headers
> in the old
> PAPI, since this will (or already has?) been deprecated.
>
> The one challenge is that this will presumably require that we migrate all
> DSL operators
> to the new PAPI before adding header support for those operators. But that
> definitely
> sounds achievable here
>
> On Wed, Oct 28, 2020 at 11:10 AM Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
>
> > Hi Matthias,
> >
> > Sorry for the late reply.
> >
> > I like the proposal. Just to check if I got it right:
> >
> > We can extend the `kstream.to()` function to support setting headers.
> > e.g.:
> >
> > ```
> > void to(final String topic,
> > final Produced<K, V> produced,
> > final HeadersExtractor<K, V> headersExtractor);
> > ```
> >
> > where `HeadersExtractor`:
> >
> > ```
> > public interface HeadersExtractor<K, V> {
> > Headers extract(final K key, final V value, final RecordContext
> > recordContext);
> > }
> > ```
> >
> > This would require changing `Topology#addSink()` to support this
> > extractor as well.
> >
> > If this is aligned with your proposal, I'm happy to add it to this KIP.
> >
> > Cheers,
> > Jorge.
> >
> > On Fri, Sep 11, 2020 at 11:03 PM Matthias J. Sax 
> wrote:
> >
> > > Jorge,
> > >
> > > thanks a lot for this KIP. Being able to modify headers is a very
> > > valuable feature.
> > >
> > > However, before we actually expose them in the DSL, I am wondering if
> we
> > > should improve how headers can be modified in the PAPI? Currently, it
> is
> > > possible but very clumsy to work with headers in the Processor API,
> > > because of two reasons:
> > >
> > >  (1) There is no default `Headers` implementation in the public API
> > >  (2) There is no explicit way to set headers for output records
> > >
> > > Currently, the input record headers are copied into the output records
> > > when `forward()` is called, however, it's not really a deep copy but we
> > > just copy the reference. This implies that one needs to work with a
> > > single mutable object that flows through multiple processors making it
> > > very error prone.
> > >
> > > Furthermore, if you want to emit multiple output records, and for
> > > example want to add two different headers to the output record (based
> on
> > > the same input headers), you would need to do something like this:
> > >
> > >   Headers h = context.headers();
> > >   h.add(...);
> > >   context.forward(...);
> > >   // remove the header you added for the first output record
> > >   h.remove(...);
> > >   h.add(...);
> > >   context.forward(...);
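
The shared-reference hazard sketched above can be reproduced outside Kafka
with plain Java (hypothetical `Header`/`Rec` types standing in for the Kafka
API; this illustrates the mutation problem, not the actual Streams internals):

```java
import java.util.ArrayList;
import java.util.List;

public class SharedHeadersSketch {
    record Header(String key, String value) {}          // stand-in for Kafka's Header
    record Rec(String value, List<Header> headers) {}   // "forwarded" record keeping a headers reference

    static List<Rec> forwardTwoRecords() {
        List<Header> headers = new ArrayList<>();       // single mutable object, as with context.headers()
        List<Rec> forwarded = new ArrayList<>();

        headers.add(new Header("tag", "first"));
        forwarded.add(new Rec("record-1", headers));    // forward() copies the reference, not the headers

        headers.remove(0);                              // must manually undo the first record's header...
        headers.add(new Header("tag", "second"));
        forwarded.add(new Rec("record-2", headers));    // ...before forwarding the second record

        return forwarded;
    }

    public static void main(String[] args) {
        List<Rec> out = forwardTwoRecords();
        // record-1's headers were mutated after it was "forwarded":
        System.out.println(out.get(0).headers().get(0).value()); // prints "second", not "first"
    }
}
```

Because both records hold the same list, the first record's headers silently
change after it has already been forwarded — exactly the error-prone pattern
described above.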
> > >
> > >
> > > Maybe we could extend `To` to allow passing in a new `Headers` object
> > > (or an `Iterable<Header>` similar to `ProducerRecord`)? We could either
> > > add it to your KIP or do a new KIP just for the PAPI.
> > >
> > > Thoughts?
> > >
> > >
> > > -Matthias
> > >
> > > On 7/16/20 4:05 PM, Jorge Esteban Quilcate Otoya wrote:
> > > > Hi everyone,
> > > >
> > > > Bumping this thread to check if there's any feedback.
> > > >
> > > > Cheers,
> > > > Jorge.
> > > >
> > > > On Tue, Jun 30, 2020 at 12:46 AM Jorge Esteban Quilcate Otoya <
> > > > quilcate.jo...@gmail.com> wrote:
> > > >
> > > >> Hi everyone,
> > > >>
> > > >> I would like to start the discussion for KIP-634:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-634%3A+Complementary+support+for+headers+in+Kafka+Streams+DSL
> > > >>
> > > >> Looking forward to your feedback.
> > > >>
> > > >> Thanks!
> > > >> Jorge.
> > > >>
> > > >>
> > > >>
> > > >>
> > > >
> > >
> > >
> >
>


[jira] [Created] (KAFKA-10658) ErrantRecordReporter.report always return completed future even though the record is not sent to DLQ topic yet

2020-10-28 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-10658:
--

 Summary: ErrantRecordReporter.report always return completed 
future even though the record is not sent to DLQ topic yet 
 Key: KAFKA-10658
 URL: https://issues.apache.org/jira/browse/KAFKA-10658
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


This issue occurs when both the DLQ and the error log are enabled. An 
incorrect filter in the handling of multiple reporters causes the uncompleted 
future to be filtered out. Hence, users always receive a completed future even 
though the record is still in the producer buffer.
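The described failure mode can be sketched with plain `CompletableFuture`s (a
simplified illustration of the reported filter bug, not the actual Connect
code; `combineBuggy`/`combineFixed` are hypothetical names):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ReportFilterSketch {
    // Buggy combination: a filter keeps only already-completed futures,
    // so the pending DLQ-send future is dropped from the combined result.
    static CompletableFuture<Void> combineBuggy(List<CompletableFuture<Void>> reports) {
        CompletableFuture<?>[] done = reports.stream()
                .filter(CompletableFuture::isDone)
                .toArray(CompletableFuture[]::new);
        return CompletableFuture.allOf(done);
    }

    // Fixed combination: wait on every report, completed or not.
    static CompletableFuture<Void> combineFixed(List<CompletableFuture<Void>> reports) {
        return CompletableFuture.allOf(reports.toArray(new CompletableFuture[0]));
    }

    public static void main(String[] args) {
        CompletableFuture<Void> logReport = CompletableFuture.completedFuture(null); // error log: done
        CompletableFuture<Void> dlqReport = new CompletableFuture<>();               // still in producer buffer
        List<CompletableFuture<Void>> reports = List.of(logReport, dlqReport);

        System.out.println("buggy: " + combineBuggy(reports).isDone()); // true: caller is misled
        System.out.println("fixed: " + combineFixed(reports).isDone()); // false: record not yet sent
    }
}
```

With the buggy filter the combined future completes as soon as the error-log
report does, even though the DLQ send is still pending.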





[jira] [Created] (KAFKA-10659) Cogroup topology generation fails if input streams are repartitioned

2020-10-28 Thread blueedgenick (Jira)
blueedgenick created KAFKA-10659:


 Summary: Cogroup topology generation fails if input streams are 
repartitioned
 Key: KAFKA-10659
 URL: https://issues.apache.org/jira/browse/KAFKA-10659
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.5.1, 2.6.0
Reporter: blueedgenick


Example to reproduce:

 
{code:java}
KGroupedStream<String, A> groupedA = builder
  .stream(topicA, Consumed.with(Serdes.String(), serdeA))
  .selectKey((aKey, aVal) -> aVal.someId)
  .groupByKey();
KGroupedStream<String, B> groupedB = builder
  .stream(topicB, Consumed.with(Serdes.String(), serdeB))
  .selectKey((bKey, bVal) -> bVal.someId)
  .groupByKey();
KGroupedStream<String, C> groupedC = builder
  .stream(topicC, Consumed.with(Serdes.String(), serdeC))
  .selectKey((cKey, cVal) -> cVal.someId)
  .groupByKey();
CogroupedKStream<String, ABC> cogroup = groupedA.cogroup(AggregatorA)
  .cogroup(groupedB, AggregatorB)
  .cogroup(groupedC, AggregatorC);
// Aggregate all streams of the cogroup
KTable<String, ABC> agg = cogroup.aggregate(
  () -> new ABC(),
  Named.as("my-agg-proc-name"),
  Materialized.<String, ABC, KeyValueStore<Bytes, byte[]>>as("abc-agg-store")
    .withKeySerde(Serdes.String())
    .withValueSerde(serdeABC)
);
{code}
 

 

This throws an exception during topology generation:

{code:java}
org.apache.kafka.streams.errors.TopologyException: Invalid topology: Processor abc-agg-store-repartition-filter is already added.
 at org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.addProcessor(InternalTopologyBuilder.java:485)
 at org.apache.kafka.streams.kstream.internals.graph.OptimizableRepartitionNode.writeToTopology(OptimizableRepartitionNode.java:70)
 at org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.buildAndOptimizeTopology(InternalStreamsBuilder.java:307)
 at org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:564)
 at org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:553)
 at ...
{code}
 

The same exception is observed if the `selectKey(...).groupByKey()` pattern is 
replaced with `groupBy(...)`.

This behavior is observed with topology optimization at its default setting, 
explicitly disabled, or explicitly enabled.

Interestingly, the problem is avoided, and a workable topology produced, if 
the grouping step is named by passing a `Grouped.with(...)` expression to 
either `groupByKey` or `groupBy`.
 





Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #213

2020-10-28 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix documentation for KIP-585 (#9524)


--
[...truncated 3.45 MB...]
org.apache.kafka.streams.scala.kstream.KStreamTest > join 2 KStreams should 
join correctly records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > transform a KStream should 
transform correctly records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > transform a KStream should 
transform correctly records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransform a KStream 
should flatTransform correctly records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransform a KStream 
should flatTransform correctly records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransformValues a 
KStream should correctly flatTransform values in records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransformValues a 
KStream should correctly flatTransform values in records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransformValues with 
key in a KStream should correctly flatTransformValues in records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > flatTransformValues with 
key in a KStream should correctly flatTransformValues in records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > join 2 KStreamToTables 
should join correctly records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > join 2 KStreamToTables 
should join correctly records PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaCogroup STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaCogroup PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaCogroupSimple STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaCogroupSimple PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Se

Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #179

2020-10-28 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #187

2020-10-28 Thread Apache Jenkins Server
See