Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #167

2020-10-23 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10637) KafkaProducer: IllegalMonitorStateException

2020-10-23 Thread Lefteris Katiforis (Jira)
Lefteris Katiforis created KAFKA-10637:
--

 Summary: KafkaProducer: IllegalMonitorStateException 
 Key: KAFKA-10637
 URL: https://issues.apache.org/jira/browse/KAFKA-10637
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 2.5.1
Reporter: Lefteris Katiforis


Kafka producer throws the following exception:
{code:java}
java.lang.IllegalMonitorStateException: current thread is not owner
	at java.base/java.lang.Object.wait(Native Method)
	at org.apache.kafka.common.utils.SystemTime.waitObject(SystemTime.java:55)
	at org.apache.kafka.clients.producer.internals.ProducerMetadata.awaitUpdate(ProducerMetadata.java:119)
	at org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata(KafkaProducer.java:1029)
	at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:883)
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:862)
	at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.send(DefaultKafkaProducerFactory.java:781)
	at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:562)
	at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:369)
{code}
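For reference, the JVM throws IllegalMonitorStateException whenever Object.wait() is called by a thread that does not hold the monitor of the object being waited on. A minimal illustration (plain Java, not Kafka code):

{code:java}
// Minimal illustration (not Kafka code): Object.wait() may only be called
// while the current thread holds the object's monitor.
public final class WaitExample {
    public static void main(final String[] args) throws InterruptedException {
        final Object lock = new Object();

        synchronized (lock) {
            lock.wait(10L);   // fine: this thread owns lock's monitor
        }

        lock.wait(10L);       // throws IllegalMonitorStateException: current thread is not owner
    }
}
{code}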





Re: [DISCUSS] Checkstyle Import Order

2020-10-23 Thread Sophie Blee-Goldman
Bruno is committer-like :)

I generally prefer option 2, but would definitely be happy with any of them.

I'm pretty sure I'm personally responsible for some of the wacky import
ordering way back when, before I set my IDE configuration straight. It took
me a while to notice because we never had a checkstyle rule for import
order.
So thank you for proposing this.

Sophie

On Thu, Oct 22, 2020 at 11:17 PM Bruno Cadonna  wrote:

> Hi Dongjin,
>
> Thank you for putting me into the committer section, but I am actually
> not a committer.
>
> Best,
> Bruno
>
> On 23.10.20 07:46, Dongjin Lee wrote:
> > > As of present:
> >
> > Committers:
> >
> > - Bruno: 2 and 3.
> > - Gwen: (No Specific Preference)
> >
> > Non-Committers:
> >
> > - Brandon: 2.
> > - Dongjin: 2 and 3.
> >
> > > Let's hold on until two or three committers have weighed in.
> >
> > Best,
> > Dongjin
> >
> > On Fri, Oct 23, 2020 at 10:09 AM Gwen Shapira  wrote:
> >
> >> I don't have any specific preference on the style. But I am glad you
> >> are bringing it up. Every other project I worked on had a specific
> >> import style, and the random import changes in PRs are pretty
> >> annoying.
> >>
> >> On Wed, Oct 14, 2020 at 10:36 PM Dongjin Lee 
> wrote:
> >>>
> >>> Hello. I hope to open a discussion about the import order in Java code.
> >>>
> >>> As Nikolay stated recently[^1], Kafka uses a relatively strict code style
> >>> for Java code. However, it lacks any rule on import order. As a result,
> >>> the code formatting settings of every local dev environment differ from
> >>> person to person, producing countless meaningless import-order changes in PRs.
> >>>
> >>> For example, in `NamedCache.java` in the streams module, the `java.*`
> >>> imports are split into two chunks, sandwiching the other imports between
> >>> them. So, I propose defining an import order to prevent these kinds of
> >>> cases in the future.
> >>>
> >>> To define the import order, we first have to settle the following three
> >>> orthogonal questions:
> >>>
> >>> a. How to group the type imports?
> >>> b. Whether to sort the imports alphabetically?
> >>> c. Where to place static imports: above the type imports, or below them.
> >>>
> >>> Since b and c seem relatively straightforward (sort the imports
> >>> alphabetically and place the static imports below the type imports), I
> >>> hope to focus the alternatives below on question a.
> >>>
> >>> I evaluated the following alternatives and checked how many files are
> >>> affected in each case (based on commit 1457cc652). Here are the
> >>> results:
> >>>
> >>> *1. kafka, org.apache.kafka, *, javax, java (5 groups): 1222 files.*
> >>>
> >>> ```
> >>> <module name="ImportOrder">
> >>>   <property name="groups" value="kafka,/^org\.apache\.kafka.*$/,*,javax,java"/>
> >>>   ...
> >>> </module>
> >>> ```
> >>>
> >>> *2. (kafka|org.apache.kafka), *, javax? (3 groups): 968 files.*
> >>>
> >>> ```
> >>> <module name="ImportOrder">
> >>>   <property name="groups" value="(kafka|org\.apache\.kafka),*,javax?"/>
> >>>   ...
> >>> </module>
> >>> ```
> >>>
> >>> *3. (kafka|org.apache.kafka), *, javax, java (4 groups): 533 files.*
> >>>
> >>> ```
> >>> <module name="ImportOrder">
> >>>   <property name="groups" value="(kafka|org\.apache\.kafka),*,javax,java"/>
> >>>   ...
> >>> </module>
> >>> ```
> >>>
> >>> *4. *, javax? (2 groups): 707 files.*
> >>>
> >>> ```
> >>> <module name="ImportOrder">
> >>>   <property name="groups" value="*,javax?"/>
> >>>   ...
> >>> </module>
> >>> ```
> >>>
> >>> *5. javax?, * (2 groups): 1822 files.*
> >>>
> >>> ```
> >>> <module name="ImportOrder">
> >>>   <property name="groups" value="javax?,*"/>
> >>>   ...
> >>> </module>
> >>> ```
> >>>
> >>> *6. java, javax, * (3 groups): 1809 files.*
> >>>
> >>> ```
> >>> <module name="ImportOrder">
> >>>   <property name="groups" value="java,javax,*"/>
> >>>   ...
> >>> </module>
> >>> ```
> >>>
> >>> I hope to get some feedback on this issue here.
> >>>
> >>> For the WIP PR, please refer here:
> >> https://github.com/apache/kafka/pull/8404
> >>>
> >>> Best,
> >>> Dongjin
> >>>
> >>> [^1]: https://lists.apache.org/thread.html/r2bbee24b8a459842a0fc840c6e40958e7158d29f3f2d6c0d223be80b%40%3Cdev.kafka.apache.org%3E
> >>> [^2]: https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/internals/NamedCache.java
> >>>
> >>> --
> >>> *Dongjin Lee*
> >>>
> >>> *A hitchhiker in the mathematical world.*
> >>>
> >>>
> >>>
> >>>
> >>> *github: github.com/dongjinleekr
> >>> keybase: https://keybase.io/dongjinleekr
> >>> linkedin: kr.linkedin.com/in/dongjinleekr
> >>> speakerdeck: speakerdeck.com/dongjin*
> >>
> >>
> >>
> >> --
> >> Gwen Shapira
> >> Engineering Manager | Confluent
> >> 650.450.2760 | @gwenshap
> >> Follow us: Twitter | blog
> >>
> >
> >
>
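To make the grouping concrete: under option 2 above, the imports of a typical file would collapse into three blocks, as in this sketch (class and package names chosen only for illustration):

```
import kafka.utils.Logging;                          // group 1: kafka / org.apache.kafka
import org.apache.kafka.common.utils.Time;

import com.fasterxml.jackson.databind.ObjectMapper;  // group 2: everything else

import java.util.List;                               // group 3: java / javax
import javax.annotation.Nullable;
```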

[jira] [Created] (KAFKA-10638) QueryableStateIntegrationTest fails due to stricter store checking

2020-10-23 Thread John Roesler (Jira)
John Roesler created KAFKA-10638:


 Summary: QueryableStateIntegrationTest fails due to stricter store 
checking
 Key: KAFKA-10638
 URL: https://issues.apache.org/jira/browse/KAFKA-10638
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.7.0
Reporter: John Roesler
Assignee: John Roesler


Observed:
{code:java}
org.apache.kafka.streams.errors.InvalidStateStoreException: Cannot get state 
store source-table because the stream thread is PARTITIONS_ASSIGNED, not RUNNING
at 
org.apache.kafka.streams.state.internals.StreamThreadStateStoreProvider.stores(StreamThreadStateStoreProvider.java:81)
at 
org.apache.kafka.streams.state.internals.WrappingStoreProvider.stores(WrappingStoreProvider.java:50)
at 
org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStore.get(CompositeReadOnlyKeyValueStore.java:52)
at 
org.apache.kafka.streams.integration.StoreQueryIntegrationTest.shouldQuerySpecificActivePartitionStores(StoreQueryIntegrationTest.java:200)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.
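// A sketch (not the actual test code) of the usual guard against this race:
// wait until the Streams instance reports RUNNING and retry the store lookup.
// Store name and value types below are illustrative only.
import java.time.Duration;
import java.time.Instant;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

final class StoreQueryGuard {
    static ReadOnlyKeyValueStore<String, Long> waitForStore(final KafkaStreams streams,
                                                            final String storeName,
                                                            final Duration timeout) throws InterruptedException {
        final Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            if (streams.state() == KafkaStreams.State.RUNNING) {
                try {
                    return streams.store(StoreQueryParameters.fromNameAndType(
                        storeName, QueryableStoreTypes.<String, Long>keyValueStore()));
                } catch (final InvalidStateStoreException retriable) {
                    // Store not queryable yet (e.g. a rebalance is in progress); retry.
                }
            }
            Thread.sleep(100L);
        }
        throw new AssertionError("Store " + storeName + " did not become queryable within " + timeout);
    }
}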

Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-10-23 Thread Colin McCabe
Hi Ron,

Thanks for the questions.

Talking about "broker state" can get confusing because there are lots of 
different concepts that we refer to as "broker state."  For example, there is a 
JMX metric that we expose, whose values are given in BrokerStates.scala.

However, for the purpose of this discussion, the important broker state is 
registration state.  And registration state is currently very simple: you're 
either in the cluster, or you're not.  If you're in, you have an active 
ZooKeeper connection, and you have a znode under /brokers/ which has your 
connection information.  If you're out, you do not.  Registration is a thing 
that happens between the broker and ZooKeeper-- nobody else is involved.

Controlled shutdown is just a path that allows you to transition from "in the 
cluster" to "out of the cluster" more gracefully.  It doesn't have a different 
destination state than just using kill -9 on the process.

The intention in KIP-500 is to have three registration states.  I added a 
paragraph to KIP-631 about this:

 > In the post-KIP-500 world, there will be three broker registration states: 
 > unregistered, registered but fenced, and registered and active.  Just like 
 > today, unregistered means that there is no registration information and no 
 > way to reach the broker.  It is effectively not part of the cluster.  In 
 > the two registered states, in contrast, contact information is available.  
 > However, in the "registered but fenced" state, the contact information 
 > might no longer be valid.  For example, if a broker crashes and is not 
 > restarted, it will end up in "registered but fenced" state.
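
One rough way to picture those three states (purely illustrative; the actual state machine and record formats are the ones defined in the KIP):

```
// Illustrative only; not the KIP-631 implementation.
enum BrokerRegistrationState {
    UNREGISTERED,       // no registration info; effectively not part of the cluster
    REGISTERED_FENCED,  // contact info exists, but may be stale (e.g. the broker crashed)
    REGISTERED_ACTIVE   // contact info exists and heartbeats keep the lease alive
}
```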

I also added a new Record, UnfenceBroker, which I think addresses your earlier 
point that we should not confuse unfencing with registering.

best,
Colin


It is not necessary to communicate 
On Thu, Oct 22, 2020, at 10:48, Ron Dagostino wrote:
> Hi again, Colin.  Related to the issue of how to communicate fenced
> vs. not fenced (and how to communicate or imply Offline vs not
> Offline), do we need to communicate in the log that a broker has
> gracefully shut down?  We do distinguish between a broker being
> unavailable due to a controlled shutdown vs. having died -- one that
> dies appears in the metadata as being Offline, whereas one that
> gracefully shuts down does not.  I don't see a way to imply this
> gracefully-shutdown state, though I may be missing it.
> 
> Ron
> 
> On Thu, Oct 22, 2020 at 1:32 PM Ron Dagostino  wrote:
> >
> > HI Colin.  A FencedBrokerRecord appears in the metadata log when a
> > broker is fenced.  What appears in the metadata log to indicate that a
> > broker is no longer fenced?  Does a BrokerRecord appear?  That seems
> > to communicate a bunch of unnecessary data in this context (endpoints,
> > features, rack).  If that is the current plan, might it be better to
> > include a Boolean value within FencedBrokerRecord and specify true
> > when the broker becomes fenced and false when it is no longer fenced?
> >
> > Also, is a fenced broker considered Offline for partition metadata
> > purposes?  I think so but would like to confirm.
> >
> > Ron
> >
> > On Wed, Oct 21, 2020 at 9:05 AM Colin McCabe  wrote:
> > >
> > > On Tue, Oct 13, 2020, at 18:30, Jun Rao wrote:
> > > > Hi, Colin,
> > > >
> > > > Thanks for the reply. A few more comments below.
> > > >
> > > > 80.1 controller.listener.names only defines the name of the listener. 
> > > > The
> > > > actual listener including host/port/security_protocol is typically 
> > > > defined
> > > > in advertised.listeners. Does that mean advertised.listeners is a required
> > > > config now?
> > > >
> > >
> > > Hi Jun,
> > >
> > > Thanks for the re-review.
> > >
> > > The controller listeners are not advertised to clients.  So I think they 
> > > should go in listeners, rather than advertised.listeners.
> > >
> > > I agree that this makes listeners a required configuration.  At very 
> > > least, it is required to have the controller listener in there.
> > >
> > > >
> > > > 83.1 broker state machine: It seems that we should transition from 
> > > > FENCED
> > > > => INITIAL since only INITIAL generates new broker epoch?
> > > >
> > >
> > > I changed the broker state machine a bit-- take a look.  In the new state 
> > > machine, the FENCED state can re-register.
> > >
> > > >
> > > > 83.5. It's true that the controller node doesn't serve metadata 
> > > > requests.
> > > > However, there are admin requests such as topic creation/deletion are 
> > > > sent
> > > > to the controller directly. So, it seems that the client needs to know
> > > > the controller host/port?
> > > >
> > >
> > > The client sends admin requests to a random broker, which forwards them 
> > > to the controller.  This is described in KIP-590.
> > >
> > > >
> > > > 85. "I was hoping that we could avoid responding to requests when the
> > > > broker was fenced." This issue is that if we don't send a response, the
> > > > client won't know the reason and

Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-10-23 Thread Colin McCabe
On Wed, Oct 21, 2020, at 05:51, Tom Bentley wrote:
> Hi Colin,
> 
> On Mon, Oct 19, 2020, at 08:59, Ron Dagostino wrote:
> > > Hi Colin.  Thanks for the hard work on this KIP.
> > >
> > > I have some questions about what happens to a broker when it becomes
> > > fenced (e.g. because it can't send a heartbeat request to keep its
> > > lease).  The KIP says "When a broker is fenced, it cannot process any
> > > client requests.  This prevents brokers which are not receiving
> > > metadata updates or that are not receiving and processing them fast
> > > enough from causing issues to clients." And in the description of the
> > > FENCED(4) state it likewise says "While in this state, the broker does
> > > not respond to client requests."  It makes sense that a fenced broker
> > > should not accept producer requests -- I assume any such requests
> > > would result in NotLeaderOrFollowerException.  But what about KIP-392
> > > (fetch from follower) consumer requests?  It is conceivable that these
> > > could continue.  Related to that, would a fenced broker continue to
> > > fetch data for partitions where it thinks it is a follower?  Even if
> > > it rejects consumer requests it might still continue to fetch as a
> > > follower.  Might it be helpful to clarify both decisions here?
> >
> > Hi Ron,
> >
> > Good question.  I think a fenced broker should continue to fetch on
> > partitions it was already fetching before it was fenced, unless it hits a
> > problem.  At that point it won't be able to continue, since it doesn't have
> > the new metadata.  For example, it won't know about leadership changes in
> > the partitions it's fetching.  The rationale for continuing to fetch is to
> > try to avoid disruptions as much as possible.
> >
> > I don't think fenced brokers should accept client requests.  The issue is
> > that the fenced broker may or may not have any data it is supposed to
> > have.  It may or may not have applied any configuration changes, etc. that
> > it is supposed to have applied.  So it could get pretty confusing, and also
> > potentially waste the client's time.
> >
> >
> When fenced, how would the broker reply to a client which did make a
> request?
> 

Hi Tom,

The broker will respond with a retryable error in that case.  Once the client 
has re-fetched its metadata, it will no longer see the fenced broker as part of 
the cluster.  I added a note to the KIP.

best,
Colin

>
> Thanks,
> 
> Tom
>


Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-10-23 Thread Jun Rao
Hi, Colin,

Thanks for the reply. A few more comments.

55. There is still text that favors new broker registration. "When a broker
first starts up, when it is in the INITIAL state, it will always "win"
broker ID conflicts.  However, once it is granted a lease, it transitions
out of the INITIAL state.  Thereafter, it may lose subsequent conflicts if
its broker epoch is stale.  (See KIP-380 for some background on broker
epoch.)  The reason for favoring new processes is to accommodate the common
case where a process is killed with kill -9 and then restarted.  We want it
to be able to reclaim its old ID quickly in this case."

80.1 Sounds good. Could you document that listeners is a required config
now? It would also be useful to annotate other required configs. For
example, controller.connect should be required.

80.2 Could you list all deprecated existing configs? Another one is
control.plane.listener.name since the controller no longer sends
LeaderAndIsr, UpdateMetadata and StopReplica requests.

83.1 It seems that the broker can transition from FENCED to RUNNING without
registering for a new broker epoch. I am not sure how this works. Once the
controller fences a broker, there is no need for the controller to keep the
broker epoch around. So the fenced broker's heartbeat request with the
existing broker epoch will be rejected, sending the broker back to the
FENCED state again.

83.5 Good point on KIP-590. Then should we expose the controller for
debugging purposes? If not, should we deprecate the controllerID field in
MetadataResponse?

90. We rejected the shared ID with just one reason "This is not a good idea
because NetworkClient assumes a single ID space.  So if there is both a
controller 1 and a broker 1, we don't have a way of picking the "right"
one." This doesn't seem to be a strong reason. For example, we could
address the NetworkClient issue with the node type as you pointed out or
using the negative value of a broker ID as the controller ID.

100. In KIP-589, the broker reports all offline replicas due to a disk failure
to the controller. It seems this information needs to be persisted to the
metadata log. Do we have a corresponding record for that?

101. Currently, StopReplica request has 2 modes, without deletion and with
deletion. The former is used for controlled shutdown and handling disk
failure, and causes the follower to stop. The latter is for topic deletion
and partition reassignment, and causes the replica to be deleted. Since we
are deprecating StopReplica, could we document what triggers the stopping
of a follower and the deleting of a replica now?

102. Should we include the metadata topic in the MetadataResponse? If so,
when will it be included and what will the metadata response look like?

103. "The active controller assigns the broker a new broker epoch, based on
the latest committed offset in the log." This seems inaccurate since the
latest committed offset doesn't always advance on every log append.

104. REGISTERING(1): It says "Otherwise, the broker moves into the FENCED
state." It seems this should be RUNNING?

105. RUNNING: Should we require the broker to catch up to the metadata log
to get into this state?

Thanks,

Jun



On Fri, Oct 23, 2020 at 1:20 PM Colin McCabe  wrote:

> [...]

Re: [DISCUSS] Checkstyle Import Order

2020-10-23 Thread John Roesler
Hi Dongjin,

Thanks for bringing this up. I’m in favor of adding a checkstyle rule. 

Thanks for testing out the alternatives. As long as we aren’t adding wildcard 
imports and as long as it’s easy to configure IDEs with our chosen rules, I 
have no preference. I assume these hold for your proposals, since we’d 
otherwise have a huge number of changed files.

I guess the option with the fewest changes is likely to be the one for which 
the fewest people would have to change IDE settings?

Thanks,
John

On Fri, Oct 23, 2020, at 11:44, Sophie Blee-Goldman wrote:
> Bruno is committer-like :)
> 
> I generally prefer option 2, but would definitely be happy with any of them.
> 
> I'm pretty sure I'm personally responsible for some of the wacky import
> ordering way back when, before I set my IDE configuration straight. It took
> me a while to notice because we never had a checkstyle rule for import
> order.
> So thank you for proposing this.
> 
> Sophie
> 
> [...]

[jira] [Created] (KAFKA-10639) There should be an EnvironmentConfigProvider that will do variable substitution using environment variables.

2020-10-23 Thread Brad Davis (Jira)
Brad Davis created KAFKA-10639:
--

 Summary: There should be an EnvironmentConfigProvider that will do 
variable substitution using environment variables.
 Key: KAFKA-10639
 URL: https://issues.apache.org/jira/browse/KAFKA-10639
 Project: Kafka
  Issue Type: Improvement
  Components: config
Affects Versions: 2.5.1
Reporter: Brad Davis


Running Kafka Connect in the same Docker container in multiple stages (like dev 
vs. production) means that a file-based approach to secret hiding using the file 
config provider isn't viable.  However, Docker container instances can have 
their environment variables customized on a per-container basis, and our 
existing tech stack typically exposes per-stage secrets (like the dev DB 
password vs. the prod DB password) through env vars within the containers.
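
For illustration, a minimal sketch of such a provider built on Kafka's existing ConfigProvider interface (the class itself is hypothetical; it is not part of Kafka):

{code:java}
// Hypothetical sketch of an EnvironmentConfigProvider; not an official Kafka class.
package com.example.config;

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.ConfigProvider;

public class EnvironmentConfigProvider implements ConfigProvider {

    @Override
    public void configure(final Map<String, ?> configs) {
        // No configuration needed for this sketch.
    }

    @Override
    public ConfigData get(final String path) {
        // No keys requested: expose the whole environment.
        return new ConfigData(new HashMap<>(System.getenv()));
    }

    @Override
    public ConfigData get(final String path, final Set<String> keys) {
        final Map<String, String> data = new HashMap<>();
        for (final String key : keys) {
            final String value = System.getenv(key);
            if (value != null) {
                data.put(key, value);
            }
        }
        return new ConfigData(data);
    }

    @Override
    public void close() {
        // Nothing to close.
    }
}
{code}

Such a provider would be registered through the worker's config.providers settings and referenced from connector configs via the standard placeholder mechanism introduced in KIP-297.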

 

 


