Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #3078

2024-07-05 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-4374) Improve Response Errors Logging

2024-07-05 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-4374.
---
Fix Version/s: 3.9.0
   Resolution: Fixed

> Improve Response Errors Logging
> ---
>
> Key: KAFKA-4374
> URL: https://issues.apache.org/jira/browse/KAFKA-4374
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.1
>Reporter: Jesse Anderson
>Assignee: Ksolves
>Priority: Minor
> Fix For: 3.9.0
>
>
> When NetworkClient.java gets a response error, it runs:
> {code}
> if (response.errors().size() > 0) {
>     log.warn("Error while fetching metadata with correlation id {} : {}",
>         header.correlationId(), response.errors());
> }
> {code}
> Logging that at WARN level with a message that says "Error" is confusing to 
> newcomers. They don't notice that it was logged at WARN rather than ERROR 
> level; they just see that it says "Error while...".
> Maybe it should be something like "The metadata response from the cluster 
> reported a recoverable issue..."
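
For illustration, a minimal sketch of the suggested rewording (the message text is hypothetical, not a committed fix):
{code}
if (!response.errors().isEmpty()) {
    // Still logged at WARN level, but the wording no longer reads like an ERROR entry
    log.warn("The metadata response from the cluster reported a recoverable issue " +
            "with correlation id {} : {}", header.correlationId(), response.errors());
}
{code}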



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


RE: [VOTE] KIP-1054: Support external schemas in JSONConverter

2024-07-05 Thread Priyanka K U
Hi Chris,

Thank you for your feedback and insights on the KIP.

Let me address your queries

1. The KIP will work with existing config providers; we have tested it with the 
DirectoryConfigProvider, and I have updated the examples in the KIP.

2. We have tested it with an inline schema, and no backslashes should be 
needed. However, your point about the ergonomics is valid: because the schema 
has to be provided on a single line, it is difficult to read unless the schema 
is very small and simple. I think using a config provider is the answer here 
(see the sketch below).
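
For reference, a hypothetical setup using the out-of-the-box
DirectoryConfigProvider could look like this (paths and names are
illustrative; schema.content is the property proposed in the KIP, referenced
with the provider's ${directory:<path>:<file>} syntax):

  # worker configuration
  config.providers=directory
  config.providers.directory.class=org.apache.kafka.common.config.provider.DirectoryConfigProvider

  # connector configuration
  value.converter=org.apache.kafka.connect.json.JsonConverter
  value.converter.schema.content=${directory:/etc/schemas:schema.json}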


Thanks
Priyanka


From: Chris Egerton 
Date: Saturday, 29 June 2024 at 12:37 PM
To: dev@kafka.apache.org 
Subject: [EXTERNAL] Re: [VOTE] KIP-1054: Support external schemas in 
JSONConverter
Hi Priyanka,

I think allowing a raw schema string, instead of introducing a property
that gives users access to the file system on the worker, is a good idea. I
have two small thoughts:

1) The KIP provides an example
of schema.content:${directory:/schema/schema.json} for how a config
provider might be used to read a schema from the file system. Is this an
accurate example corresponding to the file config provider or the directory
config provider? If not, can an example be provided for either of these
out-of-the-box providers? I'm asking because I think it'd make sense to
ensure that readily-accessible config providers can be used to achieve what
we need for this KIP, and if there's a gap, we should try to fill it.

2) Schemas for the JSON converter are, unsurprisingly, encoded in JSON.
This is fine if someone uses a config provider to fetch that JSON string
from somewhere else (like a file on the worker), but if someone tries to
provide a schema directly in a connector config, it'll be a mess of
backslashes and harder to read. Do you have any thoughts on a way we might
improve the ergonomics here?

Cheers,

Chris

On Fri, Jun 28, 2024 at 5:24 AM Priyanka K U 
wrote:

> Hello Everyone,
>
> I'd like to start a vote on KIP-1054, which aims to Support external
> schemas in JSONConverter to Kafka Connect:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1054%3A+Support+external+schemas+in+JSONConverter
>
>
> Thank you,
>
> Priyanka
>
>


[jira] [Created] (KAFKA-17083) KRaft Upgrade Failures in SystemTests

2024-07-05 Thread Josep Prat (Jira)
Josep Prat created KAFKA-17083:
--

 Summary: KRaft Upgrade Failures in SystemTests
 Key: KAFKA-17083
 URL: https://issues.apache.org/jira/browse/KAFKA-17083
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Affects Versions: 3.8.0
Reporter: Josep Prat


Two system tests for "TestKRaftUpgrade" are consistently failing on the 3.8 
branch.
{noformat}
Module: kafkatest.tests.core.kraft_upgrade_test
Class:  TestKRaftUpgrade
Method: test_isolated_mode_upgrade
Arguments:
{
  "from_kafka_version": "dev",
  "metadata_quorum": "ISOLATED_KRAFT"
}
{noformat}
 

and 

 
{code:java}
Module: kafkatest.tests.core.kraft_upgrade_test
Class:  TestKRaftUpgrade
Method: test_combined_mode_upgrade
Arguments:
{
  "from_kafka_version": "dev",
  "metadata_quorum": "COMBINED_KRAFT"
}
{code}
 

Failure for Isolated is:
{noformat}
RemoteCommandError({'ssh_config': {'host': 'worker15', 'hostname': 
'10.140.39.207', 'user': 'ubuntu', 'port': 22, 'password': None, 
'identityfile': '/home/semaphore/kafka-overlay/semaphore-muckrake.pem'}, 
'hostname': 'worker15', 'ssh_hostname': '10.140.39.207', 'user': 'ubuntu', 
'externally_routable_ip': '10.140.39.207', '_logger': , 'os': 'linux', '_ssh_client': , '_sftp_client': , '_custom_ssh_exception_checks': None}, 
'/opt/kafka-dev/bin/kafka-features.sh --bootstrap-server 
worker15:9092,worker16:9092,worker17:9092 upgrade --metadata 3.7', 1, b'SLF4J: 
Class path contains multiple SLF4J bindings.\nSLF4J: Found binding in 
[jar:file:/vagrant/tools/build/dependant-libs-2.13.14/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
 Found binding in 
[jar:file:/vagrant/trogdor/build/dependant-libs-2.13.14/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
 See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.\nSLF4J: Actual binding is of type 
[org.slf4j.impl.Reload4jLoggerFactory]\n1 out of 1 operation(s) failed.\n')
Traceback (most recent call last):
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 184, in _do_run
data = self.run_test()
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 262, in run_test
return self.test_context.function(self.test)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/mark/_mark.py",
 line 433, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/core/kraft_upgrade_test.py",
 line 121, in test_isolated_mode_upgrade
self.run_upgrade(from_kafka_version)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/core/kraft_upgrade_test.py",
 line 105, in run_upgrade
self.run_produce_consume_validate(core_test_action=lambda: 
self.perform_version_change(from_kafka_version))
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/produce_consume_validate.py",
 line 105, in run_produce_consume_validate
core_test_action(*args)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/core/kraft_upgrade_test.py",
 line 105, in 
self.run_produce_consume_validate(core_test_action=lambda: 
self.perform_version_change(from_kafka_version))
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/core/kraft_upgrade_test.py",
 line 75, in perform_version_change
self.kafka.upgrade_metadata_version(LATEST_STABLE_METADATA_VERSION)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/services/kafka/kafka.py", 
line 920, in upgrade_metadata_version
self.run_features_command("upgrade", new_version)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/services/kafka/kafka.py", 
line 930, in run_features_command
self.nodes[0].account.ssh(cmd)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 35, in wrapper
return method(self, *args, **kwargs)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 293, in ssh
raise RemoteCommandError(self, cmd, exit_status, stderr.read())
ducktape.cluster.remoteaccount.RemoteCommandError: ubuntu@worker15: Command 
'/opt/kafka-dev/bin/kafka-features.sh --bootstrap-server 
worker15:9092,worker16:9092,worker17:9092 upgrade --metadata 3.7' returned 
non-zero exit status 1. Remote error message: b'SLF4J: Class path contains 
multiple SLF4J bindings.\nSLF4J: Found binding in 
[jar:file:/vagrant/tools/build/dependant-libs-2.13.14/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
 Found binding in 
[jar:file:/vagrant/trogdor/build/dependant-libs-2.13.14/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
 See http://www.slf4j.or

[jira] [Created] (KAFKA-17084) Network Degrade Test fails in System Tests

2024-07-05 Thread Josep Prat (Jira)
Josep Prat created KAFKA-17084:
--

 Summary: Network Degrade Test fails in System Tests
 Key: KAFKA-17084
 URL: https://issues.apache.org/jira/browse/KAFKA-17084
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Affects Versions: 3.8.0
Reporter: Josep Prat


Tests for NetworkDegradeTest fail consistently on the 3.8 branch.

 

Tests failing are:

 
{noformat}
Module: kafkatest.tests.core.network_degrade_test
Class:  NetworkDegradeTest
Method: test_latency
Arguments:
{
  "device_name": "eth0",
  "latency_ms": 50,
  "rate_limit_kbit": 1000,
  "task_name": "latency-100-rate-1000"
}
{noformat}
 

and 

 
{noformat}
Module: kafkatest.tests.core.network_degrade_test
Class:  NetworkDegradeTest
Method: test_latency
Arguments:
{
  "device_name": "eth0",
  "latency_ms": 50,
  "rate_limit_kbit": 0,
  "task_name": "latency-100"
}
{noformat}
 

Failure for the first one is:
{noformat}
RemoteCommandError({'ssh_config': {'host': 'worker30', 'hostname': 
'10.140.34.105', 'user': 'ubuntu', 'port': 22, 'password': None, 
'identityfile': '/home/semaphore/kafka-overlay/semaphore-muckrake.pem'}, 
'hostname': 'worker30', 'ssh_hostname': '10.140.34.105', 'user': 'ubuntu', 
'externally_routable_ip': '10.140.34.105', '_logger': , 'os': 'linux', '_ssh_client': , '_sftp_client': , '_custom_ssh_exception_checks': None}, 'ping -i 1 -c 20 
worker21', 1, b'')
Traceback (most recent call last):
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 184, in _do_run
data = self.run_test()
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 262, in run_test
return self.test_context.function(self.test)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/mark/_mark.py",
 line 433, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/core/network_degrade_test.py",
 line 66, in test_latency
for line in zk0.account.ssh_capture("ping -i 1 -c 20 %s" % 
zk1.account.hostname):
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 680, in next
return next(self.iter_obj)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 347, in output_generator
raise RemoteCommandError(self, cmd, exit_status, stderr.read())
ducktape.cluster.remoteaccount.RemoteCommandError: ubuntu@worker30: Command 
'ping -i 1 -c 20 worker21' returned non-zero exit status 1.{noformat}
And for the second one is:
{noformat}
RemoteCommandError({'ssh_config': {'host': 'worker28', 'hostname': 
'10.140.41.79', 'user': 'ubuntu', 'port': 22, 'password': None, 'identityfile': 
'/home/semaphore/kafka-overlay/semaphore-muckrake.pem'}, 'hostname': 
'worker28', 'ssh_hostname': '10.140.41.79', 'user': 'ubuntu', 
'externally_routable_ip': '10.140.41.79', '_logger': , 'os': 'linux', '_ssh_client': , '_sftp_client': , '_custom_ssh_exception_checks': None}, 'ping -i 1 -c 20 
worker27', 1, b'')
Traceback (most recent call last):
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 184, in _do_run
data = self.run_test()
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 262, in run_test
return self.test_context.function(self.test)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/mark/_mark.py",
 line 433, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/core/network_degrade_test.py",
 line 66, in test_latency
for line in zk0.account.ssh_capture("ping -i 1 -c 20 %s" % 
zk1.account.hostname):
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 680, in next
return next(self.iter_obj)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 347, in output_generator
raise RemoteCommandError(self, cmd, exit_status, stderr.read())
ducktape.cluster.remoteaccount.RemoteCommandError: ubuntu@worker28: Command 
'ping -i 1 -c 20 worker27' returned non-zero exit status 1.{noformat}
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17085) Streams Cooperative Rebalance Upgrade Test fails in System Tests

2024-07-05 Thread Josep Prat (Jira)
Josep Prat created KAFKA-17085:
--

 Summary: Streams Cooperative Rebalance Upgrade Test fails in 
System Tests
 Key: KAFKA-17085
 URL: https://issues.apache.org/jira/browse/KAFKA-17085
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Affects Versions: 3.8.0
Reporter: Josep Prat


StreamsCooperativeRebalanceUpgradeTest fails in system tests when upgrading 
from 2.1.1, 2.2.2, and 2.3.1.


Tests that fail:

 
{noformat}
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "2.1.1"
}
 
{noformat}
and

 
{noformat}
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "2.2.2"
}
{noformat}
and

 

 
{noformat}
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "2.3.1"
}
{noformat}
 

Failure for 2.1.1 is:
{noformat}
TimeoutError("Never saw 'first_bounce_phase-Processed [0-9]* records so far' 
message ubuntu@worker28")
Traceback (most recent call last):
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 184, in _do_run
data = self.run_test()
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 262, in run_test
return self.test_context.function(self.test)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/mark/_mark.py",
 line 433, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/streams/streams_cooperative_rebalance_upgrade_test.py",
 line 101, in test_upgrade_to_cooperative_rebalance
self.maybe_upgrade_rolling_bounce_and_verify(processors,
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/streams/streams_cooperative_rebalance_upgrade_test.py",
 line 182, in maybe_upgrade_rolling_bounce_and_verify
stdout_monitor.wait_until(verify_processing_msg,
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 736, in wait_until
return wait_until(lambda: self.acct.ssh("tail -c +%d %s | grep '%s'" % 
(self.offset + 1, self.log, pattern),
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/utils/util.py",
 line 58, in wait_until
raise TimeoutError(err_msg() if callable(err_msg) else err_msg) from 
last_exception
ducktape.errors.TimeoutError: Never saw 'first_bounce_phase-Processed [0-9]* 
records so far' message ubuntu@worker28{noformat}
Failure for 2.2.2 is:
{noformat}
TimeoutError("Never saw 'first_bounce_phase-Processed [0-9]* records so far' 
message ubuntu@worker5")
Traceback (most recent call last):
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 184, in _do_run
data = self.run_test()
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/tests/runner_client.py",
 line 262, in run_test
return self.test_context.function(self.test)
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/mark/_mark.py",
 line 433, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/streams/streams_cooperative_rebalance_upgrade_test.py",
 line 101, in test_upgrade_to_cooperative_rebalance
self.maybe_upgrade_rolling_bounce_and_verify(processors,
  File 
"/home/semaphore/kafka-overlay/kafka/tests/kafkatest/tests/streams/streams_cooperative_rebalance_upgrade_test.py",
 line 182, in maybe_upgrade_rolling_bounce_and_verify
stdout_monitor.wait_until(verify_processing_msg,
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/cluster/remoteaccount.py",
 line 736, in wait_until
return wait_until(lambda: self.acct.ssh("tail -c +%d %s | grep '%s'" % 
(self.offset + 1, self.log, pattern),
  File 
"/home/semaphore/kafka-overlay/kafka/venv/lib/python3.8/site-packages/ducktape/utils/util.py",
 line 58, in wait_until
raise TimeoutError(err_msg() if callable(err_msg) else err_msg) from 
last_exception
ducktape.errors.TimeoutError: Never saw 'first_bounce_phase-Processed [0-9]* 
records so far' message ubuntu@worker5{noformat}
Failure for 2.3.1 is:
{noformat}
TimeoutError("Never saw 'first_bounce_phase-Processed [0-9]* records so far' 
message ubuntu@worker21")
Traceback (most recent call last

Re: [DISCUSS] Apache Kafka 3.8.0 release

2024-07-05 Thread Josep Prat
Hi all,
Unfortunately, after 4 runs of the system tests, we still can't get a
combined run with no errors. I created the JIRAs linked below to track
these.
I would think these are blockers for the release, but I'd be extremely
happy to be corrected!

KRaft Upgrade Failures: https://issues.apache.org/jira/browse/KAFKA-17083
Network Degrade Failures: https://issues.apache.org/jira/browse/KAFKA-17084
Streams Cooperative Rebalance Upgrade Failures:
https://issues.apache.org/jira/browse/KAFKA-17085
These system tests above fail consistently on CI and on my machine. If
anyone has the means to run system tests and can make these pass, please
let me know.

These come in addition to the existing
https://issues.apache.org/jira/browse/KAFKA-16138 (discovered during 3.7)
for the quota test failures, which do pass locally.

The status of the test runs as well as the logs of the runs can be found
here:
https://docs.google.com/document/d/1wbcyzO6GM2SYQaqTMITBTBjHgZgM7mmiAt7TUfh1xt8/edit

Best,

On Thu, Jul 4, 2024 at 3:27 PM Josep Prat  wrote:

> Thanks Luke!
>
> --
> Josep Prat
> Open Source Engineering Director, Aiven
> josep.p...@aiven.io   |   +491715557497 | aiven.io
> Aiven Deutschland GmbH
> Alexanderufer 3-7, 10117 Berlin
> Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> Amtsgericht Charlottenburg, HRB 209739 B
>
> On Thu, Jul 4, 2024, 14:04 Luke Chen  wrote:
>
>> Hi Josep,
>>
>> I ran the tests in tests/kafkatest/tests/client/quota_test.py on the 3.8
>> branch, and they all passed.
>>
>> SESSION REPORT (ALL TESTS)
>> ducktape version: 0.11.4
>> session_id:       2024-07-04--001
>> run time:         12 minutes 39.940 seconds
>> tests run:        9
>> passed:           9
>> flaky:            0
>> failed:           0
>> ignored:          0
>>
>> test_id:  kafkatest.tests.client.quota_test.QuotaTest.test_quota.quota_type=client-id.consumer_num=2
>> status:   PASS
>> run time: 3 minutes 51.280 seconds
>>
>> test_id:  kafkatest.tests.client.quota_test.QuotaTest.test_quota.quota_type=.user.client-id.override_quota=True
>> status:   PASS
>> run time: 4 minutes 21.082 seconds
>>
>> test_id:  kafkatest.tests.client.quota_test.QuotaTest.test_quota.quota_type=.user.client-id.override_quota=False
>> status:   PASS
>> run time: 5 minutes 14.854 seconds
>>
>> test_id:  kafkatest.tests.client.quota_test.QuotaTest.test_quota.quota_type=client-id.old_broker_throttling_behavior=True
>> status:   PASS
>> run time: 3 minutes 0.505 seconds
>>
>> test_id:  kafkatest.tests.client.quota_test.QuotaTest.test_quota.quota_type=client-id.old_client_throttling_behavior=True
>> status:   PASS
>> run time: 3 minutes 19.629 seconds
>>
>> test_id:  kafkatest.tests.client.quota_test.QuotaTest.test_quota.quota_type=client-id.override_quota=False
>> status:   PASS
>> run time: 4 minutes 11.296 seconds
>>
>> test_id:  kafkatest.tests.client.quota_test.QuotaTest.test_quota.quota_type=client-id.override_quota=True
>> status:   PASS
>> run time: 4 minutes 10.578 seconds
>>
>> test_id:  kafkatest.tests.client.quota_test.QuotaTest.test_quota.quota_type=user.override_quota=False
>> status:   PASS
>> run time: 4 minutes 19.187 seconds
>>
>> test_id:  kafkatest.tests.client.quota_test.QuotaTest.test_quota.quota_type=user.override_quota=True
>> status:   PASS
>> run time: 3 minutes 13.666 seconds
>>
>>
>> Thanks.
>> Luke
>>
>> On Thu, Jul 4, 2024 at 6:01 PM Josep Prat 
>> wrote:
>>
>> > Hi Luke,
>> >
>> > Thanks for the pointer, if you have an environment where you can run the
>> > tests I would highly appreciate it!

[jira] [Resolved] (KAFKA-17042) the migration docs should remind users to set "broker.id.generation.enable" when adding broker.id

2024-07-05 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-17042.

Fix Version/s: 3.9.0
   Resolution: Fixed

> the migration docs should remind users to set "broker.id.generation.enable" 
> when adding broker.id
> -
>
> Key: KAFKA-17042
> URL: https://issues.apache.org/jira/browse/KAFKA-17042
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: TengYao Chi
>Priority: Minor
> Fix For: 3.9.0
>
>
> in the section: Enter Migration Mode on the Brokers
> it requires users to add "broker.id", but it can produces error "broker.id 
> must be greater than or equal to -1 and not greater than 
> reserved.broker.max.id" too. That is caused by the zk broker is using a 
> generated broker id.
> As this phase is temporary, the simple solution is to remind users to add 
> "broker.id.generation.enable=false" if the zk broker is using generated 
> broker id.
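
For illustration, the broker configuration for this migration step might then look like this (the id value is hypothetical):
{code}
# the id previously generated for this zk broker
broker.id=1001
# required because the id above was originally auto-generated
broker.id.generation.enable=false
{code}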



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17086) Kafka support for java 21

2024-07-05 Thread Swathi Mocharla (Jira)
Swathi Mocharla created KAFKA-17086:
---

 Summary: Kafka support for java 21
 Key: KAFKA-17086
 URL: https://issues.apache.org/jira/browse/KAFKA-17086
 Project: Kafka
  Issue Type: Wish
  Components: core
Reporter: Swathi Mocharla


When does Apache Kafka plan to support Java 21?

Currently there seem to be some known issues, which have already been fixed in 
the community.

A timeline would be helpful.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-1054: Support external schemas in JSONConverter

2024-07-05 Thread Chris Egerton
Hi Priyanka,

How exactly did you test this feature? Was it in standalone mode (with a
Java properties file), or with a JSON config submitted via the REST API?

Cheers,

Chris

On Fri, Jul 5, 2024, 04:56 Priyanka K U 
wrote:

> Hi Chris,
>
> Thank you for your feedback and insights on the KIP.
>
> Let me address your queries
>
> 1. The KIP will work with existing config providers; we have tested it with
> the DirectoryConfigProvider, and I have updated the examples in the KIP.
>
> 2. We have tested it with an inline schema, and no backslashes should be
> needed. However, your point about the ergonomics is valid: because the schema
> has to be provided on a single line, it is difficult to read unless the
> schema is very small and simple. I think using a config provider is the
> answer here.
>
>
> Thanks
> Priyanka
>
>
> From: Chris Egerton 
> Date: Saturday, 29 June 2024 at 12:37 PM
> To: dev@kafka.apache.org 
> Subject: [EXTERNAL] Re: [VOTE] KIP-1054: Support external schemas in
> JSONConverter
> Hi Priyanka,
>
> I think allowing a raw schema string, instead of introducing a property
> that gives users access to the file system on the worker, is a good idea. I
> have two small thoughts:
>
> 1) The KIP provides an example
> of schema.content:${directory:/schema/schema.json} for how a config
> provider might be used to read a schema from the file system. Is this an
> accurate example corresponding to the file config provider or the directory
> config provider? If not, can an example be provided for either of these
> out-of-the-box providers? I'm asking because I think it'd make sense to
> ensure that readily-accessible config providers can be used to achieve what
> we need for this KIP, and if there's a gap, we should try to fill it.
>
> 2) Schemas for the JSON converter are, unsurprisingly, encoded in JSON.
> This is fine if someone uses a config provider to fetch that JSON string
> from somewhere else (like a file on the worker), but if someone tries to
> provide a schema directly in a connector config, it'll be a mess of
> backslashes and harder to read. Do you have any thoughts on a way we might
> improve the ergonomics here?
>
> Cheers,
>
> Chris
>
> On Fri, Jun 28, 2024 at 5:24 AM Priyanka K U  >
> wrote:
>
> > Hello Everyone,
> >
> > I'd like to start a vote on KIP-1054, which aims to Support external
> > schemas in JSONConverter to Kafka Connect:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1054%3A+Support+external+schemas+in+JSONConverter
> >
> >
> > Thank you,
> >
> > Priyanka
> >
> >
>


[jira] [Created] (KAFKA-17087) Deprecate `delete-config` of TopicCommand

2024-07-05 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17087:
--

 Summary: Deprecate `delete-config` of TopicCommand
 Key: KAFKA-17087
 URL: https://issues.apache.org/jira/browse/KAFKA-17087
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


TopicCommand's `delete-config` option is a no-op, so we should deprecate it in 
3.9 and then remove it in 4.0.
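
For context, the supported way to remove a topic-level config today is kafka-configs.sh; for example (topic name and config key are illustrative):
{code}
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic \
  --alter --delete-config retention.ms
{code}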



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17088) REQUEST_TIMED_OUT occurs intermittently in the kafka Producer client

2024-07-05 Thread Janardhana Gopalachar (Jira)
Janardhana Gopalachar created KAFKA-17088:
-

 Summary: REQUEST_TIMED_OUT occurs intermittently in the kafka 
Producer client 
 Key: KAFKA-17088
 URL: https://issues.apache.org/jira/browse/KAFKA-17088
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 3.5.1
Reporter: Janardhana Gopalachar


Hi,

We observe that the producer intermittently receives a request timeout when 
trying to send a message to the Kafka broker. Below are the properties set for 
the Kafka producer:

producerProps.put("acks", "all");
producerProps.put("linger.ms", 0);
producerProps.put("max.block.ms", 5000);
producerProps.put("metadata.max.idle.ms", 5000);
producerProps.put("delivery.timeout.ms", 1);
producerProps.put("request.timeout.ms", 1000);
producerProps.put("key.serializer", BYTE_SERIALIZER);
producerProps.put("value.serializer", BYTE_SERIALIZER);

We receive the message below intermittently. We need to know the reason for 
this timeout.

_[kafka-producer-network-thread | producer-1] 
o.a.k.c.u.LogContext$LocationAwareKafkaLogger:434 writeLog [Producer 
clientId=producer-1] Got error produce response with correlation id 231972 on 
topic-partition {*}health_check_topic_msg2-0{*}, retrying (2147483646 attempts 
left). Error: REQUEST_TIMED_OUT. Error Message: Disconnected from node 1 due to 
timeout_



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17089) Incorrect JWT parsing in OAuthBearerUnsecuredJws

2024-07-05 Thread Jira
Björn Löfroth created KAFKA-17089:
-

 Summary: Incorrect JWT parsing in OAuthBearerUnsecuredJws
 Key: KAFKA-17089
 URL: https://issues.apache.org/jira/browse/KAFKA-17089
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 3.6.2
Reporter: Björn Löfroth


The documentation for the `OAuthBearerUnsecuredJws.toMap` function correctly 
describes that the input is Base64URL, but then goes ahead and does a simple 
base64 decode.


[https://github.com/apache/kafka/blob/9a7eee60727dc73f09075e971ea35909d2245f19/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/internals/unsecured/OAuthBearerUnsecuredJws.java#L295]

 

It should probably be:
```
byte[] decode = Base64.getUrlDecoder().decode(split);
```
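
For illustration, a small self-contained sketch of the difference between the two decoders (the segment value is hypothetical):
```
import java.util.Base64;

public class Base64UrlDemo {
    public static void main(String[] args) {
        // JWT segments are Base64URL: they use '-' and '_' instead of '+' and '/'
        String segment = "a-b_cw";
        // Succeeds: the URL decoder understands '-' and '_'
        byte[] ok = Base64.getUrlDecoder().decode(segment);
        System.out.println(ok.length + " bytes");
        // Throws IllegalArgumentException: the basic decoder rejects '-'
        byte[] bad = Base64.getDecoder().decode(segment);
    }
}
```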

The error I get when using Confluent Schema Registry clients:
```
org.apache.kafka.common.errors.SerializationException: Error serializing JSON message
    at io.confluent.kafka.serializers.json.AbstractKafkaJsonSchemaSerializer.serializeImpl(AbstractKafkaJsonSchemaSerializer.java:171)
    at io.confluent.kafka.serializers.json.KafkaJsonSchemaSerializer.serialize(KafkaJsonSchemaSerializer.java:95)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:1000)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:947)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:832)
    at se.ica.icc.schemaregistry.example.confluent.ProducerJsonExample.main(ProducerJsonExample.java:87)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:282)
    at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: io.confluent.kafka.schemaregistry.client.security.bearerauth.oauth.exceptions.SchemaRegistryOauthTokenRetrieverException: Error while fetching Oauth Token for Schema Registry: OAuth Token for Schema Registry is Invalid
    at io.confluent.kafka.schemaregistry.client.security.bearerauth.oauth.CachedOauthTokenRetriever.getToken(CachedOauthTokenRetriever.java:74)
    at io.confluent.kafka.schemaregistry.client.security.bearerauth.oauth.OauthCredentialProvider.getBearerToken(OauthCredentialProvider.java:53)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.setAuthRequestHeaders(RestService.java:1336)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.buildConnection(RestService.java:361)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:300)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:409)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.getLatestVersion(RestService.java:981)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.getLatestVersion(RestService.java:972)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getLatestSchemaMetadata(CachedSchemaRegistryClient.java:574)
    at io.confluent.kafka.serializers.AbstractKafkaSchemaSerDe.lookupLatestVersion(AbstractKafkaSchemaSerDe.java:571)
    at io.confluent.kafka.serializers.AbstractKafkaSchemaSerDe.lookupLatestVersion(AbstractKafkaSchemaSerDe.java:554)
    at io.confluent.kafka.serializers.json.AbstractKafkaJsonSchemaSerializer.serializeImpl(AbstractKafkaJsonSchemaSerializer.java:151)
    ... 11 more
Caused by: org.apache.kafka.common.security.oauthbearer.internals.secured.ValidateException: Could not validate the access token: malformed Base64 URL encoded value
    at org.apache.kafka.common.security.oauthbearer.internals.secured.LoginAccessTokenValidator.validate(LoginAccessTokenValidator.java:93)
    at io.confluent.kafka.schemaregistry.client.security.bearerauth.oauth.CachedOauthTokenRetriever.getToken(CachedOauthTokenRetriever.java:72)
    ... 22 more
Caused by: org.apache.kafka.common.security.oauthbearer.internals.unsecured.OAuthBearerIllegalTokenException: malformed Base64 URL encoded value
    at org.apache.kafka.common.security.oauthbearer.internals.unsecured.OAuthBearerUnsecuredJws.toMap(OAuthBea

[jira] [Created] (KAFKA-17090) Add documentation to CreateTopicsResult#config to remind users that both "type" and "document" are null

2024-07-05 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17090:
--

 Summary: Add documentation to CreateTopicsResult#config to remind 
users that both "type" and "document" are null 
 Key: KAFKA-17090
 URL: https://issues.apache.org/jira/browse/KAFKA-17090
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


CreateTopicsResult#config [0] always has a null type and null documentation, 
since the Kafka protocol does not declare those fields [1]. However, 
CreateTopicsResult#config reuses the class `ConfigEntry`, so users may 
expect those fields to be defined too.



[0] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/CreateTopicsResult.java#L68
[1] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/resources/common/message/CreateTopicsResponse.json#L55
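
A minimal sketch of where this surprises users (hedged; broker address, topic name, and settings are illustrative):
{code:java}
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.*;

public class CreateTopicsConfigDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            CreateTopicsResult result =
                admin.createTopics(List.of(new NewTopic("demo-topic", 1, (short) 1)));
            for (ConfigEntry entry : result.config("demo-topic").get().entries()) {
                // Both are always null here: the CreateTopics response does not
                // carry the "type" and "documentation" fields.
                System.out.println(entry.name() + " type=" + entry.type()
                        + " documentation=" + entry.documentation());
            }
        }
    }
}
{code}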



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17091) Add @FunctionalInterface to Streams interfaces

2024-07-05 Thread Ray McDermott (Jira)
Ray McDermott created KAFKA-17091:
-

 Summary: Add @FunctionalInterface to Streams interfaces
 Key: KAFKA-17091
 URL: https://issues.apache.org/jira/browse/KAFKA-17091
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Ray McDermott
Assignee: Ray McDermott


Clojure version 1.12 (currently in beta) has many updates to Java interop.

Unfortunately, it does not quite deliver what we need with respect to thinning 
down Kafka Streams interop.

We were specifically hoping that passing {{(fn [] ...)}} to SAM interfaces 
would just work and we would not need to {{reify}} the interface.

Sadly it only works for interfaces that have been explicitly annotated with 
{{@FunctionalInterface}}  - and the Kafka Streams DSL does not have those 
annotations.

Details here

https://ask.clojure.org/index.php/13908/expand-fi-adapting-to-sam-types-not-marked-as-fi
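
For illustration, the request amounts to annotating SAM types like ValueMapper (a sketch; the real declaration is unannotated today):
{code:java}
// Adding the annotation changes nothing for Java callers, but lets tools and
// languages such as Clojure treat the interface as a SAM/functional type.
@FunctionalInterface
public interface ValueMapper<V, VR> {
    VR apply(V value);
}
{code}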



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17092) Revisit `KafkaConsumerTest#testBeginningOffsetsTimeout`

2024-07-05 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17092:
--

 Summary: Revisit `KafkaConsumerTest#testBeginningOffsetsTimeout`
 Key: KAFKA-17092
 URL: https://issues.apache.org/jira/browse/KAFKA-17092
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


Sometimes it hangs in my Jenkins... I am not sure whether the Kafka Jenkins 
encounters the same issue or not.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17093) KafkaConsumer.seekToEnd should return LSO

2024-07-05 Thread Tom Kalmijn (Jira)
Tom Kalmijn created KAFKA-17093:
---

 Summary: KafkaConsumer.seekToEnd should return LSO 
 Key: KAFKA-17093
 URL: https://issues.apache.org/jira/browse/KAFKA-17093
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 3.6.1
 Environment: Ubuntu,  IntelliJ, Scala   "org.apache.kafka" % 
"kafka-clients" % "3.6.1"

Reporter: Tom Kalmijn


 

Expected

When using a transactional producer, the method 
KafkaConsumer.seekToEnd(...) on a consumer configured with isolation level 
"read_committed" should return the LSO.

Observed

The offset returned is always the actual last offset of the partition, which is 
not the LSO if the latest offsets are occupied by transaction markers.

Also see this Slack thread:

https://confluentcommunity.slack.com/archives/C499EFQS0/p1720088282557559
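
A minimal reproduction sketch of the reported behavior (broker address, topic, and partition are illustrative):
{code:java}
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class SeekToEndLsoDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "lso-check");
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("tx-topic", 0);
            consumer.assign(List.of(tp));
            consumer.seekToEnd(List.of(tp));
            // Reported behavior: this returns the actual end offset rather than
            // the LSO when the tail of the partition holds transaction markers.
            System.out.println("seekToEnd position: " + consumer.position(tp));
        }
    }
}
{code}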



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #3080

2024-07-05 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-17094) Make it possible to list registered KRaft nodes in order to know which nodes should be unregistered

2024-07-05 Thread Jakub Scholz (Jira)
Jakub Scholz created KAFKA-17094:


 Summary: Make it possible to list registered KRaft nodes in order 
to know which nodes should be unregistered
 Key: KAFKA-17094
 URL: https://issues.apache.org/jira/browse/KAFKA-17094
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.7.1
Reporter: Jakub Scholz


Kafka seems to require nodes that are removed from the cluster to be 
unregistered using the Kafka Admin API. If they are not unregistered, you might 
run into problems later. For example, after an upgrade, when you try to bump 
the KRaft metadata version, you might get an error like this:

 
{code:java}
org.apache.kafka.common.errors.InvalidUpdateVersionException: Invalid update 
version 19 for feature metadata.version. Broker 3002 only supports versions 
1-14 {code}
In this case, 3002 is an old node that was removed before the upgrade; it 
doesn't support KRaft metadata version 19 and therefore blocks the metadata update.

 

However, it seems to be impossible to list the registered nodes in order to 
unregister them:
 * The describe cluster metadata request in the Admin API seems to return only 
the IDs of running brokers
 * The describe metadata quorum command seems to list the removed nodes in the 
list of observers. But it does so only until the controller nodes are restarted.

If Kafka expects the inactive nodes to be unregistered, it should provide a list 
of the registered nodes so that users can check which nodes need to be 
unregistered (see the sketch below).
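
For illustration, what is available via the Admin API today (a hedged sketch; uses Admin#unregisterBroker and the broker id from the example above):
{code:java}
import java.util.Collection;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class RegisteredNodesDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Per the report, this returns only the currently running brokers,
            // so a removed-but-still-registered node (e.g. 3002) never shows up:
            Collection<Node> brokers = admin.describeCluster().nodes().get();
            brokers.forEach(n -> System.out.println("running: " + n.id()));
            // Unregistering works, but only if the id is already known:
            admin.unregisterBroker(3002).all().get();
        }
    }
}
{code}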



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-1006: Remove SecurityManager Support

2024-07-05 Thread Greg Harris
Hi all,

I've added Ismael's kip-less idea as a rejected alternative, with
appropriate justification.
I'm happy to discuss this alternative, as it appears to be the best
alternative, and will be what happens if the KIP vote does not succeed.

Thanks,
Greg

On Wed, Jul 3, 2024 at 10:34 AM Greg Harris  wrote:

> Hi Ismael,
>
> Thanks for the question.
>
> > Can we not
> > use the SecurityManager when it's available and fallback when it's not?
>
> This is the strategy the KIP is proposing in the interim before we drop
> support for the SecurityManager. The KIP should be stating this idea, just
> more verbosely.
>
> > I'm not totally clear on why we need a KIP.
>
> Implementing the above strategy is IMHO tech debt, and I wanted to plan
> for eventually paying off that tech debt before incurring it.
> I think the only way to eliminate it is going to be removing our support
> for SecurityManager entirely.
> Since there may be Kafka users using the SecurityManager, this would
> represent a removal of functionality/breaking change for them, and
> therefore warrants a KIP.
>
> Please let me know if you have more questions,
> Greg
>
> On Wed, Jul 3, 2024 at 10:14 AM Ismael Juma  wrote:
>
>> Hi Greg,
>>
>> Thanks for the KIP. I'm not totally clear on why we need a KIP. Can we not
>> use the SecurityManager when it's available and fallback when it's not? If
>> so, then it would mean that whether SecurityManager is used or not depends
>> on the JDK and its configuration.
>>
>> Ismael
>>
>> On Mon, Nov 20, 2023 at 4:48 PM Greg Harris > >
>> wrote:
>>
>> > Hi all,
>> >
>> > I'd like to invite you all to discuss removing SecurityManager support
>> > from Kafka. This affects the client and server SASL mechanism, Tiered
>> > Storage, and Connect classloading.
>> >
>> > Find the KIP here:
>> >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1006%3A+Remove+SecurityManager+Support
>> >
>> > I think this is a "code hygiene" effort that doesn't need to be dealt
>> > with urgently, but it would prevent a lot of headache later when Java
>> > does decide to remove support.
>> >
>> > If you are currently using the SecurityManager with Kafka, I'd really
>> > appreciate hearing how you're using it, and how you're planning around
>> > its removal.
>> >
>> > Thanks!
>> > Greg Harris
>> >
>>
>


Re: [DISCUSS] KIP-1006: Remove SecurityManager Support

2024-07-05 Thread Greg Harris
Also, I've opened a PR with the proposed reflective shim, for anyone that
is interested: https://github.com/apache/kafka/pull/16522
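
For reference, the reflective approach under discussion looks roughly like this (a sketch only, not the contents of the PR):

    import java.lang.reflect.Method;

    final class SecurityManagerShim {
        // Look up System.getSecurityManager() reflectively so this class still
        // loads and runs on future JDKs where the method has been removed.
        static Object currentSecurityManager() {
            try {
                Method m = System.class.getMethod("getSecurityManager");
                return m.invoke(null);
            } catch (NoSuchMethodException e) {
                return null; // this JDK no longer ships SecurityManager support
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException(e);
            }
        }
    }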

Thanks,
Greg

On Fri, Jul 5, 2024 at 3:17 PM Greg Harris  wrote:

> Hi all,
>
> I've added Ismael's kip-less idea as a rejected alternative, with
> appropriate justification.
> I'm happy to discuss this alternative, as it appears to be the best
> alternative, and will be what happens if the KIP vote does not succeed.
>
> Thanks,
> Greg
>
> On Wed, Jul 3, 2024 at 10:34 AM Greg Harris  wrote:
>
>> Hi Ismael,
>>
>> Thanks for the question.
>>
>> > Can we not
>> > use the SecurityManager when it's available and fallback when it's not?
>>
>> This is the strategy the KIP is proposing in the interim before we drop
>> support for the SecurityManager. The KIP should be stating this idea, just
>> more verbosely.
>>
>> > I'm not totally clear on why we need a KIP.
>>
>> Implementing the above strategy is IMHO tech debt, and I wanted to plan
>> for eventually paying off that tech debt before incurring it.
>> I think the only way to eliminate it is going to be removing our support
>> for SecurityManager entirely.
>> Since there may be Kafka users using the SecurityManager, this would
>> represent a removal of functionality/breaking change for them, and
>> therefore warrants a KIP.
>>
>> Please let me know if you have more questions,
>> Greg
>>
>> On Wed, Jul 3, 2024 at 10:14 AM Ismael Juma  wrote:
>>
>>> Hi Greg,
>>>
>>> Thanks for the KIP. I'm not totally clear on why we need a KIP. Can we
>>> not
>>> use the SecurityManager when it's available and fallback when it's not?
>>> If
>>> so, then it would mean that whether SecurityManager is used or not
>>> depends
>>> on the JDK and its configuration.
>>>
>>> Ismael
>>>
>>> On Mon, Nov 20, 2023 at 4:48 PM Greg Harris >> >
>>> wrote:
>>>
>>> > Hi all,
>>> >
>>> > I'd like to invite you all to discuss removing SecurityManager support
>>> > from Kafka. This affects the client and server SASL mechanism, Tiered
>>> > Storage, and Connect classloading.
>>> >
>>> > Find the KIP here:
>>> >
>>> >
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1006%3A+Remove+SecurityManager+Support
>>> >
>>> > I think this is a "code hygiene" effort that doesn't need to be dealt
>>> > with urgently, but it would prevent a lot of headache later when Java
>>> > does decide to remove support.
>>> >
>>> > If you are currently using the SecurityManager with Kafka, I'd really
>>> > appreciate hearing how you're using it, and how you're planning around
>>> > its removal.
>>> >
>>> > Thanks!
>>> > Greg Harris
>>> >
>>>
>>


[jira] [Resolved] (KAFKA-16806) Explicitly declare JUnit dependencies for all test modules

2024-07-05 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16806.

Fix Version/s: 3.9.0
   Resolution: Fixed

> Explicitly declare JUnit dependencies for all test modules
> --
>
> Key: KAFKA-16806
> URL: https://issues.apache.org/jira/browse/KAFKA-16806
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Greg Harris
>Assignee: TengYao Chi
>Priority: Major
> Fix For: 3.9.0
>
>
> The automatic loading of test framework implementation dependencies has been 
> deprecated.    
> This is scheduled to be removed in Gradle 9.0.    
> Declare the desired test framework directly on the test suite or explicitly 
> declare the test framework implementation dependencies on the test's runtime 
> classpath.    
> [Documentation|https://docs.gradle.org/8.7/userguide/upgrading_version_8.html#test_framework_implementation_dependencies]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #3081

2024-07-05 Thread Apache Jenkins Server
See