Hi Ajantha,

I will help with the Flink PR. However, I don’t think we need to hold 1.9.0
for the Spark and Flink updates; we can always include them in the next
release cycle.

I propose we focus on the open issues and track the milestone on GitHub.

Regards
JB

On Thu, Mar 27, 2025 at 7:11 AM Ajantha Bhat <ajanthab...@gmail.com> wrote:

> Latest update on the release:
>
> As discussed in the last community sync, I waited for the Spark 4.0
> release, but RC3 has failed.
> Flink 2.0 has been released, and there is an open PR
> <https://github.com/apache/iceberg/pull/12527> to add support for it in
> Iceberg, but it hasn't made any progress this week.
>
> We have recently found a few issues and small requirements for the Auth
> Manager and added them to the milestone:
> https://github.com/apache/iceberg/milestone/53
> If we get good review support, I think 2-3 days should be sufficient to
> close these and prepare a release.
>
>
> - Ajantha
>
> On Thu, Mar 20, 2025 at 1:42 PM Alex Dutra <alex.du...@dremio.com.invalid>
> wrote:
>
>> Hi Yuya,
>>
>> Thanks for reporting this issue, which is indeed a defect in the SigV4
>> auth manager. Thankfully, the fix was easy:
>>
>> https://github.com/apache/iceberg/pull/12582
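>>
>> For anyone curious, the failure mode is the unchecked downcast pattern
>> sketched below. This is a minimal, self-contained illustration, not the
>> actual OAuth2Manager code; only the class names mirror the stack trace:
>>
>> // Stand-ins for the real org.apache.iceberg.rest.auth / .aws types:
>> class CastDemo {
>>   interface AuthSession {}
>>   static class OAuth2Session implements AuthSession {}  // OAuth2Util.AuthSession
>>   static class SigV4Session implements AuthSession {}   // RESTSigV4AuthSession
>>
>>   public static void main(String[] args) {
>>     // The SigV4 manager delegates to the OAuth2 manager but caches
>>     // sessions of its own subtype:
>>     AuthSession cached = new SigV4Session();
>>     // The OAuth2 manager then assumed every cached session was its own:
>>     OAuth2Session oauth = (OAuth2Session) cached;  // ClassCastException at runtime
>>   }
>> }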
>>
>> Sorry for the inconvenience,
>>
>> Alex
>>
>> On Thu, Mar 20, 2025 at 5:20 AM Yuya Ebihara <
>> yuya.ebih...@starburstdata.com> wrote:
>>
>>> The S3 Tables tests in Trino with the 1.9.0 nightly release throw a
>>> ClassCastException (RESTSigV4AuthSession → OAuth2Util$AuthSession). The
>>> same tests work fine with 1.8.1.
>>> I’m going to check if adjusting our test settings based on this PR (
>>> https://github.com/apache/iceberg/pull/11995) can fix the issue.
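>>>
>>> In case it helps others, this is my guess at the kind of catalog wiring
>>> that PR moves the tests to. The property keys are the documented SigV4
>>> ones; the endpoint and values are placeholders, not the PR's actual
>>> settings:
>>>
>>> import java.util.Map;
>>> import org.apache.iceberg.rest.RESTCatalog;
>>>
>>> class SigV4CatalogDemo {
>>>   public static void main(String[] args) {
>>>     RESTCatalog catalog = new RESTCatalog();
>>>     catalog.initialize("s3tables", Map.of(
>>>         "uri", "https://example.invalid/iceberg",  // placeholder endpoint
>>>         "rest.sigv4-enabled", "true",              // sign REST requests with SigV4
>>>         "rest.signing-region", "us-east-1",        // placeholder region
>>>         "rest.signing-name", "s3tables"));         // placeholder service name
>>>   }
>>> }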
>>>
>>>
>>> https://github.com/trinodb/trino/actions/runs/13960201796/job/39080056606?pr=25331
>>> Error: io.trino.plugin.iceberg.catalog.rest.TestIcebergS3TablesConnectorSmokeTest -- Time elapsed: 3.038 s <<< ERROR!
>>> io.trino.testing.QueryFailedException: class org.apache.iceberg.aws.RESTSigV4AuthSession cannot be cast to class org.apache.iceberg.rest.auth.OAuth2Util$AuthSession (org.apache.iceberg.aws.RESTSigV4AuthSession and org.apache.iceberg.rest.auth.OAuth2Util$AuthSession are in unnamed module of loader 'app')
>>>     at io.trino.testing.AbstractTestingTrinoClient.execute(AbstractTestingTrinoClient.java:138)
>>>     at io.trino.testing.DistributedQueryRunner.executeInternal(DistributedQueryRunner.java:565)
>>>     at io.trino.testing.DistributedQueryRunner.execute(DistributedQueryRunner.java:548)
>>>     at io.trino.testing.QueryRunner.execute(QueryRunner.java:82)
>>>     at io.trino.plugin.iceberg.SchemaInitializer.accept(SchemaInitializer.java:54)
>>>     at io.trino.plugin.iceberg.IcebergQueryRunner$Builder.lambda$build$3(IcebergQueryRunner.java:178)
>>>     at java.base/java.util.Optional.ifPresent(Optional.java:178)
>>>     at io.trino.plugin.iceberg.IcebergQueryRunner$Builder.build(IcebergQueryRunner.java:178)
>>>     at io.trino.plugin.iceberg.catalog.rest.TestIcebergS3TablesConnectorSmokeTest.createQueryRunner(TestIcebergS3TablesConnectorSmokeTest.java:83)
>>>     at io.trino.testing.AbstractTestQueryFramework.init(AbstractTestQueryFramework.java:119)
>>>     at java.base/java.lang.reflect.Method.invoke(Method.java:580)
>>>     at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:507)
>>>     at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.tryRemoveAndExec(ForkJoinPool.java:1501)
>>>     at java.base/java.util.concurrent.ForkJoinPool.helpJoin(ForkJoinPool.java:2274)
>>>     at java.base/java.util.concurrent.ForkJoinTask.awaitDone(ForkJoinTask.java:495)
>>>     at java.base/java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:662)
>>>     at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:507)
>>>     at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1460)
>>>     at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:2036)
>>>     at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:189)
>>>   Suppressed: java.lang.Exception: SQL: CREATE SCHEMA IF NOT EXISTS "tpch"
>>>     at io.trino.testing.DistributedQueryRunner.executeInternal(DistributedQueryRunner.java:572)
>>>     ... 18 more
>>> Caused by: java.lang.ClassCastException: class org.apache.iceberg.aws.RESTSigV4AuthSession cannot be cast to class org.apache.iceberg.rest.auth.OAuth2Util$AuthSession (org.apache.iceberg.aws.RESTSigV4AuthSession and org.apache.iceberg.rest.auth.OAuth2Util$AuthSession are in unnamed module of loader 'app')
>>>     at org.apache.iceberg.rest.auth.OAuth2Manager.contextualSession(OAuth2Manager.java:142)
>>>     at org.apache.iceberg.rest.auth.OAuth2Manager.contextualSession(OAuth2Manager.java:40)
>>>     at org.apache.iceberg.aws.RESTSigV4AuthManager.contextualSession(RESTSigV4AuthManager.java:68)
>>>     at org.apache.iceberg.rest.RESTSessionCatalog.loadNamespaceMetadata(RESTSessionCatalog.java:608)
>>>     at org.apache.iceberg.catalog.SessionCatalog.namespaceExists(SessionCatalog.java:358)
>>>     at org.apache.iceberg.rest.RESTSessionCatalog.namespaceExists(RESTSessionCatalog.java:595)
>>>     at io.trino.plugin.iceberg.catalog.rest.TrinoRestCatalog.namespaceExists(TrinoRestCatalog.java:162)
>>>     at io.trino.plugin.iceberg.IcebergMetadata.schemaExists(IcebergMetadata.java:512)
>>>     at io.trino.plugin.base.classloader.ClassLoaderSafeConnectorMetadata.schemaExists(ClassLoaderSafeConnectorMetadata.java:193)
>>>     at io.trino.tracing.TracingConnectorMetadata.schemaExists(TracingConnectorMetadata.java:125)
>>>     at io.trino.metadata.MetadataManager.lambda$schemaExists$1(MetadataManager.java:248)
>>>     at java.base/java.util.stream.MatchOps$1MatchSink.accept(MatchOps.java:90)
>>>     at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:215)
>>>     at java.base/java.util.Spliterators$ArraySpliterator.tryAdvance(Spliterators.java:1034)
>>>     at java.base/java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:147)
>>>     at java.base/java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:588)
>>>     at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:574)
>>>     at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:560)
>>>     at java.base/java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:230)
>>>     at java.base/java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:196)
>>>     at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:265)
>>>     at java.base/java.util.stream.ReferencePipeline.anyMatch(ReferencePipeline.java:672)
>>>     at io.trino.metadata.MetadataManager.schemaExists(MetadataManager.java:248)
>>>     at io.trino.tracing.TracingMetadata.schemaExists(TracingMetadata.java:165)
>>>     at io.trino.execution.CreateSchemaTask.internalExecute(CreateSchemaTask.java:119)
>>>     at io.trino.execution.CreateSchemaTask.execute(CreateSchemaTask.java:82)
>>>     at io.trino.execution.CreateSchemaTask.execute(CreateSchemaTask.java:54)
>>>     at io.trino.execution.DataDefinitionExecution.start(DataDefinitionExecution.java:152)
>>>     at io.trino.execution.SqlQueryManager.createQuery(SqlQueryManager.java:272)
>>>     at io.trino.dispatcher.LocalDispatchQuery.startExecution(LocalDispatchQuery.java:150)
>>>     at io.trino.dispatcher.LocalDispatchQuery.lambda$waitForMinimumWorkers$2(LocalDispatchQuery.java:134)
>>>     at io.airlift.concurrent.MoreFutures.lambda$addSuccessCallback$12(MoreFutures.java:570)
>>>     at io.airlift.concurrent.MoreFutures$3.onSuccess(MoreFutures.java:545)
>>>     at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1139)
>>>     at io.trino.$gen.Trino_testversion____20250320_014939_2038.run(Unknown Source)
>>>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
>>>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
>>>     at java.base/java.lang.Thread.run(Thread.java:1575)
>>>
>>> BR,
>>> Yuya
>>>
>>> On Wed, Mar 19, 2025 at 12:38 AM Manu Zhang <owenzhang1...@gmail.com>
>>> wrote:
>>>
>>>> Hi Ajantha,
>>>>
>>>> Thanks for driving the release. Can we include
>>>> https://github.com/apache/iceberg/pull/12120?
>>>>
>>>> On Tue, Mar 18, 2025 at 3:18 AM Steve Loughran
>>>> <ste...@cloudera.com.invalid> wrote:
>>>>
>>>>>
>>>>> Can I get this reviewed and merged? It gives all Hadoop filesystems
>>>>> with bulk delete calls the ability to issue bulk deletes up to their
>>>>> page sizes; it's off by default. Tested all the way through Iceberg to
>>>>> AWS S3 London.
>>>>>
>>>>> https://github.com/apache/iceberg/pull/10233
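>>>>>
>>>>> For reviewers, here is a rough sketch of the Hadoop bulk delete API
>>>>> (HADOOP-18679) that the PR builds on. The FileSystem/BulkDelete calls
>>>>> come from that API; the batching loop is assumed glue code, not the
>>>>> PR itself:
>>>>>
>>>>> import java.io.IOException;
>>>>> import java.util.List;
>>>>> import java.util.Map;
>>>>> import org.apache.hadoop.fs.BulkDelete;
>>>>> import org.apache.hadoop.fs.FileSystem;
>>>>> import org.apache.hadoop.fs.Path;
>>>>>
>>>>> class BulkDeleteSketch {
>>>>>   static void deleteAll(FileSystem fs, Path base, List<Path> paths)
>>>>>       throws IOException {
>>>>>     try (BulkDelete bulk = fs.createBulkDelete(base)) {
>>>>>       // Page size is 1 for plain filesystems; object stores like S3A
>>>>>       // can advertise much larger pages.
>>>>>       int page = bulk.pageSize();
>>>>>       for (int i = 0; i < paths.size(); i += page) {
>>>>>         List<Path> batch = paths.subList(i, Math.min(i + page, paths.size()));
>>>>>         // Returns the paths that could not be deleted, with error text:
>>>>>         List<Map.Entry<Path, String>> failed = bulk.bulkDelete(batch);
>>>>>         if (!failed.isEmpty()) {
>>>>>           throw new IOException("Failed to delete: " + failed);
>>>>>         }
>>>>>       }
>>>>>     }
>>>>>   }
>>>>> }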
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Mon, 17 Mar 2025 at 12:32, Yuya Ebihara <
>>>>> yuya.ebih...@starburstdata.com> wrote:
>>>>>
>>>>>> Hi, can we include https://github.com/apache/iceberg/pull/12264 to
>>>>>> fix the S3-compatible storage issue?
>>>>>> We downgraded the problematic library in
>>>>>> https://github.com/apache/iceberg/pull/12339, but the issue is still
>>>>>> present in the main branch.
>>>>>>
>>>>>
>>>>> I'd go with downgrading the AWS SDK to 2.29.x and then getting on the
>>>>> relevant AWS SDK discussion to express your concerns:
>>>>> https://github.com/aws/aws-sdk-java-v2/discussions/5802
>>>>>
>>>>> The problem here is that there's such a broad set of implementations
>>>>> of the S3 API that it'll take testing to see whether even the
>>>>> suggestions from the SDK team work everywhere, and we now have explicit
>>>>> confirmation that the SDK team leaves all such testing to downstream
>>>>> users.
>>>>>
>>>>>
>>>>>
>>>>> *The AWS SDKs and CLI are designed for usage with official AWS
>>>>> services. We may introduce and enable new features by default, such as
>>>>> these new default integrity protections, prior to them being supported
>>>>> or otherwise handled by third-party service implementations.*
>>>>>
>>>>> I think ASF projects need to make clear how dangerous this is: projects
>>>>> will end up shipping releases which don't work, and the "set an env var
>>>>> or a system property" workarounds are not enough.
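>>>>>
>>>>> For reference, my understanding of the "env var or system property"
>>>>> workaround is roughly the following; the property and builder names
>>>>> come from the SDK discussion above, so verify them against your SDK
>>>>> version before relying on this sketch:
>>>>>
>>>>> import software.amazon.awssdk.core.checksums.RequestChecksumCalculation;
>>>>> import software.amazon.awssdk.core.checksums.ResponseChecksumValidation;
>>>>> import software.amazon.awssdk.services.s3.S3Client;
>>>>>
>>>>> class ChecksumWorkaroundDemo {
>>>>>   public static void main(String[] args) {
>>>>>     // Process-wide: only compute/validate checksums when required
>>>>>     // (AWS_REQUEST_CHECKSUM_CALCULATION / AWS_RESPONSE_CHECKSUM_VALIDATION
>>>>>     // are the equivalent environment variables):
>>>>>     System.setProperty("aws.requestChecksumCalculation", "WHEN_REQUIRED");
>>>>>     System.setProperty("aws.responseChecksumValidation", "WHEN_REQUIRED");
>>>>>
>>>>>     // Or per client, which is what a library has to do:
>>>>>     try (S3Client s3 = S3Client.builder()
>>>>>         .requestChecksumCalculation(RequestChecksumCalculation.WHEN_REQUIRED)
>>>>>         .responseChecksumValidation(ResponseChecksumValidation.WHEN_REQUIRED)
>>>>>         .build()) {
>>>>>       // use the client as usual
>>>>>     }
>>>>>   }
>>>>> }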
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> On Mon, Mar 17, 2025 at 8:47 PM Ajantha Bhat <ajanthab...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> During the community sync, we decided to do a fast follow-up release
>>>>>>> for the things that missed the 1.8.0 release train.
>>>>>>>
>>>>>>> More details here:
>>>>>>> https://lists.apache.org/thread/wvz5sd7pmh5ww1yqhsxpt1kwf993276j
>>>>>>>
>>>>>>> On Mon, Mar 17, 2025 at 4:53 PM Russell Spitzer <
>>>>>>> russell.spit...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Can you please rehash the plan? I thought we just did a release
>>>>>>>> last month and were aiming for a three-month schedule. I may have
>>>>>>>> missed something.
>>>>>>>>
>>>>>>>> On Mon, Mar 17, 2025 at 6:00 AM Ajantha Bhat <ajanthab...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hey everyone,
>>>>>>>>>
>>>>>>>>> Following the plan from the 1.8.0 release, I'll be handling the
>>>>>>>>> 1.9.0 release. We have several major updates ready:
>>>>>>>>>
>>>>>>>>>    - *Partition stats APIs:* All core APIs for partition
>>>>>>>>>    statistics have been merged, unblocking engines like Dremio,
>>>>>>>>>    Trino, and Hive that were waiting for this feature (see the
>>>>>>>>>    sketch after this list).
>>>>>>>>>    - *REST catalog authentication:* The refactoring PR for the
>>>>>>>>>    REST catalog authentication manager has been merged, improving
>>>>>>>>>    authentication support.
>>>>>>>>>    - *Spark 3.3 support removed:* We've officially dropped
>>>>>>>>>    support for Spark 3.3.
>>>>>>>>>    - *InternalData support in the core module:* The core module
>>>>>>>>>    now supports InternalData, leveraging the internal Parquet and
>>>>>>>>>    Avro readers added in the previous release. This allows metadata
>>>>>>>>>    to be written in Parquet.
>>>>>>>>>    - *Bug fixes:* Many important bug fixes have been merged.
>>>>>>>>>
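>>>>>>>>> As a rough illustration of the partition stats flow mentioned in the
>>>>>>>>> first bullet (a sketch only: the handler entry point and signature
>>>>>>>>> are assumed from the merged work, so please check the actual PRs):
>>>>>>>>>
>>>>>>>>> import org.apache.iceberg.PartitionStatisticsFile;
>>>>>>>>> import org.apache.iceberg.PartitionStatsHandler;
>>>>>>>>> import org.apache.iceberg.Table;
>>>>>>>>>
>>>>>>>>> class PartitionStatsDemo {
>>>>>>>>>   static void refreshStats(Table table) throws Exception {
>>>>>>>>>     // Compute stats for the current snapshot and write the stats file
>>>>>>>>>     // (assumed entry point from the new core partition stats APIs):
>>>>>>>>>     PartitionStatisticsFile statsFile =
>>>>>>>>>         PartitionStatsHandler.computeAndWriteStatsFile(table);
>>>>>>>>>     // Register the file in table metadata so engines can discover it:
>>>>>>>>>     table.updatePartitionStatistics()
>>>>>>>>>         .setPartitionStatistics(statsFile)
>>>>>>>>>         .commit();
>>>>>>>>>   }
>>>>>>>>> }
>>>>>>>>>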
>>>>>>>>> A 1.9.0 milestone has been created with additional "good-to-have"
>>>>>>>>> issues:
>>>>>>>>> https://github.com/apache/iceberg/milestone/53
>>>>>>>>>
>>>>>>>>> If there's anything urgent that needs to be included for this
>>>>>>>>> release, please let me know or reply to this thread.
>>>>>>>>> I'm aiming to start the release cut by the end of this week.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> - Ajantha
>>>>>>>>>
>>>>>>>>
