I've now tracked this down to the fact that my application is trying to run
a query in a separate thread from the one that set up the table
environment. Is StreamTableEnvironment supposed to be thread-safe? Can it
be a singleton that is shared across multiple threads to serve queries, or
should each thread build its own?
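
For context, if per-thread environments are the recommendation, this is
roughly the pattern I'd switch to (a rough sketch of the idea, not our
actual code; the class and query are placeholders):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.TableResult;

public class PerThreadTableEnv {
    // Each worker thread builds and keeps its own TableEnvironment,
    // so planning and execution happen on the thread that created it.
    private static final ThreadLocal<TableEnvironment> TABLE_ENV =
            ThreadLocal.withInitial(() ->
                    TableEnvironment.create(
                            EnvironmentSettings.newInstance().inBatchMode().build()));

    public static void main(String[] args) throws Exception {
        Runnable queryTask = () -> {
            TableEnvironment tEnv = TABLE_ENV.get();
            // Catalog/table registration would go here, on the same thread.
            TableResult result = tEnv.executeSql("SELECT 1");
            result.print();
        };
        Thread t1 = new Thread(queryTask);
        Thread t2 = new Thread(queryTask);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}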

Thanks!

Jad Naous <https://www.linkedin.com/in/jadnaous/>
Grepr, CEO/Founder



On Fri, Oct 13, 2023 at 5:42 PM Jad Naous <j...@grepr.io> wrote:

> Hi all,
>
> We're using the Table API to read in batch mode from an Iceberg table on
> s3 using DynamoDB as the catalog implementation. We're hitting a NPE in the
> Calcite planner. The same query works fine when using the local FS and
> in-memory catalog. Below is a snipped stacktrace. Any thoughts on how to
> troubleshoot this? Any help would be much appreciated!
> Thanks!
> Jad
>
> java.lang.NullPointerException: metadataHandlerProvider
> at java.base/java.util.Objects.requireNonNull(Objects.java:246)
> at
> org.apache.calcite.rel.metadata.RelMetadataQueryBase.getMetadataHandlerProvider(RelMetadataQueryBase.java:122)
> at
> org.apache.calcite.rel.metadata.RelMetadataQueryBase.revise(RelMetadataQueryBase.java:118)
> at
> org.apache.calcite.rel.metadata.RelMetadataQuery.collations(RelMetadataQuery.java:604)
> at
> org.apache.calcite.rel.metadata.RelMdCollation.project(RelMdCollation.java:291)
> at
> org.apache.calcite.rel.logical.LogicalProject.lambda$create$0(LogicalProject.java:125)
> at org.apache.calcite.plan.RelTraitSet.replaceIfs(RelTraitSet.java:244)
> at
> org.apache.calcite.rel.logical.LogicalProject.create(LogicalProject.java:124)
> at
> org.apache.calcite.rel.logical.LogicalProject.create(LogicalProject.java:114)
> at
> org.apache.calcite.rel.core.RelFactories$ProjectFactoryImpl.createProject(RelFactories.java:178)
> at org.apache.calcite.tools.RelBuilder.project_(RelBuilder.java:2191)
> at org.apache.calcite.tools.RelBuilder.project(RelBuilder.java:1970)
> at
> org.apache.flink.table.planner.plan.QueryOperationConverter$SingleRelVisitor.visit(QueryOperationConverter.java:165)
> at
> org.apache.flink.table.planner.plan.QueryOperationConverter$SingleRelVisitor.visit(QueryOperationConverter.java:158)
> at
> org.apache.flink.table.operations.ProjectQueryOperation.accept(ProjectQueryOperation.java:76)
> at
> org.apache.flink.table.planner.plan.QueryOperationConverter.defaultMethod(QueryOperationConverter.java:155)
> at
> org.apache.flink.table.planner.plan.QueryOperationConverter.defaultMethod(QueryOperationConverter.java:135)
> at
> org.apache.flink.table.operations.utils.QueryOperationDefaultVisitor.visit(QueryOperationDefaultVisitor.java:47)
> at
> org.apache.flink.table.operations.ProjectQueryOperation.accept(ProjectQueryOperation.java:76)
> at
> org.apache.flink.table.planner.plan.QueryOperationConverter.lambda$defaultMethod$0(QueryOperationConverter.java:154)
> at
> java.base/java.util.Collections$SingletonList.forEach(Collections.java:4856)
> at
> org.apache.flink.table.planner.plan.QueryOperationConverter.defaultMethod(QueryOperationConverter.java:154)
> at
> org.apache.flink.table.planner.plan.QueryOperationConverter.defaultMethod(QueryOperationConverter.java:135)
> at
> org.apache.flink.table.operations.utils.QueryOperationDefaultVisitor.visit(QueryOperationDefaultVisitor.java:72)
> at
> org.apache.flink.table.operations.FilterQueryOperation.accept(FilterQueryOperation.java:67)
> at
> org.apache.flink.table.planner.plan.QueryOperationConverter.lambda$defaultMethod$0(QueryOperationConverter.java:154)
> at
> java.base/java.util.Collections$SingletonList.forEach(Collections.java:4856)
> at
> org.apache.flink.table.planner.plan.QueryOperationConverter.defaultMethod(QueryOperationConverter.java:154)
> at
> org.apache.flink.table.planner.plan.QueryOperationConverter.defaultMethod(QueryOperationConverter.java:135)
> at
> org.apache.flink.table.operations.utils.QueryOperationDefaultVisitor.visit(QueryOperationDefaultVisitor.java:82)
> at
> org.apache.flink.table.operations.SortQueryOperation.accept(SortQueryOperation.java:93)
> at
> org.apache.flink.table.planner.calcite.FlinkRelBuilder.queryOperation(FlinkRelBuilder.java:261)
> at
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:224)
> at
> org.apache.flink.table.planner.delegation.PlannerBase.$anonfun$translate$1(PlannerBase.scala:194)
> at
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
> at scala.collection.Iterator.foreach(Iterator.scala:937)
> at scala.collection.Iterator.foreach$(Iterator.scala:937)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
> at scala.collection.IterableLike.foreach(IterableLike.scala:70)
> at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike.map(TraversableLike.scala:233)
> at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:194)
> at
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1803)
> at
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:945)
> at
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1422)
> at
> org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:476)
>
>
