DDL support issues in Calcite

2022-02-21 Thread wang...@mchz.com.cn
Dear Calcite community:

I am a new Calcite user. I noted in the document 
https://calcite.apache.org/docs/adapter.html that Calcite does support DDL 
operations; the only things I need to do are 1) to include calcite-server.jar in 
my classpath and 2) to add 
parserFactory=org.apache.calcite.sql.parser.ddl.SqlDdlParserImpl#FACTORY to 
the JDBC connect string. 
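Concretely, my setup looks roughly like this (a simplified sketch, not my exact
test code; the parserFactory property is the one from the documentation):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

public class DdlSmokeTest {
  public static void main(String[] args) throws Exception {
    // calcite-server.jar is on the classpath; the DDL parser factory is
    // passed as a JDBC connection property.
    Properties props = new Properties();
    props.setProperty("parserFactory",
        "org.apache.calcite.sql.parser.ddl.SqlDdlParserImpl#FACTORY");
    try (Connection conn = DriverManager.getConnection("jdbc:calcite:", props);
         Statement stmt = conn.createStatement()) {
      stmt.execute("CREATE TABLE t (i INTEGER, j VARCHAR(10))");
    }
  }
}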
Now the problem is: I did so, but I still get the following error:


java.sql.SQLException: Error while executing SQL "CREATE TABLE t (i INTEGER, j VARCHAR(10))": DDL not supported: CREATE TABLE `T` (`I` INTEGER, `J` VARCHAR(10))
    at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
    at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
    at org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:163)
    at org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:227)
    at com..CalciteResolverTest.testJdbc(CalciteResolverTest.java:259)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at junit.framework.TestCase.runTest(TestCase.java:177)
    at junit.framework.TestCase.runBare(TestCase.java:142)
    at junit.framework.TestResult$1.protect(TestResult.java:122)
    at junit.framework.TestResult.runProtected(TestResult.java:142)
    at junit.framework.TestResult.run(TestResult.java:125)
    at junit.framework.TestCase.run(TestCase.java:130)
    at junit.framework.TestSuite.runTest(TestSuite.java:241)
    at junit.framework.TestSuite.run(TestSuite.java:236)
    at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:90)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
    at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
    at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:221)
    at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
Caused by: java.lang.UnsupportedOperationException: DDL not supported: CREATE TABLE `T` (`I` INTEGER, `J` VARCHAR(10))
    at org.apache.calcite.server.DdlExecutor.lambda$static$0(DdlExecutor.java:28)
    at org.apache.calcite.prepare.CalcitePrepareImpl.executeDdl(CalcitePrepareImpl.java:369)
    at org.apache.calcite.prepare.CalcitePrepareImpl.prepare2_(CalcitePrepareImpl.java:634)
    at org.apache.calcite.prepare.CalcitePrepareImpl.prepare_(CalcitePrepareImpl.java:513)
    at org.apache.calcite.prepare.CalcitePrepareImpl.prepareSql(CalcitePrepareImpl.java:483)
    at org.apache.calcite.jdbc.CalciteConnectionImpl.parseQuery(CalciteConnectionImpl.java:249)
    at org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:623)
    at org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:674)
    at org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
    ... 20 more

Did I miss any other properties that need to be set?




Many thanks
Sai Wang


[jira] [Created] (CALCITE-5017) SqlTypeUtil#canCastFrom is hard to extend for downstream projects

2022-02-21 Thread Francesco Guardiani (Jira)
Francesco Guardiani created CALCITE-5017:


 Summary: SqlTypeUtil#canCastFrom is hard to extend for downstream 
projects
 Key: CALCITE-5017
 URL: https://issues.apache.org/jira/browse/CALCITE-5017
 Project: Calcite
  Issue Type: Wish
Reporter: Francesco Guardiani


Hi all,
In Flink SQL we're extending the matrix of supported casting pairs beyond what 
Calcite provides, and we hit a roadblock when dealing with 
{{SqlTypeUtil#canCastFrom}}. Note that we also have a couple more types than 
Calcite provides.

The problems with this function are:

* It's hardcoded for structured types, so we essentially can't change anything 
about those.
* The {{SqlTypeMappingRule}} is not flexible enough, as it doesn't allow 
matching types with parameters, but just type name pairs.

As a workaround, we're overriding `SqlCastFunction` in our classpath, invoking 
our custom cast checking logic: 
https://github.com/apache/flink/pull/18524/files#diff-8c1c5cdfd4ee1de9d54fd39db308ab1d7fa9fddb2db1e36cdd1c9b0b8fc90f4bR161

It would be nice if this function could somehow be extended with some custom 
predicates.
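
To illustrate, the kind of hook we have in mind is roughly the following (a 
sketch only; {{ExtensibleCastChecker}} and the predicate list are made-up 
names, not an existing Calcite API):

{code:java}
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.sql.type.SqlTypeUtil;

import java.util.List;
import java.util.function.BiPredicate;

/** Sketch of a cast check that consults custom predicates before
 * falling back to Calcite's built-in SqlTypeUtil#canCastFrom. */
public final class ExtensibleCastChecker {
  private final List<BiPredicate<RelDataType, RelDataType>> extraRules;

  public ExtensibleCastChecker(List<BiPredicate<RelDataType, RelDataType>> extraRules) {
    this.extraRules = extraRules;
  }

  public boolean canCastFrom(RelDataType toType, RelDataType fromType, boolean coerce) {
    for (BiPredicate<RelDataType, RelDataType> rule : extraRules) {
      if (rule.test(toType, fromType)) {
        return true;
      }
    }
    return SqlTypeUtil.canCastFrom(toType, fromType, coerce);
  }
}
{code}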





Re: [HELP][DISCUSS] ReduceExpressionsRule configurability / extensibility

2022-02-21 Thread Ruben Q L
Thanks Stamatis, I will take a look at those links.


On Fri, Feb 18, 2022 at 9:41 PM Stamatis Zampetakis 
wrote:

> Hi Ruben,
>
> There was a recent request about preventing simplifications of certain
> operators [1] that does make sense in certain use cases. Apart from changes
> in RexSimplify this request would most likely need some kind of changes in
> various reduce expressions rules like the ones you seem to need as well.
>
> Adding appropriate configuration to the respective rule to avoid reducing
> certain expressions seems reasonable to me. Note that some other reduction
> rules, such as AggregateReduceFunctionsRule [2], already expose similar
> configurations.
>
> Best,
> Stamatis
>
> [1] https://lists.apache.org/thread/cyj792yqfc8byfkxcw2jv07c9tfs0np9
> [2]
>
> https://github.com/apache/calcite/blob/9c4f3bb540dd67a0ffefc09f4ebd98d2be65bb14/core/src/main/java/org/apache/calcite/rel/rules/AggregateReduceFunctionsRule.java#L871
>
> On Fri, Feb 18, 2022 at 4:23 PM Ruben Q L  wrote:
>
> > Hello community,
> >
> > I would need some advice for a very specific problem.
> > I find myself in the following situation: I have a plan with a bottom
> > filter (an equality) and a top filter (a user defined function MY_FUNC):
> > ...
> >   LogicalFilter(condition: MY_FUNC($0, 42))
> > LogicalFilter(condition: $0=1)
> >   ...
> >
> > When ReduceExpressionsRule.FilterReduceExpressionsRule gets applied on
> the
> > top filter, it pulls up the predicates from the bottom, detects that $0
> is
> > equal to 1, so it replaces it leaving:
> >   LogicalFilter(condition: MY_FUNC(1, 42))
> > LogicalFilter(condition: $0=1)
> >
> > The relevant code seems to be
> > ReduceExpressionsRule.ReducibleExprLocator#visitCall which considers that
> > all calls are a priori reducible:
> > @Override public Void visitCall(RexCall call) {
> >   // assume REDUCIBLE_CONSTANT until proven otherwise
> >   analyzeCall(call, Constancy.REDUCIBLE_CONSTANT);
> >   return null;
> > }
> >
> > However, due to some unrelated circumstances, this reduction is incorrect
> > for my particular UDF, so I do not want it to be converted into
> MY_FUNC(1,
> > 42), I'd need it to remain as MY_FUNC($0, 42) (i.e. neither the call
> > itself, nor its parameters can be reduced); the rest of the logic inside
> > FilterReduceExpressionsRule is perfectly fine for me. So it seems I'm
> > looking for something like:
> > @Override public Void visitCall(RexCall call) {
> >   if (call.op.equals(MY_FUNC)) {
> >     return pushVariable();
> >   }
> >   analyzeCall(call, Constancy.REDUCIBLE_CONSTANT);
> >   return null;
> > }
> >
> > Question 1: Is there a way to achieve this result (i.e. let it know to
> > ReduceExpressionsRule that a certain operator must not be reduced) via
> rule
> > configuration or in the UDF operator's definition?
> >
> > So far, I have not found a positive answer to this question, so my next
> > thought was "ok, I'll define my own MyFilterReduceExpressionsRule which
> > extends FilterReduceExpressionsRule and will adjust the few parts of the
> > code where I need some special treatment, i.e. ReducibleExprLocator".
> > Except that, in practice I cannot simply do that, I am forced to
> copy-paste
> > most of the original rule code (and then modify a few lines) because of
> the
> > following reasons:
> > - ReduceExpressionsRule subclasses (e.g. FilterReduceExpressionsRule), even
> > though they are protected, use some auxiliary methods that are static, so
> > those cannot be overridden (e.g. FilterReduceExpressionsRule#onMatch calls
> > the static reduceExpressions, which calls the static
> > reduceExpressionsInternal, which calls the static findReducibleExps, which
> > creates the ReducibleExprLocator).
> > - ReduceExpressionsRule uses some auxiliary static classes (e.g.
> > ReducibleExprLocator) which are protected (good) but have a package-private
> > constructor (bad), so in practice they cannot be extended (I cannot create
> > "MyReducibleExprLocator extends ReducibleExprLocator" to deal with my
> > special UDF).
> >
> > Question 2: if the answer to the first question is "no", should we
> improve
> > ReduceExpressionsRule to make it more easily adaptable (for cases like my
> > example)? Maybe converting the static methods into non-static; and
> > declaring the static classes' constructors protected (so that anything
> can
> > be overridden by downstream rule subclasses if required)? Or maybe we
> could
> > provide more (optional) configuration capabilities in
> > ReduceExpressionsRule.Config to achieve this?
> >
> > Best regards,
> > Ruben
> >
>


[jira] [Created] (CALCITE-5018) org.apache.calcite.sql.parser.SqlParseException: Lexical error at line 1, column 8. Encountered: "`" (96), after : ""

2022-02-21 Thread zhangquan (Jira)
zhangquan created CALCITE-5018:
--

 Summary: org.apache.calcite.sql.parser.SqlParseException: Lexical 
error at line 1, column 8.  Encountered: "`" (96), after : ""
 Key: CALCITE-5018
 URL: https://issues.apache.org/jira/browse/CALCITE-5018
 Project: Calcite
  Issue Type: Bug
  Components: core
Affects Versions: 1.29.0
Reporter: zhangquan


Dear all, I use SqlParser.parseQuery(String sql) with SQL like:
{code:java}
select `ID`,`NAME` from hr.emp where dept_id=1 {code}
 

Java code:

 
{code:java}
String sql = "select `ID`,`NAME` from hr.emp where dept_id=1";
SqlParser.Config config = SqlParser.config()
    .withQuoting(Quoting.BACK_TICK);
SqlParser parser = SqlParser.create(sql, config);
SqlNode sqlNode = parser.parseQuery();
{code}
 

Then this exception occurs:
{code}
Lexical error at line 1, column 11.  Encountered: "`" (96), after : ""
Exception in thread "main" org.apache.calcite.sql.parser.SqlParseException: 
Lexical error at line 1, column 21.  Encountered: "`" (96), after : ""
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.convertException(SqlParserImpl.java:389)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.normalizeException(SqlParserImpl.java:153)
    at 
org.apache.calcite.sql.parser.SqlParser.handleException(SqlParser.java:145)
    at org.apache.calcite.sql.parser.SqlParser.parseQuery(SqlParser.java:160)
    at org.apache.calcite.sql.parser.SqlParser.parseQuery(SqlParser.java:175)
    at 
com.github.quxiucheng.calcite.parser.tutorial.SqlParserConfigSample.quoting(.java:99)
    at 
com.github.quxiucheng.calcite.parser.tutorial.SqlParserConfigSample.main(.java:28)
Caused by: org.apache.calcite.sql.parser.impl.TokenMgrError: Lexical error at 
line 1, column 21.  Encountered: "`" (96), after : ""
    at 
org.apache.calcite.sql.parser.impl.SqlParserImplTokenManager.getNextToken(SqlParserImplTokenManager.java:25902)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.jj_scan_token(SqlParserImpl.java:36935)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.jj_3R_259(SqlParserImpl.java:35756)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.jj_3R_117(SqlParserImpl.java:35770)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.jj_3R_118(SqlParserImpl.java:35256)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.jj_3R_156(SqlParserImpl.java:34479)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.jj_3R_67(SqlParserImpl.java:31638)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.jj_3_23(SqlParserImpl.java:34219)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.jj_2_23(SqlParserImpl.java:30602)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.TableRef2(SqlParserImpl.java:9292)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.TableRef(SqlParserImpl.java:9268)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.FromClause(SqlParserImpl.java:9158)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.SqlSelect(SqlParserImpl.java:4419)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.LeafQuery(SqlParserImpl.java:631)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.LeafQueryOrExpr(SqlParserImpl.java:16109)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.QueryOrExpr(SqlParserImpl.java:15557)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.OrderedQueryOrExpr(SqlParserImpl.java:505)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.SqlStmt(SqlParserImpl.java:3790)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.SqlStmtEof(SqlParserImpl.java:3828)
    at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.parseSqlStmtEof(SqlParserImpl.java:201)
    at org.apache.calcite.sql.parser.SqlParser.parseQuery(SqlParser.java:158)
    ... 3 more
{code}
 





Support Left outer join in RelMdExpressionLineage

2022-02-21 Thread Chang Chen
Hi jesusca

I am investigating how to extend MaterializedViewRule to support left outer
join. The first issue I have met is supporting outer join in
RelMdExpressionLineage#getExpressionLineage().

I think the current implementation also fits the outer join case; to support
outer join we only need to remove the code that excludes used fields coming
from the null-supplying side.

Thanks
Chang.


Re: "Unable to implement EnumerableNestedLoopJoin in SELECT(a, b, ARRAY(c, d, ARRAY(e)))"

2022-02-21 Thread Stamatis Zampetakis
Hey Gavin,

I think you are bumping into a missing feature, most likely the one addressed
by [1].

The approach in [1] is rather good but I had some doubts about a few new
APIs that were introduced which made me a bit cautious about merging this
to master. I would definitely like to find some time to review this again.

Best,
Stamatis

[1] https://github.com/apache/calcite/pull/2116

On Sat, Feb 19, 2022 at 6:17 PM Gavin Ray  wrote:

> Digging into this more to try to better understand Calcite and hopefully
> make
> progress, it seems like the query breaks here:
>
>
> https://github.com/apache/calcite/blob/5b2de4ef5c9447bc9f7aff98dd049bd32af5c53d/core/src/main/java/org/apache/calcite/rel/rel2sql/SqlImplementor.java#L1450-L1454
>
> @Override protected Context getAliasContext(RexCorrelVariable variable)
> {
>   return requireNonNull(
>   correlTableMap.get(variable.id),
>   () -> "variable " + variable.id + " is not found");
> }
>
> Unfortunately, this feature (at least I think so?) is the barrier to
> me being able to make efficient cross-datasource queries that return the
> right
> data shape for GraphQL responses.
>
> My current duct-tape hack is to split the query into query-per-join which I
> assume defeats most of Calcite's optimization and planning abilities =(
>
> It's not much, but I'm also willing to offer $250 if anyone could help me
> fix
> this or figure out an alternative solution.
>
> On Mon, Feb 14, 2022 at 4:26 PM Gavin Ray  wrote:
>
> > Apologies for the slow reply Ruben, I appreciate your help.
> > The full stack trace (I was prototyping in sqlline) seems to be more
> > helpful:
> >
> > Here are what seem to be the most useful bits:
> > ==
> > java.sql.SQLException: Error while preparing plan
> > [EnumerableCorrelate(correlation=[$cor0], joinType=[inner],
> > requiredColumns=[{0}])
> >   JdbcToEnumerableConverter
> >
> > Caused by: java.lang.IllegalStateException: Unable to implement
> > EnumerableCorrelate
> > Suppressed: java.lang.NullPointerException: variable $cor0 is not
> found
> > at java.base/java.util.Objects.requireNonNull(Objects.java:334)
> > at
> >
> org.apache.calcite.rel.rel2sql.SqlImplementor$BaseContext.getAliasContext(SqlImplementor.java:1429)
> > at
> >
> org.apache.calcite.rel.rel2sql.SqlImplementor$Context.toSql(SqlImplementor.java:628)
> > 
> > at
> >
> org.apache.calcite.rel.rel2sql.RelToSqlConverter.visit(RelToSqlConverter.java:427)
> >
> > And here is the entire thing:
> > ==
> > java.sql.SQLException: Error while preparing plan
> > [EnumerableCorrelate(correlation=[$cor0], joinType=[inner],
> > requiredColumns=[{0}])
> >   JdbcToEnumerableConverter
> > JdbcTableScan(table=[[hsql, PUBLIC, houses]])
> >   EnumerableCollect(field=[EXPR$0])
> > EnumerableProject(id=[$0], name=[$1], todos=[$3])
> >   EnumerableCorrelate(correlation=[$cor1], joinType=[inner],
> > requiredColumns=[{0}])
> > JdbcToEnumerableConverter
> >   JdbcFilter(condition=[=($2, $cor0.id)])
> > JdbcTableScan(table=[[hsql, PUBLIC, users]])
> > EnumerableCollect(field=[EXPR$0])
> >   JdbcToEnumerableConverter
> > JdbcProject(id=[$0], description=[$2])
> >   JdbcFilter(condition=[=($1, $cor1.id)])
> > JdbcTableScan(table=[[hsql, PUBLIC, todos]])
> > ]
> > at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
> > at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
> > at
> >
> org.apache.calcite.jdbc.CalciteConnectionImpl.prepareStatement_(CalciteConnectionImpl.java:239)
> > at
> >
> org.apache.calcite.jdbc.CalciteConnectionImpl.access$100(CalciteConnectionImpl.java:101)
> > at
> >
> org.apache.calcite.jdbc.CalciteConnectionImpl$2.prepareStatement(CalciteConnectionImpl.java:188)
> > at CalciteSchemaManager.executeQuery(CalciteSchemaManager.kt:209)
> > at CalciteSchemaManager.executeQuery(CalciteSchemaManager.kt:213)
> > at ForeignKeyTest.throwawayTest(ForeignKeyTest.kt:265)
> >
> > Caused by: java.lang.IllegalStateException: Unable to implement
> > EnumerableCorrelate(correlation=[$cor0], joinType=[inner],
> > requiredColumns=[{0}]): rowcount = 22500.0, cumulative cost = {318610.0
> > rows, 562611.0 cpu, 0.0 io}, id = 315
> >   JdbcToEnumerableConverter: rowcount = 100.0, cumulative cost = {110.0
> > rows, 111.0 cpu, 0.0 io}, id = 293
> > JdbcTableScan(table=[[hsql, PUBLIC, houses]]): rowcount = 100.0,
> > cumulative cost = {100.0 rows, 101.0 cpu, 0.0 io}, id = 15
> >   EnumerableCollect(field=[EXPR$0]): rowcount = 225.0, cumulative cost =
> > {2959.0 rows, 5625.0 cpu, 0.0 io}, id = 313
> > EnumerableCalc(expr#0..3=[{inputs}], proj#0..1=[{exprs}],
> > todos=[$t3]): rowcount = 225.0, cumulative cost = {2734.0 rows, 5400.0
> cpu,
> > 0.0 io}, id = 317
> >   EnumerableCorrelate(correlation=[$cor1], joinType=[inner],
> > requiredColumns=[{0}]): rowcount =

Re: Why are nested aggregations illegal? Best alternatives?

2022-02-21 Thread Stamatis Zampetakis
Hi Gavin,

A few more comments in case they help to get you a bit further on your work.

The need to return the result as a single object is a common problem in
object-relational mapping (ORM) frameworks/APIs (JPA, DataNucleus,
Hibernate, etc.). Apart from the suggestions so far, maybe you could look
into these frameworks as well for more inspiration.

Moreover, your approach of decomposing the query into individual parts is
commonly known as the N+1 problem [1].

Lastly, keep in mind that you can introduce custom UDF/UDAF functions if
you need more flexibility when reconstructing the final result.
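
For example, a scalar UDF can be registered on the root schema roughly like
this (just a minimal sketch; MY_PAIR / MyFunctions are made-up names, and a
real result-shaping function would of course do more):

import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.schema.impl.ScalarFunctionImpl;

public class MyFunctions {
  // Example UDF body: combines two values into a single string.
  public String myPair(String key, String value) {
    return key + "=" + value;
  }

  public static void register(SchemaPlus rootSchema) {
    // ScalarFunctionImpl.create(Class, String) wraps a public method as a UDF.
    rootSchema.add("MY_PAIR",
        ScalarFunctionImpl.create(MyFunctions.class, "myPair"));
  }
}

Once registered, MY_PAIR(...) can be used in SQL like any built-in function.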

Best,
Stamatis

[1]
https://stackoverflow.com/questions/97197/what-is-the-n1-selects-problem-in-orm-object-relational-mapping

On Sun, Feb 13, 2022 at 3:59 AM Gavin Ray  wrote:

> Ah wait nevermind, got excited and spoke too soon. Looking at it more
> closely, that data isn't correct.
> At least it's in somewhat the right shape, ha!
>
> On Sat, Feb 12, 2022 at 9:57 PM Gavin Ray  wrote:
>
> > After ~5 hours, I think I may have made some progress =)
> >
> > I have this, which currently works. The problem is that the nested
> columns
> > don't have names on them.
> > Since I need to return a nested "Map", I have to figure
> > out how to convert this query into a form that gives column names.
> >
> > But this is still great progress I think!
> >
> > SELECT
> > "todos".*,
> > ARRAY(
> > SELECT
> > "users".*,
> > ARRAY(
> > SELECT
> > "todos".*
> > FROM
> > "todos"
> > ) AS "todos"
> > FROM
> > "users"
> > ) AS "users"
> > FROM
> > "todos"
> > WHERE
> > "user_id" IN (
> > SELECT
> > "user_id"
> > FROM
> > "users"
> > WHERE
> > "house_id" IN (
> > SELECT
> > "id"
> > FROM
> > "houses"
> > )
> > );
> >
> >
> >
> >
> ++-++--+
> > | id | user_id |  description   |
> > |
> >
> >
> ++-++--+
> > | 1  | 1   | Take out the trash | [{1, John, 1, [{1, 1, Take out
> > the trash}, {2, 1, Watch my favorite show}, { |
> > | 2  | 1   | Watch my favorite show | [{1, John, 1, [{1, 1, Take out
> > the trash}, {2, 1, Watch my favorite show}, { |
> > | 3  | 1   | Charge my phone| [{1, John, 1, [{1, 1, Take out
> > the trash}, {2, 1, Watch my favorite show}, { |
> > | 4  | 2   | Cook dinner| [{1, John, 1, [{1, 1, Take out
> > the trash}, {2, 1, Watch my favorite show}, { |
> > | 5  | 2   | Read a book| [{1, John, 1, [{1, 1, Take out
> > the trash}, {2, 1, Watch my favorite show}, { |
> > | 6  | 2   | Organize office| [{1, John, 1, [{1, 1, Take out
> > the trash}, {2, 1, Watch my favorite show}, { |
> > | 7  | 3   | Walk the dog   | [{1, John, 1, [{1, 1, Take out
> > the trash}, {2, 1, Watch my favorite show}, { |
> > | 8  | 3   | Feed the cat   | [{1, John, 1, [{1, 1, Take out
> > the trash}, {2, 1, Watch my favorite show}, { |
> >
> >
> ++-++--+
> >
> > On Sat, Feb 12, 2022 at 4:13 PM Gavin Ray  wrote:
> >
> >> Nevermind, this is a standard term not something Calcite-specific it
> >> seems!
> >>
> >> https://en.wikipedia.org/wiki/Correlated_subquery
> >>
> >> On Sat, Feb 12, 2022 at 3:46 PM Gavin Ray 
> wrote:
> >>
> >>> Forgive my ignorance/lack of experience
> >>>
> >>> I am somewhat familiar with the ARRAY() function, but not sure I know
> >>> the term "correlated"
> >>> Searching the Calcite codebase for uses of "correlated" + "query", I
> >>> found:
> >>>
> >>>
> >>>
> https://github.com/apache/calcite/blob/1d4f1b394bfdba03c5538017e12ab2431b137ca9/core/src/test/java/org/apache/calcite/test/SqlToRelConverterTest.java#L1603-L1612
> >>>
> >>>   @Test void testCorrelatedSubQueryInJoin() {
> >>> final String sql = "select *\n"
> >>> + "from emp as e\n"
> >>> + "join dept as d using (deptno)\n"
> >>> + "where d.name = (\n"
> >>> + "  select max(name)\n"
> >>> + "  from dept as d2\n"
> >>> + "  where d2.deptno = d.deptno)";
> >>> sql(sql).withExpand(false).ok();
> >>>   }
> >>>
> >>> But I also see this, which says it is "uncorrelated" but seems very
> >>> similar?
> >>>
> >>>   @Test void testInUncorrelatedSubQuery() {
> >>> final String sql = "select empno from emp where deptno in"
> >>> + " (select deptno from dept)";
> >>> sql(sql).ok();
> >>>   }
> >>>
> >>> I wouldn't blame you for not answering s

Re: "Unable to implement EnumerableNestedLoopJoin in SELECT(a, b, ARRAY(c, d, ARRAY(e)))"

2022-02-21 Thread Gavin Ray
Ahh wow, thank you Stamatis. It's great to know there is something that has
been done in the past.
I suppose I could try to build from this PR and see if it works.

And about the APIs, I assume you mean the "GenerateCorrelate" and
"CreateEnricher" methods the author introduced there.

On Mon, Feb 21, 2022 at 8:05 AM Stamatis Zampetakis 
wrote:

> Hey Gavin,
>
> I think you are bumping into a missing feature and most likely addressed by
> [1].
>
> The approach in [1] is rather good but I had some doubts about a few new
> APIs that were introduced which made me a bit cautious about merging this
> to master. I would definitely like to find some time to review this again.
>
> Best,
> Stamatis
>
> [1] https://github.com/apache/calcite/pull/2116
>
> On Sat, Feb 19, 2022 at 6:17 PM Gavin Ray  wrote:
>
> > Digging into this more to try to better understand Calcite and hopefully
> > make
> > progress,it seems like the query breaks here:
> >
> >
> >
> https://github.com/apache/calcite/blob/5b2de4ef5c9447bc9f7aff98dd049bd32af5c53d/core/src/main/java/org/apache/calcite/rel/rel2sql/SqlImplementor.java#L1450-L1454
> >
> > @Override protected Context getAliasContext(RexCorrelVariable
> variable)
> > {
> >   return requireNonNull(
> >   correlTableMap.get(variable.id),
> >   () -> "variable " + variable.id + " is not found");
> > }
> >
> > Unfortunately, this feature (at least I think so?) is the barrier to
> > me being able to make efficient cross-datasource queries that return the
> > right
> > data shape for GraphQL responses.
> >
> > My current duct-tape hack is to split the query into query-per-join
> which I
> > assume defeats most of Calcite's optimization and planning abilities =(
> >
> > It's not much, but I'm also willing to offer $250 if anyone could help me
> > fix
> > this or figure out an alternative solution.
> >
> > On Mon, Feb 14, 2022 at 4:26 PM Gavin Ray  wrote:
> >
> > > Apologies for the slow reply Ruben, I appreciate your help.
> > > The full stack trace (I was prototyping in sqlline) seems to be more
> > > helpful:
> > >
> > > Here is what seems to be the most useful bits:
> > > ==
> > > java.sql.SQLException: Error while preparing plan
> > > [EnumerableCorrelate(correlation=[$cor0], joinType=[inner],
> > > requiredColumns=[{0}])
> > >   JdbcToEnumerableConverter
> > >
> > > Caused by: java.lang.IllegalStateException: Unable to implement
> > > EnumerableCorrelate
> > > Suppressed: java.lang.NullPointerException: variable $cor0 is not
> > found
> > > at java.base/java.util.Objects.requireNonNull(Objects.java:334)
> > > at
> > >
> >
> org.apache.calcite.rel.rel2sql.SqlImplementor$BaseContext.getAliasContext(SqlImplementor.java:1429)
> > > at
> > >
> >
> org.apache.calcite.rel.rel2sql.SqlImplementor$Context.toSql(SqlImplementor.java:628)
> > > 
> > > at
> > >
> >
> org.apache.calcite.rel.rel2sql.RelToSqlConverter.visit(RelToSqlConverter.java:427)
> > >
> > > And here is the entire thing:
> > > ==
> > > java.sql.SQLException: Error while preparing plan
> > > [EnumerableCorrelate(correlation=[$cor0], joinType=[inner],
> > > requiredColumns=[{0}])
> > >   JdbcToEnumerableConverter
> > > JdbcTableScan(table=[[hsql, PUBLIC, houses]])
> > >   EnumerableCollect(field=[EXPR$0])
> > > EnumerableProject(id=[$0], name=[$1], todos=[$3])
> > >   EnumerableCorrelate(correlation=[$cor1], joinType=[inner],
> > > requiredColumns=[{0}])
> > > JdbcToEnumerableConverter
> > >   JdbcFilter(condition=[=($2, $cor0.id)])
> > > JdbcTableScan(table=[[hsql, PUBLIC, users]])
> > > EnumerableCollect(field=[EXPR$0])
> > >   JdbcToEnumerableConverter
> > > JdbcProject(id=[$0], description=[$2])
> > >   JdbcFilter(condition=[=($1, $cor1.id)])
> > > JdbcTableScan(table=[[hsql, PUBLIC, todos]])
> > > ]
> > > at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
> > > at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
> > > at
> > >
> >
> org.apache.calcite.jdbc.CalciteConnectionImpl.prepareStatement_(CalciteConnectionImpl.java:239)
> > > at
> > >
> >
> org.apache.calcite.jdbc.CalciteConnectionImpl.access$100(CalciteConnectionImpl.java:101)
> > > at
> > >
> >
> org.apache.calcite.jdbc.CalciteConnectionImpl$2.prepareStatement(CalciteConnectionImpl.java:188)
> > > at CalciteSchemaManager.executeQuery(CalciteSchemaManager.kt:209)
> > > at CalciteSchemaManager.executeQuery(CalciteSchemaManager.kt:213)
> > > at ForeignKeyTest.throwawayTest(ForeignKeyTest.kt:265)
> > >
> > > Caused by: java.lang.IllegalStateException: Unable to implement
> > > EnumerableCorrelate(correlation=[$cor0], joinType=[inner],
> > > requiredColumns=[{0}]): rowcount = 22500.0, cumulative cost = {318610.0
> > > rows, 562611.0 cpu, 0.0 io}, id = 315
> > >   JdbcToEnumerableConverter: rowcount = 100.0, cumulative cost = {110.0
> >

Re: Why are nested aggregations illegal? Best alternatives?

2022-02-21 Thread Gavin Ray
I hadn't thought about the fact that ORMs probably have to solve this
problem as well.
That is a great suggestion; I will try to investigate some of the popular
ORM codebases and see if there are any tricks they are using.

I seem to be getting a tiny bit closer by using subqueries like Julian
suggested instead of operator calls.
But if I may ask what is probably a very stupid question:

What might the error message

"parse failed: Query expression encountered in illegal context
(state=,code=0)"

mean in the query below?

The reason I am confused is that the query runs if I remove the innermost
subquery ("todos"),
but the innermost subquery is a direct copy-paste of the subquery above it,
so I know it MUST be valid.

As usual, thank you so much for your help/guidance, Stamatis.

select
  "g0"."id" "id",
  "g0"."address" "address",
  (
select json_arrayagg(json_object(
  key 'id' value "g1"."id",
  key 'todos' value (
select json_arrayagg(json_object(
  key 'id' value "g2"."id",
  key 'description' value "g2"."description",
))
from (
  select * from "todos"
  where "g1"."id" = "user_id"
  order by "id"
) "g2"
  )
))
from (
  select * from "users"
  where "g0"."id" = "house_id"
  order by "id"
) "g1"
  ) "users"
from "houses" "g0"
order by "g0"."id"

On Mon, Feb 21, 2022 at 8:07 AM Stamatis Zampetakis 
wrote:

> Hi Gavin,
>
> A few more comments in case they help to get you a bit further on your
> work.
>
> The need to return the result as a single object is a common problem in
> object relational mapping (ORM) frameworks/APIS (JPA, Datanucleus,
> Hibernate, etc.). Apart from the suggestions so far maybe you could look
> into these frameworks as well for more inspiration.
>
> Moreover your approach of decomposing the query into individual parts is
> commonly known as the N+1 problem [1].
>
> Lastly, keep in mind that you can introduce custom UDF, UDAF functions if
> you need more flexibility on reconstructing the final result.
>
> Best,
> Stamatis
>
> [1]
>
> https://stackoverflow.com/questions/97197/what-is-the-n1-selects-problem-in-orm-object-relational-mapping
>
> On Sun, Feb 13, 2022 at 3:59 AM Gavin Ray  wrote:
>
> > Ah wait nevermind, got excited and spoke too soon. Looking at it more
> > closely, that data isn't correct.
> > At least it's in somewhat the right shape, ha!
> >
> > On Sat, Feb 12, 2022 at 9:57 PM Gavin Ray  wrote:
> >
> > > After ~5 hours, I think I may have made some progress =)
> > >
> > > I have this, which currently works. The problem is that the nested
> > columns
> > > don't have names on them.
> > > Since I need to return a nested "Map", I have to figure
> > > out how to convert this query into a form that gives column names.
> > >
> > > But this is still great progress I think!
> > >
> > > SELECT
> > > "todos".*,
> > > ARRAY(
> > > SELECT
> > > "users".*,
> > > ARRAY(
> > > SELECT
> > > "todos".*
> > > FROM
> > > "todos"
> > > ) AS "todos"
> > > FROM
> > > "users"
> > > ) AS "users"
> > > FROM
> > > "todos"
> > > WHERE
> > > "user_id" IN (
> > > SELECT
> > > "user_id"
> > > FROM
> > > "users"
> > > WHERE
> > > "house_id" IN (
> > > SELECT
> > > "id"
> > > FROM
> > > "houses"
> > > )
> > > );
> > >
> > >
> > >
> > >
> >
> ++-++--+
> > > | id | user_id |  description   |
> > > |
> > >
> > >
> >
> ++-++--+
> > > | 1  | 1   | Take out the trash | [{1, John, 1, [{1, 1, Take
> out
> > > the trash}, {2, 1, Watch my favorite show}, { |
> > > | 2  | 1   | Watch my favorite show | [{1, John, 1, [{1, 1, Take
> out
> > > the trash}, {2, 1, Watch my favorite show}, { |
> > > | 3  | 1   | Charge my phone| [{1, John, 1, [{1, 1, Take
> out
> > > the trash}, {2, 1, Watch my favorite show}, { |
> > > | 4  | 2   | Cook dinner| [{1, John, 1, [{1, 1, Take
> out
> > > the trash}, {2, 1, Watch my favorite show}, { |
> > > | 5  | 2   | Read a book| [{1, John, 1, [{1, 1, Take
> out
> > > the trash}, {2, 1, Watch my favorite show}, { |
> > > | 6  | 2   | Organize office| [{1, John, 1, [{1, 1, Take
> out
> > > the trash}, {2, 1, Watch my favorite show}, { |
> > > | 7  | 3   | Walk the dog   | [{1, John, 1, [{1, 1, Take
> out
> > > the trash}, {2, 1, Watch my favorite show}, { |
> > > | 8  | 3   | Feed the cat   | [{1, John, 1, [{

RE: DDL support issues in Calcite

2022-02-21 Thread 徐仁和
Hi Wang,
The test cases in `org.apache.calcite.test.ServerTest` may be what you want.

On 2022/02/21 08:17:55 "wang...@mchz.com.cn" wrote:
> Dear Calcite community:
>
> I am new Calcite user here. I noted in the document
https://calcite.apache.org/docs/adapter.html that Calcite does support DDL
operations,the only thing I need to do is 1) to include calcite-server.jar
in my classpath and 2) add
parserFactory=org.apache.calcite.sql.parser.ddl.SqlDdlParserImpl#FACTORY to
JDBC connect string.
> Now the problem is: I did so, but still I got the following error:
>
>
> java.sql.SQLException: Error while executing SQL "CREATE TABLE t (i
INTEGER, j VARCHAR(10))": DDL not supported: CREATE TABLE `T` (`I` INTEGER,
`J` VARCHAR(10)) at
org.apache.calcite.avatica.Helper.createException(Helper.java:56) at
org.apache.calcite.avatica.Helper.createException(Helper.java:41) at
org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:163)
at
org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:227)
at com..CalciteResolverTest.testJdbc(CalciteResolverTest.java:259) at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method) at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568) at
junit.framework.TestCase.runTest(TestCase.java:177) at
junit.framework.TestCase.runBare(TestCase.java:142) at
junit.framework.TestResult$1.protect(TestResult.java:122) at
junit.framework.TestResult.runProtected(TestResult.java:142) at
junit.framework.TestResult.run(TestResult.java:125) at
junit.framework.TestCase.run(TestCase.java:130) at
junit.framework.TestSuite.runTest(TestSuite.java:241) at
junit.framework.TestSuite.run(TestSuite.java:236) at
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:90)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137) at
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
at
com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
at
com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:221)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54) Caused by:
java.lang.UnsupportedOperationException: DDL not supported: CREATE TABLE
`T` (`I` INTEGER, `J` VARCHAR(10)) at
org.apache.calcite.server.DdlExecutor.lambda$static$0(DdlExecutor.java:28)
at
org.apache.calcite.prepare.CalcitePrepareImpl.executeDdl(CalcitePrepareImpl.java:369)
at
org.apache.calcite.prepare.CalcitePrepareImpl.prepare2_(CalcitePrepareImpl.java:634)
at
org.apache.calcite.prepare.CalcitePrepareImpl.prepare_(CalcitePrepareImpl.java:513)
at
org.apache.calcite.prepare.CalcitePrepareImpl.prepareSql(CalcitePrepareImpl.java:483)
at
org.apache.calcite.jdbc.CalciteConnectionImpl.parseQuery(CalciteConnectionImpl.java:249)
at
org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:623)
at
org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:674)
at
org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
... 20 more
>
> did I miss any other properties that needs to be set?
>
>
>
>
> Many thanks
> Sai Wang
>


Re: dynamic reflective schema

2022-02-21 Thread xiaobo
I have made a custom JSON schema adapter on GitHub:
https://github.com/guxiaobo/calcite-json-adapter
I am still debugging it; a simple example fails when getting the data types of
the tables:

Map<String, List<JSONObject>> map = new HashMap<String, List<JSONObject>>();
List<JSONObject> t1 = new ArrayList<JSONObject>();
JSONObject r1 = new JSONObject();
t1.add(r1);
map.put("t1", t1);

r1.put("c1", Long.valueOf(100));
r1.put("c2", "column2");
r1.put("c3", Boolean.FALSE);
r1.put("c4", new BigDecimal("2.1"));
r1.put("c5", new java.sql.Date(2022, 2, 22));
r1.put("c6", new java.sql.Time(System.currentTimeMillis()));
r1.put("c7", new java.sql.Timestamp(System.currentTimeMillis()));

String sql1 = "select count(*) from t1";
String sql2 = "select count(*) from js.t1";
Schema schema = new JsonSchema(map);

try {
    CalciteDatabase db = new CalciteDatabase(schema, "js");
    Long l1 = db.exeGetLong(sql1);
    Long l2 = db.exeGetLong(sql2);

    System.out.println("sql result1 " + l1);
    System.out.println("sql result2 " + l2);
} catch (SQLException | ValidationException | SqlParseException
        | RelConversionException e) {
    e.printStackTrace();
}




java.sql.SQLException: exception while executing query: null
at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
at 
org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:576)
at 
org.apache.calcite.avatica.AvaticaPreparedStatement.executeQuery(AvaticaPreparedStatement.java:137)
at 
com.xsmartware.common.calcite.CalciteDatabase.executeQuery(CalciteDatabase.java:78)
at 
com.xsmartware.common.calcite.CalciteDatabase.executeQuery(CalciteDatabase.java:72)
at 
com.xsmartware.common.calcite.CalciteDatabase.exeGetLong(CalciteDatabase.java:95)
at 
com.xsmartware.javatest.calcite.CalCiteTest.test9(CalCiteTest.java:174)
at com.xsmartware.javatest.calcite.CalCiteTest.run(CalCiteTest.java:114)
at 
org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:758)
at 
org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:748)
at 
org.springframework.boot.SpringApplication.run(SpringApplication.java:309)
at 
org.springframework.boot.SpringApplication.run(SpringApplication.java:1301)
at 
org.springframework.boot.SpringApplication.run(SpringApplication.java:1290)
at 
com.xsmartware.javatest.JavaTestApplication.main(JavaTestApplication.java:9)
Caused by: java.lang.NullPointerException
at 
org.apache.calcite.adapter.json.JsonEnumerator.<init>(JsonEnumerator.java:49)
at 
org.apache.calcite.adapter.json.JsonScannableTable$1.enumerator(JsonScannableTable.java:48)
at 
org.apache.calcite.linq4j.EnumerableDefaults.aggregate(EnumerableDefaults.java:130)
at 
org.apache.calcite.linq4j.DefaultEnumerable.aggregate(DefaultEnumerable.java:107)
at Baz.bind(Unknown Source)
at 
org.apache.calcite.jdbc.CalcitePrepare$CalciteSignature.enumerable(CalcitePrepare.java:363)
at 
org.apache.calcite.jdbc.CalciteConnectionImpl.enumerable(CalciteConnectionImpl.java:338)
at 
org.apache.calcite.jdbc.CalciteMetaImpl._createIterable(CalciteMetaImpl.java:578)
at 
org.apache.calcite.jdbc.CalciteMetaImpl.createIterable(CalciteMetaImpl.java:569)
at 
org.apache.calcite.avatica.AvaticaResultSet.execute(AvaticaResultSet.java:184)
at 
org.apache.calcite.jdbc.CalciteResultSet.execute(CalciteResultSet.java:64)
at 
org.apache.calcite.jdbc.CalciteResultSet.execute(CalciteResultSet.java:43)
at 
org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:572)
... 12 more







-- Original --
From:  "Gavin Ray";;
Send time: Monday, Feb 21, 2022 1:51 AM
To: "dev"; 

Subject:  Re: dynamic reflective schema



Ah, you don't want to use ReflectiveSchema; it's a simple schema type meant
for easily making test schemas.

You want to extend from "AbstractSchema" and override the function
"Map<String, Table> getTableMap()".
For the "Table" type, you probably want to use "JsonScannableTable".

The CsvSchema example does exactly this, if you want to see an example
implementation:
https://github.com/apache/calcite/blob/4bc916619fd286b2c0cc4d5c653c96a68801d74e/example/csv/src/main/java/org/apache/calcite/adapter/csv/CsvSchema.java#L69-L106
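
Roughly, the skeleton looks like this (a minimal sketch with a made-up
single-column table, just to show the getTableMap/getRowType/scan pieces; it is
not the CsvSchema code itself):

import org.apache.calcite.DataContext;
import org.apache.calcite.linq4j.Enumerable;
import org.apache.calcite.linq4j.Linq4j;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.schema.ScannableTable;
import org.apache.calcite.schema.Table;
import org.apache.calcite.schema.impl.AbstractSchema;
import org.apache.calcite.schema.impl.AbstractTable;
import org.apache.calcite.sql.type.SqlTypeName;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class InMemorySchema extends AbstractSchema {
  private final Map<String, List<Object[]>> data;

  public InMemorySchema(Map<String, List<Object[]>> data) {
    this.data = data;
  }

  @Override protected Map<String, Table> getTableMap() {
    // One Table instance per named list of rows.
    Map<String, Table> tables = new HashMap<>();
    data.forEach((name, rows) -> tables.put(name, new InMemoryTable(rows)));
    return tables;
  }

  /** Hard-coded single-column (c1 VARCHAR) table backed by a list of rows. */
  private static class InMemoryTable extends AbstractTable implements ScannableTable {
    private final List<Object[]> rows;

    InMemoryTable(List<Object[]> rows) {
      this.rows = rows;
    }

    @Override public RelDataType getRowType(RelDataTypeFactory typeFactory) {
      return typeFactory.builder()
          .add("c1", SqlTypeName.VARCHAR)
          .build();
    }

    @Override public Enumerable<Object[]> scan(DataContext root) {
      return Linq4j.asEnumerable(rows);
    }
  }
}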

Hope this helps =)



On Sat, Feb 19, 2022 at 11:03 PM xiaobo  wrote:

> Hi,
> When using  reflectiveSchema we must defin

Re: dynamic reflective schema

2022-02-21 Thread xiaobo
It seems to be because the public RelDataType getRowType(RelDataTypeFactory
typeFactory) method of our JsonTable class did not get a chance to be called.




-- Original --
From:  "xiaobo ";;
Send time: Tuesday, Feb 22, 2022 2:57 PM
To: "dev"; 

Subject:  Re:  dynamic reflective schema



I have made a customed JSONSchema on github : 
https://github.com/guxiaobo/calcite-json-adapter,
I am still debug it, a simple example failed regarding getting the datatype of 
tables:

Map<String, List<JSONObject>> map = new HashMap<String, List<JSONObject>>();
List<JSONObject> t1 = new ArrayList<JSONObject>();
JSONObject r1 = new JSONObject();
t1.add(r1);
map.put("t1", t1);

r1.put("c1", Long.valueOf(100));
r1.put("c2", "column2");
r1.put("c3", Boolean.FALSE);
r1.put("c4", new BigDecimal("2.1"));
r1.put("c5", new java.sql.Date(2022, 2, 22));
r1.put("c6", new java.sql.Time(System.currentTimeMillis()));
r1.put("c7", new 
java.sql.Timestamp(System.currentTimeMillis()));

String sql1 = "select count(*) from t1";
String sql2 = "select count(*) from js.t1";
Schema schema = new JsonSchema(map);

try {
CalciteDatabase db = new CalciteDatabase(schema, "js");
Long l1 = db.exeGetLong(sql1);
Long l2 = db.exeGetLong(sql2);

System.out.println("sql result1 " + l1);
System.out.println("sql result2 " + l2);

} catch (SQLException | ValidationException | SqlParseException 
| RelConversionException e) {
e.printStackTrace();
}




java.sql.SQLException: exception while executing query: null
at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
at 
org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:576)
at 
org.apache.calcite.avatica.AvaticaPreparedStatement.executeQuery(AvaticaPreparedStatement.java:137)
at 
com.xsmartware.common.calcite.CalciteDatabase.executeQuery(CalciteDatabase.java:78)
at 
com.xsmartware.common.calcite.CalciteDatabase.executeQuery(CalciteDatabase.java:72)
at 
com.xsmartware.common.calcite.CalciteDatabase.exeGetLong(CalciteDatabase.java:95)
at 
com.xsmartware.javatest.calcite.CalCiteTest.test9(CalCiteTest.java:174)
at com.xsmartware.javatest.calcite.CalCiteTest.run(CalCiteTest.java:114)
at 
org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:758)
at 
org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:748)
at 
org.springframework.boot.SpringApplication.run(SpringApplication.java:309)
at 
org.springframework.boot.SpringApplication.run(SpringApplication.java:1301)
at 
org.springframework.boot.SpringApplication.run(SpringApplication.java:1290)
at 
com.xsmartware.javatest.JavaTestApplication.main(JavaTestApplication.java:9)
Caused by: java.lang.NullPointerException
at 
org.apache.calcite.adapter.json.JsonEnumerator.<init>(JsonEnumerator.java:49)
at 
org.apache.calcite.adapter.json.JsonScannableTable$1.enumerator(JsonScannableTable.java:48)
at 
org.apache.calcite.linq4j.EnumerableDefaults.aggregate(EnumerableDefaults.java:130)
at 
org.apache.calcite.linq4j.DefaultEnumerable.aggregate(DefaultEnumerable.java:107)
at Baz.bind(Unknown Source)
at 
org.apache.calcite.jdbc.CalcitePrepare$CalciteSignature.enumerable(CalcitePrepare.java:363)
at 
org.apache.calcite.jdbc.CalciteConnectionImpl.enumerable(CalciteConnectionImpl.java:338)
at 
org.apache.calcite.jdbc.CalciteMetaImpl._createIterable(CalciteMetaImpl.java:578)
at 
org.apache.calcite.jdbc.CalciteMetaImpl.createIterable(CalciteMetaImpl.java:569)
at 
org.apache.calcite.avatica.AvaticaResultSet.execute(AvaticaResultSet.java:184)
at 
org.apache.calcite.jdbc.CalciteResultSet.execute(CalciteResultSet.java:64)
at 
org.apache.calcite.jdbc.CalciteResultSet.execute(CalciteResultSet.java:43)
at 
org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:572)
... 12 more







-- Original --
From:  "Gavin Ray";;
Send time: Monday, Feb 21, 2022 1:51 AM
To: "dev"; 

Subject:  Re: dynamic reflective schema



Ah, you don't want to use ReflectiveSchema, it's a simple schema type meant
for easily making test schemas

You want to extend from "AbstractSchema" and override the function
"Map getTableMap()"
For the "Table" type, you probably want to use "JsonScannableTable"

The CsvSchema example does exactly this, if you wan