Thanks a lot @yuxia <luoyu...@alumni.sjtu.edu.cn> for your time. We will
try to backport https://issues.apache.org/jira/browse/FLINK-27450 and see
if that works. We will also keep watching for your fix for the default
dialect issue.

Regards
Ram

On Mon, Jul 17, 2023 at 8:08 AM yuxia <luoyu...@alumni.sjtu.edu.cn> wrote:

> Hi, Ram.
> Thanks for reaching out.
> 1:
> About the Hive dialect issue: maybe you're using JDK 11?
> There's a known issue, FLINK-27450[1]. The root cause is that Hive
> doesn't fully support JDK 11; the specific failure in your case is
> tracked in HIVE-21584[2].
> Flink has upgraded its Hive 2.x dependency to 2.3.9 to include that
> patch, but unfortunately, IIRC, the patch is still not available in Hive 3.x.
>
> 2:
> About the table-creation issue, thanks for reporting it. I tried it, and
> it turns out to be a bug. I have created FLINK-32596[3] to track it.
> It only happens with the combination of the Flink dialect, a partitioned
> table, and the Hive catalog.
> In most cases we recommend that users create Hive tables with the Hive
> dialect, so we were missing a test that covers creating a partitioned
> table with the Flink dialect in a Hive catalog, and this bug stayed
> hidden for a while.
> For your case, as a workaround, I think you can create the table in Hive
> itself with the following SQL:
> CREATE TABLE testsource (
>   `geo_altitude` FLOAT
> )
> PARTITIONED BY (`date` STRING)
> TBLPROPERTIES (
>   'sink.partition-commit.delay' = '1 s',
>   'sink.partition-commit.policy.kind' = 'metastore,success-file');
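>
> Once the table exists in Hive, you should be able to switch back to the
> default dialect and write to it from Flink. A rough sketch (the catalog
> name 'myhive' and the partition value are placeholders, not from your
> setup):
>
> USE CATALOG myhive;
> SET table.sql-dialect = default;
> INSERT INTO testsource PARTITION (`date` = '2023-07-14')
>   SELECT CAST(123.4 AS FLOAT);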
>
>
> [1] https://issues.apache.org/jira/browse/FLINK-27450
> [2] https://issues.apache.org/jira/browse/HIVE-21584
> [3] https://issues.apache.org/jira/browse/FLINK-32596
>
>
> Best regards,
> Yuxia
>
> ------------------------------
> *From: *"ramkrishna vasudevan" <ramvasu.fl...@gmail.com>
> *To: *"User" <user@flink.apache.org>, "dev" <d...@flink.apache.org>
> *Sent: *Friday, July 14, 2023, 8:46:20 PM
> *Subject: *Issue with flink 1.16 and hive dialect
>
> Hi All,
> I am not sure if this has already been discussed in this forum.
> In our setup with Flink 1.16.0, we have ensured that everything
> necessary for the Hive catalog to work is in place.
>
> The Flink dialect works fine functionally (with some issues; I will come
> to those later).
>
> But when I follow the steps in
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/hive-compatibility/hive-dialect/queries/overview/#examples
> I get an exception as soon as I switch to the Hive dialect:
>         at
> org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
> [flink-sql-client-1.16.0-0.0-SNAPSHOT.jar:1.16.0-0.0-SNAPSHOT]
> Caused by: java.lang.ClassCastException: class
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader
> and java.net.URLClassLoader are in module java.base of loader 'bootstrap')
>         at
> org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:413)
> ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0-0.0-SNAPSHOT.jar:1.16.0-0.0-SNAPSHOT]
>         at
> org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:389)
> ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0-0.0-SNAPSHOT.jar:1.16.0-0.0-SNAPSHOT]
>         at
> org.apache.flink.table.planner.delegation.hive.HiveSessionState.<init>(HiveSessionState.java:80)
> ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0-0.0-SNAPSHOT.jar:1.16.0-0.0-SNAPSHOT]
>         at
> org.apache.flink.table.planner.delegation.hive.HiveSessionState.startSessionState(HiveSessionState.java:128)
> ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0-0.0-SNAPSHOT.jar:1.16.0-0.0-SNAPSHOT]
>         at
> org.apache.flink.table.planner.delegation.hive.HiveParser.parse(HiveParser.java:210)
> ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0-0.0-SNAPSHOT.jar:1.16.0-0.0-SNAPSHOT]
>         at
> org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:172)
> ~[flink-sql-client-1.16.0-0.0-SNAPSHOT.jar:1.16.0-0.0-SNAPSHOT]
>
> I have ensured that all the dialect-related steps are followed, including
> https://issues.apache.org/jira/browse/FLINK-25128
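>
> For reference, this is roughly what I run in the SQL client before the
> failure (the catalog name and hive-conf-dir below are placeholders for
> my environment):
>
> CREATE CATALOG myhive WITH (
>   'type' = 'hive',
>   'hive-conf-dir' = '/path/to/hive/conf'
> );
> USE CATALOG myhive;
> SET table.sql-dialect = hive;
> -- parsing any statement after this fails with the exception above
> SELECT 1;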
>
> In the Flink catalog, if we create a table:
>
> > CREATE TABLE testsource (
> >   `date` STRING,
> >   `geo_altitude` FLOAT
> > )
> > PARTITIONED BY (`date`)
> > WITH (
> >   'connector' = 'hive',
> >   'sink.partition-commit.delay' = '1 s',
> >   'sink.partition-commit.policy.kind' = 'metastore,success-file'
> > );
>
> The partition always gets created on the last set of columns and not on
> the columns that we specify. Is this a known bug?
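>
> For example, after creating the table above, checking from the Hive side
> (a sketch; exact output depends on the Hive version):
>
> DESCRIBE FORMATTED testsource;
> -- in our case the "# Partition Information" section lists
> -- `geo_altitude` (the last column) instead of `date`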
>
> Regards
> Ram
>
>
