Sure
On Mon, Apr 1, 2024 at 9:28 AM yuxia wrote:
> Thanks for reporting. Could you please help create a jira about it?
>
> Best regards,
> Yuxia
>
> ----- Original Message -----
> From: "Xiaolong Wang"
> To: "dev"
> Sent: Thursday, March 28, 2024, 5:11:20 PM
Thanks for reporting. Could you please help create a jira about it?
Best regards,
Yuxia
----- Original Message -----
From: "Xiaolong Wang"
To: "dev"
Sent: Thursday, March 28, 2024, 5:11:20 PM
主题: Re: Bug report for reading Hive table as streaming source.
I think it is worth mentioning in the documentation of the Hive read that it
cannot read a table that has more than 32,767 partitions.
On Thu, Mar 28, 2024 at 5:10 PM Xiaolong Wang wrote:

> Found out the reason:
>
> It turned out that in Flink, it uses Hive's IMetaStoreClient to fetch
> partitions using the following method: [...]
Found out the reason:
It turned out that in Flink, it uses Hive's IMetaStoreClient to fetch
partitions using the following method:

    List<String> listPartitionNames(String db_name, String tbl_name,
        short max_parts) throws MetaException, TException;

where the max_parts represents the max number of partitions returned. Since
max_parts is a Java short, the listing is capped at Short.MAX_VALUE (32,767).
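To make the cap concrete, here is a tiny self-contained Java sketch of the effect (the class and method names are mine for illustration, not Flink or Hive code):

```java
// Hypothetical illustration of why a short-typed max_parts argument
// caps a partition listing at 32,767 entries.
public class MaxPartsDemo {

    // Clamp a partition count into the range a Java short can express,
    // mirroring what happens when the limit is passed as `short max_parts`.
    static short clamp(int partitionCount) {
        return (short) Math.min(partitionCount, Short.MAX_VALUE);
    }

    public static void main(String[] args) {
        // A table with 50,000 partitions: a short-typed limit can
        // never request more than 32,767 of them.
        System.out.println(clamp(50_000));  // prints 32767
    }
}
```

Any table whose partition count exceeds Short.MAX_VALUE will therefore have partitions silently left out of the listing.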
Hi,
I found a weird bug when reading a Hive table as a streaming source.
In summary, if the first partition column is not time-related, then the Hive
table cannot be read as a streaming source.
For example, I have a Hive table with the following definition:
```
CREATE TABLE article (
    id BIGINT,
    edition STRING,
    dt STRING