This is a missing piece in Flink. Flink currently does not provide Spark SQL-like
integration with Hive. However, you can write data in ORC/Avro format, which Hive
can read later.
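For reference, a minimal sketch of producing Avro files from the DataSet API that Hive can later read. It assumes the flink-avro dependency, an Avro-generated User record class, and a hypothetical output path (the AvroOutputFormat package has moved between releases, so check your version):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.formats.avro.AvroOutputFormat;

public class WriteAvroForHive {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // User is assumed to be an Avro-generated record class.
        DataSet<User> users = env.fromElements(
                new User("alice", 30), new User("bob", 25));

        // Write Avro files into a directory that a Hive external table
        // (STORED AS AVRO) can point at; the path is hypothetical.
        users.write(new AvroOutputFormat<>(User.class), "hdfs:///warehouse/users_avro");

        env.execute("write avro for hive");
    }
}

On the Hive side you would then declare an external table over that directory (e.g. STORED AS AVRO) so Hive can query the files Flink produced.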
Thanks,
Will
> On Aug 13, 2018, at 7:34 PM, Renjie Liu wrote:
>
> Hi, yuvraj:
>
> Do you mean querying Hive with SQL? Or anything else?
Hi, yuvraj:
Do you mean querying Hive with SQL? Or anything else?
On Tue, Aug 14, 2018 at 3:52 AM yuvraj singh <19yuvrajsing...@gmail.com>
wrote:
> I want to know if Flink has support for Hive.
>
> Thanks
> Yubraj Singh
>
--
Liu, Renjie
Software Engineer, MVAD
Hi,
no, combining batch and streaming environments is not possible at the
moment. However, most batch operations can be done in a streaming
fashion as well. I would recommend using the DataStream API, as it
provides the most flexibility for your use case.
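To illustrate the suggestion, a minimal sketch of a batch-style aggregation expressed on the DataStream API; the finite example input and the job name are made up:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BatchStyleOnDataStream {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // A finite source works fine here; the streaming job simply
        // finishes once the input is exhausted, much like a batch job.
        DataStream<Tuple2<String, Integer>> input = env.fromElements(
                Tuple2.of("a", 1), Tuple2.of("b", 2), Tuple2.of("a", 3));

        input.keyBy(0)  // group by the first tuple field
             .sum(1)    // rolling sum over the second field
             .print();

        env.execute("batch-style aggregation on DataStream");
    }
}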
Regards,
Timo
Hi Timo,
Thanks for your reply. I do notice that the documentation says "A Table is always
bound to a specific TableEnvironment. It is not possible to combine tables of
different TableEnvironments in the same query, e.g., to join or union them."
Does that mean there is no way I can make operations across tables from different
TableEnvironments, e.g., join a batch table with a streaming table?
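As a small sketch of the constraint described in that passage (assuming the 1.4-era Java Table API; the table names are made up and would have to be registered first):

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.BatchTableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class TwoEnvironments {
    public static void main(String[] args) {
        ExecutionEnvironment batchEnv = ExecutionEnvironment.getExecutionEnvironment();
        StreamExecutionEnvironment streamEnv = StreamExecutionEnvironment.getExecutionEnvironment();

        BatchTableEnvironment batchTableEnv = TableEnvironment.getTableEnvironment(batchEnv);
        StreamTableEnvironment streamTableEnv = TableEnvironment.getTableEnvironment(streamEnv);

        Table batchOrders = batchTableEnv.scan("Orders");    // hypothetical batch table
        Table streamClicks = streamTableEnv.scan("Clicks");  // hypothetical streaming table

        // Not supported: each Table is bound to the TableEnvironment that
        // created it, so a join/union across the two environments is rejected.
        // batchOrders.join(streamClicks, "...");
    }
}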
Hi Wangsan,
yes, the Hive integration is limited so far. However, we provide an
external catalog feature [0] that allows you to implement custom logic
to retrieve Hive tables. I think it is not possible to do all your
operations in Flink's SQL API right now. For now, I think you need to
combin
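For the external catalog feature mentioned above [0], a rough skeleton might look like the following. It assumes the ExternalCatalog interface of the 1.4-era flink-table module (the catalog API has changed across releases), and the actual Hive metastore lookup is left as a TODO:

import java.util.Collections;
import java.util.List;

import org.apache.flink.table.catalog.ExternalCatalog;
import org.apache.flink.table.catalog.ExternalCatalogTable;

// Hypothetical Hive-backed catalog; only the shape of the interface is shown.
public class HiveExternalCatalog implements ExternalCatalog {

    @Override
    public ExternalCatalogTable getTable(String tableName) {
        // TODO: look the table up in the Hive metastore and translate its
        // schema and storage location into an ExternalCatalogTable.
        throw new UnsupportedOperationException("not implemented");
    }

    @Override
    public List<String> listTables() {
        // TODO: list table names from the Hive metastore.
        return Collections.emptyList();
    }

    @Override
    public ExternalCatalog getSubCatalog(String dbName) {
        throw new UnsupportedOperationException("no sub-catalogs");
    }

    @Override
    public List<String> listSubCatalogs() {
        return Collections.emptyList();
    }
}

Such a catalog would then be registered via TableEnvironment.registerExternalCatalog("hive", new HiveExternalCatalog()) so that queries can reference its tables through the catalog name.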