> Hope it helps.
>
> Best, Hequn
>
> [1] https://github.com/FlinkML/flink-jpmml
> [2]
> https://github.com/FlinkML/flink-jpmml/tree/master/flink-jpmml-examples/src/main/scala/io/radicalbit/examples
>
> On Fri, Aug 17, 2018 at 10:09 AM, sagar loke wrote:
>
>> Hi,
>>
Hi,
We are planning to use Flink to run JPMML models over a DataStream, using
the flink-jpmml library from Radicalbit.
Is there any code example we can refer to, to kick-start the process?
Thanks,
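For a self-contained starting point, here is a minimal sketch of the usual scoring pattern: load the model once per parallel task in open() and score per record. PmmlScorer and loadModel are hypothetical placeholders for whatever PMML evaluator you wire in; this is not flink-jpmml's own API.

import java.util.Map;

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

// Hypothetical wrapper around a PMML evaluator; not part of flink-jpmml's API.
interface PmmlScorer {
    double score(Map<String, Object> features);
}

public class PmmlScoringFunction extends RichMapFunction<Map<String, Object>, Double> {

    private final String modelPath;
    private transient PmmlScorer scorer;

    public PmmlScoringFunction(String modelPath) {
        this.modelPath = modelPath;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        // Load and parse the PMML file once per parallel task, not per record.
        scorer = loadModel(modelPath);
    }

    @Override
    public Double map(Map<String, Object> features) throws Exception {
        return scorer.score(features);
    }

    // Hypothetical: build an evaluator (e.g., JPMML's) from the file here.
    private static PmmlScorer loadModel(String path) throws Exception {
        throw new UnsupportedOperationException("plug in a PMML evaluator");
    }
}

Applying it is then just stream.map(new PmmlScoringFunction("/path/to/model.pmml")); the flink-jpmml examples in [2] show the library's own, more integrated API.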
Hi,
We have a use case where we are consuming from more than 100 Kafka
topics. Each topic has a different number of partitions.
As per the documentation, to fully parallelize a Kafka topic, we need to set
setParallelism() equal to the number of Kafka partitions for that topic.
But if we are consuming from multiple topics with different partition
counts, how should we set the parallelism?
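For illustration, a minimal sketch assuming the Kafka 0.11 connector and made-up topic names: a single consumer can subscribe to a list of topics, and all partitions across those topics are distributed over the source's parallel subtasks, so the source parallelism does not have to match any single topic's partition count.

import java.util.Arrays;
import java.util.List;
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

public class MultiTopicJob {
    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "multi-topic-consumer");    // placeholder

        // A single consumer can subscribe to a list of topics; the partitions
        // of all topics together are distributed over the source subtasks.
        List<String> topics = Arrays.asList("topic-a", "topic-b", "topic-c");

        DataStream<String> stream = env
            .addSource(new FlinkKafkaConsumer011<>(topics, new SimpleStringSchema(), props))
            // Parallelism does not have to equal any single topic's partition
            // count; subtasks beyond the total partition count just stay idle.
            .setParallelism(4);

        stream.print();
        env.execute("multi-topic-consumer");
    }
}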
AFAIK, the top-level
> type is always a struct because it needs to wrap the fields, e.g.,
> struct(name:string, age:int)
>
> Best, Fabian
>
> 2018-06-26 22:38 GMT+02:00 sagar loke :
>
>> @zhangminglei,
>>
>> Question about the schema for ORC format:
>>
> From: zhangminglei <18717838...@163.com>
> Date: 6/23/18 10:12 AM (GMT+08:00)
> To: sagar loke
> Cc: dev , user
> Subject: Re: [Flink-9407] Question about proposed ORC Sink !
>
> Hi, Sagar.
>
> 1. It solves the issue partially, meaning files which have finished
> checkpointing ...
> https://issues.apache.org/jira/browse/FLINK-9407 and
> https://issues.apache.org/jira/browse/FLINK-9411
> >>>
> >>> For the ORC format, currently only basic data types are supported, such
> as Long, Boolean, Short, Integer, ...
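To make the struct wrapping and column types concrete, a small sketch with the org.apache.orc.TypeDescription API; the column names are just Fabian's example from above.

import org.apache.orc.TypeDescription;

public class OrcSchemaExample {
    public static void main(String[] args) {
        // The top-level ORC type is a struct wrapping all columns.
        TypeDescription schema =
            TypeDescription.fromString("struct<name:string,age:int>");

        // The same schema, built programmatically:
        TypeDescription built = TypeDescription.createStruct()
            .addField("name", TypeDescription.createString())
            .addField("age", TypeDescription.createInt());

        System.out.println(schema); // prints struct<name:string,age:int>
        System.out.println(built);
    }
}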
We are eagerly waiting for:
- Extend Streaming Sinks:
  - Bucketing Sink should support S3 properly (compensate for eventual
    consistency), work with Flink's shaded S3 file systems, and efficiently
    support formats that compress/index across individual rows (Parquet, ORC,
    ...)
Especially for ORC.
Jörn Franke wrote:
> Don’t use the JDBC driver to write to Hive. The performance of JDBC in
> general for large volumes is suboptimal.
> Write it to a file in HDFS in a format supported by Hive and point the
> table definition in Hive to it.
>
> On 11. Jun 2018, at 04:47, sagar loke wrote:
I am trying to sink data to Hive via Confluent Kafka -> Flink -> Hive using
the following code snippet. But I am getting the following error:
final StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
DataStream stream = readFromKafka(env);
private static final TypeI
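Following Jörn's suggestion above, a rough sketch of the file-based route using the BucketingSink from flink-connector-filesystem; the HDFS path and table layout are made up, the Kafka source is replaced by a stand-in, and for Parquet/ORC you would swap in a matching Writer.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.StringWriter;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.flink.streaming.connectors.fs.bucketing.DateTimeBucketer;

public class KafkaToHdfsJob {
    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        // BucketingSink finalizes in-progress files on checkpoints.
        env.enableCheckpointing(60_000);

        // Stand-in for the Kafka source from the snippet above.
        DataStream<String> stream = env.fromElements("event-1", "event-2");

        // Write time-bucketed text files to HDFS (placeholder path).
        BucketingSink<String> sink = new BucketingSink<String>("hdfs:///data/events")
            .setBucketer(new DateTimeBucketer<String>("yyyy-MM-dd--HH"))
            .setWriter(new StringWriter<String>());
        stream.addSink(sink);

        env.execute("kafka-to-hdfs");

        // Hive then reads the files via an external table, e.g.:
        //   CREATE EXTERNAL TABLE events (line STRING)
        //   LOCATION 'hdfs:///data/events';
    }
}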