Unsubscribe
I am using Flink 1.17, and I first created the partition_test table in Hive.
In the code I also specified sink.partition-commit.policy.kind, but execution
still fails with the error above. However, if I don't create the table in Hive
and instead let Flink create it, the job runs fine.
Is this a bug in Flink 1.17?
CREATE CATALOG my_hive_catalog
WITH (
  'type' = 'hive',
  -- specify the default Hive database
  'default-database' = 'zhoujielun'
);
USE CATALOG my_hive_catalog;
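
For context, sink.partition-commit.policy.kind is a table option, so it only takes effect if it is present in the table's properties. Below is a minimal sketch of setting it when Flink creates the table, assuming a Table API program with the Hive connector on the classpath; the columns and partition key are hypothetical placeholders:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.TableEnvironment;

public class HivePartitionCommitSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Register the Hive catalog, as in the snippet above.
        tEnv.executeSql(
                "CREATE CATALOG my_hive_catalog WITH ("
                        + " 'type' = 'hive',"
                        + " 'default-database' = 'zhoujielun')");
        tEnv.executeSql("USE CATALOG my_hive_catalog");

        // Switch to the Hive dialect so the table is created as a Hive table
        // with the commit policy stored in its TBLPROPERTIES. The schema and
        // partition column are hypothetical.
        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
        tEnv.executeSql(
                "CREATE TABLE IF NOT EXISTS partition_test ("
                        + " id BIGINT, name STRING"
                        + ") PARTITIONED BY (dt STRING) STORED AS parquet"
                        + " TBLPROPERTIES ("
                        + " 'sink.partition-commit.policy.kind' = 'metastore,success-file')");
    }
}

If the table was created directly in Hive without these properties, one possible workaround is to add them on the Hive side with ALTER TABLE partition_test SET TBLPROPERTIES (...).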
Good morning Salva,
The situation is much better than you seem to be aware of 😊
For quite some time there has been an implementation for keyed operators with
as many inputs as you like:
* MultipleInputStreamOperator/KeyedMultipleInputTransformation
I originally used your proposed sum types wi
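
For anyone digging through the archives later: these classes live in Flink's internal packages (org.apache.flink.streaming.api.operators and org.apache.flink.streaming.api.transformations), so there are no compatibility guarantees and signatures may differ across versions. Below is a rough, hypothetical two-input sketch of how they are wired together, loosely following how Flink's own integration tests assemble such operators; all names are placeholders:

import java.util.Arrays;
import java.util.List;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.MultipleConnectedStreams;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.operators.AbstractInput;
import org.apache.flink.streaming.api.operators.AbstractStreamOperatorFactory;
import org.apache.flink.streaming.api.operators.AbstractStreamOperatorV2;
import org.apache.flink.streaming.api.operators.Input;
import org.apache.flink.streaming.api.operators.MultipleInputStreamOperator;
import org.apache.flink.streaming.api.operators.StreamOperator;
import org.apache.flink.streaming.api.operators.StreamOperatorParameters;
import org.apache.flink.streaming.api.transformations.KeyedMultipleInputTransformation;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;

public class KeyedMultiInputSketch {

    // Toy operator that simply forwards elements from both keyed inputs.
    static class ForwardingOperator extends AbstractStreamOperatorV2<String>
            implements MultipleInputStreamOperator<String> {

        ForwardingOperator(StreamOperatorParameters<String> parameters) {
            super(parameters, 2); // number of inputs
        }

        @Override
        public List<Input> getInputs() {
            return Arrays.asList(
                    new AbstractInput<String, String>(this, 1) {
                        @Override
                        public void processElement(StreamRecord<String> element) {
                            output.collect(element);
                        }
                    },
                    new AbstractInput<String, String>(this, 2) {
                        @Override
                        public void processElement(StreamRecord<String> element) {
                            output.collect(element);
                        }
                    });
        }
    }

    static class ForwardingOperatorFactory extends AbstractStreamOperatorFactory<String> {
        @Override
        @SuppressWarnings("unchecked")
        public <T extends StreamOperator<String>> T createStreamOperator(
                StreamOperatorParameters<String> parameters) {
            return (T) new ForwardingOperator(parameters);
        }

        @Override
        public Class<? extends StreamOperator> getStreamOperatorClass(ClassLoader classLoader) {
            return ForwardingOperator.class;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> a = env.fromElements("a1", "a2");
        DataStream<String> b = env.fromElements("b1", "b2");

        KeyedMultipleInputTransformation<String> transform =
                new KeyedMultipleInputTransformation<>(
                        "two-input-keyed",
                        new ForwardingOperatorFactory(),
                        Types.STRING,          // output type
                        env.getParallelism(),
                        Types.STRING);         // state key type
        // Each input gets its own key selector; here both key by first character
        // so state is partitioned consistently across inputs.
        transform.addInput(a.getTransformation(),
                (KeySelector<String, String>) v -> v.substring(0, 1));
        transform.addInput(b.getTransformation(),
                (KeySelector<String, String>) v -> v.substring(0, 1));

        env.addOperator(transform);
        new MultipleConnectedStreams(env).transform(transform).print();
        env.execute("keyed-multi-input-sketch");
    }
}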
Sorry for my late reply, Gabor; here is the whole trace:
SLF4J(W): No SLF4J providers were found.
SLF4J(W): Defaulting to no-operation (NOP) logger implementation
SLF4J(W): See https://www.slf4j.org/codes.html#noProviders for further details.
SLF4J(W): Class path contains SLF4J bindings target
I've seen the class definition for sink function:
class SinkFunction(JavaFunctionWrapper):
    """
    The base class for SinkFunctions.
    """

    def __init__(self, sink_func: Union[str, JavaObject]):
        """
        Constructor of SinkFunction.

        :param sink_func: The java SinkFunction
Hi Salva,
I've done exactly that (a union of N streams in order to perform a join), and
gave a talk about it at Flink Forward a few years ago:
https://www.youtube.com/watch?v=tiGxEGPyqCg&ab_channel=FlinkForward
On Wed, Dec 4, 2024 at 5:03 AM Salva Alcántara wrote:
> I have a job which basically
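
The union trick boils down to wrapping every input in a common envelope type, unioning the streams, and keying once, so a single keyed function holds all the per-input state. A rough sketch, assuming hypothetical names (the Tagged envelope is not taken from the talk):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class UnionJoinSketch {

    // Hypothetical common envelope: which input an element came from,
    // the shared key, and the payload.
    public static class Tagged {
        public int sourceTag;
        public String key;
        public String payload;

        public Tagged() {}

        public Tagged(int sourceTag, String key, String payload) {
            this.sourceTag = sourceTag;
            this.key = key;
            this.payload = payload;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tagged> a = env.fromElements(new Tagged(0, "k1", "a"));
        DataStream<Tagged> b = env.fromElements(new Tagged(1, "k1", "b"));
        DataStream<Tagged> c = env.fromElements(new Tagged(2, "k1", "c"));

        // Union all inputs into one stream and key it once, so a single keyed
        // function holds the per-input state instead of chaining N-1 joins.
        a.union(b, c)
                .keyBy(t -> t.key)
                .process(new KeyedProcessFunction<String, Tagged, String>() {
                    @Override
                    public void processElement(Tagged value, Context ctx, Collector<String> out) {
                        // A real job would dispatch on value.sourceTag, update
                        // per-input keyed state, and emit joined results.
                        out.collect(value.sourceTag + ":" + value.payload);
                    }
                })
                .print();

        env.execute("union-join-sketch");
    }
}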
Which fixes are you interested in having released?
On Thu, Nov 21, 2024 at 10:02 AM Prasanna kumar <prasannakumarram...@gmail.com> wrote:
> Any plans for these minor version releases?
>
> Thanks,
> Prasanna.
>
> On Mon, Nov 11, 2024 at 2:56 PM NGH Flink wrote:
>
>> Hi,
>>
>> I am also interested
Thanks Shengkai and Andrew - that's helped clarify things a lot.
On Tue, 3 Dec 2024 at 08:30, Shengkai Fang wrote:
> Accidentally sent an email that was not finished...
>
> YAML is much easier for users to work with than SQL. Many external
> systems can use a YAML spec to build a data pipeline
Unsubscribe
I have a job which basically joins different inputs together, all
partitioned by the same key.
I originally took the typical approach and created a pipeline consisting of
N-1 successive joins, each one implemented using a DataStream co-process
function.
To avoid shuffling and also some state duplication
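
For concreteness, the N-1 successive joins described above might look roughly like the following minimal sketch; all names are hypothetical and the per-key state a real join would maintain is elided. Each connect/keyBy step re-partitions the intermediate stream, which is the shuffling (and duplicated state) being avoided:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;
import org.apache.flink.util.Collector;

public class ChainedJoinsSketch {

    // Hypothetical pairwise "join" that just forwards both sides; a real one
    // would keep each side's latest value in keyed state and emit matches.
    static KeyedCoProcessFunction<String, String, String, String> pairJoin() {
        return new KeyedCoProcessFunction<String, String, String, String>() {
            @Override
            public void processElement1(String left, Context ctx, Collector<String> out) {
                out.collect(left);
            }

            @Override
            public void processElement2(String right, Context ctx, Collector<String> out) {
                out.collect(right);
            }
        };
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Elements are "key|payload"; all three inputs share the same key space.
        DataStream<String> a = env.fromElements("k1|a");
        DataStream<String> b = env.fromElements("k1|b");
        DataStream<String> c = env.fromElements("k1|c");

        // N-1 successive co-processed joins: (a JOIN b) JOIN c. Every
        // connect/keyBy re-shuffles the intermediate stream.
        DataStream<String> ab =
                a.connect(b)
                        .keyBy(v -> v.split("\\|")[0], v -> v.split("\\|")[0])
                        .process(pairJoin());
        DataStream<String> abc =
                ab.connect(c)
                        .keyBy(v -> v.split("\\|")[0], v -> v.split("\\|")[0])
                        .process(pairJoin());

        abc.print();
        env.execute("chained-joins-sketch");
    }
}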