The version in use is Flink 1.17. I first created the partition_test table in Hive.
In the code I also specified sink.partition-commit.policy.kind, but execution still fails
with the error above. However, if the table is not created in Hive and is instead created
through Flink, it runs fine.
Is this a Flink 1.17 bug?
CREATE CATALOG my_hive_catalog
WITH (
  'type' = 'hive',
  -- specify the default Hive database
  'default-database' = 'zhoujielun'
);
use catalog my_hive_catalog;
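A hedged sketch of one possible workaround (not confirmed in this thread): since sink.partition-commit.policy.kind is a per-table option, it can also be passed as a dynamic table option (SQL hint) on the INSERT when the table itself was created on the Hive side. This assumes the catalog and table above, and a hypothetical source table kafka_source.

// Assuming the TableEnvironment tEnv on which the CREATE CATALOG / USE CATALOG
// statements above were executed; "kafka_source" is a hypothetical source table.
tEnv.executeSql(
    "INSERT INTO partition_test "
        + "/*+ OPTIONS('sink.partition-commit.policy.kind' = 'metastore,success-file') */ "
        + "SELECT * FROM kafka_source");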
... But when I go
into the /tmp dir, I
couldn't find the Flink checkpoint state local directory.
What is the RocksDB local directory in Flink checkpointing? I am looking
forward to your reply.
Best,
LakeShen
> Jark
>
> On Wed, 29 Apr 2020 at 10:19, LakeShen wrote:
>
>> Hi Jark,
>>
>> I am a little confused about how double-stream join state is cleared (not
>> window join).
>>
>> For example, there are two streams, A and B. The SQL is like this:
>>
Hi community,
I have a question about Flink on YARN HA: if the active ResourceManager
changes, what is the Flink task status? Is the Flink task still running normally,
or must I restart my Flink task?
Thanks for your reply.
Best,
LakeShen
... the containerized.heap-cutoff-ratio is set to 0.15.
Is there any problem with this config?
I am looking forward to your reply.
Best wishes,
LakeShen
Thanks for your reply.
Best regards,
LakeShen
Thank you, I will do that.
On Tue, 17 Mar 2020 at 17:58, jinhai wang wrote:
> Hi LakeShen
>
> You also must assign IDs to all operators of an application. Otherwise,
> you may not be able to recover from a checkpoint.
>
> Doc:
> https://ci.apache.org/projects/flink/flink-docs-stable/ops/up
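For reference, a minimal sketch of what assigning operator IDs looks like (the pipeline and the names below are hypothetical placeholders):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Give every operator a stable uid so its state can be matched against the
// savepoint/checkpoint when the job is later changed and restored.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

DataStream<String> events = env
    .fromElements("a", "b", "c")        // placeholder source
    .uid("demo-source")
    .name("Demo Source");

events
    .map(value -> value.toUpperCase())
    .uid("uppercase-map")               // stable ID used for state restore
    .name("Uppercase Map")
    .print();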
> ... timestamp each time 1000 entries have been processed.
What's the meaning of 1000 entries? 1000 different keys?
Thanks for your reply.
Best regards,
LakeShen
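Assuming the quoted text refers to the RocksDB compaction-filter cleanup strategy, the 1000 means the compaction filter processes 1000 state entries before it queries the current timestamp from Flink again; it is not 1000 distinct keys. A minimal sketch (the 7-day TTL is an arbitrary example):

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.time.Time;

// Ask Flink for the current timestamp again only after 1000 state entries have
// been processed during RocksDB compaction (a speed vs. accuracy trade-off).
StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.days(7))
    .cleanupInRocksdbCompactFilter(1000)
    .build();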
Ok, thanks, Arvid!
On Tue, 10 Mar 2020 at 16:14, Arvid Heise wrote:
> Hi LakeShen,
>
> you can change the port with
>
> conf.setInteger(RestOptions.PORT, 8082);
>
> or if you want to be on the safe side, specify a range
>
> conf.setString(RestOptions.BIND_PORT, "8081-8099");
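For completeness, a minimal sketch of where that configuration is applied, assuming a local environment with the web UI (the port range itself is arbitrary, and flink-runtime-web must be on the classpath for the UI):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.RestOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Bind the local REST endpoint / web UI to the first free port in the range.
Configuration conf = new Configuration();
conf.setString(RestOptions.BIND_PORT, "8081-8099");
StreamExecutionEnvironment env =
    StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(conf);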
Thanks a lot, tison!
On Thu, 12 Mar 2020 at 17:56, tison wrote:
> The StoppableFunction is gone.
>
> See also https://issues.apache.org/jira/browse/FLINK-11889
>
> Best,
> tison.
>
>
> On Thu, 12 Mar 2020 at 17:44, LakeShen wrote:
>
>> Hi community,
>> now I am seei
Thanks for your reply.
Best wishes,
LakeShen
... this command only suits the sources that implement the StoppableFunction
interface, is that correct?
Thanks for your reply.
Best wishes,
LakeShen
... to do that?
Thanks for your reply.
Best wishes,
LakeShen
My thought is that I should configure the correct Flink jobserver for the
Flink task.
On Wed, 4 Mar 2020 at 14:07, LakeShen wrote:
> Hi community,
> now we plan to move all flink tasks to k8s cluster. For one flink
> task , we want to see this flink task web ui . First , we create the k8s
>
Hi community,
now we plan to move all Flink tasks to a k8s cluster. For one Flink
task, we want to see its web UI. First, we create a k8s
Service to expose port 8081 of the JobManager, then we use an ingress controller
so that we can reach it from outside. But the Flink web UI looks like this:
[image]
I have solved this problem. I set the flink-table-planner-blink Maven
scope to provided.
On Fri, 28 Feb 2020 at 15:32, kant kodali wrote:
> Same problem!
>
> On Thu, Feb 27, 2020 at 11:10 PM LakeShen
> wrote:
>
>> Hi community,
>> now I am using the flink
Hi community,
now I am using Flink 1.10 to run the Flink task; the cluster
type is YARN. I use the command line to submit my Flink job, and the command
is just like this:
flink run -m yarn-cluster --allowNonRestoredState -c xxx.xxx.xx
flink-stream-xxx.jar
But there is an exception ...
Hi community,
now I have a Flink SQL job, and I set the Flink SQL state retention
time. There are three dirs in the Flink checkpoint dir:
1. chk-xx dir
2. shared dir
3. taskowned dir
I find the shared dir stores checkpoint state from last year; the only reason
I can think of is that the latest
checkpoint ...
Hi community, now I am using Flink SQL and I set the retention time. As far as I
know, Flink will set a timer per key to clear its state.
If the Flink task's checkpoints always fail, is the key state still cleared by the
timer?
Thanks for your reply.
Hi community, now I use a Flink SQL inner join in my code. I saw in the Flink
documentation that a Flink SQL inner join will keep both sides of the join input
in Flink's state forever.
As a result, the HDFS file sizes are very big. Is there any way to clear the
SQL join state?
Thanks for your reply.
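A hedged sketch of the usual answer (not stated in this thread): bound the join state with idle state retention time, so state for keys that stay inactive is eventually dropped. The retention values below are arbitrary examples, using the 1.10-era TableConfig API:

import org.apache.flink.api.common.time.Time;

// Assuming an existing TableEnvironment named tEnv. Join state for a key that
// stays idle longer than the minimum retention becomes eligible for cleanup,
// and is dropped at the latest after the maximum retention.
tEnv.getConfig().setIdleStateRetentionTime(Time.hours(12), Time.hours(24));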
Ok, got it, thank you.
On Mon, 6 Jan 2020 at 10:30, Zhu Zhu wrote:
> Yes. State TTL is by default disabled.
>
> Thanks,
> Zhu Zhu
>
> On Mon, 6 Jan 2020 at 10:09, LakeShen wrote:
>
>> I saw the Flink source code and found that the Flink state TTL default is
>> never to expire. Is that right?
>
I saw the Flink source code and found that the Flink state TTL default is
never to expire. Is that right?
On Mon, 6 Jan 2020 at 09:58, LakeShen wrote:
> Hi community,I have a question about flink state ttl.If I don't config the
> flink state ttl config,
> How long the flink state retain?Is it forever
Hi community, I have a question about Flink state TTL. If I don't configure the
Flink state TTL config,
how long is the Flink state retained? Is it retained forever in HDFS?
Thanks for your reply.
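As Zhu Zhu's reply above says, state TTL is disabled by default, so keyed state is retained until the job or your own code removes it, and it therefore also stays in the checkpoints on HDFS. A minimal sketch of opting in (the 7-day TTL and the state name are hypothetical):

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

// TTL is opt-in per state descriptor; without enableTimeToLive the state is
// retained indefinitely.
StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.days(7))
    .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
    .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
    .build();

ValueStateDescriptor<String> descriptor =
    new ValueStateDescriptor<>("my-state", String.class);
descriptor.enableTimeToLive(ttlConfig);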
Hi community, when I write Flink DDL SQL like this:
CREATE TABLE kafka_src (
  id varchar,
  a varchar,
  b TIMESTAMP,
  c TIMESTAMP
)
with (
  ...
  'format.type' = 'json',
  'format.property-version' = '1',
  'format.derive-schema' = 'true',
  'update-mode' = 'append'
);
If the me
Hi community, when I run the Flink task on k8s, the first thing is to build the
Flink task jar into a Docker image. I find that it takes a lot of time to build the
Docker image.
Is there some way to make it faster?
Thanks for your reply.
Hi community, when I use Flink SQL DDL, a Kafka JSON field conflicts with
Flink SQL keywords. My thought is to use a UDTF to solve it. Is there a more
graceful way to solve this problem?
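A hedged sketch of one common alternative (not confirmed in this thread): Flink SQL lets you escape a reserved word with backticks in the DDL, so a UDTF is not needed just for the name clash. The table, fields, and properties below are placeholders in the style of the DDL earlier in this digest, and executeSql is the newer API (in Flink 1.10 the equivalent call is sqlUpdate):

// Assuming an existing TableEnvironment named tableEnv.
// `timestamp` is a reserved word, so it is escaped with backticks instead of
// renaming it through a UDTF.
tableEnv.executeSql(
    "CREATE TABLE kafka_src ("
        + "  id varchar,"
        + "  `timestamp` TIMESTAMP"
        + ") with ("
        + "  'format.type' = 'json',"
        + "  'format.property-version' = '1',"
        + "  'format.derive-schema' = 'true',"
        + "  'update-mode' = 'append'"
        + ")");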
Hi community, as I know, I can use the idle state retention time to clear the
Flink SQL task state. My question is: how long is the Flink SQL task state's
default TTL? Thanks.