Hi community, I want to know when the Flink 1.9 documentation will be published. Thanks.
OK, so ideally this should be available near the end of July.
>
> Cheers,
> Gordon
>
> [1] https://ci.apache.org/projects/flink/flink-docs-master/
>
> On Thu, Jul 4, 2019 at 5:49 PM LakeShen wrote:
>
> > Hi community, I want to know when the Flink 1.9 documentation will be published. Thanks.
> >
>
Hi community, I have a question: can the REST API
/jobs/:jobid/yarn-cancel trigger a savepoint? I read the Flink source code,
and I found that it does not trigger a savepoint. Is that right?
Thanks for your reply.
Congratulations Kurt!
Congxian Qiu wrote on Tue, Jul 23, 2019 at 5:37 PM:
> Congratulations Kurt!
> Best,
> Congxian
>
>
> Dian Fu wrote on Tue, Jul 23, 2019 at 5:36 PM:
>
> > Congrats, Kurt!
> >
> > > On Jul 23, 2019, at 5:33 PM, Zili Chen wrote:
> > >
> > > Congratulations Kurt!
> > >
> > > Best,
> > > tison.
> > >
> > >
> > > Jings
Hi all, when I use the Blink flink-sql-parser module, the Maven dependency
looks like this:

<dependency>
    <groupId>com.alibaba.blink</groupId>
    <artifactId>flink-sql-parser</artifactId>
    <version>1.5.1</version>
</dependency>

I also import the Flink 1.9 blink-table-planner module, and I
use FlinkPlannerImpl to parse the SQL to get the List. But
when I run the program, it throws an exception like th
Hi community, I have a question about the flink command. When I use the flink run
command to submit my Flink job to YARN, I use -yt to upload my function
jar, but when I set -C file:xxxfunction.jar, the Flink job throws an exception
like this:
The main method caused an error: Could not find class
'com.you
I have solved this problem, thanks.
LakeShen wrote on Tue, Aug 27, 2019 at 4:55 PM:
> Hi community, I have a question about the flink command. When I use the flink run
> command to submit my Flink job to YARN, I use -yt to upload my function
> jar, but when I set -C file:xxxfunction.jar, the Flink job
Hi community, when I create the HBase sink table in my Flink DDL SQL,
just like this:
create table sink_hbase_table(rowkey VARCHAR, cf
row(kdt_it_count bigint)) with (xx);
and when I run my Flink task, it throws an exception like this:
Up
Thank you, Jark. I had added the primary key in my Flink SQL before,
and it threw "Primary key and unique key are not supported yet." Now
I understand it; thank you sincerely for your reply.
Best wishes,
LakeShen
Jark Wu wrote on Thu, Sep 12, 2019 at 3:15 PM:
> Hi Lake,
>
> This is not a p
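For context, the HBase connector of that era derived the rowkey from the first column, so the DDL works without any PRIMARY KEY clause. A minimal sketch of such a table, where the connector property values (HBase version, table name, ZooKeeper quorum) are illustrative assumptions, not from this thread:

```sql
-- Sketch: Flink 1.9/1.10-era HBase sink DDL; the first column is treated
-- as the rowkey, so the table needs (and supports) no PRIMARY KEY clause.
create table sink_hbase_table (
  rowkey VARCHAR,
  cf ROW<kdt_it_count BIGINT>
) with (
  'connector.type' = 'hbase',
  'connector.version' = '1.4.3',                  -- assumed HBase version
  'connector.table-name' = 'sink_hbase_table',    -- assumed HBase table name
  'connector.zookeeper.quorum' = 'localhost:2181' -- assumed quorum address
);
```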
Congratulations, Ron!
Best,
LakeShen
Sergey Nuyanzin wrote on Mon, Oct 16, 2023 at 16:17:
> Congratulations, Ron!
>
> On Mon, Oct 16, 2023 at 9:38 AM Qingsheng Ren wrote:
>
> > Congratulations and welcome aboard, Ron!
> >
> > Best,
> > Qingsheng
> >
>
Congratulations, Jane!
Best,
LakeShen
Sergey Nuyanzin wrote on Mon, Oct 16, 2023 at 16:18:
> Congratulations, Jane!
>
> On Mon, Oct 16, 2023 at 9:39 AM Qingsheng Ren wrote:
>
> > Congratulations and welcome, Jane!
> >
> > Best,
> > Qingsheng
> >
> >
Hi community,
now I am using Flink 1.10 to run my Flink task; the cluster
type is YARN. I use the command line to submit my Flink job,
just like this:
flink run -m yarn-cluster --allowNonRestoredState -c xxx.xxx.xx
flink-stream-xxx.jar
But there is an exception to
I have solved this problem. I set the flink-table-planner-blink Maven
scope to provided.
kant kodali wrote on Fri, Feb 28, 2020 at 3:32 PM:
> Same problem!
>
> On Thu, Feb 27, 2020 at 11:10 PM LakeShen
> wrote:
>
>> Hi community,
>> now I am using the flink
Hi community,
now we plan to move all Flink tasks to a k8s cluster. For one Flink
task, we want to see the task's web UI. First, we create a k8s
Service to expose port 8081 of the jobmanager, then we use an ingress controller
so that we can see it from outside. But the Flink web UI looks like this:
[im
In my thought, I think I should configure the correct Flink job server for
the Flink task.
LakeShen wrote on Wed, Mar 4, 2020 at 2:07 PM:
> Hi community,
> now we plan to move all Flink tasks to a k8s cluster. For one Flink
> task, we want to see the task's web UI. First, we create the k8s
>
to do that?
Thanks for your reply.
Best wishes,
LakeShen
is this command only
suited to sources that implement the StoppableFunction interface? Is that
correct?
Thanks for your reply.
Best wishes,
LakeShen
Thanks for your reply.
Best wishes,
LakeShen
Thanks a lot, tison!
tison wrote on Thu, Mar 12, 2020 at 5:56 PM:
> The StoppableFunction is gone.
>
> See also https://issues.apache.org/jira/browse/FLINK-11889
>
> Best,
> tison.
>
>
> LakeShen wrote on Thu, Mar 12, 2020 at 5:44 PM:
>
>> Hi community,
>> now I am seei
On 13 Mar 2020, at 04:34, Sivaprasanna
> > wrote:
> > >
> > > I think you can modify the operator's parallelism. It is only that, if you
> > have set maxParallelism, then while restoring from a checkpoint you
> > shouldn't modify the maxParallelism. Otherwise
OK, thanks, Arvid!
Arvid Heise wrote on Tue, Mar 10, 2020 at 4:14 PM:
> Hi LakeShen,
>
> you can change the port with
>
> conf.setInteger(RestOptions.PORT, 8082);
>
> or if want to be on the safe side specify a range
>
> conf.setString(RestOptions.BIND_PORT, "8081-8099");
ent
> timestamp each time 1000 entries have been processed.
What is the meaning of 1000 entries? 1000 different keys?
Thanks for your reply.
Best regards,
LakeShen
Hi Jingsong,
I am looking forward to this feature, because some streaming applications
need to transfer their messages to HDFS for offline analysis.
Best wishes,
LakeShen
Stephan Ewen wrote on Tue, Mar 17, 2020 at 7:42 PM:
> I would really like to see us converging the stack and the functional
e.
Thanks for your reply.
Best regards,
LakeShen
'connector.properties.1.key' = 'bootstrap.servers'
> , 'connector.properties.1.value' = 'x'
>
I can understand this config, but for a Flink newcomer it may be
confusing.
In my thought, I am really looking forward to this feature. Thank
g
the containerized.heap-cutoff-ratio to be 0.15.
Is there any problem with this config?
I am looking forward to your reply.
Best wishes,
LakeShen
+1 (non-binding)
Benchao Li wrote on Fri, Apr 3, 2020 at 9:50 AM:
> +1 (non-binding)
>
> Dawid Wysakowicz wrote on Fri, Apr 3, 2020 at 12:33 AM:
>
> > +1
> >
> > Best,
> >
> > Dawid
> >
> > On 02/04/2020 18:28, Timo Walther wrote:
> > > +1
> > >
> > > Thanks,
> > > Timo
> > >
> > > On 02.04.20 17:22, Jark Wu wrote:
> > >> H
Hi community,
I have a question about Flink on YARN HA: if the active ResourceManager
changes, what is the Flink task's status? Does the Flink task keep running
normally, or must I restart my Flink task?
Thanks for your reply.
Best,
LakeShen
Hi community, as far as I know, I can use the idle state retention time to clear
Flink SQL task state. My question is: how long is the default TTL of Flink SQL
task state? Thanks.
Hi community, when I use Flink SQL DDL, a field in the Kafka JSON conflicts with
a Flink SQL keyword. My idea is to use a UDTF to solve it. Is there a more
graceful way to solve this problem?
Thank you, lucas.
lucas.wu wrote on Tue, Dec 10, 2019 at 2:12 PM:
> You can use backticks (` `) to surround the field name
>
>
> Original message
> From: lakeshen shenleifight...@gmail.com
> To: dev...@flink.apache.org; user u...@flink.apache.org
> Sent: Tue, Dec 10, 2019, 14:05
> Subject: Flink SQL Kafka topic DDL, the kafka' json field conflict with fli
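The tip above can be sketched as follows; the column name `type` and the connector properties are hypothetical stand-ins, not taken from this thread:

```sql
-- Sketch: backtick-quote a JSON field whose name collides with a SQL
-- keyword, so the parser treats it as a plain identifier.
CREATE TABLE kafka_src (
  id VARCHAR,
  `type` VARCHAR  -- hypothetical reserved-word field; backticks avoid the conflict
) WITH (
  'connector.type' = 'kafka',  -- assumed connector properties; a real DDL
  'format.type' = 'json'       -- would also need topic, servers, etc.
);
```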
Hi community, when I run a Flink task on k8s, the first thing is
to build the Flink task jar into a
Docker image. I find that it takes a long time to build the Docker image.
Is there some way to make it faster?
Thanks for your reply.
Hi community, when I write Flink DDL SQL like this:
CREATE TABLE kafka_src (
id varchar,
a varchar,
b TIMESTAMP,
c TIMESTAMP
)
with (
...
'format.type' = 'json',
'format.property-version' = '1',
'format.derive-schema' = 'true',
'update-mode' = 'append'
);
If the me
Hi community, I have a question about Flink state TTL. If I don't configure
the Flink state TTL,
how long is the Flink state retained? Is it retained forever in HDFS?
Thanks for your reply.
I read the Flink source code, and I find that the Flink state TTL default is
to never expire. Is that right?
LakeShen wrote on Mon, Jan 6, 2020 at 9:58 AM:
> Hi community, I have a question about Flink state TTL. If I don't configure the
> Flink state TTL,
> how long is the Flink state retained? Is it retained forever
OK, got it, thank you.
Zhu Zhu wrote on Mon, Jan 6, 2020 at 10:30 AM:
> Yes. State TTL is by default disabled.
>
> Thanks,
> Zhu Zhu
>
> LakeShen wrote on Mon, Jan 6, 2020 at 10:09 AM:
>
>> I read the Flink source code, and I find that the Flink state TTL default is
>> to never expire. Is that right?
>
Hi community, now I am using a Flink SQL inner join in my code. I saw in the Flink
documentation that a Flink SQL inner join will keep both sides of the join input
in Flink's state forever.
As a result, the HDFS files are very large. Is there any way to clear the
SQL join state?
Thanks for your reply.
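One way to bound that join state, in case it helps: newer Flink versions expose idle state retention as a SQL option. A minimal sketch, assuming a Flink 1.13+ SQL client (the table and column names are hypothetical):

```sql
-- Sketch: drop state that has been idle longer than the TTL, so an
-- unbounded inner join does not grow forever.
SET 'table.exec.state.ttl' = '24 h';

SELECT o.order_id, p.product_name
FROM orders AS o
JOIN products AS p ON o.product_id = p.product_id;
```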
Hi community, now I am using Flink SQL and I set the retention time. As far as I
know, Flink sets a timer per key to clear its state.
If the Flink task's checkpoints always fail, will the key state still be cleared
by the timers?
Thanks for your reply.
Hi community,
now I have a Flink SQL job, and I set the Flink SQL state retention
time. There are three dirs in the Flink checkpoint dir:
1. the chk-xx dir
2. the shared dir
3. the taskowned dir
I find that the shared dir stores checkpoint state from last year; the only reason
I can think of is that the latest
checkpo
Congratulations! Jincheng Sun
Best,
LakeShen
Robert Metzger wrote on Mon, Jun 24, 2019 at 11:09 PM:
> Hi all,
>
> On behalf of the Flink PMC, I'm happy to announce that Jincheng Sun is now
> part of the Apache Flink Project Management Committee (PMC).
>
> Jincheng has been a committer
". But when I go
into the /tmp dir, I
couldn't find the Flink checkpoint state local directory.
What is the RocksDB local directory during Flink checkpointing? I am looking
forward to your reply.
Best,
LakeShen
The version used is Flink 1.17. I first created the partition_test table in Hive.
In the code I also specified sink.partition-commit.policy.kind, but at runtime it
still reports the error above. However, if I do not create the table in Hive and
instead use Flink to create it, it runs fine.
Is this a bug in Flink 1.17?
CREATE CATALOG my_hive_catalog
WITH (
'type' = 'hive',
-- specify the default Hive database
'default-database' = 'zhoujielun'
);
use catalog m
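For reference, the partition-commit policy is usually declared on the table itself rather than in code. A minimal sketch of creating the table from Flink using the Hive dialect, where the column and partition names are assumptions, not from this thread:

```sql
-- Sketch: partitioned Hive table created via Flink (Hive dialect), with the
-- partition-commit policy declared in the table properties.
CREATE TABLE partition_test (
  id BIGINT,
  name STRING
) PARTITIONED BY (dt STRING) TBLPROPERTIES (
  'sink.partition-commit.policy.kind' = 'metastore,success-file',
  'sink.partition-commit.trigger' = 'partition-time'
);
```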
LakeShen created FLINK-13283:
Summary: JDBCLookup Exception: Unsupported type: LocalDate
Key: FLINK-13283
URL: https://issues.apache.org/jira/browse/FLINK-13283
Project: Flink
Issue Type: Bug
LakeShen created FLINK-13289:
Summary: Blink Planner JDBCUpsertTableSink :
UnsupportedOperationException "JDBCUpsertTableSink can not support "
Key: FLINK-13289
URL: https://issues.apache.org/jira/browse/F
LakeShen created FLINK-32528:
Summary: The RexCall a = a,if a's datatype is nullable, and when a
is null, a = a is null, it isn't true in BinaryComparisonExprReducer
Key: FLINK-32528
URL: https://issues.
LakeShen created FLINK-16639:
Summary: Flink SQL Kafka source connector, add the no json format
filter params when format.type is json
Key: FLINK-16639
URL: https://issues.apache.org/jira/browse/FLINK-16639
LakeShen created FLINK-16681:
Summary: Jdbc JDBCOutputFormat and JDBCLookupFunction
PreparedStatement loss connection, if long time not records to write.
Key: FLINK-16681
URL: https://issues.apache.org/jira/browse
LakeShen created FLINK-18376:
Summary: java.lang.ArrayIndexOutOfBoundsException in
RetractableTopNFunction
Key: FLINK-18376
URL: https://issues.apache.org/jira/browse/FLINK-18376
Project: Flink
LakeShen created FLINK-18440:
Summary: ROW_NUMBER function: ROW/RANGE not allowed with RANK,
DENSE_RANK or ROW_NUMBER functions
Key: FLINK-18440
URL: https://issues.apache.org/jira/browse/FLINK-18440