Congratulations!
Best,
Jingsong
On Tue, Mar 25, 2025 at 10:40 AM Yuan Mei wrote:
>
> Thanks for driving this & Congrats!
>
> Best
> Yuan
>
> On Mon, Mar 24, 2025 at 5:38 PM Leonard Xu wrote:
>>
>> Congratulations!
>>
>> Thanks Xintong, Jark, Jiangjie and Martijn for the release management and
Hi Yanquan,
We need to upgrade Paimon to 1.0... Iceberg snapshots in Paimon 0.9
do not work well.
Best,
Jingsong
On Mon, Dec 9, 2024 at 5:25 PM Yanquan Lv wrote:
>
> Hi, John. Happy to hear that you've used this pipeline.
>
> As you said, Paimon's Iceberg Compatibility Mode is a new feature in Pai
CC to the Paimon community.
Best,
Jingsong
On Mon, May 20, 2024 at 9:55 AM Jingsong Li wrote:
>
> Amazing, congrats!
>
> Best,
> Jingsong
>
> On Sat, May 18, 2024 at 3:10 PM 大卫415 <2446566...@qq.com.invalid> wrote:
> >
> > Unsubscribe
> >
> >
> >
Congratulations!
On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
>
> Congratulations, thanks for the great work!
>
> Best,
> Rui
>
> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee wrote:
>>
>> The Apache Flink community is very happy to announce the release of Apache
>> Flink
+1, the fallback looks weird now; it is outdated.
But it is good to provide an option; I don't know whether any users
still depend on this fallback.
Best,
Jingsong
On Tue, May 30, 2023 at 1:47 PM Rui Li wrote:
>
> +1, the fallback was just intended as a temporary workaround to run
> catal
The Apache Flink community is very happy to announce the release of
Apache Flink Table Store 0.3.0.
Apache Flink Table Store is a unified storage to build dynamic tables
for both streaming and batch processing in Flink, supporting
high-speed data ingestion and timely data query.
Please check out
Thanks for driving, Qingsheng.
+1 for reverting the sink metric name.
We often forget that metrics are also one of the important APIs.
+1 for releasing 1.15.3 to fix this.
Best,
Jingsong
On Sun, Oct 9, 2022 at 11:35 PM Becket Qin wrote:
>
> Thanks for raising the discussion, Qingsheng,
>
> +1 on re
Thanks Xingbo for releasing it.
Best,
Jingsong
On Wed, Sep 28, 2022 at 10:52 AM Xingbo Huang wrote:
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.14.6, which is the fifth bugfix release for the Apache Flink 1.14
> series.
>
> Apache Flink® is an open-
The Apache Flink community is very happy to announce the release of
Apache Flink Table Store 0.2.0.
Apache Flink Table Store is a unified storage to build dynamic tables
for both streaming and batch processing in Flink, supporting
high-speed data ingestion and timely data query.
Please check out
Thanks Xintong, Jark, Martijn and Robert for making this possible!
Best,
Jingsong
On Thu, Jun 2, 2022 at 5:32 PM Jark Wu wrote:
> Thank Xintong for making this possible!
>
> Cheers,
> Jark Wu
>
> On Thu, 2 Jun 2022 at 15:31, Xintong Song wrote:
>
> > Hi everyone,
> >
> > I'm very happy to a
Most of the open source communities I know have set up their Slack
channels, such as Apache Iceberg [1], Apache Druid [2], etc.
So I think Slack is worth trying.
David is right: there are cases that need back-and-forth communication,
and Slack will be more effective for those.
But back
Thanks all for your discussions.
I'll share my opinion here:
1. Hive SQL and Hive-like SQL are the absolute mainstay of current
batch ETL in China. Hive + Spark (Hive-SQL-like) + Databricks also occupies
a large market worldwide.
- Unlike OLAP SQL (such as Presto, which is ANSI SQL rather than Hive
s
It is not found in
https://repo1.maven.org/maven2/org/apache/flink/flink-table-api-java/
I guess so many versions were released at once that Maven Central
repository synchronization is slower.
Best,
Jingsong
On Fri, Dec 17, 2021 at 2:00 PM casel.chen wrote:
>
> I can NOT find flink 1.13.5 rel
d be best if a rough table could be provided.
>
> I think this is a good suggestion, we can provide those suggestions in the
> document.
>
> Best,
> Yingjie
>
> Jingsong Li wrote on Tue, Dec 14, 2021 at 14:39:
>>
>> Hi Yingjie,
>>
>> +1 for this FLIP. I
Hi Yingjie,
+1 for this FLIP. I'm pretty sure this will greatly improve the
usability of batch jobs.
Looks like "taskmanager.memory.framework.off-heap.batch-shuffle.size"
and "taskmanager.network.sort-shuffle.min-buffers" are related to
network memory and framework.off-heap.size.
My question is, wha
Amazing!
Thanks Yingjie and all contributors for your great work.
Best,
Jingsong
On Wed, Dec 1, 2021 at 10:52 AM Yun Tang wrote:
>
> Great news!
> Thanks for all the guys who contributed in this project.
>
> Best
> Yun Tang
>
> On 2021/11/30 16:30:52 Till Rohrmann wrote:
> > Great news, Yingjie
Hi,
yidan is correct.
The success-file is not the data-file. [1]
At present, there is no configuration for the data file name. You can
create a JIRA for this.
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/filesystem/#partition-commit-policy
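As a rough sketch of what the existing option controls (table name, path and
format are hypothetical; the policy only affects the partition marker file,
not the data file names):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SuccessFilePolicySketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        // 'success-file' writes a _SUCCESS marker when a partition is committed;
        // the data file names themselves are still generated by the sink.
        tEnv.executeSql(
                "CREATE TABLE fs_sink (id BIGINT, dt STRING) PARTITIONED BY (dt) WITH ("
                        + " 'connector' = 'filesystem',"
                        + " 'path' = 'file:///tmp/fs_sink',"
                        + " 'format' = 'json',"
                        + " 'sink.partition-commit.policy.kind' = 'success-file')");
    }
}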
Best,
Jingsong
On Mon
Hi,
If you are using the SQL Client, you can try:
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sqlclient/#execute-a-set-of-sql-statements
If you are using TableEnvironment, you can try a statement set too:
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/common/
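A minimal sketch of a statement set with TableEnvironment (connectors and
table names are only for illustration):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;

public class StatementSetSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        // Hypothetical source and sinks; replace them with your own DDLs.
        tEnv.executeSql("CREATE TABLE src (x INT) WITH ('connector' = 'datagen', 'number-of-rows' = '10')");
        tEnv.executeSql("CREATE TABLE sink_a (x INT) WITH ('connector' = 'blackhole')");
        tEnv.executeSql("CREATE TABLE sink_b (x INT) WITH ('connector' = 'blackhole')");

        // One statement set submits several INSERTs as a single job.
        StatementSet set = tEnv.createStatementSet();
        set.addInsertSql("INSERT INTO sink_a SELECT x FROM src");
        set.addInsertSql("INSERT INTO sink_b SELECT x FROM src");
        set.execute();
    }
}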
Hi,
It seems there is a jar conflict. You can check your
dependencies; some Guava dependencies conflict with the corresponding
Hadoop version.
You can try excluding all Guava dependencies.
Best,
Jingsong
On Mon, Nov 1, 2021 at 6:07 PM 方汉云 wrote:
>
> Hive version 2.3.8
> Flink version 1.12
Thanks, Chesnay & Martijn
1.13.3 really solves many problems.
Best,
Jingsong
On Thu, Oct 21, 2021 at 6:46 PM Konstantin Knauf wrote:
>
> Thank you, Chesnay & Martijn, for managing this release!
>
> On Thu, Oct 21, 2021 at 10:29 AM Chesnay Schepler
> wrote:
>
> > The Apache Flink community is v
Hi Yik,
The **batch** Hive sink does not support `sink.partition-commit.policy.kind`.
By default, the **batch** Hive sink commits to the metastore without writing a success-file.
You can create a JIRA for this.
Best,
Jingsong
On Fri, Aug 20, 2021 at 11:01 AM Caizhi Weng wrote:
>
> Hi!
>
> As far as I know Flink b
Hi Oran and Chesnay,
I think it should be my problem. The Docker image I generated on a
MacBook M1 machine resulted in an arm64 image (when
releasing 1.12.5).
We will regenerate the 1.13.1 image on an Intel x86 machine.
I'm very sorry.
Best,
Jingsong
On Tue, Aug 10, 2021 a
Thanks Yun Tang and everyone!
Best,
Jingsong
On Tue, Aug 10, 2021 at 9:37 AM Xintong Song wrote:
> Thanks Yun and everyone~!
>
> Thank you~
>
> Xintong Song
>
>
>
> On Mon, Aug 9, 2021 at 10:14 PM Till Rohrmann
> wrote:
>
> > Thanks Yun Tang for being our release manager and the great work! Al
e enabled
> by setting taskmanager.network.blocking-shuffle.compression.enabled to
> true);
> 2. Blocking shuffle type used. See [1] for more information. (To use
> sort-shuffle, the minimum version is 1.13);
> 3. Memory configuration, including network and managed memory size.
>
Thanks Yingjie for the great effort!
This is really helpful to Flink Batch users!
Best,
Jingsong
On Mon, Jun 7, 2021 at 10:11 AM Yingjie Cao wrote:
> Hi devs & users,
>
> The FLIP-148[1] has been released with Flink 1.13 and the final
> implementation has some differences compared with the ini
Hi Alokh,
Maybe this is related to https://issues.apache.org/jira/browse/FLINK-20241
We can improve `SerializableConfiguration` to throw better exceptions.
So the real cause may be a "ClassNotFoundException".
Can you check your dependencies, e.g. the Hadoop-related dependencies?
Best,
Jingsong
On F
Hi,
Yes, as Danny said, it is very hard work...
A suggestion: if you just want to fix some bugs, you can cherry-pick the
bugfixes from the new Calcite version onto your own internal Calcite
branch.
Best,
Jingsong
On Thu, Mar 11, 2021 at 2:28 PM Danny Chan wrote:
> Hi Sheng ~
>
> It is a
Hi Naehee, sorry for the late reply.
I think you are right, there are bugs here. We didn't think about nested
structures very well before.
Now we mainly focus on the new BulkFormat implementation, which we need to
consider when implementing the new ParquetBulkFormat.
Best,
Jingsong
On Tue, Nov
Hi Yuval,
Yes, the new table source does not support the new Source API in Flink
1.11. The integration is introduced in Flink master (1.12):
https://cwiki.apache.org/confluence/display/FLINK/FLIP-146%3A+Improve+new+TableSource+and+TableSink+interfaces
Best,
Jingsong
On Sun, Nov 1, 2020 at 10:54
+1 to remove the Bucketing Sink.
Thanks for the effort on ORC and `HadoopPathBasedBulkFormatBuilder`, I
think it's safe to get rid of the old Bucketing API with them.
Best,
Jingsong
On Thu, Oct 29, 2020 at 3:06 AM Kostas Kloudas wrote:
> Thanks for the discussion!
>
> From this thread I do not
Hi, Yang,
"SUCCESSFUL_JOB_OUTPUT_DIR_MARKER" does not work in StreamingFileSink.
You can take a look at the partition commit feature [1].
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/filesystem.html#partition-commit
Best,
Jingsong Lee
On Thu, Oct 15, 2020 a
Hi devs and users:
After the 1.11 release, I have heard some feedback recently: why can't Hive's
documentation be found under "Table & SQL Connectors"?
Actually, Hive's documentation is under "Table API & SQL". Since the "Table &
SQL Connectors" documentation was extracted separately, Hive is a little out of
plac
Thanks ZhuZhu for driving the release.
Best,
Jingsong
On Thu, Sep 17, 2020 at 1:29 PM Zhu Zhu wrote:
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.11.2, which is the second bugfix release for the Apache Flink 1.11
> series.
>
> Apache Flink® is an open-s
Hi Dan,
I think Arvid and Dawid are right: as a workaround, you can try making
the S3 filesystem work in the client. But as a long-term solution, we can fix
it.
I created https://issues.apache.org/jira/browse/FLINK-19228 for tracking
this.
Best,
Jingsong
On Mon, Sep 14, 2020 at 3:57 PM Dawid Wysak
Congratulations, Dian!
Best, Jingsong
On Fri, Aug 28, 2020 at 11:06 AM Walter Peng wrote:
> congrats!
>
> Yun Tang wrote:
> > Congratulations , Dian!
>
--
Best, Jingsong Lee
Hi Paul,
It is a meaningless sink.
This is because for the sake of flexibility, the `StreamingFileCommitter`
is implemented as a `StreamOperator` rather than a `SinkFunction`.
But `StreamTableSink` requires a `SinkFunction`, so we give a meaningless
`DiscardingSink` to it. And this sink should b
lly I want to "monitor" a bucket on S3 and every file that gets
>> created in that bucket read it and stream it.
>>
>> If I understand correctly, I can just use env.readCsvFile() and config to
>> continuously read a folder path?
>>
>>
>> On Tue., Ju
nd every file that gets
> created in that bucket read it and stream it.
>
> If I understand correctly, I can just use env.readCsvFile() and config to
> continuously read a folder path?
>
>
> On Tue., Jul. 28, 2020, 1:38 a.m. Jingsong Li,
> wrote:
>
>> Hi John,
>>
Hi John,
Do you mean you want to read S3 CSV files using partition/bucket pruning?
If you just use the DataSet API, you can use CsvInputFormat to read CSV files.
If you want to use the Table/SQL API: in 1.10, the CSV format in Table does not
support partitioned tables. So the only way is to specify the partition/
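For the DataSet route, a minimal sketch (the S3 path and the two-column layout
are hypothetical, and the S3 filesystem must already be configured):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class ReadCsvSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        // Reads all CSV files under the given directory as (String, Integer) rows.
        DataSet<Tuple2<String, Integer>> rows = env
                .readCsvFile("s3://my-bucket/input/")
                .types(String.class, Integer.class);
        rows.print();
    }
}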
Thanks for being the release manager for the 1.11.1 release, Dian.
Best,
Jingsong
On Thu, Jul 23, 2020 at 10:12 AM Zhijiang
wrote:
> Thanks for being the release manager and the efficient work, Dian!
>
> Best,
> Zhijiang
>
> --
> F
Hi Kelly,
There are issues for tracking:
- Filesystem support single file reading:
https://issues.apache.org/jira/browse/FLINK-17398
- Filesystem support LookupJoin:
https://issues.apache.org/jira/browse/FLINK-17397
Best,
Jingsong
On Wed, Jul 22, 2020 at 3:13 AM Kelly Smith wrote:
> Thanks Leo
be able to get rid of the
> HadoopOutputForma and be able to use a more comfortable Source/Sink
> implementation.
>
> On Tue, Jul 21, 2020 at 12:38 PM Jingsong Li
> wrote:
>
>> Hi Flavio,
>>
>> AvroOutputFormat only supports writing Avro files.
>> I think
Stream.defaultWriteFields(ObjectOutputStream.java:1548)
>> at
>> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
>> at
>> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
>> at java.io.ObjectOutputSt
Hi Flavio,
AvroOutputFormat only supports writing Avro files.
I think you can use `AvroParquetOutputFormat` as a Hadoop output format
and wrap it with Flink's `HadoopOutputFormat`.
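A minimal sketch of that wrapping (assuming a parquet-avro version where
AvroParquetOutputFormat is generic; the schema and output path would be your own):

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.parquet.avro.AvroParquetOutputFormat;

import java.io.IOException;

public class AvroParquetSinkSketch {
    // Wraps the Hadoop mapreduce AvroParquetOutputFormat with Flink's HadoopOutputFormat
    // so a DataSet of (Void, GenericRecord) tuples can be written as Parquet files.
    public static void writeParquet(
            DataSet<Tuple2<Void, GenericRecord>> records, Schema schema, String outputPath)
            throws IOException {
        Job job = Job.getInstance();
        AvroParquetOutputFormat.setSchema(job, schema);
        FileOutputFormat.setOutputPath(job, new Path(outputPath));
        records.output(new HadoopOutputFormat<>(new AvroParquetOutputFormat<GenericRecord>(), job));
    }
}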
Best,
Jingsong
On Fri, Jul 17, 2020 at 11:59 PM Flavio Pompermaier
wrote:
> Hi to all,
> is there a way to writ
Hi Paul,
If your ORC table has no complex (list, map, row) types, you can try setting
`table.exec.hive.fallback-mapred-writer` to false in TableConfig. The Hive
sink will then use the native ORC writer; it is a workaround.
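A minimal sketch of that workaround (Flink 1.11-style setup; the Hive catalog
registration and the actual INSERT are omitted):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class HiveNativeOrcWriterSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());
        // Disable the fallback so the Hive sink uses Flink's native ORC writer
        // (only effective when the table has no complex list/map/row columns).
        tEnv.getConfig().getConfiguration()
                .setBoolean("table.exec.hive.fallback-mapred-writer", false);
        // ... register the HiveCatalog and run the INSERT into the ORC table as usual.
    }
}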
About this error, I think it is a bug in Hive 1.1 ORC. I will try to
re-produce
Hi Flavio,
For print:
- As Kurt said, you can use `table.execute().print();`: records will be
collected to the client (note: the client, not the cluster) and printed to the client console.
- But if you want to print records in runtime tasks like DataStream.print, you
can use [1]
[1]
https://ci.apache.org/projects/flink/f
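For the first option, a minimal sketch (Flink 1.11 APIs; the datagen table is
only an example source):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class ClientPrintSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());
        // Hypothetical bounded source, just to have something to print.
        tEnv.executeSql(
                "CREATE TABLE src (x INT) WITH ('connector' = 'datagen', 'number-of-rows' = '5')");
        Table table = tEnv.sqlQuery("SELECT x FROM src");
        // Collects the result to the client and prints it on the client console.
        table.execute().print();
    }
}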
ses are
> considered under different packages. That’s why an
> `java.lang.IllegalAccessError`
> occurred.
>
> BTW, the artifact links of the hive connectors seem to be broken. Should
> we use https://repo.maven.apache.org/maven2/ instead?
>
> Best,
> Paul Lam
>
> 20
Hi, it looks really weird.
Is there any possibility of a class conflict?
How do you manage your dependencies? Do you download the bundled jar into lib? [1]
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/hive/#using-bundled-hive-jar
Best,
Jingsong
On Mon, Jul 13, 2020 at 5:48
Hi,
Flink also has `HadoopOutputFormat`; it can wrap a Hadoop OutputFormat into a
Flink sink.
You can give it a try.
Best,
Jingsong
On Mon, Jul 13, 2020 at 2:34 PM 殿李 wrote:
> Hi,
>
> Yes, TF means TensorFlow.
>
> This class may not be in the spark package, but spark supports writing
> this file format
Congratulations!
Thanks Zhijiang and Piotr as release managers, and thanks everyone.
Best,
Jingsong
On Wed, Jul 8, 2020 at 10:51 AM chaojianok wrote:
> Congratulations!
>
> Very happy to make some contributions to Flink!
>
>
>
>
>
> At 2020-07-07 22:06:05, "Zhijiang" wrote:
>
> The Apache Fli
Hi,
It looks like you are using the watermark feature with the old Flink planner.
Is this what you expect? Or can you switch to the Blink planner?
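If switching helps, a minimal sketch of creating a Blink-planner environment
(1.11 API; the rest of the job is up to you):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class BlinkPlannerSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Explicitly select the Blink planner in streaming mode.
        EnvironmentSettings settings =
                EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
        // define watermarks / run SQL with tEnv as usual
    }
}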
Best,
Jingsong
On Sun, Jul 5, 2020 at 10:52 AM xin Destiny wrote:
> Hi, all:
> i use zeppelin execute sql, FLink version is Flink 1.11 snaps
rces key integrity.
>
> For DDL to create JDBC table, you can reference [2]
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/create.html#create-table
> [2]
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/jdbc.htm
--
> wangl...@geekplus.com.cn
>
>
> *From:* Jingsong Li
> *Date:* 2020-06-30 10:08
> *To:* wangl...@geekplus.com.cn
> *CC:* user
> *Subject:* Re: Flip-105 can the debezium/canal SQL sink to database
> directly?
> Hi Lei,
>
> INSERT INTO jdbc_table SELECT *
Hi Lei,
INSERT INTO jdbc_table SELECT * FROM changelog_table;
For the new Flink 1.11 connectors, you need to define a primary key on
jdbc_table (and your MySQL table needs the corresponding
primary key as well) because changelog_table contains "update" and "delete" records.
And then, the JDBC sink
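A minimal sketch of such a jdbc_table DDL (URL, columns and the changelog
table are hypothetical; username/password options are omitted):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class JdbcUpsertSinkSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());
        // The PRIMARY KEY makes the JDBC sink work in upsert mode, so the
        // "update"/"delete" records of the changelog source are applied by key.
        tEnv.executeSql(
                "CREATE TABLE jdbc_table ("
                        + "  id BIGINT,"
                        + "  name STRING,"
                        + "  PRIMARY KEY (id) NOT ENFORCED"
                        + ") WITH ("
                        + "  'connector' = 'jdbc',"
                        + "  'url' = 'jdbc:mysql://localhost:3306/db',"
                        + "  'table-name' = 'jdbc_table')");
        // changelog_table would be defined with a CDC format (e.g. debezium-json), then:
        // tEnv.executeSql("INSERT INTO jdbc_table SELECT * FROM changelog_table");
    }
}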
Hi Dmytro,
Yes, batch mode must disable checkpointing, so StreamingFileSink cannot be
used in batch mode (StreamingFileSink requires checkpointing regardless of
the format). We are refactoring it to be more generic so it can be used in batch
mode, but this is a future topic.
Currently, in batch mode, for the sink, we
Congratulations Yu, well deserved!
Best,
Jingsong
On Wed, Jun 17, 2020 at 1:42 PM Yuan Mei wrote:
> Congrats, Yu!
>
> GXGX & well deserved!!
>
> Best Regards,
>
> Yuan
>
> On Wed, Jun 17, 2020 at 9:15 AM jincheng sun
> wrote:
>
>> Hi all,
>>
>> On behalf of the Flink PMC, I'm happy to announce
Hi forideal,
It is just because we haven't had time to support it. We only support
LocalDateTime in Flink after 1.9.
Contributions are welcome.
Best,
Jingsong Lee
On Fri, May 22, 2020 at 2:48 PM forideal wrote:
> Hello, my friends
>
> env: Flink 1.10, Blink Planner
> table source
>
> CREATE TABL
Hi Thomas,
Good to hear from you. This is a very common problem.
In 1.11, we have two FLIPs to solve your problem [1][2]; you can take a look.
I think dynamic table options (table hints) are enough for your requirement.
[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-113%3A+Supports+Dyna
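A minimal sketch of a dynamic table options (hint) usage (the datagen table and
its options are only for illustration):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TableHintsSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());
        // Dynamic table options are disabled by default in 1.11.
        tEnv.getConfig().getConfiguration()
                .setBoolean("table.dynamic-table-options.enabled", true);
        // Hypothetical source table; the OPTIONS hint overrides connector options per query.
        tEnv.executeSql(
                "CREATE TABLE src (x INT) WITH ('connector' = 'datagen', 'number-of-rows' = '10')");
        tEnv.executeSql("SELECT * FROM src /*+ OPTIONS('rows-per-second' = '5') */").print();
    }
}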
hen
> we create a new file and start writing data into it. But this is dynamic as
> in based on the incoming stream.
>
> regards,
> Rahul
>
> On Wed, May 13, 2020 at 11:43 PM Jingsong Li
> wrote:
>
>> Hi, Dhurandar,
>>
>> Can you describe your needs
Hi, Dhurandar,
Can you describe your needs? Why do you need to modify file names flexibly?
What kind of name do you want?
Best,
Jingsong Lee
On Thu, May 14, 2020 at 2:05 AM dhurandar S wrote:
> Yes we looked at it ,
> The problem is the file name gets generated in a dynamic fashion, based on
>
tem+connector+in+Table
Best,
Jingsong Lee
On Tue, May 12, 2020 at 3:52 PM Khachatryan Roman <
khachatryan.ro...@gmail.com> wrote:
> AFAIK, yes, you can write streams.
>
> I'm pulling in Jingsong Li and Rui Li as they might know better.
>
> Regards,
> Roman
>
>
>
Hi,
Some suggestions from my side:
- Use synchronized (checkpointLock) around the work and ctx.collect?
- Put Thread.sleep(interval) outside the try/catch? Maybe it should not
swallow the interrupt exception (e.g. when the job is cancelled).
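A minimal sketch of what I mean (the polling source and its interval are hypothetical):

import java.io.IOException;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class PollingSourceSketch implements SourceFunction<String> {
    private static final long INTERVAL_MILLIS = 1000L; // hypothetical poll interval
    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        while (running) {
            try {
                String record = poll(); // hypothetical fetch from the external system
                // Emit under the checkpoint lock so emission and checkpoints don't interleave.
                synchronized (ctx.getCheckpointLock()) {
                    ctx.collect(record);
                }
            } catch (IOException e) {
                // handle/log fetch errors here, but do not swallow InterruptedException
            }
            // Sleep outside the try/catch so an interrupt (e.g. on job cancellation)
            // is not accidentally swallowed.
            Thread.sleep(INTERVAL_MILLIS);
        }
    }

    @Override
    public void cancel() {
        running = false;
    }

    private String poll() throws IOException {
        return "record"; // placeholder
    }
}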
Best,
Jingsong Lee
On Fri, May 8, 2020 at 2:52 AM Senthil Kumar wrote:
> I am impl
Hi,
My impression is that MongoDB's API is not complicated, so you can
implement a MongoDB sink yourself. Something like (a sketch; `fieldNames` and
the Mongo `collection` would be fields of your sink):
@Override
public void invoke(Row value, Context context) throws Exception {
    Map<String, Object> map = new HashMap<>();
    for (int i = 0; i < fieldNames.length; i++) {
        map.put(fieldNames[i], value.getField(i));
    }
    // e.g. with the MongoDB Java driver:
    // collection.insertOne(new org.bson.Document(map));
}
Hi Peter,
The troublesome part is knowing the "ending" of a bucket in a streaming job.
In 1.11, we are trying to implement a watermark-related bucket ending
mechanism[1] in Table/SQL.
[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-115%3A+Filesystem+connector+in+Table
Best,
Jingsong Lee
n'
value.format.option:
option1: '...'
option2: '...'
Best,
Jingsong Lee
On Thu, Apr 30, 2020 at 10:16 AM Jingsong Li wrote:
> Thanks Timo for starting the discussion.
>
> I am +1 for "format: 'json'".
> Take a look to Dawid's y
Thanks Timo for starting the discussion.
I am +1 for "format: 'json'".
Take a look to Dawid's yaml case:
connector: 'filesystem'
path: '...'
format: 'json'
format:
option1: '...'
option2: '...'
option3: '...'
Does this work?
According to my understanding, the 'format' key is the attribute o
Thanks Dian for managing this release!
Best,
Jingsong Lee
On Sun, Apr 26, 2020 at 7:17 PM Jark Wu wrote:
> Thanks Dian for being the release manager and thanks all who make this
> possible.
>
> Best,
> Jark
>
> On Sun, 26 Apr 2020 at 18:06, Leonard Xu wrote:
>
> > Thanks Dian for the release a
Hi, Benchao,
Glad to see your requirement about range partition.
I have a branch to support range partition: [1]
Can you describe your scenario in more detail? What sink did you use for your
jobs? A simple and complete business scenario? This can help the community
judge the importance of the range
; = 'maxDate',
> *'connector.read.parametervalues.1.value*'= '01/01/2020'
>
> Another question: why JDBC table source does not implement
> *FilterableTableSource?*
>
> On Wed, Apr 22, 2020 at 3:27 PM Jingsong Li
> wrote:
>
>> Hi,
>>
and I need to be sure of explaining the problem with the correct terms.
>
> Best,
> Flavio
>
> On Wed, Apr 22, 2020 at 11:52 AM Jingsong Li
> wrote:
>
>> Thanks for the explanation.
>> You can create JIRA for this.
>>
>> For "SELECT public.A.x, public.
e a VIEW
> in the db (but this is not alway possible) or in Flink (but from what I
> know this is very costly).
> In this case a parameter "scan.query.statement" without a
> "scan.parameter.values.provider.class" is super helpful and could improve
> performance a lo
://issues.apache.org/jira/browse/FLINK-16242
Best,
Jingsong Lee
On Wed, Apr 22, 2020 at 4:50 PM Jingsong Li wrote:
> Hi,
>
> Just like Jark said, it may be FLINK-13702[1]. Has been fixed in 1.9.2 and
> later versions.
>
> > Can it be a thread-safe problem or something else?
>
> Yes,
ave in mind something else?
>
> On Wed, Apr 22, 2020 at 10:33 AM Jingsong Li
> wrote:
>
>> Hi,
>>
>> Now in JDBCTableSource.getInputFormat, It's written explicitly: WHERE XXX
>> BETWEEN ? AND ?. So we must use `NumericBetweenParametersProvider`.
>> I d
Hi,
Just like Jark said, it may be FLINK-13702 [1], which has been fixed in 1.9.2 and
later versions.
> Can it be a thread-safe problem or something else?
Yes, it is a thread-safety problem with lazy materialization.
[1]https://issues.apache.org/jira/browse/FLINK-13702
Best,
Jingsong Lee
On Tue, Apr 2
Hi,
Now in JDBCTableSource.getInputFormat, it is written explicitly: WHERE XXX
BETWEEN ? AND ?. So we must use `NumericBetweenParametersProvider`.
I don't think this is a good long-term solution.
I think we should support filter push-down for JDBCTableSource; that
way, we can write the fi
+1. It's very useful for Flink newcomers.
Best,
Jingsong Lee
On Wed, Apr 15, 2020 at 10:23 PM Yun Tang wrote:
> +1 for this idea.
>
> I think there would existed many details to discuss once community ready
> to host the materials:
>
>1. How to judge whether a lab exercise should be added?
Hi Dmytro,
For 1.11:
Like Godfrey said, you can use
"TableEnvironment#createFunction/createTemporarySystemFunction". And like
Timo said, it can support functions with the new type system.
But for 1.10 and 1.9:
A workaround is:
"tEnv.getCatalog(tEnv.getCurrentCatalog()).get().createFunction"
You may n
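For the 1.11 path, a minimal sketch (the UDF itself is hypothetical):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;

public class CreateFunctionSketch {
    // Hypothetical UDF, just for illustration.
    public static class MyUpper extends ScalarFunction {
        public String eval(String s) {
            return s == null ? null : s.toUpperCase();
        }
    }

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());
        // Flink 1.11 path: register the function with the new type system.
        tEnv.createTemporarySystemFunction("my_upper", MyUpper.class);
        // then use it in SQL, e.g. SELECT my_upper(name) FROM my_table
    }
}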
correct it if I write something wrong.
>
> Thanks,
> Lei
>
> --
> wangl...@geekplus.com.cn
>
>
> *From:* Jingsong Li
> *Date:* 2020-04-10 11:03
> *To:* wangl...@geekplus.com.cn
> *CC:* Jark Wu ; lirui ; user
>
> *Subject:* Re:
int,
> `lifterangleerror` int, `lifterheighterror` int, `linearcmdspeed` int,
> `angluarcmdspeed` int, `liftercmdspeed` int, `rotatorcmdspeed` int)
> PARTITIONED BY (`hour` string) STORED AS parquet;
>
>
> Thanks,
> Lei
> --
> wangl...@geekplus.c
-
> wangl...@geekplus.com.cn
>
>
> *From:* Jark Wu
> *Date:* 2020-04-09 14:48
> *To:* wangl...@geekplus.com.cn; Jingsong Li ;
> lirui
> *CC:* user
> *Subject:* Re: fink sql client not able to read parquet format table
> Hi Lei,
>
> Are you using the newest 1.
Hi jingjing,
If seconds precision is OK for you,
you can try "to_timestamp(from_unixtime(your_time_seconds_long))".
Best,
Jingsong Lee
On Wed, Apr 1, 2020 at 8:56 AM jingjing bai
wrote:
> Thanks a lot!
>
> Jark Wu wrote on Wed, Apr 1, 2020 at 1:13 AM:
>
>> Hi Jing,
>>
>> I created https://issues.apache.org/
+1
In 1.10, we have set default planner for SQL Client to Blink planner[1].
Looks good.
[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Set-default-planner-for-SQL-Client-to-Blink-planner-in-1-10-release-td36379.html
Best,
Jingsong Lee
On Wed, Apr 1, 2020 at 11:39 AM
Thanks Jeff very much, that is very impressive.
Zeppelin is a very convenient development platform.
Best,
Jingsong Lee
On Tue, Mar 31, 2020 at 11:58 AM Zhijiang
wrote:
>
> Thanks for the continuous efforts for engaging in Flink ecosystem Jeff!
> Glad to see the progressive achievement. Wish more
etract mode not
> available yet. It is right?
>
> Thanks,
> Lei
>
>
> --
> wangl...@geekplus.com.cn
>
>
> *Sender:* Jingsong Li
> *Send Time:* 2020-03-25 11:39
> *Receiver:* wangl...@geekplus.com.cn
> *cc:* user
> *Subject
0 at 11:37 AM Jingsong Li wrote:
> Hi,
>
> This can be an upsert stream [1], and JDBC has an upsert sink now [2].
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/streaming/dynamic_tables.html#table-to-stream-conversion
> [2]
> https://ci.apache.org/pr
Best,
Jingsong Lee
On Wed, Mar 25, 2020 at 11:14 AM Jingsong Li wrote:
> Hi,
>
> > This can be an upsert stream [1]
>
> Best,
> Jingsong Lee
>
> On Wed, Mar 25, 2020 at 11:12 AM wangl...@geekplus.com.cn <
> wangl...@geekplus.com.cn> wrote:
>
>>
>>
Hi,
This can be an upsert stream [1]
Best,
Jingsong Lee
On Wed, Mar 25, 2020 at 11:12 AM wangl...@geekplus.com.cn <
wangl...@geekplus.com.cn> wrote:
>
> Create one table with kafka, another table with MySQL using flinksql.
> Write a sql to read from kafka and write to MySQL.
>
> INSERT INTO my
Hi wanglei,
> 1. Is there any flink-hive-connector that I can use to write to Hive in a
streaming way?
"Streaming kafka data sink to hive" is under discussion.[1]
And POC work is ongoing.[2] We want to support it in release-1.11.
> 2. Since HDFS is not friendly to frequent appends and Hive's data is
s
Hi Nick,
You can try "new Path("hdfs:///tmp/auditlog/")". There is one additional /
after hdfs://, which is a protocol name.
Best,
Jingsong Lee
On Fri, Mar 20, 2020 at 3:13 AM Nick Bendtner wrote:
> Hi guys,
> I am using flink version 1.7.2.
> I am trying to write to hdfs sink from my flink jo
s for FLIP-115. It is really useful feature for platform developers
> > who manage hundreds of Flink to Hive jobs in production.
>
> > I think we need add 'connector.sink.username' for UserGroupInformation when
> > data is written to HDFS
> >
> >
> > 在 2020/3
Hi forideal,
I got your point.
About replaying Kafka history data: if the data comes into Flink very unbalanced
between partitions,
that may lead to very large state and make disk/memory usage unstable.
And yes, FLIP-27 can help you.
About the workaround: IMO, it may be a little hacky, but it works. And it
s
Hi everyone,
I'd like to start a discussion about FLIP-115 Filesystem connector in Table
[1].
This FLIP will bring:
- Introduce a filesystem table factory in Table, supporting
csv/parquet/orc/json/avro formats.
- Introduce a streaming filesystem/Hive sink in Table.
CC to the user mailing list; if you have any u
end","rocksdb");
>
> eEnv.sqlUpdate(..)
>
>
> Thanks,
> Lei
>
> ------
> wangl...@geekplus.com.cn
>
>
> *From:* Jingsong Li
> *Date:* 2020-03-12 11:32
> *To:* wangl...@geekplus.com.cn
> *CC:* user
> *Subject:* Re
Hi wanglei,
If you are using Flink 1.10, you can set "state.backend=rocksdb" via
"TableConfig.getConfiguration".
And you can find related config options here[1].
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/config.html
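A minimal sketch of that (assuming the Blink planner in 1.10):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class RocksDBBackendSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());
        // Route the option through the table config, as suggested above.
        tEnv.getConfig().getConfiguration().setString("state.backend", "rocksdb");
        // tEnv.sqlUpdate(...) as usual afterwards.
    }
}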
Jingsong Lee
On Thu, Mar 12, 2020 at 11:15 AM wangl..
Hi olivier,
Sorry for the late reply.
In the Blink planner:
- Only Hive Parquet tables can be read now.
- If you want to support native Parquet files, you can modify
`ParquetTableSource` a little bit so that it extends StreamTableSource.
Best,
Jingsong Lee
On Wed, Feb 26, 2020 at 7:50 PM wrote:
> Hi communit
Hi Sunfulin,
I think this is very important too.
There is an issue to fix this [1]. Does that meet your requirement?
[1] https://issues.apache.org/jira/browse/FLINK-15396
Best,
Jingsong Lee
On Mon, Mar 9, 2020 at 12:33 PM sunfulin wrote:
> hi , community,
> I am wondering if there is some config
Hi tison and Aljoscha,
Do you think "--classpath can not be in front of jar file" is an
improvement? Or need documentation? Because I used to be confused.
Best,
Jingsong Lee
On Fri, Mar 6, 2020 at 10:22 PM tison wrote:
> I think the problem is that --classpath should be before the user jar,
>
Hi ouywl,
As I know, "--classpath" should be in front of jar file, it means:
/opt/flink/bin/flink run --jobmanager ip:8081 --class
com.netease.java.TopSpeedWindowing --parallelism 1 --detached --classpath
file:///opt/flink/job/fastjson-1.2.66.jar
/opt/flink/job/kafkaDemo19-1.0-SNAPSHOT.jar
You ca
6:37 PM faaron zheng wrote:
> Thanks for your attention. The input of the sink is 500, and there is no order
> by and limit.
>
> Jingsong Li 于 2020年3月6日周五 下午6:15写道:
>
>> Hi faaron,
>>
>> For sink parallelism.
>> - What is parallelism of the input of sink?