ferenc-csaky commented on code in PR #680: URL: https://github.com/apache/flink-web/pull/680#discussion_r1371372422
########## docs/content/posts/2023-10-10-release-1.18.0.md:
##########
@@ -0,0 +1,572 @@
+---
+authors:
+- JingGe:
+  name: "Jing Ge"
+  twitter: jingengineer
+- KonstantinKnauf:
+  name: "Konstantin Knauf"
+  twitter: snntrable
+- SergeyNuyanzin:
+  name: "Sergey Nuyanzin"
+  twitter: uckamello
+- QingshengRen:
+  name: "Qingsheng Ren"
+  twitter: renqstuite
+date: "2023-10-10T08:00:00Z"
+subtitle: ""
+title: Announcing the Release of Apache Flink 1.18
+aliases:
+- /news/2023/10/10/release-1.18.0.html
+---
+
+The Apache Flink PMC is pleased to announce the release of Apache Flink 1.18.0. As usual, we are looking at a packed
+release with a wide variety of improvements and new features. Overall, 174 people contributed to this release,
+completing 18 FLIPs and 700+ issues. Thank you!
+
+Let's dive into the highlights.
+
+# Towards a Streaming Lakehouse
+
+## Flink SQL Improvements
+
+### Introduce Flink JDBC Driver For SQL Gateway
+
+Flink 1.18 comes with a JDBC Driver for the Flink SQL Gateway. So, you can now use any SQL client that supports JDBC to
+interact with your tables via Flink SQL. Here is an example using [SQLLine](https://julianhyde.github.io/sqlline/manual.html).
+
+```shell
+sqlline> !connect jdbc:flink://localhost:8083
+```
+
+```shell
+sqlline version 1.12.0
+sqlline> !connect jdbc:flink://localhost:8083
+Enter username for jdbc:flink://localhost:8083:
+Enter password for jdbc:flink://localhost:8083:
+0: jdbc:flink://localhost:8083> CREATE TABLE T(
+. . . . . . . . . . . . . . .)>   a INT,
+. . . . . . . . . . . . . . .)>   b VARCHAR(10)
+. . . . . . . . . . . . . . .)> ) WITH (
+. . . . . . . . . . . . . . .)>   'connector' = 'filesystem',
+. . . . . . . . . . . . . . .)>   'path' = 'file:///tmp/T.csv',
+. . . . . . . . . . . . . . .)>   'format' = 'csv'
+. . . . . . . . . . . . . . .)> );
+No rows affected (0.122 seconds)
+0: jdbc:flink://localhost:8083> INSERT INTO T VALUES (1, 'Hi'), (2, 'Hello');
++----------------------------------+
+|                           job id |
++----------------------------------+
+| fbade1ab4450fc57ebd5269fdf60dcfd |
++----------------------------------+
+1 row selected (1.282 seconds)
+0: jdbc:flink://localhost:8083> SELECT * FROM T;
++---+-------+
+| a |     b |
++---+-------+
+| 1 |    Hi |
+| 2 | Hello |
++---+-------+
+2 rows selected (1.955 seconds)
+0: jdbc:flink://localhost:8083>
+```
+
+**More Information**
+* [Documentation](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/table/jdbcdriver/)
+* [FLIP-293: Introduce Flink Jdbc Driver For SQL Gateway](https://cwiki.apache.org/confluence/display/FLINK/FLIP-293%3A+Introduce+Flink+Jdbc+Driver+For+Sql+Gateway)
+
+
+### Stored Procedure Support for Flink Connectors
+
+Stored procedures have been an indispensable tool in traditional databases,
+offering a convenient way to encapsulate complex logic for data manipulation
+and administrative tasks. They also offer the potential for enhanced
+performance, since they can trigger the handling of data operations directly
+within an external database. Other popular data systems like Trino and Iceberg
+automate and simplify common maintenance tasks into small sets of procedures,
+which greatly reduces users' administrative burden.
+
+This new update primarily targets developers of Flink connectors, who can now
+predefine custom stored procedures into connectors via the Catalog interface.
+The primary benefit to users is that connector-specific tasks that previously
+may have required writing custom Flink code can now be replaced with simple
+calls that encapsulate, standardize, and potentially optimize the underlying
+operations. Users can execute procedures using the familiar `CALL` syntax, and
+discover a connector's available procedures with `SHOW PROCEDURES`.
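As a sketch of the syntax described above (the procedure name and its argument below are purely hypothetical; the procedures actually available are defined by each connector's catalog, not by Flink itself):

```sql
-- List the procedures the current catalog makes available.
SHOW PROCEDURES;

-- Invoke one of them with the CALL syntax. `sys.compact` and its
-- table-path argument are invented for illustration; consult your
-- connector's documentation for the procedures it actually provides.
CALL `sys`.`compact`('default.my_table');
```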
+Stored procedures within connectors improve the extensibility of Flink's SQL and
+Table APIs, and should unlock smoother data access and management for users.
+
+**More Information**
+* [Documentation](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/table/procedures/)
+* [FLIP-311: Support Call Stored Procedure](https://cwiki.apache.org/confluence/display/FLINK/FLIP-311%3A+Support+Call+Stored+Procedure)
+
+### Extended DDL Support
+
+From this release onwards, Flink supports
+
+- `REPLACE TABLE AS SELECT`
+- `CREATE OR REPLACE TABLE AS SELECT`
+
+and both these commands, as well as the previously supported `CREATE TABLE AS SELECT`, now support atomicity, provided the underlying
+connector also supports it.
+
+Moreover, Apache Flink now supports `TRUNCATE TABLE` in batch execution mode. As before, the underlying connector needs
+to implement and provide this capability.
+
+And, finally, we have also implemented support for adding, dropping and listing partitions via
+
+- `ALTER TABLE ADD PARTITION`
+- `ALTER TABLE DROP PARTITION`
+- `SHOW PARTITIONS`
+
+**More Information**
+- [Documentation on TRUNCATE](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/table/sql/truncate/)
+- [Documentation on CREATE OR REPLACE](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/table/sql/create/#create-or-replace-table)
+- [Documentation on ALTER TABLE](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/table/sql/alter/#alter-table)
+- [FLIP-302: Support TRUNCATE TABLE statement in batch mode](https://cwiki.apache.org/confluence/display/FLINK/FLIP-302%3A+Support+TRUNCATE+TABLE+statement+in+batch+mode)
+- [FLIP-303: Support REPLACE TABLE AS SELECT statement](https://cwiki.apache.org/confluence/display/FLINK/FLIP-303%3A+Support+REPLACE+TABLE+AS+SELECT+statement)
+- [FLIP-305: Support atomic for CREATE TABLE AS SELECT(CTAS) statement](https://cwiki.apache.org/confluence/display/FLINK/FLIP-305%3A+Support+atomic+for+CREATE+TABLE+AS+SELECT%28CTAS%29+statement)
+
+### Time Traveling
+
+Flink now supports the time travel SQL syntax for querying historical versions of data, which allows users to specify a point
+in time and retrieve the data and schema of a table as it appeared at that time. With time travel, users can easily
+analyze and compare historical versions of data.
+
+**More information**
+- [Documentation](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/table/sql/queries/time-travel/)

Review Comment:
   Not at the moment, but once the 1.18 docs are released and published as `flink-docs-stable`, it will work. Currently the following one is valid: https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/sql/queries/time-travel/
   
   IMO we may hardcode the 1.18 docs here, as these are the 1.18 release notes. One thing that can happen is that if someone checks the link N releases later and the docs got restructured or removed by then, it may not point where it should have been, but I do not have a strong preference.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org