dchristle commented on code in PR #680:
URL: https://github.com/apache/flink-web/pull/680#discussion_r1349773744


##########
docs/content/posts/2023-10-10-release-1.18.0.md:
##########
@@ -0,0 +1,542 @@
+---
+authors:
+- JingGe:
+  name: "Jing Ge"
+  twitter: jingengineer
+- KonstantinKnauf:
+  name: "Konstantin Knauf"
+  twitter: snntrable
+- SergeyNuyanzin:
+  name: "Sergey Nuyanzin"
+  twitter: uckamello
+- QingshengRen:
+  name: "Qingsheng Ren"
+  twitter: renqstuite
+date: "2023-10-10T08:00:00Z"
+subtitle: ""
+title: Announcing the Release of Apache Flink 1.18
+aliases:
+- /news/2023/10/10/release-1.18.0.html
+---
+
+The Apache Flink PMC is pleased to announce the release of Apache Flink 1.18.0. As usual, we are looking at a packed
+release with a wide variety of improvements and new features. Overall, 176 people contributed to this release, completing
+18 FLIPs and 700+ issues. Thank you!
+
+Let's dive into the highlights.
+
+# Towards a Streaming Lakehouse
+
+## Flink SQL Improvements
+
+### Introduce Flink JDBC Driver For SQL Gateway
+
+Flink 1.18 comes with a JDBC Driver for the Flink SQL Gateway. So, you can now use any SQL client that supports JDBC to
+interact with your tables via Flink SQL. Here is an example using [SQLLine](https://julianhyde.github.io/sqlline/manual.html).
+
+```shell
+sqlline> !connect jdbc:flink://localhost:8083
+```
+
+```shell
+sqlline version 1.12.0
+sqlline> !connect jdbc:flink://localhost:8083
+Enter username for jdbc:flink://localhost:8083:
+Enter password for jdbc:flink://localhost:8083:
+0: jdbc:flink://localhost:8083> CREATE TABLE T(
+. . . . . . . . . . . . . . .)>      a INT,
+. . . . . . . . . . . . . . .)>      b VARCHAR(10)
+. . . . . . . . . . . . . . .)>  ) WITH (
+. . . . . . . . . . . . . . .)>      'connector' = 'filesystem',
+. . . . . . . . . . . . . . .)>      'path' = 'file:///tmp/T.csv',
+. . . . . . . . . . . . . . .)>      'format' = 'csv'
+. . . . . . . . . . . . . . .)>  );
+No rows affected (0.122 seconds)
+0: jdbc:flink://localhost:8083> INSERT INTO T VALUES (1, 'Hi'), (2, 'Hello');
++----------------------------------+
+|              job id              |
++----------------------------------+
+| fbade1ab4450fc57ebd5269fdf60dcfd |
++----------------------------------+
+1 row selected (1.282 seconds)
+0: jdbc:flink://localhost:8083> SELECT * FROM T;
++---+-------+
+| a |   b   |
++---+-------+
+| 1 | Hi    |
+| 2 | Hello |
++---+-------+
+2 rows selected (1.955 seconds)
+0: jdbc:flink://localhost:8083>
+```
+
+**More Information**
+* [Documentation](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/jdbcdriver/)
+* [FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway](https://cwiki.apache.org/confluence/display/FLINK/FLIP-293%3A+Introduce+Flink+Jdbc+Driver+For+Sql+Gateway)
+
+
+### Stored Procedures
+
+Stored procedures provide a convenient way to encapsulate complex logic for data manipulation or administrative
+tasks in Apache Flink itself. Therefore, Flink introduces support for calling stored procedures.
+Flink now allows catalog developers to develop their own built-in stored procedures and then enables users to call these
+predefined stored procedures.

Review Comment:
   Here is a revised description of the new support for stored procedures. I think it provides better motivation for the feature and is clearer about the intended audience and benefits to users.
   
   ```suggestion
   ### Stored Procedure Support for Flink Connectors
   
   Stored procedures have been an indispensable tool in traditional databases,
   offering a convenient way to encapsulate complex logic for data manipulation
   and administrative tasks. They also offer the potential for enhanced
   performance, since they can trigger the handling of data operations directly
   within an external database. Other popular data systems like Trino and Iceberg
   automate and simplify common maintenance tasks into small sets of procedures,
   which greatly reduces users' administrative burden.
   
   This new update primarily targets developers of Flink connectors, who can now
   predefine custom stored procedures into connectors via the Catalog interface.
   The primary benefit to users is that connector-specific tasks that previously
   may have required writing custom Flink code can now be replaced with simple
   calls that encapsulate, standardize, and potentially optimize the underlying
   operations. Users can execute procedures using the familiar `CALL` syntax, and
   discover a connector's available procedures with `SHOW PROCEDURES`. Stored
   procedures within connectors improve the extensibility of Flink's SQL and
   Table APIs, and should unlock smoother data access and management for users.
   ```
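   
   To make the user-facing flow concrete, the post could also show what a call looks like. A minimal sketch (illustration only, not part of the suggestion; the catalog and procedure names here are hypothetical — the actual procedures and their arguments are defined by each connector):
   
   ```sql
   -- List the stored procedures the current catalog's connector provides
   SHOW PROCEDURES;
   
   -- Invoke one; `my_catalog`.`sys`.`compact_table` is a made-up name used
   -- only for illustration of the CALL syntax
   CALL `my_catalog`.`sys`.`compact_table`('default_db.T');
   ```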



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
