xintongsong commented on code in PR #754:
URL: https://github.com/apache/flink-web/pull/754#discussion_r1797530647


##########
docs/content/posts/2024-10-01-release-2.0-preview.md:
##########
@@ -0,0 +1,1264 @@
+---
+authors:
+- xtsong:
+  name: "Xintong Song"
+date: "2024-10-01T08:00:00Z"
+subtitle: ""
+title: Preview Release of Apache Flink 2.0
+aliases:
+- /news/2024/10/01/release-2.0-preview.html
+---
+
+The Apache Flink community is actively preparing Flink 2.0, the first major release since Flink 1.0 launched 8 years ago. As a significant milestone, Flink 2.0 is set to introduce numerous innovative features and improvements, along with some compatibility-breaking changes. To facilitate early adaptation to these changes for our users and partner projects (e.g., connectors), and to offer a sneak peek into the exciting new features while gathering feedback, we are now providing a preview release of Flink 2.0.
+
+**NOTICE:** Flink 2.0 Preview is not a stable release and should not be used in production environments. While this preview includes most of the breaking changes planned for Flink 2.0, the final release may still be subject to additional modifications.
+
+# Breaking Changes
+
+## API
+
+The following sets of APIs have been completely removed.
+- **DataSet API.** Please migrate to [DataStream 
API](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/overview/),
 or [Table 
API/SQL](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/overview/)
 if applicable. See also [How to Migrate from DataSet to 
DataStream](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/dataset_migration).
+- **Scala DataStream and DataSet API.** Please migrate to the Java [DataStream 
API](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/overview/).
+- **SourceFunction, SinkFunction and Sink V1.** Please migrate to [Source](https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/connector/source/Source.java) and [Sink V2](https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/connector/sink2/Sink.java). A minimal migration sketch follows this list.
+- **TableSource and TableSink.** Please migrate to [DynamicTableSource](https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/DynamicTableSource.java) and [DynamicTableSink](https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/sink/DynamicTableSink.java). See also [User-defined Sources & Sinks](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sourcessinks/).
+- **TableSchema, TableColumn and Types.** Please migrate to 
[Schema](https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/Schema.java),
 
[Column](https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/Column.java)
 and 
[DataTypes](https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/DataTypes.java)
 respectively.
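+
+For the DataStream side of the SourceFunction removal, here is a minimal migration sketch. It uses the built-in `NumberSequenceSource` as a stand-in for a real connector source; `MySourceFunction` in the comment is a hypothetical legacy source, and the job itself is illustrative.
+
+```java
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.connector.source.lib.NumberSequenceSource;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+public class SourceMigrationSketch {
+    public static void main(String[] args) throws Exception {
+        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+        // Before (removed): env.addSource(new MySourceFunction())
+        //   where MySourceFunction was a legacy SourceFunction implementation.
+        // After: implementations of the unified Source interface are attached
+        // via fromSource(); NumberSequenceSource is a built-in example.
+        DataStream<Long> stream = env.fromSource(
+                new NumberSequenceSource(0L, 999L),
+                WatermarkStrategy.noWatermarks(),
+                "sequence-source");
+
+        stream.print();
+        env.execute("source-migration-sketch");
+    }
+}
+```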
+
+Some deprecated methods have been removed from **DataStream API**. See also 
the list of [removed programming APIs](#list-of-removed-programming-apis).
+
+Some deprecated fields have been removed from **REST API**. See also the list 
of [removed REST APIs](#list-of-removed-rest-apis).
+
+**NOTICE:** You may find that some of the removed APIs still exist in the code base, usually in a different package. They are for internal use only and can be changed or removed at any time without notice. Please **DO NOT USE** them.
+
+### Connector Adaptation Plan
+
+As SourceFunction, SinkFunction and Sink V1 are removed, existing connectors depending on these APIs will not work with the Flink 2.x series. Here's the plan for adapting the first-party connectors.
+1. A new version of Kafka connector, adapted to the API changes, will be 
released right after the release of Flink 2.0 Preview.
+2. JDBC and ElasticSearch connectors will be adapted by the formal release of 
Flink 2.0.
+3. We plan to gradually migrate the remaining first-party connectors within 3 
subsequent minor releases (i.e., by Flink 2.3).
+
+## Configuration
+
+Configuration options meeting the following criteria are removed. See also the list of [removed configuration options](#list-of-removed-configuration-options).
+- Annotated as `@Public` and deprecated for at least 2 minor releases.
+- Annotated as `@PublicEvolving` and deprecated for at least 1 minor release.
+
+The legacy configuration file `flink-conf.yaml` is no longer supported. Please 
use `config.yaml` with standard YAML format instead. A migration tool is 
provided to convert a legacy `flink-conf.yaml` into a new `config.yaml`. See 
[Migrate from flink-conf.yaml to 
config.yaml](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#migrate-from-flink-confyaml-to-configyaml)
 for more details.
+
+Dedicated APIs for configuring specific options are removed from 
`StreamExecutionEnvironment` and `ExecutionConfig`. All options should now be 
set via `Configuration` and `ConfigOption`. See also the list of [removed 
programming APIs](#list-of-removed-programming-apis).
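+
+For example, here is a short sketch of the `Configuration`-based style, using the restart-strategy and watermark-interval options. The option constants are existing Flink `ConfigOption`s; the job setup itself is illustrative.
+
+```java
+import java.time.Duration;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.PipelineOptions;
+import org.apache.flink.configuration.RestartStrategyOptions;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+public class ConfigurationStyleSketch {
+    public static void main(String[] args) {
+        Configuration conf = new Configuration();
+
+        // Before (removed): env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, ...))
+        conf.set(RestartStrategyOptions.RESTART_STRATEGY, "fixed-delay");
+        conf.set(RestartStrategyOptions.RESTART_STRATEGY_FIXED_DELAY_ATTEMPTS, 3);
+
+        // Before (removed): env.getConfig().setAutoWatermarkInterval(200)
+        conf.set(PipelineOptions.AUTO_WATERMARK_INTERVAL, Duration.ofMillis(200));
+
+        StreamExecutionEnvironment env =
+                StreamExecutionEnvironment.getExecutionEnvironment(conf);
+        // ... build the job on env as usual
+    }
+}
+```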
+
+To avoid exposing internal interfaces, User-Defined Functions no longer have full access to `ExecutionConfig`. Instead, necessary methods such as `createSerializer()`, `getGlobalJobParameters()` and `isObjectReuseEnabled()` can now be called on `RuntimeContext` directly.
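+
+As a minimal sketch of the new access pattern (assuming the `open(OpenContext)` lifecycle method that replaces the legacy `open(Configuration)`):
+
+```java
+import org.apache.flink.api.common.functions.OpenContext;
+import org.apache.flink.api.common.functions.RichMapFunction;
+
+public class ObjectReuseAwareMapper extends RichMapFunction<String, String> {
+
+    private boolean objectReuseEnabled;
+
+    @Override
+    public void open(OpenContext openContext) {
+        // Before: getRuntimeContext().getExecutionConfig().isObjectReuseEnabled()
+        // Now queried from RuntimeContext directly:
+        objectReuseEnabled = getRuntimeContext().isObjectReuseEnabled();
+    }
+
+    @Override
+    public String map(String value) {
+        // The flag could, e.g., decide whether it is safe to cache the input
+        // object across invocations.
+        return value;
+    }
+}
+```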
+
+## Misc
+
+**State Compatibility** is not guaranteed between 1.x and 2.x. 
+
+**Java 8** is no longer supported. The minimum Java version supported by Flink is now Java 11.
+
+**Per-Job Deployment Mode** is removed. Please use [Application 
Mode](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/overview/#application-mode)
 instead.
+
+**Legacy Mode of Hybrid Shuffle** is removed.
+
+# Highlights of New Features
+
+## Disaggregated State Storage and Management
+
+The past decade has witnessed a dramatic shift in Flink's deployment modes, workload patterns, and the underlying hardware. We've moved from the map-reduce era, where workers were nodes with tightly coupled computation and storage, to a cloud-native world where containerized deployments on Kubernetes have become the standard. To enable Flink's cloud-native future, Flink 2.0 introduces Disaggregated State Storage and Management, which uses remote storage as the primary state storage.
+
+This new architecture solves the following challenges that the cloud-native era brings to Flink.
+1. Local Disk Constraints in containerization 
+2. Spiky Resource Usage caused by compaction in the current state model 
+3. Fast Rescaling for jobs with large states (hundreds of Terabytes) 
+4. Light and Fast Checkpoint in a native way 
+
+However, simply extending the state store to read and write from a remote DFS is insufficient, due to Flink's existing blocking execution model. In Flink 2.0, we propose an asynchronous execution model and introduce ForStDB, a disaggregated state backend designed for this purpose.
+
+In the preview version, we offer a complete end-to-end trial using [Nexmark](https://github.com/nexmark/nexmark) Q20 (SQL Filter Join). This includes:
+- Asynchronous execution: full support for asynchronous state APIs and checkpointing.
+- Asynchronous SQL join operator: SQL join operators rewritten to enable asynchronous join execution.
+- Hybrid async & sync execution: hybrid SQL plan, runtime execution and state access.
+- Performance: demonstrated performance results when writing directly to DFS in asynchronous execution mode.
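+
+To make the programming model concrete, below is an illustrative sketch of asynchronous state access. The class and method names follow the asynchronous state API proposals and may differ in the preview release; treat this as a sketch of the execution model, not a verbatim API reference.
+
+```java
+import org.apache.flink.api.common.state.v2.StateFuture;
+import org.apache.flink.api.common.state.v2.ValueState;
+
+// Illustrative sketch: asynchronous state reads return a StateFuture instead
+// of blocking the task thread on (possibly remote) state access.
+public class AsyncCountSketch {
+
+    private ValueState<Long> countState; // v2 (asynchronous) value state
+
+    public void onRecord(String key) {
+        // asyncValue() returns immediately; the callback runs once the state
+        // read completes, so DFS latency does not stall record processing.
+        StateFuture<Long> future = countState.asyncValue();
+        future.thenAccept(current -> {
+            long next = (current == null ? 0L : current) + 1;
+            countState.asyncUpdate(next);
+        });
+    }
+}
+```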
+
+## Materialized Table
+
+In Flink 1.20, we introduced Materialized Table as an MVP feature. Materialized Table is an innovative table type in Flink SQL, designed to further streamline batch and stream data processing while providing a unified development experience. In the upcoming Flink 2.0 release, we are enhancing operational support for Materialized Tables, including connector integration with cutting-edge lake formats and production-ready schedulers.
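+
+For reference, here is a sketch of the Materialized Table DDL from the 1.20 MVP, submitted through a `TableEnvironment`. The table names and freshness interval are placeholders, and running it requires a catalog that supports Materialized Tables.
+
+```java
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.TableEnvironment;
+
+public class MaterializedTableSketch {
+    public static void main(String[] args) {
+        TableEnvironment tEnv =
+                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
+
+        // FRESHNESS declares how stale the derived table may be; the engine
+        // refreshes it continuously or on a schedule accordingly.
+        tEnv.executeSql(
+                "CREATE MATERIALIZED TABLE dwd_orders "
+                        + "FRESHNESS = INTERVAL '3' MINUTE "
+                        + "AS SELECT order_id, user_id, amount FROM ods_orders");
+    }
+}
+```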
+
+## Adaptive Batch Execution
+
+Flink is also continuously enhancing its adaptive batch execution capabilities. The upcoming Flink 2.0 release will introduce dynamic optimization of logical plans, in addition to physical plans, based on insights gained from the execution of previous stages. The initial set of optimization strategies includes the dynamic application of broadcast join and skewed join optimization.
+
+## Streaming Lakehouse
+
+Represented by the integration of Apache Flink and Apache Paimon, the Streaming Lakehouse architecture has extended the unified data storage, open format and cost-effectiveness of the Lakehouse paradigm to the real-time domain. The upcoming Flink 2.0 release marks another significant step in enhancing the integration between Flink and Paimon. The two communities are collaborating closely to adapt to each other's strengths and fully leverage their cutting-edge features, yielding various improvements, including SQL plan optimization that utilizes Paimon's rich merge engines, enhanced bucket-aware lookup join performance, and Paimon support for Flink's Materialized Table, Adaptive Batch Execution and Speculative Execution.
+
+# Appendix
+
+## List of removed programming APIs
+
+### Removed Classes: 
+- `org.apache.flink.api.common.ExecutionConfig$SerializableSerializer`
+- `org.apache.flink.api.common.ExecutionMode`
+- `org.apache.flink.api.common.InputDependencyConstraint`
+- 
`org.apache.flink.api.common.restartstrategy.RestartStrategies$ExponentialDelayRestartStrategyConfiguration`
+- 
`org.apache.flink.api.common.restartstrategy.RestartStrategies$FailureRateRestartStrategyConfiguration`
+- 
`org.apache.flink.api.common.restartstrategy.RestartStrategies$FallbackRestartStrategyConfiguration`
+- 
`org.apache.flink.api.common.restartstrategy.RestartStrategies$FixedDelayRestartStrategyConfiguration`
+- 
`org.apache.flink.api.common.restartstrategy.RestartStrategies$NoRestartStrategyConfiguration`
+- 
`org.apache.flink.api.common.restartstrategy.RestartStrategies$RestartStrategyConfiguration`
+- `org.apache.flink.api.common.restartstrategy.RestartStrategies`
+- `org.apache.flink.api.common.time.Time`
+- `org.apache.flink.api.connector.sink.Committer`
+- `org.apache.flink.api.connector.sink.GlobalCommitter`
+- `org.apache.flink.api.connector.sink.Sink$InitContext`
+- 
`org.apache.flink.api.connector.sink.Sink$ProcessingTimeService$ProcessingTimeCallback`
+- `org.apache.flink.api.connector.sink.Sink$ProcessingTimeService`
+- `org.apache.flink.api.connector.sink.SinkWriter$Context`
+- `org.apache.flink.api.connector.sink.SinkWriter`
+- `org.apache.flink.api.connector.sink.Sink`
+- `org.apache.flink.api.connector.sink2.Sink$InitContextWrapper`
+- `org.apache.flink.api.connector.sink2.Sink$InitContext`
+- `org.apache.flink.api.connector.sink2.StatefulSink$StatefulSinkWriter`
+- `org.apache.flink.api.connector.sink2.StatefulSink$WithCompatibleState`
+- `org.apache.flink.api.connector.sink2.StatefulSink`
+- 
`org.apache.flink.api.connector.sink2.TwoPhaseCommittingSink$PrecommittingSinkWriter`
+- `org.apache.flink.api.connector.sink2.TwoPhaseCommittingSink`
+- `org.apache.flink.api.java.CollectionEnvironment`
+- `org.apache.flink.api.java.DataSet`
+- `org.apache.flink.api.java.ExecutionEnvironmentFactory`
+- `org.apache.flink.api.java.ExecutionEnvironment`
+- `org.apache.flink.api.java.LocalEnvironment`
+- `org.apache.flink.api.java.RemoteEnvironment`
+- `org.apache.flink.api.java.aggregation.Aggregations`
+- `org.apache.flink.api.java.aggregation.UnsupportedAggregationTypeException`
+- `org.apache.flink.api.java.functions.FlatMapIterator`
+- `org.apache.flink.api.java.functions.FunctionAnnotation$ForwardedFieldsFirst`
+- 
`org.apache.flink.api.java.functions.FunctionAnnotation$ForwardedFieldsSecond`
+- `org.apache.flink.api.java.functions.FunctionAnnotation$ForwardedFields`
+- 
`org.apache.flink.api.java.functions.FunctionAnnotation$NonForwardedFieldsFirst`
+- 
`org.apache.flink.api.java.functions.FunctionAnnotation$NonForwardedFieldsSecond`
+- `org.apache.flink.api.java.functions.FunctionAnnotation$NonForwardedFields`
+- `org.apache.flink.api.java.functions.FunctionAnnotation$ReadFieldsFirst`
+- `org.apache.flink.api.java.functions.FunctionAnnotation$ReadFieldsSecond`
+- `org.apache.flink.api.java.functions.FunctionAnnotation$ReadFields`
+- `org.apache.flink.api.java.functions.FunctionAnnotation`
+- `org.apache.flink.api.java.functions.GroupReduceIterator`
+- `org.apache.flink.api.java.io.CollectionInputFormat`
+- `org.apache.flink.api.java.io.CsvOutputFormat`
+- `org.apache.flink.api.java.io.CsvReader`
+- `org.apache.flink.api.java.io.DiscardingOutputFormat`
+- `org.apache.flink.api.java.io.IteratorInputFormat`
+- `org.apache.flink.api.java.io.LocalCollectionOutputFormat`
+- `org.apache.flink.api.java.io.ParallelIteratorInputFormat`
+- `org.apache.flink.api.java.io.PrimitiveInputFormat`
+- `org.apache.flink.api.java.io.PrintingOutputFormat`
+- `org.apache.flink.api.java.io.RowCsvInputFormat`
+- `org.apache.flink.api.java.io.SplitDataProperties$SourcePartitionerMarker`
+- `org.apache.flink.api.java.io.SplitDataProperties`
+- `org.apache.flink.api.java.io.TextInputFormat`
+- `org.apache.flink.api.java.io.TextOutputFormat$TextFormatter`
+- `org.apache.flink.api.java.io.TextOutputFormat`
+- `org.apache.flink.api.java.io.TextValueInputFormat`
+- `org.apache.flink.api.java.io.TypeSerializerInputFormat`
+- `org.apache.flink.api.java.io.TypeSerializerOutputFormat`
+- `org.apache.flink.api.java.operators.AggregateOperator`
+- `org.apache.flink.api.java.operators.CoGroupOperator$CoGroupOperatorSets`
+- `org.apache.flink.api.java.operators.CoGroupOperator`
+- `org.apache.flink.api.java.operators.CrossOperator$DefaultCross`
+- `org.apache.flink.api.java.operators.CrossOperator$ProjectCross`
+- `org.apache.flink.api.java.operators.CrossOperator`
+- `org.apache.flink.api.java.operators.CustomUnaryOperation`
+- `org.apache.flink.api.java.operators.DataSink`
+- `org.apache.flink.api.java.operators.DataSource`
+- `org.apache.flink.api.java.operators.DeltaIteration$SolutionSetPlaceHolder`
+- `org.apache.flink.api.java.operators.DeltaIteration$WorksetPlaceHolder`
+- `org.apache.flink.api.java.operators.DeltaIterationResultSet`
+- `org.apache.flink.api.java.operators.DeltaIteration`
+- `org.apache.flink.api.java.operators.DistinctOperator`
+- `org.apache.flink.api.java.operators.FilterOperator`
+- `org.apache.flink.api.java.operators.FlatMapOperator`
+- `org.apache.flink.api.java.operators.GroupCombineOperator`
+- `org.apache.flink.api.java.operators.GroupReduceOperator`
+- `org.apache.flink.api.java.operators.Grouping`
+- `org.apache.flink.api.java.operators.IterativeDataSet`
+- `org.apache.flink.api.java.operators.JoinOperator$DefaultJoin`
+- `org.apache.flink.api.java.operators.JoinOperator$EquiJoin`
+- 
`org.apache.flink.api.java.operators.JoinOperator$JoinOperatorSets$JoinOperatorSetsPredicate`
+- `org.apache.flink.api.java.operators.JoinOperator$JoinOperatorSets`
+- `org.apache.flink.api.java.operators.JoinOperator$ProjectJoin`
+- `org.apache.flink.api.java.operators.JoinOperator`
+- `org.apache.flink.api.java.operators.MapOperator`
+- `org.apache.flink.api.java.operators.MapPartitionOperator`
+- `org.apache.flink.api.java.operators.Operator`
+- `org.apache.flink.api.java.operators.PartitionOperator`
+- `org.apache.flink.api.java.operators.ProjectOperator`
+- `org.apache.flink.api.java.operators.ReduceOperator`
+- `org.apache.flink.api.java.operators.SingleInputOperator`
+- `org.apache.flink.api.java.operators.SingleInputUdfOperator`
+- `org.apache.flink.api.java.operators.SortPartitionOperator`
+- `org.apache.flink.api.java.operators.SortedGrouping`
+- `org.apache.flink.api.java.operators.TwoInputOperator`
+- `org.apache.flink.api.java.operators.TwoInputUdfOperator`
+- `org.apache.flink.api.java.operators.UdfOperator`
+- `org.apache.flink.api.java.operators.UnionOperator`
+- `org.apache.flink.api.java.operators.UnsortedGrouping`
+- `org.apache.flink.api.java.operators.join.JoinFunctionAssigner`
+- 
`org.apache.flink.api.java.operators.join.JoinOperatorSetsBase$JoinOperatorSetsPredicateBase`
+- `org.apache.flink.api.java.operators.join.JoinOperatorSetsBase`
+- `org.apache.flink.api.java.operators.join.JoinType`
+- `org.apache.flink.api.java.summarize.BooleanColumnSummary`
+- `org.apache.flink.api.java.summarize.ColumnSummary`
+- `org.apache.flink.api.java.summarize.NumericColumnSummary`
+- `org.apache.flink.api.java.summarize.ObjectColumnSummary`
+- `org.apache.flink.api.java.summarize.StringColumnSummary`
+- `org.apache.flink.api.java.utils.AbstractParameterTool`
+- `org.apache.flink.api.java.utils.DataSetUtils`
+- `org.apache.flink.api.java.utils.MultipleParameterTool`
+- `org.apache.flink.api.java.utils.ParameterTool`
+- `org.apache.flink.configuration.AkkaOptions`
+- `org.apache.flink.connector.file.src.reader.FileRecordFormat$Reader`
+- `org.apache.flink.connector.file.src.reader.FileRecordFormat`
+- `org.apache.flink.core.execution.RestoreMode`
+- `org.apache.flink.formats.avro.AvroRowDeserializationSchema`
+- `org.apache.flink.formats.csv.CsvRowDeserializationSchema$Builder`
+- `org.apache.flink.formats.csv.CsvRowDeserializationSchema`
+- `org.apache.flink.formats.csv.CsvRowSerializationSchema$Builder`
+- `org.apache.flink.formats.csv.CsvRowSerializationSchema`
+- `org.apache.flink.formats.json.JsonRowDeserializationSchema$Builder`
+- `org.apache.flink.formats.json.JsonRowDeserializationSchema`
+- `org.apache.flink.formats.json.JsonRowSerializationSchema$Builder`
+- `org.apache.flink.formats.json.JsonRowSerializationSchema`
+- `org.apache.flink.hadoopcompatibility.mapred.HadoopReduceCombineFunction`
+- `org.apache.flink.hadoopcompatibility.mapred.HadoopReduceFunction`
+- `org.apache.flink.metrics.reporter.InstantiateViaFactory`
+- `org.apache.flink.metrics.reporter.InterceptInstantiationViaReflection`
+- `org.apache.flink.runtime.jobgraph.SavepointConfigOptions`
+- `org.apache.flink.runtime.state.CheckpointListener`
+- `org.apache.flink.runtime.state.filesystem.FsStateBackendFactory`
+- `org.apache.flink.runtime.state.filesystem.FsStateBackend`
+- `org.apache.flink.runtime.state.memory.MemoryStateBackendFactory`
+- `org.apache.flink.runtime.state.memory.MemoryStateBackend`
+- `org.apache.flink.state.api.BootstrapTransformation`
+- `org.apache.flink.state.api.EvictingWindowReader`
+- `org.apache.flink.state.api.ExistingSavepoint`
+- `org.apache.flink.state.api.KeyedOperatorTransformation`
+- `org.apache.flink.state.api.NewSavepoint`
+- `org.apache.flink.state.api.OneInputOperatorTransformation`
+- `org.apache.flink.state.api.Savepoint`
+- `org.apache.flink.state.api.WindowReader`
+- `org.apache.flink.state.api.WindowedOperatorTransformation`
+- `org.apache.flink.state.api.WritableSavepoint`
+- `org.apache.flink.streaming.api.TimeCharacteristic`
+- 
`org.apache.flink.streaming.api.checkpoint.ExternallyInducedSource$CheckpointTrigger`
+- `org.apache.flink.streaming.api.checkpoint.ExternallyInducedSource`
+- 
`org.apache.flink.streaming.api.datastream.IterativeStream$ConnectedIterativeStreams`
+- 
`org.apache.flink.streaming.api.environment.CheckpointConfig$ExternalizedCheckpointCleanup`
+- `org.apache.flink.streaming.api.environment.ExecutionCheckpointingOptions`
+- `org.apache.flink.streaming.api.environment.StreamPipelineOptions`
+- `org.apache.flink.streaming.api.functions.AscendingTimestampExtractor`
+- `org.apache.flink.streaming.api.functions.sink.DiscardingSink`
+- `org.apache.flink.streaming.api.functions.sink.OutputFormatSinkFunction`
+- `org.apache.flink.streaming.api.functions.sink.PrintSinkFunction`
+- `org.apache.flink.streaming.api.functions.sink.RichSinkFunction`
+- `org.apache.flink.streaming.api.functions.sink.SinkFunction$Context`
+- `org.apache.flink.streaming.api.functions.sink.SinkFunction`
+- `org.apache.flink.streaming.api.functions.sink.SocketClientSink`
+- `org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction`
+- `org.apache.flink.streaming.api.functions.sink.WriteFormatAsCsv`
+- `org.apache.flink.streaming.api.functions.sink.WriteFormatAsText`
+- `org.apache.flink.streaming.api.functions.sink.WriteFormat`
+- `org.apache.flink.streaming.api.functions.sink.WriteSinkFunctionByMillis`
+- `org.apache.flink.streaming.api.functions.sink.WriteSinkFunction`
+- 
`org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$BulkFormatBuilder`
+- 
`org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$DefaultBulkFormatBuilder`
+- 
`org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$DefaultRowFormatBuilder`
+- 
`org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$RowFormatBuilder`
+- `org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink`
+- `org.apache.flink.streaming.api.functions.source.FromElementsFunction`
+- `org.apache.flink.streaming.api.functions.source.FromIteratorFunction`
+- 
`org.apache.flink.streaming.api.functions.source.FromSplittableIteratorFunction`
+- 
`org.apache.flink.streaming.api.functions.source.MessageAcknowledgingSourceBase`
+- 
`org.apache.flink.streaming.api.functions.source.MultipleIdsMessageAcknowledgingSourceBase`
+- `org.apache.flink.streaming.api.functions.source.ParallelSourceFunction`
+- `org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction`
+- `org.apache.flink.streaming.api.functions.source.RichSourceFunction`
+- `org.apache.flink.streaming.api.functions.source.SocketTextStreamFunction`
+- 
`org.apache.flink.streaming.api.functions.source.SourceFunction$SourceContext`
+- `org.apache.flink.streaming.api.functions.source.SourceFunction`
+- `org.apache.flink.streaming.api.functions.source.StatefulSequenceSource`
+- 
`org.apache.flink.streaming.api.functions.windowing.RichProcessAllWindowFunction`
+- 
`org.apache.flink.streaming.api.functions.windowing.RichProcessWindowFunction`
+- `org.apache.flink.streaming.api.operators.SetupableStreamOperator`
+- `org.apache.flink.streaming.api.windowing.time.Time`
+- `org.apache.flink.streaming.util.serialization.AbstractDeserializationSchema`
+- `org.apache.flink.streaming.util.serialization.DeserializationSchema`
+- `org.apache.flink.streaming.util.serialization.SerializationSchema`
+- `org.apache.flink.streaming.util.serialization.SimpleStringSchema`
+- 
`org.apache.flink.streaming.util.serialization.TypeInformationSerializationSchema`
+- `org.apache.flink.table.api.TableColumn$ComputedColumn`
+- `org.apache.flink.table.api.TableColumn$MetadataColumn`
+- `org.apache.flink.table.api.TableColumn$PhysicalColumn`
+- `org.apache.flink.table.api.TableColumn`
+- `org.apache.flink.table.api.TableSchema$Builder`
+- `org.apache.flink.table.api.TableSchema`
+- `org.apache.flink.table.api.constraints.Constraint$ConstraintType`
+- `org.apache.flink.table.api.constraints.Constraint`
+- `org.apache.flink.table.api.constraints.UniqueConstraint`
+- `org.apache.flink.table.connector.sink.SinkFunctionProvider`
+- `org.apache.flink.table.connector.sink.SinkProvider`
+- `org.apache.flink.table.connector.source.AsyncTableFunctionProvider`
+- `org.apache.flink.table.connector.source.SourceFunctionProvider`
+- `org.apache.flink.table.connector.source.TableFunctionProvider`
+- `org.apache.flink.table.descriptors.Descriptor`
+- `org.apache.flink.table.descriptors.RowtimeValidator`
+- `org.apache.flink.table.descriptors.Rowtime`
+- `org.apache.flink.table.descriptors.SchemaValidator`
+- `org.apache.flink.table.descriptors.Schema`
+- `org.apache.flink.table.factories.StreamTableSinkFactory`
+- `org.apache.flink.table.factories.StreamTableSourceFactory`
+- `org.apache.flink.table.factories.TableFactory`
+- `org.apache.flink.table.factories.TableSinkFactory$Context`
+- `org.apache.flink.table.factories.TableSinkFactory`
+- `org.apache.flink.table.factories.TableSourceFactory$Context`
+- `org.apache.flink.table.factories.TableSourceFactory`
+- `org.apache.flink.table.sinks.AppendStreamTableSink`
+- `org.apache.flink.table.sinks.RetractStreamTableSink`
+- `org.apache.flink.table.sinks.TableSink`
+- `org.apache.flink.table.sinks.UpsertStreamTableSink`
+- `org.apache.flink.table.sources.DefinedFieldMapping`
+- `org.apache.flink.table.sources.DefinedProctimeAttribute`
+- `org.apache.flink.table.sources.DefinedRowtimeAttributes`
+- `org.apache.flink.table.sources.FieldComputer`
+- `org.apache.flink.table.sources.NestedFieldsProjectableTableSource`
+- `org.apache.flink.table.sources.ProjectableTableSource`
+- `org.apache.flink.table.sources.TableSource`
+- `org.apache.flink.table.sources.tsextractors.ExistingField`
+- `org.apache.flink.table.sources.tsextractors.StreamRecordTimestamp`
+- `org.apache.flink.table.sources.tsextractors.TimestampExtractor`
+- `org.apache.flink.table.types.logical.TypeInformationRawType`
+- `org.apache.flink.table.utils.TypeStringUtils`
+
+
+### Modified Classes: 
+- `org.apache.flink.table.api.config.ExecutionConfigOptions`
+  - field removed:
+    - `org.apache.flink.configuration.ConfigOption<java.lang.Boolean> 
TABLE_EXEC_LEGACY_TRANSFORMATION_UIDS`
+    - `org.apache.flink.configuration.ConfigOption<java.lang.String> 
TABLE_EXEC_SHUFFLE_MODE`
+- `org.apache.flink.table.api.config.LookupJoinHintOptions`
+  - method modified:
+    - `org.apache.flink.shaded.guava32.com.google.common.collect.ImmutableSet<org.apache.flink.configuration.ConfigOption> (<- org.apache.flink.shaded.guava31.com.google.common.collect.ImmutableSet<org.apache.flink.configuration.ConfigOption>) getRequiredOptions()`
+    - `org.apache.flink.shaded.guava32.com.google.common.collect.ImmutableSet<org.apache.flink.configuration.ConfigOption> (<- org.apache.flink.shaded.guava31.com.google.common.collect.ImmutableSet<org.apache.flink.configuration.ConfigOption>) getSupportedOptions()`
+- `org.apache.flink.table.api.config.OptimizerConfigOptions`
+  - field removed:
+    - `org.apache.flink.configuration.ConfigOption<java.lang.Boolean> 
TABLE_OPTIMIZER_SOURCE_PREDICATE_PUSHDOWN_ENABLED`
+    - `org.apache.flink.configuration.ConfigOption<java.lang.Boolean> 
TABLE_OPTIMIZER_SOURCE_AGGREGATE_PUSHDOWN_ENABLED`
+- `org.apache.flink.table.api.Table`
+  - method modified:
+    - `org.apache.flink.table.legacy.api.TableSchema 
(<-org.apache.flink.table.api.TableSchema) getSchema()`
+- `org.apache.flink.table.api.TableConfig`
+  - method removed:
+    - `void setIdleStateRetentionTime(org.apache.flink.api.common.time.Time, 
org.apache.flink.api.common.time.Time)`

Review Comment:
   Method changed from `Time` to `Duration`. I'll update the description here. 


