[ https://issues.apache.org/jira/browse/FLINK-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15731549#comment-15731549 ]

Fabian Hueske commented on FLINK-3849:
--------------------------------------

Hi [~tonycox],

I would like to wait for FLINK-3848 to be finalized before starting on this 
issue.
Pushing filters into the CsvTableFormat does not provide much benefit because 
we need to scan the whole file anyway.
This feature is more suitable for querying a database (JDBC) or a KV-store 
(HBase, Cassandra, ...), or for reading data from a storage format such as 
Parquet or ORC that supports filter push-down.
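
To make the difference concrete, here is a minimal sketch (not Flink code; the 
predicate ADT and the table/field names are made up for illustration) of how a 
JDBC-backed source could turn a pushed predicate into the WHERE clause of the 
query it sends to the database, so rows are filtered before they ever reach 
Flink:

{code}
object PushDownSketch {

  sealed trait Predicate
  case class GreaterThan(field: String, value: Int)  extends Predicate
  case class Equals(field: String, value: String)    extends Predicate
  case class And(left: Predicate, right: Predicate)  extends Predicate

  // Render a predicate as a SQL WHERE clause. Values are inlined only to keep
  // the sketch short; a real source would bind them via a PreparedStatement.
  def toWhereClause(p: Predicate): String = p match {
    case GreaterThan(f, v) => s"$f > $v"
    case Equals(f, v)      => s"$f = '$v'"
    case And(l, r)         => s"(${toWhereClause(l)} AND ${toWhereClause(r)})"
  }

  def main(args: Array[String]): Unit = {
    val predicate = And(GreaterThan("amount", 100), Equals("currency", "EUR"))
    // prints: SELECT * FROM orders WHERE (amount > 100 AND currency = 'EUR')
    println(s"SELECT * FROM orders WHERE ${toWhereClause(predicate)}")
  }
}
{code}

A CSV source has no such option: it must read and parse every line before any 
predicate can be checked, so the push-down saves very little there.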

Returning the unsupported expressions is necessary because the 
FilterableTableSource needs to tell the optimizer which predicates it can apply 
itself and which predicates still need to be evaluated by the query engine 
(i.e., Flink).
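
To illustrate that contract, here is a small, self-contained sketch (the 
Expression ADT, the Option return type, and the "only plain comparisons" rule 
are my own simplifications, not Flink's actual types): the source keeps the 
conjuncts it can evaluate and hands back the remainder, for which the optimizer 
then plans a regular Filter:

{code}
object ResidualPredicateSketch {

  sealed trait Expression
  case class Comparison(field: String, op: String, value: Any) extends Expression
  case class Like(field: String, pattern: String)              extends Expression
  case class And(children: Seq[Expression])                    extends Expression

  class ExampleFilterableSource {
    // predicates this source will apply itself while reading
    private var pushed: Seq[Expression] = Seq.empty

    // pretend this source can only evaluate plain comparisons
    private def canApply(e: Expression): Boolean = e.isInstanceOf[Comparison]

    // returns the part of the predicate that Flink still has to evaluate,
    // or None if the source can handle everything
    def setPredicate(predicate: Expression): Option[Expression] = {
      val conjuncts = predicate match {
        case And(children) => children
        case single        => Seq(single)
      }
      val (supported, unsupported) = conjuncts.partition(canApply)
      pushed = supported
      unsupported match {
        case Seq()       => None
        case Seq(single) => Some(single)
        case many        => Some(And(many))
      }
    }
  }

  def main(args: Array[String]): Unit = {
    val source = new ExampleFilterableSource
    val residual = source.setPredicate(
      And(Seq(Comparison("amount", ">", 100), Like("name", "%flink%"))))
    // the LIKE cannot be pushed down, so Flink keeps a Filter for it
    println(residual) // Some(Like(name,%flink%))
  }
}
{code}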

> Add FilterableTableSource interface and translation rule
> --------------------------------------------------------
>
>                 Key: FLINK-3849
>                 URL: https://issues.apache.org/jira/browse/FLINK-3849
>             Project: Flink
>          Issue Type: New Feature
>          Components: Table API & SQL
>            Reporter: Fabian Hueske
>            Assignee: Anton Solovev
>
> Add a {{FilterableTableSource}} interface for {{TableSource}} implementations 
> which support filter push-down.
> The interface could look as follows:
> {code}
> trait FilterableTableSource {
>   // sets the predicate and returns the part of it that the source cannot apply
>   def setPredicate(predicate: Expression): Expression
> }
> {code}
> In addition, we need Calcite rules to push a predicate (or parts of it) into a 
> TableScan that refers to a {{FilterableTableSource}}. We might need to tweak 
> the cost model as well to push the optimizer in the right direction.
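
A rough sketch of how such a push-down rule could look. This is not the actual 
implementation: {{TableSourceScan}}, its {{tableSource}} field, and the 
{{toExpression}} / {{toRexNode}} helpers are placeholders, and a {{null}} return 
from {{setPredicate}} is assumed to mean the whole predicate was consumed.

{code}
import org.apache.calcite.plan.RelOptRule.{none, operand}
import org.apache.calcite.plan.{RelOptRule, RelOptRuleCall}
import org.apache.calcite.rel.logical.LogicalFilter

class PushFilterIntoTableSourceScanRule extends RelOptRule(
  operand(classOf[LogicalFilter],
    operand(classOf[TableSourceScan], none())),
  "PushFilterIntoTableSourceScanRule") {

  override def onMatch(call: RelOptRuleCall): Unit = {
    val filter: LogicalFilter = call.rel(0)
    val scan: TableSourceScan = call.rel(1)

    scan.tableSource match {
      case source: FilterableTableSource =>
        // hand the predicate to the source; it returns the unsupported remainder
        val remaining = source.setPredicate(toExpression(filter.getCondition))
        if (remaining == null) {
          // the source evaluates the whole predicate; the Filter can be dropped
          call.transformTo(scan)
        } else {
          // keep a Filter for the residual predicate on top of the scan
          call.transformTo(
            filter.copy(filter.getTraitSet, scan, toRexNode(remaining)))
        }
      case _ => // not filterable, nothing to push
    }
  }
}
{code}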



