Hi Jark,
I think the user ClassLoader is clear in SQL-CLI. I agree with you that we can
add the ClassLoader to the Context.
But how do we implement the user ClassLoader in TableEnvironment? There is no
ClassLoader in TableEnvironment. (Maybe EnvironmentSettings could contain the
user ClassLoader in the future.)
So I r
Hi all,
I would like to add a method `getClassloader()` to the context.
A TableFactory may require this classloader to find another TableFactory;
e.g., KafkaTableSourceSinkFactory uses it to find the format factory.
See FLINK-15992.
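To make this concrete, here is a minimal sketch (not the final API) of how a factory could use such a classloader. `Context#getClassloader()` and `Context#getTable()` are the accessors proposed in this thread, `buildSource` is a hypothetical helper, and the lookup assumes the existing `TableFactoryService.find` overload that takes a ClassLoader:

import org.apache.flink.table.factories.DeserializationSchemaFactory;
import org.apache.flink.table.factories.TableFactoryService;
import org.apache.flink.table.factories.TableSourceFactory;
import org.apache.flink.table.sources.TableSource;
import org.apache.flink.types.Row;

// Sketch only: Context#getClassloader() is the proposed addition; the rest
// follows the existing format-factory discovery in KafkaTableSourceSinkFactory.
public abstract class MyKafkaSourceFactory implements TableSourceFactory<Row> {

    @Override
    public TableSource<Row> createTableSource(Context context) {
        // Use the user classloader from the context instead of the
        // thread context classloader to discover the format factory.
        ClassLoader userClassLoader = context.getClassloader();
        DeserializationSchemaFactory<Row> formatFactory = TableFactoryService.find(
                DeserializationSchemaFactory.class,
                context.getTable().toProperties(),
                userClassLoader);
        // ... build the Kafka source with the discovered format ...
        return buildSource(context, formatFactory); // hypothetical helper
    }

    protected abstract TableSource<Row> buildSource(
            Context context, DeserializationSchemaFactory<Row> formatFactory);
}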
I don't think we need a new VOTE for this, I ju
+1, thanks for the efforts.
On Wed, Feb 5, 2020 at 4:00 PM Jingsong Li wrote:
> Hi all,
>
> As Jark suggested in the VOTE thread.
> JIRA created: https://issues.apache.org/jira/browse/FLINK-15912
>
> Best,
> Jingsong Lee
>
> On Wed, Feb 5, 2020 at 10:57 AM Jingsong Li
> wrote:
>
> > Hi Timo,
> >
>
Hi all,
As Jark suggested in the VOTE thread.
JIRA created: https://issues.apache.org/jira/browse/FLINK-15912
Best,
Jingsong Lee
On Wed, Feb 5, 2020 at 10:57 AM Jingsong Li wrote:
> Hi Timo,
>
> Good catch!
>
> I really like idea 2; a full Flink config looks very good to me.
>
> Try to unders
Hi Timo,
Good catch!
I really like idea 2; a full Flink config looks very good to me.
Trying to understand your first point: we don't actually have a `TableIdentifier`
class now, but the TableFactory already indicates the table, so I am OK with it.
The new Context should be:
/**
 * Context of table source creation.
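Spelled out a bit more, a sketch of that Context (method names follow this thread and Timo's renames in the message below; the return type `ReadableConfig` is an assumption for "a full Flink config", and the final interface tracked in FLINK-15912 may differ):

import org.apache.flink.configuration.ReadableConfig;
import org.apache.flink.table.catalog.CatalogTable;
import org.apache.flink.table.catalog.ObjectIdentifier;

/**
 * Context of table source creation (sketch; nested in TableSourceFactory
 * as Timo proposes further down in the thread).
 */
public interface Context {

    /** Full identifier (catalog.database.table) of the table to create. */
    ObjectIdentifier getObjectIdentifier();

    /** Catalog table metadata (schema, properties, ...) from the DDL / catalog. */
    CatalogTable getTable();

    /** Full Flink configuration, not only the table-specific config. */
    ReadableConfig getConfiguration();
}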
Hi Jingsong,
some last minute changes from my side:
1. rename `getTableIdentifier` to `getObjectIdentifier` to keep the API
obvious. Otherwise people expect a `TableIdentifier` class to be
returned here.
2. rename `getTableConfig` to `getConfiguration()`; in the future this
will not only be
So the interface will be:

public interface TableSourceFactory<T> extends TableFactory {

    ...

    /**
     * Creates and configures a {@link TableSource} based on the given
     * {@link Context}.
     *
     * @param context context of this table source.
     * @return the configured table source.
     */
    TableSource<T> createTableSource(Context context);
}
Hi all,
After rethinking and a discussion with Kurt, I'd like to remove "isBounded".
We can delay this boundedness information to the TableSink.
With the TableSink refactor, we need to consider "consumeDataStream"
and "consumeBoundedStream".
Best,
Jingsong Lee
On Mon, Feb 3, 2020 at 4:17 PM Jingsong Li wrote:
>
Hi Jark,
Thanks for getting involved. Yes, adding isBounded on the source is hard to
understand.
I recommend adding it only to the sink at present, because the sink has an
upstream, and that upstream is either bounded or unbounded.
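For example, a sink factory could branch on such a flag. A sketch assuming the sink-side Context exposes `isBounded()` as discussed here; the class name and the `createBatchSink`/`createStreamingSink` helpers are hypothetical:

import org.apache.flink.table.factories.TableSinkFactory;
import org.apache.flink.table.sinks.TableSink;
import org.apache.flink.types.Row;

// Sketch: assumes TableSinkFactory.Context exposes isBounded(), as proposed
// here for the sink side only.
public abstract class MyFileSystemSinkFactory implements TableSinkFactory<Row> {

    @Override
    public TableSink<Row> createTableSink(Context context) {
        if (context.isBounded()) {
            // the upstream is bounded, e.g. produced by a batch query
            return createBatchSink(context);
        } else {
            // the upstream is unbounded (streaming)
            return createStreamingSink(context);
        }
    }

    protected abstract TableSink<Row> createBatchSink(Context context);

    protected abstract TableSink<Row> createStreamingSink(Context context);
}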
Hi all,
Let me summarize with your suggestions.
public interface TableSourceFactory extends
Thanks Bowen and Timo for getting involved.
Hi Bowen,
> 1. is it better to have explicit APIs like "createBatchTableSource(...)"
I think it is better to keep one method, since in [1] we have reached consensus
in the DataStream layer to maintain a single API in "env.source". I think it is
good not to split batch
Hi Jingsong,
I'm also +1 for adding a context in the source and sink factories.
Based on Timo's proposal, IMO:
1) table path might be useful because it is a part of the DDL and users might
use it as the source/sink display name in explainSource().
2) isStreamingMode vs isBounded: however, a bound
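On point 1), for example, the identifier from the Context could be kept by the source and surfaced in `explainSource()`. A sketch; the class name and field are only illustrative:

import org.apache.flink.table.catalog.ObjectIdentifier;
import org.apache.flink.table.sources.StreamTableSource;
import org.apache.flink.types.Row;

// Sketch: the factory passes the identifier from the Context into the source,
// which uses it as a readable display name.
public abstract class MyTableSource implements StreamTableSource<Row> {

    private final ObjectIdentifier identifier; // e.g. context.getObjectIdentifier()

    protected MyTableSource(ObjectIdentifier identifier) {
        this.identifier = identifier;
    }

    @Override
    public String explainSource() {
        // e.g. "MyTableSource(default_catalog.default_database.orders)"
        return "MyTableSource(" + identifier.asSummaryString() + ")";
    }
}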
Hi Jingsong,
+1 for adding a context in the source and sink factories. A context
class also allows for future modifications without touching the
TableFactory interface again.
How about:
interface TableSourceFactory {
    interface Context {
        // ...
    }
}
Because I find the name `Cat
Hi Jingsong,
The 1st and 2nd pain points you described are very valid, as I'm more
familiar with them. I agree these are shortcomings of the current Flink SQL
design.
A couple comments on your 1st proposal:
1. is it better to have explicit APIs like "createBatchTableSource(...)"
and "createStrea