Is it possible?
For the DataSet API I've found [1]:
parameters.setBoolean("recursive.file.enumeration", true);

// pass the configuration to the data source
DataSet<String> logs = env.readTextFile("file:///path/with.nested/files")
    .withParameters(parameters);
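For completeness, here is a self-contained version of that snippet (the class
name and the final print sink are my additions for illustration; the rest
follows the documented DataSet example):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

public class RecursiveReadExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // enable recursive enumeration of nested input files
        Configuration parameters = new Configuration();
        parameters.setBoolean("recursive.file.enumeration", true);

        // pass the configuration to the data source
        DataSet<String> logs = env
            .readTextFile("file:///path/with.nested/files")
            .withParameters(parameters);

        // print is just a convenient sink for checking the output
        logs.print();
    }
}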
But can I achieve something similar with the Table API / SQL?
[2]: https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors
[3]: https://issues.apache.org/jira/browse/FLINK-15869
On Fri, Oct 30, 2020 at 1:29 PM Ruben Laguna wrote:
> Sure, I’ll write the JIRA issue
>
> On Fri, 30 Oct 2020 at 13:27, Dawid Wysakowicz
> wrote:
>
>> I am afraid it is not supported yet. It could be covered by FLIP-107 [1],
>> which would expose the file name as a
>> metadata column of a filesystem source.
>>
>> Would you like to create a JIRA issue for it?
>>
>> Best,
>>
>> Dawid
>>
>> [1]
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors
>>
>> On 30/10/2020 13:21, Ruben Laguna wrote:
I've asked this already on [stackoverflow][1].
Is there anything equivalent to Spark's `f.input_file_name()`? I
don't see anything that could be used in [system functions][2].
I have a dataset where they embedded some information in the filenames
(200k files) and I need to extract that as a new column.
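As a sketch of what the FLIP-107 direction mentioned above could look like
once implemented, something along these lines might work; note that the
`file.path` metadata key, the table definition, and the regex are
illustrative assumptions, not a confirmed Flink API at the time of this
thread:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Hypothetical sketch, assuming a FLIP-107-style filesystem connector
// that exposes the source file path as a metadata column.
TableEnvironment tEnv = TableEnvironment.create(
    EnvironmentSettings.newInstance().inBatchMode().build());

tEnv.executeSql(
    "CREATE TABLE logs (" +
    "  line STRING," +
    // the 'file.path' METADATA column is the assumed, not-yet-existing part
    "  `file.path` STRING NOT NULL METADATA VIRTUAL" +
    ") WITH (" +
    "  'connector' = 'filesystem'," +
    "  'path' = 'file:///path/with/files'," +
    "  'format' = 'raw'" +
    ")");

// The information embedded in the filename could then be pulled out as a
// new column with ordinary SQL functions, e.g. REGEXP_EXTRACT.
tEnv.executeSql(
    "SELECT REGEXP_EXTRACT(`file.path`, '([^/]*)$', 1) AS file_name, line " +
    "FROM logs").print();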
> The print connector writes to the stdout of the TaskManagers, so you will
> not see the results in the client. BTW, for printing you
> can use TableResult#print, which will nicely format your results.
>
> Best,
>
> Dawid
>
> On 29/10/2020 16:13, Ruben Laguna wrote:
> > How can I use the Table [Print SQL connector][1]? I tried the
> > following (batch mode)
How can I use the Table [Print SQL connector][1]? I tried the
following (batch mode) but it does not give any output:
EnvironmentSettings settings = EnvironmentSettings.newInstance()
    .useBlinkPlanner().inBatchMode().build();
TableEnvironment tEnv = TableEnvironment.create(settings);
final LocalD
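Following Dawid's TableResult#print suggestion above, here is a minimal
self-contained sketch that does produce output in the client; the datagen
table is just a stand-in for the real source:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class PrintResultExample {
    public static void main(String[] args) {
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
            .useBlinkPlanner().inBatchMode().build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // A small generated table, purely for demonstration.
        tEnv.executeSql(
            "CREATE TABLE input (name STRING, score INT) WITH (" +
            "  'connector' = 'datagen'," +
            "  'number-of-rows' = '5'" +
            ")");

        // TableResult#print brings the rows to the client and renders them
        // as a formatted table; the print connector instead writes to the
        // stdout of the TaskManagers, which is why nothing showed up locally.
        tEnv.executeSql("SELECT name, score FROM input").print();
    }
}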
I asked this question on [Stackoverflow][1] but I'm cross-posting here.
Are double quoted identifiers allowed in Flink SQL? [Calcite
documentation says to use double quoted
identifiers](https://calcite.apache.org/docs/reference.html#identifiers),
but they don't seem to work (see below). On the other hand,
backtick-quoted identifiers do work.
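For what it's worth, the default Flink SQL dialect escapes identifiers with
backticks rather than double quotes, so a quick check could look like this
(table and column names are made up; assumes a TableEnvironment tEnv as in
the snippets above):

// Backticks are the identifier quote character in Flink's default dialect;
// the double-quoted identifiers from the Calcite docs are not accepted here.
tEnv.executeSql(
    "CREATE TABLE `my table` (`my column` STRING) WITH (" +
    "  'connector' = 'datagen'," +
    "  'number-of-rows' = '3'" +
    ")");
tEnv.executeSql("SELECT `my column` FROM `my table`").print();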
Hi,
First-time user here: I'm just evaluating Flink at the moment, and I was
reading
https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/kubernetes.html#deploy-job-cluster
and I don't fully understand whether a Job Cluster will auto-terminate after
the job is completed (for a batch job).