Hi Aj,
I found a workaround: putting my application jar inside the /usr/lib/flink/lib directory.
On Mon, Apr 6, 2020 at 11:27 PM aj wrote:
Hi Fanbin,
I am facing a similar issue. If you are able to resolve it, please let me
know how you did it.
https://stackoverflow.com/questions/61012350/flink-reading-a-s3-file-causing-jackson-dependency-issue
On Tue, Dec 17, 2019 at 7:50 AM ouywl wrote:
> Hi Bu
>I think I
Hi LakeShen,
I'm sorry, there is currently no such configuration for the JSON format.
I think it makes sense to add such a configuration, like
'format.ignore-parse-errors' in the CSV format.
I created FLINK-15396[1] to track this.
Best,
Jark
[1]: https://issues.apache.org/jira/browse/FLINK-15396
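To illustrate what "ignore parse errors" semantics mean, here is a toy
sketch in plain Java (not Flink's actual deserializer; the class and method
names are hypothetical): rows that fail to parse are skipped instead of
failing the whole job.

```java
import java.util.Arrays;
import java.util.List;
import java.util.OptionalInt;

public class IgnoreParseErrorsSketch {
    // Toy "parser": a malformed row yields an empty Optional instead of
    // throwing, so the caller can silently drop it.
    static OptionalInt parseRow(String row) {
        try {
            return OptionalInt.of(Integer.parseInt(row.trim()));
        } catch (NumberFormatException e) {
            return OptionalInt.empty(); // skip the malformed row
        }
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("1", "oops", "3");
        int[] parsed = rows.stream()
                .map(IgnoreParseErrorsSketch::parseRow)
                .filter(OptionalInt::isPresent)
                .mapToInt(OptionalInt::getAsInt)
                .toArray();
        System.out.println(Arrays.toString(parsed)); // [1, 3]
    }
}
```

With such an option disabled, the first malformed row would instead
surface as an exception and fail the job.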
On Thu, 26
Hi Polarisary.
I checked the Flink codebase and your stack traces; it seems you need to
format the timestamp as "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'".
The code is here:
https://github.com/apache/flink/blob/38e4e2b8f9bc63a793a2bddef5a578e3f80b7376/flink-formats/flink-json/src/main/java/org/apache/flink/forma
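As a quick check that the pattern accepts an ISO-8601-style value, here is
a minimal stdlib-only snippet (just java.time, not Flink's own parsing
code):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class TimestampPatternCheck {
    public static void main(String[] args) {
        // 'T' and 'Z' are quoted literals in the pattern, so the input must
        // contain them verbatim; millisecond precision (.SSS) is required.
        DateTimeFormatter fmt =
                DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
        LocalDateTime ts = LocalDateTime.parse("2019-12-17T07:50:00.123Z", fmt);
        System.out.println(ts); // 2019-12-17T07:50:00.123
    }
}
```

A value without the literal 'Z' suffix or without the millisecond part
would throw a DateTimeParseException with this pattern.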
Hi Komal,
Measuring latency is always a challenge. The problem here is that your
functions are chained, meaning that the result of a function is passed
directly to the next function, and only when the last function emits its
result is the first function called with a new record.
This makes meas
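Conceptually, chained functions behave roughly like composed function
calls in a single thread: one record flows through all of them before the
next record is processed. A toy sketch (plain Java function composition,
not Flink's actual runtime):

```java
import java.util.function.Function;

public class ChainingSketch {
    public static void main(String[] args) {
        // Two "operators" chained: f's output goes straight into g,
        // with no buffering or network hop in between.
        Function<Integer, Integer> f = x -> x * 2;
        Function<Integer, String> g = x -> "value=" + x;
        Function<Integer, String> chain = f.andThen(g);
        System.out.println(chain.apply(21)); // value=42
    }
}
```

This is why a per-operator latency measurement inside a chain mostly
reflects the whole chain's processing time, not an individual function's.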
Hi Chesnay,
I see. Many thanks for your prompt reply. I will make use of the
flink-shaded-hadoop-uber jar when deploying Flink with Docker, starting
from Flink v1.8.0.
Best regards,
On Fri, Oct 18, 2019 at 1:30 PM Chesnay Schepler wrote:
We will not release Flink versions bundling Hadoop.
The versioning of flink-shaded-hadoop-uber is entirely decoupled from the
Flink version.
You can just use the flink-shaded-hadoop-uber jar linked on the
downloads page with any Flink version.
On 18/10/2019 13:25, GezimSejdiu wrote:
Hi Flink community,
I'm aware of the split done for Flink's binaries starting from the Flink
1.8.0 release, i.e. there are no Hadoop-shaded binaries available on the Apache
dist. archive: https://archive.apache.org/dist/flink/flink-1.8.0/.
Are there any plans to move the hadoop-pre-build binaries t
Thanks a lot.
On Wed, Oct 9, 2019, 8:55 AM Chesnay Schepler wrote:
>
Java 11 support will be part of Flink 1.10 (FLINK-10725). You can take
the current master and compile&run it on Java 11.
We have not investigated later Java versions yet.
On 09/10/2019 14:14, Vishal Santoshi wrote:
Thank you. A related question: has Flink been tested with JDK 11 or above?
On Tue, Oct 8, 2019, 5:18 PM Steven Nelson wrote:
https://flink.apache.org/downloads.html#apache-flink-190
> On Oct 8, 2019, at 3:47 PM, Vishal Santoshi wrote:
>
> where do I get the corresponding jar for 1.9 ?
>
> flink-shaded-hadoop2-uber-2.7.5-1.8.0.jar
>
> Thanks..
Hi!
Not sure what is happening here.
- I cannot understand why MapR FS should use Flink's relocated ZK
dependency
- It might be that it doesn't and that all the logging we see probably
comes from Flink's HA services. Maybe the MapR stuff uses a different
logging framework and the logs do not
Hi Stephan,
sorry for the late answer, I didn't have access to the cluster.
Here is log and stacktrace.
Hope this helps,
Maxim.
-
2019-09-16 18:00:31,804 INFO
org.apache.fli
Hi Wesley,
This is not the way I want it; I want to read local JSON data in Flink SQL by
defining a DDL.
Best regards,
Anyang
Wesley Peng wrote on Sun, Sep 8, 2019 at 6:14 PM:
On 2019/9/8 5:40 PM, Anyang Hu wrote:
In Flink 1.9, is there a way to read a local JSON file in Flink SQL, like
the reading of a CSV file?
hi,
might this thread help you?
http://mail-archives.apache.org/mod_mbox/flink-dev/201604.mbox/%3cCAK+0a_o5=c1_p3sylrhtznqbhplexpb7jg_oq-sptre2neo...@mail.gmail.
Could you share the stack trace where the failure occurs, so we can see why
the Flink ZK is used during MapR FS access?
/CC Till and Tison - just FYI
On Fri, Aug 30, 2019 at 9:40 AM Maxim Parkachov
wrote:
Hi Stephan,
With previous versions (I tried around 1.7), I always had to compile MapR
Hadoop to get it working.
With 1.9 I took the Hadoop-free Flink distribution, which worked with MapR FS
until I switched on HA.
So it is hard to say whether this is a regression or not.
The error happens when Flink tries to initialize B
Hi Maxim!
The change of the MapR dependency should not have an impact on that.
Do you know if the same thing worked in prior Flink versions? Is that a
regression in 1.9?
The exception that you report: is that from Flink's HA services trying to
connect to ZK, or from the MapR FS client trying to c
Hi
on 2019/8/27 11:35, Simon Su wrote:
Could not resolve dependencies for project
org.apache.flink:flink-s3-fs-hadoop:jar:1.9-SNAPSHOT: Could not find
artifact org.apache.flink:flink-fs-hadoop-shaded:jar:tests:1.9-SNAPSHOT
in maven-ali (http://maven.aliyun.com/nexus/content/groups/public/)
A