Hi,
We recently bumped to Flink 1.4 from 1.3.2 and found an issue with the HDFS
configuration.
We are using *FlinkYarnSessionCli* to start the cluster and submit jobs.
In 1.3.2, we set the Flink properties below when using checkpoints:
state.backend.fs.checkpointdir = hdfs://nameservice0/.../..
state.
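For context, the same checkpoint directory can also be set programmatically on the
execution environment; a minimal sketch, where the HDFS path is only a placeholder:

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointSetupSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Placeholder path; mirrors the state.backend.fs.checkpointdir property above.
        env.setStateBackend(new FsStateBackend("hdfs://nameservice0/flink/checkpoints"));
        env.enableCheckpointing(60_000); // take a checkpoint every 60 seconds
        // ... define sources, operators and sinks here, then call env.execute("job")
    }
}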
Thanks Amit.
I will start with the following and see how it goes.
<dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-jdbc-client</artifactId>
    <version>3.3.3</version>
</dependency>
<dependency>
    <groupId>org.mariadb.jdbc</groupId>
    <artifactId>mariadb-java-client</artifactId>
    <version>1.5.5</version>
</dependency>
On Wed, Oct 3, 2018, 10:11 PM Amit Jain wrote:
> Hi Nicos,
>
> DatabaseClient is an example class to describe the asyncio conce
Hi Nicos,
DatabaseClient is an example class used to describe the async I/O concept. There
is no interface/class for this client in the Flink codebase. You can use any
MariaDB client implementation that supports concurrent requests to the DB.
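For illustration, here is a rough sketch of such a lookup built on Flink's async I/O
operator. It is not code from the Flink codebase: the class name, JDBC URL, credentials,
table and column names are all placeholders, and since the plain MariaDB JDBC driver is
blocking, the queries run on a private thread pool here (with a truly asynchronous client,
e.g. the Vert.x one, you would complete the ResultFuture from its callback instead).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class AsyncMariaDbLookup extends RichAsyncFunction<String, String> {

    // Placeholder connection settings.
    private static final String JDBC_URL = "jdbc:mariadb://db-host:3306/mydb";

    private transient ExecutorService executor;

    @Override
    public void open(Configuration parameters) {
        // Thread pool that runs the blocking JDBC calls off the operator thread.
        executor = Executors.newFixedThreadPool(10);
    }

    @Override
    public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
        CompletableFuture
                .supplyAsync(() -> {
                    // One connection per query keeps the sketch simple; use a pool in practice.
                    try (Connection conn = DriverManager.getConnection(JDBC_URL, "user", "password");
                         PreparedStatement stmt = conn.prepareStatement(
                                 "SELECT v FROM lookup_table WHERE k = ?")) {
                        stmt.setString(1, key);
                        try (ResultSet rs = stmt.executeQuery()) {
                            return rs.next() ? rs.getString(1) : null;
                        }
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                }, executor)
                .whenComplete((value, error) -> {
                    if (error != null) {
                        resultFuture.completeExceptionally(error);
                    } else if (value == null) {
                        resultFuture.complete(Collections.emptyList()); // no match found
                    } else {
                        resultFuture.complete(Collections.singletonList(value));
                    }
                });
    }

    @Override
    public void close() {
        executor.shutdown();
    }
}

The function is then wired into the pipeline with something like
AsyncDataStream.unorderedWait(input, new AsyncMariaDbLookup(), 1000, TimeUnit.MILLISECONDS, 100).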
--
Cheers,
Amit
On Wed, Oct 3, 2018 at 8:14 PM Nicos Maris wrote:
Hi Andrey,
Yes, we followed the guide. Our checkpoints/savepoints are already being
saved on S3/Ceph, using the ShadedHadoop/S3AFileSystem (because it's the one
where we managed to completely override the AWS address to point to our Ceph
cluster).
I suppose I can add the package with the AmazonClientEx
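For anyone following along, a rough sketch of the flink-conf.yaml keys involved, assuming
the shaded flink-s3-fs-hadoop filesystem (which mirrors s3.* keys onto the underlying
fs.s3a.* settings); the endpoint and credentials below are placeholders:

# placeholders -- point the shaded S3A filesystem at the Ceph gateway instead of AWS
s3.endpoint: https://ceph-gateway.example.com
s3.path.style.access: true
s3.access-key: <access-key>
s3.secret-key: <secret-key>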
I have been trying, to no avail, all variations of java
-Dhttp.nonProxyHosts=.. -Dhttps.proxyHost=http://... -Dhttps.proxyPort=911
-Dhttps.proxyUser= -Dhttps.proxyPassword=.. -Dhttp.proxyHost=http://..
-Dhttp.proxyPort=911 -Dhttp.proxyUser=... -Dhttp.proxyPassword=... -jar ..
after looking at
Hello,
I have a short question about the following example in your documentation:
https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/stream/operators/asyncio.html
What are the package and the Maven dependency of the class DatabaseClient?
I am building a Proof of Concept based on t
You don't need to include the Flink libraries themselves in the fat jar! You
can mark them as provided, and this reduces the jar size; they are already
provided by your Flink installation. One exception is the Table API, but I
simply recommend putting it in your Flink distribution folder (if your f
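For example, a sketch of how a core Flink dependency can be marked as provided in the
pom (the artifact and version below are just illustrative):

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.11</artifactId>
    <version>1.6.1</version>
    <!-- provided: the cluster's lib/ directory already ships this jar,
         so the shade plugin leaves it out of the fat jar -->
    <scope>provided</scope>
</dependency>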
Hi Julio,
What's the Flink version for this setup?
--
Thanks,
Amit
On Wed, Oct 3, 2018 at 4:22 PM Andrey Zagrebin
wrote:
> Hi Julio,
>
> Looks like some problem with dependencies.
> Have you followed the recommended s3 configuration guide [1]?
> Is it correct that your job already created chec
Hi Julio,
this might be a bug in job stats. Can you please create an issue in Jira
describing the steps you were doing and attach the complete logs?
Best,
Andrey
> On 2 Oct 2018, at 21:11, Julio Biason wrote:
>
> Oh, another piece of information:
>
> Because the job was failing and restarting, I did a
Hi Julio,
Looks like some problem with dependencies.
Have you followed the recommended s3 configuration guide [1]?
Is it correct that your job already created checkpoints/savepoints on s3 before?
I think if you manually create the file system using FileSystem.get(path), it
should be configured the s
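For reference, a small sketch of that lookup; the S3 URI below is a placeholder:

import java.net.URI;

import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;

public class FileSystemLookupSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URI; the scheme is resolved against the file systems registered
        // with Flink, so the shaded s3 filesystem picks up the same configuration.
        FileSystem fs = FileSystem.get(URI.create("s3://my-bucket/checkpoints"));
        // Equivalent shortcut via Path:
        FileSystem sameFs = new Path("s3://my-bucket/checkpoints").getFileSystem();
        System.out.println(fs.getClass() + " / " + sameFs.getClass());
    }
}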
Flink version is 1.5.3 / Hadoop 2.7
_
From: Narayanaswamy, Krishna [Tech]
Sent: Wednesday, October 03, 2018 3:42 PM
To: 'user@flink.apache.org'
Subject: Memory Allocate/Deallocate related Thread Deadlock encountered when
running a large job > 10k tasks
H
Hi folks,
We're currently deploying our Flink applications as a fat jar using the
maven-shade-plugin. The problem is that each application jar ends up being
approximately 130-140 MB, which is a pain to build and deploy every time. Is
there a way to exclude dependencies and just deploy a thin jar to the
Hi,
I am trying to run a single large job graph which has > 10k tasks. The shape
of the graph is something like:
DataSource -> Filter -> Map [...multiple]
==> Sink1
==> Sink2
I am using a parallelism of 10 with 1 slot per task manager and a memory
allocation of 32G per TM. The JM is runn
Hey,
We're running Flink 1.5.2 (I know there's 1.5.4 and 1.6.1) on YARN for some
jobs we're processing. It's a "long running" container to which we're
submitting jobs - all jobs submitted to that container have a parallelism of
32 (to be precise: in this job there are 8 subtasks with parallel
I know that Gordon (in CC) has looked more closely into this problem.
He should be able to share the restrictions and maybe even workarounds.
Best, Fabian
Hequn Cheng wrote on Wed, 3 Oct 2018, 05:09:
> Hi Elias,
>
> From my understanding, you can't do this since the state will no longer be
> compatib
Hi,
AFAIK it's not that easy. Flink's Python support is based on Jython, which
translates Python code into JVM bytecode. Therefore, native libs are not
supported.
Chesnay (in CC) knows the details here.
Best, Fabian
Hequn Cheng wrote on Wed, 3 Oct 2018, 04:30:
> Hi Bing,
>
> I'm not famil