This is a limitation of the presto version; use
flink-s3-fs-hadoop-1.11.3.jar instead.
On 08/09/2021 20:39, Dhiru wrote:
I copied
FROM flink:1.11.3-scala_2.12-java11
RUN mkdir ./plugins/flink-s3-fs-presto
RUN cp ./opt/flink-s3-fs-presto-1.11.3.jar ./plugins/flink-s3-fs-presto/
then started
Hi David,
I can confirm that I'm able to reproduce this behaviour. I've tried
profiling / flame graphs and was not able to make much sense of the
results. There are no IO/memory bottlenecks that I could notice; it looks
indeed like the job is stuck inside RocksDB itself. This might be an iss
Hi Peter,
Can you provide the relevant JobManager logs? And can you write down what
steps you had taken before the failure happened? Did this failure occur
during the Flink upgrade, or after the upgrade, etc.?
Best,
Piotrek
Wed, 8 Sep 2021 at 16:11, Peter Westermann
wrote:
> We recently upgraded f
We also have this configuration set, in case it makes any difference when
allocating tasks: cluster.evenly-spread-out-slots.
On 2021/09/08 18:09:52, Xiang Zhang wrote:
> Hello,
>
> We have an app running on Flink 1.10.2 deployed in standalone mode. We
> enabled task-local recovery by setting bo
I copied
FROM flink:1.11.3-scala_2.12-java11
RUN mkdir ./plugins/flink-s3-fs-presto
RUN cp ./opt/flink-s3-fs-presto-1.11.3.jar ./plugins/flink-s3-fs-presto/
then started getting this error while trying to run on AWS EKS and access
an S3 bucket: 2021-09-08 14:38:10 java.lang.UnsupportedOperatio
Hello,
We have an app running on Flink 1.10.2 deployed in standalone mode. We
enabled task-local recovery by setting both *state.backend.local-recovery* and
*state.backend.rocksdb.localdir*. The app has over 100 task managers and 2
job managers (active and passive).
This is what we have observed.
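For reference, a minimal sketch of the two settings mentioned above, set
programmatically on a Configuration (the local directory path is only an
assumption; in a standalone cluster these keys would normally go into
flink-conf.yaml):

import org.apache.flink.configuration.Configuration;

public class LocalRecoverySettings {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Enable task-local recovery so tasks can restore from a local state copy.
        conf.setBoolean("state.backend.local-recovery", true);
        // Local working directory for the RocksDB state backend on each
        // task manager (example path only).
        conf.setString("state.backend.rocksdb.localdir", "/mnt/flink/local-recovery");
        System.out.println(conf);
    }
}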
You need to put the flink-s3-fs-hadoop/presto jar into a directory
within the plugins directory; for example, the final path should look
like this:
/opt/flink/plugins/flink-s3-fs-hadoop/flink-s3-fs-hadoop-1.13.1.jar
Furthermore, you only need either the hadoop or the presto jar, _not_
both of them.
Yes, I copied it to the plugins folder, but I'm not sure: I see the same
jar in /opt as well by default.
root@d852f125da1f:/opt/flink/plugins# ls
README.txt  flink-s3-fs-hadoop-1.13.1.jar  metrics-datadog  metrics-influx
metrics-prometheus  metrics-statsd  external-resource-gpu
flink-s3-fs-presto-1.1
We recently upgraded from Flink 1.12.4 to 1.12.5 and are seeing some weird
behavior after a change in JobManager leadership: we're seeing two copies of
the same job; one of them is in SUSPENDED state and has a start time of zero.
Here’s the output from the /jobs/overview endpoint:
{
"jobs": [
Hi,
So for the past 2-3 days I have been looking for documentation that
elaborates how Flink takes care of restarting a data streaming job. I know
all the restart and failover strategies, but wanted to know how the
different components (JobManager, TaskManager, etc.) play a role while
restarting the
Hi,
I'm investigating why a job we use to inspect a Flink state is a lot slower
than the bootstrap job used to generate it.
I use RocksDB with a simple keyed value state mapping a string key to a
long value. Generating the bootstrap state from a CSV file with 100M
entries takes a couple of minutes
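For context, a minimal sketch (class and state names are hypothetical) of
the kind of state described above, i.e. a keyed ValueState<Long> addressed
by a String key, assuming the stream is keyed by the string itself:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Hypothetical reader: looks up the Long value stored for each String key.
public class StateLookup extends KeyedProcessFunction<String, String, Long> {
    private transient ValueState<Long> valueForKey;

    @Override
    public void open(Configuration parameters) {
        valueForKey = getRuntimeContext().getState(
                new ValueStateDescriptor<>("value", Types.LONG));
    }

    @Override
    public void processElement(String key, Context ctx, Collector<Long> out)
            throws Exception {
        Long current = valueForKey.value(); // one state-backend read per element
        out.collect(current == null ? 0L : current);
    }
}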
Hi,
Did you try to use a different order? The core module first and then the
Hive module?
The compatibility layer should work sufficiently for regular Hive UDFs
that don't aggregate data. Hive aggregation functions should work well
in batch scenarios. However, for streaming pipelines the aggregate
f
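For what it's worth, a minimal sketch of declaring that module order
explicitly; this assumes Flink 1.13+, where useModules is available, and
the Hive version string is only an example:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.module.hive.HiveModule;

public class ModuleOrder {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().build());

        // Register the Hive module (the version string is just an example).
        tEnv.loadModule("hive", new HiveModule("2.3.6"));

        // Function resolution follows this order: core first, then Hive.
        tEnv.useModules("core", "hive");
    }
}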