If you need more information about our issue, we can organize a call and
discuss it.
Regards,
Juan
From: Rong Rong
Date: Sunday, January 12, 2020 at 6:13 PM
To: Juan Gentile
Cc: Aljoscha Krettek, "user@flink.apache.org", Arnau
wrote:
Hi,
Interesting! What problem are you seeing when you don't unset that
environment variable? From reading UserGroupInformation.java our code
should almost work when that environment variable is set.
Best,
Aljoscha
On 10.01.20 15:23, Juan Gentile wrote:
>> security.kerberos.login.keytab
>> security.kerberos.login.principal
>> security.kerberos.login.contexts
>>
>>
>>
>> Best,
>> Yang
>>
>> On Mon, Jan 6, 2020 at 3:55 PM Juan Gentile wrote:
>>
>>> Hello Rong, Chesnay,
just not supported by Flink or I’m doing something wrong.
Thank you,
Juan
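For reference, the three security.kerberos.login.* keys quoted above are set in
flink-conf.yaml; a minimal sketch, where the keytab path, principal, and contexts
are placeholders rather than values taken from this thread:

    security.kerberos.login.keytab: /path/to/user.keytab
    security.kerberos.login.principal: user@EXAMPLE.COM
    security.kerberos.login.contexts: Client,KafkaClient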
From: Rong Rong
Date: Saturday, January 4, 2020 at 6:06 PM
To: Chesnay Schepler
Cc: Juan Gentile, "user@flink.apache.org", Oleksandr Nitavskyi
Subject: Re: Yarn Kerberos issue
Hi Juan,
Chesnay was right.
Hello,
I'm trying to submit a job (batch WordCount) to a Yarn cluster. I'm trying to use
delegation tokens and I’m getting the following error:
org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy
Yarn session cluster
at
org.apache.flink.yarn.AbstractYarn
Hello!
We are running Flink on Yarn and we are currently getting the following error:
2019-08-23 06:11:01,534 WARN org.apache.hadoop.security.UserGroupInformation
- PriviledgedActionException as: (auth:KERBEROS)
cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.se
the watermarks handled in the source operator.
Please let us know your opinion.
Thank you,
Juan G.
From: Konstantin Knauf
Date: Sunday, July 7, 2019 at 10:14 PM
To: Juan Gentile
Cc: "user@flink.apache.org" , Olivier Solliec
, Oleksandr Nitavskyi
Subject: Re: Watermarks and Kafka
H
Hello,
We are currently facing an issue where we need to store the instance of the
watermark and timestamp assigner in state while consuming from Kafka.
For that purpose we took a look at FlinkKafkaConsumerBase and noticed that
since the methods (snapshotState and initializeState from the
Ch
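The methods mentioned above are presumably those of the CheckpointedFunction
interface; a minimal Scala sketch of that snapshotState/initializeState pair, where
the class name, the state name, and the idea of tracking the last timestamp are
assumptions for illustration only:

    import org.apache.flink.api.common.functions.RichMapFunction
    import org.apache.flink.api.common.state.{ListState, ListStateDescriptor}
    import org.apache.flink.runtime.state.{FunctionInitializationContext, FunctionSnapshotContext}
    import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction

    // Keeps the largest timestamp seen so far in operator state so it survives a restore.
    class TimestampTrackingMap extends RichMapFunction[(String, Long), (String, Long)]
        with CheckpointedFunction {

      @transient private var tsState: ListState[java.lang.Long] = _
      private var lastTimestamp: Long = Long.MinValue

      override def map(value: (String, Long)): (String, Long) = {
        lastTimestamp = math.max(lastTimestamp, value._2)
        value
      }

      override def initializeState(context: FunctionInitializationContext): Unit = {
        tsState = context.getOperatorStateStore.getListState(
          new ListStateDescriptor[java.lang.Long]("last-timestamp", classOf[java.lang.Long]))
        val it = tsState.get().iterator()
        if (it.hasNext) lastTimestamp = it.next()
      }

      override def snapshotState(context: FunctionSnapshotContext): Unit = {
        tsState.clear()
        tsState.add(lastTimestamp)
      }
    }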
Hello!
We currently have a job which reads from Kafka and uses punctuated watermarks
based on the messages we read. We currently keep track of the watermarks for
each partition to emit a consensus watermark, taking the smallest of all
partitions.
We ran into an issue because we are not storing
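A punctuated assigner of the kind described above typically looks roughly like the
following (the MyEvent type and its fields are hypothetical); when such an assigner
is attached to the Kafka consumer via assignTimestampsAndWatermarks, Flink keeps a
watermark per partition and, as far as I understand, emits the minimum across
partitions:

    import org.apache.flink.streaming.api.functions.AssignerWithPunctuatedWatermarks
    import org.apache.flink.streaming.api.watermark.Watermark

    // Hypothetical event carrying its own timestamp and a "this message is a marker" flag.
    case class MyEvent(eventTime: Long, endOfBucket: Boolean)

    class PunctuatedAssigner extends AssignerWithPunctuatedWatermarks[MyEvent] {

      override def extractTimestamp(element: MyEvent, previousElementTimestamp: Long): Long =
        element.eventTime

      // Emit a watermark only for the messages we treat as markers; return null otherwise.
      override def checkAndGetNextWatermark(lastElement: MyEvent, extractedTimestamp: Long): Watermark =
        if (lastElement.endOfBucket) new Watermark(extractedTimestamp) else null
    }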
Hello!
We are trying to run a job in Mesos which launches its own cluster (as opposed
to launching the cluster first and then submitting jobs to it).
We have a couple of questions/issues:
1. Is there any easier way to achieve this without having to generate a
graph file before submitting the jo
Hello!
We are having a small problem while trying to deploy Flink on Mesos using
Marathon. In our setup of Mesos we are required to specify the amount of disk
space we want to have for the applications we deploy there.
The current default value in Flink is 0 and it is currently not a configurable
parameter
Hello!
We are migrating to the latest 1.6 version and all the jobs seem to work fine,
but when we check individual jobs through the web interface we encounter the
issue that after clicking on a job, either it takes too long to load the
information of the job or it never loads at all.
Has anyone
Hello!
We are trying to migrate from 1.4 to 1.6 and we are getting the following
exception in our jobs:
org.apache.flink.util.FlinkException: The assigned slot
container_e293_1539164595645_3455869_01_011241_2 was removed.
at
org.apache.flink.runtime.resourcemanager.slotmanager.SlotManag
Hello!
We have found a weird issue while replacing the source in one of our Flink SQL
jobs.
We have a job which was reading from a Kafka topic (with externalized
checkpoints) and we needed to change the topic while keeping the same logic for
the job/SQL.
After we restarted the job, instead of c
Hello,
I'm looking at the following page of the documentation
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/state.html
particularly at this piece of code:
val stream: DataStream[(String, Int)] = ...
val counts: DataStream[(String, Int)] = stream
.keyBy(_._1)
.m
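The snippet is cut off here; the example on that documentation page continues with
mapWithState, roughly as in this self-contained sketch (reconstructed from the
stable docs, not copied from the original mail):

    import org.apache.flink.streaming.api.scala._

    object MapWithStateExample {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment

        val stream: DataStream[(String, Int)] = env.fromElements(("a", 1), ("b", 2), ("a", 3))

        // Running count per key, kept in keyed state via mapWithState.
        val counts: DataStream[(String, Int)] = stream
          .keyBy(_._1)
          .mapWithState((in: (String, Int), count: Option[Int]) =>
            count match {
              case Some(c) => ((in._1, c), Some(c + in._2))
              case None    => ((in._1, 0), Some(in._2))
            })

        counts.print()
        env.execute("mapWithState example")
      }
    }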
Hello,
We are using the SQL API and we were wondering if it's possible to capture and
log late events. We could not find a way to do so, given that the time window is
managed inside the SQL.
Is there a way to do this?
Thank you,
Juan
Hello!
I'm trying to implement a process function with a cache (using Guava), following this
https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/stream/operators/process_function.html
But when I run it I get the following exception:
com.esotericsoftware.kryo.KryoException: java.lang.NullPointe
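For context, the setup described above is usually written along these lines; in
this sketch the Guava cache is marked @transient and built in open(), so Kryo never
has to serialize it with the function instance (an assumption about how to keep the
cache out of serialization, not a confirmed fix for the exception above):

    import com.google.common.cache.{Cache, CacheBuilder}
    import org.apache.flink.configuration.Configuration
    import org.apache.flink.streaming.api.functions.ProcessFunction
    import org.apache.flink.util.Collector

    class CachingProcessFunction extends ProcessFunction[(String, Int), (String, Int)] {

      // Transient: the Guava cache is never serialized with the function; it is rebuilt in open().
      @transient private var cache: Cache[String, Integer] = _

      override def open(parameters: Configuration): Unit = {
        cache = CacheBuilder.newBuilder().maximumSize(1000).build[String, Integer]()
      }

      override def processElement(
          value: (String, Int),
          ctx: ProcessFunction[(String, Int), (String, Int)]#Context,
          out: Collector[(String, Int)]): Unit = {
        cache.put(value._1, value._2)
        out.collect(value)
      }
    }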
at 20:25
To: Juan Gentile
Cc: "user@flink.apache.org" , Oleksandr Nitavskyi
Subject: Re: Externalized checkpoints and metadata
Hi Juan,
We modified the flink code a little bit to change the flink checkpoint
structure so we can easily identify which is which
you can read my note or th
Hello,
We are trying to use externalized checkpoints, using RocksDB on Hadoop HDFS.
We would like to know the proper way to resume from a saved checkpoint, as we
are currently running many jobs in the same Flink cluster.
The problem is that when we want to restart the jobs and pass the me
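For what it's worth, one common way to resume is to pass the retained checkpoint's
metadata path to flink run with -s; a sketch where the HDFS location, job id, and
jar path are placeholders:

    # Resume from a retained (externalized) checkpoint; all paths below are placeholders.
    bin/flink run -s hdfs:///flink/checkpoints/<job-id>/chk-42/_metadata -d /path/to/my-job.jar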
Hello,
We are currently testing the SQL API using the 1.4.0 version of Flink and we would
like to know if it’s possible to name a query or parts of it so we can easily
recognize what it’s doing when we run it.
An additional question: in case of small changes done to the query/queries, and
assuming w