>> Hi Sampath,
>>
>> It seems the Flink CLI for standalone mode would not access
>> *high-availability.storageDir*.
>>
>> What's the exception stack trace in your environment?
>>
>> Thanks, vino.
>>
>> 2018-07-17 15:
vailability.storageDir* is the storage location for HA data
> (job graphs, checkpoints, and so on). The real data is stored under
> this path for recovery purposes; ZooKeeper only stores a state handle.
>
> ---
> Thanks.
> vino.
>
>
> 2018-07-16 15:28 GMT+08:00 Sampath Bhat :
-- Forwarded message --
From: Sampath Bhat
Date: Fri, Jul 13, 2018 at 3:18 PM
Subject: Flink CLI properties with HA
To: user
Hello
When HA is enabled in the flink cluster and I have to submit a job via the
flink CLI, then the flink-conf.yaml of the flink CLI should contain these
properties:
high-availability: zookeeper
high-availability.cluster-id: flink
high-availability.zookeeper.path.root: flink
high-availability.storageDir
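Taken together, a client-side flink-conf.yaml for an HA setup might look roughly like the sketch below. The ZooKeeper quorum address and the storage path are placeholder values, not taken from this thread:

```yaml
high-availability: zookeeper
# Placeholder quorum address; point this at the cluster's ZooKeeper ensemble.
high-availability.zookeeper.quorum: zk-host:2181
high-availability.cluster-id: flink
high-availability.zookeeper.path.root: flink
# Placeholder path; must be a durable location reachable by client and cluster.
high-availability.storageDir: hdfs:///flink/ha
```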
Chesnay - Why is the absolute file check required in
RocksDBStateBackend.setDbStoragePaths(String... paths)? I think this is
causing the issue. It's not related to GlusterFS or the file system. The same
problem can be reproduced with the following configuration on a local
machine. The flink applicati
will fall
> back to use the jobmanager.rpc.address. Currently, the rest server endpoint
> runs in the same JVM as the cluster entrypoint and all JobMasters.
>
> Cheers,
> Till
>
> On Thu, Jun 21, 2018 at 8:46 AM Sampath Bhat
> wrote:
>
>> Hello Till
>>
va:1192)
>>> > at java.net.InetAddress.getAllByName(InetAddress.java:1126)
>>> > at java.net.InetAddress.getByName(InetAddress.java:1076)
>>> > at
>>> > org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils.getRpcUrl(AkkaRpcServiceUtils.java:171)
, 2018 at 11:18 AM, Sampath Bhat
wrote:
> Hi Chesnay
>
> If the REST API (i.e. the web server) is mandatory for submitting jobs, then
> why is there an option to set rest.port to -1? I think it should be
> mandatory to set some valid port for rest.port and make sure the flink job
> manag
ssary anymore, the rpc
> address is still *required* due to some technical implementations; it may
> be that you can set this to some arbitrary value, however.
>
> As a result the REST API (i.e. the web server) must be running in order to
> submit jobs.
>
>
> On 19.06.2018 14:1
Hello
I'm using Flink 1.5.0 and the Flink CLI to submit a jar to the flink cluster.
In Flink 1.4.2, only the job manager rpc address and rpc port were
sufficient for the flink client to connect to the job manager and submit the
job. But in Flink 1.5.0 the flink client additionally requires the rest
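For context, in Flink 1.5 the client reads the REST endpoint from flink-conf.yaml; the relevant options look roughly like this (the hostname is a placeholder):

```yaml
# Placeholder hostname of the JobManager / REST endpoint.
rest.address: jobmanager-host
# 8081 is the default REST port in Flink 1.5.
rest.port: 8081
```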
Hello
In flink 1.5 release notes -
https://flink.apache.org/news/2018/05/25/release-1.5.0.html#release-notes
Various Other Features and Improvements:
Applications can be rescaled without manually triggering a savepoint. Under
the hood, Flink will still take a savepoint, stop the application, and
r
Hi Angelica
You can run any number of flink jobs in a flink cluster. There is no
restriction as such, unless there are issues with the jobs sharing
resources (e.g. two jobs accessing the same port).
On Tue, Jun 12, 2018 at 5:03 AM, Angelica
wrote:
> I have a Flink Standalone Cluster based on
Hi Rohil
You need not upload the jar again when the job manager restarts in an HA
environment. Only the jar stored in web.upload.dir will be deleted,
which is fine. The jars needed for the job manager to restart will be
stored in high-availability.storageDir along with job graphs and job
related st
Thank you for your reply.
On Mon, May 7, 2018 at 9:02 AM, Tzu-Li (Gordon) Tai
wrote:
> Hi Sampath,
>
> Do you already have a target JIRA that you would like to work on?
>
> Once you have one, let us know the JIRA issue ID and your JIRA account ID,
> then we'll assign you contributor permissions.
Hello
I would like to know the procedure for assigning the jira issue. How can I
assign it to myself?
Thanks
It would be helpful if you provide the complete CLI logs, because I'm also
using the flink run command to submit jobs to a flink jobmanager running on
K8s and it's working fine. For remote execution using the flink CLI you
should provide a flink-conf.yaml file which contains the job manager
address, port and SSL/HA
cts/flink/flink-docs-release-1.4/ops/security-kerberos.html).
>
> Also, please make sure to use Flink release 1.4.1 or above, because there
> is some regression in previous versions that your job might fail after
> deploying to secure YARN.
>
> Thanks
> Shuyi
>
>
Hello
I would like to know whether job submission through the flink command line, say
./bin/flink run
can be authenticated. For example, if SSL is enabled, will the job submission
require SSL certificates? I don't see any such behavior: a simple
flink run is able to submit the job even if SSL is enabl
Hi Edward,
You can use this parameter in flink-conf.yaml to suppress hostname
checking in certificates, if it suits your purpose:
security.ssl.verify-hostname: false
Secondly, I'm also running flink 1.4 on K8s, and I used to get the same
error stack trace as you mentioned, while the blob client was
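For reference, a minimal sketch of the related SSL options in flink-conf.yaml, using the option names from the Flink 1.4/1.5 docs; the keystore/truststore paths and passwords are placeholders:

```yaml
security.ssl.enabled: true
# Placeholder paths and passwords; substitute your own stores.
security.ssl.keystore: /path/to/keystore.jks
security.ssl.keystore-password: changeit
security.ssl.key-password: changeit
security.ssl.truststore: /path/to/truststore.jks
security.ssl.truststore-password: changeit
# Disables hostname verification in certificates, as mentioned above.
security.ssl.verify-hostname: false
```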
cess-control-allow-origin", I am not aware of anything like
> username/password authentication. Chesnay (cc'd) may know more about
> future plans.
> You can, however, wrap a proxy like squid around the web UI if you need
> this.
>
>
> Regards
> Nico
>
> On 13/03
Hello
I would like to know if flink supports any user-level authentication, like
username/password, for the flink web ui.
Regards
Sampath S