understood it correctly?
Thanks a lot again.
Regards,
Raja.
From: Lasse Nedergaard
Date: Tuesday, September 11, 2018 at 12:52 PM
To: miki haiat
Cc: Raja Aravapalli , user
Subject: [EXTERNAL] Re: Server side error: Cannot find required BLOB at /tmp/blobStore
Hi.
From my presentation at Flink Forward
I have not explicitly passed any path from my end to the application, either
HDFS or local, that starts with “/tmp” ☹
Regards,
Raja.
From: miki haiat
Date: Tuesday, September 11, 2018 at 12:44 PM
To: Raja Aravapalli
Cc: user
Subject: [EXTERNAL] Re: Server side error: Cannot find required BLOB at /tmp/blobStore
Hi,
My Flink application, which reads from Kafka and writes to HDFS, is failing
repeatedly with the below exception:
Caused by: java.io.IOException: Server side error: Cannot find required BLOB at
/tmp/blobStore-**
Can someone please help me understand what could be the root cause of this
issue?
Thanks a lot.
Regards,
Raja.
From: Gary Yao
Date: Friday, February 9, 2018 at 9:25 AM
To: Raja Aravapalli
Cc: "user@flink.apache.org"
Subject: Re: [EXTERNAL] Re: Flink REST API
Hi Raja,
Can you tell me the API call th
2, 2018 at 10:20 AM
To: Raja Aravapalli
Cc: "user@flink.apache.org"
Subject: [EXTERNAL] Re: Flink REST API
Hi Raja,
The registered tracking URL of the YARN application can be used to issue HTTP
requests against the REST API. You can retrieve the URL by using the YARN
client:
yarn application -list
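For illustration, a rough sketch of that flow (the tracking URL is a
placeholder, and /overview is from the Flink 1.3-era monitoring API linked
further down in this archive):

    # List YARN applications; the Tracking-URL column points at the Flink web/REST endpoint
    yarn application -list
    # Then issue HTTP requests against that URL, e.g. the cluster overview:
    curl http://<tracking-url>/overview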
Hi,
I have triggered a Flink YARN session on Hadoop YARN.
While I was able to submit applications and run them, I wish to find the URL
of the REST API for the Flink YARN session I launched.
Can someone please point out how to find the REST API URL for Flink on YARN?
Thanks a lot.
highly helpful.
Thanks.
Regards,
Raja.
From: Jins George
Date: Wednesday, January 31, 2018 at 8:51 PM
To: Raja Aravapalli
Cc: "user@flink.apache.org"
Subject: [EXTERNAL] Re: Flink on YARN || Monitoring REST API Not Working || Please help
8081 is the default port for a standalone cluster.
Hi,
I have deployed a Flink cluster on Hadoop YARN and I am able to submit jobs
and run them.
But I am not able to reach the running Flink cluster’s Monitoring REST API!
As listed here @
https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/rest_api.html
I am trying to connect us
Hi,
I want to override the method “openNewPartFile” in the BucketingSink class
so that it includes a timestamp in the part file name whenever it rolls a new
part file.
Can someone share some thoughts on how I can do this?
Thanks a ton, in advance.
Regards,
Raja.
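One route, sketched here as an assumption rather than a confirmed answer:
BucketingSink’s part-file naming is internal, but the Bucketer (compare the
“BasePathBucketer” suggestion further down) controls the directory each part
file lands in, so a timestamped bucket path gives timestamped file paths. The
built-in DateTimeBucketer does exactly that; the format string is an example.

    import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
    import org.apache.flink.streaming.connectors.fs.bucketing.DateTimeBucketer;

    BucketingSink<String> sink = new BucketingSink<>(hdfsOutputPath);
    // Each bucket directory carries the wall-clock time at which it was opened,
    // so every rolled part file ends up under a timestamped path.
    sink.setBucketer(new DateTimeBucketer<String>("yyyy-MM-dd--HHmm"));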
Hi Team,
I have a BucketingSink writing to HDFS files, which ran successfully for 4
days and then suddenly failed with the below exception:
Caused by: java.io.IOException: Cannot find required BLOB at /tmp/blobStore
Code below:
BucketingSink HdfsSink = new BucketingSink(hdfsOutputPath);
HdfsSi
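A hunch worth checking, given the /tmp path in the error: by default Flink’s
blob server stores its files under java.io.tmpdir, and OS tmp cleaners can
delete those files out from under a long-running session. The key below is
Flink’s standard setting; the path is a hypothetical example.

    # flink-conf.yaml
    blob.storage.directory: /var/flink/blobStore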
Thanks Aljoscha for the inputs.
I will look into extending the “BasePathBucketer” class.
Regards,
Raja.
From: Aljoscha Krettek
Date: Friday, September 1, 2017 at 10:27 AM
To: Piotr Nowojski
Cc: Raja Aravapalli , "user@flink.apache.org"
Subject: [EXTERNAL] Re: Bucketing/Rolling Sink: New
Hi,
I have a Flink application that is streaming data into HDFS, and I am using
the BucketingSink for that. I want to know whether it is possible to rename
the part files that are being created in the base HDFS directory.
Right now I am using the below code for including the timestamp into the part-file
Hi,
Is there a way to set up alerting for when a running Flink job is killed, for
any reason?
Any thoughts please?
Regards,
Raja.
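One low-tech workaround sketch, assuming the job runs under YARN as in the
other threads here: poll the application state with the YARN client and alert
when it leaves RUNNING (the application id is a placeholder).

    # Cron-friendly check; wire the output into your alerting tool of choice.
    yarn application -status <application-id> | grep 'State'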
The ability to disable it would be super helpful.
+1 to the idea.
Regards,
Raja.
From: Ted Yu
Date: Friday, August 25, 2017 at 4:56 PM
To: Robert Metzger
Cc: Raja Aravapalli , "user@flink.apache.org"
Subject: [EXTERNAL] Re: Security Control of running Flink Jobs on Flink UI
Hi,
I have started a Flink session/cluster on an existing Hadoop YARN cluster
using a Flink YARN session, and I am submitting Flink streaming jobs to it…
and everything works fine.
But one problem I see with this approach is:
the Flink YARN session is running with a YARN application id, and this
ap
Thanks Gordon.
Regards,
Raja.
From: "Tzu-Li (Gordon) Tai"
Date: Thursday, August 17, 2017 at 11:47 PM
To: Raja Aravapalli , "user@flink.apache.org"
Subject: Re: [EXTERNAL] Re: Flink application failing with Kerberos issue
after running successfully without any issues for a few days
Date: Thursday, August 17, 2017 at 1:06 PM
To: Ted Yu
Cc: Raja Aravapalli , "user@flink.apache.org"
Subject: Re: [EXTERNAL] Re: Flink application failing with Kerberos issue
after running successfully without any issues for a few days
Raja,
According to those configuration values, the delegation
I don’t have access to the site.xml files; they are controlled by a support
team.
Does Flink have any configuration settings or APIs through which we can
control this?
Regards,
Raja.
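Flink does have settings for this case (the keys below are Flink’s documented
Kerberos options; the keytab path and principal are hypothetical): with a
keytab configured, a long-running job can re-authenticate itself instead of
relying on delegation tokens that eventually expire.

    # flink-conf.yaml
    security.kerberos.login.use-ticket-cache: false
    security.kerberos.login.keytab: /path/to/user.keytab
    security.kerberos.login.principal: user@EXAMPLE.COM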
From: Ted Yu
Date: Thursday, August 17, 2017 at 11:07 AM
To: Raja Aravapalli
Cc: "user@flink.apach
Hi Ted,
Below is what I see in the environment:
dfs.namenode.delegation.token.max-lifetime: 60480
dfs.namenode.delegation.token.renew-interval: 8640
Thanks.
Regards,
Raja.
From: Ted Yu
Date: Thursday, August 17, 2017 at 10:46 AM
To: Raja Aravapalli
Cc: "
Hi Ted,
Find below the configuration I see in yarn-site.xml:
<property>
  <name>yarn.resourcemanager.proxy-user-privileges.enabled</name>
  <value>true</value>
</property>
Regards,
Raja.
From: Ted Yu
Date: Wednesday, August 16, 2017 at 9:05 PM
To: Raja Aravapalli
Cc: "user@flink.apache.org"
Subject: [EXTERNAL] Re: h
Thanks very much for the detailed explanation Stefan.
Regards,
Raja.
From: Stefan Richter
Date: Monday, August 14, 2017 at 7:47 AM
To: Raja Aravapalli
Cc: "user@flink.apache.org"
Subject: Re: [EXTERNAL] difference between checkpoints & savepoints
Just noticed that I forgot t
Hi,
I triggered a Flink YARN session on a running Hadoop cluster… and am running
a streaming application on it.
But I see that, after a few days of running without any issues, the Flink
application which is writing data to HDFS fails with the below exception.
Caused by:
org.apache.hadoop.ipc.RemoteE
Thanks for the discussion; that answered many of the questions I had.
Along the same lines, can someone detail the difference between a state
backend and an externalized checkpoint?
Also, through which methods of the programmatic API can we configure those?
Regards,
Raja.
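For reference, a minimal sketch of how both are configured programmatically
(assuming the Flink 1.3-era APIs; the paths and interval are examples): the
state backend decides where checkpoint state lives, while the
externalized-checkpoint setting controls whether checkpoint metadata survives
job cancellation.

    import org.apache.flink.runtime.state.filesystem.FsStateBackend;
    import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // Where checkpoint state is stored:
    env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"));
    // Take a checkpoint every 60 seconds:
    env.enableCheckpointing(60000);
    // Keep checkpoint metadata even if the job is cancelled:
    env.getCheckpointConfig().enableExternalizedCheckpoints(
        ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);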
From: Stefan Richter
Date: Thursday, Augus
Hi,
Can someone please help me understand the difference between Flink's
checkpoints and savepoints?
While reading the documentation, I couldn't understand the difference! :s
Thanks a lot.
Regards,
Raja.
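In operational terms the difference is mostly about who triggers them and who
owns them; a small sketch (job id, jar, and paths are placeholders):

    # Checkpoints: taken automatically by Flink for failure recovery;
    # enabled in code, e.g. env.enableCheckpointing(60000);
    # Savepoints: triggered manually, owned by the user, used for upgrades/restarts:
    bin/flink savepoint <jobid> hdfs:///flink/savepoints
    # ...and resumed from explicitly:
    bin/flink run -s hdfs:///flink/savepoints/<savepoint-dir> <jar>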
Thanks Ziyad. Will check that.
Regards,
Raja.
From: Ziyad Muhammed
Date: Wednesday, August 9, 2017 at 12:52 PM
To: Raja Aravapalli
Cc: "user@flink.apache.org"
Subject: [EXTERNAL] Re: Naming operators to reflect in UI
Hi,
You can set the name of any operator explicitly by calling name() on it.
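A minimal sketch of what that looks like end to end (the topic, broker,
mapper, and sink path are placeholders; assuming a Flink 1.3-era Kafka
connector):

    import java.util.Properties;
    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "broker:9092"); // placeholder

    env.addSource(new FlinkKafkaConsumer010<>("my-topic", new SimpleStringSchema(), props))
       .name("Kafka Source")   // shows up instead of "Source: Custom Source"
       .map(new MapFunction<String, String>() {
           @Override
           public String map(String value) {
               return value.toUpperCase();
           }
       })
       .name("Uppercase Mapper")
       .addSink(new BucketingSink<String>("/output/path"))
       .name("HDFS Sink");     // shows up instead of "Sink: Unnamed"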
Hi,
Can someone please let me know whether I can name the operators so that the
names are reflected in the UI.
Right now, I am observing in the UI only:
Source: Custom Source
Sink: Unnamed
Please advise.
Thank you.
Regards,
Raja.
Thank you very much Chao. That helps me.
Regards,
Raja.
From: Chao Wang
Date: Monday, August 7, 2017 at 12:28 PM
To: Raja Aravapalli
Cc: "user@flink.apache.org"
Subject: [EXTERNAL] Re: schema to just read as "byte[] array" from kafka
A quick update, in class MyDe:
publ
Hi
I am using SimpleStringSchema to deserialize messages read from Kafka, but I
need some help to know whether there is any schema available I can use,
rather than “SimpleStringSchema()”, to just get the raw “byte[]” without any
deserialization happening!
Below is the code I am currently using, but inst
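One way to get raw bytes, sketched as an assumption (the class name is made
up; in the Flink 1.3-era API AbstractDeserializationSchema lives in
org.apache.flink.streaming.util.serialization): extend it with byte[] as the
produced type so the Kafka record value passes through untouched.

    import org.apache.flink.streaming.util.serialization.AbstractDeserializationSchema;

    public class RawBytesDeserializationSchema extends AbstractDeserializationSchema<byte[]> {
        @Override
        public byte[] deserialize(byte[] message) {
            // hand back the Kafka record value without interpreting it
            return message;
        }
    }

It would then be passed to the Kafka consumer in place of SimpleStringSchema,
e.g. new FlinkKafkaConsumer010<>(topic, new RawBytesDeserializationSchema(), props).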
Thanks very much for the pointers Vinay. That helps ☺
-Raja.
From: vinay patil
Date: Monday, August 7, 2017 at 1:56 AM
To: "user@flink.apache.org"
Subject: Re: [EXTERNAL] Re: Help required - "BucketingSink" usage to write HDFS
Files
Hi Raja,
That is why they are in the pending state. You ca
Hi Vinay,
Thanks for the response.
I have NOT enabled any checkpointing.
Files are rolling correctly at every 2 MB, but the files remain as
below:
-rw-r--r-- 3 2097424 2017-08-06 21:10 ////Test/part-0-0.pending
-rw-r--r-- 3 1431430 2017-08-06 21:12 ////Te
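(As the reply further up notes, this follows from checkpointing being off:
BucketingSink finalizes a part file only when a checkpoint covering it
completes, so without checkpointing the files stay .pending forever. A
one-line sketch of the fix, with an example interval:)

    // enable checkpointing so pending part files get finalized
    env.enableCheckpointing(120000);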
Hi,
I am working on a PoC to write to HDFS files using the BucketingSink class.
Even though the data is being written to HDFS files, the files remain with a
“.pending” suffix on HDFS.
Below is the code I am using. Can someone please help me identify the issue
and help me fix it?
BucketingS