Hi Team,
The Flink InfluxDB connector `flink-connector-influxdb_2.1` is not present in
Maven; can you please upload it?
https://repo.maven.apache.org/maven2/org/apache/bahir/
Thanks and Regards,
Vinay Patil
On Wed, Jul 29, 2020 at 11:08 AM Yang Wang wrote:
> Hi Vinay Patil,
>
> You are right. Flink does not provide any isolation between different jobs
> in the same Flink session cluster.
> You could use a Flink job cluster or an application cluster (from 1.11).
Regards,
Vinay Patil
Oh okay, so basically implement the Gauge and add the timer functionality to
it for now.
Is there a plan or JIRA ticket to add a Timer metric in a future release? I
think it would be good to have.
Regards,
Vinay Patil
On Wed, Jun 10, 2020 at 5:55 PM Chesnay Schepler wrote:
> You cannot add custom met
also have to
create a Timer interface and add it to the metric group.
Is this possible?
I want to have a timer to check HBase lookup time.
Regards,
Vinay Patil
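P.S. For anyone finding this thread later, a minimal sketch of the
Gauge-based workaround, assuming the lookup happens in a RichMapFunction;
the metric name and the lookupFromHBase helper are made up for illustration:

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.metrics.Gauge;

    public class HBaseLookupFunction extends RichMapFunction<String, String> {

        // Last observed lookup duration; exposed via a Gauge since
        // there is no built-in Timer metric.
        private transient volatile long lastLookupNanos;

        @Override
        public void open(Configuration parameters) {
            getRuntimeContext().getMetricGroup()
                .gauge("hbaseLookupNanos", (Gauge<Long>) () -> lastLookupNanos);
        }

        @Override
        public String map(String key) throws Exception {
            long start = System.nanoTime();
            String result = lookupFromHBase(key); // hypothetical helper
            lastLookupNanos = System.nanoTime() - start;
            return result;
        }

        private String lookupFromHBase(String key) {
            return key; // placeholder for the actual HBase call
        }
    }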
o handle late record).
I think the downstream consumer of the enriched data will have to
de-duplicate the records, or else we will end up with stale enrichment.
Regards,
Vinay Patil
On Fri, Apr 24, 2020 at 12:14 PM Konstantin Knauf wrote:
> Hi Vinay,
>
> I assume your subscription up
do we know
if it is stale data or not based on the timestamp (watermark), as it can
happen that a particular enriched record is not updated for 6 hrs.
Regards,
Vinay Patil
RowFormatBuilder.
P.S. Curious to know why the RollingPolicy was not exposed in the case of
the bulk format?
Regards,
Vinay Patil
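P.S. For context, a sketch of the row-format path where the rolling policy
*is* exposed, assuming the Flink 1.10/1.11-era builder API; the path and
thresholds are illustrative:

    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
    import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

    StreamingFileSink<String> sink = StreamingFileSink
        .forRowFormat(new Path("s3a://my-bucket/output"),
                new SimpleStringEncoder<String>("UTF-8"))
        .withRollingPolicy(DefaultRollingPolicy.builder()
                .withRolloverInterval(15 * 60 * 1000L)  // roll every 15 minutes
                .withInactivityInterval(5 * 60 * 1000L) // or after 5 minutes idle
                .withMaxPartSize(128 * 1024 * 1024L)    // or at 128 MB
                .build())
        .build();

As far as I can tell, with bulk formats the sink only accepts a
checkpoint-based rolling policy, which is exactly the restriction being
asked about.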
Thanks Fabian,
@Gordon - can you please help here?
Regards,
Vinay Patil
On Fri, Oct 25, 2019 at 9:11 PM Fabian Hueske wrote:
> Hi Vinay,
>
> Maybe Gordon (in CC) has an idea about this issue.
>
> Best, Fabian
>
> On Thu, Oct 24, 2019 at 14:50, Vinay
supports DynamoStreams
Regards,
Vinay Patil
Hi,
Can someone please help here? We are facing issues in production. I see the
following ticket in an unresolved state:
https://issues.apache.org/jira/browse/FLINK-8417
Regards,
Vinay Patil
On Thu, Oct 24, 2019 at 11:01 AM Vinay Patil
wrote:
> Hi,
>
> I am trying to access dynamo streams from a
at the credentials are not required to be passed:
https://github.com/apache/flink/blob/abbd6b02d743486f3c0c1336139dd6b3edd20840/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/util/AWSUtil.java#L164
Regards,
Vinay Patil
Hello,
For anyone looking to set up alerts for a Flink application, here is a good
blog post by Flink itself:
https://www.ververica.com/blog/monitoring-apache-flink-applications-101
So, for DynamoDB streams we can set the alert on millisBehindLatest.
Regards,
Vinay Patil
On Wed, Aug 7, 2019 at 2
the consumer is
lagging behind.
Regards,
Vinay Patil
On Fri, Jul 19, 2019 at 10:40 PM Andrey Zagrebin
wrote:
> Hi Vinay,
>
> 1. I would assume it works similar to kinesis connector (correct me if
> wrong, people who actually developed it)
> 2. If you have activated just che
Hi Ravi,
The uber jar was correct; setting the ClosureCleanerLevel to TOP_LEVEL
resolved this issue. Thanks a lot.
Is there any disadvantage to explicitly setting this?
Regards,
Vinay Patil
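P.S. For anyone finding this thread later, the setting in question, as a
sketch; I believe it is available since Flink 1.8:

    import org.apache.flink.api.common.ExecutionConfig;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // TOP_LEVEL cleans only the top-level closure and does not recurse
    // into referenced objects.
    env.getConfig().setClosureCleanerLevel(ExecutionConfig.ClosureCleanerLevel.TOP_LEVEL);

The trade-off I am aware of: with TOP_LEVEL, non-serializable references
buried deeper in the object graph are no longer cleaned for you, so they can
surface as serialization errors at runtime.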
On Sat, Jul 20, 2019 at 10:23 PM Ravi Bhushan Ratnakar <
ravibhushanratna...@gmail.com> wrote:
> Hi Vinay,
>
> Please make sure that all your custom code is serializable. You can run
> this using new mode.
>
> Thanks,
> Ravi
>
> On Sat 20 Jul, 2019, 08:13 Vinay Patil, wrote:
>
>> Hi,
>>
>> I am trying to run a pipel
ms-dynamo-streams",
            new JsonSerializationSchema()))
        .name("Kafka Sink");

    try {
        env.execute();
    } catch (Exception e) {
        System.out.println("Caught exception for pipeline: " + e.getMessage());
        e.printStackTrace();
    }
}
Regards,
Vinay Patil
recommended parallelism to be set for the source: should it be a one-to-one
mapping? For example, if there are 3 shards, should the parallelism be 3?
Regards,
Vinay Patil
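P.S. To make the question concrete, a sketch of what I mean, assuming the
DynamoDB streams consumer from the kinesis connector; the stream name,
region, and shard count of 3 are from my example:

    import java.util.Properties;

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.connectors.kinesis.FlinkDynamoDBStreamsConsumer;

    Properties props = new Properties();
    props.setProperty("aws.region", "us-east-1"); // illustrative

    // env is the StreamExecutionEnvironment
    DataStream<String> stream = env
        .addSource(new FlinkDynamoDBStreamsConsumer<>(
                "my-dynamo-stream", new SimpleStringSchema(), props))
        .setParallelism(3); // one source subtask per shard, assuming 3 shards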
On Wed, Aug 1, 2018 at 3:42 PM Ying Xu [via Apache Flink Mailing List
archive.] wrote:
> Thank you so much Fabian!
>
Hi Stephan.,
Yes, we tried setting fs.s3a.aws.credentials.provider, but we are getting a
ClassNotFoundException for InstanceProfileCredentialsProvider because of a
shading issue.
Regards,
Vinay Patil
On Thu, Jan 17, 2019 at 3:02 PM Stephan Ewen wrote:
> Regarding configurations: According
Hi Till,
Can you please let us know the configuration that we need to set for a
profile-based credentials provider in flink-conf.yaml?
Exporting the AWS_PROFILE property on EMR did not work.
Regards,
Vinay Patil
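P.S. For the record, this is the shape of what we were trying in
flink-conf.yaml; Flink forwards fs.s3a.* keys to the S3 filesystem, and the
provider class below is the plain AWS SDK name, which is exactly where the
shading problem bites:

    # flink-conf.yaml (sketch of the attempted configuration)
    fs.s3a.aws.credentials.provider: com.amazonaws.auth.profile.ProfileCredentialsProvider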
On Wed, Jan 16, 2019 at 3:05 PM Till Rohrmann wrote:
> The old BucketingSink was us
. We tried
adding fs.s3a.impl to core-site.xml when the default configurations were
not working.
Regards,
Vinay Patil
On Wed, Jan 16, 2019 at 2:55 PM Till Rohrmann wrote:
> Hi Vinay,
>
> Flink's file systems are self contained and won't respect the
> core-site.xml if I'
Hi,
Can someone please help with this issue? We have even tried to set
fs.s3a.impl in core-site.xml; it is still not working.
Regards,
Vinay Patil
On Fri, Jan 11, 2019 at 5:03 PM Taher Koitawala [via Apache Flink User
Mailing List archive.] wrote:
> Hi All,
> We have implemented S
Hi,
Changing the classloader config to parent-first solved the issue.
Regards,
Vinay Patil
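P.S. For anyone hitting the same thing, the change was this one line in
flink-conf.yaml; the default is child-first:

    # flink-conf.yaml
    # Resolve classes from the Flink/JVM classpath before the user-code jar.
    classloader.resolve-order: parent-first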
On Wed, Nov 7, 2018 at 7:25 AM Vinay Patil wrote:
> Hi,
>
> Can someone please help here.
>
> On Nov 6, 2018 10:46 PM, "Vinay Patil [via Apache Flink User Mailing List
> arc
a:93)
    at com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:22)
    at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
Please let me know if there is a fix for this issue, as I have not faced
this problem with DataStreams.
Regards,
Vinay Patil
Thank you Till, I am able to start the session-cluster now.
Regards,
Vinay Patil
On Fri, Oct 5, 2018 at 8:15 PM Till Rohrmann wrote:
> Hi Vinay,
>
> are you referring to flink-contrib/docker-flink/docker-compose.yml? We
> recently fixed the command line parsing with Flink 1.5.4 an
lhost in the /etc/hosts file.
Can you please let me know what the issue is here?
Regards,
Vinay Patil
Hi Vino,
Yes, the job runs successfully; however, no checkpoints complete
successfully. I will update the source.
Regards,
Vinay Patil
On Fri, Jul 27, 2018 at 2:00 PM vino yang wrote:
> Hi Vinay,
>
> Oh! You use a collection source? That's the problem. Please use a general
> source lik
Source is
not being executed at the moment. Aborting checkpoint. In the pipeline I
have a stream initialized using "fromCollection". I think I will have to
get rid of this.
What do you suggest?
Regards,
Vinay Patil
On Thu, Jul 26, 2018 at 12:04 PM vino yang wrote:
> Hi Vinay:
>
>
Hi Chesnay,
No error in the logs. That is why I am not able to understand why
checkpoints are not getting triggered.
Regards,
Vinay Patil
On Wed, Jul 25, 2018 at 4:36 PM Chesnay Schepler wrote:
> Please check the job- and taskmanager logs for anything suspicious.
>
> On 25.07.2018 12:
No error in the logs. That is why I am not able to understand why
checkpoints are not getting triggered.
Regards,
Vinay Patil
On Wed, Jul 25, 2018 at 4:44 PM Vinay Patil wrote:
> Hi Chesnay,
>
> No error in the logs. That is why I am not able to understand why
> checkpoints
do not
see any checkpoints triggered in the Flink UI.
Am I missing any configuration that needs to be set on the
RemoteExecutionEnvironment for checkpointing to work?
Regards,
Vinay Patil
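P.S. A minimal sketch of my setup, with host, port, and jar path as
placeholders; as far as I know, checkpointing is enabled on the environment
itself, with no separate switch for remote execution:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
            "jobmanager-host", 6123, "/path/to/job.jar"); // placeholders

    env.enableCheckpointing(60_000); // checkpoint every 60 seconds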
still not able to hit the REST
APIs. Is there anything else I can do here?
Yes, you are right about separating the APIs into two parts.
Regards,
Vinay Patil
On Sat, Jul 21, 2018 at 1:46 AM Chesnay Schepler wrote:
> Something that I was thinking about a while ago was to separate t
Web UI running, or am I missing any
configuration?
Regards,
Vinay Patil
Hi Fabian,
Created a JIRA ticket : https://issues.apache.org/jira/browse/FLINK-9643
Regards,
Vinay Patil
On Fri, Jun 22, 2018 at 1:25 PM Fabian Hueske wrote:
> Hi Vinay,
>
> This looks like a bug.
> Would you mind creating a Jira ticket [1] for this issue?
>
> Thank you v
ink@taskmanager1:port/user/taskmanager)
Now, when I hit the above command for the data port, it does not allow
TLSv1.1 and only allows TLSv1.2.
Can you please let me know how I can enforce TLSv1.2 on all the Flink
ports?
Regards,
Vinay Patil
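P.S. For reference, these are the flink-conf.yaml keys I believe control
this; the cipher suite is just an example:

    # flink-conf.yaml (sketch)
    security.ssl.enabled: true
    security.ssl.protocol: TLSv1.2
    security.ssl.algorithms: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256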
Hi,
Can someone please help me with this issue?
Regards,
Vinay Patil
I have created FLINK-9111 <https://issues.apache.org/jira/browse/FLINK-9111> as
this is not handled in the latest code of GlobalConfiguration.
Regards,
Vinay Patil
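P.S. The change I have in mind is small, along these lines; this is
illustrative only, not the actual patch:

    // Sketch: mask values whose key looks sensitive before logging them.
    private static String maskIfSensitive(String key, String value) {
        String k = key.toLowerCase();
        return (k.contains("password") || k.contains("secret")) ? "******" : value;
    }

    // e.g. LOG.info("Loading configuration property: {}, {}",
    //         key, maskIfSensitive(key, value));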
On Thu, Mar 29, 2018 at 8:33 AM, Vinay Patil
wrote:
> Hi,
>
> If this is not part of Flink 1.5 or not handled
Hi,
If this is not part of Flink 1.5 or not handled in the latest 1.4.2
release, I can open a JIRA ticket. It should be a small change.
What do you think?
Regards,
Vinay Patil
On Wed, Mar 28, 2018 at 4:11 PM, Vinay Patil
wrote:
> Hi Greg,
>
> I am not concerned with flink-conf.yaml file, we h
print the SSL passwords.
Regards,
Vinay Patil
On Wed, Mar 28, 2018 at 3:53 PM, Greg Hogan wrote:
> With the current method you always have the risk, no matter which keywords
> you filter on ("secret", "password", etc.), that the key name is mistyped
> and inadver
release? (I am using Flink 1.3.2)
Regards,
Vinay Patil
Hi,
I am not able to see more than 5 jobs on the Flink Dashboard.
I have set web.history to 50 in the flink-conf.yaml file.
Is there any other configuration I have to set to see more jobs on the
Flink Dashboard?
Regards,
Vinay Patil
Hi,
The passwords are shown in plain text in the logs; is this fixed in newer
versions of Flink? (I am using 1.3.2.)
Also, please let me know the answers to my previous queries in this mail
chain.
Regards,
Vinay Patil
On Mon, Mar 19, 2018 at 7:35 PM, Vinay Patil
wrote:
> Hi,
>
> W
to my previous mail
Regards,
Vinay Patil
On Fri, Mar 16, 2018 at 10:15 AM, Vinay Patil
wrote:
> Hi Chesnay,
>
> After setting the configurations for Remote Execution Environment the job
> gets submitted, but I had to set ssl-verify-hostname to false.
> However, I don't understand
get a
Lost Connection to Job Manager exception.
This only happens when SSL is enabled.
Regards,
Vinay Patil
On Thu, Mar 15, 2018 at 10:28 AM, Vinay Patil
wrote:
> Just an update, I am submitting the job from the master node, not using
> the normal flink run command to submit the job , but using
Just an update: I am submitting the job from the master node, not using
the normal flink run command, but using the RemoteExecutionEnvironment in
code to do this.
In it I am passing the hostname, which is the same as the one provided in
flink-conf.yaml.
Regards,
Vinay Patil
On Thu
Hi Guys,
Any suggestions here?
Regards,
Vinay Patil
On Wed, Mar 14, 2018 at 8:08 PM, Vinay Patil
wrote:
> Hi,
>
> After waiting for some time I got the exception as Lost Connection to Job
> Manager. Message: Could not retrieve the JobExecutionResult from Job Manager
>
> I am
import the certificate into the Java default truststore, so
I have provided the truststore and keystore as JVM args to the job.
Is there any other configuration I should set so that the job is submitted?
Regards,
Vinay Patil
?
Regards,
Vinay Patil
1.3.2, and I am making sure that the job name is different
for each job.
Can you please let me know if I am doing something wrong?
Regards,
Vinay Patil
Hi,
I see we can generate our own JobID, but how do I use it to submit the job
to the cluster?
I am using the RemoteExecutionEnvironment to submit the job to the cluster.
Also, can you please answer the query from the earlier mail?
Regards,
Vinay Patil
On Thu, Feb 1, 2018 at 1:50 PM, Vinay Patil wrote
Hi,
When the Flink job executes successfully I get the JobID; however, when the
Flink job fails, the JobID is not returned.
How do I get the JobID in this case?
Do I need to call the /joboverview REST API and look the job ID up by
job name?
Regards,
Vinay Patil
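P.S. What I am doing today, as a sketch; the failure branch relies on my
assumption, from reading the runtime code, that JobExecutionException
carries the JobID:

    import org.apache.flink.api.common.JobExecutionResult;
    import org.apache.flink.runtime.client.JobExecutionException;

    try {
        JobExecutionResult result = env.execute("my-job");
        System.out.println("Job ID: " + result.getJobID()); // success path
    } catch (JobExecutionException e) {
        // Assumption: the runtime exception exposes the JobID on failure.
        System.out.println("Failed job ID: " + e.getJobID());
    }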
that we can have a subsequent ack operator which will generate the response.
Also, how do I get/access the watermark value in the ack operator? It will
be a simple map operator, right?
Regards,
Vinay Patil
On Thu, Jan 25, 2018 at 4:50 AM, Piotr Nowojski
wrote:
> Hi,
>
> As you fi
slot. So, when the EOF
dummy record is read I can generate a response/ack.
Is there a better way I can deal with this?
Regards,
Vinay Patil
)
Regards,
Vinay Patil
Can anyone please help me with this issue?
On Aug 31, 2017 5:20 PM, "Vinay Patil" wrote:
> Hi,
>
> After adding the following two lines the serialization trace does not show
> the Schema related classes:
>
> env.getConfig().registerTypeWithKryoSerial
Hi,
After adding the following two lines, the serialization trace does not show
the Schema-related classes:

env.getConfig().registerTypeWithKryoSerializer(GenericData.Array.class,
    Serializers.SpecificInstanceCollectionSerializerForArrayList.class);
env.getConfig().addDefaultKryoSerializer(
ESSink)
You can read this blog post:
https://aws.amazon.com/blogs/big-data/build-a-real-time-stream-processing-pipeline-with-apache-flink-on-aws/
Regards,
Vinay Patil
On Sun, Aug 27, 2017 at 7:02 PM, ant burton [via Apache Flink User Mailing
List archive.] wrote:
> Thanks! I'll check la
Hi Robert,
The test case code is as follows:

GenericRecord testData = new GenericData.Record(avroSchema);
SingleOutputStreamOperator testStream =
    env.fromElements(testData)
        .map(new DummyOperator(...));
Iterator
org.apache.flink.contrib.streaming.SocketStreamIterator.hasNext(SocketStreamIterator.java:114)
I tried to register the above classes, but it did not work. Also, this
error comes randomly for some tests while other tests pass.
What could be the issue?
Regards,
Vinay Patil
Hi,
Yes, I am able to write to S3 using the DataStream API.
I have answered with the approach on SO.
Regards,
Vinay Patil
On Mon, Aug 14, 2017 at 4:21 AM, ant burton [via Apache Flink User Mailing
List archive.] wrote:
> Hello,
>
> Has anybody been able to write to S3 when using the data
Hi,
The config key should be *fs.s3a.impl* instead of *fs.s3.impl*.
Also, when you provide the S3 write path in the config file or directly in
code, start it with *s3a://*.
Regards,
Vinay Patil
On Sat, Aug 12, 2017 at 6:07 AM, ant burton [via Apache Flink User Mailing
List archive.] wrote:
> He
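P.S. i.e., something along these lines; the bucket name is a placeholder:

    import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;

    // Note the s3a:// scheme, matching the fs.s3a.impl configuration.
    BucketingSink<String> sink = new BucketingSink<>("s3a://my-bucket/flink-output");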
/connectors/fs/bucketing/BucketingSink.html
Regards,
Vinay Patil
On Mon, Aug 7, 2017 at 9:15 AM, Raja.Aravapalli [via Apache Flink User
Mailing List archive.] wrote:
> Hi Vinay,
>
> Thanks for the response.
>
> I have NOT enabled any checkpointing.
>
&
Hi Raja,
Have you enabled checkpointing?
The files will be rolled to the completed state when the batch size is
reached (in your case 2 MB) or when the bucket is inactive for a certain
amount of time.
Regards,
Vinay Patil
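P.S. A sketch of the two knobs involved, with your 2 MB batch size; the
inactivity values are only examples:

    import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;

    BucketingSink<String> sink = new BucketingSink<>("hdfs:///data/output");
    sink.setBatchSize(2L * 1024 * 1024);           // roll part files at 2 MB
    sink.setInactiveBucketCheckInterval(60_000L);  // scan for idle buckets every minute
    sink.setInactiveBucketThreshold(5 * 60_000L);  // close buckets idle for 5 minutes

With checkpointing enabled, pending files are then finalized on checkpoint
completion.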
On Mon, Aug 7, 2017 at 7:53 AM, Raja.Aravapalli [via Apache Flink User
Tuning Guide:*
https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
Hope it helps.
Regards,
Vinay Patil
On Tue, Jul 25, 2017 at 6:51 PM, Shashwat Rastogi [via Apache Flink User
Mailing List archive.] wrote:
> Hi,
>
> We have several Flink jobs, all of which reads data from
Hi Stephan,
Sure will do that next time when I observe it.
Regards,
Vinay Patil
On Thu, Jul 13, 2017 at 8:09 PM, Stephan Ewen wrote:
> Is there any way you can pull a thread dump from the TMs at the point when
> that happens?
>
> On Wed, Jul 12, 2017 at 8:50 PM, vinay patil
>
Hi Gyula,
I have observed a similar issue with FlinkKafkaConsumer09 and 010 and posted
it to the mailing list as well. The issue is not consistent; however,
whenever it happens it leads to checkpoints failing or taking a long time
to complete.
Regards,
Vinay Patil
On Wed, Jul 12, 2017 at 7:00
try to set createStatistics() as well.
By the way, I was able to get rid of the memory consumption issue. Did you
try using FLASH_SSD_OPTION?
Regards,
Vinay Patil
On Fri, Jun 30, 2017 at 2:49 PM, gerryzhou [via Apache Flink User Mailing
List archive.] wrote:
> Hi,
> Is there some m
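P.S. What I meant, concretely, as a sketch; the checkpoint path is a
placeholder:

    import org.apache.flink.contrib.streaming.state.PredefinedOptions;
    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;

    RocksDBStateBackend backend = new RocksDBStateBackend("hdfs:///flink/checkpoints");
    // Predefined RocksDB option profile tuned for flash/SSD storage.
    backend.setPredefinedOptions(PredefinedOptions.FLASH_SSD_OPTIMIZED);
    // env is the StreamExecutionEnvironment
    env.setStateBackend(backend);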
lushing is taking place at regular
intervals)
Regards,
Vinay Patil
On Thu, Jun 29, 2017 at 9:13 PM, Vinay Patil
wrote:
> The state size is not that huge. On the Flink UI when it showed the data
> sent as 4GB , the physical memory usage was close to 90GB ..
>
> I will re-run by settin
The state size is not that huge. On the Flink UI, when it showed the data
sent as 4 GB, the physical memory usage was close to 90 GB.
I will re-run after setting the flushing options of RocksDB, because I am
facing this issue on 1.2.0 as well.
Regards,
Vinay Patil
On Thu, Jun 29, 2017 at 9:03 PM
Hi Aljoscha,
Yes, I have tried 1.2.1 and 1.3.0 and am facing the same issue.
The issue is not with heap memory; it is the off-heap memory that keeps
getting used (please refer to the earlier snapshot I attached, in which the
graph keeps on growing).
Regards,
Vinay Patil
On Thu, Jun 29
you please help in resolving this issue
Regards,
Vinay Patil
On Thu, Jun 29, 2017 at 6:01 PM, gerryzhou [via Apache Flink User Mailing
List archive.] wrote:
> Hi, Vinay,
> I observed a similar problem in flink 1.3.0 with rocksdb. I wonder
> how to use FRocksDB as you mentioned abov
error?
Regards,
Vinay Patil
On Thu, Jun 29, 2017 at 7:30 AM, SHI Xiaogang
wrote:
> Hi Vinay,
>
> We observed a similar problem before. We found that RocksDB keeps a lot of
> index and filter blocks in memory. With the growth in state size (in our
> cases, most states are only
I had attached is of off-heap memory; I have only assigned
12 GB of heap memory per TM.
Regards,
Vinay Patil
On Wed, Jun 28, 2017 at 8:43 PM, Aljoscha Krettek
wrote:
> Hi,
>
> Just a quick question, because I’m not sure whether this came up in the
> discussion so far: what kind of win
RocksDB configurations
<http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/file/n14013/TM_Memory_Usage.png>
Regards,
Vinay Patil
all memory. This was not happening with the previous version;
a maximum of 30 GB was getting utilized.
Because of this issue the job manager was killed and the job failed.
Are there any other configurations I have to set?
P.S. I am currently using FRocksDB.
Regards,
Vinay Patil
On Fri, May 5, 2017 at 1:01
Hi Guys,
Can anyone please provide a solution to my queries?
On Jun 8, 2017 11:30 PM, "Vinay Patil" wrote:
> Hi Guys,
>
> I am able to setup SSL correctly, however the following command does not
> work correctly and results in the error I had mailed earlier
>
> f
? Currently I am just relying on the logs.
2. The wildcard is not working with the keytool command; can you please let
me know what the issue is with the following command:
keytool -genkeypair -alias ca -keystore: -ext SAN=dns:node1.*
Regards,
Vinay Patil
On Mon, Jun 5, 2017 at 8:43 PM, vinay patil [via
org.apache.flink.configuration.GlobalConfiguration - Loading
configuration property: security.ssl.truststore-password, password*/
Regards,
Vinay Patil
flink/application_1496660166576_0001/flink-dist_2.10-1.2.0.jar,
expected: file:///
I see a JIRA ticket about the same issue but did not find any solution to
it.
Regards,
Vinay Patil
Thank you Till.
Gordon, can you please help?
Regards,
Vinay Patil
On Fri, Jun 2, 2017 at 9:10 PM, Till Rohrmann [via Apache Flink User
Mailing List archive.] wrote:
> Hi Vinay,
>
> I've pulled my colleague Gordon into the conversation who can probably
> tell you more about
,
Vinay Patil
Hi Guys,
Can someone please help me in understanding this?
Regards,
Vinay Patil
On Thu, Apr 27, 2017 at 12:36 PM, Vinay Patil
wrote:
> Hi Guys,
>
> For historical reprocessing, I am reading the Avro data from S3 and
> passing these records to the same pipeline for processing.
&g
state after processing; is this because Flink treats the S3
source as finite data? What will happen if the data is continuously
written to S3 from one pipeline while, from a second pipeline, I am doing
historical re-processing?
Regards,
Vinay Patil
the RocksDB fix in 1.2.1 so that I can test it out.
Regards,
Vinay Patil
On Sat, Mar 18, 2017 at 12:25 AM, Stephan Ewen [via Apache Flink User
Mailing List archive.] wrote:
> @vinay Let's see how fast we get this fix in - I hope yes. It may depend
> also a bit on the RocksDB community.
Hi Stephan,
Is the performance-related RocksDB change going to be part of Flink
1.2.1?
Regards,
Vinay Patil
On Thu, Mar 16, 2017 at 6:13 PM, Stephan Ewen [via Apache Flink User
Mailing List archive.] wrote:
> The only immediate workaround is to use windows with "reduce"
Streaming application (running on YARN - EMR)?
Regards,
Vinay Patil
On Thu, Mar 16, 2017 at 6:36 PM, rmetzger0 [via Apache Flink User Mailing
List archive.] wrote:
> Yes, you can change the GC using the env.java.opts parameter.
> We are not setting any GC on YARN.
>
> On Thu, Mar 16,
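P.S. i.e., in flink-conf.yaml, something like the following:

    # flink-conf.yaml
    # JVM options passed to the JobManager and TaskManager processes.
    env.java.opts: -XX:+UseG1GC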
Hi Stephan,
What can be the workaround for this?
Also, I need one confirmation: is G1 GC used by default when running the
pipeline on YARN? (I see a thread from 2015 where G1 is used by default
for Java 8.)
Regards,
Vinay Patil
On Wed, Mar 15, 2017 at 10:32 PM, Stephan Ewen [via Apache Flink User
sure how this will behave in
production, as we are going to get more than 200 million records.
As a workaround, can I take a savepoint while the pipeline is running?
Let's say I take a savepoint every 30 minutes; will it work?
Regards,
Vinay Patil
On Tue, Mar 14, 2017 at 10:02 PM, Stephan
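P.S. For reference, triggering a savepoint on a running job from the CLI;
the job id and target directory are placeholders:

    bin/flink savepoint <jobId> hdfs:///flink/savepoints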
/flink-docs-release-1.2/ops/state_
> backends.html#the-fsstatebackend
>
> Regards
> Sai
>
> On Fri, Feb 10, 2017 at 6:19 AM, Stefan Richter <[hidden email]> wrote:
>
>> Async snapshotting is th
Hi,
@Shannon - I am not facing any issue while writing to S3; I was getting
NoClassDef errors when reading the file from S3.
"Hadoop File System" - I mean I am using Hadoop's FileSystem class to read
the file from S3.
@Stephan - I tried with 1.1.4 and was getting the same issue.
The easiest way
Hi Guys,
Has anyone got this error before? If yes, have you found any other
solution apart from copying the jar files to the Flink lib folder?
Regards,
Vinay Patil
On Mon, Mar 6, 2017 at 8:21 PM, vinay patil [via Apache Flink User Mailing
List archive.] wrote:
> Hi Guys,
>
> I am getting
Hi Guys,
I am getting the same exception:
EMRFileSystem not Found
I am trying to read an encrypted S3 file using the Hadoop FileSystem class
(using Flink 1.2.0).
When I copy all the libs from /usr/share/aws/emrfs/lib and /usr/lib/hadoop
to the Flink lib folder, it works.
However, I see that all these lib
to 10 minutes, I have observed that nothing gets written
to the sink (tried with S3 as well as HDFS); at least I was expecting
pending files here.
This issue gets worse when checkpointing is disabled, as nothing is written.
Regards,
Vinay Patil
On Mon, Feb 27, 2017 at 10:55 PM, Stephan Ewen [via
nutes, now within 5 minutes of the run
the state size grows to 30 GB. After checkpointing, the 30 GB state that is
maintained in RocksDB has to be copied to HDFS, right? Is this causing
the pipeline to stall?
Regards,
Vinay Patil
On Sat, Feb 25, 2017 at 12:22 AM, Vinay Patil
wrote:
> Hi Stephan
there is a hit in overall throughput.
Regards,
Vinay Patil
On Fri, Feb 24, 2017 at 10:09 PM, Stephan Ewen [via Apache Flink User
Mailing List archive.] wrote:
> Flink's state backends currently do a good number of "make sure this
> exists" operations on the file systems
tes due to its throttling policies.
>
> That would be a super important fix to add!
>
> Best,
> Stephan
>
>
> On Fri, Feb 24, 2017 at 2:58 PM, vinay patil <[hidden email]> wrote:
>
>> Hi,
>&g
Hi,
I have attached a snapshot for reference.
As you can see, all 3 checkpoints failed; for checkpoint IDs 2 and 3 it
is stuck at the Kafka source after 50%.
(The data sent so far by Kafka source 1 is 65 GB and by source 2 is 15 GB.)
Within 10 minutes 15M records were processed, and for t