Alexander Pivovarov created FLINK-4668:
--
Summary: Fix positive random int generation
Key: FLINK-4668
URL: https://issues.apache.org/jira/browse/FLINK-4668
Project: Flink
Issue Type: Bug
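The ticket summary suggests the classic Math.abs(Random.nextInt()) pitfall; the exact Flink call site is not shown in this digest, so the following is only an illustrative sketch of the bug and one common fix:

```java
import java.util.Random;

public class PositiveRandom {
    // Buggy pattern: Math.abs(random.nextInt()) is still negative when
    // nextInt() returns Integer.MIN_VALUE, because Math.abs(Integer.MIN_VALUE)
    // overflows and returns Integer.MIN_VALUE itself.
    // One safe alternative: drop the sign bit with an unsigned shift.
    public static int positiveRandomInt(Random random) {
        return random.nextInt() >>> 1; // always in [0, Integer.MAX_VALUE]
    }

    public static void main(String[] args) {
        // The overflow case, demonstrated directly:
        System.out.println(Math.abs(Integer.MIN_VALUE)); // prints -2147483648
        Random r = new Random();
        for (int i = 0; i < 1_000_000; i++) {
            if (positiveRandomInt(r) < 0) throw new AssertionError("negative!");
        }
        System.out.println("ok");
    }
}
```

Another common fix is random.nextInt(bound), which never returns a negative value for a positive bound.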
Vijay Srinivasaraghavan created FLINK-4667:
--
Summary: Yarn Session CLI not listening on correct ZK namespace
when HA is enabled to use ZooKeeper backend
Key: FLINK-4667
URL: https://issues.apache.org/jira
Alexander Pivovarov created FLINK-4666:
--
Summary: Make constants final in ParameterTool
Key: FLINK-4666
URL: https://issues.apache.org/jira/browse/FLINK-4666
Project: Flink
Issue Type: B
Alexander Pivovarov created FLINK-4665:
--
Summary: Remove boxing/unboxing to parse a primitive
Key: FLINK-4665
URL: https://issues.apache.org/jira/browse/FLINK-4665
Project: Flink
Issue T
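As an illustration of the kind of change the summary describes (the actual call sites in Flink are not shown here): Integer.valueOf(s) allocates a boxed Integer only to unbox it again, while Integer.parseInt(s) returns the primitive directly.

```java
public class ParsePrimitive {
    // Boxed round-trip: parses to int, boxes it into an Integer object,
    // then auto-unboxes back to int at the call site.
    public static int parseWithBoxing(String s) {
        return Integer.valueOf(s).intValue();
    }

    // Direct: Integer.parseInt returns the primitive, no Integer allocated.
    public static int parseDirect(String s) {
        return Integer.parseInt(s);
    }

    public static void main(String[] args) {
        System.out.println(parseWithBoxing("42") == parseDirect("42")); // true
    }
}
```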
Hi Guys,
We have a requirement: loading data from a local CSV file into a Postgres
database using Flink Scala…
Do you have any sample Flink Scala code for this?
We have tried and searched on Google and the Flink website for a data load
example, but we haven't found any sample code for this requirement.
Code: Flink S
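A minimal sketch of one possible approach, using Flink's DataSet API together with the flink-jdbc JDBCOutputFormat (the path, database URL, table, and column names below are placeholders; note that depending on the Flink version, JDBCOutputFormat consumes Tuple — pre-1.1 — or Row, and the Tuple form is shown):

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.jdbc.JDBCOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;

public class CsvToPostgres {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Read the local CSV as (id, name) tuples; adjust the field types
        // to match the real CSV schema.
        DataSet<Tuple2<Integer, String>> csv = env
                .readCsvFile("file:///path/to/input.csv") // placeholder path
                .types(Integer.class, String.class);

        // Write each tuple as one row via a parameterized INSERT.
        csv.output(JDBCOutputFormat.buildJDBCOutputFormat()
                .setDrivername("org.postgresql.Driver")
                .setDBUrl("jdbc:postgresql://localhost:5432/mydb") // placeholder
                .setQuery("INSERT INTO my_table (id, name) VALUES (?, ?)")
                .finish());

        env.execute("csv-to-postgres");
    }
}
```

The same structure ports directly to Scala; the job needs the flink-jdbc module and the PostgreSQL JDBC driver on the classpath.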
Hi All,
Based on my code reading, I have the following understanding of the Metrics
and Accumulators.
1. Accumulators for a Flink job work like global counters. They are
designed so that accumulator values from different instances of an
ExecutionVertex can be combined. They are essentially distribut
Hi Aljoscha,
I was able to get the ClusterClient and Accumulators using the following:

DefaultCLI defaultCLI = new DefaultCLI();
CommandLine line = new DefaultParser().parse(new Options(), new String[]{}, true);
ClusterClient clusterClient = defaultCLI.retrieveCluster(line, configuration);
Regards
S
Greg Hogan created FLINK-4664:
-
Summary: Add translator to NullValue
Key: FLINK-4664
URL: https://issues.apache.org/jira/browse/FLINK-4664
Project: Flink
Issue Type: New Feature
Compone
Hi All,
I tried to use the FailureRate restart strategy by setting values for it in
flink-conf.yaml, but Flink (v1.1.2) did not pick it up.
# Flink Restart strategy
restart-strategy: failure-rate
restart-strategy.failure-rate.delay: 120 s
restart-strategy.failure-rate.failure-rate-interval: 12 minute
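For reference, the failure-rate strategy is configured with three keys (assuming the 1.1.x configuration names); `max-failures-per-interval` is easy to miss and may be why the strategy is not picked up. A complete block might look like:

```
restart-strategy: failure-rate
restart-strategy.failure-rate.max-failures-per-interval: 3
restart-strategy.failure-rate.failure-rate-interval: 12 min
restart-strategy.failure-rate.delay: 120 s
```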
Hi Guys,
We have a requirement: loading data from a local CSV file into a Postgres
database using Flink Scala. We have tried a number of ways; all failed.
Do you have any example for this? With dependency libraries, to understand
how to load data from CSV into Postgres.
We have tried and searched on Google
Hi Team,
Will you be able to guide me on this? Is this a known issue that we
can't implement a data load in Flink Scala?
Data load from CSV to Postgres or any relational database in Flink
Scala.
Thanks
Jagan.
On 22 September 2016 at 20:15, Jagan wrote:
> Thanks Suneel,
>
> but cl
Thanks Suneel,
but the client wants to implement the data load in Flink Scala.
On 22 September 2016 at 20:07, Suneel Marthi wrote:
> Couldn't you use SQLLoader or something for doing that?
>
> http://stackoverflow.com/questions/2987433/how-to-
> import-csv-file-data-into-a-postgresql-table
>
>
>
> O
Couldn't you use SQLLoader or something for doing that?
http://stackoverflow.com/questions/2987433/how-to-import-csv-file-data-into-a-postgresql-table
On Thu, Sep 22, 2016 at 3:01 PM, Jagan wrote:
> Hi Guys,
>
> We have a requirement: loading data from a local CSV file into a Postgres
> databa
Hi Guys,
We have a requirement: loading data from a local CSV file into a Postgres
database using Flink Scala. We have tried a number of ways; all failed.
Do you have any example for this? With dependency libraries, to understand
how to load data from CSV into Postgres.
We have tried and searched on Google
Not to derail this thread onto another topic, but the problem with using a
static instance is that there's no way to shut it down when the job stops. So
if, for example, it starts threads, I don't think those threads will stop when
the job stops. I'm not very well versed in how various Java 8 imp
+1 for Fabian's suggestion
On Thu, Sep 22, 2016 at 3:25 PM, Swapnil Chougule
wrote:
> +1
> It will be good to have one module flink-connectors (union of streaming and
> batch connectors).
>
> Regards,
> Swapnil
>
> On Thu, Sep 22, 2016 at 6:35 PM, Fabian Hueske wrote:
>
> > Hi everybody,
> >
>
+1
It will be good to have one module flink-connectors (union of streaming and
batch connectors).
Regards,
Swapnil
On Thu, Sep 22, 2016 at 6:35 PM, Fabian Hueske wrote:
> Hi everybody,
>
> right now, we have two separate Maven modules for batch and streaming
> connectors (flink-batch-connectors
Hi everybody,
right now, we have two separate Maven modules for batch and streaming
connectors (flink-batch-connectors and flink-streaming-connectors) that
contain modules for the individual external systems and storage formats
such as HBase, Cassandra, Avro, Elasticsearch, etc.
Some of these sys
Hi,
there is ClusterClient.getAccumulators(JobID jobID) which should be able to
get the accumulators for a running job. If you can construct a
ClusterClient that should be a good solution.
Cheers,
Aljoscha
On Wed, 21 Sep 2016 at 21:15 Chawla,Sumit wrote:
> Hi Sean
>
> My goal here is to get Use
Swapnil Chougule created FLINK-4663:
---
Summary: Flink JDBCOutputFormat logs wrong WARN message
Key: FLINK-4663
URL: https://issues.apache.org/jira/browse/FLINK-4663
Project: Flink
Issue Type
Exactly :) That's why we haven't added either the spanning tree or the
strongly connected components algorithms yet.
On Sep 22, 2016 12:16 PM, "Stephan Ewen" wrote:
> Just as a general comment:
>
> A program with nested loops is most likely not going to be performant in
> any way. It makes sen
"flink-test-utils" contains, as the name says, utils for testing. Intended
to be used by users in writing their own tests.
"flink-tests" contains cross module tests, no user should ever need to have
a dependency on that.
They are different because users explicitly asked for test utils to be
factor
Just as a general comment:
A program with nested loops is most likely not going to be performant in
any way. It makes sense to re-think the algorithm and come up with a modified
or different pattern, rather than trying to implement the exact algorithm
line by line.
It may be worth checking that, bec
Timo Walther created FLINK-4662:
---
Summary: Bump Calcite version up to 1.9
Key: FLINK-4662
URL: https://issues.apache.org/jira/browse/FLINK-4662
Project: Flink
Issue Type: Improvement
Hi Olga,
when you use mapEdges() or mapVertices() with generics, Flink cannot
determine the type because of type erasure, as the exception says. That's
why we also provide methods that take the type information as a parameter.
You can use those to make the return type explicit. In your example,
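A sketch of what passing explicit type information can look like (the graph shape, value types, and mapper below are invented for illustration; it assumes Gelly's mapVertices overload that accepts a TypeInformation, plus the TypeHint helper):

```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.graph.Graph;
import org.apache.flink.graph.Vertex;

public class MapVerticesWithTypeInfo {
    // Maps Long vertex values to Tuple2<Long, Double>, supplying the return
    // type explicitly because Tuple2's generics are erased at runtime.
    public static Graph<Long, Tuple2<Long, Double>, Double> withExplicitType(
            Graph<Long, Long, Double> graph) {
        return graph.mapVertices(
                new MapFunction<Vertex<Long, Long>, Tuple2<Long, Double>>() {
                    @Override
                    public Tuple2<Long, Double> map(Vertex<Long, Long> v) {
                        return new Tuple2<>(v.getValue(), 1.0);
                    }
                },
                TypeInformation.of(
                        new TypeHint<Vertex<Long, Tuple2<Long, Double>>>() {}));
    }
}
```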
shijinkui created FLINK-4661:
Summary: Failure to find
org.apache.flink:flink-runtime_2.10:jar:tests
Key: FLINK-4661
URL: https://issues.apache.org/jira/browse/FLINK-4661
Project: Flink
Issue Ty