Hi Guys,
We are in the process of creating a proof of concept.
I am looking for a sample project - Flink Scala or Java - which can load
data from database to database, or
CSV to any relational database.
CSV --> S3 --> AWS Redshift
Could someone please advise me on that?
C
Matt Zimmer created FLINK-4742:
--
Summary: NPE in WindowOperator.trigger() on shutdown
Key: FLINK-4742
URL: https://issues.apache.org/jira/browse/FLINK-4742
Project: Flink
Issue Type: Bug
Joseph Sims created FLINK-4741:
--
Summary: WebRuntimeMonitor does not shut down all of its threads
(EventLoopGroups) on exit.
Key: FLINK-4741
URL: https://issues.apache.org/jira/browse/FLINK-4741
Project
Thanks for your prompt responses.
Except for the streaming file source issues, all mentioned issues are
addressed. As soon as the last one is in, I can kick off the first RC.
@Kostas, Stephan: you didn't mention anything here or in the PR. Did
you have time to work on this? :)
– Ufuk
On Tue, Oc
Greg Hogan created FLINK-4740:
-
Summary: Upgrade testing libraries
Key: FLINK-4740
URL: https://issues.apache.org/jira/browse/FLINK-4740
Project: Flink
Issue Type: Improvement
Component
PowerMock reports "org.powermock.reflect.exceptions.FieldNotFoundException:
Field 'fTestClass' was not found in class
org.junit.internal.runners.MethodValidator."
https://github.com/jayway/powermock/issues/551
This is fixed in PowerMock 1.6.1+ (currently using 1.5.5, latest is 1.6.5):
https://
From my side +1, unless there are known issues with JUnit 4.12
On Tue, Oct 4, 2016 at 9:07 PM, Greg Hogan wrote:
> JUnit 4.12 was released 4 Dec 2014. Flink is currently using JUnit 4.11
> from 14 Nov 2012.
> https://github.com/junit-team/junit4/releases
>
> My use case is the support for ass
JUnit 4.12 was released 4 Dec 2014. Flink is currently using JUnit 4.11
from 14 Nov 2012.
https://github.com/junit-team/junit4/releases
My use case is the support for assert equals on boolean arrays, but in
general this looks to be an innocuous change and I could not find any prior
discussion.
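For reference, a minimal sketch of the boolean-array assertion that 4.12 enables (the test class and values below are made up for illustration; assertArrayEquals(boolean[], boolean[]) is the overload added in JUnit 4.12):

import static org.junit.Assert.assertArrayEquals;

import org.junit.Test;

public class BooleanArrayAssertTest {

    @Test
    public void booleanArraysCompareElementWise() {
        boolean[] expected = {true, false, true};
        boolean[] actual = {true, false, true};

        // This overload does not exist in JUnit 4.11; there the comparison has
        // to be emulated, e.g. by boxing or comparing element by element.
        assertArrayEquals(expected, actual);
    }
}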
Steffen Hausmann created FLINK-4739:
---
Summary: Adding packaging details for the Elasticsearch connector
Key: FLINK-4739
URL: https://issues.apache.org/jira/browse/FLINK-4739
Project: Flink
Till Rohrmann created FLINK-4738:
Summary: Port TaskManager logic to TaskExecutor
Key: FLINK-4738
URL: https://issues.apache.org/jira/browse/FLINK-4738
Project: Flink
Issue Type: Sub-task
Concerning the Pull Requests about Kafka 0.8 offset committing:
They look good, but I would actually like to not merge them to Flink 1.1.3,
as they may result in slightly changed behavior for users of Kafka 0.8
The crucial fix (Kafka 0.9) is already in, so on the Kafka side we are
good to go, in
Hi Anton,
1) According to org.apache.calcite.sql.fun.SqlAvgAggFunction, "the
result is the same type", so I think this is standard SQL behavior.
2) This seems to be a code generation bug. The sqrt/power function seems
not to accept the data types. It would be great if you could open an issue if
it do
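For illustration, a small sketch of point 1) with the Java batch Table API. The import locations, method names, and string-expression syntax below are assumptions based on the 1.1-era API and may differ in other releases:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.table.BatchTableEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.table.Row;
import org.apache.flink.api.table.Table;
import org.apache.flink.api.table.TableEnvironment;

public class AvgTypeExample {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        BatchTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

        DataSet<Tuple2<Float, Integer>> ds = env.fromElements(
            Tuple2.of(1.0f, 1),
            Tuple2.of(2.0f, 2));

        Table t = tEnv.fromDataSet(ds, "a, b");

        // AVG keeps the input type: the FLOAT column "a" averages to 1.5,
        // while the INT column "b" averages in integer arithmetic to 1.
        Table averages = t.select("a.avg, b.avg");

        tEnv.toDataSet(averages, Row.class).print();
    }
}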
Stephan Ewen created FLINK-4737:
---
Summary: Add more compression algorithms to FileInputFormat
Key: FLINK-4737
URL: https://issues.apache.org/jira/browse/FLINK-4737
Project: Flink
Issue Type: Im
Hello all,
I have some questions about working with FlinkSQL.
1) I want to calculate the average of column values:
val env = ExecutionEnvironment.getExecutionEnvironment
val tEnv = TableEnvironment.getTableEnvironment(env, config)
val ds = env.fromElements(
(1.0f, 1),
(2.0f, 2)).toTa
Thank you both for your detailed replies.
I think we all agree on extending the evaluation framework to handle
recommendation models, and choosing the scalable form of ranking, so
we'll do it that way. For now we will work upon Theodore's PR.
Thanks for giving me the reasons behind the design
Greg Hogan created FLINK-4736:
-
Summary: Don't duplicate fields in Ordering
Key: FLINK-4736
URL: https://issues.apache.org/jira/browse/FLINK-4736
Project: Flink
Issue Type: Improvement
API compatibility is maintained through the 1.x.y major version line; that
is fine on paper, Flavio.
It would be great if you could test it after Fabian bumps the version, so
we can mitigate any potential issues.
Best,
Marton
On Tue, Oct 4, 2016, 14:24 Flavio Pompermaier wrote:
> We're curren
Kurt Young created FLINK-4735:
-
Summary: Migrate some job execution related akka messages to rpc
calls
Key: FLINK-4735
URL: https://issues.apache.org/jira/browse/FLINK-4735
Project: Flink
Issue
Hello all,
Thanks for starting this discussion, Gabor; you bring up a lot of interesting
points.
In terms of the evaluation framework I would also favor reworking it in
order to support recommendation models. We can either merge the current
PR and use it as a basis, or open a new one.
For the f
Greg Hogan created FLINK-4734:
-
Summary: Remove use of Tuple setField for fixed position
Key: FLINK-4734
URL: https://issues.apache.org/jira/browse/FLINK-4734
Project: Flink
Issue Type: Improveme
Hey Eron! What will this mean for users running older Mesos versions?
It might help to post this question to the user mailing list as well.
On Tue, Oct 4, 2016 at 4:06 AM, Eron Wright wrote:
> Hello,
> For Flink's initial support for Mesos, coming in Flink 1.2, does anyone have
> an objection to
I think you can start from this (using the Flink Table API); I hope it can be
helpful:
PS: maybe someone could write a blog post on how to do this with Scala, since
it's a frequent question on the mailing list... :)
public static void main(String[] args) throws Exception {
String path
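To flesh that out a bit, here is a minimal sketch of the CSV-to-relational-database step with the DataSet API and the flink-jdbc JDBCOutputFormat. The path, table name, driver, and connection settings are placeholders, and this is written against the 1.1-era flink-jdbc API (where the output format consumes Tuples; later releases reworked it around Row):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.jdbc.JDBCOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;

public class CsvToJdbcJob {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Read a two-column CSV file; an s3:// path also works if the
        // Hadoop S3 filesystem is configured for the cluster.
        DataSet<Tuple2<Integer, String>> input = env
            .readCsvFile("file:///tmp/input.csv")
            .types(Integer.class, String.class);

        // Write each record through JDBC; Redshift, SQL Server, etc. only
        // differ in the driver class and the JDBC URL.
        input.output(JDBCOutputFormat.buildJDBCOutputFormat()
            .setDrivername("org.postgresql.Driver")
            .setDBUrl("jdbc:postgresql://localhost:5432/target_db")
            .setUsername("user")
            .setPassword("secret")
            .setQuery("INSERT INTO target_table (id, name) VALUES (?, ?)")
            .finish());

        env.execute("CSV to JDBC");
    }
}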
We're currently using Cloudera 5.6.0 and there's HBase 1.0.0...is it
compatible?
On Tue, Oct 4, 2016 at 12:08 PM, Márton Balassi
wrote:
> Sounds reasonable to me. +1.
>
> On Tue, Oct 4, 2016 at 11:38 AM, Fabian Hueske wrote:
>
> > Hi everybody,
> >
> > Flink's TableInputFormat depends on a very
Hi Guys,
We are in the process of creating a POC.
I am looking for a sample project - Flink Scala or Java - which can load
data from database to database, or
CSV to any relational database.
CSV --> SQLSERVER --> AWS Redshift
Could someone please help me with that?
Cheers
Ram
Hi Gabor,
thanks for getting involved in Flink's ML library. Always good to have
people working on it :-)
Some thoughts concerning the points you've raised inline:
On Tue, Oct 4, 2016 at 12:47 PM, Gábor Hermann
wrote:
> Hey all,
>
> We've been working on improvements for the recommendation in
Hey all,
We've been working on improvements for the recommendation in Flink ML,
and some API design questions have come up. Our plans in short:
- Extend ALS to work on implicit feedback datasets [1]
- DSGD implementation for matrix factorization [2]
- Ranking prediction based on a matrix facto
Thanks Ufuk for stepping up as RM!
Regarding whether FLINK-4723 / FLINK-4727 should be included:
The failing test on PR #2580 is unrelated to the change.
However, I think it’s reasonable to skip them if we’re aiming to get an RC out
today, as they’ll need more time for reviewing.
Not including the
Thanks Ufuk for stepping up as release manager!
Yes, I will backport the fix for FLINK-4311 to Flink 1.1.3 and merge it
today.
2016-10-04 12:07 GMT+02:00 Ufuk Celebi :
> If there are no objections I would like to be the release manager for
> this release.
>
> Furthermore, I would like to add FLIN
Thanks for volunteering for RM, Ufuk.
On Tue, Oct 4, 2016 at 12:07 PM, Ufuk Celebi wrote:
> If there are no objections I would like to be the release manager for
> this release.
>
> Furthermore, I would like to add FLINK-4732 (Maven junction plugin
> security issue) to the list of fixes for this
Sounds reasonable to me. +1.
On Tue, Oct 4, 2016 at 11:38 AM, Fabian Hueske wrote:
> Hi everybody,
>
> Flink's TableInputFormat depends on a very old HBase dependency (0.98.11).
>
> We have received user requests (see FLINK-2765 [1]) to update the
> dependency for hadoop-2 to 1.2.
> In addition
If there are no objections I would like to be the release manager for
this release.
Furthermore, I would like to add FLINK-4732 (Maven junction plugin
security issue) to the list of fixes for this release. Other than
that, I think the list in this thread is good and we should now focus
on getting a
Chesnay Schepler created FLINK-4733:
---
Summary: Port WebFrontend to new metric system
Key: FLINK-4733
URL: https://issues.apache.org/jira/browse/FLINK-4733
Project: Flink
Issue Type: Improve
Hi everybody,
Flink's TableInputFormat depends on a very old HBase dependency (0.98.11).
We have received user requests (see FLINK-2765 [1]) to update the
dependency for hadoop-2 to 1.2.
In addition there is a pull request with critical fixes for the HBase
TableInputFormat [2] that bumps the vers
Update:
FLINK-4618 (https://github.com/apache/flink/pull/2579) has been merged.
There are two more follow-up issues that I also want to include in 1.1.3 for the
Kafka connector after review, if others agree:
- FLINK-4723 (https://github.com/apache/flink/pull/2580)
- FLINK-4727 (https://github.com
Maximilian Michels created FLINK-4732:
-
Summary: Maven junction plugin security threat
Key: FLINK-4732
URL: https://issues.apache.org/jira/browse/FLINK-4732
Project: Flink
Issue Type: Bug
Stefan Richter created FLINK-4731:
-
Summary: HeapKeyedStateBackend restoring broken for scale-in
Key: FLINK-4731
URL: https://issues.apache.org/jira/browse/FLINK-4731
Project: Flink
Issue Type
On Fri, Sep 30, 2016 at 7:02 PM, dan bress wrote:
> Thanks for the answer. In 1.1.x, if I deploy job "Job-V1", run for a
> while, trigger a savepoint, cancel the job, submit job "Job-V2" and resume
> from the savepoint, will Job-V2 understand the savepoint from Job-V1?
Currently it depends on the type
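One practice that helps regardless of the state types involved: assign explicit operator IDs with uid(), so a reworked Job-V2 can map the savepoint state back to the same logical operators. A minimal DataStream sketch with placeholder names:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class JobV1 {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.socketTextStream("localhost", 9999)
            .map(new MapFunction<String, String>() {
                @Override
                public String map(String value) {
                    return value.toUpperCase();
                }
            })
            // A stable, explicit ID lets Job-V2 match this operator's state in
            // the savepoint even if the topology changes between versions.
            .uid("normalize-map")
            .print();

        env.execute("Job-V1");
    }
}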
Stefan Richter created FLINK-4730:
-
Summary: Introduce CheckpointMetaData
Key: FLINK-4730
URL: https://issues.apache.org/jira/browse/FLINK-4730
Project: Flink
Issue Type: Bug
Repo