Robert Batts created FLINK-3763:
---
Summary: RabbitMQ Source/Sink standardize connection parameters
Key: FLINK-3763
URL: https://issues.apache.org/jira/browse/FLINK-3763
Project: Flink
Issue Type
Hi Till,
The cluster has started in HA.
I already patched the Flink interpreter to allow passing the Configuration to
FlinkILoop. Nevertheless, I still have to pass the host and port to FlinkILoop;
they are required by the FlinkILoop constructor, and I retrieve them from the
.yarn-properties file.
I logged Flink Confi
I don't have a full list of metrics, but everything that is related to
runtime performance and possible bottlenecks of the system: all
interprocess communication counters, errors, latencies, checkpoint sizes
and checkpointing latencies, buffer allocations and releases, etc.
As we aggregate ourselves
I'm currently working on a metric system that
a) exposes several TaskManager metrics
b) allows gathering metrics in various parts of a task, most notably
user-defined functions.
The first version makes these metrics available via JMX on each
TaskManager.
While a mechanism to make that pluggable
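The JMX approach above can be illustrated with a self-contained sketch using only the JDK. All names here (the MBean domain, the `TaskCounter` type, the `RecordsProcessed` attribute) are hypothetical placeholders, not Flink's actual metric names:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch: expose a counter as a standard MBean and read it back through the
// platform MBeanServer. A tool like jconsole would see the same attribute.
public class JmxMetricSketch {
    public interface TaskCounterMBean {
        long getRecordsProcessed();
    }

    public static class TaskCounter implements TaskCounterMBean {
        private volatile long records;
        public void inc() { records++; }
        public long getRecordsProcessed() { return records; }
    }

    public static long registerAndRead() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("sketch.flink:type=TaskCounter");
        TaskCounter counter = new TaskCounter();
        server.registerMBean(counter, name);   // now visible to JMX clients
        counter.inc();
        counter.inc();
        // read the attribute back the way an external JMX client would
        return (Long) server.getAttribute(name, "RecordsProcessed");
    }
}
```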
Hi!
I'm looking into integrating Flink into our stack and one of the
requirements is to report metrics to an internal system. The current
Accumulators are not adequate to provide the visibility that we need to run
such a system in production. We want much more information about the
internal cluster sta
Andrew Palumbo created FLINK-3762:
-
Summary: Kryo StackOverflowError due to disabled Kryo Reference
tracking
Key: FLINK-3762
URL: https://issues.apache.org/jira/browse/FLINK-3762
Project: Flink
Hi Matthias,
I changed the version as you suggested, but when I do that I get a
compilation error in the classes
org.apache.flink.storm.util.BoltFileSink and
org.apache.flink.storm.util.OutputFormatter
Btw, the dependency
org.apache.flink
flink-storm-examples_2.11
1.1-SNA
Do you want me to open a jira/pr for this?
Original message
From: Stephan Ewen
Date: 04/13/2016 5:16 AM (GMT-05:00)
To: dev@flink.apache.org
Subject: Re: Kryo StackOverflowError
+1 to add this to 1.0.2
On Wed, Apr 13, 2016 at 1:57 AM, Andrew Palumbo wrote:
>
> Hi,
>
> Great
Till Rohrmann created FLINK-3761:
Summary: Introduce key group state backend
Key: FLINK-3761
URL: https://issues.apache.org/jira/browse/FLINK-3761
Project: Flink
Issue Type: Sub-task
There is an InputFormat object for each parallel task of a DataSource.
So for a source with parallelism 8 you will have 8 instances of the
InputFormat running, regardless whether this is on one box with 8 slots or
8 machines with 1 slots each.
The same is true for all other operators (Map, Reduce,
Ok, thanks! Just one last question: is an InputFormat instantiated for each
task slot, or once per TaskManager?
On 14 Apr 2016 18:07, "Chesnay Schepler" wrote:
> no.
>
> if (connection == null) {
>     establishConnection();
> }
>
> done. same connection for all splits.
>
> On 14.04.2016 17:59, Flavio Po
This vote is cancelled in favour of RC2, because of the bug reported
by Aljoscha.
On Thu, Apr 14, 2016 at 3:14 PM, Aljoscha Krettek wrote:
> -1
>
> A user just discovered this bug:
> https://issues.apache.org/jira/browse/FLINK-3760 which seems somewhat
> critical because there is no workaround.
>
no.
if (connection == null) {
    establishConnection();
}
done. same connection for all splits.
On 14.04.2016 17:59, Flavio Pompermaier wrote:
I didn't understand what you mean for "it should also be possible to reuse
the same connection of an InputFormat across InputSplits, i.e., calls of
the ope
An InputFormat object processes several InputSplits, so open() is
repeatedly called on the same object.
I suggest creating the connection in the first open() call and reusing it in
all subsequent open() calls.
So no pool at all ;-)
2016-04-14 17:59 GMT+02:00 Flavio Pompermaier :
> I didn't unders
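Fabian's suggestion can be sketched as a minimal stand-alone class. `Connection` here is a stub standing in for e.g. a JDBC connection, and this is not Flink's real InputFormat interface:

```java
// Sketch of "create the connection in the first open() call and reuse it
// in all subsequent open() calls". Connection is a stub type, not a real
// JDBC connection, and this is not Flink's InputFormat interface.
public class ReusingInputFormatSketch {
    static class Connection { }      // stand-in for an expensive resource

    private Connection connection;   // lives as long as the format instance

    // open() is called once per InputSplit on the same object
    public Connection open(int splitNumber) {
        if (connection == null) {    // only the first split pays the setup cost
            connection = new Connection();
        }
        return connection;           // same connection for all splits
    }
}
```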
I didn't understand what you mean for "it should also be possible to reuse
the same connection of an InputFormat across InputSplits, i.e., calls of
the open() method".
At the moment in the open method there's a call to establishConnection,
thus, a new connection is created for each split.
If I unde
On 14.04.2016 17:22, Fabian Hueske wrote:
Hi Flavio,
those are good questions.
1) Replacing null values by default values and simply forwarding records is
very dangerous, in my opinion.
I see two alternatives: A) we use a data type that tolerates null values.
This could be a POJO that the user h
Hi Flavio,
those are good questions.
1) Replacing null values by default values and simply forwarding records is
very dangerous, in my opinion.
I see two alternatives: A) we use a data type that tolerates null values.
This could be a POJO that the user has to provide or Row. The drawback of
Row is
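Alternative A can be sketched as a tiny row-like type. This is a hypothetical class, not Flink's actual Row; the strict mode mirrors the behaviour proposed in FLINK-3759:

```java
// Hypothetical sketch of a data type that tolerates null values, with an
// optional strict mode that throws when a null is encountered (as proposed
// in FLINK-3759). Not Flink's actual Row class.
public class NullableRowSketch {
    private final Object[] fields;
    private final boolean nullsAllowed;

    public NullableRowSketch(int arity, boolean nullsAllowed) {
        this.fields = new Object[arity];
        this.nullsAllowed = nullsAllowed;
    }

    public void setField(int pos, Object value) {
        if (value == null && !nullsAllowed) {
            // non-null mode: fail loudly instead of forwarding a default
            throw new IllegalArgumentException(
                "null encountered in non-null mode at field " + pos);
        }
        fields[pos] = value;
    }

    public Object getField(int pos) {
        return fields[pos];
    }
}
```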
Hi,
Flink does not make any guarantees about the order of arriving elements
except in the case of one-to-one forwarding patterns. That is, only for
map/flatMap/filter and such operations will the order in which two
successive operations see their elements be the same.
Could you please describe in
Hi guys,
I'm integrating the comments of Chesnay into my PR, but there are a couple of
things that I'd like to discuss with the core developers.
1. about the JDBC type mapping (addValue() method at [1]): At the moment,
if I find a null value for a Double, JDBC's getDouble returns 0.0. Is
i
Hi list,
I am surprised by the behaviour of the code below. In particular, I am
puzzled by the fact that events do not seem to enter the window in order.
What am I doing wrong?
Here's what I don't understand. This test outputs the following error:
java.lang.AssertionError: expected:<[[10 "Join(L
-1
A user just discovered this bug:
https://issues.apache.org/jira/browse/FLINK-3760 which seems somewhat
critical because there is no workaround.
On Thu, 14 Apr 2016 at 16:05 Ufuk Celebi wrote:
> Dear Flink community,
>
> Please vote on releasing the following candidate as Apache Flink version
Thanks, sorry for the inconvenience :-O
On Thu, 14 Apr 2016 at 16:11 Ufuk Celebi wrote:
> Just started the vote but let's include it. No biggie. I will re-run the
> script later today.
>
> On Thursday, 14 April 2016, Aljoscha Krettek wrote:
>
> > Hi,
> > a user (Hironori Ogibayashi) found anoth
Just started the vote but let's include it. No biggie. I will re-run the
script later today.
On Thursday, 14 April 2016, Aljoscha Krettek wrote:
> Hi,
> a user (Hironori Ogibayashi) found another somewhat critical bug:
> https://issues.apache.org/jira/browse/FLINK-3760
>
> It's an easy fix and I
Hi Andrea,
have you started the Flink Yarn cluster in HA mode? Then the job manager
address is stored in ZooKeeper and you have to tell your FlinkILoop that it
should retrieve the JobManager address from there. In order to do that you
have to set conf.setString(ConfigConstants.RECOVERY_MODE,
"zook
Dear Flink community,
Please vote on releasing the following candidate as Apache Flink version 1.0.2.
The commit to be voted on:
80211d64e5d9f0e4f7f5e41085fc6a0e61bcdc02
Branch:
release-1.0.2-rc1 (see
https://git1-us-west.apache.org/repos/asf/flink/?p=flink.git;a=shortlog;h=refs/heads/release-1.
Hi,
a user (Hironori Ogibayashi) found another somewhat critical bug:
https://issues.apache.org/jira/browse/FLINK-3760
It's an easy fix and I will open a PR momentarily.
On Thu, 14 Apr 2016 at 13:17 Ufuk Celebi wrote:
> Thanks for your feedback Till and Aljoscha. I will trigger the first RC
> t
Aljoscha Krettek created FLINK-3760:
---
Summary: Fix StateDescriptor.readObject
Key: FLINK-3760
URL: https://issues.apache.org/jira/browse/FLINK-3760
Project: Flink
Issue Type: Bug
Fabian Hueske created FLINK-3759:
Summary: Table API should throw an exception if a null value is
encountered in non-null mode.
Key: FLINK-3759
URL: https://issues.apache.org/jira/browse/FLINK-3759
Project
change the version to 1.1-SNAPSHOT
On 04/14/2016 11:52 AM, star jlong wrote:
> One question which dependency of flink are you using because I'm using
> org.apache.flink
> flink-storm-examples_2.11
> 1.0.0
> And once I change the version to SNAPSHOT version, the pom.xml complains that
> it cou
Thanks for your feedback Till and Aljoscha. I will trigger the first RC today.
– Ufuk
On Mon, Apr 11, 2016 at 5:55 PM, Aljoscha Krettek wrote:
> I also found an issue in the RocksDB backend (not introduced by me... ;-):
> https://issues.apache.org/jira/browse/FLINK-3730
>
> It's not that critica
Konstantin Knauf created FLINK-3758:
---
Summary: Add possibility to register accumulators in custom
triggers
Key: FLINK-3758
URL: https://issues.apache.org/jira/browse/FLINK-3758
Project: Flink
Konstantin Knauf created FLINK-3757:
---
Summary: addAccumulator does not throw Exception on duplicate
accumulator name
Key: FLINK-3757
URL: https://issues.apache.org/jira/browse/FLINK-3757
Project: Fl
One question which dependency of flink are you using because I'm using
org.apache.flink
flink-storm-examples_2.11
1.0.0
And once I change the version to SNAPSHOT version, the pom.xml complains that
it could not satisfy the given dependency.
On Thursday, 14 April 2016 at 10:45, star jlong
wrote:
Yes it is.
On Thursday, 14 April 2016 at 10:39, Matthias J. Sax wrote:
For the fix, you need to use the current development version of Flink,
i.e., change your Maven dependency from 1.0 to
1.1-SNAPSHOT
One question: what is FlinkGitService.class? It only shows up when
you get the ClassLoa
For the fix, you need to use the current development version of Flink,
i.e., change your Maven dependency from 1.0 to
1.1-SNAPSHOT
One question: what is FlinkGitService.class? It only shows up when
you get the ClassLoader:
> ClassLoader loader = URLClassLoader.newInstance(new URL[] { new URL(pa
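The suggested fix, written out as the full Maven dependency (artifact and 1.1-SNAPSHOT version taken from the surrounding thread):

```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-storm-examples_2.11</artifactId>
  <version>1.1-SNAPSHOT</version>
</dependency>
```

Note that SNAPSHOT versions are only resolvable if the Apache snapshot repository is configured in the pom.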
Here is what I tried. At first I configured my cluster with the private
IP and it started properly.
To avoid this Akka issue, I decided to configure my cluster with the public
address, but with this configuration my cluster does not start at all.
Here are the logs that I get:
2016-04-14 09:0
Till Rohrmann created FLINK-3756:
Summary: Introduce state hierarchy in CheckpointCoordinator
Key: FLINK-3756
URL: https://issues.apache.org/jira/browse/FLINK-3756
Project: Flink
Issue Type:
Thanks Till for the reply.
But how would you suggest I address that?
Thanks,
Ned
On Thu, Apr 14, 2016 at 9:56 AM, Till Rohrmann
wrote:
> Hi Ned,
>
> I think you are facing the issue described in this JIRA issue [1]. The
> problem is that you have a private and a public IP address and that A
Till Rohrmann created FLINK-3755:
Summary: Introduce key groups for key-value state to support
dynamic scaling
Key: FLINK-3755
URL: https://issues.apache.org/jira/browse/FLINK-3755
Project: Flink
Hi Ned,
I think you are facing the issue described in this JIRA issue [1]. The
problem is that you have a private and a public IP address and that Akka
binds to the private IP address. Since the registered IP of an ActorSystem
and the target IP address of a request to this ActorSystem have to be
m
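One common way to work around such a private/public address mismatch is to pin the JobManager to an address that is reachable by all parties. This is a sketch assuming the standard flink-conf.yaml keys; the hostname is a hypothetical placeholder:

```yaml
# flink-conf.yaml (sketch): use a hostname that resolves to the same
# reachable address for the client, the JobManager, and the TaskManagers.
jobmanager.rpc.address: jobmanager.example.internal
jobmanager.rpc.port: 6123
```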
2016-04-14 08:23:51,900 INFO  org.apache.flink.runtime.jobmanager.JobManager  -
2016-04-14 08:23:51,902 INFO  org.apache.flink.runtime.jobmanager.JobManager  - Starting
JobManager (Version:
I'm referring to the jobmanager.log file not the client log file. You can
find it in the `/log` directory.
Cheers,
Till
On Thu, Apr 14, 2016 at 9:56 AM, ned dogg wrote:
> Hi Till
>
> Thanks for the prompt reply.
>
> The logs say: "Please make sure that the actor is running and its port
> is
Hi Till
Thanks for the prompt reply.
The logs say: "Please make sure that the actor is running and its port
is reachable."
And it is actually reachable, because I can ping that address.
Ned.
On Thu, Apr 14, 2016 at 8:43 AM, Till Rohrmann
wrote:
> Hi Ned,
>
> what do the logs of the JobMana
Hi Ned,
what do the logs of the JobManager say?
Cheers,
Till
On Apr 14, 2016 9:19 AM, "ned dogg" wrote:
> Hi everybody,
>
> I'm Ned, a young and passionate developer of Apache technologies. I have
> been playing with Apache Flink lately.
>
> This is what I wanted to do: submit a Flink topology
Hi everybody,
I'm Ned, a young and passionate developer of Apache technologies. I have
been playing with Apache Flink lately.
This is what I wanted to do: submit a Flink topology to a remote Flink
cluster. The following are the steps that I took.
- Install Flink as a cluster, as indicated at the link
h