Dear Matthias,
Thank you for the reply, and sorry for the late response.
> I just double-checked the Flink code: during translation from Storm
> to Flink, declareOutputFields() is called twice. You are right that it
> does the same job twice, but that is actually not a problem. The F
Hi, as far as I know, when the TaskManager sends UpdateTaskExecutionState to
the JobManager, if the JobManager fails over and the future response fails,
the task will be marked as failed. Would it be feasible to retry sending
UpdateTaskExecutionState when the future response fails, until it succeeds?
In JobManager HA mode, the U
Simply passing FlinkUserCodeClassLoader.class.getClassLoader() to the parent
constructor resolved the issue.
2016-01-13 20:06:43.637 INFO 35403 --- [ main]
o.o.e.j.s.SocketTextStreamWordCount$ : Started SocketTextStreamWordCount.
in 5.176 seconds (JVM running for 12.58)
[INFO] --
Hi!
Running this in Spring, the whole classloader configuration is probably a
bit different than in Flink's standalone, YARN, or local mode.
Can you check whether the following solves your problem:
At the end of the file "BlobLibraryCacheManager", there is the private
class "FlinkUserCodeClassLoader".
thank you!!
2016-01-13 20:51 GMT+01:00 Matthias J. Sax :
> Hi,
>
> use JDBCOutputFormatBuilder to set all required parameters:
>
> > JDBCOutputFormatBuilder builder =
> >     JDBCOutputFormat.buildJDBCOutputFormat();
> > builder.setDBUrl(...)
> > // and more
> >
> > var.write(builder.finish(), 0L);
>
> -Matthias
Hi,
use JDBCOutputFormatBuilder to set all required parameters:
> JDBCOutputFormatBuilder builder = JDBCOutputFormat.buildJDBCOutputFormat();
> builder.setDBUrl(...)
> // and more
>
> var.write(builder.finish(), 0L);
-Matthias
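For completeness, here is a fuller, self-contained sketch of the builder
usage (0.10-era flink-jdbc; the driver, URL, and INSERT statement are
placeholder values, and the raw cast works around JDBCOutputFormat being
typed to the generic Tuple):
```java
import org.apache.flink.api.common.io.OutputFormat;
import org.apache.flink.api.java.io.jdbc.JDBCOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class JdbcSinkExample {
    @SuppressWarnings({"unchecked", "rawtypes"})
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<Tuple2<String, Integer>> counts = env.fromElements(
                new Tuple2<>("hello", 1), new Tuple2<>("world", 2));

        JDBCOutputFormat format = JDBCOutputFormat.buildJDBCOutputFormat()
                .setDrivername("org.postgresql.Driver")
                .setDBUrl("jdbc:postgresql://localhost:5432/mydb")
                .setQuery("INSERT INTO word_counts (word, cnt) VALUES (?, ?)")
                .finish();

        // JDBCOutputFormat is an OutputFormat<Tuple>, hence the raw cast
        counts.write((OutputFormat) format, 0L); // 0L: write every record immediately
        env.execute("JDBC sink example");
    }
}
```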
On 01/13/2016 06:21 PM, Traku traku wrote:
> Hi everyone.
>
> I'm t
I'm experimenting with combining Spring and Flink. I've successfully set it
up with Gradle, but Maven is emitting ClassNotFoundExceptions for classes
that are ostensibly on the classpath.
Project is currently configured for:
1. Scala 2.10.4
2. Flink 0.9.1
I execute the following
```
# In one terminal
$
Hi Saiph,
In Flink, the key for keyBy() can be provided in different ways:
https://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#specifying-keys
(the doc is for DataSet API, but specifying keys is basically the same for
DataStream and DataSet).
As described in the doc
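To make that concrete, a small self-contained sketch of two of the
documented options (Java DataStream API; keying by field name is also
possible for POJOs):
```java
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KeyByExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<Tuple2<String, Integer>> words = env.fromElements(
                new Tuple2<>("a", 1), new Tuple2<>("b", 2), new Tuple2<>("a", 3));

        // 1) Key by tuple field index -- only works for Flink tuple types:
        words.keyBy(0).sum(1).print();

        // 2) Key by KeySelector -- works for any element type
        //    (Strings, arrays, POJOs, Scala tuples, ...):
        words.keyBy(new KeySelector<Tuple2<String, Integer>, String>() {
            @Override
            public String getKey(Tuple2<String, Integer> value) {
                return value.f0;
            }
        }).sum(1).print();

        env.execute("keyBy example");
    }
}
```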
Hi,
This line «stream.keyBy(0)» only works if stream is of type
DataStream[Tuple], and this Tuple is not a Scala tuple but a Flink tuple
(why not use a Scala tuple?). Currently keyBy can be applied to anything
(at least in Scala), like DataStream[String] and
DataStream[Array[String]].
Can anyone
Hi everyone.
I'm trying to migrate some code to Flink 0.10 and I'm having a problem.
I'm trying to create a custom sink to insert the data into a PostgreSQL
database.
My code was this:
var.output(
// build and configure OutputFormat
JDBCOutputFormat
.buildJDBCOutputFormat()
Hi,
the window contents are stored in state managed by the window operator at all
times until they are purged by a Trigger returning PURGE from one of its on*()
methods.
Out of the box, Flink does not have something akin to the lateness and cleanup
of Google Dataflow. You can, however, implement
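One way to approximate that is a custom Trigger which FIREs at the window
end but only PURGEs after an allowed-lateness bound has passed. A rough
sketch; the lateness parameter is made up, and the exact Trigger method
signatures varied across early releases, so check them against your version:
```java
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

// Fires at the end of the window but keeps the contents around for late
// elements, purging only after an (assumed) allowed-lateness interval.
public class FireThenPurgeTrigger<T> implements Trigger<T, TimeWindow> {

    private final long allowedLatenessMs; // hypothetical lateness bound

    public FireThenPurgeTrigger(long allowedLatenessMs) {
        this.allowedLatenessMs = allowedLatenessMs;
    }

    @Override
    public TriggerResult onElement(T element, long timestamp, TimeWindow window,
                                   TriggerContext ctx) throws Exception {
        ctx.registerEventTimeTimer(window.maxTimestamp());                     // fire
        ctx.registerEventTimeTimer(window.maxTimestamp() + allowedLatenessMs); // purge
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        return time == window.maxTimestamp()
                ? TriggerResult.FIRE    // emit, but keep state for late data
                : TriggerResult.PURGE;  // lateness elapsed: drop the window state
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }
}
```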
Thanks Robert! I'll be keeping tabs on the PR.
Cheers,
David
On Mon, Jan 11, 2016 at 4:04 PM, Robert Metzger wrote:
> Hi David,
>
> In theory, isEndOfStream() is absolutely the right way to go for stopping
> data sources in Flink.
> That it's not working as expected is a bug. I have a pending pull request
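For context, isEndOfStream() lives on the DeserializationSchema that sources
like the Kafka consumer use. A hedged sketch of the idea, with a made-up
sentinel value and 0.10-era package paths:
```java
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.util.serialization.DeserializationSchema;

// Signals end-of-stream when a sentinel record arrives, so the source
// can shut down cleanly.
public class StopOnSentinelSchema implements DeserializationSchema<String> {

    @Override
    public String deserialize(byte[] message) {
        return new String(message);
    }

    @Override
    public boolean isEndOfStream(String nextElement) {
        return "##STOP##".equals(nextElement); // made-up sentinel
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return BasicTypeInfo.STRING_TYPE_INFO;
    }
}
```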
Thanks to you both!
That helps me understand the recovery process.
On Wed, Jan 13, 2016 at 2:01 PM, Stephan Ewen ()
wrote:
> Thanks, Gordon, for the nice answer!
>
> One thing is important to add: Exactly-once refers to state maintained by
> Flink. All side effects (changes made to the "outside" world), which
> includes sinks, in fact need to be idempotent, or will only have
> "at-least-once" semantics.
Hi,
I'm trying to understand how the lifecycle of messages / state is managed
by Flink, but I'm failing to find any documentation.
Specifically, if I'm using a windowed stream and a type of trigger that
retains the elements of the window to allow for processing of late data,
e.g. ContinuousEventTimeTrigger
Thanks, Gordon, for the nice answer!
One thing is important to add: Exactly-once refers to state maintained by
Flink. All side effects (changes made to the "outside" world), which
includes sinks, in fact need to be idempotent, or will only have
"at-least-once" semantics.
In practice, this works o
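As an illustration of what idempotent side effects can look like (a pattern,
not a Flink API): a sink that upserts by a deterministic key, so a replay
after recovery overwrites rows instead of duplicating them. Table, columns,
and connection details are made up; ON CONFLICT needs PostgreSQL 9.5+:
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class UpsertSink extends RichSinkFunction<Tuple2<String, Integer>> {

    private transient Connection conn;
    private transient PreparedStatement stmt;

    @Override
    public void open(Configuration parameters) throws Exception {
        conn = DriverManager.getConnection("jdbc:postgresql://localhost:5432/mydb");
        // Same key -> same row, so replayed records are harmless.
        stmt = conn.prepareStatement(
                "INSERT INTO counts (word, cnt) VALUES (?, ?) " +
                "ON CONFLICT (word) DO UPDATE SET cnt = EXCLUDED.cnt");
    }

    @Override
    public void invoke(Tuple2<String, Integer> value) throws Exception {
        stmt.setString(1, value.f0);
        stmt.setInt(2, value.f1);
        stmt.executeUpdate();
    }

    @Override
    public void close() throws Exception {
        if (stmt != null) stmt.close();
        if (conn != null) conn.close();
    }
}
```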
Hi Francis,
A part of every complete snapshot is the record positions associated with
the barrier that triggered the checkpointing of this snapshot. The snapshot
is completed only when all the records within the checkpoint reach the
sink. When a topology fails, all the operators' state will fall
Hi Stephan,
Thanks for your quick response.
So, consider an operator task with two processed records and no incoming
barrier yet. If the task fails and must be recovered, the last consistent
snapshot will be used, which does not include information about the
processed but not yet checkpointed records. What abo
Hi!
I think there is a misunderstanding: there are no identifiers maintained,
and no individual records are deleted.
On recovery, all operators reset their state to a consistent snapshot:
https://ci.apache.org/projects/flink/flink-docs-release-0.10/internals/stream_checkpointing.html
Greetings,
Stephan
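To make the "reset to a snapshot" point concrete, a hedged sketch using the
0.10-era Checkpointed interface: the operator's state is whatever
snapshotState() returns, and on recovery restoreState() puts it back before
the source replays the records processed since the last barrier.
```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.checkpoint.Checkpointed;

// Counts records; the count falls back to the last completed snapshot on
// failure, and the replayed records bring it forward again.
public class CountingMap implements MapFunction<String, Long>, Checkpointed<Long> {

    private long count = 0;

    @Override
    public Long map(String value) {
        return ++count;
    }

    @Override
    public Long snapshotState(long checkpointId, long checkpointTimestamp) {
        return count; // taken when the barrier passes this operator
    }

    @Override
    public void restoreState(Long state) {
        count = state; // on recovery: reset, then the source replays
    }
}
```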
Hello,
I'm trying to understand the checkpointing process for exactly-once
semantics in Flink, and I have some doubts.
The documentation says that when there is a failure and the state of an
operator is restored, the already processed records are deleted based on
their identifiers.
My doubt is,
Hi Fabian,
thanks for your quick response. I just figured out that I forgot to mention
a small but probably relevant detail: I am working with the streaming API.
Although there is a way to access the overall job settings, I need a
solution to "reduce" the view on configuration options available o
Hi Christian,
the open method is called by the Flink workers when the parallel tasks are
initialized.
The configuration parameter is the configuration object of the operator.
You can set parameters in the operator config as follows:
DataSet<String> text = ...
DataSet<Tuple2<String, Integer>> wc = text.flatMap(new Tokenizer()).ge
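The snippet above is cut off by the archive; here is a self-contained sketch
of the pattern (the parameter key is made up): the Configuration handed to
withParameters() shows up in the function's open() method on every parallel
task.
```java
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class OperatorConfigExample {

    public static class Tokenizer extends RichFlatMapFunction<String, Tuple2<String, Integer>> {
        private boolean lowercase;

        @Override
        public void open(Configuration parameters) {
            // Called on each parallel worker before any records are processed.
            lowercase = parameters.getBoolean("tokenizer.lowercase", false);
        }

        @Override
        public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
            for (String word : (lowercase ? line.toLowerCase() : line).split("\\s+")) {
                out.collect(new Tuple2<>(word, 1));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        Configuration conf = new Configuration();
        conf.setBoolean("tokenizer.lowercase", true);

        DataSet<String> text = env.fromElements("To be or not to BE");
        text.flatMap(new Tokenizer())
            .withParameters(conf) // hands conf to open() of each parallel task
            .print();
    }
}
```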
Hi
While working on a RichFilterFunction implementation, I was wondering if
there is a better way to access configuration options read from a file
during startup. Currently, I am
using getRuntimeContext().getExecutionConfig().getGlobalJobParameters()
to get access to my settings.
Reason for tha
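For what it's worth, a sketch of that call chain with ParameterTool as the
GlobalJobParameters implementation (the key name is made up); the parameters
are registered once on the ExecutionConfig, e.g. via
env.getConfig().setGlobalJobParameters(ParameterTool.fromPropertiesFile(path)).
```java
import org.apache.flink.api.common.functions.RichFilterFunction;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.configuration.Configuration;

public class ThresholdFilter extends RichFilterFunction<Integer> {

    private int threshold;

    @Override
    public void open(Configuration parameters) {
        // Assumes a ParameterTool was set as the global job parameters object.
        ParameterTool params = (ParameterTool)
                getRuntimeContext().getExecutionConfig().getGlobalJobParameters();
        threshold = params.getInt("filter.threshold", 0); // made-up key
    }

    @Override
    public boolean filter(Integer value) {
        return value > threshold;
    }
}
```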
Hi Robert,
We are on a deadline right now for a demo stage before production for
management, so it would be great to have 0.10.2 as a stable version within
this week, if possible?
Cheers
On Wed, Jan 13, 2016 at 4:13 PM, Robert Metzger wrote:
> Hi,
>
> there are currently no planned releases. I would
Hi,
there are currently no planned releases. I would actually like to start
preparing for the 1.0 release soon, but the community needs to discuss that
first.
How urgently do you need a 0.10.2 release? If this is the last blocker for
using Flink in production at your company, I can push for the b