Hi Aljoscha,
Thanks for your response. My use case is to track user trajectories based on
page view events as users visit a website. The input would be a list of
PageView(userId, url, eventTimestamp) records with watermarks (= eventTimestamp
- duration). I'm trying SessionWindows with some event time
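A minimal sketch of such a session-window trajectory job (the gap and lateness
values are illustrative, and the source is left abstract since the original
message is cut off):

import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor
import org.apache.flink.streaming.api.windowing.assigners.EventTimeSessionWindows
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.api.windowing.windows.TimeWindow
import org.apache.flink.util.Collector

case class PageView(userId: String, url: String, eventTimestamp: Long)

object TrajectoryJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

    val pageViews: DataStream[PageView] = ??? // e.g. a Kafka source

    pageViews
      // watermark lags the max seen event timestamp by a fixed duration
      .assignTimestampsAndWatermarks(
        new BoundedOutOfOrdernessTimestampExtractor[PageView](Time.minutes(1)) {
          override def extractTimestamp(pv: PageView): Long = pv.eventTimestamp
        })
      .keyBy(_.userId)
      // one session per user, closed after 30 minutes of inactivity
      .window(EventTimeSessionWindows.withGap(Time.minutes(30)))
      // emit the ordered URL trajectory of each closed session
      .apply { (userId: String, window: TimeWindow, views: Iterable[PageView],
                out: Collector[(String, Seq[String])]) =>
        out.collect((userId, views.toSeq.sortBy(_.eventTimestamp).map(_.url)))
      }
      .print()

    env.execute("session trajectories")
  }
}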
Thanks Philipp,
I have also started looking at the Jest client. Did you use it with Flink? Is
it possible for you to share the project so that I can reuse it?
On Tue, Oct 25, 2016 at 11:54 AM, Philipp Bussche wrote:
> Hi there,
> not me (which I guess is not what you wanted to hear) but I had to
> impl
Hi there,
not me (which I guess is not what you wanted to hear) but I had to implement
a custom Elasticsearch sink based on Jest to be able to sink data into ES on AWS.
It works quite alright!
Philipp
https://github.com/searchbox-io/Jest/tree/master/jest
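For anyone looking for a starting point, a minimal sketch of such a Jest-based
sink (the endpoint, index, and type names are placeholders; error handling and
bulk requests are omitted):

import io.searchbox.client.{JestClient, JestClientFactory}
import io.searchbox.client.config.HttpClientConfig
import io.searchbox.core.Index
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction

// indexes one JSON document per incoming record
class JestElasticsearchSink(endpoint: String, index: String)
    extends RichSinkFunction[String] {

  @transient private var client: JestClient = _

  override def open(parameters: Configuration): Unit = {
    val factory = new JestClientFactory
    factory.setHttpClientConfig(
      new HttpClientConfig.Builder(endpoint).multiThreaded(true).build())
    client = factory.getObject
  }

  override def invoke(json: String): Unit = {
    client.execute(new Index.Builder(json).index(index).`type`("pageview").build())
  }

  override def close(): Unit = if (client != null) client.shutdownClient()
}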
Re-hi,
I actually realized that the problem comes from the fact that the datastream
that I am registering does not properly create the types.
I am using something like
DataStream ... .returns("TupleX<,,java.sql.Timestamp,
java.sql.Time>") ... and I was expecting that these would be converted
Hi,
I would like to create a TIMESTAMP type from the data schema. I would need this
to match against FlinkTypeFactory.toTypeInfo():
def toTypeInfo(relDataType: RelDataType): TypeInformation[_] =
  relDataType.getSqlTypeName match {
    case BOOLEAN => BOOLEAN_TYPE_INFO
    case TINYINT => BYTE_TYPE_INFO
    ...
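A sketch of how the string-based type hint above could be replaced with
explicit TypeInformation so the SQL TIMESTAMP/TIME fields map cleanly (the
Tuple2 shape is illustrative; SqlTimeTypeInfo is Flink's type info for the
java.sql time types):

import org.apache.flink.api.common.typeinfo.{SqlTimeTypeInfo, TypeInformation}
import org.apache.flink.api.java.typeutils.TupleTypeInfo
import org.apache.flink.api.java.tuple.Tuple2

// build the tuple type explicitly instead of using a type string
val rowType: TypeInformation[Tuple2[java.sql.Timestamp, java.sql.Time]] =
  new TupleTypeInfo[Tuple2[java.sql.Timestamp, java.sql.Time]](
    SqlTimeTypeInfo.TIMESTAMP,
    SqlTimeTypeInfo.TIME)

// ...and hand it to returns(...) on the transformation that loses the type:
// stream.returns(rowType)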
Hi Aljoscha,
Thanks for the reply!
I found that my stateful operator (with parallelism 10) wasn't split equally
between the task managers on the two nodes (it was split 9/1), so I tweaked
the task manager / slot configuration until Flink allocated them equally,
with 5 instances of the operator on
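For reference, a flink-conf.yaml sketch of a slot layout that makes an even
5/5 split possible on two task managers (the values are illustrative):

# on each of the two task managers:
# 2 TMs x 5 slots = 10 slots for a parallelism-10 operator
taskmanager.numberOfTaskSlots: 5
parallelism.default: 10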
Hi Maciek,
cases like this, where you essentially want to evict elements that are older
than a certain threshold while keeping a count of those elements that are not
older than that threshold, tend to be quite tricky.
In order to start thinking about this, how would you implement this case in
a non
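As a starting point, a single-threaded sketch of that logic (assuming
timestamps are added roughly in order; the class and method names are
illustrative):

import scala.collection.mutable

// keeps a count of elements no older than a threshold, evicting the rest
class ThresholdCounter(thresholdMillis: Long) {
  private val timestamps = mutable.Queue[Long]()

  def add(eventTime: Long): Unit = timestamps.enqueue(eventTime)

  def count(now: Long): Int = {
    // evict everything that fell out of the threshold
    while (timestamps.nonEmpty && timestamps.head < now - thresholdMillis)
      timestamps.dequeue()
    timestamps.size
  }
}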
Hi Bart,
are you using your custom Trigger together with a merging session window
assigner?
You might want to consider overriding the clear() method in your trigger to
clean up the state that you use. If you don't you might run into memory
leaks because the state is never cleaned up.
Cheers,
Aljoscha
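For illustration, a sketch of a count-based trigger that cleans up after
itself (modelled on Flink's own CountTrigger; the count threshold and state
name are illustrative):

import org.apache.flink.api.common.functions.ReduceFunction
import org.apache.flink.api.common.state.ReducingStateDescriptor
import org.apache.flink.streaming.api.windowing.triggers.{Trigger, TriggerResult}
import org.apache.flink.streaming.api.windowing.windows.TimeWindow

class CountingTrigger(maxCount: Long) extends Trigger[Any, TimeWindow] {

  private val sum = new ReduceFunction[java.lang.Long] {
    override def reduce(a: java.lang.Long, b: java.lang.Long): java.lang.Long = a + b
  }
  private val countDesc =
    new ReducingStateDescriptor[java.lang.Long]("count", sum, classOf[java.lang.Long])

  override def onElement(element: Any, timestamp: Long, window: TimeWindow,
                         ctx: Trigger.TriggerContext): TriggerResult = {
    val count = ctx.getPartitionedState(countDesc)
    count.add(1L)
    if (count.get >= maxCount) TriggerResult.FIRE else TriggerResult.CONTINUE
  }

  override def onProcessingTime(time: Long, window: TimeWindow,
                                ctx: Trigger.TriggerContext): TriggerResult =
    TriggerResult.CONTINUE

  override def onEventTime(time: Long, window: TimeWindow,
                           ctx: Trigger.TriggerContext): TriggerResult =
    TriggerResult.CONTINUE

  // needed when used with a merging (session) window assigner
  override def canMerge(): Boolean = true
  override def onMerge(window: TimeWindow, ctx: Trigger.OnMergeContext): Unit =
    ctx.mergePartitionedState(countDesc)

  // without this, the count state of expired windows is never freed
  override def clear(window: TimeWindow, ctx: Trigger.TriggerContext): Unit =
    ctx.getPartitionedState(countDesc).clear()
}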
Hi Josh,
Checkpoints that take longer than the checkpoint interval should not be an
issue (if you use an up-to-date version of Flink). The checkpoint
coordinator will not issue another checkpoint while another one is still
ongoing. Is there maybe some additional data for the crashes? A log perhaps?
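For reference, a minimal sketch of the relevant checkpointing knobs (the
interval values are illustrative):

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
// checkpoint every 60s
env.enableCheckpointing(60000)
// the coordinator already defaults to one checkpoint at a time;
// a minimum pause adds breathing room between checkpoints
env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
env.getCheckpointConfig.setMinPauseBetweenCheckpoints(30000)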
Hi,
there is already a mechanism for that. Currently, Flink will only keep the
most recent, successful checkpoint. We are currently working on making that
configurable so that, for example, the last n successful checkpoints can be
kept.
Cheers,
Aljoscha
On Tue, 25 Oct 2016 at 06:47 Juan Rodríguez
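That work later landed in Flink as a configuration key; a flink-conf.yaml
sketch of what it looks like in newer versions (not yet available in the
version discussed in this thread):

# number of most recent successful checkpoints to retain
state.checkpoints.num-retained: 3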
Hi,
could you please go into more detail about the input and what the expected
output is. And then also what the output is with both apply() and reduce()?
With this we might be able to figure it out together.
Cheers,
Aljoscha
On Mon, 24 Oct 2016 at 18:11 Sendoh wrote:
> Hi Flink users,
>
> I s
hi,
I'm migrating some Samza jobs to Flink streaming. On Samza we sent the errors
to a Kafka topic to make them easier to display on dashboards. I would like to
do the same on Flink; what do you recommend?
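One common pattern, as a sketch (the topic name, source, and process()
placeholder are illustrative, not a recommendation from the thread): tag
records instead of failing the job, then sink the failures to their own topic:

import java.util.Properties
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer09
import org.apache.flink.streaming.util.serialization.SimpleStringSchema

object ErrorsToKafka {
  // placeholder for the actual business logic
  def process(record: String): String = record.toUpperCase

  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val input: DataStream[String] = env.socketTextStream("localhost", 9999)

    // catch failures per record and keep them flowing as data
    val tagged = input.map { record =>
      try ("ok", process(record))
      catch { case e: Exception => ("error", s"$record: ${e.getMessage}") }
    }

    val props = new Properties()
    props.setProperty("bootstrap.servers", "localhost:9092")

    // ship only the failures to a dedicated topic for dashboards
    tagged
      .filter(_._1 == "error")
      .map(_._2)
      .addSink(new FlinkKafkaProducer09[String]("job-errors", new SimpleStringSchema(), props))

    tagged.filter(_._1 == "ok").map(_._2).print()

    env.execute("errors-to-kafka sketch")
  }
}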
Hi Fabian,
I commented on the issue and attached the program reproducing the bug, but
I couldn't find how to reopen it (I think maybe I don't have enough
permissions?).
Best,
Yassine
2016-10-25 12:49 GMT+02:00 Fabian Hueske:
> Hi Yassine,
>
> I thought I had fixed that bug a few weeks ago,
Hi Yassine,
I thought I had fixed that bug a few weeks ago, but apparently the fix
did not catch all cases.
Can you please reopen FLINK-2662 and post the program to reproduce the bug
there?
Thanks,
Fabian
[1] https://issues.apache.org/jira/browse/FLINK-2662
2016-10-25 12:33 GMT+02:00 Yassine
Hi all,
My job fails with the following exception: CompilerException: Bug: Plan
generation for Unions picked a ship strategy between binary plan operators.
The exception happens when adding partitionByRange(1).sortPartition(1,
Order.DESCENDING) to the union of datasets.
I made a smaller version t
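For context, a sketch of the pattern that triggers the exception (the data is
illustrative, mirroring the calls described above):

import org.apache.flink.api.common.operators.Order
import org.apache.flink.api.scala._

object UnionRepro {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    val a = env.fromElements(("a", 1), ("b", 2))
    val b = env.fromElements(("c", 3), ("d", 4))

    // union of two datasets, then range-partition and sort on field 1
    a.union(b)
      .partitionByRange(1)
      .sortPartition(1, Order.DESCENDING)
      .print()
  }
}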