Hi Lasse,
One approach I’ve used in a similar situation is to have a “UnionedSource”
wrapper that first emits the (bounded) data that will be loaded in-memory, and
then starts running the source that emits the continuous stream of data.
This outputs an Either, which I then split and broadcast.
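The preview ends there; below is a minimal sketch of how such a wrapper might look. Everything beyond the UnionedSource idea itself is an assumption: the in-memory Iterable, the type parameters, and the downstream split are hypothetical, and a production version would also need a cancel flag and checkpoint support.

import org.apache.flink.streaming.api.functions.source.{RichSourceFunction, SourceFunction}
import org.apache.flink.streaming.api.watermark.Watermark

// Sketch: drain the bounded data first, then delegate to the unbounded source.
class UnionedSource[A, B](bounded: Iterable[A], streaming: SourceFunction[B])
    extends RichSourceFunction[Either[A, B]] {

  override def run(ctx: SourceFunction.SourceContext[Either[A, B]]): Unit = {
    // Phase 1: emit the bounded records as Left(...) so downstream can load them.
    bounded.foreach(a => ctx.collect(Left(a)))
    // Phase 2: run the continuous source, wrapping each record as Right(...).
    streaming.run(new SourceFunction.SourceContext[B] {
      override def collect(b: B): Unit = ctx.collect(Right(b))
      override def collectWithTimestamp(b: B, ts: Long): Unit =
        ctx.collectWithTimestamp(Right(b), ts)
      override def emitWatermark(mark: Watermark): Unit = ctx.emitWatermark(mark)
      override def markAsTemporarilyIdle(): Unit = ctx.markAsTemporarilyIdle()
      override def getCheckpointLock: AnyRef = ctx.getCheckpointLock
      override def close(): Unit = ctx.close()
    })
  }

  override def cancel(): Unit = streaming.cancel()
}

// Downstream (hypothetical names): split the Either and broadcast the bounded part.
//   val both = env.addSource(new UnionedSource(configs, eventSource))
//   val configStream = both.flatMap(_.left.toOption.toList).broadcast
//   val eventStream  = both.flatMap(_.right.toOption.toList)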
Hi Gaurav,
I’m curious - for your use case, what are the windowing & aggregation
requirements?
E.g. is it a 10-second sliding window?
And what’s the aggregation you’re trying to do?
Thanks,
— Ken
> On Sep 28, 2018, at 4:00 AM, Gaurav Luthra wrote:
>
> Hi Chesnay,
>
> I know it is an issu
I'm writing a Flink connector to write a stream of events from Kafka to
Elasticsearch. It is a typical metrics ingestion pipeline, where the
latest metrics are preferred over stale data.
What I mean by that: let's assume there was an outage of the Elasticsearch
cluster for about 20 minutes; all the m
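(The message is cut off above. For the "latest wins" requirement, one hedged sketch using Flink's 1.6-era Elasticsearch connector: index each metric under a deterministic document id and use the event timestamp as an external version, so a replayed stale write cannot overwrite a newer document. The Metric type and the index/field names are made up, and version conflicts from stale writes would need to be swallowed by a failure handler.)

import org.apache.flink.api.common.functions.RuntimeContext
import org.apache.flink.streaming.connectors.elasticsearch.{ElasticsearchSinkFunction, RequestIndexer}
import org.elasticsearch.client.Requests
import org.elasticsearch.index.VersionType

case class Metric(name: String, host: String, value: Double, ts: Long)

// Deterministic id => one document per series; external versioning by event
// time => Elasticsearch rejects writes older than what is already stored.
class MetricUpsertSink extends ElasticsearchSinkFunction[Metric] {
  override def process(m: Metric, ctx: RuntimeContext, indexer: RequestIndexer): Unit = {
    val json = new java.util.HashMap[String, Any]()
    json.put("value", m.value)
    json.put("ts", m.ts)
    indexer.add(
      Requests.indexRequest()
        .index("metrics")
        .`type`("metric")
        .id(s"${m.name}-${m.host}")        // stable id: one doc per metric series
        .versionType(VersionType.EXTERNAL) // reject writes with an older version
        .version(m.ts)
        .source(json))
  }
}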
Hi all,
if you don't want to read the wall of text below: in short, I want to know
whether it is possible to get a superstep-based iteration on a possibly
unbounded DataStream in Flink in an efficient way, and what general concept(s)
of synchronization you would suggest for that.
I would like to
Hi,
Could you express it more clearly: are you measuring the execution time of
the whole job, or the execution time of a single task instance?
Also, why can't you measure the scenario you described? Every job is logically
a DAG.
Thanks, vino.
Alejandro wrote on Wed, Sep 26, 2018 at 4:17 PM:
> Hello,
>
> I am trying to measure
Hi,
Which version of Flink do you use? Also, can you give more details about
how you migrate your job?
Thanks, vino.
shashank734 wrote on Fri, Sep 28, 2018 at 10:21 PM:
> Hi, I am trying to move my job from one cluster to another cluster using
> a savepoint, but it's failing while restoring on the new cluster.
Hi Gianluca,
This is very strange. Till may be able to give an explanation, because he
is the release manager of this version.
Thanks, vino.
Gianluca Ortelli wrote on Fri, Sep 28, 2018 at 4:02 PM:
> Hi,
>
> I just downloaded flink-1.6.1-bin-scala_2.11.tgz from
> https://flink.apache.org/downloads.html and no
Hi clay,
Are there any other lines after the last line in your picture? The final
result should be eventually consistent and correct.
In your SQL there are a left join, a keyed group by, and a non-keyed group
by. Both the left join and the keyed group by will send retractions to the
downstream non-k
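(To make the retraction mechanism above concrete, a minimal self-contained sketch against the 1.6-era Table API, with made-up data: every update to a grouped aggregate is emitted as a (false, oldRow) retraction followed by a (true, newRow) accumulate, so the latest accumulate message per key is the correct one.)

import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._

object RetractDemo {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tEnv = TableEnvironment.getTableEnvironment(env)

    val orders = env.fromElements(("s1", 10.0), ("s1", 5.0), ("s2", 7.0))
    tEnv.registerDataStream("orders", orders, 'student_id, 'price)

    val agg = tEnv.sqlQuery(
      "SELECT student_id, SUM(price) FROM orders GROUP BY student_id")

    // Prints e.g. (true,(s1,10.0)) (false,(s1,10.0)) (true,(s1,15.0)) (true,(s2,7.0))
    tEnv.toRetractStream[(String, Double)](agg).print()
    env.execute("retract-demo")
  }
}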
Hi Till,
Thanks for your reply!
We considered the expiration of delegation tokens, but as we submit the jobs
with keytabs and some jobs have been running for weeks, it seems that that's
not the cause.
The cause is probably the HDFS read access that you mentioned (Hadoop
2.6.5-cdh-5.6.0 w
I would drop it.
Niels Basjes
On Sat, 29 Sep 2018, 10:38 Kostas Kloudas wrote:
> +1 to drop it as nobody seems to be willing to maintain it and it also
> stands in the way for future developments in Flink.
>
> Cheers,
> Kostas
>
> > On Sep 29, 2018, at 8:19 AM, Tzu-Li Chen wrote:
> >
> > +1
It works. Thanks.
> On Sep 29, 2018, at 2:14 PM, Congxian Qiu wrote:
>
> Hi
> In my opinion, you can use jstack to find out what the processes are doing. If
> you still have problems after that, you could share the code or some demo code
> you are running.
>
> weilongxing <weilongx...@aicaigroup.com>
+1 to drop it as nobody seems to be willing to maintain it and it also
stands in the way for future developments in Flink.
Cheers,
Kostas
> On Sep 29, 2018, at 8:19 AM, Tzu-Li Chen wrote:
>
> +1 to drop it.
>
> It seems few people use it. The commit history of an experimental
> module is sparse, oft
My final calculation result is written to Kafka in the following way. Because
KafkaTableSink does not support retract mode, I am not sure whether this
method will affect the calculation result.
val userTest: Table = tEnv.sqlQuery(sql)
val endStream = tEnv.toRetractStream[Row](userTest)
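(The snippet is cut off there; for illustration, a hedged guess at the usual continuation: keep only the accumulate-flagged records before writing to an append-only sink such as Kafka. This is an assumption about the intent; dropping retractions means downstream consumers must treat the topic as a changelog, keying on a unique id and keeping only the latest value.)

import org.apache.flink.streaming.api.scala._
import org.apache.flink.types.Row

// endStream: DataStream[(Boolean, Row)], where true = accumulate and
// false = retract. Kafka is append-only and cannot apply the retractions,
// so we drop them and rely on consumers to keep the latest value per key.
val appendOnly: DataStream[Row] = endStream.filter(_._1).map(_._2)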
Hi everyone,
I am running into some problems using Flink SQL. My query is as follows:
SELECT COUNT(DISTINCT mm.student_id),sum(price)
FROM (
SELECT a.student_id,a.confirm_date_time,sum(a.real_total_price)as price
FROM (
SELECT DISTINCT po.id as order_id
,po.student_id
,po.create_id
,po.