The Blink planner supports lazy translation for multiple SQL statements, and the common
nodes will be reused within a single job.
The only thing you need to note here is that the unified TableEnvironmentImpl does
not support conversions between Table(s) and Stream(s).
You must use the pure SQL API (DDL/DML via sqlUpdate, DQL via sqlQuery).
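A minimal sketch of the pure-SQL route with the Blink planner (the table names and
DDL bodies below are placeholders, not from this thread):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class PureSqlJob {
    public static void main(String[] args) throws Exception {
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();
        // Unified environment: no toAppendStream/toRetractStream available here
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // DDL via sqlUpdate (connector properties omitted; placeholders only)
        tEnv.sqlUpdate("CREATE TABLE src (id BIGINT, msg VARCHAR) WITH (...)");
        tEnv.sqlUpdate("CREATE TABLE snk (id BIGINT, msg VARCHAR) WITH (...)");

        // DML via sqlUpdate; translation is lazy and common nodes are reused
        tEnv.sqlUpdate("INSERT INTO snk SELECT id, msg FROM src WHERE id > 0");

        // DQL via sqlQuery
        // Table result = tEnv.sqlQuery("SELECT COUNT(*) FROM src");

        tEnv.execute("pure-sql-job");
    }
}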
Forgot to send this to the user mailing list.
On Fri, Aug 9, 2019 at 12:36 PM Tony Wei wrote:
> Hi Zhenghua,
>
> I didn't get your point. It seems that `isEagerOperationTranslation` always
> returns false. Does that
> mean that even if I use the Blink planner, the SQL translation still happens in a lazy
> manner?
>
> Or do you mean
Hi,
I used `flinkTableEnv.connect(new Kafka()...).registerTableSource(...)` to
register my Kafka table.
However, I found that because SQL translation is lazy, the conversion to a
DataStream only happens under certain
conditions, for example on `Table#toRetractStream`.
So, when I used two SQLs in one application job,
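(A minimal sketch of the two-query situation, assuming "my_table" was registered
as above; the queries and field names are made up:)

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
// ... register "my_table" via tEnv.connect(new Kafka()...).registerTableSource(...)

Table q1 = tEnv.sqlQuery("SELECT user_id, COUNT(*) FROM my_table GROUP BY user_id");
Table q2 = tEnv.sqlQuery("SELECT user_id FROM my_table WHERE amount > 100");

// Each conversion triggers translation of that query's plan; with lazy
// translation the shared source may end up translated once per query.
tEnv.toRetractStream(q1, Row.class).print();
tEnv.toAppendStream(q2, Row.class).print();

env.execute("two-sql-job");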
Congratulations Hequn!
Best,
Yun
--
From: Congxian Qiu
Send Time: 2019 Aug. 8 (Thu.) 21:34
To: Yu Li
Cc: Haibo Sun; dev; Rong Rong; user
Subject: Re: Re: [ANNOUNCE] Hequn becomes a Flink committer
Congratulations Hequn!
Best,
Congxian
Hi Subramanyam,
Could you share more information, including:
1. the URL pattern
2. the detailed exception and the log around it
3. the cluster the job is running on, e.g. standalone, YARN, K8s
4. whether it's session mode or per-job mode
This information would be helpful to identify the failure cause.
T
Hi,
We are trying to use StreamingFileSink to write to an S3 bucket. It's a
simple job which reads from Kafka and sinks to S3. The credentials for S3
are configured in the Flink cluster. We are using Flink 1.7.2 without pre-bundled
Hadoop. As suggested in the documentation we have added the
flink-s3
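(For reference, a minimal row-format sketch of such a sink on Flink 1.7; the
bucket path and stream name are hypothetical:)

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

// kafkaStream is the DataStream<String> read from Kafka; the s3:// scheme
// requires the Flink S3 filesystem jar on the cluster, as the docs suggest
StreamingFileSink<String> sink = StreamingFileSink
        .forRowFormat(new Path("s3://my-bucket/output"),
                      new SimpleStringEncoder<String>("UTF-8"))
        .build();
kafkaStream.addSink(sink);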
Hi
Maybe FLIP-49 [1] "Unified Memory Configuration for TaskExecutors" can give
some information here.
[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-49-Unified-Memory-Configuration-for-TaskExecutors-td31436.html
Best,
Congxian
On Fri, Aug 9, 2019 at 4:59 AM Cam Mach wrote:
> Hi
Hi Biao, Yun and Ning.
Thanks for your response and pointers. Those are very helpful!
So far, we have tried some of those parameters (WriteBufferManager,
write_buffer_size, write_buffer_count, ...), but we are still continuously running
into memory issues.
Here are our cluster configurations:
-
Hello,
I'm currently using Flink 1.7.2.
I'm trying to run a job that's submitted programmatically using the
ClusterClient API.
public JobSubmissionResult run(PackagedProgram prog, int parallelism)
The job makes use of some jars which I add to the packaged program through the
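(A rough sketch of that call, with hypothetical jar paths; creating the
ClusterClient itself is omitted:)

import java.io.File;
import java.net.URL;
import java.util.Collections;
import java.util.List;
import org.apache.flink.api.common.JobSubmissionResult;
import org.apache.flink.client.program.ClusterClient;
import org.apache.flink.client.program.PackagedProgram;

JobSubmissionResult submit(ClusterClient<?> client) throws Exception {
    // extra jars passed as classpath entries alongside the main job jar
    List<URL> extraJars =
            Collections.singletonList(new File("/path/to/dep.jar").toURI().toURL());
    PackagedProgram prog = new PackagedProgram(
            new File("/path/to/job.jar"),  // the job jar
            extraJars,                     // additional classpath jars
            "--arg", "value");             // program arguments
    return client.run(prog, 4);            // parallelism = 4
}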
Hello Biao,
There's a legacy component that expects these "time slices" and tags to be
set on our operational data store.
Right now I would like to just have the tags set properly on each
record. After some reading I came up with the idea of setting multiple
sliding windows,
but there's still an i
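(A minimal sketch of what multiple sliding windows over the same stream could
look like; the Event type, getId() key, tag(...) helper and window sizes are
all made up:)

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.assigners.SlidingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

// one sliding window per "time slice" granularity; the tag could be
// attached to the reduced record inside each window
DataStream<Event> hourly = events
        .keyBy(e -> e.getId())
        .window(SlidingEventTimeWindows.of(Time.hours(1), Time.minutes(5)))
        .reduce((a, b) -> a.tag("hourly"));
DataStream<Event> daily = events
        .keyBy(e -> e.getId())
        .window(SlidingEventTimeWindows.of(Time.days(1), Time.hours(1)))
        .reduce((a, b) -> a.tag("daily"));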
Hi Cam
I think FLINK-7289 [1] might offer you some insights on controlling RocksDB memory,
especially the idea of using the write buffer manager [2] to control the total write
buffer memory. If you do not have too many SST files, write buffer memory usage
would consume much more space than index and filter
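(A rough sketch of wiring a write buffer manager in through a custom
OptionsFactory, assuming the RocksDB version bundled with your Flink exposes
WriteBufferManager; the 256 MB cap is arbitrary:)

import org.apache.flink.contrib.streaming.state.OptionsFactory;
import org.rocksdb.Cache;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;
import org.rocksdb.LRUCache;
import org.rocksdb.WriteBufferManager;

public class WriteBufferCappedOptions implements OptionsFactory {
    private static final long CAP = 256 * 1024 * 1024; // 256 MB total write buffers

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions) {
        // charge write buffer memory against a shared cache to cap total usage
        Cache cache = new LRUCache(CAP);
        return currentOptions.setWriteBufferManager(new WriteBufferManager(CAP, cache));
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
        return currentOptions;
    }
}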
Hi Cam,
This blog post has some pointers on tuning RocksDB memory usage that
might be of help.
https://klaviyo.tech/flinkperf-c7bd28acc67
Ning
On Wed, Aug 7, 2019 at 1:28 PM Cam Mach wrote:
>
> Hello everyone,
>
> What is the easiest and most efficient way to cap RocksDB's memory usage?
>
> Thanks,
Thanks for your response, Biao.
On Wed, Aug 7, 2019 at 11:41 PM Biao Liu wrote:
> Hi Cam,
>
> AFAIK, that's not an easy thing. Actually it's more like a RocksDB issue.
> There is a document explaining the memory usage of RocksDB [1]. It might be
> helpful.
>
> You could define your own option
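(To illustrate the "define your own options" suggestion above, a minimal
sketch; the sizes and checkpoint path are illustrative only:)

import org.apache.flink.contrib.streaming.state.OptionsFactory;
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

public class CappedMemoryOptions implements OptionsFactory {
    @Override
    public DBOptions createDBOptions(DBOptions currentOptions) {
        return currentOptions;
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
        return currentOptions
                .setWriteBufferSize(32 * 1024 * 1024) // 32 MB per memtable
                .setMaxWriteBufferNumber(2);          // at most 2 memtables per column family
    }
}

// usage:
// RocksDBStateBackend backend = new RocksDBStateBackend("hdfs:///checkpoints");
// backend.setOptions(new CappedMemoryOptions());
// env.setStateBackend(backend);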
Congratulations Hequn!
Best,
Congxian
On Thu, Aug 8, 2019 at 2:02 PM Yu Li wrote:
> Congratulations Hequn! Well deserved!
>
> Best Regards,
> Yu
>
>
> On Thu, 8 Aug 2019 at 03:53, Haibo Sun wrote:
>
>> Congratulations!
>>
>> Best,
>> Haibo
>>
>> At 2019-08-08 02:08:21, "Yun Tang" wrote:
>> >Congratulations
Hi
When talking about sharing state, broadcast state [1][2] might be an option.
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/broadcast_state.html#provided-apis
[2] https://flink.apache.org/2019/06/26/broadcast-state.html
Best
Yun Tang
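(A minimal sketch of the pattern, with made-up stream names and a
String-to-String descriptor:)

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

static void wire(DataStream<String> events, DataStream<String> rules) {
    MapStateDescriptor<String, String> rulesDesc =
            new MapStateDescriptor<>("rules", Types.STRING, Types.STRING);

    events.connect(rules.broadcast(rulesDesc))
          .process(new BroadcastProcessFunction<String, String, String>() {
              @Override
              public void processElement(String event, ReadOnlyContext ctx,
                                          Collector<String> out) throws Exception {
                  // data side: read-only view of the broadcast state
                  String tag = ctx.getBroadcastState(rulesDesc).get("tag");
                  out.collect(tag == null ? event : event + "@" + tag);
              }

              @Override
              public void processBroadcastElement(String update, Context ctx,
                                                   Collector<String> out) throws Exception {
                  // broadcast side: every parallel instance stores the update
                  ctx.getBroadcastState(rulesDesc).put("tag", update);
              }
          })
          .print();
}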
Hello,
For anyone looking to set up alerts for a Flink application, here is a
good blog post by Flink itself:
https://www.ververica.com/blog/monitoring-apache-flink-applications-101
So, for DynamoDB streams we can set the alert on millisBehindLatest.
Regards,
Vinay Patil
On Wed, Aug 7, 2019 at 2:24