Hi,
I'm not sure exactly what you mean. Could you describe your
requirements in more detail?
M Singh wrote on Sunday, July 14, 2019 at 9:33 AM:
> Hi:
>
> I wanted to find out what is the timestamp associated with the elements of
> a stream side output with different stream time characteristics.
>
> Thanks
>
> Man
>
Hi,
Is it possible to support two different `TimeCharacteristic` in one job at
the same time?
I guess the answer is no, so I don't think such a scenario exists.
M Singh wrote on Friday, July 19, 2019 at 12:19 AM:
> Hey Folks - Just checking if you have any pointers for me. Thanks for
> your advice.
>
> On
Hi,
I couldn't find any official documentation about it.
There are several state-related methods in `TriggerContext`. I believe
it's absolutely safe to use state in a `Trigger` through `TriggerContext`.
Regarding `Evictor`, there are no such methods in `EvictorContext`. After
taking a glance at the relevan
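As an illustrative sketch of the pattern described above (assuming Flink's streaming API is on the classpath; the trigger class, its name, and the count threshold are all hypothetical, not from the thread), a custom `Trigger` can keep per-window, per-key state through the `TriggerContext`'s partitioned-state methods:

```java
import org.apache.flink.api.common.state.ReducingState;
import org.apache.flink.api.common.state.ReducingStateDescriptor;
import org.apache.flink.api.common.typeutils.base.LongSerializer;
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

// Hypothetical trigger that fires once every `maxCount` elements,
// keeping the counter in partitioned state via the TriggerContext.
public class CountingTrigger extends Trigger<Object, TimeWindow> {
    private final long maxCount;
    private final ReducingStateDescriptor<Long> countDesc =
            new ReducingStateDescriptor<>("count", (a, b) -> a + b, LongSerializer.INSTANCE);

    public CountingTrigger(long maxCount) { this.maxCount = maxCount; }

    @Override
    public TriggerResult onElement(Object element, long ts, TimeWindow window,
                                   TriggerContext ctx) throws Exception {
        // State obtained here is scoped to the current key and window.
        ReducingState<Long> count = ctx.getPartitionedState(countDesc);
        count.add(1L);
        if (count.get() >= maxCount) {
            count.clear();
            return TriggerResult.FIRE;
        }
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) {
        // Clean up the per-window state when the window is purged.
        ctx.getPartitionedState(countDesc).clear();
    }
}
```

The key point is that state is always accessed through `ctx.getPartitionedState(...)`, so it participates in Flink's checkpointing like any other keyed state.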
As far as I know, it is completely safe.
On Fri, Jul 19, 2019, 1:35 AM M Singh wrote:
> Just wanted to see if there is any advice on this question. Thanks
>
> On Sunday, July 14, 2019, 09:07:45 AM EDT, M Singh
> wrote:
>
>
> Hi:
>
> Is it safe to manipulate the state of an object in the evictor
Hi Rong,
Thank you for the reply :-)
which Flink version are you using?
I'm using Flink-1.8.0.
what is the "sourceTable.getSchema().toRowType()" return?
Row(time1: TimeIndicatorTypeInfo(rowtime))
what does the line `.map(a -> a)` do, and can you remove it?
`.map(a -> a)` is just to illustrate a p
Hi Soheil,
> I was wondering if it is possible to save logs into a specified file?
Yes, of course.
> I put the following file in the resource directory of the project but it
has no effect
I guess it's because log4j has a higher priority. In the documentation [1],
it says "Users willing to use logback
Hi Soheil,
There is a logback.xml in the conf directory. You can modify that and see
if it works. For more information about logging please check
https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/logging.html
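For reference, a minimal sketch of a logback.xml that writes logs to one specific file (the target path and pattern here are placeholders, not from the thread; adjust them to your setup):

```xml
<configuration>
    <appender name="file" class="ch.qos.logback.core.FileAppender">
        <!-- Hypothetical target path; change to wherever you want the logs. -->
        <file>/tmp/flink-job.log</file>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{60} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="file"/>
    </root>
</configuration>
```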
Soheil Pourbafrani wrote on Friday, July 19, 2019 at 2:03 AM:
> Hi,
>
> When we run the F
I am using an ECS S3 instance for checkpointing, with the following
configuration:
s3.access-key vdna_np_user
s3.endpoint https://SU73ECSG**COM:9021
s3.secret-key **
I set the checkpoint in the code like
env.setStateBackend(new FsStateBackend("s3://vishwas.test1/checkpoints"))
I have a bucke
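For comparison, a minimal sketch of how the same S3 settings can be expressed in flink-conf.yaml (the endpoint, credentials, and bucket here are placeholders, not the real values from this thread):

```yaml
# S3 (ECS) settings picked up by the flink-s3-fs filesystem
s3.endpoint: https://ecs-host.example.com:9021   # placeholder endpoint
s3.access-key: my-access-key                     # placeholder
s3.secret-key: my-secret-key                     # placeholder

# Checkpointing to S3 via configuration instead of code
state.backend: filesystem
state.checkpoints.dir: s3://my-bucket/checkpoints
```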
Here is my wire log while trying to checkpoint to ECS S3. I see the request
got a 404; does this mean that it can't find the folder "checkpoints"? Since
S3 does not have folders, what should I put there? Thanks so much for all
the help that you guys have provided so far. Really appreciate it.
2
Hi
I'm looking to write a sink function for writing to websockets, in
particular ones that speak the WAMP protocol (
https://wamp-proto.org/index.html).
Before going down that path, I wanted to ask if
a) anyone has done something like that already, so I don't reinvent stuff
b) any caveats or warn
Just wanted to see if there is any advice on this question. Thanks
On Sunday, July 14, 2019, 09:07:45 AM EDT, M Singh
wrote:
Hi:
Is it safe to manipulate the state of an object in the evictor or trigger ?
Are there any best practices/dos and don't on this ?
Thanks
Hi,
When we run a Flink application, some logs about the run are generated, in
both local and distributed environments. I was wondering if it is possible
to save the logs into a specified file?
I put the following file in the resource directory of the project but it
has no effect:
logback.xml
Hey Folks - Just checking if you have any pointers for me. Thanks for your
advice.
On Sunday, July 14, 2019, 03:12:25 PM EDT, M Singh
wrote:
Also, are the event time timers and processing time timers handled
separately, i.e., if I register an event time timer and then use the same tim
Hey Folks - Just wanted to see if there are any thoughts on this question.
Thanks
On Saturday, July 13, 2019, 09:33:15 PM EDT, M Singh
wrote:
Hi:
I wanted to find out what is the timestamp associated with the elements of a
stream side output with different stream time characteristics.
T
Hi Tangkailin,
If I understand correctly from the snippet, you are trying to invoke this
in some sort of window, correct?
If that's the case, your "apply" method will be invoked every time the
window fires [1]. This means there will be one new instance of the HashMap
created each time "apply" is i
Hi Dongwon,
Can you provide a bit more information:
which Flink version are you using?
what is the "sourceTable.getSchema().toRowType()" return?
what does the line `.map(a -> a)` do, and can you remove it?
if I am understanding correctly, you are also using "time1" as the rowtime;
is that what your
Hi team,
1.9 is bringing very exciting updates, State Processor API and MapState
migrations being two of them. Thank you for all the hard work!
I checked the burndown board [1]; do you have an estimated timeline for the
GA release of 1.9?
[1]
https://issues.apache.org/jira/secure/RapidBoard.js
The image should be visible now at
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Checkpoints-timing-out-for-no-apparent-reason-td28793.html#none
It doesn't look like a disk performance or network issue. It feels more
like a buffer overflowing or a timeout due to slightly
Hi Vishwas,
It took me some time to find this out as well. If you have your properties
file under lib, the following will work:
val kafkaPropertiesInputStream =
getClass.getClassLoader.getResourceAsStream("lib/config/kafka.properties")
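To illustrate what happens after the stream is obtained (a minimal, hedged sketch; the class, the helper, and the sample key/value below are all made up for illustration, and a StringReader stands in for the classpath stream so the example runs standalone), java.util.Properties can parse the contents directly:

```java
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

public class KafkaPropsDemo {
    // Load properties from any Reader; in the real job the Reader would wrap
    // the InputStream returned by
    // getClass().getClassLoader().getResourceAsStream("lib/config/kafka.properties").
    static Properties load(Reader source) throws Exception {
        Properties props = new Properties();
        props.load(source);
        return props;
    }

    public static void main(String[] args) throws Exception {
        // Sample content standing in for the real kafka.properties file.
        Properties props = load(new StringReader("bootstrap.servers=localhost:9092\n"));
        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```

Remember to null-check the stream returned by getResourceAsStream: it returns null (rather than throwing) when the resource is not on the classpath.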
Hope this helps,
Maxim.
On Wed, Jul 17, 2019 at 7:23 PM Vishwas Siravara
Can you check in the Kafka logs what happens when you add new brokers?
On Thu, Jul 18, 2019 at 4:36 PM Yitzchak Lieberman <
yitzch...@sentinelone.com> wrote:
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata
> after 6 ms.
>
>
>
>
> On Thu, Jul 18, 2019 at 3:
org.apache.kafka.common.errors.TimeoutException: Failed to update
metadata after 6 ms.
On Thu, Jul 18, 2019 at 3:49 PM miki haiat wrote:
> Can you share your logs?
>
>
> On Thu, Jul 18, 2019 at 3:22 PM Yitzchak Lieberman <
> yitzch...@sentinelone.com> wrote:
>
>> Hi.
>>
>> I have flink a a
Can you share your logs?
On Thu, Jul 18, 2019 at 3:22 PM Yitzchak Lieberman <
yitzch...@sentinelone.com> wrote:
> Hi.
>
> I have a Flink application that produces to Kafka with 3 brokers.
> When I add 2 brokers that are not up yet, the checkpoint (a key in
> s3) fails due to a timeout error.
>
>
Hi.
I have a Flink application that produces to Kafka with 3 brokers.
When I add 2 brokers that are not up yet, the checkpoint (a key in
s3) fails due to a timeout error.
Do you know what can cause that?
Thanks,
Yitzchak.
I think that using Kafka to get CDC events is fine. The problem, in my
case, is really about how to proceed:
1) do I need to create Flink tables before reading CDC events, or is there
a way to automatically create Flink tables when they get created via a
DDL event (assuming a filter on the name
I'm actually thinking about this option as well.
I'm assuming that the correct way to implement it is to integrate
Debezium Embedded into a source function?
[1] https://github.com/debezium/debezium/tree/master/debezium-embedded
On Wed, Jul 17, 2019 at 7:08 PM Flavio Pompermaier
wrote:
> Hi
Hi to all,
I'm trying to exploit async I/O in my Flink job.
In my use case I use keyed tumbling windows, and I'd like to execute the
async action only once per key and window (while
AsyncDataStream.unorderedWait executes the async call for every element
of my stream). Is there an easy way to do t
Hi Maxim,
As far as I understand, it's hard to draw a simple conclusion about which
is faster. If the job is small (for example, the vertex count and the
parallelism are very small), session mode is usually faster than per-job
mode. I think session mode has the advantage of sharing the AM and TM,
Darling
Andrew D.Lin
> Begin forwarded message:
>
> From: 陈Darling
> Subject: Flink checkpoint: how to calculate the number of files below the chk
> folder
> Date: July 18, 2019, 4:02:11 PM GMT+8
> To: user@flink.apache.org
>
> Hello
>
> state_checkpoints_dir/2d93ffacbddcf363b960317816566552/chk-2903/1e95606a-8f70