+ 30 = 60 min. Did you wait that long to check the output?

> Aneesha Kaushal wrote on Thursday, March 18, 2021 at 11:29 PM:
>
>> Hi,
>>
>> I am doing a simple POC using Flink SQL and I am facing some issues with
>> Interval Join.
>>
>> *Use Case*: I have two Kafka s
Hi,
I am doing a simple POC using Flink SQL and I am facing some issues with
Interval Join.
*Use Case*: I have two Kafka streams, and using a Flink SQL interval join I
want to remove rows from *stream 1* (abandoned_user_visits) that are present
in *stream 2* (orders) within some time interval.
*Data:
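
The snippet cuts off before the sample data, but the shape of the query is clear from the description. Below is a minimal, self-contained sketch of that pattern in the Java Table API; the table names come from the message, while the columns (user_id, visit_time, order_time), the Kafka/JSON connector options, and the 30-minute bound are assumptions made for illustration. A LEFT interval join plus an IS NULL filter keeps only the visits that have no matching order inside the interval.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AbandonedVisitsWithoutOrders {
    public static void main(String[] args) {
        // Streaming Table API environment (Flink 1.12-era API).
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Hypothetical schemas: a shared user_id key plus an event-time attribute
        // on each side, which interval joins require.
        tEnv.executeSql(
                "CREATE TABLE abandoned_user_visits (" +
                "  user_id STRING," +
                "  visit_time TIMESTAMP(3)," +
                "  WATERMARK FOR visit_time AS visit_time - INTERVAL '5' SECOND" +
                ") WITH (" +
                "  'connector' = 'kafka'," +
                "  'topic' = 'abandoned_user_visits'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +
                "  'format' = 'json'," +
                "  'scan.startup.mode' = 'earliest-offset')");

        tEnv.executeSql(
                "CREATE TABLE orders (" +
                "  user_id STRING," +
                "  order_time TIMESTAMP(3)," +
                "  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND" +
                ") WITH (" +
                "  'connector' = 'kafka'," +
                "  'topic' = 'orders'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +
                "  'format' = 'json'," +
                "  'scan.startup.mode' = 'earliest-offset')");

        // LEFT interval join plus IS NULL filter: keep only the visits that have
        // no matching order for the same user within 30 minutes of the visit.
        tEnv.executeSql(
                "SELECT v.user_id, v.visit_time" +
                " FROM abandoned_user_visits AS v" +
                " LEFT JOIN orders AS o" +
                "   ON v.user_id = o.user_id" +
                "  AND o.order_time BETWEEN v.visit_time" +
                "      AND v.visit_time + INTERVAL '30' MINUTE" +
                " WHERE o.user_id IS NULL").print();
    }
}

One property of this pattern is relevant to the "30 + 30 = 60 min" reply above: with an outer interval join, a visit with no match is only emitted once the watermark has passed the upper bound of the interval, so the output can lag the input by roughly the interval size plus the watermark delay.
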
Thanks Chesnay! The exception is gone now.
> On 03-Dec-2018, at 5:22 PM, Chesnay Schepler wrote:
>
> Based on the stacktrace the client is not running in legacy mode; please
> check the client flink-conf.yaml.
Caused by: java.util.concurrent.TimeoutException
Thanks,
Aneesha Kaushal
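
For context on the fix being hinted at: in the Flink 1.5/1.6-era releases the client and the cluster had to agree on the execution mode, which was controlled by a single "mode" entry in flink-conf.yaml. A minimal sketch of the client-side entry, assuming the cluster side was started in the pre-FLIP-6 mode (if the cluster runs the new mode instead, this entry would be removed or set to "new"):

# Client-side flink-conf.yaml entry (assumption: Flink 1.5/1.6-era "mode" option).
# "new" is the default; "legacy" selects the pre-FLIP-6 client/cluster code paths,
# and the client must match how the cluster was started.
mode: legacy
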
> On 06-Sep-2018, at 10:45 AM, Gary Yao wrote:
>
> Hi Jason,
>
> From the stacktrace it seems that you are using the 1.4.0 client to list jobs
> on a 1.5.x cluster. This will not work. You have to use the 1.5
Hello,
I have a Flink job which processes a stream of Event (an Avro object) and
creates Sessions (another Avro object) using session windows.
I am not able to recover my job from a savepoint when I try to make some changes
to the schema of the Event object or to the schema of Sessions.
Is there anyon
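
To make the setup concrete, here is a minimal sketch of a job with that shape. It is not the poster's code: Event and Session below are plain POJO stand-ins for the Avro-generated classes, and the key field, session gap, and aggregation logic are invented for illustration. Its purpose is to show where those types end up in checkpointed state, which is what has to remain serializer-compatible when restoring from a savepoint.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.EventTimeSessionWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class SessionWindowSketch {

    // Plain stand-ins for the Avro-generated Event and Session classes.
    public static class Event {
        public String userId;
        public long timestamp;
        public Event() {}
        public Event(String userId, long timestamp) {
            this.userId = userId;
            this.timestamp = timestamp;
        }
    }

    public static class Session {
        public String userId;
        public int eventCount;
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Event> events = env
                .fromElements(new Event("u1", 1_000L), new Event("u1", 2_000L))
                .assignTimestampsAndWatermarks(
                        WatermarkStrategy.<Event>forMonotonousTimestamps()
                                .withTimestampAssigner((e, ts) -> e.timestamp));

        // Events are folded into one Session per key and session window. With
        // aggregate() the accumulator type (Session) is what lives in window
        // state; a job using apply()/process() would instead buffer the raw
        // Event objects. Whatever is in state is what a savepoint snapshots.
        DataStream<Session> sessions = events
                .keyBy(e -> e.userId)                                       // assumed key
                .window(EventTimeSessionWindows.withGap(Time.minutes(30)))  // assumed gap
                .aggregate(new AggregateFunction<Event, Session, Session>() {
                    @Override
                    public Session createAccumulator() {
                        return new Session();
                    }

                    @Override
                    public Session add(Event e, Session acc) {
                        acc.userId = e.userId;
                        acc.eventCount++;
                        return acc;
                    }

                    @Override
                    public Session getResult(Session acc) {
                        return acc;
                    }

                    @Override
                    public Session merge(Session a, Session b) {
                        a.eventCount += b.eventCount;
                        return a;
                    }
                });

        sessions.print();
        env.execute("session window sketch");
    }
}

Changing the Avro schema of whichever type is held in state means the savepoint's serializer snapshot and the new classes must stay compatible; on Flink versions without state schema evolution support for Avro types, that restore can fail.
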
ote:
>
> Could you please send a screenshot?
>
>> On 20. Feb 2018, at 11:09, Aneesha Kaushal <aneesha.kaus...@reflektion.com> wrote:
>>
>> Hello Aljoscha
>>
>> I looked into the Subtasks section on the Flink Dashboard, for the about two
>
anagers?
>
> Best,
> Aljoscha
>
>> On 20. Feb 2018, at 10:50, Aneesha Kaushal <aneesha.kaus...@reflektion.com> wrote:
>>
>> Hello,
>>
>> I have a Flink batch job, where I am grouping a dataset on some keys, and then
>> using group r
Hello,
I have a Flink batch job, where I am grouping a dataset on some keys, and then
using group reduce. Parallelism is set to 16.
The slots for the Map task are distributed across all the machines, but for
GroupReduce all the slots are being assigned to the same machine. Can you help
me understand
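
For reference, a minimal, self-contained DataSet API sketch of the groupBy-plus-group-reduce shape described above, with made-up Tuple2 records and a trivial per-key sum. It is not the poster's job and does not explain the slot placement; it only shows the structure that the parallelism of 16 applies to.

import org.apache.flink.api.common.functions.GroupReduceFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class GroupReduceSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(16);

        // Made-up (key, value) records standing in for the real dataset.
        DataSet<Tuple2<String, Long>> input = env.fromElements(
                Tuple2.of("a", 1L), Tuple2.of("b", 2L), Tuple2.of("a", 3L));

        // Group on the key field, then reduce each group; the reduce side is
        // hash-partitioned on the grouping key across the parallel subtasks.
        DataSet<Tuple2<String, Long>> sums = input
                .groupBy(0)
                .reduceGroup(new GroupReduceFunction<Tuple2<String, Long>, Tuple2<String, Long>>() {
                    @Override
                    public void reduce(Iterable<Tuple2<String, Long>> values,
                                       Collector<Tuple2<String, Long>> out) {
                        String key = null;
                        long sum = 0L;
                        for (Tuple2<String, Long> v : values) {
                            key = v.f0;
                            sum += v.f1;
                        }
                        out.collect(Tuple2.of(key, sum));
                    }
                });

        sums.print();
    }
}
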
Hello,
I am using Flink 1.2 and writing records to S3 using the rolling sink.
I am encountering this S3 write error quite frequently:
TimerException{com.amazonaws.services.s3.model.AmazonS3Exception: Status Code:
404, AWS Service: Amazon S3, AWS Request ID: B573887B1850BF28, AWS Error Code:
nu
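
The snippet cuts off mid-stacktrace. For context, here is a minimal sketch of how a RollingSink is typically wired up with the Flink 1.2-era filesystem connector; the S3 path, bucketing format, and batch size are made up, and the job's real configuration is not visible in the message. The sketch shows the wiring only and does not address the 404 response itself.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.DateTimeBucketer;
import org.apache.flink.streaming.connectors.fs.RollingSink;
import org.apache.flink.streaming.connectors.fs.StringWriter;

public class RollingSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder source standing in for the real record stream.
        DataStream<String> records = env.fromElements("record-1", "record-2");

        // Hypothetical S3 path; the real bucket, bucketing pattern, and batch
        // size are not shown in the message.
        RollingSink<String> sink = new RollingSink<String>("s3://my-bucket/flink-output");
        sink.setBucketer(new DateTimeBucketer("yyyy-MM-dd--HH"));
        sink.setWriter(new StringWriter<String>());
        sink.setBatchSize(128L * 1024 * 1024); // roll to a new part file at ~128 MB

        records.addSink(sink);
        env.execute("rolling sink sketch");
    }
}
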