>
> Best,
>
> Dawid
>
> [1] https://issues.apache.org/jira/browse/FLINK-21229
> On 28/01/2021 17:39, Laurent Exsteens wrote:
>
> Hello,
>
> I'm trying to use Flink SQL (on Ververica Platform, so no other options
> than pure Flink SQL) to read Confluent Avro messages […] a normal Flink job?
Thanks in advance for your help.
Best Regards,
Laurent.
--
*Laurent Exsteens*
Data Engineer
(M) +32 (0) 486 20 48 36
*EURA NOVA*
Rue Emile Francqui, 4
1435 Mont-Saint-Guibert
(T) +32 10 75 02 00
*euranova.eu <http://euranova.eu/>*
*research.euranova.eu*
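For reference, Flink 1.12+ ships an `avro-confluent` format that resolves schemas against a Confluent Schema Registry. Below is a minimal sketch of such a source table, assuming that format is available on the platform (the registry option key was renamed to 'avro-confluent.url' in later versions); the table, topic, and host names are illustrative, not from the original thread:

import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

object ConfluentAvroSource {
  def main(args: Array[String]): Unit = {
    val tEnv = TableEnvironment.create(
      EnvironmentSettings.newInstance().inStreamingMode().build())
    // 'avro-confluent' looks up the writer schema in the registry per record.
    tEnv.executeSql(
      """
        |CREATE TABLE customers (
        |  client_number BIGINT,
        |  address STRING
        |) WITH (
        |  'connector' = 'kafka',
        |  'topic' = 'customers',
        |  'properties.bootstrap.servers' = 'kafka:9092',
        |  'format' = 'avro-confluent',
        |  'avro-confluent.schema-registry.url' = 'http://schema-registry:8081'
        |)
        |""".stripMargin)
  }
}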
>> You can use `ORDER BY proctime() DESC` in the deduplication query, it
>> will keep the last row, I think that's what you want.
>>
>> BTW, deduplication has supported event time since 1.12; this will be
>> available soon.
>>
>> Best,
>> Leonard
>>
>>
>
> …to deduplicate, *but only keeping in the deduplication
> state the latest value of the changed column* to compare with. While here
> it seems to keep all previous values…
>
>
> You can use `ORDER BY proctime() DESC` in the deduplication query, it
> will keep the last row, I think that's what you want.
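A minimal sketch of that last-row deduplication, assuming a TableEnvironment `tEnv` and a registered `customers` table with columns `client_number` and `address` (names are illustrative):

// Keeps, per client_number, only the most recent row (processing time).
val lastRows = tEnv.sqlQuery(
  """
    |SELECT client_number, address
    |FROM (
    |  SELECT *,
    |    ROW_NUMBER() OVER (
    |      PARTITION BY client_number
    |      ORDER BY proctime() DESC) AS rownum
    |  FROM customers)
    |WHERE rownum = 1
    |""".stripMargin)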
What am I doing wrong here?
Thanks in advance for your help.
Regards,
Laurent.
On Thu, 12 Nov 2020 at 21:56, Laurent Exsteens
wrote:
> I'm now trying with a MATCH_RECOGNIZE:
>
>
> SELECT *
> FROM customers
> MATCH_RECOGNIZE (
>   PARTITION BY client_number
>   ORDER BY […]
…deduplication already needs a complex query involving CEP
Thanks in advance for your help!
Best Regards,
Laurent.
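For what it's worth, here is a hedged sketch of how the MATCH_RECOGNIZE attempt above could be completed for change detection, emitting one row per address change. It assumes a processing-time attribute column `proctime` on `customers` (e.g. declared as `proctime AS PROCTIME()`), a TableEnvironment `tEnv`, and illustrative pattern/measure names:

tEnv.executeSql(
  """
    |SELECT *
    |FROM customers
    |MATCH_RECOGNIZE (
    |  PARTITION BY client_number
    |  ORDER BY proctime
    |  MEASURES B.address AS new_address
    |  ONE ROW PER MATCH
    |  AFTER MATCH SKIP TO LAST B  -- let this B become the next match's A
    |  PATTERN (A B)
    |  DEFINE B AS B.address <> A.address  -- B is a row whose address changed
    |) AS changes
    |""".stripMargin)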
On Thu, 12 Nov 2020 at 17:22, Laurent Exsteens
wrote:
> I see what my mistake was: I was using a field in my ORDER BY, while it
> only supports proctime() for now.
>
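Since ORDER BY in these queries is restricted to a time attribute, one way to make that explicit is a computed processing-time column in the DDL. A sketch, assuming a TableEnvironment `tEnv`; names are illustrative:

tEnv.executeSql(
  """
    |CREATE TABLE src (
    |  client_number BIGINT,
    |  address STRING,
    |  proctime AS PROCTIME()  -- processing-time attribute, usable in ORDER BY
    |) WITH (
    |  'connector' = 'kafka',
    |  'topic' = 'sat_customers_address',
    |  'properties.bootstrap.servers' = 'kafka:9092',
    |  'format' = 'csv'
    |)
    |""".stripMargin)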
> SELECT client_number, address
> FROM (
>    SELECT *,
>      ROW_NUMBER() OVER (
>        PARTITION BY client_number, address ORDER BY proctime() ASC) AS rownum
>    FROM src)
> WHERE rownum = 1
>
> That means the duplicate records on the same client_number + address will
> be ignored, but the new value of address will be emitted as an append-only
> stream. That means column value changes will notify downstream operators.
> For example, inputs (42, addrA), (42, addrA), (42, addrB) produce exactly
> two emitted rows: (42, addrA) and (42, addrB).
> The difference between keeping the first row and keeping the last row is
> specified by the direction of the ORDER BY clause [1].
>
> Best,
> Jark
>
> [1]:
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#de
> CREATE TABLE […] (
>   name STRING,
>   timestamp BIGINT METADATA,  -- read timestamp
> ) WITH (
>   'connector' = 'kafka',
>   'topic' = 'test-topic',
>   'format' = 'avro'
> )
>
> Best,
> Jark
>
> [1]:
> https://cw
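A hedged, complete version of the DDL quoted above, assuming Flink 1.12+ where the Kafka connector exposes the record timestamp as a metadata column (the exact column type follows the connector docs for your version; table and host names are illustrative, and `tEnv` is an existing TableEnvironment):

tEnv.executeSql(
  """
    |CREATE TABLE test_topic (
    |  name STRING,
    |  ts TIMESTAMP(3) WITH LOCAL TIME ZONE METADATA FROM 'timestamp'  -- Kafka record timestamp
    |) WITH (
    |  'connector' = 'kafka',
    |  'topic' = 'test-topic',
    |  'properties.bootstrap.servers' = 'kafka:9092',
    |  'format' = 'avro'
    |)
    |""".stripMargin)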
  'connector' = 'kafka',
  'format' = 'csv',
  'properties.bootstrap.servers' = 'kafka-0.kafka-headless.vvp.svc.cluster.local:9092',
  'properties.group.id' = 'flinkSQL',
  'topic' = 'sat_customers_address'
);
INSERT INTO sat_custo
…the correct configuration to put there to
have it running on a k8s volume.
Thanks a lot for your help.
Best Regards,
Laurent.
Hi Nick,
On a project I worked on, we simply made the file accessible on a shared
NFS drive.
Our source was custom, and we forced it to parallelism 1 inside the job, so
the file wouldn't be read multiple times. The rest of the job was
distributed.
This was also on a standalone cluster. On a resour
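A sketch of that setup, using the built-in readTextFile in place of the custom source described above (the path and the transformation are illustrative):

import org.apache.flink.streaming.api.scala._

object SharedFileJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val lines = env
      .readTextFile("/mnt/shared/reference-data.csv") // file on the NFS mount
      .setParallelism(1) // a single subtask reads the file exactly once
    lines
      .map(_.toUpperCase) // downstream operators keep the default parallelism
      .print()
    env.execute("shared-file job")
  }
}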
> `cancel()` is intended to inform the `SourceFunction` to
> cleanly exit its `#run` method/loop (note SIGINT will be issued anyway).
> In this case `#close` will also be called after the source's threads exit.
> Piotrek
>
> On 25 May 2020, at 21:37, Laurent Exsteens
> wrote:
>
> Thanks
What is the proper way to handle this issue? Is there some kind of closable
source interface we should implement?
Thanks in advance for your help.
Best Regards,
Laurent.
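A minimal sketch of the clean-shutdown contract Piotrek describes above: cancel() flips a volatile flag, run() exits its loop, and close() runs afterwards for cleanup. The polling logic is illustrative:

import org.apache.flink.streaming.api.functions.source.{RichSourceFunction, SourceFunction}

class PollingSource extends RichSourceFunction[String] {
  @volatile private var running = true

  override def run(ctx: SourceFunction.SourceContext[String]): Unit =
    while (running) {
      ctx.collect("tick")  // replace with the real per-iteration work
      Thread.sleep(1000)   // the interrupt issued on cancel may cut this short
    }

  override def cancel(): Unit = running = false

  override def close(): Unit = {
    // invoked after run() returns: release files, connections, etc.
  }
}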
>
> Seth
>
> On Apr 11, 2020, at 11:06 AM, Laurent Exsteens <
> laurent.exste...@euranova.eu> wrote:
>
>
Hello,
I have a generic ProcessFunction using list state, for which I receive the
type information as a constructor parameter (since it is not possible to
create the type information in the class due to type erasure).
I now need to keep not only the data, but also the timestamp at which they
appeared.
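A hedged sketch of that pattern, pairing the injected element TypeInformation with a timestamp in list state; the class and state names are illustrative, not from the original code:

import org.apache.flink.api.common.state.{ListState, ListStateDescriptor}
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.api.scala.createTypeInformation
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.util.Collector

class TimestampedListState[K, T](elemType: TypeInformation[T])
    extends KeyedProcessFunction[K, T, T] {

  @transient private var buffer: ListState[(T, Long)] = _

  override def open(parameters: Configuration): Unit = {
    implicit val ti: TypeInformation[T] = elemType // lets the macro resolve T
    val descriptor = new ListStateDescriptor[(T, Long)](
      "timestamped-elements", createTypeInformation[(T, Long)])
    buffer = getRuntimeContext.getListState(descriptor)
  }

  override def processElement(value: T,
                              ctx: KeyedProcessFunction[K, T, T]#Context,
                              out: Collector[T]): Unit = {
    // ctx.timestamp() is null without event time; fall back to wall clock.
    val ts = Option(ctx.timestamp()).map(_.longValue).getOrElse(System.currentTimeMillis())
    buffer.add((value, ts))
    out.collect(value)
  }
}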
Hi Aljoscha,
Thank you for your answer!
Out of curiosity, would writing my own serializer involve implementing
serialization for every type I could get?
On Wed, Apr 1, 2020, 13:57 Aljoscha Krettek wrote:
> Hi Laurent!
>
> On 31.03.20 10:43, Laurent Exsteens wrote:
> > Yesterd
> Best,
> Robert
>
> On Tue, Mar 31, 2020 at 10:48 AM Laurent Exsteens <
> laurent.exste...@euranova.eu> wrote:
>
>> Hi Tzu-Li,
>>
>> thanks a lot for your answer. I will try this!
>>
>> However, I was looking for something that does fully simulate […]
…serialization errors.
>
> Cheers,
> Gordon
>
>
>
> …the descriptor is Some[T] instead of T, so I had to wrap and unwrap it
> every time I used it.
>
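A sketch of that Option-based workaround, with illustrative names (the helper below is hypothetical, not from the original code); the wrap/unwrap cost shows up at every state access:

import org.apache.flink.api.common.state.ValueStateDescriptor
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.api.scala.createTypeInformation

def optionStateDescriptor[T](elemType: TypeInformation[T]): ValueStateDescriptor[Option[T]] = {
  implicit val ti: TypeInformation[T] = elemType // resolves T for the macro
  new ValueStateDescriptor[Option[T]]("last-value", createTypeInformation[Option[T]])
}

// At each access (state: ValueState[Option[T]]):
//   val previous = Option(state.value()).flatten // unwrap, null-safe
//   state.update(Some(current))                  // wrap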
> On Sat, Mar 28, 2020 at 6:02 AM Laurent Exsteens <
> laurent.exste...@euranova.eu> wrote:
>
>> Hello,
>>
>> Using Flink 1.8.1, I'm getting the following error:
…state using generic types? If yes, how?
Thanks in advance for your help!
Best Regards,
Laurent.
Hello,
I would like to test a Flink application, including any problem that would
happen when deployed on a distributed cluster.
The way we do this currently is to launch a Flink cluster in Docker and run
the job on it. This setup seems heavy and might not be necessary.
Is there a way to simulate a distributed cluster in a more lightweight way?
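One lighter option is the MiniCluster from flink-test-utils, which runs several TaskManagers inside a single JVM and still exercises serialization and network shuffles between pipeline stages. A hedged sketch (slot counts and the pipeline are illustrative):

import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration
import org.apache.flink.streaming.api.scala._
import org.apache.flink.test.util.MiniClusterWithClientResource

object MiniClusterSmokeTest {
  def main(args: Array[String]): Unit = {
    val cluster = new MiniClusterWithClientResource(
      new MiniClusterResourceConfiguration.Builder()
        .setNumberTaskManagers(2)       // >1 TM so records cross the network stack
        .setNumberSlotsPerTaskManager(2)
        .build())
    cluster.before() // starts the cluster and registers it as the context env
    try {
      val env = StreamExecutionEnvironment.getExecutionEnvironment
      env.setParallelism(4)
      env.fromElements(1, 2, 3).map(_ * 2).print()
      env.execute("mini-cluster smoke test")
    } finally cluster.after()
  }
}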