What you are doing is registering two third-party classes as POJO types,
which is actually Flink's default behavior even without the registration.
For POJO types, Flink will create a POJO serializer for serialization.
A side note: since Flink 2.0, built-in serialization support for Java
collection types such as java.util.List, Set, and Map has been added.
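For reference, a minimal sketch of the conventions Flink's type
extraction expects before it treats a class as a POJO (the class and
field names here are illustrative, not from the thread):

// Public class with a public no-arg constructor; every field is either
// public or reachable through a getter/setter pair.
public class Order {
    public String orderId;   // public field: handled directly

    private long amount;     // non-public field: needs getter/setter
    public long getAmount() { return amount; }
    public void setAmount(long amount) { this.amount = amount; }

    public Order() {}        // required no-arg constructor
}

Anything that breaks these rules falls back to Kryo as a generic type.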
Hi,
So I have two classes (third party pojos):
public class A {
    private List<B> bList;
    ...
}

public class B {
    ...
}
I have defined my type info factory for these types.
Hi Nix,
I tried the Python approach and successfully ran it with the command
below in Docker, and was able to save data in InfluxDB Cloud.
*Running python code, working*
python3 main.py
*Below not working,*
/opt/flink/bin/flink run -py /opt/flink/flink_job/main.py
error:
Traceback (most recent call last):
Hi Guy,
Most Flink CDC connectors are distributed as both "thin" and "fat" jars [1].
Fat jars (with names like flink-sql-connector-mysql-cdc) have all external
dependencies bundled inside and are mainly used for Flink Table/SQL jobs.
Thin jars (with names like flink-connector-mysql-cdc) ship only the
connector classes themselves and are mostly used for DataStream jobs,
where you manage the transitive dependencies in your own build.
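As a rough sketch of how the two flavors show up in a Gradle build (the
coordinates are illustrative; please verify the exact group and version
for CDC 3.3.0 against [1]):

dependencies {
    // Fat jar: connector plus shaded third-party dependencies; usually
    // dropped into Flink's lib/ or used from the SQL Client instead of
    // being declared here. Pick one flavor, not both.
    implementation 'org.apache.flink:flink-sql-connector-mysql-cdc:3.3.0'

    // Thin jar: connector classes only; the transitive dependencies
    // (e.g. Debezium) must end up in your own shadowJar.
    implementation 'org.apache.flink:flink-connector-mysql-cdc:3.3.0'
}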
I am new to Flink and it has been a long time since I last used Java. I
am trying to write a Flink app that uses the Table API, with a source
table backed by the MySQL CDC connector 3.3.0, talking to a Postgres
table through the JDBC driver. I am using Gradle to build a shadowJar
which I submit to my Flink containers by
Hi, you may use the option "pipeline.serialization-config" [1] to register
type info for any custom type; the option is available since Flink 1.19.
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#pipeline-serialization-config
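For example, with the type info factory from the question below, the
registration could look roughly like this (the exact syntax is described
in [1]; the package names are illustrative):

pipeline.serialization-config:
  - com.thirdparty.A: {type: typeinfo, class: com.example.ATypeInfoFactory}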
Best,
Zhanghao Chen
Hi Gabor,
Glad we could help, and thank you for working on this. The performance
improvement you made is fantastic.
I think on our side we will wait for the final official patch, and then
decide whether we move up to 1.20.1 (or whatever includes it) or just
apply the patch to what we have. Our Flink is
Hi Jean-Marc,
Thanks for your efforts! We've run quite extensive tests internally and
they all pass. Good to see that the number of keys matches on your side
too.
Regarding key ordering, the state processor API gives no guarantees in
terms of ordering.
All in all I'm confident with the patch.
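For readers of the thread, a minimal sketch of reading every key of a
keyed operator from a savepoint with the State Processor API, roughly
the kind of read being timed here (the savepoint path, operator uid,
and state descriptor are illustrative assumptions):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.hashmap.HashMapStateBackend;
import org.apache.flink.state.api.SavepointReader;
import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class ReadKeys {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        // Backend used for the read; swap in EmbeddedRocksDBStateBackend
        // to reproduce the RocksDB-vs-HashMap comparison above.
        SavepointReader savepoint = SavepointReader.read(
            env, "file:///path/to/savepoint", new HashMapStateBackend());
        // "my-uid" must match the uid of the keyed operator in the job.
        savepoint.readKeyedState("my-uid", new KeyReader()).print();
        env.execute("read-keys");
    }

    // Emits each key once; the state value itself is ignored here.
    static class KeyReader extends KeyedStateReaderFunction<String, String> {
        ValueState<Long> state;

        @Override
        public void open(Configuration parameters) {
            // Must match a state registered by the original job
            // (name and type are illustrative).
            state = getRuntimeContext().getState(
                new ValueStateDescriptor<>("my-state", Types.LONG));
        }

        @Override
        public void readKey(String key, Context ctx, Collector<String> out) {
            out.collect(key);
        }
    }
}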
Hi Gabor,
I see all my 1,331,301 keys with your patch, which is exactly what I
expected. If you have more specific concerns I suppose I can instrument
my code further. I have no expectations regarding the key ordering.
JM
On Wed, Feb 12, 2025 at 11:29 AM Gabor Somogyi wrote:
> I think the hashmap vs patched RocksDB makes sense, at least I'm
> measuring similar numbers with relatively small states.
Hi,
I have a POJO class provided by some library, say A.class. I can create
a type info factory for it like:

public class ATypeInfoFactory extends TypeInfoFactory<A> {
    @Override
    public TypeInformation<A> createTypeInfo(
            Type t, Map<String, TypeInformation<?>> genericParameters) {
        Map<String, TypeInformation<?>> fields = new HashMap<>();
        fields.put("bList", Types.LIST(Types.POJO(B.class)));
        return Types.POJO(A.class, fields);
    }
}
I think the hashmap vs patched RocksDB makes sense, at least I'm
measuring similar numbers with relatively small states.
The run with the remove() commented out being that slow is a bit
surprising, but since it causes correctness issues under some
circumstances I would abandon that approach.
> I probably need more time
Hi Gabor,
I applied your 1.20 patch, and I got some very good numbers from it...
For my 5GB savepoint, I made sure to skip all my code overhead to get
the raw numbers, and I can read it in:
- HashMap: 4 minutes
- RocksDB with your patch: ~19 minutes
- RocksDB with the remove() commented out: 49 minutes