Hi Surendra,
You can follow the discussion on this topic in the Dev mailing list [1]. I
would expect it in the next couple of weeks.
Best regards,
Martijn
[1] https://lists.apache.org/thread/n417406j125n080vopljgfflc45yygh4
On Fri, 4 Feb 2022 at 08:49, Surendra Lalwani wrote:
> Hi Team,
>
>
Hi Team,
Any ETA on the Flink 1.13.6 release?
Thanks and Regards,
Surendra Lalwani
On Sun, Jan 9, 2022 at 3:50 PM David Morávek wrote:
> The Flink community officially only supports the current and previous minor
> versions [1] (1.13, 1.14) with bug fixes. Personally I wouldn't expect
> there wi
Hey Ingo,
Thanks for the quick response. I will bother you a bit more : ).
We have never used external catalogs; do you perhaps have a link that we can
look at?
The only reference that I see online is for custom catalogs. Is this the
same as external catalogs:
https://docs.ververica.com/user_guid
Hi Natu,
the functionality hasn't been actively blocked, it just hasn't yet been
implemented in the Ververica Platform Built-In Catalog. Using any
external catalog which supports partitioning will work fine.
I'll make a note internally for your request on this, though I cannot
make any state
Good day,
Although Flink SQL allows us to create partitioned tables, we are unable to
do so on vvp at the moment because of the below error:
Cause: Partitioned tables are not supported yet.
Can we understand why the functionality was blocked, or when partitioned
tables will be supported on vvp?
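For context, the kind of DDL that triggers this error is an ordinary Flink SQL
partitioned table. A minimal sketch, assuming a filesystem connector and
made-up table and column names:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class PartitionedTableExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Hypothetical partitioned table; on the vvp built-in catalog this DDL
        // currently fails with "Partitioned tables are not supported yet."
        tEnv.executeSql(
            "CREATE TABLE events (" +
            "  user_id STRING," +
            "  event_time TIMESTAMP(3)," +
            "  dt STRING" +
            ") PARTITIONED BY (dt) WITH (" +
            "  'connector' = 'filesystem'," +
            "  'path' = 'file:///tmp/events'," +
            "  'format' = 'json'" +
            ")");
    }
}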
Ok it's not my data either. I think it may be a volume issue. I have
managed to consistently reproduce the error. I'll upload a reproducer ASAP.
On Thu, 3 Feb 2022 at 15:37, John Smith wrote:
> Ok so I tried to create a reproducer but I couldn't reproduce it. But the
> actual job once in a whi
Ok so I tried to create a reproducer but I couldn't reproduce it. The
actual job, however, throws that error once in a while. So I'm wondering if
maybe one of the incoming records is not valid, though I do validate prior
to reaching the key and window operators.
On Thu, 3 Feb 2022 at 14:32, John
Actually, maybe not, because with PrintSinkFunction it ran for a bit and
then threw the error.
On Thu, 3 Feb 2022 at 14:24, John Smith wrote:
> Ok it may be the ElasticSearch connector causing the issue?
>
> If I use PrintSinkFunction then I get no error and my stats print as
> expected.
>
> On
Ok it may be the ElasticSearch connector causing the issue?
If I use PrintSinkFunction then I get no error and my stats print as
expected.
On Wed, 2 Feb 2022 at 03:01, Francesco Guardiani wrote:
> Hi,
> your hashCode and equals seem correct. Can you post a minimum stream
> pipeline reproducer
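For reference, a minimal skeleton of such a reproducer could look like the
sketch below; the key class, window size and input values are made up, and
the PrintSinkFunction stands in for the Elasticsearch sink mentioned above
in the thread:

import java.util.Objects;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class KeyedWindowReproducer {

    // Custom key type; the hashCode/equals pair is what the thread is debugging.
    public static class MyKey {
        public String id;
        public MyKey() {}
        public MyKey(String id) { this.id = id; }
        @Override
        public boolean equals(Object o) {
            return o instanceof MyKey && Objects.equals(((MyKey) o).id, id);
        }
        @Override
        public int hashCode() { return Objects.hash(id); }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("a", "b", "a", "c")
            // key by the custom POJO so its hashCode/equals drive the partitioning
            .keyBy(new KeySelector<String, MyKey>() {
                @Override
                public MyKey getKey(String value) {
                    return new MyKey(value);
                }
            })
            .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
            // trivial aggregation, just to have a window operator in place
            .reduce((left, right) -> left + "," + right)
            // PrintSinkFunction instead of the Elasticsearch sink, as tried in the thread
            .addSink(new PrintSinkFunction<>());

        env.execute("keyed-window-reproducer");
    }
}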
Hello,
Thanks for the research! Good to know the cause.
Greetings,
Frank
On 03.02.22 17:18, Dawid Wysakowicz wrote:
I looked into the code again and unfortunately I have bad news :(
Indeed we treat S3 as if it always injects entropy. Even if the
entropy key is not specified, which effectively
I looked into the code again and unfortunately I have bad news :( Indeed,
we treat S3 as if it always injects entropy, even if the entropy key is
not specified, which effectively means it is disabled. I created a JIRA
ticket [1] to fix it.
Best,
Dawid
[1] https://issues.apache.org/jira/browse/
Hello,
I didn't know about entropy injection. I have checked, and there is no
entropy injection configured in my flink-conf.yaml. This is the relevant
section:
s3.access-key: ???
s3.endpoint: http://minio/
s3.path.style.access: true
s3.secret-key: ???
I see that there are still S3 paths defi
Hello,
Unfortunately, most examples available still use the FlinkKafkaConsumer.
I need to consume events from Kafka.
The format is Long,Timestamp,String,String.
Do I need to create a custom deserializer?
What also confuses me is this snippet:
KafkaSource<...> source = KafkaSource
How does it relate to the de
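For what it's worth, a minimal KafkaSource sketch looks roughly like the
following; the broker address, topic and group id are placeholders, and for
the Long,Timestamp,String,String format you would plug in your own
DeserializationSchema instead of SimpleStringSchema:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSourceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // KafkaSource replaces FlinkKafkaConsumer; the deserializer is set on the builder.
        KafkaSource<String> source = KafkaSource.<String>builder()
            .setBootstrapServers("localhost:9092")              // placeholder broker address
            .setTopics("events")                                 // placeholder topic name
            .setGroupId("my-consumer-group")                     // placeholder group id
            .setStartingOffsets(OffsetsInitializer.earliest())
            // value-only deserialization; replace with a custom DeserializationSchema
            // that parses the Long,Timestamp,String,String records into a POJO
            .setValueOnlyDeserializer(new SimpleStringSchema())
            .build();

        DataStream<String> stream =
            env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source");

        stream.print();
        env.execute("kafka-source-example");
    }
}

The type parameter of KafkaSource is simply whatever your deserializer
produces, which is how the source and the deserializer relate to each other.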
Hi Frank.
Do you use entropy injection by chance? I am afraid savepoints are not
relocatable in combination with entropy injection as described here[1].
Best,
Dawid
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/state/savepoints/#triggering-savepoints
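For context, entropy injection is enabled through two s3 options in
flink-conf.yaml; the values below are just the illustrative ones from the
documentation:
s3.entropy.key: _entropy_
s3.entropy.length: 4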
On 03/02/2022 14:4
Hi,
I think the more stable option would be the first one, as it also gives you
more flexibility. Reading the row as a string and then parsing it in a query
definitely costs more, and makes it less straightforward to use the other
Schema features of the Table API, such as watermark definition, primary keys, etc.
Hi,
I’m using the Table / SQL API.
I have a stream of strings, where each message contains several JSON strings
separated by "\n".
For example:
{"timestamp": "2021-01-01T00:00:00", person: {"name": "Vasya"}}\n
{"timestamp": "2021-01-01T01:00:00", person: {"name": "Max"}}
I would like to sp
Hi all,
We are currently deploying Flink on a 3-node k8s cluster, with 1 job manager
and 3 task managers.
We are trying to understand the recommended deployment, more specifically
for recovery from job-manager failure, and have some questions about that:
1. If we use flink HA solution
Hello,
I'm trying to inspect a savepoint that was stored on s3://flink/ (on a
MinIO server); I downloaded it to my laptop for inspection. I have two
KeyedProcessFunctions (state in the same savepoint) and, strangely
enough, one works perfectly and the other one doesn't.
The code is fairly sim
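For reference, inspecting keyed state from a downloaded savepoint is usually
done with the State Processor API. A rough sketch, with a placeholder path,
operator uid and state descriptor (they have to match the actual job):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.hashmap.HashMapStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
import org.apache.flink.util.Collector;

public class SavepointInspector {

    // Reads one ValueState<String> per key; adjust to the state actually used.
    public static class MyStateReader extends KeyedStateReaderFunction<String, String> {
        private transient ValueState<String> state;

        @Override
        public void open(Configuration parameters) {
            state = getRuntimeContext().getState(
                new ValueStateDescriptor<>("my-state", String.class));
        }

        @Override
        public void readKey(String key, Context ctx, Collector<String> out) throws Exception {
            out.collect(key + " -> " + state.value());
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Path to the downloaded savepoint; placeholder value.
        ExistingSavepoint savepoint =
            Savepoint.load(env, "file:///tmp/savepoint-123", new HashMapStateBackend());

        // "my-uid" must match the uid of the KeyedProcessFunction operator.
        DataSet<String> keyedState = savepoint.readKeyedState("my-uid", new MyStateReader());
        keyedState.print();
    }
}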
Dear Flink users,
In our project, restoring the timers' state creates numerous issues, so I
would like to know whether it is possible to avoid saving/restoring timers
altogether.
If that isn't possible, how could I delete all registered timers in the
open function?
Best regards,
Alexander
Hi,
Unfortunately, at the moment creating a row with named fields is not
possible with the ROW constructor.
One solution could be to wrap it in a cast that spells out the named
fields, like: CAST((f0 + 12, 'Hello world') AS ROW<...>)
Or you could create a UDF and use @DataTypeHint to define the row
return type with named fields.
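A rough sketch of the UDF approach, with made-up function and field names:

import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.functions.ScalarFunction;
import org.apache.flink.types.Row;

public class NamedRowFunction extends ScalarFunction {

    // The @DataTypeHint gives the returned ROW named, typed fields.
    public @DataTypeHint("ROW<my_number INT, my_text STRING>") Row eval(Integer f0) {
        return Row.of(f0 + 12, "Hello world");
    }
}

After registering it with createTemporarySystemFunction, the function can be
used directly in SQL and the resulting row exposes my_number and my_text as
named fields.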
Thanks for the JIRA ticket.
This is for sure pretty critical.
The "workaround" is to not remove the field, but I am not sure if this is
acceptable :)
I could work on that, but someone needs to point out to me where to start.
Do I work on the PojoSerializer, to make this case not throw an
excep