Hi,
your hashCode and equals seem correct. Can you post a minimal stream-pipeline
reproducer that uses this class?
FG
On Tue, Feb 1, 2022 at 8:39 PM John Smith wrote:
> Hi, getting java.lang.IllegalArgumentException: Key group 39 is not in
> KeyGroupRange{startKeyGroup=96, endKeyGroup=103}. Unles
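For the archives: this exception is commonly caused by a key whose hashCode is not deterministic (e.g. it depends on object identity or on mutable fields), so the same logical key lands in different key groups on different task managers. A minimal, self-contained sketch of a stable key class (the class and field names are invented for illustration; shown package-private so the snippet fits in one file):

```java
import java.util.Objects;

// Hypothetical key class: hashCode depends only on immutable value fields,
// never on object identity, so the same logical key always maps to the
// same key group on every task manager.
final class OrderKey {
    private final String customerId;
    private final int region;

    OrderKey(String customerId, int region) {
        this.customerId = customerId;
        this.region = region;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof OrderKey)) return false;
        OrderKey that = (OrderKey) o;
        return region == that.region && customerId.equals(that.customerId);
    }

    @Override
    public int hashCode() {
        // Deterministic across JVMs: derived only from the value fields.
        return Objects.hash(customerId, region);
    }
}
```

Two independently constructed keys with the same values hash identically, which is what keyBy relies on when assigning key groups.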
Hi,
From what I see here
https://nightlies.apache.org/flink/flink-docs-release-1.14/api/java/org/apache/flink/connector/file/src/AbstractFileSource.AbstractFileSourceBuilder.html#setFileEnumerator-org.apache.flink.connector.file.src.enumerate.FileEnumerator.Provider-
the file enumerator can be set.
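A sketch of how that builder method could be wired up (not tested against a cluster; the input path is a placeholder, and NonSplittingRecursiveEnumerator is just one of the enumerators shipped with the file connector — any FileEnumerator.Provider works):

```java
// Sketch only: build a FileSource with an explicit file enumerator.
FileSource<String> source =
    FileSource.forRecordStreamFormat(
            new TextLineFormat(),            // record format
            new Path("/path/to/input"))      // placeholder path
        .setFileEnumerator(NonSplittingRecursiveEnumerator::new)
        .build();
```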
Hello,
I am currently working on a program that uses Flink to read Avro records
from Kafka.
I have the Avro schema of the records I want to read in a file, but I have
looked all over GitHub, the documentation, and Stack Overflow for examples of
how to use AvroRowDeserializationSchema to deserialize
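In case it helps others searching: AvroRowDeserializationSchema has a constructor that takes the Avro schema as a JSON string, so the schema file can be read and passed in directly. A hedged sketch (the schema path, topic name, and kafkaProperties are placeholders, not part of any real setup):

```java
// Sketch: load the Avro schema JSON from a file and deserialize Kafka
// records into Rows.
String avroSchemaJson = new String(
    Files.readAllBytes(Paths.get("/path/to/schema.avsc")),
    StandardCharsets.UTF_8);

AvroRowDeserializationSchema deserializer =
    new AvroRowDeserializationSchema(avroSchemaJson);

FlinkKafkaConsumer<Row> consumer =
    new FlinkKafkaConsumer<>("my-topic", deserializer, kafkaProperties);
DataStream<Row> rows = env.addSource(consumer);
```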
Hi Hussein,
Thanks for your question. I see that you've posted this multiple times in
the past couple of days; please leave some time for people to reply. The
user mailing list is set up for asynchronous replies, and sending the same
question multiple times in such a short time frame makes it seem pushy. People try
Thank you Fabian,
I have one followup question.
You wrote:
*isUtcTimestamp* denotes whether timestamps should be represented as SQL UTC
timestamps.
Question:
So, if *isUtcTimestamp* is set to false, how are timestamps represented?
Regards,
Krzysztof Chmielewski
Tue, Jan 25, 2022 at 11:56 AM Fabian
Hi Yoel,
Thank you for answering my question, really appreciated. However, it
brings more questions to my mind.
1) As far as I understand Flink's built-in serialization, if a
class is a POJO type with respect to the rules defined in the POJOs section
here [1], and if an object is an instance of a POJO type cla
Sorry for not finishing the last - 4th - question.
4) When Kryo fallback is enabled, does that mean the fallback is the last
resort if the object cannot be serialized with any other serializer like
`PojoSerializer`?
Can we still benefit from the performance of Flink's built-in serializer
without sacrificing the
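For reference, the POJO rules in [1] boil down to: a public, standalone class with a public no-argument constructor, whose fields are either public or reachable through conventionally named getters and setters. A minimal sketch of a class meeting those rules (the class and field names are invented; it is shown package-private here only so the snippet fits in one file — in a real job the class must be public):

```java
// Hypothetical class meeting Flink's POJO rules, so it would be handled
// by PojoSerializer instead of falling back to Kryo:
// - standalone class (not a non-static inner class)
// - public no-argument constructor
// - every field public, or with a public getter/setter pair
class SensorReading {
    public String sensorId;        // public field: fine
    private double temperature;    // private field with getter/setter: fine

    public SensorReading() {}      // required no-arg constructor

    public double getTemperature() { return temperature; }
    public void setTemperature(double t) { this.temperature = t; }
}
```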
Well, for those who might be interested in the semantics I mentioned, I
implemented a custom operator that seems to achieve what I want by mostly
ignoring the actual timestamps from the side stream's watermarks. However, it
kind of depends on the fact that my main stream comes from a previous wi
Hello,
I am having some trouble restoring a (POJO) state after deleting a field.
According to the documentation, this should not be a problem with POJOs:
*"Fields can be removed. Once removed, the previous value for the removed
field will be dropped in future checkpoints and savepoints."*
Here is a short sta
Hello,
Happened to me too, here’s the JIRA ticket:
https://issues.apache.org/jira/browse/FLINK-21752
Regards,
Alexis.
From: bastien dine
Sent: Wednesday, February 2, 2022 4:01 PM
To: user
Subject: Pojo State Migration - NPE with field deletion
Hello,
I have some trouble restoring a state (pojo)
Hi everyone,
I am trying to run a StateFun job as a jar in an existing cluster, but I run
into a long error caused by the following:
Caused by: java.lang.AbstractMethodError: Method
org/apache/flink/statefun/flink/core/functions/FunctionGroupDispatchFactory
.setMailboxExecutor(Lorg/ap
Hi,
I'm trying to create Row(..) using Flink SQL, but I can't assign names to
its fields.
For example:
Input table1 structure: (id INT, some_name STRING)
Query: select *, ROW(id, some_name) as row1 from table1
Output result structure: (id INT, some_name STRING, row1 ROW(EXPR$0 INT, EX
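A workaround that should assign proper names in place of the generated EXPR$0/EXPR$1 labels is to CAST the ROW to a typed ROW (using the columns from the example above):

```sql
SELECT
  *,
  CAST(ROW(id, some_name) AS ROW<id INT, some_name STRING>) AS row1
FROM table1
```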
Hi,
Totally missed that setFileEnumerator method. That definitely helps; I
checked it out, and it does what we were looking for.
Thanks FG!
On Wed, Feb 2, 2022 at 3:07 AM Francesco Guardiani
wrote:
> Hi,
> From what I see here
> https://nightlies.apache.org/flink/flink-docs-release-1.14/api/j
Hello Christopher,
It seems like a version mismatch. Which StateFun version are
you using, and what is the Flink version of the cluster that you are trying
to submit to?
StateFun 3.1.1 was built with Flink 1.13.5
StateFun 3.2.0 was built with Flink 1.14.3
The versions need to match, since
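In build terms, that means pinning the StateFun and Flink versions to one of the matching pairs above, e.g. in a Maven pom (a hedged fragment; adjust property names and versions to your setup):

```xml
<!-- StateFun 3.2.x is built against Flink 1.14.x; keep them in lockstep -->
<properties>
  <statefun.version>3.2.0</statefun.version>
  <flink.version>1.14.3</flink.version>
</properties>
```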
Thanks, Robert!
I tried the classloader.resolve.order: parent-first option but ran into
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder" errors
(because I use logback so I followed
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/advanced/logging/#configuring-log
The Flink 1.12 documentation (
https://nightlies.apache.org/flink/flink-docs-release-1.12/dev/connectors/kafka.html)
says to use FlinkKafkaConsumer when consuming from Kafka.
However, I noticed that the newer API uses KafkaSource, which uses
KafkaSourceBuilder and OffsetsInitializ
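For comparison, a sketch of the newer unified source API (Flink 1.14; the broker address, topic, and group id are placeholders):

```java
// Sketch: the KafkaSource builder replaces FlinkKafkaConsumer.
KafkaSource<String> source = KafkaSource.<String>builder()
    .setBootstrapServers("broker:9092")
    .setTopics("input-topic")
    .setGroupId("my-group")
    .setStartingOffsets(OffsetsInitializer.earliest())
    .setValueOnlyDeserializer(new SimpleStringSchema())
    .build();

env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source");
```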