Hi,
The source for Flink (or even Kafka) is a problem we find hard to solve.
This data seems to indicate that the source could be MQ. Is there a need to
pull from MQ to Hive and then write to Flink? What could the flow be?
Kafka Connect workers can issue JDBC queries and pull to Kafka. Is there
Hi, is there a way for the UDF of a source function, extended from
RichParallelSourceFunction, to access its Task instance, so as to call
Task.isBackPressured()?
I'm trying to give priorities to different input sources that need to be
managed from within the same source function and want to stop
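As far as I know the Task instance is not exposed to UDFs through the public API, but the priority handling described here can be sketched within a single source function. A minimal, hypothetical sketch, where two in-memory queues stand in for the real inputs:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;

// Hypothetical sketch: one source function multiplexing two inputs by priority.
public class PrioritizedSource extends RichParallelSourceFunction<String> {

    // Placeholders for the real inputs (e.g. clients polling two external systems).
    private final BlockingQueue<String> highPriority = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> lowPriority = new LinkedBlockingQueue<>();

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        while (running) {
            // Always drain the high-priority input first.
            String record = highPriority.poll();
            if (record == null) {
                // Fall back to the low-priority input, but do not block for long,
                // so the high-priority queue is checked again soon.
                record = lowPriority.poll(100, TimeUnit.MILLISECONDS);
            }
            if (record != null) {
                synchronized (ctx.getCheckpointLock()) {
                    ctx.collect(record);
                }
            }
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}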
Hi Robert & Yun,
Thanks a lot for your helpful analysis! I’ve confirmed that it’s the checkpoint
cleanup problem that caused the inaccurate checkpoint trigger time.
Best,
Paul Lam
> On Jan 30, 2022, at 19:45, Yun Tang wrote:
>
> Hi Paul,
>
> I think Robert's idea might be right.
>
> From the log you pas
Hi,
We are currently running Flink 1.11 and I am exploring the option of using
the universal Kafka connector with our Kafka cluster running on Kafka
version 1.1.1 but log message format version 0.10.2.0.
On the wiki page of the connector (
https://nightlies.apache.org/flink/flink-docs-release-1.11
Hi,
Could you clarify whether your question is about
- connecting Table and DataStream APIs? [1]
- matching the records using the Table API?
- or matching in the DataStream API? [3]
...
You should probably look at temporal joins to match records using the
Table API [2]; and connect/join/ or broad
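To illustrate the first of these options, a minimal, self-contained sketch of bridging a DataStream into the Table API and matching it with SQL; the stream contents, view names, and query below are placeholders rather than anything from this thread:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class StreamToTableExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Hypothetical input streams standing in for the real sources.
        DataStream<String> ids = env.fromElements("a", "b", "c");
        DataStream<String> allowed = env.fromElements("a", "c");

        // Bridge both DataStreams into the Table API and match them with SQL.
        tEnv.createTemporaryView("incoming", tEnv.fromDataStream(ids).as("id"));
        tEnv.createTemporaryView("reference", tEnv.fromDataStream(allowed).as("id"));

        Table matched = tEnv.sqlQuery(
            "SELECT i.id FROM incoming AS i JOIN reference AS r ON i.id = r.id");

        // Convert the result back to a DataStream if further processing is needed.
        DataStream<Row> out = tEnv.toDataStream(matched);
        out.print();

        env.execute("table-datastream-bridge");
    }
}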
Hi,
According to the documentation, event-time temporal join is triggered
by a watermark from the left and right sides. So the described
behavior seems correct.
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sql/queries/joins/#event-time-temporal-join
Regards,
Roman
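For reference, a hedged sketch of the setup this answer assumes: both inputs declare an event-time attribute and a watermark, and the join only produces output for a key once watermarks from both sides have passed the left row's time. All table names, columns, and connector options below are hypothetical:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class EventTimeTemporalJoinExample {
    public static void main(String[] args) {
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(
                StreamExecutionEnvironment.getExecutionEnvironment());

        // Left side: append-only table with an event-time attribute and a watermark.
        tableEnv.executeSql(
            "CREATE TABLE orders (" +
            "  order_id STRING," +
            "  currency STRING," +
            "  order_time TIMESTAMP(3)," +
            "  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND" +
            ") WITH ('connector' = 'datagen', 'rows-per-second' = '1')");

        // Right side: versioned table (primary key + watermark), e.g. via upsert-kafka.
        tableEnv.executeSql(
            "CREATE TABLE currency_rates (" +
            "  currency STRING," +
            "  rate DECIMAL(10, 2)," +
            "  update_time TIMESTAMP(3)," +
            "  WATERMARK FOR update_time AS update_time - INTERVAL '5' SECOND," +
            "  PRIMARY KEY (currency) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'upsert-kafka'," +
            "  'topic' = 'rates'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'key.format' = 'raw'," +
            "  'value.format' = 'json')");

        // A row is only emitted once watermarks from BOTH sides have passed order_time.
        tableEnv.executeSql(
            "SELECT o.order_id, o.order_time, r.rate " +
            "FROM orders AS o " +
            "LEFT JOIN currency_rates FOR SYSTEM_TIME AS OF o.order_time AS r " +
            "ON o.currency = r.currency").print();
    }
}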
Hi,
scan.partition.num (the number of partitions [1]) translates into
parallel queries to the database (each with a different from/to range). The
batch size is then derived from the lower and upper bounds and the number
of partitions.
scan.fetch-size hints the JDBC driver to adjust the fetch size (see [2]).
The fi
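A hedged illustration of how these options are typically declared on a JDBC source table; the database URL, table name, column, and bounds below are placeholders:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class JdbcScanOptionsExample {
    public static void main(String[] args) {
        TableEnvironment tableEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inBatchMode().build());

        // scan.partition.* splits the read into scan.partition.num parallel range
        // queries over the partition column; scan.fetch-size only hints the JDBC
        // driver's per-round-trip fetch size.
        tableEnv.executeSql(
            "CREATE TABLE big_table (" +
            "  id BIGINT," +
            "  payload STRING" +
            ") WITH (" +
            "  'connector' = 'jdbc'," +
            "  'url' = 'jdbc:mysql://localhost:3306/mydb'," +
            "  'table-name' = 'big_table'," +
            "  'scan.partition.column' = 'id'," +
            "  'scan.partition.num' = '1000'," +
            "  'scan.partition.lower-bound' = '1'," +
            "  'scan.partition.upper-bound' = '1000000'," +
            "  'scan.fetch-size' = '100')");

        tableEnv.executeSql("SELECT COUNT(*) FROM big_table").print();
    }
}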
Hi,
We are using the Flink JDBC connector to read a whole database table line by
line. A few things I don't quite understand.
We configured BATCH_SIZE=100 and PARTITION_NUM=1000, and the table is pretty big.
What is Flink's internal behavior when reading data from the table?
Does Flink read BATCH_SIZE rows each time? Or it
Hi,
setDeserializer() expects KafkaRecordDeserializationSchema;
the JSONKeyValueDeserializationSchema you provided is not compatible with
it.
You can convert it using [1]
[1]
https://nightlies.apache.org/flink/flink-docs-master/api/java/org/apache/flink/connector/kafka/source/reader/deserializer/Kafk
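In case it helps, a sketch of that conversion, assuming it refers to the static factory on KafkaRecordDeserializationSchema; the broker address and topic are placeholders:

import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;
import org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema;

public class JsonKafkaSourceExample {
    public static KafkaSource<ObjectNode> build() {
        // Wrap the KafkaDeserializationSchema so it fits setDeserializer().
        return KafkaSource.<ObjectNode>builder()
                .setBootstrapServers("localhost:9092")   // placeholder broker
                .setTopics("events")                     // placeholder topic
                .setGroupId("json-example")
                .setDeserializer(
                        KafkaRecordDeserializationSchema.of(
                                new JSONKeyValueDeserializationSchema(false)))
                .build();
    }
}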
Hi Kevin,
I'm afraid AvroFileFormatFactory is not intended for the DataStream API.
Have you tried adapting AvroParquetRecordFormat via StreamFormatAdapter?
Regards,
Roman
On Fri, Jan 7, 2022 at 7:23 PM Kevin Lam wrote:
>
> Hi all,
>
> We're looking into using the new FileSource API, we see that
Hi.
It seems the event-time temporal join has a bug in version 1.14.3:
the left and right sides of the join are inverted.
In this query:
SELECT * FROM orders AS o
LEFT OUTER JOIN products FOR SYSTEM_TIME AS OF o.updatedAt AS p
ON o.pId = p.id
the join is triggered when a new event is published to the p
I am trying to match streaming data against my database table
using the Flink Table API. I am able to match the data that exists in my
streaming data against the specific column of the database table. Now I need
to activate some rules that users have entered in the system (another
database table) and I wa
I am facing the same issue, do we know when 1.14.4 will be released?
Thanks.
On Fri, Jan 21, 2022 at 3:28 AM Chesnay Schepler wrote:
> While FLINK-24550 was indeed fixed unfortunately a similar bug was also
> introduced (https://issues.apache.org/jira/browse/FLINK-25732).
> On 20/01/2022 21:18,
Hi all,
When I build this code:
KafkaSource source = KafkaSource.builder()
    .setProperties(kafkaProps)
    .setProperty("ssl.truststore.type", trustStoreType)
    .setProperty("ssl.truststore.password", trustStorePassword)
    .setProperty("ssl.truststore.location", trustStoreLoca
I tried ExecutionConfig#disableGenericTypes and I get this error when I try to
execute my Flink job with StreamExecutionEnvironment#execute (attached is the
full stacktrace):
java.lang.UnsupportedOperationException: Generic types have been disabled in
the ExecutionConfig and type java.util.List
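When generic types are disabled, anything that would fall back to Kryo, such as a java.util.List produced by an operator, is rejected. One common workaround is to give Flink explicit type information; a minimal sketch (the pipeline itself is hypothetical):

import java.util.Arrays;
import java.util.List;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExplicitListTypeExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.getConfig().disableGenericTypes();

        DataStream<String> lines = env.fromElements("a b", "c d");

        // Without .returns(...), the List<String> output would be treated as a
        // generic type and rejected once generic types are disabled.
        DataStream<List<String>> tokens = lines
                .map(line -> Arrays.asList(line.split(" ")))
                .returns(Types.LIST(Types.STRING));

        tokens.print();
        env.execute("explicit-list-type");
    }
}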
Event time would be fine for window processing, or if all keyed streams used
at least the same timeline; however, in our case that isn't always true.
Imagine that half of the streams have real-time timestamps and the other half is
historic data (in our case from pcap files).
The real-time timestamps are
Sorry to revive this thread, I'm just circling back to this now. Is it
possible to use https://issues.apache.org/jira/browse/FLINK-24565 with the
DataStream API? I am not sure how to make use of AvroFileFormatFactory in
the DataStream API context, and couldn't find any examples.
On Mon, Jan 10,
Hi!
I am having continued issues running a StateFun job in an existing Flink
cluster. My Flink cluster is using Flink version 1.14.3 and the StateFun job is
using version 3.2.0 of the Java SDK and StateFun distribution. I get the
following error:
Caused by: java.lang.NoSuchMethodError:
org.
Hi,
You can use other (de)serialization methods besides ByteBuffer as
well. Endianness is set explicitly to have the same byte order during
serialization and deserialization.
Regards,
Roman
On Fri, Feb 4, 2022 at 2:43 PM Dawid Wysakowicz wrote:
>
> Hi,
>
> You can use DeserializationSchema with
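A small sketch of the pattern described above, using ByteBuffer with an explicitly fixed byte order; the record format (one little-endian long per message) is just an assumption for illustration:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

public class LongDeserializationSchema implements DeserializationSchema<Long> {

    @Override
    public Long deserialize(byte[] message) throws IOException {
        // Fix the byte order explicitly so it matches what the producer used.
        return ByteBuffer.wrap(message).order(ByteOrder.LITTLE_ENDIAN).getLong();
    }

    @Override
    public boolean isEndOfStream(Long nextElement) {
        return false;
    }

    @Override
    public TypeInformation<Long> getProducedType() {
        return Types.LONG;
    }
}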
Hi everyone,
The vote on deprecating per-job mode in Flink 1.15 has been
unanimously approved in [1].
I've created a ticket for the deprecation [2] and one for dropping it [3], and
linked the current blockers for dropping it to the latter.
Binding +1
Thomas Weise
Xintong Song
Yang Wang
Jing Zhang
Till Rohrmann