Hi community,
The Pravega connector provides both batch and streaming Table API
implementations. We use the descriptor API to build the table source. When we
planned to upgrade to Flink 1.10, we found the unit tests were not passing
with our existing Batch Table API. There is a type conversion…
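For context, the Flink 1.10 descriptor API pattern looks roughly like the
sketch below. This is only a minimal illustration: it uses the built-in Kafka
and Json descriptors as stand-ins (the Pravega descriptor is not shown in
this thread), and the topic and field names are made up.

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.descriptors.Json;
import org.apache.flink.table.descriptors.Kafka;
import org.apache.flink.table.descriptors.Schema;

TableEnvironment tableEnv = TableEnvironment.create(
    EnvironmentSettings.newInstance().inStreamingMode().build());

// Describe connector, format, and schema, then register the table.
tableEnv.connect(new Kafka().version("universal").topic("events"))
    .withFormat(new Json())
    .withSchema(new Schema()
        .field("id", DataTypes.BIGINT())
        .field("name", DataTypes.STRING()))
    .createTemporaryTable("myTable");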
Thanks Piotr and Yun for getting involved.
Hi Piotr and Yun, for implementation,
FLINK-14254 [1] introduces the batch table sink world; it deals with
partitions, the metastore, and so on. It just reuses the DataSet/DataStream
FileInputFormat and FileOutputFormat. The filesystem connector cannot do
without FileInputFormat…
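To illustrate the reuse being described: in the DataSet API, readTextFile is
backed by TextInputFormat (a FileInputFormat subclass) and writeAsText by
TextOutputFormat (a FileOutputFormat subclass). A minimal sketch, with
placeholder paths:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
// Reads via TextInputFormat, writes via TextOutputFormat.
DataSet<String> lines = env.readTextFile("hdfs:///path/to/input");
lines.writeAsText("hdfs:///path/to/output");
env.execute("filesystem-sketch");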
Hi Sidney,
The WARN logging you saw was from the AbstractPartitionDiscoverer, which is
created by the FlinkKafkaConsumer itself. It has an internal consumer that
shares the client.id of the actual consumer fetching data. This is a bug
that we should fix.
As Rong said, this won't affect the normal operation…
For K8s deployments (including standalone session/per-job and the native
integration), Flink does not support a port range for the REST options. You
need to set `rest.bind-port` to exactly the same value as `rest.port`.
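For example, in flink-conf.yaml (8081 is just an illustrative value):

rest.port: 8081
rest.bind-port: 8081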
Hi LakeShen
I am trying to understand your problem and do not think it is about the
port configuration…
Dear community,
happy to share this week's community digest with an outlook on Apache Flink
1.11 & 1.10.1, an update on the recent development around Apache Flink
Stateful Functions, a couple of SQL FLIPs (planner hints, HBase catalog)
and a bit more.
Flink Development
==
* [release…
We have also seen this issue before when running Flink apps in a shared
cluster environment.
Basically, Kafka is trying to register a JMX MBean [1] for application
monitoring.
This is only a WARN suggesting that you are registering more than one MBean
with the same client id "consumer-1"; it should not affect…
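As a sketch of a per-job workaround (not a fix for the discoverer bug above;
client.id is a standard Kafka setting, while the server, group, topic, and id
values here are made up), you can give each job its own client.id so the
MBean names no longer collide across jobs:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "kafka:9092");
props.setProperty("group.id", "my-group");
// A distinct client.id per job avoids cross-job JMX name collisions; within
// one job the partition discoverer's internal consumer still shares this id.
props.setProperty("client.id", "my-app-consumer");

FlinkKafkaConsumer<String> consumer =
    new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);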
Hey,
I've been using Flink for a while now without any problems when running apps
with a FlinkKafkaConsumer.
All my apps have the same overall logic (consume from Kafka -> transform event
-> write to file), and the only way they differ from each other is the topic
they read (the remaining Kafka configuration…
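That shape of job looks roughly like the sketch below; the topic, output
path, and the upper-casing map are placeholders, and kafkaProps is assumed to
hold the shared Kafka configuration:

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// consume from Kafka -> transform event -> write to file
DataStream<String> events = env.addSource(
    new FlinkKafkaConsumer<>("topic-a", new SimpleStringSchema(), kafkaProps));
events.map(value -> value.toUpperCase())          // stand-in transformation
      .addSink(StreamingFileSink
          .forRowFormat(new Path("/data/out"), new SimpleStringEncoder<String>())
          .build());
env.execute("kafka-to-file");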
Anyone?
On Wed, Mar 11, 2020 at 11:23 PM Yitzchak Lieberman <
yitzch...@sentinelone.com> wrote:
> Hi.
>
> Did anyone encounter a problem with sending metrics with the Datadog HTTP
> reporter?
> My setup is Flink version 1.8.2 deployed on K8s with 1 job manager and 10
> task managers.
> Every version…
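For reference, the Datadog reporter is wired up in flink-conf.yaml roughly
like this (the API key is a placeholder):

metrics.reporter.dghttp.class: org.apache.flink.metrics.datadog.DatadogHttpReporter
metrics.reporter.dghttp.apikey: <your-datadog-api-key>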
Hi All,
I set up a transformation like this, and my events in the stream have
sequential timestamps like 1, 2, 3, and I set the watermark to event time.
myStream
    .keyBy(0)
    .timeWindow(Time.of(1000, TimeUnit.MILLISECONDS))
    .aggregate(new myAggregateFunction())
    .print();
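For an event-time window like this to fire, the stream also needs timestamps
and watermarks assigned upstream. A minimal sketch, assuming hypothetically
that myStream is a DataStream<Tuple2<String, Long>> carrying the timestamp in
field f1 (which would also match the keyBy(0) above):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;

DataStream<Tuple2<String, Long>> withTimestamps = myStream
    .assignTimestampsAndWatermarks(
        new AscendingTimestampExtractor<Tuple2<String, Long>>() {
            @Override
            public long extractAscendingTimestamp(Tuple2<String, Long> event) {
                return event.f1;   // the sequential 1, 2, 3, ... timestamps
            }
        });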
Thanks Arvid and Kurt. That's a very helpful discussion.
For now we will continue with Lambda, but I'll definitely do an A-A test
between Lambda and Flink for this case.
Regards,
Jiawei
On Wed, Mar 11, 2020 at 5:40 PM Kurt Young wrote:
> > The second reason is that this query needs to scan the wh…