Piotr Nowojski created FLINK-13491:
--
Summary: AsyncWaitOperator doesn't handle endOfInput call properly
Key: FLINK-13491
URL: https://issues.apache.org/jira/browse/FLINK-13491
Project: Flink
Simon Su created FLINK-13492:
Summary: BoundedOutOfOrderTimestamps cause Watermark's timestamp
leak
Key: FLINK-13492
URL: https://issues.apache.org/jira/browse/FLINK-13492
Project: Flink
Issue T
zhijiang created FLINK-13493:
Summary: BoundedBlockingSubpartition only notifies
onConsumedSubpartition when all the readers are empty
Key: FLINK-13493
URL: https://issues.apache.org/jira/browse/FLINK-13493
Hi all, when I use the blink flink-sql-parser module, the Maven dependency
looks like this:

<dependency>
  <groupId>com.alibaba.blink</groupId>
  <artifactId>flink-sql-parser</artifactId>
  <version>1.5.1</version>
</dependency>
I also import the Flink 1.9 blink-table-planner module, and I use
FlinkPlannerImpl to parse the SQL to get the List. But
when I run the program, it throws an exception like th
There is nothing to report; we already know what the problem is but it
cannot be fixed.
On 30/07/2019 08:46, Yun Tang wrote:
I ran into this problem again at https://api.travis-ci.com/v3/job/220732163/log.txt
. Is there any place we could ask for help to contact Travis, or any clues we
could use to
Zhenghua Gao created FLINK-13494:
Summary: Blink planner changes source parallelism which causes
stream SQL e2e test fails
Key: FLINK-13494
URL: https://issues.apache.org/jira/browse/FLINK-13494
Proje
Jingsong Lee created FLINK-13495:
Summary: blink-planner should support decimal precision to table
source
Key: FLINK-13495
URL: https://issues.apache.org/jira/browse/FLINK-13495
Project: Flink
Yun Tang created FLINK-13496:
Summary: Correct the documentation of Gauge metric initialization
Key: FLINK-13496
URL: https://issues.apache.org/jira/browse/FLINK-13496
Project: Flink
Issue Type:
Hi all,
Progress updates:
1. The bui...@flink.apache.org list can now be subscribed to (thanks
@Robert); you can send an email to builds-subscr...@flink.apache.org to
subscribe.
2. We have a pull request [1] that sends notifications only for
apache/flink builds, and it works well.
3. However, all the notification
Hi Lakeshen,
Thanks for trying out blink planner.
First question, are you using blink-1.5.1 or flink-1.9-table-planner-blink
?
We suggest using the latter, because we no longer maintain blink-1.5.1; you
can try Flink 1.9 instead.
Best,
Jark
On Tue, 30 Jul 2019 at 17:02, LakeShen wrote:
> Hi
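For anyone following along, a dependency along these lines should pull in the
Flink 1.9 blink planner (the Scala suffix and exact version here are
assumptions; adjust them to your build):

```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-planner-blink_2.11</artifactId>
  <version>1.9.0</version>
</dependency>
```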
Hello All,
I am new to Apache Flink. In my company we are considering using Flink to
transform our data. The source of the data is Apache Kafka topics. We want
to transform each message we receive on a Kafka topic and store it in
RocksDB. The messages can arrive out of order.
Hi,
After a one-week survey in the user list [1], nobody except Flavio and Jeff
participated in the thread.
Flavio shared his experience with a revised Program-like interface.
This could be regarded as downstream integration, and in the client API
enhancements document we propose a rich interface for this integra
Hi Shilpa,
The easiest way to do this is to make the RocksDB state queryable.
Then use the Flink queryable state client to access the state you have
created.
Regards
Taher Koitawala
On Tue, Jul 30, 2019, 4:58 PM Shilpa Deshpande wrote:
> Hello All,
>
> I am new to Apache Flink. In my
I will open a PR later today, changing the module to use reflection rather
than a hard MapR dependency.
On Tue, Jul 30, 2019 at 6:40 AM Rong Rong wrote:
> We've also experienced some issues with our internal JFrog artifactory. I
> am suspecting some sort of mirroring problem but somehow it only
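As a rough illustration of the reflection approach (this is not the actual
Flink change; the MapR class name below is a made-up stand-in for a class
that may be absent at runtime):

```java
// Sketch: probe for an optional dependency via reflection instead of a
// compile-time import, so the build no longer needs the MapR repository.
public class ReflectiveLookup {

    // Returns true if the named class is present on the classpath,
    // without initializing it.
    static boolean isAvailable(String className) {
        try {
            Class.forName(className, false, ReflectiveLookup.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // java.util.ArrayList stands in for a class that is present;
        // the second name stands in for a missing MapR class.
        System.out.println(isAvailable("java.util.ArrayList"));   // true
        System.out.println(isAvailable("com.mapr.DoesNotExist")); // false
    }
}
```

Code that needs the optional class would then construct it reflectively only
when the probe succeeds, falling back to default behavior otherwise.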
Hi!
Are you looking for online access or offline access?
For online access, you can do key lookups via queryable state.
For offline access, you can read and write RocksDB state using the new
State Processor API in Flink 1.9:
https://ci.apache.org/projects/flink/flink-docs-master/dev/libs/state_pr
Till Rohrmann created FLINK-13497:
-
Summary: Checkpoints can complete after CheckpointFailureManager
fails job
Key: FLINK-13497
URL: https://issues.apache.org/jira/browse/FLINK-13497
Project: Flink
Nico Kruber created FLINK-13498:
---
Summary: Reduce Kafka producer startup time by aborting
transactions in parallel
Key: FLINK-13498
URL: https://issues.apache.org/jira/browse/FLINK-13498
Project: Flink
Stephan Ewen created FLINK-13499:
Summary: Remove dependency on MapR artifact repository
Key: FLINK-13499
URL: https://issues.apache.org/jira/browse/FLINK-13499
Project: Flink
Issue Type: Imp
David Judd created FLINK-13500:
--
Summary: RestClusterClient requires S3 access when HA is configured
Key: FLINK-13500
URL: https://issues.apache.org/jira/browse/FLINK-13500
Project: Flink
Issue
Xuefu Zhang created FLINK-13501:
---
Summary: Fixes a few issues in documentation for Hive integration
Key: FLINK-13501
URL: https://issues.apache.org/jira/browse/FLINK-13501
Project: Flink
Issue
godfrey he created FLINK-13502:
--
Summary: CatalogTableStatisticsConverter should be in
planner.utils package
Key: FLINK-13502
URL: https://issues.apache.org/jira/browse/FLINK-13502
Project: Flink
Hi Thomas,
IIUC this "launcher" should run on the client endpoint instead
of the dispatcher endpoint. "jar run" will extract the job graph
and submit it to the dispatcher, which doesn't match the
semantics you want.
Could you run it with CliFrontend? Or propose that "jar run"
support running direct
Hi Jiangjie,
Thanks for your response. I was able to figure out the issue.
We have multiple endpoints from which we receive data. On one of them, NTP
was not set up, or rather was not syncing properly, so those VMs were
sending packets timestamped 2 minutes ahead of time. So
Jing Zhang created FLINK-13503:
--
Summary: Correct the behavior of `JDBCLookupFunction`
Key: FLINK-13503
URL: https://issues.apache.org/jira/browse/FLINK-13503
Project: Flink
Issue Type: Task
Jark Wu created FLINK-13504:
---
Summary: NoSuchFieldError when executing DDL via tEnv.sqlUpdate in
application project
Key: FLINK-13504
URL: https://issues.apache.org/jira/browse/FLINK-13504
Project: Flink
Is the watermarking configured per-partition in Kafka, or per source?
If it is configured per partition, then a late (trailing) or early
(leading) partition would not affect the watermark as a whole.
There would not be any dropping of late data, simply a delay in the results
until the latest parti
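The "min across partitions" behavior described above can be sketched as plain
logic (a simplification of what Flink's per-partition watermark assignment
does; the timestamps are made up):

```java
import java.util.Arrays;

public class WatermarkMin {

    // With per-partition watermarking, the source emits the minimum of the
    // per-partition watermarks, so a trailing partition holds back the
    // overall watermark (delaying results) instead of causing late data.
    static long overallWatermark(long[] partitionWatermarks) {
        return Arrays.stream(partitionWatermarks).min().orElse(Long.MIN_VALUE);
    }

    public static void main(String[] args) {
        // Partition 2 trails the others by two minutes (timestamps in ms).
        long[] wms = {1_000_000L, 1_010_000L, 880_000L};
        System.out.println(overallWatermark(wms)); // 880000
    }
}
```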