Hi community,
I have been looking at the Flink RocksDBStateBackend state cleanup. Currently the code looks like this:
StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.seconds(1))
    .cleanupInRocksdbCompactFilter(1000)
    .build();
> The default background cleanup for RocksDB backend queries the current
>
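For reference, a fuller sketch of how such a config is typically attached to keyed state (the state name "last-access" is made up; API as of the Flink 1.10 era):

```java
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

// e.g. inside open() of a RichFlatMapFunction on a keyed stream
StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.seconds(1))
    // run the RocksDB compaction filter cleanup; re-query the current
    // timestamp from Flink after every 1000 processed state entries
    .cleanupInRocksdbCompactFilter(1000)
    .build();

ValueStateDescriptor<String> descriptor =
    new ValueStateDescriptor<>("last-access", String.class);
descriptor.enableTimeToLive(ttlConfig);
```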
Piotr Nowojski created FLINK-16629:
Summary: Streaming bucketing end-to-end test output hash mismatch
Key: FLINK-16629
URL: https://issues.apache.org/jira/browse/FLINK-16629
Project: Flink
jackray wang created FLINK-16630:
Summary: '@version' STRING is not supported
Key: FLINK-16630
URL: https://issues.apache.org/jira/browse/FLINK-16630
Project: Flink
Issue Type: Bug
C
Piotr Nowojski created FLINK-16631:
Summary: Flink cache on Travis does not exist
Key: FLINK-16631
URL: https://issues.apache.org/jira/browse/FLINK-16631
Project: Flink
Issue Type: Bug
Hi Tison & Till and all,
I have uploaded the client, taskmanager and jobmanager log to Gist (
https://gist.github.com/kylemeow/500b6567368316ec6f5b8f99b469a49f), and I
can reproduce this bug every time when trying to cancel Flink 1.10 jobs on
YARN.
Besides, in earlier Flink versions like 1.9, the
Hi Lake,
Flink leverages RocksDB's background compaction mechanism to filter out-of-TTL
entries (by comparing against the current timestamp provided by RocksDB's
time_provider) so that they do not stay in newly compacted data.
This iterates over data entries with FlinkCompactionFilter::FilterV2 [1],
Hi Lake,
When the Flink doc mentions a state entry in RocksDB, we mean one key/value
pair stored by user code over any keyed state API
(the keyed context in keyed operators, obtained e.g. from a keyBy()
transformation).
In the case of a map or list, the doc means a map key/value pair and a list element.
- value/aggreg
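To make that concrete, a small sketch (state names made up) of what counts as one RocksDB entry per state type:

```java
// All obtained from a keyed context, e.g. inside a KeyedProcessFunction
// after a keyBy() transformation:
ValueState<Long> counter;        // one RocksDB entry per key
MapState<String, Long> ratings;  // one entry per (key, map key) pair;
                                 // each map entry expires independently
ListState<String> events;        // one entry per list element
```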
+1 (non-binding)
BTW it's in the same thread in my gmail too.
Kurt Young wrote on Tue, Mar 17, 2020 at 11:47 AM:
> Looks like I hit the gmail's bug again...
>
> Best,
> Kurt
>
>
> On Tue, Mar 17, 2020 at 11:11 AM Wei Zhong wrote:
>
> > Hi Kurt,
> >
> > This vote thread is independent from my side[1]. If th
Jingsong Lee created FLINK-16632:
Summary: Cast string to timestamp fail
Key: FLINK-16632
URL: https://issues.apache.org/jira/browse/FLINK-16632
Project: Flink
Issue Type: Bug
Compo
Robert Metzger created FLINK-16633:
Summary: CI builds without S3 credentials fail
Key: FLINK-16633
URL: https://issues.apache.org/jira/browse/FLINK-16633
Project: Flink
Issue Type: Improve
Hi Andrey,
Thanks for your explanation.
> About the logging
What I mean is that we cannot forward the stdout/stderr both to local files and
to the docker stdout
at the same time using log4j. For the jobmanager.log/taskmanager.log it works
quite well, since we only need to add a console appender in
the log4j
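For illustration, a minimal log4j.properties sketch of the dual-output setup being described (appender names and the ${log.file} property are assumptions modeled on Flink's default logging config):

```properties
log4j.rootLogger=INFO, file, console

# forward to the container's stdout
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %c %x - %m%n

# and to the local jobmanager.log / taskmanager.log
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=${log.file}
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %c %x - %m%n
```

The limitation the message refers to is that this only covers the loggers themselves; output written directly to stdout/stderr by user code bypasses log4j entirely.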
Thanks Robert for all this,
I think that we should also post a thread in the user ML so that users
can also comment on the topic.
What do you think?
Kostas
On Mon, Mar 16, 2020 at 12:27 PM Robert Metzger wrote:
>
> Thank you all for your feedback.
>
> I will try to fix the test then (or disabl
@Tison could you create an issue to track the problem. Please also link
the uploaded log file for further debugging.
I think the reason why it worked in Flink 1.9 could have been that we had an
async callback in the longer chain which broke the flow of execution and
allowed the response to be sent. T
Jiangjie Qin created FLINK-16634:
Summary: The PartitionDiscoverer in FlinkKafkaConsumer should not
use the user provided client.id.
Key: FLINK-16634
URL: https://issues.apache.org/jira/browse/FLINK-16634
+1 (binding)
Best,
Hequn
> On Mar 17, 2020, at 5:03 PM, Benchao Li wrote:
>
> +1 (non-binding)
>
> BTW it's in the same thread in my gmail too.
>
>
>
> Kurt Young wrote on Tue, Mar 17, 2020 at 11:47 AM:
>
>> Looks like I hit the gmail's bug again...
>>
>> Best,
>> Kurt
>>
>>
>> On Tue, Mar 17, 2020 a
Till Rohrmann created FLINK-16635:
Summary: Incompatible okio dependency in flink-metrics-influxdb
module
Key: FLINK-16635
URL: https://issues.apache.org/jira/browse/FLINK-16635
Project: Flink
+1 (binding)
On Tue, Mar 17, 2020 at 6:56 PM Hequn Cheng wrote:
> +1 (binding)
>
> Best,
> Hequn
>
> > On Mar 17, 2020, at 5:03 PM, Benchao Li wrote:
> >
> > +1 (non-binding)
> >
> > BTW it's in the same thread in my gmail too.
> >
> >
> >
> > Kurt Young wrote on Tue, Mar 17, 2020 at 11:47 AM:
> >
> >> Looks
+1 (binding)
On Tue, Mar 17, 2020 at 10:35 AM jincheng sun
wrote:
> +1
>
> Best,
> Jincheng
>
>
>
> Hequn Cheng wrote on Mon, Mar 16, 2020 at 10:01 AM:
>
> > Hi everyone,
> >
> > I'd like to start the vote of FLIP-112[1] which is discussed and reached
> > consensus in the discussion thread[2].
> > The vote wi
I would really like to see us converging the stack and the functionality
here.
Meaning to try and use the same sinks in the Table API as for the
DataStream API, and using the same sink for batch and streaming.
The StreamingFileSink has a lot of things that can help with that. If
possible, it would
Jark Wu created FLINK-16636:
Summary: TableEnvironmentITCase failed on travis
Key: FLINK-16636
URL: https://issues.apache.org/jira/browse/FLINK-16636
Project: Flink
Issue Type: Bug
Compo
Hi guys,
I want to contribute to Apache Flink.
Would you please give me permission as a
contributor?
My JIRA username is luck monkey
Hi Jingsong ,
I am looking forward to this feature, because some streaming applications
need to transfer their messages to HDFS for offline analysis.
Best wishes,
LakeShen
Stephan Ewen wrote on Tue, Mar 17, 2020 at 7:42 PM:
> I would really like to see us converging the stack and the functionality
>
Thanks for bringing up this discussion Flavio. And thanks Bowen for the
ping.
For me, I'm not quite sure whether adding an HBase catalog fits into the
existing Catalog interface. It seems to be coupled with the SQL standard
instead of a more general database catalog [1], which is also reflected in the
FL
Hey guys,
I have observed some weird behavior when using the Temporal Table Join and the
way it pushes the Watermark forward. Generally, I think the question is *when
is the Watermark pushed forward by the Temporal Table Join?*
The issue I have noticed is that the Watermark seems to be pushed forward even
Hi Danny,
thanks for updating the FLIP. I think your current design is sufficient
to separate hints from result-related properties.
One remark to the naming itself: I would vote for calling the hints
around table scan `OPTIONS('k'='v')`. We used the term "properties" in
the past but since we
I'm a bit late to the party, but also +1 from my side. Pulling the
dependency graph straight is a very good idea and will improve the
maintainability in the long run.
Cheers,
Till
On Tue, Mar 10, 2020 at 5:21 AM tison wrote:
> Thanks for your attention!
>
> Best,
> tison.
>
>
> Aljoscha Krettek wrote on
+1 for removing them.
On Wed, Mar 11, 2020 at 10:06 AM Chesnay Schepler
wrote:
> +1 on removing them.
>
> They are so limited in terms of functionality that I doubt anyone would
> be significantly impaired by us removing them.
>
> On 11/03/2020 02:13, Xintong Song wrote:
> > Thanks for the surve
Hi!
Great to see that you're interested in contributing to Apache Flink.
Since assigning you the contributor status means that you'll be able to be
assigned to JIRA tickets by committers, could you first let us know if
there's a specific ticket that you're looking towards picking up?
Cheers,
Gor
Hi Till,
Sure. I'll take a look and start a discuss thread soon.
Thanks,
Sivaprasanna
On Mon, Mar 16, 2020 at 4:01 PM Till Rohrmann wrote:
> Hi Sivaprasanna,
>
> do you want to collect the set of Hadoop utility classes which could be
> moved to a flink-hadoop-utils module and start a discuss t
Zili Chen created FLINK-16637:
Summary: Flink YARN app terminated before the client receives the
result
Key: FLINK-16637
URL: https://issues.apache.org/jira/browse/FLINK-16637
Project: Flink
Is
JIRA created as https://jira.apache.org/jira/browse/FLINK-16637
Best,
tison.
Till Rohrmann wrote on Tue, Mar 17, 2020 at 5:57 PM:
> @Tison could you create an issue to track the problem. Please also link
> the uploaded log file for further debugging.
>
> I think the reason why it worked in Flink 1.9 could h
Thanks for creating this FLIP Andrey.
I agree with Xintong that we should rename jobmanager.memory.direct.size
into jobmanager.memory.off-heap.size which accounts for native and direct
memory usage. I think it should be good enough and is easier to understand
for the user.
Concerning the default
+1 for a soonish bug fix release. Thanks for volunteering as our release
manager Yu.
I think we can soon merge the increase of metaspace size and improving the
error message. The assumption is that we currently don't have too many
small Flink 1.10 deployments with a process size <= 1GB. Of course,
Thanks for the updates Till!
For FLINK-16018, maybe we could create two sub-tasks for the easy and the
complete fix separately, and only include the easy one in 1.10.1? Or please
feel free to postpone the whole task to 1.10.2 if an "all or nothing" policy
is preferred (smile). Thanks.
Best Regards,
Yu
Bashar Abdul Jawad created FLINK-16638:
Summary: Flink checkStateMappingCompleteness doesn't include
UserDefinedOperatorIDs
Key: FLINK-16638
URL: https://issues.apache.org/jira/browse/FLINK-16638
Hi!
I am a beginner with Flink, so I want to start with a simple
ticket. As my learning deepens, I am interested in the Flink runtime, table, SQL,
and so on.
Best regards
---Original---
From: "Tzu-Li (Gordon) Tai"
LakeShen created FLINK-16639:
Summary: Flink SQL Kafka source connector, add the no json format
filter params when format.type is json
Key: FLINK-16639
URL: https://issues.apache.org/jira/browse/FLINK-16639
Lu Niu created FLINK-16640:
Summary: Expose listStatus latency in flink filesystem
Key: FLINK-16640
URL: https://issues.apache.org/jira/browse/FLINK-16640
Project: Flink
Issue Type: Improvement
Thanks Timo ~
For the naming itself, I also think PROPERTIES is not that concise, so +1
for OPTIONS (I had thought about that, but there is a lot of code in current
Flink that calls it properties, e.g. the DescriptorProperties and
#getSupportedProperties), so let's use OPTIONS if this is our new prefer
Have one question about adding the `supportedHintOptions` method to
`TableFactory`. It seems
`TableFactory` is a base factory interface for all *table module* related
instances, such as
catalogs, modules, formats and so on. It's not created only for *tables*. Is it
possible to move it
to `TableSourceFactory`
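The proposed move could be sketched roughly like this (a hedged sketch only; the real Flink factory interfaces have more methods, and `supportedHintOptions` here is modeled on the FLIP discussion, not on a shipped API):

```java
import java.util.Collections;
import java.util.Set;

// The base factory stays hint-agnostic.
interface TableFactory {
    // ... existing discovery methods ...
}

// Hint support lives only on the table source/sink factories,
// with an empty default so existing factories are unaffected.
interface TableSourceFactory extends TableFactory {
    default Set<String> supportedHintOptions() {
        return Collections.emptySet();
    }
}

interface TableSinkFactory extends TableFactory {
    default Set<String> supportedHintOptions() {
        return Collections.emptySet();
    }
}
```

A default method keeps the change source-compatible: factories that do not care about hints simply report no supported hint options.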
Thanks for your feedback!
Since FLINK-15090 got resolved, the next step is that I'd like to decouple
flink-streaming from flink-java. They should not have any dependency
conceptually, but it happens that we have some common formats in the flink-java module.
Best,
tison.
Till Rohrmann wrote on Tue, Mar 17, 2020 at 10:33 PM:
Zhijiang created FLINK-16641:
Summary: Announce sender's backlog to solve the deadlock issue
without exclusive buffers
Key: FLINK-16641
URL: https://issues.apache.org/jira/browse/FLINK-16641
Project: Flin
Kurt Young created FLINK-16642:
Summary: CSV TableSource / TableSink shouldn't be in
flink-table-api-java-bridge package
Key: FLINK-16642
URL: https://issues.apache.org/jira/browse/FLINK-16642
Project:
-->
On Mon, Mar 16, 2020 at 1:58 AM Andrey Zagrebin
wrote:
> Thanks for the further feedback Thomas and Yangze.
>
> > A generic, dynamic configuration mechanism based on environment variables
> is essential and it is already supported via envsubst and an environment
> variable that can supply a
Hi Stephan, Thanks very much for your detailed reply.
*## StreamingFileSink does not support a writer with a path*
The FLIP is "Filesystem connector in Table"; it's about building up Flink
Table's capabilities. But I think Hive is important; I see that most users
use Flink and Spark to write data from Kaf
Yes, I think we should move the `supportedHintOptions` from TableFactory to
TableSourceFactory, and we also need to add the interface to TableSinkFactory
too, because the sink target table may also have hints attached.
Best,
Danny Chan
On Mar 18, 2020 at 11:08 AM +0800, Kurt Young wrote:
> Have one question
Hi,
I am thinking we can provide hints to *table* related instances.
- TableFormatFactory: of course we need hints support; there are many format
options in DDL too.
- catalog and module: I don't know, maybe in future we can provide some
hints for them.
Best,
Jingsong Lee
On Wed, Mar 18, 2020 at
I second Thomas that we can support both Java 8 and 11.
Best,
Yangze Guo
On Wed, Mar 18, 2020 at 12:12 PM Thomas Weise wrote:
>
> -->
>
> On Mon, Mar 16, 2020 at 1:58 AM Andrey Zagrebin wrote:
>>
>> Thanks for the further feedback Thomas and Yangze.
>>
>> > A generic, dynamic configuration mech