Jark Wu created FLINK-12947:
---
Summary: Translate "Twitter Connector" page into Chinese
Key: FLINK-12947
URL: https://issues.apache.org/jira/browse/FLINK-12947
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-12946:
---
Summary: Translate "Apache NiFi Connector" page into Chinese
Key: FLINK-12946
URL: https://issues.apache.org/jira/browse/FLINK-12946
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-12944:
---
Summary: Translate "Streaming File Sink" page into Chinese
Key: FLINK-12944
URL: https://issues.apache.org/jira/browse/FLINK-12944
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-12945:
---
Summary: Translate "RabbitMQ Connector" page into Chinese
Key: FLINK-12945
URL: https://issues.apache.org/jira/browse/FLINK-12945
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-12943:
---
Summary: Translate "HDFS Connector" page into Chinese
Key: FLINK-12943
URL: https://issues.apache.org/jira/browse/FLINK-12943
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-12942:
---
Summary: Translate "Elasticsearch Connector" page into Chinese
Key: FLINK-12942
URL: https://issues.apache.org/jira/browse/FLINK-12942
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-12941:
---
Summary: Translate "Amazon AWS Kinesis Streams Connector" page
into Chinese
Key: FLINK-12941
URL: https://issues.apache.org/jira/browse/FLINK-12941
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-12940:
---
Summary: Translate "Apache Cassandra Connector" page into Chinese
Key: FLINK-12940
URL: https://issues.apache.org/jira/browse/FLINK-12940
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-12939:
---
Summary: Translate "Apache Kafka Connector" page into Chinese
Key: FLINK-12939
URL: https://issues.apache.org/jira/browse/FLINK-12939
Project: Flink
Issue Type: Sub-task
Jark Wu created FLINK-12938:
---
Summary: Translate "Streaming Connectors" page into Chinese
Key: FLINK-12938
URL: https://issues.apache.org/jira/browse/FLINK-12938
Project: Flink
Issue Type: Sub-task
godfrey he created FLINK-12937:
--
Summary: Introduce join reorder planner rules in blink planner
Key: FLINK-12937
URL: https://issues.apache.org/jira/browse/FLINK-12937
Project: Flink
Issue Type:
Jingsong Lee created FLINK-12936:
Summary: Support intersect all and minus all to blink planner
Key: FLINK-12936
URL: https://issues.apache.org/jira/browse/FLINK-12936
Project: Flink
Issue Type:
I can't find any place to specify the parallelism for the join here.
stream1.join( stream2 )
.where( .. )
.equalTo( .. )
.window( .. )
.apply( .. );
How can we specify that ?
-roshan
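A sketch of one common approach, assuming the Flink DataStream API of that era (the `..` placeholders mirror the snippet above and are not runnable as-is): a windowed join takes its parallelism from the environment default, so one option is to set that before building the pipeline.

```java
// Sketch only: the windowed join inherits the environment's
// default parallelism, so set it before building the join.
env.setParallelism(4);

stream1.join( stream2 )
    .where( .. )      // key selector for stream1
    .equalTo( .. )    // key selector for stream2
    .window( .. )
    .apply( .. );
```

Whether the operator produced by apply() can be tuned individually (for example by casting the result to SingleOutputStreamOperator and calling setParallelism) depends on the Flink version, so treat that as something to verify rather than a given.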
Bowen Li created FLINK-12935:
Summary: package flink-connector-hive and some flink dependencies
into /opt of flink distribution
Key: FLINK-12935
URL: https://issues.apache.org/jira/browse/FLINK-12935
Project: Flink
Bowen Li created FLINK-12934:
Summary: add additional dependencies for flink-connector-hive to
connect to standalone hive metastore
Key: FLINK-12934
URL: https://issues.apache.org/jira/browse/FLINK-12934
Bowen Li created FLINK-12933:
Summary: support "use catalog" and "use database" in SQL CLI
Key: FLINK-12933
URL: https://issues.apache.org/jira/browse/FLINK-12933
Project: Flink
Issue Type: Sub-task
Bowen Li created FLINK-12932:
Summary: support show catalogs and show databases in SQL CLI
Key: FLINK-12932
URL: https://issues.apache.org/jira/browse/FLINK-12932
Project: Flink
Issue Type: Sub-task
Bowen Li created FLINK-12931:
Summary: lint-python.sh cannot find flake8
Key: FLINK-12931
URL: https://issues.apache.org/jira/browse/FLINK-12931
Project: Flink
Issue Type: Bug
Component
Hi Guys,
I want to contribute to Apache Flink. Would you please give me
contributor permission? My JIRA ID: Chance Li, email: chanc...@gmail.com
Regards,
Chance
Hi all,
As the event is around the corner, please RSVP at meetup.com if you
haven't responded yet. Otherwise, I will see you next Wednesday, June 26.
Regards,
Xuefu
On Mon, Jun 10, 2019 at 7:50 PM Xuefu Zhang wrote:
> Hi all,
>
> As promised, we planned to have quarterly Flink meetup and now it's
Hi Aljoscha,
Sorry for the late reply; I think the solution makes sense. Using a NULL
return value to mark a message as corrupted is not a valid approach, since
NULL has semantic meaning not only in Kafka but also in many other
contexts.
I was wondering if we can have a more meaningful inte
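One alternative in the spirit of this thread (a sketch only, not Flink's actual interface; the `DeserializationResult` name and shape are made up here for illustration) is an explicit result type that distinguishes a corrupted record from a record whose deserialized value legitimately happens to be null:

```java
// Hypothetical wrapper type: separates "this record was corrupted"
// from "this record deserialized to a (possibly null) value".
final class DeserializationResult<T> {
    private final T value;
    private final boolean corrupted;

    private DeserializationResult(T value, boolean corrupted) {
        this.value = value;
        this.corrupted = corrupted;
    }

    // Wraps a successfully deserialized value (null is allowed here).
    static <T> DeserializationResult<T> of(T value) {
        return new DeserializationResult<>(value, false);
    }

    // Marks the record as corrupted, with no value attached.
    static <T> DeserializationResult<T> corrupted() {
        return new DeserializationResult<>(null, true);
    }

    boolean isCorrupted() { return corrupted; }

    T value() {
        if (corrupted) {
            throw new IllegalStateException("record was corrupted");
        }
        return value;
    }
}
```

With a wrapper like this, a consumer can skip or side-output corrupted records explicitly instead of overloading null, which keeps null available as a legal deserialized value.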
Robert Metzger created FLINK-12930:
--
Summary: Update Chinese "how to contribute" pages
Key: FLINK-12930
URL: https://issues.apache.org/jira/browse/FLINK-12930
Project: Flink
Issue Type: Task
Fabio Lombardelli created FLINK-12929:
-
Summary: scala.StreamExecutionEnvironment.addSource does not
propagate TypeInformation
Key: FLINK-12929
URL: https://issues.apache.org/jira/browse/FLINK-12929
Seth Wiesman created FLINK-12928:
Summary: Remove old Flink ML docs
Key: FLINK-12928
URL: https://issues.apache.org/jira/browse/FLINK-12928
Project: Flink
Issue Type: Improvement
Co
Thanks, everyone, for the positive feedback :-)
@Robert - It probably makes sense to break this down into various pages,
like PR, general code style guide, Java, component specific guides,
formats, etc.
Best,
Stephan
On Fri, Jun 21, 2019 at 4:29 PM Robert Metzger wrote:
> It seems that the di
It seems that the discussion around this topic has settled.
I'm going to turn the Google Doc into a markdown file (maybe also multiple,
I'll try out different things) and then open a pull request for the Flink
website.
I'll post a link to the PR here once I'm done.
On Fri, Jun 14, 2019 at 9:36 AM
By default, flushOnCheckpoint is set to true.
So ideally, based on env.enableCheckpointing(30); the flush to ES
must be triggered every 30 seconds, though our ES flush timeout is 60
seconds.
If the above assumption is correct, then we still do not see packets
getting flushed till the next p
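One detail worth noting here: enableCheckpointing takes its interval in milliseconds, so env.enableCheckpointing(30) requests a checkpoint every 30 ms, not every 30 seconds. A sketch of the relevant settings, assuming the flink-connector-elasticsearch Builder API (httpHosts, sinkFunction, env, and stream are placeholders, not defined here):

```java
// Checkpoint interval is in milliseconds: 30_000L = 30 seconds.
env.enableCheckpointing(30_000L);

// Sketch: bulk flushes can also be triggered by buffered-action count
// or by a timer, independently of checkpoints.
ElasticsearchSink.Builder<String> esSinkBuilder =
    new ElasticsearchSink.Builder<>(httpHosts, sinkFunction);
esSinkBuilder.setBulkFlushMaxActions(1000); // flush after 1000 buffered actions
esSinkBuilder.setBulkFlushInterval(5_000L); // or after 5 s, whichever comes first
stream.addSink(esSinkBuilder.build());
```

With a time-based bulk flush interval configured, documents should reach Elasticsearch without waiting for the next batch to push them out; the exact setter names depend on the connector version, so verify them against the docs for the release in use.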
Yun Tang created FLINK-12927:
Summary: YARNSessionCapacitySchedulerITCase failed due to non
prohibited exception
Key: FLINK-12927
URL: https://issues.apache.org/jira/browse/FLINK-12927
Project: Flink
Yes, we do maintain checkpoints
env.enableCheckpointing(30);
But we assumed it is for Kafka consumer offsets. Not sure how this is
useful in this case; can you please elaborate?
~Ramya.
On Fri, Jun 21, 2019 at 4:33 PM miki haiat wrote:
> Did you set some checkpoints configuration?
Hi vino,
Thanks a lot for unblocking the email address. I have told the user about
this.
Hope things can get better.
Best, Hequn
On Fri, Jun 21, 2019 at 3:14 PM vino yang wrote:
> Hi Hequn,
>
> Thanks for reporting this case.
>
> The reason replied by QQ mail team is also caused by *bounce att
Zhu Zhu created FLINK-12926:
---
Summary: Main thread checking in some tests fails
Key: FLINK-12926
URL: https://issues.apache.org/jira/browse/FLINK-12926
Project: Flink
Issue Type: Bug
Comp
Did you set some checkpoints configuration?
On Fri, Jun 21, 2019, 13:17 Ramya Ramamurthy wrote:
> Hi,
>
> We use Kafka->Flink->Elasticsearch in our project.
> The data to Elasticsearch is not flushed until the next batch
> arrives.
> E.g.: If the first batch contains 1000 packets, t
Hi,
We use Kafka->Flink->Elasticsearch in our project.
The data to Elasticsearch is not flushed until the next batch
arrives.
E.g.: if the first batch contains 1000 packets, it gets pushed to
Elastic only after the next batch arrives [irrespective of reaching the
batch time limi
Hi All,
The last blocker (FLINK-12863) of the 1.8.1 release has been fixed!
But please also report any issue you think is a blocker!
I will do a final check; if no new problems are found, I will
prepare RC1 as soon as possible! :)
Cheers,
Jincheng
jincheng sun wrote on Monday, Jun 17, 2019 at 9:24 AM:
> Hi
Alex created FLINK-12925:
Summary: Docker embedded job end-to-end test fails
Key: FLINK-12925
URL: https://issues.apache.org/jira/browse/FLINK-12925
Project: Flink
Issue Type: Bug
Component
Timo Walther created FLINK-12924:
Summary: Introduce basic type inference interfaces
Key: FLINK-12924
URL: https://issues.apache.org/jira/browse/FLINK-12924
Project: Flink
Issue Type: Sub-task
Chesnay Schepler created FLINK-12923:
Summary: Introduce a Task termination future
Key: FLINK-12923
URL: https://issues.apache.org/jira/browse/FLINK-12923
Project: Flink
Issue Type: Improvement
Hi Hequn,
Thanks for reporting this case.
The reason given by the QQ mail team is also a *bounce attack*, so this
mail address was intercepted, and it is an IP-level interception.
Today, the QQ mail team unblocked this email address, so it can receive
the follow-up email from Apach