Hi Chesnay,
Agree with your proposal. I submitted a JIRA issue, FLINK-9676, related to a
deadlock issue.
I think it needs to be confirmed whether it will be covered in this release or a later one.
Zhijiang
--
From: Chesnay Schepler
Sent: Monday, July 2, 2018
+1.
Thanks for being the release manager Chesnay!
I just merged FLINK-9567.
Cheers,
Till
On Mon, Jul 2, 2018 at 8:48 PM Stephan Ewen wrote:
> +1 there are many minor fixes that are important for 1.5.1
>
> I would suggest to make 1.5.1 rather asap and consider also a 1.5.2 quite
> soon for the known issues that are not yet fixed.
+1 there are many minor fixes that are important for 1.5.1
I would suggest to make 1.5.1 rather asap and consider also a 1.5.2 quite
soon for the known issues that are not yet fixed.
In the future, with increasing test automation, we can hopefully make a
lot of minor releases fast, so fixed
Sayat Satybaldiyev created FLINK-9705:
-
Summary: Failed to close kafka producer - Interrupted while
joining ioThread
Key: FLINK-9705
URL: https://issues.apache.org/jira/browse/FLINK-9705
Project: Flink
Chesnay Schepler created FLINK-9704:
---
Summary: QueryableState E2E test failed on travis
Key: FLINK-9704
URL: https://issues.apache.org/jira/browse/FLINK-9704
Project: Flink
Issue Type: Improvement
+1, great work!
2018-07-02 22:57 GMT+08:00 Rong Rong :
> Huge +1!!!
>
> On Mon, Jul 2, 2018 at 6:07 AM Piotr Nowojski
> wrote:
>
> > Hi,
> >
> > Together with Fabian, Timo and Stephan, we were working on a proposal to
> > add support for time-versioned joins in Flink SQL/Table API. The idea
> >
Hi Piotr,
thanks for bumping this thread and thanks to Xingcan for the comments.
I think the first step would be to separate the flink-table module into
multiple sub modules. These could be:
- flink-table-api: All API-facing classes. Can later be divided further
into Java/Scala Table API/SQL
-
Huge +1!!!
On Mon, Jul 2, 2018 at 6:07 AM Piotr Nowojski
wrote:
> Hi,
>
> Together with Fabian, Timo and Stephan, we were working on a proposal to
> add support for time-versioned joins in Flink SQL/Table API. The idea
> here is to support use cases where a user would like to join a stream of dat
Hi all,
I have also been thinking about this problem these days, and here are my thoughts.
1) We must admit that it's really a tough task to interoperate between Java and
Scala. E.g., they have different collection types (Scala collections vs.
java.util.*), and in Java it's hard to implement a method which tak
Bumping the topic.
If we want to do this, the sooner we decide, the less code we will have to
rewrite. I have some objections/counter-proposals to Fabian's proposal of doing
it module-wise, one module at a time.
First, I do not see a problem with having Java/Scala code even within one module,
Hi,
Together with Fabian, Timo and Stephan, we were working on a proposal to add
support for time-versioned joins in Flink SQL/Table API. The idea here is to
support use cases where a user would like to join a stream of data with a table
that changes over time and where future updates to the ta
Hello guys,
We have discovered a minor issue with Flink 1.5 on YARN in particular, which was
related to the way Flink manages temp paths (io.tmp.dirs)
in configuration:
https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#io-tmp-dirs
1. From what we can see in the code,
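For reference, the temp directories can also be pinned explicitly in flink-conf.yaml instead of relying on the YARN-provided defaults; a minimal sketch (the paths are placeholders):

```yaml
# flink-conf.yaml
# Directories for temporary files, separated by ','.
# On YARN, Flink derives the default from the container's local directories,
# so overriding this key changes where temp files land.
io.tmp.dirs: /data1/flink-tmp,/data2/flink-tmp
```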
CrossJoins are not supported.
You should add an equality join predicate.
2018-07-02 13:26 GMT+02:00 Amol S - iProgrammer :
> Hello fabian,
>
> I have tried to convert table into stream as below
>
>
> Cannot generate a valid execution plan for the given query:
>
> tableEnv.toDataStream(result, Opl
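The equality predicate matters for execution: it lets the planner use a hash-based join instead of enumerating a Cartesian product. A minimal plain-Java sketch of the idea (hypothetical orders/customers data, not the Flink API itself):

```java
import java.util.*;

public class EqualityJoinSketch {
    // Joins orders (orderId, customerId) against a customerId -> name map
    // on key equality: O(n) probes, vs. n * m pairs for a cross join.
    static List<String> equiJoin(List<int[]> orders, Map<Integer, String> customers) {
        List<String> joined = new ArrayList<>();
        for (int[] order : orders) {
            // equality predicate: order.customerId = customer.id
            String name = customers.get(order[1]);
            if (name != null) joined.add(order[0] + "->" + name);
        }
        return joined;
    }

    public static void main(String[] args) {
        List<int[]> orders = Arrays.asList(
                new int[]{1, 10}, new int[]{2, 20}, new int[]{3, 10});
        Map<Integer, String> customers = new LinkedHashMap<>();
        customers.put(10, "alice");
        customers.put(20, "bob");
        System.out.println(equiJoin(orders, customers)); // [1->alice, 2->bob, 3->alice]
    }
}
```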
Rune Skou Larsen created FLINK-9703:
---
Summary: Mesos does not expose TM Prometheus port
Key: FLINK-9703
URL: https://issues.apache.org/jira/browse/FLINK-9703
Project: Flink
Issue Type: Bug
Hello Fabian,
I have tried to convert the table into a stream as below:
tableEnv.toDataStream(result, Oplog.class);
and it is giving me the below error:
Cannot generate a valid execution plan for the given query:
LogicalFilter(condition=[<>($1, $3)])
  LogicalJoin(condition=[true], joinType=[inner])
Hi Chesnay,
It's a good idea, and I just created a pull request for FLINK-9567. Please
have a look when you have some free time.
Cheers,
Shimin
On Mon, Jul 2, 2018 at 7:17 PM vino yang wrote:
> +1
>
> 2018-07-02 18:19 GMT+08:00 Chesnay Schepler :
>
> > Hello,
> >
> > it has been a little over a mo
Stefan Richter created FLINK-9702:
-
Summary: Improvement in (de)serialization of keys and values for
RocksDB state
Key: FLINK-9702
URL: https://issues.apache.org/jira/browse/FLINK-9702
Project: Flink
+1
2018-07-02 18:19 GMT+08:00 Chesnay Schepler :
> Hello,
>
> it has been a little over a month since we released 1.5.0. Since then
> we've addressed 56 JIRAs [1] for the 1.5 branch, including stability
> enhancements to the new execution mode (FLIP-6), fixes for critical issues
> in the metric
Hello Fabian,
Can you please tell me how to convert a Table back into a DataStream? I just
want to print the table result.
---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com
*iProgrammer Solutions Pvt. Ltd.*
*Office 103, 104, 1st Floor Pride P
Andrey Zagrebin created FLINK-9701:
--
Summary: Activate TTL in state descriptors
Key: FLINK-9701
URL: https://issues.apache.org/jira/browse/FLINK-9701
Project: Flink
Issue Type: Sub-task
You can also use Row, but then you cannot rely on automatic type extraction
and must provide TypeInformation.
Amol S - iProgrammer wrote on Mon., July 2, 2018,
12:37:
> Hello Fabian,
>
> According to my requirements I cannot create static POJOs for all classes
> because I want to create dynamic job
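Concretely, with the 1.5-era Table API this looks roughly like the following sketch (not compiled here; it assumes the Flink Table dependency plus the `tableEnv` and `result` table from the earlier snippets, and the field names/types are hypothetical):

```java
// Row erases its field types, so they are supplied explicitly via
// RowTypeInfo instead of relying on automatic type extraction.
RowTypeInfo rowType = new RowTypeInfo(
        new TypeInformation<?>[]{Types.STRING, Types.LONG}, // field types
        new String[]{"name", "cnt"});                       // field names
DataStream<Row> stream = tableEnv.toAppendStream(result, rowType);
stream.print();
```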
Hello Fabian,
According to my requirements I cannot create static POJOs for all classes,
because I want to create dynamic jobs for all tables based on rule engine
config. Please suggest if there is any other way to achieve this.
---
*Amol Suryawanshi*
J
Hi Amol,
These are the requirements for POJOs [1] that are fully supported by Flink.
Best, Fabian
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/api_concepts.html#pojos
2018-07-02 12:19 GMT+02:00 Amol S - iProgrammer :
> Hello Xingcan
>
> As mentioned in above mail thread
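As a concrete illustration of those requirements, a minimal class Flink can treat as a POJO (the Order name and its fields are hypothetical):

```java
// Meets the documented POJO rules: public standalone class, public
// no-argument constructor, and every field either public or reachable
// through a public getter/setter pair.
public class Order {
    public String customerId;   // public field: accessed directly
    private long amount;        // private field: needs getter/setter

    public Order() {}           // required no-arg constructor

    public Order(String customerId, long amount) {
        this.customerId = customerId;
        this.amount = amount;
    }

    public long getAmount() { return amount; }
    public void setAmount(long amount) { this.amount = amount; }

    public static void main(String[] args) {
        Order o = new Order("c1", 42L);
        System.out.println(o.customerId + ":" + o.getAmount()); // prints c1:42
    }
}
```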
Hello Fabian,
The output of customerMISMaster.printSchema() is undefined
---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com
*iProgrammer Solutions Pvt. Ltd.*
*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Nea
Hello Xingcan,
As mentioned in the above mail thread, I am streaming the MongoDB oplog to join
multiple Mongo tables based on some unique key (primary key). To achieve this
I have created one Java POJO as below, where o represents the generic POJO type
of MongoDB which has my table fields, i.e. dynamic. Now I wan
Hi,
It looks like the type of master is not known to Flink.
What's the output of
customerMISMaster.printSchema(); ?
Best, Fabian
2018-07-02 11:33 GMT+02:00 Amol S - iProgrammer :
> Hello Xingcan
>
> DataStream streamSource = env
> .addSource(kafkaConsumer)
> .setParallelism(
Hello,
it has been a little over a month since we released 1.5.0. Since then
we've addressed 56 JIRAs [1] for the 1.5 branch, including stability
enhancements to the new execution mode (FLIP-6), fixes for critical
issues in the metric system, and also features that didn't quite make it
into
Ufuk Celebi created FLINK-9700:
--
Summary: Document FlinkKafkaProducer behaviour for Kafka versions > 0.11
Key: FLINK-9700
URL: https://issues.apache.org/jira/browse/FLINK-9700
Project: Flink
Is
Hello Xingcan
DataStream streamSource = env
    .addSource(kafkaConsumer)
    .setParallelism(4);
StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
// Convert the DataStream into a Table with default fields "f0", "f1"
Table table1 = tableEnv.fromDataStream(streamSource);
Hi Amol,
The “dynamic table” is just a logical concept, following which the Flink Table
API is designed.
That means you don’t need to implement dynamic tables yourself.
The Flink Table API provides different kinds of stream-to-stream joins in recent
versions (since 1.4).
The related docs can be foun
Hello,
I am streaming the MongoDB oplog using Kafka and Flink, and I want to join
multiple tables using the Flink Table API. I have some concerns, though: is it
possible to join streamed tables in Flink? If yes, please provide me with some
example of a stream join using the Table API.
I went through your dynamic
Dear community,
this is the weekly community update thread #27. Please post any news and
updates you want to share with the community to this thread.
# Feature freeze and release date for Flink 1.6
The community is currently discussing the feature freeze and, thus, also
the release date for Flink 1.6.
Jeff Zhang created FLINK-9699:
-
Summary: Add api to replace table
Key: FLINK-9699
URL: https://issues.apache.org/jira/browse/FLINK-9699
Project: Flink
Issue Type: Improvement
Reporter