Richard Deurwaarder created FLINK-13700:
---
Summary: PubSub connector example not included in flink-dist
Key: FLINK-13700
URL: https://issues.apache.org/jira/browse/FLINK-13700
Project: Flink
OK, Thanks Jark
Thanks,
Simon
On 08/13/2019 14:05, Jark Wu wrote:
Hi Simon,
This is a temporary workaround for 1.9 release. We will fix the behavior in
1.10, see FLINK-13461.
Regards,
Jark
On Tue, 13 Aug 2019 at 13:57, Simon Su wrote:
Hi Jark
Thanks for your reply.
It’s weird that
Hi Simon,
This is a temporary workaround for 1.9 release. We will fix the behavior in
1.10, see FLINK-13461.
Regards,
Jark
On Tue, 13 Aug 2019 at 13:57, Simon Su wrote:
> Hi Jark
>
> Thanks for your reply.
>
> It’s weird that in this case the tableEnv provides the API called
> “registerCatalog”
Hi Jark
Thanks for your reply.
It’s weird that in this case the tableEnv provides the API called
“registerCatalog”, but it does not work in some cases (like mine).
Do you think it’s feasible to unify these behaviors? I think the documentation
is necessary, but a unified way to use tableEnv is
Hi all,
I just found an issue when testing connector DDLs against the Blink planner for
rc2.
This issue causes DDLs containing TIMESTAMP/DATE/TIME types to fail.
I have created an issue, FLINK-13699 [1], and a pull request for it.
IMO, this can be a blocker issue of the 1.9 release. Because
ti
I think we might need to improve the javadoc of
tableEnv.registerTableSource/registerTableSink.
Currently, the comment says
"Registers an external TableSink with already configured field names and
field types in this TableEnvironment's catalog."
But, what catalog? The current one or default in-me
Jark Wu created FLINK-13699:
---
Summary: Fix TableFactory doesn't work with DDL when containing
TIMESTAMP/DATE/TIME types
Key: FLINK-13699
URL: https://issues.apache.org/jira/browse/FLINK-13699
Project: Flink
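For illustration, a DDL of the shape that hits this bug might look like the
following; the table name, columns, and connector properties are made up, only
the declared column types matter:

```sql
-- Hypothetical table; any DDL declaring TIMESTAMP/DATE/TIME columns
-- ran into the TableFactory problem described above.
CREATE TABLE orders (
  order_id BIGINT,
  order_time TIMESTAMP(3),
  order_date DATE,
  pickup_time TIME
) WITH (
  'connector.type' = 'kafka'
);
```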
Yes, tableEnv.registerTable(_) etc always registers in the default catalog.
To create table in your custom catalog, you could use
tableEnv.sqlUpdate("create table ").
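To make the distinction concrete, here is a minimal Python sketch of the
behavior described above. It is a toy model, not Flink's actual CatalogManager;
all class and method names are illustrative stand-ins:

```python
# Toy model -- NOT Flink's actual CatalogManager. registerTable() always
# targets the built-in default catalog, while a CREATE TABLE DDL honors
# the current catalog selected via use_catalog().
class CatalogManager:
    def __init__(self):
        self.catalogs = {"default_catalog": {"default_database": {}}}
        self.current_catalog = "default_catalog"

    def register_catalog(self, name):
        self.catalogs[name] = {"default_database": {}}

    def use_catalog(self, name):
        self.current_catalog = name

    def register_table(self, name, table):
        # Mirrors tableEnv.registerTable(): ignores use_catalog().
        self.catalogs["default_catalog"]["default_database"][name] = table

    def create_table_ddl(self, name, table):
        # Mirrors a "CREATE TABLE ..." DDL: honors the current catalog.
        self.catalogs[self.current_catalog]["default_database"][name] = table

mgr = CatalogManager()
mgr.register_catalog("ca1")
mgr.use_catalog("ca1")
mgr.register_table("t1", object())    # still lands in default_catalog
mgr.create_table_ddl("t2", object())  # lands in ca1
print(sorted(mgr.catalogs["default_catalog"]["default_database"]))  # ['t1']
print(sorted(mgr.catalogs["ca1"]["default_database"]))              # ['t2']
```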
Thanks,
Xuefu
On Mon, Aug 12, 2019 at 6:17 PM Simon Su wrote:
> Hi Xuefu
>
> Thanks for your reply.
>
> Actually I have tried
Hi Xuefu
Thanks for your reply.
Actually I have tried it as you advised. I have tried to call
tableEnv.useCatalog and useDatabase. Also I have tried to use
“catalogname.databasename.tableName” in SQL. I think the root cause is that
when I call tableEnv.registerTableSource, it always uses
Thanks Stephan. That was the case... I had an empty override of
processWatermark() in the operator that went unnoticed. Removing it fixed the
problem. -Roshan
On Monday, August 12, 2019, 02:39:45 AM PDT, Stephan Ewen
wrote:
Do you know what part of the code happens to block off your
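For anyone hitting the same symptom, a tiny Python sketch of the failure mode;
the class and method names are simplified stand-ins for Flink's operator API,
not the real thing:

```python
# Sketch of why an empty processWatermark() override blocks watermarks:
# the base class forwards them downstream, the empty override does not.
class BaseOperator:
    def __init__(self):
        self.emitted = []

    def process_watermark(self, mark):
        self.emitted.append(mark)  # base behavior: forward downstream

class BrokenOperator(BaseOperator):
    def process_watermark(self, mark):
        pass  # empty override: watermark silently dropped

class FixedOperator(BaseOperator):
    pass  # no override: inherits the forwarding behavior

broken, fixed = BrokenOperator(), FixedOperator()
for op in (broken, fixed):
    op.process_watermark(1000)
print(broken.emitted)  # []
print(fixed.emitted)   # [1000]
```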
-1 for rushing into conclusions that we need to split the repo before
saturating our efforts in improving current build/CI mechanism. Besides all
the build system issues mentioned above (no incremental builds, no
flexibility to build only docs or subsets of components), it's hard to keep
configurat
Hi Rishindra,
It would be helpful to check out [1] and [2].
For your specific case, it's nice that you expressed your
willingness to contribute on the dev list. Hopefully other
Flink developers will participate in the discussion about the issue,
and with consensus one of our committers could assign
the ticket
Hi Thomas,
Thanks for raising this concern. Barrier alignment takes a long time under
backpressure, which can cause several problems:
1. Checkpoint timeouts, as you mentioned.
2. The recovery cost is high upon failover, because much data needs to be
replayed.
3. The delay for commit-based s
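A toy simulation of the alignment mechanics (pure Python, illustrative only,
not Flink code): once a channel has delivered its barrier, later records on it
must be buffered until the barrier arrives on every other channel, so a
backpressured channel stretches both the checkpoint duration and the buffering:

```python
# Toy model of barrier alignment. One item is consumed per channel per
# round; records behind an already-seen barrier are held back until the
# barrier has arrived on all channels.
def align(channels):
    iters = [iter(c) for c in channels]
    aligned = [False] * len(channels)
    processed, buffered = [], []
    while not all(aligned):
        for i, it in enumerate(iters):
            item = next(it, None)
            if item is None:
                continue
            if item == "BARRIER":
                aligned[i] = True
            elif aligned[i]:
                buffered.append(item)   # stuck behind the barrier
            else:
                processed.append(item)  # pre-barrier, flows through
    return processed, buffered

fast = ["a", "BARRIER", "b", "c"]     # barrier arrives quickly
slow = ["x1", "x2", "x3", "BARRIER"]  # backpressured channel
processed, buffered = align([fast, slow])
print(processed)  # ['a', 'x1', 'x2', 'x3']
print(buffered)   # ['b', 'c']
```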
Hi,
One of the major operational difficulties we observe with Flink are
checkpoint timeouts under backpressure. I'm looking for both confirmation
of my understanding of the current behavior as well as pointers for future
improvement work:
Prior to introduction of credit based flow control in the
Hi All !
Join our next Seattle Flink Meetup at Uber Seattle, featuring talks of
[Flink + Kappa+ @ Uber] and [Flink + Pulsar for streaming-first, unified
data processing].
- TALK #1: Moving from Lambda and Kappa Architectures to Kappa+ with Flink
at Uber
- TALK #2: When Apache Pulsar meets Apache
Hi Simon,
Thanks for reporting the problem. There are some rough edges around the catalog
API and table environments, and we are improving them post the 1.9 release.
Nevertheless, tableEnv.registerCatalog() just puts a new catalog into
Flink's CatalogManager; it doesn't change the default catalog/database a
Hi All,
I subscribed to developer mailing list recently and want to start
contributing with the following ticket.
https://issues.apache.org/jira/browse/FLINK-13689
Could you please let me know the procedure to start the discussion?
--
*Maddila Rishindra Kumar*
*Software Engineer*
*Walmartlabs I
Thanks Gordon, will do that.
On Mon, Aug 12, 2019 at 4:42 PM Tzu-Li (Gordon) Tai
wrote:
> Concerning FLINK-13231:
>
> Since this is a @PublicEvolving interface, technically it is ok to break
> it across releases (including across bugfix releases?).
> So, @Becket if you do merge it now, please ma
I have split small and medium-sized repositories into several projects for
various reasons. In general, the more mature a project, the less pain after
the split. If interfaces are somewhat stable, it's naturally easier to work in
a distributed manner.
However, projects should be split for the right reas
That sounds good to me. I was initially trying to piggyback it into an RC,
but fell behind and was not able to catch the last one.
Thanks,
Jiangjie (Becket) Qin
On Mon, Aug 12, 2019 at 4:25 PM Till Rohrmann wrote:
> I agree that it would be nicer. Not sure whether we should cancel the RC
> for
Concerning FLINK-13231:
Since this is a @PublicEvolving interface, technically it is ok to break it
across releases (including across bugfix releases?).
So, @Becket if you do merge it now, please mark the fix version as 1.9.1.
During the voting process, in case a new RC is created, we usually
I changed the permissions of the page.
On Mon, Aug 12, 2019 at 4:21 PM Till Rohrmann wrote:
> +1 for the proposal. Thanks a lot for driving this discussion Becket!
>
> Cheers,
> Till
>
> On Mon, Aug 12, 2019 at 3:02 PM Becket Qin wrote:
>
> > Hi Robert,
> >
> > That's a good suggestion. Will yo
Thanks a lot for starting the discussion Chesnay!
I would like to throw in another aspect into the discussion: What if we
consider this repo split as a first step towards making connectors, machine
learning, gelly, table/SQL, etc. independent projects within the ASF, with their
own mailing lists, comm
I agree that it would be nicer. Not sure whether we should cancel the RC
for this issue given that it is open for quite some time and hasn't been
addressed until very recently. Maybe we could include it on the shortlist
of nice-to-do things which we do in case that the RC gets cancelled.
Cheers,
T
+1 for the proposal. Thanks a lot for driving this discussion Becket!
Cheers,
Till
On Mon, Aug 12, 2019 at 3:02 PM Becket Qin wrote:
> Hi Robert,
>
> That's a good suggestion. Will you help to change the permission on that
> page?
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Mon, Aug 12, 2019 a
Hi Till,
Yes, I think we have already documented it that way. So technically
speaking it is fine to change it later. It is just better if we can avoid
doing that.
Thanks,
Jiangjie (Becket) Qin
On Mon, Aug 12, 2019 at 4:09 PM Till Rohrmann wrote:
> Could we say that the PubSub connector is p
Could we say that the PubSub connector is public evolving instead?
Cheers,
Till
On Mon, Aug 12, 2019 at 3:18 PM Becket Qin wrote:
> Hi all,
>
> FLINK-13231 (palindrome!) has a minor Google PubSub connector API change
> regarding how to configure rate limiting. The GCP PubSub connector is a newly
>
Hi all,
FLINK-13231 (palindrome!) has a minor Google PubSub connector API change
regarding how to configure rate limiting. The GCP PubSub connector is a newly
introduced connector in 1.9, so it would be nice to include this change
in 1.9 rather than later, to avoid a public API change. I am thinking
Hi Robert,
That's a good suggestion. Will you help to change the permission on that
page?
Thanks,
Jiangjie (Becket) Qin
On Mon, Aug 12, 2019 at 2:41 PM Robert Metzger wrote:
> Thanks for starting the vote.
> How about putting a specific version in the wiki up for voting, or
> restricting edit
Thanks for starting the vote.
How about putting a specific version in the wiki up for voting, or
restricting edit access to the page to the PMC?
There were already two changes (very minor) to the page since the vote has
started:
https://cwiki.apache.org/confluence/pages/viewpreviousversions.action?
Hi Kurt,
Thanks for your explanation. For [1] I think at least we should change
the JIRA issue fields, e.g. unset the fix version. For [2] I can see
the change is all in test scope, but I wonder if such a commit still
invalidates the release candidate. IIRC previous RC VOTE threads would contain a
relea
+1 as well. Starting the work in parallel may also give some insights on
whether some additional API on SourceReader is needed in order to support
the interaction between SourceReader and runtime.
On Mon, Aug 12, 2019 at 11:29 AM Stephan Ewen wrote:
> +1 to looking at the Source Reader interface
Piotr Nowojski created FLINK-13698:
--
Summary: Rework threading model of CheckpointCoordinator
Key: FLINK-13698
URL: https://issues.apache.org/jira/browse/FLINK-13698
Project: Flink
Issue Typ
Dawid Wysakowicz created FLINK-13697:
Summary: Drop deprecated ExternalCatalog API
Key: FLINK-13697
URL: https://issues.apache.org/jira/browse/FLINK-13697
Project: Flink
Issue Type: Impro
Till Rohrmann created FLINK-13696:
-
Summary: Revisit & update Flink's public API annotations
Key: FLINK-13696
URL: https://issues.apache.org/jira/browse/FLINK-13696
Project: Flink
Issue Type
Till Rohrmann created FLINK-13695:
-
Summary: Integrate checkpoint notifications into StreamTask's
lifecycle
Key: FLINK-13695
URL: https://issues.apache.org/jira/browse/FLINK-13695
Project: Flink
vinoyang created FLINK-13694:
Summary: Refactor Flink on YARN configuration with relevant
overlay classes
Key: FLINK-13694
URL: https://issues.apache.org/jira/browse/FLINK-13694
Project: Flink
I
Rui Li created FLINK-13693:
--
Summary: Identifiers are not properly handled in some DDLs
Key: FLINK-13693
URL: https://issues.apache.org/jira/browse/FLINK-13693
Project: Flink
Issue Type: Bug
+1
Thanks for all the efforts you put into this for documenting how the
project operates.
Regards,
Timo
Am 12.08.19 um 10:44 schrieb Aljoscha Krettek:
+1
On 11. Aug 2019, at 10:07, Becket Qin wrote:
Hi all,
I would like to start a voting thread on the project bylaws of Flink. It
aims to
Hi Zili,
Thanks for the heads up. The two issues you mentioned were opened by me. We
have found the cause of the second issue and a PR was opened for it. As noted
in the JIRA, the issue was just a testing problem and should not be a blocker
for the 1.9.0 release. However, we will still merge it into the 1.9 branch.
Do you know what part of the code happens to block off your watermark?
Maybe a method that is overridden in AbstractStreamOperator in your code?
On Sat, Aug 10, 2019 at 4:06 AM Roshan Naik
wrote:
> Have streaming use cases where it is useful & easier to generate the
> watermark in the Source (vi
Hi,
I just noticed that a few hours ago two new issues were
filed and marked as blockers for 1.9.0 [1][2].
Now [1] is closed as a duplicate but still marked as
a blocker for 1.9.0, while [2] is downgraded to "Major" priority
but still targeted to be fixed in 1.9.0.
It would be worth having atte
Till Rohrmann created FLINK-13692:
-
Summary: Make CompletedCheckpointStore backwards compatible
Key: FLINK-13692
URL: https://issues.apache.org/jira/browse/FLINK-13692
Project: Flink
Issue Ty
+1 to looking at the Source Reader interface as converged with respect to
its integration with the runtime.
Especially the semantics around the availability future and "emitNext" seem
to have reached consensus.
On Sat, Aug 10, 2019 at 10:51 PM zhijiang
wrote:
>
> Hi all,
>
> As mentioned in FLIP-
Timo Walther created FLINK-13691:
Summary: Remove deprecated query config
Key: FLINK-13691
URL: https://issues.apache.org/jira/browse/FLINK-13691
Project: Flink
Issue Type: Improvement
Hi Maximilian,
Thanks for the feedback. Please see the reply below:
Step 2 should include a personal email to the PMC members in question.
I'm afraid reminders inside the vote thread could be overlooked easily.
This is exactly what I meant to say by "reach out" :) I just made it more
explicit
molsion created FLINK-13690:
---
Summary: Connectors/JDBC LookupFunction getFieldFromResultSet BUG
Key: FLINK-13690
URL: https://issues.apache.org/jira/browse/FLINK-13690
Project: Flink
Issue Type: Bu
Thanks Stephan :)
That looks easy enough, will try!
Gyula
On Mon, Aug 12, 2019 at 11:00 AM Stephan Ewen wrote:
> Hi Gyula!
>
> Thanks for reporting this.
>
> Can you try to simply build Flink without Hadoop and then exporting
> HADOOP_CLASSPATH to your Cloudera libs?
> That is the recommended w
Hi Gyula!
Thanks for reporting this.
Can you try to simply build Flink without Hadoop and then exporting
HADOOP_CLASSPATH to your Cloudera libs?
That is the recommended way these days.
Best,
Stephan
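Concretely, a hedged sketch of that setup; the parcel path below is an assumed
Cloudera layout, not a guaranteed one:

```shell
# Assumed Cloudera parcel location -- adjust to your installation.
# On a node with the hadoop CLI available you would normally use:
#   export HADOOP_CLASSPATH=$(hadoop classpath)
export HADOOP_CLASSPATH="/opt/cloudera/parcels/CDH/jars/*"
echo "$HADOOP_CLASSPATH"
```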
On Mon, Aug 12, 2019 at 10:48 AM Gyula Fóra wrote:
> Thanks Dawid,
>
> In the meantime I als
Thanks Dawid,
In the meantime I also figured out that I need to build the
https://github.com/apache/flink-shaded project locally with
-Dhadoop.version set to the specific hadoop version if I want something
different.
Cheers,
Gyula
On Mon, Aug 12, 2019 at 9:54 AM Dawid Wysakowicz
wrote:
> Hi Gy
+1
> On 11. Aug 2019, at 10:07, Becket Qin wrote:
>
> Hi all,
>
> I would like to start a voting thread on the project bylaws of Flink. It
> aims to help the community coordinate more smoothly. Please see the bylaws
> wiki page below for details.
>
> https://cwiki.apache.org/confluence/pages/v
Just in case we decide to pursue the repo split in the end, some thoughts
on Chesnay's questions:
(1) Git History
We can also use "git filter-branch" to rewrite the history to only contain
the connectors.
It changes commit hashes, but I am not sure that this is a problem. The commit
hashes are still v
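As a hedged sketch, the extraction could look like this; it builds a throwaway
repo with made-up directory names just to demonstrate the command — on the real
repo only the filter-branch line would run:

```shell
# Toy demonstration of "git filter-branch" keeping only one directory's
# history. Directory and file names are made up for illustration.
export FILTER_BRANCH_SQUELCH_WARNING=1
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.org
git config user.name dev
mkdir -p flink-connectors flink-core
echo kafka > flink-connectors/kafka.txt
echo core > flink-core/core.txt
git add . && git commit -qm "initial layout"
# Rewrite history so the repo root becomes flink-connectors/;
# commit hashes change, but the connector history is preserved.
git filter-branch -f --prune-empty --subdirectory-filter flink-connectors -- --all >/dev/null 2>&1
ls
```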
Hi Gyula,
As for the issues with mapr maven repository, you might have a look at
this message:
https://lists.apache.org/thread.html/77f4db930216e6da0d6121065149cef43ff3ea33c9ffe9b1a3047210@%3Cdev.flink.apache.org%3E
Try using the "unsafe-mapr-repo" profile.
Best,
Dawid
On 11/08/2019 19:31, Gyu
Hi All
I want to use a custom catalog named “ca1” and create a
database under this catalog. When I submit the
SQL, it raises an error like:
Exception in thread "main" org.apache.flink.table.api.ValidationException:
SQL validation failed. From line 1, column 98 to li
Rishindra Kumar created FLINK-13689:
---
Summary: Rest High Level Client for Elasticsearch6.x connector
leaks threads if no connection could be established
Key: FLINK-13689
URL: https://issues.apache.org/jira/brows
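A generic Python sketch of the leak pattern behind this ticket — this is not
the Elasticsearch RestHighLevelClient API, just an illustration that a client
which starts worker threads must be closed even when the initial connection
check fails:

```python
import threading

# Stand-in client: starts a non-daemon worker thread on construction,
# as real HTTP clients often do for their I/O loops.
class Client:
    def __init__(self):
        self._stop = threading.Event()
        self._worker = threading.Thread(target=self._stop.wait)
        self._worker.start()

    def ping(self):
        # Stand-in for the connection check that fails.
        raise ConnectionError("no connection could be established")

    def close(self):
        self._stop.set()
        self._worker.join()

def connect():
    client = Client()
    try:
        client.ping()
        return client
    except ConnectionError:
        client.close()  # release the thread before propagating the error
        raise

before = threading.active_count()
try:
    connect()
except ConnectionError:
    pass
print(threading.active_count() == before)  # True: no leaked worker thread
```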