Hello, Flink folks.
As stated earlier in 2018, and reiterated two weeks ago, all git
repositories must be migrated from the git-wip-us.apache.org URL to
gitbox.apache.org, as the old service is being decommissioned. Your
project is receiving this email because you still have repositories on
git-wip-us.apache.org.
dengjie created FLINK-11325:
---
Summary: Flink Consumer Kafka Topic Not Found ConsumerID
Key: FLINK-11325
URL: https://issues.apache.org/jira/browse/FLINK-11325
Project: Flink
Issue Type: Bug
Ankit Sharma created FLINK-11324:
Summary: Custom Log4j properties for each flink job
Key: FLINK-11324
URL: https://issues.apache.org/jira/browse/FLINK-11324
Project: Flink
Issue Type: Improvement
Igal Shilman created FLINK-11323:
Summary: Upgrade KryoSerializer snapshot to implement new
TypeSerializerSnapshot interface
Key: FLINK-11323
URL: https://issues.apache.org/jira/browse/FLINK-11323
Project: Flink
I agree with Chesnay here.
How about introducing a bot that just tags the stale PRs instead of closing
them? Then we can see how many stale PRs there are and decide how to
proceed based on the numbers.
Best,
Congxian
Timo Walther wrote on Mon, Jan 14, 2019, at 10:30 PM:
> I totally agree with Chesnay here. A bot
Fokko Driesprong created FLINK-11322:
Summary: Use try-with-resource for FlinkKafkaConsumer010
Key: FLINK-11322
URL: https://issues.apache.org/jira/browse/FLINK-11322
Project: Flink
Issue
Fokko Driesprong created FLINK-11321:
Summary: Clarify the NPE on fetching a nonexistent Kafka topic
Key: FLINK-11321
URL: https://issues.apache.org/jira/browse/FLINK-11321
Project: Flink
Allen Wang created FLINK-11320:
--
Summary: Support user callback in Kafka sink
Key: FLINK-11320
URL: https://issues.apache.org/jira/browse/FLINK-11320
Project: Flink
Issue Type: Improvement
Allen Wang created FLINK-11319:
--
Summary: Allow usage of custom implementation of Kafka Producer
and Consumer in source and sink
Key: FLINK-11319
URL: https://issues.apache.org/jira/browse/FLINK-11319
Project: Flink
Hi,
Welcome to the Flink community!
I gave you contributor permissions and assigned FLINK-11311 to you.
Best, Fabian
On Sun, Jan 13, 2019 at 16:02, Benchao Li wrote:
> Hi, everyone
>
> I would like to make a contribution to JIRA (FLINK-11311). Would anyone
> kindly give me the contribution
Edward Rojas created FLINK-11318:
Summary: [Regression] StreamingFileSink can overwrite existing
files
Key: FLINK-11318
URL: https://issues.apache.org/jira/browse/FLINK-11318
Project: Flink
begginghard created FLINK-11317:
---
Summary: test
Key: FLINK-11317
URL: https://issues.apache.org/jira/browse/FLINK-11317
Project: Flink
Issue Type: Bug
Reporter: begginghard
--
I totally agree with Chesnay here. A bot just treats the symptoms but
not the cause.
Maybe this needs no immediate action, but we as committers should aim for
more honest communication. A lot of PRs have a reason for being stale,
but instead of communicating this reason we just don't touch the
Below is the Maven build error:
[ERROR] Failed to execute goal on project flink-dist_2.11: Could not resolve
dependencies for project org.apache.flink:flink-dist_2.11:jar:1.7.0: Failure
to find org.apache.flink:flink-shaded-hadoop2-uber:jar:1.7.0 in
https://repo.maven.apache.org/maven2 was cached in the local repository
For reference, I'm still very much -1 on this.
The short version is that auto-closing PRs hides the symptoms that lead to
stale PRs in the first place.
As an example, consider flink-ml. We have a fair number of open PRs
targeting this feature that this bot would naturally close.
What are they
Gary Yao created FLINK-11316:
Summary: JarFileCreatorTest fails when run with Java 9
Key: FLINK-11316
URL: https://issues.apache.org/jira/browse/FLINK-11316
Project: Flink
Issue Type: Test
+1 to try the bot out.
Regarding auto-closing the PRs: worst case, a PR can be reopened in the
event of a false positive.
Tagging stale PRs and requiring further human intervention, on the other
hand, isn't accomplishing much in the grand scheme of things.
Cheers,
Cameron
--
On Mon, 14 Jan 2019 at 09:34,
Hi Fabian,
+1 👍
Cheers
Dhanuka
On Mon, 14 Jan 2019, 21:29 Fabian Hueske wrote:
> Hi,
>
> That's a Java limitation. Methods cannot be larger than 64kb and code that
> is generated for this predicate exceeds the limit.
> There is a Jira issue to fix the problem.
>
> In the meantime, I'd follow a hybrid ap
Hi Seth,
Thanks for the feedback. Re-caching makes sense to me. Piotr and I had some
offline discussion and we generally reached consensus on the following API:
{
    /**
     * Cache this table to builtin table service or the specified
     * customized table service.
     *
     * This method provides a hi
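As a usage sketch of the shape under discussion (everything here except the
cache() method quoted above, including the `tEnv` variable and the sample
queries, is my assumption rather than the agreed design):

    // cache() hints that the table should be materialized in the built-in
    // or customized table service for later reuse.
    Table orders = tEnv.sqlQuery("SELECT user, amount FROM orders");
    Table cached = orders.cache();
    // Both downstream operations read the cached result instead of
    // re-executing the original scan:
    Table totals = cached.groupBy("user").select("user, amount.sum as total");
    Table large = cached.filter("amount > 100");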
Hi,
That's a Java limitation. Methods cannot be larger than 64kb and code that
is generated for this predicate exceeds the limit.
There is a Jira issue to fix the problem.
In the meantime, I'd follow a hybrid approach and UNION ALL only as many
tables as you need to avoid the code compilation exception.
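To make the hybrid concrete, here is a minimal sketch (the table name
"events", the loadPredicates() helper, and the batch size are illustrative
assumptions, not from this thread): group the predicates into batches small
enough that each generated method stays under 64 KB, run one query per
batch, and UNION ALL only those few results.

    // Assumes a TableEnvironment `tEnv` with a registered table "events"
    // and ~200 self-contained predicate strings, e.g. "(a = 1 AND b = 2)".
    // Tune batchSize until code generation stays under the 64 KB limit.
    List<String> predicates = loadPredicates(); // hypothetical helper
    int batchSize = 20;
    Table result = null;
    for (int i = 0; i < predicates.size(); i += batchSize) {
        List<String> batch =
            predicates.subList(i, Math.min(i + batchSize, predicates.size()));
        Table part = tEnv.sqlQuery(
            "SELECT * FROM events WHERE " + String.join(" OR ", batch));
        result = (result == null) ? part : result.unionAll(part);
    }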
Hi Fabian,
I encountered the below error with 200 OR operators, so I guess this is a
JVM-level limitation.
Error:
of class "DataStreamCalcRule" grows beyond 64 KB
Cheers
Dhanuka
On Mon, 14 Jan 2019, 20:30 Fabian Hueske wrote:
> Hi,
>
> you should avoid the UNION ALL approach because the query will scan
yuqi created FLINK-11315:
Summary: Make the magic number in KvStateSerializer a constant to
improve readability
Key: FLINK-11315
URL: https://issues.apache.org/jira/browse/FLINK-11315
Project: Flink
I
Hi,
you should avoid the UNION ALL approach because the query will scan the
(identical?) Kafka topic 200 times, which is highly inefficient.
You should rather use your second approach and scale the query
appropriately.
Best, Fabian
On Mon, Jan 14, 2019 at 08:39, dhanuka ranasinghe wrote <
d
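For illustration of the single-scan alternative Fabian describes above (the
table and column names are invented): one query with all match conditions
combined under a single WHERE clause reads the topic once, and throughput is
scaled by raising the query's parallelism rather than by adding UNION ALL
branches.

    // One scan of the Kafka-backed table instead of 200:
    Table matches = tEnv.sqlQuery(
        "SELECT * FROM events"
        + " WHERE (userId = 1   AND action = 'click')"
        + "    OR (userId = 2   AND action = 'view')"
        // ... remaining conditions ...
        + "    OR (userId = 200 AND action = 'purchase')");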
yuqi created FLINK-11314:
Summary: Reuse RocksDBWriteBatchWrapper in the flush method of
RocksDBMapState
Key: FLINK-11314
URL: https://issues.apache.org/jira/browse/FLINK-11314
Project: Flink
Issue
Hi Chris,
I'm not sure what you want to test. As far as I know, there isn't an option
that forces the data to go through the network, and I don't think it's a
generic feature we should support. I think zhijiang has given a good
suggestion. Changing the runtime code would be a fast way to satisfy the
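If the goal is just to make sure records cross an operator boundary through
the network stack, one option (a sketch of my own, not something this thread
settled on) is to break operator chaining with an explicit redistribution
such as rebalance():

    // rebalance() forces a round-robin repartitioning, so records between
    // the source and the map travel through Flink's network stack instead
    // of a chained local call; note they can still be machine-local if
    // everything runs in a single TaskManager.
    DataStream<String> out = env
        .addSource(new MySource()) // hypothetical source
        .rebalance()
        .map(value -> value.toUpperCase());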
Yun Tang created FLINK-11313:
Summary: [checkpoint] Introduce LZ4 compression for keyed state in
full checkpoints and savepoints
Key: FLINK-11313
URL: https://issues.apache.org/jira/browse/FLINK-11313
Project: Flink
I think the automatic closing is an integral part; without it we would never
close those stale PRs that we have lying around from 2015 and 2016.
I would suggest setting the staleness interval quite high, say 2 months. Thus
initially the bot would mainly close very old PRs, and we shouldn't even notice