Hi,
When implementing the InputFormat interface, if we already have the input-split
part in our data analytics server APIs, can we directly go to the
second phase that you described earlier?
Since our data source has a database-table architecture, I am thinking of
following that 'JDBCInputF
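The split-based reading a JDBC-style format relies on can be sketched in plain Java, independent of Flink's actual API: the format first partitions the table's key space into ranges, and each range is later read as one parallel split (for a JDBC source, each range typically becomes a WHERE clause). All names below (SimpleInputFormat, RangeSplit) are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only; not Flink's InputFormat API.
class SimpleInputFormat {

    // A split describes one independently readable chunk, e.g. a key range
    // that a JDBC reader would turn into "WHERE id >= start AND id < end".
    static class RangeSplit {
        final long start; // inclusive
        final long end;   // exclusive
        RangeSplit(long start, long end) { this.start = start; this.end = end; }
    }

    // Divide the table's key space [min, max) into roughly numSplits ranges.
    static List<RangeSplit> createSplits(long min, long max, int numSplits) {
        List<RangeSplit> splits = new ArrayList<>();
        long size = (max - min + numSplits - 1) / numSplits; // ceiling division
        for (long s = min; s < max; s += size) {
            splits.add(new RangeSplit(s, Math.min(s + size, max)));
        }
        return splits;
    }
}
```

Each split can then be handed to a parallel reader task, which is the "second phase" referred to above.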
Hi Fabian,
Thanks for your detailed response and sorry for the late reply. Your
points all make sense to me, and here are some thoughts on your open
questions:
- Regarding tables without sufficient statistics, especially the kind
of "dynamic" table derived from some arbitrary DataSe
godfrey he created FLINK-5571:
-
Summary: add open/close methods for UserDefinedFunction in
TableAPI & SQL
Key: FLINK-5571
URL: https://issues.apache.org/jira/browse/FLINK-5571
Project: Flink
Iss
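The open/close lifecycle this issue proposes can be illustrated with a plain-Java sketch. The class below is hypothetical and only mirrors the idea (acquire resources once in open(), use them per call in eval(), release them in close()); it is not Flink's actual UserDefinedFunction interface:

```java
// Hypothetical sketch of a UDF lifecycle; names are illustrative only.
class RichScalarFunction {
    private boolean ready = false;
    private int calls = 0;

    // open() would acquire per-task resources (connections, caches) once,
    // before any row is processed.
    public void open() { ready = true; }

    // eval() is invoked once per record and may rely on resources set up
    // in open().
    public int eval(int x) {
        if (!ready) throw new IllegalStateException("open() was not called");
        calls++;
        return x * 2;
    }

    // close() releases the resources acquired in open().
    public void close() { ready = false; }

    public int getCalls() { return calls; }
}
```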
Kurt Young created FLINK-5570:
-
Summary: Support register external catalog to table environment
Key: FLINK-5570
URL: https://issues.apache.org/jira/browse/FLINK-5570
Project: Flink
Issue Type: Su
Kurt Young created FLINK-5569:
-
Summary: Migrate current table registration to in-memory catalog
Key: FLINK-5569
URL: https://issues.apache.org/jira/browse/FLINK-5569
Project: Flink
Issue Type: S
Kurt Young created FLINK-5568:
-
Summary: Introduce interface for catalog, and provide an in-memory
implementation
Key: FLINK-5568
URL: https://issues.apache.org/jira/browse/FLINK-5568
Project: Flink
Kurt Young created FLINK-5567:
-
Summary: Introduce and migrate current table statistics to
FlinkStatistics
Key: FLINK-5567
URL: https://issues.apache.org/jira/browse/FLINK-5567
Project: Flink
Is
Kurt Young created FLINK-5566:
-
Summary: Introduce structure to hold table and column level
statistics
Key: FLINK-5566
URL: https://issues.apache.org/jira/browse/FLINK-5566
Project: Flink
Issue
Kurt Young created FLINK-5565:
-
Summary: Improve flink cost model framework
Key: FLINK-5565
URL: https://issues.apache.org/jira/browse/FLINK-5565
Project: Flink
Issue Type: Improvement
Hi everyone,
I have drafted the design doc (link is provided below) for UDAGG, and
created the JIRA (FLINK-5564) to track the progress of this design.
Special thanks to Stephan and Fabian for their advice and help.
Please check the design doc, feel free to share your comments in the google
doc:
ht
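A user-defined aggregate typically follows an accumulator pattern: create an accumulator, fold each input row into it, then emit a final value. The sketch below is a plain-Java illustration under that assumption; the method names need not match the API the FLINK-5564 design doc finally settles on:

```java
// Illustrative accumulator pattern for a weighted average; not the
// actual FLINK-5564 API.
class WeightedAvgAccumulator {
    long sum = 0;
    long count = 0;
}

class WeightedAvg {
    // Fresh intermediate state for one group.
    public WeightedAvgAccumulator createAccumulator() {
        return new WeightedAvgAccumulator();
    }

    // Called once per input row: fold the value (with its weight) into
    // the intermediate state.
    public void accumulate(WeightedAvgAccumulator acc, long value, long weight) {
        acc.sum += value * weight;
        acc.count += weight;
    }

    // Emit the final result from the accumulator.
    public long getValue(WeightedAvgAccumulator acc) {
        return acc.count == 0 ? 0 : acc.sum / acc.count;
    }
}
```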
Shaoxuan Wang created FLINK-5564:
Summary: User Defined Aggregates
Key: FLINK-5564
URL: https://issues.apache.org/jira/browse/FLINK-5564
Project: Flink
Issue Type: Improvement
Compo
Greg Hogan created FLINK-5562:
-
Summary: Driver fixes
Key: FLINK-5562
URL: https://issues.apache.org/jira/browse/FLINK-5562
Project: Flink
Issue Type: Bug
Components: Gelly
Affects
Greg Hogan created FLINK-5563:
-
Summary: Add density to vertex metrics
Key: FLINK-5563
URL: https://issues.apache.org/jira/browse/FLINK-5563
Project: Flink
Issue Type: Improvement
Compo
Nico Kruber created FLINK-5561:
--
Summary: DataInputDeserializer#available returns one too few
Key: FLINK-5561
URL: https://issues.apache.org/jira/browse/FLINK-5561
Project: Flink
Issue Type: Bug
Ufuk Celebi created FLINK-5560:
--
Summary: Header in checkpoint stats summary misaligned
Key: FLINK-5560
URL: https://issues.apache.org/jira/browse/FLINK-5560
Project: Flink
Issue Type: Bug
Nico Kruber created FLINK-5559:
--
Summary: queryable state:
KvStateRequestSerializer#deserializeKeyAndNamespace() throws an IOException
without its own failure message if deserialization fails
Key: FLINK-5559
URL: https:
Greg Hogan created FLINK-5558:
-
Summary: Replace TriangleCount with a Count analytic
Key: FLINK-5558
URL: https://issues.apache.org/jira/browse/FLINK-5558
Project: Flink
Issue Type: Improvement
Greg Hogan created FLINK-5557:
-
Summary: Fix link in library methods
Key: FLINK-5557
URL: https://issues.apache.org/jira/browse/FLINK-5557
Project: Flink
Issue Type: Improvement
Compone
Hello, fellow Apache enthusiast. Thanks for your participation, and
interest in, the projects of the Apache Software Foundation.
I wanted to remind you that the Call For Papers (CFP) for ApacheCon
North America, and Apache: Big Data North America, closes in less than a
month. If you've been puttin
Ufuk Celebi created FLINK-5556:
--
Summary: BarrierBuffer resets bytes written on spiller roll over
Key: FLINK-5556
URL: https://issues.apache.org/jira/browse/FLINK-5556
Project: Flink
Issue Type:
I'm +0 on switching to a pre-determined schedule. It may be that the Flink
codebase has reached a level of maturity allowing for a time-based release
schedule, and I'm hopeful that a known schedule will improve communication
about and expectations for new features.
I'd like to hear a retrospective
Robert Metzger created FLINK-:
-
Summary: Add documentation about debugging watermarks
Key: FLINK-
URL: https://issues.apache.org/jira/browse/FLINK-
Project: Flink
Issue Type: Sub-
Anton Solovev created FLINK-5554:
Summary: Add sql operator to table api for getting columns from
HBase
Key: FLINK-5554
URL: https://issues.apache.org/jira/browse/FLINK-5554
Project: Flink
I
Thanks everybody for working on the website!
2017-01-18 15:24 GMT+01:00 Robert Metzger :
> Cool, thank you for merging!
>
> Once we've got enough feedback here that it's working, I'll also tweet about
> it from @ApacheFlink.
>
> On Wed, Jan 18, 2017 at 3:17 PM, Ufuk Celebi wrote:
>
> > The update
Robert Metzger created FLINK-5553:
-
Summary: Job fails during deployment with IllegalStateException
from subpartition request
Key: FLINK-5553
URL: https://issues.apache.org/jira/browse/FLINK-5553
Proj
Cool, thank you for merging!
Once we've got enough feedback here that it's working, I'll also tweet about
it from @ApacheFlink.
On Wed, Jan 18, 2017 at 3:17 PM, Ufuk Celebi wrote:
> The updated page has been merged: http://flink.apache.org/
>
> Would appreciate it if you took a minute to just br
PS: Make sure to clear your caches. Otherwise there can be some weirdness. :D
On Wed, Jan 18, 2017 at 3:17 PM, Ufuk Celebi wrote:
> The updated page has been merged: http://flink.apache.org/
>
> Would appreciate it if you took a minute to just browse the web page
> and look out for any left over
The updated page has been merged: http://flink.apache.org/
Would appreciate it if you took a minute to just browse the web page
and look out for any left over errors.
On Tue, Jan 10, 2017 at 2:35 PM, Till Rohrmann wrote:
> Great work Mike :-) I like the new web page a lot.
>
> Some comments:
>
>
Hi Tao,
first of all welcome to the Flink community and thank you for your
contributions!
It is good practice to assign JIRA issues to yourself if you plan to work
on them to avoid duplicate work.
If you tell me your JIRA user I can give you contributor permissions which
will allow you to assign
Hi all,
I am a newbie in the community. Recently I have contributed some code to
Flink. The PRs have been merged, but I didn't assign the issues to myself. Is it
necessary to assign the issues before coding? If it is, who can assign the
issues?
Thanks.
Hi,
Thanks Fabian and Chesnay for providing that information.
Pawan
On Wed, Jan 18, 2017 at 2:11 PM, Chesnay Schepler
wrote:
> Hello,
>
> The dependencies are fine.
>
> The short answer is: I would recommend you read up on Java generics.
>
> The long answer is that OT and T are just placeho
david.wang created FLINK-5552:
-
Summary: Make the JMX port available through RESTful API
Key: FLINK-5552
URL: https://issues.apache.org/jira/browse/FLINK-5552
Project: Flink
Issue Type: Improveme
Andrey created FLINK-5551:
-
Summary: NPE at SourceStreamTask
Key: FLINK-5551
URL: https://issues.apache.org/jira/browse/FLINK-5551
Project: Flink
Issue Type: Bug
Components: DataStream API
jiwengang created FLINK-5550:
Summary: NotFoundException: Could not find job with id
Key: FLINK-5550
URL: https://issues.apache.org/jira/browse/FLINK-5550
Project: Flink
Issue Type: Bug
Robert Metzger created FLINK-5549:
-
Summary: TypeExtractor fails with RuntimeException, but should use
GenericTypeInfo
Key: FLINK-5549
URL: https://issues.apache.org/jira/browse/FLINK-5549
Proj
Thanks a lot for the positive feedback so far.
Thank you Fabian for spotting the off by one error in my email.
"There are two hard things in computer science: cache invalidation, naming
things, and off-by-one errors." (https://twitter.com/codinghorror/status/
506010907021828096?lang=en)
I agree w
@Robert: I really like this. +1 to implement this after 1.2.0 is released.
Small note about your release dates: you started with 1.3.0 but probably meant
1.2.0 right?
On 18 January 2017 at 09:57:31, Tzu-Li (Gordon) Tai (tzuli...@apache.org) wrote:
> Hi Robert,
>
> Thanks for bringing up the di
Hi Robert,
thanks a lot for starting this discussion and for putting together the wiki
pages.
This proposal makes a lot of sense to me.
Big +1 for merging only features which are tested and *documented*.
I believe that having a clear timeline will not only make users happier but
also contributor
Fabian Hueske created FLINK-5548:
Summary: Move
Key: FLINK-5548
URL: https://issues.apache.org/jira/browse/FLINK-5548
Project: Flink
Issue Type: Improvement
Components: Table API &
Fabian Hueske created FLINK-5547:
Summary: Move checks for DataSetRel validity into constructor
Key: FLINK-5547
URL: https://issues.apache.org/jira/browse/FLINK-5547
Project: Flink
Issue Type
Syinchwun Leo created FLINK-5546:
Summary: When multiple users run test, /tmp/cacheFile conflicts
Key: FLINK-5546
URL: https://issues.apache.org/jira/browse/FLINK-5546
Project: Flink
Issue Ty
Hi Robert,
Thanks for bringing up the discussion. I like the proposal.
Regarding some of the downsides mentioned in the wiki:
1. Features that don’t make it in time with the feature freeze:
I think that’s ok, as long as we’re consistent with the schedules for the next
release. This way users wa
In general, I like the idea of time-based releases.
For the development process this would mean that we would need to fork off
feature branches and work on those until the feature can be merged back
into master.
We did that already in the past when porting the Table API to Apache
Calcite and for t
Hello,
The dependencies are fine.
The short answer is: I would recommend you read up on Java generics.
The long answer is that OT and T are just placeholders for types that
are supposed to be replaced.
You can either provide the type in your implementation:
(in this example, the ReadFromFi
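The generics point can be made concrete with a toy interface (the name MyFormat is made up for this example): OT and T are placeholders that a concrete implementation pins to real types, just as an InputFormat implementation pins its output type:

```java
// Toy generic interface; OT and T are type placeholders, nothing more.
interface MyFormat<OT, T> {
    OT convert(T raw);
}

// The implementing class fixes the placeholders to concrete types:
// OT becomes Integer, T becomes String.
class LineLengthFormat implements MyFormat<Integer, String> {
    @Override
    public Integer convert(String raw) {
        return raw.length();
    }
}
```

Once the types are fixed, the compiler enforces them at every call site, which is the whole point of the placeholders.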
Hi all!
Since the 1.2.0 release is about to come out, I would like to propose a
change in the way we do releases in the Flink community.
In my opinion, the current model leads to dissatisfaction among users and
contributors, because releases are really not predictable. A recent example
for the is