Hi,
In my scenario I have two streams. DS1 is the main data stream reading logs from
Kafka, and DS2 is a parameter stream which is used to maintain state
about all processing parameters (including filters) that need to be applied at
runtime by DS1. The processing parameters can be changed anytime during th
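The core of this pattern, a main stream whose filters are driven by a second, slowly changing parameter stream, can be sketched in plain Java without the Flink API. The class and method names below are invented for illustration; in a real Flink job the two paths would typically be the two inputs of a CoProcessFunction over connected streams, with the current filter held in Flink state.

```java
import java.util.function.Predicate;

// Plain-Java sketch of the control-stream idea: the DS2 path updates the
// filter parameters that the DS1 path consults on every record.
class DynamicFilter {
    // Current filter; starts permissive until a parameter arrives.
    private Predicate<String> filter = s -> true;

    // DS2 path: a new parameter record replaces the active filter.
    void onParameter(String keyword) {
        filter = s -> s.contains(keyword);
    }

    // DS1 path: each log line is checked against the current filter.
    boolean onLog(String logLine) {
        return filter.test(logLine);
    }
}

public class Main {
    public static void main(String[] args) {
        DynamicFilter f = new DynamicFilter();
        System.out.println(f.onLog("INFO start"));  // true: no filter set yet
        f.onParameter("ERROR");                     // parameter changes at runtime
        System.out.println(f.onLog("INFO start"));  // false
        System.out.println(f.onLog("ERROR disk"));  // true
    }
}
```

In Flink the same shape appears as `logs.connect(params).process(...)`, and the mutable field becomes operator state so that parameter changes survive restarts.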
@Sebastian: I am not sure Apache really has guidelines there. So far, I
thought projects establish their own policies.
The compatibility question here is not only one of APIs (code), but also of
savepoint forwarding, which is a bit different, I think. For example, 1.0
and 1.1 were not compatible there, the
Kostas Kloudas created FLINK-7060:
-
Summary: Change annotation in TypeInformation subclasses
Key: FLINK-7060
URL: https://issues.apache.org/jira/browse/FLINK-7060
Project: Flink
Issue Type: I
Kostas Kloudas created FLINK-7059:
-
Summary: Queryable state does not work with ListState
Key: FLINK-7059
URL: https://issues.apache.org/jira/browse/FLINK-7059
Project: Flink
Issue Type: Bug
Hi Paris,
Thanks for the reply. Any idea when Gelly-Stream will become part of the
official Flink distribution?
Regards,
Ameet
On Fri, Jun 30, 2017 at 8:20 PM, Paris Carbone wrote:
> Hi Ameet,
>
> Flink’s Gelly currently operates on the DataSet model.
> However, we have an experimental project w
Piotr Nowojski created FLINK-7058:
-
Summary: flink-scala-shell unintended dependencies for scala 2.11
Key: FLINK-7058
URL: https://issues.apache.org/jira/browse/FLINK-7058
Project: Flink
Issu
Yes, I know that Theo is engaged in the ML efforts, but I wasn't sure how much
he is involved in the model serving part (I thought he was more into the
online learning part).
It would be great if Theo could help here!
I just wanted to make sure that we find somebody to help bootstrapping.
Cheers, Fabi
Hi Fabian,
> However, we should keep in mind that we need a committer to bootstrap the
> new module.
Absolutely. I thought Theodore Vasiloudis could help as an initial
committer.
Is this known? He is part of the effort btw.
Best,
Stavros
On Fri, Jun 30, 2017 at 6:42 PM, Fabian Hueske wrote:
>
Thanks Stavros (and everybody else involved) for starting this effort and
bringing the discussion back to the mailing list.
As I said before, a model serving module/component would be a great feature
for Flink.
I see the biggest advantage for such a module in the integration with the
other APIs an
Nico Kruber created FLINK-7057:
--
Summary: move BLOB ref-counting from LibraryCacheManager to
BlobCache
Key: FLINK-7057
URL: https://issues.apache.org/jira/browse/FLINK-7057
Project: Flink
Issue
Hi Ameet,
Flink’s Gelly currently operates on the DataSet model.
However, we have an experimental project with Vasia (Gelly-Stream) that does
exactly that.
You can check it out and let us know directly what you think:
https://github.com/vasia/gelly-streaming
Paris
On 30 Jun 2017, at 13:17, Ame
Nico Kruber created FLINK-7056:
--
Summary: add API to allow job-related BLOBs to be stored
Key: FLINK-7056
URL: https://issues.apache.org/jira/browse/FLINK-7056
Project: Flink
Issue Type: Sub-tas
Nico Kruber created FLINK-7055:
--
Summary: refactor BlobService#getURL() methods to return a File
object
Key: FLINK-7055
URL: https://issues.apache.org/jira/browse/FLINK-7055
Project: Flink
Issu
Nico Kruber created FLINK-7054:
--
Summary: remove LibraryCacheManager#getFile()
Key: FLINK-7054
URL: https://issues.apache.org/jira/browse/FLINK-7054
Project: Flink
Issue Type: Sub-task
Nico Kruber created FLINK-7053:
--
Summary: improve code quality in some tests
Key: FLINK-7053
URL: https://issues.apache.org/jira/browse/FLINK-7053
Project: Flink
Issue Type: Sub-task
C
Nico Kruber created FLINK-7052:
--
Summary: remove NAME_ADDRESSABLE mode
Key: FLINK-7052
URL: https://issues.apache.org/jira/browse/FLINK-7052
Project: Flink
Issue Type: Sub-task
Compone
Timo Walther created FLINK-7051:
---
Summary: Bump up Calcite version to 1.14
Key: FLINK-7051
URL: https://issues.apache.org/jira/browse/FLINK-7051
Project: Flink
Issue Type: New Feature
Hi,
Can anyone please point me to examples of streaming graph processing based
on Gelly.
Regards,
Ameet
Based on Kurt's scenario, if the cumulator allocates a big ByteBuf from the
ByteBufAllocator during expansion, it can easily result in creating a new
PoolChunk (16 MB) because there is no contiguous memory in the current PoolChunks.
And this will cause the total used direct memory to go beyond the estimate.
For further
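As a rough illustration of why expansion is costly, here is a plain-NIO sketch; this is not Netty's actual Cumulator API, and the class and method names are invented. When the accumulated bytes outgrow the current buffer, a new and larger direct buffer is allocated and the old contents are copied over. In Netty's pooled allocator, a large request that no existing PoolChunk can serve from contiguous free space forces a whole new 16 MB PoolChunk, which is how the direct memory use can exceed the estimate.

```java
import java.nio.ByteBuffer;

// Plain-NIO sketch of cumulator-style buffer expansion (not Netty code).
public class CumulatorSketch {
    // Grow the buffer if it cannot hold `needed` more bytes: allocate a
    // fresh, larger direct buffer and copy the accumulated contents.
    static ByteBuffer expand(ByteBuffer buf, int needed) {
        if (buf.remaining() >= needed) return buf;
        int newCap = Math.max(buf.capacity() * 2, buf.position() + needed);
        ByteBuffer bigger = ByteBuffer.allocateDirect(newCap); // fresh allocation
        buf.flip();
        bigger.put(buf); // copy old contents into the new buffer
        return bigger;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(8);
        buf.put(new byte[6]);               // 6 bytes accumulated
        buf = expand(buf, 10);              // needs 10 more: must grow
        System.out.println(buf.capacity()); // 16
    }
}
```

In the pooled-allocator case the `allocateDirect` step is where a new PoolChunk can be created if no existing chunk has a contiguous region of the requested size.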
Usman Younas created FLINK-7050:
---
Summary: RFC Compliant CSV Parser for Table Source
Key: FLINK-7050
URL: https://issues.apache.org/jira/browse/FLINK-7050
Project: Flink
Issue Type: Improvement
Hi all,
After coordinating with Theodore Vasiloudis and the guys behind the Flink
Model Serving effort (Eron, Radicalbit people, Boris, Bas (ING)), we
propose to start working on model serving over Flink in a more official
way.
That translates to capturing design details in a FLIP document.
P
Hi,
Ufuk wrote up an excellent document about Netty's memory allocation [1]
inside Flink, and I want to add one more note after running some
large-scale jobs.
The only inaccurate thing about [1] is how much memory
LengthFieldBasedFrameDecoder
will use. From our observations, it will cost at m
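For readers unfamiliar with what LengthFieldBasedFrameDecoder does, here is a minimal plain-NIO sketch of length-prefixed framing (not the Netty API; the class and method names are invented): the decoder must buffer incoming bytes until a complete frame has arrived, and that buffering is the memory cost under discussion.

```java
import java.nio.ByteBuffer;

// Plain-NIO sketch of length-field framing: a 4-byte length prefix
// followed by the payload. Incomplete frames stay buffered.
public class FrameSketch {
    // Returns the payload if the buffer holds a complete frame, else null
    // (leaving the buffer position untouched so more bytes can be appended).
    static byte[] tryDecode(ByteBuffer in) {
        if (in.remaining() < 4) return null;
        in.mark();
        int len = in.getInt();
        if (in.remaining() < len) { in.reset(); return null; } // wait for more bytes
        byte[] payload = new byte[len];
        in.get(payload);
        return payload;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.putInt(5).put("hello".getBytes());
        buf.flip();
        System.out.println(new String(tryDecode(buf))); // hello
    }
}
```

The real decoder additionally accumulates partial reads across network events, which is exactly where its memory footprint grows with frame size and connection count.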