Yes, that would be the way to go.
We could follow the Cask CDAP Hydrator plugin repository [1], which supports
different plugins that run against their main CDAP Hydrator [2] product.
- Henry
[1] https://github.com/caskdata/hydrator-plugins
[2] https://github.com/caskdata/cdap
On Mon, Dec 14, 2015 at 1:49 AM
>
> Regarding Max's suggestion to have version-compatible connectors: I'm not
> sure if we are able to maintain all connectors across different releases.
>
That was not my suggestion. Whenever we release, existing connectors should
be compatible with that release. Otherwise, they should be removed […]
Regarding Max's suggestion to have version-compatible connectors: I'm not
sure if we are able to maintain all connectors across different releases. I
think it's okay to have a document describing the minimum required Flink
version for each connector.
With the interface stability guarantees from 1.0 on […]
Yes, absolutely. Setting up another repository for Flink ML would be no problem.
On Sat, Dec 12, 2015 at 1:52 AM, Henry Saputra wrote:
> I had a small chat with Till about how to help manage Flink ML library
> contributions, which use Flink ML as a dependency.
>
> I suppose if this approach is the way to go for Flink connectors,
> could we do the same for Flink ML libraries?
I had a small chat with Till about how to help manage Flink ML library
contributions, which use Flink ML as a dependency.
I suppose if this approach is the way to go for Flink connectors,
could we do the same for Flink ML libraries?
- Henry
On Fri, Dec 11, 2015 at 1:33 AM, Maximilian Michels wrote:
We should have release branches which are in sync with the release
branches in the main repository. Connectors should be compatible
across minor releases. The versioning could be of the form
"flinkversion-connectorversion", e.g. 0.10-connector1.
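For illustration, Max's "flinkversion-connectorversion" scheme might surface
in a downstream project like this (the artifact name below is hypothetical;
only the version pattern comes from the proposal):

```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka</artifactId>
  <!-- "flinkversion-connectorversion": Flink 0.10 line, first connector release -->
  <version>0.10-connector1</version>
</dependency>
```

That way, users can pick up connector fixes without waiting for a new Flink
core release, while the prefix still signals which release branch the
connector tracks.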
> The pluggable architecture is great! (why don't we call it Flink plugins?
> my 2 cents)
+1 from my side as well. Good idea.
On Thu, Dec 10, 2015 at 11:00 PM, jun aoki wrote:
> The pluggable architecture is great! (why don't we call it Flink plugins?
> my 2 cents)
> It would be nice to come up with an idea of what the directory structure
> should look like before we start dumping connectors (plugins).
The pluggable architecture is great! (why don't we call it Flink plugins?
my 2 cents)
It would be nice to come up with an idea of what the directory structure
should look like before we start dumping connectors (plugins).
I also wonder what to do about versioning.
At some point, for example, the Twitter v1 connector […]
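As a concrete sketch of what such a layout could look like (purely
hypothetical names here, one Maven module per connector, with a parent POM
pinning the targeted Flink release line):

```
flink-connectors/                  (hypothetical repository name)
├── pom.xml                        (parent POM, pins the Flink version)
├── flink-connector-kafka/
├── flink-connector-rabbitmq/
└── flink-connector-twitter/
```

One module per connector would let each connector be versioned and released
independently while sharing common build settings through the parent.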
We would need a stable interface between the connectors and Flink, and very
good checks to ensure that we don't inadvertently break things.
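One way to make that contract explicit (a sketch only; the annotation and
interface names below are hypothetical illustrations, not Flink's actual
API) is to mark the connector-facing types with a stability annotation that
automated compatibility checks can key on:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical stability marker: types carrying it must stay
// source- and binary-compatible within a release line.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface StableConnectorApi {}

// Hypothetical connector-facing contract. If connectors compile only
// against @StableConnectorApi types, a japicmp-style comparison against
// the previous release can flag accidental breakage automatically.
@StableConnectorApi
interface SinkConnector<T> {
    void open();
    void invoke(T record) throws Exception;
    void close();
}

public class StabilityCheckSketch {
    public static void main(String[] args) {
        // Verify the marker is visible at runtime, as a checking tool would need.
        boolean stable = SinkConnector.class
                .isAnnotationPresent(StableConnectorApi.class);
        System.out.println("SinkConnector stable API: " + stable);
    }
}
```

A CI job could then fail the build whenever a `@StableConnectorApi` type
changes incompatibly between releases.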
> On 10 Dec 2015, at 15:45, Fabian Hueske wrote:
>
> Sounds like a good idea to me.
>
> +1
>
> Fabian
>
> 2015-12-10 15:31 GMT+01:00 Maximilian Michels :
I like this a lot. It has multiple advantages:
- Obviously more frequent connector updates without being forced to go to
a snapshot version
- Reduced complexity and build time of the core Flink repository.
We should make sure that, for example, 0.10.x connectors always work with
0.10.x Flink core.
Sounds like a good idea to me.
+1
Fabian
2015-12-10 15:31 GMT+01:00 Maximilian Michels :
> Hi squirrels,
>
> By this time, we have numerous connectors which let you insert data
> into Flink or output data from Flink.
>
> On the streaming side we have
>
> - RollingSink
> - Flume
> - Kafka
> - Nifi
Hi squirrels,
By this time, we have numerous connectors which let you insert data
into Flink or output data from Flink.
On the streaming side we have
- RollingSink
- Flume
- Kafka
- Nifi
- RabbitMQ
- Twitter
On the batch side we have
- Avro
- Hadoop compatibility
- HBase
- HCatalog
- JDBC
[…]