In my experience a basic "official" (but optional) program description
would indeed be very useful (it would ease the integration with other
frameworks).
Of course it should be extended and integrated with the REST services and
the Web UI (when defined) in order to be really useful.
It would ease showing t
Jul 19, 2019 at 11:14 AM Biao Liu wrote:
> To Flavio, good point for the integration suggestion.
>
> I think it should be considered in the "Flink client api enhancement"
> discussion. But the outdated API should be deprecated somehow.
>
> Flavio Pompermaier wrote on 2019
eze and release testing effort.
> > >
> > > I personally still recognize this issue as an important one to solve.
> > > I'd be happy to help resume this discussion soon (after the 1.9 release)
> > > and see if we can do some step towards th
> >>
> https://docs.google.com/document/d/1E-8UjOLz4QPUTxetGWbU23OlsIH9VIdodpTsxwoQTs0/edit#heading=h.na7k0ad88tix
> >>
Hi all,
I've tried to migrate my very simple Elasticsearch SourceFunction (which
uses the scroll API and produces batches of documents) to the new Source
API, but I gave up because it's too complicated. It should be much simpler
to migrate such a function to a bounded or unbounded source.
Before removing completely
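For reference, a minimal sketch of the kind of scroll-based SourceFunction
described above, assuming the Elasticsearch 7.x high-level REST client;
the host, index name, and batch size are illustrative:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchScrollRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class EsScrollSource extends RichSourceFunction<String> {

  private volatile boolean running = true;
  private transient RestHighLevelClient client;

  @Override
  public void open(Configuration parameters) {
    client = new RestHighLevelClient(
        RestClient.builder(new HttpHost("localhost", 9200, "http")));
  }

  @Override
  public void run(SourceContext<String> ctx) throws Exception {
    // The first request opens the scroll and fetches the first batch.
    SearchRequest request = new SearchRequest("my-index");
    request.scroll(TimeValue.timeValueMinutes(1));
    request.source(new SearchSourceBuilder().size(1000));
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);

    String scrollId = response.getScrollId();
    SearchHit[] hits = response.getHits().getHits();
    while (running && hits.length > 0) {
      for (SearchHit hit : hits) {
        ctx.collect(hit.getSourceAsString());
      }
      // Fetch the next batch using the scroll id.
      SearchScrollRequest scroll = new SearchScrollRequest(scrollId);
      scroll.scroll(TimeValue.timeValueMinutes(1));
      response = client.scroll(scroll, RequestOptions.DEFAULT);
      scrollId = response.getScrollId();
      hits = response.getHits().getHits();
    }
  }

  @Override
  public void cancel() {
    running = false;
  }

  @Override
  public void close() throws Exception {
    if (client != null) {
      client.close();
    }
  }
}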
Hi Gyula,
thanks for taking care of integrating Flink with Atlas (and, in the end,
the Egeria initiative), which is IMHO the most important part of the whole
Hadoop ecosystem and one that, unfortunately, has been quite overlooked. I
can confirm that the integration with Atlas/Egeria is absolutely of big
interest.
On
+1 for dropping all Elasticsearch connectors < 6.x
On Mon, Feb 10, 2020 at 2:45 PM Dawid Wysakowicz
wrote:
> Hi all,
>
> As described in the https://issues.apache.org/jira/browse/FLINK-11720
> ticket, our Elasticsearch 5.x connector does not work out of the box on
> some systems and requires a v
I use Eclipse, but the stuff added in the pom.xml to improve the
out-of-the-box experience is pretty useless; I always have to change it.
On Fri, Feb 28, 2020 at 4:01 PM Chesnay Schepler wrote:
> Hello,
>
> in various maven pom.xml we have some plugin definitions exclusively to
> increase support
Big +1 from my side. I'd be very interested in what Jeff proposed, in
particular everything related to client part (job submission, workflow
management, callbacks on submission/success/failure, etc).
Something I can't find anywhere is also how to query Flink state... would
it be possible to have som
quickstart archetypes)?
>
> On Fri, Feb 28, 2020 at 4:10 PM Chesnay Schepler
> wrote:
>
>> What do you have to change it to?
>>
>> What happens if you just remove it completely?
>>
>> On 28/02/2020 16:08, Flavio Pompermaier wrote:
>> > I use Eclip
Yes, in my experience... I always asked myself if I was the only one using
Eclipse... :D
On Tue, Mar 3, 2020 at 2:33 PM Chesnay Schepler wrote:
> To clarify, the whole lifecycle-mapping business is both unnecessary and
> actively harmful?
>
> On 03/03/2020 14:18, Flavio Pompe
+1 (non-binding).
There's also a related issue that I opened a long time ago,
https://issues.apache.org/jira/browse/FLINK-10879, that could be closed
once this FLIP is implemented (or closed immediately and marked as a
duplicate of the new JIRA ticket that would be created
On Thu, Mar 12, 2020 at 1
Hello everybody,
I started a new FLIP to discuss an HBaseCatalog implementation [1]
after the opening of the related issue by Bowen [2].
I drafted a very simple version of the FLIP just to discuss the
critical points (in red) in order to decide how to proceed.
Best,
Flavio
[1]
https:/
Hi all,
what do you think about exploiting this job-submission sprint to also
address the problem discussed in https://issues.apache.org/jira/browse/FLINK-10862?
Best,
Flavio
, Mar 30, 2020 at 11:38 AM Aljoscha Krettek
wrote:
> On 18.03.20 14:45, Flavio Pompermaier wrote:
> > what do you think about exploiting this job-submission sprint to also
> > address the problem discussed in
> https://issues.apache.org/jira/browse/FLINK-10862?
>
> That
Hi all,
just a remark about the Flink REST APIs (and their client as well): almost
every time we need a way to dynamically know which jobs are contained in
a jar file, and this could be exposed by the REST endpoint under
/jars/:jarid/entry-points (a simple way to implement this would be to check
t
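A minimal sketch of what such an entry-point check could look like
server-side, assuming we simply scan the jar's classes for a public static
main(String[]) (manifest handling, nested jars, etc. are left out):

import java.io.File;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class EntryPointScanner {

  /** Returns the names of all classes in the jar that declare a static main(String[]). */
  public static List<String> findEntryPoints(String jarPath) throws Exception {
    List<String> entryPoints = new ArrayList<>();
    URL jarUrl = new File(jarPath).toURI().toURL();
    try (JarFile jar = new JarFile(jarPath);
         URLClassLoader loader = new URLClassLoader(
             new URL[] {jarUrl}, EntryPointScanner.class.getClassLoader())) {
      Enumeration<JarEntry> entries = jar.entries();
      while (entries.hasMoreElements()) {
        String name = entries.nextElement().getName();
        if (!name.endsWith(".class") || name.contains("$")) {
          continue;
        }
        String className =
            name.replace('/', '.').substring(0, name.length() - ".class".length());
        try {
          // Load without static initialization, just to inspect the class.
          Class<?> clazz = Class.forName(className, false, loader);
          Method main = clazz.getDeclaredMethod("main", String[].class);
          if (Modifier.isStatic(main.getModifiers())) {
            entryPoints.add(className);
          }
        } catch (Throwable ignored) {
          // Missing transitive dependencies etc.: not a candidate entry point.
        }
      }
    }
    return entryPoints;
  }
}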
I'm obviously in favor of promoting the usage of this amazing library but,
maybe, in this early stage I'd try to keep it as a separate project.
However, this really depends on how frequently the code is going to
change... the Flink main repo is becoming more and more complex to handle
due to the incr
Definitely on the same page... +1 to keep it in a separate repo (at least
until the code becomes "stable" and widely adopted by the community)
On Tue, Oct 15, 2019, 23:17 Stephan Ewen wrote:
> Hi Flink folks!
>
> After the positive reaction to the contribution proposal for Stateful
> Function
Hi all,
we use multiple jobs in one program a lot, and this is why: when you
fetch data from a huge number of sources, you do some transformation for
each source, and then you want to write the union of all outputs into a
single directory (this assumes you're doing batch). When the numb
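A minimal DataSet-API sketch of the pattern being described; the paths and
the per-source transformation are illustrative:

import java.util.Arrays;
import java.util.List;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class UnionManySources {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    List<String> inputPaths = Arrays.asList("hdfs:///in/a", "hdfs:///in/b", "hdfs:///in/c");

    DataSet<String> union = null;
    for (String path : inputPaths) {
      // Per-source transformation, then union everything into one dataset.
      DataSet<String> transformed = env.readTextFile(path)
          .map(new MapFunction<String, String>() {
            @Override
            public String map(String line) {
              return line.toUpperCase();
            }
          });
      union = (union == null) ? transformed : union.union(transformed);
    }

    // Single output directory for the union of all sources.
    union.writeAsText("hdfs:///out/result");
    env.execute("union-many-sources");
  }
}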
+1 to drop the old UI
On Thu, Nov 21, 2019 at 1:05 PM Chesnay Schepler wrote:
> Hello everyone,
>
> Flink 1.9 shipped with a new UI, with the old one being kept around as a
> backup in case something wasn't working as expected.
>
> Currently there are no issues indicating any significant problem
Why not also add a suggest() method (also unimplemented initially) that
would return the list of suitable completions/tokens for the current query?
How complex would it be to implement, in your opinion?
On Fri, Jan 17, 2020, 18:32 Fabian Hueske (Jira) wrote:
> Fabian Hueske created FLINK-1
Ok, thanks for the pointer, I wasn't aware of that!
On Sun, Jan 19, 2020, 03:00 godfrey he wrote:
> hi Flavio, TableEnvironment.getCompletionHints maybe already meets the
> requirement.
>
> Flavio Pompermaier wrote on Sat, Jan 18, 2020 at 3:39 PM:
>
> > Why not adding als
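For reference, a minimal sketch of the pointer godfrey mentions, assuming a
blink-planner table environment of that era; the statement and cursor
position are illustrative:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CompletionHintsExample {
  public static void main(String[] args) {
    TableEnvironment tEnv = TableEnvironment.create(
        EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());

    // Ask the planner for completion candidates at the cursor position.
    String statement = "SELECT * FROM ";
    String[] hints = tEnv.getCompletionHints(statement, statement.length());
    for (String hint : hints) {
      System.out.println(hint);
    }
  }
}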
Hi all,
I'm happy to see a lot of interest in easing the integration with JDBC data
sources. Maybe this could be a rare situation (not in my experience,
however...) but what if I have to connect to the same type of source (e.g.
MySQL) with 2 incompatible versions...? How can I load the 2 (or more)
con
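One way this is sometimes worked around (a sketch, not something Flink
offers out of the box) is to load each driver version in its own child
classloader and call the driver directly, since DriverManager only sees
drivers from the caller's classloader; the jar paths and class names below
are illustrative:

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.sql.Connection;
import java.sql.Driver;
import java.util.Properties;

public class IsolatedJdbcDriver {

  /** Loads the given driver class from the given jar and opens a connection with it. */
  public static Connection connect(String driverJar, String driverClass,
                                   String url, Properties props) throws Exception {
    URLClassLoader loader = new URLClassLoader(
        new URL[] {new File(driverJar).toURI().toURL()},
        IsolatedJdbcDriver.class.getClassLoader());
    Driver driver = (Driver) Class.forName(driverClass, true, loader)
        .getDeclaredConstructor().newInstance();
    // Bypass DriverManager, which ignores drivers from foreign classloaders.
    return driver.connect(url, props);
  }
}

With this, connect("mysql-5.jar", "com.mysql.jdbc.Driver", ...) and
connect("mysql-8.jar", "com.mysql.cj.jdbc.Driver", ...) could coexist in
one job.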
+1. Totally agree
On Sat, 12 May 2018, 18:14 Christophe Jolif, wrote:
> Hi all,
>
> For quite some time now the Flink Elasticsearch sink has been broken for
> Elasticsearch 5.x (nearly a year):
>
> https://issues.apache.org/jira/browse/FLINK-7386
>
> And there is no support for Elasticsearch 6.x:
>
> htt
Hi to all,
we were trying to run a 1.5 Flink job and we set the version to
1.5-SNAPSHOT.
Unfortunately the 1.5-SNAPSHOT version uploaded on the Apache snapshot repo
is very old (February 2018). Shouldn't this version be updated as well?
Best,
Flavio
updated the version on master to 1.6-SNAPSHOT.
> >
> > Best, Fabian
> >
> > 2018-05-14 15:51 GMT+02:00 Flavio Pompermaier :
> >
> > > Hi to all.
> > > we were trying to run a 1.5 Flink job and we set the version to
> > > 1.5-SNAPSHOT.
I think that it is important to have a nice "official" (or at least free)
Flink UI; we use it to see the details of the jobs.
It's very useful for people starting to work with Flink and also for those
who do not have the resources to write a custom UI.
How are you going to monitor the status of a
Hi to all,
in our ETL we need to call an external (REST) service once a job ends: we
extract information about accumulators and we update the job status.
However this is only possible when using the CLI client: if we call the job
via the REST API or Web UI (which is very useful to decouple our UI from
way, as such please open a JIRA.
>
> On 12.11.2018 12:50, Flavio Pompermaier wrote:
> > Hi to all,
> > in our ETL we need to call an external (REST) service once a job ends: we
> > extract information about accumulators and we update the job status.
> > However this is
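For the record, a sketch of the CLI-side pattern being described; the
callback endpoint and payload are illustrative, and the trivial pipeline
only stands in for the real job:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.DiscardingOutputFormat;

public class JobEndCallback {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    env.fromElements("a", "b", "c").output(new DiscardingOutputFormat<>());

    // execute() blocks until the job finishes, so the callback below only
    // runs when main() survives submission (i.e., with the CLI client).
    JobExecutionResult result = env.execute("etl-job");
    Map<String, Object> accumulators = result.getAllAccumulatorResults();

    URL url = new URL("http://our-service/jobs/status");  // illustrative endpoint
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(accumulators.toString().getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("Callback HTTP status: " + conn.getResponseCode());
  }
}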
+1
On Wed, Nov 21, 2018 at 12:05 PM Saar Bar wrote:
> 💯 agree
>
> Sent from my iPhone
>
> > On 21 Nov 2018, at 13:03, Maximilian Michels wrote:
> >
> > Hi!
> >
> > Do you think it would make sense to send JIRA notifications to a
> separate mailing list? Some people just want to casually follow
What about also adding Apache Plasma + Arrow as an alternative to Apache
Ignite?
[1] https://arrow.apache.org/blog/2017/08/08/plasma-in-memory-object-store/
On Mon, Nov 26, 2018 at 11:56 AM Fabian Hueske wrote:
> Hi,
>
> Thanks for the proposal!
>
> To summarize, you propose a new method Table.c
Hi to all,
we often need to track the number of rows of a dataset.
In order not to add complexity to the job, we use accumulators to track
this information.
The problem is that we have to extend all the InputFormats we use in
order to properly handle such a row-count accumulator... my question is: wh
ccumulator directly in the input
formats?
On Mon, Feb 4, 2019 at 10:18 AM Flavio Pompermaier
wrote:
> Hi to all,
> we often need to track the number of rows of a dataset.
> In order not to add complexity to the job, we use accumulators to track
> this information.
> The problem is th
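A sketch of the workaround being discussed here: instead of extending every
InputFormat, chain an identity mapper right after each source that counts
records into an accumulator (the counter name is illustrative):

import org.apache.flink.api.common.accumulators.LongCounter;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

/** Identity function that counts the records flowing through it. */
public class CountingIdentity<T> extends RichMapFunction<T, T> {

  private final String counterName;
  private transient LongCounter counter;

  public CountingIdentity(String counterName) {
    this.counterName = counterName;
  }

  @Override
  public void open(Configuration parameters) {
    counter = getRuntimeContext().getLongCounter(counterName);
  }

  @Override
  public T map(T value) {
    counter.add(1L);
    return value;
  }
}

Usage would be dataSet.map(new CountingIdentity<>("row-count")) right after
the source; the total is then read from the JobExecutionResult.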
Hi to all,
I was going to upgrade our Flink cluster to the 1.7.x release but I saw that
releases are no longer tagged and I need to download the sources for a
specific version. Is this a definitive choice? I think that tagging was
really helpful when recompiling the code... why has this policy changed?
Ther
You're right Chesnay, I did a git fetch and I didn't remember that it
doesn't update the tag list...
After a git fetch --tags --all I was able to find the tags.
It would be nice to add this info to the "build from source"
documentation [1]:
the page states "This page covers how to build Flink 1.7.
Hi all,
many times I had the feeling that allowing Row.setField() to return the
modified object instead of void would really make the (Java) code cleaner
in a very unobtrusive way.
For example, I could write something like:
DataSet<Row> columnData = input.map(value -> new Row(1).setField(0, value))
in
this would be convenient but please find a better
> > example; yours can be solved easily using "Row.of(value)".
> >
> > On 22/03/2019 12:26, Flavio Pompermaier wrote:
> >> Hi all,
> >> many times I had the feeling that allowing Row.setField() to
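A tiny sketch of the kind of fluent helper being asked for, written as a
user-side utility rather than a change to Row itself (the class name is
illustrative):

import org.apache.flink.types.Row;

/** User-side helper that gives Row a chainable construction style. */
public final class Rows {

  /** Creates a Row of the given arity and fills it with the given values. */
  public static Row with(int arity, Object... values) {
    Row row = new Row(arity);
    for (int i = 0; i < values.length; i++) {
      row.setField(i, values[i]);
    }
    return row;
  }

  /** Sets a field and returns the row, so calls can be chained inside lambdas. */
  public static Row set(Row row, int pos, Object value) {
    row.setField(pos, value);
    return row;
  }
}

The example above would then read input.map(value -> Rows.with(1, value)).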
Very BIG +1 for the adoption of Apache Arrow. This would greatly simplify
the integration with other tools.
On Thu, Apr 11, 2019 at 2:21 PM Run wrote:
> Hi guys,
>
>
> Apache Arrow provides a cross-language, standardized, columnar, memory
> format for data.
> So it is highly desirable to import Arrow t
Hi to all,
I have read many discussions about Flink ML and none of them takes into
account the ongoing efforts carried out by the Streamline H2020 project
[1] on this topic.
Have you tried to ping them? I think that both projects could benefit from
a joint effort on this side...
[1] https://h2020
Hi everybody,
any news on this? For us it would be VERY helpful to have such a feature
because we need to execute a call to a REST service once a job ends.
Right now we do this after env.execute(), but this works only if the job
is submitted via the CLI client; the REST client doesn't execute anyth
Is there any possibility to have something like Apache Livy [1] also for
Flink in the future?
[1] https://livy.apache.org/
On Tue, Jun 11, 2019 at 5:23 PM Jeff Zhang wrote:
> >>> Any API we expose should not have dependencies on the runtime
> (flink-runtime) package or other implementation det
e main method further.
> >>
> >> I'd like to write down a few of notes on configs/args pass and respect,
> >> as well as decoupling job compilation and submission. Share on this
> >> thread later.
> >>
> >> Best,
> >> tison.
> >>
+1 to remove the old web client in favor of an improved dashboard (not so
urgent however)
On Thu, Nov 5, 2015 at 11:11 AM, Robert Metzger wrote:
> Hi,
> I personally like the idea of allowing users to submit a job from the web
> interface a lot. In particular for users new to the system, its a v
>> On Thu, Nov 5, 2015 at 11:46 AM, Sachin Goel <
>> sachingoel0...@gmail.com
>> >>>
>> >>>> wrote:
>> >>>>
>> >>>>> Okay. I think that does it. I've already designed the basic
>> >> interface,
I hope that it's not too late to suggest adding to the restructuring also
https://issues.apache.org/jira/browse/FLINK-1827 so as to be able to
compile Flink without also compiling tests (-Dmaven.test.skip=true) and
save a lot of time...
It should be fairly easy to fix that.
Best,
Flavio
On Wed, Jan
compiles tests, but
> does not execute them.
>
> Stephan
>
>
> On Mon, Jan 11, 2016 at 11:34 AM, Flavio Pompermaier >
> wrote:
>
> > I hope that it's not too late to suggest adding to the restructuring also
> > https://issues.apache.org/jira/browse/FLINK-1827
+1 as long as there's a well-defined template/pattern for restructuring the
code and class naming
On Fri, Jan 22, 2016 at 9:48 AM, Andrea Sella
wrote:
> +1 for moving to external classes, it is much simpler to analyze/study a
> few little blocks of code than one bigger one, imho.
>
> Andrea
>
> 2016-01-
Hi to all,
we've recently migrated our Sqoop [1] import process to a Flink job, using
an improved version of the Flink JDBC Input Format [2] that is able to
exploit the parallelism of the cluster (the current Flink version
implements NonParallelInput).
We still need to improve the mapping part of sql t
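For context, later Flink versions grew exactly this kind of parallelism in
the stock JDBCInputFormat via parameterized queries; a sketch of that style
of usage, with illustrative connection settings:

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.io.jdbc.split.NumericBetweenParametersProvider;
import org.apache.flink.api.java.typeutils.RowTypeInfo;

public class ParallelJdbcRead {
  public static JDBCInputFormat build() {
    RowTypeInfo rowType = new RowTypeInfo(
        BasicTypeInfo.LONG_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
    return JDBCInputFormat.buildJDBCInputFormat()
        .setDrivername("com.mysql.jdbc.Driver")
        .setDBUrl("jdbc:mysql://host/db")
        .setUsername("user")
        .setPassword("pass")
        // Each "id BETWEEN ? AND ?" range becomes one input split, so
        // splits can be read by parallel source tasks.
        .setQuery("SELECT id, name FROM people WHERE id BETWEEN ? AND ?")
        .setParametersProvider(new NumericBetweenParametersProvider(1000L, 0L, 1_000_000L))
        .setRowTypeInfo(rowType)
        .finish();
  }
}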
Hi guys,
I'm integrating Chesnay's comments into my PR but there are a couple of
things that I'd like to discuss with the core developers.
1. About the JDBC type mapping (the addValue() method at [1]): at the
moment, if I find a null value for a Double, the JDBC getDouble returns
0.0. Is i
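For context on point 1: JDBC itself only signals SQL NULL through
ResultSet.wasNull(), so a null-safe read looks like this sketch:

import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcNullSafe {
  /** Returns the column value, or null if the database value was SQL NULL. */
  static Double getNullableDouble(ResultSet rs, int column) throws SQLException {
    double value = rs.getDouble(column);   // returns 0.0 for SQL NULL...
    return rs.wasNull() ? null : value;    // ...so wasNull() must be checked
  }
}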
to
>> reuse the same connection of an InputFormat across InputSplits, i.e.,
>> calls of the open() method. Wouldn't that be sufficient?
>>
> this is the right approach imo.
>
>> Best, Fabian
>>
>> 2016-04-14 16:59 GMT+02:00 Flavio Pompermai
a pretty low level API though. I am leaning
>>>> towards the user-provided POJO option.
>>>>
>>>> I would also lean towards the POJO option.
>>>
>>> 2) The JDBCInputFormat is located in a dedicated Maven module. I think we
>>>> can add a depen
uce, Join, etc.) and
> DataSinks.
>
> Note, a single task does not fill a slot, but a "slice" of the program (one
> parallel task of each operator) fills a slot.
>
> Cheers, Fabian
>
> 2016-04-14 18:47 GMT+02:00 Flavio Pompermaier :
>
> > ok thanks!just one last
on, then we should also be careful with the close()
> implementation. I did not see changes for this method in the PR.
>
> saluti,
> Stefano
>
> 2016-04-15 11:01 GMT+02:00 Flavio Pompermaier :
>
> > Following your suggestions I've fixed the connection reuse in my P
> > > As for the open() and close() issue, I agree with Flavio that we'd
> need a
> > > better management of the inputformat lifecycle. Perhaps a new interface
> > > extending it: RichInputFormat?
> > >
> > > my2c.
> > >
> > > Stefa
s across input splits.
> On the other hand, input splits should not be too fine-grained as well,
> because input split assignment has some overhead as well.
>
> Best, Fabian
>
> 2016-04-18 9:49 GMT+02:00 Flavio Pompermaier :
>
> > Yes, I forgot to mention that I could instanti
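For the record, RichInputFormat eventually gained per-task lifecycle hooks
for exactly this purpose (openInputFormat()/closeInputFormat(), called once
per parallel task rather than once per split); a sketch of connection reuse
built on them, with illustrative driver and URL:

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import org.apache.flink.api.common.io.RichInputFormat;
import org.apache.flink.core.io.InputSplit;

// Only the hooks relevant to connection reuse are shown; a concrete
// subclass must still implement the split and record methods.
public abstract class ConnectionReusingFormat<T> extends RichInputFormat<T, InputSplit> {

  protected transient Connection connection;

  @Override
  public void openInputFormat() throws IOException {
    // Called once per parallel task, before any split is opened.
    try {
      connection = DriverManager.getConnection("jdbc:mysql://host/db", "user", "pass");
    } catch (SQLException e) {
      throw new IOException("Could not open connection", e);
    }
  }

  @Override
  public void closeInputFormat() throws IOException {
    // Called once per parallel task, after the last split is closed.
    try {
      if (connection != null) {
        connection.close();
      }
    } catch (SQLException e) {
      throw new IOException("Could not close connection", e);
    }
  }
}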
We just issued a PR about FLINK-1827 (
https://github.com/apache/flink/pull/1915) that improves test stability
except for the ml library, which still has some problems to solve...
On 21 Apr 2016 23:59, "Fabian Hueske" wrote:
> Hi Sourigna,
>
> thanks for contributing!
>
> Unrelated test failures are
We just issued a PR about this (FLINK-1827 -
https://github.com/apache/flink/pull/1915) that improves test stability
(and allows skipping test compilation entirely when it's not required),
except for the ml library, which still has one error to solve (in the
hadoop-1 build and in the ml-library
Hi flinkers,
I was converting my parameter validation code to the new (and very useful)
RequiredParameters APIs. However I faced some issues with their semantics
and I wanted to report my doubts to the community and see whether those
APIs could be improved or not.
I started using RequiredParameters t
oscha Krettek" wrote:
I think RequiredParameters is meant to be used with ParameterTool. For
example, check out RequiredParametersTest.
On Wed, 11 May 2016 at 11:05 Flavio Pompermaier
wrote:
> Hi flinkers,
> I was converting my parameter validation code to the new
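A minimal sketch of the ParameterTool + RequiredParameters combination
Aljoscha points at; the option names are illustrative:

import org.apache.flink.api.java.utils.Option;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.api.java.utils.RequiredParameters;

public class ParamsExample {
  public static void main(String[] args) throws Exception {
    ParameterTool params = ParameterTool.fromArgs(args);

    RequiredParameters required = new RequiredParameters();
    required.add(new Option("input").help("Path to the input data"));
    required.add(new Option("parallelism").defaultValue("1"));

    // Validates the options and applies defaults; throws if a required
    // option is missing.
    params = required.applyTo(params);
    System.out.println("input = " + params.get("input"));
  }
}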
That would be definitely awesome (and useful also for us)! +1
On Thu, May 12, 2016 at 7:38 AM, Aljoscha Krettek
wrote:
> I favor the one-cluster-per job approach. If this becomes the dominant
> approach to doing things we could also think about introducing a separate
> component that would allo
Since FLINK-1827 was merged you could also skip test compilation with
-Dmaven.test.skip=true if you don't want to waste time and resources :)
On 12 May 2016 10:06, "Jark" wrote:
> Sorry for mistyped the command. You can enter into
> flink/flink-streaming-java and run `mvn clean package install
>
If you're interested, I created an Eclipse version that should follow the
Flink coding rules... should I create a new JIRA for it?
On Thu, May 5, 2016 at 6:02 PM, Dawid Wysakowicz wrote:
> I opened JIRA: https://issues.apache.org/jira/browse/FLINK-3870. and
> created PR both to flink and flink-web.
Do I also need to open a Jira, or just the PR?
On Thu, May 12, 2016 at 12:03 PM, Stephan Ewen wrote:
> Yes, please open a pull request for that.
>
> On Thu, May 12, 2016 at 11:40 AM, Flavio Pompermaier >
> wrote:
>
> > If you're interested, I created an Eclips
Hi to all,
for debugging Flink from Eclipse this is what you have to do:
1. go to 'Run' -> 'Debug configurations...'
2. Create a new 'Remote Java Application'
3. In the 'Connect' tab choose:
1. the project to debug
2. Connection type 'Standard (Socket Attach)'
3. Connec
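The counterpart on the Flink side (an assumption about the usual setup; the
port is arbitrary) is starting the JVM with the JDWP agent, e.g. in
flink-conf.yaml:

env.java.opts: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005

The port chosen in the Eclipse 'Connect' tab must then match that address.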
No I can't edit that page :(
On Tue, May 17, 2016 at 3:46 PM, Stefano Baghino <
stefano.bagh...@radicalbit.io> wrote:
> Thanks Flavio,
>
> perhaps it would be a nice addition to the Wiki page, would you care to
> contribute your suggestion? :)
>
> On Tue, Ma
I've just signed up as f.pompermaier
Thanks!
On Tue, May 17, 2016 at 5:04 PM, Robert Metzger wrote:
> Can you give me your wiki user id, then I can give you permissions.
>
> On Tue, May 17, 2016 at 3:56 PM, Flavio Pompermaier
> wrote:
>
> > No I can't edit that
Done ;)
On Tue, May 17, 2016 at 5:37 PM, Robert Metzger wrote:
> Okay, I gave you permissions.
>
> On Tue, May 17, 2016 at 5:22 PM, Flavio Pompermaier
> wrote:
>
> > I've just signed up as f.pompermaier
> >
> > Thanks!
> >
> > On Tue,
You're welcome ;)
On Thu, May 19, 2016 at 11:54 AM, Till Rohrmann
wrote:
> Thanks Flavio for adding the Eclipse section for remote debugging :-)
>
> On Tue, May 17, 2016 at 5:55 PM, Flavio Pompermaier
> wrote:
>
> > Done ;)
> >
> > On Tue, May 17, 2016
If it's OK for you, I'd also need to merge FLINK-3901 [1] and FLINK-3908 [2]
[1] https://github.com/apache/flink/pull/1989
[2] https://github.com/apache/flink/pull/2007
Best,
Flavio
On Wed, May 25, 2016 at 5:04 PM, Fabian Hueske wrote:
> Hi everybody,
>
> thanks for the feedback so far.
>
> I jus
Awesome work guys!
And even more thanks for the detailed report... this troubleshooting summary
will undoubtedly be useful for all our Maven projects!
Best,
Flavio
On 30 May 2016 23:47, "Ufuk Celebi" wrote:
> Thanks for the effort, Max and Stephan! Happy to see the green light again.
>
> On Mon,
Hi to all,
if Flink 1.1 will officially introduce the Table API, do you think someone
could take care of rewriting in Scala the necessary Java code of my PR
about reading CSV as Rows instead of tuples [1]?
For our use cases, and IMHO for many new users approaching Flink, that will
definitely be usef
in there so we shouldn't force people to write Scala
> code if they make a valuable contribution in Java.
>
> On Tue, 5 Jul 2016 at 17:33 Flavio Pompermaier
> wrote:
>
> > Hi to all,
> > if Flink 1.1 will officially introduce the Table API, do you think
> someone
&g
Maybe someone could complete FLINK-3901 - Added CsvRowInputFormat?
This would be very useful to us...
Best,
Flavio
On Thu, Jul 21, 2016 at 3:31 PM, Aljoscha Krettek
wrote:
> Sounds good!
>
> This one I just merged: https://github.com/apache/flink/pull/2273 (Only
> allow/require query
> for Tupl
I have a nice case of RDF manipulation :)
Let's say I have the following RDF triples (Tuple3) in two files or tables:
TABLE A:
http://test/John, type, Person
http://test/John, name, John
http://test/John, knows, http://test/Mary
http://test/John, knows, http://test/Jerry
http://test/Jerry, type, P
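A sketch of turning such Tuple3 triples into a Gelly graph (subject and
object become vertices, the predicate becomes the edge value), assuming the
DataSet and Gelly APIs of that era:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.graph.Edge;
import org.apache.flink.graph.Graph;
import org.apache.flink.types.NullValue;

public class RdfToGelly {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    DataSet<Tuple3<String, String, String>> triples = env.fromElements(
        Tuple3.of("http://test/John", "type", "Person"),
        Tuple3.of("http://test/John", "knows", "http://test/Mary"));

    // subject -> object edges, labelled with the predicate
    DataSet<Edge<String, String>> edges = triples.map(
        new MapFunction<Tuple3<String, String, String>, Edge<String, String>>() {
          @Override
          public Edge<String, String> map(Tuple3<String, String, String> t) {
            return new Edge<>(t.f0, t.f2, t.f1);
          }
        });

    Graph<String, NullValue, String> graph = Graph.fromDataSet(edges, env);
    System.out.println("vertices: " + graph.numberOfVertices());
  }
}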
y operation in terms of memory and
> communication for large graphs.
>
> Let me know if you have any questions!
>
> Cheers,
> V.
>
> On 3 March 2015 at 09:13, Flavio Pompermaier wrote:
>
> > I have a nice case of RDF manipulation :)
> > Let's say I have the fo
r as a vertex-centric
> graph computation. For that, you can use both "Gelly" (the graph library)
> or the standalone Spargel operator (Giraph-like).
>
> Does that help with your questions?
>
> Greetings,
> Stephan
>
>
> On Thu, Mar 19, 2015 at 2:57 PM, Flav
Is there any example of RDF graph generation based on a skeleton
structure?
On Mar 22, 2015 12:28 PM, "Fabian Hueske" wrote:
> Hi Flavio,
>
> also, Gelly is a superset of Spargel. It provides the same features and
> much more.
>
> Since RDF is graph-structured, Gelly might be a good fit for yo
> }
>
> After you have this, in your main method, you just write:
> Graph rdfGraph = Graph.fromDataSet(edges, env);
>
> I picked up the conversation later on, not sure if that's what you meant by
> "graph generation"...
>
> Cheers,
> Andra
>
>
Thanks Vasiliki,
when I find the time, I'll try to make a quick prototype using the
pointers you suggested!
Thanks for the support,
Flavio
On Mon, Mar 23, 2015 at 10:31 AM, Vasiliki Kalavri <
vasilikikala...@gmail.com> wrote:
> Hi Flavio,
>
> I'm not familiar with JSON-LD, but as far as I unde
Hi Santosh, which version of Flink are you using? And which version of
HBase?
On Mar 31, 2015 12:50 PM, "santosh_rajaguru" wrote:
> Hi guys,
>
> I am facing a problem while connecting to remote HBase from Apache Flink.
> I am able to connect successfully through a simple HBase Java program.
> However, w
Strange, I'm using Flink 0.8.1 and HBase 0.98.6 and everything works fine
(at least during reading).
Remember to put the correct hbase-site.xml in the classpath!
For outputting data I'm still trying to find the best way to achieve it...
It turned out that the Hadoop compatibility layer of Flink probably doesn't
Hi Flink devs,
this is my final report about the HBaseOutputFormat problem (with Flink
0.8.1) and I hope you can suggest the best way to make a PR:
1) The following code produces the error reported below (this should be
fixed in 0.9, right?)
Job job = Job.getInstance();
myDataset.output
Any feedback about this?
On Tue, Mar 31, 2015 at 7:07 PM, Flavio Pompermaier
wrote:
> Hi Flink devs,
> this is my final report about the HBaseOutputFormat problem (with Flink
> 0.8.1) and I hope you could suggest me the best way to make a PR:
>
> 1) The following code produces the
modified branch.
>
> Let's discuss your changes on GitHub.
>
> Best,
> Max
>
> On Wed, Apr 1, 2015 at 1:44 PM, Flavio Pompermaier
> wrote:
>
> > Any feedback about this?
> >
> > On Tue, Mar 31, 2015 at 7:07 PM, Flavio Pompermaier <
> pomperm
Hi to all,
I was trying to compile Flink 0.9 skipping test compilation
(-Dmaven.test.skip=true) but this is not possible because there are
projects like flink-test-utils (for example) that require test classes at
compile scope... wouldn't it be better to keep the test source files in the
test folder
reuse.f1 = put;
return reuse;
}
}).output(new HadoopOutputFormat(new
TableOutputFormat(), job));
Do I have to register how to serialize Put somewhere?
On Wed, Apr 1, 2015 at 2:32 PM, Fabian Hueske wrote:
> Whatever works best for you.
> We can easily backport or forwardport the patch.
>
> 2
Which field? The Tuple2? I use it with Flink 0.8.1 without errors.
On Apr 3, 2015 2:27 AM, wrote:
> If Put is not Serializable it cannot be serialized and shipped.
>
> Is it possible to make that field transient and initialize Put in
> configure()?
>
>
>
>
>
>
>
Don't you agree?
On Fri, Apr 3, 2015 at 1:42 PM, Márton Balassi
wrote:
> Dear Flavio,
>
> 'mvn clean install -DskipTests' should do the trick.
>
> On Fri, Apr 3, 2015 at 12:11 AM, Flavio Pompermaier
> wrote:
>
> > Hi to all,
> >
>
Any fix for this?
On Apr 3, 2015 7:43 AM, "Flavio Pompermaier" wrote:
starting.
>
> Can you try making the following changes to your code?
> https://gist.github.com/rmetzger/a218beca4b0442f3c1f3
> This is basically making the field that contains the non-serializable "Put"
> element transient.
>
>
>
> On Sat, Apr 4, 2015 at 8:40 AM,
a non-serializable member
> variable, you need to declare it as transient and initialize it before it
> is executed, e.g., via open() or the first invocation of the function's
> processing method, such as map().
>
> 2015-04-04 10:59 GMT+02:00 Flavio Pompermaier :
>
> > There
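Summing up the fix discussed in this thread as a sketch (the HBase column
names are illustrative): the non-serializable member is declared transient
and created in open(), so only serializable state ships with the function:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;

public class ToHBasePut extends RichMapFunction<String, Tuple2<Text, Put>> {

  // Transient: not serialized with the function, rebuilt on the worker.
  private transient Tuple2<Text, Put> reuse;

  @Override
  public void open(Configuration parameters) {
    reuse = new Tuple2<>();
  }

  @Override
  public Tuple2<Text, Put> map(String value) {
    Put put = new Put(Bytes.toBytes(value));
    // add() in HBase 0.98; addColumn() in newer HBase versions.
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(value));
    reuse.f0 = new Text(value);
    reuse.f1 = put;
    return reuse;
  }
}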
I opened a JIRA for this problem:
https://issues.apache.org/jira/browse/FLINK-1827.
Obviously it's an improvement with minor priority, but I think this will be
a nice fix for users that want to compile the Java sources quickly.
On Fri, Apr 3, 2015 at 2:38 PM, Flavio Pompermaier
wrote:
> Th
utput
> object in open().
>
>
>
> Switching function serialization to Kryo is on our TODO list (FLINK-1256).
> Would be good to fix that soon, IMO.
>
>
> Cheers, Fabian
>
>
> From: Flavio Pompermaier
> Sent: Saturday, 4. April, 2015 11:23
> To: dev@flink.
I just wrote an example on my branch! You can find it at
https://github.com/fpompermaier/flink
On Apr 8, 2015 10:00 AM, "santosh_rajaguru" wrote:
> is there any link or code samples or any sort of pointers for
> TableOutputFormat for HBase, like HBaseReadExample?
>
>
>
>
> --
> View this message i
Do you think it could be possible to include the Hadoop OutputFormat fix
(FLINK-1828)?
On Thu, Apr 9, 2015 at 9:42 AM, Fabian Hueske wrote:
> +1
>
> I ran the following tests.
>
> 1. Cygwin/Windows:
> - start/stop local
> - run all examples with build-in data from ./bin/flink
> - run wordc
nice example of how Gelly could help in
handling RDF graphs :)
Best,
Flavio
On Mon, Mar 23, 2015 at 10:41 AM, Flavio Pompermaier
wrote:
> Thanks Vasiliki,
> when I'll find the time I'll try to make a quick prototype using the
> pointers you suggested!
>
> Thanks for the suppor
do you mean all the vertices that can be
> reached starting from this node and following the graph edges?
>
> -Vasia.
>
> On 14 April 2015 at 10:37, Flavio Pompermaier
> wrote:
>
> > Hi to all,
> > I made a simple RDF Gelly test and I shared it on my github repo at
>
I was looking at this great example and I'd like to ask you which
serialization framework is best if I have to serialize
Tuple3 with Parquet.
The syntax I like the most is the Thrift one, but I can't see all the pros
and cons of using it and I'd like to hear your opinions here.
Thanks in advance
3: optional list f3;
> }
>
> See: http://diwakergupta.github.io/thrift-missing-guide/
>
> I like Thrift the most, because the API for Thrift in Parquet is the
> easiest.
>
> Have fun with Parquet :)
>
> Best regards,
>
> Felix
>
> 2015-04-24 12:28 GMT+02:00 Flavi
There was an attempt to build such a queue during the Dopa project, when
Flink was still Stratosphere.
It could probably be a good idea to collect the good and bad lessons
learned from it when starting to design the new scheduler :)
On Thu, Apr 30, 2015 at 10:08 AM, Stephan Ewen wrote:
> Most component