/confluence/display/FLINK/Remote+Debugging+of+Flink+Clusters
> > (it is linked from the front page
> > https://cwiki.apache.org/confluence/display/FLINK/Apache+Flink+Home )
> >
> > Greetings,
> > Stephan
> >
> >
> > On Tue, May 17, 2016 at 4:26 AM, Vijay Srinivasaraghavan <
> > vijikar...@yahoo.com.invalid> wrote:
It's there. I've left the Eclipse paragraph empty; unfortunately I have no
experience with remote debugging using it.
On Tue, May 17, 2016 at 12:29 PM, Stephan Ewen wrote:
> Super, thanks!
>
> On Tue, May 17, 2016 at 11:32 AM, Stefano Baghino <
> stefano.bagh...@radicalbit.io> wrote:
>
> Greetings,
> Stephan
>
>
> On Tue, May 17, 2016 at 4:26 AM, Vijay Srinivasaraghavan <
> vijikar...@yahoo.com.invalid> wrote:
>
> > Awesome, thanks Stefano!!
> >
> > On Monday, May 16, 2016 9:57 AM, Stefano Baghino <
> > stefano.bagh...@radicalbit.io> wrote:
Vijay Srinivasaraghavan <
vijikar...@yahoo.com.invalid> wrote:
> How do I attach remote debugger to running Flink cluster from IntelliJ?
> I would appreciate it if anyone could share the steps.
> Regards,
> Vijay
--
BR,
Stefano Baghino
Software Engineer @ Radicalbit
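For reference, attaching a debugger as asked in the thread typically means passing JDWP flags to the Flink JVMs. A minimal sketch, assuming Flink's `env.java.opts` configuration key and an arbitrary debug port (5005); this is illustrative, not the steps from the wiki page linked above:

```yaml
# flink-conf.yaml — sketch, not authoritative: opens a JDWP debug port
# on every Flink JVM started with these options. Port 5005 is arbitrary;
# suspend=n lets the cluster start without waiting for a debugger.
env.java.opts: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
```

In IntelliJ, a Run/Debug configuration of type "Remote" pointing at the host and port above can then attach to the running JobManager or TaskManager.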
access to a secured HDFS?
>
> Your observation is right. We do not check whether a job submitted by any
> user is running in the same security context as the Flink cluster.
>
>
> On Thu, May 5, 2016 at 11:57 AM, Stefano Baghino <
> stefano.bagh...@radicalbit.io> wrote:
>
>
the secure cluster without
authentication (I tried it with the WordCount example).
I'd say this is a bug; is there a reason for this behavior? If you share my
feeling on this: I've pinpointed the code responsible and the fix seems
trivial, so I can open an issue and a PR today. Thanks!
--
oString().length()-2))) }}}
>
> val k=j.groupBy(_.get("ga_date"))
>
> But when I execute this, it throws an exception saying:
>
> org.apache.flink.api.common.InvalidProgramException: Return type
> Option[String] of KeySelector class
> org.apache.flink.api.scala.Da
it gives me a warning saying "Type Any has no fields that are visible
> > from Scala Type analysis. Falling back to Java Type Analysis
> > (TypeExtractor)." in Eclipse, and when I run it, the code just hangs and
> > does not print a thing.
> >
> > On Thu
om>
>>>>>> > > wrote:
>>>>>> > > >
>>>>>> > > > > Hi Punit,
>>>>>> > > > >
>>>>>> > > > > JSON can be hard to parse in parallel due to its nested
>>>>>> structure. It
>>>>>> > > > > depends on the schema and (textual) representation of the JSON
>>>>>> > whether
>>>>>> > > > and
>>>>>> > > > > how it can be done. The problem is that a parallel input
>>>>>> format needs
>>>>>> > > to
>>>>>> > > > be
>>>>>> > > > > able to identify record boundaries without context
>>>>>> information. This
>>>>>> > > can
>>>>>> > > > be
>>>>>> > > > > very easy, if your JSON data is a list of JSON objects which
>>>>>> are
>>>>>> > > > separated
>>>>>> > > > > by a new line character. However, this is hard to generalize.
>>>>>> That's
>>>>>> > > why
>>>>>> > > > > Flink does not offer tooling for it (yet).
>>>>>> > > > >
>>>>>> > > > > If your JSON objects are separated by new line characters, the
>>>>>> > easiest
>>>>>> > > > way
>>>>>> > > > > is to read it as text file, where each line results in a
>>>>>> String and
>>>>>> > > parse
>>>>>> > > > > each object using a standard JSON parser. This would look
>>>>>> like:
>>>>>> > > > >
>>>>>> > > > > ExecutionEnvironment env =
>>>>>> > > > ExecutionEnvironment.getExecutionEnvironment();
>>>>>> > > > >
>>>>>> > > > > DataSet text = env.readTextFile("/path/to/jsonfile");
>>>>>> > > > > DataSet json = text.map(new
>>>>>> > > > YourMapFunctionWhichParsesJSON());
>>>>>> > > > >
>>>>>> > > > > Best, Fabian
>>>>>> > > > >
>>>>>> > > > > 2016-04-26 8:06 GMT+02:00 Punit Naik:
>>>>>> > > > >
>>>>>> > > > > > Hi
>>>>>> > > > > >
>>>>>> > > > > > I am new to Flink. I was experimenting with the Dataset API
>>>>>> and
>>>>>> > found
>>>>>> > > > out
>>>>>> > > > > > that there is no explicit method for loading a JSON file as
>>>>>> input.
>>>>>> > > Can
>>>>>> > > > > > anyone please suggest me a workaround?
>>>>>> > > > > >
>>>>>> > > > > > --
>>>>>> > > > > > Thank You
>>>>>> > > > > >
>>>>>> > > > > > Regards
>>>>>> > > > > >
>>>>>> > > > > > Punit Naik
>>>>>> > > > > >
--
BR,
Stefano Baghino
Software Engineer @ Radicalbit
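Fabian's point in the thread above — that newline-delimited JSON splits cleanly because every line is a self-contained record, so a parallel reader needs no context from neighbouring records — can be illustrated outside Flink. A minimal Python sketch; the field names and data are made up:

```python
import json

# Newline-delimited JSON: each line is one complete record, so the input
# can be split at any line boundary and each piece parsed independently.
raw = '{"name": "a", "count": 1}\n{"name": "b", "count": 2}\n'

# "Read as text, parse each line" — the same shape as readTextFile + map.
records = [json.loads(line) for line in raw.splitlines() if line.strip()]
total = sum(r["count"] for r in records)
print(total)  # -> 3
```

A nested, pretty-printed JSON document, by contrast, has record boundaries that depend on context (brace depth), which is exactly why a generic parallel input format is hard.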
> > > val data=env.readTextFile("file:///home/punit/vik-in")
> > > val j=data.flatMap { x=>x }
> > > j.first(1).print()
> > >
> > > This prints "{"
> > >
> > > --
> > > Thank You
> > >
> > > Regards
> > >
> > > Punit Naik
> > >
> >
>
>
>
> --
> Thank You
>
> Regards
>
> Punit Naik
>
--
BR,
Stefano Baghino
Software Engineer @ Radicalbit
On Fri, Apr 8, 2016 at 12:30 PM, Dawid Wysakowicz <
>> > > > wysakowicz.da...@gmail.com> wrote:
>> > > >
>> > > > > Hi all,
>> > > > >
>> > > > > I am currently working on some issues and was wondering if you
>> have
>> > > > > IntelliJ code style settings available that follow your coding
>> > > guidelines
>> > > > > (I tried to look on the wikis but could not find any). If not,
>> > > could
>> > > > > someone share their own? I would be grateful.
>> > > > >
>> > > > > Regards
>> > > > > Dawid Wysakowicz
>> > > > >
>> > > >
>> > >
>> >
>>
>
>
--
BR,
Stefano Baghino
Software Engineer @ Radicalbit
include them in the next major release. Minor releases typically don't
> include new features. Now it's questionable whether this is a fix or a
> new feature :)
>
> Cheers,
> Max
>
> On Wed, Apr 27, 2016 at 3:18 PM, Stefano Baghino
> wrote:
> > Ok, thanks for the feedback.
Ok, thanks for the feedback.
On Wed, Apr 27, 2016 at 3:16 PM, Till Rohrmann wrote:
> Hi Stefano,
>
> in this case I think it's best if you opened a PR against the release
> branch so that a committer can pull it in.
>
> Cheers,
> Till
>
> On Wed, Apr 27,
such that the committer who merges it can also put it into the older
> release branch.
>
> Cheers,
> Aljoscha
>
> On Wed, 27 Apr 2016 at 13:38 Stefano Baghino <
> stefano.bagh...@radicalbit.io>
> wrote:
>
> > I'm currently working on FLINK-3239
PR directly onto the
1.0.x branch? I would assume the latter is unadvisable, just asking if
that's the case.
--
BR,
Stefano Baghino
Software Engineer @ Radicalbit
Stefano Baghino created FLINK-3699:
--
Summary: Allow per-job Kerberos authentication
Key: FLINK-3699
URL: https://issues.apache.org/jira/browse/FLINK-3699
Project: Flink
Issue Type
Stefano Baghino created FLINK-3678:
--
Summary: Make Flink logs directory configurable
Key: FLINK-3678
URL: https://issues.apache.org/jira/browse/FLINK-3678
Project: Flink
Issue Type
Stefano Baghino created FLINK-3675:
--
Summary: YARN ship folder inconsistent behavior
Key: FLINK-3675
URL: https://issues.apache.org/jira/browse/FLINK-3675
Project: Flink
Issue Type: Bug
ed to a
> separate class (YarnConfigKeys).
>
> Cheers,
> Max
>
>
>
> On Wed, Mar 23, 2016 at 10:06 AM, Stefano Baghino <
> stefano.bagh...@radicalbit.io> wrote:
>
> > Thanks for pointing out Max's work (awesome PR, btw). It actually seems to
> > have introduced an environment variable regarding ship directories.
> > contained in separate classes which can be integrated even when code
> > changes. Let's just stay in sync.
> >
> > If you like, you could start off by opening an issue and submitting a
> > short design document.
> >
> > Cheers,
> > Max
> >
>
ity on this, thank you in
advance to anyone who’s willing to share their insight and opinion with us.
--
BR,
Stefano Baghino
Software Engineer @ Radicalbit
Thanks for pointing out Max's work (awesome PR, btw). It actually seems to
have introduced an environment variable regarding ship directories; it
would be good to have his feedback on this.
On Tue, Mar 22, 2016 at 10:24 PM, Ufuk Celebi wrote:
> On Tue, Mar 22, 2016 at 8:42 PM, Stefano
f the way the
job is run.
Let me know what you think, thank you for your attention.
--
BR,
Stefano Baghino
Software Engineer @ Radicalbit
Stefano Baghino created FLINK-3653:
--
Summary: recovery.zookeeper.storageDir is not documented on the
configuration page
Key: FLINK-3653
URL: https://issues.apache.org/jira/browse/FLINK-3653
Project
merge the change because it's API-breaking.
> > >>>>>> One of the promises of the 1.0 release is that we are not breaking
> > >> any
> > >>>>> APIs
> > >>>>>> in the 1.x.y series of Flink. We can fix those issues with a 2.x
> > >>> release.
> > >>>>>>
> > >>>>>> On Sun, Mar 13, 2016 at 5:27 AM, Márton Balassi <
> > >>>>> balassi.mar...@gmail.com>
> > >>>>>> wrote:
> > >>>>>>
> > >>>>>>> The JIRA issue is FLINK-3610.
> > >>>>>>>
> > >>>>>>> On Sat, Mar 12, 2016 at 8:39 PM, Márton Balassi <
> > >>>>>> balassi.mar...@gmail.com>
> > >>>>>>> wrote:
> > >>>>>>>
> > >>>>>>>> I have just come across a shortcoming of the streaming Scala
> API:
> > >> it
> > >>>>>>>> completely lacks the Scala implementation of the DataStreamSink
> > and
> > >>>>>>>> instead the Java version is used. [1]
> > >>>>>>>>
> > >>>>>>>> I would regard this as a bug that needs a fix for 1.0.1.
> > >>>>> Unfortunately
> > >>>>>>>> this is also api-breaking.
> > >>>>>>>>
> > >>>>>>>> Will post it to JIRA shortly - but issues.apache.org is
> > >> unresponsive
> > >>>>>> for
> > >>>>>>>> me currently. Wanted to raise the issue here as it might affect
> > the
> > >>>>>> API.
> > >>>>>>>> [1]
> > >>>>>>>> https://github.com/apache/flink/blob/master/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/DataStream.scala#L928-L929
> > >>>>>>>>
> > >>>
> > >>>
> > >>
> >
> >
>
--
BR,
Stefano Baghino
Software Engineer @ Radicalbit
neither the flag was specified nor
> > the script run.
> >
> > On Fri, Mar 4, 2016 at 11:43 AM, Ufuk Celebi wrote:
> >
> >> @Stefano: Yes, would be great to have a fix in the docs and pointers
> >> on how to improve the docs for this.
> >>
> >
> > @Stefano: Yes, would be great to have a fix in the docs and pointers
> > on how to improve the docs for this.
> >
> > On Fri, Mar 4, 2016 at 11:41 AM, Stefano Baghino
> > wrote:
> > > Build successful, thank you.
> > >
> > > On Fri, Mar 4, 2016 at 11:
Build successful, thank you.
On Fri, Mar 4, 2016 at 11:24 AM, Stefano Baghino <
stefano.bagh...@radicalbit.io> wrote:
> I'll try it immediately, thanks for the quick feedback and sorry for the
> intrusion. Should I add this to the docs? The flag seems to be
> -Dscala.ve
profiles will not get properly
> activated.
> >
> > Can you try that again?
> >
> > Thanks,
> > Stephan
> >
> >
> > On Fri, Mar 4, 2016 at 11:17 AM, Stefano Baghino <
> > stefano.bagh...@radicalbit.io> wrote:
> >
> >> I
>>
> > > >> commit a79521fba60407ff5a800ec78fcfeee750d826d6
> > > >> Author: Robert Metzger
> > > >> Date: Thu Mar 3 09:32:40 2016 +0100
> > > >>
> > > >>[hotfix] Make 'force-shading' deployable
> > > >>
> > > >> commit 3adc51487aaae97469fc05e511be85d0a75a21d3
> > > >> Author: Maximilian Michels
> > > >> Date: Wed Mar 2 17:52:05 2016 +0100
> > > >>
> > > >>[maven] add module to force execution of Shade plugin
> > > >>
> > > >>This ensures that all properties of the root pom are properly
> > > >>resolved by running the Shade plugin. Thus, our root pom does not
> > > have
> > > >>to depend on a Scala version just because it holds the Scala
> > version
> > > >>properties.
> > > >>
> > > >> commit b862fd0b3657d8b9026a54782bad5a1fb71c19f4
> > > >> Author: Márton Balassi
> > > >> Date: Sun Feb 21 23:01:00 2016 +0100
> > > >>
> > > >>[FLINK-3422][streaming] Update tests reliant on hashing
> > > >>
> > > >> commit a049d80e8aef7f0d23fbc06d263fb3e7a0f2f05f
> > > >> Author: Gabor Horvath
> > > >> Date: Sun Feb 21 14:54:44 2016 +0100
> > > >>
> > > >>[FLINK-3422][streaming][api-breaking] Scramble HashPartitioner
> > > hashes.
> > > >>
> > > >>
> > > >>
> > >
> > >
> >
>
--
BR,
Stefano Baghino
Software Engineer @ Radicalbit
Stefano Baghino created FLINK-3518:
--
Summary: Stale docs for quickstart setup
Key: FLINK-3518
URL: https://issues.apache.org/jira/browse/FLINK-3518
Project: Flink
Issue Type: Bug
Stefano Baghino created FLINK-3438:
--
Summary: ExternalProcessRunner fails to detect ClassNotFound
exception because of locale settings
Key: FLINK-3438
URL: https://issues.apache.org/jira/browse/FLINK-3438
entails rewriting
> the code if you change the API in a breaking manner. This can be really
> annoying for users.
>
> Cheers,
> Till
>
>
> On Tue, Feb 9, 2016 at 2:51 PM, Stefano Baghino <
> stefano.bagh...@radicalbit.io> wrote:
>
> > I agree with you, but I a
implicits as well. IMHO
> though this should be the default
> behavior, without the need to add another import.
>
> On Tue, Feb 9, 2016 at 1:29 PM, Stefano Baghino <
> stefano.bagh...@radicalbit.io> wrote:
>
> > I see, thanks for the tip! I'll work on it;
> Not the DataSet but the JoinDataSet and the CoGroupDataSet do in the form
> of an apply function.
>
>
> On Tue, Feb 9, 2016 at 11:09 AM, Stefano Baghino <
> stefano.bagh...@radicalbit.io> wrote:
>
> > Sure, it was just a draft. I agree that filter and mapParti
example, the filter, mapPartition, coGroup and join functions
> are missing.
>
> Cheers,
> Till
>
>
> On Tue, Feb 9, 2016 at 1:18 AM, Stefano Baghino <
> stefano.bagh...@radicalbit.io> wrote:
>
> > What do you think of something like this?
> &
> of the first proposal where we add a new method xxxWith via implicit
> conversions.
>
> Cheers,
> Till
>
>
> On Sun, Feb 7, 2016 at 12:44 PM, Stefano Baghino <
> stefano.bagh...@radicalbit.io> wrote:
>
> > It took me a little time but I was able to pu
flink-runtime directory and run "mvn
> verify" from there.
>
> This assumes Flink has been built and installed locally before. If
> not, Maven will try to download the artifacts.
>
> Cheers,
> Max
>
> On Mon, Feb 8, 2016 at 9:55 AM, Stefano Baghino
> > wrote:
round.
> It takes longer to build than on a fast machine, though.
>
> Cheers,
> Max
>
> On Sun, Feb 7, 2016 at 7:12 PM, Stephan Ewen wrote:
> > Hi!
> >
> > I basically test with "mvn clean install" as well.
> >
> > Greetings,
> > Stephan
implicit conversion from DataSet to
>> DataSetExtended (which implements the mapWith, reduceWith, ...) methods
>> could help there...
>>
>> What do you think?
>>
>> Greetings,
>> Stephan
>>
>>
>> On Thu, Jan 28, 2016 at 2:05 PM, Stefano B
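The design being discussed — adding `mapWith`-style methods through a conversion to an extended wrapper rather than on `DataSet` itself — can be sketched in Python for illustration. Python has no implicit conversions, so the conversion is an explicit call here; all names (`DataSetExtended`, `map_with`, `extend`) are illustrative, not Flink API:

```python
class DataSet:
    """Stand-in for the core type; deliberately minimal and illustrative."""
    def __init__(self, data):
        self._data = list(data)

    def map(self, f):
        return DataSet(f(x) for x in self._data)

    def collect(self):
        return list(self._data)

class DataSetExtended:
    """Wrapper adding the extra 'xxxWith' helpers without touching DataSet,
    mirroring the proposed implicit-conversion approach."""
    def __init__(self, ds):
        self._ds = ds

    def map_with(self, f):
        # Forwards to the wrapped DataSet; in Scala this is where the
        # nicer function-argument syntax would live.
        return self._ds.map(f)

def extend(ds):
    # The 'implicit conversion', made explicit: Scala would apply this
    # automatically whenever map_with is called on a plain DataSet.
    return DataSetExtended(ds)

print(extend(DataSet([1, 2, 3])).map_with(lambda x: x + 1).collect())  # -> [2, 3, 4]
```

The appeal of the pattern is that the core type stays untouched and the extensions live behind an opt-in import.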
Stefano Baghino created FLINK-3353:
--
Summary: CSV-related tests may fail depending on locale
Key: FLINK-3353
URL: https://issues.apache.org/jira/browse/FLINK-3353
Project: Flink
Issue Type
I'm basically
running `mvn [clean] install [-rf :]` each time to make
sure I have a fresh build to test. Would this be the right path or is there
a quicker way to have a fresh build and running the tests on them? Feel
free to point me to any relevant documentation, if you wish.
Thank you in advance for a
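The replies above boil down to a few Maven invocations; a sketch of the options (the module name is the one mentioned in the thread, the rest is standard Maven usage):

```shell
# Full build and local install (slow, but guarantees fresh artifacts):
mvn clean install

# Faster iteration on one module, assuming a prior full install:
# run only flink-runtime's tests against locally installed artifacts.
cd flink-runtime && mvn verify

# Resume a multi-module build from a given module onward:
mvn install -rf :flink-runtime
```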
nt of
> each function to the DataSet. An implicit conversion from DataSet to
> DataSetExtended (which implements the mapWith, reduceWith, ...) methods
> could help there...
>
> What do you think?
>
> Greetings,
> Stephan
>
>
> On Thu, Jan 28, 2016 at 2:05 PM, Stefano Ba
changes to make this pattern
available to Scala users?
Thank you all in advance for your feedback.
--
BR,
Stefano Baghino
Software Engineer @ Radicalbit
Stefano Baghino created FLINK-3289:
--
Summary: Double reference to flink-contrib
Key: FLINK-3289
URL: https://issues.apache.org/jira/browse/FLINK-3289
Project: Flink
Issue Type: Bug
. With that, we could persist that information.
>
>
> On Wed, Jan 20, 2016 at 5:29 PM, Stefano Baghino <
> stefano.bagh...@radicalbit.io> wrote:
>
> > Hi Aljoscha,
> >
> > thank you for your tip as well. I've started working on an issue on the
> > examp
t; > > everything to external classes, IMHO we should do it, but I can also
> > see
> > > > why it is nice to have the whole example contained in one file. So
> > let’s
> > > > see what the others think.
> > > >
> > > > Cheers,
files/classes in most editors without having to scroll through the file to
reach the code you're interested in.
--
BR,
Stefano Baghino
Software Engineer @ Radicalbit
> in some other repository. This way users can learn by studying from more
> and more complex examples.
>
> Cheers,
> Aljoscha
> > On 20 Jan 2016, at 15:12, Stefano Baghino
> wrote:
> >
> > Thank you very much for the pointers, we'll look into it.
> >
a-td4107.html
> In that thread you'll also find this repository:
>
> https://github.com/rzvoncek/flink/tree/2b281120f4206c4fd66bec22090e0b6d62ebb8ad/flink-staging/flink-cassandra
>
>
>
>
>
> On Wed, Jan 20, 2016 at 2:20 PM, Stefano Baghino <
> stefano.bagh...@
knowledge to the Flink community.
>
> Regards,
> Robert
>
>
> On Wed, Jan 20, 2016 at 11:55 AM, Stefano Baghino <
> stefano.bagh...@radicalbit.io> wrote:
>
> > Hello everyone,
> >
> > I’m Stefano Baghino and I’m a Software Engineer at Radicalbit (
>
Hello everyone,
I’m Stefano Baghino and I’m a Software Engineer at Radicalbit (
www.radicalbit.io). Our company is working on a brand new OSS distribution
focused on distributed, low-latency processing (“Fast Data”). Flink will
play a pivotal role on our platform and we’re starting to work on
files have
> attribution, but most of them do not.
>
> I thought Flink was more advanced. Why?
>
--
BR,
Stefano Baghino
Software Engineer @ Radicalbit