Could you be missing the call to execute()?
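A minimal sketch of what the end of such a job usually looks like (the User class and the paths here are made up for illustration, and the Avro formats come from the flink-avro dependency); the key part is the final execute() call, without which the plan is only built and never submitted:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.AvroInputFormat;
import org.apache.flink.api.java.io.AvroOutputFormat;
import org.apache.flink.core.fs.Path;

public class AvroCopyJob {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // "User" stands in for whatever Avro-generated class the job actually uses.
    DataSet<User> users = env.createInput(
        new AvroInputFormat<User>(new Path("hdfs:///input/users.avro"), User.class));

    users.write(new AvroOutputFormat<User>(User.class), "hdfs:///output/users.avro");

    // Without this line the program only builds the plan and returns,
    // which looks exactly like the client exiting immediately.
    env.execute("Avro copy job");
  }
}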
On 23.03.2016 01:25, Tarandeep Singh wrote:
Hi,
I wrote a simple Flink job that uses the Avro input format to read an Avro
file and save the results in Avro format. The job does not get
submitted and the job client exits immediately. Same thing happens if
Aljoscha,
Thank you for fixing the issue.
I built both Flink server and job with the code you provided, and it
worked almost as expected.
The output was below. I am wondering why the value was emitted at
19:44:44.635 while I set
ContinuousProcessingTimeTrigger.of(Time.seconds(5)), but it's not a
probl
Jamie,
That looks fantastic!
Thanks for the help.
David
On Tue, Mar 22, 2016 at 6:22 PM, Jamie Grier wrote:
> Hi David,
>
> Here's an example of something similar to what you're talking about:
> https://github.com/jgrier/FilteringExample
>
> Have a look at the TweetImpressionFilteringJob.
>
>
Hi,
I wrote a simple Flink job that uses the Avro input format to read an Avro file
and save the results in Avro format. The job does not get submitted and the job
client exits immediately. Same thing happens if I run the program in the
IDE or if I submit via the command line.
Here is the program-
import com.s
Hi Gna,
thanks for sharing the good news and opening the JIRA!
Cheers, Fabian
2016-03-22 23:30 GMT+01:00 Sourigna Phetsarath :
> Ufuk & Fabian,
>
> FYI, I was able to extend the FileInputFormat and override
> createInputSplits
> to handle multiple Paths - there was an improvement of reduced
Ufuk & Fabian,
FYI, I was able to extend the FileInputFormat and override createInputSplits
to handle multiple Paths - there was an improvement of reduced resource
usage and increased performance of the job.
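The idea, very roughly (a hypothetical subclass just to illustrate the approach, not the actual patch): run the normal single-path split generation once per path and concatenate the results.

import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.FileInputSplit;
import org.apache.flink.core.fs.Path;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MultiPathTextInputFormat extends TextInputFormat {
  private final List<Path> paths;

  public MultiPathTextInputFormat(List<Path> paths) {
    super(paths.get(0));
    this.paths = paths;
  }

  @Override
  public FileInputSplit[] createInputSplits(int minNumSplits) throws IOException {
    List<FileInputSplit> splits = new ArrayList<>();
    for (Path path : paths) {
      setFilePath(path); // point the parent format at the next path
      splits.addAll(Arrays.asList(super.createInputSplits(minNumSplits)));
    }
    // Note: split numbers restart for every path in this simplified sketch.
    return splits.toArray(new FileInputSplit[splits.size()]);
  }
}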
Also added this ticket: https://issues.apache.org/jira/browse/FLINK-3655
-Gna
On Mon
Hi David,
Here's an example of something similar to what you're talking about:
https://github.com/jgrier/FilteringExample
Have a look at the TweetImpressionFilteringJob.
-Jamie
On Tue, Mar 22, 2016 at 2:24 PM, David Brelloch wrote:
> Konstantin,
>
> Not a problem. Thanks for pointing me in t
Hi,
I am converting a Storm topology to a Flink-Storm topology using the flink-storm
dependency. When I run my code, the FlinkTopologyBuilder eventually calls the
createTopology method in TopologyBuilder and throws the error at the following
line:
public StormTopology createTopology() {
Konstantin,
Not a problem. Thanks for pointing me in the right direction.
David
On Tue, Mar 22, 2016 at 5:17 PM, Konstantin Knauf <konstantin.kn...@tngtech.com> wrote:
> Hi David,
>
> interesting use case, I think, this can be nicely done with a comap. Let
> me know if you run into problems, u
Hi David,
Interesting use case. I think this can be nicely done with a co-map. Let
me know if you run into problems; unfortunately, I am not aware of any
open source examples.
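For what it's worth, a rough sketch of the connect/co-map idea (the sources, rule format, and matching logic below are all made up; a CoFlatMapFunction is used since one event may produce zero or many alerts):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.util.Collector;

import java.util.HashSet;
import java.util.Set;

public class DynamicAlertSketch {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // Placeholder sources: the event stream and a control stream carrying alert rules.
    DataStream<String> events = env.socketTextStream("localhost", 9000);
    DataStream<String> rules = env.socketTextStream("localhost", 9001);

    DataStream<String> alerts = events
        .connect(rules)
        .flatMap(new CoFlatMapFunction<String, String, String>() {
          // Kept per parallel instance; a keyed/stateful variant would be needed at scale.
          private final Set<String> keywords = new HashSet<>();

          @Override
          public void flatMap1(String event, Collector<String> out) {
            for (String keyword : keywords) {
              if (event.contains(keyword)) {
                out.collect("ALERT[" + keyword + "]: " + event);
              }
            }
          }

          @Override
          public void flatMap2(String rule, Collector<String> out) {
            keywords.add(rule); // a new rule arriving on the control stream
          }
        });

    alerts.print();
    env.execute("dynamic alerting sketch");
  }
}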
Cheers,
Konstantin
On 22.03.2016 21:07, David Brelloch wrote:
> Konstantin,
>
> For now the jobs will largely just invo
Konstantin,
For now the jobs will largely just involve incrementing or decrementing
based on the JSON message coming in. We will probably look at adding
windowing later, but for now that isn't a huge priority.
As an example of what we are looking to do, let's say the following 3
messages were
read fr
Hi David,
I have no idea how many parallel jobs are possible in Flink, but
generally speaking I do not think this approach will scale, because you
will always have only one JobManager for coordination. But there is
definitely someone on the list who can tell you more about this.
Regarding your
Hi all,
We are currently evaluating Flink for processing Kafka messages and are
running into some issues. The basic problem we are trying to solve is
allowing our end users to dynamically create jobs that alert based on the
messages coming from Kafka. At launch we figure we need to support at least
Ah ok, in the case of the initial DataSet the problem is the following. When you
apply an aggregation, only the aggregated fields are valid. Data in the
other fields doesn't necessarily correspond to the element where the
maximum value, for example, has been found. This becomes clear when you
compute the
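A tiny illustration of that (made-up data; assumes an ExecutionEnvironment env and the usual DataSet API imports):

// The maximum 7.0 comes from ("b", 7.0), but field 0 of the result
// is not guaranteed to be "b" - only the aggregated field 1 is meaningful.
DataSet<Tuple2<String, Double>> data = env.fromElements(
    Tuple2.of("a", 1.0),
    Tuple2.of("b", 7.0));

DataSet<Tuple2<String, Double>> max = data.aggregate(Aggregations.MAX, 1);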
The JDBC formats don't make any assumptions about which DB backend is used.
A JDBC FLOAT is in general returned as a double, since that was the
recommended mapping I found when I wrote the formats.
Is the INT returned as a double as well?
Note: The (runtime) output type is in no way connected t
Hi Robert!
Thank you! :)
David
On Tue, Mar 22, 2016 at 7:59 AM, Robert Metzger wrote:
> Hey David,
>
> FLINK-3602 has been merged to master.
>
> On Fri, Mar 11, 2016 at 5:11 PM, David Kim <
> david@braintreepayments.com> wrote:
>
>> Thanks Stephan! :)
>>
>> On Thu, Mar 10, 2016 at 11:06 AM
Sorry I was not clear:
I meant the initial DataSet is changing. Not the ds. :)
> On 22.03.2016 at 15:28, Till Rohrmann wrote:
>
> From the code extract I cannot tell what could be wrong because the code
> looks ok. If ds changes, then your normalization result should change as
> well, I w
From the code extract I cannot tell what could be wrong because the code
looks ok. If ds changes, then your normalization result should change as
well, I would assume.
On Tue, Mar 22, 2016 at 3:15 PM, Lydia Ickler wrote:
> Hi Till,
>
> maybe it is doing so because I rewrite the ds in the next
Hi Till,
maybe it is doing so because I rewrite the ds again in the next step and then
the working steps get mixed up?
I am reading the data from a local .csv file with readMatrix(env, "filename")
See code below.
Best regards,
Lydia
//read input file
DataSet<Tuple3<Integer, Integer, Double>> ds = readMatrix(env, input);
/**
Hi Lydia,
I tried to reproduce your problem but I couldn't. Can it be that you have
somewhere a non deterministic operation in your program or do you read the
data from a source with varying data? Maybe you could send us a compilable
and complete program which reproduces your problem.
Cheers,
Til
Hi Simone, can your problem be related to this mail thread [1]?
[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Flink-1-0-0-JobManager-is-not-running-in-Docker-Container-on-AWS-td10711.html
Cheers,
Till
On Tue, Mar 22, 2016 at 1:22 PM, Simone Robutti <
simone.robu...@radicalbi
Hi Till
yes it does, thanks for the clear example.
Bart
On Tue, Mar 22, 2016, at 14:25, Till Rohrmann wrote:
> Hi Bart,
> there are multiple ways how to specify a window function using the
> Scala API. The most scalaesque way would probably be to use an
> anonymous function:
> val env =
Hi Bart,
there are multiple ways to specify a window function using the Scala
API. The most Scala-esque way would probably be to use an anonymous function:
val env = StreamExecutionEnvironment.getExecutionEnvironment
val input = env.fromElements(1,2,3,4,5,7)
val pair = input.map(x => (x, x))
Hi all,
I have a question.
I have a DataSet<Tuple3<Integer, Integer, Double>> ds and I want to
normalize all values (at position 2) in it by the maximum of the DataSet
(ds.aggregate(Aggregations.MAX, 2)).
How do I tackle that?
If I use the cross operator my result changes every time I run the program (see
code bel
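For reference, the cross-based normalization is usually written along these lines (a sketch assuming DataSet<Tuple3<Integer, Integer, Double>> ds and the usual DataSet API imports, not the poster's actual code):

// Global maximum of the value field (a single-element DataSet).
DataSet<Tuple3<Integer, Integer, Double>> max = ds.aggregate(Aggregations.MAX, 2);

// Divide every value by that maximum; crossWithTiny hints that the second input is tiny.
DataSet<Tuple3<Integer, Integer, Double>> normalized = ds
    .crossWithTiny(max)
    .with(new CrossFunction<Tuple3<Integer, Integer, Double>,
                            Tuple3<Integer, Integer, Double>,
                            Tuple3<Integer, Integer, Double>>() {
      @Override
      public Tuple3<Integer, Integer, Double> cross(
          Tuple3<Integer, Integer, Double> value,
          Tuple3<Integer, Integer, Double> maxEntry) {
        return new Tuple3<Integer, Integer, Double>(value.f0, value.f1, value.f2 / maxEntry.f2);
      }
    });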
Hey David,
FLINK-3602 has been merged to master.
On Fri, Mar 11, 2016 at 5:11 PM, David Kim wrote:
> Thanks Stephan! :)
>
> On Thu, Mar 10, 2016 at 11:06 AM, Stephan Ewen wrote:
>
>> The following issue should track that.
>> https://issues.apache.org/jira/browse/FLINK-3602
>>
>> @Niels: Thanks
val aggregatedStream = stream.apply(
  (w: Window, values: scala.Iterable[(List[String], Long, Int)], out: Collector[Aggregation]) => {
    import scala.collection.JavaConversions._
    val agg = Aggregation(values.toList.map {
      case (pages, _, ct) => (ct, pages)
    })
    out.collect(agg)
  })
Hello,
we are trying to set up our system to do remote debugging through IntelliJ.
Flink is running in a long-running YARN session. We are launching Flink's
CliFrontend with the following parameters:
> run -m **::48252
/Users//Projects/flink/build-target/examples/batch/WordCount.jar
The error r
Hi all
I'm using 1.0 and have all my data nicely bundled in one allWindow, but
I don't understand the syntax in Scala to make one JSON out of those for
dumping the whole window into Kafka.
My type is:
val stream: AllWindowedStream[(List[String], Long, Int), TimeWindow]
and I want to do
stream
by using DataStream#writeAsCsv(String path, WriteMode writeMode)
On 22.03.2016 12:18, subash basnet wrote:
Hello all,
I am trying to write the streaming data to a file and update it
recurrently with the streaming data. I get the following exception
saying the file cannot be overwritten:
Caused by: java.i
Hi subash,
You can pass a WriteMode as the second parameter of the write* methods. For example:
```
DataStream<...> myStream = ...;
myStream.writeAsCsv("path of output", FileSystem.WriteMode.OVERWRITE);
```
I hope this helps.
Regards,
Chiwan Park
> On Mar 22, 2016, at 8:18 PM, subash basnet wrote:
>
> Hell
Hello all,
I am trying to write the streaming data to a file and update it recurrently
with the streaming data. I get the following exception saying the file cannot
be overwritten:
Caused by: java.io.IOException: File or directory already exists. Existing
files and directories are not overwritten in NO_OVERWRI
I just placed the parameterTool.getProperties() in the
FlinkKafkaConsumer082 constructor. Now I am able to decide with
--auto.offset.reset smallest at the command line each time I start up a
route.
Thanks for your hints!
Dominique
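For anyone finding this thread later, the approach looks roughly like this (topic name and schema are placeholders; assumes a StreamExecutionEnvironment env and the Kafka connector imports):

// Everything passed on the command line (e.g. --auto.offset.reset smallest)
// is forwarded to the Kafka consumer as properties.
ParameterTool parameterTool = ParameterTool.fromArgs(args);

DataStream<String> messages = env.addSource(
    new FlinkKafkaConsumer082<String>(
        "my-topic",                 // placeholder topic name
        new SimpleStringSchema(),
        parameterTool.getProperties()));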
On 18.03.2016 at 03:02, Balaji Rajagopalan wrote:
If it is a o
Hi,
I have some thoughts about Evictors as well, yes, but I didn't yet write them
down. The basic idea about them is this:
class Evictor<T, W extends Window> {
    Predicate<T> getPredicate(Iterable<StreamRecord<T>> elements, int size, W window);
}

class Predicate<T> {
    boolean evict(StreamRecord<T> element);
}
The evictor will return a pr
Thanks for the write-up Aljoscha.
I think it is a really good idea to separate the different aspects (firing,
purging, lateness) a bit. At the moment, all of these need to be handled in
the Trigger, and a custom trigger is necessary whenever you want some of
these aspects handled slightly differently