Hi Bruno,
From the code, I conclude that the "akka.client.timeout" setting is what affects
this. It defaults to 60 seconds.
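For illustration, a minimal sketch of raising it in flink-conf.yaml (the key
comes from the code; the value here is just an example):
akka.client.timeout: 10 min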
I'm not sure why this setting is not documented, though, as well as many
other "akka.*" settings; maybe there are good reasons behind that.
Regards,
Yury
2017-01-31 17:47 GMT+0
Yes, I did open a socket with netcat. It turns out my first error was due
to a stream without a sink triggering the socket connect (I thought that
without a sink the stream wouldn't affect anything, so I didn't comment it
out, and I didn't open the socket for that port). However, I did play with it
I somehow still suspect that iterations might work for your use case. Note
that in the streaming API, iterations are currently nothing more than a
back-edge in the topology, i.e. a low-level tool to create a cyclic
topology, as you say with your hypothetical setter syntax. (It's quite
differe
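For illustration, a minimal sketch of such a back-edge with the Java
DataStream API (the input and the decrement logic are made up, just to show
the cycle; Java 8 lambdas assumed):

// assuming DataStream<Long> input = env.fromElements(5L, 10L);
IterativeStream<Long> iteration = input.iterate();
DataStream<Long> minusOne = iteration.map(v -> v - 1);     // loop body
DataStream<Long> feedback = minusOne.filter(v -> v > 0);
iteration.closeWith(feedback);                             // the back-edge
DataStream<Long> output = minusOne.filter(v -> v <= 0);    // leaves the loop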
Hi Nancy,
Currently there is no way to do so. Flink only provides the mode you
described, i.e. a modified file is considered a new file. The reason is that
many filesystems do not give you separate creation and modification
timestamps.
If you control the way files are created, a solution cou
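For illustration only (not necessarily what the truncated suggestion above
had in mind): if in-progress files were written under, say, a ".tmp" suffix
and renamed once complete, a custom FilePathFilter could skip them:

// hypothetical filter: exclude files that are still being written
FilePathFilter filter = new FilePathFilter() {
    @Override
    public boolean filterPath(Path filePath) {
        return filePath.getName().endsWith(".tmp");  // true means "filter out"
    }
};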
Hi guys,
I have the following use case. Every day a new file is created and
periodically some log records are appended to it. I am reading the file in
the following way:
executionEnvironment.readFile(format, directoryPath, PROCESS_CONTINUOUSLY,
period.toMilliseconds(), filePathFilter);
However, F
+u...@apache.org, because he implemented queryable state.
There is also queryable state, which allows you to query the internal keyed
state of Flink user functions.
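For illustration, a minimal sketch of exposing keyed state for queries (the
stream type and the state name are made up):

// assuming DataStream<Tuple2<String, Long>> stream
stream.keyBy(0)                           // key by the String field
      .asQueryableState("latest-value");  // latest tuple per key becomes queryable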
On Mon, 30 Jan 2017 at 00:46 Jonas wrote:
> You could write your data back to Kafka using the FlinkKafkaProducer and
> then use
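For illustration, a minimal sketch of the quoted write-back-to-Kafka idea
(using the 0.9 connector as an example; topic, properties and the String
stream are assumptions, and the usual connector imports are omitted):

// assuming DataStream<String> stream
Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
stream.addSink(new FlinkKafkaProducer09<>("results-topic", new SimpleStringSchema(), props));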
These 2 rows, if converted to a Row[] of Strings, should cause the problem:
http://www.aaa.it/xxx/v/10002780063t/000/1,f/10001957530,cf/13,cpva/77,cf/13,,sit/A2046X,strp/408,10921957530,,1,5,1,2013-01-04T15:02:25,5,,10002780063,XXX,1,,3,,,2013-01-04T15:02:25,XXX,XXX,13,2013-01-04T1
I hope to have time to write a test program :)
Otherwise I hope someone else can give it a try in the meantime.
Best,
Flavio
On Tue, Jan 31, 2017 at 4:49 PM, Fabian Hueske wrote:
> Hi Flavio,
>
> I do not remember such a bug being fixed. Maybe it was fixed by chance, but
> I guess not.
> Can you open
Hi Flavio,
I do not remember such a bug being fixed. Maybe it was fixed by chance, but I
guess not.
Can you open a JIRA and maybe provide input data to reproduce the problem?
Thank you,
Fabian
2017-01-31 16:25 GMT+01:00 Flavio Pompermaier:
> Hi to all,
> I'm trying to read from a db and then writing to
Hi to all,
I'm trying to read from a db and then writing to a csv.
In my code I do the following:
tableEnv.fromDataSet(myDataSet).writeToSink(new CsvTableSink(csvOutputDir,
fieldDelim));
If I use fieldDelim = "," everything is OK; if I use "\t" some tabs are not
printed correctly...
PS: myDataSet is
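For illustration, a minimal sketch with the delimiter hard-coded as a real
tab character (just an assumption about the setup: if fieldDelim is read
from a config file, it may arrive as the two characters '\' and 't' and
would need unescaping first):

tableEnv.fromDataSet(myDataSet)
        .writeToSink(new CsvTableSink(csvOutputDir, "\t"));  // "\t" in Java source is a literal tab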
Hi there,
I am trying to cancel a job and create a savepoint (i.e. flink cancel -s), but
it takes more than a minute and then fails due to the timeout. However, it
seems that the job is cancelled successfully and the savepoint is made, but I
can only see that through the dashboard.
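For reference, the command in question looks roughly like this (the
savepoint directory and job id are placeholders):

./bin/flink cancel -s hdfs:///flink/savepoints <jobID>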
Ca
Hello, one last note on this thread: we've processed and published the
Flink user survey results, and you can find a file with graphs summarizing
multiple-choice responses as well as anonymous feedback from open-ended
questions in a GitHub repository [1]. We also published a summary of
responses on
Can you try opening a socket with netcat on localhost?
nc -lk 9000
and see if this works? For me this works.
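For illustration, the Flink side writing to that socket could be as simple
as this (assuming a DataStream<String> stream and the usual imports):

stream.writeToSocket("localhost", 9000, new SimpleStringSchema());  // connects when the job runs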
-- Jonas
Hi Diego,
you can also broadcast a changelog stream:
DataStream mainStream = ...
DataStream changeStream = ...
mainStream.connect(changeStream.broadcast()).flatMap(new
YourCoFlatMapFunction());
All records of the changeStream will be forwarded to each instance of the
flatmap operator.
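For illustration, a minimal sketch of such a CoFlatMapFunction (the types
and the enrichment logic are made up; assumes the usual java.util,
org.apache.flink.api.java.tuple and org.apache.flink.util imports):

// hypothetical: main records are Strings, change records are (key, value) pairs
public class YourCoFlatMapFunction
        implements CoFlatMapFunction<String, Tuple2<String, String>, String> {

    private final Map<String, String> rules = new HashMap<>();  // local copy of the changelog

    @Override
    public void flatMap1(String main, Collector<String> out) {
        out.collect(main + " -> " + rules.getOrDefault(main, "n/a"));  // enrich with current rules
    }

    @Override
    public void flatMap2(Tuple2<String, String> change, Collector<String> out) {
        rules.put(change.f0, change.f1);  // apply the change, emit nothing
    }
}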
Best, Fa