Re: Timers and Checkpoints

2019-08-01 Thread Andrea Spina
> Hi Alberto, do you get exactly the same exception? Maybe you can share some logs with us? > Regards, > Timo > On 25.05.18 at 13:41, Alberto Mancini wrote: > Hello,

Re: streaming predictions

2018-07-24 Thread Andrea Spina
assified using this trained SVM model. > Since the predict function only supports DataSet and not DataStream, on stackoverflow a flink contributor mentioned that this should be done using a map/flatMap function. Unfortunately I am not able to work this function out. > It would be incredible for me if you could help me a little bit further! > Kind regards and thanks in advance > Cederic Bosmans > -- *David Anderson* | Training Coordinator | data Artisans > Join Flink Forward - The Apache Flink Conference > Stream Processing | Event Driven | Real Time -- *Andrea Spina* Software Engineer @ Radicalbit Srl Via Borsieri 41, 20159, Milano - IT
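The map/flatMap workaround discussed above amounts to extracting the trained model's weights and scoring each streamed record by hand. A minimal sketch in plain Scala, assuming a linear SVM whose weight vector was already pulled out of the trained FlinkML model (the `weights` and `intercept` values here are made up for illustration, not the FlinkML API):

```scala
// Hypothetical weights and intercept, extracted offline from a trained
// FlinkML SVM model (names and values are illustrative only).
val weights = Array(0.4, -1.2, 0.7)
val intercept = 0.1

// The function you would pass to DataStream.map to score each record:
// sign(w . x + b), i.e. a plain linear decision function.
def predict(features: Array[Double]): Double = {
  val score = weights.zip(features).map { case (w, x) => w * x }.sum + intercept
  if (score >= 0) 1.0 else -1.0
}
```

With this in place, `stream.map(vec => predict(vec))` attaches the model to a DataStream without needing a DataSet-based `predict` call.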

Flink 1.3.2 RocksDB map segment fail if configured as state backend

2018-09-17 Thread Andrea Spina
07so-failed-to-map-segment-from-shared-object-operation-not-permitted [2] - http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Unable-to-use-Flink-RocksDB-state-backend-due-to-endianness-mismatch-td13473.html

Re: Flink 1.3.2 RocksDB map segment fail if configured as state backend

2018-09-18 Thread Andrea Spina
ghts problem. > Please make sure that your path /tmp has the exec right. > Best, > Stefan > On 17.09.2018 at 11:37, Andrea Spina wrote: > Hi everybody, > I run a Flink 1.3.2 installation on a Red Hat Enterprise Linux Server and I'm not a

Re: Flink 1.7.2: All jobs are getting deployed on the same task manager

2019-03-18 Thread Andrea Spina
ensure that the jobs get distributed evenly across > all 3 task managers? > Thanks, > Harshith

Flink sent/received bytes related metrics: how are they calculated?

2019-06-14 Thread Andrea Spina
sink Sout records 10GB of *bytes received* inbound, then my source Sin emits between 20-25GB *bytes sent*. [image: Screenshot 2019-06-14 at 09.40.08.png] Is someone able to detail how these two metrics are calculated? Thank you,

Re: Flink sent/received bytes related metrics: how are they calculated?

2019-06-14 Thread Andrea Spina
Sorry, I totally missed the version: flink-1.6.4, Streaming API. On Fri, Jun 14, 2019 at 11:08, Chesnay Schepler <ches...@apache.org> wrote: > Which version of Flink are you using? There were some issues at some point > about double-counting. > > On 14/06/2019 0

Re: Flink sent/received bytes related metrics: how are they calculated?

2019-06-14 Thread Andrea Spina
<ches...@apache.org> wrote: > What does the *P1* pipeline look like? Are there 2 downstream operators > reading from *Sin* (in this case the number of bytes would be measured > twice)? > > On 14/06/2019 12:09, Andrea Spina wrote: > > Sorry, I totally missed the vers

Linkage Error RocksDB and flink-1.6.4

2019-06-23 Thread Andrea Spina
ColumnFamilyOptions = columnOptions } val stateBE = new RocksDBStateBackend(properties.checkpointDir.get, properties.checkpointIncremental.getOrElse(false)) stateBE.setPredefinedOptions(PredefinedOptions.FLASH_SSD_OPTIMIZED) stateBE.setOptions(rocksdbConfig) stateBE

Re: Linkage Error RocksDB and flink-1.6.4

2019-06-24 Thread Andrea Spina
tions = > > currentOptions.setWriteBufferSize(MemorySize.parseBytes(properties.writeBufferSize)) > } > > You just need to serialize the properties via closure to TMs. Hope this could > help you. > > Best > Yun Tang > -- &

Process Function's timers "postponing"

2019-06-24 Thread Andrea Spina
for your precious help, [1] https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/process_function.html#timer-coalescing -- *Andrea Spina* Head of R&D @ Radicalbit Srl Via Giovanni Battista Pirelli 11, 20124, Milano - IT
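The timer-coalescing technique from the linked docs section reduces to rounding each timestamp down to a coarser resolution, so that many records of the same key register the very same timer (Flink deduplicates timers registered at an identical timestamp per key). A self-contained sketch:

```scala
// Shared resolution: all events within the same 1-second bucket map to
// one timer timestamp instead of one timer per event.
val resolutionMs = 1000L

def coalescedTimer(eventTime: Long): Long =
  eventTime - (eventTime % resolutionMs)

// Inside a ProcessFunction you would then call
// ctx.timerService().registerEventTimeTimer(coalescedTimer(ts))
```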

Re: Process Function's timers "postponing"

2019-06-25 Thread Andrea Spina
/org/apache/flink/streaming/api/operators/InternalTimerServiceImpl.java#L237 > > <https://github.com/apache/flink/blob/adb7aab55feca41c4e4a2646ddc8bc272c022098/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/InternalTimerServiceImpl.java#L237> > > -- > *Fro

Re: Process Function's timers "postponing"

2019-06-25 Thread Andrea Spina
runtime/functions/CleanupState.java > > ------ > *From:* Andrea Spina > *Sent:* Tuesday, June 25, 2019 23:40 > *To:* Yun Tang > *Cc:* user > *Subject:* Re: Process Function's timers "postponing" > > Hi Yun, thank you for your ans
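The pattern behind the referenced CleanupState can be sketched without any Flink types: keep only the latest deadline in (what would be keyed) state, and let stale timers fire but do nothing. The `var` below stands in for a `ValueState[Long]`; this illustrates the idea, not Flink's actual implementation:

```scala
// Stand-in for the ValueState[Long] a ProcessFunction would use per key.
var latestDeadline = 0L

// On each event, remember the newest deadline; a new timer is registered
// for it, and any previously registered timers become stale.
def onEvent(eventTime: Long, timeoutMs: Long): Long = {
  latestDeadline = eventTime + timeoutMs
  latestDeadline
}

// When a timer fires, act only if it is the newest one; older timers are
// simply ignored rather than explicitly deleted.
def onTimer(firingTimestamp: Long): Boolean =
  firingTimestamp == latestDeadline
```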

HDFS checkpoints for rocksDB state backend:

2019-06-26 Thread Andrea Spina
tly. I read also here [1] but it is not helping. Thank you for the precious help. [1] - https://www.cnblogs.com/chendapao/p/9170566.html

Re: HDFS checkpoints for rocksDB state backend:

2019-06-27 Thread Andrea Spina
thub.com/alibaba/arthas > Best, > Congxian > > Andrea Spina wrote on Thursday, June 27, 2019 at 1:57 AM: > >> Dear community, >> I'm trying to use HDFS checkpoints in flink-1.6.4 with the following >> configuration >> >> state.backend: rocksdb >> s
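The configuration quoted above, spelled out as a plausible flink-conf.yaml fragment (the namenode host and checkpoint path are placeholders, not taken from the thread):

```yaml
# flink-conf.yaml sketch for RocksDB checkpoints on HDFS (flink-1.6.4);
# namenode host/port and directory are placeholders.
state.backend: rocksdb
state.checkpoints.dir: hdfs://namenode:8020/flink/checkpoints
state.backend.incremental: true
```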

Providing Custom Serializer for Generic Type

2019-07-03 Thread Andrea Spina
ation.html#defining-type-information-using-a-factory

Re: Providing Custom Serializer for Generic Type

2019-07-04 Thread Andrea Spina
le.com/Custom-Serializer-for-Avro-GenericRecord-td25433.html > [3] - > https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/types_serialization.html#defining-type-information-using-a-factory

Re: Providing Custom Serializer for Generic Type

2019-07-05 Thread Andrea Spina
in the future, I would recommend either Avro or Pojo as > Jingsong Lee had already mentioned. > > Cheers, > Gordon > > [1] > https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/types_serialization.html#flinks-typeinformation-class > > On Thu, Jul 4, 2019 at

Flink java.io.FileNotFoundException Exception with Peel Framework

2016-06-28 Thread ANDREA SPINA
hannel.java:57) ... 15 more here <https://dl.dropboxusercontent.com/u/78598929/flink-hadoop-jobmanager-0-cloud-11.log> the related jobmanager full log. I can't figure out a solution. Thank you and have a nice day. -- *Andrea Spina* Guest student at DIMA, TU Berlin N.Tessera: *74598* MAT: *89369* *Ingegneria Informatica* *[LM] *(D.M. 270)

Re: Flink java.io.FileNotFoundException Exception with Peel Framework

2016-06-28 Thread ANDREA SPINA
pache.org/projects/flink/flink-docs-master/setup/config.html#background > > So we have to investigate a little more. Which version of Flink are you > using? > > Cheers, > Max > > On Tue, Jun 28, 2016 at 11:57 AM, ANDREA SPINA > <74...@studenti.unimore.it> wrote: > >

Re: Flink java.io.FileNotFoundException Exception with Peel Framework

2016-06-29 Thread ANDREA SPINA
Hi, the problem was solved after I figured out there was an instance of a Flink TaskManager running on a node of the cluster. Thank you, Andrea. 2016-06-28 12:17 GMT+02:00 ANDREA SPINA <74...@studenti.unimore.it>: > Hi Max, > thank you for the fast reply and sorry: I use flink-1.0.3. &g

Apache Flink 1.0.3 IllegalStateException Partition State on chaining

2016-06-29 Thread ANDREA SPINA
nDS: DataSet[Int]): DataSet[Vector] = { dimensionDS.map { dimension => val values = DenseVector(Array.fill(dimension)(0.0)); values } } I can't figure out a solution, thank you for your help. Andrea

Re: Apache Flink 1.0.3 IllegalStateException Partition State on chaining

2016-06-29 Thread ANDREA SPINA
did > the producer fail? > > > On Wed, Jun 29, 2016 at 12:19 PM, ANDREA SPINA > <74...@studenti.unimore.it> wrote: > > Hi everyone, > > > > I am running some Flink experiments with Peel benchmark > > http://peel-framework.org/ and I am struggling with e

Re: Apache Flink 1.0.3 IllegalStateException Partition State on chaining

2016-07-13 Thread ANDREA SPINA
g the akka ask timeout: >> >> akka.ask.timeout: 100 s >> >> Does this help? >> >> >> On Wed, Jun 29, 2016 at 3:10 PM, ANDREA SPINA <74...@studenti.unimore.it> >> wrote: >> > Hi Ufuk, >> > >> > so the

FlinkML ALS matrix factorization: java.io.IOException: Thread 'SortMerger Reading Thread' terminated due to an exception: null

2016-08-31 Thread ANDREA SPINA
32768 taskmanager.network.numberOfBuffers = 98304 akka.ask.timeout = 300s Any help will be appreciated. Thank you.

Re: FlinkML ALS matrix factorization: java.io.IOException: Thread 'SortMerger Reading Thread' terminated due to an exception: null

2016-09-01 Thread ANDREA SPINA
o(UnilateralSortMerger.java:1344) > at org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ > ThreadBase.run(UnilateralSortMerger.java:796) > > > On Wed, Aug 31, 2016 at 5:57 PM, Stefan Richter < > s.rich...@data-artisans.com> wrote: > >> Hi, >> >> c

Re: FlinkML ALS matrix factorization: java.io.IOException: Thread 'SortMerger Reading Thread' terminated due to an exception: null

2016-09-02 Thread ANDREA SPINA
to find > this problem is to take a look at the exceptions that are reported in the > web front-end for the failing job. Could you check if you find any reported > exceptions there and provide them to us? > > Best, > Stefan > > On 01.09.2016 at 11:56, ANDREA SPINA <

Re: FlinkML ALS matrix factorization: java.io.IOException: Thread 'SortMerger Reading Thread' terminated due to an exception: null

2016-09-07 Thread ANDREA SPINA
erThread.run(ForkJoinWorkerThread.java:107) So each node has 32G of memory; I'm working with taskmanager.heap.mb = 28672 and I tried different memory fractions: taskmanager.memory.fraction = (0.5, 0.6, 0.8). Hope you have enough info now. Thank you for your help. Andrea 2016-09-02 11:30 GMT+02:00

Re: Model serving in Flink DataStream

2017-11-15 Thread Andrea Spina
Hi Adarsh, we developed flink-JPMML for streaming model serving on top of the PMML format and, of course, Flink: we haven't released any official benchmark numbers yet. We didn't bump into any performance issues while using the library. In terms of throughput and latency it doesn't require mo

Cogrouped Stream never triggers tumbling event time window

2017-03-23 Thread Andrea Spina
Dear Community, I'm really struggling with a co-grouped stream. The workload is the following: * val firstStream: DataStream[FirstType] = firstRaw.assignTimestampsAndWatermarks(new MyCustomFirstExtractor(maxOutOfOrder)) val secondStream: DataStream[SecondType] = secondRaw .assi

Re: Cogrouped Stream never triggers tumbling event time window

2017-03-23 Thread Andrea Spina
Sorry, I forgot to put the Flink version. 1.1.2 Thanks, Andrea -- View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Cogrouped-Stream-never-triggers-tumbling-event-time-window-tp12373p12374.html Sent from the Apache Flink User Mailing List archi

Re: Cogrouped Stream never triggers tumbling event time window

2017-03-30 Thread Andrea Spina
Dear community, I finally solved the issue I bumped into. Basically, the reason for the problem was the behavior of my input: the incoming rates of the two streams differed widely (second-type events arrived really late and scarcely in event time). The solution I employed was to assign
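A useful detail for this kind of issue: a two-input operator advances event time with the minimum of its input watermarks, so a late or idle second stream holds back every window on the co-grouped result. In miniature:

```scala
// A co-group operator's event-time clock is the minimum of its two input
// watermarks, so windows close only when the slower stream catches up.
def operatorWatermark(wmFirst: Long, wmSecond: Long): Long =
  math.min(wmFirst, wmSecond)

// Even if the first stream's watermark is far ahead, a tumbling window
// ending at t=5000 cannot fire until the second watermark passes 5000.
```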

Async Functions and Scala async-client for mySql/MariaDB database connection

2017-03-30 Thread Andrea Spina
Dear Flink community, I started to use Async Functions in Scala, Flink 1.2.0, in order to retrieve enriching information from a MariaDB database. To do that, I first employed the classical jdbc library (org.mariadb.jdbc) and it worked as expected. Due to the blocking behavior of jdbc, I'm t
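The non-blocking shape that Flink's async I/O expects can be sketched with a plain scala.concurrent.Future standing in for the async MariaDB client; the `lookupUser` stub below is hypothetical and only illustrates the pattern:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Hypothetical async lookup: a real AsyncFunction would call a genuinely
// non-blocking MariaDB client here and complete Flink's ResultFuture from
// the Future's callback instead of blocking the operator thread.
def lookupUser(id: Int): Future[String] = Future {
  s"user-$id"
}

// Await is used only to make this sketch self-contained; never block like
// this inside an AsyncFunction.
val enriched = Await.result(lookupUser(42), 5.seconds)
```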

Painful KryoException: java.lang.IndexOutOfBoundsException on Flink Batch Api scala

2017-06-07 Thread Andrea Spina
Good afternoon dear Community, For a few days I've been really struggling to understand the reason behind this KryoException. Here is the stack trace. 2017-06-07 10:18:52,514 ERROR org.apache.flink.runtime.operators.BatchTask - Error in task code: CHAIN GroupReduce (GroupReduce at my.or

Re: Painful KryoException: java.lang.IndexOutOfBoundsException on Flink Batch Api scala

2017-06-08 Thread Andrea Spina
Hi guys, thank you for your interest. Yes @Flavio, I tried both the 1.2.0 and 1.3.0 versions. Following Gordon's suggestion I tried setting references to false but sadly it didn't help. What I did then was to declare a custom serializer as the following: class BlockSerializer extends Serializer[Block

Re: Painful KryoException: java.lang.IndexOutOfBoundsException on Flink Batch Api scala

2017-06-21 Thread Andrea Spina
Hi Gordon, sadly no news since the last message. In the end I jumped over the issue; I was not able to solve it. I'll try to provide a runnable example asap. Thank you. Andrea -- View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Painful-KryoExcept

Re: FlinkML ALS is taking too long to run

2017-07-11 Thread Andrea Spina
Dear Ziyad, could you kindly share some additional info about your environment (local/cluster, nodes, machines' configuration)? What exactly do you mean by "indefinitely"? How long does the job hang? Hope this helps. Cheers, Andrea -- View this message in context: http://apach

Re: FlinkML ALS is taking too long to run

2017-07-12 Thread Andrea Spina
Dear Ziyad, Yep, I encountered the same very long runtimes with ALS at the time, and I recorded improvements by increasing the number of blocks / decreasing the number of task slots per TM as you pointed out. Cheers, Andrea -- View this message in context: http://apache-flink-user-mailing-list-archiv