> Hi Alberto,
>>>
>>> do you get exactly the same exception? Maybe you can share some logs
>>> with us?
>>>
>>> Regards,
>>> Timo
>>>
>>> On 25.05.18 at 13:41, Alberto Mancini wrote:
>>> > Hello,
>>>
>>> ... classified using this trained SVM model.
>>>
>>> Since the predict function only supports DataSet and not DataStream,
>>> a Flink contributor on Stack Overflow mentioned that this should be
>>> done using a map/flatMap function.
>>> Unfortunately I have not been able to work out this function.
>>>
>>> It would be great if you could help me a little further!
>>>
>>> Kind regards and thanks in advance
>>> Cederic Bosmans
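A minimal sketch of the map/flatMap approach mentioned above, assuming the
SVM's weight vector has already been collected from the batch training job;
StreamScoring, the weights, and the example feature vectors are illustrative
placeholders, not Cederic's actual code:

    import org.apache.flink.streaming.api.scala._

    object StreamScoring {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment

        // Hypothetical weights of the trained SVM, collected beforehand
        // and shipped to the TaskManagers inside the map's closure.
        val svmWeights = Array(0.3, -1.2, 0.7)

        val features: DataStream[Array[Double]] =
          env.fromElements(Array(1.0, 0.5, -0.2), Array(-0.3, 2.1, 0.4))

        // An SVM decision value is the dot product of weights and features;
        // its sign gives the predicted class.
        val predictions = features.map { vec =>
          val raw = vec.zip(svmWeights).map { case (x, w) => x * w }.sum
          if (raw > 0) 1.0 else -1.0
        }

        predictions.print()
        env.execute("svm-stream-scoring")
      }
    }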
>>>
> --
> *David Anderson* | Training Coordinator | data Artisans
> --
> Join Flink Forward - The Apache Flink Conference
> Stream Processing | Event Driven | Real Time
>
--
*Andrea Spina*
Software Engineer @ Radicalbit Srl
Via Borsieri 41, 20159, Milano - IT
[1] - ...failed-to-map-segment-from-shared-object-operation-not-permitted
[2] -
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Unable-to-use-Flink-RocksDB-state-backend-due-to-endianness-mismatch-td13473.html
--
*Andrea Spina*
Software Engineer @ Radicalbit Srl
Via Borsieri 41, 20159, Milano - IT
> ... an access rights problem.
> Please make sure that your /tmp path has exec rights.
>
> Best,
> Stefan
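If /tmp has to stay non-executable, a possible workaround (a sketch; the
directory /opt/flink/tmp is a placeholder, and the keys shown are the Flink
1.3-era ones) is to point the JVM temp directory, from which RocksDB
extracts its native library, at a location that allows execution:

    # flink-conf.yaml
    env.java.opts: -Djava.io.tmpdir=/opt/flink/tmp
    taskmanager.tmp.dirs: /opt/flink/tmp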
>
>
> On 17.09.2018 at 11:37, Andrea Spina wrote:
>
> Hi everybody,
>
> I run a Flink 1.3.2 installation on a Red Hat Enterprise Linux Server
> and I'm not a
ensure that the jobs get distributed evenly across
> all 3 task managers?
>
>
>
> Thanks,
>
> Harshith
>
>
>
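One knob that can help with this (a sketch, assuming Flink 1.9.2 or later;
older schedulers fill one TaskManager's slots before moving to the next) is
the spread-out slot strategy in flink-conf.yaml:

    # prefer spreading slots across all registered TaskManagers
    cluster.evenly-spread-out-slots: true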
--
*Andrea Spina*
Software Engineer @ Radicalbit Srl
Via Borsieri 41, 20159, Milano - IT
sink Sout records 10GB of *bytes received*
inbound, then my source Sin emits between 20 and 25GB of *bytes sent*.
[image: Screenshot 2019-06-14 at 09.40.08.png]
Is someone able to detail how these two metrics are calculated?
Thank you,
--
*Andrea Spina*
Software Engineer @ Radicalbit Srl
Via Borsieri
Sorry, I totally missed the version: flink-1.6.4, Streaming API
On Fri, 14 Jun 2019 at 11:08, Chesnay Schepler <ches...@apache.org> wrote:
> Which version of Flink are you using? There were some issues at some point
> about double-counting.
>
> On 14/06/2019 0
... Chesnay Schepler <ches...@apache.org> wrote:
> What does the *P1* pipeline look like? Are there 2 downstream operators
> reading from *Sin* (in that case the number of bytes would be measured
> twice)?
>
> On 14/06/2019 12:09, Andrea Spina wrote:
>
> Sorry, I totally missed the version: flink-1.6.4, Streaming API
  override def createColumnOptions(currentOptions: ColumnFamilyOptions):
      ColumnFamilyOptions = columnOptions
}
val stateBE =
  new RocksDBStateBackend(properties.checkpointDir.get,
    properties.checkpointIncremental.getOrElse(false))
stateBE.setPredefinedOptions(PredefinedOptions.FLASH_SSD_OPTIMIZED)
stateBE.setOptions(rocksdbConfig)
stateBE
> ... override def createColumnOptions(currentOptions: ColumnFamilyOptions):
>     ColumnFamilyOptions = {
>   currentOptions.setWriteBufferSize(MemorySize.parseBytes(properties.writeBufferSize))
> }
>
> You just need to ship the properties to the TMs via the closure. Hope this
> helps you.
>
> Best
> Yun Tang
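Fleshed out, Yun's suggestion takes roughly this shape (a sketch against the
Flink 1.6-era OptionsFactory API; RocksProps and its field are hypothetical
stand-ins for the job's own properties object):

    import org.apache.flink.configuration.MemorySize
    import org.apache.flink.contrib.streaming.state.OptionsFactory
    import org.rocksdb.{ColumnFamilyOptions, DBOptions}

    // Hypothetical serializable settings holder; it travels to the
    // TaskManagers inside the factory's closure.
    case class RocksProps(writeBufferSize: String) extends Serializable

    class SizedOptionsFactory(props: RocksProps) extends OptionsFactory {
      override def createDBOptions(currentOptions: DBOptions): DBOptions =
        currentOptions

      override def createColumnOptions(
          currentOptions: ColumnFamilyOptions): ColumnFamilyOptions =
        currentOptions.setWriteBufferSize(
          MemorySize.parseBytes(props.writeBufferSize))
    }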
> --
for your precious help,
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/process_function.html#timer-coalescing
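For reference, the coalescing trick described in [1] takes roughly this
shape (a sketch; the key and value types and the one-second rounding are
illustrative):

    import org.apache.flink.streaming.api.functions.KeyedProcessFunction
    import org.apache.flink.util.Collector

    class CoalescingTimers
        extends KeyedProcessFunction[String, (String, Long), String] {

      override def processElement(
          value: (String, Long),
          ctx: KeyedProcessFunction[String, (String, Long), String]#Context,
          out: Collector[String]): Unit = {
        // Round the target up to the next full second so that many events
        // share one timer; re-registering an existing timestamp is a no-op.
        val coalesced = (ctx.timestamp() / 1000 + 1) * 1000
        ctx.timerService().registerEventTimeTimer(coalesced)
      }

      override def onTimer(
          timestamp: Long,
          ctx: KeyedProcessFunction[String, (String, Long), String]#OnTimerContext,
          out: Collector[String]): Unit =
        out.collect(s"timer fired at $timestamp")
    }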
--
*Andrea Spina*
Head of R&D @ Radicalbit Srl
Via Giovanni Battista Pirelli 11, 20124, Milano - IT
> <https://github.com/apache/flink/blob/adb7aab55feca41c4e4a2646ddc8bc272c022098/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/InternalTimerServiceImpl.java#L237>
>
> --
> *From:*
runtime/functions/CleanupState.java
>
> ------
> *From:* Andrea Spina
> *Sent:* Tuesday, June 25, 2019 23:40
> *To:* Yun Tang
> *Cc:* user
> *Subject:* Re: Process Function's timers "postponing"
>
> Hi Yun, thank you for your ans
tly.
I also read [1], but it is not helping.
Thank you for the precious help
[1] - https://www.cnblogs.com/chendapao/p/9170566.html
--
*Andrea Spina*
Head of R&D @ Radicalbit Srl
Via Giovanni Battista Pirelli 11, 20124, Milano - IT
> https://github.com/alibaba/arthas
> Best,
> Congxian
>
>
> On Thu, Jun 27, 2019 at 1:57 AM, Andrea Spina wrote:
>
>> Dear community,
>> I'm trying to use HDFS checkpoints in flink-1.6.4 with the following
>> configuration
>>
>> state.backend: rocksdb
>> s
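For reference, a typical HDFS checkpoint setup has this shape (a sketch
with placeholder paths, not the actual values from this job):

    state.backend: rocksdb
    state.checkpoints.dir: hdfs://namenode:8020/flink/checkpoints
    state.backend.incremental: true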
https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/types_serialization.html#defining-type-information-using-a-factory
--
*Andrea Spina*
Head of R&D @ Radicalbit Srl
Via Giovanni Battista Pirelli 11, 20124, Milano - IT
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Custom-Serializer-for-Avro-GenericRecord-td25433.html
> [3] -
> https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/types_serialization.html#defining-type-information-using-a-factory
> --
> Andrea Spina
> Head of R&D @ Radicalbit Srl
> Via Giovanni Battista Pirelli 11, 20124, Milano - IT
>
>
>
--
*Andrea Spina*
Head of R&D @ Radicalbit Srl
Via Giovanni Battista Pirelli 11, 20124, Milano - IT
in the future, I would recommend either Avro or Pojo as
> Jingsong Lee had already mentioned.
>
> Cheers,
> Gordon
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/types_serialization.html#flinks-typeinformation-class
>
> On Thu, Jul 4, 2019 at
hannel.java:57)
... 15 more
Here
<https://dl.dropboxusercontent.com/u/78598929/flink-hadoop-jobmanager-0-cloud-11.log>
is the related full jobmanager log. I can't figure out a solution.
Thank you and have a nice day.
--
*Andrea Spina*
Guest student at DIMA, TU Berlin
N.Tessera: *74598*
MAT: *89369*
*Ingegneria Informatica* *[LM] *(D.M. 270)
https://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#background
>
> So we have to investigate a little more. Which version of Flink are you
> using?
>
> Cheers,
> Max
>
> On Tue, Jun 28, 2016 at 11:57 AM, ANDREA SPINA
> <74...@studenti.unimore.it> wrote:
> >
Hi,
the problem was solved after I figured out there was an instance of a Flink
TaskManager running on a node of the cluster.
Thank you,
Andrea
2016-06-28 12:17 GMT+02:00 ANDREA SPINA <74...@studenti.unimore.it>:
> Hi Max,
> thank you for the fast reply and sorry: I use flink-1.0.3.
(dimensionDS: DataSet[Int]): DataSet[Vector] = {
  dimensionDS.map { dimension =>
    val values = DenseVector(Array.fill(dimension)(0.0))
    values
  }
}
I can't figure out a solution, thank you for your help.
Andrea
--
*Andrea Spina*
N.Tessera: *74598*
MAT: *89369*
*Ingegneria Informatica* *[LM] *(D.M. 270)
did
> the producer fail?
>
>
> On Wed, Jun 29, 2016 at 12:19 PM, ANDREA SPINA
> <74...@studenti.unimore.it> wrote:
> > Hi everyone,
> >
> > I am running some Flink experiments with the Peel benchmarking framework
> > (http://peel-framework.org/) and I am struggling with e
g the akka ask timeout:
>>
>> akka.ask.timeout: 100 s
>>
>> Does this help?
>>
>>
>> On Wed, Jun 29, 2016 at 3:10 PM, ANDREA SPINA <74...@studenti.unimore.it>
>> wrote:
>> > Hi Ufuk,
>> >
>> > so the
32768
taskmanager.network.numberOfBuffers = 98304
akka.ask.timeout = 300s
Any help will be appreciated. Thank you.
--
*Andrea Spina*
N.Tessera: *74598*
MAT: *89369*
*Ingegneria Informatica* *[LM] *(D.M. 270)
o(UnilateralSortMerger.java:1344)
> at org.apache.flink.runtime.operators.sort.UnilateralSortMerger$
> ThreadBase.run(UnilateralSortMerger.java:796)
>
>
> On Wed, Aug 31, 2016 at 5:57 PM, Stefan Richter <
> s.rich...@data-artisans.com> wrote:
>
>> Hi,
>>
>> c
to find
> this problem is to take a look at the exceptions reported in the
> web front-end for the failing job. Could you check whether any
> exceptions are reported there and provide them to us?
>
> Best,
> Stefan
>
> On 01.09.2016 at 11:56, ANDREA SPINA <
erThread.run(ForkJoinWorkerThread.java:107)
So each node has 32GB of memory; I'm working with
taskmanager.heap.mb = 28672
and I tried different memory fractions:
taskmanager.memory.fraction = (0.5, 0.6, 0.8)
Hope you have enough info now.
Thank you for your help.
Andrea
2016-09-02 11:30 GMT+02:00
Hi Adarsh,
we developed flink-JPMML for streaming model serving, built on top of the
PMML format and, of course, Flink. We haven't released any official
benchmark numbers yet, but we didn't bump into any performance issues
while using the library. In terms of throughput and latency it doesn't
require mo
Dear Community,
I'm really struggling with a co-grouped stream. The workload is the following:

val firstStream: DataStream[FirstType] =
  firstRaw.assignTimestampsAndWatermarks(
    new MyCustomFirstExtractor(maxOutOfOrder))
val secondStream: DataStream[SecondType] = secondRaw
  .assi
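For reference, the co-group that typically follows such extractors (a
sketch; the FirstType/SecondType shapes and the key fields are
placeholders):

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
    import org.apache.flink.streaming.api.windowing.time.Time

    case class FirstType(key: String, ts: Long)
    case class SecondType(key: String, ts: Long)

    def cogrouped(first: DataStream[FirstType],
                  second: DataStream[SecondType]): DataStream[String] =
      first
        .coGroup(second)
        .where(_.key)
        .equalTo(_.key)
        .window(TumblingEventTimeWindows.of(Time.minutes(1)))
        // The window fires only once the watermarks of *both* inputs have
        // passed its end; a late or idle input holds every window back.
        .apply { (lefts, rights) =>
          s"matched ${lefts.size} left and ${rights.size} right events"
        }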
Sorry, I forgot to mention the Flink version: 1.1.2.
Thanks, Andrea
Dear community,
I finally solved the issue I had bumped into.
Basically, the root cause was the behavior of my input: the incoming
rates of the two streams differed widely (events of the second type were
really late and scarce in event time).
The solution I employed was to assign
Dear Flink community,
I started to use Async Functions in Scala (Flink 1.2.0) in order to retrieve
enrichment information from a MariaDB database. To do that, I first
employed the classical JDBC library (org.mariadb.jdbc) and it worked as
expected.
Due to the blocking behavior of JDBC, I'm t
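For reference, the non-blocking variant usually ends up shaped like this (a
sketch against the later ResultFuture-based Scala API, Flink 1.5+; in 1.2
the callback type was still AsyncCollector, and Order and enrichFromMariaDb
are hypothetical placeholders):

    import java.util.concurrent.{Executors, TimeUnit}

    import scala.concurrent.{ExecutionContext, Future}
    import scala.util.{Failure, Success}

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.api.scala.async.{AsyncFunction, ResultFuture}

    case class Order(id: Long)

    class EnrichFunction extends AsyncFunction[Order, (Order, String)] {
      // Dedicated pool: the JDBC call itself still blocks, so it must not
      // run on the task thread. @transient because pools don't serialize.
      @transient private lazy val ec: ExecutionContext =
        ExecutionContext.fromExecutor(Executors.newFixedThreadPool(8))

      // Hypothetical blocking lookup standing in for the MariaDB query.
      private def enrichFromMariaDb(o: Order): String = s"details-for-${o.id}"

      override def asyncInvoke(input: Order,
                               resultFuture: ResultFuture[(Order, String)]): Unit =
        Future(enrichFromMariaDb(input))(ec).onComplete {
          case Success(d) => resultFuture.complete(Iterable((input, d)))
          case Failure(t) => resultFuture.completeExceptionally(t)
        }(ec)

      // usage: AsyncDataStream.unorderedWait(orders, new EnrichFunction,
      //                                      5000, TimeUnit.MILLISECONDS, 100)
    }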
Good afternoon dear Community,
For a few days now I've really been struggling to understand the reason
behind this KryoException. Here is the stack trace:
2017-06-07 10:18:52,514 ERROR org.apache.flink.runtime.operators.BatchTask
- Error in task code: CHAIN GroupReduce (GroupReduce at
my.or
Hi guys,
thank you for your interest. Yes @Flavio, I tried both the 1.2.0 and 1.3.0
versions.
Following Gordon's suggestion I tried setting setReferences to false, but
sadly it didn't help. What I did then was to declare a custom serializer
like the following:
class BlockSerializer extends Serializer[Block
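For reference, the general shape of such a Kryo serializer (a sketch;
Block's fields are hypothetical, not the actual class from this job):

    import com.esotericsoftware.kryo.{Kryo, Serializer}
    import com.esotericsoftware.kryo.io.{Input, Output}

    case class Block(id: Int, payload: Array[Double]) // hypothetical fields

    class BlockSerializer extends Serializer[Block] {
      override def write(kryo: Kryo, output: Output, block: Block): Unit = {
        output.writeInt(block.id)
        output.writeInt(block.payload.length)
        block.payload.foreach(output.writeDouble)
      }

      override def read(kryo: Kryo, input: Input, clazz: Class[Block]): Block = {
        val id = input.readInt()
        val len = input.readInt()
        Block(id, Array.fill(len)(input.readDouble()))
      }
    }

    // registered via:
    // env.getConfig.registerTypeWithKryoSerializer(classOf[Block], classOf[BlockSerializer])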
Hi Gordon, sadly no news since the last message.
In the end I worked around the issue; I was not able to solve it. I'll try
to provide a runnable example ASAP.
Thank you.
Andrea
Dear Ziyad,
could you kindly share some additional info about your environment
(local/cluster, nodes, machines' configuration)?
What exactly do you mean by "indefinitely"? For how long does the job hang?
Hope to help you, then.
Cheers,
Andrea
Dear Ziyad,
Yep, I encountered the same very long runtimes with ALS at the time,
and I recorded improvements by increasing the number of blocks and
decreasing the number of task slots per TaskManager, as you pointed out.
Cheers,
Andrea
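In FlinkML terms, that tuning looks roughly like this (a sketch; the values
are illustrative, not those from Ziyad's job):

    import org.apache.flink.ml.recommendation.ALS

    val als = ALS()
      .setIterations(10)
      .setNumFactors(50)
      .setLambda(0.01)
      .setBlocks(150) // raising the block count is what helped here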