Hello,

I am writing a Flink application whose purpose is to train neural networks.
I have implemented a mini-batch version of SGD using the
IterativeDataSet class. In a nutshell, my code groups the input features into
mini-batches (I have as many mini-batches as the degree of
parallelism), then it computes a gradient over each mini-batch in a map task,
and finally those gradients are merged in a reduce task.
Unfortunately, it seems the trained parameter is not broadcast at the end of
each epoch, and it looks like the next epoch starts from the
initial parameter again. I know it is probably a programming error that I cannot
see at the moment, but I was wondering whether there are any guidelines for
broadcasting an object correctly.
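For clarity, here is a plain-Python sketch of the per-epoch semantics I am trying to reproduce (this is not Flink code; the toy model, the gradient function, and the data are made up for illustration). The key point is that the updated parameter must be carried into the next epoch; feeding the initial parameter back in is exactly the bug I am seeing:

```python
def gradient(param, batch):
    # Hypothetical gradient for a toy least-squares model y = param * x.
    return sum(2 * (param * x - y) * x for x, y in batch) / len(batch)

def train(batches, param, epochs, lr=0.1):
    for _ in range(epochs):
        grads = [gradient(param, b) for b in batches]  # "map": one gradient per mini-batch
        merged = sum(grads) / len(grads)               # "reduce": merge the gradients
        param = param - lr * merged                    # updated param feeds the next epoch
    return param

# Toy data for y = 3 * x, split into two mini-batches.
batches = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
print(train(batches, param=0.0, epochs=50))  # converges toward 3.0
```

In my Flink job the update step is what appears not to happen: every epoch seems to see `param` as it was before the first iteration.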
Would it be too much to ask for you to have a look at my code, please? Thank
you in advance.

Regards,
Ventura
