If your data set is 11 points, surely this is not a distributed problem? Or
are you asking how to build tens of thousands of these projections in
parallel?
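For a single 11-point series, plain scipy on the driver is enough; no Spark
job is needed. A minimal sketch of what I understand you are doing, assuming a
three-parameter Lorentzian peak fitted with scipy's curve_fit (the yearly
prices below are synthetic placeholders, since your actual data was not
posted):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma):
    # Standard Lorentzian peak: value amp at x = x0, half-width gamma.
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

# 11 points on the x-axis, years 2010-2020 (placeholder data, not real prices).
years = np.arange(2010, 2021, dtype=float)
rng = np.random.default_rng(0)
prices = lorentzian(years, 100.0, 2016.0, 3.0) + rng.normal(0.0, 1.0, years.size)

# Fit the curve; p0 gives curve_fit a rough starting guess.
popt, pcov = curve_fit(lorentzian, years, prices, p0=(prices.max(), 2015.0, 2.0))

# Extrapolate the fitted curve to 2021-2025.
future = np.arange(2021, 2026, dtype=float)
forecast = lorentzian(future, *popt)
print(dict(zip(["amp", "x0", "gamma"], popt)))
print(forecast)
```

The usual caveat applies: extrapolating a peaked model beyond the fitted range
is risky, since the Lorentzian tail forces the forecast back toward zero
regardless of what the market does.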

On Tue, Jan 5, 2021 at 6:04 AM Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> Hi,
>
> I am not sure Spark forum is the correct avenue for this question.
>
> I am using PySpark with matplotlib to get the best fit for the data using
> the Lorentzian Model. This curve uses the 2010-2020 data points (11 on the
> x-axis). I need to predict the prices for the years 2021-2025 based on this
> fit. I am not sure if someone can advise me? If OK, I can post the details.
>
> Thanks
>
>
>
> LinkedIn:
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
