Hi Roberto,

I have run some experiments on a dataset with 3196 transactions and 289154813 frequent itemsets; FPGrowth finished the computation within 10 minutes. I can give it a try if you share the artificial dataset.
From: roberto.pagli...@asos.com
To: m2linc...@outlook.com
CC: user@spark.apache.org
If you want, I can try to generate an artificial dataset to share. Did you ever try with hundreds of millions of frequent itemsets?
With small datasets it works, but there appear to be issues when the number of combinations grows.
Thanks,
From: LinChen <m2linc...@outlook.com>
Hi Roberto,

What minimum support threshold did you set? Could you also check at which stage you ran into the StackOverflow exception?
Thanks.
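For context on why the minimum support threshold matters here: it bounds how many itemsets count as frequent at all. A minimal pure-Python sketch of the idea follows (a brute-force toy, not the Spark FPGrowth API; the transaction data is made up purely for illustration):

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy transactions, for illustration only.
transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c"},
    {"b", "c"},
]

min_support = 0.5  # an itemset must appear in >= 50% of transactions

# Enumerate every non-empty itemset by brute force. This is fine for
# tiny data; FPGrowth exists precisely to avoid this full enumeration.
counts = Counter()
for t in transactions:
    for k in range(1, len(t) + 1):
        for itemset in combinations(sorted(t), k):
            counts[itemset] += 1

threshold = min_support * len(transactions)
frequent = {s: c for s, c in counts.items() if c >= threshold}
print(sorted(frequent))
```

Lowering `min_support` makes `frequent` grow very quickly, which is the knob LinChen is asking about.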
From: roberto.pagli...@asos.com
To: yblia...@gmail.com
CC: user@spark.apache.org
Subject: Re: frequent itemsets
Date: Sat, 2 Jan 2016 12:01:31 +
Hi Yanbo
very well.
Thank you,
From: Yanbo Liang <yblia...@gmail.com>
Date: Saturday, 2 January 2016 09:03
To: Roberto Pagliari <roberto.pagli...@asos.com>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Subject:
Hi Roberto,
Could you share a code snippet so that others can help diagnose your problem?
2016-01-02 7:51 GMT+08:00 Roberto Pagliari:
> When using the frequent itemsets APIs, I’m running into a StackOverflow
> exception whenever there are too many combinations to deal with and/or too
> many transactions and/or too many items.
When using the frequent itemsets APIs, I'm running into a StackOverflow exception whenever there are too many combinations to deal with and/or too many transactions and/or too many items.
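One reason the output blows up so quickly: by the downward-closure (Apriori) property, every non-empty subset of a frequent itemset is itself frequent, so a single frequent itemset of size n forces 2^n - 1 itemsets into the result. A quick sketch of the arithmetic:

```python
# Downward closure: every non-empty subset of a frequent itemset is
# frequent, so one frequent itemset of size n implies 2**n - 1
# frequent itemsets in the output.
def implied_frequent_itemsets(n: int) -> int:
    return 2 ** n - 1

for n in (10, 20, 28, 30):
    print(n, implied_frequent_itemsets(n))
```

Already at n = 28 this exceeds 268 million itemsets, the same order of magnitude as the 289 million itemsets mentioned earlier in the thread, even before counting itemsets from other transactions.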
Does anyone know how many transactions/items these APIs can deal with?
Thank you,