OK, I managed to sort this out.
First, you need to create the base image as per the code below:
# Spark home dir
cd $SPARK_HOME
echo `date` ", ===> Asking local environment to use Docker daemon inside the Minikube"
eval $(minikube docker-env)
echo `date` ", ===> Building Docker base image from provide
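A minimal sketch of how the base-image build typically continues from here, using the docker-image-tool.sh helper that ships with the Spark distribution; the repository name and tag below are placeholders, not values from the original script:
# Build the Spark base image inside Minikube's Docker daemon
# ("spark-base" and "3.1.1" are placeholder repository/tag values)
./bin/docker-image-tool.sh -r spark-base -t 3.1.1 build
# Optionally also build the PySpark bindings image
./bin/docker-image-tool.sh -r spark-base -t 3.1.1 \
  -p ./kubernetes/dockerfiles/spark/bindings/python/Dockerfile build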
Yes, it is working now. Thank you very much.
Best Regards,
Eduardus Hardika Sandy Atmaja
From: Russell Spitzer
Sent: Monday, June 28, 2021 11:22 PM
To: Eduardus Hardika Sandy Atmaja
Cc: user
Subject: Re: Request for FP-Growth source code
Sorry, wrong repository:
https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/fpm/FPGrowth.scala
> On Jun 28, 2021, at 11:21 AM, Eduardus Hardika Sandy Atmaja
> wrote:
>
> I am sorry, I can't open the link. "This site can’t be reached".
> Is there any Java/Python
https://github.pie.apple.com/IPR/apache-spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/fpm/FPGrowth.scala
This?
On Mon, Jun 28, 2021 at 5:11 AM Eduardus Hardika Sandy Atmaja
wrote:
> Dear Apache Spark Admin
>
> Hello, my name is Edo. I am a Ph.D. student from India. Now I am still
Dear Apache Spark Admin
Hello, my name is Edo. I am a Ph.D. student from India. I am still learning
about High Utility Itemset Mining, which is an extension of Frequent Itemset Mining,
for my research. I am interested in implementing my algorithm using Apache Spark
but I do not have any idea how to
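One possible starting point, assuming a standard Spark distribution that bundles the examples jar, is to run Spark's own FP-Growth example, which exercises the ml.fpm.FPGrowth API linked above:
# Run the bundled FP-Growth example from the root of a Spark distribution
# (the class name resolves to org.apache.spark.examples.ml.FPGrowthExample)
./bin/run-example ml.FPGrowthExample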
Hi,
Using Minikube to create a containerised Spark, I can easily use the spark-submit
below with an uber jar file:
bin/spark-submit \
--master k8s://$KSERVER \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=3 \
--conf spark.kubernet
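For reference, the Kubernetes-specific settings that usually follow in this kind of submission look roughly like the lines below; the namespace, image name, service account, and jar path are placeholders, not the original values:
--conf spark.kubernetes.namespace=spark \
--conf spark.kubernetes.container.image=spark-base/spark:3.1.1 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-sa \
local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar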