integrates seamlessly into
existing workflows and formats like Iceberg and Parquet.
Spatial data can be used just like any other data type, unlocking powerful
insights for business intelligence, analytics, and more.
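As a hedged illustration of what that looks like in practice, the sketch below reads a made-up GeoParquet dataset and filters on the geometry column with ordinary SQL; the path, table, and column names are invented, and it assumes a SparkSession on which Sedona's ST_* functions have already been registered (e.g. via SedonaContext.create).

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Sketch only: `sedona` is assumed to be a Sedona-enabled SparkSession, and the
// dataset path plus the `id`, `height`, and `geometry` column names are made up.
object SpatialAsOrdinaryData {
  def downtownBuildings(sedona: SparkSession): DataFrame = {
    // GeoParquet stores the geometry column alongside ordinary columns in Parquet files.
    sedona.read.format("geoparquet")
      .load("s3://my-bucket/buildings/")           // hypothetical dataset
      .createOrReplaceTempView("buildings")

    // The geometry column is filtered and selected like any other column.
    sedona.sql(
      """SELECT id, height, geometry
        |FROM buildings
        |WHERE ST_Contains(
        |        ST_GeomFromWKT('POLYGON((-122.36 47.59, -122.30 47.59, -122.30 47.63, -122.36 47.63, -122.36 47.59))'),
        |        geometry)""".stripMargin)
  }
}
```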
Speakers:
Jia Yu – Co-Founder & Chief Architect, Wherobots (https://wherobots.com/)
Mat
Dear all,
We are happy to report that we have released Apache Sedona 1.7.1.
Thank you again for your help.
Apache Sedona is a cluster computing system for processing large-scale
spatial data on top of Apache Spark, Flink and Snowflake.
Vote thread (Permalink from https://lists.apache.org/list.ht
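For anyone trying out the release, here is a minimal local sketch of creating a Sedona-enabled Spark session and building geometries from plain longitude/latitude columns; the data and column names are illustrative only, and it assumes the matching sedona-spark-shaded package for your Spark/Scala version is on the classpath.

```scala
import org.apache.sedona.spark.SedonaContext

object SedonaSetupSketch {
  def main(args: Array[String]): Unit = {
    // builder() pre-configures the serializer and SQL extensions; create()
    // registers the ST_* functions on the resulting session.
    val spark  = SedonaContext.builder().master("local[*]").appName("sedona-demo").getOrCreate()
    val sedona = SedonaContext.create(spark)

    import sedona.implicits._
    // Illustrative lon/lat rows; ST_Point turns each coordinate pair into a geometry.
    Seq((1, -122.33, 47.61), (2, -73.98, 40.75))
      .toDF("id", "lon", "lat")
      .createOrReplaceTempView("cities")

    sedona.sql("SELECT id, ST_Point(lon, lat) AS geom FROM cities").show(truncate = false)
  }
}
```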
Dear all,
We are happy to report that we have released Apache Sedona 1.7.0.
Thank you again for your help.
Apache Sedona is a cluster computing system for processing large-scale
spatial data.
Vote thread (Permalink from https://lists.apache.org/list.html):
https://lists.apache.org/thread/5hvcr80
Dear all,
We are happy to report that we have released Apache Sedona 1.6.1.
Apache Sedona is a cluster computing system for processing large-scale
spatial data.
Website:
http://sedona.apache.org/
Release notes:
https://github.com/apache/sedona/blob/sedona-1.6.1/docs/setup/release-notes.md
Down
Hi,
I am also trying to use spark.mesos.constraints, but it gives me the same
error: the job has not been accepted by any resources.
I suspect that I need to start an additional service such as
./sbin/start-mesos-shuffle-service.sh. Is that correct?
Thanks,
Jia
On Tue, Dec 1, 2015 at 5:14 PM, rared
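For reference, spark.mesos.constraints takes semicolon-separated attribute:value pairs that must match attributes advertised by the Mesos agents; the start-mesos-shuffle-service.sh script should only be needed when dynamic allocation is enabled. The sketch below uses made-up attribute names and a placeholder master URL.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: "os" and "zone" are hypothetical agent attributes, and the
// master URL is a placeholder for a real Mesos cluster.
object MesosConstraintsSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("mesos://zk://master:2181/mesos")
      .setAppName("constraints-example")
      // Only resource offers whose attributes satisfy every constraint are accepted.
      .set("spark.mesos.constraints", "os:centos7;zone:us-east-1a")
    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 10).sum())
    sc.stop()
  }
}
```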
Hi Peng,
I got exactly the same error! My shuffle data is also very large. Have you
figured out a way to solve it?
Thanks,
Jia
On Fri, Apr 24, 2015 at 7:59 AM, Peng Cheng wrote:
> I'm deploying a Spark data processing job on an EC2 cluster; the job is
> small for the cluster (16 cores with
Hi folks,
I've run into a very weird problem and really need some help! Here is my
situation:
Case: Assign keys to two datasets (one is 96 GB with 2.7 billion records and
the other is 1.5 GB with 30k records) via mapPartitions first, and j
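One common way to assign globally unique keys inside mapPartitions before a join (not necessarily what the original poster had in mind) is a partition-index-plus-offset scheme; in the sketch below the stride and the tiny in-memory datasets are stand-ins for the 96 GB and 1.5 GB inputs.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object AssignKeysSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("assign-keys").setMaster("local[*]"))

    // Tiny stand-ins for the 96 GB and 1.5 GB datasets in the original question.
    val big   = sc.parallelize(Seq("a", "b", "c"), numSlices = 2)
    val small = sc.parallelize(Seq("x", "y"), numSlices = 1)

    // Give every record a globally unique Long key: partition index times a stride
    // plus a within-partition counter. The stride (an assumption) only needs to be
    // larger than the biggest partition. zipWithUniqueId() is the built-in shortcut.
    def withKeys(rdd: RDD[String], stride: Long = 1L << 33): RDD[(Long, String)] =
      rdd.mapPartitionsWithIndex { (pid, iter) =>
        iter.zipWithIndex.map { case (rec, i) => (pid * stride + i, rec) }
      }

    val keyedBig   = withKeys(big)
    val keyedSmall = withKeys(small)
    // Joined on the synthetic keys purely to mirror the shape of the question.
    keyedBig.join(keyedSmall).take(5).foreach(println)
    sc.stop()
  }
}
```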
Hi guys,
Currently I am running a Spark program on Amazon EC2. Each worker has around
(slightly less than) 2 GB of memory.
By default, I can see that each worker is allocated 976 MB of memory, as the
table below from the Spark Web UI shows. I know this value comes from (total
memory minus 1 GB), but I want more than
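For context, the advertised 976 MB is the standalone Worker's memory (total minus 1 GB by default), governed by SPARK_WORKER_MEMORY in conf/spark-env.sh on each worker; executors then request a slice of it via spark.executor.memory. A minimal sketch, with an assumed 1.5 GB figure:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ExecutorMemorySketch {
  def main(args: Array[String]): Unit = {
    // To raise the 976 MB ceiling itself, set SPARK_WORKER_MEMORY in
    // conf/spark-env.sh on each worker and restart the workers; the setting
    // below only controls how much of that each executor asks for.
    val conf = new SparkConf()
      .setAppName("memory-example")
      .set("spark.executor.memory", "1500m") // assumption: the machine can spare ~1.5 GB
    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 1000).count())
    sc.stop()
  }
}
```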