This is an automated email from the ASF dual-hosted git repository.
jiayu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/sedona.git
The following commit(s) were added to refs/heads/master by this push:
new 670bb4c4a6 [GH-2290] Update the Databricks instruction for the Python API (#2315)
670bb4c4a6 is described below
commit 670bb4c4a6fea49f0b0159ebdf2a92f00d3ed07a
Author: Jia Yu <[email protected]>
AuthorDate: Wed Aug 27 23:53:13 2025 -0700
[GH-2290] Update the Databricks instruction for the Python API (#2315)
---
docs/setup/databricks.md | 8 ++++++++
docs/tutorial/sql.md     | 2 +-
2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/docs/setup/databricks.md b/docs/setup/databricks.md
index 2d44faccdc..af96a2bc64 100644
--- a/docs/setup/databricks.md
+++ b/docs/setup/databricks.md
@@ -129,6 +129,14 @@ Create a Databricks notebook and connect it to the cluster. Verify that you can

+If you want to use Sedona Python functions such as [DataFrame APIs](../api/sql/DataFrameAPI.md) or [StructuredAdapter](../tutorial/sql.md#spatialrdd-to-dataframe-with-spatial-partitioning), you need to initialize Sedona as follows:
+
+```python
+from sedona.spark import *
+
+sedona = SedonaContext.create(spark)
+```
+
You can also use the SQL API as follows:

diff --git a/docs/tutorial/sql.md b/docs/tutorial/sql.md
index fb01f12f6d..5031af398d 100644
--- a/docs/tutorial/sql.md
+++ b/docs/tutorial/sql.md
@@ -111,7 +111,7 @@ You can add additional Spark runtime config to the config builder. For example,
## Initiate SedonaContext
-Add the following line after creating Sedona config. If you already have a SparkSession (usually named `spark`) created by AWS EMR/Databricks/Microsoft Fabric, please call `sedona = SedonaContext.create(spark)` instead. For ==Databricks==, the situation is more complicated, please refer to [Databricks setup guide](../setup/databricks.md), but generally you don't need to create SedonaContext.
+Add the following line after creating Sedona config. If you already have a SparkSession (usually named `spark`) created by AWS EMR/Databricks/Microsoft Fabric, please call `sedona = SedonaContext.create(spark)` instead.
=== "Scala"