xunliu commented on code in PR #104:
URL: https://github.com/apache/gravitino-playground/pull/104#discussion_r1847659057


##########
init/jupyter/init.sh:
##########
@@ -16,16 +16,12 @@
 # specific language governing permissions and limitations
 # under the License.
 #
-cp -r /tmp/gravitino/*.ipynb /home/jovyan
-export HADOOP_USER_NAME=root
 
-# This needs to be downloaded as root user
-wget https://repo1.maven.org/maven2/org/apache/iceberg/iceberg-spark-runtime-3.4_2.12/1.5.2/iceberg-spark-runtime-3.4_2.12-1.5.2.jar -O $SPARK_HOME/jars/iceberg-spark-runtime-3.4_2.12-1.5.2.jar
-wget https://repo1.maven.org/maven2/org/apache/gravitino/gravitino-spark-connector-runtime-3.4_2.12/0.6.0-incubating/gravitino-spark-connector-runtime-3.4_2.12-0.6.0-incubating.jar -O $SPARK_HOME/jars/gravitino-spark-connector-runtime-3.4_2.12-0.6.0-incubating.jar
-
-# in pyspark-notebook, SPARK_HOME is at /usr/local/spark, we need to link it back to /opt/spark
-ln -s $SPARK_HOME /opt/spark
-
-su - jovyan
+if [ -z "$RANGER_ENABLE" ]; then
+  cp -r /tmp/gravitino/*.ipynb /home/jovyan
+else
+  cp -r /tmp/gravitino/authorization/*.ipynb /home/jovyan
+fi
 
+export HADOOP_USER_NAME=anonymous

Review Comment:
   Why do we need to add this `HADOOP_USER_NAME`?
   Could we set the environment variable in the Jupyter notebook instead?
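For reference, a minimal sketch of setting the variable from a notebook cell instead of in `init.sh` (assuming a Python kernel; the variable name and value come from the diff above, everything else is hypothetical):

```python
import os

# Assumption: this cell runs before any Spark/HDFS client is created,
# since the JVM reads HADOOP_USER_NAME only when it is launched.
os.environ["HADOOP_USER_NAME"] = "anonymous"

# A SparkSession built after this point inherits the variable, e.g.:
# from pyspark.sql import SparkSession
# spark = SparkSession.builder.getOrCreate()
print(os.environ["HADOOP_USER_NAME"])  # → anonymous
```

One trade-off: set this way, the variable applies only to that notebook's kernel process, whereas the `export` in `init.sh` applies to every process in the container.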



##########
init/hive/core-site.xml:
##########
@@ -0,0 +1,51 @@
+<configuration>
+  <property>
+    <name>fs.defaultFS</name>
+    <value>hdfs://__REPLACE__HOST_NAME:9000</value>
+  </property>
+
+  <property>
+    <name>name</name>
+    <value>Development Cluster</value>
+  </property>
+
+  <property>
+    <name>hadoop.http.staticuser.user</name>
+    <value>hadoopuser</value>
+  </property>

Review Comment:
   I think we may not need this configuration?
   I'm not sure, please check it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@gravitino.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
