This is an automated email from the ASF dual-hosted git repository.

liyang pushed a commit to branch kylin5
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 043a540fc560a888e8beb9c0c900e48bfa0c2db7
Author: Yinghao Lin <[email protected]>
AuthorDate: Fri Sep 13 20:39:57 2024 +0800

    KYLIN-5970 [FOLLOWUP] Update standalone docker readme
---
 .../standalone-docker/all-in-one/README.md         | 369 +++++++++++++++------
 1 file changed, 274 insertions(+), 95 deletions(-)

diff --git a/dev-support/release-manager/standalone-docker/all-in-one/README.md b/dev-support/release-manager/standalone-docker/all-in-one/README.md
index 88dd86b7db..b3ba5dc040 100644
--- a/dev-support/release-manager/standalone-docker/all-in-one/README.md
+++ b/dev-support/release-manager/standalone-docker/all-in-one/README.md
@@ -1,12 +1,13 @@
 # Preview latest Kylin (5.x)
 
## [Image Tag Information](https://hub.docker.com/r/apachekylin/apache-kylin-standalone)
-| Tag                  | Image Contents                                                             | Comment & Publish Date                                                                     |
-|----------------------|----------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|
-| 5.0-beta             | [**Recommended for users**] The official 5.0.0-beta with Spark bundled.    | Uploaded at 2023-09-08, worked fine on Docker Desktop for Mac 4.3.0 & 4.22.1 (and Windows) |
-| kylin-4.0.1-mondrian | The official Kylin 4.0.1 with **MDX** function enabled                     | Uploaded at 2022-05-13                                                                     |
+| Tag                  | Image Contents                                                                 | Comment & Publish Date                                                                     |
+|----------------------|--------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|
+| 5.0.0-GA             | [**Recommended for users**] The official 5.0.0-GA with Spark & Gluten bundled. | Uploaded at 2024-09-13                                                                     |
+| 5.0-beta             | The official 5.0.0-beta with Spark bundled.                                    | Uploaded at 2023-09-08, worked fine on Docker Desktop for Mac 4.3.0 & 4.22.1 (and Windows) |
+| kylin-4.0.1-mondrian | The official Kylin 4.0.1 with **MDX** function enabled                         | Uploaded at 2022-05-13                                                                     |
 | 5-dev                | [**For developer only**] Kylin 5.X package with some sample data/tools etc     | Uploaded at 2023-11-21, this image is for developers to debug and test Kylin 5.X source code without a Hadoop env |
-| 5.x-base-dev-only    | [**For maintainer only**] Hadoop, Hive, Zookeeper, MySQL, JDK8             | Uploaded at 2023-09-07, this is the base image for all Kylin 5.X images; it does not contain the Kylin package, see file `Dockerfile_hadoop` for information |
+| 5.x-base-dev-only    | [**For maintainer only**] Hadoop, Hive, Zookeeper, MySQL, JDK8                 | Uploaded at 2023-09-07, this is the base image for all Kylin 5.X images; it does not contain the Kylin package, see file `Dockerfile_hadoop` for information |
 
 ## Why you need Kylin 5
 
@@ -44,16 +45,17 @@ Deploy a Kylin 5.X instance without any pre-deployed hadoop component by following
 
 ```shell
 docker run -d \
-  --name Kylin5-Machine \
-  --hostname Kylin5-Machine \
-  -m 8G \
-  -p 7070:7070 \
-  -p 8088:8088 \
-  -p 9870:9870 \
-  -p 8032:8032 \
-  -p 8042:8042 \
-  -p 2181:2181 \
-  apachekylin/apache-kylin-standalone:5.0-beta
+    --name Kylin5-Machine \
+    --hostname localhost \
+    -e TZ=UTC \
+    -m 10G \
+    -p 7070:7070 \
+    -p 8088:8088 \
+    -p 9870:9870 \
+    -p 8032:8032 \
+    -p 8042:8042 \
+    -p 2181:2181 \
+    apachekylin/apache-kylin-standalone:5.0.0-GA
 
 docker logs --follow Kylin5-Machine
 ```
@@ -62,11 +64,20 @@ When you enter these two commands, the logs will scroll
 out in terminal and the process will continue for 3-5 minutes.
 
 ```
+===============================================================================
+*******************************************************************************
+|
+|   Start SSH server at Fri Sep 13 12:15:24 UTC 2024
+|   Command: /etc/init.d/ssh start
+|
+ * Starting OpenBSD Secure Shell server sshd
+   ...done.
+[Start SSH server] succeed.
 
 ===============================================================================
 *******************************************************************************
 |
-|   Start MySQL at Fri Sep  8 03:35:26 UTC 2023
+|   Start MySQL at Fri Sep 13 12:15:25 UTC 2024
 |   Command: service mysql start
 |
  * Starting MySQL database server mysqld
@@ -77,115 +88,283 @@ su: warning: cannot change directory to /nonexistent: No such file or directory
 ===============================================================================
 *******************************************************************************
 |
-|   Create Database at Fri Sep  8 03:35:35 UTC 2023
+|   Create Database kylin at Fri Sep 13 12:15:36 UTC 2024
|   Command: mysql -uroot -p123456 -e CREATE DATABASE IF NOT EXISTS kylin default charset utf8mb4 COLLATE utf8mb4_general_ci;
 |
mysql: [Warning] Using a password on the command line interface can be insecure.
-[Create Database] succeed.
+[Create Database kylin] succeed.
+
+===============================================================================
+*******************************************************************************
+|
+|   Create Database hive3 at Fri Sep 13 12:15:36 UTC 2024
+|   Command: mysql -uroot -p123456 -e CREATE DATABASE IF NOT EXISTS hive3 default charset utf8mb4 COLLATE utf8mb4_general_ci;
+|
+mysql: [Warning] Using a password on the command line interface can be insecure.
+[Create Database hive3] succeed.
 
 ===============================================================================
 *******************************************************************************
 |
-|   Init Hive at Fri Sep  8 03:35:35 UTC 2023
+|   Init Hive at Fri Sep 13 12:15:36 UTC 2024
 |   Command: schematool -initSchema -dbType mysql
 |
 SLF4J: Class path contains multiple SLF4J bindings.
+SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
+SLF4J: Found binding in [jar:file:/opt/hadoop-3.2.4/share/hadoop/common/lib/slf4j-reload4j-1.7.35.jar!/org/slf4j/impl/StaticLoggerBinder.class]
+SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
+SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
+Metastore connection URL: jdbc:mysql://127.0.0.1:3306/hive3?useSSL=false&allowPublicKeyRetrieval=true&characterEncoding=UTF-8
+Metastore Connection Driver : com.mysql.cj.jdbc.Driver
+Metastore connection User: root
+Starting metastore schema initialization to 3.1.0
+Initialization script hive-schema-3.1.0.mysql.sql
+...
+Initialization script completed
+schemaTool completed
+[Init Hive] succeed.
 
+===============================================================================
+*******************************************************************************
+|
+|   Format HDFS at Fri Sep 13 12:15:50 UTC 2024
+|   Command: hdfs namenode -format
+|
+WARNING: /opt/hadoop-3.2.4/logs does not exist. Creating.
+2024-09-13 12:15:51,423 INFO namenode.NameNode: STARTUP_MSG: 
+/************************************************************
+STARTUP_MSG: Starting NameNode
+STARTUP_MSG:   host = localhost/127.0.0.1
+STARTUP_MSG:   args = [-format]
+STARTUP_MSG:   version = 3.2.4
+STARTUP_MSG:   classpath = /opt/hadoop-3.2.4/etc/hadoop:/opt/hadoop-3.2.4/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/opt/hadoop-3.2.4/share/hadoop/common/lib/httpclient-4.5.13.jar:/opt/hadoop-3.2.4/share/hadoop/common/lib/curator-client-2.13.0.jar:/opt/hadoop-3.2.4/share/hadoop/common/lib/jetty-server-9.4.43.v20210629.jar:/opt/hadoop-3.2.4/share/hadoop/common/lib/checker-qual-2.5.2.jar:/opt/hadoop-3.2.4/share/hadoop/common/lib/woodstox-core-5.3.0.jar:/opt/hadoop-3.2.4/share/ [...]
+STARTUP_MSG:   build = Unknown -r 7e5d9983b388e372fe640f21f048f2f2ae6e9eba; compiled by 'ubuntu' on 2022-07-12T11:58Z
+STARTUP_MSG:   java = 1.8.0_422
+************************************************************/
+2024-09-13 12:15:51,434 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
+2024-09-13 12:15:51,539 INFO namenode.NameNode: createNameNode [-format]
+Formatting using clusterid: CID-e9f0293c-adcd-40a6-9c7f-ab7537b2eedf
+2024-09-13 12:15:52,059 INFO namenode.FSEditLog: Edit logging is async:true
+2024-09-13 12:15:52,090 INFO namenode.FSNamesystem: KeyProvider: null
+2024-09-13 12:15:52,092 INFO namenode.FSNamesystem: fsLock is fair: true
+2024-09-13 12:15:52,092 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
+2024-09-13 12:15:52,100 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
+2024-09-13 12:15:52,100 INFO namenode.FSNamesystem: supergroup          = supergroup
+2024-09-13 12:15:52,100 INFO namenode.FSNamesystem: isPermissionEnabled = true
+2024-09-13 12:15:52,100 INFO namenode.FSNamesystem: HA Enabled: false
+2024-09-13 12:15:52,153 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
+2024-09-13 12:15:52,165 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
+2024-09-13 12:15:52,165 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
+2024-09-13 12:15:52,169 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
+2024-09-13 12:15:52,169 INFO blockmanagement.BlockManager: The block deletion will start around 2024 Sep 13 12:15:52
+2024-09-13 12:15:52,171 INFO util.GSet: Computing capacity for map BlocksMap
+2024-09-13 12:15:52,171 INFO util.GSet: VM type       = 64-bit
+2024-09-13 12:15:52,172 INFO util.GSet: 2.0% max memory 1.7 GB = 34.8 MB
+2024-09-13 12:15:52,172 INFO util.GSet: capacity      = 2^22 = 4194304 entries
+2024-09-13 12:15:52,180 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
+2024-09-13 12:15:52,180 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
+2024-09-13 12:15:52,186 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
+2024-09-13 12:15:52,186 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
+2024-09-13 12:15:52,186 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
+2024-09-13 12:15:52,187 INFO blockmanagement.BlockManager: defaultReplication         = 1
+2024-09-13 12:15:52,187 INFO blockmanagement.BlockManager: maxReplication             = 512
+2024-09-13 12:15:52,187 INFO blockmanagement.BlockManager: minReplication             = 1
+2024-09-13 12:15:52,187 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
+2024-09-13 12:15:52,187 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
+2024-09-13 12:15:52,187 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
+2024-09-13 12:15:52,188 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
+2024-09-13 12:15:52,238 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
+2024-09-13 12:15:52,238 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
+2024-09-13 12:15:52,238 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
+2024-09-13 12:15:52,238 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
+2024-09-13 12:15:52,259 INFO util.GSet: Computing capacity for map INodeMap
+2024-09-13 12:15:52,259 INFO util.GSet: VM type       = 64-bit
+2024-09-13 12:15:52,259 INFO util.GSet: 1.0% max memory 1.7 GB = 17.4 MB
+2024-09-13 12:15:52,259 INFO util.GSet: capacity      = 2^21 = 2097152 entries
+2024-09-13 12:15:52,260 INFO namenode.FSDirectory: ACLs enabled? false
+2024-09-13 12:15:52,260 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
+2024-09-13 12:15:52,260 INFO namenode.FSDirectory: XAttrs enabled? true
+2024-09-13 12:15:52,261 INFO namenode.NameNode: Caching file names occurring more than 10 times
+2024-09-13 12:15:52,265 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
+2024-09-13 12:15:52,267 INFO snapshot.SnapshotManager: SkipList is disabled
+2024-09-13 12:15:52,272 INFO util.GSet: Computing capacity for map cachedBlocks
+2024-09-13 12:15:52,272 INFO util.GSet: VM type       = 64-bit
+2024-09-13 12:15:52,272 INFO util.GSet: 0.25% max memory 1.7 GB = 4.4 MB
+2024-09-13 12:15:52,272 INFO util.GSet: capacity      = 2^19 = 524288 entries
+2024-09-13 12:15:52,284 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
+2024-09-13 12:15:52,284 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
+2024-09-13 12:15:52,284 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
+2024-09-13 12:15:52,288 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
+2024-09-13 12:15:52,288 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
+2024-09-13 12:15:52,291 INFO util.GSet: Computing capacity for map NameNodeRetryCache
+2024-09-13 12:15:52,291 INFO util.GSet: VM type       = 64-bit
+2024-09-13 12:15:52,292 INFO util.GSet: 0.029999999329447746% max memory 1.7 GB = 535.3 KB
+2024-09-13 12:15:52,292 INFO util.GSet: capacity      = 2^16 = 65536 entries
+2024-09-13 12:15:52,314 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1031271309-127.0.0.1-1726229752306
+2024-09-13 12:15:52,328 INFO common.Storage: Storage directory /data/hadoop/dfs/name has been successfully formatted.
+2024-09-13 12:15:52,352 INFO namenode.FSImageFormatProtobuf: Saving image file /data/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
+2024-09-13 12:15:52,435 INFO namenode.FSImageFormatProtobuf: Image file /data/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 396 bytes saved in 0 seconds .
+2024-09-13 12:15:52,447 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
+2024-09-13 12:15:52,470 INFO namenode.FSNamesystem: Stopping services started for active state
+2024-09-13 12:15:52,470 INFO namenode.FSNamesystem: Stopping services started for standby state
+2024-09-13 12:15:52,474 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
+2024-09-13 12:15:52,475 INFO namenode.NameNode: SHUTDOWN_MSG: 
+/************************************************************
+SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
+************************************************************/
+[Format HDFS] succeed.
 
+===============================================================================
+*******************************************************************************
+|
+|   Start Zookeeper at Fri Sep 13 12:15:52 UTC 2024
+|   Command: /opt/apache-zookeeper-3.7.2-bin/bin/zkServer.sh start
+|
+ZooKeeper JMX enabled by default
+Using config: /opt/apache-zookeeper-3.7.2-bin/bin/../conf/zoo.cfg
+Starting zookeeper ... STARTED
+[Start Zookeeper] succeed.
+
+===============================================================================
+*******************************************************************************
+|
+|   Start Hadoop at Fri Sep 13 12:15:53 UTC 2024
+|   Command: /opt/hadoop-3.2.4/sbin/start-all.sh
+|
+Starting namenodes on [localhost]
+localhost: Warning: Permanently added 'localhost' (ED25519) to the list of known hosts.
+Starting datanodes
+Starting secondary namenodes [localhost]
+Starting resourcemanager
+Starting nodemanagers
+[Start Hadoop] succeed.
+
+===============================================================================
+*******************************************************************************
+|
+|   Start History Server at Fri Sep 13 12:16:09 UTC 2024
+|   Command: /opt/hadoop-3.2.4/sbin/start-historyserver.sh
+|
+WARNING: Use of this script to start the MR JobHistory daemon is deprecated.
+WARNING: Attempting to execute replacement "mapred --daemon start" instead.
+[Start History Server] succeed.
+
+===============================================================================
+*******************************************************************************
+|
+|   Start Hive metastore at Fri Sep 13 12:16:11 UTC 2024
+|   Command: /opt/apache-hive-3.1.3-bin/bin/start-hivemetastore.sh
+|
+[Start Hive metastore] succeed.
+Checking Check Hive metastore's status...
++
+Check Check Hive metastore succeed.
+
+===============================================================================
+*******************************************************************************
+|
+|   Start Hive server at Fri Sep 13 12:16:22 UTC 2024
+|   Command: /opt/apache-hive-3.1.3-bin/bin/start-hiveserver2.sh
+|
+[Start Hive server] succeed.
+Checking Check Hive server's status...
++
+Check Check Hive server succeed.
+
+===============================================================================
+*******************************************************************************
+|
+|   Prepare sample data at Fri Sep 13 12:16:45 UTC 2024
+|   Command: /home/kylin/apache-kylin-5.0.0-GA-bin/bin/sample.sh
+|
+Loading sample data into HDFS tmp path: /tmp/kylin/sample_cube/data
+WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
+WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
+Going to create sample tables in hive to database SSB by hive
+WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
+SLF4J: Class path contains multiple SLF4J bindings.
+SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
+SLF4J: Found binding in [jar:file:/opt/hadoop-3.2.4/share/hadoop/common/lib/slf4j-reload4j-1.7.35.jar!/org/slf4j/impl/StaticLoggerBinder.class]
+SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
+SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
+Hive Session ID = 2e70c349-7575-4a00-84c8-08b24f1a38cb
+
+Logging initialized using configuration in jar:file:/opt/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
+Hive Session ID = bc35a6b2-1846-4a03-837f-271679ac6185
+OK
+Time taken: 1.136 seconds
+WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
+SLF4J: Class path contains multiple SLF4J bindings.
+SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
+SLF4J: Found binding in [jar:file:/opt/hadoop-3.2.4/share/hadoop/common/lib/slf4j-reload4j-1.7.35.jar!/org/slf4j/impl/StaticLoggerBinder.class]
+SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
+SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
+Hive Session ID = 2e1f6b0e-4017-4e44-871d-239ab6dadc29
+
+Logging initialized using configuration in jar:file:/opt/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
+Hive Session ID = 965d433f-9635-4bfb-9aa6-93705ab733a4
 ...
-...
+Time taken: 1.946 seconds
+Loading data to table ssb.customer
+OK
+Time taken: 0.517 seconds
+Loading data to table ssb.dates
+OK
+Time taken: 0.256 seconds
+Loading data to table ssb.lineorder
+OK
+Time taken: 0.248 seconds
+Loading data to table ssb.part
+OK
+Time taken: 0.254 seconds
+Loading data to table ssb.supplier
+OK
+Time taken: 0.243 seconds
+Sample hive tables are created successfully; Going to create sample project...
+kylin version is 5.0.0.0
+The metadata backup path is hdfs://localhost:9000/kylin/kylin/_backup/2024-09-13-12-17-29_backup/core_meta.
+Sample model is created successfully in project 'learn_kylin'. Detailed Message is at "logs/shell.stderr".
+[Prepare sample data] succeed.
 
+===============================================================================
+*******************************************************************************
+|
+|   Kylin ENV bypass at Fri Sep 13 12:17:29 UTC 2024
+|   Command: touch /home/kylin/apache-kylin-5.0.0-GA-bin/bin/check-env-bypass
+|
+[Kylin ENV bypass] succeed.
 
 ===============================================================================
 *******************************************************************************
 |
-|   Start Kylin Instance at Fri Sep  8 03:38:13 UTC 2023
-|   Command: /home/kylin/apache-kylin-5.0.0-beta-bin/bin/kylin.sh -v start
+|   Start Kylin Instance at Fri Sep 13 12:17:29 UTC 2024
+|   Command: /home/kylin/apache-kylin-5.0.0-GA-bin/bin/kylin.sh -v start
 |
-Turn on verbose mode.
-java is /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java
+java is /usr/lib/jvm/java-8-openjdk-amd64/bin/java
 Starting Kylin...
 This user don't have permission to run crontab.
-
-Kylin is checking installation environment, log is at /home/kylin/apache-kylin-5.0.0-beta-bin/logs/check-env.out
-
-Checking Kerberos
-...................................................[SKIP]
-Checking OS Commands
-...................................................[PASS]
-Checking Hadoop Configuration
-...................................................[PASS]
-Checking Permission of HDFS Working Dir
-...................................................[PASS]
-Checking Java Version
-...................................................[PASS]
-Checking Kylin Config
-...................................................[PASS]
-Checking Ports Availability
-...................................................[PASS]
-Checking Spark Driver Host
-...................................................[WARN]
-WARNING:
-    Current kylin_engine_deploymode is 'client'.
-    WARN: 'kylin.storage.columnar.spark-conf.spark.driver.host' is missed, it may cause some problems.
-    WARN: 'kylin.engine.spark-conf.spark.driver.host' is missed, it may cause some problems.
-Checking Spark Dir
-...................................................[PASS]
-Checking Spark Queue
-...................................................[SKIP]
-Checking Spark Availability
-...................................................[PASS]
-Checking Metadata Accessibility
-...................................................[PASS]
-Checking Zookeeper Role
-...................................................[PASS]
-Checking Query History Accessibility
-...................................................[PASS]
-
->   WARN: Command lsb_release is not accessible. Please run on Linux OS.
->   WARN: Command 'lsb_release -a' does not work. Please run on Linux OS.
->   WARN: 'dfs.client.read.shortcircuit' is not enabled which could impact query performance. Check /home/kylin/apache-kylin-5.0.0-beta-bin/hadoop_conf/hdfs-site.xml
->   Available YARN RM cores: 8
->   Available YARN RM memory: 8192M
->   The max executor instances can be 8
->   The current executor instances is 1
-Checking environment finished successfully. To check again, run 'bin/check-env.sh' manually.
-
-KYLIN_HOME is:/home/kylin/apache-kylin-5.0.0-beta-bin
-KYLIN_CONFIG_FILE is:/home/kylin/apache-kylin-5.0.0-beta-bin/conf/kylin.properties
-SPARK_HOME is:/home/kylin/apache-kylin-5.0.0-beta-bin/spark
+KYLIN_HOME is:/home/kylin/apache-kylin-5.0.0-GA-bin
+KYLIN_CONFIG_FILE is:/home/kylin/apache-kylin-5.0.0-GA-bin/conf/kylin.properties
+SPARK_HOME is:/home/kylin/apache-kylin-5.0.0-GA-bin/spark
 Retrieving hadoop config dir...
-KYLIN_JVM_SETTINGS is -server -Xms1g -Xmx8g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:G1HeapRegionSize=16m -XX:+PrintFlagsFinal -XX:+PrintReferenceGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+PrintAdaptiveSizePolicy -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark  -Xloggc:/home/kylin/apache-kylin-5.0.0-beta-bin/logs/kylin.gc.%p  -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M -XX:-OmitStackTraceInFastThrow -Dlog4j [...]
+KYLIN_JVM_SETTINGS is -server -Xms1g -Xmx8g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:G1HeapRegionSize=16m -XX:+PrintFlagsFinal -XX:+PrintReferenceGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+PrintAdaptiveSizePolicy -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark  -Xloggc:/home/kylin/apache-kylin-5.0.0-GA-bin/logs/kylin.gc.%p  -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M -XX:-OmitStackTraceInFastThrow -Dlog4j2. [...]
 KYLIN_DEBUG_SETTINGS is not set, will not enable remote debuging
KYLIN_LD_LIBRARY_SETTINGS is not set, it is okay unless you want to specify your own native path
 SPARK_HDP_VERSION is set to 'hadoop'
-Export SPARK_HOME to /home/kylin/apache-kylin-5.0.0-beta-bin/spark
+Export SPARK_HOME to /home/kylin/apache-kylin-5.0.0-GA-bin/spark
+LD_PRELOAD= is:/home/kylin/apache-kylin-5.0.0-GA-bin/server/libch.so
 Checking Zookeeper role...
 Checking Spark directory...
-KYLIN_HOME is:/home/kylin/apache-kylin-5.0.0-beta-bin
-KYLIN_CONFIG_FILE is:/home/kylin/apache-kylin-5.0.0-beta-bin/conf/kylin.properties
-SPARK_HOME is:/home/kylin/apache-kylin-5.0.0-beta-bin/spark
-KYLIN_JVM_SETTINGS is -server -Xms1g -Xmx8g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:G1HeapRegionSize=16m -XX:+PrintFlagsFinal -XX:+PrintReferenceGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+PrintAdaptiveSizePolicy -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark  -Xloggc:/home/kylin/apache-kylin-5.0.0-beta-bin/logs/kylin.gc.%p  -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M -XX:-OmitStackTraceInFastThrow -Dlog4j [...]
-KYLIN_DEBUG_SETTINGS is not set, will not enable remote debuging
-KYLIN_LD_LIBRARY_SETTINGS is not set, it is okay unless you want to specify your own native path
-SPARK_HDP_VERSION is set to 'hadoop'
-Export SPARK_HOME to /home/kylin/apache-kylin-5.0.0-beta-bin/spark
-KYLIN_HOME is:/home/kylin/apache-kylin-5.0.0-beta-bin
-KYLIN_CONFIG_FILE is:/home/kylin/apache-kylin-5.0.0-beta-bin/conf/kylin.properties
-SPARK_HOME is:/home/kylin/apache-kylin-5.0.0-beta-bin/spark
-KYLIN_JVM_SETTINGS is -server -Xms1g -Xmx8g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:G1HeapRegionSize=16m -XX:+PrintFlagsFinal -XX:+PrintReferenceGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+PrintAdaptiveSizePolicy -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark  -Xloggc:/home/kylin/apache-kylin-5.0.0-beta-bin/logs/kylin.gc.%p  -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M -XX:-OmitStackTraceInFastThrow -Dlog4j [...]
-KYLIN_DEBUG_SETTINGS is not set, will not enable remote debuging
-KYLIN_LD_LIBRARY_SETTINGS is not set, it is okay unless you want to specify your own native path
-SPARK_HDP_VERSION is set to 'hadoop'
-Export SPARK_HOME to /home/kylin/apache-kylin-5.0.0-beta-bin/spark
-Kylin is starting. It may take a while. For status, please visit http://Kylin5-Machine:7070/kylin/index.html.
-You may also check status via: PID:9781, or Log: /home/kylin/apache-kylin-5.0.0-beta-bin/logs/kylin.log.
+Kylin is starting. It may take a while. For status, please visit http://localhost:7070/kylin/index.html.
+You may also check status via: PID:4746, or Log: /home/kylin/apache-kylin-5.0.0-GA-bin/logs/kylin.log.
 [Start Kylin Instance] succeed.
 Checking Check Env Script's status...
-/home/kylin/apache-kylin-5.0.0-beta-bin/bin/check-env-bypass
+/home/kylin/apache-kylin-5.0.0-GA-bin/bin/check-env-bypass
 +
 Check Check Env Script succeed.
-0
+Checking Kylin Instance's status...
+...
+Check Kylin Instance succeed.
 Kylin service is already available for you to preview.
 ```
 
@@ -219,7 +398,7 @@ docker rm Kylin5-Machine
If you are using mac docker desktop, please ensure that you have set Resources: Memory=8GB and Cores=6 cores at least,
so that Kylin standalone can run well on Docker.
 
-If you are interested in `Dockerfile`, please visit https://github.com/apache/kylin/blob/kylin5/dev-support/release-manager/standalone-docker/all_in_one/Dockerfile_kylin .
+If you are interested in `Dockerfile`, please visit https://github.com/apache/kylin/blob/kylin5/dev-support/release-manager/standalone-docker/all-in-one/Dockerfile .
 
 If you want to configure and restart Kylin instance,
 you can use `docker exec -it Kylin5-Machine bash` to login the container.
