I am starting these services manually, one at a time. For example, after
'accumulo init' completed, I ran 'accumulo master' and got this error:

bash-5.1$ accumulo master
2023-01-10T15:53:30,143 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Using Accumulo configuration at /opt/accumulo/conf/accumulo.properties
2023-01-10T15:53:30,207 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Create 2nd tier ClassLoader using URLs: []
2023-01-10T15:53:30,372 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for Scheduled Future Checker with 1 core threads and 1 max threads 180000 MILLISECONDS timeout
2023-01-10T15:53:30,379 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for zoo_change_update with 2 core threads and 2 max threads 180000 MILLISECONDS timeout
2023-01-10T15:53:30,560 [master] [org.apache.accumulo.core.conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /opt/accumulo/conf/accumulo.properties
2023-01-10T15:53:30,736 [master] [org.apache.hadoop.util.Shell] DEBUG: setsid exited with exit code 0
2023-01-10T15:53:30,780 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"GetGroups"})
2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of failed kerberos logins and latency (milliseconds)"})
2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of successful kerberos logins and latency (milliseconds)"})
2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since last successful login"})
2023-01-10T15:53:30,785 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since startup"})
2023-01-10T15:53:30,789 [master] [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] DEBUG: UgiMetrics, User and group related metrics
2023-01-10T15:53:30,808 [master] [org.apache.hadoop.security.SecurityUtil] DEBUG: Setting hadoop.security.token.service.use_ip to true
2023-01-10T15:53:30,827 [master] [org.apache.hadoop.security.Groups] DEBUG: Creating new Groups object
2023-01-10T15:53:30,829 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Trying to load the custom-built native-hadoop library...
2023-01-10T15:53:30,830 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Loaded the native-hadoop library
2023-01-10T15:53:30,830 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMapping] DEBUG: Using JniBasedUnixGroupsMapping for Group resolution
2023-01-10T15:53:30,831 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
2023-01-10T15:53:30,854 [master] [org.apache.hadoop.security.Groups] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
2023-01-10T15:53:30,869 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Hadoop login
2023-01-10T15:53:30,870 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: hadoop login commit
2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Using user: "accumulo" with name: accumulo
2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: User entry: "accumulo"
2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: UGI loginUser: accumulo (auth:SIMPLE)
2023-01-10T15:53:30,872 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
2023-01-10T15:53:30,873 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 0:00.000s
2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Loading filesystems
2023-01-10T15:53:30,887 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,892 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,894 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,896 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,897 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,905 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,912 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,913 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /opt/hadoop/share/hadoop/hdfs/hadoop-aws-3.3.4.jar
2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking for FS supporting hdfs
2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: looking for configuration option fs.hdfs.impl
2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking in service filesystems for implementation class
2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.use.legacy.blockreader.local = false
2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.read.shortcircuit = false
2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.domain.socket.data.traffic = false
2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.domain.socket.path =
2023-01-10T15:53:30,980 [master] [org.apache.hadoop.hdfs.DFSClient] DEBUG: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
2023-01-10T15:53:30,990 [master] [org.apache.hadoop.io.retry.RetryUtils] DEBUG: multipleLinearRandomRetry = null
2023-01-10T15:53:31,011 [master] [org.apache.hadoop.ipc.Server] DEBUG: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker@3ca2c798
...
... LONG PAUSE HERE - ALMOST 10 minutes
...
2023-01-10T16:03:52,316 [master] [org.apache.hadoop.ipc.Client] DEBUG: getting client out of cache: Client-5197ff3375714e029d5cdcb1ac53e742
2023-01-10T16:03:52,679 [client DomainSocketWatcher] [org.apache.hadoop.net.unix.DomainSocketWatcher] DEBUG: org.apache.hadoop.net.unix.DomainSocketWatcher$2@35d323c6: starting with interruptCheckPeriodMs = 60000
2023-01-10T16:03:52,686 [master] [org.apache.hadoop.util.PerformanceAdvisory] DEBUG: Both short-circuit local reads and UNIX domain socket are disabled.
2023-01-10T16:03:52,694 [master] [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil] DEBUG: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
2023-01-10T16:03:52,697 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 10:21.822s
2023-01-10T16:03:52,718 [master] [org.apache.accumulo.core.conf.ConfigurationTypeHelper] DEBUG: Loaded class : org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
2023-01-10T16:03:52,761 [master] [org.apache.hadoop.ipc.Client] DEBUG: The ping interval is 60000 ms.
2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Connecting to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Setup connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
2023-01-10T16:03:52,787 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: starting, having connections 1
2023-01-10T16:03:52,791 [IPC Parameter Sending Thread #0] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing
2023-01-10T16:03:52,801 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo got value #0
2023-01-10T16:03:52,801 [master] [org.apache.hadoop.ipc.ProtobufRpcEngine2] DEBUG: Call: getListing took 72ms
2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
2023-01-10T16:03:52,808 [master] [org.apache.accumulo.start.Main] ERROR: Thread 'master' died.
java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
        at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.manager.Manager.<init>(Manager.java:414) ~[accumulo-manager-2.1.0.jar:2.1.0]
        at org.apache.accumulo.manager.Manager.main(Manager.java:408) ~[accumulo-manager-2.1.0.jar:2.1.0]
        at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46) ~[accumulo-manager-2.1.0.jar:2.1.0]
        at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
        at java.lang.Thread.run(Thread.java:829) ~[?:?]
2023-01-10T16:03:52,812 [shutdown-hook-0] [org.apache.hadoop.fs.FileSystem] DEBUG: FileSystem.close() by method: org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1518)); Key: (accumulo (auth:SIMPLE))@hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; URI: hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; Object Identity Hash: 50257de5
2023-01-10T16:03:52,814 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
2023-01-10T16:03:52,815 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: removing client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping actual client because no more references remain: Client-5197ff3375714e029d5cdcb1ac53e742
2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: Stopping client
2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: closed
2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: stopped, remaining connections 0
2023-01-10T16:03:52,820 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: Completed shutdown in 0.010 seconds; Timeouts: 0
2023-01-10T16:03:52,843 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: ShutdownHookManager completed shutdown.
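
For reference, a quick way to check from inside the pod whether 'accumulo init' actually wrote an instance id under the configured volume (a minimal sketch, assuming the hdfs CLI is on the PATH and the paths from the log above):

    # The master looks for a single UUID-named entry under instance_id;
    # if this directory is missing or empty, init never completed against this volume.
    hdfs dfs -ls hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id

    # The volume root should also show the rest of the expected layout (tables, version, ...).
    hdfs dfs -ls hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo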

________________________________
From: Ed Coleman <edcole...@apache.org>
Sent: Tuesday, January 10, 2023 11:17 AM
To: user@accumulo.apache.org <user@accumulo.apache.org>
Subject: Re: [External] Re: accumulo init error in K8S

Running init does not start the Accumulo services. Are the manager and tserver
processes running?
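
If in doubt, a quick way to check from inside the container (a minimal sketch, assuming a stock JDK so jps is available):

    # List running JVMs; a started cluster should show the Manager/Master and TServer processes.
    jps -l

    # Or match on the launcher class that all Accumulo services run under:
    ps aux | grep 'org.apache.accumulo.start.Main'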

I may have missed it, but what version are you trying to use?  2.1?

A quick look at the documentation at
https://accumulo.apache.org/docs/2.x/administration/in-depth-install#migrating-accumulo-from-non-ha-namenode-to-ha-namenode
suggests I would assume that add-volumes may not be required if your initial
configuration is correct.

At this point, logs may help more than stack traces.

Ed C

On 2023/01/10 16:01:49 "Samudrala, Ranganath [USA] via user" wrote:
> Yes, I ran it just now. I had debug enabled, so the prompt for the instance
> name was hidden. I had to enter a few CRs to see the prompt. Once the prompts
> for the instance name and password were answered, I could see entries for the
> Accumulo config in ZooKeeper.
>
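> To double-check, the registered instance name can be listed straight from
> ZooKeeper (a minimal sketch, assuming the stock zkCli.sh from the ZooKeeper
> distribution and the accumulo-zookeeper service name from the config):
>
>     # Instance names created by 'accumulo init' are registered under /accumulo/instances
>     zkCli.sh -server accumulo-zookeeper:2181 ls /accumulo/instances
>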
> Should I run 'accumulo init --add-volumes' now?
>
> If I run 'accumulo master', it seems to be hung up in this thread:
>
> "master" #17 prio=5 os_prio=0 cpu=572.10ms elapsed=146.84s 
> tid=0x000056488630b800 nid=0x90 runnable  [0x00007f5d63753000]
>    java.lang.Thread.State: RUNNABLE
>         at 
> sun.security.pkcs11.Secmod.nssInitialize(jdk.crypto.cryptoki@11.0.17/Native 
> Method)
>         at 
> sun.security.pkcs11.Secmod.initialize(jdk.crypto.cryptoki@11.0.17/Secmod.java:239)
>         - locked <0x00000000ffd4eb18> (a sun.security.pkcs11.Secmod)
>         at 
> sun.security.pkcs11.SunPKCS11.<init>(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:243)
>         at 
> sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:143)
>         at 
> sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
>         at 
> java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
>         at 
> sun.security.pkcs11.SunPKCS11.configure(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
>         at 
> sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:251)
>         at 
> sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:242)
>         at 
> java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
>         at 
> sun.security.jca.ProviderConfig.doLoadProvider(java.base@11.0.17/ProviderConfig.java:242)
>         at 
> sun.security.jca.ProviderConfig.getProvider(java.base@11.0.17/ProviderConfig.java:222)
>         - locked <0x00000000ffff9560> (a sun.security.jca.ProviderConfig)
>         at 
> sun.security.jca.ProviderList.getProvider(java.base@11.0.17/ProviderList.java:266)
>         at 
> sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:156)
>         at 
> sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:151)
>         at 
> java.util.AbstractList$Itr.next(java.base@11.0.17/AbstractList.java:371)
>         at 
> java.security.SecureRandom.getDefaultPRNG(java.base@11.0.17/SecureRandom.java:264)
>         at 
> java.security.SecureRandom.<init>(java.base@11.0.17/SecureRandom.java:219)
>         at java.util.UUID$Holder.<clinit>(java.base@11.0.17/UUID.java:101)
>         at java.util.UUID.randomUUID(java.base@11.0.17/UUID.java:147)
>         at org.apache.hadoop.ipc.ClientId.getClientId(ClientId.java:42)
>         at org.apache.hadoop.ipc.Client.<init>(Client.java:1367)
>         at org.apache.hadoop.ipc.ClientCache.getClient(ClientCache.java:59)
>         - locked <0x00000000fffc3458> (a org.apache.hadoop.ipc.ClientCache)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:158)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:145)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2.getProxy(ProtobufRpcEngine2.java:111)
>         at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:629)
>         at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithAlignmentContext(NameNodeProxiesClient.java:365)
>         at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createNonHAProxyWithClientProtocol(NameNodeProxiesClient.java:343)
>         at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:135)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:374)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:308)
>         at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initDFSClient(DistributedFileSystem.java:202)
>         at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:187)
>         at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
>         at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
>         at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
>         at 
> org.apache.accumulo.core.volume.VolumeImpl.<init>(VolumeImpl.java:45)
>         at 
> org.apache.accumulo.server.fs.VolumeManagerImpl.get(VolumeManagerImpl.java:371)
>         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:96)
>         at 
> org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
>         at 
> org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
>         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
>         at org.apache.accumulo.manager.Manager.main(Manager.java:408)
>         at 
> org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
>         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
>         at 
> org.apache.accumulo.start.Main$$Lambda$145/0x00000008401a5040.run(Unknown 
> Source)
>         at java.lang.Thread.run(java.base@11.0.17/Thread.java:829)
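>
> The dump shows the thread stuck initializing SecureRandom through the
> SunPKCS11/NSS provider, which is known to stall for a long time in containers
> with little entropy. If that is what is consuming the ~10 minutes, one common
> workaround (an assumption to verify, not a confirmed fix for this setup) is to
> point the JVM at the non-blocking entropy source, e.g. in accumulo-env.sh:
>
>     # Hypothetical tweak: avoid blocking on /dev/random during SecureRandom seeding
>     JAVA_OPTS=("${JAVA_OPTS[@]}" '-Djava.security.egd=file:/dev/./urandom')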
>
>
>
> I will wait and see when there is more log output.
>
> Thanks
> Ranga
>
> ________________________________
> From: Ed Coleman <edcole...@apache.org>
> Sent: Tuesday, January 10, 2023 10:16 AM
> To: user@accumulo.apache.org <user@accumulo.apache.org>
> Subject: [External] Re: accumulo init error in K8S
>
> Have you tried running accumulo init without the --add-volumes? From your
> attached log it looks like it cannot find a valid instance id.
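>
> For a first-time setup the plain form would normally be enough (a sketch,
> assuming the instance has never been initialized before):
>
>     accumulo init
>
> with --add-volumes reserved for adding new volumes to an instance that was
> already initialized.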
>
> 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
>
>
> On 2023/01/10 14:21:29 "Samudrala, Ranganath [USA] via user" wrote:
> > Hello,
> > I am trying to configure Accumulo in K8S using a Helm chart. Hadoop and
> > Zookeeper are up and running in the same K8S namespace.
> > accumulo.properties is as below:
> >
> >   instance.volumes=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> >   general.custom.volume.preferred.default=accumulo
> >   instance.zookeeper.host=accumulo-zookeeper
> >   # instance.secret=DEFAULT
> >   general.volume.chooser=org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
> >   general.custom.volume.preferred.logger=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> >   trace.user=tracer
> >   trace.password=tracer
> >   instance.secret=accumulo
> >   tserver.cache.data.size=15M
> >   tserver.cache.index.size=40M
> >   tserver.memory.maps.max=128M
> >   tserver.memory.maps.native.enabled=true
> >   tserver.sort.buffer.size=50M
> >   tserver.total.mutation.queue.max=16M
> >   tserver.walog.max.size=128M
> >
> > accumulo-client.properties is as below:
> >
> >  auth.type=password
> >  auth.principal=root
> >  auth.token=root
> >  instance.name=accumulo
> >  # For Accumulo >=2.0.0
> >  instance.zookeepers=accumulo-zookeeper
> >  instance.zookeeper.host=accumulo-zookeeper
> >
> > When I run 'accumulo init --add-volumes', I see the error below. What is
> > wrong with the setup?
> >
> > 2023-01-09T21:22:13,530 [init] [org.apache.accumulo.start.Main] ERROR: Thread 'init' died.
> > java.lang.RuntimeException: None of the configured paths are initialized.
> >         at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
> >         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> >
> > I have attached the complete log:
> >
> >
>
