As a follow-up: when the NameNode IP address changes, the master recognizes the change in IP address but still throws an error:
2024-05-17 12:33:18,235 WARN [Repo runner 3] [org.apache.hadoop.ipc.Client]: Address change detected. Old: accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/20.10.240.183:8020 New: accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/20.10.250.199:8020
2024-05-17 12:33:18,236 WARN [Repo runner 3] [org.apache.accumulo.fate.Fate]: Failed to undo Repo, FATE[185675be3f171e4d]
java.net.NoRouteToHostException: No Route to Host from accumulo-masters-1.accumulo-masters.accumulo-dev.svc.cluster.local/20.10.249.101 to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
        at sun.reflect.GeneratedConstructorAccessor13.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:833)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:784)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1549)
        at org.apache.hadoop.ipc.Client.call(Client.java:1491)
        at org.apache.hadoop.ipc.Client.call(Client.java:1388)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
        at com.sun.proxy.$Proxy16.delete(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:641)
        at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy17.delete(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1607)
        at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:946)
        at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:943)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:953)
        at org.apache.accumulo.server.fs.VolumeManagerImpl.deleteRecursively(VolumeManagerImpl.java:214)
        at org.apache.accumulo.master.tableOps.create.CreateDir.undo(CreateDir.java:62)
        at org.apache.accumulo.master.tableOps.create.CreateDir.undo(CreateDir.java:31)
        at org.apache.accumulo.master.tableOps.TraceRepo.undo(TraceRepo.java:59)
        at org.apache.accumulo.fate.Fate$TransactionRunner.undo(Fate.java:199)
        at org.apache.accumulo.fate.Fate$TransactionRunner.processFailed(Fate.java:175)
        at org.apache.accumulo.fate.Fate$TransactionRunner.run(Fate.java:68)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
        at java.lang.Thread.run(Thread.java:750)
Caused by: java.net.NoRouteToHostException: No route to host
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:700)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:804)
        at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:421)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1606)
        at org.apache.hadoop.ipc.Client.call(Client.java:1435)
        ... 30 more
However, when I look up the NameNode service from the command line, the service is available:

bash-5.1$ nc -z -v accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes 8020
Ncat: Version 7.92 ( https://nmap.org/ncat )
Ncat: Connected to 20.10.250.199:8020.
Ncat: 0 bytes sent, 0 bytes received in 0.06 seconds.
bash-5.1$ nc -z -v accumulo-hdfs-namenode-1.accumulo-hdfs-namenodes 8020
Ncat: Version 7.92 ( https://nmap.org/ncat )
Ncat: Connected to 20.10.246.194:8020.
Ncat: 0 bytes sent, 0 bytes received in 0.03 seconds.
bash-5.1$ hostname
accumulo-masters-0

regards
Ranga

From: Samudrala, Ranganath [USA] via user <user@accumulo.apache.org>
Date: Thursday, May 16, 2024 at 5:35 PM
To: user@accumulo.apache.org <user@accumulo.apache.org>, edcole...@apache.org <edcole...@apache.org>
Subject: Re: [External] Re: accumulo init error in K8S

Hello,

I am still sorting out Accumulo-related issues in the K8S environment, and in this regard I would appreciate pointers on where to look to address them. In K8S it is possible that, from time to time, the HDFS NameNode pods restart and get a new IP address. This causes the Accumulo master pods distress. We have seen two things happen:

1. Users disappear from ZooKeeper.
2. The master and tserver pods must be restarted in sequence.

I have changed the java.security entry for DNS caching from -1 to 30 seconds. I have avoided running "init" commands during restarts, thinking this would avoid deleting users from ZooKeeper, but it has not helped. I am still using Accumulo v2.0.1.

1. How can we prevent users from being deleted from ZooKeeper?
2. Where does Accumulo cache the NameNode IP address, and how can it be reset dynamically?

Thanks
Ranga
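A side note on the address-change warning earlier in the thread: the Hadoop IPC client notices the new address, but the JVM's own resolver cache can keep handing back the stale IP. The cache lifetime is governed by the networkaddress.cache.ttl security property, which is the java.security entry mentioned above. A minimal sketch of that override, assuming a stock OpenJDK layout (on JDK 9+ the file is $JAVA_HOME/conf/security/java.security; on JDK 8 it is under jre/lib/security):

# java.security: -1 caches successful DNS lookups forever; 30 re-resolves every 30 seconds
networkaddress.cache.ttl=30

# equivalent per-process override (legacy system property), e.g. added to the
# JVM options used to launch the Accumulo services:
# -Dsun.net.inetaddr.ttl=30

Note that a long-lived Hadoop client may still hold an already-resolved InetSocketAddress, so a lowered TTL helps new connections but does not always rescue an existing one; a process restart can still be required.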
From: Samudrala, Ranganath [USA] via user <user@accumulo.apache.org>
Date: Wednesday, January 11, 2023 at 3:44 PM
To: user@accumulo.apache.org <user@accumulo.apache.org>
Subject: Re: [External] Re: accumulo init error in K8S

Hello,

Thanks for the feedback. I changed instance.volumes from
hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo to
hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo
and I have made progress since. I also ran 'accumulo init --upload-accumulo-props' first, before running the manager and other services.

thanks
Ranga
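For clarity, the working change described above amounts to pointing instance.volumes at the volume root that init actually initializes, rather than at a nested path. In accumulo.properties terms, a sketch using the hostnames from this thread:

# before: servers could not find an instance_id under the nested path
# instance.volumes=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo

# after: point at the initialized volume root
instance.volumes=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo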
________________________________
From: Ed Coleman <edcole...@apache.org>
Sent: Tuesday, January 10, 2023 6:27 PM
To: user@accumulo.apache.org <user@accumulo.apache.org>
Subject: Re: [External] Re: accumulo init error in K8S

I suspect that your hdfs config / specification may need adjusting, but I am not sure. "accumulo init" should create the necessary accumulo paths, but with your config you may need to manually create the /accumulo/data0 directory (if that's what it is) so that init can create the accumulo root from there.

About the configs in ZooKeeper: in 2.1 the configuration is stored on the config node. You could use a ZooKeeper client stat command and see that it has a non-zero data length. If you do a zkCli get, it returns a binary array (the values are compressed). There is a utility to dump the config:

accumulo zoo-info-viewer --print-props

The viewer also has other options that may help:

accumulo zoo-info-viewer --print-instances  (should display the instance ids in ZooKeeper)
accumulo zoo-info-viewer --instanceName     (should be able to find your instance id in hdfs)
accumulo zoo-info-viewer --instanceId       (if you have the instance id)
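To illustrate the stat/get point above, a zkCli session might look roughly like the following. The config node path and the dataLength value are illustrative only; the exact ZooKeeper layout depends on the instance id and the Accumulo version:

$ zkCli.sh -server accumulo-zookeeper:2181
[zk] stat /accumulo/<instance-id>/config
...
dataLength = 1532            <- hypothetical; non-zero means properties are stored
[zk] get /accumulo/<instance-id>/config
(prints a compressed binary array, not readable text)

$ accumulo zoo-info-viewer --print-props     # human-readable dump instead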
On 2023/01/10 22:17:13 "Samudrala, Ranganath [USA] via user" wrote:
> instance.volumes:
> hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
>
> So, does it make sense that I am expecting the instance_id folder underneath the
> /accumulo/data0/accumulo folder?
>
> Thanks
> Ranga
>
> From: Ed Coleman <edcole...@apache.org>
> Date: Tuesday, January 10, 2023 at 12:03 PM
> To: user@accumulo.apache.org <user@accumulo.apache.org>
> Subject: Re: [External] Re: accumulo init error in K8S
>
> Can you use manager instead of master? It has been renamed to manager, but
> maybe we missed some old references.
>
> After you run accumulo init, what is in hadoop?
>
> > hadoop fs -ls -R /accumulo
> drwxr-xr-x  - x x 0 2023-01-10 16:49 /accumulo/instance_id
> -rw-r--r--  3 x x 0 2023-01-10 16:49 /accumulo/instance_id/bdcdd3d8-7623-4882-aae7-357a9db2efd4
> drwxr-xr-x  - x x 0 2023-01-10 16:49 /accumulo/tables
> ...
> drwx------  - x x 0 2023-01-10 16:49 /accumulo/version
> drwx------  - x x 0 2023-01-10 16:49 /accumulo/version/10
>
> Running
>
> > accumulo tserver
> 2023-01-10T16:53:26,858 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/etcolem/workspace/fluo-uno/install/accumulo-2.1.1-SNAPSHOT/conf/accumulo.properties
> 2023-01-10T16:53:27,766 [tserver.TabletServer] INFO : Version 2.1.1-SNAPSHOT
> 2023-01-10T16:53:27,766 [tserver.TabletServer] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:53:27,816 [metrics.MetricsUtil] INFO : Metric producer PropStoreMetrics initialize
> 2023-01-10T16:53:27,931 [server.ServerContext] INFO : tserver starting
> 2023-01-10T16:53:27,931 [server.ServerContext] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:53:27,933 [server.ServerContext] INFO : Data Version 10
>
> When starting a manager / master, are you seeing:
>
> 2023-01-10T16:57:26,125 [balancer.TableLoadBalancer] INFO : Loaded class org.apache.accumulo.core.spi.balancer.SimpleLoadBalancer for table +r
> 2023-01-10T16:57:26,126 [balancer.SimpleLoadBalancer] WARN : Not balancing because we don't have any tservers.
>
> tservers should be started first, before the other management processes.
>
> The initial manager start-up should look like:
>
> > accumulo manager
> 2023-01-10T16:56:43,649 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/etcolem/workspace/fluo-uno/install/accumulo-2.1.1-SNAPSHOT/conf/accumulo.properties
> 2023-01-10T16:56:44,581 [manager.Manager] INFO : Version 2.1.1-SNAPSHOT
> 2023-01-10T16:56:44,582 [manager.Manager] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:56:44,627 [metrics.MetricsUtil] INFO : Metric producer PropStoreMetrics initialize
> 2023-01-10T16:56:44,742 [server.ServerContext] INFO : manager starting
> 2023-01-10T16:56:44,742 [server.ServerContext] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:56:44,745 [server.ServerContext] INFO : Data Version 10
> 2023-01-10T16:56:44,745 [server.ServerContext] INFO : Attempting to talk to zookeeper
> 2023-01-10T16:56:44,746 [server.ServerContext] INFO : ZooKeeper connected and initialized, attempting to talk to HDFS
> 2023-01-10T16:56:44,761 [server.ServerContext] INFO : Connected to HDFS
>
> And then key things to look for after the config dump:
>
> 2023-01-10T16:56:44,802 [manager.Manager] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:56:44,825 [manager.Manager] INFO : SASL is not enabled, delegation tokens will not be available
> 2023-01-10T16:56:44,872 [metrics.MetricsUtil] INFO : Metric producer ThriftMetrics initialize
> 2023-01-10T16:56:44,888 [manager.Manager] INFO : Started Manager client service at ip-10-113-15-42.evoforge.org:9999
> 2023-01-10T16:56:44,890 [manager.Manager] INFO : trying to get manager lock
> 2023-01-10T16:56:44,900 [manager.EventCoordinator] INFO : State changed from INITIAL to HAVE_LOCK
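Condensed into a start sequence, the advice above looks like this. It is a sketch for manual starts like those in this thread; production setups usually wrap these commands in pod entrypoints or the accumulo-cluster script:

# start tablet servers first, then the management processes
accumulo tserver &
accumulo manager &    # 'accumulo master' is the pre-2.1 spelling of the same service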
> On 2023/01/10 16:22:11 "Samudrala, Ranganath [USA] via user" wrote:
> > Yes, I am trying to set up Accumulo v2.1.0 with Hadoop v3.3.4.
> >
> > ________________________________
> > From: Samudrala, Ranganath [USA] <samudrala_rangan...@bah.com>
> > Sent: Tuesday, January 10, 2023 11:21 AM
> > To: user@accumulo.apache.org <user@accumulo.apache.org>
> > Subject: Re: [External] Re: accumulo init error in K8S
> >
> > I am starting these services manually, one at a time. For example, after
> > 'accumulo init' completed, I ran 'accumulo master' and I get this error:
> >
> > bash-5.1$ accumulo master
> > 2023-01-10T15:53:30,143 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Using Accumulo configuration at /opt/accumulo/conf/accumulo.properties
> > 2023-01-10T15:53:30,207 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Create 2nd tier ClassLoader using URLs: []
> > 2023-01-10T15:53:30,372 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for Scheduled Future Checker with 1 core threads and 1 max threads 180000 MILLISECONDS timeout
> > 2023-01-10T15:53:30,379 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for zoo_change_update with 2 core threads and 2 max threads 180000 MILLISECONDS timeout
> > 2023-01-10T15:53:30,560 [master] [org.apache.accumulo.core.conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /opt/accumulo/conf/accumulo.properties
> > 2023-01-10T15:53:30,736 [master] [org.apache.hadoop.util.Shell] DEBUG: setsid exited with exit code 0
> > 2023-01-10T15:53:30,780 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"GetGroups"})
> > 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of failed kerberos logins and latency (milliseconds)"})
> > 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of successful kerberos logins and latency (milliseconds)"})
> > 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since last successful login"})
> > 2023-01-10T15:53:30,785 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since startup"})
> > 2023-01-10T15:53:30,789 [master] [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] DEBUG: UgiMetrics, User and group related metrics
> > 2023-01-10T15:53:30,808 [master] [org.apache.hadoop.security.SecurityUtil] DEBUG: Setting hadoop.security.token.service.use_ip to true
> > 2023-01-10T15:53:30,827 [master] [org.apache.hadoop.security.Groups] DEBUG: Creating new Groups object
> > 2023-01-10T15:53:30,829 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Trying to load the custom-built native-hadoop library...
> > 2023-01-10T15:53:30,830 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Loaded the native-hadoop library
> > 2023-01-10T15:53:30,830 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMapping] DEBUG: Using JniBasedUnixGroupsMapping for Group resolution
> > 2023-01-10T15:53:30,831 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
> > 2023-01-10T15:53:30,854 [master] [org.apache.hadoop.security.Groups] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
> > 2023-01-10T15:53:30,869 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Hadoop login
> > 2023-01-10T15:53:30,870 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: hadoop login commit
> > 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Using user: "accumulo" with name: accumulo
> > 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: User entry: "accumulo"
> > 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: UGI loginUser: accumulo (auth:SIMPLE)
> > 2023-01-10T15:53:30,872 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> > 2023-01-10T15:53:30,873 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 0:00.000s
> > 2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> > 2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Loading filesystems
> > 2023-01-10T15:53:30,887 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,892 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,894 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,896 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,897 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,905 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,912 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,913 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /opt/hadoop/share/hadoop/hdfs/hadoop-aws-3.3.4.jar
> > 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking for FS supporting hdfs
> > 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: looking for configuration option fs.hdfs.impl
> > 2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking in service filesystems for implementation class
> > 2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
> > 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.use.legacy.blockreader.local = false
> > 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.read.shortcircuit = false
> > 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.domain.socket.data.traffic = false
> > 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.domain.socket.path =
> > 2023-01-10T15:53:30,980 [master] [org.apache.hadoop.hdfs.DFSClient] DEBUG: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
> > 2023-01-10T15:53:30,990 [master] [org.apache.hadoop.io.retry.RetryUtils] DEBUG: multipleLinearRandomRetry = null
> > 2023-01-10T15:53:31,011 [master] [org.apache.hadoop.ipc.Server] DEBUG: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker@3ca2c798
> > ...
> > ... LONG PAUSE HERE - ALMOST 10 minutes
> > ...
> > 2023-01-10T16:03:52,316 [master] [org.apache.hadoop.ipc.Client] DEBUG: getting client out of cache: Client-5197ff3375714e029d5cdcb1ac53e742
> > 2023-01-10T16:03:52,679 [client DomainSocketWatcher] [org.apache.hadoop.net.unix.DomainSocketWatcher] DEBUG: org.apache.hadoop.net.unix.DomainSocketWatcher$2@35d323c6: starting with interruptCheckPeriodMs = 60000
> > 2023-01-10T16:03:52,686 [master] [org.apache.hadoop.util.PerformanceAdvisory] DEBUG: Both short-circuit local reads and UNIX domain socket are disabled.
> > 2023-01-10T16:03:52,694 [master] [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil] DEBUG: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
> > 2023-01-10T16:03:52,697 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 10:21.822s
> > 2023-01-10T16:03:52,718 [master] [org.apache.accumulo.core.conf.ConfigurationTypeHelper] DEBUG: Loaded class : org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
> > 2023-01-10T16:03:52,761 [master] [org.apache.hadoop.ipc.Client] DEBUG: The ping interval is 60000 ms.
> > 2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Connecting to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
> > 2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Setup connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
> > 2023-01-10T16:03:52,787 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: starting, having connections 1
> > 2023-01-10T16:03:52,791 [IPC Parameter Sending Thread #0] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing
> > 2023-01-10T16:03:52,801 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo got value #0
> > 2023-01-10T16:03:52,801 [master] [org.apache.hadoop.ipc.ProtobufRpcEngine2] DEBUG: Call: getListing took 72ms
> > 2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > 2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > Thread 'master' died.
> > java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> >         at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218)
> >         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102)
> >         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
> >         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
> >         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
> >         at org.apache.accumulo.manager.Manager.main(Manager.java:408)
> >         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
> >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
> >         at java.base/java.lang.Thread.run(Thread.java:829)
> > 2023-01-10T16:03:52,808 [master] [org.apache.accumulo.start.Main] ERROR: Thread 'master' died.
> > java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> >         at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414) ~[accumulo-manager-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.manager.Manager.main(Manager.java:408) ~[accumulo-manager-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46) ~[accumulo-manager-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
> >         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> > 2023-01-10T16:03:52,812 [shutdown-hook-0] [org.apache.hadoop.fs.FileSystem] DEBUG: FileSystem.close() by method: org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1518)); Key: (accumulo (auth:SIMPLE))@hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; URI: hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; Object Identity Hash: 50257de5
> > 2023-01-10T16:03:52,814 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
> > 2023-01-10T16:03:52,815 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: removing client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
> > 2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping actual client because no more references remain: Client-5197ff3375714e029d5cdcb1ac53e742
> > 2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: Stopping client
> > 2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: closed
> > 2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: stopped, remaining connections 0
> > 2023-01-10T16:03:52,820 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: Completed shutdown in 0.010 seconds; Timeouts: 0
> > 2023-01-10T16:03:52,843 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: ShutdownHookManager completed shutdown.
> >
> > ________________________________
> > From: Ed Coleman <edcole...@apache.org>
> > Sent: Tuesday, January 10, 2023 11:17 AM
> > To: user@accumulo.apache.org <user@accumulo.apache.org>
> > Subject: Re: [External] Re: accumulo init error in K8S
> >
> > Running init does not start the Accumulo services. Are the manager and the tserver processes running?
> >
> > I may have missed it, but what version are you trying to use? 2.1?
> >
> > A quick look at the documentation at
> > https://accumulo.apache.org/docs/2.x/administration/in-depth-install#migrating-accumulo-from-non-ha-namenode-to-ha-namenode
> > suggests that add-volumes may not be required if your initial configuration is correct.
> >
> > At this point, logs may help more than stack traces.
> >
> > Ed C
> >
> > On 2023/01/10 16:01:49 "Samudrala, Ranganath [USA] via user" wrote:
> > > Yes, I ran it just now. I had debug enabled, so the prompt for the instance
> > > name was hidden. I had to enter a few CRs to see the prompt. Once the prompts
> > > for instance name and password were answered, I could see entries for the
> > > accumulo config in ZooKeeper.
> > >
> > > Should I run 'accumulo init --add-volumes' now?
> > > If I run 'accumulo master', it seems to be hung in this thread:
> > >
> > > "master" #17 prio=5 os_prio=0 cpu=572.10ms elapsed=146.84s tid=0x000056488630b800 nid=0x90 runnable [0x00007f5d63753000]
> > >    java.lang.Thread.State: RUNNABLE
> > >         at sun.security.pkcs11.Secmod.nssInitialize(jdk.crypto.cryptoki@11.0.17/Native Method)
> > >         at sun.security.pkcs11.Secmod.initialize(jdk.crypto.cryptoki@11.0.17/Secmod.java:239)
> > >         - locked <0x00000000ffd4eb18> (a sun.security.pkcs11.Secmod)
> > >         at sun.security.pkcs11.SunPKCS11.<init>(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:243)
> > >         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:143)
> > >         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
> > >         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
> > >         at sun.security.pkcs11.SunPKCS11.configure(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
> > >         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:251)
> > >         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:242)
> > >         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
> > >         at sun.security.jca.ProviderConfig.doLoadProvider(java.base@11.0.17/ProviderConfig.java:242)
> > >         at sun.security.jca.ProviderConfig.getProvider(java.base@11.0.17/ProviderConfig.java:222)
> > >         - locked <0x00000000ffff9560> (a sun.security.jca.ProviderConfig)
> > >         at sun.security.jca.ProviderList.getProvider(java.base@11.0.17/ProviderList.java:266)
> > >         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:156)
> > >         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:151)
> > >         at java.util.AbstractList$Itr.next(java.base@11.0.17/AbstractList.java:371)
> > >         at java.security.SecureRandom.getDefaultPRNG(java.base@11.0.17/SecureRandom.java:264)
> > >         at java.security.SecureRandom.<init>(java.base@11.0.17/SecureRandom.java:219)
> > >         at java.util.UUID$Holder.<clinit>(java.base@11.0.17/UUID.java:101)
> > >         at java.util.UUID.randomUUID(java.base@11.0.17/UUID.java:147)
> > >         at org.apache.hadoop.ipc.ClientId.getClientId(ClientId.java:42)
> > >         at org.apache.hadoop.ipc.Client.<init>(Client.java:1367)
> > >         at org.apache.hadoop.ipc.ClientCache.getClient(ClientCache.java:59)
> > >         - locked <0x00000000fffc3458> (a org.apache.hadoop.ipc.ClientCache)
> > >         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:158)
> > >         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:145)
> > >         at org.apache.hadoop.ipc.ProtobufRpcEngine2.getProxy(ProtobufRpcEngine2.java:111)
> > >         at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:629)
> > >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithAlignmentContext(NameNodeProxiesClient.java:365)
> > >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createNonHAProxyWithClientProtocol(NameNodeProxiesClient.java:343)
> > >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:135)
> > >         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:374)
> > >         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:308)
> > >         at org.apache.hadoop.hdfs.DistributedFileSystem.initDFSClient(DistributedFileSystem.java:202)
> > >         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:187)
> > >         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
> > >         at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
> > >         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
> > >         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
> > >         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
> > >         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
> > >         at org.apache.accumulo.core.volume.VolumeImpl.<init>(VolumeImpl.java:45)
> > >         at org.apache.accumulo.server.fs.VolumeManagerImpl.get(VolumeManagerImpl.java:371)
> > >         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:96)
> > >         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
> > >         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
> > >         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
> > >         at org.apache.accumulo.manager.Manager.main(Manager.java:408)
> > >         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
> > >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
> > >         at org.apache.accumulo.start.Main$$Lambda$145/0x00000008401a5040.run(Unknown Source)
> > >         at java.lang.Thread.run(java.base@11.0.17/Thread.java:829)
> > >
> > > I will wait and see when there is more log output.
> > >
> > > Thanks
> > > Ranga
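A note on the stack above: the thread is blocked initializing SecureRandom (via the PKCS11/NSS provider) while building the HDFS client, which would also account for the roughly ten-minute "Creating FS ... duration 10:21.822s" seen in the earlier debug log. In containers this is commonly caused by starved entropy. A workaround sketch, pointing the JVM at the non-blocking random device (where to put the flag depends on how the services are launched; any JVM-options mechanism works):

# append to the JVM options used to launch the Accumulo services
-Djava.security.egd=file:/dev/./urandom

The same effect can be had globally via the securerandom.source property in the java.security file.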
> > > ________________________________
> > > From: Ed Coleman <edcole...@apache.org>
> > > Sent: Tuesday, January 10, 2023 10:16 AM
> > > To: user@accumulo.apache.org <user@accumulo.apache.org>
> > > Subject: [External] Re: accumulo init error in K8S
> > >
> > > Have you tried running accumulo init without the --add-volumes? From your
> > > attached log it looks like it cannot find a valid instance id:
> > >
> > > 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > > 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > >
> > > On 2023/01/10 14:21:29 "Samudrala, Ranganath [USA] via user" wrote:
> > > > Hello,
> > > > I am trying to configure Accumulo in K8S using a Helm chart. Hadoop and
> > > > Zookeeper are up and running in the same K8S namespace.
> > > > accumulo.properties is as below:
> > > >
> > > > instance.volumes=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> > > > general.custom.volume.preferred.default=accumulo
> > > > instance.zookeeper.host=accumulo-zookeeper
> > > > # instance.secret=DEFAULT
> > > > general.volume.chooser=org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
> > > > general.custom.volume.preferred.logger=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> > > > trace.user=tracer
> > > > trace.password=tracer
> > > > instance.secret=accumulo
> > > > tserver.cache.data.size=15M
> > > > tserver.cache.index.size=40M
> > > > tserver.memory.maps.max=128M
> > > > tserver.memory.maps.native.enabled=true
> > > > tserver.sort.buffer.size=50M
> > > > tserver.total.mutation.queue.max=16M
> > > > tserver.walog.max.size=128M
> > > >
> > > > accumulo-client.properties is as below:
> > > >
> > > > auth.type=password
> > > > auth.principal=root
> > > > auth.token=root
> > > > instance.name=accumulo
> > > > # For Accumulo >=2.0.0
> > > > instance.zookeepers=accumulo-zookeeper
> > > > instance.zookeeper.host=accumulo-zookeeper
> > > >
> > > > When I run 'accumulo init --add-volumes', I see an error as below. What is wrong with the setup?
> > > >
> > > > java.lang.RuntimeException: None of the configured paths are initialized.
> > > >         at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119)
> > > >         at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449)
> > > >         at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543)
> > > >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
> > > >         at java.base/java.lang.Thread.run(Thread.java:829)
> > > > 2023-01-09T21:22:13,530 [init] [org.apache.accumulo.start.Main] ERROR: Thread 'init' died.
> > > > java.lang.RuntimeException: None of the configured paths are initialized.
> > > >         at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> > > >         at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> > > >         at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> > > >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
> > > >         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> > > >
> > > > I have attached the complete log:
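To close the loop on the original error: --add-volumes only extends an instance that already exists, which is why it fails with "None of the configured paths are initialized" on a fresh volume. A sketch of the sequence that matches the advice in this thread (flag names as in the 2.x init tool; instance name and password are placeholders):

# fresh instance: plain init creates instance_id and the directory layout
accumulo init --instance-name accumulo --password <root-password>

# only after the instance exists, and only when a new volume has been added
# to instance.volumes, run:
accumulo init --add-volumes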