Running init does not start the Accumulo services.  Are the manager and 
tserver processes running?

I may have missed it, but what version are you trying to use?  2.1?

From a quick look at the documentation at 
https://accumulo.apache.org/docs/2.x/administration/in-depth-install#migrating-accumulo-from-non-ha-namenode-to-ha-namenode
I would assume that add-volumes may not be required if your initial 
configuration is correct.
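
One way to check is to look for the instance_id directory on the volume. The path below is the instance.volumes value from your log; adjust it if yours differs:

```shell
# If the instance_id directory does not exist, the volume was never
# initialized, so a plain 'accumulo init' (without --add-volumes) is the
# right first step; --add-volumes only applies to an already
# initialized instance.
hdfs dfs -ls hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
```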

At this point, logs may help more than stack traces.
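
To check whether the processes are up inside the container, something like this may help (assuming jps is available in your image and the standard 2.x launcher script is on the PATH; your Helm chart may manage these differently):

```shell
# List running JVMs; a started Manager/TServer should show up here
jps -lm

# Start the services if they are not running
accumulo-service manager start
accumulo-service tserver start
```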

Ed C

On 2023/01/10 16:01:49 "Samudrala, Ranganath [USA] via user" wrote:
> Yes, I ran it just now. I had debug enabled, so the prompt for the instance 
> name was hidden. I had to enter a few CRs to see the prompt. Once the prompts 
> for instance name and password were answered, I can see entries for the 
> Accumulo config in ZooKeeper.
> 
> Should I run 'accumulo init --add-volumes' now?
> 
> If I run 'accumulo master', it seems to be hung in this thread:
> 
> "master" #17 prio=5 os_prio=0 cpu=572.10ms elapsed=146.84s tid=0x000056488630b800 nid=0x90 runnable  [0x00007f5d63753000]
>    java.lang.Thread.State: RUNNABLE
>         at sun.security.pkcs11.Secmod.nssInitialize(jdk.crypto.cryptoki@11.0.17/Native Method)
>         at sun.security.pkcs11.Secmod.initialize(jdk.crypto.cryptoki@11.0.17/Secmod.java:239)
>         - locked <0x00000000ffd4eb18> (a sun.security.pkcs11.Secmod)
>         at sun.security.pkcs11.SunPKCS11.<init>(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:243)
>         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:143)
>         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
>         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
>         at sun.security.pkcs11.SunPKCS11.configure(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
>         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:251)
>         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:242)
>         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
>         at sun.security.jca.ProviderConfig.doLoadProvider(java.base@11.0.17/ProviderConfig.java:242)
>         at sun.security.jca.ProviderConfig.getProvider(java.base@11.0.17/ProviderConfig.java:222)
>         - locked <0x00000000ffff9560> (a sun.security.jca.ProviderConfig)
>         at sun.security.jca.ProviderList.getProvider(java.base@11.0.17/ProviderList.java:266)
>         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:156)
>         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:151)
>         at java.util.AbstractList$Itr.next(java.base@11.0.17/AbstractList.java:371)
>         at java.security.SecureRandom.getDefaultPRNG(java.base@11.0.17/SecureRandom.java:264)
>         at java.security.SecureRandom.<init>(java.base@11.0.17/SecureRandom.java:219)
>         at java.util.UUID$Holder.<clinit>(java.base@11.0.17/UUID.java:101)
>         at java.util.UUID.randomUUID(java.base@11.0.17/UUID.java:147)
>         at org.apache.hadoop.ipc.ClientId.getClientId(ClientId.java:42)
>         at org.apache.hadoop.ipc.Client.<init>(Client.java:1367)
>         at org.apache.hadoop.ipc.ClientCache.getClient(ClientCache.java:59)
>         - locked <0x00000000fffc3458> (a org.apache.hadoop.ipc.ClientCache)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:158)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:145)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine2.getProxy(ProtobufRpcEngine2.java:111)
>         at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:629)
>         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithAlignmentContext(NameNodeProxiesClient.java:365)
>         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createNonHAProxyWithClientProtocol(NameNodeProxiesClient.java:343)
>         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:135)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:374)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:308)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.initDFSClient(DistributedFileSystem.java:202)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:187)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
>         at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
>         at org.apache.accumulo.core.volume.VolumeImpl.<init>(VolumeImpl.java:45)
>         at org.apache.accumulo.server.fs.VolumeManagerImpl.get(VolumeManagerImpl.java:371)
>         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:96)
>         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
>         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
>         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
>         at org.apache.accumulo.manager.Manager.main(Manager.java:408)
>         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
>         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
>         at org.apache.accumulo.start.Main$$Lambda$145/0x00000008401a5040.run(Unknown Source)
>         at java.lang.Thread.run(java.base@11.0.17/Thread.java:829)
> 
> 
> 
> I will wait and see if there is more log output.
> 
> Thanks
> Ranga
> 
> ________________________________
> From: Ed Coleman <edcole...@apache.org>
> Sent: Tuesday, January 10, 2023 10:16 AM
> To: user@accumulo.apache.org <user@accumulo.apache.org>
> Subject: [External] Re: accumulo init error in K8S
> 
> Have you tried running accumulo init without the --add-volumes?  From your 
> attached log, it looks like it cannot find a valid instance id.
> 
> 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> 
> 
> On 2023/01/10 14:21:29 "Samudrala, Ranganath [USA] via user" wrote:
> > Hello,
> > I am trying to configure Accumulo in K8S using Helm chart. Hadoop and 
> > Zookeeper are up and running in the same K8S namespace.
> > accumulo.properties is as below:
> >
> >   instance.volumes=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> >   general.custom.volume.preferred.default=accumulo
> >   instance.zookeeper.host=accumulo-zookeeper
> >   # instance.secret=DEFAULT
> >   general.volume.chooser=org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
> >   general.custom.volume.preferred.logger=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> >   trace.user=tracer
> >   trace.password=tracer
> >   instance.secret=accumulo
> >   tserver.cache.data.size=15M
> >   tserver.cache.index.size=40M
> >   tserver.memory.maps.max=128M
> >   tserver.memory.maps.native.enabled=true
> >   tserver.sort.buffer.size=50M
> >   tserver.total.mutation.queue.max=16M
> >   tserver.walog.max.size=128M
> >
> > accumulo-client.properties is as below:
> >
> >  auth.type=password
> >  auth.principal=root
> >  auth.token=root
> >  instance.name=accumulo
> >  # For Accumulo >=2.0.0
> >  instance.zookeepers=accumulo-zookeeper
> >  instance.zookeeper.host=accumulo-zookeeper
> >
> > When I run 'accumulo init --add-volumes', I see the error below. What is 
> > wrong with the setup?
> >
> > java.lang.RuntimeException: None of the configured paths are initialized.
> >         at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119)
> >         at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449)
> >         at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543)
> >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
> >         at java.base/java.lang.Thread.run(Thread.java:829)
> > 2023-01-09T21:22:13,530 [init] [org.apache.accumulo.start.Main] ERROR: Thread 'init' died.
> > java.lang.RuntimeException: None of the configured paths are initialized.
> >         at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
> >         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> > Thread 'init' died.
> >
> > I have attached the complete log:
> >
> >
> 
