This was very helpful.
Thanks
Ranga
From: Jeff Kubina
Sent: Friday, January 20, 2023 2:03 PM
To: user@accumulo.apache.org; Samudrala, Ranganath [USA]
Subject: Re: [External] Re: Accumulo with S3
You might want to look at this repo: https://github.com/Accumulo
> NF_DIR}:${ZOOKEEPER_HOME}/*:${ZK_JARS}:${HADOOP_HOME}/share/hadoop/client/*:${HADOOP_HOME}/share/hadoop/common/*:${HADOOP_HOME}/share/hadoop/hdfs/*"
>
> export CLASSPATH
> From: Arvind Shyamsundar
> Date: Friday, January 20, 2023 at 12:55 PM
OOP_HOME}/share/hadoop/client/*:${HADOOP_HOME}/share/hadoop/common/*:${HADOOP_HOME}/share/hadoop/hdfs/*"
export CLASSPATH
From: Arvind Shyamsundar
Date: Friday, January 20, 2023 at 12:55 PM
To: user@accumulo.apache.org, Samudrala, Ranganath [USA]
Subject: RE: [External] Re: Accumulo with S3
Arvind Shyamsundar (HE / HIM)
From: Samudrala, Ranganath [USA] via user
Sent: Friday, January 20, 2023 9:46 AM
To: user@accumulo.apache.org
Subject: Re: [External] Re: Accumulo with S3
The logic is using “org.apache.hadoop.fs.s3a.S3AFileSystem” as we can see in
the stack trace. Shouldn’t this then be using S3 related configuration in
HADOOP_CONF_DIR? In Hadoop’s core-site.xml, we have the S3 related
configuration parameters as below:
fs.s3a.endpoint
ht
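(The property listing above is cut off. For illustration only, a typical set of S3A properties in core-site.xml looks like the following — the endpoint and values here are placeholders, not the ones from this thread:)

```xml
<!-- Illustrative placeholders only; substitute your own endpoint/values. -->
<property>
  <name>fs.s3a.endpoint</name>
  <value>https://s3.example.internal:9000</value>
</property>
<property>
  <name>fs.s3a.path.style.access</name>
  <value>true</value>
</property>
<property>
  <name>fs.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>
```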