Accumulo Colleagues,
We’re excited to announce that version 1.2 of the Python implementation of
D4M (D4M.py) has been released on GitHub (https://github.com/Accla/D4M.py). In
particular, this version of D4M.py includes Accumulo bindings for reading and
writing Accumulo tables as D4M associative arrays.
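For those new to D4M, its core data structure is the associative array. A rough sketch of what building one looks like in D4M.py is below — the exact module and method names here are assumptions based on the D4M model, so please check the repo's README for the current API (the package itself must be installed for this to run):

```python
# Illustrative sketch only -- requires the D4M.py package from
# https://github.com/Accla/D4M.py; module/method names assumed.
from D4M.assoc import Assoc

# Build a small associative array from comma-separated triples:
# row keys, column keys, and values.
A = Assoc('alice,bob,', 'color,color,', 'blue,red,')

# Display the associative array as a dense table.
A.printfull()
```

The Accumulo bindings announced above would then let an array like this be written to, or read back from, an Accumulo table.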
Thanks, appreciated!
-S
From: Christopher
Sent: Friday, September 10, 2021 9:15 AM
To: accumulo-user
Subject: [External] Re: [EXTERNAL EMAIL] - Re: accumulo and hdfs data locality
One correction to what Mike said. The last location column doesn't store where
it was last hosted. The current location column does that.
Yep, thanks for the correction Christopher.
On Fri, Sep 10, 2021 at 9:15 AM Christopher wrote:
> One correction to what Mike said. The last location column doesn't store
> where it was last hosted. The current location column does that. Rather,
> the last location column stores where it was hosted when it last wrote to
> HDFS.
One correction to what Mike said. The last location column doesn't store
where it was last hosted. The current location column does that. Rather,
the last location column stores where it was hosted when it last wrote to
HDFS. The goal is what Mike said: it tries to provide a mechanism for
preserving data locality.
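Both columns can be inspected directly in the metadata table. A sketch from the Accumulo shell (run as a user with read access to the metadata table; per the metadata schema, `loc` is the current location column family and `last` is the last location):

```shell
root@accumulo> scan -t accumulo.metadata -c loc,last
```

Each entry's value is a tserver address, so this shows, per tablet, where it is currently hosted versus where it was hosted when it last wrote to HDFS.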
Frequent compactions can help maintain locality, as can a custom balancer
and volume chooser. However, the impact of more frequent compactions on
query performance and other system metrics would need to be considered.
Achieving optimal locality may not be that important overall.
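As an illustration of the compaction approach, a manual compaction can be triggered from the Accumulo shell; rewriting a table's files means one replica of each new file should land on the hosting tserver's local datanode (table name here is a placeholder):

```shell
root@accumulo> compact -t mytable -w   # -w waits for the compaction to finish
```

As noted above, doing this frequently trades extra I/O for locality, so measure the impact on query performance before adopting it as a routine.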
Side note: your ma
If a tablet moves, the data files in HDFS do not go with it. However,
during the next compaction one copy of the rfile should be written locally.
Note, the metadata has a last column for each tablet, to record where the
tablet was last hosted. On startup, Accumulo will try to assign a tablet to
that last location.
Thank you,
Is there a way to maintain that data locality? I mean, over time, with table
splitting, HDFS rebalancing, etc., we may not have data locality…
Thanks again
-S
From: Christopher
Sent: Friday, September 10, 2021 8:40 AM
To: accumulo-user
Subject: [EXTERNAL EMAIL] - Re: accumulo and hdfs data locality
Data locality and simplified deployments are the only reasons I can think
of. Accumulo doesn't do anything particularly special for data locality,
but typically, an HDFS client (like Accumulo servers) will (or can be
configured to) write one copy of any new blocks locally, which should
permit efficient local reads.
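Relatedly, to get the full benefit of those local replicas, HDFS short-circuit local reads are often enabled on hosts that run both a datanode and a tserver, letting the client read local blocks directly instead of going through the DataNode's TCP path. A typical hdfs-site.xml fragment (the socket path is illustrative and must be writable by the datanode):

```xml
<!-- hdfs-site.xml: enable short-circuit local reads -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
```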
Hello, I am using Hadoop 3.3 and Accumulo 1.10. Does Accumulo take advantage
of Hadoop data locality? What are the other benefits of having the tserver and
datanode processes on the same instance?
-S