Dear HBase Community,

We are planning to migrate petabytes of HBase data from an old cluster to a
new one and are evaluating the best approach to achieve this efficiently.
Currently, we are considering two options:

   1. *Row-by-row migration:* Reading data from the old cluster and
      replaying it row by row into the new cluster.

   2. *Bulk loading via HFiles:* Reading data from the old cluster,
      generating large HFiles, and placing them directly into the new
      cluster, allowing HBase to split them into smaller regions.
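For option 2, the concrete workflow we have in mind is roughly the
standard snapshot-export path, which copies HFiles between clusters
without going through the write path. The table name ('mytable'), the
snapshot name, the NameNode address, and the mapper count below are all
placeholders, not our actual setup:

```shell
# On the OLD cluster: take a snapshot of the table (a metadata-only
# operation; no data is copied yet).
echo "snapshot 'mytable', 'mytable_snap'" | hbase shell -n

# Copy the snapshot's HFiles to the new cluster's HBase root directory
# with a distributed copy job; -mappers controls parallelism.
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot mytable_snap \
  -copy-to hdfs://new-cluster-nn:8020/hbase \
  -mappers 16

# On the NEW cluster: materialize a table from the copied snapshot.
echo "clone_snapshot 'mytable_snap', 'mytable'" | hbase shell -n
```

We would also be interested to hear whether this snapshot-based variant
is preferable at petabyte scale to generating fresh HFiles ourselves
(e.g., via a MapReduce job writing HFileOutputFormat2 output followed by
a bulk load).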

We would greatly appreciate your insights and recommendations:

   - Have any of you dealt with a similar migration at this scale?

   - Which of the above approaches would you recommend, or are there
     better alternatives we should consider?

Thank you in advance for your time and expertise. We look forward to
hearing your suggestions and learning from your experiences.
