0.94.27 be restorable in HBase 2.x?
- Having two clusters of different versions, is there a way to export the snapshot to the new cluster?
- If the snapshot can't work, is there a way to migrate data quickly?
Thanks,
Hamado Dene
:30006,"client":"10.200.86.130:53476","queuetimems":0,"class":"HRegionServer","scandetails":"table:
mn1_7491_hinvio region: mn1_7491_hinvio ..}
We are still trying to understand which improvements to implement in order to
be able to manage the problem. Has anyone ever had the same problem? And what
configurations can we tune to get better performance?
Thanks,
Hamado Dene
Sorry, I forgot to specify my versions:
HBase: 2.2.6
Hadoop: 2.8.5
On Thursday, October 28, 2021, 16:55:40 CEST, Hamado Dene wrote:
Hi HBase community,
Lately, during our activities, we constantly see this warning:
2021-10-28 16:45:00,854 WARN
took 8mins, 15sec to execute.
Our configuration is the default one, with a major compaction every 7 days.
Maybe we missed something?
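For context, the 7-day cadence mentioned above matches the default automatic major compaction interval; a sketch of how it would appear in hbase-site.xml (the value is in milliseconds, and 7 days is the stock default):

```xml
<!-- hbase-site.xml: interval between automatic major compactions, in ms -->
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>604800000</value> <!-- 7 days -->
</property>
```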
Thanks in advance,
Hamado Dene
tbeats: tr<TRUNCATED>","processingtimems":28005,"client":"10.200.86.173:60806","queuetimems":0,"class":"HRegionServer","scandetails":"table: mn1_7491_hinvio region: mn1_7491_hinvio .}
Thanks,
Hamado Dene
try to force to STREAM.
thanks,
Hamado Dene
On Saturday, November 27, 2021, 13:13:55 CET, 张铎(Duo Zhang) wrote:
The behavior of filters has changed a lot between 0.94 and 2.x. Mind
providing more information about what filter you use?
And for large scans, STREAM can perform better.
Hbase2.
I'll let you know if we can improve our situation.
Thanks,
Hamado Dene
On Saturday, November 27, 2021, 14:39:30 CET, 张铎(Duo Zhang) wrote:
scan.setFilter(List.of(res1, res2));
What is the 'List' here? You mean FilterList? How do you combine these two
filters, AND
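As a sketch of combining two filters with AND semantics (equivalent to a FilterList with MUST_PASS_ALL), using the HBase shell filter language; the table name and filters here are illustrative, not the poster's actual filters:

```shell
# hbase shell: scan with two filters combined with AND
# (table name and filter choices are hypothetical examples)
echo "scan 'mn1_7491_hinvio', {FILTER => \"PrefixFilter('row-') AND KeyOnlyFilter()\"}" | hbase shell -n
```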
| 15,565,516 | 7.28 GB | 9 | 0 MB | 1.0 |
\x00\x00\x04'\x00R\xE1-\x00\x0E\xBD\xCC | | OPEN |
For example for this table we see that all requests always end on rzv-db12-hd
Thanks,
Hamado Dene
On Sunday, November 28, 2021, 17:54:27 CET, Hamado Dene wrote:
Yes, we creat
Hi HBase community,
On our production installation we have two HBase clusters in two different
datacenters. The primary datacenter replicates the data to the secondary
datacenter. When we create the tables, we first create them on the secondary
datacenter and then on the primary, and then we set replic
n,acv-db11-hn,acv-db12-hn:2181:/hbase | | ENABLED |
true | UNLIMITED | true
|
On Sunday, December 12, 2021, 09:39:44 CET, Mallikarjun wrote:
Which version of hbase are you using? Is your replication serial enabled?
---
Mallikarjun
On Sun, Dec 12, 2021 at 1:54 PM Hamado Dene
ordering
is allowed to unblock the replication. This can result in some inconsistencies
between clusters, which can be fixed using the sync table utility since your
setup is active-passive.
Another fix: delete the barriers for each region in hbase:meta. Same
consequence as above.
On Sun, Dec 12, 2021, 2:24 P
?
On Sunday, December 12, 2021, 10:55:05 CET, Mallikarjun wrote:
https://hbase.apache.org/book.html#hashtable.synctable
To copy the difference between tables for a specific time period.
On Sun, Dec 12, 2021, 3:12 PM Hamado Dene
wrote:
> Interesting, thank you very much for
tart replicating
> from
> > the time it was stuck. You can build dashboards from jmx metrics
> generated
> > from hmaster to know about these and setup alerts as well.
> >
> >
> >
> > On Sun, Dec 12, 2021, 3:33 PM Hamado Dene
> > wrote:
> >
gions when doing a simple reboot of a node for a configuration change?
- Is it normal for transitions to time out and not recover on their own? Is
there any way to avoid this problem?
Thank you,
Hamado Dene
synchronize the two
clusters without impacting the primary cluster too much? Still keeping
replication on?
HBase version: 2.2
Hadoop version: 2.8.5
Thanks
Hamado Dene
1. Disable replication
2. Run the HashTable / SyncTable utility
3. Enable replication
This is resource intensive if the difference is huge, because it works at the
HBase layer, scanning the whole table and shipping a batch of rows at a time.
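From the operator's side, the steps above might look roughly like this; the peer id '1', table name, ZooKeeper quorum, and HDFS paths are hypothetical, and HashTable/SyncTable are the stock MapReduce utilities shipped with HBase:

```shell
# 1. Pause replication to the secondary (peer id is an example)
echo "disable_peer '1'" | hbase shell -n

# 2. On the source, hash the table; on the target, sync the difference
hbase org.apache.hadoop.hbase.mapreduce.HashTable my_table /tmp/hashes/my_table
hbase org.apache.hadoop.hbase.mapreduce.SyncTable \
  --sourcezkcluster=src-zk1,src-zk2,src-zk3:2181:/hbase \
  --dryrun=false \
  hdfs://src-namenode:8020/tmp/hashes/my_table my_table my_table

# 3. Resume replication
echo "enable_peer '1'" | hbase shell -n
```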
On Sun, Jan 23, 2022, 11:22 PM Hamado Dene
wrote:
> hi Hbase community, In
82_hevents/d37341ab3adad67e2c911edd6d5e6de7/d/27f6d74f99654685b5518a8db1c1496a
> map
Is there a way to be able to do this export efficiently?
The command I run is:
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot $snapshotName
-copy-from $source -copy-to $destination -overwrite
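On efficiency: ExportSnapshot also accepts `-mappers` and `-bandwidth` options to parallelize the copy and cap per-mapper throughput in MB/s; the values below are illustrative, not a recommendation:

```shell
# Hypothetical tuning: 16 parallel copy mappers, each capped at 50 MB/s
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot $snapshotName \
  -copy-from $source -copy-to $destination -overwrite \
  -mappers 16 -bandwidth 50
```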
Thanks,
Hamado Dene
Hi Mallikarjun,
Increasing the TTL worked.
Thanks for your help,
Hamado Dene
On Wednesday, February 9, 2022, 15:09:15 CET, Mallikarjun wrote:
The problem is that it seems to be the HFile TTL cleaner, which can be
configured with --> *hbase.master.hfilecleaner.ttl*
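A sketch of raising that TTL in hbase-site.xml on the master (the property is in milliseconds; the one-hour value below is illustrative):

```xml
<!-- hbase-site.xml (HMaster): how long the cleaner retains archived HFiles, in ms -->
<property>
  <name>hbase.master.hfilecleaner.ttl</name>
  <value>3600000</value> <!-- 1 hour -->
</property>
```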
On ExportSnapshot
to manage this issue? I saw on the net the possibility of increasing the
property hbase.region.store.parallel.put.limit, but in the HBase
documentation I can't find any reference to it. Is this property still valid?
It can be enabled at the level of hdfs-site.xml
Thanks,
Hamado Dene
disabled it.
Thanks,
Hamado Dene
On Wednesday, March 23, 2022, 12:01:28 CET, Bryan Beaudreault wrote:
Hello,
Unfortunately I don’t have good guidance on what to tune this to. What I
can say though is that this feature will be disabled by default starting
with version 2.5.0. Part of the
version
present in hbase is not used
Furthermore, we would also like to update Hadoop from 2.8.5 to 2.10.2. In this
case it is necessary to keep HBase off while updating Hadoop, correct?
Thanks in advance,
Hamado Dene
:73)
at org.apache.hadoop.hbase.http.HttpServer.start(HttpServer.java:1109)
... 14 more
Thanks,
Hamado Dene
I saw that the fix was done in
https://issues.apache.org/jira/browse/HBASE-27027. Do you have any idea when
the release of HBase 2.4.13 will be made?
Thanks,
Hamado Dene
On Friday, June 17, 2022, 10:56:01 CEST, Hamado Dene wrote:
Hi, this morning we updated our HBase installation
a:626) at
java.lang.Thread.run(Thread.java:748)
Thanks in advance,
Hamado Dene
]
Thanks in advance,
Hamado Dene
ow between our region servers.
Has anyone encountered this problem before, or does anyone have insights into
potential causes and solutions?
Thank you in advance for your assistance!
Hamado Dene
Hi community, could anyone kindly assist me in resolving this issue I'm facing?
Thank you in advance!
Hamado Dene
On Wednesday, September 11, 2024, 16:26:55 CEST, Hamado Dene wrote:
Hi HBase Community,
We are currently facing an issue in our production environment with
I resolved this issue by increasing the master snapshot timeout with:
hbase.snapshot.master.timeout.millis
120
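Presumably that was set in hbase-site.xml; a sketch (the value in the original message appears truncated, so the 2-minute figure below is an assumption, not the poster's actual setting):

```xml
<!-- hbase-site.xml: master-side snapshot timeout, in ms (value is an assumed example) -->
<property>
  <name>hbase.snapshot.master.timeout.millis</name>
  <value>120000</value> <!-- 2 minutes -->
</property>
```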
On Wednesday, September 11, 2024, 13:31:46 CEST, Hamado Dene wrote:
Hi,
I am encountering this error on HBase for some snapshot procedures of a table
-db09-hd.%2C16020%2C1674973354605.1696810476448-rw-r--r--
2 hbase hadoop 13622784 2023-10-09 08:26
/hbase/oldWALs/rzv-db10-hd.%2C16020%2C1674973984596.1696810895708
On Thursday, September 12, 2024, 09:30:46 CEST, Hamado Dene wrote:
Hi community, could anyone kindly
error from WALPrettyPrinter while reading these files?
Hamado Dene wrote on Monday, September 16, 2024, 16:15:
>
> Checking the WALs on HDFS, there are very old WALs, from a year ago... Does
> anyone have any idea how to handle this issue in production?
>
> -rw-r--r-- 2 hbase hadoop 20684288 2023-10-09 08
Could you please double-check the WAL file which blocks the
replication? Is it really one of these old WAL files?
Thanks.
Hamado Dene wrote on Monday, September 16, 2024, 21:57:
>
> Thanks for your response.
> If I try to read the WALs with the following command:
> hbase org.apache.hadoop.hbase.wal.WALPrettyPrinter
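For reference, a sketch of reading one of those old WAL files with WALPrettyPrinter, using a file path taken from the HDFS listing quoted earlier in the thread:

```shell
# Dump the entries of a single WAL file from the oldWALs directory
hbase org.apache.hadoop.hbase.wal.WALPrettyPrinter \
  /hbase/oldWALs/rzv-db10-hd.%2C16020%2C1674973984596.1696810895708
```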
Hello Community,
I’m still encountering this issue in production and haven’t yet found a way to
resolve it. Do you have any suggestions on how I can debug and address this
problem?
Thanks,
Hamado Dene
On Thursday, September 26, 2024, 09:53:35 CEST, Hamado Dene wrote:
nce for the help.
Hamado Dene
On Tuesday, September 24, 2024, 18:08:44 CEST, Hamado Dene wrote:
Is there an HBase utility to dump the contents of ZooKeeper? The data in that
path is not directly readable from ZooKeeper... I probably need to decode it
somehow
Thanks,
Hamado Dene
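On dumping ZooKeeper contents: HBase ships a wrapped ZooKeeper CLI (`hbase zkcli`) that can list and read znodes; the replication path below is an example, and note that some znode payloads are protobuf-encoded, so they may still need decoding after retrieval:

```shell
# Browse HBase's znodes with the bundled ZooKeeper client
hbase zkcli ls /hbase/replication
hbase zkcli get /hbase/replication/peers
```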
On Wednesday, September 18, 2024, 16:26:20 CEST, 张铎(Duo Zhang) wrote:
It is a bit
Could this be the cause of the issue?
Hamado Dene
On Monday, September 16, 2024, 16:37:12 CEST, Hamado Dene wrote:
I deduced that it was one of the old WALs because, from the UI, I see that
these old WALs are not being replicated. However, I'll do another round of
c